GROUNDTRUTH INDIA PRIVATE LIMITED

Lead Data Engineer (Spark, AWS)

  • Job Type: Full Time
  • Industry Type: IT Sector
  • Industry Location: Noida, Gurgaon/Gurugram, Delhi / NCR
  • Experience: 6-8 years
  • No. of Positions: 1
  • Salary Range: 7-9.6 lakh
  • Primary Skills: Big Data, Spark, AWS, Airflow, Java, NoSQL, Hadoop
  • Secondary Skills: Scala, Machine Learning, Python, SQL
  • Job Location: Noida, Gurgaon/Gurugram, Delhi / NCR
  • Posted Date: 29 days ago
Job Description

Role: Lead Data Engineer
Location: Gurugram, India | Engineering

This is a senior data engineering role in which the candidate will be part of a Data Engineering team distributed across the US and India. They are expected to architect complete end-to-end solutions/data pipelines: data ingestion, ETL, reporting, and APIs.

This is a great career opportunity to grow, learn and work on the latest big data & AWS cloud technologies with meaningful impact. You will be working with a high-performing team that believes in open communication, collaboration, and embraces good ideas from everywhere to make the team successful.


You will:

  • Work with various Big Data technologies on AWS.
  • Deploy data pipelines in production based on CI/CD practices, using Docker and Airflow.
  • Contribute to software implementation specs and test plans, participate actively in code reviews, and adhere to very high-quality coding practices with good test coverage.
  • Mentor and lead other data engineers.


You are:

  • A strong technical communicator who can explain the big-picture view in a shared setting.
  • Someone who can translate business problems into software design, architecture, and code.
  • A technical leader who motivates and guides team members to grow and thrive.


You Have:

  • B.Tech./B.E./M.Tech./MCA or equivalent in computer science or a relevant area
  • 5+ years of computer systems design/software architecture experience
  • 3+ years of Java/Scala and/or Python programming with good CS fundamentals
  • 3+ years of Big Data development experience with Hadoop/Spark and writing ETLs
  • 3+ years of experience with SQL, RDBMS, and/or NoSQL database technologies
  • AWS experience preferred, or experience with another public cloud
  • A big plus and a way to stand out: experience building Machine Learning pipelines, MLOps experience, and knowledge of fundamental ML/statistical models