Roles and Responsibilities:
Build real-time and batch data pipelines and manage multiple datasets using S3, Glue, and Redshift; migrate data from on-premises sources into AWS storage.
Use programming languages such as Python, PySpark, and SQL effectively to keep pipelines running smoothly.
Automate deployment of data-processing and model-training pipelines using CI/CD and DevOps tools.
Automate the setup of cloud infrastructure for various data workloads using the AWS SDK and CloudFormation.
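As a rough illustration of the batch-pipeline work described above, here is a minimal ETL sketch in plain Python, with the standard-library sqlite3 module standing in for a warehouse such as Redshift. All table and column names are hypothetical; a real Glue job would read from S3 and write to Redshift instead.

```python
import sqlite3

# Minimal batch-ETL sketch: extract raw rows, transform with SQL,
# and load an aggregate table. sqlite3 stands in for a warehouse
# here; the table and column names are illustrative only.

def run_batch_etl(conn: sqlite3.Connection) -> list:
    cur = conn.cursor()
    # "Extract": a raw events table, as a Glue job might read from S3.
    cur.execute("CREATE TABLE IF NOT EXISTS raw_events (user_id TEXT, amount REAL)")
    cur.executemany("INSERT INTO raw_events VALUES (?, ?)",
                    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)])
    # "Transform + Load": aggregate per user into a reporting table.
    cur.execute("CREATE TABLE IF NOT EXISTS user_totals AS "
                "SELECT user_id, SUM(amount) AS total "
                "FROM raw_events GROUP BY user_id")
    conn.commit()
    return cur.execute(
        "SELECT user_id, total FROM user_totals ORDER BY user_id").fetchall()

if __name__ == "__main__":
    print(run_batch_etl(sqlite3.connect(":memory:")))
```

The same extract/transform/load shape carries over when sqlite3 is swapped for Glue and Redshift; only the connection and storage layers change.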
Desired Candidate Profile:
2 to 4 years of experience as an AWS developer, with strong SQL and data analysis skills.
Hands-on experience with ETL solutions on AWS.
Python, Spark Core, PySpark, AWS Glue, Redshift & Redshift Spectrum
S3 & Athena, RDS (Amazon Aurora, PostgreSQL)
Microservices and AWS Lambda, Data Pipeline, Data Lake
Good experience with version control using Git and GitHub.
Good experience deploying various AWS services: VPC, EC2, S3, ALB, ELB, Auto Scaling, CloudWatch, CloudTrail, certificate services, IAM, etc.
Scripting knowledge: Python, Bash, PowerShell
Experience configuring Docker containers and writing Dockerfiles for different environments
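Several items in the profile above (microservices, AWS Lambda) revolve around small event-driven handlers. Below is a minimal Lambda-style handler sketch in Python; the event shape (a "records" list) and the response format are illustrative assumptions, not a real service contract, and the function can be exercised locally without any AWS account.

```python
import json

# Minimal AWS Lambda handler sketch. The incoming event shape and the
# response body are hypothetical; locally the handler is just a plain
# function that can be called directly for testing.

def handler(event, context=None):
    records = event.get("records", [])
    # Toy transform step: keep only records with a positive amount.
    valid = [r for r in records if r.get("amount", 0) > 0]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(valid)}),
    }

if __name__ == "__main__":
    sample = {"records": [{"amount": 3}, {"amount": -1}]}
    print(handler(sample))  # only one record passes the filter
```

Keeping the handler a plain function with no AWS-specific imports makes it easy to unit test before wiring it to a trigger such as S3 or an ALB.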
Why work with us?
Work from home with flexible timings
Twice-yearly appraisals for high performers
Be a part of the company's ESOP pool
Sponsorship of Cloud Certifications
Get to work on exciting projects