Roles and Responsibilities
Job Responsibilities:
- Understand business and product requirements and provide innovative solutions using Big Data architecture
- Take ownership of the end-to-end data pipeline, including system design and integration of the required Big Data tools and frameworks
- Implement ETL processes and construct data warehouses (HDFS, S3, etc.) at scale
- Develop highly performant Spark jobs to derive data insights and build user preference models
- Develop querying and reporting tools for various business teams
Desired Candidate Profile:
- Experience in building large-scale data pipelines from scratch
- Experience in building real-time recommender platforms
- Good to have: Kafka, Flink, Spark, Redshift, Redis, Druid, AWS stack
- Experience with classification algorithms.
Perks and Benefits:
- Join the next unicorn
- Flat structure
- Rapid growth
- Fun working environment
- Work from home
- Excellent salary and health benefits