Secondary Skills: AWS
Job Location:
Hyderabad
Posted Date:
391 days ago
Job Description
Roles and Responsibilities
Experience in Hadoop and AWS (or any cloud) is required
Python/PySpark, Spark, SQL
Minimum 3+ years of experience in Hadoop, with 4+ years of overall experience in the IT industry (4-9 years total only)
Technology Experience
Strong knowledge of and expertise in solutioning with Hadoop (preferably the Cloudera or Hortonworks distribution), HDFS, Hive, Spark, and Oozie.
Desired Candidate Profile
Ensures proper execution of the team's duties and alignment with business vision and objectives
Works closely with the business Data and Analytics teams to gather technical requirements
Experience building and maintaining reliable, scalable ETL pipelines on big data/cloud platforms through the collection, storage, processing, and transformation of large datasets
Experience working with varied forms of data infrastructure, including relational (SQL) databases, Hadoop, and Spark
Proficiency in scripting languages: Python, PySpark, or Spark with Scala
Experience in Database design/data modeling.
Must have strong experience in data warehouse concepts.
Experience in AWS cloud is a plus
Experience in Data bricks is a plus
Knowledge of various Big Data architectures such as Lambda or Kappa, with use of an automation/scheduling tool such as Oozie or a comparable technology
Excellent oral and written communication and presentation skills, and strong analytical and problem-solving skills
Demonstrated ability in solutioning covering data ingestion, data cleansing, ETL, data mart creation, and exposing data to consumers.
Perks and Benefits
Best in industry [we are looking for immediate joiners]
Please call Priya directly at 94458 25063