iCore Technologies, LLC

Hadoop Developer

  • Job Type: Contract W2
  • Industry Type: IT Sector
  • Industry Location: Raleigh
  • Experience: NA
  • No. of Positions: 1
  • Primary Skills: Hadoop, Kafka
  • Secondary Skills: SQL
  • Job Location: Raleigh, North Carolina
  • Posted Date: Today
Job Description

Job Description "Below is the JD for the BigData position:

Exp: 10+ Years

  1. Must have knowledge of the Hadoop ecosystem.
  2. Should be proficient with Linux shell and HDFS commands (see the shell sketch after this list).
  3. Strong knowledge of SQL, NiFi, and HAWQ.
  4. Should know a scripting language such as shell or Python.
  5. Good understanding of Kafka operations such as creating and deleting topics.
  6. Must know how to invoke or submit a Spark job to perform a test (also covered in the sketch below).
  7. Working knowledge of semi-structured data such as JSON and XML.
  8. Verifying ETL to check for different transformations.
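
For illustration, here is a minimal shell sketch of the kind of HDFS and Spark commands items 2 and 6 imply; the paths, jar, and class names are hypothetical, and YARN is assumed as the cluster manager.

```bash
# Basic HDFS commands (item 2); /data/events is a hypothetical directory
hdfs dfs -ls /data/events                  # list files
hdfs dfs -put events.json /data/events/    # upload a local file
hdfs dfs -cat /data/events/events.json | head -n 5   # peek at semi-structured data (item 7)

# Submit a Spark job to run a quick test (item 6); jar and main class are hypothetical
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.EventCountTest \
  event-count-test.jar /data/events
```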

Below is the JD for the Kafka responsibilities (to be updated as per the client):

1. Provide expertise and hands-on experience working on Kafka Connect using Schema Registry in a very high-volume environment.
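
As a concrete illustration of item 1, a few read-only sanity checks against a Kafka Connect worker and Schema Registry; the hosts and ports are the Confluent defaults (8083 for Connect, 8081 for Schema Registry) and should be treated as assumptions about the environment.

```bash
# List connector plugins installed on the Connect worker
curl -s http://localhost:8083/connector-plugins

# List running connectors, then check one connector's status (the name is hypothetical)
curl -s http://localhost:8083/connectors
curl -s http://localhost:8083/connectors/jdbc-orders-source/status

# List the subjects (schemas) registered in Schema Registry
curl -s http://localhost:8081/subjects
```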

2. Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams, and Control Center.
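
A minimal sketch for item 2, assuming a ZooKeeper-based cluster and Confluent-style command names (Apache Kafka tarballs ship the same tools with a `.sh` suffix):

```bash
# Confirm which broker IDs are registered in ZooKeeper (port 2181 assumed)
zookeeper-shell localhost:2181 ls /brokers/ids

# Confirm the brokers are reachable from the client side
kafka-broker-api-versions --bootstrap-server localhost:9092

# Example KSQL statement, run inside the ksql CLI; the topic and columns are hypothetical:
#   CREATE STREAM orders_stream (id VARCHAR, amount DOUBLE)
#     WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');
```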

3. Provide expertise and hands-on experience working with AvroConverter, JsonConverter, and StringConverter.

4. Provide expertise and hands-on experience working with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, as well as tasks, workers, converters, and transforms.
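
Tying items 1, 3, and 4 together, a sketch of registering a JDBC source connector with AvroConverter and Schema Registry over the Connect REST API; the connector name, Oracle connection details, and table are hypothetical, and the kafka-connect-jdbc plugin is assumed to be installed on the worker.

```bash
curl -s -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "jdbc-orders-source",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:oracle:thin:@//dbhost:1521/ORCL",
      "connection.user": "app_user",
      "connection.password": "********",
      "table.whitelist": "ORDERS",
      "mode": "incrementing",
      "incrementing.column.name": "ORDER_ID",
      "topic.prefix": "oracle-",
      "tasks.max": "1",
      "value.converter": "io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url": "http://localhost:8081"
    }
  }'
```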

5. Provide expertise and hands-on experience building custom connectors using Kafka core concepts and the Connect API.

6. Working knowledge of the Kafka REST Proxy.
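
For item 6, a sketch of producing a record through the REST Proxy's v2 API; the topic and payload are hypothetical, and port 8082 is the Confluent default.

```bash
# Produce one JSON record to the 'orders' topic via the REST Proxy
curl -s -X POST http://localhost:8082/topics/orders \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -d '{"records":[{"value":{"id":"42","amount":19.99}}]}'
```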

7. Ensure optimum performance, high availability, and stability of solutions.

8. Create topics, set up cluster redundancy, deploy monitoring tools and alerts, and apply best practices.
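
A sketch for item 8, assuming a three-broker cluster; the topic name and partition count are hypothetical.

```bash
# Create a topic with redundancy: 3 replicas, at least 2 of which must be in sync
kafka-topics --bootstrap-server localhost:9092 --create \
  --topic orders --partitions 6 --replication-factor 3 \
  --config min.insync.replicas=2

# Verify partitions, replica placement, and the in-sync replica set
kafka-topics --bootstrap-server localhost:9092 --describe --topic orders
```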

9. Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms. Leverage Hadoop ecosystem knowledge to design and develop capabilities that deliver our solutions using Spark, Scala, Python, Hive, Kafka, and other tools in the Hadoop ecosystem.
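
The quickest stubs for item 9 are the console clients shipped with Kafka; the topic and group names below are hypothetical.

```bash
# Producer stub: type messages, one per line
kafka-console-producer --bootstrap-server localhost:9092 --topic orders

# Consumer stub in a named consumer group
kafka-console-consumer --bootstrap-server localhost:9092 --topic orders \
  --group orders-app --from-beginning

# Inspect the group's offsets and lag per partition
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group orders-app
```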

10. Experience with RDBMS systems, particularly Oracle 11g/12c.

11. Use automation tools for provisioning, such as Jenkins and uDeploy.

12. Ability to perform data-related benchmarking, performance analysis, and tuning.
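
For item 12, Kafka ships producer- and consumer-side benchmark tools; the topic and volumes below are hypothetical.

```bash
# Producer benchmark: 1M records of 1 KB each, unthrottled, with full acknowledgement
kafka-producer-perf-test --topic orders --num-records 1000000 \
  --record-size 1024 --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092 acks=all

# Consumer benchmark over the same topic
kafka-consumer-perf-test --bootstrap-server localhost:9092 \
  --topic orders --messages 1000000
```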

13. Strong skills in in-memory applications, database design, and data integration.
