Job Description: Big Data Position
Experience: 10+ years
Kafka responsibilities (to be updated as per client):
1. Provide expertise and hands-on experience with Kafka Connect and Schema Registry in a very high-volume environment.
2. Provide expertise in Kafka brokers, ZooKeeper, KSQL, Kafka Streams (KStreams), and Confluent Control Center.
3. Provide expertise and hands-on experience with AvroConverter, JsonConverter, and StringConverter.
4. Provide expertise and hands-on experience with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, as well as tasks, workers, converters, and transforms.
5. Provide expertise and hands-on experience building custom connectors using core Kafka concepts and APIs.
6. Working knowledge of the Kafka REST Proxy.
7. Ensure optimal performance, high availability, and stability of solutions.
8. Create topics, set up redundant clusters, deploy monitoring tools and alerts, and apply operational best practices.
9. Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms. Leverage Hadoop ecosystem knowledge to design and develop solutions using Spark, Scala, Python, Hive, Kafka, and other tools in the Hadoop ecosystem.
10. Experience with RDBMS platforms, particularly Oracle 11g/12c.
11. Use automation and provisioning tools such as Jenkins and uDeploy.
12. Strong skills in in-memory applications, database design, and data integration.
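For illustration, the connector and converter work described in items 1, 3, and 4 typically centers on configurations like the following hypothetical JDBC source connector. This is only a sketch: the connector name, connection URL, credentials, table, and Schema Registry address are placeholder values, not client specifics.

```json
{
  "name": "jdbc-source-example",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1",
    "connection.user": "connect_user",
    "connection.password": "********",
    "table.whitelist": "ORDERS",
    "mode": "incrementing",
    "incrementing.column.name": "ORDER_ID",
    "topic.prefix": "oracle-",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}
```

A config like this would be submitted to a distributed Connect worker via its REST API (e.g., `POST /connectors`), after which the worker schedules the tasks and applies the configured converters and transforms.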