Job Summary:
As part of Daman’s Data Engineering team, you will architect and deliver highly scalable, high-performance data integration and transformation platforms. The solutions you work on will span cloud, hybrid, and legacy environments, requiring a broad and deep data engineering skill set. You will use core cloud data warehouse tools, Hadoop, Spark, event streaming platforms, and other data management technologies. You will also engage in requirements and solution concept development, which calls for strong analytical and communication skills.
Responsibilities:
Skills and Qualifications:
Experience using Kafka as a distributed messaging system
Experience with Kafka producer and consumer APIs
Understanding of event-based application patterns and streaming data
Experience with related technologies (e.g., Spark Streaming or other message brokers such as MQ) is a plus
3+ years of Data Management Experience
3+ years’ experience developing, deploying, and supporting scalable, high-performance data pipelines (leveraging distributed data-movement technologies and approaches, including but not limited to ETL and streaming ingestion and processing)
2+ years’ experience with Hadoop Ecosystem (HDFS/S3, Hive, Spark)
3+ years’ experience in software engineering, leveraging Java, Python, Scala, etc.
2+ years’ advanced distributed schema and SQL development skills, including partitioning for performance of ingestion and consumption patterns
2+ years’ experience with distributed NoSQL databases (Apache Cassandra, Graph databases, Document Store databases)
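The Kafka and partitioning items above describe one core pattern: events are routed to a partition by a hash of their key, so all events for a given key stay in order on one partition, and a consumer reads that partition sequentially. A minimal in-memory sketch of that pattern follows; the `Topic` class and its `produce`/`consume` methods are hypothetical stand-ins for illustration, not the real Kafka client API (which hashes keys with murmur2, not crc32).

```python
import zlib

class Topic:
    """Illustrative in-memory stand-in for a partitioned event topic."""

    def __init__(self, num_partitions: int):
        self.partitions = [[] for _ in range(num_partitions)]

    def partition_for(self, key: str) -> int:
        # Hash the record key to pick a partition, so every event for
        # one key lands on the same partition (preserving per-key order).
        # crc32 is a stable stand-in for Kafka's murmur2 key hash.
        return zlib.crc32(key.encode()) % len(self.partitions)

    def produce(self, key: str, value: str) -> None:
        # Producer side: append the (key, value) record to its partition.
        self.partitions[self.partition_for(key)].append((key, value))

    def consume(self, partition: int, offset: int = 0):
        # Consumer side: read one partition sequentially from an offset.
        return self.partitions[partition][offset:]

# All events for one key go to, and are read back from, one partition.
orders = Topic(num_partitions=4)
for i in range(3):
    orders.produce("customer-42", f"order-{i}")

p = orders.partition_for("customer-42")
values = [v for _, v in orders.consume(p)]
print(values)  # per-key ordering is preserved
```

In real deployments the same idea is what makes key choice matter: a well-distributed key spreads load across partitions, while consumers scale out by each owning a subset of partitions.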
Daman is an Equal Opportunity Employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.