0–5 years of experience in Big Data technologies
Proficient understanding of distributed computing principles
Ability to troubleshoot and resolve ongoing issues in cluster operations
Knowledge of Hadoop v2, MapReduce, and HDFS
Working knowledge of NoSQL databases such as HBase, Cassandra, or MongoDB
Experience building stream-processing systems using solutions such as Storm or Spark Streaming
Good knowledge of Big Data querying tools such as Pig, Hive, and Impala
Knowledge of Spark
Experience with messaging systems such as Kafka or RabbitMQ
Self-starter with good communication skills
Strong problem-solving and analytical skills