Minimum of 2 years of experience building and coding applications using Hadoop components (HDFS, HBase, Hive, Sqoop, Flume, etc.).
Minimum of 2 years of coding experience in Java, Scala/Spark, Python, Pig, Hadoop Streaming, and HiveQL.
Minimum of 4 years of experience with traditional ETL tools and Data Warehousing architecture.
Strong personal leadership and collaboration skills, combined with comprehensive, practical experience in end-to-end delivery of Big Data solutions.
Experience with Exadata and other RDBMSs is a plus.
Must be proficient in SQL/HiveQL.
Hands-on Linux/Unix expertise and scripting skills are required.