Strong experience with deployments and administration in Linux/Unix environments.
Responsible for technology ownership of the Hadoop ecosystem.
Able to benchmark systems, analyze system bottlenecks, and propose solutions.
Strong experience handling petabyte-scale data volumes.
Experienced in designing, installing, deploying, administering, and managing Hadoop software on large clusters.
Strong hands-on experience with Hadoop ecosystem frameworks such as MapReduce, Pig, Hive, HBase, Oozie, Flume, ZooKeeper, and Spark, as well as MongoDB, NoSQL systems, and data lakes.
Install and configure Hortonworks DataFlow (HDF).
Strong experience writing and tuning Hive and Spark queries.
Experienced in defining performance parameters and delivering performance improvements.
Ability to estimate level of effort and prototype as necessary.
Knowledge of container-based application deployment.
Experience with the DevOps model.
Java / J2EE / Python
Strong knowledge of SQL and NoSQL systems.
AWS / Google Cloud / Azure.