Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Interactive, Technology and Operations services, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 624,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com.
Duties and Responsibilities:
Act as a software detective, providing a dynamic service that identifies and resolves issues across multiple components of critical business systems.
Debug and troubleshoot issues in big data and analytics solutions deployed in public cloud and on-premises environments.
Identify application performance improvement opportunities and collaborate with Engineering teams to implement them.
Assist the support team with data pipelines from the ingestion to the reporting layers, and act as an escalation contact to resolve issues during daily/weekly/monthly job executions.
Enhance code using Python and Spark, as required, to stabilize the environment.
Maintain optimal data pipelines in the cloud (AWS/GCP/Azure).
Continuously analyze recurring problems, perform root cause analysis (RCA), and collaborate with the Development team on complex code fixes.
Train and mentor other members of the team.
Be open to learning new technologies within a short time frame.
Required Skills & Experience
6-8 years of experience required.
Hands-on experience with the big data tech stack; ability to work with programming languages such as Python or Spark to review and understand existing code, and to enhance and optimize it where required.
Experience ingesting data from different data sources and preprocessing it using PySpark/Python.
Knowledge of Big Data technologies such as Hadoop, Kafka, and NoSQL databases.
Experience working with a variety of databases is an added advantage.
Knowledge of debugging data pipelines built with one or more tools such as Spark, Python, AWS Glue, or Azure Data Factory.
Good time management skills to complete tasks on schedule and provide timely status updates.
Be proactive and upfront about issues at hand, and collaborate to find solutions.
Self-starter, challenger and analytical thinker
Strong analytical and communication skills
Excellent written and verbal communication skills

Qualifications
Educational Background/Qualifications
B Tech/M Tech from reputed engineering colleges
Masters/M Tech in Computer Science

Soft Skills
Strong problem-solving skills
Good team player
Attention to detail
Good communication skills