Job Description
Role: AWS Spark Scala Data Engineer
Required Technical Skill Set: Spark, Scala, AWS (S3, EMR, Glue, Redshift, Athena, Lambda), SQL, Python
Desired Experience Range: 6-8 Yrs
Location of Requirement: Chennai, Bangalore, Pune
Desired Competencies (Technical/Behavioral)
Must-Have
Strong expertise in AWS data engineering
Hands-on expertise in Spark and Scala
Extensive experience with AWS cloud services (S3, EMR, Glue, Redshift, Athena, etc.)
Solid understanding of distributed computing
Proficiency in SQL
Experience with data pipeline orchestration tools (Airflow, Step Functions, etc.)
Good-to-Have
Familiarity with CI/CD pipelines
Experience with streaming data pipelines
Relevant AWS Certifications
Responsibility of / Expectations from the Role
1. Design, develop, and maintain scalable data pipelines using Apache Spark (Scala); see the sketch after this list
2. Build and optimize ETL/ELT workflows on AWS
3. Work with large-scale structured and unstructured datasets
4. Develop data solutions using AWS services such as S3, EMR, Glue, Redshift, Athena, and Lambda
5. Ensure data security, governance, and compliance best practices
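As an illustration of the first responsibility, below is a minimal sketch of the kind of Spark (Scala) ETL job this role involves: reading raw JSON from S3, cleansing and aggregating it, and writing partitioned Parquet suitable for querying via Athena or Redshift Spectrum. The bucket paths and column names (order_id, order_ts, amount, region) are hypothetical, not taken from this description.

import org.apache.spark.sql.{SparkSession, functions => F}

object DailyRevenueEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-revenue-etl").getOrCreate()

    // Read raw JSON order events from S3 (path is hypothetical).
    val raw = spark.read.json("s3://example-bucket/raw/orders/")

    // Cleanse: drop rows missing the key and deduplicate on order_id.
    val cleaned = raw
      .filter(F.col("order_id").isNotNull)
      .dropDuplicates("order_id")

    // Aggregate daily revenue per region (column names are hypothetical).
    val daily = cleaned
      .withColumn("order_date", F.to_date(F.col("order_ts")))
      .groupBy("order_date", "region")
      .agg(F.sum("amount").as("revenue"))

    // Write partitioned Parquet back to S3 for Athena/Redshift Spectrum.
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/daily_revenue/")

    spark.stop()
  }
}

A job of this shape would typically run on EMR and be scheduled via Airflow or Step Functions, matching the orchestration skills listed above.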

We are hiring full-time with TechMahindra for the below-mentioned skills:
1. Experience with big data environments
2. Hands-on coding in Python, Spark, Hive
3. Shell scripting