Build and maintain efficient and scalable data pipelines and ETL processes.
Design, develop, and maintain dbt models to transform raw data into analytics-ready datasets using modular, version-controlled workflows.
Manage and organize structured and unstructured data stored in AWS S3.
Implement data quality checks, governance, and performance monitoring.
Collaborate with data scientists, analysts, and business teams to deliver reliable and timely data.
Perform database management, schema design, and performance tuning.
Utilize code repositories (e.g., GitHub) for version control and collaboration.
Ensure security, compliance, and best practices for cloud-based data platforms.
Integrate data from SAP and other enterprise systems into existing pipelines (as required).
Support experimentation and model deployment workflows in ML/AI environments.
Required Skills and Qualifications
4+ years of experience in data engineering with proven expertise in:
DBT
AWS S3 and AWS Cloud services (Glue, Lambda, Redshift, etc.)
SQL development and optimization
ETL pipeline design and orchestration
Data warehousing and data migration strategies
Data analysis and debugging skills
Code versioning using GitHub or similar tools
Nice to Have Skills
Experience with AWS SageMaker and Lakehouse architectures.
Familiarity with SAP data integration and ERP data flows.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Software Development - Other
Employment Type: Full time