Develop and enhance DBT models and transformations.
Perform data cleansing, validation, and quality checks.
Support cloud-based data engineering workloads (preferably GCP).
Write optimized SQL queries for analytics and data processing.
Collaborate with Data Engineers, Analysts, and Architects to understand requirements.
Monitor pipeline health, troubleshoot issues, and ensure data reliability.
Document workflows, models, and mappings.
Required Skills (Must Have)
Data Modeling understanding (Star/Snowflake).
Strong SQL skills.
Hands-on experience with DBT.
Experience with at least one major cloud platform: GCP (BigQuery, Dataflow, Dataproc, Composer, GCS preferred) OR AWS / Azure equivalent services.
Experience with data transformation and loading (ETL/ELT) technologies.
Good knowledge of Python for data tasks.
Good to Have
PySpark or similar distributed data processing experience.
Exposure to Airflow / Cloud Composer.
Experience with Data Catalog / Dataplex (or similar metadata services).
Familiarity with CI/CD, Git, and version-controlled workflows.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Contract