Position: Senior Data Engineer - Azure Databricks
Purpose of the Position:
To design, build, and optimize scalable data pipelines and solutions using Azure Databricks and related technologies, enabling the organization to make faster, data-driven decisions as part of its data transformation journey.
The role requires proficiency in data integration techniques, ETL processes, and data pipeline architectures, and a strong command of data quality rules, principles, and their implementation.
Location: Nagpur / Pune / Chennai / Bangalore
Type of Employment: FTE
Key Result Areas and Activities:
Data Pipeline Development: Design and implement robust batch and streaming data pipelines using Azure Databricks and Spark.
Data Architecture Implementation: Apply Medallion Architecture to structure data layers (raw, enriched, curated); a sketch follows this list.
Data Quality & Governance: Ensure data accuracy, consistency, and governance using tools like Azure Purview and Unity Catalog.
Performance Optimization: Optimize Spark jobs, Delta Lake tables, and SQL queries for efficiency and cost-effectiveness.
Collaboration & Delivery: Work closely with analysts, architects, and business teams to deliver end-to-end data solutions.
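As an illustration of the medallion flow above, here is a minimal PySpark sketch, assuming a Databricks runtime with Delta Lake; all paths, table names, and columns are hypothetical:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by default on Databricks

# Bronze: land raw files as-is, preserving source fidelity.
raw = spark.read.json("/mnt/landing/orders/")  # hypothetical landing path
raw.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: clean and enrich -- typed columns, deduplication, quality filters.
silver = (
    spark.read.format("delta").load("/mnt/bronze/orders")
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)  # example data-quality rule
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: curated, business-level aggregates for analysts and BI tools.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_ltv")  # target schema assumed to exist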
Technical Experience:
Must Have:
Hands-on experience with Azure Databricks, Delta Lake, Data Factory.
Proficiency in Python, PySpark, and SQL with strong query optimization skills (a tuning sketch follows this list).
Deep understanding of Lakehouse architecture and Medallion design patterns.
Experience building scalable ETL/ELT pipelines and data transformations.
Familiarity with Git, CI/CD pipelines, and Agile methodologies.
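For the query optimization skills above, here is a minimal sketch of common Delta Lake and Spark tuning steps on Databricks; table and column names are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows frequently filtered on customer_id.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")

# Reclaim storage from files no longer referenced (default 7-day retention).
spark.sql("VACUUM silver.orders")

# Broadcast the small dimension side of a join to avoid a full shuffle.
orders = spark.table("silver.orders")
customers = spark.table("silver.customers")
joined = orders.join(broadcast(customers), "customer_id")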
Good To Have:
Knowledge of data quality frameworks and monitoring practices.
Experience with Power BI or other data visualization tools.
Understanding of IoT data pipelines and streaming technologies such as Kafka/Event Hubs (a streaming sketch follows this list).
Awareness of emerging technologies such as Knowledge Graphs.
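For the streaming technologies mentioned above, here is a minimal Structured Streaming sketch reading from a Kafka-compatible endpoint (Azure Event Hubs exposes one); the broker, topic, and paths are hypothetical, and authentication options are omitted for brevity:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9093")  # hypothetical broker; SASL options omitted
    .option("subscribe", "device-telemetry")           # hypothetical topic
    .load()
)

# Kafka delivers raw bytes; cast the payload before parsing downstream.
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/chk/telemetry")  # required for fault tolerance
    .outputMode("append")
    .start("/mnt/bronze/telemetry")
)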
Qualifications:
Education: A degree in Computer Science, Data Engineering, Information Systems, or a related field.
Proven hands-on experience with the Azure data stack (Databricks, Data Factory, Delta Lake).
Experience in building scalable ETL/ELT pipelines.
Familiarity with data governance and DevOps practices.
Qualities:
Strong problem-solving and analytical skills
Attention to detail and commitment to data quality
Collaborative mindset and effective communication
Proactive and self-driven
Passion for learning and staying updated with emerging data technologies

Keyskills: Azure Databricks, Microsoft Azure, Azure Data Factory, Python, PySpark, SQL, Query Optimization, ETL, ETL Processes, Data Integration, Data Pipeline Architecture, Data Quality, Data Quality Rules, CI/CD, Continuous Integration, DevOps, Design Patterns, Analytical Skills
About the Company:
Part of the global G4S security conglomerate, this Delhi-based entity has been operating since 1996. It provides business services such as facilities management, staffing for corporate and administrative roles, and security solutions. It reported revenue of around 31.4 crore in FY 2022 and functions...