
Data Engineer (Hadoop + Snowflake) @ Coforge


Job Description


Data Engineer

The engineer will be part of the datastore-migration Factory team responsible for the end-to-end datastore migration from an on-prem Data Lake to an AWS-hosted Lakehouse. This is a high-visibility, crucial project for Goldman Sachs.

Responsibilities of the Engineer include:

1. Pipeline Migration

a. Logic & Scheduling: Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new Lakehouse environment.

b. Data Transfer: Executing the physical migration of underlying datasets while ensuring data integrity.

c. Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating "hand-off and sign-off" conversations with data owners to ensure migrated assets meet business requirements.

2. Consumption Pattern Migration

a. Code Conversion: Translating and optimizing legacy SQL and Spark-based consumption patterns (raw and modeled) for compatibility with Snowflake and Iceberg.

b. Usage Analysis: Understanding usage patterns to deliver the required data products.

c. Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating "hand-off and sign-off" conversations with data owners to ensure migrated assets meet business requirements.

3. Data Reconciliation & Quality

A rigorous approach to data validation is required. Candidates must work with reconciliation frameworks to build confidence that migrated data is functionally equivalent to data already used within production flows.
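The code-conversion work in item 2a can be pictured as a rewrite pass over legacy Hive SQL. The two rules below (backtick identifiers and from_unixtime) are purely illustrative assumptions about what such a pass might handle; a real migration would rely on a dedicated SQL transpiler plus manual review, not a regex script:

```python
import re

# Illustrative Hive -> Snowflake rewrites (a sketch only, not the actual
# migration tooling):
#   * backtick-quoted identifiers become double-quoted identifiers
#   * from_unixtime(expr) becomes to_timestamp(expr)
RULES = [
    (re.compile(r"`([^`]+)`"), r'"\1"'),
    (re.compile(r"\bfrom_unixtime\(", re.IGNORECASE), "to_timestamp("),
]

def convert_hive_sql(sql: str) -> str:
    """Apply each rewrite rule in order and return the converted statement."""
    for pattern, replacement in RULES:
        sql = pattern.sub(replacement, sql)
    return sql
```

For instance, a statement such as SELECT `id`, from_unixtime(ts) FROM events would come out as SELECT "id", to_timestamp(ts) FROM events.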

The engineer will also need to work with the internal data management platforms team, and must have an aptitude for learning new workflows and language constructs as necessary.
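The reconciliation frameworks referenced in point 3 are internal, but the underlying idea, comparing key-matched rows between the source and migrated extracts, can be sketched with standard-library hashing. Everything below, including the function names, is a hypothetical illustration rather than the team's actual framework:

```python
import hashlib
import json

def row_digest(row: dict) -> str:
    """Stable hash of one record; keys are sorted so field order is irrelevant."""
    payload = json.dumps(row, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()

def reconcile(source_rows, target_rows, key: str):
    """Compare two extracts keyed on `key`.

    Returns (missing, mismatched): keys absent from the target, and keys
    present in both whose row contents differ.
    """
    src = {r[key]: row_digest(r) for r in source_rows}
    tgt = {r[key]: row_digest(r) for r in target_rows}
    missing = set(src) - set(tgt)
    mismatched = {k for k in set(src) & set(tgt) if src[k] != tgt[k]}
    return missing, mismatched
```

In practice such checks would run over full-table extracts (or aggregates such as counts and column checksums) on both the legacy and Lakehouse sides before sign-off.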

Technical Stack Requirements: While candidates are not expected to be experts in every tool, the collective team must cover the following technologies:

Extraction & Logic:  Kafka, ANSI SQL, FTP, Apache Spark

Data Formats:  JSON, Avro, Parquet

Platforms:  Hadoop (HDFS/Hive), Snowflake, Apache Iceberg, Sybase IQ
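The data-transfer step (item 1b) requires confirming integrity after the physical copy. One common minimal approach, offered here only as a sketch and not as the team's actual process, is file-level checksum comparison between source and target directories:

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large datasets never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source_dir: Path, target_dir: Path) -> list:
    """Return relative paths whose copies are missing or whose checksums differ."""
    failures = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            dst = target_dir / rel
            if not dst.is_file() or file_sha256(src) != file_sha256(dst):
                failures.append(str(rel))
    return failures
```

For object-store targets such as S3, the same idea is usually expressed through the store's own ETag/checksum metadata rather than re-reading both copies.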


Share your resume over Aa***********a@Co****e.Com

Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Data Science & Analytics
Role Category: Data Science & Machine Learning
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Coforge
Location(s): Hyderabad



Keyskills: Snowflake, Hadoop, SQL, Parquet, Kafka, Sybase IQ, JSON, Apache Iceberg, Avro, Apache Spark, ANSI SQL


Salary: ₹ Not Disclosed


Coforge

www.coforge.com