
GCP Services (Dataproc, BigQuery, Composer), PySpark @ Tata Consultancy


Job Description

Location: Bangalore, Chennai, Hyderabad, Pune, Kolkata


6+ years of experience on Google Cloud across Big Data / Analytics / Data Lake / Data Warehouse technologies: DataProc, DataPrep, Dataplex, Cloud Bigtable, Dataflow, Cloud Composer, BigQuery, Databricks, Kafka, NiFi, CDC processing, Snowflake, Datastore, Firestore, Docker, App Engine, Spark, Cloud Data Fusion, Apigee API Management, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, PySpark, Flume, Impala.


Must Have:

  • Design and implement ETL/data pipelines and Data Lake / Data Warehouse solutions on Google Cloud.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using BigQuery, DataProc, DataPrep, Cloud Composer, and Dataflow (a PySpark sketch of such a pipeline follows this list).
  • 10+ years of total experience, with at least 3 years of expertise in cloud data warehouse technologies on the Google Cloud data platform, covering BigQuery, DataProc, DataPrep, Cloud Composer, Dataflow, Databricks, etc.
  • Extensive hands-on experience implementing data ingestion and data processing using Google Cloud services: DataProc, DataPrep, Cloud Bigtable, Dataflow, Cloud Composer, BigQuery, Databricks, Kafka, NiFi, CDC processing, Snowflake, Datastore, Firestore, Docker, App Engine, Spark, Cloud Data Fusion, Apigee API Management, etc. (a Cloud Composer orchestration sketch also follows this list).
  • Familiarity with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, Attunity, GoldenGate, Dataplex, MapReduce, Hadoop, Hive, HBase, Cassandra, PySpark, Flume, Impala, etc.
  • Design and implement analytics solutions that use the ETL/data pipelines to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data science team members that assist them in building and optimizing products.
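
The bullets above ask for hands-on pipeline work with BigQuery, DataProc, and PySpark. As an illustration only, here is a minimal PySpark ETL sketch of the kind a Dataproc job might run, assuming the spark-bigquery connector is available on the cluster; the project, dataset, table, and bucket names are hypothetical placeholders, not part of this job description.

    # Minimal PySpark ETL sketch for a Dataproc cluster (illustrative only).
    # Assumes the spark-bigquery connector is installed; all resource names
    # below (my-project, raw.orders, analytics.daily_revenue, my-temp-bucket)
    # are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("bq-etl-sketch").getOrCreate()

    # Extract: read a source table from BigQuery.
    orders = (
        spark.read.format("bigquery")
        .option("table", "my-project.raw.orders")
        .load()
    )

    # Transform: aggregate order amounts into daily revenue.
    daily_revenue = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

    # Load: write the result back to BigQuery, staging through a GCS bucket.
    (
        daily_revenue.write.format("bigquery")
        .option("table", "my-project.analytics.daily_revenue")
        .option("temporaryGcsBucket", "my-temp-bucket")
        .mode("overwrite")
        .save()
    )

    spark.stop()

On Dataproc, a script like this would typically be submitted with gcloud dataproc jobs submit pyspark, or scheduled from Cloud Composer as sketched next.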
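
For the Cloud Composer requirement, a minimal Airflow DAG sketch follows, using DataprocSubmitJobOperator from the Google provider package to schedule the PySpark script above; the project, region, cluster, and GCS path are hypothetical placeholders.

    # Minimal Cloud Composer (Airflow 2.x) DAG sketch (illustrative only).
    # Assumes an existing Dataproc cluster; all resource names are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import (
        DataprocSubmitJobOperator,
    )

    PROJECT_ID = "my-project"      # hypothetical project
    REGION = "us-central1"         # hypothetical region
    CLUSTER_NAME = "etl-cluster"   # hypothetical cluster

    PYSPARK_JOB = {
        "reference": {"project_id": PROJECT_ID},
        "placement": {"cluster_name": CLUSTER_NAME},
        "pyspark_job": {
            # Hypothetical GCS path to the ETL script sketched above.
            "main_python_file_uri": "gs://my-bucket/jobs/bq_etl_sketch.py",
        },
    }

    with DAG(
        dag_id="daily_bq_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        DataprocSubmitJobOperator(
            task_id="run_pyspark_etl",
            job=PYSPARK_JOB,
            region=REGION,
            project_id=PROJECT_ID,
        )

Keeping orchestration in Composer and compute on Dataproc is one common way to meet the extraction, transformation, and loading requirement above, though the posting itself does not prescribe a specific architecture.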

Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Tata Consultancy
Location(s): Hyderabad



Keyskills: BigQuery, Composer, PySpark, Dataproc


₹ Not Disclosed

Similar positions

Software Development Engineer, Data Collection Technology

  • Morningstar
  • 2 - 5 years
  • Mumbai
  • 10 days ago
₹ Not Disclosed

HCL Is Hiring Java Big Data Developer @ Bangalore/Hyderabad

  • HCLTech
  • 9 - 14 years
  • Hyderabad
  • 13 days ago
₹ Not Disclosed

Opening For C, Shell Scripting, Python Developer

  • MNC HR Muskan
  • 5 - 10 years
  • Kolkata
  • 13 days ago
₹ 15-25 Lacs P.A.

AWS With GenAI Engineer (Python, GenAI, AWS Cloud)

  • Gainwell Technologies
  • 9 - 14 years
  • Bengaluru
  • 13 days ago
₹ Not Disclosed
