Design, develop, test, deploy, and maintain large-scale data pipelines using Python on GCP.
Collaborate with cross-functional teams to gather requirements and deliver high-quality ETL solutions.
Develop scalable, efficient algorithms for processing large datasets using big data technologies such as Spark or Hadoop (a brief illustrative sketch follows this list).
Troubleshoot complex data pipeline failures and optimize system performance.
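For orientation, the day-to-day work described above might look roughly like the following minimal PySpark ETL sketch. The bucket paths, app name, and column names (user_id, event_count) are illustrative placeholders, not details from this posting.

    # Minimal sketch of a Python/Spark ETL pipeline on GCP (hypothetical names throughout).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw event data from a (hypothetical) GCS bucket.
    raw = spark.read.json("gs://example-raw-bucket/events/*.json")

    # Transform: drop malformed rows, then aggregate events per user.
    clean = (
        raw.filter(F.col("user_id").isNotNull())
           .groupBy("user_id")
           .agg(F.count("*").alias("event_count"))
    )

    # Load: write the result back to GCS as Parquet for downstream consumers.
    clean.write.mode("overwrite").parquet("gs://example-curated-bucket/user_event_counts/")

    spark.stop()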
Job Requirements:
8-10 years of experience in Data Engineering, with expertise in the Python programming language.
Strong understanding of Big Data frameworks such as Spark and Hadoop.
Experience building ETL pipelines on cloud platforms such as Google Cloud Platform (GCP).
Job Classification
Industry: Internet
Functional Area / Department: Data Science & Analytics
Role Category: Data Science & Analytics - Other
Role: Data Science & Analytics - Other
Employment Type: Full time