Handle RFP / RFI / government tender responses: technical solutioning, detailed scope preparation, effort estimation, and response drafting
Excellent presentation skills are essential
Understand client needs and translate them into implementable business solutions
Responsible for the architecture, design, and development of scalable data engineering / AI solutions and standards for varied business problems, using cloud-native services or third-party services on hyperscalers
Take ownership of technical solutions from a design and architecture perspective, ensure the right direction, and propose resolutions to potential data science / model-related problems
Deliver and present proofs of concept for key technology components to project stakeholders
Work within an Agile delivery / DevOps methodology to deliver proofs of concept and production implementations in iterative sprints
Design and develop model utilization benchmarks, metrics, and monitoring to measure and improve models; detect model drift and raise alerts using Prometheus, the Grafana stack, or cloud-native monitoring stacks (see the sketch after this list)
Research, design, implement, and validate cutting-edge deployment methods across hybrid-cloud scenarios
Develop and maintain documentation of model flows, integrations, pipelines, etc.
Evaluate DSML platforms and tools in the market and create PoVs on their performance against customer requirements
Assist in driving improvements to the data engineering stack, focusing on the user's digital experience as well as model performance and security, to meet the needs of the business and customers now and in the future
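For flavour, here is a minimal sketch of the drift-monitoring responsibility above, assuming a small Python service that computes a Population Stability Index (PSI) for a feature against its training-time baseline and exposes it as a Prometheus gauge. The metric name, port, PSI threshold, and the fetch_recent_scores() helper are illustrative assumptions, not part of this role's actual stack.

```python
import time

import numpy as np
from prometheus_client import Gauge, start_http_server

# Hypothetical gauge: one PSI value per monitored model feature.
PSI_GAUGE = Gauge(
    "model_feature_psi",
    "Population Stability Index of a live feature vs. its training baseline",
    ["feature"],
)

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    # Bin edges come from the reference (training-time) distribution.
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    live_pct = np.histogram(live, edges)[0] / len(live)
    # Clip to avoid log(0) on empty buckets.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    baseline = np.random.normal(0.0, 1.0, 10_000)  # stand-in for training data
    while True:
        live = fetch_recent_scores()  # hypothetical helper: recent feature values
        PSI_GAUGE.labels(feature="score").set(psi(baseline, live))
        # A Grafana panel or Prometheus alert rule (e.g. PSI > 0.2) sits on top.
        time.sleep(60)
```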
Required Experience:
Data Engineer with 15+ years of experience and the following skills:
Must have 5+ years of experience with data modernisation solutions, including hands-on work with on-premise or in-cloud AI and DWH solutions built on native or third-party services on AWS, Azure, GCP, Databricks, etc.
Must have experience designing and architecting data lake/warehouse projects using PaaS and SaaS offerings such as Snowflake, Databricks, Redshift, Synapse, BigQuery, etc., or on-premise data warehouse/data lake implementations
Must have good knowledge of designing ETL/ELT data pipelines and DS modules, implementing complex stored procedures, and standard DWH and ETL concepts
Experience in data migration from on-premise RDBMS to cloud data warehouses
Good understanding of relational as well as NoSQL data stores and of modelling methods and approaches (star and snowflake schemas, dimensional modelling)
Hands-on experience in Python and PySpark programming for data integration projects (a minimal sketch follows this list)
Support the resolution of a wide range of complex data-pipeline problems, both proactively and as issues surface
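As a taste of the pipeline and dimensional-modelling skills above, here is a minimal PySpark sketch that shapes a staged RDBMS extract into a star-schema dimension and fact table. All table names, columns, and storage paths are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_star_schema_etl").getOrCreate()

# Hypothetical source: a raw orders extract staged from an on-premise RDBMS.
orders = spark.read.parquet("s3://staging/orders/")  # path is illustrative

# Dimension table: deduplicated customers with a surrogate key.
dim_customer = (
    orders.select("customer_id", "customer_name", "country")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact table: one row per order line, keyed by the surrogate key,
# with a derived revenue measure.
fact_orders = (
    orders.join(dim_customer.select("customer_id", "customer_sk"), on="customer_id")
    .select(
        "customer_sk",
        "order_id",
        "order_date",
        (F.col("quantity") * F.col("unit_price")).alias("revenue"),
    )
)

# Load into the warehouse layer; partitioning the fact by date enables pruning.
dim_customer.write.mode("overwrite").parquet("s3://warehouse/dim_customer/")
(
    fact_orders.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://warehouse/fact_orders/")
)
```

In a real migration, the staging read would typically come from a JDBC extract or CDC feed rather than pre-staged Parquet.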
Preferred candidate profile:
Understanding of cloud networking, security, data security, data access controls, and related design aspects
AI and data solutions on hyperscalers such as Databricks, MS Fabric, Copilot, AWS Redshift, GCP BigQuery, GCP Gemini, etc.
A background in agentic AI and GenAI technologies will be an added advantage
Hands-on experience in planning and executing PoC / MVP / client projects involving data modernization and AI use-case development
Required Skills:
Bachelor's degree in Computer Science, Information Security, or a related field
Skilled in planning, organization, analytics, and problem-solving
Excellent communication and interpersonal skills to work collaboratively with clients and team members
Comfortable working with statistics
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: IT & Information Security
Role Category: IT Infrastructure Services
Role: Infrastructure Architect
Employment Type: Full time