Required Technical Skills
Core Platforms
Databricks: Unity Catalog, Delta Live Tables, Databricks SQL, Spark optimization, job orchestration
Tableau: calculated fields, LOD expressions, data blending, published data sources, Tableau Server/Cloud
Python: pandas, NumPy, PySpark, scikit-learn, SQLAlchemy, Airflow/Prefect integration
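As a hedged sketch of the kind of pandas work listed above (the dataset and column names are hypothetical, chosen only for illustration):

```python
import pandas as pd

# Hypothetical wafer-test dataset; lot IDs and yield values are made up.
df = pd.DataFrame({
    "lot": ["A", "A", "B", "B"],
    "yield_pct": [92.5, 88.0, 95.1, 90.2],
})

# Typical analyst workflow: group, aggregate, and derive a flag column.
summary = (
    df.groupby("lot", as_index=False)["yield_pct"]
      .mean()
      .assign(below_target=lambda d: d["yield_pct"] < 91.0)
)
print(summary)
```

The same groupby/aggregate pattern scales to PySpark DataFrames on Databricks with near-identical semantics.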
Large Language Models (LLMs)
Hands-on familiarity with leading LLMs and their APIs, including but not limited to:
SemiKong (semiconductor-domain LLM), a preferred differentiator for this role
OpenAI GPT-4 / GPT-4o
Anthropic Claude (claude-3 / claude-sonnet family)
Google Gemini (1.5 Pro / Flash)
Meta LLaMA 3 / LLaMA 3.1
Mistral / Mixtral
Cohere Command R+
Falcon, Phi-3, and other open-weight models
Prompt engineering, RAG architecture, LangChain / LlamaIndex for applied analytics use cases
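A minimal sketch of the RAG-style prompt assembly referenced above; no model is called, and the retrieved snippets and question are purely hypothetical:

```python
def build_rag_prompt(question: str, snippets: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    # Number each snippet so the model can cite its sources as [n].
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using only the context below. Cite sources as [n].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical retrieved snippets standing in for a vector-store lookup.
prompt = build_rag_prompt(
    "What was Q3 wafer yield?",
    ["Q3 wafer yield averaged 91.4%.", "Q2 yield was 89.9%."],
)
print(prompt)
```

In practice frameworks such as LangChain or LlamaIndex handle the retrieval step; the core idea remains injecting grounded context ahead of the question.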
Data & SQL
Advanced SQL window functions, CTEs, query optimization across Databricks SQL, Snowflake, or similar
Data modeling concepts: star/snowflake schemas, dimensional modeling
Experience with large-scale structured and semi-structured datasets (JSON, Parquet, Delta format)
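A small illustration of the CTE and window-function skills listed above, run here against an in-memory SQLite table with made-up sales data (the schema and values are assumptions for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('east', 100), ('east', 300), ('west', 200), ('west', 50);
""")

query = """
WITH regional AS (               -- CTE: rows enriched with per-region analytics
  SELECT region,
         amount,
         SUM(amount)  OVER (PARTITION BY region) AS region_total,
         RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
  FROM sales
)
SELECT region, amount, region_total, rnk
FROM regional
WHERE rnk = 1                    -- keep only the top sale per region
ORDER BY region;
"""
rows = conn.execute(query).fetchall()
print(rows)
```

The same WITH/OVER syntax carries over to Databricks SQL and Snowflake with only minor dialect differences.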
Additional Technical Exposure (Preferred)
Cloud platforms: AWS (S3, Glue, Redshift), GCP (BigQuery), or Azure (Synapse, ADLS)
Version control: Git/GitHub for script management and collaboration
Basic ML model evaluation and feature engineering in Python
Familiarity with CI/CD pipelines for analytics/ML assets
Experience Qualifications
10+ years of professional experience as a Data Analyst, Analytics Engineer, or closely related role.
Proven track record of delivering analytical projects in the semiconductor, manufacturing, or high-tech industries preferred.
Strong portfolio demonstrating Databricks and Tableau work at scale.
Demonstrated hands-on usage of LLMs in a professional or research context, not just theoretical knowledge.
Experience working in agile / scrum teams within large enterprise environments.
Bachelor's degree (or higher) in Computer Science, Statistics, Engineering, Mathematics, or a related quantitative field.
Professional Competencies
Strong business acumen: the ability to translate data findings into clear, actionable recommendations for non-technical stakeholders.
Excellent verbal and written communication skills in English.
Self-starter mindset; comfortable with ambiguity and evolving requirements.
High attention to detail and commitment to data accuracy and integrity.
Collaborative team player with the ability to work across global time zones as needed.
Key Responsibilities
Design, build, and maintain end-to-end data pipelines and analytical workflows on Databricks (Delta Lake, Spark, MLflow).
Develop interactive dashboards and visual reporting layers in Tableau; translate findings into executive-ready narratives.
Write production-quality Python scripts for data ingestion, transformation, automation, and model integration.
Collaborate with data engineers, product managers, and business stakeholders to define KPIs and data requirements.
Apply LLM-powered capabilities (e.g., SemiKong, GPT-4, Claude, Gemini, LLaMA, Mistral) to augment analytics workflows and automate insight generation.
Conduct exploratory data analysis (EDA), statistical modeling, and root-cause analysis on large-scale datasets.
Ensure data quality, governance, and lineage across analytical assets.
Mentor junior analysts and contribute to the team's data literacy initiatives.
Participate in cross-functional sprint planning and agile ceremonies as a technical SME.

Key skills: Computer science, Data analysis, Automation, Data modeling, Analytical, Agile, Data quality, Analytics, SQL, Python
Zensar stands out as a premier technology consulting and services company, embracing an "experience-led everything" philosophy. We are creators, thinkers, and problem solvers passionate about designing digital experiences that are engineered into scale-ready products, services, and solutions to deliver...