Adobe is redefining customer intelligence. We help enterprises understand their customers deeply: what they value and what they need, and translate that understanding into meaningful, personalized experiences.
The Adobe Experience Platform (AEP) is at the heart of this mission, powering data integration, governance, advanced analytics, and intelligent services at global scale. Our next chapter focuses on agent-driven AI systems: autonomous, adaptive agents that orchestrate complex workflows, learn from data, and deliver real-time insights and actions.
At the core lies our Enterprise Knowledge Base, a foundational platform that unifies structured, unstructured, and semi-structured data. This system fuels advanced AI and Generative AI applications across Adobe's digital marketing ecosystem, empowering customers to unlock richer, more intelligent experiences.
We are looking for someone who thrives on solving hard problems, scaling intelligent systems, and creating measurable impact. You'll lead the design and development of low-latency, high-throughput services that power intelligent enterprise experiences for millions of users worldwide.
Job Responsibilities
Lead the technical design and implementation strategy for major systems and components of AEP Knowledge Base and the Agentic ecosystem.
Apply a strong background in distributed storage (Delta Lake, Iceberg), compute (Spark), data warehousing (Snowflake, BigQuery, Redshift), and cloud services (Azure, AWS, GCP).
Apply knowledge of vector stores (FAISS, Pinecone, Weaviate, Milvus).
Optimize for data quality, low latency, high reliability, and scalability across enterprise environments.
Establish best practices for evaluation, safety, and guardrails for system behavior.
Collaborate with research and product teams to integrate modern capabilities into the platform.
Mentor engineers and drive technical excellence in building production-grade systems.
What you will need to succeed
6+ years designing and developing large-scale, data-intensive distributed systems.
2+ years in a technical lead role.
In-depth work experience with technologies such as Apache Spark, Delta Lake, Snowflake, and Kubernetes.
Hands-on experience with public cloud platforms (Azure, AWS, or GCP).
Strong background in relational databases (PostgreSQL, MySQL).
Proficiency in Java, Scala, or Python, and a deep understanding of algorithms and data structures.
A strong sense of ownership and cost-conscious design for compute and memory efficiency.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
Bachelor's, Master's, or PhD in Computer Science or a related technical field, or equivalent experience.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Platform Engineer
Employment Type: Full time