
Sr. Quality Assurance Engineer - AI @ OpenText

Job Description

YOUR IMPACT

We are seeking a passionate and detail-oriented Quality Assurance (QA) Engineer to join our AI Engineering and Enablement team.

In this role, you will be responsible for validating Generative AI systems, multi-agent workflows, and Retrieval-Augmented Generation (RAG) pipelines developed using frameworks like LangGraph, LangChain, and Crew AI.

You will work closely with AI engineers, data scientists, and product owners to ensure the accuracy, reliability, and performance of LLM-powered enterprise applications.

What The Role Offers

  • Be part of a next-generation AI engineering team delivering enterprise-grade GenAI solutions.
  • Gain hands-on experience testing LangGraph-based agentic workflows and RAG pipelines.
  • Learn from senior AI engineers working on production-grade LLM systems.
  • Opportunity to grow into AI Quality Specialist or AI Evaluation Engineer roles as the team expands.
  • Develop and execute test cases for validating RAG pipelines, LLM integrations, and agentic workflows.
  • Validate context retrieval accuracy, prompt behaviour, and response relevance across different LLM configurations.
  • Conduct functional, integration, and regression testing for GenAI applications exposed via APIs and microservices.
  • Test Agent-to-Agent (A2A) & Model Context Protocol (MCP) communication flows for correctness, consistency, and task coordination.
  • Verify data flow and embedding accuracy between vector databases (Milvus, Weaviate, pgvector, Pinecone).
  • Build and maintain automated test scripts for evaluating AI pipelines using Python and PyTest.
  • Leverage LangSmith, Ragas, or TruLens for automated evaluation of LLM responses (factuality, coherence, grounding).
  • Integrate AI evaluation tests into CI/CD pipelines (GitLab/Jenkins) to ensure continuous validation of models and workflows.
  • Support performance testing of AI APIs and RAG retrieval endpoints for latency, accuracy, and throughput.
  • Assist in creating automated reports summarizing evaluation metrics such as Precision@K, Recall@K, grounding scores, and hallucination rates.
  • Validate guardrail mechanisms, response filters, and safety constraints to ensure secure and ethical model output.
  • Use OpenTelemetry (OTEL) and Grafana dashboards to monitor workflow health and identify anomalies.
  • Participate in bias detection and red teaming exercises to test AI behavior under adversarial conditions.
  • Work closely with AI engineers to understand system logic, prompts, and workflow configurations.
  • Document test plans, results, and evaluation methodologies for repeatability and governance audits.
  • Collaborate with Product and MLOps teams to streamline release readiness and model validation processes.
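The retrieval-quality metrics referenced in the responsibilities above (Precision@K and Recall@K) reduce to simple set arithmetic over ranked results. A minimal, self-contained sketch, using hypothetical chunk IDs not tied to any particular vector database:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved chunks that are actually relevant."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc in top_k if doc in relevant) / len(top_k)

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant chunks that appear in the top-k results."""
    if not relevant:
        return 0.0
    top_k = retrieved[:k]
    return sum(1 for doc in relevant if doc in top_k) / len(relevant)

# Example: a retriever returns chunk IDs ranked by similarity score.
retrieved = ["c1", "c4", "c2", "c9", "c7"]
relevant = {"c1", "c2", "c3"}

print(precision_at_k(retrieved, relevant, 3))  # 2 of the top-3 are relevant -> 2/3
print(recall_at_k(retrieved, relevant, 3))     # 2 of the 3 relevant chunks found -> 2/3
```

In practice, `retrieved` would come from the vector store's top-K similarity search and `relevant` from a human-labelled golden set.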

What You Need To Succeed

  • Education: Bachelor's degree in Computer Science, AI/ML, Software Engineering, or a related field.
  • Experience: 4-7 years in Software QA or Test Automation, with at least 2 years of exposure to AI/ML or GenAI systems.
  • Solid hands-on experience with Python and PyTest for automated testing.
  • Basic understanding of LLMs, RAG architecture, and vector database operations.
  • Exposure to LangChain, LangGraph, or other agentic AI frameworks.
  • Familiarity with FastAPI, Flask, or REST API testing tools (Postman, PyTest APIs).
  • Experience with CI/CD pipelines (GitLab, Jenkins) for test automation.
  • Working knowledge of containerized environments (Docker, Kubernetes).
  • Understanding of AI evaluation metrics (Precision@K, Recall@K, grounding, factual accuracy).
  • Exposure to AI evaluation frameworks like Ragas, TruLens, or OpenAI Evals.
  • Familiarity with AI observability and telemetry tools (OpenTelemetry, Grafana, Prometheus).
  • Experience testing LLM-powered chatbots, retrieval systems, or multi-agent applications.
  • Knowledge of guardrail frameworks (Guardrails.ai, NeMo Guardrails).
  • Awareness of AI governance principles, data privacy, and ethical AI testing.
  • Experience with cloud-based AI services (AWS SageMaker, Azure OpenAI, GCP Vertex AI).
  • Curious and eager to learn emerging AI technologies.
  • Detail-oriented with strong problem-solving and analytical skills.
  • Excellent communicator who can work closely with engineers and product managers.
  • Passion for quality, reliability, and measurable AI performance.
  • Proactive mindset with ownership of test planning and execution.
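For a hands-on flavour of the Python/PyTest automation this role involves, a minimal sketch follows. `rag_answer` is a hypothetical stub, not a real OpenText or framework API; a production suite would call the deployed RAG endpoint instead and capture the retrieved context alongside the generated answer.

```python
def rag_answer(question: str, context_chunks: list[str]) -> str:
    # Hypothetical stub standing in for a real RAG pipeline call.
    # A production test would hit the deployed API instead.
    return f"Based on the context: {context_chunks[0]}"

def test_answer_is_grounded():
    # Grounding check: the answer must restate a fact that actually
    # appears in the retrieved context, not a hallucinated one.
    context = ["OpenText was founded in 1991."]
    answer = rag_answer("When was OpenText founded?", context)
    assert "1991" in answer

def test_answer_mentions_query_entity():
    # Relevance check: the answer should reference the entity asked about.
    context = ["OpenText was founded in 1991."]
    answer = rag_answer("When was OpenText founded?", context)
    assert "OpenText" in answer
```

Run with `pytest`, which auto-discovers the `test_` functions; real suites would parametrize over a golden question set and wire into CI.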

Job Classification

Industry: Software Product
Functional Area / Department: Engineering - Software & QA
Role Category: Quality Assurance and Testing
Role: Quality Assurance Engineer
Employment Type: Full time

Contact Details:

Company: Opentext
Location(s): Hyderabad



Keyskills: Quality Assurance, QA Automation, Python, REST, Regression Testing, AI, LLM, Microservices, Architecture, Docker, Kubernetes, Flask, Jenkins, GitLab, Grafana, Prometheus, Data Privacy, AWS, AWS SageMaker, Azure, GCP

Salary: ₹ Not Disclosed

