We're looking for a Jr. AI Security Architect to join our growing Security Architecture team. This role will support the design, implementation, and protection of AI/ML systems, models, and datasets. The ideal candidate is passionate about the intersection of artificial intelligence and cybersecurity, and eager to contribute to building secure-by-design AI systems that protect users, data, and business integrity.
Key Responsibilities
Secure AI Model Development
- Partner with AI/ML teams to embed security into the model development lifecycle, including data collection, model training, evaluation, and deployment.
- Contribute to threat modeling exercises for AI/ML pipelines to identify risks such as model poisoning, data leakage, or adversarial input attacks.
- Support the evaluation and implementation of model explainability, fairness, and accountability techniques to address security and compliance concerns.
- Develop and train internal models for security purposes, such as threat detection and abuse monitoring.
Model Training & Dataset Security
- Help design controls to ensure the integrity and confidentiality of training datasets, including the use of differential privacy, data validation pipelines, and access controls.
- Assist in implementing secure storage and version control practices for datasets and model artifacts.
- Evaluate training environments for exposure to risks such as unauthorized data access, insecure third-party libraries, or compromised containers.
AI Infrastructure Hardening
- Work with infrastructure and MLOps teams to secure AI platforms (e.g., MLflow, Kubeflow, SageMaker, Vertex AI), including compute resources, APIs, CI/CD pipelines, and model registries.
- Contribute to security reviews of AI-related deployments in cloud and on-premises environments.
- Assist in automating security checks in AI pipelines, such as scanning for secrets, validating container images, and enforcing secure permissions.
Secure AI Integration in Products
- Participate in the review and assessment of AI/ML models embedded into customer-facing products to ensure they comply with internal security and responsible AI guidelines.
- Help develop misuse detection and monitoring strategies to identify model abuse (e.g., prompt injection, data extraction, hallucination exploitation).
- Support product security teams in designing guardrails and sandboxing techniques for generative AI features (e.g., chatbots, image generators, copilots).
Knowledge Sharing & Enablement
- Assist in creating internal training and security guidance for data scientists, engineers, and developers on secure AI practices.
- Help maintain documentation, runbooks, and security checklists specific to AI/ML workloads.
- Stay current on emerging AI security threats, industry trends, and tools; contribute to internal knowledge sharing.