AI Security Personnel
Job description & requirements
- Secure machine learning and AI systems from design and development through deployment and operation. This role sits within cybersecurity and works with data science, engineering, and IT governance teams to ensure AI systems are resilient, trustworthy, and compliant with internal standards and external regulations. The role translates emerging AI risks into practical, scalable security controls.
Key Responsibilities
- Identify, assess, and mitigate security risks across AI/ML pipelines, including data ingestion, model training, deployment, and inference
- Implement controls to protect against AI-specific threats such as data poisoning, model theft, prompt injection, adversarial inputs, and model inversion
- Support secure deployment of AI models in cloud, containerized, and API-based environments
- Assist in securing third-party and open-source models, frameworks, and datasets
- Contribute to AI security risk assessments, threat models, and control mappings
- Support the development and enforcement of AI security standards, guardrails, and secure-by-design patterns
- Align AI security practices with broader enterprise risk, privacy, and compliance requirements (e.g., ISO 27001, NIST, GDPR, emerging AI regulations)
- Participate in AI governance forums and provide security input to model approval and review processes
- Help define logging, monitoring, and alerting requirements for AI systems
- Support investigation and response to AI-related security incidents or misuse
- Track vulnerabilities and emerging threats related to AI platforms and tooling
- Work with data scientists and engineers to embed security into AI development workflows and CI/CD pipelines
- Provide guidance on secure data handling, model access controls, and secrets management
- Contribute to internal training, documentation, and awareness around AI security risks and best practices
- Stay current with evolving AI threats, attack techniques, and defensive controls
- Evaluate AI security tools and capabilities, making recommendations for improvement
- Contribute to the organization's longer-term AI security roadmap
- Perform other responsibilities as assigned by the Head, Security Technology & Engineering
Requirements
Required Knowledge, Skills and Abilities:
- Experience in cybersecurity, cloud security, application security, or data security roles
- Working knowledge of machine learning concepts and AI system architectures
- Understanding of common AI/ML security risks and threat models
- Hands-on experience with ML frameworks or platforms (e.g., PyTorch, TensorFlow, SageMaker, Azure ML)
- Experience securing APIs, cloud services, containers, or MLOps platforms
- Ability to communicate security risks clearly to technical and non-technical stakeholders
- Familiarity with data governance, privacy engineering, or responsible AI principles
- Experience with threat modeling techniques (e.g., STRIDE) applied to AI systems
- Strong troubleshooting, documentation, and communication skills
Qualification & Experience
Mandatory
- Bachelor's degree in Computer Science, Information Security, or a related field
- Scripting or automation skills (e.g., Python)
- Certifications such as CISSP, CCSP, Security+, or emerging AI/security credentials
Desirable
- Cloud security certifications (e.g., AWS Security Specialty, Azure Security Engineer)
Only shortlisted candidates will be contacted.