AI Security Penetration Tester / AI Red Team Engineer
Position Overview
We are seeking a highly skilled AI Security Penetration Tester / AI Red Team Engineer to lead offensive security engagements focused on AI/ML-powered applications and platforms. This role is responsible for identifying, exploiting, and demonstrating security risks across traditional and AI-specific attack surfaces, including LLMs, AI-enabled APIs, and AI-driven business logic.
You will collaborate with Engineering, Security, Red Teams, SOC, and AI research teams to proactively identify weaknesses, simulate real-world AI attacks, and guide remediation strategies to strengthen enterprise AI security posture.
Key Responsibilities
• Conduct AI-focused penetration testing across web, API, mobile, and AI-powered systems.
• Perform AI red teaming exercises including prompt injection, jailbreak testing, model evasion, and adversarial ML attacks.
• Identify risks such as model poisoning, data leakage, adversarial inputs, and AI business logic abuse.
• Perform threat modeling and architecture reviews for AI-enabled applications.
• Develop and enhance AI-focused offensive security tools and testing methodologies.
• Research emerging AI attack techniques and assess potential business impact.
• Deliver comprehensive penetration testing reports and executive-ready presentations.
• Lead engagements end-to-end, including scoping, execution, reporting, and remediation validation.
• Partner with engineering teams to provide actionable security recommendations.
• Collaborate with Red Teams and the SOC to continuously improve AI security playbooks.
Required Qualifications
• 3+ years of hands-on penetration testing experience (web, API, mobile).
• Demonstrated experience in AI red teaming, LLM security testing, or adversarial ML.
• Proficiency with tools such as Burp Suite Pro, Netsparker, Checkmarx, or similar.
• Working knowledge of AI/ML frameworks (TensorFlow, PyTorch, LLM APIs, LangChain).
• Strong understanding of the OWASP Top 10, API security, and modern attack vectors.
• Excellent written and verbal communication skills.
• Relevant security certifications (GWAPT, OSWE, OSWA, CREST, etc.) preferred.
• Bachelor's degree in Computer Science, Cybersecurity, or equivalent experience.
Preferred Qualifications
• Experience testing LLM-based applications, chatbots, copilots, or AI workflows.
• Familiarity with MLOps, model deployment security, and cloud AI platforms (AWS, Azure, Google Cloud Platform).
• Ability to build custom offensive tools/scripts in Python, Go, or similar languages.
• Exposure to SOC operations, detection engineering, or purple team exercises.
• Contributions to AI security research, blogs, talks, or open-source projects.
What Success Looks Like
• AI vulnerabilities identified before production release
• Clear demonstration of AI attack paths and business risk
• Actionable remediation guidance adopted by engineering teams
• Continuous evolution of AI red teaming methodologies
• Measurable improvement in AI security posture
Apply to this job