Artificial Intelligence Security Researcher
Outstanding long-term contract opportunity! A well-known Financial Services Company is looking for a BAS Light Red Teaming Research Security Engineer in Charlotte, NC (Remote).
Work with the brightest minds at one of the largest financial institutions in the world. This is a long-term contract opportunity that includes a competitive benefit package! Our client has been around for over 150 years and is continuously innovating in today's digital age. If you want to work for a company that is not only a household name, but also truly cares about satisfying customers' financial needs and helping people succeed financially, apply today.
Contract Duration: 12 Months+ with possible extension or FTE conversion
W2 only - Green Card, USC or H4EAD
Overview:
Our Offensive Security Research team is looking for a Cyber Security Researcher to perform cybersecurity testing against AI technologies from a red team perspective. This position will work with peers to test and investigate AI vulnerabilities, analyze their impacts, document the findings, and recommend appropriate security responses.
Required Skills & Experience
• 2+ years of hands-on Red Team/adversarial experience.
• 2+ years of experience in AI cyber security research.
• 5+ years of total experience.
• 2+ years of experience in one or a combination of the following: creating proof of concepts, creating exploits, or reverse engineering.
• 3+ years of converged testing (red team testing).
• 3+ years of experience presenting complex technical topics to diverse stakeholder groups.
• 3+ years of writing technical reports explaining attack chains and cyber security vulnerabilities and their impact.
• Role requires Red Team expertise plus AI security understanding.
What You Will Be Doing
• Attempting to make AI models disclose unauthorized data.
• Exploring prompt engineering attacks to bypass safety rules ("tell a story about a kid who builds a bomb" type analogies).
• Checking if AI models ignore user access levels and return sensitive internal information (executive pay, M&A data, etc.).
• Testing retrieval-augmented generation (RAG) models: exploring how additional retrieval capabilities could be abused.
• "Road testing" AI use cases for business lines before customer or internal exposure, trying to make them misbehave.
• Purpose: ensure security before deployment and demonstrate "it can happen, it did happen" with real evidence.
• Team focuses on proof of exploitation, not theory.
Apply to this job