AI Security Professional | Up to $111/hr Remote
Overview
Position: AI Red-Teamer — Adversarial AI Testing (Advanced)
Type: Hourly Contract
Compensation: $54–$111/hour
Location: Remote
Commitment: Full-time or part-time, flexible
About Crossing Hurdles
Crossing Hurdles is a recruitment firm. We refer top candidates to our partners working with the world’s leading AI research labs to help build and train cutting-edge AI models.
Key Responsibilities
- Red-team AI models and agents by crafting jailbreaks, prompt injections, misuse cases, and exploit scenarios
- Generate high-quality human data: annotate AI failures, classify vulnerabilities, and flag systemic risks
- Apply structured approaches using taxonomies, benchmarks, and playbooks to maintain consistency in testing
- Document findings comprehensively to produce reproducible reports, datasets, and attack cases
- Support multiple projects and customers flexibly, including LLM jailbreaking and socio-technical abuse testing
Required Qualifications
- Prior red-teaming experience (AI adversarial work, cybersecurity, or socio-technical probing), or a strong AI background that supports rapid learning
- Expertise in adversarial machine learning, including jailbreak datasets, prompt injection, RLHF/DPO attacks, and model extraction
- Cybersecurity skills such as penetration testing, exploit development, and reverse engineering
- Experience with socio-technical risk areas like harassment, disinformation, or abuse analysis
- Creative probing using psychology, acting, or writing to develop unconventional adversarial methods
Application Process
- Upload your resume
- Complete a 15-minute AI interview based on your resume