Senior AI Product Security Researcher
Overview

We are seeking a Senior AI Product Security Researcher to join our Security Platforms & Architecture team to conduct cutting-edge security research on GitLab's AI-powered DevSecOps capabilities. As GitLab transforms software development through intelligent collaboration between developers and specialized AI agents, we need security researchers who can proactively identify and validate vulnerabilities before they impact our platform or customers.

In this role, you'll be at the forefront of AI security research, working with GitLab Duo Agent Platform, GitLab Duo Chat, and AI workflows that represent the future of human-AI collaborative development. You'll develop novel testing methodologies for AI agent security, conduct hands-on penetration testing of multi-agent orchestration systems, and translate emerging AI threats into actionable security improvements. Your research will directly influence how we build and secure the next generation of AI-powered DevSecOps tools, ensuring GitLab remains the most secure software factory platform on the market.

This position offers a unique opportunity to shape AI security practices at one of the world's largest DevSecOps platforms, working with engineering teams who are pushing the boundaries of what's possible with AI-assisted software development. You'll have access to cutting-edge AI systems and the freedom to explore creative attack scenarios while contributing to the security of millions of developers worldwide.

What You'll Do

- Identify and validate security vulnerabilities in GitLab's AI systems through hands-on testing, developing proof-of-concept exploits that demonstrate real-world attack scenarios
- Execute comprehensive penetration testing targeting AI agent platforms, including prompt injection, jailbreaking, and workflow manipulation techniques
- Research emerging AI security threats and attack techniques to assess their potential impact on GitLab's AI-powered platform
- Design and implement testing methodologies and tools for evaluating AI agent security and multi-agent system exploitation
- Create detailed technical reports and advisories that translate complex findings into actionable remediation strategies
- Collaborate with AI engineering teams to validate security fixes through iterative testing and verification
- Contribute to the development of AI security testing frameworks and automated validation tools
- Partner with Security Architecture to inform architectural improvements based on research findings
- Share knowledge and mentor team members on AI security testing techniques and vulnerability discovery

What You'll Bring

- 5+ years of experience in security research, penetration testing, or offensive security roles, with demonstrated expertise in AI/ML security
- Hands-on experience discovering and exploiting vulnerabilities in AI systems and platforms
- Strong understanding of AI attack vectors, including prompt injection, agent manipulation, and workflow exploitation
- Proficiency in Python, with experience in AI frameworks and security testing tools
- Experience with offensive security tools and vulnerability discovery methodologies
- Ability to read and analyze code across multiple languages and codebases
- Strong analytical and problem-solving skills, with creative thinking about attack scenarios
- Excellent written communication skills for documenting technical findings and creating security advisories
- Ability to translate technical findings into clear risk assessments and remediation recommendations

Nice-to-Have Qualifications

- Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures
- Published security research or conference presentations on AI security topics
- Background in software engineering with distributed systems expertise
- Security certifications such as OSCP, OSCE, GPEN, or similar
- Experience with GitLab or similar DevSecOps platforms
- Knowledge of AI agent communication protocols and multi-agent architectures

About the Team

Security Researchers are part of our Security Platforms and Architecture team, which addresses complex security challenges facing GitLab and its customers so that GitLab can be the most secure software factory platform on the market. Composed of Security Architecture and Security Research, we focus on systemic product security risks and work cross-functionally to mitigate them while maintaining Engineering's development velocity.

Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply for a job unless they meet every single qualification. If you're excited about this role, please apply and allow our recruiters to assess your application.

The base salary range for this role's listed level is currently for residents of listed locations only. Grade level and salary ranges are determined through interviews and a review of the applicant's education, experience, knowledge, skills, and abilities, as well as internal equity with other team members and alignment with market data. See more information on our benefits and equity.
Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.
- Location: United Kingdom
- Job Type: Full-Time