Staff Threat Detection & Response Engineer

London, UK

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.

We’re here because governments are critical for advanced AI going well, and AISI is uniquely positioned to mobilize them. With our resources and the UK government's unique agility and international influence, this is the best place to shape both AI development and government action.

About the Team:

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What you might work on:

  • Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
  • Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility); see the integrity-check sketch after this list
  • Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
  • Develop automated, evidence driven assurance mapped to relevant standards, reducing audit toil and improving signal
  • Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
  • Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
  • Assess third party services and hardware/software supply chains; introduce lightweight controls that raise the bar
  • Contribute to open standards and open source, and share lessons with the broader community where appropriate
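
To make the artefact-verification item above a little more concrete, here is a minimal sketch of the core integrity check. It assumes artefacts ship alongside a SHA-256 manifest whose signature has already been verified by a signing tool; the file names and manifest layout are hypothetical, not a description of AISI's actual pipeline.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream a file and return its hex SHA-256 digest."""
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_artefacts(artefact_dir: Path, manifest_path: Path) -> list[str]:
        """Compare each artefact's digest against a manifest of the form
        {"model.safetensors": "<sha256>", ...} and return any mismatches."""
        manifest = json.loads(manifest_path.read_text())
        return [
            name
            for name, expected in manifest.items()
            if sha256_of(artefact_dir / name) != expected
        ]

    if __name__ == "__main__":
        failures = verify_artefacts(Path("artefacts"), Path("artefacts/manifest.json"))
        if failures:
            print("artefacts failing verification:", failures)
        else:
            print("all artefacts verified")

In practice the manifest would be produced and signed in CI so that verification binds artefacts to a specific build, but the digest comparison above is the core of the check.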

If you want to build security that accelerates frontier-scale AI safety research and see your work land in production quickly, this is a good place to do it.

Role Summary

Build and maintain a modern, mission-aware detection engineering practice. You’ll own AISI’s threat model, define detections that reflect AISI-specific risks, and collaborate with DSIT’s SOC to extend coverage and context. You’ll focus on signal quality, not alert volume. You will extend coverage to AI/ML surfaces, instrumenting the model lifecycle and AI platforms so threats to model weights, data pipelines, GPU estates, and inference endpoints are visible, correlated, and actionable.

Responsibilities

  • Define and evolve AISI’s threat model, working with platform, research, and policy teams
  • Write detection rules, correlation logic, and hunt queries tailored to AISI's risk surface
  • Ensure relevant signals are logged, routed, and contextualised appropriately
  • Maintain detection playbooks, triage documentation, and escalation workflows
  • Act as a liaison between AISI engineering and DSIT's central SOC
  • Evaluate detection gaps and propose new signal sources or telemetry improvements
  • Extend the threat model to AI/ML: data/feature pipelines, training/finetuning, evaluations/release gates, registries, GPUs, and inference services
  • Develop detections for AI-specific risks: model weight custody/exfil (e.g., anomalous KMS decrypts, S3 access), registry tampering, dataset poisoning, training pipeline/image compromise, GPU abuse/cryptomining, and inference abuse (prompt injection/data exfil patterns, anomalous RAG connector access); see the hunt sketch after this list
  • Define hunts and correlations that tie AI safety/evaluation signals (red-team hits, eval regressions, release gate overrides) to security events and insider/outsider activity
  • Author and rehearse AI-focused incident playbooks (weights leak, compromised model artefacts, inference abuse campaigns) with DSIT SOC
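
As a taste of the weight-custody detection work above, here is a minimal hunt sketch that counts kms:Decrypt calls against a dedicated model-weights key, per principal, using CloudTrail's LookupEvents API via boto3. The key ARN, the expected service role, and the volume threshold are hypothetical placeholders, not AISI configuration.

    import json
    from collections import Counter
    from datetime import datetime, timedelta, timezone

    import boto3

    # Hypothetical placeholders: the weights key ARN, the expected role, and the
    # threshold are illustrative, not a real environment's configuration.
    MODEL_WEIGHTS_KEY_ARN = "arn:aws:kms:eu-west-2:111122223333:key/EXAMPLE"
    EXPECTED_PRINCIPALS = {"arn:aws:iam::111122223333:role/eval-runner"}
    DECRYPTS_PER_PRINCIPAL_THRESHOLD = 50

    def hunt_weight_key_decrypts(hours: int = 24) -> Counter:
        """Count kms:Decrypt calls against the model-weights key, per principal,
        over the lookback window, using CloudTrail's LookupEvents API."""
        cloudtrail = boto3.client("cloudtrail")
        end = datetime.now(timezone.utc)
        start = end - timedelta(hours=hours)
        counts: Counter = Counter()
        pages = cloudtrail.get_paginator("lookup_events").paginate(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "Decrypt"}],
            StartTime=start,
            EndTime=end,
        )
        for page in pages:
            for event in page["Events"]:
                record = json.loads(event["CloudTrailEvent"])
                resources = record.get("resources") or []
                if not any(r.get("ARN") == MODEL_WEIGHTS_KEY_ARN for r in resources):
                    continue
                principal = record.get("userIdentity", {}).get("arn", "unknown")
                counts[principal] += 1
        return counts

    if __name__ == "__main__":
        for principal, n in hunt_weight_key_decrypts().items():
            if principal not in EXPECTED_PRINCIPALS or n > DECRYPTS_PER_PRINCIPAL_THRESHOLD:
                print(f"review: {principal} made {n} Decrypt calls against the weights key")

A production version would run as detection-as-code against a centralised log pipeline rather than ad hoc API calls, but the shape of the signal is the same: who decrypted the weights key, how often, and from where.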

Profile requirements

  • Strong understanding of detection-as-code, MITRE ATT&CK, log pipelines, and cloud signal sources
  • Detection engineering mindset focused on signal quality and measurable coverage
  • Able to navigate outsourced SOC relationships while owning internal threat understanding
  • Familiarity with AWS CloudTrail, GuardDuty, KMS, S3 access logs, EKS/ECS audit, and custom log ingestion; exposure to SageMaker/Bedrock or equivalent a plus
  • Understanding of cloud-native telemetry and logging gaps
  • Practical grasp of AI/ML attack surfaces and telemetry needs (model registries, weights custody, GPU/accelerator fleets, inference gateways, vector stores)
  • Experience instrumenting and detecting threats across AI/ML workloads (weights, datasets, training/inference) and correlating safety and security signals
  • Familiarity with AI threat frameworks (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) desirable
  • Curious, methodical, and proactive mindset

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; each salary comprises a base salary, a technical talent allowance, and additional benefits as detailed on this page.

  • Level 3 - Total Package £65,000 - £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 - £39,280
  • Level 4 - Total Package £85,000 - £95,000, inclusive of a base salary of £42,495 plus an additional technical talent allowance of between £42,505 - £52,505
  • Level 5 - Total Package £105,000 - £115,000, inclusive of a base salary of £55,805 plus an additional technical talent allowance of between £49,195 - £59,195
  • Level 6 - Total Package £125,000 - £135,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of between £56,230 - £66,230
  • Level 7 - Total Package £145,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of £76,230

This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

There are a range of pension options available which can be found through the Civil Service website.
