Military medics trial AI for the battlefield
Scientists from the UK and the US have explored what it would take for medics to delegate high-stakes decisions to AI on the battlefield.
Experts from the Defence Science and Technology Laboratory (Dstl) are collaborating with the US Defense Advanced Research Projects Agency (DARPA) by using hardware and methodologies developed under DARPA’s In the Moment (ITM) fundamental research program.
In the Moment (ITM)
DARPA’s In the Moment (ITM) research program investigates whether the alignment of AI to individual humans affects their willingness to delegate decisions to AI in high-risk situations. This means encoding AI with human preferences and priorities.
AI systems don’t naturally align with humans (they don’t think or behave like humans), and there are currently no established methods for measuring human decision-making. This raises the question: how do we align AI to humans? ITM aims to answer this question and develop technologies to enable this alignment.
Using the tools and methods from the DARPA ITM program, the trials in the UK were designed to explore the extent to which people are more likely to delegate to someone or something that has the same decision-making attributes and priorities that they do. The trials also explored whether AI can be ‘aligned’ to individuals’ decision-making attributes.
Outcome of the trials
The outcome of the trials, which took place in October 2025 at Merville Barracks in Colchester and Brize Norton in Oxfordshire, is expected to help answer big questions around AI and trust, and how understanding these issues can save lives.
Increased confidence in delegating could see larger groups of people triaged and treated more quickly, with the decision-making principles of an experienced medic guiding practitioners, thereby saving lives.
Dstl Human Factor Specialist Suzy said:
We ran a trial that we have been working on with our American colleagues at DARPA and we’re looking at human-AI teaming in a medical triage setting.
In the future we’re expecting a lot more information to be coming into the warfighter.
We’re really interested in how the warfighter makes decisions based on some of this information and how potentially AI systems can help with that.
What the trial investigated
The trial investigated what factors may affect decision-making in a medical triage scenario, when there is no ‘correct’ answer. These factors include:
- merit focus (for instance, would a medic treat an injured attacker or victim first)
- potential quality of life
- quantity of life
- affiliation focus preference (for instance, would a medic prioritise someone from a similar military background for treatment, with all injuries being comparable)
This concept was tested in simulated mass casualty scenarios, by first baselining the participants’ important decision-making attributes in desktop scenarios and then in virtual reality (VR). AI was then used to assimilate the thought process of a lead medic that was either aligned or misaligned with the participants’ decision-making attributes.
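To make the alignment idea concrete, here is a minimal, hypothetical sketch of how a participant’s baselined attribute weights might be compared against an AI ‘medic’ profile. The attribute names mirror the factors listed above, but the weights, profiles, and the cosine-similarity measure are illustrative assumptions, not the actual ITM methodology.

```python
# Hypothetical sketch: comparing a participant's baselined decision-making
# attributes with an AI "medic" profile. Attribute names follow the trial's
# listed factors; weights and the similarity measure are illustrative only.
from math import sqrt

ATTRIBUTES = ["merit_focus", "quality_of_life", "quantity_of_life", "affiliation_focus"]

def alignment(participant: dict, ai_profile: dict) -> float:
    """Cosine similarity between two attribute-weight profiles.

    Returns a value in [0, 1] for non-negative weights: closer to 1 means
    the AI's priorities more closely match the participant's.
    """
    dot = sum(participant[a] * ai_profile[a] for a in ATTRIBUTES)
    norm_p = sqrt(sum(participant[a] ** 2 for a in ATTRIBUTES))
    norm_a = sqrt(sum(ai_profile[a] ** 2 for a in ATTRIBUTES))
    return dot / (norm_p * norm_a)

# Illustrative profiles: one AI roughly matches the participant's priorities,
# the other weights the factors very differently.
participant = {"merit_focus": 0.2, "quality_of_life": 0.4,
               "quantity_of_life": 0.3, "affiliation_focus": 0.1}
aligned_ai = {"merit_focus": 0.25, "quality_of_life": 0.35,
              "quantity_of_life": 0.3, "affiliation_focus": 0.1}
misaligned_ai = {"merit_focus": 0.6, "quality_of_life": 0.05,
                 "quantity_of_life": 0.05, "affiliation_focus": 0.3}

print(alignment(participant, aligned_ai) > alignment(participant, misaligned_ai))  # True
```

In the trial itself, participants judged alignment by reviewing the AI’s triage responses rather than by computing a score; a numeric measure like the one above is simply one way researchers might operationalise ‘same decision-making attributes and priorities’.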
Participants were able to review the responses of the AI and decide whether they would trust that ‘medic’ enough to delegate to it. They were not told they were dealing with AI until after the exercise.
Next steps
The post-trial analysis and findings will inform ongoing Dstl research within the Humans in Systems and People Implications of AI research streams, specifically the areas of Human-AI teaming and decision-making.
Read our biscuit book on human-centred ways of working with AI in intelligence analysis.
https://www.gov.uk/government/news/military-medics-trial-ai-for-the-battlefield