Tur(n)ing the Tables: How Can AI Enhance National Security?
Published by Alex Chalmers and Nathan Benaich on 16 November 2023.
While the UK held its AI Safety Summit, primarily focused on long-term risks, a wider “AI Fringe” was held in parallel. This brought together academia, industry, investors, and civil society to discuss more immediate questions around applications and regulation.
I was invited to speak on a panel at the AI and National Security Symposium, hosted by Jonathan Luff of Epsilon Advisory Partners and Kevin Allison of Minerva Technology Policy Advisors - two firms focused on helping fast-growing technology companies navigate government and geopolitics.
Our panel, featuring speakers from academia and the intelligence community, covered a range of topics, including the difficulty of translating research breakthroughs into national security applications, the risks new technology brings, and how we can reduce barriers to entry for start-ups. I also covered some State of AI Report 2023 highlights that were relevant for defense and national security - I’ve included the matching slides at the end.
I’ve tidied up the notes I made ahead of the event, including some material we didn’t have time to cover, in case they’re of interest to others. It was great to see such a packed room for an issue that has historically been under-discussed in the AI community. If you’re interested in any of the below, please don’t hesitate to get in touch.
Defense AI ecosystem
War as a catalyst for action
State of AI - highlights for defense
2023 was, of course, the year of the large language model (LLM), and OpenAI crushed all before it. The potential of LLMs to support intelligence analysis is obvious. Jonathan has written an interesting Substack advancing the possibility of the Foreign and Commonwealth Office fine-tuning an LLM using its archives to yield new insights. We’ve already heard that defense ministries around the world are exploring the use of AI in supporting strategy formation.
With this in mind, it’s been striking to see the performance of models, with even relatively small training datasets, on strategy-based tasks.
We’ve also seen striking advances in computer vision, with DINOv2 demonstrating the potential of models that haven’t been trained on manually labeled data to perform well on classification and segmentation tasks.
It wouldn’t be a discussion about defense without looking at drones. The progress of systems trained with model-free deep reinforcement learning in simulation, and operated using just on-board compute and sensors, is striking - especially as war takes place in an increasingly electromagnetically contested environment.
Although drones are the highest-profile use of technology in Ukraine, they have been far from the only one. The Ukrainians have innovated to defend their country by reducing procurement times for privately built technology by 5x and by increasing capped profit margins on government contracts for private vendors.
The report isn’t all good news, however. Firstly, there is the very real possibility of AI being misused.
Leaps forward in capability have, unsurprisingly, been accompanied by leaps forward in geopolitical competition.
And there are serious questions about democratic governments’ ability to get new technology into the hands of those on the frontline who need it most.