Why does so much government tech investment deliver so little?

Published by Nathan Benaich and Alex Chalmers on 9 November 2023.



TL;DR: The growing interest in AI from governments is a welcome development, but we believe that excitement should be tempered with discipline. We look at how government technology investment often fails to accomplish its goals. This usually stems from the absence of a clear rationale for government action, inadequate funding, and the inherent limitations of top-down approaches to technological development. As a result, we see a combination of small, low-value grants at one end and grandiose “grands projets” at the other. We see this pattern across EU-wide efforts and, increasingly on a smaller scale, in the UK. We propose tests that any serious government investment in technology should pass and provide two examples that meet the bar.

Introduction

The state is back. In recent years, calls for the government to play a more active role in the development of technology have grown and, perhaps surprisingly, industry has been leading the charge. There have been calls for greater investment in everything from semiconductors and quantum computing to space and cloud infrastructure capacity.

This chorus reaches a crescendo when it comes to AI. As a force multiplier for intelligence, AI has the power to transform the economy. It also has clear geopolitical relevance amidst an intensifying arms race between the US and China. At the same time, as competition for talent and GPUs mounts, the bar to entry is progressively being raised.

There is a long history of failed government technology investment, with projects that were fundamentally flawed in their rationale, design, and execution. This is particularly obvious in the various pushes for European technological sovereignty we’ve seen in recent decades.

At Air Street, we aren’t hostile to the idea of the government partnering with industry or investing in technology. We’ve even called for it ourselves in specific contexts. Our vision of European Dynamism is grounded in government and early-stage companies working together to tackle critical strategic challenges. At the same time, money should not be frittered away on low-impact grants or on vanity-driven big projects or institutes.

In this piece, we examine a range of examples, draw out the lessons, and identify where we believe that history is already beginning to repeat itself.

Confused rationale - “we need a European X”

Many major technology projects come unstuck when faced with the simple question of ‘why?’. The answer often turns out to be the misguided political belief that the absence of a major European player is proof of market failure.

The archetypal example is the late Quaero, a Franco-German attempt to build a competitor to Google. The project brought together a consortium of technology companies and public research institutes, with the aim of creating a multimedia search engine - allowing text, image, or video-based search. It would also allow people to search across a range of cultural works.

In 2005, President Jacques Chirac, the project’s main advocate, argued that Quaero was necessary as “we must take up the global challenge of the American giants Yahoo! and Google” and “defend the world's cultural diversity against the looming threat of uniformity … Our power is at stake.” Chirac then drew parallels with the “magnificent success of Airbus”. The exact threat Yahoo represented to cultural diversity or European power remained unspoken. Likely for good reason.

Similarly, the ongoing saga of Gaia-X is a case study in how a shaky strategic rationale can give rise to ill-conceived projects.

Gaia-X is an EU attempt to build a federated data infrastructure that will act as a competitor to US cloud companies. This infrastructure would be grounded in “European values”, giving people more control over their personal data. To stress the importance of sovereignty, the project’s supporters point to the US Cloud Act, which compels US cloud businesses to hand over data on European users to law enforcement when presented with a warrant.

These arguments ignore the prevalence of European cloud services, which already offer customers the option to store their data within European member states. They similarly overlook converging norms and regulation around data and privacy. As the European Centre for International Political Economy has pointed out, invoking the specter of snooping by US law enforcement is also disingenuous, and does not account for extant European safeguards or existing plans to enable smoother international transfers of data to support criminal investigations.

Unrealistic resource commitments - expense with lack of results

Technological innovation doesn’t come cheap and frequently requires sums of money that governments understandably feel unable to commit. This results in a funding death-zone, where projects receive enough funding to seem expensive, but rarely enough to ensure successful delivery.

Quaero, for example, received approximately €400 million in government and industry backing. This is a significant amount of money, but obviously wholly inadequate for the task of building a challenger to one of the most successful technology companies of all time.

In recent years, Europe has made a dramatic push on semiconductor sovereignty, pledging €43 billion of investment as part of the European Chips Act, in response to the US CHIPS Act. The plan intends to double the EU’s global market share in semiconductors from 10% to 20% by 2030.

This may sound like an ambitious project, but it actually only features €3.3 billion of direct investment - the vast majority of which has been diverted away from other research and development work. Considering the multi-billion dollar costs associated with starting individual semiconductor fabs, it’s implausible that this kind of funding will make a difference at a pan-European level. A 20% market share goal would likely require hundreds of billions of Euros to have even a prospect of success, according to industry insiders.

With Quaero, the Chips Act, and many other European projects, it’s possible to argue that industry is paying for at least a proportion of the work, which means that taxpayers aren’t losing out. However, government attempts to spur business to invest in bad ideas distort existing markets, channeling potentially productive investment into unproductive ends.

Government, industry, strategy - words that should be combined with caution

Even if clear rationales or adequate budgets had been in place for many of these programmes, it remains doubtful whether they could have been delivered successfully.

A combination of risk aversion and political pressure means that European innovation efforts are often divided up across a range of countries, private companies, and publicly-funded research institutions. Far from de-risking a project, this multiplies the potential points of failure.

In the case of Quaero, this meant that within a year, the French and German teams began to clash. The French wanted to focus on the multimedia element, the Germans on a text search engine. The teams parted ways, with the Germans raising an additional $165 million in funding for their own project, which was also later discontinued.

Gaia-X similarly brought together an unwieldy coalition of parties, with conflicting interests and objectives. Reports suggest fights between member states over control of the board, a proliferation of working groups, and disagreements about how much of a role non-European companies should be allowed to play in the project.

Once technology projects become a way of channeling money to incumbent businesses or vehicles for job creation, they inevitably become detached from their real purpose, which ought to be delivering an important or missing capability.

Returning to semiconductors, Chris Miller, a world expert on semiconductor competition, believes that the supply chain focus neglects how shortages stem from bad inventory management by European manufacturers, rather than genuine supply problems. Rather than fueling an international subsidy arms race that it has no chance of winning, Europe should be proud of how it already controls critical industry nodes, namely specialized chemicals and machine tools.

Red lights on the dashboard - UK science and technology policy

The AI age is fertile territory for incoherent and inefficient innovation projects. We’ve been closely following UK science and technology policy in recent years and are worried that the same traits are visible - even if the sums of money involved are smaller.

This manifests itself in sprawling lists of commitments and strategies that rarely channel enough funding to make a meaningful difference. This was at the heart of the House of Lords’ Science and Technology Committee’s 2022 criticism of the “profusion of sectoral strategies in areas such as artificial intelligence and life sciences”, which mean that delivery bodies “are being pulled in multiple directions, with insufficient resources to meet the demand”.

We see this in bigger commitments, like the UK’s £1 billion semiconductor strategy, which falls into the European trap of being simultaneously expensive and unimpactful. At the cheaper end of the spectrum, we saw a seemingly random £13 million commitment “to transform healthcare research” through a hodgepodge of research grants and surgical robots, unveiled at the same time as the UK AI Safety Summit.

Along with these kinds of random grants, the UK’s experience has also highlighted the danger of investing money in institutions without adequate accountability. In 2022, the Alan Turing Institute had a total income of £51 million (including £35 million of public money), but it would be hard-pressed to demonstrate that it has delivered anything approaching value for money. The UK Government has now moved to create an AI Safety Institute, which we will be monitoring with interest. The ATI experience shows that investment alone, without proper oversight or a clear political imperative, leads to stagnation.

While this is not the Safety Institute’s fault, we were dismayed to read that the Government would only be able to produce an “interim” State of Science report ahead of the follow-up AI Summit hosted by South Korea (already downgraded to a virtual event). Considering the speed at which the field is evolving, this is an unserious timescale and demonstrates the peril of governments being unable to adapt their usual ways of working to a new environment. Hopefully the Safety Institute will be given the freedom to operate more nimbly and ambitiously, so it does not risk becoming ATI mark II.

That’s not to say there’s been no progress. We were happy the government ignored political pressure and allowed Britishvolt to fail, rather than freeing up £100 million to be burned on a project with no prospect of success. Similarly, we’re glad that the prime minister did not jump on the Brit-GPT bandwagon. The idea of a “sovereign LLM” strikes us as a more expensive chatbot equivalent of Quaero.

This kind of discipline needs to be preserved and extended intelligently. It means clearing the barnacles off the boat by not throwing money at every potentially worthy cause or allowing the innovation agenda to be captured by the lobby of trade associations, research councils, and other insiders.

Government innovation - a checklist

As we stated at the outset, we are by no means opposed to all government investment or the concept of sovereign capability in technology. We believe there are some challenges that either private markets are unlikely to solve by themselves or are of sufficient strategic importance that we cannot simply hope they will be solved.

There are several key tests that policymakers should bear in mind when approaching these decisions:

  • Relevance: is there a clear reason why the government needs to act here? Why won’t the private sector or research institutions be able to handle this?
  • Novelty: is this investment creating a new capability or replicating one that already exists within another friendly power that we are able to access?
  • “Insurance”: if this capability does exist elsewhere, how plausible is the scenario in which access is cut off? Would this investment meaningfully mitigate against this scenario? Were we to be cut off, what safeguards are already in place?
  • Plausibility: is meaningful sovereign capability actually possible? Would the final “sovereign” capability actually be dependent on the very relationships we’re trying to de-risk?
  • Commitment: are we actually able to spend the amount of money required to make this work in a meaningful way?

To illustrate this, we propose two sectors where the case is compelling.

Defense innovation is by definition dependent on the government. The Russian invasion of Ukraine has demonstrated the shifting nature of warfare and the need for European governments to act. We also know that incumbent defense suppliers, who specialize in creating exquisitely complicated manned hardware platforms, are ill-placed to innovate in AI and software. Years of defense underfunding have also left the European defense-industrial base lacking in resilience. We’ve also seen warnings from US officials that there is a possible future where the US military may be too tied down in the South China Sea to adequately defend Europe’s eastern border.

While the US has been one of the main suppliers of traditional military aid to the Ukrainians, US companies are far from cornering the market in technology like drones. The amount of money needed to effectively partner with earlier-stage defense companies is also a fraction of the amount spent on hardware projects that frequently underperform or run billions over budget.

Another case is compute power for research and early-stage businesses. Considering very real capacity constraints and spiraling GPU costs, there is room here for the government to play a useful role. Despite GPUs being essential for cutting-edge AI research, the UK has fewer than 1,000 NVIDIA A100s available to researchers. This is fewer than a single research team at Meta has at its disposal. The UK has already recognised the problem, but has yet to attack it with sufficient ambition. The 3,000-GPU cluster proposed in the UK’s Independent Review of the Future of Compute, with delivery by 2026, is inadequate for the task.

Closing thoughts

We will likely return to the role of government in the development of technology in the future, as we continue to explore the foundations of European Dynamism. The focus will likely be more on education and removing obstacles to investment than on innovating in-house. After all, most successful technology companies were set up by software engineers or others with deep expertise. Jacques Chirac was no Bill Gates.

Europe has incredible potential, considering its political stability, world-leading universities, and existing talent. But it cannot and should not throw centuries-old economic truths out of the window to conjure up a simulacrum of technological sovereignty. Superpower status will not come via death by a thousand grants or attempting to recreate Silicon Valley in Strasbourg. It will come from what markets do best: discipline, picking battles wisely, and allowing bad projects to fail.