September 21, 2018

AI, EU, go. How Europe can improve the development of AI

Its real clout comes from its power to set standards

THE two superpowers of artificial intelligence (AI) are America and China. Their tech giants have collected the most data, attracted the best talent and boast the biggest computing clouds—the main ingredients needed to develop AI services from facial recognition to self-driving cars. Their dominance deeply worries the European Union, the world’s second-largest economic power. It is busily concocting plans to close the gap.

That Europe wants to foster its own AI industry is understandable. Artificial intelligence is much more than another Silicon Valley buzzword—more, even, than seminal products like the smartphone. It is better seen as a resource, a bit like electricity, that will touch every part of the economy and society. Plenty of people fret that, unless Europe has its own cutting-edge research and AI champions, big digital platforms based abroad will siphon off profits and jobs and leave the EU a lot poorer. The technology also looms large in military planning. China’s big bet on AI is partly a bet on autonomous weapons; America is likely to follow the same path. Given the doubt over whether America will always be willing to come to Europe’s defence, some see spending on AI as a matter of national security.

Both arguments make sense. But can Europe support AI without wasting money or lapsing into protectionism? The EU has a dismal record in high-tech industrial policy. Witness Quaero, a failed attempt to build a European alternative to Google, or the Human Brain Project, which has spent over €1bn ($1.17bn) with little to show for it. Experts warn against the rise of “AI nationalism”, whereby countries increasingly try to keep their data and their algorithms to themselves.

Two aims should guide EU policy. Instead of focusing its financing on high-profile individual projects, Europe should create the environment for its AI industry to thrive. And instead of keeping foreign providers out, it should use its clout to improve their behaviour.

Creating the right environment means, above all, working to overcome the fragmentation that bedevils Europe. Big and homogeneous home markets give America and China the huge advantage of scale. According to one estimate, China will hold 30% of the world’s data by 2030; America is likely to have just as much. Europe has data, too, but needs to pool its resources. To its credit, the European Commission is arguing for a common market for data. But much more needs to be done, such as laying down rules about how data held by companies and governments can be shared.

National faultlines also cut deep in research and development. Germany has downgraded plans to co-operate with France in AI research, for example. In addition, Europe’s existing research bureaucracy is adept at sucking up funds, to the detriment of startups and outsiders. Better to encourage grass-roots initiatives such as CLAIRE and ELLIS, which seek to create Europe-wide networks of research labs. France has launched JEDI, short for Joint European Disruptive Initiative, an attempt to mimic America’s Defence Advanced Research Projects Agency (DARPA), which allocates money using open competitions and does not hesitate to cull programmes that fail to show promise. More opportunities of this sort, plus an accommodating immigration regime, would attract and retain AI researchers, who often decamp to America (and sometimes even to China).

European policymakers can also make better use of the one area where they are world-class—setting standards. Europe’s market of 500m relatively wealthy consumers is still enticing enough that firms will generally comply with EU rules rather than pull out. An example is a strict new privacy law, the General Data Protection Regulation; the principles of the GDPR are now being used as a benchmark for good data practice in markets well beyond Europe. By imposing common rules, such standards can help the EU’s indigenous AI industry flourish. But they could also have a more subtle effect—of making AI from outside the EU more benign.

By the rule book

America and China both represent flawed models of data collection and governance. China sees AI as a powerful tool to monitor, manage and control its citizens. America’s tech titans scoop up users’ data with insufficient regard for their privacy. The GDPR is just the start. Robust standards are needed to ensure that AI services are transparent and fair and that they do not discriminate against particular groups. Europe has a chance to shape the development of AI so that this vital technology takes more goals into account than simply maximising advertising income and minimising dissent. Even if it comes up with policies that help its native AI industry thrive, Europe may never match America and China. But it can nonetheless help guide AI onto a path that benefits its own citizens, and those in the rest of the world.
