You get an AI summit! And you get an AI summit!



POLITICO’s weekly transatlantic tech newsletter for global technology elites and political influencers.

POLITICO Digital Bridge

By MARK SCOTT


ARE WE ALL BACK AT OUR DESKS? This is Digital Bridge, and I’m Mark Scott, POLITICO’s chief technology correspondent. As everyone and their mother freaks out about artificial intelligence, I bring you exclusive footage of the world’s best-known AI systems plotting our demise.

Grab a pew and let’s get cracking:

Ahead of a series of artificial intelligence governance summits planned this fall, no one is really sure what they are about — or who’s in charge.

The United Kingdom has become ground zero for taxpayer-funded targeting of social media users in the hopes of changing their offline behaviors.

We’re almost a year into new rules to curb disinformation across online platforms. The results are in: No one is doing a good job.

WHAT YOU NEED TO KNOW ABOUT AI INTERNATIONAL NEGOTIATIONS

IF YOU, LIKE ME, ARE FED UP WITH ENDLESS ZOOM MEETINGS, spare a thought for G7 officials who gathered virtually on September 7 to hammer out what exactly their countries’ leaders will announce, as soon as November, when it comes to new Western guardrails for generative AI. ICYMI, Japan, which holds the group’s rotating presidency this year, pitched its so-called Hiroshima Process back in the spring. That included G7 wonks figuring out issues like how to handle international AI governance; problems around AI’s use of intellectual property; and the thorny (and, for me, over-hyped) problem of AI-generated disinformation.

Before we get into the substance, here’s a quick review of what’s going to happen before the end of the year. After today’s G7 online meeting, officials from G7 countries and those from the Organization for Economic Cooperation and Development, the European Commission and other interested parties will hash out a policymaking draft (more on that in a minute). That, in turn, will be shared with companies, academics and civil society groups during an internet governance conference in Kyoto, Japan, on October 9. Then, G7 digital ministers will meet sometime in November, or possibly December, to approve whatever has been decided. Simple? Well, no.

What is currently underway is a massive game of horse-trading over what exactly the G7 should announce. On Thursday, the G7 pledged to create an international (but voluntary) code of conduct around the most advanced uses of AI. That included commitments for companies around the safe development of the technology; robust security measures to stop harmful use cases; and the creation of risk management plans to convince officials that firms wouldn’t do anything that hurts society.

There’s a back story to this. In one camp, you have the United States, Japan and the U.K., which would prefer solely voluntary commitments around security, risks and innovation. The White House pitched this approach in July when it signed up Microsoft, Meta and Google to a series of non-binding pledges. Japan, too, is eager to give companies significant leeway in how they develop AI systems, within reason, including potentially allowing firms to train their large language models on copyrighted material.

In the other camp (disclaimer: these divisions are somewhat arbitrary) lie the Europeans and, to a lesser degree, Canada. The European Union is eager to get its own legislation, known as the AI Act, completed by year’s end. That would ban certain harmful use cases for AI (like law enforcement’s use of facial recognition) and possibly impose tough restrictions on the so-called large language models that underpin generative AI systems. Ottawa is also mulling its own legislation, and it published a short-term code of practice that includes transparency requirements about how these systems are trained and commitments not to use biased data sets.

In that context, the G7 is trying to thread the needle so that countries can pursue their own forms of AI governance, while also creating a patchwork of international cooperation. The aim: allowing these systems to be used, appropriately, wherever they are rolled out. What will that look like? Several officials involved in the talks told me that this voluntary code of practice (akin to what the European Commission pitched to the White House during the most recent EU-U.S. Trade and Technology Council meeting) is the best step forward. However, efforts to focus these commitments solely on the largest AI companies were recently scrapped.

If that wasn’t enough, there’s another global AI summit — this one organized by the Global Partnership on Artificial Intelligence (GPAI) — planned in New Delhi from December 12-14. That group (ironically, set up during Japan’s G20 presidency in 2019) includes a wider set of non-Western countries under India’s current chairmanship of the G20. It’s more of a talking shop compared with other international groups, although its work (like this report on foundation AI models from July) is worth checking out. The fact that China isn’t involved also makes GPAI almost unique among such groups: it brings together 29 countries, from Argentina to South Korea, without Beijing at the table.

Last, and certainly least, is the United Kingdom. London is trying to gatecrash the global AI parade with its own meeting — slated for November 1-2 and focused on “AI safety” — that has taken almost everyone by surprise. Prime Minister Rishi Sunak is eager to promote the U.K. as a global leader in AI, but he has shifted the event’s priority to so-called frontier AI, marketing jargon for the most advanced AI systems. So far, no one — including those within the British government — has a clue what will come from this event. The U.K. is also eager to invite China, which hasn’t gone down well with some G7 countries.

WHEN GOVERNMENT MESSAGING MEETS SOCIAL MEDIA MICROTARGETING

IF YOU WERE A YOUNG, BLACK BRITISH CITIZEN living or working in inner Manchester, a city in northern England, on March 23, you likely got an unnerving Facebook ad pushed into your news feed. “When it comes to security, we are at the cutting edge. Roles for the digital-savvy, we’ve got it,” ran the campaign — paid for by the British government — aimed at recruiting people from the country’s ethnic minorities into its national security services. The Facebook ad was targeted at specific postcodes with large Black populations and included photos of ethnic-minority government officials.

The ad wasn’t a one-off. From Scottish police forces buying microtargeted Facebook ads to stop child sexual abuse, to the U.K.’s Home Office purchasing similar tailored digital ads aimed at refugees in France and Belgium to curb illegal immigration, Britain has become an outlier in using aggressively directed social media advertising to change people’s offline behavior. That’s the conclusion of an (overly long) study by British academics into how the country’s government agencies have turned to granular digital advertising campaigns to “nudge” people into making different analog decisions.

“It’s a uniquely British thing,” Ben Collier, an academic at the University of Edinburgh and co-author of the report, told me. “In policing, this is absolutely a British export. It’s by far most used by (police) forces with a significant counter-terror history. We’ve looked all over the world, and we only really found this being done in this way — where it’s a clear nudge behavior change — in the U.K.”

So what does that look like? And why, if you’re outside of Britain, should you care? Across the country, the academics discovered local police forces, national counter-terrorism units and even the country’s central government were running incredibly targeted Facebook ads — often based on ethnic-minority traits like “Afro-textured hair” and “Bangladesh cricket.” That, in itself, isn’t unique. Such targeting has existed in political campaigns since the early days of Barack Obama’s presidential run. But what is unique to the U.K. is that these taxpayer-funded ads were aimed at shifting people’s offline behaviors, including nudging people away from domestic abuse or pushing potential radicals away from the Islamic State.

Creepy? Well, yeah. But what struck me while reading the report’s findings, based on an analysis of more than 12,000 government-purchased ads between January and July 2023, was how similar these targeted campaigns were to what Western national security agencies do in their overseas influence operations. Part of digital spooks’ arsenal is to pepper adversaries with overt messaging, again using social media microtargeting to pinpoint specific audiences.

Take these Google ads bought by Zinc Network, a London-based consultancy that has done a lot of anti-disinformation work for Western governments. In a series of Russian-language paid-for spots aimed at the Baltic countries, the group peppered local Russian speakers with anti-Kremlin narratives and pro-Western talking points. The U.S. Federal Bureau of Investigation also got in on the act, buying Chinese- and Russian-language ads targeted at the zip codes of Moscow’s and Beijing’s embassies and consulates across the U.S. The message: We’re here if you ever want to chat to us. I rank that as top-notch trolling. (H/t to Collier for finding these ads.)

Yet what is different about the British examples is that these government agencies and law enforcement groups almost exclusively targeted domestic, not international, audiences. It’s one thing for a British consultancy with ties to the U.S. and U.K. governments to bombard…
