You searched Google. The AI hallucinated an answer. Who’s legally responsible?

14 May 2024, Mountain View, USA: Google CEO Sundar Pichai speaks at Google I/O, a developer conference where everything revolved around artificial intelligence (AI). Photo: Christoph Dernbach/picture alliance via Getty Images

Google’s shift toward using AI to generate a written answer to user searches instead of providing a list of links ranked algorithmically by relevance was inevitable. Before AI Overview — introduced last week for US users — Google had Knowledge Panels, those information boxes that appear toward the top of some searches, incentivizing users to get their answers directly from Google, rather than clicking through to a result.

AI Overview summarizes search results for a portion of queries, right at the top of the page. The results draw from multiple sources, which are cited in a drop-down gallery under the summary. As with any AI-generated response, these answers vary in quality and reliability.

Overview has told users to change their blinker fluid — which does not exist — seemingly because it picked up on joke responses from forums where users seek car advice from their peers. In a test I ran on Wednesday, Google correctly generated instructions for doing a pushup, drawing heavily from a New York Times article. Less than a week after launching this feature, Google announced that it is testing ways to incorporate ads into its generative responses.

I’ve been writing about Bad Stuff online for years now, so it’s not a huge surprise that, upon gaining access to AI Overview, I started googling a bunch of things that might cause the generative search tool to pull from unreliable sources. The results were mixed, and they seemed to rely a lot on the exact phrasing of my question.

When I typed in queries asking for information on two different people who are widely associated with dubious natural “cures” for cancer, I received one generated answer that simply repeated that person’s claims uncritically. For the other name, Google declined to generate a response at all.

Results on basic first aid queries — such as how to clean a wound — pulled from reliable sources to generate an answer when I tried it. Queries about “detoxes” repeated unproven claims and were missing important context.

But rather than try to get a handle on how reliable these results are overall, there’s another question to ask here: If Google’s AI Overview gets something wrong, who is responsible if that answer ends up hurting someone?

Who’s responsible for AI?

The answer to that question may not be simple, according to Samir Jain, the vice president of policy at the Center for Democracy and Technology. Section 230 of the 1996 Communications Decency Act largely protects companies like Google from liability for third-party content posted on their platforms, because the law does not treat them as publishers of the information they host.

It’s “less clear” how the law would apply to AI-generated search answers, Jain said. AI Overview makes Section 230 protections a little messier because it’s harder to tell whether the content was created by Google or simply surfaced by it.

“If you have an AI overview that contains a hallucination, it’s a little difficult to see how that hallucination wouldn’t have at least in part been created or developed by Google,” Jain said. But a hallucination is different from surfacing bad information. If Google’s AI Overview quotes a third party that is itself providing inaccurate information, the protections would still likely apply.

A bunch of other scenarios are stuck in a gray area for now: Google’s generated answers are drawing from third parties but not necessarily directly quoting them. So is that original content, or is it more like the snippets that appear under search results?

While generative search tools like AI Overview represent new territory in terms of Section 230 protections, the risks are not hypothetical. Apps that say they can use AI to identify mushrooms for would-be foragers are already available in app stores, despite evidence that these tools aren’t super accurate. Even in Google’s demo of their new video search, a factual error was generated, as The Verge noticed.

Eating the source code of the internet

There’s another question here beyond when Section 230 may or may not apply to AI-generated answers: whether AI Overview creates any incentive for people to produce reliable information in the first place. AI Overview relies on the web continuing to contain plenty of researched, factual information. But the tool also seems to make it harder for users to click through to those sources.

“Our main concern is about the potential impact on human motivation,” Jacob Rogers, associate general counsel at the Wikimedia Foundation, said in an email. “Generative AI tools must include recognition and reciprocity for the human contributions that they are built on, through clear and consistent attribution.”

The Wikimedia Foundation hasn’t seen a major drop in traffic to Wikipedia or other Wikimedia projects as a direct result of AI chatbots and tools to date, but Rogers said that the foundation was monitoring the situation. Google has, in the past, relied on Wikipedia to populate its Knowledge Panels, and draws from its work to provide fact-check pop-up boxes on, for instance, YouTube videos on controversial topics.

There’s a central tension here that’s worth watching as this technology becomes more prevalent. Google has an incentive to present its AI-generated answers as authoritative. Otherwise, why would you use them?

“On the other hand,” Jain said, “particularly in sensitive areas like health, it will probably want to have some kind of disclaimer or at least some cautionary language.”

Google’s AI Overview contains a small note at the bottom of each result clarifying that it is an experimental tool. And, based on my unscientific poking around, I’d guess that Google has opted for now to avoid generating answers on some controversial topics.

The Overview will, with some tweaking, generate a response to questions about its own potential liability. After a couple of dead ends, I asked Google, “Is Google a publisher?”

“Google is not a publisher because it doesn’t create content,” the reply begins. I copied that sentence and pasted it into another search, surrounded by quotes. The search engine found zero results for the exact phrase.
