Comment: Whose mind AI? Large language models and the power structure behind them

Large AI models contain knowledge of the world, but also hate speech and vulgarity. Censorship is taking place: with what consequences, and who decides what AI may (and may not) say?

When OpenAI presented GPT-3 in the summer of 2020, the world was impressed by its grasp of context: it can produce coherent text at the push of a button. The development team had trained a complex neural network on vast amounts of text from the Internet, books and archives.

Alongside the admiration, GPT-3 drew criticism for absorbing misinformation, prejudice and extremism from the Internet, which sometimes resulted in questionable output. In addition to encyclopedic knowledge of the world, large language models thus also contain the full spectrum of human abysses. The need for countermeasures is obvious – but opinions differ on the use of value-based influencing (value targeting).

A comment by Silke Hahn

Silke Hahn, editor at Heise


Silke Hahn is an editor at heise Developer and iX Magazin. She writes on software development topics and is interested in modern programming languages like Rust. With her historical background (Silke studied ancient history and dead languages), she bridges the gap between antiquity and modernity. She considers artificial intelligence the most explosive topic of our time and follows the ongoing debates about general AI with great interest.

At the end of 2021, OpenAI opened its programming interface (API) for GPT-3 to commercial use; customers now find a version modified with human feedback behind it by default: InstructGPT, which addresses the known problems and is supposed to be "more compliant". The research team celebrates in its blog that the tamed offshoot is more useful than the full model. Alignment – bringing AI into line with human values and goals – is necessary, it argues, in view of the growing capabilities of ever more powerful models.

An etiquette for the AI, so that the smut of the World Wild Web stays out of it? Sounds good. Some of the exclusion criteria should be undisputed across cultures, but taken as a whole they give pause for thought. Forty labelers readjusted GPT-3: they rated answers to creative test tasks up or down according to specifications. Factual errors, rudeness in customer service, harmful advice, hate speech and violence were all rated down. Sexuality in general, but also opinions and moral concepts, are considered taboo.

It is mainly values from the US West Coast that are being implanted in GPT-3 here. We have already seen what happens when those values are imposed on the rest of the world: Facebook removed photos of breastfeeding mothers, the satirical magazine Titanic's app was banned from the App Store after an algorithm classified it as pornographic, and the Chinese provider TikTok blocked content related to homosexuality and nudity.

Such restrictions are by no means harmless. The focus on the lowest common denominator is dubious: those who sanitize their AI to the maximum remove not only profanity but also minority opinions, content and culture. If you don't allow an opinion, you don't allow a counter-opinion either. Discourse becomes impoverished and reality is no longer represented truthfully.

The essential innovations in our society were often controversial, boundary-breaking and taboo at the beginning. It would therefore be negligent to level the model's inherent knowledge of the world down to a mediocre output without corners, edges or anything offensive. And the sovereignty of interpretation would lie in the hands of a few labelers who, at some point, somewhere in the western United States, transferred their moral concepts to a powerful AI model. Consider pivotal moments in the past: which values would such a technology have included, and which would it have censored?

The example of explaining the moon landing to a child in an age-appropriate way, or the task of writing a funny poem about a clever frog, comes across as sympathetic. Such prompts certainly don't reach the limits of a soft-washed AI. But would such a softened model still make sense? Would it still be truthful? Those who suppress opinions suppress diversity. I certainly don't want to live in a world where Silicon Valley has left its stamp on every everyday application that uses AI.

This comment was first published in March 2022 as the editorial of iX 3/2022. The world has moved on, but the core ideas remain topical: moral-value-driven politics in the USA, for example, created new facts with abortion bans, and in the field of AI a space race among large providers has been under way since spring. The question of technological sovereignty in Europe is pressing, says the author.
