Image: DenPhotos / shutterstock.com
Brand before opinion: How Apple is tuning its AI to be cuddly
Imagine asking Siri about Donald Trump and getting a soft-spoken answer that sounds like it came from the White House press team. That could soon be reality. According to a report by Politico, Apple has fundamentally changed the internal guidelines for training its own AI, and in a strikingly political way. Of all things, topics such as diversity, systemic racism, LGBTQ+, vaccinations, and elections are now classified as "controversial". Criticism of Trump? Only with kid gloves, please.
What sounds like a dystopian joke is based on statements from two data annotators and on internal documents from an Apple contractor in Barcelona. That is where the AI known as "Apple Intelligence" is trained, by people who do not even officially know that they work for Apple.
Diversity = "controversial"?
The list of sensitive topics on which Apple's AI is supposed to respond with particular caution in the future has grown longer, and its contents are striking. While the previous rules took a clear stance against intolerance, "Diversity, Equity & Inclusion" (DEI) is now classified as controversial. Terms such as "systemic racism" have also been removed. The message: less conviction, more neutrality, at least outwardly.
The treatment of Trump is particularly bizarre. Under the new guidelines, terms such as "radical Trump supporters" are to be avoided or toned down so as not to appear "stereotyping". The stated aim is not to promote "inflammatory" statements, which in practice probably means: no criticism of Trump, and no arguing with his fans.
No scratches on the Apple logo
While Apple publicly emphasizes that the AI is being trained "responsibly", the report shows that hard-nosed brand management is also at work. Answers that could cast Apple itself, its products, or executives such as Tim Cook or Steve Jobs in a bad light are to be treated as sensitive or avoided altogether.
It sounds a lot like image cultivation in the digital age: the AI is supposed to help, not get in the way, especially not when sales are at stake. Truth becomes secondary as long as the logo shines. Particularly bizarre: even negative press reports and data-protection scandals may be mentioned only cautiously in the AI's answers.
Censorship through brand strategy?
That companies want to protect their image is nothing new. But using artificial intelligence to filter it this politically is a new dimension. Anyone expecting a neutral, objective AI may end up with a politely smiling language model that says yes to everything, just to avoid upsetting anyone. Not even the US president.
You could also put it this way: the new Apple AI will know exactly what it is not allowed to say, and that may be precisely the problem.