When Tim Cook sat at Donald Trump’s inauguration earlier this year, it seemed like just another symbolic moment of corporate diplomacy. Two months later, hundreds of contractors in Barcelona quietly working on Apple’s artificial intelligence model noticed that their training manuals looked different. Pages that once described intolerance and systemic racism in blunt terms had been reworded. Diversity and equity were no longer framed as essential values but as controversial terrain.
Apple insists nothing has changed and that its Responsible AI principles remain untouched. Yet the timing of the revisions, which came right after Trump’s return to the White House, raises questions about how much political weather seeps into the way big tech shapes its models. Annotators evaluating Apple’s unreleased chatbot now have to treat elections, vaccines, and DEI initiatives as more sensitive subjects, alongside well-known flashpoints such as gun control and disputed territories. Even Trump’s name, which came up only a few times in last year’s guidance, appears far more often in the latest version.
The shift tells us less about the specific answers Apple’s AI will give and more about the invisible scaffolding behind them. Contractors are asked to rate responses not just for accuracy but for how well they protect Apple’s brand and avoid inflaming politics. That includes carefully policing any mention of Tim Cook, Craig Federighi, Eddy Cue, and even Steve Jobs. Apple itself has become a “sensitive” topic, something to be handled with caution in its own training playbook.
This work happens in near anonymity. The contractors are not told they are working for Apple, even though the documents mention the company dozens of times. They spend their days evaluating around thirty prompts each, flagging whether the chatbot slips into discrimination, reveals copyrighted content, or stumbles into controversial ground. Phones are banned from the office. Talking about the project outside work is forbidden. It is a strange mix of secrecy and repetition, one that some employees compare to living inside an episode of Apple’s own show Severance.
At the same time, the guidelines reveal Apple trying to future-proof its system against wider anxieties. New sections warn about “longitudinal risks,” from users forming emotional attachments to AI, to disinformation spreading cheaply at scale, to jobs disappearing under automation. None of these risks comes with a clear solution in the documents, only an acknowledgement that they exist.
Apple says its principles have not shifted and that its models are updated regularly with structured evaluation topics to make them safer:

“We train our own models and work with third-party vendors to evaluate them using structured topics, including sensitive ones, to ensure they handle a wide range of user queries responsibly. These topics are shaped by our principles and updated regularly to keep improving our models.”
Still, the edits reflect the tension of building a consumer chatbot for 2026 in a world where politics, censorship regimes, and corporate image are as much a part of the training data as facts and figures. What the AI eventually says will be less important than what it is already being told to stay quiet about.
(via Politico)
