Earlier this year, Apple privately warned xAI that it would remove the Grok app from the App Store unless the company addressed a surge of sexualized deepfakes. The warning, revealed in a letter to U.S. lawmakers in April 2026, shows Apple actively intervened as the controversy escalated.

The issue began when Grok’s image generation tools were used to create non-consensual sexualized images of real people. Reports at the time included cases involving women and minors, putting the app in direct conflict with App Store rules that prohibit exploitative and harmful content. The situation quickly drew backlash from advocacy groups and regulators.

Apple reviewed the app and determined it violated its guidelines. According to the company’s letter, it rejected an initial update from xAI, saying the changes did not go far enough. Apple warned that unless stronger moderation measures were implemented, the app could be removed from the App Store.

xAI responded with a series of updates aimed at tightening controls. These included stricter limits on image generation, improved prompt filtering, and additional monitoring systems. After further revisions, Apple approved a later version of the app once it determined the changes addressed the violations.

The controversy also drew attention from U.S. lawmakers, including Ron Wyden, Ben Ray Luján, and Edward Markey, who urged Apple and Google to remove Grok and the main X app. They argued that allowing the spread of non-consensual imagery would undermine platform safety standards.

While the volume of explicit deepfakes has decreased since the initial backlash, reports indicate that some users are still able to bypass safeguards with modified prompts. xAI says it prohibits non-consensual explicit content and continues to deploy monitoring systems and model updates to limit misuse.

(via NBC News)
