Apple is facing a new privacy investigation in France over how it handles voice recordings made through Siri, its built-in digital assistant available on iPhone, iPad, Mac, and other devices. The probe focuses on whether Apple’s collection and retention of voice data, including recordings that can be stored for up to two years, complies with French privacy laws and European data protection standards.
According to the Paris prosecutor’s office, the case has been referred to France’s Office for Combating Cybercrime following a complaint from the Ligue des droits de l’Homme, a French human rights organization. The group argues that Apple’s data practices may expose sensitive voice data to misuse, particularly since audio snippets are sometimes reviewed by subcontractors to improve Siri’s accuracy.
The complaint was reportedly supported by Thomas Le Bonniec, a former Apple subcontractor in Ireland who previously raised concerns about hearing confidential user conversations during Siri grading sessions. He has said that evaluators had access to recordings of personal moments, including medical consultations and private discussions, raising ethical and privacy concerns across Europe.
Apple maintains that human review of Siri audio is strictly opt-in and consistent with its privacy commitments. The company pointed to its January 2025 blog post, which clarified that Siri recordings are not retained unless users explicitly choose to share them to help improve Siri and dictation. Even for users who opt in, Apple says the data is anonymized and used solely to improve performance, not for profiling or advertising.
This is not the first time Apple’s Siri data collection practices have come under scrutiny. In late 2024, Apple agreed to a $95 million settlement over Siri privacy claims after users alleged that voice recordings were improperly stored and analyzed without consent. Years earlier, a U.S. judge had allowed that class action to move forward, an early sign of the growing legal attention on how the company manages user data collected through its voice assistant.
France’s decision to escalate the complaint highlights the country’s increasingly strict approach toward U.S. tech companies operating under the European Union’s privacy and competition frameworks. French regulators have previously investigated Apple for antitrust concerns and introduced a national digital services tax that prompted tension with the United States government.
While the new probe does not itself establish wrongdoing, it reflects a broader push among European regulators for transparency in AI-driven systems such as voice assistants. France’s cybercrime unit will now assess whether Apple’s handling of Siri recordings, including the role of subcontracted “graders,” meets national and EU-level privacy obligations under the General Data Protection Regulation (GDPR).
The case echoes an earlier controversy from 2019, when Apple temporarily suspended its Siri grading program after public backlash. The company later redesigned the system to require explicit user consent before any voice data is reviewed by humans. Apple has emphasized that privacy remains “foundational” to all its products, including Siri, and that users can view and delete Siri-related data at any time through iPhone settings or the Apple ID privacy portal.
For Apple, the investigation comes as the company continues to position privacy as a key differentiator for its ecosystem. Features such as on-device processing of Siri requests and differential privacy techniques have long underpinned Apple’s argument that its AI-driven services are safer and less invasive than rival offerings such as Google Assistant and Amazon Alexa. But as voice and AI technologies expand under new European regulatory scrutiny, Apple is likely to face further questions about how transparency and user control are enforced in practice.
The outcome of the French probe may set a precedent for how voice data and AI assistants are regulated across the EU. It also underscores a growing global debate over where the line should be drawn between improving AI accuracy and respecting personal privacy, especially when sensitive conversations are involved.
via Bloomberg