Apple is under increasing pressure to withdraw its controversial artificial intelligence (AI) feature designed to summarize breaking news alerts. This AI-driven feature, available on the latest iPhones, has come under fire for generating inaccurate and misleading news summaries, sparking a debate about its reliability and potential to spread misinformation.
Inaccurate Summaries Spark Backlash
The AI tool, part of Apple’s broader suite of AI technologies, has in some instances been accused of fabricating claims outright. The BBC first raised concerns about its journalism being misrepresented in December 2024. Despite these complaints, Apple responded only this week, acknowledging the problem and promising updates to make clearer that the summaries are AI-generated.
Alan Rusbridger, former editor of The Guardian and a member of Meta’s Oversight Board, has called for Apple to withdraw the feature entirely. Speaking on BBC Radio 4’s Today programme, Rusbridger described the technology as “out of control” and argued that it poses a significant risk of spreading misinformation.

“Trust in news is already fragile,” he said. “Introducing unreliable AI-generated summaries only exacerbates the issue, undermining the credibility of legitimate journalism.”
Journalistic Community Demands Action
The National Union of Journalists (NUJ) has also criticized Apple, urging the company to act swiftly to prevent further harm to public trust. Laura Davison, NUJ’s general secretary, emphasized the critical need for accurate reporting in today’s information landscape.
“At a time when access to accurate information is vital, the public should not have to question the reliability of news delivered to their devices,” she said. The journalism advocacy group Reporters Without Borders (RSF) echoed these sentiments, calling Apple’s current response inadequate and demanding the feature’s removal.
A Series of Errors Undermines Credibility
The AI’s errors have been numerous and significant. Last month, the BBC flagged a fabricated alert claiming that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had committed suicide. More recently, the AI declared Luke Littler the winner of the PDC World Darts Championship hours before the final had been played and falsely reported that Spanish tennis star Rafael Nadal had come out as gay. These mistakes have raised questions about the viability of generative AI for delivering reliable news content.
The BBC has been vocal about its concerns, stating that the errors contradict the original content and damage the organization’s credibility. “These AI-generated summaries do not reflect – and in some cases completely contradict – the original BBC content. Apple must urgently address these issues,” a BBC spokesperson said.
Other news organizations have also been affected. In November, a ProPublica journalist highlighted false summaries of New York Times alerts, including one claiming Israeli Prime Minister Benjamin Netanyahu had been arrested. The New York Times has declined to comment but is reportedly monitoring the situation.
Calls for Accountability
Vincent Berthier, head of RSF’s technology and journalism desk, criticized Apple’s decision to clarify that summaries are AI-generated rather than fixing the underlying issues. “This approach shifts the responsibility to users, who must navigate an already confusing information landscape,” he said.
Apple, however, maintains that the feature is still in beta and subject to ongoing improvements. In a statement, the company said, “A software update in the coming weeks will further clarify when the text being displayed is an AI-generated summary. Users can report concerns if they encounter unexpected summaries.”
Broader Implications for AI in News
Apple is not the only tech giant grappling with the challenges of generative AI. Google faced criticism last year for erratic responses generated by its AI Overviews feature, which summarizes search results. While Google claimed these instances were isolated, the errors highlighted the potential pitfalls of deploying generative AI tools prematurely.
Apple’s AI notification summaries were rolled out in December 2024 on select devices, including the iPhone 16 models and iPhone 15 Pro series. The feature aims to consolidate multiple notifications into a single summary, allowing users to “scan for key details” quickly. However, critics argue that the current implementation prioritizes convenience over accuracy, with potentially damaging consequences.
A Crossroads for AI and Journalism
The controversy surrounding Apple’s AI highlights the broader challenges of integrating AI into journalism. While AI has the potential to revolutionize content delivery, its misuse or premature deployment can undermine public trust and spread misinformation.
Experts and industry leaders are calling for greater accountability and rigorous testing before releasing such tools to the public. “This is a wake-up call for the tech industry,” said Alan Rusbridger. “If companies like Apple want to play a role in the dissemination of news, they have a responsibility to ensure their tools are accurate and reliable.”
The Path Forward
As pressure mounts, Apple faces a critical decision: either withdraw the feature or implement significant changes to ensure accuracy and transparency. With public trust in news already at a low point, the stakes are high for both the company and the broader AI industry. Apple’s response in the coming weeks will likely set a precedent for how tech companies handle the delicate balance between innovation and responsibility in the realm of information dissemination.