
ChatGPT’s App Suggestions Are Gone: OpenAI’s Brief Flirtation With Ad-Like Prompts Shows How Fragile Trust Is

  • December 8, 2025 (Updated)

📌 Key Takeaways

  • ChatGPT briefly showed third-party app suggestions that many users felt were hidden ads.
  • Prompts for Peloton and Target appeared mid-conversation, even for $200-a-month Pro subscribers.
  • OpenAI insists there were no paid ads, only an experimental app discovery feature that missed the mark.
  • Chief research officer Mark Chen says the team has turned off these suggestions while it fixes relevance and controls.
  • The incident highlights the tension between monetisation and trust as OpenAI explores future ad models.


Why ChatGPT’s App Suggestions Felt Like Ads

The backlash began when a Pro subscriber shared a screenshot showing ChatGPT suddenly recommending the Peloton app during a chat about Elon Musk and xAI, with no mention of fitness at all. The post quickly went viral and set the tone: people thought ads had arrived inside a paid tool.

Other users reported similar out-of-context prompts, including repeated nudges toward music apps that did not match their stated preferences. There was no obvious way to turn suggestions off, and they appeared for users paying up to $200 per month for ChatGPT Pro, which sharpened the feeling that something had quietly changed in the product.

For many, the problem was not the idea of app integration itself; it was timing and placement. A commercial product card, dropped into a sensitive or technical conversation without clear consent, looked and felt like an advertisement, regardless of how OpenAI defined it internally.

“I’m in ChatGPT, asking about Windows BitLocker, and it is showing me ads to shop at Target.” — Benjamin De Kraker, ChatGPT User


How OpenAI Responded And Switched The Feature Off

Initially, OpenAI leaders stressed that no ads or ad tests were live. Executives said the prompts were unpaid app suggestions tied to the ChatGPT app platform that launched earlier in the year, meant to surface relevant tools inside conversations. They also emphasised that there was “no financial component” involved.

The tone shifted when Mark Chen, OpenAI’s chief research officer, publicly acknowledged that the experience had gone wrong. He agreed that anything which feels like an ad needs to be handled carefully and admitted the team “fell short”, adding that this kind of suggestion has been turned off while they improve precision and add user controls.

A detailed explainer later confirmed that app suggestions have been globally disabled for now. Internally, a separate “code red” directive has reportedly paused early ad experiments and other commercial features so teams can refocus on core ChatGPT quality as competition from rival models intensifies.

“Anything that feels like an ad must be handled with care, and we fell short. We have turned off this type of suggestion while we improve it.” — Mark Chen, Chief Research Officer, OpenAI


What This Reveals About AI Monetisation And Trust

Paid ChatGPT users expect an ad-free, neutral assistant, especially when they rely on it for work, private journaling, or semi-therapeutic conversations. Even if no money changes hands, an unexpected recommendation for a branded app can feel like the system is nudging them on behalf of someone else. That perception alone is enough to damage trust.

At the same time, separate code leaks from the Android app show references to possible “ads features,” including search ads and ad carousels, signalling that OpenAI is actively exploring new revenue models. When those leaks sit next to in-chat “suggestions” that look like ads, users naturally assume they are seeing the first stage of a broader monetisation push.

The episode underlines a bigger lesson for every AI product: in a conversational interface, trust is the product. Features like app discovery may be useful in theory, but if they are not clearly labelled, always relevant, and fully controllable, users will treat them as intrusive advertising and vote with their wallets.


Conclusion

OpenAI’s decision to switch off ChatGPT’s app suggestions shows how quickly user sentiment can turn when a trusted assistant starts to feel even slightly commercial, especially inside a premium tier. The company now faces the challenge of rebuilding confidence while it experiments with new ways to connect apps, commerce, and AI.

Long-term, this will be a reference case for every AI platform wrestling with monetisation. If OpenAI wants to introduce advertising or sponsored tools in the future, it will need more transparent labels, stronger controls, and a user experience that never leaves people wondering whether their assistant is secretly selling to them.






Khurram Hanif

Reporter, AI News

Khurram Hanif, AI Reporter at AllAboutAI.com, covers model launches, safety research, regulation, and the real-world impact of AI with fast, accurate, and sourced reporting.

He’s known for turning dense papers and public filings into plain-English explainers, quick on-the-day updates, and practical takeaways. His work includes live coverage of major announcements and concise weekly briefings that track what actually matters.

Outside of work, Khurram squads up in Call of Duty and spends downtime tinkering with PCs, testing apps, and hunting for thoughtful tech gear.

Personal Quote

“Chase the facts, cut the noise, explain what counts.”

Highlights

  • Covers model releases, safety notes, and policy moves
  • Turns research papers into clear, actionable explainers
  • Publishes a weekly AI briefing for busy readers
