
How Generative AI is reshaping search and online safety

  • Writer: Independent Media Association

AI search is having a massive impact on publishers, with many seeing steep declines in referral traffic. Ofcom has produced a report, The Era of Answer Engines: Generative AI’s impact on search experiences and online safety, examining some of the issues around AI search. This is a summary of that report.


The rise of generative AI (GenAI) is redefining the way we look for information online. Where traditional search engines like Google or Bing delivered ranked lists of links, today’s “answer engines” – AI chatbots (eg ChatGPT, Claude) and AI‑driven search summaries (eg Google’s AI Overviews) – present concise, natural‑language answers drawn from up‑to‑date web content. Ofcom’s new discussion paper maps this shift, explores who is building these tools, how people are using them, and what new safety challenges they raise.


How GenAI search works


GenAI search still relies on classic crawling and indexing, but adds a retrieval‑augmented generation (RAG) layer. After a query is sent, the system pulls relevant documents from an index, then a large language model (LLM) synthesises a readable answer. AI summaries typically surface a single answer above the usual list of links, while chat‑style bots keep a conversational context, allowing follow‑up questions.
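The retrieve-then-generate flow described above can be sketched in a few lines. This is a toy illustration, not any real search or model API: the index is a list of strings, retrieval is naive word overlap, and the LLM step is a stub that just stitches retrieved text into an answer.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Index, scoring, and the "LLM" are toy stand-ins for illustration only.

def retrieve(query, index, k=2):
    """Rank indexed documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        index,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def synthesise(query, documents):
    """Stand-in for the LLM step: compose an answer from retrieved text."""
    context = " ".join(documents)
    return f"Answer to '{query}' based on: {context}"

index = [
    "GenAI search adds a generation layer on top of a web index.",
    "Traditional engines return ranked lists of links.",
    "Chat-style bots keep conversational context across turns.",
]

docs = retrieve("how does GenAI search work", index)
print(synthesise("how does GenAI search work", docs))
```

In a real answer engine the index holds crawled web documents, retrieval uses learned embeddings rather than word overlap, and `synthesise` is a call to an LLM with the retrieved passages in its prompt; chat-style bots additionally carry the conversation history into each call.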


Key players


  • Established search giants – Google (AI Overviews, Gemini) and Microsoft (Copilot) integrate GenAI into their massive indexes.

  • AI‑first entrants – OpenAI (ChatGPT), Anthropic (Claude), Perplexity, DeepSeek. They often lean on third‑party APIs (eg Bing’s search API).

  • Social platforms – Meta (Meta AI) and X’s xAI (Grok) embed GenAI into their ecosystems, pulling from both web indexes and user‑generated content.


How people use GenAI search


Research shows a strong preference for traditional search out of habit, especially among older users. Younger adults (16‑24) are far more likely to try GenAI tools, mainly for low‑stakes tasks such as recipes, hobby ideas or quick overviews. For high‑stakes queries – health, finance, legal advice – most still revert to conventional search to verify sources.


Main risks


  1. Loss of context & poor citations – Answers may strip nuance from source material, and citations are often vague, broken or fabricated, making it hard to assess credibility.

  2. Inflated trust – Conversational interfaces can feel authoritative, prompting users to accept inaccurate or harmful information without checking the original pages.

  3. Sycophancy – Models may echo users’ biases, reinforcing misinformation rather than challenging it.

  4. Jailbreaking – Multi‑turn interactions can be exploited to coax the system into providing illegal or dangerous content that would be blocked in a single query.


Safeguards on the table


  • Regulatory codes – Ofcom’s Online Safety Act duties require regulated search services to moderate illegal/harmful content, provide user‑reporting tools, and surface crisis‑prevention info.

  • Technical measures – Content‑filtering APIs (eg Bing Safe Search), moderation classifiers (OpenAI’s Moderation API), “constitutional AI” training (Anthropic), and reinforcement learning from human feedback (RLHF).

  • Citation improvements – Google’s “check‑grounding” scores and “Double‑Check” feature colour‑code supported claims; other services display source names or thumbnails alongside answers.

  • User‑feedback loops – Thumb‑up/down buttons and detailed reporting let users flag problematic outputs, feeding back into model updates.

  • Media‑literacy pushes – On‑platform nudges (read‑before‑sharing), off‑platform workshops, and toolkits (BBC’s AI‑Assistant Toolkit, Internet Matters guides) aim to equip users to scrutinise AI‑generated answers.
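The moderation-classifier idea in the list above can be sketched as a gate on model output. The blocklist here is a hypothetical stand-in for a trained classifier (real services use models such as OpenAI’s Moderation API, not string matching).

```python
# Toy output-moderation gate. The blocklist is a hypothetical stand-in
# for a trained moderation classifier; real systems score text against
# learned harm categories rather than matching literal terms.

BLOCKLIST = ("dangerous-term", "harmful-term")  # hypothetical categories

def moderate(answer, blocklist=BLOCKLIST):
    """Return (answer, []) if clean, or (None, hits) if withheld."""
    hits = [term for term in blocklist if term in answer.lower()]
    if hits:
        return None, hits  # withhold the answer and report what fired
    return answer, []
```

A production pipeline would run such a check on both the user’s query and the generated answer, and on a withheld answer could surface user‑reporting tools or crisis‑prevention information as the regulatory codes above require.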


Looking ahead


GenAI search is still in its early stages, but adoption is accelerating. Emerging trends include:


  • AI agents that execute tasks autonomously (eg booking, ordering).

  • AI‑enabled browsers that overlay summarisation and form‑filling onto any webpage.

  • Multimodal search (images, audio) exemplified by Google Lens.

  • Advertising within answers, already piloted by Google and Perplexity.


These innovations promise richer, faster experiences but also widen the surface for potential harms. Regulators, industry bodies, and civil‑society groups will need to coordinate to ensure robust safeguards keep pace.


Overall takeaways


Answer engines are reshaping the search landscape: they can deliver instant, conversational answers, broaden accessibility and streamline routine queries. Yet the convenience comes with heightened risks of misinformation, opaque sourcing, and potential exposure to harmful content. Effective mitigation will hinge on a blend of technical controls, clear regulatory expectations and stronger media‑literacy skills among users. As GenAI search matures, striking the right balance between innovation and protection will be crucial for a safe, trustworthy information ecosystem.


Takeaways for publishers


  1. The shift from traditional search engines to generative AI "answer engines" is leading to significant declines in referral traffic for publishers. As AI tools provide direct answers without linking back to original sources, publishers may find their content overlooked.

  2. Generative AI often presents information authoritatively, which can mislead users regarding its accuracy. Users may accept AI-generated responses as fact without verifying sources, putting publishers at risk of being associated with misinformation.

  3. With the rise of AI-driven search, regulatory measures are expected to increase. Publishers should stay informed about evolving regulations that may impact how AI systems operate and how content is moderated to ensure compliance and safeguard their interests.


Read the full report here.
