Summary

A Guardian investigation found vulnerabilities in OpenAI’s ChatGPT search tool, including susceptibility to manipulation via hidden text and prompt injections.

Malicious actors can influence ChatGPT to produce biased results or return harmful code, posing risks for users.

Tests revealed that hidden content on fake websites could manipulate ChatGPT to deliver overly positive product reviews, even contradicting the site’s actual data. A cybersecurity expert warned these flaws create a “high risk” for deceptive practices.

Experts caution users to treat AI-generated content critically, comparing these vulnerabilities to “SEO poisoning.”

  • Zexks@lemmy.world
    link
    fedilink
    arrow-up
    8
    ·
    13 days ago

    SẾO poising has been happening for long before LLMs. There learning from and replicating us.