The Risks of ChatGPT Search: Misattribution and Misinformation in Publisher Content
- 2024/11/30
- Duration: 9 minutes
- Podcast
Summary
Synopsis and Commentary
This summary explores OpenAI's ChatGPT Search and its effect on news publishers, based on analysis and testing by the Tow Center. Despite OpenAI's claims of collaborating with the news industry, its search tool often misrepresents and misattributes content. For example, ChatGPT frequently cites sources incorrectly, attributing quotes to the wrong publication or providing incorrect dates and URLs.
The Tow Center's testing revealed that ChatGPT rarely admits when it can't locate a source. Instead, it often fabricates information, making it hard for users to judge the validity of the information provided.
OpenAI allows publishers to block its crawlers' access to their content via a "robots.txt" file. However, the Tow Center found that ChatGPT still referenced blocked content, sometimes citing plagiarised copies found on other websites. This raises concerns about OpenAI's commitment to accuracy and attribution.
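The blocking mechanism mentioned above uses the standard Robots Exclusion Protocol. As a minimal sketch, a publisher opting out might add rules like the following to the robots.txt file at its site root (GPTBot and OAI-SearchBot are user-agent names OpenAI has published for its crawlers; sites should verify the current names against OpenAI's own documentation):

```
# Block OpenAI's crawlers site-wide.
# User-agent names assumed from OpenAI's published crawler list;
# confirm against current documentation before relying on them.
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /
```

As the Tow Center's findings suggest, such rules only govern direct crawling of the publisher's own site; they cannot prevent a search tool from citing copies of the same content hosted elsewhere.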
The study ultimately highlights the limited control publishers have over how their content appears in ChatGPT Search. Even publishers with licensing agreements with OpenAI experienced misattribution and misrepresentation of their content. The Tow Center concludes that OpenAI needs to address these issues to ensure accurate representation and citation of news content.