In the US, the AI startup Perplexity is at the center of a new legal dispute. Dow Jones and the New York Post accuse the company of infringing their copyrights and trademarks by generating fabricated news passages and falsely attributing them to real media outlets. But what is behind these allegations, and what could they mean for the future of AI-based models?
The accusation: copyright and trademark infringement
The publishers Dow Jones (publisher of the Wall Street Journal) and the New York Post, both owned by Rupert Murdoch's News Corp, recently filed a lawsuit against the AI startup Perplexity in the US District Court for the Southern District of New York. The central allegation is that Perplexity generates text snippets that sound like real news and falsely attributes them to the original sources. According to the publishers, this violates not only copyright law but also trademark law.
Perplexity has attracted negative attention before. Earlier this month, the company received a cease-and-desist letter from the New York Times over the unauthorized use of its content. Forbes and WIRED also reported that their articles had been copied and used without permission, whereupon both publishers likewise took legal action.
What is behind the term "hallucination"?
In the AI world, "hallucination" means that a language model generates content with no basis in reality. Such errors are particularly problematic when they affect news content, because they spread false information and jeopardize the credibility of the media concerned. In Perplexity's case, the AI software is alleged to have taken real paragraphs from a New York Post article and then added fictitious content about freedom of speech and online regulation that could not be found in the original source.
According to the complaint, these hallucinations damage publishers' reputations by introducing uncertainty into news consumption and the publishing process. "Perplexity's hallucinations, passed off as authentic content, dilute the value of our brands and affect the public's trust," the complaint states.
Reactions and parallels to other cases
In a statement, Robert Thomson, CEO of News Corp, praised OpenAI as a positive example of the responsible use of AI technology. By contrast, he accused Perplexity and other AI companies of systematically misusing content. The New York Times also recently filed a lawsuit against OpenAI, arguing among other things that ChatGPT falsely attributed fabricated quotes to the paper's articles.
Experts disagree on whether the trademark infringement allegation will hold up in this case. Vincent Allen, an attorney specializing in intellectual property, considers the copyright claims stronger, while he is skeptical about the trademark claim. He points to the landmark case Dastar v. Twentieth Century Fox, in which the Supreme Court held that "origin" under trademark law refers to the producer of tangible goods, not to the author of the ideas or communications they embody.
James Grimmelmann, professor of digital and internet law at Cornell University, also expressed doubts about the validity of the trademark claims. In his view, trademark "dilution" involves using a mark on one's own products, which is not the case here.
What does this mean for the future of AI?
If publishers were to prove in court that AI-generated hallucinations violate trademark law, AI companies could face enormous challenges. "It's virtually impossible to guarantee that a language model will never generate something false or legally questionable," explains Elizabeth Renieris, a lawyer specializing in technology. These legal challenges raise important questions: To what extent can and should AI systems be regulated? And how can technology providers ensure that their models comply with the legal framework?
The current lawsuit against Perplexity could set a precedent that fundamentally changes how AI companies develop and deploy their models. It highlights the need for clear guidelines and strict oversight to balance innovation with legal responsibility, and it could lead to tighter regulation of the AI industry aimed at preserving both the creativity and the credibility of AI-generated content.
Ultimately, not only the technological, but also the ethical and legal responsibility of AI developers will be put to the test. It remains to be seen how legislation will evolve to keep pace with the rapid development of AI technologies while protecting the integrity and security of public communications.