New York Times Confronts ChatGPT Creator Over ‘Fair Use’ of Copyrighted Works

An avalanche of lawsuits in federal court in New York will test the fate of ChatGPT and other artificial intelligence products that wouldn’t be nearly as fluent if they hadn’t ingested massive amounts of copyrighted human works.

But do AI chatbots—in this case, the widely marketed products created by OpenAI and its business partner Microsoft—violate copyright and fair competition laws? Professional writers and media outlets face an uphill battle to win that argument in court.

“I want to be an optimist on behalf of the writers, but I’m not. I think they have an uphill battle ahead of them,” said copyright attorney Ashima Aggarwal, who worked for publishing giant John Wiley & Sons.

One lawsuit comes from The New York Times. Another comes from a group of famous novelists, including John Grisham, Jodi Picoult and George R.R. Martin. A third comes from best-selling nonfiction authors, including the author of the Pulitzer Prize-winning biography on which the hit movie “Oppenheimer” was based.

The demands

Each lawsuit presents different arguments, but they all center on the claim that San Francisco-based OpenAI “created this product from other people’s intellectual property,” said attorney Justin Nelson, who represents the nonfiction authors and whose firm also represents The New York Times.

“What OpenAI is saying is that from the beginning of time, as long as it’s on the Internet, it’s free to take over someone else’s intellectual property,” Nelson said.

The New York Times filed a lawsuit in December, alleging that ChatGPT and Microsoft’s Copilot chatbot compete with the same media outlets they are trained on and divert web traffic away from the newspaper and other copyright holders who rely on advertising revenue to fund their journalism. The suit also presented evidence of the chatbots repeating Times articles word for word. In other instances, it said, the chatbots falsely attributed misinformation to the newspaper, damaging its reputation.

A single federal judge is presiding over all three cases so far, as well as a fourth filed last week by two other nonfiction authors. US District Judge Sidney H. Stein has been on the Manhattan court since 1995, when he was appointed by then-President Bill Clinton.

The response

OpenAI and Microsoft have not yet filed formal defenses in the New York cases, but OpenAI issued a public statement this week calling The New York Times’ lawsuit “meritless” and noting that the chatbot’s ability to repeat some articles verbatim was an “unusual failure.”

“Training artificial intelligence models with publicly available content on the Internet is a legitimate use, as demonstrated by longstanding and widely accepted precedents,” the company said on its blog Monday. It also suggested that The New York Times had either directed the model to reproduce its articles or selected its examples from many attempts.

OpenAI signed licensing deals with The Associated Press, German media company Axel Springer and other organizations last year to show how the company is trying to support a healthy news ecosystem. OpenAI pays an undisclosed amount to license the AP news archive. The New York Times had been in similar talks before deciding to sue.

At the time, OpenAI said that access to AP’s “archive of high-quality, factual text” would enhance the capabilities of its AI systems. But its blog post this week played down the importance of news content for AI training, arguing that large language models learn from “vast bodies of human knowledge” and that no single data source, including The New York Times, is critical to the model’s intended learning.

Who will win?

Much of the AI industry’s argument rests on the “fair use” doctrine of US copyright law, which allows limited use of copyrighted material for purposes such as teaching and research, or for transforming the copyrighted work into something different.

In response, the legal team representing The New York Times wrote on Tuesday that what OpenAI and Microsoft are doing “is not fair use under any circumstances,” because they are taking advantage of the newspaper’s investment in its journalism “to create substitute products without permission or payment.”

Until now, courts have largely sided with tech companies when interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last year dismissed most of a major lawsuit against AI image generators. Another California judge dismissed comedian Sarah Silverman’s argument that Facebook’s parent company, Meta, infringed the copyright of her memoir to build its AI model.

More recent lawsuits have provided more detailed evidence of alleged harm, but Aggarwal said that when it comes to using copyrighted material to train artificial intelligence systems that deliver only a small portion of that material to users, courts have not seemed inclined to treat it as copyright infringement.

Tech companies cite Google’s success in fending off legal challenges to its digital book library as an example. In 2016, the US Supreme Court let stand a lower court ruling that rejected authors’ claims that Google’s digitization of millions of books and display of excerpts from them constituted copyright infringement.

But judges interpret fair use arguments on a case-by-case basis, and such rulings “really depend a lot on the facts,” including economic impact and other factors, said Cathy Wolfe, an executive at the Dutch firm Wolters Kluwer who also sits on the board of the Copyright Clearance Center.

“Just because something is free on the Internet, on a website, doesn’t mean you can copy it and email it, much less use it to conduct commercial business,” Wolfe said. “Who will win? I don’t know, but I’m definitely in favor of protecting copyright for everyone. It drives innovation.”

Beyond the courts

Some media outlets and other content creators are calling on lawmakers or the Copyright Office at the Library of Congress to strengthen copyright protections in the age of AI. A panel of the US Senate Judiciary Committee will hear testimony from media executives and advocates on Wednesday in a hearing devoted to the impact of AI on journalism.

Roger Lynch, CEO of the Condé Nast magazine chain, plans to tell senators that generative AI companies are “using our stolen intellectual property to create replacement tools.”

“We believe the legislative solution may be simple: clarify that use of copyrighted material in conjunction with commercial generative AI is not fair use and requires permission,” reads a copy of Lynch’s prepared statements.
