ChatGPT creator confronts New York Times over ‘fair use’ of copyrighted works

An avalanche of lawsuits in New York federal court will test the future of ChatGPT and other artificial intelligence products that would not be nearly as eloquent had they not ingested large amounts of copyrighted human work.

But do artificial intelligence chatbots, in this case widely commercialized products built by OpenAI and its business partner Microsoft, violate copyright and fair competition laws? Professional writers and media outlets will have a hard time winning that argument in court.

“I would like to be optimistic on behalf of the authors, but I am not. I think they have an uphill battle ahead of them,” said copyright lawyer Ashima Aggarwal, who worked for publishing giant John Wiley & Sons.

One lawsuit came from The New York Times. Another came from a group of well-known novelists, including John Grisham, Jodi Picoult and George R.R. Martin. A third was filed by best-selling nonfiction authors, among them the writer of the Pulitzer Prize-winning biography on which the hit movie Oppenheimer was based.

THE LAWSUITS

Each lawsuit makes different arguments, but all rest on the premise that San Francisco-based OpenAI “created this product based on other people’s intellectual property,” said attorney Justin Nelson, who represents the nonfiction writers and whose law firm also represents The New York Times.

“What OpenAI is saying is that, since the beginning of time, it has been free to take someone else’s intellectual property as long as it is on the Internet,” Nelson said.

The New York Times filed its lawsuit in December, arguing that ChatGPT and Microsoft’s Copilot chatbot compete with the very outlets they are trained on, diverting web traffic away from the newspaper and other copyright holders who rely on the advertising revenue their sites generate to keep producing their journalism. The newspaper also presented evidence of the chatbots repeating Times articles word for word. In other cases, the chatbots falsely attributed misinformation to the paper, damaging its reputation.

A single federal judge is presiding over all three cases so far, as well as a fourth filed last week by two more nonfiction writers. U.S. District Judge Sidney H. Stein has served on the Manhattan court since 1995, when he was appointed by then-President Bill Clinton.

THE RESPONSE

OpenAI and Microsoft have yet to file formal counterarguments in the New York cases, but OpenAI issued a public statement this week calling The New York Times’ lawsuit “baseless” and describing the chatbot’s ability to repeat certain articles verbatim as “an unusual failure.”

“Training artificial intelligence models using publicly available materials on the Internet is a legitimate use, as evidenced by long-standing and widely accepted precedents,” the company said in a blog post Monday. It also suggested that The New York Times “prompted the model to reproduce its content or selected its examples from many attempts.”

OpenAI pointed to licensing agreements it signed last year with The Associated Press, German media company Axel Springer and others as examples of how it is trying to support a healthy news ecosystem. OpenAI is paying an undisclosed sum to license AP’s news archive. The New York Times was in similar talks before it decided to sue.

At the time, OpenAI said access to AP’s “archive of high-quality, fact-based text” would enhance the capabilities of its artificial intelligence systems. But its blog post this week downplayed the role of news content in AI training, arguing that large language models learn from “vast amounts of human knowledge” and that “any single data source, including The New York Times, is not significant for the model’s intended learning.”

WHO WILL WIN?

Much of the AI industry’s argument relies on the “fair use” doctrine of U.S. copyright law, which allows limited use of copyrighted material for teaching, research, or transforming the protected work into something else.

In response, the legal team representing The New York Times wrote on Tuesday that what OpenAI and Microsoft are doing “does not constitute fair use under any circumstances” because they are using the newspaper’s investment in its journalism “to create substitute products without permission or payment.”

So far, courts have largely sided with tech companies when interpreting how copyright laws should treat artificial intelligence systems. Last year, a federal judge in San Francisco threw out most of the first major lawsuit against AI image generators, a defeat for visual artists. Another California judge rejected comedian Sarah Silverman’s arguments that Facebook’s parent company Meta infringed on her autobiography to create its artificial intelligence model.

The more recent lawsuits provide more detailed evidence of the alleged harm, but Aggarwal said that when it comes to using copyrighted content to train artificial intelligence systems that offer “a small portion of that content to users,” courts “appear to be reluctant to consider it copyright infringement.”

Technology companies cite Google’s success in fending off legal challenges to its digital book library as a precedent. In 2016, the U.S. Supreme Court let stand lower court rulings that rejected authors’ argument that Google’s digitizing of millions of books, and publicly displaying portions of them, constituted copyright infringement.

But judges interpret fair use arguments on a case-by-case basis, and such decisions are “really very much fact-driven,” depending on economic impact and other factors, said Katie Wolfe, an executive at the Dutch firm Wolters Kluwer, who also sits on the board of the Copyright Clearance Center, which helps negotiate licenses for print and digital media in the United States.

“Just because something is free on the Internet, on a website, doesn’t mean you can copy it and email it, much less use it to run a commercial business,” Wolfe said. “Who will win? I don’t know, but I’m definitely in favor of copyright protection for everyone. It encourages innovation.”

OUTSIDE THE COURTS

Some media outlets and other content creators are looking beyond the courts, calling on lawmakers or the U.S. Copyright Office, part of the Library of Congress, to strengthen copyright protections in the age of artificial intelligence. On Wednesday, a panel of the U.S. Senate Judiciary Committee will hear testimony from media executives and lawyers at a hearing on AI’s impact on journalism.

Roger Lynch, CEO of magazine publisher Condé Nast, plans to tell senators that generative artificial intelligence companies are “using our stolen intellectual property to create replacement tools.”

“We believe the legislative solution may be simple: clarify that use of copyrighted content in connection with commercial generative artificial intelligence is not fair use and requires permission,” Lynch said in a copy of his prepared remarks.
