
Generative AI is here, along with critical legal implications




Artificial intelligence (AI) has already made its way into our personal and professional lives. Although the term is frequently used to describe a wide range of advanced computer processes, AI is best understood as a computer system or technological process that is capable of simulating human intelligence or learning to perform tasks, make calculations and engage in decision-making.

Until recently, the traditional understanding of AI described machine learning (ML) technologies that recognized patterns and/or predicted behavior or preferences (also known as analytical AI). 

Now, a different kind of AI is revolutionizing the creative process — generative artificial intelligence (GAI). GAI creates content — including images, video and text — from inputs such as text or audio.

For example, we created the image below using the text prompt “lawyers attempting to understand generative artificial intelligence” with DALL·E 2, a text-to-image GAI.


[Image omitted: generated by the author using DALL·E 2]
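For readers curious how such a prompt is submitted in practice, the sketch below shows one way to request an image from a text-to-image API. It is a minimal example assuming the official OpenAI Python SDK and an API key set in the environment; it is not necessarily the workflow used to create the image above.

```python
# Minimal sketch, assuming the OpenAI Python SDK (openai>=1.0)
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",
    prompt="lawyers attempting to understand generative artificial intelligence",
    n=1,
    size="1024x1024",
)
print(response.data[0].url)  # URL of the generated image
```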

GAI proponents tout its tremendous promise as a creative and functional tool for an entire range of commercial and noncommercial purposes for industries and businesses of all stripes. These users may include filmmakers, artists, Internet and digital service providers (ISPs and DSPs), celebrities and influencers, graphic designers and architects, consumers, advertisers and GAI companies themselves.

With that promise comes a number of legal implications. For example, what rights and permissions are implicated when a GAI user creates an expressive work based on inputs involving a celebrity’s name, a brand, artwork, and potentially obscene, defamatory or harassing material? What might the creator do with such a work, and how might such use impact the creator’s own legal rights and the rights of others?

This article considers questions like these and the existing legal frameworks relevant to GAI stakeholders.

GAIs, like other AI, learn from data training sets according to parameters set by the AI programmer. A text-to-image GAI — such as OpenAI’s DALL·E 2 or Stability AI’s Stable Diffusion — requires access to a massive library of images and text pairs to learn concepts and principles.

Similar to how humans learn to associate a blue sky with daytime, a GAI learns this association from data sets that pair a photograph of a blue sky with the associated text “day” or “daytime.” From these training sets, GAIs quickly yield unique outputs (including images, videos or narrative text) that might take a human operator significantly more time to create.
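As a rough illustration of the pairing described above, the sketch below shows the shape of an image-caption training set in Python. The filenames and captions are hypothetical, and a real pipeline would encode millions of such pairs to update model weights rather than simply iterate over them.

```python
from dataclasses import dataclass

@dataclass
class TrainingPair:
    image_path: str  # path to an image file (hypothetical examples below)
    caption: str     # the text paired with that image

# A toy training set: across millions of such pairs, the model learns
# statistical associations between visual features and words.
training_set = [
    TrainingPair("sky_0001.jpg", "a clear blue sky at daytime"),
    TrainingPair("sky_0002.jpg", "a night sky full of stars"),
    TrainingPair("court_0001.jpg", "lawyers in a courtroom"),
]

for pair in training_set:
    # A real pipeline would encode both image and caption and update
    # model weights; here we only illustrate the data structure.
    print(f"{pair.image_path} <-> {pair.caption!r}")
```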

For example, Stability AI has stated that its current GAI “model learns from principles, so the outputs are not direct replicas of any single piece.”

The starting data sets, the implementing software code and the expressive outputs all raise legal questions. These include important issues of copyright, trademark, right of publicity, privacy and expressive rights under the First Amendment.

For example, depending on how they are coded, these training sets may include copyrighted images that could be incorporated into the GAI’s process without the permission of the copyright owner — indeed, this is squarely at issue in a recently filed class action lawsuit against Stability AI, Midjourney and DeviantArt.

Or they may include images or likenesses of celebrities, politicians or private figures used in ways that may violate those individuals’ right of publicity or privacy rights in the U.S. or abroad. Is allowing users to prompt a GAI to create an image “in the style” of someone permissible if it might dilute the market for that individual’s work? And what if GAIs render outputs that incorporate registered trademarks or suggest product endorsements? The numerous potential permutations of inputs and outputs give rise to a diverse range of legal issues. 

Several leaders in GAI development have begun thinking about or implementing collaborative solutions to address these concerns. For example, OpenAI and Shutterstock recently announced a deal whereby OpenAI will pay for the use of stock images owned by Shutterstock, which in turn “will reimburse creators when the company sells work to train text-to-image AI models.” For its part, Shutterstock agreed to exclusively purchase GAI-generated content produced with OpenAI.

As another example, Stability AI has stated that it may allow creators to choose whether their images will be part of the GAI data sets in the future. 

Education essential

Other potential copyright risks include claims against GAI users for direct infringement and claims against GAI platforms for secondary (contributory or vicarious) infringement. Whether or not such claims might succeed, copyright stakeholders are likely to be closely watching the GAI industry, and the novelty and complexity of the technology are sure to present issues of first impression for litigants and courts.

Indeed, appropriately educating courts about how GAIs work in practice, about the differences between GAI engines and about the relevant terminology will be critical to litigating claims in this space. For example, the process of “diffusion” that is central to current GAIs typically involves deconstructing images and inputs, then repeatedly refining, retooling and rebuilding pixels until a particular output sufficiently correlates to the prompts provided.
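To make that terminology concrete, here is a deliberately simplified Python sketch of the iterative-refinement idea: starting from pure noise and repeatedly nudging pixels toward a prompt-conditioned target. It is a toy analogy only; the guidance array stands in for what a trained neural network would predict, and no platform's actual implementation works this simply.

```python
import numpy as np

rng = np.random.default_rng(0)

def refine(image: np.ndarray, guidance: np.ndarray,
           step: int, total: int) -> np.ndarray:
    """One illustrative refinement step: blend the noisy pixels toward
    the prompt-conditioned target. A real diffusion model instead uses
    a trained neural network to predict and remove noise at each step."""
    alpha = (step + 1) / total
    return (1 - alpha) * image + alpha * guidance

# Generation starts from random noise, not from any single training image.
image = rng.normal(size=(64, 64, 3))

# Stand-in for what a trained model would derive from the text prompt.
guidance = np.zeros((64, 64, 3))

for step in range(50):
    image = refine(image, guidance, step, total=50)
```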

Given how the original inputs are broken down and reconstituted, one might even compare the diffusion process to the transformation a caterpillar undergoes in its chrysalis to become a butterfly. On the other hand, litigants challenging GAI platforms have asserted that “AI image generators are 21st-century collage tools that violate the rights of millions of artists.”

When stakeholders, litigants and courts understand the nuances of the processes involved, they will be better able to reach results consistent with the legal frameworks at play.

Is a GAI-created work a transformative fair use?

While some GAI platforms are taking steps to address concerns regarding the use of copyrighted material as inputs and their inclusion in and effect on creative outputs, the fair use doctrine will surely have a role to play for GAI stakeholders as both potential plaintiffs and defendants.

In particular, given the nature of GAI, questions about “transformativeness” are likely to predominate. The more a GAI “transforms” copyrighted images, text or other protected inputs, the more likely owners of GAI platforms and their users are to assert that the use of or reference to copyrighted material is a non-actionable fair use or protected by the First Amendment. 

The traditional four fair use factors will guide courts’ determinations of whether particular GAI-created works qualify for fair use protection: (1) the “purpose and character of the use, including whether such use is of a commercial nature”; (2) “the nature of the underlying copyrighted work itself”; (3) the “amount and substantiality of the portion used in relation to the copyrighted work as a whole”; and (4) “the effect of the use upon the potential market for or value of the copyrighted work” (17 U.S.C. § 107).

The fair use doctrine is currently before the Supreme Court in Andy Warhol Found. for Visual Arts, Inc. v. Goldsmith, 11 F.4th 26 (2d Cir. 2021), cert. granted, ___ U.S. ___, 142 S. Ct. 1412 (2022), and the Court’s ruling is highly likely to impact how stakeholders across creative industries (including GAI stakeholders) operate and whether constraints on the fair use framework around copyright will be loosened or tightened (or otherwise affected).

Lawsuits already; more to come

GAI platforms should also consider whether and to what extent the software itself is making a copy of a copyrighted image as part of the GAI process (“cache copying”), even if the output is a significantly transformed version of the inputs.

Doing so as part of the GAI process may give rise to claims of infringement or might be protected as fair use. As usual, these legal questions are highly fact-dependent, but GAI platforms may be able to limit potential liability depending on how their GAI engines function in practice.

And indeed, on November 3, 2022, unnamed programmers filed a proposed class action complaint against GitHub, Microsoft and OpenAI for allegedly infringing protected software code via Copilot, their AI-based product meant to assist and speed the work done by software coders. In a press release issued in connection with the lawsuit, one of the plaintiffs’ lawyers stated, “As far as we know, this is the first class action case in the U.S. challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law.” 

These attorneys fulfilled their prediction when they filed their next lawsuit (referenced above) in January 2023, asserting claims against Stability AI, Midjourney and DeviantArt, including for direct and vicarious copyright infringement, violation of the DMCA and violation of California’s statutory and common law right of publicity. 

The named plaintiffs — three visual artists seeking to represent classes of artists and copyright owners — allege that the generated images “are based entirely on the training images [including their works] and are derivative works of the particular images Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool.”

The defendants are sure to disagree with this characterization, and litigation over the specific technical details of the GAI software is likely to be front and center in this action.

Ownership and licensing of AI-generated content

Ownership of GAI-generated content and what the owner can do with such content raises additional legal issues. As between the GAI platform and the user, the details of ownership and usage rights are likely to be governed by GAI terms of service (TOS) agreements.

For this reason, GAI platforms should carefully consider the language of the TOS, what rights and permissions they purport to grant users, and whether and to what extent the platform can mitigate risk when users exploit content in a manner that might violate the TOS. Currently, TOS provisions regarding who is the owner of GAI output and what they can do with it may differ by platform.

For example, with Midjourney, the user owns the GAI-generated image. However, the company retains a broad perpetual, non-exclusive license to use the GAI-generated image and any text or images the user includes in prompts. Terms like these are likely to change and evolve over time, including in reaction to the pace of technological development and ensuing legal developments.

OpenAI’s current terms provide that “as between the parties and to the extent permitted by applicable law, you own all Input, and subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output.”  

Questions of ownership front and center

As companies continue to consider who should own and control GAI content outputs, they will need to weigh creative flexibility against potential liabilities and harms, along with terms and policies that may evolve over time.

Separate questions of permissible use arise for parties who have licensed content that may be included in training sets or GAI outputs. Such licenses — especially if created before GAI was a potential consideration by the parties to the agreement — may give rise to disputes or require renegotiation. Whether the parties intended a license to cover all potential future technologies, including those unforeseen at the time of contracting, raises additional legal issues relevant here.

While questions of ownership are front and center, one key player in the GAI process — the AI itself — is unlikely to qualify for ownership anytime soon. Despite the efforts of AI-rights activists, the U.S. Patent and Trademark Office (USPTO), Copyright Office and courts have been broadly in agreement that an AI (as a nonhuman author) cannot itself own the rights in a work the AI creates or facilitates.

This issue merits watching, however; Shira Perlmutter, register of copyrights and director of the U.S. Copyright Office, has indicated the intention to closely examine the AI space, including questions of authorship and generative AI. And a lawsuit challenging the denial of registration of an allegedly AI-authored work remains pending before a court in Washington, D.C.

Political concerns and potential liability for immoral and illegal GAI-generated images

Apart from concerns of infringement, GAI raises issues about the potential creation and misuse of harmful, abusive or offensive content. Indeed, this has already occurred via the creation of deepfakes, including deepfaked nonconsensual pornography, violent imagery and political misinformation.

These potentially nefarious uses of the technology have caught the attention of lawmakers, including Congresswoman Anna Eshoo, who wrote a letter to the U.S. National Security Advisor and the Office of Science and Technology Policy to highlight the potential for misuse of “unsafe” GAIs and to call for the regulation of these AI models. In particular, Eshoo discussed the release of open-source GAIs, which present different liability issues because users can remove safety filters from the original GAI code. Without these guardrails — or a platform ensuring compliance with TOS standards — a user can leverage the technology to create violent, abusive, harassing or other offensive images. 

In view of the potential abuses and concerns around AI, the White House Office of Science and Technology Policy recently issued its Blueprint for an AI Bill of Rights, which is meant to “help guide the design, development and deployment of AI and other automated systems so that they protect the rights of the American public.” The Blueprint focuses on safety, algorithmic discrimination protections and data privacy, among other principles. In other words, the government is paying attention to the AI industry.

Given the potential for misuse of GAI and the potential for governmental regulation, the more mainstream platforms have taken steps to implement mitigation measures.

AI is in its relative infancy, and as the industry expands, governmental regulators and lawmakers as well as litigants are likely to increasingly need to reckon with these technologies.

Nathaniel Bach is a litigation partner at Manatt Entertainment.

Eric Bergner is a partner and leader of Manatt’s Digital and Technology Transactions practice.

Andrea Del-Carmen Gonzalez is a litigation associate at Manatt Entertainment.


