The Year Ahead in Generative AI
At this very moment, I could boot up OpenAI’s popular ChatGPT and ask it to write this article for me. Or I could ask it to “write a lullaby about a cool goose who just wants to honk.” I could pull up Midjourney, Stable Diffusion, DALL-E 2, or any number of other AI art generators and order one of them to craft me a portrait of this cool goose.
Then come the ethical conundrums: Could AI write and illustrate a children’s book about the goose for me? Could I sell that book, even though I put much less effort into it than any children’s author would? Who should own the copyright? And what about the rights of the authors whose work trained the AI that created my book?
AI technology is developing rapidly, but it’s not clear how close we are to generative AI models regularly producing final, release-ready products. People in the art scene have already raised concerns over how generative AI systems, particularly GPT and diffusion-based machine learning models, will disrupt their profession. But AI-generated content could be a big deal in many other fields, from programming to copywriting. So how long until all our proverbial geese are metaphorically cooked?
To answer these questions, I spoke to three people who all work directly with AI but have varying takes on the technology:
- Irene Solaiman got deep into newfangled AI systems back in 2019 as a researcher and public policy manager at OpenAI, a company that has since become one of the biggest names in the generative AI scene. She was there for the release of the GPT-2 language model and the API for GPT-3, and wrote one of the first toxicity safety reports on GPT-3. She also worked for Zillow on the ethics of predictive housing models. Now, as policy director at the machine learning resource platform Hugging Face, she spends quite a bit of time thinking about how this technology will grow, and how companies can ethically direct its progress.
- Alfred Wahlforss is a Harvard graduate student studying data science. He also helped develop the app BeFake, which combined the open source Stable Diffusion model with Google’s DreamBooth fine-tuning technique to create fake selfies based on users’ images (a rough sketch of that kind of pipeline follows this list). Wahlforss is very bullish on AI and wants to see the technology keep advancing.
- Margaret “Meg” Mitchell is an AI researcher with a storied legacy in the short time that generative AI has been around. Though she got her start doing research in natural language processing, she would go on to work at Microsoft and then Google Research’s machine intelligence division, where she became a lead of the company’s AI ethics team. In 2021, Google fired her after the company reportedly told employees to try to be more “positive” in how they talked about the problems with AI. She has remained outspoken about the possibilities and challenges of generative AI since then, and now works as chief ethics scientist at Hugging Face.
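BeFake’s actual code isn’t public, but as a rough sketch: generating a “fake selfie” from a Stable Diffusion checkpoint that has been DreamBooth-fine-tuned on a user’s photos might look something like the following, using Hugging Face’s open source diffusers library. The checkpoint path and prompt here are hypothetical placeholders, not anything from the app itself.

```python
# A minimal sketch, not BeFake's actual code: sampling from a Stable
# Diffusion checkpoint that has (hypothetically) been fine-tuned with
# DreamBooth on a user's uploaded selfies.
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint. DreamBooth training binds a rare token
# (conventionally "sks") to the subject seen in the training photos.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-finetuned-model",  # hypothetical local path
    torch_dtype=torch.float16,
).to("cuda")

# Reference the bound token to drop the "user" into a brand-new scene.
image = pipe(
    "a photo of sks person skiing in the Alps, golden hour, 35mm",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]

image.save("fake_selfie.png")
```

The key idea is that fine-tuning teaches the model a new identity from a handful of photos, after which ordinary text prompts can place that identity in arbitrary scenes.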
AI models exacerbate ownership concerns
As AI gets better at generating visual and written content, there’s a real risk that it could undercut the creators whose livelihoods depend on producing that content themselves.
Some companies are already using cheap AI art instead of paying for the real thing. In December, science fiction and fantasy publisher Tor was put on blast after sharing the cover of an upcoming book that turned out to be AI art purchased off a stock image site. The work was uncredited, and even the person who touched up the image for the cover went unnamed.
Creators including Polish fantasy artist Greg Rutkowski have come out against AI art for fear that their personal brands will be drowned out by the proliferation of AI art specifically meant to imitate their work. Rutkowski has supported efforts from the likes of the Concept Art Association, which wants to lobby for updates to IP and data privacy laws.
Stability AI, the company behind the open source text-to-image diffusion model Stable Diffusion, has made a few concessions to artists concerned that their work is being stolen or copied by AI. With the release of Stable Diffusion 2, it made changes that make it harder to generate images of celebrities, pornography, or art “in the style of” real-world artists. Some fans of the earlier, more permissive open source model were none too happy with the changes. Stability AI also announced it was working with the company Spawning to let artists “opt out” of having their work used to train Stable Diffusion 3, which will likely be released in 2023.
Though Stability AI has made some stated efforts to keep its AI from explicitly copying work, there’s a genuine sense among small artists who depend on art and portrait commissions that they’ll suffer as AI art generators grow even more popular.
That’s not to say things are dire just yet. Mitchell said some artists are already seeing benefits from current AI models, which can speed up their work by generating a baseline to build on.
“Some of the specific artists who are speaking out about their work being stolen, I think, will also be artists that will potentially become even more valued as the actual artists, potentially even driving up their sales, or at least driving up the cost of their original pieces of work,” she said.
Tackling AI lies and biases
In November 2022, Meta pulled its Galactica AI demo just days after release, when researchers found it was confidently generating falsehoods and misinformation.