AI art
Artificial intelligence could be the future of creativity – but the tool should be used with care
17 May 2024
If you were active on social media in the final months of 2022, odds are good you noticed a spike in avatars of your friends as fairies or anime characters or figures from a high-fantasy video game.
The images were from a company called Lensa, which uses artificial intelligence to turn selfies into art. And they had more than the social-media influencers buzzing. The technology set off a new wave of debate about the role of artificial intelligence in art as well as ethical issues involving racism, stolen images and revenge porn. But others look ahead to a future where AI assists artists instead of competing with them.
The Lensa app, which uses the Stable Diffusion deep-learning model to render images in various art styles, was not the first use of AI technology to disturb artists worried about being replaced by computers.
In 2018, a piece of digital art called Edmond De Belamy, which was generated by a machine-learning algorithm, sold at a Christie’s art auction for U.S.$432,500, well above its U.S.$10,000 estimate, setting off alarm bells among creatives fearing for jobs and the nature of art itself.
A similar cry erupted in September 2022 when Jason M. Allen won first prize in a digital category at the Colorado State Fair’s annual art competition with an AI-generated piece called Théâtre D’opéra Spatial. Allen used Midjourney, which translates text descriptions into digital artwork (and has been used to produce images in KUST Review).
But both images show that computer-generated art has more human involvement than the AI tag and Christie’s promotional language for Edmond De Belamy (“This portrait … is not the product of a human mind”) might lead you to believe.
Both pieces were products of humans: Edmond De Belamy was created by a Parisian art collective called Obvious, and Théâtre D’opéra Spatial by Allen. Both were initiated, selected, printed and promoted by those humans. And humans created the code that built them, infusing the final works with human aesthetics, biases and potential moral issues.
It’s important to remember that humans, not soulless code, are ultimately behind the AI product, says Ziv Epstein, a Ph.D. student in MIT Media Lab’s Human Dynamics group who has an eye on the emerging technology.
“When we talk about AI as a creator instead of a tool, it undermines credit and responsibility to the artists involved in the creation of AI art,” Epstein tells KUST Review. “Anthropomorphizing AI can undermine our capacity to hold people responsible for the wrongdoings of sociotechnical systems when an AI system commits a moral transgression: The perceived agency of the AI could be a sponge, absorbing responsibility from the other human stakeholders.
“We must be careful how we talk about AI and fight the current conceptualization of AI, typified by corporate-metaphysical circuit brains or embodied androids, lit by blue light and here to take your job. These narratives are not neutral and often cut along lines of power.”
A wrongdoing Epstein might have in mind: Among initial users of the Lensa avatar generator, some people who wear hijabs and/or have dark skin reported that their images seemed to have more glitches than others’ or didn’t look much like them. And this cuts to deeper issues of racism and sexism baked into the code and reported on frequently in recent years.
“AI bias in art hurts Black and other communities of color in two very specific ways,” says Mutale Nkonde, founder and CEO of AI for the People and a UN advisor on AI and human rights. “The app Lensa used AI to create ‘artist’ impression avatars for users and a beauty filter that made non-white women appear more European. This may seem innocuous, but there is data that shows algorithmic recommendation systems used within the image-sharing app Instagram has been found to increase mental-health complaints among young girls because it amplifies images of women with unhealthy bodies.
“This could be true of women with non-European features who watch their physical appearance being erased and devalued,” she tells KUST Review. “This ethnic erasure contributes to the sales of skin-lightening creams in countries in Asia, Africa and the Gulf region and could result in women in these regions engaging in even more self-harming behavior.
“The second concern is the data privacy of the people using these apps in order to work,” Nkonde says. “Users have to upload pictures, and in doing so give the company their biometric data which could be shared and/or sold to data brokers and then used to develop technologies like facial recognition. Facial-recognition systems in the West being used by law-enforcement agencies have problems recognizing people with dark skin and have led to the wrongful arrest of Black men.”
Again: Blame the humans behind the code.
Nkonde sees a solution, however.
“The best way to reduce these biases is by expanding the training datasets used to develop each app. In terms of the Europeanization of visual culture that means training those apps with a wide variety of pictures of people from a wide range of ethnicities. That way an Arab woman using it will be given an image that shows her unique beauty,” she says.
Without expanded datasets, apps and AI risk reflecting – and perhaps amplifying – biases.
“The Stable Diffusion model was trained on unfiltered internet content. So it reflects the biases humans incorporate into the images they produce,” Lensa says in its FAQ.
That unfiltered content used to train the model is also concerning to artists who fear their work is being used without their consent – and may damage their livelihoods by allowing the masses to replicate their style without paying for it.
One of them is Greg Rutkowski, a Polish artist whose high-fantasy digital illustrations of defiant wizards and rampaging orcs are familiar to fans of such games as Dungeons & Dragons and Magic: The Gathering.
His style was commonly requested on Stable Diffusion before the model’s makers changed its code in November 2022 to make copying specific artists’ styles harder.
“It’s a cool experiment,” he says of the people who used his name as a prompt. “But for me and many other artists, it’s starting to look like a threat to our careers,” he tells the MIT Technology Review.
Artists have countered with a site called Have I Been Trained, which allows creatives to search for examples of their own work among the 5.8 billion images scraped from the internet, including sites such as Pinterest, to train Stable Diffusion and Midjourney.
Some groups have responded by banning AI-generated art, including online artist community Newgrounds and visual-media company Getty Images, which cited fears of future copyright claims as laws eventually catch up with technology.
Among the laws catching up with the accelerating technology: The United Kingdom in November 2022 announced plans to criminalize the sharing of pornographic deepfakes, often created as a form of revenge porn victimizing primarily women who don’t know their faces have been digitally attached to others’ bodies.
At the same time it changed its code to make copying styles harder, Stable Diffusion also introduced changes that make creating pornographic content more difficult. AI systems Midjourney and DALL-E 2 had previously banned adult-content creation. But other systems remain accessible to deepfake abuses.
Still, some creators remain optimistic about AI-assisted art.
Alexander Reben used a machine-learning algorithm called GPT-3 to slough off a creative slump during the early months of the COVID-19 pandemic.
The algorithm, a language model trained by OpenAI like ChatGPT, which came later, writes original text – essays, fiction, news articles, even dad jokes – from a prompt. Reben played with the tool until he learned he could prod it to write the sort of text one might find on a label next to a piece of art on a gallery wall.
Reben pored through the outputs until he found some he liked, then created in real life the art they described. A whimsical story about an anonymous art collective known as The Plungers that created art with actual toilet plungers, for example, became an IRL installation as part of a series the AI titled “AI Am I?”
“As technology becomes more of an extension and amplification of our minds – just as a wrench is an extension of our hands and amplifies our physical ability – AI becomes more of a collaborator rather than a calculator,” he writes for BBC.com. “Unlike creative tools of the past, such as Photoshop, photographs or pigments, we are now working with tools that seem to have generative imagination, but perhaps no ‘taste.’ The human in the loop adds an important curatorial role in determining the ‘good’ versus ‘bad.’”
Or as AI-avatar creator Lensa says in a tweet: “As cinema didn’t kill theater and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool.”
Architecture student Qasim Iqbal, for example, uses Midjourney to visualize his designs.
“With Midjourney primarily being a text-to-image generator, it encourages you to summarize and define ideas through words and teaches you to be specific,” he tells My Modern Met.
He says it helps him “test concepts, ideas and directions for projects,” but “it should never be the originator of the idea.”
Others are embracing the technology by trading the pen or the brush for the word to create visual art. This is the emerging domain of “promptology” or the “prompt engineer,” using a new set of skills to coax a desired image out of the models with carefully crafted text.
And then there is the utility of the technology for, well, anyone.
NightCafe, launched in 2019 and named after Vincent Van Gogh’s The Night Café, is one of the systems looking to fulfill the tech’s promise of democratized art:
“We create tools that allow anyone — regardless of skill level — to experience the satisfaction, the therapy, the rush of creating incredible, unique art,” it says, with the caveat that it does not seek to “make artists redundant.”
But for the “but is it art?” crowd, there’s still opportunity to invest skill, thought, talent and effort beyond the push of a button to create with AI tools.
Allen, the Colorado State Fair winner, spent 80 hours on Midjourney and sifted through 900 images before he settled on a picture to print on canvas.
Other artists take much longer for their process, investing considerable time and brainpower to learn the technology and make tweaks to the code for the specific result they seek.
“Using machine learning is such a steep learning curve for me,” says Jake Elwes in the paper “AI and the Arts: How Machine Learning Is Changing Artistic Work.”
“I understand enough of the technology to use it and hack it, but I’m not writing algorithms myself, so it often takes months of research to work out how to use a model and get it to do what I want it to do. To be able to see some of my artistic voice coming through a black box or a ready-made, and then find an interesting way of subverting it. It’s a long process, not something you can just play with lightly.”
The same might be said for the technology itself.