AI-Generated Art

From New Media Business Blog




AI art refers to art generated with the assistance of artificial intelligence.

Evolution of AI-Generated Art

Artwork created by AARON drawn by a "turtle" robot equipped with a marker.

One of the earliest AI art systems is AARON, developed by Harold Cohen beginning in the late 1960s and early 1970s. AARON is notable for its use of GOFAI programming: Cohen composed sets of rules and forms on the computer, from which lines and shapes were drawn on paper by a “turtle” robot. As AARON developed, it learned to make choices about open and closed shapes and foreground and background, and to recognize when an artwork had reached completion. In its early years, AARON produced black-and-white images, which were then painted by Cohen. In 1995, Cohen showcased a version of AARON that could both draw and colour its forms without mediation. [1]

Tools and Processes

There are various mechanisms used to create AI art. Some of the most prominent are as follows:

Symbolic AI

Symbolic artificial intelligence is also known as Good Old-Fashioned AI (GOFAI). Using tools such as logic programming, production rules, and semantic nets and frames, images are generated through a rule-based system. [2]
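As a toy illustration of the rule-based idea, an L-system repeatedly rewrites a string of drawing commands using production rules, which a plotter or turtle robot could then trace. The rule below is a standard Koch-curve-style example invented for illustration; it is not taken from AARON or any specific system.

```python
# A minimal rule-based (GOFAI-style) generative sketch: production rules
# rewrite a string of drawing commands. "F" = draw forward, "+"/"-" = turn.
RULES = {"F": "F+F-F-F+F"}  # illustrative Koch-curve-like production rule

def expand(axiom: str, generations: int) -> str:
    """Apply the production rules to every symbol, `generations` times."""
    for _ in range(generations):
        axiom = "".join(RULES.get(symbol, symbol) for symbol in axiom)
    return axiom

commands = expand("F", 2)
print(commands)  # a string of turtle commands a drawing robot could follow
```

Each pass replaces every `F` with a longer pattern of strokes and turns, so a one-symbol "axiom" grows into an intricate drawing program purely by following fixed rules.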

Machine Learning

Since their introduction in 2014, generative adversarial networks (GANs) have been widely used in areas such as image, video, and voice generation. Training a GAN involves two sub-models: a “generator” that produces new examples and a “discriminator” that classifies examples as real (drawn from the training data) or fake (produced by the generator). [3]
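The adversarial dynamic can be sketched with a deliberately tiny toy: the "generator" here is a single number, the "discriminator" a simple distance test, and the update rule is invented for illustration. Real GANs train two neural networks with backpropagation; this only shows the back-and-forth pressure between the two roles.

```python
import random

# Toy illustration of the GAN idea (not a real neural network):
# "real" samples cluster around REAL_MEAN; the generator learns an offset
# until its fakes pass the discriminator's simple closeness test.
random.seed(0)

REAL_MEAN = 5.0
g_offset = 0.0  # the generator's single learned parameter

def generator() -> float:
    return g_offset + random.gauss(0, 0.1)

def discriminator(x: float) -> bool:
    # Classifies a sample as "real" if it is close to the real data's mean
    return abs(x - REAL_MEAN) < 0.5

for step in range(200):
    fake = generator()
    if not discriminator(fake):                    # the fake was caught...
        g_offset += 0.05 * (REAL_MEAN - g_offset)  # ...so the generator adapts

print(round(g_offset, 2))  # ends near REAL_MEAN once fakes start fooling it
```

The generator only improves when the discriminator rejects its output, which mirrors how the two sub-models push each other toward convincing results.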

One of the first prominent artistic applications of neural networks was DeepDream, designed by Google engineer Alexander Mordvintsev. DeepDream, characterized by its dream-like psychedelic images, is a computer vision program that uses a convolutional neural network (CNN) to detect and enhance patterns in images. [4] Google has used DeepDream to help its image search algorithms better identify the objects in images and return more relevant search results. [5]

AI in Literature

Literature has been transformed by technological advancements that have changed the way humans perceive, interpret, and write. Writers, authors, and producers around the world are embracing AI to turn data into captivating stories in a multitude of ways, across traditional genres such as poetry, storytelling, and filmmaking.

Benefits of AI in Literature

The role AI plays in literature provides many benefits, including, but not limited to, increased efficiency, time savings, simplicity, cost savings, and better stories.

Increases efficiency: AI can assist writers by generating thousands of words and visuals on command. Writers have access to a highly capable poet at any given moment, allowing them to produce far more content in a short period of time. Instead of searching for inspiration, writers can quickly generate ideas and translate them into high-quality content with the help of AI. [6]

Saves time: Many people want to tell stories but cannot devote all their time to writing. By using AI to automate routine writing tasks, writers free up time for other important work such as research and development. [7]

Enhances simplicity: AI can automatically modify texts to make them easier to understand and interpret, especially for people with cognitive disabilities. Even though computer-composed poems or stories can seem ambiguous or nonsensical, writers can simply ask AI to construct a straightforward computer-generated poem or story. [8]

Introduces cost savings: Writers sometimes struggle to spark new content ideas or design compelling illustrations that convey powerful messages. In such situations, they often hire subject-matter experts, which becomes expensive in the long run. Employing AI, by contrast, can be very cost-effective thanks to its ability to identify and correct inefficiencies as well as reduce labour costs. [9]

Produces better stories: A common hurdle for human writers is creating content on unfamiliar topics, which can lead to writer's block. For instance, human poetry is often limited to the personal experiences of the author, so conveying feelings and thoughts outside those experiences can be challenging. Likewise, many people carry story ideas in their heads for years but lack the ability to produce the art that goes along with them. Because AI can write on an endless number of topics, it can inspire creativity and help form unique storylines, generating character descriptions, plot lines, and visuals. [10]

Tools and Applications


Verse by Verse [11]: Google’s AI-powered muse is a tool for composing poetry inspired by classic American poets. Users are first prompted to select up to three poets to emulate, such as Robert Frost, Emily Dickinson, and Walt Whitman. The AI then asks users to design the structure of their poem, including its poetic form, syllable count, and rhyme scheme. Once these selections are made, users compose their first line of verse, and the AI takes over, suggesting the following verses to complete the poem. The tool was created to inspire poets: Verse by Verse never forces users to follow its suggestions, but rather encourages them to make changes and improvements as they see fit.

Deep-speare [12]: A joint neural model of poetic language, Deep-speare produces sonnet-like stanzas reminiscent of Shakespearean pieces (a sonnet is a short lyric poem). The AI poet employs three natural-language-processing models: one assesses word probability and selects each word, a second checks the rhythm of each line of poetry, and a third verifies that each line conforms to the rhyme scheme. Deep-speare can compose poems on a particular theme, with the models' word selections constrained to that theme. Because its poetry is written to resemble Shakespearean work, many people have found it difficult to distinguish between the two.


AI in Storytelling

AI plays a vital role in immersing audiences in new levels of engagement and encouraging writers to create new forms of dynamic, interactive stories.

Star Maker: World’s first bot-generated graphic novel to win a fine arts competition

Midjourney [13]: In August 2022, the world’s first bot-generated graphic novel, Star Maker, won a fine arts competition against human artists by leveraging the AI program Midjourney. Author Kevin Hess used the tool to create graphic images from text descriptions. The underlying novel, Star Maker, was originally published in 1937 by Olaf Stapledon; Hess, who calls it his favourite novel of all time, decided to adapt it, seizing the opportunity to give an epic visual treatment to an aged work now out of copyright. His graphic novel became the first fully illustrated novel in which every illustration was created by an AI. It runs to 706 fully illustrated pages and took only about 100 hours to construct from start to finish. The visual edition of Star Maker is a striking example of how powerful AI can be in elevating artistic capabilities. While the author viewed his win as a memorable milestone, it sparked a storm of conflicting views, with many artists upset and fearful of being replaced by bots.

Another graphic novel that used Midjourney for its illustrations is Lungflower [14], published in June 2022 by science fiction writer Brian Martinez and billed as the first AI-drawn graphic novel. The novel is an emotional, one-of-a-kind horror experience with eerie, wild, and vivid artwork.


AI in Filmmaking

Today, AI is used in all stages of film production, from writing and analyzing scripts to predicting a film's success, selecting actors, promoting movies, and producing them. As more people embrace the advantages of the technology, AI has played a crucial role in helping creators understand what audiences desire and discover new ways to produce and present content. As a result, AI enables creators to stitch together engaging and exciting narratives.

For example, German tech entrepreneur Fabian Stelzer is making an entire movie using DALL·E 2, Midjourney, and GPT-3 to generate the film's imagery, audio, and script. Stelzer has no background in film production and is relying solely on AI tools to produce his short sci-fi film ‘Salt’, which is currently in the making. [15]

Image generated from Arturo Tedeschi's initial text description: ‘a b/w scene of a man and a woman seated on a bench, in New York, Queensboro bridge in the background, foggy atmosphere’

DALL·E [16]: The generative AI system DALL·E, which generates digital images from natural language descriptions, has unlocked new possibilities in Arturo Tedeschi's recent work on film storyboarding. The Italian artist decided to convert storyboard texts into striking visuals inspired by famous movies like ‘Pulp Fiction’ and ‘Manhattan’. Using the technology, he created his first image from text describing a scene in the 1979 film ‘Manhattan’. In addition to his initial text description, ‘A b/w scene of a man and a woman seated on a bench, in New York, Queensboro bridge in the background, foggy atmosphere’, he uploaded a few pictures of the Queensboro bridge to better represent the scene. Based on his inputs, the AI-generated image was created in 15 seconds. His collaboration with the technology shows how accessible, powerful, and efficient AI can be in visualizing ideas. According to Tedeschi, “the art of storyboarding deserves attention not only for the graphic quality, technique, or quick trait, but as a creative bridge between screenplay and film or, if you like, as a link between language and images”.

Cinelytic [17]: The SaaS platform Cinelytic combines AI and cloud-based technology to help professionals in the entertainment industry make quicker, more informed decisions about the packaging, funding, production, distribution, and marketing of content. Warner Bros., for example, uses the engine to improve film scripts, predict expected earnings, and gain deeper insight into how a particular movie will perform once it is released to the public [18].

It's No Game co-written by AI-writer Benjamin

Benjamin: Benjamin is a notable AI writer built on a long short-term memory (LSTM) recurrent neural network [19]. The LSTM architecture is effective at predicting entire paragraphs rather than just a few words, lending originality to whole generated sentences. Benjamin was created by creative writer Ross Goodwin and trained on a range of sources found on the internet, including Shakespeare, Golden Age movies, the scripts of Hasselhoff films, and a dictionary of choreographic instructions [20]. In 2017, a short sci-fi film called ‘It’s No Game’, starring David Hasselhoff, was co-written by Benjamin.

Text-generator Limitations

Generative Pre-trained Transformer 3 (GPT-3): GPT-3 is an autoregressive language model that generates human-like text. It is best known as a text generator trained on a vast amount of content, including books, online conversations, Wikipedia, and scientific papers, in order to write on command. GPT-3 is available through OpenAI’s API and can generate many types of text; it has been widely used to compose poetry, articles, narratives, news reports, and dialogue. The model leverages deep learning to respond to a user's initial text input, producing the most likely continuation of the prompt. While GPT-3 was among the largest and most powerful trained models at its release, it has several limitations that constrain its capabilities. [21]

Model is not constantly learning: Because GPT-3 is pre-trained, it lacks an ongoing long-term memory and does not learn from every interaction. Additionally, GPT-3 shares a drawback common to all neural networks: it cannot explain why specific inputs lead to particular outputs. It is also incapable of remembering previous inputs or outputs across interactions, lacking any persistent memory. [22]

Model has limited input and output size: Prompts are limited in length; GPT-3 can take in and output only approximately 1,500 words combined. In addition to restricting users to a certain amount of text, the model can also be slow when generating results. [23]
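The practical consequence of this limit is that prompt and completion share one fixed budget. The helper functions below are a hypothetical sketch of working within such a window, using the ~1,500-word figure from the text; real systems count subword tokens rather than words, so word counts are only a rough approximation.

```python
# Hypothetical sketch: budgeting a prompt against a fixed context window.
# The word-based limit approximates GPT-3's token limit described above.
CONTEXT_LIMIT_WORDS = 1500

def fits_in_context(prompt: str, reserved_for_output: int = 500) -> bool:
    """Check whether a prompt leaves enough room for the model's reply."""
    return len(prompt.split()) <= CONTEXT_LIMIT_WORDS - reserved_for_output

def truncate_prompt(prompt: str, reserved_for_output: int = 500) -> str:
    """Keep only the most recent words that fit within the budget."""
    budget = CONTEXT_LIMIT_WORDS - reserved_for_output
    return " ".join(prompt.split()[-budget:])

long_prompt = "word " * 2000
print(fits_in_context(long_prompt))               # False: prompt too long
print(len(truncate_prompt(long_prompt).split()))  # 1000 words kept
```

Keeping the most recent text (rather than the oldest) reflects the common choice of preserving the immediate context the model is asked to continue.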

Model has machine learning biases: OpenAI has acknowledged that GPT-3 exhibits gender, racial, and religious bias, posing a danger to marginalized populations. Such harms include, but are not limited to, discrimination, structural inequalities, and unfair treatment. Because the model was trained on information from the internet, such as tweets, it reproduces many of the biases humans exhibit in their typically unfiltered online conversations [24]. These biases raise the concern that GPT-3 will create false stories and deceitful outputs, misleading users. Common examples include black people being described in negative terms and white people in positive terms, Muslims being linked to violence, and men not being connected to occupations such as nursing or receptionist work [25].

GPT-3 is far from perfect, and developers face many challenges in finding the optimal way to overcome the societal biases the model exhibits. These challenges involve deciding what topics to exclude and what words count as ‘bad’, as well as the difficulty of locating unwanted bias, from racial slurs to subtle associations [26]. Additionally, if language models like GPT-3 were simply blinded to sensitive topics, results could become invalid and unreliable. For example, if a user asked ‘does racism exist in this world?’ and the model had never been exposed to the topic, the answer could be factually incorrect and state ‘no’.

AI in Visual Arts

AI-generated visual art has made huge strides in the past few years, with key players such as DALL·E 2 and Midjourney among the most popular programs. These programs generate art in seconds from a text prompt through the use of: 1) logic and patterns, or 2) recreation of existing brush strokes. The AI does this by learning from many different art styles and artists to create images that have never been seen before.

AI Visual Art Generators

Tips on editing images in DALL-E (Image credit: The DALL-ery GALL-ery)

DALL·E 2 [27]: The most popular art generator, developed by OpenAI, allows people to create art from text descriptions. A unique feature of this program is that it can make edits to images that DALL·E has already generated. DALL·E works on a credit-based system, where new users get 50 free credits to create, generate, and edit images, with one credit buying a prompt that returns 1024 × 1024-pixel images. Users also receive 15 free credits each month and can purchase more at $15 for 115 credits (enough to generate 460 1024 × 1024-pixel images, at four images per prompt). Artists who need subsidized access can apply for it through OpenAI.

As shown in the image on the right, creating the right prompt still requires human input and the mindset and skills of an artist. With a clear artistic direction, the user is more likely to achieve the result they envisioned. It is therefore important to be clear about the composition, mood, context, lighting, and the various other details that shape the artwork.

NightCafe [28]: NightCafe is a program that lets you generate, share, and print your own art. To create pictures, you can enter a text prompt or supply reference images, and you can select an art style as a preset rather than specifying it in the written prompt. Compared with other generators, it offers more algorithms, options, and art styles, including cubism, cartoon, anime, pop art, gouache, and much more. Once prompted, it creates several variations of the initial request, which can be bulk-downloaded. NightCafe's credit system is also distinctive: users earn credits to create images by posting their art on the website and interacting with other artists.

Jason Allen used Midjourney to create the painting and then submitted it to the Colorado State Fair [29].

Midjourney: Midjourney is the program behind the painting that won first place for digital art at the Colorado State Fair. Compared with DALL·E, Midjourney has been recognized for an art style that is imperfect and more painterly, with varying textures and details; many have described Midjourney images as “more beautiful, a bit like an analog synth” [30]. A Discord account is needed to use the program, since users give text prompts to the Midjourney bot there. A free membership comes with credits for 25 free images (these credits do not expire but also do not renew). Once the credits are used up, a paid membership is required to use, copy, modify, merge, publish, distribute, and sell copies of the assets. A membership also lets the user have the artwork sent by direct message rather than posted in a public channel.

Benefits of AI in Visual Arts

Coming Up with New Concepts: Creatives can benefit from AI's ability to generate fresh ideas, since algorithms surface trends by examining large amounts of data. It can provide inspiration or references they are unable to find elsewhere.

Improve Pre-Existing Artwork: Machine learning algorithms have made AI adept at recreating art styles. AI can not only draw inspiration from previous artworks but also replicate nearly identical visuals and improve upon them, a profound change in how we conceptualize and appreciate art in our daily lives.

Efficient: With AI-generated art, endless revisions no longer increase the cost of the end product or waste the artist's and client's time.

Repair Artwork: AI has the potential to repair damaged artwork, and complete missing sections of new and old artworks alike.


Limitations of AI in Visual Arts

Bias in the Artwork: These programs learn from humans rather than from other AI entities. Since humans are not free of error, the AI may be exposed to controversial and offensive concepts that reinforce dangerous stereotypes. For example, DALL·E's published risks and limitations state that “words like personal assistant and flight attendant will generate images of women, and words like CEO will likely give you white men” [31]. To mitigate this, DALL·E tweaked its algorithm to include more diversity in the artwork being created. Unfortunately, this “reduced the signal to noise ratio”, and users were unsatisfied with the change [32]. It was perceived as superficial because it did not address the root problem: how data encodes social biases, and the need for rigorous data curation to avoid embedding societal stereotypes in these programs.

AI’s inability to handle negative or opposite prompts: With a negative prompt, instead of creating an image that matches the prompt, the AI tries to make the image look as different from it as possible. When a user named SuperComposite tried to create the opposite of an image described as “a skyline with cryptic lettering” on an unspecified AI art generator, they noticed something eerie about the result [33]. The artwork depicted a “devastated-looking older woman with defined triangles of rosacea on her cheeks”, whom the internet has since named Loab [34]. The concerning aspect of the image was the gore and violence present in the background. When the user tried to crossbreed the image with others to remove its dark ambience, the generator failed to shake it; instead, the results became increasingly horrifying, with Loab present in every artwork generated. Many have noted that the AI seems to associate the rosacea on Loab's cheeks with violence and horror, another example of learned social bias.


Opening exhibition of Ai-Da with a self portrait at the Design Museum of London[35].


Powered by AI and created in February 2019, Ai-Da is the world’s first robot artist [36]. She was named after Ada Lovelace, the world’s first computer programmer. Ai-Da creates abstract paintings in a variety of mediums meant to reflect the turmoil and uncertainty of modern society. Her art style has been called techno-cubism, and her eyes (cameras) interpret the environment around her [37]. Her robotic arm and algorithms carry out the creation of her paintings. Ai-Da was funded primarily by artists, who aim to challenge the concept of creativity and explore how humans can interact with AI to enhance their own creativity; they also refer to Ai-Da as an art project. Ai-Da has attended and presented at several art shows that have captured the attention of large audiences. Her latest appearance was at Oxford in September 2022, for an art exhibition and conference titled “Imagining AI”. The progression of her art is visible in the self-portrait she created about a year ago compared with earlier works, which consisted of scattered lines placed seemingly at random on canvas or paper.

Cosmopolitan Magazine Cover

Cosmopolitan released the first magazine cover created by AI (DALL·E) in June 2022. The creation of the cover was led by digital artist Karen X. Cheng together with team members from both OpenAI and Cosmopolitan [38]. It took several meetings and hours of writing and tweaking prompts to achieve the final result: “a wide angle shot from below of a female astronaut with an athletic feminine body walking with swagger towards the camera on Mars in an infinite universe, synthwave digital art” [39]. When addressing the creation of the cover, Cosmopolitan stated that it took only 20 seconds to make, a claim that disregards the creative process and iteration essential to landing on the chosen cover. The episode has opened up discourse about what constitutes art, and whether AI is a tool that helps create it or an entity that creates art itself.

The missing panels are attached to the Night Watch. Credit: RIJK MUSEUM

AI restores missing parts of Rembrandt’s “Night Watch”

Rembrandt was a Dutch painter whose work has come to represent the Dutch Golden Age of the 1600s. The Night Watch, formally known as Militia Company of District II under the Command of Captain Frans Banninck Cocq, was painted by Rembrandt in 1642 [40]. The painting was damaged in 1715, when its side panels were cut off in an attempt to fit it into a location where it did not fit; it has since been preserved at the Rijksmuseum. In 2019, the museum embarked on a journey to restore the missing side panels using three resources at its disposal:

  1. The remains of side panels that had been cut off
  2. A study of the painting done by Gerrit Lundens during the 17th century
  3. AI technology

Lundens' study was significantly smaller than the original, which is about 12 feet long. AI was used to match the missing pieces of the original painting with Lundens' copy. After the AI had detected and resized the missing details, it recreated them in the style of Rembrandt, using Lundens' study as a reference for placement [41]. Once the recreation was complete, the side panels were printed on canvas and attached to both sides of the Night Watch. AI was used rather than a hired artist to ensure that the mark of another artist would not affect or change the way Rembrandt's work is perceived. The multi-million-dollar project was completed in 2021, and the side panels were taken down after three months on display out of respect for Rembrandt.

AI in Performing Arts

In the past, music has always been attached to an artist. The process of producing a composition can be open-ended, and the musical talent and creativity behind each song are where the appreciation for music comes from. With current technology, parts of this process can be handled by artificial intelligence. As technology continues to advance, AI will expand through the performing arts, from music to dance and drama.

History of Music & Technology

Digital Audio Workstation (DAW)

From the 1970s to the 1980s, the digital audio workstation was developed and refined. It consists of the software and hardware used to record, edit, and produce audio. It allows artists to mix multiple tracks or recordings and to cut, copy, and paste audio, which led to the practice of sampling. [1]


Sampling

The term sampling was coined in the 1970s but was fully developed later. Sampling, one of the foundations of hip hop, is the reuse of a snippet of one recording in another recording. Digital audio workstations made it possible to modify a sample's speed and tone and to layer it with other sounds. [2]

Cher's Album Cover


Auto-Tune

In 1996, auto-tune was released, originally intended for pitch correction. Eventually, many R&B and hip hop artists incorporated auto-tune into their music as an artistic twist, and some even made it their signature musical trait. These early adopters drew criticism and backlash, as many believed using auto-tune indicated an inability to sing on key. Among the first artists to embrace auto-tune were T-Pain and Britney Spears, and the song most associated with its launch is Cher's 1998 smash hit Believe. [3]

AI-Generated Music Today

How It Is Developed

There are two ways AI-generated music is developed today. The first is to feed an abundance of source material into software; the software analyzes the material for patterns and, based on the chords, tempo, and note relations it picks up, produces an output. The second is to rely on hard-coded rules based on music theory, also known as algorithmic composition [4][5].
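The first approach can be sketched with a first-order Markov chain: the program learns which notes tend to follow which in the source material, then generates new output from those learned patterns. The melody and note names below are invented for illustration; real systems also model chords, tempo, and longer-range structure.

```python
import random

# Sketch of pattern-based generation: learn note-to-note transitions
# from source material, then sample a new melody from them.
random.seed(42)

source_melody = ["C", "D", "E", "C", "D", "G", "E", "C", "D", "E"]

# Count which notes follow which in the source
transitions = {}
for current, nxt in zip(source_melody, source_melody[1:]):
    transitions.setdefault(current, []).append(nxt)

def generate(start: str, length: int) -> list:
    """Walk the learned transition table to produce a new melody."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(generate("C", 8))  # a new melody echoing the source's patterns
```

Note how the sketch cannot produce anything outside the patterns in its source material, which is exactly the dependency on existing music discussed in the text.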

It is important to note that current AI music production requires existing music from actual artists to produce an output. Artists with writer's block likewise look to the creations of others for inspiration; nevertheless, because AI-generated music depends directly on other artists' input, it is often not deemed a creative process.

MIDI Files


The output of AI-generated music can be a musical instrument digital interface (MIDI) file or actual audio. A MIDI file contains the data needed for audio to be played, such as notes, note lengths, rhythm, and more; it does not contain audio itself. A MIDI file can be thought of as sheet music for an electronic playback device. Some programs, such as Amper, produce actual audio output [6].
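The "sheet music, not audio" distinction can be made concrete with a simplified note-event structure. The field names below are illustrative, not the actual MIDI byte format; a real MIDI file encodes the same information as timed note-on/note-off messages in a compact binary form.

```python
from dataclasses import dataclass

# Simplified sketch of the kind of data a MIDI file carries:
# instructions (which note, when, how long, how loud), not audio samples.
@dataclass
class NoteEvent:
    pitch: int        # MIDI note number (60 = middle C)
    start_beat: float # when the note begins, in beats
    duration: float   # how long it lasts, in beats
    velocity: int     # loudness, 0-127

# A tiny two-note "score"
track = [
    NoteEvent(pitch=60, start_beat=0.0, duration=1.0, velocity=90),
    NoteEvent(pitch=64, start_beat=1.0, duration=1.0, velocity=90),
]

total_beats = max(e.start_beat + e.duration for e in track)
print(total_beats)  # 2.0 — the piece's length in beats; no audio involved
```

A synthesizer or playback device turns events like these into sound, which is why the same MIDI data can be rendered with entirely different instruments.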

Beethoven's initial sketch of his 10th symphony

Experimental Examples

A common experiment with AI in music is to compose the music an artist might have produced, based on all of their previous works. The reaction of the artist's fanbase offers a rough gauge of how closely the AI-produced work resembles the artist's authentic work. This has been done with the discographies of Jimi Hendrix, Amy Winehouse, and even Beethoven [7].

Beethoven left behind this sketch of his 10th symphony when he died. A team including a machine learning expert and multiple musicologists later took on the project of completing the unfinished sketch using AI, and a feature documentary, Beethoven x The AI Project, was released to document the effort. [8]

Miquela's single cover "Not Mine"
Brian Eno's cover "Reflection"

Other Examples

In 2017, the viral virtual AI influencer Miquela released her first single, "Not Mine". Although she is an AI personality, it is unclear whether her music is produced by AI or outsourced by her creators [9].

Brian Eno, who concentrates on ambient music, released a generative version of his album Reflection. The loop plays infinitely with the help of AI and adjusts the music depending on the time of day. His intention was to produce endless music resembling "'sitting by a river': it's always the same river, but it's always changing". The piece highlights some of the benefits AI can bring to music, such as efficient, everlasting creation.


Many platforms exist for producing AI-generated music, and the knowledge they require varies. Google's and IBM's AI composing platforms, Magenta and Watson Beat, require coding knowledge and are distributed through GitHub. Other software, such as Amper Music, Verbasizer, and Logic, brings AI-assisted tools into more conventional production workflows.

Benefits of AI in Music

Boost creativity: Similar to literature and visual art, the use of AI in music can boost and supplement human creativity.

Increase efficiency: AI can produce without writer's block. The process of producing music can be significantly shortened.

Musical innovation: The use of AI in music can pave a new path for musical innovation. As music has progressed from acoustic instruments to electronics and beyond, AI is another contributor that will shape its future.

AI in Dance

Detecting Plagiarism

While AI-generated music is still very new, AI is also beginning to appear in dance. Wearable technologies can track and compare movements, making it possible to detect similarities between dance choreographies and thus detect plagiarism. [10]
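One plausible way to compare two recorded movement sequences, even when they are performed at different speeds, is dynamic time warping (DTW), which aligns similar sequences frame by frame. The source does not specify the algorithm used, so this is a hedged sketch; the joint-angle numbers below are invented for illustration.

```python
# Sketch: comparing two movement traces with dynamic time warping (DTW).
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW on 1-D sequences."""
    INF = float("inf")
    cost = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # stretch sequence b
                                    cost[i][j - 1],      # stretch sequence a
                                    cost[i - 1][j - 1])  # match frames
    return cost[len(a)][len(b)]

original = [0.0, 0.5, 1.0, 0.5, 0.0]                  # one joint angle over time
slower_copy = [0.0, 0.25, 0.5, 0.75, 1.0, 0.5, 0.0]  # same move, performed slower
unrelated = [1.0, 0.0, 1.0, 0.0, 1.0]                # a different move

print(dtw_distance(original, slower_copy))  # small: likely the same choreography
print(dtw_distance(original, unrelated))    # larger: a different move
```

A plagiarism detector could flag choreography pairs whose DTW distance falls below a threshold, applied per tracked joint.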

Producing choreography using Wayne McGregor and Google Arts & Culture Lab's tool

Choreography Creation

The technology for creating choreography with AI began with handwriting prediction, which isolated hand motions; the goal was then to extend the approach to the whole body. A tool identifies and traces a skeletal render of the dancer, similar to a stick figure, to analyze, mimic, and create dance routines. In 2018, award-winning choreographer Wayne McGregor began collaborating with the Google Arts & Culture Lab to create this tool, which produces choreography using AI. [11] [12]

AI in Beauty

Creative minds have long been the key drivers of innovation and trends in the fashion industry. That reliance is slowly shifting as AI takes over many manufacturing jobs at lower cost while also working on the front end: AI can now detect new trends and anticipate demand, staying one step ahead of what will be in style next. The makeup industry has also made strides in adopting AI over the last few years. [42]

Benefits of AI in Beauty

Personalization: The combination of AI and augmented reality (AR) technology offers shoppers a personalized experience, making shopping more targeted and easier. An example is skin shade-matching technology that lets consumers preview a product on themselves before purchasing or visiting a physical store.

Brand loyalty: By targeting customers and providing personalized solutions, brands get consumers to spend more time connecting with them, whether online or in-store. Virtual interactions with skincare products give customers more positive experiences, driven by the impact of having personalized options.

Customer experience interactions: The introduction of chatbots has transformed the way customers interact with brands. They help brands and retailers manage everything from tracking sales to recommending products, enhancing the customer experience.

Improved inventory management: AI uses predictive analytics to help fashion retailers understand customer behaviour and plan inventory accordingly. It also lets businesses track how quickly particular styles sell, so they can stock accurately for products that are in style.
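The shade-matching idea above can be sketched as a nearest-neighbour lookup in colour space. The shade catalogue and RGB values here are hypothetical; real systems use calibrated colour measurement and perceptual colour spaces rather than raw RGB.

```python
# Hypothetical shade catalogue: name -> average RGB of the swatch.
SHADES = {
    "porcelain": (244, 226, 205),
    "beige":     (222, 188, 153),
    "honey":     (198, 152, 109),
    "espresso":  (132,  88,  60),
}

def match_shade(skin_rgb, shades=SHADES):
    """Return the catalogue shade closest to a sampled skin tone.

    Uses squared Euclidean distance in RGB space for simplicity;
    a production system would convert to a perceptual colour
    space (e.g. CIELAB) before measuring distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(skin_rgb, c))
    return min(shades, key=lambda name: dist(shades[name]))

print(match_shade((200, 155, 112)))  # closest catalogue entry
```

The same lookup, run on a skin tone sampled from the shopper's camera feed, is what lets the AR preview recommend a shade before the customer ever visits a store.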

Breakdown of AI in Fashion

AI is now used at every step in fashion, from initial product creation to visualizing how a product looks on consumers before they purchase. AI anticipates upcoming trends by analyzing past trends, then uses that information to create new designs or improve current products. [43]

Examples: iLUK is an AI-based personal stylist that uses computer vision and 3D reconstruction technology to make technology-driven personal styling possible. It is designed as a pod to be placed at retail outlets.

Van Heusen has created a similar concept, implementing a virtual mirror in retail environments that lets users see how outfits would look on them by scanning an item's barcode. Once the item is scanned, the customer stands in front of the mirror and the virtual garment is projected onto the screen to fit the customer's body. This is just one feature; the video below shows everything the style studio is programmed to do.

Breakdown of AI in Makeup

The cosmetics industry has focused on using AI and AR algorithms to detect a face through the camera, locate focal points, and map the face. Scanning a customer's face lets them apply cosmetics virtually, see how shades and tones work against their skin, and receive better-personalized makeup recommendations from brands. [44]
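At the pixel level, virtual makeup application reduces to blending the product colour over the pixels the face map has labelled as, say, lips. A minimal sketch with illustrative colour values (real tools also model texture, lighting, and finish):

```python
def blend(base_rgb, product_rgb, opacity):
    """Alpha-blend a product colour over a skin pixel.

    AR try-on tools do essentially this for every pixel the face
    mapper has labelled as lips (or cheek, lid, etc.); the opacity
    models how sheer or opaque the product is."""
    return tuple(round((1 - opacity) * b + opacity * p)
                 for b, p in zip(base_rgb, product_rgb))

skin = (224, 172, 155)    # sampled lip pixel (illustrative)
lipstick = (150, 30, 45)  # product shade (illustrative)
print(blend(skin, lipstick, 0.6))  # previewed pixel colour
```

Running this blend only inside the lip region found by the face map is what makes the preview look like applied lipstick rather than a flat colour sticker.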

Examples: Perfect Corp. unveiled its AI-powered makeup tutorial platform last year, providing a new digital makeup tutorial with step-by-step lessons using AI. YouCam Tutorial is the makeup tool that virtually educates consumers on correct makeup application techniques. [45]

AI in Social Media

Virtual Humans (or Digital Humans) are computer simulations of human beings. These simulations have a number of traditional uses, such as in video games and movies, as avatars, and even as personal assistants. More recently, teams have begun creating Virtual Humans to act as influencers on social media. These influencers have gained large followings and high engagement across their accounts, to the point where many have begun collaborating with large brands on sponsorship deals.

Popular Virtual Humans

Some very popular Virtual Humans are Lil Miquela, FN Meka, and Barbie. A dedicated website tracks and shares updates on up-and-coming Virtual Humans. [46] Clicking on one of their profiles brings up bios, basic information, articles, and interviews they have done.


There are benefits to creating or hiring a Virtual Human to represent a brand. They can be designed to appeal to whatever target audience the creator wants, which is ideal for targeted marketing. For example, many of these figures appeal to kids by taking on the image of a cartoonish character; where a character that already appeals to a large group of kids exists, like Barbie, the only step left is to place that character in scenarios that give it the appearance of being human.

Another benefit is that whole teams can work on managing and updating the life of a single Virtual Human. This means new content is generated constantly, whereas real influencers are limited by having to go out and create content themselves. Similarly, one team can produce content for multiple Virtual Humans at once, a level of scalability that real influencers would struggle to replicate.

There are no physical boundaries for Virtual Humans. Since they can be edited into any scenario, a bit of Photoshop work is all it takes to have them appear for a brand in New York and then in Paris a few hours later. This lets them constantly engage different audiences and keep their content feeling fresh.

When it comes to collaboration, creators can easily put Virtual Humans together in dramatic scenarios to engage followers and build backstories for them. They have also collaborated with real humans by being edited into photos with them, so convincingly that many people commenting on the pictures have a hard time telling what is real and what is not. One of the most popular Virtual Humans on Instagram, Lil Miquela, has many photos with her boyfriend, a real person hired by her creators to pose for photos she can be edited into, adding to her perceived realness.

One of the important benefits for brands that wish to use Virtual Humans in their marketing is that they have, for the most part, a very safe brand image. Unlike humans, who could be caught up in a scandal at any time, Virtual Humans are completely manufactured and can avoid scandals because they have no real history aside from what is created for them.


The market for Virtual Humans is still new but growing, according to research by Statista. [47] This is good news for creators aiming to build revenue streams from AI influencers: according to Lil Miquela's owners, she brought in over $10 million in sponsorship revenue in 2020. This is made possible by brands' willingness to sponsor AI influencers; recently, many have signed on, including luxury clothing brands Prada, Dior, and Versace, and even phone brands like Samsung, which used Miquela in its #TeamGalaxy advertising campaign.


There is still room for controversy with AI influencers, even though they start with a clean slate and are designed to have a safe brand image. For example, Lil Miquela came under fire in 2019 for a fictional story suggesting she was the victim of a sexual assault. [48] This led people to question why the creators felt the need to add this to her character, and many called for AI influencers to stay away from such topics.

Another influencer, FN Meka, was accused of cultural appropriation: the character appears to be African American and draws on African American culture in the songs, posts, and backstory associated with the character. The accusations were amplified when followers found out that the creator was not African American himself.

Criticism and Considerations

Impact on Artists

Genshin Impact Art Piece. Left: AI generated. Right: Original image created by streamer ato1004fd.

Critics have questioned the ethical nature of AI art and whether it should be classified as art, as many AI art systems are trained on pre-existing artwork by artists. This has led to conversations concerning regulation and the rights of artists, as there are currently no mechanisms to opt artists out of training sets. As AI image generators allow users to instantly generate unique images in the style of particular artists when prompted, many critics believe this can financially harm artists.

In October 2022, AT, a Genshin Impact fan artist, was streaming their art process on Twitch. A viewer fed a screenshot of the in-progress image into NovelAI and "finished" the artwork before AT did. After the original artist completed and posted the work on Twitter, the viewer who had generated the copy demanded credit. [49]


Stephen Thaler's AI artwork, A Recent Entrance to Paradise, denied copyright protection by the US Copyright Office.

As AI art continues to become more prevalent, debates have been sparked over whether AI-generated artwork can be copyrighted for commercial use. Currently, there are limited safeguards for artists, as U.S. copyright law only protects artists against the reproduction of their actual artworks, not from someone mimicking their style.

The U.S. Copyright Office (USCO) rejected Stephen Thaler's copyright request for his AI-generated artwork, A Recent Entrance to Paradise, in 2019, and again on appeal in February 2022. In both rulings, the USCO found that the "human authorship" element, which is necessary to obtain a copyright, was lacking.

AI art advocates believe that copyright laws are outdated and need to be adapted to new technology. A spokesperson from OpenAI said: "When DALL-E is used as a tool that assists human creativity, we believe that the images are copyrightable. DALL-E users have full rights to commercialize and distribute the images they create as long as they comply with our content policy."

Other countries place less emphasis on human authorship as a requirement for protection. A judge in Australia ruled last year that AI-created inventions can qualify for patent protection. And South Africa allowed Thaler to patent one of his products last year, noting that "the invention was autonomously generated by an artificial intelligence." While Thaler owns the patent, the AI is listed as the inventor. [50]

In September 2022, Getty Images banned AI-generated content from its website, citing copyright concerns and the uncertainty of copyright law around generative AI. Other image sites, such as Shutterstock, are also screening for AI-generated artwork. [51]

Turing Test

AICAN artwork, "St. George Killing the Dragon,” sold for $16,000 in a 2017 New York auction.

The Turing Test, developed by computer scientist Alan Turing, measures whether a machine can exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

The Artificial Intelligence Creative Adversarial Network (AICAN), created by Rutgers University's Art & AI Lab, is the first machine-generated artist to pass the Turing Test. AICAN's algorithm was trained on over 100,000 images of Western art spanning the last five centuries. In 2017, Rutgers conducted tests at the Art Basel show, in which the majority of attendees could not distinguish artwork made by AICAN from artwork made by human artists. The first artwork offered for sale from the AICAN collection, “St. George Killing the Dragon,” sold for $16,000 at an auction in New York in November 2017. [52]

Final Thoughts

Is AI-generated art considered real art?

Art is subjective and open to interpretation. The formal definition of art states that art is "something that utilizes imagination and skill" and evokes emotions in the viewer. Other definitions refer to art as an evolution of works that came before it. In the past, several artists were innovators of new art forms that were heavily criticized by the public and are now admired by artists and the public alike. Today, modern/abstract art is often criticized for not being real art; however, most situations require context and time before something can be classified as art. For example, Marcel Duchamp, a renowned French painter, was known for his remarkable work in art styles ranging from cubism all the way to surrealism. Later in his life, he became intrigued by readymades, which are manufactured objects that have been altered[53]. These 'sculptures' were used to express "irony, humour, and ambiguity"[54]. One of his most famous readymade pieces, titled Fountain, was a urinal on which Duchamp scribbled the name "R. Mutt". The sculpture was submitted to an exhibition in New York in 1917[55] but was never actually put on display; today, it is considered avant-garde.

With this in mind, when we look at AI art, we need to consider the artists who provide input into these machines to generate the artwork. The need for human input (text-based prompts, reference images, etc.) shows that the creative process has not been entirely removed from the creation of AI-generated art. Additionally, as AI continues to advance, it will be able to create new and innovative art styles that we may benefit from. With regard to AI such as Ai-Da that does not need human input, it is important to acknowledge how much technology and algorithms already influence our daily lives. This alludes to the notion that our agency and thoughts are not truly original or our own[56]. As a result, some may view AI-generated art as real while others do not. Human input and the creative journey can be essential in creating a work of art that is culturally significant and timeless; it can create connections and speak to the experience of what it means to be human. On the other hand, as we increasingly shift towards a technologically integrated and dependent society, the definition of art may evolve to fit the community and culture of our future.

Future Scenarios

There are concerns about how the use of AI will affect humans. In the United States, there are already talks about creating guidelines, outlined in the proposed AI Bill of Rights, for how AI makes decisions that could affect humans. [57] The main concern is that AI can make decisions without human intervention; without regulation on how those decisions are made, AI could be making decisions that negatively impact individuals. For example, if a credit agency used an AI program to assign credit scores and the program was biased in a way that gave lower scores to a specific group of people, that could be considered harmful and have serious impacts on individuals' lives. The AI Bill of Rights is a proposal that would seek to protect consumers from harmful AI programs.

In relation to AI art, programs have to draw their inspiration from somewhere, and in this case it is art created by other people. With that in mind, there are concerns about future copyright issues and theft of intellectual property. At the time of writing, there is no case law surrounding AI art, as it is a very new technology, but it is a hot topic in the art community and will likely be explored by the courts in the near future if the technology continues to grow at its current pace.

As AI art produces increasingly realistic photos, to the point where some people cannot tell the difference between pictures of real humans and AI-generated humans, scams involving fake profiles and bots pretending to be humans become an interesting issue. There may come a point where scammers have programs advanced enough to trick people into elaborate schemes that steal their money or personal information. Ways of verifying an individual's identity online, such as having them send a photo holding up a certain number of fingers or a piece of paper with their name on it, could soon become useless or even counterproductive as AI gets better at simulating real humans.


Lauren Dar, Kristina Nguyen, Clara Wong, Aleena Zaidi, Jasdeep Hari, Anthony Wallace
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada

