As you've undoubtedly noticed over the past couple of weeks, I've been toying with generative AI to create images, and enjoying it. It's fun, it's easy, and it's amazingly satisfying and addictive.
It's come a long way in just the past year. Back in August 2022, John Oliver featured a segment on AI art on his show, Last Week Tonight. Oliver being Oliver, he mainly focused on AI pictures of himself that he found on the internet. This is what AI art looked like in August 2022:
The picture, captioned something like "John Oliver marries a cabbage," was probably created by an early version of the DALL-E model. Sure, it looks like John Oliver, but the face is kind of scrunched up and everything looks pretty artificial and unreal.
But DALL-E has gone through several upgrades and improvements since then. Six months after the original cabbage segment aired, Oliver ran another segment on AI. By February 2023, this is what AI imagined an older John Oliver would look like marrying Miss Piggy:
The rendering, the textures, the shading, and more are all greatly improved. Given the date, this image was probably produced using DALL-E 2, a big improvement over the original. Bing.com now offers DALL-E 3. The picture below obviously isn't John Oliver, but it gives you a good idea of what DALL-E 3 is capable of today:
We're but a year past the first picture, and one can only imagine where the technology will be a year from now.
But the problem I have, and one I imagine I share with a great many other users, is what to do with all of the images I create. It doesn't take long to accumulate a collection of 100, 200, 500, or even thousands of AI images. But what to do with them all?
You can create an online gallery, sure, but how do you attract viewers? You can share the images on open forums like Reddit, Facebook, Instagram, etc., but you'll probably find that not only will they be largely ignored, but that some people are downright hostile to AI art. Someone (thankfully not me) posted an on-topic, AI-generated cartoon to a Facebook page, and it was widely disliked. One person commented, sarcastically, "Look Mommy! I typed some words in the computer and it made a funny picture!" Ouch.
Here's my experience. I made a comic-book-style drawing of an enthusiastic Boston Celtic playing street-court basketball and charging toward the viewer. I liked the image and thought it had a certain pep-rally, kick-ass kind of energy. Before a game against the New York Knicks, I posted it to the Celtics subreddit with the title "Celtics Gonna Run All Over New York Tonight!" I expected to see a lot of comments saying things like "Hell, yeah, let's go Boston!" and "Knicks don't stand a chance tonight!"
Instead, I received comments like "What the fuck is this shit AI doing on this page?" and "Why am I seeing this? Can't the mods ban AI from this group - it's so annoying!" One person asked, "What kind of oddball shit is this?" and pointed out all the details that made the picture "odd" - the misspelling on the uniforms (AI image generators have a hard time with spelling), the basketball hoop three stories up in the air, and "what team is that ginger on the right supposed to be?" Others started brigading and downvoting my post and eventually the mods took the picture down, sending me a condescending message that they "appreciate some people's enthusiasm" but it's their job to maintain a certain level of quality on the subreddit.
So I get it - not everyone likes AI. Apparently, some people even find it offensive. AI images do bring up unsettling questions about who actually created the picture - the child who typed some words into the computer, or the AI model itself? Or is it some sort of collaboration of human and machine?
Some people are afraid of losing their jobs to AI and see generative text and art as robbing them of their livelihoods. Others are paranoid about being deceived by "deep-fake" technology and, not wanting to be duped, wish that the genie could somehow be put back in the bottle.
Or are some people just offended and even frightened by anything new? I read in the news about a man who purchased some art and hung it on his wall. He really liked the picture, but when he learned that it was generated by a computer and not a human, he felt deceived and disappointed. He no longer liked the picture, took it down, and returned it to the seller. I don't really understand how the identity of the artist affects one's enjoyment of the art. If you like a picture, if the composition and color palette and subject matter are pleasing to you, does it matter if the artist was a man or a woman? Straight or gay? Black or white? Christian, Muslim, or atheist? Human or virtual?
Some people say it's not "fine art" without the conscious intention of an artist. But artists have been introducing non-conscious "chance" elements into their work at least since Jackson Pollock's "drip paintings" of the 1940s. Is a work of art somehow diminished if its creator allows some element of chance and randomness into the creation process? If you say "yes," then John Cage and Robert Rauschenberg have some bad news for you.
I've heard some people compare the current reaction to AI art with the advent of photography. Back in those days, I'm sure people said, "A photograph isn't art - it's just a reflection captured by lenses and film. Even a child can do it!" But then visionary artists like Ansel Adams came along and made unique and undeniable fine art using photography and ended that controversy.
I think AI imagery is currently waiting on some visionary artist to come along and show us what this new medium can do. I'm not talking about some next Ansel Adams who can make extraordinary images using AI. In fact, I don't think photography is the right medium for comparison at all.
I'm thinking about graffiti. Just a few decades ago, no one thought of graffiti as "art." It was a nuisance and an eyesore ("Why am I seeing this? Can't it be removed? It's so annoying!"). But then artists like Banksy and Keith Haring came along and created art that could only be expressed as graffiti. It was context specific - it wouldn't have made sense in a gallery or the pages of a magazine (although the styles have since crossed over from the streets and the back alleys).
Right now, we're waiting for someone to come along and do something - I have no idea what - with AI images that couldn't be done in any other medium. But when that happens, it will spark the next revolution in art, and everyone will look at the medium in a new and different way.
Maybe it's something like performance art. I'm not predicting it will be specifically this, but here's an idea of the kind of thing I'm imagining. The artist takes the stage, sings into a microphone, and her words are translated into text in real time by a computer. The text is then fed into an AI, which instantly generates images based on the prompts onto a screen behind her. She sings, "I was driving on the Golden Gate Bridge," and an image instantly appears behind her of someone doing just that. This continues throughout the song.
As far as I know, the technology for that kind of instantaneous, or near-instantaneous, words-to-text-to-image generation doesn't quite exist yet, but given the rate at which this tech is evolving, it wouldn't surprise me at all to see it emerge in the next 12 months.
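The shape of that pipeline - sung words, transcribed to text, fed to an image model, projected behind the performer - can be sketched in a few lines of Python. This is purely a hypothetical skeleton of my own invention, not any real API: the speech-recognition and image-generation stages are stubbed out, and a real system would back them with a streaming transcription service and a DALL-E-style image model.

```python
from dataclasses import dataclass, field


@dataclass
class PerformancePipeline:
    """Hypothetical sketch: lyrics in, images out, one frame per phrase."""

    frames: list = field(default_factory=list)  # stands in for the on-stage screen

    def transcribe(self, audio_phrase: str) -> str:
        # Stub: a real implementation would run streaming speech
        # recognition on live microphone audio.
        return audio_phrase

    def generate_image(self, prompt: str) -> str:
        # Stub: a real implementation would call an image-generation
        # model and return image data, fast enough to keep up with the song.
        return f"<image: {prompt}>"

    def on_lyric(self, audio_phrase: str) -> str:
        # Each sung phrase becomes a prompt, and each prompt becomes
        # a frame projected behind the performer.
        prompt = self.transcribe(audio_phrase)
        frame = self.generate_image(prompt)
        self.frames.append(frame)
        return frame


pipeline = PerformancePipeline()
pipeline.on_lyric("I was driving on the Golden Gate Bridge")
```

The hard part, of course, isn't the plumbing - it's making the `generate_image` stage fast enough to feel instantaneous to a live audience.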
We can even imagine a step further. Generators can already produce animated versions of pictures. Imagine someone takes the stage and begins to narrate a story. Using the words-to-text-to-image technology imagined above, an animated version of the story appears on screen. A person could potentially improvise an entire movie on the fly, stream-of-consciousness. Now THAT would be a breakthrough in performance art!
Or a producer walks into the studio to pitch his next film. "Imagine a woman, driving alone, going across the Golden Gate Bridge," he begins, and as he does, that exact image appears behind him and the studio execs don't have to tax their imagination. It's right there in front of them.
Exciting, isn't it? We're right at the cusp of technology and art and the next big thing.