
With most generative AI image models, like DALL-E, Midjourney or Adobe's Firefly, you can type "an image of a bird" or "a bicyclist maneuvering through traffic" and instantly get a result that, if it isn't photo-realistic, is at least good enough for an editor on a deadline. Anyone who has taken hundreds of photos to get that one perfect shot would be forgiven for breaking out in a sweat thinking about it.
But I would encourage all creative people to play around with these models and learn how they work. Large language models are essentially a massive mapping project across the whole of human knowledge (ill-gotten or fairly licensed, depending on the people building it). That's a pretty powerful tool for brainstorming ideas and for quickly seeing what you DON'T want to do with your effort and time.
For TTBOOK, I use AI image generators to create rough drafts of ideas that have no immediate visual to work from, like "consciousness" or "cosmic dread." Being able to quickly execute on lazy or bad ideas clears the road for more creative, interesting or unexpected ones. Moreover, sometimes these models make connections I wouldn't think of — between color temperatures and abstract concepts, between shapes and emotions — that make sense once I see them with my pattern-matching human brain.
Ultimately that's what I think the future of AI looks like — a machine built to mix and match patterns from across all of human creative history, and a human being to take what it surfaces as inspiration for something new.
As we discuss in this weekend's show, the conversation about AI is less about technology and more about what people decide to do with it. It's a long road filled with danger. Ripe for abuse. There will be arguments about what art is, and who gets to claim inspiration versus theft versus exploitation. But my hope is that having these tools in our hands at least opens up the discourse to all of us, rather than leaving it confined to AI labs in Silicon Valley.
—Mark