AI-generated art/music is the coolest thing I have ever encountered in my life.
Check out my curated list of AI art/music resources and tools. It includes both non-technical resources that anyone can use to generate art and technical resources on the underlying math and machine learning.
Also, see my AI art blog posts.
In August 2022, I gave a talk on text-to-image generative models for Machine Learning Tokyo (slides) (video).
In October 2022, I gave a short talk on AI Art NFTs.
Below are some of my favorite outputs curated from my use of DALL·E 2 and Stable Diffusion.
"A very very cute drawing of three Chinese zodiac animals by Takashi Murakami" (DALL·E 2)
"Candid photo of Totoro, photographed by Annie Leibovitz" (DALL·E 2)
"A super saiyan jellyfish, 8k digital art" (DALL·E 2)
"A super saiyan cute kpop girl with blue hair surrounded in flames, 8k digital art" (DALL·E 2)
"rusty metal cyborg with a fiery dark portal to hades in the chest" (DALL·E 2). I first ran a photo of myself through animefilter.com, then uploaded the anime-style face into DALL·E 2 after erasing the white space around it.
"glamorous kpop idol partially made of intricate steampunk cyborg parts posing for a photo, 35mm film" (DALL·E 2)
"A grandiose heavenly xianxia city in the clouds, 8k digital art" (xianxia is 仙俠, a Chinese fantasy genre) (DALL·E 2)
"Isometric view illustration of a sprawling cyberpunk metropolis, bustling Tokyo cityscape, futuristic scifi architecture in the year 2100, nighttime satellite photo, synthwave art by james gilleard, bruce pennington, unreal engine" (Stable Diffusion)
"Pop art of daft punk at a vaporwave neon futuristic cyberpunk Tokyo bustling street at night cyberart by liam wong, rendered in octane, 3d render, trending on cgsociety, blender 3d" (Stable Diffusion)
To generate a video of changing anime faces, I interpolated within the latent space of a custom autoencoder, itself trained on the latent space of the pretrained VAE from the Waifu Diffusion model (fine-tuned from Stable Diffusion). I plan to publish a paper about the methods I used.
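The core interpolation step can be sketched roughly as follows. This is a minimal illustration, not my actual pipeline: it assumes spherical linear interpolation (slerp), a common choice for traversing Gaussian-distributed latents, and uses random arrays shaped like Stable Diffusion's 4-channel latent maps in place of real encoded frames.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Interpolating along the hypersphere (rather than linearly) keeps
    intermediate latents closer to the high-density region of a
    Gaussian prior, which tends to decode to cleaner frames.
    """
    a, b = z0.ravel(), z1.ravel()
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors nearly parallel
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

# Two random latents shaped like a 64x64 Stable Diffusion latent map.
rng = np.random.default_rng(0)
z_a = rng.standard_normal((4, 64, 64))
z_b = rng.standard_normal((4, 64, 64))

# 30 interpolated latents; each would then be passed through the VAE
# decoder to produce one video frame.
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 30)]
```

At `t=0` and `t=1` the formula reduces to the endpoint latents exactly, so the video begins and ends on the two source faces.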