Color palette change

Spoiler: This may be the first viable use I have found for generative AI in a costume design context.

I’m trying to use Microsoft Copilot as much as I can in this generative AI exploration research, since it’s the model my university has licensed, but I have to admit I’m running into roadblocks with that model’s capacity to generate imagery at the scale this research requires, both in the quantity of images it can create in a day and in the time it takes to generate them.

I decided to run another experiment using Google Gemini, which I have so far found superior for image generation, both in aesthetic output and in processing time.

In this experiment, I considered the issue of color palette change. A costume designer can be faced with this necessity in various circumstances – perhaps a color palette looked great when the designer was working on their own in the preliminary stage, but once they see the colors of the sets and the lights, they realize the palette of the costumes needs to change. Or perhaps, once the show is cast, the colors chosen before the actors were hired don’t work with the skin tones of the people playing the roles.

So, I took one of the original witch designs created by Microsoft Copilot & asked Gemini to revise the image’s color palette to reflect primary shades (reds, yellows, blues).

And here’s what it produced:

Which is fine, but the way it chose to distribute the colors is not what I wanted.
So I asked it to revise the image using blue in the dress/hat & red in the hair:

Promising!

I mean, not great, but the primary palette itself is not complex. I think the results could be more sophisticated with Pantone numbers or hex codes in the prompt (see the sketch below), and it’s definitely faster than repainting sketches, either literally or digitally.
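To make that concrete, here’s a minimal sketch (in Python, though you could just as easily type the result straight into the chat box) of what a hex-code prompt might look like. The helper function, garment names, and color values are all hypothetical, purely for illustration:

```python
# A hypothetical sketch of the idea, not any model's actual API: composing a
# palette-revision prompt that pins each costume element to an exact hex code.
# The garment names and color values below are made up for illustration.

palette = {
    "dress and hat": "#1F3A93",     # a deep primary blue
    "hair": "#C0392B",              # a strong primary red
    "trim and accents": "#F1C40F",  # a primary yellow
}

def palette_prompt(garment_colors: dict[str, str]) -> str:
    """Build a revision prompt that names a hex code for each costume element."""
    lines = [
        "Revise the attached costume sketch. Keep the silhouette, pose,",
        "and linework exactly as they are; change only the colors:",
    ]
    for element, hex_code in garment_colors.items():
        lines.append(f"- {element}: {hex_code}")
    return "\n".join(lines)

print(palette_prompt(palette))
```

The point of the hex codes is precision: “red” can mean a dozen different things to a model, but “#C0392B” means exactly one.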

Both of these images took less than 10 seconds to generate.

A reminder of the original image we were working with:


But here’s a negative example showing how the AI model you choose to run a prompt on drastically affects the outcome. Many of the headless-chicken AI hypesters pitching generative AI models would like users to believe that all of them are equally good at output, but that’s simply not true.

Hype aside, different models are better at different kinds of output. I tried the same exercise I ran with Gemini on yet another generative AI model, Claude.ai, and it failed spectacularly.

Original image

Hilarious attempt to render the witch image with a primary color palette

I’m told Claude’s forte is creative writing, not image generation, so I asked it for a poem about how disappointing generative AI is:


The AI Hype Deflation Blues

They promised us magic, a digital god,
With silicon brains that would give us a nod,
But ask it to reason or think something through—
It spits out word salad and calls the job "new."

"I'll write you a novel!" the chatbot declared,
Then produced forty chapters where nothing was there,
Just recycled tropes from a thousand before,
Like a drunk plagiarist stumbling through lore.

"I'll code you a program!" it boasted with pride,
Then broke every function and crashed the whole ride,
With loops that go nowhere and syntax so wrong,
It makes BASIC programmers weep into song.

The venture capitalists threw cash around,
Claiming AI would make every job hit the ground, 
But the only thing falling was common damn sense,
As we paid billions for linguistic pretense.

It hallucinates facts with confident flair,
Invents bogus citations from thin fucking air,
Then when you complain that it's wrong as can be,
It says "I apologize" quite contritely.

So here's to the future we thought we would see—
Where robots do dishes and poetry's free,
Instead we got chatbots that bullshit with style,
And charge us subscription fees all the while.

The emperor's naked, his circuits exposed,
Just autocomplete with a marketing dose,
So let's raise a glass to the hype that fell flat—
Our digital savior's a word-vomit brat!


Honestly, it’s so bad at image generation that it probably shouldn’t even try. If I prompted it to generate an image and it responded that it was not optimized for image generation, I would just go try a more appropriate model instead of getting pissed off that it spent time and energy creating the kind of crap it did. I guess we’re all learning.

Theoretically, the advantage of using the university’s licensed version of Copilot is data privacy. The images one might feed into our licensed Copilot platform don’t go on to train the model. 

In the case of the two images I used with Gemini and Claude, the originals were generated by Copilot, so it all winds up as part of the big AI Ouroboros anyway.
