Sketching style iteration

This is fascinating. I uploaded a whole show's worth of costume designs to Microsoft Copilot in service of teaching it my rendering style, the final renderings from the Playmakers Repertory Company 2010 production of Shipwrecked: an Entertainment.

Then I instructed Copilot to create an interpretation of the Elphaba witch design “in the same style,” and what it created is not at all what I anticipated. It’s not really influenced by the drawing style in my Shipwrecked renderings, and yet it’s absolutely fantastic.

For comparison, one of the design renderings I shared with Copilot to teach it my drawing style:

Again, it’s not accurate to describe what generative AI does between input and output as “thinking.” It’s possible that it recognized the only female figure among my sketches as that of a Black woman with locs (which was true of the cast of that show, reflected in these final renderings) & that’s what it took from the sketches to generate its new iteration of Elphaba.

I should also mention that it took almost 24 hours from prompt to result on this one. The biggest obstacle [1] I see to this being useful in either practical or academic theatrical design applications is the time factor: tickets are sold and shows have to open, and professors may be unwilling to give a student a deadline extension because of a sluggish AI model.

The more I work with it, the more aesthetically sophisticated the outcomes seem to be, but the technology doesn’t yet feel ready for real application in our field.

I’m getting the same sense I got conducting research on 3D scanning of antique hat blocks in 2011: I could tell that one day the technology would advance enough to do what we were trying to do, but it wasn’t there yet.


[1] Beyond, that is, the slew of ethical objections from designers and other theatre artists, which is currently a prodigious obstacle in its own right.

