What Happens When You Feed a Drawing into Runway
This is a process post.
I took drawings from the diagrams series and uploaded them into Runway, an AI video generation tool that takes a still image and animates it, using machine learning to predict and generate motion.
I wasn't sure what to expect.
The process
The workflow is straightforward: upload your image, write a short prompt describing the kind of motion you want, set the duration, generate. Runway gives you a few seconds of video — smooth, rendered, uncanny.
The prompt matters more than I initially thought. Vague prompts ("make it move") produce generic drifting or zooming. More specific prompts that respond to what's actually in the image produce something stranger and more interesting. I found that prompts describing atmospheric qualities rather than literal movement worked best: slow atmospheric shift, breathing, gentle oscillation, language that matched the register of the drawings themselves rather than trying to impose something foreign onto them.
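For anyone who wants to run this loop outside the web interface, Runway also exposes a developer API. The sketch below is a minimal example, assuming the official runwayml Python SDK and its image-to-video endpoint; the model name, duration and ratio values, and the filename are my assumptions, not details from this post, so check the current API docs before relying on any of it.

```python
# Minimal sketch: animating a drawing via Runway's developer API.
# Assumes the `runwayml` Python SDK (pip install runwayml) and an API key
# in the RUNWAYML_API_SECRET environment variable. Model and parameter
# names follow the SDK docs as I understand them.
import base64
import time

from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

# The API expects an HTTPS URL or a data URI for the source image,
# so encode a local drawing as a data URI. Filename is hypothetical.
with open("diagram-07.png", "rb") as f:
    data_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()

# An atmospheric prompt, in the spirit described above,
# rather than a literal instruction like "make it move".
task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image=data_uri,
    prompt_text="slow atmospheric shift, gentle breathing, subtle oscillation",
    duration=5,        # seconds of generated video
    ratio="1280:768",  # output aspect ratio
)

# Generation is asynchronous: poll the task until it finishes.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

if task.status == "SUCCEEDED":
    print("Video URL:", task.output[0])  # a short-lived download URL
else:
    print("Generation failed")
```

The same call wrapped in a loop over a folder of drawings would batch a whole series, with the prompt varied per image.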
What I'm still uncertain about
I don't yet know where this leads as a practice. I have plenty of ideas and am excited about the possibilities, but I don't want to lose the aesthetic qualities of the drawings or let the work become predictable. I'm not sure about narrative either.
What I can say is that Runway is a serious tool for artists working with drawing and image-based practice. The results are not gimmicks. Used thoughtfully — with source material that has something to say — it produces work that extends rather than replaces the original.