GENERATIVE_AI

FIELD_NOTES_002

Figure 1b: Img2Img, 2023

June 23, 2024

Process

  • Blender 3D → render looping animation as image sequence

  • ComfyUI → use the rendered image sequence as a base layer, apply AnimateDiff motion modules guided by natural-language prompts and negative prompts, then add further layers of refinement and guidance with ControlNet
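The ComfyUI step above can also be driven headlessly: a ComfyUI workflow is a JSON graph of nodes that gets POSTed to a running server's /prompt endpoint. Below is a minimal sketch of such a graph in Python. The checkpoint filename and prompts are placeholders, and the graph shown is a plain text-to-latent pipeline using only core nodes; the actual AnimateDiff and ControlNet nodes come from community node packs, so their class names depend on what's installed and are omitted here.

```python
import json
import urllib.request


def build_workflow(positive: str, negative: str) -> dict:
    """Assemble a minimal ComfyUI-style workflow graph.

    Each key is a node id; inputs reference other nodes as [node_id, output_index].
    Model filename is a placeholder. In the pipeline described above, the
    EmptyLatentImage node would be replaced by the encoded render frames, and
    AnimateDiff / ControlNet nodes (from custom node packs) would be wired in.
    """
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd15_base.safetensors"}},  # placeholder
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": positive, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 16}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": 0, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         # partial denoise keeps structure from the base layer
                         "denoise": 0.6}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0],
                         "filename_prefix": "animatediff_out"}},
    }


def queue_workflow(workflow: dict, host: str = "127.0.0.1:8188"):
    """POST the graph to a running ComfyUI server's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt", data=payload,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)


workflow = build_workflow("surreal morphing loop, abstract forms",
                          "photorealism, text, watermark")
print(sorted(node["class_type"] for node in workflow.values()))
```

Calling queue_workflow(workflow) against a local ComfyUI instance would enqueue the graph; experimenting then mostly means regenerating the graph with different prompts, seeds, and denoise strengths.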

Remarks:

Wow, AnimateDiff is amazing. Flickery frame-by-frame style overlays are so last year. Now we're busting out buttery smooth (well, smoother) animations that maintain temporal structure and coherence. Just look at the difference between Figure 1a and Figure 1b. This improvement is exciting because the result feels actually watchable, especially for morphing loops and abstraction, which sidestep the inevitable critical lens through which our eyes view realistic scenes… by being, well, surreal.

I've found the tools powerful, yet comically unpredictable. As it goes with high-level tools, one loses more granular low-level control. As such, it can be a painstaking process of experimentation to find an output without obvious artifacts and imperfections.

Figure 1a: Vid2Vid, 2024

My artwork and I love to frolic through the infinite combinatorial playground of visual language, objects, subjects, concepts (or their active omission / negation).

Original 3D animation

—TO BE CONTINUED—

AI Stylization: Text prompts, AnimateDiff, ControlNet (ComfyUI)

Previous

GenAI: FIELD_NOTES_001