I cannot take credit for this video. That credit goes fully to Abel Art (on X, formerly Twitter, under the handle A_B_E_L_A_R_T). The video was released on X on July 31. While I have seen more polished results, the storytelling aspect is really well done – creating a visual narrative that moves the viewer through the story. Not surprising, because Abel Art is a cinematographer, editor, sound designer, and photographer with over 15 years of experience in audiovisual production. It shows…

What can YOU do with gen-AI? Be inspired…
Brian Sykes

::: The following is from Abel Art describing his process & video :::

Visual: Midjourney
Movement: RunwayML Gen 2 / Pika Labs
Video / Sound Effects: CapCut
Additional Music: Pixabay

Short Movie: Sh*t!

The story of a man who wakes up one morning with an empty fridge but a mind full of memories.

Here’s a little fiction I had fun making.
Well, the story was a pretext; what I was really trying to achieve here was consistency – consistency of the actor, consistency of the environments, consistency of the set design. And it’s hard, really hard.

A few thoughts:
Basically, general consistency is fairly easy to achieve today (colorimetry and ambience, architecture, era, general character design, etc.). But specific and recurring coherence is a tedious task at the moment: recurring objects, faces, outfits, locations, set design…

Basically, if you want to make a horror or sci-fi trailer, you’ll have a lot of fun.
But if your idea is to make a closed-door movie with 3 characters in an apartment, good luck! 
(For now)

It’s also complicated to have the actor play with an accessory, or even to wear exactly the same outfit. There are tricks for that, like giving him a distinctive touch (a yellow hat, a red scarf, etc.), but they’re just tricks for now. So we use references and remix and blend, blend, remix, blend…

(I also tried photo collage before going into Runway; the result was surprising.)
(I haven’t tried a third-party app such as Reface or roop, because I find them too restrictive.)

For me, the priority is in Midjourney (much more important than lip-syncing at the moment): we need to be able to save an actor via an actor seed, and then generate views of that actor from different angles, easily adding emotions, outfits, and accessories.

The same goes for a location: having a seed that corresponds to a place and then being able to pull out different angles in a consistent way, as if Midjourney understood places and characters in three dimensions and you could put the camera wherever you want.


If you are on X (formerly Twitter), give @A_B_E_L_A_R_T a follow…