Future generations might believe that some of these Seedance 2.0 videos (unearthed from a long-buried USB drive) represent the latest in mid-20th and early 21st century cinema. They may not notice the strange movements, the lack of blinking, the stilted dialogue, and the fixation on hand-to-hand combat.
I hope that’s not the case, but considering that real films are decaying at an alarming rate while digital content persists as eternal stored bits and bytes, this scenario isn’t so far-fetched. That, of course, would be a calamity.
A silly little ’60s AI comedy short I created with Nano Banana and Seedance 2. pic.twitter.com/haKQuQVYqS (February 22, 2026)
With its resemblance to Douglas Sirk’s mid-20th century films and its saturated ’50s-style blues, it’s almost charming. That is, if you can look past AI artifacts like a 12-piece band populated by a dozen duplicate musicians, or numerous restaurant patrons who look eerily similar.
There are other anomalies, and many of them are characteristic of AI videos produced on Seedance 2.0 and other platforms. Still, Seedance 2.0’s abundance of content is unprecedented. As I write this, social media is flooded with short videos featuring countless characters, usually engaged in some sort of battle or an impossible brand crossover.
I’ve seen at least two Matrix videos showing rematches between Neo and Agent Smith. There’s a video of Marvel’s Doctor Strange fighting DC’s Superman and another of the cast of The Office meeting Iron Man.
One battle after another
A Matrix-level action scene used to cost more than $10 million in Hollywood. Now it’s done in 2 minutes. Seedance 2.0 by @MartiniArt_ 🔥 pic.twitter.com/uL1Y3Wf1Iu (February 23, 2026)
Time after AI time
In all cases, those writing the prompts and generating the videos pay no attention to intellectual property rules or to the concerns expressed by those representing the actors. Worst of all, the actor you know best for playing [fill in the blank] is forced to reprise the character (without their consent) in these renegade videos.
It’s a big deal, no doubt, but I found myself gravitating toward two of the more original videos, the ones that attempt to tell new stories without tapping into someone else’s intellectual property.
I began to wonder how they were created and to consider how, even when the ideas and characters are new, there are inescapable oddities in Seedance 2.0 videos.
Time Traveler (made with Seedance 2.0). I created this short time travel scene using Seedance 2.0 in just one day for under $200. pic.twitter.com/ImeoTh0vLe (February 22, 2026)
It’s a nice 5:30 clip, but the AI weirdness keeps piling up. For some reason, everything is “shot” in a Wes Anderson style, with each character framed dead center.
No one blinks, and emotions are either missing or conveyed through strange tics, as when one character sniffs his pen in a panic.
As with much of the Seedance 2.0 content I’ve consumed, I noticed that the skin on most characters looks a little waxy at times. The effects can be good, but they tend to be repetitive; I assume Al-Ghaili generated them once and then reused the sequences.
My favorite part might be the robot. Although, like so many things in this and other AI-generated videos, it’s derivative.
All shots were created with this single image created in Nano Banana on @freepik; for a couple of shots, I took screenshots of videos and brought them back into Nano Banana to create variations or lightly edit. pic.twitter.com/F0EXvwKmbB (February 23, 2026)
Another AI time, another AI place
As much as I dislike these videos and the consternation and anxiety they are generating across multiple industries, I am fascinated by how they are made.
Many creators like to claim that they created the work with a “single prompt,” but I suspect they are being somewhat disingenuous.
I noticed in Christopher Gwinn’s post that he credited Nano Banana for some of the work in his “Silly Little ’60s AI Comedy Short.” I had to learn more, so I peppered him with questions on social media:
- Was it a single prompt or several?
- Who wrote the dialogue?
- How much description did you have to give to Seedance 2.0 to get the desired result?
- Did you tell it to use the same “actors” in multiple scenes and within the same scene?
More than a simple prompt
Gwinn, who works in Hollywood as a digital creator, told me on X that he started with a single Nano Banana AI-generated image (above) that he built on Freepik. That image, inspired by the films of French filmmaker Jacques Tati (famous for the Monsieur Hulot comedies he directed and starred in), was used to develop the entire Seedance 2.0 sequence.
While Gwinn usually writes his own dialogue, he took a different route with this comedy short: “I told Seedance what was happening in the shot. I only wrote a couple of lines; sometimes, after it generated some original dialogue, I’d modify it slightly and run the prompt again,” he shared with me on Threads.
Gwinn also reused some characters across several shots. Once he had all the pieces, including the same couple dancing in multiple scenes, he cut and edited them in traditional video editing software, switching between Adobe Premiere and CapCut.
What Gwinn described to me was a process, ultimately not much different from what a traditional filmmaker might follow. There are notable exceptions, such as the use of AI-generated personas instead of actors. And for all of Gwinn’s work, he can’t eliminate the uncanny feel of the entire enterprise.
Something is wrong
Sure, it may remind you of comedies from the ’50s, ’60s, or even ’70s, but it also feels off. The slapstick makes little sense because there’s almost no setup for each joke; we come in midway through almost every comic moment. It made me feel as though I was watching a trailer for a mid-century comedy that was trying too hard to make you laugh.
The other anomalies, such as the physics not quite working and bodies sometimes moving as if they had no bones, are evident in virtually every Seedance 2.0 clip. However, with the rapid advancement of AI, they will be resolved within a few months.
I like understanding how these videos are made. It makes me feel a little better about the rapid progression of this “art,” knowing that the digital creators behind it are probably using much more than a single prompt to achieve the desired result.
I hope, however, that in their quest to generate increasingly strange scenarios for Neo, Iron Man, Superman, Brad Pitt, and Tom Cruise, they will stop and think about how they can use these tools to create something new: an art that can finally stand on its own.