
Writing Doom – Award-Winning Short Film on Superintelligence (2024) – YouTube

Posted by jpluimers on 2025/10/24

A year ago today, this interesting short film became available on YouTube, about what an Artificial Super Intelligence could bring, especially when it becomes the villain or bad guy: [Wayback/Archive] Writing Doom – Award-Winning Short Film on Superintelligence (2024) – YouTube (some interesting comments below).

Synopsis from [Wayback/Archive] Writing Doom • Film + cast • Letterboxd:

A writing team are given the task of making Artificial Superintelligence the ‘bad guy’ for the next season of their TV show. With the help of a newcomer to the team (a Machine Learning PhD), they must figure out how and why an ASI might function as an antagonist – and the threat it might pose to humanity.

A few important notes:

  • there is no good single definition of intelligence, let alone of AGI (Artificial General Intelligence) or ASI (Artificial Super Intelligence)
  • ASI and its goals might be different from human intelligence and human goals
  • humanity might not realise or recognise that an ASI exists (at all, or at the moment it has just become one)
  • if humanity does recognise it, it might not be able to control (i.e. shut down) an ASI (for many reasons: not just it being too intelligent, but also a lack of consensus – read: humanity smashing each other's heads for no reason before even reaching consensus)

Maybe AGI and ASI are like nuclear war, and this WarGames conclusion is sensible after all: “the only winning move is not to play”. Though with the money at stake, AGI and ASI might be pursued anyway. I doubt that will happen in my lifetime though.

Some interesting comments:

  • The 5 year old having to hire a CEO is a fantastic analogy.
  • The “What if destroying us is music to them” and “Making guitars out of trees” got me, and I don’t understand why it was so easily dismissed. The best outcome could be our destruction, but worse than that, we could be farmed for their pleasure.
  • As someone who’s been working on ML long before LLM-mania, these arguments are ones that have been driving me insane. To see them articulated with comedy and wit is just amazing.
  • “Maybe it won’t happen.” – The most terrifying statement of all.
  • As an undergraduate pursuing a bachelor’s degree in machine learning, this really hits hard. Even with all this progress with machine learning models, we know ridiculously little about what actually goes on inside a model, because there are literally billions of parameters to check, and we can only know what models are learning through their output and how that output matches what we actually want. Which makes it really easy for a smart model to fool us.
  • Great film! Touched on just about every major aspect of interacting with an ASI.
    A quote from Arthur C. Clarke’s “Childhood’s End” seems apt:
    “For what you have brought into the world may be utterly alien, it may share none of your desires or hopes, it may look on your greatest achievements as childish toys – yet it is something wonderful, and you will have created it.”
  • I recommend reading Nick Bostrom’s “Superintelligence”. We assume that an ASI should pursue the same goals as humans. However, if its goal is to produce paper clips, it could use up all the energy and matter in the universe to achieve this goal 100%. We also cannot be sure that an ASI shares our values. To achieve peace, it could come to the conclusion that it is not achievable as long as humans exist. So the solution would be the extinction of humanity. These are of course extreme examples, but they are intended to show that a superintelligence is not bound by our ethical and moral ideas, since it did not have to follow the same evolutionary cultural development as we did.
  • Jerry: “How would it kill us all? You have to tell us how it would do that.”
    Max: “All right, I’ll write down one possible way, and I’ll pass it to Gail, and she’ll tell us whether it would work. But Gail, you may never, ever reveal the plan. Do you agree to that, Jerry?”
    Jerry: “Yeah, all right.”
    Gail: “All right. Um… Oh. Oh, hell. Oh, God, that would work. There’s no way we could stop that.”
    Jerry: “What?? Let me see that.”
    Gail and Max: “NO”
    Max: “You agreed to the rules. And that’s just a plan by a human; a superintelligence could come up with something much better. Now you come up with a way to stop that plan, without knowing it beforehand.”
  • Cutting down a tree, to create a guitar, to make music – that hit home.
  • This film hits the key point, that a super intelligent AI will pursue its goals, likely using means we can’t predict and therefore can’t control. No silly terminators, but simply reallocating all resources to its goals. Best case scenario, humans get pushed into small reserves, just as we have pushed highland gorillas into smaller and smaller areas, thus reducing their population. Humans didn’t set out to eliminate highland gorillas or vaquita porpoises, etc. — those are just sad side effects of pursuing our own goals.
  • Best takeaway, for me, is the line “people don’t want to change what they want.”

Crop images from Letterboxd:

[Wayback/Archive] 1276227-writing-doom-0-230-0-345-crop.jpg (230×345)

[Wayback/Archive] 1276227-writing-doom-0-1000-0-1500-crop.jpg (763×1145)

Query: [Wayback/Archive] Writing Doom plot at DuckDuckGo

--jeroen
