The Strange Math That Predicts (Almost) Anything – YouTube
Posted by jpluimers on 2026/03/18
Relatively old mathematics that is still relevant: Markov Chains.
They are about predicting events based only on the current state of affairs (not on past states). Lots of AI has been built on Markov Chains for a long time: spam filters, text prediction while typing, search engine results, language recognition by letter pairs, and many more.
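The text-prediction use case is easy to sketch: count which character follows which, then predict from the current character alone. A minimal bigram (letter-pair) model, with a made-up toy corpus:

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for every character, which characters follow it."""
    counts = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model, current):
    """Return the most likely next character, given only the current one."""
    followers = model.get(current)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus — a real predictor would train on far more text.
corpus = "the quick brown fox jumps over the lazy dog"
model = train_bigram_model(corpus)
print(predict_next(model, "t"))  # 'h' — the most common follower of 't'
```

Note that the model never looks back further than one character: that single-character state *is* the Markov property.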
A nice video about it is [Wayback/Archive] The Strange Math That Predicts (Almost) Anything – YouTube
Many foundations of information technology are related; Markov and Shannon are the two mentioned in the video:
- Andrey Markov – Wikipedia
- Bayesian statistics – Wikipedia (Dutch: Bayesiaanse statistiek – Wikipedia)
- Claude Shannon – Wikipedia (Dutch: Claude Shannon – Wikipedia), especially papers like:
- [Wayback/Archive] shannon_51.pdf [Wayback PDF View/PDF View] “Prediction and Entropy of Printed English”
- [Wayback/Archive] entropy.pdf [Wayback PDF View/PDF View] “A Mathematical Theory of Communication”
- [Wayback/Archive] Shannon_Claude_E_Weaver_Warren_The_Mathematical_Theory_of_Communication_1963.pdf [Wayback PDF View/PDF View] or [Wayback/Archive] Shannon-Weaver.pdf [Wayback PDF View/PDF View] “The Mathematical Theory of Communication”
and Wikipedia entries like
- Shannon coding – Wikipedia
- Shannon’s source coding theorem – Wikipedia
- Shannon entropy – Wikipedia (now generalised as Entropy (information theory) – Wikipedia)
- A Mathematical Theory of Communication – Wikipedia
and this 1998 overview article [Wayback/Archive] Verdu_Sergio_1998_Fifty_Years_of_Shannon_Theory.pdf [Wayback PDF View/PDF View] “Fifty Years of Shannon Theory”
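Shannon entropy, mentioned in the entries above, is simple enough to compute directly: H = −Σ pᵢ·log₂ pᵢ over the symbol frequencies. A minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy("aaaa"))  # 0.0 — a single symbol carries no information
print(shannon_entropy("abab"))  # 1.0 — two equally likely symbols = 1 bit each
```

Shannon's “Prediction and Entropy of Printed English” estimates exactly this kind of quantity for English text, using letter statistics much like a Markov model's.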
The cool thing about Markov Chain based systems is that, in essence, they are memoryless: only the current state is involved, which keeps their footprint relatively small.
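That memorylessness is visible in code: stepping the chain needs nothing but the current state and a transition table. A tiny sketch with a hypothetical two-state weather chain (the states and probabilities are made up for illustration):

```python
import random

# Hypothetical transition probabilities: tomorrow depends only on today.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Sample the next state using only the current one — the entire 'memory'."""
    r, cumulative = random.random(), 0.0
    for nxt, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

state = "sunny"
for _ in range(10):
    state = step(state)  # no history kept anywhere
```

However long you run it, the footprint stays one state plus a fixed table.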
Oh: and you need to learn how to properly shuffle cards in order to make the shuffle behave like a Markov Chain and end up with a random enough deck.
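Each riffle shuffle is itself a Markov step on deck orderings; the classic result (Bayer–Diaconis) is that about seven riffles get a 52-card deck close to uniform. A toy simulation in the spirit of the Gilbert–Shannon–Reeds model (a simplified sketch, not an exact implementation):

```python
import random

def riffle_shuffle(deck):
    """One riffle: binomial cut, then interleave, dropping cards
    from each half with probability proportional to its size."""
    cut = sum(random.random() < 0.5 for _ in deck)  # ~binomial(n, 1/2) cut point
    left, right = deck[:cut], deck[cut:]
    shuffled = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return shuffled

deck = list(range(52))
for _ in range(7):  # ~7 riffles bring a 52-card deck close to uniform
    deck = riffle_shuffle(deck)
```

The next ordering depends only on the current ordering, so the sequence of deck states is exactly a Markov Chain.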
Queries:
- [Wayback/Archive] “Prediction and Entropy of Printed English” at DuckDuckGo
- [Wayback/Archive] “A Mathematical Theory of Communication” at DuckDuckGo
- [Wayback/Archive] “The Mathematical Theory of Communication” at DuckDuckGo
Oh, a key comment on that video:
“Worse search engine is better, cause you can show more ads if the user can’t immediately find what they want.”
Google then: “Nah.”
Google now: “Write that down, write that down!”
Yup, Markov Chains are what powered the PageRank feature of the original Google search engine, making it great. AI and greed are what make it despicable.
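PageRank is the stationary distribution of a Markov Chain: a “random surfer” who follows links, occasionally teleporting to a random page. A minimal power-iteration sketch on a made-up three-page web (page names and link structure are invented for illustration):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iterate the random-surfer Markov chain over a link graph.
    Assumes every page has at least one outgoing link (no dangling nodes)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Teleport term: (1 - d) spread evenly over all pages.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

# Hypothetical web: A and C link to B, B links back to A.
links = {"A": ["B"], "B": ["A"], "C": ["B"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # B — it is linked from both A and C
```

The ranks converge to the chain's stationary distribution: again, only the current probability vector matters at each step, never the history.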
--jeroen