The Wiert Corner – irregular stream of stuff

Jeroen W. Pluimers on .NET, C#, Delphi, databases, and personal interests


Examples by b0rk of problems with integers and floating point numbers

Posted by jpluimers on 2026/02/12

From quite a while back, but still very relevant today, especially when debugging problems (most people would post them in the order integers first, then floats, but Julia did it the other way around):

  1. [Wayback/Archive] Julia Evans on Twitter: “had a great discussion of how floating point arithmetic can betray you on Mastodon yesterday, there are tons of good examples in the replies”

    [Wayback/Archive] Julia Evans: “today I’m thinking about how floating point numbers can be treacherous — what are specific examples of when they’ve betrayed you? so far I have:…” – Mastodon

  2. [Wayback/Archive] Julia Evans on Twitter: “examples of problems with integers”

Usually I tend to explain integer versus floating point math as lossless versus lossy data compression (for instance WavPack and FLAC versus MP3 compression of PCM audio data, or BMP versus JPEG compression of 2D digital image data).
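To make the analogy concrete, here is a minimal Python sketch (my own illustration, not from Julia’s posts): pushing an integer that needs more than 53 significant bits through a double is “lossy”, while integer arithmetic on the same value stays exact.

```python
# "Lossless": Python integers keep every bit, so this arithmetic is exact.
big = 2**53 + 1                    # 9007199254740993 needs 54 significant bits
assert (big + 1) - 1 == big

# "Lossy": a double has only a 53-bit significand, so the trailing +1 is
# rounded away, like a lossy codec discarding detail it cannot store.
as_double = float(big)
print(as_double == 2**53)          # True: the value was rounded to 2**53
print(int(as_double) == big)       # False: the round trip does not restore the original
```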

Either way: floating point and integer problems cause real harm. One interesting comment illustrating that was [Wayback/Archive] Ian Kirker on Twitter: “@b0rk I didn’t see this one in the list, which sticks in my memory: science.org – Fatal Error: How Patriot Overlooked a Scud”

[No wayback/Archive] Fatal Error: How Patriot Overlooked a Scud | Science
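The mechanism behind that failure is worth sketching: according to the usual analyses of the incident, the Patriot’s clock counted tenths of a second in a 24-bit fixed-point register, 0.1 has no finite binary representation, and the truncation error of roughly 0.000000095 s per tick accumulated over the roughly 100 hours the battery had been running. A rough Python reconstruction of those commonly cited figures (my own sketch, with the register layout an assumption taken from those write-ups, not code from the article):

```python
from fractions import Fraction

TICK = Fraction(1, 10)                  # the clock advanced in tenths of a second

# 0.1 truncated to 23 fractional bits: the commonly cited approximation held in
# the Patriot's 24-bit fixed-point time register (register layout is an
# assumption based on the usual write-ups of the incident).
stored = Fraction(int(TICK * 2**23), 2**23)
error_per_tick = TICK - stored          # truncation error per 0.1 s tick
print(float(error_per_tick))            # ~9.54e-08 seconds

hours_up = 100                          # the battery had been running ~100 hours
ticks = hours_up * 3600 * 10
drift = float(error_per_tick * ticks)
print(drift)                            # ~0.34 seconds of accumulated clock error

scud_speed = 1676                       # m/s, approximate Scud velocity (assumption)
print(drift * scud_speed)               # ~575 m: far enough to miss the incoming missile
```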

If you like listening instead of reading, then [Wayback/Archive] 452: Numbers on Computers Are Weird — Embedded is a great podcast episode in which Julia is interviewed by Christopher White and Elecia White. I found it via [Wayback/Archive] Julia Evans on Twitter: “was on the @embeddedfm podcast this week talking about our upcoming “How Integers and Floats Work” zine, plus some meta discussion about making zines”.

Either way, be sure to read the other replies to b0rk’s posts too, as many interesting tidbits did not make it into her underlying blog posts (I sketch two of the listed examples in code right after the two lists):

  1. [Wayback/Archive] Examples of floating point problems

    how does floating point work?
    floating point isn’t “bad” or random
    example 1: the odometer that stopped
    example 2: tweet IDs in Javascript
    example 3: a variance calculation gone wrong
    example 4: different languages sometimes do the same floating point calculation differently
    example 5: the deep space kraken
    example 6: the inaccurate timestamp
    example 7: splitting a page into columns
    example 8: collision checking

    None of these 8 examples talk about NaNs or +0/-0 or infinity values or subnormals, but it’s not because those things don’t cause problems – it’s just that I got tired of writing at some point :).

  2. [Wayback/Archive] Examples of problems with integers

    example 1: the small database primary key
    example 2: integer overflow/underflow
    aside: how do computers represent negative integers?
    example 3: decoding a binary format in Java
    example 4: misinterpreting an IP address or string as an integer
    example 5: security problems because of integer overflow
    example 6: the case of the mystery byte order
    example 7: modulo of negative numbers
    example 8: compilers removing integer overflow checks
    example 9: the && typo
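For a flavour of two items from those lists, here is a small Python sketch (my own illustration of the “tweet IDs in Javascript” and “modulo of negative numbers” examples, not code from her posts). JavaScript’s Number type is an IEEE 754 double with a 53-bit significand, and languages disagree on the sign of the result of `%` for negative operands.

```python
# Floating point example 2: tweet IDs in JavaScript.
# JavaScript numbers are IEEE 754 doubles, so integer IDs above 2**53 get rounded;
# Python floats are the same doubles, so the effect reproduces here.
tweet_id = 2**53 + 7                      # stand-in for a large 64-bit ID (assumption)
print(int(float(tweet_id)))               # 9007199254741000: not the ID we started with
print(int(float(tweet_id)) == tweet_id)   # False

# Integer example 7: modulo of negative numbers.
# Python's % takes the sign of the divisor; C, C#, Java and JavaScript take the
# sign of the dividend, so the "same" expression gives different answers.
print(-7 % 3)                  # 2 in Python
print(-7 - int(-7 / 3) * 3)    # -1: what C-style truncated division produces
```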

In response to her first tweet, I summarised a few links that have helped me out in the past in this [Wayback/Archive] Thread by @jpluimers on Thread Reader App:

  1. I’ll post a few links here:
    [Wayback/Archive] The Floating-Point Guide – What Every Programmer Should Know About Floating-Point Arithmetic
  2. [Wayback/Archive] What Every Computer Scientist Should Know About Floating-Point Arithmetic
  3. A much more readable PDF version: [Wayback/Archive] docs.oracle.com/cd/E19957-01/800-7895/800-7895.pdf (What Every Computer Scientist Should Know About Floating-Point Arithmetic)
  4. [Wayback/Archive] Tikhon Jelvis on Twitter: “Herbie is a tool that takes a floating point expression and automatically finds a more accurate version. Check it out if you’re doing a bunch of floating point wrangling! herbie.uwplse.org The tutorial linked from the main page is a great starting point.”
  5. Herbie is great: [Wayback/Archive] Herbie: Automatically Improving Floating Point Accuracy (a small taste of the kind of rewrite it finds follows this list)
  6. [Wayback/Archive] Inside the die of Intel’s 8087 coprocessor chip, root of modern floating point
  7. [Wayback/Archive] Jeroen Wiert Pluimers @wiert@mastodon.social on Twitter: “Floating-point operations explained.”

    [Wayback/Archive] Sudama Sahu on Twitter: “Hello bro and sis, u good at math? Something of interest for u all! #SundayThoughts #Maths2020”

    Hello bro.. u good at math, right?
    Hi yes, I’m good… If I cut a cake into 3 pieces, each piece will be 0.333 of the main piece, right?
    correct
    Ok if we multiply 3 by 0.333 we get 0.999
    so what happened to 0.001?
    u will find it on the knife
    ohh.. thanks

  8. This thread by [Wayback/Archive] @tannergooding@tech.lgbt (@tannergooding) / Twitter (verified in a short code sketch after this list):
    1. [Wayback/Archive] Floating-point is hard. A common misconception is that double values only contain 17 significant digits. In actuality, you only need to produce up to 17 digits to ensure the string will roundtrip back to the original value. (1/5)
    2. [Wayback/Archive] Some strings may require less than 17, but you will never need more. However, many doubles actually contain more than 17 digits in the exact value they represent. (2/5)
    3. [Wayback/Archive] For example, the double with the most significant digits has 767. This value is `2^−1022 * (1 − 2^−52)` and its binary encoding is `0x000F_FFFF_FFFF_FFFF`. (3/5)
    4. [Wayback/Archive] For parsing, you may need to consider upto this many digits, plus one additional for rounding, to ensure you return the exact value. (4/5)
    5. [Wayback/Archive] Knowing the full value can be important for some scenarios, such as testing or working on more complex algorithms where you need to account for rounding error to ensure you return the correct result. Sin/Cos are examples, where you need to account for inaccuracies in PI. (5/5)

    [Wayback/Archive] Jared Parsons on Twitter: “@tannergooding Floating point is easy assuming your epsilon for error is sufficiently high” / Twitter

  9. [Wayback/Archive] https://www.phys.uconn.edu/~rozman/Courses/P2200_15F/downloads/floati… (What Every Programmer Should Know About Floating-Point Arithmetic): same title as the above, but a very different article.
  10. Your Mastodon thread likely has more, but these have been useful to me in the past.
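To give a taste of what Herbie is for, here is the classic catastrophic-cancellation case (my own Python illustration, not output from Herbie itself): for large `x`, `sqrt(x + 1) - sqrt(x)` loses essentially all of its significant digits, while the algebraically equivalent rewrite keeps them.

```python
import math

# For large x, sqrt(x + 1) and sqrt(x) agree in almost every digit, so
# subtracting them cancels away the information (catastrophic cancellation).
x = 1e16
naive = math.sqrt(x + 1) - math.sqrt(x)            # 0.0: every significant digit lost
rewritten = 1 / (math.sqrt(x + 1) + math.sqrt(x))  # ~5e-09: algebraically the same,
                                                   # numerically far more accurate
print(naive, rewritten)
```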
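And the claims in Tanner Gooding’s thread are easy to verify. A short Python check (my own, not his code) that 16 significant digits are not always enough to round-trip a double while 17 are, and that the largest subnormal double really does have 767 significant decimal digits:

```python
import struct
from decimal import Decimal

# 16 significant digits are not always enough to round-trip a double ...
x = 0.1 + 0.2                            # 0.30000000000000004
print(float(format(x, ".16g")) == x)     # False: ".16g" prints "0.3", a different double
# ... but 17 always are.
print(float(format(x, ".17g")) == x)     # True

# The double with the most significant decimal digits: the largest subnormal,
# 2**-1022 * (1 - 2**-52), binary encoding 0x000F_FFFF_FFFF_FFFF.
bits = 0x000F_FFFF_FFFF_FFFF
value = struct.unpack("<d", struct.pack("<Q", bits))[0]
exact = Decimal(value)                   # a float converts to Decimal exactly
print(len(exact.as_tuple().digits))      # 767
```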

By now her zine has also been released, as per [Wayback/Archive] Julia Evans on Twitter: “”How Integers and Floats Work” is coming out later this week! Here’s the about page: …”

title: computers do math weird

Weird things happen when your computer does math.
computer (thinking): 0.1 + 0.2 = 0.30000000000000004
computer (thinking): 4294967295 + 1 = 0
person (unhappy): uh, that's not what they taught me in math class...
The reason it gets so weird is that your computer has to cram each number into a limited number of bits (8, 16, 32, or 64 bits). When your computer does math, it's running CPU instructions. And there are only 2 kinds of CPU math instructions: those that work on integers, and those that work on floating point numbers.
person (talking): let's go learn how your computer handles integers and floating point numbers!
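Both computations in that panel are real behaviour, not artistic licence; here is a quick Python check (with the 32-bit wraparound simulated via a mask, since Python’s own integers never overflow):

```python
print(0.1 + 0.2)                        # 0.30000000000000004: 0.1 and 0.2 are already rounded

UINT32_MASK = 0xFFFF_FFFF               # pretend we only have a 32-bit unsigned register
print((4294967295 + 1) & UINT32_MASK)   # 0: the increment wraps around to zero
```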

Related blog post: A small table that shows differences between decimal, double and float (Single)

–jeroen
