The Wiert Corner – irregular stream of stuff

Jeroen W. Pluimers on .NET, C#, Delphi, databases, and personal interests


Many more web platforms vulnerable to the hash collision attack (not only ASP.NET) #28C3 @hashDoS #hashDoS @ccc

Posted by jpluimers on 2011/12/29

When writing my Patch your ASP.NET servers ASAP early this morning, I didn’t have time to research the full extent of the vulnerabilities published at 28C3 (slides, mp4), though a small bell was ringing: I had seen something like this earlier this century.

I was right: this posting on perlmonks directed me to a /. posting from 2003 pointing to the research paper on low-bandwidth attacks based on hash collisions (pdf version) that I had seen before. Perl 5.8.1 fixed it in September 2003 (search for “hash” in that link).

The attack can be used for a DoS because inserting n well-distributed elements into a hash table runs in O(n), but a carefully crafted set of n colliding elements makes it run in O(n^2): the i-th insert has to walk a collision chain that is already i-1 entries long, so the total work grows to n(n-1)/2 comparisons.

Carefully crafting such a worst case depends on how well you can predict collisions in the underlying hash table implementation, which – apparently – is not too difficult, and the attack requires very little bandwidth.
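
To make this concrete, here is a small sketch of my own (not taken from the 28C3 paper) using the well-known property that the Java strings “Aa” and “BB” share the same String.hashCode(), so any concatenation of such blocks collides as well. On the HashMap implementations of this era every insert walks the same ever-growing bucket chain, producing exactly the O(n^2) behaviour described above (later JDKs mitigate this by converting long collision chains into balanced trees):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class HashDosDemo {

    // "Aa" and "BB" share the same String.hashCode() (2112), so every
    // concatenation of k such two-character blocks yields 2^k strings
    // that all hash to the same value.
    static List<String> collidingKeys(int blocks) {
        List<String> keys = new ArrayList<>();
        keys.add("");
        for (int i = 0; i < blocks; i++) {
            List<String> next = new ArrayList<>(keys.size() * 2);
            for (String k : keys) {
                next.add(k + "Aa");
                next.add(k + "BB");
            }
            keys = next;
        }
        return keys;
    }

    public static void main(String[] args) {
        // 2^15 = 32768 keys, all landing in one bucket; increase the
        // block count for a harsher demonstration.
        List<String> keys = collidingKeys(15);
        long start = System.nanoTime();
        HashMap<String, Boolean> table = new HashMap<>();
        for (String key : keys) {
            table.put(key, Boolean.TRUE); // every insert rescans the same chain
        }
        System.out.printf("%d colliding inserts took %d ms%n",
                keys.size(), (System.nanoTime() - start) / 1_000_000);
    }
}
```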

Many platforms and languages are vulnerable (already archived at the WayBack machine), including those based on Java, Tomcat, .NET, Ruby, PHP and more, to a greater or lesser extent. I have the impression that the list only includes big names, but presume platforms based on smaller names (ASP, Delphi, Objective-C) are equally vulnerable.

Just read the articles on CERT 903934, oCERT 2011-003, Ars Technica, Cryptanalysis.eu, Heise (German), Hackillusion and the research paper published at 28C3.

A few quotes:

“This attack is mostly independent of the underlying Web application and just relies on a common fact of how Web application servers typically work,” the team wrote, noting that such attacks would force Web application servers “to use 99% of CPU for several minutes to hours for a single HTTP request.”

“Prior to going public, Klink and Wälde contacted vendors and developer groups such as PHP, Oracle, Python, Ruby, Google, and Microsoft. The researchers noted that the Ruby security team and Tomcat have already released fixes, and that ‘Oracle has decided there is nothing that needs to be fixed within Java itself, but will release an updated version of Glassfish in a future CPU (critical patch update).’”

“The algorithmic complexity of inserting n elements into the table then goes to O(n**2), making it possible to exhaust hours of CPU time using a single HTTP request”

“We show that PHP 5, Java, ASP.NET as well as v8 are fully vulnerable to this issue and PHP 4, Python and Ruby are partially vulnerable, depending on version or whether the server running the code is a 32 bit or 64 bit machine.”

Microsoft seems to have been notified pretty late in the cycle, I presume because the researchers started with a few platforms and only later realized the breadth of the platforms involved.

The ultimate solution is to patch/fix the platforms using, for instance, a randomized hash function (a.k.a. universal hashing).
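
As a sketch of the idea (my own illustration, not any vendor’s actual patch): wrap keys in a type whose hashCode() mixes a secret per-process seed into the hash, so an attacker who knows the algorithm still cannot precompute colliding keys:

```java
import java.security.SecureRandom;

// A minimal sketch of randomized (seeded) hashing. The seed is drawn
// once per process, so collisions cannot be predicted across runs; any
// keyed/universal hash serves the same purpose as the FNV-1a-style mix
// used here.
final class SeededKey {
    private static final long SEED = new SecureRandom().nextLong();
    private final String value;

    SeededKey(String value) { this.value = value; }

    @Override public boolean equals(Object o) {
        return o instanceof SeededKey && ((SeededKey) o).value.equals(value);
    }

    @Override public int hashCode() {
        // Fold the secret seed into the FNV-1a offset basis, then mix
        // in the characters; the result differs from process to process.
        long h = SEED ^ 0xcbf29ce484222325L;
        for (int i = 0; i < value.length(); i++) {
            h ^= value.charAt(i);
            h *= 0x100000001b3L;
        }
        return (int) (h ^ (h >>> 32));
    }
}
```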

Microsoft will provide a patch for ASP.NET later today, Ruby has already been patched, and other vendors have done so or will soon (please comment if you know of other platforms and patches).

The links this morning indicated there were no known attacks. That is (or maybe was) true for ASP.NET, but for PHP a public proof of concept of such a DoS has been published by Krzysztof Kotowicz (blog) with sources at github and a demo html page.

Temporary workarounds (based on some of the links in this and the prior blog post, and the workarounds mentioned here and here):

  1. If you can: replace hash tables with more applicable data structures
    (I know this falls into the for-if anti-pattern category, but lots of people still use a hammer when a different tool works much better)
  2. Limit the request size
  3. Limit the maximum number of entries in the hash table
  4. Allow form requests only for the sites/servers/etc. that need them.
  5. Limit the CPU time that a request can use
  6. Filter out requests with a large number of form entries (a minimal filter sketch follows this list)
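
For illustration, a hypothetical servlet filter combining workarounds 3 and 6 – the class name and limit are mine, not taken from any of the linked advisories – that rejects requests with too many query-string parameters before the container parses them into a hash table (a real deployment would have to guard POST bodies the same way):

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical workaround sketch: cap the number of request parameters
// before the container hashes them. The limit mirrors PHP's
// max_input_vars default of 1000 mentioned below.
public class ParamCountFilter implements Filter {
    private static final int MAX_PARAMS = 1000;

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Counting '&' separators approximates the parameter count
        // without asking the container to parse (and hash) them.
        String query = ((HttpServletRequest) req).getQueryString();
        if (query != null && query.split("&").length > MAX_PARAMS) {
            ((HttpServletResponse) res).sendError(
                    HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE,
                    "Too many request parameters");
            return;
        }
        chain.doFilter(req, res);
    }
}
```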

Some platforms have already applied temporary workarounds: I know Tomcat (default maximum of 10000 parameters) and PHP (default max_input_vars = 1000) did, and it looks like the ASP.NET fix will do so too.

Other platforms (like JRuby 1.6.5.1, CRuby 1.8.7 (comments) and Perl 5.8.1 in September 2003) fixed it the proper way.

Note: workarounds are temporary measures that will also deny legitimate requests. The only solution is to apply a fix or patch.

A major lesson learned today for a few people around me: when vendors start publishing “out of band” updates, do not trust a single third-party assessment in the “initial investigation” state, but be diligent and do some further research.

–jeroen

PS: Just found out that most Azure users won’t need to manually apply a fix: just make sure your Hosted Service OS servicing policy is set to “Auto”.

6 Responses to “Many more web platforms vulnerable to the hash collision attack (not only ASP.NET) #28C3 @hashDoS #hashDoS @ccc”

  1. […] Many more web platforms vulnerable to the hash collision attack (not only ASP.NET) #28C3 @hashDoS #h… (already archived at the WayBack machine) […]

  2. I found your #1 workaround interesting: “If you can: replace hash tables by more applicable data structures”

    A hash table is certainly “applicable” as it promises O(1) insert and lookup operations, but that only matters if the number of items is relatively large.

    I wonder how many params an average POST or GET request might have? 10? easily! 100? certainly some will have! 1000? possibly, but something is starting to go wrong! 10000? Your design is sick!

    There is certainly a reason why hash table benchmarks typically deal with hundreds of thousands or even millions of items ;-)

    • jpluimers said

      You don’t want to know what I have seen passed in SOAP requests and processed through hash tables. That kind of hash-table usage can be just as vulnerable. Often other data structures can be used that are much better.

      Finally I have seen tremendous abuse of ViewState. That alone is a vulnerability on a different level, but now the hash collision makes it even worse. Temporarily that is.

      And yes I agree: hash tables can be very beneficial on large amounts of data. Hopefully .NET will introduce an implementation that is far less vulnerable than the current one.

      I presume the just released patch is just a stopgap until such a new implementation is rolled out.

  3. Indy 10 uses a plain TStringList to store request parameters. No hash involved. (Same is true for IntraWeb – at least for the versions I was involved with. I don’t think they changed it since then, as it basically relies on the Indy foundation – and as I really don’t see benefits)

  4. Checked TIdHTTPServer in Indy 9 / Delphi 7 (yes, 10 years old…):
    TIdHTTPRequestInfo.DecodeAndSetParams is using an unsorted list, not a hash table. Every parameter is simply being added to the Params list. Of course retrieving the value of a parameter is O(n), not O(1) as in a hash table, which is ugly, but that’s another question.
    How about newer versions of Indy?
