Vulnerabilities that just won't die - Compression Bombs

Recently Cyberis has reviewed a number of next-generation firewalls and content inspection devices. A subset of the test cases we formed related to compression bombs, specifically those delivered over HTTP. The research prompted us to take another look at how modern browsers handle such content, given that the vulnerability (or perhaps more accurately, 'common weakness') has been reported and well known for over ten years. The results surprised us - in short, the majority of web browsers are still vulnerable to compression bombs, leading to various denial-of-service conditions, including, in some cases, full exhaustion of all available disk space with no user input.

Introduction to HTTP Compression

HTTP compression is a capability widely supported by web browsers and other HTTP User-Agents, allowing bandwidth consumption to be reduced and transmission speeds to be maximised between client and server. Supporting clients advertise the compression schemes they accept, and if a mutually supported scheme can be negotiated, the server will respond with a compressed HTTP response.
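The negotiation can be illustrated with a simplified request/response exchange (the host, resource and header values below are illustrative, not taken from our tests):

```
GET /page.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 1043

<gzip-compressed body>
```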

Compatible User-Agents will typically decompress encoded data on-the-fly. HTML content, images and other files transmitted are usually handled in memory (allowing pages to be rendered as quickly as possible), whilst larger file downloads will usually be decompressed straight to disk to prevent unnecessary consumption of memory resources on the client.

Gzip (RFC 1952) is considered the most widely supported compression scheme in use today, although the common weaknesses discussed in this post are applicable to all schemes currently in use.

What is a Compression Bomb?

Quite simply, a compression bomb is compressed content that extracts to a size far larger than the developer expected - in other words, it exploits incorrect handling of highly compressed data. This can result in various denial-of-service conditions, for example memory, CPU and free disk space exhaustion.
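As a defensive sketch (our suggestion here, not a mechanism described elsewhere in this post), correct handling means bounding the number of decompressed bytes accepted rather than trusting the compressed size. On the command line, `head -c` can enforce such a cap; the 1 MB limit below is an arbitrary illustrative value:

```shell
# Create a single-round gzip bomb to test against (10 MB of zeros).
dd if=/dev/zero bs=1M count=10 2>/dev/null | gzip -9 > suspicious.gz

# Decompress, but refuse to write more than a fixed cap to disk.
# The 1 MB cap is an arbitrary illustrative value.
MAX=$((1024 * 1024))
gunzip -c suspicious.gz | head -c "$MAX" > extracted.bin

wc -c < extracted.bin   # 1048576 - output stops at the cap, not at 10 MB
```

A real client would additionally flag the truncation as suspicious rather than silently discarding the remainder.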

Using an entropy rate of zero (for example, /dev/zero), coupled with multiple rounds of encoding that modern browsers support (see our ResponseCoder post), a 43 Kilobyte HTTP server response will equate to a 1 Terabyte file when decompressed by a receiving client - an effective compression ratio of 25,127,100:1.

It is trivial to make a gzip bomb on the Linux command line - see below for an example of a 10MB file being compressed to just 159 bytes using two rounds of gzip compression:

$ dd if=/dev/zero bs=10M count=1 | gzip -9 | gzip -9 | wc -c
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.149518 s, 70.1 MB/s
159
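To confirm such a bomb is well formed, the two encoding layers can simply be reversed with gunzip - two decompression passes recover the full 10 MB of zeros:

```shell
# Build the two-round bomb, then decompress both layers and count the bytes.
dd if=/dev/zero bs=1M count=10 2>/dev/null | gzip -9 | gzip -9 > bomb.gz.gz

# The outer pass yields the inner gzip stream; the second yields the raw zeros.
gunzip -c bomb.gz.gz | gunzip -c | wc -c   # 10485760
```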

Testing Framework

Cyberis has released testing tools, both for generic HTTP response tampering and for various sizes of gzip bombs. GzipBloat is a PHP script to deliver pre-compressed gzipped content to a browser, specifying the correct HTTP response headers for the number of encoding rounds used, and optionally a 'Content-Disposition' header. A more generic response tampering framework, ResponseCoder, allows more fine-grained control, although content is currently compressed on the fly, limiting its effectiveness when used to deliver HTTP compression bombs. Both tools are designed to assist you in testing both intermediary devices (content inspection/next-generation firewalls etc.) and browsers for compression bomb vulnerabilities.
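A hand-rolled equivalent of the delivery step can be sketched in a few lines of shell - assembling a complete HTTP response that declares two rounds of gzip encoding alongside a 'Content-Disposition' header. The filename and header ordering here are illustrative assumptions, not GzipBloat's actual output:

```shell
# Pre-compress 10 MB of zeros with two rounds of gzip.
dd if=/dev/zero bs=1M count=10 2>/dev/null | gzip -9 | gzip -9 > bomb.gz.gz

# Assemble a raw HTTP response around it; the Content-Encoding header
# mirrors the two encoding rounds applied above.
{ printf 'HTTP/1.1 200 OK\r\n'
  printf 'Content-Type: text/plain\r\n'
  printf 'Content-Encoding: gzip, gzip\r\n'
  printf 'Content-Disposition: attachment; filename="report.txt"\r\n'
  printf 'Content-Length: %d\r\n\r\n' "$(wc -c < bomb.gz.gz)"
  cat bomb.gz.gz
} > response.bin
```

The resulting response.bin can then be replayed verbatim to a test client, for example via a one-shot listener, when testing in an isolated lab environment.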

During our tests, we delivered compressed content in a variety of different forms, both as 'file downloads' and as in-line 'HTML content'. The exact tests we conducted and their results can be read in our more detailed paper on this topic here.

Is my Browser Vulnerable?

It is actually easier to name the browser that is not vulnerable - namely Opera. All other major desktop browsers available today (Internet Explorer, Firefox, Chrome, Safari) exhibited at least one denial-of-service condition during our tests.

The most serious condition observed was an effective denial-of-service against Windows operating systems when a large gzip encoded file is returned with a ‘Content-Disposition’ header - no user interaction was required to exploit the vulnerability, and recovery from the condition required knowledge of the Temporary Internet Files directory structure and command line access. This seemed to affect all recent versions of IE, including IE11 on Windows 8.1 Preview.

Our results demonstrated that the most popular web browsers in use today are vulnerable to various denial-of-service conditions - namely memory, CPU and free disk space consumption - by failing to consider the high compression ratios possible from data with an entropy rate of zero. Depending on the HTTP response headers used, vulnerable browsers will either decompress the content in memory, or directly to disk - only terminating when operating system resources are exhausted.


With the growth of mobile data connectivity, improvements in data compression for Internet communications have become highly desirable from a performance perspective, but extensions to these techniques outside of the original protocol specifications can have unconsidered impacts for security.

Although compression bombs have been a known threat for a number of years, the growing ubiquity of advanced content inspection devices and the proliferation of User-Agents which handle compression mechanisms differently have substantially changed the landscape for these types of attack.

The attacks discussed here will provide an effective denial-of-service against a number of popular client browsers, but the impact in those cases is rather limited. Ultimately, the greater impact of this style of attack is likely to be felt by intermediate content inspection devices serving a large pool of users. It is possible that a number of advanced content inspection devices may be susceptible to these decompression denial-of-service attacks themselves, potentially as the result of a single server-to-client response. In an environment with high availability requirements and a large pool of users, a denial-of-service attack that could be launched by a single malicious Internet server could have a devastating impact.