Vulnerability in content distribution networks found by researchers

Credit to Author: Danny Bradbury | Date: Thu, 24 Oct 2019 14:41:44 +0000

Researchers have found a flaw that could lead to denial of service attacks on content distribution networks around the world.

A content distribution network (CDN) is a network of computers that makes it faster and more efficient for people to access content on the internet. The computers are spread across different regions, and each stores a copy of a website’s content in a process called caching.

When someone wants to access content from the website (known as the origin), they’re directed to the computer in the CDN that’s closest to them. Because the CDN has cached the data, they can download it more quickly and efficiently than if they downloaded it directly from the origin site.
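
To make the idea concrete, here’s a toy sketch in Python of what an edge node’s cache lookup boils down to. The origin URL is just a placeholder, and a real CDN edge server is of course far more sophisticated:

```python
# A toy illustration of edge caching: not a real CDN, just the core idea.
# On a cache hit the edge node answers locally; on a miss it fetches the
# content from the origin and keeps a copy for the next visitor.
import urllib.request

ORIGIN = "https://example.com"   # placeholder origin site
cache = {}                       # path -> cached response body

def serve(path: str) -> bytes:
    if path in cache:            # cache hit: no round trip to the origin
        return cache[path]
    with urllib.request.urlopen(ORIGIN + path) as resp:   # cache miss
        body = resp.read()
    cache[path] = body           # store it for subsequent requests
    return body

print(len(serve("/")), len(serve("/")))   # the second call comes from the cache
```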

The researchers, Hoai Viet Nguyen, Luigi Lo Iacono, and Hannes Federrath, figured out a way to make these CDNs serve up error pages, even when the origin website is working fine. The attack, called CPDoS (Cache-Poisoned Denial of Service), works by fooling the CDN into caching an error page.

Every so often, the CDN will choose not to serve up the page it has cached when responding to a request, but will instead go and get a fresh one. The attacker keeps pinging the CDN with a page request until this happens.

The attacker specially crafts their request so that the origin site won’t know what to do with it. Instead, the site returns an error page, and the CDN caches it. So whenever anyone else asks for the same page, the CDN shows them the error page. It’s effectively a denial of service attack.
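
As a rough illustration of how anyone can tell whether a response came from the cache or from the origin, the small Python sketch below requests the same URL a few times and prints the cache-status headers that many CDNs attach. The exact header names vary by provider (for example, CloudFront uses X-Cache and Cloudflare uses CF-Cache-Status), and the URL is a placeholder:

```python
# Request the same URL several times and print the headers that commonly
# reveal whether the response was served from the CDN's cache or fetched
# fresh from the origin. Header names differ between CDNs.
import urllib.request

URL = "https://www.example.com/"   # placeholder; use a site you control
CACHE_HEADERS = ("X-Cache", "CF-Cache-Status", "Age")

for attempt in range(5):
    with urllib.request.urlopen(URL) as resp:
        status = {h: resp.headers.get(h) for h in CACHE_HEADERS if resp.headers.get(h)}
    print(f"request {attempt + 1}: {status or 'no cache headers present'}")
```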

What does the attacker do to their request to make it so indigestible? It all comes down to hypertext transfer protocol (HTTP) requests. HTTP is the language that web servers and browsers use to communicate. When your browser sends an HTTP request to a server, it includes headers, which carry information such as the version of the browser you’re using, the operating system you’re running, and the page you want.
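
For instance, here’s a minimal sketch of a browser-style request sent with Python’s standard library; the URL and the header values are purely illustrative:

```python
# Send a GET request with the kind of headers a browser would include.
# The URL and User-Agent string below are illustrative values only.
import urllib.request

req = urllib.request.Request(
    "https://www.example.com/index.html",             # the page you want
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # browser/OS info
        "Accept": "text/html",                         # content types you'll accept
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, dict(resp.headers))             # status code and response headers
```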

The attacker can tamper with these headers in three different ways to confuse the web server (there’s a rough sketch of all three after the list):

HTTP Header Oversize. If the origin server accepts smaller headers than the CDN does, the attacker can send a request with a header that’s small enough for the CDN to accept but too big for the origin server. The CDN forwards it to the origin server, which returns an error page.

HTTP Meta Character. The same concept, but using a character in the header that shouldn’t be there, like a line feed (\n). If the CDN doesn’t filter this out and forwards the request to the origin server, the attacker wins.

HTTP Method Override. Every HTTP request carries a method that tells a web application what the browser is trying to do, like GET a piece of information or POST something to the server. There are other methods too, like DELETE, which can be pretty dangerous, so many servers block them. But some web applications support a header that overrides the method in the request. An attacker can send a request whose header says, in effect, “I know this says GET, but what I really mean is DELETE.” If the CDN dutifully passes this on to an origin server that won’t honour it, the origin returns an error. Attacker 1, CDN 0.
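
Here is a rough sketch of what the three malformed requests might look like. It only prints the requests rather than sending them; the target URL, the custom header names, and the 8KB size threshold are placeholder assumptions (well-behaved HTTP libraries will also refuse to send the meta-character variant precisely because the header value is malformed), and which override header an application honours, if any, varies by framework:

```python
# Illustrative shapes of the three CPDoS request variants described above.
# Nothing is sent: the script just prints what each request would contain.
TARGET = "https://www.example.com/index.html"    # placeholder; a site you control

oversized = {"X-Oversized": "A" * 9000}          # larger than a typical 8 KB origin limit
meta_char = {"X-Metachar": "harmless\nlooking"}  # embedded line feed (\n) in the value
override  = {"X-HTTP-Method-Override": "DELETE"} # ask the app to treat this GET as DELETE

for name, headers in [("header oversize", oversized),
                      ("meta character", meta_char),
                      ("method override", override)]:
    print(f"--- {name} ---")
    print(f"GET {TARGET} HTTP/1.1")
    for key, value in headers.items():
        print(f"{key}: {value!r}")
```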

An attacker could theoretically disrupt websites that use CDNs by hitting lots of web pages with these attacks, but there’s a simple solution according to the researchers. CDNs can switch off error page caching. The websites using those CDNs can also alter their own configuration files to do the same thing.
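
On the website side, one way to keep error pages out of shared caches is to mark them as uncacheable at the origin. Here’s a minimal sketch, assuming a Flask application on the origin; the framework is an arbitrary choice, while the Cache-Control: no-store directive is standard HTTP and tells caches not to store the response:

```python
# Mark error responses as uncacheable so a CDN or other shared cache
# won't store and replay them. Flask is used here purely as an example.
from flask import Flask

app = Flask(__name__)

@app.after_request
def no_cache_errors(response):
    if response.status_code >= 400:                     # any client or server error
        response.headers["Cache-Control"] = "no-store"  # tell caches not to keep it
    return response
```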

The researchers published a table showing which CDNs were affected, both on a website dedicated to the research and in an associated paper. Amazon’s CloudFront CDN was by far the worst affected, but it has now fixed the vulnerability, according to the researchers’ website.

Cloudflare’s CTO, John Graham-Cumming, said that the vulnerabilities in his company’s software were relatively easy to fix.

“We’re talking about software that we have under our control,” he said, adding that the company patched the issue in hours.

The main work involved reaching out to customers who had misconfigured their websites’ caching:

We have a very large number of customers and we take this stuff super seriously and jumped on it very quickly. Although a tiny number of our customers were potentially vulnerable, it is very important to fix this stuff fast.

For its part, Akamai said in a blog post that any vulnerabilities would be on the customer side:

We have determined that the default caching behavior used for error response is compliant with the relevant RFCs, and are not impacted by this attack. However, non-standard configurations may be implemented to allow for the caching of error messages and would therefore be vulnerable. Customers are strongly advised to review their individual configurations with the account teams to verify that customization has not rendered their site vulnerable.
