S3 Ep146: Tell us about that breach! (If you want to.)
Credit to Author: Paul Ducklin | Date: Thu, 03 Aug 2023 17:56:25 +0000
WEIRD BUT TRUE
With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.
You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.
READ THE TRANSCRIPT
DOUG. Firefox updates, another Bug With An Impressive Name, and the SEC demands disclosure.
All that, and more, on the Naked Security podcast.
[MUSICAL MODEM]
Welcome to the podcast, everybody.
I am Doug Aamoth; he is Paul Ducklin.
Paul, I hope you will be proud of me… I know you are a cycling enthusiast.
I rode a bicycle yesterday for 10 American miles, which I believe is roughly 16km, all while pulling a small but not unheavy child behind the bike in a two-wheeled carriage.
And I’m still alive to tell the tale.
Is that a long way to ride a bike, Paul?
DUCK. [LAUGHS] It depends how far you really needed to go.
Like, if it was actually 1200 metres that you had to go and you got lost… [LAUGHTER]
My enthusiasm for cycling is very high, but it doesn’t mean that I deliberately ride further than I need to, because it’s my primary way of getting around.
But 10 miles is OK.
Did you know that American miles and British miles are, in fact, identical?
DOUG. That is good to know!
DUCK. And have been since 1959, when a bunch of countries including, I think, Canada, South Africa, Australia, the United States and the UK got together and agreed to standardise on an “international inch”.
I think the Imperial inch got very, very slightly longer and the American inch got very, very slightly shorter, with the result that the inch (and therefore the foot, and the yard, and the mile)…
…they’re all defined in terms of the metre.
One inch is exactly 25.4mm.
Three significant figures is all you need.
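Since the inch is now defined exactly in metric terms, the whole miles-to-kilometres conversion follows from it. Here's a small Python sketch of that derivation (the function name is ours, purely for illustration), checking Doug's 10-mile ride:

```python
# Sketch: deriving miles-to-km from the 1959 "international inch",
# which is exactly 25.4 mm by definition.
MM_PER_INCH = 25.4            # exact, by definition since 1959
INCHES_PER_MILE = 63_360      # 12 inches/foot * 3 feet/yard * 1760 yards/mile

def miles_to_km(miles: float) -> float:
    """Convert statute miles to kilometres using the exact definition."""
    return miles * INCHES_PER_MILE * MM_PER_INCH / 1_000_000

print(round(miles_to_km(10), 2))  # Doug's 10-mile ride -> 16.09
```

So 10 miles is 16.09344km exactly, which matches Doug's "roughly 16km".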
DOUG. Fascinating!
Well, speaking of fascinating, it’s time for our This Week in Tech History segment.
This week, on 01 August 1981, Music Television, also known as MTV, went live as part of American cable and satellite television packages, and introduced the public to music videos.
The first one played [SINGS, RATHER WELL IN FACT] “Video Killed the Radio Star” by The Buggles.
Fitting at the time, although ironic nowadays as MTV rarely plays music videos any more, and plays no new music videos whatsoever, Paul.
DUCK. Yes, it is ironic, isn’t it, that cable TV (in other words, where you had wires running under the ground into your house) killed the radio (or the wireless) star, and now it looks as though cable TV, MTV… that sort of died out because everyone’s got mobile networks that work wirelessly.
What goes around comes around, Douglas.
DOUG. Alright, well, let’s talk about these Firefox updates.
We get a double dose of Firefox updates this month, because they’re on a 28 day cycle:
Firefox fixes a flurry of flaws in the first of two releases this month
No zero-days in this first round out of the gate, but some teachable moments.
We have listed maybe half of these in your article, and one that really stood out to me was: Potential permissions request bypass via clickjacking.
DUCK. Yes, good old clickjacking again.
I like that term because it pretty much describes what it is.
You click somewhere, thinking you’re clicking on a button or an innocent link, but you’re inadvertently authorising something to happen that isn’t obvious from what the screen’s showing under your mouse cursor.
The problem here seems to be that under some circumstances, when a permissions dialog was about to pop up from Firefox, for example, say, “Are you really sure you want to let this website use your camera? have access to your location? use your microphone?”…
…all of those things that, yes, you do want to get asked.
Apparently, if you could get the browser to a performance point (again, performance versus security) where it was struggling to keep up, you could delay the appearance of the permissions pop-up.
But by having a button at the place where the pop-up would appear, and luring the user into clicking it, you could attract the click, but the click would then get sent to the permissions dialog that you hadn’t quite seen yet.
A sort of visual race condition, if you like.
DOUG. OK, and the other one was: Off-screen canvas could have bypassed cross-origin restrictions.
You go on to say that one web page could peek at images displayed in another page from a different site.
DUCK. That’s not supposed to happen, is it?
DOUG. No!
DUCK. The jargon term for that is the “same-origin policy”.
If you’re running website X and you send me a whole bunch of JavaScript that sets a whole load of cookies, then all that’s stored in the browser.
But only further JavaScript from site X can read that data back.
The fact that you’re browsing to site X in one tab and site Y in the other tab doesn’t let them peek at what the other is doing, and the browser is supposed to keep all of that stuff apart.
That’s obviously pretty important.
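The same-origin rule Duck describes boils down to comparing the scheme, host and port of two URLs. Here's a simplified Python sketch of that comparison (real browsers layer extra nuances on top, and the helper names here are ours, not any browser API):

```python
# Minimal sketch of the same-origin policy: two URLs share an origin
# only when scheme, host and port all match.
from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    """Reduce a URL to its (scheme, host, port) origin triple."""
    parts = urlsplit(url)
    # Fill in the default port for the scheme when none is given.
    default = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, parts.port or default)

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

print(same_origin("https://x.example/page1", "https://x.example/page2"))  # True
print(same_origin("https://x.example/", "https://y.example/"))            # False: host differs
print(same_origin("https://x.example/", "http://x.example/"))             # False: scheme differs
```

In other words, two tabs on site X can see each other's data, but site Y in another tab is a different origin, even on the same computer in the same browser.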
And it seems here that, as far as I understand it, if you were rendering a page that wasn’t being displayed yet…
…an off-screen canvas, which is where you create, if you like, a virtual web page and then at some future point you say, “Right now I’m ready to display it,” and bingo, the page appears all at once.
The problem comes with trying to make sure that the stuff that you’re rendering invisibly doesn’t inadvertently leak data, even though it never ultimately gets displayed to the user.
They spotted that, or it was responsibly disclosed, and it was patched.
And those two, I think, were included in the so-called “High”-level vulnerabilities.
Most of the others were “Moderate”, with the exception of Mozilla’s traditional, “We found a whole lot of bugs through fuzzing and through automated techniques; we didn’t probe them to find out if they could be exploited at all, but we are willing to assume that somebody who tried hard enough could do so.”
That’s an admission that we both like so much, Doug… because potential bugs are worth quashing, even if you feel certain in your heart that nobody will ever figure out how to exploit them.
Because in cybersecurity, it pays never to say never!
DOUG. Alright, you’re looking for Firefox 116, or if you’re on an extended release, 115.1.
Same with Thunderbird.
And let’s move on to… oh, man!
Paul, this is exciting!
We have a new BWAIN after a double-BWAIN last week: a Bug With An Impressive Name.
This one is called Collide+Power:
Performance and security clash yet again in “Collide+Power” attack
DUCK. [LAUGHS] Yes, it’s intriguing, isn’t it, that they chose a name that has a plus sign in it?
DOUG. Yes, that makes it hard to say.
DUCK. You can’t have a plus sign in your domain name, so the domain name is collidepower.com
.
DOUG. Alright, let me read from the researchers themselves, and I quote:
The root of the problem is that shared CPU components, like the internal memory system, combine attacker data and data from any other application, resulting in a combined leakage signal in the power consumption.
Thus, knowing its own data, the attacker can determine the exact data values used in other applications.
DUCK. [LAUGHS] Yes, that makes a lot of sense if you already know what they’re talking about!
To try and explain this in plain English (I hope I’ve got this correctly)…
This goes down to the performance-versus-security problems that we’ve talked about before, including last week’s podcast with that Zenbleed bug (which is far more serious, by the way):
Zenbleed: How the quest for CPU performance could put your passwords at risk
There’s a whole load of data that gets kept inside the CPU (“cached” is the technical term for it) so that the CPU doesn’t need to go and fetch it later.
So there’s a whole lot of internal stuff that you don’t really get to manage; the CPU looks after it for you.
And the heart of this attack seems to go something like this…
What the attacker does is to access various memory locations in such a way that the internal cache storage remembers those memory locations, so it doesn’t have to go and read them out of RAM again if they get reused quickly.
So the attacker somehow gets these cache values filled with known patterns of bits, known data values.
And then, if the victim has memory that *they* are using frequently (for example, the bytes in a decryption key), if their value is suddenly judged by the CPU to be more likely to be reused than one of the attacker’s values, it kicks the attacker’s value out of that internal superfast cache location, and puts the new value, the victim’s value, in there.
And what these researchers discovered (and as far fetched as the attack sounds in theory and is in practice, this is quite an amazing thing to discover)…
The number of bits that are different between the old value in the cache and the new value *changes the amount of power required to perform the cache update operation*.
Therefore if you can measure the power consumption of the CPU precisely enough, you can make inferences about which data values got written into the internal, hidden, otherwise invisible cache memory inside the CPU that the CPU thought was none of your business.
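That leakage model, where the power drawn by a cache update tracks the number of bits that flip, is essentially the Hamming distance between the old and new values. Here's a tiny Python sketch of the idea (the byte values are illustrative, not real measurements from the paper):

```python
# Sketch of the Collide+Power leakage model: the power needed to overwrite
# a cached value correlates with the Hamming distance (number of differing
# bits) between the attacker's planted value and the victim's value.
def hamming_distance(old: int, new: int) -> int:
    """Count the bits that flip when `new` replaces `old`."""
    return bin(old ^ new).count("1")

attacker_value = 0x00  # known pattern the attacker planted in the cache
for victim_byte in (0x00, 0x0F, 0xFF):
    flips = hamming_distance(attacker_value, victim_byte)
    print(f"victim byte {victim_byte:#04x} -> {flips} bit flips")
```

By planting known patterns and watching how much power each overwrite costs, the attacker can, in principle, infer the victim's bytes one Hamming distance at a time.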
Quite intriguing, Doug!
DOUG. Outstanding.
OK, there are some mitigations.
That section, it starts off: “First of all, you do not need to worry,” but also nearly all CPUs are affected.
DUCK. Yes, that’s interesting, isn’t it?
It says “first of all” (in normal text) “you” (in italics) “do not need to worry” (in bold). [LAUGHS]
So, basically, no one’s going to attack you with this, but maybe the CPU designers want to think about this in the future if there’s any way around it. [LAUGHS]
I thought that was an interesting way of putting it.
DOUG. OK, so the mitigation is basically to turn off hyperthreading.
Is that how it works?
DUCK. Hyperthreading makes this much worse, as far as I can see.
We already know that hyperthreading is a security problem because there have been numerous vulnerabilities that depend upon it before.
It’s where a CPU, say, with eight cores is pretending to have 16 cores, but actually they’re not in separate parts of the chip.
They’re actually pairs of sort of pseudo-cores that share more electronics, more transistors, more capacitors, than is perhaps a good idea for security reasons.
If you’re running good old OpenBSD, I think they decided hyperthreading is just too hard to secure with mitigations; might as well just turn it off.
By the time you’ve taken the performance hits that the mitigations require, you might as well just not have it.
So I think that turning off hyperthreading will greatly immunise you against this attack.
The second thing you can do is, as the authors say in bold: do not worry. [LAUGHTER]
DOUG. That’s a great mitigation! [LAUGHS]
DUCK. There’s a great bit (I’ll have to read this out, Doug)…
There’s a great bit where the researchers themselves found that to get any sort of reliable information at all, they were getting data rates of somewhere between 10 bits and 100 bits per hour out of the system.
I believe that at least Intel CPUs have a mitigation that I imagine would help against this.
And this brings us back to MSRs, those model-specific registers that we spoke about last week with Zenbleed, where there was a magic bit that you could turn on that said, “Don’t do the risky stuff.”
There is a feature you can set called RAPL filtering, and RAPL is short for running average power limit.
It’s used by programs that want to see how a CPU is performing, for power management purposes, so you don’t need to break into the server room and put a power monitor onto a wire with a little probe on the motherboard. [LAUGHS]
You can actually get the CPU to tell you how much power it’s using.
Intel at least has this mode called RAPL filtering, which deliberately introduces jitter or error.
So you will get results that, on average, are accurate, but where each individual reading will be off.
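That kind of jitter is easy to picture: add zero-mean noise to each reading, and the individual samples wander while the long-run average converges on the truth. Here's a Python simulation of the idea (the noise model and figures are our assumptions for illustration, not how Intel actually implements RAPL filtering):

```python
# Sketch of jittered power readings: each sample is off, but the
# average over many samples remains accurate.
import random

def jittered_reading(true_power: float, jitter: float = 0.5) -> float:
    """Return the true power plus zero-mean uniform noise."""
    return true_power + random.uniform(-jitter, jitter)

random.seed(42)  # reproducible demo
true_power = 15.0  # watts (a made-up figure)
samples = [jittered_reading(true_power) for _ in range(10_000)]
average = sum(samples) / len(samples)
print(f"one sample: {samples[0]:.3f} W, average of 10k: {average:.3f} W")
```

Legitimate power-management software, which only cares about trends, still gets useful numbers, but an attacker who needs precise per-operation readings is out of luck.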
DOUG. Let’s now turn our attention to this new SEC deal.
The Securities and Exchange Commission is demanding four-day disclosure limits on cybersecurity breaches:
SEC demands four-day disclosure limit for cybersecurity breaches
But (A) you get to decide if an attack is serious enough to report, and (B) the four-day limit doesn’t start until you decide something is important enough to report, Paul.
So, a good first step, but perhaps not as aggressive as we would like?
DUCK. I agree with your assessment there, Doug.
It sounded great when I first looked at it: “Hey, you’ve got this four-day disclosure if you have a data breach or a cybersecurity problem.”
But then there was this bit about, “Well, it has to be considered a material problem,” a legal term that means that it actually matters enough to be worth disclosing in the first place.
And then I got to that bit (and it’s not a very long press release by the SEC) that sort-of said, “As soon as you’ve decided that you really ought to report this, then you’ve still got four days to report it.”
Now, I imagine that, legally, that’s not quite how it will work, Doug.
Maybe we’re being a little bit harsh in the article?
DOUG. You zoom in on ransomware attacks, saying that there are a few different types, so let’s talk about that… it’s important in determining whether this is a material attack that you need to report.
So what kind of ransomware are we looking at?
DUCK. Yes, just to explain, I thought that was an important part of this.
Not to point fingers at the SEC, but this is something that doesn’t seem to have come out in the wash in many or any countries yet…
…whether just suffering a ransomware attack is inevitably enough to be a material data breach.
This SEC document doesn’t actually mention the “R-word” at all.
There’s no mention of ransomware-specific stuff.
And ransomware is a problem, isn’t it?
In the article, I wanted to make it clear that the word “ransomware”, which we still widely use, is not quite the right word anymore, is it?
We should probably call it “blackmailware” or just simply “cyberextortion”.
I identify three main types of ransomware attack.
Type A is where the crooks don’t steal your data, they just get to scramble your data in situ.
So they don’t need to upload a single thing.
They scramble it all in a way that they can provide you with the decryption key, but you won’t see a single byte of data leaving your network as a telltale sign that something bad is going on.
Then there’s a Type B ransomware attack, where the crooks go, “You know what, we’re not going to risk writing to all the files, getting caught doing that. We’re just going to steal all the data, and instead of paying the money to get your data back, you’re paying for our silence.”
And then, of course, there’s the Type C ransomware attack, and that is: “Both A and B.”
That’s where the crooks steal your data *and* they scramble it and they go, “Hey, if it’s not one thing that’s going to get you in trouble, it’s the other.”
And it would be nice to know where what I believe the legal profession calls materiality (in other words, the legal significance or the legal relevance to a particular regulation)…
…where that kicks in, in the case of ransomware attacks.
DOUG. Well, this is a good time to bring in our Commenter of the Week, Adam, on this story.
Adam gives his thoughts about the various types of ransomware attack.
So, starting with Type A, where it’s just a straightforward ransomware attack, where they lock up the files and leave a ransom note to have them unlocked…
Adam says:
If a company is hit by ransomware, found no evidence of data exfiltration after a thorough investigation, and recovered their data without paying the ransom, then I would be inclined to say, “No [disclosure needed].”
DUCK. You’ve done enough?
DOUG. Yes.
DUCK. You didn’t quite prevent it, but you did the next-best thing, so you don’t need to tell your investors….
The irony is, Doug, if you had done that as a company, you might want to tell your investors, “Hey, guess what? We had a ransomware attack like everyone else, but we got out of it without paying the money, without engaging with the crooks and without losing any data. So even though we weren’t perfect, we were the next best thing.”
And it actually might carry a lot of weight to disclose that voluntarily, even if the law said you didn’t have to.
DOUG. And then, for Type B, the blackmail angle, Adam says:
That’s a tricky situation.
Theoretically, I would say, “Yes.”
But that’s likely going to lead to a lot of disclosures and damaged business reputations.
So, if you have a bunch of companies coming out and saying, “Look, we got hit by ransomware; we don’t think anything bad happened; we paid the crooks to keep them quiet; and we are trusting that they’re not going to spill the beans,” so to speak…
…that does create a tricky situation, because that could damage a company’s reputation, but had they not disclosed it, no one would know.
DUCK. And I see that Adam felt the same way that both you and I did about the business of, “You have four days, and no more than four days… from the moment that you think the four days should start.”
He rumbled that as well, didn’t he?
He said:
Some companies will likely adopt tactics to greatly delay deciding whether there is a material impact.
So, we don’t quite know how this will play out, and I’m sure the SEC doesn’t quite know either.
It may take a couple of test cases for them to figure out what’s the right amount of bureaucracy to make sure that we all learn what we need to know, without forcing companies to disclose every little IT glitch that ever happens and bury us all in a load of paperwork.
Which essentially leads to breach fatigue, doesn’t it?
If you’ve got so much bad news that isn’t terribly important just washing over you…
…somehow, it’s easy to miss the really important stuff that’s in amongst all the “did I really need to hear about that?”
Time will tell, Douglas.
DOUG. Yes, tricky!
And I know I say this all the time, but we will keep an eye on this, because it will be fascinating to watch this unfold.
So, thank you, Adam, for sending in that comment.
DUCK. Yes, indeed!
DOUG. If you have an interesting story, comment or question you’d like to submit, we’d love to read it out on the podcast.
You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.
That’s our show for today; thanks very much for listening.
For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…
BOTH. Stay secure.
[MUSICAL MODEM]