Some Internet companies that were vulnerable to the bug have already updated their servers with a security patch to fix the issue. This means you'll need to go in and change your passwords immediately for these sites. Even that is no guarantee that your information wasn't already compromised, but there's also no indication that hackers knew about the exploit before this week. The companies that are advising customers to change their passwords are doing so as a precautionary measure.
Also, if you reused the same password on multiple sites, and one of those sites was vulnerable, you'll need to change the password everywhere. It's not a good idea to use the same password across multiple sites, anyway.
Go buy 1password already.
Heartbleed should bleed X.509 to death:
4 companies controlling 90.6% of the internet's secrets. This is fucking insane. Do you have any reason to trust this lot with anything, let alone the security of 90.6% of all your 'secure' internet traffic? Do you honestly believe that the NSA/GCHQ didn't see this and say "Well that could be a lot worse"?
What we have done here is fitted our doors with some mega heavy duty locks, and given the master keys to a loyal little dog. Sure, he barks at you with a smile, but can you ever be sure he won't be distracted by an appealing steak from your worst enemy? Of course not, he's a fucking dog. We've seen two-faced dogs before - one was called RSA. They just loved that NSA steak.
At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.
I'm hearing that the CAs are completely clogged, trying to reissue so many new certificates. And I'm not sure we have anything close to the infrastructure necessary to revoke half a million certificates.
Possible evidence that Heartbleed was exploited last year.
Also I'd like to point out again that nearly every security bug you've experienced in your entire life was Dennis Ritchie's fault, for building the single most catastrophic design bug in the history of computing into the C language: the null-terminated string. Thanks, Dennis. Your gift keeps on giving.
For those on linux, keepassx is serviceable (pretty crap, but it works). I just don't understand in what universe mouse input is a valid source of entropy, but that must just be me.
Wait, it's your contention that aimless mouse movements are utterly predictable? Or do I misunderstand you?
Actually, I haven't looked at the code in like a decade, but my recollection is that it's not the mouse movements themselves that are the source of entropy, but the millisecond timing of their arrival in the kernel. The least significant bits of the arrival time are unpredictable, depending on whatever else is happening in the machine at the moment: other hardware interrupts, etc., etc.
You can't trust 1password, that's proprietary software, you can never know what it does with your passwords.
You could recommend Keepass instead.
Pretty hilarious bitching about proprietary software on an article having to do with one of the worst security bugs in history, which lived for two years in an open-source library.
That point can't be made. How many big holes are there in proprietary software that the general public just doesn't know about? Remember the critical holes in MSIE that went unfixed for 6 years? With the code, you at least have the chance to find and fix stuff. Transparency is always better than a false sense of security awarded by secrecy in such cases.
If it's proprietary, then 'the hackers' won't likely have access to the code to know exploits exist. I, of course, use tons of open source software but part of laying your soul bare is that people get to see the shitty parts, too.
I have no idea how a deadbolt works, yet I use one every day to secure my home. I must be stupid because I don't know how everything works.
There's a difference between 'technical trust' and 'reputational trust', two terms I just made up.
'The hackers' can find exploits without source code.
Nothing prevents you from opening up your deadbolt to see how it works. You may not care, but others do.
It's almost like this discussion hasn't been taking place on the internet every day for the past 20 years.
Of course hackers can find exploits without source code, but in some cases the code makes it that much easier. I guess that's kind of my point. Some of us have no problem with proprietary software.
When you lay out your source code you have to accept that everyone has access to it, the good people and the bad people. More developers, smarter developers, might make better software, but criminals may be better at finding and exploiting bugs than your community.
I'm not trying to argue that proprietary software is intrinsically better than open source, and it shouldn't be argued the other way, either. There is plenty of incredibly shitty open source software out there. Each has its place when asking the question 'who do you trust'?
Don't trust a corporation to do anything other than maximize immediate profit. They've put out insecure products and sold us out to the spooks time and time again.
One might say the same thing about open source in this case, except instead of selling out we trust most of the Internet's security to a protocol that let something this stupid through.
The irony of your statement and what it implies, relative to the discussion at hand, is beyond hilarious.
"Don't trust corporations, instead trust this severely broken thing we're discussing right now." Classic.
"one of the worst publicly-known security bugs in history..." FTFY.
I agree that null-terminated strings are awful, but a string in C is just a special case of an array. Isn't the core problem that arrays in C don't know their own length?
Yes, but one tracks the other and that's less pithy. Intrinsic types and boxing don't have zazz, and then people just want to go "hurr durr parentheses."
It's perfectly possible to write a safe string/buffer library in C, you just have to remember that C is only glorified machine code and if you want things done properly you have to do them yourself. The ad-hoc 'oh, here's an integer, what happens if I use it as a buffer length?' attitude that so many C programmers seem to apply to everything is the real problem.
(Did Lisp implementations, once they got down to the machine level, have any tricks to avoid having to check array lengths on every index operation, or did you just take the performance hit and it Wasn't That Bad After All?)
Design matters. Users do what you encourage. Users don't do what's impossible.
Some implementations have the array length coded into a header at the beginning of the array, along with the data type and other metadata. Another trick is to have arrays aligned to particular locations so you only need to check the low bits of a pointer's value rather than calculate the actual length. You often have to have arrays in a particular location because they are GCed differently from other flavours of data anyway. All of this is completely invisible to the programmer, of course.
In most Common Lisp implementations, (declare (optimize (speed 3) (space 3) (safety 0) (debug 0))) would usually suppress the array bounds checks and run CAR on non-lists etc.
On the LispMs, CAR on a non-list would still throw an exception because type-check and dereference happened in parallel at the hardware/microcode level.
True, I was thinking more about the generic (RISC) CPU implementations. On LispMs you weren't really tempted to avoid these checks (at least not for performance reasons) because the hardware did them for you "for free".
On today's (generic) hardware this doesn't happen automatically, but the performance overhead of checking a length or type field is trivial compared to the enormous expense of a memory access or, heaven forbid, a memcpy(). So it seems sensible that modern languages do that for you.
You have to understand the conditions when C was designed, and what the competitive environment was. People would say "my good old trusted assembler produces code 20% faster and 30% leaner, who needs that C thingy?" So all the language features had to map 1:1 to the machine instruction set (the PDP-11's, at the time).
You are ignorant and easy to ignore. Try harder.
You are an idiot. How much does 1password pay you for promotion? Too much.
OpenSSL was created in the late 90s. The limits of hardware that predates it by 20 years are pretty well irrelevant. When someone drills a hole in someone's head in 2014 to let the demons out, we don't talk about why there were valid reasons for people to believe in trepanning during the dark ages. We ask why the fuck someone is drilling holes in people's heads to solve problems in the 21st century.
If you're unaware of jwz's software pedigree by now, perhaps you should keep your asinine comments to yourself.
Calling Sergei ignorant and then not explaining why he is ignorant is intellectually lazy. Not knowing something isn't bad, nor does it make someone a bad person in any way. On the other hand, insulting people for their lack of knowledge is rude. I'm not questioning your knowledge, or your intelligence, just your ability to remain civil in the face of non-confrontational dialogue.
>jwz's software pedigree by now, perhaps you should keep your asinine comments to yourself.
jwz's skill is irrelevant here, his reply is uncivil.
I did not use the word ignorant as an insult. That's on you.
It's not my responsibility to correct every wrong person on the internet. Funny thing: there are many.
But it's nice to see that you stand behind your words so little that you won't attach your name to them. Coward.
AFAICS the root cause here is that the protocol included a payload length when it didn't need to (and, of course, that OpenSSL was using that length when it should have been ignoring it).
Everyone's made a stupid coding mistake that ended up in production, but just imagine how embarrassing it would be to make international headlines. Robin Seggelmann must be feeling contrite right now.
As usual, anti-Semitism is uninformed.
I'm not sure what you think null termination has to do with this bug. It's a case of duelling length fields. The record layer has a length that OpenSSL respects, but the application layer has a different length that wasn't checked to make sure it's sane, so a memcpy goes off the end of the allocation. There's no check for a terminator character of any type at any stage of the process.
Reading beyond the end of an object should be impossible at the language level. That has everything to do with null termination.
If you are starting from the assumption that memcpy behaves like memcpy, then you're broken by design.
> Reading beyond the end of an object should be impossible at the language level.
Yeah, but when I went to university, that was associated with Pascal. "LOLPASCAL", chanted the students and lecturers, "give us the power of C without having to tell our compiler how long our strings are."
Alas, things don't seem to have changed much.
There are many languages available that provide that functionality, yet OpenSSL wasn't implemented in them. For that very reason, in fact!
C is broken by design. Simplicity, portability and performance ALWAYS took precedence in its design over safety/security. If you asked Dennis about this he unabashedly admitted that C/Unix were not designed with security as a goal. He left that decision up to the customers/vendors, which is also why C/Unix have been so enormously successful. As has been pointed out, there are compilers and libraries available that can mitigate these issues somewhat if you choose to use them.
C and its derivatives are system programming languages. They are not for amateurs and I freely admit that they are very often used when a safer choice would be preferable (especially given how powerful modern architectures are).
The real lesson-learned here is that popular open-source security libraries need to be more carefully audited. The sort of "business logic" failures that caused the Heartbleed 'bug' can happen in any language.
Non-amateurs wrote this bug. In C. That is the "real lesson", if you'd leave your mind open to it instead of going into must-defend-Dennis mode.
You are not a member of an elite just because you can hack C. I have a soft spot for K&R 2nd Ed too; it was my first real programming language and I loved the simplicity and clarity of the language and the book. But it's long past time to admit that it wasn't really as good a design as we thought, even taken on its own merits.
It really depends on the context. If you are writing embedded systems, then null-terminated strings are a good design. It keeps the compiled code sparse and efficient. Now, consider that back when C was created, everything was an embedded system as the Internet did not exist (at all).
I agree that these days, in context, null-terminated strings are a "bad" design for code that relies heavily on accepting data from unauthorized sources. But again, that's why there are hardened libraries and toolchains available (which I always use). I personally find this model ideal, as it allows one to use the same language for multiple purposes, both efficiently and securely.
I will also point out that I've observed similar 'bugs' in all of the popular programming languages. That doesn't make any of them bad from a design perspective, it's just a reality of modern software development.
Btw, I'm not a C hacker (unless you count compiling and occasionally debugging it). I'm just pointing out why it was designed how it was and that it almost certainly would have been less successful if it was a bigger/safer language.
Null-terminated strings are never a good idea, and what we call "embedded systems" now would have nearly been considered supercomputers when C was a going concern. Jesus. Just stop.
Btw, you are wrong to blame Dennis for null-terminated strings, as they predate the C language (and were used in BCPL, C's predecessor):
Given that the assemblers for the hardware that the first versions of Unix/C were targeting also used null-terminated strings, it's really not surprising that C chose that idiom as well.
Just because he didn't come up with the idea doesn't mean it wasn't a terrible idea. He's the one who chose to build C around that model. It is his legacy we live with, not BCPL's.
I normally wouldn't respond as it's punching down at this point, but I can't resist in your case.
You are talking about an "idea" from the 1960's, when you were a toddler and I wasn't even born yet. Actual practical security problems wouldn't surface until decades later, at which point Dennis and his peers had already moved well on from 70's-style K&R C to Plan9 and C++, both of which solved this problem (and more) reasonably well.
That you and your peers chose to use obsolete software development tools when better choices were available is no one's fault but your own.
But hey, good security is hard. So, I suppose this can be expected as well:
I blame Obama, personally.
I'll leave it to others to catalog which fallacies you're applying here.
"C++ solved this problem reasonably well", herp derp.
More fun with Computer Security!
Theo de Raadt comments on why existing kernel/library mitigations didn't prevent this attack:
Strings in C++:
Strings in Plan9:
I have to say, the marketing angle on this security bug was particularly nice. Catchy name, good logo, very scary!
It was weirdly well done. How did that happen?
Patrick McKenzie wrote a great analysis on exactly this subject. It's a nice read.
A neckbeard seems to disagree with you. I point it out because I like it when you yell at them.
I'm not going to defend the null-terminated string but if they'd been using strncpy rather than memcpy they wouldn't have ended up in this stinker. Of course you could say the same about any number of better practices, as well as using a less broken shitty language.
They couldn't have used strncpy, because there's no nul-terminated string involved. Everything is length+data in this situation.
Also strncpy() is almost never what you actually wanted. The purpose of strncpy() when it was conceived was to copy strings into length-limited data structures that don't use a NUL terminator, e.g. disk blocks. You almost certainly aren't using one of those, so strncpy() isn't what you need and using it will probably introduce yet more bugs into your program.
strcpy_s() is the function that does what the average fool expects strncpy() to do, except it's from The Future, which means you probably aren't allowed to use it in your programs.
Mostly though, JWZ is just right. C is crap and that's Dennis' fault in the same way that the Web is crap and that's Tim's fault. And really it's all our fault for not saying "Well, but this is crap, we should at least fix it first". We've had decades to make C obsolete and the best we've managed is to curb some of its worst excesses (C11 finally gets rid of gets() for example).
All software is crap, but sometimes we put up with it.
I hear all kinds of talk about compromised Certificate Authorities. So will I have to upgrade all of my browsers? Including on my phone?
Either that, or back up everything you want to keep and don't put anything you don't want everyone to see on the Internet unless you encrypt it on your own hardware and use steganography first, like every competent sysadmin keeps telling everyone.
Also, I have to stick my neck out and blame thread recycling instead of forking for this one. Thread recycling also provides a host of memory leak and deadlock bugs, or at least makes them matter when they exist, but as long as the top traffic sites get performance wins from recycling old threads when they should be using coprocesses, then it will continue to be a "best practice." The NSA should give me $10 million for saying this, at least the old NSA that didn't buy exploits from organized crime and look the other way when they get used for blackmail and extortion.
Forking makes the situation worse, because now the probability of that block of RAM that the heartbeat gave away being the server's secret key is much, much more likely.
And then, because most servers don't use perfect forward secrecy, any messages you have ever exchanged with the server become freely decryptable.
I guess that all depends on what kind of pages MMUs give you from sbrk, and I might be two full decades behind on that and wouldn't be surprised if it varies by platform. But I'm pretty sure that got fixed in the 80s on some platforms.
There's way more to the everything-is-terribleness, naturally.
I don't see how null-terminated strings are a bad design decision./'@W4#@';-=}\.'3-+#%lu!NO CARRIER
any piece of software that locks users in is crap - i.e., 1password. we are too lazy to export your data in human-readable form. die.
Keepass: that's a good, FOSS product.
Lastpass: that's a product superior to all, feature-wise.
So somehow JSON is a proprietary non-human-readable format now?
WTF are you talking about Karl?
WTF are you talking about? I literally just exported my whole vault, it's a folder package with JSON inside.
Just because it doesn't end in .txt doesn't mean you're "locked in".
If you really need to import it to something else, you can transform the JSON into anything you want with 10 minutes of jQuery or any other language you're comfortable in.
hilarious response. truly.
Well, Karl, if that's all it takes, i sincerely think they should hire you to take their commercial crapware to the next level. See: competing products, including free ones, achieve greater functionality and superior ease of use without burdening end users with writing their own code.
after that, maybe you can help those halfwits get past copy/paste -- aka stupidest way to transfer passwords. see: another excellent reason to avoid 1passcrap is their use of the clipboard, which is easily spied upon by apps on all platforms. with lastpass and keepass, my passwords get filled into chrome, firefox, etc. without use of the clipboard.
For all the rage you have you sure are misinformed about how 1Password works.
Get a room, you guys.
in the aforementioned Keepass and Lastpass, i have no problem importing and exporting plain text, thus allowing me to interchange with other products painlessly.
The good things about open source are not without failure modes.