Not a satisfying answer, but I believe this is authenticating the browser vs. the device. This doesn't happen to me with Safari, because it has whatever magic API juju it needs to establish trust with macOS. Firefox, not so much.
I am using Safari.
My guess is that they don't know, for whatever reason, that the browser you're using is on the same device, and they're blindly sending the code to one (or all) of the devices they know you own. I have to do this dance when I log in to my mother's Apple account (I know, bad me, but she is 8x and not technically inclined, and I really am not going to talk her through fixing mail over the phone or whatever) to help her with stuff: it sends the code to her iPad, she reads it to me down the phone, and I then type it in. That actually is checking something meaningful.
So the question is: if you're using Safari on an Apple device, why on earth don't they know you're already using the device? As it stands, the check is meaningless.
This also was meant to be a top-level comment. I think I need to stop using this interweb thing.
No worries, some sites that want to "check to verify it's you who logged in" also feed that yes/no question back to the same device. This way you have to copy the code yourself, in case the device was stolen by some entity capable of stealing your login and using it, but not copying numbers between two sets of boxes.
/s (as seems to be necessary in these fallen times)
Yeah, I had this happen to me today as well while trying to log in to my Apple ID with Safari. I still think it's kinda stupid, but I wouldn't like to have to power on my dev iPhone every time I wanted to auth...
The two factors used to authenticate you are "something you know" and "something you have". "Something you know" was the password on the previous screen, "something you have" is the computer (or other device) showing the verification code.
The way it's safer than before is that if some rando figures out your password and goes to the same login page, they don't get that window. But if they know all that and steal/are in a position to use your computer/"second factor", yeah, you're still boned, but at least they have to do that.
And yes, in a strict security sense, it would be safer to not show the code on the exact same device, if that was possible to determine. But it's not enough of a hit to security to undo the advantage from the previous paragraph.
In other words, "this is not 2FA at all".
That's the same thing I experienced earlier today when an Android app insisted that I use Google Authenticator to use it:
If a malicious party is able to get so far as to log into my phone (not necessarily easy, but not tremendously difficult: All it takes is my phone and something like one of my fingers, both of which are easy for a physical attacker to get), then they're already very well-equipped to use both the app and the Google "2FA" authenticator required by that app.
The summation is 1FA: One-Finger Access.
It just has an extra, useless step.
I think you'll find that the factors are usually referred to as "something you forgot", "something you lost" and "something you were".
This is none of those things.
The best explanation so far is that there is a difference between compromising your password and compromising your cellphone, unless you use your password on your cellphone, in which case "2 factors" arguably becomes "2 steps".
But if someone manages to acquire both your password and your cellphone, they have the "2 factors" anyhow. BTW, "multiple factors" are often misunderstood; there's an explanation here:
«acquire both your password and your cellphone»
Part of the subtlety of this approach is that someone does not actually need to acquire your cellphone, only your cellphone number, which may be a lot easier.
I appreciate that Apple apparently looked at the UX of HOTP/TOTP/WebAuthn and said "this is unusable nerd bullshit", because that is completely the correct response.
I am less appreciative of the fact that they were apparently completely unable to come up with anything better and in fact came up with something demonstrably worse. Now, as the saying goes, we have two problems. (Well more like six or seven at the very least.)
My credit union does the same thing with their mobile app, except it also automatically fills in the code for you when the text arrives, which is nice, because I don't have to type it in with my fingers like an animal, but also moronically stupid in exactly the same way.
They do call it "two step" instead of "two factor", which is accurate but doesn't make it any more secure.
What about the user experience of WebAuthn do you think is "unusable nerd bullshit"?
I'm no expert, but I think DoctorMemory may have gone too far with "this is unusable nerd bullshit", as it isn't completely "bullshit".
I think there is some truth to the "unusable nerd" part: in my experience, I don't know a single non-technologically-savvy human being who uses TOTP on purpose.
Having to explain to people "this second step provides you with a lot more security" makes a normal human's eyes roll into the back of their skull and foam come out of their mouth as they drop to the floor in violent and very unreasonable convulsions over this pretty simple idea. I guess TOTP should be simple enough that an illiterate monkey who spends its whole day masturbating could use it, but then would it still be secure? But at least the monkey would be using TOTP. I don't know.
I think TOTP is a good idea. I use it, and I think others should use it too, even if it isn't a perfect system.
This sub-thread isn't about TOTP; it's about WebAuthn.
Whereas TOTP involves copying digits from your phone into your computer (or worse, from one app on the phone to another), which makes you wonder why we bother having computers if we're left doing this drudgery, WebAuthn in second-factor mode involves only some gesture of your presence, such as tapping a touch sensor or pressing a button on a physical object.
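For concreteness, the drudgery in question is all there is to TOTP: an HMAC over a time counter, truncated to a handful of digits that a human then ferries between boxes. A minimal RFC 6238 sketch in Python (the function name and parameters are my own choices, not any particular authenticator's code):

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, for_time=None, period=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second periods elapsed."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(for_time if for_time is not None else time.time()) // period
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): the low nibble of the last byte picks an
    # offset, and the 31 bits starting there become the code.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: ASCII key "12345678901234567890" at time t = 59 s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # → 94287082
```

Those six or eight digits are the entire shared state: the phone and the server each run this computation, and the human is the transport layer. WebAuthn replaces the human-as-wire with a challenge signed by the authenticator directly.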
This is probably the biggest usability win: you barely notice it exists. From the security point of view it eliminates phishing, it obviously blocks brute-force attacks and such, it has much better privacy properties than other solutions, and you can do two-factor with the same basic technology.
The price is, you need the physical object. Some outfits did the maths and just bought all their employees a token. Google for example. You won't read stories about phishing Google employees any more because it completely stopped working years ago when they bought Security Keys. But most organisations don't bother, and most individuals won't either.
To be clear: I think FIDO/WebAuthn are the best of a bad bunch of options right now and I wish more places supported them, but they have the ugly property that the user experience gets exponentially worse the more you use them.
If you've just got one yubikey (or whatever your principal is) and you use it to secure your google account and maybe your bank account, it's great. Works exactly like it says on the tin, and the fact that it's ironclad against phishing is really really nice.
But... you probably don't just have one key. And you probably have more than one place that you don't use some sort of SSO system to sign into. You'll probably have one key permanently jammed into the USB port on your laptop and a second one on your keychain as a backup in case your laptop gets stolen. And now things quickly start getting out of control: have I remembered to register both keys on all of the sites I use them for? (Or maybe I have a work laptop and a backup key for work and now I have four keys to keep track of, oh god.) How do I keep track of which sites use webauthn at all? (1Password has no idea and I think no easy way of keeping track.)
But wait, it gets worse: it turns out the keys aren't quite as indestructible as first advertised. (Looking at you, Yubikey Nano 4C, you utter piece of shite with a _35%_ failure rate at my company.) There turn out to be a bunch of way more common events than "stolen laptop" that might prompt you to need to replace a key, and when that happens you have to remember each site that you've registered the key on, log onto them with your backup key, and go through the process of de-registering and re-registering. Better hope you named them consistently! (Better hope the site allows you to name them!)
But wait, it gets worse! The webauthn experience is still complete shit on mobile! Phone support for hardware keys is either "very bad" or "2-3x the cost of a normal fido key" or sometimes both (yubikey NFC). In the last year allegedly both Mobile Safari and Chrome have gained support for keeping your auth principal in the phone secure enclave so the phone itself becomes your key, but funny story: Google Login still doesn't support that so you're probably still having to configure TOTP just to have a fallback plan.
And of course, my bank doesn't support it. Your bank probably doesn't support it. Financial institutions come in two flavors: small ones like credit unions who buy all of their customer-facing software from a small number of white-label providers, and huge ones like Citibank that operate at the speed of the heavily regulated massive institutions that they are. In nearly all cases the answer is: we offer the cutting-edge option of SMS 2FA, or we offer none at all. (Vanguard is the lone exception among the institutions I have accounts at, although finding their hardware key registration page is a bit of a challenge.)
I use webauthn where I can because there's not really a better alternative right now, but I'd seriously hesitate to recommend it to any of my non-technical family members and if I did it would be for the sole purpose of securing their primary email account.
I can't help you with 1Password. I can keep arbitrary notes in pass (Jason Donenfeld's "Unix-style" password manager, if piping /dev/urandom through tr -dc to make new passwords makes sense to you then you need pass) but I also do keep a paper list of sites with WebAuthn at least partly because it's still a small list and I like to be able to reel off the list of important sites you can secure this way.
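(For the curious, the urandom-through-tr idiom has a close Python equivalent. This is just my sketch with arbitrary defaults, not pass's actual generator:)

```python
import secrets
import string


def new_password(length=24, alphabet=string.ascii_letters + string.digits):
    """Roughly `head -c 1000 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 24`:
    pick `length` characters uniformly from `alphabet` using the OS CSPRNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(new_password())  # e.g. "q0VZc1pT..." (different every run, of course)
```

The point of `secrets` (rather than `random`) is the same as the point of /dev/urandom: the characters come from a cryptographically secure source, so each of the ~62^24 possibilities is equally likely.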
I think every site that actually works as intended (AWS is an outlier, insisting on only one FIDO token even though the standard is pretty clear about why you shouldn't do that) allowed me to name my keys. Speaking as someone who built an implementation (albeit a toy, for my vanity site), it's really annoying if you can't name the keys. They don't come with names, and the arbitrary random gibberish that makes the protocol work is not memorable, so in testing you will want the keys to have names. Accordingly I've never seen anywhere (except AWS) that doesn't do this.
I can't test the claim about Mobile Google Login, because I have an Android phone, so from its point of view it already is authenticated; there is no way (that I know of) to actually do the WebAuthn dance with Google Login on the phone. I also won't check that Facebook works on the phone, as I have a policy of Facebook only living inside the Facebook Container in one copy of Firefox on one computer, to reduce contamination. However other sites I use work, albeit few of them are things I would really need on my phone. GitHub, for example, I've never used on my phone, but since we're discussing this I signed into it from the phone and added the phone itself as a Security Key, very low friction.
In fact I suspect it's low enough friction that it would trigger our host to say "This clearly isn't doing anything" and that's how I ended up implementing it myself to be sure I knew how it works.
I don't think "my bank doesn't do it" counts as a reason to consider the user experience "unusable nerd bullshit". Banks wouldn't know a good user experience if they saw one, nor good security. Not so long ago I was still explaining to people that maybe your bank does redirect you from a login process to an HTTP URL, but no, that isn't actually secure, and you mustn't do it.
This isn’t 2FA.
Sometimes they may ask you to sign in on another Apple device, usually when you’re setting up a new one. Even then, I’m not sure I’d call that 2FA.
The closest thing to real 2FA is when they’re asking you for pet names and which city you were born in, after confirming a password and sending you a code.
Apparently this all changes soon.
Threat: Someone steals my iCloud password and can access my iCloud data.
First protection factor: My iCloud password.
Second protection factor: A secure, enrolled device (MacBook, iPhone, iPad) tied to my identity, which can present a one-time code for the first access from untrusted devices using Safari.
Objection: What if the first factor (the iCloud password) also unlocks the secure, enrolled device (the MacBook)? Not much of a two-factor, eh?
Argument: You still need both the password and physical access to the enrolled device (two factors) to make the attack. So it is still two factor authentication: The attacker needs both your password and the device to obtain a trusted connection in Safari.
If you use Safari on the iPhone, you have the same issue: using Safari on the iPhone to access iCloud, the 2FA code comes to the iPhone (same device!). You can object that this looks like one factor, except that you still need both the password and the enrolled device. An advantage in this case: the iPhone uses different credentials to unlock (passcode, Touch ID, Face ID), so you need to know both the iCloud password and the passcode, even when using the same device.
One way to improve security on macOS: use a different password to log in to and unlock the MacBook than your iCloud password. You can set a separate password in the Users & Groups preferences. It's less convenient than using the iCloud password, but more secure if someone who knows your iCloud password gets hold of your MacBook.
You can declare your MacBook an untrusted device to stop the 2FA messages from coming through. This also breaks all iCloud integration. But if you think people can easily get into your MacBook and they know your iCloud password, your whole iCloud is compromised anyway.
My employer moved our 401(k) servicing so I had to create an account at the new site. After picking username and password, it asked me for a cell phone number to which it would send a text with a secret code. That done, I had full access to the account.
So "something I know" -- the password I just made up -- and "something I have" -- the phone I just told you about. Nothing to prove my identity.
There is multi-factor auth (MFA), which John Oliver at least referenced correctly in a recent Last Week Tonight on ransomware.
There is two-factor auth (two factors, e.g. a username and a passphrase; also see Citadel BBS "paranoid mode" [the default in the 1980s was just a passphrase, or "one-factor auth", which worked really well in most places until abusers became more commonplace than users]).
Then there is out-of-band signaling, e.g. SS7, the IS-41/ANSI-41 forward and reverse digital control channels for AMPS cellular networks, and such.
You can have out-of-band signaling with any authentication mechanism, but simply being in-band doesn't negate the number of factors at play. It is wise but not mandatory to verify your SSH public keys from a console upon first logon, but how many people do that? Very few. So too it is with various authentication mechanisms and whether their verification is in-band or out-of-band.
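That out-of-band SSH check boils down to comparing fingerprints: compute the hash of the key blob on the console, read it over some other channel, compare. A sketch of how the SHA256 fingerprint that `ssh-keygen -lf` prints is derived from a public key line (the key below is a dummy all-zero blob, purely for illustration):

```python
import base64
import hashlib


def ssh_fingerprint(pubkey_line):
    """SHA256 fingerprint of an OpenSSH public key line, in the form
    ssh-keygen prints: base64(SHA256(key blob)) without '=' padding."""
    # A public key line looks like: "ssh-ed25519 AAAAC3... comment"
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = base64.b64encode(hashlib.sha256(blob).digest()).decode()
    return "SHA256:" + digest.rstrip("=")


# Dummy ed25519-shaped blob: length-prefixed type string + 32 zero bytes.
blob = b"\x00\x00\x00\x0bssh-ed25519" + b"\x00\x00\x00\x20" + bytes(32)
line = "ssh-ed25519 " + base64.b64encode(blob).decode() + " user@example"
print(ssh_fingerprint(line))  # prints "SHA256:..." for the dummy blob
```

The verification is out-of-band only if the fingerprint you compare against arrived over a different channel than the connection itself; pasting it from the same possibly-MITMed session verifies nothing.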
The peril is: if you are being MITMed in-band, a verification mechanism which is out-of-band may mitigate that.
However, it is rarely ever implemented, can you guess why?
Think: layering violations and how difficult it may be to verify anything in the first place.
Or, as we called it at Georgia Tech, one factor authentication. But Duo "makes it easy!"