Facebook rate limiting

Apparently Facebook would prefer that nobody use their API, because they just rolled out rules that, as far as I can tell, mean I'm only allowed to load 200 URLs per hour. But it's even worse than that, because it's reporting:

X-App-Usage: {"call_count":58, "total_cputime":0, "total_time":152}

which means I only made 58% of the allowed calls, but because "total time" is over 100, now I'm locked out for some length of time that I am not allowed to know. I can't even tell what "total_time" means or how I can possibly be expected to adjust it.
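For reference, the header is just a JSON blob of three percentages of the hourly allowance. A minimal sketch of pulling it apart (assuming you have the response headers as a dict; the threshold behavior is as reported above, not something Facebook spells out clearly):

```python
import json

def parse_app_usage(headers):
    """Decode the X-App-Usage response header.  Each field is a
    percentage (0-100) of the hourly allowance for that app;
    throttling reportedly kicks in when any of them hits 100."""
    raw = headers.get("X-App-Usage")
    if raw is None:
        return None
    usage = json.loads(raw)
    return {k: usage.get(k, 0)
            for k in ("call_count", "total_cputime", "total_time")}

usage = parse_app_usage(
    {"X-App-Usage": '{"call_count":58, "total_cputime":0, "total_time":152}'})
# usage["total_time"] -> 152, i.e. over the 100% line
```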

What is happening and how do I make it stop?

I think maybe the same thing is happening with Instagram, but they just say "Oops, something went wrong" so who the fuck knows.



19 Responses:

  1. As best as I can tell, the way Facebook wants you to deal with this is to notice when total_cputime and total_time are increasing, and to add additional delays until they no longer increase on a request-to-request basis.

    If this sounds insane to you, well, me too.
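    Taken literally, the scheme looks something like this (a sketch only; `fetch` is a hypothetical helper, not a real API, and the doubling policy is my guess at "add additional delays"):

```python
import time

def fetch_with_adaptive_delay(urls, fetch, base_delay=20, max_delay=600,
                              sleep=time.sleep):
    """Crude sketch of the scheme above: keep a delay between
    requests, and double it whenever total_cputime or total_time
    increased since the previous request.  `fetch(url)` is a
    hypothetical helper that makes the GET and returns the parsed
    X-App-Usage dict.  Returns the final delay for inspection."""
    delay = base_delay
    prev = None
    for url in urls:
        usage = fetch(url)
        if prev is not None and (
                usage["total_cputime"] > prev["total_cputime"]
                or usage["total_time"] > prev["total_time"]):
            delay = min(delay * 2, max_delay)  # still rising: back off harder
        prev = usage
        sleep(delay)
    return delay
```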

    • jwz says:

      I'm already sleeping for 20 seconds between each GET and that didn't fix it.

    • jwz says:

      And when it locks me out, I wait for a full hour before trying again. Sometimes it takes 4+ hours to let me back in again.

      • What other headers are you seeing? Any X-Business-Use-Case usage headers with "estimated_time_to_regain_access" when you get throttled, or are you just seeing the generic error with message / type / code / trace? If the latter, which code (curious if you're getting any of the weirder ones)?
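        If that header does show up, it's a JSON map keyed by business id; something like this would pull out the worst estimated_time_to_regain_access (a sketch, with the field names as Facebook documents them, not verified against your actual responses):

```python
import json

def regain_access_minutes(headers):
    """Scan the X-Business-Use-Case-Usage response header (a JSON
    map of business-id -> list of usage records) and return the
    largest estimated_time_to_regain_access seen, in minutes, or
    None if the header is absent."""
    raw = headers.get("X-Business-Use-Case-Usage")
    if raw is None:
        return None
    worst = 0
    for records in json.loads(raw).values():
        for record in records:
            worst = max(worst,
                        record.get("estimated_time_to_regain_access", 0))
    return worst
```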

    • jwz says:

      Check out this complete madness. It makes no god damned sense:

      11:54:03 AM: Loaded https://graph.facebook.com/dnalounge?fields=id&access_token=...
      Got {"call_count":57,"total_cputime":0,"total_time":224}
      Sleep for 90 minutes and then retry the same URL:

      1:24:03 PM: {"call_count":32,"total_cputime":0,"total_time":84}

      I slept while locked, so no other process was making API requests, and in that time "total_time" only went down by 140.

      Another 90 minutes later:

      2:54:04 PM: {"call_count":59,"total_cputime":0,"total_time":172}

      So I made LITERALLY ZERO REQUESTS since I had total_time 84, and when trying to re-make that one same request, total_time jumped up by 88. And call_count went up too.

      Double-You Tee Fuck.

      • What in the name of Christ.

        • jwz says:

          Also when I go to https://developers.facebook.com/apps/...../rate-limit-details/app/ it says that in the last 24 hours there were 1241 total calls to gr:get:Event/admins and I am very, very certain that the real number is less than ten.

          It's acting like something is billing time/calls against the "DNA Lounge" app that is not my perl script and I don't see how that's possible.

          The dashboard thingy shows "active login users: 1" which is what it should be.

          • Pavel says:

            Is it at all possible that someone else got access to your access token? Either someone else, or some goofy script running on some forgotten box? I know this is probably the equivalent of "have you tried turning it off and on again", but would generating new access credentials and killing the old ones shed any light on this?

            • jwz says:

              That seems unlikely but I guess I might as well re-roll the app tokens.

              • Pavel says:

                If I had a nickel for every time the dumbest possible thing ended up being the cause of a software-related blunder, I could afford to finally give in to my impostor syndrome and retire.

              • Nick Lamb says:

                I actually have an even harsher suggestion, because I have learned to be properly afraid of both my own incompetence and the world's never-ceasing dick-ishness.

                I'd generate new access tokens, invalidate the old ones, and then wait, and watch Facebook's own view for as long as I could stand before giving the new tokens to my software.

                Because the moment the new tokens are pasted into actual code there are two extra things that might go wrong. Maybe I'm so dumb I accidentally caused an effect I didn't understand, like there's another copy of the same code I didn't realise ends up running, or there's an unrelated thing making more threads that'll all end up duplicating the API calls. Something I never dreamed of, and yet...

                But also, maybe some dumb library I brought in is now (deliberately or not) exfiltrating my API keys. Like its author decided every time Perfectly Normal Error happens it should silently create a Pastebin with the environment variables in, one of which is my Facebook API key. That sort of nonsense. And of course once the data is exfiltrated, regardless of why, scumbags are going to abuse that every way they can.

                If the numbers keep going up when the only valid keys are in your head or on a PostIt note in your apartment, not in any software, then that points pretty clearly to Facebook messing up.

  2. Nick Lamb says:

    First, some almost good news. The IETF working group httpbis (which develops HTTP, right now mostly fixing HTTP/2 with a view to taking over work on HTTP/3 once QUIC settles down) seemed at least vaguely interested in taking draft-polli-ratelimit-headers-01. One day, not so distant from now, we might actually have standardised rate-limiting and the tooling that comes with that.

    Now the inevitable bad news. Hand-rolled APIs are garbage. When we've talked to Facebook APIs in the past, we always ran into bugs Facebook wasn't aware of, and then it was a lottery as to whether they'd fix the bug, declare it to be a feature, or some mix of the two. For example, we called an API that was advertised without any specific limit on the number of things we could monitor, but after we used it some number of times (not a round number, but a crazy number like 83) it would just fail all subsequent calls, saying the parameters were bogus. On a hunch I told it to delete one monitored item, and then everything worked again until our code added another one and it broke. Facebook acknowledged this as a bug, but their "fix" was to increase the limit to 100 items and then say "you've added too many items, go away."

    Even where there's a standard like RFC 6962 people are reluctant to put their hands up and say "We goofed, we'll fix it". One major public CT log named after a mythical creature would periodically give me 400 errors which offered in explanation only the phrase "The API call has failed due to being rate limited" and occasionally 502 gateway errors, suggesting there's some crummy reverse proxy in front of the log. I reported all this, and got... radio silence but I can tell you that several days later the log went unavailable for a few minutes and then miraculously stopped having errors. Huh.

  3. Jerry C says:

    The only way to win is not to play.

    • Glaurung says:

      Since Facebook is now the default town square of the Internet, "Not Playing" means that absolutely no one will ever hear about your event/band/small business/thing.

  4. Jim says:

    Have you actually measured ad spend ROI on Google, LinkedIn, NextDoor.com, and Reddit? Facebook will downtrend until they match their repudiation of their oft-stated commitment to unethical business with deeds.

    • jwz says:

      Yes. Facebook ads are still the only ones that work. I don't like this any more than you do.

      • Jim says:

        One of my mentor co-admins at hackthefuture.org just offered me a Facebook campaign for our fundraiser. I asked them to go ahead, but so far we only have had one additional one-time $10 donation. So if it's really such a great thing, I have high hopes for when they get around to turning it on. I wonder if the fact that I'm not on Facebook is going to spoil it, though.
