WTF, certbot

A few weeks ago, my Let's Encrypt cron job started complaining that certbot-auto is no longer supported on CentOS 7.7. Ummmm thaaaaanks? So I changed "certbot-auto" to "certbot", but now it's saying:

Attempting to parse the version 1.9.0 renewal configuration file found at /etc/letsencrypt/renewal/jwz.org.conf with version 0.38.0 of Certbot. This might not work.

How is this shit supposed to work? What am I expected to do on CentOS 7.7? Certbot 0.38.0 is the latest version in yum.
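
For the record, the fix itself was a one-word swap in the crontab -- something like this, with schedule and paths approximate:

# before
0 4 * * * /usr/local/bin/certbot-auto renew -q
# after
0 4 * * * /usr/bin/certbot renew -q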

Previously, previously.


27 Responses:

  1. McDanno says:

    certbot standalone is apparently old and busted, and something called snapd appears to include the new hotness.

    https://certbot.eff.org/instructions

    I'll be enjoying doing this same dance sometime soon I guess.
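
    Per that page, the new dance apparently goes something like this (assuming snapd itself is already installed and running):

    sudo snap install core && sudo snap refresh core
    sudo snap install --classic certbot
    sudo ln -s /snap/bin/certbot /usr/bin/certbot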

    • Doctor Memory says:

      Having done it recently:

      1- it's fine, it works, I don't think about it much now
      2- what the FUCK, letsencrypt? this is a goddamn python script. Why in the name of god am I being asked to install a new package management system with a resident daemon in order to run it?!

      Yes, snaps are kinda nifty. But again: a fucking python script. The future is dumb.

      • Kyzer says:

        This is the Windows-isation of Linux, where every package brings every single dependency along with it, and a pile of disk space and RAM is burned in the name of developer gratification. They want to release without having to rely on maintainers, and rather than go with what's appropriate for any given distro (Python 2 vs 3, NSS vs OpenSSL vs LibreSSL vs ...), they want one-size-fits-all. If they want Python 3 and your distro uses Python 2, well tough fucking luck, here's a complete copy of Python 3 to run our script. As they say here:

        While the Certbot team tries to keep the Certbot packages offered by various operating systems working in the most basic sense, due to distribution policies and/or the limited resources of distribution maintainers, Certbot OS packages often have problems that other distribution mechanisms do not.

        • Aidan Gauland says:

          Python 2 was EOL'd by upstream last year, after being in legacy support mode for years. I do not think it is at all reasonable to expect developers to support it at this point.

        • George Dorn says:

          If only Python had an established standard for handling per-application dependencies.

          Or three.

        • margaret says:

          this quote matches my experience for certbot and practically every other subsystem. the certbot configurator which manages installation, setup, and updates for debian was just a few lines. zero edits when 20 came along but a plate of spaghetti and a head shaped dent in my keyboard when i added centos7. the only exceptions on the debian side for all the 50 or so sub-systems managed are for packages that hadn't come out for pre-release 20 (and a couple for 14, to be fair). the shitshow that is the {% if this_fucking_sub_version_of_rhel_or_centos_and_or_sw_version %} filling the config files is glorious.

          given N install systems, developers will use N**(julian_birthday*vendors_stock_price) install methods. EDA install script rant excised for sanity.

          the good news is that centos is going to become (even more of) a moving target.

        • kaidenshi says:

          This is the Windows-isation of Linux, where every package brings every single dependency along with it, and a pile of disk space and RAM is burned in the name of developer gratification.

          Not only this, but snaps are required to use Canonical's closed source server as their backend, with no FOSS option or alternative. As of 20.04 I will no longer use or deploy an Ubuntu or Ubuntu-derived distro because of that alone.

      • saxmaniac says:

        I love the slovenly snobbery of snaps. It’s just too hard to make a distro-specific release, so we’ll just put the entire distro plus the application inside a disk image and mount that. Everyone gets their own libc! We can now patch libc itself!

        I can sympathize with the problem, but the solution is ridiculous.

        How multiple applications are supposed to do this all at the same time is a mystery.

  2. McDanno says:

    n-gate has always held that letsencrypt are idiots. This does not exactly do much to dispel that impression.

    http://n-gate.com/software/2017/

  3. Sven Wallman says:

    I encountered the same kind of problems, got tired of the randomness and switched to acme.sh (https://github.com/acmesh-official/acme.sh).
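
    For anyone curious, the acme.sh flow is roughly this -- domain, webroot, and destination paths are all placeholders:

    # install; clones to ~/.acme.sh and adds its own cron entry
    curl https://get.acme.sh | sh
    # issue a cert using webroot validation
    acme.sh --issue -d example.com -w /var/www/example
    # copy the cert where the web server expects it, with a reload hook
    acme.sh --install-cert -d example.com \
        --fullchain-file /etc/ssl/example.com.pem \
        --key-file /etc/ssl/example.com.key \
        --reloadcmd "apachectl graceful"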

  4. CJ says:

    On my CentOS boxes which use it (running 7.9), I'm using the EPEL repo (https://fedoraproject.org/wiki/EPEL) to provide certbot, and the latest version on there is 1.10.1, at least for the x86_64 arch. I do see that they've got an old "aarch64" dir in there whose latest version is still 0.38.0, and they mention on EPEL's page that "EPEL-7 for aarch64 is no longer supported as Red Hat ended support for this architecture." Are you using aarch64 by any chance?

    Anyway, I've not had any problems with that EPEL-provided certbot, though I admit I've not used the "certbot-auto" you mentioned... My cronfiles for these tend to just call out to:

    /bin/certbot renew -q --post-hook "/sbin/apachectl graceful"

    (Also, if your install really is just on CentOS 7.7, I'd recommend getting that updated to 7.9, 'cause otherwise it's been unpatched for some time now. If you'd just installed when it was 7.7 and have been using "yum update" periodically since then, then you're almost certainly already on 7.9. You can check /etc/centos-release. "Minor" upgrades within EL major releases are nearly always flawless in terms of backwards compatibility (though there've been exceptions, of course), so you shouldn't have too much to worry about there.)
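
    That check-and-update is just, roughly:

    cat /etc/centos-release
    sudo yum clean all && sudo yum -y update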

    • CJ says:

      Ah, now that this has sat in my brain for a while, I wonder if maybe 7.8 was when they'd dropped support for aarch64? Which might explain it if you're still on 7.7 and have that older version.

      Since the certbot packages are all technically noarch (they're just Python), it might Just Work if you grab the more recent packages from the x86_64 repo. Could give it a go, anyway, though you might end up having to chase dependencies a bit. I assume you'd probably need these three, at least:

      https://mirrors.sonic.net/epel/7/x86_64/Packages/c/certbot-1.10.1-1.el7.noarch.rpm
      https://mirrors.sonic.net/epel/7/x86_64/Packages/p/python2-certbot-1.10.1-1.el7.noarch.rpm
      https://mirrors.sonic.net/epel/7/x86_64/Packages/p/python2-acme-1.10.1-1.el7.noarch.rpm

      ... plus whatever python2-certbot-* integration you might need (like apache or nginx, etc) from https://mirrors.sonic.net/epel/7/x86_64/Packages/p/
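
      Untested, but since yum will happily take URLs directly, it might be as simple as:

      sudo yum localinstall \
          https://mirrors.sonic.net/epel/7/x86_64/Packages/c/certbot-1.10.1-1.el7.noarch.rpm \
          https://mirrors.sonic.net/epel/7/x86_64/Packages/p/python2-certbot-1.10.1-1.el7.noarch.rpm \
          https://mirrors.sonic.net/epel/7/x86_64/Packages/p/python2-acme-1.10.1-1.el7.noarch.rpm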

      • Big says:

        I’m not sure “it might Just Work” is the right approach for something as important as ssl certs.

        (Though it’s looking a lot like if you care about your shit that LetsEncrypt is also not the right approach...)

        • CJ says:

          Hah, you'll get no argument from me on that front -- I only use letsencrypt on my read-only, mostly non-dynamic sites that honestly have no need for SSL in the first place (apart from the general-purpose "it's probably a good idea to put SSL on everything anyway" kind of argument).

          In this case, though, since we're just talking about getting a newer version of software that's already in place, "it might Just Work" doesn't seem too awful. The letsencrypt architecture is already set up and working here.

    • jwz says:

      I do "yum update" every now and then, but /etc/centos-release says 7.7.1908. Though I haven't upgraded kernel or libc in ages because in my experience that leads to a full Kessler cascade of upgrades including PHP and MySQL and Apache, and any time those change it usually means that code changes are required and the site is fucked for a few days, since the maintainers of those packages give not the tiniest slice of a fuck about backward compatibility.

      So I don't know what package update causes that file to change.

      • CJ says:

        Huh, weird. /etc/centos-release should be owned by the "centos-release" package, which would ordinarily be updated as part of the usual "yum update." (You can check which file belongs to which package with an "rpm -qf /foo/bar/baz") I wonder if your repos are hardcoded to a specific CentOS version? For the base CentOS repos, your /etc/yum.repos.d/CentOS-Base.repo should have these three configured:

        [base]
        name=CentOS-$releasever - Base
        mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
        #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

        #released updates
        [updates]
        name=CentOS-$releasever - Updates
        mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
        #baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

        #additional packages that may be useful
        [extras]
        name=CentOS-$releasever - Extras
        mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
        #baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

        ... and then if you've got EPEL too, /etc/yum.repos.d/epel.repo should contain:

        [epel]
        name=Extra Packages for Enterprise Linux 7 - $basearch
        #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
        metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch&infra=$infra&content=$contentdir
        failovermethod=priority
        enabled=1
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

        The one nice thing about using a Redhat-based distro, when they aren't busy prematurely killing product lines, is that they tend to stay extremely backwards-compatible throughout the lifetime of, say, EL7. An upgrade from 7.7 -> 7.9 is super unlikely to require any code or configuration changes (though of course there are exceptions). You'd definitely have upgrades to that PHP+MySQL+Apache stack, though, so those would get touched by it.

        Regardless, as I say, you might be able to get away with just grabbing those updated certbot RPMs from EPEL in the meantime, though so long as you're just on x86_64, getting those repos fixed up and system updated would be a good thing...
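
        And if you want to see where the stale version stamp is coming from, something like this should show it:

        rpm -qf /etc/centos-release      # which package owns the release file
        yum list updates centos-release  # is a newer one waiting in the repos?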

        • jwz says:

          Yeah, /etc/centos-release wasn't getting updated because I was picking and choosing packages to update, and the one that owns that file wasn't one of them.

  5. Michael says:

    I spent a few years hacking around this problem in increasingly complex ways because I couldn't find a version of certbot which worked on my really really ancient Gentoo machine (I haven't used Gentoo in years, it was a Pentium III, many other really inexcusable reasons, &c.)

    My solution was ultimately to just switch from nginx to Caddy and now I don't bother screwing around with this anymore. Not really suggesting you do this, more sharing your pain :)
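
    For flavor: with Caddy the entire config for a site, certificates included, is on the order of this hypothetical Caddyfile, since HTTPS is automatic:

    example.com {
        root * /var/www/example
        file_server
    }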

  6. timeless says:

    I thought they were just trying to EOL certbot-auto for RHEL6/CentOS6 (I get the same warning).

    And yeah, the dance from certbot-auto to certbot pretty much always results in this unhelpful error. certbot-auto is (was?) effectively bleeding edge, whereas distro releases tend to be fairly stale (which is why I generally ended up using certbot-auto).

    It's laziness on the side of perpetual beta software developers who don't want to write code to validate that a given config file is compatible w/ a newer version (I'm certainly guilty of this). -- The same strategy that saw XUL Cache files blown away when versions didn't match (although in that case, I think they were blown away a little too unassertively, resulting in problems -- perhaps a better comparison is necko's cache, which would be deleted if it thought there was any chance of incompatibility). One could be charitable and say "at least they admit they aren't trying instead of pretending it'll work and then saying oops when it doesn't." -- Contrast that with some screensaver devs who mislead people into thinking they have security when they don't ;-) .

    Generally there's a single line in the conf file which marks the version that the file was last maintained by and one can hack it to a different value to silence the warning -- as long as the files are in fact compatible.

    Unfortunately, to figure that out, you kind of need to generate a new certificate w/ the newly-installed older client and compare the generated configuration with your existing configuration to see if it's using the same fields/structure.
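
    Concretely -- and only if you've convinced yourself the files really are compatible -- that hack is on the order of:

    # the stamp in question, per the warning above:
    grep '^version' /etc/letsencrypt/renewal/jwz.org.conf
    # version = 1.9.0
    # pin it to the client you actually have:
    sudo sed -i 's/^version = .*/version = 0.38.0/' /etc/letsencrypt/renewal/jwz.org.conf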

    • pakraticus says:

      Looks like they stopped supporting certbot-auto. See https://certbot.eff.org/docs/install.html#certbot-auto

      This worked from a 'docker run --rm -i -t centos:centos7.7.1908'
      yum install -y python3
      pip3 install --user certbot
      ~/.local/bin/certbot

      And
      yum install -y python3
      python3 -m venv ~/certbot-venv
      . ~/certbot-venv/bin/activate
      pip install certbot
      certbot

      work.
      It's a damn shame the docs don't mention either.
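
      From there the cron entry would just point into the venv -- schedule and home directory hypothetical:

      0 4 * * * /root/certbot-venv/bin/certbot renew -q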

  7. vc says:

    I've been quite happy with acme-tiny for my letsencrypt needs.

    https://github.com/diafygi/acme-tiny

  8. Jim says:

    None of us should have capitulated to key escrow, certificate authorities, CALEA, management coprocessors, national security letters, NIST mathematics, or the technical validity of the Astronomy Picture of the Day.

  9. pde says:

    As others have noted, the Certbot dev team is encouraging everyone to switch to snap to get fresh releases these days, but if you just want to use your older OS packages and silence the original annoying warning in your cron job, running this once:

    sudo certbot renew --force-renewal

    should do the trick. This would happen automatically at your next scheduled renewal, though cron will complain about the warning on stderr in the meantime. You can do certbot renew --dry-run first to check that renewal with the older version is going to work as expected, if you want.
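
    Putting those together -- the --force-renewal run is what rewrites the renewal conf with your installed client's version stamp, which is what makes the warning go away:

    sudo certbot renew --dry-run        # check the older client can actually renew first
    sudo certbot renew --force-renewal  # renew now and rewrite the renewal conf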
