buildroot

Lazyweb, if you can point me at working instructions for getting a MacPorts-based Raspberry Pi cross-compilation environment up and running on MacOS 10.12, that would be appreciated.

(Be advised, before you point me at instructions for an older release of MacOS: I have tried that, and it doesn't work.)

I'm trying to get buildroot going, but apparently Apple's "clang can totally just impersonate gcc" theory is false, and even after doing "port select --set gcc mp-gcc7", MacPorts seems to be using the wrong compiler. I can't even find a straight answer on whether I should be installing "arm-elf-gcc" or "arm-none-eabi-gcc", but neither of them will build when doing "port install".

Fun fact: buildroot requires GNU sed. Because we're still innovating in sed syntax here in the 21st century, FFS.

Previously.


Update: Apparently buildroot does attempt to download and build its own cross-compilation toolchain if you run it with a functional native gcc. So here's how far I've gotten in this buildroot-2016.08 shitshow:

  • port install gsed
  • export PATH=/opt/local/libexec/gnubin:$PATH
  • make raspberrypi3_defconfig
  • port install gcc7 gcc_select
  • port select --set gcc mp-gcc7
  • port install coreutils md5sha1sum findutils
  • make all
  • m4 doesn't build; manually add this to its "configure" command (one way to do that is sketched after this list):
    ac_cv_type_struct_sched_param=yes

  • The toolchain doesn't build: toolchain-wrapper.c wants program_invocation_short_name, which is a glibc-ism that macOS doesn't have. Add this to toolchain/toolchain-wrapper.c:
    static const char *program_invocation_short_name;
    ...
    program_invocation_short_name = basename;
  • The toolchain still doesn't build (--hash-style is a GNU ld flag that Apple's linker doesn't know). Remove this from toolchain/toolchain-wrapper.mk:
    -Wl,--hash-style=$(TOOLCHAIN_WRAPPER_HASH_STYLE)

  • And then this:
    /scripts/Makefile.headersinst:55: *** Missing UAPI file ./include/uapi/linux/netfilter/xt_CONNMARK.h. Stop.

    Guess what that means? It means "After all these years, jackasses in charge of the Linux kernel still spitefully and with malice aforethought include case-conflicting file names in the source."

    Apparently their official policy is "all reasonable file systems are case sensitive." (This is objectively false: all reasonable file systems are case-preserving, case-insensitive, with Unicode canonicalization.)

    So this means starting over from scratch inside a case-sensitive sparse image (the full dance is sketched after this list):
    hdiutil create -type SPARSE -fs 'Case-sensitive Journaled HFS+' -size 8g buildroot

  • And then this: compilation of uclibc-1.0.17 fails with:
    ./../include/elf.h:30:10: fatal error: endian.h: No such file or directory
    ../utils/porting.h:48:10: fatal error: link.h: No such file or directory

    No idea what's going on there.
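
About that m4 cache variable: autoconf-generated configure scripts honor ac_cv_* variables that are already set in the environment, so assuming buildroot passes your environment through to each package's configure (it appears to), the low-tech way to apply it is:

    # configure skips its own (broken-on-Mac) test when the cache variable is pre-set
    export ac_cv_type_struct_sched_param=yes
    make all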
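
And here's the full sparse-image dance, start to finish. The "-volname buildroot" bit is my addition so the mount point is predictable, and the tarball path is a placeholder for wherever yours lives:

    hdiutil create -type SPARSE -fs 'Case-sensitive Journaled HFS+' \
        -size 8g -volname buildroot buildroot
    hdiutil attach buildroot.sparseimage        # mounts at /Volumes/buildroot
    cd /Volumes/buildroot
    tar xzf ~/Downloads/buildroot-2016.08.tar.gz    # placeholder path
    cd buildroot-2016.08
    make raspberrypi3_defconfig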


Update 2: OK, so I finally accepted the truth that "the only way to build a Debian system is using another Debian system" and installed a Debian in VirtualBox to run Buildroot to create an SD card to run on a Pi.

It is one fucked up universe we live in where "run another operating system in an emulator" makes more sense than "just cross-compile." What is this, A Deepness in the Sky?

Fun fact: VirtualBox "recommends" the Debian emulator have 1GB of RAM, which is apparently insufficient for compiling a Linux kernel. Of course the build process has completely sensible error messages when that goes wrong. Oh wait, no it doesn't.
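
If you'd rather skip that particular discovery, something like this (with the VM powered off; "debian" is a placeholder for whatever you named the VM) should give it enough headroom:

    VBoxManage modifyvm "debian" --memory 4096   # MB
    VBoxManage modifyvm "debian" --cpus 2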

So now Buildroot spat out something that boots, but not something that recognizes USB keyboards, so that's lovely. Back down into the salt mines...


13 Responses:

  1. Will says:

    With a Pi 3, do you really need an x-compile environment? Can you not just tramp it?

  2. Not a laser says:

    clang certainly can't impersonate gcc yet. Even if it could, I don't know if you want the sort of fun that debugging a Linux kernel compiled with something other than gcc involves.

    I dimly recall that buildroot could actually build its own toolchain. You may not need to install a cross-compiler, just a native gcc that buildroot can use in order to bootstrap its own toolchain. You can use that toolchain for cross-compiling later, too.

    If not, I'd suggest you go with Linux in a VM instead. I had "tons" of fun a few years ago trying to get buildroot to work on a BSD machine.

    And whatever you do, if you see something that says Yocto on the packages, someone suggesting that you use Yocto, or someone trying to sell Yocto to you (maybe in a nice wrapping with a company name on it), run like hell and, if possible, nuke the guy from orbit.

    • yuubi says:

      I have seen buildroot build its own cross toolchain. (Sadly, it doesn't help you build a host toolchain if you want to build some googleware that needs latest-greatest g++; you get to do that manually or update your host system).

      > yocto .. nuke from orbit

      Oh shit. Someone is trying to sell $WORK a chip whose "support package" is based on yocto. Everything was different from buildroot, but following the instructions yielded an image that would boot. What sort of Interesting Times do I have ahead of me?

      • Not a laser says:

        That depends on what you need to do with it, really. If you don't need to do too much customization, I think it's just as good as buildroot (maybe even a little more flexible on the distribution/deployment side). If you need to do some useful development on top of it, maybe it would help if you could take a day or two to play with upstream Yocto a little (it can helpfully build qemu images out of the box).

        Frankly, my biggest beef is that error reporting is virtually non-existent. It's worse than TeX. If you misspell SRC_URI in a recipe, instead of getting an error like "No SRC repository defined for package" or a Clang-like warning "Unknown variable SCR_URI, perhaps you meant SRC_URI?", you get two screens' worth of errors saying "couldn't find your package". If you write ${var) instead of ${var} in a function, you get a stack trace from bitbake, which chokes while trying to parse your line, because where's the fun in doing the boring 30% that's left once you get your parser to parse?

        Furthermore, while it's a tool that's been developed with the best of intentions, it's very frequently misused. That's because a lot of things (e.g. specifying package licenses, amending image configuration) are done based on "convention", but this convention is generally not documented anywhere. To make things worse, the Reference Manual is a very verbose equivalent of the "Increment i by one" comments above i++ lines. As a result, even trivial questions, like "Why is file X ending up on my rootfs", "Why is file X not ending up on my rootfs even though I'm including the package" or "Why is library Y compiled with this flag when I just explicitly removed it in my .bbappend" take hours, sometimes days to answer. It's an incredibly complex suite of tools, and the manual glosses over it indifferently; it's very difficult to do non-trivial things with it if you can't read Python code (which is what the back-end uses) to see what's happening behind the scenes.

        That being said, if the support package is really just upstream Yocto with a thin BSP layer that just adds a few custom packages and the compulsory ancient kernel version that the manufacturer supports, you're probably safe. The upstream community is pretty tight and follows conventions well, so this is fairly uncommon. If it's bigger than that, or worse, $CHIP_MANUFACTURER Embedded Linux, maybe you should ask for a demo first...

  3. hillu says:

    arm-none-eabi-gcc does not make sense; it's meant for OS-less target environments. If arm-elf-gcc is the only other choice, you'd want to use that.

  4. douche extraordinaire says:

    1. Install VirtualBox
    2. ???
    3. Profit!

  5. will says:

    I would recommend trying glibc as the C library; uClibc is only really a win on very small systems, and it's patchily maintained.
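
    Something like this should flip it over, though the exact config symbol may differ between buildroot releases:

        make menuconfig          # Toolchain -> C library -> glibc
        grep BR2_TOOLCHAIN_BUILDROOT .config   # expect ..._GLIBC=y
        make clean all           # changing the C library means a full rebuild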

  6. > It is one fucked up universe we live in where "run another operating system in an emulator" makes more sense than "just cross-compile." What is this, A Deepness in the Sky?

    I just finished A Fire Upon the Deep. Is A Deepness in the Sky worth it?

    • pavel_lishin says:

      I think Deepness is the better novel.

      Although I think the Tines are conceptually more interesting as aliens, their world felt like a generic semi-fantasy feudal-politics story.

      The sequel (Children of the Sky or something) is even more of that; not much is gained at all by placing humans on an alien world, from what I remember - you could tell a functionally identical "what-if" alt-history story set on Earth.
