nginx hls shenanigans

Dear Lazyweb, is it possible to configure nginx so that when my RTMP source has disconnected, HLS clients continue to get video (e.g., looping colorbars) instead of 404?

The goal is that when there's a network glitch, the user doesn't have to hit reload; it just comes back to life eventually. (I have tried accomplishing this on the client side with various Javascript tricks, but the browsers' autoplay restrictions and the general shittiness of the <VIDEO> element conspire to make that mostly impossible.)

Phase 1 might be to convert a 404 on "/live.m3u8" to an internal redirect to a static HLS directory of colorbars. Maybe "try_files" could do this. But phase 2 would be to make the video loop, and a static m3u8 file can't do that. So something would have to fake up an m3u8 with new timestamps on the old TS files, I guess?
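For phase 1, something like this might work with try_files (an untested sketch; the paths and filenames are made up):

```nginx
# Untested sketch: if the live playlist is missing (RTMP source down),
# internally fall back to a pre-generated colorbars playlist.
# All paths and names here are assumptions.
location = /hls/live.m3u8 {
    root /var/www;
    try_files $uri /colorbars/colorbars.m3u8;
    add_header Cache-Control no-cache;
}

# Segments for both the live stream and the colorbars loop
# still need to be reachable by the player.
location /hls/ {
    root /var/www;
}
location /colorbars/ {
    root /var/www;
}
```

As noted above, this only gets you phase 1: a static colorbars playlist plays once and stops, so phase 2 still needs something generating a rolling playlist.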

Previously, previously.


9 Responses:

  1. James Baicoianu says:

    I haven't tried it yet, but SteamRIP seems like it fits the bill, and it's meant to work with an existing nginx-rtmp streaming setup

    • jwz says:

      It looks like it's adding yet another ffmpeg re-encoding pass around everything. That is less than ideal.

      • Tha_14 says:

        Well, I don't think you have any other options when it comes to the nginx RTMP module (or whatever it's called). You can kinda get stuck using ffmpeg to do most things, as the module simply doesn't have the functionality. Hopefully someone has done the exact thing you want to do without exec'ing ffmpeg, but it seems highly unlikely...

        • James Baicoianu says:

          Yeah, it seems like it should be possible to do this with fancy m3u8-generation tricks rather than executing ffmpeg, but as long as the codecs in the static loop match what's in the stream, it should at least be a simple stream copy rather than an expensive re-encode.

          Definitely not ideal or elegant, but shouldn't add too much overhead.

    • Graham Lee says:

      Thank you so much for this! I run a not-for-Jeff-profit weekly stream and this is one of two missing pieces to make it work.

      The other is posting a notification when the stream goes live, but I think I have a solution for that: using the njs plugin to run a script when OBS connects and supplies the correct stream key, to send a toot to a mastodon account.
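For the notification half, a sketch of the toot itself (assuming a Mastodon API access token; the instance URL, token, and message here are made up), which a hook like nginx-rtmp's exec_publish, or an njs handler, could invoke:

```python
# Sketch: post a "we're live" status via the Mastodon API
# (POST /api/v1/statuses with a Bearer token).
# Instance URL, token, and message are placeholders.
import json
import urllib.request

def build_toot_request(instance, token, text):
    """Build the POST request for /api/v1/statuses (not yet sent)."""
    return urllib.request.Request(
        f"{instance}/api/v1/statuses",
        data=json.dumps({"status": text}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_toot_request(
    "https://mastodon.example", "MY_API_TOKEN", "Stream is live!"
)
# urllib.request.urlopen(req)  # actually send it
```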

      Thanks again!

  2. atporter says:

    Given how simple HLS is, couldn't you do this with a 404 handler? If you get a 404 on that path, always return a color-bars segment. Not hip enough to know exactly how to do this with nginx, but something like:

    error_page 404 =200 /path/to/colorbars.mp4;
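One wrinkle: HLS players request .m3u8 playlists and .ts segments, not a bare .mp4, so the fallback would probably need to be a playlist pointing at color-bars segments. An untested sketch of the same idea (paths are made up):

```nginx
# Untested sketch: map a 404 on the live playlist to a static
# fallback playlist.  The fallback should be an .m3u8 whose
# segments are also served, not a raw .mp4.
location = /hls/live.m3u8 {
    root /var/www;
    error_page 404 =200 /colorbars/colorbars.m3u8;
}
```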

  3. Yattaman says:

    I had a similar need with a simple webradio I was setting up to stream audio for game sessions. I ended up using liquidsoap to push some generic waiting music when there was no audible input on the source. I did it only for audio, so I don't know what needs to be done with video (liquidsoap supports video streams). If you are willing to introduce another dependency and look into the liquidsoap language, maybe it could be what you are looking for.

    This is the script I am using (it also mixes two audio sources):

    #!/usr/bin/env liquidsoap

    # Log file (path is a placeholder; the original was lost)
    set("log.file.path", "/tmp/myradio.log")

    # If something goes wrong, we'll play this
    waiting = single("waiting.ogg")

    # Input from mpd http stream (mount and port are placeholders)
    mpdstream = input.harbor("mpd", port=8001)

    # Input from alsa device (from soundboard; device is a placeholder)
    soundboard = input.alsa(device="hw:0")

    soundboard = amplify(10.0, soundboard)

    # Mix mpd and soundboard
    mixed = add([mpdstream, soundboard])

    # Source fails if there is no audio for at least 2 seconds with a sensitivity of -70 db
    mixed_strip_blank = strip_blank(max_blank=2., threshold=-70., mixed)

    # Set fallback on waiting music
    myradio = fallback(
      track_sensitive = false,
      [mixed_strip_blank, waiting]
    )

    # Send stream to icecast
    output.icecast(%vorbis,
      host = "localhost",
      port = 8000,
      password = "icecast_password",
      mount = "myradio",
      myradio
    )
  4. rollcat says:

    Some acquired experience since my last HLS comment... I would say, just stay away from manipulating the downstream HLS with random hacks, this is terribly brittle in practice, and you'll get different behaviour from different players. I've tested about a dozen players, across Web, Android, iOS, Tizen, various frameworks, etc, with various silly hacks (exactly like injecting fallback segments), and it's hard enough to get things working in VOD, let alone live. There's no one playbook because HLS is broad enough that the intersection of what works where when deserves a bible of its own, especially as you add adaptive variants, separate audio, CMAF/fMP4, old/broken clients, etc. But if your setup is simple enough, and/or you don't care about some browsers/players, some variant of some hack might just work.

    Things to keep in mind if you go down the HLS welding route:

    1. You must keep EXT-X-MEDIA-SEQUENCE rolling. It states how many segments have previously disappeared from the playlist, so this is the first step towards making sure the player doesn't choke. You probably need to parse the upstream playlist and dynamically generate your own.

    2. You may need to insert EXT-X-DISCONTINUITY tags when the cutoff happens; when, where, and why is still a bit of a mystery to me. Players seem inconsistent in handling edge cases around it (iOS does one thing, Android/ExoPlayer another...)

    3. Match the encoding parameters exactly, on both streams.

    4. If you're using a master playlist with variants, separate audio, subtitles - everything just spirals into a clusterfuck - abandon all hope.
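The rolling-sequence and discontinuity bookkeeping above could be sketched like this (a minimal illustration assuming a simple single-variant playlist; segment names and durations are made up):

```python
# Sketch: generate a live HLS media playlist that keeps
# EXT-X-MEDIA-SEQUENCE rolling and marks the splice into
# fallback (colorbars) segments with EXT-X-DISCONTINUITY.
# Segment names and durations are made up for illustration.

def make_playlist(media_sequence, segments, target_duration=6):
    """segments: list of (uri, duration, discontinuity_before) tuples."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        # How many segments have already slid out of the window:
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for uri, duration, discontinuity in segments:
        if discontinuity:
            # Timestamp/codec break, e.g. the live -> colorbars splice.
            lines.append("#EXT-X-DISCONTINUITY")
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"

# Live feed died after segment 42; splice in looping colorbars.
playlist = make_playlist(
    media_sequence=41,
    segments=[
        ("live41.ts", 6.0, False),
        ("live42.ts", 6.0, False),
        ("colorbars0.ts", 6.0, True),   # discontinuity at the splice
        ("colorbars1.ts", 6.0, False),
    ],
)
```

On each refresh the generator would bump media_sequence as old segments drop off, and keep appending (and re-numbering) colorbars segments to make the loop.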

    The least headache-inducing way to achieve this is to just ensure you always keep feeding the encoder+muxer+packager with a valid live feed. At $JOB we use a $100k rig from AWS Elemental and it does that for us; the budget option that comes to my mind is to try GStreamer with a video mixer element, detect RTMP fail from an error handler, and switch the mixer sources. That may require quite a bit of coding though.

    Regarding autoplay, this is also tricky / shitty. I've had some success by avoiding iframes (you can also pass allow="autoplay"), and making sure the user interacts with the page before starting playback (e.g. clicking on a giant play button, or making it an SPA). If you can't make the user interact (kiosk environment), there are alternatives to using a web browser, like (again) GStreamer - you can lazy-launch a simple player from the command line, or write a full blown app with the C / Python bindings. A basic player in Python should be about 10-20 lines, you could add things like remote control from there.

    • jwz says:

      Yeah, the whole thing is a minefield, and it's hard to tell which set of mines are better.

      I'm already dealing with the "click once before anything will play" nightmare -- see audio_enabler in video.html -- but the problem is with restarting.

      When watchdog_timer notices that the play-head is no longer advancing, it tears down and re-creates the player. This works most of the time if there was a client-side network error: some connection dropped, and the player went into "pause" mode, waiting for you to hit play again. In that case, it can usually get it going again without user interaction.

      I say "usually" because sometimes Safari goes back into "the user must click first" mode. Maybe this happens at random. Maybe it happens after 12+ hours without a click. I can't tell.

      The worse problem is when there was a source-side network error. In that case, index.m3u8 has briefly gone 404, the player says "The format is not supported", and as far as I can tell, no amount of Javascript will get it out of that state and make it start playing again. (Also I've found no way to intercept the display of that error string and replace it with something more accurate.)

      "Stick a mixer in between" isn't practical. It's the link between the on-site video generator and the AWS fan-out host that tends to flake out, since that's a Monkeybrains wireless connection and they have (generously) two nines of uptime.

      Also this is exacerbated by the fact that in order to work around various OBS fuckery, my scripts often have to take down the stream and re-create it for some large number of seconds while things reset and settle down. That number of seconds is often larger than the buffer size in the stream, because I found that large buffer sizes caused a whole other set of problems.
