Securing nginx RTMP stream keys
Tags: computers, dnalounge, firstperson, lazyweb, webcasting
- How do I ensure that only valid stream keys are used? I want to generate stream keys and give them to people I'm working with, but I don't want others to be able to make up random keys and use my server as an open video relay. Restricting by IP is not practical. I suspect the answer has something to do with "on_connect" or "on_publish", but I don't see any practical examples. Ideally I'd just have a text file listing valid stream keys.
- I'm streaming to an endpoint from OBS. Then I try to stream to that same endpoint from ffmpeg. ffmpeg reports "Server error: Already publishing", which is good and proper. However, somehow the existing stream gets all fucked up anyway: it starts getting audio and video dropouts, clients have to reconnect, and the only fix seems to be to restart the stream. This is less than ideal. I'd rather not have that DoS avenue available.
1) Most software I've seen has a database of some flavor containing valid stream keys. Usually it associates each key with a username or whatever the identifier is. Here's how Open Streaming Platform does it (using MySQL as the DB).
I'm sure it could be done similarly with lighter-weight software (or even a text file of authorized keys, like you said). I mean hell, a text file works for sshd authorized keys.
2) Perhaps the endpoint could be configured to drop and restart the connection if a new "here is my key, start streaming" request is received. I know other services tell you to protect your stream keys with your life, or anyone can stream on your account. Perhaps their solution is to just stop the old stream and start with the new one. In your example, the OBS connection would be terminated and the ffmpeg connection would be streaming.
It would stop one variety of DoS, but a new one would open up: someone could get the stream key and hijack the stream. Depending on how you generate and hand out keys, that may be an acceptable risk.
I asked a question about nginx, and you replied with a bunch of vagaries about what other software might or might not do.
Why would you do that?
How do you think that is helpful in any way?
You just point on_publish to a URL that gets POSTed the stream name and key, and make sure it 200s if OK and 403s if Forbidden. Here's an example in Python:
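A minimal sketch of such a validator, using only the standard library; the key-file path, port, and `/auth` path are assumptions, not anything nginx mandates. nginx-rtmp POSTs the stream name form-encoded, so the handler just checks it against a file of authorized keys, which also gets you the "text file like sshd authorized_keys" behavior you wanted:

```python
# A minimal on_publish validator, assuming the default nginx-rtmp
# behavior of POSTing "name=<streamkey>" form-encoded to the callback.
# KEY_FILE path and the port are assumptions -- adjust to taste.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

KEY_FILE = "/etc/nginx/stream_keys.txt"   # one valid key per line

def load_keys(path=KEY_FILE):
    """Return the set of authorized stream keys; empty if the file is missing."""
    try:
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

def key_is_valid(body, keys):
    """True if the form-encoded POST body carries an authorized 'name'."""
    name = parse_qs(body.decode("utf-8", "replace")).get("name", [""])[0]
    return name in keys

class AuthHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        ok = key_is_valid(self.rfile.read(length), load_keys())
        self.send_response(200 if ok else 403)   # 200 = allow, 403 = refuse
        self.end_headers()

def serve():
    # nginx side:  on_publish http://127.0.0.1:8089/auth;
    HTTPServer(("127.0.0.1", 8089), AuthHandler).serve_forever()
```

Call `serve()` to run it. Since the key file is re-read on every publish attempt, you can add and revoke keys without restarting anything.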
Unfortunately there isn't a mechanism to just look them up from a text file. You can hack around it with only Nginx config if you want:
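One config-only hack looks roughly like this sketch. It relies on nginx-rtmp forwarding the publish URL's query arguments to the callback request, and it assumes publishers stream to rtmp://host/live/streamname?key=SECRET; the port, paths, and key value are all placeholders:

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # publishers use rtmp://host/live/<name>?key=SECRET
            on_publish http://127.0.0.1:8089/auth;
        }
    }
}

http {
    server {
        listen 127.0.0.1:8089;
        location = /auth {
            # query args from the publish URL are forwarded, so
            # $arg_key holds the key; a non-2xx refuses the publish
            if ($arg_key = "SECRET") {
                return 200;
            }
            return 403;
        }
    }
}
```

Note this buries the secret in the nginx config itself, one `if` per key, which gets ugly fast.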
But you'd probably be happier just doing it in Perl.
If I set "on_publish" to any https URL it says "invalid port in url". Apparently it can only do http? What fresh hell is this?
Just add an internal http location that does proxy_pass to the https one.
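Something like this sketch, where auth.example.com stands in for wherever your real https validator lives:

```nginx
# on_publish only speaks plain http, so aim it at a local shim:
#   on_publish http://127.0.0.1:8080/publish-auth;
server {
    listen 127.0.0.1:8080;
    location = /publish-auth {
        # re-proxy to the real https endpoint (placeholder hostname);
        # the status code passes straight back to the rtmp module
        proxy_pass https://auth.example.com/publish;
    }
}
```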
https is always a pita, because someone wants to do hostname validation, someone doesn't, someone wants it with a custom CA file, etc.
Dude, it's 2020. If your software can't do https out of the box you are not tall enough for this ride.
Imagine a C programmer is tasked to connect to a remote web server. Their standard libraries don't include any mechanism to even GET a URI, let alone POST to one, but they do have Unix sockets. So of course, because C programmers are real men (I think I'm allowed to say that because I was one?), they roll their own primitive HTTP client implementation with sockets. It speaks a rudimentary HTTP/1.0 with a Host header.
The way you end up with that error is: they take the HTTPS URL you gave them, spot that it doesn't begin with "http://", and so they pass the whole string to an Nginx internal C API and hope that it can do better. It can't, and it gets lost in the weeds trying to find an integer where there isn't one. That API supports Unix sockets, but it can't do HTTPS or anything modern.
All of this code looks terrifyingly fragile. I remember writing C code like this; what were we thinking? Your remote server should keep it very simple and businesslike: answer the HTTP request directly, close the socket, and don't get fancy, or the backend code may explode.