Fix an occasional DTS overlap by closing the filtergraph after each
segment and re-creating it at the beginning of the next, instead of
attempting to persist the filtergraph between segments.
This overlap occurred mostly when flip-flopping segments between
transcoders, or when processing non-consecutive segments within a
single transcoder. It was caused by drift between two adjustments:
input timestamps were rewritten to match the fps filter's expectation
of mostly consecutive timestamps, while output timestamps were
rewritten to remove the delay that accumulates inside the filter.
There is roughly a 1% performance hit on my
machine from re-creating the filtergraph.
Because we now reset the filter after each segment, we can remove a
good chunk of the special-cased timestamp handling code before and
after the filtergraph, since we no longer need to handle
discontinuities between segments. However, we do need to keep some
filter flushing logic in order to accommodate low-fps or
low-frame-count content, as sketched below.
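A minimal Go sketch of the new per-segment lifecycle; the names here are illustrative stand-ins (the real filter code lives in the C layer and is driven through cgo):

package main

import "fmt"

// frame and filterGraph are stand-ins for the decoder output and the
// libavfilter state; the real implementation lives in the C layer.
type frame struct{ pts int64 }

type filterGraph struct{}

func initFilters() *filterGraph       { return &filterGraph{} }
func (f *filterGraph) write(fr frame) { fmt.Println("filter frame, pts", fr.pts) }
func (f *filterGraph) flush()         { fmt.Println("flush buffered frames") }
func (f *filterGraph) close()         { fmt.Println("teardown") }

// transcodeSegment re-creates the filtergraph for every segment instead of
// persisting it, so no timestamp state can drift across segments.
func transcodeSegment(frames []frame) {
	f := initFilters()
	defer f.close()
	for _, fr := range frames {
		f.write(fr)
	}
	// Flushing is still needed so low-fps or low-frame-count segments
	// emit whatever the filter has buffered.
	f.flush()
}

func main() {
	transcodeSegment([]frame{{0}, {10}, {20}})
	// A non-consecutive segment is safe, because the graph is fresh.
	transcodeSegment([]frame{{60}, {70}})
}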
This does change our outputs, usually by one fewer frame. Sometimes we
seem to produce an *additional* frame; it is unclear why. However, as
the test cases note, this actually clears up a number of long-standing
oddities around the expected frame count, so it should be seen as an
improvement.
---
It is important to note that while this fixes DTS overlap in a
(rather unpredictable) general case, there is another overlap bug in
one very specific case. These are the conditions for the bug:
1. The first and second segments of the stream are being processed,
   whether by the same transcoder or by different ones.
2. The first segment starts at or near zero PTS.
3. mpegts is the output format.
4. B-frames are in use.
What happens is that we may see DTS < PTS for the very first frames in
the very first segment, potentially starting with PTS = 0, DTS < 0.
This is expected for B-frames. However, the mpegts muxer cannot take
negative timestamps. To accommodate a negative DTS, the muxer will set
PTS = -DTS, DTS = 0 and delay (offset) the rest of the packets in the
segment accordingly. Unfortunately, subsequent transcodes will not know
about this delay! This typically leads to an overlap between the first
and second segments (segments after that are fine).
The normal way to fix this would be to add a constant delay to all
segments; ffmpeg adds 1.4s to mpegts output by default. However,
introducing a delay right now feels a little odd: we don't offer any
other knobs to control timestamps (so re-transcodes would accumulate
the delay), and there is some concern about falling out of sync with
the source segment, since we have historically tried to make
timestamps follow the source as closely as possible. So we're leaving
this particular bug as-is for now. There is some commented-out code
that adds this delay in case we need it in the future.
Note that the FFmpeg CLI has exactly the same problem when the muxer
delay is removed, so this is not an LPMS-specific issue. This is
exercised in the test cases.
Example of non-monotonic DTS after encoding and after muxing:
Segment.Frame | Encoder DTS | Encoder PTS | Muxer DTS | Muxer PTS
--------------|-------------|-------------|-----------|----------
1.1           | -20         | 0           | 0         | 20
1.2           | -10         | 10          | 10        | 30
1.3           | 0           | 20          | *20*      | 40
1.4           | 10          | 30          | *30*      | 50
2.1           | 20          | 40          | *20*      | 40
2.2           | 30          | 50          | *30*      | 50
2.3           | 40          | 60          | 40        | 60
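For concreteness, the shift can be reproduced in a few lines of Go. This is a sketch of the behavior described above (not LPMS code); it shows why a transcode of segment 2 that never saw segment 1's offset lands on overlapping DTS values:

package main

import "fmt"

// pkt mirrors one row of the table above.
type pkt struct{ dts, pts int64 }

// muxShift mimics the mpegts behavior described above: if a segment's first
// DTS is negative, offset every timestamp so the segment starts at DTS = 0.
func muxShift(segment []pkt) []pkt {
	var offset int64
	if segment[0].dts < 0 {
		offset = -segment[0].dts
	}
	out := make([]pkt, len(segment))
	for i, p := range segment {
		out[i] = pkt{p.dts + offset, p.pts + offset}
	}
	return out
}

func main() {
	seg1 := []pkt{{-20, 0}, {-10, 10}, {0, 20}, {10, 30}}
	seg2 := []pkt{{20, 40}, {30, 50}, {40, 60}}
	// Each segment is shifted independently; segment 2 never learns about
	// segment 1's +20 offset, so its first DTS (20) overlaps segment 1's
	// last muxed DTS (30).
	fmt.Println(muxShift(seg1)) // [{0 20} {10 30} {20 40} {30 50}]
	fmt.Println(muxShift(seg2)) // [{20 40} {30 50} {40 60}]
}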
LPMS - Livepeer Media Server
LPMS is a media server that can run independently, or on top of the Livepeer network. It allows you to manipulate / broadcast a live video stream. Currently, LPMS supports RTMP as input format and RTMP/HLS as output formats.
LPMS can be integrated into another service, or run as a standalone service. To try LPMS as a standalone service, simply get the package:
go get -d github.com/livepeer/lpms/cmd/example
Go to the lpms root directory at $GOPATH/src/github.com/livepeer/lpms. If needed, install the required dependencies; see the Requirements section below. Then build the sample app and run it:
go build cmd/example/main.go
./example
Requirements
LPMS requires libavcodec (ffmpeg) and friends. See install_ffmpeg.sh . Running this script will install everything in ~/compiled. In order to build LPMS, the dependent libraries need to be discoverable by pkg-config and golang. If you installed everything with install_ffmpeg.sh , then run export PKG_CONFIG_PATH=~/compiled/lib/pkgconfig:$PKG_CONFIG_PATH so the deps are picked up.
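Putting the above together, a typical fresh setup looks like this (using the paths that install_ffmpeg.sh installs into):

./install_ffmpeg.sh
export PKG_CONFIG_PATH=~/compiled/lib/pkgconfig:$PKG_CONFIG_PATH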
Running the golang unit tests (test.sh) requires the ffmpeg and ffprobe executables in addition to the libraries. To ensure the executables are available, run export PATH=~/compiled/bin:$PATH. Additionally, the tests require ffmpeg to be built with extra codecs and formats enabled. To build ffmpeg with all codecs and formats enabled, ensure clang is installed and then run BUILD_TAGS=debug-video ./install_ffmpeg.sh. However, none of these are run-time requirements; the executables are not used outside of testing, and the libraries are statically linked by default. Note that dynamic linking may substantially speed up rebuilds if doing heavy development.
Testing out LPMS
The test LPMS server exposes a few different endpoints:
- rtmp://localhost:1935/stream/test for uploading/viewing an RTMP video stream.
- http://localhost:7935/stream/test_hls.m3u8 for consuming the HLS video stream.
Do the following steps to view a live stream video:

1. Start LPMS by running:

   go run cmd/example/main.go

2. Upload an RTMP video stream to rtmp://localhost:1935/stream/test. We recommend using ffmpeg or OBS.

   For ffmpeg on OSX, run:

   ffmpeg -f avfoundation -framerate 30 -pixel_format uyvy422 -i "0:0" -c:v libx264 -tune zerolatency -b:v 900k -x264-params keyint=60:min-keyint=60 -c:a aac -ac 2 -ar 44100 -f flv rtmp://localhost:1935/stream/test

   For OBS, fill in Settings->Stream->URL to be rtmp://localhost:1935.

3. If you have successfully uploaded the stream, you should see something like this in the LPMS output:

   I0324 09:44:14.639405 80673 listener.go:28] RTMP server got upstream
   I0324 09:44:14.639429 80673 listener.go:42] Got RTMP Stream: test

4. Now that you have an RTMP video stream running, you can view it from the server. Simply run:

   ffplay http://localhost:7935/stream/test.m3u8

   You should see the HLS video playback.
Integrating LPMS
LPMS exposes a few different methods for customization. As an example, take a look at cmd/example/main.go.
To create a new LPMS server:
// Specify ports you want the server to run on, and the working directory for
// temporary files. See `core/lpms.go` for a full list of LPMSOpts
opts := lpms.LPMSOpts{
	RtmpAddr: "127.0.0.1:1935",
	HttpAddr: "127.0.0.1:7935",
	WorkDir:  "/tmp",
}
lpms := lpms.New(&opts)
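After registering the handlers shown below, start the server. This is a sketch based on the sample app in cmd/example/main.go; see core/lpms.go for the exact Start signature:

// Start blocks while serving, so register all handlers first.
if err := lpms.Start(context.Background()); err != nil {
	glog.Fatal(err)
}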
To handle RTMP publish:
lpms.HandleRTMPPublish(
//getStreamID
	func(url *url.URL) (strmID string) {
		return getStreamIDFromPath(url.Path)
},
//getStream
func(url *url.URL, rtmpStrm stream.RTMPVideoStream) (err error) {
return nil
},
//finishStream
func(url *url.URL, rtmpStrm stream.RTMPVideoStream) (err error) {
return nil
})
To handle RTMP playback:
lpms.HandleRTMPPlay(
//getStream
func(ctx context.Context, reqPath string, dst av.MuxCloser) error {
		glog.Infof("Got req: %v", reqPath)
streamID := getStreamIDFromPath(reqPath)
src := streamDB.db[streamID]
if src != nil {
src.ReadRTMPFromStream(ctx, dst)
} else {
glog.Error("Cannot find stream for ", streamID)
return stream.ErrNotFound
}
return nil
})
To handle HLS playback:
lpms.HandleHLSPlay(
//getHLSBuffer
func(reqPath string) (*stream.HLSBuffer, error) {
streamID := getHLSStreamIDFromPath(reqPath)
buffer := bufferDB.db[streamID]
s := streamDB.db[streamID]
if s == nil {
return nil, stream.ErrNotFound
}
if buffer == nil {
//Create the buffer and start copying the stream into the buffer
buffer = stream.NewHLSBuffer()
bufferDB.db[streamID] = buffer
//Subscribe to the stream
sub := stream.NewStreamSubscriber(s)
go sub.StartHLSWorker(context.Background())
err := sub.SubscribeHLS(streamID, buffer)
if err != nil {
return nil, stream.ErrStreamSubscriber
}
}
return buffer, nil
})
GPU Support
Processing on Nvidia GPUs is supported. To enable this capability, FFmpeg needs to be built with GPU support. See the FFmpeg guidelines on this.
To execute the nvidia tests within the ffmpeg directory, run this command:
go test --tags=nvidia -run Nvidia
To run the tests on a particular GPU, use the GPU_DEVICE environment variable:
# Runs on GPU number 3
GPU_DEVICE=3 go test --tags=nvidia -run Nvidia
Aside from the tests themselves, there is a sample program that can be used as a reference to the LPMS GPU transcoding API. The sample program can select GPU or software processing via CLI flags. Run the sample program via:
# software processing
go run cmd/transcoding/transcoding.go transcoder/test.ts P144p30fps16x9,P240p30fps16x9 sw
# nvidia processing, GPU number 2
go run cmd/transcoding/transcoding.go transcoder/test.ts P144p30fps16x9,P240p30fps16x9 nv 2
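As these examples suggest, the positional arguments are the input video, a comma-separated list of rendition presets, the processing mode (sw or nv), and, for nv, the GPU number.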
Testing GPU transcoding with failed segments from Livepeer production environment
To test transcoding of segments that failed in production in an Nvidia environment:

1. Install Livepeer from sources by following the installation guide.

2. Install the Google Cloud SDK.

3. Make sure you have access to the bucket with the segments.

4. Download the segments:

   gsutil cp -r gs://livepeer-production-failed-transcodes /home/livepeer-production-failed-transcodes

5. Run the test:

   cd transcoder
   FAILCASE_PATH="/home/livepeer-production-failed-transcodes" go test --tags=nvidia -timeout 6h -run TestNvidia_CheckFailCase

6. After the test has finished, it will display transcoding stats. Per-file results are logged to results.csv in the same directory.
Contribute
Thank you for your interest in contributing to LPMS!
To get started:
- Read the contribution guide
- Check out the open issues
- Join the #dev channel in the Discord