This PR refactors the RTMP to RTC bridge to support multiple video
tracks by implementing lazy initialization of audio and video tracks.
Instead of pre-determining track parameters during bridge construction,
tracks are now initialized dynamically when the first packet of each
type is received, allowing proper codec detection and track
configuration for dual video track scenarios.
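The lazy-initialization idea can be sketched as follows (class and method names here are illustrative, not the actual SRS types): the bridge defers track creation until the first packet reveals the codec, instead of fixing it at construction.

```cpp
#include <cassert>
#include <memory>

// Illustrative sketch only: real SRS types differ.
enum SrsVideoCodec { SrsVideoCodecForbidden = 0, SrsVideoCodecAVC, SrsVideoCodecHEVC };

struct SrsTrack {
    SrsVideoCodec codec;
    explicit SrsTrack(SrsVideoCodec c) : codec(c) {}
};

class SrsBridge {
public:
    // The track is created on the first video packet, when the codec is
    // actually known, rather than defaulting to AVC at construction.
    SrsTrack* on_video_packet(SrsVideoCodec codec) {
        if (!video_track_) {
            video_track_.reset(new SrsTrack(codec));
        }
        return video_track_.get();
    }
private:
    std::unique_ptr<SrsTrack> video_track_;
};
```

The first packet's codec wins; subsequent packets reuse the already-initialized track.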
Previously, viewing WHEP with HEVC before publishing RTMP failed, because the default codec is AVC and is not updated until the stream is published. This change enables better handling of streams with multiple video tracks in RTMP-to-WebRTC bridging scenarios. Now you are able to:
1. View WHEP with HEVC, playing with WebRTC:
http://localhost:8080/players/whep.html?schema=http&&codec=hevc
2. Then publish via RTMP: `ffmpeg -stream_loop -1 -re -i doc/source.flv
-c:v libx265 -profile:v main -preset fast -b:v 2000k -maxrate 2000k
-bufsize 2000k -bf 0 -c:a aac -b:a 48k -ar 44100 -ac 2 -f flv
rtmp://localhost/live/livestream`
Or publish RTMP with HEVC first, then view via WHEP.
Note that if the codecs do not match, the RTC error log will show
`Drop for ssrc xxxxxx not found`. For example, this can occur if you
publish RTMP with HEVC while viewing the stream with AVC.
**Introduction**
This pull request builds upon the foundation laid in
https://github.com/ossrs/srs/pull/4289 . While the previous work solely
implemented unidirectional HEVC support from RTMP to RTC, this
submission further enhances it by introducing support for the RTC to
RTMP direction.
**Usage**
Launch SRS with `rtc2rtmp.conf`:
```bash
./objs/srs -c conf/rtc2rtmp.conf
```
**Push with WebRTC**
Upgrade your browser to Chrome (136+) or Safari (18+), then open the [WHIP
encoder](http://localhost:8080/players/whip.html?schema=http&&codec=hevc),
and push a stream with a URL that enables HEVC via the query string `codec=hevc`:
```
http://localhost:1985/rtc/v1/whip/?app=live&stream=livestream&codec=hevc
```
The query string `codec=hevc` selects the video codec and generates the
corresponding lines in the answer SDP:
```
m=video 9 UDP/TLS/RTP/SAVPF 49 123
a=rtpmap:49 H265/90000
```
The encoder log also shows the codec:
```
Audio: opus, 48000HZ, channels: 2, pt: 111
Video: H265, 90000HZ, pt: 49
```
**Play with RTMP**
Play the HEVC stream via RTMP:
```bash
ffplay -i rtmp://localhost/live/livestream
```
You will see the codec in the logs:
```
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp
Stream #0:1: Video: hevc (Main), yuv420p(tv, bt709), 320x240, 30 fps, 30 tbr, 1k tbn
```
You can also use [WHEP
player](http://localhost:8080/players/whep.html?schema=http&&codec=hevc)
to play the stream.
Important refactors with AI:
* [AI: Refactor packet cache for RTC frame
builder.](b8ffa1630e)
* [AI: Refactor the packet copy and free for
SrsRtcFrameBuilder](f3487b45d7)
* [AI: Refactor the frame detector for
SrsRtcFrameBuilder](4ffc1526b9)
* [AI: Refactor the packet_video_rtmp for
SrsRtcFrameBuilder](81f6aef4ed)
* [AI: Add utests for
SrsCodecPayload.codec](61eb1c0bfc)
* [AI: Add utests for VideoPacketCache in
SrsRtcFrameBuilder.](fd25480dfa)
* [AI: Add utests for VideoFrameDetector in
SrsRtcFrameBuilder.](b4aa977bbd)
* [AI: Add regression test for RTC2RTMP with
HEVC.](5259a2aac3)
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
Co-authored-by: winlin <winlinvip@gmail.com>
1. Codec information cannot be retrieved on `Firefox` via
`getSenders`/`getReceivers`.
2. Codec information can be retrieved on `Chrome` via `getReceivers`, but
the result is incorrect.
3. So we retrieve codec information from `getStats`, which works well.
4. A timer is used because sometimes the codec cannot be retrieved
when `iceGatheringState` is `complete`.
5. Testing has been completed on the browsers listed below.
- [x] Chrome
- [x] Edge
- [x] Safari
- [x] Firefox
---------
Co-authored-by: winlin <winlinvip@gmail.com>
1. When the chunk message header uses type 1 or type 2, the extended
timestamp denotes the time delta.
2. When the DTS (Decoding Time Stamp) jumps and exceeds 16777215
(0xFFFFFF), the DTS calculation can go wrong, and if the audio and
video deltas differ, it may result in audio-video synchronization
issues.
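The rule in point 1 can be sketched as follows (function and field names are illustrative; a real parser reads these values from the chunk header): the 24-bit timestamp field of a type-1/2 header is a delta added to the previous timestamp, and when it saturates at 0xFFFFFF the real delta is carried in the 4-byte extended timestamp.

```cpp
#include <cassert>
#include <cstdint>

// 24-bit timestamp field saturated: the extended timestamp follows.
static const uint32_t kExtendedTimestampFlag = 0xFFFFFF;

// For type-1/2 chunk headers, the timestamp field is a *delta*, not an
// absolute time: add it to the previous message timestamp.
uint32_t next_timestamp(uint32_t prev, uint32_t field24, uint32_t extended) {
    uint32_t delta = (field24 >= kExtendedTimestampFlag) ? extended : field24;
    return prev + delta;
}
```

Treating the extended timestamp as an absolute time here is exactly the bug that breaks DTS calculation once the delta exceeds 16777215.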
---------
`TRANS_BY_GPT4`
---------
Co-authored-by: 彭治湘 <zuolengchan@douyu.tv>
Co-authored-by: Haibo Chen(陈海博) <495810242@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
In the WebRTC-to-RTMP conversion scenario, the conversion does not
proceed until an RTCP Sender Report (SR) is received; for reference, see:
https://github.com/ossrs/srs/pull/2470.
Thus, if HTTP-FLV playback is attempted before the SR is received, the
FLV header will contain only audio, with no video content.
This error can be resolved by disabling `guess_has_av` in the
configuration file, since we can guarantee that both audio and video are
present in the test cases.
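For reference, a sketch of the relevant configuration (the section placement is an assumption based on SRS's `full.conf`; verify against your version):

```
vhost __defaultVhost__ {
    http_remux {
        enabled         on;
        mount           [vhost]/[app]/[stream].flv;
        # Do not guess audio/video presence from the first packets;
        # assume both exist, avoiding an audio-only FLV header before SR.
        guess_has_av    off;
    }
}
```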
However, in the original regression tests, the
`TestRtcPublish_HttpFlvPlay` test case contains a bug:
5a404c089b/trunk/3rdparty/srs-bench/srs/rtc_test.go (L2421-L2424)
The test would pass when `hasAudio` was true and `hasVideo` was false,
which is incorrect. It has therefore been revised so that the test
passes only when both values are true.
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
To enable H.265 support for the WebRTC protocol, upgrade the pion/webrtc
library to version 4.
---------
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>
`SrsUniquePtr` does not support arrays or objects created by `malloc`,
because it only uses `delete` to dispose of the resource. You can use a
custom function to free memory allocated by `malloc` or other allocators:
```cpp
char* p = (char*)malloc(1024);
SrsUniquePtr<char> ptr(p, your_free_chars);
```
This can also be used to replace `SrsAutoFreeH`. For example:
```cpp
addrinfo* r = NULL;
SrsAutoFreeH(addrinfo, r, freeaddrinfo);
getaddrinfo("127.0.0.1", NULL, &hints, &r);
```
Now, this can be replaced by:
```cpp
addrinfo* r = NULL;
getaddrinfo("127.0.0.1", NULL, &hints, &r);
SrsUniquePtr<addrinfo> r2(r, freeaddrinfo);
```
Please be aware that there is a slight difference between `SrsAutoFreeH`
and `SrsUniquePtr`: `SrsAutoFreeH` tracks the address of the pointer
variable, while `SrsUniquePtr` captures its value at construction.
```cpp
addrinfo* r = NULL;
SrsAutoFreeH(addrinfo, r, freeaddrinfo); // OK: frees whatever r points to at scope exit, even if r is assigned later.
SrsUniquePtr<addrinfo> ptr(r, freeaddrinfo); // Wrong: captures r's current value (NULL here), so it may crash and a later result assigned to r is never freed.
```
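For comparison, the value-capturing behavior can be reproduced with the standard library (the names `free_chars` and `demo` are illustrative): the pointer value is captured at construction, so reassigning the variable afterwards does not change what gets freed.

```cpp
#include <cassert>
#include <cstdlib>
#include <memory>

// Counts how many buffers the custom deleter released.
static int g_freed = 0;
static void free_chars(char* p) { free(p); g_freed++; }

int demo() {
    char* p = (char*)malloc(1024);
    {
        // Like SrsUniquePtr, std::unique_ptr captures the *value* of p here.
        std::unique_ptr<char, void(*)(char*)> ptr(p, free_chars);
        p = NULL; // Harmless: ptr still owns and frees the original buffer.
    }
    return g_freed; // The one buffer was freed exactly once.
}
```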
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
```
../../../src/utest/srs_utest_st.cpp:27: Failure
Expected: (st_time_2 - st_time_1) <= (100), actual: 119 vs 100
[ FAILED ] StTest.StUtimeInMicroseconds (0 ms)
```
The GitHub Actions VM running the jobs may be slower; this error happens
frequently, so increase the threshold to let the unit test pass.
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>
## How to reproduce?
1. Refer to this commit, which contains the web demo that captures the
screen as a video stream through RTC.
2. Copy `trunk/research/players/whip.html` and
`trunk/research/players/js/srs.sdk.js` to replace the `develop` branch
source code.
3. `./configure && make`
4. `./objs/srs -c conf/rtc2rtmp.conf`
5. Open `http://localhost:8080/players/whip.html?schema=http`.
6. Check the `Screen` radio option.
7. Click `publish`, then choose the screen to share.
8. Play the RTMP live stream: `rtmp://localhost/live/livestream`.
9. Observe the video stuttering.
## Cause
When capturing the screen, the Chrome web browser frequently sends RTP
packets with empty payloads; in this case, all cached RTP packets are
dropped before the next keyframe arrives.
The OBS screen stream and camera stream do not have this problem.
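One possible direction for the fix can be sketched as a guard that refuses to cache payload-less packets (the types and names below are illustrative, not the actual `SrsRtcFrameBuilder` code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-in for an RTP packet: only the payload matters here.
struct RtpPacket { std::vector<uint8_t> payload; };

// Skip payload-less packets so they do not poison the packet cache and
// force all cached packets to be dropped until the next keyframe.
bool should_cache(const RtpPacket& pkt) {
    return !pkt.payload.empty();
}
```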
## Add screen stream to WHIP demo
><img width="581" alt="Screenshot 2024-08-28 at 2 49 46 PM"
src="https://github.com/user-attachments/assets/9557dbd2-c799-4dfd-b336-5bbf2e4f8fb8">
---------
Co-authored-by: winlin <winlinvip@gmail.com>