Commit Graph

547 Commits

Author SHA1 Message Date
Jacob Su
7aba442a38 HLS: Remove deprecated hls_acodec/hls_vcodec configs. v7.0.53 (#4225)
## Summary
Removes the deprecated `hls_acodec` and `hls_vcodec` configuration
options and implements automatic codec detection for HLS streams, fixing
issues with video-only streams incorrectly showing audio information.

## Problem
- When streaming video-only content via RTMP, HLS output incorrectly
contained audio track information due to hardcoded default codec
settings
- The static `hls_acodec` and `hls_vcodec` configurations were
inflexible and caused compatibility issues with some players
- Users had to manually configure `hls_acodec an` to fix video-only
streams

## Solution
- **Remove deprecated configs**: Eliminates `hls_acodec` and
`hls_vcodec` configuration options entirely
- **Dynamic codec detection**: HLS muxer now automatically detects and
uses actual stream codecs in real-time
- **Improved defaults**: Changes from hardcoded AAC/H.264 defaults to
disabled state, letting actual stream content determine codec
information
- **Real-time codec switching**: Supports codec changes during streaming
with proper logging

## Changes
- Remove `get_hls_acodec()` and `get_hls_vcodec()` from SrsConfig
- Update HLS muxer to use `latest_acodec_`/`latest_vcodec_` for codec
detection
- Add codec detection logic in `write_audio()` and `write_video()`
methods
- Remove deprecated config options from all configuration files
- Add comprehensive unit tests for codec detection functionality

Fixes #4223
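
A minimal sketch of the codec-detection idea described above, using hypothetical names and a simplified muxer rather than the actual SrsHlsMuxer code: the last observed codec is tracked per frame and starts in a disabled state, so a video-only stream never advertises audio.

```cpp
// Simplified, self-contained sketch (not the real SrsHlsMuxer): the muxer
// keeps the last observed codec and updates it from each frame, instead of
// reading a static hls_acodec/hls_vcodec config value.
#include <cstdio>

enum AudioCodec { AudioCodecDisabled = 0, AudioCodecAAC, AudioCodecMP3 };

class HlsMuxerSketch {
public:
    HlsMuxerSketch() : latest_acodec_(AudioCodecDisabled) {}

    // Called for every audio frame; detects the actual codec in real time.
    void write_audio(AudioCodec frame_codec) {
        if (frame_codec != AudioCodecDisabled && frame_codec != latest_acodec_) {
            std::printf("HLS: audio codec changed %d -> %d\n",
                        (int)latest_acodec_, (int)frame_codec);
            latest_acodec_ = frame_codec; // Real-time codec switching.
        }
        // ... mux the frame using latest_acodec_ ...
    }

    AudioCodec latest_acodec() const { return latest_acodec_; }

private:
    // Disabled by default: a video-only stream never advertises audio.
    AudioCodec latest_acodec_;
};

int main() {
    HlsMuxerSketch muxer;
    muxer.write_audio(AudioCodecAAC); // The first frame determines the codec.
    return 0;
}
```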

---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: OSSRS-AI <winlinam@gmail.com>
2025-08-13 06:34:05 -04:00
Jacob Su
30ea67f5f2 MP4 DVR: Fix audio/video synchronization issues in WebRTC recordings. v6.0.172 v7.0.52 (#4230)
Fixes #3993 - WebRTC streams recorded to MP4 via DVR exhibit audio/video
synchronization issues, with audio typically ahead of video. **Note:
This issue is specific to MP4 format; FLV recordings are not affected.**

When WebRTC streams are converted to RTMP and then muxed to MP4, the
audio and video tracks may start at different timestamps. The MP4 muxer
was not accounting for this timing offset between the first audio and
video samples in the STTS (Sample Time-to-Sample) table, causing the
tracks to be misaligned in the final MP4 file.

Introduces `SrsMp4DvrJitter` class specifically for MP4 audio/video
synchronization:

- **Timestamp Tracking**: Records the DTS of the first audio and video
samples
- **Offset Calculation**: Computes the timing difference between track
start times
- **MP4 STTS Correction**: Sets appropriate `sample_delta` values in the
MP4 STTS table to maintain proper A/V sync

- Added `SrsMp4DvrJitter` class in `srs_kernel_mp4.hpp/cpp`
- Integrated jitter correction into `SrsMp4SampleManager::write_track()`
for MP4 format only
- Added comprehensive unit tests covering various timing scenarios
- **Scope**: Changes are isolated to MP4 kernel code and do not affect
FLV processing

This fix ensures that MP4 DVR recordings from WebRTC streams maintain
proper audio/video synchronization regardless of the relative timing of
the first audio and video frames, while leaving FLV format processing
unchanged.
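
A minimal, self-contained sketch of the timestamp-tracking idea described above; the names are hypothetical and this is not the actual SrsMp4DvrJitter API, only the offset calculation it performs.

```cpp
// Simplified sketch (hypothetical names, not the real SrsMp4DvrJitter):
// record the DTS of the first audio and first video sample, then expose the
// offset so the MP4 muxer can adjust the first STTS sample_delta accordingly.
#include <cstdint>
#include <cstdio>

class Mp4DvrJitterSketch {
public:
    Mp4DvrJitterSketch() : first_audio_dts_(-1), first_video_dts_(-1) {}

    void on_audio(int64_t dts) { if (first_audio_dts_ < 0) first_audio_dts_ = dts; }
    void on_video(int64_t dts) { if (first_video_dts_ < 0) first_video_dts_ = dts; }

    // Positive result: audio starts earlier than video by this many ms, so the
    // earlier-starting track's first STTS entry needs stretching by this amount.
    int64_t audio_ahead_of_video_ms() const {
        if (first_audio_dts_ < 0 || first_video_dts_ < 0) return 0;
        return first_video_dts_ - first_audio_dts_;
    }

private:
    int64_t first_audio_dts_;
    int64_t first_video_dts_;
};

int main() {
    Mp4DvrJitterSketch jitter;
    jitter.on_audio(1000); // First audio sample at DTS 1000ms.
    jitter.on_video(1080); // First video sample at DTS 1080ms.
    std::printf("A/V start offset: %lldms\n", (long long)jitter.audio_ahead_of_video_ms());
    return 0;
}
```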

---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>
Co-authored-by: OSSRS-AI <winlinam@gmail.com>
2025-08-12 09:54:29 -04:00
Haibo Chen(陈海博)
db5e43967c Valgrind: Return error for unsupported check=new on Valgrind < 3.21. v7.0.52 (#4301)
## Problem
The `valgrind?check=new` API parameter uses `VALGRIND_DO_NEW_LEAK_CHECK`
which is only available in Valgrind 3.21+. On older versions like
CentOS's default Valgrind 3.16, this causes undefined behavior since the
macro is not defined.

## Solution
- Check for `VALGRIND_DO_NEW_LEAK_CHECK` availability before processing
the request
- Return `ERROR_NOT_SUPPORTED` with version information when unsupported
- Move the version check before thread creation to avoid unnecessary
resource allocation

## Changes
- Early validation of `check=new` parameter compatibility
- Proper error response with current Valgrind version details
- Prevents undefined behavior on older Valgrind installations

Fixes compatibility issues with older Valgrind versions commonly found
in enterprise Linux distributions.
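
A hedged sketch of the guard described above, assuming a hypothetical handler shape; only the macro `VALGRIND_DO_NEW_LEAK_CHECK` and the idea of rejecting unsupported versions early come from this PR, and the error-code value below is a placeholder.

```cpp
// Illustrative guard (not the actual SRS handler): only accept check=new when
// the Valgrind headers define VALGRIND_DO_NEW_LEAK_CHECK (Valgrind 3.21+).
#include <string>
#include <cstdio>

static const int ERROR_NOT_SUPPORTED = 4000; // Placeholder, not the real SRS code.

int handle_valgrind_check(const std::string& check) {
    if (check == "new") {
#ifndef VALGRIND_DO_NEW_LEAK_CHECK
        // Reject before creating any thread, so no resources are allocated.
        std::printf("check=new requires Valgrind 3.21+, not supported here\n");
        return ERROR_NOT_SUPPORTED;
#else
        VALGRIND_DO_NEW_LEAK_CHECK; // Trigger an incremental leak check.
#endif
    }
    return 0;
}

int main() { return handle_valgrind_check("new") == 0 ? 0 : 1; }
```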

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
Co-authored-by: winlin <winlinvip@gmail.com>
Co-authored-by: OSSRS-AI <winlinam@gmail.com>
2025-08-12 09:13:59 -04:00
Jacob Su
f2e56c7e83 fix srt cmake 4.x compiling error. v7.0.52 (#4431)
## How to reproduce?

1. cmake version 4.0.3
2. clean srt build cache:
    `rm -rf objs/Platform-*`
3. `./configure`

compiling error output:

> Build srt-1-fit
> patching file
'./objs/Platform-SRS7-Darwin-24.6.0-Clang17.0.0-arm64/srt-1-fit/srtcore/api.cpp'
> Running: cmake .
-DCMAKE_INSTALL_PREFIX=/Users/jacobsu/hack/media/srs/trunk/objs/Platform-SRS7-Darwin-24.6.0-Clang17.0.0-arm64/3rdparty/srt
-DENABLE_APPS=0 -DENABLE_STATIC=1 -DENABLE_CXX11=0 -DENABLE_SHARED=0
-DOPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include
-DOPENSSL_LIBRARIES=/usr/local/opt/openssl/lib/libcrypto.a
> CMake Error at CMakeLists.txt:10 (cmake_minimum_required):
>   Compatibility with CMake < 3.5 has been removed from CMake.
> 
> Update the VERSION argument <min> value. Or, use the <min>...<max>
syntax
> to tell CMake that the project requires at least <min> but has been
updated
>   to work with policies introduced by <max> or earlier.
> 
> Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.

> 
> -- Configuring incomplete, errors occurred!

## Cause

CMake 4.x is no longer compatible with `cmake_minimum_required(VERSION 2.8.12
FATAL_ERROR)` when only a minimum version is specified.

## Solution

Add `-DCMAKE_POLICY_VERSION_MINIMUM=3.5` to the cmake command arguments.


---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: OSSRS-AI <winlinam@gmail.com>
2025-08-12 08:32:36 -04:00
Winlin
6e1134fe9b Use clang format. v7.0.52 (#4433)
---------

Co-authored-by: ChenGH <chengh_math@126.com>
2025-08-11 23:19:19 -04:00
Jacob Su
339897e0c7 Feature: Support HLS with fmp4 segment for HEVC/LLHLS. v7.0.51 (#4159)
Currently, SRS only supports HLS with MPEG-TS segment files, but LL-HLS and
HEVC require the fMP4 format. See #4327 for details. Furthermore, fMP4 has
smaller overhead than TS, and fMP4 can be used for DVR. In short, fMP4 is
definitely the future segment format for HLS.

Start SRS with the config file that enables HLS with fMP4:

```
./objs/srs -c conf/hls.mp4.conf
```

Publish stream by FFmpeg:

```
ffmpeg -re -i doc/source.flv -c copy -f flv rtmp://localhost/live/livestream
```

Play the stream by SRS player:
[http://localhost:8080/live/livestream.m3u8](http://localhost:8080/players/srs_player.html?stream=livestream.m3u8)

Finished by AI:

* [AI: Change init.mp4 to the same directory of
m3u8.](17621c8442)
* [AI: Fix the error handling
bug.](af3758a592)
* [AI: Fix Chrome stuttering
problem.](aaab60c314)

---------

Co-authored-by: winlin <winlinvip@gmail.com>
2025-08-11 20:55:06 -04:00
Winlin
c762e8204a AI: HTTP-FLV: Fix heap-use-after-free crash during stream unmount. v6.0.171 v7.0.50 (#4432)
## Summary
Fixes a critical heap-use-after-free crash in HTTP-FLV streaming that
occurs when a client requests a stream while it's being unmounted
asynchronously.

## Problem
- **Issue**: #4429 - Heap-use-after-free crash in
`SrsLiveStream::serve_http()`
- **Root Cause**: Race condition between coroutines in single-threaded
SRS server:
1. **Coroutine A**: HTTP client requests FLV stream → `serve_http()`
starts
2. **Coroutine B**: RTMP publisher disconnects → triggers async stream
destruction
3. **Async Worker**: Destroys `SrsLiveStream` object while Coroutine A
is yielded
  4. **Coroutine A**: Resumes and accesses freed memory → **CRASH**

## Solution
1. **Early viewer registration**: Add HTTP connection to `viewers_` list
immediately in `serve_http()` before any I/O operations that could yield
2. **Lifecycle protection**: Split `serve_http()` into wrapper and
implementation to ensure proper viewer management
3. **Stream availability checks**: Add fast checks for stream disposal
state before critical operations
4. **Improved error handling**: Convert warnings to fatal errors when
trying to free alive streams

## Key Changes
- **`SrsLiveStream::serve_http()`**: Now immediately registers viewer
and delegates to `serve_http_impl()`
- **`SrsLiveStream::serve_http_impl()`**: Contains the actual HTTP
serving logic
- **`SrsHttpStreamDestroy::call()`**: Enhanced error handling and longer
wait timeout
- **Stream state validation**: Added checks for `entry->enabled` before
proceeding with stream operations

Fixes #4429
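
A simplified sketch of the wrapper/impl split described above (hypothetical names, not the actual SrsLiveStream code): the viewer is registered before any operation that can yield and removed when the wrapper returns, so the asynchronous destroyer can tell the stream is still in use.

```cpp
// Simplified sketch of the serve_http() wrapper/impl split (not the real
// SrsLiveStream): register the viewer before any I/O that may yield, so the
// stream cannot be freed while a coroutine is still inside it.
#include <algorithm>
#include <vector>

struct HttpConn {}; // Stand-in for the HTTP connection object.

class LiveStreamSketch {
public:
    int serve_http(HttpConn* viewer) {
        viewers_.push_back(viewer);          // 1. Register immediately.
        int err = serve_http_impl(viewer);   // 2. Do the real work (may yield).
        viewers_.erase(std::remove(viewers_.begin(), viewers_.end(), viewer),
                       viewers_.end());      // 3. Always deregister.
        return err;
    }

    bool alive() const { return !viewers_.empty(); } // Checked before freeing.

private:
    int serve_http_impl(HttpConn*) {
        // ... write the FLV header, loop delivering messages, may yield ...
        return 0;
    }

    std::vector<HttpConn*> viewers_;
};

int main() {
    LiveStreamSketch stream;
    HttpConn conn;
    return stream.serve_http(&conn);
}
```
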
2025-08-11 18:57:31 -04:00
Shengming Yuan
3562c27224 Allow Forward to be configured with Env Var. v6.0.170 v7.0.49 (#4245)
Allow environment variables to control the forward (stream forwarding) feature.
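
A generic sketch of the env-var override pattern, assuming a hypothetical variable name rather than the one actually used by SRS: if the variable is set, it takes precedence over the config-file value.

```cpp
// Generic env-var override sketch ("SRS_VHOST_FORWARD_ENABLED" is an
// illustrative name only, not necessarily the variable SRS uses).
#include <cstdlib>
#include <string>

bool get_forward_enabled(bool config_file_value) {
    const char* v = std::getenv("SRS_VHOST_FORWARD_ENABLED");
    if (!v) return config_file_value;            // No override: use the file.
    std::string s(v);
    return s == "on" || s == "true" || s == "1"; // Env override wins.
}

int main() { return get_forward_enabled(false) ? 0 : 1; }
```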

By AI:

* [AI: Add utests for
PR.](1b978d19a5)

---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: chundonglinlin <chundonglinlin@163.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-07-28 08:37:38 -04:00
chundonglinlin
e712b12a15 RTC: audio packet jitter buffer. v7.0.48 (#4295)
RTP packets may be retransmitted, reordered, jittered, or delayed. This can
cause abnormalities when converting to RTMP.

To reproduce this problem, introduce network reordering with
[tc-ui](https://github.com/ossrs/tc-ui). Note that you need a Linux
server; start it with Docker:

```bash
docker run --network=host --privileged -it --restart always -d \
    --name tc -v /lib/modules:/lib/modules:ro ossrs/tc-ui:1
```

Set up 5% packet reordering with a 40ms delay (as in the command below); then
you will notice that the audio stutters, is somewhat noisy, and lacks fluency.

```bash
curl http://localhost:2023/tc/api/v1/config/raw -X POST \
  -d 'tcset ens5 --direction incoming --delay 40ms --reordering 5% --port 8000'
```

> Note: Even without imposed network conditions, packet reordering can occur
naturally, especially on public cloud platforms such as AWS EC2.

> Note: You can use command `curl
http://localhost:2023/tc/api/v1/config/raw -X POST -d 'tcdel --all
ens5'` to reset the network condition settings.

Check the web console, you will see the reordering setup:

<img width="500" alt="TC Settings"
src="https://github.com/user-attachments/assets/b278fdf4-9fcc-4aac-b534-dfa34e28c371"
/>

Then, publish stream via WHIP: http://localhost:8080/players/whip.html

And, play via HTTP-FLV: http://localhost:8080/players/srs_player.html
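
A minimal reorder-buffer sketch of the idea described above, using hypothetical names rather than the actual AudioPacketCache API: packets are held by RTP sequence number and released in order, with a small cap so a lost packet cannot stall playback forever.

```cpp
// Minimal audio reorder-buffer sketch (hypothetical, not the real
// AudioPacketCache): buffer RTP packets by sequence number, release them in
// order, and give up on a gap if too many packets are held.
#include <cstdint>
#include <set>
#include <vector>

class AudioReorderSketch {
public:
    explicit AudioReorderSketch(size_t max_hold = 16)
        : max_hold_(max_hold), has_next_(false), next_seq_(0) {}

    // Insert one packet by RTP sequence number; returns the sequence numbers
    // that are now deliverable in order.
    std::vector<uint16_t> insert(uint16_t seq) {
        if (!has_next_) { has_next_ = true; next_seq_ = seq; }
        cache_.insert(seq);

        std::vector<uint16_t> out;
        // Flush the contiguous run starting at the expected sequence number.
        while (cache_.count(next_seq_)) {
            out.push_back(next_seq_);
            cache_.erase(next_seq_);
            ++next_seq_; // uint16_t wraps like RTP sequence numbers do.
        }
        // If too many packets are held, give up on the gap instead of stalling,
        // and resume from the smallest buffered sequence (wrap ignored here).
        if (cache_.size() > max_hold_) {
            next_seq_ = *cache_.begin();
        }
        return out;
    }

private:
    size_t max_hold_;
    bool has_next_;
    uint16_t next_seq_;
    std::set<uint16_t> cache_;
};

int main() {
    AudioReorderSketch buf;
    buf.insert(100); // Delivered immediately: {100}.
    buf.insert(102); // Held: 101 is still missing.
    buf.insert(101); // Releases 101 and then 102, in order.
    return 0;
}
```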

Finished by AI:

* [AI: Extract audio jitter buffer to class
AudioPacketCache](a4097d9374)
* [AI: Add utest and fix
bug.](c919227af5)

---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-07-16 21:36:56 -04:00
Haibo Chen(陈海博)
5dc292ce64 NEW PROTOCOL: Support viewing stream over RTSP. v7.0.47 (#4333)
## Introduction

This PR adds support for viewing streams via the RTSP protocol. Note
that it only supports viewing streams, not publishing streams via RTSP.

Currently, only publishing via RTMP is supported, which is then
converted to RTSP. Further work is needed to support publishing RTC/SRT
streams and converting them to RTSP.

## Usage

Build and run SRS with RTSP support:

```
cd srs/trunk && ./configure --rtsp=on && make -j16
./objs/srs -c conf/rtsp.conf
```

Push stream via RTMP by FFmpeg:

```
ffmpeg -re -i doc/source.flv -c copy -f flv rtmp://localhost/live/livestream
```

View the stream via RTSP protocol, try UDP first, then use TCP:

```
ffplay -i rtsp://localhost:8554/live/livestream
```

Or specify the transport protocol with TCP:

```
ffplay -rtsp_transport tcp -i rtsp://localhost:8554/live/livestream
```

## Unit Test

Run utest for RTSP:

```
./configure --utest=on && make utest -j16
./objs/srs_utest
```

## Regression Test

You need to start SRS for regression testing.

```
./objs/srs -c conf/regression-test-for-clion.conf
```

Then run regression tests for RTSP.

```
cd srs/trunk/3rdparty/srs-bench
go test ./srs -mod=vendor -v -count=1 -run=TestRtmpPublish_RtspPlay
```

## Blackbox Test

For blackbox testing, SRS will be started by utest, so there is no need
to start SRS manually.

```
cd srs/trunk/3rdparty/srs-bench
go test ./blackbox -mod=vendor -v -count=1 -run=TestFast_RtmpPublish_RtspPlay_Basic
```

## UDP Transport

As UDP requires port allocation, this PR doesn't support delivering
media stream via UDP transport, so it will fail if you try to use UDP as
transport:

```
ffplay -rtsp_transport udp -i rtsp://localhost:8554/live/livestream

[rtsp @ 0x7fbc99a14880] method SETUP failed: 461 Unsupported Transport
rtsp://localhost:8554/live/livestream: Protocol not supported

[2025-07-05 21:30:52.738][WARN][14916][7d7gf623][35] RTSP: setup failed: code=2057
(RtspTransportNotSupported) : UDP transport not supported, only TCP/interleaved mode is supported
```

There are no plans to support UDP transport for RTSP. In the real world,
UDP is rarely used; the vast majority of RTSP traffic uses TCP.
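
An illustrative sketch of the SETUP handling described above (not the actual SRS RTSP code): only the TCP/interleaved transport is accepted, and UDP gets a `461 Unsupported Transport` response, matching the ffplay failure shown in the log.

```cpp
// Illustrative SETUP handling (not the actual SRS RTSP code): accept only the
// TCP/interleaved transport; anything else gets 461 Unsupported Transport.
#include <string>
#include <cstdio>

std::string handle_setup(const std::string& transport_header) {
    // e.g. "RTP/AVP/TCP;unicast;interleaved=0-1" vs "RTP/AVP;unicast;client_port=..."
    bool interleaved = transport_header.find("interleaved") != std::string::npos;
    if (!interleaved) {
        return "RTSP/1.0 461 Unsupported Transport\r\n\r\n";
    }
    return "RTSP/1.0 200 OK\r\nTransport: " + transport_header + "\r\n\r\n";
}

int main() {
    // A UDP transport request is rejected, as ffplay observes above.
    std::printf("%s", handle_setup("RTP/AVP;unicast;client_port=5000-5001").c_str());
    return 0;
}
```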

## Play Before Publish

RTSP supports audio with AAC and OPUS codecs, which is significantly
different from RTMP or WebRTC.

RTSP uses commands to exchange SDP and specify the audio track to play,
unlike WHEP or HTTP-FLV, which use the query string of the URL. RTSP
depends on the player’s behavior, making it very difficult to use and
describe.

Considering the feature that allows playing the stream before publishing
it, it requires generating some default parameters in the SDP. For OPUS,
the sample rate is 48 kHz with 2 channels, while AAC is more complex,
especially regarding the sample rate, which may be 44.1 kHz, 32 kHz, or
48 kHz.

Therefore, for RTSP, we cannot support play-then-publish. Instead, there
must already be a stream when playing it, so that the audio codec is
determined.

## Opus Codec

There is no Opus codec support for RTSP, because RTC2RTSP always converts RTC
to RTMP frames and then converts them to RTSP packets. Therefore, the audio
codec is always AAC after converting RTC to RTMP.

This means the bridge architecture needs some changes. We need a new
bridge that binds to the target protocol. For example, RTC2RTMP converts
the audio codec, but RTC2RTSP keeps the original audio codec.

Furthermore, the RTC2RTMP bridge should also support bypassing the Opus
codec if we use enhanced-RTMP, which supports the Opus audio codec. I
think it should be configurable to either transcode or bypass the audio
codec. However, this is not relevant to RTSP.

## AI Contributor

Below commits are contributed by AI:

* [AI: Remove support for media transport via
UDP.](755686229f)
* [AI: Add crucial logs for each RTSP
stage.](9c8cbe7bde)
* [AI: Support AAC codec for
RTSP.](7d7cc12bae)
* [AI: Add option --rtsp for
RTSP.](f67414d9ee)
* [AI: Extract SrsRtpVideoBuilder for RTC and
RTSP.](562e76b904)

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-07-11 08:18:40 -04:00
Haibo Chen(陈海博)
6208b6fe61 Fix H.264 B-frame detection logic to comply with specification. v5.0.224 v6.0.169 v7.0.46 (#4414)
For H.264, B-frames can only be present when the NAL type is 1, 2, 3, or 4,
that is, non-IDR coded slices and slice data partitions.

The current `SrsVideoFrame::parse_avc_bframe()` function uses incorrect
logic to determine if a NALU can contain B-frames. The original
implementation only checked for specific NALU types (IDR, SPS, PPS) to
mark as non-B-frames, but this approach misses many other NALU types
that cannot contain B-frames according to the H.264 specification.

According to H.264 specification (ISO_IEC_14496-10-AVC-2012.pdf, Table
7-1), B-frames can **only** exist in these specific NALU types:
- Type 1: Non-IDR coded slice (`SrsAvcNaluTypeNonIDR`)
- Type 2: Coded slice data partition A (`SrsAvcNaluTypeDataPartitionA`) 
- Type 3: Coded slice data partition B (`SrsAvcNaluTypeDataPartitionB`)
- Type 4: Coded slice data partition C (`SrsAvcNaluTypeDataPartitionC`)

All other NALU types (IDR=5, SEI=6, SPS=7, PPS=8, AUD=9, etc.) cannot
contain B-frames by definition.
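
A minimal sketch of the rule from Table 7-1 described above, not the actual `SrsVideoFrame::parse_avc_bframe()` implementation: only NALU types 1 through 4 can carry slices that may be B-frames.

```cpp
// Sketch of the H.264 Table 7-1 rule described above (not the actual
// SrsVideoFrame::parse_avc_bframe code): only NALU types 1..4 can carry
// slices that may be B-frames.
#include <cstdint>
#include <cstdio>

enum AvcNaluType {
    AvcNaluTypeNonIDR = 1,         // Non-IDR coded slice
    AvcNaluTypeDataPartitionA = 2, // Coded slice data partition A
    AvcNaluTypeDataPartitionB = 3, // Coded slice data partition B
    AvcNaluTypeDataPartitionC = 4, // Coded slice data partition C
    AvcNaluTypeIDR = 5,
    AvcNaluTypeSEI = 6,
    AvcNaluTypeSPS = 7,
    AvcNaluTypePPS = 8,
    AvcNaluTypeAUD = 9,
};

// True if this NALU type is even capable of containing a B-frame.
bool nalu_may_contain_bframe(uint8_t nalu_header) {
    int nal_type = nalu_header & 0x1f; // Lower 5 bits of the NALU header byte.
    return nal_type >= AvcNaluTypeNonIDR && nal_type <= AvcNaluTypeDataPartitionC;
}

int main() {
    std::printf("IDR(5): %d, NonIDR(1): %d\n",
                nalu_may_contain_bframe(0x65),  // nal_type=5 -> never a B-frame
                nalu_may_contain_bframe(0x41)); // nal_type=1 -> may be a B-frame
    return 0;
}
```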

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-07-10 09:08:05 -04:00
Winlin
b2a827f8cf Refine code and add tests for #4289. v7.0.45 (#4412)
Use AI to understand, add comments, add utests, refactor code for PR
#4289
2025-07-04 17:26:12 -04:00
Winlin
c5b6b72876 RTMP2RTC: Support dual video track for bridge. v7.0.44 (#4413)
This PR refactors the RTMP to RTC bridge to support multiple video
tracks by implementing lazy initialization of audio and video tracks.
Instead of pre-determining track parameters during bridge construction,
tracks are now initialized dynamically when the first packet of each
type is received, allowing proper codec detection and track
configuration for dual video track scenarios.

Previously, viewing WHEP with HEVC before publishing RTMP failed, because the
default codec was AVC and was not updated until the stream was published. This
change enables better handling of streams with multiple video tracks in
RTMP-to-WebRTC bridging scenarios. Now you are able to:

1. View WHEP with HEVC: Play with WebRTC:
http://localhost:8080/players/whep.html?schema=http&&codec=hevc
2. Then publish by RTMP: `ffmpeg -stream_loop -1 -re -i doc/source.flv
-c:v libx265 -profile:v main -preset fast -b:v 2000k -maxrate 2000k
-bufsize 2000k -bf 0 -c:a aac -b:a 48k -ar 44100 -ac 2 -f flv
rtmp://localhost/live/livestream`

Or publish RTMP with HEVC, then view by WHEP.

Note that if the codecs do not match, the error log will display RTC:
`Drop for ssrc xxxxxx not found`. For example, this can occur if you
publish RTMP with HEVC while viewing the stream with AVC.
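
A simplified sketch of the lazy track initialization described above (hypothetical names, not the actual bridge code): the video track is bound to a codec only when the first video packet arrives, and codec switches are logged.

```cpp
// Simplified lazy-initialization sketch (not the actual RTMP-to-RTC bridge):
// the track is created when the first packet arrives, so the codec (AVC or
// HEVC) is taken from the stream instead of being fixed at construction time.
#include <cstdio>
#include <string>

class Rtmp2RtcBridgeSketch {
public:
    void on_video(const std::string& codec) {
        if (video_codec_.empty()) {           // First video packet: init track.
            video_codec_ = codec;             // e.g. "hevc" or "avc".
            std::printf("init video track with codec %s\n", codec.c_str());
        } else if (video_codec_ != codec) {   // Codec changed mid-stream.
            std::printf("video codec switch %s -> %s\n",
                        video_codec_.c_str(), codec.c_str());
            video_codec_ = codec;
        }
        // ... packetize into RTP using the track bound to video_codec_ ...
    }

private:
    std::string video_codec_; // Empty until the first packet is seen.
};

int main() {
    Rtmp2RtcBridgeSketch bridge;
    bridge.on_video("hevc"); // Track is only now bound to HEVC.
    return 0;
}
```
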
2025-07-04 14:23:13 -04:00
Haibo Chen(陈海博)
cbc98dc0d9 rtc2rtmp: Support RTC-to-RTMP remuxing with HEVC. v7.0.43 (#4349)
**Introduction**

This pull request builds upon the foundation laid in
https://github.com/ossrs/srs/pull/4289 . While the previous work solely
implemented unidirectional HEVC support from RTMP to RTC, this
submission further enhances it by introducing support for the RTC to
RTMP direction.

**Usage**

Launch SRS with `rtc2rtmp.conf`

```bash
./objs/srs -c conf/rtc2rtmp.conf
```

**Push with WebRTC**

Upgrade the browser to Chrome (136+) or Safari (18+), then open the [WHIP
encoder](http://localhost:8080/players/whip.html?schema=http&&codec=hevc)
and push the stream with a URL that enables HEVC via the query string `codec=hevc`:

```bash
http://localhost:1985/rtc/v1/whip/?app=live&stream=livestream&codec=hevc
```

The query string `codec=hevc` is used to select the video codec and generates
the following lines in the answer SDP:

```
m=video 9 UDP/TLS/RTP/SAVPF 49 123
a=rtpmap:49 H265/90000
```

The encoder log also shows the codec:

```
Audio: opus, 48000HZ, channels: 2, pt: 111
Video: H265, 90000HZ, pt: 49
```

**Play with RTMP**

Play HEVC stream via RTMP.

```bash
ffplay -i rtmp://localhost/live/livestream
```

You will see the codec in logs:

```
  Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp
  Stream #0:1: Video: hevc (Main), yuv420p(tv, bt709), 320x240, 30 fps, 30 tbr, 1k tbn
```

You can also use [WHEP
player](http://localhost:8080/players/whep.html?schema=http&&codec=hevc)
to play the stream.

Important refactor with AI:

* [AI: Refactor packet cache for RTC frame
builder.](b8ffa1630e)
* [AI: Refactor the packet copy and free for
SrsRtcFrameBuilder](f3487b45d7)
* [AI: Refactor the frame detector for
SrsRtcFrameBuilder](4ffc1526b9)
* [AI: Refactor the packet_video_rtmp for
SrsRtcFrameBuilder](81f6aef4ed)
* [AI: Add utests for
SrsCodecPayload.codec](61eb1c0bfc)
* [AI: Add utests for VideoPacketCache in
SrsRtcFrameBuilder.](fd25480dfa)
* [AI: Add utests for VideoFrameDetector in
SrsRtcFrameBuilder.](b4aa977bbd)
* [AI: Add regression test for RTC2RTMP with
HEVC.](5259a2aac3)

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-07-03 08:24:42 -04:00
Haibo Chen(陈海博)
07e7984fdf Player: Get codec by webrtc api: pc.getStats. v7.0.42 (#4310)
1. Codec information cannot be retrieved on `Firefox` via
`getSenders/getReceivers`.
2. Codec information can be retrieved on `Chrome` via `getReceivers`, but it
is incorrect, like this:

![image](https://github.com/user-attachments/assets/e0bb93b1-ccd0-46c0-ae21-074934f66a1e)

3. So, we retrieve codec information from `getStats`, and it works well.
4. The timer is used because sometimes the codec cannot be retrieved
when `iceGatheringState` is `complete`.
5. Testing has been completed on the browsers listed below.
   - [x] Chrome
   - [x] Edge
   - [x] Safari
   - [x] Firefox

---------

Co-authored-by: winlin <winlinvip@gmail.com>
2025-06-04 10:28:46 -04:00
Haibo Chen(陈海博)
133866a944 Transcode: Bugfix: Fix loop transcoding with host. #3516. v6.0.168 v7.0.41 (#4325)
#### What issue has been resolved?
For issues https://github.com/ossrs/srs/issues/3516 and
https://github.com/ossrs/srs/issues/4055, and PR
https://github.com/ossrs/srs/pull/3618.

#### What is the root cause of the problem?
The issue arises from a mismatch between the `input` and `output`
formats within the
[`SrsEncoder::initialize_ffmpeg`](https://github.com/ossrs/srs/pull/4325/files#diff-a3dd7c498fc26d36def2e8c2c3b7edfe1bf78f0620b1a838aefa70ba119cad03L241-L254)
function.

For example:
Input: `rtmp://127.0.0.1:1935/live?vhost=__defaultVhost__/livestream_ff`
Output:
`rtmp://127.0.0.1:1935/live/livestream_ff?vhost=__defaultVhost__`

This may result in the failure of the [code
segment](https://github.com/ossrs/srs/pull/4325/files#diff-a3dd7c498fc26d36def2e8c2c3b7edfe1bf78f0620b1a838aefa70ba119cad03L292-L298)
responsible for determining whether to loop.

#### What is the approach to solving this issue?
It simply involves modifying the order of `stream` and `vhost`.

#### How was the issue introduced?
The commit introducing this bug is:
7d47017a00
The order of [parameters in the configuration
file](7d47017a00 (diff-428de168925d659dae72bb49273c3b048ed2800906c6848560badae854250126L26-R26))
has been modified to address the `ingest` issue.

#### Outstanding issues
Please note that this PR does not entirely resolve the issue; for
example, modifying the `output` format in configuration still results in
exceptions. To comprehensively address this problem, extensive code
modifications would be required.

However, strictly adhering to the configuration file format can
effectively prevent this issue.

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-06-04 10:11:58 -04:00
Hamed Mansouri
8b65fe2063 Update the release in the README for consistent. v7.0.40 (#4341)
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-06-04 08:08:00 -04:00
Haibo Chen(陈海博)
2d4bb8e839 Update the codename for version 7.0 to "Kai". v7.0.39 (#4368)
Co-authored-by: winlin <winlinvip@gmail.com>
2025-06-04 08:04:27 -04:00
ChenGH
cc115afc1d Script: Use clang-format to unify the coding style. v7.0.38 (#4366)
1. Add a clang-format config file.
2. Add clang_format.sh, used to format C++ code before a PR is merged.

---------

Co-authored-by: winlin <winlinvip@gmail.com>
2025-06-01 22:01:15 -04:00
pengzhixiang
9b942fafcc RTMP: Use extended timestamp as delta when chunk fmt=1/2. v6.0.167 v7.0.37 (#4356)
1. When the chunk message header employs type 1 and type 2, the extended
timestamp denotes the time delta.
2. When the DTS (Decoding Time Stamp) jumps and exceeds 16777215, the DTS
calculation can go wrong, and if the audio and video deltas differ, it may
result in audio-video synchronization issues (see the sketch below).
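
A minimal sketch of the timestamp handling described above, not the actual SRS RTMP stack: for a type 0 chunk the (possibly extended) timestamp is absolute, while for type 1/2 chunks it is a delta added to the previous timestamp.

```cpp
// Sketch only (not the actual SRS RTMP stack): for chunk fmt=0 the (possibly
// extended) timestamp is absolute; for fmt=1/2 it is a delta to be added to
// the previous message timestamp.
#include <cstdint>
#include <cstdio>

const uint32_t RTMP_EXTENDED_TIMESTAMP = 0xFFFFFF;

uint32_t apply_chunk_timestamp(int fmt, uint32_t field24, uint32_t extended,
                               uint32_t prev_timestamp) {
    // When the 24-bit field is 0xFFFFFF, the real value is carried in the
    // 32-bit extended timestamp that follows the chunk header.
    uint32_t value = (field24 >= RTMP_EXTENDED_TIMESTAMP) ? extended : field24;

    if (fmt == 0) return value;    // Type 0: absolute timestamp.
    return prev_timestamp + value; // Type 1/2: value is a delta.
}

int main() {
    // DTS jumps past 16777215: a fmt=1 chunk carries the delta in the extended field.
    uint32_t dts = apply_chunk_timestamp(1, RTMP_EXTENDED_TIMESTAMP, 20000000u, 1000u);
    std::printf("dts=%u\n", dts); // 20001000
    return 0;
}
```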

---------

`TRANS_BY_GPT4`

---------

Co-authored-by: 彭治湘 <zuolengchan@douyu.tv>
Co-authored-by: Haibo Chen(陈海博) <495810242@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-05-29 14:26:05 -04:00
Haibo Chen(陈海博)
33b0a0fe7d Fix error about TestRtcPublish_HttpFlvPlay. v7.0.36 (#4363)
In the scenario of converting WebRTC to RTMP, this conversion will not
proceed until a SenderReport is received; for reference, see
https://github.com/ossrs/srs/pull/2470.
Thus, if HTTP-FLV streaming is attempted before the SR is received, the
FLV Header will contain only audio, devoid of video content.
This error can be resolved by disabling `guess_has_av` in the
configuration file, since we can guarantee that both audio and video are
present in the test cases.

However, in the original regression tests, the
`TestRtcPublish_HttpFlvPlay` test case contains a bug:

5a404c089b/trunk/3rdparty/srs-bench/srs/rtc_test.go (L2421-L2424)

The test would pass when `hasAudio` is true and `hasVideo` is false,
which is actually incorrect. Therefore, it has been revised so that the
test now only passes if both values are true.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-05-29 14:07:56 -04:00
Haibo Chen(陈海博)
9c559dcb48 VSCode: Support GDB on Linux and LLDB on macOS. v7.0.35 (#4362)
Co-authored-by: winlin <winlinvip@gmail.com>
2025-05-29 08:44:44 -04:00
Haibo Chen(陈海博)
974826800f update pion/webrtc to v4. v7.0.34 (#4359)
To enable H.265 support for the WebRTC protocol, upgrade the pion/webrtc
library to version 4.

---------

Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-05-26 17:48:53 +08:00
Haibo Chen(陈海博)
0c88ddbcdf rtmp2rtc: Support RTMP-to-WebRTC conversion with HEVC. v7.0.33 (#4289)
```bash
C:\Program Files\Google\Chrome\Application>"C:\Program Files\Google\Chrome\Application\chrome.exe" --enable-features=WebRtcAllowH265Receive --force-fieldtrials=WebRTC-Video-H26xPacketBuffer/Enabled

open -a "Google Chrome" --args --enable-features=WebRtcAllowH265Receive --force-fieldtrials=WebRTC-Video-H26xPacketBuffer/Enabled
```

> Note: The latest Chrome browser (version 136) fully enables this by
default, so there's no need to launch it with any extra parameters.

```bash
./objs/srs -c conf/rtmp2rtc.conf
```

```bash
ffmpeg -stream_loop -1 -re -i input.mp4 -c:v libx265 -preset fast -b:v 2000k -maxrate 2000k -bufsize 4000k -bf 0 -c:a aac -b:a 128k -ar 44100 -ac 2 -f flv rtmp://localhost/live/livestream
```

```bash
http://localhost:1985/rtc/v1/whep/?app=live&stream=livestream
```

![image](https://github.com/user-attachments/assets/bdbf4c67-b7e2-4dc6-92a1-93e2c78e00fe)

sendrecv offer
```bash
--enable-features=WebRtcAllowH265Send,PlatformHEVCEncoderSupport,WebRtcAllowH265Receive --force-fieldtrials=WebRTC-Video-H26xPacketBuffer/Enabled
```

sendonly offer
```bash
--enable-features=WebRtcAllowH265Send,PlatformHEVCEncoderSupport
```

recvonly offer
```bash
--enable-features=WebRtcAllowH265Receive --force-fieldtrials=WebRTC-Video-H26xPacketBuffer/Enabled
```

* Browser Test for supporting H265

https://webrtc.github.io/samples/src/content/peerconnection/change-codecs/

![image](https://github.com/user-attachments/assets/174476df-a7aa-4951-9880-56328ec75065)

* How to test Safari: https://github.com/ossrs/srs/pull/3441
* Debug in Safari

![image](https://github.com/user-attachments/assets/6cf94fca-e3ed-46d2-a102-a472f1699b4e)

---------

Co-authored-by: chundonglinlin <chundonglinlin@163.com>
Co-authored-by: winlin <winlinvip@gmail.com>
Co-authored-by: john <hondaxiao@tencent.com>

---------

Co-authored-by: chundonglinlin <chundonglinlin@163.com>
Co-authored-by: john <hondaxiao@tencent.com>
2025-05-14 07:49:04 -04:00
winlin
d35d02f112 Release v6.0-a2, 6.0 alpha2, v6.0.165, 169712 lines. 2025-05-03 20:28:55 -04:00
Haibo Chen(陈海博)
e00937e387 Fix memory leaks from errors skipping resource release. v7.0.32 (#4308)
---------

Co-authored-by: winlin <winlinvip@gmail.com>
Co-authored-by: john <hondaxiao@tencent.com>

---------

Co-authored-by: john <hondaxiao@tencent.com>
2025-04-30 12:09:31 +08:00
winlin
3fbd609bc7 Update CHANGELOG for #4309. v7.0.31 2025-04-26 06:58:00 -04:00
Lukas
5f134798b6 Typo: "forked" process in log output. v7.0.30 (#4292)
---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>

Co-authored-by: Haibo Chen <495810242@qq.com>
2025-03-21 19:18:11 +08:00
chundonglinlin
e2461cd16d Build: update build version to v7. v7.0.29 (#4294)
Update the prompt document address to the latest version v7.

---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>

---------

Co-authored-by: Haibo Chen(陈海博) <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
2025-03-21 19:15:05 +08:00
Arjen10
464a0134f3 replace values with enums. v6.0.166 v7.0.28 (#4303)
---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>

---------

Co-authored-by: Haibo Chen(陈海博) <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
2025-03-21 19:11:19 +08:00
Haibo Chen(陈海博)
909c9efa40 free sample to prevent memory leak. v5.0.222 v6.0.164 v7.0.26 (#4305)
Fix the issue where `sample_` is overwritten in `SrsRtpRawPayload::copy()`, which caused a memory leak.

---------

Co-authored-by: chundonglinlin <chundonglinlin@163.com>
Co-authored-by: winlin <winlinvip@gmail.com>

---------

Co-authored-by: john <hondaxiao@tencent.com>
2025-03-21 10:38:08 +08:00
Haibo Chen(陈海博)
feb2abbd73 update geekyeggo/delete-artifact to 5.0.0. v5.0.221 v6.0.163 v7.0.25 (#4302)
> https://github.com/marketplace/actions/delete-artifact?version=v5.0.0#-compatibility

The current version of `actions/upload-artifact` is `v4`, and the
corresponding version for `delete-artifact` should be `v5`.



---------

`TRANS_BY_GPT4`

---------

Co-authored-by: chundonglinlin <chundonglinlin@163.com>
Co-authored-by: winlin <winlinvip@gmail.com>

---------

Co-authored-by: john <hondaxiao@tencent.com>
2025-03-18 08:25:48 +08:00
chundonglinlin
3d8ef92a23 Dvr: support h265 flv fragments. v6.0.162 v7.0.24 (#4296)
1. Issue
When segmenting H.265 encoded FLV files using a DVR, the system does not
create FLV segments at regular intervals as specified by the
`dvr_wait_keyframe` configuration.

2. Configure dvr.segment.conf
```config
# the config for srs to dvr in segment mode
# @see https://ossrs.net/lts/zh-cn/docs/v4/doc/dvr
# @see full.conf for detail config.

listen              1935;
max_connections     1000;
daemon              off;
srs_log_tank        console;
vhost __defaultVhost__ {
    dvr {
        enabled      on;
        dvr_path     ./objs/nginx/html/[app]/[stream].[timestamp].flv;
        dvr_plan     segment;
        dvr_duration    30;
        dvr_wait_keyframe       on;
    }
}
```

3. Stream Push Testing
### FFmpeg Stream Push
Domestic FFmpeg version (codecId=12)
```sh
hevc-12-ffmpeg -stream_loop -1 -re -i 264_aac.flv -c:v libx265 -preset fast -b:v 2000k -maxrate 2000k -bufsize 4000k -bf 0 -c:a aac -b:a 128k -ar 44100 -ac 2 -f flv rtmp://localhost/live/livestream
```
FFmpeg version 6.0 or higher (supports `enhanced RTMP`)
```sh
ffmpeg -stream_loop -1 -re -i 264_aac.flv -c:v libx265 -preset fast -b:v 2000k -maxrate 2000k -bufsize 4000k -bf 0 -c:a aac -b:a 128k -ar 44100 -ac 2 -f flv rtmp://localhost/live/livestream
```

OBS streaming (version 30.0 or above supports `enhanced RTMP`)

![image](https://github.com/user-attachments/assets/fd2806c3-b0e3-44c4-a2d5-e04e6e5386ff)

![image](https://github.com/user-attachments/assets/15ef9c45-e15a-426e-b70c-d4bdd5dc8055)

4. Playback Testing
SRS player (supports both `enhanced RTMP` and `codec=12 FLV`)
```
http://127.0.0.1:8080/players/srs_player.html
```
Domestic ffplay (supports `codec=12 FLV`)
```
hevc-12-ffplay http://127.0.0.1:8080/live/livestream.1740311867638.flv
```
ffplay (versions above ffmpeg 6.0 support `enhanced RTMP`)
```
ffplay http://127.0.0.1:8080/live/livestream.1740311867638.flv
```

![image](https://github.com/user-attachments/assets/711a4182-418c-4134-934f-cba41a08e06f)



---------

`TRANS_BY_GPT4`

---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>

---------

Co-authored-by: john <hondaxiao@tencent.com>
2025-03-18 07:34:04 +08:00
johzzy
93cba246bc fix typo about heartbeat. v5.0.220 v6.0.161 v7.0.23 (#4253)
---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>

---------

Co-authored-by: john <hondaxiao@tencent.com>
2025-02-20 13:47:52 +08:00
Haibo Chen
6a612784d0 fix ci error. v5.0.219 v6.0.160 v7.0.22 (#4291)
Starting January 30th, 2025, GitHub Actions customers will no longer be
able to use v3 of
[actions/upload-artifact](https://github.com/actions/upload-artifact) or
[actions/download-artifact](https://github.com/actions/download-artifact).
Customers should update workflows to begin using [v4 of the artifact
actions](https://github.blog/2024-02-12-get-started-with-v4-of-github-actions-artifacts/).
Learn more:
https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/

---------

Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>

---------

Co-authored-by: john <hondaxiao@tencent.com>
2025-02-20 12:26:24 +08:00
ChenGH
13597d1b7f update copyright to 2025. v5.0.218 v6.0.159 v7.0.21 (#4271)
update copyright to 2025

---------

Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2025-01-14 17:35:18 +08:00
Jacob Su
7951bf3bd6 Test: Fix utest fail for StUtimeInMicroseconds. v7.0.20 (#4218)
```
../../../src/utest/srs_utest_st.cpp:27: Failure
Expected: (st_time_2 - st_time_1) <= (100), actual: 119 vs 100
[  FAILED  ] StTest.StUtimeInMicroseconds (0 ms)
```

Maybe GitHub's VM running the action jobs is slower. I notice this error
happens frequently, so let the unit test pass by increasing the threshold.

---------

Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2024-10-31 18:25:57 +08:00
Haibo Chen
58e775ce8d HLS: Fix error when stream has extension. #4215 v5.0.217 v6.0.158 v7.0.19 (#4216)
---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
Co-authored-by: winlin <winlinvip@gmail.com>
2024-10-31 17:50:56 +08:00
Jacob Su
101382afd0 RTC2RTMP: Fix screen sharing stutter caused by packet loss. v5.0.216 v6.0.157 v7.0.18 (#4160)
## How to reproduce?

1. Refer to this commit, which contains the web demo that captures the screen
as a video stream through RTC.
2. Copy the `trunk/research/players/whip.html` and
`trunk/research/players/js/srs.sdk.js` to replace the `develop` branch
source code.
3. `./configure && make`
4. `./objs/srs -c conf/rtc2rtmp.conf`
5. open `http://localhost:8080/players/whip.html?schema=http`
6. check `Screen` radio option.
7. click `publish`, then check the screen to share.
8. play the rtmp live stream: `rtmp://localhost/live/livestream`
9. check the video stuttering.

## Cause
When capturing the screen with the Chrome web browser, it frequently sends
RTP packets with empty payloads; in this case, all cached RTP packets are
dropped before the next keyframe arrives.

The OBS screen stream and camera stream do not have this problem.
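
One plausible mitigation consistent with the cause above; this is an assumption for illustration, not necessarily the actual fix in this PR: skip RTP packets with empty payloads before they reach the frame cache, so they cannot flush the buffered packets.

```cpp
// Hypothetical mitigation sketch (an assumption, not necessarily the actual
// fix): drop RTP packets with an empty payload before they reach the frame
// cache, so they cannot cause the cached packets to be discarded.
#include <cstddef>
#include <cstdio>

struct RtpPacketView {
    const unsigned char* payload;
    size_t payload_size;
};

bool should_feed_frame_cache(const RtpPacketView& pkt) {
    // Chrome screen capture can emit packets with empty payloads; feeding them
    // into the cache would discard buffered packets before the next keyframe.
    return pkt.payload != nullptr && pkt.payload_size > 0;
}

int main() {
    RtpPacketView empty = {nullptr, 0};
    std::printf("feed=%d\n", (int)should_feed_frame_cache(empty)); // 0: skipped.
    return 0;
}
```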

## Add screen stream to WHIP demo

><img width="581" alt="Screenshot 2024-08-28 at 2 49 46 PM"
src="https://github.com/user-attachments/assets/9557dbd2-c799-4dfd-b336-5bbf2e4f8fb8">

---------

Co-authored-by: winlin <winlinvip@gmail.com>
2024-10-15 19:00:07 +08:00
Jacob Su
e7d78462fe ST: Use clock_gettime to prevent time jumping backwards. v7.0.17 (#3979)
try to fix #3978 

**Background**
check #3978 

**Research**

I referred to the Android platform's solution, because I have an Android
background, and Android handles messages with an internal loop.

ff007a03c0/core/java/android/os/Handler.java (L701-L706C6)

```
    public final boolean sendMessageDelayed(@NonNull Message msg, long delayMillis) {
        if (delayMillis < 0) {
            delayMillis = 0;
        }
        return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
    }
```


59d9dc1f50/libutils/SystemClock.cpp (L37-L51)

```
/*
 * native public static long uptimeMillis();
 */
int64_t uptimeMillis()
{
    return nanoseconds_to_milliseconds(uptimeNanos());
}


/*
 * public static native long uptimeNanos();
 */
int64_t uptimeNanos()
{
    return systemTime(SYSTEM_TIME_MONOTONIC);
}
```


59d9dc1f50/libutils/Timers.cpp (L32-L55)
```
#if defined(__linux__)
nsecs_t systemTime(int clock) {
    checkClockId(clock);
    static constexpr clockid_t clocks[] = {CLOCK_REALTIME, CLOCK_MONOTONIC,
                                           CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID,
                                           CLOCK_BOOTTIME};
    static_assert(clock_id_max == arraysize(clocks));
    timespec t = {};
    clock_gettime(clocks[clock], &t);
    return nsecs_t(t.tv_sec)*1000000000LL + t.tv_nsec;
}
#else
nsecs_t systemTime(int clock) {
    // TODO: is this ever called with anything but REALTIME on mac/windows?
    checkClockId(clock);


    // Clock support varies widely across hosts. Mac OS doesn't support
    // CLOCK_BOOTTIME (and doesn't even have clock_gettime until 10.12).
    // Windows is windows.
    timeval t = {};
    gettimeofday(&t, nullptr);
    return nsecs_t(t.tv_sec)*1000000000LL + nsecs_t(t.tv_usec)*1000LL;
}
#endif
```
On Linux we can use the `clock_gettime` API, but on macOS it first appeared
in OS X 10.12.

`man clock_gettime`

The requirement is to find an alternative way to get the timestamp in
microseconds, but `clock_gettime` returns nanoseconds, so the conversion is
nanoseconds / 1000 = microseconds. I then checked the performance of this
API plus the division.

I used the following code to check the `clock_gettime` performance.

```
#include <sys/time.h>
#include <time.h>
#include <stdio.h>
#include <unistd.h>

int main() {
	struct timeval tv;
	struct timespec ts;
	clock_t start;
	clock_t end;
	long t;

	while (1) {
		start = clock();
		gettimeofday(&tv, NULL);
		end = clock();
		printf("gettimeofday clock is %lu\n", end - start);
		printf("gettimeofday is %lld\n", (tv.tv_sec * 1000000LL + tv.tv_usec));

		start = clock();
		clock_gettime(CLOCK_MONOTONIC, &ts);
		t = ts.tv_sec * 1000000L + ts.tv_nsec / 1000L;
		end = clock();
		printf("clock_monotonic clock is %lu\n", end - start);
		printf("clock_monotonic: seconds is %ld, nanoseconds is %ld, sum is %ld\n", ts.tv_sec, ts.tv_nsec, t);

		start = clock();
		clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
		t = ts.tv_sec * 1000000L + ts.tv_nsec / 1000L;
		end = clock();
		printf("clock_monotonic_raw clock is %lu\n", end - start);
		printf("clock_monotonic_raw: nanoseconds is %ld, sum is %ld\n", ts.tv_nsec, t);

		sleep(3);
	}
	
	return 0;
}

```

Here is output:

env: Mac OS M2 chip.

```
gettimeofday clock is 11
gettimeofday is 1709775727153949
clock_monotonic clock is 2
clock_monotonic: seconds is 1525204, nanoseconds is 409453000, sum is 1525204409453
clock_monotonic_raw clock is 2
clock_monotonic_raw: nanoseconds is 770493000, sum is 1525222770493
```
We can see that `clock_gettime` is faster than `gettimeofday`, so there is
no performance risk.

**macOS solution**

The `clock_gettime` API is only available since macOS 10.12; for macOS
versions older than 10.12, just keep `gettimeofday`. Check the macOS version
in `auto/options.sh`, then define the macro `MD_OSX_HAS_NO_CLOCK_GETTIME` in
`auto/depends.sh`.

**CYGWIN**

According to a Google search, it seems `clock_gettime(CLOCK_MONOTONIC)` was
not well supported at least 10 years ago, but I don't own a Windows machine,
so I can't verify it. So keep the Windows solution.
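
A sketch of the resulting time helper under the assumptions above; it is illustrative rather than the exact SRS code, but it uses the macro `MD_OSX_HAS_NO_CLOCK_GETTIME` mentioned earlier to choose between the two APIs.

```cpp
// Illustrative time helper (not the exact SRS code): prefer the monotonic
// clock_gettime, and fall back to gettimeofday when the build defines
// MD_OSX_HAS_NO_CLOCK_GETTIME (macOS older than 10.12).
#include <sys/time.h>
#include <time.h>
#include <stdint.h>
#include <stdio.h>

int64_t now_in_microseconds() {
#ifdef MD_OSX_HAS_NO_CLOCK_GETTIME
    struct timeval tv;
    gettimeofday(&tv, NULL);             // Wall clock: can jump backwards.
    return (int64_t)tv.tv_sec * 1000000LL + (int64_t)tv.tv_usec;
#else
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts); // Monotonic: never jumps backwards.
    return (int64_t)ts.tv_sec * 1000000LL + (int64_t)(ts.tv_nsec / 1000);
#endif
}

int main() {
    printf("now=%lld us\n", (long long)now_in_microseconds());
    return 0;
}
```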

---------

Co-authored-by: winlin <winlinvip@gmail.com>
2024-10-15 17:52:17 +08:00
Winlin
2e4014ae1c Proxy: Support proxy server for SRS. v7.0.16 (#4158)
Please note that the proxy server is a new architecture or the next
version of the Origin Cluster, which allows the publication of multiple
streams. The SRS origin cluster consists of a group of origin servers
designed to handle a large number of streams.

```text
                         +-----------------------+
                     +---+ SRS Proxy(Deployment) +------+---------------------+
+-----------------+  |   +-----------+-----------+      +                     +
| LB(K8s Service) +--+               +(Redis/MESH)      + SRS Origin Servers  +
+-----------------+  |   +-----------+-----------+      +    (Deployment)     +
                     +---+ SRS Proxy(Deployment) +------+---------------------+
                         +-----------------------+
```

The new origin cluster is designed as a collection of proxy servers. For
more information, see [Discussion
#3634](https://github.com/ossrs/srs/discussions/3634). If you prefer to
use the old origin cluster, please switch to a version before SRS 6.0.

A proxy server can be used for a set of origin servers, which are
isolated and dedicated origin servers. The main improvement in the new
architecture is to store the state for origin servers in the proxy
server, rather than using MESH to communicate between origin servers.
With a proxy server, you can deploy origin servers as stateless servers,
such as in a Kubernetes (K8s) deployment.

Now that the proxy server is a stateful server, it uses Redis to store
the states. For faster development, we use Go to develop the proxy
server, instead of C/C++. Therefore, the proxy server itself is also
stateless, with all states stored in the Redis server or cluster. This
makes the new origin cluster architecture very powerful and robust.

The proxy server is also an architecture designed to solve multiple
process bottlenecks. You can run hundreds of SRS origin servers with one
proxy server on the same machine. This solution can utilize multi-core
machines, such as servers with 128 CPUs. Thus, we can keep SRS
single-threaded and very simple. See
https://github.com/ossrs/srs/discussions/3665#discussioncomment-6474441
for details.

```text
                                       +--------------------+
                               +-------+ SRS Origin Server  +
                               +       +--------------------+
                               +
+-----------------------+      +       +--------------------+
+ SRS Proxy(Deployment) +------+-------+ SRS Origin Server  +
+-----------------------+      +       +--------------------+
                               +
                               +       +--------------------+
                               +-------+ SRS Origin Server  +
                                       +--------------------+
```

Keep in mind that the proxy server for the Origin Cluster is designed to
handle many streams. To address the issue of many viewers, we will
enhance the Edge Cluster to support more protocols.

```text
+------------------+                                               +--------------------+
+ SRS Edge Server  +--+                                    +-------+ SRS Origin Server  +
+------------------+  +                                    +       +--------------------+
                      +                                    +
+------------------+  +     +-----------------------+      +       +--------------------+
+ SRS Edge Server  +--+-----+ SRS Proxy(Deployment) +------+-------+ SRS Origin Server  +
+------------------+  +     +-----------------------+      +       +--------------------+
                      +                                    +
+------------------+  +                                    +       +--------------------+
+ SRS Edge Server  +--+                                    +-------+ SRS Origin Server  +
+------------------+                                               +--------------------+
```

With the new Origin Cluster and Edge Cluster, you have a media system
capable of supporting a large number of streams and viewers. For
example, you can publish 10,000 streams, each with 100,000 viewers.

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-09-09 12:06:02 +08:00
Winlin
b475d552aa Heartbeat: Report ports for proxy server. v5.0.215 v6.0.156 v7.0.15 (#4171)
The heartbeat of SRS is a timer that requests an HTTP URL. We can use
this heartbeat to report the necessary information for registering the
backend server with the proxy server.

```text
SRS(backend) --heartbeat---> Proxy server
```

A proxy server is a specialized load balancer for media servers. It
operates at the application level rather than the TCP level. For more
information about the proxy server, see issue #4158.

Note that we will merge this PR into SRS 5.0+, allowing the use of SRS
5.0+ as the backend server, not limited to SRS 7.0. However, the proxy
server is introduced in SRS 7.0.

It's also possible to implement a registration service, allowing you to
use other media servers as backend servers. For example, if you gather
information about an nginx-rtmp server and register it with the proxy
server, the proxy will forward RTMP streams to nginx-rtmp. The backend
server is not limited to SRS.

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-09-09 10:37:41 +08:00
Winlin
15fbe45a9a FLV: Refine source and http handler. v6.0.155 v7.0.14 (#4165)
1. Do not create a source when mounting FLV, because the FLV may not be
unmounted when the source is freed. If you access the FLV stream without any
publisher, then wait for source cleanup and view the FLV stream again, there
is an annoying warning message.
```bash
# View HTTP FLV stream by curl, wait for stream to be ready.
# curl http://localhost:8080/live/livestream.flv -v >/dev/null
HTTP #0 127.0.0.1:58026 GET http://localhost:8080/live/livestream.flv, content-length=-1
new live source, stream_url=/live/livestream
http: mount flv stream for sid=/live/livestream, mount=/live/livestream.flv

# Cancel the curl and trigger source cleanup without http unmount.
client disconnect peer. ret=1007
Live: cleanup die source, id=[], total=1

# View the stream again, it fails.
# curl http://localhost:8080/live/livestream.flv -v >/dev/null
HTTP #0 127.0.0.1:58040 GET http://localhost:8080/live/livestream.flv, content-length=-1
serve error code=1097(NoSource)(No source found) : process request=0 : cors serve : serve http : no source for /live/livestream
serve_http() [srs_app_http_stream.cpp:641]
```

> Note: There is an inconsistency. The first time, you can access the
FLV stream and wait for the publisher, but the next time, you cannot.

2. Create a source when starting to serve the FLV client. We do not need
to create the source when creating the HTTP handler. Instead, we should
try to create the source in the cache or stream. Because the source
cleanup does not unmount the HTTP handler, the handler remains after the
source is destroyed. The next time you access the FLV stream, the source
is not found.

```cpp
srs_error_t SrsHttpStreamServer::hijack(ISrsHttpMessage* request, ISrsHttpHandler** ph) {
    SrsSharedPtr<SrsLiveSource> live_source;
    if ((err = _srs_sources->fetch_or_create(r.get(), server, live_source)) != srs_success) { }
    if ((err = http_mount(r.get())) != srs_success) { }

srs_error_t SrsBufferCache::cycle() {
    SrsSharedPtr<SrsLiveSource> live_source = _srs_sources->fetch(req);
    if (!live_source.get()) {
        return srs_error_new(ERROR_NO_SOURCE, "no source for %s", req->get_stream_url().c_str());
    }

srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsSharedPtr<SrsLiveSource> live_source = _srs_sources->fetch(req);
    if (!live_source.get()) {
        return srs_error_new(ERROR_NO_SOURCE, "no source for %s", req->get_stream_url().c_str());
    }
```

> Note: We should not create the source in hijack, instead, we create it
in cache or stream:

```cpp
srs_error_t SrsHttpStreamServer::hijack(ISrsHttpMessage* request, ISrsHttpHandler** ph) {
    if ((err = http_mount(r.get())) != srs_success) { }

srs_error_t SrsBufferCache::cycle() {
    SrsSharedPtr<SrsLiveSource> live_source;
    if ((err = _srs_sources->fetch_or_create(req, server_, live_source)) != srs_success) { }

srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsSharedPtr<SrsLiveSource> live_source;
    if ((err = _srs_sources->fetch_or_create(req, server_, live_source)) != srs_success) { }
```

> Note: This fixes the failure and annoying warning message, and
maintains consistency by always waiting for the stream to be ready if
there is no publisher.

3. Fail the http request if the HTTP handler is disposing, and also keep
the handler entry when disposing the stream, because we should dispose
the handler entry and stream at the same time.

```cpp
srs_error_t SrsHttpStreamServer::http_mount(SrsRequest* r) {
        entry = streamHandlers[sid];
        if (entry->disposing) {
            return srs_error_new(ERROR_STREAM_DISPOSING, "stream is disposing");
        }

void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    std::map<std::string, SrsLiveEntry*>::iterator it = streamHandlers.find(sid);
    SrsUniquePtr<SrsLiveEntry> entry(it->second);
    entry->disposing = true;
```

> Note: If the disposal process takes a long time, this will prevent
unexpected behavior or access to the resource that is being disposed of.

4. In edge mode, the edge ingester will unpublish the source when the
last consumer quits, which is actually triggered by the HTTP stream.
While it also waits for the stream to quit when the HTTP unmounts, there
is a self-destruction risk: the HTTP live stream object destroys itself.

```cpp
srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsUniquePtr<SrsLiveConsumer> consumer(consumer_raw); // Trigger destroy.

void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    for (;;) { if (!cache->alive() && !stream->alive()) { break; } // A circle reference.
    mux.unhandle(entry->mount, stream.get()); // Free the SrsLiveStream itself.
```

> Note: It also introduces a circular reference in the object
relationships; the stream references itself when unmounting:

```text
SrsLiveStream::serve_http 
    -> SrsLiveConsumer::~SrsLiveConsumer -> SrsEdgeIngester::stop 
    -> SrsLiveSource::on_unpublish -> SrsHttpStreamServer::http_unmount 
        -> SrsLiveStream::alive
```

> Note: We should use an asynchronous worker to perform the cleanup to
avoid the stream destroying itself and to prevent self-referencing.

```cpp
void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    entry->disposing = true;
    if ((err = async_->execute(new SrsHttpStreamDestroy(&mux, &streamHandlers, sid))) != srs_success) { }
```

> Note: This also ensures there are no circular references and no
self-destruction.

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-09-01 13:02:07 +08:00
Winlin
740f0d38ec Edge: Fix flv edge crash when http unmount. v6.0.154 v7.0.13 (#4166)
Edge FLV is not working because it is stuck in an infinite loop waiting.
Previously, there was no need to wait for exit since resources were not
being cleaned up. Now, since resources need to be cleaned up, it must
wait for all active connections to exit, which causes this issue.

To reproduce the issue, start SRS edge, run the command below, and press
`CTRL+C` to stop the request:

```bash
curl http://localhost:8080/live/livestream.flv -v >/dev/null
```

It will cause the edge to fetch the stream from origin, and to free the
consumer when the client quits. When `SrsLiveStream::do_serve_http` returns,
it will free the consumer:

```cpp
srs_error_t SrsLiveStream::do_serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsUniquePtr<SrsLiveConsumer> consumer(consumer_raw);
```

Keep in mind that at this moment the stream is alive, because it is only set
to not alive after this function returns:

```cpp
    alive_viewers_++;
    err = do_serve_http(w, r); // Free 'this' alive stream.
    alive_viewers_--; // Crash here, because 'this' is freed.
```

When freeing the consumer, it will cause the source to unpublish and
attempt to free the HTTP handler, which ultimately waits for the stream
not to be alive:

```cpp
SrsLiveConsumer::~SrsLiveConsumer() {
    source_->on_consumer_destroy(this);

void SrsLiveSource::on_consumer_destroy(SrsLiveConsumer* consumer) {
    if (consumers.empty()) {
        play_edge->on_all_client_stop();

void SrsLiveSource::on_unpublish() {
    handler->on_unpublish(req);

void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    if (stream->entry) stream->entry->enabled = false;

    for (; i < 1024; i++) {
        if (!cache->alive() && !stream->alive()) {
            break;
        }
        srs_usleep(100 * SRS_UTIME_MILLISECONDS);
    }
```

After 120 seconds, it will free the stream and cause SRS to crash
because the stream is still active. In order to track this potential
issue, also add an important warning log:

```cpp
srs_warn("http: try to free a alive stream, cache=%d, stream=%d", cache->alive(), stream->alive());
```

SRS may crash if this log appears.

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-09-01 06:44:35 +08:00
Winlin
a7aa2eaf76 Fix #3767: RTMP: Do not response empty data packet. v6.0.153 v7.0.12 (#4162)
If SRS responds with this empty data packet, FFmpeg will receive an
empty stream, like `Stream #0:0: Data: none` in following logs:

```bash
ffmpeg -i rtmp://localhost:11935/live/livestream
#  Stream #0:0: Data: none
#  Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 30 kb/s
#  Stream #0:2: Video: h264 (High), yuv420p(progressive), 768x320 [SAR 1:1 DAR 12:5], 212 kb/s, 25 fps, 25 tbr, 1k tbn
```

This won't cause the player to fail, but it will inconvenience the user
significantly. It may also make FFmpeg slower to analyze the stream; see
#3767.

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-09-01 06:40:16 +08:00
Winlin
05c3a422a5 HTTP-FLV: Notify connection to expire when unpublishing. v6.0.152 v7.0.11 (#4164)
When stopping the stream, it will wait for the HTTP Streaming to exit.
If the HTTP Streaming coroutine hangs, it will not exit automatically.

```cpp
void SrsHttpStreamServer::http_unmount(SrsRequest* r)
{
    SrsUniquePtr<SrsLiveStream> stream(entry->stream);
    if (stream->entry) stream->entry->enabled = false;
    srs_usleep(...); // Wait for about 120s.
    mux.unhandle(entry->mount, stream.get()); // Free stream.
}

srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r)
{
    err = do_serve_http(w, r); // If stuck in here for 120s+
    alive_viewers_--; // Crash at here, because stream has been deleted.
```

We should notify the HTTP stream connection to interrupt (expire):

```cpp
void SrsHttpStreamServer::http_unmount(SrsRequest* r)
{
    SrsUniquePtr<SrsLiveStream> stream(entry->stream);
    if (stream->entry) stream->entry->enabled = false;
    stream->expire(); // Notify http stream to interrupt.
```

Note that we should notify all viewers pulling stream from this http
stream.

Note that we previously tried to fix this issue by only waiting for all
viewers to quit, without interrupting them; see
https://github.com/ossrs/srs/pull/4144


---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-08-31 23:15:51 +08:00
Winlin
f8319d6b6d Fix crash when quiting. v6.0.151 v7.0.10 (#4157)
1. Remove srs_global_dispose, which caused a crash when quitting while still
publishing.
2. Always call _srs_thread_pool->initialize for the single-thread case.
3. Support `--signal-api` to send signals via the HTTP API, because CLion
eliminates the signals.

---

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-08-24 22:40:39 +08:00
Jacob Su
cc6db250fb Build: Fix srs_mp4_parser compiling error. v6.0.150 v7.0.9 (#4156)
`SrsAutoFree` moved to `srs_core_deprecated.hpp`.

---------

Co-authored-by: winlin <winlinvip@gmail.com>
2024-08-24 21:48:10 +08:00
Winlin
d4248503e7 ASAN: Disable memory leak detection by default. v7.0.8 (#4154)
By setting the env `ASAN_OPTIONS=halt_on_error=0`, we can ignore memory
leaks, see
https://github.com/google/sanitizers/wiki/AddressSanitizerFlags

By setting env `ASAN_OPTIONS=detect_leaks=0`, we can disable memory
leaking detection in parent process when forking for daemon.
2024-08-22 18:43:45 +08:00
Winlin
ff6a608099 ST: Replace macros with explicit code for better understanding. v7.0.7 (#4149)
Improvements for ST(State Threads):

1. ST: Use g++ for CXX compiler.
2. ST: Remove macros for clist.
3. ST: Remove macros for global thread and vp.
4. ST: Remove macros for vp queue operations.
5. ST: Remove macros for context switch.
6. ST: Remove macros for setjmp/longjmp.
7. ST: Remove macro for stack pad.
8. ST: Refine macro for valgrind.

---------

Co-authored-by: Jacob Su <suzp1984@gmail.com>
2024-08-22 11:28:25 +08:00