1. Codec information cannot be retrieved on `Firefox` by
`getSenders`/`getReceivers`.
2. Codec information can be retrieved on `Chrome` by `getReceivers`, but
the result is incorrect.

3. So, we retrieve codec information from `getStats`, and it works well.
4. The timer is used because sometimes the codec cannot be retrieved
when `iceGatheringState` is `complete`.
5. Testing has been completed on the browsers listed below.
- [x] Chrome
- [x] Edge
- [x] Safari
- [x] Firefox
---------
Co-authored-by: winlin <winlinvip@gmail.com>
1. When the chunk message header employs type 1 and type 2, the extended
timestamp denotes the time delta.
2. When the DTS (Decoding Time Stamp) experiences a jump and exceeds
16777215, there can be errors in DTS calculation, and if the audio and
video delta differs, it may result in audio-video synchronization
issues.
---------
`TRANS_BY_GPT4`
---------
Co-authored-by: 彭治湘 <zuolengchan@douyu.tv>
Co-authored-by: Haibo Chen(陈海博) <495810242@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
In the scenario of converting WebRTC to RTMP, the conversion will not
proceed until a Sender Report (SR) is received; for reference, see
https://github.com/ossrs/srs/pull/2470.
Thus, if HTTP-FLV streaming is attempted before the SR is received, the
FLV Header will contain only audio, devoid of video content.
This error can be resolved by disabling `guess_has_av` in the
configuration file, since we can guarantee that both audio and video are
present in the test cases.
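As a hedged illustration (directive names taken from SRS's sample configs; the exact layout may differ across SRS versions), disabling the guess looks like this:

```
vhost __defaultVhost__ {
    http_remux {
        enabled         on;
        mount           [vhost]/[app]/[stream].flv;
        guess_has_av    off;
    }
}
```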
However, in the original regression tests, the
`TestRtcPublish_HttpFlvPlay` test case contains a bug:
5a404c089b/trunk/3rdparty/srs-bench/srs/rtc_test.go (L2421-L2424)
The test would pass when `hasAudio` is true and `hasVideo` is false,
which is actually incorrect. Therefore, it has been revised so that the
test now only passes if both values are true.
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
To enable H.265 support for the WebRTC protocol, upgrade the pion/webrtc
library to version 4.
---------
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>
SrsUniquePtr does not support arrays or objects created by malloc,
because it only uses delete to dispose of the resource. You can use a
custom function to free memory allocated by malloc or other allocators.
```cpp
char* p = (char*)malloc(1024);
SrsUniquePtr<char> ptr(p, your_free_chars);
```
This is used to replace the SrsAutoFreeH. For example:
```cpp
addrinfo* r = NULL;
SrsAutoFreeH(addrinfo, r, freeaddrinfo);
getaddrinfo("127.0.0.1", NULL, &hints, &r);
```
Now, this can be replaced by:
```cpp
addrinfo* r = NULL;
getaddrinfo("127.0.0.1", NULL, &hints, &r);
SrsUniquePtr<addrinfo> r2(r, freeaddrinfo);
```
Please be aware that there is a slight difference between SrsAutoFreeH
and SrsUniquePtr: SrsAutoFreeH tracks the address of the pointer, while
SrsUniquePtr does not.
```cpp
addrinfo* r = NULL;
SrsAutoFreeH(addrinfo, r, freeaddrinfo); // r will be freed even r is changed later.
SrsUniquePtr<addrinfo> ptr(r, freeaddrinfo); // crash because r is an invalid pointer.
```
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
```
../../../src/utest/srs_utest_st.cpp:27: Failure
Expected: (st_time_2 - st_time_1) <= (100), actual: 119 vs 100
[ FAILED ] StTest.StUtimeInMicroseconds (0 ms)
```
Perhaps GitHub's VMs, which run the Action jobs, are slower. I noticed
this error happens frequently, so increase the threshold to let the UT
pass.
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>
## How to reproduce?
1. Refer to this commit, which contains the web demo to capture the
screen as a video stream through RTC.
2. Copy `trunk/research/players/whip.html` and
`trunk/research/players/js/srs.sdk.js` to replace the `develop` branch
source code.
3. `./configure && make`
4. `./objs/srs -c conf/rtc2rtmp.conf`
5. Open `http://localhost:8080/players/whip.html?schema=http`
6. Check the `Screen` radio option.
7. Click `publish`, then choose the screen to share.
8. Play the RTMP live stream: `rtmp://localhost/live/livestream`
9. Check for video stuttering.
## Cause
When capturing the screen with the Chrome web browser, Chrome frequently
sends RTP packets with empty payloads; in this case, all cached RTP
packets are dropped before the next keyframe arrives.
The OBS screen stream and camera stream do not have this problem.
## Add screen stream to WHIP demo
><img width="581" alt="Screenshot 2024-08-28 at 2 49 46 PM"
src="https://github.com/user-attachments/assets/9557dbd2-c799-4dfd-b336-5bbf2e4f8fb8">
---------
Co-authored-by: winlin <winlinvip@gmail.com>
Try to fix #3978.
**Background**
See #3978.
**Research**
I referred to the Android platform's solution, because I have an Android
background, and Android handles its messages in an internal loop.
ff007a03c0/core/java/android/os/Handler.java (L701-L706C6)
```
public final boolean sendMessageDelayed(@NonNull Message msg, long delayMillis) {
    if (delayMillis < 0) {
        delayMillis = 0;
    }
    return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
}
```
59d9dc1f50/libutils/SystemClock.cpp (L37-L51)
```
/*
 * native public static long uptimeMillis();
 */
int64_t uptimeMillis()
{
    return nanoseconds_to_milliseconds(uptimeNanos());
}

/*
 * public static native long uptimeNanos();
 */
int64_t uptimeNanos()
{
    return systemTime(SYSTEM_TIME_MONOTONIC);
}
```
59d9dc1f50/libutils/Timers.cpp (L32-L55)
```
#if defined(__linux__)
nsecs_t systemTime(int clock) {
    checkClockId(clock);
    static constexpr clockid_t clocks[] = {CLOCK_REALTIME, CLOCK_MONOTONIC,
                                           CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID,
                                           CLOCK_BOOTTIME};
    static_assert(clock_id_max == arraysize(clocks));
    timespec t = {};
    clock_gettime(clocks[clock], &t);
    return nsecs_t(t.tv_sec)*1000000000LL + t.tv_nsec;
}
#else
nsecs_t systemTime(int clock) {
    // TODO: is this ever called with anything but REALTIME on mac/windows?
    checkClockId(clock);
    // Clock support varies widely across hosts. Mac OS doesn't support
    // CLOCK_BOOTTIME (and doesn't even have clock_gettime until 10.12).
    // Windows is windows.
    timeval t = {};
    gettimeofday(&t, nullptr);
    return nsecs_t(t.tv_sec)*1000000000LL + nsecs_t(t.tv_usec)*1000LL;
}
#endif
```
For Linux, we can use the `clock_gettime` API, but on macOS it first
appeared in OS X 10.12 (see `man clock_gettime`).
The requirement is to find an alternative way to get the timestamp in
microseconds, but `clock_gettime` returns nanoseconds; the conversion is
microseconds = nanoseconds / 1000. So I checked the performance of this
API plus the division.
I used the following code to check the `clock_gettime` performance.
```
#include <sys/time.h>
#include <time.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    struct timeval tv;
    struct timespec ts;
    clock_t start;
    clock_t end;
    long t;
    while (1) {
        start = clock();
        gettimeofday(&tv, NULL);
        end = clock();
        printf("gettimeofday clock is %lu\n", end - start);
        printf("gettimeofday is %lld\n", (tv.tv_sec * 1000000LL + tv.tv_usec));

        start = clock();
        clock_gettime(CLOCK_MONOTONIC, &ts);
        t = ts.tv_sec * 1000000L + ts.tv_nsec / 1000L;
        end = clock();
        printf("clock_monotonic clock is %lu\n", end - start);
        printf("clock_monotonic: seconds is %ld, nanoseconds is %ld, sum is %ld\n", ts.tv_sec, ts.tv_nsec, t);

        start = clock();
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
        t = ts.tv_sec * 1000000L + ts.tv_nsec / 1000L;
        end = clock();
        printf("clock_monotonic_raw clock is %lu\n", end - start);
        printf("clock_monotonic_raw: nanoseconds is %ld, sum is %ld\n", ts.tv_nsec, t);
        sleep(3);
    }
    return 0;
}
```
Here is the output (environment: macOS, M2 chip):
```
gettimeofday clock is 11
gettimeofday is 1709775727153949
clock_monotonic clock is 2
clock_monotonic: seconds is 1525204, nanoseconds is 409453000, sum is 1525204409453
clock_monotonic_raw clock is 2
clock_monotonic_raw: nanoseconds is 770493000, sum is 1525222770493
```
We can see that `clock_gettime` is faster than `gettimeofday`, so there
is no performance risk.
**MacOS solution**
The `clock_gettime` API is only available since macOS 10.12; for macOS
older than 10.12, just keep `gettimeofday`.
Check the macOS version in `auto/options.sh`, then define the macro
`MD_OSX_HAS_NO_CLOCK_GETTIME` in `auto/depends.sh`.
**CYGWIN**
According to a Google search, `clock_gettime(CLOCK_MONOTONIC)` was not
well supported at least 10 years ago, but I don't own a Windows machine,
so I can't verify it. So keep Windows' existing solution.
---------
Co-authored-by: winlin <winlinvip@gmail.com>
Please note that the proxy server is a new architecture or the next
version of the Origin Cluster, which allows the publication of multiple
streams. The SRS origin cluster consists of a group of origin servers
designed to handle a large number of streams.
```text
+-----------------------+
+---+ SRS Proxy(Deployment) +------+---------------------+
+-----------------+ | +-----------+-----------+ + +
| LB(K8s Service) +--+ +(Redis/MESH) + SRS Origin Servers +
+-----------------+ | +-----------+-----------+ + (Deployment) +
+---+ SRS Proxy(Deployment) +------+---------------------+
+-----------------------+
```
The new origin cluster is designed as a collection of proxy servers. For
more information, see [Discussion
#3634](https://github.com/ossrs/srs/discussions/3634). If you prefer to
use the old origin cluster, please switch to a version before SRS 6.0.
A proxy server can be used for a set of origin servers, which are
isolated and dedicated origin servers. The main improvement in the new
architecture is to store the state for origin servers in the proxy
server, rather than using MESH to communicate between origin servers.
With a proxy server, you can deploy origin servers as stateless servers,
such as in a Kubernetes (K8s) deployment.
Because the proxy server needs to keep state, it uses Redis to store the
states. For faster development, we use Go to develop the proxy server
instead of C/C++. The proxy server process itself is therefore
stateless, with all states stored in the Redis server or cluster. This
makes the new origin cluster architecture very powerful and robust.
The proxy server is also an architecture designed to solve multiple
process bottlenecks. You can run hundreds of SRS origin servers with one
proxy server on the same machine. This solution can utilize multi-core
machines, such as servers with 128 CPUs. Thus, we can keep SRS
single-threaded and very simple. See
https://github.com/ossrs/srs/discussions/3665#discussioncomment-6474441
for details.
```text
+--------------------+
+-------+ SRS Origin Server +
+ +--------------------+
+
+-----------------------+ + +--------------------+
+ SRS Proxy(Deployment) +------+-------+ SRS Origin Server +
+-----------------------+ + +--------------------+
+
+ +--------------------+
+-------+ SRS Origin Server +
+--------------------+
```
Keep in mind that the proxy server for the Origin Cluster is designed to
handle many streams. To address the issue of many viewers, we will
enhance the Edge Cluster to support more protocols.
```text
+------------------+ +--------------------+
+ SRS Edge Server +--+ +-------+ SRS Origin Server +
+------------------+ + + +--------------------+
+ +
+------------------+ + +-----------------------+ + +--------------------+
+ SRS Edge Server +--+-----+ SRS Proxy(Deployment) +------+-------+ SRS Origin Server +
+------------------+ + +-----------------------+ + +--------------------+
+ +
+------------------+ + + +--------------------+
+ SRS Edge Server +--+ +-------+ SRS Origin Server +
+------------------+ +--------------------+
```
With the new Origin Cluster and Edge Cluster, you have a media system
capable of supporting a large number of streams and viewers. For
example, you can publish 10,000 streams, each with 100,000 viewers.
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
The heartbeat of SRS is a timer that requests an HTTP URL. We can use
this heartbeat to report the necessary information for registering the
backend server with the proxy server.
```text
SRS(backend) --heartbeat---> Proxy server
```
A proxy server is a specialized load balancer for media servers. It
operates at the application level rather than the TCP level. For more
information about the proxy server, see issue #4158.
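For context, the existing SRS heartbeat is configured roughly as follows (the directive names come from SRS's sample configs, but the URL and values here are purely illustrative; check `full.conf` for your SRS version). The registration information for the proxy rides on this mechanism:

```
heartbeat {
    enabled     on;
    interval    9.3;
    url         http://127.0.0.1:9000/api/v1/servers;
    device_id   "my-srs-device";
    summaries   off;
}
```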
Note that we will merge this PR into SRS 5.0+, allowing the use of SRS
5.0+ as the backend server, not limited to SRS 7.0. However, the proxy
server is introduced in SRS 7.0.
It's also possible to implement a registration service, allowing you to
use other media servers as backend servers. For example, if you gather
information about an nginx-rtmp server and register it with the proxy
server, the proxy will forward RTMP streams to nginx-rtmp. The backend
server is not limited to SRS.
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
1. Do not create a source when mounting FLV, because the FLV may not be
unmounted when the source is freed. If you access the FLV stream without
any publisher, wait for the source cleanup, and then request the FLV
stream again, you get an annoying warning message.
```bash
# View HTTP FLV stream by curl, wait for stream to be ready.
# curl http://localhost:8080/live/livestream.flv -v >/dev/null
HTTP #0 127.0.0.1:58026 GET http://localhost:8080/live/livestream.flv, content-length=-1
new live source, stream_url=/live/livestream
http: mount flv stream for sid=/live/livestream, mount=/live/livestream.flv
# Cancel the curl and trigger source cleanup without http unmount.
client disconnect peer. ret=1007
Live: cleanup die source, id=[], total=1
# View the stream again, it fails.
# curl http://localhost:8080/live/livestream.flv -v >/dev/null
HTTP #0 127.0.0.1:58040 GET http://localhost:8080/live/livestream.flv, content-length=-1
serve error code=1097(NoSource)(No source found) : process request=0 : cors serve : serve http : no source for /live/livestream
serve_http() [srs_app_http_stream.cpp:641]
```
> Note: There is an inconsistency. The first time, you can access the
FLV stream and wait for the publisher, but the next time, you cannot.
2. Create the source when starting to serve the FLV client. We do not
need to create the source when creating the HTTP handler; instead, we
should create the source in the cache or stream. Because the source
cleanup does not unmount the HTTP handler, the handler remains after the
source is destroyed, so the next time you access the FLV stream, the
source is not found.
```cpp
srs_error_t SrsHttpStreamServer::hijack(ISrsHttpMessage* request, ISrsHttpHandler** ph) {
    SrsSharedPtr<SrsLiveSource> live_source;
    if ((err = _srs_sources->fetch_or_create(r.get(), server, live_source)) != srs_success) { }
    if ((err = http_mount(r.get())) != srs_success) { }

srs_error_t SrsBufferCache::cycle() {
    SrsSharedPtr<SrsLiveSource> live_source = _srs_sources->fetch(req);
    if (!live_source.get()) {
        return srs_error_new(ERROR_NO_SOURCE, "no source for %s", req->get_stream_url().c_str());
    }

srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsSharedPtr<SrsLiveSource> live_source = _srs_sources->fetch(req);
    if (!live_source.get()) {
        return srs_error_new(ERROR_NO_SOURCE, "no source for %s", req->get_stream_url().c_str());
    }
```
> Note: We should not create the source in hijack, instead, we create it
in cache or stream:
```cpp
srs_error_t SrsHttpStreamServer::hijack(ISrsHttpMessage* request, ISrsHttpHandler** ph) {
    if ((err = http_mount(r.get())) != srs_success) { }

srs_error_t SrsBufferCache::cycle() {
    SrsSharedPtr<SrsLiveSource> live_source;
    if ((err = _srs_sources->fetch_or_create(req, server_, live_source)) != srs_success) { }

srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsSharedPtr<SrsLiveSource> live_source;
    if ((err = _srs_sources->fetch_or_create(req, server_, live_source)) != srs_success) { }
```
> Note: This fixes the failure and annoying warning message, and
maintains consistency by always waiting for the stream to be ready if
there is no publisher.
3. Fail the HTTP request if the HTTP handler is disposing, and also keep
the handler entry while disposing the stream, because we should dispose
of the handler entry and the stream at the same time.
```cpp
srs_error_t SrsHttpStreamServer::http_mount(SrsRequest* r) {
    entry = streamHandlers[sid];
    if (entry->disposing) {
        return srs_error_new(ERROR_STREAM_DISPOSING, "stream is disposing");
    }

void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    std::map<std::string, SrsLiveEntry*>::iterator it = streamHandlers.find(sid);
    SrsUniquePtr<SrsLiveEntry> entry(it->second);
    entry->disposing = true;
```
> Note: If the disposal process takes a long time, this will prevent
unexpected behavior or access to the resource that is being disposed of.
4. In edge mode, the edge ingester will unpublish the source when the
last consumer quits, which is actually triggered by the HTTP stream.
While it also waits for the stream to quit when the HTTP unmounts, there
is a self-destruction risk: the HTTP live stream object destroys itself.
```cpp
srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsUniquePtr<SrsLiveConsumer> consumer(consumer_raw); // Trigger destroy.

void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    for (;;) { if (!cache->alive() && !stream->alive()) { break; } // A circle reference.
    mux.unhandle(entry->mount, stream.get()); // Free the SrsLiveStream itself.
```
> Note: It also introduces a circular reference in the object
relationships; the stream references itself when unmounting:
```text
SrsLiveStream::serve_http
-> SrsLiveConsumer::~SrsLiveConsumer -> SrsEdgeIngester::stop
-> SrsLiveSource::on_unpublish -> SrsHttpStreamServer::http_unmount
-> SrsLiveStream::alive
```
> Note: We should use an asynchronous worker to perform the cleanup to
avoid the stream destroying itself and to prevent self-referencing.
```cpp
void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    entry->disposing = true;
    if ((err = async_->execute(new SrsHttpStreamDestroy(&mux, &streamHandlers, sid))) != srs_success) { }
```
> Note: This also ensures there are no circular references and no
self-destruction.
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
Edge FLV is not working because it is stuck in an infinite wait loop.
Previously, there was no need to wait for exit, since resources were not
being cleaned up. Now that resources need to be cleaned up, it must wait
for all active connections to exit, which causes this issue.
To reproduce the issue, start SRS edge, run the command below, and press
`CTRL+C` to stop the request:
```bash
curl http://localhost:8080/live/livestream.flv -v >/dev/null
```
This causes the edge to fetch the stream from the origin and free the
consumer when the client quits. When `SrsLiveStream::do_serve_http`
returns, it frees the consumer:
```cpp
srs_error_t SrsLiveStream::do_serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r) {
    SrsUniquePtr<SrsLiveConsumer> consumer(consumer_raw);
```
Keep in mind that at this moment the stream is still alive, because it
is only marked not alive after this function returns:
```cpp
alive_viewers_++;
err = do_serve_http(w, r); // Free 'this' alive stream.
alive_viewers_--; // Crash here, because 'this' is freed.
```
Freeing the consumer causes the source to unpublish and attempt to free
the HTTP handler, which ultimately waits for the stream to no longer be
alive:
```cpp
SrsLiveConsumer::~SrsLiveConsumer() {
    source_->on_consumer_destroy(this);

void SrsLiveSource::on_consumer_destroy(SrsLiveConsumer* consumer) {
    if (consumers.empty()) {
        play_edge->on_all_client_stop();

void SrsLiveSource::on_unpublish() {
    handler->on_unpublish(req);

void SrsHttpStreamServer::http_unmount(SrsRequest* r) {
    if (stream->entry) stream->entry->enabled = false;
    for (; i < 1024; i++) {
        if (!cache->alive() && !stream->alive()) {
            break;
        }
        srs_usleep(100 * SRS_UTIME_MILLISECONDS);
    }
```
After about 120 seconds, it frees the stream and causes SRS to crash,
because the stream is still active. To track this potential issue, also
add an important warning log:
```cpp
srs_warn("http: try to free a alive stream, cache=%d, stream=%d", cache->alive(), stream->alive());
```
SRS may crash if this log appears.
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
If SRS responds with this empty data packet, FFmpeg will detect an empty
data stream, like `Stream #0:0: Data: none` in the following logs:
```bash
ffmpeg -i rtmp://localhost:11935/live/livestream
# Stream #0:0: Data: none
# Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 30 kb/s
# Stream #0:2: Video: h264 (High), yuv420p(progressive), 768x320 [SAR 1:1 DAR 12:5], 212 kb/s, 25 fps, 25 tbr, 1k tbn
```
This won't cause the player to fail, but it significantly inconveniences
the user. It may also cause FFmpeg to analyze the stream more slowly;
see #3767.
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
When stopping the stream, SRS waits for the HTTP Streaming to exit. If
the HTTP Streaming coroutine hangs, it will not exit automatically.
```cpp
void SrsHttpStreamServer::http_unmount(SrsRequest* r)
{
    SrsUniquePtr<SrsLiveStream> stream(entry->stream);
    if (stream->entry) stream->entry->enabled = false;
    srs_usleep(...); // Wait for about 120s.
    mux.unhandle(entry->mount, stream.get()); // Free stream.
}

srs_error_t SrsLiveStream::serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r)
{
    err = do_serve_http(w, r); // If stuck in here for 120s+
    alive_viewers_--; // Crash at here, because stream has been deleted.
```
We should notify the HTTP stream connections to interrupt (expire):
```cpp
void SrsHttpStreamServer::http_unmount(SrsRequest* r)
{
    SrsUniquePtr<SrsLiveStream> stream(entry->stream);
    if (stream->entry) stream->entry->enabled = false;
    stream->expire(); // Notify http stream to interrupt.
```
Note that we should notify all viewers pulling the stream from this HTTP
stream.
Note that we previously attempted to fix this issue by only waiting for
all viewers to quit, without interrupting them; see
https://github.com/ossrs/srs/pull/4144
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
1. Remove srs_global_dispose, which caused a crash when quitting while
still publishing.
2. Always call _srs_thread_pool->initialize for the single-thread case.
3. Support `--signal-api` to send signals via HTTP API, because CLion
swallows the signals.
---
Co-authored-by: Jacob Su <suzp1984@gmail.com>