5.1.1. Release notes for v0.14

About release 0.14

The main features of this SDK release include:

  • Several improvements to video stages when used in different combinations and orders. Previously, certain orders and combinations could cause visual artefacts or conflicts.

  • Text overlay now supports setting the text while the feed is playing.

  • Fixes to some previously noted issues with the privacy mode.

  • Some previously deprecated features have now been removed (see below for details).

  • Improvements to the demo application to demonstrate the new features and options available.

  • The default mixing of locally recorded audio feeds has changed, and a new, more flexible overload of the createRecordingFeed method has been added, allowing applications to specify which audio feeds to mix into which audio tracks in the output files.

  • Reduced media server session receiving latency in CPU-constrained scenarios.

  • Experimental features:

    • Wayland desktop capture via Pipewire has been added.

    • Queue tuning diagnostics can be enabled from the application.

NOTE: This release introduces some breaking changes; please see the notes below.

v0.14.3

Queue diagnostics (experimental feature)

To help diagnose latency issues, we have added an experimental feature to trace queue overruns. Queues help manage media flow where the input rate cannot be consistently consumed by further processing (e.g. an input video is encoded before being sent to a remote session). When the input cannot be processed in time, we can either hold on to frames to preserve the quality and continuity of the media, or drop frames to avoid a (potentially growing) backlog. The SDK uses queues with a finite buffer for incoming media, which drop the oldest buffers when full.

Queuing frames of course introduces latency: an incoming media frame is held onto and processed later. Queues should be tuned to balance excessive latency (buffer too large) against quality, avoiding dropped buffers where possible (enough buffer to smooth occasional slow processing).

The experimental queue diagnostics feature allows the application to enable tracing of queue overruns. The log output can be “spammy”, so we advise switching it on for one feed at a time to get a clear picture, rather than interleaving output in the log.

Enabling diagnostics only requires making a request on the feed. Importantly, the feed must have started before diagnostics can be enabled, so that the appropriate pipeline hooks can be set up.

// Include the FeedDiagnosticsRequest header
#include <PxMedia/FeedDiagnosticsRequest.h>

// When the feed starts, enable the diagnostics using a given log level for output
someFeed->onFeedStarted(
    [](auto& feed)
    {
        std::cout << "Feed " << feed->streamId() << " started playing" << std::endl;
        auto diagnostics = feed->feedRequest(experimental::FeedDiagnosticsRequest(LOG_LEVEL));
        if (!diagnostics) {
            // Failed to enable diagnostics, check diagnostics.error() for cause
        }
    });

When enabled, the log will output when the pipeline’s queue is overrunning, including the buffer size which indicates the potential latency introduced by the queue’s buffering.

`[INFO] MediaPipeline [webcam] queue [queue6]: OVERRUN (100ms)`

v0.14.2

Minor breaking changes

A few headers in PxUtility have been renamed for clarity and consistency. In these cases, the files are not expected to be included directly by application code; rather, they are included by other headers in the SDK.

  • pxassert.h becomes PXASSERT.h

  • int-utility.h becomes int-utils.h

  • traits-utility.h becomes traits-utils.h

Local UDP latency improvement in CPU-constrained scenarios

A queue has been added between the WebRTC receiver and the local UDP output. This addresses a growing-latency issue that could occur in CPU-constrained scenarios, when the local UDP output encoder was not able to keep up with the incoming video feed. The new queue buffers frames up to a limit (100ms) and discards the oldest frames if it fills up. This was the originally intended behaviour of the pipeline.

v0.14.1

New experimental feature: Wayland desktop capture via Pipewire

The SDK now includes experimental support for capturing Wayland desktop sessions using Pipewire. This feature is intended for testing and may not be fully stable. Note that configuration of Wayland and Pipewire to expose Wayland video via Pipewire is outside the scope of this SDK. See VideoInputFeedPipewire for more details.

Local recording audio mixing changes

In previous SDK releases, PeerSession::createRecordingFeed would record given audio feeds into separate audio tracks in the output files. In this release, the method has been updated to mix the specified audio feeds into a single audio track in the output files by default.

The old default behaviour can be achieved by using the new overload of PeerSession::createRecordingFeed, which allows an application to specify which audio feeds to mix into which audio tracks in the output files, using the audioTrackFeedIds parameter.
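As an illustrative sketch only (the exact overload signature and the shape of the audioTrackFeedIds argument are assumptions, and the feed IDs "mic", "system-audio", and "remote-peer" are hypothetical; consult the PeerSession reference for the real API), mixing two local feeds into one track while keeping a remote feed separate might look like:

```cpp
// Hypothetical sketch -- not the definitive signature.
// Track 0 mixes the microphone and system audio; track 1 keeps the remote
// participant's audio separate. (Listing each feed in its own track would
// reproduce the pre-0.14 default of one track per feed.)
auto recording = peerSession->createRecordingFeed(
    outputConfig,
    /* audioTrackFeedIds = */ {
        { "mic", "system-audio" },  // mixed into audio track 0
        { "remote-peer" },          // audio track 1
    });
```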

Bug fixes

  • Fixed an issue where attempting to create a feed when not connected to a media server session would result in a crash. Now, a meaningful error is returned instead.

v0.14.0

Previously deprecated features removed

The following classes and features, which were previously deprecated, have been removed:

  • PxMedia::VideoStageRotateCrop

  • PxUtility::HttpFetchTyped

  • PxUtility::RestService

Fixes to previous known issues with VideoStagePrivacy

Now when using VideoStagePrivacy:

  • When the privacy state is toggled, the video feed resolution matches the incoming feed resolution (previously it was set to a fixed output resolution).

  • When using the privacy feature with AVOutputFeedFile, the output file will be continuous (previously, it would split the file when privacy was toggled).

  • Visual artefacts and conflicts between privacy and other video stages such as rotate and crop, or adaptive streaming, have now been resolved.

New feature: Feed request support for setting text overlay text

Feed requests now support setting the text overlay text while the feed is playing, using a new request type: VideoStageTextOverlay::RequestSetText.
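Modelled on the feed-request example in the v0.14.3 notes above, setting the overlay text at runtime might look like the following sketch (that RequestSetText takes the new text as a string argument is an assumption; check the request type's reference for the actual form):

```cpp
// Sketch only: assumes the feed is playing and that RequestSetText
// is constructed from the new overlay text.
auto result = someFeed->feedRequest(
    VideoStageTextOverlay::RequestSetText("Recording in progress"));
if (!result) {
    // Failed to set the text; check result.error() for the cause.
}
```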

For more information on feed requests, see the Feed Requests section.

Updated demo features & fixes

Previously the demo applied the same adaptive streaming size, irrespective of whether the feed was cropped or rotated. This resulted in the video appearing with an incorrect aspect ratio, causing the video to be stretched or squashed. Now, the demo accounts for these stages when they are applied so the correct adaptive streaming size is used.

The demo application can now demonstrate additional “feed stages”, including specifying the order that stages are applied:

  • --demo-crop=<n>: Crop the outgoing session video by the specified number of pixels from each side.

  • --demo-rotate=ON: Demonstrates rotating the outgoing session feed by 90 degrees clockwise.

  • --demo-privacy=ON: Toggles privacy on the webcam feed at intervals.

  • --demo-text-overlay=<text>: Sets the text overlay on the webcam to the specified text.

  • --demo-framerate=<fps>: Sets a frame rate to apply to the outgoing session feed.

The order in which the options appear on the command line determines the order in which they are applied to the outgoing session video feed (not the demo “preview” shown at demo startup).

You can also now choose whether the demo outputs the local webcam feed to a window or to a UDP output. Use the --video-output=<out> option with a value of window or udp to specify the output type. The output defaults to window if not specified.
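Putting these options together (the demo binary name `demo` is an assumption; the stage options apply to the outgoing session feed in the order given on the command line):

```shell
# Crop 16 pixels from each side, then rotate 90 degrees clockwise, then draw
# the text overlay; send the local webcam feed to UDP rather than a window.
./demo --demo-crop=16 --demo-rotate=ON --demo-text-overlay="On air" \
       --video-output=udp
```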