Simple Streaming Client Sample
The Simple Streaming Client sample implements a simple streaming client application which streams video and audio from a server, such as the Remote Desktop Server sample. It has the following functionality:
- Receiving, decoding, post-processing and displaying a video stream from the server
- Receiving, decoding, post-processing and playing an audio stream from the server
- Capturing user input from a keyboard, a mouse and a game controller connected to the client machine and sending it to the server
- Optionally encrypting all network traffic using AES encryption with a pre-shared password to generate the encryption key
- Standard Dynamic Range (SDR) and High Dynamic Range (HDR) streaming when supported by the OS and codec (HEVC and AV1 on Windows only)
- Server auto discovery or direct connection via a URL
- Streaming over UDP and TCP
- Extensive logging
Video post-processing includes the following:
- Video scaling, including high-quality upscaling when the resolution of the video stream received is lower than the resolution of the client display
- Compression artifact removal (denoising)
The following video codecs are supported (subject to hardware support):
- h.264/AVC
- h.265/HEVC
- AV1
Audio post-processing includes the following:
- Resampling
- Remixing to the client's speaker layout
The following audio codecs are supported:
- AAC
- Opus
The following hardware configurations are supported:
- An AMD or Intel CPU or APU
- An AMD, NVidia or Intel GPU with a hardware-accelerated video decoder supporting at least one of the supported codecs. NOTE: additional steps are required to run the client sample on a non-AMD GPU. See the Running on non-AMD GPU section for more details
Run the SimpleStreamingClient executable with the following command line parameters (command line parameters are not case-sensitive):
- -Server <url> - specifies the URL of the server to connect to. The URL is provided in the following format: [protocol://]ip address[:port], where the protocol can be either UDP or TCP. The protocol and the port are optional and default to UDP and 1235 respectively when not provided explicitly. Note that TCP must be enabled on the server for the client to be able to connect using TCP. UDP transport is available regardless of the server configuration. When the -Server command line argument is not provided, the client performs automatic server discovery by sending a UDP broadcast to the local IP subnet on the port specified by the -Port command line argument (see below). The client attempts to connect to the first responding server automatically using the preferred protocol provided in the server's response. NOTE: automatic server discovery can only detect servers connected to the same IP subnet. If your subnet spans multiple physical segments connected by routers, please make sure that the routers allow UDP broadcasts to travel across the segments.
- -Port <port> - specifies the UDP port the UDP discovery broadcasts are sent to. This parameter also defines the port for direct UDP or TCP connections when the -Server command line argument is specified. Default: 1235.
- -Encrypted [true|false] - enables optional AES encryption of all network traffic between the client and the server. You must provide the -Encrypted command line parameter with the same value on the server in order for the client to be able to connect. You must also specify the same passphrase on the client and the server using the -Pass command line argument (see below). NOTE: while the actual data stream between the server and the client, including video, audio, controller events and application-defined messages sent over the AMD Transport, is encrypted when encryption is enabled, discovery broadcasts are sent in plaintext regardless of the value of the -Encrypted argument.
- -DatagramSize <size> - datagram size in bytes. Applicable only when streaming over UDP. For more information please refer to the Configuration Considerations section. Default: 65507.
- -DeviceID <id> - specifies a client device ID. This ID is used to uniquely identify the client to the server. When not supplied, a random value will be generated. It is recommended that you generate a unique value in your application and persist it across multiple runs.
- -Uvd [0|1] - hardware-accelerated decoder instance. Certain AMD GPUs, such as Radeon RX6800/6900/7800/7900 have two instances of the video codec engine. You can specify which instance is to be used for video decoding when running the sample on GPUs that have more than one decoder instance. Default: 0.
- -PreserveAspectRatio [true|false] - specifies whether the video stream's aspect ratio should be preserved when it is different from the aspect ratio of the display connected to the client. Default: true.
- -VideoDenoiser [true|false] - specifies whether the video denoiser (compression artifact remover) should be enabled or disabled. Keep in mind that the video denoiser will increase the overall latency and might limit the frame rate at high resolutions when running on low-end or mobile hardware. Enable it when the hardware is capable of maintaining the desired frame rate at the desired stream resolution and when latency (responsiveness) is less important than image quality. Enabling the video denoiser is useful when streaming over a slower network connection at a lower bitrate. NOTE: not available on non-AMD GPUs. The performance penalty is higher on HDR streams. Default: true.
- -HQUpscale [true|false] - specifies whether the high-quality video upscaler is enabled. The high-quality upscaler produces a sharper, more detailed image when upscaling from a lower-resolution video stream to a higher-resolution display compared to simpler bilinear or bicubic scalers. Enable it when the hardware is capable of maintaining the desired frame rate at the desired stream resolution and when latency (responsiveness) is less important than image quality. NOTE: not available on non-AMD GPUs. The performance penalty is higher on HDR streams. Default: true.
- -RelativeMouse [true|false] - specifies whether mouse movements are sent to the server as relative "steps" or as absolute screen coordinates. In the absolute mode mouse movements are bound by the client's window. This limitation isn't present in the relative mode; however, some temporary cursor desynchronization might occur when streaming over higher-latency networks, which can manifest itself as cursor lag or rubberbanding. The absolute mode is preferred in VDI applications; for gaming, however, the relative mode is a better choice. Games often use the mouse to move the camera view (panning/dolly) with the cursor hidden, and in the absolute mode the range of camera movements would be limited by the screen boundaries. Default: true (relative).
- -ShowCursor [true|false] - specifies whether the mouse cursor needs to be rendered by the client (true) or is embedded in the video stream (false). Default: true.
- -Pass <passphrase> - specifies the passphrase used to generate the AES encryption key when encryption is enabled with the -Encrypted argument. The same passphrase must be specified on the server.
- -LogFile <path> - specifies the path to the log file. Default: SimpleStreamingClient.log located in the executable's directory.
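For example, the following command (all values are placeholders) connects to a server at 192.168.0.10 over TCP with encryption enabled and writes the log to a custom location:

```
SimpleStreamingClient.exe -Server tcp://192.168.0.10:1235 -Encrypted true -Pass MySecretPhrase -LogFile C:\Logs\client.log
```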
Running on non-AMD GPU
To run the client sample on a non-AMD GPU you will need to include a special lite version of AMF, containing generic implementations of the video decoder and scaler/color space converter, with your client application installation. It consists of two DLLs, amfrt64.dll and amflte64.dll, containing the implementation of a video decoder and a color space converter that can run without the AMD graphics driver. Include the prebuilt/Windows/AMD64/amflte64.dll file in your installation package and place it next to your client application's main executable. It is safe to install this DLL even on clients with AMD graphics; however, it will not be loaded on AMD platforms. NOTE: some functionality, such as the High Quality Upscaler and the Video Denoiser, is available on AMD graphics only and will be disabled when the client runs on NVidia or Intel GPUs.
This section provides a high-level overview of the Simple Streaming Client sample's code. The sample's source code is located in the samples/SimpleStreamingClient/ directory. This overview is not meant to be a reference, but is designed to provide guidance as you navigate the source code and to help you understand the most important and the not-so-obvious aspects of the sample. We suggest that you follow the source code while reading this section.
The client application is implemented in the SimpleStreamingClient class. This is a base class which contains all platform-agnostic functionality. All Windows-specific functionality is implemented in the SimpleStreamingClientWin class derived from SimpleStreamingClient. This approach allows for cleaner code, not polluted with platform-specific #ifdef statements.
These classes are responsible for the application initialization and termination, command line parsing and starting the video and audio decoding pipelines and the network client.
The SimpleStreamingClient class implements the ssdk::transport_common::ClientTransport::ConnectionManagerCallback which receives notifications when a server is discovered or when a connection with a server is established or terminated.
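The sketch below illustrates the shape of such a callback implementation. Only the class and interface names and OnServerDiscovered() come from this document; the remaining method names and all signatures are assumptions, so consult the SDK headers for the authoritative declarations:

```cpp
// Sketch: a client class receiving transport notifications. Signatures are
// illustrative; the real interface is declared by
// ssdk::transport_common::ClientTransport::ConnectionManagerCallback.
class MyStreamingClient :
    public ssdk::transport_common::ClientTransport::ConnectionManagerCallback
{
public:
    // Called once per responding server and protocol during discovery
    void OnServerDiscovered(/* const ServerDescriptor& server */) /* override */ {}

    // Called when a connection to a server is established or terminated
    // (the actual names and parameters are defined by the SDK headers)
    void OnConnectionEstablished(/* ... */) {}
    void OnConnectionTerminated(/* ... */) {}
};
```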
The Simple Streaming Client sample uses the AMD Network Transport protocol to communicate with the server. The client component of the AMD Network Transport is implemented by the ssdk::transport_amd::ClientTransportImpl class, derived from the ssdk::transport_common::ClientTransport class. The implementation of the ssdk::transport_amd::ClientTransportImpl class is located in ssdk/transports/transport_amd/ClientTransportImpl.*. For more information about the AMD Network Transport please refer to the AMD Network Transport section.
If you wish to replace the AMD Transport protocol with another protocol, derive your own implementation class from ssdk::transport_common::ClientTransport and instantiate it instead of ssdk::transport_amd::ClientTransportImpl. Follow the instructions outlined in the Implementing Custom Protocols section.
The ClientTransport object must first be initialized by calling the ssdk::transport_common::ClientTransport::Start() method. Start() requires a transport-specific parameter block defined by the ssdk::transport_common::ClientTransport::ClientInitParameters class. This is a base class; each implementation of a transport must derive its own parameters class from it. The AMD Network Transport defines the ssdk::transport_amd::ClientTransportImpl::ClientInitParametersAMD class.
A client can connect to a server by passing a server URL to the ssdk::transport_common::ClientTransport::Connect() method. For the AMD Network Transport the URL must have the following format:
[protocol://]<server IP address>[:port]
where protocol can be either "UDP" or "TCP", with UDP being the default.
If you choose to implement your own transport, the URL format would be defined by your implementation.
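Under the assumptions above, a minimal startup sequence might look like the sketch below. Only the class names, Start() and Connect() are taken from this document; everything else, including error handling, is illustrative:

```cpp
// Sketch: initialize the AMD Network Transport client and connect directly.
ssdk::transport_amd::ClientTransportImpl transport;

ssdk::transport_amd::ClientTransportImpl::ClientInitParametersAMD initParams;
// ... populate the transport-specific parameters (callbacks, ports, etc.) ...

transport.Start(initParams);                    // must succeed before connecting

// The protocol prefix is optional (UDP is the default) and the port
// defaults to 1235 when omitted
transport.Connect("tcp://192.168.0.10:1235");
```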
When using the AMD Network Transport, the client can automatically discover the servers on the IP subnet by sending a UDP broadcast requesting a response from the servers. The discovery process is initiated by calling the ssdk::transport_common::ClientTransport::FindServers() method. The UDP port the broadcast is sent to is configurable by calling the ssdk::transport_amd::ClientTransportImpl::ClientInitParametersAMD::SetDiscoveryPort() method. When a server responds, the Client Transport object calls the ssdk::transport_common::ClientTransport::ConnectionManagerCallback::OnServerDiscovered() method which receives a ServerDescriptor as a parameter. The ssdk::transport_common::ClientTransport::ServerDescriptor class contains a server name, description, a URL it can be reached at and a list of supported video and audio streams. When a server supports multiple protocols, for example, a TCP- and a UDP-based AMD Transport, the OnServerDiscovered() callback will be called multiple times for the same server, once for each protocol.
NOTE: The server discovery process described above is specific to the AMD Network Transport. Other transport implementations may support other discovery mechanisms, such as, for example, a directory server. In such cases a call to FindServers() would result in a request being sent to the directory server, which would respond with a list of available servers. The server discovery mechanism is specific to the transport, however the ClientTransport API has been designed to be agnostic of the underlying implementation.
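For the AMD Network Transport, discovery reduces to a couple of calls; SetDiscoveryPort(), Start(), FindServers() and OnServerDiscovered() are named in this document, while the surrounding code is illustrative:

```cpp
// Sketch: discover servers on the local IP subnet instead of connecting
// to a known URL.
ssdk::transport_amd::ClientTransportImpl::ClientInitParametersAMD initParams;
initParams.SetDiscoveryPort(1235);  // UDP port the discovery broadcast is sent to

transport.Start(initParams);
transport.FindServers();            // results arrive via OnServerDiscovered(),
                                    // once per responding server and protocol
```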
Each server can stream one or more video and/or audio streams. There are many reasons why multiple streams can be desirable. For example, a server could stream the same video content using various codecs, resolutions, bitrates or frame rates. Alternatively, a server might have multiple displays connected to it (physical or virtual) and capture different streams from different displays. To receive a video or an audio stream, a client must subscribe to it. This can be achieved by calling the ssdk::transport_common::ClientTransport::SubscribeToVideoStream() and the ssdk::transport_common::ClientTransport::SubscribeToAudioStream() methods. Likewise, streams can be unsubscribed from by calling ssdk::transport_common::ClientTransport::UnsubscribeFromVideoStream() and ssdk::transport_common::ClientTransport::UnsubscribeFromAudioStream().
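For instance, a client might connect and subscribe from inside OnServerDiscovered(); the transport member, descriptor accessor and stream IDs below are assumptions, while Connect() and the subscribe calls are named in this document:

```cpp
// Sketch: subscribe to streams advertised by a discovered server.
void MyStreamingClient::OnServerDiscovered(const ServerDescriptor& server)
{
    m_transport.Connect(server.GetUrl());           // GetUrl() is an assumed accessor

    // The stream IDs would be taken from the descriptor's lists of
    // supported video and audio streams
    m_transport.SubscribeToVideoStream(videoStreamID);
    m_transport.SubscribeToAudioStream(audioStreamID);
}
```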
Each video and audio stream is preceded by an initialization block, which contains information about various stream parameters, such as codec, resolution and frame rate for video, and codec, sampling rate and channel layout for audio. It can also contain an optional binary blob generated by the encoder on the server and used for decoder initialization on the client. Video is streamed as a sequence of frames and audio (compressed or uncompressed) is streamed as a sequence of buffers. Every time an initialization block, a video frame or an audio buffer is received by the client, a call to the corresponding method of the ssdk::transport_common::ClientTransport::VideoReceiverCallback or the ssdk::transport_common::ClientTransport::AudioReceiverCallback is triggered. These callbacks are implemented by the ssdk::video::VideoDispatcher and the ssdk::audio::AudioDispatcher classes respectively. Their purpose is to distribute initialization blocks, video frames and audio buffers belonging to different streams to their respective decoders and pipelines.
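The routing performed by a dispatcher boils down to a lookup by stream ID; the sketch below illustrates the pattern with stand-in types rather than the SDK's actual dispatcher code:

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>

struct Frame;                                   // stand-in for a received video frame
class StreamInput                               // stand-in for a per-stream input/decoder
{
public:
    virtual ~StreamInput() = default;
    virtual void Submit(const Frame& frame) = 0;
};

// Sketch: deliver each incoming frame to the pipeline that owns its stream
class VideoDispatcherSketch
{
public:
    void OnVideoFrame(int64_t streamID, const Frame& frame)
    {
        auto it = m_inputs.find(streamID);
        if (it != m_inputs.end())
            it->second->Submit(frame);          // hand off to that stream's input
    }
private:
    std::unordered_map<int64_t, std::shared_ptr<StreamInput>> m_inputs;
};
```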
Once streams have been separated from each other by a Dispatcher, they are fed to an Input object. The purpose of an input is to unwrap video frames or audio buffers and convert them to an uncompressed stream. For compressed streams an Input object incorporates a decoder. A monoscopic video input used by the Simple Streaming Client sample is implemented in the ssdk::video::MonoscopicVideoInput class located in sdk/video/MonoscopicVideoInput.*. An audio input is implemented in the ssdk::audio::AudioInput class located in sdk/audio/AudioInput.*.
Post-processing pipelines receive their input data (uncompressed video frames or audio buffers) pushed by their corresponding inputs; they implement the Consumer interface defined in the corresponding Input class. A post-processing pipeline consists of a sequence of amf::AMFComponent objects and pushes a video frame or an audio buffer through each of these components. A commonly used video processing pipeline is implemented by the ssdk::video::VideoReceiverPipeline class. Likewise, an audio processing pipeline is implemented by the ssdk::audio::AudioReceiverPipeline class.
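The sketch below illustrates the idea of pushing a frame through a chain of amf::AMFComponent objects. SubmitInput() and QueryOutput() are the standard AMF component entry points; real pipeline code also has to handle AMF_REPEAT/AMF_INPUT_FULL results and asynchronous outputs, which this simplified version omits:

```cpp
#include <vector>
// Requires the AMF SDK headers, e.g. public/include/components/Component.h

// Sketch: feed a frame through each pipeline stage in turn
void PushThroughPipeline(std::vector<amf::AMFComponentPtr>& stages,
                         amf::AMFDataPtr data)
{
    for (amf::AMFComponentPtr& stage : stages)
    {
        stage->SubmitInput(data);               // submit to the current stage
        amf::AMFDataPtr output;
        if (stage->QueryOutput(&output) != AMF_OK || output == nullptr)
            return;                             // stage needs more input; stop for now
        data = output;                          // output becomes the next stage's input
    }
    // 'data' now holds the fully post-processed frame, ready for a presenter
}
```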
A receiver pipeline provides input to a presenter. The Streaming SDK uses the sample video and audio presenters from the public AMF samples.
Since video and audio streams are sent across the network in separate messages without buffering to achieve low latency, they may experience delays while in transit, and these delays may result in video and audio getting out of sync on the client. While video can be fast-forwarded after the network congestion has cleared by reducing the wait between frames, audio cannot, as its playback speed is determined by the sampling rate. When audio has been delayed, one way to accelerate it to catch up with the video is to skip one or more decoded buffers and not play them. This is often acceptable since, without buffering, a delay of an audio buffer would result in an audible gap anyway. The synchronization between video and audio is performed by the ssdk::util::AVSynchronizer object inserted between the outputs of the video and audio receiver pipelines and the video and audio presenters. The implementation of the ssdk::util::AVSynchronizer class is located in the sdk/util/AVSynchronizer.* files.
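The catch-up rule can be illustrated with a simple timestamp comparison. This is a conceptual sketch with hypothetical names, not the actual ssdk::util::AVSynchronizer logic:

```cpp
#include <cstdint>

// Sketch: decide whether a decoded audio buffer should be played or skipped.
// When audio lags the video clock by more than the allowed drift, the buffer
// is dropped so that audio playback can catch up with video.
bool ShouldPlayAudioBuffer(int64_t audioPtsUs, int64_t videoClockUs, int64_t maxDriftUs)
{
    return (videoClockUs - audioPtsUs) <= maxDriftUs;
}
```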
The Simple Streaming Client sample captures user input from a keyboard, a mouse and a game controller connected to the client device and sends it to the server, where input events are injected into their corresponding input queues. All input devices on the client are represented by a set of "driver" classes derived from the ssdk::ctls::ControllerBase class, defined in sdk/controllers/client/ControllerBase.h. All controllers are managed by the ssdk::ctls::ControllerManager class, which is responsible for instantiation of individual controller objects, querying their state, forming event strings and passing them to the Network Transport layer.
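A conceptual sketch of such a polling pass is shown below; the class and method names are illustrative stand-ins for ssdk::ctls::ControllerBase and ssdk::ctls::ControllerManager, not the SDK's actual API:

```cpp
#include <memory>
#include <string>
#include <vector>

// Stand-in for ssdk::ctls::ControllerBase: one "driver" per input device
class ControllerSketch
{
public:
    virtual ~ControllerSketch() = default;
    // Returns true and fills 'eventOut' when the device state has changed
    virtual bool PollState(std::string& eventOut) = 0;
};

// Stand-in for the manager's polling pass: query every controller and hand
// any changed state to the network transport as an event string
void PollAllControllers(std::vector<std::unique_ptr<ControllerSketch>>& controllers)
{
    for (auto& controller : controllers)
    {
        std::string event;
        if (controller->PollState(event))
        {
            // ... pass 'event' to the Network Transport layer here ...
        }
    }
}
```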