Configuring Channel Recording via Web Console
The SmartMEDIA Admin Console is a web-based interface for configuring the recording and transcoding of source media streams. The console is compatible with almost all popular web browsers. This article describes the console features used to configure channel recording.
Interface Overview
For the address and access details of the console, please contact the SmartLabs Support Team or your system administrator. The login page looks like this:

Login page
After you enter your login and password, the Home page opens:

Home page
The main menu allows you to switch between three spaces:
- Channels — a space for setting up the recording and transcoding of source media streams;
- Users — a space for managing the Admin Console users;
- Server — a space for operator-controlled SmartMEDIA server settings.
Setting Up Channel Recording
To add a new channel, click . The channel settings editor then opens on the right side of the page:

Channel settings
The channel settings are divided into several sections arranged in the order in which the original media stream is processed. Depending on requirements, certain sections may remain empty. Examples of typical channel configurations are shown in the section “Channel Configuration Examples”. Each processing step (section) is described in the following chapters.
Basic Settings
This section allows you to edit the basic settings of the channel recording.
- ID — the unique channel identifier
- Recording window — the default content storage period, in units of time.
- Use wallclock instead PTS — if the option is enabled, the system (wall-clock) time is saved when recording SCTE-35 events instead of the time encoded in the event.
- Container type — the container type in which the content should be recorded: ts or mp4.
- Index storage — the identifier of the index database configuration. The list of available storages is determined by the global configuration of the SmartMEDIA Conveyor service.
- Offset — the system time offset when recording SCTE-35 events, if the Use wallclock instead PTS option is enabled.
Data storages
The Data storages section allows you to select a storage for recording ready-to-broadcast content. You can add one or several available storages, the list of which is determined by the global configuration of the SmartMEDIA Conveyor service.
Content can be recorded to POSIX storage (local file systems, NFS, external storage) and to external object storage using Amazon S3 protocol.
Sources
This section contains the settings of the source TV streams (multicast or unicast in the TS container) received by the SmartMEDIA Conveyor service.
Incoming streams must meet the following requirements:
- Container: SPTS or MPTS MPEG2-TS (ISO/IEC 13818-1, ITU-T).
- Protocols:
- UDP over IP Multicast (without RTP encapsulation);
- HTTP (MPEG2-TS stream over HTTP);
- HLS according to draft-pantos-http-live-streaming-05, without encryption.
- Supported video codecs: H.262/MPEG2, H.264/AVC, H.265/HEVC.
- Video resolution: up to 4K, up to 60fps. Both progressive and interlaced streams are supported.
- Supported audio codecs: AAC, AC3, MP2, MP3, DTS.
Parameters to configure:
- ID — the unique source identifier.
- url — URL of the source TS stream (HTTP or UDP).
- Format for multicast sources: udp://224.5.6.7:5000
- Format for HTTP sources (HLS or continuous HTTP stream) — regular URLs are specified.
- hls.stream_type — HTTP stream type. If the parameter value is HLS and the url parameter has an HTTP URL, SmartMEDIA Conveyor will work with this stream over HLS protocol.
- Select the HLS value if the url parameter specifies a master playlist and you want to set the profile (hls.bandwidth) that will be selected as the source.
- Select the HLS_STREAM value, if the url parameter contains a specific profile playlist.
- hls.bandwidth — (only for hls.stream_type=HLS) the bandwidth, by which the stream will be selected from the variant playlist. If the parameter value is 0, the first stream in the playlist is used.
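To make the hls.bandwidth selection rule concrete, here is a small Python sketch. It is not SmartMEDIA code: the playlist parsing is deliberately simplified (only the BANDWIDTH attribute is read) and the `select_variant` helper name is invented for this example.

```python
import re

def select_variant(master_playlist: str, bandwidth: int = 0) -> str:
    """Pick a profile URI from an HLS master playlist.

    Mirrors the documented behaviour: with bandwidth == 0 the first
    stream in the playlist is used; otherwise the stream whose
    BANDWIDTH attribute matches the requested value is selected.
    """
    variants = []  # list of (bandwidth, uri) pairs in playlist order
    lines = master_playlist.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            bw = int(m.group(1)) if m else 0
            uri = lines[i + 1].strip()  # the URI follows the tag line
            variants.append((bw, uri))
    if not variants:
        raise ValueError("no variants in master playlist")
    if bandwidth == 0:
        return variants[0][1]
    for bw, uri in variants:
        if bw == bandwidth:
            return uri
    raise ValueError(f"no variant with bandwidth {bandwidth}")

playlist = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=4500000,RESOLUTION=1920x1080
high.m3u8"""

print(select_variant(playlist, 0))        # first stream: mid.m3u8
print(select_variant(playlist, 4500000))  # high.m3u8
```

With hls.stream_type=HLS_STREAM no such selection happens, because the url already points at a specific profile playlist.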
Demuxers
This section defines the parameters of the source streams demultiplexing.
SmartMEDIA supports two recording modes:
- Main mode — recording with re-multiplexing
The incoming stream is completely demultiplexed, only the desired tracks (audio and video) are used. Then the elementary stream is encrypted if necessary, packed into the desired container (MPEG2-TS or ISO BMFF) and written into the storage.
To work in this recording mode, the incoming stream must not be encrypted; otherwise, all information about encryption will be lost and the content cannot be played.
If you are going to use this mode, fill in the fields of the Demuxers section to specify which video, audio, or subtitle tracks should be extracted from the source stream. The number and type of tracks in the source stream can be determined using third-party utilities such as ffprobe.
- Pass-Through
The incoming TS stream is written in the form in which it arrives at the server. The stream is not remuxed; it is divided into chunks and written into the storage. In particular, all timestamps are kept in the stream, as well as CC errors, if they were present in the incoming stream.
In this mode, the server can also accept encrypted TS streams if the TS packet structure is preserved and the NAL-unit headers are not encrypted (for example, DVB Simulcrypt or Common Encryption for MPEG2-TS encrypted streams).
This mode does not require demultiplexing, so the fields of the “Demuxers” section should remain empty.
Parameters to configure:
- ID — the unique demuxer identifier
- source — the TS stream source identifier
- MPTS program number — ID of the program to be demultiplexed from the MultiProgram MPEG-TS stream. If the program ID is not set (equal to zero) and several PMT PIDs are detected in the PAT table, a warning is written to the log.
- Preset — an option that defines how SPS changes are tracked in H264 video streams:
– all — all SPSs found in the stream are compared. If the SPS has changed, headers with initialization data are created for each stream and recording of a new period is started. The PPS is not added into chunks.
– first — only the first SPS found in the stream is stored when starting or restarting the recording process. In this case, the first and all subsequent PPSs are stored during chunk muxing.
- Remove filler data — an option that drops NAL units of the FILLER_DATA type when remuxing H264 and H265 streams. Some SmartTV models cannot correctly play streams with FILLER_DATA, so it is recommended to enable this option. If you need to keep FILLER_DATA in the stream for some reason, leave this option unchecked.
- Parse SCTE 35 — flag to enable/disable parsing of the SCTE-35 markers with their subsequent saving to the database.
- Interpret all I-Frames as IDR — an option allowing the stream to be split into chunks not only at IDR I-frames, but also at non-IDR I-frames. In standard mode, the SmartMEDIA Conveyor service searches for an IDR frame to start a new chunk. However, some non-standard streams may contain only non-IDR I-frames. To work with such streams, enable this option.
- AAC – decode first sample — for some AAC streams, it is not enough to read the samplingFrequencyIndex value from the ADTS header, which results in the actual sampling frequency being doubled. This option enables decoding of the first sample with ffmpeg when starting (or restarting) the recording.
- tracks — the list of output tracks in the format <type><number>. For example: vid0, aud1 or sub0. The following types are supported: vid — video, aud — audio, sub — subtitles (DVB Subtitles (ETSI EN 300 743) or Teletext (ETS 300 706)). For Teletext subtitles the page number can be defined as an argument, e.g. sub0?777 for page #777. The default page number is 888.
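The `<type><number>` track format above, including the optional Teletext page argument, can be illustrated with a short parser. This is illustrative Python, not SmartMEDIA code; `parse_track` is an invented helper name.

```python
import re

def parse_track(spec: str):
    """Parse a track spec such as 'vid0', 'aud1' or 'sub0?777'.

    Returns (type, number, page). The page argument applies to
    subtitle tracks only and defaults to 888, as documented above.
    """
    m = re.fullmatch(r"(vid|aud|sub)(\d+)(?:\?(\d+))?", spec)
    if not m:
        raise ValueError(f"bad track spec: {spec}")
    ttype, num, page = m.group(1), int(m.group(2)), m.group(3)
    page_num = int(page) if page else (888 if ttype == "sub" else None)
    return ttype, num, page_num

print(parse_track("vid0"))      # ('vid', 0, None)
print(parse_track("sub0?777"))  # ('sub', 0, 777) — Teletext page 777
print(parse_track("sub1"))      # ('sub', 1, 888) — default page
```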
Network transcoders
After demultiplexing, transcoding can be applied to the extracted video streams. Video transcoder functions include decoding, filtering, and encoding, which can be performed by network transcoders. Typically, their installation and configuration are carried out by SmartLabs. If so, please contact SmartLabs Support Team to get the settings for this section.
You can also configure the network transcoders yourself or find out the settings for this section using the information from the Configuration reference of the SmartTranscoder and SmartNvidiaTranscoder services.
Parameters to configure:
- ID — the unique identifier of a network transcoder.
- host, Port — IP address and port of the network transcoder.
In this section, network transcoders are only connected to the system. Their configuration is performed in the next section — Video transcoders.
Video transcoders
This section describes the settings for both network and local transcoders (if the configured SmartMEDIA server also has GPUs intended for transcoding). The incoming stream must be unencrypted (otherwise it cannot be decoded).
The following filters can be applied to video streams:
- Deinterlacing (applied automatically if the incoming stream is interlaced);
- Frame resize;
- Logo overlay;
- Frame crop.
Target formats can be:
- Video codecs: H.264/AVC, H.265/HEVC;
- Video format: up to 4K, up to 60fps, progressive scan;
Parameters to configure:
- ID — the unique identifier of the video transcoder configuration.
- Decode encrypted — if the original multicast stream has a reference to any CAS in the PMT, the stream is considered encrypted and cannot be decoded. In certain cases, however, the stream is “encrypted” with an encryption level of “0”. Such a stream can be decoded when this option is enabled.
- Source — the video stream ID for transcoding, specified together with the demuxer. Example: dmx0:vid0, where dmx0 — demuxer ID, vid0 — video stream ID.
- Transcoder — the unique identifier of the local or network transcoder.
- Max input resolution — maximum input stream resolution expected by the transcoder for the channel. Valid values are: sd, hd and uhd.
The SmartMEDIA License defines the maximum number of SD, HD, and UHD streams that can be transcoded. If the configuration contains more transcoders with the specified resolution than the license allows, a warning is written to the log file and the transcoder service stops. However, licenses for higher resolutions can be used to process channels with lower resolutions (for instance, a UHD license can process HD and SD streams, and an HD license can process SD streams).
Besides checking the configuration, the transcoder continuously checks whether the input stream resolution corresponds to the declared value. If the stream resolution exceeds the declared one (for instance, sd is declared, but the stream is 1920×1080 pixels), channel processing stops.
- Use MFE — enables the Intel Multi-Frame Encoding technology when transcoding with Intel QSV-enabled CPUs.
MFE improves GPU utilization when encoding multiple streams with low resolution (less than 1080p), especially on the most powerful GPUs (Intel Iris Pro Graphics 580 and better). If Multi-Frame Encoding is enabled for the transcoder, all output streams described in the Outs section can be encoded in parallel. However, the technology does not affect encoding of streams belonging to different channels, because they are encoded in different QSV sessions.
See the Intel MediaSDK documentation for more details.
- Enable closed captions — a flag to enable decoding of EIA-608/708 Closed Captions from the H264/H265 video stream and including them into the output H264/H265 video streams. QSV supports only H264 video streams; Nvidia supports H264/H265 video streams. Subtitles of this type are passed in SEI (Supplemental Enhancement Information) messages within the video track, so a separate track is not created in the playlist and their processing is part of the video transcoding.
- Degree of parallelism — the degree of parallelism when executing tasks inside the Intel MediaSDK. Specifies the number of asynchronous operations that the application can run before explicit synchronization (waiting) is required to get the result. The value applies to the decoder, encoder, and VPP filters, meaning that each pipeline component can run the specified number of asynchronous operations. Values > 1 can improve GPU utilization and performance, especially if a small number of low-resolution channels are transcoded and a value of 1 does not lead to full GPU utilization. However, increasing this value also increases resource consumption; in particular, more surfaces must be allocated for storing decoded frames. Usually, a value > 5 brings no noticeable performance improvement but does increase resource consumption. If full GPU utilization is already achieved by transcoding a large number of channels into several bitrates, it may be advisable to reduce the value (for example, to 2) for balanced resource consumption.
- Enable double-rate deinterlacing — flag to double the frame rate when deinterlacing.
- logo — a section with settings for the logo overlaid on top of the video stream (the Logo Overlay filter). If omitted, no logo is overlaid.
For best transcoding performance, only one image is overlaid on a video frame at a time. If you need to overlay several graphic elements at the same time, combine them into a single image in a graphics editor in advance.
- URL — HTTP URL or local file system path of the PNG image (32-bit truecolor RGBA) to be overlaid as a static logo. It can be used to set a single permanent logo, or a placeholder when using dynamic Timed Logos; in the latter case, the static logo is displayed when there is no active dynamic Timed Logo in the scheduler. The image is downloaded from the specified URL each time the channel recording starts. If the image cannot be downloaded, a warning is written to the log and the channel is recorded without the logo overlay.
If the URL is not specified, the static logo is not used, but dynamic Timed Logos created via the JSON-RPC API and stored in the database can still be overlaid.
- X, Y — horizontal and vertical coordinates of the upper-left corner of the logo on the original video frame (before scaling), in pixels. Used only if a static logo URL is specified.
- Opacity — defines the logo opacity from 0 to 100, where 0 — completely transparent, 100 — completely opaque. The parameter must be known when the transcoder starts, therefore it is set in the transcoder configuration and cannot be changed when creating dynamic Timed Logos via JSON-RPC API.
- Transparent color — a grayscale color from white to black that is considered transparent when overlaying the logo using the Luma Keying method, i.e. only the brightness component of the color (Luma), from 0 to 255, is used to determine the transparent areas of the logo. The color is set in the FFmpeg color format, i.e. as a name (black, white, red) or in the #RRGGBB format. It is therefore possible to specify colors other than gray, such as red or blue, but since only the brightness component is used, light gray, light red, and light blue are treated the same.
The parameter must be known when the transcoder starts, so it is set in the transcoder configuration and cannot be changed when creating dynamic Timed Logos via the JSON-RPC API. This parameter also acts as a flag enabling the dynamic Timed Logo scheduler: to configure time-bound logos via the JSON-RPC API, you must first set the color for Luma Keying (common to all logos) in the transcoder configuration.
- Crop — the section responsible for configuring a rectangular area for cropping the edges of the video frame (the Crop filter). If not specified, cropping is not applied.
If the Logo Overlay feature is configured, the logo is positioned in the coordinates of the cropped video frame, not the full decoded frame. For example, if logo.x = 0 and crop.x = 100, the logo will not be cropped.
- Width, Height — the width and height of the cropping rectangle, in pixels. If not set, the source frame width and height are used. If set, the values are limited to the range [2, input width/height].
- X, Y — the horizontal and vertical coordinates of the upper-left corner of the cropping rectangle, in pixels. If not set, they are calculated as (input width – Crop.Width)/2 and (input height – Crop.Height)/2, i.e. the area is centered on the frame. If set, the values are limited to the ranges [0, input width – Crop.Width] and [0, input height – Crop.Height].
- Outs — output video streams settings.
- bitrate — the target output stream bitrate. Specified in size units.
- Max bitrate — the maximum bitrate of the output stream, in bits/s. Specified in size units. If it is set (not equal to 0) and equals the target bitrate (the bitrate field), the Constant Bitrate Control (CBR) mode is used; otherwise, Variable Bitrate Control (VBR). In VBR mode, this parameter specifies the maximum bitrate at which encoded frames are sent to the Video Buffering Verifier (VBV). If set to 0, VBR mode is used and the parameter is calculated by the Intel MediaSDK based on the target bitrate, frame rate, codec profile and level, and other parameters.
- codec — the video codec to be used to encode the stream: h264 or h265.
- VBV buffer size — the size of the Video Buffering Verifier (VBV) buffer, in bits. Affects the variability and deviation of the bitrate from the average value: the smaller the buffer, the closer the bitrate stays to the average, with fewer spikes, but image quality may degrade. It is recommended to start with a buffer size sufficient for 2 seconds of video at the maximum bitrate, that is, VBV buffer size = 2 × Max bitrate, and then decrease or increase it depending on the requirements for bitrate variability. If the client device has a small buffer (for example, old STBs), you can reduce the VBV buffer size to one Max bitrate. If you need to strictly limit bitrate fluctuations, you can reduce the buffer to half of the Max bitrate or less, but in this case it is recommended to check that the image quality remains acceptable. If the buffer size is insufficient, the encoder has less room for optimization and efficient encoding, so quality can be reduced (increased Quantization Parameter, QP) for large frames (for example, I-frames) so that they fit into the bitrate limit. This can manifest as the periodic appearance of blurry and fuzzy frames, and is especially true for the Constant Bitrate Control (CBR) mode; in that case, it is recommended to switch to Variable Bitrate Control (VBR) mode and use an enlarged VBV buffer.
If the parameter is not set (equal to 0), the Intel MediaSDK uses an automatically calculated value.
- Initial VBV buffer occupancy — the initial filling of the VBV buffer, in bits, after which the decoder starts decoding. This parameter, along with the bitrate, Max bitrate, and VBV buffer size parameters, is used to control the output bitrate and is part of the common Video Buffering Verifier (VBV), or Hypothetical Reference Decoder (HRD), model. This simplified model assumes that data enters a buffer of fixed size (VBV buffer size) according to the individual sizes of the encoded frames, and exits the buffer at a constant rate (bitrate). The decoder starts decoding after the buffer is filled to Initial VBV buffer occupancy. The encoder's goal is to prevent this hypothetical buffer from overflowing or underflowing.
If the parameter is not set (equal to 0), the Intel MediaSDK uses an automatically calculated value.
- gop_size — the duration of the GOP structure (distance between adjacent I-frames) of the output video stream, in frames.
- Width, Height — the frame width and height of the output video stream, in pixels.
- pix_fmt — the output pixel format, which describes the number of bits per color component and the pixel representation method. The following values are supported:
nv12 — planar YUV 4:2:0 format, 8 bits per color. Used for the H264 and HEVC MAIN Profile codecs. In ffmpeg terms, this format is called yuv420p.
p010 — planar YUV 4:2:0 format, 10 bits per color. Used for the HEVC MAIN10 Profile codec. In ffmpeg terms — yuv420p10le.
- Preset — a value that sets the balance between output video quality and transcoding speed. The following values are supported:
best_speed — maximum transcoding speed with acceptable video quality;
balanced — balance between speed and quality;
best_quality — best video quality, slow transcoding.
When using Intel QSV technology, it is recommended to start with best_speed and reduce the speed only if the video quality is not objectively satisfactory. A peculiarity of Intel QSV is that with best_speed the video quality virtually does not decrease (according to a note in the Intel MediaSDK Release Notes, it is sometimes even better than with balanced), while the speed increases significantly.
- Quality — the desired level of image quality, which affects the degree of frame quantization (QP). Valid values: from 1 (best quality) to 51 (worst quality). If the quality is set (not equal to 0), the Quality Variable Bitrate Control (QVBR) mode is used, since the traditional VBR and CBR modes do not support specifying quality, only bitrate limits. The QVBR algorithm is a hybrid of the well-known Constant Rate Factor (CRF) algorithm and ordinary Variable Bitrate Control (VBR), suitable for video streaming. Traditional CRF allows you to set the quality level without bitrate limits, and the resulting large bitrate fluctuations make it more suitable for file transcoding (VOD) than for live video transmitted over channels of limited bandwidth. QVBR tries to achieve the desired subjective quality level (i.e., at the level of human perception rather than formal metrics such as PSNR) while limiting fluctuations near the target bitrate and maintaining compatibility with the Video Buffering Verifier (VBV)/HRD model.
If the desired quality is not set (equal to 0), only the bitrate limits and the traditional VBR or CBR mode are applied.
- Frame rate numerator & denominator — the frame rate numerator and denominator; the frame rate is set as a rational number. Only reducing the input stream frame rate is supported.
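The rate-control selection and the VBV sizing guidance above can be condensed into a short sketch. This is illustrative Python, not SmartMEDIA code; the function names are invented for this example, and the 2-second rule is only the recommended starting point from the text.

```python
def rate_control_mode(bitrate: int, max_bitrate: int) -> str:
    """Mode selection as documented: CBR when Max bitrate is set
    (non-zero) and equals the target bitrate; VBR otherwise,
    including Max bitrate = 0 (auto-calculated by the MediaSDK)."""
    if max_bitrate != 0 and max_bitrate == bitrate:
        return "CBR"
    return "VBR"

def recommended_vbv(max_bitrate: int, seconds: float = 2.0) -> int:
    """Starting point from the text: a buffer holding ~2 s of video
    at Max bitrate, i.e. VBV buffer size = 2 * Max bitrate (bits).
    Lower `seconds` (1.0, 0.5) to constrain bitrate fluctuations."""
    return int(seconds * max_bitrate)

print(rate_control_mode(4_500_000, 4_500_000))  # CBR
print(rate_control_mode(4_500_000, 6_000_000))  # VBR
print(rate_control_mode(4_500_000, 0))          # VBR (auto max)
print(recommended_vbv(6_000_000))               # 12000000
```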
Audio transcoders
Along with demultiplexed video streams, audio streams can be transcoded. Audio transcoder functions include decoding, filtering, and encoding. The incoming stream must be unencrypted (otherwise it cannot be decoded).
The original bitrate of the audio stream can be converted.
The AAC codec is used as the target format.
Parameters to configure:
- ID — the unique identifier of the audio transcoder.
- Decode encrypted — if the original multicast stream has a reference to any CAS in the PMT, the stream is considered encrypted and cannot be decoded. In certain cases, however, the stream is “encrypted” with an encryption level of “0”. Such a stream can be decoded when this option is enabled.
- Source — the audio stream ID for transcoding, specified together with the demuxer. Example: dmx0:aud0, where dmx0 — demuxer ID, aud0 — audio stream ID.
- Outs — settings of the audio transcoder outputs.
- bitrate — the bitrate of the output audio stream. Specified in size units.
- channels count — the number of audio channels of the output stream.
- sample rate — the audio sampling rate of the output stream.
Muxers
This section allows you to multiplex (pack) transcoded or source media streams into a container of a given format. If adaptive streaming (ABR) is intended, create several streams in this section according to the operator platform requirements.
To protect against unauthorized viewing, copying, etc. during the recording process, the content can be encrypted according to one of the following standards:
- HLS-AES — only the MPEG2-TS container and subsequent delivery over the HLS protocol are supported. The entire chunk, including the headers of TS packets, is encrypted using the AES-CBC algorithm with PKCS#7 padding. Information about encryption can only be added to the HLS playlist (#EXT-X-KEY tag).
- ISO/IEC 23001-7: 2015 Part 7 (Common encryption in ISO base media file format files), abbreviated to CENC. The ISO BMFF/MP4 Fragmented container is supported. Only the elementary stream (payload) is encrypted, the container and the headers of the NAL packets of the video stream remain unencrypted. The data is encrypted using the AES-CTR algorithm.
To encrypt with this standard, select the widevine value in the encryptor.drm field, which corresponds to the Google Widevine DRM system.
It should also be considered that:
- if you’re going to use adaptive bitrate streaming (ABR), all video streams should have the same GOP structure and have synchronous key frames (I-frames);
- the value of DTS counters for audio and video samples should not differ by more than 1.5 seconds;
- the value of DTS counters should increase monotonically;
- if the PMT table in the MPEG2-TS container was changed, a table version change is required;
- there should be no CC errors in the TS-stream.
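Two of the requirements above, monotonically increasing DTS and a bounded audio/video DTS difference, can be checked with a simple sketch. This is illustrative Python, not SmartMEDIA code; it assumes DTS values are already converted to seconds and compares only the final DTS of each track as a simple proxy for the skew.

```python
def check_dts(video_dts, audio_dts, max_skew=1.5):
    """Validate the documented DTS requirements.

    Checks that each track's DTS values increase monotonically and
    that the final audio and video DTS values do not diverge by
    more than 1.5 seconds. Returns "ok" or an error description.
    """
    for name, seq in (("video", video_dts), ("audio", audio_dts)):
        for prev, cur in zip(seq, seq[1:]):
            if cur <= prev:
                return f"{name} DTS not monotonically increasing at {cur}"
    if abs(video_dts[-1] - audio_dts[-1]) > max_skew:
        return "audio/video DTS skew exceeds 1.5 s"
    return "ok"

print(check_dts([0.0, 0.04, 0.08], [0.0, 0.02, 0.06]))  # ok
print(check_dts([0.0, 0.04, 0.08], [0.0, 0.02, 2.00]))  # skew error
```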
Playlist generation for the recorded content is performed by the SmartMEDIA Playlist Generator component on the subscriber device request.
HLS playlist generation is supported for unencrypted content, as well as for content encrypted with the HLS-AES standard and recorded in the MPEG2-TS container.
DASH playlist generation is supported for content recorded in the ISO BMFF/MP4 Fragmented container.
Parameters to configure:
- bw — the bandwidth required to play the stream. Used to generate DASH or HLS+MP4 playlists.
- dir — the subdirectory (for POSIX storages) or URL prefix (for S3/object storages) relative to the channel directory to which the stream will be recorded; must be unique within the channel.
- lang — the two-letter designation of the stream language (used for audio) in accordance with ISO 639-1:2002.
- encryptor.drm — the DRM settings identifier defined in the global configuration. If not specified, the stream is not encrypted.
- encryptor.key_type — the key type: SD, HD or AUDIO. Attention! If you use SmoothStreaming streams, the keys on all tracks — both video and audio — must be the same (e.g. SD).
- role — sets the role for Adaptation in the DASH manifest according to the “5.8.5.5 DASH role scheme” clause of the “ISO/IEC 23009-1” standard. Possible values: caption, subtitle, main, alternate, supplementary, commentary, dub. The role can be used to mark the DescribedVideo track in the manifest with the commentary value.
- PVR skip — if the option is enabled, this track is not used for nPVR program recording; if disabled, the track is included in the nPVR record. This parameter can be used to optimize storage space for nPVR records.
- Save text subtitles in TTML — if the option is enabled, text subtitles are saved in the TTML format; if disabled — VTT.
- sources — list of sources of media samples for the recorded stream.
- source — the name for the TS stream (when recording without remuxing) or sample source track.
- master — the main source flag for synchronizing chunks (there should be only one within the channel). It is recommended to select the source containing the video track with the maximum bitrate.
Thumbnails
This section configures the creation of thumbnail images from the keyframes of the track specified by the Source track for thumbnails option. These images can be used by the player for better visualization during seeking.
Unlike Thumbnails mosaic, each JPEG file contains a picture only for a single frame.
Parameters to configure:
- Dir — the filesystem subdirectory (for POSIX storages) or URL prefix (for S3/object storages) relative to the channel directory/URI to which the thumbnails are recorded. Must be unique within the channel.
- Width, Height — the width and height of the preview image. The original video width and height are used if the value is not set.
Thumbnails mosaic
This section allows you to create a thumbnail mosaic from the keyframes of the track specified by the Source track for thumbnails option.
A single JPEG image contains multiple thumbnails (a thumbnails mosaic cols × thumbnails mosaic rows matrix of pictures) that can be used by the player for preview when seeking. This speeds up thumbnail loading on the client device.
Parameters to configure:
- thumbnails mosaic cols, rows — the number of columns and rows in the thumbnail mosaic.
- Dir — the filesystem subdirectory (for POSIX storages) or URL prefix (for S3/object storages) relative to the channel directory/URI to which the thumbnail mosaics are recorded. Must be unique within the channel.
- Width, Height — the width and height of the preview image. The original video width and height are used if the value is not set.
- Source track for thumbnails — the identifier of the video track used for thumbnail generation. If the value is not set, thumbnail generation is off. Thumbnails are generated as JPEG images and can be used by media players for preview during seeking.
- Accessibility 608 — the value of the Accessibility tag in the MPD playlist for schemeIdUri="urn:scte:dash:cc:cea-608:2015"
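The mosaic layout above (a cols × rows grid of thumbnails per JPEG) can be sketched with a little tile arithmetic. This is illustrative Python, not SmartMEDIA code; the row-major, left-to-right tile order is an assumption for the example.

```python
def tile_position(index, cols, rows, tile_w, tile_h):
    """Locate thumbnail number `index` inside a cols x rows mosaic.

    Assumes tiles are laid out row by row, left to right. Returns
    (image_no, x, y): which mosaic JPEG holds the thumbnail and the
    pixel offset of the tile's top-left corner within that image.
    """
    per_image = cols * rows              # thumbnails per mosaic JPEG
    image_no, pos = divmod(index, per_image)
    row, col = divmod(pos, cols)
    return image_no, col * tile_w, row * tile_h

# 5x4 mosaic of 160x90 tiles: thumbnail #27 lands in the 2nd JPEG
print(tile_position(27, 5, 4, 160, 90))  # (1, 320, 90)
print(tile_position(0, 5, 4, 160, 90))   # (0, 0, 0)
```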
Channel Configuration Examples
Example 1: single-bitrate MP4 DASH content recording without transcoding and encryption, with subtitles
Basic settings
- ID = CH_MP4_TTXSUB
- Container type = mp4
- Recording window = 30m
- Index storage = mongo_local
Data storages
s3
Sources
- ID = src0
- url = udp://239.8.8.8:1234
- hls.stream_type = Empty field
- hls.bandwidth = 0
Demuxers
- ID = dmx0
- source = src0
- tracks → Track = vid0
- tracks → Track = aud0
- tracks → Track = sub0?888
Video transcoders
Empty fields
Audio transcoders
Empty fields
Muxers
Video stream
- bw = 2500000
- dir = video/bw2500000
- lang = Empty field
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = dmx0:vid0
- sources → master = ✔️
Audio stream
- bw = 128000
- dir = audio/eng
- lang = en
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = dmx0:aud0
- sources → master = Empty field
Subtitles
- bw = 4000
- dir = sub/eng
- lang = en
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = dmx0:sub0?888
- sources → master = Empty field
The playlist URL in this example might look like this: http://streaming_server_address/dash/CH_MP4_TTXSUB/playlist.mpd
Example 2: multi-bitrate MP4 DASH content recording with transcoding, without encryption
Basic settings
- ID = CH_MP4_TRANS
- Container type = mp4
- Recording window = 30m
- Index storage = mongo_local
Data storages
s3
Sources
- ID = src0
- url = udp://239.8.8.8:1234
- hls.stream_type = Empty field
- hls.bandwidth = 0
Demuxers
- ID = dmx0
- source = src0
- tracks → Track = vid0
- tracks → Track = aud0
Video transcoders
- ID = vtrc
- Decode encrypted = Empty field
- Source = dmx0:vid0
Video stream 1
- Outs → bitrate = 4500k
- Outs → codec = h264
- Outs → gop_size = 30
- Outs → width = 1920
- Outs → height = 1080
Video stream 2
- Outs → bitrate = 2500k
- Outs → codec = h264
- Outs → gop_size = 30
- Outs → width = 1280
- Outs → height = 720
Audio transcoders
- ID = atrc
- Decode encrypted = Empty field
- Source = dmx0:aud0
Audio stream 1
- Outs → bitrate = 128k
Muxers
Video stream 1
- bw = 4500000
- dir = video/high
- lang = Empty field
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = vtrc:vid0
- sources → master = ✔️
Video stream 2
- bw = 2500000
- dir = video/low
- lang = Empty field
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = vtrc:vid1
- sources → master = Empty field
Audio stream
- bw = 128000
- dir = audio/rus
- lang = ru
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = atrc:aud0
- sources → master = Empty field
Subtitles
- bw = 4000
- dir = sub/eng
- lang = en
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = dmx0:sub0?888
- sources → master = Empty field
The playlist URL in this example might look like this: http://streaming_server_address/dash/CH_MP4_TRANS/playlist.mpd
Example 3: single-bitrate TS HLS content recording without remuxing
Basic settings
- ID = CH_TS_SINGLE
- Container type = ts
- Recording window = 30m
- Index storage = mongo_local
Data storages
s3
Sources
- ID = src0
- url = udp://239.8.8.8:1234
- hls.stream_type = Empty field
- hls.bandwidth = 0
Demuxers
Empty fields
Video transcoders
Empty fields
Audio transcoders
Empty fields
Muxers
- bw = 2700000
- dir = bw2700000
- lang = Empty field
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = src0
- sources → master = ✔️
The playlist URL in this example might look like this: http://streaming_server_address/hls/CH_TS_SINGLE/variant.m3u8
Example 4: multi-bitrate TS HLS content recording without remuxing
Basic settings
- ID = CH_TS_ADAPTIVE
- Container type = ts
- Recording window = 30m
- Index storage = mongo_local
Data storages
s3
Sources
Source 1
- ID = src1
- url = udp://239.8.8.1:1234
- hls.stream_type = Empty field
- hls.bandwidth = 0
Source 2
- ID = src2
- url = udp://239.8.8.2:1234
- hls.stream_type = Empty field
- hls.bandwidth = 0
Source 3
- ID = src3
- url = udp://239.8.8.3:1234
- hls.stream_type = Empty field
- hls.bandwidth = 0
Demuxers
Empty fields
Video transcoders
Empty fields
Audio transcoders
Empty fields
Muxers
Stream 1
- bw = 3700000
- dir = high
- lang = Empty field
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = src1
- sources → master = ✔️
Stream 2
- bw = 2500000
- dir = med
- lang = Empty field
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = src2
- sources → master = Empty field
Stream 3
- bw = 1300000
- dir = low
- lang = Empty field
- encryptor.drm = Empty field
- encryptor.key_type = SD (by default)
- sources → source = src3
- sources → master = Empty field
The playlist URL in this example might look like this: http://streaming_server_address/hls/CH_TS_ADAPTIVE/variant.m3u8
Used Data Types and Formats
URL Format
- udp://<address>:<port>[?<URL params>] — UDP stream; unicast and multicast addresses are supported. Optional URL parameters can be appended (see the examples below).
- http://<address>[:<port>]/<path> — HTTP server.
- file://<path> — a file on the local file system; the path must be absolute.
Examples:
file:///video/spart.ts
udp://10.1.0.2:7556
udp://233.1.1.0:5000?socket_buffer=4194304&interface=10.5.1.34
udp://239.65.2.1:5000?source_ip=185.5.40.63
http://wolf/spart.ts
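The URL formats above can be split into their components with the standard library; here is a sketch. This is illustrative Python, not SmartMEDIA code; `parse_source_url` is an invented helper, and the parameter names shown (socket_buffer, interface, source_ip) are taken from the examples above.

```python
from urllib.parse import urlsplit, parse_qs

def parse_source_url(url: str):
    """Split a source URL into (scheme, host, port, params).

    Works for udp://, http:// and file:// URLs; query parameters
    such as socket_buffer, interface or source_ip are returned as
    a flat dict of strings.
    """
    parts = urlsplit(url)
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return parts.scheme, parts.hostname, parts.port, params

print(parse_source_url(
    "udp://233.1.1.0:5000?socket_buffer=4194304&interface=10.5.1.34"))
print(parse_source_url("http://wolf/spart.ts"))
```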
Specific Field Types
- Time. Allows you to specify a duration in convenient units of time. If no postfix is specified, the value is treated as seconds.
d — days
h — hours
m — minutes
s — seconds
Examples: 1d — 1 day; 2h — 2 hours; 3s — 3 seconds.
- Data size. Allows you to specify a size using multipliers (kilo-, mega-, gigabytes). If no postfix is specified, the value is treated as megabytes.
k, K — kilobytes
m, M — megabytes
g, G — gigabytes
Examples: 1k, 2g, 3M.