MDI (RFC 4445)

MDI (Media Delivery Index), defined in RFC 4445, is an optional but useful indicator of IP network quality specifically for constant-bitrate transport stream delivery. It provides two metrics: DF (Delay Factor) and MLR (Media Loss Rate).

MDI is primarily useful when configuring and validating IP networks carrying multicast streams, where jitter has the greatest potential impact. For RF inputs, jitter is minimal and MDI is less relevant.

Note

MDI is only meaningful for constant-bitrate (CBR) streams. For variable-bitrate streams, DF and MLR values will fluctuate unpredictably. Chaotic MDI values on a known VBR stream are expected and not a fault.

DF (Delay Factor)

DF is the maximum variation, in milliseconds, of the virtual MDI buffer's fill level over the measurement interval: the spread between its highest and lowest occupancy, converted to time at the nominal media rate. It represents the minimum size of receive buffer that a device would need to handle the current network jitter without losing packets.

Jitter occurs when packets arrive at irregular intervals — sometimes in bursts, sometimes with gaps. The DF buffer absorbs these variations. If the burst is larger than the buffer, packets are dropped.
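
The virtual-buffer mechanics above can be sketched in a few lines of Python. This is an illustrative model of the RFC 4445 computation, not a reference implementation; the function name, the sample media rate, and the packet timings are all assumptions chosen for the example:

```python
# Sketch of the RFC 4445 Delay Factor over one measurement interval.
# Assumes a CBR stream: `media_rate` is the nominal rate in bytes/second;
# `arrivals` is a non-empty list of (timestamp_seconds, packet_bytes).

def delay_factor_ms(arrivals, media_rate):
    """DF = (max(VB) - min(VB)) / media_rate, expressed in milliseconds."""
    vb = 0.0
    vb_min = vb_max = 0.0
    prev_t = arrivals[0][0]
    for t, size in arrivals:
        vb -= media_rate * (t - prev_t)  # virtual buffer drains at the media rate
        prev_t = t
        vb_min = min(vb_min, vb)         # sample occupancy before the arrival...
        vb += size                       # ...then the packet fills the buffer
        vb_max = max(vb_max, vb)         # ...and sample again after it
    return (vb_max - vb_min) / media_rate * 1000.0

# Perfectly paced stream: one 1316-byte payload every 1316/rate seconds.
rate = 3_750_000 / 8  # hypothetical 3.75 Mbit/s CBR rate, in bytes/s
paced = [(i * 1316 / rate, 1316) for i in range(100)]
print(delay_factor_ms(paced, rate))  # one packet time of DF (~2.8 ms here)
```

With perfectly paced arrivals, DF stays at a single packet's duration; delivering the same packets as a back-to-back burst inflates DF to the duration of the whole burst, which is exactly the buffer a receiver would need to absorb it.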

What DF tells you:

  • A low, stable DF means the network delivers packets consistently. The receiver needs only a small buffer.
  • A high or fluctuating DF means there is significant jitter. The receiver needs a larger buffer to avoid drops.
  • DF > receiver buffer size → expect packet loss and CCE (continuity counter) errors.
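
The last rule of thumb reduces to simple arithmetic. The following is a deliberately simplified toy model (function name, packet duration, and buffer size are all made up for illustration), not a model of any real device:

```python
# Toy model: a receiver buffer sized in milliseconds of media drops
# whatever part of a back-to-back burst exceeds its capacity.

def packets_dropped(burst_packets, packet_ms, buffer_ms):
    """Packets lost when a burst hits a fixed-size receive buffer."""
    capacity = int(buffer_ms // packet_ms)  # whole packets the buffer can hold
    return max(0, burst_packets - capacity)

# A 20-packet burst, ~2.8 ms of media per packet, 40 ms receive buffer:
print(packets_dropped(20, 2.8, 40))  # 6 packets dropped
```

In DF terms: a 20-packet burst at 2.8 ms per packet corresponds to a DF of about 56 ms, well above the 40 ms buffer, so loss is expected.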

For live broadcast: minimize DF to reduce end-to-end latency. DF is reduced by reducing network jitter: provision sufficient bandwidth, prioritize broadcast traffic, avoid devices whose long queues delay even high-priority traffic, and do not share the broadcast path with bursty traffic.

Correlation with CCE: If DF fluctuates in sync with CCE errors, the receiving device's buffer is insufficient for the current network jitter. Either increase the buffer (if configurable) or improve the network.

MLR (Media Loss Rate)

MLR is the number of RTP/UDP payload packets lost in the network per measurement interval, typically one second (RFC 4445 also counts out-of-order packets as lost). MLR measures actual packet loss.

Under normal conditions, MLR should be zero. Any non-zero MLR means payload packets were lost, which will typically produce CCE errors. Causes include:

  • Excessive jitter exceeding the receiver buffer (DF-related loss)
  • Network failures, congestion, or link saturation
  • Hardware faults in network interfaces or switches
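
As a sketch of how a measuring device might derive MLR, lost packets can be counted from gaps in RTP sequence numbers. This Python illustration uses assumed inputs and a hypothetical function name; note that RFC 4445 also treats out-of-order packets as lost, which this simple gap counter does not distinguish, and it assumes in-order delivery within the interval:

```python
# Count lost packets in one measurement interval from the RTP sequence
# numbers actually received. RTP sequence numbers are 16-bit and wrap
# at 65536; a gap between consecutive numbers counts as lost packets.

def media_loss_count(seq_numbers):
    """Lost packets in one interval, inferred from received sequence numbers."""
    lost = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        gap = (cur - prev) % 65536  # modular difference handles wraparound
        if gap > 1:
            lost += gap - 1         # packets skipped between prev and cur
    return lost

print(media_loss_count([10, 11, 12, 15, 16]))  # 2 (packets 13 and 14 lost)
print(media_loss_count([65534, 65535, 0, 1]))  # 0 (wraparound, no loss)
```

Dividing the count by the interval length gives the rate; with a one-second interval, the count and the rate are the same number.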

MLR vs. CCE: These metrics are not always perfectly correlated. A "broken" stream being retransmitted inside RTP/UDP would show no MLR (the UDP packets arrive correctly) but still produce CCE errors (the TS inside is corrupt). Conversely, it is theoretically possible for MLR to be non-zero with no CCE, though this would indicate anomalous behavior in the measuring device.

Best practice: Eliminate all non-zero MLR before investigating other error causes. If MLR is non-zero, you have confirmed network-level packet loss, and this is the root cause to address first.

Differences between devices

RFC 4445 notes that MDI values can differ between devices from different manufacturers because the standard allows some latitude in implementation. Do not use DF and MLR values from different devices for direct comparison. Use TS Analyzer's MDI readings consistently within a single monitoring environment.