OTT Content Delivery

Eyevinn Technology
10 min read · Mar 21, 2019
Written by: Boris Asadanin, Streaming Media Consultant and Partner at Eyevinn Technology


Assumption: Distribution of HTTP video streaming services should be no different from any other web-based service. You just need an ordinary HTTP web server from which clients can download the content. Unfortunately, this assumption only scratches the surface of HTTP video distribution. OTT content delivery relies on far more sophisticated technology to ensure low-latency, good-quality video streaming services.

This article is the first of two describing OTT content delivery networks and distribution methods. The second article describes multi-CDN and CDN selection, and CDN types.

This publication is part of a series of articles describing the principles of the technology behind video streaming. It could be read without any prior knowledge on the subject.


Remember traditional linear analogue broadcast TV. Unless the TV antenna on your roof had fallen down in a storm (or under too many crows just hanging out on it), there were rarely any quality issues. The broadcast was always on and the quality was reliable. To be fair, there was a very narrow selection of channels and the quality was SD grade, but it worked come rain or shine.

Since the beginning of streaming video content over IP networks, the main goal has always been to achieve the uptime and quality of the traditional analogue linear TV service. So far, we are still behind. So why are we pushing IP distribution models despite all the issues?

Fig 1. “Hey Alfred, I hear they are going over to IP distribution. Where are we going to hang out then?”

Let us get the perspective straight. IP based TV has enabled services we could only dream of before. The IP medium has let us explore new levels of quality in HD, 4K, and most recently HDR. It fits virtually any number of channels, and of course all the other service models such as network personal video recording (nPVR), Video on Demand (VOD), catch-up or start-over TV, and even synchronized camera angles. Last but not least, what we primarily associate OTT with: streaming to any screen at any place. Watching Netflix on your mobile phone on your way to work would be hard to achieve over the old analogue network.

So how is video distributed across the IP networks?

CDN — Content Delivery Network

A Content Delivery Network, CDN, is exactly what it is called: a network delivering content. A CDN can of course deliver any type of content over HTTP, but for video streaming it is essential in order to maintain quality for a large number of viewers.

In its simplest description, a CDN is a network of caching servers that work together to serve content closer to the users.

Developing a CDN

Imagine starting up a new local video streaming service from home. You would of course invest in reliable enterprise grade hardware to host web servers which would successfully stream content to anyone in the region.

As the service grows you will have to invest in more hardware to accommodate your growing subscriber base. But a growing subscriber base also attracts subscribers from far away. You find people wanting to watch your content from Berlin and Rome, but also from Hong Kong, Los Angeles, and Sydney. And this is where you start experiencing problems. The long geographical distance forces your streamed content to pass through many routers and unknown networks. The risk of the content passing a congested router or link, suffering packet loss, or simply hitting a broken router on the way becomes imminent.

Fig 2. Some likely and some less likely causes of broken network links.

So, the natural way to overcome this is to deploy additional web servers in remote locations (Hong Kong, Los Angeles, and Sydney) to minimize the traffic hops, latency, and the risk of broken routers or links. To function properly, the remote servers need local storage to cache your content and to stream it to their region. Any time subscribers from these regions want to access your content, their HTTP requests will be routed to the remote servers that will serve the content quickly and reliably.

To populate the remote caches, live content is transferred from your central servers only once, and VOD content may be transferred at any time.

Congratulations! With your original central servers at home and your remote caching and streaming servers (Hong Kong, Los Angeles, and Sydney) you have built your own CDN!

The follow-up article on content delivery describes why this workflow for building your own CDN is highly infeasible and very unusual.

Time for new terminology which will be used throughout this article series:

· Origin — Where the content originates. This is the location from which all other servers request the content, directly or indirectly.

· POP — Point of Presence. All remote caching and streaming locations are called POPs. A POP can consist of any number of caching and streaming servers organized for redundancy or performance.

· Origin Shield — A POP close to the origin used to protect the origin from heavy load. All other POPs connect to the origin shield instead of reaching out to the real origin. The Origin Shield adds another caching layer/hierarchical step within the CDN and prevents origin overload.

Functions of a CDN

Understanding what a CDN is, we can now grasp its main functions.

Origin Shield — Protecting the Origin

CDNs generally hide origin servers by making sure that clients never reach the actual origin server. All requests end up in a CDN POP, which can usually serve the requested asset directly from its local cache. On a cache miss, the CDN POP downloads the asset from the origin server and keeps it in local storage for upcoming requests. The main benefits are security and performance.

An origin shield brings an even greater value to shielding the origin server. An origin shield hides the origin also from the CDN POPs by creating an intermediate cache layer between the CDN POPs and the origin. The added value is performance, especially with a correctly dimensioned shield storage.

Fig 3. The origin shield offloads the origin by creating a middle layer between the CDN POPs and the origin.
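The miss-and-fill behavior described above can be sketched in a few lines. This is a minimal illustration, not a real CDN: the class names, the asset path, and the three POP locations are all made up, and real caches would also handle eviction and TTLs.

```python
# Hierarchical caching sketch: a POP checks its own cache first, then asks
# the origin shield, which checks its cache before finally hitting the origin.

class Origin:
    def __init__(self):
        self.reads = 0                 # count how many requests reach the origin

    def get(self, path):
        self.reads += 1
        return f"content of {path}"    # stand-in for the real asset bytes


class Cache:
    def __init__(self, name, upstream):
        self.name = name
        self.upstream = upstream       # next layer toward the origin
        self.store = {}                # asset path -> cached content

    def get(self, path):
        if path in self.store:         # cache hit: serve locally
            return self.store[path]
        content = self.upstream.get(path)  # cache miss: fetch upstream
        self.store[path] = content         # keep a copy for future requests
        return content


origin = Origin()
shield = Cache("origin-shield", upstream=origin)
pops = [Cache(city, upstream=shield) for city in ("hongkong", "la", "sydney")]

# All three POPs request the same asset, but the origin is read only once:
# the shield absorbs the second and third misses.
for pop in pops:
    pop.get("/video/master.m3u8")
```

Without the shield layer, each of the three POP misses would have reached the origin directly; with it, `origin.reads` stays at 1.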


Performance

With POPs located close to the users, the content travels only a few hops before reaching its destination. This decreases latency and increases how many subscribers can be accommodated.

Throughput is another aspect of performance, achieved by having multiple POPs able to stream content. More about traffic peaks in the Scalability section below.


Redundancy

With multiple POPs it is almost impossible to knock the streaming service down. If one POP goes offline, the traffic load is distributed across other nearby POPs, preventing any downtime. With only a few selected CDN POPs reaching the origin servers, the risk of malicious traffic reaching your origin is minimal. And even if the origin server goes down for a moment, most of the service stays online thanks to the content cached in the CDN POPs.


Security

A CDN naturally comes with security benefits. By definition, a higher number of POPs is less susceptible to DDoS attacks, since the traffic is spread across multiple server locations. Most CDN services also have additional security features built in, such as bot mitigation, web application firewalls, and rate limiting, which neutralize threats out in the network rather than at the very doorstep of the origin servers and their local firewalls.

Fig 4. DDoS attack distributed between CDN POPs. Firewalls within each CDN POP defeat threats before they reach the origin servers.


Scalability

Scalability is a key function of a CDN, ensuring that any traffic fluctuation is handled. With a cost proportional to the number of bytes delivered, there is no reason to pay for more traffic than is actually used.

Nevertheless, CDNs must be built to handle any traffic peak. Cloud CDNs are usually built for multi-tenancy, with many times the capacity any single streaming customer would ever require. Private self-built CDNs without any backup must also be built to handle any upcoming traffic peak. This may leave much capacity idle at normal traffic loads, which is unwanted.

Building for perfect scalability and performance is key to OTT services. This will be further discussed in the follow up article OTT Content Delivery part 2.

CDN Routing

For the typical streaming service (and any online service in general) there is a web portal, like a homepage, where clients browse through the available content, eventually selecting and requesting an asset. This is the central point for content requests. When a request comes in, the portal backend can assess where the requesting client is located and redirect the client to the most appropriate server. Were you to access our video service from Hong Kong, for instance, you would go to our web portal and browse through the content. Once you have selected an asset, the system locates your IP address and redirects you to the Hong Kong POP.

There are generally three ways to do CDN routing: DNS based routing, Anycast routing, and HTTP redirects. Each is described below.

DNS Based Routing

DNS based routing leverages DNS technology to relay a content asset request to the optimal CDN node. Note that the focus here is not on describing the DNS protocol itself, only the routing principles of DNS based routing.

Fig 5. DNS based routing flow.
  1. The client browses to the video service portal of an OTT video service. Let us use Eyevinn’s Streaming Tech TV+ service as an example.
  2. When the user selects a video asset, the user’s host sends a DNS query to its local DNS.
  3. The local DNS relays the query to the service’s authoritative DNS. Here something interesting happens. The authoritative DNS observes the “plus” part of the request URL, understanding that this is a video asset which should be served by the CDN related to the video service. The authoritative DNS server responds with the hostname of the related CDN.
  4. From this point on, the request is relayed to the Global CDN’s private DNS infrastructure. The private DNS of the Global CDN must now find a suitable streaming server node (POP) that can stream the asset to the user. Depending on its rules, the DNS will usually answer with the IP address of the closest streaming node, achieving the highest streaming performance.
  5. The local DNS passes the IP address of the appointed Global CDN POP on to the client.
  6. The client sends the asset request straight to the appointed Global CDN POP, which for OTT streaming responds with the manifest file of the asset.
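The POP selection in step 4 can be sketched as a nearest-neighbor lookup. This is a simplified illustration under assumed inputs: the POP coordinates and the client location are invented, the distance is a flat-plane approximation rather than a true great-circle distance, and a real CDN DNS would also weigh load, cost, and node health.

```python
# Sketch of a CDN's private DNS choosing the geographically closest POP.
import math

# Hypothetical POP locations as (latitude, longitude).
POPS = {
    "hongkong": (22.3, 114.2),
    "losangeles": (34.1, -118.2),
    "sydney": (-33.9, 151.2),
}

def nearest_pop(client_lat, client_lon):
    """Return the name of the POP closest to the client's resolver."""
    def dist(pop_name):
        lat, lon = POPS[pop_name]
        # Rough planar distance; good enough to rank clearly separated POPs.
        return math.hypot(lat - client_lat, lon - client_lon)
    return min(POPS, key=dist)

# A resolver near Hong Kong should be steered to the Hong Kong POP:
print(nearest_pop(22.32, 114.17))   # -> hongkong
```

The DNS response in step 4 would then carry the IP address registered for the winning POP.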

Anycast Routing

Anycast routing is based on the network protocol Anycast. Anycast is a “one-to-one-of-many” protocol, meaning that a connection is established between a client and any one server in a collection. All servers in the collection share the same IP address, so from the client’s perspective there is only one destination but multiple routing paths to it. Using the BGP protocol, the network determines the best route to the destination. The route selection can be based on several parameters such as least hops, lowest cost, or least congestion. In a CDN application with geographically distributed POPs, the selection usually points to the nearest POP.

Fig 6. Client perspective of an Anycast routed CDN; one server with multiple routes. The BGP protocol selects the shortest path to the destination.

The initial steps, before the DNS request reaches the CDN, are the same as in DNS based routing. The Global CDN’s private DNS always responds with the IP shared by all POPs in the CDN. The routing then lands the client at the closest or best performing POP.
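The Anycast idea can be illustrated with a toy route table. Everything here is invented for illustration: the anycast IP, the AS numbers, and the route names are hypothetical, and real BGP best-path selection involves many more tie-breakers than AS-path length.

```python
# Every POP announces the same IP, so from the outside there is one
# destination reachable over several routes. A BGP-style selection then
# prefers the route with the shortest AS path.

ANYCAST_IP = "203.0.113.10"   # hypothetical address shared by all POPs

# Each entry: route name -> AS path toward the POP announcing ANYCAST_IP.
routes = {
    "via-hongkong-pop": ["AS100", "AS200"],
    "via-la-pop": ["AS100", "AS300", "AS400", "AS500"],
    "via-sydney-pop": ["AS100", "AS300", "AS600"],
}

def best_route(route_table):
    # Shortest AS path wins, mirroring BGP's main path-selection criterion.
    return min(route_table, key=lambda name: len(route_table[name]))

print(best_route(routes))   # -> via-hongkong-pop
```

With geographically distributed POPs, the shortest path typically leads to the nearest POP, which is exactly the behavior the article describes.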

HTTP Redirect Routing

HTTP redirect routing solutions often include a “Request Dispatcher/Router” that uses HTTP 302 redirects as the client redirection technique. The Request Dispatcher defines how a POP in the collection is selected. For instance, content awareness ensures that a client is redirected to a POP currently caching the requested asset. Combining multiple rules can produce quite advanced selection criteria: redirect to the closest POP that holds the content formatted for my type of client and desired bitrate. Note that in these cases the main goal may not be the few milliseconds of latency saved, but cache efficiency: instead of having all caches hold the same set of content, these more intelligent rules let different POPs cache different sets of content, achieving the best cache hit ratio and thereby minimizing downloads from the backend.

Fig 7. HTTP Redirect based routing flow.

Note that the initial steps 1–3 are the same as in the DNS based routing workflow. It is when the client request reaches the CDN, at step 4, that the workflow differs.

4. The request is relayed to our private CDN’s Request Dispatchers, fronted by a load balancer for scalability and availability. The Request Dispatcher of our private CDN must now find a suitable streaming server node (POP) that can stream the asset to the user. Depending on its rules, the Request Dispatcher answers with an HTTP 302 redirect to the most appropriate streaming node, achieving the highest streaming performance.

5. The client follows the 302 redirect to the appointed private CDN POP.

6. The client sends the asset request straight to the appointed private CDN POP, which for OTT streaming responds with the manifest file of the asset.
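A content-aware dispatcher of the kind described above can be sketched as follows. The POP hostnames, cached asset paths, and proximity ordering are all hypothetical; a real dispatcher would also consult device type, bitrate, and node health before composing the 302 response.

```python
# Sketch of a Request Dispatcher: redirect the client with HTTP 302 to the
# nearest POP that already caches the requested asset (content awareness).

# Hypothetical view of which assets each POP currently caches.
POP_CACHES = {
    "pop-hongkong.example.com": {"/vod/a/master.m3u8"},
    "pop-la.example.com": {"/vod/b/master.m3u8"},
    "pop-sydney.example.com": {"/vod/a/master.m3u8", "/vod/b/master.m3u8"},
}

def dispatch(asset_path, preferred_pops):
    """Return (status code, Location header) for the redirect response.

    preferred_pops is the dispatcher's proximity ordering for this client;
    the first POP already caching the asset wins, optimizing cache hits."""
    for pop in preferred_pops:
        if asset_path in POP_CACHES.get(pop, set()):
            return 302, f"https://{pop}{asset_path}"
    # No POP caches it yet: fall back to the nearest POP, which will fetch
    # the asset from the origin (shield) on its cache miss.
    return 302, f"https://{preferred_pops[0]}{asset_path}"

status, location = dispatch(
    "/vod/a/master.m3u8",
    ["pop-la.example.com", "pop-sydney.example.com", "pop-hongkong.example.com"],
)
# The LA POP does not cache asset "a", so the client is sent to Sydney,
# trading a few milliseconds of proximity for a guaranteed cache hit.
```

This is the trade-off mentioned earlier: the dispatcher skips the nearest POP when a slightly farther one can serve straight from cache.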

Final Notes

Within the media streaming industry, CDNs are quickly becoming a commodity, reduced to delivery components in the overall media distribution chain. Multiple CDNs are used in parallel to ensure availability and reduce cost.

In the next CDN article we go through the various types of CDNs, multi- and hybrid-CDN setups, CDN selection, and business models.

Eyevinn Technology is the leading independent consulting firm specializing in video technology and media distribution, and proud organizer of the yearly Nordic conference Streaming Tech Sweden.



Eyevinn Technology

We are consultants sharing the passion for the technology for a media consumer of the future.