In March of this year, Zencoder, Aspera, Amazon Web Services and Netflix gathered in New York City to discuss cloud-based media workflow.
Netflix and Zencoder are built on AWS and use Aspera for high-speed file transfer to the cloud. All of these companies rely on the cloud for scalable storage and processing. For large files, however, the advantages of the cloud can be attenuated by sluggish transfer speeds over the open Internet.
This white paper presents four strategies for eliminating the bandwidth bottlenecks that crop up when using the cloud for transcoding. In the end, combining accelerated file transfer and parallel processing in the cloud results in a transcoding workflow that is up to 10x faster, and makes more efficient use of bandwidth, than on-premise solutions.
Rethinking Large Video Files in the Cloud: Strategies for Eliminating Bandwidth Bottlenecks
About Zencoder
Zencoder is the largest and fastest cloud-based encoding service in the world.
Its products enable content providers to quickly transcode and publish video to
consumers on virtually any Internet connected device, including web, mobile, and TV.
To learn more about Zencoder, visit http://zencoder.com
or contact us at info@zencoder.com
Eliminating Bandwidth Bottlenecks
Bandwidth is a problem for high-bitrate video. Cloud-based transcoding has enormous
advantages over on-premise transcoding: better ROI, faster speeds, and massive
scalability. But professional video content is often stored at 30-100 Mbps (or more),
resulting in very large files. Conventional wisdom holds that these files are too large to
transfer over the public Internet.
Figure 1. Common professional video formats.
Format              Bitrate   Size (per hour)  Transfer time (lossy TCP)[1]
DNxHD 36            36 Mbps   15.8 GB          1.65 hours
ProRes 422, SD PAL  41 Mbps   18.0 GB          1.88 hours
AVC Intra 100       100 Mbps  43.9 GB          4.59 hours
DNxHD 220           220 Mbps  96.7 GB          10.09 hours
ProRes 4444 HD      330 Mbps  145.0 GB         15.14 hours
[1] Actual TCP transfer at 21.8 Mbps over a 1000 Mbps connection with 10 ms delay and 0.1% packet loss.
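The arithmetic behind Figure 1 can be checked with a short script. This is our own sketch, not part of the original study; it assumes the table's apparent unit convention (decimal megabits for bitrate, 1 GB = 1024 MB for file size).

```python
def size_gb_per_hour(bitrate_mbps):
    """Size of one hour of video in GB, assuming 1 MB = 10^6 bytes and 1 GB = 1024 MB."""
    megabits = bitrate_mbps * 3600   # megabits in one hour of content
    megabytes = megabits / 8         # 8 bits per byte
    return megabytes / 1024

def transfer_hours(size_gb, throughput_mbps):
    """Hours to move size_gb at a sustained throughput in Mbit/s."""
    megabits = size_gb * 1024 * 8
    return megabits / throughput_mbps / 3600

# One hour of AVC Intra 100 over lossy TCP at 21.8 Mbit/s:
size = size_gb_per_hour(100)        # ~43.9 GB
hours = transfer_hours(size, 21.8)  # ~4.59 hours, matching the table
```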
This problem becomes even worse when considering the size of an entire content
library. If a publisher creates two hours of high-bitrate 50 Mbps video each day, they will
have a library of roughly 32,000 GB after two years. What happens if it becomes
necessary to transcode the entire library for a new mobile device or a new resolution?
Even though a scalable transcoding system can transcode 32,000 GB of content in just
a few hours, moving that content over the public Internet at 100 Mbps would take over
30 days.
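The library math works out as follows (a back-of-envelope sketch of our own; it assumes decimal GB, which is how the round 32,000 GB figure arises):

```python
hours_per_day = 2        # new content produced per day
bitrate_mbps = 50        # mezzanine bitrate
days = 2 * 365           # two years

gb_per_hour = bitrate_mbps * 3600 / 8 / 1000       # 22.5 GB per content-hour
library_gb = gb_per_hour * hours_per_day * days    # ~32,850 GB after two years

# Moving the whole library over a 100 Mbit/s link:
transfer_days = library_gb * 1000 * 8 / 100 / 86400  # ~30.4 days
```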
Fortunately, there are solutions to these problems, and major media organizations like
Netflix and PBS are embracing cloud-based services. In this chapter of 12 Patterns of
High Volume Video Encoding, we will discuss four techniques used by major publishers
to eliminate these bandwidth bottlenecks and efficiently transcode video in the cloud.
1. Store video content close to video processing
The easiest way to eliminate bandwidth bottlenecks is to locate hosting and transcoding
together. For example, if your transcoding system is run on Amazon EC2, and you
archive your video with Amazon S3, you have free, near-instant transfer between
storage and processing. (This isn't always possible, so if your storage and transcoding
are in separate places, the next point will help.)
Fig 2. Time to Transfer 45 GB of Video (in hours)
    TCP    4.59
    Cloud  0.10
Transfer time of 1 hour of 100 Mbit/s video. TCP achieves 22 Mbit/s transfer over a
1 Gbit/s line in typical network conditions (10 ms delay, 0.1% packet loss). In-cloud
transfer represents tested speeds of 1 Gbit/s between Amazon S3 and Zencoder.
To eliminate bandwidth bottlenecks, store video close to transcoding.
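In practice, co-location means the transcoding job simply references the object in storage instead of ingesting it over the public Internet. The sketch below builds such a job request in the style of Zencoder's v2 job API; the bucket, key, and output path are placeholders, and the request itself is not sent.

```python
import json

def build_job(bucket, key):
    """Build a transcode job request whose input already lives in S3."""
    return {
        "input": f"s3://{bucket}/{key}",  # in-cloud source: no upload over the open Internet
        "outputs": [
            {"url": f"s3://{bucket}/outputs/{key}.mp4"}  # results land back in S3
        ],
    }

job = build_job("my-archive", "mezzanine/episode-01.mov")
payload = json.dumps(job)
# POST this payload to the transcoding service (e.g. Zencoder's /api/v2/jobs
# endpoint with an API-key header) -- not executed here.
```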
2. Use accelerated file transfer
When transferring files over long distances, standard TCP transfer protocols like FTP
and HTTP under-utilize bandwidth significantly. For example, a 100 Mbps connection
may actually only transfer 10 Mbps over TCP, given a small amount of latency and
packet loss. This is due to TCP's congestion control, which scales back its sending
rate whenever it detects signs of congestion. This is useful for general
internet traffic, because it ensures that everyone has fair access to limited bandwidth.
But it is counter-productive when transferring large files over a dedicated connection.
When it is necessary to transfer high-bitrate content over the Internet, use accelerated
file transfer technology. Aspera and other providers offer UDP-based transfer protocols,
which perform significantly better than TCP over most network conditions.
If Aspera or other UDP-based file transfer technologies aren’t an option, consider
transferring files via multiple TCP connections to make up for some of the inefficiencies
of TCP.
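The multi-connection workaround can be sketched briefly. The helper below (our own illustration, not a production tool) splits a file into byte ranges, each of which would then be fetched over its own TCP connection using HTTP Range requests:

```python
def byte_ranges(size, connections):
    """Split `size` bytes into inclusive (start, end) ranges, one per connection."""
    chunk = size // connections
    ranges = []
    for i in range(connections):
        start = i * chunk
        # the last range absorbs any remainder
        end = size - 1 if i == connections - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

# A 45 GB file split across 8 connections; each (start, end) pair becomes a
# "Range: bytes=start-end" header on its own connection.
parts = byte_ranges(45 * 10**9, 8)
```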
Fig 3. Time to Transfer 45 GB of Video (in hours)
    TCP     4.59
    Aspera  0.20
Transfer time of 1 hour of 100 Mbit/s video. TCP achieves 22 Mbit/s transfer over a
1 Gbit/s line in typical network conditions (10 ms delay, 0.1% packet loss). Aspera
achieves 509 Mbit/s transfer over the same network conditions.
To maximize bandwidth utilization, use accelerated transfer technologies like
Aspera's UDP-based protocol, or multiple parallel TCP connections.
3. Transfer once, encode many
For video to be viewable on multiple devices over various connection speeds,
different video resolutions, bitrates, and codecs are needed. Many web and
mobile publishers create 10-20 versions of each file. So when doing high-volume
encoding, it is important that a file is only transferred once, and each transcode is
then performed in parallel.
When using this approach, you can effectively divide the transfer time by the
number of encodes to determine the net transfer time per encode. For example, if
transfer takes 6 minutes, but you perform 10 transcodes in the cloud, the net
transfer required for each transcode is only 36 seconds.
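The amortization above is trivial to express; this small helper (our own, for illustration) makes the "transfer once, encode many" accounting explicit:

```python
def net_transfer_seconds(transfer_seconds, num_outputs):
    """Amortize a single upload across all parallel encodes."""
    return transfer_seconds / num_outputs

# A 6-minute upload amortized over 10 parallel encodes:
per_encode = net_transfer_seconds(6 * 60, 10)  # 36 seconds per encode
```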
Fig 4. Time to Transfer and Encode 10 Outputs (in hours)
[Bar chart comparing total time for On-premise (serial), Cloud (TCP, parallel), and
Cloud (Aspera, parallel), broken down into transfer time and encoding time.]
Transfer and encoding time of 10 outputs of 1 hour of 50 Mbit/s video at 2x realtime
encoding speed.
To achieve maximum efficiency, transfer a high quality file to the cloud
only once, and then perform multiple encodes in parallel.
4. Syndicate from the cloud after transcoding
Whether you transcode in the cloud or on-premise, some bandwidth is required. With
cloud transcoding, a high-bitrate mezzanine file is sent to the cloud. With on-premise
transcoding, several transcoded files are sent directly to a CDN, publishing platform,
or partners like iTunes or Hulu. Both cases require outbound bandwidth, and in many
cases, syndicating from the cloud requires less overall bandwidth than syndicating
from an on-premise system.
For example, it is not uncommon for broadcast video to be syndicated at high bitrates. If
a broadcaster uses a 100 Mbps mezzanine format, and then syndicates that content to
five partners at 50 Mbps, it is clearly more efficient to only send the original file out of
the network for transcoding, and let the transcoding system handle the other transfers.
Scenario A: high-bitrate syndication
• Input file: 100 Mbps
• Syndicated output: ∑(50 + 50 + 50 + 50 + 50) = 250 Mbps
In this scenario, 150 Mbps of egress bandwidth is saved by syndicating content from
the cloud.
Fig 5. Comparing Bandwidth Requirements of On-Premise and Cloud Encoding
Not everyone syndicates high-bitrate content, of course. But even when encoding
low-bitrate web and mobile video, multiple small files add up. The example below
shows actual bitrates recommended for a major OTT video device, encoded to 10
bitrates, for both MP4 and HTTP Live Streaming.
Scenario B: low-bitrate syndication
• Input file: 50 Mbps
• Syndicated output: ∑(9 + 6 + 4.5 + 3.4 + 2.25 + 1.5 + 1.1 + 0.75 + 0.55 + 0.35 + 9
+ 6 + 4.5 + 3.4 + 2.25 + 1.5 + 1.1 + 0.75 + 0.55 + 0.35) = 58.8 Mbps
Even in this scenario, sending a 50 Mbps file to the cloud requires less overall
bandwidth than transcoding internally and delivering all 20 outputs separately, and the
original is retained in the cloud for subsequent transcoding.
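Both scenarios reduce to the same comparison: on-premise transcoding ships every rendition out of the local network, while cloud transcoding ships only the mezzanine. A small sketch of our own reproduces the figures above:

```python
def egress_saved(input_mbps, output_mbps_list):
    """Sustained egress (Mbit/s) saved by syndicating from the cloud."""
    on_premise = sum(output_mbps_list)  # every rendition leaves the local network
    cloud = input_mbps                  # only the mezzanine leaves
    return on_premise - cloud

# Scenario A: 100 Mbps mezzanine, five partners at 50 Mbps each
saved_a = egress_saved(100, [50] * 5)          # 150 Mbps saved

# Scenario B: 50 Mbps mezzanine, ten bitrates in two formats
bitrates = [9, 6, 4.5, 3.4, 2.25, 1.5, 1.1, 0.75, 0.55, 0.35]
saved_b = egress_saved(50, bitrates * 2)       # ~8.8 Mbps saved
```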
To save transfer bandwidth, syndicate content from an external encoding
system.
Conclusion
While transferring high-bitrate video can be a challenge, the correct approach to cloud
transcoding can mitigate these problems. High volume publishers should follow these
four basic guidelines:
‣ Store content in the cloud
‣ Use accelerated file transfer technology
‣ Ingest each file once to a parallel cloud transcoding system
‣ Syndicate directly from the cloud
By implementing these recommendations, media companies of all types can offload
video processing to the cloud, and realize the benefits of scale, flexibility, and ROI
provided by cloud transcoding.
Appendix: Bandwidth Growth
There is one important fundamental driver that is helping to solve the bandwidth
problem: cheaper and more abundant bandwidth. Nielsen's Law of Internet Bandwidth
has held accurately from 1983 to the present: high-end Internet connection speeds
increase by 50% per year. Video bitrates are growing at a slower rate, so sending
high-bitrate content over the Internet will become less of a problem over time.
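Nielsen's Law lends itself to a quick projection. The growth rate for video bitrates below (20% per year) is an illustrative assumption of ours, not a figure from this paper:

```python
def years_until_ratio(bandwidth_mbps, bitrate_mbps, target_ratio,
                      bw_growth=0.50, br_growth=0.20):
    """Years until connection speed exceeds target_ratio x video bitrate."""
    years = 0
    while bandwidth_mbps < target_ratio * bitrate_mbps:
        bandwidth_mbps *= 1 + bw_growth  # Nielsen's Law: +50% per year
        bitrate_mbps *= 1 + br_growth    # assumed slower bitrate growth
        years += 1
    return years

# Starting with a 100 Mbit/s line and a 100 Mbit/s mezzanine format, how long
# until the line is 10x the bitrate (one content-hour transfers in ~6 minutes)?
years = years_until_ratio(100, 100, 10)  # 11 years under these assumptions
```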
Fig 6. Average Internet Connectivity and Sample Streaming Bitrates
But it isn’t enough to wait for Internet bandwidth to improve. The right architecture,
covered in the body of this document, is still required to efficiently transcode high bitrate
content and large libraries.