How to render and export video content for web

Lewis Wake
Wednesday 6 April 2016

You’ve filmed your video, and you’ve gone to great technical lengths to edit it. Now you need to get your film from the editing suite to the web. Rendering a video for the web is often overlooked and rushed, but there are some essential terms and concepts you must be aware of to make sure your video is in the best condition possible.

1. High definition

If you’ve got this far, it is likely that the modern camera or smartphone you used to record your footage was automatically creating high definition clips. High definition cameras typically record video at either 720p or 1080p. You might have seen these terms floating around when viewing videos on YouTube or shopping for a new television. Let’s explore what they mean.

It all comes down to pixel density and resolution. Here are some typical web resolution standards you’ll encounter:

720p means 1280 by 720 pixels. This is the most common high definition resolution, and the native resolution recorded by most cameras made after 2009. When you’re watching a YouTube video in HD, this is most likely the resolution you’re watching it in: it is high quality, yet quicker to stream than 1080p.

1080p means 1920 by 1080 pixels. This is the resolution you’ll find as a selling point on a lot of televisions, and it is also the resolution displayed when you’re watching a movie on a Blu-ray disc. Most cameras made after 2012 record at this resolution. For video uploaded to the web, this is currently the high resolution standard.

4K is that new term you’ve probably seen floating around the tech world. It translates to 3840 by 2160 pixels: four times the pixel count of 1080p, with a horizontal resolution approaching 4,000 pixels (which is where the name comes from). In my opinion, the human eye can’t precisely determine quality above this resolution. It is the highest quality obtainable from modern cameras on the market, and the iPhone 6S is the latest smartphone to record in 4K. The technology is still new, and it is unlikely anyone will watch your web video in 4K as it takes an age to load, even on our university’s network. Give it a few years to catch up and we’ll soon see 4K become the new high definition standard.

It is important, when rendering a video, not to go above the resolution you recorded it at. Just because you have recorded footage at 720p does not mean you can stretch and distort it to become a 4K video. Whatever resolution you’ve recorded your footage at, render it at that width and height.
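As a rough illustration, here is a minimal Python sketch for checking a clip’s native resolution before you render, so you never upscale beyond the source. It assumes you have ffprobe installed (part of the FFmpeg suite, which this post doesn’t otherwise cover), and the filename is hypothetical:

```python
import json
import subprocess

def native_resolution(path):
    """Ask ffprobe for the width and height of the first video stream."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", "-select_streams", "v:0", path],
        capture_output=True, text=True, check=True,
    )
    stream = json.loads(result.stdout)["streams"][0]
    return stream["width"], stream["height"]

# "my_footage.mov" is a hypothetical filename.
width, height = native_resolution("my_footage.mov")
print(f"Render at {width}x{height} -- no larger than the source.")
```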

2. Frame rate

Frame rate is the number of still images that are displayed in your video per second. The higher the frame rate, the smoother the motion in your video will be.

Frame rate standards have adapted as television has evolved over the decades. If you have been unaware of frame rate up until this point, choosing a frame rate for your video from a list of varying numbers may seem daunting.

Typically, in animation and film, the frame rate is clocked at 24 frames per second (fps). However, it is now common for digital cameras to record at up to 60fps, and YouTube supports this frame rate; it is becoming the new standard for high definition video. But again, unless you’ve already recorded your footage at 60fps, you can’t render out a video at 60fps.

Can you see the difference between a video at 30fps and a video at 60fps?

My iPhone 5S camera can record slow-motion footage at 120fps. The slow-motion footage it captures is automatically slowed down four times to play back at 30fps, which is why it appears so seamless.

If you were to create slow-motion footage from just a regular camera, you’d notice how jumpy it is: slowing down 30fps footage four times gives you a frame rate of 7.5fps, which is not exactly a seamless viewing experience. The most up-to-date iPhones can record slow-motion footage at 240fps.
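The slow-motion arithmetic is simple enough to sanity-check yourself. Here is a tiny Python sketch using the figures from this post:

```python
def playback_fps(recorded_fps, slowdown_factor):
    """Effective frame rate after slowing footage down for slow motion."""
    return recorded_fps / slowdown_factor

# 120fps footage slowed down four times still plays back at a smooth 30fps...
print(playback_fps(120, 4))  # 30.0 -- seamless slow motion
# ...whereas 30fps footage slowed down four times drops to a jumpy 7.5fps.
print(playback_fps(30, 4))   # 7.5 -- not a seamless viewing experience
```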

As with choosing your resolution, it is best to match the original source when choosing frame rate.

3. Field order

You might come across some videos that are at 1080p resolution, and some at 1080i. You’ll wonder, “what’s the difference between the ‘p’ and the ‘i’?”

The ‘p’ stands for ‘Progressive’, and the ‘i’ stands for ‘Interlaced’. But which one is better for my video?

All video displays, whether analog or digital, work by breaking a single frame of video into individual lines of horizontal resolution running across the screen. Standard definition NTSC and PAL are both interlaced video formats, as opposed to high definition video, or video displayed on a computer screen, which are progressive-scanned video formats. With progressive scanning, these lines are drawn one at a time, from the top of the screen to the bottom.

Interlaced video, including NTSC and PAL, works differently. When you record footage with your camcorder, each video frame is broken down into two fields, each containing half of the total lines of resolution in the frame. The first field is recorded, then the second, and both are laid down to tape, one after the other, so that both fields constitute one frame. When you play the tape back, a television monitor displays each recorded frame in two passes, first drawing field 1, then drawing field 2.

Field order refers to the order in which video fields are recorded from your video equipment to your hard disk. If you remember that video fields come one after another in time, as if playing 60 “frames” per second, it becomes a little easier to understand.

There are two options for field order:
Upper (Field 2 is dominant, so the second field is drawn first.)
Lower (Field 1 is dominant, so the first field is drawn first.)

Basically, progressive scanning is better for web videos, and interlaced footage is better suited to broadcast television.
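If you are stuck with interlaced source footage that is destined for the web, it will usually need deinterlacing first. As a hedged sketch, assuming FFmpeg is installed and with hypothetical filenames, its yadif filter converts interlaced fields into progressive frames:

```python
import subprocess

# Deinterlace 1080i footage into progressive frames with FFmpeg's yadif filter.
# Both filenames here are hypothetical.
subprocess.run(
    ["ffmpeg", "-i", "interlaced_source.mov",
     "-vf", "yadif",   # "yet another deinterlacing filter": fields -> full frames
     "-c:a", "copy",   # pass the audio stream through untouched
     "progressive_output.mp4"],
    check=True,
)
```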

4. Format

Just like image files, there are dozens of video formats to choose from when rendering your video file. You may have seen common file types including ‘mp4’, ‘mov’, and ‘avi’. But what do they all mean for your video?

MP4 is, in essence, the official filename extension for MPEG-4 Part 14, a standard specified as part of MPEG-4: a method of compressing digital audio and visual data introduced by the Moving Picture Experts Group (MPEG) in late 1998. MP4 is a multimedia container format most commonly used to store digital video and audio streams, especially those defined by MPEG, but it can also hold other data such as subtitles and still images.

MOV was originally developed by Apple as a file format for its QuickTime movie player. The MOV format offered a lot of advantages for everyday use, but its proprietary nature was a major hindrance. The MP4 file format was later developed as an industry standard, based so heavily on the MOV format that the two were practically identical at first; the changes that were introduced were very minor and mostly involved data-tagging information.

AVI (Audio Video Interleave) is also a multimedia container format, introduced by Microsoft in 1992 as part of its Video for Windows technology in response to Apple’s MOV format. AVI files can contain both audio and video data in a container that supports synchronous audio-with-video playback.

Both Apple and Microsoft systems can render video files as MP4, and for uploading video footage to the web, this is currently the ideal format to choose.
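Putting the pieces together, a typical web export might look something like the sketch below. It again assumes FFmpeg; the filenames and quality setting are illustrative rather than prescriptive, and the source’s own resolution and frame rate are kept, as recommended above:

```python
import subprocess

# Render an H.264 MP4 for the web, keeping the source resolution and frame rate.
# "edited_master.mov" is a hypothetical edited master file.
subprocess.run(
    ["ffmpeg", "-i", "edited_master.mov",
     "-c:v", "libx264",            # H.264 video, the most widely supported codec
     "-crf", "20",                 # quality target (lower number = higher quality)
     "-c:a", "aac",                # AAC audio, the usual pairing for MP4
     "-movflags", "+faststart",    # allow playback to begin before the full download
     "web_upload.mp4"],
    check=True,
)
```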
