Both film and television present a sequence of still images as moving video at a fixed frame rate. Cinema runs at 24Hz (24 frames per second), while TV runs at 50Hz for PAL or 60Hz for NTSC. 3D games like Day of Defeat don't run at a fixed frame rate per se, as anyone with an aging graphics card will testify - frames are rendered as quickly as possible.
Ideally, games are rendered at the refresh rate of your monitor: 60-75Hz for LCD and often 85-100Hz for CRT displays. This is where Vertical Sync (V-Sync) comes into play. V-Sync ensures that the frame rate either matches your monitor's refresh rate, or drops to an even fraction of it in more complex scenes. This avoids tearing, where a new frame is swapped in part-way through a refresh, so the monitor briefly displays portions of two different frames at once.
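To illustrate how V-Sync forces the frame rate down to an even fraction of the refresh rate, here's a minimal sketch. The function name and the simple ceiling model are our own illustration, not any driver's actual logic:

```python
import math

def vsync_frame_rate(render_fps: float, refresh_hz: float) -> float:
    """Effective displayed frame rate once V-Sync is enabled.

    If a frame takes longer than one refresh interval to render,
    it must wait for the next vertical blank, so the displayed
    rate drops to refresh/2, refresh/3, and so on.
    """
    render_time = 1.0 / render_fps
    refresh_interval = 1.0 / refresh_hz
    # Whole refresh intervals each frame occupies (at least one).
    intervals = max(1, math.ceil(render_time / refresh_interval))
    return refresh_hz / intervals

print(vsync_frame_rate(90, 60))  # capped at the 60Hz refresh -> 60.0
print(vsync_frame_rate(45, 60))  # misses one vblank -> 30.0
print(vsync_frame_rate(25, 60))  # misses two vblanks -> 20.0
```

This is why, with V-Sync on, a scene that renders at 45fps on a 60Hz monitor is displayed at 30fps rather than 45.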
When shooting footage on film stock, each frame is exposed for the same length of time as every other frame - a constant frame rate. When the camera tracks or pans quickly (think action sequence), or if objects in the scene move quickly (think speeding car), Motion Blur occurs.
Let's consider the latter example: a speeding car travelling at 200km/h. Each frame is exposed for 1/24th of a second*, and our car is moving at roughly 55.6 metres per second. For each frame, the car moves about 2.3 metres, so while the nose of the car might be crossing the finishing line at the start of one frame's 0.042s exposure, by the end of that 1/24th of a second it has moved a full half car-length. This is the cause of the motion blur.
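The arithmetic above works out like this (the figures are the article's example numbers):

```python
# How far does a 200km/h car travel during one film frame's exposure?
speed_kmh = 200
speed_ms = speed_kmh / 3.6           # ~55.6 metres per second
exposure = 1 / 24                    # ~0.042 seconds per frame

distance_per_frame = speed_ms * exposure
print(f"{distance_per_frame:.2f} m")  # 2.31 m
```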
* Reader Dan Vetanovetz kindly emailed us to point out that, because of the mechanics of film cameras, this isn't technically accurate. He supplied us with updated data, but the principle is the same. We'd rather not confuse people with the extended explanation - suffice to say that when objects move faster than the film, you get motion blur. Thanks Dan!
The result is similar to what can be achieved with conventional anti-aliasing. However, where traditional anti-aliasing takes extra samples in space, motion blur takes extra samples in time - you could think of it as anti-aliasing with respect to time rather than space.
To give you an idea of what motion blur does to a scene, we grabbed some screenshots from the movie linked below. In essence, Valve are attempting to render images over a finite period of time, rather than rendering the scene at single instants in time - to make the rendering of frames more movie-like. These screenshots are actually pretty deceptive, as each still image appears very blurred. However - due to the way that the human eye works - the scene appears to be very smooth when it is in motion. Take a look for yourself.
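Rendering "over a finite period of time" is what an accumulation buffer does: the scene is rendered several times at slightly different moments within the frame's exposure, and the results are averaged. Here's a toy one-dimensional sketch with hypothetical names and arbitrary time units - an illustration of the principle, not Valve's actual code:

```python
def accumulate_frame(render_scene, t_start, exposure, samples=8):
    """Average several renders taken across the exposure interval.

    render_scene(t) stands in for a real framebuffer: it returns
    a list of pixel brightness values at time t.
    """
    width = len(render_scene(t_start))
    accum = [0.0] * width
    for i in range(samples):
        t = t_start + exposure * i / samples
        for x, value in enumerate(render_scene(t)):
            accum[x] += value
    return [v / samples for v in accum]

# Toy scene: a single bright pixel moving 2 pixels per time unit.
def scene(t):
    pixels = [0.0] * 8
    pixels[int(2 * t) % 8] = 1.0
    return pixels

# Over an exposure of one time unit, the bright pixel smears
# evenly across the two positions it occupied during the exposure.
blurred = accumulate_frame(scene, 0.0, 1.0)
print(blurred[:3])  # [0.5, 0.5, 0.0]
```

More samples per frame give a smoother blur, at the obvious cost of rendering the scene that many times.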
There are existing methods of approximating real-time motion blur, but Valve feels that neither can offer the same level of realism as the Accumulation Buffer mode they use. We have a short video clip showing one of the methods they chose not to use, known as Frame Feedback. This re-uses images from older frames in real time, but it simply results in a laggy mess that doesn't work well at all. Download the video, otherwise known as "When Motion Blur Goes Wrong".
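To see why Frame Feedback looks laggy, consider what re-using old frames amounts to: each output frame is a blend of the new frame and the previous output, so bright objects leave an exponentially decaying ghost behind them. A hypothetical sketch (the blend weight is purely illustrative):

```python
def frame_feedback(frames, feedback=0.6):
    """Blend each incoming frame with the previous *output* frame.

    This is the source of the 'laggy' trail: old content never
    disappears instantly, it only fades away over several frames.
    """
    prev = None
    for frame in frames:
        if prev is None:
            out = list(frame)
        else:
            out = [(1 - feedback) * f + feedback * p
                   for f, p in zip(frame, prev)]
        prev = out
        yield out

# A pixel that lights up for one frame leaves a decaying ghost:
frames = [[1.0], [0.0], [0.0], [0.0]]
trail = [round(f[0], 3) for f in frame_feedback(frames)]
print(trail)  # [1.0, 0.6, 0.36, 0.216]
```

Unlike an accumulation buffer, which samples within a single frame's exposure, this smears information from previous frames forward in time - hence the trailing, smeary look in the clip.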
The second real-time method - known as Vector Motion Blur - is used in the ATI Dangerous Curves demo, which can be downloaded from ATI's demo page. It renders an image that stores a motion vector for each pixel on the screen. These vectors are then used in a post-process with a steerable filter kernel to blur along the direction of travel.
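The idea can be sketched in one dimension: store a per-pixel motion value, then average colour samples taken back along that motion. The function below is our own simplified illustration of the technique - a real implementation filters a 2D velocity buffer on the GPU:

```python
def vector_motion_blur(pixels, motion, taps=4):
    """Blur a 1D image along per-pixel motion vectors.

    pixels: list of brightness values.
    motion: per-pixel displacement (in pixels) over the exposure.
    Averages `taps` samples stepped back along the motion,
    clamping at the image edges.
    """
    n = len(pixels)
    out = []
    for x in range(n):
        total = 0.0
        for i in range(taps):
            # Step back along the motion vector in equal fractions.
            sx = int(x - motion[x] * i / taps)
            sx = min(n - 1, max(0, sx))
            total += pixels[sx]
        out.append(total / taps)
    return out

# A single bright pixel, with everything moving right by 2 pixels
# per frame, gets smeared along its direction of travel:
image = [0, 0, 0, 1, 0, 0, 0, 0]
print(vector_motion_blur(image, [2] * 8))
```

Because it only needs the final frame plus a velocity buffer, this is far cheaper than re-rendering the scene several times - which is exactly why its limitations, described below, are frustrating.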
The problem is that it's too specialised for anything other than demos - game scenes are considered too complex, since player interaction can drastically change what is on the screen from one frame to the next. You're forced into breaking the scene up into foreground and background and then compositing them back together after the calculations have been done. Also, this method doesn't account for motion along curved paths, so motion blur will not look right if you're moving in anything other than a straight line.