In the field of video compression, a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly on how much the data can be compressed. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P and B. I-frames are the least compressible but do not require other video frames to decode. P-frames can use data from previous frames to decompress and are more compressible than I-frames. B-frames can use both previous and following frames for data reference to get the highest amount of data compression.
Summary


There are three types of pictures (or frames) used in video compression: I-frames, P-frames, and B-frames. An I-frame is an 'Intra-coded picture', in effect a fully-specified picture, like a conventional static image file. P-frames and B-frames hold only part of the image information, so they need less space to store than an I-frame, and thus improve video compression rates.

A P-frame ('Predicted picture') holds only the changes in the image from the previous frame. For example, in a scene where a car moves across a stationary background, only the car's movements need to be encoded. The encoder does not need to store the unchanging background pixels in the P-frame, thus saving space. P-frames are also known as delta-frames. A B-frame ('Bi-predictive picture') saves even more space by using differences between the current frame and both the preceding and following frames to specify its content.
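
To make the delta idea concrete, here is a minimal sketch, assuming tiny grayscale frames held as NumPy arrays; the helper names encode_delta and decode_delta are illustrative, not part of any codec, and real encoders work on motion-compensated blocks rather than raw per-pixel differences.

```python
import numpy as np

def encode_delta(previous, current):
    """Store only the per-pixel differences from the previous frame."""
    return current.astype(np.int16) - previous.astype(np.int16)

def decode_delta(previous, delta):
    """Reconstruct the current frame from the previous frame plus the stored delta."""
    return (previous.astype(np.int16) + delta).astype(np.uint8)

# A 4x4 "background" with one bright "car" pixel that moves one column to the right.
prev_frame = np.zeros((4, 4), dtype=np.uint8)
prev_frame[2, 1] = 200
curr_frame = np.zeros((4, 4), dtype=np.uint8)
curr_frame[2, 2] = 200

delta = encode_delta(prev_frame, curr_frame)
print("non-zero delta samples:", np.count_nonzero(delta), "of", delta.size)  # 2 of 16
assert np.array_equal(decode_delta(prev_frame, delta), curr_frame)
```
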
Pictures

Pictures that are used as a reference for predicting other pictures are referred to as reference pictures. In such designs, the pictures that are coded without prediction from other pictures are called the I pictures. Pictures that use prediction from a single reference picture (or a single picture for prediction of each region) are called the P pictures. And pictures that use a prediction signal that is formed as a (possibly weighted) average of two reference pictures are called the B pictures.
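
As a rough illustration of the paragraph above, the sketch below forms a P-style prediction from a single reference region and a B-style prediction as a (possibly weighted) average of two reference regions; the block values and the equal weights are made-up assumptions.

```python
import numpy as np

def predict_p(reference_region):
    """P-style prediction: the signal comes from a single reference region."""
    return reference_region.copy()

def predict_b(ref_before, ref_after, w0=0.5, w1=0.5):
    """B-style prediction: a (possibly weighted) average of two reference regions."""
    return np.clip(w0 * ref_before + w1 * ref_after, 0, 255).astype(np.uint8)

ref0 = np.full((2, 2), 100, dtype=np.uint8)  # region from an earlier reference picture
ref1 = np.full((2, 2), 140, dtype=np.uint8)  # region from a later reference picture
print(predict_p(ref0))         # -> all 100
print(predict_b(ref0, ref1))   # -> all 120, the average of the two references
```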

Slices
In the international standard H.264/MPEG-4 AVC, the granularity of prediction types is brought down to a lower level called the slice level. A slice is a spatially distinct region of a picture that is encoded separately from any other region in the same picture. In that standard, instead of I pictures, P pictures, and B pictures, there are I slices, P slices, and B slices.
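
The following sketch illustrates one way to think of slices, as runs of macroblock rows that are coded independently of one another; the Slice class, the row-based partitioning, and the 4-row slice size are illustrative assumptions, not the H.264 syntax.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    slice_type: str          # 'I', 'P', or 'B'
    macroblock_rows: range   # rows of macroblocks covered by this slice

def split_into_slices(total_mb_rows, rows_per_slice, slice_type):
    """Partition a picture's macroblock rows into independently coded slices."""
    return [
        Slice(slice_type, range(start, min(start + rows_per_slice, total_mb_rows)))
        for start in range(0, total_mb_rows, rows_per_slice)
    ]

# A 1920x1088 coded picture has 68 rows of 16x16 macroblocks; split them into 4-row slices.
for s in split_into_slices(68, 4, 'P'):
    print(s)
```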

Macroblocks
Strictly speaking, the term picture is a more general term than frame, as a picture can be either a frame or a field. A frame is a complete image captured during a known time interval, and a field is the set of odd-numbered or even-numbered scanning lines composing a partial image. When video is sent in interlaced-scan format, each frame is sent as the field of odd-numbered lines followed by the field of even-numbered lines. Informally, the term "frame" is often used when what is actually meant is a picture in the general sense, which may in fact be a field.

Typically, pictures are segmented into macroblocks, and individual prediction types can be selected on a macroblock basis rather than being the same for the entire picture, as follows (a brief mode-selection sketch appears after the list):

  • I pictures can contain only intra macroblocks
  • P pictures can contain either intra macroblocks or predicted macroblocks
  • B pictures can contain intra, predicted, or bi-predicted macroblocks
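
The sketch below illustrates these constraints; it is a simplification, not any encoder's actual decision logic, and the cost numbers are invented. For each macroblock, only the modes allowed by the picture type are considered, and the cheapest one is kept.

```python
# Prediction modes that each picture type may use for its macroblocks.
ALLOWED_MODES = {
    'I': ['intra'],
    'P': ['intra', 'predicted'],
    'B': ['intra', 'predicted', 'bi-predicted'],
}

def choose_macroblock_mode(picture_type, mode_cost):
    """Pick the allowed prediction mode with the lowest estimated cost."""
    return min(ALLOWED_MODES[picture_type], key=lambda mode: mode_cost[mode])

# Hypothetical per-mode bit costs for one macroblock.
costs = {'intra': 900, 'predicted': 300, 'bi-predicted': 220}
print(choose_macroblock_mode('I', costs))  # -> intra (the only allowed mode)
print(choose_macroblock_mode('P', costs))  # -> predicted
print(choose_macroblock_mode('B', costs))  # -> bi-predicted
```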

Furthermore, in H.264 the picture can be segmented into sequences of macroblocks called slices, and instead of using I, B and P picture type selections, the encoder can choose the prediction style distinctly for each individual slice. H.264 also defines several additional picture/slice types and related features:

  • SI-frames/slices (Switching I): facilitate switching between coded streams; contain SI macroblocks (a special type of intra-coded macroblock).
  • SP-frames/slices (Switching P): facilitate switching between coded streams; contain P and/or I macroblocks.
  • Multi-frame motion estimation (using up to 16 reference frames, or 32 reference fields).


Multi-frame motion estimation increases video quality while allowing the same compression ratio. SI- and SP-frames (defined for the Extended profile) increase error resistance. When such frames are used along with a smart decoder, it is possible to recover video streams from errors in broadcasts or on damaged DVDs.
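
The sketch below shows the basic idea of multi-frame motion estimation, assuming a brute-force full search, a sum-of-absolute-differences cost, and whole-pixel motion only; real encoders use far more elaborate search strategies and sub-pixel refinement.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences, a common block-matching cost."""
    return int(np.abs(block_a.astype(np.int16) - block_b.astype(np.int16)).sum())

def motion_estimate(current_block, reference_frames, block_pos, search_range=2):
    """Search every reference frame for the block that best matches current_block."""
    best = None
    y0, x0 = block_pos
    h, w = current_block.shape
    for ref_index, ref in enumerate(reference_frames):
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = y0 + dy, x0 + dx
                if 0 <= y and 0 <= x and y + h <= ref.shape[0] and x + w <= ref.shape[1]:
                    cost = sad(current_block, ref[y:y + h, x:x + w])
                    if best is None or cost < best[0]:
                        best = (cost, ref_index, (dy, dx))
    return best  # (matching cost, index of the chosen reference frame, motion vector)

rng = np.random.default_rng(0)
refs = [rng.integers(0, 256, (16, 16), dtype=np.uint8) for _ in range(3)]
block = refs[1][4:8, 6:10].copy()             # block actually present in reference 1
print(motion_estimate(block, refs, (4, 6)))   # expect cost 0 in reference 1 at offset (0, 0)
```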

Intra-coded frames (or slices), a.k.a. I-frames or key frames

  • Are pictures coded without reference to any pictures except themselves.
  • May be generated by an encoder to create a random access point (to allow a decoder to start decoding properly from scratch at that picture location).
  • May also be generated when differences in image detail prevent the generation of effective P or B frames.

  • Typically require more bits to encode than other picture types.
Often, I-frames are used for random access and as references for the decoding of other pictures. Intra refresh periods of half a second are common in applications such as digital television broadcast and DVD storage. Longer refresh periods may be used in some environments; for example, in videoconferencing systems it is common to send I-frames very infrequently.
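
As a rough illustration of an intra refresh period, the sketch below assigns picture types from a frame rate and a desired refresh interval; the 25 fps figure and the simple I-then-P pattern (with no B frames) are illustrative assumptions rather than a mandated structure.

```python
def picture_types(num_frames, fps=25.0, intra_period_seconds=0.5):
    """Emit 'I' at each intra refresh point and 'P' everywhere else (no B frames here)."""
    frames_per_i = max(1, round(fps * intra_period_seconds))
    return ['I' if i % frames_per_i == 0 else 'P' for i in range(num_frames)]

# At 25 fps a half-second refresh period yields an I frame roughly every 12 frames.
print(''.join(picture_types(30)))
```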


Predicted frames (or slices)

  • Require the prior decoding of some other picture(s) in order to be decoded.
  • May contain image data, motion vector displacements, or combinations of the two (see the sketch after this list).
  • Can reference previous pictures in decoding order.
  • In older standard designs (such as MPEG-2), use only one previously-decoded picture as a reference during decoding, and require that picture to also precede the P picture in display order.
  • In H.264, can use multiple previously-decoded pictures as references during decoding, and can have any arbitrary display-order relationship relative to the picture(s) used for its prediction.
  • Typically require fewer bits for encoding than I pictures do.
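
As a sketch of how image data (a residual) and a motion vector displacement combine when decoding a predicted region, consider the following; it assumes a single reference picture and whole-pixel motion, and it ignores sub-pixel interpolation and loop filtering.

```python
import numpy as np

def decode_p_block(reference_frame, block_pos, motion_vector, residual):
    """Copy the displaced block from the reference picture, then add the residual."""
    y, x = block_pos
    dy, dx = motion_vector
    h, w = residual.shape
    prediction = reference_frame[y + dy:y + dy + h, x + dx:x + dx + w]
    return np.clip(prediction.astype(np.int16) + residual, 0, 255).astype(np.uint8)

reference = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in for a decoded picture
residual = np.full((2, 2), 3, dtype=np.int16)             # small correction after prediction
print(decode_p_block(reference, (4, 4), (-1, 2), residual))
```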

Bi-directional predicted frames (or slices), a.k.a. B pictures

  • Require the prior decoding of some other picture(s) in order to be decoded.
  • May contain image data, motion vector displacements, or combinations of the two.
  • Include some prediction modes that form a prediction of a motion region (e.g., a macroblock or a smaller area) by averaging the predictions obtained using two different previously-decoded reference regions.
  • In older standard designs (such as MPEG-2), B pictures are never used as references for the prediction of other pictures. As a result, a lower quality encoding (resulting in the use of fewer bits than would otherwise be the case) can be used for such B pictures because the loss of detail will not harm the prediction quality for subsequent pictures.
  • In H.264, may or may not be used as references for the decoding of other pictures (at the discretion of the encoder).
  • In older standard designs (such as MPEG-2), use exactly two previously-decoded pictures as references during decoding, and require one of those pictures to precede the B picture in display order and the other one to follow it.
  • In H.264, can use one, two, or more than two previously-decoded pictures as references during decoding, and can have any arbitrary display-order relationship relative to the picture(s) used for its prediction.
  • Typically require fewer bits for encoding than either I or P pictures do.