Multimedia Applications: Image and Video Formats

Post on 13-Jan-2015


Multimedia Applications: Image Data types Color Model Images File Formats Frames


Multimedia Applications for BCA

Pardeep Sharma (pardeepsharma727@gmail.com), BGSBU, Rajouri

Contents

• Image Data Types
• Popular File Formats (JPEG, PNG, GIF)
• Color Models

Image Data Types

1-Bit Image

• Each pixel is stored as a single bit (0 or 1).
• A 640 × 480 monochrome image requires 38.4 kB of storage.
• Such images are also referred to as binary images.
• The simplest form of image.

Monochrome 1-bit Lena Image

8-bit Gray Image

• Each pixel is usually stored as a byte (0–255).
• The entire image can be thought of as a 2D array of pixel values; such an array is called a bitmap.
• Image resolution refers to the number of pixels in the digital image. A fairly high resolution for such an image might be 1600 × 1200, whereas a low resolution might be 640 × 480.
• A 640 × 480 grayscale image requires over 300 kB of storage.
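The storage figures quoted on these slides follow from a one-line calculation (here kB means 1000 bytes, matching the slides):

```python
def image_size_bytes(width, height, bits_per_pixel):
    """Uncompressed image size: total pixels times bits per pixel, in bytes."""
    return width * height * bits_per_pixel // 8

# 1-bit monochrome 640 x 480 image
mono = image_size_bytes(640, 480, 1)
print(mono, "bytes =", mono / 1000, "kB")   # 38400 bytes = 38.4 kB

# 8-bit grayscale 640 x 480 image
gray = image_size_bytes(640, 480, 8)
print(gray, "bytes =", gray / 1000, "kB")   # 307200 bytes = 307.2 kB
```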

Bitplanes

Bitmap

8-bit Gray Image

8-bit Color Image

• One byte for each pixel.
• Supports 256 out of millions of possible colors.
• A 640 × 480 image requires over 300 kB of storage (same as 8-bit gray).
• Requires a color look-up table (LUT).

8-bit Color Image

Color Look-up Table

• The idea used in an 8-bit color image is to store only an index, or code value, for each pixel.
• If a pixel stores, say, the value 25, the meaning is: go to row 25 in the LUT for the actual color.
• The LUT is often called a palette.

Color Look-up Table
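The indexed-color idea can be sketched in a few lines: the image stores small index values, and the palette (LUT) maps each index to a full RGB triple. The palette entries below are illustrative, not from any real image:

```python
# A tiny palette (LUT): row i holds the RGB color for index i.
palette = {
    0: (0, 0, 0),         # black
    25: (255, 200, 150),  # some color stored at row 25
    255: (255, 255, 255), # white
}

# The image itself stores only 1-byte indices, not colors.
indexed_image = [[25, 0],
                 [255, 25]]

# Display step: replace each index by the palette entry it points to.
rgb_image = [[palette[i] for i in row] for row in indexed_image]
print(rgb_image[0][0])   # (255, 200, 150): the color in row 25 of the LUT
```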

24-bit Color Image

• Each pixel is represented by three bytes (R, G, B).
• Supports 256 × 256 × 256 = 16,777,216 possible colors.
• A 640 × 480 color image requires over 921 kB of storage without any compression.
• Most 24-bit images are actually stored as 32-bit images: one extra byte per pixel holds an alpha value, representing special-effect (transparency) information.
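The 32-bit layout can be illustrated by packing the four channel bytes into one integer. The 0xAARRGGBB order used here is just one common convention; real file formats vary:

```python
def pack_rgba(r, g, b, a=255):
    """Pack four 8-bit channels into one 32-bit integer (0xAARRGGBB layout)."""
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_rgba(p):
    """Recover (r, g, b, a) from the packed 32-bit value."""
    return ((p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF, (p >> 24) & 0xFF)

white = pack_rgba(255, 255, 255)
assert unpack_rgba(white) == (255, 255, 255, 255)

# Uncompressed 24-bit 640 x 480 image: 3 bytes per pixel
print(640 * 480 * 3)   # 921600 bytes, i.e. over 921 kB
```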

24-bit Color Image

8-bit Color Image

24-bit Color Image

Popular File Formats

• GIF
• JPEG
• PNG
• TIFF
• PS

GIF

• Graphics Interchange Format.
• GIF was devised by UNISYS Corporation and CompuServe, initially for transmitting graphic images over telephone lines through modems.
• GIF uses the Lempel–Ziv–Welch (LZW) compression algorithm.
• The GIF standard is limited to 8-bit (256-color) images only.
• Supports interlacing.
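The dictionary-growing idea behind LZW can be sketched as follows — an encoder only, emitting integer codes rather than the packed variable-width bits a real GIF writer produces:

```python
def lzw_encode(data):
    """Minimal LZW encoder sketch: grow a dictionary of seen strings and
    emit the code of the longest known prefix each time it stops growing."""
    codebook = {bytes([i]): i for i in range(256)}  # start with all single bytes
    next_code = 256
    out, current = [], b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in codebook:
            current = candidate              # keep extending the match
        else:
            out.append(codebook[current])    # emit longest known prefix
            codebook[candidate] = next_code  # remember the new string
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(codebook[current])
    return out

codes = lzw_encode(b"ABABABA")
print(codes)   # [65, 66, 256, 258] -- repeated pairs collapse into new codes
```

Repetitive image rows are exactly where this pays off, which is why LZW suits the flat-color graphics GIF was designed for.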

Interlacing

GIF Flavors

• GIF87a: the original version; supports multiple images in a stream.
• GIF89a: adds animation delays, transparent backgrounds, and storage of application-specific metadata.

GIF Animation


JPEG

• This standard was created by a working group of the International Organization for Standardization (ISO).
• The group is called the Joint Photographic Experts Group.
• Provides a high degree of (lossy) compression.
• Takes advantage of limitations in the human visual system to achieve high compression rates.


PNG

• Portable Network Graphics.
• Developed as a patent-free replacement for GIF after UNISYS began asserting its LZW patent.
• Uses lossless DEFLATE compression rather than LZW.
• PNG files support up to 48 bits of color information.

Color Models in Images

• RGB
• CMY


RGB Color Model

• The RGB color model is an additive color model.
• The three colors red, green, and blue are added together in various ways to reproduce many different colors.
• Each of the three beams is called a component of that color.
• Zero intensity for every component gives the darkest color (no light: black).
• Full intensity of every component gives white.

RGB Color Model


CMY Color Model

• Stands for Cyan, Magenta, Yellow.
• CMY is a subtractive color model used in printing.
• A printed color that looks red absorbs the other two components (G and B) and reflects red.
• In CMY, black arises from subtracting all the light by laying down ink with C = M = Y = 1.

CMY Color Model

• Cyan subtracts red.
• Magenta subtracts green.
• Yellow subtracts blue.
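These complements give the usual RGB-to-CMY conversion for channels normalized to [0, 1] (a sketch):

```python
def rgb_to_cmy(r, g, b):
    """Subtractive complement: each ink removes its complementary primary."""
    return (1 - r, 1 - g, 1 - b)

print(rgb_to_cmy(1, 0, 0))  # pure red -> (0, 1, 1): no cyan, full magenta and yellow
print(rgb_to_cmy(0, 0, 0))  # black   -> (1, 1, 1): C = M = Y = 1, all light absorbed
```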

CMYK

• C, M, and Y inks are supposed to mix to black.
• In practice, however, they mix to a muddy brown.
• Truly black ink is also cheaper than mixing colored inks to make black.
• CMYK refers to the four inks used in most color printing: cyan, magenta, yellow, and black.
• Replacing the common component of C, M, and Y with black ink is called undercolor removal.
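One common formulation of undercolor removal pulls out the shared gray component K = min(C, M, Y) and rescales the remaining inks; simpler variants just subtract K. A sketch:

```python
def cmy_to_cmyk(c, m, y):
    """Undercolor removal: print the common gray component with black ink (K)
    instead of laying down all three colored inks on top of each other."""
    k = min(c, m, y)
    if k == 1:                      # pure black: only the K ink is needed
        return (0, 0, 0, 1)
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(cmy_to_cmyk(1, 1, 1))        # (0, 0, 0, 1): black ink only
print(cmy_to_cmyk(0.5, 0.4, 0.4))  # the shared 0.4 moves into the K channel
```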


Video Signals

Video signals are organized in three different ways:

• Component video
• Composite video
• S-Video


Component Video

• Makes use of three separate signals for the red, green, and blue image planes.
• This kind of system has three wires (connectors) for connecting a camera or other device.
• The color signals are not restricted to RGB.
• We can form three signals via a luminance–chrominance transformation (YIQ or YUV).
• There is no crosstalk between the three channels.

Component Video

YUV

YIQ

• YIQ is used in NTSC (National Television System Committee).

Composite Video

• The color (chrominance) and intensity (luminance) signals are mixed into a single carrier wave.
• Chrominance is a composite of two color components (I and Q, or U and V).
• In NTSC TV, I and Q are combined into a composite chroma signal.
• When connecting to a TV or VCR, composite video uses a single wire; the video color signals are mixed, not sent separately.

S-Video

• S-Video uses two wires: one for luminance and the other for composite chrominance.
• There is less crosstalk than with composite video.

NTSC(National Television System Committee)

PAL

• PAL stands for Phase Alternating Line.

Color Models in Videos

• YUV
• YIQ
• YCbCr


YUV

• YUV coding is used for PAL.
• It codes a luminance signal (after gamma correction) equal to Y′.
• Chrominance refers to the difference between a color and a reference white at the same luminance.
• It can be represented by the color differences U and V.

YUV

• We go the (Y’,U,V) to (R,G,B) by inverting the matrix.

• Y’ is equal to the same value R’.(coz sum of coefficient is 1)

• For black & white image chroma(UV) is zero.
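The forward transform can be sketched with the BT.601 luma coefficients and the PAL U/V scale factors (a sketch applied to gamma-corrected, normalized components):

```python
# BT.601 luma weights; note that they sum to 1.
WR, WG, WB = 0.299, 0.587, 0.114

def rgb_to_yuv(r, g, b):
    """(R', G', B') -> (Y', U, V) as used for PAL."""
    y = WR * r + WG * g + WB * b   # luminance
    u = 0.492 * (b - y)            # blue color difference
    v = 0.877 * (r - y)            # red color difference
    return (y, u, v)

# A gray pixel (R' = G' = B') has Y' equal to that value and zero chroma:
y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
assert abs(y - 0.5) < 1e-9 and abs(u) < 1e-9 and abs(v) < 1e-9
```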

YIQ

• Y’IQ is used in NTSC.• I for in-phase chrominance.• Q for quadrature chrominance.

Y component

I component

Q component

YCbCr

• Y is the luma component; Cb and Cr are the blue-difference and red-difference chroma components.
• Used for digital video encoding and in digital cameras.
• YCbCr is used in JPEG and MPEG.

YCbCr


Color Space – Comparison

Color space | Color mixing | Primary parameters | Used for | Pros and cons
RGB | additive | red, green, blue | | easy, but wastes bandwidth
CMYK | subtractive | cyan, magenta, yellow, black | printers | works with pigment mixing
YCbCr / YPbPr | additive | Y (luminance), Cb (blue chroma), Cr (red chroma) | video encoding, digital cameras | bandwidth-efficient
YUV | additive | Y (luminance), U (blue chroma), V (red chroma) | video encoding for PAL | bandwidth-efficient
YIQ | additive | Y (luminance), I (rotated from U), Q (rotated from V) | video encoding for NTSC | bandwidth-efficient

[Figures: sample input, gamma-corrected input, monitor output]

Graph of the correction L′ = L^(1/2.5)

Gamma Correction

• Gamma correction allows an image to be displayed accurately on a computer screen.
• Images that are not properly corrected can look either bleached out or too dark.
• Reproducing colors accurately also requires some knowledge of gamma.
• Varying the amount of gamma correction changes not only the brightness but also the ratios of red to green to blue.
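The correction curve graphed above, L′ = L^(1/2.5), can be sketched directly; it is applied per channel to intensities normalized to [0, 1]:

```python
GAMMA = 2.5   # the display gamma assumed by the curve above

def gamma_correct(l):
    """Pre-distort intensity so the display's power-law response cancels out."""
    return l ** (1.0 / GAMMA)

# Mid-gray is brightened to compensate for the display's response:
print(round(gamma_correct(0.5), 3))   # 0.758
# The endpoints are unchanged: black stays black, white stays white.
print(gamma_correct(0.0), gamma_correct(1.0))   # 0.0 1.0
```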

Introduction to Video Compression

• A video consists of a time-ordered sequence of frames (images).
• An obvious approach to video compression is predictive coding based on the previous frame.
• Exploit spatial redundancy within frames (as in JPEG: transform, quantize, variable-length code).
• Exploit temporal redundancy between frames.

Compression in the time domain

• The difference between consecutive frames is often small.
• Remove inter-frame redundancy.
• Sophisticated encoding, relatively fast decoding.

Difference Frames

• Differences between two frames can be caused by:
– Camera motion: the outlines of the background or stationary objects appear in the difference image.
– Object motion: the outlines of moving objects appear in the difference image.
– Illumination changes (sun rising, headlights, etc.)
– Scene cuts: lots of content in the difference image.
– Noise.
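A difference frame can be sketched as a per-pixel absolute difference; the toy grayscale frames below (nested lists) are illustrative:

```python
def difference_frame(prev, curr):
    """Per-pixel absolute difference between two grayscale frames.
    Large values mark motion, scene cuts, or lighting changes."""
    return [[abs(c - p) for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# A bright 'object' moves one pixel to the right between frames:
prev = [[0, 9, 0, 0],
        [0, 9, 0, 0]]
curr = [[0, 0, 9, 0],
        [0, 0, 9, 0]]
print(difference_frame(prev, curr))   # [[0, 9, 9, 0], [0, 9, 9, 0]]
```

The old and new positions both light up, which is exactly the "outlines of moving objects" effect described above.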


Motion Estimation

• Determining parameters for the motion description.
• For some portion of the frame, estimate its movement between two frames: the current frame and the reference frame.
• What is "some portion"?
– Individual pixels (all of them)?
– Lines/edges (you have to find them first)?
– Objects (you must define them first)?
– Uniform regions (just chop up the frame)?
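For the "uniform regions" choice, a minimal sketch of exhaustive block matching is shown below, using the sum of absolute differences (SAD) as the match cost; all names and frame data are illustrative:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def block(frame, y, x, size):
    """Extract a size x size block with top-left corner (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(ref, cur, y, x, size, search):
    """Exhaustive search: find where the current block matches best
    inside a +/- search window of the reference frame."""
    target = block(cur, y, x, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= len(ref) - size and 0 <= rx <= len(ref[0]) - size:
                cost = sad(block(ref, ry, rx, size), target)
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best

ref = [[0] * 8 for _ in range(8)]
ref[2][2] = 9                      # bright pixel in the reference frame
cur = [[0] * 8 for _ in range(8)]
cur[2][3] = 9                      # it has moved one pixel to the right
print(best_motion_vector(ref, cur, 2, 3, 2, 2))
# (0, (0, -1)): zero error; the block came from one pixel to the left in the reference
```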

Motion Estimation

MPEG

• Moving Picture Experts Group.
• First devised in 1988 by a group of almost 1,000 experts (from about 25).
• Primary motivations:
– A high compression rate for video storage, comparable to VHS quality.
– Random-access capability.
• The overall MPEG standard combines the video and audio signals into one large compression algorithm.

MPEG-1

• The MPEG-1 audio/video digital compression standard was approved by the International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC).
• MPEG-1 adopts the CCIR 601 digital TV format, also known as SIF (Source Input Format).
• MPEG-1 supports only non-interlaced video. Normally, its picture resolution is:
– 352 × 240 for NTSC video at 30 fps
– 352 × 288 for PAL video at 25 fps
– It uses 4:2:0 chroma subsampling.

MPEG-1

• The MPEG-1 standard is also referred to as ISO/IEC 11172.
• It has five parts: 11172-1 Systems, 11172-2 Video, 11172-3 Audio, 11172-4 Conformance, and 11172-5 Software.

Digital Video Compression Fundamentals and Standards

2008/12/26

Frame Types

• I-frame (intra-coded frame)
– Coded within a single frame, e.g. using the DCT.
– This type of frame does not need a previous frame.
• P-frame (predictive frame)
– One-directional motion prediction from a previous frame.
– The reference can be either an I-frame or a P-frame.
– Generally referred to as an inter-frame.
– Contains only the changes that have occurred since previous frames.
• B-frame (bi-directional predictive frame)
– Bi-directional motion prediction from previous and/or future frames.
– The reference can be either an I-frame or a P-frame.
– Generally referred to as an inter-frame.


Group of Pictures

• The distance between the two nearest P-frames (or a P-frame and an I-frame) is denoted by M.
• The distance between the two nearest I-frames is denoted by N.

I B B P B B P B B I

GOP

Bidirectional and forward motion compensation (N = 9, M = 3)

Compressed video stream
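The GOP layout above (N = 9, M = 3) can be sketched as a pattern generator in display order; this is a simplification of what a real encoder decides frame by frame:

```python
def gop_pattern(n, m):
    """Frame types of one GOP in display order: an I-frame every n frames,
    a P-frame every m frames in between, B-frames everywhere else."""
    frames = []
    for i in range(n + 1):   # include the I-frame that opens the next GOP
        if i % n == 0:
            frames.append("I")
        elif i % m == 0:
            frames.append("P")
        else:
            frames.append("B")
    return " ".join(frames)

print(gop_pattern(9, 3))   # I B B P B B P B B I
```

This reproduces the sequence shown on the slide: I-frames 9 apart (N = 9) and P-frames 3 apart (M = 3), with B-frames filling the gaps.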