Saturday, April 28, 2018

Diploma in Computer Engineering: Multimedia Notes

Multimedia

General definition:

Multimedia is any combination of text, art, graphics, sound, animation, and video delivered by computer or by any other electronic or digitally manipulated means.

 

It is a richly presented sensation. The thought and action centers of people's minds can be electrified by weaving together the sensory elements of multimedia: dazzling pictures and animations, engaging sounds, compelling video clips, and raw textual information.

 

The word multimedia is derived from the Latin words 'multi' and 'medium'. 'Multi' means many or multiple, and 'medium' means middle, i.e., an intermediary or carrier.

 

 

 

Medium:

Here a medium can be defined as an intervening substance through which something is transmitted or carried. Put another way, a medium is a means for the distribution and presentation of information, for example graphics, speech, and music.

 

Media can be classified into the following types:

Perception medium

Representation medium

Presentation medium

Storage medium

Transmission medium

Information exchange medium

 

Perception medium:

The perception of information occurs mostly through seeing or hearing. For perception through seeing, visual media such as text, images, and video are used, whereas for perception through hearing, auditory media such as music, noise, and speech are relevant.

 

Representation medium:

It is the medium that describes how information is represented internally in the computer, i.e., how it is coded. Various formats are used to represent media information in a computer, for example:

Text characters are coded in ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code).

An audio stream can be represented using PCM (pulse code modulation) with a linear quantization of 16 bits per sample.

An image can be coded in JPEG format.

A combined audio/video sequence can be coded in MPEG format.
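
As a small illustration of these internal codings, the sketch below (in Python, with made-up values) shows a text character coded as ASCII and an audio amplitude coded as a 16-bit linear PCM sample:

```python
# Sketch: internal representation of media in a computer (illustrative values).

# Text: each character is stored as its ASCII code.
text = "Hi"
ascii_codes = [ord(c) for c in text]      # [72, 105]

# Audio: a PCM sample is a signed 16-bit integer (linear quantization).
amplitude = 0.25                          # normalized sample in [-1, 1]
pcm_sample = int(amplitude * 32767)       # 16-bit signed range: -32768..32767

print(ascii_codes, pcm_sample)
```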

 

 

Presentation medium:

It refers to the tools and devices for the input and output of information. For example, paper, screen, and speaker are output media, whereas keyboard, mouse, camera, and microphone are input media. Using these media, information is presented to an audience or passed from one system to another.

 

Storage medium:

The medium where information is preserved or stored for future use, for example microfilm, floppy disk, hard disk, CD, and even paper.

 

Transmission medium:

The different information carriers that enable continuous data transmission. Information is transmitted over networks, which use wired transmission such as coaxial cable and optical fiber, as well as free-space (wireless) transmission.

 

Information exchange medium:

The information carrier used for information exchange between different places. Information can flow through intermediate storage media, where the storage medium is transported outside the computer network to the destination; through direct transmission over a computer network; or through a combined use of storage and transmission media (e.g., electronic mailing systems).

 

 

Standard definition of multimedia:

A multimedia system is characterized by the computer-controlled, integrated production, manipulation, presentation, storage, and communication of independent information, which is encoded at least through one continuous (time-dependent) and one discrete (time-independent) medium.

 

Properties of multimedia systems:

Combination of media.

Independence.

Computer-supported integration (computer control).

Communication system.

 

Combination of media:

According to the definition, a multimedia system must be composed of different media and devices; when they all work together, they form the multimedia system.

 

 

 

Independence:

In a multimedia system, the different media should be independent of each other, rather than inherently tightly coupled, even though they must also be able to work together when required.

 

Computer supported integration:

The different independent media are combined in arbitrary forms to work together as a system, with the support of computers. Computer-supported integration is also called control through the computer in multimedia systems.

 

Communication systems:

A multimedia system must be communication-capable. Multimedia information should not only be created, processed, and stored, but also be distributed beyond the boundary of a single computer, which makes multimedia applications popular and useful in distributed environments.

 

 

 

Data stream (flow) characteristics:

The data of discrete or continuous media is transmitted, and information exchange takes place between source and destination, after dividing the information into individual units called packets. The source and destination can be located on the same machine or on two different machines. A sequence of individual packets transmitted in a time-dependent fashion is called a data stream or data flow.

 

For example, speech in a telephone system is a data stream of a continuous medium, whereas the retrieval of a file from a database is a data stream of a discrete medium.

 

The data streams of different media have different features during transmission. In computer communication and switching, data transmission has three modes:

 

Asynchronous transmission mode.

Synchronous transmission mode.

Isochronous transmission mode.

 

Asynchronous transmission mode:

This mode places no timing restrictions on communication; packets reach the receiver as fast as possible. All information of discrete media can be transmitted as an asynchronous data stream, for example e-mail protocols on the Internet, or Ethernet in a LAN.

 

This mode is not appropriate for continuous media. If it has to be used for continuous media, an additional method must be applied to provide the necessary timing restrictions.

 

Synchronous transmission mode:

This mode defines a maximum end-to-end delay for each packet of a data stream. This upper delay bound is never violated. Although a packet can reach the receiver at an arbitrarily earlier time, this is a reliable mode for multimedia, since a maximum end-to-end delay can be enforced.

 

Isochronous transmission mode:

This mode defines both a maximum and a minimum end-to-end delay, which means that the delay variation of individual packets (the "jitter") is bounded.

 

In this mode, the storage needed for video data at the receiver can be strongly reduced. These demands on intermediate storage must be considered in all components along the data route between source and sink.
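
A minimal sketch of how jitter can be measured, assuming hypothetical send and arrival timestamps:

```python
# Sketch: measuring jitter as the variation of end-to-end packet delays.
# The timestamps below are hypothetical example data (in milliseconds).
send_times    = [0, 40, 80, 120, 160]
arrival_times = [55, 98, 134, 177, 215]

delays = [a - s for s, a in zip(send_times, arrival_times)]  # per-packet delay
jitter = max(delays) - min(delays)                           # delay variation

print(delays, "jitter =", jitter, "ms")  # isochronous mode bounds this value
```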

 

 

Data stream characteristics for continuous media:

The data stream characteristics in a multimedia system relate to audio and video data transfer. They are also influenced by any compression applied during the transfer, and they apply in distributed as well as local environments. The data stream characteristics can be discussed on the basis of three properties or factors:

According to the time intervals between complete transmissions of consecutive packets.

According to the variation of the amount of data in consecutive packets.

According to the continuity or connection between consecutive packets.

 

According to time intervals between consecutive packets:

On the basis of this factor, there are three kinds of data streams:

 

Strongly periodic data stream:

If the time intervals between two consecutive packets are of the same, constant length, the stream is called strongly periodic; in the ideal case the jitter has the value zero. For example: PCM-coded speech in traditional telephone switching.

 

Weakly periodic data stream:

If the time intervals between consecutive packets are not constant, but vary with a periodic pattern of finite period, the data stream is called weakly periodic.

 

Aperiodic data stream:

If the sequence of time intervals is neither strongly nor weakly periodic, i.e., the gap varies from packet to packet during transmission, the data stream is called aperiodic.
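
The three cases can be told apart mechanically from the inter-packet intervals; the sketch below is an illustrative classifier, not a standard algorithm:

```python
# Sketch: classifying a data stream by the time intervals between packets.
def classify_periodicity(intervals, period=None):
    """intervals: gaps between consecutive packets; period: length of a
    repeating pattern of intervals, if one is claimed (illustrative)."""
    if len(set(intervals)) == 1:
        return "strongly periodic"        # constant interval, ideal jitter = 0
    if period and all(intervals[i] == intervals[i % period]
                      for i in range(len(intervals))):
        return "weakly periodic"          # intervals repeat with a finite period
    return "aperiodic"

print(classify_periodicity([20, 20, 20, 20]))            # strongly periodic
print(classify_periodicity([10, 30, 10, 30], period=2))  # weakly periodic
print(classify_periodicity([12, 7, 41, 3]))              # aperiodic
```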

 

According to the variation of the amount of data in consecutive packets:

On the basis of this factor, there are also three types of data streams:

 

Strongly regular data stream:

If the amount of data stays constant during the lifetime of a data stream, the stream is strongly regular. This feature is typically found in uncompressed digital data transmission, for example the video stream of a camera in uncompressed form, or the audio stream of a CD.

 

Weakly regular data stream:

If the amount of data varies periodically with time, rather than staying constant, the stream is called weakly regular, for example a compressed video stream.

 

Irregular data stream:

If the amount of data is neither constant nor changes according to a periodic function, the data stream is called irregular. Transmission and processing of this category of data stream is more complicated, since the stream has a variable bit rate, e.g., after applying compression methods.

 

According to continuity:

On the basis of this factor, there can be two types:

 

Continuous data stream:

If consecutive packets are transmitted directly one after another, without any time gap, the data stream is called continuous, for example audio data on a B channel of ISDN with a transmission rate of 64 kbit/s.

 

Unconnected data stream:

A data stream with gaps between information units is called an unconnected data stream.

 

The transmission of a continuous data stream through a channel with a higher capacity creates gaps between individual packets, for example a data stream coded with the JPEG method at 1.2 Mbit/s on an FDDI network.

 

Information units:

The information to be handled in a multimedia system, whether for storage or for transmission from one system to another, is generally larger than the bandwidth of the communication channel or the read/write unit of the storage system. Hence the system divides the information into small chunks; the sequence of these chunks (packets) forms the data stream, and the chunks are called information units.

 

According to whether the information units are used for transmission or for reading and writing from or to a storage device, they can be categorized into two types:

 

Logical data units (LDU):

When the information units are used for reading or writing information from or to a storage device, they are known as logical data units.

 

Protocol data unit (PDU):

When the information units are produced for transmission and are to be loaded onto a communication channel, each unit (packet) is called a protocol data unit (PDU). Since each unit has protocol-layer headers added to it, such as error control and flow control codes and the destination address, it is named a protocol data unit.
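
A toy illustration of PDU framing; the header layout (2-byte destination address, 4-byte sequence number, CRC-32 trailer) is invented for the example and does not correspond to any real protocol:

```python
# Sketch: turning an information unit into a protocol data unit (PDU) by
# adding a header (destination address, sequence number) and an error-control
# code. The header layout is a made-up example, not a real protocol.
import struct
import zlib

def make_pdu(dest_addr: int, seq_no: int, payload: bytes) -> bytes:
    header = struct.pack("!HI", dest_addr, seq_no)   # 2-byte addr, 4-byte seq
    checksum = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + checksum

pdu = make_pdu(dest_addr=7, seq_no=1, payload=b"chunk of a video frame")
print(len(pdu), "bytes on the wire")
```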

 

 

Sound / Audio system:

 

Concept of sound system:

Sound is a physical phenomenon produced by vibrating matter, such as a violin string, a guitar, or a block of wood. As the matter vibrates, pressure variations are created in the surrounding air. These regions of low and high air pressure propagate through the air in a wave-like motion; when the wave reaches the human ear, a sound is heard. The pattern of the propagating oscillation is called a waveform.

 

The waveform repeats the same shape at regular intervals; one complete cycle is called a period.

 

Natural sounds are not perfectly smooth or uniformly periodic. Sounds that have a recognizable periodicity tend to be more musical than non-periodic sounds.

 

Examples of periodic sound sources are musical instruments, vowel sounds, whistling wind, and bird songs.

 

Non-periodic sounds include coughs, sneezes, and rushing water.

 

Some terms:

 

Frequency:

The number of periods per second, i.e., the number of cycles completed by a waveform in one second, is called the frequency of the sound or wave. It is measured in hertz (Hz).

 

1000 Hz = 1 kHz

1000 kHz = 1 MHz

1000 MHz = 1 GHz

1000 GHz = 1 THz

 

The frequency range of sound is divided into four groups:

Infrasound: 0 Hz to 20 Hz

Human hearing (audio): 20 Hz to 20 kHz

Ultrasound: 20 kHz to 1 GHz

Hypersound: 1 GHz to 10 THz

 

Multimedia systems use the 20 Hz to 20 kHz frequency range. Sound in this range is called audio, and the wave is called an acoustic signal.

 

Besides speech and music, any other audio signal is treated as noise, for example an airplane or falling water.

 

Amplitude:

The maximum displacement of the air pressure wave from its mean or quiescent state is called the amplitude of the wave or sound. It measures the strength of a signal, such as sound pressure or voltage.

 

 

Representation of sound in computer:

A smooth, continuous waveform cannot be represented directly in a computer. The computer measures the amplitude of the waveform at regular intervals of time to produce a series of numbers. Each of these measurements is a sample; together they form a digitally sampled waveform.
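
A small sketch of this sampling process, generating the first few samples of a 440 Hz tone at a telephone-quality sampling rate (both values chosen just for illustration):

```python
# Sketch: sampling a continuous 440 Hz sine wave at regular intervals,
# the way an ADC measures the waveform's amplitude over time.
import math

frequency = 440        # Hz (concert pitch A)
sampling_rate = 8000   # samples per second (telephone quality)

samples = [math.sin(2 * math.pi * frequency * n / sampling_rate)
           for n in range(16)]   # first 16 samples of the series
print([round(s, 2) for s in samples])
```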

 

An analog audio signal is converted into a digital signal by an analog-to-digital converter (ADC); the reverse conversion, from digital back to analog, is performed by a digital-to-analog converter (DAC).

 

The AM79C30A digital subscriber controller chip, found in SPARCstations, is an example of an ADC; the workstation includes a built-in speaker for audio output, and the corresponding DAC is likewise accessed through a UNIX audio device.

 

 

 

 

Sampling rate:

The rate at which a continuous waveform is sampled is called the sampling rate. It is also measured in hertz. The CD-standard sampling rate is 44,100 Hz, i.e., the waveform is sampled 44,100 times per second. According to the Nyquist sampling theorem, the sampling rate should be at least twice the maximum frequency component of the signal.

 

 

Quantization:

Quantization is the process of mapping the sample values taken from the waveform at discrete times onto a discrete set of levels. Its resolution depends on the number of bits used to measure the height of the waveform: an 8-bit quantization allows 2^8 = 256 possible values, while 16 bits allow 2^16 = 65,536. The lower the quantization resolution, the lower the quality of the sound.
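
A minimal sketch of quantization, snapping a normalized sample to the nearest of 2^bits levels; the input value is arbitrary:

```python
# Sketch: quantizing sample values with 8 and 16 bits. Fewer bits mean
# fewer levels, hence coarser (lower-quality) sound.
def quantize(sample, bits):
    levels = 2 ** bits                  # 8 bits -> 256, 16 bits -> 65536
    step = 2.0 / levels                 # samples are normalized to [-1, 1]
    return round(sample / step) * step  # snap to the nearest level

s = 0.123456
print(quantize(s, 8))    # coarse: 0.125
print(quantize(s, 16))   # fine:   ~0.12344
```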

 

Sound hardware:

Microphone

Speaker

Jack

 

 

Audio formats:

The AM79C30A digital subscriber controller provides voice-quality audio. This converter uses 8-bit µ-law encoded quantization and a sampling rate of 8,000 Hz; it is fast and accurate enough for telephone-quality speech input. CD-quality audio uses 44,100 samples per second with 16-bit linear PCM quantization.

 

Music:

The relationship between music and computers has become more and more important, especially due to the development of MIDI (Musical Instrument Digital Interface). Its importance continues to grow in the music industry.

 

What is MIDI? A MIDI interface is a small piece of equipment that plugs directly into a serial port and connects electronic musical instruments to the computer. MIDI allows the transmission of music at full-scale output.

 

 

 

 

 

MIDI basic concepts:

MIDI is a standard that manufacturers of electronic musical instruments have agreed upon: a set of specifications they use in building their instruments, so that instruments from different manufacturers can exchange musical information. A MIDI interface has two components:

 

Hardware:

Concerns the physical equipment: it stipulates that a MIDI port is built into an instrument, specifies the MIDI cable, and deals with the electronic signals that are sent over the cable.

 

Data format:

The data format encodes the information traveling through the hardware. The encoding includes the notion of the beginning and end of a note, the basic frequency, and the sound volume. MIDI data allows an encoding of about 10 octaves, which corresponds to 128 notes. The MIDI data format is digital; the data is grouped into MIDI messages, each of which communicates one musical event between machines. These events are usually the actions a musician performs while playing an instrument. For example, ten minutes of music can produce about 200 KB of MIDI data.
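
As an illustration of the message format, the sketch below builds a standard "note on" channel message (a status byte plus two 7-bit data bytes); the channel, note, and velocity values are arbitrary examples:

```python
# Sketch: a MIDI "note on" channel message. It is three bytes: a status byte
# (message type in the high nibble, channel in the low nibble) followed by
# two data bytes (note number and velocity), each limited to 7 bits.
NOTE_ON = 0x90

def note_on(channel, note, velocity):
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([NOTE_ON | channel, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)   # middle C on channel 1
print(msg.hex())                                  # 903c64
```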

 

 

MIDI devices:

A computer can control the output of individual instruments through a MIDI interface. The data is generated with a keyboard and reproduced through a sound generator. A sequencer can store and modify the musical data; the sequencer is typically a computer.

 

The synthesizer is the heart of a MIDI system. It looks like a simple piano keyboard with a panel full of buttons, and has the following components:

Sound generator

Microprocessor

Keyboard

Control panel

Auxiliary controllers

Memory

 

 

MIDI messages:

MIDI messages transmit information between MIDI devices and determine what kinds of musical events can be passed from device to device. A MIDI message consists of a status byte, which describes the kind of message, and data bytes, which carry the musical event. MIDI messages are divided into two types:

 

Channel messages: go only to a specified device (channel).

System messages: go to all devices in a MIDI system, because system messages do not carry channel numbers.

 

 

 

 

 

Speech:

Speech is an acoustic signal produced by humans, and by machines as well; it can likewise be perceived and understood by both humans and machines.

 

Humans adjust very efficiently to different speakers and their speech habits. The human speech signal contains a subjectively lowest spectral component known as the pitch, which is not strictly proportional to frequency. The human ear is most sensitive in the range from about 600 Hz to 6,000 Hz. A speech signal has two properties that can be used in speech processing:

Periodic behavior.

Maxima in the frequency spectrum, typically in 3 to 5 frequency bands (formants).

 

 

 

Speech generation:

Speech generation has a long history of research, for example:

By the middle of the 19th century, Helmholtz had built a mechanical vocal tract, coupling together several mechanical resonators with which sound could be generated.

In 1940, Dudley produced the first speech synthesizer, imitating the mechanical vibration by means of electrical oscillation.

 

 

For speech generation, real-time signal generation is an important requirement. A speech output system that fulfills it can transform text into speech automatically, without lengthy preprocessing. Some applications need only a limited vocabulary, for example a telephone answering machine.

 

 

 

Speech analysis:

Speech analysis serves to determine who is speaking, what has been said, and in which pattern or mood the speech was delivered. It helps to identify the speaker and to recognize and understand the speech, i.e., what has been said.

 

The computer identifies the speaker using an acoustic fingerprint, which is a digitally stored speech sample of a person. The speaker says certain sentences into a microphone; the computer captures the speaker's voice and matches it against the digitally stored speech to identify the speaker. To recognize and understand the speech signal itself, the corresponding text is generated from the speech sequence. This can lead to a speech-controlled typewriter, a translation system, or part of a workplace for the handicapped.

 

Another important area of speech analysis is determining the speech pattern with respect to how a statement was said: an angry person's speech sounds different from speech in a calm mood. An application of this kind of analysis is the lie detector.

 

 

Speech transmission:

Speech transmission deals with the efficient coding of the speech signal, so that speech or sound can be transmitted at low rates over networks. The goal is to provide the receiver with the same speech quality as was generated at the sender side. The following four aspects are considered:

Signal form coding:

Signal form coding does not exploit specific speech characteristics; it aims at the most efficient coding of the audio signal. The data rate of a PCM-coded stereo audio signal with CD-quality requirements is:

 

 

Rate = 2 channels * (44,100 samples/sec) * (16 bits / 8 bits per byte)

     = 176,400 bytes/sec

     = 1,411,200 bits/sec

 

Telephone quality needs only 64 kbit/s. To transmit a CD-quality signal over a telephone channel, its rate must be lowered; using differential pulse code modulation (DPCM), the data rate can be brought down to 56 kbit/s.
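
The same arithmetic, written out as a small script for comparison of the two rates:

```python
# Sketch: the CD-quality data-rate calculation from above, next to the
# 64 kbit/s telephone channel rate for comparison.
channels = 2              # stereo
sampling_rate = 44100     # samples per second
bits_per_sample = 16

cd_rate_bps = channels * sampling_rate * bits_per_sample
print(cd_rate_bps)        # 1411200 bit/s
print(cd_rate_bps // 8)   # 176400 bytes/s

telephone_rate_bps = 64000                 # PCM telephone channel: 8000 * 8
print(cd_rate_bps / telephone_rate_bps)    # CD needs ~22x the telephone rate
```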

 

Source coding:

Parameterized systems work with source coding algorithms; here, specific speech characteristics are used for data rate reduction.

 

The signal is divided into a set of frequency channels, because only certain maxima are relevant to speech. The difference between voiced and unvoiced sounds is taken into account: unvoiced sounds are simulated by a noise generator, while voiced sounds are simulated by a sequence of pulses. A data rate of about 3 kbit/s can be achieved with a channel vocoder, but the quality is not always satisfactory.

 

Recognition and synthesis:

Speech recognition (analysis) occurs on the sender side and speech synthesis (generation) occurs on the receiver side; only the characteristics of the speech elements, such as the middle frequencies of the relevant frequency bands, are transmitted. Determining these with corresponding digital filters brings the data rate down to about 50 bits per second.

 

Achieved quality:

The fundamental goal for speech and audio transmission in a multimedia system is to achieve the minimum data rate for a given quality. The quality of speech depends on the data rate, just as the quality of audio depends on the number of bits per sample value.

 

For speech, a data rate of 8 kbit/s can satisfy the telephone-quality standard, and CD quality can be approached while reducing 16 bits per sample value to the equivalent of 2 bits per sample value, i.e., only 1/8 of the actual data needs to be transmitted.

 

 

 

Image and Graphics:

An image is a spatial representation of a virtual or real object, in 2D or 3D. An image may be thought of abstractly as a continuous function defined over a (usually rectangular) region of a plane. A recorded image may be in photographic, analog video signal, or digital format. In a computer, an image is usually a recorded image, such as a video image, digital image, or picture; in computer graphics, an image is always a digital image, whereas multimedia applications can present all of these formats.

 

Graphics and graphics format:

Graphics image formats are specified through graphics primitives and their attributes. Graphics primitives include lines, rectangles, circles, ellipses, and text strings for 2D images, and polyhedra for 3D objects; the graphics package determines which primitives are supported. Attributes specify line style, line width, and color.

 

Graphics images are not initially represented as a matrix of pixels; this higher-level representation must be converted into a lower-level pixel representation during image processing. Some graphics packages, such as SRGP (Simple Raster Graphics Package), take the graphics primitives and attributes and generate a bitmap or pixmap. A bitmap is an array of pixel values that maps one-to-one onto the pixels of the screen; a pixmap describes an image with multiple bits per pixel. Low-end color systems use 8 bits per pixel, allowing 2^8 = 256 colors; some expensive systems use 24 bits per pixel, allowing about 16 million colors. After this conversion phase, the graphics format is presented as a digital image format.

 

 

Digital image representation:

An image can be thought of as a function whose values give the light intensity at each point over a planar region. This function is sampled at discrete intervals, and the sampling quantizes the intensity values into discrete levels.

 

A digital image is represented by a matrix of numerical values representing quantized intensity values. For a 2D image matrix, I(r, c) is the intensity value at the position corresponding to row r and column c of the matrix.

 

The points at which an image is sampled are known as picture elements, commonly abbreviated as pixels. The pixel intensity values at the points of the image are called gray-scale levels, and they represent the shades of the image. Each pixel intensity is represented by an integer, determined from the continuous image by averaging over a small neighborhood around the pixel location. If there are only two intensity values, 0 and 1, the image is black and white; when 8-bit integers are used to store pixel values, there are 256 levels (0 for black, 255 for white, and the integer values in between for the other shades).
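
A tiny sketch of this matrix view of an image, using a hypothetical 3 x 3 gray-scale image:

```python
# Sketch: a tiny 8-bit grayscale image as a matrix of intensity values,
# where I(r, c) is the intensity at row r, column c (0 = black, 255 = white).
image = [
    [  0,  64, 128],
    [ 64, 128, 192],
    [128, 192, 255],
]

def I(r, c):
    return image[r][c]

print(I(2, 2))   # 255 -> the white pixel in the bottom-right corner
```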

 

 

Image format:

There are many different image formats; for multimedia systems, two kinds are considered:

Captured image formats:

The image format that comes out of an image frame grabber is the captured image format. It is specified by two main parameters: spatial resolution, specified in pixels x pixels, and color encoding, specified in bits per pixel. The values of both parameters depend on the hardware and software used for image input and output.

 

Stored image format:

An image is stored as a two-dimensional (2D) array of values, each value representing one pixel of the image. For a bitmap, these values are binary digits, whereas for a color image each value may be:

Three numbers representing the intensities of the red, green, and blue components of the color at that pixel.

Three numbers that are indices into tables of red, green, and blue intensities.

A single number that is an index into a table of color triples (RGB).

 

Some current image file formats are GIF (Graphics Interchange Format), JPEG, and TIFF (Tagged Image File Format).

 

 

Image synthesis or generation:

Computer graphics concerns the pictorial synthesis of real or imaginary objects from their computer-based models. Image synthesis is an integral part of all computer user interfaces and is indispensable for visualizing 2D, 3D, and higher-dimensional objects. Areas as diverse as education, science, engineering, medicine, advertising, and entertainment all rely on graphics, e.g.:

User interfaces: applications on PCs with graphical user interfaces, desktop publishing (DTP), and windowing systems.

Office automation and electronic publishing.

Simulation and animation for scientific visualization and entertainment.

 

Interactive computer graphics has been the most important means of producing images since the invention of photography and television. Images can also be generated by video digitizer cards that capture analog signals and create a digital image; such digital images are used for image recognition and, in communication, for video conferencing.

 

 

Image analysis:

Image analysis is the extraction from an image of the description needed by higher-level scene analysis methods. Image analysis techniques include the computation of perceived brightness and color, partial or complete recovery of 3D data in the scene, location of discontinuities corresponding to objects in the scene, and characterization of the properties of uniform regions in the image.

 

Image analysis is important in many areas, such as aerial surveillance photographs, slow-scan television images of the moon or of planets, X-ray images, and television images taken from an industrial robot's visual sensor.

 

The subareas of image processing include image enhancement, pattern detection and recognition, and scene analysis and computer vision.

Image enhancement:

Image enhancement deals with improving image quality by eliminating noise (extraneous or missing pixels) or by enhancing contrast.

 

Pattern detection and recognition:

This deals with detecting and classifying standard patterns and finding distortions from these patterns, e.g., optical character recognition.

 

Scene analysis and computer vision:

This deals with recognizing and reconstructing 3D models of a scene from several 2D images. An example is an industrial robot sensing the relative sizes, shapes, positions, and colors of objects.

 

 

Image recognition:

To fully recognize an object in an image means knowing that there is an agreement between the sensory projection and the observed image: how the object appears in the image has to do with the spatial configuration of the pixels. Image recognition is carried out in the following steps:

Conditioning:

Conditioning estimates the informative pattern on the basis of the observed image. It suppresses noise and uninteresting systematic or patterned variations, to normalize the image.

 

Labeling:

Labeling assumes that the informative pattern has structure as a spatial arrangement of events, each spatial event being a set of connected pixels. Labeling determines in what kinds of spatial events each pixel participates, e.g., edge detection: the labeling operation labels the kind of primitive spatial event in which each pixel participates.

 

Grouping:

Grouping identifies the events by collecting together maximal connected sets of pixels participating in the same kind of event. A grouping operation in which edges are grouped into lines is called line fitting. The grouping operation involves a change of logical data structure: the observed image, the conditioned image, and the labeled image are all digital image data structures.

 

Extracting:

Extracting computes, for each group of pixels, a list of properties. The properties might include its centroid, area, orientation, spatial moments, circumscribing circle, inscribed circle, and so on. It can also measure spatial relationships between two or more groupings.

 

Matching:

The matching operation determines the interpretation of some related set of image events, associating these events with some given 2D or 3D object. There is a wide variety of matching operations; the classic example is template matching, which compares the examined pattern with stored models of known patterns and chooses the best match.
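
A minimal sketch of template matching using the sum of squared differences as the score; the image and template are toy data:

```python
# Sketch: classic template matching, the matching step named above. It
# slides a small template over the image and scores each position by the
# sum of squared differences; the lowest score is the best match.
def best_match(image, template):
    th, tw = len(template), len(template[0])
    best = None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            score = sum((image[r + i][c + j] - template[i][j]) ** 2
                        for i in range(th) for j in range(tw))
            if best is None or score < best[0]:
                best = (score, r, c)
    return best   # (score, row, col) of the best-matching position

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
print(best_match(img, [[9, 8], [7, 9]]))   # (0, 1, 1): exact match at (1, 1)
```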

 

 

 

 

Image transmission:

To transmit digital images through computer networks, the following requirements should be considered:

The network must accommodate bursty data transport, because image transmission is bursty; the bursts are caused by the large size of images.

Image transmission requires reliable transport.

Time dependence is not a dominant factor for images, in contrast to audio/video transmission.

 

The image size depends on the image representation format used for transmission. There are three important methods of image transmission:

Raw image data transmission (the original image is sent)

Compressed image data transmission (the image is compressed, then sent)

Symbolic image data transmission (a symbolic representation is sent)

 

 

 

Video and Animation:

Motion video and computer-based animation are basic media for multimedia systems. A video signal is displayed on a screen using a cathode ray tube (CRT): an electron beam carries the corresponding pattern information, such as the intensity of the viewed scene. Video signal representation includes three aspects:

Visual representation:

The central objective of visual representation is to offer the viewer a sense of presence in the scene and of participation in the events portrayed. To this end, the televised image should convey the spatial and temporal content of the scene. There are nine measures of visual representation:

Vertical detail and viewing distance.

Horizontal detail and picture width.

Total detail content of image

Perception of depth

Luminance and chrominance

Temporal aspect of illumination

Continuity of motion

Flicker

Temporal aspect of video quality

 

Luminance: the brightness of the image (the proportion of white).

Chrominance: the quality of a color, combining hue and saturation.

Hue: what distinguishes one color from another.

 

 

Visual transmission:

Video signals are transmitted to receivers in a single channel containing three color components: one luminance and two chrominance signals. Luminance and chrominance are carried together by specifying the chrominance subcarrier to be an odd multiple of one-half of the line-scanning frequency. This interleaves the frequency components of chrominance with those of luminance and avoids interference between the two signals. For low degradation of the image, the luminance bandwidth is kept below the chrominance subcarrier frequency of about 3.58 MHz.

 

The basic video bandwidth required to transmit the luminance and chrominance signals is 4.2 MHz for the NTSC standard; for HDTV it is roughly twice that of NTSC.

 

 

Visual digitization:

A picture or motion video must be converted from analog to digital representation before it can be processed by a computer or transmitted over a computer network. This conversion process is called digitization. In its general form, digitization consists of sampling the gray (or color) level at an M x N array of points; because the gray level at these points may take any value in a continuous range, it must be quantized: the gray-level range is divided into k intervals, and each sample is assigned a single value from its interval. For pictures to be reconstructed faithfully from the quantized samples, a hundred or more quantization levels are needed, and samples must be taken densely in regions of the picture where the gray level changes rapidly. The result of sampling and quantizing is a digital image: a rectangular array of integer values representing pixels.

 

The next step in the creation of digital motion video is to digitize pictures in time, obtaining a sequence of digital images per second that approximates analog motion video.

 

 

 

Computer-based animation:

Literally, animation means bringing something to life. Animation covers all changes that have a visual effect, and these visual effects can differ in nature: they may include time-varying positions (motion dynamics); shape, color, transparency, structure, and texture of an object (update dynamics); and changes in lighting, camera position, orientation, and focus.

 

A computer-based animation is an animation performed by a computer using graphical tools to provide visual effects. It includes four basic steps:

 

Input process:

The input process is the first stage, before the animation itself. The images of the objects must be digitized, or the drawings completed, before the animation process. This can be done by optical scanning, by tracing the drawings with a data tablet, or by producing the drawings with a drawing application. The drawings may need to be post-processed (filtered) to clean up any glitches arising from the input process. The digitized images are kept in key frames, at the extreme or characteristic positions of whatever is to be animated.

 

Composition stage:

In the composition stage, foreground and background figures and colors are combined to generate the individual frames for the final animation. Several low-resolution frames can be placed in a rectangular array to generate a trial film, using the pan-zoom features available in frame buffers: the frame buffer can take a particular portion of such an image (pan) and then enlarge it to fill the entire screen (zoom). Repeating this process fast enough over several frames of animation stored in a single image gives the effect of continuity.

 

In-between process:

Animating movement from one position to another requires composing frames with intermediate positions in between the key frames. This is the in-between process, and it is carried out by interpolation: the system is given only the starting and ending positions. The easiest interpolation method is linear interpolation, called lerping, but it has limitations: if lerping is used to calculate the intermediate positions of a ball thrown in the air from a sequence of three key frames (start, middle, and final positions), the motion will look wrong. In such situations splines are used; splines can vary any parameter smoothly as a function of time, and can make an individual point move smoothly in space and time. The in-between process may also involve interpolating the shapes of objects in the intermediate frames.
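
A small sketch of lerping between two key-frame positions; the coordinates and frame count are arbitrary:

```python
# Sketch: linear interpolation ("lerping") of an object's position between
# two key frames, producing the in-between frames.
def lerp(p0, p1, t):
    """t runs from 0.0 (first key frame) to 1.0 (second key frame)."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

start, end = (0, 0), (100, 50)     # key-frame positions
frames = [lerp(start, end, n / 4) for n in range(5)]
print(frames)   # the two key frames plus three in-between positions
```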

 

Changing colors:

For changing colors, computer-based animation uses a CLUT (color lookup table) in the frame buffer, together with double buffering. LUT animation is generated by manipulating the color lookup table; the simplest method is to cycle the colors in the LUT, thereby changing the colors of various pieces of the image. Using LUT animation is faster than sending an entire new pixmap to the frame buffer for each frame.

 

For example, with 8 color bits per pixel and a 640 x 512 frame buffer, a single image contains 320 KB of information. Transferring a new image to the buffer every 1/30th of a second requires a bandwidth of roughly 9.8 MB per second, whereas LUT animation needs only a few hundred to a few thousand bytes.
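
The arithmetic behind this comparison, plus one LUT cycling step, as a short sketch (the LUT contents are arbitrary):

```python
# Sketch: why LUT animation is cheap. Cycling a 256-entry color lookup
# table changes on-screen colors without rewriting the whole frame buffer.
width, height, bits_per_pixel = 640, 512, 8

frame_bytes = width * height * bits_per_pixel // 8
print(frame_bytes)          # 327680 bytes (~320 KB) per full frame
print(frame_bytes * 30)     # ~9.8 MB/s to resend 30 frames per second

lut = [(i, i, 0) for i in range(256)]   # 256 RGB entries (arbitrary colors)
lut = lut[1:] + lut[:1]                 # one cycle step: a few hundred bytes
```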

 

 

 

Animation languages:

Several animation languages have already been developed. All of them fall into three groups:

 

Linear-list notation languages:

These are languages designed specifically to support animation. Each event in the animation is described by a starting and an ending frame number and an action that is to take place (the event). An example of this type of language is SCEFO (SCEne FOrmat).

For example:

42, 53, B, rotate, “palm”, 1, 30

Here,

 

42 => starting frame number

53 => ending frame number

B => table

rotate => action

"palm" => object

1 => starting angle

30 => ending angle
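
A sketch of how such an event might be held and evaluated by a player program; the field names are invented for illustration:

```python
# Sketch: the linear-list event above, held as a structured record so a
# player could step through frames and apply the action.
event = {
    "start_frame": 42,
    "end_frame": 53,
    "table": "B",
    "action": "rotate",
    "object": "palm",
    "start_angle": 1,
    "end_angle": 30,
}

def angle_at(frame, e):
    """Angle of the object at a frame inside the event's range."""
    t = (frame - e["start_frame"]) / (e["end_frame"] - e["start_frame"])
    return e["start_angle"] + t * (e["end_angle"] - e["start_angle"])

print(angle_at(47, event))   # angle reached a bit before the midpoint
```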

 

General purpose languages:

High-level languages developed for normal application software development also have animation-supporting features along with graphics drawing, for example QBASIC, C, C++, and Java.

 

Graphical languages:

These are high-level computer languages developed especially for graphics drawing and animation, e.g., AutoCAD.

 

 

 

Display of animation:

Animation is displayed on a video monitor as a series of horizontal scan lines of pixels, from top to bottom (a raster system). The animated objects must be scan-converted into their pixmap images in the frame buffer. The conversion must be done at least 10 times per second (preferably 15 to 20 times) for a smooth animation effect. Hence a new image must be created in no more than 100 milliseconds, and of those 100 milliseconds, scan conversion should take only a small portion, so that erasing and redrawing the object on the display can be done fast enough.

 

 

 

Transmission of animation:

Animation can be transmitted over a computer network using one of two methods:

 

By using symbolic representation:

The symbolic representation of the animation object (for example, a ball) is transmitted together with the operation commands performed on the object, and at the receiver side the animation is displayed after scan-converting it into a pixmap. The transmission time is short, because the symbolic representation of an animated object is smaller than its pixmap, but the display time at the receiver is longer, because the scan-converting operation has to be performed there.

 

By pixmap representation:

In this method, the animated object is converted into a pixmap and then transmitted to the receiver. The transmission time is longer than for symbolic representation, because the pixmap is larger, but the display time is shorter, because no scan conversion has to be performed at the receiver side.

 

 

 

 

Communication system in multimedia:

Multimedia applications such as multimedia mail, collaborative work systems, and virtual reality systems require high-speed networks with high-transfer-rate communication systems.

 

The higher layers of a multimedia communication system (MCS) can be divided into two architectural subsystems:

 

Application subsystem:

This subsystem covers management and service issues for group collaboration and session management, which together support a large group of multimedia applications such as tele-collaboration. The application subsystem can be discussed under two headings:

 

Collaborative computing:

Collaborative computing is generally known as computer-supported cooperative work (CSCW). The current infrastructure of networked computers, together with audio and video, makes it easier for people to cooperate and to bridge space and time in collaborative computing environments.

 

There are many collaborative computing tools, for example e-mail, bulletin boards (e.g., Usenet news), screen sharing tools, text-based conferencing systems, IRC (Internet Relay Chat), CompuServe, telephone conference systems, conference rooms, and video conferencing.

 

Session management:

Session management is an important part of the multimedia communication architecture. It is the core part that separates the control from the actual transport needed during a session. The session control architecture is built around the session manager and consists of the session manager, media agents, and shared workspace agents.

 

The session manager includes local and remote functionality. Local functionality includes membership control management (such as participant authentication), control management for shared workspaces (such as floor control), media control management (such as intercommunication and synchronization among media agents), and conference control management (such as establishing, modifying, and closing a conference).

 

In its remote functionality, the session manager communicates with other session managers to exchange session state information, which may include floor information, configuration information, and so on. In different conferencing systems, conference control and floor control can be embedded either in the application layer or in the session layer.

 

Media agents: media agents are separate from the session manager and are responsible for decisions specific to each type of media. This modularity allows an agent to be replaced. The shared workspace agent, in turn, transmits shared objects, such as telepointer coordinates and graphical or textual objects, among the shared applications.

 

 

 

 

Transport subsystem:

This subsystem deals with multimedia transmission and the application requirements it must meet. A multimedia application in a network environment imposes the following four requirements on data handling in computing and communications:

 

Data throughput (transfer rate):

Audio and video data show stream-like behavior and require high data throughput even in compressed form. Several such streams may exist concurrently in a workstation or network, requiring high overall throughput. Data movement in the local end system means manipulating large quantities of data in real time, and can create a bottleneck in the system.

 

Fast data forwarding:

Fast data forwarding imposes a problem on end systems where different applications coexist. Each application's requirements range from normal error-free data transmission to time-constrained traffic. The faster a communication system can transfer a data packet, the fewer packets need to be buffered; this calls for careful spatial and temporal resource management in the end systems and in routers or switches, since buffering causes end-to-end delay. For file transfer, a delay of one second may be tolerated, whereas for a videophone or video conference the delay should be lower than 200 ms for natural communication.

 

Service guarantees:

To achieve service guarantees, resource management must be used. Without resource management in the end systems and in switches/routers, multimedia systems cannot provide reliable QoS (quality of service) to their users, because transmission over unreserved resources leads to dropped or delayed packets.

 

Multicasting:

Multicasting is an important requirement for distributed multimedia applications, in terms of sharing resources such as the network and the communication protocol processing at end systems.

 

 

 

 

Quality of service and resource management:

Quality of service (QoS) is the service quality of a multimedia communication system that satisfies the user and application requirements; it is a concept for specifying how "good" the offered networking services are. QoS can be characterized by several aspects, some of which are:

 

QoS layering:

The multimedia communication system consists of three layers: application, system (including communication services and operating system services), and devices (network and multimedia devices). Above the application there may or may not reside a human user. The QoS of each of these three layers influences the quality of service of the multimedia communication system (MCS) as a whole.

 


Service objects:

Services are performed on different objects, such as media sources, media sinks, connections, and virtual circuits (VCs). QoS parameters are specified for these service objects.

 

QoS description:

The QoS description for the services and for processing a transport or network connection includes three groups of parameters:

 

Application QoS parameters:

These describe the requirements for application services, specified in terms of media quality and media relations.

 

System QoS parameters:

These describe the requirements on communication services and operating system services, in terms of bits per second, number of errors, processing time, size, etc.

 

Network QoS parameters:

These describe the requirements on network services, in terms of network traffic (load) and network performance.

 

 

QoS parameter values and types of service:

The QoS parameter values determine the type of service. On the basis of connectionless and connection-oriented services, the types are:

 

Guaranteed service:

Provides QoS guarantees, specified either by a single value (target or average value) or by a pair of values (minimum and average value, or lowest quality and target quality).

 

Predictable service (historical service):

This is based on past network behavior: the current QoS is predicted by matching past service quality.

 

Best effort service:

This is based on either no guarantees or only partial guarantees. Most current network protocols provide best-effort services.

 

Resource:

A resource is a system entity required by tasks for manipulating data. Each resource has a set of distinguishing characteristics:

 

There are active and passive resources: a CPU or a network adapter for protocol processing is an active resource, whereas main memory (buffer space) and bandwidth are passive resources.

A resource can be used exclusively by one process or shared between various processes: a speaker is an exclusive resource, whereas bandwidth is a shared resource.

A resource that exists only once in the system is known as a single resource; otherwise, it is a multiple resource.

 

 

Resource management:

In a networked multimedia system, resources are managed by the various components of a resource management subsystem. The main goal of resource management is to provide guaranteed delivery of multimedia data. To do so, three actions are used:

 

To reserve and allocate resources during multimedia call establishment, so that traffic can flow according to the QoS specification.

To provide resources according to the QoS specification.

To adapt to resource changes during ongoing multimedia data processing.

 

The resource management subsystem includes resource managers at the hosts as well as at the network nodes. Resource management protocols are used to exchange information about the available resources.
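
A minimal sketch of the reservation step as an admission test on a single link of assumed capacity; real resource management protocols are far more involved:

```python
# Sketch: the reservation step of resource management as a simple admission
# test, assuming a single shared link with a fixed capacity.
class Link:
    def __init__(self, capacity_bps):
        self.capacity = capacity_bps
        self.reserved = 0

    def admit(self, required_bps):
        """Reserve bandwidth at call establishment, or reject the stream."""
        if self.reserved + required_bps > self.capacity:
            return False                  # QoS could not be guaranteed
        self.reserved += required_bps
        return True

link = Link(capacity_bps=10_000_000)
print(link.admit(1_411_200))   # CD-quality audio stream: admitted
print(link.admit(9_000_000))   # would exceed capacity: rejected
```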

 

 

Trends in collaborative computing:

Multimedia networked applications such as tele-medicine, tele-working, virtual collaborative spaces, distributed simulation, and tele-action are new applications that place demands on collaborative computing.

 

Future collaborative computing will incorporate a number of people (possibly unknown to each other) at geographically distributed sites, using a variety of applications from different application domains; given this heterogeneity, interoperability issues will need to be resolved.

 

 

Trends in transport subsystem:

The trends in the transport subsystem can be discussed under two topics:

 

Special purpose protocol approach:

This is also known as the Internet paradigm. The approach is to design various special-purpose protocols on top of the Internet Protocol (IP) for different classes of applications: for example, TCP is a special-purpose protocol for reliable data communication, UDP is for unreliable data communication, and RTP is for audio and video transport.

 

General purpose protocol approach:

The general-purpose protocol approach is to provide a general set of services from which the user can pick and choose, for example XTP, where the user can select a one-way, two-way, or three-way handshake for connection setup and release.

             

A more realistic and flexible approach may be to develop application-tailored protocols that are customized for specific types of services, such as transferring voice, video, text, and image data.

 

 

 

 

Multimedia database management systems (MDBMS):

MDBMSs are database systems in which, besides text and other discrete data, audio and video information can also be stored, manipulated, and retrieved. To support this functionality, a multimedia database (MMDB) system requires a suitable storage technology and a suitable file system, so that external devices can be accessed easily through the file directory. The major characteristics of an MDBMS are given below:

 

Corresponding storage media: in an MDBMS, the storage media can be both computer-integrated components and external devices.

It supports descriptive queries.

It must have a device-independent interface.

It must also support a data-format-independent interface.

It must allow view-specific and simultaneous data access, i.e., the same multimedia data can be accessed simultaneously through different queries by several applications.

It must manage large amounts of data.

It must provide relational consistency of data management.

It must support real-time data transfer.

It should support long transactions, i.e., long-running transactions must be reliable.

 

Documents, hypertext and MHEG:

 

Documents:

A multimedia document is comprised of information coded in at least one continuous (time-dependent) medium and one discrete (time-independent) medium. The document consists of a set of structured information that can be in different forms of media and that, during presentation, can be generated or recorded. A document is aimed at the perception of a human; the close relation between information units that integrates the different media is also called synchronization. A multimedia document is closely related to its environment of tools, data abstractions, basic system concepts, and document architecture.

 

Continuous and discrete data are still processed differently today: text is processed within an editor program, whereas a motion picture can be manipulated from the same editor program only through library calls. The goal of multimedia data abstraction is to achieve integrated, i.e., uniform, description and processing of all media.

 

Abstractions of multimedia data serve as the fundamental building blocks for programming different multimedia applications, especially editors and other document processing tools.

 

Basic system concepts for document processing use multimedia abstractions and also serve as concepts for the information architecture (document architecture) of a multimedia document.

Information (document) architecture of multimedia document:

 

[Diagram: document architecture relating the presentation model, the manipulation model and the representation model]


Document architecture describes the connections among the individual elements, which are represented as models: the presentation model, the manipulation model and the representation model.

 

The manipulation model describes all the operations allowed for creation, change and deletion of multimedia information. The representation model defines:

 

the protocol for exchanging the information among different computers, and

the formats for storing data.

 

The presentation model produces the multimedia information in front of the audience using defined interfaces, and also takes input.

 

 

Hypertext:

                 Information that is linked together non-linearly, such that there is not just one reading sequence but the reader decides on his or her own reading path, is called hypertext. The reader can start in a lexicon (header) with the notion hypertext, go through a cross-reference to systems, and finish with a description. Through these associations via reference links, the author of the information determines the actual linking of the hypertext information. Technically, a hypertext structure is a graph consisting of nodes and edges, where:

 

Nodes are the actual information units. They are, for example, text elements, individual graphics, and audio or video logical data units (LDUs). The information is shown at the user interface mostly in its own window.

 

The edges provide the links between information units; they are usually called pointers or links. A pointer is mostly a directed edge and can include its own information too. The origin of a pointer is called an anchor, which exists in the actual information unit (node). A small sketch of such a graph follows below.
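
A hypertext structure can be sketched as a small directed graph; the node contents and anchor names below are purely illustrative.

```python
# A minimal sketch of a hypertext structure as a directed graph.
# Node contents and anchor names are illustrative only.
class Node:
    def __init__(self, content):
        self.content = content          # the actual information unit (LDU)
        self.links = []                 # outgoing edges (pointers)

    def link(self, anchor, target):
        # The anchor is the origin of the pointer inside this node.
        self.links.append((anchor, target))

lexicon = Node("Lexicon entry: hypertext")
systems = Node("Cross-reference: hypertext systems")
descr   = Node("Description of hypertext systems")

lexicon.link("hypertext", systems)      # the reader may follow this edge...
systems.link("description", descr)      # ...or stop; the path is the reader's choice

# Traverse one possible reading path.
node = lexicon
while node.links:
    anchor, node = node.links[0]
    print("followed anchor:", anchor)
```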

 

Hypermedia system:

                                 A hypermedia system combines the non-linear information linking of hypertext systems with the continuous and discrete media of multimedia systems. For example, if a non-linear link structure consists of text and video data, then it is at once a hypermedia, multimedia and hypertext system.

 

SGML (Standard Generalized Markup Language):

                                                                                    The Standard Generalized Markup Language was supported mostly by American publishers. The author of a document prepares the text content and specifies, in a uniform way, the titles, tables, etc., without describing the actual representation (script type, line spacing); the publisher then specifies the resulting layout. SGML determines the form of tags, but it does not specify their location or meaning; SGML makes a framework available with which the user defines a syntax description in an object-specific system. Thus SGML specifies the syntax, but not the semantics.

 

Document architecture of SGML:

[Diagram: SGML document architecture — the document passes through parsing and formatting]

The processing of an SGML document is divided into two processes: parsing and formatting. The formatter knows the meaning of the tags and transforms the document into a formatted document, whereas the parser uses the tags occurring in the document, in combination with the corresponding document type specification, to recover the document structure. Once this is completed, the parts of the layout are linked together.

 

SGML defines the syntax for tags through a grammar, but it does not define the semantics of these tags. The SGML document architecture emphasizes the representation model. With its tags, object classes and inheritance, SGML can be used for the definition of the structure; multimedia data are supported in the SGML standard only in the form of graphics. A small parsing sketch follows below.
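
The parsing/formatting split can be sketched as follows, using HTML-style tags as a stand-in for a real SGML application; Python's html.parser is used purely for illustration and is not an SGML parser.

```python
# A minimal sketch of the parse/format split, with HTML-style tags standing
# in for an SGML application; names and layouts are illustrative only.
from html.parser import HTMLParser

class StructureParser(HTMLParser):
    """Parser side: recovers the document structure from tags only."""
    def __init__(self):
        super().__init__()
        self.structure = []
    def handle_starttag(self, tag, attrs):
        self.structure.append(tag)

def format_document(structure):
    """Formatter side: attaches a layout meaning to each structural tag."""
    layout = {"title": "18pt bold", "para": "10pt justified"}
    return [(tag, layout.get(tag, "default")) for tag in structure]

p = StructureParser()
p.feed("<title>Multimedia</title><para>SGML separates structure from layout.</para>")
print(format_document(p.structure))
```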

 

 

 

 

Open document architecture (ODA):

                                                          The open document architecture (1989) was initially called the office document architecture because it supported mostly office-oriented applications. The main goal of this document architecture is to support the exchange, processing and presentation of documents in open systems.

 

The main property of ODA is the distinction among content, logical structure and layout structure, whereas in SGML only the logical structure and the contents are defined. ODA also defines semantics (rules). These three aspects are linked within a document, as shown in the diagram.

[Diagram: content, logical structure and layout structure as three linked aspects of an ODA document]

These three aspects are orthogonal views of the same document; each view represents one aspect, and together they give the actual document.

 

Content portions:

                                 The content of the document consists of content portions. A content architecture describes:

 

the specification of the elements,

the possible access functions, and

the data coding.

 

The individual elements are the logical data units (LDUs), which are determined for each medium. The access functions serve for the manipulation of individual elements. The coding of the data determines the mapping with respect to bits and bytes. ODA has content architectures for the media text, geometric graphics and raster graphics.

 

Layout structure:

The layout structure specifies mainly the representation of a document. It is related to a two-dimensional representation with respect to paper or screen. Using frames, the position and size of individual layout elements are established, e.g. the page size and type styles.

 

Logical structure:

The logical structure includes the partitioning of the content, i.e. paragraphs, individual headings, titles, figures, etc.; these are specified according to a tree structure. A small sketch of the three views follows below.
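
The three orthogonal views can be sketched with illustrative Python data classes; none of these names come from an actual ODA toolkit.

```python
# A minimal sketch of ODA's three views of one document; all names are
# illustrative, not part of any ODA implementation.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContentPortion:            # content: the raw media data
    medium: str                  # 'text', 'geometric graphics', 'raster graphics'
    data: bytes

@dataclass
class LogicalNode:               # logical structure: tree of titles, paragraphs...
    kind: str
    children: list = field(default_factory=list)
    content: Optional[ContentPortion] = None

@dataclass
class Frame:                     # layout structure: 2-D placement on page/screen
    x: int
    y: int
    width: int
    height: int
    content: Optional[ContentPortion] = None

body = ContentPortion("text", b"Open Document Architecture")
logical = LogicalNode("document", [LogicalNode("title", content=body)])
layout = [Frame(0, 0, 210, 30, content=body)]   # the same content, another view
```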

 

 

 

MHEG (Multimedia and Hypermedia information coding Expert Group):

[Diagram: ISO/IEC JTC1/SC29 working groups for the coding of audio, pictures and multimedia/hypermedia information]

The ISO/IEC JTC1/SC29 committee is working on the standardization of exchange formats for multimedia systems, i.e. the coding of audio, pictures, and multimedia and hypermedia information. The actual standards are developed at the international level in three working groups co-operating with research and industry. The diagram shows that these standards deal with the coding and compression of individual media; the results of the working groups, JPEG and MPEG, are of special importance in the area of multimedia systems.

 

In a multimedia presentation, the contents are described in the form of individual information objects with the help of WG1, WG11 and WG12. The structure (processing in time) is specified through temporal and spatial relations between the information objects. The standardization of this structure description is the subject of WG12, which is known as the Multimedia and Hypermedia information coding Expert Group (MHEG). The final MHEG standard is described in three documents: the first part covers concepts and exchange formats; the second part describes an alternative syntax of the exchange format, semantically equivalent to the first part; and the third part presents a reference architecture for linkage to a script language.

 

 

 

 

User interface:

A user interface comprises the means or tools by which the components of a computer, or simply a computer and its user, communicate information, data, output and instructions among themselves. The user interface may contain several tools, such as command buttons, text boxes, menus, pop-ups, scroll bars, list boxes, combo boxes, images, video and audio players, links, etc.

 

A good multimedia interface must combine multiple media, sometimes using multiple modes, such as written text together with spoken language. Multimedia will be meaningless if applications do not use the various media at the user interface for input and output, since the media determine not only how human-computer interaction occurs but also how well.

 

 

Basic design issues:

The main emphasis in the design of multimedia user interfaces is multimedia presentation. An effective presentation process should involve not only a sequential flow of actions but also parallel and interactive actions, i.e. there is a requirement for intensive feedback between the components making decisions about media and modalities. The design includes a number of higher-level concerns, such as the goal and focus of the dialog, and the user context and current task, including the media relations, so as to represent the data and information in a way that corresponds to those concerns.

 

There are several factors which should be considered while designing the user interface; some of them are:

Determine the appropriate information content to be communicated.

Represent the communicative intent.

Represent the essential characteristics of the information.

Choose the proper media for the information presentation.

Coordinate the different media and assembling techniques within the presentation.

Provide interactive exploration of the information presented.

 

The objective of multimedia presentations should follow the appropriateness principle: the information should be exactly adequate to the task, neither more nor less.

 

 

Video at user interface:

At the user interface, video is implemented through a continuous sequence of individual images. Video can be manipulated at the user interface similarly to the manipulation of still pictures/images. A continuous sequence of 15 images per second already gives a rough perception of continuous motion.

 

The functionality for video is not as simple to deliver because of the required high data transfer rate, which is not guaranteed by most of the hardware in current graphics systems. Special hardware for the visualization of motion pictures is available today, mostly through additional video cards. A small frame-pacing sketch follows below.
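
A frame-pacing loop for 15 images per second might be sketched as below; draw_frame is a hypothetical placeholder for whatever actually displays an image.

```python
# A minimal sketch of frame pacing: presenting images at a fixed rate
# (15 frames/s here); draw_frame() is a hypothetical placeholder.
import time

FPS = 15
FRAME_TIME = 1.0 / FPS          # ~66.7 ms per image

def draw_frame(i):              # placeholder: real code would blit frame i
    pass

next_deadline = time.monotonic()
for i in range(FPS * 2):        # two seconds of "video"
    draw_frame(i)
    next_deadline += FRAME_TIME
    delay = next_deadline - time.monotonic()
    if delay > 0:               # sleep only if we are ahead of schedule
        time.sleep(delay)
```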

 

 

Audio at user interface:

Audio can be implemented at the user interface for application control; for this, speech analysis is necessary.

      

Speech analysis is either speaker-dependent or speaker-independent. Speaker-dependent solutions allow the input of approximately 25,000 different words with a relatively low error rate; here an intensive learning phase, to train the speech analysis system to the speaker-specific characteristics, is necessary prior to the speech analysis phase. A speaker-independent system can recognize only a limited set of words, but no training phase is needed. During audio output, the additional presentation dimension of space can be introduced by using two or more separate channels to give a more natural distribution of sound. The best-known example of this technique is stereo.

 

 

 

User friendliness as primary goal:

User friendliness is the key property of a good user interface; for example, a multimedia-integrated telephone service offers a better user interface than a plain ISDN telephone service.

 

The design of a user-friendly graphical interface requires the consideration of many conditions, and the addition of audio and video at the user interface does not simplify this process. There are a number of applicable criteria for multimedia user interfaces, some of which are:

 

Application instructions must be easy to learn, i.e. the application must have an easy-to-learn instruction section.

A context-sensitive help function using hypermedia techniques must be available in the user interface.

A user-friendly interface must have the property that the user remembers the application's instruction rules.

A user interface should enable effective use of the application.

The color combination, character set, resolution and form of the windows must appeal to users of different tastes; they determine the first and last impression of the application.

Any missing element of the human-computer graphical interface must be supplied immediately.

A user-friendly interface must have specific support for element entry, by menu or by graphical input.

Individual functions must be placed together in a meaningful fashion, through alphabetic ordering or logical grouping.

Presentation, i.e. the optical image, may be by full text, abbreviated text, icons, graphics and motion video.

The interface may also have different dialog boxes (DBs) supporting different operations.

 

 

 

Abstraction for programming:

 

Abstraction levels (from the application down to the hardware):

Multimedia application

Object-oriented language

High-level programming language

Toolkits

System software

Libraries

Device drivers for continuous media

Devices

Abstraction levels in programming define different approaches, with a varying degree of detail, for representing, accessing and manipulating data. The abstraction levels with respect to multimedia data, and their relations to each other, are represented in the listing above.


A multimedia application may access each level:

                                                                                  A device for processing continuous media can exist as a separate component in the computer. In this case, the device is not part of the operating system, but is directly accessible to every component and application.

 

A library, the simplest abstraction level, includes the necessary functions for controlling the corresponding hardware through specific device-access operations. Multimedia devices are bound to the operating system through the device driver; hence the processing of the continuous data becomes part of the system software. Multimedia device drivers embedded in the operating system simplify the implementation of device access and scheduling.

 

Dedicated programming languages allow for the implementation of real-time programs. The corresponding program mostly runs in a real-time environment (RTE) separate from the actual application.

 

Higher procedural programming languages build the next abstraction level. These are often used to implement commercial multimedia applications. The code generated by the compiler can be processed through libraries and system interfaces for continuous data.

 

The object-oriented environment provides the application with a class hierarchy for the manipulation of multimedia; here too, the code generated by the compiler is processed through libraries and system interfaces for continuous data. A small sketch of such a class hierarchy follows below.
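
Such a class hierarchy might look as follows; the class names are illustrative and are not taken from any particular multimedia framework.

```python
# A minimal sketch of a class hierarchy for media manipulation, with
# illustrative names only; no particular multimedia framework is implied.
class Medium:
    def play(self):
        raise NotImplementedError

class DiscreteMedium(Medium):        # time-independent media
    pass

class ContinuousMedium(Medium):      # time-dependent media
    def __init__(self, rate):
        self.rate = rate             # units (frames/samples) per second

class Image(DiscreteMedium):
    def play(self):
        print("show image once")

class Video(ContinuousMedium):
    def play(self):
        print(f"present frames at {self.rate}/s")

for m in (Image(), Video(rate=15)):
    m.play()                         # uniform interface across media classes
```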

 

 

Libraries:

The processing of continuous media is based on a set of functions which are embedded in libraries. These libraries are provided together with the corresponding hardware.

 

The device drivers and libraries, which control all the available functions, also support each device. The libraries differ very much in their degree of abstraction: some libraries can be considered extensions of the GUI, whereas other libraries consist of control instructions passed as control blocks to the corresponding drivers. Libraries are very useful at the OS level, but since the provided functions depend on the different drivers and are not fixed, there will be a variety of interfaces and sets of different libraries.

 

 

System software:

In multimedia, device access has become part of the operating system instead of individual device libraries. Access to multimedia and support for continuous-media processing implemented in the OS can be seen in the experimental system NTSC (Nemo trusted supervisor call), developed at the University of Cambridge, which runs in supervisor mode. Three domains run in user mode: system processes, device driver processes and application processes.

 

The NTSC implements those functions required by the three user-mode domains. The system processes implement the majority of the services provided by the OS. The device driver processes are similar to system processes but are distinguished by the fact that they are attached to device interrupt stubs which execute in supervisor mode, whereas application processes contain user programs. The processes interact with each other through inter-process communication (IPC), using the low-level system abstractions of events and shared memory.

 

Toolkits:

A simpler approach, from the user's point of view, to a programming environment for controlling audio and video data processing is to use toolkits. Toolkits are ready-made software used to:

 

Abstract from the actual physical layer.

Allow a uniform interface for communication with all the different devices for continuous media.

Introduce the client-server paradigm, i.e. the communication can be hidden from the application.

 

Toolkits should present the interfaces available at the system software level, i.e. it should be possible to embed them into a programming language or an object-oriented environment. A small sketch of such a uniform device interface follows below.
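
A toolkit-style uniform device interface might be sketched as below; all names are illustrative, and the printed "device server" messages stand in for the client-server communication that a real toolkit would hide.

```python
# A minimal sketch of a toolkit-style uniform device interface that hides
# the physical layer and the client-server communication; names are
# illustrative, no real toolkit API is implied.
from abc import ABC, abstractmethod

class MediaDevice(ABC):
    """Uniform interface for all continuous-media devices."""
    @abstractmethod
    def open(self): ...
    @abstractmethod
    def start(self): ...
    @abstractmethod
    def stop(self): ...

class CameraDevice(MediaDevice):
    def open(self):  print("camera: negotiate with device server")  # hidden RPC
    def start(self): print("camera: streaming frames")
    def stop(self):  print("camera: stopped")

class MicrophoneDevice(MediaDevice):
    def open(self):  print("mic: negotiate with device server")
    def start(self): print("mic: streaming samples")
    def stop(self):  print("mic: stopped")

# The application sees only the uniform interface, never the device details.
for dev in (CameraDevice(), MicrophoneDevice()):
    dev.open(); dev.start(); dev.stop()
```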

 

Multithreading:

Multithreading allows an application to divide its tasks into individual threads. The operating system can then divide processing time among these threads, which brings multithreading many benefits even on a single-processor machine; multithreaded applications are essential for taking full advantage of multiprocessor computer systems.

 

Multithreaded programs with parallel executing threads can take full advantage of any number of processors within the system. With multiprocessing power, a multithreaded application can run multiple threads simultaneously on different processors, finishing more tasks in less time. A minimal threading sketch follows below.
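
A minimal sketch of dividing work among threads with Python's standard threading module; the work items are purely illustrative.

```python
# A minimal sketch: split a task into threads and collect the results.
import threading

def worker(name, results, lock):
    value = sum(range(100_000))       # some independent piece of work
    with lock:                        # protect the shared results list
        results.append((name, value))

results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(f"t{i}", results, lock))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                          # wait until every thread has finished
print(results)
```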

 

Advantages of multithreading:

                                              Windows NT, Windows 95 and, above all, Linux can treat a multiprocessor machine as such. A multithreaded operating system provides numerous benefits, especially in PC-based data communication and acquisition. Multithreading improves on previous alternatives when, for example:

→ some tasks take much longer to execute than others;

→ some tasks need a more deterministic outcome than others;

→ some user interface activity is to run concurrently with the hardware communication.

User level and Kernel level:

In the case of user-level threads, the threads are implemented in user-level libraries rather than via system calls, and thread-switching operations do not involve the kernel.

 

In the case of kernel-level threads, by contrast, the kernel performs the thread management, and the code of the thread is not supposed to contain any thread-management code.

 

Advantages of user level threads:

Fast switching among threads: since switching between user-level threads can be done independently of the OS, it is done quickly.

Thread scheduling can be application-specific.

 

Disadvantage of user level threads:

When a user-level thread executes a system call, not only is that thread blocked, but all of the threads within the process are blocked. This is because the OS is unaware of the presence of the threads and only knows about the existence of the process that constitutes them.

A multithreaded application using user-level threads cannot take advantage of multiprocessing, since the kernel assigns one process to only one processor at a time. Again, this is because the OS is unaware of the presence of the threads and schedules processes, not threads.

 

Advantages of kernel-level threads:

The operating system is aware of the presence of threads in the processes; therefore, even if one thread of a process gets blocked, the OS chooses the next one to run, either from the same process or from a different one.

The OS can also schedule multiple threads from the same process on multiple processors.

 

Disadvantages of kernel-level threads:

Switching between threads is time-consuming because the kernel must do the switching through an interrupt.

 

 

 

Inter-process communication (IPC):

                                                            IPC is a facility provided by an OS through which co-operating processes can communicate with each other. This facility allows processes to co-operate and to synchronize their actions. IPC is provided by a message system; an OS can use both messages and shared buffers to establish communication and synchronization among the processes. There are two types of IPC (a mailbox sketch follows the list below):

direct communication (processes address their messages to each other directly)

mailboxes (indirect communication)
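
Indirect communication through a mailbox can be sketched with Python's multiprocessing.Queue serving as the shared mailbox between two processes.

```python
# A minimal sketch of indirect IPC: a multiprocessing.Queue acts as the
# shared mailbox between a producer process and a consumer process.
from multiprocessing import Process, Queue

def producer(mailbox):
    mailbox.put("frame 1")            # deposit a message in the mailbox
    mailbox.put(None)                 # sentinel: no more messages

def consumer(mailbox):
    while True:
        msg = mailbox.get()           # blocks until a message arrives
        if msg is None:
            break
        print("received:", msg)

if __name__ == "__main__":
    mailbox = Queue()
    p = Process(target=producer, args=(mailbox,))
    c = Process(target=consumer, args=(mailbox,))
    p.start(); c.start()
    p.join(); c.join()
```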

 

 

Multimedia applications:

The availability of multimedia hardware and software components has driven the enhancement of existing applications towards being more user-friendly (known as re-engineering). It has also inspired the continuous development of new multimedia applications. Applications are the driving force for the whole domain of multimedia computing and communication, because they are the only reason anybody would invest in the area.

 

 

Program and structure:

Several programs for the development of multimedia applications have been established during the last few years, some of them well known. High Performance Computing and Communications (HPCC) has as a major component the "Information Infrastructure Technology and Applications" (IITA) program, which supports advanced applications such as telemedicine, remote education and training, tele-operation and information access. Some programs from Europe are ESPRIT (European Strategic Programme for Research in Information Technology), RACE (Research in Advanced Communications in Europe), ACTS, DELTA, etc.

 

RACE covers applications for tele-interaction, tele-shopping, tele-working, etc.

 

Structure:

There are many views on how multimedia applications should be classified. A market-oriented, pragmatic view may divide current multimedia applications into kiosk applications, education applications and applications in the area of co-operative work. Another, communication-oriented, view divides multimedia applications into interactive and distribution-oriented applications.

 

A third possibility is a view derived from the hypertext/hypermedia area.

 

Media preparation:

Media preparation is performed by multimedia input/output hardware and its supporting software. Hardware and software are the basic components for introducing media into the digital world of a computer; appropriate hardware is the primary requirement for working with multimedia applications. The software creates the environment for working actively with the multimedia applications and allows the computer user to work interactively with the multimedia hardware. Some specific multimedia devices should be made available for media preparation, based on requirements such as:

Audio support hardware.

Video support hardware

Scanner devices

Recognition devices

Motion based devices

Tracking devices

 

 

Media composition:

Media composition involves editing single media, i.e. changing their objects, such as characters, audio sentences and video frames, and their attributes, such as the font of a character, the recording speed of an audio sentence or the color of an image.

For all the above work in media composition, several special software packages are available, some of which are:

Text and drawing editors, such as CorelDRAW, Photoshop, MacDraw, etc.

Graphics editors, such as xfig, a drawing application with graphical and structured object-level editing capability.

Image editors, such as PhotoStyler and Photoshop.

Animation editors, such as Macromedia Flash.

Sound editors, such as MIDI editors, the Music Kit and synthesizers.

Video editors, such as D/Vision from TouchVision Systems, QuickTime, etc.

 

 

Media integration:

Media integration specifies relationships between various media elements in order to represent and manipulate a multimedia object. Integration is very much technology-dependent, i.e. platform-specific and format-specific, although there are attempts to provide tools which will integrate media on any platform and in any format. An example of media integration in a multimedia application can be found in an authoring tool: consider an application which coordinates a multimedia presentation. Such applications need to provide dynamic behavior and support several user actions to integrate media into the required multimedia presentation. An authoring system is a set of software tools for creating multimedia applications, embedded in an authoring environment. This kind of application can be either written in a programming language or implemented using an authoring system. Similarly, there are other editors, such as multimedia editors for manipulating multimedia documents, hypermedia/hypertext editors, etc.

 

 

Media communications:

Media communication denotes applications which exchange different media over a network through tele-services, e.g. video conferencing, co-operative work and mailing, to the end users of multimedia applications.

      

The advantage of tele-services in multimedia applications is that the end users can be located in different places and:

still interact closely in a quite natural way;

operate on remote data and resources in the same way as on local data and resources.

 

The main disadvantage of tele-services is that communication takes longer than local processing.

Media consumption:

                                 Media consumption is the act of taking in media; viewing and listening are the most common ways in which users consume media. Feeling multimedia information can be experienced in motion-based entertainment parks, e.g. through virtual reality.

 

The major emphasis of media consumption is on viewing multimedia information (presentation). The presentation of multimedia information is often done through authoring tools as well as other tools. It is important to make consuming the media convenient for users, because people like to do things the same way as, or in ways similar to, how they did them in the past; so:

A familiar human user interface must be created.

The users need to be educated.

The users need to be carefully navigated through new applications.

 

 

Media entertainment:

There are several applications that use multimedia for entertainment, to bring a different and more involving entertainment experience than what is available from a standard TV or in movie theaters, e.g.:

virtual reality entertainment (VRE)

location based entertainment (LBE)

large screen film

Motion based simulator etc.

 

Computer-based VRE systems are 3D and interactive, as opposed to passive, and use one or more devices in an attempt to provide the user with a sense of presence, either visual or auditory.

 

 

Trends in multimedia application:

There are several possible trends in multimedia applications, some of which are:

Applications are going from re-engineering of existing applications towards establishing new application domains. The new applications may require re-engineering of the user interface and new integration techniques.

Multimedia applications are designed more and more for distributed environments rather than for local environments.

The trend is going toward open solutions, so that applications can be portable across various platforms.

Media consumption is going from a passive mode of user-computer interaction to an active mode of interaction.

Media communication services are going from unidirectional to bidirectional information flow, for example interactive TV.

 

 


                                                      THE END
