Why?

Blogging Reading Chatting Meeting. The other aspect of life.

Thursday, June 26, 2008

Integrated Drive Electronics (IDE)

SCSI connections may connect both internal devices, which are inside the chassis of your computer and use the computer's power supply, and external devices. Integrated Drive Electronics (IDE) connections, on the other hand, are typically only internal and connect hard disks, CD-ROM drives, and other peripherals inside a PC. A PC motherboard can support two IDE controllers, and each controller in turn can support two devices (a master and a slave).

Small Computer System Interface (SCSI)

The Small Computer System Interface (SCSI) is built into all current models. You can connect as many as eight devices (ID numbers 0 to 7) to the SCSI port, but one of them must be the computer itself, which takes ID 7, and one is usually your internal hard disk, with ID 0.

Framework for Multimedia Systems

The framework presented here provides an overall picture of the development of distributed multimedia systems from which a system architecture can be developed. The framework highlights the dominant feature of multimedia systems: the integration of multimedia computing and communications, including traditional telecommunications and telephony functions.
Low-cost multimedia technology is evolving to provide richer information processing and communications systems. These systems, though tightly interrelated, have distinct physical facilities, logical models, and functionality. Multimedia information systems extend the processing, storage, and retrieval capabilities of existing information systems by introducing new media data types, including image, audio, and video. These new data types offer perceptually richer and more accessible representations for many kinds of information. Multimedia communication systems extend existing point-to-point connectivity by permitting synchronised multipoint group communications.

Introduction to Multimedia Technology

A complex weave of communications, electronics, and computer technologies is emerging to create a new multimedia fabric for the next decade. The nature of this cloth is still evolving as an assortment of industries (telecommunications, consumer electronics, computers, cable and broadcast television, and information providers) competes for the emerging market. The biggest near-term market will potentially be the home and consumer, as recent developments such as consumer-oriented computer products, interactive television, and video-on-demand suggest. However, in time many expect the new technologies to be as pervasive as today's television and telephone, with an impact reaching across computers, telecommunications, and electronics and touching all parts of society, from industry to government, education to recreation.
The concepts behind what is emerging today date back over four decades to a series of visionary thinkers who foresaw the evolution of computers towards richer, personalised devices that would become an extension of the individual. In 1945 Vannevar Bush, then the Director of the Office of Scientific Research and Development in the U.S. government, suggested that one of the future devices available to individuals would be a memex, "a device in which one stores all his books, records, and communications, and which is mechanised so that it can be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory." The memex would additionally be an associative device, so that related items could be easily located.
Multimedia Systems
Multimedia technology is breaking down the traditional boundaries between devices for computing, personal communications, and consumer entertainment. Major industries are rethinking their market strategy and forming a web of new alliances connecting entertainment, telecommunications, computing, publishing, and other global enterprises. Multimedia devices are expected to replace ubiquitous appliances such as the telephone and television and change many of the activities associated with them. The large scale of these trends and the many participants in these developments have added to the complexity of the possibilities. This chapter provides a contemporary account of the major technology forces affecting this convergence and identifies the significant near-term issues that lie ahead.

Entertainment

When the average person talks about consumer electronics, TV is what first comes to mind. Television is a very young industry. The first black-and-white broadcasts started in 1941. Color TV sets were not available at consumer prices until 1964. Cable TV was not widely available in most areas of the United States until the 1980s. Today, the average cable system has more than forty channels to choose from. CNN, MTV, HBO, and the like were not available in the 1960s and only just started in the 1970s. MTS stereo television broadcasts and stereo TV sets have only been available in the 1990s. In Europe and Japan, commercial and cable TV has not flourished the way it has in the United States, due to regulatory constraints and language barriers. Most countries have fewer than five TV channels.
Consumer electronics in the early 1980s introduced inexpensive home VCRs. The war between Beta and VHS was hot and heavy. Eventually VHS won due to JVC's aggressive licensing and VHS's eight-hour capability; the picture quality was poor and the technology was inferior, but strong marketing won out. Camcorders, another recent innovation, have replaced 8 mm and Super 8 movie cameras in a very few years. The compact disc (CD) was introduced by Philips and Sony in 1981. Today, it's almost impossible to get a real record (as in LP). Similar changes have occurred in tape recording technology and electronic musical synthesisers.

Communications

In 1980, the Bell System in the United States was still a monopoly. On January 1, 1984, Judge Harold Greene's divestiture order broke the Bell System into Regional Bell Operating Companies (RBOCs), commonly referred to as "Baby Bells". Bell Labs and Western Electric were also restructured (as American Bell, later renamed AT&T Consumer Products) so as not to have an unfair advantage with the Bell name. With deregulation in place, any number of companies could sell a telephone or provide long-distance service. Years later, most of the cheap $5 phones are a thing of the past. After the FCC opened up the 46 MHz band, use of cordless phones grew significantly. The FCC has now also opened up a new 900 MHz band for cordless phones, which has extended range and less interference with radio-controlled toys than the 49 MHz band.
In the rest of the world, most countries still have a government monopoly on the phone system, though this is changing because of influence from the U.S. market. The United Kingdom is probably the most progressive non-U.S. market. Cellular phones have been tariffed so inexpensively in the United Kingdom that most people own cellular phones. Many U.S. companies, such as US West, have set up U.K. subsidiaries to sell cable TV and phone service at 15 percent lower rates than the local monopoly. The monopoly position does have its advantages, however. For new technology to take hold in the United States, each of the RBOCs must endorse the new technology and the market must show a need. In other countries, the PTT may proclaim that a certain technology will be adopted (Integrated Services Digital Network (ISDN), for instance), and there is a government mandate to make it happen. For this reason, Europe is much farther ahead in deploying ISDN than the United States.
In the United States, the Federal Communications Commission (FCC) has a reasonable scope of influence to open doors for multimedia communications. In 1992, the FCC allocated a new frequency band at 218-219 MHz for interactive video applications. This new band will be used for a nationwide interactive TV service called TV-Answer, which is based on a radio frequency (RF) cellular-like modem. Since most cable networks today are transmit-only, this new RF channel can provide the return communications path for these new services. Home shopping is one obvious use of this technology.

Digital signature

A digital signature is a sequence of bits that is calculated mathematically while signing a given document. Since it is easy to copy, alter, or move a file on a computer without leaving any trail, one needs to be very careful in designing a signature scheme. In keeping with the properties of a handwritten signature, a digital signature should depend on some secret known only to the signer and on the content of the message, and a verifier should be able to distinguish between a forgery and a valid signature without requiring the signer to reveal any secret information. A general digital signature scheme consists of three algorithms:
1. A key generation algorithm
2. A signing algorithm
3. A verification algorithm
Cryptography: The process or skill of communicating in or deciphering secret writings or ciphers.
Working of digital signature:
Public key cryptography gives a reliable method for digital signing and signature verification based on public/private key pairs. A person can sign a given digital message with his or her private key. The two steps involved are:
1. Calculate the hash value
In the first step of the process, a hash value of the message is calculated by applying some cryptographic hashing algorithm. The calculated hash value is a sequence of bits, usually with a fixed length, extracted in some manner from the message. All reliable algorithms for message digest calculation apply such mathematical transformations that when just a single bit of the input message is changed, a completely different digest is obtained.
2. Calculate the digital signature
In the second step of digitally signing a message, the information obtained in the first step (the hash value of the message) is encrypted with the private key of the person who signs the message, and thus an encrypted hash value, also called a digital signature, is obtained.
Verifying signed data
A digital signature is associated with an X.509 certificate which contains the sender's public key. This key is used to decrypt the digital signature into the original hash value on the recipient's computer. To verify the digital signature, the same hashing algorithm is used to generate a hash value based on the original data. The decrypted hash value is compared to the generated hash value. If the values match, the digital signature is valid.
1. Calculate the current hash value
2. Calculate the original hash value
3. Compare the current and the original hash-value
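The hash-then-sign and verify flow described above can be sketched in a few lines of C++. This is a toy illustration only, not the scheme used by any real product: std::hash stands in for a cryptographic hash, and a tiny textbook RSA key pair (n = 3233, e = 17, d = 2753) stands in for a real key, so it must never be used for actual security.

#include <iostream>
#include <string>
#include <functional>
#include <cstdint>

// Modular exponentiation: (base^exp) mod m, enough for the toy key below.
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main()
{
    const uint64_t n = 3233, e = 17, d = 2753;   // toy public/private key pair

    std::string message = "pay 100 dollars to Alice";

    // Step 1: calculate the hash value (toy hash, reduced mod n).
    uint64_t hash = std::hash<std::string>{}(message) % n;

    // Step 2: encrypt the hash with the private key to obtain the signature.
    uint64_t signature = powmod(hash, d, n);

    // Verification: recover the original hash with the public key and
    // compare it with a freshly computed hash of the received message.
    uint64_t original = powmod(signature, e, n);
    uint64_t current  = std::hash<std::string>{}(message) % n;

    std::cout << (original == current ? "signature valid" : "signature invalid") << "\n";
    return 0;
}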
Benefits of digital signature:
Authenticity
Although digital signatures alone cannot prevent the content from being manipulated during delivery, using digital signatures provides a mechanism to detect tampering if it occurs. If the data is altered in any way after being digitally signed, the recipient can tell from properties of the signature that the data sent does not match the data received.

Acknowledgement
Data can be signed by the recipient as well as the sender. When a recipient signs the data and returns it to the sender, this signature is an acknowledgement. Digital signatures used in this way also provide non-repudiation: the ability to prove that the data was sent by the signer.

Computers

The past ten years have brought much change to the world of computers, communications, and consumer products. Let's review how far we've come. In 1980, personal computers were Apple IIs, Radio Shack TRS-80s, Commodore PETs, and perhaps the lesser known SOL-20s and Exidy Sorcerers. Minicomputers were the PDP-11/70 and the VAX-11/780. Remarkably, punch cards were still around for those "big jobs." The Commodore 64 appeared in 1982 and eventually won the home computer war against the likes of game machines such as the Atari 2600 VCS and Atari 5200, Mattel Intellivision, and others because it was a "real" computer instead of just a game machine. The C64 and its successor, the C128, also won over other similar home computers such as the Atari 400/800 and the various flavors of Radio Shack TRS-80 models. This was a crazy time for the birth of the home computer. Many companies came and went in an attempt to provide the next hot consumer product.
In 1988, Sharp introduced an electronic organiser called the Wizard. With application programs on solid-state "credit cards", the Wizard was basically an electronic address book and appointment scheduler. Palmtop computers were introduced in 1990 by Atari with the Portfolio and in 1991 by Hewlett-Packard with the 95LX. They were intended to take the Wizard concept one step further by adding a bigger display, a mini-QWERTY keyboard, and MS-DOS compatibility. In 1993, Hewlett-Packard upped the ante with a newer palmtop called the 100LX and a subnotebook called the OmniBook 300.
The Newton and CD-I concepts are the first generation of the new convergence of computers, communications, and consumer products. They are not traditional televisions or phones or computers, but a hybrid of each. As new technologies become available, the fine line between these products gets even fuzzier.

WEB PAGE SET UP

Creation of a personal web page
A personal homepage is your means of putting your own information on the World Wide Web (WWW). A web page is nothing more than a text file written with special tags that format the contents, point to other pages, and insert images and sounds. The tags are called Hypertext Markup Language, or HTML.
There are a variety of ways to generate a web page:
• We may create our text file in many different ways using a text editor such as Notepad (Windows), SimpleText (Mac), or pico (Unix).
• There are many software packages, such as Dreamweaver, that you can use to generate the HTML code for you. Additionally, most word processing packages, such as MS Word, can save to the HTML format as well.
• We may choose to use a template (an existing frame of a document) and just fill in the details.
We will now discuss some of the basic HTML tags used:
HEAD TAG
The <HEAD> tag has no attributes. Several other tags are included inside it.
1. BASEFONT Tag
It defines the base font size to be used in the HTML document, for example:

<BASEFONT SIZE=4>

2. BASE Tag
It is useful for setting some global parameters of an HTML document, such as the base URL against which relative links are resolved, for example:

<HEAD>
<TITLE>WATER SPORTS TO DIE FOR</TITLE>
<BASE HREF="...">
</HEAD>

3. META Tag
This tag is used to include additional information about the document and can be used to pass additional information to a browser. There is no ending tag for <META>, and a document can have multiple <META> tags. The attributes of the META tag are NAME, CONTENT, and HTTP-EQUIV. For example:

<META NAME="keywords"
CONTENT="woodworking, cabinetmaking, handmade furniture">

HTML and colors
There are two ways of defining colors in HTML documents. One involves simple color names, such as blue, cranberry, green, orange, etc. However, since different browsers have different lists, and since the definitions of individual colors may vary from browser to browser, we recommend using the color numbering scheme. For example:
#000000 means 00 (no) red, 00 green, and 00 blue, so #000000 represents black. The # sign denotes that the six hexadecimal digits that follow represent a color.

BODY TAG
The text and HTML code that goes between the body beginning and ending tags is rendered and displayed in the document area of the browser’s window.

1. TEXT color
The TEXT attribute is used to change the default text color for an entire document, for example:

<BODY TEXT="#FF0000">

2. BACKGROUND color
The BGCOLOR attribute is used to set the background of an HTML document to a single color, for example:

<BODY BGCOLOR="#FFFFFF">

3. HYPERLINK colors
Three attributes are used for changing the color of a hyperlink, where the color depends on the current state of the hyperlink. The three possible states are: unvisited, visited, and active (currently being selected).
LINK: unvisited hyperlink
VLINK: visited hyperlink
ALINK: a hyperlink the user is currently selecting (the A stands for active hyperlink)
<BODY LINK="#FF0000"
VLINK="#808080"
ALINK="#FFFF00">

HTML FONT COLORS
The COLOR attribute of the FONT tag allows us to change the color of any portion of text, for example:

<FONT COLOR="#0000FF">I am going swimming today</FONT>

WAP TO SHOW THE WORKING OF STATIC MEMBERS

#include<iostream.h>

#include<conio.h>

class test

{private:

int code;

static int count;

public:

void setcode()

{code=++count;}

void showcode()

{cout<<"code--"<<code<<"\n";}

static void showcount()

{cout<<"count--"<<count<<"\n";}

};

int test::count;

void main()

{clrscr();

test t1,t2;

test::showcount();

t1.setcode();

t1.showcode();

test::showcount();

t2.setcode();

t2.showcode();

test::showcount();

getch();

}
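For reference, with count defined as a static member it starts at 0, so on a compiler that accepts the old-style iostream.h and conio.h headers (such as Turbo C++) the program should print count--0, then code--1, count--1, code--2, and count--2, showing that the single count variable is shared by both objects while each object keeps its own code.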

UNFORMATTED INPUT OUTPUT

Overloaded operators >> and <<

The operators '>>' and '<<' are overloaded in the istream and ostream classes respectively. The following is the general format of reading data from the keyboard:

cin >> variable1 >> variable2 >> ... >> variableN;

The input data are separated by white space and should match the type of the variables in the cin list. Spaces and newlines will be skipped. The operator reads the data character by character and assigns it to the indicated location. Reading of a variable is terminated on encountering white space or a character that does not match the destination type. For example:

int code;

Suppose the following data is given as input: 4258D

The operator will read the characters up to 8, and the value 4258 is assigned to code. The character D remains in the input stream and will be read by the next input operation.

Put() and get() functions

The classes istream and ostream define the member functions get() and put() respectively to handle single-character input/output operations. We can use both prototypes of get(), get(char*) and get(void), to fetch a character, including blank spaces, tabs, and the newline character. The get(char*) version assigns the input character to its argument, and the get(void) version returns the input character.

Example:

char c;

cin.get(c);

while(c != '\n')

{

cout<<c;

cin.get(c);

}

This code reads and displays a line of text (terminated by a newline character). Remember, the operator >> can also be used to read a character, but the above while loop will not work properly if the statement cin>>c; is used in place of cin.get(c);, because >> skips white space, so the newline character would never be assigned to c and the loop would not terminate correctly.

The get(void) version is used as follows:

.....

char c;

c=cin.get();      // cin.get(c) replaced

.......

The value returned by the function get() is assigned to the variable c.

The function put(), a member of the ostream class, can be used to output a line of text, character by character. For example,

cout.put('x');

Displays the character x and

cout.put(ch);

displays the value of variable ch.

Getline() and write() functions

We can read and display a line of text more efficiently using the line-oriented input/output functions getline() and write(). The getline() function reads a whole line of text that ends with a newline character.

Syntax: cin.getline(line,size);

For example: char name[20];

cin.getline(name,20);

Assume that the input was

Bjarne Stroustrup

This input will be read correctly and assigned to the character array name. Let us suppose the input was: object oriented programming

In this case, the input will be terminated after reading the first 19 characters.

The write() function displays an entire line and has the following form:

cout.write(line,size);

The first argument, line, represents the name of the string to be displayed, and the second argument, size, indicates the number of characters to display. Note that it does not stop displaying the characters automatically when the null character is encountered. If size is greater than the length of line, then it displays beyond the bounds of line.
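To show the two functions working together, here is a small standalone sketch. It assumes a standard-conforming compiler with the modern <iostream> and <cstring> headers (rather than the older iostream.h used elsewhere in this post), and the buffer size is chosen arbitrarily for the example.

#include <iostream>
#include <cstring>
using namespace std;

int main()
{
    char line[40];

    cout << "Enter a line of text: ";
    cin.getline(line, 40);             // reads at most 39 characters, stops at '\n'

    cout.write(line, strlen(line));    // write() needs an explicit character count
    cout.put('\n');                    // put() outputs a single character
    return 0;
}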

Monday, May 26, 2008

Time and Multimedia Requirements

The problem of synchronising data presentation, user interaction, and physical devices reduces to satisfying temporal precedence relationships under real timing constraints. In this section, we introduce conceptual models that describe temporal information necessary to represent multimedia synchronisation. We also describe language- and graph-based approaches to specification and survey existing methodologies applying these approaches.
The goal of temporal specification is to provide a means of expressing temporal relationships among data objects requiring synchronisation at the time of their creation, in the process of orchestration. This temporal specification ultimately can be used to facilitate database storage and playback of the orchestrated multimedia objects from storage.
To describe temporal synchronisation, an abstract model is necessary for characterising the processes and events associated with presentation of elements with varying display requirements. The presentation problem requires simultaneous, sequential, and independent display of heterogeneous data. This problem closely resembles that of the execution of sequential and parallel threads in a concurrent computational system, for which numerous approaches exist. Many concurrent languages support this concept, for example, CSP and Ada; however, the problem differs for multimedia data presentation. Computational systems are generally concerned with problems that demand high throughput, such as the parallel solution to matrix inversion. On the other hand, multimedia presentation is concerned with the coherent presentation of heterogeneous media to a user. Therefore, there exists a bound on the speed of delivery beyond which a user cannot assimilate the information content of the presentation. For computational systems it is always desired to produce a solution in minimum time. An abstract multimedia timing specification concerns presentation rather than computation.

Time-Based Media Representation and Delivery

Temporal relationships are a characteristic feature of multimedia objects. Continuous media objects such as video and audio have strict synchronous timing requirements. Composite objects can have arbitrary timing relationships. These relationships might be specified to achieve some particular visual effect or sequence. Several different schemes have been proposed for modeling time-based media. These are reviewed and evaluated in terms of representational completeness and delivery techniques.
Introduction
Multimedia refers to the integration of text, images, audio, and video in a variety of application environments. These data can be heavily time-dependent, such as audio and video in a motion picture, and can require time-ordered presentation during use. The task of coordinating such sequences is called multimedia synchronisation or orchestration. Synchronisation can be applied to the playout of concurrent or sequential streams of data and also to the external events generated by a human user. Consider the following illustration of a newscast shown in Figure 1.14. In this illustration multiple media are shown versus a time axis in a timeline representation of the time-ordering of the application.
Temporal relationships between the media may be implied, as in the simultaneous acquisition of voice and video, or may be explicitly formulated, as in the case of a multimedia document which possesses voice-annotated text. In either situation, the characteristics of each medium and the relationships among them must be established in order to provide coordination in the presence of vastly different presentation requirements. In addition to simple linear playout of time-dependent data sequences, other modes of data presentation are also viable and should be supported by a multimedia information system (MMIS). These include reverse, fast-forward, fast-backward, and random access. Although these operations are quite ordinary in existing technologies (e.g., VCRs), when nonsequential storage, data compression, data distribution, and random communication delays are introduced, the provision of these capabilities can be very difficult.

DVI Technology

DVI technology is different from the other standards discussed here because it is specifically based on the use of special hardware. Intel Corporation and IBM Corporation have developed a programmable chipset, which implements the technology in a co-processor environment on any type of computer platform. These chips support a wide range of multimedia functions in software, including JPEG compression and several DVI-unique compression algorithms for stills or motion video. Because of the programmability of the chipset, it can respond to new algorithm developments; for example, programming the chips to do MPEG processing is being explored. Intel is committed to producing higher-speed DVI processors in the future and has even discussed the possible integration of DVI functionality with a future generation of the x86 CPU family. Thus, the DVI hardware is an important engine for present and future compression developments.
DVI Technology Motion Video Compression
DVI Technology can do both symmetric and asymmetric motion video compression/decompression. The asymmetric approach is called Production-Level Video (PLV). Video for PLV must be sent to a central compression facility, which uses large computers and special interface equipment, but any DVI system is capable of playing back the resulting compressed video. The picture quality of PLV is the highest that can currently be achieved. The other DVI compression approach is Real Time Video (RTV). It is done on any DVI system that has the Action Media Capture Board installed. Playback of RTV is on the same system or any other DVI system. Because RTV is a symmetric approach, which requires that compression be done with only the computing power available in a DVI system, RTV picture quality is not as good as PLV picture quality.
DVI Production-Level Compression
PLV is an interframe compression technique; the algorithm details are proprietary, and we can only say that it is block-oriented and that it involves multiple compression techniques. Since it was designed specifically for the DVI chipset, it is optimised for that environment, and it probably would not make much sense to run it on different hardware.
PLV compression is an asymmetric approach where a large computer does the compression and the DVI hardware in the PC does the decompression. It takes a facility costing several hundred thousand dollars to perform PLV compression at reasonable speeds. Since this cost is too much for a single application developer to bear, centralised facilities are provided where developers can send their video to be compressed for a fee. We will discuss what such facilities involve and how they are made available to developers.

MPEG

The Moving Picture Experts Group (http://drogo.cselt.stet.it/mpeg/), a working group convened by the International Standards Organization (ISO) and the International Electrotechnical Commission (IEC) to create standards for digital representation of moving pictures and associated audio and other data, has developed the MPEG standard. MPEG-1 and MPEG-2 are the current standards. Using MPEG-1, you can deliver 1.2 Mbps (megabits per second) of video and 250 Kbps (kilobits per second) of two-channel stereo audio using CD-ROM technology. MPEG-2, a completely different system from MPEG-1, requires higher data rates (3 to 15 Mbps) but delivers higher image resolution, picture quality, interlaced video formats, multiresolution scalability, and multichannel audio features.
MPEG may become the method of choice for encoding motion images because it has become widely accepted for both Internet and DVD-Video. For MPEG-1, hardware playback will become common on motherboards and video cards. Software decompression algorithms for MPEG-1, while slower than hardware solutions, are incorporated in QuickTime, and Microsoft has licensed a driver from MediaMatics for use with Windows 95.


MPEG-1
It can deliver 1.2 Mbps of video and 250 Kbps of two-channel stereo audio using CD-ROM technology. MPEG-1 playback hardware will be common on motherboards and video cards, and software decoders are available from QuickTime, Microsoft, and MediaMatics.
MPEG-2
It requires higher data rates (3 to 15 Mbps) but delivers higher image resolution, picture quality, interlaced video formats, and multichannel stereo audio features. MPEG-2 requires dedicated hardware and software.
MPEG-4
It is also available and provides a new content-based method for handling multimedia elements. It offers not only indexing, hyperlinking, querying, browsing, uploading, downloading, and deleting functions but also "hybrid natural and synthetic data coding". With MPEG-4, multiple views and multiple soundtracks of a scene, as well as 3D views, are also available.
MPEG-7
Still on the standards horizon, it goes a step further than MPEG-4 by integrating the image, sound, or motion video elements being used in a composition.
For example, MPEG-7 may include classes for facial expressions, personality characteristics, etc.
The MPEG Motion Video Compression Standard
Digital motion video can be accomplished with the JPEG still-image standard if you have fast enough hardware to process 30 images per second. However, the maximum compression potential cannot be achieved because the redundancy between frames is not being exploited. Furthermore, there are many other things to be considered in compressing and decompressing motion video, as indicated in the objectives below.

Video Compression

To digitize and store a 10-second clip of full-motion video in your computer requires transfer of an enormous amount of data in a very short amount of time. Reproducing just one frame of digital component video at 24 bits requires almost 1 MB of computer data; 30 seconds of video will fill a gigabyte hard disk. Full-size, full-motion video requires that the computer deliver data at about 30 MB per second; this is simply more than Macintoshes and PCs can handle. Typical hard disk drives transfer data at only about 1 MB per second, and quad-speed CD-ROM players at a paltry 600K per second. This overwhelming technological bottleneck is currently being overcome by digital video compression schemes or codecs (coders/decoders). A codec is the algorithm used to compress a video for delivery and then decode it in real time for fast playback. Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak, ClearVideo, RealVideo, and VDOwave are now available to compress digital video information at rates that range from 50:1 to 200:1. JPEG, MPEG, and P*64 compression schemes use the Discrete Cosine Transform (DCT), an encoding algorithm that quantifies the human eye's ability to detect color and image distortion. All of these codecs employ lossy compression algorithms.
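Since DCT is the mathematical workhorse behind JPEG, MPEG, and P*64, a minimal sketch of the one-dimensional DCT-II on an 8-sample block may make the idea concrete. This is an illustration only: the sample values are invented, and real codecs apply an 8x8 two-dimensional transform followed by quantisation and entropy coding.

#include <iostream>
#include <cmath>

const int N = 8;                           // block size used by JPEG/MPEG-style codecs
const double PI = 3.14159265358979323846;

// One-dimensional DCT-II: X[k] = sum over n of x[n] * cos(PI/N * (n + 0.5) * k)
void dct(const double x[N], double X[N])
{
    for (int k = 0; k < N; ++k) {
        double sum = 0.0;
        for (int n = 0; n < N; ++n)
            sum += x[n] * std::cos(PI / N * (n + 0.5) * k);
        X[k] = sum;
    }
}

int main()
{
    double samples[N] = {52, 55, 61, 66, 70, 61, 64, 73};   // invented pixel row
    double coeffs[N];

    dct(samples, coeffs);

    // Most of the energy lands in the first few coefficients, which is why
    // coarsely quantising the rest loses little visible detail.
    for (int k = 0; k < N; ++k)
        std::cout << "C[" << k << "] = " << coeffs[k] << "\n";
    return 0;
}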
In addition to compressing video data, streaming technologies are being implemented to provide reasonable-quality low-bandwidth video on the Web. By starting playback of a video as soon as enough data have transferred to the user's computer to sustain this playback, users do not have to wait for an often very large file to download. Microsoft, RealNetworks, VXtreme, VDOnet, Xing, Precept, Cubic, Motorola, Vivo, Vosaic, and Oracle are actively pursuing the commercialization of streaming technology on the Web. RealNetworks (http://www.real.com) claims that by 1998 more than 28 million copies of its RealPlayer software had been downloaded electronically, more than 100,000 hours per week of live audio and video content were being broadcast over the Web using its RealAudio and RealVideo technology, and more than 150,000 Web pages were using RealNetworks' streaming software. Tools such as Terran Interactive's Media Cleaner Pro (http://www.terran-int.com) allow you to customize the movie and audio compression chores and optimize your media for either CD-ROM or Web delivery.
QuickTime, Apple's software-based architecture for seamlessly integrating sound, animation, text. and video (data that change over time), is often thought of as a compression standard, but it is really much more than that.

Digital Video

The next evolution toward fully integrating motion video with computers eliminates the analog television form of video from the multimedia delivery platform. If a video clip can first be converted from analog to digital, then stored as data on a hard disk, CD-ROM, or other mass-storage device, that clip can be played back on the computer's monitor without overlay boards, videodisc players, or second monitors. This playback is accomplished using software architectures such as QuickTime or Video for Windows (replaced by Microsoft's newer ActiveMovie for Windows).
As a multimedia producer or developer, you will need to convert your video source material from its common analog form (videotape) to a digital form manageable by the end user's computer system. So an understanding of analog video and some special hardware must remain in your multimedia toolbox.
Analog-to-digital conversion of video can often be accomplished using the video overlay hardware described above, but to repetitively digitize a full-screen color video image every 1/30 second and store it to disk or RAM severely taxes both Macintosh and PC processing capabilities; special hardware, compression software, and massive amounts of digital storage space are required. And repetitively reading from disk and displaying to the monitor the full-screen color images of motion video, at a rate of one frame every 1/30 second, taxes the computational and display power of both the Macintosh and the PC.
The final evolutionary step to fully digital video will not occur until the acquisition and recording of video becomes entirely a digital procedure and analog videotape is removed from the process.

Audio compression

Audio compression is a form of data compression designed to reduce the size of audio files; it should not be confused with dynamic range compression, which reduces the dynamic range of an audio signal without changing the amount of digital data. Audio compression algorithms are implemented in computer software as audio codecs. Generic data compression algorithms perform poorly with audio data, seldom reducing file sizes much below 87% of the original, and are not designed for use in real time. Consequently, specific audio "lossless" and "lossy" algorithms have been created. Lossy algorithms provide far greater compression ratios and are used in mainstream consumer audio devices. As with image compression, both lossy and lossless compression algorithms are used in audio compression, lossy being the most common for everyday use. In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, pattern recognition, and linear prediction to reduce the amount of information used to describe the data.

Musical Instrument Digital Interface

MIDI (Musical Instrument Digital Interface) is an industry-standard protocol that enables electronic musical instruments, computers, and other equipment to communicate, control, and synchronize with each other.

MIDI does not transmit an audio signal or media; it simply transmits digital "event messages" such as the pitch and intensity of musical notes to play, control signals for parameters such as volume, vibrato, and panning, and cues and clock signals to set the tempo. As an electronic protocol, it is notable for its success, both in its widespread adoption throughout the industry and in remaining essentially unchanged in the face of technological developments since its introduction in 1983.

Speech Synthesis

A major driving force in speech synthesis has come from text-to-speech (TTS). A TTS system assumes that the text already exists in machine-readable form, such as an ASCII file; the machine-readable form is possibly obtained from optical character recognition. TTS converts the text symbols to a parameter stream representing sounds. This includes expanding common abbreviations, such as "Pres.," and symbols like "&". Also, the system figures out how to handle numbers: 1492 can be read as a date, and even the dollar amount $1492.00 could be read starting with "fourteen hundred..." or "one thousand four hundred...." After creating a uniform symbol stream, the system creates an initial sound-parameter representation, often at the word level; some words may simply be looked up in a dictionary.
Other parts of the stream are broken down into morphemes, the basic syntactic units of the language. With luck, a group of text symbols corresponds to one morpheme. It is often the case that there is a regular mapping between the symbols of such a group and some sound, in which case the group can be turned into sound. As a last resort, the system converts individual text symbols to sound using rules. The system synthesises sounds from the parameter string based on an articulatory model or using sampled sounds, LPC, or formants. The synthesis system may store units at the level of phonemes, diphthongs, syllables, or words.

Encoding and Transmitting Speech

The simplest way to encode speech is to use PCM, discussed above. The 8-bit, 8-kHz standard for speech is of significantly lower quality than what is required for music. Still, at the nominal 64-kb/sec rate for speech, if one bit per sample can be saved, then the total saving is 8 kb/sec. Methods for lowering the bit rate thus remain an active area of research. The ADPCM method discussed above can easily save 2 to 4 bits per sample. PCM, ADPCM, and related methods attempt to describe the waveform itself. There are other methods, such as the Subband coding discussed above under MPEG. We now turn to another class of methods, called voice coders, or vocoders.
The human vocal tract can be simplified by assuming, for example, that the source of vibration for voiced sounds is not affected by the rest of the vocal tract. The series of filters that model the vocal tract can be modeled such that if one filter changes, there is no effect on the others. Under such conditions, we can calculate the voice model coefficients independently of the fundamental frequency or the voiced/unvoiced decision. We can also reasonably assume that formants change quite slowly compared to the rate of individual pulses from the vocal tract, and transmit the filter coefficients at a slower rate.

Speech Production

The organs involved in speech include the larynx, which encloses loose flaps of muscle called vocal cords. The puffs of air that are released create a waveform, which can be approximated by a series of rounded pulses. The waveform created by the vocal cords propagates through a series of irregularly shaped tubes, including the throat, the mouth, and the nasal passages. At the lips and other points in the tract, part of the waveform is transmitted further, and part is reflected. The flow can be significantly constricted or completely interrupted by the uvula, the teeth, and the lips.
A voiced sound occurs when the vocal cords produce a more or less regular waveform. The less periodic, unvoiced sounds involve turbulence in which some part of the vocal tract is tightened. Vowels are voiced sounds produced without any major obstruction in the vocal cavity. In speech, formants are created by the position of the tongue and jaw, for example. In separating vowels, the first three formants are the most significant. In the male, the fundamental frequency of voiced sounds is around 80-160 Hz, with three formants around 500, 1500, and 2500 Hz. The fundamental of the female is around 200 Hz and higher, with the formants perhaps 10 percent higher than those of the male. Consonants arise when the vocal tract is more or less obstructed. Sounds at the level of consonants and vowels are collectively known as phonemes, the most basic units of speech differentiation, analysis, and synthesis. The next level up from phonemes is the diphthong and the syllable, then the word.

Stereophonic sound

Stereophonic sound, commonly called stereo, is the reproduction of sound, using two or more independent audio channels, through a symmetrical configuration of loudspeakers, in such a way as to create a pleasant and natural impression of sound heard from various directions, as in natural hearing. It is often contrasted with monophonic (or "monaural", or just mono) sound, where audio is in the form of one channel, often centered in the sound field (analogous to a visual field).
The word "stereophonic" — derived from Greek stereos = "solid" and phōnē = "sound" — was coined by Western Electric, by analogy with the word "stereoscopic". In popular usage, stereo usually means 2-channel sound recording and sound reproduction using data for more than one speaker simultaneously.
In technical usage, stereo or stereophony means sound recording and sound reproduction that uses stereographic projection to encode the relative positions of objects and events recorded. A stereo system can include any number of channels, such as the surround sound 5.1- and 6.1-channel systems used on high-end film and television productions. However, in common use it refers to systems with only two channels. The electronic device for playing back stereo sound is often referred to as "a stereo".
During two-channel stereo recording, two microphones are placed in strategically chosen locations relative to the sound source, with both recording simultaneously. The two recorded channels will be similar, but each will have distinct time-of-arrival and sound-pressure-level information. During playback, the listener's brain uses those subtle differences in timing and sound level to triangulate the positions of the recorded objects.
Stereo recordings often cannot be played on monaural systems without a significant loss of fidelity. Since each microphone records each wavefront at a slightly different time, the wavefronts are out of phase; as a result, constructive and destructive interference can occur if both tracks are played back on the same speaker. This phenomenon is known as phase cancellation.

Quadraphonic sound

Quadraphonic sound uses four channels in which speakers are positioned at all four corners of the listening space, reproducing signals that are independent of each other. "Quad", in its original form, was a commercial failure, and the LP formats were plagued with technical problems that were solved too late to save the system from a consumer point of view. The format was more expensive and required extra speakers, which became a decorating problem. It also suffered from lack of a standard format for LP records.
As its name suggests, with discrete formats the four channels are passed through a four-channel transmission medium and presented to four speakers.
This format was less popular than others because of incompatibility, poor longevity, and strict setup requirements.
The quadraphonic music was encoded with sum and difference signals (encoded in the 18 to 30 kHz range) on the standard stereo grooves of vinyl, which also had the undesirable side effect of limiting the high-frequency response to 15 kHz at most. To play back the record, a special high-frequency cartridge and stylus were required, in addition to a CD-4 demodulator and the usual quadraphonic receiver or amplifier. This system produced additional wear and tear, so JVC introduced "super vinyl", a very durable type of record. The cartridge used had a Shibata-type stylus and an extended frequency response. Later, linear-contact styli were developed that improved the performance of CD-4 systems. However, this development came too late to save CD-4 from extinction. CD-4 records could be played as stereo records if care was taken to use a Shibata (or linear-contact) stylus to protect the subcarrier modulations.

Channel                  Left Front   Right Front   Left Back   Right Back
Normal Frequency Left         1             0             1           0
Normal Frequency Right        0             1             0           1
High-Frequency Left           1             0            -1           0
High-Frequency Right          0             1             0          -1

Quad-8 / Quadraphonic 8-Track
Quadraphonic 8-Track was a discrete 4-channel tape cartridge system introduced by RCA in September 1970 and called QUAD-8 (later shortened to just Q8). The format was almost identical in appearance to stereo 8-tracks except for a small notch in the upper left corner of the cartridge. This signaled a quadraphonic 8-track player to combine the odd tracks as audio channels for Program 1 and the even tracks as channels for Program 2. The format was not entirely compatible with stereo or mono players: although quadraphonic players would play stereo 8-tracks, playing quadraphonic tapes on stereo players results in hearing only one-half the channels at a time. Some stereo 8-track players touted simulated quadraphonic sound (through upmixing stereo 8-tracks) but were not quadraphonic 8-track players. The last release in the quadraphonic 8-track format was in 1978.

Transmission of Digital Sound

When one is sending signals from one piece of digital equipment to another, many problems can be encountered. There is always the fallback position of connecting the analog output of one digital machine to the analog input of another digital machine. A copy made in this manner will still sound very clean. But often in real-world work, several generations of copies must be made. If the signal is converted to the analog domain for every copy, unacceptable amounts of noise are gradually introduced.
One problem in connecting digital equipment is that there are several standard combinations of sample rate, number of channels, bit and byte order, and number of bits per channel used in digital audio recording. Some are intended for professional recording studios, others strictly for consumers, and some are called "semiprofessional." For example, the compact disc nominally stores stereo 16-bit signals at a 44.1-kHz sample rate; this consumer format is acceptable for semiprofessional use. Another format is the Digital Compact Cassette (DCC), which uses a different form of encoding, with sample rates of 32, 44.1, and 48 kHz.
Other kinds of problems occur in recording studios, for example, where equipment with noisy fans must be isolated from the recording studio itself. A digital signal can degrade when transmitted over some kinds of long cables.
To solve the problem of differing sample rates, one uses a sample-rate converter. This process can also introduce subtle amounts of noise if the implementation is not handled properly. Chips have been introduced recently to simplify implementation.

Parametric Representations

Another class of representations has nothing to do with models of hearing or acoustics in particular, but exploits some numerical properties to achieve high-quality audio results. The first major breakthrough in this arena was frequency modulation, developed at Stanford by John Chowning [42,43]. FM had been used for several decades for radio transmission. Chowning discovered that certain combinations of carrier and modulator frequencies could produce waveforms that mimicked the properties of musical waveforms. FM was used in digital synthesisers and other musical devices, mostly by Yamaha. Many PC plug-in cards also use FM chips for sound synthesis.
Other techniques such as granular synthesis, waveshaping, Chant, or the Karplus-Strong algorithm have been investigated in the computer music community and implemented in some synthesisers such as the Korg 01/W Series (waveshaping). Techniques in the speech community such as vector quantisation are discussed below.

Sub-band coding

Sub-band coding (SBC) is any form of transform coding that breaks a signal into a number of different frequency bands and encodes each one independently. This decomposition is often the first step in data compression for audio and video signals.
The utility of SBC is perhaps best illustrated with a specific example. When used for audio compression, SBC exploits what might be considered a deficiency of the human auditory system. Human ears are normally sensitive to a wide range of frequencies, but when a sufficiently loud signal is present at one frequency, the ear will not hear weaker signals at nearby frequencies. We say that the louder signal masks the softer ones. The louder signal is called the masker, and the point at which masking occurs is known, appropriately enough, as the masking threshold. The basic idea of SBC is to enable a data reduction by discarding information about frequencies which are masked. The result necessarily differs from the original signal, but if the discarded information is chosen carefully, the difference will not be noticeable.
The simplest way to encode audio signals is pulse-code modulation (PCM), which is used on audio CDs, DAT recordings, and so on. All digitization represents continuous signals with a finite set of numbers, and is thus fundamentally inexact. The more bits (numbers) used, the finer the granularity in the digital representation, and the smaller the error. Such quantization errors may be thought of as a type of noise, because they are effectively the difference between the original source and its binary representation. With PCM, the only way to mitigate the audible effects of these errors is to use enough bits to ensure that the noise is low enough to be masked either by the signal itself or by other sources of noise. A high-quality signal is possible, but at the cost of a high bitrate (over 700 kbit/s for one channel of CD audio, e.g.). In effect, many bits are wasted in encoding masked portions of the signal because PCM makes no assumptions about how the human ear hears.
More clever ways of digitizing an audio signal can reduce that waste by exploiting known characteristics of the auditory system. A classic method is nonlinear PCM, such as mu-law encoding (named after a perceptual curve in auditory perception research). Small signals are digitized with finer granularity than are large ones; the effect is to add noise that is proportional to the signal strength. Sun's Au file format for sound is a popular example of mu-law encoding. Using 8-bit mu-law encoding would cut the per-channel bitrate of CD audio down to about 350 kbit/s, or about half the standard rate. Because this simple method only minimally exploits masking effects, it produces results that are often audibly poorer than the original.
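As a rough illustration of the nonlinear PCM idea, here is a small sketch of mu-law companding of samples in the range [-1, 1]. The mu = 255 value matches the common 8-bit telephone standard, but the simple rounding used here is only an approximation of the exact G.711 bit layout, and the test values are invented.

#include <iostream>
#include <cmath>

// mu-law compression: F(x) = sgn(x) * ln(1 + mu*|x|) / ln(1 + mu)
double mulaw_compress(double x, double mu = 255.0)
{
    double sign = (x < 0) ? -1.0 : 1.0;
    return sign * std::log(1.0 + mu * std::fabs(x)) / std::log(1.0 + mu);
}

// Inverse mapping used by the decoder.
double mulaw_expand(double y, double mu = 255.0)
{
    double sign = (y < 0) ? -1.0 : 1.0;
    return sign * (std::pow(1.0 + mu, std::fabs(y)) - 1.0) / mu;
}

int main()
{
    // Quiet and loud samples get companded, quantised to 8 bits, then expanded.
    for (double x : {0.01, 0.1, 0.5, 1.0}) {
        double y = mulaw_compress(x);
        int code = static_cast<int>(std::lround(y * 127.0));   // simplified 8-bit quantiser
        double back = mulaw_expand(code / 127.0);
        std::cout << "in " << x << "  coded " << code << "  out " << back << "\n";
    }
    return 0;
}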

Methods of Encoding the Analog Signal

Real-world requirements may make it impossible to handle the full bit stream of, for example, CD-quality audio. One solution, delta modulation, is to encode not the value of each sample, but the difference between one sample and the next. In most cases, fewer bits per sample need to be transmitted, but delta modulation has problems. A more practical variant is called adaptive delta pulse code modulation, or ADPCM. In order to handle both signals that change quickly and signals that change slowly, the step size encoded between adjacent samples varies according to the signal itself; in other words, if the waveform is changing rapidly, large steps can be quantised. This is the method used in CD-I (compact disc-interactive), discussed below.
For speech signals, a widely used system works with a quantisation step size that increases logarithmically with the level of the signal. This means that the quantisation levels are closest together when the signal is quiet and spaced further apart when the signal is louder. CCITT Recommendation G.711 codifies the A-law and mu-law encoding schemes, whereby speech sampled at 8 kHz is transmitted in this way.
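A minimal sketch of plain delta (difference) encoding may help make the ADPCM idea concrete. Real ADPCM adds an adaptive step size and a predictor, which are omitted here for brevity, and the 16-bit sample values below are invented.

#include <iostream>
#include <vector>
#include <cstdint>

int main()
{
    // Invented 16-bit samples of a slowly changing waveform.
    std::vector<int16_t> samples = {1000, 1010, 1025, 1030, 1028, 1015, 990, 960};

    // Delta encoding: keep the first sample, then store only the
    // sample-to-sample differences, which are small numbers.
    std::vector<int16_t> deltas;
    for (size_t i = 1; i < samples.size(); ++i)
        deltas.push_back(samples[i] - samples[i - 1]);

    // Decoding reverses the process by accumulating the differences.
    std::vector<int16_t> decoded = {samples[0]};
    for (int16_t d : deltas)
        decoded.push_back(static_cast<int16_t>(decoded.back() + d));

    for (size_t i = 0; i < samples.size(); ++i)
        std::cout << samples[i] << " -> " << decoded[i] << "\n";

    // Because the differences need fewer bits than the raw samples, the bit
    // rate drops; ADPCM adapts the step size so fast changes are not lost.
    return 0;
}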

Digital Representations of Sound

Time-Domain Sampled Representation
Sound in the analog world is said to be continuous in both time and amplitude. The sound’s analog amplitude can be measured to an arbitrary degree of accuracy, and measurements can be taken at any point in time. A digital signal is different; the signal is defined only at certain points in time, and the signal may take on only a finite number of values.
To sample a signal means to examine it at some point in time. Sampling usually happens at equally separated intervals, at a rate called the sample frequency, determined by the Sampling Theorem. If a signal contains frequency components up to some frequency f, then the sample frequency must be at least 2f in order to reconstruct the signal properly. In practical systems, the sample frequency must be higher than 2f. In the early days of digital audio, sample frequencies of 44.1 kHz and 48 kHz were adopted to handle the full 20-kHz range of human hearing. The sample frequency of 32 kHz is also common. In multimedia systems for the PC market, submultiples of 44.1 kHz (22.05 kHz, 11.025 kHz) are often found. The highest frequency that can be handled (i.e., one-half the sample rate) is often called the Nyquist frequency.
To quantise a signal means to determine the signal’s value to some arbitrary degree of accuracy. The digital signal is defined only at the times where the vertical bars occur. The height of each vertical bar can take on only certain values, shown by dashed lines, which are sometimes higher and sometimes lower than the original signal. If the height of each bar is translated into a digital number, then the signal is said to be represented by pulse-code modulation, or PCM. Pohlmann gives a good overview of other digital representations of sampled signals. (In the world of digital music, a sampled and quantised signal shown by the vertical bars in figure 1.9 (a) is called simply a sampled signal, to distinguish it from signals synthesised, for example, with frequency modulation; this is discussed below in more detail.)
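To make sampling and quantisation concrete, here is a small sketch that samples a 1 kHz sine tone at 8 kHz and quantises it to 8-bit PCM. The tone frequency, sample rate, and bit depth are chosen purely for illustration.

#include <iostream>
#include <cmath>

int main()
{
    const double PI = 3.14159265358979323846;
    const double sampleRate = 8000.0;   // must be at least twice the highest signal frequency
    const double tone = 1000.0;         // 1 kHz test tone, well below the 4 kHz Nyquist frequency
    const int bits = 8;
    const int levels = 1 << bits;       // 256 quantisation levels

    for (int n = 0; n < 16; ++n) {
        double t = n / sampleRate;                        // sampling: discrete instants in time
        double x = std::sin(2.0 * PI * tone * t);         // continuous amplitude in [-1, 1]

        // Quantisation: map [-1, 1] onto 256 integer levels (8-bit PCM).
        int code = static_cast<int>(std::lround((x + 1.0) / 2.0 * (levels - 1)));
        double back = code / double(levels - 1) * 2.0 - 1.0;

        std::cout << "n=" << n << "  x=" << x << "  pcm=" << code << "  back=" << back << "\n";
    }
    return 0;
}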

Cast-Based Animation

Cast-based animation, which also is called sprite animation, is a very popular form of animation and has seen a lot of usage in games. Cast-based animation involves objects that move independently of the background. At this point, you may be a little confused by the use of the word "object" when referring to parts of an image. In this case, an object is something that logically can be thought of as a separate entity from the background of an image. For example, in the animation of a forest, the trees might be part of the background, but a deer would be a separate object moving independently of the background. Each object in a cast-based animation is referred to as a sprite, and can have a changing position. Almost every video game uses sprites to some degree.

Frame-Based Animation

Frame-based animation is the simpler of the animation techniques. It involves simulating movement by displaying a sequence of static frames. A movie is a perfect example of frame-based animation; each frame of the film is a frame of animation. When the frames are shown in rapid succession, they create the illusion of movement. In frame-based animation, there is no concept of an object distinguishable from the background; everything is reproduced on each frame. This is an important point, because it distinguishes frame-based animation from cast-based animation. The number of images used in the Count applets in the last chapter would make a good frame-based animation. By treating each image as an animation frame and displaying them all over time, you can create counting animations.
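A bare-bones sketch of such a frame loop is shown below; it simply cycles frame indices at a fixed rate on the console, since real drawing code depends on the graphics toolkit in use, and the 12 fps rate and frame count are illustrative.

#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    const int fps = 12;                   // low end of rates the eye accepts as motion
    const int frameCount = 8;             // number of pre-drawn frames in the sequence
    const auto frameTime = std::chrono::milliseconds(1000 / fps);

    for (int tick = 0; tick < 24; ++tick) {
        // Everything is redrawn each frame; there is no separate sprite or
        // background object, which is what distinguishes frame-based from
        // cast-based animation.
        int frame = tick % frameCount;
        std::cout << "displaying frame " << frame << "\n";
        std::this_thread::sleep_for(frameTime);
    }
    return 0;
}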

Animation

What is animation? To put it simply, animation is the illusion of movement. When you watch television, you see lots of things moving around. You are really being tricked into believing that you are seeing movement. In the case of television, the illusion of movement is created by displaying a rapid succession of images with slight changes in the content. The human eye perceives these changes as movement because of its low visual acuity. The human eye can be tricked into perceiving movement with as few as 12 frames per second. It should come as no surprise that frames per second (fps) is the standard unit of measure for animation. It should also be no surprise that computers use the same animation technique as television sets to trick us into seeing movement.

Anti Aliasing

In statistics, signal processing, computer graphics and related disciplines, aliasing refers to an effect that causes different continuous signals to become indistinguishable (or aliases of one another) when sampled. It also refers to the distortion or artifact that results when a signal is sampled and reconstructed as an alias of the original signal.
When we view a digital photograph, the reconstruction (interpolation) is performed by a display or printer device, and by our eyes and our brain. If the reconstructed image differs from the original image, we are seeing an alias. An example of spatial aliasing is the Moiré pattern one can observe in a poorly pixelized image of a brick wall. Techniques that avoid such poor pixelizations are called anti-aliasing.
Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to us. If we sample it with a frequency that is too low and reconstruct the music with a digital-to-analog converter, we may hear the low-frequency aliases of the undersampled high frequencies. Therefore, it is common practice to remove the high frequencies with a filter before the sampling is done.
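For the temporal case, the alias frequency can be computed directly: a tone at frequency f sampled at rate fs folds down to the baseband frequency |f - fs*round(f/fs)|. The sketch below works through the example of an inaudible high tone folding into the audible range; the specific frequencies are chosen only for illustration.

#include <iostream>
#include <cmath>

// Baseband alias of a tone at frequency f when sampled at rate fs.
double aliasFrequency(double f, double fs)
{
    return std::fabs(f - fs * std::round(f / fs));
}

int main()
{
    const double fs = 32000.0;   // 32 kHz sample rate, so the Nyquist frequency is 16 kHz

    // An inaudible 30 kHz component, if not filtered out before sampling,
    // folds down to a clearly audible 2 kHz alias.
    for (double f : {1000.0, 15000.0, 30000.0, 47000.0})
        std::cout << f << " Hz tone appears as " << aliasFrequency(f, fs) << " Hz\n";
    return 0;
}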

Morphing

Morphing is a special effect in motion pictures and animations that changes (or morphs) one image into another through a seamless transition. Most often it is used to depict one person turning into another through some magical or technological means, or as part of a fantasy or surreal sequence. Traditionally such a depiction would be achieved through cross-fading techniques on film. Since the early 1990s, this has been replaced by computer software to create more realistic transitions.

In early feature films a morph would be achieved by cross-fading from the motion picture of one actor or object to another. Because of the limitations of this technique, the actors or objects would have to stay virtually motionless in front of a background that did not change or move in the frame between the before and after shots. Later, more sophisticated cross-fading techniques were employed that faded different parts of one image to the other gradually instead of fading the entire image at once. The effect is technically called a "spatially-warped cross-dissolve".
Morphing software continues to advance today and many programs can automatically morph images that correspond closely enough with relatively little instruction from the user. This has led to the use of morphing techniques to create convincing slow-motion effects where none existed in the original film or video footage by morphing between each individual frame (see optical flow). Morphing has also appeared as a transition technique between one scene and another in television shows, even if the contents of the two images are entirely unrelated. The software in this case attempts to find corresponding points between the images and distort one into the other as they crossfade. In effect morphing has replaced the use of crossfading as a transition in some television shows, though crossfading was originally used to produce morphing effects. Morphing is used far more heavily today than ever before. In years past, effects were obvious, which led to their overuse. Now, morphing effects are most often designed to be invisible.
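As an illustrative sketch of the simpler half of the effect (assuming two images of identical size; a true morph would also warp both images toward matching feature points before blending, which is omitted here), a cross-dissolve is just a per-pixel linear blend controlled by a parameter t running from 0 to 1:

    import java.awt.image.BufferedImage;

    // Illustrative cross-dissolve: blend image a into image b as t goes from 0 to 1.
    class CrossDissolve {
        static BufferedImage blend(BufferedImage a, BufferedImage b, double t) {
            int w = a.getWidth(), h = a.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int pa = a.getRGB(x, y), pb = b.getRGB(x, y);
                    // blend each colour channel separately
                    int r  = (int) ((1 - t) * ((pa >> 16) & 0xFF) + t * ((pb >> 16) & 0xFF));
                    int g  = (int) ((1 - t) * ((pa >> 8)  & 0xFF) + t * ((pb >> 8)  & 0xFF));
                    int bl = (int) ((1 - t) * (pa & 0xFF)         + t * (pb & 0xFF));
                    out.setRGB(x, y, (r << 16) | (g << 8) | bl);
                }
            }
            return out;
        }
    }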

Vector graphics

Vector graphics (also called geometric modeling or object-oriented graphics) is the use of geometrical primitives such as points, lines, curves, and polygons, all based upon mathematical equations, to represent images in computer graphics. The term is used in contrast to raster graphics, which represents images as a collection of pixels and is typically used for photographic images.
Most computer displays translate vector representations of an image to a raster format. Drawing software is used for creating and editing vector graphics. You can change the image by editing these objects: you can stretch them, twist them, colour them, and so on with a series of tools. The raster image, containing a value for every pixel on the screen, is stored in memory.

Starting in the earliest days of computing in the 1950s and into the 1980s, a different type of display, the vector graphics system, was used. In these "calligraphic" systems the electron beam of the CRT display monitor was steered directly to trace out the shapes required, line segment by line segment, with the rest of the screen remaining black. This process was repeated many times a second ("stroke refresh") to achieve a flicker-free or near flicker-free picture. These systems allowed very high-resolution line art and moving images to be displayed without the (for that time) unthinkably huge amounts of memory that an equivalent-resolution raster system would have needed, and allowed entire subpictures to be moved, rotated, blinked, and so on by modifying only a few words of the graphic data "display file." These vector-based monitors were also known as X-Y displays.

A special type of vector display is known as the storage tube, which has a video tube that operates much like an Etch A Sketch. As the electron beam moves across the screen, an array of small low-power electron flood guns keeps the path of the beam continuously illuminated. This allows the video display itself to act as memory storage for the computer. The detail and resolution of the image can be very high, and the vector computer could slowly paint out paragraphs of text and complex images over a period of a few minutes while the storage display kept the previously written parts continuously visible. The image retention of a storage display can last for many hours with the vector storage display powered, but the screen can be cleared instantly with the push of a button or a signal from the driving vector computer.
Vectorising is good for removing unnecessary detail from a photograph. This is especially useful for information graphics or line art.
The term vector graphics is mainly used today in the context of two-dimensional computer graphics. It is one of several modes an artist can use to create an image on a raster display. Other modes include text, multimedia and 3D rendering. Virtually all modern 3D rendering is done using extensions of 2D vector graphics techniques. Plotters used in technical drawing still draw vectors directly to paper.
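A small Java2D sketch (names illustrative) shows the vector-to-raster step most displays perform: the shapes exist only as coordinates until they are drawn, so scaling the geometry first keeps edges as sharp as the output resolution allows:

    import java.awt.Graphics2D;
    import java.awt.geom.Ellipse2D;
    import java.awt.geom.Line2D;
    import java.awt.image.BufferedImage;

    // Illustrative vector-to-raster rendering: primitives are stored as geometry
    // and only become pixels when drawn into the raster image.
    class VectorToRaster {
        static BufferedImage render(double scale) {
            BufferedImage raster = new BufferedImage(400, 400, BufferedImage.TYPE_INT_RGB);
            Graphics2D g2 = raster.createGraphics();
            g2.scale(scale, scale);                       // transform the geometry, not pixels
            g2.draw(new Line2D.Double(10, 10, 190, 60));  // primitives defined by coordinates
            g2.draw(new Ellipse2D.Double(50, 80, 100, 60));
            g2.dispose();
            return raster;                                // pixels exist only after rasterization
        }
    }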

ATM & ADSL

A Virtual Channel (VC) denotes the transport of ATM cells which have the same unique identifier, called the Virtual Channel Identifier (VCI). This identifier is encoded in the cell header. A virtual channel represents the basic means of communication between two end-points, and is analogous to an X.25 virtual circuit.
A Virtual Path (VP) denotes the transport of ATM cells belonging to virtual channels which share a common identifier, called the Virtual Path Identifier (VPI), which is also encoded in the cell header. A virtual path, in other words, is a grouping of virtual channels which connect the same end-points. This two-layer approach results in improved network performance. Once a virtual path is set up, the addition or removal of virtual channels is straightforward.
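As a rough sketch, assuming the standard UNI cell header layout (4-bit GFC, 8-bit VPI, 16-bit VCI, 3-bit payload type, 1-bit CLP, and an 8-bit HEC in the 5-byte header), the two identifiers can be unpacked like this; the class name is illustrative:

    // Illustrative decoding of the VPI and VCI from the 5-byte ATM UNI cell header.
    class AtmCellHeader {
        final int vpi;   // Virtual Path Identifier (8 bits at the UNI)
        final int vci;   // Virtual Channel Identifier (16 bits)

        AtmCellHeader(byte[] h) {  // h = first 5 bytes of a 53-byte cell
            int b0 = h[0] & 0xFF, b1 = h[1] & 0xFF, b2 = h[2] & 0xFF, b3 = h[3] & 0xFF;
            vpi = ((b0 & 0x0F) << 4) | (b1 >> 4);               // 8 bits spanning bytes 0-1
            vci = ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4);  // 16 bits spanning bytes 1-3
        }
    }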
ATM has proved very successful in the WAN scenario and numerous telecommunication providers have implemented ATM in their wide-area network cores. Also many ADSL implementations use ATM. However, ATM has failed to gain wide use as a LAN technology, and its complexity has held back its full deployment as the single integrating network technology in the way that its inventors originally intended.
Many people, particularly in the Internet protocol-design community, considered this vision to be mistaken. Their argument went something like this: We know that there will always be both brand-new and obsolescent link-layer technologies, particularly in the LAN area, and it is fair to assume that not all of them will fit neatly into the synchronous optical networking model for which ATM was designed. Therefore, some sort of protocol is needed to provide a unifying layer over both ATM and non-ATM link layers, and ATM itself cannot fill that role. Conveniently, we have this protocol called "IP" which already does that. Ergo, there is no point in implementing ATM at the network layer.
In addition, the need for cells to reduce jitter has diminished as transport speeds have increased, and improvements in Voice over IP (VoIP) have made the integration of speech and data possible at the IP layer, again removing the incentive for ubiquitous deployment of ATM.

Asymmetric Digital Subscriber Line (ADSL) is a form of DSL, a data communications technology that enables faster data transmission over copper telephone lines than a conventional voiceband modem can provide. It does this by utilizing frequencies that are not used by a voice telephone call. A splitter or microfilters allow a single telephone connection to be used for both ADSL service and voice calls at the same time. Because phone lines vary in quality and were not originally engineered with ADSL in mind, it can generally only be used over short distances, typically less than 3 miles (5 km). At the telephone exchange the line generally terminates at a DSLAM, where another frequency splitter separates the voiceband signal for the conventional phone network. Data carried by ADSL is typically routed over the telephone company's data network and eventually reaches the conventional Internet.
The distinguishing characteristic of ADSL over other forms of DSL is that the volume of data flow is greater in one direction than the other, i.e. it is asymmetric. Providers usually market ADSL as a service for consumers to connect to the Internet in a relatively passive mode: able to use the higher speed direction for the "download" from the Internet but not needing to run servers that would require high speed in the other direction.

Internet & World Wide Web

The Internet is a worldwide, publicly accessible series of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web.

The World Wide Web (commonly shortened to the Web) is a system of interlinked hypertext documents accessed via the Internet. With a web browser, a user views web pages that may contain text, images, videos, and other multimedia and navigates between them using hyperlinks. The World Wide Web was created in 1989 by Sir Tim Berners-Lee from the United Kingdom and Robert Cailliau from Belgium, working at CERN in Geneva, Switzerland. Since then, Berners-Lee has played an active role in guiding the development of web standards (such as the markup languages in which web pages are composed), and in recent years has advocated his vision of a Semantic Web.

The Internet and the World Wide Web are not synonymous. The Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, wireless connections, etc. In contrast, the Web is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. The World Wide Web is one of the services accessible via the Internet, along with many others such as e-mail and file sharing. The Internet protocol suite is a collection of standards and protocols organized into layers so that each layer provides the foundation and the services required by the layer above. In this scheme, the Internet consists of the computers and networks that handle Internet Protocol (IP) data packets. Transmission Control Protocol (TCP) depends on IP and solves problems like data packets arriving out of order or not at all. Next comes Hypertext Transfer Protocol (HTTP), which is an application layer protocol. It runs on top of TCP/IP and provides user agents, such as web browsers, with access to the files, documents, and other resources of the World Wide Web.
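A minimal sketch of that layering in Java (the host name is just a placeholder) opens a TCP connection and speaks HTTP over it by hand:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Illustrative layering: an HTTP request (application layer) written as plain text
    // over a TCP connection, which itself rides on IP.
    class TinyHttpGet {
        public static void main(String[] args) throws Exception {
            try (Socket tcp = new Socket("example.com", 80);  // TCP/IP connection
                 PrintWriter out = new PrintWriter(tcp.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(tcp.getInputStream()))) {
                // the HTTP request itself is just structured text
                out.print("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
                out.flush();
                for (String line; (line = in.readLine()) != null; )
                    System.out.println(line);                 // the HTTP response
            }
        }
    }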

Local Area Network (LAN)

A local area network (LAN) is usually privately owned and links the devices in a single office, building, or campus (see Figure 1.4). Depending on the needs of an organisation and the type of technology used, a LAN can be as simple as two PCs and a printer in someone's home office, or it can extend throughout a company and include voice, sound, and video peripherals. Currently, LAN size is limited to a few kilometers.
LANs are designed to allow resources to be shared between personal computers or workstations. The resources to be shared can include hardware (e.g., a printer), software (e.g., an application program), or data. A common example of a LAN, found in many business environments, links a work group of task-related computers, for example, engineering workstations or accounting PCs. One of the computers may be given a large-capacity disk drive and become a server to the other clients. Software can be stored on this central server and used as needed by the whole group. In this example, the size of the LAN may be determined by licensing restrictions on the number of users per copy of software, or by restrictions on the number of users licensed to access the operating system.
In addition to size, LANs are distinguished from other types of networks by their transmission media and topology. In general, a given LAN will use only one type of transmission medium. The most common LAN topologies are bus, ring, and star. Traditionally, LANs have had data rates in the 4 to 16 Mbps range. Today, however, speeds are increasing and can reach 100 Mbps, with gigabit systems in development.

Presentation Systems

Commercial presentation packages such as Aldus Persuasion and Lotus Freelance allow novice users to create presentations using text, graphics, scanned images, and animations with only minimal training. Presentations are typically linear sequences of information, so the interfaces for users have often been organised along the time dimension. However, presentations on the computer do not have to be limited to sequential display. Current presentation packages allow users to construct interactive, i.e., nonlinear, presentations that more closely resemble those produced by conventional authoring systems.
The requirement to deal more explicitly with the time dimension became even stronger as time-based media such as audio and video segments became available for presentations. New user interface metaphors were developed to allow users to organize their material along these lines.

Authoring Systems

Many authoring systems are based on the model that one or more authors will create the design and enter the media elements for an experience with which an end user will interact. The result of the authoring process is a structure of elements, linked in various paths determined by the author(s). This structure appears to the end user as a series of screens containing information in various forms, with interactive options available through buttons, icons, or keystrokes.
This model is translated into separate and distinctly different interfaces for the author and the end user. The author's interface typically has a suite of on-screen tools available through menus, icons, text prompts, or other options, which can be invoked to create and place elements on the screen. The end user's interface is defined by the elements that the author has invented. For example, a screen that includes text in paragraphs, a scanned image, and audio segments may have hot buttons or sensitive regions on the screen that the end user can select to interact with a screen element, navigate to other parts of the presentation, or simply get help.
Implicit in this model has been the idea that the end user will not add to or modify the content of the experience and, in fact, will interact only on the terms set by the author(s). While this approach has been common, it does not necessarily provide the end user with the most effective experience. One of the trends in new software is support for annotation and personalization by the end user, especially for training.
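As an illustrative data-structure sketch (all names hypothetical), the structure an authoring system produces can be thought of as screens holding media elements, with named options linking each screen to others; the end user navigates by choosing among those options:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative sketch: a screen holds its media elements and a set of named
    // interactive options, each linking to another screen in the authored structure.
    class Screen {
        final String title;
        final List<String> elements = new ArrayList<>();            // text, image, audio references
        final Map<String, Screen> options = new LinkedHashMap<>();  // button label -> destination

        Screen(String title) { this.title = title; }

        // The end user "navigates" by selecting one of the options the author defined.
        Screen navigate(String choice) {
            return options.getOrDefault(choice, this);
        }
    }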

Multimedia Presentation and Authoring

Easy-to-use tools for creating, manipulating, and presenting multimedia content will be an important factor in the utility of multimedia information. The digital representation of multimedia data simplifies many of the problems that have limited the use of analog audio and video editing systems to specialists. Yet interactive nonlinear time-based hypermedia data are still difficult and time-consuming to compose. The most powerful contemporary authoring tools use a programming paradigm to allow complex interaction and synchronisation relationships to be defined. These programming facilities are typically provided through either a scripting language or an iconic visual programming interface. Individual media types are edited using separate media-specific editors. This chapter reviews the common approaches to multimedia authoring and presentation and discusses where future tools might be able to improve on the current tools.
Overview
Authoring and presentation systems are the software programs that allow people to create and deliver an experience for an end user (Figure 1.8). This experience can take many forms, from a computer-based training course to a room-sized presentation or a virtual-reality environment requiring head mounted displays and spatial input gloves. The common denominator across all of these forms is that the experience is an interactive one: The end user interacts with the programs on the computer by providing input (touching a screen, typing on a keyboard, or making a gesture in space, for example), which affects the output from the computer. This chapter surveys recent approaches to multimedia authoring and presentation that allow manipulation of diverse media forms through various modes of interaction.
Authoring software is designed to support the creation of an interactive, nonlinear experience in the sense that there are several pathways through the material, so the end user can make choices about where to go in the presentation, as well as how long to view each screen. The end user's choices are frequently referred to as navigating. Authoring software for training, known as courseware, frequently supports testing of the end user, maintaining individual records, and tracking the paths that users take through the material. There are often a variety of statistical reporting utilities included for the trainer as well. From a commercial standpoint, authoring systems tend to be more expensive than presentation systems, and often have one price for the full authoring environment plus a per-copy or site license for a run-time, playback-only module for each end-user machine.
Presentation packages are designed to support the same media types, but without support for testing, scorekeeping, or tracking of end users. Some packages are intended for creation of simple linear presentations, which do not offer multiple pathways through the information. Presentation packages usually have a single price and simply offer different modes for creation and playback.
There are other software packages which can be considered authoring systems in the broad sense of the term. Some are intended for creation of hypertext documents, the term coined by Ted Nelson to mean non-sequential writing. These packages have certain features not necessarily found in training or presentation systems. These include the ability to make links from one place in a document to another, such as a reference to a textual term that can be activated as a hot button to take the reader to a full explanation of the term. Another feature is the capability to search all of the text in the entire document for all the occurrences of a specific word or phrase, known as indexed search.

Video Devices

No other contemporary message medium has the visual impact of video. With a video digitizing board installed in your computer, you can display a television picture on your monitor. Some boards included a frame-grabber feature for capturing the image and turning it into a color bitmap, which can be saved as a PICT or TIFF file and then used as part of a graphic or a background in your project.
Display of video on any computer platform requires manipulation of an enormous amount of data. When used in conjunction with videodisc players, which give you precise control over the images being viewed, video cards let you place an image into a window on the computer monitor; a second television screen dedicated to video is not required. And video cards typically come with excellent special effects software.

Monitors

The monitor you need for development of a multimedia project depends on the type of multimedia application you are creating, as well as what computer you're using. A wide variety of monitors is available for both Macintoshes and PCs. High-end, large-screen graphics monitors are available for both, and they are expensive.
Serious multimedia developers will often attach more than one monitor to their computers, using add-on graphics boards. This is because many authoring systems allow you to work with several open windows at a time, so you can dedicate one monitor to viewing the work you are creating or designing, and you can perform various editing tasks in windows on other monitors that do not block the view of your work. Developing in Director is best with at least two monitors, one to view your work and the other to view the "Score." A third monitor is often added by Director developers to display the "Cast." For years, one of the advantages of the Macintosh for making multimedia has been that it is very easy to attach multiple monitors for development work. Finally, with Windows 98, PCs can also be configured for more than one monitor.

Digital Versatile Disc (DVD)

In December 1995, nine major electronics companies (Toshiba, Matsushita, Sony, Philips, Time Warner, Pioneer, JVC, Hitachi, and Mitsubishi Electric) agreed to promote a new optical disc technology for distribution of multimedia and feature-length movies called DVD (see Table 1.2).
DVD can provide 720 pixels per horizontal line, whereas current televisions (NTSC) provide 240, so television pictures will be sharper and more detailed. With Dolby AC-3 Digital Surround Sound as part of the specification, six discrete audio channels can be programmed for digital surround sound, and with a separate subwoofer channel, developers can program the low-frequency doom and gloom music popular with Hollywood. DVD also supports Dolby Pro-Logic Surround Sound, standard stereo, and mono audio. Users can randomly access any section of the disc and use the slow-motion and freeze-frame features during movies.

The Media Control Interface (MCI)

Windows provides the Media Control Interface (MCI), a unified command-driven method for software to talk to related multimedia peripheral devices. With MCI, any hardware (or software) device can be connected to a computer running Windows. Using the appropriate drivers (normally supplied by the device manufacturer), programmers can control the device with simple command strings or codes sent to the MCI.

Plug-Ins

A plug-in is a software application designed to extend the functionality of a web browser. Plug-ins are launched from within the browser and are capable of playing audio, showing movies, and running animations, among other things.
All plug-in models require two entities: the plug-in host and the plug-in itself. The host could be an application, an operating system, or even another plug-in. Plug-ins are written and compiled entirely separately from the host, typically by another developer. When the host code is executed, it uses whatever mechanism is provided by the plug-in architecture to locate compatible plug-ins and load them, thus adding capabilities to the host that were not previously available.

When the host application is launched, it searches for all plug-ins with the appropriate identifier. For each plug-in found, the host uses the plug-in architecture to obtain the processing description string and a pointer to the processing function.
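A minimal Java sketch of such a contract (the interface and method names are illustrative, and ServiceLoader here merely stands in for whatever discovery mechanism a given plug-in architecture provides) might look like this:

    import java.util.ServiceLoader;

    // Illustrative plug-in contract: the host asks each plug-in for a description
    // and can then call its processing function.
    interface Plugin {
        String description();          // the processing description the host asks for
        byte[] process(byte[] input);  // the processing function the host will call
    }

    class PluginHost {
        public static void main(String[] args) {
            // Locate every compatible plug-in available to the host and load it.
            for (Plugin p : ServiceLoader.load(Plugin.class)) {
                System.out.println("Loaded plug-in: " + p.description());
            }
        }
    }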

Friday, May 23, 2008

Acer 4710z review

Acer 4710z

Processor: AMD Turion X2 @1.6GHz
RAM: 1GB DDR2 RAM
Graphics Card: nVidia 7000m GT
Hard Disk: 160GB SATA HDD
Display: TFT

First Impressions: At first glance the laptop looks different compared to other laptops, which have grey or black finishes. This laptop has a plastic finish, making it look more like a portable DVD player. After some time, though, you will find this feature adding to the appeal of the laptop.

Looks & Weight: The blue and grey laptop is slim and has curved edges. The 2.3 kg weight makes it very light to carry. The design really gains points, as it is very sleek and smooth.

Hardware Detection: The laptop came with Linux loaded, but I installed Windows Vista Premium. Hardware detection worked well for the hardware I added, except that the security checks in Vista made it annoying.

Performance & Battery Life: The 6-cell battery has a typical power backup of 2 hours; this is not adequate for my use, but is still good with Vista. Overall I was satisfied with the performance in the initial weeks, but slowly I am starting to get annoyed, as the AMD processor did not actually live up to my expectations. The system has a very high booting time of almost 2 minutes, even with minimum startup programs. The PC would do better with XP or another OS.

The 14" screen is good for all purposes, but you wouldn't want to watch your favorite flick on it. The VGA webcam is said to be optimized for use over slow-speed internet, but the refresh rate is very bad; one would not record a video greeting with it, and the PC did not come with any software to support video capturing. Windows Movie Maker supports this in XP but not in Vista.

Sound quality is good and the speakers live up to all expectations. The volume control is actually a rotating wheel like one would find on an old Walkman, very passé.

The DVD writer works fine with Roxio and Nero. And it has been able to play even some of my scratched CDs that my old PC would not play.

The bluetooth is working fine and is synchronizing well with other devices.

Support: I have not really tried Acer support yet, thankfully. I don't think the support would be any less than good, as my friends have had to use it a couple of times and never complained about it.

Final Thoughts: This piece from Acer is a good budget laptop, very light in weight and on the pocket. I would suggest using it with Windows XP.

Saturday, May 17, 2008

Michael Arrington and the Gillmor gang: exclusive transcript [3]

Mike: Mmm hmm.

Steve: I'm hearing your bit torrent scooping up all the CBS shows so that you can watch them on your time schedule.

Mike: I can't remember the last time I watched CBS television show. I can't remember the last time I was on a CBS owned television station for any reason at all.

Steve: Letterman? You don't watch Letterman?

Mike: Pardon me?

Steve: You don't watch Letterman?

Mike: No. I watch TV. When I do it's HBO or Showtime, usually they are series or something because that is the only decent stuff on television.

Man 3: You watch Charlie Rose don't you?

Mike: I watched it twice.

Man 3: Both times you were on.

Steve: So, Marc Canter - Marc Canter -

Mike: Marc Canter is on?

Steve: Yeah.

Mike: Thanks for the link Marc.

Marc Canter: No problem sir.

Mike: I didn't realize you were having some kind of data summit thing. I don't think it's appropriate to have something like that without me there. I mean, I am one of the leading lights in the data portability movement.

Marc: It was mainly for geeks. It was just all the guys. It was all the practitioners, the guys who are actually writing the code and we were able to discuss key issues.

Of course, the Facebook shutdown happened during the summit so we were able to discuss that in real time. It was pretty interesting.

Mike: Again, I would appreciate it if you brought me in as an expert in those areas.

[overtalk]

Marc: Again, most of the people showed up by the way so they will fly high.

Steve: OK. So, what I am trying to do is stitch the Facebook thingamajig into this particular conversation, given what we were just talking about which is that these clouds are going to start to try and create the user relationships.

Isn't that what is going on with Facebook Connect and the Friend Connect that Google showed on Monday?

Marc: Well, let me first point out that, in the case of Comcast, they already do have those people signed up. They are going to try and go in and connect together their installed base around the country and offer them all e-mail synchronization capabilities.

Steve: Yeah, I think that's bullshit. But I've had Comcast for six years and I have never once looked at my e-mail address and never will.

Man: Yeah.

Man 2: Yeah.

Marc: You're typical of absolutely nobody though, except the other people on this call.

Steve: Yeah, exactly.

Marc: No, no, no, no, no.

Steve: Good for you. The ivory tower nonsense. The number of people that are influenced and motivated by RSS technology that nobody even knows what it stands for, is enormous in this economy.

So we can talk about early adopters being a clueless part of the marketplace, but everybody will be there shortly.

Man 2: I don't know.

Marc: Comcast's world - I have my doubts.

Steve Gillmor: They're never going to get there. By the time that they are surfaced as some sort of a vendor with a user relationship, there's going to be four or five other people on top of the stack above them. That are the ones that the user thinks they're dealing with.

Man 1: Exactly.

Man 2: Comcast is looking at this as just a list sort of an activity and they'll kill that and people will go away in no time. [noise] Brian Roberts doesn't know his ass from his elbow about any of this stuff. He's already under siege from stockholders from missing opportunities. Unless there's a change at the very top, I don't understand why this is going to matter to very many people. But as Mike said and as I said earlier, it's sort of a cloud opportunity for them.

I mean they're looking at the sports franchises. They look at people like the Red Sox doing this TV show called "Sox Appeal." You know what that one is, Dana? Where they have it's a dating show and the young good looking guy and the young good looking woman are sitting on top of the left field wall. And they have dates and you know, that sort of content is very viral potentially. And could get sort of a franchise built online.

And a lot of the different sports franchises both pro and probably in the South college too, could build content and build that into the Internet. And Comcast could be the platform for all that and they could sell some ads. But I don't think Brian Roberts is the guy to execute that.

Man 3: Yeah, no I agree. But I think you're right, it's an untapped opportunity to take these affinities, whether it's an affinity for sports or an affinity for dating or food. All these lifestyle things, Harley Davidsons and motorcycles.

Bring that in through a digital connection of some kind, match these people up and then give them the opportunity to create content back at you. Whether it's a discussion, whether it's a chat or a tweet, and then that's a nice little viral engine.

The more content that's created, the more people get affinity. The more affinity, the more they're tied in. And then you can monetize it from everything from t-shirts to advertisements to a new motorcycle.

Steve: Yeah, but they're like Sun Microsystems, they're not going to get any of that money. It's going to go right past them.

Man 3: That's why having brand, having content affinity, having lifestyle affinity is where the money should be. And the digital [??] opportunities are just commodities.

Steve: OK, I get it. So, Mike Arrington, are you still there?

Michael: Yep.

Steve: OK. Can you explain what's going on currently with this Facebook Friend Connect spat?

Michael: Yeah. Just hold on one second. Hey, so you've got to add a logo to that. Yep, it's posted, so...

Steve: Suddenly we're in a Laurence Feldman video again.

Michael: Yeah, "Hendricks!" [laughs] No we just broke a story. So you guys are talking about some really random stuff. And I really think that, you know, I think that Marc kind of nailed it when he said I was right, in his post. And I think this is all about - and I'm just kind of joking around - but I really think that what this is all about is a land grab. And it's really nothing more than that.

And this is what I wrote last night in the middle of the night because Scoble's post really pissed me off. And I'm happy to talk about that too, but I think that if you look back at the days of the mid to late 1990s, walled gardens were getting a lot of attention. People hated them, but they had the big market caps and AOL is the perfect example of they get you in and they keep you in.

And you know they had their problems, and eventually the open web sort of broke that down. And I think Steve, I think you were here sitting in my office a day or two ago talking about how the original walled garden was email at CompuServe. Was it you saying this? Anyway that eventually open email just broke apart those walled gardens as well.

So I think that today the only way that people can really have a walled garden is by trying to own user identity. And by that I mean the core sort of definition of you on the Internet. Louie Lamure [??] wrote a story a few weeks ago where he talked about his identity is distributed all around the Internet.

And he hates the fact that it sort of ends up being centralized. I think he was talking about FriendFeed at that point. For some reason he didn't like that, because he wasn't in control of it.

But you think about, we have pictures at Flickr. We might have videos at YouTube. We have our email account, we have our profile page, we have our blog, etc. Maybe up to 20 different places that we consider our identity. Our twitter account now I think is becoming increasingly that way.

And you know, there's problems with that. The fact that it's decentralized, but it also means that each of those companies owns a piece of us. And they seem to be hoping to really hold on to that data and not let it out. In other words, not let you take it back very easily. And not let you share it easily with other applications.

You can think about how every time we join another social network we have to re-add our friend list. And they try to make it easy by using your address book from Google or things like, although I generally try not to give them those credentials. But it's a real pain in the butt. And then synchronizing it and you add friends over time.

And we probably all have at least four or five distinct friend groups around the Internet. And so that's why the social networks are so important. The fact that they have our identity, some basic profile information and our friends list, it's just this hugely valuable information.

So the post I wrote yesterday is basically saying how this is the new walled garden. All these guys are becoming open ID issuers hoping that you use your Yahoo ID or whatever to log in around the Internet and sort of make it your permanent ID.

The social networks, all three of them - well, MySpace and now Google if you count them which they're not really a social network yet with any presence - but they're all trying to create software now where you can take your friends list and take your basic identity and sort of port that out to third party websites. So that you don't have to rebuild your friend list. Which is the upside for users.

The problem is that none of them, they're all talking about open standards and how they care about the user. But as we saw yesterday when Facebook shut down Google, they all want to be very much in control of the tools used and the standards set on how this data is shared and what can be done with it.

So I think we're seeing the beginning of a major data war and a race to grab users and their data. And then a race to see how much of that data they can sort of keep behind their walled gardens. And so I think that's what we're seeing and that's why I wrote the post, because I want to call that out.

And I think the main thing that I wrote in my post was how dare Facebook tell me... I sort of repositioned the argument, I said how dare Facebook tell me that I can't give Google access to my data? And let Google then do things with it.

Steve: Well, that's exactly right. Finally somebody has realized that they own their data in the first place.

Man 4: But aren't these guys also kind of playing a hierarchy of data in terms of who owns what about how many people? And then that's going to determine their valuations in their mind.

Steve: It may determine their valuations, but it has nothing to do with the facts. The users own their data, period. And you know, I don't know what Scoble said that pissed you off...

Michael: Well OK, let me talk about that because what Scoble did... See, Scoble screwed up earlier this year, because he used a Plaxo tool that went into Facebook and grabbed all of his friends' contact information. And that's not trivial to do because all that contact information is presented in a JPEG. It's an image because Facebook doesn't want people to easily scrape it.

So Plaxo went in, pulled up the friends list, pulled up each page. Used optical character recognition software to turn that into free text and then export it into Plaxo. And because of that, Facebook just banned Robert Scoble. Shut down his account temporarily. He was wrong then. And the reason I think he was wrong to try to do that is he thought, "This is my data, I want to export it."

But my argument was, that isn't his data, that's my data. If I'm his friend and he's pulling my contact information out of it, he's changed the rules on me. The rules that I know are set by Facebook, which is that the data is presented only in an image, etcetera and he's exporting where God knows what could happen to it.

And I don't know if, a couple years ago I wrote about a company called Jigsaw. Which actually people get paid to upload contact information for possible sales leads. And I think it's one of the worst things on the Internet. And so this sort of this my basic contact information, I need to have some control over who gets that and how. So I think he was wrong. And I think he knew he was wrong.

So yesterday, he sees this situation happen where Facebook bans Google from letting people export their own contact information. And in his eyes, I think he saw it as an analogy to what happened earlier in the year. So he changed his position and said, "No, no. I think Facebook is right. I think...