Intro to Comp Csc 111_Intensive 2

Introduction to Computer Science [CSC 111] Lecture Note [2010] Olowonisi Victor O.

INTRODUCTION TO COMPUTER SCIENCE

COURSE OUTLINE

1. Historical Development of Computers
2. Basic Computer Concepts (Definition and Components of the Computer System)
3. Types of Computers (Analogue, Digital and Hybrid)
4. Hardware Components
5. Software Components
6. Peripheral Devices

1.0 HISTORY OF COMPUTERS

1.1 INTRODUCTION
A computer is basically a processor of information. Its power and versatility come from its high speed of operation and from the fact that the information it holds can be processed in many different forms, including text, sound, video and computer-generated graphics.

Since civilization began, many of the advances made by science and technology have depended upon the ability to process large amounts of data and perform complex mathematical calculations. For thousands of years, mathematicians, scientists and businessmen have searched for computing machines that could perform calculations and analyze data quickly and efficiently. One such device was the abacus, invented around 500 BC. The abacus was an important counting machine in ancient Babylon, China, and throughout Europe, where it was used until the late Middle Ages.

In 1833, Prof. Charles Babbage, the father of the computer, developed a machine called the Analytical Engine, which was the basis for the modern digital computer. It was followed by a series of improvements in mechanical counting machines that led to the development of accurate mechanical adding machines in the 1930s. These machines used a complicated assortment of gears and levers to perform the calculations, but they were far too slow to be of much use to scientists. Also, a machine capable of making simple decisions, such as which of two numbers is larger, was needed. A machine capable of making decisions is called a computer.

The first computer-like machine was the Mark I, developed by a team from IBM and Harvard University. It used mechanical telephone relays to store information and processed data entered on punched cards. This machine was not a true computer since it could not make decisions.

In June 1943, work began on the world's first electronic computer. It was built at the University of Pennsylvania as a secret military project during World War II and was to be used to calculate the trajectory of artillery shells. It covered 1,500 square feet and weighed 30 tons. The project was not completed until 1946, but the effort was not wasted. In one of its first demonstrations, the computer solved in 20 seconds a problem that had taken a team of mathematicians three days. This machine was a vast improvement over the mechanical calculating machines of the past because it used vacuum tubes instead of relay switches. It contained over 17,000 of these tubes, the same type used in radios at that time. The invention of the transistor made smaller and less expensive computers possible. Although computers shrank in size, they were still huge by today's standards. Another innovation to computers in the 1960s was storing data on tape instead of punched cards. This gave computers the ability to store and retrieve data quickly and reliably.

1.2 GENERATIONS OF COMPUTERS
It is usual to associate each stage of computer development, commonly referred to as a generation, with one sort of technological innovation or another. Each generation usually makes possible certain things which were not possible earlier. The characteristics of the present-day computer have been arrived at through a process of development, most of which has occurred since the mid-1940s.

The electronic digital computers introduced in the 1950s used vacuum tubes. Following this, developments in electronic components helped the development of digital computers as well. The second-generation computers used transistors. The introduction of Integrated Circuits (ICs), also known as chips, opened the door for the development of third generation computers. A very large number of circuit elements (transistors, diodes, resistors, etc.) could be integrated into a very small (less than 5 mm square) surface of silicon, hence the name IC. The third generation computers used Small-Scale Integrated circuits (SSI), which contained about 10-20 components. When Large-Scale Integrated circuits (LSI), with around 30,000 components, were developed, the fourth generation computers were produced.

1.2.1 FIRST GENERATION COMPUTERS (1945-1957)
The characteristic technology of the first generation computers was the use of vacuum tubes as the basic building blocks for the logic parts of the computer. The technological base was therefore circuitry consisting of wires and thermionic valves. The valves were hollow tubes, not solid-state devices, through which the electrical pulses had to flow. These computers used magnetic drums and delay lines for their internal storage. Examples of first generation computers were the Mark I (1944), ENIAC (1946), EDVAC (1947), EDSAC (1949) and UNIVAC (1951).

1.2.2 SECOND GENERATION COMPUTERS (1958-1963)
The characteristic technology of this generation was the transistor, which revolutionized not only the computer industry but the whole of electronic engineering. Although the transistor itself was developed by a team led by William Shockley at Bell Labs in the late 1940s, it was not until the late 1950s that it began to replace the valves of the first generation. The components of this generation's computers (printed circuits, diodes and transistors) were based on solid-state technology, since electricity did not have to flow through space as in the thermionic valve. Transistors could do all that the thermionic valves could do at a much faster rate, consumed far less electrical power, and were physically smaller and cheaper. Another important technology of this age was the use of magnetic core storage instead of the magnetic drum of the first generation. The second generation also saw the advent of magnetic tape, which replaced, to some extent, punched card systems for input/output operations. Examples of computers of this age were the IBM 7090 and IBM 7094. Their applications included payroll and inventory processing.

1.2.3 THIRD GENERATION COMPUTERS (1964-1969)
A very important technological innovation of the 3rd generation was the Integrated Circuit (I.C.). This innovation involves the manufacture of complex electronic circuits on a small piece of silicon chip less than two millimetres long. The introduction of the I.C. made computers faster, cheaper and smaller than the circuits they replaced. The first set of I.C.s produced contained about ten or twenty interconnected transistors and diodes, giving three or four basic circuits on a single module. Modules of this type are referred to as Small Scale Integration (SSI). Later, the number of transistors per chip rose to a hundred or more, so that counters and storage registers could be fabricated on a single chip. Such modules are called Medium Scale Integration (MSI). Later still, Large Scale Integration (LSI) came onto the market, containing tens of thousands of transistors and diodes on one chip.

Another important feature of this generation was the replacement of the magnetic core and magnetic drum memories of the first and second generations by cheap Metal Oxide Semiconductor (MOS) memory, which provided fast memory access. Typical commercial computers of this age were the IBM S/360 - S/370 series (1964-1965), the Burroughs B5000, the CDC 6600 (1964), the CDC 7600 (1969) and the PDP-11 series. New applications were credit card billing, airline reservation systems and market forecasting.

1.2.4 FOURTH GENERATION COMPUTERS (1970-1990s)
One of the results of Large Scale Integration was the production of the microprocessor in the 4th generation. A microprocessor is the central processing unit of a microcomputer fabricated on a single small chip. We also began to see the development of Very Large Scale Integration (VLSI) during this generation. This generation witnessed the flooding of the market with a wide variety of software tools such as database management systems, word processing packages, spreadsheet packages, graphics packages and games packages. It also witnessed the enhancement of networking capabilities. Commercial machines of this generation were the IBM 3033 and Burroughs B7700 mainframes, the HP 3000 minicomputers and the Apple II microcomputers. New applications were mathematical modeling and simulation, electronic funds transfer, computer-aided instruction and home applications.

1.2.5 FIFTH GENERATION COMPUTERS (Now)
The characteristic technologies of this generation are VLSI, Parallel Processing and Artificial Intelligence (AI), whose main attraction over previous computers is speed and power. AI is a way of making computers appear to reason, as exemplified by the use of robots in factories. Applications now include financial planning and management, database management, word processing/office automation and desktop publishing. Examples are the VAX 6000 computers, the NCR Tower 386, the Macintosh (Mac II) and the IBM PC/AT 286.

2.0 BASIC COMPUTER CONCEPTS
To an uninformed mind, the mention of a computer gives the impression of a machine with a giant brain flashing different colours of light. Such people feel that these machine brains think for themselves and provide solutions to problems that no human being has ever solved. This feeling often leads to the question "Can a computer think?"; the answer to this is "No". The truth about the computer is that it is a combination of different electronic machines coupled together to form what is known as the Computer System. It stores and processes information, whether numeric or non-numeric (i.e. it works with both numbers and words). It can be given a list of instructions which, unlike human beings, it can always remember (unless you instruct it to forget the instructions). Although computer systems do the work of human beings at fantastically high speeds, a lot of thinking is done by the humans who feed them with information and program them to perform particular operations on the information they are given.

2.1 DEFINITION OF A COMPUTER
A computer is an electronic device capable of executing instructions, developed based on algorithms stored in its memory, to process data fed to it and produce the required results faster than human beings.

A computer is an electronic machine that:
- Accepts (reads) information or data;
- Stores the accepted data/information in memory until it is needed;
- Processes (manipulates) the information according to the instructions provided by the user; and finally
- Returns the results (intelligent reports) of the processed (manipulated) data to the user.

The computer can store and manipulate large quantities of data at very high speed, but a computer cannot think. A computer makes decisions based on simple comparisons such as one number being larger than another. Although the computer can help solve a tremendous variety of problems, it is simply a machine; it cannot solve problems on its own.

2.2 BASIC UNITS
A computer is designed using four basic units. They are:

1. INPUT UNIT: Computers need to receive data and instructions in order to solve any problem. Therefore we need to put the data and instructions into the computer. The input unit consists of one or more input devices. The keyboard and mouse of a computer are the most commonly used input devices.

2. CENTRAL PROCESSING UNIT (CPU): It is the main part of a computer system; it is the electronic brain of the computer. The CPU in a personal computer is usually a single chip. It organizes and carries out instructions that come from either the user or the software. It interprets the instructions in the program and executes them one by one. It consists of three major units.

a. CONTROL UNIT: It controls and directs the transfer of program instructions and data between the various units; in other words, it controls the electronic flow of information around the computer.

b. ARITHMETIC AND LOGIC UNIT (ALU): Arithmetic operations (+, -, *, ^, /), logical operations (AND, OR, NOT) and relational operations (<, >, <=, >=) are carried out in this unit (a short illustration follows this list). It is responsible for mathematical calculations and logical comparisons.

c. REGISTERS: They are used to store instructions and data for further use.

3. MEMORY UNIT: It is used to store programs and data.

4. OUTPUT UNIT: It is used to print/display the results, which are stored in the memory unit.

Secondary Storage Devices refer to floppy disks, magnetic disks, magnetic tapes, hard disks, compact disks, etc., which are used to store huge amounts of information for future use.
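The three kinds of ALU operations listed under 2.2 can be illustrated with a minimal Python sketch; the values chosen here are arbitrary and purely for illustration:

```python
# A minimal sketch of the three kinds of operations an ALU carries out.
a, b = 12, 5

# Arithmetic operations (+, -, *, /, and exponentiation for ^)
print(a + b, a - b, a * b, a / b, a ** b)

# Relational operations (<, >, <=, >=)
print(a < b, a > b, a <= b, a >= b)

# Logical operations (AND, OR, NOT)
x, y = True, False
print(x and y, x or y, not x)
```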

2.3 BLOCK DIAGRAM OF A COMPUTER
The block diagram of a computer is shown below:

[Block diagram not reproduced in this transcript.]

The components of a computer are connected by using buses. A bus is a collection of wires that carries electronic signals from one component to another. There are standard buses such as Industry Standard Architecture (ISA), Extended Industry Standard Architecture (EISA), Micro Channel Architecture (MCA), and so on. The standard bus permits the user to purchase components from different vendors and connect them together easily.

The processor is plugged into the computer’s motherboard. The motherboard is a rigid rectangular card containing the circuitry that connects the processor and all the other components that make up your personal computer. In most personal computers, some of the components are attached directly to the motherboard and some are housed on their own small circuit boards that plug into the expansion slots built into the motherboard. The various input and output devices have a standard way of connecting to the CPU and Memory. These are called interface standards. Some popular interface standards are the RS-232C and Small Computer System Interconnect (SCSI). The places where the standard interfaces are provided are called ports.

2.4 INPUT DEVICES
A computer would be useless without some way for you to interact with it, because the machine must be able to receive your instructions and deliver the results of those instructions to you. Input devices accept instructions and data from you, the user. Some popular input devices are listed below.

Examples of Input Devices:

TEXT INPUT DEVICES
Keyboard - The most common input device is the keyboard. It is used to input letters, numbers, and commands from the user: a device for entering text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.

POINTING DEVICES
Mouse - A small device held in the hand and pushed along a flat surface. It can move the cursor in any direction. A small ball is kept inside the mouse and touches the pad through a hole at the bottom. When the mouse is moved, the ball rolls, and this movement is converted into electronic signals and sent to the computer. The mouse is very popular on modern computers that use Windows and other Graphical User Interface (GUI) applications.
Trackball - A pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.

GAMING DEVICES
Joystick - A general control device consisting of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
Gamepad - A general game controller held in the hand that relies on the digits (especially the thumbs) to provide input.
Game controller - A specific type of controller specialized for certain gaming purposes.

IMAGE AND VIDEO INPUT DEVICES
Image scanner - A device that provides input by analyzing images, printed text, handwriting, or an object.
Webcam - A low-resolution video camera used to provide visual input that can easily be transferred over the Internet.

AUDIO INPUT DEVICES
Microphone - An acoustic sensor that provides input by converting sound into an electrical signal.

2.5 OUTPUT DEVICES

Examples of Output Devices:

AUDIO OUTPUT DEVICES
Speakers - A device that converts analog audio signals into the equivalent air vibrations in order to make audible sound.
Headset - A device similar in functionality to computer speakers, used mainly so as not to disturb others nearby.

VIDEO OUTPUT DEVICES
Monitor or Video Display Unit (VDU) - Monitors provide a visual display of data. A monitor looks like a television. Monitors are of different types and have different display capabilities.

Other output devices are given below:
Printer
Drum Plotter
Flat Bed Plotter
Microfilm and Microfiche
Graphic Display Device (Digitizing Tablet)
Speech Output Unit

3.0 CLASSIFICATION (TYPES) OF COMPUTERS
The need to gather, collate and process data and to retrieve the processed information easily, accurately and speedily for effective decision making has given rise to the manufacture of different types of computers.

3.1 CLASSIFICATION ACCORDING TO SIZE
The four major types of computers, based on features such as size, cost and simplicity, are:

1. Supercomputers
2. Mainframe Computers
3. Minicomputers
4. Microcomputers

1. Supercomputers, otherwise known as monsters, are the largest computers and are extremely fast. They require efficient cooling systems and only a few have been installed. They are used in research and forecasting.

2. Mainframe computers are very large, often filling an entire room. They can store enormous amounts of information, can perform many tasks at the same time, can communicate with many users at the same time (a multi-user environment), and are very expensive. The price of a mainframe computer frequently runs into millions of dollars. Mainframe computers usually have many terminals connected to them. These terminals look like small computers, but they are only devices used to send information to and receive information from the actual computer using wires. Terminals can be located in the same room with the mainframe computer, but they can also be in different rooms, buildings, or cities. Large businesses, government agencies, and universities usually use this type of computer. Examples of mainframe computers include the IBM 360, IBM 370, ICL 1900, ICL 2900 and DEC 10.

3. Minicomputers are similar to, but much smaller than, mainframe computers, and they are also much less expensive. The cost of these computers can vary. They possess most of the features found on mainframe computers, but on a more limited scale. They can still have many terminals, but not as many as the mainframes. They can store a tremendous amount of information, but again usually not as much as the mainframe. Data is usually input by means of a keyboard. Because they emit less heat, minicomputers can be adapted to a number of environments into which large mainframes cannot fit. Medium and small businesses typically use these computers. Examples are the Digital PDP-11 and VAX range, the Data General range, and the HP 1000 and 3000.

4. Microcomputers are the type of computer you use in your classes. These computers are usually divided into desktop models and laptop models. They are severely limited in what they can do when compared to the larger models discussed above: they can only be used by one person at a time, they are much slower than the larger computers, and they cannot store nearly as much information. However, they are excellent when used in small businesses, homes, and school classrooms. These computers are inexpensive and easy to use, and they have become an indispensable part of modern life. They can be broken into:

a. Desktops: they are sizeable and are placed on top of tables. They are found in offices and computer centers.

b. Laptops: they are portable and can be carried about. They can be placed on the lap while working.

c. Palmtops: they are held on the palm when used. They are pocket-sized.

3.2 CLASSIFICATION ACCORDING TO APPLICATION
Computers can still be further classified on the basis of application into digital, analogue and hybrid.

1. Digital Computers convert all input, whether numbers, alphanumeric characters or other symbols, into binary form (the form the computer understands). The input data is processed in binary form, but the processed information is converted back to decimal (the original input form). This is necessary because of the ease with which the input, the processing, and the interpretation of the output are then handled. Most of today's business applications use digital computers.

2. Analogue Computers do not hold data in discrete digital form. They measure physical quantities and give output in the form of electrical signals or calibrated moving parts. They work on continuous data, e.g. volume control. An analogue computer simulates the system in question by representing data proportionally in a physical quantity such as voltage. This means that an analogue computer holds data as quantities and volumes. The analogue machine, because it has only a limited memory facility and is restricted in the type of calculations it can perform, can only be used for certain specialized engineering and scientific applications. Analogue devices are used for time, volume, pressure or temperature measurement, weapon guidance, etc.

3. Hybrid Computer systems bridge the gap between digital and analogue computers. They combine the features of both digital and analogue computers. These features mean that a hybrid computer needs a conversion element which will accept analogue inputs and output digital values. The device responsible for the conversion is known as a digitizer. These computers can be employed in process control applications and in specialized engineering and scientific applications.

4.0 HARDWARE
The electronic circuits used in building the computer that executes the software are known as the hardware of the computer. For example, a TV bought from a shop is hardware; the various entertainment programs transmitted from the TV station are software. An important point to note is that hardware is a one-time expense and is necessary, whereas software is a continuing expense and is vital. For the purpose of this course, we will be studying both computer hardware and software.

Hardware is best described as a device that is physically connected to your computer, or something that can be physically touched. A CD-ROM, a monitor and a printer are all examples of computer hardware. Almost all computer hardware requires some type of software or driver to be installed before it can properly communicate with the computer. If these drivers contain problems or do not work properly with the computer, this can cause issues with the hardware device you are attempting to install. A device driver is therefore a kind of software installed on a computer system to enable the connected device to communicate properly and effectively with the computer. Drivers usually come on CD-ROMs or floppy diskettes.

5.0 SOFTWARE
A set of programs associated with the operation of a computer is called software. Software is a term used for all sorts of programs (sets of instructions) that activate the hardware, that is, that control the computer system and its operation.

Computer software may be classified into two broad categories:
1. System Software
2. Application Software

5.1 SYSTEM SOFTWARE
System software comes provided with each computer and is necessary for the computer's operation. This software acts as an interpreter between the computer and the user. It interprets your instructions into binary code and likewise interprets binary code into language the user can understand. In the past you may have used MS-DOS, or Microsoft Disk Operating System, which was a command-line interface. This form of system software required specific commands to be typed. Windows XP Professional is a more recent version of system software and is known as a graphical interface. This means that it uses graphics or "icons" to represent various operations. You no longer have to memorize commands; you simply point to an icon and click. System software is written by computer manufacturers to facilitate the optimal (maximum) use of the hardware systems, or to provide a suitable environment for writing, editing, debugging, testing and running users' programs. It is an essential (important) part of any computer system.

Examples of system software include:
a. Operating systems
b. Language translators
c. Service programs

a. Operating System
These are collections of programs acting as an interface between the user of the computer on one hand and the hardware on the other. An operating system provides the user with features that make it easier for him/her to code, test, execute, debug and maintain programs while efficiently managing the hardware resources. An operating system can be said to be a complex piece of software needed to harness (control) the power of a computer system and make it easier to use. Operating systems are programs written by the manufacturer to help the computer user control all the various devices.

Functions of the Operating System
i. Resource Sharing
ii. Input/Output Handling
iii. Memory Management
iv. Filing System
v. Protection and Error Handling
vi. Program Interaction
vii. Program Control
viii. Accounting of Computing Resources

b. Language Translators
These are programs that convert source code written in a high-level language such as BASIC, FORTRAN, COBOL or C into object code (machine language).

BASIC - Beginners All-Purpose Symbolic Instruction Code
FORTRAN - FORmula TRANslator
COBOL - Common Business Oriented Language

At the initial stage of computer development, programs were written directly in machine language, but that is not the case now. There is therefore the need to translate programs written in these other languages, such as BASIC or COBOL, into machine language. The initial program written in a language different from machine language is called the Source Program (source code) and its equivalent in machine language is called the Object Program (object code). Examples of language translators are compilers, assemblers, interpreters and loaders.

i. Compilers: this is a translator (computer program) that accepts a source program in a high-level language, then reads and translates the entire user's program into an equivalent program in machine language. For each high-level language there is a different compiler, e.g. a COBOL compiler or a FORTRAN compiler. A compiler also detects errors that arise from the use of the language. Compilers are 'portable', i.e. a COBOL compiler on one machine is similar to that on another, with minimal changes.

ii. Assemblers: this is a computer program that accepts a source program in assembly language and produces an equivalent machine language program. Each machine has its own assembly language; as a result, assembly language is not 'portable', i.e. the assembly language of one machine cannot run on another machine.

iii. Interpreters: this is a translator that accepts a high-level source program and reads, translates and executes it one line at a time. It also reports errors (if any) at the end of every line. An example of this is the BASIC interpreter.

iv. Loader: it is a system program used to load the machine language program into the memory of the computer.

c. Service Programs
They are specialized programs that perform routine functions and are always available to any computer user. They perform the following operations:

i. File Conversion
ii. File Copy
iii. File Reorganization
iv. File Maintenance
v. File Sorting
vi. Dumping Routines
vii. Housekeeping Operations
viii. Tracing Routines
ix. Library Programs
x. Editing

5.2 APPLICATION SOFTWARE
This is the set of programs necessary to carry out operations for a specified application. They are written by the computer programmer in order to perform specific jobs for the computer user. For many applications, it is necessary to produce sets of programs which are used in conjunction with various service programs. Technology has come a long way in the last hundred years, but the ideas behind using it are still much the same: we should make use of computers and associated software if it saves time (and hence money) or is able to do something which would be quite impossible by manual means. This includes working with great precision and speed, doing highly complex calculations, or doing boring, repetitive or dangerous tasks that are unacceptable if carried out by humans.

Application Software is classified into two areas:
a. Application packages
b. User application programs

a. Application Packages
These come as a complete set or suite of programs with documentation covering a business routine. They are supplied by a software house or a manufacturer, either on lease or by purchase from dealers in computer hardware. Application software is any software used for a specified application such as word processing, spreadsheets, databases, presentation graphics, communication, tutorials, entertainment and games.

Examples are:
i. Accounting Packages e.g. Sage Accountant, Peachtree Accounting, etc.
ii. Word Processing Packages e.g. WordStar, WordPerfect, Microsoft Word, Notepad
iii. Spreadsheet Packages e.g. Lotus 1-2-3, Microsoft Excel
iv. Utilities e.g. Norton Utilities
v. Integrated Packages e.g. Microsoft Works
vi. Graphics Packages e.g. PM (Print Master) and Harvard Graphics
vii. Desktop Publishing Packages e.g. Ventura Publisher, PageMaker
viii. Games Packages e.g. Chess, Scrabble
ix. Web Design Packages e.g. Macromedia Dreamweaver MX, Microsoft FrontPage

b. User Application Programs
This is a suite of programs written by a programmer or computer user, required for the operation of their individual business or tasks, or customized for corporate companies, institutions and government agencies such as JAMB, WAEC, etc.

User application programs are used to do a lot of things; a few are mentioned below:
- To solve a set of equations
- To process examination results
- To prepare a pay bill (payroll) for an organization
- To prepare an electricity bill for each month

6.0 PERIPHERAL DEVICES
In computer hardware, a peripheral device is any device attached to a computer in order to expand its functionality. Some of the more common peripheral devices are printers, scanners, disk drives, tape drives, microphones, speakers, and cameras. A device can also refer to a non-physical item, such as a pseudo-terminal, a RAM drive, or a network adapter.

Before the advent of the personal computer, any connected device added to the three base components, namely the motherboard, CPU and working memory (RAM, ROM, or core), was considered to be a peripheral device. The personal computer has expanded the sense of what devices are needed on a base system, and keyboards, monitors, and mice are no longer generally considered to be peripheral devices. More specifically, the term is used to describe those devices that are optional in nature, as opposed to hardware that is always required in principle. The term also tends to be applied to devices that are hooked up externally, typically through some form of computer bus like USB. Typical examples include joysticks, printers and scanners. Devices such as monitors and disk drives are not considered peripherals when they are not truly optional.

Some people do not consider internal devices such as video capture cards to be peripherals because they are added inside the computer case; for them, the term peripheral is reserved exclusively for devices that are hooked up externally to the computer. It is debatable, however, whether PCMCIA cards qualify as peripherals under this restrictive definition, because some of them go fully inside the laptop, while some, like WiFi cards, have external appendages.

USES OF COMPUTERS
The computer is used to assist man in business organizations, in research and in many aspects of life. Computers affect our daily lives more and more, and hopefully can be used to improve the quality of our lives by releasing us from dull, repetitive tasks and allowing us to expand our minds.

AREAS OF APPLICATION
Scientific Research - Medicine, Space Technology, Weather Forecast
Business Applications - Payroll, Office Automation, Stock Control and Sales, Banking
Industrial Applications - Quality Control, Oil Refineries
Communication - Transportation, Libraries
Education - Administrative and Guidance, Computer Assisted Instruction (CAI), Computer Managed Instruction (CMI)

ADVANTAGES, DISADVANTAGES & LIMITATIONS OF COMPUTERS

Advantages of Computers
Computers can facilitate self-paced learning. In the CAI mode, for example, computers individualize learning, while giving immediate reinforcement and feedback.
Computers are a multimedia tool. With integrated graphic, print, audio, and video capabilities, computers can effectively link various technologies. Interactive video and CD-ROM technologies can be incorporated into computer-based instructional units, lessons, and learning environments.
Computers are interactive. Microcomputer systems incorporating various software packages are extremely flexible and maximize learner control.
Computer technology is rapidly advancing. Innovations are constantly emerging, while related costs drop. By understanding their present needs and future technical requirements, the cost-conscious educator can effectively navigate the volatile computer hardware and software market.
Computers increase access. Local, regional, and national networks link resources and individuals, wherever they might be. In fact, many institutions now offer complete undergraduate and graduate programs relying almost exclusively on computer-based resources.

Computers can also be used for a variety of everyday things. For example, you can:
Write a letter and keep in touch - Use a word processing package (MS Word) to type it up. If you are sending it abroad, it can be cheaper and easier to send it by e-mail. Scanned photographs can be sent either in a letter or attached to an e-mail.
Create your own CV - Use the word processing package to type it up. If you save it to disk, you can make changes as and when you need to.
Plan your holiday - Use the Internet to search for flights, accommodation or up-to-date information about your destination.
Do accounts - Using a spreadsheet (e.g. MS Excel) can help you do calculations and keep up-to-date accounts.
Go shopping - The Internet offers you lots of chances to plan your purchases of everything from books to weddings! However, please be careful when revealing your credit card details; only do so if you are certain you have a secure connection.
Research your family tree - Use Internet resources to locate information, get tips and advice from the Council's Intranet, keep your records with a database (MS Access) and write up your findings (MS Word).
Learn a new skill - There are many learning opportunities available to you, including workbooks, CD-ROMs and online courses.
Do your homework/college assignments - You can type up your essays and reports, carry out research from the Internet or CD-ROM, and add images and graphics.

Disadvantages of Computers
- Discouraging for people with fewer technological advantages.
- Dependence on Internet availability and connection tariffs.
- The speed of technological advance outpaces users' ability to keep up.
- Technical difficulties require more acquired knowledge.
- Centres on one specialization at a time; learning a lot in a particular study field is not necessarily useful.
- Non-availability of on-line reference material.
- Many institutions do not recognize a specific qualification.
- English knowledge must be satisfactory.
- Overestimation of the time available.
- Learners experience frustration going through all their mail messages.

Limitations of Computers
- Computer networks are costly to develop.
- Technology is changing rapidly.
- Widespread computer illiteracy still exists.
- Students must be highly motivated and proficient in computer operation.

COMPUTER MEMORY

Main Memory
A flip-flop made of electronic semiconductor devices is used to fabricate a memory cell. These memory cells are organized as a Random Access Memory (RAM). Each cell has the capability to store one bit of information. The main memory or store of a computer is organized using a large number of such cells, each storing a binary digit. A memory cell which does not lose the bit stored in it when no power is supplied is known as a non-volatile cell.

A word is a group of bits which are stored and retrieved as a unit. A memory system is organized to store a number of words. A byte consists of 8 bits, and a word may store one or more bytes. The storage capacity of a memory is the number of bytes it can store. The address of the location from which a word is to be retrieved, or in which it is to be stored, is entered in a Memory Address Register (MAR). The data retrieved from memory, or to be stored in memory, are placed in a Memory Data Register (MDR). The time taken to write a word is known as the write time, and the time to retrieve information is called the access time of the memory. The time taken to access a word in main memory is independent of the address of the word, and hence it is known as a Random Access Memory (RAM). The main memory used to store programs and data in a computer is a RAM. A RAM may be fabricated with permanently stored information which cannot be erased; such a memory is called a Read Only Memory (ROM). For more specialized uses, a user can store his own special functions or programs in a ROM; such ROMs are called Programmable ROMs (PROMs). A serial access memory is organized by arranging memory cells in a linear sequence. Information is retrieved from or stored in such a memory using a read/write head; data is presented serially for writing and is retrieved serially during reading.

Secondary or Auxiliary Storage Devices
Magnetic surface recording devices commonly used in computers are hard disks, floppy disks, CD-ROMs and magnetic tapes. These devices are known as secondary or auxiliary storage devices. We will look at some of these devices below.

Floppy Disk Drive (FDD)
In this device, the medium used to record the data is called a floppy disk. It is a flexible circular disk of diameter 3.5 inches, made of plastic coated with a magnetic material and housed in a square plastic jacket. Each floppy disk can store approximately one million characters. Data recorded on a floppy disk is read and stored in a computer's memory by a device called a floppy disk drive (FDD). A floppy disk is inserted in a slot of the FDD and the disk is rotated, normally at 300 revolutions per minute. A reading head is positioned touching a track. A voltage is induced in a coil wound on the head when a magnetized spot moves below the head; the polarity of the induced voltage indicates whether a 1 or a 0 is read. The voltage sensed by the head coil is amplified, converted to an appropriate signal and stored in the computer's memory. Floppy disks come in various capacities, as mentioned below:
5¼-inch drive: 360 KB, 1.2 MB (1 KB = 2^10 = 1024 bytes)
3½-inch drive: 1.44 MB, 2.88 MB (1 MB = 2^20 bytes)

Compact Disk Drive (CDD)
A CD-ROM (Compact Disk Read Only Memory) uses a laser beam to record and read data along spiral tracks on a 5¼-inch disk. A disk can store around 650 MB of information.

CD-ROMs are normally used to store massive text data (such as encyclopedias) which is permanently recorded and read many times. Recently, CD writers have come onto the market; using a CD writer, a lot of information can be written on a CD-ROM and stored for future reference.

Hard Disk Drive (HDD)
Unlike a floppy disk, which is flexible and removable, the hard disk used in the PC is permanently fixed. The hard disk used in a higher-end PC can have a maximum storage capacity of 17 GB (gigabytes; 1 GB = 1024 MB = 2^30 bytes). Nowadays, hard disk capacities of 540 MB, 1 GB, 2 GB, 4 GB and 8 GB are quite common. The data transfer rate between the CPU and the hard disk is much higher than that between the CPU and the floppy disk drive. The CPU can use the hard disk to load programs and data as well as to store data. The hard disk is a very important Input/Output (I/O) device. The hard disk drive doesn't require any special care other than the requirement that one should operate the PC in a dust-free and cool room (preferably air-conditioned).

In summary, a computer system is organized with a balanced configuration of different types of memories. The main memory (RAM) is used to store the program currently being executed by the computer. Disks are used to store large data files and program files. Tapes are serial access memories and are used to back up the files from the disk. CD-ROMs are used to store user manuals, large text, audio and video data.

Central Processing Unit
A processing unit in a computer interprets instructions in a program and carries them out. An instruction, in general, consists of a part which specifies the operation to be performed and other parts which specify the addresses of the operands. In a processor, a string of bits is used to code the operations: to code n operations in binary, we need x bits such that 2^x = n. For example, to code 16 operations we need 4 bits, since 2^4 = 16. An instruction consisting of an operation code and operand address or addresses, designed for a specific computer, is known as a machine language instruction of that computer. Machine language instructions for input/output, data movement, arithmetic, logic and controlling the sequence of operations are available in all computers. A computer's processor has storage registers to store operands and results. It also has a register to store the instruction being executed, called the "Instruction Register", and a register which stores the address of the next instruction to be executed, called the "Program Counter Register". A sequence of machine language instructions to solve a problem is known as a machine language program. A computer executes a machine language program in two phases. In the first phase, it reads and stores the program in its memory. After storing the program, it initiates program execution; in this phase, instructions are retrieved from memory one after another, decoded and executed.
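As a rough illustration of the fetch-decode-execute cycle described above, here is a minimal Python sketch of a toy processor with a program counter, an instruction register and an accumulator; the four-instruction set (LOAD, ADD, PRINT, HALT) is invented purely for illustration and is not any real machine language.

```python
# Toy machine: each instruction is (operation code, operand).
program = [
    ("LOAD", 7),      # put 7 into the accumulator
    ("ADD", 5),       # add 5 to the accumulator
    ("PRINT", None),  # output the accumulator
    ("HALT", None),   # stop execution
]

accumulator = 0
program_counter = 0                                   # address of the next instruction

while True:
    instruction_register = program[program_counter]   # fetch
    program_counter += 1
    opcode, operand = instruction_register            # decode
    if opcode == "LOAD":                              # execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)                            # prints 12
    elif opcode == "HALT":
        break
```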

DATA REPRESENTATION

Introduction
In any modern numbering system, numbers are represented by unique patterns of symbols; the individual symbols are usually called digits. The common system is the decimal system, with ten different symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. In a computer, however, the pattern of symbols which represents the numbers is created by some physical condition (e.g. a transistor or valve passing current) which is either in one state (passing current) or in the only other possible state (not passing current).

Bits and Bytes
Digital computers therefore use a binary system of number representation, with only two different digit symbols, usually represented by 0 and 1. These are called "BInary digiTS", or "bits" for short. The reason computers use the base-2 system is that it makes them a lot easier to implement with current electronic technology. You could wire up and build computers that operate in base 10, but they would be fiendishly expensive right now. On the other hand, base-2 computers are relatively cheap.

Bits are rarely seen alone in computers; they are bundled together into 8-bit collections. A set of 8 bits is called a byte, and each byte stores one character.

ASCII (American Standard Code for Information Interchange) codes are used to represent each character. The ASCII code includes codes for the English letters (both capital and small), the decimal digits, 32 special characters, and a number of non-printable symbols used to control the operation of a computer.
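A quick way to see these character codes from a program is Python's built-in ord() and chr() functions; a small sketch:

```python
# Each character is stored as a numeric code; ord() gives the code, chr() reverses it.
for ch in ("A", "a", "0", " "):
    print(ch, ord(ch))     # 'A' -> 65, 'a' -> 97, '0' -> 48, space -> 32
print(chr(66), chr(98))    # codes back to characters: B b
```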

NUMERATION
Man has over the years developed symbols to represent numbers. A group of symbols that can be used according to some rules to express numbers is called a numeration system. The symbols of a numeration system are called numerals. The number of unique digits in a numeration system is called the number base, or base, of the numeration system. Hindu-Arabic notation uses base ten, probably because humans have ten fingers; thus ten and its powers are the basic numbers of the system. In this topic, we shall examine four number bases used very frequently in computing. They are:
Decimal System
Binary System
Octal System
Hexadecimal System

Decimal Numbers

The easiest way to understand bits is to compare them to something you know: digits. A digit is a single place that can hold numerical values between 0 and 9. Digits are normally combined together in groups to create larger numbers. The decimal system is a number system to the base ten, where counting is done in groups of ten. For example, 6,357 has four digits. It is understood that in the number 6,357, the 7 is filling the 1s place, the 5 is filling the 10s place, the 3 is filling the 100s place and the 6 is filling the 1,000s place. So, to be explicit, you could express it as:
(6*1000) + (3*100) + (5*10) + (7*1) = 6000 + 300 + 50 + 7 = 6357

It can be expressed using the powers of ten as:
(6*10^3) + (3*10^2) + (5*10^1) + (7*10^0) = 6000 + 300 + 50 + 7 = 6357

Binary Numbers
The binary system (also called base two) has just two states, usually called 'on' and 'off', or '1' and '0'. The reason why this system is so important is that it is the simplest system to implement in practice using the electronic technology available today. It is easy to detect very quickly whether a circuit is switched on or off; it would be a much more difficult task to detect levels in between these two extremes. Hence binary is ideal for use in modern electronic digital computers.

Binary numbers are formed using positional notation, with powers of 2 used as weights in the binary number system. The binary number 10111 has a decimal value equal to 1*2^4 + 0*2^3 + 1*2^2 + 1*2^1 + 1*2^0 = 23.

To understand the binary system, it is useful to think more carefully about a system with which most people will be more familiar, i.e. the decimal system or base ten, also known as denary. When counting in base ten, the symbols that are used are 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9, i.e. ten different symbols. Decimal numbers are arranged into units, tens, hundreds, thousands, etc.:

Th  H  T  U
 2  0  6  7

The above represents two thousands, no hundreds, six tens and seven units, making two thousand and sixty-seven. In base ten, each column heading from right to left is obtained by multiplying the previous column by ten. Likewise, in the binary system there are just two symbols, 0 and 1, so any number must be represented using 0s and 1s only. This time the column headings (from right to left) are 1s, 2s, 4s, 8s, 16s, etc.; to obtain the next column heading, the number is multiplied by two, i.e. 1x2 = 2s, 2x2 = 4s, 4x2 = 8s, etc. Therefore, in the binary system, the number:

16  8  4  2  1
 1  0  1  1  1

would represent one lot of 16, no lots of 8, one lot of 4, one lot of 2 and one unit of 1.
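The column-heading idea above can be checked with a short Python sketch that evaluates a numeral digit by digit against its place values; the helper name value() is made up for illustration:

```python
# Evaluate a numeral digit by digit using positional weights.
def value(digits, base):
    total = 0
    for d in digits:              # leftmost digit first
        total = total * base + int(d)
    return total

print(value("2067", 10))  # 2067: two thousands, no hundreds, six tens, seven units
print(value("10111", 2))  # 23: one 16, no 8s, one 4, one 2, one 1
```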

In order to avoid confusion about the base being used, a useful subscript notation was developed. Therefore 101₂ means 101 in base two and 101₁₆ means 101 in base sixteen. If no subscript is used, then it is usual to assume that base ten is the base being used.

Decimal number   5-bit binary (16 8 4 2 1)   Decimal number   5-bit binary (16 8 4 2 1)
      0              0 0 0 0 0                    16               1 0 0 0 0
      1              0 0 0 0 1                    17               1 0 0 0 1
      2              0 0 0 1 0                    18               1 0 0 1 0
      3              0 0 0 1 1                    19               1 0 0 1 1
      4              0 0 1 0 0                    20               1 0 1 0 0
      5              0 0 1 0 1                    21               1 0 1 0 1
      6              0 0 1 1 0                    22               1 0 1 1 0
      7              0 0 1 1 1                    23               1 0 1 1 1
      8              0 1 0 0 0                    24               1 1 0 0 0
      9              0 1 0 0 1                    25               1 1 0 0 1
     10              0 1 0 1 0                    26               1 1 0 1 0
     11              0 1 0 1 1                    27               1 1 0 1 1
     12              0 1 1 0 0                    28               1 1 1 0 0
     13              0 1 1 0 1                    29               1 1 1 0 1
     14              0 1 1 1 0                    30               1 1 1 1 0
     15              0 1 1 1 1                    31               1 1 1 1 1

To count in binary, we simply start with 00000, using the required number of digits. The units column changes 0, 1, 0, 1, etc.; the twos column has two 0s followed by two 1s, then two 0s, etc.; the fours column has four 0s followed by four 1s, then four 0s, etc.; and the eights column has eight 0s followed by eight 1s, then eight 0s, etc. For large numbers the method above would be cumbersome; therefore, mathematical methods can be used to convert decimal numbers to binary.

A decimal number is converted into an equivalent binary number by repeatedly dividing the number by 2 and recording the remainders, the first remainder being the least significant bit of the binary number. For example, consider the decimal number 23; its equivalent binary number is obtained as shown below.

CONVERSION OF DECIMAL TO BINARY (using repeated division)
EXAMPLE: 183₁₀ = 10110111₂ and 23₁₀ = 10111₂

2 | 183
2 |  91   remainder 1
2 |  45   remainder 1
2 |  22   remainder 1
2 |  11   remainder 0
2 |   5   remainder 1
2 |   2   remainder 1
2 |   1   remainder 0
      0   remainder 1

Reading the remainders from bottom to top gives 183₁₀ = 10110111₂.

TO CONVERT BACK TO DECIMAL: 10110111₂

Place value:  2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
            = 128   64   32   16    8    4    2    1
Digit:          1    0    1    1    0    1    1    1

(1*2^7) + (0*2^6) + (1*2^5) + (1*2^4) + (0*2^3) + (1*2^2) + (1*2^1) + (1*2^0)
= (1*128) + (0*64) + (1*32) + (1*16) + (0*8) + (1*4) + (1*2) + (1*1) = 183₁₀
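The repeated-division method translates directly into a short Python sketch; the built-in int(x, 2) is used only as an independent check of the worked example:

```python
# Decimal -> binary by repeated division by 2, collecting the remainders.
def to_binary(n):
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # each remainder becomes the next bit, bottom-up
        n //= 2
    return bits

print(to_binary(183))   # 10110111
print(to_binary(23))    # 10111

# Binary -> decimal using the place values 2^7, 2^6, ..., 2^0.
print(int("10110111", 2))   # 183, confirming the worked example
```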

ADDITION AND SUBTRACTION OF BINARY NUMBERS
It is useful to be able to add and subtract binary numbers as handled by the computer. The rules for addition are:
1 + 0 = 1
0 + 1 = 1
0 + 0 = 0
1 + 1 = 0 with a carry of 1 to the next position (the 2s), i.e. 1 0.

Examples of addition:

    1 1 0 1₂        1 0 0 1₂        1 0 1 0₂
  + 1 0 1 0₂      +   1 1 1₂      + 1 1 0 0₂
  ----------      ----------      ----------
  1 0 1 1 1₂      1 0 0 0 0₂      1 0 1 1 0₂

Example of subtraction:

    1 0 0 1 1₂
  -   1 1 1 1₂
  ------------
    0 0 1 0 0₂

Therefore 10011₂ - 1111₂ = 100₂.
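The carry rules above can be mechanized in a few lines of Python; int(x, 2) and bin() are used only to check the results:

```python
# Add two binary strings column by column:
# 0+0=0, 1+0=0+1=1, 1+1=0 with a carry of 1 to the next position.
def add_binary(a, b):
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    result, carry = "", 0
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        result = str(s % 2) + result          # digit written under the column
        carry = s // 2                        # 1 carried to the next position
    return "1" + result if carry else result

print(add_binary("1101", "1010"))   # 10111
print(add_binary("1001", "111"))    # 10000
print(bin(int("10011", 2) - int("1111", 2)))   # 0b100, i.e. 10011 - 1111 = 100
```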

OCTAL NUMBERS
If the principles of the last few sections have been well understood, then it is relatively easy to extend the concepts and basic rules to any other number base. However, in computing only two other number bases are of any importance: octal (base eight) and, more importantly, hexadecimal (base sixteen). In octal we have eight different characters: 0, 1, 2, 3, 4, 5, 6 and 7. The column headings in octal are units, 8s, 64s, 512s, etc. We have seen that a computer uses binary numbers rather than decimal numbers to perform arithmetic operations; however, pure binary representation has one disadvantage: more space is used in storing data than in other numbering systems. To conserve storage, some computers group binary digits into bunches of three. Such grouping is possible with the octal numbering system (base 8), where a single octal digit is used to represent three binary digits.

CONVERTING OCTAL TO DECIMAL
To find the decimal equivalent of any octal number, we express the number in expanded notation and add the results.
Example:
a. 237₈ = (2*8^2) + (3*8^1) + (7*8^0)
        = (2*64) + (3*8) + (7*1)
        = 128 + 24 + 7 = 159₁₀

b. 1234₈ = (1*8^3) + (2*8^2) + (3*8^1) + (4*8^0)
         = (1*512) + (2*64) + (3*8) + (4*1)
         = 512 + 128 + 24 + 4 = 668₁₀

CONVERTING DECIMAL TO OCTAL (using the remainder method)
To obtain the octal equivalent of a decimal number, we again use the remainder method, dividing by the base 8.
Example:

a. Convert 159₁₀ into base 8.

8 | 159
8 |  19   remainder 7
8 |   2   remainder 3
      0   remainder 2

Reading the remainders from bottom to top: 159₁₀ = 237₈

b. Convert 668₁₀ into base 8.

8 | 668
8 |  83   remainder 4
8 |  10   remainder 3
8 |   1   remainder 2
      0   remainder 1

Reading the remainders from bottom to top: 668₁₀ = 1234₈
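The same remainder method works for any base up to 16; a minimal Python sketch (the helper name to_base is made up for illustration):

```python
DIGITS = "0123456789ABCDEF"

# Convert a decimal integer to any base from 2 to 16 by repeated division.
def to_base(n, base):
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        digits = DIGITS[n % base] + digits   # remainder = next digit, bottom-up
        n //= base
    return digits

print(to_base(159, 8))   # 237
print(to_base(668, 8))   # 1234
print(int("237", 8))     # 159, converting back by expanded notation
```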

CONVERTING BINARY TO OCTAL
As mentioned above, octal digits can be represented as groups of three binary digits or bits. In other words, any three binary digits can be represented by a single octal digit. These are used as special codes for the digits of octal numbers (base 8), called three-bit equivalent forms, as shown below:

Digit of base 8:              0    1    2    3    4    5    6    7
Equivalent number in base 2:  000  001  010  011  100  101  110  111

Converting binary to octal requires the subdivision of the binary number into groups of three.

Example (Binary to Octal)
a) 101011011₂        b) 11000110111₂

Solution
a)  101 011 011      b)  011 000 110 111
      5   3   3            3   0   6   7

Therefore 101011011₂ = 533₈ and 11000110111₂ = 3067₈

CONVERTING OCTAL TO BINARY
In like manner, converting an octal number to its binary equivalent requires representing each octal digit by three binary digits.

Example (Octal to Binary)
a)    1   5   2      b)    4   7   3   2
     001 101 010          100 111 011 010

Therefore 152₈ = 001101010₂ and 4732₈ = 100111011010₂

Page 27: Intro to Comp Csc 111_Intensive 2

Introduction to Computer Science [CSC 111] Lecture Note [2010] Olowonisi Victor O.

To add two octal numbers, we proceed as we do in the decimal system, keeping in mind that the highest digit is 7: when a column sum reaches eight (for example 7 + 1 = 8, which is octal 10), the 0 part is written down under that column and the 1 is carried to the next column on the left. In subtraction, a borrow taken from the column on the left is worth eight, which is added to the digit being subtracted from. For instance, in the third example below, 0 - 1 needs a borrow from the 7 on its left; the 0 then becomes 0 + 8 = 8, and the subtraction can be completed.

Examples:

    234₈        343₈        705₈        236₈
  +  17₈     +  745₈     -  513₈     -  127₈
  ------     -------     -------     -------
    253₈       1310₈        172₈        107₈
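These octal sums can be verified in the same way as the binary ones: parse in base 8, use ordinary arithmetic, and format the answer back in base 8. Again this is a sketch of our own, not part of the note.

    # Verify the octal sums above with ordinary integer arithmetic.
    def add_octal(a, b):
        return format(int(a, 8) + int(b, 8), "o")     # "o" formats an integer in base 8

    def subtract_octal(a, b):
        return format(int(a, 8) - int(b, 8), "o")

    print(add_octal("234", "17"))         # 253
    print(add_octal("343", "745"))        # 1310
    print(subtract_octal("705", "513"))   # 172
    print(subtract_octal("236", "127"))   # 107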

HEXADECIMAL NUMBERS
As seen above with the octal numbering system, the computer saves some writing space by grouping three binary digits together to produce a single digit. A further step is taken in some computers: four binary digits are grouped to produce a digit in the base 16, or hexadecimal, numbering system. In other words, this number base comes into its own when it is necessary to deal with large groups of binary digits. The base of the hexadecimal system is 16 and the symbols used in this system are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. Strings of 4 bits have an equivalent hexadecimal value (four-bit equivalent forms).

Decimal:      0   1   2   3   4   5   6   7   8   9   10   11   12   13   14   15
Hexadecimal:  0   1   2   3   4   5   6   7   8   9   A    B    C    D    E    F

For example, 6B is represented by 0110 1011 or 110 1011, 3E1 is represented by 0011 1110 0001 or 11 1110 0001 and 5DBE34 is represented by 101 1101 1011 1110 0011 0100. Decimal fractions can also be converted to binary fractions.

We use the method as previously discussed to convert from any number system to the decimal system; multiply each digit by its place value and then obtain the total. Care should be taken when using hexadecimal digits A to F.

CONVERSION OF HEXADECIMAL NUMBERS TO DECIMAL
a. C3BD₁₆ = (C * 16³) + (3 * 16²) + (B * 16¹) + (D * 16⁰)
          = (12 * 4096) + (3 * 256) + (11 * 16) + (13 * 1)
          = 50,109₁₀

b. F6A₁₆ = (F * 16²) + (6 * 16¹) + (A * 16⁰)
         = (F * 256) + (6 * 16) + (A * 1)
         = (15 * 256) + (6 * 16) + (10 * 1)
         = 3,946₁₀


CONVERSION OF DECIMAL NUMBERS TO HEXADECIMAL (using the remainder method)

a. 249₁₀ = F9₁₆

   16 | 249
   16 |  15   remainder 9
          0   remainder F

b. 1583₁₀ = 62F₁₆

   16 | 1583
   16 |   98   remainder F
   16 |    6   remainder 2
           0   remainder 6

In each case the remainders are read upwards.
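Both directions can be checked with a few lines of Python. This is a sketch for illustration only; hex_to_dec, dec_to_hex and HEX_DIGITS are names we chose, not part of the note.

    # Hexadecimal <-> decimal: expanded notation one way, remainders the other.
    HEX_DIGITS = "0123456789ABCDEF"

    def hex_to_dec(h):
        total = 0
        for ch in h.upper():
            total = total * 16 + HEX_DIGITS.index(ch)   # shift one place left, then add the digit
        return total

    def dec_to_hex(n):
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(HEX_DIGITS[n % 16])           # the remainder is the next hexadecimal digit
            n //= 16
        return "".join(reversed(digits))

    print(hex_to_dec("C3BD"))   # 50109
    print(hex_to_dec("F6A"))    # 3946
    print(dec_to_hex(249))      # F9
    print(dec_to_hex(1583))     # 62F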

CONVERSION OF BINARY TO HEXADECIMAL
Any four-bit binary group can be converted to a hexadecimal digit using the special codes (four-bit equivalent forms). The binary number is divided into groups of four digits, starting from the right, and each group is replaced by a single hexadecimal digit.

a. 1111001101110110₂                   b. 1101101010₂

   1111   0011   0111   0110              0011   0110   1010
    F      3      7      6                 3      6      A

Therefore 1111001101110110₂ = F376₁₆ and 1101101010₂ = 36A₁₆.

CONVERSION OF HEXADECIMAL TO BINARY
Conversely, each hexadecimal digit is replaced by its four-bit equivalent form.

a. BCD₁₆ = 1011 1100 1101₂             b. A57F₁₆ = 1010 0101 0111 1111₂

    B      C      D                        A      5      7      F
   1011   1100   1101                     1010   0101   0111   1111
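The four-bit grouping works just like the three-bit grouping used for octal. The minimal sketch below is our own illustration (bin_to_hex and hex_to_bin are assumed names).

    # Binary <-> hexadecimal using four-bit groups.
    def bin_to_hex(bits):
        bits = bits.zfill((len(bits) + 3) // 4 * 4)                       # pad on the left to a multiple of 4
        groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
        return "".join(format(int(group, 2), "X") for group in groups)    # each group becomes one hex digit

    def hex_to_bin(h):
        return "".join(format(int(digit, 16), "04b") for digit in h)      # each hex digit becomes 4 bits

    print(bin_to_hex("1111001101110110"))  # F376
    print(bin_to_hex("1101101010"))        # 36A
    print(hex_to_bin("A57F"))              # 1010010101111111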

ADDITION AND SUBTRACTION OF HEXADECIMAL NUMBERS
Hexadecimal arithmetic operations are similar to those of the other number systems. Carrying a digit to the next position is done in exactly the same manner as in the decimal system, except that a column sum of 16 produces a carry of 1 (10₁₆ = 16₁₀); in subtraction, each borrow is worth 16.

Examples:

    BAC₁₆        DBA₁₆        A15₁₆        1B6₁₆
  + 441₁₆      + 627₁₆      - 523₁₆      - 127₁₆
  -------      -------      -------      -------
    FED₁₆       13E1₁₆        4F2₁₆         8F₁₆

Hint (for BAC₁₆ + 441₁₆): C + 1 = 12 + 1 = 13 (D); A + 4 = 10 + 4 = 14 (E); B + 4 = 11 + 4 = 15 (F).
Hint (for A15₁₆ - 523₁₆): 5 - 3 = 2; 1 - 2 needs a borrow, so 16 + 1 = 17 and 17 - 2 = 15 (F); after the borrow, A - 1 = 10 - 1 = 9 and 9 - 5 = 4.
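As with octal, the hexadecimal examples can be checked mechanically; the sketch below is our own illustration, not part of the note.

    # Verify the hexadecimal sums above.
    def add_hex(a, b):
        return format(int(a, 16) + int(b, 16), "X")   # "X" formats an integer in uppercase hexadecimal

    def subtract_hex(a, b):
        return format(int(a, 16) - int(b, 16), "X")

    print(add_hex("BAC", "441"))        # FED
    print(add_hex("DBA", "627"))        # 13E1
    print(subtract_hex("A15", "523"))   # 4F2
    print(subtract_hex("1B6", "127"))   # 8F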

ASCII CHARACTER ENCODING
The name ASCII is an acronym for American Standard Code for Information Interchange. It is a character encoding standard developed several decades ago to provide a standard way for digital machines to encode characters. The ASCII code provides a mechanism for encoding alphabetic characters, numeric digits, and punctuation marks for use in representing text and numbers written using the Roman alphabet. As originally designed, it was a seven-bit code. The seven bits allow the representation of 128 unique characters. All of the alphabet, the numeric digits and standard English punctuation marks are encoded.

The ASCII standard was later extended to an eight-bit code (which allows 256 unique code patterns) and various additional symbols were added, including characters with diacritical marks (such as accents) used in European languages, which do not appear in English. There are also numerous non-standard extensions to ASCII that give different encodings for the upper 128 character codes than the standard. For example, the character set encoded into the display card of the original IBM PC had a non-standard encoding for the upper character set. That extension is in very widespread use and could be considered a standard in itself.

Standard ASCII Character Set

Bytes are frequently used to hold individual characters in a text document. In the ASCII character set, each binary value between 0 and 127 is given a specific character. Most computers extend the ASCII character set to use the full range of 256 characters available in a byte; the upper 128 handle special things like accented characters from common foreign languages. The standard ASCII table lists the 128 standard codes which computers use to store text documents, both on disk and in memory.
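The correspondence between characters and their codes can be seen directly in Python, whose built-in ord and chr functions expose the numeric code of each character (equal to the ASCII code for characters in this range). The loop below is a small sketch of our own.

    # Characters and their ASCII codes, shown in decimal, binary and hexadecimal.
    for ch in "CSC 111":
        code = ord(ch)                                        # numeric code of the character
        print(ch, code, format(code, "08b"), format(code, "02X"))

    print(chr(65), chr(97), chr(48))                          # 'A', 'a' and '0' recovered from their codes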

Parity Check Bit

In early computer systems, the ASCII code consisted of just 128 characters, as explained above. This is because the 8th bit was used for parity checking. Errors may occur while recording and reading data, and when data is transmitted from one unit to another within a computer. Detection of a single error in the code for a character is possible by introducing an extra bit into its code. This bit, known as the parity check bit, is appended to the code. The parity can be set as either even or odd: the bit is chosen so that the total number of ones ('1') in the new code is even or odd, depending on the selection. If a single bit of a byte is changed while it is being read, written or transmitted, the error can be detected using the parity check bit.
However, when data is used inside the computer, parity is not really necessary and the top bit of the byte was wasted. It therefore became a good idea to make use of this top bit (the 8th bit), releasing a further 128 characters and giving a total of 256 overall. This has led to what is now known as the Extended ASCII character set.
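A parity check bit can be computed simply by counting the ones in the seven-bit code. The sketch below is our own (it uses even parity and, for simplicity, places the check bit at the end of the code); it shows how a single changed bit is detected.

    # Even parity: the check bit makes the total number of 1s in the byte even.
    def add_even_parity(seven_bits):
        parity = "0" if seven_bits.count("1") % 2 == 0 else "1"
        return seven_bits + parity                       # append the check bit

    def parity_ok(eight_bits):
        return eight_bits.count("1") % 2 == 0            # an odd count of 1s means an error crept in

    code = format(ord("A"), "07b")                       # seven-bit code for 'A' -> 1000001
    sent = add_even_parity(code)                         # 10000010
    print(sent, parity_ok(sent))                         # no error detected
    corrupted = ("0" if sent[0] == "1" else "1") + sent[1:]   # a single bit is changed in transit
    print(corrupted, parity_ok(corrupted))               # the error is detected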


Binary and Hexadecimal Number Tables

Powers of 2:
2⁰  ..................... 1
2¹  ..................... 2
2²  ..................... 4
2³  ..................... 8
2⁴  .................... 16
2⁵  .................... 32
2⁶  .................... 64
2⁷  ................... 128
2⁸  ................... 256
2⁹  ................... 512
2¹⁰ .................. 1024
2¹¹ .................. 2048
2¹² .................. 4096
2¹³ .................. 8192
2¹⁴ ................. 16384
2¹⁵ ................. 32768
2¹⁶ ................. 65536

Hexadecimal digits and their binary equivalents:
Decimal   Hexadecimal digit   Binary equivalent
0         0                   0000
1         1                   0001
2         2                   0010
3         3                   0011
4         4                   0100
5         5                   0101
6         6                   0110
7         7                   0111
8         8                   1000
9         9                   1001
10        A                   1010
11        B                   1011
12        C                   1100
13        D                   1101
14        E                   1110
15        F                   1111

Equivalent Numbers in Decimal, Binary and Hexadecimal Notation:

Decimal   Binary              Hexadecimal
0         00000000            00
1         00000001            01
2         00000010            02
3         00000011            03
4         00000100            04
5         00000101            05
6         00000110            06
7         00000111            07
8         00001000            08
9         00001001            09
10        00001010            0A
11        00001011            0B
12        00001100            0C
13        00001101            0D
14        00001110            0E
15        00001111            0F
16        00010000            10
17        00010001            11
31        00011111            1F
32        00100000            20
63        00111111            3F
64        01000000            40
65        01000001            41
127       01111111            7F
128       10000000            80
129       10000001            81
255       11111111            FF
256       0000000100000000    0100
32767     0111111111111111    7FFF
32768     1000000000000000    8000
65535     1111111111111111    FFFF


EXERCISES ON DATA REPRESENTATION

1. Express these binary numbers as decimal numbers: a) 10101₂  b) 1011₂  c) 11101₂
2. Change the following decimal numbers into binary numbers: a) 463₁₀  b) 32₁₀
3. Evaluate the following: a) 11011₂ + 10011₂  b) 101101₂ + 1010₂ + 10110₂  c) 11000₂ - 11101₂  d) 111011₂ - 110001₂
4. Express these octal numbers as decimal numbers: a) 6547₈  b) 345₈  c) 2339₈
5. Change the following decimal numbers into octal numbers: a) 3461₁₀  b) 1932₁₀
6. Using the three-bit equivalent forms, change from octal to binary: a) 6773₈  b) 2415₈  c) 345₈
7. Change from binary to octal: a) 1010111001001₂  b) 10101110100100₂
8. Calculate the following in base 8: a) 3763₈ + 3531₈  b) 136₈ + 732₈  c) 5321₈ - 677₈
9. Express these hexadecimal numbers as decimal numbers: a) 8D5₁₆  b) 7ED₁₆  c) B6AC₁₆
10. Change the following decimal numbers into hexadecimal: a) 3456₁₀  b) 1421₁₀
11. Using the four-bit equivalent forms, change from hexadecimal to binary: a) 4DE₁₆  b) 5C3A₁₆  c) B6F8₁₆
12. Change from binary to hexadecimal: a) 1010111001001₂  b) 100111110100100₂
13. Calculate the following in base 16: a) B5D + 45C  b) F1D - 1E3  c) 6789 + 4321


COMPUTER LANGUAGES
The term computer language covers a wide variety of languages used to communicate with computers. It is broader than the more commonly used term programming language: programming languages are a subset of computer languages. For example, HTML is a markup language and a computer language, but it is not traditionally considered a programming language.
Computer languages can be divided into two groups: high-level languages and low-level languages. High-level languages are designed to be easier to use, more abstract, and more portable than low-level languages. Syntactically correct programs in such languages are compiled to a low-level language and executed by the computer. Most modern software is written in a high-level language, compiled into object code, and then translated into machine instructions.
Computer languages can also be grouped by other criteria. A further distinction is between human-readable and non-human-readable languages. Human-readable languages are designed to be used directly by humans to communicate with the computer. Non-human-readable languages, though they can often be partially understood, are designed to be more compact and more easily processed, sacrificing readability to meet these ends.

Machine Language
A computer can execute a program written using binary digits only. Programs of this type are called machine language programs. Since these programs use only '0's and '1's, it is very difficult to develop them for complex problem solving, and it is very difficult for one person to understand a machine language program written by another. At present, computer users do not write programs in machine language. In addition, a program written for execution on one computer cannot be used on another type of computer; that is, such programs are machine dependent.

Assembly Language
In assembly language, mnemonic codes are used to develop programs for problem solving. The program given below shows an assembly language program to add two numbers A and B.

Assembly language is designed mainly to replace each machine code with an understandable mnemonic code. To execute an assembly language program, it must first be translated into an equivalent machine language program. Writing and understanding programs in assembly language is easier than in machine language, but programs written in assembly language are still machine dependent.


Program code    Description
READ A          Reads the value of A.
ADD B           The value of B is added to A.
STORE C         The result is stored in C.
PRINT C         The result in C is printed.
HALT            Stops execution.


High Level Languages
High-level languages were developed to allow application programs to be machine independent. A high-level language permits the user to write understandable code using the structure of the language. In order to execute a high-level language program, it must be translated into machine language using either a compiler or an interpreter. Commonly used high-level languages include FORTRAN (FORmula TRANslation), BASIC (Beginner's All-purpose Symbolic Instruction Code) and COBOL (COmmon Business Oriented Language). More recently developed languages such as Visual FoxPro, Visual Basic (VB) and Visual C++ (VC++) are popular among software developers. The following program, written in the BASIC language, adds two given numbers.

Program Code       Description
10 INPUT A,B       Reads the values of A and B.
20 LET C=A+B       A and B are added and the result is stored in C.
30 PRINT C         Prints the value of C.
40 END             Stops execution.
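For comparison, here is the same two-number addition written in Python, another widely used high-level language. This is our own illustration and is not part of the original note.

    # The same task as the BASIC program above, written in Python.
    a = float(input("Enter A: "))   # read the value of A
    b = float(input("Enter B: "))   # read the value of B
    c = a + b                       # add A and B and store the result in C
    print(c)                        # print the value of C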


COMPUTERS AND COMMUNICATIONS

Local Area Network (LAN) and Wide Area Network (WAN)
Computers in remote locations can communicate with each other over a telecommunication line. One way of connecting the computers is by using devices called modems. A modem is used to transfer data from one computer to another over the telephone lines: it converts the strings of 0s and 1s into electrical signals which can be carried over the telephone lines. Both the receiving and the transmitting computer have a telephone connection and a modem. An external modem is connected to the computer like a typical input or output device, while an internal modem is fitted into the circuitry alongside the CPU and memory.
Interconnecting computers that are within the same building or in nearby locations forms a network of computers called a Local Area Network (LAN). A LAN permits sharing of data files, computing resources and peripherals. Interconnection of computers located far apart using a telecommunication system is known as a Wide Area Network (WAN).

COMPUTER COMMUNICATION USING TELEPHONE LINES

Internet
Intercommunication between computer networks is now possible. Computer networks located in different organizations can communicate with each other through a facility known as the Internet. The Internet is a worldwide computer network which interconnects computer networks across countries. It facilitates electronic mail (email), file transfer between any two computers, and remote access to any computer connected to the Internet. This intercommunication facility has changed the way business organizations function and has made the world a global village.
