MLR INSTITUTE OF TECHNOLOGY
Dundigal, Hyderabad - 500 043
COMPUTER SCIENCE AND ENGINEERING
COURSE DESCRIPTION FORM
Course Title DIGITAL LOGIC DESIGN
Course Code A1404
Regulation R13-JNTUH
Course Structure Lectures: 3, Tutorials: 1, Practicals: -, Credits: 4
Course Coordinator Prof. K. L. Chugh
Team of Instructors Ram Mohan Rao, Shampa Ghosh, Gopala Gupta
I. COURSE OVERVIEW:
The course addresses the concepts, principles and techniques of designing digital systems. The course teaches the fundamentals of digital systems applying the logic design and development techniques. This course forms the basis for the study of advanced subjects like Computer Architecture and Organization, Microprocessor through Interfacing, VLSI Designing. Students will learn principles of digital systems logic design and distinguish between analog and digital representations. They will be able to analyze a given combinational or sequential circuit using k-map and Boolean algebra as a tool to simplify and design logic circuits. Construct and analyze the operation of a latch, flip-flop and its application in synchronization circuits.
II. PREREQUISITES:
Level: UG
Credits: 4
Periods/Week: 4
Prerequisites: Electronic Devices & Circuits, PC Software Lab, Mathematics
III. COURSE ASSESSMENT METHODS:
a) Marks Distribution
Session Marks (25M):
There shall be 2 midterm examinations. Each midterm examination consists of a subjective test and an objective test.
The subjective test is for 10 marks, with a duration of 1 hour. The subjective test of each midterm shall contain 4 questions; the student has to answer 2 questions, each carrying 5 marks.
The objective test is for 10 marks, with a duration of 20 minutes. It consists of 10 multiple choice and 10 objective type questions; the student has to answer all the questions, each carrying half a mark.
The first midterm examination shall be conducted on the first two and a half units of the syllabus, and the second midterm examination on the remaining portion.
Five marks are earmarked for assignments. There shall be two assignments in every theory course. Marks shall be awarded considering the average of the two assignments in each course.
University End Exam Marks: 75
Total Marks: 100
IV. EVALUATION SCHEME:
IV. EVALUATION SCHEME:
S. No  Component              Duration    Marks
1      I Mid Examination      90 minutes  20
2      I Assignment           -           05
3      II Mid Examination     90 minutes  20
4      II Assignment          -           05
5      External Examination   3 hours     75
V. COURSE OBJECTIVES:
i. Comprehend different number systems, including the binary number system, and Boolean algebra principles.
ii. Apply Boolean algebra to switching logic design and simplification.
iii. Design circuits using logic gates, multiplexers, decoders, registers, counters and programmable logic arrays.
iv. Develop state diagrams and state transition tables for state machines.
VI. COURSE OUTCOMES:
1. Describe number systems, binary addition and subtraction, 2's complement representation and operations with this representation.
2. Understand switching algebra theorems and apply them to logic functions.
3. Identify the importance of SOP and POS canonical forms in the minimization or other optimization of Boolean formulas in general and digital circuits in particular.
4. Minimize functions using any minimization algorithm (Boolean algebra, Karnaugh map or tabulation method).
5. Understand bi-stable elements and the different latches and flip-flops.
6. Analyze the design procedures of combinational and sequential circuits.
7. Design finite state machines using algorithmic state machine charts and perform simple projects with a few flip-flops.
VII. HOW PROGRAM OUTCOMES ARE ASSESSED:
A. An ability to apply knowledge of computing, mathematical foundations, algorithmic principles, and computer science and engineering theory in the modeling and design of computer-based systems to real-world problems (fundamental engineering analysis skills). Level: S. Assessed by: lectures, assignments, exams.
B. An ability to design and conduct experiments, as well as to analyze and interpret data (information retrieval skills). Level: S. Assessed by: lectures, assignments, tutorials.
C. An ability to design and construct a hardware and software system, component, or process to meet desired needs, within realistic constraints. Level: H. Assessed by: assignments, tutorials, lectures.
D. An ability to function effectively on multi-disciplinary teams (teamwork). Level: N.
E. An ability to analyze a problem, and identify, formulate and use the appropriate computing and engineering requirements for obtaining its solution (engineering problem solving skills). Level: H. Assessed by: lectures, tutorials.
F. An understanding of professional, ethical, legal, security and social issues and responsibilities (professional integrity). Level: S. Assessed by: lectures, assignments.
G. An ability to communicate effectively, both in writing and orally (speaking / writing skills). Level: N.
H. The broad education necessary to analyze the local and global impact of computing and engineering solutions on individuals, organizations, and society (engineering impact assessment skills). Level: S. Assessed by: examination.
I. Recognition of the need for, and an ability to engage in, continuing professional development and life-long learning (continuing education awareness). Level: N.
J. A knowledge of contemporary issues (social awareness). Level: N.
K. An ability to use current techniques, skills, and tools necessary for computing and engineering practice (practical engineering analysis skills). Level: H. Assessed by: lectures, assignments, tutorials, exams.
L. Graduates are able to participate and succeed in competitive examinations like GRE, GATE, TOEFL, GMAT etc. Level: S. Assessed by: tutorials.
M. The use of current application software; the design and use of operating systems; and the analysis, design, testing, and documentation of computer programs for use in information engineering technologies. Level: H. Assessed by: lectures, assignments.
N. The basic knowledge of electronics, electrical components, computer architecture and applications of microcomputer systems, and communications needed in data transport. Level: H. Assessed by: lectures, assignments.
N = None, S = Supportive, H = Highly Related
DIGITAL LOGIC DESIGN
SYLLABUS
UNIT-I
Digital Systems: Binary numbers, octal, hexadecimal and other base numbers, number base conversions, complements, signed binary numbers, floating point number representation, binary codes, error detecting and correcting codes, digital logic gates (AND, NAND, OR, NOR, Ex-OR, Ex-NOR), Boolean algebra, basic theorems and properties, Boolean functions, canonical and standard forms.
UNIT-II
Gate-Level Minimization and Combinational Circuits: The K-map method, three-variable, four-variable and five-variable maps, sum-of-products and product-of-sums simplification, don't-care conditions, NAND and NOR implementation and other two-level implementations.
UNIT-III
Combinational Circuits (CC): Design Procedure, Combinational circuit for different code converters and other
problems, Binary Adder, subtractor, Multiplier, Magnitude Comparator, Decoders, Encoders, Multiplexers,
Demultiplexers.
UNIT-IV
Synchronous Sequential Circuits: Latches, flip-flops, analysis of clocked sequential circuits, design of counters, up-down counters, ripple counters, registers, shift registers, synchronous counters. Asynchronous Sequential Circuits: Reduction of state and flow tables, race-free conditions.
UNIT-V
Memory: Random Access memory, types of ROM, Memory decoding, address and data bus, Sequential Memory,
Cache Memory, Programmable Logic Arrays, memory Hierarchy in terms of capacity and access time.
TEXT BOOKS:
1) Digital Design- M. Morris Mano.
REFERENCE BOOKS:
1) Switching and Finite Automata Theory by Zvi. Kohavi, Tata McGraw Hill.
2) Switching and Logic Design, C.V.S. Rao, Pearson Education.
3) Digital Principles and Design - Donald D. Givone, Tata McGraw Hill.
4) Fundamentals of Digital Logic & Microcomputer Design, 5th Edition, M. Rafiquzzaman, John Wiley.
DIGITAL LOGIC DESIGN
TOPIC WISE NOTES
UNIT-1
Digital Systems: Binary Numbers, Octal, Hexadecimal and Other Base Numbers
Number systems provide the basis for all operations in information processing systems. In a
number system the information is divided into a group of symbols; for example, 26 English
letters, 10 decimal digits etc. In conventional arithmetic, a number system based upon ten units
(0 to 9) is used. However, arithmetic and logic circuits used in computers and other digital
systems operate with only 0's and 1's because it is very difficult to design circuits that require ten
distinct states. The number system with the basic symbols 0 and 1 is called binary, i.e., a binary
system uses just two discrete values. The binary digit (either 0 or 1) is called a bit.
A group of bits used to represent the discrete elements of information is a symbol. The
mapping of symbols to binary values is known as a binary code. This mapping must be unique. For
example, the decimal digits 0 through 9 are represented in a digital system with a code of four
bits. Thus a digital system is a system that manipulates discrete elements of information that is
represented internally in binary form.
Decimal Numbers
The invention of decimal number system has been the most important factor in the development
of science and technology. The decimal number system uses positional number representation,
which means that the value of each digit is determined by its position in a number.
The base, also called the radix of a number system is the number of symbols that the system
contains. The decimal system has ten symbols: 0,1,2,3,4,5,6,7,8,9. In other words, it has a base
of 10. Each position in the decimal system is 10 times more significant than the previous
position. The numeric value of a decimal number is determined by multiplying each digit of the
number by the value of the position in which the digit appears and then adding the products.
Thus the number 2734 is interpreted as
2734 = 2 × 10^3 + 7 × 10^2 + 3 × 10^1 + 4 × 10^0
Here 4 is the least significant digit (LSD) and 2 is the most significant digit (MSD).
In general, in a number system with a base or radix r, the digits used are from 0 to r-1 and a
number can be represented as
N = a(n-1) × r^(n-1) + a(n-2) × r^(n-2) + ... + a(1) × r + a(0)     (1)
Equation (1) is for integers; for fractions (numbers between 0 and 1), the following
equation holds:
F = a(-1) × r^(-1) + a(-2) × r^(-2) + ... + a(-m) × r^(-m)
Thus for the decimal fraction 0.7123,
0.7123 = 7 × 10^(-1) + 1 × 10^(-2) + 2 × 10^(-3) + 3 × 10^(-4)
Binary Numbers
The binary number has a radix of 2. As r = 2, only two digits are needed, and these are 0 and 1.
Like the decimal system, binary is a positional system, except that each bit position corresponds
to a power of 2 instead of a power of 10. In digital systems, the binary number system and other
number systems closely related to it are used almost exclusively. Hence, digital systems often
provide conversion between decimal and binary numbers. The decimal value of a binary number
can be formed by multiplying each power of 2 by either 1 or 0 followed by adding the values
together.
Example: The decimal equivalent of the binary number 101010 is
1 × 2^5 + 0 × 2^4 + 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 0 × 2^0 = 32 + 8 + 2 = 42.
In binary, r bits can represent 2^r symbols, e.g. 3 bits can represent up to 8 symbols, 4 bits up to 16
symbols, etc. For N symbols to be represented, the minimum number of bits required is the
lowest integer r that satisfies 2^r ≥ N,
e.g. if N = 26, the minimum r is 5, since 2^5 = 32 ≥ 26.
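This minimum-bits relationship can be sketched in Python (the helper name min_bits is illustrative, not from the text):

```python
import math

def min_bits(n_symbols):
    """Smallest number of bits r such that 2**r >= n_symbols."""
    return math.ceil(math.log2(n_symbols))

print(min_bits(26))  # 5 bits cover the 26 English letters (2**5 = 32 >= 26)
print(min_bits(16))  # 16 symbols fit exactly in 4 bits
```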
Octal Numbers
Digital systems operate only on binary numbers. Since binary numbers are often very long, two
shorthand notations, octal and hexadecimal, are used for representing large binary numbers.
Octal systems use a base or radix of 8; thus the digits run from 0 to 7 (r-1). As in the decimal and
binary systems, the positional value of each digit in a sequence of numbers is fixed. Each
position in an octal number is a power of 8, and each position is 8 times more significant than the
previous position.
Example: The decimal equivalent of the octal number 15.2 is
1 × 8^1 + 5 × 8^0 + 2 × 8^(-1) = 8 + 5 + 0.25 = 13.25.
Hexadecimal Numbers
The hexadecimal numbering system has a base of 16. There are 16 symbols. The decimal digits 0
to 9 are used as the first ten digits as in the decimal system, followed by the letters A, B, C, D, E
and F, which represent the values 10, 11,12,13,14 and 15 respectively. Table 1 shows the
relationship between decimal, binary, octal and hexadecimal number systems.
Hexadecimal numbers are often used in describing the data in computer memory. A computer
memory stores a large number of words, each of which is a standard size collection of bits. An 8-
bit word is known as a Byte. A hexadecimal digit may be considered as half of a byte. Two
hexadecimal digits constitute one byte, the rightmost 4 bits corresponding to half a byte, and the
leftmost 4 bits corresponding to the other half of the byte. Often a half-byte is called a nibble.
If the "word" size is n bits, there are 2^n possible bit patterns, so only 2^n distinct numbers can
be represented. This implies that not all numbers can be represented, and some of these bit
patterns (half) must be used for negative numbers. Negative numbers are often represented
in sign-magnitude form, i.e., one bit is reserved for the sign and the rest of the bits are interpreted
directly as the number. For example, in a 4-bit system, 0000 to 0111 can be used for positive numbers
from +0 to +(2^(n-1) - 1), and 1000 to 1111 for negative numbers from -0 to -(2^(n-1) - 1). The
two possible zeros are redundant, and it can be seen that such representations are arithmetically
costly.
Another way to represent negative numbers is by the radix and radix-minus-one complements (also
called the r's and (r-1)'s complements). For example, -k is represented as r^n - k. In the case of base 10
and the corresponding 10's complement with n = 2, 0 to 99 are the possible numbers. In such a system,
0 to 49 is reserved for positive numbers and 50 to 99 for negative numbers.
Examples:
+3 = 3
-3 = 10^2 - 3 = 97
2's complement is a special case of complement representation. The negative number -k is represented
as 2^n - k. In a 4-bit system, positive numbers 0 to 2^(n-1) - 1 are represented by 0000 to 0111, and
negative numbers -2^(n-1) to -1 are represented by 1000 to 1111. Such a representation has only one
zero, and arithmetic is easier. To negate a number, complement all bits and add 1.
Example:
119 10 = 01110111 2
Complementing bits will result
10001000
+1 add 1
10001001
That is 10001001 2 = - 119 10
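The complement-and-add-1 recipe above can be checked with a small Python sketch (the function name is illustrative):

```python
def twos_complement(value, bits=8):
    """Negate `value`: invert all bits, then add 1 (mod 2**bits)."""
    mask = (1 << bits) - 1
    return ((value ^ mask) + 1) & mask

neg = twos_complement(0b01110111)  # 119, as in the worked example
print(format(neg, '08b'))          # 10001001, the bit pattern for -119
```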
Number base conversions
This section describes the conversion of numbers from one number system to another. Radix
Divide and Multiply Method is generally used for conversion. There is a general procedure for
the operation of converting a decimal number to a number in base r. If the number includes a
radix point, it is necessary to separate the number into an integer part and a fraction part, since
each part must be converted differently. The conversion of a decimal integer to a number in base
r is done by dividing the number and all successive quotients by r and accumulating the
remainders. The conversion of a decimal fraction is done by repeated multiplication by r and the
integers are accumulated instead of remainders.
Integer part - repeated divisions by r yield LSD to MSD
Fractional part - repeated multiplications by r yield MSD to LSD
Example: Conversion of decimal 23 to binary is done by dividing the decimal value by 2 (the base)
until the value is 0.
The answer is 23 10 = 10111 2
Divide number by 2; keep track of remainder; repeat with dividend equal to quotient until zero;
first remainder is binary LSB and last is MSB.
The conversion from decimal integers to any base-r system is similar to this above example,
except that division is done by r instead of 2.
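The repeated-division procedure generalizes to any base r; a minimal Python sketch (helper name assumed, not from the text):

```python
def to_base_r(n, r):
    """Convert a non-negative decimal integer to base r by repeated division.
    Remainders come out LSD first, so they are collected and then reversed."""
    if n == 0:
        return '0'
    digits = '0123456789ABCDEF'
    out = []
    while n > 0:
        n, rem = divmod(n, r)
        out.append(digits[rem])
    return ''.join(reversed(out))

print(to_base_r(23, 2))  # 10111, matching the worked example
print(to_base_r(23, 8))  # 27
```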
Example:
Convert (0.7854) 10 to binary.
0.7854 x 2 = 1.5708; a -1 = 1
0.5708 x 2 = 1.1416; a -2 = 1
0.1416 x 2 = 0.2832; a -3 = 0
0.2832 x 2 = 0.5664; a -4 = 0
The answer is (0.7854) 10 = (0.1100) 2
Multiply the fraction by two; keep track of the integer part; repeat with the multiplier equal to the product's
fraction; the first integer is the MSB and the last is the LSB; the conversion may not be exact, as the result
may be a repeating fraction.
The conversion from decimal fraction to any base-r system is similar to this above example,
except the multiplication is done by r instead of 2.
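The repeated-multiplication procedure can likewise be sketched in Python (helper name assumed; the expansion is truncated since it may not terminate):

```python
def frac_to_base_r(frac, r, max_digits):
    """Convert a decimal fraction (0 <= frac < 1) to base r by repeated
    multiplication; the integer parts come out MSD first."""
    out = []
    for _ in range(max_digits):
        frac *= r
        digit = int(frac)
        out.append(str(digit))
        frac -= digit
    return ''.join(out)

print(frac_to_base_r(0.7854, 2, 4))  # 1100, matching the worked example
```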
The conversion of decimal numbers with both integer and fraction parts is done by converting
the integer and the fraction separately and then combining the two answers.
Thus (23.7854) 10 = (10111. 1100) 2
For converting a binary number to octal, the following two-step procedure can be used.
Group the bits into 3's, starting at the least significant bit. If the number of bits is not
evenly divisible by 3, add 0's at the most significant end.
Write the corresponding octal digit for each group.
Examples:
Similarly, for converting a binary number to hex, the following two-step procedure can be used.
Group the bits into 4's, starting at the least significant bit. If the number of bits is not
evenly divisible by 4, add 0's at the most significant end.
Write the corresponding hex digit for each group.
Examples:
Examples:
The hex to binary conversion is very simple; just write down the 4-bit binary code for each
hexadecimal digit.
Example:
Similarly, for octal to binary conversion, write down the 3-bit binary code for each octal digit.
The hex to octal conversion can be carried out in 2 steps: first hex to binary, then
binary to octal. Similarly, decimal to hex conversion is completed in 2 steps: first decimal to
binary, then binary to hex, as described above.
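The grouping rules above can be sketched in Python (function names are illustrative):

```python
def bin_to_hex(bits):
    """Pad to a multiple of 4 on the left, then map each group of 4 bits."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    return ''.join(format(int(bits[i:i + 4], 2), 'X')
                   for i in range(0, len(bits), 4))

def bin_to_oct(bits):
    """Pad to a multiple of 3 on the left, then map each group of 3 bits."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    return ''.join(str(int(bits[i:i + 3], 2))
                   for i in range(0, len(bits), 3))

print(bin_to_hex('10111'))  # 17 (binary 10111 is decimal 23)
print(bin_to_oct('10111'))  # 27
```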
complements
Properties of Two's Complement Numbers
X plus the complement of X equals 0.
There is one unique 0.
Positive numbers have 0 as their leading bit (MSB), while negative numbers have 1 as their MSB.
The range of an n-bit binary number in 2's complement representation is from -2^(n-1) to
2^(n-1) - 1.
The complement of the complement of a number is the original number.
Subtraction is done by adding the 2's complement of the subtrahend.
Value of Two's Complement Numbers
For an n-bit 2's complement number, the weights of the bits are the same as for unsigned numbers
except for the MSB. For the MSB, or sign bit, the weight is -2^(n-1). The value of the n-bit 2's
complement number is given by:
A = a(n-1) × (-2^(n-1)) + a(n-2) × 2^(n-2) + ... + a(1) × 2^1 + a(0)
For example, the value of the 4-bit 2's complement number 1011 is given by:
= 1 × (-2^3) + 0 × 2^2 + 1 × 2^1 + 1
= -8 + 0 + 2 + 1
= -5
An n-bit 2's complement number can be converted to an m-bit number, where m > n, by appending
m-n copies of the sign bit to the left of the number. This process is called sign extension. Example:
To convert the 4-bit 2's complement number 1011 to an 8-bit representation, the sign bit (here
1) must be extended by appending four 1's to the left of the number:
1011 (4-bit 2's complement) = 11111011 (8-bit 2's complement)
To verify that the value of the 8-bit number is still -5, the value of the 8-bit number
= -2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^1 + 2^0
= -128 + 64 + 32 + 16 + 8 + 2 + 1
= -128 + 123 = -5
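The weighted-sum rule and sign extension can be checked with a short Python sketch (the helper name is illustrative):

```python
def twos_value(bits):
    """Value of a 2's-complement bit string: the MSB carries weight
    -2**(n-1); all other bits carry their usual positive weights."""
    n = len(bits)
    value = -int(bits[0]) * 2 ** (n - 1)
    if n > 1:
        value += int(bits[1:], 2)
    return value

print(twos_value('1011'))      # -5
print(twos_value('11111011'))  # still -5 after sign extension to 8 bits
```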
Similar to decimal number addition, two binary numbers are added by adding each pair of bits
together with carry propagation. An addition example is illustrated below:
X        1011 1110 = 190
Y      + 1000 1101 = 141
X + Y  1 0100 1011 = 331
Similar to addition, two binary numbers are subtracted by subtracting each pair of bits together
with borrowing, where needed. For example:
X        1110 0101 = 229
Y      - 0010 1110 =  46
X - Y    1011 0111 = 183
Two's complement addition/subtraction example
Overflow occurs if signs (MSBs) of both operands are the same and the sign of the result is
different. Overflow can also be detected if the carry in the sign position is different from the
carry out of the sign position. Ignore carry out from MSB.
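The overflow rule above (same operand signs, different result sign, carry out of the MSB discarded) can be sketched as follows (function name is illustrative):

```python
def add_with_overflow(a_bits, b_bits):
    """Add two 2's-complement bit strings of equal width; overflow occurs
    when both operands share a sign but the result's sign differs."""
    n = len(a_bits)
    mask = (1 << n) - 1
    total = (int(a_bits, 2) + int(b_bits, 2)) & mask  # discard carry out of MSB
    result = format(total, '0{}b'.format(n))
    overflow = a_bits[0] == b_bits[0] and result[0] != a_bits[0]
    return result, overflow

print(add_with_overflow('0101', '0100'))  # 5 + 4 overflows in 4 bits
print(add_with_overflow('0101', '1101'))  # 5 + (-3) = 2, no overflow
```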
signed binary numbers
Property
Two's complement representation allows the use of binary arithmetic operations on signed
integers, yielding the correct 2's complement results.
Positive Numbers
Positive 2's complement numbers are represented as the simple binary.
Negative Numbers
Negative 2's complement numbers are represented as the binary number that when added to a
positive number of the same magnitude equals zero.
Signed  Unsigned  2's Complement
5 5 0000 0101
4 4 0000 0100
3 3 0000 0011
2 2 0000 0010
1 1 0000 0001
0 0 0000 0000
-1 255 1111 1111
-2 254 1111 1110
-3 253 1111 1101
-4 252 1111 1100
-5 251 1111 1011
Note: The most significant (leftmost) bit indicates the sign of the integer; therefore it is
sometimes called the sign bit.
If the sign bit is zero,
then the number is greater than or equal to zero, or positive.
If the sign bit is one,
then the number is less than zero, or negative.
Calculation of 2's Complement
To calculate the 2's complement of an integer, invert the binary equivalent of the number by
changing all of the ones to zeroes and all of the zeroes to ones (also called 1's complement), and
then add one.
For example,
0001 0001 (binary 17) becomes 1110 1111 (two's complement, -17):
NOT(0001 0001) = 1110 1110 (invert bits)
1110 1110 + 0000 0001 = 1110 1111 (add 1)
2's Complement Addition
Two's complement addition follows the same rules as binary addition.
For example,
5 + (-3) = 2 0000 0101 = +5
+ 1111 1101 = -3
0000 0010 = +2
2's Complement Subtraction
Two's complement subtraction is the binary addition of the minuend to the 2's complement of the
subtrahend (adding a negative number is the same as subtracting a positive one).
For example,
7 - 12 = (-5) 0000 0111 = +7
+ 1111 0100 = -12
1111 1011 = -5
2's Complement Multiplication
Two's complement multiplication follows the same rules as binary multiplication.
For example,
(-4) × 4 = (-16) 1111 1100 = -4
× 0000 0100 = +4
1111 0000 = -16
2's Complement Division
Two's complement division is repeated 2's complement subtraction. The 2's complement of the
divisor is calculated, then added to the dividend. For the next subtraction cycle, the result
replaces the dividend. This repeats until the result is too small for further subtraction or is zero;
what remains is the remainder. The quotient is the total number of subtraction cycles performed.
For example,
7 ÷ 3 = 2 remainder 1
  0000 0111 = +7          0000 0100 = +4
+ 1111 1101 = -3        + 1111 1101 = -3
  0000 0100 = +4          0000 0001 = +1 (remainder)
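The repeated-subtraction cycle can be sketched in Python (names illustrative; positive operands assumed for simplicity):

```python
def divide_by_subtraction(dividend, divisor, bits=8):
    """Divide by repeatedly adding the 2's complement of the divisor,
    counting cycles until the running value drops below the divisor."""
    mask = (1 << bits) - 1
    neg_divisor = ((divisor ^ mask) + 1) & mask  # 2's complement of divisor
    quotient = 0
    while dividend >= divisor:
        dividend = (dividend + neg_divisor) & mask  # one subtraction cycle
        quotient += 1
    return quotient, dividend

print(divide_by_subtraction(7, 3))  # quotient 2, remainder 1
```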
Sign Extension
To extend a signed integer from 8 bits to 16 bits or from 16 bits to 32 bits, append additional bits
on the left side of the number. Fill each extra bit with the value of the smaller number's most
significant bit (the sign bit).
For example,
Signed Integer 8-bit Representation 16-bit Representation
-1 1111 1111 1111 1111 1111 1111
+1 0000 0001 0000 0000 0000 0001
Other Representations of Signed Integers
Sign-Magnitude Representation
Another method of representing negative numbers is sign-magnitude. Sign-magnitude
representation also uses the most significant bit of the number to indicate the sign. A negative
number is the 7-bit binary representation of the positive number with the most significant bit set
to one. The drawbacks to using this method for arithmetic computation are that a different set of
rules is required and that zero has two representations (+0, 0000 0000 and -0, 1000 0000).
Offset Binary Representation
A third method for representing signed numbers is offset binary. An offset binary code is
constructed by assigning half of the largest possible number as the zero value. A positive integer
is its absolute value added to the zero value, and a negative integer is subtracted from it. Offset
binary is popular in A/D and D/A conversion, but it is still awkward for arithmetic computation.
For example,
Number of values for an 8-bit integer = 2^8 = 256
Offset binary zero value = 256 ÷ 2 = 128 (decimal) = 1000 0000 (binary)
1000 0000(offset binary 0) + 0001 0110(binary 22) = 1001 0110(offset binary +22)
1000 0000(offset binary 0) - 0000 0111(binary 7) = 0111 1001(offset binary -7)
Signed Integer Sign Magnitude Offset Binary
+5 0000 0101 1000 0101
+4 0000 0100 1000 0100
+3 0000 0011 1000 0011
+2 0000 0010 1000 0010
+1 0000 0001 1000 0001
0 0000 0000 or 1000 0000 1000 0000
-1 1000 0001 0111 1111
-2 1000 0010 0111 1110
-3 1000 0011 0111 1101
-4 1000 0100 0111 1100
-5 1000 0101 0111 1011
Floating point number representation
A floating-point number (or real number) can represent a very large value (1.23×10^88) or a very
small value (1.23×10^-88). It can also represent a very large negative number (-1.23×10^88) and a
very small negative number (-1.23×10^-88), as well as zero.
A floating-point number is typically expressed in the scientific notation, with a fraction (F), and
an exponent (E) of a certain radix (r), in the form of F×r^E. Decimal numbers use radix of 10
(F×10^E); while binary numbers use radix of 2 (F×2^E).
Representation of floating point number is not unique. For example, the number 55.66 can be
represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so on. The fractional part can be
normalized. In the normalized form, there is only a single non-zero digit before the radix point.
For example, decimal number 123.4567 can be normalized as 1.234567×10^2; binary number
1010.1011B can be normalized as 1.0101011B×2^3.
It is important to note that floating-point numbers suffer from loss of precision when represented
with a fixed number of bits (e.g., 32-bit or 64-bit). This is because there are infinitely many real
numbers (even within a small range, say 0.0 to 0.1), while an n-bit binary pattern can represent
only a finite set of 2^n distinct numbers. Hence, not all real numbers can be represented; the
nearest approximation is used instead, resulting in a loss of accuracy.
It is also important to note that floating-point arithmetic is much less efficient than integer
arithmetic. It can be sped up with a dedicated floating-point co-processor. Hence,
use integers if your application does not require floating-point numbers.
In computers, floating-point numbers are represented in scientific notation of fraction (F) and
exponent (E) with a radix of 2, in the form of F×2^E. Both E and F can be positive as well as
negative. Modern computers adopt IEEE 754 standard for representing floating-point numbers.
There are two representation schemes: 32-bit single-precision and 64-bit double-precision.
IEEE-754 32-bit Single-Precision Floating-Point Numbers
In 32-bit single-precision floating-point representation:
The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative
numbers.
The following 8 bits represent exponent (E).
The remaining 23 bits represents fraction (F).
Float Normalized Form
Let's illustrate with an example, suppose that the 32-bit pattern is 1 1000 0001 011 0000 0000
0000 0000 0000, with:
S = 1
E = 1000 0001
F = 011 0000 0000 0000 0000 0000
In the normalized form, the actual fraction is normalized with an implicit leading 1 in the form of
1.F. In this example, the actual fraction is 1.011 0000 0000 0000 0000 0000 = 1 + 1×2^-2 +
1×2^-3 = 1.375D.
The sign bit represents the sign of the number, with S=0 for positive and S=1 for negative
number. In this example with S=1, this is a negative number, i.e., -1.375D.
In normalized form, the actual exponent is E-127 (so-called excess-127 or bias-127). This is
because we need to represent both positive and negative exponent. With an 8-bit E, ranging from
0 to 255, the excess-127 scheme could provide actual exponent of -127 to 128. In this example,
E-127=129-127=2D.
Hence, the number represented is -1.375×2^2=-5.5D.
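The decoding steps above (sign, excess-127 exponent, implicit leading 1) can be sketched in Python; decode_float32 is an illustrative helper for normalized patterns, not a library function:

```python
def decode_float32(pattern):
    """Decode a normalized IEEE-754 single-precision bit pattern,
    given as a 32-character bit string, into its value."""
    s = int(pattern[0])                 # sign bit
    e = int(pattern[1:9], 2)            # 8-bit exponent field
    f_bits = pattern[9:]                # 23-bit fraction field
    fraction = 1.0 + sum(int(b) * 2 ** -(i + 1)
                         for i, b in enumerate(f_bits))  # implicit leading 1
    return (-1) ** s * fraction * 2 ** (e - 127)         # excess-127 exponent

bits = '1' + '10000001' + '011' + '0' * 20
print(decode_float32(bits))  # -5.5, matching the worked example
```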
De-Normalized Form
Normalized form has a serious problem: with an implicit leading 1 for the fraction, it cannot
represent the number zero! Convince yourself of this!
De-normalized form was devised to represent zero and other numbers.
For E=0, the numbers are in the de-normalized form. An implicit leading 0 (instead of 1) is used
for the fraction; and the actual exponent is always -126. Hence, the number zero can be
represented with E=0 and F=0 (because 0.0×2^-126=0).
We can also represent very small positive and negative numbers in de-normalized form with
E=0. For example, if S=1, E=0, and F=011 0000 0000 0000 0000 0000. The actual fraction is
0.011=1×2^-2+1×2^-3=0.375D. Since S=1, it is a negative number. With E=0, the actual
exponent is -126. Hence the number is -0.375×2^-126 = -4.4×10^-39, which is an extremely
small negative number (close to zero).
binary codes
Alphanumeric Codes
Earlier computers were used only for the purpose of calculations i.e. they were only used as a
calculating device. But now computers are not just used for numeric representations, they are
also used to represent information such as names, addresses, item descriptions etc. Such
information is represented using letters and symbols. Computer is a digital system and can only
deal with l's and 0’s. So to deal with letters and symbols they use alphanumeric codes.
Gray code
The Gray code was patented by Frank Gray of Bell Labs in 1953. It belongs to a class of codes
called minimum change codes, in which successive coded characters differ in no more than one
bit. Owing to this feature, the maximum error that can creep into a system using the binary Gray
code to encode data is much less than the worst-case error encountered with straight binary
encoding.
Excess-3 Code (XS3)
Excess-3, also called XS3, is a non-weighted code used to express decimal numbers. It is
another important binary code. It is particularly significant for arithmetic operations, as it
overcomes the shortcomings encountered when using the 8421 BCD code to add two decimal
digits whose sum exceeds 9. This code was used in some older computers.
BINARY CODED DECIMAL (BCD)
The binary coded decimal (BCD) is a type of binary code used to represent a given decimal
number in an equivalent binary form. Its main advantage is that it allows easy conversion to
decimal digits for printing or display and faster calculations.
Error Detecting And Correcting Codes
I. Introduction
- ------------
A. With any technology, there is a danger that information will be corrupted
due to physical imperfections in storage media or electronic "noise".
Since an undetected error of even 1 bit can be catastrophic, we wish to
take measures to detect such errors as might occur. As a minimum, we
seek error detection; but error correction is even better.
B. We discuss error-detecting and correcting codes here - in the context of
memory systems - because information in storage is vulnerable to
corruption. The same principles are often used when transmitting data
from one place to another over a network - the other place where data
is especially vulnerable to corruption (even more so.)
II. Simple Error detecting/correcting codes
-- ------ ----- -------------------- -----
A. All error detection/correction schemes depend on using more bits to
store information than one actually needs.
Of course, the simplest such scheme is parity. With each data item, we
store an extra bit called the parity bit.
1. One convention, called odd parity, specifies that the parity bit be set
so that the total number of 1 bits (including the parity bit) is odd.
Examples:
Data item 01010101
Item + parity 101010101
Data item 10000011
Item + parity 010000011
2. An alternate convention, called even parity, sets the parity bit so
that the total number of 1 bits (including the parity bit) is even.
The choice of which convention to use depends on what sort of
catastrophic error is judged most likely to occur. For example, if
a complete system failure would normally result in all data being
converted to 0's, then odd parity would report something wrong but
even would not. On the other hand, if system failure would turn all
data to 1's (and the total length of data plus parity is odd) then
even parity would be preferred.
3. To check the correctness of an item, the receiver simply counts the
number of 1 bits and ensures that they are odd or even, as the case
may be. (Of course, both the originator and the receiver of the item
must use the same convention as to odd/even parity.)
4. Parity is what we call a 1-bit error detecting code.
a. It can tell us that some bit is corrupt - but not which one is
corrupt. When a parity error occurs, any data bit might be in error,
or the data might itself be fine and the parity bit corrupt!
b. It can be fooled by a multiple error. If 2 bits are corrupted,
a parity test will tell us everything is ok.
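The odd-parity convention above can be sketched in a few lines of Python (the function names are mine, not from the text):

```python
def odd_parity_bit(data: str) -> int:
    """Parity bit chosen so the total number of 1 bits
    (data bits plus parity bit) is odd."""
    return (data.count("1") + 1) % 2

def check_odd_parity(word: str) -> bool:
    """True if a stored word (parity bit included) has an odd number of 1s."""
    return word.count("1") % 2 == 1

# The examples above: the parity bit is prepended to the data item.
assert odd_parity_bit("01010101") == 1 and check_odd_parity("101010101")
assert odd_parity_bit("10000011") == 0 and check_odd_parity("010000011")
```

Flipping any single bit of a stored word makes `check_odd_parity` fail, but flipping two bits fools it, as noted above.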
B. More sophisticated schemes provide for not only detecting but also
correcting errors, and may be able to detect multi-bit errors. One
rather simple example is a scheme once used on magnetic tape.
1. Data on tape is written in blocks, composed of frames. Each frame
typically has 8 data bits plus a frame parity bit. Each block has
an extra frame called the block checksum, each of whose bits is a
parity check on one row position throughout the length of the block -
parity check on one row position throughout the length of the block -
e.g.
d d d d d d d d d d l <- this bit checks all the d's to its left
d d d d d d d d d d l
d d d d d d d d d d l
d d d d d d d d d d l
d d d d d d d d d d l d = data
d d d d d d d d d d l f = frame parity
d d d d d d d d d d l l = longitudinal parity
d d d d d d d d d d l
f f f f f f f f f f l <- this bit checks both all the l's above
^ it and all the d's to its left
|
This bit checks all d's above it
2. Consider what would happen if there were a single bit in error in
the block. When the user reconstructs the f's and l's, he would
find that one f and one l are incorrect. Together, these two would
isolate the one bit in error, which could be corrected by inverting it.
Thus, we have 1 bit error correction.
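The row/column intersection idea can be sketched in Python (even parity is assumed for simplicity; all names are mine):

```python
def parities(block):
    """Even-parity check bits for a block.

    block: list of frames, each frame a list of 0/1 data bits.
    Returns (frame_parity, longitudinal_parity)."""
    frame_par = [sum(frame) % 2 for frame in block]          # one f per frame
    long_par = [sum(col) % 2 for col in zip(*block)]         # one l per row position
    return frame_par, long_par

def locate_single_error(block, frame_par, long_par):
    """For a single corrupted bit, the one failing frame parity and the
    one failing longitudinal parity intersect at that bit."""
    fp, lp = parities(block)
    bad_frames = [i for i, (a, b) in enumerate(zip(fp, frame_par)) if a != b]
    bad_positions = [j for j, (a, b) in enumerate(zip(lp, long_par)) if a != b]
    if len(bad_frames) == 1 and len(bad_positions) == 1:
        return bad_frames[0], bad_positions[0]
    return None   # clean block, or a multi-bit error

block = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1]]
fp, lp = parities(block)
damaged = [frame[:] for frame in block]
damaged[1][2] ^= 1                       # corrupt one bit
assert locate_single_error(damaged, fp, lp) == (1, 2)
```

Inverting the located bit then restores the block, which is exactly the 1-bit correction described above.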
3. Now suppose two bits were corrupted.
a. If they were in the same frame, we would have 2 l's in error.
b. If they were in the same row, we would have 2 f's in error.
c. If they were in different frames and different rows, we would
have two errors each in f's and l's.
d. In any case, we would be able to detect a 2 bit error (though we
could not correct it.)
4. Finally, consider the effect of a 3 bit error. In most cases, we
would get multiple error indications in both f's and l's - letting
us know that the data is corrupt. However, there is one pathological
case to be aware of. Let e be the bits in error:
e e <- l ok here
e <- l signals error here
^ ^
| |
f ok here f signals error here
Unfortunately, this would look like a (correctable) 1 bit error.
But we would correct the wrong bit, thus ending up with a 4 bit error!
5. For this reason, this encoding scheme is called a one-bit error
correcting, two-bit error detecting scheme. Its usefulness is
based on the assumption that errors involving three or more bits are
so improbable that we can safely assume such errors won't occur.
a. Suppose that the probability of a bit being corrupted is
10^-9 (one in a billion).
i. Such an error can be corrected and processing can proceed. If
we process a billion bits per second, such a situation will
arise, on the average, once per second.
ii. The probability of a two bit error is 10^-18. Such an error
can be detected - and some alternate path can be pursued to
get the correct value. (E.g. recomputing it from an old value
and a log of transactions, or retransmitting if we are dealing
with a message over a network.) If we process a billion bits per
second, this will occur, on the average, roughly once every 32 years
(10^9 seconds).
iii. The probability of a three bit error is 10^-27. Such an error
might be detected (many would be), but could escape detection.
If we process a billion bits/second, this amounts to no more
than one undetected error in roughly 32 billion years - probably
fairly safe!
b. Note that neither this scheme, nor any other, can GUARANTEE that
an undetected error won't occur - it can only make such an error
improbable enough to make us willing to trust the system.
c. The fact that error-correcting and detecting schemes are only
probably correct means that, in some sense, computer-processed
data is never ABSOLUTELY GUARANTEED to be accurate.
III. Hamming Codes
--- ------- -----
A. Now we consider a scheme that can be used for error detection/correction
in a single word of data. The scheme is called a Hamming code. The
example we will develop is for a word length of 11 bits - but the idea
could be extended to any word size.
DISTRIBUTE HANDOUT; SHOW PROJECTABLE PAGE 1
1. Let there be n data bits (numbered d0 .. d(n-1)).
2. We will add m correcting bits, where m is the smallest integer
such that 2^m >= n + m + 1. (Example: 11 data bits - let m = 4 -
2^4 = 16 = 11 + 4 + 1.) We number these bits c0 .. c(m-1).
3. We will logically, though not necessarily physically, intersperse
the correcting bits so that their positions in the overall word
are powers of 2. (We number bits in the overall word 1 .. m + n).
Ex: 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1
d10 d9 d8 d7 d6 d5 d4 c3 d3 d2 d1 c2 d0 c1 c0
4. Let each ci be a parity check on the remaining bits in logical
positions which contain 2^i in their binary representation - e.g.
c0 checks all bits in odd numbered positions
c1 checks bits in positions 3, 6, 7, 10, 11, 14, 15
c2 checks bits in positions 5, 6, 7, 12, 13, 14, 15
c3 checks bits in positions 9 .. 15
(Observe: no c checks any other c - only d's. Thus, the c's can
be computed knowing only the d's.)
5. To store a word, add in the necessary correcting bits and store
the combination. To check a word retrieved from storage:
a. Extract the d bits.
b. Compute what the c bits should be.
c. XOR the computed c bits with the actual c bits.
i. If the result is all 0's, the stored word is ok.
ii. If the result is non-0, treat the XOR result as the binary
representation of an integer. This is the number of the bit
position where the error occurred (assuming a 1 bit error),
where bit positions must be numbered starting at 1, not 0.
Example (using odd parity): data originally:
10101010101 => 1010101_010_1__ w/slots for c's PROJECTABLE 2
Correcting bits: c0 = 0
c1 = 1
c2 = 0
c3 = 1
Stored word is 101010110100110 PROJECTABLE 3,4
- - --
Assume data bit 0 (slot 3 if bit slots are numbered starting at 1)
is corrupted in storage, so that word as read is 101010110100010
The receiver would extract the data bits 10101010100, and would
compute the c bits on this basis to be
c0 = 1 PROJECTABLE 5,6
c1 = 0
c2 = 0
c3 = 1
XORing with the c bits extracted from the stored data yields 0011 -
indicating that the error is in slot 3 of the stored data, as desired.
PROJECTABLE 7
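The whole encode/check procedure above can be sketched in Python (odd parity, 11 data bits, check bits at positions 1, 2, 4, 8; all names are mine):

```python
def hamming_encode(data: str) -> list:
    """Encode 11 data bits (given MSB-first, d10..d0) into a 15-bit
    word, returned as word[1..15]; word[0] is unused."""
    assert len(data) == 11
    word = [0] * 16
    data_positions = [15, 14, 13, 12, 11, 10, 9, 7, 6, 5, 3]   # d10 .. d0
    for bit, pos in zip(data, data_positions):
        word[pos] = int(bit)
    for i in range(4):                       # check bits c0..c3
        c = 1 << i                           # at positions 1, 2, 4, 8
        ones = sum(word[p] for p in range(1, 16) if (p & c) and p != c)
        word[c] = (ones + 1) % 2             # odd parity over covered positions
    return word

def hamming_syndrome(word: list) -> int:
    """Recompute the check bits and XOR against the stored ones; a
    nonzero result is the 1-based position of a single-bit error."""
    syndrome = 0
    for i in range(4):
        c = 1 << i
        ones = sum(word[p] for p in range(1, 16) if (p & c) and p != c)
        if (ones + 1) % 2 != word[c]:
            syndrome |= c
    return syndrome

w = hamming_encode("10101010101")
# Reproduces the stored word from the worked example above.
assert "".join(str(w[p]) for p in range(15, 0, -1)) == "101010110100110"
w[3] ^= 1                                    # corrupt data bit d0 (slot 3)
assert hamming_syndrome(w) == 3              # syndrome 0011 = slot 3, as desired
```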
B. The code we have presented gives 1 bit error correction, but could be
fooled by a 2 bit error. (We would, in fact, create a 3-bit error
by "correcting" the wrong bit!)
C. To add 2-bit error detection, we simply add a conventional parity bit.
1. In the absence of error, the parity bit will indicate no error,
and the correcting bits will show no error (exact match between
computed and stored values.)
2. A 1-bit error can always be corrected.
a. If it is in the data or correcting bits, it will cause both the
parity bit and the correcting bits to report an error, and the
correcting bits can be used to correct it.
b. If it is in the parity bit, it will cause the parity bit to report
an error, but the correcting bits will report no error. In this
case, we know that the parity bit is wrong.
3. A 2-bit error will always be detected but cannot be corrected.
a. If two data and/or correcting bits are wrong, this will cause the
parity bit to NOT report an error, but the correcting bits WILL
report an error - this is taken as an indication of a double error
that we can detect but not fix.
(There is no way to corrupt exactly two bits in such a way as to
cause the correcting bits to report no error, because each data
bit affects at least two correcting bits, and no two data bits
affect the same set of correcting bits.)
Digital Logic Gates (AND, NAND, OR, NOR, Ex-OR, Ex-NOR)
For two inputs, there are 16 ways we can assign output values. Besides AND and OR,
there are five other operations which are useful.
BUFFER
The unary buffer operation passes its input through unchanged; it is useful in the real
world for restoring signal levels and providing drive strength.
NAND
NAND (NOT - AND ) is the complement of the AND operation
NOR
NOR (NOT - OR) is the complement of the OR operation
XOR
Exclusive OR is similar to the inclusive OR except that the output is 0 when both inputs
are 1. Stated another way, the output is 1 when the modulo-2 sum of the inputs is equal
to 1.
XNOR
Exclusive NOR is the complement of the XOR operation. Alternatively, the output is 1
when the modulo-2 input sum is not equal to 1.
Minimal Logic Operator Sets
AND, OR, and NOT are all that is needed to express any combinational logic function as
a switching algebra expression. However, two other minimal logic operator sets are also
possible: NAND gates alone, or NOR gates alone. The following is a demonstration of
how just NANDs or just NORs can perform the AND, OR, and NOT operations.
NAND as a Minimal Set
NOR as a Minimal Set
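Both claims can be checked exhaustively in a small Python sketch (the gate and helper names are mine): NOT, AND, and OR are built from NAND alone and from NOR alone, then verified over every input pair.

```python
def nand(a, b): return 1 - (a & b)
def nor(a, b):  return 1 - (a | b)

# NOT, AND, OR from NAND alone
def not_n(a):    return nand(a, a)
def and_n(a, b): return not_n(nand(a, b))
def or_n(a, b):  return nand(not_n(a), not_n(b))

# NOT, AND, OR from NOR alone
def not_r(a):    return nor(a, a)
def or_r(a, b):  return not_r(nor(a, b))
def and_r(a, b): return nor(not_r(a), not_r(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_n(a, b) == and_r(a, b) == (a & b)
        assert or_n(a, b) == or_r(a, b) == (a | b)
    assert not_n(a) == not_r(a) == 1 - a
```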
Boolean algebra
Due to historical reasons, digital circuits are called switching circuits, digital circuit
functions are called switching functions, and the algebra is called switching algebra. The
algebraic system known as Boolean algebra is named after the mathematician George
Boole, who invented a discrete two-valued algebra (1854); E. V. Huntington developed
its postulates and theorems (1904). Historically, the theory of switching networks (or
systems) is credited to Claude Shannon, who applied mathematical logic to describe
relay circuits (1938). Relays are electromechanically controlled switches, and they have
since been replaced by electronically controlled switches called logic gates. A special
case of Boolean algebra known as switching algebra is a useful mathematical model for
describing combinational circuits. In this section we will briefly discuss how Boolean
algebra is applied to the design of digital systems.
Examples of Huntington's postulates are given below:
Closure
If X and Y are in set (0, 1) then operations are also in set (0, 1)
Identity
Distributive
Complement
Note that for each property, one form is the dual of the other; (zeros to ones, ones to zeros, '.'
operations to '+' operations, '+' operations to '.' operations).
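These postulates are standard; written out in the usual switching-algebra notation, with the dual forms side by side, they are:

```latex
\begin{align*}
\text{Identity:}     \quad & X + 0 = X
                           & X \cdot 1 &= X \\
\text{Distributive:} \quad & X \cdot (Y + Z) = X\,Y + X\,Z
                           & X + Y \cdot Z &= (X + Y)(X + Z) \\
\text{Complement:}   \quad & X + \overline{X} = 1
                           & X \cdot \overline{X} &= 0
\end{align*}
```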
From the above postulates the following theorems could be derived.
Associative
Idempotence
Absorption
Simplification
Consensus
Adjacency
Demorgans
In general form
Very useful for complementing function expressions.
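For example, DeMorgan's laws, (X + Y)' = X'·Y' and (X·Y)' = X' + Y', can be checked exhaustively over both values of each variable (a minimal sketch):

```python
from itertools import product

for x, y in product((0, 1), repeat=2):
    # (X + Y)' == X' . Y'
    assert 1 - (x | y) == (1 - x) & (1 - y)
    # (X . Y)' == X' + Y'
    assert 1 - (x & y) == (1 - x) | (1 - y)
```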
Switching Algebra Operations
A set is a collection of objects (or elements); for example, Z = {0, 1} means that Z is a
set containing two elements distinguished by the symbols 0 and 1. There are three
primary operations: AND, OR, and NOT.
NOT
It is a unary complement, or inversion, operation. It is usually shown as an overbar;
the prime (X') and '!' notations are also common.
AND
Also known as the conjunction operation; the output is true (1) only if all inputs are
true. Algebraic operators are '.', '&', and '∧'.
OR
Also known as the disjunction operation; the output is true (1) if any input is true.
Algebraic operators are '+', '|', and '∨'.
AND and OR are called binary operations because they are defined on two operands X
and Y. NOT is called a unary operation because it is defined on a single operand X. All
of these operations are closed. That means if one applies an operation to elements of the
set Z = {0, 1}, the result will always be an element of the set Z and not something else.
Like standard algebra, switching algebra operators have a precedence of evaluation. The
following rules are useful in this regard.
NOT operations have the highest precedence
AND operations are next
OR operations are lowest
Parentheses explicitly define the order of operator evaluation, and it is good practice to
use parentheses, especially in situations that can cause doubt.
Note that in Boolean algebra the operators AND and OR are not linear group operations;
one cannot solve equations by "adding to" or "multiplying" on both sides of the equal
sign, as is done with real or complex numbers in standard algebra.
Basic theorems and properties
These are the postulates (closure, identity, distributive, complement) and the derived
theorems (associative, idempotence, absorption, simplification, consensus, adjacency,
DeMorgan's) already listed above; for each, one form is the dual of the other.
Boolean functions
Introduction
The study of building a computer begins with digital logic design. Nearly every computer is built
using digital logic. There may be an occasional unusual computer (say, neural networks), but
most of those are usually found in universities and research labs.
Compared to an electrical engineering curriculum, we'll study digital logic design in a fraction of
the time. How can we study it so quickly? Two reasons. First, we'll go over the material quicker
than in an electrical engineering course. Second, we'll leave out some topics (most notably,
circuit minimization and race conditions).
Just like the study of mathematical logic is done in two steps (propositional logic, followed by
predicate calculus), we'll also study digital logic in two steps. First, we talk about combinational
logic, then sequential logic.
I like to think of combinational logic circuits as implementations of Boolean functions. Think of
Boolean functions as an abstraction, and combinational logic circuits are the implementation of
that abstraction.
Of course, you can also think of combinational logic circuits as an abstraction, too, and the actual
silicon as the implementation. This goes to show you that there can be many levels of
abstraction, even in hardware.
We begin by discussing Boolean functions.
Boolean Functions
To understand Boolean functions, you need a basic understanding of set theory - at
least as much set theory as you'd see in an introductory-level discrete math course.
First, let's define the set B = {0, 1}, where B is the set of Boolean values.
Instead of using true and false, we'll use 0 (for false) and 1 (for true). While this choice may
appear to be purely representational (i.e., we're using 0 and 1 as shorthand for false and true),
using numerals 0 and 1 also serves a second practical purpose: we can treat the 0's and 1's as
numbers in a representation system, and eventually do math with the 0's and 1's.
We define
B^k = B1 x B2 x ... x Bk
where each Bi = B. This is the Cartesian cross product of k sets.
B^k is the set of all k-tuples (b1, b2, ..., bk) such that bi is either 0 or 1 for all 1 <= i <= k.
A k-tuple is basically a k-dimensional coordinate.
Now that sounds mathematical, doesn't it? However, you can also think of B^k as the set
of all k-bit bitstrings. That should make this set much easier to understand. As you
should know by now, B^k has size 2^k, since it's basically the set of all possible k-bit
bitstrings.
Now we can write a definition for Boolean functions.
Definition: A Boolean function is a function B^k -> B^m, i.e., a mapping from k-bit
inputs to m-bit outputs, where k >= 0 and m > 0.
Recall what a function is, from your discrete math courses.
A function consists of:
a domain (the set being mapped from),
a codomain (the set being mapped to),
a mapping from the domain to the codomain.
Usually, for each element in the domain, the mapping assigns one element of the codomain. If
this happens, the function is said to be total.
Sometimes, there are some elements in a domain that are not assigned at all. Thus, the mapping
for these elements are unknown. Such functions are said to be partial because not every element
in the domain is mapped.
For finite domains and codomains, we can draw pictures.
Here are a few observations on the above diagram.
Each dot represents an element in the set.
Each dot in the domain has exactly one outgoing arrow. More than one outgoing arrow per dot is
not permitted.
If there is at least one dot with zero outgoing arrows in the domain, the function is partial.
However, each dot in the codomain can have any number of arrows pointing to it, including no
arrows (0, 1, or more).
The domain and codomain do not have to have the same number of elements.
The domain and codomain can be the same set. They do not have to be different sets.
B^k -> B^m is the set of all functions that map k-bit bitstrings (which is the domain of a
single function in the set) to m-bit bitstrings (which is the codomain of a single function
in this set). Remember, B^k -> B^m is a set of functions, not one single function.
This set is quite large, as you might imagine. The total number of functions is
2^(2^k * m). The way we come up with this answer is based on truth tables.
Truth Tables
When you study programming, you learn about functions. In a programming language, a
function usually computes the output, given an input. The idea of computation is so ingrained,
that it's hard to imagine a function being defined without computation.
Yet, fundamentally, a (mathematical) function is simply something that maps inputs to outputs.
There's no need to explain how this mapping is done. Of course, in reality, we worry about how
to perform this mapping using computation because it gives us a compact way to represent the
mapping. Furthermore, it's also its own study: algorithms is the study of efficient
computable functions that solve problems.
You may think "I've never seen a mapping before", but you have. Truth tables! Truth tables are
one way to define a Boolean function.
Let's look at an example of a truth table. We're going to follow the following convention. Input
variables start with the letter x, possibly with some numeric subscripts. Output variables start
with the letter z, possibly with some numeric subscripts.
x2 x1 x0 z2 z1 z0
0 0 0 0 0 0
0 0 1 1 0 0
0 1 0 0 1 0
0 1 1 1 1 0
1 0 0 0 0 1
1 0 1 1 0 1
1 1 0 0 1 1
1 1 1 1 1 1
Since there are 3 input variables (i.e., 3 bits), there are 2^3 = 8 possible 3-bit patterns.
Thus, there are 8 rows.
This truth table is a function of type B^3 -> B^3. That is, it's a function which is an
element of the set of functions from 3-bit inputs to 3-bit outputs.
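A truth table like the one above can be stored directly as a mapping from input tuples to output tuples; as it happens, this particular function simply reverses the order of the input bits (that observation is mine, not the text's):

```python
from itertools import product

# The truth table above, row by row: (x2, x1, x0) -> (z2, z1, z0)
TABLE = {
    (0, 0, 0): (0, 0, 0),
    (0, 0, 1): (1, 0, 0),
    (0, 1, 0): (0, 1, 0),
    (0, 1, 1): (1, 1, 0),
    (1, 0, 0): (0, 0, 1),
    (1, 0, 1): (1, 0, 1),
    (1, 1, 0): (0, 1, 1),
    (1, 1, 1): (1, 1, 1),
}

# Every 3-bit input appears exactly once, so the function is total.
assert set(TABLE) == set(product((0, 1), repeat=3))
# This particular mapping reverses (x2, x1, x0) into (z2, z1, z0).
assert all(out == tuple(reversed(inp)) for inp, out in TABLE.items())
```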
Number of Boolean Functions
The number of Boolean functions is 2^(2^k * m). This is a very large number of
functions, whose size primarily depends on the number of input variables.
Let's look at the previous truth table.
x2 x1 x0 z2 z1 z0
0 0 0 - - -
0 0 1 - - -
0 1 0 - - -
0 1 1 - - -
1 0 0 - - -
1 0 1 - - -
1 1 0 - - -
1 1 1 - - -
The rows with the numbers are all possible 3-bit input values. A function maps each one of the
rows to some 3-bit output value. I've replaced the outputs with dashes. Pick any combination of
0's and 1's to replace the dashes. Each combination represents a function.
The question is how many possible ways are there of filling out the dashes with 0's and 1's?
To answer this we need to count the number of dashes. This turns out to be the number of rows
times the number of columns.
How many rows are there? The number of rows is 2^k given k input bits. That makes
sense because we are attempting to map every possible k-bit value to an m-bit value,
and the total number of k-bit values is 2^k.
How many columns are there? There are as many columns as bits used for the output. In this
case, there are m bits used for the output.
Therefore, there are 2^k * m slots with dashes in them. Each of those can be filled with
either a 0 or a 1. So, think of all those slots as one very large bitstring, that is, a
bitstring with 2^k * m bits.
And how many possible bitstring patterns are there for n bits? There are 2^n. In this
case, n = 2^k * m, so plug it in and you get: 2^(2^k * m)
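The count 2^(2^k * m) can be verified by brute force for small k and m: enumerate every way of filling in the output columns of a truth table (a sketch; the function name is mine):

```python
from itertools import product

def count_boolean_functions(k: int, m: int) -> int:
    """Enumerate all functions B^k -> B^m as filled-in truth tables."""
    rows = list(product((0, 1), repeat=k))       # the 2^k input rows
    outputs = list(product((0, 1), repeat=m))    # the 2^m choices per row
    # One function = one independent choice of output for each row.
    return sum(1 for _ in product(outputs, repeat=len(rows)))

# 2^(2^k * m): k=2, m=1 gives 2^(4*1) = 16; k=1, m=2 gives 2^(2*2) = 16.
assert count_boolean_functions(2, 1) == 2 ** (2 ** 2 * 1) == 16
assert count_boolean_functions(1, 2) == 2 ** (2 ** 1 * 2) == 16
```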
Summary
Here are some concluding facts:
Boolean functions map k-bit bitstrings to m-bit bitstrings, where k >= 0 and m >= 1. If
k == 0, that just means it is a function without any arguments. Effectively, this means
the Boolean function is a constant, i.e., it always returns the same m-bit output.
Functions map a k-bit value to a m-bit value. This mapping does not have to be unique. That is,
many different k-bit bitstrings can map to the same m-bit bitstring. If it is unique, the function is
said to be one-to-one.
Truth tables are one way of specifying a Boolean function.
There are 2^(2^k * m) possible functions with k-bit inputs and m-bit outputs.
Combinational logic circuits implement Boolean functions. These are circuits with k-bit inputs
and have m-bit outputs.
Canonical and standard forms
The important terms we are discussing in this section are
Logic Expression - a mathematical formula consisting of logical operators and variables.
Logic Operator - a function that gives a well defined output according to switching algebra.
Logic Variable - a symbol representing the two possible switching algebra values of 0 and 1.
Logic Literal - the values 0 and 1, or a logic variable or its complement.
The analysis means a digital circuit is given and we are asked to determine its input-output
relationship (its purpose, operation, what it does). One studies the circuit and then states the
input-output relationship of the circuit in text or on a truth table or on an operation table or on an
operation diagram. Synthesis means an input-output relationship is given and we are
asked to design the digital circuit. The input-output relationship is a very crucial
component of digital circuit study. Complex digital circuits are broken into blocks that
are analyzed/designed individually, and finally the whole circuit is analyzed/designed.
Combinational circuit analysis starts with a schematic and answers the following questions:
What is the truth table(s) for the circuit output function(s)
What is the logic expression(s) for the circuit output function(s)
Two types of analyses are possible: literal analysis and symbolic analysis. Literal
analysis is the process of manually assigning a set of values to the inputs, tracing the
results, and recording the output values. For n inputs there are 2^n possible input
combinations. From the input values, gate outputs are evaluated to form the next set of
gate inputs, and evaluation continues until the gate outputs are circuit outputs. Literal
analysis only gives us the truth table.
Symbolic analysis also starts with the circuit diagram like literal analysis. But instead of
assigning values, gate output expressions are determined. Intermediate expressions are combined
in following gates to form complex expressions. Symbolic analysis is more work but gives us
complete information of both the truth table and logic expression.
Now we will consider an example for the analysis of combinational logic circuit shown in Fig. 3.
Figure 3
Analyzing this circuit, it can be seen that
Output of Gate G1 = AB
Output of Gate G2 = CD
Output of Gate G3 = AB + CD
From this we could then construct a truth table (Table 3) to calculate the output of the circuit.
The truth table is constructed by considering the output of each gate in turn and then building up
towards the complete output.
Alternatively, the output of the circuit can be evaluated by substituting values directly into the
logic equation.
For example, when A = 1, B = 1, C = 1, D = 0
then Y = AB + CD = 1·1 + 1·0 = 1 + 0 = 1
This can then be repeated for all other input combinations.
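That literal analysis can be sketched in Python, evaluating Y = AB + CD over all 16 input combinations (the names are mine):

```python
from itertools import product

def y(a, b, c, d):
    """Y = AB + CD: two AND gates feeding an OR gate."""
    return (a & b) | (c & d)

# Literal analysis: build the full truth table.
truth_table = [(a, b, c, d, y(a, b, c, d))
               for a, b, c, d in product((0, 1), repeat=4)]

assert y(1, 1, 1, 0) == 1                         # the worked example above
assert sum(row[-1] for row in truth_table) == 7   # Y is 1 in 7 of the 16 rows
```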
The analysis is followed by synthesis, i.e., we consider how to design and implement a
logic circuit so that it performs the desired, specified operation. In this instance, we
start with the equation and determine the circuit to implement it. For example, consider
the logic function
X = AB + CDE
This is composed of two terms, AB and CDE . The first term is formed by ANDing A and B and
the second term is formed by ANDing together C , D and E . These two terms are then ORed
together. This can then be implemented using the AND and OR gates, as shown in Fig. 4.
Generally, as the number of levels are increased, the overall delay is increased due to the
contribution of propagation delays at each gate.
Figure 4: Implementation of X=AB+ CDE
Canonical and Standard forms
A binary variable may appear either in its true form or its complemented form. For n
variables, the maximum number of input variable combinations is given by N = 2^n.
AND gate, each of the N logic expressions formed is called a standard product or
minterm. As indicated in Table 1-13, the binary digit '1' is taken to represent a given
variable in its true form and '0' its complement. Also from Table 1-13, note that each
minterm is assigned a symbol Pj, where j is the decimal equivalent of the binary
number of the minterm designated.
Similarly, if we consider an OR gate, each of the N logic expressions formed is called a
standard sum or maxterm. In this case the binary digit '1' is taken to represent a given
variable in complemented form and '0' its true form. As shown in Table 1-13, a symbol
Sj is assigned to each maxterm, where j is the decimal equivalent of the binary number
of the maxterm designated.
Also observe that each maxterm is the complement of its corresponding minterm, and vice versa.
The minterms and maxterms may be used to define the two standard forms for logic expressions,
namely the sum of products (SOP), or sum of minterms, and the product of sums (POS), or
product of maxterms. These standard forms of expression aid the logic circuit designer by
simplifying the derivation of the function to be implemented. Boolean functions expressed as a
sum of products or a product of sums are said to be in canonical form. Note the POS is not the
complement of the SOP expression.
SUM OF PRODUCTS (OR of AND terms)
The SOP expression is the equation of the logic function, as read off the truth table, that
specifies the input combinations for which the output is a logical 1. To illustrate, let us
consider Table 1-14.
Analysis can also be categorized into functional analysis (determining what is
computed) and timing analysis (determining how long it takes to compute it). The logic
expression is manipulated using Boolean (or switching) algebra and optimized to
minimize the number of gates needed, or to use specific types of gates.
Observe that the output is high for the rows labelled 3, 5 and 6. The SOP expression for
this circuit is thus given by:
F = P3 + P5 + P6
Each product (AND) term is a minterm: an ANDed product of literals in which each
variable appears exactly once, in true or complemented form (but not both). Each
minterm has exactly one '1' in the truth table. When minterms are ORed together, each
minterm contributes a '1' to the final function. Note that not all product terms are
minterms.
PRODUCT OF SUMS (AND of OR terms)
The POS expression is the equation of the logic function, as read off the truth table, that
specifies the input combinations for which the output is a logical 0. To illustrate, let us
again consider Table 1-14.
Observe that the output is low for the rows labeled 0, 1, 2, 4 and 7. The POS expression
for this circuit is thus given by:
F = S0 · S1 · S2 · S4 · S7
Each sum (OR) term is a maxterm: an ORed sum of literals in which each variable
appears exactly once, in true or complemented form (but not both). Each maxterm has
exactly one '0' in the truth table. When maxterms are ANDed together, each maxterm
contributes a '0' to the final function. Please note that not all sum terms are maxterms.
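Both standard forms can be built and compared for the example function above, with minterms 3, 5, 6 and maxterms 0, 1, 2, 4, 7 (a sketch; the helper names are mine, and P/S from the text correspond to `minterm`/`maxterm` here):

```python
from itertools import product

def minterm(j, n, xs):
    """AND of n literals; equals 1 only when inputs xs (MSB first)
    spell out j in binary."""
    result = 1
    for i, x in enumerate(xs):
        bit = (j >> (n - 1 - i)) & 1
        result &= x if bit else 1 - x     # true form for 1, complement for 0
    return result

def maxterm(j, n, xs):
    """OR of n literals; equals 0 only when inputs spell out j."""
    result = 0
    for i, x in enumerate(xs):
        bit = (j >> (n - 1 - i)) & 1
        result |= (1 - x) if bit else x   # complement for 1, true form for 0
    return result

def f_sop(xs):  # sum of products: OR of minterms 3, 5, 6
    return max(minterm(j, 3, xs) for j in (3, 5, 6))

def f_pos(xs):  # product of sums: AND of maxterms 0, 1, 2, 4, 7
    return min(maxterm(j, 3, xs) for j in (0, 1, 2, 4, 7))

# The two standard forms describe exactly the same function.
for xs in product((0, 1), repeat=3):
    index = xs[0] * 4 + xs[1] * 2 + xs[2]
    expected = 1 if index in (3, 5, 6) else 0
    assert f_sop(xs) == f_pos(xs) == expected
```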