Propositional Logic
A. Statements and Logical Expressions Read Epp: pp. 1-8
Don't rush through the topics in this section, because most of the remaining material in the module is made up from the pieces presented here. Study these topics until you can do the exercises with confidence.
Try to get a clear understanding of the difference between a statement (defined on page 2) and a sentence; it will help you throughout the module. The two terms are often used interchangeably in an everyday sense when we speak or write, but logic requires greater precision. For example, "Discrete Mathematics with Applications is a good textbook" is a sentence but not a statement, while "Discrete Mathematics with Applications is a textbook" is both a sentence and a statement. What is the difference between a statement and a sentence?
A sentence that is not a statement could be true sometimes (or under some conditions) and false at other times (or under different conditions). A sentence that is a statement must have only one of the values true or false at all times (or under all conditions).
We can assign a variable (a letter) such as p to a statement, then say that p can be T (true) or F (false). Once we do that, it is possible to talk about statements in the abstract as p or q or r without using English to express them. They are variables with values and the values are true or false. Try exercise 5 on page 15 to get an idea of what is involved.
If p is a statement, then not p (~p) is also a statement. Its truth values are the opposite of p's truth values. If p and q are statements, then their conjunction (p and q) and disjunction (p or q) are also statements. Truth tables for not, and, and or are representations of the definitions of negation, conjunction, and disjunction. In general, truth tables are convenient representations of the truth values of statements. Truth tables give all possible true-false values of a statement.
Any statement that combines simple statements (p, q, r) with logical operators (ands, ors, nots) is called a compound statement. The representation of a compound statement in symbols is called a logical expression, or statement form, and is denoted by an upper-case letter such as P, Q, or R. Such an expression or form also has a truth table to represent all possible true-false values.
Once the basic rules and definitions are known, the construction of a truth table can be a mechanical process. To begin with, a truth table for a simple statement p has two rows, because p has only two possible values: true and false. A truth table for compound statements made up from p and q has four rows, because for each value of p there are two possible values of q. A truth table for compound statements made up from p, q, and r has 2 x 2 x 2 = 8 rows. Why? A truth table for a compound statement can be constructed in steps.
After studying the definitions and working through the examples on pages 4-8, try exercises 6-10 and 12-16 on page 15. Begin the construction of each truth table by completing columns for p, q, and r separately before working on the ands, ors, and nots. Then, allow a column for each logical operation (and, or, not) in the order of its evaluation, as shown in the example. Note that parentheses have the same effect as in algebra: operations inside parentheses are performed first.
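If you want to check your hand-built tables, the column-by-column process is easy to mechanize. Here is a minimal Python sketch (our own illustration, not from Epp) that prints the eight rows for a form such as (p and q) or ~r:

    from itertools import product

    # Truth table for the compound form (p and q) or (not r), built column by column.
    for p, q, r in product([True, False], repeat=3):   # 2 x 2 x 2 = 8 rows
        p_and_q = p and q                               # intermediate column
        result = p_and_q or (not r)                     # final column
        print(p, q, r, p_and_q, result)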
B. Logical Equivalence, Tautologies, and Contradictions Read Epp: pp. 8-14
Truth tables can help clarify the three main topics in this section. Construct the truth tables for two statement forms P and Q, then examine the results. If the truth tables are identical, then the statement forms are logically equivalent. In that case, one statement form can be substituted for another without affecting the results. This substitution can simplify complicated expressions, making them easier to understand. Logical equivalence of expressions can also be used to simplify a computer program or a digital logic circuit, discussed later in this module.
If the same row of the two truth tables has different values (one T and the other F), then the two statement forms are not logically equivalent. Examples 1.1.7 and 1.1.8 illustrate non-equivalence and equivalence. You should be able to establish the equivalencies in Theorem 1.1.1 by using truth tables. To test yourself, try exercises 17-32 on pages 15 and 16.
A statement form is a tautology when its truth table contains T (or 1) in every row of the column showing the final result, meaning that the statement is always true regardless of the values of any of the simple statements from which it is composed. A statement form is a contradiction when its truth table contains F (or 0) in every row of the column showing the final result. By definition, a contradiction can never be true. To test your understanding of these concepts, work through the examples on pages 12 and 13, and then try exercises 37-40 on page 16.
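Checking equivalence, tautologies, and contradictions can be mechanized the same way: compare final columns, or test whether a single final column is all T (or all F). A minimal Python sketch, using De Morgan's law and the forms p or ~p and p and ~p as our own illustrations:

    from itertools import product

    rows = list(product([True, False], repeat=2))           # the 4 rows for p and q

    # Logical equivalence: the two final columns agree in every row (De Morgan's law).
    equivalent = all((not (p and q)) == ((not p) or (not q)) for p, q in rows)

    # Tautology: true in every row. Contradiction: false in every row.
    is_tautology     = all(p or (not p) for p, q in rows)
    is_contradiction = all(not (p and (not p)) for p, q in rows)

    print(equivalent, is_tautology, is_contradiction)       # True True True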
Exercise 50 on page 16 provides an example of a complicated set of criteria that can be simplified, or at least clarified somewhat, by using propositional logic. There are five simple statements in the exercise. How many rows will be in the truth table, assuming all five statements are used in a compound condition for terminating membership?
C. Conditional Statements Read Epp: pp. 17-27
The if-then construction is frequently used in writing, programming, and conversation. It is a particularly troublesome construction in logic because its definition as depicted by a truth table doesn't conform to our usual way of thinking about the statement.
For example, we don't like to think of the statement "If it is raining, then I will get wet" as true when, in fact, it is not raining. But in logic, the conditional statement "If it is raining, then I will get wet" is defined to have the value T when the statement "It is raining" has the value F. The key ingredient here is "defined." There is no real explanation necessary (or, perhaps, possible) for a definition. Note that the author tries to convince you by giving an example at the bottom of page 17, but the definition (and truth table) on page 18 is all that matters, whether you are convinced or not. Before proceeding further, carefully work through the examples on pages 18-20 and try exercises 1-8 on page 27.
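It may help to notice that the defined truth table for "if p then q" is identical to that of ~p or q, so the statement is automatically true whenever p is false. A two-line Python check (our own illustration):

    from itertools import product

    implies = lambda p, q: (not p) or q      # the defined truth table of "if p then q"
    for p, q in product([True, False], repeat=2):
        print(p, q, implies(p, q))           # every row with p False comes out True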
Study the definitions of contrapositive, converse, inverse, only if, and biconditional on pages 21-24 and use the truth tables to distinguish among the terms. Try exercises 8-19 on page 27 to help fix the concepts in your mind.
You will often encounter the terms necessary condition, sufficient condition, and necessary and sufficient condition in mathematics and in technical writing. The terms are alternatives to the if-then forms you have already studied. Study the definitions of these terms given on page 25. Work through the examples on that page, then try exercises 38-40 on page 28. You should also read the "Remarks" section on pages 26 and 27 to better understand some of the distinctions between logic and informal-language usage.
D. Arguments Read Epp: pp. 28-39
Before you begin this section, try to clear your mind of the usual thoughts that occur when you hear or use the word "argument." Then read Epp's definition of argument. In logic, the main concern is whether an argument is valid (or invalid) as Epp defines it. This use is quite different from what "argument" usually implies in informal language. One example of a valid argument is a proof of a theorem in mathematics.
Although there are many terms and issues in this section, you will be expected to know only how to determine whether an argument is valid by using truth tables. The steps to follow are listed at the bottom of page 29 and two examples are given on page 30.
One way to save time in determining the validity of an argument is to know a collection of valid argument forms, such as those on pages 31-33 (also see the summary on page 39). You will be expected to recognize and use only the two most common forms, modus ponens and modus tollens, defined on pages 31 and 32. You will also be expected to recognize the contradiction rule argument form on pages 37 and 38. Try exercises 1-11 on pages 39 and 40 to assess your understanding.
You will not be expected to know the material on pages 34-39. There are, however, some interesting examples on these pages. You may have encountered in puzzle books the type of logical problem in examples 1.3.9 and 1.3.15.
Also, note the definition of fallacy on page 35 and the CAUTION! on pages 36 and 37 that includes the examples of a valid argument having a false conclusion and an invalid argument having a true conclusion. Examples of logical misuse are all too common in discussions, in the media, or in advertisements.
E. Application Read Epp: pp. 41-54
This section connects logical expressions, truth tables, and digital logic circuits, such as those in computer systems. Pages 41-44 provide background information related to the topics in the rest of the section.
The input and output of digital logic circuits, also called combinational circuits, are represented by the binary digits 1 and 0 in I/O tables (instead of the T and F in truth tables of logical expressions). P, Q, and R stand for inputs with values 1 (T) and 0 (F) in some part of a digital logic circuit. These inputs pass through and, or, and not gates. The output (1 or 0) at each gate is determined by the values entering the gate and the type of gate.
For example, if P and Q are inputs to an and gate, and P = 1 and Q = 0, then the output P AND Q is 0 (just as the logical expression p and q has the value F when p is T and q is F).
An I/O table for a digital logic circuit is constructed by evaluating
outputs for all possible inputs of 1 and 0 as they pass through a
sequence of and, or, and not gates, just as the truth table of a
compound logical form is obtained by evaluating each simple form
in sequence for all possible true-false values.
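As a concrete illustration of how an I/O table is generated, the sketch below models a small hypothetical circuit whose output is (P AND Q) OR (NOT R); the gate layout is our own example, not one from the text:

    from itertools import product

    # Each gate is a small function on the binary values 1 and 0.
    and_gate = lambda a, b: a & b
    or_gate  = lambda a, b: a | b
    not_gate = lambda a: 1 - a

    # I/O table for a circuit whose output is (P AND Q) OR (NOT R).
    print("P Q R | output")
    for p, q, r in product([1, 0], repeat=3):
        output = or_gate(and_gate(p, q), not_gate(r))
        print(p, q, r, "|", output)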
The term Boolean expression applies to any expression consisting of
(1) variables that take binary values and (2) and, or, and not
operators. A digital logic circuit can be represented by a Boolean
expression as shown in example 1.4.3. A circuit, as you have already
learned, can also be represented by an I/O table. It seems reasonable,
therefore, that a Boolean expression can be represented by an I/O
table. Finally, it follows that given any one of the three
entities--digital logic circuit, a Boolean expression, or an I/O
table--the other two entities can be determined. Work through examples
1.4.4 and 1.4.5, then try exercises 5-21 on pages 55 and 56.
The comparison of logic and circuits can be continued. Equivalence
plays an important role. When the truth tables of two logical forms
are identical, the forms are equivalent and either form can replace
the other. When two digital logic circuits have identical outputs in
their I/O tables, then the circuits are equivalent and the simpler one
can replace the more complicated one, resulting in an increase in
speed in any system using the circuit. See pages 52 and 53 for an
example, then try any of the exercises 26-30 on pages 56 and 57,
according to your own interest.
***************************************************************
Predicate Logic
A. Basic Concepts
Read Epp: pp. 75-77
Predicate logic combines the notion of predicate from natural language
with set and functional notation from mathematics. Here is an example:
Assume the predicate P represents the phrase "is currently a student
in CMIS 160." For what values of x is the statement P(x): "x is
currently a student in CMIS 160" true? Without thinking much about it,
your answer is probably something like "all of the names of students
on the CMIS 160 class list." For example, your name is one value of x
for which the statement is true. More specifically, if Mary is a
current student in CMIS 160, then P(Mary): "Mary is currently a student
in CMIS 160" has the value T.
Let's formalize the above ideas and add notation.
Let P stand for the words "is currently a student in CMIS 160"
Let S be the set of all current students in UMUC
Let C be the set of all current students on the CMIS 160 class list
Then P(x) is "x is currently a student in CMIS 160." If a student x is
in S but not in C, then the value of P(x) is false. For every student
x who is in C, P(x) has the value T. Another way of saying this is:
"the truth set of P(x) is C." In notation, the truth set of P(x) is
{x ∈ S | P(x)},
where the symbol | is read "such that."
This notation appears in the definition on page 77, but first study
the definitions of predicate and domain on page 76. Note that truth
tables are not involved here. Instead you are dealing with sets of
elements for which a predicate is true.
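In programming terms, a truth set is simply a filtered set. A short Python sketch using made-up student names (the sets below are hypothetical data, not UMUC records):

    # Hypothetical data: S is the set of all current students, C is the CMIS 160 class list.
    S = {"Mary", "Ahmed", "Lena", "Tom", "Priya"}
    C = {"Mary", "Tom", "Priya"}

    P = lambda x: x in C                  # the predicate "x is currently a student in CMIS 160"

    truth_set = {x for x in S if P(x)}    # {x in S | P(x)}
    print(truth_set == C)                 # True: the truth set of P(x) is exactly C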
Study the notation in the blue-shaded box on page 77. Note the
similarity in concept between the arrow and subset. In particular,
logic uses the single-direction arrow to indicate that one truth set
is contained in (is a subset of) the other, and the double-direction
arrow to indicate identical truth sets.
B. Quantifiers
Read Epp: pp. 77-80
"For all" (∀) is the universal quantifier and "there exists" (∃) is the existential quantifier. A universally quantified statement asserts that a predicate is true for every element of its domain; an existentially quantified statement asserts only that the predicate is true for at least one element.
As in your work with sets, finding a counterexample is a standard and
effective way to establish the falsity of a universal statement. For
finite sets, the method of exhaustion is also a standard and effective
way to establish the truth of a universal statement. In comparison,
for statements using the existential quantifier, you only need to find
one element to establish truth, but all elements to establish falsity.
Using the example above,
Let P stand for the words "is currently a student in CMIS 160"
Let S be the set of all current students in UMUC
Let C be the set of all current students on the CMIS 160 class list
To show the truth of the existential statement ∃x ∈ S such that P(x),
all you need to do is identify one student who is currently in CMIS 160.
To show the truth of the universal statement ∀x ∈ C, P(x),
you need to check every student in C, the entire CMIS 160 class list (the method of exhaustion).
However, to show the falsity of the universal statement ∀x ∈ S, P(x),
you only need to find one student in S who is not on the CMIS 160 class list (a counterexample).
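Python's built-in any and all mirror the existential and universal quantifiers on finite sets, which makes the asymmetry above easy to experiment with (same hypothetical data as before):

    S = {"Mary", "Ahmed", "Lena", "Tom", "Priya"}   # all current students (hypothetical data)
    C = {"Mary", "Tom", "Priya"}                    # the CMIS 160 class list (hypothetical data)
    P = lambda x: x in C

    print(any(P(x) for x in S))   # "there exists x in S such that P(x)": one example is enough
    print(all(P(x) for x in C))   # "for all x in C, P(x)": checked by exhausting the finite set C
    print(all(P(x) for x in S))   # "for all x in S, P(x)": False, because a counterexample exists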
After working through examples 2.1.4 and 2.1.5, you should begin to
appreciate the use of notation in logic. Such notation gives precise
form and meaning to sometimes confusing informal language
sentences. Try exercises 8-10 on page 87 to acquire some facility with
translating between formal and informal language.
C. Universal Conditional Statements and Negations
Read Epp: pp. 80, 81, 83-87
Combining the universal quantifier with a conditional statement
produces a form that is commonly encountered in mathematical
writing. Examples 2.1.6 and 2.1.7 illustrate the informal and formal
versions. Try exercises 13-16 on page 88 to test your understanding
and to gain some practice in writing formal and informal versions of
the universal conditional statement.
Negations of quantified statements, especially the universal
conditional statement, also are important in informal and formal
discourse. On pages 83 and 84, study the two theorems and Epp's
informal statements describing them (in bold). Work through examples
2.1.10-2.1.12. Pay particular attention to the CAUTION! on page 85.
You can find in the media examples of the type of misuse of negation
described in the CAUTION! Look for statements such as "All _____ not
______" (e.g., "All is not well in this country" or "All humans are
not equal"). The misuse is much more frequent than you might expect.
Negations of universal conditional statements are a bit more
complicated. The symbolic form of a negation is given on page
86. After working through example 2.1.13, try exercises 19, 24-28, 30,
and 34. The "Vacuous Truth of Universal Statements" section provides
some interesting reading.
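The symbolic negation on page 86 says, roughly, that the negation of "for all x, if P(x) then Q(x)" is "there exists an x such that P(x) and not Q(x)." On a finite domain you can verify this pattern directly; the domain and predicates below are our own toy example:

    D = range(1, 21)                 # a small domain of integers
    P = lambda x: x % 2 == 0         # "x is even"
    Q = lambda x: x % 4 == 0         # "x is divisible by 4"

    statement = all((not P(x)) or Q(x) for x in D)    # for all x, if P(x) then Q(x)
    negation  = any(P(x) and not Q(x) for x in D)     # there exists x with P(x) and not Q(x)

    print(statement, negation)       # False True: x = 2 is even but not divisible by 4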
D. Multiple Quantifiers, Relationships among Quantifiers, Universal Conditional Statement Variants, and Prolog
Read Epp: pp. 89-97
Here we expand predicate logic topics and concepts introduced
previously. Pages 89-92 acquaint you with writing statements
containing two quantifiers: combinations of ∀ and ∃.
Example 2.2.4 illustrates a technique for finding truth values of
multiply quantified statements. Note that the example does not
generate a truth table. Exercise 20 on pages 97 and 98 continues this
example.
Study the discussion of negating multiply quantified statements on
pages 90-92, including example 2.2.5. Sometimes the negation of a
statement can be better understood than the statement itself. Try part
(b) of exercises 12, 15, and 16 on page 97 to test your understanding.
You will not be responsible for the rest of the material of pages
92-97, but read it to gain some appreciation for the complexity
involved. In particular, note the brief Prolog section; it suggests
how this computer language might be applied to logical problems.
***************************************************************
Methods of Proof
Direct Proof and Counterexample
Read Epp: pp. 112-121
The topics in this section apply more to mathematics than to
computing. Although these topics are beyond the scope of this course,
you should read the pages to become acquainted with the ideas and
discussions that are given. In particular, skim the following topics:
the introductory material on pages 112 and 113
the notion of constructive proofs of existence on pages 115 and 116
the limitations of proof by exhaustion on page 116
the steps involved in the method of direct proof
the discussion in the solution to example 3.1.5 on pages 117 and 118
the directions for writing proofs of universal statements on pages 119 and 120
the common mistakes identified on pages 120 and 121
When you gain experience in writing a fairly long computer program
from specifications, given to you or determined by you, then you will
be able to relate the programming process you go through to that of
writing a proof given a set of conditions or axioms. (Note the
reference in Knuth discussing this relationship, given at the bottom
of page 118.)
***************************************************************
Proof by Mathematical Induction
Read Epp: pp. 194-201
Mathematical induction is a proof technique that can be applied to
computing, primarily in proving program segments correct and in
determining algorithm or program efficiency (for example, in
determining how many times an operation is performed in an
algorithm). Induction is essentially an inverse process of recursion,
a standard process in computer algorithms and programming. Recursion
is introduced in the next module.
Carefully work through the explanations on pages 194-197. Study the
"Principle of Mathematical Induction" given on page 196, which should
give you an intuitive idea of the process of a proof by mathematical
induction. Then study example 4.2.1 to get a better idea of what
constitutes a proof by mathematical induction. You may want to review
some algebraic rules before you begin.
Example 4.2.1 illustrates the typical amount of work involved in the
proof and typifies the two-step structure of the proof. In general,
step one, establishing the truth of the proposition for the lowest
value of an integer, does not require much work; a mere statement or
simple equation might be enough. Step two, showing that if the
proposition is true for an arbitrary integer k (at or above that lowest
value) then it is also true for k + 1, usually requires the application
of algebraic rules, such as factoring or exponentiation.
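Here is a compact sketch of the two-step pattern on a standard example (our own choice of statement, which may or may not be the one Epp proves as Theorem 4.2.2): the claim that 1 + 2 + ... + n = n(n + 1)/2 for every integer n >= 1. Step one (basis): for n = 1 the left side is 1 and the right side is 1(2)/2 = 1, so the claim holds. Step two (inductive step): assume 1 + 2 + ... + k = k(k + 1)/2 for some integer k >= 1. Adding k + 1 to both sides gives 1 + 2 + ... + k + (k + 1) = k(k + 1)/2 + (k + 1) = (k + 1)(k + 2)/2, which is exactly the claim with n replaced by k + 1. By the principle of mathematical induction, the claim holds for every integer n >= 1.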
In trying to explain the process of proof by mathematical induction,
Epp includes explanations within the proof. They are not part of the
formal proof and might lead to some confusion. They also make the
proof seem longer and more complex than it really is. Rewrite the
proof of Theorem 4.2.2 on pages 199 and 200 without the italicized
statements. Doing this might help clarify what is required in the
proof. Then try exercises 3-5, 7-9, and 12 on pages 204 and 205 to
test your ability to apply the proof process.
Only one form of proof by mathematical induction is considered in this
course. There are several other forms that apply to different types of
propositions. The remaining parts of the chapter in Epp discuss two
additional forms, give examples of each, and provide an application in
proving a program segment correct. These topics are beyond the scope
of this course, but you may want to read about them for your own
interest.
***************************************************************
Set Theory
A. Elements of a Set
Read Epp: pp. 231 and 232
It might seem strange that the key terms, set and element, are
undefined, but there really are no simpler words for defining these
terms. Be sure you understand what a set and an element of a set
are. Naming a set immediately establishes a binary condition: either
an item is an element of that set or it isn't. There is no other
possibility.
Epp uses the upper case letters R, Z, and Z+ consistently in the text
to represent the following sets:
R    the set of all real numbers
Z    the set of all integers
Z+   the set of all positive integers
Although these sets, or subsets of them, appear in various places
throughout the text, in this course we emphasize only the finite
subsets of these infinite sets, rather than the infinite sets
themselves or their infinite subsets.
B. Subsets
Read Epp: pp. 232 and 233
A is a subset of B if, and only if, every element of A is an element
of B. For example, if B = {1, 2, 3} and C = {2, 4}, then C is not a
subset of B, because 4 is an element of C but not an element of B.
A is a proper subset of B if, and only if, every element of A is an
element of B, but there is at least one element of B that is not in
A. For example, A = {1, 2} is a proper subset of B = {1, 2, 3},
because 3 is in B but not in A.
Learn the definitions of subset and proper subset, given also in the
highlighted portions of pages 232 and 233. If you are unfamiliar with
set notation, refer to the inside front cover of Epp's text.
Use Venn diagrams to depict the sets in example 5.1.3, and verify the
solutions by drawing Venn diagrams. Then try exercises 4-6 on page 242
to check your understanding of subsets, using Venn diagrams as an aid
in seeing the solutions to these problems as well.
The distinction between element of a set and subset of a set as
illustrated in example 5.1.5 is worth noting. Test your understanding
by doing exercise 2 on page 242.
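Python's set type makes the element/subset distinction concrete; the small sets below are our own illustration:

    A = {1, 2, 3}
    B = {1, 2}

    print(2 in A)            # True: 2 is an element of A
    print(B.issubset(A))     # True: every element of B is an element of A
    print(B < A)             # True: B is a proper subset of A (a subset, and B != A)
    print(A <= A)            # True: every set is a subset of itself
    print(A < A)             # False: no set is a proper subset of itself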
C. Set Equality
Read Epp: pp. 234 and 235
Whether or not sets are equal might seem obvious, but set equality can
be difficult to prove. Learn the definition and test your
understanding by doing exercises 1 and 3 on page 242.
If you have time, browse through example 5.1.6, but don't get
discouraged by it. The solution contains some formal proof methods
that you will not need to know at this time. You will become more
adept at argumentation of this kind as you proceed through the course.
D. Operations on Sets
Read Epp: pp. 236 and 237
The definitions of the operations intersection, union, difference, and
complement are given on page 236.
Learn these definitions of operations on sets. The result of each
operation is always a set. Note the role played by the universal set.
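Python's set operators line up with the definitions, so you can experiment directly; the universal set U below is our own small example, used only to form complements:

    U = set(range(1, 11))    # a small universal set {1, 2, ..., 10}
    A = {1, 2, 3, 4}
    B = {3, 4, 5, 6}

    print(A & B)             # intersection of A and B: {3, 4}
    print(A | B)             # union of A and B: {1, 2, 3, 4, 5, 6}
    print(A - B)             # difference "A minus B": {1, 2}
    print(U - A)             # complement of A relative to the universal set U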
Venn diagrams are especially helpful in visualizing results of set
operations. Test your understanding of the operations by trying the
examples and exercises 8-10 on pages 242 and 243. Exercises 12-16 are
more complex: try them after you finish the others.
E. Cartesian Products of Sets
Read Epp: pp. 237-239
The definitions and notation for ordered n-tuples and Cartesian
products were used in the Application Context discussion. Study the
definitions and examples on pages 237-239, then try exercises 17 and
18 on page 243.
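For finite sets, itertools.product enumerates a Cartesian product directly; the sets below are our own illustration:

    from itertools import product

    A = {1, 2}
    B = {"x", "y", "z"}

    A_cross_B = set(product(A, B))   # all ordered pairs (a, b) with a in A and b in B
    print(len(A_cross_B))            # 2 * 3 = 6 pairs
    print(sorted(A_cross_B))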
F. Formal Languages
Read Epp: pp. 239-242 (optional)
We will study the material on formal languages in module 5 of this
course. You do not need to study it now. (If you have algorithm
development and programming experience, you might be interested in the
discussion of testing for a subset on pages 241 and 242, but it is
optional for this course.)
G. Proving or Disproving Set Properties
Read Epp: pp. 244-256
This section on properties of sets may be difficult to read,
especially if this is your first encounter with proofs in
mathematics. Work on one part of a page or, at most, a few pages at a
time. Draw Venn diagrams as visual aids whenever you can. (Epp has
highlighted set properties, theorems, and proof methodology.) The use
of a counterexample to establish that an alleged property is false is
an important proof methodology.
A proof is a precise sequence of true statements that flow logically
from one or more assumptions to a conclusion. The proof itself
requires no explanatory remarks, but it is good practice to include
with each statement a reference to a definition, property, or theorem
so that the reader can follow the argument more readily. Epp explains
the process to follow in great detail, then shows the actual
proof. Note the difference between the explanation and the proof by
studying example 5.2.2. The proof is considerably shorter than the
explanation.
When confronted with an alleged theorem, you have two choices: prove
it or disprove it. Disproving an alleged theorem (i.e., proving that a
property is false) requires finding one counterexample (a case for
which the property is not true). For a theorem or property to be true,
it must apply to all cases, not just some. Thus, if you can find just
one instance for which the alleged theorem or property is not true (a
Venn diagram might help in this regard), then you have completed the
task. If you cannot find a counterexample, then try to construct a
proof.
Before proving a statement, it is necessary to understand what the
statement says. Separate the assumptions from what is to be
proved. After doing this, proceed to the proof itself. A proof of a
set property or theorem usually consists of the following steps:
1. Write down what is to be proved.
2. Write definitions and/or already proven theorems that apply to the situation.
3. Assume x is an element of the set on the left side of a relation.
4. Use properties and/or theorems to deduce that x is an element of the set on the right side.
5. When appropriate, repeat steps 1-4 by reversing the left and right sides.
The box outlined in blue on page 249 illustrates this process. Note
that step 5 is appropriate only when the theorem to be proved involves
set equality.
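As a tiny illustration of steps 3 and 4 (our own example, not one from the text): to show that A ∩ B ⊆ A, suppose x is an element of A ∩ B. By the definition of intersection, x is an element of A and x is an element of B; in particular, x is an element of A. Since x was an arbitrary element of A ∩ B, every element of A ∩ B is an element of A, and therefore A ∩ B ⊆ A. Because this is a subset statement rather than a set equality, step 5 is not needed.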
H. The Empty Set, Partitions of Sets, and the Power Set of a Set
Read Epp: pp. 258-265
Study the definitions, properties, and theorems pertaining to the
empty set. The empty set plays an important role in set theory. For
example, it provides a handy way to define two or more sets as
disjoint, i.e., not having any elements in common. (See page 262.)
Also, remember that the empty set is a subset of every set.
The definition of disjoint sets leads to the definition of partitions
of sets. A given set can be partitioned in a number of ways. For
example, the set of all students in this course can be partitioned
into the two subsets, all males and all females. Another partition is
all students younger than 25, all students aged 25 through 40, and all
students over 40.
Partitioning a Set Two Ways
PARTITION 1
Assume:
D is the set of all students in this section of CMIS 160
M is the set of all male students in D
F is the set of all female students in D
Then {M,F} is a partition of D
PARTITION 2
Assume:
D is the set of all students in this section of CMIS 160
A is the set of all students in D whose age is less than 25
B is the set of all students in D whose age is greater than or equal to 25 and less than or equal to 40
C is the set of all students in D whose age is greater than 40
Then {A,B,C} is a partition of D
How can you apply the idea of set partition to the process of putting
together a jigsaw puzzle?
Note that the elements of a power set are all sets, including the
empty set and the set itself. You should be able to list all elements
in the power set of a given set and know how many elements there are,
but you will not be expected to prove statements about power sets.
Formal Languages
Read Epp: pp. 239-242 and 588-589
The material on these pages uses the basic notions of sets, so you may
need to review the definitions of set theory in module 1 and on pages
231-236 of Epp. Using the English language as an analogy might help
you to grasp the concepts and techniques related to formal
languages. The English language consists of 26 characters (letters)
and words made up of the characters. The rules for forming words are
complex and far too numerous to list here. The 26 characters form the
alphabet of the language and the words are strings of characters in
the alphabet. More formally, let the alphabet be
Σ = {a, b, c, d, ..., y, z},
where Σ is the Greek letter sigma. Then strings are ordered n-tuples
of characters of Σ (where n is a finite, positive integer) that follow
the rules for constructing English words. The English language over Σ
is the set of strings over Σ. Note that the language could be infinite
even though the alphabet is finite and each string is of finite
length.
In formal languages, there is no guarantee that the strings over an
alphabet make any sense. You can specify an alphabet any way you want,
as long as the number of distinct characters is finite. You can impose
any rules you want to limit the strings that are in the language over
the alphabet. The set of strings that results from the alphabet and
rules specifications becomes the formal language over the alphabet.
Study the summary of definitions and the notation in the shaded boxes
on page 240. Work through the three examples on pages 240 and 241, but
omit Algorithm 5.1.1 and example 5.1.14. Try exercises 19 and 20 on
page 243 to assess your ability to apply the definitions.
In module 3, we indicated that the material on lexicographic order
would be revisited in module 5. The discussion of formal languages we
have just completed, combined with your understanding of partial order
relations from module 3, should prepare you for the theorem,
definition, and example of lexicographic order on pages 588 and 589.
If the notation in the first two parts of the theorem seems confusing,
consider a different pair of words for each part. For example, for
part 1, consider the two words compute and computer. Let the a's in
the theorem refer to the characters in compute, and the b's refer to
the characters in computer. Then m = 7 (the number of characters in
compute) and n = 8 (the number of characters in computer). Note that
the first m characters are the same in both words.
Applying the theorem gives the result that compute precedes computer
in lexicographic order. For part 2 of the theorem, try the two words
computer and computing. After working through example 10.5.6, try
exercise 11 on page 600 to assess your understanding.
***************************************************************
Functions
A. Basic Definitions
Read Epp: pp. 344 and 345
The definition in the shaded box on page 345 is critical to an
understanding of functions. Keep it handy as you work through this
chapter. Although the definition includes all functions, in discrete
mathematics it is applied to functions on sets--more specifically,
sets with a finite or countable number of elements. The examples and
exercises that pertain to sets with an infinite, noncountable number
of elements will not be included in this course.
B. Function Representations
Read Epp: pp. 345-354
Several convenient ways to visualize a function include an arrow
diagram, function machine, and table. The arrow diagram works well for
sets with very few elements. As examples 7.1.1 and 7.1.2 show, you can
readily identify whether a function or nonfunction is represented.
The function machine might appeal to your interest in computing: you
supply some input, the machine performs the operations specified, and
an output is produced. This input/output view suggests that a computer
program can be considered a
function. Can all programs be considered functions? If your answer is
no, then you should be able to describe conditions for which a given
program is not a function. Of course, you could try to find a
counterexample as you did in the section on sets.
A table is another way to specify a function. A table was described in
the Application Context for this module, and an example of a function
was included.
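A table of input-output pairs is exactly how a finite function is usually stored in a program, namely as a dictionary. A small sketch with made-up values:

    # A function f from A = {1, 2, 3, 4} to B = {"a", "b", "c"}, stored as a table of pairs.
    f = {1: "a", 2: "c", 3: "a", 4: "b"}

    print(f[3])                # applying the function: f(3) = "a"
    print(set(f.keys()))       # the domain A
    print(set(f.values()))     # the range of f, a subset of the co-domain B

    # f is a function because each element of A appears exactly once as a key.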
Examples 7.1.10 and 7.1.11 give insights into some interesting
applications in computing. Note that Figure 7.1.6 on page 352 shows a
table and an arrow diagram for the same function. These examples, and
example 7.1.13, will not be pursued further in this course.
Try exercises 1-5, 12, 20, 21, 25, and 26 on pages 354-356 to assess your understanding of the topics in Section 7.1.
C. Function Properties
Read Epp: pp. 369-384
Read each definition a number of times, and try to provide your own
examples with the help of an arrow diagram. Work through Epp's
examples for one-to-one, onto, and inverse functions, as well as
one-to-one correspondence (bijection). (Omit the hash function
application on pages 373 and 374 for now. Hashing will be referred to
in part III, section E of the Commentary.)
Take your time with this material; it can be confusing. For your own
interest, you may want to look briefly at the examples involving
infinite sets, but concentrate on the finite set examples. Then try
exercises 1b, 1c, 2-6, 31, and 32 on pages 384-387.
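For functions between small finite sets, the definitions of one-to-one and onto can be tested by brute force; the function below is our own example, not one of Epp's:

    A = {1, 2, 3}
    B = {"x", "y", "z"}
    f = {1: "x", 2: "y", 3: "z"}    # a function from A to B, given as a table

    images = [f[a] for a in A]

    one_to_one = len(set(images)) == len(images)    # no two elements of A share an image
    onto       = set(images) == B                   # every element of B is an image
    bijection  = one_to_one and onto                # a one-to-one correspondence
    print(one_to_one, onto, bijection)              # True True True

    # When f is a bijection, reversing the pairs gives the inverse function.
    f_inverse = {b: a for a, b in f.items()}
    print(f_inverse["y"])                           # 2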
Section 7.4 on the pigeonhole principle will not be covered in this
course. Essentially, the principle states that if you try to put
objects into containers and you have more objects than containers,
then at least one container will get more than one object. Yes, it is
as simple as it sounds, but the principle has a number of fascinating
and unexpected applications. You can get some idea of this by browsing
through the examples and exercises, if you wish.
D. Finite-State Automata
Read Epp: pp. 357-369
Review the section on combinational circuits, and then note the
difference between combinational and sequential circuits indicated on
page 357. The work on finite-state automata you are about to undertake
has direct application to these types of circuits, hence to computers
themselves.
On page 358, Epp provides some historical background appropriate to
the topic of finite-state automata. Although mathematical in origin
and intent, the topic has had a great impact in computing. For a more
detailed discussion of the work of Alan Turing and his Turing machine,
see Gersting's book, pages 597-609. Additional material on pages
610-617 takes you into the realm of theoretical computer science, but
it gives you more background on why we study formal languages and
finite-state automata in computing.
The material in Gersting also includes a very interesting result:
there are unsolvable problems in the sense that an algorithm cannot be
specified to solve the problem (or halt) after a finite number of
steps. All of this material in Gersting is for your own information
and will not be tested in this course. (Information about Gersting's
book is given in the syllabus under "Course Materials.")
Work through example 7.2.1, then try exercise 1 on page 367 to test
your understanding. Study the definition of a finite-state automaton
on page 360 and the concepts of a (state-) transition diagram and
next-state table. Examples 7.2.2 and 7.2.3 illustrate the ideas and
definitions. Exercises 2-13 on pages 367-369 pertain to these
topics. Work enough of them to convince yourself that you understand
the processes before approaching the next topics.
Pages 362-367 introduce four issues. We will study two of them:
1. Given a finite-state automaton, what language does it accept?
2. Given a language, what finite-state automaton accepts it?
The other two issues are concerned with (1) finding the eventual state
of an automaton that is already in a state when an input string is
given and (2) writing a program to simulate a specific finite-state
automaton. They will not be included in this course.
Study the definitions in the box on page 362 and work through example
7.2.4. The definitions specify what it takes for a string and a
language to be accepted by a finite-state automaton. The example
includes only one accepting state shown by the double circle. In
general, a finite-state automaton can have more than one accepting
state. Exercises 14-19 on page 369 will test your understanding of the
process of finding a language accepted by a finite-state automaton.
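The definition of acceptance translates directly into a short program: start in the initial state, follow the next-state table one input symbol at a time, and accept if the final state is an accepting state. The automaton below, which accepts strings over {0, 1} that end in 1, is our own example, not one from the text:

    # A finite-state automaton over the alphabet {"0", "1"} that accepts strings ending in "1".
    start_state      = "s0"
    accepting_states = {"s1"}
    next_state = {                         # the next-state table
        ("s0", "0"): "s0", ("s0", "1"): "s1",
        ("s1", "0"): "s0", ("s1", "1"): "s1",
    }

    def accepts(string):
        state = start_state
        for symbol in string:
            state = next_state[(state, symbol)]
        return state in accepting_states   # accepted if the run ends in an accepting state

    print(accepts("0101"))   # True
    print(accepts("100"))    # False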
Examples 7.2.6 and 7.2.7 show how to design a finite-state automaton
given a specific language. Carefully work through these examples. Then
try exercises 20 and 22-29 on page 369 to assess your ability to
design a finite-state automaton for a specified language. Note that
exercise 30 suggests a practical application of the theory.
E. Function Composition
Read Epp: pp. 401-410
Composition of functions extends the idea of functions. The notation
might be difficult, but the basic concept is relatively simple: my
sending a message to person A, who adds a note and forwards the
composite message to person B, produces the same result (from person
B's point of view) as if I had sent the composite message to person B
directly.
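In code, composing two functions just means feeding the output of one into the other, in the stated order. A minimal sketch (the two functions are stand-ins for the message-passing analogy above):

    def compose(g, f):
        """Return the composite function "g of f", the function sending x to g(f(x))."""
        return lambda x: g(f(x))

    f = lambda x: x + 3      # a stand-in for "person A adds a note"
    g = lambda x: x * 2      # a stand-in for what person B does with what arrives

    g_of_f = compose(g, f)
    print(g_of_f(5))         # g(f(5)) = (5 + 3) * 2 = 16; the order of f and g matters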
Work through the definitions and theorems, but not the proofs. Then
try exercises 1, 2, and 20 on pages 410 and 411. The application of
composite functions to tables in a relational database is
straightforward.
***************************************************************
Recursion
Read Epp: pp. 424-435
A. Recursively Defined Sequences
Before starting this section, read the material on sequences on pages
180-182, including examples 4.1.1 and 4.1.2. This material provides
some basic information about sequences and gives examples of two of
the three ways to define a sequence, as indicated on pages 424 and
425. The third way to define a sequence, using recursion, is the topic
of this section. Note the paragraph on page 425 that compares proving
theorems by mathematical induction to defining sequences by recursion.
Study the definitions on page 425. Initial conditions play an
essential role in recursion. They provide the starting point for
determining the rest of the values of a sequence. Work through
examples 8.1.1-8.1.3, then try exercises 1-8 on page 438 to test your
ability to find terms of a recursively defined sequence.
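Computing the first few terms of a recursively defined sequence is a natural loop: the initial condition seeds the computation and the recurrence produces each later term. The recurrence below is our own small example, not one of Epp's:

    # A sequence defined by the initial condition a_1 = 1 and the recurrence a_k = 2*a_(k-1) + 1.
    terms = [1]                            # the initial condition seeds the computation
    for k in range(2, 9):
        terms.append(2 * terms[-1] + 1)    # each new term is computed from the previous one
    print(terms)                           # [1, 3, 7, 15, 31, 63, 127, 255]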
Example 8.1.4 involves a proof process for a recurrence relation. The object is to show
S_k = -k · S_(k-1).
As you work through the proof, note that the process hinges on
multiplying both sides of the line numbered 8.1.2 by the quantity
-k. With this multiplication and some algebraic manipulation, we can
show S_k = -k · S_(k-1). Try exercises 9-11 on page 438 to see if you can
follow the process illustrated in the example.
follow the process illustrated in the example.
Three common examples of the use of recursively defined sequences are
the Towers of Hanoi, the Fibonacci numbers, and compound interest. Read
pages 427-433, working through examples 8.1.5-8.1.7. Try exercises 17,
18, 22, 23, 27, and 29 on pages 439 and 440 to practice applying the
concepts and techniques related to recursively defined sequences. The
additional examples on pages 433-438 will not be included in this
course, nor will the extensions of the topic on pages 441-471. Our
only purpose is to introduce you to the methodology for handling
recursively defined sequences.
B. Recursive Functions
Read Epp: pp. 471-473
These pages introduce another aspect of recursion, the notion of
functions that are defined recursively. Work through the three
examples that are given. Examples 8.4.7 and 8.4.8 are well known in
the area of theory of computing. Example 8.4.8 deals with one of the
difficulties in the area of recursive functions, determining whether
they are well defined. Try exercises 19, 21, and 23 on page 475 to get
an idea of how values of recursive functions are computed. Note how
quickly the complexity of the calculations increases when higher
values of the functions are sought.
***************************************************************
Relations
A. Basic Definitions
Read Epp: pp. 533-536
In chapter 7, you were introduced to a special kind of relation, a
function. There, each element a of a set A was related to one and only
one element b of a set B. In other notation, the pair (a,b) is an
element of the set A x B. A relation can be more general, however: it
allows an element of A to be related to more than one element of
B. That is, the pairs (a,b), (a,c), and (a,d) could all belong to the
same relation (though that relation would not be a function), provided,
of course, that b, c, and d were all elements of B.
The elements depicting a relation are ordered; the first element is in
A and the second element is in B. If the pairs were written (b,a),
(c,a), and (d,a), then they would be elements of the set BxA, where
the elements of B are related to elements of A. Pairs signify a binary
relation, which is the type of relation that appears in this chapter.
Some examples of relations were identified in the database application
in this module. As another example, consider the set B of book names
and the set I of ISBNs. The binary relation of book names to ISBNs is
a function because each book name has a unique ISBN. The same is true
of the relation of book names to publisher names. However, if the
binary relation of publisher names to book names were a function, then
the publishers would soon be out of business. Why?
Study section 10.1 in the same way you studied Section 7.1, again
concentrating on finite sets. Try Exercises 1, 2, 9, 12, 13, 17, and
18 on pages 544 and 545.
B. Relation Representations
Read Epp: pp. 537-542
Binary relations are commonly represented by arrow diagrams and
tables. If A = {0, 2, 3, 5} and B = {1, 4, 7}, then the set of ordered
pairs R = {(0,1), (0,7), (2,4), (3,7), (5,1)} identifies the same
relation as the table

    A:  0  0  2  3  5
    B:  1  7  4  7  1

or as an arrow diagram from A to B. What is the inverse of this
relation? The inverse is formed by reversing each ordered pair, giving
the relation {(1,0), (7,0), (4,2), (7,3), (1,5)} from B to A.
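Because the relation is finite, it can be written down and manipulated directly; the few lines below use the same R as above:

    A = {0, 2, 3, 5}
    B = {1, 4, 7}
    R = {(0, 1), (0, 7), (2, 4), (3, 7), (5, 1)}     # the relation from A to B given above

    R_inverse = {(b, a) for (a, b) in R}             # reverse each pair to form the inverse
    print(R_inverse)                                 # {(1, 0), (7, 0), (4, 2), (7, 3), (1, 5)}

    # R is not a function from A to B: the element 0 is related to two elements of B (1 and 7).
    is_function = all(sum(1 for (a, b) in R if a == x) == 1 for x in A)
    print(is_function)                               # False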
The definition on page 543 extends the notion of a binary relation to
an n-ary relation. Example 10.1.11 illustrates the concept.
C. Relations as Directed Graphs
Read Epp: pp. 542-544
When a relation is defined from a set to itself, the corresponding
arrow diagram is also called a directed graph. Suppose we change the
previous example to put all elements of both sets into just one set,
A, but keep R the same. Then A = {0, 1, 2, 3, 4, 5, 7} and the arrow
diagram is now a directed graph.
A directed graph can have arrows from an element to itself (if, for
example, (3,3) were in the relation) and from an element to another
element then back to the first element (if, for example, both (2,3)
and (3,2) were in the relation).
Directed graphs are helpful representations in understanding the
properties defined in the next section, and they will be discussed in
more detail in module 2.
D. Relation Properties
Read Epp: pp. 546-553
Three important properties of relations--reflexive, symmetric, and
transitive--are introduced in section 10.2. As an example, consider
the set of people in a movie theater and the relation defined as "in
the same row as." This relation is reflexive because each person is
in the same row as him- or herself. The relation is symmetric
because, for each person in the same row as a second person, that
second person is in the same row as the first person. Finally, the
relation is transitive because for each person in the same row as a
second person who is also in the same row as a third person, the first
person is in the same row as the third person.
Showing that a relation is reflexive, symmetric, and transitive is
necessary to establish an equivalence relation, the topic of the
next section. Working through the examples and exercises 1-8, 15, 16,
18, and 37 on pages 554 and 555 is essential to your grasping the
three properties of relations.
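For a relation on a small finite set, each of the three properties can be checked straight from its definition. The relation below ("has the same parity as" on {1, 2, 3, 4}) is our own example:

    from itertools import product

    A = {1, 2, 3, 4}
    R = {(a, b) for a, b in product(A, repeat=2) if a % 2 == b % 2}   # "has the same parity as"

    reflexive  = all((a, a) in R for a in A)
    symmetric  = all((b, a) in R for (a, b) in R)
    transitive = all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)

    print(reflexive, symmetric, transitive)   # True True True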
As before, work only with finite sets--with one exception, the
relation of congruence modulo n on pages 552 and 553. Before studying
example 10.2.5, you might need to refer back to examples 10.1.1 and
10.1.2 on page 535. The concept of congruence modulo n is particularly
useful in computing. Computer programs that incorporate methods such
as hashing and encryption use congruence modulo p, where p is a prime
number.
E. Partitions, Equivalence Classes, and Equivalence Relations
Read Epp: pp. 555-570
In this section, you may omit the proofs. The terms partition,
equivalence classes, and equivalence relation are
interrelated. Working exercises 1-5, 10, 11, 14, 16, 18, and 23 on
pages 570 and 571 will give you a grasp of the three concepts.
An equivalence relation on a set partitions the set into equivalence
classes. Understanding this statement gives you an immediate grasp of
the computing method known as hashing, an alternative to sorting when
what matters is finding items quickly rather than keeping them in a
particular order.
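The connection to hashing can be seen in a few lines: grouping items by the value of x mod n partitions them into equivalence classes, the "buckets" of a hash table. A sketch with n = 3:

    items = range(10)
    n = 3

    # Congruence modulo 3 is an equivalence relation; its classes partition the items.
    classes = {r: [x for x in items if x % n == r] for r in range(n)}
    print(classes)   # {0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]}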
Hash functions are initially discussed on pages 373 and 374 as a
computer application for the topics in Epp's chapter 5. If you are
familiar with the notion of hash functions through other computing
experience then you should read this material. Otherwise, it might not
mean anything to you at this time. When you do encounter hash
functions in a different computer course, you will understand them
better because of your study of functions. Hashing involves
partitioning a set of items into equivalence classes.
If you have not encountered the term (or the idea of) hash functions,
and you are interested in finding out more about them, look in a
programming text that covers the second semester of a two-semester
course sequence.
Partial Order Relations
A. Basic Definitions
Read Epp: pp. 585-588
Compare the definition of antisymmetric to that of symmetric presented
in the previous module (Epp, page 547). Despite the similar names,
antisymmetric is not the opposite of symmetric, nor is it symmetry plus
an extra condition; it is a separate, more restrictive requirement that
a and b be the same element whenever a is related to b and b is related
to a.
Arrow diagrams clearly distinguish the two concepts. Any relation that
has in its arrow-diagram representation an arrow from a to b and an
arrow from b to a cannot be antisymmetric. Example 10.5.1 illustrates
this statement, and exercises 1, 2, and 4 on pages 599 and 600 will
test your understanding.
Note that in example 10.5.2 actual numbers (elements of the set) are
used to provide a counterexample. (Remember that one counterexample is
all that is needed to disprove a statement.) Using specific elements
in a set is a good technique; a concrete example makes it easier to
come up with a general solution.
Also compare the definitions of partial order relation and equivalence
relation (page 559). Use actual sets and numbers when working through
examples 10.5.3-10.5.5 to get some idea of what to expect in a partial
order relation. Then try exercise 6 on page 600. Also, go back to
exercises 1, 2, and 4 to determine whether each relation is a partial
order relation.
For now, omit the section on lexicographic order (pages 588 and 589)
and the corresponding exercises on pages 599 and 600. We will return
to this topic in module 5.
B. Hasse Diagrams
Read Epp: pp. 590-592
A Hasse diagram is a means of graphically representing a partial order
relation. It is (1) a graph and (2) the simplest such representation,
in the sense that every pair in the partial order relation can be
recovered from it.
To construct a Hasse diagram, given a partial order relation, first
construct its arrow diagram with all arrows pointing upward. Then,
systematically remove edges that are unnecessary because of the
reflexive and transitive properties of the relation.
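The pruning step can be automated for a small case. The sketch below computes the edges that survive in the Hasse diagram of the "divides" relation on the set {1, 2, 3, 4, 6, 12}, which is our own example set:

    A = [1, 2, 3, 4, 6, 12]
    divides = lambda a, b: b % a == 0

    # Keep an edge from a up to b only if a properly divides b and no third element sits
    # strictly between them; loops (reflexivity) and edges implied by transitivity drop out.
    hasse_edges = [(a, b) for a in A for b in A
                   if a != b and divides(a, b)
                   and not any(c not in (a, b) and divides(a, c) and divides(c, b) for c in A)]
    print(hasse_edges)   # [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]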
If you add times for completing jobs to the edges, the Hasse diagram
becomes a PERT chart controlling job scheduling (recall the
Application Context).
Try to construct the diagram without first doing the digraph. Also,
reverse the steps to get the digraph from the Hasse diagram. Examples
10.5.7 and 10.5.8 illustrate this process. Try exercises 16, 17, 19,
and 21b on page 600.
C. Sets that are Partially or Totally Ordered
Read Epp: pp. 592-594
In the previous example, elements such as 3 and 12 are called
comparable and elements such as 2 and 3 are noncomparable, according
to the definitions on page 592. The set A in the previous example is a
partially ordered set (poset) with respect to the relation "divides,"
because "divides" is a partial order relation on A. However, A is
not a totally ordered set, because it contains noncomparable
elements. A set such as {1, 2, 4, 8, 16} is a totally ordered
set. Note that its Hasse diagram is the single chain
1 ------ 2 ------ 4 ------ 8 ------ 16
All totally ordered sets have this kind of Hasse diagram.
Study the definitions on page 594 and work through example
10.5.10. Note that a partially ordered set can have more than one
maximal or minimal element, but at most one greatest or least
element. The existence of a minimal element in a finite, nonempty,
partially ordered set is the basis for the next topic, topological
sorting. Try exercises 22-29, 34, 39, and 40 on pages 600 and 601.
D. Topological Sorting and Applications
Read Epp: pp. 596-599
Note that the discussion of topological sorting is restricted to
finite, nonempty, partially ordered sets. A topological sorting on
such a set produces a listing of the set's elements in the form of a
chain; that is, it creates a total order on the set. There can be more
than one topological sorting of a given set. Work through example
10.5.11 and the first application on pages 595-597, then try exercises
42, 43, and 46a on page 601.
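A topological sorting can be produced by repeatedly removing a minimal element, which is exactly why the existence of minimal elements matters. A small sketch on a made-up set of job prerequisites:

    # Prerequisite pairs (a, b) meaning "job a must come before job b" (made-up jobs).
    pairs = {("wiring", "drywall"), ("plumbing", "drywall"),
             ("drywall", "paint"), ("paint", "carpet")}
    elements = {x for pair in pairs for x in pair}

    order = []
    remaining = set(elements)
    while remaining:
        # A minimal element is one with no predecessor still remaining.
        minimal = next(x for x in sorted(remaining)
                       if not any((a, x) in pairs for a in remaining))
        order.append(minimal)
        remaining.remove(minimal)

    print(order)   # one valid total order: plumbing, wiring, drywall, paint, carpet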
Reread the Application Context. The scenario is a job-scheduling
problem similar to the application in example 10.5.12. The PERT chart
in the example is a Hasse diagram. The chart portrays the relations
among components. When weights (time, in the example) are added to the
edges, then a critical path can be determined. This path indicates the
minimum amount of time required to complete the entire job. Try
exercises 46b and 47 on page 601 to test your understanding of the
application in example 10.5.12.
***************************************************************
Graphs
A. Basic Definitions and Terminology
Read Epp: pp. 602-607
The blue-shaded box on page 603 contains a number of definitions and
terms that are essential to the study of graphs. Refer to the box
frequently until you are clear about the meaning of each term. The
notion of an empty graph might seem strange, but it is a useful one
when studying algorithms on graphs.
A standard way to illustrate the definition of a graph is to use a
table of edges and endpoints, as in example 11.1.1. A table can show
that two different drawings of a graph either represent different
graphs or the same graph (as illustrated in examples 11.1.2 and
11.1.3). Do exercises 1-7 on pages 616 and 617 to test your
understanding of some of the basic graph definitions.
Note the difference between a directed graph (digraph) as defined on
page 607, and a graph. In a digraph, ordered pairs of vertices
indicate both the edges and the directions of edges. In a graph,
there are no directions from one vertex to another, so a pair of
vertices (irrespective of order) represents an edge, but in a digraph,
order is important. For example, if a, b, c, d represent vertices,
then (a,b) and (b,a) both represent the same edge in a graph, but
represent different edges in a digraph.
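The difference shows up naturally in how edges are stored in a program: unordered pairs for a graph, ordered pairs for a digraph. A tiny sketch of our own:

    # In a graph an edge is an unordered pair of vertices; in a digraph it is an ordered pair.
    graph_edges   = {frozenset(("a", "b")), frozenset(("b", "c"))}
    digraph_edges = {("a", "b"), ("b", "a"), ("b", "c")}

    print(frozenset(("b", "a")) in graph_edges)   # True: (a,b) and (b,a) are the same edge
    print(("c", "b") in digraph_edges)            # False: direction matters in a digraph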
B. Applications
Read Epp: pp. 607-609
Read examples 11.1.4-11.1.6 carefully. They illustrate applications of
graphs. To test your understanding of example 11.1.5, answer the
following questions: Is Poetry Magazine a suburban weekly? Does it
contain printed writing? What kind of paper finish does it have? Also
try exercises 10 and 11 on page 617.
C. Other Types of Graphs
Read Epp: pp. 609-612
The definitions and examples on pages 609-612 identify additional
types of graphs. Note the similarity between the concepts of subgraphs
and subsets. Try exercises 24 and 36-39 on pages 617-619 to test your
understanding of the definitions in this section. Note that a few of
the exercises also contain definitions.
Paths and Circuits
A. Definitions
Read Epp: pp. 602-604, 612, 619-625
Review the information on pages 602-604, and be sure you understand
the definition of degree, given on page 612.
The Bridges of Konigsberg problem in Section 11.2 exemplifies a useful
aspect of mathematics: that of changing the representation of a
problem either to apply known results about the new representation or
to understand the problem better. The solution to the Konigsberg
problem should become clear after you study the material in this
section.
A number of new terms are defined: walk, closed walk, path, simple
path, circuit, simple circuit, connected vertices and graphs, and
connected component of a graph. Try to get an intuitive idea about
each term by thinking about it without reading the definition.
For example, consider the possible differences between taking a
meandering walk and following a path. Then think of the differences
between ending up where you started or at another destination. Work
through the examples given in this section of Epp and try exercises 1,
2, 4, and 5 on page 636 to get a more concrete understanding of each
term's meaning. The table on page 622 may help you remember
characteristics and differences.
Item a. in Lemma 11.2.1 on page 625 presents an important
characteristic of trees. After you do the examples of connected graphs
and components of graphs, try exercises 6-8 on pages 636 and 637. The
concept of a bridge, as defined in exercise 6, can be applied to
networking.
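If you would like to experiment with connected components on small
graphs of your own, the following Python sketch (ours, not Epp's)
finds the components of a graph given as an adjacency-list dictionary
by breadth-first search; the sample graph is made up for illustration.

    from collections import deque

    def connected_components(adj):
        """Return the connected components of an undirected graph.
        adj maps each vertex to the set of its neighbours."""
        seen, components = set(), []
        for start in adj:
            if start in seen:
                continue
            # Breadth-first search from an unvisited vertex gathers one component.
            queue, component = deque([start]), set()
            seen.add(start)
            while queue:
                v = queue.popleft()
                component.add(v)
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
            components.append(component)
        return components

    # A made-up graph with two components: {A, B, C} and {D, E}.
    adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}, "D": {"E"}, "E": {"D"}}
    print(connected_components(adj))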
By the time you finish all of the exercises and examples, you should
realize that the definitions of the terms in this section are little
more than formalized expressions of intuitive notions about the
terms. In other words, you should have little or no difficulty in
explaining to someone else what each term means, even though you might
not state the exact definition. If you cannot do this, then review the
definitions and rework the exercises and examples, or try similar
exercises in a different discrete mathematics text.
B. Euler and Hamiltonian Circuits
Read Epp: pp. 625-635
The topics in this part of section 11.2 are a little more complicated
than the preceding topics. Study the definitions of Euler and
Hamiltonian circuits. Note their differences: an Euler circuit can
visit the same vertex more than once and must include all edges,
whereas a Hamiltonian circuit must include every vertex exactly once
and might not include all edges.
You may omit the proofs, but take a common sense approach to
understanding the theorems and the corollary in this section of
Epp. For example, consider Theorem 11.2.2 on page 626. For an Euler
circuit to exist, each vertex must have the characteristic that any
time you arrive at the vertex along an edge, you must also depart the
vertex by a different edge (otherwise you would traverse the same edge
more than once). This means that each vertex has degree 2 if it is
visited only once in the circuit, degree 4 if visited twice, and so
on. In short, each vertex must have even degree.
Apply the same type of reasoning to the contrapositive version of
Theorem 11.2.2 on page 627. If any vertex of a graph has odd degree,
then either you arrive at that vertex along an edge and do not depart,
or you depart, but do not arrive. In either case, you cannot complete
a circuit by traversing all edges, hence there can be no Euler circuit
in the graph.
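The even-degree condition is easy to check by machine. The Python
sketch below (our illustration) counts the degree of each vertex from
an edge list and reports whether every degree is even, which is the
necessary condition of Theorem 11.2.2; it does not check that the
graph is connected, which an Euler circuit also requires.

    from collections import Counter

    def all_degrees_even(edges):
        """edges is a list of unordered pairs (v, w) of an undirected graph.
        Returns True if every vertex in the edge list has even degree,
        a necessary condition for an Euler circuit."""
        degree = Counter()
        for v, w in edges:
            degree[v] += 1
            degree[w] += 1
        return all(d % 2 == 0 for d in degree.values())

    print(all_degrees_even([("A", "B"), ("B", "C"), ("C", "A")]))   # True  (a triangle)
    print(all_degrees_even([("A", "B"), ("B", "C")]))               # False (a path)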
After working through the examples on pages 627-632, try exercises
12-22 on pages 637 and 638 to test your understanding of Euler paths
and circuits. Work through example 11.2.8, then try exercises 23-31 on
pages 638 and 639 to assess your understanding of Hamiltonian
circuits. Finally, do example 11.2.9, then test your ability to solve
a traveling salesman problem by completing exercise 36 on page 639.
D. Matrix Representations
Read Epp: pp. 640-656
Any experience you might have had with two-dimensional arrays, or
matrices, in a programming language should help you with the material
on pages 640-656. In particular, the representation and storage of
matrices, and operations on them, play an important role in working
with graphs. This is another instance of the utility of knowing about
one set of concepts--in this case, aspects of matrices--as an aid in
understanding other concepts, namely properties of graphs.
As indicated on pages 644-646, the adjacency matrix of a directed or
undirected graph tells quite a bit about the graph, especially if the
adjacency matrix is symmetric. By examining the result of multiplying
an adjacency matrix by itself, you can determine the number of paths
of length 2 between any pair of vertices in the graph. For example, if
a graph represents an airline routing among numerous cities, then the
graph's adjacency matrix and powers of that matrix would enable you to
determine the number of stops required to get from any one city to any
other.
Adjacency Matrix Example
Let A, B, C, D be four cities served by an airline, as shown in this
digraph, and let M be its adjacency matrix:
In the matrix, vertices of the digraph are shown as row and column
headers. The matrix shows that there is exactly one nonstop flight
from A to B (a 1 in row A, column B), from D to C (a 1 in row D,
column C), and so on. Confirm this information by looking at the
digraph.
Let's look at M², where M² = M · M. (You may need to review how to
multiply M by M to get M².) M² gives information about paths of
length 2, which require exactly one stopover as you proceed from
origin to destination.
The entries in M² provide information such as the following:
There is one path from A to B that requires exactly one stopover (A to C to B).
(The 1 in row A, column B indicates one path of length 2 from A to B.)
There is one path from A to D that requires exactly one stopover.
There are no paths from A to C requiring exactly one stopover.
There are two paths from D to B that require exactly one stopover.
To continue the example, consider M³ = M · M². M³ gives information
about paths of length 3, which require exactly two stopovers as you
proceed from origin to destination. The matrix M³ shows:
There is one path of length 3 from A back to A, from A to C, and from A to D.
There are 2 paths of length 3 from B back to B, and so on.
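If you want to experiment with this idea, the Python sketch below
builds the adjacency matrix of a small made-up digraph on the same
four cities (not necessarily the routing in the figure above) and
multiplies it by itself; entry (i, j) of M² then counts the routes of
length 2 from city i to city j, and entry (i, j) of M³ counts the
routes of length 3.

    # Hypothetical digraph on the cities A, B, C, D (made up for illustration).
    # M[i][j] = number of nonstop flights (directed edges) from city i to city j.
    cities = ["A", "B", "C", "D"]
    M = [
        [0, 1, 1, 0],   # from A: nonstop to B and to C
        [0, 0, 0, 1],   # from B: nonstop to D
        [0, 1, 0, 1],   # from C: nonstop to B and to D
        [1, 0, 1, 0],   # from D: nonstop to A and to C
    ]

    def mat_mult(X, Y):
        """Multiply two square matrices of the same size."""
        n = len(X)
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    M2 = mat_mult(M, M)     # entry (i, j): routes of length 2 (one stopover)
    M3 = mat_mult(M, M2)    # entry (i, j): routes of length 3 (two stopovers)

    i, j = cities.index("A"), cities.index("B")
    print(M2[i][j])   # 1 -- in this made-up digraph, A -> C -> B is the only
                      # one-stopover route from A to B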
Try exercises 2-7, 19, and 22 on pages 654 and 655 to test your
understanding of the definitions in this section. If you have little
or no experience with matrix operations, then try exercises 1, 8-10,
and 12.
II. Trees
A. Basic Definitions
Read Epp: pp. 664 and 665
A tree is a graph that satisfies a number of restrictions, as given in the
definition on page 664. The definition includes the word circuit, a
term that will be discussed in detail in module 3. You can find its
definition on page 622.
For now, you need to know that a circuit (in a graph sense, not in an
electronic sense) is a sequence of vertices and edges that begins and
ends at the same vertex without repeating an edge. For example, in the
graph below, the sequence of edges from B to C to E to D to B is a
circuit
but the sequence (below) from A to C to E to D is not a circuit.
The sequence of edges from C to B to D to E to B to C in the graph
below also is not a circuit. Why?
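You can also test such sequences mechanically. The Python sketch below
(our illustration) takes a sequence of vertices, forms the edges it
traverses, and reports whether the sequence returns to its start
without repeating an edge; the last test case answers the question
just asked.

    def is_circuit(vertex_sequence):
        """Return True if the walk given by vertex_sequence starts and ends at
        the same vertex and traverses no (undirected) edge more than once."""
        if len(vertex_sequence) < 2 or vertex_sequence[0] != vertex_sequence[-1]:
            return False
        used_edges = set()
        for v, w in zip(vertex_sequence, vertex_sequence[1:]):
            edge = frozenset((v, w))          # the undirected edge {v, w}
            if edge in used_edges:
                return False                  # an edge is repeated
            used_edges.add(edge)
        return True

    print(is_circuit(["B", "C", "E", "D", "B"]))        # True:  a circuit
    print(is_circuit(["A", "C", "E", "D"]))             # False: does not return to A
    print(is_circuit(["C", "B", "D", "E", "B", "C"]))   # False: edge {B, C} is repeated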
B. Examples
Read Epp: pp. 665-668
Decision trees are helpful structures for determining the outcomes of
certain kinds of games, and they also serve as general problem-solving
tools.
For example, you can construct a tree depicting all possible moves in
the game of Tic-Tac-Toe. Start with an empty diagram. Draw edges to
diagrams containing each possible first insertion of an X (or O).
From each of those diagrams, draw edges to each possible insertion of
an O (or X). Continue this process until the entire tree is
constructed.
You can reduce your effort considerably by noting, for example, that
an initial insertion into one corner has the same effect as an
insertion into any other corner. Thus, you need to show only one
diagram, not four, for that insertion. Try this yourself, then look at
our partial solution to the Tic-Tac-Toe example.
How would you use the information shown in the completed tree to play
the game? Could you construct a tree for the game of chess?
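To get a feel for how fast such a tree grows, the following Python
sketch (ours, and taking no advantage of the symmetry mentioned above)
counts the positions in the full Tic-Tac-Toe game tree, stopping each
branch as soon as a player wins or the board fills.

    def winner(board):
        """Return 'X' or 'O' if that player has three in a row, else None.
        board is a tuple of 9 cells containing 'X', 'O', or ' '."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),       # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),       # columns
                 (0, 4, 8), (2, 4, 6)]                  # diagonals
        for a, b, c in lines:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def count_nodes(board=(" ",) * 9, player="X"):
        """Count the nodes of the full game tree rooted at the given position."""
        if winner(board) or " " not in board:
            return 1                                    # game over: a leaf
        total = 1
        for i, cell in enumerate(board):
            if cell == " ":
                child = board[:i] + (player,) + board[i + 1:]
                total += count_nodes(child, "O" if player == "X" else "X")
        return total

    print(count_nodes())   # total positions in the unpruned Tic-Tac-Toe tree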
Syntactic derivation trees, or parse trees, have applications not only
in natural languages but in computing as well. Many compilers use tree
structures to store a computer program's identifiers and to verify
proper syntax. A tree, rather than a general graph, is an appropriate
structure for syntactic verification because there is a unique path
between any two vertices of a tree. This prevents ambiguity, i.e.,
more than one interpretation.
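As a small made-up illustration of the idea, the Python sketch below
stores the parse tree of an arithmetic expression as nested tuples and
evaluates it by walking the tree; two different trees for the same
symbols give two different interpretations, which is exactly the
ambiguity a unique parse tree prevents.

    # A parse tree stored as nested tuples: (operator, left subtree, right subtree).
    # Leaves are numbers.  The tree below encodes 2 + (3 * 4), i.e. the usual
    # grouping in which * binds more tightly than +.
    tree = ("+", 2, ("*", 3, 4))

    def evaluate(node):
        """Evaluate a parse tree by recursively evaluating its subtrees."""
        if not isinstance(node, tuple):
            return node                       # a leaf: just a number
        op, left, right = node
        a, b = evaluate(left), evaluate(right)
        return a + b if op == "+" else a * b

    print(evaluate(tree))                                 # 14
    print(evaluate(("*", ("+", 2, 3), 4)))                # 20: the other grouping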
Try exercises 1 and 2 on page 681 to assess your understanding of the
types of trees presented in this section.
C. Terminology
Read Epp: pp. 668-676
For this course, you should become familiar with the terms defined on
pages 669 and 675, as well as Theorem 11.5.2 on page 670 (but not the
proof). Theorem 11.5.4 on page 674 specifies a condition for a graph
to be a tree. You can read the rest of the material if you are
interested, but we will not use it in this course. On pages 681 and
682, exercises 7, 32, and 33 will help you develop familiarity with
the relevant terms and concepts.
D. Binary Trees
Read Epp: pp. 676-681
A binary tree is a key data structure in computing. For example, one
application of binary trees involves sorting and searching.
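The Python sketch below (a minimal illustration of ours, not a
definition from Epp) shows the idea behind a binary search tree: keys
smaller than a node go into its left subtree and larger keys into its
right subtree, so a search follows a single path from the root.

    class Node:
        """A node of a binary search tree: smaller keys go left, larger go right."""
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        """Insert key into the tree rooted at root; return the (possibly new) root."""
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root                            # duplicate keys are ignored

    def contains(root, key):
        """Return True if key appears in the tree rooted at root."""
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    root = None
    for value in [8, 3, 10, 1, 6]:
        root = insert(root, value)
    print(contains(root, 6), contains(root, 7))   # True False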
Learn the definitions on page 676 and the theorems on pages 678 and
679 (but not the proofs). Try exercises 34-50 on pages 682 and 683 to
test your understanding.
III. Spanning Trees
A. Basic Concepts
Read Epp: pp. 683-685
Learn the definitions and the proposition (but not the proof) on pages
684 and 685. They are essential to following the algorithms that come
next. Work through example 11.6.1 on these pages, then try exercises
1-4 on page 692.
B. Finding Minimal Spanning Trees
Read Epp: pp. 686-692
Spend time working through the Kruskal and Prim algorithms on pages
686 and 689. Study the examples that come immediately after the
algorithms. Each step of Kruskal's algorithm, as shown in our example,
looks at all remaining edges of the graph (those edges that have not
been deleted in previous steps), then adds the one that has the least
weight, but does not create a circuit.
Each step of Prim's algorithm, as shown in our example, looks only at
the edges joining the tree built so far to a new vertex, then adds the
one of least weight.
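To see Kruskal's idea in running form, the Python sketch below (ours;
the graph and weights are made up) sorts the edges by weight and adds
each edge whose endpoints lie in different components, which is
exactly the condition for not creating a circuit; components are
tracked with a simple union-find structure.

    def kruskal(vertices, weighted_edges):
        """Return a minimal spanning tree as a list of (v, w, weight) edges.
        weighted_edges is a list of (v, w, weight) triples for a connected graph."""
        parent = {v: v for v in vertices}

        def find(v):                      # find the representative of v's component
            while parent[v] != v:
                v = parent[v]
            return v

        tree = []
        for v, w, weight in sorted(weighted_edges, key=lambda e: e[2]):
            rv, rw = find(v), find(w)
            if rv != rw:                  # v and w lie in different components,
                parent[rv] = rw           # so this edge cannot close a circuit
                tree.append((v, w, weight))
        return tree

    # A made-up weighted graph.
    edges = [("A", "B", 1), ("B", "C", 4), ("A", "C", 3), ("C", "D", 2), ("B", "D", 5)]
    print(kruskal(["A", "B", "C", "D"], edges))
    # [('A', 'B', 1), ('C', 'D', 2), ('A', 'C', 3)]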
Note that the theorems on pages 688 and 691 guarantee that a minimal
spanning tree results from applying either algorithm. You can omit the
proofs of the theorems. Carefully work through example 11.6.4, then
try exercises 5-11 on pages 692 and 693.