Combinatorics

Combinatorics is a branch of mathematics that studies finite collections of objects that satisfy specified criteria, and is in particular concerned with "counting" the objects in those collections (enumerative combinatorics) and with deciding whether certain "optimal" objects exist (extremal combinatorics). One of the most prominent combinatorialists of recent times was Gian-Carlo Rota, who helped formalize the subject beginning in the 1960s. The prolific problem-solver Paul Erdős worked mainly on extremal questions. The study of how to count objects is sometimes thought of separately as the field of enumeration.

An example of a combinatorial question is the following: What is the number of possible orderings of a deck of 52 playing cards? That number equals 52! (i.e., "fifty-two factorial"), the product of all the natural numbers from one to fifty-two. It may seem surprising that this number, about 8.065817517094 × 10^67, is so large: a little more than 8 followed by 67 zeros. For comparison, it is greater than the square of Avogadro's number, 6.022 × 10^23, "the number of atoms, molecules, etc., in a gram mole".
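The figure can be reproduced directly; here is a minimal sketch in Python (standard library only) of the arithmetic described above:

# Number of orderings of a 52-card deck: 52! = 1 * 2 * ... * 52.
import math

orderings = math.factorial(52)
print(f"{orderings:.6e}")          # about 8.065818e+67
avogadro = 6.022e23                # Avogadro's number
print(orderings > avogadro ** 2)   # True: 52! exceeds the square of Avogadro's number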

Counting functions

Calculating the number of ways that certain patterns can be formed is the beginning of combinatorics. Let S be a set with n objects. Combinations of k objects from this set S are subsets of S having k elements each (where the order of listing the elements does not distinguish two subsets). Permutations of k objects from this set S refer to sequences of k different elements of S (where two sequences are considered different if they contain the same elements but in a different order). Formulas for the number of permutations and combinations are readily available and important throughout combinatorics.
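Explicitly, for 0 ≤ k ≤ n, the standard formulas are

P(n,k) = \frac{n!}{(n-k)!} \qquad C(n,k) = \binom{n}{k} = \frac{n!}{k!\,(n-k)!}

and the two are related by P(n,k) = k! · C(n,k), since each combination of k elements can be arranged into exactly k! different sequences.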

More generally, given an infinite collection of finite sets {S_i}, typically indexed by the natural numbers, enumerative combinatorics seeks a variety of ways of describing a counting function, f(n), which counts the number of objects in S_n for any n. Although counting the number of elements in a set is a rather broad mathematical problem, in a combinatorial problem the sets S_i will usually have a relatively simple combinatorial description and little additional structure.

The simplest such functions are closed formulas, which can be expressed as a composition of elementary functions such as factorials, powers, and so on. As noted above, the number of possible different orderings of a deck of n cards is f(n) = n!.

This approach may not always be entirely satisfactory (or practical) for every combinatorial problem. For example, let f(n) be the number of distinct subsets of the integers in the interval [1,n] that do not contain two consecutive integers; thus for n = 4 we have {}, {1}, {2}, {3}, {4}, {1,3}, {1,4}, {2,4}, so f(4) = 8. It turns out that f(n) is the (n+2)nd Fibonacci number, which can be expressed in closed form as:

f(n) = \frac{\phi^{n+2}}{\sqrt{5}} - \frac{(1-\phi)^{n+2}}{\sqrt{5}}

where φ = (1 + √5) / 2, the golden mean. However, given that we are counting sets of integers, the presence of the √5 in the result may be considered "unaesthetic" from a combinatorial viewpoint. Alternatively, f(n) may be described by the recurrence

f(n) = f(n - 1) + f(n - 2)

which (together with the initial values f(1) = 2 and f(2) = 3) may be more satisfactory from a purely combinatorial viewpoint, since it shows directly why the count behaves as it does: a qualifying subset of [1,n] either omits n, leaving f(n - 1) possibilities, or contains n and therefore omits n - 1, leaving f(n - 2) possibilities.
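This small count can be checked by brute force; the following sketch in Python (written for this example, standard library only) enumerates the qualifying subsets directly and reproduces the values predicted by the recurrence:

# Count subsets of {1, ..., n} that contain no two consecutive integers.
from itertools import combinations

def f_brute(n):
    count = 0
    for size in range(n + 1):
        for subset in combinations(range(1, n + 1), size):
            # a subset qualifies if every gap between consecutive chosen
            # elements is at least 2
            if all(b - a > 1 for a, b in zip(subset, subset[1:])):
                count += 1
    return count

print([f_brute(n) for n in range(8)])   # [1, 2, 3, 5, 8, 13, 21, 34]
print(f_brute(4))                       # 8, matching the eight subsets listed above

Each value in the list is the sum of the previous two, as the recurrence states.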

Another approach is to find an asymptotic formula

f(n) ~ g(n)

where g(n) is a "familiar" function, and where the ratio f(n)/g(n) approaches 1 as n approaches infinity. In some cases, a simple asymptotic function may be preferable to a horribly complicated closed formula that yields no insight into the behaviour of the counted objects. In the above example, an asymptotic formula would be

f(n) \sim \frac{\phi^{n+2}}{\sqrt{5}}

as n becomes large.
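A small numerical sketch (again plain Python) illustrates the approximation; f is computed here from the recurrence, with the initial values noted above:

from math import sqrt

phi = (1 + sqrt(5)) / 2   # the golden mean

def f(n):
    # f via the recurrence, with f(0) = 1 and f(1) = 2
    a, b = 1, 2
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (5, 10, 20, 30):
    approx = phi ** (n + 2) / sqrt(5)
    print(n, f(n), round(approx, 3), f(n) / approx)   # the ratio tends to 1

In fact, since the neglected term (1 - φ)^(n+2) / √5 is smaller than 1/2 in absolute value, rounding φ^(n+2) / √5 to the nearest integer recovers f(n) exactly.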

Finally, and most usefully, f(n) may be expressed by a formal power series, called its generating function, which is most commonly either the ordinary generating function

\sum f(n) x^n

or the exponential generating function

\sum f(n) \frac{x^n}{n!}

where the sums are taken for n ≥ 0. Once determined, the generating function may allow one to extract all the information given by the previous approaches. In addition, the various natural operations on generating functions such as addition, multiplication, differentiation, etc., have a combinatorial significance; and this allows one to extend results from one combinatorial problem in order to solve others.
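As an illustration, for the function f above the recurrence and the initial values f(0) = 1, f(1) = 2 give the ordinary generating function (1 + x)/(1 - x - x^2); the following plain-Python sketch (written for this example) reads the coefficients back off by formal long division of power series:

def ogf_coefficients(numer, denom, terms):
    # Coefficients of numer(x)/denom(x) as a power series; numer and denom are
    # lists of integer coefficients, constant term first, with denom[0] = 1.
    numer = numer + [0] * (terms - len(numer))
    coeffs = []
    for n in range(terms):
        c = numer[n]
        for k in range(1, min(n, len(denom) - 1) + 1):
            c -= denom[k] * coeffs[n - k]
        coeffs.append(c // denom[0])
    return coeffs

# (1 + x) / (1 - x - x^2): the denominator encodes the recurrence,
# the numerator fixes the initial values.
print(ogf_coefficients([1, 1], [1, -1, -1], 8))   # [1, 2, 3, 5, 8, 13, 21, 34]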

Results

Some very subtle patterns can be developed and some surprising theorems proved. One example of a surprising theorem is due to Frank P. Ramsey:

Suppose 6 people meet at a party. Each pair of people either know each other or don't know each other. It is always possible to find 3 of the 6 people who either all know each other or are all strangers to each other.

The proof is a short proof by contradiction: suppose that there are no 3 people who either all know each other or all don't know each other. Consider any one person at the party, hereafter called person A: among the remaining 5 people, there must (by the pigeonhole principle) be at least three who all know A or at least three who all do not know A. Without loss of generality, assume three such people all know A. Then among those three people, at least two of them must know each other (otherwise we would have 3 people who all don't know each other). But those two also know A, so we have 3 people who all know each other. (This is a special case of Ramsey's theorem.)

An alternate proof works by double counting: count the number of ordered triples of people (A,B,C) where person B knows person A but does not know person C. Suppose person K knows k of the 5 others. Then K is the B of exactly k(5-k) such triples: A must be one of the k people K knows, and C must be one of the (5-k) people K doesn't. The product k(5-k) equals 0×5 = 0, 1×4 = 4 or 2×3 = 6 (the values for k = 3, 4, 5 repeat these), so each person is the B of at most 6 such triples. Since there are 6 people, there are at most 36 triples.

Now consider a triple of people where exactly 1 pair know each other. We can turn them into such an (A,B,C) in exactly two ways: let C be the one who is a stranger, and then call one of the others A and the other B. Similarly, if exactly 2 pairs know each other, they can be turned into such a triple in exactly two ways: let A be the person who knows both of the others, whilst B and C (in some order) are the two who do not know each other. Therefore, there are at most 36/2 = 18 triples where either exactly 1 pair or exactly 2 pairs know each other. Since there are 20 triples in total (6 choose 3 = 20), there must be at least 2 triples who either all know each other or are all strangers to each other.
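Both claims, that at least one such triple always exists and in fact at least two, can be confirmed by exhaustive search; the following plain-Python sketch checks every one of the 2^15 possible acquaintance patterns among 6 people:

from itertools import combinations

people = range(6)
pairs = list(combinations(people, 2))      # the 15 unordered pairs of people
triples = list(combinations(people, 3))    # the 20 unordered triples of people

def mono_triples(knows):
    # Count triples that all know each other or are all mutual strangers.
    count = 0
    for a, b, c in triples:
        status = (knows[(a, b)], knows[(a, c)], knows[(b, c)])
        if all(status) or not any(status):
            count += 1
    return count

# Each of the 15 pairs either knows each other or not: 2^15 = 32768 patterns.
minimum = min(
    mono_triples({pair: bool(mask >> i & 1) for i, pair in enumerate(pairs)})
    for mask in range(1 << 15)
)
print(minimum)   # 2: every pattern contains at least two such triples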

The idea of finding order in random configurations gives rise to Ramsey theory. Essentially this theory says that any sufficiently large configuration will contain at least one instance of some other type of configuration.
