Our story begins in the 4th century B.C. At that time Aristotle invented formal logic. Very few advances in human thought are made by one man alone without extensive aid from predecessors, but this seems to be one of them, if not the only one [9, page 196]. Most of Aristotle's logical writings appear in the Prior Analytics, which is in turn part of the Organon. It should be noted, however, that the oldest actual manuscript of Aristotle's work is only five centuries old. We do not know which parts of it, if any, are actual words of Aristotle and which are commentaries added by later logicians. We do not know his eventual opinions about logic. The subject is not mentioned in his central writings on metaphysics. Aristotle can be excused if he did not see the significance of logic, because it has only acquired any practical use since the invention of mechanical devices using logical principles (e.g. Jacquard looms, elevator control panels, punch card sorters, digital computers, etc.). Formal logic involves meticulous attention to innumerable details and long strings of exotic symbols, just the sort of thing that computers love and people hate.
Aristotle was a common sense man of science interested in discovering
universal laws by observing the world around him. One of the central topics
in his science is classification, putting individuals into species, species
into genera, etc. In Prior Analytics,
he discusses the general laws of
classification itself. He says that a syllogism is an argument in which, certain things being laid down, other things necessarily follow. In particular, the
things to be said about classification fall into just a few ritualistic
forms. All men are mortal; no man is rational; some dogs are rational; not
all men are philosophers. These are sometimes given the labels A, E, I, O, respectively; e.g., Exy means that no x are y. Aristotle set himself the task of classifying all instances in which such a sentence follows from a set of previously laid down sentences. He argued that no conclusion could require infinitely many hypotheses and then went on to show that all valid arguments could be derived by a finite sequence of syllogisms, each of one of the following four types:
[Figure: the four basic syllogistic inference forms.]
Here we are using a notation standard in modern logic. A single inference form is given by specifying its hypotheses above a horizontal bar and its conclusion below the bar.
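Since the figure itself is not reproduced here, the following is a plausible reconstruction of the four forms, the traditional first-figure syllogisms (Barbara, Celarent, Darii, and Ferio), written in the A, E, I, O notation just described; the particular variable letters are our own choice and may differ from the original figure.

```latex
% A reconstruction of the four first-figure syllogisms in the
% A, E, I, O notation: Axy = all x are y, Exy = no x are y,
% Ixy = some x are y, Oxy = not all x are y.  Hypotheses sit above
% the bar, the conclusion below it.
\[
\frac{Ayz \quad Axy}{Axz}
\qquad
\frac{Eyz \quad Axy}{Exz}
\qquad
\frac{Ayz \quad Ixy}{Ixz}
\qquad
\frac{Eyz \quad Ixy}{Oxz}
\]
```

For example, the first form (Barbara) says that from "all y are z" and "all x are y" one may conclude "all x are z".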
The important thing about these specifications is the mindless way in which they can be applied. Even a computer can be taught to recognize the forms A, E, I, and O. The computer can even be taught to make inferences based on the above four rules. Aristotle's contribution to logic can be summarized by saying that a computer can be taught to make all valid deductions concerning classification systems. If the subject is complicated enough, such as the medical diagnosis of pulmonary diseases, the computer may even be able to do the job better than a human [2].
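To make the point concrete, here is a minimal sketch, in modern notation and not part of any historical system, of a mechanical syllogism engine; the representation, names, and example premises are ours, chosen purely for illustration.

```python
# A toy syllogism engine (illustrative only).  Statements are tuples
# (form, x, y) where form is "A" (all x are y), "E" (no x are y),
# "I" (some x are y) or "O" (not all x are y).

# The four first-figure syllogisms as (major form, minor form, conclusion form):
# a major premise about (y, z) and a minor premise about (x, y) yield a
# conclusion about (x, z).
RULES = [
    ("A", "A", "A"),  # Barbara:  Ayz, Axy |- Axz
    ("E", "A", "E"),  # Celarent: Eyz, Axy |- Exz
    ("A", "I", "I"),  # Darii:    Ayz, Ixy |- Ixz
    ("E", "I", "O"),  # Ferio:    Eyz, Ixy |- Oxz
]

def close_under_syllogisms(statements):
    """Apply the four rules repeatedly until no new statement is derived."""
    facts = set(statements)
    while True:
        new_facts = set()
        for (major_form, y1, z) in facts:        # candidate major premise
            for (minor_form, x, y2) in facts:    # candidate minor premise
                if y1 != y2:
                    continue                     # middle terms must agree
                for (maj, mino, concl) in RULES:
                    if (major_form, minor_form) == (maj, mino):
                        new_facts.add((concl, x, z))
        if new_facts <= facts:                   # nothing new: fixed point reached
            return facts
        facts |= new_facts

if __name__ == "__main__":
    premises = [("A", "greeks", "men"), ("A", "men", "mortal")]
    for fact in sorted(close_under_syllogisms(premises)):
        print(fact)
    # ('A', 'greeks', 'men'), ('A', 'greeks', 'mortal'), ('A', 'men', 'mortal')
```

Feeding it "all Greeks are men" and "all men are mortal" mechanically produces "all Greeks are mortal", a classic instance of Barbara, with no understanding required of the machine.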
The idea that logic can be used to make machines think was not explicitly realized until some time after Aristotle's death. Around 1642 Blaise Pascal actually constructed a machine which would perform additions and subtractions. Leibnitz improved upon Pascal's machine, adding the capability of multiplication and division. Leibnitz' machine was completed in 1673 and exhibited in London. For this achievement Leibnitz was elected a Fellow of the Royal Society. Leibnitz said of his invention, "Also the astronomers surely will not have to continue to exercise the patience which is required for computation. It is this that deters them from computing or correcting tables, from the construction of Ephemerides, from working on hypotheses, and from discussions of observations with each other. For it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used." [4, page 8]. So, in the middle of the 17th century, we see for the first time a realization that it might someday be possible to perform routine calculations by machine.
At the same time there was a tremendous explosion in the power of routine calculation. Newton invented calculus in 1666 and Leibnitz reinvented it in 1676. In the winter of 1676 while working in Paris, Leibnitz created the integral sign; in 1681, while William Penn was setting up camp at the northern extreme of Delaware Bay, Newton formulated his famous laws of classical physics and solved the differential equations for planetary motion. By extending the techniques of algebra to the infinite and the infinitesimal, Leibnitz and Newton and their followers were able to change the face of the earth, but the domain of calculation had not been extended beyond numbers. Leibnitz' contribution to our tale is that he also realized that logic was capable of pushing calculation into almost any domain. He wanted a new scientific language which would help not merely in the communication of thoughts but also in thinking itself, and this he called a lingua philosophica or characteristica universalis. His fundamental hope for this language was that its symbolism would mirror the structure of the world so that we could determine the exact relationship between objects merely by examining their symbols. This would make possible what Leibnitz called a calculus ratiocinator or a mechanical method for drawing conclusions [7].
Needless to say, Leibnitz did not succeed in constructing such a language, although he did make some progress in formalizing the propositional calculus. The man who did formulate a general language for logical deduction was Gottlob Frege, whose 1879 book, Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, is generally regarded as the most important work in the history of logic. To the 20th century observer, bred on the mother's milk of nihilism, it comes as no surprise that Frege's system does not fully realize Leibnitz' original goals. In fact, it does not even realize Frege's goals.
The first great nihilist theorem of the 20th century is Russell's 1903 observation that Frege's axioms are inconsistent. In modern terms, Frege took as a basic truth of logic that every condition determines a set; e.g., the condition "red" determines the set of red objects. Russell showed that this principle is self-contradictory. Consider the collection R of all those sets which are not members of themselves. Then R is a member of R iff R is not a member of itself.
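Written symbolically (a standard rendering of the argument, not a quotation from the original), Russell's observation is:

```latex
% Russell's paradox: if every condition determined a set, one could form
% the set R of all sets that are not members of themselves; asking
% whether R belongs to itself then yields a contradiction.
\[
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R .
\]
```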

None the less, Frege's language, which has grown into what we now call the predicate calculus, does serve to bring widely diverse fields into the domain of calculation and thus opens up the possibility of making machines behave like human experts. Also, Frege's achievement of creating the most general possible formal language was instrumental in proving theorems about the limitations of formal languages. Gödel's incompleteness theorem would appear to say that Leibnitz' characteristica universalis is theoretically impossible.
Frege's system of logical notation was awkward and difficult to read. It used two-dimensional patterns to represent logical statements. His notation was superseded by a system introduced by Peano in 1894 [11], which is essentially equivalent to modern logical notation. Peano's system was further refined by Hilbert, who in 1899 used modern logic to finally carry out the program usually attributed to Euclid, namely the rigorous axiomatic development of plane geometry [5].