The Danish Peace Academy

SCIENCE AND SOCIETY

John Avery
H.C. Ørsted Institute, University of Copenhagen

Chapter 18 ARTIFICIAL INTELLIGENCE

The first computers

The dramatic development of molecular biology during the period following World War II would have been impossible without X-ray crystallography; and the application of X-ray crystallography to large biological molecules would have been impossible without another equally dramatic postwar development - the advent of high-speed electronic digital computers. The first programmable universal computers were completed in the middle 1940’s; but they had their roots in the much earlier ideas of Blaise Pascal (1623-1662), Gottfried Wilhelm Leibniz (1646-1716), Joseph Marie Jacquard (1752-1834) and Charles Babbage (1791-1871).

In 1642, the distinguished French mathematician and philosopher, Blaise Pascal, completed a working model of a machine for adding and subtracting. According to tradition, the idea for his “calculating box” came to Pascal when, as a young man of 17, he sat thinking of ways to help his father (who was a tax collector). In describing his machine, Pascal wrote:

“I submit to the public a small machine of my own invention, by means of which you alone may, without any effort, perform all the operations of arithmetic, and may be relieved of the work which has often times fatigued your spirit when you have worked with the counters or with the pen.”

Pascal’s machine, which worked by means of toothed wheels, was much improved by Leibniz, who constructed a mechanical calculator which, besides adding and subtracting, could also multiply and divide. His first machine was completed in 1671; and Leibniz’ description of it, written in Latin, is preserved in the Royal Library at Hanover:

“There are two parts of the machine, one designed for addition (and subtraction), and the other designed for multiplication (and division); and they should fit together. The adding (and subtracting) machine coincides completely with the calculating box of Pascal. Something, however, must be added for the sake of multiplication...”

“The wheels which represent the multiplicand are all of the same size, equal to that of the wheels of addition, and are also provided with ten teeth which, however, are movable so that at one time there should protrude 5, at another 6 teeth, etc., according to whether the multiplicand is to be represented five times or six times, etc.” “For example, the multiplicand 365 consists of three digits, 3, 6, and 5. Hence the same number of wheels is to be used. On these wheels, the multiplicand will be set if from the right wheel there protrude 5 teeth, from the middle wheel 6, and from the left wheel 3.”

By 1810, calculating machines based on Leibniz’ design were being manufactured commercially; and mechanical calculators of a similar design could be found in laboratories and offices until the 1960’s. The idea of a programmable universal computer is due to the English mathematician, Charles Babbage, who was the Lucasian Professor of Mathematics at Cambridge University. (In the 17th century, Isaac Newton held this post, and in the 20th century, P.A.M. Dirac also held it.)

In 1812, Babbage conceived the idea of constructing a machine which could automatically produce tables of functions, provided that the functions could be approximated by polynomials. He constructed a small machine, which was able to calculate tables of quadratic functions to eight decimal places; and in 1832 he demonstrated this machine to the Royal Society and to representatives of the British government.
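The principle behind Babbage’s machine was the method of finite differences: for a polynomial of degree n, the nth differences are constant, so an entire table can be generated using nothing but addition. A short Python sketch of the idea follows; it illustrates the mathematics, not Babbage’s hardware, and the quadratic chosen is an arbitrary example:

    # Tabulating a quadratic by the method of finite differences:
    # only additions are needed, as in Babbage's machine.
    def tabulate_quadratic(a, b, c, n):
        """Tabulate f(x) = a*x^2 + b*x + c for x = 0..n-1 using only additions."""
        f = c              # f(0)
        d1 = a + b         # first difference, f(1) - f(0)
        d2 = 2 * a         # second difference, constant for a quadratic
        table = []
        for _ in range(n):
            table.append(f)
            f += d1        # next function value
            d1 += d2       # next first difference
        return table

    print(tabulate_quadratic(2, 3, 5, 6))   # [5, 10, 19, 32, 49, 70]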

The demonstration was so successful that Babbage secured financial support for the construction of a large machine which would tabulate sixth-order polynomials to twenty decimal places. The large machine was never completed, and twenty years later, after having spent seventeen thousand pounds on the project, the British government withdrew its support. The reason why Babbage’s large machine was never finished can be understood from the following account by Lord Moulton of a visit to the mathematician’s laboratory:

“One of the sad memories of my life is a visit to the celebrated mathematician and inventor, Mr. Babbage. He was far advanced in age, but his mind was still as vigorous as ever. He took me through his workrooms.”

“In the first room I saw the parts of the original Calculating Machine, which had been shown in an incomplete state many years before, and had even been put to some use. I asked him about its present form. ‘I have not finished it, because in working at it, I came on the idea of my Analytical Machine, which would do all that it was capable of doing, and much more. Indeed, the idea was so much simpler that it would have taken more work to complete the Calculating Machine than to design and construct the other in its entirety; so I turned my attention to the Analytical Machine.’”

“After a few minutes talk, we went into the next workroom, where he showed me the working of the elements of the Analytical Machine. I asked if I could see it. ‘I have never completed it,’ he said, ‘because I hit upon the idea of doing the same thing by a different and far more effective method, and this rendered it useless to proceed on the old lines.’”

“Then we went into a third room. There lay scattered bits of mechanism, but I saw no trace of any working machine. Very cautiously I approached the subject, and received the dreaded answer: ‘It is not constructed yet, but I am working at it, and will take less time to construct it altogether than it would have taken to complete the Analytical Machine from the stage in which I left it.’ I took leave of the old man with a heavy heart.”

Babbage’s first calculating machine was a special-purpose mechanical computer, designed to tabulate polynomial functions; and he abandoned this design because he had hit on the idea of a universal programmable computer. Several years earlier, the French inventor, Joseph Marie Jacquard, had constructed an automatic loom in which punched cards were used to control the warp threads. Inspired by Jacquard’s invention, Babbage planned to use punched cards to program his universal computer.

(Jacquard’s looms could be programmed to weave extremely complex patterns: A portrait of the inventor, woven on one of his looms in Lyons, hung in Babbage’s drawing room.)

One of Babbage’s frequent visitors was Augusta Ada, Countess of Lovelace (1815-1852), the daughter of Lord and Lady Byron. She was a mathematician of considerable ability, and it is through her lucid descriptions that we know how Babbage’s never-completed Analytical Machine was to have worked.

The next step towards modern computers was taken by Herman Hollerith, a statistician working for the United States Bureau of the Census. He invented electromechanical machines for reading and sorting data punched onto cards. Hollerith’s machines were used to analyse the data from the 1890 United States Census; and similar machines began to be manufactured and used in business and administration.

In 1937, Howard Aiken, of Harvard University, became interested in combining Babbage’s ideas with some of the techniques which had developed from Hollerith’s punched card machines. He approached the International Business Machines Corporation, the largest manufacturer of punched card equipment, with a proposal for the construction of a large, automatic, programmable calculating machine.

Aiken’s machine, the Automatic Sequence Controlled Calculator (ASCC), was completed in 1944 and presented to Harvard University. Based on geared wheels, in the Pascal-Leibniz-Babbage tradition, ASCC had more than three quarters of a million parts and used 500 miles of wire. ASCC was unbelievably slow by modern standards - it took three-tenths of a second to perform an addition - but it was one of the first programmable general-purpose digital computers ever completed. It remained in continuous use, day and night, for fifteen years.

In the ASCC, binary numbers were represented by relays, which could be either on or off. The on position represented 1, while the off position represented 0, these being the only two digits required to represent numbers in the binary (base 2) system. Electromechanical calculators similar to ASCC were developed independently by Konrad Zuse in Germany and by George R. Stibitz at the Bell Telephone Laboratory.

Meanwhile, at Iowa State University, the physicist John V. Atanasoff and his student, Clifford E. Berry, had developed a special-purpose electronic digital computer designed to solve large sets of simultaneous equations. The Atanasoff-Berry Computer (ABC) was completed in 1943. It used capacitors as a memory device; but since they gradually lost their charge, Atanasoff included a device for periodically “jogging” the memory (i.e. recharging the capacitors). Because of a relatively minor fault with the input-output system, ABC was never used for practical computational problems; and Atanasoff and Berry had to abandon it to work on research related to the war effort.

Like ASCC, ABC represented numbers in binary notation. Although it was a special-purpose machine, ABC represented a milestone in computing: It was the first electronic digital computer. (Analogue computers, such as the Differential Analyser designed by Vannevar Bush at M.I.T., have a separate history, and we will not discuss them here.)

In 1943, the electronic digital computer Colossus was completed in England by a group inspired by the mathematicians A.M. Turing and M.H.A. Newman. Colossus was the first large-scale electronic computer. It was used to break the German Lorenz teleprinter cipher (Enigma itself was attacked with electromechanical machines); and it thus affected the course of World War II.

In 1946, ENIAC (Electronic Numerical Integrator and Computer) became operational. This general-purpose computer, designed by J.P. Eckert and J.W. Mauchly of the University of Pennsylvania, contained 18,000 vacuum tubes, one or another of which was often out of order. However, during the periods when all its vacuum tubes were working, an electronic computer like Colossus or ENIAC could shoot ahead of an electromechanical machine (such as ASCC) like a hare outdistancing a tortoise.

Microelectronics

During the summer of 1946, a course on “The Theory and Techniques of Electronic Digital Computers” was given at the University of Pennsylvania. The ideas put forward in this course had been worked out by a group of mathematicians and engineers headed by J.P. Eckert, J.W. Mauchly and John von Neumann, and these ideas very much influenced all subsequent computer design.

The problem of unreliable vacuum tubes was solved in 1948 by John Bardeen, William Shockley and Walter Brattain of the Bell Telephone Laboratories. Application of quantum theory to solids had led to an understanding of the electrical properties of crystals. Like atoms, crystals were found to have allowed and forbidden energy levels. The allowed energy levels for an electron in a crystal were known to form bands, i.e., some energy ranges with many allowed states (allowed bands), and other energy ranges with none (forbidden bands). The lowest allowed bands were occupied by electrons, while higher bands were empty. The highest filled band was called the “valence band”, and the lowest empty band was called the “conduction band”.

According to quantum theory, whenever the valence band of a crystal is only partly filled, the crystal is a conductor of electricity; but if the valence band is completely filled with electrons, the crystal is an electrical insulator. (A completely filled band is analogous to a room so packed with people that none of them can move.)

In addition to conductors and insulators, quantum theory predicted the existence of “semiconductors” - crystals where the valence band is completely filled with electrons, but where the energy gap between the conduction band and the valence band is very small. For example, crystals of the elements silicon and germanium are semiconductors. For such a crystal, thermal energy is sometimes enough to lift an electron from the valence band to the conduction band.
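The likelihood of such thermal excitation falls off roughly as exp(-Eg/2kT), where Eg is the width of the gap and kT is the thermal energy. The band-gap values in the sketch below are standard textbook numbers, added here for illustration rather than taken from the text:

    # Rough fraction of electrons thermally excited across the band gap,
    # which varies as exp(-Eg / 2kT).  Band-gap values are textbook numbers.
    import math

    kT = 0.0259  # thermal energy in eV at room temperature (300 K)
    for name, gap in [("silicon", 1.12), ("germanium", 0.67)]:
        fraction = math.exp(-gap / (2 * kT))
        print(f"{name}: ~{fraction:.1e}")
    # silicon: ~4e-10, germanium: ~2e-06 - germanium's smaller gap
    # gives it far more thermally excited carriers.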

Bardeen, Shockley and Brattain found ways to control the conductivity of germanium crystals by injecting electrons into the conduction band, or alternatively by removing electrons from the valence band. They could do this by “doping” the crystals with appropriate impurities, or by injecting electrons with a special electrode. The semiconducting crystals whose conductivity was controlled in this way could be used as electronic valves, in place of vacuum tubes.

By the 1960’s, replacement of vacuum tubes by transistors in electronic computers had led not only to an enormous increase in reliability and a great reduction in cost, but also to an enormous increase in speed.

It was found that the limiting factor in computer speed was the time needed for an electrical signal to propagate from one part of the central processing unit to another. Since electrical impulses propagate at nearly the speed of light, this time is extremely small (light travels only about 30 centimeters in a nanosecond); but nevertheless, it is the limiting factor in the speed of electronic computers.

In order to reduce the propagation time, computer designers tried to make the central processing units very small; and the result was the development of integrated circuits and microelectronics. (Another motive for miniaturization of electronics came from the requirements of space exploration.)

Integrated circuits were developed in which single circuit elements were not manufactured separately. Instead, the whole circuit was made at one time. An integrated circuit is a sandwich-like structure, with conducting, resisting and insulating layers interspersed with layers of germanium or silicon, “doped” with appropriate impurities. At the start of the manufacturing process, an engineer makes a large drawing of each layer. For example, the drawing of a conducting layer would contain pathways which fill the role played by wires in a conventional circuit, while the remainder of the layer would consist of areas destined to be etched away by acid.

The next step is to reduce the size of the drawing and to multiply it photographically. The pattern of the layer is thus repeated many times, like the design on a piece of wallpaper. The multiplied and reduced drawing is then focused through a reversed microscope onto the surface to be etched.

Successive layers are built up by evaporating or depositing thin films of the appropriate substances onto the surface of a silicon or germanium wafer. If the layer being made is to be conducting, the surface would consist of an extremely thin layer of copper, covered with a photosensitive layer called a “photoresist”. On those portions of the surface receiving light from the pattern, the photoresist becomes insoluble, while on those areas not receiving light, the photoresist can be washed away.

The surface is then etched with acid, which removes the copper from those areas not protected by photoresist. Each successive layer of a wafer is made in this way, and finally the wafer is cut into tiny “chips”, each of which corresponds to one unit of the wallpaper-like pattern.

Although the area of a chip may be much smaller than a square centimeter, the chip can contain an extremely complex circuit. A typical programmable minicomputer or “microprocessor”, manufactured during the 1970’s, could have 30,000 circuit elements, all of which were contained on a single chip. By 1986, more than a million transistors were being placed on a single chip.

As a result of miniaturization, the speed of computers rose steadily. In 1960, the fastest computers could perform a hundred thousand elementary operations per second. By 1970, the fastest computers took less than a second to perform a million such operations. In 1987, a computer called GF11 was designed to perform 11 billion floating-point operations per second.

GF11 (Gigaflop 11) is a scientific parallel-processing machine constructed by IBM. Approximately ten floating-point operations are needed for each machine instruction. Thus GF11 runs at a rate of approximately 1,100 million instructions per second (1,100 MIPS).

The high speed achieved by parallel-processing machines results from dividing a job into many sub-jobs on which a large number of processing units can work simultaneously.
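As a loose illustration (in modern Python, of course, not the specialized hardware of a machine like GF11), a job can be divided into chunks on which several worker processes operate at once:

    # A job (summing the squares of a million numbers) divided into four
    # sub-jobs, on which four worker processes operate simultaneously.
    from multiprocessing import Pool

    def sub_job(chunk):
        return sum(x * x for x in chunk)   # each worker handles one slice

    if __name__ == "__main__":
        data = range(1_000_000)
        chunks = [data[i::4] for i in range(4)]   # divide the job four ways
        with Pool(4) as pool:
            partial_sums = pool.map(sub_job, chunks)  # run simultaneously
        print(sum(partial_sums))   # combine the partial results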

Computer memories have also undergone a remarkable development. In 1987, the magnetic disc memories being produced could store 20 million bits of information per square inch; and even higher densities could be achieved by optical storage devices. (A “bit” is the unit of information. For example, the number 25, written in the binary system, is 11001. To specify this 5-digit binary number requires 5 bits of information. To specify an n-digit binary number requires n bits of information. Eight bits make a “byte”.)
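The worked example can be checked in a few lines of Python:

    # Checking the worked example: 25 in binary is 11001, which
    # requires five bits; eight bits make one byte.
    n = 25
    bits = bin(n)[2:]       # strip the '0b' prefix -> '11001'
    print(bits, len(bits))  # 11001 5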

In the 1970’s and 1980’s, computer networks were set up linking machines in various parts of the world. It became possible (for example) for a scientist in Europe to perform a calculation interactively on a computer in the United States just as though the distant machine were in the same room; and two or more computers could be linked for performing large calculations. It also became possible to exchange programs, data, letters and manuscripts very rapidly through the computer networks.

The exchange of large quantities of information through computer networks was made easier by the introduction of fiber optics cables.

By 1986, 250,000 miles of such cables had been installed in the United States. If a ray of light, propagating in a medium with a large refractive index, strikes the surface of the medium at a grazing angle, then the ray undergoes total internal reflection. This phenomenon is utilized in fiber optics: A light signal can propagate through a long, hairlike glass fiber, following the bends of the fiber without losing intensity because of total internal reflection. By 1987, devices were being manufactured commercially which were capable of transmitting information through fiber optics cables at the rate of 1.7 billion bits per second.
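From Snell’s law, total internal reflection sets in beyond the critical angle θc, where sin θc = n2/n1. The sketch below uses typical refractive indices for a glass fiber core and its cladding; these values are illustrative assumptions, not figures from the text:

    # Critical angle for total internal reflection: sin(theta_c) = n2/n1.
    # Typical values for a glass fiber core and its cladding.
    import math

    n1, n2 = 1.50, 1.47   # core and cladding refractive indices
    theta_c = math.degrees(math.asin(n2 / n1))
    print(f"critical angle = {theta_c:.1f} degrees")   # about 78.5 degrees
    # Rays striking the fiber wall at incidence angles above theta_c
    # (i.e., at grazing angles to the wall) are totally reflected.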

Automation

During the last three decades, the cost of computing has decreased exponentially, falling by between twenty and thirty percent per year. Meanwhile, the computer industry has grown exponentially, at twenty percent per year (faster than any other industry). The astonishing speed of this development has been matched by the speed with which computers have become part of the fabric of science, engineering, industry, commerce, communications, transport, publishing, education and daily life in the industrialized parts of the world.

The speed, power and accuracy of computers have revolutionized many branches of science. For example, before the era of computers, the determination of a simple molecular structure by the analysis of X-ray diffraction data often took years of laborious calculation; and complicated structures were completely out of reach. In 1949, however, Dorothy Crowfoot Hodgkin used an electronic computer to work out the structure of penicillin from X-ray data. This was the first application of a computer to a biochemical problem; and it was followed by the analysis of progressively larger and more complex structures. Proteins, DNA, and finally even the detailed structures of viruses were studied through the application of computers in crystallography.

The enormous amount of data needed for such studies was gathered automatically by computer-controlled diffractometers; and the final results were stored in magnetic-tape data banks, available to users through computer networks.

The application of quantum theory to chemical problems is another field of science which owes its development to computers. When Erwin Schrödinger wrote down his wave equation in 1926, it became possible, in principle, to calculate most of the physical and chemical properties of matter. However, the solutions to the Schrödinger equation for many-particle systems can only be found approximately; and before the advent of computers, even approximate solutions could not be found, except for the simplest systems.

When high-speed electronic digital computers became widely available in the 1960’s, it suddenly became possible to obtain solutions to the Schrödinger equation for systems of chemical and even biochemical interest. Quantum chemistry (pioneered by such men as J.C. Slater, R.S. Mulliken, D.R. Hartree, V. Fock, J.H. Van Vleck, L. Pauling, E.B. Wilson, P.O. Löwdin, E. Clementi, C.J. Ballhausen and others) developed into a rapidly-growing field, as did solid state physics. Through the use of computers, it became possible to design new materials with desired chemical, mechanical, electrical or magnetic properties. Applying computers to the analysis of reactive scattering experiments, D. Herschbach, J. Polanyi and Y. Lee were able to achieve an understanding of the dynamics of chemical reactions.

The successes of quantum chemistry led Albert Szent-Györgyi, A. and B. Pullman, H. Scheraga and others to pioneer the fields of quantum biochemistry and molecular dynamics. Computer programs for drug design were developed, as well as molecular-dynamics programs which allowed the conformations of proteins to be calculated from a knowledge of their amino acid sequences. Studies in quantum biochemistry have yielded insights into the mechanisms of enzyme action, photosynthesis, active transport of ions across membranes, and other biochemical processes.

In medicine, computers began to be used for monitoring the vital signs of critically ill patients, for organizing the information flow within hospitals, for storing patients’ records, for literature searches, and even for differential diagnosis of diseases.

The University of Pittsburgh has developed a diagnostic program called INTERNIST-1, with a knowledge of 577 diseases and their interrelations, as well as 4,100 signs, symptoms and patient characteristics. This program was shown to perform almost as well as an academic physician in diagnosing difficult cases. QMR (Quick Medical Reference), a microcomputer adaptation of INTERNIST-1, incorporates the diagnostic functions of the earlier program, and also offers an electronic textbook mode.

Beginning in the 1960’s, computers played an increasingly important role in engineering and industry. For example, in the 1960’s, Rolls Royce Ltd. began to use computers not only to design the optimal shape of turbine blades for aircraft engines, but also to control the precision milling machines which made the blades. In this type of computer-assisted design and manufacture, no drawings were required.

Furthermore, it became possible for an industry requiring a part from a subcontractor to send the machine-control instructions for its fabrication through the computer network to the subcontractor, instead of sending drawings of the part.

In addition to computer-controlled machine tools, robots were also introduced. They were often used for hazardous or monotonous jobs, such as spray-painting automobiles; and they could be programmed by going through the job once manually in the programming mode. By 1987, the population of robots in the United States was between 5,000 and 7,000, while in Japan, the Industrial Robot Association reported a robot population of 80,000.

Chemical industries began to use sophisticated computer programs to control and to optimize the operations of their plants. In such control systems, sensors reported current temperatures, pressures, flow rates, etc. to the computer, which then employed a mathematical model of the plant to calculate the adjustments needed to achieve optimum operating conditions.

Not only industry, but also commerce, felt the effects of computerization during the postwar period. Commerce is an information-intensive activity; and in fact some of the crucial steps in the development of information-handling technology came about because of the demands of commerce: The first writing evolved from records of commercial transactions kept on clay tablets in the Middle East; and automatic business machines, using punched cards, paved the way for the development of the first programmable computers.

Computerization has affected wholesaling, warehousing, retailing, banking, stockmarket transactions, transportation of goods - in fact, all aspects of commerce. In wholesaling, electronic data is exchanged between companies by means of computer networks, allowing order-processing to be handled automatically; and similarly, electronic data on prices is transmitted to buyers.

The key to automatic order-processing in wholesaling was standardization. In the United States, the Food Marketing Institute, the Grocery Manufacturers of America, and several other trade organizations, established the Uniform Communications System (UCS) for the grocery industry. This system specifies a standard format for data on products, prices and orders.

Automatic warehouse systems were designed as early as 1958. In such systems, the goods to be stored are placed on pallets (portable platforms), which are stacked automatically in aisles of storage cubicles. A computer records the position of each item for later automatic retrieval.

In retailing, just as in wholesaling, standardization proved to be the key requirement for automation. Items sold in supermarkets in most industrialized countries are now labeled with a standard system of machine-readable thick and thin bars known as the Universal Product Code (UPC). The left-hand digits of the code specify the manufacturer or packer of the item, while the right-hand digits specify the nature of the item. A final digit is included as a check, to make sure that the others were read correctly. This last digit (called a modulo check digit) is the smallest number which yields a multiple of ten when added to a weighted sum of the previous digits (in the UPC scheme, the digits in odd-numbered positions are multiplied by three before being summed).
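A short Python sketch of the standard UPC-A check-digit computation (the eleven-digit code used here is an arbitrary example):

    # UPC-A check digit: multiply digits in odd positions (counting from
    # the left, starting with 1) by three, add the even-position digits,
    # and choose the check digit that brings the total to a multiple of ten.
    def upc_check_digit(code):
        total = sum((3 if i % 2 == 0 else 1) * int(d)
                    for i, d in enumerate(code))
        return (10 - total % 10) % 10

    print(upc_check_digit("03600029145"))   # -> 2; full code: 036000291452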

When a customer goes through a check-out line, the clerk passes the purchased items over a laser beam and photocell, thus reading the UPC code into a small embedded computer or microprocessor at the checkout counter, which adds the items to the customer’s bill. The microprocessor also sends the information to a central computer and inventory data base. When stocks of an item become low, the central computer generates a replacement order. The financial book-keeping for the retailing operation is also carried out automatically by the central computer.
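The data flow just described can be sketched in a few lines of Python; the product name, price, stock level and UPC below are all hypothetical:

    # Sketch of the checkout data flow: scan an item, add it to the bill,
    # update the central inventory, and reorder when stock runs low.
    PRODUCTS = {"036000291452": ("canned soup", 0.89)}
    stock = {"036000291452": 3}
    REORDER_LEVEL = 2

    def scan(upc, bill):
        name, price = PRODUCTS[upc]
        bill.append((name, price))      # item added to the customer's bill
        stock[upc] -= 1                 # central inventory updated
        if stock[upc] <= REORDER_LEVEL:
            print(f"replacement order generated for {name}")

    bill = []
    scan("036000291452", bill)
    print(bill, "total:", sum(price for _, price in bill))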

In many places, a customer passing through the checkout counter of a supermarket is able to pay for his or her purchases by means of a plastic card with a magnetic, machine-readable identification number. The amount of the purchase is then transmitted through a computer network and deducted automatically from the customer’s bank account. If the customer pays by check, the supermarket clerk may use a special terminal to determine whether a check written by the customer has ever “bounced”.

Most checks are identified by a set of numbers written in the Magnetic-Ink Character Recognition (MICR) system. In 1958, standards for the MICR system were established, and by 1963, 85 percent of all checks written in the United States were identified by MICR numbers. By 1968, almost all banks had adopted this system; and thus the administration of checking accounts was automated, as well as the complicated process by which a check, deposited anywhere in the world, returns to the payor’s bank.

Container ships were introduced in the late 1950’s, and since that time, container systems have increased cargo-handling speeds in ports by at least an order of magnitude. Computer networks contributed greatly to the growth of the container system of transportation by keeping track of the position, ownership and contents of the containers. In transportation, just as in wholesaling and retailing, standardization proved to be a necessary requirement for automation. Containers of a standard size and shape could be loaded and unloaded at ports by specialized tractors and cranes which required only a very small staff of operators. Standard formats for computerized manifests, control documents, and documents for billing and payment, were instituted by the Transportation Data Coordinating Committee, a non-profit organization supported by dues from shipping firms.

In the industrialized parts of the world, almost every type of work has been made more efficient by computerization and automation. Even artists, musicians, architects and authors find themselves making increasing use of computers: Advanced computing systems, using specialized graphics chips, speed the work of architects and film animators.

The author’s traditional typewriter has been replaced by a word-processor, the composer’s piano by a music synthesizer. In the Industrial Revolution of the 18th and 19th centuries, muscles were replaced by machines. Computerization represents a Second Industrial Revolution: Machines have begun to perform not only tasks which once required human muscles, but also tasks which formerly required human intelligence.

In industrial societies, the mechanization of agriculture has very much reduced the fraction of the population living on farms. For example, in the United States, between 1820 and 1980, the fraction of workers engaged in agriculture fell from 72 percent to 3.1 percent.

There are signs that computerization and automation will similarly reduce the number of workers needed in industry and commerce. Computerization is so recent that, at present, we can only see the beginnings of its impact; but when the Second Industrial Revolution is complete, how will it affect society? When our children finish their education, will they face technological unemployment?

As we saw in a previous chapter, the initial stages of the First Industrial Revolution produced much suffering, because labor was regarded as a commodity to be bought and sold according to the laws of supply and demand, with almost no consideration for the needs of the workers. Will we repeat this mistake? Or will society learn from its earlier experience, and use the technology of automation to achieve widely-shared human happiness?

The Nobel-laureate economist, Wassily W. Leontief, has made the following comment on the problem of technological unemployment: “Adam and Eve enjoyed, before they were expelled from Paradise, a high standard of living without working. After their expulsion, they and their successors were condemned to eke out a miserable existence, working from dawn to dusk. The history of technological progress over the last 200 years is essentially the story of the human species working its way slowly and steadily back into Paradise. What would happen, however, if we suddenly found ourselves in it? With all goods and services provided without work, no one would be gainfully employed. Being unemployed means receiving no wages. As a result, until appropriate new income policies were formulated to fit the changed technological conditions, everyone would starve in Paradise.”

To say the same thing in a slightly different way: consider what will happen when a factory which now employs a thousand workers introduces microprocessor-controlled industrial robots and reduces its work force to only fifty. What will the nine hundred and fifty redundant workers do? They will not be able to find jobs elsewhere in industry, commerce or agriculture, because all over the economic landscape, the scene will be the same.

There will still be much socially useful work to be done - for example, taking care of elderly people, beautifying the cities, starting youth centers, planting forests, cleaning up pollution, building schools in developing countries, and so on. These socially beneficial goals are not commercially “profitable”. They are rather the sort of projects which governments sometimes support if they have the funds for it.

However, the money needed to usefully employ the nine hundred and fifty workers will not be in the hands of the government. It will be in the hands of the factory owner who has just automated his production line. In order to make the economic system function again, either the factory owner will have to be persuaded to support socially beneficial but commercially unprofitable projects, or else an appreciable fraction of his profits will have to be transferred to the government, which will then be able to constructively re-employ the redundant workers.

The future problems of automation and technological unemployment may force us to rethink some of our economic ideas. It is possible that helping young people to make a smooth transition from education to secure jobs will become one of the important responsibilities of governments, even in countries whose economies are based on free enterprise. If such a change does take place in the future, while at the same time socialistic countries are adopting a few of the better features of free enterprise, then one can hope that the world will become less sharply divided by contrasting economic systems.

Neural networks

If civilization survives, future historians may regard the invention of computers as an even more important step in cultural evolution than the invention of printing or the invention of writing. Exploration of the possibilities of artificial intelligence has only barely begun. In part, the future development of computers will depend on more sophisticated programs (software), and in part on new types of computer architecture (hardware).

Physiologists have begun to make use of insights derived from computer design in their efforts to understand the mechanism of the brain; and computer designers are beginning to construct computers modeled after neural networks. We may soon see the development of computers capable of learning complex ideas, generalization, value judgements, artistic creativity, and much else that was once thought to be uniquely characteristic of the human mind. Efforts to design such computers will undoubtedly give us a better understanding of the way in which the brain performs its astonishing functions.

Much of our understanding of the nervous systems of higher animals is due to the Spanish microscopist, Ramón y Cajal, and to the English physiologists, Alan Hodgkin and Andrew Huxley. Cajal’s work, which has been confirmed and elaborated by modern electron microscopy, showed that the central nervous system is a network of nerve cells (neurons) and threadlike fibers growing from them. Each neuron has many input fibers (dendrites), and one output fiber (the axon), which may have several branches.

In 1952, working with the giant axon of the squid (which can be as large as a millimeter in diameter), Hodgkin and Huxley showed that nerve fibers are like long tubes. Inside the tube is a fluid which contains potassium and sodium ions. In a resting nerve, the concentration of potassium inside is higher than it is in the normal body fluids outside, and the concentration of sodium is lower. These abnormal concentrations are maintained by a “pump”, which uses metabolic energy to bring potassium ions into the nerve and to expel sodium ions.

The tubelike membrane surrounding the resting nerve fiber is far more permeable to potassium than to sodium; and the positively-charged potassium ions tend to leak out of the resting nerve, leaving the inside at a slightly negative electrical potential with respect to the outside. This resting potential helps to hold the molecules of the nerve membrane in an orderly layer, so that the membrane’s permeability to ions remains low.

Hodgkin and Huxley showed that when a nerve cell “fires”, the whole situation changes dramatically. The membrane suddenly becomes permeable to sodium; sodium ions rush into the nerve, destroying the electrical potential which maintained order in the membrane. A wave of depolarization passes along the nerve. Like a row of dominoes falling, the disturbance propagates from one section to the next: Sodium ions flow in, the order-maintaining electrical potential disappears, the next small section of the nerve membrane becomes permeable, and so on. Thus, Hodgkin and Huxley showed that when a nerve cell fires, a quick pulse-like electrical and chemical disturbance is transmitted along the fiber.

The fibers of nerve cells can be very long, but finally the signal reaches a junction where one nerve cell is joined to another, or where a nerve is joined to a muscle. The junction is called a “synapse”. At the synapse, chemical transmitters are released which may cause the next nerve cell to fire, or which may inhibit it from firing, depending on the type of synapse. The chemical transmitters released by nerve impulses were first studied by Sir Henry Dale, Sir John Eccles and Otto Loewi, who found that they can also trigger muscle contraction. (Among the substances believed to be excitatory transmitters are acetylcholine, noradrenaline (norepinephrine), serotonin, dopamine and glutamate, while gamma-aminobutyric acid (GABA) is believed to be an inhibitory transmitter.)

Once a nerve cell fires, a signal will certainly go out along its axon. However, when the signal comes to a synapse, where the axon makes contact with the dendrite of another cell, it is not at all certain that the next nerve cell will fire. Whether it does so or not depends on many things: It depends on the frequency of the pulses arriving along the axon. (The transmitter substances are constantly being broken down.) It depends on the type of transmitter substance. (Some of them inhibit the firing of the next cell.) And finally, the firing of the next neuron depends on the way in which the synapse has been modified by its previous history and by the concentration of various chemicals in the blood.

The variety and plasticity of synapses, and the complex, branching interconnections of dendrites and axons, help to account for the subtlety of the nervous system, as well as its sensitivity to various chemicals in the blood. Some neurons (called “and” cells) fire only when all their input dendrites are excited. Other neurons (called “or” cells) fire when any one of their dendrites is excited. Still other neurons (called “inhibited” cells) fire when certain dendrites are excited, but only if other, inhibiting dendrites are not excited. Interestingly, “and” circuits, “or” circuits and “inhibited” circuits have played a fundamental role in computer design ever since the beginning of electronic computers.
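These three cell types behave like the logic gates of a computer, and can be sketched as boolean functions of their input dendrites:

    # The three cell types described above, modeled as boolean functions.
    def and_cell(dendrites):
        return all(dendrites)      # fires only when every input is excited

    def or_cell(dendrites):
        return any(dendrites)      # fires when any one input is excited

    def inhibited_cell(dendrites, inhibitors):
        return any(dendrites) and not any(inhibitors)   # vetoed by inhibition

    print(and_cell([True, False]),            # False
          or_cell([True, False]),             # True
          inhibited_cell([True], [True]))     # False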

In the 1960’s, the English neuroanatomist J.Z. Young proposed a model of the visual cortex of the octopus brain, in which an arrangement of “and”, “or” and “inhibited” cells performs the function of pattern abstraction. The model is based both on learning experiments with the octopus and on microscopic studies of the octopus brain.

According to Young’s model, the visual pattern received by the retina of the octopus eye is mapped in a direct way onto the outer layer of neurons in the animal’s visual cortex. The image on the retina forms a picture on the cortex, just as though it were projected onto a screen. However, the arrangement of “and”, “or” and “inhibited” cells in the cortex is such that, as the signals from the retina propagate inward to more deeply-lying layers, certain deep cortical cells fire only in response to a particular pattern on the retina.

In Young’s model, the signal then comes to a branch, where it can either stimulate the octopus to attack or to retreat. There is a bias towards the attack pathway; and therefore, the first time an octopus is presented with an object of any shape, it tends to attack it. However, if the experimenter administers an electric shock to the animal, synapses in the attack pathway are modified, and the attack pathway is blocked. When the octopus later is presented with an object of the same shape, the signal comes through in exactly the same way as before. However, this time when it reaches the attack-retreat branch, the attack pathway is blocked, and the signal causes the animal to retreat. The octopus has learned!
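A toy program captures the logic of this learning mechanism (a loose illustration of the idea, not Young’s actual model):

    # The attack pathway is preferred, but a shock blocks the synapse
    # for that shape, so the same pattern later produces retreat.
    class Octopus:
        def __init__(self):
            self.blocked_shapes = set()   # attack pathways blocked by shocks

        def respond(self, shape):
            return "retreat" if shape in self.blocked_shapes else "attack"

        def shock(self, shape):
            self.blocked_shapes.add(shape)   # synaptic modification

    octopus = Octopus()
    print(octopus.respond("crab"))   # attack  (built-in bias)
    octopus.shock("crab")            # electric shock from the experimenter
    print(octopus.respond("crab"))   # retreat - the octopus has learned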

It is possible that the computers of the future will have pattern-recognition and learning abilities derived from architecture inspired by our understanding of the synapse, by Young’s model, or by other biological models. However, pattern recognition and learning can also be achieved by programming, using computers of conventional architecture. Programs already exist which allow computers to understand both handwriting and human speech; and a recent chess-playing program was able to learn by studying a large number of championship games. Having optimized its parameters through this learning experience, the program was able to win against grandmasters!

Like nuclear physics and gene-splicing, artificial intelligence presents a challenge: Will society use its new powers wisely and humanely?

The computer technology of the future can liberate us from dull and repetitive work, and allow us to use our energies creatively; or it can produce unemployment and misery, depending on how we organize our society. Which will we choose?


Suggestions for further reading

1. N. Metropolis, J. Howlett and Gian-Carlo Rota (editors), A History of Computing in the Twentieth Century, Academic Press (1980).
2. S.H. Hollingdale and G.C. Tootill, Electronic Computers, Penguin Books Ltd. (1970).
3. Andrew Hodges, Alan Turing: The Enigma of Intelligence, Burnett Books, London (1983).
4. B. Randell (editor), The Origins of Digital Computers: Selected Papers, Springer-Verlag, New York (1973).
5. Allan R. Mackintosh, The First Electronic Computer, Physics Today, March 1987.
