A neural network is an interconnected group of artificial or biological neurons. It is possible to differentiate between two major groups of neural networks: biological neural networks and artificial neural networks.
In modern usage the term most often refers to artificial neural networks, especially in computer science and related fields. Hybrids also exist, incorporating biological neurons into electronic circuits, so the delineation is not always clear.
In general, a neural network is composed of a group of connected neurons. A single neuron can be connected to many other neurons, so that the overall structure of the network can be very complex.
Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. The main interest in these models is to approximate human learning and memory. Artificial neural networks perform particularly well in pattern recognition and classification tasks. They have found application in process control in the chemical industry, speech recognition, optical character recognition, and adaptive software such as software agents (e.g. in computer and video games) and autonomous robots.
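A minimal sketch of the basic building block of such artificial networks, a single artificial neuron (perceptron) with a threshold activation, assuming hypothetical weights chosen for illustration; this is a simplified model, not a full network:

```python
def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# Example: weights and bias picked so the neuron acts as a logical AND
# of two binary inputs (an illustrative choice, not a learned one).
def and_gate(a, b):
    return neuron([a, b], [1.0, 1.0], -1.5)

print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

Networks of many such units, connected so that outputs of some neurons feed the inputs of others, give rise to the complex structures described above.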
In some comparisons between the brain and computers, the following calculation is made: There are billions of neurons in the human brain; estimates suggest about 2×10¹² neurons with individual differences. Since the relaxation time of these neurons is about 10 ms, this could amount to a processing speed of 100 Hz. The whole brain could therefore have a processing power of roughly 2×10¹⁴ logical operations per second. To compare, a 64-bit PowerPC 970 processor at a frequency of 3 GHz corresponds to 2×10¹¹ logical operations per second, making the brain roughly one thousand times as powerful as a current high-end consumer PC. However, this comparison is extremely speculative. The working of biological neural networks is not well understood; it is not clear that anything like the "logical operations" performed by a computer actually occur in biological neural networks.
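The back-of-the-envelope arithmetic above can be reproduced directly; all figures are the rough estimates quoted in the text, not measurements:

```python
# Estimates from the comparison above (speculative, not measured values).
neurons = 2e12            # estimated neurons in the human brain
relaxation_time = 0.010   # seconds (~10 ms per neuron)

rate = 1 / relaxation_time        # ~100 Hz per neuron
brain_ops = neurons * rate        # ~2e14 logical operations per second
cpu_ops = 2e11                    # quoted figure for a 3 GHz PowerPC 970

print(brain_ops / cpu_ops)        # 1000.0
```

The ratio of roughly one thousand is where the comparison in the text comes from; as noted, the premise that neurons perform "logical operations" at all is itself speculative.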
Perhaps the most fundamental difference between brain and computer is that today's computers operate primarily sequentially, or with only a modest degree of parallelism (for details, see hyper-threading, SIMD, MMX and SSE2), while human brains are massively parallel. Furthermore, while a computer is centralized around its processor, it remains unresolved whether the brain is centralized or decentralized (distributed). Given Turing's model of computation, the Turing machine (which shows that any computation performed by a parallel computer can also be performed by a sequential one), this is likely a functional rather than a fundamental distinction.
Parallel distributed processing became popular in the mid-1980s under the name connectionism. In the early 1950s, Friedrich Hayek was one of the first to posit the idea of spontaneous order in the brain arising out of decentralized networks of simple units (neurons). A design issue in cognitive modeling, which also applies to neural networks, is the choice between a holistic and an atomistic (or, more concretely, modular) structure.