Artificial Neural Networks
Artificial Neural Networks, sometimes referred to as just Neural Networks, are decision-making systems inspired by the nervous systems of animals. Rather than relying on a single processor, they solve problems with several simple processing units working together. These networks, just like humans, learn by example.
The first artificial neuron was created in 1943 by Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician. However, it was limited by the technology of the time and so could accomplish little. In 1949 Donald Hebb brought to light the principle that neural pathways are strengthened each time they are used, an idea that became essential because of its similarity to the way humans learn. "MADALINE" was developed in 1959 by Widrow and Hoff of Stanford University. It was the first neural network applied to a real-world problem, eliminating the echoes on phone lines, and the system is still in commercial use.
The early successes of neural networks led to exaggerated claims about their potential, such as a computer that could program itself. This fed a fear of the "thinking machine" that is still felt today, even though, despite all the advances in technology, we remain far from building the "thinking machines" we so fear.
In 1982 there were several major breakthroughs in ANNs. The first came from John Hopfield, whose proposal renewed interest in the field: where early systems used only one-way connections between neurons, Hopfield proposed bidirectional connections, which he believed could produce more useful machines. Also that year, Reilly and Cooper created a multiple-layer network in which each layer used a different problem-solving strategy. This they called a "hybrid network."
How They Work
Artificial Neural Networks (ANNs) are computational systems modeled on a simplified biological brain, intended to acquire some of the intelligence that these cell networks usually contain. ANNs can be trained to recognize patterns and images. For example, if you show an ANN several images of cars, it should be able to identify other cars using this "learned" information.
A neural network is created by connecting several neurons to each other. These connections can be made in several different ways, and each method of connection defines a different type of network. Some examples include the feed-forward network, the Kohonen network, the Hopfield network, and the back-propagation network.
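The simplest of these arrangements, the feed-forward network, can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the weights and biases below are made-up values chosen only to show how activations flow from one layer of connected neurons to the next.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(layer_weights, layer_biases, inputs):
    """Feed the inputs forward through each layer of neurons in turn."""
    activations = inputs
    for weights, biases in zip(layer_weights, layer_biases):
        activations = [neuron(activations, w, b)
                       for w, b in zip(weights, biases)]
    return activations

# A tiny 2-input, 2-hidden-neuron, 1-output network with made-up weights.
hidden_w = [[0.5, -0.6], [0.3, 0.8]]
hidden_b = [0.1, -0.2]
output_w = [[1.2, -1.0]]
output_b = [0.0]

result = forward([hidden_w, output_w], [hidden_b, output_b], [1.0, 0.0])
print(result)  # a single activation between 0 and 1
```

Note that every neuron in a layer can be computed independently of its neighbors, which is what makes these networks naturally parallel.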
Biological systems work through the interconnection of neurons, a specialized type of cell. The main goal of ANNs is to harness the adaptability, memory capacity, real-time performance, error tolerance, and context-sensitivity of the human brain. Individual neurons process information slowly compared with electronic hardware, yet the brain completes its overall computation remarkably quickly. This suggests that biological computation is broken into many small steps, each handled by one of a large number of parallel processes. This parallel processing is the basis for the design of ANNs.
Normal computer programs must be explicitly instructed through an outside interface. Neural systems, by contrast, work in a way more similar to the human brain: in their ideal form they are trainable, adaptive, and self-organizing. They develop themselves based on data, and they may provide computational architectures through training rather than design.
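The idea of "training rather than design" can be illustrated with the classic perceptron learning rule, here shown as a small hedged sketch in plain Python: instead of programming the logical AND rule directly, the network infers weights for it from labeled examples. The learning rate and epoch count are arbitrary choices for this toy problem.

```python
def predict(weights, bias, inputs):
    """Threshold unit: fires (1) if the weighted sum exceeds zero."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, rate=0.1):
    """Perceptron rule: nudge weights toward each mislabeled example."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Labeled examples of AND; the rule itself is never written into the code.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

The program contains no statement of the AND rule; the behavior emerges entirely from the data, which is the sense in which such systems are "trained rather than designed."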
No system is without its disadvantages. Neural networks are well suited to certain applications, particularly training and pattern association, but the idea that ANNs can solve every problem is unrealistic.
Disadvantages:
- There are no clear rules or guidelines for designing a network for an arbitrary application.
- There is no way to inspect the internal workings of the network.
- Training can be difficult or impossible.

Advantages:
- A network can work on a problem massively in parallel.
- It can be tolerant of faults because of its parallel design.
- It can be designed to be adaptive.
- There is little need to characterize the problem beforehand.