Biological & Artificial Neural Network
Artificial neural networks are well-known machine learning techniques that mimic the mechanism of learning in biological organisms. The human nervous system contains cells called neurons. Neurons are connected to one another through axons and dendrites, and the connecting regions between axons and dendrites are known as synapses. These connections are shown in Figure (a). The strengths of synaptic connections often change in response to external stimuli. This change is how learning takes place in living organisms. This biological mechanism is reproduced in artificial neural networks, which contain computational units that are also called neurons. The computational units are connected to one another through weights, which serve the same role as the strengths of synaptic connections in biological organisms. Each input to a neuron is scaled with a weight, which affects the function computed at that unit. This architecture is shown in Figure (b).
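The idea of a single unit scaling each input by a weight can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular library; the function name, weights, and the choice of a sigmoid activation are all assumptions made here for clarity.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs: each input is scaled by its weight,
    # analogous to signals arriving at synapses of varying strength.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid activation squashes the sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative values: three inputs, three synaptic weights, one bias.
output = neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1)
print(round(output, 4))
```

Changing any weight changes the function the unit computes, which is exactly the lever that learning will pull.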
An artificial neural network computes a function from the input neurons to the output neurons, using the weights as intermediate parameters. Learning occurs by changing the weights that connect the neurons. Just as external stimuli are needed for learning in biological organisms, the external stimulus for an artificial neural network is provided by training data containing examples of input-output pairs of the function to be learned. For instance, the training data may contain pixel representations of images (input) and their annotated labels (e.g., carrot, banana) as output. These training examples are fed into the neural network, which uses the input representation to make a prediction of the output label. The training data provides feedback on the correctness of the weights, depending on how well the predicted output (e.g., probability of carrot) for a particular input matches the annotated label in the training data. One can view the errors made by the neural network in computing a function as a kind of unpleasant feedback in a biological organism, prompting adjustments in synaptic strengths. Likewise, the weights between neurons in a neural network are adjusted in response to prediction errors. The goal of changing the weights is to modify the computed function so that predictions become more correct in future iterations. The weights are therefore changed carefully, in a mathematically justified way, so as to reduce the error on that example. By successively adjusting the weights over many input-output pairs, the function computed by the neural network is refined over time so that it yields more accurate predictions. Thus, if the neural network is trained on many different images of bananas, it will eventually be able to correctly recognize a banana in an image it has not seen before.
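The loop of predicting, measuring error, and nudging weights can be sketched as follows. This is an assumed setup for illustration (a single linear unit trained with stochastic gradient descent on a toy dataset), not the blog's exact method.

```python
# Toy training data: input-output pairs sampled from the function y = x.
examples = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]

w, b = 0.0, 0.0   # weights start uninformed
lr = 0.1          # learning rate: how cautiously weights are changed

for _ in range(200):            # many passes over the examples
    for x, target in examples:
        pred = w * x + b        # forward pass: compute the function
        error = pred - target   # prediction error (the "feedback")
        w -= lr * error * x     # adjust the weight against the error
        b -= lr * error         # adjust the bias the same way

print(round(w, 2), round(b, 2))
```

After enough iterations the weight approaches 1 and the bias approaches 0, i.e., the network has refined its computed function to match the training examples.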
This ability to accurately compute functions of unseen inputs by training on a finite set of input-output pairs is referred to as model generalization. The practical usefulness of all machine learning models comes from their ability to generalize what they learn from seen training data to unseen examples.
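Generalization can be demonstrated with the same toy setup: fit a model on a finite training set, then query it on an input it has never seen. This is an illustrative sketch with assumed data, not a formal treatment.

```python
# Training set: three samples of the function y = 2x.
train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0
for _ in range(300):
    for x, y in train:
        w -= 0.05 * (w * x - y) * x   # gradient step toward lower error

unseen_x = 5.0              # this input never appeared in training
prediction = w * unseen_x   # a well-generalizing model predicts near 10
```

The model was never shown the input 5.0, yet its prediction lands close to the true value 10 because the learned weight captures the underlying function rather than memorizing the three examples.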
The biological comparison is often criticized as a poor caricature of the workings of the human brain; nevertheless, the principles of neuroscience have frequently been useful in designing neural network models. An alternative view is that neural networks are built as higher-level abstractions of the classical models commonly used in machine learning. Indeed, the most basic units of computation in a neural network are inspired by traditional machine learning algorithms such as least-squares regression and logistic regression. A neural network gains its power by assembling many such basic units and learning the weights of the different units jointly so as to minimize the prediction error. From this perspective, a neural network can be viewed as a computational graph of elementary units in which greater power is gained by connecting them in specific ways. When a neural network is used in its most basic form, without hooking together multiple units, the learning algorithms often reduce to classical machine learning models. The real power of a neural model over classical methods is unleashed when these elementary computational units are combined and the weights of the elementary models are trained jointly, exploiting their dependencies on one another. By combining multiple units, one increases the power of the model to learn more complicated functions of the data than is possible with the elementary models of basic machine learning. The way in which these units are combined also plays a role in the power of the architecture and requires some understanding and insight from the analyst. Moreover, sufficient training data is required in order to learn the larger number of weights in these enlarged computational graphs. You can explore this concept in more detail on insideaiml.com.
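The gain in power from combining units can be made concrete. A single sigmoid unit is essentially logistic regression and cannot represent the XOR function, but a small computational graph of three such units can. The weights below are hand-picked for illustration (an assumption of this sketch, not learned values).

```python
import math

def unit(inputs, weights, bias):
    # One elementary unit: used alone, this is just logistic regression.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def xor_network(x1, x2):
    # Two hidden units feed one output unit, forming a small
    # computational graph that computes XOR -- a function no single
    # elementary unit can represent.
    h1 = unit([x1, x2], [20, 20], -10)    # behaves roughly like OR
    h2 = unit([x1, x2], [-20, -20], 30)   # behaves roughly like NAND
    return unit([h1, h2], [20, 20], -30)  # roughly AND of h1 and h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_network(a, b)))
```

Each unit on its own is a classical model; the nontrivial behavior emerges only from how the units are wired together, which is the point made above about the architecture itself carrying power.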
I hope you liked this blog.