23.08.2022

What Are Neural Networks – How They Work and Why They Are So Important

Andrii Mazur
Author at ApiX-Drive
Reading time: ~10 min

In this article, we will try to explain the phenomenon of neural networks in accessible terms and to assess how justified the expectations around them are, both the enthusiastic and the apocalyptic ones.

Content:
1. An excursion into anatomy
2. Starting to copy nature
3. Artificial neural network
4. How to train a neural network
5. Neural networks in business
6. Limits of Possibilities
***

Journalists love the phrase "neural networks". In frequency of media mentions, neural networks are second only to the ventures of Elon Musk, and they confidently compete with artificial intelligence and global warming.

No wonder neural networks are both admired and feared: people expect from them unprecedented technological breakthroughs as well as catastrophic consequences of their application.

An excursion into anatomy

The very name of this phenomenon comes from biology. Neurons are special cells of living organisms that form the nervous system (the most primitive creatures with a nervous system are jellyfish and corals). Their task is to receive, transmit and process information through electrical impulses and chemical reactions.

The largest concentration of neurons is in the brain. The number of neurons in the human brain is estimated at about 100 billion. There may well be more, but let's keep this figure of 100 billion in mind.

The next important point for understanding how neural networks operate is the structure of a neuron. Each such cell consists of a body and special projections: several dendrites and a single axon. Note that the dendrites are the INPUT channels for information, while the axon is the neuron's only OUTPUT.

Neurons are specialized. Some (receptor neurons) receive information coming from outside; in other words, they are sensors. Others (effector neurons) transmit signals from the nervous system to surrounding cells. Finally, there are intermediate neurons, or interneurons.

In the body, billions of neurons are interconnected, and the wiring is incredibly intricate: by some estimates, each neuron contacts about 10,000 neighboring cells (other sources put the figure at 20,000 or more). Finally, at the points of contact between two neurons sit synapses: special structures that can change the signal passing through them.

In other words, information about the outside world travels from the receptor neurons through some number of interneurons, being transformed at the synapses along the way, and only at the end does it reach the effector neurons. The body's reaction to the received signal then follows.

Starting to copy nature

The first attempts to model the elementary functions of the nervous system were made as early as the middle of the 20th century. The first model of a biological neural network was Frank Rosenblatt's Perceptron, first simulated in software in the USA in the late 1950s and soon implemented as a dedicated machine, the Mark I Perceptron.

The Perceptron modeled the work of just a single neuron. For the programming techniques and computing power of the time, this was a genuinely impressive result: the machine learned to recognize several letters.
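The idea behind a single perceptron is simple enough to sketch in a few lines of Python. The example below is purely illustrative (the original Mark I was custom hardware recognizing letters, not logic functions); here the perceptron learns the logical AND function using the classic learning rule:

```python
# A minimal single-neuron perceptron, sketched in Python.
# Illustrative only: the Mark I Perceptron was dedicated hardware.

def predict(weights, bias, inputs):
    # Weighted sum of the inputs, thresholded at zero.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Nudge the weights whenever the prediction is wrong.
            error = target - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the perceptron the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

After a handful of passes over the data, the weights settle and the single neuron reproduces AND perfectly; this adjust-on-error rule is the ancestor of modern training methods.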

Despite the success of the Perceptron, interest in neural networks soon faded, which is quite understandable: the computer technology of the time could not cope with the complex mathematical apparatus used in modeling neural networks.

Advances in semiconductors gave the idea of artificial neural networks new impetus. The role of neurons was assigned to transistors, and miniaturization made it possible to pack millions of transistors into microchips a couple of square centimeters in area.

And yet the number of transistors in even the most advanced computer does not match the number of neurons in the brain of a highly developed living being. But the fundamental, and for now insurmountable, obstacle is not the number of electronic components: recall that each neuron interacts with thousands of nerve cells, while a single transistor connects to only a few.

Be that as it may, by the early 2000s the power of computers made it possible to implement ideas that had previously not gone beyond theory. Neural networks evolved from a curious but largely useless IT phenomenon into a powerful tool for solving many practical problems.

Artificial neural network

A computer does an excellent job of storing and reordering even huge amounts of data according to predetermined instructions, that is, programs. However, this approach does not allow it, even in principle, to do what the brain of even a not-very-developed animal does with ease: for example, recognize certain patterns and then change its actions accordingly.

In other words, it cannot learn. The concept of artificial neural networks overcomes this barrier.

How is a neural network trained? To begin with, let's briefly revisit its structure.

An artificial neural network consists of analogues of neurons combined into layers. Don't take the term "layer" literally: physically, no stacked layers of chips exist. The word "layer" simply helps explain the scheme of the network's operation, just as the electrical wiring diagram of a house has little to do with the actual placement of lamps, sockets and the wires connecting them.

A layer is a collection of elements that perform a specific task and occupy a certain place in the information processing pipeline. The input layer (analogous to receptor neurons) feeds data from the outside world into the neural network. The output layer (analogous to effector neurons) tells the outside world how the network has reacted to the information received.

Between the input and output layers lie the layers that process the information coming from the receptor layer. They are called hidden layers. The elements of the hidden layers are interconnected with one another and with the elements of the output layer.

Neural networks with more than one hidden layer are called deep. This detail is important for understanding the term "deep learning".
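The layer pipeline described above can be sketched as nested weighted sums. All the sizes and weight values below are arbitrary illustrations, not a real trained network:

```python
import math

def sigmoid(x):
    # Squashes any value into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Every element of this layer sees every input (fully connected).
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Input layer: 3 values arriving from the outside world.
data = [0.5, -1.0, 2.0]
# Two hidden layers make this network "deep" by the definition above.
hidden1 = layer(data, [[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]])
hidden2 = layer(hidden1, [[0.6, -0.2], [0.1, 0.9]])
# Output layer: the network's single reaction, a value between 0 and 1.
output = layer(hidden2, [[1.0, -1.0]])
print(output)
```

Each call to `layer` plays the role of one layer in the pipeline: the input flows through the hidden layers and emerges as a single output value.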

Now recall that in biological neural networks there are special zones at the points of contact between neurons: synapses, where the passing signal is modified. The role of synapses in an artificial neural network is played by so-called weights. A weight can be positive, if one neuron excites another, or negative, if it suppresses it. Moreover, a weight is not a simple on/off switch: by changing it, you can not only enable or disable a connection but also regulate how strongly one element of the network influences its neighbors.
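The effect of a weight's sign and magnitude can be shown in a couple of lines (the numbers are arbitrary illustrations):

```python
def weighted_input(signal, weight):
    # A positive weight passes an exciting influence, a negative weight
    # a suppressing one; the magnitude sets the strength of the influence.
    return signal * weight

signal = 1.0
print(weighted_input(signal, 0.9))   # strong excitation: 0.9
print(weighted_input(signal, 0.1))   # weak excitation: 0.1
print(weighted_input(signal, -0.9))  # strong suppression: -0.9
print(weighted_input(signal, 0.0))   # connection effectively off: 0.0
```

Setting a weight to zero switches the connection off entirely, while values in between smoothly regulate its influence, which is exactly what training adjusts.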

The next important component of an artificial neural network is the feedback from the output layer to the weights of the inner layers.

How to train a neural network

Training a neural network means adjusting the weights of the connections between the elements of its hidden layers.

Let's assume we have some reference signal and want the neural network to produce the same output.


Arbitrary data is fed to the input layer. It passes through the inner layers, transformed by the weights on the connections between their elements. On reaching the output layer, the transformed data is compared with the reference.

The feedback then issues a command to change the weights. Figuratively speaking, a person tells the neural network: "This is wrong. Try again." The data is passed through the network once more, and the weight values are adjusted again via the feedback. Sooner or later, the difference between the reference value and the value produced by the network disappears. The network then passes data matching the reference to the output layer and ignores everything else; that is, it has learned to recognize a certain set of data (which may represent an object, a sound, a color, anything at all).
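This compare-and-adjust loop can be sketched with a tiny gradient-descent example. It is a drastic simplification of real feedback mechanisms (such as backpropagation), using a single weight on a single input so the idea stays visible:

```python
# Sketch of the training loop: compare the output with a reference,
# then nudge the weight to shrink the difference (gradient descent).

def train_to_reference(x, reference, lr=0.1, steps=200):
    weight = 0.0
    for _ in range(steps):
        output = weight * x            # forward pass through the "network"
        error = output - reference     # compare with the reference signal
        weight -= lr * error * x       # feedback: adjust the weight
    return weight

w = train_to_reference(x=2.0, reference=6.0)
print(round(w * 2.0, 3))  # the network now reproduces the reference: 6.0
```

Each pass shrinks the gap between output and reference a little, until the difference effectively disappears, just as described above.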

Let's try to explain with an example how you can teach a neural network to recognize a cup.

The input layer contains several input elements, each describing some parameter of the object. For a cup, these might be:

  • a handle of a certain shape on its side (otherwise it can be confused with a saucepan);
  • proportions (height is greater than width – unlike a plate);
  • size (to distinguish a cup from a jug);
  • round section.

All these questions can be answered in yes/no (one/zero) format, so the image of a cup can ultimately be encoded as a combination of ones and zeros.
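Encoding those yes/no answers as ones and zeros, a cup becomes a short bit vector that can be compared against a learned reference. The feature names and objects below are illustrative, following the list above:

```python
# Feature order: [handle on the side, taller than wide,
#                 hand-sized, round section]
CUP_REFERENCE = (1, 1, 1, 1)

def matches_cup(features, reference=CUP_REFERENCE):
    # Count how many features agree with the learned reference;
    # require full agreement to call the object a cup.
    agreement = sum(f == r for f, r in zip(features, reference))
    return agreement == len(reference)

office_mug = (1, 1, 1, 1)
saucepan   = (1, 0, 0, 1)  # has a handle and is round, but wide and large
plate      = (0, 0, 1, 1)  # round and small, but flat and handleless

print(matches_cup(office_mug))  # True
print(matches_cup(saucepan))    # False
print(matches_cup(plate))       # False
```

A real neural network would learn soft weights over such features rather than demand exact matches, but the binary encoding is the same.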

By "showing" the neural network a sufficient number of cups, we can ensure that it recognizes the object regardless of its design. Whether it's a rough office mug, an elegant Meissen porcelain teacup or a small espresso cup, the neural network will recognize it. Now show the network a jug or a frying pan: it would be interesting to see how it classifies what it sees.

Neural networks in business

Regardless of how we feel about artificial neural networks, each of us uses them daily, for example in tools such as Google Search or Google Translate. The same is true in business; perhaps we simply don't realize how widespread these algorithms are there.

So here are some numbers.

In 2020, the global market for neural networks was estimated at $14.35 billion. According to forecasts, by 2030 it will reach $152.61 billion, an average annual growth rate of 26.7%.

North America (more precisely, the USA) remains the leader in the business use of neural networks and is expected to retain this position until 2030, thanks to an earlier start and overall technological leadership. Over the next decade, however, the use of neural networks in the Asia-Pacific region is expected to grow rapidly, partly due to the ever-increasing volume of data from the Internet of Things (IoT).

Although the COVID-19 pandemic caused a sharp decline in this market in 2020, the neural network space is expected to resume its growth. Companies increasingly rely on remote work, which raises demand for cloud solutions, spatial data, and tools for analytics and market forecasting. The growth in the volume of data generated by various industries, together with the need to manage and analyze these arrays, has become a key driver of the neural network market.

The most profitable and successful applications of neural networks have turned out to be in banking, financial services and insurance.

Here, neural networks are used to predict market trends and stock prices. In banking, they identify deviations and anomalies in transactions in order to detect fraud. Insurance companies use them to filter out fraudulent claims and doubtful circumstances in insurance cases, and to segment their clientele for optimal pricing.

In marketing, neural networks segment consumers by consumption patterns and economic status. They have also proven effective at generating recommendations for consumer products.

However, the use of neural networks has its pitfalls. One of the main complaints from businesses is that neural networks need long training on large amounts of high-quality data.

Another problem is the shortage of trained specialists and a poor understanding among company top management of which tasks neural networks can actually solve. This often leads to inflated expectations and subsequent disappointment with the results.

Limits of Possibilities

I began this discussion of neural networks with their popularity in the media. People not deeply versed in the subject both admire the prospects that are opening up and express fear of the widespread use of neural networks. Above all, neural networks are suspected of being capable of acquiring intelligence.

That is why it is important to be aware of the limitations for using neural networks.

Let's start with the fact that the analogy between biological and artificial neural networks is only of the most general kind, if only because we still do not fully understand the principles by which the brain works. No artificial neural network can match the organizational complexity of the brain of even a primitive mammal.

On the other hand, a neural network, even the most sophisticated one, is not fundamentally different from the many other computer algorithms humankind uses. Greater computing power gives these algorithms more room for application, but these are quantitative gains, not qualitative ones. In the early 1990s, i386 computers ran the Multi-Edit text editor; today's versions of Word have capabilities incomparable to Multi-Edit's, yet fundamentally Word is still just a word processing tool. Likewise, neural networks, however powerful, are always designed to perform a specific task.

Many were once impressed by the victory of the AlphaGo program over masters of the game of Go. At the same time, many lost sight of the fact that, unlike a living person, the program can do nothing but play. It is a very complex, extremely expensive, but highly specialized tool. There is no real artificial intelligence in this story.

Skeptics point out that the ceiling for neural networks is roughly what the cerebellum does: the part of the brain responsible for automatic reactions. Consciousness and reason are associated with entirely different parts of the brain.

Now about the capacity for self-learning. Let's define self-learning plainly: repeatedly restarting the algorithm while continually varying the input data.

And that's all; the grandeur disappears. This is nothing like human learning. The closest analogy that comes to mind is drilling some simple skill to the point of automatism.

Moreover, without external input from someone endowed with abstract thinking, a neural network will learn nothing. In our corner of the universe, humans remain the sole owners of abstract thinking (the aliens haven't shown up yet). So without human involvement, neural networks are simply incapable of functioning or developing.

So there is no need to fear that one day neural networks will ponder life and their relationship with people. They will always remain nothing more than a tool for solving the problems humans assign to them.

***

Do you want to take your business to the next level and achieve your goals faster and more efficiently? ApiX-Drive is your reliable assistant for these tasks. This online service and application connector will help you automate key business processes and get rid of routine. You and your employees will free up time for important core tasks. Try ApiX-Drive's features for free to see the effectiveness of the online connector for yourself.