Can High School Mathematics Help You Code Neural Networks?


Coding neural networks using high school calculus and elementary algebra is becoming increasingly popular

With Artificial Intelligence blooming in every sector and aspect of our lives, one variety really stands out among the rest. An Artificial Neural Network, or simply Neural Network, is a system of hardware and/or software patterned after the operation of neurons in the human brain. Neural networks are the foundation of deep learning, which is itself a branch of Artificial Intelligence. Neural networks are best described in terms of depth: the number of layers or tiers between the input and the output, that is, how many hidden layers are present in the model. This layered description is what links the term to deep learning.

Artificial neural networks (ANNs) date back to the early days of computing. In 1943, Warren McCulloch and Walter Pitts built a simple circuit model that ran basic algorithms and was intended to approximate the functioning of the human brain. When used commercially, these technologies focus on solving problems of complex signal processing or pattern recognition.

An ANN operates as a large number of processing units working in parallel, arranged in tiers. The first tier receives the raw input information, much as the optic nerve receives raw visual signals in humans. Each successive tier receives the output of the preceding tier rather than the raw input, and the last tier produces the final output.

An ANN is usually fed a large amount of data in a step called training. Training consists of providing large amounts of input data together with instructions about the preferred kind of output. This works because ANNs are adaptive: they modify themselves as they learn from the initial training, and further runs provide more information about the world.

Neural networks have been applied successfully in many areas, image recognition being one of the first. Other areas include chatbots, stock market prediction, natural language processing, language translation and generation, drug discovery and development, planning, and the optimization of delivery routes.

The mathematics behind neural networks is an important part of moving forward with Artificial Intelligence education. That mathematics is usually presented through multivariable calculus and linear algebra, but the same understanding can be reached with high school calculus and elementary algebra. To do so, the calculation can be split into four parts.
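
The tier-by-tier flow described above can be sketched in a few lines of code. The layer sizes, random weights, and sample input below are arbitrary assumptions chosen only to show how each tier consumes the previous tier's output rather than the raw input.

```python
# A minimal sketch of the tiered flow described above (assumed layer sizes and
# random weights, for illustration only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Logistic activation used throughout this article
    return 1.0 / (1.0 + np.exp(-x))

# Three tiers: a raw-input tier of 3 values, a hidden tier of 4 neurons,
# and a final tier producing 1 output.
layer_sizes = [3, 4, 1]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(raw_input):
    activation = raw_input            # the first tier receives the raw input
    for w in weights:                 # each later tier receives the previous tier's output
        activation = sigmoid(activation @ w)
    return activation                 # the last tier produces the final output

print(forward(np.array([0.5, 0.1, 0.9])))
```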

The four parts of the calculation are:

1. Creating the network: The first step in the calculation uses a very basic neural network architecture: an input layer with 2 neurons, a hidden layer with 2 neurons, and an output layer with 1 neuron. The sigmoid function is used as the activation function. The sigmoid is a logistic function whose carrying capacity is 1 and whose inflection point is at (0, 0.5).

2. Forward propagation: The next step is forward propagation. The inputs are multiplied by their corresponding weights. For each neuron in the hidden layer, the products of the connected input neurons and the weights of the links between them are summed up, and the activation function is applied to that sum. The same logic is applied as the calculation moves from the hidden layer to the output layer.

3. Backpropagation: The third part of the calculation is backpropagation. The major aim of a neural network is to take some inputs and map them to a desired output, and backpropagation is how this matching is achieved: the error between the network's output and the desired output is propagated backwards and used to adjust the weights.

4. Testing it out using code: Although the calculations may work out on paper, they cannot be labeled correct until the practical application is verified. The mathematical calculation is therefore implemented in code and run on dummy data, a step that is crucial for confirming the calculations are correct (see the sketch after this list).

Coding verifies the neural network created with these mathematical tools. By learning to build a network with high school calculus and algebra, one can apply the same notation, derivations, and explanations when creating more complex neural network structures.
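
The whole calculation can be tested with a short program. The sketch below follows the four parts above for the 2-2-1 network with sigmoid activations; the random starting weights, the learning rate, the dummy input pair, and the target value are all assumptions chosen only for illustration, and biases are omitted to keep the example close to the hand calculation.

```python
# A runnable sketch of the 2-2-1 network described in the four parts above.
# Weights, learning rate, dummy input, and target are assumed values.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    # Logistic activation: values saturate at 1, inflection point at (0, 0.5)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(a):
    # Derivative of the sigmoid written in terms of its output a = sigmoid(x)
    return a * (1.0 - a)

# Part 1: create the network (2 input neurons -> 2 hidden neurons -> 1 output neuron)
w_hidden = rng.normal(size=(2, 2))   # weights from the input layer to the hidden layer
w_output = rng.normal(size=(2, 1))   # weights from the hidden layer to the output neuron

# Dummy data: a single input pair and a target output (assumed values)
x = np.array([[0.05, 0.10]])
y = np.array([[0.95]])

learning_rate = 0.5

for _ in range(5000):
    # Part 2: forward propagation - weighted sums followed by the sigmoid activation
    hidden = sigmoid(x @ w_hidden)
    output = sigmoid(hidden @ w_output)

    # Part 3: backpropagation - push the output error back through the chain rule
    delta_output = (output - y) * sigmoid_derivative(output)
    delta_hidden = (delta_output @ w_output.T) * sigmoid_derivative(hidden)

    # Adjust the weights in the direction that reduces the error
    w_output -= learning_rate * hidden.T @ delta_output
    w_hidden -= learning_rate * x.T @ delta_hidden

# Part 4: test it out - after training, the output should be close to the target 0.95
print(sigmoid(sigmoid(x @ w_hidden) @ w_output))
```

After a few thousand iterations the printed output should sit very close to the dummy target, which is exactly the verification described in step 4.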