I am trying to write a simple neuron function in C++, similar to this image. I am using the sigmoid function as the activation function.
This is my C++ neuron function:
#include <math.h>

double neuron(double layer_inputs[], int iter)
{
    // Feed forwarding single neuron
    double network = 0;
    double bias = 1;
    double activation;

    // get number of elements in the layer
    const int num = sizeof(layer_inputs) / sizeof(layer_inputs[0]);
    double weight[num];

    for (int i = 0; i < num; i++)
    {
        if (iter == 0)
        {
            // first time assigning random weights
            weight[i] = rand();
        }
        // feed forwarding summation
        network = network + (layer_inputs[i] * weight[i] + bias);
    }

    activation = 1.0 / (1.0 + exp(-network)); // sigmoid activation function
    return activation;
}
The problem is that I don't know whether I have made any logical error in my code. iter is the iteration variable, used to check whether the neuron is activating for the first time. Is my implementation of a neuron in a neural network correct?
EDIT:
Even though I am not from a programming or quant background, I am fascinated by programming, neural networks and artificial intelligence. I have used the built-in functions of caret in R, but for better understanding I want to create a simple neural network from scratch. I learned most of the basics from the internet, and I am posting my code here because I am sure I made something illogical, even though the script executes.
#include <iostream>
#include <math.h>   // pow, exp
#include "sqrtnn.h" // neuron()

int main()
{
    double input[]  = {1, 4, 9, 16, 25, 36, 49, 64, 81, 100};
    double output[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    // number of layers
    double layer = 3;
    double output_network[10];
    double error[10];
    double learning_rate = 0.02;
    // number of iterations
    int iter = 10;
    int input_num = sizeof(input) / sizeof(input[0]);

    std::cout << "Simple neural network for solving square root\n\nINPUT -> OUTPUT\n\n";

    for (int i = 0; i < iter; i++)
    {
        for (int j = 0; j < input_num; j++)
        {
            for (int k = 0; k < layer; k++)
            {
                // feed forwarding
                output_network[j] = neuron(input, i); // sigmoid activation function
                // back propagation
                error[j] = 1 / 2 * pow(output[j] - output_network[j], 2); // error function
                std::cout << input[j] << " -> " << output[j] << "= " << error[j] << "\n";
            }
        }
    }
    return 0;
}
2 Answers
You're writing C code, not C++. C arrays don't know their own size. Use std::vector<double> layer_inputs so you can call layer_inputs.size().
Other C bits in your code: don't declare variables until you need them; you have declared activation far too early. In fact, I wouldn't define it at all – just return 1.0 / (1.0 + exp(-network));.
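For illustration, a minimal sketch of how that could look with std::vector (the static weight vector and the [0, 1] random initialization are my assumptions, kept close to your original iter == 0 idea):

#include <cmath>
#include <cstdlib>
#include <vector>

// Same single neuron, but with std::vector so the input size is known.
// The weights are static here only so the random values persist across calls.
double neuron(const std::vector<double>& layer_inputs, int iter)
{
    static std::vector<double> weight;
    if (iter == 0)
    {
        // first call: assign random weights in [0, 1]
        weight.resize(layer_inputs.size());
        for (std::size_t i = 0; i < weight.size(); ++i)
            weight[i] = static_cast<double>(std::rand()) / RAND_MAX;
    }

    // feed-forward summation: weighted sum of inputs plus one bias term
    double network = 0.0;
    const double bias = 1.0;
    for (std::size_t i = 0; i < layer_inputs.size(); ++i)
        network += layer_inputs[i] * weight[i];
    network += bias;

    return 1.0 / (1.0 + std::exp(-network)); // sigmoid, returned directly
}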
Here is the problem: just convert double input[10] to double input[], because it is C++, not C. Do this on the output array also. Also, it is not a training neural network model, because the actual (ready-made) inputs/outputs are provided, and double learning_rate = 0.02; is declared but not used anywhere.
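As a hedged sketch (not a fix of your exact program), this is roughly how learning_rate would usually enter the picture: after the forward pass, each weight is nudged against the gradient of the squared error E = 0.5 * (target - output)^2. The function name update_weights and its parameters below are illustrative only:

#include <vector>

// One gradient-descent step for a single sigmoid neuron with squared error.
// dE/dw[i] = -(target - output) * output * (1 - output) * inputs[i]
void update_weights(std::vector<double>& weight,
                    const std::vector<double>& inputs,
                    double output, double target, double learning_rate)
{
    // delta = dE/d(net) for the sigmoid output
    double delta = -(target - output) * output * (1.0 - output);
    for (std::size_t i = 0; i < weight.size(); ++i)
        weight[i] -= learning_rate * delta * inputs[i]; // w -= lr * dE/dw
}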