Neural networks are models of biological neural structures. The starting point for most neural networks is a model neuron. Each input is scaled by a weight that multiplies the input value. The neuron combines these weighted inputs and, with reference to a threshold value and an activation function, uses them to determine its output. This behavior closely follows our understanding of how real neurons work.
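To make this concrete, a single model neuron can be written in a few lines of C++. The inputs, weights, and step-style activation below are illustrative choices only, not values from the OpenCV example later in this post.
#include <iostream>
#include <numeric>
#include <vector>
// One model neuron: each input is multiplied by its weight, the weighted inputs
// are summed, and the sum is compared against a threshold by a step activation.
float neuronOutput(const std::vector<float>& inputs,
                   const std::vector<float>& weights,
                   float threshold) {
    float sum = std::inner_product(inputs.begin(), inputs.end(),
                                   weights.begin(), 0.0f); // sum of w_i * x_i
    return (sum >= threshold) ? 1.0f : 0.0f;               // step activation function
}
int main() {
    std::vector<float> x = {0.5f, 1.0f};  // example inputs
    std::vector<float> w = {0.8f, -0.3f}; // example weights
    std::cout << neuronOutput(x, w, 0.0f) << std::endl; // prints 1
    return 0;
}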
Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each containing an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates with one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is produced, as shown in the graphic below.
Backpropagation is a common method of teaching artificial neural networks how to perform a given task.
It is a supervised learning method, and is a generalization of the delta rule. It requires a teacher that knows, or can calculate, the desired output for any input in the training set. It is most useful for feed-forward networks (networks that have no feedback, or simply, that have no connections that loop). The term is an abbreviation for "backward propagation of errors". Backpropagation requires that the activation function used by the artificial neurons (or "nodes") be differentiable.
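For a single sigmoid output unit, the generalized delta rule adjusts each weight in proportion to the error and to the derivative of the activation, which is why differentiability is required. A minimal sketch, with an illustrative learning rate:
#include <cmath>
// Delta-rule update for one weight of a single sigmoid unit.
// t = desired (teacher) output, o = actual output, x = input carried by this weight.
// eta is an illustrative learning rate, not a value prescribed by the algorithm.
float deltaRuleUpdate(float w, float x, float t, float o, float eta = 0.5f) {
    float dOut = o * (1.0f - o);         // derivative of the sigmoid, written in terms of its output
    return w + eta * (t - o) * dOut * x; // w := w + eta * error * f'(net) * x
}
The backpropagation steps below generalize this update to the weights feeding the hidden layers.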
The Steps
1. Initialize weights with random values
2. Present the input vector to the network
3. Evaluate the output of the network after a forward propagation of the signal
4. Calculate the error (T - O) at the output units
5. Compute delta_wh for all weights from hidden layer to output layer (backward pass)
6. Compute delta_wi for all weights from input layer to hidden layer (backward pass continued)
7. Update weights
8. Termination criteria: go to Step 2 for a fixed number of iterations or until the error falls below a chosen threshold.
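The OpenCV code later in this post performs all of these steps internally inside CvANN_MLP::train. For reference, here is a minimal hand-written sketch of the same loop for a tiny 2-2-1 network. The layer sizes, sigmoid activation, learning rate, and the single training sample are illustrative assumptions, and bias terms are omitted for brevity.
#include <cmath>
#include <cstdio>
#include <cstdlib>
static double sigmoid(double z)    { return 1.0 / (1.0 + std::exp(-z)); }
static double dsigmoid(double out) { return out * (1.0 - out); } // sigmoid derivative via its output
int main() {
    const double eta = 0.5;  // learning rate (illustrative)
    double w_ih[2][2];       // weights input -> hidden: w_ih[input][hidden]
    double w_ho[2];          // weights hidden -> output
    // Step 1: initialize weights with random values
    for (int j = 0; j < 2; ++j) {
        w_ho[j] = std::rand() / (double)RAND_MAX - 0.5;
        for (int i = 0; i < 2; ++i)
            w_ih[i][j] = std::rand() / (double)RAND_MAX - 0.5;
    }
    const double x[2] = {0.0, 1.0}; // Step 2: present the input vector
    const double T = 1.0;           // desired output supplied by the teacher
    double O = 0.0;
    for (int iter = 0; iter < 1000; ++iter) { // Step 8: fixed number of iterations
        // Step 3: forward propagation of the signal
        double h[2], net_o = 0.0;
        for (int j = 0; j < 2; ++j) {
            h[j] = sigmoid(w_ih[0][j] * x[0] + w_ih[1][j] * x[1]);
            net_o += w_ho[j] * h[j];
        }
        O = sigmoid(net_o);
        // Step 4: error (T - O) at the output unit
        double err = T - O;
        // Step 5: delta for the weights from hidden layer to output layer (backward pass)
        double delta_o = err * dsigmoid(O);
        // Step 6: deltas for the weights from input layer to hidden layer (backward pass continued)
        double delta_h[2];
        for (int j = 0; j < 2; ++j)
            delta_h[j] = delta_o * w_ho[j] * dsigmoid(h[j]);
        // Step 7: update weights
        for (int j = 0; j < 2; ++j) {
            w_ho[j] += eta * delta_o * h[j];
            for (int i = 0; i < 2; ++i)
                w_ih[i][j] += eta * delta_h[j] * x[i];
        }
    }
    std::printf("output after training: %f (target %.1f)\n", O, T);
    return 0;
}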
There are many advantages and limitations to neural network analysis, and to discuss this subject properly we would have to look at each individual type of network, which isn't necessary for this general discussion. With respect to backpropagation networks, however, there are some specific issues potential users should be aware of.
- Backpropagation neural networks (and many other types of networks) are in a sense the ultimate 'black boxes'. Apart from defining the general architecture of a network and perhaps initially seeding it with random numbers, the user has no other role than to feed it input and watch it train and await the output. In fact, it has been said that with backpropagation, "you almost don't know what you're doing". Some freely available software packages (NevProp, bp, Mactivation) do allow the user to sample the network's 'progress' at regular intervals, but the learning itself progresses on its own. The final product of this activity is a trained network that provides no equations or coefficients defining a relationship (as in regression) beyond its own internal mathematics. The network 'IS' the final equation of the relationship.
- Backpropagation networks also tend to be slower to train than other types of networks and sometimes require thousands of epochs. If run on a truly parallel computer system this is not really a problem, but if the BPNN is being simulated on a standard serial machine (i.e. a single SPARC, Mac or PC) training can take some time. This is because the machine's CPU must compute the function of each node and connection separately, which can be problematic in very large networks with a large amount of data. However, the speed of most current machines is such that this is typically not much of an issue.
#include <iostream>
#include <math.h>
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
//required for multilayer perceptrons
#include<opencv2/ml/ml.hpp>
using namespace cv;
using namespace std;
#define numTrainingPoints 200
#define numTestPoints 2000
#define size 200
// accuracy
float evaluate(cv::Mat& predicted, cv::Mat& actual) {
assert(predicted.rows == actual.rows);
int t = 0;
int f = 0;
for(int i = 0; i < actual.rows; i++) {
float p = predicted.at<float>(i,0);
float a = actual.at<float>(i,0);
if((p >= 0.0 && a >= 0.0) || (p <= 0.0 && a <= 0.0)) {
t++;
} else {
f++;
}
}
return (t * 1.0) / (t + f);
}
// plot data and class
void plot_binary(cv::Mat& data, cv::Mat& classes, string name) {
cv::Mat plot(size, size, CV_8UC3);
plot.setTo(CV_RGB(255,255,255));
for(int i = 0; i < data.rows; i++) {
float x = data.at<float>(i,0) * size;
float y = data.at<float>(i,1) * size;
if(classes.at<float>(i, 0) > 0) {
cv::circle(plot, Point(x,y), 2, CV_RGB(255,0,0),1);
} else {
cv::circle(plot, Point(x,y), 2, CV_RGB(0,255,0),1);
}
}
cv::imshow(name, plot);
}
// target function to learn: assigns class -1 or 1 to a point (x, y)
int dataLearn(float x, float y) {
return y > atan(y/x) ? -1 : 1;
}
// label data with equation
cv::Mat labelData(cv::Mat points) {
cv::Mat labels(points.rows, 1, CV_32FC1);
for(int i = 0; i < points.rows; i++) {
float x = points.at<float>(i,0);
float y = points.at<float>(i,1);
labels.at<float>(i, 0) = dataLearn(x, y);
}
return labels;
}
class atsANN
{
public:
void atstraineval(cv::Mat trainingData, cv::Mat trainingClasses, cv::Mat testData, cv::Mat testClasses)
{
cv::Mat layers = cv::Mat(4, 1, CV_32SC1); //create 4 layers
layers.row(0) = cv::Scalar(2); //input layer accepts x and y
layers.row(1) = cv::Scalar(10);//hidden layer
layers.row(2) = cv::Scalar(15);//hidden layer
layers.row(3) = cv::Scalar(1); //output layer returns 1 or -1
//Create the ANN
CvANN_MLP ann;
//ANN criteria for termination
CvTermCriteria criter;
criter.max_iter = 100;
criter.epsilon = 0.00001f;
criter.type = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;
//ANN parameters
CvANN_MLP_TrainParams params;
params.train_method = CvANN_MLP_TrainParams::BACKPROP;
params.bp_dw_scale = 0.05f;
params.bp_moment_scale = 0.05f;
params.term_crit = criter; //termination criteria
ann.create(layers);
// train
ann.train(trainingData, trainingClasses, cv::Mat(), cv::Mat(), params);
cv::Mat predicted(testClasses.rows, 1, CV_32F);
for(int i = 0; i < testData.rows; i++) {
cv::Mat response(1, 1, CV_32FC1);
cv::Mat sample = testData.row(i);
ann.predict(sample, response);
predicted.at<float>(i,0) = response.at<float>(0,0);
}
float percentage = evaluate(predicted, testClasses) * 100;
cout << "Artificial Neural Network Evaluated Accuracy = " << percentage << "%" << endl;
prediction = predicted;
}
void showplot(cv::Mat testData)
{
plot_binary(testData, prediction, "Predictions Backpropagation");
}
private:
cv::Mat prediction;
};
int main() {
cv::Mat trainingData(numTrainingPoints, 2, CV_32FC1);
cv::Mat testData(numTestPoints, 2, CV_32FC1);
cv::randu(trainingData,0,1); //generate uniformly-distributed random numbers from the range [low, high)
cv::randu(testData,0,1);
cv::Mat trainingClasses = labelData(trainingData);
cv::Mat testClasses = labelData(testData);
plot_binary(trainingData, trainingClasses, "Plot of Training Data");
plot_binary(testData, testClasses, "Plot of Test Data");
atsANN myANN;
myANN.atstraineval(trainingData,trainingClasses,testData,testClasses);
myANN.showplot(testData);
cv::waitKey();
return 0;
}
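The listing above uses the C-style CvANN_MLP interface from the OpenCV 2.x ml module. Assuming an OpenCV 2.x installation that registers itself with pkg-config under the name opencv (and an arbitrary source file name), a build command along these lines should work:
g++ ann_backprop.cpp -o ann_backprop `pkg-config --cflags --libs opencv`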