Artificial Intelligence (Multilayer Perceptrons) in OPENCV

Neural networks are models of biological neural structures. The starting point for most neural networks is a model neuron. Each input to the neuron is multiplied by a weight; the neuron then combines these weighted inputs and, with reference to a threshold value and an activation function, uses them to determine its output. This behavior follows closely our understanding of how real neurons work.
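As a minimal sketch of this model neuron (illustrative C++ only, not part of OpenCV; the names are made up for this example):

#include <vector>
#include <numeric>

// A model neuron: multiply each input by its weight, sum the results,
// and apply a step activation with respect to a threshold.
float neuronOutput(const std::vector<float>& inputs,
                   const std::vector<float>& weights,
                   float threshold)
{
    float sum = std::inner_product(inputs.begin(), inputs.end(),
                                   weights.begin(), 0.0f);
    return (sum >= threshold) ? 1.0f : -1.0f; // step activation
}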

Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each containing an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates with one or more 'hidden layers' where the actual processing is done through a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is produced.

Backpropagation is a common method of teaching artificial neural networks how to perform a given task. It is a supervised learning method and a generalization of the delta rule: it requires a teacher that knows, or can calculate, the desired output for any input in the training set. It is most useful for feed-forward networks (networks that have no feedback, or simply, no connections that loop). The term is an abbreviation for "backward propagation of errors". Backpropagation requires that the activation function used by the artificial neurons (or "nodes") be differentiable.
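For instance, the hyperbolic tangent is a common differentiable activation (the symmetric sigmoid that CvANN_MLP uses by default is a scaled variant of it). A minimal sketch of such a function and the derivative the backward pass needs:

#include <math.h>

// tanh activation; its derivative can be expressed in terms of the
// output itself, which is what backpropagation actually uses.
float activate(float x)      { return tanh(x); }
float activateDeriv(float y) { return 1.0f - y * y; } // where y = tanh(x)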

The Steps
1. Initialize the weights with random values.
2. Present an input vector to the network.
3. Evaluate the output of the network after a forward propagation of the signal.
4. Calculate the error (T - O) at the output units.
5. Compute delta_wh for all weights from the hidden layer to the output layer (backward pass).
6. Compute delta_wi for all weights from the input layer to the hidden layer (backward pass continued).
7. Update the weights (a minimal sketch of steps 4-7 is shown after this list).
8. Termination criteria: go to step 2 and repeat for a fixed number of iterations, or until the error drops below a threshold.
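Here is a minimal sketch of steps 4-7 for a single output unit with a tanh activation (illustrative only; OpenCV's CvANN_MLP performs these updates internally when the train method is BACKPROP):

// hiddenOut[j] : output of hidden node j after the forward pass
// w[j]         : weight from hidden node j to the output node
// target, out  : desired output T and actual output O
// eta          : learning rate (cf. bp_dw_scale in the code below)
void updateOutputWeights(const float* hiddenOut, float* w, int nHidden,
                         float target, float out, float eta)
{
    float error = target - out;                // step 4: T - O
    float delta = error * (1.0f - out * out);  // error scaled by f'(net) for f = tanh
    for (int j = 0; j < nHidden; j++)
        w[j] += eta * delta * hiddenOut[j];    // steps 5 and 7: delta_w, then update
}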


There are many advantages and limitations to neural network analysis, and to discuss the subject properly we would have to look at each individual type of network, which isn't necessary for this general discussion. In reference to backpropagation networks, however, there are some specific issues potential users should be aware of.

  • Backpropagation neural networks (and many other types of networks) are in a sense the ultimate 'black boxes'. Apart from defining the general architecture of the network and perhaps initially seeding it with random numbers, the user has no role other than to feed it input and watch it train and await the output. In fact, it has been said that with backpropagation, "you almost don't know what you're doing". Some freely available software packages (NevProp, bp, Mactivation) do allow the user to sample the network's 'progress' at regular time intervals, but the learning itself progresses on its own. The final product of this activity is a trained network that provides no equations or coefficients defining the relationship (as in regression) beyond its own internal mathematics. The network 'IS' the final equation of the relationship.
  • Backpropagation networks also tend to be slower to train than other types of networks, sometimes requiring thousands of epochs. On a truly parallel computer system this is not really a problem, but if the BPNN is being simulated on a standard serial machine (i.e. a single SPARC, Mac or PC) training can take some time, because the machine's CPU must compute the function of each node and connection separately. This can be problematic in very large networks with a large amount of data. However, the speed of most current machines is such that this is typically not much of an issue.
#include <iostream>
#include <math.h>
#include <string>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

// required for multilayer perceptrons
#include <opencv2/ml/ml.hpp>


using namespace cv;
using namespace std;

#define numTrainingPoints 200
#define numTestPoints 2000
#define size 200

// accuracy: fraction of samples whose prediction has the same sign as the actual label
float evaluate(cv::Mat& predicted, cv::Mat& actual) {
    assert(predicted.rows == actual.rows);
    int t = 0;
    int f = 0;
    for(int i = 0; i < actual.rows; i++) {
        float p = predicted.at<float>(i,0);
        float a = actual.at<float>(i,0);
        if((p >= 0.0 && a >= 0.0) || (p <= 0.0 &&  a <= 0.0)) {
            t++;
        } else {
            f++;
        }
    }
    return (t * 1.0) / (t + f);
}

// plot data points colored by class (red for class > 0, green otherwise)
void plot_binary(cv::Mat& data, cv::Mat& classes, string name) {
    cv::Mat plot(size, size, CV_8UC3);
    plot.setTo(CV_RGB(255,255,255));
    for(int i = 0; i < data.rows; i++) {

        float x = data.at<float>(i,0) * size;
        float y = data.at<float>(i,1) * size;

        if(classes.at<float>(i, 0) > 0) {
            cv::circle(plot, Point(x,y), 2, CV_RGB(255,0,0),1);
        } else {
            cv::circle(plot, Point(x,y), 2, CV_RGB(0,255,0),1);
        }
    }
    cv::imshow(name, plot);
}

// function to learn: class is -1 when y > atan(y/x), otherwise 1
int dataLearn(float x, float y) {
    return y > atan(y/x) ? -1 : 1;
}

// label data with equation
cv::Mat labelData(cv::Mat points) {
    cv::Mat labels(points.rows, 1, CV_32FC1);
    for(int i = 0; i < points.rows; i++) {
        float x = points.at<float>(i,0);
        float y = points.at<float>(i,1);
        labels.at<float>(i, 0) = dataLearn(x, y);
    }
    return labels;
}

class atsANN
{

public:

    void atstraineval(cv::Mat trainingData, cv::Mat trainingClasses, cv::Mat testData, cv::Mat testClasses)
    {
        cv::Mat layers = cv::Mat(4, 1, CV_32SC1); //create 4 layers

        layers.row(0) = cv::Scalar(2); //input layer accepts x and y
        layers.row(1) = cv::Scalar(10);//hidden layer
        layers.row(2) = cv::Scalar(15);//hidden layer
        layers.row(3) = cv::Scalar(1); //output layer returns 1 or -1

        //Create the ANN
        CvANN_MLP ann;

        //ANN termination criteria: stop after max_iter iterations
        //or once the error change falls below epsilon
        CvTermCriteria criter;
        criter.max_iter = 100;
        criter.epsilon = 0.00001f;
        criter.type = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;

        //ANN training parameters
        CvANN_MLP_TrainParams params;
        params.train_method = CvANN_MLP_TrainParams::BACKPROP;
        params.bp_dw_scale = 0.05f;     // learning rate (scale of the weight updates)
        params.bp_moment_scale = 0.05f; // momentum term
        params.term_crit = criter;      // termination criteria

        ann.create(layers); // uses the default symmetric sigmoid activation

        // train on the labeled training data
        ann.train(trainingData, trainingClasses, cv::Mat(), cv::Mat(), params);

        cv::Mat predicted(testClasses.rows, 1, CV_32F);

        for(int i = 0; i < testData.rows; i++) {
            cv::Mat response(1, 1, CV_32FC1); // holds the single network output
            cv::Mat sample = testData.row(i);

            ann.predict(sample, response);
            predicted.at<float>(i,0) = response.at<float>(0,0);
        }
        float percentage = evaluate(predicted, testClasses) * 100;
        cout << "Artificial Neural Network Evaluated Accuracy = " << percentage << "%" << endl;
        prediction = predicted;
      
    }
    void showplot(cv::Mat testData)
    {
        plot_binary(testData, prediction, "Predictions Backpropagation");
    }
private:
    cv::Mat prediction;
};

int main() {

    cv::Mat trainingData(numTrainingPoints, 2, CV_32FC1);
    cv::Mat testData(numTestPoints, 2, CV_32FC1);

    cv::randu(trainingData, 0, 1); // generate uniformly-distributed random numbers in the range [0, 1)
    cv::randu(testData, 0, 1);

    cv::Mat trainingClasses = labelData(trainingData);
    cv::Mat testClasses = labelData(testData);

    plot_binary(trainingData, trainingClasses, "Plot of Training Data");
    plot_binary(testData, testClasses, "Plot of Test Data");

    atsANN myANN;
    myANN.atstraineval(trainingData,trainingClasses,testData,testClasses);
    myANN.showplot(testData);


    cv::waitKey();

    return 0;
}

