
Template Matching Using OpenCV's Built-in Function


For this example, we need to add the following libraries to the project's linker dependencies (the "220d" names are the OpenCV 2.2 debug builds):
opencv_core220d.lib
opencv_highgui220d.lib
opencv_imgproc220d.lib
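
If you are building with Visual Studio (an assumption, since .lib files are listed above), the same libraries can also be linked straight from the source file instead of through the project settings:

/* MSVC-only alternative: link the OpenCV 2.2 debug libraries via pragmas */
#pragma comment( lib, "opencv_core220d.lib" )
#pragma comment( lib, "opencv_highgui220d.lib" )
#pragma comment( lib, "opencv_imgproc220d.lib" )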

Code:

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <string.h>

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main( int argc, char *argv[] )
{
    IplImage *img;
    IplImage *tpl;
    IplImage *res;
    CvPoint   minloc, maxloc;
    double    minval, maxval;
    int       img_width, img_height;
    int       tpl_width, tpl_height;
    int       res_width, res_height;

    /* check for arguments */
    if( argc < 3 ) {
        printf( "Usage: template_match <reference> <template>\n" );
        return 1;
    }

    /* load reference image */
    img = cvLoadImage( argv[1], CV_LOAD_IMAGE_COLOR );

    /* always check */
    if( img == 0 ) {
        printf( "Cannot load file %s!\n", argv[1] );
        return 1;
    }

    /* load template image */
    tpl = cvLoadImage( argv[2], CV_LOAD_IMAGE_COLOR );

    /* always check */
    if( tpl == 0 ) {
        printf( "Cannot load file %s!\n", argv[2] );
        return 1;
    }

    /* get the images' properties */
    img_width  = img->width;
    img_height = img->height;
    tpl_width  = tpl->width;
    tpl_height = tpl->height;
    res_width  = img_width  - tpl_width  + 1;
    res_height = img_height - tpl_height + 1;

    /* create a new image to hold the template-matching result */
    res = cvCreateImage( cvSize( res_width, res_height ), IPL_DEPTH_32F, 1 );

    /* choose the template matching method to be used */
    cvMatchTemplate( img, tpl, res, CV_TM_SQDIFF );
    /*cvMatchTemplate( img, tpl, res, CV_TM_SQDIFF_NORMED );
    cvMatchTemplate( img, tpl, res, CV_TM_CCORR );
    cvMatchTemplate( img, tpl, res, CV_TM_CCORR_NORMED );
    cvMatchTemplate( img, tpl, res, CV_TM_CCOEFF );
    cvMatchTemplate( img, tpl, res, CV_TM_CCOEFF_NORMED );*/

    /* find the best match; for CV_TM_SQDIFF it is the minimum */
    cvMinMaxLoc( res, &minval, &maxval, &minloc, &maxloc, 0 );

    /* draw a red rectangle around the best match */
    cvRectangle( img,
                 cvPoint( minloc.x, minloc.y ),
                 cvPoint( minloc.x + tpl_width, minloc.y + tpl_height ),
                 cvScalar( 0, 0, 255, 0 ), 1, 8, 0 );

    /* display the images */
    cvNamedWindow( "reference", CV_WINDOW_AUTOSIZE );
    cvNamedWindow( "template", CV_WINDOW_AUTOSIZE );
    cvShowImage( "reference", img );
    cvShowImage( "template", tpl );

    /* wait until the user presses a key to exit */
    cvWaitKey( 0 );

    /* free memory */
    cvDestroyWindow( "reference" );
    cvDestroyWindow( "template" );
    cvReleaseImage( &img );
    cvReleaseImage( &tpl );
    cvReleaseImage( &res );

    return 0;
}
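
A note on interpreting the result: with CV_TM_SQDIFF and CV_TM_SQDIFF_NORMED the best match is the minimum of res (minloc), while with the CCORR and CCOEFF methods it is the maximum (maxloc). If you want to report that the template was not found instead of always drawing a rectangle, a common heuristic (not part of the program above) is to use a normalized method and compare the best score against a hand-picked threshold. The fragment below is a minimal sketch of that idea; it reuses the variables from the program above and assumes CV_TM_CCOEFF_NORMED with an arbitrary threshold of 0.8 that would need tuning for your images:

/* minimal sketch (not from the original program): decide whether the template
   was found, assuming CV_TM_CCOEFF_NORMED and a hand-picked threshold of 0.8;
   reuses img, tpl, res, minval, maxval, minloc, maxloc, tpl_width, tpl_height */
cvMatchTemplate( img, tpl, res, CV_TM_CCOEFF_NORMED );
cvMinMaxLoc( res, &minval, &maxval, &minloc, &maxloc, 0 );

if( maxval >= 0.8 ) {
    /* for the CCORR/CCOEFF methods the best match is at maxloc, not minloc */
    cvRectangle( img,
                 cvPoint( maxloc.x, maxloc.y ),
                 cvPoint( maxloc.x + tpl_width, maxloc.y + tpl_height ),
                 cvScalar( 0, 0, 255, 0 ), 1, 8, 0 );
} else {
    printf( "Template not found (best score %.2f)\n", maxval );
}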

Comments

  1. Nice tutorial.
    I want to know: is it possible to run template matching on video capture,
    I mean on an ongoing (live) video?
    Please reply

    Replies
    1. Yes, it is possible.
      You can take input from a webcam using cvCaptureFromCAM() and then grab each frame as the source image using cvQueryFrame(). Example (a fuller standalone sketch is given after the comments):
      CvCapture *capture = cvCaptureFromCAM( 1 );  /* camera index */
      while( 1 )
      {
          img = cvQueryFrame( capture );           /* do not release this frame */
          /* run cvMatchTemplate() on img here */
          if( cvWaitKey( 10 ) >= 0 ) break;        /* exit on any key press */
      }
      cvReleaseCapture( &capture );

  2. Nice,

    but I have a question: how can I know whether the object has been found or not?

    Note: I don't want to draw any rectangles.

    I am sorry for my English.

    Thanks

  3. How can I know the best location for the fit?
    If I have multiple templates and I want to find the object,
    how can I determine the best locations?

  4. Hi Aresh T. Saharkhiz,

    Thank you for posting such a nice program. I have one query for you.

    Your program works fine with one image containing the face that matches the eyes template, as in your example. What about multiple faces with a single template?

    For example: suppose I cropped an image of just the eyes, like yours, and want to match that template against multiple faces. I am asking because I tried it and it works for only one. I also tried it for car-light detection. Any suggestions?

    Actually, I need your help. With the same template, I want to check for multiple cars.

    Thanks in advance.

  5. Hi, this always draws a rectangle even when the template is not in the picture. Can we avoid that, so that if the template is not found nothing happens?

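
Below is the fuller, self-contained sketch of the webcam idea from the first reply above (a hypothetical adaptation, not part of the original post): it grabs frames with cvCaptureFromCAM()/cvQueryFrame(), runs cvMatchTemplate() with CV_TM_SQDIFF on every frame, and draws the best match. The camera index 0 and the template file name "template.png" are assumptions; adjust them for your setup.

#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main( void )
{
    CvCapture *capture = cvCaptureFromCAM( 0 );  /* 0 = default webcam (assumption) */
    IplImage  *tpl = cvLoadImage( "template.png", CV_LOAD_IMAGE_COLOR );
    IplImage  *frame, *res = 0;
    CvPoint    minloc, maxloc;
    double     minval, maxval;

    if( capture == 0 || tpl == 0 ) {
        printf( "Cannot open the camera or load template.png!\n" );
        return 1;
    }

    cvNamedWindow( "live", CV_WINDOW_AUTOSIZE );

    while( 1 )
    {
        frame = cvQueryFrame( capture );         /* do not release this frame */
        if( frame == 0 ) break;

        /* allocate the result map once, based on the first frame's size */
        if( res == 0 )
            res = cvCreateImage( cvSize( frame->width  - tpl->width  + 1,
                                         frame->height - tpl->height + 1 ),
                                 IPL_DEPTH_32F, 1 );

        cvMatchTemplate( frame, tpl, res, CV_TM_SQDIFF );
        cvMinMaxLoc( res, &minval, &maxval, &minloc, &maxloc, 0 );

        /* best match is at minloc for CV_TM_SQDIFF */
        cvRectangle( frame,
                     cvPoint( minloc.x, minloc.y ),
                     cvPoint( minloc.x + tpl->width, minloc.y + tpl->height ),
                     cvScalar( 0, 0, 255, 0 ), 1, 8, 0 );

        cvShowImage( "live", frame );
        if( cvWaitKey( 10 ) >= 0 ) break;        /* exit on any key press */
    }

    cvDestroyWindow( "live" );
    cvReleaseImage( &res );
    cvReleaseImage( &tpl );
    cvReleaseCapture( &capture );
    return 0;
}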

