
Finding the Center of Gravity in OpenCV





A lot of feature detectors, such as the Haar classifier, return rectangular regions representing each detected feature.

One of the first steps in tracking these detections is to find the center of the rectangle.


#include <opencv2/opencv.hpp>
using namespace cv;

// draws an X-shaped cross marker at the given center point
#define drawCross( center, color, d, img )                          \
    line( img, Point( center.x - d, center.y - d ),                 \
          Point( center.x + d, center.y + d ), color, 2, CV_AA, 0); \
    line( img, Point( center.x + d, center.y - d ),                 \
          Point( center.x - d, center.y + d ), color, 2, CV_AA, 0 )

int main( int argc, char** argv )
{
    atscameraCapture movie;   // custom camera capture wrapper (not part of OpenCV)
    char code = (char)-1;
    for(;;)
    {
        //get camera frame
        cv::Mat imgs = movie.ats_getImage();

        //given 2 points representing the rectangle
        cv::Point topLeft(100,100);
        cv::Point bottomRight(200,200);
        cv::rectangle( imgs, topLeft, bottomRight, Scalar( 0, 255, 255 ), -1, 8 );

        //compute the center of the rectangle (midpoint of its diagonal)
        cv::Point center;
        center.x = (topLeft.x + bottomRight.x) / 2;
        center.y = (topLeft.y + bottomRight.y) / 2;
        drawCross( center, Scalar(0,0,255), 5, imgs );

        imshow( "camera", imgs );
        code = (char)waitKey(100);

        if( code == 27 || code == 'q' || code == 'Q' )
            break;
    }
    return 0;
}
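The same midpoint computation applies to the cv::Rect objects that a Haar cascade returns from detectMultiScale. Below is a minimal sketch of that idea, not part of the original code; the cascade file (haarcascade_frontalface_alt.xml) and input image (input.jpg) are placeholder paths you would swap for your own.

#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>
using namespace cv;

int main()
{
    // NOTE: cascade file and image path are placeholders for this sketch
    CascadeClassifier cascade;
    if( !cascade.load( "haarcascade_frontalface_alt.xml" ) )
        return -1;

    Mat img = imread( "input.jpg" );
    if( img.empty() )
        return -1;

    Mat gray;
    cvtColor( img, gray, CV_BGR2GRAY );
    equalizeHist( gray, gray );

    // each detection comes back as a rectangle
    std::vector<Rect> faces;
    cascade.detectMultiScale( gray, faces, 1.1, 3, 0, Size(30,30) );

    for( size_t i = 0; i < faces.size(); i++ )
    {
        // center of the rectangle = top-left corner + half the size
        Point center( faces[i].x + faces[i].width  / 2,
                      faces[i].y + faces[i].height / 2 );
        circle( img, center, 4, Scalar(0,0,255), -1, 8, 0 );
        printf( "detection %d center: (%d, %d)\n", (int)i, center.x, center.y );
    }

    imshow( "detections", img );
    waitKey(0);
    return 0;
}

Adding half the width and height to the top-left corner is the same as averaging the two corner points, as done in the loop above.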

Another way of finding the center of gravity is to use OpenCV's built-in cv::moments function. For a contour, the mass center is simply (m10/m00, m01/m00).

#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>
using namespace cv;
using namespace std;

Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
RNG rng(12345);

/** @function main */
int main( int argc, char** argv )
{
    atscameraCapture movie;   // custom camera capture wrapper (not part of OpenCV)
    char code = (char)-1;
    for(;;)
    {
        //get camera frame
        cv::Mat imgs = movie.ats_getImage();
        cv::Mat src_gray;

        /// Convert image to gray and blur it
        cvtColor( imgs, src_gray, CV_BGR2GRAY );
        blur( src_gray, src_gray, Size(3,3) );

        /// Detect edges using Canny
        Canny( src_gray, canny_output, 10, 50, 3 );

        /// Find contours
        findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );

        /// Get the moments
        vector<Moments> mu( contours.size() );
        for( size_t i = 0; i < contours.size(); i++ )
            mu[i] = moments( Mat(contours[i]), false );

        /// Get the mass centers: centroid = (m10/m00, m01/m00)
        vector<Point2f> mc( contours.size() );
        for( size_t i = 0; i < contours.size(); i++ )
            mc[i] = Point2f( mu[i].m10 / mu[i].m00, mu[i].m01 / mu[i].m00 );

        /// Draw each contour and its mass center
        Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
        for( size_t i = 0; i < contours.size(); i++ )
        {
            Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
            drawContours( drawing, contours, (int)i, color, 2, 8, hierarchy, 0, Point() );
            circle( drawing, mc[i], 4, color, -1, 8, 0 );
            printf( " * Contour[%d] - Area (M_00) = %.2f - Area OpenCV: %.2f - Length: %.2f \n",
                    (int)i, mu[i].m00, contourArea(Mat(contours[i])), arcLength( Mat(contours[i]), true ) );
        }

        namedWindow( "Contours", CV_WINDOW_AUTOSIZE );
        imshow( "Contours", drawing );

        //imshow("camera", src_gray);
        code = (char)waitKey(1000);

        if( code == 27 || code == 'q' || code == 'Q' )
            break;
    }
    return 0;
}
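cv::moments also accepts a single-channel image directly instead of a contour. In that case m00, m10 and m01 are sums over the pixel intensities, so the same ratios m10/m00 and m01/m00 give the intensity-weighted center of gravity of the whole image. Here is a minimal sketch of that use, with input.jpg as a placeholder path:

#include <opencv2/opencv.hpp>
#include <cstdio>
using namespace cv;

int main()
{
    // placeholder path for this sketch
    Mat gray = imread( "input.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    if( gray.empty() )
        return -1;

    // intensity moments of the whole image (binaryImage = false)
    Moments m = moments( gray, false );
    if( m.m00 == 0 )   // completely black image, no center of gravity
        return -1;

    // intensity-weighted center of gravity
    Point2f cog( m.m10 / m.m00, m.m01 / m.m00 );
    printf( "center of gravity: (%.2f, %.2f)\n", cog.x, cog.y );

    // mark it on the image and show the result
    circle( gray, cog, 4, Scalar(255), -1, 8, 0 );
    imshow( "center of gravity", gray );
    waitKey(0);
    return 0;
}

If you threshold the image first (or pass binaryImage = true), the result becomes the centroid of the white region rather than an intensity-weighted average.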
