
Log-polar transform using camera


The log-polar transform function emulates the human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, object tracking, and so forth. The function cannot operate in-place.
"The foveal system of the human eye is the only part of the retina that permits 100% visual acuity. The line-of-sight is a virtual line connecting the fovea with a fixation point in the outside world." (Wikipedia)

Code:


#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc_c.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    /* initialize camera */
    CvCapture* capture = cvCaptureFromCAM( 0 );
    /* always check */
    if( !capture ) {
        printf("Cannot initialize webcam!\n");
        return -1;
    }

    /* create the windows once, outside the loop */
    cvNamedWindow( "log-polar", 1 );
    cvNamedWindow( "inverse log-polar", 1 );

    IplImage* dst = 0;   /* log-polar image */
    IplImage* src2 = 0;  /* reconstructed (inverse) image */

    for(;;) {
        /* get a frame */
        IplImage* src = cvQueryFrame( capture );
        /* always check */
        if( !src ) break;

        /* allocate the output images once, so we do not leak memory every frame */
        if( !dst )  dst  = cvCreateImage( cvSize(256,256), 8, 3 );
        if( !src2 ) src2 = cvCreateImage( cvGetSize(src), 8, 3 );

        /* forward log-polar transform, centered on the image */
        cvLogPolar( src, dst, cvPoint2D32f(src->width/2, src->height/2), 40,
                    CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS );
        /* inverse transform back to Cartesian coordinates */
        cvLogPolar( dst, src2, cvPoint2D32f(src->width/2, src->height/2), 40,
                    CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS + CV_WARP_INVERSE_MAP );

        cvShowImage( "log-polar", dst );
        cvShowImage( "inverse log-polar", src2 );

        /* exit on ESC */
        if( cvWaitKey(10) == 27 ) break;
    }

    /* release resources */
    cvReleaseImage( &dst );
    cvReleaseImage( &src2 );
    cvReleaseCapture( &capture );
    cvDestroyAllWindows();
    return 0;
}
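
The C interface used above still works in OpenCV 2.x, but on a newer build the same pipeline can be written against the C++ API. A minimal sketch, assuming an OpenCV 3.x installation where cv::VideoCapture, cv::logPolar, and the cv::WARP_* flags are available (parameter values mirror the C version; treat it as a starting point rather than a drop-in replacement):

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::VideoCapture capture(0);           // open the default camera
    if (!capture.isOpened()) return -1;    // always check

    cv::Mat frame, dst, src2;
    for (;;) {
        capture >> frame;                  // grab a frame
        if (frame.empty()) break;

        cv::Point2f center(frame.cols / 2.0f, frame.rows / 2.0f);

        // forward log-polar transform (M = 40, as in the C version)
        cv::logPolar(frame, dst, center, 40,
                     cv::INTER_LINEAR + cv::WARP_FILL_OUTLIERS);
        // inverse transform back to Cartesian coordinates
        cv::logPolar(dst, src2, center, 40,
                     cv::INTER_LINEAR + cv::WARP_FILL_OUTLIERS + cv::WARP_INVERSE_MAP);

        cv::imshow("log-polar", dst);
        cv::imshow("inverse log-polar", src2);
        if (cv::waitKey(10) == 27) break;  // exit on ESC
    }
    return 0;
}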

Comments

  1. What is the purpose or use of the log-polar transform or inverse log-polar transform in real life? Can it be used to estimate distance from a frontal camera view?
