Home

Project Admins: Christopher Johnson

Introduction

This source code was originally written to answer the following Stack Overflow question: http://stackoverflow.com/questions/8291649/face-recognition-using-surf-in-emgucv/8302370#8302370. It was uploaded here for others to use. It is a quick, simple project, and work is currently under way to see whether the EMGU SURF detection algorithm can be used to detect multiple objects.

Feel free to download the source, have a play, and see if you can help find a solution.

Setting up:

To start with, you will need to replace the image "Genericface.jpg" with a copy of your own, i.e. the face you wish to detect.

The main method you are interested in is public Image&lt;Gray, byte&gt; DoSurf(Image&lt;Gray, byte&gt; input). It is called from ProcessFrame after an image is acquired from a capture device (webcam); Image&lt;Gray, byte&gt; observedImage = input.Copy(); is the acquired frame. The code is similar to the SURF detection example provided with EMGU; however, I have been working on trying to detect multiple objects within an image. If you wish to try this, you will need to adjust the settings in the following code:
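For orientation, the capture loop around DoSurf might be wired up roughly as below. This is a sketch only: the field name _capture, the imageBox1 control, and the Application.Idle wiring are assumptions based on the standard EMGU capture examples; only ProcessFrame and DoSurf are named in this project.

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.CV.Structure;

public partial class MainForm : Form
{
    private Capture _capture;   // hypothetical: webcam capture device

    public MainForm()
    {
        _capture = new Capture();           // default webcam
        Application.Idle += ProcessFrame;   // process a frame on every idle tick
    }

    private void ProcessFrame(object sender, EventArgs e)
    {
        // Grab a greyscale frame and hand it to the SURF routine.
        Image<Gray, byte> frame = _capture.QueryGrayFrame();
        Image<Gray, byte> result = DoSurf(frame);
        imageBox1.Image = result;           // hypothetical display control
    }
}
```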

  Features2DTracker.MatchedImageFeature[] matchedFeatures = tracker.MatchFeature(imageFeatures, 2, 20);
  matchedFeatures = Features2DTracker.VoteForUniqueness(matchedFeatures, 0.8); // default 0.8; lowering towards 0.1 keeps only near-exact matches
  matchedFeatures = Features2DTracker.VoteForSizeAndOrientation(matchedFeatures, 1.5, 20);

Basically, the approach I am inspecting sends a selected number of matchedFeatures to the HomographyMatrix calculation to see if an object can be recognised. As it stands, this does not work particularly well. I expect that I have yet to find good settings and, more importantly, that I need to use the list of matched features within matchedFeatures properly, as they are not listed sequentially. I will attempt to rewrite the code to test all combinations of the matchedFeatures list.
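Testing all combinations of the match list amounts to enumerating k-element subsets of matchedFeatures and running the homography check on each subset. As a sketch of how that enumeration might look, here is a self-contained k-subset index generator in plain C#; the class and method names are invented, and nothing here is taken from the project itself:

```csharp
using System;
using System.Collections.Generic;

static class Combinations
{
    // Yields every k-element index combination of {0, ..., n-1} in
    // lexicographic order. Each combination could be used to pick a subset
    // of matchedFeatures to pass to GetHomographyMatrixFromMatchedFeatures.
    public static IEnumerable<int[]> OfSize(int n, int k)
    {
        int[] idx = new int[k];
        for (int i = 0; i < k; i++) idx[i] = i;   // first combination: 0..k-1
        while (true)
        {
            yield return (int[])idx.Clone();
            int pos = k - 1;
            // Find the rightmost index that can still be incremented.
            while (pos >= 0 && idx[pos] == n - k + pos) pos--;
            if (pos < 0) yield break;             // all combinations emitted
            idx[pos]++;
            for (int j = pos + 1; j < k; j++) idx[j] = idx[j - 1] + 1;
        }
    }
}
```

Note that the number of subsets grows combinatorially, so in practice the subset size and the candidate list would both need to be kept small.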

I've included it in case you wish to play around with it.

Anyhow, the main section that performs the recognition is in the region "Surf detection under testing". The for loop is where the code tests for multiple matches in face detection; this situation is unlikely, however, so feel free to delete some of this code if it is surplus to requirements.

This is the code that uses all the features and draws a rectangle around any detected face that matches the one you are looking for.

 //Merge the object image and the observed image into one image for display
 #region draw lines between the matched features
 foreach (Features2DTracker.MatchedImageFeature matchedFeature in matchedFeatures)
 {
      PointF p = matchedFeature.ObservedFeature.KeyPoint.Point;
      p.Y += face.Height;
      res.Draw(new LineSegment2DF(matchedFeature.SimilarFeatures[0].Feature.KeyPoint.Point, p), new Gray(255), 1);
 }
 #endregion

 if (No_Match)
 {
      HomographyMatrix homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(matchedFeatures);

      #region draw the project region on the image
      if (homography != null)
      {  //draw a rectangle along the projected model
           Rectangle rect = face.ROI;
           PointF[] pts = new PointF[] { 
                     new PointF(rect.Left, rect.Bottom),
                     new PointF(rect.Right, rect.Bottom),
                     new PointF(rect.Right, rect.Top),
                     new PointF(rect.Left, rect.Top)};
           homography.ProjectPoints(pts);

           for (int k = 0; k < pts.Length; k++) pts[k].Y += face.Height;

           res.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Gray(255.0), 5);
      }
      #endregion
 }
 #endregion
 return res; 

If you do delete the for loop, you will also have to remove the if (No_Match) statement or set No_Match permanently to true.
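For reference, the trimmed single-face version of that region might look like the sketch below. It is assembled from the excerpt above with the No_Match guard removed; res and face are the same variables as in the original code.

```csharp
// Simplified sketch: single homography test, no multi-match loop.
HomographyMatrix homography =
    Features2DTracker.GetHomographyMatrixFromMatchedFeatures(matchedFeatures);
if (homography != null)
{
    // Project the corners of the model face's ROI into the observed image.
    Rectangle rect = face.ROI;
    PointF[] pts = new PointF[] {
        new PointF(rect.Left, rect.Bottom),
        new PointF(rect.Right, rect.Bottom),
        new PointF(rect.Right, rect.Top),
        new PointF(rect.Left, rect.Top)};
    homography.ProjectPoints(pts);

    // Shift down by the model image height, since the display image stacks
    // the model face above the observed frame.
    for (int k = 0; k < pts.Length; k++) pts[k].Y += face.Height;

    res.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Gray(255.0), 5);
}
return res;
```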