
Facial recognition questions

npeltier
2014-03-01
2015-02-21
  • npeltier

    npeltier - 2014-03-01

    Hi,

    I'm Nicolas. I use OpenIMAJ for my personal robotics project, and I think OpenIMAJ is a great Java library. Congratulations on your work.
    For my project I use the HaarCascadeDetector to detect and follow faces (the detector sends events to the motors of the robot's head). It is fast enough for real time. Now I am developing the facial recognition module: the robot must recognise the detected persons and say their names, or else ask who they are and save them in the recognition engine. There is more information on my personal site: www.nicolas-peltier.fr (sorry, it is in French).

    I have some questions about facial recognition:

    1) Which engine would you advise for my project? The input is a webcam stream, and the robot must recognise a person from a minimum of samples, because it may have seen that person only once before. I tried the three recognisers (Eigen, Fisher and Annotator), but I never managed to get Eigen and Fisher running (maybe a bad configuration, or not enough samples in the database).
    And what about the detector, the extractor and the comparator for the engine?

    2) After detection with the HaarCascadeDetector, I have a DetectedFace object. But the recognition engine seems to require a CLMDetectedFace object (for the AnnotatorEngine, for example). Can I convert the DetectedFace into a CLMDetectedFace (or another type) with a "transform" utility class, or do I have to redo the detection with the appropriate detector?

    Thanks for your help.

    Nicolas Peltier

     
  • Jonathon Hare

    Jonathon Hare - 2014-03-03

    Bonjour Nicolas,

    1) None of the current face recognition strategies will work all that well without large numbers of training images; however, you could start with EigenFaces and see if you can get it to work a bit (see http://stackoverflow.com/questions/15496269/how-to-identify-new-faces-with-openimaj/15500588#15500588 for example code and parameters). We're working on some new techniques that should work better with only one training example, but these are not integrated yet.

    2) If the recogniser needs a CLMDetectedFace, then you should use the CLMFaceDetector instead of the HaarCascadeDetector to get the correct type of faces. In the case of the EigenFaces code example linked above, the recogniser only needs DetectedFace objects, but the aligner needs FKEDetectedFace objects to transform the images so that the eyes are in the same place in all examples (hence the FKEFaceDetector).

    Hope that helps,

    Jon

     
  • npeltier

    npeltier - 2014-03-04

    Bonjour Jon,

    Thanks for your reply.

    1) I already tried this configuration for the EigenFaces recogniser without success, but I didn't have many pictures in the database. I will retry with more.
    The best results during my tests were with the Annotator recogniser, with only 1 or 2 pictures per person (Obama, Hollande, Sarkozy). But when I try to recognise a picture of Julia Roberts, the engine recognises Nicolas Sarkozy with a confidence of 1.0! Is that normal?
    For information, I use version 1.1.

    2) I tried other detectors, but they are slower than the HaarCascadeDetector, which gives real-time results (even at 25 fps: incredible!). I chose 80 pixels for the minimum face size; a smaller size slows the computation down.

    Another question: is it possible to use a video listener (with beforeUpdate) without having a display? I use the beforeUpdate handler only to detect faces and send events to my robot.

    I will continue my tests and post an update.

    Thanks again.

    Nicolas

     
  • npeltier

    npeltier - 2014-03-16

    Hi Jon,

    I ran some more tests with the EigenFaceRecogniser using the configuration shown in the post you linked.
    With many samples of my face taken from the webcam stream (~50 images in the recogniser), or with the setup from my previous post, I got no results at all.
    So I ran the tests in debug mode, and I found some strange behaviour by putting a breakpoint in the KNNAnnotator class.

    When I attempt to recognise my face from the webcam, the index corresponds to my name in the annotations array, but the distance is between -0.9 and -0.7.
    With faces close to the database photos (Sarkozy, Hollande, ...), the index corresponds to the right annotation, with a distance of about -0.5.
    And with exactly the same photo, the index corresponds to the right annotation, with a distance of -1.

    I don't know if it is normal to have negative distances here. If they were positive, they would be close to the threshold and there would be matches.

    I tried to understand how the distance is calculated, but it gets too complex for me (except the Euclidean distance) ;-)
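
    To make sure I understood the difference between the two kinds of measure, I wrote a small check in plain Java (simple arrays, nothing from OpenIMAJ):

```java
public class Measures {
    /** Euclidean distance: 0 when identical, grows with dissimilarity, never negative. */
    public static double euclidean(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    /** Pearson correlation: +1 for identical vectors, down to -1 for opposite ones. */
    public static double correlation(double[] a, double[] b) {
        double ma = mean(a), mb = mean(b), num = 0, da = 0, db = 0;
        for (int i = 0; i < a.length; i++) {
            num += (a[i] - ma) * (b[i] - mb);
            da += (a[i] - ma) * (a[i] - ma);
            db += (b[i] - mb) * (b[i] - mb);
        }
        return num / Math.sqrt(da * db);
    }

    private static double mean(double[] v) {
        double s = 0;
        for (double x : v) s += x;
        return s / v.length;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] y = {1, 2, 3, 4};
        double[] z = {4, 3, 2, 1};
        System.out.println(euclidean(x, y));   // 0.0  (identical vectors)
        System.out.println(correlation(x, y)); // 1.0  (identical vectors)
        System.out.println(correlation(x, z)); // -1.0 (opposite vectors)
    }
}
```

    So Euclidean is a true distance (0 for identical vectors, never negative), while correlation is a similarity (+1 for identical vectors, down to -1). The -1 I get for an identical photo looks like a similarity of +1 with the sign flipped somewhere, but that is only my guess.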

    I use OpenIMAJ version 1.1 on Ubuntu 12.04.

    So, if you have any idea ...

    Thanks,

    Nicolas

    public class KNNAnnotator<OBJECT, ANNOTATION, EXTRACTOR extends FeatureExtractor<FEATURE, OBJECT>, FEATURE>
            extends
            IncrementalAnnotator<OBJECT, ANNOTATION, EXTRACTOR>
    {
    ...
        public List<ScoredAnnotation<ANNOTATION>> annotate(final OBJECT object) {
            if (this.nn == null)
                this.nn = new ObjectNearestNeighboursExact<FEATURE>(this.features, this.comparator);
    
            final TObjectIntHashMap<ANNOTATION> selected = new TObjectIntHashMap<ANNOTATION>();
    
            final List<FEATURE> queryfv = new ArrayList<FEATURE>(1);
            queryfv.add(this.extractor.extractFeature(object));
    
            final int[][] indices = new int[1][this.k];
            final float[][] distances = new float[1][this.k];
    
            this.nn.searchKNN(queryfv, this.k, indices, distances);
    
            int count = 0;
            for (int i = 0; i < this.k; i++) {
                // Distance check
                if (this.comparator.isDistance()) {
                    if (distances[0][i] > this.threshold) {
                        continue;
                    }
                } else {
                    if (distances[0][i] < this.threshold) {
                        continue;
                    }
                }
    
                final Collection<ANNOTATION> anns = this.annotations.get(indices[0][i]);
    
                for (final ANNOTATION ann : anns) {
                    selected.adjustOrPutValue(ann, 1, 1);
                    count++;
                }
            }
    
            final TObjectIntIterator<ANNOTATION> iterator = selected.iterator();
            final List<ScoredAnnotation<ANNOTATION>> result = new ArrayList<ScoredAnnotation<ANNOTATION>>(selected.size());
            while (iterator.hasNext()) {
                iterator.advance();
    
                result.add(new ScoredAnnotation<ANNOTATION>(iterator.key(), (float) iterator.value() / (float) count));
            }
    
            return result;
        } 
    }
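
    If I read the end of annotate() correctly, the confidence is just the vote fraction among the k nearest neighbours, so with k = 1 any neighbour that passes the threshold scores 1.0, however far away it is. That would explain the confidence of 1.0 I got for Julia Roberts in my earlier test. I reproduced the scoring in plain Java to check (the names here are mine, not OpenIMAJ's):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VoteFraction {
    /** Score each label by its share of the k nearest-neighbour votes. */
    public static Map<String, Float> score(List<String> neighbourLabels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : neighbourLabels)
            counts.merge(label, 1, Integer::sum);

        Map<String, Float> scores = new HashMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet())
            scores.put(e.getKey(), (float) e.getValue() / neighbourLabels.size());
        return scores;
    }

    public static void main(String[] args) {
        // k = 1: the single neighbour always scores 1.0, even if it is a poor match
        System.out.println(score(List.of("Sarkozy"))); // {Sarkozy=1.0}
        // k = 3: the confidence reflects agreement among the neighbours
        System.out.println(score(List.of("Sarkozy", "Sarkozy", "Hollande")));
    }
}
```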
    
     
  • Jonathon Hare

    Jonathon Hare - 2014-03-16

    Hi Nicolas,

    What distance measure are you using? With Euclidean I certainly wouldn't expect to see negative distances, and I think it should all work as expected. Looking through the code, it appears there might be a bug if you choose a similarity measure (e.g. COSINE_SIM) and try to use that with the KNNAnnotator; I'll have to have a closer look at this...
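
    For reference, the accept/reject test in KNNAnnotator.annotate() boils down to the following (a simplified plain-Java version; the real class is generic over feature types and comparators):

```java
public class ThresholdCheck {
    /**
     * Mirrors the accept/reject test in KNNAnnotator.annotate(): for a
     * distance measure a neighbour is kept when its score is at most the
     * threshold; for a similarity measure, when it is at least the threshold.
     */
    public static boolean accepts(double score, double threshold, boolean isDistance) {
        return isDistance ? score <= threshold : score >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(accepts(0.4, 0.5, true));    // true:  Euclidean 0.4 is within threshold 0.5
        System.out.println(accepts(0.95, 0.9, false));  // true:  similarity 0.95 is above threshold 0.9
        System.out.println(accepts(-0.95, 0.9, false)); // false: a sign-flipped similarity never passes
    }
}
```

    If the nearest-neighbour search hands back similarities with the sign flipped so that "smaller is better", the similarity branch above would reject every neighbour against a positive threshold such as 0.9; that interaction is the kind of thing I need to check.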

    Regarding your previous question about video listeners, there is a static VideoDisplay.createOffscreenVideoDisplay() method that will do what you want.

    Jon

     
    • npeltier

      npeltier - 2014-03-17

      Hi Jon,

      I used this EigenFaceRecognizer that is mentioned in your StackOverflow link but I had the negative distances :

      EigenFaceRecogniser.create(20, new RotateScaleAligner(), 1, DoubleFVComparison.CORRELATION, 0.9f);
      

      So I'm testing the same recogniser with the Euclidean distance. Now I get positive distances, but I don't know what value to use for the threshold.
      What is the best distance strategy?
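
      While waiting, I will try to estimate the threshold empirically: compute the Euclidean distances between images of the same person and between images of different people, then pick a value between the two averages. A rough sketch with toy vectors (the real feature vectors would come from the recogniser, of course):

```java
import java.util.List;

public class ThresholdEstimate {
    /** Plain Euclidean distance between two feature vectors. */
    public static double euclidean(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++)
            s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    /** Midpoint between the mean same-person and mean different-person distances. */
    public static double midpoint(List<double[][]> samePairs, List<double[][]> differentPairs) {
        double same = 0, different = 0;
        for (double[][] pair : samePairs)
            same += euclidean(pair[0], pair[1]);
        for (double[][] pair : differentPairs)
            different += euclidean(pair[0], pair[1]);
        return (same / samePairs.size() + different / differentPairs.size()) / 2;
    }

    public static void main(String[] args) {
        // Toy vectors: two shots of person A, one shot of person B
        double[] a1 = {1, 1}, a2 = {1, 2}, b1 = {5, 5};
        List<double[][]> same = List.of(new double[][] { a1, a2 });
        List<double[][]> different = List.of(new double[][] { a1, b1 }, new double[][] { a2, b1 });
        System.out.println(midpoint(same, different)); // lies between the two means (~3.16 here)
    }
}
```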

      I have not tested the listener yet.

      Thanks.

      Nico

       
  • Anonymous

    Anonymous - 2014-08-02

    How do I create a FisherFaceRecogniser instance in OpenIMAJ? What should the parameters to the create method be?

     
  • npeltier

    npeltier - 2015-02-08

    Hi,

    I'm back after a few months. I ran tests with different configurations of the facial recognition engines and got "bad" recognition rates: from ~40% for FisherFaces to ~60% for the Annotator, when I was expecting ~70%.
    I tested against faces from the LFW database, with fifty people and from 5 to 30 training images per person, using EigenFaces, FisherFaces and the Annotator:

    What is the problem? Non-homogeneous training images? Lighting conditions? Colour? Non-frontal faces? The configuration of the engines? Could you help me, please?

    Here is the code of the three engines, with OpenIMAJ 1.3.1:

    // Eigenface
    final FaceDetector<KEDetectedFace, FImage> detector = new FKEFaceDetector(new HaarCascadeDetector(80));
    final EigenFaceRecogniser<KEDetectedFace, String> recogniser = EigenFaceRecogniser.create(20, new RotateScaleAligner(), 1, DoubleFVComparison.EUCLIDEAN, 0.9f);
    moteurReconnaissance = FaceRecognitionEngine.create(detector, recogniser);
    
    // Fisherface
    final FaceDetector<KEDetectedFace, FImage> detector = new FKEFaceDetector(new HaarCascadeDetector(80));
    FisherFaceRecogniser<KEDetectedFace, String> recogniser = FisherFaceRecogniser.create(20, new RotateScaleAligner(), 1, DoubleFVComparison.EUCLIDEAN);
    moteurReconnaissance = FaceRecognitionEngine.create(detector, recogniser);
    
    // Annotator (needs a detector that produces CLMDetectedFace objects, e.g. CLMFaceDetector)
    final FaceDetector<CLMDetectedFace, FImage> detector = new CLMFaceDetector();
    final LocalLBPHistogram.Extractor<CLMDetectedFace> extractor = new LocalLBPHistogram.Extractor<CLMDetectedFace>(new CLMAligner());
    final FacialFeatureComparator<LocalLBPHistogram> comparator = new FaceFVComparator<LocalLBPHistogram, FloatFV>(FloatFVComparison.EUCLIDEAN);
    final KNNAnnotator<CLMDetectedFace, String, LocalLBPHistogram> knn = KNNAnnotator.create(extractor, comparator);
    AnnotatorFaceRecogniser<CLMDetectedFace, String> recogniser = AnnotatorFaceRecogniser.create(knn);
    moteurReconnaissance = FaceRecognitionEngine.create(detector, recogniser);
    

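    For the rates above, I score every engine the same way: identical training and test splits, then simple top-1 accuracy over the test images. The scoring helper is plain Java (no OpenIMAJ types):

```java
import java.util.List;

public class Accuracy {
    /** Fraction of predictions matching the true labels (top-1 accuracy). */
    public static double topOne(List<String> predicted, List<String> truth) {
        if (predicted.size() != truth.size())
            throw new IllegalArgumentException("lists must be the same length");
        int correct = 0;
        for (int i = 0; i < truth.size(); i++)
            if (truth.get(i).equals(predicted.get(i)))
                correct++;
        return (double) correct / truth.size();
    }

    public static void main(String[] args) {
        List<String> truth = List.of("Obama", "Hollande", "Sarkozy", "Obama", "Hollande");
        List<String> pred  = List.of("Obama", "Hollande", "Obama",   "Obama", "Sarkozy");
        System.out.println(topOne(pred, truth)); // 0.6 (3 of 5 correct)
    }
}
```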
    I have more questions about facial recognition: I want to develop gender and emotion recognition.
    For gender recognition, I used FisherFaces with two classes (male and female), as mentioned in this article, again with the LFW database and its metadata for training. But I got bad results again; probably I have the same problems as above. Is this the right method for this kind of recognition with OpenIMAJ?

    For emotion recognition I used the same method, FisherFaces with several classes (happy, sad, surprised), on the Cohn-Kanade database: not very convincing results ;-)
    I saw another method implemented in JavaScript with a CLM tracker. Is it possible to determine the emotion from the OpenIMAJ CLMFaceTracker results? If so, how?

    Thanks for your reply.

    Nicolas

    PS: I saw you have an Odroid U3. Is it powerful enough to run a Java/OpenIMAJ facial recognition program on a webcam stream?

     

    Last edit: npeltier 2015-02-08
  • npeltier

    npeltier - 2015-02-21

    Hi,

    I finally found the emotion recognition sample with the CLMFaceTracker here. I hadn't seen it! ;-)

    Nicolas

     
