User Activity

  • Posted a comment on discussion Open Discussion on dlib C++ Library

    Maybe I found the key point. When I flipped the landmarks, I didn't change their order. (The '00' point in the original image may become the '16' point in the flipped image.)
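The re-ordering mentioned in this comment can be sketched with the usual iBUG 68-point symmetry pairs. This is a minimal sketch, not the poster's actual code; verify the pairs against your own annotation scheme before training.

```python
# Sketch: re-index iBUG 68-point landmarks after a horizontal flip.
# Pairs of indices that swap under a left-right mirror (usual iBUG numbering).
SWAP_PAIRS = (
    [(i, 16 - i) for i in range(8)]                     # jaw: 0..16 mirrors onto 16..0
    + [(17 + i, 26 - i) for i in range(5)]              # eyebrows: 17..21 <-> 26..22
    + [(31, 35), (32, 34)]                              # lower nose
    + [(36, 45), (37, 44), (38, 43), (39, 42), (40, 47), (41, 46)]  # eyes
    + [(48, 54), (49, 53), (50, 52), (55, 59), (56, 58)]            # outer lip
    + [(60, 64), (61, 63), (65, 67)]                                # inner lip
)

# Build the full index map; points on the symmetry axis map to themselves.
FLIP_MAP = list(range(68))
for a, b in SWAP_PAIRS:
    FLIP_MAP[a], FLIP_MAP[b] = b, a

def flip_landmarks(points, image_width):
    """Mirror (x, y) landmarks horizontally AND restore the point order."""
    mirrored = [(image_width - 1 - x, y) for x, y in points]
    return [mirrored[FLIP_MAP[i]] for i in range(len(points))]
```

Without the `FLIP_MAP` re-indexing step, the mirrored point that used to be index 0 would still be stored at index 0, which is exactly the ordering mistake described above.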

  • Posted a comment on discussion Open Discussion on dlib C++ Library

    I flipped every image in the training data along with its landmarks, and I checked some samples by comparing them with your shared XML files. I used the same training code to train the different models; the only difference was the XML file used.

  • Posted a comment on discussion Open Discussion on dlib C++ Library

    I tried to train a shape predictor using the 300W dataset (the same images as dlib's shape_predictor_68_face_landmarks). Since bounding boxes weren't given, I used dlib's face detector to find the box nearest to the landmarks, and ignored the samples where the face detector did not detect any face. I compared the final XML file with the one you shared at http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz . The labels are close. I used the same parameters as described in the paper. The resulting model got...
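The box-matching step described in this comment can be sketched without dlib: keep the detection that best overlaps the tight bounding box of the annotated landmarks, and drop the sample if nothing matches. In practice the candidate boxes would come from dlib's `get_frontal_face_detector()`; the 0.3 IoU threshold here is an assumption, not the poster's value.

```python
# Sketch: pick the detector box that best matches the annotated landmarks.
# Boxes are (left, top, right, bottom) tuples.

def landmark_bbox(points):
    """Tight bounding box around a list of (x, y) landmark points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def iou(a, b):
    """Intersection-over-union of two (l, t, r, b) boxes."""
    il, it = max(a[0], b[0]), max(a[1], b[1])
    ir, ib = min(a[2], b[2]), min(a[3], b[3])
    if ir <= il or ib <= it:
        return 0.0
    inter = (ir - il) * (ib - it)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def match_box(detections, points, min_iou=0.3):
    """Return the detection best overlapping the landmarks, or None
    (samples with no usable detection get skipped, as described above)."""
    target = landmark_bbox(points)
    best = max(detections, key=lambda d: iou(d, target), default=None)
    if best is None or iou(best, target) < min_iou:
        return None
    return best
```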

  • Posted a comment on discussion Open Discussion on dlib C++ Library

    Thank you for helping solve how to prepare the .xml file. But I have another question. When I prepared a big XML file (about 100,000 images with their bounding boxes and landmarks) to use with 'train_shape_predictor.py', I found the script takes a lot of memory to run. My machine has only 8GB RAM + 8GB virtual memory, and the process gets killed when it runs out of memory. Could you give me some suggestions for dealing with this case?
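As far as I know, dlib's shape predictor trainer loads every image listed in the XML into RAM at once, so one common workaround is to downscale the images before writing the XML, scaling the boxes and landmarks by the same factor. A minimal sketch of the coordinate side of that, assuming a 0.5 scale factor (the factor itself is a choice, not something from this thread):

```python
# Sketch: scale annotations to match downscaled training images, so the
# whole dataset fits in RAM. The images themselves must be resized by the
# same factor before the XML is written.

SCALE = 0.5  # assumed factor; pick the smallest size that keeps faces usable

def scale_box(box, scale=SCALE):
    """Scale a (left, top, width, height) box as used in dlib's training XML."""
    l, t, w, h = box
    return (round(l * scale), round(t * scale),
            round(w * scale), round(h * scale))

def scale_points(points, scale=SCALE):
    """Scale (x, y) landmark coordinates to match the resized image."""
    return [(round(x * scale), round(y * scale)) for x, y in points]
```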

  • Posted a comment on discussion Open Discussion on dlib C++ Library

    You mean I can use dlib's face detector to create the bounding boxes when preparing the training data for train_shape_predictor? But if the face detector's result is wrong or does not match the face landmarks, will it affect the training?

  • Posted a comment on discussion Open Discussion on dlib C++ Library

    I want to use train_shape_predictor to train my own model on another dataset, but its annotations are .pts files without bounding box information. So how did you train your model for the 300W dataset? Were the boxes annotated or generated?


Personal Data

Username:
chengluzhu
Joined:
2018-05-10 11:59:53

Projects

  • No projects to display.
