User Activity

  • Modified a comment on discussion Help on dlib C++ Library

    Hi guys, I still have difficulty understanding what to choose as a stopping condition when I use test_one_step() inside the training loop: trainer.get_steps_without_progress() < MAX_STEPS_WITHOUT_PROGRESS, trainer.get_test_steps_without_progress(), or both? And what is a good value for MAX_STEPS_WITHOUT_PROGRESS, e.g. 1% or 10% of the maximum number of epochs? From dnn_introduction2_ex.cpp.html: // Loop until the trainer's automatic shrinking has shrunk the learning rate to 1e-6. // Given our settings, this means it...
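    A minimal sketch of how the two progress counters could be combined into one stopping rule. This is plain C++, not dlib's API: the helper should_stop and the value chosen for MAX_STEPS_WITHOUT_PROGRESS are illustrative; in dlib the counters would come from the trainer itself.

    ```cpp
    #include <cassert>

    // Hypothetical stopping rule modeled on dlib's dnn_trainer counters:
    // stop when the learning rate has been shrunk below its floor, or when
    // neither the training loss nor the held-out test loss has improved
    // for MAX_STEPS_WITHOUT_PROGRESS consecutive steps.
    constexpr long MAX_STEPS_WITHOUT_PROGRESS = 500;  // illustrative value

    bool should_stop(double learning_rate,
                     long steps_without_progress,       // cf. trainer.get_steps_without_progress()
                     long test_steps_without_progress)  // cf. trainer.get_test_steps_without_progress()
    {
        if (learning_rate < 1e-6)
            return true;  // the condition dnn_introduction2_ex.cpp loops on
        return steps_without_progress >= MAX_STEPS_WITHOUT_PROGRESS
            && test_steps_without_progress >= MAX_STEPS_WITHOUT_PROGRESS;
    }

    int main()
    {
        assert(should_stop(1e-7, 0, 0));      // learning-rate floor reached
        assert(!should_stop(1e-3, 600, 10));  // test loss still improving: keep going
        assert(should_stop(1e-3, 600, 600));  // both counters stalled
        return 0;
    }
    ```

    Whether to gate on the test counter alone or on both is a judgment call; watching the test counter guards against stopping decisions driven purely by training-set noise.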

  • Posted a comment on discussion Help on dlib C++ Library

    Hi guys, I still have difficulty understanding what to choose as a stopping condition when I use test_one_step() inside the training loop: trainer.get_steps_without_progress() < MAX_STEPS_WITHOUT_PROGRESS, trainer.get_test_steps_without_progress(), or both? // Loop until the trainer's automatic shrinking has shrunk the learning rate to 1e-6. // Given our settings, this means it will stop training after it has shrunk the // learning rate 3 times. while(trainer.get_learning_rate() >= 1e-6) { mini_batch_samples.clear();...

  • Posted a comment on discussion Help on dlib C++ Library

    Sorry, I've found out that the normalizer can be easily serialized and deserialized.

  • Posted a comment on discussion Help on dlib C++ Library

    Hello @Davis, I use a simple dnn_trainer to perform a classification (<100 features, 3 labels). This is necessary in my case because the size of the neural network is determined dynamically for each user on the different systems. At the same time, I limit the maximum duration of training via step-by-step training (train_one_step). This works fine, but now I want to improve the normalization and save the normalization vector together with the DNN. Is there a way (similar to http://dlib.net/krr_classification_ex.cpp.html)...
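    As the later comment notes, dlib's normalizer can be serialized and deserialized, so it can live in the same file as the trained network. A stand-alone sketch of that round-trip idea follows; the Normalizer struct and its save/load methods are illustrative stand-ins, not dlib's vector_normalizer API.

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <sstream>
    #include <vector>

    // Hypothetical stand-in for a per-feature z-score normalizer: stores
    // mean and standard deviation per feature and round-trips through a
    // stream, so it can be stored alongside the trained network.
    struct Normalizer {
        std::vector<double> mean, stddev;

        void save(std::ostream& out) const {
            out << mean.size() << '\n';
            for (std::size_t i = 0; i < mean.size(); ++i)
                out << mean[i] << ' ' << stddev[i] << '\n';
        }
        void load(std::istream& in) {
            std::size_t n; in >> n;
            mean.assign(n, 0); stddev.assign(n, 0);
            for (std::size_t i = 0; i < n; ++i)
                in >> mean[i] >> stddev[i];
        }
        double normalize(std::size_t i, double x) const {
            return (x - mean[i]) / stddev[i];
        }
    };

    int main() {
        Normalizer n;
        n.mean   = {1.0, 2.0};
        n.stddev = {0.5, 4.0};

        std::stringstream file;  // stands in for the model file on disk
        n.save(file);

        Normalizer loaded;
        loaded.load(file);

        assert(std::fabs(loaded.normalize(0, 2.0) - 2.0) < 1e-12);  // (2-1)/0.5
        assert(std::fabs(loaded.normalize(1, 2.0) - 0.0) < 1e-12);  // (2-2)/4
        return 0;
    }
    ```

    The key point is that the normalization parameters must be persisted with the model, because applying a network to unnormalized (or differently normalized) inputs silently degrades its predictions.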

  • Posted a comment on discussion Help on dlib C++ Library

    Thanks a lot @davisking :)

  • Modified a comment on discussion Help on dlib C++ Library

    Hi guys, I want to create a neural network, but the topology is known only at runtime. So I looked at the examples for setting num_outputs. This works very well for layers without an activation function, but I need to customize them all. using FFN = dlib::loss_multiclass_log< dlib::fc<3, dlib::sig<dlib::fc<5, dlib::sig<dlib::fc<5, dlib::input<dlib::matrix<float>>>>>>>>; FFN net(dlib::num_fc_outputs(iNumberOfLabels)); // only first layer??? // or dlib::layer<1>(net).layer_details().set_num_outputs(numberOfLabels);...
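    Independent of dlib's template API, the underlying idea in the question — sizing every layer from values known only at runtime — can be sketched in plain C++. Everything below (Layer, build_mlp, the widths chosen) is illustrative and is not dlib code.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Illustrative runtime-configurable MLP skeleton (not dlib): each layer's
    // weight matrix is sized from a vector of layer widths supplied at
    // runtime, the way the poster's topology is only known per user.
    struct Layer {
        std::size_t inputs, outputs;
        std::vector<double> weights;  // outputs x inputs, row-major
    };

    std::vector<Layer> build_mlp(const std::vector<std::size_t>& widths)
    {
        std::vector<Layer> layers;
        for (std::size_t i = 0; i + 1 < widths.size(); ++i)
            layers.push_back({widths[i], widths[i + 1],
                              std::vector<double>(widths[i] * widths[i + 1], 0.0)});
        return layers;
    }

    int main()
    {
        const std::size_t numberOfLabels = 3;  // known only at runtime
        // 100 input features -> 5 -> 5 -> numberOfLabels,
        // mirroring the fc<5> / fc<5> / fc<3> chain in the question.
        auto net = build_mlp({100, 5, 5, numberOfLabels});

        assert(net.size() == 3);
        assert(net.back().outputs == numberOfLabels);
        assert(net[0].weights.size() == 100 * 5);
        return 0;
    }
    ```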

Personal Data

Username: fohlen
Joined: 2019-06-03 08:37:20

Projects

  • No projects to display.

Personal Tools

MongoDB