User Activity

  • Posted a comment on discussion Help on dlib C++ Library

    Added question 2: how could I access A, C, N in this graph (assume it is a convolution layer with 3 input filters, 64 output filters, and a 3×3 kernel)? I hope this makes the question easier to understand.

  • Modified a comment on discussion Help on dlib C++ Library

    Porting the weights of VGG16 to dlib and have some questions. 1: How could I change the weights of each layer?

        // out_tensor is just a copy of the tensor
        dlib::resizable_tensor out_tensor = dlib::layer<N>(output).layer_details().get_layer_params();
        float *output = out_tensor.host();
        std::copy(weight.ptr<float>(0), weight.ptr<float>(0) + weight.total(), output);
        std::copy(bias.ptr<float>(0), bias.ptr<float>(0) + bias.total(), output + weight.total());

    The problem: get_layer_params() returns a copy of...

  • Posted a comment on discussion Help on dlib C++ Library

    Porting the weights of VGG16 to dlib and got some problems. 1: How could I change the weights of each layer?

        // out_tensor is just a copy of the tensor
        dlib::resizable_tensor out_tensor = dlib::layer<N>(output).layer_details().get_layer_params();
        float *output = out_tensor.host();
        std::copy(weight.ptr<float>(0), weight.ptr<float>(0) + weight.total(), output);
        std::copy(bias.ptr<float>(0), bias.ptr<float>(0) + bias.total(), output + weight.total());

    The problem: get_layer_params() returns a copy of...

  • Modified a comment on discussion Help on dlib C++ Library

    I would implement one of the person re-identification (re-ID) papers that uses deep learning for that....


Personal Data

Username:
thamngapwei
Joined:
2015-09-04 04:32:05

Projects

This is a list of open source software projects that thamngapwei is associated with:
