Edit, adding question 2: how could I access A, C, N in this graph (assume it is a convolution layer with 3 input filters, 64 output filters, and a 3x3 kernel)? I hope this makes the question easier to understand.
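I do not have the figure here, so I can only guess that A, C, N label the layer index and the tensor dimensions of that conv layer. Under that assumption, here is a sketch of how one might inspect the layer at index N of a dlib network; inspect_layer, net_type and net are hypothetical names:

    #include <dlib/dnn.h>
    #include <iostream>

    // Sketch: inspect the conv layer at index N of a dlib network.
    // All names here (inspect_layer, net_type, net) are placeholders.
    template <size_t N, typename net_type>
    void inspect_layer(net_type& net)
    {
        auto& l = dlib::layer<N>(net);             // the add_layer wrapper at index N
        // Output tensor of this layer (valid after a forward pass); with
        // 64 filters, out.k() == 64, and nr()/nc() are the spatial dims.
        const dlib::tensor& out = l.get_output();
        std::cout << out.num_samples() << "x" << out.k() << "x"
                  << out.nr() << "x" << out.nc() << std::endl;
        // Parameter tensor of the con_ object: the 64*3*3*3 = 1728 filter
        // weights come first, then the 64 biases, so size() == 1792.
        const dlib::tensor& p = l.layer_details().get_layer_params();
        std::cout << p.size() << std::endl;
    }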
I am porting the weights of VGG16 to dlib and have some questions.

1: How could I change the weights of each layer? I tried:

    // net is the dlib network; weight and bias are cv::Mat objects
    // holding this layer's VGG16 parameters.
    // out_tensor is just a copy of the tensor
    dlib::resizable_tensor out_tensor = dlib::layer<N>(net).layer_details().get_layer_params();
    float *params = out_tensor.host();
    // the filter weights come first, the biases are appended after them
    std::copy(weight.ptr<float>(0), weight.ptr<float>(0) + weight.total(), params);
    std::copy(bias.ptr<float>(0), bias.ptr<float>(0) + bias.total(), params + weight.total());

The problem: assigning get_layer_params() to a resizable_tensor gives me a copy of the tensor, so writing through out_tensor.host() never changes the weights stored in the network. How can I write the new weights back into the layer?
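Here is a minimal sketch of the fix I have in mind, assuming the non-const overload of get_layer_params() specified by dlib's LAYER interface (layers_abstract.h), which returns a tensor& into the live layer rather than a copy; net, N, weight and bias are the same placeholder names as above.

    #include <dlib/dnn.h>
    #include <algorithm>

    // Bind a reference instead of copying into a resizable_tensor, then
    // write through host(); after this the host copy is the authoritative
    // one and dlib syncs it to the device when needed.
    dlib::tensor& t = dlib::layer<N>(net).layer_details().get_layer_params();
    float* params = t.host();
    std::copy(weight.ptr<float>(0), weight.ptr<float>(0) + weight.total(), params);
    std::copy(bias.ptr<float>(0), bias.ptr<float>(0) + bias.total(), params + weight.total());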
For background, I would like to implement one of the person re-id papers that use deep learning for that.