dclib-devel Mailing List for dlib C++ Library (Page 3)
Brought to you by: davisking
Archive activity (number of messages per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2007 | | | | | | | | | | | 6 | |
| 2008 | | | | 2 | | | 7 | | | 7 | 3 | |
| 2009 | 4 | 3 | 4 | 3 | 9 | 5 | 7 | 2 | 1 | 1 | | 1 |
| 2010 | 1 | | | 2 | | | | | 1 | | | |
| 2011 | 1 | | | 1 | | | | | | | | |
| 2013 | 3 | 2 | | | | | | | | | 2 | |
| 2014 | | | | | 8 | | | | | | | |
| 2015 | | 4 | 2 | | | | | | | | | |
| 2016 | | 5 | | | | | 2 | | | | | 5 |
| 2017 | | | 2 | | | | | 10 | | 12 | 1 | 20 |
| 2018 | 11 | 10 | 8 | | 8 | 2 | 1 | | | | 1 | 1 |
| 2019 | | | | | 4 | | | | | | | |
| 2022 | | | | | | | 1 | | | | | |
From: Eloi Du B. <elo...@gm...> - 2017-12-08 22:06:55

It can have a very low influence when making an autoencoder, as you would have a 7-bit input instead of an 8-bit one, if I'm right. It's minor, I agree.
From: Davis K. <dav...@gm...> - 2017-12-08 22:03:22

It doesn't really matter. Either is fine. Maybe 255 would have been a more harmonious choice. Oh well :)
From: Eloi Du B. <elo...@gm...> - 2017-12-08 21:49:10

Hi,

Digging in the code I found this, which struck me as curious (input.h, line 105):

*p = (temp.red-avg_red)/256.0; p += offset;
*p = (temp.green-avg_green)/256.0; p += offset;
*p = (temp.blue-avg_blue)/256.0; p += offset;

As the input goes from 0 to 255, wouldn't it be better to divide by 255? I'm not sure it has a big influence, I just wanted to point it out.

Bests,
Eloi.
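To make the comparison concrete, here is a minimal, self-contained sketch of the two scalings being discussed (plain C++, not the dlib source itself; the per-channel mean is an assumed value for illustration, not necessarily the one in input.h):

#include <iostream>
#include <dlib/pixel.h>

int main()
{
    // A pixel at the top of the 8-bit range.
    dlib::rgb_pixel px(255, 255, 255);

    // Assumed per-channel mean for the red channel, for illustration only.
    const float avg_red = 122.78f;

    // Behaviour discussed in the thread: divide by 256.
    float r256 = (px.red - avg_red) / 256.0f;
    // Proposed alternative: divide by 255, so a full-scale channel spans the
    // whole unit range after the mean shift.
    float r255 = (px.red - avg_red) / 255.0f;

    std::cout << "divide by 256: " << r256 << "\n"
              << "divide by 255: " << r255 << "\n";
    return 0;
}

The two outputs differ by well under one percent, which is consistent with the "very low influence" assessment above.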
From: Davis K. <dav...@gm...> - 2017-12-07 10:38:20

Look in the setup() function of each layer. You can see how it's configured there.
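As a minimal sketch of how one could check this from code rather than reading each setup(): build a tiny net, push one sample through it so the parameters get allocated, then look at their statistics. The layer sizes and the loss layer below are arbitrary choices for illustration; the actual initial values are whatever each layer's setup() produces.

#include <iostream>
#include <dlib/dnn.h>

int main()
{
    // A tiny network: 32 inputs -> fc(16) -> fc(1), with a mean squared loss.
    using net_type = dlib::loss_mean_squared<dlib::fc<1,
                     dlib::fc<16,
                     dlib::input<dlib::matrix<float>>>>>;
    net_type net;

    dlib::matrix<float> sample(32, 1);
    sample = 0;
    net(sample);  // the first forward pass triggers setup() on every layer

    // Inspect the parameters the fc<16> layer was initialized with.
    const dlib::tensor& w = net.subnet().subnet().layer_details().get_layer_params();
    std::cout << w.size() << " parameters, mean " << dlib::mean(dlib::mat(w))
              << ", min " << dlib::min(dlib::mat(w))
              << ", max " << dlib::max(dlib::mat(w)) << "\n";
    return 0;
}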
From: Eloi Du B. <elo...@gm...> - 2017-12-07 04:36:03

Hi,

I would like to know: when creating and training a neural net, what weight initialization strategy is used by default?

Thanks,
From: Davis K. <dav...@gm...> - 2017-12-03 12:14:08

Try it and see. It's probably not good.
From: Eloi Du B. <elo...@gm...> - 2017-12-03 02:28:42

Is there a particular reason not to set the weight to 1? I made my own ignore_dropout layer, which basically takes a dropout layer in its constructor and does nothing, but I'm not sure this is right. I'm guessing that after a large number of iterations, ignoring the dropouts should be fine, as the overall network should be trained equally.
From: Davis K. <dav...@gm...> - 2017-12-03 02:10:18

It's setting the weights to 0.5.
From: Davis K. <dav...@gm...> - 2017-12-03 02:09:55

Just use the = operator to assign one net to another. It will do the right thing.
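For what it's worth, a minimal sketch of that advice (the architecture is made up for illustration; the point is only that the deployment net declares multiply where the training net declares dropout, and the assignment copies the weights across):

#include <dlib/dnn.h>

// Training-time architecture: uses dropout.
using train_net_t  = dlib::loss_multiclass_log<dlib::fc<10,
                     dlib::dropout<dlib::relu<dlib::fc<128,
                     dlib::input<dlib::matrix<float>>>>>>>;

// Deployment architecture: the same layers, but multiply where dropout was
// (and, in the same spirit, affine where bn_con/bn_fc were).
using deploy_net_t = dlib::loss_multiclass_log<dlib::fc<10,
                     dlib::multiply<dlib::relu<dlib::fc<128,
                     dlib::input<dlib::matrix<float>>>>>>>;

int main()
{
    train_net_t trained;    // in practice this would come out of a dnn_trainer
    deploy_net_t deployed;
    deployed = trained;     // "just use the = operator", per the reply above
    return 0;
}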
From: Eloi Du B. <elo...@gm...> - 2017-12-03 01:42:23

Additional question: if I replace the dropouts by a multiply layer and then copy my network, does this set the dropout weights to 1? I mean, does it allow me to use my full network's weights instead of cancelling some?

Thanks,
From: Eloi Du B. <elo...@gm...> - 2017-12-03 01:32:50

Ok, sorry: affine for bn_con and multiply for dropouts is what I was looking for.

Thanks,
From: Eloi Du B. <elo...@gm...> - 2017-12-03 01:29:12

Mmmh, I see, thanks.

Is there a helper function to copy between two nets that have different architectures? Like bn_con on the training net and affine on the production net, or dropouts on one and the dropouts removed on the production net? I guess a solution is to use direct access to the right layers and copy them one by one, but that is a bit of a pain.

Thanks
From: Eloi Du B. <elo...@gm...> - 2017-12-03 01:25:06

Ah ok!! That's actually good news; I was stuck at 80% accuracy with the dropouts, so now I should get something pretty nice.

Many thanks!
From: Davis K. <dav...@gm...> - 2017-12-03 01:25:03

It's about making the network do what you want. You have to think about what you want to do. Do you want to do batch normalization all the time? Maybe you do, but in most cases that isn't what you want when you are really using a model because it does something weird to the data that only really makes sense during training.
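A small illustration of that "something weird" (plain C++, not dlib code): with batch normalization left active at inference, a sample's output depends on whatever else happens to be in its batch, whereas an affine layer applies a fixed, learned scale and shift and always maps the same input to the same output.

#include <cmath>
#include <iostream>
#include <vector>

// Normalize a batch with its own mean and variance, the way batch norm
// behaves in training mode.
std::vector<float> batch_norm(const std::vector<float>& batch)
{
    float mean = 0, var = 0;
    for (float v : batch) mean += v;
    mean /= batch.size();
    for (float v : batch) var += (v - mean) * (v - mean);
    var /= batch.size();
    std::vector<float> out;
    for (float v : batch) out.push_back((v - mean) / std::sqrt(var + 1e-5f));
    return out;
}

int main()
{
    // The same sample (5.0) lands in two different batches and gets two
    // different outputs.
    std::cout << batch_norm({5.f, 1.f, 2.f})[0]     << " vs "
              << batch_norm({5.f, 100.f, 200.f})[0] << "\n";
    return 0;
}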
From: Davis K. <dav...@gm...> - 2017-12-03 01:23:35

The dropout layers always do dropout. So if you don't want them to do that then you have to replace them with something else, which is generally what you want to do.
From: Eloi Du B. <elo...@gm...> - 2017-12-03 01:14:09

Hi,

I am wondering: I see in the docs that after training, I should replace bn_con and bn_fc with affine layers. Is that only for speed, or does it influence the result when doing a forward pass?

Thanks,
From: Eloi Du B. <elo...@gm...> - 2017-12-03 01:12:29

Hi,

I am wondering: in the documentation, I see that you recommend removing the dropout layers after training. Does that only influence speed, or are the dropouts applied in the forward pass?
From: Eloi Du B. <elo...@gm...> - 2017-11-03 18:55:46

Hi,

I have a matrix defined as follows: M = [nr*nc][nr*nc]. More clearly, two patches of 16x16 pixels. Now, in my neural network, I'd like to have two branches working on subsets of the input matrix, meaning on each patch separately, and then concat the results. I think I have to use the inception pattern, but I'm not sure how. I'm tempted to do something like:

template<int Index, class MatInputT>
using FeatureExtractNetT = dlib::relu<dlib::fc<8, dlib::extract<Index, 1, kPatchDiam, kPatchDiam, MatInputT>>>;
template<class MatInputT>
using FeatureExtractNetT0 = FeatureExtractNetT<0, MatInputT>;
template<class MatInputT>
using FeatureExtractNetT1 = FeatureExtractNetT<1, MatInputT>;
template<class MatInputT>
using NetT = dlib::my_loss<dlib::fc<1, dlib::relu<dlib::fc<16, dlib::inception2<FeatureExtractNetT0, FeatureExtractNetT1, dlib::input<MatInputT>>>>>>;

But this throws an exception:

Error detected at line 620.
Error detected in file d:\_dev\3rdparties\dlib\build-gpu\installrelease\include\dlib\dnn/tensor.h.
Error detected in function class dlib::alias_tensor_instance __cdecl dlib::alias_tensor::operator ()(class dlib::tensor &,unsigned __int64) const.
Failing expression was offset+size() <= t.size().

Not sure why... Thanks for any help,
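The thread doesn't show the resolution, but one plausible reading of that assertion is worth sketching: extract_'s first template argument is an offset in elements into each input sample (see layers_abstract.h), so two side-by-side 16x16 patches would start at offsets 0 and 16*16 rather than 0 and 1. Something along these lines, where kPatchDiam follows the message above, the other names are made up, and loss_mean_squared only stands in for the custom my_loss:

#include <dlib/dnn.h>

constexpr long kPatchDiam = 16;

// One branch per patch: the offset is counted in elements of the flattened
// input sample, so the second patch starts kPatchDiam*kPatchDiam elements in.
template <long Offset, class SUBNET>
using PatchBranchT = dlib::relu<dlib::fc<8,
                     dlib::extract<Offset, 1, kPatchDiam, kPatchDiam, SUBNET>>>;

template <class SUBNET> using PatchBranch0 = PatchBranchT<0, SUBNET>;
template <class SUBNET> using PatchBranch1 = PatchBranchT<kPatchDiam * kPatchDiam, SUBNET>;

// loss_mean_squared is only a placeholder for the custom my_loss layer.
using NetT = dlib::loss_mean_squared<dlib::fc<1, dlib::relu<dlib::fc<16,
             dlib::inception2<PatchBranch0, PatchBranch1,
             dlib::input<dlib::matrix<float>>>>>>>;

int main()
{
    NetT net;  // the offsets must also satisfy
               // offset + k*nr*nc <= (elements per input sample) at runtime
    return 0;
}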
From: Davis K. <dav...@gm...> - 2017-10-23 09:52:30

Thanks, glad you like dlib :)
From: Eloi Du B. <elo...@gm...> - 2017-10-23 00:10:24

Ok, a thousand thanks, your lib is very pleasing :)
From: Davis K. <dav...@gm...> - 2017-10-23 00:09:05

Yes: http://dlib.net/dlib/dnn/layers_abstract.h.html#extract_
From: Eloi Du B. <elo...@gm...> - 2017-10-23 00:07:06

Ok, thank you for your help. Is there any reshape layer that I could use?
From: Davis K. <dav...@gm...> - 2017-10-23 00:04:22

Well, I don't know what paper you are following. Maybe it's the most convenient way. But you probably want to use the dnn_trainer to train multiple steps at a time when optimizing each of the subnets. But maybe not.

Anyway, the fc layer outputs have 1 row and 1 column but k channels. That's why you get only 16 outputs. This is all in the documentation for the layers.
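A short sketch of that size arithmetic, using the cont2/GeneratorT definitions from the question below: fc<8> produces a 1x1 spatial output with 8 channels, and each cont<1,1,2,1,2,...> layer doubles nc() while collapsing the channels down to its single filter, so four of them end at 1 row x 16 columns x 1 channel, i.e. 16 values.

#include <iostream>
#include <dlib/dnn.h>

template <class SUBNET> using cont2 = dlib::cont<1, 1, 2, 1, 2, SUBNET>;
template <class SUBNET> using GeneratorT = cont2<cont2<cont2<cont2<dlib::fc<8, SUBNET>>>>>;

int main()
{
    GeneratorT<dlib::input<dlib::matrix<float>>> net;

    dlib::matrix<float> z(32, 1);  // one dummy 32-element input
    z = 0;

    // Run one forward pass and look at the output tensor's shape.
    const dlib::tensor& out = net(z);
    std::cout << "k=" << out.k() << " nr=" << out.nr() << " nc=" << out.nc()
              << " -> " << out.k() * out.nr() * out.nc() << " values per sample\n";
    // Expected: k=1 nr=1 nc=16 -> 16 values, matching what the question reports.
    return 0;
}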
From: Eloi Du B. <elo...@gm...> - 2017-10-22 23:58:11

I'm sorry, I'm not sure I see how I can do it another way.

I think I'm almost done training a Gaussian function; the only problem I have right now is that this network:

// A 1 filter, 2x1 transposed conv layer that does 2x upsampling
template<class SUBNET>
using cont2 = dlib::cont<1, 1, 2, 1, 2, SUBNET>;
template<class SUBNET>
using GeneratorT = cont2<cont2<cont2<cont2<dlib::fc<8, SUBNET>>>>>; // This will take an input of size 32 and generate an output of 128

is generating vectors of 16. I'm surprised, as I put a fully connected layer on the input that outputs 8 values; then everything is multiplied by 2 going through the cont layers. It should be 128, no? How can this happen? Is there something I'm missing, or is there something else?

Thanks,
From: Davis K. <dav...@gm...> - 2017-10-22 12:48:02

It does it for you. Really though, I don't see why you would need to call the backward function yourself considering the task you have outlined.