From: Kyunghoon L. <aer...@gm...> - 2012-07-17 02:25:46
Hi David,

Thanks for the explanation. I did a little experiment by lowering training_tolerance, and found that the greedy process terminated because it chose the same parameters even before getting near the specified training_tolerance. For instance, with reduced_basis_ex5 and training_tolerance = 1e-05, I got the following (the outputs are tip displacements in x and y):

    use_relative_bound_in_greedy = true
    ---- Basis dimension: 9 ----
    Performing RB solves on training set
    Maximum (relative) error bound is 0.000486633
    Exiting greedy because the same parameters were selected twice
    truth output[0] = 22.7201
    truth output[1] = -36397
    rb output[0] = 22.7196 +/- 3.48492
    rb output[1] = -36397 +/- 69.9526

When I lowered training_tolerance to 1e-10, I got the following:

    use_relative_bound_in_greedy = true
    ---- Basis dimension: 8 ----
    Performing RB solves on training set
    Maximum (relative) error bound is 0.00181167
    Exiting greedy because the same parameters were selected twice
    truth output[0] = 22.7201
    truth output[1] = -36397
    rb output[0] = 22.7209 +/- 2.24415
    rb output[1] = -36397 +/- 45.0467

As you can see, I end up with similar results despite the two different training tolerances. Because of this early termination, I cannot obtain a finely tuned RB model with very low error bounds. Could you offer any advice?

Best,
K. Lee.

On Mon, Jul 16, 2012 at 8:14 PM, David Knezevic <dkn...@se...> wrote:
> Hi K,
>
> I changed the code so that (by default) it terminates if the greedy
> chooses the same parameters twice, because that should not happen. (It
> typically happens when the error bound is dominated by rounding error.)
> I think the issue here is just that your training_tolerance is too low.
>
> David
>
> On 07/16/2012 08:10 AM, Kyunghoon Lee wrote:
> > Hi all,
> >
> > I made a few reduced basis models based on the cantilever example,
> > reduced_basis_ex5, and I found my models always terminate the greedy
> > training early, before satisfying either Nmax or training_tolerance.
> > Despite increasing n_training_samples, I keep getting
> >
> >     Exiting greedy because the same parameters were selected twice
> >
> > and I cannot find what causes the problem. I'd appreciate any
> > suggestions.
> >
> > Regards,
> > K. Lee.
> >
> > ------------------------------------------------------------------------------
> > Live Security Virtual Conference
> > Exclusive live event will cover all the ways today's security and
> > threat landscape has changed and how IT managers can respond. Discussions
> > will include endpoint security, mobile security and the latest in malware
> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
> > _______________________________________________
> > Libmesh-users mailing list
> > Lib...@li...
> > https://lists.sourceforge.net/lists/listinfo/libmesh-users
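For readers trying to reproduce the experiment above: the greedy-training parameters discussed in this thread are typically read from the example's input file. A hypothetical GetPot-style fragment is sketched below; the parameter names are taken from the messages above, but the values are purely illustrative, not the ones used by the poster.

```
# Hypothetical RB greedy-training input fragment (names from the thread,
# values illustrative only).
Nmax = 20                             # upper bound on the basis dimension
n_training_samples = 100              # size of the greedy training set
training_tolerance = 1e-5             # stop once the max error bound falls below this
use_relative_bound_in_greedy = true   # use the relative (not absolute) error bound
```

As the thread notes, setting training_tolerance below what the error bound can reach (it is eventually dominated by rounding error) causes the greedy loop to pick the same parameters twice and exit early, regardless of how small the tolerance is.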