I'm working on a layout system for my GUI library. I have a set of widgets; each has a min size, a max size and a preferred size. So the input is those three sets of values plus a single number, the target size (e.g. for a horizontal layout it's the total width of all the widgets). The output is a set of sizes, one per widget (e.g. widths for a horizontal layout).
So I wrote a pretty simple function that has its global minimum when its input equals the best possible widget widths:
where deviationFromPreferredSize is essentially itemSizes - preferredSizes, except that it returns 0 when preferredSize is 0 (which means "no preference"), and it penalizes negative values by multiplying them by 4 (because it is usually better to stretch widgets than to compact them too much).
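The code block from the original post didn't survive in this copy of the thread, so here is a rough reconstruction of the objective as described above. This is only a sketch: the names (`targetSizeFunction`, `itemSizes`, `preferredSizes`) are taken or guessed from the prose, and the exact weighting in the real project may differ.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical reconstruction of the objective described above; the
// real code was not quoted in the thread.  Each widget's deviation from
// its preferred size is 0 when preferredSize is 0 ("no preference"),
// and shrinking below the preferred size is penalized 4x harder than
// stretching.  The deviation of the total from the target size is
// added on top.
double targetSizeFunction(const std::vector<double>& itemSizes,
                          const std::vector<double>& preferredSizes,
                          double targetSize)
{
    double cost = 0.0;
    double total = 0.0;
    for (std::size_t i = 0; i < itemSizes.size(); ++i)
    {
        total += itemSizes[i];
        if (preferredSizes[i] == 0.0)   // 0 means "no preference"
            continue;
        double deviation = itemSizes[i] - preferredSizes[i];
        if (deviation < 0.0)            // compacting is worse than stretching
            deviation *= 4.0;
        cost += std::fabs(deviation);   // the original used fabs() here
    }
    cost += std::fabs(total - targetSize);  // match the layout's target size
    return cost;
}
```

With this shape, compacting a widget by 10 costs 40 while stretching it by 10 costs only 10, which matches the stated preference for stretching over compacting.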
And then I'm searching for a minimum of this function:
It works very well at first glance, but as I keep testing (and building more complex layouts) I find more and more cases where it yields clearly poor results, for which targetSizeFunction returns large numbers (like 100000), even though I know the function has a minimum at 0, or at least extremely close to 0. I think that whenever I'm unlucky with the starting point (I pick it with heuristics, but it's basically guessing) I get stuck at some local minimum instead of the global one.
This is probably more of a general math / numerical methods question than a dlib question, but is there any trick that would help me find the global minimum reliably?
find_min_box_constrained() assumes the objective function is smooth, but yours isn't (fabs() has a kink at 0). However, I don't see why you can't make it smooth. For example, why use fabs() in the calculation of sizeDeviation when you could return the squared error? If you did that, and maybe supplied a function for the derivative rather than numerically approximating it, then it would probably work.
It should also be noted that, unless I'm misunderstanding your code, your objective function has only one minimum (because it's convex). So the issue is probably due to the combination of an inaccurate derivative and non-differentiability. If you fixed those issues then it should work fine.
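Concretely, the suggested fix might look like the following sketch (hypothetical names, not the project's actual code). Squaring makes the deviation smooth at zero, and folding the 4x compacting factor into a weight of 16 preserves the asymmetry, since (4d)^2 = 16d^2; the exact derivative is then trivial to write down.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the squared-error deviation for one widget (hypothetical
// names).  The function and its derivative are both continuous: at
// d == 0 the left and right derivatives are both 0, so a gradient-based
// solver no longer sees a kink at the preferred size.
double smoothDeviation(double itemSize, double preferredSize)
{
    if (preferredSize == 0.0)           // 0 still means "no preference"
        return 0.0;
    double d = itemSize - preferredSize;
    double w = (d < 0.0) ? 16.0 : 1.0;  // (4*d)^2 == 16*d^2
    return w * d * d;
}

// Exact derivative of smoothDeviation with respect to itemSize.
double smoothDeviationGrad(double itemSize, double preferredSize)
{
    if (preferredSize == 0.0)
        return 0.0;
    double d = itemSize - preferredSize;
    double w = (d < 0.0) ? 16.0 : 1.0;
    return 2.0 * w * d;
}
```

A gradient built from terms like `smoothDeviationGrad` can then be passed to the solver in place of the finite-difference approximation, addressing the inaccurate-derivative issue as well.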
Thanks, squaring instead of using fabs helped with my particular case. I'm not sure how robust the function really is now, though.
Dlib looks like a mature and popular library; has no one ever contributed alternative optimization algorithms to it? I understand the optimizers are probably not dlib's most used module, but still.
If you make the function convex (http://en.wikipedia.org/wiki/Convex_function) and differentiable then it should always work, which is what I think you just did, if I understand what you are describing correctly.

There are some solvers people have contributed. But most people don't use the solvers directly; rather, they use the tools built on top of them, like the machine learning components.
I think a simulated annealing optimization would suit my case well. Is there an implementation in dlib? Any stochastic methods at all?
Last edit: VG 2015-02-17