From: Adam Smith <amsmith@cs...>  2013-01-04 06:51:23

Does anyone have a concise explanation of what --opt-heu does? I'm interested to know under which conditions I can predict clasp's first guess in optimization problems.

Consider the following two programs. Each asks for a way to roll 100 dice so that the sum of the faces is minimized. With some quick mental analysis, 100 is clearly the optimal answer (rolling a 1 for each die).

Here's the first program:

die(1..100).
face(1..6).
1 { roll(D,F):face(F) } 1 :- die(D).
#minimize [ roll(D,F)=F ].

For this, "clingo 1 --stats" finds the opt=100 solution on the first try. Well, after 500 (why 500?) choices, it finds a solution without hitting any conflicts.

Here's the second program: (F -> 7-F basically numbers the faces backwards)

die(1..100).
face(1..6).
1 { roll(D,F):face(F) } 1 :- die(D).
#minimize [ roll(D,F)=7-F ].

With and without "--opt-heu", clingo's first guess is actually the worst (opt=600). Interestingly, picking vsids instead of berkmin as the heuristic finds opt=101 as the first solution (almost 100, but not quite). I was hoping "--opt-heu" would steer the solver here with some greedily-pick-the-best-option strategy, but it must be doing something else. What's its intent?
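[Editorial note, not part of the original thread: the objective extremes quoted above (100 best, 600 worst, with the optimal assignment flipped from all-1s to all-6s under the 7-F weighting) can be sanity-checked with a few lines of Python, since each die contributes its face's weight independently.]

```python
# Sketch: objective range for 100 independent dice under the two
# weightings from the programs above. Best/worst totals are just
# 100 times the cheapest/most expensive single face.
DICE = 100
faces = range(1, 7)

for name, weight in [("roll(D,F)=F", lambda f: f),
                     ("roll(D,F)=7-F", lambda f: 7 - f)]:
    best = DICE * min(weight(f) for f in faces)
    worst = DICE * max(weight(f) for f in faces)
    print(f"{name}: best={best}, worst={worst}")
# Both weightings give best=100 and worst=600; only the face
# achieving the optimum differs (all 1s vs. all 6s).
```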
From: Martin <mjb@cs...>  2013-01-04 10:51:32

On Thu, 2013-01-03 at 22:43 -0800, Adam Smith wrote:
> Does anyone have a concise explanation of what --opt-heu does? I'm
> interested to know under which conditions I can predict clasp's first
> guess in optimization problems.
<snip>
> I was hoping "--opt-heu" would steer the solver here with some
> greedily-pick-the-best-option strategy, but it must be doing something
> else. What's its intent?

My understanding (and I haven't read the code, so am likely wrong) is that --opt-heu uses information from the objective function in the branching heuristic, optionally to pick the literal, or the phase, or both. What information and how it is used -- I'd have to look at the source. My /suspicion/ is that it's not a very big factor, as VSIDS and Berkmin are rather sensitive and making a big change would have a major effect on solver performance. IIRC their default phases (branch true or branch false) are different, which is probably why you get the 600/101. I'd try with --save-progress (or whatever the phase-saving option is) and --restart-on-model and see what the convergence is like.

I think the key issue is that the solver doesn't know where the problem sits between pure combinatorial (only one solution) and pure optimisation (no constraints), and so the heuristic has to work across the spectrum. Greedy optimisation only works at the pure optimisation end, while VSIDS (with some heuristic awareness) seems to work across the range. I guess the challenge is to work out how to dynamically adjust the amount of use that is made (maybe based on the rate of solutions found to conflicts hit) or (easier) to do a couple of initial greedy passes to reduce the optimisation bound.

I guess 'CDCL with optimisation' is a relatively unstudied problem, but there are probably other interesting aspects, such as needing different clause learning / pruning strategies, etc.

Cheers,
 - Martin
From: Benjamin Kaufmann <kaufmann@cs...>  2013-01-04 10:53:39

Hi,

> Does anyone have a concise explanation of what --opt-heu does? I'm
> interested to know under which conditions I can predict clasp's first guess
> in optimization problems.

--opt-heu (i.e. --opt-heu=1) enables a static sign heuristic. It *does not* change how/which variables are picked by the main heuristic but simply sets the preferred sign of variables contained in minimize statements s.th. the literal occurring in the minimize statement is first set to false.

> Consider the following two programs. Each asks for a way to roll 100 dice
> so that the sum of the faces is minimized. With some quick mental analysis,
> 100 is clearly the optimal answer (rolling a 1 for each die).
>
> Here's the first program:
>
> die(1..100).
> face(1..6).
> 1 { roll(D,F):face(F) } 1 :- die(D).
> #minimize [ roll(D,F)=F ].
>
> For this, "clingo 1 --stats" finds the opt=100 solution on the first try.
> Well, after 500 (why 500?) choices, it finds a solution without hitting any
> conflicts.
>
> Here's the second program: (F -> 7-F basically numbers the faces backwards)
>
> die(1..100).
> face(1..6).
> 1 { roll(D,F):face(F) } 1 :- die(D).
> #minimize [ roll(D,F)=7-F ].
>
> With and without "--opt-heu", clingo's first guess is actually the worst
> (opt=600). Interestingly, picking vsids instead of berkmin as the heuristic
> finds opt=101 as the first solution (almost 100, but not quite).

First, why the 500 choices: There are 100 1{...}1 rules, each with 6 atoms. By default, clasp's decision heuristic prefers to set atoms to false. Hence, for each rule five atoms are chosen to false and the last one is then derived to true.

Regarding --opt-heu: In both programs, the minimize statement contains positive literals. So, --opt-heu makes the negative literal the preferred one. Given that this is already the default literal for atoms and there are no conflicts influencing the heuristic, --opt-heu actually has no effect.
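[Editorial note, not part of the original thread: Ben's choice count can be spelled out as a one-line calculation.]

```python
# With 100 "1 { ... } 1" rules over 6 atoms each, clasp's default sign
# heuristic assigns each chosen atom false, so each rule costs 5 explicit
# choices before the 6th atom is derived true by the cardinality bound.
dice, faces = 100, 6
choices = dice * (faces - 1)
print(choices)  # 500
```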
The reason for the observed optimize values is accidental: initially, there are no heuristic values for the variables -- they are simply picked in the order defined by the underlying data structure (an array in the case of Berkmin, a heap in the case of VSIDS). Furthermore, in the first example, variables with lower id have higher weight, while in the second example the opposite holds.

As an interesting remark: If you call clasp with "--trans-ext=choice", choices drop to 100 and the observed optimize values are interchanged. In that case, clasp introduces rules and aux-atoms like

roll_aux(100,6) :- not roll(100,6).
roll(100,6) :- not roll_aux(100,6).
...

The heuristic then picks and falsifies the aux atom first so that the corresponding roll() atom is derived to true.

Best regards,
Ben
From: Benjamin Kaufmann <kaufmann@cs...>  2013-01-04 11:03:54

> Consider the following two programs. Each asks for a way to roll 100 dice
> so that the sum of the faces is minimized. With some quick mental analysis,
> 100 is clearly the optimal answer (rolling a 1 for each die).
>
> Here's the first program:
>
> die(1..100).
> face(1..6).
> 1 { roll(D,F):face(F) } 1 :- die(D).
> #minimize [ roll(D,F)=F ].
>
> [...]
>
> Here's the second program: (F -> 7-F basically numbers the faces backwards)
>
> die(1..100).
> face(1..6).
> 1 { roll(D,F):face(F) } 1 :- die(D).
> #minimize [ roll(D,F)=7-F ].

Btw: In both cases, the unsatisfiable-core based approach used in unclasp really helps.

$ gringo dice1.lp | unclasp -q
unclasp version 0.1
Reading from stdin
OPTIMUM FOUND

Models      : 1
Enumerated  : 2
Optimum     : yes
Optimization: 100
Time        : 0.000s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)

Regards,
Ben
From: Adam Smith <amsmith@cs...>  2013-01-05 00:35:48

Thanks for the explanations, guys. I can see how "--opt-heu" was having a much more minor effect than I was thinking.

Ben, your suggestion of using unclasp is very interesting. I wasn't thinking I needed unclasp unless I was interested in seeing the unsat core myself, but I can see how it would help quickly prove infeasibility of anything better than a score of 100.

Related to optimization, what bugged me more about the examples I gave was that, even when it guessed a score=100 solution without a single conflict, it would still grind forever trying to prove there was nothing better. Applying a trick I just learned from the new book (congrats, btw), I applied this rewrite:

die(1..100).
face(1..6).
1 { roll(D,F):face(F) } 1 :- die(D).
penalty(D,F) :- roll(D,F).
penalty(D,F-1) :- penalty(D,F), F > 1.
#minimize [ penalty(D,F) ].

Now, the vsids and vmtf heuristics solve the problem instantly (finding an optimal solution as the first model), and even berkmin terminates in a fraction of a second (after enumerating 400-ish increasingly better solutions). It's great that this works so well, but I'm not sure I could articulate the general strategy it comes from. It's something along the lines of "if one of two mutually exclusive options must be true, find a way to charge for the amount of weight they share in the optimize statement". (Almost sounds like a job for a metaprogram!)

On Fri, Jan 4, 2013 at 3:03 AM, Benjamin Kaufmann <kaufmann@...> wrote:

>> Consider the following two programs. Each asks for a way to roll 100 dice
>> so that the sum of the faces is minimized. With some quick mental
>> analysis, 100 is clearly the optimal answer (rolling a 1 for each die).
>>
>> Here's the first program:
>>
>> die(1..100).
>> face(1..6).
>> 1 { roll(D,F):face(F) } 1 :- die(D).
>> #minimize [ roll(D,F)=F ].
>>
>> [...]
>>
>> Here's the second program: (F -> 7-F basically numbers the faces
>> backwards)
>>
>> die(1..100).
>> face(1..6).
>> 1 { roll(D,F):face(F) } 1 :- die(D).
>> #minimize [ roll(D,F)=7-F ].
>>
> Btw: In both cases, the unsatisfiable-core based approach used in
> unclasp really helps.
>
> $ gringo dice1.lp | unclasp -q
> unclasp version 0.1
> Reading from stdin
> OPTIMUM FOUND
>
> Models      : 1
> Enumerated  : 2
> Optimum     : yes
> Optimization: 100
> Time        : 0.000s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
>
> Regards,
> Ben
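[Editorial note, not part of the original thread: Adam's penalty rewrite is an order (unary) encoding of the weights, and its equivalence to the plain face-sum objective can be checked directly. The helper below is hypothetical, written only to mirror what the chaining rule derives.]

```python
# For a die showing face f, the rule "penalty(D,F-1) :- penalty(D,F), F > 1"
# chains penalty(D,f) down to penalty(D,1), so exactly f penalty atoms hold
# for that die -- each contributing weight 1 to the minimize statement.
def penalty_atoms(face):
    # faces F for which penalty(D,F) holds when the die shows `face`
    return [f for f in range(1, face + 1)]

# per-die: the number of unit-weight penalty atoms equals the face value
for face in range(1, 7):
    assert len(penalty_atoms(face)) == face

# whole assignment: total penalty count equals the plain face sum,
# so both encodings minimize the same objective
rolls = [1, 3, 6, 2]  # hypothetical faces for 4 dice
assert sum(len(penalty_atoms(f)) for f in rolls) == sum(rolls)
print("penalty encoding matches face-sum objective")
```

The practical difference is in how the solver sees the objective: the unit-weight penalty atoms let each conflict on the bound prune uniformly, instead of the bound depending on which weighted roll atoms happen to be picked first.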