Using CostRegular to Assign Costs


  • Anonymous


    I am new to CP, but used Choco to solve a simple problem where I need to arrange a given set of tiles on a grid.  That was a simple constraint problem, but now I want to extend a related idea to include optimisation.

    I will explain the problem in detail, but you may be able to answer my questions without it, so feel free to skip ahead.  The problem is based on a square grid of cells.  Each cell can contain a tile that shows pipes entering/leaving the square on various sides.  So a simple tile might have a "horizontal" pipe going from left to right.  A more complex tile might have an additional pipe from the bottom meeting the horizontal pipe to give a T.  And the tiles must be constrained so that (1) all pipes connect from tile to tile; (2) no pipes lead off the "edge" of the grid; (3) every cell contains some pipes; (4) the result is consistent with some cells that are specified at the start; (5) the available set of tiles is fixed (and equal in size to the number of empty cells).

    So far, so good - that was constraint only and I have solved it.  But now I want to extend that by allowing an open set of tiles (no globalCardinality; drop 5 above) and with the requirement that the total length of the pipes be a minimum.  In my original example a horizontal pipe from left to right has a length of 2 (the minimum) while the T has a length of 3.

    My problem is: how to embed this idea of lengths into the problem, so that I can minimise the sum?  And it seems to me that a very simple version of costRegular will do this: I accept *any* word, and the weights are in the cost array.

    My questions, then, are:

    - If I express the regular expression as a string, how do I encode values greater than 10?  Is it possible to use, say, base 64 symbols?  (my IntegerVariables are bit patterns that encode the pipes; so 5 (1+4) might be the horizontal pipe and 13 (1+4+8) the T).

    - Is my general approach correct?  Is this the simplest way to assign a cost to different values when there is no simple formula (actually, the formula is the number of 1 bits in the value)?  I am surprised there is nothing simpler, but I have gone through the API and docs without finding anything.

    - Can you recommend a resource for learning this kind of thing?  I am a software engineer, but my education is in the physical sciences (I have a PhD in astronomy, but have been in software dev for 15+ years).
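    To make the cost formula concrete, here is a tiny standalone sketch (the bit layout and names are just my own illustration, not anything from Choco):

```java
public class TileCost {
    // My bit encoding for pipe openings on each side of a tile
    static final int LEFT = 1, TOP = 2, RIGHT = 4, BOTTOM = 8;

    // Pipe length of a tile = number of open sides = number of 1 bits
    static int cost(int tile) {
        return Integer.bitCount(tile);
    }

    public static void main(String[] args) {
        int horizontal = LEFT | RIGHT;        // 5, the left-to-right pipe
        int tee = LEFT | RIGHT | BOTTOM;      // 13, the T
        System.out.println(cost(horizontal)); // 2
        System.out.println(cost(tee));        // 3
    }
}
```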

    Thanks - any help appreciated (Choco is quite cool - I am evaluating it for work and this will help show how, I hope, it provides more flexibility/adaptability than a hand-coded DFS).


    PS The solution to the constraint problem is at


  • Anonymous

    Hmmm.  FiniteAutomaton(".*") appears to run "forever".  I will wait for comments (please!) as am feeling a bit lost.  Thanks, Andrew

  • Hi Andrew,

    - If you consider no grammatical restriction on your solution set, then you do not need CostRegular: the Equation constraint should suffice. Equation is posted on new cost variables, which must be channeled to the assignment variables using a FeasPair or an Nth constraint, for instance.
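    To sketch what that channeling computes, in plain Java rather than the solver API (the cost table is an assumption based on Andrew's bit-count formula, not part of Choco):

```java
public class Channeling {
    public static void main(String[] args) {
        // Hypothetical cost table indexed by tile value:
        // cost = pipe length = number of 1 bits in the encoding
        int[] costTable = new int[16];
        for (int v = 0; v < 16; v++) costTable[v] = Integer.bitCount(v);

        // An Nth (element) constraint ties cost[i] to costTable[x[i]];
        // here we just evaluate that relation on one fixed assignment.
        int[] x = {5, 13, 5};   // tile values: horizontal, T, horizontal
        int total = 0;
        for (int xi : x) total += costTable[xi];

        System.out.println(total); // 2 + 3 + 2 = 7
    }
}
```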

    - Symbol "." is not accepted in a regular expression here: you should specify the whole alphabet, e.g. "(0|1|2|3)*".
    There are two alternatives to obtain the "full automaton" from the empty automaton a = new FiniteAutomaton():
    1) build it:
    int s = a.addState(); a.setInitialState(s); a.setFinal(s);
    for (int i = 0; i <= maxSymbol; i++) a.addTransition(s, s, i);
    return a;
    2) or take the complement:
    for (int i = 0; i <= maxSymbol; i++) a.addToAlphabet(i);
    return a.complement();
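    As a plain-Java sanity check of alternative 1: a single state that is both initial and final, with a self-loop on every symbol, accepts any word over the alphabet (this is only an illustration of the idea, not the Choco FiniteAutomaton class):

```java
import java.util.HashSet;
import java.util.Set;

public class FullDFA {
    // The alphabet {0..maxSymbol}; the sole state is initial and final.
    final Set<Integer> alphabet = new HashSet<>();

    FullDFA(int maxSymbol) {
        for (int i = 0; i <= maxSymbol; i++) alphabet.add(i);
    }

    // Every word over the alphabet is accepted (self-loop on each symbol);
    // a word using a symbol outside the alphabet is rejected.
    boolean accepts(int[] word) {
        for (int s : word)
            if (!alphabet.contains(s)) return false;
        return true;
    }

    public static void main(String[] args) {
        FullDFA a = new FullDFA(13);
        System.out.println(a.accepts(new int[]{5, 13, 5})); // true
        System.out.println(a.accepts(new int[]{99}));       // false
    }
}
```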

    - To encode values (symbols) greater than 9 in a regular expression, just put them in angle brackets: "55<10>5".
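    A sketch of how such a string might be tokenized into symbols, "55<10>5" giving [5, 5, 10, 5] (illustrative only, not the actual Choco parser):

```java
import java.util.ArrayList;
import java.util.List;

public class SymbolTokenizer {
    static List<Integer> tokenize(String regexp) {
        List<Integer> symbols = new ArrayList<>();
        int i = 0;
        while (i < regexp.length()) {
            char c = regexp.charAt(i);
            if (c == '<') {                    // multi-digit symbol in brackets
                int close = regexp.indexOf('>', i);
                symbols.add(Integer.parseInt(regexp.substring(i + 1, close)));
                i = close + 1;
            } else if (Character.isDigit(c)) { // single-digit symbol
                symbols.add(c - '0');
                i++;
            } else {
                i++;                           // skip operators like * | ( )
            }
        }
        return symbols;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("55<10>5")); // [5, 5, 10, 5]
    }
}
```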


    Can you recommend a resource for learning this kind of thing?

    Do you mean a resource about modeling with CP? CP is an active research domain and there are currently no popular introductory works of that kind. The best way to learn is perhaps as you do, by training on specific problems. You may also look at different solver manuals and example codes (see Hakan's blog), or at the global constraint catalog online:
    You will find there definitions and usages of a large variety of global constraints, as well as their counterparts in Choco (when they exist).

    Hope it helps,
    Don't hesitate to share with us the conclusions of your evaluation of Choco for your work!

    Note that in your problem, you could actually use a Regular constraint on each row/column in order to model the neighborhood constraints (e.g. I does not precede - or T in any row).
    The 2nd example of the CostRegular documentation shows how to aggregate several forbidden regexps.
    By using CostRegular constraints instead, you can compute the length of the path by double counting (sum of the rows = sum of the columns).
    And by using MultiCostRegular, you can even model the max cardinality of each tile!
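    The double-counting identity itself is easy to check in plain Java: both sums cover every cell exactly once, so they must agree (the 3x3 cost grid below is a hypothetical example, with costs taken as pipe lengths):

```java
public class DoubleCount {
    // Total cost counted row by row
    static int rowSum(int[][] cost) {
        int total = 0;
        for (int[] row : cost)
            for (int c : row) total += c;
        return total;
    }

    // Total cost counted column by column
    static int colSum(int[][] cost) {
        int total = 0;
        for (int c = 0; c < cost[0].length; c++)
            for (int[] row : cost) total += row[c];
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical grid of per-tile costs (pipe lengths)
        int[][] cost = {{2, 3, 2}, {2, 2, 2}, {2, 3, 2}};
        // Every cell is counted once by each sum, so the two totals match;
        // posting both sums as constraints is the redundancy discussed here.
        System.out.println(rowSum(cost) + " = " + colSum(cost));
    }
}
```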

    This puzzle problem is great :)


  • Anonymous

    thanks for the replies (just when i was giving up hope :o).  i will try all the above.  thanks again - this all looks very useful.  andrew


  • Anonymous

    hi again,

    thanks for the help above.  now that i understand the regexp/dfa constraints i can indeed use them for constraining neighbours.

    i also tried two different ways of optimising - one using the "accept everything" dfa with costs, and one using the sum of cost variables.  both worked fine, with correct results and near-identical performance EXCEPT when i increased the size of the problem to an 8x8 puzzle.  at that point, the cost variable approach found a solution in 6ms (about what i expected from performance on earlier sizes) but the dfa approach took over an hour (and then i killed it).

    that was such a drastic change that i assume some internal logic changed the default search strategy, or stopped using some optimization that is only appropriate for a small number of variables, or something?

    anyway, if anyone knows why the time required should suddenly explode, i would love to know.

    also, another interesting factoid - when finding the minimum solution for unconstrained puzzles, those with even sides run faster than those with odd sides.  i assume this is because even puzzles can be covered with pipe using minimal tiles (I, _ and the corners) throughout, while odd puzzles require two more expensive "T" tiles (so i guess the search stops at the first minimal solution in the even case, but must check all solutions in the odd case).

    you can see this if you look at the pictures at

    anyway, that's all - posting mainly to say thanks again, and report success, but if there's a simple explanation for the slow dfa on the 8x8 problem it would be cool to know what it was…


    Mmm… that is clearly strange, because the propagation at the root node should already find the lower bound 128 (8*8*2), which is the optimum.
    Note that when benchmarking optimization problems with CP, you must consider separately the time needed to find an optimal solution and the time needed to prove that the last solution is optimal. Without a lower bound or a specific search strategy, it is very difficult for the solver to prove optimality in a minimization problem (this is the reason why your problem is much harder on grids of odd size).
    Of course, the default search strategy is not favorable to global constraints; this could be an explanation, and the CostRegularValSelector could be an alternative.
    Did you post only one costRegular on the whole set of variables? Did the solver find a first solution? Is it optimal?

    I have modeled your problem using one costRegular on each row and column to take into account the neighborhood constraints as well as the optimization criterion. I got something like this:

    rowFA = new FiniteAutomaton(all + "*((" + pipeRight + noPipeLeft + ")|(" + noPipeRight + pipeLeft + "))" + all + "*");
    rowRA = rowFA.complement();

    foreach row:
    post new CostRegular(getCostVar(row), getVars(row), new FiniteAutomaton(rowRA), tileSizes);

    foreach col:
    post new CostRegular(getCostVar(col), getVars(col), new FiniteAutomaton(colRA), tileSizes);

    post (hCost = sum(getRowCostVars));
    post (vCost = sum(getColCostVars));
    post(hCost = vCost);
    minimize hCost;

    On my laptop with the default search strategy, this model finds (and proves) the optimum of the 8x8 instance in 0.2s, the 50x50 instance in 5.9s, and the 100x100 in 16s.
    For odd sizes, the optimum of the 7x7 instance is found in 0.1s but the optimality proof takes more than 200s just because the solver does not know that at least 2 T tiles are needed in any feasible solution.
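    The bound arithmetic can be sketched as follows (the +2 for odd sizes encodes the two extra T tiles, each one unit longer than a straight pipe, following the reasoning above; the function name is just for illustration):

```java
public class LowerBound {
    // Lower bound on total pipe length for an n x n grid:
    // every cell needs a pipe of length at least 2, and grids of odd
    // size additionally need two T tiles of length 3 instead of 2.
    static int bound(int n) {
        int base = n * n * 2;
        return (n % 2 == 0) ? base : base + 2;
    }

    public static void main(String[] args) {
        System.out.println(bound(8)); // 128, the root-node bound for 8x8
        System.out.println(bound(7)); // 100
    }
}
```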
    The difficult things start here…



  • Anonymous

    hi - sorry for not replying for so long, but work has been very busy and there were a lot of details in your email.  i finally had time today to rewrite everything following your suggestions.  unfortunately, i didn't manage to complete it (i am missing the edge restrictions), but everything is much clearer than it was (i also had some time a week or two back and enabled the logging, modified various parameters, etc, which also helped).

    although i still don't completely understand why the double counting is necessary.  is the idea simply to add as many constraints in as many different ways as possible?  because it seems like either rows or columns alone would be sufficient.

    anyway, i felt bad about never replying after all the help, so this is mainly to say thanks again.  it's been a huge help and i am now much more confident using the software.

    cheers, andrew

  • Hi Andrew,
    It was a pleasure to help you on such an interesting problem.

    Concerning double-counting, you are right: this is not mandatory at all. You just need one way to count.
    However, redundancy is very often a good thing in CP: if a piece of information is duplicated in several constraints, then each constraint propagator may individually make more deductions. Some of these deductions would not have been reached by simple propagation between the constraints if only one of them carried the information.
    Note that this is not a general rule: sometimes (if the individual propagators are too time-consuming, or if they do not make more deductions from the information) you gain nothing from redundancy, but it is worthwhile to try… especially double-counting, which is an easy special case of redundancy on matrix models with row and column constraints.