
quadratic constraints optimization

Anonymous
2011-06-06
2013-05-03
1 2 > >> (Page 1 of 2)
  • Anonymous

    Anonymous - 2011-06-06

    Dear Hayato,

Sorry to bother you again, but I have two questions:

    - is it possible (and meaningful) to give an initial guess? Would this improve the accuracy of the final solution or merely speed up convergence?

    - I have the following problem:

    find x, y
    s.t. fi(x,y) = x(i)^2 + x(j)^2 - 2*x(i)*x(j)*cos(aij) - y(k)^2 = 0

    I can try to solve this using the fourth degree polynomial:
    f(x,y) = sum(fi^2)

    But I would rather solve the original problem, using only second-degree monomials. However, I'm not sure how to use sparsePOP if there is no objective function, only constraints. In addition, there might be no (x,y) which satisfies all constraints exactly, especially in an over-constrained, noisy setting. In that case I'd like to get the point (x,y) which best approximates the set of constraints.
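    For concreteness, the residuals and the fourth-degree surrogate can be written down directly; here is a minimal numpy sketch (indices, data layout and values are purely illustrative):

    ```python
    import numpy as np

    def f_i(x, y, i, j, k, a_ij):
        """One residual: x_i^2 + x_j^2 - 2*x_i*x_j*cos(a_ij) - y_k^2."""
        return x[i]**2 + x[j]**2 - 2.0 * x[i] * x[j] * np.cos(a_ij) - y[k]**2

    def f(x, y, constraints):
        """Fourth-degree surrogate objective: the sum of squared residuals."""
        return sum(f_i(x, y, i, j, k, a)**2 for (i, j, k, a) in constraints)

    # Consistency check: by the law of cosines, a 3-4-5 right triangle
    # (angle pi/2 between the sides of length 3 and 4) gives a zero residual.
    x = np.array([3.0, 4.0])
    y = np.array([5.0])
    print(f(x, y, [(0, 1, 0, np.pi / 2)]))  # ~0
    ```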

    When I try it with the nonsensical objective function min f = 0, I get an error saying that the POP is probably infeasible.

     
  • Hayato Waki

    Hayato Waki - 2011-06-06

    Dear Bartel

    I'm happy to hear that you were successful in installing and using SparsePOPC++.

    Let me answer your two questions:

    > - is it possible (and meaningful) to give an initial guess? Would this improve the accuracy of the final solution or merely speed up convergence?

    Although SDPA can use such an initial point to improve accuracy and/or speed, this is not yet implemented in SparsePOPC++. It is on our list of future work.

    > - I have the following problem:

    > In which case I'd like to get the point (x,y) which best approximates the set of constraints.

    Maybe you can get the best point by solving the following POP:

    min sum_i epsilon(i)
    s.t.  -epsilon(i) <= fi <= epsilon(i) for all i (j,k?)

    You may also be able to get another best point by solving the following POP:

    min sum_i epsilon(i)^2
    s.t.  fi = epsilon(i) for all i (j,k?)

    In those POPs, we minimize an error which you define for your problem.
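    The second formulation is ordinary nonlinear least squares on the residuals f_i, so one way to sanity-check a SparsePOP answer is a local solver such as scipy.optimize.least_squares (a sketch on made-up data; being a local method, it needs a starting point and does not certify global optimality):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Residuals f_i = x_i^2 + x_j^2 - a_ij*x_i*x_j - y_k^2 for a couple of
    # illustrative constraints; a_ij absorbs the factor 2*cos(angle).
    constraints = [(0, 1, 0, 2 * np.cos(np.pi / 2)),  # 3-4-5 right triangle
                   (0, 1, 0, 0.0)]

    def residuals(z):
        x, y = z[:2], z[2:]          # z packs the unknowns (x, y)
        return [x[i]**2 + x[j]**2 - a * x[i] * x[j] - y[k]**2
                for (i, j, k, a) in constraints]

    # min sum_i epsilon(i)^2 with f_i = epsilon(i) is exactly nonlinear
    # least squares on the residuals; start near a guessed solution.
    sol = least_squares(residuals, x0=[3.1, 3.9, 5.2])
    print(sol.x, 2 * sol.cost)       # 2*cost = sum of squared residuals
    ```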

    Best regards
    Hayato Waki

     
  • Anonymous

    Anonymous - 2011-06-08

    Dear,

    Consider then the following two as requests for future additions:
    - possibility to give an initial estimate
    - possibility to solve a system of polynomials without an explicit objective function (in the relaxation: minimize the trace of the moment matrix by default if no objvar is given)

    Regards,
    Bartel

     
  • Hayato Waki

    Hayato Waki - 2011-06-08

    Dear Bartel

    Thank you for your requests. I will try to add them to SparsePOPC++ in the near future.

    Best regards
    Hayato Waki

     
  • Hayato Waki

    Hayato Waki - 2011-06-08

    Dear Bartel

    > When I try it with the nonsensical objective function min f = 0, I get an error saying that the POP is probably infeasible.

    SparsePOP returns this message because the resulting SDP with the moment matrix is infeasible.
    In this case, theoretically, the original POP is also infeasible, so even if we add the trace of the moment matrix as the objective function, SparsePOP will return the same message.

    In this case, you need to change the mathematical model of your problem.
    For example, you could minimize the errors of the constraints as in the previous messages.

    Best regards
    Hayato Waki

     
  • Nobody/Anonymous

    Dear Hayato,

    Thank you for considering.

    I would find it very strange if this POP were infeasible:

    min f = 0
    s.t. xi^2 + xj^2 - aij*xi*xj = yk^2 for various i,j,k

    returns a correct solution if only the x's are unknown variables, and aij and yk are given constants. Hence the POP(x | y,a) is feasible, and obviously the POP(x,y | a) must then be feasible as well.
    But if I define one of the yk's as an unknown optimization variable, I get the error: "at least one of the primal or the dual SDP is expected to be infeasible." From the above, I think the problem lies elsewhere than in infeasibility (the sdpa output says something about "pdINF criteria :: line 1192 in sdpa_parts.cpp").

    Thank you,
    Bartel

    P.S.: I have tried both your sum of squares and linear errors, but they don't seem to bypass the error.

     
  • Nobody/Anonymous

    Update: I implemented the alternative with sum of squared residuals epsilon_i again and
    - it succeeds in bypassing the error message most of the time (but not always)
    - I still have to give more than the theoretical minimum of yk's as known constants though
    - but with only a few yk's as unknowns, it seems to give good results

    I saw you also know something about graph realization (sensor network localization), so maybe this makes sense to you:
    the yk's are actually distances between vertices of a (complete) graph. Could the relaxation's failure or success depend on the resulting graph's rigidity (flexibility or redundant rigidity)? If the graph is redundantly rigid, the yk's become intercorrelated. Would this somehow spoil the relaxation if there are more variables than actual degrees of freedom?

    Just a thought I was pursuing,
    Greetings,
    Bartel

     
  • Hayato Waki

    Hayato Waki - 2011-06-09

    Dear Bartel

    Thank you for trying my idea for your problem.

    Could you send me the problem for which SparsePOP returns the message "at least one of the primal or the dual SDP is expected to be infeasible."? I would like to find the reason, because it may be due to a bug in SparsePOPC++.

    Best regards
    Hayato Waki

     
  • Anonymous

    Anonymous - 2011-06-09

    I have sent you the problem by e-mail.

     
  • Hayato Waki

    Hayato Waki - 2011-06-09

    Dear Bartel

    Thank you for the quick response. Because I do not have SparsePOPC++ at hand right now, I will post the reason later.

    But I checked whether the MATLAB version of SparsePOP works, and it seems to work correctly. I guess that SparsePOPC++ may contain a bug.

    The output is as follows:

    ## Computational Results by sparsePOP.m with SeDuMi ##
    ## Printed by printSolution.m ##
    # Problem File Name   = feasibility.gms
    # parameters:
      relaxOrder          = 2
      sparseSW            = 1
      SDPsolverOutFile    = 1
    # SDP solved by SeDuMi:
      size of A           =
      no of nonzeros in A = 3244
      no of LP variables  = 274
      no of FR variables  = 398
      no of SDP blocks    = 15
      max size SDP block  = 21
      ave size SDP block  = 8.13e+00
    # SeDuMi information:
      SeDuMi.pars.eps      = 1.00e-09
      SDPsolverInfo.numerr = 0
      SDPsolverInfo.pinf   = 0
      SDPsolverInfo.dinf   = 0
    # Approximate optimal value information:
      SDPobjValue         = +9.1168140e-10
      POP.objValue        = +0.0000000e+00
      relative obj error  = +9.117e-10
      POP.absError        = -1.124e-09
      POP.scaledError     = -1.706e-10
    # elapsed time:
      elapsedTime.readData    =     0.14
      elapsedTime.conversion  =     0.06
      elapsedTime.SeDuMi      =     1.94
      elapsedTime.total       =     2.15
    # Approximate optimal solution information:
      POP.xVect =
         1:+2.1284332e+00    2:+2.5165678e+00    3:+1.3397320e+00    4:+2.4130911e+00    5:+1.9882144e+00
         6:+2.0942544e+00    7:+2.3335710e+00    8:+1.2783292e+00    9:+2.3598709e+00   10:+1.8341126e+00
        11:+6.9941802e-01

    Best regards
    Hayato Waki

     
  • Hayato Waki

    Hayato Waki - 2011-06-09

    Dear Bartel

    I have two comments on solving your input file.

    In the current version, SparsePOPC++ cannot read your input problem correctly because the right-hand sides of f01 and f11 are not constant terms. Could you rewrite them as follows:

    f01.. x00^2 + x02^2 - 1.97286778637632*x00*x02 -d02 =E= 0;
    f11.. x10^2 + x12^2 - 1.98741792804935*x10*x12 -d02 =E= 0;

    Then you will get another error message:

    " The iteration has exceeded the maxIterationx and/or stopped with
    no information on the primal and dual feasibility."

    To avoid this message, set reduceMomentMatSW in param.pop to 0. Although reduceMomentMatSW removes a degeneracy from the resulting SDP, it does not work well in this case.

    Then you should be able to get the following output:

    # Approximate optimal value information:
      SDPobjValue         = +0.0000000e+00
      POP.objValue        = +0.0000000e+00
      relative obj error  = +0.000e+00
      POP.absError        = -6.342e-07
      POP.scaledError     = -9.627e-08
    # cpu time:
      cpuTime.conversion  =     0.06
      cpuTime.SDPA        =     0.65
      cpuTime.total       =     0.60
    # Approximate optimal solution information:
         1: +2.1284330e+00    2: +2.5165675e+00    3: +1.3397319e+00    4: +2.4130908e+00    5: +1.9882142e+00
         6: +2.0942542e+00    7: +2.3335707e+00    8: +1.2783292e+00    9: +2.3598707e+00   10: +1.8341124e+00
        11: +6.9941806e-01

    Best regards
    Hayato Waki

     
  • Anonymous

    Anonymous - 2011-06-10

    As simple as that, hmm? Great! Thanks for the correction.

    Could you try gradually replacing more of the right-hand-side constants with " - dij =E= 0;"? I still get infeasibility errors if I set more of the d's to optimization variables.
    (dij corresponds to the function with xki and xkj, i = 0…4; j = i + 1…5 and k = 1,2)

    Thank you so much!
    Bartel

    P.S.: warn me if I take too much of your time

     
  • Hayato Waki

    Hayato Waki - 2011-06-10

    Dear Bartel

    Could you send me the input problems for which SparsePOPC++ returns an error message, if possible?
    I changed 4 constants into variables, but SparsePOPC++ did not return such an error message.

    Did you set reduceMomentMatSW to be 0?

    Best regards
    Hayato Waki

     
  • Hayato Waki

    Hayato Waki - 2011-06-10

    Dear  Bartel

    I found an error message by changing five constants into variables.

    I guess the reason is that the resulting SDP has (many) linear equations. In that case, the SDP does not have any interior feasible solutions, and thus SDPA may not be able to solve it correctly.

    To avoid this difficulty, you should formulate your problems as POPs which do not have equality constraints. Otherwise, you could use SeDuMi or SDPT3 to solve the SDP relaxation problems; in that case, you need MATLAB.

    Best regards
    Hayato Waki

     
  • Anonymous

    Anonymous - 2011-06-10

    Dear Hayato,

    It used to go awry in most cases at around 6 out of ten unknowns, sometimes even at 5; but by setting reduceMomentMatSW to zero, it seems to behave better. Apparently the problem I sent you happens to be quite well behaved. In that case, I had to go up to 8 out of ten of the d's set as optimization variables before it ended in an error.

    I have in the meantime found another cause for the errors: if the input file contains something of the form a - -b = 0 instead of a + b = 0, I get the error.

    (Note: I randomly generate these problems to test the methodology. Li Hondong claims to have solved this kind of problem with all x,d unknown (which is the eventual use), while setting ||d|| = 1 to avoid ending up with a trivial zero solution. If I understand his paper correctly, he even did so with a relaxation order of 1.)

    Anyway, it's getting morning here; I need to get some sleep. Thanks for the help, I'll look further into it tomorrow.

    Kind Regards,
    Bartel

     
  • Anonymous

    Anonymous - 2011-06-10

    Oh, well. Cross posting.

    Thanks, I'll try further with the residual model min sum(ei) s.t. fi <= ei, fi >= -ei then, and report to you if this gives any trouble. I happened to try GloptiPoly earlier today, and there I also had to replace fi = 0 with fi <= 0 and fi >= 0 to get the problem solved. But if I increased my problem's size, SeDuMi ran out of memory, so that was no good. I have the impression that sdpa is much faster and doesn't suffer from out-of-memory exceptions like SeDuMi (I also tried the MATLAB sparsePOP with SeDuMi; it ran out of memory as well).

    Greetings,
    Bartel

     
  • Hayato Waki

    Hayato Waki - 2011-06-10

    Dear  Bartel

    I have two comments on your reply.

    > I have in the meantime found another cause for the errors: if the input file contains something of the form a - -b = 0 instead of a + b = 0, I get the error.

    In this case, sparsePOP reads a - b = 0 and solves that without any error messages, so please write your problems carefully.

    > But if I increased my problem's size, SeDuMi ran out of memory, so that was no good

    SparsePOP was developed precisely to overcome this difficulty with problem size.

    I understand now why you do not use SeDuMi and MATLAB. Thanks, and have a good sleep.

    Best regards
    Hayato Waki

     
  • Anonymous

    Anonymous - 2011-06-10

    Dear Hayato,

    I will take care of the double "- -" issue.
    I also noticed that when some equations under the Equations tag are not later written out, there is a stack dump. An exception which is not correctly caught?

    I have replaced the equality problem by the triple inequality problem (with extra residual variables):

    min f = sum(r_i)
    s.t. f_i - r_i <= 0, f_i + r_i >= 0, r_i >= 0

    In addition, to decrease the number of variables and equations, I have merged the functions
    f_i - d_k = 0 and g_i - d_k = 0
    in the equivalent problem
    f_i - g_i = 0
    if d_k is an optimization variable.
    (I can then later retrieve d_k from f_i or g_i and the residual of the function f_i - g_i)

    Now I get the error in sdpa "step length is too small, cannot move" and the solution returned by sparsePOP is not the theoretically correct value. Any idea as to the cause of that kind of error?

    Thank you ever so kindly,
    Bartel

    P.S.: Here's an example (relaxOrder = 2):

    Variables
    objvar,x00,x01,x02,x03,x04,x10,x11,x12,x13,x14,r00,r01,r02,r03,r04,r05,r06,r07,r08,r10,r11,r13;

    Positive Variables
    x00,x01,x02,x03,x04,x10,x11,x12,x13,x14,r00,r01,r02,r03,r04,r05,r06,r07,r08,r10,r11,r13;

    Equations
    f,fp00,fn00,fp01,fn01,fp02,fn02,fp03,fn03,fp04,fn04,fp05,fn05,fp06,fn06,fp07,fn07,fp08,fn08,fp10,fn10,fp11,fn11,fp13,fn13;

    f..  r00 + r01 + r02 + r03 + r04 + r05 + r06 + r07 + r08 + r10 + r11 + r13 - objvar =E= 0;
    fp00.. x00^2 + x01^2  - 1.85767107923161*x00*x01 - 0.538458477364385 - r00 =L= 0;
    fn00.. x00^2 + x01^2  - 1.85767107923161*x00*x01 - 0.538458477364385 + r00 =G= 0;
    fp01.. x00^2 + x02^2  - 1.68300829448853*x00*x02 - 1.03404111904313 - r01 =L= 0;
    fn01.. x00^2 + x02^2  - 1.68300829448853*x00*x02 - 1.03404111904313 + r01 =G= 0;
    fp02.. x00^2 + x03^2  - 1.98912096081019*x00*x03 - x10^2 - x13^2  + 1.99283348527135*x10*x13 - r02 =L= 0;
    fn02.. x00^2 + x03^2  - 1.98912096081019*x00*x03 - x10^2 - x13^2  + 1.99283348527135*x10*x13 + r02 =G= 0;
    fp03.. x00^2 + x04^2  - 1.98789451388289*x00*x04 - 0.0488516227674283 - r03 =L= 0;
    fn03.. x00^2 + x04^2  - 1.98789451388289*x00*x04 - 0.0488516227674283 + r03 =G= 0;
    fp04.. x01^2 + x02^2  - 1.67841741496402*x01*x02 - x11^2 - x12^2  + 1.8256358156973*x11*x12 - r04 =L= 0;
    fn04.. x01^2 + x02^2  - 1.67841741496402*x01*x02 - x11^2 - x12^2  + 1.8256358156973*x11*x12 + r04 =G= 0;
    fp05.. x01^2 + x03^2  - 1.92431565889479*x01*x03 - x11^2 - x13^2  + 1.9624812040378*x11*x13 - r05 =L= 0;
    fn05.. x01^2 + x03^2  - 1.92431565889479*x01*x03 - x11^2 - x13^2  + 1.9624812040378*x11*x13 + r05 =G= 0;
    fp06.. x01^2 + x04^2  - 1.92619533983684*x01*x04 - x11^2 - x14^2  + 1.96270206422686*x11*x14 - r06 =L= 0;
    fn06.. x01^2 + x04^2  - 1.92619533983684*x01*x04 - x11^2 - x14^2  + 1.96270206422686*x11*x14 + r06 =G= 0;
    fp07.. x02^2 + x03^2  - 1.71744293833825*x02*x03 - x12^2 - x13^2  + 1.85167473509343*x12*x13 - r07 =L= 0;
    fn07.. x02^2 + x03^2  - 1.71744293833825*x02*x03 - x12^2 - x13^2  + 1.85167473509343*x12*x13 + r07 =G= 0;
    fp08.. x02^2 + x04^2  - 1.68361791095533*x02*x04 - x12^2 - x14^2  + 1.85491773064862*x12*x14 - r08 =L= 0;
    fn08.. x02^2 + x04^2  - 1.68361791095533*x02*x04 - x12^2 - x14^2  + 1.85491773064862*x12*x14 + r08 =G= 0;
    fp10.. x10^2 + x11^2  - 1.92373447148408*x10*x11 - 0.538458477364385 - r10 =L= 0;
    fn10.. x10^2 + x11^2  - 1.92373447148408*x10*x11 - 0.538458477364385 + r10 =G= 0;
    fp11.. x10^2 + x12^2  - 1.85619295981631*x10*x12 - 1.03404111904313 - r11 =L= 0;
    fn11.. x10^2 + x12^2  - 1.85619295981631*x10*x12 - 1.03404111904313 + r11 =G= 0;
    fp13.. x10^2 + x14^2  - 1.99288153720548*x10*x14 - 0.0488516227674283 - r13 =L= 0;
    fn13.. x10^2 + x14^2  - 1.99288153720548*x10*x14 - 0.0488516227674283 + r13 =G= 0;

     
  • Hayato Waki

    Hayato Waki - 2011-06-10

    Dear  Bartel

    > I also noticed that when some equations under the Equations tag are not later written out, there is a stack dump. An exception which is not correctly caught?

    Sorry, I cannot understand your situation. Could you give me more details?

    > Now I get the error in sdpa "step length is too small, cannot move" and the solution returned by sparsePOP is not the theoretically correct value. Any idea as to the cause of that kind of error?

    In this case, you may be able to solve your problems correctly by changing betaStar, betaBar and gammaStar in param.sdpa. When I changed betaStar to 0.01 and betaBar to 0.02, SDPA returned "pdOPT". See the SDPA manual for more details.
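    For reference, the relevant lines in param.sdpa would then look roughly as follows (the value/comment layout is recalled from the SDPA parameter file and may differ slightly between SDPA versions; gammaStar is shown at its usual default):

    ```
    0.01    double 0.0 <= betaStar  <  1.0;
    0.02    double 0.0 <= betaBar   <= 1.0, betaStar <= betaBar;
    0.9     double 0.0 <  gammaStar <  1.0;
    ```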

    Best regards
    Hayato Waki

     
  • Anonymous

    Anonymous - 2011-06-10

    Dear,

    > stack dump:
    If you, by mistake, delete the line "fp00.. x00^2 + x01^2  - 1.85767107923161*x00*x01 - 0.538458477364385 - r00 =L= 0;" in the above example while fp00 is still in the list under "Equations", you get a stack dump, since there is no definition of fp00. It would be more elegant to get an error message of the kind "no definition found for equation fp00".
    Not that it's that important, but I thought I'd mention it.

    Bartel

     
  • Hayato Waki

    Hayato Waki - 2011-06-10

    Dear  Bartel

    Thank you for your comment. In the next version, SparsePOP will return an error message for such instances.

    Best regards
    Hayato Waki

     
  • Nobody/Anonymous

    Dear Hayato,

    if you can bear with me a little more…

    I'm trying to use the sum-of-squares representation, i.e. minimize the 4th-degree f(x)
    min f(x) = sum( fi( x(i), x(j) )^2 ) + sum( fi( x(i), x(j), y(k) )^2 )
    over (i,j) when y(k) is known and over (i,j,k) when y(k) is unknown,

    which looks a lot like the problem in sensor network localization.

    Hence, in order to exploit sparsity, I have been looking at Nie Jiawang's work on SOS relaxations over sums of small (squared) polynomials, and I write the above problem as:

    max s
    s.t. f(x) - s = sum( mi' * Wi * mi ), Wi >= 0 for all i
    where Wi depends only on a subset of variables, e.g. x(i), x(j) and possibly y(k);

    I'm not sure however, whether my interpretation is correct. I've described it also on
    http://sedumi.ie.lehigh.edu/index.php?option=com_kunena&Itemid=78&func=view&catid=10&id=5565

    The main thing is that terms which appear in multiple fi (like x(i)^4), as well as the optimization variable "s", appear in multiple blocks. In particular, "s" appears in each block Wi. So instead of maximizing s, I maximize sum( si ). Is this correct?

    Because when I run it through sdpa, the result is relatively unstable. It does give me the correct minimum objective value, but seems to have a very ill-defined optimal solution x*.

    Thank you,
    Bartel

     
  • Hayato Waki

    Hayato Waki - 2011-06-20

    Dear Bartel

    Before answering your questions, I would like to confirm your problem.

    fi(x(i), x(j)) = x(i)^2 + x(j)^2 - aij*x(i)*x(j) - bij, where aij and bij are real constants.
    fi(x(i), x(j), y(k)) = x(i)^2 + x(j)^2 - aij*x(i)*x(j) - y(k), where aij is a real constant.

    Is this correct? To answer your questions, I need to know your problem exactly.

    Best regards
    Hayato Waki

     
  • Nobody/Anonymous

    Apart from some typos, that seems correct, yes.
    (note that y(k) is a squared distance (hence positive) and may be replaced by y(k)^2, which would make the second function homogeneous)

     
  • Hayato Waki

    Hayato Waki - 2011-06-20

    Dear Bartel

    Thanks. I have two comments.

    (1) I agree with you about the reduction by symmetry in polynomials.

    In the first functions, mi = (1, x(i), x(j), x(i)^2, x(i)*x(j), x(j)^2)^T and Wi consists of a 3 * 3 block matrix as follows:
    Wi = [ a  0  b^T ;  0  c  0 ;  b  0  d ], where a is a scalar, b is a 3*1 vector, c is a 2*2 matrix and d is a 3*3 matrix.

    In the second functions, mi = (1, x(i), x(j), y(k), x(i)^2, x(i)*x(j), x(j)^2, …, y(k)^2)^T and Wi consists of a 3 * 3 block matrix as follows:
    Wi = [ a  0  b^T ;  0  c  0 ;  b  0  d ], where a is a scalar, b is a 6*1 vector, c is a 3*3 matrix and d is a 6*6 matrix.

    I think your problem may be reduced further by using symmetry in polynomials. If you are interested in how to use symmetry, read the following papers:

    K. Gatermann and P.A. Parrilo, Symmetry groups, semidefinite programs, and sums of squares, Journal of Pure and Applied Algebra, Vol. 192 (2004), pp. 95–128.

    L. Jansson, J.B. Lasserre, C. Riener and T. Theobald, Exploiting symmetries in SDP-relaxations for polynomial optimization, LAAS-report, Toulouse, September 2006.

    (2) s in f(x) - s is a variable in your maximization, but it is regarded as a constant term in the identity f(x) - s = sum(…).
    Indeed, the identity is in x, so s acts as a constant term when you derive the linear equations on Wi by comparing coefficients on both sides of the identity. So I think that s does not appear in multiple blocks of Wi; you only get the single equation constant - s = sum_i (Wi)_{0,0}.

    I cannot understand why you maximize sum(si). What is si?
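    As a toy check of this coefficient-comparison argument (a hypothetical univariate example, not your problem): take f(x) = x^2 - 2x + 2 and s = 1, so that f(x) - s = (x - 1)^2 = m(x)^T W m(x) with m(x) = (1, x)^T, and comparing constant terms gives exactly constant(f) - s = W_{0,0}:

    ```python
    import numpy as np

    # Gram matrix of the SOS decomposition (x - 1)^2 = m^T W m, m = (1, x)^T.
    W = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])      # PSD: eigenvalues 0 and 2
    s = 1.0

    def f(x):
        return x**2 - 2 * x + 2

    # Comparing constant coefficients: constant(f) - s = W[0, 0], i.e. 2 - 1 = 1.
    assert np.isclose(f(0.0) - s, W[0, 0])

    # The identity f(x) - s = m(x)^T W m(x) holds for every x.
    for x in np.linspace(-2.0, 2.0, 9):
        m = np.array([1.0, x])
        assert np.isclose(f(x) - s, m @ W @ m)
    print("identity verified")
    ```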

    Best regards
    Hayato Waki

     
