From: Anders P. <and...@op...> - 2004-04-21 09:21:23
Sorry, just A from the example below...

It will speed things up if you can reduce the number of times you
calculate the QR decomposition (regardless of which matrix
implementation you use). In practice, perhaps you can create groups of
calculations that share the same x values.

/Anders

On 2004-04-20, at 14.57, Inger, Matthew wrote:

> In my test, yes, they are the same, but in practice, they
> will not be. What's an A-Matrix?
>
> -----Original Message-----
> From: Anders Peterson [mailto:and...@op...]
> Sent: Tuesday, April 20, 2004 4:53 AM
> To: Inger, Matthew
> Cc: 'oja...@li...'
> Subject: Re: [ojAlgo-user] Questions
>
> Do you re-use the A-matrix? Are the x values the same for the 10k
> regressions?
>
> On 2004-04-19, at 21.03, Inger, Matthew wrote:
>
>> There are a small number of variables. In most cases, we
>> have a single-variable equation, and up to 24 points with
>> which to fit the linear regression.
>>
>> Then, we do that 10,000 times.
>>
>> Basically, each row in the database contains time-elapsed
>> data (typically one value per month). We use the # of the
>> month as the x value, and the data as the y value, and try
>> to fit a line to it to determine short-term trends.
>>
>> So for each row, we calculate 1 set of coefficients (b0 and b1,
>> aka intercept & slope), and make 1 or more predictions.
>>
>> However, there are potentially millions of rows in the database.
>>
>> -----Original Message-----
>> From: Anders Peterson [mailto:and...@op...]
>> Sent: Monday, April 19, 2004 2:32 PM
>> To: Inger, Matthew
>> Subject: Re: [ojAlgo-user] Questions
>>
>> This is just a naive first attempt...
>>
>> On 2004-04-19, at 20.06, Inger, Matthew wrote:
>>
>>> PS: As far as solving the equations, I basically have two
>>> matrices, X and Y. Any kind of code snippet would help here.
>>> Keep in mind, equations are of the form
>>>
>>> y = b0 + b1*X
>>
>> 1*b0 + x*b1 = y
>>
>> A*X = B
>>
>> where
>>
>> A:
>> 1 x1
>> 1 x2
>> 1 x3
>> ... ...
>>
>> B:
>> y1
>> y2
>> y3
>> ...
>>
>> X:
>> b0
>> b1
>>
>> X = A.solve(B);
>>
>> Are you doing this 10 000 times with a relatively small number of x-
>> and y-values, or do you have 10 000 x- and y-values for each
>> calculation?
>>
>> /Anders
>>
>>> -----Original Message-----
>>> From: Anders Peterson [mailto:and...@op...]
>>> Sent: Monday, April 19, 2004 2:02 PM
>>> To: Inger, Matthew
>>> Cc: 'oja...@li...'
>>> Subject: Re: [ojAlgo-user] Questions
>>>
>>> On 2004-04-19, at 18.33, Inger, Matthew wrote:
>>>
>>>> 1. Have any performance tests been run with BigMatrix?
>>>
>>> Not really...
>>>
>>>> I'm using BigMatrix to do a LinearRegression algorithm. I had
>>>> previously been using Jama directly, but our FD department
>>>> noticed a difference between the results given by Java and the
>>>> LINEST function in Excel. I have determined that the difference
>>>> is DEFINITELY due to rounding limitations in the Java double
>>>> primitive type.
>>>>
>>>> These issues go away when I use the ojAlgo library,
>>>
>>> That's great news to me - that is the reason I created ojAlgo!
>>>
>>>> however, the time required to do the calculations increases to
>>>> about 12-13x what it was using regular double calculations.
>>>>
>>>> Doing 10,000 linear regressions with Jama took about 0.781s
>>>> (according to JUnit), and with BigMatrix it took 10.345s.
>>>
>>> I wasn't aware the difference was that big. ;-)
>>>
>>>> Any idea why such a huge disparity?
>>>
>>> Try altering the scale. A larger scale means better results, but it
>>> takes longer. You can change a matrix's scale whenever you want - the
>>> new scale will be used from then on. Elements are not rounded unless
>>> you call enforceScale().
>>>
>>> Internally, BigMatrix uses various decompositions for some of the
>>> more complex calculations. These decompositions inherit a scale from
>>> the parent matrix. It is, however, not the same - it's bigger.
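[Editor's note: the worked example above ends with X = A.solve(B), which, per the discussion later in the thread, amounts to a least-squares fit when the system is over-determined. As a point of reference, here is a plain-Java sketch of that same fit for the single-variable case, with the normal equations (A^T A) X = A^T B written out in closed form. No particular matrix library's API is assumed.]

```java
// Least-squares fit of y = b0 + b1*x; the same answer that solving the
// over-determined system A*X = B above in the least-squares sense gives.
public final class SimpleRegression {

    /** Returns {b0, b1} minimising the sum of squared residuals. */
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
        for (int i = 0; i < n; i++) {
            sx  += x[i];
            sy  += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        // Normal equations (A^T A) X = A^T B, reduced to two unknowns.
        double b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double b0 = (sy - b1 * sx) / n;
        return new double[] { b0, b1 };
    }

    public static void main(String[] args) {
        double[] x = { 1, 2, 3, 4 };
        double[] y = { 3, 5, 7, 9 };   // exactly y = 1 + 2*x
        double[] b = fit(x, y);
        System.out.println(b[0] + " " + b[1]); // prints 1.0 2.0
    }
}
```

[Note that with double arithmetic this sketch is subject to the same rounding limitations discussed in this thread; BigMatrix avoids them at the cost of speed.]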
>>> I believe the formula is 2 * (3 + max(9, matrixScale)), and this
>>> formula is evaluated when the decomposition is created. After the
>>> decomposition is created, its scale won't change even if you change
>>> the matrix's scale. The scale of a decomposition is never smaller
>>> than 24 (according to the above formula).
>>>
>>> I'm not happy with this, and have been meaning to change it.
>>> Decompositions should have the same scale as their parent matrix,
>>> and when it is changed, be updated "everywhere".
>>>
>>> Would you prefer that?
>>>
>>>> 2. Is there a way to use the solvers in ojAlgo to do the linear
>>>> regression? Or is that not something that is provided?
>>>
>>> How are you currently doing it?
>>>
>>> I imagine the best way would be to simply build an over-determined
>>> set of linear equations - Ax=B - and call A.solve(B). Internally,
>>> BigMatrix would then use the QR decomposition to give a least-squares
>>> estimation of x. That would be my first attempt.
>>>
>>> You could use the QP solver to do the same thing, but I don't see
>>> how that would simplify or improve anything.
>>>
>>> What version of ojAlgo are you using - v1, v2 or CVS?
>>>
>>> I recommend working from CVS. Unfortunately, I have moved things
>>> around a bit between versions...
>>>
>>> I don't have access to a profiling tool, but I know they can be very
>>> useful in finding bottlenecks. If you can identify a problem, and/or
>>> suggest a specific improvement, I'll be happy to help you implement
>>> it.
>>>
>>> /Anders
>>>
>>>> ----------------------
>>>> Matthew Inger
>>>> Design Architect
>>>> Synygy, Inc
>>>> 610-664-7433 x7770
>>>>
>>>> -------------------------------------------------------
>>>> This SF.Net email is sponsored by: IBM Linux Tutorials
>>>> Free Linux tutorial presented by Daniel Robbins, President and CEO
>>>> of GenToo technologies.
>>>> Learn everything from fundamentals to system administration.
>>>> http://ads.osdn.com/?ad_id=1470&alloc_id=3638&op=click
>>>> _______________________________________________
>>>> ojAlgo-user mailing list
>>>> ojA...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/ojalgo-user
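[Editor's note: Anders' suggestion at the top of the thread - group regressions that share the same x values so the factorisation of A is computed only once - can be sketched as follows. For the two-column A used in this thread, the reusable quantity (A^T A)^-1 A^T is small enough to write out directly. This is plain Java; the 2004-era ojAlgo API is not assumed.]

```java
// Precompute the pseudo-inverse of A = [1 x] once, then fit each y
// vector that shares those x values with two dot products - no new
// decomposition per regression.
public final class BatchRegression {

    private final double[][] pinv; // 2 x n pseudo-inverse of A

    BatchRegression(double[] x) {
        int n = x.length;
        double sx = 0.0, sxx = 0.0;
        for (double v : x) { sx += v; sxx += v * v; }
        double det = n * sxx - sx * sx; // det(A^T A)
        pinv = new double[2][n];
        for (int i = 0; i < n; i++) {
            // Column i of (A^T A)^-1 A^T, written out for the 2x2 case.
            pinv[0][i] = (sxx - sx * x[i]) / det; // row for b0
            pinv[1][i] = (n * x[i] - sx) / det;   // row for b1
        }
    }

    /** Fit y = b0 + b1*x for one y vector sharing the precomputed x values. */
    double[] fit(double[] y) {
        double b0 = 0.0, b1 = 0.0;
        for (int i = 0; i < y.length; i++) {
            b0 += pinv[0][i] * y[i];
            b1 += pinv[1][i] * y[i];
        }
        return new double[] { b0, b1 };
    }
}
```

[With 24 monthly points per row and millions of rows grouped by shared month numbers, the per-row cost drops to O(n) multiply-adds, which is the effect Anders describes.]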