From: Anders P. <and...@op...> - 2004-04-20 09:09:14
On 2004-04-19, at 20.56, Inger, Matthew wrote:

> Sorry,
>
> about 28% is spent in multiplyRight, and 20% is spent in the BigMatrix
> constructor.

Generally speaking, working with BigDecimal will be slower than working
with double. That's the price you pay to get rid of the
rounding/representation errors.

Would it help to implement multiplyRight in one of the MatrixStore
subclasses?

> If I switch to JamaMatrix, the time goes back down to what it was
> using Jama by itself.

This is a feature of ojAlgo! You can easily switch between three
implementations with different characteristics:

BigMatrix is more precise
JamaMatrix is faster
JampackMatrix works with complex numbers

I'm sure each of the implementations can be improved in many respects,
but you can't have it all.

> -----Original Message-----
> From: Anders Peterson [mailto:and...@op...]
> Sent: Monday, April 19, 2004 2:14 PM
> To: Inger, Matthew
> Subject: Re: [ojAlgo-user] Questions
>
>
> On 2004-04-19, at 20.04, Inger, Matthew wrote:
>
>> Let me at least see how turning the scale down helps.
>> It appears that the majority of the time is spent in multiply
>> and in the constructor.
>
> The BigMatrix or the QRDecomposition constructor?
>
>> I can send you the screenshot of what
>> devpartner is telling me if you want.
>
> Yes, please.
>
> /Anders
>
>> -----Original Message-----
>> From: Anders Peterson [mailto:and...@op...]
>> Sent: Monday, April 19, 2004 2:02 PM
>> To: Inger, Matthew
>> Cc: 'oja...@li...'
>> Subject: Re: [ojAlgo-user] Questions
>>
>>
>> On 2004-04-19, at 18.33, Inger, Matthew wrote:
>>
>>> 1. Have any performance tests been run with BigMatrix?
>>
>> Not really...
>>
>>> I'm using BigMatrix to do a LinearRegression algorithm. I had
>>> previously been using Jama directly, but our FD department noticed
>>> a difference between the results given by java and the LINEST
>>> function in Excel. I have determined that the difference is
>>> DEFINITELY due to rounding limitations in the java double primitive
>>> type.
>>>
>>> These issues go away when I use the ojAlgo library,
>>
>> That's great news to me - that is the reason I created ojAlgo!
>>
>>> however, the time required to do the calculations increases to
>>> about 12-13x what it was using regular double calculations.
>>>
>>> Doing 10,000 linear regressions with Jama took about .781s
>>> (according to junit), and with BigMatrix it took 10.345s.
>>
>> I wasn't aware the difference was that big. ;-)
>>
>>> Any idea why such a huge disparity?
>>
>> Try altering the scale. A larger scale means better results, but it
>> takes longer. You can change a matrix's scale whenever you want - the
>> new scale will be used from then on. Elements are not rounded unless
>> you call enforceScale().
>>
>> Internally BigMatrix uses various decompositions for some of the more
>> complex calculations. These decompositions inherit a scale from the
>> parent matrix. It is however not the same - it's bigger. I believe
>> the formula is 2 * (3 + max(9, matrixScale)), and it is evaluated
>> when the decomposition is created. After the decomposition is
>> created its scale won't change even if you change the matrix's
>> scale. The scale of a decomposition is never smaller than 24
>> (according to the above formula).
>>
>> I'm not happy with this, and have been meaning to change it.
>> Decompositions should have the same scale as their parent matrix,
>> and when that scale changes it should be updated "everywhere".
>>
>> Would you prefer that?
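
A rough illustration of the scale mechanics described above, using
plain java.math.BigDecimal rather than ojAlgo itself; the
decompositionScale helper simply writes out the quoted formula and is
not an actual ojAlgo method:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class ScaleSketch {

        // The decomposition scale rule quoted above: 2 * (3 + max(9, matrixScale)).
        static int decompositionScale(int matrixScale) {
            return 2 * (3 + Math.max(9, matrixScale));
        }

        public static void main(String[] args) {

            System.out.println(decompositionScale(6));  // 24 - the minimum
            System.out.println(decompositionScale(20)); // 46

            // A larger scale keeps more digits but makes every operation slower.
            BigDecimal third = BigDecimal.ONE.divide(
                    BigDecimal.valueOf(3), 24, RoundingMode.HALF_EVEN);
            System.out.println(third); // 0.333333333333333333333333

            // Rounding down to a smaller scale - roughly what enforcing a
            // smaller scale on every element would amount to.
            System.out.println(third.setScale(6, RoundingMode.HALF_EVEN)); // 0.333333
        }
    }
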
>>> 2. Is there a way to use the solvers in ojAlgo to do the linear
>>> regression? Or is that not something that is provided?
>>
>> How are you currently doing it?
>>
>> I imagine the best way would be to simply build an over-determined
>> set of linear equations - Ax=B - and call A.solve(B). Internally
>> BigMatrix would then use the QR decomposition to give a least
>> squares estimation of x. That would be my first attempt.
>>
>> You could use the QP solver to do the same thing, but I don't see
>> how that would simplify or improve anything.
>>
>> What version of ojAlgo are you using - v1, v2 or CVS?
>>
>> I recommend working from CVS. Unfortunately I have moved things
>> around a bit between versions...
>>
>> I don't have access to a profiling tool, but I know they can be very
>> useful in finding bottlenecks. If you can identify a problem, and/or
>> suggest a specific improvement, I'll be happy to help you implement
>> it.
>>
>> /Anders
>>
>>> ----------------------
>>> Matthew Inger
>>> Design Architect
>>> Synygy, Inc
>>> 610-664-7433 x7770
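
For what it's worth, a minimal sketch of the over-determined Ax=B
approach described above, written against Jama (which the thread
already uses) since the exact ojAlgo calls aren't shown here; the data
values are made up for illustration:

    import Jama.Matrix;

    public class LeastSquaresSketch {

        // Fit y = b0 + b1*x by building an over-determined system A x = y.
        // Jama's solve() returns the least squares solution (via QR) when
        // the system has more equations than unknowns.
        public static void main(String[] args) {

            double[] x = { 1.0, 2.0, 3.0, 4.0, 5.0 };
            double[] y = { 2.1, 3.9, 6.2, 8.1, 9.8 };

            // Design matrix: a column of ones for the intercept, then x.
            double[][] a = new double[x.length][2];
            for (int i = 0; i < x.length; i++) {
                a[i][0] = 1.0;
                a[i][1] = x[i];
            }

            Matrix A = new Matrix(a);
            Matrix B = new Matrix(y, y.length); // column vector

            Matrix coef = A.solve(B); // [b0, b1], least squares estimate

            System.out.println("intercept = " + coef.get(0, 0));
            System.out.println("slope     = " + coef.get(1, 0));
        }
    }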