From: Andras V. <and...@ui...> - 2014-10-30 12:38:41
Dear Raimar,

I think enabling compression as an option is a very good idea, especially if, as you say, it can lead to significant gains in some cases. Perhaps a small rationale for why you opted for bzip2 would be in order.

I think that compressing the individual archives is the better option, because if the zip stream is instead split at continuation, the structure of the state file will depend on the sequence of continuations, which may cause other problems in the future.

Best regards,
András

On Wed, Oct 22, 2014 at 4:25 PM, Raimar Sandner <rai...@ui...> wrote:
> Dear András,
>
> how is everything going in Budapest? I hope you are well.
>
> Maybe you remember the time when we discussed the binary boost serialization;
> back then I was under the impression that compressing the state files did not
> gain much disk space.
>
> Now with my real simulations, however, I found that it makes a huge difference
> in disk space, and therefore I need to compress the state files with bzip2. I
> thought it would be nice if, in Python, I could read the compressed state files
> directly. Therefore I added transparent bzip2 support for _reading_ of state
> files in the branch called "compression". This is done by changing the
> interface to std::istream instead of std::ifstream and working with boost's
> filtering_istream, to which one can add a decompressor if necessary. This is
> actually quite nice. On the downside, we need yet another binary boost library,
> boost_iostreams. This library is detected by cmake, and compression support is
> enabled or disabled accordingly.
>
> In principle the _writing_ of compressed state files also works. However,
> there is a problem with continuation: we end up with several full bzip2 files
> concatenated. This is actually legal (it is called a multi-stream bzip2 file)
> and I can decompress these files with bunzip2 without problems. However,
> unfortunately the boost decompressor has a bug and cannot handle these files.
> As a result, I have disabled the _writing_ of compressed state files for the
> moment. Instead, I just use the external bzip2 after the trajectory is done.
> If I want to continue a trajectory, I have to decompress first, of course.
>
> One could solve this problem easily by compressing the individual archives
> inside the state file, not the file as a whole. But then one could no longer
> use the external bzip2/bunzip2 utilities on the state files.
>
> What do you think of compression as such, and of the implementation?
>
> Best regards
> Raimar
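The multi-stream behaviour under discussion can be demonstrated with Python's standard bz2 module, which (unlike the boost decompressor at the time) handles concatenated streams without problems; the payload strings here are of course made up:

```python
import bz2

# Two independently compressed streams, as produced by appending a newly
# bzip2-compressed archive at each continuation of a trajectory.
part1 = bz2.compress(b"state at t=1\n")
part2 = bz2.compress(b"state at t=2\n")

# Concatenating them yields a legal multi-stream bzip2 file...
multistream = part1 + part2

# ...which bunzip2, and Python's bz2 module, decompress as a whole.
print(bz2.decompress(multistream))  # b'state at t=1\nstate at t=2\n'
```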
From: Raimar S. <rai...@ui...> - 2014-10-26 16:27:44
Dear András,

just a heads-up: boost 1.56.0 seems to introduce an incompatible change to the binary archive format. State files generated with this boost version cannot be read with earlier versions of boost. I noticed because I use the newer boost version on the cluster but not on my desktop. Fortunately, backwards compatibility seems to be provided: the new boost can read archives generated with older versions.

Best regards
Raimar
From: Raimar S. <rai...@ui...> - 2014-10-22 14:26:24
Dear András,

how is everything going in Budapest? I hope you are well.

Maybe you remember the time when we discussed the binary boost serialization; back then I was under the impression that compressing the state files did not gain much disk space.

Now with my real simulations, however, I found that it makes a huge difference in disk space, and therefore I need to compress the state files with bzip2. I thought it would be nice if, in Python, I could read the compressed state files directly. Therefore I added transparent bzip2 support for _reading_ of state files in the branch called "compression". This is done by changing the interface to std::istream instead of std::ifstream and working with boost's filtering_istream, to which one can add a decompressor if necessary. This is actually quite nice. On the downside, we need yet another binary boost library, boost_iostreams. This library is detected by cmake, and compression support is enabled or disabled accordingly.

In principle the _writing_ of compressed state files also works. However, there is a problem with continuation: we end up with several full bzip2 files concatenated. This is actually legal (it is called a multi-stream bzip2 file) and I can decompress these files with bunzip2 without problems. However, unfortunately the boost decompressor has a bug and cannot handle these files. As a result, I have disabled the _writing_ of compressed state files for the moment. Instead, I just use the external bzip2 after the trajectory is done. If I want to continue a trajectory, I have to decompress first, of course.

One could solve this problem easily by compressing the individual archives inside the state file, not the file as a whole. But then one could no longer use the external bzip2/bunzip2 utilities on the state files.

What do you think of compression as such, and of the implementation?

Best regards
Raimar
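The "transparent reading" idea (add a decompressor only when the file is actually compressed) can be sketched in Python with the standard bz2 module. This is an illustrative helper, not C++QED API; it sniffs the bzip2 magic bytes the way the boost filtering_istream approach inserts a bzip2_decompressor on demand:

```python
import bz2


def open_state_file(path):
    """Return a binary stream over a possibly bzip2-compressed state file.

    Sketch only: sniff the bzip2 magic number and decompress on the fly
    if present, otherwise hand back the plain file object.
    """
    f = open(path, "rb")
    magic = f.read(3)
    f.seek(0)
    if magic == b"BZh":        # bzip2 magic number
        return bz2.BZ2File(f)  # transparent, multi-stream-capable decompression
    return f
```

Callers then read the stream uniformly, whether or not the file on disk was compressed with the external bzip2 utility.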
From: Michael L. <mic...@un...> - 2014-05-11 22:05:04
Hi András,

testing and the start of the semester delayed the port of heev. I just tagged a new release 1.1.0 that contains lapack::ev for Hermitian matrices. On my machines it passed all tests. Let me know if it works for you.

Cheers,
Michael

Am 13.04.2014 um 19:13 schrieb Michael Lehn <mic...@un...>:
> Hi András,
>
> I already started with the heev port. I hope to finish it this week.
>
> Cheers,
>
> Michael
>
> Am 13.04.2014 um 19:09 schrieb Andras Vukics <and...@ui...>:
>
>> Hi Michael,
>>
>> Thanks for your explanation. Far from being confusing, it made me understand the concept.
>>
>> My confusion had probably stemmed from the fact that cxxlapack is distributed together with flens (same git repository), so that I thought it was an integral part (as "default" backend). But then this is used only for testing, benchmarks, etc., and for those parts of lapack that are not yet rewritten in flens.
>>
>> Seeing the Hermitian eigenproblem implemented would be great. It has great relevance in quantum physics for calculating energy spectra of systems (eigenvalues of the Hamilton operator, which is normally Hermitian).
>>
>> Indeed, the main point for us is to be independent of a pre-installed lapack. C++QED now uses cmake for its build system, and with the external-project feature of cmake we could even achieve that a given git version of flens gets checked out during the build process of C++QED and used by the framework. This is probably the least bothersome way for a user.
>>
>> Best regards,
>> András
>>
>> On Fri, Apr 11, 2014 at 11:08 AM, Michael Lehn <mic...@un...> wrote:
>> Hi András,
>>
>> thanks for testing FLENS on your ARCH box! And of course for using FLENS :-)
>>
>> About CXXLAPACK and FLENS-LAPACK:
>>
>> - CXXLAPACK is a C++ wrapper for LAPACK, like CBLAS is a C wrapper for BLAS or CLAPACK for LAPACK. So this requires a LAPACK backend. A link error like "undefined reference to 'zheev_'" means that the backend is missing. This wrapper is pretty low-level; it only eases the task of calling Fortran functions from C++. The wrapper for (z)heev looks like this:
>>
>> http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/cxxlapack/interface/heev.tcc.html
>>
>> - FLENS-LAPACK is a C++ rewrite of LAPACK, unfortunately only of a subset of LAPACK. As you noticed, e.g. (z)heev is not yet rewritten. By rewrite I actually mean that it is a standalone C++ implementation of LAPACK, so no LAPACK backend is needed. But for testing my reimplementation of FLENS-LAPACK I compare results against LAPACK. For users, the advantage of FLENS-LAPACK is that you don't have the LAPACK dependency. Furthermore you can use mpfr data types. My reason for FLENS-LAPACK is to prove that with C++ & FLENS you can do the same implementation as with Fortran (same speed), just simpler.
>>
>> If you compile with USE_CXXLAPACK then FLENS-LAPACK is preferred over LAPACK. But if a function is missing in FLENS-LAPACK, the LAPACK implementation gets used. That means for a non-symmetric/non-Hermitian matrix 'ev' calls the FLENS-LAPACK implementation, and (at the moment) for a symmetric/Hermitian matrix it calls the LAPACK version (through the CXXLAPACK wrapper).
>>
>> -> So a reimplementation of heev is next on the list. That would mean that you are independent of LAPACK.
>>
>> If you compile with ALWAYS_USE_CXXLAPACK then only the LAPACK backend gets used, even if a C++ implementation exists in FLENS-LAPACK. I use this option for benchmarking FLENS-LAPACK against LAPACK.
>>
>> Speaking of benchmarks: the performance of LAPACK does not depend on the Fortran compiler. Analogously, neither does the performance of FLENS-LAPACK depend on the C++ compiler. In both cases it depends on the speed of the BLAS kernel. FLENS includes a simple default implementation similar to Netlib's REFBLAS. So for high performance there still remains a dependency on a tuned BLAS core.
>>
>> Sorry for the long mail. But I am watching my students taking their math final right now, so I have a lot of time to type. I hope it is not confusing even more :-)
>>
>> Cheers,
>>
>> Michael
From: Raimar S. <rai...@ui...> - 2014-04-29 13:00:34
Dear Wojciech,

I just uploaded a bugfix release 2.10.2 to SourceForge, which also includes András' mentioned fix, so there is no need to use the git version anymore.

In our latest major release 2.10, improving user friendliness was one of the main goals. We appreciate any feedback on issues you might have encountered, to further improve C++QED.

Very best regards
Raimar

On Thursday, April 24, 2014 15:30:23 Vukics András wrote:
> Hi,
>
> Thanks for your mail.
>
> In principle, C++QED has no limit on the system arity; it's only that BLITZ_ARRAY_LARGEST_RANK needs to be increased for arity larger than 11 for pure state-vector and 5 for density-operator manipulations.
>
> So, you have hit upon a bug, for which thank you again! I have just pushed a fix to both the Development and master branches in the git repository on SourceForge.
>
> We will eventually package this as a bugfix release for SourceForge and Ubuntu, but in the meantime please try the git version, and let me know if it works now.
>
> In the newest git version you will find a new script 6qbits.cc in directory CPPQEDscripts, showing what macros might be affected when increasing the system arity, but usually it's only BLITZ_ARRAY_LARGEST_RANK and QUANTUMOPERATOR_TRIDIAGONAL_MAX_RANK.
>
> Best regards,
> András
>
> On Thu, Apr 24, 2014 at 3:26 PM, Wojciech Kozlowski <woj...@ph...> wrote:
> > Dear Raimar,
> >
> > I changed the value by its #define everywhere it appeared (in TMP_Tools.h in CPPQEDcore/utils, and there are also two blitz.h headers with this definition) and I recompiled everything. It didn't work.
> >
> > Has anybody ever used C++QED to simulate a system with more than 5 components? I think that's my problem here, as adding an additional interaction to wrap around and create a periodic lattice (instead of adding a new site) compiles and runs smoothly.
> >
> > Cheers,
> > Wojciech
> >
> > On 24/04/14 13:33, Raimar Sandner wrote:
> > > Dear Wojciech,
> > >
> > > thanks for your feedback!
> > >
> > > How did you set BLITZ_ARRAY_LARGEST_RANK, as a #define or as a -D compiler flag? It has to be set before any blitz header is included.
> > >
> > > I think C++QED itself also has to be recompiled with this new value, not only the script; probably András can explain the details better. He is the one that implemented this feature in blitz and I have not used it yet :)
> > >
> > > Currently I am traveling and cannot check my emails regularly. Maybe András will get in touch with you soon; otherwise I will have a closer look at the problem next week.
> > >
> > > Cheers
> > > Raimar
> > >
> > > On Thursday, April 24, 2014 12:19:41 Wojciech Kozlowski wrote:
> > >> Hi,
> > >>
> > >> I've recently tried using C++QED for my own research, which is a Bose-Hubbard lattice model coupled to a cavity. I've implemented the actual BHM elements already (which are simply site elements coupled with a tunnelling interaction element between nearest neighbours), but I've encountered an obstacle when I try to make the system larger than 5 sites: it fails to compile. I've attached the compiler error output to this e-mail. My own attempt to solve this problem got me to the BLITZ_ARRAY_LARGEST_RANK value, but changing it to a higher value did not solve the problem. Does C++QED have a very limited system size? I hope there's a workaround, as at the moment it's the best tool that I found for what I need.
> > >>
> > >> Cheers,
> > >> Wojciech
From: Andras V. <and...@ui...> - 2014-04-24 13:36:53
Hi,

Thanks for your mail.

In principle, C++QED has no limit on the system arity; it's only that BLITZ_ARRAY_LARGEST_RANK needs to be increased for arity larger than 11 for pure state-vector and 5 for density-operator manipulations.

So, you have hit upon a bug, for which thank you again! I have just pushed a fix to both the Development and master branches in the git repository on SourceForge.

We will eventually package this as a bugfix release for SourceForge and Ubuntu, but in the meantime please try the git version, and let me know if it works now.

In the newest git version you will find a new script 6qbits.cc in directory CPPQEDscripts, showing what macros might be affected when increasing the system arity, but usually it's only BLITZ_ARRAY_LARGEST_RANK and QUANTUMOPERATOR_TRIDIAGONAL_MAX_RANK.

Best regards,
András

On Thu, Apr 24, 2014 at 3:26 PM, Wojciech Kozlowski <woj...@ph...> wrote:
> Dear Raimar,
>
> I changed the value by its #define everywhere it appeared (in TMP_Tools.h in CPPQEDcore/utils, and there are also two blitz.h headers with this definition) and I recompiled everything. It didn't work.
>
> Has anybody ever used C++QED to simulate a system with more than 5 components? I think that's my problem here, as adding an additional interaction to wrap around and create a periodic lattice (instead of adding a new site) compiles and runs smoothly.
>
> Cheers,
> Wojciech
>
> On 24/04/14 13:33, Raimar Sandner wrote:
> > Dear Wojciech,
> >
> > thanks for your feedback!
> >
> > How did you set BLITZ_ARRAY_LARGEST_RANK, as a #define or as a -D compiler flag? It has to be set before any blitz header is included.
> >
> > I think C++QED itself also has to be recompiled with this new value, not only the script; probably András can explain the details better. He is the one that implemented this feature in blitz and I have not used it yet :)
> >
> > Currently I am traveling and cannot check my emails regularly. Maybe András will get in touch with you soon; otherwise I will have a closer look at the problem next week.
> >
> > Cheers
> > Raimar
> >
> > On Thursday, April 24, 2014 12:19:41 Wojciech Kozlowski wrote:
> >> Hi,
> >>
> >> I've recently tried using C++QED for my own research, which is a Bose-Hubbard lattice model coupled to a cavity. I've implemented the actual BHM elements already (which are simply site elements coupled with a tunnelling interaction element between nearest neighbours), but I've encountered an obstacle when I try to make the system larger than 5 sites: it fails to compile. I've attached the compiler error output to this e-mail. My own attempt to solve this problem got me to the BLITZ_ARRAY_LARGEST_RANK value, but changing it to a higher value did not solve the problem. Does C++QED have a very limited system size? I hope there's a workaround, as at the moment it's the best tool that I found for what I need.
> >>
> >> Cheers,
> >> Wojciech
From: Wojciech K. <woj...@ph...> - 2014-04-24 13:27:00
Dear Raimar,

I changed the value by its #define everywhere it appeared (in TMP_Tools.h in CPPQEDcore/utils, and there are also two blitz.h headers with this definition) and I recompiled everything. It didn't work.

Has anybody ever used C++QED to simulate a system with more than 5 components? I think that's my problem here, as adding an additional interaction to wrap around and create a periodic lattice (instead of adding a new site) compiles and runs smoothly.

Cheers,
Wojciech

On 24/04/14 13:33, Raimar Sandner wrote:
> Dear Wojciech,
>
> thanks for your feedback!
>
> How did you set BLITZ_ARRAY_LARGEST_RANK, as a #define or as a -D compiler flag? It has to be set before any blitz header is included.
>
> I think C++QED itself also has to be recompiled with this new value, not only the script; probably András can explain the details better. He is the one that implemented this feature in blitz and I have not used it yet :)
>
> Currently I am traveling and cannot check my emails regularly. Maybe András will get in touch with you soon; otherwise I will have a closer look at the problem next week.
>
> Cheers
> Raimar
>
> On Thursday, April 24, 2014 12:19:41 Wojciech Kozlowski wrote:
>> Hi,
>>
>> I've recently tried using C++QED for my own research, which is a Bose-Hubbard lattice model coupled to a cavity. I've implemented the actual BHM elements already (which are simply site elements coupled with a tunnelling interaction element between nearest neighbours), but I've encountered an obstacle when I try to make the system larger than 5 sites: it fails to compile. I've attached the compiler error output to this e-mail. My own attempt to solve this problem got me to the BLITZ_ARRAY_LARGEST_RANK value, but changing it to a higher value did not solve the problem. Does C++QED have a very limited system size? I hope there's a workaround, as at the moment it's the best tool that I found for what I need.
>>
>> Cheers,
>> Wojciech
From: Raimar S. <rai...@ui...> - 2014-04-24 12:32:59
Dear Wojciech,

thanks for your feedback!

How did you set BLITZ_ARRAY_LARGEST_RANK, as a #define or as a -D compiler flag? It has to be set before any blitz header is included.

I think C++QED itself also has to be recompiled with this new value, not only the script; probably András can explain the details better. He is the one that implemented this feature in blitz and I have not used it yet :)

Currently I am traveling and cannot check my emails regularly. Maybe András will get in touch with you soon; otherwise I will have a closer look at the problem next week.

Cheers
Raimar

On Thursday, April 24, 2014 12:19:41 Wojciech Kozlowski wrote:
> Hi,
>
> I've recently tried using C++QED for my own research, which is a Bose-Hubbard lattice model coupled to a cavity. I've implemented the actual BHM elements already (which are simply site elements coupled with a tunnelling interaction element between nearest neighbours), but I've encountered an obstacle when I try to make the system larger than 5 sites: it fails to compile. I've attached the compiler error output to this e-mail. My own attempt to solve this problem got me to the BLITZ_ARRAY_LARGEST_RANK value, but changing it to a higher value did not solve the problem. Does C++QED have a very limited system size? I hope there's a workaround, as at the moment it's the best tool that I found for what I need.
>
> Cheers,
> Wojciech
From: Wojciech K. <woj...@ph...> - 2014-04-24 11:57:52
Hi,

I've recently tried using C++QED for my own research, which is a Bose-Hubbard lattice model coupled to a cavity. I've implemented the actual BHM elements already (which are simply site elements coupled with a tunnelling interaction element between nearest neighbours), but I've encountered an obstacle when I try to make the system larger than 5 sites: it fails to compile. I've attached the compiler error output to this e-mail.

My own attempt to solve this problem got me to the BLITZ_ARRAY_LARGEST_RANK value, but changing it to a higher value did not solve the problem. Does C++QED have a very limited system size? I hope there's a workaround, as at the moment it's the best tool that I found for what I need.

Cheers,
Wojciech
From: Michael L. <mic...@un...> - 2014-04-13 17:13:31
Hi András,

I already started with the heev port. I hope to finish it this week.

Cheers,

Michael

Am 13.04.2014 um 19:09 schrieb Andras Vukics <and...@ui...>:
> Hi Michael,
>
> Thanks for your explanation. Far from being confusing, it made me understand the concept.
>
> My confusion had probably stemmed from the fact that cxxlapack is distributed together with flens (same git repository), so that I thought it was an integral part (as "default" backend). But then this is used only for testing, benchmarks, etc., and for those parts of lapack that are not yet rewritten in flens.
>
> Seeing the Hermitian eigenproblem implemented would be great. It has great relevance in quantum physics for calculating energy spectra of systems (eigenvalues of the Hamilton operator, which is normally Hermitian).
>
> Indeed, the main point for us is to be independent of a pre-installed lapack. C++QED now uses cmake for its build system, and with the external-project feature of cmake we could even achieve that a given git version of flens gets checked out during the build process of C++QED and used by the framework. This is probably the least bothersome way for a user.
>
> Best regards,
> András
>
> On Fri, Apr 11, 2014 at 11:08 AM, Michael Lehn <mic...@un...> wrote:
> Hi András,
>
> thanks for testing FLENS on your ARCH box! And of course for using FLENS :-)
>
> About CXXLAPACK and FLENS-LAPACK:
>
> - CXXLAPACK is a C++ wrapper for LAPACK, like CBLAS is a C wrapper for BLAS or CLAPACK for LAPACK. So this requires a LAPACK backend. A link error like "undefined reference to 'zheev_'" means that the backend is missing. This wrapper is pretty low-level; it only eases the task of calling Fortran functions from C++. The wrapper for (z)heev looks like this:
>
> http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/cxxlapack/interface/heev.tcc.html
>
> - FLENS-LAPACK is a C++ rewrite of LAPACK, unfortunately only of a subset of LAPACK. As you noticed, e.g. (z)heev is not yet rewritten. By rewrite I actually mean that it is a standalone C++ implementation of LAPACK, so no LAPACK backend is needed. But for testing my reimplementation of FLENS-LAPACK I compare results against LAPACK. For users, the advantage of FLENS-LAPACK is that you don't have the LAPACK dependency. Furthermore you can use mpfr data types. My reason for FLENS-LAPACK is to prove that with C++ & FLENS you can do the same implementation as with Fortran (same speed), just simpler.
>
> If you compile with USE_CXXLAPACK then FLENS-LAPACK is preferred over LAPACK. But if a function is missing in FLENS-LAPACK, the LAPACK implementation gets used. That means for a non-symmetric/non-Hermitian matrix 'ev' calls the FLENS-LAPACK implementation, and (at the moment) for a symmetric/Hermitian matrix it calls the LAPACK version (through the CXXLAPACK wrapper).
>
> -> So a reimplementation of heev is next on the list. That would mean that you are independent of LAPACK.
>
> If you compile with ALWAYS_USE_CXXLAPACK then only the LAPACK backend gets used, even if a C++ implementation exists in FLENS-LAPACK. I use this option for benchmarking FLENS-LAPACK against LAPACK.
>
> Speaking of benchmarks: the performance of LAPACK does not depend on the Fortran compiler. Analogously, neither does the performance of FLENS-LAPACK depend on the C++ compiler. In both cases it depends on the speed of the BLAS kernel. FLENS includes a simple default implementation similar to Netlib's REFBLAS. So for high performance there still remains a dependency on a tuned BLAS core.
>
> Sorry for the long mail. But I am watching my students taking their math final right now, so I have a lot of time to type. I hope it is not confusing even more :-)
>
> Cheers,
>
> Michael
From: Andras V. <and...@ui...> - 2014-04-13 17:09:31
Hi Michael, Thanks for your explanation. Far from being confusing, it made me understand the concept. My confusion had probably stemmed from the fact that cxxlapack is distributed together with flens (same git repository), so that I thought it's an integral part (as "default" backend). But then this is used only for testing, benchmarks, etc. And for those parts of lapack that are not yet rewritten in flens. Seeing the Hermitian eigenproblem implemented would be great. This has great relevance in quantum physics for calculating energy spectra of systems (eigenvalues of the Hamilton operator, which is normally Hermitian). Indeed, the main point for us is to be independent of a pre-installed lapack. C++QED now uses cmake for its build system, and with the external-project feature of cmake, we could even arrange for a given git version of flens to be checked out during the build process of C++QED and used by the framework. This is probably the least bothersome way for a user. Best regards, András On Fri, Apr 11, 2014 at 11:08 AM, Michael Lehn <mic...@un...> wrote: > Hi András, > > thanks for testing FLENS on your ARCH box! And of course for using FLENS > :-) > > About CXXLAPACK and FLENS-LAPACK: > > - CXXLAPACK is a C++ wrapper for LAPACK. Like CBLAS is a C-wrapper for > BLAS > or CLAPACK for LAPACK. So this requires a LAPACK backend. Having a link > error like "undefined reference to 'zheev_'" means that the backend is > missing. > This wrapper is pretty low-level. It only eases the task of calling > Fortran > functions from C++. The wrapper for (z)heev looks like this > > > http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/cxxlapack/interface/heev.tcc.html > > - FLENS-LAPACK is a C++ rewrite of LAPACK.... unfortunately only of a > subset > of LAPACK. As you noticed e.g. (z)heev is not yet rewritten. By rewrite > I actually mean that it is a standalone C++ implementation of LAPACK. So > no LAPACK backend is needed. 
But for testing my reimplementation of > FLENS-LAPACK I compare results against LAPACK. For users the advantage > of > FLENS-LAPACK is that you don't have the LAPACK dependency. Furthermore > you > can use mpfr data types. My reason for FLENS-LAPACK is to prove that > with > C++ & FLENS you can do the same implementation as with Fortran (same > speed) > just simpler. > > If you compile with USE_CXXLAPACK then FLENS-LAPACK is preferred over > LAPACK. But > if a functions is missing in FLENS-LAPACK the LAPACK implementation gets > used. That > means for a non-symmetrix/non-hermitian matrix 'ev' calls the FLENS-LAPACK > implementation. And (at the moment) it calls for a symmetric/hermitian > matrix the > LAPACK Version (through the CXXLAPACK wrapper). > > -> So a reimplementation of heev is next on the list. That would mean > that you > are independent of LAPACK > > If you compile with ALWAYS_USE_CXXLAPACK then only the LAPACK beackend > gets used. > Even if a C++ implementation exists in FLENS-LAPACK. I use this option for > benchmarking FLENS-LAPACK against LAPACK. > > Speaking of benchmarks. The performance of LAPACK does not depend on the > Fortran > compiler. Analogously, neither does the performance of FLENS-LAPACK > depend on > the C++ compiler. In both cases it depends on the speed of the BLAS > kernel. In > FLENS is include a simple default implementation similar to Netlibs > REFBLAS. > > So for high performance there still remains a dependency for a tuned BLAS > core. > > Sorry for the long mail. But I am watching my student taking their math > final right > now so I have a lot of time to type. I hope it is not confusing even more > :-) > > Cheers, > > Michael > > |
From: Michael L. <mic...@un...> - 2014-04-11 09:08:31
Hi András, thanks for testing FLENS on your ARCH box! And of course for using FLENS :-) About CXXLAPACK and FLENS-LAPACK: - CXXLAPACK is a C++ wrapper for LAPACK. Like CBLAS is a C-wrapper for BLAS or CLAPACK for LAPACK. So this requires a LAPACK backend. Having a link error like "undefined reference to 'zheev_'" means that the backend is missing. This wrapper is pretty low-level. It only eases the task of calling Fortran functions from C++. The wrapper for (z)heev looks like this http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/cxxlapack/interface/heev.tcc.html - FLENS-LAPACK is a C++ rewrite of LAPACK... unfortunately only of a subset of LAPACK. As you noticed, e.g. (z)heev is not yet rewritten. By rewrite I actually mean that it is a standalone C++ implementation of LAPACK. So no LAPACK backend is needed. But for testing my reimplementation of FLENS-LAPACK I compare results against LAPACK. For users the advantage of FLENS-LAPACK is that you don't have the LAPACK dependency. Furthermore you can use mpfr data types. My reason for FLENS-LAPACK is to prove that with C++ & FLENS you can do the same implementation as with Fortran (same speed) just simpler. If you compile with USE_CXXLAPACK then FLENS-LAPACK is preferred over LAPACK. But if a function is missing in FLENS-LAPACK the LAPACK implementation gets used. That means for a non-symmetric/non-Hermitian matrix 'ev' calls the FLENS-LAPACK implementation. And (at the moment) it calls for a symmetric/Hermitian matrix the LAPACK version (through the CXXLAPACK wrapper). -> So a reimplementation of heev is next on the list. That would mean that you are independent of LAPACK. If you compile with ALWAYS_USE_CXXLAPACK then only the LAPACK backend gets used, even if a C++ implementation exists in FLENS-LAPACK. I use this option for benchmarking FLENS-LAPACK against LAPACK. Speaking of benchmarks: the performance of LAPACK does not depend on the Fortran compiler. 
Analogously, neither does the performance of FLENS-LAPACK depend on the C++ compiler. In both cases it depends on the speed of the BLAS kernel. In FLENS I include a simple default implementation similar to Netlib's REFBLAS. So for high performance there still remains a dependency on a tuned BLAS core. Sorry for the long mail. But I am watching my students taking their math final right now so I have a lot of time to type. I hope it is not even more confusing :-) Cheers, Michael Am 11.04.2014 um 10:25 schrieb Andras Vukics <and...@ui...>: > Dear Michael, > > I have just run the testsuite with your latest commits on my ARCH box, and everything seems to be fine. I used gcc 4.8.2. > > As to USE_CXXLAPACK: what you say makes me suspect that I do not understand the fundamental structure of FLENS-LAPACK :) . I thought that FLENS-LAPACK needs some backend, which can be either cxxlapack (distributed together with flens) or some lapack library already present on the system. I thought that by defining USE_CXXLAPACK, we instruct FLENS to use cxxlapack, which we need to do if we do not have any other lapack library. What is it that I misunderstand? > > Whether there is something missing that we need: > Without USE_CXXLAPACK (that I tried now after you said we don't necessarily need this #define) everything works (compiles, links and runs correctly) except Hermitian eigenproblem, which doesn't find the correct overload for the function ev. > With USE_CXXLAPACK Hermitian eigenproblem also compiles, but it fails to link, issuing the error > /usr/local/include/cxxlapack/interface/heev.tcc:91: error: undefined reference to 'zheev_' > > The good news is that we have released the new, shiny Milestone 10 release, which optionally depends on the header-only FLENS; we have completely deprecated our (optional) dependence on the old FLENS! 
We also have a very shiny new API documentation whose entry point is the SourceForge C++QED page: http://cppqed.sourceforge.net > > Thanks a lot for your help in this with bringing FLENS to a useable state for us. Compile time is not an issue in our case at the moment, but if it ever becomes so, precompiled headers sound like a good idea (although I have no experience whatsoever with them). > > Keep in touch! > Best regards, > András > > > > > On Tue, Apr 8, 2014 at 9:07 PM, Michael Lehn <mic...@un...> wrote: > Hi Andras and Raimar, > > I am done with merging the latest modification. Could you check if FLENS passes > the LAPACK tests on your machines? I tested with gcc 4.7, 4.8 and 4.9. But only > on Mac OS X. On my Ubuntu machine I only have gcc 4.6 which does not support all > required C++11 features and clang++. But clang++ does complex arithmetic different > from gfortran (as there is no IEEE standard that is legal) and therefore I get > different roundoff errors. > > For testing set and export CXX, CC, FC and run make && make check. > > Andras also mentioned that he compiled with USE_CXXLAPACK. That is only required > if some LAPACK function is not re-implemented in FLENS-LAPACK or if you want to > compare results with netlib's LAPACK. So is there anything missing in FLENS-LAPACK > that you need? > > Furthermore, if compile time becomes an issue (the drawback of header-only) we > could think about using precompiled headers. This would have to be integrated > into your build process. > > Cheers, > > Michael > > > Am 28.02.2014 um 17:28 schrieb Andras Vukics <and...@ui...>: > >> Hi Michael, >> >> Thanks for the prompt reply, it’s good to see that FLENS-LAPACK is so well supported! The requirement of bitwise correspondence is indeed impressive. >> >> It would be very nice if we could use a fixed upstream version of FLENS in our new release of C++QED that we plan to publish in about a month. >> >> But I don’t want to press you :) . 
>> >> Cheers, >> András >> >> >> >> >> On Fri, Feb 28, 2014 at 4:53 PM, Michael Lehn <mic...@un...> wrote: >> Hi Andras, >> >> thanks for the hints! ZGEEV has two workspaces work and rWork. The dimension >> of rWork should always be 2*N. The wrapper with implicit workspace creation >> should create an internal vector of correct size. The actual computational >> routine should have an assertion that ensures the correct size. An actual workspace >> query for calculating optimal workspace size is only needed for the workspace work. >> >> As the LAPACK test suit does not use the implicit workspace creation this was >> undetected. >> >> The ZGEEV passed all the LAPACK tests. In this process I also compare results >> bit-by-bit with calls to the original Netlib implementation (Version 3.3.1). Also >> in each call of any subroutine the results get compared this way. So if there still >> should be a bug it is also in LAPACK 3.3.1 >> >> And having cout's in my repository is a really annoying weakness of me ... grrr >> >> Cheers, >> >> Michael >> >> >> Am 28.02.2014 um 15:42 schrieb Andras Vukics <and...@ui...>: >> >>> Dear Michael, >>> >>> Recently, we have started to migrate C++QED to the header-only FLENS-LAPACK. The most important feature for us is the generic complex eigenvalue solver (zgeev). I guess it is also among the most problematic ones from the C++ implementation point of view. >>> >>> How would you evaluate the present maturity of the complex eigen solver? I see that in the public upstream branch, it still emits a note to cout at entering the routine. >>> >>> I noticed that the internal workspace query doesn’t work correctly, so e.g. 
this fails: >>> >>> typedef GeMatrix<FullStorage<dcomp> > Matrix; >>> typedef DenseVector<Array<dcomp> > Vector; >>> >>> Matrix a(3,3); >>> >>> a=dcomp(0.89418418,0.88510796),dcomp(0.09935077,0.0955461 ),dcomp(0.19438046,0.77926046), >>> dcomp(0.54987465,0.76065909),dcomp(0.07944067,0.34810127),dcomp(0.55701817,0.03426611), >>> dcomp(0.80508047,0.48160102),dcomp(0.06586927,0.75840275),dcomp(0.15394400,0.99410489); >>> >>> Vector v(3); >>> >>> lapack::ev(false,false,a,v,a,a); >>> >>> cout<<v<<endl; >>> >>> with: >>> >>> flens_d: /usr/local/include/flens/vectortypes/impl/densevector.tcc:253: View flens::DenseVector<flens::Array<double, flens::IndexOptions<int, 1>, std::allocator<double> > >::operator()(const Range<IndexType> &) [A = flens::Array<double, flens::IndexOptions<int, 1>, std::allocator<double> >]: Assertion `range.lastIndex()<=lastIndex()' failed. >>> Aborted (core dumped) >>> >>> >>> But this works fine: >>> >>> typedef GeMatrix<FullStorage<dcomp> > Matrix; >>> typedef DenseVector<Array<dcomp> > Vector; >>> >>> Matrix a(3,3); >>> >>> a=dcomp(0.89418418,0.88510796),dcomp(0.09935077,0.0955461 ),dcomp(0.19438046,0.77926046), >>> dcomp(0.54987465,0.76065909),dcomp(0.07944067,0.34810127),dcomp(0.55701817,0.03426611), >>> dcomp(0.80508047,0.48160102),dcomp(0.06586927,0.75840275),dcomp(0.15394400,0.99410489); >>> >>> Vector v(3), work(12); DenseVector<Array<double> > rwork(6); >>> >>> lapack::ev(false,false,a,v,a,a,work,rwork); >>> >>> cout<<v<<endl; >>> >>> >>> This is all with USE_CXXLAPACK defined. >>> >>> Best regards, >>> András >>> >>> >>> >>> >>> On Wed, Oct 3, 2012 at 11:14 PM, Michael Lehn <mic...@un...> wrote: >>> Hi András! >>> >>> I am using gcc 4.7.1, gcc 4.7.2 and clang 3.2. All of them support all the >>> C++11 features I ever needed so far. However, I never tested things like lamda, >>> regexp so far. 
Just the few thing that were very handy for FLENS: >>> >>> - rvalue-references >>> - auto >>> - alias templates >>> - variadic templates >>> - some type traits >>> >>> You are absolutely right about clang. It's error messages but also warnings are >>> amazing and very useful! And its complete concept (from font-end to back-end) is >>> amazing. I also use the clang parser for creating the tutorial. For example >>> creating cross references in listings like in >>> >>> http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/flens/examples/lapack-gelqf.cc.html >>> >>> I also used to compile the examples in the tutorials with clang in the past. But >>> because of a problem with how MacPort installed clang I had to switch to gcc. At >>> the moment I switch the examples back to clang. (Yes I admit that I converted from >>> Linux to Mac, but I still use VIM in fullscreen). >>> >>> For testing FLENS-LAPACK I use g++ and gfortran. The combination clang and gfortran >>> should also work except for complex numbers. Both g++ and clang++ compute the complex >>> arithmetic operations by falling back to their C99 implementation. And they seem to >>> use different algorithms for that. Consequently you get different roundoff errors. >>> >>> Clang seems to implement the algorithm proposed in Appendix G in >>> >>> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf >>> >>> the division of two double complex numbers (p. 470) caused my tests to fail (ok, the >>> difference was really small). 
It finally found out the gcc uses the following algorithm >>> that merely avoids overflow: >>> >>> complex<double> >>> c_div(complex<double> a, complex<double> b) >>> { >>> complex<double> c; >>> if (fabs(b.real()) < fabs(b.imag())) { >>> double r = b.real() / b.imag(); >>> double den = b.imag() + r * b.real(); >>> c.real() = (a.real() * r + a.imag()) / den; >>> c.imag() = (a.imag() * r - a.real()) / den; >>> } else { >>> double r = b.imag() / b.real(); >>> double den = b.real() + r * b.imag(); >>> c.real() = (a.real() + r * a.imag()) / den; >>> c.imag() = (a.imag() - r * a.real()) / den; >>> } >>> return c; >>> } >>> >>> If you run the FLENS-LAPACK test let me know your results. >>> >>> >>> Sorry, I just realize that I am really giving long answers to short questions ;-) >>> >>> Cheers, >>> >>> Michael >>> >>> >>> >>> >>> Am 03.10.2012 um 22:02 schrieb Andras Vukics: >>> >>> > Hi Michael, >>> > >>> > It's really kind of you to have a look into C++QED from an FLENS point of view. >>> > >>> > I have a question on a slightly other note: What compiler would you >>> > recommend to be able to use as much of C++11 as possible today? I >>> > myself have found that llvm-clang has already a lot of these features >>> > implemented. I incidentally often use it for another purpose: its >>> > error messages are often much nicer that those of gcc. >>> > >>> > What compiler do you use to test FLENS-LAPACK? >>> > >>> > Thanks with best regards, >>> > András >>> > >>> > >>> > On Sat, Sep 22, 2012 at 10:18 PM, Michael Lehn <mic...@un...> wrote: >>> >> Hi Andras, >>> >> >>> >> thanks for your kind reply and sorry for my last response. I am >>> >> very happy that FLENS is of use for you! >>> >> >>> >> You are right about the C++11 requirements and I completely understand >>> >> your point. Luckily time will solve the problem of lacking C++11 >>> >> support for us. 
I will try to use the interim time to have a closer >>> >> look on what FLENS features and LAPACK functions you need in C++QED. >>> >> For example, I think it would be a useful feature for you if FLENS >>> >> already contains generic implementations for all the LAPACK functions >>> >> you need. >>> >> >>> >> Best wishes and greetings from Ulm, >>> >> >>> >> Michael >>> >> >>> >> >>> >> >>> >> >>> >> Am 01.09.2012 um 18:57 schrieb Andras Vukics: >>> >> >>> >>> Hi Michael, >>> >>> >>> >>> Thanks for your mail. Not only in the past, but also presently C++QED >>> >>> optionally depends on FLENS (recently we've even prepared some ubuntu >>> >>> packages, cf. https://launchpad.net/~raimar-sandner/+archive/cppqed). >>> >>> All the eigenvalue calculations of the framework are done with LAPACK >>> >>> *through* FLENS, and if the FLENS dependency is not satisfied, then >>> >>> these tracts of the framework get disabled. (Earlier, we used to use >>> >>> our own little C++ interface to LAPACK, but soon gave it up -- you of >>> >>> all people probably understand why.) >>> >>> >>> >>> However, it is not your new FLENS/LAPACK implementation, but your old >>> >>> project that we use, namely the 2011/09/08 version from CVS. For >>> >>> months now, we've been aware of your new project, and it looks very >>> >>> promising to us. >>> >>> >>> >>> The reason why we think we cannot switch at the moment is that our >>> >>> project is first of all an application programming framework, in which >>> >>> end-users are expected to write and compile their C++ applications in >>> >>> the problem domain of open quantum systems. This means that when they >>> >>> compile, they also need to compile the FLENS parts that they actually >>> >>> use. Now, as far as I understand, the new FLENS/LAPACK uses c++11 >>> >>> features, which at the moment severely limits the range of suitable >>> >>> compilers. 
>>> >>> >>> >>> But we are definitely looking forward to be able to switch, and are >>> >>> glad that the project is well alive! >>> >>> >>> >>> Best regards, >>> >>> András >>> >>> >>> >>> >>> >>> >>> >>> On Sat, Sep 1, 2012 at 2:26 AM, Michael Lehn <mic...@un...> wrote: >>> >>>> Hi, >>> >>>> >>> >>>> I am a developer of FLENS. Browsing the web I saw that C++QED was using FLENS >>> >>>> for some optional modules in the past. There was quite a list I always wanted to >>> >>>> improve in FLENS. Unfortunately other projects were consuming too much time. >>> >>>> Last year I finally found time. If you are still interested in using LAPACK/BLAS >>> >>>> functionality from within a C++ project some of its features might be interesting >>> >>>> of you: >>> >>>> >>> >>>> 1) FLENS is header only. It comes with generic BLAS implementation. By "generic" I >>> >>>> mean template functions. However, for large problem sizes you still have to use >>> >>>> optimized BLAS implementations like ATLAS, GotoBLAS, ... >>> >>>> >>> >>>> 2) FLENS comes with a bunch of generic LAPACK function. Yes, we actually re-implemented >>> >>>> quite a number of LAPACK functions. The list of supported LAPACK function include LU, >>> >>>> QR, Cholesky decomposition, eigenvalue/-vectors computation, Schur factorization, etc. >>> >>>> Here an overview of FLENS-LAPACK: >>> >>>> >>> >>>> http://www.mathematik.uni-ulm.de/~lehn/FLENS/flens/lapack/lapack.html >>> >>>> >>> >>>> These LAPACK ports are more then just a reference implementation. It provides the same >>> >>>> performance as Netlib's LAPACK. But much more important we carefully ensure that it >>> >>>> also produces exactly the same results as LAPACK. Note that LAPACK is doing some cool >>> >>>> stuff to ensure stability and high precision of its result while providing excellent >>> >>>> performance at the same time. 
Consider the caller graph of LAPACK's routine dgeev which >>> >>>> computes the eigenvalues/-vectors of a general matrix: >>> >>>> >>> >>>> http://www.mathematik.uni-ulm.de/~lehn/dgeev.pdf >>> >>>> >>> >>>> The routine is pretty sophisticated and for example switches forth and back between >>> >>>> different implementations of the QR-method (dlaqr0, dlaqr1, dlaqr2, ...). By this >>> >>>> the routine even converges for critical cases. We ported all these functions one-by-one. >>> >>>> And tested them as follows: >>> >>>> (i) On entry we make copies of all arguments. >>> >>>> (ii) (a) we call our port >>> >>>> (ii) (b) we call the original LAPACK function >>> >>>> (iii) We compare the results. >>> >>>> If a single-threaded BLAS implementation is used we even can reproduce the same roundoff >>> >>>> errors (i.e. we do the comparison bit-by-bit). >>> >>>> >>> >>>> 3) If you use FLENS-LAPACK with ATLAS or GotoBLAS as BLAS backend then you are on par >>> >>>> with LAPACK from MKL or ACML. Actually MKL and ACML just use the Netlib LAPACK and optimize >>> >>>> a few functions (IMHO there's a lot of marketing involved). >>> >>>> >>> >>>> 4) You still can use an external LAPACK function (if FLENS-LAPACK does not provide a needed >>> >>>> LAPACK port you even have to). >>> >>>> >>> >>>> 5) Porting LAPACK to FLENS had another big advantage: It was a great test for our matrix and >>> >>>> vector classes: >>> >>>> (i) We now can prove that our matrix/vector classes do not introduce any performance penalty. >>> >>>> You get the same performance as you get with plain C or Fortran. >>> >>>> (ii) Feature-completeness of matrix/vector classes. We at least know that our matrix/vector >>> >>>> classes allow a convenient implementation of all the algorithms in LAPACK. An in our >>> >>>> opinion the FLENS-LAPACK implementation also looks sexy. 
>>> >>>> >>> >>>> I hope that I could tease you a little to have a look at FLENS on http://flens.sf.net >>> >>>> >>> >>>> FLENS is back, >>> >>>> >>> >>>> Michael >>> >>>> >>> >>>> >>> >>>> >>> >>>> >>> >>>> >>> >>>> ------------------------------------------------------------------------------ >>> >>>> Live Security Virtual Conference >>> >>>> Exclusive live event will cover all the ways today's security and >>> >>>> threat landscape has changed and how IT managers can respond. Discussions >>> >>>> will include endpoint security, mobile security and the latest in malware >>> >>>> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >>> >>>> _______________________________________________ >>> >>>> Cppqed-support mailing list >>> >>>> Cpp...@li... >>> >>>> https://lists.sourceforge.net/lists/listinfo/cppqed-support >>> >>> >>> >>> >>> >> >>> > >>> > >>> >>> >> >> > > |
From: Andras V. <and...@ui...> - 2014-04-11 08:25:44
Dear Michael, I have just run the testsuite with your latest commits on my ARCH box, and everything seems to be fine. I used gcc 4.8.2. As to USE_CXXLAPACK: what you say makes me suspect that I do not understand the fundamental structure of FLENS-LAPACK :). I thought that FLENS-LAPACK needs some backend, which can be either cxxlapack (distributed together with flens) or some lapack library already present on the system. I thought that by defining USE_CXXLAPACK, we instruct FLENS to use cxxlapack, which we need to do if we do not have any other lapack library. What is it that I misunderstand? Whether there is something missing that we need: - *Without* USE_CXXLAPACK (that I tried now after you said we don't necessarily need this #define) everything works (compiles, links and runs correctly) except Hermitian eigenproblem, which doesn't find the correct overload for the function ev. - *With* USE_CXXLAPACK Hermitian eigenproblem also compiles, but it fails to link, issuing the error /usr/local/include/cxxlapack/interface/heev.tcc:91: error: undefined reference to 'zheev_' The good news is that we have released the new, shiny Milestone 10 release, which optionally depends on the header-only FLENS; we have completely deprecated our (optional) dependence on the old FLENS! We also have a very shiny new API documentation whose entry point is the SourceForge C++QED page: http://cppqed.sourceforge.net Thanks a lot for your help with bringing FLENS to a usable state for us. Compile time is not an issue in our case at the moment, but if it ever becomes so, precompiled headers sound like a good idea (although I have no experience whatsoever with them). Keep in touch! Best regards, András On Tue, Apr 8, 2014 at 9:07 PM, Michael Lehn <mic...@un...> wrote: > Hi Andras and Raimar, > > I am done with merging the latest modifications. Could you check if FLENS > passes > the LAPACK tests on your machines? I tested with gcc 4.7, 4.8 and 4.9. > But only > on Mac OS X. 
On my Ubuntu machine I only have gcc 4.6 which does not > support all > required C++11 features and clang++. But clang++ does complex arithmetic > different > from gfortran (as there is no IEEE standard that is legal) and therefore I > get > different roundoff errors. > > For testing set and export CXX, CC, FC and run make && make check. > > Andras also mentioned that he compiled with USE_CXXLAPACK. That is only > required > if some LAPACK function is not re-implemented in FLENS-LAPACK or if you > want to > compare results with netlib's LAPACK. So is there anything missing in > FLENS-LAPACK > that you need? > > Furthermore, if compile time becomes an issue (the drawback of > header-only) we > could think about using precompiled headers. This would have to be > integrated > into your build process. > > Cheers, > > Michael > > > Am 28.02.2014 um 17:28 schrieb Andras Vukics <and...@ui...>: > > Hi Michael, > > Thanks for the prompt reply, it's good to see that FLENS-LAPACK is so > well supported! The requirement of bitwise correspondence is indeed > impressive. > > It would be very nice if we could use a fixed upstream version of FLENS in > our new release of C++QED that we plan to publish in about a month. > > But I don't want to press you :) . > > Cheers, > András > > > > > On Fri, Feb 28, 2014 at 4:53 PM, Michael Lehn <mic...@un...>wrote: > >> Hi Andras, >> >> thanks for the hints! ZGEEV has two workspaces work and rWork. The >> dimension >> of rWork should always be 2*N. The wrapper with implicit workspace >> creation >> should create an internal vector of correct size. The actual >> computational >> routine should have an assertion that ensures the correct size. An >> actual workspace >> query for calculating optimal workspace size is only needed for the >> workspace work. >> >> As the LAPACK test suit does not use the implicit workspace creation this >> was >> undetected. >> >> The ZGEEV passed all the LAPACK tests. 
In this process I also compare >> results >> bit-by-bit with calls to the original Netlib implementation (Version >> 3.3.1). Also >> in each call of any subroutine the results get compared this way. So if >> there still >> should be a bug it is also in LAPACK 3.3.1 >> >> And having cout's in my repository is a really annoying weakness of me >> ... grrr >> >> Cheers, >> >> Michael >> >> >> Am 28.02.2014 um 15:42 schrieb Andras Vukics <and...@ui...>: >> >> Dear Michael, >> >> Recently, we have started to migrate C++QED to the header-only >> FLENS-LAPACK. The most important feature for us is the generic complex >> eigenvalue solver (zgeev). I guess it is also among the most problematic >> ones from the C++ implementation point of view. >> >> How would you evaluate the present maturity of the complex eigen solver? >> I see that in the public upstream branch, it still emits a note to cout at >> entering the routine. >> >> I noticed that the internal workspace query doesn't work correctly, so >> e.g. this fails: >> >> typedef GeMatrix<FullStorage<dcomp> > Matrix; >> typedef DenseVector<Array<dcomp> > Vector; >> >> Matrix a(3,3); >> >> a=dcomp(0.89418418,0.88510796),dcomp(0.09935077,0.0955461 >> ),dcomp(0.19438046,0.77926046), >> >> dcomp(0.54987465,0.76065909),dcomp(0.07944067,0.34810127),dcomp(0.55701817,0.03426611), >> >> dcomp(0.80508047,0.48160102),dcomp(0.06586927,0.75840275),dcomp(0.15394400,0.99410489); >> >> Vector v(3); >> >> lapack::ev(false,false,a,v,a,a); >> >> cout<<v<<endl; >> >> with: >> >> flens_d: /usr/local/include/flens/vectortypes/impl/densevector.tcc:253: >> View flens::DenseVector<flens::Array<double, flens::IndexOptions<int, 1>, >> std::allocator<double> > >::operator()(const Range<IndexType> &) [A = >> flens::Array<double, flens::IndexOptions<int, 1>, std::allocator<double> >> >]: Assertion `range.lastIndex()<=lastIndex()' failed. 
>> Aborted (core dumped) >> >> >> But this works fine: >> >> typedef GeMatrix<FullStorage<dcomp> > Matrix; >> typedef DenseVector<Array<dcomp> > Vector; >> >> Matrix a(3,3); >> >> a=dcomp(0.89418418,0.88510796),dcomp(0.09935077,0.0955461 >> ),dcomp(0.19438046,0.77926046), >> >> dcomp(0.54987465,0.76065909),dcomp(0.07944067,0.34810127),dcomp(0.55701817,0.03426611), >> >> dcomp(0.80508047,0.48160102),dcomp(0.06586927,0.75840275),dcomp(0.15394400,0.99410489); >> >> Vector v(3), work(12); DenseVector<Array<double> > rwork(6); >> >> lapack::ev(false,false,a,v,a,a,work,rwork); >> >> cout<<v<<endl; >> >> >> This is all with USE_CXXLAPACK defined. >> >> Best regards, >> András >> >> >> >> >> On Wed, Oct 3, 2012 at 11:14 PM, Michael Lehn <mic...@un...>wrote: >> >>> Hi András! >>> >>> I am using gcc 4.7.1, gcc 4.7.2 and clang 3.2. All of them support all >>> the >>> C++11 features I ever needed so far. However, I never tested things >>> like lamda, >>> regexp so far. Just the few thing that were very handy for FLENS: >>> >>> - rvalue-references >>> - auto >>> - alias templates >>> - variadic templates >>> - some type traits >>> >>> You are absolutely right about clang. It's error messages but also >>> warnings are >>> amazing and very useful! And its complete concept (from font-end to >>> back-end) is >>> amazing. I also use the clang parser for creating the tutorial. For >>> example >>> creating cross references in listings like in >>> >>> >>> http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/flens/examples/lapack-gelqf.cc.html >>> >>> I also used to compile the examples in the tutorials with clang in the >>> past. But >>> because of a problem with how MacPort installed clang I had to switch to >>> gcc. At >>> the moment I switch the examples back to clang. (Yes I admit that I >>> converted from >>> Linux to Mac, but I still use VIM in fullscreen). >>> >>> For testing FLENS-LAPACK I use g++ and gfortran. 
The combination clang >>> and gfortran >>> should also work except for complex numbers. Both g++ and clang++ >>> compute the complex >>> arithmetic operations by falling back to their C99 implementation. And >>> they seem to >>> use different algorithms for that. Consequently you get different >>> roundoff errors. >>> >>> Clang seems to implement the algorithm proposed in Appendix G in >>> >>> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf >>> >>> the division of two double complex numbers (p. 470) caused my tests to >>> fail (ok, the >>> difference was really small). It finally found out the gcc uses the >>> following algorithm >>> that merely avoids overflow: >>> >>> complex<double> >>> c_div(complex<double> a, complex<double> b) >>> { >>> complex<double> c; >>> if (fabs(b.real()) < fabs(b.imag())) { >>> double r = b.real() / b.imag(); >>> double den = b.imag() + r * b.real(); >>> c.real() = (a.real() * r + a.imag()) / den; >>> c.imag() = (a.imag() * r - a.real()) / den; >>> } else { >>> double r = b.imag() / b.real(); >>> double den = b.real() + r * b.imag(); >>> c.real() = (a.real() + r * a.imag()) / den; >>> c.imag() = (a.imag() - r * a.real()) / den; >>> } >>> return c; >>> } >>> >>> If you run the FLENS-LAPACK test let me know your results. >>> >>> >>> Sorry, I just realize that I am really giving long answers to short >>> questions ;-) >>> >>> Cheers, >>> >>> Michael >>> >>> >>> >>> >>> Am 03.10.2012 um 22:02 schrieb Andras Vukics: >>> >>> > Hi Michael, >>> > >>> > It's really kind of you to have a look into C++QED from an FLENS point >>> of view. >>> > >>> > I have a question on a slightly other note: What compiler would you >>> > recommend to be able to use as much of C++11 as possible today? I >>> > myself have found that llvm-clang has already a lot of these features >>> > implemented. I incidentally often use it for another purpose: its >>> > error messages are often much nicer that those of gcc. 
>>> > >>> > What compiler do you use to test FLENS-LAPACK? >>> > >>> > Thanks with best regards, >>> > András >>> > >>> > >>> > On Sat, Sep 22, 2012 at 10:18 PM, Michael Lehn < >>> mic...@un...> wrote: >>> >> Hi Andras, >>> >> >>> >> thanks for your kind reply and sorry for my late response. I am >>> >> very happy that FLENS is of use for you! >>> >> >>> >> You are right about the C++11 requirements and I completely understand >>> >> your point. Luckily time will solve the problem of lacking C++11 >>> >> support for us. I will try to use the interim time to have a closer >>> >> look at what FLENS features and LAPACK functions you need in C++QED. >>> >> For example, I think it would be a useful feature for you if FLENS >>> >> already contained generic implementations for all the LAPACK functions >>> >> you need. >>> >> >>> >> Best wishes and greetings from Ulm, >>> >> >>> >> Michael >>> >> >>> >> >>> >> >>> >> Am 01.09.2012 um 18:57 schrieb Andras Vukics: >>> >> >>> >>> Hi Michael, >>> >>> >>> >>> Thanks for your mail. Not only in the past, but also presently C++QED >>> >>> optionally depends on FLENS (recently we've even prepared some ubuntu >>> >>> packages, cf. https://launchpad.net/~raimar-sandner/+archive/cppqed >>> ). >>> >>> All the eigenvalue calculations of the framework are done with LAPACK >>> >>> *through* FLENS, and if the FLENS dependency is not satisfied, then >>> >>> these parts of the framework get disabled. (Earlier, we used to use >>> >>> our own little C++ interface to LAPACK, but soon gave it up -- you of >>> >>> all people probably understand why.) >>> >>> >>> >>> However, it is not your new FLENS/LAPACK implementation, but your old >>> >>> project that we use, namely the 2011/09/08 version from CVS. For >>> >>> months now, we've been aware of your new project, and it looks very >>> >>> promising to us. 
>>> >>> >>> >>> The reason why we think we cannot switch at the moment is that our >>> >>> project is first of all an application programming framework, in >>> which >>> >>> end-users are expected to write and compile their C++ applications in >>> >>> the problem domain of open quantum systems. This means that when they >>> >>> compile, they also need to compile the FLENS parts that they actually >>> >>> use. Now, as far as I understand, the new FLENS/LAPACK uses C++11 >>> >>> features, which at the moment severely limits the range of suitable >>> >>> compilers. >>> >>> >>> >>> But we are definitely looking forward to being able to switch, and are >>> >>> glad that the project is well alive! >>> >>> >>> >>> Best regards, >>> >>> András >>> >>> >>> >>> >>> >>> On Sat, Sep 1, 2012 at 2:26 AM, Michael Lehn < >>> mic...@un...> wrote: >>> >>>> Hi, >>> >>>> >>> >>>> I am a developer of FLENS. Browsing the web I saw that C++QED was >>> using FLENS >>> >>>> for some optional modules in the past. There was quite a list of things I >>> always wanted to >>> >>>> improve in FLENS. Unfortunately other projects were consuming too >>> much time. >>> >>>> Last year I finally found time. If you are still interested in >>> using LAPACK/BLAS >>> >>>> functionality from within a C++ project, some of its features might >>> be interesting >>> >>>> to you: >>> >>>> >>> >>>> 1) FLENS is header only. It comes with a generic BLAS >>> implementation. By "generic" I >>> >>>> mean template functions. However, for large problem sizes you >>> still have to use >>> >>>> optimized BLAS implementations like ATLAS, GotoBLAS, ... >>> >>>> >>> >>>> 2) FLENS comes with a bunch of generic LAPACK functions. Yes, we >>> actually re-implemented >>> >>>> quite a number of LAPACK functions. The list of supported LAPACK >>> functions includes LU, >>> >>>> QR, Cholesky decomposition, eigenvalue/-vector computation, Schur >>> factorization, etc. 
>>> >>>> Here is an overview of FLENS-LAPACK: >>> >>>> >>> >>>> >>> http://www.mathematik.uni-ulm.de/~lehn/FLENS/flens/lapack/lapack.html >>> >>>> >>> >>>> These LAPACK ports are more than just a reference implementation. >>> They provide the same >>> >>>> performance as Netlib's LAPACK. But much more importantly, we >>> carefully ensure that they >>> >>>> also produce exactly the same results as LAPACK. Note that LAPACK >>> is doing some cool >>> >>>> stuff to ensure stability and high precision of its results while >>> providing excellent >>> >>>> performance at the same time. Consider the caller graph of LAPACK's >>> routine dgeev, which >>> >>>> computes the eigenvalues/-vectors of a general matrix: >>> >>>> >>> >>>> http://www.mathematik.uni-ulm.de/~lehn/dgeev.pdf >>> >>>> >>> >>>> The routine is pretty sophisticated and for example switches back >>> and forth between >>> >>>> different implementations of the QR-method (dlaqr0, dlaqr1, dlaqr2, >>> ...). By this >>> >>>> the routine even converges for critical cases. We ported all these >>> functions one-by-one, >>> >>>> and tested them as follows: >>> >>>> (i) On entry we make copies of all arguments. >>> >>>> (ii) (a) We call our port. >>> >>>> (ii) (b) We call the original LAPACK function. >>> >>>> (iii) We compare the results. >>> >>>> If a single-threaded BLAS implementation is used, we can even >>> reproduce the same roundoff >>> >>>> errors (i.e. we do the comparison bit-by-bit). >>> >>>> >>> >>>> 3) If you use FLENS-LAPACK with ATLAS or GotoBLAS as BLAS backend, >>> then you are on par >>> >>>> with LAPACK from MKL or ACML. Actually MKL and ACML just use the >>> Netlib LAPACK and optimize >>> >>>> a few functions (IMHO there's a lot of marketing involved). >>> >>>> >>> >>>> 4) You can still use an external LAPACK function (if FLENS-LAPACK >>> does not provide a needed >>> >>>> LAPACK port, you even have to). 
>>> >>>> >>> >>>> 5) Porting LAPACK to FLENS had another big advantage: It was a >>> great test for our matrix and >>> >>>> vector classes: >>> >>>> (i) We now can prove that our matrix/vector classes do not >>> introduce any performance penalty. >>> >>>> You get the same performance as you get with plain C or Fortran. >>> >>>> (ii) Feature-completeness of matrix/vector classes. We at least >>> know that our matrix/vector >>> >>>> classes allow a convenient implementation of all the algorithms >>> in LAPACK. And in our >>> >>>> opinion the FLENS-LAPACK implementation also looks sexy. >>> >>>> >>> >>>> I hope that I could tease you a little to have a look at FLENS on >>> http://flens.sf.net >>> >>>> >>> >>>> FLENS is back, >>> >>>> >>> >>>> Michael >>> >>>> >>> >>>> >>> >>>> >>> >>>> >>> >>>> >>> ------------------------------------------------------------------------------ >>> >>>> Live Security Virtual Conference >>> >>>> Exclusive live event will cover all the ways today's security and >>> >>>> threat landscape has changed and how IT managers can respond. >>> Discussions >>> >>>> will include endpoint security, mobile security and the latest in >>> malware >>> >>>> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >>> >>>> _______________________________________________ >>> >>>> Cppqed-support mailing list >>> >>>> Cpp...@li... >>> >>>> https://lists.sourceforge.net/lists/listinfo/cppqed-support >>> >>> >>> >>> >>> >> >>> > >>> > >>> >>> >> >> > > |
From: Raimar S. <rai...@ui...> - 2014-04-01 00:42:18
|
Dear András, thanks for putting this online. Unfortunately the cpypyqed docs are not built correctly: the reference documentation of the classes and functions is missing. Could you please send me your logs from building the cpypyqed documentation? Best regards Raimar On Monday, March 31, 2014 12:48:20 Andras Vukics wrote: > Dear All, > Today, I uploaded the new documentation to the main sourceforge page > http://cppqed.sourceforge.net/ > At the same time, I set the head of the master git branch to coincide with > Development. > The Milestone 10 file releases (CPC, SourceForge, Debian, Arch) should not > be very far off, either. > Best regards, > András |
From: Andras V. <and...@ui...> - 2014-03-31 10:48:47
|
Dear All, Today, I uploaded the new documentation to the main sourceforge page http://cppqed.sourceforge.net/ At the same time, I set the head of the master git branch to coincide with Development. The Milestone 10 file releases (CPC, SourceForge, Debian, Arch) should not be very far off, either. Best regards, András |
From: Andras V. <and...@ui...> - 2014-03-15 12:22:39
|
Hi Raimar, The reason is historical: the ParticleCavity bundle was designed to cover the 4 use cases appearing in 1particle1mode.cc, which all fulfill this requirement for physical reasons, so it seemed a good idea to introduce this check against erroneous parameter-passing to this script. That is, it was not designed as a really generic element. (The same applies to some other interaction elements.) Please feel free to extend the element (constructor overloads, etc.) at your leisure; I also do not find it a good idea to write a specialized custom interaction for this case, especially because we claim that the supported interaction elements are of "general purpose". Best regards, András On Fri, Mar 14, 2014 at 3:54 PM, Raimar Sandner <rai...@ui...> wrote: > Hi András, > > I am trying to simulate particles moving along a cavity, pumped > transversally. > > Could you explain to me the reason for UnotEtaeffSignDiscrepancy? Why is it > not allowed to have Unot<0 (cooling regime for DeltaC<0) but still > VClass>0? > > Also, to compare my simulation to Wolfgang's result, I would need to set > Unot=0 and still have eta>0 in the Hamiltonian eta(a^dagger m(x) + h.c.). > The > constructor of ParticleAlongCavity does not allow to set the parameters > independently in such a way. Do you think such a generic constructor would > be > of general interest to have in the framework? I would like to avoid > maintaining my own specialized interaction for this purpose. > > Very best regards > Raimar |
From: Raimar S. <rai...@ui...> - 2014-01-08 17:06:15
|
Works great, thanks for the fix. I could have thought of that... Raimar On Wednesday, January 08, 2014 17:18:52 Andras Vukics wrote: > As I suspected, it is an inclusion-related error. Apparently, the on-demand > compilation of composite in cpypyqed occurs in a different inclusion > environment than the compilation of scripts. But, of course, headers should > be self-contained for this very reason :) > Fixed on #124e7112 > > Dr. Andras Vukics > Institute for Theoretical Physics > University of Innsbruck > > On Wed, Jan 8, 2014 at 4:24 PM, Raimar Sandner <rai...@ui...>wrote: > > On Wednesday, January 08, 2014 16:23:39 you wrote: > > > On Wednesday, January 08, 2014 16:19:09 Raimar Sandner wrote: > > > > Hi András, > > > > > > > > in the framework everything works fine, but for some reason in the > > > > python > > > > > > wrapper this change is making me some trouble in the on-demand > > > > compilation > > > > > > of composite. > > > > > > > > I'm out of ideas for the moment, could you maybe have a look at the > > > > generated project (attached)? Maybe you see something in the error > > > > message. > > > > > > > > Thanks and best regards > > > > Raimar > > > > > > I forgot to mention, you would need the python branch... > > > > > > git checkout -b python --recursive > > > gi...@ge...:cppqed/complete.git > > > > Sorry, git clone of course. |
From: Andras V. <and...@ui...> - 2014-01-08 16:19:21
|
As I suspected, it is an inclusion-related error. Apparently, the on-demand compilation of composite in cpypyqed occurs in a different inclusion environment than the compilation of scripts. But, of course, headers should be self-contained for this very reason :) Fixed on #124e7112 Dr. Andras Vukics Institute for Theoretical Physics University of Innsbruck On Wed, Jan 8, 2014 at 4:24 PM, Raimar Sandner <rai...@ui...>wrote: > On Wednesday, January 08, 2014 16:23:39 you wrote: > > On Wednesday, January 08, 2014 16:19:09 Raimar Sandner wrote: > > > Hi András, > > > > > > in the framework everything works fine, but for some reason in the > python > > > wrapper this change is making me some trouble in the on-demand > compilation > > > of composite. > > > > > > I'm out of ideas for the moment, could you maybe have a look at the > > > generated project (attached)? Maybe you see something in the error > > > message. > > > > > > Thanks and best regards > > > Raimar > > > > I forgot to mention, you would need the python branch... > > > > git checkout -b python --recursive > > gi...@ge...:cppqed/complete.git > > Sorry, git clone of course. > |
From: Raimar S. <rai...@ui...> - 2014-01-08 15:22:00
|
On Wednesday, January 08, 2014 16:23:39 you wrote: > On Wednesday, January 08, 2014 16:19:09 Raimar Sandner wrote: > > Hi András, > > > > in the framework everything works fine, but for some reason in the python > > wrapper this change is making me some trouble in the on-demand compilation > > of composite. > > > > I'm out of ideas for the moment, could you maybe have a look at the > > generated project (attached)? Maybe you see something in the error > > message. > > > > Thanks and best regards > > Raimar > > I forgot to mention, you would need the python branch... > > git checkout -b python --recursive > gi...@ge...:cppqed/complete.git Sorry, git clone of course. |
From: Raimar S. <rai...@ui...> - 2014-01-08 15:21:26
|
On Wednesday, January 08, 2014 16:19:09 Raimar Sandner wrote: > Hi András, > > in the framework everything works fine, but for some reason in the python > wrapper this change is making me some trouble in the on-demand compilation > of composite. > > I'm out of ideas for the moment, could you maybe have a look at the > generated project (attached)? Maybe you see something in the error message. > > Thanks and best regards > Raimar I forgot to mention, you would need the python branch... git checkout -b python --recursive gi...@ge...:cppqed/complete.git |
From: Raimar S. <rai...@ui...> - 2014-01-08 15:16:56
|
Hi András, in the framework everything works fine, but for some reason in the python wrapper this change is making me some trouble in the on-demand compilation of composite. I'm out of ideas for the moment, could you maybe have a look at the generated project (attached)? Maybe you see something in the error message. Thanks and best regards Raimar |
From: Raimar S. <rai...@ui...> - 2014-01-08 14:34:35
|
Hi András, in principle this is a good idea; I have also thought about how to get this to work without additional user action. On Wednesday, January 08, 2014 15:17:55 Andras Vukics wrote: > I suggest that we put the inclusion of "Version.h" and > "component_versions.h" into a header which is included by all scripts. A > straightforward choice is Evolution.h, which we then move from core to > scripts. Version.h is unproblematic, but component_versions.h is automatically generated and only available in the build directory of script projects. Only there can all the components be known. This can also include custom elements projects, about which C++QED knows nothing. Therefore the header including component_versions.h must only be used by scripts. > Also, this call: > updateVersionstring(cppqed_component_versions()); > should perhaps be wrapped in the constructor of some class, of which we > then initialize a dummy instance in the scope of the global namespace. Scott > Meyers writes about how to do this correctly in Item 4 of Effective C++ > (3rd edition), I will have to check. For the reason stated above, this call cannot be in any of the C++QED libraries. Maybe we can work with a template that gets instantiated by the script? But I'm not sure how to do this automatically, and I don't have the book at hand. > Then, script writers will not be additionally burdened by the > version-display issue. > What do you think? That would indeed be very nice! Best regards Raimar |
From: Andras V. <and...@ui...> - 2014-01-08 14:18:25
|
Hi Raimar, I suggest that we put the inclusion of "Version.h" and "component_versions.h" into a header which is included by all scripts. A straightforward choice is Evolution.h, which we then move from core to scripts. Also, this call: updateVersionstring(cppqed_component_versions()); should perhaps be wrapped in the constructor of some class, of which we then initialize a dummy instance in the scope of the global namespace. Scott Meyers writes about how to do this correctly in Item 4 of Effective C++ (3rd edition), I will have to check. Then, script writers will not be additionally burdened by the version-display issue. What do you think? Best regards, András On Thu, Nov 7, 2013 at 2:23 PM, Raimar Sandner <rai...@ui...>wrote: > Dear András, > > I have tackled the next point on the issues list: version information with > git commit sha1 values. This is non-trivial if you want to have the git > commits of _all_ repositories which make up a particular script. This can > of course only be achieved in the scripts project itself, as only there all > the other repositories are known (core, elements, custom elements, custom > scripts). > > Please have a look at C++QED_core (branch version) and > CustomScriptsExample (branch version). These are branched from master, so > you will need C++QED_elements and CustomElementsExample from master. > > When you run 2ParticlesRingCavitySine --version, you will see something > like: > > # C++QEDcore Version 2.9.1, core git commit: > b70df0f1ce07ba6c4cf4130fc6c7c5b29e82a95b > # elements git commit: f42c89e651211f572c75f363a368f2e67f0b6883 > # elements_custom_example git commit: > c2b9647117e47fd1f4d9e6714fe6079121756e1c > # scripts_custom_example git commit: > 5417c29938df3dce9fcdb9b53f4a6ec0a859ee1b > > # Andras Vukics, vu...@us... 
> > # Compiled with > # Boost library collection : Version 1.54 > # Gnu Scientific Library : Version 1.15 > # Blitz++ numerical library: Config date: Fri Aug 30 13:01:08 CEST 2013 > > When you run the simulation, you will notice that the exact same version > information is included in the header of the data. This is made possible by > adding the following line to the script: > > updateVersionstring(cppqed_component_versions()); > > Without this call (all the original scripts), the version string will only > contain the core git commit. To take advantage of the full version > information, only "Version.h", "component_versions.h" and the call to > updateVersionstring has to be included in a script. > > When you add a new commit in one of the repositories, cmake will > automatically be run to regenerate the correct version information. Only > very little code containing the version string needs to be recompiled, the > rest is done by relinking. > > If you agree, I will merge this into master and Development. > > Best regards > Raimar > > > Implementation details: > > All libraries now have the function cppqed_*_version() returning the git > commit as string, where * is core, elements, elements_custom_example etc. > If the project name is "core" then also the numerical version of C++QED as > defined in the CMakeLists.txt is included. The definition and declaration > is auto-generated by cmake. > > Additionally, script executables are linked with the autogenerated > function cppqed_component_versions(), where all the cppqed_*_version() > strings of the sub-components are combined. > > In the core library there is now a global variable cppqed_versionstring > which is initialized with the core version information. The function > updateVersionstring can be used in scripts to update the global variable > with the more accurate version information given by > cppqed_component_versions(). 
The global variable is then consulted via > versionHelper() if --version is present on the command line and for the > data header. > > You can have a look at all the auto-generated .cc and .h files, they are > in the top level build directories of the projects. > |
From: Andras V. <and...@ui...> - 2014-01-08 11:41:24
|
> > I'm glad it works and that you like it. Which parts of pycppqed are needed > for > the physics test suite? If it is only comparing trajectories, this is > already > supported by the python test driver. In the long run it is probably best to > merge pycppqed into cpypyqed. > Yes, it’s only a question of comparing trajectories on the level of display output, which basically boils down to comparing certain slices of numpy arrays, since the trajectory output files can be loaded directly into such arrays. I think we can already consider pycppqed obsolete, since its other features are outdated or have never worked correctly anyway. |
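[Editorial note: the slice-wise trajectory comparison described above can be sketched with plain numpy. The file names, the compared column slice, and the tolerances are arbitrary illustrative choices; the `#`-comment headers of the trajectory files are skipped by `numpy.loadtxt` by default.]

```python
import numpy as np


def trajectories_agree(file_a, file_b, columns=slice(0, None),
                       rtol=1e-6, atol=1e-12):
    """Load two trajectory output files and compare a slice of columns.

    Trajectory files are whitespace-separated numeric columns with
    '#'-prefixed header lines, which loadtxt skips as comments.
    """
    a = np.loadtxt(file_a)
    b = np.loadtxt(file_b)
    if a.shape != b.shape:          # different lengths cannot agree
        return False
    return bool(np.allclose(a[:, columns], b[:, columns],
                            rtol=rtol, atol=atol))
```

A test driver would call this with the freshly produced output and a stored reference file, possibly restricting `columns` to the physically meaningful averages and ignoring e.g. timestep columns.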