From: Raimar S. <rai...@ui...> - 2013-11-11 10:27:51
I found the bug which prevented -DBZ_DEBUG, new packages are building right now. However, the Launchpad build farm automatically adds -O2 also to the debug builds. I could probably filter that out, should I? Regards Raimar On Monday, November 11, 2013 11:13:22 AM Raimar Sandner wrote: > Dear András, > > could you test new packages of the C++QED master branch in my repository > ppa:raimar-sandner/cppqed ? > > apt-get install cppqedscripts libcppqedelements-2.9-0-dev > > This will also pull in a backport of cmake-2.8.11 and boost-1.53 for > precise, as well as a blitz version from our own branch. I have created all > the source packages on a virtual Ubuntu saucy machine on my Gentoo box. I > tested the precise builds in a chroot for this distribution, but testing > them also on a native precise system would be good. > > The libcppqedcore-2.9-0-dev package contains the example elements and > scripts projects in /usr/share/doc/, please test if those build as well. > > The packages contain both Debug and Release configurations. The scripts > compiled in Debug mode have a _d suffix from now on, also in regular cmake > builds. This will prevent users from accidentally using a debug script for > production. The correct library is found automatically by cmake depending > on which build configuration is used in custom scripts and elements. > > Although the debug scripts are linked to the correct libraries they run as > fast as their non-debug counterpart, and I have noticed -DBZ_DEBUG was not > used in the builds. I'm looking into this problem. > > Best regards > Raimar > > ---------------------------------------------------------------------------- > -- November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > Cppqed-support mailing list > Cpp...@li... > https://lists.sourceforge.net/lists/listinfo/cppqed-support |
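As background for the -DBZ_DEBUG fix and the question about filtering -O2 above: in a cmake build such flags usually live in the Debug-specific flags variable. The following is only a sketch of the idea; the string surgery on -O2 is an assumption about where a build-farm-injected flag would end up, not a statement about the actual C++QED or Launchpad packaging scripts.

    # Make BZ_DEBUG part of the Debug configuration only
    set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -DBZ_DEBUG")

    # If a build farm injects -O2 even for Debug builds, it could be filtered out
    # again at configure time (hypothetical clean-up, assuming the flag lands here)
    string(REPLACE "-O2" "" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")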
From: Raimar S. <rai...@ui...> - 2013-11-11 10:10:39
Dear András, could you test new packages of the C++QED master branch in my repository ppa:raimar-sandner/cppqed ? apt-get install cppqedscripts libcppqedelements-2.9-0-dev This will also pull in a backport of cmake-2.8.11 and boost-1.53 for precise, as well as a blitz version from our own branch. I have created all the source packages on a virtual Ubuntu saucy machine on my Gentoo box. I tested the precise builds in a chroot for this distribution, but testing them also on a native precise system would be good. The libcppqedcore-2.9-0-dev package contains the example elements and scripts projects in /usr/share/doc/, please test if those build as well. The packages contain both Debug and Release configurations. The scripts compiled in Debug mode have a _d suffix from now on, also in regular cmake builds. This will prevent users from accidentally using a debug script for production. The correct library is found automatically by cmake depending on which build configuration is used in custom scripts and elements. Although the debug scripts are linked to the correct libraries they run as fast as their non-debug counterpart, and I have noticed -DBZ_DEBUG was not used in the builds. I'm looking into this problem. Best regards Raimar |
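A rough sketch of the two mechanisms described above in generic CMake terms — the _d suffix for Debug artifacts, and the per-configuration choice of library when building custom scripts and elements. Target names and library paths here are placeholders, not the actual C++QED CMakeLists content.

    # Debug libraries get a _d suffix by default ...
    set(CMAKE_DEBUG_POSTFIX "_d")
    # ... and executables (the scripts) get it via an explicit target property
    add_executable(SomeScript SomeScript.cc)
    set_target_properties(SomeScript PROPERTIES DEBUG_POSTFIX "_d")

    # An imported library carries one location per configuration, so a custom
    # script built in Debug mode automatically links against the Debug library
    add_library(cppqedcore SHARED IMPORTED)
    set_target_properties(cppqedcore PROPERTIES
      IMPORTED_LOCATION_RELEASE "/usr/lib/libC++QEDcore-2.9.so"
      IMPORTED_LOCATION_DEBUG   "/usr/lib/libC++QEDcore-2.9_d.so")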
From: Raimar S. <rai...@ui...> - 2013-11-07 13:57:43
I forgot to mention, this fails graciously if the git information is not found. In this case it will show "GITDIR-NOTFOUND" in the version instead of the commit. Raimar On Thursday, November 07, 2013 02:23:10 PM Raimar Sandner wrote: > Dear András, > > I have tackled the next point on the issues list: version information with > git commit sha1 values. This is non-trivial if you want to have the git > commits of _all_ repositories which make up a particular script. This can > of course only be achieved in the scripts project itself, as only there all > the other repositories are known (core, elements, custom elements, custom > scripts). > > Please have a look at C++QED_core (branch version) and CustomScriptsExample > (branch version). These are branched from master, so you will need > C++QED_elements and CustomElementsExample from master. > > When you run 2ParticlesRingCavitySine --version, you will see something > like: > > # C++QEDcore Version 2.9.1, core git commit: > b70df0f1ce07ba6c4cf4130fc6c7c5b29e82a95b # elements git commit: > f42c89e651211f572c75f363a368f2e67f0b6883 > # elements_custom_example git commit: > c2b9647117e47fd1f4d9e6714fe6079121756e1c # scripts_custom_example git > commit: 5417c29938df3dce9fcdb9b53f4a6ec0a859ee1b > > # Andras Vukics, vu...@us... > > # Compiled with > # Boost library collection : Version 1.54 > # Gnu Scientific Library : Version 1.15 > # Blitz++ numerical library: Config date: Fri Aug 30 13:01:08 CEST 2013 > > When you run the simulation, you will notice that the exact same version > information is included in the header of the data. This is made possible by > adding the following line to the script: > > updateVersionstring(cppqed_component_versions()); > > Without this call (all the original scripts), the version string will only > contain the core git commit. To take advantage of the full version > information, only "Version.h", "component_versions.h" and the call to > updateVersionstring has to be included in a script. > > When you add a new commit in one of the repositories, cmake will > automatically be run to regenerate the correct version information. Only > very little code containing the version string needs to be recompiled, the > rest is done by relinking. > > If you agree, I will merge this into master and Development. > > Best regards > Raimar > > > Implementation details: > > All libraries now have the function cppqed_*_version() returning the git > commit as string, where * is core, elements, elements_custom_example etc. > If the project name is "core" then also the numerical version of C++QED as > defined in the CMakeLists.txt is included. The definition and declaration > is auto-generated by cmake. > > Additionally, script executables are linked with the autogenerated function > cppqed_component_versions(), where all the cppqed_*_version() strings of > the sub-components are combined. > > In the core library there is now a global variable cppqed_versionstring > which is initialized with the core version information. The function > updateVersionstring can be used in scripts to update the global variable > with the more accurate version information given by > cppqed_component_versions(). The global variable is then consulted via > versionHelper() if --version is present on the command line and for the > data header. > > You can have a look at all the auto-generated .cc and .h files, they are in > the top level build directories of the projects. > |
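The configure-time step behind this kind of fallback — pick up the commit when git information is available, degrade to a placeholder such as GITDIR-NOTFOUND when it is not — typically looks like the following. This is a sketch only; the template file name and the exact fallback handling are assumptions, not the project's actual cmake code.

    # Ask git for the current commit of this source tree
    execute_process(COMMAND git rev-parse HEAD
                    WORKING_DIRECTORY "${PROJECT_SOURCE_DIR}"
                    OUTPUT_VARIABLE GIT_COMMIT
                    OUTPUT_STRIP_TRAILING_WHITESPACE
                    ERROR_QUIET)
    if(NOT GIT_COMMIT)
      set(GIT_COMMIT "GITDIR-NOTFOUND")   # graceful fallback outside a git repository
    endif()
    # Bake the value into an auto-generated source file (hypothetical template name)
    configure_file(component_versions.cc.in "${PROJECT_BINARY_DIR}/component_versions.cc" @ONLY)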
From: Raimar S. <rai...@ui...> - 2013-11-07 13:20:45
Dear András, I have tackled the next point on the issues list: version information with git commit sha1 values. This is non-trivial if you want to have the git commits of _all_ repositories which make up a particular script. This can of course only be achieved in the scripts project itself, as only there all the other repositories are known (core, elements, custom elements, custom scripts). Please have a look at C++QED_core (branch version) and CustomScriptsExample (branch version). These are branched from master, so you will need C++QED_elements and CustomElementsExample from master. When you run 2ParticlesRingCavitySine --version, you will see something like: # C++QEDcore Version 2.9.1, core git commit: b70df0f1ce07ba6c4cf4130fc6c7c5b29e82a95b # elements git commit: f42c89e651211f572c75f363a368f2e67f0b6883 # elements_custom_example git commit: c2b9647117e47fd1f4d9e6714fe6079121756e1c # scripts_custom_example git commit: 5417c29938df3dce9fcdb9b53f4a6ec0a859ee1b # Andras Vukics, vu...@us... # Compiled with # Boost library collection : Version 1.54 # Gnu Scientific Library : Version 1.15 # Blitz++ numerical library: Config date: Fri Aug 30 13:01:08 CEST 2013 When you run the simulation, you will notice that the exact same version information is included in the header of the data. This is made possible by adding the following line to the script: updateVersionstring(cppqed_component_versions()); Without this call (all the original scripts), the version string will only contain the core git commit. To take advantage of the full version information, only "Version.h", "component_versions.h" and the call to updateVersionstring has to be included in a script. When you add a new commit in one of the repositories, cmake will automatically be run to regenerate the correct version information. Only very little code containing the version string needs to be recompiled, the rest is done by relinking. If you agree, I will merge this into master and Development. Best regards Raimar Implementation details: All libraries now have the function cppqed_*_version() returning the git commit as string, where * is core, elements, elements_custom_example etc. If the project name is "core" then also the numerical version of C++QED as defined in the CMakeLists.txt is included. The definition and declaration is auto-generated by cmake. Additionally, script executables are linked with the autogenerated function cppqed_component_versions(), where all the cppqed_*_version() strings of the sub-components are combined. In the core library there is now a global variable cppqed_versionstring which is initialized with the core version information. The function updateVersionstring can be used in scripts to update the global variable with the more accurate version information given by cppqed_component_versions(). The global variable is then consulted via versionHelper() if --version is present on the command line and for the data header. You can have a look at all the auto-generated .cc and .h files, they are in the top level build directories of the projects. |
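On the script side, the usage described above amounts to something like the following. This is a sketch assembled from the description in this mail; the header names are as quoted, but the exact signatures are assumptions.

    #include "Version.h"              // global version string and updateVersionstring
    #include "component_versions.h"   // auto-generated cppqed_component_versions()

    int main(int argc, char* argv[])
    {
      // Replace the core-only version string with the combined one, so that both
      // --version and the data header list the git commits of all sub-components.
      updateVersionstring(cppqed_component_versions());

      // ... the usual script body (parameter parsing, system construction, evolution) ...
      return 0;
    }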
From: Raimar S. <rai...@ui...> - 2013-11-04 23:31:25
Dear András, I have merged the new repository layout into Development. This was certainly one of the more interesting merges I have had so far :) The branches are currently named:

C++QED -> Development
C++QEDcore -> Development_split_elements
CPPQEDelements -> Development
C++QEDscripts -> Development_split_elements

I think after some testing we can merge the Development_split_elements branches back into Development (this will be a fast-forward), thus concluding the split. Then new doxygen branches can be created as well. For the OpenMP branch it might be an option to rebase and push --force (which gives a simpler history graph), or otherwise merge. This should be straightforward, as the changes in OpenMP touch only the core. I had to disable some scripts; could you check whether this was to be expected (I think so) or whether the failures result from the merge? To re-enable them for testing, remove the scripts from the EXCLUDE_SCRIPTS variable in CMakeLists.txt.

Best regards
Raimar

P.S.: Some technical details... Are you currently working with the C++QED repository and submodules, or with the standalone repositories? If you have standalone repositories, just check out the corresponding branches and ignore the rest. If you are using C++QED, check out Development in C++QED. This will not touch the submodules, so you will have a dirty working directory because the recorded commits of the submodules don't match. You have to switch branches in the submodules manually, or, to automate this, call inside C++QED:

$ git submodule foreach 'BRANCH=$(git config --file $toplevel/.gitmodules --get submodule.${name}.branch); git checkout ${BRANCH:-master}'

This command is so useful that I have created an alias in ~/.gitconfig:

[alias]
attach = "submodule foreach 'BRANCH=$(git config --file $toplevel/.gitmodules --get submodule.${name}.branch); git checkout ${BRANCH:-master}'"

With this alias in place, the workflow for cloning and switching branches would be:

git clone --recurse <URL_of_C++QED> C++QED
cd C++QED
git attach

Change to the Development branch:

git checkout Development
git attach

Change back to master:

git checkout master
git attach
From: Raimar S. <rai...@ui...> - 2013-11-04 03:13:09
Hi András, I have created two new repositories, CustomElementsExample and CustomScriptsExample. They contain very simple and almost self-explanatory CMakeLists.txt files which show how to create custom projects for elements and scripts.

Additionally I added an experimental feature to the build process, the cmake user registry. This is a quite cool feature of cmake; let's see if it proves useful. When building CPPQEDcore, CPPQEDelements or any of the custom elements projects, the library locations are recorded in ~/.cmake, and projects depending on those libraries can find them without user intervention. The only problem I see is that this might interfere with installed versions of the library...

This means you can now (after updating all the submodules) just go ahead and build everything, either monolithic or modular, and don't have to care about CMAKE_PREFIX_PATH or <something>_DIR variables. Everything should be found automatically. This also holds for the custom projects (elements and scripts).

Best regards
Raimar
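The user package registry boils down to two standard cmake calls; roughly as follows (a sketch with assumed package names, not the project's actual CMakeLists):

    # In the CPPQEDcore (or elements) build tree: register this build directory
    # in the user package registry under ~/.cmake/packages
    export(PACKAGE CPPQEDcore)

    # In a dependent project (elements, custom scripts, ...): find_package can now
    # locate the registered build tree without CMAKE_PREFIX_PATH or CPPQED_DIR
    find_package(CPPQEDcore REQUIRED)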
From: Raimar S. <rai...@ui...> - 2013-10-28 10:50:11
Hi András, over the weekend I finished the split into core, elements and scripts and adapted the cmake build process accordingly. It still needs some polishing, but in general it works quite well, I think.

First of all you will need cmake 2.8.9. I'm sorry about this dependency, I know your Ubuntu distribution doesn't have it yet, but there are some very nice features which, if missing, would need ugly workarounds. You can install the most recent version of cmake with:

wget http://www.cmake.org/files/v2.8/cmake-2.8.12-Linux-i386.sh
chmod +x cmake-2.8.12-Linux-i386.sh
./cmake-2.8.12-Linux-i386.sh

In the new layout we have four repositories:

* C++QED: very lightweight repository bringing along the other repositories as git submodules. A simple CMakeLists.txt enables the user to compile everything at once.
* C++QEDcore: branch split_elements
* C++QEDelements: branch master (as this is a new repository I didn't have to avoid master)
* C++QEDscripts: branch split_elements

The latter three repositories can be compiled as standalone cmake projects, given they find their dependencies. There are three modes of compilation; it would be great if you could test all of them.

1. Monolithic build from within C++QED

$ git clone --recursive ssh://vu...@gi.../p/cppqed/cppqed C++QED

Then open the top level CMakeLists.txt as usual in kdevelop and compile.

2. Modular build without installation

$ git clone -b split_elements ssh://vu...@gi.../p/cppqed/cppqed_core CPPQEDcore
$ git clone ssh://vu...@gi.../p/cppqed/cppqed_elements CPPQEDelements
$ git clone -b split_elements ssh://vu...@gi.../p/cppqed/cppqed_scripts CPPQEDscripts

* build CPPQEDcore as usual
* when configuring CPPQEDelements, call cmake with -DCPPQED_DIR=/path/to/CPPQEDcore/build, then compile CPPQEDelements
* when configuring CPPQEDscripts, call cmake with -DCPPQED_DIR=/path/to/CPPQEDcore/build and -DCPPQEDelements_DIR=/path/to/CPPQEDelements/build, then compile

It is also possible to skip the -D options, let the configuration in kdevelop fail for the first time, and then set the paths in the project configuration (Project->Open configuration, set CPPQED_DIR and CPPQEDelements_DIR there).

3. Modular build with installation

* Clone the three repositories as in 2.
* Call cmake with -DCMAKE_INSTALL_PREFIX=/some/prefix (to install into the prefix) and -DCMAKE_PREFIX_PATH=/some/prefix (to find stuff in the prefix)
* Build and install CPPQEDcore, then CPPQEDelements, then CPPQEDscripts (the scripts are not installed as of now)

Method 1 is very similar to the way it used to be, but one has to handle git submodules, which can be confusing at first (only if one wants to use this as a working directory for development). It also has the advantage that cmake knows how to build every target in the project and rebuilds all dependencies automatically.

Method 2 might prove useful for development, especially when maintaining additional elements repositories. One does not have to remember to install all the time. Dependencies cannot be built automatically, i.e. if core changes it is not sufficient to rebuild the scripts, as the scripts project does not know how to build the core; one has to rebuild core and then scripts. But if the scripts project sees that the core has changed, it will relink automatically.

Method 3 will be most convenient once binary packages of the libraries exist. In this scenario, the libraries are found automatically by cmake.

Some technical details:

* I implemented the right include path dependencies between components, both in core and in elements.
* I removed the details subdirectories. Include files formerly in details are now prefixed with "details_", this way the include guards stay as they were. When installed, include files will go into a flat directory CPPQED-2.9/{core,elements}. This better matches the way cmake works; otherwise a lot of boilerplate code has to be used to keep the directory structure of include files.
* Given the first two points, there is now a problem with structure/Free.h, where a file from ../quantumoperator is included. When installed, this path is not correct anymore. To work around this problem until it is fixed, I have introduced a compiler macro indicating whether to include ../quantumoperator/TridiagonalFwd.h or just TridiagonalFwd.h (see the sketch after this message).
* If one repository is built in DEBUG mode it cannot be used together with another repository in RELEASE mode. At the moment this will be detected at link time. With method 3 it is also possible to install both DEBUG and RELEASE in parallel; dependent components will pick up the right library automatically.
* Libraries and executables should run without the need for the LD_LIBRARY_PATH environment variable. All non-standard library paths are built into the binaries; the resulting executables should be relocatable as long as the libraries stay where they are. It would also be possible to make the libraries relocatable, but only if they are in standard paths or LD_LIBRARY_PATH is used.
* I have not touched the development branch yet.
* I have not tested what happens if clang and g++ are mixed. This should actually work, right? If not, one would have to test for consistent compilers.

Todo: provide a simple way for users to create their own elements and scripts projects.

Please keep in touch, this definitely needs a lot of testing and probably still contains bugs. We should also talk about a good workflow regarding the git submodules. Some reading on submodules: http://git-scm.com/book/en/Git-Tools-Submodules

Best regards
Raimar
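The include workaround referred to in the technical details above, as a sketch (the macro name is an assumption; the real one in the sources may differ):

    // structure/Free.h needs a header from ../quantumoperator; once installed, all
    // headers live in one flat directory, so the relative path no longer exists.
    #ifdef CPPQED_FLAT_INCLUDES   // hypothetical macro defined for the installed layout
    #include "TridiagonalFwd.h"
    #else
    #include "../quantumoperator/TridiagonalFwd.h"
    #endif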
From: Raimar S. <rai...@ui...> - 2013-08-22 13:47:57
Hi András, found the problem, sorry for the noise. I didn't use the cppqed fork of blitz. :-/ Best Raimar On Thursday 22 August 2013 11:45:52 Andras Vukics wrote: > Dear Raimar, > The agenda that you've listed sounds very good. > I haven't checked recently whether in the development branch all the > scripts compile, but the most representatives ones I do compile from time > to time. So it is (sort of) normal if some of them do not compile, but the > most important ones should. > For a long time now, I haven't experienced any problems with NaNs, so I > have no idea where this structure::InfiniteDetectedException comes from for > you. I have just checked and it works fine for me. > I use g++-4.6.3 and clang++-3.0/3.3, boost-1.46.1, and I tried both with > cmake and bjam. > The testsuitePhysics is now in a separate branch called TestSuite. I also > ran those tests, but pycppqed apparently cannot treat correctly the present > form of the data, probably because of some parsing error. > Sorry for not being able to help more. Best regards, > András > > > Dr. Andras Vukics > Institute for Theoretical Physics > University of Innsbruck > > > On Wed, Aug 21, 2013 at 2:33 PM, Raimar Sandner > > <rai...@ui...>wrote: > > Dear András, > > > > finally I found some time to try to catch up with the latest C++QED > > development. In the end I would like to make the necessary changes to > > pycppqed > > and my own scripts, so that I can use the new infrastructure. Then I would > > also try to adapt the work on the Python interface, and package the new > > development version of C++QED. > > > > I checked out the development branch and could compile it with gcc-4.6, > > only > > the script PumpedLossyModeRegression failed to compile (maybe this is > > expected?) It seems to use trajectory::readViaSStream with a wrong > > interface. > > > > However, there is something strange going on at runtime, most of the > > scripts > > throw structure::InfiniteDetectedException. For example I tried > > 1particle1mode > > without any arguments. Running this in the debugger shows that the > > exception > > is thrown in LiouvilleanAveragedCommonRanked::average. Before I dive any > > deeper into this, do you know what the cause could be? Which configuration > > are > > you currently using for development and testing, i.e. boost version, > > compiler, > > build system? I tried with boost-1.49, g++-4.6.3 and cmake. > > > > What happened to the testsuite_physics, or what is the preferred way at > > the > > moment to test the framework? > > > > Thank you and best regards > > Raimar > > > > > > -------------------------------------------------------------------------- > > ---- Introducing Performance Central, a new site from SourceForge and > > AppDynamics. Performance Central is your source for news, insights, > > analysis and resources for efficient Application Performance Management. > > Visit us today! > > http://pubads.g.doubleclick.net/gampad/clk?id=48897511&iu=/4140/ostg.clktr > > k > > _______________________________________________ > > Cppqed-support mailing list > > Cpp...@li... > > https://lists.sourceforge.net/lists/listinfo/cppqed-support |
From: Andras V. <and...@ui...> - 2013-08-22 09:46:21
Dear Raimar, The agenda that you've listed sounds very good. I haven't checked recently whether in the development branch all the scripts compile, but the most representatives ones I do compile from time to time. So it is (sort of) normal if some of them do not compile, but the most important ones should. For a long time now, I haven't experienced any problems with NaNs, so I have no idea where this structure::InfiniteDetectedException comes from for you. I have just checked and it works fine for me. I use g++-4.6.3 and clang++-3.0/3.3, boost-1.46.1, and I tried both with cmake and bjam. The testsuitePhysics is now in a separate branch called TestSuite. I also ran those tests, but pycppqed apparently cannot treat correctly the present form of the data, probably because of some parsing error. Sorry for not being able to help more. Best regards, András Dr. Andras Vukics Institute for Theoretical Physics University of Innsbruck On Wed, Aug 21, 2013 at 2:33 PM, Raimar Sandner <rai...@ui...>wrote: > Dear András, > > finally I found some time to try to catch up with the latest C++QED > development. In the end I would like to make the necessary changes to > pycppqed > and my own scripts, so that I can use the new infrastructure. Then I would > also try to adapt the work on the Python interface, and package the new > development version of C++QED. > > I checked out the development branch and could compile it with gcc-4.6, > only > the script PumpedLossyModeRegression failed to compile (maybe this is > expected?) It seems to use trajectory::readViaSStream with a wrong > interface. > > However, there is something strange going on at runtime, most of the > scripts > throw structure::InfiniteDetectedException. For example I tried > 1particle1mode > without any arguments. Running this in the debugger shows that the > exception > is thrown in LiouvilleanAveragedCommonRanked::average. Before I dive any > deeper into this, do you know what the cause could be? Which configuration > are > you currently using for development and testing, i.e. boost version, > compiler, > build system? I tried with boost-1.49, g++-4.6.3 and cmake. > > What happened to the testsuite_physics, or what is the preferred way at the > moment to test the framework? > > Thank you and best regards > Raimar > > > ------------------------------------------------------------------------------ > Introducing Performance Central, a new site from SourceForge and > AppDynamics. Performance Central is your source for news, insights, > analysis and resources for efficient Application Performance Management. > Visit us today! > http://pubads.g.doubleclick.net/gampad/clk?id=48897511&iu=/4140/ostg.clktrk > _______________________________________________ > Cppqed-support mailing list > Cpp...@li... > https://lists.sourceforge.net/lists/listinfo/cppqed-support > |
From: Raimar S. <rai...@ui...> - 2013-08-21 12:36:11
Dear András, finally I found some time to try to catch up with the latest C++QED development. In the end I would like to make the necessary changes to pycppqed and my own scripts, so that I can use the new infrastructure. Then I would also try to adapt the work on the Python interface, and package the new development version of C++QED. I checked out the development branch and could compile it with gcc-4.6, only the script PumpedLossyModeRegression failed to compile (maybe this is expected?) It seems to use trajectory::readViaSStream with a wrong interface. However, there is something strange going on at runtime, most of the scripts throw structure::InfiniteDetectedException. For example I tried 1particle1mode without any arguments. Running this in the debugger shows that the exception is thrown in LiouvilleanAveragedCommonRanked::average. Before I dive any deeper into this, do you know what the cause could be? Which configuration are you currently using for development and testing, i.e. boost version, compiler, build system? I tried with boost-1.49, g++-4.6.3 and cmake. What happened to the testsuite_physics, or what is the preferred way at the moment to test the framework? Thank you and best regards Raimar |
From: Andras V. <and...@ui...> - 2013-06-14 13:17:29
Dear All, The new doxygen-based API reference can be previewed in its present germinal state @ http://cppqed.sourceforge.net/doxygen/namespaces.html Any comments are welcome. Best regards, András Vukics Dr. Andras Vukics Institute for Theoretical Physics University of Innsbruck |
From: Andras V. <and...@ui...> - 2013-03-21 20:15:23
P.S.: Of course, the present strategy increases the number of dpLimit overshoots, which could be relieved by using liouvilleanSuggestedDtTry=factor*dpLimit_/dpOverDt with factor<1 instead of the present liouvilleanSuggestedDtTry=dpLimit_/dpOverDt (cf. MCWF_Trajectory.tcc:140). But I do not think it will be a problem.

On Wed, Mar 20, 2013 at 10:19 AM, Andras Vukics <and...@ui...> wrote:
> Dear All,
>
> This is a continuation of the discussion @
> http://sourceforge.net/mailarchive/message.php?msg_id=29183308 .
>
> As of Development branch revision #280, I have modified the MCWF_Trajectory
> driver in such a way that stepsize *increase* by solely the Liouvillean part
> of the system dynamics is possible.
>
> The dramatic difference is displayed by comparing the timesteps in this run:
> PumpedLossyMode_C++QED --cutoff 100 --minitFock 70 --nTh 1 --dc 1 --T 2
> between revision #279 and #280.
>
> Best regards,
> András
>
> Dr. Andras Vukics
> Institute for Theoretical Physics
> University of Innsbruck
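Spelled out as code, the stepsize suggestion proposed in the P.S. above amounts to the following sketch; dpLimit and dpOverDt stand for the corresponding quantities in MCWF_Trajectory.tcc, and the concrete value of factor is only an example.

    // Current suggestion:   dtTry = dpLimit / dpOverDt
    // Proposed relaxation:  dtTry = factor * dpLimit / dpOverDt   with factor < 1,
    // trading a slightly smaller step for fewer dpLimit overshoots.
    inline double liouvilleanSuggestedDtTry(double dpLimit, double dpOverDt,
                                            double factor = 0.8)
    {
      return factor * dpLimit / dpOverDt;
    }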
From: Andras V. <and...@ui...> - 2013-03-20 09:19:30
Dear All, This is a continuation of the discussion @ http://sourceforge.net/mailarchive/message.php?msg_id=29183308 . As of Development branch revision #280, I have modified the MCWF_Trajectory driver in such a way that stepsize *increase* by solely the Liouvillean part of the system dynamics is possible. The dramatic difference is displayed by comparing the timesteps in this run: PumpedLossyMode_C++QED --cutoff 100 --minitFock 70 --nTh 1 --dc 1 --T 2 between revision #279 and #280. Best regards, András Dr. Andras Vukics Institute for Theoretical Physics University of Innsbruck |
From: Raimar S. <rai...@ui...> - 2013-02-18 11:26:46
Great, seems to work fine. I will also check in on Mac if I find time... Meanwhile, could you merge cppqed/raimar and cppqed/raimar/Documentation? Thanks! Raimar On Wednesday 13 February 2013 09:33:10 Andras Vukics wrote: > I have reverted the main branch to something like the former revision #264, > but with some features from the development branch and raimar/gcc included, > and 4qbits.cc and 6qbits.cc removed. > This now compiles for me on two somewhat different platforms with gcc-4.4, > gcc-4.6 and clang. > The development branch is under > bzr://cppqed.bzr.sourceforge.net/bzrroot/cppqed/Development > > > Dr. Andras Vukics > Institute for Theoretical Physics > University of Innsbruck > > > On Tue, Feb 12, 2013 at 1:32 PM, Raimar Sandner > > <rai...@ui...>wrote: > > Dear András, > > > > we can only use the current branch as stable if we sort out all the > > problems > > which occur. We should take a revision which reportedly works on all > > supported > > platform with all supported compilers. > > > > Which compiler do you use for daily work? I found that g++ 4.6 needs the > > additional -std=c++0x flag. On the other hand the options > > > > "-Wno-local-type- > > > > template-args" and "-Wno-c++11-extensions" are not known. > > > > Best > > Raimar > > > > > Dear Raimar, > > > Agreed. > > > However, we should identify the present main branch with either the > > > > stable > > > > > or the development. I don’t know which one is better. Perhaps it should > > > > be > > > > > the stable, since in that case the users do not need to meddle with > > > > branch > > > > > names when downloading. > > > Best regards, > > > András > > > > > > > > > > > > Dr. Andras Vukics > > > Institute for Theoretical Physics > > > University of Innsbruck > > > > > > > > > On Tue, Feb 12, 2013 at 12:58 PM, Raimar Sandner > > > > > > <rai...@ui...>wrote: > > > > Dear András, > > > > > > > > I think we should introduce two branches beneath cppqed, one called > > > > "stable" > > > > and one "development". I will continue to use the development version > > > > in > > > > > > daily > > > > work to test it, then we can merge to stable from time to time. > > > > Eventually > > > > > > stable leads up to releases. Casual users or course participants can > > > > be > > > > directed to download the stable branch. > > > > > > > > At the moment there seem to be several problems with our official > > > > branch > > > > > > which > > > > prevent compilation for example with gcc-4.4 (I will investigate this > > > > and > > > > > > report back). The development branch will help to catch problems which > > > > are > > > > > > not > > > > immediately visible. > > > > > > > > Best regards > > > > Raimar > > > > -------------------------------------------------------------------------- > > > > > > ---- Free Next-Gen Firewall Hardware Offer > > > > Buy your Sophos next-gen firewall before the end March 2013 > > > > and get the hardware for free! Learn more. > > > > http://p.sf.net/sfu/sophos-d2d-feb > > > > _______________________________________________ > > > > Cppqed-support mailing list > > > > Cpp...@li... > > > > https://lists.sourceforge.net/lists/listinfo/cppqed-support |
From: Andras V. <and...@ui...> - 2013-02-13 08:33:39
I have reverted the main branch to something like the former revision #264, but with some features from the development branch and raimar/gcc included, and 4qbits.cc and 6qbits.cc removed. This now compiles for me on two somewhat different platforms with gcc-4.4, gcc-4.6 and clang. The development branch is under bzr://cppqed.bzr.sourceforge.net/bzrroot/cppqed/Development Dr. Andras Vukics Institute for Theoretical Physics University of Innsbruck On Tue, Feb 12, 2013 at 1:32 PM, Raimar Sandner <rai...@ui...>wrote: > Dear András, > > we can only use the current branch as stable if we sort out all the > problems > which occur. We should take a revision which reportedly works on all > supported > platform with all supported compilers. > > Which compiler do you use for daily work? I found that g++ 4.6 needs the > additional -std=c++0x flag. On the other hand the options > "-Wno-local-type- > template-args" and "-Wno-c++11-extensions" are not known. > > Best > Raimar > > > > Dear Raimar, > > Agreed. > > However, we should identify the present main branch with either the > stable > > or the development. I don’t know which one is better. Perhaps it should > be > > the stable, since in that case the users do not need to meddle with > branch > > names when downloading. > > Best regards, > > András > > > > > > > > Dr. Andras Vukics > > Institute for Theoretical Physics > > University of Innsbruck > > > > > > On Tue, Feb 12, 2013 at 12:58 PM, Raimar Sandner > > > > <rai...@ui...>wrote: > > > Dear András, > > > > > > I think we should introduce two branches beneath cppqed, one called > > > "stable" > > > and one "development". I will continue to use the development version > in > > > daily > > > work to test it, then we can merge to stable from time to time. > Eventually > > > stable leads up to releases. Casual users or course participants can be > > > directed to download the stable branch. > > > > > > At the moment there seem to be several problems with our official > branch > > > which > > > prevent compilation for example with gcc-4.4 (I will investigate this > and > > > report back). The development branch will help to catch problems which > are > > > not > > > immediately visible. > > > > > > Best regards > > > Raimar > > > > > > > > > > -------------------------------------------------------------------------- > > > ---- Free Next-Gen Firewall Hardware Offer > > > Buy your Sophos next-gen firewall before the end March 2013 > > > and get the hardware for free! Learn more. > > > http://p.sf.net/sfu/sophos-d2d-feb > > > _______________________________________________ > > > Cppqed-support mailing list > > > Cpp...@li... > > > https://lists.sourceforge.net/lists/listinfo/cppqed-support > |
From: Andras V. <and...@ui...> - 2013-02-12 12:11:55
Dear Raimar, Agreed. However, we should identify the present main branch with either the stable or the development. I don’t know which one is better. Perhaps it should be the stable, since in that case the users do not need to meddle with branch names when downloading. Best regards, András Dr. Andras Vukics Institute for Theoretical Physics University of Innsbruck On Tue, Feb 12, 2013 at 12:58 PM, Raimar Sandner <rai...@ui...>wrote: > Dear András, > > I think we should introduce two branches beneath cppqed, one called > "stable" > and one "development". I will continue to use the development version in > daily > work to test it, then we can merge to stable from time to time. Eventually > stable leads up to releases. Casual users or course participants can be > directed to download the stable branch. > > At the moment there seem to be several problems with our official branch > which > prevent compilation for example with gcc-4.4 (I will investigate this and > report back). The development branch will help to catch problems which are > not > immediately visible. > > Best regards > Raimar > > > ------------------------------------------------------------------------------ > Free Next-Gen Firewall Hardware Offer > Buy your Sophos next-gen firewall before the end March 2013 > and get the hardware for free! Learn more. > http://p.sf.net/sfu/sophos-d2d-feb > _______________________________________________ > Cppqed-support mailing list > Cpp...@li... > https://lists.sourceforge.net/lists/listinfo/cppqed-support > |
From: Raimar S. <rai...@ui...> - 2013-02-12 11:59:05
Dear András, I think we should introduce two branches beneath cppqed, one called "stable" and one "development". I will continue to use the development version in daily work to test it, then we can merge to stable from time to time. Eventually stable leads up to releases. Casual users or course participants can be directed to download the stable branch. At the moment there seem to be several problems with our official branch which prevent compilation for example with gcc-4.4 (I will investigate this and report back). The development branch will help to catch problems which are not immediately visible. Best regards Raimar |
From: Andras V. <and...@ui...> - 2013-01-21 10:47:00
Hi Raimar, According to this page <http://sourceforge.net/apps/trac/sourceforge/wiki/Bazaar#Multiplerepositories>, it is not possible to have several Bazaar repositories on SourceForge; one has to simulate the effect with multiple branches (as I’ve done so far). Best regards, András

Dr. Andras Vukics
Institute for Theoretical Physics
University of Innsbruck
From: Andras V. <and...@ui...> - 2012-10-13 13:08:32
Dear All, As of revision #247, I have pushed an update with a small extension to Blitz++, which removes the limitation of maximum rank on those Blitz++ functions that we use in the framework. It of course uses Boost.Preprocessor for automatic code generation, and the macro BLITZ_ARRAY_LARGEST_RANK, set to 11 by default. With this the rank limitation of 11 for state-vector and 5 for density-operator manipulations is removed. At the moment, the extended Blitz++ version can be used by simply copying (or linking) the files under the directory blitz of the distribution to the place where blitz is installed on the system, replacing the old files. Raimar, how do you think we should distribute this extended Blitz++? Perhaps it's time to start our own Blitz++ development branch... Best regards, András
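To give an idea of the kind of Boost.Preprocessor generation meant here — this is not the actual Blitz++ patch, only an illustration of the technique with a made-up function template:

    #include <boost/preprocessor/arithmetic/inc.hpp>
    #include <boost/preprocessor/repetition/repeat_from_to.hpp>

    #ifndef BLITZ_ARRAY_LARGEST_RANK
    #define BLITZ_ARRAY_LARGEST_RANK 11   // default used in the extension
    #endif

    // Hypothetical per-rank function standing in for the rank-dependent Blitz++ code
    template<int RANK> void doSomethingWithRank() { /* work with a rank-RANK array */ }

    // Explicitly instantiate it for every rank from 1 up to BLITZ_ARRAY_LARGEST_RANK
    #define CPPQED_INSTANTIATE(z, n, unused) template void doSomethingWithRank<n>();
    BOOST_PP_REPEAT_FROM_TO(1, BOOST_PP_INC(BLITZ_ARRAY_LARGEST_RANK),
                            CPPQED_INSTANTIATE, ~)
    #undef CPPQED_INSTANTIATE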
From: Andras V. <and...@ui...> - 2012-09-26 14:51:42
Dear Raimar, The constructors are templatized for the same reason they are templatized in interaction elements (cf. e.g. ParticleOrthogonalToCavity): in order that they accept both stack objects and shared_pointers (and any combination of these possibilities for the several arguments in ParticleOrthogonalToCavity and the like). Then, the different possibilities get dispatched by cpputils::sharedPointerize. However, in the case of Act and BinarySystem it's an overkill because a simple overload would do perfectly (not to mention that at the moment there aren't really maker functions for interaction elements). So, if you think it solves the problem, feel free to replace the template constructor by a pair of overloaded constructors. Hope this helps, András On Wed, Sep 26, 2012 at 4:11 PM, Raimar Sandner <rai...@ui...> wrote: > Dear András, > > I am currently trying to understand the new shared pointer branch and to adapt > the Python interface accordingly. > > Why is the interaction now a template argument, e.g. in binary::make and the > constructor of Act? Unfortunately this further complicates exposing these > functions to Python. > > For binary::make it is possible to work around it by exposing binary::doMake > directly instead. For Act we would have to take the specific interaction into > account when we decide if a new Act-instantiation has to be compiled for > Python on demand, so we have to compile more often. > > I do not understand why the template is needed in the first place. Why not > deduce the Interaction class from SubSystemsInteraction as before? This way > the same instantiation can be used for every Interaction with the correct > rank. > > Best regards, > Raimar > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Cppqed-support mailing list > Cpp...@li... > https://lists.sourceforge.net/lists/listinfo/cppqed-support |
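To make the two alternatives discussed above concrete, here is a stripped-down sketch of an Act-like class in both styles. The class and member names are simplified stand-ins, and the non-owning wrapping is only one plausible reading of what cpputils::sharedPointerize does.

    #include <boost/shared_ptr.hpp>

    struct Interaction {};                                   // stand-in for the real interaction type

    struct null_deleter { void operator()(const void*) const {} };

    // Variant 1: one templated constructor accepts stack objects and shared_ptrs alike,
    // with a sharedPointerize-style helper dispatching between the two cases.
    class ActTemplated {
    public:
      template<typename IA> explicit ActTemplated(const IA& ia);   // : ia_(cpputils::sharedPointerize(ia))
    private:
      boost::shared_ptr<const Interaction> ia_;
    };

    // Variant 2: a pair of overloaded constructors covers the same two cases without
    // turning the interaction into a template parameter — simpler to expose to Python.
    class ActOverloaded {
    public:
      explicit ActOverloaded(const Interaction& ia)
        : ia_(&ia, null_deleter()) {}                        // non-owning wrap of a stack object
      explicit ActOverloaded(boost::shared_ptr<const Interaction> ia)
        : ia_(ia) {}
    private:
      boost::shared_ptr<const Interaction> ia_;
    };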
From: Raimar S. <rai...@ui...> - 2012-09-26 14:13:19
Dear András, I am currently trying to understand the new shared pointer branch and to adapt the Python interface accordingly. Why is the interaction now a template argument, e.g. in binary::make and the constructor of Act? Unfortunately this further complicates exposing these functions to Python. For binary::make it is possible to work around it by exposing binary::doMake directly instead. For Act we would have to take the specific interaction into account when we decide if a new Act-instantiation has to be compiled for Python on demand, so we have to compile more often. I do not understand why the template is needed in the first place. Why not deduce the Interaction class from SubSystemsInteraction as before? This way the same instantiation can be used for every Interaction with the correct rank. Best regards, Raimar |
From: Andras V. <and...@ui...> - 2012-09-21 15:38:36
Dear Raimar, Thanks, I'll have a look at these changes. The smartPointerization/ branch is almost ready for merging, if you feel like trying it, your comments are welcome. Best regards, András On Thu, Sep 20, 2012 at 12:11 PM, Raimar Sandner <rai...@ui...> wrote: > Dear András, > > I replaced the branch raimar/intel with a new one, with these changes the > framework compiles with Intel's compiler version 12.1. It also passes the > physics testsuite (I used test.py with bitmask 127 and evalFew.sh reported all > true, is there a more thorough testsuite?). > > Best regards > Raimar > > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://ad.doubleclick.net/clk;258768047;13503038;j? > http://info.appdynamics.com/FreeJavaPerformanceDownload.html > _______________________________________________ > Cppqed-support mailing list > Cpp...@li... > https://lists.sourceforge.net/lists/listinfo/cppqed-support |
From: Raimar S. <rai...@ui...> - 2012-09-20 10:12:25
Dear András, I replaced the branch raimar/intel with a new one, with these changes the framework compiles with Intel's compiler version 12.1. It also passes the physics testsuite (I used test.py with bitmask 127 and evalFew.sh reported all true, is there a more thorough testsuite?). Best regards Raimar |
From: Andras V. <and...@ui...> - 2012-09-01 16:58:24
Hi Michael, Thanks for your mail. Not only in the past, but also presently C++QED optionally depends on FLENS (recently we've even prepared some ubuntu packages, cf. https://launchpad.net/~raimar-sandner/+archive/cppqed). All the eigenvalue calculations of the framework are done with LAPACK *through* FLENS, and if the FLENS dependency is not satisfied, then these tracts of the framework get disabled. (Earlier, we used to use our own little C++ interface to LAPACK, but soon gave it up -- you of all people probably understand why.) However, it is not your new FLENS/LAPACK implementation, but your old project that we use, namely the 2011/09/08 version from CVS. For months now, we've been aware of your new project, and it looks very promising to us. The reason why we think we cannot switch at the moment is that our project is first of all an application programming framework, in which end-users are expected to write and compile their C++ applications in the problem domain of open quantum systems. This means that when they compile, they also need to compile the FLENS parts that they actually use. Now, as far as I understand, the new FLENS/LAPACK uses c++11 features, which at the moment severely limits the range of suitable compilers. But we are definitely looking forward to be able to switch, and are glad that the project is well alive! Best regards, András On Sat, Sep 1, 2012 at 2:26 AM, Michael Lehn <mic...@un...> wrote: > Hi, > > I am a developer of FLENS. Browsing the web I saw that C++QED was using FLENS > for some optional modules in the past. There was quite a list I always wanted to > improve in FLENS. Unfortunately other projects were consuming too much time. > Last year I finally found time. If you are still interested in using LAPACK/BLAS > functionality from within a C++ project some of its features might be interesting > of you: > > 1) FLENS is header only. It comes with generic BLAS implementation. By "generic" I > mean template functions. However, for large problem sizes you still have to use > optimized BLAS implementations like ATLAS, GotoBLAS, ... > > 2) FLENS comes with a bunch of generic LAPACK function. Yes, we actually re-implemented > quite a number of LAPACK functions. The list of supported LAPACK function include LU, > QR, Cholesky decomposition, eigenvalue/-vectors computation, Schur factorization, etc. > Here an overview of FLENS-LAPACK: > > http://www.mathematik.uni-ulm.de/~lehn/FLENS/flens/lapack/lapack.html > > These LAPACK ports are more then just a reference implementation. It provides the same > performance as Netlib's LAPACK. But much more important we carefully ensure that it > also produces exactly the same results as LAPACK. Note that LAPACK is doing some cool > stuff to ensure stability and high precision of its result while providing excellent > performance at the same time. Consider the caller graph of LAPACK's routine dgeev which > computes the eigenvalues/-vectors of a general matrix: > > http://www.mathematik.uni-ulm.de/~lehn/dgeev.pdf > > The routine is pretty sophisticated and for example switches forth and back between > different implementations of the QR-method (dlaqr0, dlaqr1, dlaqr2, ...). By this > the routine even converges for critical cases. We ported all these functions one-by-one. > And tested them as follows: > (i) On entry we make copies of all arguments. > (ii) (a) we call our port > (ii) (b) we call the original LAPACK function > (iii) We compare the results. 
> If a single-threaded BLAS implementation is used we even can reproduce the same roundoff > errors (i.e. we do the comparison bit-by-bit). > > 3) If you use FLENS-LAPACK with ATLAS or GotoBLAS as BLAS backend then you are on par > with LAPACK from MKL or ACML. Actually MKL and ACML just use the Netlib LAPACK and optimize > a few functions (IMHO there's a lot of marketing involved). > > 4) You still can use an external LAPACK function (if FLENS-LAPACK does not provide a needed > LAPACK port you even have to). > > 5) Porting LAPACK to FLENS had another big advantage: It was a great test for our matrix and > vector classes: > (i) We now can prove that our matrix/vector classes do not introduce any performance penalty. > You get the same performance as you get with plain C or Fortran. > (ii) Feature-completeness of matrix/vector classes. We at least know that our matrix/vector > classes allow a convenient implementation of all the algorithms in LAPACK. An in our > opinion the FLENS-LAPACK implementation also looks sexy. > > I hope that I could tease you a little to have a look at FLENS on http://flens.sf.net > > FLENS is back, > > Michael > > > > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Cppqed-support mailing list > Cpp...@li... > https://lists.sourceforge.net/lists/listinfo/cppqed-support |
From: Michael L. <mic...@un...> - 2012-09-01 00:50:19
Hi, I am a developer of FLENS. Browsing the web I saw that C++QED was using FLENS for some optional modules in the past. There was quite a list I always wanted to improve in FLENS. Unfortunately other projects were consuming too much time. Last year I finally found time. If you are still interested in using LAPACK/BLAS functionality from within a C++ project some of its features might be interesting of you: 1) FLENS is header only. It comes with generic BLAS implementation. By "generic" I mean template functions. However, for large problem sizes you still have to use optimized BLAS implementations like ATLAS, GotoBLAS, ... 2) FLENS comes with a bunch of generic LAPACK function. Yes, we actually re-implemented quite a number of LAPACK functions. The list of supported LAPACK function include LU, QR, Cholesky decomposition, eigenvalue/-vectors computation, Schur factorization, etc. Here an overview of FLENS-LAPACK: http://www.mathematik.uni-ulm.de/~lehn/FLENS/flens/lapack/lapack.html These LAPACK ports are more then just a reference implementation. It provides the same performance as Netlib's LAPACK. But much more important we carefully ensure that it also produces exactly the same results as LAPACK. Note that LAPACK is doing some cool stuff to ensure stability and high precision of its result while providing excellent performance at the same time. Consider the caller graph of LAPACK's routine dgeev which computes the eigenvalues/-vectors of a general matrix: http://www.mathematik.uni-ulm.de/~lehn/dgeev.pdf The routine is pretty sophisticated and for example switches forth and back between different implementations of the QR-method (dlaqr0, dlaqr1, dlaqr2, ...). By this the routine even converges for critical cases. We ported all these functions one-by-one. And tested them as follows: (i) On entry we make copies of all arguments. (ii) (a) we call our port (ii) (b) we call the original LAPACK function (iii) We compare the results. If a single-threaded BLAS implementation is used we even can reproduce the same roundoff errors (i.e. we do the comparison bit-by-bit). 3) If you use FLENS-LAPACK with ATLAS or GotoBLAS as BLAS backend then you are on par with LAPACK from MKL or ACML. Actually MKL and ACML just use the Netlib LAPACK and optimize a few functions (IMHO there's a lot of marketing involved). 4) You still can use an external LAPACK function (if FLENS-LAPACK does not provide a needed LAPACK port you even have to). 5) Porting LAPACK to FLENS had another big advantage: It was a great test for our matrix and vector classes: (i) We now can prove that our matrix/vector classes do not introduce any performance penalty. You get the same performance as you get with plain C or Fortran. (ii) Feature-completeness of matrix/vector classes. We at least know that our matrix/vector classes allow a convenient implementation of all the algorithms in LAPACK. An in our opinion the FLENS-LAPACK implementation also looks sexy. I hope that I could tease you a little to have a look at FLENS on http://flens.sf.net FLENS is back, Michael |
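The test strategy of points (i)-(iii) can be condensed into a small harness like the one below — a generic sketch with hypothetical callables, not code taken from FLENS:

    #include <cstring>
    #include <vector>

    // (i) copy the arguments, (ii) run the port and the reference routine on the
    // copies, (iii) compare the results bit-by-bit — meaningful when both sides
    // use the same single-threaded BLAS backend.
    template<typename PortedRoutine, typename ReferenceRoutine>
    bool matchesReference(PortedRoutine ported, ReferenceRoutine reference,
                          const std::vector<double>& input)
    {
      std::vector<double> a(input), b(input);
      ported(a);
      reference(b);
      if (a.size() != b.size()) return false;
      return a.empty()
          || std::memcmp(&a[0], &b[0], a.size()*sizeof(double)) == 0;
    }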