nmr-relax-announce Mailing List for relax
Molecular dynamics by NMR data analysis
Brought to you by: edauvergne, troelslinnet
Message archive (messages per month):

2006: Mar (1), May (1), Jul (3), Sep (1), Nov (2)
2007: Jan (1), Feb (1), Oct (1), Nov (1)
2008: Apr (1), Aug (1), Sep (1), Oct (1), Nov (1)
2009: Aug (1), Nov (1)
2010: Jun (1), Dec (1)
2011: Jan (2), Feb (1), Aug (2), Nov (1)
2012: Mar (1), Apr (2), May (1), Jun (1), Jul (1), Sep (1), Oct (2)
2013: Jan (1), Feb (2), Mar (3), Aug (1), Oct (1), Nov (2), Dec (1)
2014: Jan (3), Feb (2), Mar (1), May (2), Jun (1), Jul (1), Sep (1), Oct (1), Nov (2), Dec (1)
2015: Jan (1), Feb (1), Mar (1), Apr (1), Oct (2), Dec (1)
2016: May (1)
2019: Feb (2), Mar (1), Apr (1), Jun (1), Jul (5)
2020: Aug (1)
From: Edward d'A. <ed...@nm...> - 2020-08-26 11:05:47
This is a major feature release that adds initial support for wxPython-Phoenix. It includes a large number of under-the-hood changes to support more modern Python versions and packages, a lot of polish of the relax text output, improved test suite control, and improved and modernised Travis CI support for automatically checking the integrity of the software (https://travis-ci.com/github/nmr-relax/relax/builds).

For the official, easy-to-navigate release notes, please see http://wiki.nmr-relax.com/Relax_5.0.0. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).

The full list of changes is:

Features:
* Support for wxPython-Phoenix.
* Bmrblib: This Python package is now once again optional, and relax can run without it.
* MS Windows builds are now 64-bit by default.
* Major improvements to the relax text output.

Changes:
* Test suite: Skipped tests are no longer run when individual tests are supplied on the command line. The RelaxTestLoader.loadTestsFromNames() method has been implemented to gracefully handle the skipping of tests when only a single test is run.
* Travis CI config: Fixes for PyPI numpy no longer being compatible with Python 2.7. Older versions of numpy now need to be manually specified for the Python 2.7 job.
* Travis CI config: Attempt at making the MS Windows build job run again. The Travis CI infrastructure has changed yet again and the Windows job fails in the setup stage. These changes are just a guess to try to make this work again.
* Travis CI config: Second attempt at making the MS Windows build job run again. Chocolatey was automatically installing the new Python 3.8.0 but the paths pointed to the 3.7 version. Now the 3.7.4 Python version is explicitly specified.
* SCons: Change of the MS Windows build architecture from the default of 32-bit to 64-bit. Previously the default was 32-bit compilation on all Windows systems, via the WIN_TARGET_OVERRIDE flag, as official Python never used to release 64-bit builds for Windows systems. As this is no longer the case, the 32-bit override is now only set for the old Python 2 versions.
* Travis CI config: Creation of a job for testing relax on an arm64 CPU. The system Python and its packages are used to avoid timeouts on arm64: installing the Python packages via pip prior to running causes a Travis CI timeout, as most of the 50 minutes allowed are used up by the compilation of SciPy. Despite the successful installation of the wxPython site-package on the system Python 3, the GUI tests are not activated as there is a problem with xvfb on the arm64 Travis CI jobs.
* N_state_model.test_populations system test: Loosened two of the checks to allow arm64 to pass.
* wxPython: Added the dep_check.old_wx flag for differentiating between Classic and Phoenix.
* wxPython-Phoenix: Fix for the wx.BoxSizer.AddSpacer() function calls. The old wxPython conversion of the size argument to (size, size) breaks the layout, so the tuple arguments are essential. However, tuple arguments are not allowed in wxPython-Phoenix, so the dep_check.old_wx flag is used to differentiate the behaviour of the two wxPythons.
* wxPython-Phoenix: Fix for the old wx.Sizer.DeleteWindows() method. This method no longer exists, so the Clear() method with the deleteWindows argument (or delete_windows in Phoenix) is used instead.
* wxPython-Phoenix: Fix for the missing wx.SystemSettings_GetMetric() function. This has been switched to wx.SystemSettings.GetMetric(), which is present in both the original wxPython and Phoenix.
* wxPython-Phoenix: Fixes for the relax GUI About dialog. The wx.Frame.Center() function call only works if the window is shown (i.e. it is broken in the test suite), and the wx.DC.EndDrawing() function has been dropped in Phoenix.
* wxPython-Phoenix: Fixes for the GUI sequence and file input elements. The wx.Frame.Center() function call only works if the window is shown (i.e. it is broken in the test suite).
* wxPython-Phoenix: Support for the splash screen. The wx.SplashScreen class and associated variables have shifted into wx.adv.
* wxPython-Phoenix: Support for the relax icon. The wx.IconBundle.AddIconFromFile() function has been replaced by wx.IconBundle.AddIcon() in current Phoenix.
* wxPython-Phoenix: Fix for the spin viewer window. The wx.Window.GetClientSizeTuple() function does not exist in Phoenix; it can simply be replaced by wx.Window.GetClientSize() in the current code.
* Deletion hack: The wx.Bitmap.HasAlpha() function is missing in current Phoenix.
* relax GUI: Fix for the window icons.
* wxPython-Phoenix: Switch away from the deprecated wx.Menu.AppendItem() function. Classic still requires the calls to this function, but Phoenix now uses wx.Menu.Append() instead.
* wxPython: Renamed the dep_check.old_wx flag to dep_check.wx_classic.
* wxPython-Phoenix: Prominent feedback warning the user about using unstable Phoenix versions. This includes both a RelaxWarning on startup and warning text placed in red in the centre of the blank relax GUI main window. Currently all Phoenix versions are labelled as unstable; this can be changed in the future directly in the dep_check module.
* wxPython-Phoenix: Switch away from the deprecated wx.ToolBar.AddLabelTool() function. This is still used for Classic; for Phoenix, the wx.ToolBar.AddTool() function is used instead.
* wxPython-Phoenix: Switch away from the deprecated wx.Window.SetToolTipString() function. Instead, wx.Window.SetToolTip(wx.ToolTip(text)) is used for both Classic and Phoenix.
* wxPython-Phoenix: Switch from wx.NamedColour() to wx.Colour() in the relax controller. Classic still uses the old function.
* wxPython-Phoenix: Switch from the deprecated wx.Text.GetSizeTuple() to wx.Text.GetSize(). This seems to work on Classic as well.
* wxPython-Phoenix: Switch from the deprecated wx.TreeCtrl.GetItemPyData() function. Classic still uses this function, but Phoenix now uses wx.TreeCtrl.GetItemData().
* wxPython-Phoenix: Switch from the deprecated wx.TreeCtrl.SetDimensions() function. SetSize() is now used instead for Phoenix.
* wxPython-Phoenix: Switch from the deprecated wx.TreeCtrl.SetPyData() function. Classic still uses this function, but Phoenix now uses wx.TreeCtrl.SetItemData().
* wxPython-Phoenix: Switch from the deprecated wx.StockCursor() wrapper function. The overloaded wx.Cursor class can be used instead in Phoenix.
* wxPython-Phoenix: Switch from the deprecated wx.EmptyBitmap() wrapper function. Phoenix versions can simply use the overloaded wx.Bitmap class with the same arguments.
* wxPython-Phoenix: Switch from wrapper to overloaded functions for the wx.ListCtrl elements.
* Python 3.8 support: The platform.linux_distribution() function no longer exists. It is now replaced by the distro site-package; the lib.compat package deals with this difference.
* Model-free analysis: Obscure syntax error bug fix for an issue highlighted by Python 3.8. The error is in the set_xh_vect() function and is only encountered when reading an ancient relax 1.2 model-free results file.
* Travis CI config: Changes as suggested by the experimental Travis CI Build Config Explorer. The config text was pasted into https://config.travis-ci.com/explore and changed as suggested.
* Travis CI config: Shifted the OpenMPI required packages into an apt 'addons' section.
* Travis CI config: Shifted the API doc build specific parts into the jobs matrix. This allows an environment variable to be removed and a simplification of the 'script' section.
* Travis CI config: Shifted the FSF copyright validation specific parts into the jobs matrix. This allows an environment variable to be removed and a simplification of the 'script' section.
* Travis CI config: Removal of the now unused TEST environment variable.
* Travis CI config: Simplification of the single processor and OpenMPI execution. The MPIRUN and RELAX_ARGS arguments have been introduced. These are normally unset but, for the OpenMPI jobs, they are set to "mpirun -np 2" and "--multi=mpi4py" respectively. This allows the duplicated entries for the information printout and test suite execution to be collapsed into one.
* Travis CI config: Removal of the 'pip upgraded package' job. This job does not seem to be necessary for testing relax.
* Travis CI config: Conversion of the Ubuntu Xenial job to Ubuntu Bionic.
* Travis CI config: Removal of the 'language' key in the jobs matrix when the value is 'python'. This duplicates the language already set to Python outside of the matrix.
* Multi-processor: Shifted the processor type checking into the initial command line parsing. This allows a non-zero error code to be returned to the shell.
* Travis CI config: Shifted the echoing of environment variables into a new 'before_script' section. This allows the echoing to occur for all jobs.
* SCons: Improvements to the string formatting and the printout for the C module compilation. This includes showing the target architecture for MS Windows compilation.
* SCons: Documented the environment variables used.
* Information printout: Improved output for Python 3 compiled C modules. The bytestream is now decoded.
* SCons: The MS Windows binary target architecture is now determined by the Python binary architecture.
* Test suite: Implementation of a command line option for disabling IO capture. This was previously handled by the debug command line option, which simply prevented IO capture. That type of output is very hard to parse by eye, as the tests are not well separated and the debugging output is very verbose. Now the --no-capt or --no-capture option has been implemented to disable the IO capture. The debug command line option no longer disables IO capture; rather, it allows for finer control of the test suite in that verbose debugging output is now only shown for tests that do not pass. When IO capture is disabled, extra formatted output is used to provide clear separators, titles, descriptions and endings for each test.
* Test suite: Argument reordering and better docstring documentation in the relax test suite runners.
* Test suite: All adjustable widths are now set using the value of status.text_width. This includes the separators for the tests and the test suite summary lines at the end.
* Fix for Python 2.5 support.
* Command line processing: Switch from the deprecated optparse Python module to argparse. The argument parsing code and help text have also been improved.
* Travis CI config: Added the relax --test and --version modes.
* Travis CI config: Alphabetical ordering of the environment variable printouts.
* Help: Improvements to the help printout, including new descriptions for the argument groups.
* Status object: Improved logic for determining the ideal text width for relax.
* Travis CI config: Added testing of the relax --help mode.
* Test suite: Added text wrapping, set to the relax text width, for the test description. This is the description shown when running without IO capture.
* Information printout: Improved formatting for MS Windows. The repr() function results in '\\' for path separators rather than '\', causing the formatting to be out.
* Test suite: Addition of a new command line option for listing all of the test names. The new '--list-tests' option will cause the names of the tests to be printed out without running any tests.
* Travis CI config: Try to force a Py2 compatible version of kiwisolver, as needed by matplotlib.
* Travis CI config: The virtual machines with Python 2 now seem to require SCons to be manually installed.
* Test suite: Fixes for the Palmer.test_palmer_omp system test. The modelfree4 binary type 'linux-x86_64-gcc' seems to now produce slightly different results with newer system libraries. The checks in this test have been updated to reflect this.

Bugfixes:
* GUI: Bug fix for the deletion of analysis tabs on Python 3. The value of None cannot be compared to an integer. This bug appears to only be triggered by another bug: a GUI tearDown() or deletion failure on MS Windows with wxPython-Phoenix and Python 3.
* Restoration of the simple user function menus.
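The Classic/Phoenix differences above are all handled through a single compatibility flag. A minimal sketch of that pattern (this is not relax's actual dep_check module; the function names and the use of the version string are illustrative assumptions, relying on Phoenix builds reporting "phoenix" in their wx.version() string):

```python
# Sketch of a Classic-vs-Phoenix compatibility flag, factored so it can be
# exercised without wxPython installed.  In a real program the version
# string would come from wx.version().

def is_classic(wx_version_string):
    """Return True for wxPython Classic, False for Phoenix builds."""
    return "phoenix" not in wx_version_string.lower()

def add_spacer_size(wx_version_string, size):
    """Return the argument to pass to wx.BoxSizer.AddSpacer().

    Classic required a (size, size) tuple to avoid breaking the layout,
    whereas Phoenix only accepts a plain integer.
    """
    if is_classic(wx_version_string):
        return (size, size)
    return size

print(add_spacer_size("3.0.2.0 gtk2 (classic)", 5))   # (5, 5)
print(add_spacer_size("4.1.0 gtk3 (phoenix)", 5))     # 5
```

Centralising the check in one flag, as the release notes describe with dep_check.wx_classic, keeps the per-call-site branches trivial.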
From: Stefano L. C. <ste...@un...> - 2019-07-12 22:59:17
Hi Edward,

indeed, no models have been selected for any of the residues. That might be, logically, the problem. This is the message I read after each of the nine attempted models: "RelaxWarning: No spins are selected therefore the optimisation or calculation cannot proceed." Suggestions?

Stefano

> On 12 Jul 2019, at 15:57, Edward d'Auvergne <ed...@nm...> wrote:
>
> On Fri, 12 Jul 2019 at 15:16, Stefano Luciano Ciurli
> <ste...@un...> wrote:
>>
>> Hi Edward,
>> I hope I am sending the message to the correct address now.
>>
>> I think I have solved the issue that was causing the mentioned problem
>> with the apparently incomplete set up by doing what you indicated.
>>
>> Now the calculation has gone for about 5 min with the following result:
>>
>> ==================================================================
>> = Completion of the d'Auvergne protocol model-free auto-analysis =
>> ==================================================================
>>
>> Elapsed time: 5 minutes and 54.350 seconds
>>
>> Exception raised in thread.
>>
>> Traceback (most recent call last):
>>   File "gui/analyses/execute.pyc", line 87, in run
>>   File "gui/analyses/auto_model_free.pyc", line 834, in run_analysis
>>   File "auto_analyses/dauvergne_protocol.pyc", line 249, in __init__
>>   File "auto_analyses/dauvergne_protocol.pyc", line 610, in execute
>>   File "auto_analyses/dauvergne_protocol.pyc", line 855, in model_selection
>>   File "prompt/uf_objects.pyc", line 161, in __call__
>>   File "pipe_control/model_selection.pyc", line 260, in select
>>   File "pipe_control/pipes.pyc", line 68, in bundle
>>   File "lib/checks.pyc", line 81, in __call__
>> RelaxNoPipeError: RelaxError: The data pipe 'aic - mf (Sun Jul 7 18:21:56 2019)' has not been created yet.
>>
>> What should I make of it?
>
> Cheers! The error message has been seen before:
>
> http://www.nmr-relax.com/mail.gna.org/public/relax-users/2012-11/msg00005.html
> http://www.nmr-relax.com/mail.gna.org/public/relax-users/2008-11/msg00034.html
>
> However I've never had a bug report with enough details to reproduce the
> problem to be able to catch it in the test suite. My guess is that if you
> carefully look at the log messages (hopefully you ran it with the log
> output to file), you'll see a series of RelaxWarning messages earlier
> which explain the problem in detail. I would also guess that no models
> have been selected for one or more spin systems.
>
> Regards,
>
> Edward
From: Edward d'A. <ed...@nm...> - 2019-07-12 13:58:07
On Fri, 12 Jul 2019 at 15:16, Stefano Luciano Ciurli <ste...@un...> wrote:
>
> Hi Edward,
> I hope I am sending the message to the correct address now.
>
> I think I have solved the issue that was causing the mentioned problem
> with the apparently incomplete set up by doing what you indicated.
>
> Now the calculation has gone for about 5 min with the following result:
>
> ==================================================================
> = Completion of the d'Auvergne protocol model-free auto-analysis =
> ==================================================================
>
> Elapsed time: 5 minutes and 54.350 seconds
>
> Exception raised in thread.
>
> Traceback (most recent call last):
>   File "gui/analyses/execute.pyc", line 87, in run
>   File "gui/analyses/auto_model_free.pyc", line 834, in run_analysis
>   File "auto_analyses/dauvergne_protocol.pyc", line 249, in __init__
>   File "auto_analyses/dauvergne_protocol.pyc", line 610, in execute
>   File "auto_analyses/dauvergne_protocol.pyc", line 855, in model_selection
>   File "prompt/uf_objects.pyc", line 161, in __call__
>   File "pipe_control/model_selection.pyc", line 260, in select
>   File "pipe_control/pipes.pyc", line 68, in bundle
>   File "lib/checks.pyc", line 81, in __call__
> RelaxNoPipeError: RelaxError: The data pipe 'aic - mf (Sun Jul 7 18:21:56 2019)' has not been created yet.
>
> What should I make of it?

Cheers! The error message has been seen before:

http://www.nmr-relax.com/mail.gna.org/public/relax-users/2012-11/msg00005.html
http://www.nmr-relax.com/mail.gna.org/public/relax-users/2008-11/msg00034.html

However I've never had a bug report with enough details to reproduce the problem to be able to catch it in the test suite. My guess is that if you carefully look at the log messages (hopefully you ran it with the log output to file), you'll see a series of RelaxWarning messages earlier which explain the problem in detail. I would also guess that no models have been selected for one or more spin systems.

Regards,

Edward
From: Stefano L. C. <ste...@un...> - 2019-07-12 13:32:08
Hi Edward,

I hope I am sending the message to the correct address now. I think I have solved the issue that was causing the mentioned problem with the apparently incomplete set up by doing what you indicated. Now the calculation has gone for about 5 min with the following result:

==================================================================
= Completion of the d'Auvergne protocol model-free auto-analysis =
==================================================================

Elapsed time: 5 minutes and 54.350 seconds

Exception raised in thread.

Traceback (most recent call last):
  File "gui/analyses/execute.pyc", line 87, in run
  File "gui/analyses/auto_model_free.pyc", line 834, in run_analysis
  File "auto_analyses/dauvergne_protocol.pyc", line 249, in __init__
  File "auto_analyses/dauvergne_protocol.pyc", line 610, in execute
  File "auto_analyses/dauvergne_protocol.pyc", line 855, in model_selection
  File "prompt/uf_objects.pyc", line 161, in __call__
  File "pipe_control/model_selection.pyc", line 260, in select
  File "pipe_control/pipes.pyc", line 68, in bundle
  File "lib/checks.pyc", line 81, in __call__
RelaxNoPipeError: RelaxError: The data pipe 'aic - mf (Sun Jul 7 18:21:56 2019)' has not been created yet.

What should I make of it?

Stefano

> On 12 Jul 2019, at 09:22, Edward d'Auvergne <ed...@nm...> wrote:
>
> On Sun, 7 Jul 2019 at 18:37, Stefano Luciano Ciurli
> <ste...@un...> wrote:
>>
>> Hi Edward,
>>
>> thank you for this update. The previous problems are solved now!
>>
>> Proceeding further, and having loaded spins from a PDB structure which
>> includes, of course, H and N nuclei, I receive a message when executing
>> the program that says that the set up is incomplete, and that
>> interatomic data for the dipole-dipole interaction is missing, followed
>> by the full list of N and H atoms for each residue in the protein
>> sequence.
>>
>> Also, it suggests to try the “spin-isotope user function”.
>>
>> What should I do?
>
> Hi Stefano,
>
> It took a while for your message to get through as you emailed the relax
> announcement mailing list. I've now changed the address for this email
> thread to the relax users mailing list. I have configured the relax
> announcement list to set the explicit "Reply-To:" header to the users
> mailing list. Could you please deactivate the setting in your email
> software that is causing this to be ignored? This "Reply-To:" header is
> also used for the commits mailing list to direct those responses to the
> development mailing list. The announcement and commit mailing lists are
> read-only.
>
> For this "The set up is incomplete. Please check for the following
> missing information: Interatomic data (for the dipole-dipole
> interaction)" issue, have you clicked on all of the buttons under the
> relaxation data list GUI element? You need to click on each one of these
> buttons to complete the set up. Note that these are deliberately not
> mandatory, as people with corner case molecular systems will sometimes
> instead use the user function menus to set up their non-standard spin
> systems.
>
> Regards,
>
> Edward
From: Edward d'A. <ed...@nm...> - 2019-07-12 07:22:40
On Sun, 7 Jul 2019 at 18:37, Stefano Luciano Ciurli <ste...@un...> wrote:
>
> Hi Edward,
>
> thank you for this update. The previous problems are solved now!
>
> Proceeding further, and having loaded spins from a PDB structure which
> includes, of course, H and N nuclei, I receive a message when executing
> the program that says that the set up is incomplete, and that interatomic
> data for the dipole-dipole interaction is missing, followed by the full
> list of N and H atoms for each residue in the protein sequence.
>
> Also, it suggests to try the “spin-isotope user function”.
>
> What should I do?

Hi Stefano,

It took a while for your message to get through as you emailed the relax announcement mailing list. I've now changed the address for this email thread to the relax users mailing list. I have configured the relax announcement list to set the explicit "Reply-To:" header to the users mailing list. Could you please deactivate the setting in your email software that is causing this to be ignored? This "Reply-To:" header is also used for the commits mailing list to direct those responses to the development mailing list. The announcement and commit mailing lists are read-only.

For this "The set up is incomplete. Please check for the following missing information: Interatomic data (for the dipole-dipole interaction)" issue, have you clicked on all of the buttons under the relaxation data list GUI element? You need to click on each one of these buttons to complete the set up. Note that these are deliberately not mandatory, as people with corner case molecular systems will sometimes instead use the user function menus to set up their non-standard spin systems.

Regards,

Edward
From: Stefano L. C. <ste...@un...> - 2019-07-07 16:52:51
Hi Edward,

thank you for this update. The previous problems are solved now!

Proceeding further, and having loaded spins from a PDB structure which includes, of course, H and N nuclei, I receive a message when executing the program that says that the set up is incomplete, and that interatomic data for the dipole-dipole interaction is missing, followed by the full list of N and H atoms for each residue in the protein sequence.

Also, it suggests to try the “spin-isotope user function”.

What should I do?

Stefano

On 14 Jun 2019, at 12:10, Edward d'Auvergne <ed...@nm...> wrote:

> This is a minor bugfix release that re-enables the reading of Bruker
> Dynamics Center NOE data files. For the official, easy-to-navigate
> release notes, please see http://wiki.nmr-relax.com/Relax_4.1.3.
>
> [...]
From: Edward d'A. <ed...@nm...> - 2019-06-14 10:11:02
This is a minor bugfix release that re-enables the reading of Bruker Dynamics Center NOE data files.

For the official, easy-to-navigate release notes, please see http://wiki.nmr-relax.com/Relax_4.1.3. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).

The full list of changes is:

Features: N/A

Changes:
* FSF Copyright Validation configuration: Blacklisted the PDF user manual. This allows the checking of relax tags to pass.
* Release checklist document: Describe the relax fork of latex2html.
* API manual: No longer raise errors when parsing the pystarlib docstrings.
* Release checklist document: Minor improvements to match the practical aspects of the release.
* User manual: Proper abbreviation of the "Quarterly Reviews of Biophysics" journal name.
* Test suite: New system test to catch the failure of reading newer Bruker DC NOE data files. The system test is Bruker.test_bug_15_NOE_read_fail and it catches bug #15 (https://sourceforge.net/p/nmr-relax/tickets/15/). The test uses truncated data from Stefano Ciurli as attached to the bug report.
* Bruker DC: Silence the warnings about spin names already existing. The user does not need to see such warnings.
* Travis CI config: Explicitly set 'trusty' as the distribution name for the default images. In the support request titled "Failure of GUI testing via xvfb" (https://support.travis-ci.com/hc/en-us/requests/7654), the Travis CI support staff suggested that we explicitly set 'dist: trusty'.
* Bruker DC: A different way to silence the warnings about spin names already existing. The previous attempt at setting the force flag to True was causing failures in a number of system tests. Therefore a new flag 'warn_flag' has been added to pipe_control.mol_res_spin.name_spin() to allow warnings to be explicitly silenced.
* Travis CI config: Use Xenial for running all tests on Linux and Python 2.7. This is from the support request titled "Failure of GUI testing via xvfb" (https://support.travis-ci.com/hc/en-us/requests/7654).
* Travis CI config: Manual support for old SciPy versions on Python 2.7. SciPy 1.3.0 now requires Python >= 3.5. Therefore the OLD_MATPLOTLIB variable has been renamed to OLD_PY2_PACKAGES and, when set, is now used to install old matplotlib and scipy versions when using Python 2.7.
* Travis CI config: Deactivate the Mac OS X updates to avoid timeouts. The 'brew update' and 'brew upgrade python3' commands take up half of the build time for the Mac OS X target. This large amount of time sometimes causes this build to hit the Travis CI time limits.

Bugfixes:
* Bruker DC: Support for handling newer versions of the NOE data file. This fixes bug #15 (https://sourceforge.net/p/nmr-relax/tickets/15/), the failure to read newer versions of the Bruker DC NOE data files. This was simply a parsing issue, as the NOE column is now "NOE [ ]" whereas previous DC versions used the text "NOE" or "NOE [none]".
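The bug #15 parsing fix can be illustrated with a small sketch (this is not relax's actual Bruker DC reader; the function name and header layout are assumptions): matching "NOE" followed by an optional bracketed unit accepts all three column-label variants mentioned in the bugfix.

```python
import re

# Accept "NOE", "NOE [none]", and the newer "NOE [ ]" column labels.
NOE_COLUMN = re.compile(r"^NOE(\s*\[\s*\w*\s*\])?$")

def find_noe_column(header_fields):
    """Return the index of the NOE column in a header row, or None."""
    for i, field in enumerate(header_fields):
        if NOE_COLUMN.match(field.strip()):
            return i
    return None

for header in (["Peak name", "NOE"],          # old DC versions
               ["Peak name", "NOE [none]"],   # old DC versions
               ["Peak name", "NOE [ ]"]):     # newer DC versions
    print(find_noe_column(header))            # 1 in every case
```

A tolerant pattern like this avoids a new breakage each time the Dynamics Center tweaks its unit annotation.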
From: Edward d'A. <ed...@nm...> - 2019-04-26 14:09:05
This is a minor feature and bugfix release. It includes tooltip improvements in the GUI for the user function windows and wizards, the addition of the newly published primary reference for the frame order analysis (https://doi.org/10.1017/S0033583519000015), and improved formatting for the bibliography and index of the relax manual. There have also been improvements for the automated testing of relax by Travis CI (https://travis-ci.com/nmr-relax/relax). This includes the naming of the build jobs, the execution of the software verification tests, the installation of wxPython to enable GUI testing and the running of the whole test suite, the reordering of the system tests back before the unit tests to avoid hiding some nasty relaxation dispersion bugs, a fix for matplotlib on Mac OS X so that the tests will finally run on this OS, a new build job for the API documentation, and a new build job for the Free Software Foundation copyright validation script.

For the official, easy-to-navigate release notes, please see http://wiki.nmr-relax.com/Relax_4.1.2. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).

The full list of changes is:

Features:
* relax GUI: Improved tooltips for the buttons of the user function windows and wizards. This follows from the mailing list discussion at https://sourceforge.net/p/nmr-relax/mailman/message/36611489/.
* User manual: Addition of the newly published frame order reference.
* Formatting improvements for the user manual bibliography and index sections.

Changes:
* Development scripts: Improvements to the Python detection in the Python module seeking script.
* Release checklist document: Updated the text to better match the new release process.
* HTML manual: CSS fix for newer LaTeX2HTML versions. The text width in the HTML appears to now be fixed to a maximum width matching the text dimensions in the PDF. This looks bad together with the wider images and code snippets.
* System tests: Added two tests to catch bug #12 (https://sourceforge.net/p/nmr-relax/tickets/12/). This is the failure to catch the '#' character when setting the molecule name. The tests are Structure.test_bug_12_hash_in_mol_name_via_arg and Structure.test_bug_12_hash_in_mol_name_via_file. These cover the two ways a '#' character can enter a molecule name: via the file name or via the set_mol_name argument. Both the structure.read_pdb and structure.read_xyz user functions are checked.
* Test suite: Two new system tests to catch the failure of reading newer Bruker DC files. The system tests are Bruker.test_bug_13_T1_read_fail and Bruker.test_bug_13_T2_read_fail, and these catch bug #13 (https://sourceforge.net/p/nmr-relax/tickets/13/).
* User function definitions: Clarifications for the bruker.read text.
* User manual: Clean up of the bibliography entry titles. Species names are properly italicised with genus names capitalised, nuclear isotopes are superscripted, R1rho, R2, etc. are properly subscripted, the Perrin articles are translated into English, symbols are now symbols, and unnecessary capitalisation has been removed from the bibtex.
* User manual: Standardisation of the frame order indexing.
* User manual: Standardisation of the relaxation dispersion indexing.
* Travis CI config: Attempt at installing wxPython for Ubuntu and Python 2.7. This would allow the whole test suite to be run on Travis CI on at least one OS. The instructions come from the stackoverflow response by dthor at https://stackoverflow.com/questions/29290011/using-travis-ci-with-wxpython-tests.
* FSF Copyright Validation script: The script now returns an exit status.
* Travis CI config: Avoid updating Conda. This seems to cause a breakage in installing matplotlib.
* Travis CI: matplotlib is now manually installed to allow for older versions on Python 2.7. The current pip default of 3.0.3 is incompatible with Python 2.7. It is not clear how the installation of Conda (for wxPython support) caused the 3.0.3 version to be installed instead of the 2.2.4 version, so now the version is manually set in the Travis CI script.
* Travis CI config: Enable xvfb to allow for wxPython and testing of the GUI.
* Test suite: Restored the original test suite order to reveal relaxation dispersion bugs. The system tests should come first. This allows the maximum amount of code that might accidentally change read-only variables to run prior to the unit tests, where such changes are often subsequently picked up.
* Test suite: The keyboard interrupt terminates the test suite once again.
* FSF Copyright Validation script: The return status now starts at 0 to allow for early returns.
* FSF Copyright Validation script: Support for saving and reading the committer information. This allows the committer information (file name, committer name, and copyright years) from older repositories to be saved and later read into the script. In this case, the old Subversion history has been read and the committer information placed into the fsfcv.svn_committer_info.bz2 file (in the devel_scripts/ directory). This compressed file is now specified in the fsfcv.conf.py configuration file. The result is that the fsfcv script can be run on the relax git repository without requiring a checkout of the old SVN repository.
* Travis CI config: Improvements to the comments and spacing.
* API manual: SCons compilation via epydoc now fails if a warning or error is found. This manually parses the epydoc output to skip the unavoidable wxPython warnings. Any error or warning will now cause an error to be raised.
This results in a non-zero return code from scons to allow the api_manual_html target to be checked in scripts. * Travis CI config: Named all of the jobs. * Travis CI config: General clean up and execution of the software verification tests. * API manual: Scons compilation via epydoc now fails if an import error occurs. * Travis CI config: Alphabetical ordering of environmental variables and required Python packages. * Travis CI config: Creation of an API documentation build job. * Travis CI config: Fix for the Mac OS X build. This job passes, but the test suite fails with the following traceback message when trying to import matplotlib: "ImportError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.". The fix is simply to create $HOME/.matplotlib/matplotlibrc with the contents 'backend: TkAgg'. * API manual: Greater filtering of the file list passed to epydoc. Now only relax modules ending in *.py are processed. That means that all base directory scripts, including 'sconstruct', are excluded from the API documentation. * API manual: More reliable parsing of the epydoc output to detect non-wxPython issues. * FSF Copyright Validation configuration: Improvements to the repository configuration section. The different configurations can now be chosen via a variable, rather than requiring code to be uncommented. * FSF Copyright Validation script: Sorted the years for the committer information output. This makes it easier to read the file and will help with compression. 
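The Mac OS X matplotlib fix mentioned above amounts to writing a one-line user configuration file. A minimal sketch, using matplotlib's standard per-user matplotlibrc location:

```python
import os

# matplotlib cannot use its default macOS backend when Python is not
# installed as a framework, so a user-level matplotlibrc forces the Tk
# backend instead.
rc_dir = os.path.join(os.path.expanduser("~"), ".matplotlib")
if not os.path.isdir(rc_dir):
    os.makedirs(rc_dir)
rc_file = os.path.join(rc_dir, "matplotlibrc")
with open(rc_file, "w") as f:
    f.write("backend: TkAgg\n")
```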
* FSF Copyright Validation script: A new file with all committer information up to 2018. This is to allow for much faster execution of the FSFCV script, by only looking at the git log from the start of 2019. * FSF Copyright Validation script: Support for skipping the first commit. This is for truncated histories, where the committer information up to a given date is read from a file, for example when the git repository start date is set later than the repository migration or the initial SVN commit. * FSF Copyright Validation script: Fix for tracking renames when saved committer information is used. * Travis CI config: Execution of the FSF copyright validation script as part of the testing. This is set to run only on a new Python 3.7 build job, simply to avoid unnecessary repetition. All of the git history needs to be fetched for the script to work, and the script requires the pytz Python module. * FSF Copyright Validation script: Addition of a repository configuration printout. This is to help in debugging, as it is otherwise not clear where the copyright information comes from. * Release checklist document: Rewrote the 'preparation' instructions for Travis CI. All previous manual checking is now performed automatically by Travis CI for each push to the GitHub mirror repository. Bugfixes: * relax GUI: wxPython-Phoenix 4.x fix to allow relax to start again. In the later wxPython versions, relax would not be able to start either the GUI or any of the test suite due to a new error "wx._core.PyNoAppError: The wx.App object must be created first!". This was not present in wxPython-Phoenix 3. The Relax_icons class (a wx.IconBundle derived class) is no longer instantiated on import. * Structure loading: Fix for bug #12, the acceptance of the invalid '#' character in molecule names. See https://sourceforge.net/p/nmr-relax/tickets/12/. 
A simple check has now been added to the load_pdb() and load_xyz() functions of the internal structural object in lib.structure.internal.object. This ensures that the '#' character can never be set as the molecule name, regardless of whether it was taken from a file name or set via the set_mol_name arguments of the structure.read_pdb or structure.read_xyz user functions. * Bruker DC: Complete redesign of the backend to support reading newer (or older) file versions. This fixes bug #13 (https://sourceforge.net/p/nmr-relax/tickets/13/), the failure to read newer Bruker DC files. The backend has been redesigned so that the relax library produces a complex Python object representation of the Bruker DC results file. This object now stores all of the data present within the Bruker DC file. The design is more flexible as precise column ordering no longer matters. * Fix for bug #14, the freezing of user functions in the GUI. See https://sourceforge.net/p/nmr-relax/tickets/14/. The user functions freeze if an error occurs that is not a RelaxError, with the mouse pointer stuck on the busy cursor. These non-RelaxErrors are now caught and handled explicitly by the GUI interpreter. Like all GUI freezing bugs, this was introduced with the huge GUI speed ups in relax 4.1.0. These also only appear to be freezes: the real problem is the failure to update and show the relax controller, combined with not turning off the busy mouse cursor. * GUI bug fix: Avoidance of the numpy deprecation of '== None'. This deprecation causes the GUI to fail with recent numpy versions. * Relaxation dispersion: Protection of all of the MODEL_PARAMS_* variables from modification. These are now only used with copy.deepcopy(). This removes a number of bugs in which the lists, which should be read-only, are permanently modified by the addition of 'r1'. The system tests add 'r1' and then the unit tests subsequently fail. 
This would also be an issue if an experiment without the 'r1' parameter is analysed after one with that parameter, without restarting relax. * Relaxation dispersion bug fix: The 'r1' parameter was missing from the nested parameter algorithm. This is the nesting_param() function of the specific_analyses.relax_disp.model module. The 'r1' parameter must be treated differently from the other model parameters, just as the 'r2*' parameters are. * Dispersion auto-analysis: Bug fix for the plotting of the R1 parameter. The plotting relied on the insertion of the 'r1' parameter into the read only MODEL_PARAMS_* variables of lib.dispersion.variables. Now the Model_class class from specific_analyses.relax_disp.models is being used to dynamically determine the parameters of the model. |
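The copy.deepcopy() protection of the read-only MODEL_PARAMS_* variables described above can be illustrated with a simplified stand-in; the variable and function names below are hypothetical, not relax's actual code:

```python
import copy

# Hypothetical module-level parameter list, standing in for the read-only
# MODEL_PARAMS_* variables of lib.dispersion.variables.
MODEL_PARAMS_DEMO = ['r2', 'dw', 'kex']

def model_params(fit_r1=False):
    # Deep copying means callers can prepend 'r1' without permanently
    # modifying the module-level list.
    params = copy.deepcopy(MODEL_PARAMS_DEMO)
    if fit_r1:
        params.insert(0, 'r1')
    return params

print(model_params(fit_r1=True))   # ['r1', 'r2', 'dw', 'kex']
print(MODEL_PARAMS_DEMO)           # unchanged: ['r2', 'dw', 'kex']
```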
From: Edward d'A. <ed...@nm...> - 2019-03-08 13:18:49
|
This is a major bugfix release. The release fixes multiple issues with the relax GUI and with the relaxation dispersion analyses. Please see the notes below for details. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_4.1.1 . The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * N/A Changes: * Mac OS X distribution file: Fixes for the DMG file generation. The .git directories are no longer bundled (the check in setup.py was for .svn directories), and the sobol_test.py script contained a bug that blocked the image generation. * Release Checklist: Rewrite for the shift to a git repository and to the SourceForge infrastructure. * Test suite: Temporary file fix for the Bmrb system and GUI tests. The temporary files normally used by these tests were accidentally removed in a previous commit. The result was temporary files being placed in the current directory. * log_converter.py development script: Conversion from SVN to git. A number of spacing bugs have also been removed, simplifying the release process. * relax manual: The find_replicate_titles.py script can now handle the presence of latex2html. If latex2html had been set up via the docs/devel/latex2html/setup script, then find_replicate_titles.py would fail due to the presence of *.tex files outside of docs/latex/. * Update from LaTeX2HTML 2008 to 2019. The instructions now point to the latex2html repository fork at SourceForge (https://sourceforge.net/p/nmr-relax/code-latex2html/ci/master/tree/), with the relax manual specific branches. * GUI tests: Addition of the User_functions.test_bug_2_structure_read_pdb_failure test. 
This is to catch bug #2 (https://sourceforge.net/p/nmr-relax/tickets/2/), the failure of the structure.read_pdb user function in the GUI. * GUI tests: Addition of the User_functions.test_bug_3_no_argument_validation test. This is to catch bug #3 (https://sourceforge.net/p/nmr-relax/tickets/3/), the absence of user function argument validation within the GUI. * Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.param_num(). This is to catch bug #6 (https://sourceforge.net/p/nmr-relax/tickets/6/), the failure of the parameter counting for the 3-site relaxation dispersion models when spins are clustered. The two unit tests are Test_parameters.test_param_num_clustered_spins and Test_parameters.test_param_num_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters. * Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.loop_parameters(). The two unit tests are Test_parameters.test_loop_parameters_clustered_spins and Test_parameters.test_loop_parameters_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters. These were added to try to catch the typo at the end of the function, where the 'dwH_AB' parameter appears twice (the second should be 'dwH_AC'). However the typo was not caught by the tests as no currently implemented dispersion model contains the 'dwH_AC' parameter. Hence it is a latent bug. The tests do catch a minor error with the 'R2eff' model in which the 'i0' parameter is always returned. 'i0' should only be returned when exponential curve data is present. This bug has no apparent effect on the current operation of relax, so the parameter is probably handled correctly downstream. * Module specific_analyses.relax_disp.parameters: Fix for loop_parameters() with the 'R2eff' model. This now only returns the 'i0' parameter when exponential curve data is present. 
This fix has no apparent effect on the operation of relax, so the 'i0' parameter is probably correctly handled in code that calls the loop_parameters() function. * Dispersion: Shift of the model parameters from the parameter loop to lib.dispersion.variables. This shifts all references to specific model parameters out of the loop_parameters() function in the specific_analyses.relax_disp.parameters module and into lib.dispersion.variables. This simplifies the loop_parameters() function and should minimise latent bugs. * Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.linear_constraints(). The two unit tests are Test_parameters.test_linear_constraints_clustered_spins and Test_parameters.test_linear_constraints_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters. These show that the linear constraints are correctly assembled for single and clustered spins for all models. * Module specific_analyses.relax_disp.parameters: Docstring, whitespace, and comment fixes. * Unit tests: Addition of tests for lib.dispersion.ns_mmq_3site and lib.dispersion.ns_r1rho_3site. These are to catch bug #9 (https://sourceforge.net/p/nmr-relax/tickets/9/), and specifically test for when pA is 1.0 and the other probabilities are zero. Two new unit tests of the _lib._dispersion.test_ns_mmq_3site module include Test_ns_mmq_3site.test_ns_mmq_3site_mq and Test_ns_mmq_3site.test_ns_mmq_3site_sq_dq_zq, and a single new unit test of the _lib._dispersion.test_ns_r1rho_3site module is Test_ns_r1rho_3site.test_ns_r1rho_3site. * Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.param_conversion(). The two unit tests are Test_parameters.test_param_conversion_clustered_spins and Test_parameters.test_param_conversion_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters. 
These tests uncovered that the pC parameter for the 3-site R1rho dispersion models 'NS R1rho 3-site' and 'NS R1rho 3-site linear' is not being calculated in the param_conversion() function. This is now reported as bug #11 (https://sourceforge.net/p/nmr-relax/tickets/11/). * Unit tests: Creation of the Test_parameters.test_param_conversion_clustered_spins_sim test. This is to check the specific_analyses.relax_disp.parameters.param_conversion() function for a cluster of 2 spins for Monte Carlo simulations. It was a failed attempt to catch bug #10 (https://sourceforge.net/p/nmr-relax/tickets/10/). The problem probably lies in the Monte Carlo simulation setup functions in the specific analysis API rather than in the module specific_analyses.relax_disp.parameters. * Unit tests: Test of the dispersion specific analysis API function sim_init_values(). This is an attempt at catching bug #10 (https://sourceforge.net/p/nmr-relax/tickets/10/), the failure of the 3-site dispersion models when setting the pC parameter for Monte Carlo simulations. The failing test however shows that the sim_init_values() function probably needs a complete overhaul. * Dispersion: Improved handling of deselected spins in the loop_parameters() function. This is from the specific_analyses.relax_disp.parameters module. The function can now handle the first spins in the cluster being deselected. * FSFCV configuration: Skip some false positive copyrights in the docs/CHANGES file. Bugfixes: * Fix for bug #2 (https://sourceforge.net/p/nmr-relax/tickets/2/), the failure of the structure.read_pdb user function in the GUI. The problem was that the file selection argument was being set up incorrectly as two GUI elements - an inactive file selection element and a normal value setting GUI element. Only the second value input GUI element was active (due to the GUI elements being stored in a dictionary, with the first key value being overwritten by the second). 
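The dictionary key collision behind bug #2 can be reproduced in isolation; the key and values below are illustrative stand-ins for the GUI elements:

```python
# Storing two GUI elements under the same dictionary key silently discards
# the first, which is how the file selection element became inactive.
elements = {}
elements["file"] = "file selection element"
elements["file"] = "value input element"   # overwrites the first entry

print(len(elements))      # 1
print(elements["file"])   # value input element
```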
* Fix for bug #3 (https://sourceforge.net/p/nmr-relax/tickets/3/), the absence of user function argument validation within the GUI. The code for the user function argument validation in the prompt/script UIs was simply copied and slightly modified to fit into the GUI user function window execution. All arguments are now passed into the new lib.arg_check.validate_arg() function and are checked based on their user function definitions. * Fix for bug #4 (https://sourceforge.net/p/nmr-relax/tickets/4/), the relax controller in the GUI not displaying text when required. Calls to the captured IO stream flush() methods are now made in a number of places to allow the controller to show the text when required. This includes after printing out the intro text, after any captured and GUI handled errors, after clicking on the "help->licence" menu entry, after thread exceptions, and after a number of GUI message dialogs. The bug is only present in relax 4.1.0. * Typo fix in the description of the 'atomic' argument for the structure.rmsd user function. * Fix for bug #5 (https://sourceforge.net/p/nmr-relax/tickets/5/), the incorrect numpy version check in the relaxation dispersion auto-analysis. The dep_check.version_comparison() function is now used for the version comparisons. * Dispersion: Fix for bug #7 (https://sourceforge.net/p/nmr-relax/tickets/7/), the model list containing 'No Rex' twice. The MODEL_LIST_FULL variable contained the model 'No Rex' twice. The only manifestation of the bug is a RelaxError message showing the full list of models, when a user selects a non-existent dispersion model. * Dispersion: Fix for bug #6 (https://sourceforge.net/p/nmr-relax/tickets/6/), the incorrect parameter counting for 3-site models with spin clustering. The issue was that the list of spin-specific parameters was incomplete. To resolve this, the parameter names have been shifted into the lib.dispersion.variables module lists PARAMS_R1, PARAMS_GLOBAL, and PARAMS_SPIN. 
By removing the parameter names from other parts of relax, the lib.dispersion.variables module will serve as a single point of definition, and hence it will be much easier to maintain the relaxation dispersion code when new models with new parameters are added. * Dispersion: Fix for bug #8 (https://sourceforge.net/p/nmr-relax/tickets/8/), the accidental modification of the hardcoded variables. The MODEL_PARAMS lists in lib.dispersion.variables were accidentally being modified by the Model_class class in the specific_analyses.relax_disp.model module. The list for a given model was being set as the self.params list. This list would then have the 'r1' parameter prepended to it if that parameter is optimised for a model, and hence the lib.dispersion.variables list would be permanently modified. Now copy.deepcopy() is used for all variables to avoid this issue. This bug was uncovered in the unit tests as the _specific_analyses._relax_disp.test_model tests were causing 'r1' to be added, and then the later _specific_analyses._relax_disp.test_parameters tests would fail as 'r1' should not be in those lists. This bug is highly unlikely to be encountered by users of relax. You would need to run two analyses, one after the other without closing relax, and the first analysis would need to have 'r1' optimised. * Dispersion: Fix for bug #9 (https://sourceforge.net/p/nmr-relax/tickets/9/), the failure of the 3-site dispersion models when pB and pC are zero. When both are zero, for example during a comprehensive grid search when model nesting is not utilised, a divide by zero error occurs. This is now caught and large values (1e100) are set for the rates instead. * Dispersion: Fix for bug #11 (https://sourceforge.net/p/nmr-relax/tickets/11/), the missing pC calculation for the 3-site R1rho models. The models 'NS R1rho 3-site' and 'NS R1rho 3-site linear' were simply missing from the list of models for the pC parameter. 
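The bug #9 guard described above boils down to substituting a large finite rate when a population is zero. This is a simplified sketch with an illustrative function name, not the actual 3-site rate matrix code:

```python
# When a minor population (pB or pC) is zero, the exchange rate expression
# would divide by zero, so a very large rate (1e100) is substituted instead.
def safe_rate(k, population):
    if population == 0.0:
        return 1e100
    return k / population

print(safe_rate(100.0, 0.5))   # 200.0
print(safe_rate(100.0, 0.0))   # 1e+100
```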
* Dispersion: Fix for bug #10 (https://sourceforge.net/p/nmr-relax/tickets/10/), the 3-site model failure of setting pC for Monte Carlo simulations. For this, the sim_init_values() function of the relaxation dispersion specific API in specific_analyses.relax_disp.api has been completely rewritten. The specific_analyses.relax_disp.parameters.param_conversion() function is now called at the start to generate initial non-model parameters, and at the end to populate the simulation structures. The rest of the function has been stripped down and significantly simplified. |
From: Edward d'A. <ed...@nm...> - 2019-02-21 16:15:14
|
This is a major feature and bugfix release. This is also the first release after the permanent Gna! shutdown (https://en.wikipedia.org/wiki/Gna!) and the complete migration of relax's free software infrastructure to SourceForge (https://sourceforge.net/p/nmr-relax/mailman/message/36580981/), the first release after the complicated migration from the original Subversion version control repository to git for the relax source code and the relax website, and the first release after three years of development. In the meantime, a new demo repository has been created containing all the data and instructions required to perform and demonstrate different relax analyses. Features of this release include the addition of a bash completion script, large speed improvements in the GUI and in the execution of many relax user functions, improved sample scripts, significant relax manual updates, support for newer NMRPipe SeriesTab files, improved Docker images, automated testing of relax via Travis-CI, the new user functions frame_order.decompose, structure.add_helix, and structure.add_sheet, and significant improvements for user function argument checking and user feedback via RelaxErrors. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_4.1.0 . The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * Greater wxPython-Phoenix support while maintaining compatibility with wxPython-Classic. * Creation of a bash completion script for completing command line arguments with the tab key (docs/bash_completion.sh). * A significantly more responsive relax GUI. 
* Converted the steady-state NOE analysis sample script to use the auto-analysis. * Standardisation of initial and final printouts in the auto-analyses, including the elapsed time. * More of the GUI main menu entries are disabled during execution locking. * Safe execution of all of the auto-analyses. * Huge speed ups for many parts of relax with the addition of fast and temporary hash lookup tables and cross-referencing for the molecule, residue, spin and interatomic data containers. * Many improvements and updates throughout the relax manual. * Support for the new format of the NMRPipe SeriesTab files. * Improvements for the Docker container scripts and documentation in devel_scripts/Docker/. * Automated testing of relax via Travis-CI. * New user function frame_order.decompose for a new representation of the frame order analysis results. * Addition of the new user functions structure.add_helix and structure.add_sheet for manually defining secondary structure. * Speed up of the 'fit to first' algorithm in the structure.superimpose user function. * Significant improvements to the checking of arguments passed into user functions, and the resultant error messages for invalid arguments. * Improvements and fixes for the RelaxError messages to better explain user errors. * A large number of updates for the switch from the Subversion version control repository to git, and the move from the closed Gna! infrastructure to SourceForge. Changes: * Removal of the Mac OS X taskbar icon functionality. This code has been disabled since its deletion back in Jun 2012 (r16772), as it does not work with wxPython 2.8 or 2.9. However with wxPython Phoenix, the disabled code fails as there is no wx.TaskBarIcon. * Keyword to positional argument conversion for the GUI wx.ListCtrl.SetStringItem() function calls. The keyword arguments for this function must exist for backwards compatibility with ancient wxPython versions. 
The current documentation lists them as positional arguments, and keyword arguments are not accepted by wxPython-Phoenix. * Keyword to positional argument conversion for the GUI wx.ScrolledWindow.EnableScrolling() calls. These function calls were using keyword arguments, however the old wxPython and Phoenix documentation say that these are not keyword arguments (this must have been for backwards compatibility with very old wxPython versions). * Keyword to positional argument conversion for a GUI wx.BoxSizer.Clear() call. This is for the spin containers in the spin viewer window. The keyword argument in wxPython classic is deleteWindows however in Phoenix it is delete_windows. * Decreased the precision of a check in the Rx.test_r1_analysis GUI test. This is to allow the test to pass on wxPython-Phoenix and Python 3. * Keyword to positional argument conversion for the GUI wx.Font() calls. A number of these were being called with keyword arguments, however the old wxPython and Phoenix documentation say that these are not keyword arguments (this must have been for backwards compatibility with very old wxPython versions). * Replacement of a wx.ListCtrl.DeleteAllColumns() function call from the spectrum GUI element. This function does not exist in wxPython-Phoenix. Instead the columns are looped over and wx.ListCtrl.DeleteColumn() is called instead. * Creation of an initial bash script for enabling bash completion. * Improvements for the bash completion relax script. Directories and relax scripts are now much better handled. * Fine tuning of the bash completion relax script. The option "-o nospace" for "complete" has been removed as spaces are not added for directories anyway. This means that a space is added after all options and scripts. * More precision decreases in the Rx.test_r1_analysis GUI test. This is to allow the test to pass on wxPython-Phoenix and Python 3. * Updates to the upload section of the release checklist document for sending files to SourceForge. 
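The Classic/Phoenix keyword mismatch described above (for example deleteWindows versus delete_windows in wx.BoxSizer.Clear()) reduces to a minimal example; the function names below are illustrative stand-ins, not wxPython code:

```python
# The same method renames its keyword argument between wxPython-Classic and
# wxPython-Phoenix, so only the positional form is portable across both.
def clear_classic(deleteWindows=False):
    return deleteWindows

def clear_phoenix(delete_windows=False):
    return delete_windows

# clear_phoenix(deleteWindows=True) would raise a TypeError, while the
# positional call works for both implementations:
print(clear_classic(True), clear_phoenix(True))   # True True
```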
* Added release instructions for creating the README.rst files for the download area. This is for using the custom html2rest at https://sourceforge.net/p/nmr-relax/code-python-html2rest to automatically generate the reStructuredText file from the wiki release notes. * Expanded the release checklist instructions for creating the README.rst files. * Updates to many frame order test suite shared data relax scripts. These scripts are used for data generation and display, and are not part of the test suite. The updates are for the frame_order.pdb_model and pymol.frame_order user functions which no longer support the 'dist' keyword argument (this functionality was shifted into the frame_order.simulate user function). * First commit after the svn to git migration: Created a .gitignore file for the new git repository. * Documented the svn to git repository migration. All of the scripts used and detailed instructions have been included. * Standardisation of the section titles in a number of the documentation files. * The files auto-generated during the PDF user manual compilation are now ignored by git. * Git support for the repository version information. This is used in the relax introductory text, the manual compilation, and in the relax save states. The version.repo_revision variable has been renamed to version.repo_head to be repository type independent. For the repository URL, all of the git remotes are included. * C module blacklisting of the Relax_disp.test_bug_24601_r2eff_missing_data system test. The test is skipped if the C modules are not compiled. * Added .pyc and .so files to be ignored. * Fix for depcheck when a package has an appended release candidate number, for example numpy 1.8.0rc1. * Added a script to check for copyright notice compliance to the FSF standard. This standard is from https://www.gnu.org/prep/maintain/html_node/Copyright-Notices.html. * Support for multiple git and svn repositories in the FSF copyright notice compliance checking script. 
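The depcheck fix for release candidate suffixes such as "1.8.0rc1" can be sketched as follows; the helper name is hypothetical and not relax's actual dep_check code:

```python
import re

def version_tuple(version):
    # Strip any trailing suffix such as 'rc1' before the numeric comparison.
    numeric = re.match(r"[0-9.]+", version).group().rstrip(".")
    return tuple(int(part) for part in numeric.split("."))

print(version_tuple("1.8.0rc1"))            # (1, 8, 0)
print(version_tuple("1.8.0rc1") > (1, 6))   # True
```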
* Collection of all commits to attribute to other authors. This is for the FSF copyright notice compliance checking script. * Collection of all commits to exclude by the FSF copyright notice compliance checking script. * FSF compliant copyright notices for all files in the documentation directory docs/devel/. This includes two README files with the copyright notices for all of the patches. * FSF compliant copyright notices for all files in the documentation directory docs/latex/. This includes a README file with the copyright notices for the binary graphics. * FSF compliant copyright notices for all files in the documentation directory docs/html/. This includes a README file with the copyright notices for latex2html-2008 icons. The copyright notice script has been updated to handle false negatives (significant git commits without copyright ownership), and additional copyrights not present in the git log. * FSF compliant copyright notices for all remaining files in the documentation directory. * Added the original oxygen icon AUTHORS and COPYING files and standardised the README file titles. The AUTHORS and COPYING files from the original svn repository svn://anonsvn.kde.org/home/kde/trunk/kdesupport/oxygen-icon have been added to the repository for better documentation of the copyright. The README file had also been updated with the origin information. * FSF compliant copyright notices for the entirety of the graphics/ directory. * FSF compliant copyright notices for the extern/ directory. The packages within this directory are skipped in the devel_scripts/copyright_notices.py copyright compliance checking script. * Update to FSF compliant copyright notices for all modules in the auto_analyses package. * Update to FSF compliant copyright notices for all modules in the data_store package. * FSF compliant copyright notices for the entirety of the devel_scripts/ directory. * Update to FSF compliant copyright notices for all modules in the gui package. 
* Update to FSF compliant copyright notices for all modules in the lib package.
* Update to FSF compliant copyright notices for all modules in the multi package.
* Update to FSF compliant copyright notices for all modules in the pipe_control package.
* Update to FSF compliant copyright notices for all modules in the prompt package.
* Update to FSF compliant copyright notices for all scripts in the sample_scripts/ directory.
* Update to FSF compliant copyright notices for all modules in the scons package.
* Update to FSF compliant copyright notices for all modules in the specific_analyses package.
* Update to FSF compliant copyright notices for all modules in the target_functions package.
* Update to FSF compliant copyright notices for all modules in the user_functions package.
* Update to FSF compliant copyright notices for all modules and files in the base relax directory.
* Update to FSF compliant copyright notices for all unit test modules.
* Module docstring standardisation for the system test scripts.
* Update to FSF compliant copyright notices for all system test modules and scripts.
* Update to FSF compliant copyright notices for all verification test modules.
* Update to FSF compliant copyright notices for all GUI test modules.
* Update to FSF compliant copyright notices for the base test suite modules.
* Support for automated copyright notice placement in README files. This is directly within the FSF copyright notice compliance checking script.
* Update to FSF compliant copyright notices for all scripts in the test_suite/shared_data/ directory.
* Self exclusion of the FSF compliant copyright notice commits.
* Cosmetic change for the test___all__() unit test base class method. The files are now sorted.
* Blacklisted missing files are now skipped in the test___all__() unit test base class method.
  This allows for the test_suite.unit_tests._target_functions.test___init__.Test___init__() unit test to pass when the relaxation curve-fitting C modules are not compiled.
* Changed the relax state file name for the state.save user function calls in the sample scripts. This is to make it clearer what the files are. The old '*save.bz2' notation has been removed and the files are now generally called 'state.bz2'.
* Update to FSF compliant copyright notices for the external Sobol package. An explicit README file has been added to clarify the copyright status of all files.
* Added a trivial relax script to help regenerate the pec_diag.eps diagram.
* Added the base Xmgrace data file for the generation of the NOE data plot. This is for regenerating graphics/screenshots/noe_analysis/grace.svg. The copyright notice checking script has been updated for this old 2004 file.
* Changed a number of references to "Linux" to "GNU/Linux".
* Replaced all references to "open source" in the manual with "free software".
* Removed the ancient CIA.vc references in the development chapter of the manual.
* Added a README file for the extern/numdifftools package. This is taken from the VC log and explains the origin, version, and licensing of the package.
* Added the base Xmgrace data file for the generation of the R2 peak intensity data plot. This is for regenerating graphics/screenshots/xmgrace_peak_intensities.svg. The copyright notice checking script has been updated for this old 2004 file.
* Copyright notice updates for the graphics/misc/relaxGUI_splash* files.
* Fixes for the FSF copyright notice compliance checking script.
* Update to FSF compliant copyright notices for the external numdifftools package. Explicit README files have been added to clarify the copyright status of all files.
* Removal of the numdifftools extern package, as this can be easily installed in Python using pip.
* Support for Grace-formatted units in the specific analysis parameter object.
  This is currently used by the relaxation curve-fitting analysis for the Rx parameters.
* Created the Relax_fit.test_auto_analysis_pipe_name system test to catch a missing RelaxNoPipeError. This is to catch the error "NameError: global name 'RelaxNoPipeError' is not defined".
* Conversion of the relaxation curve-fitting sample script to use the auto-analysis.
* Improved documentation for the DIFF_MODEL variable in the dauvergne_protocol.py sample script. The fact that it can be supplied as a list is now mentioned in the script docstring, and the default value is now a list with all of the global models.
* Support for NMR proton pseudo-atom identification from PDB files in the internal structural object. The standard pseudo-atoms are now identified as being protons.
* Removed a duplicated proton frequency check in the relax_data.read user function. This resulted in duplicated RelaxWarnings being printed out.
* Huge improvement for the responsiveness of the relax GUI. The relax controller window log panel was being updated with a wx.CallAfter() call after every write to the IO streams. If a relax analysis was proceeding very quickly, which is the case in most analyses, this created a huge backlog of GUI updates. The result was that the GUI would freeze, running at 100% CPU usage in its own thread, with the analysis running at 100% on another thread. The fix was to shift the log panel write() call to be triggered by the Timer already being used by the gauges, rather than by the IO stream write() methods. The text was already placed on a Queue object, so this change is very simple. Another small change was made to the log panel write() method to avoid a number of unneeded wx calls. This should also have a significant impact on the GUI updating.
* Saved state file name change for the steady-state NOE and relaxation curve-fitting auto-analyses. The names are now simply 'state.bz2'.
  This is so the file is easier to identify as being a relax state file that can be loaded with the state.load user function.
* The relaxation curve-fitting sample script now timestamps the data pipe bundle name.
* Redesign of Troels' grace2images.py script. The executable script creation has been shifted from the relaxation curve-fitting auto-analysis (auto_analyses.relax_fit) into the new function lib.plotting.grace.create_grace2images(). This is now also used by the steady-state NOE auto-analysis. The content of the script has also been shifted into the lib.plotting.grace.GRACE2IMAGES variable to allow for easier code editing. The grace2images.py script itself has been heavily modified: The script now uses Python 3 by default; The deprecated optparse module has been changed to argparse; A copyright notice has been placed at the top of the script; The top comment has been converted into a docstring; The default format is now EPS rather than PNG, as PNG is often not supported as an output device; Bug fix in that all formats can now be created (supplying "JPG" previously did nothing); General code and comment cleanups.
* The FSF copyright notice compliance checking script is no longer dependent on relax. The relevant lib.io relax module functions have been copied into the script, and modified with the assumptions of Python 3 only compatibility and less flexible input.
* The relax status singleton now stores the time it was created as the program starting time. This is to allow for elapsed time calculations, which will be used in the auto-analyses for more detailed printouts.
* Creation of the lib.timing.print_elapsed_time() function. This prints out an elapsed time value in day, hour, minute, and second format. A number of unit tests have been added to check the handling of different time values, including plurals.
* Standardisation of initial and final printouts in the auto-analyses, including the elapsed time.
  The main auto-analyses now use lib.sectioning.title() for marking the start and the end of the analysis. After the final title() printout, the lib.timing.print_elapsed_time() function is called to provide user feedback on how long relax had been running for.
* Creation of the Relax_disp.test_bug_missing_replicates GUI test. This is to catch an AttributeError when the replicated spectra are specified via the spectrum list GUI element rather than the peak intensity loading wizard.
* More of the GUI main menu entries are disabled during execution locking. This includes all of the 'Tools' menu entries to block the free file format from changing mid-execution, the system information user function from being called, and the test suite from being run. The BMRB export menu entry is also disabled.
* Safe execution of all of the auto-analyses (those that acquire the execution lock). The whole of the __init__() code of the auto-analyses is now wrapped within a try-finally set of statements. This is to be absolutely sure that the execution lock is released. This was not always the case, for example the Relax_fit.test_auto_analysis_pipe_name system test was not releasing the lock due to a RelaxError, and this was causing the later GUI tests to fail.
* Updated the Rx.test_r1_analysis GUI test for the changed state file name in the auto-analysis.
* Fix for the FSF copyright notice compliance checking script for lib/plotting/grace.py. The copyright notices within the grace2images.py script in the module variable are now ignored. This additionally required removing duplicate copyright notices, as both the module and embedded script have "Copyright (C) 2013 Troels E. Linnet".
* Unique and temporary hash support in the spin containers. These private data structures will allow for fast SpinContainer to InteratomContainer and reverse lookups. The hash is temporary and only created when a SpinContainer is created. It is not stored, so it is regenerated between relax sessions.
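The GUI responsiveness fix described above can be sketched in plain Python, with a simplified LogPanel class standing in for the real wxPython relax controller code (the class and method names here are illustrative, not the actual relax GUI API): stream writes only append to a queue, and a periodic timer drains the queue in one batch instead of one GUI update per write.

```python
# Sketch of the queue-plus-timer pattern described in the changelog entry
# above.  Plain Python stands in for wxPython: write() would be called from
# the analysis thread, and on_timer() from a wx.Timer in the GUI thread.
import queue

class LogPanel:
    def __init__(self):
        self.buffer = queue.Queue()
        self.displayed = []

    def write(self, text):
        # Called on every stream write: cheap, no GUI interaction at all.
        self.buffer.put(text)

    def on_timer(self):
        # Called periodically: drain everything queued since the last tick
        # and perform a single GUI update for the whole batch.
        batch = []
        while not self.buffer.empty():
            batch.append(self.buffer.get())
        if batch:
            self.displayed.append("".join(batch))

panel = LogPanel()
for i in range(1000):
    panel.write("line %i\n" % i)
panel.on_timer()
print(len(panel.displayed))   # 1 - a single GUI update for 1000 writes
```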
* Unique and temporary hash support in the interatomic data containers. The interatomic data containers now have a unique and temporary private hash assigned to them, just as with the spin containers. They also now have the ability to store the unique spin container hashes. This is currently unused but will allow for fast SpinContainer to InteratomContainer and reverse lookups.
* The interatomic data containers now store the SpinContainer hashes.
* The InteratomContainer._hash value is now stored in the spin containers it refers to.
* Bmrb system test fixes for the new SpinContainer private hash data structures. These structures are now blacklisted in the data pipe comparisons.
* Speed up for the pipe_control.interatomic.define() function. The create_interatom() function will now accept the two spin containers as arguments. As the define() function already has these, they are now passed in to avoid two calls to the pipe_control.mol_res_spin.return_spin() function.
* Creation of the pipe_control.interatomic.hash_update() function. This is used when copying interatomic data containers (the pipe_control.interatomic.copy() function) to make sure that the spin hashes in the receiving data pipe are stored in the new interatomic data container.
* Converted all pipe_control.mol_res_spin.return_spin() function calls to use keyword arguments. This is in preparation for adding support for the temporary spin hashes. The pipe_control.mol_res_spin module return_spin_from_selection() and return_spin_from_index() function calls have also been updated, just in case.
* Support for a spin hash fast lookup table for the molecule, residue and spin data structures. The fast lookup table is stored as dp.mol._spin_hash_lookup. This matches the dp.mol._spin_id_lookup fast lookup table, but is a simpler table to maintain, as there is only ever one hash per spin and that hash is unique.  The table is maintained by the pipe_control.mol_res_spin module.
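The hash-based fast lookup described in the entries above can be illustrated with a minimal sketch. The class and variable names are simplified stand-ins for relax's SpinContainer and dp.mol._spin_hash_lookup structures, not the real implementation: each container is given a unique, session-only hash at creation time, and a flat dictionary maps hashes straight to containers, replacing repeated spin ID string matching.

```python
# Illustrative sketch of a per-container unique hash with a flat lookup
# table.  The hash is generated at creation and never saved, mirroring the
# "temporary, regenerated between sessions" behaviour described above.
import uuid

class SpinContainer:
    def __init__(self, name):
        self.name = name
        # Temporary unique hash: regenerated each session, never stored.
        self._hash = uuid.uuid4().hex

# Build the fast lookup table once...
spins = [SpinContainer("N"), SpinContainer("H"), SpinContainer("CA")]
spin_hash_lookup = {spin._hash: spin for spin in spins}

# ...then any stored hash resolves to its container in O(1), with no
# expensive spin ID string matching.
target = spins[1]._hash
print(spin_hash_lookup[target].name)   # H
```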
* Conversion of all return_spin() calls with interatom spin IDs to use the spin_hash argument instead. This should slightly speed up the spin lookups.
* Improved the formatting of the interatomic data container list to help with debugging. The data is now presented with the format_table() function of the lib.text.table module.
* Data container hash cross-reference recreation. This is used by the model_selection, pipe.copy, result.read and state.read user functions. The cross referencing recreation is for both spin containers and interatomic data containers. The old pipe_control.mol_res_spin.metadata_update() and new pipe_control.interatomic.metadata_update() functions are called after loading a results or state file, or after a data pipe copy, so that the data structures properly cross-reference each other's hashes.
* Huge speed up of the interatomic data container handling. The pipe_control.interatomic.create_interatom(), return_interatom(), and return_interatom_list() functions now operate with the unique spin hashes rather than spin IDs. This avoids the expensive calls to the now deleted pipe_control.interatomic.id_match() function.
* Fixes for the copying of spin or interatomic data containers. The data_store.prototype methods Prototype.__clone__() and Prototype.__deepcopy__() will now regenerate the unique hash if a _generate_hash() function is present. This function has been added to SpinContainer and InteratomContainer.
* Changed the spin ID printout for the rdc.read user function to be the unique ID rather than the file ID. This is to help with debugging.
* Bug fix for the N_state_model.test_CaM_IQ_tensor_fit system test. Some of the RDC data contained RDCs between two @N spins rather than an @N and @H spin. This bug was only uncovered by the switch to the spin and interatomic data container hashes for fast lookups.
* Fix for the data store _back_compat_hook() method when creating interatomic data containers.
  The pipe_control.interatomic module define() function has been renamed to define_dipole_pair() for clarity, and it now accepts two spin containers as arguments, overriding the spin ID arguments. This fixes the State.test_old_state_loading GUI test that was failing after the conversion to spin and interatomic data container hashes for fast lookups.
* Printout fix for the check_read_results_1_3() method of the Mf system tests.
* The interatomic_loop() function now uses the spin hash fast lookup table rather than spin IDs.
* Redesign of the create_spin() function of the pipe_control.mol_res_spin module. This function is the backend of the spin.create user function and is also used throughout relax. If only a name and not a number is supplied, multiple spins are now created instead of a single residue or spin. If the residue name is supplied but not the residue number, all residues matching the given name will now have new spins created. For example, when creating a spin named 'NE1' and only specifying the residue name 'TRP', all tryptophans in all molecules will have NE1 indole side-chain spins created. This makes the operation of the spin.create user function more logical for the user.
* Support for catching segfaults and other errors from Modelfree4. This allows for non-silent exiting from the Popen() class. All signals are now reported via RelaxErrors.
* Added the text of the LGPLv3 licence to the extern.sobol package.
* Added FSF recommended LGPLv3 licence notices to the top of all of the extern.sobol files. Excluded is the auto-generated test output file.
* Renamed the LGPLv3 file in the extern.sobol package to COPYING.LESSER.
* Updated all of the minfx project links from Gna! to the SourceForge site.
* Updated all of the relax deployment scripts for the Gna! shutdown. These now use the SourceForge sites for relax, minfx, and bmrblib instead.
  The svn to git conversion is also taken into account, and git is used to pull in the latest relax code from the SourceForge mirror.
* Converted a large number of Gna! links to point to the equivalent Web Archive URL. Most of these links should have had a snapshot made in the Internet Archive Wayback Machine.
* Added some hyperlinks to the external programs listed in the intro chapter of the user manual.
* Added the relaxation dispersion software support to the intro interfacing section.
* The prompt UI is no longer referenced as the 'primary' interface in the intro chapter of the manual.
* Added relaxation dispersion to the GUI features in the intro chapter.
* Added relaxation dispersion to the list of all data pipe types in the intro chapter.
* Improvements to the script UI text in the intro chapter.
* Linked to the internal Gna! mailing list archives for the multi-processor announcement. This is for http://www.nmr-relax.com/mail.gna.org/public/relax-devel/2007-05/msg00000.html.
* Added new sections to the infrastructure and development chapters about the Gna! shutdown. This is to warn that the information in these chapters of the manual is out of date.
* Updated the NESSY link to point to the new SourceForge location for the project.
* Changed the relax PDF manual link from Gna! to SourceForge for the HTML manual footers. This is in the latex2html configuration file, so that the automatically created HTML manual pages point to a valid location.
* Changed the relax PDF manual link from Gna! to SourceForge for the HTML manual headers. This is in the LaTeX header, so that the automatically created HTML manual pages point to a valid location.
* Converted Gna! mail archive links in the manual to point to the copies at http://www.nmr-relax.com.
* Rewrote the 'Core design of relax' section of the development chapter of the relax manual. The code design figure has also been updated. All of the content was still written for the relax 1.3 releases.
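The Modelfree4 signal catching mentioned a few entries above can be sketched in plain Python. RelaxError is relax's own exception class; a RuntimeError stands in for it here, and run_checked() is an illustrative name rather than the real relax code. The sketch relies on the POSIX subprocess convention that a negative return code means the child was killed by a signal.

```python
# Hedged sketch of detecting segfaults from a child process: on POSIX
# systems, subprocess sets returncode to the negated signal number when
# the child is killed by a signal (e.g. -11 for SIGSEGV), so checking the
# sign distinguishes a crash from a normal non-zero exit.
import signal
import subprocess
import sys

def run_checked(cmd):
    """Run a command, raising an error if it dies from a signal."""
    proc = subprocess.Popen(cmd)
    proc.wait()
    if proc.returncode < 0:
        # Negative return code: the process died from signal -returncode.
        name = signal.Signals(-proc.returncode).name
        raise RuntimeError("Child process killed by signal %s." % name)
    return proc.returncode

# A child that kills itself with SIGSEGV, to exercise the check.
try:
    run_checked([sys.executable, "-c",
                 "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"])
except RuntimeError as err:
    print(err)   # Child process killed by signal SIGSEGV.
```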
* Removed the dead Freshmeat/Freecode and Gmane text from the development chapter of the manual.
* Copyright notice and FSF compliant copyright notice script updates.
* Renamed the FSF Copyright Validator script to the acronym fsfcv.
* Split the FSF Copyright Validation script into a configuration file and an executable script. The configuration part of the script has been retained, but with all data stripped, to provide a blank template for a new configuration file. The new mimetypes section has been converted into a variable, rather than manipulating the mimetypes Python module, so that the configuration script requires no Python imports.
* Converted the whole FSF copyright notice validation script code into a class. This is in preparation for a number of major changes to the script.
* The FSF copyright notice validation script now uses the argparse Python module. This is for more powerful command line argument processing. The new --blank-config option will now print out the blank configuration file, and the DEBUG variable has been replaced with the -d or --debug command line option.
* Improved the documentation of the fsfcv configuration file.
* Implementation of the configuration file parsing. This uses modern Python import mechanisms to load the blank config first for default values, followed by the user supplied configuration file.
* Implemented the verbosity argument so per-file messages are only printed when activated.
* The FSF Copyright Validation script will now add the current directory repository if not supplied. This allows the script to be executed without a configuration file.
* New command line option for the FSF copyright validation script to only check for missing notices. This will only print out files with missing copyright notices. Files marked as valid may nevertheless have incorrect notices.
* The capitalisation of "Copyright (C)" no longer matters for the FSF Copyright Validation script.
  This is for the copyright notices within the file. The configuration file has been updated for the lower case copyright notices (false positives).
* Reactivated the user supplied binary mimetypes for the FSF Copyright Validation script.
* More robust reading of copyright notices from binary files in the FSF Copyright Validation script. The reading of the text file will now return an empty list if a UnicodeDecodeError occurs.
* Updated the fsfcv configuration file for the fsfcv script and configuration file itself.
* Fixes for the extern/numpy_future.py copyright notices.
* Support for multiple additional years in the FSF Copyright Validation script.
* Added a progress meter, a simple spinner, to the FSF Copyright Validation script. This is taken directly from lib.text.progress, and the output is sent to STDERR. All other script output is now sent to STDOUT. It is only active if the verbose flag is off.
* Separated the missing copyright notices from the non-valid copyright notices in the fsfcv script. These are now counted separately and a different message printed out for the missing notice case.
* Support added to the fsfcv script for handling content not within a version control repository. The untracked and non-valid copyright counting is turned off in this case.
* Improved the feedback from the progress meter in the fsfcv script. This now says what the numbers are, using text such as "X files checked.".
* Activated the 'link' option for the epydoc API documentation. This allows the navigation link to point to "/" rather than "http://www.nmr-relax.com". This is for SSL and https:// preparations, so that the http://www.nmr-relax.com part of the URL is not present in the local links.
* Shifted the epydoc API documentation copyright notice insertion into the scons script. This notice was previously hardcoded into the devel_scripts/google_analytics.js script - as that is the GPLv3+ copyright notice of that script with the date of 2012.
  Instead, the copyright notice in the Google analytics script is now skipped and the correct FDLv1.3 copyright notice, with the current year, is programmatically inserted via the scons/manuals.py script.
* Added a new format of NMRPipe SeriesTab file which gives errors.
* Added the system test 'relax -s Relax_disp.test_bug_seriestab_format -d' to check for the new format of NMRPipe SeriesTab.
* Changes to lib/spectrum/nmrpipe.py to handle NMRPipe SeriesTab when assignment has not been performed. The multiplier column is now auto-detected.
* Fix to allow renaming of SeriesTab spectrum IDs.
* Fix for the help section in the ./grace2images.py file. It was unclear how to get different types of images.
* Extended the system test Relax_disp.test_bug_seriestab_format to include reading of several SeriesTab files, and selecting the intensity column.
* Modified lib/spectrum/nmrpipe.py in read_seriestab() to allow for selecting the intensity column.
* Allowed int_col to be a list, to make a proper warning.
* Initial try at running a Docker image with gedit. This is an attempt at running OpenDX later.
* Simplification of the Dockerfile.
* Removed the Dockerfile for gedit.
* Added a Dockerfile which makes it easy to build an Ubuntu image and launch OpenDX. This is very useful on a Mac.
* If the current directory is mounted to home, then the dx.map files work.
* Improved the help for the XQuartz settings when running Docker on a Mac and accessing the OpenDX GUI.
* Renamed the extern.sobol.sobol_lib-not_tested module to sobol_lib_untested. This is in preparation for updating to the newest upstream code.
* Updated the extern.sobol package to the latest upstream code. This is the new MIT licensed code (which was previously LGPL licensed) from http://people.sc.fsu.edu/~jburkardt/py_src/sobol/sobol.html. The licence text has been modified to suit the licence change, and the LGPL copyright notices dropped from all files.
  The Python 3 updates to the relax version of the package have been transferred into the new code.
* Added the MIT licence with copyright notices to the top of all files. The origin of all code was traced back through the MATLAB sources, FORTRAN90 sources, and FORTRAN77 sources. The original f77 code did not contain any shared lines of code with the f90 code, so no copyright statements for Bennett Fox were added. Comments were added to each function to document the history of all of the code.
* Easier reading of the Dockerfile.
* Extended the help section on running a Docker container, so that it is now also possible to run a bash session in the container.
* Fix for the relax deployment script for Ubuntu. The version variables were wrongly set.
* When running Docker with OpenDX, the current working directory is now mounted on $HOME/work instead of $HOME.
* Made the ultimate Docker file, which packages relax and OpenDX together in one Dockerfile. Everything can now be packed together, making it easy to ship the relax Docker image and run it 'everywhere'.
* Letting the default intensity column of SeriesTab be 'VOL'. This is the column SeriesTab uses. The 'HEIGHT' column is copied in from the nmrDraw test.tab file, and does not represent the measurement.
* Fixes to sconstruct when building with Python 3 and SCons. The current sconstruct caused a 'SyntaxError: invalid syntax' when using '`' in the file.
* Fixes to sconstruct when cleaning with Python 3 and SCons. This fix is to print the representation of the list.
* Removed the Oxygen Icon directory from the skipped directory list of the fsfcv script.
* Added copyright notices for every Oxygen Icon.
* Small fix for the FSF Copyright Validation script (fsfcv).
* Capitalised the copyright symbol in the Sobol' external library copyright notices. This is for easier handling by the FSF Copyright Validation script.
* Fixes for the fsfcv script configuration for the Sobol' external package.
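The binary-file handling described earlier for the FSF Copyright Validation script - returning an empty list on a UnicodeDecodeError - can be sketched as follows. The read_lines() function name is illustrative, not the real fsfcv code.

```python
# Sketch of UnicodeDecodeError-safe reading: a binary file simply yields
# no copyright lines instead of crashing the script.
import os
import tempfile

def read_lines(path):
    try:
        with open(path, encoding="utf-8") as file:
            return file.readlines()
    except UnicodeDecodeError:
        # Binary file: nothing to scan for copyright notices.
        return []

# A text file and a file of raw bytes that is not valid UTF-8.
with tempfile.TemporaryDirectory() as tmp:
    text_path = os.path.join(tmp, "notes.txt")
    with open(text_path, "w", encoding="utf-8") as file:
        file.write("Copyright (C) 2017 Edward d'Auvergne\n")
    bin_path = os.path.join(tmp, "icon.png")
    with open(bin_path, "wb") as file:
        file.write(b"\x89PNG\xff\xfe\xfd")
    print(len(read_lines(text_path)))   # 1
    print(len(read_lines(bin_path)))    # 0
```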
* The alternative committer names are now better handled in the fsfcv script. The committers' names in the VC logs are now also translated from the alternative to the standard name.
* Correct spelling of Troels Schwarz-Linnet in the copyright notices.
* Troels' name is now handled differently in the fsfcv script configuration file. The text "Troels E. Linnet" is now the alternative name, and "Troels Schwarz-Linnet" the standard name.
* MS Windows support for the FSF Copyright Validation script.
* Cut and paste error fix for the Oxygen Icon licensing text in the README files. As stated in the COPYING file, the licence is LGPLv3+, not GPLv3+.
* Updated the general relax copyright notice for 2018. The last copyright year is now stored as info.copyright_final_year.
* Clarified the GPLv3+ licensing in the relax introduction string.
* Manual: Addition of a GPLv3+ copyright notice to a second title page.
* Another Oxygen Icon licensing text fix in the README files.
* Improved the LGPLv3+ licensing text for the base directory of the Oxygen Icons.
* Manual: Added the LGPLv3+ copyright notice for the Oxygen Icons to the second title page.
* Documentation for the copyright and public domain notices for 3D structures. This is to explain why the strict format text files are not modified to include notices, hence they are placed in the README file, and to detail the public domain nature of the Protein Data Bank repository.
* Updated the script for Docker images.
* Added a Dockerfile for Ubuntu 18.04 LTS and development on Windows.
* Fix for the comparison of arrays to None. The use of 'x == None' should be 'x is None'.
* Initial commit of Travis CI support.
* Setting sys.exit(1) in dep_check, to make Travis CI fail the build on error.
* Travis CI: Adding minfx to the pip requirements file.
* Travis CI: Fixing the path to minfx for pip to install.
* Travis CI: Adding PYTHON_INCLUDE_DIR.
* Travis CI: Fix for getting Python.h.
* Travis CI: Again trying to fix the exported variable to find Python.h.
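The 'x == None' fix mentioned above deserves a short illustration. With numpy arrays, '==' broadcasts elementwise, so 'x == None' yields an array of booleans rather than the single boolean the code expected, while 'x is None' is always a plain identity check. A minimal stand-in class mimics numpy's elementwise behaviour here so the sketch needs no third-party dependencies.

```python
# Sketch of why 'x == None' is wrong for array-like objects.  The class
# below is a stand-in for a numpy array: '==' compares element by element.
class ElementwiseArray:
    """Stand-in for a numpy array: '==' compares element by element."""
    def __init__(self, values):
        self.values = list(values)

    def __eq__(self, other):
        return [value == other for value in self.values]

x = ElementwiseArray([1.0, 2.0, 3.0])
print(x == None)   # [False, False, False] - not the single False expected!
print(x is None)   # False - the correct identity test
```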
* Travis CI: Adding a debug echo of the path to Python.h.
* Travis CI: Moving the export to .travis.yml.
* Travis CI: Adding the unit tests to Travis.
* Travis CI: Fix for executing relax from the current folder.
* Travis CI: Removing scons, since it should already be part of the Compilers & Build toolchain in the Trusty images.
* Travis CI: Adding a printout of the relax information.
* Travis CI: Adding more packages to the pip requirements.
* Travis CI: Better reading of the tests performed.
* Travis CI config: Adding additional Python versions to Travis and cleaning up.
* Travis CI config: Adding Python 2.6 and 3.5 to the test matrix.
* Travis CI config: Specific testing for Python 2.6.
* Travis CI config: Trying to get the pip conf file.
* Travis CI config: Trying to add svwh.dl.sourceforge.net to the pip trusted hosts.
* Travis CI config: Adding importlib for Python 2.6.
* Travis CI config: Trying to add subprocess for Python 2.6.
* Travis CI: Removed matplotlib from Python 2.6.
* Travis CI: Removed the test of Python 2.6.
* Renamed the README file to Markdown.
* Added the Travis build shield to the README.
* Adding the system tests to be executed with Travis.
* Creation of a large set of system tests for implementing the frame_order.decompose user function. The tests have been copied from Frame_order.test_distribute_* and include: Frame_order.test_decompose_free_rotor_z_axis, Frame_order.test_decompose_iso_cone_z_axis, Frame_order.test_decompose_iso_cone_xz_plane_tilt, Frame_order.test_decompose_iso_cone_free_rotor_z_axis, Frame_order.test_decompose_iso_cone_torsionless_z_axis, Frame_order.test_decompose_pseudo_ellipse_xz_plane_tilt, Frame_order.test_decompose_pseudo_ellipse_z_axis, Frame_order.test_decompose_pseudo_ellipse_free_rotor_z_axis, Frame_order.test_decompose_pseudo_ellipse_torsionless_z_axis, Frame_order.test_decompose_rotor_z_axis.
* Creation of the frame_order.decompose user function front end.
* Implementation of the frame_order.decompose user function backend.
* Scons: Fixes for the manual compilation.
  The relax manual cannot be compiled if one of the sys.path values contains a 'docs/' directory. Instead of appending the relax docs/ path to sys.path, it is now prepended. The documentation Python module __all__ lists have also been filled out.
* Renamed the relax default repository version from "repository checkout" to "repository commit". This general text is more appropriate for a git repository.
* Manual: Removed a Gna! reference in the intro chapter.
* Manual: Alias creation for the relax mailing lists. This is to allow for a centralised place for changing the mailing list name, if any changes occur to the mailing list in the future.
* Manual, Ch. Infrastructure: Converted the Gna! shutdown note into a new 'History' section. A lot of the relax free software/open source infrastructure history is now documented.
* Manual, Ch. Infrastructure: Removed the Gna! information from the relax website section.
* Manual, Ch. Infrastructure: Updated the relax mailing list information from Gna! to SourceForge. This is now all through LaTeX aliases, so infrastructure changes should be easier to deal with in the future.
* Manual, Ch. Infrastructure: Abstracted the bug reporting section using aliasing. This removes all Gna! specific links from the chapter, shifting them to SourceForge links in the main relax.tex file.
* Manual, Ch. Infrastructure: Abstracted the relax repository section and switched from svn to git. This removes all Gna! specific links from the chapter, shifting them to SourceForge links in the main relax.tex file.
* Manual, Ch. Infrastructure: Removal of the news section, as this is not supported on SourceForge.
* Manual, Ch. Infrastructure: Abstracted the distribution archive section and switched from svn to git. This removes all Gna! specific links from the chapter, shifting them to SourceForge links in the main relax.tex file.
* Manual, Ch. Installation: Abstraction of the bug tracker links. This replaces the dead Gna! links with the current SourceForge links.
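The sys.path fix described above for the manual compilation can be shown in two lines. The directory name is illustrative: appending lets any earlier 'docs/' directory on sys.path shadow the intended modules, while prepending guarantees the relax copy is found first.

```python
# Sketch of the append-versus-prepend sys.path fix.  The path below is an
# illustrative placeholder, not the real relax docs/ location.
import sys

docs_path = "/path/to/relax/docs"

# Old behaviour: appended, so an earlier 'docs/' entry could shadow it.
sys.path.append(docs_path)
print(sys.path[-1])   # /path/to/relax/docs

# Fixed behaviour: prepended, so it takes precedence in module lookup.
sys.path.remove(docs_path)
sys.path.insert(0, docs_path)
print(sys.path[0])    # /path/to/relax/docs
```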
* Manual, Ch. N-state model: Abstraction of the relax-users mailing list.
* Manual, Ch. Dispersion: Dead link and mailing list fixes. The mailing lists are now abstracted using aliases, some old dead links have been removed, and some Gna! support request links have been converted to Internet Archive links.
* Manual, Ch. Development: Removal of the note about the Gna! shutdown. The chapter is about to be updated for the switch to SourceForge, so this note is no longer needed.
* Manual, Ch. Development: Aliases for the mailing lists and addition of a cross reference.
* Manual, Ch. Development: Converted the version control section from SVN to git.
* Manual, Ch. Development: Minor edits to the coding conventions section.
* Adding exit codes for the unit and system tests. This is for Travis to fail if these fail. On Windows these can be seen with: echo Exit Code is %errorlevel%
* Manual, Ch. Development: Removal of the section describing creating and submitting patches.
* Manual, Ch. Development: Section rearrangement in preparation for new text.
* Manual, Ch. Development: svn to git and infrastructure abstraction in the Committers section. All references to svn have been changed to git, and the Gna! infrastructure has been abstracted to aliases in the main relax.tex file so that future infrastructure changes are easier to deal with. In addition, many edits of the text have been made.
* Manual, Ch. Development: Expansion of the relax repository section.
* Manual, Ch. Development: Minor edits to the relax repository git mirror section.
* Manual, Ch. Development: Editing of the source code repository section.
* Manual, Ch. Development: Added links to the web interfaces for all relax mirror sites.
* Fixing the return value of the execution of the unit and system tests.
* Manual, Ch. Development: New subsection and editing of the relax repository section. An initial section describing git version control and listing all relax repositories has been added.
* Manual, Ch. Infrastructure: Updated the relax repository section to include the website and demo.
* Manual, Ch. Development: Complete rewrite of the 'Submitting changes to the relax project' section. This converts the Subversion instructions to git, and switches from Gna! to the aliased primary relax infrastructure.
* Manual, Ch. Development: Converted the SCons section from SVN to git, and removed Gna! references.
* Manual, Ch. Development: Major editing of the 'Core design of relax' section. This section is now significantly improved. There was a lot of old information, some dating back to the pre-relax 3.0 designs, and a lot of new information has been added to expand on all of the descriptions.
* Manual, Ch. Development: Minor editing of the tracker section.
* Manual, Ch. Development: Updated the very out of date links section. The links have been updated to include everything listed at http://www.nmr-relax.com/links.html.
* Manual, Index: Removed the no longer relevant svnmerge.py entry.
* Simplified the Travis file.
* Added Travis CI support for Python 3.7 and OS X. Adding notifications from builds at travis-ci.com to nmr-relax-devel att lists.sourceforge.net. This was inspired by https://github.com/WeblateOrg/translation-finder/blob/master/.travis.yml. Windows cannot be added due to an unknown compile error.
* Fixed a bug when running scons. This happens after a 'pip install -U numpy', where numpy is upgraded from 13.3 to 16.1.0. More to read here: https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.set_printoptions.html; https://stackoverflow.com/questions/1987694/how-to-print-the-full-numpy-array; https://github.com/numpy/numpy/pull/12353.
* Fix for building on Mac OS X with Python 3.7. A possible solution was found here: https://stackoverflow.com/questions/31019854/typeerror-cant-use-a-string-pattern-on-a-bytes-like-object-in-re-findall.
* Adding the sending of mails to nmr-relax-devel att lists.sourceforge.net.
This introduces a spamming problem. Everyone who forks this project and have travis setup for their user will spam the develop mailing list. To limit this, there are options in travis: https://docs.travis-ci.com/user/notifications/; https://docs.travis-ci.com/user/conditional-builds-stages-jobs. Introducing a condition like if: branch = master seems not to be implemented yet: https://github.com/travis-ci/travis-ci/issues/1405. Travis has internal ticket to track this feature request. * SCons: Git support for the scons distribution targets. This was previously only set up for Subversion. * FSF Copyright Validation script: Support for tracking files renamed in later repositories. In this case, a file rename in the current git repository would not allow the file to be found in the SVN archive repository. The history of the later repository is now used to find all file renames after the end of the earlier repository. False git history is also correctly handled. * FSF Copyright Validation script: Bug fixes for recording the first VC commit as copyright ownership. * FSF Copyright Validation configuration: Updates for recent files and the script bug fixes. A lot false git history needed to be identified and blocked. And a lot of README files added for copyright identification needed to be manually included. * Python multiversion test suite script: Added Python 3.6 and 3.7 to the list to test. * Travis CI config: Minimise mailing list messages with successes only reported after fixing failures. * Test suite: Fix for the running of multiple test suite categories. Now all test categories will be run and the execution will not be terminated at the end of the category containing the first error/failure. * Activating MS Windows Python 3.7 32-bit for travis (64 bit does not work). Adding travis option for upgrading pip packages in one of the builds. 
This is to try to have pip packages where the versions numbers are normal/average and then where the packages have been upgraded to the newest. Adding check for Python 3.6, since this is the standard version in Ubuntu 16.04 and 18.04. * Added Python as overall language to travis. * System tests: Relax_disp.test_paul_schanda_nov_2015 is now skipped when Scipy is missing. * Devel scripts: Improved logic for finding Python.h in the manual C module building script. * SCons: Improved logic for finding Python.h for building the C modules. * Python multiversion test suite script: Removal of Python 2.3 and 2.4. These Python versions have not been supported since the first usage of "from __future__ import absolute_import" back in 2013. * Test suite: Graceful failure of the GUI tests when the wx app cannot be setup. This currently occurs when using wxPython-Phoenix. * Travis CI config: Adding Python 3.6 and adding test of mpirun. * .gitignore: Ignoring Windows C extensions. * Travis CI config: Trying to add MPI for Windows. It does not seem to work. * Travis CI config: Trying MPI on Windows does not work: "The processor type ''mpi4py'' is not supported." * GUI: Fix for a wxPython 2.9 issue found via the Relax_disp.test_bug_missing_replicates GUI test. The spectrum ID wx.ListCtrl element cannot be queried for item 0 when empty. * Development scripts: Rewrote the python_seek.py script to report all import errors. * Creation of a large set of system tests for expanding the frame_order.decompose user function. The tests have been copied from Frame_order.test_decompose_* and modified to include the new 'total', 'reverse', and 'mirror' user function keywords. 
The tests include: Frame_order.test_decompose2_free_rotor_z_axis, Frame_order.test_decompose2_iso_cone_z_axis, Frame_order.test_decompose2_iso_cone_xz_plane_tilt, Frame_order.test_decompose2_iso_cone_free_rotor_z_axis, Frame_order.test_decompose2_iso_cone_torsionless_z_axis, Frame_order.test_decompose2_pseudo_ellipse_xz_plane_tilt, Frame_order.test_decompose2_pseudo_ellipse_z_axis, Frame_order.test_decompose2_pseudo_ellipse_free_rotor_z_axis, Frame_order.test_decompose2_pseudo_ellipse_torsionless_z_axis, and Frame_order.test_decompose2_rotor_z_axis.
* User function frame_order.decompose: Implementation of the 'total', 'reverse' and 'mirror' params. This allows a fixed number of structures to be generated over the distribution, the model order to be reversed, and the models to step from the negative angle to the positive angle and then return to the negative angle. The original code has been simplified by switching from numpy.arange() to numpy.linspace() for generating the range of angles. The linspace() function is far more reliable than arange(), which has end-point instability issues.
* Creation of the Test_object.test_add_model unit test. This is within the _lib._structure._internal.test_object test module. The aim is to reveal issues with the model number accounting within the internal structural object.
* System test: Addition of Structure.test_add_secondary_structure. This will be used to quickly implement the new structure.add_helix and structure.add_sheet user functions.
* User function: Implementation of structure.add_helix for defining alpha helices.
* User function: Implementation of structure.add_sheet for defining beta sheets.
* Library: Implementation of the lib.arg_check.is_bool_or_bool_list() function. This is to allow for either Boolean values or lists of Booleans.
* User functions: Registration of the 'bool_or_bool_list' argument type.
* User function frame_order.decompose: The 'reverse' argument can now be a list of Booleans. This allows different modes to be selectively reversed.
* User function structure.superimpose: Speed up of the 'fit to first' algorithm. The translation and rotation are now skipped for the first structure (as the translation is zero and the rotation matrix is the identity matrix).
* User function structure.superimpose: Improved the documentation of the 'models' arg.
* RelaxErrors: Implementation of a number of new error types. This includes the RelaxBoolListBoolError, RelaxNoneBoolError, RelaxNoneBoolListBoolError, and RelaxNoneTupleNumError objects.
* Unit tests: Complete checking of the lib.arg_check module.
* lib.arg_check module: Missing RelaxError import for the new is_bool_or_bool_list() function. The lib.error import statement has also been spread across multiple lines and alphabetically sorted.
* lib.arg_check module: Protection of the functions against future numpy deprecations. The code 'arg == None' will not be supported by numpy in the future, if the arg being checked is a numpy object. Instead the 'arg is None' syntax must be used.
* RelaxErrors: Bug fix for the error message generation for list types. The simple_types and list_types variables are class rather than instance variables, but these were being unintentionally modified by the BaseArgError base class __init__() method.
* lib.compat module: Implementation of the Python version independent from_iterable() function. This will be used to avoid directly using itertools.chain.from_iterable(), which was only introduced in Python 2.6. For Python >= 2.6, the itertools.chain.from_iterable() function is used, otherwise the roughly equivalent lib.compat.from_iterable_pre_2_6() function is used.
* lib.arg_check module: Redesign of the is_float_object() function to handle any data input. Previously the function could only handle max rank-2 Python lists (lists of lists) and max rank-2 numpy arrays, and only the first dimensionality was being checked. Now any rank of list or numpy array is correctly handled.
* lib.arg_check module: Addition of the can_be_none argument to the is_bool() function.
* lib.arg_check module: Documentation fixes for the is_*() functions.
* lib.arg_check module: Fix for the wrong RelaxErrors being used in the is_num_tuple() function.
* lib.arg_check module: Fix for missing RelaxError imports for the is_list() function.
* lib.arg_check module: Bug fix, Boolean or empty lists no longer evaluate as true in is_num_tuple().
* lib.arg_check module: Bug fix, Boolean or empty lists no longer evaluate as true in is_num_list().
* lib.arg_check module: Simplification of the is_list() function.
* lib.arg_check module: Fixes to and simplification of the is_int_list() function. Boolean lists no longer evaluate as true.
* RelaxErrors: Addition of more error objects in preparation for a new lib.arg_check function.
* RelaxErrors: Expansion of the functionality of the BaseArgError base class. The docstring now documents the arguments. The 'dim' and 'rank' arguments have been added to allow for more control over the reported message for array-type objects. And the 'can_be_none' argument has been added to append ', or None' to the message, negating the need for the RelaxNone*Error objects. For formatting the lists used in the BaseArgError class, the new function human_readable_list() has been added to the lib.text.string module.
* lib.arg_check module: Creation of the generic validate_arg() function. A large number of associated unit tests have been added to test all combinations.
The _lib.test_arg_check unit tests include: Test_arg_check.test_validate_arg_all_basic_types, Test_arg_check.test_validate_arg_all_basic_types_and_all_containers, Test_arg_check.test_validate_arg_all_containers, Test_arg_check.test_validate_arg_bool, Test_arg_check.test_validate_arg_bool_list, Test_arg_check.test_validate_arg_bool_list_rank2, Test_arg_check.test_validate_arg_bool_or_bool_list, Test_arg_check.test_validate_arg_float, Test_arg_check.test_validate_arg_float_list, Test_arg_check.test_validate_arg_float_list_rank2, Test_arg_check.test_validate_arg_float_or_float_list, Test_arg_check.test_validate_arg_func, Test_arg_check.test_validate_arg_int, Test_arg_check.test_validate_arg_int_list, Test_arg_check.test_validate_arg_int_list_rank2, Test_arg_check.test_validate_arg_int_or_int_list, Test_arg_check.test_validate_arg_list, Test_arg_check.test_validate_arg_list_or_numpy_array, Test_arg_check.test_validate_arg_number, Test_arg_check.test_validate_arg_number_array_rank1, Test_arg_check.test_validate_arg_number_array_rank2, Test_arg_check.test_validate_arg_number_array_rank3, Test_arg_check.test_validate_arg_number_list, Test_arg_check.test_validate_arg_number_list_rank2, Test_arg_check.test_validate_arg_number_list_rank3, Test_arg_check.test_validate_arg_number_numpy_array_rank1, Test_arg_check.test_validate_arg_number_numpy_array_rank2, Test_arg_check.test_validate_arg_number_numpy_array_rank3, Test_arg_check.test_validate_arg_number_or_number_tuple, Test_arg_check.test_validate_arg_number_tuple, Test_arg_check.test_validate_arg_number_tuple_rank2, Test_arg_check.test_validate_arg_number_tuple_rank3, Test_arg_check.test_validate_arg_numpy_float_array, Test_arg_check.test_validate_arg_numpy_float_matrix, Test_arg_check.test_validate_arg_numpy_float_rank3, Test_arg_check.test_validate_arg_numpy_int_array, Test_arg_check.test_validate_arg_numpy_int_matrix, Test_arg_check.test_validate_arg_numpy_int_rank3, Test_arg_check.test_validate_arg_str, 
Test_arg_check.test_validate_arg_str_list, Test_arg_check.test_validate_arg_str_list_rank2, Test_arg_check.test_validate_arg_str_or_file_object, Test_arg_check.test_validate_arg_str_or_str_list, and Test_arg_check.test_validate_arg_tuple. In addition, a set of new RelaxErrors have been added for more detailed user feedback, including: RelaxArrayError, RelaxArrayFloatError, RelaxArrayIntError, RelaxArrayNumError, RelaxNumpyFloatError, RelaxNumpyIntError, and RelaxNumpyNumError.
* lib.arg_check module: Fixes for handling empty numpy arrays. This is for the is_float_array() and is_float_matrix() functions.
* lib.arg_check module: Removal of the is_list_val_or_list_of_list_val() function. This was never completely implemented, and was only used by the 'point' argument of the dx.map user function. The user function py_type "list_val_or_list_of_list_val" value has been renamed to 'num_list_or_num_list_of_lists' and the call to is_list_val_or_list_of_list_val() replaced by a call to validate_arg(). The 'dim' argument for the 'point' argument of the dx.map user function has been modified to match the validate_arg() function syntax.
* User function definition redesign, increasing the argument setting flexibility. The 'py_type' argument definition has been replaced by 'basic_types', 'container_types', and sometimes 'dim'. This matches the new validate_arg() function in the lib.arg_check module and allows for far greater flexibility in defining a parameter together with more extensive parameter checking than previously possible.
* specific_analyses.consistency_tests.api module: Missing RelaxWarning import.
* User function definitions: Support for checking file lists (from arg_type='file sel multi'). The new RelaxStrFileListStrFileError object has been created for this check (and the RelaxStrListError also added for completeness).
* User function definitions: Overrides for arguments with 'arg_type' set. The 'arg_type' argument is now fully documented in the user_functions.objects module Uf_container.add_keyarg() function docstring. The value is now checked, and a few unimplemented values have been eliminated. Overrides for the 'dim', 'basic_types', and 'container_types' are now set for almost all arguments with 'arg_type' set. And checks that these are not set in the user function definition have been added.
* system.cd user function: Removal of the incorrect wiz_filesel_style argument in the definition.
* User function definitions: Split of the 'file sel' arg_type value into readable and writable. The arg_type value is now either 'file sel read' or 'file sel write'. The 'file sel multi' value has also been split into 'file sel multi read' and 'file sel multi write'. This is used for checking if file objects supplied to the user function are correctly readable or writable. And it is used in the GUI to automatically set the file selection dialog style. Hence the redundant 'wiz_filesel_style' argument has been removed from the user function definitions. The is_filetype_readable(), is_filetype_rw(), and is_filetype_writable() functions have been added to... [truncated message content] |
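The validate_arg() redesign described above, combining basic types, container types, and optional None values in one checker, might be sketched as follows (a hypothetical illustration only, not relax's actual lib.arg_check API):

```python
def validate_arg(value, name, basic_types=(), container_types=(), can_be_none=False):
    """Check a value against the allowed basic and container types."""
    if value is None:
        if can_be_none:
            return value
        raise TypeError("The '%s' argument must not be None." % name)
    if container_types and isinstance(value, tuple(container_types)):
        # Recursively validate each element of the container.
        for element in value:
            validate_arg(element, name, basic_types=basic_types)
        return value
    # bool is a subclass of int, so reject Booleans explicitly when not allowed.
    if isinstance(value, bool) and bool not in basic_types:
        raise TypeError("The '%s' argument cannot be a Boolean." % name)
    if basic_types and not isinstance(value, tuple(basic_types)):
        raise TypeError("Invalid type for the '%s' argument." % name)
    return value
```

A single generic checker like this is what lets the 'py_type' definitions be replaced by the more flexible 'basic_types'/'container_types'/'dim' trio.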
From: Edward d'A. <ed...@nm...> - 2019-02-07 09:41:44
|
Hi,

I would like to announce that, after the May 2017 shutdown of the Gna! open source infrastructure and hence the loss of relax's home (as well as that of minfx and bmrblib), we have now finally and fully migrated to SourceForge:

https://sourceforge.net/projects/nmr-relax/
https://sourceforge.net/projects/minfx/
https://sourceforge.net/projects/bmrblib/

The details of this migration are below. As part of the migration, I will soon release a new version of relax - version 4.1.0. This will consist of 2 years of minor changes and bug fixes, as well as lots of documentation changes for the new infrastructure. Note that, for reference, I have CCed all current and former relax developers.

Regards, Edward

1) GNU Savannah

For two years I tried to register relax at GNU Savannah as a non-GNU project: https://savannah.nongnu.org/task/?14528. However the admin there was making it particularly hard, setting a bar so high that most of the official GNU software collection probably would not make it over. See that thread for all the details. This includes the development of the devel_scripts/fsfcv script for the "FSF copyright validation" of absolutely all relax files. The reason for trying Savannah was their strong free software philosophy, which relax has always stringently followed. That was the original reason for choosing Gna! (as well as the fact that Gna! was one of the first infrastructures with support for the then new and advanced Subversion version control system). I will continue to try to have relax registered at Savannah as a long-term backup solution. Any constructive help on the registration thread is welcome.

2) Git migration

As part of the process, I spent considerable time migrating my permanent backups of the relax Subversion repository backend to a git repository, preserving the full commit history. Due to relax's long Subversion history, with some incredibly long-lived branches, this process ended up being quite complex. See the docs/devel/svn2git_migration/README file for the details: https://sourceforge.net/p/nmr-relax/code/ci/master/tree/docs/devel/svn2git_migration/README

To visualise the complex history, look back at the commits with:

$ GIT_PAGER="less -S" git log --all --graph --date-order --pretty=format:'%Cred%H %P%Creset -%C(yellow)%d%Creset %s %Cgreen(%cd) %C(bold blue)<%an>%Creset' --date=iso

Note that the old SVN repository (as well as the git-svn bridge repository) is archived in a read-only state on SourceForge: https://sourceforge.net/p/nmr-relax/code-svn-archive/HEAD/tree/

3) relax mirrors

In the meantime, mirroring of relax was set up across a number of different infrastructures:

SourceForge (SF) - https://sourceforge.net/projects/nmr-relax/
GitHub (GH) - https://github.com/nmr-relax
GitLab (GL) - https://gitlab.com/nmr-relax
Bitbucket (BB) - https://bitbucket.org/nmr-relax/

Having relax spread across so many places should ensure access to relax for decades to come. The mirroring is specifically for the git repositories. These now include the relax source code, the relax website, and the new relax demonstration data files and scripts as three separate git repositories. I have all of these set up as remotes and push all changes to all mirrors! That way the main repositories and mirrors are always up to date.

4) SourceForge

SourceForge was chosen over the other infrastructures for a number of reasons. These include:

- File downloads.
- Multiple trackers (or tickets, as they call them).
- SVN support for hosting the old SVN repository.
- Real mailing lists.
- Backend shell log in (shell services).
- MySQL and PHP 7 support (we could possibly set up the relax MediaWiki here in the future).

Each of the free software/open source infrastructures has its own benefits, but SourceForge was chosen as the primary site for practicality, as it has all that relax needs.
5) Mailing lists

The old relax mailing lists have now been fully restored:

https://sourceforge.net/p/nmr-relax/mailman/nmr-relax-announce/
https://sourceforge.net/p/nmr-relax/mailman/nmr-relax-commits/
https://sourceforge.net/p/nmr-relax/mailman/nmr-relax-devel/
https://sourceforge.net/p/nmr-relax/mailman/nmr-relax-users/

The 2 year gap is evident in the history. These have been migrated to the new SourceForge mailing lists:

nmr-relax-announce att lists.sourceforge.net
nmr-relax-commits att lists.sourceforge.net
nmr-relax-devel att lists.sourceforge.net
nmr-relax-users att lists.sourceforge.net

6) Website at GitHub

Due to SourceForge having poor support for SSI includes (requiring *.shtml files for this - "SSI will only be applied on files with an .shtml extension"), GitHub is instead being used for http://www.nmr-relax.com. GitHub uses Jekyll and YAML, which allows for automatic merging with the master branch with its SSI include files. The website git repository has a gh-pages branch. Changes are made to the master branch, merged into the gh-pages branch, and then pushed to all mirrors. GitHub picks up the changes and quickly updates the website.

7) Wiki

The relax wiki (http://wiki.nmr-relax.com) was never affected by the Gna! shutdown, as it runs on personal infrastructure donated by Troels Schwarz-Linnet (an active relax developer).

8) The commits mailing list and git-multimail.py

Due to SourceForge providing the powerful shell services, I can log into their backend and directly modify the bare git repository (and svn repository). That allows me to set up repository hook scripts. For example, it allowed me to make the svn and git-svn bridge repositories read-only. I have therefore set up the git-multimail.py script for reporting all commits on the SF git repositories to nmr-relax-commits att lists.sourceforge.net. I have used a custom 'flightgear' branch that simplifies the output: https://github.com/edward-dauvergne/git-multimail

9) The Internet Archive Wayback Machine

Note that most, if not all, of the relax open source infrastructure from Gna! was preserved by the Internet Archive prior to the shutdown. The main page, for example, is: https://web.archive.org/web/20170301013031/https://gna.org/projects/relax/

Navigation to bug, support and task items is not easy though. Often you will need to paste the original URL directly into the Internet Archive (https://web.archive.org/).

10) OpenHub

I have set up relax at OpenHub to allow for meaningful development statistics: https://www.openhub.net/p/nmr-relax

If you are or were a relax developer, you can sign up there to have a public profile documenting your open source coding. For example: https://www.openhub.net/accounts/true_bugman

The stats are shown on the links page: http://www.nmr-relax.com/links.html#OpenHUB

11) NESSY

As a side note, as Michael Bieri has nominated me to be the caretaker for his NESSY project, I have also performed a migration of that software to a number of mirrors:

https://sourceforge.net/projects/nmr-nessy/
https://github.com/nmr-nessy
https://gitlab.com/nmr-nessy

Note that the NESSY Subversion repository was unfortunately permanently lost with the Gna! shutdown, so the git repositories have no history. The SourceForge site has been set up to be fully functional for any NESSY users. This includes full restoration of the NESSY website: https://nmr-nessy.sourceforge.io/ |
From: Edward d'A. <ed...@do...> - 2016-05-13 18:56:44
|
This is a minor feature and bugfix release. The new user functions system.cd and system.pwd have been added to allow the working directory to be changed and displayed. The time and sys_info user functions have been renamed to system.time and system.sys_info. The structure.delete_ss user function has been created to remove the helix and sheet information from the internal structural object. For bugs, the R2eff dispersion model can now handle missing peaks in subsets of spectra, and the structure.read_pdb user function can now handle multiple structures and multiple models with the merge flag set. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_4.0.2 . The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).

The full list of changes is:

Features:
* Addition of the new user functions system.cd and system.pwd to allow the working directory to be changed and displayed.
* Addition of the structure.delete_ss user function to remove the helix and sheet information from the internal structural object.

Changes:
* Improved formatting for the \yes LaTeX command for the HTML manual (www.nmr-relax.com/manual/). This now inputs the raw HTML character for a tick.
* The replicate title finding script now processes short titles as well. This shows that the Frame_order.html file will be conflicting and overwritten.
* Avoidance of a replicated title in the frame order chapter of the manual.
* Added some Unicode characters for improved formatting of the CHANGES file.
* A number of updates for the release checklist document. This should make it easier to replicate the full release process.
* Updated the release checklist document. The version number at http://wiki.nmr-relax.com/Template:Current_version_relax also needs to be updated for each release.
* Added a check for the total argument for the frame_order.distribute user function. The maximum value is 9999, as the PDB format cannot accept more models.
* Creation of the structure.delete_ss user function. This simply resets the helix and sheet data structures in the internal structural object to [].
* Updated the copyright notices for 2016.
* Created a short Info_box copyright string for displaying in the main GUI window. This shows the full range of copyright dates.
* Added the spin_num boolean argument to the structure.load_spins user function. Setting this flag to False will cause the spin number information to be ignored when creating the spin containers. This allows for better support of homologous structures with different PDB atom numbering. The default flag value is True, preserving the old behaviour.
* Added support for concatenating atomic positions in the structure.load_spins user function. Together with the spin_num flag set to False, this allows atomic positions to be read from multiple homologous structures with different PDB atomic numbering. The spin containers will be created from the first structure in which the spin is defined, and the atomic positions from subsequent structures will be appended to the list of current atomic positions.
* Fix for the Structure.test_read_pdb_internal3 system test. With the new atomic position concatenation support, when called sequentially the structure.load_spins user function should always use the same value for the ave_pos argument.
* In the GUI, the user functions sys_info and time are now grouped into a "system" subclass. This is to prepare for other system related functions.
* Adding a new 16x16 icon for the Oxygen folder-favorites icon.
* Adding a new file at lib/system.py. This file will contain different functions related to the Python os module and other system related functions, for example changing directory or printing the working directory.
* In lib/__init__.py, adding the filename for system.py.
* Renaming the folder-favorites icon.
* Deleting the old folder-favorites icon.
* Adding a new graphics variable, WIZARD_OXYGEN_PATH, to use Oxygen icons with a size of 200px.
* Adding the new user function system.cd. This is to change the current working directory.
* Adding a new 200px Oxygen folder-favorites icon. This is to be used in the wizard image.
* Adding user function translations to catch the new naming of the renamed functions.
* Adding a new lib.system.pwd() function to print and return the current working directory.
* Adding a new user function system.pwd() to print/display the current working directory.
* Adding new 16x16px and 200px versions of the Oxygen folder-development icon. This icon is used for displaying the current working directory.
* Adding a relax GUI menu for changing the current working directory.
* Adding a menu item for changing the current working directory.
* Adding a verbose True/False argument to the lib.system.pwd() function.
* Storing the current working directory as a GUI variable.
* Adding a toolbar button for changing the current working directory.
* Adding a verbose flag to the lib.system.pwd() function.
* Changing to a file dialog for the user function system.cd.
* Adding an observer for the current working directory.
* Modifying the user function system.cd not to show the result on STDOUT.
* Letting the lib.system.cd function notify the observer when changing directory.
* Letting the current working directory be printed in the status bar at the bottom.
* Updating self.system_cwd_path when a directory change is observed.
* For the four auto-analysis methods, the default results directory is now the current working directory instead of the launch directory.
* Changing the keyboard shortcut for changing the working directory to Ctrl+W, since Ctrl+C is often used for copying (from the terminal).
* Fix for a GUI prompt bug, where ANSI escape characters should not be printed when the interpreter is inherited from wxPython.
* Added a newline character after printing the script.
* Optimising the width of the statusbar.
* When a user function script is called, a pipe_alteration notification is made. This forces the GUI to update and makes sure that it is up to date.
* Updated the frame order auto-analysis for the time -> system.time user function change.
* Fix for the GUI status bar element widths. Fixed widths in pixels cause text truncation on many systems, depending on the width of the main relax window. Instead, variable widths should be used to allow wxPython to more elegantly present the text while minimising truncation.
* Created a system test for catching bug #24601. This is the failure of the optimisation of the 'R2eff' dispersion model when peaks are missing from one spectrum, as reported by Petr Padrta at https://gna.org/bugs/?24601. The test uses his data and script to trigger the bug.
* Simplified the Relax_disp.test_bug_24601_r2eff_missing_data system test. This allows the test catching bug #24601 to complete in a reasonable time (2 seconds on one system).
* Fix for the independence of the relax library. As lib.system was using the status object, the library independence was broken. To work around this, the module has simply been shifted into the pipe_control package.
* Added some missing Oxygen icons to allow the relax manual to compile. These are the 128x128 EPS versions of the places/folder-development.png and places/folder-favorites.png Oxygen icons recently introduced. For completeness, the 32x32, 48x48, and 128x128 PNG versions of the icons have also been added. To help create these EPS icons in the future, the graphics/README file has been added with a description of the *.eps.gz file creation.
* Some more details for the *.eps.gz icon creation process.
* Mac OS X fixes for the Structure.test_pca and Structure.test_pca_observers system tests. The eigenvectors on this OS are sometimes inverted. As the sign of an eigenvector is irrelevant, the vectors hardcoded into the system tests are now inverted as required.

Bugfixes:
* Fix for bug #24218 (https://gna.org/bugs/?24218). This is the incorrect labelling of alignment tensors by the align_tensor.matrix_angles user function when a subset of tensors is specified. The logic for the labels was expanded from handling only all tensors to handling subsets.
* Bug fix for the structure.read_pdb user function (bug #24300, https://gna.org/bugs/?24300). When the merge flag is True, and both multiple structures and multiple models are present, the structure.read_pdb user function would fail with a RelaxError. The problem was that the molecule index was simply not being updated correctly.
* Fix for bug #24601. This is the failure of the optimisation of the 'R2eff' dispersion model when peaks are missing from one spectrum, as reported by Petr Padrta at https://gna.org/bugs/?24601. To handle the missing data, the peak intensity keys are now checked for in the spin container peak_intensities data structure. This is both for the R2eff model optimisation as well as the data back-calculation. A warning is given when a key is missing. The relaxation dispersion base_data_loop() method has been modified to now yield the spin ID string, as this is used in the warnings. In addition, the Grace plotting code in the relax library was also modified. When peak intensity keys are missing, some of the Grace plots will have no data. The code will now generate a plot for that data set, but detect the missing data and allow an empty plot to be created.
|
From: Edward d'A. <ed...@do...> - 2015-12-15 14:34:55
|
This is a major feature and bugfix release. Features include the new structure.pca user function for performing a principal component analysis (PCA) of a set of structures, handling of replicated R2eff data points in the dispersion analysis, improvements in the handling of PDB structures, protection against numpy ≥ 1.9 FutureWarnings for a number of soon-to-change behaviours in numpy, and the addition of a deployment script for Google Cloud Computing. Bugfixes include an error when loading relaxation data, the CSA constant equation in the manual, missing information in the relax state and results files, loading of certain state files in the GUI, running relax with no graphical display and using matplotlib, and a BMRB export failure when a spin container is missing data or parameters. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_4.0.1 . The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * Many improvements for the compilation of the HTML version of the relax manual (http://www.nmr-relax.com/manual/index.html). * Updated relax to eliminate all FutureWarnings from numpy ≥ 1.9, to future-proof relax against upcoming numpy behaviour changes. * Ability to handle replicated R2eff data points by the relax_disp.r2eff_read user function, by adding 0.001 to the frequency value for the replicated point. * A new sample script for loading a model-free results file and back-calculating relaxation data. * Improvements for the handling of PDB structural data. * Implementation of the structure.pca user function for performing principal component analyses (PCA) of an ensemble of structures. 
* Addition of a script for rapid deployment on the Google Cloud Computing infrastructure. Changes: * Fix for the rigid frame order model 2nd degree frame order matrix in the manual. The wrong symbol was being used. * Removed the newparagraph and newsubparagraph definitions from the LaTeX manual. These were causing conflicts with latex2html, preventing the HTML version of the manual at http://www.nmr-relax.com/manual/index.html from being compiled. These definitions are unnecessary for the current setup of the sectioning in the manual. * Modified the short captions in the new frame models chapter of the manual. The runic ᛞ character has been replaced simply by 'Daeg'. This is due to incompatibilities with latex2html which prevent the HTML manual at http://www.nmr-relax.com/manual/index.html from being compiled. * Removal of the definition of a fixed-width table column from the LaTeX manual preamble. This is required as the definition breaks latex2html compatibility, causing a corruption in the figure numbering and resulting in the images in the HTML being essentially randomised. * Removal of the accents package to allow the HTML manual to be compiled. The 'accents' LaTeX package is not compatible with latex2html, so the easiest fix is to eliminate the package. * Manually rotated the frame order matrix element EPS manual figures, for latex2html compatibility. The '90 rotate' command has been deleted and the bounding box permuted as 'a b c d' -> 'b -c d -a'. This allows the angle argument in the \includegraphics{} command to be dropped, as latex2html does not recognise this. It allows the figures to be visible in the HTML version of the manual at http://www.nmr-relax.com/manual/index.html . * Redesign of the frame order parameter nesting table in the manual for latex2html compatibility. The table uses the tikz package, which is fatal for latex2html, even if not used. 
Therefore the table in the docs/latex/frame_order/parameter_nesting.tex file has been converted into a standalone LaTeX document to create a cropped postscript version of the tikz formatted table. A compilation script has been added as well. The resultant *.ps file is now included into the PCS numerical integration section, rather than this section creating the tikz table. All tikz preamble text has been removed to allow latex2html to run. * Workaround for latex2html not being able to handle the allrunes package or associated font. In the preamble 'htmlonly' environment, the frame order symbols are redefined using the text 'Daeg' instead of the runic character ᛞ. * Fixes for sub and superscripts throughout the manual. This introduces {} around all sub and superscripted \textrm{} instances. This is not needed for the PDF version of the manual as the missing bracket problem is avoided, but it affects the HTML version of the manual compiled by latex2html, which requires the correct notation. The fixes are for both the new frame order chapter as well as the relaxation dispersion chapter. * Editing and fixes for the relax 4.0.0 part of the CHANGES file. * Updated and improved the wiki instructions in the relax release checklist document. * One more wiki instruction about checking for dead links in the release checklist document. * More minor changes to the 'Announcement' section of the release checklist document. * Updated the shell script for finding duplicated titles in the LaTeX files of the manual. * Converted the duplicate title finding shell script into a Python script. The Python script is far more advanced and uses a different logic to produce a table of replicated titles and their count. The script also returns a failed exit status when replicates exist. * Converted the replicated title finding Python script to use a class structure. This allows the script to be imported as a module. The replicate finding has been shifted into a find() class method. 
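The replicated-title check described above can be sketched roughly as follows. This is a hypothetical, simplified version, not the actual relax script: the function name, the regular expression, and the example document strings are all illustrative (the real script scans the LaTeX source files of the manual).

```python
import re
from collections import Counter

def find_replicates(latex_sources):
    """Collect all (sub)section titles and report any that are replicated.

    Hypothetical sketch of the replicated-title check, not the actual
    relax script. Returns a dict mapping each replicated title to its count.
    """
    titles = []
    for text in latex_sources:
        titles += re.findall(r'\\(?:sub)*section\{([^}]*)\}', text)
    return {t: n for t, n in Counter(titles).items() if n > 1}

docs = [r'\section{Model-free analysis} \subsection{Theory}',
        r'\section{Model-free analysis}']
replicates = find_replicates(docs)
# A build target would fail here when replicates exist, e.g. via sys.exit(1).
print(replicates)
```

A scons target wired to such a function can then refuse to build the manual while any title is duplicated, which is the behaviour described for the replicate_title_check target.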
* Renamed the replicate title finding script. * Removed the duplicate LaTeX title finding shell script. This is now handled by the far more advanced Python script. * The Scons compilation of the PDF and HTML manuals now checks for replicated titles. A new replicate_title_check target has been added to the scons scripts. This calls the find() method of the replicate LaTeX title finding script to determine if any titles are replicated, and if so the scons target returns with a sys.exit(1) call. This target is set at the start of the user_manual_pdf, user_manual_pdf_nofetch, user_manual_html, user_manual_html_nofetch scons targets. The result is that the manual cannot be compiled if replicate titles exist, forcing the titles to be changed. This ensures that the HTML pages at http://www.nmr-relax.com/manual/ will all be unique, as replicated titles result in only one HTML page being created for all the sections. * Elimination of replicated titles in the LaTeX sources that the new frame order chapters introduced. * Removal of an old replicated title in the LaTeX sources for the manual. This is the title 'Model-free analysis' which is used for the entire specific analysis chapter as well as for the model-free analysis section of the values, gradients, and Hessians for optimisation chapter. * Fixes and improved printouts for the replicate_title_check scons target. * Updated all of relax to protect against future changes occurring in the numpy Python package. From numpy version 1.9, the FutureWarning "__main__:1: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future." is seen in a large percentage of all relax's user functions. This is caught and turned into a RelaxWarning with the same message. The issue is that the behaviour of the comparison operators '==' and '!=' will change with future numpy versions. These have been replaced with 'is' and 'is not' throughout the relax code base. 
Changes have also been made to the minfx and bmrblib packages to match. * More future protection against numpy changes. The FutureWarning is "`rank` is deprecated; use the `ndim` attribute or function instead. To find the rank of a matrix see `numpy.linalg.matrix_rank`." Therefore the N-state model target function method paramag_info() has been updated to use the .ndim attribute and no longer use the numpy.rank() function. * Created the Mf.test_bug_23933_relax_data_read_ids system test. This is designed to catch bug #23933 (https://gna.org/bugs/?23933), the "NameError: global name 'ids' is not defined" problem when loading relaxation data. A truncated version of the PDB file and relaxation data, the full versions of which are attached to the bug report, consisting solely of residues 329, 330, and 331, has been added to the test suite shared data directories, and the system test written to catch the NameError. * Updated the Mf.test_bug_23933_relax_data_read_ids system test to catch the RelaxMultiSpinIDError. This allows the system test to pass, as a RelaxMultiSpinIDError is expected. * Updated the minfx and bmrblib versions in the release checklist document to 1.0.12 and 1.0.4. This is to remove the numpy FutureWarning messages about the '== None' and '!= None' comparisons to numpy data structures, which in the future will change in behaviour. * Increased the Gna! news item sectioning depth in the release checklist document. * Expanded the description of the sequence.attach_protons user function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.user/1849/focus=1855 . * Added initial test data from Paul Schanda. This will demonstrate that there are several possibilities to enhance the r2eff point method. * Added the Relax_disp.test_paul_schanda_nov_2015 system test. This will catch the loading of "nan" values. * Added an additional check in the sequence reading so that "nan" values are skipped. 
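The '== None' comparison issue described above can be seen directly. In the numpy 1.9 era the elementwise behaviour was only announced via a FutureWarning; with recent numpy versions it is the actual behaviour, which is why the identity operators 'is' and 'is not' are the safe replacement:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0])

# With modern numpy, '== None' is an elementwise comparison, so the old
# 'if data == None:' style check no longer means "is the value unset":
elementwise = data == None
print(elementwise)  # an array of booleans, not a single bool

# The identity check is unambiguous, which is what relax switched to.
assert data is not None
assert not (data == None).any()
```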
* Making sure that the replicated 4000 Hz point for the 950 MHz experiment is not overwritten. * In the Relax_disp.test_paul_schanda_nov_2015 system test, added a test of counting the R2eff values. This shows that the replicated R2eff point at 950 MHz/4000 Hz is overwritten. A solution could be to change the dispersion frequency very slightly, to allow the addition of the data point. * Expanded a comment about why 1 is subtracted in a test. * Added further tests to Relax_disp.test_paul_schanda_nov_2015. This will show that replicates of R2eff values are not handled well. * In the r2eff_read() function of the dispersion data module, added the ability to read R2eff values which are replicated. This is done by first checking if the dispersion key exists in the r2eff dictionary. If it exists, 0.001 is repeatedly added to the frequency until a free key is found. This should help handle multiple R2eff points as separate values, without taking any decision to average them. * Added the expectation of raising a RelaxError if trying to plot when no model information is stored. * Raising an error if plotting dispersion curves and no model is saved. * Changed the example script for analysing data. * Extended the Relax_disp.test_paul_schanda_nov_2015 system test to include auto-analysis and clustered fits. This should show that the analysis is now possible. * Added a temporary state and a script for GUI setup to the Paul Schanda data. * Added the Relax_disp.test_paul_schanda_nov_2015 GUI test. This will show that loading a state will create a problem. Traceback (most recent call last): TypeError: int() argument must be a string or a number, not 'NoneType'. * Added a sample script for back-calculating relaxation data from a model-free results file. This is useful when the results file is not the final model, as these results files do not contain the back-calculated data. This is in response to Christina Möller's sr #3303 support request (https://gna.org/support/?3303). 
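The replicated-point handling described above amounts to a simple key-collision loop on the frequency used as the dictionary key. A minimal sketch, with an illustrative helper name rather than the actual relax_disp code (which works on the spin container data structures):

```python
def free_frequency_key(r2eff, frq, shift=0.001):
    """Return a frequency usable as a new dictionary key.

    Illustrative sketch of the relax_disp.r2eff_read behaviour: a
    replicated frequency is shifted by +shift Hz until the key is free,
    so replicated R2eff points survive as separate values instead of
    overwriting each other.
    """
    while frq in r2eff:
        frq += shift
    return frq

# Two replicated points at 4000 Hz survive as two separate entries.
r2eff = {}
for frq, value in [(4000.0, 21.3), (4000.0, 21.9)]:
    r2eff[free_frequency_key(r2eff, frq)] = value
```

The 0.001 Hz shift is negligible compared to real CPMG or spin-lock frequencies, so the duplicated point is effectively kept at the same position in the dispersion curve.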
* Using Gary's lib.float.isNaN() instead of math.isnan(), to have backwards compatibility with Python 2.5. * Fix for a spelling mistake, and documentation of the new behaviour of relax_disp.r2eff_read() when reading R2eff points with the same frequency. If the spin container already contains R2eff values with the same 'frequency of the CPMG pulse' or 'spin-lock field strength', the frequency will be changed by an infinitesimally small value of +0.001 Hz. This allows for duplicates or more of the same frequency. * Modified the internal structural object to be less influenced by the format of the PDB. The PDB serial number is now intelligently handled, in that it is reset to 1 when a new model is created. This information is still kept for supporting the logic of the reading of the CONECT records, and will be eliminated in the future. The chain ID information is now no longer stored in the internal structural object, as this information is recreated by the structure.write_pdb user function based on how the internal structural object has been created. * Updates to the Noe and Structure system test classes for the internal structural object changes. The serial number can now be reset, and the chain ID information is no longer stored. * Added a file to the test suite shared data to help implement the PCA structural analysis. This is the N-domain of the CaM-IQ complex used in a frame order analysis. It is the first 5 structures from a call to the frame_order.distribute user function, with the different rigid bodies merged back together into a single molecule. * Created the structure.pca user function front end. This is currently modelled on the structure.rmsd user function framework. * Basic implementation of the structure.pca user function back end. This is the new pca() function of the pipe_control.structure.main module. 
It simply performs some checks, assembles the atomic coordinates, and then passes control to the relax library pca_analysis() function of the currently unimplemented lib.structure.pca module. * Partial implementation of the PCA analysis in the relax library. This is for the new structure.pca user function. The lib.structure.pca module has been created, and the pca_analysis() function created to calculate the structure covariance matrix, via the covariance() function, and then calculate the eigenvalues and eigenvectors of the covariance matrix, sorting them and truncating to the desired number of PCA modes. * Added the 'algorithm' and 'num_modes' arguments to the structure.pca user function. These are passed all the way into the relax library backend. * Implemented the SVD algorithm for the PCA analysis in the relax library. This simply calls numpy.linalg.svd(). * The PCA analysis in the relax library now calculates the per structure projections along the PCs. * The PCA analysis function in the relax library is now returning data. This includes the PCA values and vectors, and the per structure projections. * The PCA values and vectors, and the per structure projections, are now being stored. This is in the structure.pca user function backend in the pipe_control.structure.main module. * Added the 'format' and 'dir' arguments to the structure.pca user function. This is to the front and back ends. * Modified the assemble_structural_coordinates() method to return more information. This is from the pipe_control.structure.main module. The 'lists' boolean argument is now accepted, which will cause the function to additionally return the object ID list per molecule, the model number list per molecule, and the molecule name list per molecule. * The structure.pca user function now creates graphs of the PC projections. This includes PC1 vs. PC2, PC2 vs. PC3, etc. * Added the Gromacs PCA results for the distribution.pdb file. 
This includes a script used to execute all parts of Gromacs and all output files. * Updated the Gromacs PCA results for the newest Gromacs version (5.1.1). * Created an initial Structure.test_pca system test. This executes the new structure.pca user function, and checks if data is stored in cdp.structure. * Improved the graphs in the backend of the structure.pca user function. The graphs are now clustered so that different models of the same structure in the same data pipe are within one graph set. The graph header has also been improved. * Expanded the Structure.test_pca system test checks to compare to the values from Gromacs. * A weighted mean structure can now be calculated. This is for the calc_mean_structure() function of the relax library module lib.structure.statistics. Weights can now be supplied for each structure to allow for a weighted mean to be calculated and returned. * Added support for 'observer' structures in the structure.pca user function. This allows a subset of the structures used in the PC analysis to have zero weight so that these structures can be used for comparison purposes. The obs_pipes, obs_models, and obs_molecules arguments have been added to the user function front end. The backend uses these to create an array of weights for each structure, and the lib.structure.pca functions use the zero weights to remove the observer structures from the PC mode calculations. * Created the Structure.test_pca_observers system test. This is for testing the new observer structures concept of the structure.pca user function. * Improved the printouts from the relax library principal component analysis. This is in the pca_analysis() function of the lib.structure.pca module. * Fixes and improvements for the graphs produced by the structure.pca user function. The different sets are now correctly created, and are now labelled in the plots. * Adding a testing deploy script for rapid deployment on Google Cloud Computing. 
This is for an intended install on Ubuntu 14.04 LTS. * Expanding the installation script. * Putting the installation into functions in the deploy script. * Splitting the deploy script into several small functions. * Adding checking statements to the install script. * When sourcing the scripts, several functions can be performed instead. * Added spaces to the install script for better printing. * Adding a tutorial script. * Adding 2 tutorial scripts. * Fix for a small spin ID error in the tutorial script. * Created a system test for catching bug #24131. This is the BMRB export failure when the SpinContainer object has no s2 attribute, as reported by Martin Ballaschk at https://gna.org/bugs/?24131. * Modified the Mf.test_bug_24131_bmrb_deposition system test to check for the RelaxError. The test results in a RelaxError, as the results file contains no selected spins. * Added the Mf.test_bug_24131_missing_interaction system test to catch another problem. This is part of bug #24131 (https://gna.org/bugs/?24131), the BMRB export failure with the SpinContainer object having no s2 value. However the previous fix of skipping deselected spins introduced a new problem of relax still searching for the interatomic interactions for that deselected spin. Bugfixes: * Replicated titles in the HTML version of the relax manual (http://www.nmr-relax.com/manual/index.html), and hence replicated HTML file names overwriting earlier sections, have been eliminated. * Fix for bug #23933 (https://gna.org/bugs/?23933). This is the "NameError: global name 'ids' is not defined" problem when loading relaxation data. The bug was introduced back in November 2014, and is due to some incomplete error handling code. The problem is that the spin type that the relaxation data belongs to (@N vs. @H) has not been specified. Now the correct RelaxMultiSpinIDError is raised. The 'ids' variable did not exist - it was code that was planned to be added, but never was and was forgotten. 
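The structure.pca machinery described in the changes above (assemble the coordinates, build the covariance, take the eigenvalues and eigenvectors via SVD, then project each structure) can be sketched in a few lines. This is an illustrative implementation, not the actual relax lib.structure.pca code, and the function name is hypothetical:

```python
import numpy as np

def pca_modes(coords, num_modes=2):
    """Sketch of an ensemble PCA via SVD (illustrative only).

    coords: (n_structures, n_coordinates) array, e.g. the flattened
    x,y,z positions of the common atoms of each structure.
    """
    centred = coords - coords.mean(axis=0)
    # The SVD of the centred coordinate matrix gives the PCA modes
    # without explicitly forming the covariance matrix.
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    values = s**2 / (coords.shape[0] - 1)   # covariance eigenvalues
    vectors = Vt[:num_modes]                # principal component vectors
    projections = centred @ vectors.T       # per structure projections
    return values[:num_modes], vectors, projections

# Toy ensemble: all variation lies along the first coordinate.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
values, vectors, projections = pca_modes(coords)
```

The per-structure projections returned here are what the PC1 vs. PC2 style plots described above are built from.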
* Fix for the CSA constant equation in the model-free chapter of the manual. This was spotted by Christina Möller and reported on the relax-users mailing list at https://mail-archive.com/relax-users%40gna.org/msg01776.html . * Bug fix for the storage of the XML structural object in the state and results files. Previously any objects added to cdp.structure (or any structure object) would not be saved by the structural object to_xml() method unless the function was explicitly modified to store that object. Now all objects present will be converted to XML. * Fix for the relaxation dispersion analysis in the GUI, as caught by the Relax_disp.test_paul_schanda_nov_2015 GUI test. When loading from a script state file, the value of "None" can be present. This is now set to the standard values. * Fix for running relax on a server with no graphical display and using matplotlib. The error was found with the Relax_disp.test_repeat_cpmg system test, and the error generated was: "QXcbConnection: Could not connect to display. Aborted (core dumped)". The backend of matplotlib has to be changed. This is, for example, described in: http://stackoverflow.com/questions/2766149/possible-to-use-pyplot-without-display http://stackoverflow.com/questions/8257385/automatic-detection-of-display-availability-with-matplotlib. * Modified the behaviour of the bmrb.write user function backend for a model-free analysis (fix for bug #24131, https://gna.org/bugs/?24131). This is in the bmrb_write() method of the model-free analysis API. Deselected spins are now skipped and a check has been added to be sure that spin data has been assembled. * Another fix for bug #24131 (https://gna.org/bugs/?24131). This is the BMRB export failure when the SpinContainer object has no s2 attribute. Now no data is stored in the BMRB file if a model-free model has not been set up for the spin. This allows the test suite to pass. * Bug fix to allow the Mf.test_bug_24131_missing_interaction system test to pass. 
This is part of bug #24131 (https://gna.org/bugs/?24131), the BMRB export failure with the SpinContainer object having no s2 value. The problem was when assembling the diffusion tensor data. The spin_loop() function was being called, as the diffusion tensor is reported for all residues. Therefore the skip_desel=True argument has been added to match the model-free part. |
From: Edward d'A. <ed...@do...> - 2015-10-14 15:01:34
|
This is a major feature release for a new analysis type labelled 'frame order'. The frame order theory aims to unify all rotational molecular physics data sources via a single mechanical model. It is a bridging physics theory for rigid body motions based on the statistical mechanical ordering of reference frames. The previous analysis of the same name was an early iteration of this theory that was, however, rudimentary and non-functional. Its current implementation is for analysing RDC and PCS data from an internal alignment to interpret domain or other rigid body motions within a molecule or molecular complex. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_4.0.0 . The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * The final, complete, and correct implementation of the frame order theory for studying rigid body motions. This is currently for analysing RDC and PCS data from internally aligned systems. Changes: * Deletion of the frame_order.average_position user function and all of the associated backend code. This user function allowed the user to specify five different types of displacement to the average moving domain position: a pure rotation, with no translation, about the pivot of the motion in the system; a rotation about the pivot of the motion of the system together with a translation; a pure translation with no rotation; a rotation about the centre of mass of the moving domain with no translation; a rotation about the centre of mass of the moving domain together with a translation. Now the last option will be the default and only option. 
This option is equivalent to applying the standard superimposition algorithm (the Kabsch algorithm) to a hypothetical structure at the real average position. The other four are due to the history of the development of the theory. These limit the usefulness of the theory and will only cause confusion. * Clean up of the frame order target function code. This matches the previous change of the deletion of the frame_order.average_position user function. The changes include the removal of the translation optimisation flag as this is now always performed, and the removal of the flag which causes the average domain rotation pivot point to match the motional pivot point as these are now permanently decoupled. * Alphabetical ordering of functions in the lib.frame_order.pseudo_ellipse module. * Eliminated all of the 'line' frame order models, as they are not implemented yet. This is just frontend code - the backend does not exist. * Updated the isotropic cone CaM frame order test model optimisation script. Due to all of the changes in the frame order analysis, the old script was no longer functional. * Created a script for the CaM frame order test models for finding the average domain position. As the rotation about a fixed pivot has been eliminated, the shift from 1J7P_1st_NH_rot.pdb to 1J7P_1st_NH.pdb has to be converted into a translation and rotation about the CoM. This script will be used to replace the pivot rotation Euler angles with the translation vector and CoM rotation Euler angles. However the structure.superimpose user function will need to be modified to handle both the standard centroid superimposition as well as a CoM superimposition. * Updated the CaM frame order test model superimposition script. The structure.superimpose user function is now correctly called. The output log file has been added to the repository as it contains the correct translation and Euler rotation information needed for the test models. 
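The Kabsch algorithm referenced above can be sketched in a few lines of numpy. This is an illustrative implementation, not relax's structure.superimpose code: given two centred coordinate sets, it finds the rotation that best superimposes one onto the other.

```python
import numpy as np

def kabsch(P, Q):
    """Minimal Kabsch sketch: the rotation R minimising ||R p_i - q_i||.

    P and Q are N x 3 coordinate arrays, already centred on their
    centroids. Illustrative only, not the relax implementation.
    """
    H = P.T @ Q                                # covariance of the two sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T                      # proper rotation matrix

# Rotate a centred point cloud and recover the rotation.
rng = np.random.default_rng(0)
P = rng.standard_normal((10, 3))
P -= P.mean(axis=0)
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T
R = kabsch(P, Q)
assert np.allclose(R, R_true)
```

In the frame order context described above, such a superimposition onto a hypothetical structure at the real average position yields the CoM rotation and translation parameters that replaced the old pivot-rotation options.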
* Parameter update for the isotropic cone CaM frame order test model optimisation script. The Euler angles for the rotation about the motional pivot have been replaced by the translation vector and Euler angle CoM rotation parameters. * Fix for a number of the frame order models which do not have parameter constraints. The linear_constraint() function was returning A, b = [], [] for these models, but these empty numpy arrays were causing the minfx library (https://gna.org/projects/minfx/) to fail. These values are now caught and the constraint algorithm turned off in the minimise() specific API method. * Increased the precision of all the data in the CaM frame order test data generation base script. These have all been converted from float16 to float64 numpy types. * Fix for the RDC error setting in the CaM frame order test data generation base script. The rdc_err data structure is located in the interatomic data containers, not the spin containers. * Modification of the structure loading part of the CaM frame order data generation base script. The structures are now only loaded if the DIST_PDB flag is set, as they are only used for generating the 3D distribution of structures. This saves a lot of time and computer memory. * Huge speedup of the CaM frame order test data generation base script. By using multidimensional numpy arrays to store the atomic positions and XH unit vectors of all spins, and performing the rotations on these structures using numpy.tensordot(), the calculations are now a factor of 10 faster. The progress meter had to be changed to show every 1000 rather than 100 iterations. The rotations of the positions and vectors are now performed sequentially, accidentally fixing a bug with the double motion models (i.e. the 'double rotor' model). * Modified the CaM frame order test data generation base script to conserve computer RAM. 
The XH vector and atomic position data structures for all N rotations are now of the numpy.float32 rather than numpy.float64 type. The main change is to calculate the averaged RDCs and averaged PCSs separately, deleting the N-sized data structures once the data files are written. * Complete redesign of the CaM frame order data generation base script for speed and memory savings. Although the rotated XH bond vector and atomic position code was very fast, the amount of memory needed to store these in the spin containers and interatomic data containers was huge when N > 1e6. The subsequent rdc.back_calc and pcs.back_calc user function calls would also take far too long. Therefore the base script has been redesigned. The _create_distribution() method has been split into four: _calculate_pcs(), _calculate_rdc(), _create_distribution(), and _pipe_setup(). The _pipe_setup() method is called first to set up the data pipe with all required data. Then the _calculate_rdc() and _calculate_pcs() methods are called, and finally _create_distribution() if the DIST_PDB flag is set. The calls to the rdc.back_calc and pcs.back_calc user functions have been eliminated. Instead the _calculate_rdc() and _calculate_pcs() methods calculate the averaged RDC and PCS themselves as numpy array structures. Rather than storing the huge rotated vectors and atomic positions data structures, the RDCs and PCSs are summed. These are then divided by self.N at the end to average the values. Compared to the old code, when N is set to 20 million the RAM usage drops from ~20 GB to ~65 MB. The total run time is also decreased on one system from a few days to a few hours (one to two orders of magnitude). * Changed the progress meter updating for the CaM frame order test data generation base script. The spinner was far too fast, updating every 5 increments, and is now updated every 250. And the total number is now only printed every 10,000 increments. 
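The numpy.tensordot() speedup described above replaces a per-vector Python loop with a single array contraction. A small illustration, with illustrative shapes and names rather than the actual data generation script:

```python
import numpy as np

# N unit-vector-like rows to rotate in one go.
N = 1000
rng = np.random.default_rng(1)
vectors = rng.random((N, 3))

# A rotation about the z-axis.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

# Loop version: one matrix-vector product per vector (slow in Python).
looped = np.array([R @ v for v in vectors])

# Vectorised version: contract the last axis of `vectors` with the
# second axis of R in a single C-level call.
rotated = np.tensordot(vectors, R, axes=([1], [1]))

assert np.allclose(looped, rotated)
```

Moving the loop from Python into a single numpy contraction is where the reported order-of-magnitude speedup for large N comes from.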
* Improvements to the progress meter for the CaM frame order test data generation base script. Commas are now printed between the thousands and the numbers are now right justified. * Large increase in accuracy of the RDC and PCS averaging. This is for the CaM frame order test data generation base script. By summing the RDCs and PCSs into 1D numpy.float128 arrays (for this, a 64-bit system is required), and then dividing by N at the end, the average value can be calculated with a much higher accuracy. As N becomes larger, the numerical averaging introduces greater and greater amounts of truncation artifacts, which this change alleviates. * Fix for the RDC and PCS averaging in the CaM frame order test data generation base script. For the double rotor model, or any multiple motional mode model, the averaging was incorrect. Instead of dividing by N, the values should be divided by N**M, where M is the number of motional modes. * Huge increase in precision for the CaM frame order free rotor model test data. The higher precision is because the number of structures in the distribution is now twenty million rather than one million, and the much higher precision numpy.float128 averaging of the updated data generation base script has been used. This data should allow for a much better estimate of the beta and gamma average domain position parameter values for the free rotor models which are affected by the collapse of the alpha parameter to zero. * Huge increase in precision for the CaM frame order double rotor model test data. The higher precision is because the number of structures in the distribution is now over twenty million (4500**2) rather than a quarter of a million (500**2). And the much higher precision numpy.float128 averaging of the updated data generation base script has been used. * Fix for the constraint deactivation in the frame order minimisation when no constraints are present. * Huge increase in precision for the CaM frame order rotor model test data. 
The higher precision is because the number of structures in the distribution is now 20 million rather than 166,666, and the numpy.float128 data averaging has been used. * Large increase in precision for the 2nd CaM frame order rotor model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1,000,001 and the numpy.float128 data averaging has been used. * Parameter update for the 2nd rotor CaM frame order test model optimisation script. The Euler angles for the rotation about the motional pivot have been replaced by the translation vector and Euler angle CoM rotation parameters. * Large increase in precision for the 2nd CaM frame order free rotor model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 999,999 and the numpy.float128 data averaging has been used. * Updated the CaM frame order test model superimposition script. The Ca2+ atoms are now deleted from the structures before superimposition so that the centroid matches that used in the frame order analysis. * The average domain rotation centroid is printed out when setting up the frame order target functions. This is to help the user understand what is happening in the analysis. * Faster clearing of numpy arrays in the lib.frame_order modules. The x[:] = 0.0 notation is now used to set all elements to zero, rather than nested looping over all dimensions. This however has a negligible effect on the test suite timings. * Large increase in precision for the CaM frame order pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. * Improved the value setting in the optimisation() method of the CaM frame order system tests. This is in the base script used by all scripts in test_suite/system_tests/scripts/frame_order/cam/. 
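The effect of the higher-precision numpy.float128 averaging described above can be seen with a toy example. This is not the relax script: numpy.longdouble is used here as the portable name for the extended-precision type (it maps to float128 on most 64-bit Unix platforms, as the release notes say a 64-bit system is required), and cumsum is used to force a strictly sequential float32 accumulation:

```python
import numpy as np

# Averaging a million identical float32 values of 0.1.
N = 1_000_000
data = np.full(N, 0.1, dtype=np.float32)

# Sequential float32 accumulation: as the running sum grows, each
# addition rounds at the float32 spacing, and the errors build up.
avg32 = float(np.cumsum(data, dtype=np.float32)[-1]) / N

# Extended-precision accumulation with a single division by N at the
# end, mirroring the summing-then-dividing scheme described above.
avg_ld = float(data.sum(dtype=np.longdouble)) / N

assert abs(avg_ld - 0.1) < abs(avg32 - 0.1)
```

With N in the tens of millions, as used for the test data distributions, the gap between the two accumulators is what the release notes call truncation artifacts in the numerical averaging.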
* Changed the average domain position parameter values in the CaM frame order system tests. This is in the base script used by all scripts in test_suite/system_tests/scripts/frame_order/cam/. The translation vector coordinates are now set, as well as the CoM Euler angle rotations. These come from the log file of the test_suite/shared_data/frame_order/cam/superimpose.py script, and are needed due to the simplification of the average domain position mechanics, which now mimic the Kabsch superimposition algorithm. * The CaM frame order system test mesg_opt_debug() method now prints out the translation vector. This is printed out at the end of all CaM frame order system tests to help with debugging when a test fails. * Change for how the CaM frame order system test scripts handle the average domain position rotation. The trick of pre-rotating the 3D coordinates, previously used to solve the {a, b, g} -> {0, b', g'} angle conversion problem in the rotor models, no longer works now that the average domain position mechanics have been simplified. Instead, high precision optimised b' and g' values are now set, and the ave_pos_alpha value set to None. The high precision parameters were obtained with the frame_order.py script located in the directory test_suite/shared_data/frame_order/cam/free_rotor. The free rotor target function was modified so that the translation vector is hard-coded to [-20.859750185691549, -2.450606987447843, -2.191854570352916] and the axis theta and phi angles to 0.96007997859534299767 and 4.0322755062196229403. These parameters were then commented out for the model in the module specific_analyses.frame_order.parameters so that only b' and g' were optimised. Iterative optimisation was used with increasing precision, ending up with high precision using 10,000 Sobol' points. * Updated a number of the CaM frame order system tests for the higher precision data. The new data results in chi-squared values at the real solution that are much closer to zero. 
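As background for the entries above, which note that the average domain position mechanics now mimic the Kabsch superimposition algorithm, a minimal SVD-based Kabsch sketch looks as follows. This is illustrative only, not relax's superimposition code:

```python
import numpy as np

def kabsch(mobile, ref):
    """Rotation R and translation t superimposing mobile onto ref (N x 3).

    A minimal sketch of the standard SVD-based Kabsch algorithm; shown for
    background only, not taken from relax's implementation.
    """
    cen_mob, cen_ref = mobile.mean(axis=0), ref.mean(axis=0)
    H = (mobile - cen_mob).T @ (ref - cen_ref)   # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cen_ref - R @ cen_mob                    # centroid translation
    return R, t

# Apply a known rotation about z and a known translation, then recover them.
rng = np.random.default_rng(0)
mobile = rng.normal(size=(10, 3))
angle = 0.3
R0 = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
t0 = np.array([1.0, 2.0, 3.0])
ref = mobile @ R0.T + t0
R, t = kabsch(mobile, ref)
print(np.allclose(R, R0), np.allclose(t, t0))   # True True
```

The recovered rotation and translation pair corresponds to the CoM Euler angle rotations and translation vector set in the system test base script.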
* Change for how the CaM frame order free-rotor pseudo-ellipse test script handles the average position. * Added FIXME comments to the 2nd free-rotor CaM frame order model system test scripts. These explain the steps required to obtain the correct b' and g' average domain position rotation angles. * Large increase in precision for the CaM frame order isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. * Large increase in precision for the CaM frame order free-rotor, isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. * Updated the CaM frame order free-rotor model test data set for testing for missing data. This is the data in test_suite/shared_data/frame_order/cam/free_rotor_missing_data. To simplify the copying of data from test_suite/shared_data/frame_order/cam/free_rotor and then the deletion of data, the missing.py script was created to automate the process. The generate_distribution.py script and some of the files it creates were removed from the repository so it is clearer how the data has been created. * Large increase in precision for the 2nd CaM frame order free-rotor, isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. * Large increase in precision for the CaM frame order free-rotor, pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. * Large increase in precision for the CaM frame order pseudo-ellipse model test data set. 
The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. * Updated a number of the CaM frame order system tests for the higher precision data. The new data results in chi-squared values at the real solution that are much closer to zero. The free-rotor pseudo-ellipse models might however need investigation, as the chi-squared values have increased. * Elimination of the error_flag variable from the frame order analysis. This flag was used to activate some old code paths which have now been deleted, as they were never used. * Optimisation of the average domain position for the CaM frame order free-rotor models. The log file that shows the optimisation of the average domain position for the free-rotor models has been added to the repository for reference. This is for the simple free-rotor model, but the optimised position holds for the isotropic cone and pseudo-ellipse model data too. To perform the optimisation, the axis_theta and axis_phi parameters were removed from the model and hardcoded into the target function. As the rotor axis is known, this allows the average domain position to be optimised in isolation. Visual inspection of the results confirmed the position to be correct. * Fixes for the 2nd frame order free-rotor system tests. The average domain position parameters are now set to the correct values, matching those in the relax log file frame_order_ave_pos_opt.log in test_suite/shared_data/frame_order/cam/free_rotor2. * Updated the 2nd CaM free-rotor frame order system tests for the correct average domain position. The chi-squared values are now significantly lower. * Increased the precision of the chi-squared value testing in the CaM frame order system tests. The check_chi2 method has been modified so that the chi-squared value is no longer scaled, and the precision has been increased from 1 significant figure to 4. 
All of the tests have been updated to match. * The minimisation verbosity flag now affects the frame order RelaxWarning about turning constraints off. * Performed a frame order analysis on the 2nd CaM free-rotor model test data. This is to check that everything is operating as expected. * Small speedup for the frame order target functions for most models. The rotation matrix corresponding to each Sobol' point for the numerical integration is now pre-calculated during target function initialisation rather than once for each function call. * Updates for some of the frame order system tests for the rotation matrix pre-calculation change. As the rotation matrix is being pre-calculated, one consequence is that the Sobol' angles are now full 64-bit precision rather than 32-bit. This changes the chi-squared values slightly, requiring updates to the tests. * Performed a frame order analysis on the CaM free-rotor model test data set. This is to demonstrate that everything is operating correctly. * Performed a frame order analysis on the CaM free-rotor model test data set with missing data. This is to demonstrate that everything is operating correctly. * Attempt to speed up the pseudo-elliptic frame order models. The quasi-random numerical integration of the PCS for the pseudo-ellipse has been modified so that the torsion angle check for each Sobol' point is performed before the tmax_pseudo_ellipse() function call. A new check that the tilt angle is less than cone_theta_y, the larger of the two cone angles, has also been added to avoid calling tmax_pseudo_ellipse() when the theta tilt angle is outside of an isotropic cone defined by cone_theta_y. * Performed a frame order analysis on a number of the CaM test data sets. This includes the rotor, isotropic cone, and pseudo-ellipse models, and the analyses demonstrate a bug common to all these models. * Performed a frame order analysis on the rigid CaM test data set. This is to demonstrate that everything is operating correctly. 
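The early-rejection idea described above for the pseudo-elliptic models can be sketched as follows. This is a hedged illustration only: tmax_pseudo_ellipse() is the expensive relax call being avoided, the angle limits are made-up values, and a plain pseudo-random generator stands in for the Sobol' sequence:

```python
import numpy as np

# Cheap per-point checks before any expensive per-point function call:
# discard integration points whose torsion angle is out of bounds, or whose
# tilt angle already lies outside an isotropic cone defined by the larger
# pseudo-ellipse cone angle cone_theta_y.
rng = np.random.default_rng(42)
n = 10000
theta = np.arccos(rng.uniform(-1.0, 1.0, n))   # tilt angle, sphere-uniform
sigma = rng.uniform(-np.pi, np.pi, n)          # torsion angle of each point

sigma_max = 0.5     # torsion half-angle limit (illustrative value)
cone_theta_y = 0.8  # larger of the two cone angles (illustrative value)

keep = (np.abs(sigma) <= sigma_max) & (theta <= cone_theta_y)
print(keep.sum(), "of", n, "points pass the cheap checks")
```

Only the surviving points would then be passed to the full per-point machinery, which is where the speedup comes from.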
* Optimisation of the rotor model to the rigid CaM frame order test data. The optimisation script and all results files have been added to the repository. * Increased the grid search bounds for the frame order average domain translation. Instead of a 10 Angstrom box centred at {0, 0, 0}, the translation search now covers a 100 Angstrom box. * Proper edge case handling and slight speedup of the frame order PCS integration functions. The case whereby no Sobol' points in the numerical integration lie within the motional distribution is now caught and the rotation matrix set to the motional eigenframe to simulate the rigid state. As the code for averaging the PCS was changed, it was also simplified by removing an unnecessary loop over all spins. This should speed up the PCS integration by a tiny amount. * Created a new CaM frame order test data set. This is for the rotor model with a very small torsion angle of 1 degree, and will be used as a comparison to the rigid model and for testing the performance of the rotor model for an edge case. * Updated the frame order representations in all of the frame_order.py scripts for the CaM test data. All PDB files are now gzipped to save space, the old pymol.cone_pdb user function calls have been replaced with pymol.frame_order, and an average domain PDB file for the exact solution is now created in all cases. * The minimisation constraints are now turned on for all CaM test data frame_order.py optimisation scripts. * Updated the rotor CaM test data frame_order.py script for the parameter reduction. The rotor axis {theta, phi} polar angles have been replaced by the single axis alpha angle. This now matches the script for the 2nd rotor model. * Updated the parameters in all of the frame_order.py scripts for the CaM test data. The parameters are now specified at the top of the script as variables. 
All scripts now handle the change to the translation + CoM rotation for the average domain position rather than having a pure rotation about a fixed pivot, which is no longer supported. * The frame_order.num_int_pts user function now throws a RelaxWarning if not enough points are used. * Changed the creation of Sobol' points for numerical integration in the frame order target functions. The points are now all created at once using the i4_sobol_generate() rather than i4_sobol() function from the extern.sobol.sobol_lib module. * Increased the number of integration points from 50 or 100 to 5000. This is for all CaM frame_order.py test data optimisation scripts. The higher number of points is essential for optimising the frame order models and hence for checking the relax implementation. * Updated the frame_order.py optimisation script for the small angle CaM rotor frame order test data. This now has the correct rotor torsion angle of 1 degree, and the spherical coordinates are now converted to the axis alpha parameter. * Expanded the capabilities of the pymol.frame_order user function. The isotropic and pseudo-elliptic cones are now represented as they used to be under the pymol.cone_pdb user function. To avoid code duplication, the new represent_cone_axis(), represent_cone_object() and represent_rotor_object() functions have been created to send the commands into PyMOL. * Increased the precision of all of the CaM frame order system tests by 40 times. The number of Sobol' integration points has been significantly increased while only increasing the frame order system test timings by ~10%. This allows chi-squared values much closer to zero to be checked at the minima, and is much better for demonstrating bugs. * Optimisation constraints are no longer turned off in the frame order auto-analysis. Constraints are now supported by all frame order models, or automatically turned off for those which do not have parameter constraints. 
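Several entries above and below replace the rotor axis {theta, phi} spherical angles with the single axis alpha parameter. As a reminder of the older convention, standard spherical polar angles map to a unit axis vector as follows (a generic sketch, not relax's exact conversion code; the angle values are those quoted earlier for the free rotor test data):

```python
import numpy as np

def axis_from_spherical(theta, phi):
    """Unit axis vector from polar angle theta and azimuthal angle phi.

    Standard spherical coordinates, shown as background for the older
    {theta, phi} rotor axis parameterisation.
    """
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

axis = axis_from_spherical(0.96007997859534299767, 4.0322755062196229403)
print(np.linalg.norm(axis))   # a unit vector
```

The reduced parameterisation instead fixes the axis via the pivot point, the CoM and a single alpha angle, removing one redundant degree of freedom.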
* Fix for the frame order visualisation script created by the auto-analysis. The call to pymol.frame_order is now correct for the current version of this user function. * Removed a terrible hack for handling the frame order analysis without constraints. This is no longer needed as the log-barrier method is now used to constrain the optimisation, so that the torsion angle can no longer be negative. * Constraints are now implemented in the frame order grid search. This is useful for the pseudo-elliptic models as the cone theta_x < theta_y constraint halves the optimisation space. * Expanded the CaM rotor test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, while the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation are included. * Expanded the CaM pseudo-ellipse test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, while the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation are included. * Added one more iteration for the zooming optimisation of the frame order auto-analysis. This is to improve the speed of optimisation when all RDC and PCS data is being used. The previous iterations were with [100, 1000, 200000] Sobol' integration points and [1e-2, 1e-3, 1e-4] function tolerances. This has been changed to [100, 1000, 10000, 100000] and [1e-2, 1e-3, 5e-3, 1e-4]. 
The final number of points has been decreased as that level of accuracy does not appear to be necessary. These are also only default values that the user can change for themselves. * Updated the CaM frame order data generation base script to print out more information. This is for the first axis system so that the same amount of information as for the second system is printed. * Expanded the CaM isotropic cone test data frame_order.py optimisation script and added the results. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, while the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. * Important fix for the 2nd rotor model of the CaM frame order test data. The tilt angle was not set, and therefore the old data matched the non-tilted 1st rotor model. All PCS and RDC data has been regenerated to the highest quality using 20,000,000 structures. * Updated the 3 Frame_order.test_cam_rotor2* system tests for the higher quality data. * Expanded the 2nd CaM pseudo-ellipse test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, while the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation have been added to the repository. * Expanded the CaM free-rotor isotropic cone test data frame_order.py optimisation script. 
The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, while the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation have been added to the repository. * Expanded all remaining CaM test data frame_order.py optimisation scripts. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, while the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. * Updated the CaM 2-site to rotor model frame_order.py optimisation script for the parameter reduction. The rotor frame order model axis spherical angles have now been converted to a single alpha angle. * Fix for a number of the frame order models which do not have parameter constraints. This change to the grid_search() API method is similar to the previous fix for the minimise() method. The linear_constraint() function was returning A, b = [], [] for these models, but these empty numpy arrays were causing the dot product with A to fail in the grid_search() API method. These values are now caught and the constraint algorithm turned off. * Converted the 'free rotor' frame order model to the new axis_alpha parameter system. The axis_theta and axis_phi spherical coordinates are converted to the new reduced parameter set defined by a random point in space (the CoM of all atoms), the pivot point, and a single angle alpha. The alpha parameter defines the rotor axis angle from the xy-plane. 
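The empty-constraint guard described above can be sketched as follows. The function name here is illustrative, not relax's API; the point is simply that empty A and b arrays are caught before any dot product is attempted:

```python
import numpy as np

# Sketch of the guard: if a model supplies no linear constraints, the
# constraint setup returns A, b = [], [], and a dot product with the empty A
# would fail, so the constraint algorithm is simply switched off.
def check_constraints(A, b):
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    if A.size == 0 or b.size == 0:
        return None, None, False   # no constraints - algorithm disabled
    return A, b, True

_, _, active = check_constraints([], [])
print(active)   # False - models without constraints skip the algorithm

# e.g. the theta_x <= theta_y constraint expressed as A.x >= b.
A, b, active = check_constraints([[-1.0, 1.0]], [0.0])
print(active)   # True
```

The same A.x >= b convention is what halves the optimisation space for the pseudo-elliptic models in the grid search.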
* Parameter conversion for all of the CaM free rotor test data frame_order.py optimisation scripts. The rotor axis spherical angles have been replaced by the axis alpha angle defining the rotor with respect to the xy-plane. * Modified the CaM frame order base system test script to catch a bug in the free rotor model. The axis spherical angles are no longer set for the rotor or free rotor models, as they use the alpha angle instead, and the lack of the theta and phi parameters triggers the bug. The PDB representation of the frame order motions is also now tested for all frame order models, as it was turned off for the rigid, rotor and free rotor models and this is where the bug lies. * Fix for the failure of the frame_order.pdb_model user function for the free rotor frame order model. This is due to the recent parameter conversion to the axis alpha angle. * Eliminated the average position alpha Euler angle parameter from the free-rotor pseudo-ellipse model. As this frame order model is a free rotor, the average domain position is undefined, as the domain can freely rotate about the rotor axis. One of the Euler angles for rotating to the average position can therefore be removed, just as in the free rotor and free rotor isotropic cone models. * Eliminated the free rotor pseudo-ellipse model ave_pos_alpha parameter from the target function. The average domain position alpha Euler angle has already been removed from the specific analyses code and this change brings the target function into line with those changes. * Added the full optimisation results for the 2nd rotor frame order model for the CaM test data. This is from the new frame_order.py optimisation script and the results demonstrate the stability of the rotor model. * Added the full optimisation results for the small angle rotor CaM frame order test data. 
This is from the new frame_order.py optimisation script and the results demonstrate the stability of the rotor model, even when the rotor is as small as 1 degree. * Fix for the free rotor PDB representation created by the frame_order.pdb_model user function. The simulation axes were being incorrectly generated from the theta and phi angles, which no longer exist as they have been replaced by the alpha angle. * Added the full optimisation results for the free rotor pseudo-ellipse frame order model. This is for the CaM test data using the new frame_order.py optimisation script. * Added the full optimisation results for the rotor frame order model. This is for the 2-site CaM test data using the new frame_order.py optimisation script. * The CaM frame order data generation base script now uses lib.compat.norm(). This is to allow the test suite to pass on systems with old numpy versions whereby the numpy.linalg.norm() function does not support the new axis argument. * Modified the pymol.cone_pdb and pymol.frame_order user functions to use PyMOL IDs. The PyMOL IDs are used to select individual objects in PyMOL rather than all objects so that the subsequent PyMOL commands will only be applied to that object. This allows for multiple objects to be handled simultaneously. * Added the full optimisation results for the free rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script. * Added the full optimisation results for the 2nd free rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script. * Added the full optimisation results for the free rotor frame order model with missing data. This is for the CaM test data using the new frame_order.py optimisation script. * Added a script for recreating the frame order PDB representation and displaying it in PyMOL. This is for the optimised results. * Fixes for the rotor object created by the frame_order.pdb_model user function. 
The rotor is now also shown for the free rotor pseudo-ellipse, despite it being a useless model, and the propeller blades are no longer staggered for all the free rotor models so that two circles are no longer produced. * Updated the free rotor and 2nd free rotor PDB representations using the represent_frame_order.py script. This is for the CaM frame order test data. * Reparameterisation of the double rotor frame order model. The two axes defined by spherical angles have been replaced by a full eigenframe and the second pivot has been replaced by a single displacement along the z-axis of the eigenframe. * Removed the 2nd pivot point infrastructure from the frame order analysis. The 2nd pivot is now defined via the pivot_disp parameter. * Added the 2nd rotor axis torsion angle to the list of frame order parameters. This is for the double rotor model. * Comment fixes for the eigenframe reconstruction in the frame order target functions. * Converted the double rotor frame order model target function to use the new parameterisation. * Fix for the PDB representation generated by frame_order.pdb_model for the free rotor pseudo-ellipse. * Fix for the Frame_order.test_rigid_data_to_free_rotor_model system test. As the free rotor has undergone a reparameterisation, the chi-squared value is now higher. The value is reasonable as the free rotor can never model the rigid system. * Removed the structure loading and transformation from the CaM frame order system tests. This was mimicking the old behaviour of the auto-analysis. However as that behaviour has been shifted into the backend of the frame_order.pdb_model user function, which is called by these system tests as well, the code is now redundant and is wasting test suite time. * Removed the setting of the second pivot point in the CaM frame order system tests. The second pivot point has been removed from the double rotor frame order model to eliminate parameter redundancy, so no models now have a conventional second pivot. 
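The double rotor reparameterisation above, in which the second pivot is replaced by a single displacement along the z-axis of the motional eigenframe, can be sketched as follows. This is an illustration only: the function name and the sign/direction convention are assumptions, not taken from relax's code:

```python
import numpy as np

# Sketch of the reparameterisation: instead of two free pivot points, one
# pivot plus a single scalar displacement along the z-axis of the motional
# eigenframe defines the other pivot, removing parameter redundancy.
def displaced_pivot(pivot, eigenframe, pivot_disp):
    # The eigenframe columns are the frame axes; the third column is z.
    return pivot + pivot_disp * eigenframe[:, 2]

eigenframe = np.eye(3)   # trivial frame, purely for the demonstration
other_pivot = displaced_pivot(np.array([1.0, 2.0, 3.0]), eigenframe, 10.0)
print(other_pivot)
```

With a non-trivial eigenframe the displacement tilts accordingly, which is why the two spherical-angle axes could be collapsed into one full eigenframe plus one distance.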
* Modified the CaM frame order system test base script to test alternative code paths. The pivot point was fixed in all tests, so the code in the target functions behind the pivot_opt flag was not being tested. Now, for those system tests where the calc rather than the minimise user function is called, the pivot is no longer fixed, so that this code is executed. * Simplification and clean up of the RDC and PCS flags in the frame order target functions. The per-alignment flags have been removed and replaced by a global flag for all data. This accidentally fixes a bug when only RDCs are present, as the calc_vectors() method was being called when it should not have been. * Speedup and simplifications for the vector calculations used for the PCS numerical integration. This has a minimal effect on the total speed as the target function calc_vectors() method is not the major bottleneck - the slowest part is the quasi-random numerical integration. However the changes may be useful for speeding up the integration later on. The 3D pivot point, average domain rotation pivot, and paramagnetic centre position arrays are now converted into rank-2 arrays in __init__() where the first dimension corresponds to the spin. Each element is a copy of the 3D array. These are then used for the calculation of the pivot to atom vectors, eliminating the looping over spins. The numpy add() and subtract() ufuncs are used together with the out argument for speed and to avoid temporary data structure creation and deletion. The end result is that the calculated vector structure is transposed, so that the first dimension corresponds to the spins. The changes required minor updates to a number of system tests. The target functions themselves had to be modified so that the pivot is converted to the larger structure when optimised, or aliased. * Added a script for timing different ways to calculate PCSs and RDCs for multiple vectors. 
This uses the timeit module rather than profile to demonstrate the speed of 7 different ways to calculate the RDCs or PCSs for an array of vectors using numpy. In the frame order analysis, this is the bottleneck for the quasi-random numerical integration of the PCS. The log file shows a potential one order of magnitude speedup between the 1st technique, which is currently used in the frame order analysis, and the 7th and last technique. The first technique loops over each vector, calculating the PCS. The last expands the PCS/RDC equation of the projection of the vector into the alignment tensor, and calculates all PCSs simultaneously. * Added another timing script for RDC and PCS calculation timings. This time, the calculation for multiple alignments is being timed. An additional set of methods for calculating the values via tensor projections has been added. For 5 alignments and 200 vectors, this demonstrates a potential 20x speedup for this part of the RDC/PCS calculation. Most of this speedup should be obtainable for the numerical PCS integration in the frame order models. * Small speedup for all of the frame order models. The PCS averaging in the quasi-random numerical integration functions now uses the multiply() and divide() numpy methods to eliminate a loop over the alignments. For this, a new dimension over the spins was added to the PCS constant calculated in the target function __init__() method. In one test of the pseudo-ellipse, the time dropped from 191 seconds to 172. * Added another timing script for helping with speeding up the frame order analysis. This is for the part where the rotation matrix for each Sobol' integration point is shifted into the eigenframe. * Python 3 fix for the CaM frame order system test base script. * Added the full optimisation results for the torsionless isotropic cone frame order model. This is for the CaM test data using the new frame_order.py optimisation script. 
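The looped versus vectorised projection being timed above can be sketched in generic numpy as follows. This is a minimal illustration of the technique, not the timed relax script; the tensor and vectors are random stand-ins:

```python
import numpy as np

# Project every unit vector through a symmetric, traceless 3x3 tensor at
# once: out[i] = v_i . A . v_i, replacing a Python loop over the vectors.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
A = (A + A.T) / 2.0                      # symmetrise
A -= (np.trace(A) / 3.0) * np.eye(3)     # make traceless, tensor-like
vectors = rng.normal(size=(200, 3))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

looped = np.array([v @ A @ v for v in vectors])                # technique 1
vectorised = np.einsum('ij,jk,ik->i', vectors, A, vectors)     # all at once
print(np.allclose(looped, vectorised))   # True
```

Stacking an extra alignment dimension onto A extends the same einsum to all alignments simultaneously, which is where the reported 20x speedup for 5 alignments and 200 vectors comes from.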
* Small speedups for all of the frame order models in the quasi-random numerical PCS integration. These changes result in an ~10% speedup. Testing via the func_pseudo_ellipse() target function using the relax profiling flag, the time for one optimisation decreased from 158 to 146 seconds. The changes consist of pre-calculating all rotations of the rotation matrix into the motional eigenframe in one mathematical operation rather than one operation per Sobol' point rotation, unpacking the Sobol' points into the respective angles prior to looping over the points, and taking the absolute value of the torsion angle and testing if it is out of the bounds rather than checking both the negative and positive values. * Attempt at speeding up the torsionless pseudo-ellipse frame order model. The check if the Sobol' point is outside of an isotropic cone defined by the largest angle theta_y is now performed to avoid many unnecessary calls to the tmax_pseudo_ellipse() function. This however reveals a problem with the test suite data for this model. * Updated all of the CaM frame order system tests for the recent speedup. The speedup switched to the use of numpy.tensordot() for shifting each Sobol' rotation into the eigenframe rather than the previous numpy.dot(). Strangely this affects the precision and hence the chi-squared value calculated for each system test - both increasing and decreasing it randomly. * The frame order target function calc_vectors() method arguments have all been converted to keywords. This is in preparation for handling a second pivot argument for the double rotor model. * Updated the double rotor frame order model to be in a pseudo-functional state. Bugs in the target function method have been removed, the calc_vectors() target function method now accepts the pivot2 argument (but does nothing with it yet), and the lib.frame_order.double_rotor module has been updated to match the logic used in all other lib.frame_order modules. 
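The batched eigenframe shift described above can be sketched as follows. einsum is used here for clarity; the relax change uses the equivalent numpy.tensordot() calls, and the random matrices are stand-ins for the per-point rotations:

```python
import numpy as np

# Shift a whole stack of per-point rotation matrices into the motional
# eigenframe in one operation, instead of one matrix product per Sobol'
# point: out_n = E.T @ R_n @ E for every n at once.
rng = np.random.default_rng(2)
eigenframe, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # an orthonormal frame
R = rng.normal(size=(50, 3, 3))                         # stand-in rotations

looped = np.array([eigenframe.T @ r @ eigenframe for r in R])
batched = np.einsum('ji,njk,kl->nil', eigenframe, R, eigenframe)
print(np.allclose(looped, batched))   # True
```

The two forms agree to floating-point precision, although, as noted above, the different operation ordering can nudge the last bits and hence the test suite chi-squared values in either direction.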
* The frame_order.pdb_model user function no longer tries to create a cone object for the double rotor. * Added a timeit script and log file for different ways of checking a binary numpy array. * Modified the rigid_test.py system test script to really be the rigid case. This is used in all of the Frame_order.test_rigid_data_to_*_model system tests. Previously the parameters of the dynamics were set to being close to zero, to catch the cases where a few Sobol' PCS integration points were accepted. But now the case where no Sobol' points can be used is being tested. This checks a code path currently untested in the test suite, demonstrating many failures. * Fix for the frame order matrix calculation for a pseudo-elliptic cone with angles of zero degrees. The lib.frame_order.pseudo_ellipse_torsionless.compile_2nd_matrix_pseudo_ellipse_torsionless() function has been changed to prevent a divide by zero failure. The surface area normalisation factor now defaults to 0.0. * Fixes for all PCS numeric integration for all frame order models in the rigid case. The exact PCS values for the rigid state are now correctly calculated when no Sobol' points lie within the motional model. The identity matrix is used to set the rotation to zero, and the PCS values are now multiplied by the constant. * Updates for the chi-squared value in all the Frame_order.test_rigid_data_to_*_model system tests. This is now much reduced as the true rigid state is now being tested for. * The rigid frame order matrix for the pseudo-ellipse models is now correctly handled. This allows the rigid case RDCs to be correctly calculated for both the pseudo-ellipse and torsionless pseudo-ellipse models. The previous catch of the theta_x cone angle of zero was incorrectly recreating the frame order matrix, which really should be the identity matrix. However truncation artifacts due to the SciPy quadrature integration still cause the model to be ill-conditioned near the rigid case. 
The rigid case is correctly handled, but a tiny shift of the parameters off zero causes a discontinuity. * Updates for the Frame_order.test_rigid_data_to_pseudo_ellipse*_model system tests. The chi-squared value now matches the rigid model. * Large increase in precision for the CaM frame order torsionless pseudo-ellipse model test data set. In addition, the theta_x and theta_y angles have also been swapped so that the new constraint of 0 <= theta_x <= theta_y <= pi built into the analysis is satisfied. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. The algorithm for finding suitable random domain positions within the motional limits has been changed as well by extracting the theta and phi tilt angles from the random rotation, dropping the torsion angle sigma, and reconstructing the rotation from just the tilt angles. This increases the speed of the data generation script by at least 5 orders of magnitude. * Changed the parameter values for the Frame_order.test_cam_pseudo_ellipse_torsionless* system tests. The theta_x and theta_y angles are now swapped. The chi-squared values are now also lower in the 3 system tests as the data is now of much higher precision. * Speedup for the frame order analyses when only one domain is aligned. When only one domain is aligned, the reverse Ln3+ to spin vectors for the PCS are no longer calculated. For most analyses, this should significantly reduce the number of mathematical operations required for the quasi-random Sobol' point numerical integration. * Support for the 3 vector system for double motions has been added to the frame order analysis. This is used for the quasi-random Sobol' numeric integration of the PCS. 
The lanthanide to atom vector is the sum of three parts: the 1st pivot to atom vector rotated by the 1st mode of motion; the 2nd pivot to 1st pivot vector rotated by the 2nd mode of motion (together with the rotated 1st pivot to atom vectors); and the lanthanide to second pivot vector. All these vectors are passed into the lib.frame_order.double_rotor.pcs_numeric_int_double_rotor() function, which passes them to the pcs_pivot_motion_double_rotor() function where they are rotated and reconstructed into the Ln3+ to atom vectors.
* Fully implemented the double rotor frame order model for PCS data. Sobol' quasi-random points for the numerical integration are now generated separately for both torsion angles, and two separate sets of rotation matrices for both angles for each Sobol' point are now pre-calculated in the create_sobol_data() target function method. The calc_vectors() target function method has also been modified as the lanthanide to pivot vector is to the second pivot in the double rotor model rather than the first. The target function itself has been fixed as the two pivots were mixed up - the 2nd pivot is optimised and the inter-pivot distance along the z-axis gives the position of the 1st pivot. For the lib.frame_order.double_rotor module, the second set of Sobol' point rotation matrices corresponding to sigma2, the rotation about the second pivot, is now passed into the pcs_numeric_int_double_rotor() function. These rotations are frame shifted into the eigenframe of the motion, and then correctly passed into pcs_pivot_motion_double_rotor(). The elimination of Sobol' points outside of the distribution has been fixed in the base pcs_numeric_int_double_rotor() function and now both torsion angles are being checked.
* Fix for the unpacking of the double rotor frame order parameters in the target function. This is for when the pivot point is being optimised.
* Created a new synthetic CaM data set for the double rotor frame order model.
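The three-part vector sum described above can be sketched with numpy. The rotation matrices R1 and R2 stand in for the two modes of motion, and the function name is a hypothetical stand-in, not the relax source.

```python
import numpy as np

def ln3_to_atom_vector(R1, R2, piv1_to_atom, piv2_to_piv1, ln_to_piv2):
    """Sketch of the double rotor vector reconstruction:
    part 1: the 1st pivot to atom vector, rotated by the 1st mode (R1);
    part 2: the 2nd pivot to 1st pivot vector plus part 1, rotated by
            the 2nd mode (R2);
    part 3: the unrotated lanthanide to 2nd pivot vector."""
    return R2 @ (piv2_to_piv1 + R1 @ piv1_to_atom) + ln_to_piv2
```

With both rotations set to the identity matrix, the result collapses to the plain vector sum from the lanthanide to the atom.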
This is the same as the test_suite/shared_data/frame_order/cam/double_rotor data except that the angles have been increased from 11.5 and 10.5 degrees to 85.0 and 55.0 for the two torsion angles. This is to help in debugging the double rotor model as the original test data is too close to the rigid state to notice certain issues.
* Corrected the printout from the CaM frame order data generation base script. The number of states used in the distribution of domain positions is now correctly reported for the models with multiple modes of motion.
* Created a frame order optimisation script for the CaM double rotor test suite data. This is the script used for testing the implementation; it will not be used in the test suite.
* Created the Frame_order.test_rigid_data_to_double_rotor_model system test. This shows that the double rotor model works perfectly when the domains of the molecule are rigid.
* Fix for the frame order target functions for when no PCS data is present. In this case, the self.pivot structure was being created as an empty array rather than a rank-2 array with dimensions 1 and 3. This was causing the rotor models to fail, as this pivot is used to recreate the rotation axis.
* Fix for the CaM double rotor frame order system tests. The torsion angle cone_sigma_max is a half angle, therefore the full angles from the data generation script are now halved in the system test script.
* Created 3 frame order system tests for the new large angle double rotor CaM synthetic data. These are the Frame_order.test_cam_double_rotor_large_angle, Frame_order.test_cam_double_rotor_large_angle_rdc, and Frame_order.test_cam_double_rotor_large_angle_pcs system tests.
* Added the full optimisation results for the torsionless pseudo-ellipse frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
* Added the full optimisation results for the 2nd free rotor isotropic cone frame order model.
This is for the CaM test data using the new frame_order.py optimisation script.
* Small fix for the large angle CaM double rotor frame order model synthetic test data. The way the rotation angle was calculated was slightly out due to integer truncation. The integers are now converted to floats in the generate_distribution.py script and all of the PCS and RDC data averaged over ~20 million states has been recalculated.
* Added proper support for the double rotor frame order models to the system test scripts. This is for the CaM synthetic data. The base script can now handle the current parameterisation of the double rotor model with a single pivot, an eigenframe, and the second pivot defined by a displacement along the z-axis. The scripts for the double_rotor and double_rotor_large_angle data sets have been changed to use this parameterisation as well.
* Attempt at implementing the 2nd degree frame order matrix for the double rotor model. This is required for the RDC.
* The second torsion angle is now printed out for the frame order system tests. This is in the system test class mesg_opt_debug() method and allows for better debugging of the double rotor models.
* Fix for the Frame_order.test_cam_double_rotor_large_angle* system tests. The system test script was pointing to the wrong data directory.
* The double rotor frame order system tests are no longer blacklisted.
* Updated the chi-squared values being checked for the double rotor frame order system tests.
* Shifted the frame order geometric representation functions into their own module. This is the new specific_analyses.frame_order.geometric module.
* The frame order geometric representation functions are no longer PDB specific. Instead the format argument is accepted. This will allow different formats to be supported in the future. Because of this change, all specific_analyses.frame_order.geometric.pdb_*() functions have been renamed to create_*().
* Created an auxiliary function for automatically generating the pivots of the frame order analysis. This is the new specific_analyses.frame_order.data.generate_pivot() function. It will generate the 1st or 2nd pivot, hence supporting both the single motion models and the double motion double rotor model.
* Shifted the rotor generation for the frame order geometric representation into its own function. This is the specific_analyses.frame_order.geometric.add_rotors() function which adds the rotors as new structures to a given internal structural object. The code has been extended to add support for the double rotor model.
* Fix for the pivots created by the specific_analyses.frame_order.data.generate_pivot() function. This is for the double rotor model where the 1st mode of motion is about the 2nd pivot, and the 2nd mode of motion about the 1st pivot.
* Fixes for the cone geometric representation in the internal structural object. The representation can now be created if the given MoleculeContainer object is empty.
* Refactored the frame order geometric motional representation code. The code of the specific_analyses.frame_order.geometric.create_geometric_rep() function has been spun out into 3 new functions: add_rotors(), add_axes(), and add_cones(). This is to better isolate the various elements to allow for better control. Each function now adds the atoms for its geometric representation to a separate molecule called 'axes' or 'cones'. The add_rotors() function does not create a molecule as the lib.structure.represent.rotor.rotor_pdb() function creates its own. As part of the refactorisation, the neg_cone flag has been eliminated.
* Renamed the residues of the rotor geometric object representation. The rotor axis atoms now belong to the RTX residue and the propeller blades to the RTB residue. The 'RT' at the start represents the rotor and this will allow all the geometric objects to be better isolated.
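The pivot generation described above can be sketched as follows, assuming the double rotor parameterisation stated earlier in this changelog: the optimised point is the 2nd pivot, and the 1st pivot lies an inter-pivot distance along the eigenframe z-axis. The signature below is hypothetical, not the actual generate_pivot() interface.

```python
import numpy as np

def generate_pivot(pivot_opt, eigenframe, pivot_disp, order=1):
    """Sketch: derive the requested pivot of the double rotor model.

    pivot_opt  - the optimised pivot point (this is the 2nd pivot).
    eigenframe - 3x3 matrix whose columns are the eigenframe axes.
    pivot_disp - the inter-pivot distance along the eigenframe z-axis.
    order      - 1 or 2, selecting which pivot to return.
    """
    pivot_opt = np.asarray(pivot_opt, dtype=float)
    if order == 2:
        # The optimised point is the 2nd pivot itself.
        return pivot_opt
    # The 1st pivot is displaced along the eigenframe z-axis (3rd column).
    return pivot_opt + pivot_disp * eigenframe[:, 2]
```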
* Improvements to the internal structural object _get_chemical_name() method. This now uses a translation table to convert the hetID or residue name into a description, for example as used in the PDB HETNAM records to give a human readable description of the residue inside the PDB file itself. The new rotor RTX and RTB residue names have been added to the table as well.
* Renaming of the residues of the cone geometric representation. The cone apex or centre is now the CNC residue, the cone axis is now CNX and the cone edge is now CNE. These used to be APX, AXE, and EDG respectively. The aim is to make these names 100% specific to the cone object so that they can be more easily selected for manipulating the representation and so that they are more easily identifiable. The internal structural object _get_chemical_name() function now returns a description for each of these. Note that the main cone object is still named CON.
* The motional pivots for the frame order models are now labelled in the geometric representation. The pivot points are now added as a new molecule called 'pivots' in the frame_order.pdb_model user function. The atoms all belong to the PIV residue. The pymol.frame_order user function now selects this residue, hides its atoms, and then shows the atom name 'Piv' as the label. For the double rotor model, the atom names 'Piv1' and 'Piv2' are used to differentiate the pivots.
* Renamed the lib.structure.represent.rotor.rotor_pdb() function to rotor(). This function is not PDB specific and it just creates a 3D structural representation of a rotor object.
* Added support for labels in the rotor geometric object for the internal structural object. The labels are created by the frame_order.pdb_model user function backend. For the double rotor model, these are 'x-ax' and 'y-ax'. For all other models, the label is 'z-ax'. The labels are then sent into the lib.structure.represent.rotor.rotor() function via the new label argument.
This function adds two new atoms to the rotor molecule which are 2 Angstrom outside of the rotor span and lying on the rotor axis. These then have their atom name set to the label. The residue name is set to the new RTL name which has been added to the internal structural object _get_chemical_name() method to describe the residue in the PDB file for the user. Finally the pymol.frame_order user function selects these atoms, hides them and then labels them using the atom name (x-ax, y-ax, or z-ax).
* Modified the rotor representation generated by the pymol.frame_order user function. This is to make the object less bulky.
* Redesign of the axis geometric representation for the frame order motions. This is now much more model dependent to avoid clashes with the rotor objects and other representations: for the torsionless isotropic cone, a single z-axis is created; for the double rotor, a single z-axis is produced connecting the two pivots, from pivot2 to pivot1; for the pseudo-ellipse and free rotor pseudo-ellipse, the x and y-axes are created; for the torsionless pseudo-ellipse, all three x, y and z-axes are created; for all other models, no axis system is produced as this has been made redundant by the rotor objects.
* Fixes for the cone geometric object created by the frame_order.pdb_model user function. This was broken by the code refactoring and now works again for the pseudo-ellipse models.
* Fix for the pymol.frame_order user function. The representation function for the rotor objects was hiding all parts of the representation, hence the pivot labels were being hidden. To fix this, the hiding of the geometric object now occurs in the base frame_order_geometric() function prior to setting up the representations for the various objects.
* Started to redesign the frame_order.pdb_model user function.
Instead of having the positive and negative representations in different PDB models, and the Monte Carlo simulations in different molecules, these will now all be shifted into separate files. For this to be possible, the file root rather than file names must now be supplied to the frame_order.pdb_model user function. To allow for different file compression, the compress_type argument is now used. The backend code correctly handles the file root change, but the multiple files are not created yet.
* Python 3 fixes using the 2to3 script. Fatal changes to the multi.processor module were reverted.
* Improvements to the lib.structure.represent.rotor.rotor() function for handling models. The 'rotor', 'rotor2', or 'rotor3' molecule name determination is now also model specific.
* The frame order generate_pivot() function can now return the pivots for Monte Carlo simulations. This is the specific_analyses.frame_order.data.generate_pivot() function. The sim_index argument has been added to the function which will allow the pivots from the Monte Carlo simulations to be returned. If the pivot was fixed, then the original pivot will be returned instead.
* Test suite fixes for the recent redesign of the frame_order.pdb_model user function.
* Fixes for the frame_order.pdb_model user function for the rotor and free rotor models.
* Redesign of the geometric object representation part of the frame_order.pdb_model user function. The positive and negative representations of the frame order motions have been separated out into two PDB files rather than being two models of one PDB file. This will help the user understand that there are two identical representations of the motions, as both will now be displayed rather than having to understand the model concept of PyMOL. The file root is taken, for example 'frame_order', and the files 'frame_order_pos.pdb' and 'frame_order_neg.pdb' are created.
If no inverse representation exists for the model, the file 'frame_order.pdb' will be created instead. The Monte Carlo simulations are now also treated differently. Rather than showing multiple vectors in the axes representation component within one molecule in the same file as the frame order representation, these are now in their own file and each simulation is now a different model. If an inverse representation is present, then the positive representation will go into the file 'frame_order_sim_pos.pdb', for example, and the negative representation into the file 'frame_order_sim_neg.pdb'. Otherwise the file 'frame_order_sim.pdb' will be created.
* Clean up of the frame_order.pdb_model user function definitions. Some elements were no longer of use, and some descriptions have be... [truncated message content]
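The file naming scheme described above can be sketched as a small helper. This is a hypothetical illustration of the logic only, not the actual frame_order.pdb_model backend.

```python
def pdb_file_names(file_root, inverse=False, sim=False):
    """Sketch: build the output PDB file names from a file root.

    inverse - whether an inverse (negative) representation exists,
              giving separate '_pos' and '_neg' files.
    sim     - whether the Monte Carlo simulation files are being
              written, adding the '_sim' suffix.
    """
    root = file_root + ('_sim' if sim else '')
    if inverse:
        return [root + '_pos.pdb', root + '_neg.pdb']
    return [root + '.pdb']
```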
From: Edward d'A. <ed...@do...> - 2015-10-01 14:43:30
This is a minor feature release with improvements to the automatic relaxation dispersion protocol for repeated CPMG data, support for Monte Carlo or Bootstrap simulation of the RDC and PCS Q factors, a huge speedup of Monte Carlo simulations in the N-state model analysis, and geometric mean and standard deviation functions added to the relax library. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.9. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
The full list of changes is:
Features:
* Improvements to the automatic relaxation dispersion protocol for repeated CPMG data.
* Support for Monte Carlo or Bootstrap simulation of the RDC and PCS Q factors.
* Huge speedup of Monte Carlo simulations in the N-state model analysis.
* Geometric mean and standard deviation functions added to the relax library.
Changes:
* Wrote a method to store parameter data and dispersion curves, for the protocol of repeated CPMG analysis. This is to prepare for analysis in other programs. The method loops through the data pipes and writes the data out. It then writes a bash script that will concatenate the data in a matrix array style, for reading and processing in other programs. Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
* Added the writing out of a collection script for chi2 and rate parameters. Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
* The collection bash script now removes spins which have not been fitted.
Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
* Fix for the use of " instead of ' in the bash script. Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
* Added an option to the minimise class function to perform Monte Carlo error analysis. Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
* Printout when minimising Monte Carlo simulations. Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
* Added an additional test to the system test Relax_disp.test_bug_23186_cluster_error_calc_dw() to prove that Bug #23619 is invalid. Bug #23619 (https://gna.org/bugs/index.php?23619): Stored chi2 sim values from Monte Carlo simulations does not equal normal chi2 values.
* Small fix for the shell script to collect data files, so that the program "column" is no longer used at the end. The line width becomes too large for column to handle. Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
* Added a unit test that triggers the bug. The test was added in test_delete_spin_all, and can be accessed with: relax -u _pipe_control.test_spin. Bug #23642 (https://gna.org/bugs/index.php?23642): When deleting all spins for a residue, an empty placeholder is where select=True.
* Added sample data and an analysis script that will eventually show that there is not much difference in the sample statistics used for comparing the output of two very similar datasets. This is a multiple comparison test with many t-tests at once, where the familywise error is controlled by the Holm method. Even if the values are close to equal, and within the standard deviation, this procedure will reject up to 20% of the null hypotheses. This is not deemed a suitable method.
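The Holm step-down procedure mentioned above, which controls the familywise error rate across many simultaneous t-tests, can be sketched as follows (a generic implementation of the method, not the relax analysis script):

```python
def holm_reject(p_values, alpha=0.05):
    """Holm-Bonferroni step-down: compare the k-th smallest p-value
    against alpha / (m - k) and stop at the first acceptance.
    Returns booleans (True = null hypothesis rejected) in the original
    order of the input p-values."""
    m = len(p_values)
    # Indices of the p-values sorted in ascending order.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # all larger p-values are also accepted
    return reject
```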
Bug #23644 (https://gna.org/bugs/?23644): monte_carlo.error_analysis() does not update the mean value/expectation value from simulations.
* Added Monte Carlo simulations to the N_state_model.test_absolute_T system test. This is to demonstrate a failure of the simulations in certain N-state model setups.
* Added a missing call to monte_carlo.initial_values in the N_state_model.test_absolute_T system test. This fixes the N_state_model.test_absolute_T system test, showing that there is not a problem with the Monte Carlo simulations.
* Added Monte Carlo and Bootstrap simulation support for the RDC and PCS Q factor calculations. The pipe_control.rdc.q_factors() and pipe_control.pcs.q_factors() functions have been modified to support Monte Carlo and Bootstrap simulations. The sim_index argument has been added to allow the Q factor for the given simulation number to be calculated. All of the Q factor data structures in the base data pipe now have *_sim equivalents for permanently storing the simulation values. For the simulation values, all the warnings have been silenced.
* Added simulation support for the RDC and PCS Q factors in the N-state model analysis. This is for both Monte Carlo and Bootstrap simulation. The simulation RDC and PCS values, as well as the simulation back calculated values, are now stored via the minimise_bc_data() function of specific_analyses.n_state_model.optimisation in the respective spin or interatomic data containers. The analysis specific API methods now send the sim_index value into minimise_bc_data(), as well as the pipe_control.rdc.q_factors() and pipe_control.pcs.q_factors() functions.
* Silenced a warning in the N-state model optimisation if the verbosity is set to zero. This removes a repetitive warning from the Monte Carlo or Bootstrap simulations.
* Huge speed up for the Monte Carlo simulations in the N-state model analyses. This speed up also applies to Bootstrap simulations and the frame order analysis.
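As a rough illustration of what a Q factor calculation of the kind handled by the q_factors() functions involves, the variant normalised by the sum of the squared data can be sketched as below. This is the standard textbook formulation, not the relax source.

```python
import numpy as np

def q_factor_norm2(observed, calculated):
    """Q factor normalised by the sum of the squared observed data:
    Q = sqrt( sum((obs - calc)^2) / sum(obs^2) )."""
    obs = np.asarray(observed, dtype=float)
    calc = np.asarray(calculated, dtype=float)
    return np.sqrt(np.sum((obs - calc)**2) / np.sum(obs**2))
```

A perfect back-calculation gives Q = 0, while back-calculated values of zero give Q = 1.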
The change affects the monte_carlo.initial_values user function. The alignment tensor _update_object() method was very inefficient when updating the Monte Carlo simulation data structures. For each simulation, each of the alignment tensor data structures was being updated for all simulations. Now only the current simulation is updated. This speeds up the user function by many orders of magnitude.
* Added functions for calculating the geometric mean and standard deviation to the relax library. These are the geometric_mean() and geometric_std() functions of the lib.statistics module. The implementation is designed to be fast, using numpy array arithmetic rather than Python loops.
* Created a simple unit test for the new lib.statistics.geometric_mean() function.
* Added a unit test for the new lib.statistics.geometric_std() function.
* Made a summarize function to compare results. Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
Bugfixes:
* Fix committed whereby an empty spin placeholder now has the select flag set to False. Bug #23642 (https://gna.org/bugs/index.php?23642): When deleting all spins for a residue, an empty placeholder is where select=True.
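The standard log-space formulation of such functions can be sketched with numpy array arithmetic, in the spirit of the geometric_mean() and geometric_std() additions above. This is an illustration only, not the actual lib.statistics source, and the ddof=1 (sample) choice for the standard deviation is an assumption.

```python
import numpy as np

def geometric_mean(values):
    """Geometric mean: exp of the arithmetic mean of the logs."""
    return np.exp(np.mean(np.log(values)))

def geometric_std(values):
    """Geometric standard deviation: exp of the standard deviation
    of the logs (ddof=1, i.e. the sample formulation, is assumed)."""
    return np.exp(np.std(np.log(values), ddof=1))
```

For example, the geometric mean of 2 and 8 is 4, and the geometric standard deviation of identical values is 1.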
From: Edward d'A. <ed...@do...> - 2015-04-02 14:05:48
This is a minor bugfix release which allows the relax GUI to be used on screens with the low resolution of 1024x768 pixels. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.8. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
The full list of changes is:
Features: N/A
Changes:
* Fix for the pipe_control.reset.reset() function when resetting the GUI in non-standard contexts. This is mainly for debugging scripts which simulate a GUI, where the GUI reset() method does not exist.
* Created a GUI memory management debugging script for the align_tensor.init user function. This repetitively calls the reset, pipe.create and align_tensor.init user functions, and opens the GUI element for setting alignment tensor elements (the Sequence window). The pympler muppy_log file shows no memory leaks for these user functions on Linux systems.
Bugfixes:
* Resized all fixed-size GUI wizards to fit on 1024x768 pixel displays. The problem was reported by Lora Picton in the thread starting at http://thread.gmane.org/gmane.science.nmr.relax.user/1813. Both the spin loading wizard of the spin viewer window and the relaxation data loading wizard currently used in the model-free analysis tab and BMRB export page were fixed. These both had the y-dimension set to 800 pixels, hence parts of the window would be out of view.
From: Edward d'A. <ed...@do...> - 2015-03-13 17:55:56
This is a major feature and bugfix release. New features include the statistics.aic and statistics.model user functions, plotting API advancements, huge speed ups for the assembly of atomic coordinates from a large number of structures, the sorting of sequence data in the internal structural object for better structural consistency, conversion of the structure.mean user function to the new pipe/model/molecule/atom_id design, and improvements to the rdc.copy and pcs.copy user functions. Bugs fixed include the incorrect pre-scanning of old scripts identifying the minimise.calculate user function as the old minimise user function, Python 3 fixes, and the failure in reading CSV files in the sequence.read user function. Many more features and bugfixes are listed below. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.7. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
The full list of changes is:
Features:
* Creation of the statistics.aic and statistics.model user functions for calculating and printing out different statistics.
* Addition of new infrastructure for future support for plotting data using Veusz (http://home.gna.org/veusz/).
* Huge speed up for the assembly of atomic coordinates from a large number of structures.
* Sequence data in the internal structural object can now be sorted for better structural consistency.
* The structure.read_pdb user function now skips water molecules, avoiding the creation of hundreds of new molecules when reading X-ray structures.
* Conversion of the structure.mean user function to the new pipes/models/molecules/atom_id design and the addition of the set_mol_name and set_model_num arguments to allow the mean structure to be stored alongside the other molecules.
* The monte_carlo.setup user function now raises a RelaxError if the number of simulations is less than 3, avoiding subsequent errors.
* Expanded the functionality of the rdc.copy and pcs.copy user functions, allowing for the operation on two data pipes with different spin sequences, skipping deselected spins and interatomic data containers, printing out all copied data for better feedback, and copying all alignment metadata.
* The sequence.attach_protons user function now lists all the newly created spins.
* Clarification of the RDC and PCS Q factors, with the printouts and XML file variable names modified to indicate if the normalisation is via the tensor size (2Da^2(4 + 3R)/5) or via the sum of the squared data, to allow for clearer RDC vs. PCS comparisons.
* Expansion of the align_tensor.copy user function to allow all tensors to be copied between different data pipes.
* Huge speed up for loading results and state files with Monte Carlo simulation alignment tensors.
* Improvements for the rdc.weight and pcs.weight user functions. The spin_id argument can now be set to None to allow all spins or interatomic data containers to be set.
* Improvements for the pcs.structural_noise user function. The check for the presence of PCS data for points to skip now includes checking for PCS values of None. And the output Grace file now also includes the spin ID string as a string or comment value which can be displayed in the plot when desired.
Changes:
* Created the N_state_model.test_statistics system test.
This system test will be used to implement the new statistics user function class consisting of the statistics.model and statistics.aic user functions for calculating and storing the [chi2, n, k] parameters and Akaike's Information Criterion statistic respectively.
* Added the structure.align user function to the renaming translation table. This is so that relax identifies structure.align user function calls in scripts and raises an error saying that the structure.superimpose user function should be used instead.
* Added the office-chart-pie set of Oxygen icons for use in the new statistics user function class.
* Created the empty statistics user function class. This adds the infrastructure for creating the statistics user functions.
* Small fix for the structure.add_model user function description.
* Created the frontend for the statistics.model user function.
* Created a wizard graphic for the statistics user functions. This is based on a number of Oxygen icons, as labelled in the SVG layer names.
* The statistics.model user function now uses the new statistics wizard graphic.
* Created the empty pipe_control.statistics module. This will be used for the backend of all of the statistics user functions.
* Fixes for the EPS versions of some Oxygen icons used in the relax manual. These are the actions.document-preview-archive and actions.office-chart-pie Oxygen icons used for the user function icons. The files were not created correctly in the Gimp. The export to EPS requires the width and height to both be set to 6 mm, and the X and Y offsets to zero. This allows the icon bounding boxes and sizes to match the other EPS icons.
* Implemented the backend of the statistics.model user function. The implementation heavily uses the specific analysis API, calling the calculate(), model_loop(), print_model_title(), model_statistics() and get_model_container() methods to do all of the work. The last of these API methods is yet to be implemented.
* Fix for the statistics.model user function backend. The API methods are now called with the model_info argument set as a keyword argument so that it is always passed in as the correct argument.
* Fix for the specific analysis API _print_model_title_global() common method. This method was horribly broken, as it was never used. The new statistics.model user function together with the N-state model uncovered this breakage.
* Defined the get_model_container() specific analysis API method. This base method raises a RelaxImplementError, therefore each analysis type must implement its own method (or use an API common method).
* Implemented the specific analysis API _get_model_container_cdp() common method. This is to be used as the get_model_container() method for returning the current data pipe object as the model container. This is for the global models where the model information is stored in the pipe object rather than in spin containers.
* The N-state model now uses the _get_model_container_cdp() method. This is aliased as the get_model_container() specific analysis API method.
* Fix for the N_state_model.test_statistics system test - the probabilities were missing from k.
* Expanded the printouts from the statistics.model user function to include the statistics.
* Updated the N-state model num_data_points() function to use more modern integer incrementation.
* Fix for the N_state_model.test_statistics system test. The deselected spins and interatomic data containers are now taken into account for the RDC and PCS data point counts.
* Implementation of the statistics.aic user function. This is very similar to the statistics.model user function - the code was copied and only slightly modified. The new user function will calculate the current chi-squared value per model, obtain the model statistics, calculate the AIC value per model, and store the AIC value, chi-squared value and number of parameters in the appropriate location for the model in the relax data store.
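For a chi-squared fit of the kind described above, the AIC value per model follows the conventional formulation for least-squares model selection; a minimal sketch (not the relax source):

```python
def aic(chi2, k):
    """Akaike's Information Criterion for a chi-squared fit:
    AIC = chi2 + 2k, where k is the number of model parameters.
    Lower AIC indicates a better trade-off between fit quality
    and model complexity."""
    return chi2 + 2.0 * k
```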
* Created the empty lib.plotting.veusz module for graphing using Veusz (http://home.gna.org/veusz/).
* Shifted the lib.software.grace module to lib.plotting.grace. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/7532 and http://thread.gmane.org/gmane.science.nmr.relax.devel/7536.
* Created XY-data functions for the plotting API of the relax library. These are currently copies of the heads of the lib.plotting.grace functions write_xy_data() and write_xy_header(). These lib.plotting.api functions (write_xy_data() and write_xy_header()) are set up to use the grace functions.
* Converted all of the Grace plotting in relax to use the plotting API of the relax library.
* Shifted the pipe_control.grace.write() function. This is now the format independent pipe_control.plotting.write_xy() function. The format argument has been added and this defaults to 'grace'. The grace.write user function has been updated to use the new backend.
* Updated the pcs.structural_noise user function to use the relax library plotting API.
* Fixes for the new pipe_control.plotting.write_xy() function. This includes missing imports which should have moved from pipe_control.grace, as well as shifting the axis_setup() function from the pipe_control.grace module into the pipe_control.plotting module.
* The rdc.corr_plot user function backend now uses the relax library plotting API. The write_xy_data() and write_xy_header() functions from lib.plotting.api are now used instead of the equivalent pipe_control.grace functions which no longer exist.
* More import fixes for the new pipe_control.plotting.write_xy() function.
* Fix for the backend of the relax_disp.plot_disp_curves user function. The lib.plotting.api functions write_xy_data() and write_xy_header() require the format argument.
* Updated the relative stereochemistry auto-analysis to use the relax library plotting API.
* Huge speed up for the assembly of atomic coordinates from a large number of structures.
The internal structural object validate_models() method was being called once for each structure when assembling the atomic coordinates. This resulted in the _translate() internal structural object method, which converts all input data to formatted strings, being called hundreds of millions of times. The problem was in lib.structure.internal.coordinates.assemble_atomic_coordinates(), in that the one_letter_codes() method, which calls validate_models(), was called for each molecule encountered. The solution was not to validate models in one_letter_codes().
* Huge speed up of the internal structural object validate_models() method. The string formatting to create pseudo-PDB records and the large number of calls to the _translate() method for atomic information string formatting have been shifted to only occur when the atomic information does not match. Instead the structural information is directly compared within a large if-else statement.
* Created the Structure.test_atomic_fluctuations_no_match system test. This demonstrates a failure in the operation of the structure.atomic_fluctuations user function when the supplied atom ID matches no atoms.
* Fix for the Structure.test_atomic_fluctuations_no_match system test. The structure.atomic_fluctuations user function will now raise a RelaxError when no data corresponding to the atom ID can be found, so the test now checks for this.
* Created the unit test infrastructure for the lib.structure.internal.object module.
* Created the Test_object.test_add_atom_sort unit test. This is from the _lib._structure._internal.test_object unit test module. The test will be used to implement the sorting of input data by residue number in the add_atom() internal structural object method. This will mean that added atoms will be placed in residue sequence order, so that output PDB files are correctly ordered.
* Implementation of methods for sorting sequence data in the internal structural object. 
The information is sorted at the molecule container level using the new MolContainer._sort() private method. This uses the _sort_key() helper method, which determines what the new order should be and is used as the 'key' argument for the Python sort() method. Instead of list shuffling, new lists in the correct order are created. Although not memory efficient, this might be faster than shuffling.
* The loading of structural data now sorts the data if the merge flag is True. The pack_structs() method will now call the new MolContainer._sort() method if the data is being merged. This is to ensure that the final structural data is correctly ordered.
* Fixes for a number of Structure system tests for the sorted structural data changes.
* Modified the structure.read_pdb user function backend to skip water molecules. All residues with the name 'HOH' are now skipped when loading PDB files. This is implemented in the MolContainer.fill_object_from_pdb() method, and a RelaxWarning is printed listing the residue numbers of all skipped waters.
* Modified the Structure.test_read_pdb_1UBQ system test for the new water skipping feature. As the structure.read_pdb user function will now skip waters, the last atom in the structural object will now be the last ubiquitin atom and not the last water atom.
* Modified the Test_object.test_add_atom_sort unit test to check atom connectivities. This is from the _lib._structure._internal.test_object unit test module. The problem is that the MolContainer._sort() method for sorting the structural data currently does not correctly update the bonded data structure.
* Completed the implementation of the sorting of structural data in the internal structural object. The MolContainer._sort() private method now changes the connect atom indices in the bonded data structure to the new sorted indices.
* Created new system tests for implementing new functionality for the structure.mean user function. 
This includes the Structure.test_mean_models and Structure.test_mean_molecules system tests. The idea is to convert the user function to the new pipes/models/molecules/atom_id design. This will allow molecules with non-identical sequences and atomic compositions to be averaged. The set_mol_name and set_model_num arguments from the structure.read_pdb, structure.read_gaussian, and structure.read_xyz user functions will also be implemented to allow the mean structure to be stored alongside the other molecules.
* Some fixes for the checks in the Structure.test_mean_molecules system test.
* Fix for the structure.mean user function call in the Structure.test_mean_models system test.
* Expanded the checking in all the Structure.test_mean* system tests to cover all atomic information. This includes the Structure.test_mean, Structure.test_mean_models, and Structure.test_mean_molecules system tests. All structural data is now carefully checked to make sure that the structure.mean user function operates correctly.
* Converted the structure.mean user function to the new pipe/model/molecule/atom_id design. This allows the average structure calculation to work on atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend uses the new pipe_control.structure.main.assemble_structural_coordinates() function to assemble the common atom coordinates, molecule names, residue names, residue numbers, atom names and elements. All this information is then used to construct a new molecule container for storing the average structure in the internal structural object. To allow the averaged structural data to be stored, the internal structural object method add_coordinates() has been created. This is modelled on the PDB, Gaussian, and XYZ format loading methods. 
The internal structural object mean() method is no longer used, but remains for anyone who might have an interest in the future (though as it is untested, bit-rot will be a problem).
* Small correction for the structure.read_pdb user function description.
* Created the Structure.test_read_merge_simultaneous system test. This is to demonstrate a failure in the structure.read_pdb user function when merging multiple molecules from one file into one molecule simultaneously with a single user function call.
* Added some error checking for the monte_carlo.setup user function. A RelaxError is now raised if the number of simulations is less than 3. This prevents Python errors when later calling the monte_carlo.error_analysis user function.
* Test suite fixes for the error checking in the monte_carlo.setup user function. The number of simulations has been increased from either 1 or 2 in all tests to the minimal number of simulations (3).
* Created the Structure.test_bug_23293_missing_hetatm system test. This is to catch bug #23293 (https://gna.org/bugs/?23293), the PDB HETATM loading error whereby the last HETATM record is sometimes not read from the PDB file.
* Small fix for the chain IDs in the Structure.test_bug_23293_missing_hetatm system test.
* Created the Structure.test_multi_model_and_multi_molecule system test. This is used to check the loading and writing of a multi-model and multi-molecule PDB file. The test shows that this functions correctly.
* Modified the Structure.test_multi_model_and_multi_molecule test to check for model consistency. This is just for better test suite coverage of the handling of PDB structural data.
* Created the Structure.test_bug_23294_multi_mol_automerge system test. This is used to catch bug #23294 (https://gna.org/bugs/?23294), the automatic merging of PDB molecules resulting in an IndexError. It reads in the 'in.pdb' PDB file attached to the bug report, now named 'bug_23294_multi_mol_automerge.pdb', to show the IndexError. 
The test also checks the structure.write_pdb user function to make sure that the output PDB file contains a single merged molecule.
* Added the PDB file to the repository for the Structure.test_bug_23294_multi_mol_automerge system test.
* Fix for the Structure.test_bug_23294_multi_mol_automerge system test. The MASTER PDB record has been added to the data to check for, as this will be produced by the structure.write_pdb user function.
* Improved the RelaxWarning for missing atom numbers in the PDB CONECT records. This is for the structure.read_pdb user function. Now only one warning is given for the entire PDB file, listing all of the missing atom numbers, rather than one warning per missing atom. This significantly compacts the warnings, removing a lot of repetition.
* Improved the quality of the printouts from the structure.read_pdb user function. This also affects the structure.read_gaussian and structure.read_xyz user functions. The messages about adding new molecules or merging with existing molecules have been significantly improved. The text with the model information is now only printed if the model number is present in the PDB file or has been supplied by the user.
* Fixes for all of the PDB documentation HTML links in the docstrings. The PDB have shifted their documentation from http://www.wwpdb.org/documentation/format33/v3.3.html to http://www.wwpdb.org/documentation/file-format/format33/v3.3.html, unfortunately without redirects. This will create dead links in the relax API documentation at http://www.nmr-relax.com/api/3.3/, as well as the older API documentation (http://www.nmr-relax.com/api/2.2/, http://www.nmr-relax.com/api/3.0/, http://www.nmr-relax.com/api/3.1/, http://www.nmr-relax.com/api/3.2/).
* Created the Structure.test_bug_23295_ss_metadata_merge system test. This is to catch bug #23295 (https://gna.org/bugs/?23295), the PDB secondary structure HELIX and SHEET records not being updated when merging molecules. 
This uses the '2BE6_secondary_structure.pdb' structure file and the 'test.py' relax script contents as the test, checking the HELIX and SHEET records.
* Added one more check to the Structure.test_bug_23295_ss_metadata_merge system test. The test would previously pass even if no HELIX or SHEET records were written to the PDB file.
* Fix for the Structure.test_bug_23295_ss_metadata_merge system test and additional printouts.
* Fix for the Structure.test_pdb_combined_secondary_structure system test. The SHEET PDB record check was incorrect and was checking for the improperly formatted atom name field, which has now been fixed in relax.
* Large speed up of the structure.web_of_motion user function. With the introduction of the _sort() internal structural object method, and it being called by add_atom(), the structure.web_of_motion user function had become painfully slow. As sorting the structural data is unnecessary for the backend of this user function, the add_atom() boolean argument 'sort' has been added to turn the sorting on and off, and the structure.web_of_motion backend now sets this to False.
* Fix for the internal structural object unit test Test_object.test_add_atom_sort. This test of the _lib._structure._internal.test_object unit test module now requires the sort argument to be set to True when calling the add_atom() method.
* Improvement for a RelaxError message when assembling structural data but no coordinates can be found.
* Created a series of unit tests for implementing a new internal structural object feature. These tests check a new 'inv' argument for the selection() structural object method, allowing all atoms not matching the atom ID string to be selected.
* Implemented the new 'inv' argument for the selection() structural object method. This allows all atoms not matching the atom ID string to be selected. The unit tests for this argument now all pass, validating the implementation.
* Improvement for the structure.mean user function. 
This can now be used to store an averaged structure in an empty data pipe. Previously structural data needed to be present in the current data pipe for the user function to work.
* Created a system test to show a limitation of the rdc.copy user function. Currently it cannot work when the spin systems in two data pipes are different. The system test will be used to implement this support.
* Simplification of the new Rdc.test_rdc_copy_different_spins system test. This no longer tests the deletion of interatomic data containers by the spin.delete user function, something which is not implemented.
* Some more fixes for the Rdc.test_rdc_copy_different_spins system test. The residue.delete user function, and not spin.delete, is required to delete the sequence data.
* Another small fix for the new Rdc.test_rdc_copy_different_spins system test. The rdc.copy user function requires the pipe_to argument to be supplied in this case.
* Expansion of the Rdc.test_rdc_copy_different_spins system test. The interatomic data containers are now defined via the interatom.define user function, which requires the spin.element user function to set up the element information. A printout has also been added to demonstrate a failure of the pipe_control.interatomic.interatomic_loop() function in handling the correct data pipe.
* Some more modifications for the Rdc.test_rdc_copy_different_spins system test. One of the interatomic data containers does not have RDC data, as it is not present in the original data pipe, hence this is checked for. And the printouts have more formatting.
* Expanded the functionality of the rdc.copy user function. The user function will now operate on two data pipes with different spin sequences. If the interatomic data container is missing from the target data pipe, a warning is given. And if the interatomic data container is not present in the source data pipe, nothing will be copied.
* Modified the rdc.copy user function to print out all copied RDC values and errors. 
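The 'inv' selection argument mentioned above simply inverts the membership test, so that everything not matching the atom specification is returned. A minimal hypothetical sketch (relax's real selection() method works on atom ID strings, not predicates):

```python
def select_atoms(atoms, predicate, inv=False):
    """Return the atoms matching the predicate, or, if inv is True,
    all atoms NOT matching it."""
    return [atom for atom in atoms if bool(predicate(atom)) != inv]

atoms = ['N', 'CA', 'C', 'O', 'CB']
backbone = {'N', 'CA', 'C', 'O'}
print(select_atoms(atoms, lambda a: a in backbone))            # → ['N', 'CA', 'C', 'O']
print(select_atoms(atoms, lambda a: a in backbone, inv=True))  # → ['CB']
```

The `!=` comparison against the boolean flag is the whole trick: with inv=False it keeps matches, with inv=True it keeps the complement.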
* Created the Rdc.test_rdc_copy_back_calc system test. This will be used to implement the back_calc Boolean argument for the rdc.copy user function to allow not only measured, but also back-calculated RDC values to be copied.
* Modified the rdc.copy printout of RDCs to occur for each alignment ID.
* Implemented the back_calc argument for the rdc.copy user function. This allows the back-calculated RDCs to be copied together with the real value and error.
* Small formatting change for the rdc.copy user function printouts.
* Created the Pcs.test_pcs_copy_different_spins system test. This will be used to show a limitation of the pcs.copy user function in that it cannot copy data between two data pipes with different molecule, residue, and spin sequences.
* Added a printout of the alignment ID for the pcs.copy user function. This is to match the rdc.copy user function.
* Created the Pcs.test_pcs_copy_back_calc system test. This will be used to implement the back_calc Boolean argument for the pcs.copy user function to allow not only measured, but also back-calculated PCS values to be copied. It matches the equivalent Rdc.test_rdc_copy_back_calc system test.
* Implemented the back_calc argument for the pcs.copy user function. This allows the back-calculated PCSs to be copied together with the real value and error. The implementation simply copies that of the rdc.copy user function.
* Added full per-alignment data printouts to the pcs.copy user function to match rdc.copy. The feedback is important for knowing what was actually copied.
* Modified the pcs.copy user function to handle different spin sequences between data pipes.
* Fixes for the Pcs.test_pcs_copy_different_spins and Pcs.test_pcs_copy_back_calc system tests.
* Fix for the pcs.copy user function for a recently introduced problem. The data pipe for the spin_loop() function must be supplied.
* The pcs.copy user function now skips deselected spins. 
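The per-alignment copying with an optional back_calc flag described above can be sketched as follows. This is a hedged illustration, not relax's implementation: containers are plain dicts keyed by alignment ID, and the key names ('rdc', 'rdc_err', 'rdc_bc') are invented for the example:

```python
def copy_rdc(source, target, back_calc=False):
    """Copy measured RDC values and errors per alignment ID from one
    container to another; optionally also copy back-calculated values."""
    for align_id, value in source.get('rdc', {}).items():
        target.setdefault('rdc', {})[align_id] = value
        err = source.get('rdc_err', {}).get(align_id)
        if err is not None:
            target.setdefault('rdc_err', {})[align_id] = err
        # Only duplicate the back-calculated value when requested and present.
        if back_calc and align_id in source.get('rdc_bc', {}):
            target.setdefault('rdc_bc', {})[align_id] = source['rdc_bc'][align_id]

src = {'rdc': {'Dy': 5.2}, 'rdc_err': {'Dy': 0.1}, 'rdc_bc': {'Dy': 5.0}}
tgt = {}
copy_rdc(src, tgt, back_calc=True)
print(sorted(tgt))  # → ['rdc', 'rdc_bc', 'rdc_err']
```

Skipping keys absent from the source dict mirrors the behaviour above where nothing is copied when the source container lacks the data.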
* Modified the N_state_model.test_data_copying system test to skip deselected spins.
* Added more checks to the three Pcs.test_pcs_copy* system tests.
* Added more checks to the three Rdc.test_rdc_copy* system tests.
* Created the Rdc.test_calc_q_factors_no_tensor system test. This is to demonstrate a failure in the rdc.calc_q_factors user function when no alignment tensor is present. In addition, the test also triggers an earlier problem of the spin isotope information being missing. However the isotope is not required if the tensor is absent.
* The Rdc.test_rdc_copy_* system tests now check for the 'rdc_data_types' data structure. This is in the Rdc.test_rdc_copy_different_spins and Rdc.test_rdc_copy_back_calc system tests and shows that the rdc.copy user function fails to duplicate this information.
* The Rdc.test_rdc_copy_* system tests now check for the 'absolute_rdc' data structure. This is in the Rdc.test_rdc_copy_different_spins and Rdc.test_rdc_copy_back_calc system tests and shows that the rdc.copy user function fails to duplicate this information as well.
* Expanded the rdc.copy user function to copy the RDC data type and absolute RDC flag information.
* Created the Rdc.test_corr_plot system test to check the rdc.corr_plot user function. This shows that this poorly tested function works correctly.
* Created the Pcs.test_corr_plot system test to check the pcs.corr_plot user function. This user function is poorly tested, and the test triggers a series of bugs.
* Added the 'title' and 'subtitle' arguments to the pcs.corr_plot user function. This problem was detected by the new Pcs.test_corr_plot system test. The pcs.corr_plot user function now matches the rdc.corr_plot user function in terms of arguments.
* Completed the Pcs.test_corr_plot system test. The file contents are now known and have been carefully checked in Grace.
* Clarification of the RDC and PCS Q factors. 
This affects the rdc.calc_q_factors and pcs.calc_q_factors user functions, as well as all other operations involving the calculation of Q factors. The printouts have been modified to clarify whether the normalisation is via the tensor size (2Da^2(4 + 3R)/5) or via the sum of the squared data, and the separation of the two is now clearer. This allows for better RDC vs. PCS comparisons. In addition, the data pipe variable names have been updated to reflect the normalisation, so that it is instantly known which was used when looking at the XML contents of results or save files. The backwards compatibility hooks have been modified to support the data pipe variable name changes.
* The align_tensor.copy user function 'tensor_from' argument can now be None. This is to enable the copying of all alignment tensors from one data pipe to another.
* Created the Align_tensor.test_copy_pipes system test. This is to show a problem in the align_tensor.copy user function when copying all tensors between data pipes.
* Modified the pipe_control.align_tensor.align_data_exists() function to handle missing tensor IDs. If no tensor ID is supplied, it will now return True if any alignment data exists.
* Improvement for the align_tensor.copy user function. The user function has been modified to allow all alignment tensors to be copied between two data pipes. This allows the Align_tensor.test_copy_pipes system test to pass.
* Fixes for the align_tensor.copy user function argument unit tests. The tensor_from and tensor_to arguments can now be None.
* Created the Align_tensor.test_copy_pipes_sims system test. This demonstrates a failure of the align_tensor.copy user function when Monte Carlo simulated tensors are present.
* Deleted the data_store.align_tensor.AlignTensorSimList.append() method. This replacement list method was proving fatal when copy.deepcopy() is called on the alignment tensor object. The change allows the Align_tensor.test_copy_pipes_sims system test to pass. 
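Of the two Q factor normalisations mentioned above, the sum-of-squared-data form is the simpler to sketch. A hedged illustration (relax's actual functions differ; the tensor-size normalisation 2Da^2(4 + 3R)/5 would replace the denominator):

```python
from math import sqrt

def q_factor(measured, back_calc):
    """RDC/PCS Q factor normalised by the sum of the squared measured data:
    sqrt(sum((measured - back_calc)^2) / sum(measured^2))."""
    sse = sum((m - b)**2 for m, b in zip(measured, back_calc))
    norm = sum(m**2 for m in measured)
    return sqrt(sse / norm)

print(q_factor([3.0, 4.0], [3.0, 4.0]))  # → 0.0 (perfect agreement)
print(q_factor([3.0, 4.0], [0.0, 0.0]))  # → 1.0 (no agreement at all)
```

With this normalisation a Q factor near 0 indicates close agreement between measured and back-calculated data, which is what makes RDC vs. PCS comparisons meaningful.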
* Huge speed up for loading results and state files with Monte Carlo simulation alignment tensors. The reading of the alignment tensor component of XML formatted results and state files has been modified. Previously the data_store.align_tensor.AlignTensorData._update_object() method for updating the alignment tensor object (for values, errors, simulations) was being called once for each Monte Carlo simulation. Now it is called only once for all simulations. In one test, the reading of a save file with 500 simulations dropped from 253.7 to 10.0 seconds.
* Added an extra check for the assembly of RDC data. This is in the pipe_control.rdc.return_rdc_data() function and the check is for any unit vectors set to None, which is a fatal condition.
* Improved the RelaxError message from the RDC assembly function when unit vectors are None.
* Added a new warning to the interatom.unit_vectors user function if data is missing. This is to aid in detecting problems earlier, before unit vectors of None are encountered by other parts of relax.
* Modified the rdc.corr_plot user function to skip deselected interatomic data containers. This would normally happen anyway, as no back-calculated data is normally present. However, if data has been copied from elsewhere, this may not always be the case.
* Created the Sequence.test_bug_23372_read_csv system test. This is to catch bug #23372 (https://gna.org/bugs/?23372), the sequence.read failure with CSV files. It uses a truncated version of the CSV data file attached to sr #3219 (https://gna.org/support/?3219).
* Converted lib.sequence.validate_sequence() to the checking function design. This is the checking function design documented at http://wiki.nmr-relax.com/Relax_source_design#The_check_.2A.28.29_functions. The validate_sequence() function has been renamed to check_sequence_func() and the checking object is called check_sequence. 
This removes the string processing hack used to convert RelaxErrors to RelaxWarnings in the lib.sequence.read_spin_data() function, avoiding strange messages such as "RelaxWarning: ror: The sequence data in the line..." as seen in the Sequence.test_bug_23372_read_csv system test.
* Small typo fix for the Sequence.test_bug_23372_read_csv system test.
* Added the raise_flag argument to the lib.sequence.read_spin_data() function. This is to allow the missing data RelaxError to be deactivated.
* Modified the spectrum.read_intensities user function backend to be more robust. This affects the generic formatted peak lists, via the lib.spectrum.peak_list.intensity_generic() function. The peak list reading will now continue reading the file after corrupted lines have been encountered.
* Python 3 improvement for the rdc.corr_plot and pcs.corr_plot user functions. The world view is now set in floating point numbers. In Python 2, the math.ceil() and math.floor() functions return floats, whereas in Python 3 these functions return integers. The behaviour is now consistent in both Python versions, fixing a few system tests.
* Modified the internal formatting of the data section of the Grace 2D graph files. This affects the lib.plotting.grace.write_xy_data() function. The formatting is now more consistent, with the X value now set to a fixed number of decimal places, and hence it will no longer change between Python 2 and 3. The data is now all right justified as well, for easier reading. All affected system tests have been updated for the new format.
* Epydoc documentation fix for the lib.structure.pdb_write._handle_atom_name() function.

Bugfixes:

* Big bug fix for the N-state model num_data_points() function. This is from the specific_analyses.n_state_model.data module. This code was very much out of date. It was expecting an ancient behaviour where the spin container 'pcs' variable and the interatomic data container 'rdc' variable were lists of floats. 
However these were converted many years ago to dictionaries with keys set to the alignment IDs. The result was that no RDCs or PCSs were counted as base data points, so the function would in most cases return a value of zero.
* Fixes for the printout from the pipe_control.pcs.return_pcs_data() function. The number of PCSs printed out was including values of None when data was missing for one alignment. These values of None are no longer counted.
* Fixes for the printout from the pipe_control.rdc.return_rdc_data() function. The number of RDCs printed out was including values of None when data was missing for one alignment. These values of None are no longer counted.
* More fixes for the RDC and PCS count printouts from the corresponding data assembly functions. Sometimes the RDC or PCS value could be present as None. This is now detected and the count is not incremented.
* More fixes for the PCS count printout from the pipe_control.pcs.return_pcs_data() function. The check for None values was incorrect.
* Fixes for the N-state model num_data_points() function. The deselected interatomic data containers are no longer used for counting RDC data. And the skipping of deselected spin containers for the PCS is now via the spin_loop() skip_desel argument.
* Fix for bug #23259 (https://gna.org/bugs/?23259). This is the broken user functions in the prompt UI with the RelaxError "The user function 'X' has been renamed to 'Y'". The problem was that only the first part of the user function name, for example 'minimise' from 'minimise.calculate', was being checked in the user function name translation table. As the minimise user function has been renamed to minimise.execute, 'minimise' is in the translation table and hence minimise.calculate was being identified as the minimise user function. Now the full user function name is reconstructed before checking the translation table.
* Fixes for the lib.structure.internal.coordinates.assemble_coord_array() function. 
The problem was uncovered by the Structure.test_atomic_fluctuations_no_match system test. The function can now handle no data being passed in.
* Fixes for the pipe_control.structure.main.assemble_structural_coordinates() function. The function will now raise a RelaxError if no structural data matching the atom ID can be found. The problem was uncovered by the Structure.test_atomic_fluctuations_no_match system test. The fix affects the structure.atomic_fluctuations, structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose, and structure.web_of_motion user functions.
* Fix for bug #23265 (https://gna.org/bugs/?23265). This is the failure of the edit buttons in the user function GUI windows. The problem was that the column titles of the window opened by the edit button were being incorrectly handled if the dimensions of the window were not supplied.
* Fix for bug #23288 (https://gna.org/bugs/?23288). This is the failure of the structure.read_pdb user function when simultaneously merging multiple molecules from one file. The set_mol_name and set_model_num arguments, if supplied, are now converted to lists matching the lengths of the read_mol and read_model arguments.
* Small fix for the structure.write_pdb user function for handling old relax state and results files.
* Fix for bug #23293 (https://gna.org/bugs/?23293). This is the PDB HETATM loading error whereby the last HETATM record is sometimes not read from the PDB file. The problem was two-fold. Firstly, the internal structural object _parse_mols_pdb() method for separating a PDB file into distinct molecules was terminating too early when a new molecule was found, so that the last PDB record was not appended to the records list for the molecule. Secondly, the write_pdb() method was not handling the PDB sequential serial number correctly.
* Fix for bug #23294 (https://gna.org/bugs/?23294). This is the automatic merging of PDB molecules resulting in an IndexError. 
Now if only a single molecule name is supplied, this will be used for all molecules in the PDB file. The result is that the structural data will all be automatically merged into a single molecule. This merging is communicated to the user via the printouts.
* Bug fix for the SHEET PDB records created by the structure.write_pdb user function. The current and previous atom parts of the record were not being correctly formatted. These were simply using the %4s formatting string, however the PDB atom name format is rather more complicated. To handle this, the new _handle_atom_name() helper function has been added to the lib.structure.pdb_write module. This is now used in the atom() and sheet() functions for consistently formatting the atom name field.
* Fix for bug #23295 (https://gna.org/bugs/?23295). This is the PDB secondary structure HELIX and SHEET records not being updated when merging molecules. The problem was that the algorithm for changing the molecule numbers of the helix and sheet metadata when calling the structure.read_pdb user function was far too simplistic, therefore the logic has been completely rewritten. Now the helix and sheet metadata are stored in temporary data structures in the _parse_pdb_ss() method. As the molecules are being read from the PDB records, new data structures containing the original molecule numbers and new molecule numbers are created. The helix and sheet metadata is then stored in the internal structural object via the pack_structs() method, and the molecule indices of the metadata are changed based on the two molecule number remapping data structures.
* Python 3 fix for the new internal structural object MolContainer._sort() method. The list() built-in function is required to convert the output of the range() function into a true list in Python 3, so that the list.sort() method can be accessed.
* Python 3 fix for the Test_msa.test_central_star unit test. This is from the _lib._sequence_alignment.test_msa unit test module. 
The logic of range() + range() does not work in Python 3, so the range() function calls are now wrapped in list() function calls to convert them to the correct data structure type.
* Python 3 fix for the internal structural object MolContainer._sort_key() method. This method is used as the key for the sort() function. However in Python 3, the key cannot be None. So now if the residue number is None, the value 0 is returned instead.
* Python 3 fix for the pipe_control.structure.main.assemble_structural_coordinates() function. This affects most of the structure user functions. This was another case requiring the list() built-in function to create a list object from an iterator.
* Another Python 3 list() fix for the structure user functions. This time the problem was in the pipe_control.structure.main.sequence_alignment() function.
* Fix for a RelaxError message from the internal structural object when validating models.
* Bug fix for the results.write user function when loading relax state files. The results.write user function can load not only results files consisting of a single data pipe, but also relax state files if only a single pipe is present. However this was causing the current data pipe and other pipe-independent data (sequence alignments and the GUI) to be overwritten, just as when loading a state file. Now only the data from the data pipe will be loaded, and the pipe-independent data in the state file will be ignored.
* Fix for the rdc.write user function. The check for the missing rdc_data_types variable in the interatomic containers is now more comprehensive and checks for the presence of the alignment ID.
* Big bug fix for the pipe_control.interatomic.interatomic_loop() function. This was identified in the Rdc.test_rdc_copy_different_spins system test. The problem was that the pipe argument was being ignored when looking up the spin containers. 
Hence if the pipe being worked on was not the current data pipe, and the spin sequences were not identical, the function would fail. This mainly affects the rdc.copy user function.
* Fix for the pcs.read user function. The problem was caught by the new Pcs.test_pcs_copy_different_spins system test. If a spin system does not exist in the current data pipe, but data for it is present in the PCS file, the pcs.read user function would terminate with a TypeError.
* Fixes for the rdc.calc_q_factors user function for when no alignment tensor is present. This was caught by the Rdc.test_calc_q_factors_no_tensor system test. Now if no tensor is present, a warning is given and the 2Da^2(4 + 3R)/5 normalised Q factor is skipped. Also, if a tensor is present but the spin isotope information is missing, RelaxSpinTypeError errors are raised.
* Fix for the pcs.corr_plot user function when the spin containers have no element information.
* Fix for bug #23372 (https://gna.org/bugs/?23372), the sequence.read failure with CSV files. The problem was that the sep argument was not being passed all the way to the backend lib.io.extract_data() function.
* Fix for the lib.sequence.check_sequence checking object. Although rarely used, the check for the spin number was incorrect and half of the checks were instead for the residue number. This is a classic copy and paste error where the residue name and number checks were copied but not completely converted to spin name and number checks.
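The Python 3 porting fixes described in the changes above reduce to two small language differences: range() returns a lazy object rather than a list in Python 3, and a sort key of None is no longer orderable. A minimal sketch of both fixes:

```python
# range() + range() works in Python 2 but raises a TypeError in Python 3;
# wrapping each call in list() is portable to both versions.
indices = list(range(3)) + list(range(2))
print(indices)  # → [0, 1, 2, 0, 1]

# A sort key of None cannot be compared in Python 3, so map it to 0,
# as done for missing residue numbers.
def sort_key(res_num):
    return 0 if res_num is None else res_num

print(sorted([3, None, 1], key=sort_key))  # → [None, 1, 3]
```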
From: Edward d'A. <ed...@do...> - 2015-02-05 09:43:08
|
This is a minor feature and bugfix release. It includes the addition of the new structure.sequence_alignment user function which can use the 'Central Star' multiple sequence alignment algorithm or align based on residue numbers, saving the results in the relax data store. The assembly of structural coordinates used by the structure.align, structure.atomic_fluctuations, structure.com, structure.displacement, structure.find_pivot, structure.mean, structure.rmsd, structure.superimpose and structure.web_of_motion user functions has been redesigned around this new user function. It will use any pre-existing sequence alignments for the molecules of interest, use no sequence alignment if only structural models are selected, and default to a residue number based alignment if the structure.sequence_alignment user function has not been used. Bug fixes include a system test failure on Mac OS X, and the relaxation curve-fitting auto-analysis now produces I∞ parameter text files and Grace graphs for the inversion recovery and saturation recovery experiment types. Many more details are given below. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.6. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).

The full list of changes is:

Features:
* The Needleman-Wunsch sequence alignment algorithm now calculates an alignment score.
* Implementation of the central star multiple sequence alignment (MSA) algorithm.
* Implementation of a residue number based multiple sequence alignment (MSA) algorithm.
* Large speed up of the molecule, residue, and spin selection object, affecting all parts of relax.
* Sequence alignments are now saved in the relax data store. * Important formatting improvement for the description in the GUI user function windows, removing excess empty lines after lists. * Creation of the structure.sequence_alignment user function. The MSA algorithm can be set to either 'Central Star' or 'residue number', the pairwise sequence alignment algorithm to 'NW70' for the Needleman-Wunsch algorithm, and the substitution matrix to one of 'BLOSUM62', 'PAM250', or 'NUC 4.4'. * More advanced support for different numpy number types in the lib.xml relax library module. This allows numpy int16, int32, float32, and float64 objects to be saved in the relax data store and retrieved from relax XML save and results files. * Merger of structure.align into the structure.superimpose user function. * The assembly of common atomic coordinates by the structure user functions now takes sequence alignments into account. The logic is to first use a sequence alignment from the relax data store if present, use no sequence alignment if coordinates only come from structural models, or fall back to a residue number based alignment. This affects the structure.align, structure.atomic_fluctuations, structure.com, structure.displacement, structure.find_pivot, structure.mean, structure.rmsd, structure.superimpose and structure.web_of_motion user functions. * Large improvements in the memory management for all parts of the GUI. Changes: * Spelling fixes for the CHANGES document. * Created the Structure.test_align_molecules2 system test. This is to demonstrate a failure condition in the structure.align user function. * Large simplification of the atomic coordinate assembly code in the internal structural object. This is in the lib.structure.internal.coordinates.assemble_coord_array() function. The logic of the function has recently changed due to the introduction of the pairwise sequence alignments. This caused a lot of code to now be redundant, and also incorrect in certain cases. 
This simplification fixes the problem caught by the Structure.test_align_molecules2 system test.
* Fix for the Structure.test_displacement system test - the molecule IDs needed updating.
* Created the Structure.test_align_molecules_end_truncation system test. This is to demonstrate a failure of the common residue detection algorithm using multiple pairwise alignments in the backend of the structure.align and other multiple structure based user functions.
* Created empty unit test infrastructure for testing the lib.structure.internal.coordinates module.
* Created the Test_coordinates.test_common_residues unit test. This is from the _lib._structure._internal.test_coordinates unit test module. The test shows that the lib.structure.internal.coordinates.common_residues() function is working correctly. However the printout, which is not caught by the test, is incorrect.
* Modified the lib.structure.internal.coordinates.common_residues() function. It now accepts the seq argument, which causes the gapped sequence strings to be returned. This is to allow for checking in the unit tests.
* Created the Test_align_protein.test_align_multiple_from_pairwise unit test. This is in the _lib._sequence_alignment.test_align_protein unit test module. This test checks the operation of the lib.sequence_alignment.align_protein.align_multiple_from_pairwise() function, which does not yet exist.
* Simplified the Test_coordinates.test_common_residues unit test by removing many residues. This is from the _lib._structure._internal.test_coordinates unit test module.
* Expanded the docstring of the Test_align_protein.test_align_multiple_from_pairwise unit test. This is from the _lib._sequence_alignment.test_align_protein unit test module.
* Attempt at fixing the lib.structure.internal.coordinates.common_residues() function. This function still does not work correctly.
* Renamed the Test_align_protein.test_align_multiple_from_pairwise unit test.
This is now the Test_msa.test_central_star unit test of the _lib._sequence_alignment.test_msa unit test module (it was originally in the _lib._sequence_alignment.test_align_protein unit test module). This is in preparation for converting the lib.sequence_alignment.align_protein.align_multiple_from_pairwise() function into the lib.sequence_alignment.msa.central_star() function. * Added the lib.sequence_alignment.align_protein.align_multiple_from_pairwise() function. This should have been committed earlier. The function is only partly implemented. * Initial lib.sequence_alignment.msa.central_star() function. This was moved from lib.sequence_alignment.align_protein.align_multiple_from_pairwise(). * Import fix for the _lib._sequence_alignment.test_align_protein unit test module. * Added the verbosity argument to lib.sequence_alignment.align_protein.align_pairwise(). If set to zero, all printouts are suppressed. * The Needleman-Wunsch sequence alignment algorithm now calculates and returns an alignment score. This is in the lib.sequence_alignment.needleman_wunsch.needleman_wunsch_align() function. The score is calculated as the sum of the Needleman-Wunsch matrix elements along the traceback path. * The protein pairwise sequence alignment function now returns the alignment score. This is in the lib.sequence_alignment.align_protein.align_pairwise() function. The score from the Needleman-Wunsch sequence alignment algorithm is simply passed along. * Fix for the Test_msa.test_central_star unit test. This is from the _lib._sequence_alignment.test_msa unit test module. Some of the real gap matrix indices were incorrect. * Complete implementation of the central star multiple sequence alignment algorithm. This includes all the four major steps - pairwise alignment between all sequence pairs, finding the central sequence, iteratively aligning the sequences to the gapped central sequence, and introducing gaps in previous alignments during the iterative alignment. 
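As context for the alignment score entries above: in the textbook Needleman-Wunsch formulation, the optimal global alignment score can be read off the final element of the dynamic programming matrix (the changelog describes relax computing it as the sum of matrix elements along the traceback path instead). A minimal sketch with a flat match/mismatch scoring scheme, rather than the BLOSUM62 or PAM250 substitution matrices relax uses:

```python
def needleman_wunsch(seq1, seq2, match=1, mismatch=-1, gap=-1):
    """Minimal global Needleman-Wunsch alignment, returning the optimal score."""
    n, m = len(seq1), len(seq2)
    # Build the (n+1) x (m+1) dynamic programming matrix.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap  # leading gaps in seq2
    for j in range(1, m + 1):
        F[0][j] = j * gap  # leading gaps in seq1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if seq1[i - 1] == seq2[j - 1] else mismatch
            # Best of: align the two residues, or a gap in either sequence.
            F[i][j] = max(F[i - 1][j - 1] + s, F[i - 1][j] + gap, F[i][j - 1] + gap)
    return F[n][m]

score = needleman_wunsch("GATTACA", "GCATGCU")
```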
The correctness of the implementation is verified by the Test_msa.test_central_star unit test of the _lib._sequence_alignment.test_msa module. * Fixes for the unit tests of the _lib._sequence_alignment.test_align_protein module. The Test_align_protein.test_align_pairwise_PAM250 unit test was accidentally duplicated due to a copy and paste error. And the lib.sequence_alignment.align_protein.align_pairwise() function now also returns the alignment score. * Fixes for the unit tests of the _lib._sequence_alignment.test_needleman_wunsch module. The lib.sequence_alignment.needleman_wunsch.needleman_wunsch_align() function now returns the alignment score. * The assemble_coord_array() function is now using the central star multiple sequence alignment. This is the function from the lib.structure.internal.coordinates module used to assemble common atomic coordinate information, used by the structure.align, structure.atomic_fluctuations, structure.com, structure.displacement, structure.find_pivot, structure.mean, structure.rmsd, structure.superimpose and structure.web_of_motion user functions. The non-functional lib.structure.internal.coordinates.common_residues() function has been removed as the lib.sequence_alignment.msa.central_star() function performs this functionality correctly. * Deleted the Test_coordinates.test_common_residues unit test. This is from the _lib._structure._internal.test_coordinates unit test module. The lib.structure.internal.coordinates.common_residues() function no longer exists. * Alphabetical ordering of all Structure system tests. * Better printout spacing in lib.sequence_alignment.msa.central_star(). * Fixes for the Structure.test_align_molecules_end_truncation system test. This system test had only been partly converted from the old Structure.test_align_molecules2 system test it had been copied from. * Created the Internal_selection.count_atoms() internal structural object selection method. 
This counts the number of atoms in the current selection.
* Added final printouts to the structure.rotate and structure.translate user function backends. This is to give feedback to the user as to how many atoms were translated or rotated, to aid in solving problems with the structure user functions. These backend functions are also used by the structure.align and structure.superimpose user functions.
* Corrections for the Structure.test_align_CaM_BLOSUM62 system test. The CaM N and C domains cannot be aligned together in a global MSA as they would align very well to themselves, causing the atomic coordinate assembly function to fail.
* Improvement for the lib.sequence_alignment.msa.central_star() function. The strings and gap matrix returned by the function have been reordered to match the input sequences.
* Modified the Structure.test_align_molecules_end_truncation system test. The calmodulin bound calciums are now deleted prior to the structure.align user function call. This prevents them from being labelled as '*' residues and aligning with real amino acids via the central star multiple sequence alignment (MSA) algorithm.
* Large speed up of the mol-res-spin selection object. The Selection.contains_mol(), Selection.contains_res() and Selection.contains_spin() methods of the lib.selection module have been redesigned for speed. Instead of setting a number of flags and performing bit operations at the end of the method to return the correct Boolean value, each of the multiple checks now simply returns a Boolean value, avoiding all subsequent checks. The check list order has also been rearranged so that the least expensive checks are at the top and the most time intensive checks are last.
* Created the new relax data store object for saving sequence alignments.
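The mol-res-spin selection speed-up described a few entries above follows a general refactoring pattern: return from each check immediately instead of accumulating flags, and order the cheap checks first. A hypothetical sketch of the pattern (not relax's actual lib.selection code):

```python
from collections import namedtuple

# A toy selection holding a set of residue numbers and a set of residue names.
Selection = namedtuple("Selection", ["numbers", "names"])

def contains_res_slow(sel, num, name):
    """Flag-based version: every check always runs, then the flags are combined."""
    num_match = num in sel.numbers
    name_match = name in sel.names
    return num_match or name_match

def contains_res_fast(sel, num, name):
    """Early-return version: the cheap integer check runs first, and later,
    more expensive checks are skipped whenever an earlier one already decides."""
    if num in sel.numbers:
        return True
    if name in sel.names:
        return True
    return False

sel = Selection(numbers={1, 2, 3}, names={"GLY", "ALA"})
```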
This is in the new data_store.seq_align module via the Sequence_alignments object, subclassed from RelaxListType, for holding all alignments and the Alignment Element object, subclassed from Element, for holding each individual alignment. The objects are currently unused. * Added the seq_align module to the data_store package __all__ list. * Created the Test_seq_align.test_alignment_addition unit test. This is in the _data_store.test_seq_align unit test module. This tests the setup of the sequence alignment object via the data_store.seq_align.Sequence_alignment.add() method. * Fixes for the data_store.seq_align.Alignment.generate_id() method. These problems were identified by the _data_store.test_seq_align module Test_seq_align.test_alignment_addition unit test. * Added the Test_seq_align.test_find_alignment and Test_seq_align.test_find_missing_alignment unit tests. These are in the _data_store.test_seq_align unit test module. They check the functionality of the currently unimplemented Sequence_alignment.find_alignment() method which will be used to return pre-existing alignments. * Code rearrangement in the _data_store.test_seq_align unit test module. The ID generation has been shifted into the generate_ids() method to be used by multiple tests. * Implemented the data_store.seq_align.Sequence_alignments.find_alignment() method. This will only return an alignment if all alignment input data and alignment settings match exactly. * Shifted the data_store.seq_align.Alignment.generate_id() method into the relax library. It has been converted into the lib.structure.internal.coordinates.generate_id() function to allow for greater reuse. * Created the Sequence.test_align_molecules system test. 
This will be used to implement the sequence.align user function which will be used for performing sequence alignments on structural data within the relax data store and storing the data in the data pipe independent sequence_alignments data store object (which will be an instance of data_store.seq_align.Sequence_alignments). The system test also checks the XML saving and loading of the ds.sequence_alignments data structure. * Renamed the Sequence.test_align_molecules system test to Structure.test_sequence_alignment_molecules. As the sequence alignment is dependent on the structural data in the relax data store, the user function for sequence alignment would be better named as structure.sequence_alignment. The sequence.align user function is not appropriate as all other sequence user functions relate to the molecule, residue, and spin data structure of each data pipe rather than to the structural data. * Modified the Structure.test_sequence_alignment_molecules system test. Changed and expanded the arguments to the yet to be implemented structure.sequence_alignment user function. * Important formatting improvement for the description in the GUI user function windows. Previously lists, item lists, and prompt items were spaced with one empty line at the top and two at the bottom. The two empty lines at the bottom was an accident caused by how the list text elements were built up. Now the final newline character is stripped so that the top and bottom of the lists only consist of one empty line. The change can give a lot more room in the GUI window. * Created the frontend for the structure.sequence_alignment user function. This is based on the structure.align user function with the 3D superimposition arguments removed and new arguments added for selecting the MSA algorithm and the pairwise alignment algorithm (despite only NW70 being currently implemented). * Modified the assemble_coordinates() function of the pipe_control.structure.main module. 
The function has been renamed to assemble_structural_objects(). The call to the lib.structure.internal.coordinates.assemble_coord_array() function has also been shifted out of assemble_structural_objects() to simplify the logic and decrease the amount of arguments passed around. * Spun out the atomic assembly code of the assemble_coord_array() function. The code from the lib.structure.internal.coordinates.assemble_coord_array() function has been shifted to the new assemble_atomic_coordinates(). This is to simplify assemble_coord_array() as well as to isolate the individual functionality for reuse. * Implemented the backend of the structure.sequence_alignment user function. This checks some of the input parameters, assembles the structural objects then the atomic coordinate information, performs the multiple sequence alignment, and then stores the results. * Fixes for the sequence alignment objects for the relax data store. The Sequence_alignments(RelaxListType) and Alignment(Element) classes were not being set up correctly. The container names and descriptions were missing. * The data store ds.sequence_alignment object is now being treated as special and is blacklisted. The object is now explicitly recreated in the data store from_xml() method. * Fixes for handling the sequence_alignments data store object. * Implemented the data store Sequence_alignments.from_xml() method. This method is required for being able to read RelaxListType objects from the XML file. * Modified the data returned by lib.structure.internal.coordinates.assemble_atomic_coordinates(). The function will now assemble simple lists of object IDs, model numbers and molecule names with each list element corresponding to a different structural model. This will be very useful for converting from the complicated pipes, models, and molecules user function arguments into relax data store independent flat lists. * Updates for the structure.sequence_alignment user function. 
This is for the changes to the lib.structure.internal.coordinates.assemble_atomic_coordinates() function return values. The new object ID, model, and molecule flat lists are used directly for storing the alignment results in the relax data store.
* Updates for the Structure.test_sequence_alignment_molecules system test. This is required due to the changes in the backend of the structure.sequence_alignment user function.
* Merger of the structure.align and structure.superimpose user functions. The final user function is called structure.superimpose. As the sequence alignment component of the structure.align user function has been shifted into the new structure.sequence_alignment user function and the information is now stored in the ds.sequence_alignments relax data store object, the functionality of structure.align and structure.superimpose is now essentially the same. The sequence alignment arguments and documentation have also been eliminated, and the documentation has been updated to say that sequence alignments from structure.sequence_alignment will be used for superimposing the structures.
* Updated the Structure system tests for the structure.align and structure.superimpose user function merger.
* Fix for the structure.sequence_alignment user function. The alignment data should be stored in ds.sequence_alignments rather than ds.sequence_alignment.
* Sequence alignments can now be retrieved without supplying the algorithm settings. This is in the data_store.seq_align.Sequence_alignments.find_alignment() method. The change allows for the retrieval of pre-existing sequence alignments at any stage.
* Added a function for assembling the common atomic coordinates, taking sequence alignments into account. This is the new pipe_control.structure.main.assemble_structural_coordinates() function. It takes the sequence alignment logic out of the lib.structure.internal.coordinates.assemble_coord_array() function so that sequence alignments in the relax data store can be used.
The logic has also been redefined as: 1, use a sequence alignment from the relax data store if present; 2, use no sequence alignment if coordinates only come from structural models; 3, fall back to a residue number based alignment. The residue number based alignment is yet to be implemented. As a consequence, the lib.structure.internal.coordinates.assemble_coord_array() function has been greatly simplified. It no longer handles sequence alignments, but instead expects the residue skipping data structure, built from the alignment, as an argument. The seq_info_flag argument has also been eliminated in this function as well as the pipe_control.structure.main module. * Updated the structure.displacement user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment. * Updated the structure.find_pivot user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment. * Updated the structure.atomic_fluctuations user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment. * Updated the structure.rmsd user function for the changed atomic assembly logic. 
This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment. * Updated the structure.web_of_motion user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment. * Fix for the structure.superimpose user function if no data pipes are supplied. This reintroduces the pipes list construction. * Fix for the new pipe_control.structure.main.assemble_structural_coordinates() function. The atom_id argument is now passed into the assemble_atomic_coordinates() function of the lib.structure.internal.coordinates module so that atom subsets are once again recognised. * Another fix for the new pipe_control.structure.main.assemble_structural_coordinates() function. The logic for determining if only models will be superimposed was incorrect. * Implemented the residue number based alignment in the atomic assembly function. This is in the new pipe_control.structure.main.assemble_structural_coordinates() function. The code for creating the residue skipping data structure is now shared between the three sequence alignment options. * Implemented the multiple sequence alignment method based on residue numbers. This is the new msa_residue_numbers() function in the lib.sequence_alignment.msa module. The logic is rather basic in that the alignment is based on a residue number range from the lowest residue number to the highest - i.e. it does not take into account gaps in common between all input sequences. * The residue number based sequence alignment is now executed when assembling atomic coordinates. 
This is in the assemble_structural_coordinates() function of the pipe_control.structure.main module.
* Modified the internal structural object one_letter_codes() method. This now validates the models to make sure all models match, and the method requires the selection object so that residue subsets can be handled.
* The assemble_atomic_coordinates() function now calls one_letter_codes() with the selection object. This is the lib.structure.internal.coordinates module function.
* Fix for the residue number based sequence alignment when assembling structural coordinates. This is in the assemble_structural_coordinates() function of the pipe_control.structure.main module. The sequences of the different molecules can be of different lengths.
* Shifted the residue skipping data structure construction into the relax library. The code was originally in pipe_control.structure.main.assemble_structural_coordinates() but has been shifted into the new lib.sequence_alignment.msa.msa_residue_skipping() function. This also allows for greater code reuse. The lib.sequence_alignment.msa module is also a better location for such functionality.
* Renamed the Structure.test_sequence_alignment_molecules system test. The new name is Structure.test_sequence_alignment_central_star_nw70_blosum62, to better reflect what the test is doing.
* Modified the Structure.test_sequence_alignment_central_star_nw70_blosum62 system test. Some residues are now deleted so that the sequences are not identical.
* Created the Structure.test_sequence_alignment_residue_number system test. This will be used to test the structure.sequence_alignment user function together with the 'residue number' MSA algorithm. This is simply a copy of the Structure.test_sequence_alignment_central_star_nw70_blosum62 system test with a few small changes.
* Corrections and simplifications for the Structure.test_sequence_alignment_residue_number system test.
* Modified the structure.sequence_alignment user function arguments.
The pairwise_algorithm and matrix arguments can now be None, and they default to None.
* Updated the Structure.test_align_CaM_BLOSUM62 system test script. The MSA algorithm and pairwise alignment algorithms are now specified in the structure.sequence_alignment user function calls.
* Creation of the lib.sequence_alignment.msa.msa_general() function. This consists of code from the structure.sequence_alignment user function backend function pipe_control.structure.main.sequence_alignment() for selecting between the different sequence alignment methods.
* The structure.sequence_alignment user function now sets some arguments to None before storage. This is for all arguments not used in the sequence alignment. For example, the residue number based alignment does not use the gap penalties, the pairwise alignment algorithm, or the substitution matrices.
* Fix for the lib.sequence_alignment.msa.msa_residue_skipping() function. The sequences argument for passing in the one letter codes has been removed. The per molecule loop should be over the alignment strings rather than the one letter codes, otherwise the loop will be too short.
* Fix for the internal structural object atomic coordinate assembly function. This is the pipe_control.structure.main.assemble_structural_coordinates() function. The case of no sequence alignment being required, as only models are being handled, is now functional. The strings and gaps data structures passed into the lib.sequence_alignment.msa.msa_residue_skipping() function for generating the residue skipping data structure are now set to the one letter codes and an empty structure of zeros respectively.
* Test data directory renaming. The test_suite/shared_data/diffusion_tensor/spheroid directory has been renamed to spheroid_prolate. This is in preparation for creating oblate spheroid diffusion relaxation data.
* Creation of oblate spheroid diffusion relaxation data. This will be used in the Structure.test_create_diff_tensor_pdb_oblate system test.
* Fix for the oblate spheroid diffusion relaxation data. The diffusion parameters are constrained as Dx <= Dy <= Dz. * More fixes for the Structure.test_create_diff_tensor_pdb_oblate system test. The initial Diso value is now set to the real final Diso, and the PDB file contents have been updated for the fixed oblate spheroidal diffusion relaxation data. * Updates for many of the Diffusion_tensor system tests. This is due to the changed directory names in test_suite/shared_data/diffusion_tensor/. The ds.diff_dir variable has been introduced to point to the correct data directory. * Large improvement for the GUI test tearDown() clean up method, fixing the tests on wxPython 2.8. The user function window destruction has been shifted into a new clean_up_windows() method which is executed via wx.CallAfter() to avoid racing conditions. In addition, the spin viewer window is destroyed between tests. The spin viewer window change allows the GUI tests to pass on wxPython 2.8 again. This also allows the GUI tests to progress much further on Mac OS X systems before they crash again for some other reason. This could simply be hiding a problem in the spin viewer window. However it is likely to be a racing problem only triggered by the super fast speed of the GUI tests and a normal user would never be able to operate the GUI on the millisecond timescale and hence may never see it. * Reverted the wxPython 2.8 warning printout when starting relax, introduced in relax 3.3.5. * Reverted the skipping of the GUI tests on wxPython 2.8, introduced in relax 3.3.5. * Reverted the General.test_bug_23187_residue_delete_gui GUI test disabling, introduced in relax 3.3.5. The 'Bus Error' on Mac OS X due to this test is no longer an issue, as the spin viewer window is now destroyed after each GUI test. * Created a special Destroy() method for the spin viewer window. This is for greater control of the spin viewer window destruction. 
First the methods registered with the observer objects are unregistered, then the children of the spin viewer window are destroyed, and finally the main spin viewer window is destroyed. This change saves a lot of GUI resources in the GUI tests (there is a large reduction in 'User Objects' and 'GDI Objects' used on MS Windows systems, hence an equivalent resource reduction on other operating systems).
* Fix for the GUI test clean_up_windows() method called from tearDown(). The user function window (Wiz_window) must be closed before the user function page (Uf_page), so that the Wiz_window._handler_close() can still operate the methods of the Uf_page. This avoids a huge quantity of these errors: "Traceback (most recent call last): __getattr__ wx._core.PyDeadObjectError: The C++ part of the Uf_page object has been deleted, attribute access no longer allowed."
* Simplification of the Dead_uf_pages.test_mol_create GUI test. The RelaxError cannot be caught from the GUI user function window, therefore the try statement has been eliminated.
* More memory saving improvements for the GUI test suite tearDown() method. The clean_up_windows() method now loops through all top level windows (frames, dialogs, panels, etc.) and calls their Destroy() method.
* Created the gui.uf_objects.Uf_object.Destroy() method. This will be used to cleanly destroy the user function object.
* Modified the GUI test suite _execute_uf() method. This user function execution method now calls the user function GUI object Destroy() method to clean up all GUI objects. This should save memory for GUI objects in the GUI test suite.
* Modified the GUI test suite tearDown() method. The clean_up_windows() method called by tearDown() now prints out a list of all of the living windows instead of trying to destroy them (destroying them while running the GUI tests from within the GUI would destroy the GUI itself). The printouts will be used for debugging purposes.
* Fixes for the custom Wiz_window.Destroy() method.
This will now first close the wizard window via the Close() method to make sure all of the wizard pages are properly updated. In the end the wizard DestroyChildren() method is called to clean up all child wx objects, and finally Destroy() is called to eliminate the wizard GUI object.
* The GUI test suite tearDown() method now calls the user function GUI wizard Destroy() method. This is for better handling of user function elimination.
* Fixes for the user function GUI object Destroy() method. This matches the code just deleted in the GUI test suite tearDown() method for handling the user function page object.
* More fixes for the user function GUI object Destroy() method. The page GUI object is destroyed by the wizard window Destroy() method, so destroying it again causes wxPython runtime errors.
* Spacing printout for the list of still open GUI window elements. This is for the GUI test tearDown() method.
* Shifted the printouts from the GUI test suite clean_up_windows() method to the tearDown() method. This change means that the printouts are not within a wx.CallAfter() call, but rather at the end of the tearDown() method just prior to starting the next test.
* Simplification of the GUI analysis post_reset() method. This now uses the delete_all() and hence delete_analysis() methods to clean up the GUI. The reset argument has been added to skip the manipulation of relax data store data, as the data store is empty after a reset. Calling the delete_analysis() method ensures that the analysis specific delete() method is called so that the GUI elements can be properly destroyed.
* Proper destruction of the peak analysis wizard of the NOE GUI analysis. The peak wizard's Destroy() method is now called and the self.peak_wizard object deleted in the NOE GUI analysis delete() method.
* Improved memory management in the NOE GUI analysis peak_wizard_launch() method. This method was just overwriting the self.peak_wizard object with a new object.
However this does not destroy the wxPython window. Now if a peak wizard is detected, its Destroy() method is called before overwriting the object. * Improved GUI clean up when terminating GUI tests. The clean_up_windows() method, called from tearDown(), now also destroys the pipe editor window, the results viewer window, and the prompt window. This ensures that all of the major relax windows are destroyed between GUI tests. * Improved memory management in the relaxation curve-fitting GUI analysis. The peak intensity loading wizard is now properly destroyed. This is both via the delete() function for terminating the analysis calling the wizard Delete() method, and in the peak_wizard_launch() method calling the wizard Delete() method prior to overwriting the self.peak_wizard object with a new GUI wizard. * Improved memory management in the model-free GUI analysis. The dipole-dipole interaction wizard is now properly destroyed. This is both via the delete() function for terminating the analysis calling the wizard Delete() method, and in the setup_dipole_pair() method calling the wizard Delete() method prior to overwriting the self.dipole_wizard object with a new GUI wizard. * Improved memory management in the model-free GUI analysis. The analysis mode selection window (a wx.Dialog) is now being destroyed in the analysis delete() method. This appears to work on Linux, Windows, and Mac systems. * Improved memory management in the model-free GUI analysis. The local tm and model-free model windows are now destroyed in the GUI analysis delete() method. * Improved termination of the GUI tests. The clean_up_windows() method now calls the results viewer and pipe editor window handler_close() methods. This ensures that all observer objects are cleared out so that the methods of the dead windows can no longer be called. 
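The close-before-destroy ordering described above hinges on the observer pattern: a window's methods must be unregistered from the observer objects before the window is destroyed, otherwise later notifications would call methods of a dead window. A minimal sketch of that register/unregister pattern, with illustrative names only (this is not relax's actual observer API):

```python
class Observer:
    """A tiny callback registry illustrating the register/unregister pattern."""

    def __init__(self):
        self._callbacks = {}

    def register(self, key, method):
        """Register a method to be called on each notification."""
        self._callbacks[key] = method

    def unregister(self, key):
        """Remove the method - this must happen before its window is destroyed."""
        self._callbacks.pop(key, None)

    def notify(self):
        """Call all currently registered methods."""
        for method in list(self._callbacks.values()):
            method()
```

A window would call register() when opened and unregister() from its close handler; destroying the window without unregistering leaves a dangling callback, which is exactly the source of the PyDeadObjectError messages quoted above.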
* Fix for the previous commit: calls to wx.Yield() are required to flush the calls on the observer objects after unregistering them and deleting the results and pipe editor windows. * Improved memory management in the relaxation dispersion GUI analysis. The peak intensity loading wizard is now properly destroyed. This is both via the delete() function for terminating the analysis calling the wizard Delete() method, and in the peak_wizard_launch() method calling the wizard Delete() method prior to overwriting the self.peak_wizard object with a new GUI wizard. * Created custom Destroy() methods for the pipe editor and results viewer GUI windows. * Improved memory management in the relaxation dispersion GUI analysis. The dispersion model list window is now destroyed in the GUI analysis delete() method. * Fixes for the custom Destroy() methods for the pipe editor and results viewer GUI windows. The event argument is now a keyword argument which defaults to None. This allows the Destroy() methods to be called without arguments. * Temporary disablement of the results viewer window destruction in the GUI tests. This currently, for some unknown reason, causes segfault crashes of the GUI tests on Linux systems. * Changes for how the main GUI windows are destroyed by the GUI test tearDown() method. These changes revert some of the code of previous commits. The recently introduced pipe editor and results viewer window Delete() methods have been deleted. Instead the Close() methods are called in the tearDown() method to unregister the windows from the observer objects, followed by a wx.Yield() call to flush the wx events, and then the clean_up_windows() GUI test base method is called within a wx.CallAfter() call. This avoids the race-induced segfaults in the GUI tests. * Improved memory management in the spin viewer window. The spin loading wizard is now destroyed in the Destroy() method as well as before reinitialising the wizard in the load_spins_wizard() method. 
* The GUI tests tearDown() method now prints out the wizard window's title, if not destroyed. * The wizard window title is now being stored as a class instance variable. * Improved memory management in the relaxation data list GUI element, as well as the base list object. The relaxation data loading wizard, or any other wizard for that matter, is now destroyed in the Base_list.delete() method. In addition, the relaxation data loading wizard is destroyed before reinitialising the wizard in the wizard_exec() method. * Better memory management for the missing data dialog in the GUI analyses. The dialog is now stored as the class variable missing_data, and is then destroyed in the analysis delete() method. Without this, the wxPython dialog would remain in memory for the lifetime of the program. * Improved memory management for the Sequence and Sequence_2D input GUI elements. These are mainly used in the user function GUI windows. The dialogs are now destroyed before a second one is opened. * Improved memory management for the GUI user function windows. The Destroy() method will now destroy any Sequence or Sequence_2D windows used for the user function arguments. * The relax prompt window is now being destroyed by the GUI test suite tearDown() method. The window is first closed in the tearDown() method and then destroyed in the clean_up_windows() method. * Added memory management checking to the GUI test suite tearDown() method. If any top level windows are present, excluding the main GUI window and the relax controller, then a RelaxError will be raised. Such a check will significantly help in future GUI coding, as there will now be feedback if not all windows are properly destroyed. * Popup menus are now properly destroyed in the GUI tests. In many instances, the wx.Menu.Destroy() method was only being called when the GUI is shown. This caused memory leaks in the GUI tests. * Changed the title for the user function GUI windows. 
To better help identify what the window is, the title is now the user function name together with text saying that it is a user function. * Removed the wx.CallAfter() call in the GUI tests tearDown() method. This was used to call the clean_up_windows() method. However the value of wx.Thread_IsMain() shows that the tearDown() method executes in the main GUI thread. Therefore the wx.CallAfter() call for avoiding racing conditions is not needed. * Fix for the GUI tests clean_up_windows() tearDown method. After destroying all of the main GUI windows, a wx.Yield() call is made to flush the wxPython event queue. This seems to help with the memory management. * Temporary disabling of the memory management check in the GUI tests tearDown() method. For some reason, it appears as if it is not possible to destroy wx Windows on MS Windows. * Created the relax GUI prompt Destroy() method. This is used to cleanly destroy the GUI prompt by first unregistering with the observer objects, destroying then deleting the wx.py.shell.Shell instance, and finally destroying the window. * Modified the manual_c_module.py developer script so that the path can be supplied on the command line. * Removed some unused imports, as found by devel_scripts/find_unused_imports.py. * Added a copyright notice to the memory_leak_test_relax_fit.py development script. This is to know how old the script is, to see how out of date it is in the future. * Created the memory_leak_test_GUI_uf.py development script. This is to help in tracking down memory leaks in the relax GUI user functions. Instead of using a debugging Python version and guppy (wxPython doesn't seem to work with these), the pympler Python package and its muppy module is used to produce a memory usage printout. * Clean up of the memory_leak_test_GUI_uf.py development script. * Created the new devel_scripts/memory_management/ directory. 
This will be used for holding all of the C module memory leak detection, GUI object leak detection, memory management, etc. development scripts. * Shifted the memory_leak_test_GUI_uf.py script to devel_scripts/memory_management/GUI_uf_minimise_execute.py. * Created a base class for the memory management scripts for the GUI user functions. The core of the GUI_uf_minimise_execute.py script has been converted into the GUI_base.py base class module. This will allow for new GUI user function testing scripts to be created. * Removal of unused imports from the GUI user function memory testing scripts. * Created a script for testing the memory management when calling the time GUI user function. * Large memory management improvement for the relax GUI wizards and GUI user functions. The pympler.muppy based memory management scripts in devel_scripts/memory_management for testing the GUI user function windows were showing that for each GUI user function call, 28 wx._core.BoxSizer elements were remaining in memory. This was traced back to the gui.wizards.wiz_objects.Wiz_window class, specifically the self._page_sizers and self._button_sizers lists storing wx.BoxSizer instances. The problem was that 16 page sizers and 16 button sizers were initialised each time for later use; however, the add_page() method only added a small subset of these to the self._main_sizer wx.BoxSizer object. But the Destroy() method was only capable of destroying the wx.BoxSizer instances associated with another wxPython object. The fix was to add all page and button sizers to the self._main_sizer object upon initialisation. This will solve many memory issues in the GUI, especially in the GUI tests, which were causing 'memory error' or 'bus error' messages on Mac OS X systems and hitting the 'USER Object' and 'GDI object' limitations on MS Windows. * The maximum number of pages in the GUI wizard is no longer hardcoded. The max_pages argument has been added to allow this value to be changed. 
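The sizer fix above can be pictured in plain Python: rather than pre-allocating a fixed block of placeholder sizers (most of which never gain a parent and so can never be cleanly destroyed), per-page storage is only grown when a page is actually added. A hedged sketch with invented names, not the real Wiz_window class:

```python
class WizardSketch:
    """Sketch of on-demand page storage for a wizard-like container."""

    def __init__(self, max_pages=15):
        self.max_pages = max_pages      # no longer hardcoded
        self._pages = []
        self._page_sizers = []          # grown dynamically, one per real page

    def add_page(self, page):
        """Append a page, creating its sizer only now so nothing is orphaned."""
        if len(self._pages) >= self.max_pages:
            raise ValueError("Maximum number of wizard pages reached.")
        self._pages.append(page)
        self._page_sizers.append(object())   # stand-in for a wx.BoxSizer
        return len(self._pages) - 1
```

With this design the number of sizers always equals the number of pages in use, so nothing is left in memory without a parent object to destroy it.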
* Fix for GUI wizards and GUI user functions. The recent memory management changes caused the wizard windows to have an incorrect layout so that the wizard pages were not visible. Reperforming a layout of the GUI elements did not help. The solution is to not initialise sets of max_pages of wx.BoxSizer elements in the wizard __init__() method, but to generate and append these dynamically via the add_page() method. The change now means that there are no longer multiple unused wx.BoxSizer instances generated for each wizard window created. * Fix for the GUI wizard _go_next() method. The way of determining if there are no more pages had to be changed, as there are now no empty list elements at the end of the wizard storage objects. * Another fix for the now variable-sized wizard page list. This time the fix is in the GUI user function __call__() method. * Created the Relax_fit.test_bug_23244_Iinf_graph system test. This is to catch bug #23244 (https://gna.org/bugs/?23244). Bugfixes: * Bug fix for the structure.align user function. The addition of the molecule name to the displacement ID is now correctly performed. * Fix for the new Internal_selection.count_atoms() internal structural object selection method. The method was previously returning the total number of molecules, not the total number of atoms in the selection. * Printout fix for the backend of the structure.translate and structure.rotate user functions. Model numbers of zero were not correctly identified. This also affects the structure.align and structure.superimpose user functions which use this backend code. * Another fix for the Internal_selection.count_atoms() internal structural object selection method. * Small fix for the lib.structure.internal.coordinates.assemble_coord_array() function. The termination condition for determining the residues in common between all structures was incorrect. * The Structure.test_create_diff_tensor_pdb_oblate system test now uses oblate diffusion relaxation data. 
This fixes bug #23232 (https://gna.org/bugs/?23232), the failure of this system test on Mac OS X. The problem was that the system test was previously using relaxation data for prolate spheroidal diffusion and fitting an oblate tensor to that data. This caused the solution to be slightly different on different CPUs, operating systems, Python versions, etc., and hence the PDB file representation of the diffusion would be slightly different. * Big bug fix for the GUI tests on MS Windows systems. On MS Windows systems, the GUI tests were unable to complete without crashing. This is because each GUI element requires one 'User object', and MS Windows has a maximum limit of 10,000 of these objects. The GUI tests were taking more than 10,000 and then Windows would say - relax, you die now. The solution is that after each GUI test, all user function windows are destroyed. The user function page is a wx.Panel object, so this requires a Destroy() call. But the window is a Uf_page instance which inherits from Wiz_page which inherits from wx.Dialog. Calling Destroy() on MS Windows and Linux works fine, but is fatal on Mac OS X systems. So the solution is to call Close() instead. * Fix for the default grid_inc argument for the relaxation curve-fitting auto-analysis. This needs to be an integer. * Fix for bug #23244 (https://gna.org/bugs/?23244). The relaxation curve-fitting auto-analysis now outputs text files and Grace graphs for the I0 parameter and the Iinf parameter if it exists. * Fixes for the package checking unit tests on MS Windows for the target_functions package. The compiled relaxation curve-fitting file is called target_functions\relax_fit.pyd on MS Windows. The package checking was only taking into account *.so compiled files and not *.pyd files. |
From: Edward d'A. <ed...@do...> - 2015-01-28 09:58:31
|
This is a major feature and bugfix release. It fixes an important bug in the Monte Carlo simulation error analysis in the relaxation dispersion analysis. Features include: improvements to the NMR spectral noise error analysis; expansion of the grace.write user function to handle both first and last point normalisation for reasonable R1 curves in saturation recovery experiments; the implementation of the Needleman-Wunsch pairwise sequence alignment algorithm using the BLOSUM62, PAM250 and NUC 4.4 substitution matrices for more advanced 3D structural alignments via the structure.align and structure.superimpose user functions, as well as any of the other structure user functions dealing with multiple molecules; conversion of the structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose and structure.web_of_motion user functions to a new pipes/models/molecules/atom_id design to allow the user functions to operate on different data pipes, different structural models and different molecules; addition of the displace_id argument to the structure.align and structure.superimpose user functions to allow finer control over which atoms are translated and rotated by the algorithm; a large improvement to the PDB molecule identification code affecting the structure.read_pdb user function; creation of the lib.plotting package for assembling all of the data plotting capabilities of relax; implementation of the new structure.atomic_fluctuations user function for creating text output or Gnuplot graphs of the correlation matrix of interatomic distance, angle or parallax shift fluctuations; the implementation of ordinary least squares fitting; and improvements to the pcs.corr_plot and rdc.corr_plot user functions. Many more features and bugfixes are listed below. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.5. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. 
If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * Improvements to the NMR spectral noise error analysis. * Addition of the new spectrum.error_analysis_per_field user function to quickly perform a per-NMR field spectrum error analysis. * Added the spectrum.sn_ratio user function to calculate the signal to noise ratio for all spins, and introduced the per-spin sn_ratio parameter for the NOE, relaxation curve-fitting and relaxation dispersion analyses. * Added the new select.sn_ratio and deselect.sn_ratio user functions to change the selection status of spins according to their signal to noise ratio. * Expansion of the grace.write user function to handle both first and last point normalisation for reasonable R1 curves in saturation recovery experiments. * Conversion of the structure.align, structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose and structure.web_of_motion user functions to a standardised pipes/models/molecules/atom_id argument design to allow the user functions to operate on different data pipes, different structural models and different molecules simultaneously and to restrict operation to a subset of all spins. This is also used by the new structure.atomic_fluctuations user function. * Addition of the displace_id argument to the structure.align and structure.superimpose user functions to allow finer control over which atoms are translated and rotated by the algorithm independently of the align_id atom ID for selecting atoms used in the superimposition. 
* Large improvement for the PDB molecule identification code affecting the structure.read_pdb user function allowing discontinuous ATOM and HETATM records with the same chain ID to be loaded as the same molecule. * Creation of the lib.plotting package for assembling all of the data plotting capabilities of relax into a unified software independent API. * Implementation of the new structure.atomic_fluctuations user function for creating text output or Gnuplot graphs of the correlation matrix of interatomic distance, angle or parallax shift fluctuations, measured as sample standard deviations, between different molecules. * The implementation of ordinary least squares fitting. * Improvements for the pcs.corr_plot and rdc.corr_plot user functions. * The implementation of the Needleman-Wunsch pairwise sequence alignment algorithm using the BLOSUM62, PAM250 and NUC 4.4 substitution matrices for more advanced 3D structural alignments via the structure.align user function. The Needleman-Wunsch algorithm is implemented as in the EMBOSS software to allow for gap opening and extension penalties as well as end penalties. This is also used in all the other structure user functions dealing with multiple molecules - structure.atomic_fluctuations, structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose, structure.web_of_motion. * Improved support for PDB secondary structure metadata for the structure.read_pdb and structure.write_pdb user functions. Changes: * Added a sentence to the start of the citation chapter about http://www.nmr-relax.com links. This is to convince people to more freely use this URL. In that way, the relax search engine ranking should be significantly increased. And it will be easier for new users to get into relax. * Removing the automatic function for error analysis per field in the relaxation dispersion auto-analysis. This function has been moved into pipe_control/spectrum.py. 
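As background to the Needleman-Wunsch feature above, the core of global pairwise alignment is a simple dynamic programming recurrence. The sketch below is deliberately simplified: it uses a single linear gap penalty and a scalar match/mismatch score instead of the BLOSUM62/PAM250/NUC 4.4 substitution matrices and the affine gap opening/extension penalties of the EMBOSS-style implementation described above:

```python
def needleman_wunsch_score(seq1, seq2, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score via the Needleman-Wunsch recurrence."""
    n, m = len(seq1), len(seq2)
    # score[i][j] = best score for aligning seq1[:i] against seq2[:j].
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap        # seq1 prefix aligned against all gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap        # seq2 prefix aligned against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq1[i - 1] == seq2[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]
```

Replacing the scalar match/mismatch score with a lookup into a substitution matrix such as BLOSUM62, and the single gap term with separate opening and extension penalties, yields the behaviour described for the structure user functions.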
* Added the function pipe_control.error_analysis_per_field(), as an automatic way of submitting subset IDs per field for error analysis. * For the pipe_control.spectrum.error_analysis_per_field() function, added an additional printout of the subset IDs used for error analysis. * In the auto_analysis.relax_disp module, used the new spectrum.error_analysis_per_field user function to calculate the peak intensity errors. * Reinserted the error_analysis() function in the auto class of relaxation dispersion. This function only checks if the error analysis has not been performed before, and then decides to call the user function spectrum.error_analysis_per_field(). The implementation can be tested with the Relax_disp.test_estimate_r2eff_err_auto system test. * In pipe_control.spectrum.error_analysis_per_field(), removed the checks which would stop the calculation of the errors. This function will now always run, which will make it possible for the user to try different error calculations. * Copy of the system test script peak_lists.py to spectrum.py. This is for the implementation of the calculation of the signal to noise ratio, selection and deselection. * Initialised the first test in the Spectrum system test class. This simply loads some intensity data and checks it. The system test Spectrum.test_signal_noise_ratio will be expanded to test the calculation of the signal to noise ratio. * Added the Spectrum system test class to the init file, so these system tests can be executed. * Added the pipe_control.spectrum.signal_noise_ratio() backend function, for the calculation of the signal to noise ratio per spin. * Added the system test Spectrum.test_grace_int, to test plotting the intensity per residue. This is to prepare for a Grace plot of the signal to noise level per residue. Also added additional tests for the signal to noise ratio calculation in the system test Spectrum.test_signal_noise_ratio. 
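The signal to noise calculation itself is simple: per spin, each measured peak intensity is divided by the noise estimate for the spectrum it came from. A sketch with hypothetical data structures (dictionaries keyed by spectrum ID, not relax's actual spin containers):

```python
def signal_noise_ratios(intensities, noise):
    """Per-spectrum signal to noise ratios for a single spin.

    intensities and noise are dictionaries keyed by spectrum ID, e.g.
    {"600MHz": 100.0} and {"600MHz": 5.0}.
    """
    return {sid: float(value) / noise[sid] for sid, value in intensities.items()}
```

The per-spin sn_ratio parameter described above would then simply store this dictionary of ratios on each spin.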
* Added the system test Spectrum.test_grace_sn_ratio to help implement plotting of the signal to noise ratio per residue. * Added the common API parameter structure 'sn_ratio' in parameter_object. * For the specific analyses of "noe", "relax_disp", and "relax_fit", initialised the sn_ratio parameter structure. * Added float conversion around the values in the signal_noise_ratio() function. * Made the user function spectrum.sn_ratio smaller. * Added two new system tests, Spectrum.test_deselect_sn_ratio_all and Spectrum.test_deselect_sn_ratio_any. These test the user function deselect.sn_ratio, to deselect spins with a signal to noise ratio lower than the specified ratio. * Added the pipe_control.spectrum.sn_ratio_deselection() function, to deselect spins according to the signal to noise ratio. The function is flexible, since it is possible to use different comparison operators, and the function can be switched so that a selection is made instead. * Added the new deselect.sn_ratio user function to deselect spins according to their signal to noise ratio. * Added the new backend function pipe_control.spectrum.sn_ratio_selection(). This is to select spins with a signal to noise ratio higher or lower than the specified ratio. * Added two new system tests, Spectrum.test_select_sn_ratio_all and Spectrum.test_select_sn_ratio_any. These test the select.sn_ratio user function. * Added the new select.sn_ratio user function to select spins with a signal to noise ratio above a specified ratio. The default ratio for signal to noise selection is 10.0, but should probably be 50-100 instead. The default of 'all_sn' is True, meaning that all signal to noise ratios for the spins need to pass the test. * Small fix for the standard values in the user function deselect.sn_ratio. The standard values will deselect spins which have at least one signal to noise ratio lower than 10.0. * Small fix for the backends of spectrum sn_ratio_selection() and sn_ratio_deselection(). The standard values have been changed. 
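The all_sn switch and the comparison-operator flexibility described above amount to combining a per-ratio comparison with either all() or any(). A hedged sketch (illustrative only, not the real sn_ratio_selection() signature):

```python
import operator

def passes_sn_test(ratios, threshold=10.0, comparison=operator.gt, all_sn=True):
    """True if a spin's signal to noise ratios pass the comparison test.

    With all_sn=True every ratio must pass; with all_sn=False a single
    passing ratio is enough.
    """
    combine = all if all_sn else any
    return combine(comparison(r, threshold) for r in ratios)
```

Selection and deselection then only differ in whether spins that pass are kept selected or deselected, and swapping the comparison for operator.lt inverts the test.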
* Fix for the window size in the user function dx.map. The size of the window was not compatible with the latest change. * Documentation fix in the manual for the lower and upper bounds of the parameters in the grid search. * Documentation fix in the manual for the lower and upper bounds of the parameters in the minimisation. * Documentation fix in the manual for the scaling values of the parameters in the minimisation. The scaling helps the minimisers to take the same step size for all parameters when moving in the chi2 space. * Added a devel script which can quickly convert oxygen icons to the desired sizes. * Extended the devel script image size converter. * Added new oxygen icons in all needed sizes. * Comment fix in the user functions select.sn_ratio and deselect.sn_ratio. * Important fix for the spectrum.error_analysis_per_field user function. This is for the compilation of the user manual. The possessive apostrophe should not be used in the text "spectrum ID's". This grammar error triggers an unfortunate bug in the docstring fetching script docs/latex/fetch_docstrings.py whereby the script thinks that ' is the start of a quote. * Added a compressed EPS version of the 128x128/actions/document-preview-archive Oxygen icon. The EPS bounding box was manually changed to 0 0 18 18 in a text editor. The scanline translation parameters were also fixed by changing them all to 18 as well. This allows the icon to be used in the relax manual. * Fix for the blacklist objects in data_store.data_classes.Element.to_xml(). The class blacklist variable was not being taken into account. * Added the norm_type argument to the grace.write user function. This is in response to http://thread.gmane.org/gmane.science.nmr.relax.devel/7392/focus=7438. This norm_type argument can either be 'first' or 'last' to allow different points of the plot to be the normalisation factor. The default of 'first' preserves the old behaviour of first point normalisation. 
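The effect of the norm_type argument can be sketched as follows: the curve is divided by either its first or its last intensity, the latter giving sensibly normalised curves when the steady-state value is the largest point, as in saturation recovery experiments (illustrative code, not the grace.write backend itself):

```python
def normalise_curve(intensities, norm_type="first"):
    """Normalise a list of peak intensities to the first or last point."""
    if norm_type == "first":
        reference = intensities[0]
    elif norm_type == "last":
        reference = intensities[-1]
    else:
        raise ValueError("norm_type must be 'first' or 'last'")
    return [value / reference for value in intensities]
```

For a decaying curve, 'first' yields values starting at 1.0 and falling towards 0; for a recovery curve, 'last' yields values rising towards 1.0 at the steady state.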
* The relax_fit_saturation_recovery.py system test script now sets the norm_type argument. This is for testing out this new option for the grace.write user function. * The new grace.write user function norm_type argument has been activated. The argument is now passed from pipe_control.grace.write into the write_xy_data() function of the lib.software.grace module, and is used to select which point to use for the normalisation. * The relaxation exponential curve-fitting auto-analysis now sets the normalisation type. This is for the new grace.write user function. If the model for all spins is set to 'sat', then the norm_type will be set to 'last'. This allows for reasonable normalised curves for the saturation recovery R1 experiment types. * Change for norm_type variable in the relaxation exponential curve-fitting auto-analysis. This is now set to 'last', not only for the saturation recovery, but now also for the inversion recovery experiment types. This ensures that the normalisation point is the steady state magnetisation peak intensity. * Cleared the list of blacklisted objects for the cdp.exp_info data structure. The data_store.exp_info.ExpInfo class blacklist variable had previously not been used. But after recent changes, the list was now active. As all the contents of the container were blacklisted, the container was being initialised as being empty when reading the XML formatted state or results files. Therefore the blacklist is now set to an empty list. * Improvements for all of the tables of the relaxation dispersion chapter of the manual. The captions are now the full width (or height for rotated tables) of the page in the PDF version of the manual. The \latex{} command from the latex2html package has been used to improve the HTML versions of the tables by deactivating the landscape environment, the cmidrule command, and the caption width commands. This results in properly HTML formatted tables, rather than creating a PNG image for the whole table. 
These should significantly improve the tables in the webpages http://www.nmr-relax.com/manual/Comparison_of_dispersion_analysis_software.html, http://www.nmr-relax.com/manual/The_relaxation_dispersion_auto_analysis.html, and http://www.nmr-relax.com/manual/Dispersion_model_summary.html. * Created the Structure.test_align_molecules system test. This will be used to extend the functionality of the structure.align user function to be able to align different molecules in the same data pipe, rather than requiring either models or identically named structures in different data pipes. * Modified the Structure.test_align_molecules system test. This now simultaneously checks both the pipes and molecules arguments to the structure.align user function. * More changes for the new Structure.test_align_molecules system test. * Some more fixes for the Structure.test_align_molecules system test. * Change to the Structure.test_align system test. The molecules argument for the structure.align user function has been changed to match the models argument, in that it now needs to be a list of lists with the first dimension matching the pipes argument. This change is to help with the implementation of the new structure.align functionality. * Implemented the new molecules argument for the structure.align user function. In addition to accepting the new argument, the user function backend has been redesigned for flexibility. The assembly of coordinates and final rotations and translations now consist of three loops over desired data pipes, all models, and all molecules. If the models or molecules arguments are supplied, then the models or molecules in the loop which do not match are skipped. This logic simplifies and cleans up the backend. * Created the Structure.test_rmsd_molecules system test. This will be used to implement a new molecules argument for the structure.rmsd user function so that the RMSD between different molecules rather than different models can be calculated. 
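The three-loop design described above for the redesigned backend (over desired data pipes, then all models, then all molecules, skipping anything not matching the supplied arguments) can be sketched as a generator over a hypothetical nested data layout, not relax's actual structural object:

```python
def structure_loop(pipes, models=None, molecules=None):
    """Yield (pipe, model, molecule) triples matching the optional filters.

    models and molecules, when given, are per-pipe lists mirroring the
    pipes/models/molecules argument design described above.
    """
    for index, pipe in enumerate(pipes):
        for model in pipe["models"]:
            # Skip models not listed for this pipe.
            if models is not None and model["num"] not in models[index]:
                continue
            for mol in model["mols"]:
                # Skip molecules not listed for this pipe.
                if molecules is not None and mol not in molecules[index]:
                    continue
                yield pipe["name"], model["num"], mol
```

With no filters supplied, every pipe, model and molecule is visited; supplying per-pipe model or molecule lists restricts the iteration, which is the logic that simplifies the backend.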
* Implemented the new molecules argument for the structure.rmsd user function. This allows the RMSD between different molecules rather than different models to be calculated, extending the functionality of this user function. * Created the Structure.test_displacement_molecules system test. This will be used to implement the new molecules argument for the structure.displacement user function. * Implemented the molecules argument for the structure.displacement user function. This allows the displacements (translations and rotations) to be calculated between different molecules rather than different models. This information is stored in the dictionaries of the cdp.structure.displacement object with the keys set to the molecule list indices. * Created the Structure.test_find_pivot system test. This is to check the structure.find_pivot user function as this algorithm is currently not being checked in the test suite. * Created the Structure.test_find_pivot_molecules system test. This will be used to implement support for a molecules argument in the structure.find_pivot user function so that different molecules rather than different models can be used in the analysis. * Increased the precision of pivot optimisation in the Structure.test_find_pivot_molecules system test. * Implemented the molecules argument for the structure.find_pivot user function. This allows the motional pivot optimisation between different molecules rather than different models. * Shifted the atomic assembly code from the structure.align user function into its own function. The new function assemble_coordinates() of the pipe_control.structure.main module will be used to standardise the process of assembling atomic coordinates for all of the structure user functions. This will improve the support for comparing different molecules rather than different models as missing atoms or divergent primary sequence are properly handled, and it has multi-pipe support. 
* Changed the argument order for the structure.align user function. The standardised order will now be pipes, models, molecules, atom_id, etc. * Converted the structure.find_pivot user function to the new pipes/models/molecules/atom_id design. This allows the motional pivot algorithm to work on atomic coordinates from different data pipes, different structural models, and different molecules. The change allows the Structure.test_find_pivot_molecules system test to now pass, as missing atomic data is now correctly handled. The user function backend uses the new pipe_control.structure.main.assemble_coordinates() function. The Structure.test_find_pivot and Structure.test_find_pivot_molecules system tests have been updated for the user function argument changes. * Shift of the atomic coordinate assembly code into the relax library. Most of the pipe_control.structure.main.assemble_coordinates() function has been shifted into the assemble_coord_array() function of the new lib.structure.internal.coordinates module. The pipe_control function now only checks the arguments and assembles the structural objects from the relax data store, and then calls assemble_coord_array() to do all of the work. This code abstraction increases the usefulness of the atomic coordinate assembly and allows it to be significantly expanded in the future, for example by being able to take sequence alignments into consideration. * Tooltip standardisation for the structure.align and structure.find_pivot user functions. * The coordinate assembly function now returns list of unique IDs. This is for each structural object, model and molecule. * Changed the structure ID strings returned by the assemble_coord_array() function. This is from the lib.structure.internal.coordinates module. The structural object name is only included if more than one structural object has been supplied. * More improvements for the structure ID strings returned by the assemble_coord_array() function. 
* Converted the internal structural displacement object to use unique IDs rather than model numbers. This allows the object to be much more flexible in what types of structures it can handle. This is in preparation for a change in the structure.displacement user function.
* Converted the structure.displacement user function to the new pipes/models/molecules/atom_id design. This allows the displacements to be calculated between atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend has been hugely simplified as it now uses the new pipe_control.structure.main.assemble_coordinates() function. The Structure.test_displacement system test has been updated for the user function argument changes.
* Another refinement for the structure ID strings returned by the assemble_coord_array() function.
* Updated the Structure.test_displacement_molecules system test for the changes to the structure.displacement user function.
* Docstring spelling fixes for the steady-state NOE and relaxation curve-fitting auto-analyses.
* Converted the structure.rmsd user function to the new pipes/models/molecules/atom_id design. This allows the RMSD calculation to work on atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend uses the new pipe_control.structure.main.assemble_coordinates() function. The Structure.test_rmsd_molecules system test has been updated for the user function argument changes.
* Created the internal structural object model_list() method to simplify the assembly of a list of all current models in the structural object.
* Converted the structure.superimpose user function to the new pipes/models/molecules/atom_id design. The user function arguments have not changed; however, the backend now uses the new pipe_control.structure.main.assemble_coordinates() function.
This simply decreases the number of possible failure points in the structure user functions. The change has no effect on the user function usage or results.
* Documentation fix for the assemble_coord_array() function. The return values for lib.structure.internal.coordinates.assemble_coord_array() were incorrectly documented.
* Modified the Structure.test_bug_22070_structure_superimpose_after_deletion system test. This now calls the structure.align user function after calling the structure.superimpose user function to better test a condition that can trigger bugs.
* Fixes for the structure.superimpose and structure.align user functions. The fit_to_mean() and fit_to_first() functions of lib.structure.superimpose were being incorrectly called, in that they expect a list of elements and not lists of lists.
* Code refactorisation for the structure.align user function backend. The looping over data pipes, model numbers, and molecule names, skipping those that don't match the function arguments, has been shifted into the new structure_loop() generator function of the pipe_control.structure.main module. This function assembles the data from the data store and then calls the new loop_coord_structures() generator function of the lib.structure.internal.coordinates module, which does all of the work.
* Some docstring expansions for the pipe_control.structure.main module functions.
* Refactored the descriptions of a number of structure user functions, including the structure.align, structure.displacement, structure.find_pivot, structure.rmsd and structure.superimpose user functions. The paragraph_multi_struct and paragraph_atom_id module strings have been created and are shared as two paragraphs for each of these user function descriptions. This standardises the pipe/model/molecule/atom_id descriptions. The user function wizard page sizes have been updated for these changes.
* Changed the design of the lib.structure.internal.coordinates.assemble_coord_array() function.
The elements_flag argument has been renamed to seq_info_flag. If this is set, then in addition to the atomic elements, the molecule name, residue name, residue number, and atom name are now assembled and returned. This information is now the common information between the structures, hence the return values for the elements are a list of strings rather than a list of lists. All of the code in pipe_control.structure.main has been updated for the change.
* Fix for the structure.align user function if no data pipes are supplied. The pipes list was no longer being created, as it was shifted to the assemble_coordinates() function; however, it is required for the translation and rotation function calls.
* Converted the structure.web_of_motion user function to the new pipe/model/molecule/atom_id design. This allows the web of motion representation to work on atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend uses the new pipe_control.structure.main.assemble_coordinates() function to assemble the common atom coordinates, molecule names, residue names, residue numbers, atom names and elements. All this information is then used to construct the new web of motion PDB file, hence the entire backend has been rewritten. The Structure.test_web_of_motion_12, Structure.test_web_of_motion_13, and Structure.test_web_of_motion_all system tests have all been updated for the changed structure.web_of_motion user function arguments. In addition, the system tests Structure.test_web_of_motion_12_molecules, Structure.test_web_of_motion_13_molecules and Structure.test_web_of_motion_all_molecules have been created as copies of the other tests, but with the 3 structures loaded as different molecules.
* Fix for the IDs returned by lib.structure.internal.coordinates.assemble_coord_array().
The list of unique structure IDs was being incorrectly constructed if multiple molecules are present but the molecules argument was not supplied; it would be of a different size to the coordinate data structure.
* Fix for the Structure.test_displacement system test for the assemble_coord_array() function bugfix.
* Modified the Structure.test_align system test to show a failure of the structure.align user function. The alignment causes all atoms in the structural object to be translated and rotated, whereas it should only operate on the atoms of the atom_id argument.
* Modified the Structure.test_superimpose_fit_to_mean system test. This is also to demonstrate a bug, this time in the structure.superimpose user function, in which the algorithm causes a translation and rotation of all atoms rather than just those selected by the atom_id argument.
* Modified some system tests of the structure.align and structure.superimpose user functions. The displace_id argument has been introduced for both of these user functions for finer control over which atoms are translated and rotated by the algorithm. This allows structures, for example, to be aligned based on a set of backbone heavy atoms while the protons and side chains are displaced by default. Or, if a domain is aligned, then just that domain can be displaced.
* Added the displace_id argument to the structure.align and structure.superimpose user functions. This gives both of these user functions finer control over which atoms are translated and rotated by the algorithm. This allows structures, for example, to be aligned based on a set of backbone heavy atoms while the protons and side chains are displaced by default. Or, if a domain is aligned, then just that domain can be displaced.
* Fixes for the Structure.test_superimpose_fit_to_mean system test for the displace_id argument.
* Modified the Structure.test_align_molecules system test to catch a bug.
This is the failure of the displace_id argument of the structure.align user function when the molecules argument is supplied - all atoms are being displaced instead of a subset.
* Fix for the displace_id and molecules arguments of the structure.align user function. The atom ID used for the translations and rotations is now properly constructed from the molecule names in the molecules list and the displace_id string.
* Changes for water in the PDB file created by the structure.write_pdb user function. Waters with the residue name 'HOH' are no longer output to HET records.
* Improvement for the structure.read_pdb user function. The helix and sheet secondary structure reading now takes the real_mol argument into account to avoid reading in too much information.
* Improvement for the merge argument of the structure.read_pdb user function. This argument is now overridden if the molecule to merge to does not exist. This allows the merge flag to be used together with read_mol and set_mol_name set to lists.
* Fix for the selective secondary structure reading of the structure.read_pdb user function. The molecule index needs to be incremented by 1 to give the molecule number.
* Large improvement for the PDB molecule identification code. This affects the structure.read_pdb user function. Now the chain ID code, if present in the PDB file, is used to determine which ATOM and HETATM records belong to which molecule. All of the records for each molecule are stored until the end, when they are all yielded. This allows for discontinuous chain IDs throughout the PDB file, something which occurs often with the HETATM records.
* Expanded the displace_id argument for the structure.align user function. This can now be a list of atom IDs, so that any atoms can be rotated together with the structure being aligned. This is useful if the molecules argument is supplied.
* Fix for the Noe.test_bug_21562_noe_replicate_fail system test.
This is for the changed behaviour of the structure.read_pdb user function. The problem is that the PDB file read in this test has the chain ID set to X. This broken PDB file causes molecule numbering problems.
* Expanded the description of the structure.rmsd user function.
* Changed the paragraph ordering in the documentation of a number of the structure user functions, including the structure.align, structure.displacement, and structure.find_pivot user functions.
* Fix for the prompt examples documentation for the structure.align user function.
* Improved the sizing layout of the structure.align user function GUI dialog.
* Improved the sizing layout of the structure.superimpose user function GUI dialog.
* Created the Structure.test_atomic_fluctuations system test. This will be used to implement the idea of the structure.atomic_fluctuations user function.
* Implemented the structure.atomic_fluctuations user function. This is loosely based on, and related to, the structure.web_of_motion user function. The user function writes to file a correlation matrix of interatomic distance fluctuations.
* Created 4 unit tests for the lib.io.swap_extension function, in preparation for implementing the function.
* Implemented the lib.io.swap_extension() function. This is confirmed to be fully functional by its four unit tests.
* Created the empty lib.plotting package. This follows from the thread at http://thread.gmane.org/gmane.science.nmr.relax.devel/7444. The package will be used for assembling all of the data plotting capabilities of relax. It will make support for different plotting software - Grace, OpenDX, matplotlib, gnuplot, etc. - more coherent. This will be used to create a software-independent API for plotting in relax, i.e. the plotting software is chosen by the user, the data output by the user function passes into the lib.plotting API, and this is then passed on to the software-dependent backend in lib.plotting.
* Created the Structure.test_atomic_fluctuations_gnuplot system test. This checks the operation of the structure.atomic_fluctuations user function when the output format is set to 'gnuplot'. This will be used to implement this option. The current gnuplot script expected by this test is just a very basic starting script for now.
* Created the lib.plotting API function correlation_matrix(). This is the lib.plotting.api.correlation_matrix() function. It will be used for the visualisation of rank-2 correlation matrices. The current basic API design uses a dictionary of backend functions (currently empty) for calling the backend.
* Implemented a very basic gnuplot backend for the correlation_matrix() plotting API function. This is in the new lib.plotting.gnuplot module. It creates an incredibly basic gnuplot script for visualising the correlation matrix, assuming a text file has already been created.
* Enabled the gnuplot format for the structure.atomic_fluctuations user function. This uses the plotting API correlation_matrix() function for visualisation. The change allows the Structure.test_atomic_fluctuations_gnuplot system test to pass.
* Shifted the matrix output of the structure.atomic_fluctuations user function into lib.plotting.text. The new lib.plotting.text module will be used by the relax library plotting API to output data in plain text format. The current correlation_matrix() function, which has been added to the API correlation_matrix() function dictionary, simply contains the file writing code of the structure.atomic_fluctuations user function. This significantly simplifies the user function.
* More simplifications for the structure.atomic_fluctuations user function backend.
* Fix for the structure.atomic_fluctuations user function backend. The pipe_control.structure.main.atomic_fluctuations() function no longer opens the output file.
* The gnuplot correlation_matrix() plotting API function now creates a text file of the data.
The lib.plotting.gnuplot.correlation_matrix() function now calls the lib.plotting.text.correlation_matrix() function prior to creating the gnuplot script.
* Significantly expanded the gnuplot script created via the correlation_matrix() plotting API function. This is for the structure.atomic_fluctuations user function. The output terminal is now set to EPS, the colour map has been changed from the default to a blue-red map, labels have been added, the plot is now square, and comments are now included throughout the script to help a user hand-modify it after creation.
* Improvement in the comments from the gnuplot correlation_matrix() plotting API function.
* Updated the Structure.test_atomic_fluctuations_gnuplot system test for the gnuplot correlation_matrix() plotting API changes, which affect the structure.atomic_fluctuations user function.
* Docstring fixes for the Structure.test_atomic_fluctuations_gnuplot system test. This was pointing to the structure.rmsd user function instead of structure.atomic_fluctuations.
* Fixes and improvements for the gnuplot correlation_matrix() plotting API function. This is for the structure.atomic_fluctuations user function. The "pm3d map" plot type is incorrect for this data type, so 'plot' is now being used instead of 'splot'. The resultant EPS file is now much smaller. The colour map has also been changed to one of the inbuilt ones for higher contrast.
* Forced the gnuplot correlation_matrix plot to be square. This is for the correlation_matrix() plotting API function used by the new structure.atomic_fluctuations user function.
* Updated the Structure.test_atomic_fluctuations_gnuplot system test for the changes to the gnuplot correlation_matrix() plotting API function used by the structure.atomic_fluctuations user function.
* Docstring fix for the Structure.test_atomic_fluctuations system test.
* Another docstring fix for the Structure.test_atomic_fluctuations system test.
* Created the Structure.test_atomic_fluctuations_angle system test. This will be used to implement the mapping of inter-atomic vector angular fluctuations between structures via a new 'measure' keyword argument for the structure.atomic_fluctuations user function.
* Implemented angular fluctuations for the structure.atomic_fluctuations user function. This adds the measure argument to the user function to allow either the default of 'distance' or the 'angle' setting to be chosen. The implementation is confirmed by the Structure.test_atomic_fluctuations_angle system test, which now passes.
* Clean-ups and speed-ups of the structure.atomic_fluctuations user function. Duplicate calculations are now avoided, as the SD matrix is symmetric.
* Description improvements and GUI layout fixes for the structure.atomic_fluctuations user function.
* Added the 'parallax shift' measure to the structure.atomic_fluctuations user function. The parallax shift is defined as the length of the average vector minus the interatomic vector. It is similar to the angle measure; however, importantly, it is independent of the distance between the two atoms.
* Updated the gnuplot scripts to be executable. These are the scripts created by the gnuplot-specific correlation_matrix() plotting API function. The file is made executable and the script now starts with "#!/usr/bin/env gnuplot".
* Created the Structure.test_atomic_fluctuations_parallax system test. This is to demonstrate that the parallax shift fluctuations are not implemented correctly.
* Fix for the Structure.test_atomic_fluctuations_parallax system test. The distance shifts need to be numbers, not vectors.
* Proper implementation of the 'parallax shift' for the structure.atomic_fluctuations user function.
* Improved the structure.atomic_fluctuations user function documentation. The fluctuation categories are now better explained, and the 'parallax shift' option is now available in the GUI.
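For the default 'distance' measure described above, each matrix element is the standard deviation of one interatomic distance across the set of structures, and the matrix is symmetric so only the upper triangle needs computing. A minimal sketch of that calculation (an illustration of the idea, not relax's actual structure.atomic_fluctuations backend):

```python
import numpy as np

def distance_fluctuations(coords):
    """SD of all interatomic distances over a set of paired structures.

    coords is an (N, atoms, 3) array-like; returns a symmetric
    (atoms, atoms) matrix of standard deviations.
    """
    coords = np.asarray(coords, dtype=float)
    n = coords.shape[1]
    matrix = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):   # symmetric, so upper triangle only
            # The i-j distance in each of the N structures.
            dists = np.linalg.norm(coords[:, i] - coords[:, j], axis=1)
            matrix[i, j] = matrix[j, i] = dists.std()
    return matrix
```

The 'angle' and 'parallax shift' measures follow the same pattern, with the distance replaced by the angular deviation of the interatomic vector, or its shift relative to the average vector, respectively.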
* Fix for the parallax shift description in the structure.atomic_fluctuations user function. The parallax shift is not quite orthogonal to the distance fluctuations.
* Implemented the ordinary_least_squares function for the repeated auto-analysis. Inspection of statistics books shows that several authors do not recommend using regression through the origin (RTO). From Joseph G. Eisenhauer, "Regression through the Origin": RTO residuals will usually have a nonzero mean, because forcing the regression line through the origin is generally inconsistent with the best fit; the R-squared measure (for RTO) is the proportion of the variability in the dependent variable "about the origin" explained by the regression, and this cannot be compared to R-squared for models which include an intercept. From "Experimental Design and Data Analysis for Biologists", G. P. Quinn and M. J. Keough: the minimum observed xi rarely extends to zero, and forcing our regression line through the origin not only involves extrapolating the regression line outside our data range but also assuming the relationship is linear outside this range (Cade & Terrell 1997, Neter et al. 1996); we recommend that it is better to have a model that fits the observed data well than one that goes through the origin but provides a worse fit to the observed data; residuals from the no-intercept model no longer sum to zero; the usual partition of SSTotal into SSRegression and SSResidual does not work.
* Added a save state for the test of bug #23186 (https://gna.org/bugs/index.php?23186): Error calculation of individual parameter "dw" from Monte-Carlo is based on first spin.
* Added the system test Relax_disp.test_bug_23186_cluster_error_calc_dw, which shows the failure of the Monte Carlo simulation error calculations. Bug #23186 (https://gna.org/bugs/index.php?23186): Error calculation of individual parameter "dw" from Monte-Carlo is based on first spin.
* Added an additional test for the r2a parameter.
Bug #23186 (https://gna.org/bugs/index.php?23186): Error calculation of individual parameter "dw" from Monte-Carlo is based on first spin.
* Attempt to implement the GUI test General.test_bug_23187_residue_delete_gui. This will NOT catch the error. Bug #23187 (https://gna.org/bugs/index.php?23187): Deleting a residue in the GUI and then opening the spin viewer crashes relax.
* Added a test for the spin-independent error of k_AB. Bug #23186 (https://gna.org/bugs/index.php?23186): Error calculation of individual parameter "dw" from Monte-Carlo is based on first spin.
* Fix for the showing of the spin viewer window in the GUI tests. The show_tree() method of the main GUI window class was not calling the custom self.spin_viewer.Show() method, as required to set up the observer objects needed to keep the spin viewer window updated; the value of status.show_gui was blocking this. Instead, the show argument of this Show() method is now set to status.show_gui to allow the method to always be executed.
* Updated the main relax copyright notices for 2015.
* The copyright notice in the GUI now uses the info box object. This is for the status bar at the bottom of the GUI window. This removes one place where copyright notices need to be updated each year; the status text will now be updated whenever the info.py file is updated.
* Updated the copyright notice for 2015 in the GUI splash screen graphic.
* Racing fixes for the General.test_bug_23187_residue_delete_gui GUI test. Some GUI interpreter flush() calls have been added to avoid racing in the GUI. The GUI tests are so quick that the asynchronous user function call will be processed at the same time as the spin viewer window is being created, causing fatal segmentation faults in the test suite.
* More robustness for the spin viewer GUI window prune_*() methods. When no spin data exists, the self.tree.GetItemPyData(key) call can return None.
This is now being checked for, and such None values are skipped in the prune_mol(), prune_res() and prune_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
* More robustness for the spin viewer GUI window update_*() methods. When no spin data exists, the self.tree.GetItemPyData(key) call can return None. This is now being checked for, and such None values are skipped in the update_mol(), update_res() and update_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
* More robustness for the spin viewer GUI window prune_*() methods. The data returned from the self.tree.GetItemPyData(key) call can, in rare racing cases, not contain the 'id' key. This is now being checked for, and such entries are skipped in the prune_mol(), prune_res() and prune_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
* More robustness for the spin viewer GUI window update_*() methods. The data returned from the self.tree.GetItemPyData(key) call can, in rare racing cases, not contain the 'id' key. This is now being checked for, and such entries are skipped in the update_mol(), update_res() and update_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
* Created a development document for catching segfaults and other errors in the GUI tests. This is needed as not all wxPython errors can be caught in the Python unittest framework.
* Small whitespace formatting fix for the titles printed by the align_tensor.display user function.
* Improvements for the plots created by the pcs.corr_plot user function. The axes now have labels, and have the range and number of ticks set to reasonable values.
* Improvements for the pcs.corr_plot user function - the plot range is now determined by the data.
* Improvements for the rdc.corr_plot user function - the plot range is now determined by the data.
* Added a save state for testing the implementation of the error analysis. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Simplified the system test Relax_disp.test_task_7882_monte_carlo_std_residual to just test the creation of Monte Carlo data where errors are drawn from the reduced chi2 distribution. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Extended the monte_carlo.create_data user function to draw errors from the reduced chi2 Gauss distribution as found by the best fit. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added to the backend pipe_control.error_analysis() the modification of data points with errors drawn from the reduced chi2 Gauss distribution. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added an empty API method to return errors from the reduced chi2 distribution. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added an API function in the relaxation dispersion analysis to return the error structure from the reduced chi2 distribution. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Temporary test of making a confidence interval as described in the fitting guide. This is the system test Relax_disp.x_test_task_7882_kex_conf, which is not activated by default. Running the test interestingly shows that there is a possibility for a lower global kex, but the value only differs from kex=1826 to kex=1813. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Change to the system test Relax_disp.x_test_task_7882_kex_conf(). This is just a temporary system test to check for local minima. This is the method in the Graphpad regression book, http://www.graphpad.com/faq/file/Prism4RegressionBook.pdf, pages 109-111. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* An error is now raised if the R2eff model is used when drawing errors from the fit. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added to the system test Relax_disp.test_task_7882_monte_carlo_std_residual() a test that errors are raised if the R2eff model is selected. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added a test of the "distribution" argument in pipe_control.error_analysis.monte_carlo_create_data(). This is to make sure that a wrong argument is not passed into the function. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Extended the monte_carlo.create_data user function to allow for the definition of the STD to use in the Gauss distribution. This is for the creation of Monte Carlo simulations where one has perhaps gained information about the expected errors of the data points which are not measured.
Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added to the backend pipe_control.error_analysis.monte_carlo_create_data() the argument 'fixed_error' to allow for a fixed input of the error to the Gauss distribution. A range of checks has been inserted to make sure the function behaves as expected. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added to pipe_control.error_analysis.monte_carlo_create_data() the creation of data points for a fixed distribution. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added to the system test Relax_disp.test_task_7882_monte_carlo_std_residual() tests for the creation of Monte Carlo data by different methods. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* In pipe_control.error_analysis.monte_carlo_create_data(), if the data is of list type or an ndarray, the data point is now modified according to the fixed error if the distribution is set to 'fixed'. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Expanded the STD acronym to its meaning, standard deviation. This is in the monte_carlo.create_data user function. Task #7882 (https://gna.org/task/?7882): Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
* Added a RelaxWarning printout to the dep_check module if wxPython 2.8 or less is encountered. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/7502. The warning text is simply written to STDERR as relax starts.
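The idea behind the task #7882 items above is that Monte Carlo data sets can be created with Gaussian noise whose width comes from the fit residuals rather than from measured errors. A minimal sketch, assuming an unweighted fit so the width is the standard deviation of the residuals, sqrt(chi2 / (n - p)) (the function name is hypothetical and this is not relax's monte_carlo.create_data backend):

```python
import numpy as np

def monte_carlo_data(back_calc, residuals, n_params, sims=500, seed=None):
    """Create Monte Carlo data sets with noise derived from the residuals.

    back_calc is the best-fit back-calculated data (length n),
    residuals are the n fit residuals, and n_params is the number of
    optimised parameters p.  Returns a (sims, n) array of simulated data.
    """
    rng = np.random.default_rng(seed)
    residuals = np.asarray(residuals, dtype=float)
    dof = len(residuals) - n_params
    # Gaussian width from the reduced chi-squared of an unweighted fit.
    sd = np.sqrt((residuals**2).sum() / dof)
    # Each simulation: the back-calculated curve plus Gaussian noise.
    return back_calc + rng.normal(scale=sd, size=(sims, len(residuals)))
```

This explains why the R2eff model is excluded: with as many parameters as data points per curve, the degrees of freedom vanish and the residual-based width is undefined.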
* Updated the wxPython version in the relax manual to be 2.9 or higher. This is in the section http://www.nmr-relax.com/manual/Dependencies.html.
* The GUI tests are now skipped for wxPython versions <= 2.8 due to bugs causing fatal segfaults. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/7502. These wxPython versions are simply too buggy.
* Fix for the Relax_disp.test_bug_23186_cluster_error_calc_dw system test on 32-bit and Python <= 2.5 systems.
* Better error handling in the structure.align user function. If no common atoms can be found between the structures, a RelaxError is now raised for better user feedback.
* Created an empty lib.sequence_alignment relax library package. This may be used in the future for implementing more advanced structural alignments (the current method is simply to skip missing atoms; sequence numbering changes are not handled).
* Added the sequence_alignment package to the lib package __all__ list.
* Added the unit testing infrastructure for the new lib.sequence_alignment package.
* Implementation of the Needleman-Wunsch sequence alignment algorithm. This is located in the lib.sequence_alignment.needleman_wunsch module. It is implemented as described in the Wikipedia article https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm.
* Created a unit test for checking the Needleman-Wunsch sequence alignment algorithm. This uses the DNA data from the example in the Wikipedia article at https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm. The test shows that the implementation of the lib.sequence_alignment.needleman_wunsch.needleman_wunsch_align() function is correct.
* Created the lib.sequence_alignment.substitution_matrices module. This is for storing substitution matrices for use in sequence alignment. The module currently only includes the BLOSSUM62 matrix.
* Corrected the spelling of the BLOSUM62 matrix in lib.sequence_alignment.substitution_matrices.
* Fix for the lib.sequence_alignment.substitution_matrices.BLOSUM62_SEQ string.
* Modification of the Needleman-Wunsch sequence alignment algorithm implementation. This is in the lib.sequence_alignment.needleman_wunsch functions. Scoring matrices are now supported, as well as a user-supplied non-integer gap penalty. A bug in the algorithm for walking through the traceback matrix under certain conditions has also been fixed.
* Created the lib.sequence_alignment.align_protein module for the sequence alignment of proteins. This general module currently implements the align_pairwise() function for the pairwise alignment of protein sequences. It provides the infrastructure for specifying gap starting and extension penalties, choosing the alignment algorithm (currently only the Needleman-Wunsch sequence alignment algorithm as 'NW70'), and choosing the substitution matrix (currently only BLOSUM62). The function provides lots of printouts for user feedback.
* Created a unit test for lib.sequence_alignment.align_protein.align_pairwise(). This is to test the pairwise alignment of two protein sequences using the Needleman-Wunsch sequence alignment algorithm, the BLOSUM62 substitution matrix, and a gap penalty of 10.0.
* Added more printouts to the Test_align_protein.test_align_pairwise unit test of the module _lib._sequence_alignment.test_align_protein.
* Fix for the Needleman-Wunsch sequence alignment algorithm when the substitution matrix is absent.
* The lib.sequence_alignment.align_protein.align_pairwise() function now returns data, including both alignment strings as well as the gap matrix.
* Annotated the BLOSUM62 substitution matrix with the amino acid codes for easy reading.
* Updated the gap penalties in the Test_align_protein.test_align_pairwise unit test from the unit test module _lib._sequence_alignment.test_align_protein.
* Modified the Needleman-Wunsch sequence alignment algorithm. The previous attempt was buggy.
The algorithm has been modified to match the logic of the GPL licenced EMBOSS software (http://emboss.sourceforge.net/) to allow for gap opening and extension penalties, as well as end penalties. No code was copied; rather, the algorithm for creating the scoring and penalty matrices, as well as the traceback matrix, was followed. * Added a DNA similarity matrix to lib.sequence_alignment.substitution_matrices. * Added sanity checks to the Needleman-Wunsch sequence alignment algorithm. The residues of both sequences are now checked in needleman_wunsch_align() to make sure that they are present in the substitution matrix. * Added the NUC 4.4 nucleotide substitution matrix from ftp://ftp.ncbi.nih.gov/blast/matrices/. Uracil was added to the table as a copy of T. * Added the header from ftp://ftp.ncbi.nih.gov/blast/matrices/BLOSUM62. This is to document the BLOSUM62 substitution matrix. * Added the PAM 250 amino acid substitution matrix. This was taken from ftp://ftp.ncbi.nih.gov/blast/matrices/PAM250 and added to lib.sequence_alignment.substitution_matrices.PAM250. * Modified the Test_needleman_wunsch.test_needleman_wunsch_align_DNA unit test to pass. This is from the unit test module _lib._sequence_alignment.test_needleman_wunsch. The DNA sequences were simplified so that the behaviour can be better predicted. * Created the Test_needleman_wunsch.test_needleman_wunsch_align_NUC_4_4 unit test. This is in the unit test module _lib._sequence_alignment.test_needleman_wunsch. This tests the Needleman-Wunsch sequence alignment for two DNA sequences using the NUC 4.4 matrix. * Created a unit test for demonstrating a failure in the Needleman-Wunsch sequence alignment algorithm. The test is Test_needleman_wunsch.test_needleman_wunsch_align_NUC_4_4b from the _lib._sequence_alignment.test_needleman_wunsch module. The problem is that the start of the alignment is truncated if any gaps are present. * Fix for the Needleman-Wunsch sequence alignment algorithm. 
The start of the sequences are no longer truncated when starting gaps are encountered. * The needleman_wunsch_align() function now accepts the end gap penalty arguments. These are passed onto the needleman_wunsch_matrix() function. * Added the end gap penalty arguments to lib.sequence_alignment.align_protein.align_pairwise(). * Created the Structure.test_align_CaM_BLOSUM62 system test. This will be used for expanding the functionality of the structure.align user function to perform true sequence alignment via the new lib.sequence_alignment package. The test aligns 3 calmodulin (CaM) structures from different organisms, hence the sequence numbering is different and the current structure.align user function design fails. The structure.align user function has been expanded in the test to include a number of arguments for advanced sequence alignment. * Added support for the PAM250 substitution matrix to the protein pairwise sequence alignment function. This is the function lib.sequence_alignment.align_protein.align_pairwise(). * Bug fix for the Needleman-Wunsch sequence alignment algorithm. Part of the scoring system was functioning incorrectly when the gap penalty scores were non-integer, as some scores were being stored in an integer array. Now the array is a float array. * Created the Test_align_protein.test_align_pairwise_PAM250 unit test. This is in the unit test module _lib._sequence_alignment.test_align_protein. It checks the protein alignment function lib.sequence_alignment.align_protein.align_pairwise() together with the PAM250 substitution matrix. * Small docstring expansion for lib.sequence_alignment.align_protein.align_pairwise(). * Added the sequence alignment arguments to the structure.align user function front end. This includes the 'matrix', 'gap_open_penalty', 'gap_extend_penalty', 'end_gap_open_penalty', and 'end_gap_extend_penalty' arguments. The 'algorithm' argument has not been added to save room, as there is only one choice of 'NW70'. 
A paragraph has been added to the user function description to explain the sequence alignment part of the user function. * Added the sequence alignment arguments to the back end of the structure.align user function. This is to allow the code in trunk to remain functional until the sequence alignment prior to superimposition has been implemented. * Removed the 'algorithm' argument from the Structure.test_align_CaM_BLOSUM62 system test script. This is for the structure.align user function. The argument has not been implemented to save room in the GUI, and as 'NW70' is currently the only choice. * The sequence alignment arguments are now passed all the way to the internal structural object backend. These are the arguments of the structure.align user function. * Created the lib.sequence.aa_codes_three_to_one() function. The lib.sequence module now contains the AA_CODES dictionary which is a translation table for the 3 letter amino acid codes to the one letter codes. The new aa_codes_three_to_one() function performs the conversion. * Implemented the internal structural object MolContainer.loop_residues() method. This generator method is used to quickly loop over all residues of the molecule. * Implemented the internal structural object one_letter_codes() method. This will create a string of one letter residue codes for the given molecule. Only proteins are currently supported. This method uses the new lib.sequence.aa_codes_three_to_one() relax library function. * Sequence alignment is now performed in lib.structure.internal.coordinates.assemble_coord_array(). This is a pairwise alignment to the first molecule of the list. The alignments are not yet used for anything. The assemble_coord_array() function is used by the structure.align user function, as well as a few other structure user functions. * Fix for the lib.sequence.aa_codes_three_to_one() function. Non-standard residues are now converted to the '*' code. The value of 'X' prevents any type of alignment of a str... 
[truncated message content] |
From: Edward d'A. <ed...@do...> - 2014-12-04 10:35:54
|
This is a major feature and bugfix release, finally adding support for the saturation recovery and inversion recovery R1 experiments and including a major bug fix for storing multi-dimensional numpy data structures as IEEE 754 byte arrays in the XML output of the relax state and results files. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.4. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * Numerous improvements for the relax_fit.select_model user function. * Support for the saturation recovery experiment in the relaxation exponential curve-fitting analysis. * Support for the inversion recovery experiment in the relaxation exponential curve-fitting analysis. * Added a section to the start of the relaxation curve-fitting chapter of the manual to include descriptions of all supported models. * Addition of a button to the R1 and R2 GUI analyses for selecting the desired exponential curve model via the relax_fit.select_model user function. Changes: * Small updates for the wiki section of the release checklist document. * Fixes for the links at the bottom of all HTML manual pages. This is for the automatically generated documentation at http://www.nmr-relax.com/manual/index.html, created using latex2html. The links all require double quotes, and some a trailing '/'. The links fixed are http://www.nmr-relax.com, http://www.nmr-relax.com/manual/ and http://download.gna.org/relax/manual/relax.pdf. * Removed the repository backup file text from the relax manual. This is for http://www.nmr-relax.com/manual/Latest_sources_the_relax_repositories.html. 
The gzipped repository dump file has not been created by Gna! for many, many years. The problem was identified by the W3C link checker (http://validator.w3.org/checklink). * Updated all of the http://www.nmr-relax.com/manual/ links in the lib.dispersion package. This is for all of the individual model pages in the HTML manual. * Improved the description for the relax_fit.select_model user function. * A small code rearrangement to create the new target_functions.relax_fit_wrapper module. This follows from the idea at https://gna.org/task/?7415#comment6. The *func_wrapper() functions of the specific_analyses.relax_fit.optimisation module have been shifted out and converted to class methods to create the target_functions.relax_fit_wrapper module. This will be used to abstract away all of the C code, and will form the infrastructure to allow new exponential curves to be quickly supported. The modules of the specific_analyses.relax_fit and specific_analyses.relax_disp packages now import the target_functions.relax_fit_wrapper.Relax_fit_opt target function class and use that instead. * Shifted the C code Jacobian functions into the new target_functions.relax_fit_wrapper module. This shifts all of the relaxation curve-fitting C code access into the target_functions.relax_fit_wrapper module so that the rest of relax does not need to handle the C code. This will allow for new models to be very easily supported, as they would all be set up in this target function module. * Updated the formula in the description of the relax_fit.select_model user function. * Modified the printouts from the structure.write_pdb user function if models are present. Instead of printing out 'MODEL', 'ATOM, HETATM, TER' and 'ENDMDL' for each model, the header 'MODEL records' is printed followed by a single '.' character for each model. For structures with many models, this results in a huge speed up of the user function which is strongly limited by how fast the terminal can display text. 
* Added the synthetic saturation-recovery data in the form of Sparky peak lists to the repository. These files were created by Andras Boeszoermenyi. They are attached to the task at http://gna.org/task/?7415 as the Relax_sym.tar.gz file at http://gna.org/task/download.php?file_id=22989. They were created for the formula I0*(1 - exp(−R1*t)) where I0 = 1000000000000000.00 and R1 = 0.5. These files and the associated relax_sim.py script (which needs to be updated for the latest relax version) could form the basis of a basic system test. This system test could then be used to implement the saturation-recovery experiment equations in relax. * Updated the target_functions package __all__ list to include the relax_fit* modules. * Modified the package __all__ list checking unit test to accept *.so C modules. * Removal of an unused import in the relax_fit_zooming_grid.py system test script. * Added a system test script for testing the saturation-recovery R1 experiment. This was created by Andras Boeszoermenyi. The file was taken from the saturation_recovery.tar.gz file (https://gna.org/task/download.php?file_id=22997) attached to the task at http://gna.org/task/?7415. The only difference with the original script is that the grace.view user function calls have been removed, as these cannot be used in a system test. * Modified the relax_fit_saturation_recovery.py script to work as a system test. This is the script from Andras Boeszoermenyi. The change follows from the discussion of http://thread.gmane.org/gmane.science.nmr.relax.devel/7308/focus=7369. The status.install_path variable is now used to point to the location of the files. The relax data store ds.tmpdir variable is used for outputting all files. Commented out user functions have also been deleted. * Added a copyright notice for Andras Boeszoermenyi for the newly added saturation-recovery R1 script. 
This change follows the discussion in the message http://thread.gmane.org/gmane.science.nmr.relax.devel/7308/focus=7369. * Created the Relax_fit.test_saturation_recovery system test. This follows from the discussion of http://thread.gmane.org/gmane.science.nmr.relax.devel/7308/focus=7369. * Added the saturation recovery experiment to the relax_fit.select_model user function. This simply adds a new option and sets up a different parameter set [Rx, Iinf]. * Modified the Relax_fit.test_saturation_recovery system test script. The relax_fit.select_model user function call now selects the 'sat' model. * Fix for the relax_fit.select_model user function backend for the 'sat' model. * The exponential model name is now being passed into the target function class. The model as specified by the relax_fit.select_model user function is now finally being sent into the target function, in this case the Relax_fit_opt class in target_functions.relax_fit_wrapper. * Small fix for the relax_fit.select_model user function. * Renamed all of the relaxation curve-fitting target functions. This includes all of the C functions which are model specific, by appending '_exp' to the current names to now be func_exp, dfunc_exp, d2func_exp, jacobian_exp, and jacobian_chi2_exp. And all of the Relax_fit_opt target function class *_wrapper() methods to *_exp(). The target function class is now only aliasing the *_exp() methods when the model is set to 'exp'. * Alphabetical ordering of the C function imports in the target_functions.relax_fit_wrapper module. * Modified the relax_fit.test_saturation_recovery system test to check for Iinf instead of I0. * Added support for the saturation recovery experiment to parameter disassembly function. This is in the disassemble_param_vector() function of the specific_analyses.relax_fit.parameters module. This function requires each experiment to be handled separately. * Implemented the target functions for the saturation recovery exponential curve. 
In the Python target function class Relax_fit_opt, the new func_sat(), dfunc_sat() and d2func_sat() methods have been created as wrappers for the new C functions. These are aliased to func(), dfunc() and d2func() in the __init__() method. In the target_functions/exponential.c C file, the functions exponential_sat(), exponential_sat_dIinf(), exponential_sat_dR(), exponential_sat_dIinf2(), exponential_sat_dR_dIinf() and exponential_sat_dR2() have been created to implement the function, gradient, and Hessian for the equation I = Iinf * (1 - exp(-R.t)). In the target_functions/relax_fit.c file, the functions func_sat(), dfunc_sat(), d2func_sat(), jacobian_sat() and jacobian_chi2_sat() have been created as duplications of the *_exp() functions, but pointing to the exponential_sat*() functions and using Iinf instead of I0. * Split the saturation recovery exponential equations and partial derivatives into their own C file. * Expansion and improvements for the relax_fit.select_model user function documentation and printouts. * The relax_fit.relax_time and relax_fit.select_model user functions now have wizard graphics. The R1 graphic from graphics/analyses/r1_200x200.png is now being used. * Added support for the inversion recovery experiment to parameter disassembly function. This matches the change for the saturation recovery experiment. This is in the disassemble_param_vector() function of the specific_analyses.relax_fit.parameters module. This function requires each experiment to be handled separately. * Expanded the relax_fit_saturation_recovery.py system test script. This now calls the error_analysis.covariance_matrix user function to test that code path. * Updated the relaxation curve-fitting covariance_matrix() API method to handle all models. The check for the 'exp' model type has been eliminated, and the parameter vector is assembled using the flexible assemble_param_vector() function rather than manually constructing the vector. 
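The saturation recovery target functions described above implement the equation I(t) = Iinf * (1 - exp(-R*t)) together with its gradient. A minimal Python sketch of the same mathematics follows; relax itself performs this in the compiled exponential_sat*() C functions, and the function names and chi-squared helper here are illustrative only:

```python
import math

# Saturation recovery model I(t) = Iinf * (1 - exp(-R*t)) and its gradient,
# mirroring the mathematics of the exponential_sat*() C functions.
# Pure-Python sketch for illustration - not the relax C code.

def sat_rec(params, t):
    """Back-calculate the intensity for the saturation recovery experiment."""
    R, Iinf = params
    return Iinf * (1.0 - math.exp(-R * t))

def sat_rec_grad(params, t):
    """Partial derivatives of the model with respect to R and Iinf."""
    R, Iinf = params
    e = math.exp(-R * t)
    return [Iinf * t * e,   # dI/dR
            1.0 - e]        # dI/dIinf

def chi2(params, times, intensities, errors):
    """Chi-squared of the model against measured intensities with errors."""
    return sum(((sat_rec(params, t) - I) / sd) ** 2
               for t, I, sd in zip(times, intensities, errors))
```

Note that the curve starts at zero intensity at t = 0 and asymptotes to Iinf, which is why the parameter set is [Rx, Iinf] rather than the [Rx, I0] of the standard exponential.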
* The errors in the Relax_fit.test_saturation_recovery system test are now reasonable. They have been set to 5% of Iinf so that the chi-squared value during optimisation is more realistic. * Updated the relaxation curve-fitting get_param_names() API method to handle all models. This now simply returns the spin container 'params' list, allowing all models to be properly supported. * Big bug fix for the error_analysis.covariance_matrix user function. The model_info structure is now being passed into the get_param_names() API method, as required by the API. * Another change for the relaxation curve-fitting covariance_matrix() API method to handle all models. The scaling matrix diagonalised list of 1.0 values now has the same number of elements as there are parameters. * Implemented the target functions for the inversion recovery exponential curve. In the Python target function class Relax_fit_opt, the new func_inv(), dfunc_inv() and d2func_inv() methods have been created as wrappers for the new C functions. These are aliased to func(), dfunc() and d2func() in the __init__() method. The target_functions/exponential_inv.c C file has been created with the functions exponential_inv(), exponential_inv_d0(), exponential_inv_dIinf(), exponential_inv_dR(), exponential_inv_dI02(), exponential_inv_dIinf2(), exponential_inv_dI0_dIinf(), exponential_inv_dR_dI0(), exponential_inv_dR_dIinf() and exponential_inv_dR2() to implement the function, gradient, and Hessian for the equation I(t) = Iinf - I0*exp(-R.t). In the target_functions/relax_fit.c file, the functions func_inv(), dfunc_inv(), d2func_inv(), jacobian_inv() and jacobian_chi2_inv() have been created as duplications of the *_exp() functions, but pointing to the exponential_inv*() functions and adding the Iinf dimension. * More editing of the relax_fit.select_model user function. The IR and SR abbreviations have been added, and a lot of text cleaned up. 
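The inversion recovery equation I(t) = Iinf - I0*exp(-R*t) and its gradient, as implemented in the exponential_inv*() C functions described above, can be sketched in Python. This is an illustrative re-expression of the mathematics only, not the relax C code:

```python
import math

# Inversion recovery model I(t) = Iinf - I0*exp(-R*t), three parameters
# [R, I0, Iinf].  Pure-Python sketch mirroring the exponential_inv*()
# mathematics - illustrative only.

def inv_rec(params, t):
    """Back-calculate the intensity for the inversion recovery experiment."""
    R, I0, Iinf = params
    return Iinf - I0 * math.exp(-R * t)

def inv_rec_grad(params, t):
    """Partial derivatives with respect to R, I0 and Iinf."""
    R, I0, Iinf = params
    e = math.exp(-R * t)
    return [I0 * t * e,   # dI/dR
            -e,           # dI/dI0
            1.0]          # dI/dIinf
```

At t = 0 the intensity is Iinf - I0, so for a fully inverted magnetisation I0 is roughly twice the equilibrium intensity, and the curve relaxes up to Iinf.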
* Improvement for the relax_fit.select_model user function in the GUI. Unicode text is now being used to display the parameters as R_x and I_0 and to show an infinity symbol in the Iinf parameter. The Rx and Iinf parameters have been added to lib.text.gui to allow this. * Expanded the relaxation curve-fitting chapter of the manual to include descriptions of the models. A new section at the start of this chapter has been added to explain the different models and their equations. This was taken from the script mode section and expanded to include the new saturation recovery experiment. * Removed the relax_fit.select_model user function call from the relax_fit auto-analysis. This is to allow the user in a script, or in the GUI, to choose the model themselves. * Added a button to the R1 and R2 GUI analyses for executing the relax_fit.select_model user function. This is just after the peak list GUI element and before the optimisation settings. It allows different curve types to be selected for the analysis. * Created the new specific_analyses.relax_fit.checks module. This creates the check_model_setup Check object, following the check_*() function design at http://wiki.nmr-relax.com/Relax_source_design#The_check_.2A.28.29_functions. This will be used to make sure that the exponential curve model is set prior to executing certain user functions. * Improved the checking in the relaxation curve-fitting analysis. The new specific_analyses.relax_fit.checks.check_model_setup() function is now called prior to minimisation and in the get_param_names() API method to prevent Python errors from occurring due to missing data structures. In addition, the pipe_control.mol_res_spin module function exists_mol_res_spin_data() has been replaced with check_mol_res_spin_data(). * Fix for the recently broken Relax_fit.test_curve_fitting_height_estimate_error system test. The relax_fit.select_model user function is now called as this is no longer performed in the auto-analysis. 
* Removed the text that the inversion recovery experiment is not implemented yet. This is in the documentation for the relax_fit.select_model user function and is in preparation for completing this. * Added the checks module to the specific_analyses.relax_fit package __all__ list. * Fixes for the relaxation dispersion analysis for the recent relaxation curve-fitting analysis changes. The Relax_fit_opt target function class requires the model argument to be supplied to be correctly set up. * Fixes for the unit tests of the target_functions.relax_fit C module. This is for the recent renaming of all the C functions based on the model type. * Fix for the Rx.test_r1_analysis GUI test. A click on the relax_fit.select_model user function button is now being simulated. * Created a directory for holding synthetic inversion recovery R1 data. * Copied synthetic inversion recovery Sparky peak lists from Sébastien Morin's inversion-recovery branch. * Created a system test script for the inversion-recovery function. This is based on a copy of the script 'relax_fit_exp_2param_neg.py'. * The 3-parameter curve fitting test script now uses the corresponding peak lists. * Prepared the "exp_3param" test for inclusion of artificial data. * Added missing delays in the list. The duplicates had been omitted... * Manually fix the script based on changes made during branch updating. This is as discussed by Edward d'Auvergne in a post at https://mail.gna.org/public/relax-devel/2012-01/msg00001.html. * Updated Séb's relax_fit_exp_3param_inv_neg.py system test script to work with the current relax design. * Added a script for calculating the expected peak intensities for an inversion recovery curve. This is based on the values used by Sébastien Morin in his inversion-recovery branch, as the check_curve_fitting_exp_3param_inv_neg() function of the test_suite/system_tests/relax_fit.py file. * Increased the precision of the printout from the calc.py script of the last commit. 
* Changed the peak intensities for Gly 4 in the synthetic inversion recovery Sparky lists. The values have been changed to match those determined from the calc.py script. The replicate spectra intensities are simply the calculated intensity +/-1, to preserve the average. * Created the Relax_fit.test_inversion_recovery system test. This simply calls Sébastien Morin's relax_fit_exp_3param_inv_neg.py system test script, ported from the inversion-recovery branch, and then checks the parameter values for the single optimised spin. * Updated the manual_c_module.py C module compilation development script for the recent changes. The exponential_inv.c and exponential_sat.c files need to be compiled as well. * Python 3 fix for the relax_fit_exp_3param_inv_neg.py system test script. The xrange() function does not exist in Python 3, so it was replaced by range(). * Updated the memory_leak_test_relax_fit.py development script for the C module changes. This is only the docstring description which changed. * Epydoc docstring fixes for the lib.io module - keyword arguments were not correctly identified. These were identified by Troels in the post at http://thread.gmane.org/gmane.science.nmr.relax.scm/24565/focus=7384. * Created the State.test_bug_23017_ieee_754_multidim_numpy_arrays system test. This is to catch bug #23017 (https://gna.org/bugs/?23017), whereby multidimensional numpy arrays were not being stored as IEEE 754 arrays in the XML state and results files. This test checks a rank-2 float64 numpy array stored in the current data pipe against what the IEEE 754 int list should be for it. * Grammar fix for a warning from the pymol.display user function. Bugfixes: * Bug fix for the pymol.view user function for when no PDB file exists. The pymol.view user function would fail with an AttributeError when the currently loaded data does not exist as a PDB file. This is now caught and the non-existent PDB is no longer displayed. 
A better solution might be to dump all the current structural data into a temporary file and load that, all within a try-finally statement to be sure to delete the temporary file. This solution may not be what the user is interested in anyway. * Simple fix for bug #23017 (https://gna.org/bugs/?23017). This is the bug whereby multidimensional numpy arrays were not being stored as IEEE 754 arrays in the XML state and results files. The problem was a relatively recent regression caused by a change to the is_float_matrix() function of the lib.arg_check module. It was simply that the default dims keyword argument value was changed from None to (3, 3). Therefore any call to the function without supplying the dims argument would fail if the matrix was not of the (3, 3) shape. |
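The idea behind the IEEE 754 storage fixed above is to serialise each float64 value of a multidimensional numpy array as its exact 8-byte IEEE 754 representation (a list of integers 0-255), so the XML round trip is lossless. The sketch below illustrates the principle only; the exact byte ordering and XML layout relax uses may differ:

```python
import struct
import numpy as np

# Store a rank-2 float64 numpy array as nested lists of IEEE 754 bytes
# and restore it exactly.  Illustrative sketch of the lossless round
# trip - not the actual relax XML encoding (big-endian order assumed).

def to_ieee_754(matrix):
    """Convert a rank-2 float64 array to nested lists of 8 byte values."""
    return [[list(struct.pack('>d', float(x))) for x in row] for row in matrix]

def from_ieee_754(byte_lists):
    """Rebuild the float64 array from the nested byte lists."""
    return np.array([[struct.unpack('>d', bytes(cell))[0] for cell in row]
                     for row in byte_lists])
```

Because the bytes are exact, values such as 1/3 or pi survive the round trip bit-for-bit, which a decimal text representation cannot guarantee.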
From: Edward d'A. <ed...@do...> - 2014-11-24 15:50:10
|
This is a major feature and bugfix release. It fixes a failure when loading relaxation data and adds Python 3 support for using the NMRPipe showApod software. Features include a large expansion for the align_tensor.matrix_angles and align_tensor.svd user functions to support the standard inter-matrix angles, the unitary 9D vector notation {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz}, and the irreducible spherical tensor 5D basis set of {A-2, A-1, A0, A1, A2} for correctly calculating the inter-tensor angles, singular values and condition numbers. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.3. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * Implemented the lib.geometry.vectors.vector_angle_atan2() relax library function. This is for calculating the inter-vector angle using the more numerically stable atan2() formula. * Implemented the lib.geometry.vectors.vector_angle_acos() relax library function. This is used to calculate the inter-vector angle using the arccos of the dot product formula. The function has been introduced into the relax library as the calculation is repeated throughout relax. * Expanded the basis sets for the align_tensor.matrix_angles user function to allow the correct inter-tensor angles to be calculated. 
This includes the standard inter-matrix angles via the arccos of the Euclidean inner product of the alignment matrices in rank-2, 3D form divided by the Frobenius norm of the matrices, irreducible spherical tensor 5D basis set {A-2, A-1, A0, A1, A2}, and the unitary 9D basis set {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} (all of which produce the same result). * Expanded the basis sets for the align_tensor.svd user function to allow the correct singular values and condition number to be calculated. This includes the irreducible spherical tensor 5D basis set {A-2, A-1, A0, A1, A2} and the unitary 9D basis set {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} (both of which produce the same result). * Added the angle_units and precision arguments to the align_tensor.matrix_angles user function to allow either degrees or radians to be output and the number of decimal points to be specified. * Added the precision argument to the align_tensor.svd user function to allow the number of decimal points for the singular values and condition number to be specified. * Updated the align_tensor.display user function to output the irreducible spherical harmonic weights. This is the alignment tensor in the {A-2, A-1, A0, A1, A2} notation. Changes: * Basic Epydoc fix for the data_store.exp_info module. * Epydoc fix for the name_pipe() method of the relaxation dispersion auto-analysis for repeated data. * Fixes for the HTML user manual compilation. The index.html file was not being created as the main page has changed from 'relax_user_manual.html' to 'The_relax_user_manual.html'. * Added a line to the release checklist document about updating the wiki release links. 
These are for the combined release notes pages at http://wiki.nmr-relax.com/Relax_releases, http://wiki.nmr-relax.com/Relax_release_descriptions, http://wiki.nmr-relax.com/Relax_release_metadata, http://wiki.nmr-relax.com/Relax_release_features, http://wiki.nmr-relax.com/Relax_release_changes, http://wiki.nmr-relax.com/Relax_release_bugfixes, http://wiki.nmr-relax.com/Relax_release_links. * Updates for the release announcement section of the release checklist document. * Created a system test to catch a rare relaxation data loading problem. * Created the Mf.test_dauvergne_protocol_sphere system test. This catches bug #22963 (https://gna.org/bugs/?22963): Using '@N*' to define the interatomic interactions for a model-free analysis fails when using non-backbone 15N spins. * Set more reasonable default values for the lib.structure.pdb_write functions atom() and hetatm(). The occupancy now defaults to 1.0 instead of '', and the temperature factor to 0.0 instead of ''. This avoids painful errors when using these functions, as these arguments must be floating point numbers at all times, hence the default value of '' causes a TypeError. * Updated the PDB file in the test_suite/shared_data/model_free/sphere/ directory. The relax library is now being used to create the PDB file. Additional TER and CONECT records are now being created so the result is a more correct PDB file. * Converted all ATOM records to HETATM in the sphere.pdb file. * Renamed vector_angle() to vector_angle_normal() in the lib.geometry.vectors module. This is to standardise the naming, as the standard vector angle formulae are now implemented as the vector_angle_acos() and vector_angle_atan2() functions. * Added 6 unit tests for the lib.geometry.vectors.vector_angle_acos() function. These are similar to those of the vector_angle_normal() function but unsigned angles are checked for. * Created 6 unit tests for the lib.geometry.vectors.vector_angle_atan2() function. 
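The two inter-vector angle formulae mentioned above can be sketched with numpy. The arccos form loses precision for nearly parallel or anti-parallel vectors, while the atan2 form is numerically stable across the whole range; the signatures below are illustrative, not the exact lib.geometry.vectors API:

```python
import numpy as np

# Two ways of computing the unsigned angle between 3D vectors.
# vector_angle_acos(): arccos of the normalised dot product (simple, but
# ill-conditioned near 0 and pi).  vector_angle_atan2(): the numerically
# stable atan2 form.  Illustrative sketch of the relax library functions.

def vector_angle_acos(v1, v2):
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating point values just outside [-1, 1].
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def vector_angle_atan2(v1, v2):
    # atan2(|v1 x v2|, v1 . v2) is stable for all angles.
    return np.arctan2(np.linalg.norm(np.cross(v1, v2)), np.dot(v1, v2))
```

Both return unsigned angles in [0, pi], matching the behaviour checked by the unit tests described above.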
* Created a script and log file to demonstrate differences between alignment tensor basis sets. This shows that the inter-tensor angles and condition numbers are dependent on the basis set used. * Improved the printouts from the align_tensor.svd user function by including the basis set text. * Updated the log file for comparing different alignment tensor basis sets for align_tensor.svd changes. * Implemented a new default basis set for the align_tensor.matrix_angles user function. This uses the standard definition of the inter-matrix angle, based on the Euclidean inner product of the two matrices divided by the product of the Frobenius norm of each matrix. As this is a linear map, it should produce the correct definition of inter-tensor angles. * Improvements to the description of the align_tensor.matrix_angles user function. * Updated the test_matrix_angles_identity() unit test for pipe_control.align_tensor.matrix_angles(). This is the test in the _prompt.test_align_tensor.Test_align_tensor module. The basis set has been set back to the now non-default value of 0, and the value checks have been converted from assertEqual() to assertAlmostEqual() to allow for small truncation errors. * Conversion of the basis_set argument for the align_tensor.matrix_angles user function. The argument is now a string that accepts the values of 'matrix', 'unitary 5D', and 'geometric 5D' to select between the different matrix angles techniques. This has been updated in the test suite as well. * Added a check for the values of the basis_set argument. This is to the align_tensor.matrix_angles user function backend. * Printout improvements clarifying the align_tensor.matrix_angles user function. * Conversion of the basis_set argument for the align_tensor.svd user function. The argument is now a string that accepts the values of 'unitary 9D', 'unitary 5D', and 'geometric 5D' to select between the different SVD matrices. This has been updated in the test suite as well. 
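The 'matrix' basis set angle described above (arccos of the Euclidean inner product of the two tensors divided by the product of their Frobenius norms) and a singular-value condition number of the kind reported by align_tensor.svd can be sketched as follows. This is an illustration of the definitions, not the relax backend code:

```python
import numpy as np

# Inter-matrix angle via the Euclidean (Frobenius) inner product, and a
# condition number from singular values.  Illustrative sketch only.

def matrix_angle(A, B):
    """Angle between two matrices: arccos(<A,B> / (||A||_F * ||B||_F))."""
    inner = np.sum(A * B)                         # Euclidean inner product
    norms = np.linalg.norm(A) * np.linalg.norm(B)  # product of Frobenius norms
    return np.arccos(np.clip(inner / norms, -1.0, 1.0))

def condition_number(matrix):
    """Ratio of the largest to smallest singular value."""
    s = np.linalg.svd(matrix, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]
```

Because the inner product is a linear map on the tensor elements, the angle is independent of which equivalent vectorisation ('matrix' or 'unitary 9D') is used, which is the consistency noted in these release notes.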
* Expanded the N_state_model.test_5_state_xz system test. This now covers the new 'unitary 9D' basis set for the align_tensor.svd user function and the new 'matrix' basis set for the align_tensor.matrix_angles user function. * Expansion of the align_tensor.matrix_angles user function. The new basis set 'unitary 9D' has been introduced. This creates vectors as {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} and computes the inter-vector angles. These match the 'matrix' basis set whereby the Euclidean inner product divided by the Frobenius norms is used to calculate the inter-tensor angles. In addition, the user function documentation and printouts have been improved. And the backend code has been simplified. * Updated the script and log file for demonstrating differences between alignment tensor basis sets. This now handles the changes to the basis_set arguments used in the align_tensor.matrix_angles and align_tensor.svd user functions, and includes the new basis sets. * Added the irreducible tensor notation of {A-2, A-1, A0, A1, A2} to the alignment tensor object. This follows from the definition of Sass et al, J. Am. Chem. Soc. 1999, 121, 2047-2055, http://dx.doi.org/10.1021/ja983887w. The equations of (2) were converted using Gaussian elimination to obtain a reduced row echelon form, so that the equations in terms of {A-2, A-1, A0, A1, A2} were derived. These have been coded into the alignment tensor object calc_Am2, calc_Am1, calc_A0, calc_A1 and calc_A2 methods respectively, and the values can be obtained by accessing the Am2, Am1, A0, A1, and A2 objects. To check that the implementation is correct, a unit test has been created to compare the calculated values with those determined using Pales. * Expanded the unit test of the alignment tensor {A-2, A-1, A0, A1, A2} parameters to cover all values. * Created functions in the relax library for calculating the inter-vector angle for complex vectors. This is in the lib.geometry.vectors module. 
The function vector_angle_complex_conjugate() has been created to calculate the angle between two complex vectors. This uses the new auxiliary function complex_inner_product() to calculate <v1|v2>.
* Added the 'irreducible 5D' basis set option to the align_tensor.matrix_angles user function. This is for the inter-tensor vector angle for the irreducible 5D basis set {S-2, S-1, S0, S1, S2}. Its results match those of the standard tensor angle as well as the 'unitary 9D' basis set.
* Added the 'irreducible 5D' basis set option to the align_tensor.svd user function. This is for the inter-tensor vector angle for the irreducible 5D basis set {A-2, A-1, A0, A1, A2}. Its results match those of the 'unitary 9D' basis set.
* Editing of the description for the 'irreducible 5D' alignment tensor basis set. This is for the align_tensor.matrix_angles and align_tensor.svd user functions. All Sm elements have been converted to Am.
* Editing of the description for the align_tensor.matrix_angles user function.
* Editing of the align_tensor.svd user function description.
* Updated the script and log file for demonstrating differences between alignment tensor basis sets. The 'irreducible 5D' basis set is now used for both the align_tensor.matrix_angles and align_tensor.svd user functions.
* Fix for a spelling mistake in the align_tensor.matrix_angles user function printouts.
* Small fix for the align_tensor.matrix_angles user function documentation.
* Expanded the N_state_model.test_5_state_xz system test for more alignment tensor basis sets. The align_tensor.matrix_angles and align_tensor.svd user functions are now being called with the additional 'irreducible 5D' and 'unitary 9D' basis sets, to make sure these work correctly.
* Created the Align_tensor.test_align_tensor_matrix_angles system test. This is to check the angles calculated by the align_tensor.matrix_angles user function.
As there are no external references, this essentially fixes the angles to the currently calculated values to catch any accidental changes in the future.
* Created the Align_tensor.test_align_tensor_svd system test. This is to check the angles calculated by the align_tensor.svd user function. As there are no external references, this essentially fixes the singular values and condition numbers to the currently calculated values to catch any accidental changes in the future.
* Fixes for the proportions of the align_tensor.matrix_angles user function GUI wizard.
* Expanded the 'irreducible 5D' text in the align_tensor.matrix_angles and align_tensor.svd user functions. This now explains that these are the coefficients for the spherical harmonic decomposition.
* Improved the text for the irreducible tensor notation in the align_tensor.display user function.
* Formatting fix for the magnetic susceptibility tensor part of the align_tensor.display user function.
* More improvements for the align_tensor.matrix_angles user function description.
* Epydoc docstring fixes and expansion for the lib.io.sort_filenames() function.
* Epydoc docstring fixes for the lib.spectrum.nmrpipe module. This is for the API documentation at http://www.nmr-relax.com/api/index.html. The show_apod_rmsd_to_file() and show_apod_rmsd_dir_to_files() function docstrings have both been modified.
* Epydoc docstring fixes for the pipe_control.opendx.map() function. This is for http://www.nmr-relax.com/api/3.3/pipe_control.opendx-module.html#map. The fixes include whitespace and text wrapping changes.
* Python 2.5 fix for the align_tensor.display user function. The new irreducible spherical tensor coefficient printout was failing as the float.real attribute was only introduced in Python 2.6.
* Shifted the structure checks into their own module. This shifts the special check_structure Check object from pipe_control.structure.main into the new checks module.
It allows the check to be performed by other modules in the pipe_control.structure package.
* Added the missing_error keyword argument to the pipe_centre_of_mass() function. This is from the pipe_control.structure.mass module. The new keyword controls what happens in the absence of structural data. The pipe_control.structure.checks.check_structure() function is now being used to either throw a warning and return [0, 0, 0] or to raise a RelaxError.
* Fix for the new unit tests - Python 2.5 floats do not have a 'real' property.

Bugfixes:

* Fix for bug #22961 (https://gna.org/bugs/?22961), the failure of relaxation data loading with the message "IndexError: list index out of range". The bug was found by Julien Orts. It is triggered by loading relaxation data from a file containing spin name information and supplying the spin ID using the spin name to restrict data loading to a spin subset. To solve the problem, the pipe_control.relax_data.pack_data() function has been redesigned. Now the selection union concept of Chris MacRaild's selection object is being used by joining the spin ID constructed from the data file and the user supplied spin ID with '&', and using this to isolate the correct spin system.
* Big Python 3 bug fix for the dep_check module for the detection of the NMRPipe showApod software. The showApod program was falsely detected as always being absent when using Python 3. This is because the output of the program was being tested using string comparisons. However the output from programs obtained via the subprocess module is no longer a string but rather a byte array in Python 3. Therefore the byte array is now converted to text when Python 3 is being used, allowing the showApod software to be detected.
* Python 3 bug fix for the lib.spectrum.nmrpipe.show_apod_extract() function. The subprocess module output from the showApod program, or any software, is a byte array in Python 3 rather than text.
This is now detected and the byte array converted to text before any processing.
* Bug fix for the lib.structure.angles.angles_*() functions for odd increments. This affects the PDB representations of the diffusion tensor and frame order when the number of increments in the respective user functions is set to an odd number. It really only affects the frame_order.pdb_model user function, as the number of increments cannot be set in any of the other user functions (structure.create_diff_tensor_pdb, structure.create_rotor_pdb, structure.create_vector_dist, n_state_model.cone_pdb).
|
From: Edward d'A. <ed...@do...> - 2014-11-14 16:34:51
|
This is a minor feature and bugfix release. It includes improvements to the readability of the HTML version of the manual (http://www.nmr-relax.com/manual/index.html), improved printouts throughout the program, numerous GUI enhancements, and far greater Python 3 support. Please see below for a full listing of all the new features and bugfixes. For the official, easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.2. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).

The full list of changes is:

Features:

* Many improvements for the HTML version of the manual at http://www.nmr-relax.com/manual/index.html.
* Improved sectioning printouts in the model-free dauvergne_protocol auto-analysis.
* Significant improvements for the relax controller window.
* All wizards and user functions in the relax GUI now have focus so that the keyboard is active without requiring a mouse click.
* The ESC key will now close the relax controller window and all user function windows.
* The structure.load_spins user function can now load spins from multiple non-identical molecules and merge them into one molecule, allowing missing atoms and differential atom numbering to be handled.
* Improvements to the printouts for many user functions.

Changes:

* Updated the minfx version in the release checklist document to version 1.0.11.
* Updated the relax version in the release checklist document to be more modern.
* Spelling fixes for the CHANGES file.
* Updates for the release checklist document. This is mainly because the main release notes are now on the relax wiki, for example for the current version at http://wiki.nmr-relax.com/Relax_3.3.1.
* Spelling fixes throughout the CHANGES document.
* Removed a few triple spaces in the CHANGES document.
* Added periods to the end of all items in the CHANGES document.
* Fix for an 'N/A' in the CHANGES document.
* Converted a number of single spaces between sentences to double spaces in the CHANGES document.
* More updates for the announcement section of the release checklist document.
* The HTML version of the manual is now compiled with Unicode character support. This is for the manual at http://www.nmr-relax.com/manual/index.html. It allows Greek symbols, for example, to be represented as text rather than LaTeX generated PNG images. This fixes titles and massively decreases the number of images required by the HTML pages.
* Removal of many dual LaTeX and latex2html section titles in the manual. As the HTML manual (http://www.nmr-relax.com/manual/index.html) is now compiled with Unicode support, the Greek characters in the titles are now supported. Therefore in the model-free and the values, gradients, and Hessians chapters, the dual LaTeX and latex2html section titles could be collapsed to the standard LaTeX section title. This will result in better formatting of the manual and its links.
* Added instructions and a build script for creating a useful version of latex2html. This version is essential for building the HTML version of the manual at http://www.nmr-relax.com/manual/. The build script downloads the Debian latex2html-2008 sources as well as all Debian patches for latex2html. It then applies a number of patches for fixing and improving the relax documentation. The program is then compiled and can be installed as the root user into /usr/local/.
* Extended the number of words used in the HTML webpage file names. This is to hopefully prevent files from being overwritten by multiple files having the same name.
* Added the write out of parameters and chi2 values when creating a dx_map.
Task #7860 (https://gna.org/task/index.php?7860): When dx_map is issued, create a parameter file which maps parameters to chi2 values.
* Created the system test Relax_disp.test_dx_map_clustered_create_par_file, which must show that relax is not able to find the global minimum under clustered conditions. When creating the map, the map contains chi2 values which are lower than the clustered fitted values. This should not be the case. Running a larger map with larger bounds and more increments should show that there exists a minimum in the minimisation space with a lower chi2 value. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate() user function does not calculate the chi2 value for clustered residues. Task #7860 (https://gna.org/task/index.php?7860): When dx_map is issued, create a parameter file which maps parameters to chi2 values.
* Renamed the test scripts and files for producing surface chi2 plots.
* Renamed the sample scripts for making surface maps.
* Added scripts to make surface plots of the spin-independent parameters dw and R2a.
* Added example surface chi2 values for the plots. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Added an example save state for more surface plotting.
* Added a boolean argument to the dx.map() function to specify the creation of a parameter and associated chi2 value file. For very special situations, the creation of this file is not desired.
* Modified the structure of points in dx.map() to always be a list of numpy arrays with 3 values.
* When issuing the dx.map() function with points, implemented the writing out of a parameter file with the associated calculated chi2 values.
* Improved the feedback in the User_functions.test_structure_add_atom system test. It is now clearer what the input and output data is.
* The devel_scripts/python_multiversion_test_suite.py script now runs relax with the --time flag. This is for quicker identification of failure points.
It will also force the sys.stdout buffer to be flushed more often on Python 2.5 so that it does not appear as if the tests have frozen.
* Added a check to the Relax_disp.test_cpmg_synthetic_dx_map_points() system test for the creation of a matplotlib surface command plot file.
* Added the write out of a matplotlib command file to plot surfaces of a dx map. It uses the minimum chi2 value in the map space to define the surface definitions. It creates X,Y; X,Z; and Y,Z maps, where the values in the missing dimension have been cut at the minimum chi2 value. For each map, it creates a projected 3D map of the parameters and the chi2 value, and a heat map for the contours. It also scatters the minimum chi2 value, the 4 smallest chi2 values, and maps any points in the point file to scatter points. Mapping the points from the file to map points is done by finding the shortest Euclidean distance in the space from the points to any map points.
* Fix for testing the raising of expected errors in system tests. These checks are skipped if the Python version is below 2.7. Bug #22801 (https://gna.org/bugs/?22801): Failure of the relax test suite on Python 2.5.
* Inserted a z-axis limit for the plotting of 2D surfaces in matplotlib.
* Added better figure control of the chi2 values on the z-axis for surface plots.
* Narrowed the dx_map in the Relax_disp.test_dx_map_clustered_create_par_file() system test. This is to illustrate the failure of relax to find the global minimum. It seems there is a shallow barrier which relax fails to climb over in order to find the minimum value.
* Added the verbosity argument to the pipe_control.minimise.reset_min_stats() function. All of the minimisation code which calls this now sends in its verbosity argument. This allows the text "Resetting the minimisation statistics." to be suppressed.
* Added the verbosity argument to the pipe_control.value.set() function.
This is passed into the pipe_control.minimise.reset_min_stats() function so its printouts can be silenced.
* The pipe_control.opendx space mapping code now calls the value.set() function with verbosity=0. This is to silence the very repetitive statistics resetting messages when executing the dx.map user function.
* Added more checks to the determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730 (https://gna.org/bugs/?22730), model-free auto-analysis - relax stops and quits at the polate step. The following additional fatal conditions are now checked for: A file with the same name as the base model directory already exists; The base model directory is not readable; The base model directory is not writable. The last two could be caused by file system corruptions. In addition, the presence of the base model directory is checked for using os.path.isdir() rather than catching errors coming out of the os.listdir() function. These changes should make the analysis more robust in the presence of 'strangeness'.
* Added an additional check to determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730 (https://gna.org/bugs/?22730), model-free auto-analysis - relax stops and quits at the polate step. The additional check is that if the base model directory is not executable, a RelaxError is raised.
* Added printouts to the determine_rnd() function of the dauvergne_protocol model-free auto-analysis. This is for better user feedback in the log files as to what is happening. It may help in debugging bug #22730 (https://gna.org/bugs/?22730): Model-free auto-analysis - relax stops and quits at the polate step.
* Alphabetical ordering of imports in the dauvergne_protocol model-free auto-analysis.
* Changed the model-free single spin optimisation title printouts.
The specific_analyses.model_free.optimisation.spin_print() function has been deleted. It has instead been replaced by a call to lib.text.sectioning.subtitle(). This is to match the grid search setup title printouts and to differentiate these titles from those printed out by minfx, which are underlined by '~' characters.
* Added extensive sectioning printouts to the dauvergne_protocol model-free auto-analysis. The lib.text.sectioning functions title() and subtitle() are now used to mark out all parts of the auto-analysis. This will allow for a much better understanding of the log files produced by this auto-analysis.
* Complete redesign of the following of text in the relax controller window in the GUI. The previous design for some reason no longer worked very often, and there were many situations where the scrolling to follow the text output would stop and could never be recovered. Therefore this feature has been redesigned. In the LogCtrl element of the relax controller, which displays the relax output messages, the at_end class boolean variable has been introduced. It defaults to True. The following events will turn it off: arrow keys, the Home key, the End key, the Ctrl-Home key, mouse button clicks, mouse wheel scrolling, window thumbtrack scrolling (the side scrollbar), finding text, the pop up menu 'Go to start', and Select all (menu or Ctrl-A). It will only be turned on in two cases: the pop up menu 'Go to end', and if the caret is on the final line (caused by Ctrl-End, mouse wheel scrolling, Page Down, Down arrow, window thumbtrack scrolling, etc.). Three new methods have been introduced to handle certain events: capture_mouse() for mouse button clicks, capture_mouse_wheel() for mouse wheel scrolling, and capture_scroll() for window thumbtrack scrolling.
* Improvements for selecting all text in the relax controller window. Selecting text using the pop up menu or Ctrl-A now shifts the caret to line 1 before selecting all text.
This deactivates the following of the end of text, if active, as the text following feature causes the text selection to be lost.
* Modified the behaviour of the relax controller window so that pressing escape closes the window. This involves setting the initial focus on the LogCtrl, catching the ESC key press in the LogCtrl as well as in all relax controller read only wx.Field elements, and calling the parent controller handle_close() method.
* Replaced the hardcoded integer keycodes in the relax controller with the wx variables. This is for the LogCtrl.capture_keys() handler method for dealing with key presses.
* Improvement for all wizards and user functions in the relax GUI. The focus is now set on the currently displayed page of the wizard. This allows the keyboard to be active without requiring a mouse click. Now text can be instantly input into the first text control and the tab key can jump between elements. As the GUI user functions are wizards with a single page, this is a significant usability improvement for the GUI.
* The ESC character now closes all wizards and user functions in the relax GUI. By using an accelerator table set on the entire wizard window to catch the ESC keyboard event, the ESC key will cause the _handler_escape() method to be called, which then calls the window's Close() method to close the window.
* Changed the logic for how the new analysis wizard in the GUI is destroyed. This relates to bug #22818 (https://gna.org/bugs/?22818), the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The Destroy() method has been added to the Analysis_wizard class to properly close all elements of the wizard. This is now called from the menu_new() method of the Analysis_controller class, which is the target of the menu item and toolbar button. To allow the test suite to use this, the menu_new() method now accepts the destroy boolean argument.
The test suite can set this to False and then access the GUI elements after calling the method (however the Destroy() method must then be called by the test suite).
* Redesign of how the new analysis wizard is handled in the GUI tests. This relates to bug #22818 (https://gna.org/bugs/?22818), the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The GUI test base class method new_analysis_wizard() has been created to simplify the process. When a new analysis is desired, this method should be called. It will return the analysis page GUI element for use in the test. The method standardises the execution of the new analysis wizard and sets up the analysis in the GUI. It also properly destroys the wizard to avoid memory leaking issues such as bug #22818. All GUI tests have been converted to use new_analysis_wizard(). This allows the GUI tests to pass on MS Windows. However there are still significant sources of memory leaks (the USER Objects count) visible in the Windows Task Manager.
* Fix for the gui.fonts module to allow it to be used outside of the GUI.
* Updated all of the scripts in devel_scripts/gui/. These have been non-functional since the merger of the relax bieri_gui branch back in January 2011.
* The gui.misc.bitmap_setup() function can now be used outside of the GUI.
* Fix for the GUI test base class new_analysis_wizard() method for relaxation dispersion analyses.
* Modified the pipe_control.pipes.get_bundle() function to operate when no pipe is supplied. In this case, the pipe bundle that the current data pipe belongs to will be returned.
* Created the Periodic_table.has_element() method for the lib.periodic_table module. This is used to simply check if a given symbol exists as an atom in the periodic table.
* Added 4 unit tests to the _lib.test_periodic_table module for the Periodic_table.has_element() method.
* Modified the internal structural object backend for the structure.read_pdb user function.
The MolContainer._det_pdb_element() method for handling PDB files with missing element information has been updated to use the Periodic_table.has_element() method to check if the PDB atom name corresponds to any atom in the periodic table. This allows for far greater support for HETATM records and all of the metals.
* Created the Structure.test_load_spins_multi_mol system test. This is to test yet to be implemented functionality of the structure.load_spins user function - the loading of spin information from similar, but not necessarily identical, molecules all loaded into the same structural model. For this, the from_mols argument will be added.
* Fixes for the Structure.test_load_spins_multi_mol system test. The call to the structure.load_spins user function has also been modified so that all 3 spins are loaded at the same time.
* Implemented the multiple molecule merging functionality of the structure.load_spins user function. The from_mols argument has been added to the user function frontend and a description added for this new functionality. In the backend, the pipe_control.structure.main.load_spins() function will now call the load_spins_multi_mol() function if from_mols is supplied. This alternative function is required to handle missing atoms and differential atom numbering.
* Modified the N_state_model.test_populations system test to test the grid search code paths. This performs a grid search of one increment after minimisation, then switches to the 'fixed' N-state model and performs a second grid search of one increment. This now tests currently untested code paths in the grid_search() API method behind the minimise.grid_search user function. The test demonstrates a bug in the N-state model which was not uncovered by the test suite.
* Created the N_state_model.test_CaM_IQ_tensor_fit system test. This is for catching bug #22849 (https://gna.org/bugs/?22849), the failure of the N-state model analysis when optimising only alignment tensors using RDCs and/or PCSs.
This new test checks code paths unchecked in the rest of the test suite, and is therefore of high value.
* Modified the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The multiple molecule merging functionality of the structure.load_spins user function now handles missing atomic positions differently. The aim is that the length of the spin container position variable is fixed for all spins to the number of structures, as the N-state model analysis assumes this equal length for all spins. When data is missing, the atomic position for that structure is now set to None. This will require other modifications in relax to support this new design.
* Modified the interatom.unit_vectors user function backend to handle missing atomic positions. This is to match the structure.load_spins user function change whereby missing atomic positions are now set to the value of None.
* Fix for the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The dimensionality of the position structure returned by the structural object atom_loop() method needed to be reduced.
* The structure.load_spins user function now stores the number of states in cdp.N. This is to help the specific analyses which handle ensembles of structures. With the introduction of the from_mols argument to the structure.load_spins user function, the number of states is now not equal to the number of structural models, as the states can now come from different structures of the same model. Therefore the user function will now explicitly set cdp.N to the number of states depending on how the spins were loaded.
* Clean up and speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. All output files are now set to 'devnull' so that the system test no longer creates any files within the relax source directories. And the optimisation settings have been decreased to hugely speed up the system test.
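The missing-position convention described above, with one entry per structure and None where a structure lacks the atom, can be sketched as follows. The data layout and helper name are illustrative assumptions, not the relax internals:

```python
def assemble_positions(structures, atom_id):
    """Build a spin's position list with exactly one entry per structure.
    Structures missing the atom contribute None, so every spin carries a
    position list of equal length, as the N-state model analysis assumes."""
    return [s.get(atom_id) for s in structures]

# Two structures of similar molecules: the second is missing atom 'N'.
structures = [
    {'N': [1.0, 2.0, 3.0], 'CA': [1.5, 2.5, 3.5]},
    {'CA': [1.4, 2.4, 3.4]},
]
print(assemble_positions(structures, 'N'))   # [[1.0, 2.0, 3.0], None]
```

Keeping the list length fixed (rather than dropping missing entries) is what lets downstream code index positions by structure number.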
* Expanded the lib.arg_check.is_float_matrix() function by adding the none_elements argument. This matches a number of the other module functions, and allows for entire rows of the matrix to be None.
* Lists of lists containing rows of None are now better supported by the lib.xml functions. The object_to_xml() function will now convert the float parts to IEEE-754 byte arrays, and the None parts will be stored as None in the <ieee_754_byte_array> list node. The matching xml_to_object() method has also been modified to read in this new node format. This affects the results.write and state.save user functions (as well as the results.read and state.load user functions).
* Added spacing after the minimise.grid_search user function setup printouts. This is for better spacing before the next messages from the specific analysis.
* Speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. This test is however still far too slow.
* Added printouts to pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data(). These functions now accept the verbosity argument which, if greater than 0, will activate printouts of how many RDCs or PCSs have been assembled for each alignment. This will be useful for user feedback as the spin versus interatomic data container selections can be difficult to understand.
* The verbosity argument for the N-state model optimisation is now propagated for more printouts. The argument for the calculate() and minimise() API methods is now sent into specific_analyses.n_state_model.optimisation.target_fn_setup(), and from there into the pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data() functions. That way the number of RDCs and PCSs used in the N-state model is reported back to the user for better feedback.
* Updated the N_state_model.test_CaM_IQ_tensor_fit system test so it operates correctly as a GUI test.
All user functions are now executed through the special self._execute_uf() method to allow either the prompt interpreter or the GUI to execute the user function.
* Modified the N_state_model.test_CaM_IQ_tensor_fit system/GUI test for implementing a new feature. The 'spin_selection' argument has been added to the interatom.define user function. This will be used to carry the spin selections over into the interatomic data containers.
* Implemented the spin_selection boolean argument for the interatom.define user function. This has been added to the frontend with a description, and to the backend. When set, it allows the spin selections to define the interatomic data container selection.
* Changed the spin_selection argument default in the interatom.define user function backend. This now defaults to False to allow other parts of relax which call this function to operate as previously. The default for the interatom.define user function is however still True.
* Modified the Structure.test_load_spins_multi_mol system test for the spin.pos variable changes. The atomic position for an ensemble of structures is now set to None rather than being missing, so the system test has been updated to check for this.
* The align_tensor.display user function now has more consistent section formatting. The section() and subsection() functions of the lib.text.sectioning module are now being used to standardise these custom printouts with the rest of relax.
* Modifications to the new N_state_model.test_CaM_IQ_tensor_fit system test. The system test now checks all of the optimised values to make sure the correct values have been found. This will block any future regressions in this N-state model code path. The system test is now also faster. And the pcs.structural_noise user function RMSD value has been set to 0.0 so that the test no longer has a random component affecting the final optimised values.
* Added printouts for the rdc.calc_q_factors and pcs.calc_q_factors user functions.
These are activated by the new verbosity user function argument which defaults to 1. If the value is greater than 0, then the backend will print out all the calculated Q factors.
* The verbosity argument of the RDC and PCS q_factors() functions now defaults to 1. This causes the Q factors to be printed out at the end of all N-state model optimisations.
* Created the Structure.test_bug_22860_CoM_after_deletion system test. This is to catch bug #22860 (https://gna.org/bugs/?22860), the failure of the structure.com user function after calling structure.delete.
* Fix for the checks in the new Structure.test_load_spins_multi_mol system test. A spin index was incorrect.
* Fix for the structure.load_spins user function when the from_mols argument is used. The load_spins_multi_mol() function of the pipe_control.structure.main module was incorrectly handling the atomic position returned by the internal structural object atom_loop() method. This position is a list of lists when multiple models are present. But when only a single model is present, it returns a simple list.
* Modified the Structure.test_bug_22860_CoM_after_deletion system test to expect a RelaxNoPdbError. This tests that the structure.com user function raises RelaxNoPdbError after deleting all of the structural information from the current data pipe.
* The mol_name argument is now exposed in the structure.add_atom user function. This has been added as the first argument of the user function to allow new molecules to be created or to allow the atom to be placed into a specific molecule container. The functionality was already implemented in the backend, so it has been exposed by simply adding a new argument definition to the user function.
* Created the Structure.test_bug_22861_PDB_writing_chainID_fail system test. This is to catch bug #22861 (https://gna.org/bugs/?22861), the chain IDs in the structure.write_pdb user function PDB files being incorrect after calling structure.delete.
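For context, the Q factor reported by these user functions is conventionally the normalised root-mean-square deviation between the measured and back-calculated couplings or shifts. A minimal sketch of that standard definition, not the relax backend:

```python
import math

def q_factor(measured, calculated):
    """Normalised RMS deviation between measured and back-calculated values:
    Q = sqrt( sum (D_meas - D_calc)^2 / sum D_meas^2 )."""
    num = sum((m - c) ** 2 for m, c in zip(measured, calculated))
    denom = sum(m ** 2 for m in measured)
    return math.sqrt(num / denom)

# Example RDCs in Hz: measured versus back-calculated from the fitted tensor.
print(q_factor([10.0, -5.0, 2.0], [9.0, -4.0, 2.5]))
```

A Q factor of 0 means a perfect fit; values near 1 indicate that the back-calculated data explains essentially none of the measurements.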
* Small modification of the Structure.test_bug_22861_PDB_writing_chainID_fail system test. File metadata is now being set to demonstrate that the structure.delete user function does not remove this once there is no more data left for the molecule.
* Small indexing fixes for the dispersion chapter of the relax manual.
* Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test. Another import line was written to the matplotlib script.
* Speedup and fix for the Relax_disp.test_dx_map_clustered_create_par_file system test. The following test was taken out, since this is a particularly interesting case: there exists a double minimum, where relax has not found the global minimum. This is due to not grid searching for R2a, but using the minimum value.
* Removed debugging code from the N_state_model.test_CaM_IQ_tensor_fit system test. This was an accidentally introduced state.save user function call used to catch the system test state. It would result in the 'x.bz2' file being dumped in the current directory.
* Loosened the checks in the Relax_disp.test_baldwin_synthetic_full system test. This is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system.
* Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test for certain systems. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
* Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test for certain systems. The optimisation precision has been increased, and the value checking precision has been decreased. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
* Converted all the extern.numdifftools modules using the dos2unix program.
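A dos2unix-style line ending conversion like the one applied to the extern.numdifftools modules can be reproduced in a few lines of Python. This is a sketch of the equivalent operation, not the dos2unix tool itself:

```python
from pathlib import Path

def dos2unix(path):
    """Replace DOS/Windows CRLF line endings with Unix LF, in place.
    Operating on raw bytes avoids any text decoding issues."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))

# Example usage (hypothetical path):
# dos2unix("extern/numdifftools/core.py")
```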
* Updated the Python 2 to Python 3 migration document to be more current. * Small edit of the docs/devel/2to3_checklist document. * Expanded the Python 2 to 3 conversion document to list the 2to3 commands individually. * The ImportErrors in unit tests are now correctly handled by the relax test suite. If an ImportError occurred, this was previously killing the entire test suite. * The target_function.relax_fit module unit tests are now skipped if the C module is not compiled. * Expanded the Python 2 to 3 conversion document. * Small update to the 2to3_checklist document - the print statement conversion has been added. * The lib.errors module is now importing lib.compat.pickle for better Python 2 and 3 support. This shifts the compatibility code from lib.errors into lib.compat so that the 2to3 program will not touch the lib.errors module. * Better Python 3 compatibility in some test suite shared data profiling scripts. These changes invert the logic, importing the Python 3 builtins module and aliasing xrange() to range(), and passing if an ImportError occurs. The code will now no longer be modified by the 2to3 program. * Unicode fixes for the "\u" string in "\usepackage" in the module docstring. This requires escaping as "\\usepackage" to avoid the unicode character '\u'. * The lib.check_types module now imports io.IOBase from the lib.compat module. This is to shift more Python 2 vs. 3 compatibility into lib.compat and out of all other modules. * Python 3 improvements - changed how the Python 3 absent builtins.unicode() function is handled. The aliased builtins.str() function is now referenced as lib.compat.unicode(). The Python 2 __builtin__.unicode() function is also aliased to lib.compat.unicode(). The GUI code using this function now imports it from lib.compat. * Removed the writable base directory check in the dauvergne_protocol auto-analysis. This check was causing the system test to fail if the user does not have write access to the installed relax directory. 
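The lib.compat aliasing pattern described above can be sketched as below. This illustrates the general technique of re-exporting a version-specific builtin from a single compatibility module; it is not relax's exact lib/compat.py.

```python
import sys

# Sketch of a compat module: alias a name that exists in only one
# Python major version, so that all other modules can import it from
# here and the 2to3 program never needs to touch them.
if sys.version_info[0] >= 3:
    unicode = str                    # Python 3: no unicode() builtin, alias str().
else:
    import __builtin__
    unicode = __builtin__.unicode    # Python 2: re-export the real builtin.
```

Other modules would then use `from lib.compat import unicode` (or the equivalent for this sketch) instead of relying on the builtin.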
* Expanded the Mac_framework_build_3way document to include matplotlib. * Important bug fix for a race condition causing the GUI to freeze. This is really only seen in the GUI tests on MS Windows systems, as a user could never be fast enough with the mouse. The GUI interpreter flush() method for ensuring that all user functions in the queue have been cleared now calls wx.Yield() to force all wxPython events to also be flushed. This change will avoid random freezing of the relax test suite. * Bug fix for the Mf.test_bug_21615_incomplete_setup_failure GUI test on MS Windows systems. The GUI interpreter flush() method needs to be called between the two structure.load_spins user function calls. Without this, the test will freeze on MS Windows. The freezing behaviour is however not 100% reproducible and is dependent on the Windows version and wxPython version. * Shifted a number of wx.NewId() calls to the module namespace to conserve IDs. These are for the menus in the main window and in the spin view window. * Shifted the wx.NewId() calls for the spectrum list GUI element to the module namespace. These IDs are used for the pop up menus. The change avoids repetitive calls to wx.NewId() every time a right click occurs, conserving wx IDs so that they are not exhausted when running the test suite or running the GUI for a long time. * More shifting of wx.NewId() calls for popup menus to module namespaces to conserve IDs. * Converted all of the GUI wizard button IDs to -1, as they are currently unused. This should conserve wx IDs, especially in the test suite. * Shifted the main GUI window toolbar button wx IDs to the module namespace. This has no effect apart from better organising the code. * Shifted the relax controller window popup menu wx IDs to the module namespace. This is simply to better organise the code to match the other GUI module changes. * Menus created by the gui.components.menu.build_menu_item() function now default to the wx ID of -1. This is to conserve wx IDs. 
If the calling code does not provide the ID, there is no need to grab one from the small pool of IDs. * Shifted the spin viewer GUI window toolbar button wx IDs to the module namespace. This should conserve wx IDs as the window is created and destroyed, as only 2 IDs will be taken from the small pool for the entire lifetime of the program. * Shifted all of the wx.NewId() calls for the new analysis wizard into the module namespace. This will hugely reduce the number of wx IDs used by the GUI, especially in the test suite. Instead of grabbing 8 IDs from the small pool every time the new analysis wizard is created, only 8 IDs will be used for the lifetime of the program. * Another large wx ID saving change. The ID associated with the special accelerator table that allows the ESC button to close relax wizards is now initialised once in the module namespace, and not each time a wizard is created. * A small wx ID conserving change - the 'Execute' button in the analysis tabs now uses the ID of -1. A unique ID is not necessary and is unused. * The user function class menus no longer have unique wx IDs, as these are unnecessary. This conserves the small pool of unique wx IDs, as the spin viewer window is created and destroyed. * Bug fix for the structure.load_spins user function new from_mols argument. This was incorrectly using the pipe_control.pipes.pipe_names() function to obtain its default values in the GUI (although this is not currently used). The result was a non-fatal error message on Mac OS X systems of "Python[1065:1d03] *** __NSAutoreleaseNoPool(): Object 0x3a3944c of class NSCFString autoreleased with no pool in place - just leaking". * Added a debugging Python version check to the devel_scripts/memory_leak_test_relax_fit.py script. This prevents the script from being executed with a normal Python binary. * Created the blacklisted Noe.test_noe_analysis_memory_leaks GUI test. This long test can be manually run to help chase down memory leaks. 
This can be monitored using the MS Windows task manager, once the 'USER Objects' column is shown. If the USER Objects count reaches 10,000 in Windows, then no more GUI elements can be created and the user will see errors. * Added a printout to the Noe.test_noe_analysis_memory_leaks GUI test to help with debugging. * Improved debugging printouts for the Noe.test_noe_analysis_memory_leaks GUI test. * Small fix for the GUI analysis deletion method to prevent a race condition in the GUI tests. * Redesigned how wizards are destroyed in the GUI. The relax wizard Destroy() method is now overridden. This allows the buttons in the wizard to be properly destroyed, as well as all wizard pages. This should remove a lot of GUI memory leaks. * Created the General.test_new_analysis_wizard_memory_leak blacklisted GUI test. This will be used to check for memory leaks in the new analysis wizard. * Removed an unused dictionary from the GUI wizard object. * Added a wx.Yield() before destroying the new analysis wizard via menu_new(). This is to avoid a race condition which can be triggered in the test suite. Bugfixes: * Fix for the latex2html tags in the model-free chapter of the relax manual. This bug may affect the compilation of both the PDF and HTML versions (http://www.nmr-relax.com/manual/) of the manual. * Formatting improvements for the user function chapter of the HTML manual. This is for http://www.nmr-relax.com/manual/Alphabetical_listing_user_functions.html. This will hopefully fix the horrible formatting whereby all text is wrapped in the HTML tags <SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="SCRIPTSIZE">text</SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL>. * Big bug fix for the text size formatting of the HTML manual. 
The previous fix for the user function chapter of the HTML manual (http://www.nmr-relax.com/manual/Alphabetical_listing_user_functions.html) did not fix the problem. The issue was with the {exampleenv} defined using a \newenvironment command in the preamble. The command \footnotesize was being used in the start, but nothing was changing the font size at the end. In LaTeX, the ending of the environment appears to reset the font size, whereas in latex2html it does not. Therefore all text after this environment is prepended by <SMALL CLASS="FOOTNOTESIZE"> in the HTML manual and this keeps adding to the text after each new exampleenv environment. * Fix for the poorly written User_functions.test_structure_add_atom GUI test. This fixes the first of the two parts of bug #22772 (https://gna.org/bugs/?22772), the modelfree4 binary issue and the User_functions GUI tests with wxPython 2.9 failures of the test suite. The problem was that a list element was being set in the GUI test, but that element did not exist yet. Somehow this worked in wxPython 2.8, but the bad code failed on wxPython 2.9. * Updated the Palmer.test_palmer_omp system test for the 64-bit Linux Modelfree 4.20 GCC binary file. This fixes the second and last part of bug #22772 (https://gna.org/bugs/?22772), the modelfree4 binary issue and the User_functions GUI tests with wxPython 2.9 failures of the test suite. The problem is that the 64-bit GNU/Linux GCC compiled binary of Modelfree 4.20 produces different results than previous versions. These are now caught by the system test and correctly checked. * Removal of the use of OrderedDict(). OrderedDict is first available in Python 2.7 and is not essential functionality. The functionality is replaced with looping over a list of dictionary keys instead, which is picked up under analysis. Bug #22798 (https://gna.org/bugs/?22798): Failure of relax to start due to an OrderedDict ImportError on Python 2.6 and earlier. 
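The OrderedDict replacement mentioned in the bug #22798 fix can be sketched as below: a plain dict plus an explicit key list reproduces ordered iteration on Python 2.6, where collections.OrderedDict does not exist. The names here are illustrative, not relax's actual code.

```python
# Plain dict for the values plus a list preserving insertion order.
data = {}
key_order = []

def store(key, value):
    """Store a value, remembering first-insertion order of the keys."""
    if key not in data:
        key_order.append(key)
    data[key] = value

store('R2eff', 10.0)
store('I0', 1000.0)
store('R2eff', 12.0)   # an update keeps the key's original position

# Ordered iteration, as OrderedDict.items() would have provided.
ordered_items = [(key, data[key]) for key in key_order]
```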
* Fix for the find next bug in the relax controller window. This is bug #22815 (https://gna.org/bugs/?22815), the failure of find next using F3 (or Ctrl-G on Mac OS X) in the relax controller window if search text has already been set. The fix was simple, as the required flags are in the self.find_data class object (an instance of wx.FindReplaceData). * Fix for find dialog in the relax controller window. This is for bug #22816 (https://gna.org/bugs/?22816), the find functionality of the relax controller window does not find text when using wxPython >= 2.9. The find wxPython events are now bound to the find dialog rather than the relax controller window LogCtrl element for displaying the relax messages. This works on all wxPython versions. * Bug fix for the structure.align user function for when no data pipes are supplied. * Bug fix for the N-state model grid search when only alignment tensor parameters are optimised. The algorithm for splitting up the grid search to optimise each tensor separately, hence massively collapsing the dimensionality of the problem, was being performed incorrectly. The grid_search() API method inc, lower, and upper arguments are lists of lists, but were only being treated as lists. * Final fix for bug #22849 (https://gna.org/bugs/?22849). This is the failure of the N-state model analysis when optimising only alignment tensors using RDCs and/or PCSs. The alignment tensor is no longer initialised to zero values. This is to allow the skip_preset argument for the minimise.grid_search user function to be operational for the N-state model, a feature introduced with the zooming grid search. The solution was to check for the uninitialised tensor in the minimise_setup_fixed_tensors() method of the specific_analyses.n_state_model.optimisation module. * Bug fix for the lib.arg_check.is_float_matrix() function. The check for a numpy.ndarray data structure type was incorrect so that lists of numpy arrays were failing in this function. 
Rank-2 arrays were not affected. * Fix for the structure.com user function. This fixes bug #22860 (https://gna.org/bugs/?22860), the failure of the structure.com user function after calling structure.delete. The number of models in cdp.structure is now counted and if set to zero, RelaxNoPdbError will be raised. * The structure.write_pdb user function can now handle empty molecules. This fixes bug #22861 (https://gna.org/bugs/?22861), the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete. To handle this consistently, the internal structural object ModelContainer.mol_loop() generator method has been created. This loops over the molecules, yielding those that are not empty. The MolContainer.is_empty() method has been fixed by not checking for the molecule name, as that remains after the structure.delete user function call while all other information has been removed. And finally the write_pdb() structural object method has been modified to use the mol_loop() method rather than performing the loop itself. * Fix for the structure.delete user function for molecule metadata once no more data exists. This relates to bug #22861 (https://gna.org/bugs/?22861), the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete. The metadata, when it exists, is now deleted for the molecule once no more data is present. * Fix for system test Relax_disp.test_bug_atul_srivastava. The call to the expected RelaxError needed to be performed differently for Python versions earlier than 2.7. * Fix for bug #22937 (https://gna.org/bugs/?22937). This is the failure of the Relax_disp.test_estimate_r2eff_err_auto system test on Python 2.5. The test_suite/shared_data/dispersion/Kjaergaard_et_al_2013/1_setup_r1rho_GUI.py simply required a newline character at the end of the file so that it can be executed in Python 2.5. * Fix for bug #22938 (https://gna.org/bugs/?22938). 
This is the failure of the test suite in the relax GUI. The problem was that the status.skip_blacklisted_tests variable did not exist - it was only initialised if relax is started in test suite mode. Now the value is always set from within the status module and defaults to True. * Python 3 fixes for the relax codebase. These changes were made using the command: 2to3 -j 4 -w -f buffer -f idioms -f set_literal -f ws_comma -x except -x import -x imports -x long -x numliterals -x xrange . * Python 3 fixes throughout relax, as identified by the 2to3 script. The command used was: 2to3 -j 4 -w -f except -f import -f imports -f long -f numliterals -f xrange . * Python 3 fixes - eliminated all usage of the dictionary iteritems() calls as this no longer exists. * Python 3 fixes using 2to3 for the extern.numdifftools package (mainly spacing fixes). The command used was: 2to3 -j 4 -w -f buffer -f idioms -f set_literal -f ws_comma -x except -x import -x imports -x long -x numliterals -x xrange . * Python 3 fixes using 2to3 for the extern.numdifftools package. The command used was: 2to3 -j 4 -w -f except -f import -f imports -f long -f numliterals -f xrange . * Python 3 fixes for all print statements in the extern.numdifftools package. The print statements have been manually converted into print() functions. * Python 3 fixes via 2to3 - elimination of all map and lambda usage in relax. The command used was: 2to3 -j 4 -w -f map . * Python 3 fixes via 2to3 - replacement of all `x` with repr(x). The command used was: 2to3 -j 4 -w -f repr . * Manual Python 3 fixes for the dict.keys() function, which returns a list in Python 2 but an iterator in Python 3. This involves a number of changes. The biggest is the conversion of the "x in y.keys()" statements to "x in y". 
For code which requires a list of keys, the function calls "list(y.keys())" or preferably "sorted(y.keys())" are used throughout (sorted() ensures that the list will be of the same order on all operating systems and Python implementations). A number of "x in list(y.keys())" statements were simplified to "x in y", some list() calls changed to sorted(), and some unnecessary list() calls were removed. * Python 3 fixes via 2to3 - elimination of all apply() calls. This only affects the GUI, which cannot run in Python 3 as wxPython is not yet Python 3 compatible. The command used was: 2to3 -j 4 -w -f apply . * Python 3 fixes via 2to3 - proper handling of the dict.items() and dict.values() functions. These are now all wrapped in list() function calls to ensure that the Python 3 iterators are converted to list objects before they are accessed. The command used was: 2to3 -j 4 -w -f dict . * Python 3 fixes via 2to3 - the execfile() function does not exist in Python 3. The command used was: 2to3 -j 4 -w -f execfile . * Python 3 fixes via 2to3 - the filter() function in Python 3 now returns an iterator. The command used was: 2to3 -j 4 -w -f filter . |
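The dict-related Python 2/3 conversions listed above follow a small set of patterns, sketched here with a throwaway dictionary:

```python
d = {'b': 2, 'a': 1}

# "x in y.keys()" becomes the equivalent but faster "x in y".
found = 'a' in d

# Where a real list of keys is needed, sorted(y.keys()) guarantees the
# same order on all operating systems and Python implementations.
keys = sorted(d.keys())

# items() and values() return iterators/views in Python 3, so they are
# wrapped in list() before being indexed or reused.
items = list(d.items())
first_item = items[0]
```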
From: Edward d'A. <ed...@do...> - 2014-10-09 16:56:59
|
This is a minor feature and bugfix release. It includes the addition of the error_analysis.covariance_matrix, structure.align, and structure.mean user functions and expanded functionality for the structure.com and structure.delete user functions. Many operations involving the internal structural object are now orders of magnitude faster, with the interatom.define user function showing the greatest speed ups. There are also improvements for helping to upgrade relax scripts to newer relax versions. The numdifftools package is now bundled with relax for allowing numerical gradient, Hessian and Jacobian matrices to be calculated. And the release includes the start of a new protocol for iteratively analysing repetitive relaxation dispersion experiments. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * Initial auto-analysis support for a highly repetitive protocol for analysing relaxation dispersion data. * Addition of the docs/user_function_changes.txt file which documents all user function changes from relax 1.0.1 to 3.3.1 to help with upgrading scripts to newer relax versions. * Updated the translation table used to identify no longer existing user functions and explain what the new user function is called for all relax versions from 1.3.1 to 3.3.1. * The structure.delete user function can now delete individual models as well as selected atoms in individual models. * Addition of the error_analysis.covariance_matrix user function for determining parameter errors via the covariance matrix. This is currently only implemented for the relaxation curve-fitting analysis. 
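The idea behind the new error_analysis.covariance_matrix user function can be sketched with numpy: invert J'WJ, where J is the Jacobian of the model at the fitted parameters and W is the diagonal weight matrix built from the measurement variances, then take the square root of the covariance diagonal as the parameter errors. This is the general covariance-matrix technique, not relax's exact implementation; the exponential model, time points, and variances below are purely illustrative.

```python
import numpy as np

def covariance_errors(jacobian, variances):
    """Parameter errors as the square root of the covariance diagonal."""
    W = np.diag(1.0 / variances)                     # weights from data variances
    cov = np.linalg.inv(jacobian.T @ W @ jacobian)   # covariance matrix
    return np.sqrt(np.diag(cov))

# Jacobian of the exponential model I(t) = I0 * exp(-R * t) at the
# fitted (R, I0) values (illustrative numbers).
times = np.array([0.0, 0.1, 0.2, 0.4])
R, I0 = 10.0, 1000.0
d_dR = -times * I0 * np.exp(-R * times)   # dI/dR
d_dI0 = np.exp(-R * times)                # dI/dI0
J = np.column_stack([d_dR, d_dI0])

errors = covariance_errors(J, variances=np.full(4, 25.0))  # sigma_R, sigma_I0
```

This is much cheaper than Monte Carlo simulations, which is why the changelog below notes the two approaches giving very similar results for the exponential fit.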
* Bundling of the Numdifftools 0.6.0 package with relax (https://code.google.com/p/numdifftools/) for numerically testing implementations of gradients, Hessians, and Jacobians. * Implementation of the internal structural object collapse_ensemble() method to allow for all but one model to be deleted. * Massive speed up of the internal structural object by pre-processing the atom ID string into a special atom selection object. This speeds up the interatom.define, structure.delete, structure.rotate, structure.translate and many other user functions which loop over structural data. * Many orders of magnitude speed up of the structure.add_model user function. * Implementation of the structure.mean user function to calculate the mean structure from the atomic coordinates of all loaded models. * Implementation of the structure.align user function for aligning and superimposing different but related structures. This is similar to the structure.superimpose user function but allows for missing atomic information or small sequence changes. Only atoms with the same residue name and number and atom name are used in the superimposition. * Expanded the structure.com user function to accept the atom_id argument to allow the centre of mass of a subset of atoms to be determined. * Improvements for the running of the relax test suite. Changes: * Epydoc docstring fix for the dep_check.version_comparison() function. * Removed ZZ and HD exchange from the dispersion chapter of the relax manual. These would probably require completely new analysis types added to relax to analyse such data. * Updated the 'Announcement' section of the release checklist document. This now includes details about initially composing the message using the relax wiki (http://wiki.nmr-relax.com), and then how that text and the CHANGES file are used for the email announcement (http://news.gmane.org/gmane.science.nmr.relax.announce) and the Gna! news item (https://gna.org/news/?group=relax). 
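The structure.mean calculation described above reduces to averaging the atomic coordinates over all loaded models. A minimal numpy sketch, with illustrative coordinates and array shapes:

```python
import numpy as np

# Coordinates for two models of a two-atom molecule,
# shaped (models, atoms, xyz).  Values are illustrative.
models = np.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],   # model 1
    [[0.2, 0.0, 0.0], [1.2, 0.0, 0.0]],   # model 2
])

# The mean structure: average each atom's position over the models.
mean_structure = models.mean(axis=0)
```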
* Small changes for the Gna! news item in the release checklist document. * Modified the announcement section of the release checklist document. Text about removing wiki markup has been added. * More expansion of the release checklist document. Added text about creating internal and external links for the wiki release notes. * Modified the Relax_disp.test_show_apod_extract system test which tests the output from showApod. The output can differ with the NMRPipe version, but the 'Noise Std Dev' value is the same. * Fix for the comments for the showApod dependency check. * Fix for raising an error when calling showApod when the subprocess module is not available. * Fix for the showApod dependency check in the system tests. * Further extended the protocol for repeated dispersion analysis. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Extended the system test for the protocol for repeated dispersion analysis. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Added a relaxation dispersion model profiling log file for relax version 3.3.0 vs. 3.2.3. This is the output from the dispersion model profiling master script. These numbers will be used for the relax 3.3.0 release notes (http://wiki.nmr-relax.com/Relax_3.3.0). * Fixes for the relax 3.3.0 vs. 3.2.3 dispersion model profiling log file. The numeric model numbers were incorrectly scaled, being a factor of 10 too high. * Fixes for the scaling factors in the dispersion model super profiling script. * Editing of the relax 3.3.0 features section of the CHANGES file. This will be used for the release notes. * Added more test data for the repeated analysis. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Updated the Baldwin 2014 reference in the relax manual. 
The pybliographic software was used to format this BibTeX entry (http://pybliographer.org/). This was updated as volume and page number information is now available. * Updated the Morin et al., 2014 paper (the relax relaxation dispersion paper) reference in the manual. The paper now has volume and page information. * Added some more user function renamings to the translation table. These were identified while preparing the release notes on the wiki (http://wiki.nmr-relax.com/Category:Release_Notes, http://wiki.nmr-relax.com/Release_notes). * Stored a frequency dependent dictionary of spectrum IDs and repeated CPMG frequencies in the setup pipe. This information will propagate through the child pipes. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Further extended methods in the class for repeated analysis of dispersion data. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Updated the release checklist document, including adding a section about cross-linking. The cross-linking is important for search engine indexing. * Created a simple script for printing out the names of all user functions. * Added listings of all user functions from relax version 2.0.0 all the way to relax 3.3.0. This will be used to look at how the user function names have changed with time. * Added a script and log file for comparing relax user function differences between versions. * Created a document for relax users which follows the changes to the user function names. * For the spin.display user function, added a printout of the spin ID and selection status. This is to help with showing the spin ID string for selection and the current selection status. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. 
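The translation table for renamed user functions amounts to a mapping from old to new names, consulted when an unknown user function call is seen in an old script. A sketch follows; only the noe.read to spectrum.read_intensities renaming is taken from this changelog, while the helper function and the use of NameError are purely illustrative, not relax's actual machinery.

```python
# Hypothetical translation table mapping old user function names to
# their current replacements.
translation_table = {
    'noe.read': 'spectrum.read_intensities',
}

def check_user_function(name):
    """Raise an informative error if a renamed user function is called."""
    if name in translation_table:
        raise NameError("The %s user function has been renamed to %s." %
                        (name, translation_table[name]))
```

An old script calling noe.read would then fail with a message naming the replacement, instead of an opaque attribute error.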
* Added functionality to the back-end of the pipe display to sort the pipe names before printing. Also added the return of the list of pipes, with the associated information about the pipe type and pipe bundle. This is to help with getting a better overview of multiple pipes in the data store. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Passed the force flag from the front end of value.set to the back end. Bug #22598 (https://gna.org/bugs/index.php?22598): The back end of value.set does not respect the force=False flag. * Broke the optimisation function into smaller functions. This is to help with selecting spins, performing a particular grid search, and minimising. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Modified the system test to follow the new functions in the auto-analysis. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Shifted the user function listing script into the test suite directory where the results are. * Created a script for printing out relax 1.3 user functions. * Stripped out all of the relax intro and script printouts from the user function listing files. This allows the diff.py script to be simplified. * Updated the relax 1.3 user function printout script and added many printouts. The printouts are for relax versions 1.3.5 to 1.3.16. The earlier relax versions used the relax 1.2 user function setup. * Created a script for printing out all user functions for relax 1.2 versions. This also includes the relax 1.3.0 to 1.3.4 versions. * Added the relax 1.3.0 to 1.3.4 user function printouts. * Changed the behaviour of the script for showing user function differences between relax versions. The relax versions are now reversed so the oldest version is at the bottom of the difference printout. * Added the relax 1.0.1 to 1.2.15 user function printouts. 
The diff.log file has also been updated with all of these versions. * Updated the user_function_changes.txt document. This now lists all changes in the user function naming from relax version 1.0.1 all the way to relax 3.3.0. * Added all remaining user function renamings since relax 2.0.0 to the translation table. These were taken directly from the docs/user_function_changes.txt document. * Added all user function renamings since relax 1.3.1 to the translation table. These were taken directly from the docs/user_function_changes.txt document. Earlier relax versions are far too different, so this will be the earliest relax version for this translation table. The relax 1.2 and earlier (and 1.3.0) versions used the run argument throughout and the scripting was so different that telling the user how to upgrade to new user functions is pointless. And relax 1.2.15, the last of these old designs, was released in November 2008. * Changed the order of the two relax versions being compared for user function changes. This is in the diff.py script and log file and the user_function_changes.txt document. * Changed the organisation of the files in the docs/ directory. A new docs/devel directory has been created and the 2to3_checklist, Mac_framework_build_3way, package_layout, and prompt_screenshot.txt documents shifted into it. This is to hide or abstract away the development documents so that relax users do not see them when looking into docs/. This should make the directory less intimidating. * Shifted the Release_Checklist document into docs/devel/ to hide it from users. * Correction for the noe.read to spectrum.read_intensities user function change. This is for the translation table used to catch old user function calls. * Initial attempt at implementing plotting in the repeated auto-analysis protocol. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. 
* Small improvement of the matplotlib plotting of data in the repeated analysis protocol. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Fix for calling the correct folder with the test intensities. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * For the repeated analysis class, implemented a method to collect the peak intensities and a function to plot the correlation. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * The Relax_disp.test_repeat_cpmg system test is now skipped if the matplotlib module does not exist. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data. * Added the Gimp XCF file for the logo of the relax wiki (http://wiki.nmr-relax.com). * Added the Relax_fit.test_curve_fitting_height_estimate_error() system test for the manual and automated analysis of the exponential fit. This is to prepare for new methods in the auto-analysis protocol. * In the auto-analysis of exponential fitting, changed the minimisation method from simplex to Newton to speed up the fitting. This is for the master Monte Carlo simulations. * In the Relax_fit.test_curve_fitting_height_estimate_error() system test, moved the auto-detection of replicated spectra into the manual method. This is to prepare for the automated detection of replicates. * Implemented a method to automatically find duplicated spectra in the exponential fit analysis. This is to ease the user intervention for the error analysis, if this has been forgotten. * Implemented the writing out of a "grace2images.py" script file when performing the auto-analysis of exponential fits. * Created the Structure.test_delete_model system test. This is in preparation for extending the structure.delete user function to be able to delete individual structural models. 
The test will only pass once this functionality is in place. * Expanded the wiki instructions in the release checklist document. This includes a number of steps for significantly improving the release notes: External links to the Gna! trackers with full descriptions, external links to the HTML user manual for all user functions, internal links to release notes of other relax versions, internal links to wiki pages for all models from all theories, and HTML formatting of all symbols/parameters/etc. * Introduction of the model argument to the structure.delete user function. This argument is passed all the way into the internal structural object, but is not used yet. * The model argument in the structure.delete user function is now operational. In the internal object, it has two functions. When the atom_id argument is None, the new ModelList.delete_model() function is called to remove the entire model from the list of structural models. When the atom_id argument is supplied, only the corresponding atoms in the given model will be deleted. * Expanded the checking in the Structure.test_delete_model system test. Now a number of structural model loading and deletion scenarios are tested. * Implemented a back-end function to estimate the Rx and I0 errors from the Jacobian matrix. This is to prepare for a relax_fit user function to estimate errors. * Implemented the relax_fit.rx_err_estimate user function to estimate the Rx and I0 errors from the Jacobian covariance matrix. * Extended the Relax_fit.test_curve_fitting_height_estimate_error() system test to test the error estimation method from the covariance matrix. The results seem very similar when increasing to 2000 Monte Carlo simulations. * Renamed the pipe_control.monte_carlo module to pipe_control.error_analysis. 
This is in preparation for the module handling all error analysis techniques: Monte Carlo simulations, the covariance matrix, Jackknife simulations, Bootstrapping (which is currently via the Monte Carlo functions), etc. All current functions are now prefixed with 'monte_carlo_'.
* Fix for the reading of old relax 1.2 model-free results files. This is due to the pipe_control.monte_carlo to pipe_control.error_analysis module renaming.
* Implemented the pipe_control.error_analysis.covariance_matrix() function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.scm/23526/focus=7096. It will be used by a new error_analysis.covariance_matrix user function. It calls the specific API methods model_loop(), covariance_matrix(), and set_error(), and the relax library function lib.statistics.multifit_covar(), to do most of the work.
* Modified the Relax_fit.test_curve_fitting_height_estimate_error system test. The call to relax_fit.rx_err_estimate has been replaced by the yet-to-be implemented error_analysis.covariance_matrix user function.
* Creation of the error_analysis.covariance_matrix user function. This is simply a code rearrangement. The relax_fit user function module was duplicated and relax_fit.rx_err_estimate renamed to error_analysis.covariance_matrix. References to the specific analysis have been removed.
* Created the specific analysis base API method covariance_matrix(). This defines the arguments required and what is returned by the method. It raises RelaxImplementError for all analyses which do not implement this method.
* Modified pipe_control.error_analysis.covariance_matrix(). The call to the API covariance_matrix() method now has the model_info argument passed into it. For the relaxation curve-fitting, this allows the loop over spin systems to be skipped.
* Shifted the contents of the specific_analyses.relax_fit.estimate_rx_err module into the API. The estimate_rx_err() function is now the covariance_matrix() method of the specific API. The code for calculating the covariance matrix and errors is now in the pipe_control.error_analysis.covariance_matrix() function, so it has been removed. And the error setting is performed by the set_errors() API method, so that code has been deleted as well.
* Removed the specific_analyses.relax_fit.estimate_rx_err module import. The module has been merged into the specific API module.
* Fix for the pipe_control.error_analysis.covariance_matrix() function. The set_errors() API method is parameter specific, so a loop over the parameters using the get_param_names() API method has been added.
* Removed the estimate_rx_err module from the specific_analyses.relax_fit.__all__ list. This module was deleted after its merger into the api module.
* Improved the correlation plot for the peak intensities. The intensity to error ratio is now plotted, which is the correct measure for this data. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Implemented a correlation plot of the R2eff values from different pipes. This plots R2eff/R2eff_err, which is the best way to represent this data. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Further improved the plotting of data in the repeated analysis. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Added the Relax_disp.test_show_apod_rmsd_dir_to_files system test to the blacklist. This is for when the showApod program is not installed on the machine, and allows the test suite to pass.
* Extended the printout for the skipped tests in the test suite. As tests using the NMRPipe showApod software are skipped and listed in this table, the text now includes 'software' in the list.
* Shifted the checks for the Dasha and Modelfree4 software into the system test __init__() method.
This brings them into the same design as the relaxation dispersion tests which require the NMRPipe showApod software. The test suite will now list either Dasha or Modelfree4 in the skipped test table if they are not installed.
* Added another statistics method to plot for multiple data sets. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* More matplotlib snippets for plotting intermediate data. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Changed the plotting range for the statistics. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* More changes to the plotting of the statistics. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Fix for the axis limits when plotting the statistics. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Fix for the globbing, to prevent accidentally picking up the wrong intensity file. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Correction to the figure limits. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Implemented the writing out of statistics to file. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Added the writing out of PNG files from matplotlib when looking at the statistics. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Another math domain check: if the reference intensity is set to 0.0, the points are skipped rather than an error being raised. This can happen for extremely bad dispersion data. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Implemented some flexibility for when expected data is missing. This can be due to a failure in the processing of the data, where a whole run of data is randomly skipped. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Better check for the math domain error in the intensity proportionality. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Removed the initialisation of a dictionary before the existence of the data has been checked. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Small fix for the correct check for missing data. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Imported the Numdifftools 0.6.0 package into the relax source tree. This package is extremely useful for testing the implementation of gradients, Hessians, and Jacobians for all relax target functions. The numerical values from numdifftools can be compared to the directly calculated values. And for analysis types where the partial derivatives with respect to each model parameter are too complicated to calculate, or the derivatives are very complicated and hence slow, numdifftools can be used to provide a numerical estimate for direct use in the optimisation. The Numdifftools package is from https://pypi.python.org/pypi/Numdifftools and https://code.google.com/p/numdifftools/. The current version 0.6.0 has been placed into extern/numdifftools. This is only the numdifftools package within the official distribution files; the Python package setup.py file and associated files and directories have not been included.
The package uses the New BSD licence (the revised licence with no advertising clause), which is compatible with the GPL v3 licence.
* Reordered the functions in the repeated analysis protocol. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Added more method checks to the Relax_disp.test_repeat_cpmg() system test. This shows that the relax_disp.r20_from_min_r2eff user function may be broken. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Fix for testing whether the method is finished when called. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Turned on minimisation in the Relax_disp.test_repeat_cpmg() system test. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* The lib.spectrum.nmrpipe module has been made independent of the relax source code. This was discussed at http://thread.gmane.org/gmane.science.nmr.relax.scm/23357/focus=7103. The change allows the software verification tests to pass. The dep_check module cannot be used in the relax lib package; only modules from within lib are allowed to be imported into modules of lib. The fix now allows the full test suite to pass, and hence new relax releases are once again possible.
* Created a document which explains how missing copyrights can be found.
* Even more improvements to the shell command for finding missing copyrights.
* Updated the copyright notices for 2014 for all files changed by Edward d'Auvergne. These were identified using the command in the find_missing_copyrights document.
* Added numdifftools to the extern package __all__ list.
* Updated the find_missing_copyrights document. The matching is now more precise and skips all svnmerge operations.
* Added the 2014 copyright notice for Troels Linnet to many relax source files. These were identified as having been edited by Troels using the command listed in the find_missing_copyrights document. The changes include adding "Copyright 2014 Troels E. Linnet" to many files not containing Troels' copyright notice, and extending the 2013 copyright to 2014.
* Implemented a correlation plot of the minimisation values. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Changed the missing package/module/software table in the test suite. This is to allow all names to fit and to update the column titles for software packages.
* Decreased the accuracy of a check in the Relax_disp.test_estimate_r2eff_err_auto system test. This allows the test to pass on my Windows 7 VM.
* Added Troels E. Linnet to the COMMITTERS file, which had not been updated in almost 3 years.
* Created the Structure.test_get_model system test. This demonstrates that the internal structural object get_model() method is not working as it should.
* Added a few more checks to the Structure.test_get_model system test.
* Created the Structure.test_collapse_ensemble system test. This is used to test a planned feature of the internal structural object. The collapse_ensemble() method will be created to remove all but one model in the structural ensemble.
* Modified the Structure.test_collapse_ensemble system test to check the initial values. This is for sanity reasons, as the test coverage of the structure.add_atom user function is poor.
* Implemented the internal structural object collapse_ensemble() method. This allows the Structure.test_collapse_ensemble system test to pass.
* Created a basic text-based progress meter in the new lib.text.progress module. This is taken from the script test_suite/shared_data/frame_order/cam/generate_base.py.
* Modifications to the User_functions.test_structure_add_atom GUI test.
As lists of lists are now accepted by the structure.add_atom user function, the operation in the GUI is now significantly different. Therefore many checks have been removed from the GUI test.
* Updated the minimum minfx dependency version from 1.0.9 to 1.0.11 in the dep_check module. The newest version handles infinite target function values, preventing the optimisation from continuing forever (https://gna.org/forum/forum.php?forum_id=2477). The 1.0.10 version is also useful as there is full support for gradients and Hessians in the log-barrier constraint algorithm (https://gna.org/forum/forum.php?forum_id=2475).
* Shifted the specific_analyses.relax_disp.variables module into lib.dispersion. This is both to minimise circular dependencies, as previously the specific_analyses.relax_disp modules imported from target_functions.relax_disp and vice versa, and to allow the relax library functions to have access to these variables. This follows from a similar change to the frame order analysis in the frame_order_cleanup branch.
* Dependency fix for the auto_analyses.relax_disp_repeat_cpmg module. This was causing relax to fail. SciPy is an optional dependency for relax, but this module caused relax to not start if SciPy was not installed. This was detected by testing relax with PyPy.
* Implemented the writing out of particular correlation plots to file. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Created a special internal structural object selection object. This will be used for massively speeding up the internal structural object. The use of the lib.selection module by the internal structural object is currently very slow, as a huge number of calls to re.search() are required. The idea is to use lib.selection once to populate this new selection object, and then reuse this object to loop over molecules and atoms.
* Added the selection() method to the internal structural object. This parses the atom ID string using the lib.selection module, loops over the molecules and atoms, performs matches using re.search() via lib.selection, and populates and returns the new Internal_selection object. This can be used to pre-process the atom ID string to save huge amounts of time.
* The internal structural object validate_models() method now accepts the verbosity argument. This is used to silence printouts.
* Fixes for the new structural object Internal_selection object. The atom indices are now stored via the molecule index.
* Converted the rotate() and translate() structural object methods to use the new selection object. The atom_id arguments have been replaced with selection arguments. Therefore all parts of relax which call these methods must first call selection() to obtain the Internal_selection instance.
* Created the structural object Internal_selection.mol_loop() method. This is to quickly loop over all molecule indices of the selection object.
* Converted all structural object methods to use the selection object rather than atom ID strings. This should have a significant impact on the speed of certain operations within relax. The most obvious effect will be a huge speed up of the interatom.define user function. There should be speed ups for a number of other user functions relating to structural information. All parts of relax have been updated for the change.
* Implemented the sampling sparseness instead of NI on the graph axis. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Massive speed up of the internal structural object add_model() method. This speeds up the structure.add_model user function, as well as many internal relax operations on the structural object.
Instead of using the copy.deepcopy() function to duplicate an already existing structural model, new molecule container objects are now created and the individual elements of the original molecule container data lists are copied one by one. This avoids copying a lot of internal Python junk, and hence the copying operation is now orders of magnitude faster.
* Created the new --no-skip relax command line option. This is a debugging option specifically designed for relax developers. It allows all blacklisted tests to be executed, i.e. all blacklists are ignored. These tests would normally be skipped, however this option enables them.
* Fix for the test suite summary printout function for the new --no-skip option. The relax status object was clashing with a variable of the same name.
* Reactivated the Relax_disp.test_m61b_data_to_m61b system test, but blacklisted it. This allows the test to be executed if the --no-skip command line option is used.
* Created the Bmrb.test_bug_22703_display_empty system and GUI test. This system test catches bug #22703 (https://gna.org/bugs/?22703), the failure of the bmrb.display user function with an AttributeError when no data is present. It is simultaneously a system and GUI test, as the GUI test class inherits directly from the system test class.
* Created the pipe_control.spectrometer.check_setup() function. This follows the design on the wiki page http://wiki.nmr-relax.com/Relax_source_design. This is for checking if the spectrometer information has been set up.
* Created the RelaxNoFrqWarning warning class for warning that no spectrometer information is present.
* Renamed the pipe_control.spectrometer.check_setup() function to check_spectrometer_setup(). This is so that it can be used without confusion outside of the module.
* Fix for a broken elif block in the new pipe_control.spectrometer.check_spectrometer_setup() function.
* The model-free bmrb_write() API method now checks for spectrometer information. This is via a call to the pipe_control.spectrometer.check_spectrometer_setup() function.
* Modified the Bmrb.test_bug_22703_display_empty system/GUI test to catch the RelaxNoFrqError.
* Created a special Check class based on the strategy design pattern. This is in the new lib.checks module. The class will be used to simplify and unify all of the check_*() functions in the pipe_control and specific_analyses packages.
* Converted the pipe_control.spectrometer.check_*() functions to the strategy design pattern. These are now passed into the lib.checks.Check object, and the original functions are now instances of this class.
* Alphabetical ordering of all functions in the pipe_control.pipes module.
* Changed the design of the Check object in lib.checks. The checking function to call should now return either None if the check passes, or an instantiated RelaxError object if not. This is then used to determine whether the __call__() method should return True (when None is received). Otherwise, if escalate is set to 1, the text from the RelaxError object is sent into a RelaxWarning and False is returned; if escalate is set to 2, the error object is simply raised.
* Updated the pipe_control.spectrometer.check_*_func() functions to use the new design.
* Implemented the writing out of parameter values for the comparison between NI levels. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Fixes for the lib.checks.Check object. The __call__() method keyword arguments **kargs need to be processed inside the method to strip out the escalate argument.
* The default value of the escalate argument of the Check.__call__() method is now 2. This causes calls to the check_*() function/objects to default to raising RelaxErrors.
* Changed the behaviour of the lib.checks.Check object again.
This time the registered function is stored rather than being converted into a class instance method. That way the check_*() function-like objects do not need to accept the unused 'self' argument.
* The data pipe testing function has been converted to the strategy design pattern of the Check object. The pipe_control.pipes.test() function has also been renamed to check_pipe().
* Created the Bmrb.test_bug_22704_corrupted_state_file system test. This is to catch bug #22704 (https://gna.org/bugs/?22704), the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions.
* Implemented the fetching of statistics for the parameters, comparing to the initial NI. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Implemented the writing and plotting of statistics for the individual and clustered fits, comparing to the full NI. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Added checks to the Bmrb.test_bug_22704_corrupted_state_file system test. This is to see if the cdp.exp_info data structure has been correctly restored from the save file.
* Uncommented some checks in the Bmrb.test_bug_22704_corrupted_state_file system test.
* For relaxation dispersion, modified the grid search and linear constraints so that the "k_AB" parameter lies between 0 and 100. This parameter is only used in the TSMFK01 model, and is only for very slow forward exchange rates. The expected values according to the reference paper, Tollinger, M., Skrynnikov, N. R., Mulder, F. A. A., Forman-Kay, J. D., and Kay, L. E. (2001), Slow dynamics in folded and unfolded states of an SH3 domain, J. Am. Chem. Soc., 123(46), 11341-11352 (10.1021/ja011300z), are in the region of 0.1 to 5.0. If the exchange rate is any higher than this, then another model should be used for the analysis.
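The escalation behaviour of the lib.checks.Check object described in the items above can be sketched as follows. This is a hypothetical minimal version written from the descriptions in this changelog, not the actual relax implementation, and the RelaxError stand-in and the check_frq_func() example are invented for illustration:

```python
# Minimal sketch of a strategy-pattern Check object: the checking
# function is stored (not bound as a method), returns None on success
# or a RelaxError instance on failure, and the escalate argument is
# stripped from the keyword arguments inside __call__().
import warnings

class RelaxError(Exception):
    """Stand-in for relax's RelaxError class (assumption)."""

class Check:
    def __init__(self, function):
        # Store the plain function so it needs no 'self' argument.
        self.function = function

    def __call__(self, *args, **kwargs):
        # Strip out the escalation level, defaulting to raising errors.
        escalate = kwargs.pop('escalate', 2)

        # None means the check passed; otherwise a RelaxError object.
        error = self.function(*args, **kwargs)
        if error is None:
            return True
        if escalate == 1:
            warnings.warn(str(error))
            return False
        raise error

# Hypothetical example check: spectrometer frequency must be set up.
def check_frq_func(frq=None):
    if frq is None:
        return RelaxError("No spectrometer frequency information is set up.")
    return None

check_frq = Check(check_frq_func)
```

With this design, `check_frq(frq=600e6)` returns True, `check_frq(frq=None, escalate=1)` emits a warning and returns False, and `check_frq(frq=None)` raises the error, matching the default escalate value of 2.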
* Set the default insignificance value to 0.0 instead of 1.0. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Modified the grid search limits for the "k_AB" parameter to be between 0.1 and 20.0 rad.s^-1. This is for the TSMFK01 model, where values much above 10 to 20 are not expected.
* Implemented the counting of outliers for the statistics. This is to get a better feeling for why some statistics differ greatly between NI levels. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Created the Structure.test_mean system test. This is to test the functionality of a planned new feature, the structure.mean user function. This is an analysis aid that will calculate the mean structure from all loaded models.
* Implemented the structure.mean user function frontend. The backend is currently just a stub function.
* Fixes and simplifications for the pipe_control.pipes.check_pipe() checking object. One of the RelaxError classes was not initialised and the docstring was incorrect.
* Created the pipe_control.structure.main.check_structure() checking object. This will be used to provide much more detailed feedback when structural information is missing.
* Converted all of the pipe_control.structure.main functions to use the check_structure() object. This standardises and improves all of the checks.
* Some fixes and additional checks for the Structure.test_mean system test.
* Implemented the backend of the structure.mean user function. This primarily occurs within the internal structural object in the new mean() method. The pipe_control.structure.main.mean() function simply checks if the current data pipe is correctly set up and then calls the structural object mean() method.
* Created the Structure.test_align system test. This will be used to test the yet to be implemented structure.align user function. This user function will be similar to the structure.superimpose user function, but will be designed so that structures with different primary and atomic sequences can be superimposed.
* Created the frontend of the structure.align user function. This is almost the same as that of the structure.superimpose user function, except that the pipes argument has been added and the titles and description changed to indicate the differences.
* Registered the new user function argument type 'int_list_of_lists' in the prompt UI. This is to allow for lists of lists of integers, as used for the model argument in the new structure.align user function.
* Modified the lib.arg_check.is_int_list() function to accept the list_of_lists Boolean argument. This updates the function to have the same functionality as is_str_list(), allowing for lists of lists of integers to be checked.
* Extended the Structure.test_align system test to thoroughly check the structural data. This includes changing the structure.align user function call to use 'fit to first' and carefully checking the new atomic coordinates.
* Modified the Structure.test_align system test so that the translations and rotations match the algorithm. This allows the output of the structure.align user function to be checked, to see if the rotation matrix and translation vector found match those used to shift the original structures.
* Implemented the structure.align user function backend. This is similar to the structure.superimpose user function, however the coordinate data structure only contains atoms which are common to all structures.
* The pipe_control.structure.main functions translate() and rotate() now accept the pipe_name argument. This is used to translate and rotate structures in different data pipes, as required by the structure.align user function.
* The pipe_control.structure.main.check_structure() checking object now accepts the pipe_name argument.
This allows structural data to be checked for in different data pipes without having to switch to them.
* Modified the Structure.test_align system test to call the structure.write_pdb user function. This sets the file name to sys.stdout, so that the original structure and the final aligned structures are output to STDOUT for debugging purposes.
* Created the Structure.test_delete_atom system test. This is used to test the deletion of a single atom using the structure.delete user function.
* Expanded the Structure.test_delete_atom system test. This is to show that the structure.write_pdb user function fails after a call to the structure.delete user function to delete individual atoms.
* Fix for the new structure.align user function. The translation and rotation of the structures at the end, to the aligned positions, was being incorrectly performed.
* Loosened some checks in the Structure.test_align system test to allow it to pass. Some self.assertEqual() checks of the atomic coordinates have been replaced by self.assertAlmostEqual() to allow for small machine precision differences.
* Modified lib.arg_check.is_str_or_inst() to handle cStringIO objects. This allows sys.stdout to be used as a file object in the relax test suite.
* Modified the lib.arg_check.is_str_or_inst() function to work with Python 3. Instead of checking for cStringIO.OutputType, which does not exist in Python 3, the argument is simply checked to see if it has a write() method.
* Printout of the total number of R2eff points if they differ between analyses. This can become an issue if a single intensity point has slipped into the noise, due to the low quality of the spectrum reconstruction. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Implemented statistics for the R2eff values. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
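The duck-typing approach described for the is_str_or_inst() Python 3 fix can be sketched in a few lines. This is a simplified illustration of the idea, not the actual lib.arg_check code, and the function name is invented:

```python
# Accept either a file name (string) or any writable file-like object.
# Instead of testing for concrete types such as cStringIO.OutputType
# (which does not exist in Python 3), simply check for a write() method.
import sys
from io import StringIO

def is_str_or_writable(arg):
    """Return True if arg is a string or any object with a write() method."""
    return isinstance(arg, str) or hasattr(arg, 'write')

print(is_str_or_writable("structure.pdb"))   # a file name -> True
print(is_str_or_writable(sys.stdout))        # a stream -> True
print(is_str_or_writable(StringIO()))        # an in-memory buffer -> True
print(is_str_or_writable(42))                # not writable -> False
```

This is why sys.stdout can be passed wherever a file name is expected in the test suite: the check only cares that the object can be written to.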
* Added data checks and printouts to the structure.align user function. The data checks are to prevent the user from attempting an alignment with differently named molecules, as this will not work.
* Implemented the writing out of the intensity and error correlation plots. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Implemented the writing out of the intensity statistics. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Expanded the structure.com user function to accept the atom_id argument. This allows the centre of mass (CoM) calculation to be restricted to a certain subset of atoms. The backend already had support for this feature, but now it is exposed in the frontend. The user function docstring has been slightly modified as well.
* Skipping of the intensity calculation if the intensity pipe does not exist. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Added example CPMG data which could possibly be sent for BMRB submission. The data is unpublished CPMG data related to the paper: Webb H, Tynan-Connolly BM, Lee GM, Farrell D, O'Meara F, Soendergaard CR, Teilum K, Hewage C, McIntosh LP, Nielsen JE (2011). Remeasuring HEWL pK(a) values by NMR spectroscopy: methods, analysis, accuracy, and implications for theoretical pK(a) calculations. Proteins: Struct., Funct., Bioinf. 79(3), 685-702, DOI 10.1002/prot.22886. Task #7858 (https://gna.org/task/?7858): Make it possible to submit CPMG experiments for BMRB.
* Added the Relax_disp.test_bmrb_sub_cpmg() system test to try calling the bmrb functions in relax. Task #7858 (https://gna.org/task/?7858): Make it possible to submit CPMG experiments for BMRB.
* Implemented the initial part of the API for collecting data for BMRB submission. Task #7858 (https://gna.org/task/?7858): Make it possible to submit CPMG experiments for BMRB.
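The restricted centre of mass calculation exposed by the structure.com atom_id argument amounts to a mass-weighted mean over a selected subset of atoms. The toy sketch below illustrates the idea only; the atom records and the selection predicate are invented, and relax's own backend works on its internal structural object instead:

```python
# Toy mass-weighted centre of mass (CoM) over a selectable atom subset.
# (name, mass in Da, (x, y, z) coordinates in Angstrom) -- invented data.
atoms = [
    ("N",  14.003, (0.0,  0.0, 0.0)),
    ("CA", 12.000, (1.5,  0.0, 0.0)),
    ("C",  12.000, (2.0,  1.4, 0.0)),
    ("H",   1.008, (0.0, -1.0, 0.0)),
]

def centre_of_mass(atoms, selected=lambda name: True):
    """Mass-weighted mean position of the selected atoms."""
    total_mass = 0.0
    com = [0.0, 0.0, 0.0]
    for name, mass, pos in atoms:
        if not selected(name):
            continue  # analogous to restricting via an atom ID string
        total_mass += mass
        for i in range(3):
            com[i] += mass * pos[i]
    return [x / total_mass for x in com]

# Full CoM versus the CoM restricted to the heavy atoms only.
print(centre_of_mass(atoms))
print(centre_of_mass(atoms, selected=lambda name: name != "H"))
```

The second call plays the role of passing an atom_id that excludes the protons.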
* Inserted a RelaxImplementError when trying to call bmrb_write from a relaxation dispersion analysis. To implement the function would require a rewrite of the relax_data bmrb_write(star) function, and proper handling of cdp.ri_ids. It was also not readily possible to find examples of submitted CPMG data in the BMRB database. This makes it hard to develop, and even to ensure that BMRB would accept the format. Task #7858 (https://gna.org/task/?7858): Make it possible to submit CPMG experiments for BMRB.
* Removed the Relax_disp.test_bmrb_sub_cpmg() system test from the test suite. This test will not be implemented, as it would require a large rewrite of the data structures. Task #7858 (https://gna.org/task/?7858): Make it possible to submit CPMG experiments for BMRB.
* Removed the showing of matplotlib figures in the test suite. Task #7826 (https://gna.org/task/index.php?7826): Write a Python class for the repeated analysis of dispersion data.
* Implemented the Relax_disp.test_dx_map_clustered system test to catch the missing creation of a point file. Bug #22753 (https://gna.org/bugs/index.php?22753): dx.map does not work when only 1 point is used.
* Inserted a check in the Relax_disp.test_dx_map_clustered system test that a call to minimise.calculate gives the same result as the file stored with the clustered chi2 value. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Made the initial preparations for looping over clustered spins and IDs for the minimise.calculate user function call. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Implemented the looping over spin clusters when issuing minimise.calculate(). Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
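The cluster looping required by the minimise.calculate fix can be illustrated with a small sketch: clustered spins share exchange parameters and must be evaluated together, so the calculation loops over clusters rather than over individual spins. The data layout and spin ID strings below are invented for illustration and do not reflect relax's internal data structures:

```python
# Illustrative grouping of spins into clusters, with free spins treated
# as single-member clusters -- a stand-in for relax's model_loop().
clusters = {
    ":2@N":  "cluster1",
    ":3@N":  "cluster1",
    ":10@N": "free",
    ":11@N": "cluster2",
    ":12@N": "cluster2",
}

def cluster_loop(clusters):
    """Yield (cluster_id, sorted list of spin IDs), one pair per cluster."""
    groups = {}
    for spin_id, cluster_id in clusters.items():
        # A free spin forms its own cluster, keyed by its spin ID.
        key = spin_id if cluster_id == "free" else cluster_id
        groups.setdefault(key, []).append(spin_id)
    for cluster_id in sorted(groups):
        yield cluster_id, sorted(groups[cluster_id])

for cluster_id, spin_ids in cluster_loop(clusters):
    # A real implementation would back-calculate R2eff for all spins of
    # the cluster at once and sum a single chi2 value over them.
    print(cluster_id, spin_ids)
```

The point is that the chi2 value belongs to the cluster as a whole, which is why calculating it spin by spin gave wrong results for clustered residues.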
* Made back_calc_r2eff() in the optimisation module use the spin and ID lists instead. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Fix for the graph plotting functionality to send the spins as a list of one spin. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Fix for calling back_calc_r2eff() with the new argument keywords, using the lists of spins and spin IDs. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Fix for a synthetic data script calling back_calc_r2eff() with the old arguments, to use the lists of spin containers and IDs. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Inserted a last test in test_dx_map_clustered to check that the written chi2 values are as expected. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Moved the looping over cluster spin IDs into its own function in the API. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* The selection string for all the cluster IDs is now passed back as well. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Made the value set function set the value for all spins if it is a global parameter. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Moved the skipping of protons out of the looping function.
Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Inserted some testing lines for creating an OpenDX map, either globally clustered or as a free spin. There is a big difference in which map you get - this illustrates beautifully the effect of clustering things together. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Added a BMRB NMR-STAR formatted deposition file for the OMP model-free data for reference, as there are no other NMR-STAR formatted files in the relax sources.
* In the dispersion API calculate() method, used the API function model_loop() to loop over the clusters instead. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Removed the function loop_cluster_ids() from the dispersion API. This should be implemented elsewhere. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Updated the API set_param_values() function to use model_loop() to obtain the spin IDs from the cluster. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Initial attempt at fixing the unit test test_value_set_r1_rit(). The problem is that no spin ID can be generated since the spins are created manually: "AttributeError: 'MoleculeContainer' object has no attribute '_res_name_count'". Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Removed the checking of MODEL_LIST_MMQ and spin.isotope from optimisation.back_calc_r2eff(), since this check is already covered.
Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Fix for references to "spin" in optimisation.back_calc_r2eff(). Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Fix for the looping being performed twice in the relax_disp API model_loop() method. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Removed an unused proton reference in the relax_disp API calculate() method. There are though some problems with these tests (F 1.93 s for Relax_disp.test_korzhnev_2005_15n_dq_data, F 2.01 s for Relax_disp.test_korzhnev_2005_1h_mq_data, F 1.93 s for Relax_disp.test_korzhnev_2005_1h_sq_data). It is unclear where these failures come from. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Fix for the epydoc documentation in the system test Relax_disp.test_dx_map_clustered.
* Updated all of the Relax_disp.test_korzhnev_2005_*_data system tests. These now have slightly changed parameter values due to the fix of bug #22563 (https://gna.org/bugs/?22563), the NS MMQ 2-site dispersion model running at 32-bit precision and not 64-bit as it should.
* Epydoc change for a DOI reference in the system tests. Bug #22754 (https://gna.org/bugs/index.php?22754): The minimise.calculate user function does not calculate chi2 value for clustered residues.
* Added some test PyMOL scripts to create OpenDX maps and chi2 surface plots. These will go to the wiki: http://wiki.nmr-relax.com/Chi2_surface_plot.
* Big improvement for running the relax unit tests via the relax command line options. The unit test module path is now accepted as a command line option. This brings more of the capabilities of Gary Thompson's test_suite/unit_tests/unit_test_runner.py script into the relax command line.
The _pipe_control/test_value unit test module path can be specified as, for example, one of 'test_suite.unit_tests._pipe_control.test_value', 'test_suite/unit_tests/_pipe_control/test_value', '_pipe_control.test_value', or '_pipe_control/test_value'. This allows individual modules of tests to be run, rather than having to execute all unit tests, which is very useful for debugging.
* Modified the printouts for the unit tests when running with the --time command line option. The test name is now processed: the leading 'test_suite.unit_tests.' text is stripped out and the remaining text is split into the module name and the test name. This allows the unit test module name to be more easily identified, so that it can then be used as a command line option to run only a subset of tests.
* Modified the help strings for the test suite options shown when 'relax -h' is run. The ability to specify individual tests (or modules of tests for the unit tests) is now documented. The '--time' option help string has also been edited.
* Fix for the Bmrb.test_bug_22704_corrupted_state_file GUI test. This was failing because the setUp() method in the inherited Bmrb system test module was being overwritten by the default unittest.TestCase.setUp() method. Therefore the system test setUp() method has been copied into the GUI test class.
* Fix for the Test_value.test_value_set_r1_rit test of the _pipe_control.test_value unit test module. This is a general fix for all unit test modules which use the test_suite.unit_tests.value_testing_base.Value_testing_base base class. After the molecules, residues and spins are manually created, the pipe_control.mol_res_spin.metadata_update() function is called to make sure that all of the private and volatile metadata have been correctly created, so that the other pipe_control.mol_res_spin module functions can operate correctly.
* Removal of repetitive code in the relaxation dispersion model_loop() API method.
The spin loop does not need to be called twice; instead the if statements have been modified to better direct the code execution.
* Added a script to simulate dispersion profiles at different settings. This shows that something is wrong: the back-calculated values in the graphs are not equal to the interpolated values. There must be something wrong somewhere. This list shows the chi2 values and, judging from the dispersion graphs, these simply cannot be true.
* Changed the bounds for the sample scripts which create the 3D iso-surface plot, the surface plot and the simulation of dispersion curves.
* Minor changes to the Python matplotlib script which produces the surface plot. Also added the new data for the plotting.
* Modified the example data after the issue with the parameters was fixed.
Bugfixes:
* Fix for the two-point calculation of an exponential curve with corrupted data. The two-point calculation is now also skipped if the measured intensity is 0, which can happen with corrupted intensity files.
* Fix for the internal structural object get_model() method - it now actually returns the model.
* Fixes for the structure.add_atom user function to allow a list of lists for the atomic position. This allows different coordinates to be supplied for each model.
* Added safety checks for NaN values to the lib.structure.pdb_write module, within the _record_validate() function. The check prevents the creation of invalid PDB files.
* Fix for the experimental information data pipe object when converting to XML state and results files. This is a partial fix for bug #22704 (https://gna.org/bugs/?22704), the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. The names and descriptions for the software, citation and script list objects were incorrectly set. These have been fixed so that the name of the data structure and the real description are present in the XML state or results file instead of <relax_list desc='relax list container'>.
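The two-point exponential fix above guards against taking the log of a zero intensity. A minimal sketch of the idea (the function name and return convention are illustrative, not the relax code):

```python
import math

def two_point_rate(t1, i1, t2, i2):
    """Estimate a decay rate from two points of I(t) = I0 * exp(-R * t).

    Returns None when the data cannot support the calculation, e.g. a zero
    or negative intensity from a corrupted intensity file, where log()
    would fail or produce nonsense.
    """
    if i1 <= 0.0 or i2 <= 0.0 or t1 == t2:
        return None
    # Dividing out I0 and taking the log gives R directly.
    return math.log(i1 / i2) / (t2 - t1)

# Clean data recovers the rate; corrupted (zero) data is skipped, not fatal.
rate = two_point_rate(0.0, 100.0, 0.1, 100.0 * math.exp(-1.2 * 0.1))
assert abs(rate - 1.2) < 1e-9
assert two_point_rate(0.0, 100.0, 0.1, 0.0) is None
```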
* Fix for the cdp.exp_info.software data structure setup. This is a partial fix for bug #22704 (https://gna.org/bugs/?22704), the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. The Element data container name was being replaced by the software name, making it impossible to restore from the XML.
* Implemented the cdp.exp_info.from_xml() method to correctly restore the experimental info structure. This fixes bug #22704 (https://gna.org/bugs/?22704), the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. This custom ExpInfo.from_xml() method is required to properly recreate the software, script and citation list data structures of the cdp.exp_info data structure, as these are special RelaxListType objects populated by Element objects (both from data_store.data_classes).
* Bug fix for the structure.delete user function. When individual atoms were deleted, the bonded atom data structure was not correctly updated to remove the now non-existent atom.
* Another bug fix for the structure.delete user function when deleting individual atoms. The bonded atom data structure consists of indices, so all indices after the deleted atom must be decremented by 1.
* Bug fix for the CONECT records created by the structure.write_pdb user function.
The atom numbers in... [truncated message content]
From: Edward d'A. <ed...@do...> - 2014-09-05 12:18:56
This is a major feature release which includes a huge number of changes, as can be seen below. The most important change is an incredible speed up of all relaxation dispersion models. See the table below for a comparison to the previous relax 3.2.3 release. Maximum possible advantage is taken of linear algebra operations to eliminate all of the slow Python looping and to obtain the ultimate algorithms for speed. As this is implemented using NumPy, conversion to C or FORTRAN would not result in any significant speed advantage. With these huge speed ups, relax should now be one of the fastest software packages for analysing relaxation dispersion phenomena. Other important features include the implementation of a zooming grid search (http://www.nmr-relax.com/manual/minimise_grid_zoom.html) algorithm for use in all analysis types, expanded plotting capabilities for R1rho values in the relaxation dispersion analysis (http://www.nmr-relax.com/manual/relax_disp_plot_disp_curves.html), the ability to optimise the R1 parameter in all off-resonance dispersion models (http://www.nmr-relax.com/manual/relax_disp_r1_fit.html), proper minimisation statistics resetting by the minimisation user functions, and a large expansion of the periodic table information for all elements in the relax library for correctly estimating molecular masses.
Additional features include better tab completion support in the prompt UI for Mac OS X, the addition of the time user function for printing the current date and time (http://www.nmr-relax.com/manual/time.html), a force argument for the value.copy user function to allow values to be overwritten (http://www.nmr-relax.com/manual/value_copy.html), extended model nesting in the dispersion auto-analysis, display of the spin-lock offset in the dispersion analysis in the GUI, the new relax_disp.r2eff_estimate user function for fast R2eff and I0 parameter value and error estimation (http://www.nmr-relax.com/manual/relax_disp_r2eff_err_estimate.html), and gradient and Hessian functions added to the exponential curve-fitting C module, allowing for more advanced optimisation in the relaxation curve-fitting and dispersion analyses. Note that this new 3.3 relax series breaks compatibility with old relax scripts. The important change, and the main reason for starting the relax 3.3.x line, is the renaming of the calc, grid_search and minimise user functions to minimise.calculate, minimise.grid_search and minimise.execute respectively (http://www.nmr-relax.com/manual/minimise_calculate.html, http://www.nmr-relax.com/manual/minimise_grid_search.html, http://www.nmr-relax.com/manual/minimise_execute.html). Please update your scripts appropriately. A new relax feature is that old user function calls are detected in the prompt and script UIs and a RelaxError is raised explaining what the user function should be renamed to.
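The old-to-new user function detection just described could be implemented with a simple lookup table. A minimal sketch, where the _RENAMED mapping, check_renamed helper and local RelaxError class are illustrative assumptions rather than the actual relax internals:

```python
# Sketch of detecting renamed user functions: the three renamings listed in
# the announcement are mapped to their new names, and calling an old name
# raises an error that tells the user what to rename it to.

_RENAMED = {
    'calc': 'minimise.calculate',
    'grid_search': 'minimise.grid_search',
    'minimise': 'minimise.execute',
}

class RelaxError(Exception):
    """Stand-in for the relax error class."""

def check_renamed(name):
    """Raise an error naming the replacement if an old user function is used."""
    if name in _RENAMED:
        raise RelaxError("The %s user function has been renamed to %s." %
                         (name, _RENAMED[name]))

# An old script calling grid_search() now gets an actionable message.
try:
    check_renamed('grid_search')
except RelaxError as err:
    assert 'minimise.grid_search' in str(err)
```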
Important bugfixes in this release include: relax can run on MS Windows systems again; numerous Python 3 fixes; Bruker DC files (http://www.nmr-relax.com/manual/bruker_read.html) can now be loaded when the file format has corrupted whitespace; the GUI "close all analyses" feature works and no longer raises an error; the structure.create_diff_tensor_pdb user function now works when no structural data is present (http://www.nmr-relax.com/manual/structure_create_diff_tensor_pdb.html); the geometric prolate diffusion 3D PDB representation in a model-free analysis now aligns with the axis in the PDB, as it was previously rotated by 90 degrees; and the Monte Carlo simulations in the relaxation dispersion analysis for exponential curve-fitting for R2eff/R1rho parameter errors are now correct and no longer underestimate the errors by half. For more details about the new features and the bug fixes, please see below. For fully formatted and easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.0.
To demonstrate the huge speed ups in the relaxation dispersion analysis, the following table compares the speed of the dispersion models in relax 3.2.3 (http://wiki.nmr-relax.com/Relax_3.2.3) to the new 3.3.0 version:
100 single spins analysis (times in seconds):
No Rex: 0.824+/-0.017 -> 0.269+/-0.016, 3.068x faster.
LM63: 1.616+/-0.017 -> 0.749+/-0.008, 2.157x faster.
LM63 3-site: 3.218+/-0.039 -> 0.996+/-0.013, 3.230x faster.
CR72: 2.639+/-0.042 -> 1.536+/-0.019, 1.718x faster.
CR72 full: 2.808+/-0.027 -> 1.689+/-0.075, 1.663x faster.
IT99: 1.838+/-0.032 -> 0.868+/-0.011, 2.118x faster.
TSMFK01: 1.643+/-0.033 -> 0.718+/-0.011, 2.289x faster.
B14: 5.841+/-0.050 -> 3.747+/-0.044, 1.559x faster.
B14 full: 5.942+/-0.053 -> 3.841+/-0.044, 1.547x faster.
NS CPMG 2-site expanded: 8.309+/-0.066 -> 4.070+/-0.073, 2.041x faster.
NS CPMG 2-site 3D: 245.180+/-2.162 -> 45.410+/-0.399, 5.399x faster.
NS CPMG 2-site 3D full: 237.217+/-2.582 -> 45.177+/-0.415, 5.251x faster.
NS CPMG 2-site star: 183.423+/-1.966 -> 36.542+/-0.451, 5.020x faster.
NS CPMG 2-site star full: 183.622+/-1.326 -> 36.788+/-0.343, 4.991x faster.
MMQ CR72: 5.920+/-0.105 -> 4.078+/-0.105, 1.452x faster.
NS MMQ 2-site: 363.659+/-2.610 -> 82.588+/-1.197, 4.403x faster.
NS MMQ 3-site linear: 386.798+/-4.480 -> 92.060+/-0.754, 4.202x faster.
NS MMQ 3-site: 391.195+/-3.442 -> 93.025+/-0.829, 4.205x faster.
M61: 1.576+/-0.022 -> 0.862+/-0.009, 1.828x faster.
DPL94: 22.794+/-0.517 -> 1.101+/-0.008, 20.705x faster.
TP02: 19.892+/-0.363 -> 1.232+/-0.007, 16.152x faster.
TAP03: 31.701+/-0.378 -> 1.936+/-0.017, 16.377x faster.
MP05: 24.918+/-0.572 -> 1.428+/-0.015, 17.454x faster.
NS R1rho 2-site: 244.604+/-2.493 -> 35.125+/-0.202, 6.964x faster.
NS R1rho 3-site linear: 287.181+/-2.939 -> 68.245+/-0.536, 4.208x faster.
NS R1rho 3-site: 290.486+/-3.614 -> 70.449+/-0.686, 4.123x faster.
Cluster of 100 spins analysis (times in seconds):
No Rex: 0.818+/-0.016 -> 0.008+/-0.001, 97.333x faster.
LM63: 1.593+/-0.018 -> 0.037+/-0.000, 43.401x faster.
LM63 3-site: 3.134+/-0.039 -> 0.067+/-0.001, 47.128x faster.
CR72: 2.610+/-0.047 -> 0.115+/-0.001, 22.732x faster.
CR72 full: 2.679+/-0.034 -> 0.122+/-0.005, 22.017x faster.
IT99: 1.807+/-0.025 -> 0.063+/-0.001, 28.687x faster.
TSMFK01: 1.636+/-0.036 -> 0.039+/-0.001, 42.170x faster.
B14: 5.799+/-0.054 -> 0.488+/-0.010, 11.879x faster.
B14 full: 5.803+/-0.043 -> 0.484+/-0.006, 11.990x faster.
NS CPMG 2-site expanded: 8.326+/-0.081 -> 0.685+/-0.012, 12.160x faster.
NS CPMG 2-site 3D: 244.869+/-2.382 -> 41.217+/-0.467, 5.941x faster.
NS CPMG 2-site 3D full: 236.760+/-2.575 -> 41.001+/-0.466, 5.775x faster.
NS CPMG 2-site star: 183.786+/-2.089 -> 30.896+/-0.417, 5.948x faster.
NS CPMG 2-site star full: 183.243+/-1.615 -> 30.898+/-0.343, 5.931x faster.
MMQ CR72: 5.978+/-0.094 -> 0.847+/-0.007, 7.061x faster.
NS MMQ 2-site: 363.138+/-3.041 -> 75.906+/-0.845, 4.784x faster.
NS MMQ 3-site linear: 384.978+/-5.402 -> 83.703+/-0.773, 4.599x faster.
NS MMQ 3-site: 388.557+/-3.261 -> 84.702+/-0.762, 4.587x faster.
M61: 1.555+/-0.021 -> 0.034+/-0.001, 45.335x faster.
DPL94: 22.837+/-0.494 -> 0.140+/-0.002, 163.004x faster.
TP02: 19.958+/-0.407 -> 0.167+/-0.002, 119.222x faster.
TAP03: 31.698+/-0.424 -> 0.287+/-0.003, 110.484x faster.
MP05: 25.009+/-0.683 -> 0.187+/-0.007, 133.953x faster.
NS R1rho 2-site: 242.096+/-1.483 -> 32.043+/-0.157, 7.555x faster.
NS R1rho 3-site linear: 280.778+/-2.589 -> 62.866+/-0.616, 4.466x faster.
NS R1rho 3-site: 282.192+/-5.195 -> 63.174+/-0.816, 4.467x faster.
Full details of this comparison can be seen in the test_suite/shared_data/dispersion/profiling directory. For information about each of these models, please see the links: http://wiki.nmr-relax.com/No_Rex, http://wiki.nmr-relax.com/LM63, http://wiki.nmr-relax.com/LM63_3-site, http://wiki.nmr-relax.com/CR72, http://wiki.nmr-relax.com/CR72_full, http://wiki.nmr-relax.com/IT99, http://wiki.nmr-relax.com/TSMFK01, http://wiki.nmr-relax.com/B14, http://wiki.nmr-relax.com/B14_full, http://wiki.nmr-relax.com/NS_CPMG_2-site_expanded, http://wiki.nmr-relax.com/NS_CPMG_2-site_3D, http://wiki.nmr-relax.com/NS_CPMG_2-site_3D_full, http://wiki.nmr-relax.com/NS_CPMG_2-site_star, http://wiki.nmr-relax.com/NS_CPMG_2-site_star_full, http://wiki.nmr-relax.com/MMQ_CR72, http://wiki.nmr-relax.com/NS_MMQ_2-site, http://wiki.nmr-relax.com/NS_MMQ_3-site_linear, http://wiki.nmr-relax.com/NS_MMQ_3-site, http://wiki.nmr-relax.com/M61, http://wiki.nmr-relax.com/DPL94, http://wiki.nmr-relax.com/TP02, http://wiki.nmr-relax.com/TAP03, http://wiki.nmr-relax.com/MP05, http://wiki.nmr-relax.com/NS_R1rho_2-site, http://wiki.nmr-relax.com/NS_R1rho_3-site_linear, http://wiki.nmr-relax.com/NS_R1rho_3-site.
For the CPMG statistics: 3 fields, each with 20 CPMG points. The total number of dispersion points per spin is 60.
For the R1rho experiments: 3 fields, each with 10 spin-lock offsets, and each offset measured at 5 different spin-lock fields. Per field there are 50 dispersion points, so the total number of dispersion points per spin is 150.
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
The full list of changes is:
Features:
* Huge speed ups for all of the relaxation dispersion models, ranging from 1.452 to 163.004 times faster. The speed ups for the clustered spin analysis are far greater than for the single spin analysis.
* Implementation of a zooming grid search algorithm for optimisation in all analyses. This includes the addition of the minimise.grid_zoom user function to set the zoom level. The grid width will be divided by 2**zoom_level and centred at the current parameter values. If the new grid is outside of the bounds of the original grid, the entire grid will be translated so that it lies entirely within the original.
* Increased the amount of user feedback for the minimise.grid_search user function. A comment for each parameter is now included in the printed grid search setup table, indicating whether the lower or upper bounds, or both, have been supplied and whether a preset value has been used instead.
* Expanded support for R1rho 2D graph plotting in the relax_disp.plot_disp_curves user function: the X-axis can now be the nu1 value, the effective field omega_eff, or the rotating frame tilt angle. The plots are also interpolated over the spin-lock offset.
* Ability to optimise the R1 relaxation rate parameter in the off-resonance relaxation dispersion models.
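The zooming grid logic described above (width divided by 2**zoom_level, centred at the current values, translated back inside the original bounds) can be sketched as follows. This is a minimal illustration of that description, not the minfx or relax implementation, and the function name is an assumption:

```python
import numpy as np

def zoom_grid(lower, upper, centre, zoom_level):
    """Sketch of a zooming grid: shrink, centre, then translate into bounds."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    centre = np.asarray(centre, dtype=float)
    # The grid width is divided by 2**zoom_level.
    width = (upper - lower) / 2**zoom_level
    new_lower = centre - width / 2.0
    new_upper = centre + width / 2.0
    # If the zoomed grid pokes out of the original bounds, translate the
    # whole grid so that it lies entirely within the original.
    shift = np.maximum(lower - new_lower, 0.0) - np.maximum(new_upper - upper, 0.0)
    return new_lower + shift, new_upper + shift

# Zoom level 2 quarters the width; a centre near the upper edge is translated.
lo, up = zoom_grid([0.0], [8.0], [7.5], zoom_level=2)
assert np.allclose(lo, [6.0]) and np.allclose(up, [8.0])
```

The translation step is what keeps repeated zooms honest: the zoomed grid never samples outside the user-supplied bounds.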
* Creation of the relax_disp.r1_fit user function for activating and deactivating R1 fitting in the dispersion analysis.
* Better tab completion support in the prompt UI for Mac OS X users. For some Python versions, the Mac supplied libedit library is used rather than GNU readline. This library uses a completely different language, and hence tab completion was non-functional on these systems. The library difference is now detected and the correct language is sent into libedit to activate tab completion.
* Created the time user function. This is just a shortcut for printing out the output of the time.asctime() function.
* The value.copy user function now accepts the force flag to allow destination values to be overwritten.
* Expanded model nesting capabilities in the relaxation dispersion auto-analysis to speed up the protocol.
* The spin-lock offset is now included in the spectra list GUI element for the relaxation dispersion analysis.
* Creation of the relax_disp.r2eff_estimate user function for the fast estimation of R2eff/R1rho values and errors when full exponential curves have been collected. This experimental feature uses linearisation to estimate the R2eff and I0 parameters, and the covariance matrix to estimate parameter errors.
* Gradients and Hessians are implemented for the exponential curve-fitting, hence all optimisation and constraint algorithms are now available for this analysis type. Using Newton optimisation instead of Nelder-Mead simplex can save over an order of magnitude in computation time. This is also available in the relaxation dispersion analysis.
* The minimisation statistics are now reset for all analysis types. The minimise.calculate, minimise.grid_search, and minimise.execute user functions now all reset the minimisation statistics for either the model or the Monte Carlo simulations prior to performing any optimisation.
This is required for both parallelised grid searches and repetitive optimisation schemes, to allow the result to overwrite an old result in all situations, as sometimes the original chi-squared value is lower and the new result would hence be rejected.
* Large expansion of the periodic table information in the relax library to include all elements, the IUPAC 2011 standard atomic weights for all elements, mass numbers and atomic masses for all stable isotopes, and gyromagnetic ratios.
* Significant improvements to the structure centre of mass calculations by using the new periodic table information - all elements are now supported and exact masses are now used.
* Added a button to the spectra list GUI element for the spectrum.error_analysis user function. This is placed after the 'Add' and 'Delete' buttons and is used in the NOE, R1 and R2 curve-fitting and relaxation dispersion analyses.
* RelaxErrors are now raised in the prompt or script UI if an old user function is called, printing out the names of the old and new user functions. This is to help in upgrading old scripts and currently covers the calc(), grid_search(), and minimise() user function calls.
Changes:
* Improved model handling for the internal structural object. The set_model() method has been added to allow either a model number to be set for the first unnumbered model (in preparation for adding new models) or models to be renumbered. The logic of the add_model() method has also been changed. Rather than looping over all atoms of the first model and copying them, which does not work due to the model validity checks, the entire MolList (molecule list) data structure is copied using copy.deepcopy() to make a perfect copy of the structural data. The ModelList.add_item() method has also been modified to return the newly added or numbered model. This is used by the add_model() structural object method to obtain the model object.
* Updated the Mac OS X framework setup instruction document.
New sections have been added for the nose and matplotlib Python packages, as nose is needed for the numpy and scipy testing frameworks and matplotlib might be a useful optional dependency in the future. The mpi4py section has been updated to avoid the non-framework fink version of mpicc, which cannot produce universal binaries. A few other parts also have small edits.
* Removed the Freecode section from the release checklist as Freecode has been permanently shut down. The old relax links are still there (http://freecode.com/projects/nmr-relax), but Freecode is dead (http://freecode.com/about).
* Fix for the internal structural object MolContainer.last_residue() method. This can now operate when no structural information is present, returning 0 instead of resulting in an IndexError.
* Updated the script for finding unused imports in the relax source code. The file name is now only printed for Python files which have unused imports.
* Completely removed all mentions of Freecode from the release document.
* Updated the minfx version in the release checklist document to 1.0.8. This version has not been released yet, but it will include important fixes and additions for constrained parallelised grid searches.
* Fix for a broken link in the development chapter of the relax manual.
* Fixes for dead hyperlinks in the relaxation dispersion chapter of the relax manual. The B14 model links to http://www.nmr-relax.com/api/3.2/lib.dispersion.b14-module.html were broken as the B in B14 was capitalised.
* Sent the verbosity argument value into the minfx.grid.grid_split() function. The minfx function in the next release (1.0.8) will be more verbose, which will help with user feedback when running the model-free analysis on a cluster or multi-core system using MPI.
* The time user function now uses the chronometer Oxygen icon in the GUI.
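The linearisation idea behind the relax_disp.r2eff_estimate feature listed above can be sketched in a few lines: taking the log of I(t) = I0 * exp(-R2eff * t) turns the fit into weighted linear least squares, and the covariance matrix of the linear fit gives error estimates. The function and variable names below are illustrative, not the relax API:

```python
import numpy as np

def estimate_r2eff(times, intensities, sigma):
    """Estimate R2eff and I0 via linearisation, with covariance-based errors.

    Model: ln I = ln I0 - R2eff * t, weighted by I/sigma since taking the
    log propagates the intensity error as sigma_lnI = sigma / I.
    """
    t = np.asarray(times, dtype=float)
    i = np.asarray(intensities, dtype=float)
    ln_i = np.log(i)
    w = i / sigma                               # weights in log space
    A = np.vstack([np.ones_like(t), -t]).T      # design matrix for [ln I0, R2eff]
    Aw, yw = A * w[:, None], ln_i * w
    params, _, _, _ = np.linalg.lstsq(Aw, yw, rcond=None)
    cov = np.linalg.inv(Aw.T @ Aw)              # parameter covariance matrix
    ln_i0, r2eff = params
    return np.exp(ln_i0), r2eff, np.sqrt(np.diag(cov))

# Noise-free synthetic data recovers the input parameters.
t = np.array([0.0, 0.05, 0.1, 0.2])
i0_est, r2_est, errs = estimate_r2eff(t, 1000.0 * np.exp(-12.0 * t), sigma=10.0)
assert abs(r2_est - 12.0) < 1e-6 and abs(i0_est - 1000.0) < 1e-3
```

This is why the feature is fast: a linear solve replaces an iterative exponential fit, at the cost of the usual log-transform approximations for noisy data.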
* Removed the line wrapping in the epydoc parameter section of the optimisation function docstrings. This is for the pipe_control.minimise module.
* More docstring line wrapping removal from pipe_control.minimise.
* Bug fix for the parameter units descriptions. This only affects a few rare parameters. The specific analysis API parameter object units() method was incorrectly checking if the units value is a function - it was checking the parameter conversion factor instead.
* Modified the align_tensor.init user function so that the parameters are now optional. This allows alignment tensors to be initialised without specifying the parameter values for that tensor.
* Modified the profiling script to have a different number of NCYC points per frequency. This is to complicate the data, so that any erroneous reshaping of the data is discovered. It is expected that experiments can have a different number of NCYC points per spectrometer frequency. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Initial attempt at altering the target function calc_CR72_chi2. This is the first test of restructuring the arrays to allow for higher dimensional computation. All numpy arrays have to have the same shape to be multiplied together. The dimensions should be [ei][si][mi][oi][di], i.e. [experiment][spin][spectrometer frequency][offset][dispersion points]. This is complicated by the fact that the number of dispersion points can change per spectrometer frequency. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. This implementation brings a high overhead - the first test shows no gain in time, as the creation of the arrays takes all the time.
* Temporarily changed the lib/dispersion/cr72.py function to an unsafe state. This change turns off all the safety measures, since they have to be re-implemented for higher dimensional structures.
* Altered the profiling script to report cumulative timings and save them to temporary files.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. This indeed shows that the efficiency has gone down.
* Added a print out of the chi2 value to the profiling script. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Moved the creation of the special numpy structures outside of the target function. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Modified the profiling script to calculate correct values when setting up the R2eff values. This is to test that the returned chi2 value is zero. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Removed the looping over the experiment and offset indices in calc_chi2, as they are always 0 anyway. This brings a little extra speed. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* In the profiling script, moved the calculation of the values up one level. This is to better see the output of the profiling iterations for cr72.py. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Fix for the calculation of the Larmor frequency per spin in the profiling script. The frq loop should also be shifted up, as the frequency was being extracted as 0.0. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Re-inserted the safety checks in the lib/dispersion/cr72.py file. These are re-inserted for the rank-1 cases, making the unit tests pass again. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Important fix for extracting the correct shape to create new arrays. If using just one field, or having the same number of dispersion points, the shape would extend to the dispersion number - ndarray.shape would report [ei][si][mi][oi][di]. The shape always has to be reported as [ei][si][mi][oi].
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Made it easier to switch between single and cluster reporting in the profiling script. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Important fix for the creation of the multidimensional pA numpy array. It should be created as numpy.zeros([ei][si][mi][oi]) instead of numpy.ones([ei][si][mi][oi]). This allows for rapid testing of all dimensions with np.allclose(pA, numpy.ones(dw.shape)), as pA can have unfilled values when the number of dispersion points differs per spectrometer frequency. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Added unit tests demonstrating edge-case 'no Rex' failures of the 'CR72 full' model for a clustered multidimensional calculation. This is implemented for one field, in order to catch math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Re-implemented the safety checks in lib/dispersion/cr72.py. These are now implemented for both rank-1 float arrays and higher dimensional arrays. This makes the unit tests pass for multidimensional computing. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* Added unit tests demonstrating edge-case 'no Rex' failures of the 'CR72 full' model for a clustered multidimensional calculation. This is implemented for three fields, in order to catch math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis.
* The special numpy structure is now also created for the 'CR72' model. This makes most of the system tests pass.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Critical fix for the slicing of values in the target function. This makes the system test Relax_disp.test_sod1wt_t25_to_cr72 pass.
* Added the self.has_missing keyword in the initialisation of the Dispersion class. This tests once per spin or cluster, saving a loop over the dispersion points when collecting the data.
* Created multidimensional error and value numpy arrays. This is to calculate the chi2 sum much faster. Reordered the loop over missing data points so that it is only entered if missing points are detected.
* Switched the looping from spin->frq to frq->spin. Since the number of dispersion points is the same for all spins, this allows the calculation of the pA and kex arrays to be moved one level up, saving a lot of computation.
* Changed all creation of the special numpy arrays to be of float64 type.
* Moved the filling of the special errors and values numpy arrays into the initialisation of the Dispersion class. These values do not change, and can safely be stored outside the target function.
* A tiny bit more speed, by removing the temporary storage of the chi2 calculation.
* Made copies of numpy arrays instead of creating them anew.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Added self.frqs_a as a multidimensional numpy array.
* Small fix for the indices into the errors and values numpy arrays.
* Lowered the number of iterations in the profiling scripts, so that they can be used as bug finders.
* Moved the calculation of dw_frq out of the spin and spectrometer loops. This is done with a special 1/0 spin numpy array which turns the values on or off in the numpy array multiplication. The multiplication first axis-expands dw and then tiles the arrays according to the numpy structure.
* Moved the calculation of pA and kex out of all loops, using two special 1/0 spin structure arrays.
* Removed the dw_frq_a numpy array, as it was not necessary.
* Removed all looping over spins and spectrometer frequencies. This was the last loop!
* Reordered the arrays for code clarity.
* The back_calc array is now initialised as a copy of the values array.
* Small edit to the profiling script, to help with bug finding.
* Fixed the arrays so that they are correctly initialised with one or zero values.
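The axis-expand-then-tile trick used to move dw_frq out of the loops can be sketched in a few lines of numpy. The sizes and values below are hypothetical, not relax's own data structures:

```python
import numpy as np

NS, NM, ND = 2, 3, 4                  # spins, fields, dispersion points (toy sizes)
dw = np.array([1.0, 2.0])             # per-spin chemical shift differences
frqs = np.array([500., 600., 700.])   # per-field frequency scaling

# Axis-expand dw to (NS, 1), broadcast against frqs to get (NS, NM),
# then tile along a new dispersion axis to fill the full structure:
dw_frq = dw[:, None] * frqs[None, :]
dw_full = np.tile(dw_frq[:, :, None], (1, 1, ND))
print(dw_full.shape)   # (2, 3, 4)
```

Once the full structure exists, the per-spin, per-field loops collapse into single vectorised multiplications.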
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Very important fix so that only the part of the data array with NaN values is replaced. Before, all values were replaced, which was wrong.
* Needed to increase the relative tolerance when testing whether the pA array is 1. The system test Relax_disp.test_hansen_cpmg_data_missing_auto_analysis now passes. Also added some comment lines to prepare for the mask replacement of values, for example if only some of the etapos values should be replaced.
* Restored the profiling script to normal.
* Made the logic and comments much clearer about how to reshape, axis-expand, and tile the numpy arrays.
* Implemented a masked-array search for where the "missing" array equals 1. This makes it possible to replace all such values in the value array with this mask, eliminating the last loops over the missing values. It took over 4 hours to figure out that the mask should be accessed as mask.mask to return the full structure.
* Yet another small improvement for the profiling script.
* Removed the multidimensional structure of pA. pA is not multidimensional, and can simply be multiplied with the numpy arrays.
* Fix for the testing of pA in the lib function when pA is just a float.
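The masked-array replacement described above, including the mask.mask detail, can be sketched as follows. The data values are hypothetical:

```python
import numpy as np

values = np.array([1.0, 2.0, 3.0, 4.0])      # measured values
back_calc = np.array([1.1, 2.1, 3.1, 4.1])   # back-calculated values
missing = np.array([0, 1, 0, 1])             # 1 marks a missing point

# Build the mask once; mask.mask returns the full boolean structure,
# which can then index both arrays at the missing positions only:
mask = np.ma.masked_where(missing == 1, missing)
back_calc[mask.mask] = values[mask.mask]
print(back_calc)   # [1.1 2.  3.1 4. ]
```

This replaces the per-point loop over missing values with a single vectorised assignment.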
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Modified the unit tests so that pA is sent to the target function as a float instead of an array.
* Removed the multidimensional structure of kex. kex is not multidimensional, and can simply be multiplied with the numpy arrays.
* Fix for the testing of kex in the lib function when kex is just a float.
* Modified the unit tests so that kex is sent to the target function as a float instead of an array.
* Important fix for replacing values when eta_pos > 700 is violated. This fixes the system test Relax_disp.test_sod1wt_t25_to_cr72, which failed after turning kex into a numpy float. The trick is to make a numpy mask which stores the positions where the values should be replaced, and to replace them just before the final return. This makes sure that not all values are changed.
* Increased the kex rate to 1e7 in the clustered unit test cases. This is to demonstrate where there will be no exchange.
* Added a chi2 value calculation function for multidimensional numpy arrays.
* Called the newly created chi2 function for the multidimensional numpy array calculations.
* Renamed chi2_ND to chi2_rankN. This is a better name for representing the calculation over multiple axes.
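A rank-N chi-squared function of the kind described above reduces to one vectorised expression. The name chi2_rankN matches the entry, but this toy signature and the data below are illustrative sketches, not relax's actual lib code:

```python
import numpy as np

def chi2_rankN(data, back_calc, errors):
    """Chi-squared summed over all axes of rank-N numpy arrays."""
    return np.sum((1.0 / errors * (data - back_calc))**2)

data = np.array([[1.0, 2.0], [3.0, 4.0]])
back = np.array([[1.5, 2.0], [3.0, 3.0]])
err = np.ones((2, 2))
print(chi2_rankN(data, back, err))   # 1.25
```

Because np.sum() runs over every axis by default, the same function handles a single spin or a whole cluster without any explicit looping.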
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Made special ei, si, mi, and oi numpy structure arrays. This is for rapid numpy array creation in the target function.
* Replaced self.spins_a with self.disp_struct.
* Made the initialisation structures for dw.
* Initial try at reshaping dw faster.
* Switched to using self.ei, self.si, self.mi, self.oi and self.di, for better code readability.
* Commented out the sys.exit() which would make the code fail for a wrong calculation of dw.
* Copied the profiling script for the CPMG model CR72 to the R1rho DPL94 model. The framework of the script will be the same, but the data a little different.
* Started converting the profiling script to DPL94.
* Replaced self.(ei,si,mi,oi,di) with self.(NE,NS,NM,NO,ND). These numbers represent the maximum size of each dimension, instead of an index.
* Added the ei index when creating the first dw_mask.
* Reordered how the dw init structures are created.
* Cleared the dw_struct before the calculation.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Started using the new way of constructing dw, for running the system tests. Note that somewhere in the dw array the frequencies will differ between the two implementations, but apparently this does not matter.
* Inserted a temporary method switch for profiling.
* First try at speeding up the old dw structure calculation.
* Simplified the calculation.
* Yet another try at implementing a fast dw structure method.
* Implemented the fastest way to calculate the dw structure. This uses the numpy ufunc multiply.outer to create the outer array, which is then multiplied with the frqs structure.
* Renamed the temporary dw structure to a generic structure.
* Restructured the calculation of R20A and R20B into the most efficient form.
* Converted lib/dispersion/cr72.py to a multidimensional numpy array calculation.
* Changed the catching of dw being zero to use a masked array. Implemented backwards compatibility with the unit tests.
* Bugfix for the testing of whether kex is zero.
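The multiply.outer approach named above builds the per-spin, per-field grid in a single ufunc call. The values here are hypothetical:

```python
import numpy as np

dw = np.array([1.0, 2.0])             # per-spin chemical shift differences
frqs = np.array([500., 600., 700.])   # per-field frequencies

# The outer product gives every (spin, field) combination at once,
# replacing a nested Python loop:
dw_struct = np.multiply.outer(dw, frqs)
print(dw_struct.shape)   # (2, 3)
```

The resulting structure can then be multiplied element-wise with any further frequency scaling structure of the same shape.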
It was previously tested whether kex was equal to 1.0.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Implemented masked replacement when fact is less than 1.0.
* Replaced the isnan mask with a function that catches all invalid values.
* Removed the masked replacement when fact is less than 1.0. This is very strange, but otherwise the system test Relax_disp.test_hansen_cpmg_data_missing_auto_analysis would fail.
* Removed the slow allclose() function for testing whether R20A and R20B are equal. It is MUCH faster to simply subtract and check that the sum is not 0.0.
* Replaced the temporary R2eff variable with back_calc, and used numpy subtract to speed things up.
* Made the lib function a pure numpy array calculation. This requires that r20a, r20b and dw have the same dimensions as the dispersion points.
* Changes to the unit tests, so that the data is sent to the target function in numpy array format.
* Removed the creation of an unnecessary structure by using numpy multiply.
* Moved the mask which finds where to replace values into the __init__ function.
* Copied the profiling script for CR72 to the B14 model.
* Modified the profiling script for the B14 model.
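The subtract-and-sum equality check mentioned above can be compared directly with allclose(). The arrays are hypothetical; note the trade-off that opposite-signed differences could in principle cancel in the summed form, which allclose() would not miss:

```python
import numpy as np

r20a = np.full(1000, 5.0)
r20b = np.full(1000, 5.0)

same_allclose = np.allclose(r20a, r20b)   # general tolerance-based check, slower
same_fast = np.sum(r20a - r20b) == 0.0    # cheap exact check; opposite-signed
                                          # differences could cancel out here
print(same_allclose, same_fast)           # True True
```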
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Modified the B14 model lib file for the faster multidimensional numpy mode. The implementation comes almost directly from the CR72 model file.
* Reverted the use of the "mask_set_blank" mask. It did not work, and many system tests started failing.
* Changed the target function to handle the B14 model with the faster numpy computation.
* Changed the B14 unit tests to match the numpy input requirements.
* Added additional tests in B14 for where math errors can occur. This is very easy with a conditional masked search in the arrays.
* Comment fix for finding when E0 is above 700 in the B14 lib function.
* Removed the use of asarray(), since the variables are already arrays.
* Changed the target function for the CR72 model. The original R20A, R20B and dw parameters are now also passed to CR72. dw is tested for zero, in which case flat lines are returned; it is faster to search the smaller numpy array than the 5-dimensional dw array. R20A and R20B are likewise subtracted to decide whether the full model should be used, as it is faster to subtract the smaller arrays. These small tricks are expected to give a 5-10% speed-up.
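The "test the small array, not the 5-dimensional one" trick from the last entry amounts to checking the original per-spin dw before any tiling. A minimal sketch with hypothetical sizes:

```python
import numpy as np

dw_orig = np.zeros(3)                          # small per-spin dw array
dw_full = np.tile(dw_orig[:, None], (1, 10))   # blown-up (3, 10) structure

# Scanning the small original array is much cheaper than scanning the
# full tiled structure, and gives the same answer:
no_exchange = not np.any(dw_orig)
print(no_exchange)   # True, so the model can return flat lines (R2eff = R20)
```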
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Made the CR72 lib function accept the original R20A, R20B and dw arrays. This is for speed.
* Changed the unit tests to send the original R20A, R20B and dw_orig into the tests of the CR72 lib function.
* Changed the profiling script to send R20A, R20B and dw as the original parameters to the lib function.
* Changed the target function for the B14 model. The original dw parameter is now also sent to B14. dw is tested for zero, in which case flat lines are returned; it is faster to search the smaller numpy array than the 5-dimensional dw array. These small tricks are expected to give a 5-10% speed-up.
* Made the B14 lib function accept the original dw array. This is for speed.
* Changed the unit tests to send the original dw_orig into the tests of the B14 lib function.
* Changed the profiling script to send dw as the original parameter to the B14 lib function.
* Copied the profiling script for the CR72 model to the TSMFK01 model.
* Modified the profiling script for the TSMFK01 model.
* Modified the target function for the TSMFK01 model to send dw as the original parameter.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Modified the TSMFK01 lib function to accept dw_orig as input, and replaced the functions finding math domain errors with masked replacements.
* Made the TSMFK01 unit tests send R20A and dw as numpy arrays.
* Large increase in speed for the TSMFK01 model, by changing the target function to use multidimensional numpy arrays in the calculation. This is done by restructuring the data into multidimensional arrays of dimension [NE][NS][NM][NO][ND]: the number of experiments, spins, magnetic field strengths and offsets, and the maximum number of dispersion points. The speed comes from using numpy ufunc operations. The new version is 2.4X as fast for a single-spin calculation, and 54X as fast for a clustered analysis.
* Replaced the math domain checking in the DPL94 model with masked-array replacement.
* First try at speeding up the DPL94 model. This has not yet succeeded, since the system test Relax_disp.test_dpl94_data_to_dpl94 still fails.
* Trying to move some of the structures into their own part.
* Fix for forgetting to raise frqs to the power of 2. This was found by inspecting all the print out before and after the implementation. The new implementation of DPL94 now passes all system and unit tests.
* Moved the expansion of the R1 structure out of the for loops. This speeds up the __init__ of the target function class.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Moved the packing of the errors and values out of the for loop in the __init__ of the target function class.
* Moved the multidimensional expansion of inv_relax_times out of the for loop. This can be done for all structures which do not have missing points.
* For inv_relax_times, expanded one axis and tiled up to the number of spins, before reshaping and blowing up to the full structure.
* Moved the expansion of frqs out of the for loops.
* Documentation fix for the description of the input arrays to the lib functions.
* Converted the TAP03 model to use multidimensional numpy arrays.
* Made dw in the TAP03 unit tests a numpy array.
* Replaced the loop structure in the TAP03 target function with numpy arrays. This makes the model faster.
* Reordered the initialisation of the special numpy array structures. This was done in the init part of the relaxation dispersion target function.
* The MODEL_TSMFK01 model now also gets self.tau_cpmg calculated in the init part.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* The methods replacing math domain errors in the TP02 model have been replaced with numpy masks. The documentation is also fixed.
* Fix for sending dw as a numpy array in the TP02 unit tests.
* Replaced the TP02 target function with one using higher-dimensional numpy array structures. This makes the model much faster.
* Fix for adding the TP02 model to the part of the init class which prepares the higher-dimensional numpy structures.
* Made the NOREX model a faster numpy array calculation.
* Removed an unnecessary frq_struct in the target function init. frqs can simply be expanded, and back_calc is cleaned afterwards with disp_struct.
* The methods replacing math domain errors in the M61 model have been replaced with numpy masks. The documentation is also fixed.
* Fix for sending r1rho_prime and phi_ex_scaled as numpy arrays in the M61 unit tests.
* Replaced the M61 target function with one using higher-dimensional numpy array structures. This makes the model much faster.
* The methods replacing math domain errors in the M61b model have been replaced with numpy masks. The documentation is also fixed.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Fix for sending r1rho_prime and dw as numpy arrays in the M61b unit tests.
* Replaced the M61b target function with one using higher-dimensional numpy array structures. This makes the model much faster.
* Removed the number of points sent to the TSMFK01 lib function, as it is no longer used. Also removed from the corresponding unit tests.
* Removed the number of points and pB sent to the TP02 lib function. The number of points is no longer used, and pB is now calculated in the lib function instead. Also removed from the corresponding unit tests.
* Removed the number of points and pB sent to the TP02 lib function. pB is calculated in the lib function instead.
* Removed the number of points, pB, k_AB and k_BA sent to the B14 lib function. The number of points is no longer used; pB, k_AB and k_BA are now calculated in the lib function instead. Fixed in the target function, the lib function, and the corresponding unit tests.
* Fix for sending the number of points in the TSMFK01 target function, as this was removed from the lib function.
* Removed the number of points and pB sent to the TAP03 lib function. The number of points is no longer used, and pB is calculated in the lib function instead.
Fixed in the target function, the lib function, and the corresponding unit tests.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Removed the number of points sent to the CR72 lib function, as it is no longer used. Fixed in the target function, the lib function, and the corresponding unit tests.
* Removed the number of points sent to the DPL94 lib function, as it is no longer used. Fixed in the target function, the lib function, and the corresponding unit tests.
* Removed the number of points sent to the M61 lib function, as it is no longer used. Fixed in the target function, the lib function, and the corresponding unit tests.
* Removed the number of points sent to the M61b lib function, as it is no longer used. Fixed in the target function, the lib function, and the corresponding unit tests.
* The methods replacing math domain errors in the MP05 model have been replaced with numpy masks. The number of points has been removed, as the mask utility replaces it, and the calculation of pB has been moved into the lib function for simplicity. The documentation is also fixed.
* Fix for sending dw as a numpy array in the MP05 unit tests.
* Replaced the MP05 target function with one using higher-dimensional numpy array structures. This makes the model much faster.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* The methods replacing math domain errors in the LM63 model have been replaced with numpy masks. The number of points has been removed, as the mask utility replaces it. The documentation is also fixed.
* Fix for sending the number of points in the LM63 unit tests.
* Replaced the LM63 target function with one using higher-dimensional numpy array structures. This makes the model much faster.
* Fix for the replacement of values with a mask when phi_ex is zero. This can be spin specific, and the system test Relax_disp.test_hansen_cpmg_data_to_lm63 had started to fail.
* Fix for sending r20 and phi_ex as numpy arrays in the LM63 unit tests. This is after using masks for the replacement.
* A one-digit decrease in the parameter check precision in the system test Relax_disp.test_hansen_cpmg_data_to_lm63. It is unknown why this has occurred.
* The methods replacing math domain errors in the IT99 model have been replaced with numpy masks. The number of points has been removed, as the mask utility replaces it, and pB is now calculated inside, making the lib function simpler. The documentation is also fixed.
* Fix for sending r20 and dw as numpy arrays in the IT99 unit tests.
This is after using masks for the replacement.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Replaced the IT99 target function with one using higher-dimensional numpy array structures. This makes the model much faster.
* The methods replacing math domain errors in the ns_cpmg_2site_expanded model have been replaced with numpy masks. The number of points has been removed, as the mask utility replaces it. pB is now calculated inside, making the lib function simpler, and k_AB and k_BA are also now calculated here. The documentation is also fixed.
* Fix for sending r20 and dw as numpy arrays in the ns_cpmg_2site_expanded unit tests. This is after using masks for the replacement.
* Replaced the ns_cpmg_2site_expanded target function with one using higher-dimensional numpy array structures. This makes the model much faster. I cannot get the system test Relax_disp.test_cpmg_synthetic_dx_map_points to pass.
* Fix for the system test Relax_disp.test_cpmg_synthetic_dx_map_points. Simply copying self.back_calc_a to self.back_calc solved the problem. In specific_analyses.relax_disp.optimisation, the back_calc_r2eff() function gets the last values stored in the class: this is in "class Disp_result_command(Result_command)" with self.back_calc = back_calc, and back_calc_r2eff() has "return model.back_calc".
* The methods replacing math domain errors in the ns_cpmg_2site_3d model have been replaced with numpy masks.
The number of points has been removed, as the mask utility replaces it. pB is now calculated inside, making the lib function simpler, and k_AB and k_BA are also now calculated here. The magnetisation vector is also now filled in the lib function.
Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis:
* Fix for the NS CPMG 2-site 3D unit tests, matching the reduced input to the lib function.
* Changed the target function of the NS CPMG 2-site 3D model to use the reduced input to the lib function.
* Changed the linked matrix/vector inner products into chained dot expressions.
* Set up the essential dot matrix to be initialised earlier.
* Lowered the number of dot iterations by pre-preparing the dot matrix another round.
* Turned the Mint vector into a (7,1) matrix, so that the dimensions fit with the evolution matrix.
* Lowered the number of dot operations by pre-preparing the evolution matrix another round. The power is always even in the system tests. The trick to removing this for loop would be to make a general multi-dot function.
* Moved the bulk operation of the NS CPMG 2-site 3D model into the lib file. This is to keep the API clean.
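Pre-preparing the evolution matrix "another round" halves the number of dot products when the power is even, as in the system tests. A minimal sketch with a hypothetical 2x2 evolution matrix in place of relax's 7x7 one:

```python
import numpy as np

R = np.array([[0.9, 0.1], [0.1, 0.9]])   # toy evolution matrix
M0 = np.array([[1.0], [0.0]])            # magnetisation as a column vector
power = 4                                # always even in the system tests

# Naive: one dot per evolution step.
M = M0.copy()
for _ in range(power):
    M = np.dot(R, M)

# Pre-squaring the matrix halves the number of dots in the loop:
R2 = np.dot(R, R)
M_fast = M0.copy()
for _ in range(power // 2):
    M_fast = np.dot(R2, M_fast)

print(np.allclose(M, M_fast))   # True
```

Repeated squaring generalises this further, which is what a general multi-dot function would exploit.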
* Changed the unit test of NS CPMG 2-site 3D after the change to the input of the function. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Changed the target function for NS CPMG 2-site 3D. This reflects the new API layout. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Changed the lib function of NS CPMG 2-site star to take the dw and r20a+r20b input as higher dimensional arrays. This is to move the main operations from the target function to the lib function, and to make the API code clean and consistent. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Changed the target function of NS CPMG 2-site star to reflect the new input to the function. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Made the dot evolution structure faster for NS CPMG 2-site 3D. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Implemented the BLAS method of the dot product, which should be faster. I cannot get the "out" argument to work. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Small fix for the dot method, but the out argument does not work. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Implemented the dot method via BLAS. This needs an array with one more axis. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for clustered analysis. * Last try at using the out argument. In the last dotting loop, the out argument won't work, no matter what I do. Task #7807 (https://gna.org/task/?7807): Speed-up of dispersion models for cluste... [truncated message content] |
From: Edward d'A. <ed...@do...> - 2014-07-03 14:07:23
|
This is a major bugfix release and the first requiring numpy >= 1.6 to allow for faster calculations for certain analyses. There have been improvements to the GUI user functions, the ^[[?1034h escape code is finally suppressed on Linux systems, and the structure.com user function has been added. Bugfixes include the proper handling of the R20A and R20B parameters in the relaxation dispersion models, the incorrect handling of the 'IT99' dispersion model tex parameter, a fatal mistake in the equations of the 'LM63 3-site' dispersion model, the correct handling of files with multiple extensions (for example *.pdb.gz), and the freezing of the GUI on Mac OS X systems when closing the free file format window. Full details can be found below. For this release, the Mac OS X framework used to build the universal 3-way (ppc, i386, x86_64) binaries for the stand-alone relax application has been updated. The relax application now bundles Python 2.7.8, numpy 1.8.1, scipy 0.14.0, nose 1.3.3, wxPython 2.9.3.1 osx-cocoa (classic), matplotlib 1.3.1, epydoc 3.0.1, mpi4py 1.3.1 and py2app 0.8.1. This should result in better formatted relax state and results files and give power users access to more advanced packages. The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html). The full list of changes is: Features: * Improvements for a number of GUI elements used in the user function windows. * The ^[[?1034h escape code should now no longer be emitted by GNU readline on Linux systems. * Created the very basic structure.com user function for calculating the centre of mass. 
This simply allows an easy interface to the pipe_control.structure.mass.pipe_centre_of_mass() function. * Expansion of the REMARK section of the PDB file created for the internal structural object. This is visible when using the structure.write_pdb user function, as well as the many other user functions which create PDB files. The relax version as well as the file creation date are now recorded in the PDB file. This extra information should be very useful. Empty lines in the REMARK section improve the formatting. Changes: * Added proper sectioning to the release checklist document. * Added the upload script to the release checklist document. * Modified the Sequence GUI input element used for the user function list arguments. The first column is now of fixed width when titles are supplied. Previously, when supplying titles, the width would be tiny and no text would be visible. * Added titles for all 3D coordinate user function arguments. This is for the Sequence GUI input element, and affects the frame_order.average_position, n_state_model.CoM and paramag.centre user functions. * The compilation of the C modules now respects the user defined environment. This is the patch from Justin (https://gna.org/users/jlec) attached to bug #22145 (https://gna.org/bugs/?22145). It has been modified to include a comment and remove a double empty line. * Bug fix for the change making the compilation of the C modules respect the user defined environment. The problem was that on Mac OS X (as well as other systems) these environmental variables were not defined, and hence the scons commands would all fail with a KeyError and traceback. Now the keys in the os.environ dictionary are searched for before they are used. * Fix for the wxPython link in the installation chapter of the manual. This was pointing to the scipy website for some reason. * Changed the Python readline link for MS Windows in the installation chapter of the manual. 
This now points to https://pypi.python.org/pypi/pyreadline as the iPython link is broken. * Implemented the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster. This is to catch the wrong unpacking of R2A and R2B when performing a clustered full dispersion model analysis. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Extended the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for the B14 full model. This is to catch the wrong unpacking of R2A and R2B when performing a clustered full dispersion model analysis. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Extended the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for the NS CPMG 2SITE 3D full model. This is to catch the wrong unpacking of R2A and R2B when performing a clustered full dispersion model analysis. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Extended the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for the NS CPMG 2SITE STAR full model. This is to catch the wrong unpacking of R2A and R2B when performing a clustered full dispersion model analysis. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Added the synthetic data generator script which created the data to test against. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Split the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster into separate tests. 
A setup function, setup_bug_22146_unpacking_r2a_r2b_cluster(self, folder=None, model_analyse=None), was added, together with the tests test_bug_22146_unpacking_r2a_r2b_cluster_B14, test_bug_22146_unpacking_r2a_r2b_cluster_CR72, test_bug_22146_unpacking_r2a_r2b_cluster_NS_3D and test_bug_22146_unpacking_r2a_r2b_cluster_NS_STAR. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Modified the profiling script to get closer to the implementation in relax. An additional test function is set up to figure out how to reshape the numpy arrays in the target function. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Updated the profiling text for the CR72 model, which is now tested for 3 fields. This is related to: Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion models for clustered analysis. * Added searching for the environment variable PYTHON_INCLUDE_DIR if Python.h is not found in the standard Python library. This can be very handy if one has a Python virtual environment for multiple users. This relates to the wiki page: http://wiki.nmr-relax.com/Epd_canopy. * The lib.compat.norm() replacement function for numpy.linalg.norm() now handles the case of no axis argument. This is to allow the function to be used in all cases where numpy.linalg.norm() is used, while providing compatibility with the axis argument and all numpy versions. * Fix for the scons target for compiling the relax manual when using a repository checkout copy. The method for compiling the relax manual was calling the version.revision() function, however this was replaced a while ago by the version.repo_revision variable. * Created two unit tests for the lib.io.file_root() function. The second of the tests demonstrates a failure of the function if multiple file extensions are present. 
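The multiple-extension failure demonstrated by the second unit test above can be illustrated with a sketch of a file_root()-style helper. This is a hypothetical reimplementation for illustration, not relax's actual lib.io.file_root() code: it repeatedly applies os.path.splitext() so that every extension is stripped, giving 'structure' for 'structure.pdb.gz' rather than 'structure.pdb'.

```python
import os.path

def file_root(file_path):
    """Return the base file name with every extension removed.

    A single os.path.splitext() call only removes the last extension,
    so it is looped until no extension remains.
    """
    root = os.path.basename(file_path)
    while True:
        root, ext = os.path.splitext(root)
        if not ext:
            return root
```

For example, file_root('dir/structure.pdb.gz') strips '.gz' and then '.pdb', matching the behaviour of the bug fix listed later in this release.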
* Lowered the chi2 value precision in the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster_NS_STAR. This is because the data was produced on a 32-bit machine and tested on 64-bit machines. The error was: AssertionError: 2.4659455670347743e-05 != 0.0 within 7 places. The reason for this is truncation artifacts. * Fix for the wrong path testing of Python.h. Python.h would be in PYTHON_PREFIX/include/pythonX.Y/Python.h and not in PYTHON_PREFIX/include/Python.h. * Better handling of the control-C keyboard interrupt signal in the relax test suite. This includes two changes. The Python 2.7 and higher unittest.installHandler() function is now called, when present, to terminate all tests using the unittest module control-C handler. The second change is that the keyboard interrupt signal is caught in a try-except statement, a message printed out, and the tests terminated. This should be an improvement for all systems. * Added the last profiling information for model CR72. * Added a system test for the LM63 3-site model, based on the results folder test_suite/shared_data/dispersion/Hansen/relax_results/LM63 3-site. This should pass, but it does not. * Created an initial Relax_disp.test_lm63_3site_synthetic system test. This should have been set up a long time ago. It uses the synthetic noise-free data in the test_suite/shared_data/dispersion/lm63_3site directory which was created for a system test but never converted into one. The test still needs modifications to allow it to pass. * Modifications for the Relax_disp.test_lm63_3site_synthetic system test. The r2eff_values.bz2 saved state file has been updated, as it was too old to use in the test. The test has also had a typo bug fixed and the data pipe name updated. The test now also checks all of the optimised values. * Removed the system test test_hansen_cpmg_data_to_lm63_3site. This was a temporary implementation and has been replaced with the system test Relax_disp.test_lm63_3site_synthetic. 
* Fixes for all of the relaxation dispersion system tests which were failing with the new minfx code. Due to the tuning of the log barrier constraint algorithm in minfx in the commit at http://article.gmane.org/gmane.science.mathematics.minfx.scm/25, many system tests needed to be slightly adjusted. Two of the Relax_disp.test_tp02_data_to_* system tests were also failing as the optimisation can no longer move out of the minimum at pA = 0.5 for one spin (due to the low quality grid search in the auto-analysis). * Updated the release checklist document for the new 1.0.7 release of minfx. * Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. The pA parameter is no longer tested for one spin as it moves to random values on different operating systems and 32 vs. 64-bit systems. This is because this spin experiences no exchange, as both dw and kex are zero. * Decreased the value checking precision in the Relax_disp.test_hansen_cpmg_data_to_lm63 system test. This is to allow the test to pass on certain operating systems and 32-bit systems. * Modified the precision of the output from the relax_disp.sherekhan_input user function. This is simply to allow the Relax_disp.test_sod1wt_t25_to_sherekhan_input system test to pass on certain 32-bit systems, as the float output to 15 decimal places is not always the same. This system test has been updated for the change. * Modified the Relax_disp.test_sprangers_data_to_mmq_cr72 system test to pass on certain systems. This test fails on 32-bit Linux (and probably other systems as well). To fix the test, the kex values are all divided by 100 before checking them to 4 decimal places of accuracy. * Improved how the relax installation path is determined in the status object. If the path cannot be found, the current working directory is then checked to see if it is where relax is installed. This is needed when importing modules outside of relax. 
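The installation path fallback described in the last entry above can be sketched as follows. This is a hypothetical illustration, not the actual status object code: the find_install_path() function and the 'relax.py' marker file name are assumptions made for the example.

```python
import os

def find_install_path(module_file, marker='relax.py'):
    """Return the installation directory, falling back to the CWD.

    The path is first derived from a module's location.  If that
    directory does not contain the expected marker file, the current
    working directory is checked instead, which covers the case of
    importing the modules from outside the package itself.
    """
    path = os.path.dirname(os.path.abspath(module_file))
    if os.path.isfile(os.path.join(path, marker)):
        return path
    cwd = os.getcwd()
    if os.path.isfile(os.path.join(cwd, marker)):
        return cwd
    return path
```

The fallback only triggers when the derived path fails the marker check, so normal in-tree imports are unaffected.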
* Hack to permanently eliminate the ^[[?1034h escape code being produced on Linux systems. This is produced by importing the readline module. The escape code will be sent to STDOUT every time relax is executed, so it will be present in all log files. The problem is the TERM environmental variable being set to 'xterm'. The hack simply sets TERM to an empty string. * More hacks for permanently eliminating the ^[[?1034h escape code being produced on Linux systems. This is a nasty feature of the GNU readline library. It is now also turned off in the dep_check module, suppressing ^[[?1034h in Python scripts which import only parts of relax. * Numpy version 1.6 or higher is now required to be able to run relax. This follows from the series of messages: http://www.mail-archive.com/relax-devel@domain.hid, http://www.mail-archive.com/relax-devel@domain.hid, http://www.mail-archive.com/relax-devel@domain.hid, and http://www.mail-archive.com/relax-devel@domain.hid. If too many users complain, maybe this change can be reverted later. This minimal numpy version is needed for many of the speed-ups going into the relaxation dispersion and frame order analyses. It is required for the numpy ufunc out arguments and for the numpy.einsum() function. These will likely be used in other analyses in the future for improving the speed of relax, so it might affect users of other analyses later on. * Updated the numpy minimal dependency in the installation chapter of the manual to version 1.6. * Added better epydoc sectioning to the lib.dispersion.ns_cpmg_2site_expanded module docstring. This is to better separate the original scripts used to document the code evolution. * Empty lines are now handled by the lib.structure.pdb_write.record() function. By supplying the remark as None, empty lines can now be created in the REMARK section of a PDB file. This can be used for nicer formatting. * Fixes for the Diffusion_tensor system tests due to the recent PDB file changes. 
Prior to the comparison of the generated PDB files, all REMARK PDB lines are now stripped out. * Fixes for all system tests failing due to the expanded and improved PDB REMARK section. The system tests now remove all REMARK records prior to comparing file contents. The special strip_remarks() system test method has been created to simplify the stripping process. * Fix for the software verification tests. The recent expansion and improvements of the REMARK records created by the internal structural object PDB writing method imported the relax version to place this information into the PDB files. However this breaks the relax library design, as shown by the verification tests. Instead the relax version information is being taken from the lib.structure.internal.object.RELAX_VERSION variable. This defaults to None, however the version module now sets this variable directly when it is imported so that it is always set to the current relax version when running relax. * General Python 3 fixes via the 2to3 script. * Removed the lib.compat.sorted() function which was providing Python 2.3 compatibility. For a while now, relax has been unable to run on Python versions less than 2.5. Therefore there is no use for having this replacement function for Python <= 2.3 which was being placed into the builtins module. * Python 3 fixes for the entire codebase using the 2to3 script. The command used was: 2to3 -j 4 -w -f xrange . * The internal structural object add_molecule() and has_molecule() methods are now model specific. This allows for finer control of the structural object. * Created the new lib.structure.files module. This currently contains the single find_pdb_files() function which will be used to find all *.pdb, *.pdb.gz and *.pdb.bz2 versions of the PDB file in a given path. * Fix for the breakage of the relax help system. This was reported at http://thread.gmane.org/gmane.science.nmr.relax.devel/6481. 
The problem was that the TERM environmental variable was turned off to avoid the GNU readline library on Linux systems emitting the ^[[?1034h escape code. See the message at http://thread.gmane.org/gmane.science.nmr.relax.devel/6481/focus=6489 for more details. However the Python help system obviously requires this environmental variable. Now the TERM variable will only be reset if it is set to 'xterm', and it is reset to 'linux' instead of the blank string ''. This does not affect any relax releases. Bugfixes: * Fix for the wrong unpacking of R20A and R20B in the CR72 full model. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Fix for the wrong unpacking of R20A and R20B in the B14 full model. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Fix for the wrong unpacking of R20A and R20B in the NS CPMG 2SITE 3D full model. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Fix for the wrong unpacking of R20A and R20B in the NS CPMG 2SITE STAR full model. Bug #22146: (https://gna.org/bugs/?22146) Unpacking of R2A and R2B is performed wrong for clustered "full" dispersion models. * Bug fix for the lib.io.file_root() function for multiple file extensions. The function will now strip off all file extensions. * Fix for bug #22210 (https://gna.org/bugs/?22210), the failure of the 'LM63 3-site' dispersion model. The problem is described in the bug report - the multiplication in the tanh() function is a mistake, it must be a division. * Fix for the Library.test_library_independence verification test on MS Windows. The tearDown() method now uses the error handling test_suite.clean_up.deletion() function to remove the copied version of the relax library. * Fixed the unpacking of the tex parameter for global analysis in model IT99. 
Bug #22220 (https://gna.org/bugs/index.php?22220): Unpacking of parameters for global analysis in model IT99, is performed wrong. * Fix for bug #22257, the freezing of the GUI after using the free file format window on Mac OS X. This is reported at https://gna.org/bugs/?22257. This is a recurring problem in Mac OS X as it cannot be tested in the relax test suite. The problem is with wxPython. The modal dialogs, such as the free file format window, cannot be destroyed on Mac OS X using wx.Dialog.Destroy() - this kills wxPython and hence kills relax. The problem does not exist on any other operating system. To fix this, all wx.Dialog.Destroy() calls have been replaced with wx.Dialog.Close(). |
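The refined TERM workaround described in the changes above can be sketched as follows. This is a standalone sketch of the technique, not relax's dep_check module verbatim; the suppress_readline_escape() function name is an assumption for the example. GNU readline emits the ^[[?1034h escape code on import when TERM is 'xterm', so only that value is rewritten, and it is replaced with 'linux' rather than an empty string so the Python help system keeps working.

```python
import os

def suppress_readline_escape():
    """Reset TERM from 'xterm' to 'linux' before readline is imported.

    Only the 'xterm' value is touched: blanking TERM entirely would
    break the Python help system, which requires the variable.
    """
    if os.environ.get('TERM', '') == 'xterm':
        os.environ['TERM'] = 'linux'

# Call this before any 'import readline' so the escape code is never
# emitted to STDOUT (and hence never ends up in log files).
suppress_readline_escape()
```

Any other TERM value (or an unset TERM) is left untouched, so the workaround is invisible outside the problematic xterm configuration.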