Name | Modified | Size |
---|---|---|
README.md | 2014-02-23 | 2.0 kB |
spectral-0.14.zip | 2014-02-23 | 184.8 kB |
spectral-0.14.tar.gz | 2014-02-23 | 141.1 kB |
Totals: 3 items | | 328.0 kB |
# Spectral Python (SPy) version 0.14
To install, uncompress the archive, `cd` into the unpacked directory, and type

```
python setup.py install
```

Instead of downloading the archive, the latest release of the software can be automatically downloaded and installed using pip:

```
pip install -U spectral
```
For information on package dependencies, see the web site.
## Changes

- Attempt to use the Pillow fork of PIL, if available, rather than the older PIL.
- `view_cube` now uses common color scale limits on all side faces.
- When creating an `AsterDatabase` instance, directories in the `SPECTRAL_DATA` environment variable are searched for the specified file (after the current directory).
- `spectral.imshow` accepts an optional `fignum` argument to render to an existing figure (see the sketch after this list).
- Class labels in a `spectral.imshow` window can be reassigned even when class labels were not provided in the function call (all pixels will start with class 0).
- File `spectral/algorithms/perceptron.py` can be used independently of the rest of the package.
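
A minimal sketch of the new `fignum` option; the file name and band indices below are placeholders, not part of the release:

```python
import matplotlib.pyplot as plt
import spectral

# Placeholder file name; substitute any image SPy can open.
img = spectral.open_image('92AV3C.lan')

# Create a figure first, then ask spectral.imshow to render into it.
fig = plt.figure()
view = spectral.imshow(img, (29, 19, 9), fignum=fig.number)
plt.show()
```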
## Bug Fixes

- The front and left sides of the image cube displayed by `view_cube` were mirrored left-right, and the cube aspect ratio was computed incorrectly for non-square images. Both bugs were introduced by a recent release.
- Global covariance was not being scaled properly in the `MahalanobisDistanceClassifier`. Mathematically, the scaling does not affect results (it scales all class distances equally) and did not affect results on the test data, but for a large covariance with many classes it could have caused rounding/truncation errors that would affect results (see the example after this list).
- The `PerceptronClassifier` constructor was failing due to recent changes in base class code. Unit tests have been added to ensure it continues to work properly.
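
A quick illustration (not SPy's internal code) of why the covariance scaling cannot change results in exact arithmetic: multiplying the covariance by a positive constant scales every squared Mahalanobis distance by the same factor, so the minimum-distance class is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)             # a hypothetical pixel
means = rng.normal(size=(3, 5))    # three hypothetical class means
A = rng.normal(size=(5, 5))
cov = A @ A.T + 5 * np.eye(5)      # a positive-definite covariance

def sq_mahalanobis(x, m, cov):
    d = x - m
    return d @ np.linalg.inv(cov) @ d

d1 = [sq_mahalanobis(x, m, cov) for m in means]
d2 = [sq_mahalanobis(x, m, 10 * cov) for m in means]  # mis-scaled covariance
assert np.argmin(d1) == np.argmin(d2)  # same class wins either way
```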
## Performance Improvements

- `PerceptronClassifier` is roughly an order of magnitude faster due to better use of NumPy. Inputs are now scaled and weights are initialized within the data limits, which usually results in fewer iterations for convergence (see the sketch below).
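
A minimal sketch (not SPy's actual implementation) of the kind of NumPy vectorization behind this speedup: activations and weight updates are computed for all samples in single array operations rather than a Python loop, inputs are scaled to the unit interval, and weights start within the data limits.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Toy single-layer perceptron; X is (n_samples, n_features), y in {0, 1}."""
    # Scale inputs to [0, 1], as the release notes describe.
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    rng = np.random.default_rng(0)
    # Initialize weights within the (scaled) data limits.
    w = rng.uniform(X.min(), X.max(), size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        preds = (X @ w + b > 0).astype(float)  # all activations at once
        errors = y - preds
        if not errors.any():                   # converged: every sample correct
            break
        w += lr * (X.T @ errors)               # batch weight update
        b += lr * errors.sum()
    return w, b
```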