Serialization + persistence: in a few lines of code, compress and annotate Python objects into SQLite, then later retrieve them chronologically by keyword without writing any SQL. The most useful "standard"-style module for storing schema-less data in a database.
- single standard Python module for serialized data storage.
- elegant user methods (write Python code; not necessarily SQL).
- includes compression, plus support for storing file and URL contents.
- optimized for speed, security, and concurrency.
- searched objects can be retrieved as a dictionary (data subset).
- search syntax supports Unix filename-style (glob) pattern matching.
- useful as persistent generalized queue.
- embedded documentation includes useful tips and references.
- very easy to use immediately (just import the module into practical projects).
- README: http://yserial.sourceforge.net
- [ in beta: Farm of databases for concurrency ]
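The workflow the summary describes can be sketched with only the standard modules the project builds on (sqlite3, zlib, pickle). This is an illustrative miniature written from scratch, not y_serial's actual API; the class and method names below are invented for the sketch:

```python
# Minimal from-scratch illustration of the idea: pickle + zlib-compress a
# Python object, store it in SQLite with a free-text note, then retrieve
# the most recent match later by keyword -- no SQL at the call site.
import pickle
import sqlite3
import time
import zlib

class TinyStore:
    """Hypothetical miniature of a keyword-searchable object store."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS objects"
            " (tstamp REAL, notes TEXT, blob BLOB)")

    def insert(self, obj, notes):
        # Serialize, compress, and timestamp the object.
        blob = zlib.compress(pickle.dumps(obj))
        self.conn.execute("INSERT INTO objects VALUES (?, ?, ?)",
                          (time.time(), notes, sqlite3.Binary(blob)))
        self.conn.commit()

    def select(self, keyword):
        # Return the newest object whose note contains the keyword.
        row = self.conn.execute(
            "SELECT blob FROM objects WHERE notes LIKE ?"
            " ORDER BY tstamp DESC LIMIT 1",
            ("%" + keyword + "%",)).fetchone()
        return pickle.loads(zlib.decompress(row[0])) if row else None

store = TinyStore()
store.insert({"x": 1, "y": [2, 3]}, "plan important")
print(store.select("plan"))
```

The real module adds compression options, glob-style search, queue semantics, and concurrency handling on top of this basic pattern.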
Looks very cool, but it is difficult to install except by manually placing it on PYTHONPATH. Please package it as an egg and list it on pypi.python.org so automated tools like zc.buildout and easy_install can find it. A tiny setup.py is all that is needed. Very good work, though.
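A minimal setup.py along the lines this comment requests might look like the following. Everything here is an assumption for illustration (package name, version, and module filename are not taken from the project) and would need to match the actual release:

```python
# Hypothetical setup.py sketch -- name, version, and py_modules entry are
# assumptions; adjust to match the actual y_serial release before use.
from setuptools import setup  # setuptools enables "python setup.py bdist_egg"

setup(
    name="yserial",
    version="0.60",  # assumed version number
    description="Compress and annotate Python objects into SQLite",
    url="http://yserial.sourceforge.net",
    py_modules=["y_serial"],  # assumed module filename
)
```

With this file in place, `python setup.py bdist_egg` builds an egg and `python setup.py register sdist upload` lists the package on PyPI, after which easy_install and zc.buildout can resolve it by name.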
this essentially unifies the standard modules sqlite3, zlib, gzip, and cPickle to compress Python objects (such as dictionaries, lists, and classes) for persistence in a SQL database, with fast access in a few lines of code (without SQL). It is easy to maintain all objects under (project) tables in a *single* compact database file. Helpful examples, functions, and comments are included within the module.