From: th643138 <th6...@ma...> - 2005-11-10 12:12:28
hi, I'm new to this mailing list and to the HSQL project. While analysing the source code to understand how the pieces interact, it struck me that the way databases are made persistent is to write all rows of a table into one large binary file using random-access methods (file pointers, seek, ...). So the resulting structure of the file (rows appended each time) is one large serialized block containing all the table data. Did I understand that correctly?

Doing it this way raises some questions for me. How do you store variable-length character fields? Do you allocate the maximum field size, or do you use truly variable-sized records (either with the variable-length data embedded in the record, or attached after the record's fixed-size fields)?

The point is that this persistence strategy gets into real trouble as soon as you enlarge the content of a variable-sized field. The same happens if someone alters an existing table or inserts a new row: the data file has to be restructured, because there is no free space between the serialized records when each record is addressed by its actual file offset.

Doing it this way may be acceptable for small database files, but the feature list promises support for very large databases, up to 8 GB. That may be possible with the current internal structures, but not really in practice: the performance would be poor.

A better solution (especially if you want to use the engine as a server) would be a paged file, as described in most database-architecture books.

So my question: is anyone working on such a persistence layer? If not, I'm interested in trying to do it. It could easily support different page-insert strategies (update in place, or allocate a new page), and it would also support proper physical logging.

ciao
thomas
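To make the problem concrete, here is a minimal sketch (not HSQLDB's actual code; the class name and the `[length][bytes]` row format are invented for the example) of the append-only layout the post describes. Because the next record begins immediately after the current one, a row whose variable-length field grows cannot stay at its offset and has to be relocated to the end of the file, leaving dead space behind:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Rows are appended back-to-back in one file, each stored as
// [int length][UTF-8 bytes], and addressed by their byte offset.
class AppendOnlyRowFile {
    private final RandomAccessFile file;

    AppendOnlyRowFile(Path path) throws IOException {
        this.file = new RandomAccessFile(path.toFile(), "rw");
    }

    // Append a row at the end of the file; return its offset.
    long append(String row) throws IOException {
        long offset = file.length();
        file.seek(offset);
        byte[] bytes = row.getBytes(StandardCharsets.UTF_8);
        file.writeInt(bytes.length);
        file.write(bytes);
        return offset;
    }

    // Read a row back by its offset.
    String read(long offset) throws IOException {
        file.seek(offset);
        byte[] bytes = new byte[file.readInt()];
        file.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // A row that grows cannot be updated in place: the next row
    // starts right after it, so the new version is appended at the
    // end and the old bytes become unreachable dead space.
    long update(long offset, String newRow) throws IOException {
        file.seek(offset);
        int oldLen = file.readInt();
        byte[] bytes = newRow.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= oldLen) {
            file.seek(offset);
            file.writeInt(bytes.length);
            file.write(bytes);
            return offset;        // still fits: update in place
        }
        return append(newRow);    // grew: relocate to the end
    }
}
```

Reclaiming the dead space left by relocated rows is exactly the "restructure the data file" step the post worries about.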
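For contrast, a minimal sketch of the paged-file idea from the database-architecture books (again a hypothetical illustration, not a proposal for HSQLDB's actual format): the file is treated as an array of fixed-size pages, so any page can be rewritten in place without disturbing its neighbours, and freed pages can be reused instead of restructuring the whole file:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;

// The data file is an array of fixed-size pages. Records live inside
// pages; a record that grows is handled within its page or moved to
// another page, never by shifting the rest of the file.
class PagedFile {
    static final int PAGE_SIZE = 4096;
    private final RandomAccessFile file;
    private final Deque<Integer> freeList = new ArrayDeque<>();

    PagedFile(Path path) throws IOException {
        this.file = new RandomAccessFile(path.toFile(), "rw");
    }

    // Allocate a page: reuse a freed one, else extend the file.
    int allocate() throws IOException {
        if (!freeList.isEmpty()) return freeList.pop();
        int pageNo = (int) (file.length() / PAGE_SIZE);
        file.setLength((long) (pageNo + 1) * PAGE_SIZE);
        return pageNo;
    }

    // Freed pages go on a free list for later reuse.
    void free(int pageNo) { freeList.push(pageNo); }

    // Pages are rewritten whole, in place; neighbours are untouched.
    void write(int pageNo, byte[] page) throws IOException {
        if (page.length != PAGE_SIZE) throw new IllegalArgumentException();
        file.seek((long) pageNo * PAGE_SIZE);
        file.write(page);
    }

    byte[] read(int pageNo) throws IOException {
        byte[] page = new byte[PAGE_SIZE];
        file.seek((long) pageNo * PAGE_SIZE);
        file.readFully(page);
        return page;
    }
}
```

The two insert strategies mentioned in the post map directly onto this layer: "update in place" rewrites the record's existing page, while "allocate a new page" calls `allocate()` and links the new page in. Whole-page writes are also what makes physical logging straightforward, since each log entry is simply a page image.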