<HTML>
<HEAD>
<TITLE>Generic Object Oriented Database System</TITLE>
</HEAD>
<BODY>
<UL>
<LI> <A HREF = "#introduction">Introduction</A>
<LI> <A HREF = "#server">GOODS server</A>
  <UL>
  <LI> <A HREF = "#memmgr">Storage memory manager</A>
  <LI> <A HREF = "#transmgr">Transaction manager</A>
  <LI> <A HREF = "#poolmgr">Page pool manager</A>
  <LI> <A HREF = "#objmgr">Object access manager</A>
  <LI> <A HREF = "#classmgr">Class information manager</A>
  <LI> <A HREF = "#sendobject">Optimization of object loading</A>
  <LI> <A HREF = "#replication">Replication support</A>
  </UL>
<LI> <A HREF = "#api">Application interface to the database</A>
  <UL>
  <LI> <A HREF = "SAL.htm">System abstraction layer</A>
  <LI> <A HREF = "#cpp">GOODS interface for the C++ language</A>
    <UL>
    <LI> <A HREF = "#array">Dynamic arrays</A>
    <LI> <A HREF = "#set">Sets</A>
    <LI> <A HREF = "#btree">B*-Tree</A>
    <LI> <A HREF = "#hash">Hash table</A>
    <LI> <A HREF = "#htree">H-Tree</A>
    <LI> <A HREF = "#blob">Blob</A>
    <LI> <A HREF = "#rtree">R-Tree</A>
    <LI> <A HREF = "#kdtree">KD-Tree</A>
    </UL>
  <LI> <A HREF = "java/JavaAPI.htm"><B>GOODS interface for the Java language</B></A>
  <LI> <A HREF = "#www">API for development Web applications</A>
  <LI> <A HREF = "#running">Running GOODS applications</A>
    <UL>
    <LI> <A HREF = "#configure">Database configuration file</A>
    <LI> <A HREF = "#goodsrv">Server monitor GOODSRV</A>
    <LI> <A HREF = "#embedded">Embedded GOODS server</A>
    <LI> <A HREF = "#browser">Database browser</A>
    <LI> <A HREF = "#bugdb">Bug tracking database</A>
    <LI> <A HREF = "#examples">Running GOODS examples</A>
    <LI> <A HREF = "#performance">Measuring GOODS performance</A>
    </UL>
  </UL>
<LI> <A HREF = "#installation">Installation of GOODS</A>
  <UL>
  <LI> <A HREF = "#compilation">Compilation of GOODS sources</A>
  <LI> <A HREF = "#sources">Description of GOODS sources</A>
  </UL>
<LI> <A HREF = "protocol.htm">Desription of client-server protocol</A>
<LI> <A HREF = "#distribution">Distribution of GOODS</A>
</UL>
<HR>
<H2><A NAME = "introduction">Introduction</A></H2>

GOODS (Generic Object Oriented Database System) uses an active-client,
language-independent model of client-server interaction.
All application logic is implemented in and executed by client applications.
The server is responsible for storing and retrieving
objects, handling transactions, object locks, garbage collection,
database backups and recovery. A metaobject protocol is used in the
implementation of the client database application interface to 
provide transparent and flexible interaction with the database. 
"<EM>Generic</EM>" in the acronym GOODS stands for the capability to
extend the system to handle almost all possible object access and
synchronization strategies. An "<CITE>aspect-oriented</CITE>" approach
makes it possible to change object access management policies
without affecting the application code. The notation and ideas of the
"<CITE>aspect-oriented</CITE>"
approach were taken from the works of Kiczales. Different strategies, such as 
a standard serializable transaction model or an optimistic model,
can be used, as well as models fitting the specific needs of a concrete
application.<P>

GOODS is a fully distributed database. 
The database consists of a number of storages, each of which can be
located at different nodes of a network. The storage is controlled
by a storage server, which is responsible both for handling 
database requests relevant to this storage and for interaction with
other servers to perform database wide operations. The client can
work with an arbitrary number of databases at any time. Each persistent 
object is stored in a concrete storage and its location cannot be changed. 
The location of an object within the database is mostly transparent for clients
(a client need not know in which storage an object resides),
but it is possible for a client to attach an object to a concrete storage.
Global database operations (operations affecting more than one storage
in the database) such as global transaction commitment, garbage collection and
object locking are handled automatically by the database system using 
various algorithms to synchronize the activities of each storage server.
Online database backup and schema modification is possible without
suspending client interaction with the database. A lazy object conversion approach
is used for implementing the schema modification. Different clients can
simultaneously work with different versions of class definitions.<P>

GOODS was designed to achieve maximum performance for handling object requests.
Various algorithms for caching and prefetching objects and storage pages
are used for this purpose. By splitting work into separate 
control threads, GOODS makes it possible to use the benefits of 
parallel execution - especially on multiprocessor architectures. 
A specially designed library provides a system-independent interface for
multitasking (classes and methods for the creation and synchronization of tasks,
including synchronization primitives such as mutex, event and semaphore).
This library is used to implement the client part of the database interface as well 
as the GOODS database server itself. Using a metaobject protocol for handling
access to objects makes it possible to specify algorithms 
for object caching and synchronization of concurrent accesses, allowing
a concrete application to reach the highest level of performance.<P>

<H2><A NAME = "server">GOODS server</A></H2>

The GOODS storage server is divided into
several components. Each of them is responsible for its own task and 
interacts with other components by means of a strictly defined interface. 
These components include a storage memory manager, a transaction manager,
a page pool manager, an object access manager and a class information
manager. This modular design makes it possible to choose different
implementations for each component
(the most suitable for a concrete application) and also
to investigate the  efficiency of various database management algorithms 
and strategies. A short description of each component follows.<P>

<H3><A NAME = "memmgr">Storage memory manager</A></H3>

This manager is responsible for (de)allocating storage memory and object
descriptors. In GOODS, an indirect access model is used for interobject
references. Each object has a unique identifier (opid). A descriptor table
provides the mapping of object identifiers to the physical offset within 
the storage file. This approach allows an efficient implementation of the
"<CODE>become</CODE>" operator, which changes the type of a concrete object.
In GOODS, the object descriptor contains information
about the object class, its size and its location in the storage file.
The object descriptor table is implemented using a memory-mapped file. 
The allocation of storage memory in the current implementation is done
using a bitmap. Each bit of this map corresponds to a quantum
of storage memory. Currently this quantum is 32 bytes long. The bitmap
is itself stored in a memory-mapped file.<P> 
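As an illustration of the quantum-based allocation described above, the following
sketch (not the actual GOODS memory manager; the names and the first-fit search are
assumptions of this example) shows how an object size is rounded up to allocation
quanta and how a run of free bits can be located in the bitmap:

<PRE>
#include &lt;stddef.h&gt;
#include &lt;stdint.h&gt;

const size_t ALLOC_QUANTUM = 32;   /* bytes covered by one bit of the bitmap */

/* Round an object size up to a whole number of allocation quanta. */
size_t quanta_needed(size_t object_size) {
    return (object_size + ALLOC_QUANTUM - 1) / ALLOC_QUANTUM;
}

/* First-fit search for n consecutive free (zero) bits; returns the index of
   the first bit of the run, or (size_t)-1 if no such hole exists. */
size_t find_free_run(const uint8_t* bitmap, size_t n_bits, size_t n) {
    size_t run = 0;
    for (size_t bit = 0; bit &lt; n_bits; bit += 1) {
        int busy = bitmap[bit / 8] &amp; (1 &lt;&lt; (bit % 8));
        run = busy ? 0 : run + 1;
        if (run == n) return bit - n + 1;   /* start of the free run */
    }
    return (size_t)-1;
}
</PRE>
<P>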

Garbage collection (GC) is done using a variation of the standard mark-and-sweep 
algorithm with some extensions for synchronization of the GC processes
at different storages. To perform GC, one of the storage servers is chosen as 
a coordinator, which initiates the GC processes at each server
and determines the global end of the mark phase at all servers.<P>

GC can be initiated either when the size of the allocated storage exceeds
some predefined threshold value or after some period spent by the system 
in an idle state. To initiate GC, each server sends a request to the 
coordinator. If the coordinator decides that the GC can be started, it
polls the other servers in the database to find out whether they are ready
to start GC. When a server receives such a request from the coordinator
and sends an acknowledgment to start GC, the server changes its state
to "<DFN>PRE-GC</DFN>". If acknowledgments are received from all servers,
the coordinator broadcasts to all servers a request to start the mark stage
of GC. Otherwise (if a server is not available or a previous GC iteration
has not yet finished at one of the servers) the coordinator broadcasts the
"<DFN>CANCEL</DFN>" request to all servers, so that they can exit from the
"<DFN>PRE-GC</DFN>" state.<P>

Each storage server locally performs GC starting from the storage root objects.
The set of roots consists of storage root objects and instances of objects 
loaded by the client at the moment of starting GC.
Objects already scanned are considered to be marked with 
"<DFN>black</DFN>" color. 
Not yet marked objects which are referenced from "<DFN>black</DFN>"
objects are considered to be "<DFN>gray</DFN>".
All other objects are "<DFN>white</DFN>".<P>

When the GC process finds a reference to an object from another storage, it
sends a message with this reference to the server owning that storage. To make
this exchange of references more efficient, GOODS buffers external references.
Before a reference is sent, it is placed in the export buffer for the destination
server (if the reference is not already present in the buffer). When the buffer
is full, it is sent to the destination server. Each server maintains a separate 
export buffer for every other server.<P>

When there are no more "<DFN>gray</DFN>" objects in the storage, the server
sends a message to the coordinator containing a vector of numbers of messages
with external references sent and received by this server from other servers.
The coordinator maintains a matrix of all such vectors. To determine the
global end of the mark stage, the coordinator checks the following condition:

<BLOCKQUOTE>
  M[i,j].import = M[j,i].export, for all i, j<P>

  where M[i,j].import is the number of messages with external references
  received by server i from server j, and M[j,i].export is the number of
  such messages sent by server j to server i.
</BLOCKQUOTE><P>

If this condition is true, the mark phase is completed at all servers and the
coordinator sends messages to all servers to initiate the sweep phase of GC. 
During the sweep phase all "<DFN>white</DFN>" objects are deallocated.
The same approach is used to collect unused versions of object classes.<P>
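For illustration, a minimal sketch of the coordinator's termination test follows,
assuming the reported vectors are collected into two square matrices indexed by
server number (the matrix layout and names are assumptions of this example):

<PRE>
/* sent[i][j]     - messages with external references sent by server i to server j
   received[i][j] - such messages received by server i from server j            */
bool mark_phase_finished(int n_servers, int** sent, int** received) {
    for (int i = 0; i &lt; n_servers; i += 1) {
        for (int j = 0; j &lt; n_servers; j += 1) {
            /* every message sent by server j to server i must have been received */
            if (received[i][j] != sent[j][i]) {
                return false;
            }
        }
    }
    return true;
}
</PRE>
<P>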

To prevent the deallocation of object instances which were just allocated
and not yet committed by some transaction, all such instances are marked
as "<DFN>black</DFN>" before the sweep phase. 
The GC process runs in the background without affecting the normal functioning
of the database. If an object is changed during GC, the difference between the sets
of references in the old and the new version of the object is calculated.
(To make this operation more efficient, the true set difference is not computed;
instead only references at nearby positions are compared. This covers
the most common cases: references are unchanged, or are shifted by an
insertion into or removal from an array.) A list of references from all old
versions of the object is kept until the end of GC. The <DFN>PRE-GC</DFN> state
is necessary to synchronize the moment when all servers start to keep information 
about old versions of objects, so GC at each server will see all 
references between objects which were available at the moment of GC 
initiation. If some objects are changed by a transaction, GC will
have no access to these objects until the transaction is committed.
Then all these objects are updated.<P>

Since different clients can have different versions of objects, the storage 
server should remember all versions of objects used by active clients. 
This information can be removed when all clients update their instances 
of an object and the object is scanned by GC (if GC was active when the object
was modified).<P> 

The sweep stage is performed incrementally, making it possible to 
allocate new objects in storage while the sweep process is active. 
A system crash can cause some leak of memory due to garbage in the memory 
allocation bitmap (for example space allocated for extended objects by
transactions which were active at the moment of the crash). 
Usually this leak of memory is very small (if any) and it is
more essential to recover the server as fast as possible.
It is possible to completely reconstruct the bitmap after recovery:
clear the bitmap and mark only the space used by objects present in the
object index.<P>
 
Allocation of object descriptors is performed using a list of free
object descriptors. Since a system fault can leave this list in 
an inconsistent state, special checks are used during all operations
on this list. If the list is corrupted, it will be completely rebuilt
at the sweep stage of the next GC process (normally the GC process
only appends deallocated descriptors to this list).<P>

Usually objects are allocated successively in the file (unless the bitmap is 
fragmented), so an object's body can cross a page boundary. Usually 
continuous allocation provides good performance, because objects which are
created together will with high probability be accessed together, so continuous 
allocation of objects reduces page thrashing. But for some classes of
objects it is preferable to place each object on a separate page. A B-tree
page is an example of such an object. Since the page size in GOODS 
is an application-dependent parameter which can be changed at any moment,  
the only thing we can ask the memory manager to do with such objects is to 
align their position in the file on an object-size boundary (more precisely,
on the power of 2 which is greater than or equal to the object size). By such
alignment we can guarantee that the object will never cross a page boundary
unless the page size is smaller than the object size.<P>
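A small sketch of this alignment rule (an assumed helper, not the GOODS memory
manager code):

<PRE>
#include &lt;stdint.h&gt;

/* Smallest power of two greater than or equal to size. */
uint64_t next_power_of_two(uint64_t size) {
    uint64_t p = 1;
    while (p &lt; size) p *= 2;
    return p;
}

/* Round the allocation position up to that boundary, so the object never
   crosses a page boundary unless the page is smaller than the object. */
uint64_t align_object_position(uint64_t position, uint64_t object_size) {
    uint64_t boundary = next_power_of_two(object_size);
    return (position + boundary - 1) &amp; ~(boundary - 1);
}
</PRE>
<P>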


<H3><A NAME = "transmgr">Transaction manager</A></H3>

The transaction manager is responsible for handling all database updates as 
atomic and recoverable actions. A transaction can be either local (only one 
server participates in the transaction) or global. 
In the latter case a two-stage transaction commit protocol is used to guarantee 
the consistency of the global database state. When a global transaction is committed,
one of the servers participating in this transaction (the one with the lowest
identifier) is chosen to be the coordinator. 
The coordinator assigns a global identifier to the transaction, writes the
identifier into the global transaction history file and sends the identifier
back to the client.
The client sends the local parts of the transaction to all servers participating 
in this transaction, together with the identifier of the coordinator and the
global identifier assigned to this transaction by the coordinator. The
coordinator then waits for responses from all servers participating in this
transaction (stage I). On receiving its part of the transaction, a server
writes it to the transaction log and sends a message to the coordinator,
reporting that it is ready for a global transaction commit. 
When the coordinator receives acknowledgements from all servers involved
in the transaction (stage II), the coordinator marks the transaction in the
global transaction history file as globally committed, sends messages to all 
servers to finish the transaction, flushes all transaction changes to the
database and sends a response to the client about the successfully committed
transaction.
On receiving such a message from the coordinator, every other server participating
in the transaction copies all objects modified by this transaction 
into the storage and releases the locks set by this transaction. 
So it is possible that the client
which initiated the transaction will receive a reply from the coordinator before
the transaction is completed at all servers involved in the transaction. But
because the transaction is committed by a client agent at the server, the
server will not handle any other requests from this client before transaction
completion.<P>

This protocol of transaction commitment has uncertainty periods: the
protocol might not be able to complete if the coordinator fails.
The participants must wait until the coordinator recovers 
before committing or aborting their local transactions. During
this time, the local data (accessed by a transaction) is locked to
ensure serializability. When a server is in an uncertain state it 
periodically sends a request to the coordinator to obtain the 
transaction status until a response is received.<P>

If at stage I the coordinator receives an "<DFN>aborted</DFN>" message 
from one of the servers, or if the timeout for a reply expires,
the coordinator decides that the transaction is globally 
aborted, marks this transaction in the global transaction history log
as aborted and sends abort messages to all servers participating in
the transaction and to the client which initiated this transaction.<P>

During recovery from a system fault, the server reads the transaction log
and checks whether each transaction is local or global. If the transaction
is global, the server asks the coordinator of this transaction for the
status of this transaction. The coordinator reads the global transaction
history log to check the status of the global transaction.
If the transaction is marked as globally committed, the coordinator sends
a "<DFN>committed</DFN>" reply to the server. The server performs recovery
of its local part of the transaction. 
Otherwise, if the transaction is marked as "<DFN>aborted</DFN>"
or if there is no entry with the global transaction identifier in the
global transaction log file, then the coordinator responds to the 
server with an "<DFN>aborted</DFN>" message and the server skips this
transaction in its log. 
If the coordinator is not available at this moment, the server has to wait until
the coordinator is restarted.<P>

If one of the participants fails after saving its local part of the transaction
in the log but before sending a "<DFN>ready-to-commit</DFN>" message to the
coordinator, the participant will ask the coordinator for the transaction 
status while recovering. If the coordinator is still waiting for acknowledgement
of transaction commitment from the servers participating in the transaction
(the timeout has not yet expired), it can notice that the request refers to an
active transaction. In this case the coordinator can treat the
"<DFN>get-transaction-status</DFN>" 
request as an acknowledgement from this server to commit the transaction
(the server asks for the transaction status only when it has successfully
saved its local part of the transaction in the log and is now ready to recover
it).
So the coordinator doesn't send a reply to this request immediately but instead
continues processing the transaction. A reply will be sent to the participant
later, when the transaction is either committed or aborted.<P>

When objects are restored from the transaction log after a crash, we should
be careful with "zombie" objects - objects which have already been deallocated
by the GC process. The space of these objects can be reused for newly
created objects. When we restore such a "zombie" object from the log, it will
overwrite a new object allocated in the same memory space. That by itself is not
a problem - the new object is also present in the log and will be restored
later. But now we have several object descriptors referring to the same
memory location. Since there should be no references to the old
object (which was once collected by GC), it will be deallocated during
the next GC iteration and the space will be freed. But then we also
deallocate the space of the new object, which may still be accessible!<P>

To avoid such dangerous behavior, we keep track of the address ranges
occupied by all restored objects during recovery.
If the ranges of two objects overlap, then the older one is removed
(the space used by this object is deallocated and its descriptor is inserted 
into the list of free descriptors). We use a special kind of binary
tree to find overlapping regions efficiently (the log can contain a large
number of objects and a naive algorithm with O(N^2) complexity is not
acceptable). Each level of this tree represents a range which
is a power of 2. The depth of this tree is limited to 64 (an offset
in the database file is 8 bytes long). So the complexity of checking the ranges
of all restored objects is O(N).<P>
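As a simpler illustration of the same consistency check (not the power-of-two
tree actually used by GOODS), the restored objects can be sorted by file offset
and swept once, which is O(N log N); the structures and names below are
assumptions of this example:

<PRE>
#include &lt;stdint.h&gt;
#include &lt;algorithm&gt;
#include &lt;vector&gt;

struct restored_object {
    uint64_t offset;    /* position of the object in the storage file        */
    uint64_t size;      /* size of the object in bytes                       */
    uint64_t log_pos;   /* position in the transaction log (larger = newer)  */
};

bool starts_before(const restored_object&amp; a, const restored_object&amp; b) {
    return a.offset &lt; b.offset;
}

/* Returns indices (in the sorted order) of objects whose range overlaps the
   range of another restored object; the older of the two has to be removed. */
std::vector&lt;size_t&gt; find_overlapping(std::vector&lt;restored_object&gt;&amp; objs) {
    std::sort(objs.begin(), objs.end(), starts_before);
    std::vector&lt;size_t&gt; result;
    size_t cover = 0;   /* object reaching farthest to the right so far */
    for (size_t i = 1; i &lt; objs.size(); i += 1) {
        if (objs[i].offset &lt; objs[cover].offset + objs[cover].size) {
            /* the two ranges overlap: report the older object */
            result.push_back(objs[cover].log_pos &lt; objs[i].log_pos ? cover : i);
        }
        if (objs[i].offset + objs[i].size &gt;
            objs[cover].offset + objs[cover].size) {
            cover = i;
        }
    }
    return result;
}
</PRE>
<P>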

When a storage is recovered after a crash, the file memory allocation
bitmap is cleared and all used objects (addressed by the object index)
are marked in this bitmap. If some of the bitmap bits (corresponding
to the object's space) are already marked and the object is not one
recovered from the transaction log, we have an inconsistency in the
object index. This inconsistency can be caused by the garbage collector, which
freed an unreferenced object, reused its space for some newly allocated
object, but did not flush the index page with this descriptor to the disk
before the crash. Fortunately, if one of two overlapping objects is really
accessible from the storage root, then it should be found in the transaction
log, which means it has already been recovered. In this case we can just remove
the descriptor of the second object. If neither object was recovered from
the transaction log, then both of them are inaccessible and we can safely
remove one of them to reestablish the consistency of the object index. 
<P>

When the size of the transaction log exceeds some limit, a checkpoint
process is started by the transaction manager. All modified pages and file
buffers are flushed to the disks, the log is truncated and the logging
process starts again from the beginning of the file.
A new sequence number is written at the beginning of 
the log. This sequence number is incremented after each checkpoint and
database close operation, and is included in each 
transaction record written to the log. 
Only records whose sequence number is equal to the sequence
number of the log will be restored during recovery after a fault. When the
checkpoint process has completed, the server sends messages to all possible 
coordinators of global transactions in which this server can be involved 
(servers with identifiers less than the identifier of this server), informing
them about checkpoint completion. When receiving such a message from a server i,
the coordinator can remove from the global transaction log all entries relevant
to this server with a sequence number less than the current sequence number
of server i minus one. Subtracting one is necessary because transactions 
with these sequence numbers can be saved by backup and retrieved by 
restore procedures.<P> 

The transaction manager is also responsible for managing online backup
procedures. Two types of backups are supported by the GOODS server: 
snapshot backup and permanent backup. A snapshot backup produces a 
snapshot of the storage state. If a disk failure happens, it is then  
possible to restore a locally consistent state of the storage for the
time of the backup completion (if global transactions are used, the
global state of the database cannot be guaranteed to be consistent, because a
backup is local to the storage server: snapshots at different servers are not
synchronized).
A permanent backup process can be used to guarantee global database
recovery at any moment of time.
It is assumed that log files cannot be corrupted (they can be placed on
some reliable media - for example a RAID disk array). When the backup
process is active, checkpoints are delayed until the end of the backup.
To avoid delays in database functioning due to waiting for backup 
completion, the log size limit should be set to a value which
guarantees that the backup can be completed before this limit 
is reached.  A backup can be started by triggering one of two conditions: 
a timeout signal is received or the size of the log becomes greater than some specified value. 
The backup process first saves the current contents of the storage file,
the storage memory allocation bitmap and the object descriptor table.
Then it copies to the backup file all records from the transaction log 
until it reaches the last one (records can be added to the transaction
log while the backup is in progress, so synchronization is required).
When the last record is reached, the backup process is complete. Now the
checkpoint procedure is invoked to truncate the transaction log. 
After backup completion, the database operator has to switch the backup media 
(tape) and make a schedule for the next backup: specify the timeout
and/or log size for the next backup.<P>

The difference between a permanent backup and a snapshot backup is that
the snapshot backup saves only that part of the transaction log file
which had been written at the moment when the backup completed copying
the storage file and the memory manager tables. The backup should save only
information about transactions completed before this moment, to 
provide a consistent snapshot of the storage. The checkpoint procedure is
not forced by a snapshot backup.<P>

The operation of writing the transaction into the log should be performed 
synchronously, to guarantee that after the completion of this operation
it will be possible to recover the transaction in case of a system fault.
The total throughput of a database system can be significantly increased
if separate synchronous writes are combined into a single write operation. 
The performance can be increased further if the size and position of the data 
written are aligned to the operating system's disk block size. In this case
the operating system does not have to read the previous contents of a modified
block from the disk. 
This algorithm is encapsulated in the "<CODE>file</CODE>" class.
This abstract class provides operating system independent access to
files, guaranteeing the correctness of concurrent access operations to the file. 
Implementations of this class for different operating systems use 
system-specific advanced calls (such as <CODE>writev</CODE> for Unix
or overlapped IO for Windows-NT) 
to achieve maximal performance. To provide merging of
concurrent synchronous write requests, the implementation of the class
<CODE>file</CODE>
uses two buffers in a cyclic way. While one buffer is being written to the disk,
all other write requests are collected in the other buffer. When the write 
operation for the first buffer is completed, the buffers change roles:
all write requests are now collected in the first buffer. 
The size of the buffer is aligned to the size of the operating system's disk
block, so there is no overhead of reading the contents of the modified block.  
When gathered IO functions (such as <CODE>writev</CODE>) are used,
it is possible to avoid copying the written data to the buffer; instead 
a vector of segments can be written to disk.<P> 
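The following simplified sketch shows the double-buffering idea using standard
C++ threading primitives instead of the GOODS system abstraction layer (class
and member names are assumptions of this example, and the actual disk write is
left as a stub):

<PRE>
#include &lt;condition_variable&gt;
#include &lt;cstddef&gt;
#include &lt;mutex&gt;
#include &lt;vector&gt;

class log_writer {
    std::mutex              mtx;
    std::condition_variable flushed;
    std::vector&lt;char&gt;       buf[2];     /* two buffers used in a cyclic way       */
    long                    epoch[2];   /* how many times each buffer was flushed */
    int                     current;    /* buffer currently collecting requests   */
    bool                    writing;    /* true while a buffer is being written   */

    void disk_write(std::vector&lt;char&gt;&amp; data) {
        /* synchronous, block-aligned write of 'data' would go here */
        (void)data;
    }

  public:
    log_writer() : current(0), writing(false) { epoch[0] = epoch[1] = 0; }

    /* Append a record and return only when the buffer holding it is on disk.
       Records appended by concurrent tasks end up in one combined disk write. */
    void write(const char* data, size_t size) {
        std::unique_lock&lt;std::mutex&gt; lock(mtx);
        int  my_buf   = current;
        long my_epoch = epoch[my_buf];
        buf[my_buf].insert(buf[my_buf].end(), data, data + size);
        while (epoch[my_buf] == my_epoch) {     /* until our buffer is flushed */
            if (writing) {                      /* somebody else is writing    */
                flushed.wait(lock);
            } else {                            /* become the writer ourselves */
                writing = true;
                int flushing = current;
                current = 1 - current;          /* new requests go to the other buffer */
                lock.unlock();
                disk_write(buf[flushing]);
                lock.lock();
                buf[flushing].clear();
                epoch[flushing] += 1;
                writing = false;
                flushed.notify_all();
            }
        }
    }
};
</PRE>
<P>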

It is possible in GOODS to switch off synchronous writes to the transaction 
logs (using normal buffered write requests) if the operating system
can guarantee that all modified buffers will be flushed to
disk before a system halt. That is possible if a UPS is used to
prevent power faults and the operating system itself is considered to
be reliable enough. The performance can be dramatically increased in this case.<P>

<H3><A NAME = "poolmgr">Page pool manager</A></H3>

The page pool manager provides efficient access to storage files.
To improve IO operation performance, the page pool is used for caching
the most recently used pages and buffering IO requests. A special
synchronization policy allows different clients to perform read/write
operations in parallel. The algorithm is similar to the one used by operating
systems to control the access to file buffers. Each page has a <CODE>BUSY</CODE>
and a <CODE>WAIT</CODE> flag and an event object on which tasks, accessing 
this page, can sleep.
While a page is being read from the file, the <CODE>BUSY</CODE> flag is set, so
any other task trying to access this page will set the <CODE>WAIT</CODE> flag
and sleep on the corresponding event object. After the read operation completes,
the <CODE>WAIT</CODE> flag is checked.
If it is set, then the event object is signaled to wake up all waiting tasks.
While the page is being used by some tasks (they copy data to or from it),
the <CODE>USED</CODE> counter in the page header is non-zero. This counter
prevents the page replacement algorithm from evicting this page from the cache.<P>
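A sketch of this access protocol using standard C++ primitives in place of the
GOODS task library (flag and member names are assumptions of this example, and
the actual disk read is left as a comment):

<PRE>
#include &lt;condition_variable&gt;
#include &lt;mutex&gt;

struct page {
    bool busy;     /* a task is currently reading the page from the file  */
    bool wait;     /* some task sleeps until the read operation completes */
    int  used;     /* pin counter: pages with used != 0 are never evicted */
    std::condition_variable ready;
};

/* 'pool_mutex' protects the whole page pool. */
void pin_page(page&amp; pg, std::mutex&amp; pool_mutex, bool page_in_memory) {
    std::unique_lock&lt;std::mutex&gt; lock(pool_mutex);
    while (pg.busy) {              /* another task is reading this page      */
        pg.wait = true;
        pg.ready.wait(lock);
        page_in_memory = true;     /* the other task has loaded it for us    */
    }
    if (!page_in_memory) {
        pg.busy = true;
        lock.unlock();
        /* read the page from the storage file here, without holding the lock */
        lock.lock();
        pg.busy = false;
        if (pg.wait) {             /* wake up all tasks sleeping on this page */
            pg.wait = false;
            pg.ready.notify_all();
        }
    }
    pg.used += 1;                  /* pin the page while its data is copied   */
}
</PRE>
<P>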

The storage page size is an application dependent parameter and can be changed
without affecting the storage data file. By default, the operating system page
size is used as a value for this parameter.<P>

<H3><A NAME = "objmgr">Object access manager</A></H3>

The object access manager controls the locking of objects, maintains information
about instances of objects loaded by clients, sends notifications to clients
when an object is modified and synchronizes the access to objects by different 
components of the storage server.<P>

Two kinds of locks are supported: exclusive and shared. Only one process can 
lock an object in exclusive mode; several processes can hold it in shared mode
simultaneously. Lock requests can be either blocking or non-blocking.
In the first case, the process requesting the lock will be 
blocked until the requested lock is granted.  
In case of a non-blocking request, if granting the lock is impossible, a
negative answer is returned immediately. In GOODS the "<DFN>honest policy</DFN>"
of granting locks is used: a lock requested first will be granted first. 
So even if a requested lock is compatible with the current locks set on the object, 
this request can be blocked because there are already other blocked lock requests  
for this object. There is only one exception to this rule: a shared lock
can be upgraded to an exclusive lock despite the presence of other
blocked requests.<P>
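A minimal sketch of this granting rule (the structures and names are assumptions
of this example, not the GOODS object access manager):

<PRE>
enum lock_mode { lck_shared, lck_exclusive };

struct lock_state {
    int  n_shared;     /* number of processes holding a shared lock       */
    bool exclusive;    /* true if an exclusive lock is currently held     */
    int  n_blocked;    /* number of earlier requests waiting in the queue */
};

/* A request is granted only if it is compatible with the current locks and
   no earlier request is blocked ("honest", i.e. first-come-first-served).
   The single exception: the owner of a shared lock may upgrade to an
   exclusive lock ahead of the queue once it is the only remaining reader. */
bool can_grant(const lock_state&amp; s, lock_mode requested, bool is_upgrade) {
    if (is_upgrade) {
        return requested == lck_exclusive &amp;&amp; !s.exclusive &amp;&amp; s.n_shared == 1;
    }
    if (s.n_blocked &gt; 0) {
        return false;                            /* respect the request order */
    }
    if (requested == lck_shared) {
        return !s.exclusive;
    }
    return !s.exclusive &amp;&amp; s.n_shared == 0;      /* exclusive request */
}
</PRE>
<P>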

Since different components of the storage server can access the same object
simultaneously, some synchronization mechanism is needed. 
The object access manager provides methods for two types of object accesses: 
reading and writing, implementing the "<DFN>multiple readers single writer</DFN>" 
discipline.<P>

The object access manager maintains for each object a list of object instances 
loaded by client processes. When an object is changed, this list is scanned
and to all processes having an instance of this object (except the process
which has modified this object) a notification message is sent. 
For correct garbage collection it is necessary to know about 
all objects and references from these objects present in client processes.
Since the object access manager has this information, the garbage collector
process requests information about all such references from the object
access manager during the mark phase and before the sweep phase.<P>

Since a client can have a deteriorated version of an object and can follow
references from this object instance, it is necessary to prevent the 
garbage collector from deallocating an object which is referenced only from
deteriorated instances (so that no more references to this object are present in
the database). When an object is modified, the object access manager compares
the set of references in the old version with the one in the new version of the object.
All references in the old version which are not present in the new version are 
inserted into an "<CODE>old-reference-list</CODE>", examined by the garbage
collector. The references are kept in this list until all clients have
updated their instances of this object.<P>

<H3><A NAME = "classmgr">Class information manager</A></H3>

GOODS uses an "<DFN>active-client</DFN>" model of client-server interaction.
The client applications are responsible for programming the application logic,
whereas the servers are responsible only for fetching and storing objects
and synchronizing the access to them. The server does not need to know too
much about the contents of objects. It views an object as an array of
references and raw data: knowledge about object references is necessary to
perform garbage collection. The object is stored by the server in a system
independent format (big-endian byte order, 6-byte references
containing the object's storage identifier and the number of the object within the storage).
To simplify the server's work with objects and to increase server
performance, objects are stored by the server in a normalized layout: all references are
placed before any other data.<P> 
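For illustration, a sketch of packing a reference into this stored format follows;
the split of the 6 bytes into a 2-byte storage identifier and a 4-byte object
number is an assumption of this example:

<PRE>
#include &lt;stdint.h&gt;

void pack_reference(uint8_t dst[6], uint16_t storage_id, uint32_t object_no) {
    dst[0] = (uint8_t)(storage_id &gt;&gt; 8);   /* big-endian byte order */
    dst[1] = (uint8_t)(storage_id);
    dst[2] = (uint8_t)(object_no &gt;&gt; 24);
    dst[3] = (uint8_t)(object_no &gt;&gt; 16);
    dst[4] = (uint8_t)(object_no &gt;&gt; 8);
    dst[5] = (uint8_t)(object_no);
}
</PRE>
<P>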

The GOODS server stores information about classes of all objects allocated in 
this storage. The class information includes the class name,
the size of the fixed and varying parts of the class, the number of references
in the fixed and varying parts, and information about each object field:
name, type and offset within the object. 
This information is used by the server only to calculate the
number of references in an object while doing garbage collection
and when building the closure of objects requested by the client. 
Storing complete information about the class definition is essential for
the server to allow it to work with clients which have different database schemas
(class definitions). GOODS allows the online modification of a database schema
while clients work with the database. Moreover,
one client can use the old version of a class definition while another uses a new one. 
Such a lazy object conversion strategy is really necessary for a
system with a lot of different client applications and 24x7
operation.<P>

Before requesting an object from the server, the client looks at its class identifier.
If the client has already loaded an object of this class, then the mapping
between the database class and the local application class is already established.
Otherwise the client sends the server a request for
information about the class with this identifier. The server responds
with the full class description. The client then looks for a
local application class with the same name. If no such class is found,
the application is abnormally terminated. Otherwise the client compares
the signatures of the storage class and the application class. If the signatures
are equal, the client establishes a new mapping between the storage class
identifier and the application class.<P>

If the signatures are not equal, then it is assumed that the client
has a new version of the class. A special class descriptor for converting
objects from the old representation to the new one is then constructed. The conversion
procedure looks at the definition of each component in the old class description
and tries to find a field with the same name in the new class description.
If a field with the same name is present in both class definitions, the
conversion procedure checks whether a conversion between the old type of the
field and the new type is possible. The only restriction on type conversion 
compatibility is that a reference type can only be assigned to a reference type.
All other kinds of conversion (<CODE>real-&gt;integer, integer-&gt;real,
fixed array of scalars-&gt;scalar, varying array-&gt;fixed array...</CODE>)
are possible (certainly some of these conversions can cause loss of data).
Then the client sends the new signature of the class to the server and receives the
new storage class identifier assigned to this class descriptor by the server.
When an object of such a class is loaded from the server, it is converted to the
new class format and the new storage class identifier is assigned to this object.
If such an object is modified and sent back to the storage, it
will be saved in the new class format. So a lazy approach is used to
perform object conversion when the class definition has been changed.<P>
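As an illustration of how such a conversion descriptor can be built by matching
fields by name (the structures below are assumptions of this example, not the
actual GOODS class descriptor implementation):

<PRE>
#include &lt;cstddef&gt;
#include &lt;string&gt;
#include &lt;vector&gt;

struct field_descriptor {
    std::string name;      /* field name                        */
    int         type;      /* encoded field type                */
    size_t      offset;    /* offset of the field in the object */
};

struct field_mapping {
    size_t old_offset;
    size_t new_offset;
    int    old_type;       /* conversion between the two types  */
    int    new_type;       /* is checked separately             */
};

std::vector&lt;field_mapping&gt; build_conversion(
        const std::vector&lt;field_descriptor&gt;&amp; old_fields,
        const std::vector&lt;field_descriptor&gt;&amp; new_fields)
{
    std::vector&lt;field_mapping&gt; mapping;
    for (size_t i = 0; i &lt; old_fields.size(); i += 1) {
        for (size_t j = 0; j &lt; new_fields.size(); j += 1) {
            if (old_fields[i].name == new_fields[j].name) {
                field_mapping m = { old_fields[i].offset, new_fields[j].offset,
                                    old_fields[i].type,   new_fields[j].type };
                mapping.push_back(m);
                break;
            }
        }
    }
    return mapping;
}
</PRE>
<P>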

When a client is going to store an object in the storage (as a result
of making a reference to this object from another persistent object),
it checks whether this object has a valid storage class identifier.
If none exists, the client sends to the server a description of the object class
and waits for a response containing the  storage class identifier assigned to
this class. When the server receives such a request from the client, it
looks through the definitions of all classes in the storage. If a class with
the same signature is found, the server replies with its identifier.
Otherwise a new class definition is added to the storage and a new assigned 
storage class identifier is returned to the client.<P>

Since there can be a lot of object class modifications, a garbage
collection procedure is also necessary to collect unused classes.
During the mark stage of the garbage collector, classes of all
"<DFN>black</DFN>" objects are also marked. At the sweep stage the
class information manager is asked to remove all unmarked classes.
A class may already be registered by an application while no object instances of
this class have been created yet; in this case the deletion of this class definition 
is delayed until all client processes working with this 
class descriptor have terminated.
To implement this strategy of delayed class deallocation, all clients
are assigned successive numbers as identifiers at the time of connection to
the server. Each class descriptor stores the maximal identifier of a client which
has accessed this class. When a client sends a request to provide a class identifier
for a concrete storage class descriptor, the identifier of this client is compared 
with the client identifier stored in this class descriptor; if the former is bigger,
it replaces the client identifier stored in this class descriptor.
A class descriptor can be removed when there are no object instances of this 
class in the storage and there are no more clients with identifiers less than or equal
to the client identifier stored in this class descriptor.<P>

Class descriptors are stored in the storage like other objects, but a special range
of object identifiers is used for class descriptor objects. In GOODS a
class storage identifier is stored in one word (two bytes) of the object
descriptor, so the maximal number of classes in the storage is limited
to 2^16 = 65536. The first 2^16 entries in the object descriptor table
are reserved for class descriptors. Creating new class descriptors
and updating existing ones is performed using the standard transaction 
mechanisms. To improve the performance of class descriptor lookup, the class 
information manager maintains an array of copies of all class descriptors in 
memory and uses a hash table for fast search by class
name.<P>


<H3><A NAME = "sendobject">Optimization of object loading</A></H3>

When a client application accesses a persistent object which is not present
in the client cache, the database interface sends a request to load the object to
the server where this object is located. Since transmission of data through a
network is a relatively slow operation and, moreover, only a small portion of the
total time is used to transfer the data itself (there is not much
difference in time between sending 1 byte and 100 bytes of data), there are
two ways of increasing the application performance. Either one can increase
the size of the object cache and improve the cache management algorithms
(we will speak about this approach later), or we can predict which objects
will be retrieved by the client and include in the reply not only the requested
object, but all objects which will with high probability be requested by the client in the near
future: when the client accesses such an object, the client database interface finds 
that the object is already present in the object cache and no request to the server is
necessary. Certainly it is a very difficult task to predict which objects 
will be retrieved by the application in the near future. It seems that no single best
solution exists for this problem.<P>

There are several main approaches
to determining which set of objects should be sent to the client.
One alternative is to ask the programmer to explicitly specify clusters
of objects. If one object from a cluster is requested, then all
objects from this cluster are transmitted to the client. The disadvantage of this
approach is that it violates the transparency of manipulating persistent
objects. Another approach is to maintain statistics about the objects requested
by clients. If the server knows that after requesting object A
the client will with high probability request object B, then object B 
can be sent to the client together with object A. The disadvantage of this approach 
is that it consumes a lot of server memory and CPU time. While this
approach may provide very good results for some applications, it can
be inefficient for other applications. 
The third approach is to try to construct some kind of object closure
which includes all (or some of the) objects referenced
from the requested object. The size of the closure should be limited to
minimize the increase of network traffic due to the transfer of useless data. <P>

The GOODS server currently uses the third approach (certainly other 
approaches can easily be implemented and tested by redefining one 
method). When an object is requested by the client, the server includes in its
reply all objects which satisfy the three following conditions:

<OL>
<LI> the object is directly accessible from the requested object,
<LI> the client doesn't have this object,
<LI> the total size of all objects in the reply is limited to some  
     predefined constant.
</OL><P>
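A sketch of this rule follows (the names and lookup callbacks are assumptions of
this example, not the actual server code):

<PRE>
#include &lt;cstddef&gt;
#include &lt;vector&gt;

struct object_ref { int storage_id; long object_no; };

struct stored_object {
    size_t size;                        /* size of the object body        */
    std::vector&lt;object_ref&gt; refs;       /* objects it references directly */
};

/* 'client_has' and 'object_size' stand for lookups in the per-client
   instance table and in the storage respectively. */
std::vector&lt;object_ref&gt; build_reply(const stored_object&amp; requested,
                                    bool   (*client_has)(object_ref),
                                    size_t (*object_size)(object_ref),
                                    size_t size_limit)
{
    std::vector&lt;object_ref&gt; reply;
    size_t total = requested.size;
    for (size_t i = 0; i &lt; requested.refs.size(); i += 1) {
        object_ref r = requested.refs[i];
        if (client_has(r)) continue;                      /* condition 2 */
        if (total + object_size(r) &gt; size_limit) break;   /* condition 3 */
        reply.push_back(r);                 /* condition 1: direct reference */
        total += object_size(r);
    }
    return reply;
}
</PRE>
<P>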

<H2><A NAME = "replication">Replication support</A></H2>

Starting from version 2.60, GOODS supports database replication.
This feature can be used if your application is mission critical and requires fault tolerance.
GOODS supports a master-slave replication model with an active primary node and a passive standby node.
Replication is integrated into the GOODS transaction mechanism. All transactions executed at the primary server
are replicated to the standby server. When the standby server detects a failure of the primary server, 
it performs recovery and continues functioning as the new primary server. Clients connected to the crashed
primary server need to reestablish the connection to the new server; they will then see the database
in the same state as it was before the crash of the primary server.<P>

To enable replication mode, you should specify the <code>transmgr.replication_node</code> parameter
in the GOODS configuration file (<code>goodsrv.cfg</code>). When a GOODS server is started and 
<code>transmgr.replication_node</code> is specified, it tries to connect to the specified node.
<UL>
<LI>If the connection is established, then the started server becomes the standby node and synchronizes its state with the
primary node. The standby node is blocked in the database open method. In case of a primary node crash, 
the standby node performs recovery and successfully returns from the open method. The server then continues normal 
functioning, serving clients. If the primary node is terminated normally, the open method returns <code>False</code>
at the standby node.  
<LI>If nobody is listening on the specified port at the specified address, then the started node is considered
to be the primary node and is ready to accept connections from standby nodes. It continues normal
functioning and serving of clients. There is only one replication-specific difference: it doesn't truncate the
log during checkpoints until a standby node becomes active and synchronizes its state with the primary node.
</UL><P>

Both the standby and the primary node store transaction records in their transaction logs.
To improve performance, it is a good idea to set <code>transmgr.sync_log_writes</code> to <code>0</code>,
disabling synchronous writes to the log. Since a transaction is transferred to the standby node and can be recovered
from the standby node in case of a primary node fault, synchronous writes to the primary node's transaction log are not 
needed. Since a synchronous write is a very expensive operation, disabling them can significantly improve 
performance. Using transaction logs both at the standby and at the primary node side allows minimizing the
amount of data which needs to be transferred from the primary node to the standby node during recovery.
Most of the transactions will be recovered from the local transaction log, and only those transactions
which were not flushed to the disk before the failure, or which were committed by the new primary node
after the crash, have to be sent from the primary node to the recovered standby node during synchronization
of the standby node's state.<P>

When you are running GOODS in replication mode, you should start two GOODS servers. The first one started
becomes the primary node. The second server (which should be started with some delay after the start
of the first server) will connect to the primary node and become the standby node. After a crash of the primary server, 
you should restart it as soon as possible. It will connect to the active server, become the standby server
and synchronize its state with the primary node. If the primary node has to be terminated and the standby node is not 
available, then the primary node doesn't perform a checkpoint. When you start the database the next time, 
you should first launch the GOODS server at this node (the node which was primary before termination). It will perform
recovery (even though it was terminated normally) and is able to restore the state of the standby node once it is 
started and connected to this node.<P>

When the <code>transmgr.replication_node</code> parameter is specified but no standby node is connected, 
the primary node doesn't perform checkpoints and doesn't truncate the transaction log. The transaction log is needed to be able 
to perform recovery of the standby node once it is connected to the primary node. This means that the size
of the transaction log can become very large if there are a lot of transaction commits and the
standby node is not available for a long time. So you need to plan your disk resources in such a way as to
avoid transaction log overflow in this situation.<P>

When the standby node detects a primary node failure (the connection with the primary node is broken), it invokes the
virtual method <code>database::on_replication_master_crash()</code>. A programmer can override this method
in a class derived from <code>database</code> to provide some application-specific handling of the primary
node failure. For example, it can change the IP address of the standby node to be the same as that of the primary node.
Then the clients need not know about primary and standby node addresses - they will just reconnect to the 
same address and will work with the new primary server.<P>
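For example, such a handler could be sketched as follows (the exact signature of
the virtual method and the takeover action are assumptions of this example;
consult the declaration of the <code>database</code> class for the real prototype):

<PRE>
#include &lt;stdio.h&gt;
/* the GOODS client header declaring class 'database' is assumed to be included */

class my_database : public database {
  public:
    virtual void on_replication_master_crash() {
        /* Application specific reaction to the primary node failure, for
           example taking over the primary node's IP address so that the
           clients can reconnect to the same address. */
        fprintf(stderr, "primary node failed, standby node takes over\n");
    }
};
</PRE>
<P>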

Currently only replication of the single storage database is supported. 
Replication will not work for distributed transactions in the database consisting of two
or more storages.<P>

The mechanism described above can be used for fault tolerance (recovery in case
of a server fault), but it doesn't handle corruption of data at one of the servers and doesn't
help to improve the scalability of GOODS. That is why an alternative mechanism of data 
replication was also implemented in GOODS: the transaction monitor.
The transaction monitor replicates the requests of the clients connected to it
to two or more GOODS servers. So each server receives the same set of requests
in the same order. If the server parameters are the same, then the contents of the storages of
all these servers should also be the same at any moment in time.<P>

Besides replicating client requests to all servers, the transaction monitor also 
collects the responses from all servers and compares them. If one of the responses is different
from all others (the number of online servers should be not less than 3), then the data at this server
is assumed to be corrupted and it is disconnected. So the described mechanism makes it possible to verify the
data stored by the servers.<P>

The same mechanism can be used to improve the scalability of GOODS for
applications accessing the database in read-only mode (which is true for most Web applications)
by running two or more servers and connecting read-only clients directly to one of these servers.
The transaction monitor can then be used to propagate changes among all these servers.
So all clients that need to update the database have to access it through the transaction monitor, while
read-only clients can access one of the servers directly. As a result, adding more servers can help to 
serve more (read-only) clients. If an application is mostly browsing data, but sometimes needs to 
insert/update data in the database (typical example: an internet shop), it should manage two connections
to the database: one directly to one of the database servers to browse the database and one through the 
transaction monitor - to update the database. You can ask why it is not possible to
use the transaction monitor in both cases and, instead of replicating a read request to all servers 
and comparing the received responses, just send the read request to the least loaded server?
Well, I can answer this question:
<OL>
<LI>If all clients are connected through the transaction monitor, then the transaction monitor can actually 
become the bottleneck - it will limit the performance and scalability of the system.
<LI>If a read request is sent to only one server, then it is not possible to detect data corruption 
(and some applications do not need scalability but have very high demands on data authenticity).
<LI>If a read request is sent to only one server, and this server then crashes, 
other servers will not know that the client has retrieved an instance of this object and will not send 
data deterioration notifications to it.
</OL><P>

To make this mechanism work, each server must function in a deterministic way.
The only process which can cause differences in the behavior of two servers receiving the same 
sequence of requests is the background garbage collector. It can complete at one server faster than at 
another, and in this case the result of a subsequent allocation request will be different 
at these servers (the server completing GC first can reuse a deallocated OID). In this
case the contents of the objects can become different at the two servers. To prevent this,
you should set the <code>memmgr.gc_background</code> parameter to <code>0</code>.
You should also not specify any time-based scheduling parameters for periodical GC initiation
(<code>memmgr.gc_init_timeout</code>, <code>memmgr.gc_init_idle_period</code>). A minimal
configuration fragment is sketched below.<P>
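The fragment below illustrates this for <code>goodsrv.cfg</code>; the
one-parameter-per-line "name value" layout is an assumption of this example,
so check the database configuration file section for the exact syntax:

<PRE>
memmgr.gc_background 0
</PRE>

The time-based parameters <code>memmgr.gc_init_timeout</code> and
<code>memmgr.gc_init_idle_period</code> are simply left unset, so GC is never
initiated on a timer.<P>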

The transaction monitor accepts client connections only via IP sockets (it uses <code>select</code>
for this). So, if you are running clients on the same computer as the transaction monitor, 
you should specify the server address in the configuration file as the direct IP address of the computer 
or the computer name, but not <code>localhost</code>.<P>



<H2><A NAME = "api">Application interface to the database</A></H2>

The application interface to the database is divided into two parts:
the application dependent and the application independent. The application
independent part is represented by the abstract class 
"<CODE>dbs_storage</CODE>". This class
contains a reference to the abstract class "<CODE>dbs_application</CODE>"
which declares 
methods handling database requests to applications (notification about object 
modification, disconnect handler). The class "<CODE>dbs_storage</CODE>"
declares methods
for fetching, locking and unlocking objects, for retrieving and storing
class descriptors, for manipulations with transactions,...
Implementations of this class are responsible for sending requests to 
the server and receiving replies and notifications from the server. 
This class is not responsible for synchronizing the access to the storage
and for allocation, unpacking and deallocation of object instances in 
client applications. Instead the implementation of 
"<CODE>dbs_storage</CODE>" just places an  object instance or
class descriptor loaded from the server
in a specially supplied buffer; subsequent operations on this buffer are
performed by the application-dependent part of the database interface.
The application dependent part of the database interface implements the
"<CODE>dbs_application</CODE>" protocol; it contains methods accessing
"<CODE>dbs_storage</CODE>" and providing the
synchronization of storage accesses as well as object packing and unpacking.  
Also handling of storage notifications is performed by methods defined in
this part of the database interface.<P>

The database interface is organized in such a way that it is possible 
for different applications written in different languages to
work with the same database. In the current version of GOODS only
the interface for C++ is provided, because C++ is currently the most
popular object oriented language and also produces highly efficient
executables. An interface to the Java language is planned in the near future.<P>

Different applications may have very different requirements for the
database system. For example, for a banking application a standard serializable
transaction model may be the most appropriate one. Applications in such a system 
should have the illusion of exclusive work with the database: a transaction should
not see changes made by transactions of other clients. 
This is the isolation model of database organization. 
But for an office document flow control system a
cooperation model may be preferable. In this model applications
cooperate with each other and changes made by one application should 
be visible to other applications.<P>

Since it is not possible to incorporate all possible strategies in a
single database system, the generic (or universal) object oriented database
should have facilities to extend its functionality and behavior
depending on application requirements.<P>

Modern programming systems have to deal with a lot of different aspects: 
user interface, memory management, multitasking, database access,
security... The traditional approach of dividing an application into modules 
will not work if each module is responsible for all of these aspects. 
If, for example, the logic of synchronizing access to objects is
scattered throughout the application, it will be very difficult
to understand such code, debug it or modify it. 
To make the application clearer, simpler and more extensible, it is
a good idea to separate the code responsible for the different aspects
of system behavior. These separated aspects can be combined 
together at compilation time (or at runtime) using a metaobject protocol
approach. In the GOODS interface for client applications, the following
aspects of the system are implemented by metaobjects:

<UL>
<LI> <A HREF = "#isync">Intertask synchronization of object access</A>
<LI> <A HREF = "#esync">
Synchronization of object access by different database clients</A>
<LI> <A HREF = "#trans">Handling of database transactions</A>
<LI> <A HREF = "#lru">Management of the client's object cache</A>
</UL><P>

Let us look at these aspects more closely:<P>

<H5><A NAME = "isync">Intertask synchronization of object access</A></H5>
Modern applications very often have to deal with a lot of concurrent jobs:
handling user interfaces, retrieving data from the database, performing some
background recalculations... It is very difficult to implement such a system 
without using multitasking. Multitasking can be either explicit or implicit.
Implicit multitasking is more transparent for the programmer (for example,
each method invocation can be considered as a separate thread of control),
but it is not as flexible as explicit multitasking. And since aspects of
its implementation are hidden from the programmer, some kind of unexpected
program behavior can take place. Thus we decided to use in GOODS 
the explicit multitasking model, where the programmer has to create and
synchronize tasks explicitly using synchronization objects such as
mutexes, events and semaphores. Synchronization of access to an object by 
different tasks forms a separate aspect of system behavior which is 
implemented by means of the metaobject protocol.<P>

The default policy used in GOODS for intertask
object access synchronization is the mutual exclusion model. Only one 
task can access the object at the same time. Each GOODS object has an
associated monitor object which synchronizes the access to the object instance
by different tasks. When a method of the object is invoked
by some task, the monitor associated with this object is locked to prevent
other tasks from using this object until this task returns from the invoked 
method. Nested calls to objects are also allowed. This is a simple and 
powerful strategy, but it has some serious disadvantages, 
one of which is the possibility of a <DFN>deadlock</DFN>
- the mutual blocking of several tasks.<P>

This model of intertask object access synchronization is similar to the one
used in Java (with all methods implicitly considered to be
"<DFN>synchronized</DFN>"). Each GOODS object also has two methods, 
"<KBD>wait</KBD>" and "<KBD>notify</KBD>". When a task executing in a monitor
needs to wait for another task to do something, it calls <KBD>wait()</KBD>.
This causes the task to unlock the monitor and go to sleep. Since the monitor is 
unlocked, another thread can enter the monitor and supply the information the first
task is waiting for. The second task signals the waiting task by calling 
<KBD>notify()</KBD>. Once the first task is signaled it wakes up and begins
waiting for the monitor to become available. When the second task 
finishes its processing, it  unlocks the monitor, allowing the first
task to reacquire the monitor and to finish what it started.<P>
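As an illustration, the following sketch shows how a persistent message queue
could use this mechanism. It is not a complete class definition (the constructor,
<CODE>describe_components</CODE> and the actual queue storage are omitted), and the
class and method names are hypothetical:

<PRE>
class message_queue : public object { 
  protected:
    nat4 n_elems;           // number of queued messages
    ...                     // references forming the queue itself
  public:
    void put(ref&lt;object&gt; msg) { 
        ...                 // link msg into the queue
        n_elems += 1;
        notify();           // wake up a task sleeping in get()
    }
    ref&lt;object&gt; get() { 
        while (n_elems == 0) { 
            wait();         // release this object's monitor and go to sleep
        }
        n_elems -= 1;
        ...                 // unlink and return the first message
    }
    METACLASS_DECLARATIONS(message_queue, object);
};
</PRE>
<P>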

Unfortunately, there is one serious problem with this signaling mechanism,
which can cause deadlocks. If a method of object <CODE>O1</CODE> invokes 
a method of object <CODE>O2</CODE>, which in turn calls the <KBD>wait()</KBD>
method, then only the monitor of object <CODE>O2</CODE> is unlocked, while
the monitor of object <CODE>O1</CODE> remains locked. It looks 
tempting at first glance to modify the behavior of the metaobject so that calling 
<KBD>wait()</KBD> unlocks all monitors a task has acquired, but this would be a 
mistake: it would imply that anytime you call a function that could in turn 
potentially call <KBD>wait()</KBD>, you cannot guarantee that another task won't
change the state of the object.<P>

Since monitor objects take some space, it would be a very space consuming
solution to have a separate monitor object for each application object. 
Instead, a "<DFN>turnstile</DFN>" of monitor objects is used in GOODS.
When an object should be locked and it has no attached monitor object, a new 
monitor object is taken from the turnstile 
(if there are no more free monitor objects, the turnstile is extended). 
When an object is unlocked, the monitor object remains attached to this 
object, but it can be taken away and used for another object. 
So the total number of monitor objects 
doesn't exceed the maximal number of simultaneously accessed objects and there is
a high probability that when an object is accessed it already has an attached 
monitor object.<P>

<H5><A NAME = "esync">Synchronization of object access by different
database clients</A></H5>
There are two main approaches to object access synchronization in the database:
pessimistic and optimistic (certainly some combination of these 
two approaches can be used). With the pessimistic approach, object
locks are usually used to prevent other clients from accessing the object.
This approach guarantees that object changes cannot be lost.
The main disadvantages of this approach are the possibility of deadlocks and 
reduced concurrency. The optimistic approach can be successfully used
if the probability of conflicts (simultaneous modification of an object by
different clients) is small and the transaction can be easily restarted.
With this approach the check for correctness of object access is made only
when the transaction is committed. If some of the objects touched by the transaction 
have been changed by some other transaction, a conflict has taken place
and the current transaction should be restarted. There are a number of 
different metaobjects in the GOODS database interface, which implement different
variations of these approaches. The hierarchy of these metaobjects
is illustrated below. The description of these metaobjects
can be found in the <A HREF = "inc/mop.h">"mop.h"</A> header file.<P>

<PRE>
Abstract Metaobject
    Basic Metaobject
        Optimistic Metaobject
            Repeatable Read Optimistic Metaobject
        Pessimistic Metaobject
            Lazy Pessimistic Metaobject
            Repeatable Read Pessimistic Metaobject
</PRE><P>
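For example, the concurrency control policy of a persistent capable class is
selected when the class is registered with the <CODE>REGISTER</CODE> macro
(described in the C++ interface section below). In the following sketch the class
names are hypothetical, while the scheme names are the ones used in the examples
later in this document:

<PRE>
// accounts are updated concurrently by many clients - use a pessimistic scheme
REGISTER(account, object, pessimistic_repeatable_read_scheme);
// statistics are rarely modified concurrently - an optimistic scheme is sufficient
REGISTER(statistics, object, optimistic_scheme);
</PRE>
<P>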

It is possible to achieve a higher level of concurrency between different 
clients if the semantics of a concrete object class is exploited. 
For example, the PUT and GET methods of a "<CODE>queue</CODE>" object
can be executed 
concurrently without mutual exclusion as long as the queue is not empty.
This is possible because these methods, although both are mutators,
work on different ends of the queue. Using a specific metaobject protocol
which takes the object's semantics into account can therefore be advantageous.<P>
   
<H5><A NAME = "trans">Handling of database transactions</A></H5>
All changes in the database should be made by means of consistent and atomic 
sequences of operations - the transactions. A transaction transfers the database
from one consistent state to another. There are a lot of different 
transaction models: <DFN>flat</DFN> transactions, <DFN>nested</DFN>
transactions...
Simply speaking, all these models give different answers to two questions:
when should a transaction be committed or aborted, and when should changes made
by one transaction become visible to other transactions.
Since different applications may require different transaction models,
we want to let methods of the metaclass answer these questions.
So each programmer can introduce her/his
own model of transactions by means of her/his own metaobjects.<P>

The default implementation of transactions in GOODS implicitly opens a
nested transaction each time a GOODS object is accessed, and
commits all nested transactions when control returns from the
last invoked method. So the programmer doesn't have to worry about
defining the points where a transaction should be opened or closed
(but it is possible to explicitly open or close a nested transaction and 
so create a long-lived transaction). When a transaction is aborted,
all modifications of persistent objects are discarded.
This implicit transaction schema is highly compatible with the idea 
of a transparent database interface. It is also less error prone.<P>

It is possible to explicitly specify transaction boundaries by means of the
<code>metaobject::begin_transaction()</code> and 
<code>metaobject::end_transaction()</code> virtual methods. The method
<code>metaobject::begin_transaction()</code> increments the counter of 
nested transactions; the <code>metaobject::end_transaction()</code> method
decrements the counter and, if it becomes zero, commits the transaction.<P>
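A minimal sketch of an explicitly delimited transaction is shown below;
<code>mop</code> stands for a pointer to the metaobject instance used by the
accessed classes (how it is obtained is not shown here), and the account objects
and their methods are hypothetical:

<PRE>
mop-&gt;begin_transaction();    // nested transaction counter becomes 1
modify(src)-&gt;withdraw(100);  // hypothetical account objects and methods
modify(dst)-&gt;deposit(100);
mop-&gt;end_transaction();      // counter drops to zero - the transaction is committed
</PRE>
<P>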

<H5><A NAME = "lru">Management of the client's object cache</A></H5>
Since the client application has to send requests to the server for every
accessed object, the performance of the application largely depends on
the efficiency of the object caching strategy. Usually a standard LRU
algorithm is used for replacing entries in the cache, but for a database
this discipline is not always a good choice. An application accessing a database
frequently scans a large number of objects. With a traditional
LRU cache replacement algorithm, these scanned objects (most of which
will not be accessed again in the near future) would throw all other 
objects out of the cache.<P>

To avoid such undesirable behavior, a
special modification of the LRU algorithm is used in GOODS. 
The object cache is divided into two parts: a frequently used objects 
(<DFN>FUO</DFN>)
part and an objects used only once (<DFN>OUOO</DFN>) part.
Both parts are controlled by an ordinary LRU discipline using 
double linked lists. An object is excluded from the first list when it is accessed
and inserted at the head of the second list after the end of the access.
If an object is taken from the <DFN>OUOO</DFN> list and it is not at the
head of this list, the object is considered to be a frequently used one 
and is reattached to the <DFN>FUO</DFN> list. (Since the object is present
in the <DFN>OUOO</DFN> list but not at its head,
we can conclude that the object has already been accessed
from some other place in the program and may be accessed once again.)
So objects scanned during a database search cannot replace objects from 
the FUO part of the cache. But GOODS allows the programmer to define her/his
own cache managing policy because the cache is also controlled by 
metaclass methods.<P>

Currently the GOODS database interface consists of some kernel 
database requests (such as set lock, open transaction, load object...)
and a number of basic metaobjects, implementing the most common
models of object access organization. By deriving from these basic metaobjects,
it is possible to create specific metaobjects to handle specific
requirements of concrete applications.
Usually most of the work can be done by methods of the base
metaclasses and only a few things have to be redefined or added.<P>




<H2><A NAME="cpp">The GOODS interface for the C++ language</A></H2>

GOODS supports the following scalar types as components of database classes:

<PRE> 
char    1 byte character  
nat1    unsigned 1 byte integer
int1    signed 1 byte integer
nat2    unsigned 2 byte integer
int2    signed 2 byte integer
nat4    unsigned 4 byte integer
int4    signed 4 byte integer
nat8    unsigned 8 byte integer
int8    signed 8 byte integer
real4   4 byte ANSI floating point type
real8   8 byte ANSI floating point type
</PRE>

It is better to use these type aliases instead of native C types to 
provide portability of your application (for example, the type <TT>long</TT>
can be 4 bytes on one system and 8 bytes on another).
References to other GOODS objects are supported by means of 
"<DFN>smart pointers</DFN>" implemented in the template class
"<CODE>ref</CODE>":

<PRE>
class A;
ref&lt;A&gt; ra;
</PRE>

It is possible to construct arrays and structures of the above
specified atomic types:

<PRE>
char str[10];
struct coord {
	int x;
	int y;
};
coord points[10];
</PRE>

It is possible to specify classes with objects of varying size. Such a class
can have (only) one varying component, the size of which is determined
at object creation time: 

<PRE>
class person : public object {
  public:  
    char name[1];
   
    static person* create(char* name) {
        int name_len = strlen(name);
        return new (self_class, name_len) person(name, name_len);
    }
    METACLASS_DECLARATIONS(person, object);

  protected: 
    person(const char* name, size_t name_len)
    : object(self_class, name_len)
    {
        memcpy(this->name, name, name_len+1);
    }	
};

field_descriptor& person::describe_components()
{
    return VARYING(name);
}
</PRE>

Such a class can have only one varying component and it must be the last one.
Classes with varying components are used in the GOODS C++ interface to
implement arrays. They also provide an efficient way of representing classes
with a constant (immutable) string identifier (like the <TT>name</TT> field of
<TT>person</TT> above).<P>
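For example, a new <TT>person</TT> object can be created and bound to a smart
pointer as follows (the name is, of course, just an illustrative literal):

<PRE>
ref&lt;person&gt; p = person::create("John Smith");
</PRE>
<P>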

Since metaclass information is not available at runtime in C++,
the programmer has to specify this information manually, because the
database needs to know the format of objects. To ease this work, a number
of macros and functions are provided by the GOODS interface for C++.<P>

GOODS supports persistency only for objects of <I>persistent capable</I>
classes. A class is <I>persistent capable</I> if it is derived from the
GOODS "<CODE>object</CODE>" class and implements some methods and constructors
needed by the GOODS client library. Such a class should have:

<OL>
  <LI>a specific constructor for initializing the object  
      when it is loaded from the database;
  <LI>a static component "<CODE>self_class</CODE>"
      containing the class descriptor of this class;
  <LI>an overloaded function "<CODE>classof</CODE>" returning the 
      class descriptor determined by the static type
      of the function argument; 
  <LI>a virtual method "<CODE>describe_components</CODE>",
      which returns information about all components of the class. 
</OL>

All these are declared by the macro  

<PRE>
METACLASS_DECLARATIONS(CLASS, BASE_CLASS)
</PRE>

The programmer merely has to implement the member function 
"<CODE>describe_components</CODE>",
which provides information about all class instance variables.
To make the description of class variables easier and less error prone,
five special macros are provided: 

<PRE>
NO_FIELDS   the method should return this value if there are no variables in the class
FIELD(x)    describes an atomic or structural field 
ARRAY(x)    describes a fixed array type
MATRIX(x)   describes a two dimensional fixed array type
VARYING(x)  describes a varying array 
</PRE>

If there are structural components in the class, you should define a
function "<CODE>describe_field</CODE>" to describe components of 
this structure:

<PRE>
class B_page : public object {
    friend class B_tree;
    enum { n = 1024 }; // Minimal number of used items at not root B tree page
    int4 m;            // Index of first used item

    struct item { 
        ref&lt;object&gt; p; // for leaf page - pointer to 'set_member' object
	               // for all other pages - pointer to child page
        skey_t      key;

        field_descriptor& describe_components() {
	    return FIELD(p), FIELD(key);
	}				
        inline friend field_descriptor& describe_field(item& s) {
            return s.describe_components();
	}
    } e[n*2];

    ...
  public: 
    METACLASS_DECLARATIONS(B_page, object);
};

field_descriptor& B_page::describe_components()
{
    return FIELD(m), ARRAY(e); 
}

REGISTER(B_page, object, optimistic_scheme);
</PRE>

The macro <CODE>REGISTER</CODE> defines default implementations of the
"<CODE>classof</CODE>" function and the constructor, and creates a class
descriptor for the class:

<PRE>
REGISTER(CLASS, // name of the class
	 BASE,  // name of base class
	 MOP    // metaobject for this class
        ); 
</PRE>

It is possible for a template class to be <I>persistent capable</I>, 
but the programmer has to explicitly register the different template instantiations
in the database. The common way is to use the <TT>typedef</TT> operator to
create an alias name for the concrete template instantiation and then register
the class with this alias name in the database by means of the
<TT>REGISTER</TT> macro.<P>

Each storage has a predefined root object. To change the type of this
abstract root object you should use the "<CODE>become</CODE>" method.
To find out whether the storage has already been initialized, a special virtual method 
"<CODE>is_abstract_root</CODE>" is defined in the class
"<CODE>object</CODE>". This method
returns true if the type of the root object has not been changed yet and 
false otherwise.<P>

The multitasking library requires some initialization before it can be used.
The first statement in  the <CODE>main()</CODE> function of the application  should
be the invocation of the static method <CODE>task::initialize</CODE>.
The single parameter of this method specifies which stack size should 
be reserved for the main thread of the program.<P>

A typical sequence of steps to initialize the storage is:
 
<PRE>
class my_root : public object {
  public:
    ...
    METACLASS_DECLARATIONS(my_root, object);
  
    my_root() : object(self_class)
    {
	...
    }
    void initialize() const { 
        if (is_abstract_root()) {
            ref&lt;my_root&gt; root = this;
            modify(root)-&gt;become(new my_root);
	} 
    }
};
	
int main(int argc, char* argv[]) 
{
    task::initialize(task::huge_stack);
    database db;

    char* cfg_name = new char[strlen(argv[1])+5];
    sprintf(cfg_name, "%s.cfg", argv[1]);

    if (db.open(cfg_name)) {
        ref&lt;my_root&gt; root;
        db.get_root(root);
        root-&gt;initialize();
	
	... // do something with database 
	
        db.close();

        // To avoid memory leaks messages for statically allocated objects
        // this method may be called (but only if there are no more active database connections)
        database::cleanup(); // release statically allocated objects
    }
}
</PRE>

You can find a template for a simple database application in the
file <A HREF = "examples/template.cxx">"template.cxx"</A>.
All accesses to persistent objects should be encapsulated inside
object methods. Directly referencing an object component from outside
the object is bad programming practice and may cause a runtime error:

<PRE>
class text {
  public: 
    char str[1];
    ...
};

main () {
    ref&lt;text&gt; t;
    ...
    printf("text; %s\n", t-&gt;str); // !!! ERROR
}
</PRE>

The reason for this restriction is that a persistent object can at any
moment be thrown out of memory by the cache replacement algorithm
when there is no active method for this object. Then it may happen that a
smart pointer points to the wrong object.
The correct implementation of the code above is:

<PRE>
class text { 
  protected: 
    char str[1];
  public:
    void print() const;
    ...
};

void text::print() const
{
    printf("text; %s\n", str);
}

main () {
    ref&lt;text&gt; t;
    ...
    t-&gt;print();
}
</PRE>

Avoid direct access to object components whenever possible:
make all object instance variables protected and encapsulate the
access to them within object methods. Encapsulation makes your application
simpler and more flexible. With the GOODS C++ interface it also improves
performance, because access to the object's own components within object 
methods requires no extra runtime overhead.<P>

The GOODS C++ API provides implicit memory deallocation (garbage collection) for persistent 
capable objects based on reference counters. This means that once the number of references
to an instance of a persistent capable object becomes zero, it is immediately deallocated.
<I>Smart pointers</I> are responsible for maintaining the reference counter of each object. 
Unfortunately, a garbage collector based on reference counters is not able to deallocate 
data structures with cyclic references (such as a doubly linked list). This is not critical for persistent 
objects, because the object cache replacement algorithm will in any case remove the least recently
used instances from memory, but you should take care with transient data structures 
containing cyclic references.<P>

There are two dangerous issues related to the use of constructors of persistent capable 
objects. Since at the moment of constructor execution there are no references
(<I>smart pointers</I>) to the created instance, assigning the <code>this</code> pointer to some 
smart pointer in the constructor can cause deallocation of the created object as soon as this 
variable is changed. For example, the following constructor will 
cause unexpected deallocation of the object:<P>

<PRE>
class X : public object { 
    X() : object(self_class) { 
        ref&lt;X&gt; x = this;
        x = NULL;   // at this point the object will be deallocated!
        ...  
    }
};
</PRE>

Another problem is specific to the constructor used for the creation of the root object.
The root object is initialized in the GOODS C++ API by means of the <code>become()</code> operator, 
which just swaps the references to two objects (the predefined root object and the new root object). 
This means that all references to the new root object set in its constructor will point
to the other object (the abstract root) after <code>become()</code>. So do not store any references to 
the created object in the constructor of the root object. Also be careful with assignments of the
<code>this</code> pointer in constructors of all persistent capable classes; better still, avoid 
such assignments altogether by performing all necessary initialization in some other method.<P>


Starting from version 2.37 of GOODS, per-thread transactions are supported. 
In prior versions of GOODS all threads shared the same transaction. The transaction is 
committed only when there are no more active methods of persistent capable objects in any thread. 
The counter of nested transaction invocations was a static variable of the <code>basic_metaobject</code> 
class, so all persistent capable objects (even objects belonging to different databases) shared a single counter of nested transactions.<P>

In version 2.37 of GOODS, a special class <code>cache_manager</code> was introduced to contain 
the static variables from the <code>basic_metaobject</code> class used for transaction and cache 
management. By default the behavior is consistent with old 
versions of GOODS - all threads share the same transaction. But it is now possible to assign
a particular instance of <code>cache_manager</code> to a thread or group of threads.
The <code>database</code> class now has two new methods: <code>attach()</code> and 
<code>detach()</code>. 
Each <code>database</code> instance can have its own <code>cache_manager</code>, but unless
the <code>attach()</code> method is executed by the thread, the default cache manager is used.
The method <code>attach</code> associates the cache manager of the particular database
connection with the current thread. The method <code>detach</code> destroys this association.
Invocation of the <code>detach</code> method is not necessary; it was added only for 
symmetry and compatibility with the Java API.<P>

The per-thread transaction model is especially useful for servers, where one process handles 
remote connections with multiple clients. The actions performed by each client should be
treated as a separate transaction. To implement this model in GOODS version 2.37 and higher, it
is necessary to create a separate database connection for each thread (create an instance of the
<code>database</code> class and open the database). Then the thread should be attached
to the database by the <code>database::attach()</code> method. Each thread will
have its own object cache, so there can be several instances of an object with the same OID in 
the application. The standard GOODS synchronization mechanism, based on locks set by metaobjects,
is used to avoid conflicts when several concurrent threads access the same object. 
The proposed model has the following drawbacks:<P>

<OL>
<LI>Inefficient use of client memory due to duplication of information - 
several copies of one object can be present in the application. 
<LI>Redundant load operations. An object will be fetched from the server even if an instance of this 
object is available in the cache of another thread. 
<LI>Instead of a single connection with the server, multiple connections (one per thread)
are maintained. 
<LI>Necessity of explicit attach and detach method invocations - it is the responsibility of the
programmer to associate a database connection with a thread.
</OL><P>

This model of per-thread transactions is experimental and may be changed in the future.
An example illustrating per-thread transactions:

<PRE>
// 'nThreads' and 'done' were left undeclared in the original example;
// a GOODS semaphore from the system abstraction layer is assumed here
const int nThreads = 8;
static semaphore done;

void task_proc run(void*) { 
    database db;
    db.attach();
    if (db.open("myserver.cfg")) {
        ...
        db.close();
    }
    done.signal();
}

int main() { 
    int i;
    task::initialize(task::normal_stack);
    for (i = 0; i < nThreads; i++) { 
        task::create(run);
    }
    for (i = 0; i < nThreads; i++) { 
        done.wait();
    }
    return 0;    
}
</PRE>

For real servers, it is better to maintain a pool of threads, so that the server does not spawn
a new thread and open a new database connection each time it receives a request from a client
(and destroy the thread and connection after the request has been processed). 
Instead, threads for handling client requests are taken from the pool and returned 
to the pool after the end of request processing. Each thread has a permanently opened database 
connection. Such a scheme can significantly increase the total server throughput because it 
eliminates the overhead of spawning a new thread and, especially, of establishing a new database 
connection.<P>

Version 2.37 of GOODS provides two additional optimizations:
fetching of object clusters and a bulk allocation mode. Both optimizations can be enabled at
the database level by the <code>set_alloc_buffer_size(size_t size)</code> and 
<code>enable_clustering(boolean enabled)</code> methods.<P>
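A minimal sketch of enabling both optimizations is shown below; it assumes that
these methods are invoked on the opened <code>database</code> instance, and the
parameter value is purely illustrative:

<PRE>
database db;
if (db.open("myserver.cfg")) { 
    db.enable_clustering(True);        // let the server send clusters of related objects
    db.set_alloc_buffer_size(1024);    // reserve object identifiers in batches
    ...
    db.close();
}
</PRE>
<P>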

When clustering is enabled, the server sends to the client not only the requested 
object, but also objects directly referenced from the requested object. The total
size of the objects in a cluster is limited by the server parameter for the maximal cluster size.
By default the value of this parameter is 500 bytes.<P>

The bulk allocation mode reduces the number of messages sent between client and 
server and also improves reference locality. If the size of the allocation buffer is set
to a non-zero value, the server reserves the requested number of OPIDs for the 
client, so the allocation of an object (assigning an OPID to the object) can be done immediately
by the client without sending requests to the server. At the end of a transaction, or
when the end of the allocation buffer is reached, one request is sent to the server to
allocate all objects with reserved OPIDs. The allocation is performed as a single
continuous segment, so all objects created within a transaction are placed near
each other. When the client is detached from the server (normally or abnormally),
cleanup is performed and all reserved OPIDs are reclaimed.<P> 

The GOODS C++ interface provides a library of some widely used 
container classes for efficient access to persistent data. 
The following subsections briefly describe these classes.<P>

<H4><A NAME = "array">Dynamic arrays</A></H4>
The GOODS interface for C++ provides a template class for dynamic arrays. 
The template parameter should be either a builtin primitive type (<TT>nat1, 
int4, real8</TT>...) or a persistent object reference. It is important to 
notice that template instances must be explicitly registered in the database
by the <TT>REGISTER</TT> macro. The following array template instantiations
are defined and registered in <A HREF="inc/dbscls.h">"dbscls.h"</A>:
<TT>ArrayOfByte, ArrayOfInt, ArrayOfDouble</TT> and <TT>ArrayOfObject</TT>.
If you want to use an array of some other component type, you should first
create a type alias by means of the <TT>typedef</TT> operator and then
register this type in the database:<P>

<PRE>
     typedef array_template<int2> ArrayOfShort;
     REGISTER(ArrayOfShort, object, pessimistic_repeatable_read_scheme);
</PRE>

The dynamic array template provides methods for direct access to array components:

<PRE>
        T    operator[](nat4 index) const;
        T    getat(nat4 index) const;
        void putat(nat4 index, T elem);
</PRE>

Methods for getting the number of components in the array, for copying and 
appending array components are also available.  
The dynamic array also implements a stack protocol by providing 
such methods as <TT>Push(T value), Pop(), Top()</TT>. The methods 
<TT>insert(int index, int count, T value)</TT> and 
<TT>remove(int index, int count)</TT> can be used to add or remove
dynamic array components.<P>
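A small sketch of working with such an array is given below (creation or loading
of the array instance itself is omitted, and the method names are the ones listed
above):

<PRE>
ref&lt;ArrayOfInt&gt; a;
...                               // the array is created or loaded elsewhere
int4 first = a-&gt;getat(0);         // read a component
modify(a)-&gt;putat(0, first + 1);   // update a component
modify(a)-&gt;Push(42);              // use the array as a stack
</PRE>
<P>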

The class <TT>String</TT> is implemented as a subclass of the
<TT>ArrayOfChar</TT>
class and defines extra methods for string manipulation.<P>


<H4><A NAME = "set">Sets</A></H4>

The GOODS class library provides CODASYL-like sets to represent one-to-many and
many-to-many relationships between objects. The set consists of one 
owner and many member components. The set owner is represented by the
<TT>set_owner</TT> class and provides methods for insertion/removal of
members to/from the set and iteration through the set members. The set members
are accessed through the <TT>set_member</TT> class, which contains a member 
key and a virtual function to calculate the short key representation (used
in the B-tree). Both the <TT>set_member</TT> and the <TT>set_owner</TT> class
contain a pointer to the attached object, so the object can be a member of some 
sets and an owner of some other set at the same time.<P>

<H4><A NAME = "btree">B*-tree</A></H4>
The B-tree is the classical data structure for DBMS. It minimizes the number
of disk read operations needed to locate an object by key and preserves
the order of elements (range requests are possible). Also the maintenance
of a B-tree can be done efficiently (insert/remove operations have 
log(N) complexity).<P>

In the classical implementation of the B-tree, each B-tree page contains
a set of pairs &lt;key, page-pointer&gt;. The nodes at the page are 
ordered by key, so a binary search can be used to locate an item with greater
or equal key. 
In the B*-tree, pointers to members are stored only in leaf pages of the B-tree.
All other pages contain pointers to child pages.<P>

The B*-Tree in GOODS is implemented as a subclass of the
<TT>set_owner</TT> class. Pointers in leaf pages of the B*-tree refer
to objects of the <TT>set_member</TT> class, which contain references to
the objects included in the B-Tree. Nodes of the B*-tree pages contain a short
form of key , which can be calculated
from the object key by a virtual method of the <TT>set_member</TT> class
(using the <TT>nat8</TT> type, it usually consists of just the first 8 bytes of the original key). 
Such structure allows objects to be included
in several B_trees and also makes the search operation more effective, because
only small <TT>set_member</TT> objects are accessed during the search
(if there are several objects with the same value of the short key). The
<TT>B_Tree</TT> class defines methods for inserting new objects into the tree,
removing objects from the tree and searching objects by the key.<P>

<H4><A NAME = "hash">Hash Table</A></H4>
The class <TT>hash_table</TT> provides fast, almost constant time access to the 
object by the key. The GOODS class library for C++ provides the implementation
of a non-extendable hash table which operates with string keys. The hash table
can be used effectively if the upper limit on the number of objects in the hash table 
is known, does not exceed the size of the table by more than several times,
and the hash table fits into main memory.<P>

<H4><A NAME = "htree">H-Tree</A></H4>
The H-Tree is a combination of hash table and index tree. It can be used when
the size of the hash table is too large to represent the table as a single
object (as an array of pointers). The H-Tree algorithm first calculates the
normal hash key and then divides it into several groups of bits. The first
group of bits is used as an index in the root page of the H-Tree, the second
group of bits as an index in the page referred from the root page, and so
on... If, for example, the size of the hash table is 1000003, then an H-tree with
pages containing 128 pointers requires access to three pages to locate
any object. Since a reference in GOODS is 6 bytes long, the total size of
the loaded objects is 2304 bytes (128*6*3) instead of 6Mb when using the 
<TT>hash_table</TT> class.<P>

<H4><A NAME = "blob">Blob</A></H4>
Most modern database applications have to deal with large objects, used to 
store multimedia and text data. The GOODS class library has a special class
<TT>Blob</TT> to provide an efficient mechanism for storing/extracting large 
objects. Since loading large objects can consume significant time and 
memory, the <TT>Blob</TT> object allows subdividing large objects into parts
(segments), which can be accessed sequentially. Moreover, the <TT>Blob</TT> 
object uses the multitasking model of GOODS, which makes it possible 
to load the next part of the <TT>Blob</TT> object in parallel while handling 
(playing, visualizing,...) the current part of the <TT>Blob</TT>. 
Such approach minimizes delays caused by loading objects from the storage.<P>

<H4><A NAME = "rtree">R-Tree</A></H4>
The R-tree provides fast access to spatial data. Basically the idea behind
the R-tree and the B-tree is the same:
use a hierarchical structure with a high branching factor to reduce the
number of disk accesses. The R-tree is the extension of the B-tree for
multidimensional objects. A geometric object is represented by its minimum 
bounding rectangle (MBR). Non-leaf nodes contain entries of the form 
(R,<I>ptr</I>) where <I>ptr</I> is a pointer to a child node in the R-tree; R 
is the MBR that covers all rectangles in the child node. Leaf nodes contain 
entries of the form (obj-id, R) where
obj-id is a pointer to the object, and R is the MBR of the object. The main 
innovation in the R-tree is that parent nodes are allowed to overlap. By
this means the R-tree guarantees at least 50% space utilization and remains 
balanced. 
The first R-tree implementation was proposed by Guttman. The GOODS <TT>R_tree</TT>
class is based on Guttman's implementation with a quadratic split algorithm.
The quadratic split algorithm is the one that achieves the best trade-off
between splitting time and search performance.<P>

<H4><A NAME = "kdtree">KD-Tree</A></H4>
A B-tree allows you to locate an object by key. But sometimes you need to specify several
search conditions. You can certainly perform an index search by one of the specified keys (the most selective one)
and then filter the selected records to match the rest of the conditions. But if it is not clear which key is the most selective, or
the selectivity of all specified conditions is bad, then such a query execution plan can be very inefficient.
To address this problem GOODS provides a multidimensional index: the KD-tree.<p>

The KD-tree allows you to perform an index search using multiple criteria. The search is based on a query-by-example mechanism: 
you use a pattern object of the same class as the objects you are going to search for and set values for some fields of this object.
Then you specify a bit mask indicating which fields are set. Assume that you have a class Car describing some cars:

<pre>
class Car : object 
{
     String  vendor;
     String  model;
     String  color;
     int     year;
     int     mileage;
     boolean automatic;
     boolean ac;
     int     price;
     char    state[3];
...
};
</pre>

To use a multidimensional index you need to derive your own class from KD_tree
and implement a comparison function in it. The comparison function should compare the specified field of two objects.
Assume that we want to be able to search for a car by any combination of the fields defined in the Car class.
It is convenient to enumerate these fields, because we need to access them by ordinal numbers:

<pre>
enum CarFields 
{ 
    CAR_VENDOR,
    CAR_MODEL,
    CAR_COLOR,
    CAR_YEAR,
    CAR_MILEAGE,
    CAR_AUTOMATIC,
    CAR_AC,
    CAR_PRICE,
    CAR_STATE
};

class CarIndex : public KD_tree 
{
  protected:
    virtual int compare(anyref o1, anyref o2, int dim) const
    {
        r_ref<Car> c1 = o1;
        r_ref<Car> c2 = o2;
        switch (dim) { 
          case CAR_VENDOR:
               return c1->vendor->compare(c2->vendor);
          case CAR_MODEL:
               return c1->model->compare(c2->model);
          case CAR_COLOR:
               return c1->color->compare(c2->color);
          case CAR_YEAR:
               return c1->year < c2->year ? -1 : c1->year == c2->year ? 0 : 1;
          case CAR_MILEAGE:
               return c1->mileage < c2->mileage ? -1 : c1->mileage == c2->mileage ? 0 : 1;
          case CAR_AUTOMATIC:
               return c1->automatic ? c2->automatic ? 0 : 1 : c2->automatic ? -1 : 0;
          case CAR_AC:
               return c1->ac ? c2->ac ? 0 : 1 : c2->ac ? -1 : 0;
          case CAR_PRICE:
               return c1->price < c2->price ? -1 : c1->price == c2->price ? 0 : 1;
          case CAR_STATE:
               return strcmp(c1->state, c2->state);
          default:
               assert(false);
               return 0;
        }
    }
...
};
</pre>


Now assume that you want to search for green Fords with automatic transmission:

<pre>
ref<Car> car = new Car();
car->vendor = String::create("Ford");
car->color = String::create("green");
car->automatic = True;
KD_tree::Iterator i = index->queryByExample(car, BIT(CAR_VENDOR)|BIT(CAR_COLOR)|BIT(CAR_AUTOMATIC));
while (!(car = ++i).is_nil()) { 
    ...
}
</pre>


It is also possible to perform range queries: you just need to specify two pattern objects for the low and high boundaries.
The KD-tree will select all records belonging to the specified intervals. An interval can be open - the low or the high boundary may be omitted.<p>

For example, assume that we want to find cars with mileage &lt; 100000, year &gt;= 2008 and price between 5000 and 10000:

<pre>
ref<Car> low = new Car();
low->year = 2008;
low->price = 5000;

ref<Car> high = new Car();
high->mileage = 100000;
high->price = 10000;
KD_tree::Iterator i = index->queryByExample(low, BIT(CAR_YEAR)|BIT(CAR_PRICE), high, BIT(CAR_MILEAGE)|BIT(CAR_PRICE));
</pre>

<H3><A NAME = "www">API for development Web applications</A></H3>

The new version of GOODS provides an API for developing WWW applications.
It is very easy to perform Web database publishing with GOODS.
A GOODS application can either communicate with a standard WWW server by
means of CGI requests, or it can serve HTTP requests itself.<P>
Interaction with the Web server is based on a three-tier model:

<PRE>
    Web Server   -&gt;     CGI stub     -&gt;    GOODS application
             CGI call          local socket connection  
</PRE>

Using the GOODS built-in HTTP server provides maximum performance, because in 
this case no extra communication and process creation overhead takes place.
In both cases the same API for receiving and unpacking requests 
and constructing responses is used. So the same application 
can be used for interaction with an external Web server as well as 
run as a stand-alone HTTP server.<P>

A GOODS Web application is a request-driven program, receiving data from
HTML forms and dynamically generating the resulting HTML pages. The classes
<code>WWWapi</code> and <code>WWWconnection</code> provide a simple and 
convenient interface for getting HTTP requests, constructing HTML pages and 
sending replies back to the WWW browser. The abstract class <code>WWWapi</code>
has two implementations: <code>CGIapi</code> and <code>HTTPapi</code>;
the first implements the protocol of interaction with a Web server by means of the
CGI mechanism, and the second directly serves HTTP requests.<P>

The built-in HTTP server is able to handle two types of requests: 
transferring an HTML file located relative to the current working directory
in response to a GET HTTP request, and performing an action specified by a GET or POST
request with parameters. The built-in HTTP server provides persistent connections -
the server will not close the connection with the client immediately after sending the
response; instead the connection is kept open during some specified 
interval of time. This built-in server also supports concurrent request
processing by several threads of control, but starting the threads has to
be performed by the client application.<P>

The virtual method <code>WWWapi::connect(WWWconnection& con)</code>
accepts a client connection (either from the CGI stub program or from a WWW browser).
This method returns <code>true</code> if the connection is established. 
In this case the programmer should call 
<code>CGIapi::serve(WWWconnection& con)</code> to receive and handle the client's
requests. This method returns <code>false</code> if and only if the handler
of a request returns <code>false</code>. Even if a request was not correctly
received or could not be handled, <code>true</code> is returned by the
<code>serve</code> method. The connection is always closed after return from the
<code>serve</code> method. It is possible to start a separate thread for the
execution of each <code>serve</code> invocation.<P>
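A minimal sketch of the resulting accept/serve loop is shown below; the construction
of the <code>HTTPapi</code> or <code>CGIapi</code> object (referred to here as
<code>api</code>) and of the connection object is assumed and not shown:

<PRE>
WWWconnection con;
while (api.connect(con)) {        // wait for the next client connection
    // handle the client's request; this call may also be issued in a separate thread.
    // serve() returns false only when a request handler returns false,
    // which is used here as a signal to shut the server down
    if (!api.serve(con)) { 
        break;
    }
    // the connection is closed automatically after serve() returns
}
</PRE>
<P>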

To construct the response to a request, special overloaded <code>&gt;&gt;</code>
operators are provided in the <code>WWWconnection</code> class. The first line of the
response should specify the type of the response body, for example:

<PRE>
Content-type: text/html\r\n\r\n
</PRE>

The two CR-LF sequences after this line separate the HTTP header from the body.
Three encoding schemes can be used for constructing the response body:

<OL>
<LI><B>TAG</B> - used for specifying HTML control elements. No conversion is
done for this encoding.
<LI><B>HTML</B> - with this encoding, output characters which are special for 
HTML (&lt; &gt; &amp; &quot;) are replaced with their symbolic entity names 
(&amp;lt; &amp;gt; &amp;amp; &amp;quot;). 
<LI><B>URL</B> - used for specifying call parameters in URL format. 
Spaces are replaced with the '+' character, all other special characters with
their hex codes.
</OL>

To make switching between encodings more convenient, the <code>WWWconnection</code>
class performs automatic switching between them. Initially the <B>TAG</B>
encoding is always used. Then the encodings are implicitly changed using the 
following rules:

<PRE>
              TAG  -&gt; HTML
              HTML -&gt; TAG
              URL  -&gt; TAG
</PRE>

It is certainly possible to explicitly specify the encoding for the next output 
operation by means of a special <code>&lt;&lt;</code> operator, which accepts 
one of the following constants: <code>TAG, HTML, URL</code>.<P>

Information about HTML form slot values or request parameters can be obtained 
using the <code>WWWconnection::get(char const* name, int n = 0)</code> method.
The optional second parameter is used only for getting the values of selectors with
the multiple-selection option allowed. If a parameter with the specified name is not found, 
<code>NULL</code> is returned. There are some mandatory parameters 
which should always be present in all forms handled by GOODS:<P>

<TABLE BORDER>
<TR><TH>Parameter name</TH><TH>Parameter Description</TH></TR>
<TR><TD>socket</TD><TD>address of the server, used for constructing new links</TD></TR>
<TR><TD>page</TD><TD>symbolic name of the page, used for request dispatching</TD></TR>
<TR><TD>stub</TD><TD>name of CGI stub program always defined by API</TD></TR>
</TABLE><P>


<H3><A NAME = "running">Running GOODS applications</A></H3>

<H4><A NAME = "configure">Database configuration file</A></H4>

To specify the configuration of the database you should create a configuration
file with the following format:

<PRE>
&lt;number of storages = N&gt;
&lt;storage identifier 0&gt;: &lt;host name&gt;:&lt;port0&gt;
...
&lt;storage identifier N-1&gt;: &lt;host name&gt;:&lt;portN-1&gt;
</PRE>

The storage identifiers should be successive integer numbers, which are used as
indices in the array of storages. You can find examples of this configuration 
file in <A HREF = "examples/unidb.cfg">"unidb.cfg"</A> and
<A HREF = "examples/guess.cfg">"guess.cfg"</A>.
In a distributed environment, the configuration file can be accessed from the
server computer using some network file system protocol (for example NFS) 
or can be replicated to the client computers. 
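For illustration, a hypothetical configuration of a database distributed over two
storages might look like this (host names and ports are arbitrary):

<PRE>
2
0: alpha.mydomain.com:6100
1: beta.mydomain.com:6100
</PRE>
<P>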


<H4><A NAME = "goodsrv">Server monitor GOODSRV</A></H4>

To run a database you should first start all storage servers at each
node of the network as specified in the configuration file. You can write a server
program yourself, but GOODS provides a standard server implementation named
"<KBD>goodsrv</KBD>" supporting some basic monitoring functions.
To run this program you should specify the name of the database, which
should be equal to the name of the configuration file without extension
(the extension is assumed to be "<KBD>.cfg</KBD>").
The first line of this configuration file specifies the number of storages
in the database. Each successive line specifies the location of a database server;
such a line consists of three fields separated by colons:
storage identifier, host name and port number.<P>

Parameters for GOODSRV can be specified in one of two files:
"<TT>goodsrv.cfg</TT>" and "<I>database</I>.srv". The first one specifies
parameters common for all servers, the second specifies the parameters of the server
of the specific database. If a parameter is defined in both configuration
files, then the value of the parameter from "<I>database</I>.srv"
is used. If a parameter is not specified in the configuration files, 
then default values are used. The following table describes all available
parameters:<P>

<TABLE BORDER>
<TR><TH>Parameter</TH><TH>Type</TH><TH>Unit</TH><TH>Default value</TH><TH>Meaning</TH><TH>Set</TH></TR>
<TR><TD>memmgr.init_map_file_size</TD><TD>integer</TD><TD>Kb</TD><TD>8192</TD><TD>
Initial size of memory map file. Increasing size reduces the
number of memory map reallocations.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>memmgr.init_index_file_size</TD><TD>integer</TD><TD>Kb</TD><TD>4096</TD><TD>
Initial size of index file. Increasing size reduces the 
number of index reallocations. 
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>memmgr.gc_init_timeout</TD><TD>integer</TD><TD>seconds</TD><TD>60</TD><TD>
Timeout for initiation of GC process. A GC coordinator waits for replies from
other servers for GC initiation request during the specified period of time.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.gc_response_timeout</TD><TD>integer</TD><TD>seconds</TD><TD>86400</TD><TD>
Timeout for acknowledgment from a GC coordinator 
to finish the mark stage and perform the sweep stage of GC. If no response
is received from the GC coordinator within this period, GC will
be aborted at the server. 
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.gc_init_allocated</TD><TD>integer</TD><TD>Kb</TD><TD>1024</TD><TD>
If > 0:  size of allocated memory since last GC, after which the next
garbage collection process will be initiated.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.gc_init_idle_period</TD><TD>integer</TD><TD>seconds</TD><TD>0</TD><TD>
If > 0: specifies the idle period interval, after which GC will be 
initiated, if the memory management server receives no request during
this period of time.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.gc_init_min_allocated</TD><TD>integer</TD><TD>Kb</TD><TD>0</TD><TD>
Minimal size of allocated memory to start GC in idle state (see previous
parameter). GC will be initiated only if the idle period timeout is expired and
more than gc_init_min_allocated memory was allocated since last GC.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.gc_grey_set_threshold</TD><TD>integer</TD><TD>references</TD><TD>1024</TD><TD>
Maximal extension of the GC grey references set:
after reaching this number of references,
the optimization of the order of taking references from the grey set
(to improve reference locality) is disabled and a breadth first order of 
object reference graph traversal is used. 
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.max_data_file_size</TD><TD>integer</TD><TD>Kb</TD><TD>0</TD><TD>
If > 0: limits the size of the storage data file.
After reaching this value, GC is forced and all allocation requests are 
blocked until enough free space is collected.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.max_objects</TD><TD>integer</TD><TD>objects</TD><TD>0</TD><TD>
If > 0: limits the number of objects in the storage.
After reaching this value, GC is forced and all allocation requests are 
blocked until some objects are collected by GC.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>memmgr.map_file_name</TD><TD>string</TD><TD ALIGN="CENTER">-</TD><TD>"*.map"</TD><TD>
Name of file with memory allocation bitmap.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>memmgr.index_file_name</TD><TD>string</TD><TD ALIGN="CENTER">-</TD><TD>"*.idx"</TD><TD>
Name of file with object index.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>transmgr.sync_log_writes</TD><TD>boolean</TD><TD>0/1</TD><TD>1</TD><TD>
Enable/disable write-ahead transaction logging.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>transmgr.permanent_backup</TD><TD>boolean</TD><TD>0/1</TD><TD>0</TD><TD>
If == 0: a snapshot backup type is used. Backup is terminated after
saving a consistent state of the database and checkpoints will 
be enabled.<P>
If == 1: a permanent backup type is used.
Backup terminates and forces a checkpoint after saving all
records from the transaction log. This type can be used to ensure
that the storage can be restored even if the storage data file is lost after a fault.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.max_log_size</TD><TD>integer</TD><TD>Kb</TD><TD>8192</TD><TD>
Maximal size of transaction log: reaching this value starts a checkpoint. 
After checkpoint completion, writing to the log file continues from 
the beginning.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.max_log_size_for_backup</TD><TD>integer</TD><TD>Kb</TD><TD>1073741824</TD><TD>
Maximal size of the transaction log during backup. When this limit is reached,
all new transactions will be blocked until the end of the backup.
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.preallocated_log_size</TD><TD>integer</TD><TD>Kb</TD><TD>0</TD><TD>
Forces the transaction manager to preallocate the
log file and not to truncate it after a checkpoint. In this case the file size 
does not have to be updated after each write operation, which approximately 
doubles transaction performance. 
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.wait_timeout</TD><TD>integer</TD><TD>seconds</TD><TD>600</TD><TD>
Timeout for committing a global transaction. A coordinator waits for replies
from the other servers participating in a global transaction until expiration of this
timeout. 
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.retry_timeout</TD><TD>integer</TD><TD>seconds</TD><TD>5</TD><TD>
Timeout for requesting the status of a global transaction from a coordinator. 
When a server performs recovery after a crash, it needs to know the status of the global
transactions in which it participated, so it polls the coordinators
of these global transactions, using this timeout as the interval for resending the request
to a coordinator. 
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.checkpoint_period</TD><TD>integer</TD><TD>seconds</TD><TD>0</TD><TD>
Time interval between two checkpoints. A checkpoint can be forced either
by exceeding some limit value of transaction log size or after some specified
period of time (if this timeout is non-zero). 
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.dynamic_reclustering_limit</TD><TD>integer</TD><TD>bytes</TD><TD>0</TD><TD>
Enable/disable dynamic reclustering of objects. If dynamic
reclustering of objects is enabled, all objects modified in a
transaction are sequentially written to a new place in the storage
(with the assumption that objects modified in one transaction will 
be also accessed together in future). This parameter specifies a maximal size 
of objects for dynamic reclustering. If == 0: disables 
reclustering.  
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>transmgr.log_file_name</TD><TD>string</TD><TD ALIGN="CENTER">-</TD><TD>"*.log"</TD><TD>
Name of local transaction log file.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>transmgr.history_file_name</TD><TD>string</TD><TD ALIGN="CENTER">-</TD><TD>"*.his"</TD><TD>
Name of global transaction history file.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>objmgr.lock_timeout</TD><TD>integer</TD><TD>seconds</TD><TD>600</TD><TD>
Deadlock detection timeout. If a lock can't be granted within the specified 
period of time, the server assumes that a deadlock has taken place and aborts one or 
more client processes to break the deadlock. 
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>poolmgr.page_pool_size</TD><TD>integer</TD><TD>pages</TD><TD>4096</TD><TD>
Size of page cache. Increasing this value will improve performance of disk
IO operations.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>poolmgr.data_file_name</TD><TD>string</TD><TD ALIGN="CENTER">-</TD><TD>"*.odb"</TD><TD>
Name of storage data file.
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>server.admin_telnet_port</TD><TD>string</TD><TD>hostname:port</TD><TD>""</TD><TD>
Specifies the port upon which goodsrv will serve administrative
interactive dialog sessions.  If you specify this option, you can run
goodsrv as a background task.  Administrative functions, such as scheduling
backup tasks, etc. can be performed by telnetting to the specified port.

</TD><TD ALIGN="CENTER">-</TD></TR> 
<TR><TD>server.cluster_size</TD><TD>integer</TD><TD>bytes</TD><TD>512</TD><TD>
Maximal size of an object cluster to be sent to a client. A server can
optimize the sending of objects to clients: instead of sending one object
for each request, it can send several objects (a cluster of objects), including
in the cluster objects referenced from the requested object (but only if the total
size of the objects does not exceed the <TT>cluster_size</TT> parameter).
</TD><TD ALIGN="CENTER">+</TD></TR> 

<TR><TD>server.garcc_port</TD><TD>string</TD><TD>hostname:port</TD><TD>""</TD><TD>
Specifies the port upon which goodsrv will host a backup/restore server
thread.  If you specify this option, you can create backups or restore the
database remotely.  This has several advantages over using the
administrative console:
<OL>
<LI> Since you're using a remote workstation's hard drive to store the backup,
     your database server's entire hard drive can be allocated for database
     storage;
<LI> Equipped with gigabit ethernet controllers using Jumbo frames (or
     fiber-channel cards), your backup or restore operation can complete twice
     as fast.  Instead of one hard drive performing both the read and write
     operations, the database server's drive performs the read operations
     while the remote workstation's drive performs the writes.
<LI> Because GOODS is platform-independent, so is the remote backup & restore
     process.  For example, you can run your database on a Linux server and
     handle your backup files using a Microsoft Windows workstation.
<LI> The remote workstation can compress or uncompress the archive during the
     backup/restore operation.
</OL>
To perform a remote backup or restore, use the <B>G</B>OODS <B>Arc</B>hiver
<B>C</B>lient utility, "garcc".  Assuming goodsrv is running on a server
whose IP address is "10.0.0.100", here's an example:
<P>
btree.srv:
<UL>
    server.garcc_port="10.0.0.100:3007"
</UL>
Start "goodsrv btree", then execute this command on the remote workstation
(i.e., 10.0.0.101):
<PRE>
    garcc --backup 10.0.0.100:3007 some_backup_filename.bak
</PRE>
Restoring is almost as easy.  Use the "close" command in the database's
administrative console.  Then execute this command on the remote
workstation:
<PRE>
    garcc --restore 10.0.0.100:3007 some_backup_filename.bak
</PRE>
Type just "garcc" for additional usage documentation.
<p>
<em>"garcc" was added to GOODS by Marc Seter</em>
</TD><TD ALIGN="CENTER">-</TD></TR> 
<TR><TD>server.ping_interval</TD><TD>integer</TD><TD>seconds</TD><TD>0</TD><TD>
If > 0: specifies the number of seconds of socket inactivity the database
allows.  If a client&lt;-&gt;server or server&lt;-&gt;server socket is inactive for this
number of seconds, the database sends a "ping" command to the socket's peer,
and the peer responds with a "ping_ack" command.
<p>
By default, this option is not enabled.  But if you have a packet-filtering
firewall between your database server and its clients (OR between database
storages), you can use this option to prevent that firewall from shutting
down a proxied socket due to inactivity.
<p>
<em>"ping_interval" was added to GOODS by Marc Seter</em>
</TD><TD ALIGN="CENTER">-</TD></TR> 

<TR><TD>server.remote.connections</TD><TD>boolean</TD><TD>0/1</TD><TD>1</TD><TD>
Enable or disable acceptance of remote connections by the GOODS server. 
Disabling remote connections avoids any problems with firewalls. In this case the
GOODS server is able to handle only local connections (Unix socket, Win32 local socket, process socket).
</TD><TD ALIGN="CENTER">+</TD></TR> 
</TABLE><P>

<BLOCKQUOTE>
<OL>
<LI>The last column of this table marks the parameters whose values can be changed 
by the <TT>set</TT> command in interactive dialogue mode. 

<LI>The symbol <B>*</B> used in the file names in this table 
stands for the concrete database name. This name is an argument
passed to GOODSRV.
</OL>
</BLOCKQUOTE><P>

The command line syntax of "<KBD>goodsrv</KBD>" is:

<PRE>
goodsrv &lt;storage name&gt; [&lt;storage-id&gt; [&lt;trace-file-name&gt; | "-"]]
</PRE>

By default "<KBD>goodsrv</KBD>" starts in interactive dialogue mode,
allowing the user to execute some administrative database operations
as well as to see database usage parameters.
Also if the GOODS server was compiled with the trace option, trace information
will be shown at the terminal. If you specify a file name
as the last argument to "<KBD>goodsrv</KBD>", messages will be saved 
in this file. Below is a list of all valid
commands for the "<KBD>goodsrv</KBD>" monitor:

<PRE>
help [COMMAND]                        print information about command(s)
open				      open database
close				      close database
exit				      terminate server
log [LOG_FILE_NAME|"-"]               set log file name
show [CATEGORIES]                     show current server state
monitor PERIOD [CATEGORIES]           periodical monitoring of server state
backup FILE_NAME [TIME [LOG_SIZE]]    schedule online backup process
stop backup			      stop backup process
restore BACKUP_FILE_NAME              restore database from the backup
trace [TRACE_MESSAGE_CLASS]           select trace messages to be printed
set PARAMETER=INTEGER_VALUE           set server parameter value
rename OLD_CLASS_NAME NEW_CLASS_NAME  rename class in the storage
rename CLASS COMPONENT_PATH NEW_NAME  rename component of the class
</PRE>
<BLOCKQUOTE>
<KBD>CATEGORIES</KBD> is the set of "<KBD>servers clients memory transaction 
classes</KBD>". By default all categories are shown.<P>

<KBD>PERIOD</KBD> is the interval in seconds between printings of statistics information.<P>

<KBD>TIME</KBD> is a time interval in seconds and <KBD>LOG_SIZE</KBD>
is a transaction log size.
When either limit is reached, a backup procedure is started (see the example
after this list). The default value for both parameters is 0, which means
that the backup is started immediately.<P>

<KBD>TRACE_MESSAGE_CLASS</KBD> is a space separated list of 
message classes. Only messages belonging to one of the specified classes
will be shown. The complete list of message classes can be obtained
by the "<KBD>help trace</KBD>" command.<P>

<KBD>PARAMETER</KBD> is a valid server parameter (see table above). 
Execute the "<KBD>help set</KBD>" command to obtain a list of all valid
parameter names.<P>

</BLOCKQUOTE><P>
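For example, the following monitor commands (the file name and limit values are
illustrative) schedule an online backup that starts after one hour or when the
transaction log reaches 64 MB, display part of the server state, and change one
of the tunable parameters; use "<KBD>help set</KBD>" to check the exact
parameter names accepted:

<PRE>
backup mydb.bak 3600 67108864
show servers clients
set server.cluster_size=1024
</PRE>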

If you would like to run goodsrv as a background task, set the
"server.admin_telnet_port" parameter in your configuration file.
GOODSRV will not output any messages to stdout; instead, it will
log all output and provide a telnet session on the specified
hostname and port.  By enabling this option, a database
administrator can monitor and maintain a database remotely.<P>

Here's an example setting for the server administration telnet
port:<p>

<pre>server.admin_telnet_port="localhost:8920"</pre><P>

Once the server is started with this parameter set, a database
administrative session can be accessed by issuing the following
command:<p>

<pre>telnet localhost 8920</pre><p>

<H4><A NAME = "embedded">Embedded GOODS server</A></H4>

Usually GOODSRV is started as a separate process, but it is also possible to
start the GOODS server in a separate thread within the same process
(an embedded server). In this case you should link your application with both
the server and the client libraries and use the <code>start_goods_server</code>
and <code>stop_goods_server</code> functions defined in the
<code>confgrtr.h</code> header file (a sketch of such an application follows
the configuration example below).
On Windows you can use GOODS local sockets for communication between the
embedded server and the client, which gives a significant performance
improvement; GOODS chooses local Windows sockets when you specify
<code>localhost</code> as the server address.
On Unix you can use <i>process</i> sockets for communication between threads
within one process. Process sockets can also be used on Windows, but there
they provide no performance advantage over local Windows sockets and do not
work when DLLs are used.
To use process sockets, you should prepend the server name with '!':

<pre>
1
0: !localhost:12345
</pre>
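Below is a minimal sketch of an embedded-server application. The exact
signatures of <code>start_goods_server</code> and <code>stop_goods_server</code>
are not reproduced in this document, so the argument lists used here (and the
client-side call sequence) are assumptions; check <code>confgrtr.h</code> and
the example programs for the actual declarations.

<pre>
// Sketch only: the argument lists of start_goods_server/stop_goods_server and
// the client-side call sequence are assumptions; see confgrtr.h and the GOODS
// examples for the actual declarations.
#include "goods.h"
#include "confgrtr.h"

int main()
{
    start_goods_server("mydb");   // assumed: start the server thread for storage "mydb"

    database db;                  // connect to it as an ordinary GOODS client
    if (db.open("mydb.cfg")) {    // configuration file selecting a local or process socket
        // ... work with persistent objects here ...
        db.close();
    }

    stop_goods_server();          // assumed: shut the embedded server down
    return 0;
}
</pre>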

<H4><A NAME = "browser">Database browser</A></H4>

The database browser is a program which allows you to dump fields of
objects in the database. I wanted to keep this program as simple as possible 
and system independent. The primary purpose of this program is 
to show how metainformation can be extracted from a database. 
There are two versions of the browser: a console application and a CGI version
using a WWW browser to navigate through objects.<P>

The browser can dump values of fields of all objects (including
classes) stored in database storages. The console version of the database
browser <A HREF="src/browser.cxx">"browser.cxx"</A> requires a single command line 
argument, which specifies the name of the
database configuration file. It is not necessary to specify all servers in 
this configuration file; you only have to describe the storages whose objects
you want to inspect. The servers of these storages must, of course, be started
before you run the "<KBD>browser</KBD>" application.
To refer to an object, specify the storage and the object identifier.
It is possible to see a list of available commands by typing "<KBD>help</KBD>".
<P>

For example, entering "0:1" at the browser prompt will display the structure
of the abstract root class, as retrieved from storage "0".  The default
configuration of GOODS uses "0:2" thru "0:ffff" for storing class descriptors;
"0:10000" will be the ID of the first object added to the database.
<P>

You can also use the browser to modify objects in the database.  For example,
let's say that you've inserted an object 0:10077 of the class "Address"
into the database, and that class includes a persistent member "PostalCode".
Entering "set 0:10077.PostalCode=12345" at the browser's ">>" prompt will
modify the object's member value and display the updated object.
<P>
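For illustration, a console browser session combining the commands mentioned
above (the object identifier 0:10077 and the PostalCode member are the
hypothetical examples used in this section; output is not reproduced here)
might consist of:

<PRE>
&gt;&gt; 0:1
&gt;&gt; 0:10077
&gt;&gt; set 0:10077.PostalCode=12345
&gt;&gt; help
</PRE>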

The CGI version of the browser <A HREF="src/cgibrows.cxx">"cgibrows.cxx"</A> 
provides a much more user-friendly interface.
To be able to use this browser you need a WWW server running on the 
computer where the database storage is located. For example, it can
be the Microsoft Peer Web Server included in the NT 4.0 distribution, or the
free Apache WWW server. You have to do some preparation before you can 
use the browser. Edit the file <A HREF="browser.htm">"browser.htm"</A>
(the name can be changed) to specify the name of your host (<CODE>localhost</CODE>
is possible) and the path to the <CODE>cgibrows</CODE> program 
(<CODE>cgibrows.exe</CODE> on Windows). This program can be placed in the 
default CGI-script directory used by the WWW server, or another directory
can be registered with the WWW server. It is preferable to 
keep the <CODE>cgibrows</CODE> program in the same directory as the database;
otherwise you should specify the full absolute path to the directory where the
database is located when opening the browser.
Be sure that the user under whose name the CGI script is executed
has enough permissions to access files and ports on the local computer 
(the <CODE>GUEST</CODE> account by default has no such permissions).<P>

No further preparation is needed. Now open the page 
<A HREF="browser.htm">"browser.htm"</A> in your favorite WWW browser and
specify the database name and storage number. The database name is the name
of the configuration file without the extension. If this file and the
<CODE>cgibrows</CODE> program are located in different directories, you should
specify the absolute path to this file. By default, the storage with
identifier 0 is opened. To start the browser,
press the <B>Open</B> button. The browser dumps the fields of the root object.
Non-NULL reference fields are output as the pair 
<CODE>storageID:objectID</CODE>. To navigate through objects, click on a
reference field. Do not forget the <B>Back</B>, <B>Forward</B>
and <B>Go</B> buttons, which can help you in navigation. You can browse 
the database from any computer which can access the WWW server.<P>


<H4><A NAME = "bugdb">Bug tracking database</A></H4>

Example "Bug tracking database" illustrates developing Web application
using GigaBASE and WWW API. It can be used either with any WWW server 
(for example Apache or Microsoft Personal Web Server) or with
built-in HTTP server. To compile BUGDB for interaction with external server, 
define macro <code>USE_EXTERNAL_HTTP_SERVER</code>. 
Database can be
accessed from any computer running some WWW browser. To build 
<code>bugdb</code> application in Unix you should specify <code>www</code>
target to make utility.<P>

To run BUGDB with an external WWW server you should first customize the
WWW server.
It must be able to access the <code>buglogin.htm</code> file and run the
CGI script <code>cgistub</code>. Also, the user under whose name CGI scripts
are executed must have enough permissions to establish a socket connection with 
the GigaBASE application. It is better to run the GigaBASE application and the
GigaBASE CGI scripts under the same user. For example, I have changed the 
following variables in the Apache configuration files:

<PRE>
httpd.conf:
        User konst
        Group users

access.conf:
        &lt;Directory /usr/konst/gigabase&gt;
        Options All
        AllowOverride All
        allow from all
        &lt;/Directory&gt;

        DocumentRoot /usr/konst/gigabase

srm.conf:
        ScriptAlias /cgi-bin/ /usr/konst/gigabase/
</PRE>

It is also possible to leave the WWW server configuration unchanged: place the
<code>cgistub</code> and <code>bugdb</code> programs in the standard CGI-script
directory and change the path to the <code>cgistub</code> program in the file
<code>buglogin.htm</code>.
After preparing the configuration files, start the WWW server.<P>

No configuration is needed when you are using the built-in HTTP server.
Just make sure that the user has enough permissions to access port number 80
(the default HTTP port). If some HTTP server is already running on your
computer, either stop it or specify another port for the
built-in HTTP server. In the latter case you also need to specify the same port
in the WWW browser settings so that it connects to the
right HTTP server.<P>

After starting the <code>bugdb</code> application itself, you can visit the
<code>buglogin.htm</code> page in a WWW browser and start working with the
BUGDB database. When the database is initialized, the "administrator" user is 
created in it. The first time, you should log in as administrator with an
empty password. Then you can create other users/engineers and 
change the password. BUGDB uses no secure protocol for passing passwords and
does little to restrict users' access to the database,
so if you are going to use BUGDB in real life, you should first
think about protecting the database from unauthorized access.<P>




<H4><A NAME = "examples">Running GOODS examples</A></H4>

You can try to play with some examples of GOODS application programs: 
<PRE>
<A HREF="examples/guess.cxx">guess.cxx</A>    - game "Guess an animal",
<A HREF="examples/unidb.cxx">unidb.cxx</A>    - "university" database
<A HREF="examples/testblob.cxx">testblob.cxx</A> - work with binary large object
<A HREF="examples/tstbtree.cxx">tstbtree.cxx</A> - program for measuring of GOODS performance
</PRE>

The program "<KBD>guess.cxx</KBD>" is the simplest database application
using an optimistic approach for synchronization. This program creates a
binary tree with information about animals (or anything else which you have
entered). To run this program you should first start the database server
by the following command:<P>

<PRE>
&gt; goodsrv guess
</PRE>

Then you can start any number of client sessions by running the
program <KBD>guess</KBD> in separate windows. The optimistic approach
in this application means that if, while you are answering the program's
questions, another user has updated the same branch of the tree, you have
to start from the beginning.<P>

The program "<KBD>unidb.cxx</KBD>" is a sample GOODS application working with
a distributed database. The structure of this database is very simple. The
university database contains a B_tree of students and a B_tree of professors.
Each student object has a diploma component and is attached to one
professor (his advisor). A professor owns an unordered set of students 
(a group) attached to him. Students can be added, removed, or reattached from
one professor to another. A professor can be added and removed,
unless he is some student's advisor. This
application shows a simple menu, allowing the user to select an action and
also uses the GOODS change notification mechanism to handle database
modifications done by other clients (each time a student or professor is
added or removed by some database client, the total number of students and 
professors in the university is updated and shown at the top of the menu).<P>

Objects in this application are distributed between two storages:
the student's storage (0) and the professor's storage (1). All student objects
(and objects referenced from them, such as "<KBD>string</KBD>" and
"<KBD>set_member</KBD>" objects) are placed in storage 0.
All professor objects are placed in storage 1.
To run this application, you need to start two servers:<P>

<PRE>
terminal-1&gt; goodsrv unidb 0
terminal-2&gt; goodsrv unidb 1
</PRE>

Then you can start any number of clients by running the program 
"<KBD>unidb</KBD>" without any arguments at different windows.<P>

Dealing with large multimedia objects is shown in the sample
"<KBD>testblob</KBD>" application. Two classes from the GOODS class library 
are used in this application: 
the <CODE>blob</CODE> class and the <CODE>hash_table</CODE> class.
The <CODE>blob</CODE> class supports incremental loading of large binary objects
in parallel with their processing (for example, you can unpack and transfer
data from a buffer to an audio device while the next part of the object is loaded
from the database). This application also tests multitasking at the client side.

The program "<CODE>testblob</CODE>" is very simple and has only few commands,
 allowing you to insert, extract and remove files in/from the database. 
Type the "<KBD>help</KBD>" command to learn more about the command syntax. 
To run this application you should activate the database server by issuing
the command "<KBD>goodsrv blob</KBD>". Then start the application itself.<P>

One of the most effective structures for storing spatial objects is the R-tree
(proposed by Guttman). The GOODS library contains an "<KBD>R_tree</KBD>" class
which implements Guttman's quadratic split algorithm. To test this class
implementation, a very simple model of a spatial database is developed:
the "<KBD>tstrtree</KBD>". All objects in this program are placed in 
R-trees and/or H-trees (a combination of B*-tree and hash table)
and can be accessed either by name or by coordinates. 
The configuration file for this program is "<KBD>rtree.cfg</KBD>".
So issue the command "<KBD>goodsrv rtree</KBD>" to start the server; and 
then the command "<CODE>tstrtree</CODE>" to run the test program. 
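Following the document's convention for the other examples, the two commands
are:
<PRE>
&gt; goodsrv rtree
&gt; tstrtree
</PRE>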


<H4><A NAME = "performance">Measuring GOODS performance</A></H4>

Program "<CODE>tstbtree</CODE>" simulates parallel work of several clients
with the database.
To run this program, you should first start the "<CODE>goodsrv</CODE>" server
with the following parameters:

<PRE>
&gt; goodsrv btree 0
</PRE>

and then run a number of client applications using the "spawn" utility,
for example:

<PRE>
&gt; spawn 32 8 tstbtree
</PRE>

This command invokes the "<CODE>tstbtree</CODE>" program 32 times, with at
most 8 instances of the program executing simultaneously. 
Each instance of the "<CODE>tstbtree</CODE>" program inserts 100 records
of randomly distributed size in the range 6..1030 bytes into 
one of four B-trees. Then it repeats 10 times 
a loop that searches for each of these records in the B-tree. Finally 
it removes all created records from the B-tree. After the end of
the test there should be no records in the database, so it is possible
to run the test once again. Because it is the performance of the GOODS server
that is measured, not the performance of the B-tree implementation itself,
the size of the B-tree page was reduced to 4 entries
to increase the number of objects participating in the transactions.<P>

It is interesting to investigate how the total system performance depends
on the number of programs executed in parallel. 
In theory the best result should be obtained with exactly 4 
concurrent applications (most of the time they will all work with different
B-trees, so no synchronization between them is necessary). 
If more than 4 applications run in 
parallel, the large number of notification messages used to synchronize the
caches of these applications reduces the total system performance.
The following table contains results (elapsed time in seconds of 
test execution) for some systems:<P>

<TABLE BORDER>
<CAPTION>GOODS transaction performance test</CAPTION>
<TR>
<TH>Parallel processes</TH>
<TH>AlphaServer 2100 2x250 DigitalUnix 4.0 portable</TH> 
<TH>AlphaServer 2100 2x250 DigitalUnix 4.0 pthreads</TH> 
<TH>PowerPC 120 MkLinux portable</TH>
<TH>PPro-200 Linux portable</TH> 
<TH>PPro-200 WinNT 4.0</TH> 
<TH>SPARCstation-20 2x50 Solaris 2.5 pthreads</TH> 
<TH>PPro-233 FreeBSD 3.0  portable</TH> 
<TH>UltraSparc 2x300 pthreads</TH>
</TR>
<TR>
<TD>1</TD>
<TD>239</TD>
<TD>227</TD>
<TD>121</TD>
<TD>73</TD>
<TD>57</TD>
<TD>349</TD>
<TD>117</TD>
<TD>130</TD>
</TR>
<TR>
<TD>2</TD>
<TD>226</TD>
<TD>187</TD>
<TD>124</TD>
<TD>95</TD>
<TD>47</TD>
<TD>339</TD>
<TD>116</TD>
<TD>119</TD>
</TR>
<TR>
<TD>4</TD>
<TD>221</TD>
<TD>112</TD>
<TD>130</TD>
<TD>66</TD>
<TD>30</TD>
<TD>178</TD>
<TD>65</TD>
<TD>70</TD>
</TR>
<TR>
<TD>8</TD>
<TD>233</TD>
<TD>114</TD>
<TD>156</TD>
<TD>67</TD>
<TD>37</TD>
<TD>188</TD>
<TD>68</TD>
<TD>71</TD>
</TR>
<TR>
<TD>16</TD>
<TD>240</TD>
<TD>124</TD>
<TD>247</TD>
<TD>68</TD>
<TD>44</TD>
<TD>209</TD>
<TD>68</TD>
<TD>74</TD>
</TR>
</TABLE><P>

<A HREF = "graph1.gif">Graphic representation of these results</A><P>

The execution time of this test mostly depends on the time needed for
synchronous writes to the transaction log. The improvement in system performance
when client requests are executed in parallel comes mostly from merging
synchronous write requests. The following table shows the average number of
merged synchronous writes for different systems and numbers of clients
running in parallel:<P>

<TABLE BORDER>
<CAPTION>Parallel transaction log writes</CAPTION>
<TR>
<TH>Parallel processes</TH>
<TH>AlphaServer 2100 2x250 DigitalUnix pthreads</TH> 
<TH>PPro-200 WinNT 4.0</TH> 
</TR>
<TR>
<TD>2</TD>
<TD>1.009</TD>
<TD>1.854</TD>
</TR>
<TR>
<TD>4</TD>
<TD>1.845</TD>
<TD>2.948</TD>
</TR>
<TR>
<TD>8</TD>
<TD>1.805</TD>
<TD>1.953</TD>
</TR>
<TR>
<TD>16</TD>
<TD>1.758</TD>
<TD>1.767</TD>
</TR>
</TABLE><P>

<A HREF = "graph2.gif">Graphic representation of these results</A><P>

It is also interesting to measure the database performance when
the transaction log synchronous writes option is switched off.
In this case mainly the efficiency of the implementation of the GOODS server
components and their interactions is measured.<P>

<TABLE BORDER>
<CAPTION>GOODS transaction performance test with asynchronous writes</CAPTION>
<TR>
<TH>Parallel processes</TH>
<TH>AlphaServer 2100 2x250 DigitalUnix 4.0 portable</TH> 
<TH>AlphaServer 2100 2x250 DigitalUnix 4.0 pthreads</TH> 
<TH>PPro-200 Linux portable</TH> 
<TH>PPro-200 WinNT 4.0</TH> 
<TH>SPARCstation-20 2x50 Solaris 2.5 pthreads</TH> 
<TH>PPro-233 FreeBSD 3.0 portable</TH> 
<TH>UltraSparc 2x300 pthreads</TH>
</TR>
<TR>
<TD>1</TD>
<TD>28</TD>
<TD>47</TD>
<TD>19</TD>
<TD>22</TD>
<TD>102</TD>
<TD>29</TD>
<TD>40</TD>
</TR>
<TR>
<TD>2</TD>
<TD>23</TD>
<TD>30</TD>
<TD>21</TD>
<TD>13</TD>
<TD>87</TD>
<TD>26</TD>
<TD>27</TD>
</TR>
<TR>
<TD>4</TD>
<TD>21</TD>
<TD>30</TD>
<TD>17</TD>
<TD>17</TD>
<TD>90</TD>
<TD>21</TD>
<TD>27</TD>
</TR>
<TR>
<TD>8</TD>
<TD>24</TD>
<TD>41</TD>
<TD>21</TD>
<TD>21</TD>
<TD>137</TD>
<TD>21</TD>
<TD>31</TD>
</TR>
<TR>
<TD>16</TD>
<TD>39</TD>
<TD>56</TD>
<TD>24</TD>
<TD>33</TD>
<TD>174</TD>
<TD>24</TD>
<TD>37</TD>
</TR>
</TABLE><P>

<A HREF = "graph3.gif">Graphic representation of these results</A><P>

There are also two programs (not working with the database) which can be
used for testing and measuring the performance of socket and multitasking 
libraries: 
<A HREF = "examples/testsock.cxx">"testsock.cxx"</A>
and <A HREF = "examples/testtask.cxx">"testtask.cxx"</A>.<P>

The socket test is performed only with local clients, i.e. the server and the
client run on the same computer. This test consists of two programs (client and server)
which interact with each other in the same way as in normal GOODS
applications. The client sends 1000000 requests to the server, waiting
for a server response for each request. Each client request consists of
two parts, header and body, so sending a request to the server requires two
socket write operations. This program mostly measures the efficiency
of the socket library implementation on the particular system 
(UNIX-domain sockets on Unix). In the case of Windows NT/95, however, the 
performance of the GOODS local sockets implementation is tested.
The table below represents results for different systems 
(elapsed time in seconds of test execution):

<TABLE BORDER>
<CAPTION>Performance of socket library</CAPTION>
<TR>
<TH>AlphaServer 2100 2x250 DigitalUnix 4.0</TH> 
<TH>PPro-200 WinNT 4.0 WinSockets</TH> 
<TH>PPro-200 WinNT 4.0 GOODS local sockets</TH> 
<TH>PPro-200 Linux</TH> 
<TH>SPARCstation-20 2x50 Solaris 2.5</TH> 
<TH>PPro-233 FreeBSD 3.0 portable</TH> 
</TR>
<TR>
<TD>374</TD>
<TD>275</TD>
<TD>25</TD>
<TD>109</TD>
<TD>541</TD>
<TD>86</TD>
</TR>
</TABLE><P>

The program <KBD>testtask</KBD>
measures the performance of the multitasking library.
This test simply starts a number of tasks (threads). Each of them 
performs the following loop: wait for signal, enter critical section 
and wake up (i.e. send signal to) another task. 
The number of loops, the total number of tasks and the
number of concurrently running tasks (number of tasks initially signaled)
are parameters which can be specified from the command line. Default
values for these parameters (these values were used to obtain the results below)
are: loops - 50000, tasks - 20, activations - 4. The following results 
(elapsed time in seconds of test execution) were measured on different 
platforms:<P> 

<TABLE BORDER>
<CAPTION>Performance of GOODS multitasking library</CAPTION>
<TR>
<TH>AlphaServer 2100 2x250 DigitalUnix 4.0 portable</TH> 
<TH>AlphaServer 2100 2x250 DigitalUnix 4.0 pthreads</TH> 
<TH>PPro-200 WinNT 4.0</TH> 
<TH>PPro-200 Linux portable</TH> 
<TH>SPARCstation-20 2x50 Solaris 2.5 pthreads</TH> 
<TH>PPro-233 FreeBSD 3.0 portable</TH> 
</TR>
<TR>
<TD>3</TD>
<TD>29</TD>
<TD>11</TD>
<TD>2</TD>
<TD>65</TD>
<TD>2</TD>
</TR>
</TABLE><P>

<H2><A NAME = "installation">Installation of GOODS</A></H2>

<H3><A NAME = "compilation">Compilation of GOODS sources</A></H3>

GOODS currently runs under Windows NT/95 and various Unix dialects. 
The system-specific code is encapsulated within a few files and is
accessed through abstract, system-independent interfaces: 
<A HREF = "inc/task.h">"task.h"</A>, <A HREF = "inc/sockio.h">"sockio.h"</A> and
<A HREF = "inc/file.h">"file.h"</A>. There are several system-specific 
implementations of these interfaces. To achieve maximal performance, 
advanced features of modern operating systems are used, such as
memory-mapped files, gathered I/O (writev) and threads. This can be a source of 
problems when porting GOODS to some old Unix dialects.<P>

To build GOODS you should execute the <KBD>config</KBD> script in 
the source directory. You can specify the name of your system or let
the configuration script try to guess the target system itself.
The configuration script only copies one system specific makefile version
to the file "<KBD>makefile</KBD>". There is a common makefile for
all Unix systems "<KBD>makefile.uni</KBD>" containing targets and rules. 
All system specific makefiles only define some parameters 
(such as <KBD>CC</KBD>, <KBD>CFLAGS</KBD>...) and include the
"<KBD>makefile.uni</KBD>". 
Apart from the name of the C++ compiler and compiler flags,
systems mainly differ in which type of multitasking library
is used by GOODS (portable, implemented by setjmp/longjmp;
or based on Posix pthreads).<P>

It is not necessary to run the configuration
script on Windows. I am using Microsoft Visual C++ 5.0 for compiling
GOODS. The makefile for Windows has the name "<KBD>windows.mak</KBD>".
Compiler dependent definitions are collected in "<KBD>makefile.inc</KBD>"
file and rules are placed in "<KBD>makefile.win</KBD>" files. 
There is a special <KBD>MAKE.BAT</KBD> file which invokes <KBD>NMAKE</KBD>
and specifies the name of the makefile.<P>

GOODS itself does not use the C++ exception mechanism. If you want to use 
exceptions in your application, you should add the
<code>-DGOODS_SUPPORT_EXCEPTIONS</code> option to the DEFS macro in the makefile
and compile GOODS with this option (on some platforms it is already enabled by default).
Only in this case will GOODS be able to correctly perform cleanup when an exception 
caused by a database error or thrown by a user function is generated.
If you want a transaction to be aborted instead of committed when a thrown
exception is not caught within any method of a persistent-capable object, you
should derive all application exception classes from the <code>dbException</code>
class; the standard metaobjects will then abort the current transaction.<P>
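As a minimal sketch (assuming <code>dbException</code> can be subclassed
directly and has an accessible default constructor; check its actual
declaration in the GOODS headers), an application exception class could be
defined as follows:

<pre>
// Sketch only: assumes dbException has an accessible default constructor;
// verify against the actual declaration in the GOODS headers.
class account_error : public dbException {};

// When an account_error escapes a method of a persistent-capable object,
// the standard metaobjects abort the current transaction instead of
// committing it, because the exception is derived from dbException.
</pre>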

<a name="MFCDLL"></a>
Marc Seter has incorporated support for DLLs into GOODS.
So, for anyone who wants to link the client and/or server
library into an MFC-based application that uses MFC as a shared DLL, do the following:

<OL>
<LI>Compile clntmfc.dll, clntmfc.lib, clntmfcd.dll and clntmfcd.lib
    using the MSVC makefile in goods/clntmfc.  
    <pre>
    C:\GOODS\CLNTMFC> nmake
    C:\GOODS\CLNTMFC> nmake BUILD=RELEASE
    </pre>
<LI>Compile srvrmfc.dll, srvrmfc.lib, srvrmfcd.dll and srvrmfcd.lib
    using the MSVC makefile in goods/srvrmfc.  
    <pre>
    C:\GOODS\SRVRMFC> nmake
    C:\GOODS\SRVRMFC> nmake BUILD=RELEASE
    </pre>
<LI>Link your DLL or EXE against clntmfc.lib (the DLL export library)
    for your Release configuration.
<LI>Link your DLL or EXE against clntmfcd.lib (the DLL export library)
    for your Debug configuration.
<LI>Link your server-side DLL or EXE against srvrmfc.lib (the DLL
    export library) for your server's Release configuration.
<LI>Link your server-side DLL or EXE against srvrmfcd.lib (the DLL
    export library) for your server's Debug configuration.
<LI>Define <code>__GOODS_MFCDLL</code> in your DLL or EXE (both Debug
    and Release configurations).
<LI>Be sure to include the .dll files in the same directory as your
    project's .EXE.
</OL><P>


As an alternative to steps 1 and 2, you can use the provided MSVC 6.0 workspace and project files 
to build the libraries. In both the makefiles and the project files the option 
<code>GOODS_SUPPORT_EXCEPTIONS</code> is enabled by default.<P>

After configuration, just issue the <code>make</code> command to build the GOODS
client and server libraries and the administrative utilities. To build the sample
GOODS applications and tests, execute the <code>make test</code> command.
To build the Java API, execute the <code>make jar</code> command. You can build
all these targets by executing the <code>make all</code> command.
It is also possible to invoke make locally in the <code>src</code>, 
<code>examples</code> or <code>java</code> directory.
If you want to use the provided native
socket dynamic library, build it by running the <code>buildlocalsocket.bat</code>
file in the <code>goods/java</code> directory. Refer to the chapter <A HREF="java/JavaAPI.htm#sockets">Native socket library</A> for further information.<P>
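Putting the configuration and build steps together on Unix (the target name
passed to the <KBD>config</KBD> script is only an illustration; you can also
run the script without arguments and let it guess the system):

<PRE>
&gt; ./config linux-pthreads
&gt; make          # client and server libraries, administrative utilities
&gt; make test     # sample applications and tests
&gt; make jar      # Java API
</PRE>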

The command <code>make install</code> on Unix copies header files, 
libraries and utilities to the locations specified by the
<code>$(INC_INSTALL_PATH)</code>, <code>$(LIB_INSTALL_PATH)</code> and <code>$(BIN_INSTALL_PATH)</code> 
variables defined in the system-dependent makefile.<P>


<H3><A NAME = "sources">Description of GOODS sources</A></H3>

<DL>
<DT><A HREF = "src/async.cxx">async.cxx</A><DD>
Asynchronous event manager for the portable multitasking library. 
A task with priority 0 repeatedly calls the Unix <KBD>select</KBD> function 
to find out which channels are ready for input/output.
<DT><A HREF = "inc/async.h">async.h</A><DD>
Specification of the asynchronous event manager.
<DT><A HREF = "src/browser.cxx">browser.cxx</A><DD>
Simple database browser. This application shows how metainformation can
be extracted from database storage.
<DT><A HREF = "examples/bugdb.cxx">bugdb.cxx</A><DD>
Implementation of Bug Tracking Database - example of Web publishing 
of GOODS database.
<DT><A HREF = "examples/bugdb.h">bugdb.h</A><DD>
Definition of classes of Bug Tracking Database.
<DT><A HREF = "src/wwwapi.cxx">wwwapi.cxx</A><DD>
Implementation of HTTP server interface for the GOODS database applications.
<DT><A HREF = "inc/wwwapi.h">wwwapi.h</A><DD>
Interface to the HTTP server for the GOODS database applications.
<DT><A HREF = "src/cgibrows.cxx">cgibrows.cxx</A><DD>
CGI version of the database browser. Using this CGI program you can browse database
objects from a WWW browser if a WWW server is installed on your computer.
<DT><A HREF = "src/cgistub.cxx">cgistub.cxx</A><DD>
CGI script used as gateway between WWW server and GOODS application.
<DT><A HREF = "src/class.cxx">class.cxx</A><DD>
Classes supporting class information for the client application at runtime.
Methods of these classes are responsible for building class and field 
descriptors and for conversion of object instances from storage format to
application representation and vice versa.
<DT><A HREF = "inc/class.h">class.h</A><DD>
Definition of classes providing reflection property to client application.
<DT><A HREF = "src/classmgr.cxx">classmgr.cxx</A><DD>
Storage class manager implementation.
<DT><A HREF = "src/classmgr.h">classmgr.h</A><DD>
Interface for storage class manager.
<DT><A HREF = "src/client.cxx">client.cxx</A><DD>
Implementation of application independent client interface with storage.
<DT><A HREF = "src/client.h">client.h</A><DD>
Definition of the database storage interface for clients.
<DT><A HREF = "inc/config.h">config.h</A><DD>
Definition of some global types and constants used in GOODS.
<DT><A HREF = "src/console.cxx">console.cxx</A><DD>
Implementation of GOODS console interface. 
These methods perform input and output of data from/to terminal.
<DT><A HREF = "inc/console.h">console.h</A><DD>
Definition of the GOODS console interface. 
<DT><A HREF = "inc/convert.h">convert.h</A><DD>
Definition of functions for (un)packing atomic types from storage format
(big endian, unaligned) to application representation. 
<DT><A HREF = "src/ctask.cxx">ctask.cxx</A><DD>
Implementation of the portable non-preemptive multitasking library, 
using setjmp/longjmp function for context switching.
<DT><A HREF = "inc/ctask.h">ctask.h</A><DD>
Definition of classes used by portable non-preemptive multitasking library.
<DT><A HREF = "src/database.cxx">database.cxx</A><DD>
Application dependent part of the client interface with database.
This part of the interface is responsible for synchronizing the access to
the database, packing/unpacking objects, handling server messages.
<DT><A HREF = "inc/database.h">database.h</A><DD>
Definition of the application dependent part of client interface with database.
<DT><A HREF = "src/dbscls.cxx">dbscls.cxx</A><DD>
Implementation of the application database classes: set, B-tree, hash table,
blob, dynamic arrays and strings. 
<DT><A HREF = "inc/dbscls.h">dbscls.h</A><DD>
Collection of application database classes. 
<DT><A HREF = "src/file.h">file.h</A><DD>
Abstract file interface. This interface provides operating system independent 
methods for working with files.
<DT><A HREF = "inc/goods.h">goods.h</A><DD>
Main include file for GOODS client application.
<DT><A HREF = "src/goodsrv.cxx">goodsrv.cxx</A><DD>
Simple database storage server program. This storage server is powerful
and flexible enough to be used in various applications.
<DT><A HREF = "examples/guess.cxx">guess.cxx</A><DD>
Sample GOODS application:  game "Guess an animal".
<DT><A HREF = "src/memmgr.cxx">memmgr.cxx</A><DD>
Implementation of the GOODS storage server memory manager with 
distributed and incremental garbage collector.
<DT><A HREF = "src/memmgr.h">memmgr.h</A><DD>
Abstract interface for the server memory manager. 
<DT><A HREF = "src/mmapfile.h">mmapfile.h</A><DD>
Interface for a memory-mapped file.
<DT><A HREF = "src/mop.cxx">mop.cxx</A><DD>
Implementation of basic metaobjects. From these metaobjects, which
cover the most common database access patterns, you can derive your
own metaobject satisfying the requirements of a concrete application.
<DT><A HREF = "inc/mop.h">mop.h</A><DD>
Metaobject protocol definition. This protocol is used in GOODS for 
controlling object access synchronization aspects, transactions, 
and object cache management. 
<DT><A HREF = "src/multfile.cxx">multfile.cxx</A><DD>
Implementation of a file consisting of several physical segments 
(operating system files). Such a multifile makes it possible to overcome
the operating system limitation on maximal file size.
<DT><A HREF = "src/multfile.h">multfile.h</A><DD>
Definition of the multifile class.
<DT><A HREF = "src/object.cxx">object.cxx</A><DD>
Implementation of the <KBD>object</KBD> class: the base class for all
GOODS objects. The object index, which is used for indirect object access,
is implemented in this file too.
<DT><A HREF = "inc/object.h">object.h</A><DD>
Definition of the <KBD>object</KBD> class - top class in object hierarchy.
<DT><A HREF = "src/objmgr.cxx">objmgr.cxx</A><DD>
Implementation of the GOODS storage server object access manager. This manager
is responsible for handling object locks and notification of clients
about object instance deterioration.
<DT><A HREF = "src/objmgr.h">objmgr.h</A><DD>
Definition of the storage server object access manager interface.
<DT><A HREF = "src/osfile.h">osfile.h</A><DD>
Definition of the file class corresponding to an operating system file.
There are several implementations of this class for different platforms.
<DT><A HREF = "src/poolmgr.cxx">poolmgr.cxx</A><DD>
Implementation of the GOODS storage server page pool manager, which is 
responsible for efficient work with database files. 
<DT><A HREF = "src/poolmgr.h">poolmgr.h</A><DD>
Definition of the storage server page pool manager interface. 
<DT><A HREF = "src/protocol.cxx">protocol.cxx</A><DD>
Implementation of the client-server protocol methods.
<DT><A HREF = "inc/protocol.h">protocol.h</A><DD>
Definition of the protocol used for client-server and server-server communication.
<DT><A HREF = "src/ptask.cxx">ptask.cxx</A><DD>
Implementation of the multitasking library using Posix threads.
<DT><A HREF = "inc/ptask.h">ptask.h</A><DD>
Definition of classes used in multitasking library for Posix threads.
<DT><A HREF = "inc/refs.h">refs.h</A><DD>
Definition of smart pointers for GOODS C++ interface.
<DT><A HREF = "examples/rtree.h">rtree.h</A><DD>
Definition of the R-tree class (effective search structure for spatial objects)
<DT><A HREF = "examples/rtree.cxx">rtree.cxx</A><DD>
Implementation of the R-tree class 
<DT><A HREF = "src/server.cxx">server.cxx</A><DD>
Database server methods implementation. This server is responsible for the
coordination of work of all storage managers and interaction with clients.
<DT><A HREF = "src/server.h">server.h</A><DD>
Definition of the database storage server class. 
<DT><A HREF = "inc/sockio.h">sockio.h</A><DD>
Abstract socket interface. A socket is a reliable bidirectional connection
used for implementation of client-server and server-server communications.
<DT><A HREF = "examples/spawn.cxx">spawn.cxx</A><DD>
Auxiliary utility for spawning several parallel applications.
This utility can be used for testing GOODS performance.
<DT><A HREF = "inc/stdinc.h">stdinc.h</A><DD>
List of include files common for all GOODS modules.
<DT><A HREF = "inc/stdtp.h">stdtp.h</A><DD>
Definition of standard types and including standard system headers.
<DT><A HREF = "src/storage.h">storage.h</A><DD>
Definition of the application independent client interface to 
the GOODS storage.
<DT><A HREF = "inc/support.h">support.h</A><DD>
Set of support classes and functions for GOODS modules:
dynamic arrays, buffers, hash functions...
<DT><A HREF = "inc/task.h">task.h</A><DD>
System independent multitasking interface. This header file contains 
definitions of following classes: task, mutex, semaphore, event, 
eventex, semaphorex.
<DT><A HREF = "examples/template.cxx">template.cxx</A><DD>
Template for the GOODS client application.
<DT><A HREF = "examples/testblob.cxx">testblob.cxx</A><DD>
Test program for binary large objects. This program can be used as
an example for creating your own multimedia objects.
<DT><A HREF = "examples/testsock.cxx">testsock.cxx</A><DD>
Test program for testing socket performance.
<DT><A HREF = "examples/testtask.cxx">testtask.cxx</A><DD>
Test program for GOODS multitasking libraries.
<DT><A HREF = "src/transmgr.cxx">transmgr.cxx</A><DD>
Implementation of the GOODS storage server transaction manager, which maintains
a log file to provide transaction recoverability.
This manager handles local and global (distributed) transactions.
<DT><A HREF = "src/transmgr.h">transmgr.h</A><DD>
Definition of the interface for storage server transaction manager.
<DT><A HREF = "examples/tstbtree.cxx">tstbtree.cxx</A><DD>
Test program for the B-tree implementation in GOODS. This program can also
be used for measuring GOODS performance.
<DT><A HREF = "examples/tstrtree.cxx">tstrtree.cxx</A><DD>
Test program for the R-tree implementation in GOODS (very simple spatial database).
<DT><A HREF = "examlpes/unidb.cxx">unidb.cxx</A><DD>
Example of a GOODS application: "university database".
<DT><A HREF = "src/unifile.cxx">unifile.cxx</A><DD>
Implementation of the <KBD>os_file</KBD> class for Unix. This implementation
supports concurrent access to files by several tasks, merging of
parallel synchronous write requests, alignment of synchronous writes
to the operating system file block boundary. The last two capabilities greatly
increase the performance of the server by committing transactions 
more efficiently. 
<DT><A HREF = "src/unisock.cxx">unisock.cxx</A><DD>
Implementation of the <KBD>socket</KBD> class for Unix.
<DT><A HREF = "inc/unisock.h">unisock.h</A><DD>
Definition of the class representing Unix sockets.
<DT><A HREF = "src/winfile.cxx">winfile.cxx</A><DD>
Implementation of the <KBD>os_file</KBD> class for Windows.
Optimizations include merging of synchronous writes into a single request to
the operating system.
<DT><A HREF = "src/w32sock.cxx">w32sock.cxx</A><DD>
Implementation of the <KBD>socket</KBD> class for Windows. This class
uses the WinSockets library for remote connections and provides its own
efficient implementation for local (within one computer) connections 
(an analog of UNIX-domain sockets). This local sockets implementation uses
cyclic buffers in shared memory and is more than 10 times faster than
WinSockets.
<DT><A HREF = "inc/w32sock.h">w32sock.h</A><DD>
Definition of socket classes for Windows.
<DT><A HREF = "src/wtask.cxx">wtask.cxx</A><DD>
Implementation of multitasking library for Windows.
<DT><A HREF = "inc/wtask.h">wtask.h</A><DD>
Definition of classes from multitasking library for Windows.
</DL><P>

<H2><A NAME = "distribution">Distribution of GOODS</A></H2>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the <A HREF="#Software">Software</A>), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:<P>

<A NAME="Software">
<B>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
AUTHOR OF THIS SOFTWARE BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR 
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
</B>
</A><P>

I will provide e-mail support and help you with development of 
GOODS applications.<P>

<HR>
<P ALIGN="CENTER"><A HREF="http://www.garret.ru/~knizhnik">
<B>Look for new version at my homepage</B></A><B> | </B>
<A HREF="mailto:knizhnik@garret.ru">
<B>E-Mail me about bugs and problems</B></A></P>
</BODY>
</HTML>