The O_ITable class's performance is unreasonably slow.
Currently the most important function to optimize is
O_ITable::Load(), which takes about 30% of the message
store runtime in some common use cases. Of that,
O_ITable::GetRowAt() accounts for a little more than 50%.
O_ITable::GetRowAt() was always known to be too slow,
but I wrote it that way to keep the implementation simple.
The issue is not as bad as it could be at the moment,
since O_ITable frequently copies all its data into an
ITableData object and passes that to MAPI. Still, we
would like to use the O_ITable object itself more
often, and the situation is bad enough to warrant
attention right now.
Rewriting O_ITable::Load(), and any of its dependent
subroutines, may require O_ITable schema changes that
could then affect even more O_ITable functions,
requiring more testing, etc.
Logged In: YES
user_id=1456
O_ITable needs a caching layer and a transaction API.
For caching, we can run a query that would load the
next 'N' ITable rows into an in-memory temp table.
O_ITable::GetRowAt() would then use the in-memory
temp table for access. This should make things very
fast. When O_ITable::GetRowAt() needs more rows,
the cache retrieves the next 'N' rows from disk.
Rows could probably be written out in the same
manner in the future.
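The caching idea above can be sketched roughly as follows. This is an illustrative stand-in, not the real O_ITable API: RowCache, FetchBlockFn, and the string rows are all assumptions; in practice the fetch callback would be the disk query that fills the in-memory temp table.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of the proposed caching layer: rows are fetched
// from "disk" in blocks of N and held in an in-memory window.
// GetRowAt() serves reads from the window and refills it whenever the
// requested index falls outside the cached range.
class RowCache {
public:
    using Row = std::string;
    // Fetches up to `count` rows starting at `first`; a stand-in for
    // the disk query that would populate the in-memory temp table.
    using FetchBlockFn =
        std::function<std::vector<Row>(std::size_t first, std::size_t count)>;

    RowCache(FetchBlockFn fetch, std::size_t blockSize)
        : fetch_(std::move(fetch)), blockSize_(blockSize) {}

    const Row& GetRowAt(std::size_t index) {
        if (index < first_ || index >= first_ + rows_.size()) {
            // Cache miss: pull the block of N rows containing `index`.
            first_ = index - (index % blockSize_);
            rows_ = fetch_(first_, blockSize_);
        }
        return rows_[index - first_];
    }

private:
    FetchBlockFn fetch_;
    std::size_t blockSize_;          // 'N' in the description above
    std::size_t first_ = 0;          // index of the first cached row
    std::vector<Row> rows_;          // in-memory window of cached rows
};
```

With this shape, sequential access costs one disk query per N rows instead of one per row, which is where the expected speedup comes from.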
For transactions, O_ITable should introduce new
functions that switch between the disk and in-memory
databases. Perhaps an 'isTransacted' flag and a
SaveChanges() method, as with O_IProp.
O_IProp::SaveChanges() would have to call
SaveChanges() on dependent objects like O_ITable
objects.
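A minimal sketch of how the 'isTransacted' flag and the cascading SaveChanges() might fit together. TransactedTable and TransactedProp are illustrative names standing in for O_ITable and O_IProp; the map-backed "disk" is just a placeholder for the real database.

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <string>

// Hypothetical sketch: while isTransacted is set, writes land in an
// in-memory buffer; SaveChanges() flushes that buffer to the backing
// store. Not the real O_ITable interface.
class TransactedTable {
public:
    explicit TransactedTable(bool transacted) : isTransacted_(transacted) {}

    void SetValue(const std::string& key, const std::string& value) {
        if (isTransacted_)
            pending_[key] = value;   // buffered until SaveChanges()
        else
            disk_[key] = value;      // write straight through
    }

    void SaveChanges() {
        for (const auto& kv : pending_) disk_[kv.first] = kv.second;
        pending_.clear();
    }

    const std::map<std::string, std::string>& Disk() const { return disk_; }

private:
    bool isTransacted_;
    std::map<std::string, std::string> pending_;  // uncommitted writes
    std::map<std::string, std::string> disk_;     // stand-in for disk
};

// Parent object, modeled after O_IProp::SaveChanges(): committing the
// parent cascades SaveChanges() to its dependent tables.
class TransactedProp {
public:
    TransactedTable& AddTable() {
        tables_.emplace_back(/*transacted=*/true);
        return tables_.back();
    }
    void SaveChanges() {
        for (auto& t : tables_) t.SaveChanges();
    }

private:
    std::deque<TransactedTable> tables_;  // deque keeps references stable
};
```

The key property shown here is that a dependent table's writes stay invisible to the disk until the parent's SaveChanges() runs, which is the behavior the O_IProp analogy above implies.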