From: Calin Pirtea (RDS) <pc...@rd...> - 2009-11-18 12:31:38
Hi Ann,

Thanks for clarifying it for me. The way you described it sounds awesome.

Cheers,
Calin.

----- Original Message -----
From: "Ann W. Harrison"

> Calin Pirtea(RDS) wrote:
>
>> My argument is that metadata changes to a production database are meant
>> to be very rare, hence a full table update for adding a new field is
>> also meant to be very rare. Reading a table on a production database, on
>> the other hand, happens very often, and if the performance impact of
>> these defaults is 1%, then after reading a record 100 times it is
>> already worth the effort to update the table to get that extra 1% of
>> performance back.
>
> The mechanism that Adriano has proposed will add almost no overhead.
> Reading a table and applying the default from the table definition
> may be cheaper than reading a record that's been modified to include the
> default value, because the records are smaller and require fewer disk
> reads. There's no computation involved in evaluating the default -
> it's part of the in-memory table definition.
>
> Maybe I'm wrong, but it seems to me that the only format that needs to
> store the default value is the format where the new not null field was
> added. Only records of an older format could possibly have that field
> absent.
>
> Regards,
>
> Ann
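
To make the format-versioning idea concrete, here is a minimal sketch in plain Python (not Firebird code, and the class and method names are invented for illustration). It assumes a simplified model in which each table keeps a list of formats, the default for a new NOT NULL field is stored only with the format that introduced it, and records written under an older format have the default filled in at read time without rewriting anything on disk:

    # Minimal sketch of the idea Ann describes: each record carries the
    # format number it was written under; the default for a new NOT NULL
    # field is stored only with the format where the field was added, and
    # is filled in from the in-memory table definition at read time.

    class Format:
        def __init__(self, number, fields, defaults_for_new_fields=None):
            self.number = number                  # format/version number
            self.fields = fields                  # ordered field names
            # defaults recorded for fields introduced by this format
            self.defaults = defaults_for_new_fields or {}

    class Table:
        def __init__(self):
            self.formats = []                     # all known formats, oldest first

        def add_format(self, fields, defaults_for_new_fields=None):
            fmt = Format(len(self.formats), fields, defaults_for_new_fields)
            self.formats.append(fmt)
            return fmt

        def read_record(self, stored_values, record_format_number):
            """Present a stored record in the current (latest) format.

            Fields that did not exist when the record was written are
            filled from the defaults kept by the formats added later; the
            record itself is never rewritten.
            """
            current = self.formats[-1]
            record = dict(zip(self.formats[record_format_number].fields,
                              stored_values))
            for fmt in self.formats[record_format_number + 1:]:
                for field, default in fmt.defaults.items():
                    record.setdefault(field, default)
            return [record[f] for f in current.fields]

    # Usage: a record written before the NOT NULL column existed still
    # reads back with the default, at no cost beyond a lookup.
    t = Table()
    t.add_format(["id", "name"])
    old_row = [1, "widget"]                       # written under format 0
    t.add_format(["id", "name", "status"], {"status": "ACTIVE"})
    print(t.read_record(old_row, 0))              # [1, 'widget', 'ACTIVE']

The point of the sketch is that evaluating the default involves no per-record computation or rewrite: it is just part of the in-memory table definition, and only the format that introduced the field needs to carry it.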