From: Dimitry S. <sd...@ib...> - 2009-11-16 13:24:14
> There are many changes needed to support more than 64K columns. :-) I
> think having partial support for it in the ODS is not good.
>
> And more than 64K columns is also not needed IMHO.

Especially keeping in mind the current 64K limit on record size.

SY, SD.
From: Alexander P. <pes...@ma...> - 2009-11-16 13:49:16
On Monday 16 November 2009 16:21:26 Adriano dos Santos Fernandes wrote:
> Claudio Valderrama C. wrote:
> >> -----Original Message-----
> >> From: Adriano dos Santos Fernandes [mailto:adr...@gm...]
> >> Sent: Sunday, 15 November 2009 21:15
> >>
> >> I now want to encode it as follows:
> >>     numberOfFields: USHORT
> >>     list of Ods::Descriptor
> >>     numberOfFieldsWithDefault: USHORT
> >>     list of (<fieldId: USHORT><Ods::Descriptor><data: dsc_length bytes>)
> >
> > What about using ULONG instead of USHORT for whatever extension is needed in the future?
> > I know some SAS tests that use a very high number of fields, and it seems it's not pure
> > lab testing.
>
> There are many changes needed to support more than 64K columns. :-) I
> think having partial support for it in the ODS is not good.
>
> And more than 64K columns is also not needed IMHO.

Agreed - when we decide to have more columns, we had better clean up all the affected places.
For now it's better to have uniform usage of datatypes all over Firebird.
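[Illustration - a minimal sketch of the serialization layout proposed above, assuming a
simplified stand-in for Ods::Descriptor and an invented encodeFormat() helper; this is not
Firebird's actual ODS code:]

==============================================
// Sketch only: serializes a format entry with per-field defaults using the
// layout proposed above: numberOfFields, the descriptors, then a list of
// <fieldId, descriptor, data> entries for the fields that carry a default.
// The simplified Descriptor and the raw byte writing are assumptions made
// for the example.
#include <cstddef>
#include <cstdint>
#include <vector>

typedef uint16_t USHORT;

struct Descriptor              // simplified stand-in for Ods::Descriptor
{
    uint8_t dsc_dtype;
    int8_t  dsc_scale;
    USHORT  dsc_length;        // length of the data that follows a default entry
    USHORT  dsc_offset;
};

struct FieldDefault
{
    USHORT fieldId;
    Descriptor desc;
    std::vector<uint8_t> data; // exactly desc.dsc_length bytes
};

static void put(std::vector<uint8_t>& out, const void* p, size_t len)
{
    const uint8_t* b = static_cast<const uint8_t*>(p);
    out.insert(out.end(), b, b + len);
}

std::vector<uint8_t> encodeFormat(const std::vector<Descriptor>& fields,
    const std::vector<FieldDefault>& defaults)
{
    std::vector<uint8_t> out;

    const USHORT numberOfFields = static_cast<USHORT>(fields.size());
    put(out, &numberOfFields, sizeof numberOfFields);
    for (const Descriptor& d : fields)
        put(out, &d, sizeof d);

    const USHORT numberOfFieldsWithDefault = static_cast<USHORT>(defaults.size());
    put(out, &numberOfFieldsWithDefault, sizeof numberOfFieldsWithDefault);
    for (const FieldDefault& fd : defaults)
    {
        put(out, &fd.fieldId, sizeof fd.fieldId);
        put(out, &fd.desc, sizeof fd.desc);
        put(out, fd.data.data(), fd.data.size());  // dsc_length bytes of raw data
    }

    return out;
}
==============================================

Under such a layout only the fields that actually carry a default would pay for the extra
<fieldId, descriptor, data> entry.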
From: Helen B. <he...@tp...> - 2009-11-14 22:22:41
At 07:46 AM 15/11/2009, Adriano dos Santos Fernandes wrote:
>Vlad Khorsun wrote:
>>
>> Sounds not bad, but it will not work if complex expressions referencing other fields from
>> the same table are allowed by the standard.
>>
>The standard mentions only simple expressions, like we have now:
>
><default clause> ::=
>    DEFAULT <default option>
>
><default option> ::=
>    <literal>
>  | <datetime value function>
>  | USER
>  | CURRENT_USER
>  | CURRENT_ROLE
>  | SESSION_USER
>  | SYSTEM_USER
>  | CURRENT_CATALOG
>  | CURRENT_SCHEMA
>  | CURRENT_PATH
>  | <implicitly typed value specification>
>
><datetime value function> ::=
>    <current date value function>
>  | <current time value function>
>  | <current timestamp value function>
>  | <current local time value function>
>  | <current local timestamp value function>

Let me see....you want to change (break) the standard behaviour for inserts and "enrich" it so
that "default" becomes dynamic behaviour that (1) resets existing data, (2) bypasses NOT NULL
validation for updates and (3) makes the value of the defaulted field a moving target.

Who needs or wants that? If that dynamic behaviour is wanted, we have triggers.

What we *really* need is to address the update of existing records in situations where (a) a
nullable column is ALTERed to a domain that is not nullable and (b) a new, non-nullable column
is added.

Helen
From: Adriano d. S. F. <adr...@gm...> - 2009-11-14 22:29:39
Helen Borrie wrote:
> At 07:46 AM 15/11/2009, Adriano dos Santos Fernandes wrote:
>> Vlad Khorsun wrote:
>>> Sounds not bad, but it will not work if complex expressions referencing other fields from
>>> the same table are allowed by the standard.
>>>
>> The standard mentions only simple expressions, like we have now:
>>
>> <default clause> ::=
>>     DEFAULT <default option>
>>
>> <default option> ::=
>>     <literal>
>>   | <datetime value function>
>>   | USER
>>   | CURRENT_USER
>>   | CURRENT_ROLE
>>   | SESSION_USER
>>   | SYSTEM_USER
>>   | CURRENT_CATALOG
>>   | CURRENT_SCHEMA
>>   | CURRENT_PATH
>>   | <implicitly typed value specification>
>>
>> <datetime value function> ::=
>>     <current date value function>
>>   | <current time value function>
>>   | <current timestamp value function>
>>   | <current local time value function>
>>   | <current local timestamp value function>
>
> Let me see....you want to change (break) the standard behaviour for inserts and "enrich" it
> so that "default" becomes dynamic behaviour that (1) resets existing data,

???

> (2) bypasses NOT NULL validation for updates and

???

> (3) makes the value of the defaulted field a moving target.

???

> Who needs or wants that? If that dynamic behaviour is wanted, we have triggers.

I can't say, because I don't understand any of your questions.

> What we *really* need is to address the update of existing records in situations where (a) a
> nullable column is ALTERed to a domain that is not nullable

Already done.

> and (b) a new, non-nullable column is added.

That is what's being discussed here.

Adriano
From: Rustam G. <fir...@ma...> - 2009-11-14 23:15:26
Helen Borrie>
> Let me see....you want to change (break) the standard behaviour for inserts and "enrich" it
> so that "default" becomes dynamic behaviour that (1) resets existing data, (2) bypasses
> NOT NULL validation for updates and (3) makes the value of the defaulted field a moving target.

That is my impression too.

Why move this logic from DML (the current, standard logic) to DDL?

BTW, Adriano, correct me if I am wrong - by your logic, field values will be "calculated" (by
some algorithm) when they are read, _instead of_ at insert?
From: Adriano d. S. F. <adr...@gm...> - 2009-11-14 23:23:20
Rustam Gadjimuradov wrote:
> Helen Borrie>
>> Let me see....you want to change (break) the standard behaviour for inserts and "enrich" it
>> so that "default" becomes dynamic behaviour that (1) resets existing data, (2) bypasses
>> NOT NULL validation for updates and (3) makes the value of the defaulted field a moving target.
>
> That is my impression too.
>
> Why move this logic from DML (the current, standard logic) to DDL?
> BTW, Adriano, correct me if I am wrong - by your logic, field values will be "calculated" (by
> some algorithm) when they are read, _instead of_ at insert?

NO. That is how it works now.

Just try to add a NOT NULL field with DEFAULT CURRENT_USER to a populated table and select from
it using different user names.

Adriano
From: Calin Pirtea\(RDS\) <pc...@rd...> - 2009-11-15 02:43:47
Adriano,

Are you talking about setting the value of the field (a full table update) when the new NOT NULL
field is added to the table? If that's the case, why not simply perform a full table update after
evaluating the expression beforehand?

Sounds to me like what you suggest is to create the NOT NULL field and, on the next read, make it
look like it has a value. That sounds sooo wrong to me. Currently, every time I create a new
NOT NULL field I follow it with a full table update.

Cheers,
Calin

----- Original Message -----
From: "Adriano dos Santos Fernandes" <adr...@gm...>
To: "For discussion among Firebird Developers" <fir...@li...>
Sent: Sunday, November 15, 2009 8:23 AM
Subject: Re: [Firebird-devel] Adding NOT NULL fields with DEFAULT

> Rustam Gadjimuradov wrote:
>> Helen Borrie>
>>> Let me see....you want to change (break) the standard behaviour for inserts and "enrich" it
>>> so that "default" becomes dynamic behaviour that (1) resets existing data, (2) bypasses
>>> NOT NULL validation for updates and (3) makes the value of the defaulted field a moving
>>> target.
>>
>> That is my impression too.
>>
>> Why move this logic from DML (the current, standard logic) to DDL?
>> BTW, Adriano, correct me if I am wrong - by your logic, field values will be "calculated"
>> (by some algorithm) when they are read, _instead of_ at insert?
>
> NO. That is how it works now.
>
> Just try to add a NOT NULL field with DEFAULT CURRENT_USER to a populated table and select
> from it using different user names.
>
> Adriano
From: Calin Pirtea\(RDS\) <pc...@rd...> - 2009-11-18 12:31:38
Hi Ann,

Thanks for clarifying it for me. The way you described it sounds awesome.

Cheers,
Calin.

----- Original Message -----
From: "Ann W. Harrison"

> Calin Pirtea(RDS) wrote:
>
>> My argument is that metadata changes to a production database are meant to be very rare,
>> hence a full table update is meant to be very rare for adding a new field. Reading a table
>> on production databases, on the other hand, happens very often, and if the performance
>> impact is 1% for these defaults, then after reading a record 100 times it's already worth
>> the effort to update the table to get that extra 1% performance back.
>
> The mechanism that Adriano has proposed will add almost no overhead. Reading a table and
> applying the default from the table definition may be cheaper than reading a record that's
> been modified to include the default value, because the records are smaller and require
> fewer disk reads. There's no computation involved in evaluating the default - it's part of
> the in-memory table definition.
>
> Maybe I'm wrong, but it seems to me that the only format that needs to store the default
> value is the format where the new not null field was added. Only records of an older format
> could possibly have that field absent.
>
> Regards,
>
> Ann
From: Rustam G. <fir...@ma...> - 2009-11-15 00:38:18
Adriano
> NO. That is how it works now.

Hm... No comments. <many Russian curses here>
From: Rustam G. <fir...@ma...> - 2009-11-15 15:47:58
Calin Pirtea(RDS)>
> Sounds to me like what you suggest is to create the NOT NULL field and, on the next read,
> make it look like it has a value. That sounds sooo wrong to me. Currently, every time I
> create a new NOT NULL field I follow it with a full table update.

Seconded.
From: Alexander P. <pes...@ma...> - 2009-11-15 15:58:33
On Sunday 15 November 2009 18:43:38 Rustam Gadjimuradov wrote:
> Calin Pirtea(RDS)>
>
>> Sounds to me like what you suggest is to create the NOT NULL field and, on the next read,
>> make it look like it has a value. That sounds sooo wrong to me. Currently, every time I
>> create a new NOT NULL field I follow it with a full table update.
>
> Seconded.

But what should Firebird's behavior be if you've forgotten to perform that update? Should
destroying the database be possible in such a case, to teach the user to always perform such an
update? Something similar happens when you try to restore a copy of such a database - the copy
is not restorable, which is sometimes as good as a destroyed database. As for me, I dislike such
behavior :))

BTW, if it's documented that providing a default value makes it work as if you had performed
that update with the default value, you get exactly the same behavior, but there is no need to
waste time on that mass update, and you also avoid a lot of garbage in your database when the
table is big.
From: Rustam G. <fir...@ma...> - 2009-11-15 17:28:57
Alexander Peshkoff> But what should be firebird's behavior if you've forgotten to perform that
update?

It can and must be discussed. As a starting point for discussion, I already mentioned one option
above - "raise an exception if there is wrong data".

Alexander Peshkoff> Is destroying database possible in such case to teach user to always perform
such update?

I think any of those cases is better than getting different results (values) from the same
record under different environments (user, transaction, etc). Where and how the database would
get destroyed here - I can't see it. Can you give an example?

Alexander Peshkoff> Something similar happens when you try to restore a copy of such database -
copy is not restorable

Yes. That's why this must raise an exception.

Alexander Peshkoff> which sometimes is a kind of destroyed database.

Why do you equate an unrestorable backup with a destroyed original database?

Alexander Peshkoff> As for me I dislike such behavior:))

Me too. :) See above about checking the data when changing DDL. Maybe as an option, maybe in
another way - but it must be a simple and transparent way (easy for end users to understand and
use).

> BTW, if it's documented that providing a default value makes it work as if you had performed
> that update with the default value, you get exactly the same behavior, but there is no need to
> waste time on that mass update, and you also avoid a lot of garbage in your database when the
> table is big.

I vote against behaviour like "calc the value when read". Getting/calculating the value when
changing DDL (using the default value too) - good; using some syntax like
"add not null <expression>" - good; etc.

WBR, GR
From: Alexander P. <pes...@ma...> - 2009-11-15 17:55:29
On Sunday 15 November 2009 20:24:35 Rustam Gadjimuradov wrote:
> Alexander Peshkoff> But what should be firebird's behavior if you've forgotten to perform
> that update?
>
> It can and must be discussed. As a starting point for discussion, I already mentioned one
> option above - "raise an exception if there is wrong data".
>
> Alexander Peshkoff> Is destroying database possible in such case to teach user to always
> perform such update?
>
> I think any of those cases is better than getting different results (values) from the same
> record under different environments (user, transaction, etc).

Rustam, it looks like you did not read the initial proposal carefully. It is aimed precisely at
avoiding different results from the same record (which is what happens now). All values will be
taken from the state that was in effect when the DDL request ran.

> Where and how the database would get destroyed here - I can't see it. Can you give an example?

I was talking about unrestorable backups.

> Alexander Peshkoff> Something similar happens when you try to restore a copy of such database -
> copy is not restorable
>
> Yes. That's why this must raise an exception.
>
> Alexander Peshkoff> which sometimes is a kind of destroyed database.
>
> Why do you equate an unrestorable backup with a destroyed original database?

Because in the worst case (the HDD with the database is dead and this was the only copy) they
are equal... I know that normally such cases should not happen, but when we talk about
unqualified users it's unfortunately possible.

> Alexander Peshkoff> As for me I dislike such behavior:))
>
> Me too. :) See above about checking the data when changing DDL. Maybe as an option, maybe in
> another way - but it must be a simple and transparent way (easy for end users to understand
> and use).
>
>> BTW, if it's documented that providing a default value makes it work as if you had performed
>> that update with the default value, you get exactly the same behavior, but there is no need
>> to waste time on that mass update, and you also avoid a lot of garbage in your database when
>> the table is big.
>
> I vote against behaviour like "calc the value when read". Getting/calculating the value when
> changing DDL (using the default value too) - good; using some syntax like
> "add not null <expression>" - good; etc.

I was talking about "exactly the same behavior", i.e. no "calc the value when read".
From: Rustam G. <fir...@ma...> - 2009-11-15 18:34:27
Alexander Peshkoff>
> Rustam, it looks like you did not read the initial proposal carefully. It is aimed precisely
> at avoiding different results from the same record (which is what happens now). All values
> will be taken from the state that was in effect when the DDL request ran.

OK, and what about when the field has no default value?
From: Adriano d. S. F. <adr...@gm...> - 2009-11-15 20:22:00
Here are the results of the current way (for those who don't know how it works and are too lazy
to test) and of the proposed new way. I'm not going to explain again what's wrong; anyway, it's
obvious from reading it.

current
==============================================
create table t (
    n1 integer
);

insert into t (n1) values (1);
commit;

alter table t add n2 integer default 2 not null;

select * from t;

          N1           N2
============ ============
           1            2

insert into t (n1) values (2);

select * from t;

          N1           N2
============ ============
           1            2
           2            2

commit;

alter table t add t1 time default current_time not null;

select * from t;

          N1           N2            T1
============ ============ =============
           1            2 00:00:00.0000
           2            2 00:00:00.0000

-- wait
insert into t (n1) values (3);
-- wait
insert into t (n1) values (4);

select * from t;

          N1           N2            T1
============ ============ =============
           1            2 00:00:00.0000
           2            2 00:00:00.0000
           3            2 18:19:17.0000
           4            2 18:19:20.0000

alter table t alter n2 null, alter t1 null;

select * from t;

          N1           N2            T1
============ ============ =============
           1       <null>        <null>
           2            2        <null>
           3            2 18:19:17.0000
           4            2 18:19:20.0000

commit;

alter table t add t2 time not null;

select * from t;

          N1           N2            T1            T2
============ ============ ============= =============
           1       <null>        <null> 00:00:00.0000
           2            2        <null> 00:00:00.0000
           3            2 18:19:17.0000 00:00:00.0000
           4            2 18:19:20.0000 00:00:00.0000

delete from t;
commit;

alter table t add t3 time not null;

insert into t (n1) values (5);
Statement failed, SQLSTATE = 23000
validation error for column T2, value "*** null ***"

commit;
drop table t;
==============================================

new
==============================================
create table t (
    n1 integer
);

insert into t (n1) values (1);
commit;

alter table t add n2 integer default 2 not null;

select * from t;

          N1           N2
============ ============
           1            2

insert into t (n1) values (2);

select * from t;

          N1           N2
============ ============
           1            2
           2            2

commit;

alter table t add t1 time default current_time not null;

select * from t;

          N1           N2            T1
============ ============ =============
           1            2 18:15:11.0000
           2            2 18:15:11.0000

-- wait
insert into t (n1) values (3);
-- wait
insert into t (n1) values (4);

select * from t;

          N1           N2            T1
============ ============ =============
           1            2 18:15:11.0000
           2            2 18:15:11.0000
           3            2 18:15:13.0000
           4            2 18:15:17.0000

alter table t alter n2 null, alter t1 null;

select * from t;

          N1           N2            T1
============ ============ =============
           1            2 18:15:11.0000
           2            2 18:15:11.0000
           3            2 18:15:13.0000
           4            2 18:15:17.0000

commit;

alter table t add t2 time not null;
Statement failed, SQLSTATE = 22006
unsuccessful metadata update
-Cannot make field T2 NOT NULL because there are NULLs present

select * from t;

          N1           N2            T1
============ ============ =============
           1            2 18:15:11.0000
           2            2 18:15:11.0000
           3            2 18:15:13.0000
           4            2 18:15:17.0000

delete from t;
commit;

alter table t add t3 time not null;

insert into t (n1) values (5);
Statement failed, SQLSTATE = 23000
validation error for column T3, value "*** null ***"

commit;
drop table t;
==============================================

Adriano
From: Rustam G. <fir...@ma...> - 2009-11-15 20:56:10
Adriano dos Santos Fernandes> Here are the results of the current way

Just out of interest - starting from which version does this behavior exist? IB 6.0? FB 1.5?
Later?
From: Ann W. H. <aha...@ib...> - 2009-11-16 15:47:30
Calin Pirtea(RDS) wrote:
>
> Sounds to me like what you suggest is to create the not null field and on next read make it
> look like it has a value. That sounds sooo wrong to me. Currently, every time I create a new
> not null field I follow it with a full table update.
>

It may sound wrong to you, but it is the general style that Firebird and InterBase have used
for years. When you add a column to a table, Firebird generates a new record in RDB$FORMATS,
describing the new physical structure. Existing records are unchanged. When Firebird reads a
record, it compares the format version in the record header with the format requested - which
may not be the most recent, depending on what's happening and how old the request is - and, if
necessary, moves the record forward or back to the requested format. So if you add a field FOO
to a table and read that table, you'll see a bunch of null FOO's. They're not stored, but are
created on reference. If you update those records, they'll be stored in the new format with the
FOO field.

What Adriano is proposing is a mechanism to add new fields with a NOT NULL constraint without
doing a full table update. That would be a good thing.

Cheers,

Ann
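[Illustration - a rough sketch of the "upgrade on read" mechanism Ann describes above. The names
(RecordFormat, Record, upgradeRecord) are invented for the example; this is not the engine's
real code:]

==============================================
// A record keeps the format version it was written with; when it is read
// through a newer format, fields that did not exist in that old format are
// materialized as NULL on the fly - nothing is rewritten on disk.
#include <cstddef>
#include <string>
#include <vector>

typedef unsigned short USHORT;

struct FieldValue
{
    bool isNull = true;
    std::string data;                      // raw field data, simplified to a string
};

struct RecordFormat
{
    USHORT version;
    std::vector<std::string> fieldNames;   // fields known to this format
};

struct Record
{
    USHORT formatVersion;                  // format the record was written with
    std::vector<FieldValue> values;        // one value per field of that format
};

// Convert a stored record to the requested (usually newer) format.
Record upgradeRecord(const Record& stored,
    const std::vector<RecordFormat>& formats,   // indexed by version number
    USHORT requestedVersion)
{
    const RecordFormat& source = formats[stored.formatVersion];
    const RecordFormat& target = formats[requestedVersion];

    Record result;
    result.formatVersion = requestedVersion;
    result.values.resize(target.fieldNames.size());   // everything NULL initially

    for (size_t i = 0; i < target.fieldNames.size(); ++i)
    {
        const std::string& name = target.fieldNames[i];

        // If the field already existed when the record was written, copy it over.
        for (size_t j = 0; j < source.fieldNames.size(); ++j)
        {
            if (source.fieldNames[j] == name)
            {
                result.values[i] = stored.values[j];
                break;
            }
        }
        // Otherwise the field was added later and comes back as NULL - which is
        // exactly why a NOT NULL field added without a full table update shows
        // NULLs today. Adriano's proposal plugs in here: instead of leaving the
        // field NULL, use a value materialized when the ALTER TABLE was done.
    }

    return result;
}
==============================================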
From: Rustam G. <fir...@ma...> - 2009-11-16 16:07:41
Yes, but going one way when adding a NOT NULL field with a default value and another way when
adding a NOT NULL field without a default looks too wrong to me.

BTW, what if the value were computed, stored in RDB$FORMATS and used correctly (the first format
carrying a default value, walking up from the record's current format)... Though this "format
finding" seems not so cheap from a performance point of view.

WBR, GR
From: Calin Pirtea\(RDS\) <pc...@rd...> - 2009-11-16 23:23:44
Hi Ann,

While I understand and mostly agree with what you say, I argue that not performing a full table
update can be as good as it is bad. The implementation can make it good if done really well.

My argument is that metadata changes to a production database are meant to be very rare, hence a
full table update is meant to be very rare for adding a new field. Reading a table on production
databases, on the other hand, happens very often, and if the performance impact is 1% for these
defaults, then after reading a record 100 times it's already worth the effort to update the
table to get that extra 1% performance back. Please correct my logic if wrong.

Cheers,
Calin.

----- Original Message -----
From: "Ann W. Harrison"

> Calin Pirtea(RDS) wrote:
>>
>> Sounds to me like what you suggest is to create the not null field and on next read make it
>> look like it has a value. That sounds sooo wrong to me. Currently, every time I create a new
>> not null field I follow it with a full table update.
>>
>
> It may sound wrong to you, but it is the general style that Firebird and InterBase have used
> for years. When you add a column to a table, Firebird generates a new record in RDB$FORMATS,
> describing the new physical structure. Existing records are unchanged. When Firebird reads a
> record, it compares the format version in the record header with the format requested - which
> may not be the most recent, depending on what's happening and how old the request is - and,
> if necessary, moves the record forward or back to the requested format. So if you add a field
> FOO to a table and read that table, you'll see a bunch of null FOO's. They're not stored, but
> are created on reference. If you update those records, they'll be stored in the new format
> with the FOO field.
>
> What Adriano is proposing is a mechanism to add new fields with a NOT NULL constraint without
> doing a full table update. That would be a good thing.
>
> Cheers,
>
> Ann
From: Ann W. H. <aha...@ib...> - 2009-11-17 21:44:03
Calin Pirtea(RDS) wrote:
>
> My argument is that metadata changes to a production database are meant to be very rare hence
> a full table update is meant to be very rare for adding a new field. Reading a table on
> production databases, on the other hand, should be very often and if performance impact is 1%
> for these defaults, then reading a record 100 times it's already worth the effort to update
> the table to get that extra 1% performance back.

The mechanism that Adriano has proposed will add almost no overhead. Reading a table and
applying the default from the table definition may be cheaper than reading a record that's been
modified to include the default value because the records are smaller and require fewer disk
reads. There's no computation involved in evaluating the default - it's part of the in-memory
table definition.

Maybe I'm wrong, but it seems to me that the only format that needs to store the default value
is the format where the new not null field was added. Only records of an older format could
possibly have that field absent.

Regards,

Ann
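[Illustration - a small sketch of the point above: the default is evaluated once, when the
ALTER TABLE runs, and stored only with the format version that introduces the field; an
older-format record finds it there when it is upgraded on read. All names here are hypothetical,
not Firebird's actual code:]

==============================================
#include <map>
#include <optional>
#include <string>

typedef unsigned short USHORT;

// formatVersion -> (field name -> value materialized by that format's DDL statement)
typedef std::map<USHORT, std::map<std::string, std::string> > FormatHistory;

// DDL side: CURRENT_USER, CURRENT_TIME etc. in the default expression are frozen
// at this moment; 'evaluatedDefault' is the value already computed from them.
USHORT addFieldWithDefault(FormatHistory& history, USHORT currentVersion,
    const std::string& fieldName, const std::string& evaluatedDefault)
{
    const USHORT newVersion = currentVersion + 1;
    history[newVersion][fieldName] = evaluatedDefault;   // stored once, only here
    return newVersion;
}

// Read side: a record written under 'storedVersion' is missing 'fieldName'; look
// for the default in the formats added after it, up to the requested one. Only
// the format that introduced the field can have it.
std::optional<std::string> defaultFor(const FormatHistory& history,
    USHORT storedVersion, USHORT requestedVersion, const std::string& fieldName)
{
    for (USHORT v = storedVersion + 1; v <= requestedVersion; ++v)
    {
        auto f = history.find(v);
        if (f != history.end())
        {
            auto d = f->second.find(fieldName);
            if (d != f->second.end())
                return d->second;
        }
    }
    return std::nullopt;   // no materialized default: the field stays NULL
}
==============================================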
From: Rustam G. <fir...@ma...> - 2009-11-18 00:42:23
Ann W. Harrison>
> Maybe I'm wrong, but it seems to me that the only format that needs to store the default
> value is the format where the new not null field was added. Only records of an older format
> could possibly have that field absent.

Seems right. Only for a *new NOT NULL field* __with__ a *default value*. All other cases have no
need for it. And that's why I, personally, am against this exception.

WBR, GR
From: Alexander P. <pes...@ma...> - 2009-11-23 09:43:16
On Wednesday 18 November 2009 00:43:51 Ann W. Harrison wrote:
> Calin Pirtea(RDS) wrote:
>> My argument is that metadata changes to a production database are meant to be very rare
>> hence a full table update is meant to be very rare for adding a new field. Reading a table
>> on production databases, on the other hand, should be very often and if performance impact
>> is 1% for these defaults, then reading a record 100 times it's already worth the effort to
>> update the table to get that extra 1% performance back.
>
> The mechanism that Adriano has proposed will add almost no overhead. Reading a table and
> applying the default from the table definition may be cheaper than reading a record that's
> been modified to include the default value because the records are smaller and require fewer
> disk reads. There's no computation involved in evaluating the default - it's part of the
> in-memory table definition.
>
> Maybe I'm wrong, but it seems to me that the only format that needs to store the default
> value is the format where the new not null field was added. Only records of an older format
> could possibly have that field absent.

Adriano, one more question.

Suppose the mentioned NOT NULL field is used in an FK, and suppose the provided default value
violates RI. What is the engine supposed to reply to the user? Raise an error or not?
From: Adriano d. S. F. <adr...@gm...> - 2009-11-23 09:56:44
Alexander Peshkoff wrote:
> Suppose the mentioned NOT NULL field is used in an FK, and suppose the provided default value
> violates RI. What is the engine supposed to reply to the user? Raise an error or not?

It works automagically the way I think it should, raising an error:

---------------
SQL> create table t1 (pk int primary key);
SQL> insert into t1 values (1);
SQL> insert into t1 values (2);
SQL> insert into t1 values (3);
SQL> commit;

SQL> create table t2 (n int);
SQL> insert into t2 values (0);
SQL> insert into t2 values (-1);
SQL> commit;

SQL> alter table t2 add fk int default 4 not null references t1;
Statement failed, SQLSTATE = 23000
violation of FOREIGN KEY constraint "INTEG_7" on table "T2"
-Foreign key reference target does not exist

SQL> alter table t2 add fk int default 3 not null references t1;
SQL>
---------------

Adriano