From: Leyne, S. <Se...@br...> - 2011-12-27 21:17:07
Dimitry,

> 27.12.2011 19:17, Leyne, Sean wrote:
> > That type of solution is not what I would define as a cluster.
>
> As you wish. But the rest of world consider this kind of system to be
> called "a shared-nothing cluster".

You are correct! A "shared-nothing cluster" is a type of cluster, one that is used for a number of database cluster solutions.

I should have said: "That type of solution is not what immediately comes to mind for me, since I see a shared disk solution (using redundant SAN storage) to be much easier to implement for FB."

Sean
From: Leyne, S. <Se...@br...> - 2011-12-28 18:51:13
> Is it feasible to roll over the transaction ID without putting the database
> offline? i.e. when the ID is close to the limit, reset it to 0 from the code?

In theory there is a "simple" code change which would provide you another 6 months of breathing room.

But there is no permanent solution which is currently available, and none that will be available in 6 months. You will need to perform a backup/restore at some point.

The "simple" solution is to change the datatype of transaction-related variables from SLONG* to ULONG. The reality of this solution is, unfortunately, far uglier given the testing required to confirm that all references have been changed.

Sean

* For the life of me I don't understand why signed types were used for variables which could only contain positive values -- this is a common problem found throughout the codebase.
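For context, a minimal standalone sketch of why the signed type halves the usable range (the aliases below are only illustrative and merely mirror the 32-bit width of Firebird's SLONG/ULONG; they are not the real declarations):

#include <cstdint>
#include <cstdio>
#include <limits>

// Illustrative 32-bit aliases, not Firebird's actual typedefs.
typedef int32_t  SLONG;
typedef uint32_t ULONG;

int main()
{
    // Signed transaction ids stop at 2^31 - 1; reinterpreting the same
    // 4 bytes as unsigned doubles the headroom to 2^32 - 1.
    std::printf("SLONG max: %d\n", std::numeric_limits<SLONG>::max());  // 2147483647
    std::printf("ULONG max: %u\n", std::numeric_limits<ULONG>::max());  // 4294967295
    return 0;
}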
From: Alex P. <pes...@ma...> - 2011-12-29 08:50:10
On 12/28/11 22:51, Leyne, Sean wrote:
>> Is it feasible to roll over the transaction ID without putting the database
>> offline? i.e. when the ID is close to the limit, reset it to 0 from the code?
> In theory there is a "simple" code change which would provide you another 6 months of breathing room.
>
> But there is no permanent solution which is currently available, and none that will be available in 6 months. You will need to perform a backup/restore at some point.
>
> The "simple" solution is to change the datatype of transaction-related variables from SLONG* to ULONG. The reality of this solution is, unfortunately, far uglier given the testing required to confirm that all references have been changed.
>
> Sean
>
> * For the life of me I don't understand why signed types were used for variables which could only contain positive values -- this is a common problem found throughout the codebase.

This change was done in trunk.

The reason for the signed type here was (at least the visible reason) very simple: the same parameter of some functions might be passed either a transaction number (a positive value) or something else (a negative value). I.e. a negative sign meant 'this is not a transaction' and the function behaved accordingly.

Please do not treat this as advice to use trunk in production!!!
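A minimal sketch of the kind of overloading Alex describes (the function and parameter names are invented for illustration and do not come from Firebird's code):

#include <cstdint>
#include <cstdio>

typedef int32_t SLONG;   // illustrative 32-bit signed alias

// Hypothetical example of the pattern: one signed parameter carries either
// a transaction number (>= 0) or a special "not a transaction" marker (< 0).
static void process(SLONG number)
{
    if (number < 0)
    {
        // Negative value: not a transaction id at all, take the special path.
        std::printf("special marker %d, not a transaction\n", number);
        return;
    }

    // Non-negative value: treat it as a real transaction id.
    std::printf("processing transaction %d\n", number);
}

int main()
{
    process(42);   // a real transaction id
    process(-1);   // "this is not a transaction"
    return 0;
}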
From: Leyne, S. <Se...@br...> - 2011-12-29 18:05:34
Alex,

> > The "simple" solution is to change the datatype of transaction-related
> > variables from SLONG* to ULONG. The reality of this solution is,
> > unfortunately, far uglier given the testing required to confirm that all
> > references have been changed.
> >
> > Sean
> >
> > * For the life of me I don't understand why signed types were used for
> > variables which could only contain positive values -- this is a common
> > problem found throughout the codebase.
>
> This change was done in trunk.

To be clear, you have a code branch that uses ULONG for the transaction ID?

> The reason for the signed type here was (at least the visible reason) very
> simple: the same parameter of some functions might be passed either a
> transaction number (a positive value) or something else (a negative value).
> I.e. a negative sign meant 'this is not a transaction' and the function
> behaved accordingly.

And some people have complained about some of my suggestions as being "hacks"!!!

Sean
From: Ann H. <aha...@nu...> - 2011-12-29 22:57:11
Sean,

> Alex wrote:
>> The same parameter of some functions might be passed either a transaction
>> number (a positive value) or something else (a negative value). I.e. a
>> negative sign meant 'this is not a transaction' and the function behaved
>> accordingly.
>
> And some people have complained about some of my suggestions as being "hacks"!!!

Actually, there is at least one other "special" value for transaction ids. Zero is always the system transaction. If you consider that a signed long is "retirement proof", which we did, using the other half for something else doesn't seem so bad. Particularly if you cut your programming teeth in a 64 Kb address space.

Maybe it's time to look at all the small integers as well.

Cheers,

Ann
From: Dmitry Y. <fir...@ya...> - 2011-12-30 06:33:47
29.12.2011 22:05, Leyne, Sean wrote:
>
>> This change was done in trunk.
>
> To be clear, you have a code branch that uses ULONG for the transaction ID?

Trunk is the ongoing development branch (formerly known as HEAD in CVS), i.e. transaction IDs are already unsigned long in FB 3.0.

Dmitry
From: Leyne, S. <Se...@br...> - 2011-12-30 20:43:45
Dmitry,

> Trunk is the ongoing development branch (formerly known as HEAD in CVS),
> i.e. transaction IDs are already unsigned long in FB 3.0.

Although this would mean a further ODS change as well as an increase in the overhead associated with all rows, perhaps the transaction ID size in the ODS should be increased from 4 bytes to 5 bytes to remove any possible likelihood of overflowing the max value (= 256 transactions per sec, continuously, for over 136 years!)

Sean
From: Kjell R. <kje...@da...> - 2011-12-30 22:23:22
On 2011-12-30 21:43, Leyne, Sean wrote:
> Dmitry,
>
>> Trunk is the ongoing development branch (formerly known as HEAD in CVS),
>> i.e. transaction IDs are already unsigned long in FB 3.0.
> Although this would mean a further ODS change as well as an increase in the overhead associated with all rows, perhaps the transaction ID size in the ODS should be increased from 4 bytes to 5 bytes to remove any possible likelihood of overflowing the max value (= 256 transactions per sec, continuously, for over 136 years!)

Who knows what will happen within only 5-10 years? Perhaps in five years it will be common to have systems running a few thousand transactions per second? In that case 40 bits will only suffice for about 11 years (3000 trans/sec).

If such a "big" change is to be made, I suggest making it at least 48 bits, and why not 64 bits while we're at it? At least I was under the impression that the snag is not the size and the space it takes on disk, but rather that the transaction ID type is used in so many places that the change is high risk. So, if it's to be changed at all, make sure the change is large enough for a long time.

Kjell
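For reference, a quick back-of-the-envelope check of both figures above (a standalone sketch, not Firebird code):

#include <cmath>
#include <cstdio>

// Years of headroom for a given transaction-id width at a sustained rate.
static double years_of_headroom(double bits, double tps)
{
    const double ids = std::pow(2.0, bits);
    const double seconds_per_year = 86400.0 * 365.0;
    return ids / (tps * seconds_per_year);
}

int main()
{
    std::printf("40 bits @  256 tps: %6.1f years\n", years_of_headroom(40, 256));   // ~136 years
    std::printf("40 bits @ 3000 tps: %6.1f years\n", years_of_headroom(40, 3000));  // ~11.6 years
    std::printf("48 bits @ 3000 tps: %6.0f years\n", years_of_headroom(48, 3000));  // ~2975 years
    std::printf("64 bits @ 3000 tps: %.2e years\n",  years_of_headroom(64, 3000));  // ~1.9e8 years
    return 0;
}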
From: Alexander P. <pes...@ma...> - 2011-12-31 13:57:31
On Fri, 30/12/2011 at 23:10 +0100, Kjell Rilbe wrote:
> On 2011-12-30 21:43, Leyne, Sean wrote:
> > Dmitry,
> >
> >> Trunk is the ongoing development branch (formerly known as HEAD in CVS),
> >> i.e. transaction IDs are already unsigned long in FB 3.0.
> > Although this would mean a further ODS change as well as an increase in the overhead associated with all rows, perhaps the transaction ID size in the ODS should be increased from 4 bytes to 5 bytes to remove any possible likelihood of overflowing the max value (= 256 transactions per sec, continuously, for over 136 years!)
> Who knows what will happen within only 5-10 years? Perhaps in five years
> it will be common to have systems running a few thousand transactions
> per second? In that case 40 bits will only suffice for about 11 years
> (3000 trans/sec).
>
> If such a "big" change is to be made, I suggest making it at least 48
> bits, and why not 64 bits while we're at it?

This will make each version of the record (not just the record, but EACH version of it) 4 bytes longer. For tables with small records that means severe performance degradation.
From: Dmitry Y. <fir...@ya...> - 2011-12-31 14:08:55
31.12.2011 17:57, Alexander Peshkov wrote:

> This will make each version of the record (not just the record, but EACH
> version of it) 4 bytes longer.

Not strictly necessary. We could use a variable-length encoding for txn ids longer than 32 bits and mark such records with a new flag. It would add zero storage/performance overhead for all the current applications but allow longer txn ids at a slightly bigger cost. It would increase the code complexity though.

Dmitry
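A rough sketch of the kind of variable-length scheme being discussed (the flag name and layout are my own illustration, not a proposal for the actual ODS):

#include <cstdint>
#include <vector>

// Hypothetical record flag: set only when the id needs more than 32 bits.
const uint16_t REC_LARGE_TXN = 0x0100;

struct EncodedTxn
{
    uint16_t flags;              // record flags, including REC_LARGE_TXN
    uint32_t low;                // low 32 bits, stored exactly as today
    std::vector<uint8_t> high;   // extra bytes, present only for large ids
};

EncodedTxn encode(uint64_t txn_id)
{
    EncodedTxn out{0, static_cast<uint32_t>(txn_id), {}};
    uint64_t high = txn_id >> 32;
    while (high) {                       // zero extra bytes for today's ids
        out.flags |= REC_LARGE_TXN;
        out.high.push_back(static_cast<uint8_t>(high & 0xFF));
        high >>= 8;
    }
    return out;
}

uint64_t decode(const EncodedTxn& e)
{
    uint64_t id = e.low;
    for (size_t i = 0; i < e.high.size(); ++i)
        id |= static_cast<uint64_t>(e.high[i]) << (32 + 8 * i);
    return id;
}

Records written before the counter passes 2^32 keep their current footprint, which is where the "zero overhead for current applications" claim comes from.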
From: Alexander P. <pes...@ma...> - 2011-12-31 14:16:08
On Sat, 31/12/2011 at 18:08 +0400, Dmitry Yemanov wrote:
> 31.12.2011 17:57, Alexander Peshkov wrote:
>
> > This will make each version of the record (not just the record, but EACH
> > version of it) 4 bytes longer.
>
> Not strictly necessary. We could use a variable-length encoding for txn
> ids longer than 32 bits and mark such records with a new flag. It would
> add zero storage/performance overhead for all the current applications
> but allow longer txn ids at a slightly bigger cost. It would increase
> the code complexity though.

Maybe simply use 64-bit numbers with a new flag? Variable-length encoding is not very good from a performance POV.
From: Kjell R. <kje...@da...> - 2012-01-01 20:31:56
So far, these are the suggestions I've seen (I'm adding my own suggestion at the bottom):

1. Implement 64-bit IDs and use them everywhere, always.
Pros:
- Straightforward once all code has been changed to use 64-bit IDs.
- No overhead for various special flags and stuff.
Cons:
- Will increase disk space even for DBs that don't need 64-bit IDs.

2. Implement 64-bit IDs but use them only in records where required. Use a per-record flag to indicate if the transaction ID is 32-bit or 64-bit.
Pros:
- Will not increase disk space for DBs that don't need 64-bit IDs.
- For DBs that do need 64-bit IDs, only records with high transaction IDs will take up the additional 4 bytes of space.
Cons:
- More complicated implementation than suggestion 1.
- Runtime overhead for flag checking on each record access.

3. Stay at 32-bit IDs but somehow "compact" all IDs older than OIT(?) into OIT-1. Allow IDs to wrap around.
Pros:
- No extra disk space.
- Probably rather simple code changes for ID usage.
Cons:
- May be difficult to find an effective way to compact old IDs.
- Even if an effective way to compact old IDs is found, it may be complicated to implement.
- May be difficult or impossible to perform compacting without large amounts of write locks.
- Potential for problems if OIT isn't incremented (but such OIT problems should be solved anyway).

May I suggest a fourth option?

4. Add a DB-wide flag, like page size, indicating if this DB uses 32-bit or 64-bit IDs. Could be changed via a backup/restore cycle.
Pros:
- Almost as simple as suggestion 1 once all code has been changed to support 64-bit IDs.
- Minimal overhead for flag checking.
- DBs that don't need 64-bit IDs won't need to waste disk space on 64-bit IDs.
Cons:
- A DB that has been configured with the incorrect flag has to be backed up and restored to change the flag, causing (one-time) downtime.
- A bit more complicated to implement and maintain than suggestion 1.

Questions:

Q1. Will it be extremely difficult to change all code to support 64-bit IDs? So difficult that option 3 is worth investigating thoroughly? As far as I can see, that's the only option to avoid 64-bit IDs.

Q2. If support for 64-bit IDs is implemented, how important is it to conserve disk space in cases where 64-bit IDs are not required? And is it important to conserve disk space per record (suggestion 2), or is it sufficient to conserve disk space only for databases where 32-bit IDs suffice (suggestion 4)?

Q3. For suggestion 4: the flag has to be "loaded" only on connect, but in the 32-bit case I would assume each record access would cast the loaded 32-bit ID to 64-bit. Would the overhead for that logic and cast be noticeable and large enough to be a problem? I assume the overhead would be less than that of the per-record flag as per suggestion 2, but... maybe not? If suggestion 2's logic is comparable in code complexity and runtime overhead to that of suggestion 4, then I see no reason to go for suggestion 4.

Kjell
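To make the trade-off in Q3 concrete, here is a minimal sketch of the two read paths (the flag names and record layout are invented for illustration; neither reflects Firebird's actual on-disk structures):

#include <cstdint>

// Hypothetical names, for illustration only.
const uint16_t REC_TXN64 = 0x0200;    // suggestion 2: per-record flag

struct RecordHeader
{
    uint16_t flags;
    uint32_t txn_low;     // always present (today's 4 bytes)
    uint32_t txn_high;    // only meaningful when the id is wider than 32 bits
};

// Suggestion 2: check a flag on every record access.
uint64_t txn_id_per_record(const RecordHeader& rec)
{
    uint64_t id = rec.txn_low;
    if (rec.flags & REC_TXN64)
        id |= static_cast<uint64_t>(rec.txn_high) << 32;
    return id;
}

// Suggestion 4: one database-wide setting, loaded at attach time.
struct DatabaseInfo { bool wide_txn_ids; };

uint64_t txn_id_db_wide(const DatabaseInfo& db, const RecordHeader& rec)
{
    // In a "32-bit" database the cast below is all that happens per record;
    // in a "64-bit" database every record simply stores the full 8 bytes.
    return db.wide_txn_ids
        ? (static_cast<uint64_t>(rec.txn_high) << 32) | rec.txn_low
        : static_cast<uint64_t>(rec.txn_low);
}

Either way the check is a single predictable branch per access; the dominant cost in this discussion is the extra bytes per record version, not the test itself.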
From: Yi Lu <yl...@ra...> - 2012-01-04 00:08:50
How do I debug the Firebird server when it is running with code changes that we made? I am running fbserver.exe with "-a -p 3050 -database "localhost:..\myDatabase.fdb"". The Firebird tray icon is showing me that nothing has been attached and nothing is happening outside of the Windows message loop. Also, when parsing the debug parameter, the parser did not pick up the database parameter.

Basically, I am wondering if I can find a document describing how to debug -- a reference guide or any kind of helpful document for debugging Firebird code changes. We are using Visual Studio 2005.
From: Yi Lu <yl...@ra...> - 2012-01-02 19:32:58
Approach 1 seems to be the least risky. Disk space should not be a big issue with today's hardware.
From: Ann H. <aha...@ib...> - 2012-01-02 19:49:59
On Mon, Jan 2, 2012 at 2:32 PM, Yi Lu <yl...@ra...> wrote:

> Approach 1 seems to be the least risky. Disk space should not be a big issue
> with today's hardware.

The problem is not disk space, but locality of reference. With small records, adding four bytes could decrease the number of rows per page by 10-15%, leading to more disk I/O. That's a significant cost to every database which can be avoided by using a slightly more complicated variable-length transaction id, or a flag that indicates which size is used for a particular record. Firebird already checks the flags to determine whether a record is fragmented, so the extra check adds negligible overhead.

And, as an aside, sweeping is not read-only now. Its purpose is to remove unneeded old versions of records from the database. The actual work may be done by the garbage collect thread, but the I/O is there.

Good luck,

Ann
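A rough illustration of the locality-of-reference argument, using assumed record and page sizes (the numbers are hypothetical, not measurements of Firebird):

#include <cstdio>

int main()
{
    // Assumed figures for illustration: a 4 KB usable page area and a small
    // record whose stored version (header + data) is about 30 bytes today.
    const double page_bytes   = 4096.0;
    const double version_size = 30.0;

    const double rows_now = page_bytes / version_size;          // ~136 rows/page
    const double rows_64  = page_bytes / (version_size + 4.0);  // ~120 rows/page

    std::printf("rows per page now:          %.0f\n", rows_now);
    std::printf("rows per page with +4 bytes: %.0f\n", rows_64);
    std::printf("loss: %.1f%%\n", 100.0 * (1.0 - rows_64 / rows_now));  // ~11.8%
    return 0;
}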
From: Yi Lu <yl...@ra...> - 2012-01-03 01:02:09
We are trying to take approach #1, involving an ODS change and data type changes so the transaction id uses a signed 64-bit integer. We just started looking at the code today, and the following changes should be applied so far:

1. hdr_oldest_transaction, hdr_oldest_active and hdr_next_transaction in the header_page structure should be SINT64 in ods.h.
2. fld_trans_id should have dtype_int64 and its size should be sizeof(SINT64) in fields.h.
3. Signatures of functions and variables which use those variables should be changed.

I am wondering if we are on the right track with this approach? Did we miss any important ODS changes?

Also, we would be willing to contribute to the Firebird Foundation in order to help us develop this feature, as we can't wait until Firebird 3.0.
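As a rough sketch of what step 1 amounts to (the struct below is heavily simplified and only illustrative; the real header_page in ods.h has many more members and a layout that must stay compatible with the ODS):

// Simplified illustration only -- not the actual Firebird ods.h contents.
#include <cstdint>

typedef int32_t SLONG;
typedef int64_t SINT64;

struct header_page_sketch
{
    // ... other header fields ...

    // Before: 32-bit transaction counters, which is where the ~2^31 limit
    // on transaction ids comes from.
    // SLONG hdr_oldest_transaction;
    // SLONG hdr_oldest_active;
    // SLONG hdr_next_transaction;

    // After (approach #1): widen the counters to 64 bits. This changes the
    // on-disk layout, so it is an ODS change, and every place that reads or
    // writes these fields has to be updated in step with it.
    SINT64 hdr_oldest_transaction;
    SINT64 hdr_oldest_active;
    SINT64 hdr_next_transaction;
};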
From: Philippe M. <mak...@fi...> - 2012-01-03 06:54:38
Yi Lu [2012-01-03 02:02]:

> Also, we would be willing to contribute to the Firebird Foundation in order
> to help us develop this feature, as we can't wait until Firebird 3.0.

Please contact me directly, so we can see what your needs and intentions are and find the best solution.

--
Philippe Makowski
From: Dimitry S. <sd...@ib...> - 2012-01-01 18:33:17
01.01.2012 18:22, Adriano dos Santos Fernandes wrote:

> Let something like sweep consolidate old transactions into only one,
> concurrently with user operations.

Sweep is a mostly read-only operation which is known to cause intolerable slowdown. Everybody turns it off because of that. You are suggesting running a read-write operation on the whole database concurrently. I'm afraid that compared with this, backup/restore is a "lesser evil". At least a faster one.

--
SY, SD.
From: Yi Lu <yl...@ra...> - 2012-01-02 19:34:54
We turn off the automatic sweep option and manually perform a sweep (as well as recompute index selectivity) once a day at a low-traffic hour (2am). Performance is actually not impacted that much.
From: Vlad K. <hv...@us...> - 2012-01-04 18:45:58
> How do I debug the Firebird server when it is running with code changes that we made?

Are you asking how to debug applications? Using a debugger ;)

> I am running fbserver.exe with "-a -p 3050 -database
> "localhost:..\myDatabase.fdb"". The Firebird tray icon is showing me that
> nothing has been attached and nothing is happening outside of the Windows
> message loop. Also, when parsing the debug parameter, the parser did not
> pick up the database parameter.

Did you change the command-line parser to support a "database" switch? I won't ask why you need such a switch...

> Basically, I am wondering if I can find a document describing how to debug --
> a reference guide or any kind of helpful document for debugging Firebird code
> changes. We are using Visual Studio 2005.

So, use the VS docs. Firebird is the same kind of application as any other, and there is no separate description of how to debug Firebird.

Regards,

Vlad

PS Or I misunderstood you completely ;)
From: W O <sis...@gm...> - 2012-01-01 00:07:46
You are right, Alexander, but with computers getting faster and faster every month, is that really a problem today?

Greetings.

Walter.

On Sat, Dec 31, 2011 at 9:57 AM, Alexander Peshkov <pes...@ma...> wrote:

> This will make each version of the record (not just the record, but EACH
> version of it) 4 bytes longer. For tables with small records that means
> severe performance degradation.
From: Alex P. <pes...@ma...> - 2012-01-03 13:13:41
On 01/01/12 04:07, W O wrote:

> You are right, Alexander, but with computers getting faster and faster every
> month, is that really a problem today?

Practice says that it is still a problem, at least at the user's brain level :) We increased the record number size in FB2, and I used to hear many times: well, why is FB2 slower than FB 1.5 on the same server for some operations?
From: Alex P. <pes...@ma...> - 2011-12-30 07:47:55
On 12/29/11 22:05, Leyne, Sean wrote:

>> The reason for the signed type here was (at least the visible reason) very
>> simple: the same parameter of some functions might be passed either a
>> transaction number (a positive value) or something else (a negative value).
>> I.e. a negative sign meant 'this is not a transaction' and the function
>> behaved accordingly.
> And some people have complained about some of my suggestions as being "hacks"!!!

Sean, certainly it was a hack, but it has been left in the codebase since pre-Firebird times. In FB3 it was cleaned up. We try to remove such 'solutions' from the code, and certainly do not want to add new ones.
From: Jesus G. <je...@gm...> - 2011-12-29 19:44:14
> This change was done in trunk. The reason for the signed type here was (at
> least the visible reason) very simple: the same parameter of some functions
> might be passed either a transaction number (a positive value) or something
> else (a negative value). I.e. a negative sign meant 'this is not a
> transaction' and the function behaved accordingly.

Wouldn't it be better, instead of that, to treat a transaction id equal to 0 as "no transaction", and anything else as a transaction? As there is now a problem with transaction ids on heavily loaded systems, that could ease the problem a little.

Jesus
From: Dimitry S. <sd...@ib...> - 2011-12-29 20:50:07
29.12.2011 20:41, Jesus Garcia wrote:

> Wouldn't it be better, instead of that, to treat a transaction id equal to 0
> as "no transaction", and anything else as a transaction?

There is a transaction number zero.

> As there is now a problem with transaction ids on heavily loaded systems,
> that could ease the problem a little.

Heavily loaded systems should use clusters. That's all. While one node is under maintenance, the others do all the work. These systems need clusters anyway for high availability and/or load balancing. One can't be serious running critical systems on a single server.

--
SY, SD.