From: Rustam G. <fir...@ma...> - 2009-10-26 16:42:45

Alexander Peshkoff wrote:
> Not sure. For some options (like ExternalFileAccess) I prefer to let
> per-database configuration only narrow the accessible range. For example:
>
> firebird.conf:
> ExternalFileAccess=Restrict /path1;/path2
>
> perDatabaseConfig:
> ExternalFileAccess=Restrict /path1              #valid
> ExternalFileAccess=Restrict /path1;/path2/sub2  #valid
> ExternalFileAccess=Restrict /path3              #bad
> ExternalFileAccess=Full                         #bad

Perfect. But from the POV of cleaning up db-header options it changes nothing. :)

WBR, GR.

From: Alexander P. <pes...@ma...> - 2009-10-26 16:47:32

On Monday 26 October 2009 19:39:04 Rustam Gadjimuradov wrote:
> Alexander Peshkoff wrote:
> > Not sure. For some options (like ExternalFileAccess) I prefer to let
> > per-database configuration only narrow the accessible range. For example:
> >
> > firebird.conf:
> > ExternalFileAccess=Restrict /path1;/path2
> >
> > perDatabaseConfig:
> > ExternalFileAccess=Restrict /path1              #valid
> > ExternalFileAccess=Restrict /path1;/path2/sub2  #valid
> > ExternalFileAccess=Restrict /path3              #bad
> > ExternalFileAccess=Full                         #bad
>
> Perfect. But from the POV of cleaning up db-header options it changes nothing. :)

Just wanted to be a bit more precise here :))

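
To make that narrowing rule concrete, here is a rough standalone C++ sketch (not Firebird code; all function and variable names are invented) of the path check a per-database override would have to pass: every path requested at the database level must fall inside some path already allowed by the server-wide setting. The Restrict-vs-Full part of the rule is left out for brevity.

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// True if 'sub' equals 'base' or lies underneath it in the directory tree.
static bool isUnder(const std::string& sub, const std::string& base)
{
    if (sub.compare(0, base.size(), base) != 0)
        return false;
    return sub.size() == base.size() || sub[base.size()] == '/';
}

// Split a semicolon-separated path list, as used by ExternalFileAccess.
static std::vector<std::string> splitPaths(const std::string& list)
{
    std::vector<std::string> out;
    std::stringstream ss(list);
    std::string item;
    while (std::getline(ss, item, ';'))
        if (!item.empty())
            out.push_back(item);
    return out;
}

// A per-database list is accepted only if every path it names falls inside
// some path already allowed by the server-wide list.
static bool narrows(const std::string& serverList, const std::string& dbList)
{
    for (const std::string& wanted : splitPaths(dbList))
    {
        bool covered = false;
        for (const std::string& allowed : splitPaths(serverList))
            covered = covered || isUnder(wanted, allowed);
        if (!covered)
            return false;
    }
    return true;
}

int main()
{
    const std::string server = "/path1;/path2";
    std::cout << narrows(server, "/path1") << '\n';              // 1 (valid)
    std::cout << narrows(server, "/path1;/path2/sub2") << '\n';  // 1 (valid)
    std::cout << narrows(server, "/path3") << '\n';              // 0 (bad)
    return 0;
}
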
From: Pavel C. <pc...@ib...> - 2009-10-26 19:48:51

Leyne, Sean wrote:
>> 1) Configuration right in database
>
>> Cons:
> ...
>> - Risk of Blast-from-the-past misconfiguration mentioned above.
>
> I really don't think those are real problems, more human misunderstandings...

Definitely, but it keeps happening whenever people start to play with sweep interval, forced writes and page buffers :(

As a Firebird consultant I get called in to optimize and troubleshoot installations. It happens too often for my taste that they call me again after a while to fix what I was supposed to have fixed in the first place, only to discover that someone restored from an old backup or used an "old" database copy without adjusting the configuration again, even though they were explicitly warned, in writing, that the configuration options I had changed to tune up their system would revert to bad values if they used backups created before the change or some rogue database copy I hadn't adjusted. Forced writes is mostly harmless, but sweep interval and especially page buffers can kill.

>> Personally, I would vote for 2) any sober day.
>
> It's 2:00pm in Toronto, so I'm quite sober!

It's 20:45 here in Prague and I'm starting to be sorry I'm still sober (the daylight saving time switch always puts me out of balance). Guess it's time to get some beer and a life :)

best regards
Pavel Cisar
IBPhoenix

From: Alexander P. <pes...@ma...> - 2009-10-27 09:40:48

On Monday 26 October 2009 22:48:01 Pavel Cisar wrote:
> Forced writes is mostly harmless,

Not agreed. Have you ever seen a database corrupted after a server failure because FW was turned off?

From: Adriano d. S. F. <adr...@gm...> - 2009-10-26 22:07:33

Everybody on this list is (or should be) able to provide simple tests demonstrating why their feature request is worthwhile. Otherwise developers' time would be consumed by crazy things and nothing would really get developed.

Adriano

On Mon, Oct 26, 2009 at 7:56 PM, Geoff Worboys <ge...@te...> wrote:
>>> We know ? You know ? I'm - not. Show me the real DBMS
>>> which support such page size and test showing this fat pages
>>> is good (not read-only test, please)...
>
> Why not a read-only test? After all we can create read-only
> databases. We need to stop assuming that we know everything that
> all users want from their databases.
>
>> We simply need to "get out of the way" and provide support for
>> larger page sizes, to allow them to do what they need to do.
>
> I've got to vote with Sean here. Unless there is a really good
> reason to do otherwise, Firebird should remove such artificial
> restrictions.
>
> Remove the restriction and developers can test Firebird, rather
> than some unrelated engine, to see if it makes a difference to
> their own specific situations.
>
> --
> Geoff Worboys
> Telesis Computing

From: Leyne, S. <Se...@br...> - 2009-10-26 22:43:28

> Everybody on this list is (or should be) able to provide simple tests
> demonstrating why their feature request is worthwhile.

Assuming a 16KB page size:

- a 4KB row with a 12KB blob would require 2 disk operations in FB to read/write
- a 64KB row (think lots of integer values) would require a minimum of 4 disk operations in FB to read/write
- a 1MB blob would require 65 disk operations to read/write
- a 10MB blob would require 650 disk operations to read/write

whereas with a 64KB page size:

- a 4KB row with a 12KB blob could require just 1 disk operation in FB to read/write
- a 64KB row would require a minimum of 1 or 2 disk operations in FB to read/write
- a 1MB blob would require a minimum of 17 disk operations to read/write
- a 10MB blob would require a minimum of 170 disk operations to read/write

Sean

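
The scaling behind those figures is just the number of pages needed to hold a given payload. A minimal sketch of that arithmetic (it ignores Firebird's pointer and blob header pages, record versions and caching, so it shows the trend rather than Sean's exact numbers):

#include <cstdio>

// Minimum number of pages (and hence, in the worst case, disk operations)
// needed to hold 'bytes' of data with a given page size.
static unsigned long pagesFor(unsigned long bytes, unsigned long pageSize)
{
    return (bytes + pageSize - 1) / pageSize;   // ceil(bytes / pageSize)
}

int main()
{
    const unsigned long KB = 1024, MB = 1024 * KB;
    const unsigned long pageSizes[] = { 16 * KB, 64 * KB };
    const unsigned long payloads[]  = { 16 * KB, 64 * KB, 1 * MB, 10 * MB };

    for (unsigned long pageSize : pageSizes)
    {
        std::printf("page size %lu KB:\n", pageSize / KB);
        for (unsigned long payload : payloads)
            std::printf("  %5lu KB payload -> %lu page(s)\n",
                        payload / KB, pagesFor(payload, pageSize));
    }
    return 0;
}
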
From: Vlad K. <hv...@us...> - 2009-10-27 08:30:52

>>> They are disk block size! They are the size of the data which will be
>>> read/written to disk on 1 operation.
>>
>> If I have NTFS with a cluster size of 4KB on RAID10 with a stripe size
>> of 64KB: what is the smallest IO unit?
>
> For the OS: 4KB
>
> For the disk subsystem: 64KB
>
> The disk controller would read 64KB from disk and give the OS the 4KB it
> wanted.
>
> When the OS would write 4KB, the disk controller would read 64KB, and
> "stitch" the 4KB from the OS into the 64KB block, which would be written
> to disk.

Why do you think the OS/FB will write only 4KB? Why not 8 or 16 or 128? Remember I said "allocation policy"?

> So, you see 64KB is being read and written whether you like it or not.
>
> With that being the case, you are better off setting the NTFS cluster size
> to 64KB and the database page size to 64KB to get the best performance.
> But wait, you can't set the database to 64KB... hum, how do we solve that?

An NTFS cluster of 64KB is the worst thing you can do for a database.

Regards,
Vlad

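
The read-modify-write cost described above can be put into numbers. A rough sketch (hypothetical model only: it assumes the controller always works in whole 64KB stripe units, with no controller cache and no coalescing of neighbouring writes):

#include <cstdio>

// Bytes physically read and written by a stripe-based controller when the
// OS submits one random write of 'ioSize' bytes, assuming the controller
// operates on whole 'stripeSize' units and nothing is cached.
struct Traffic { unsigned long readBytes, writeBytes; };

static Traffic randomWriteTraffic(unsigned long ioSize, unsigned long stripeSize)
{
    // A write smaller than the stripe forces read-modify-write:
    // read the stripe, patch in the new bytes, write the stripe back.
    if (ioSize < stripeSize)
        return { stripeSize, stripeSize };

    // Stripe-sized (and aligned) writes can go straight to disk.
    return { 0, ioSize };
}

int main()
{
    const unsigned long KB = 1024;
    const unsigned long stripe = 64 * KB;
    const unsigned long ioSizes[] = { 4 * KB, 16 * KB, 64 * KB };

    for (unsigned long io : ioSizes)
    {
        const Traffic t = randomWriteTraffic(io, stripe);
        std::printf("%2lu KB write -> read %2lu KB, write %2lu KB (traffic x%.1f)\n",
                    io / KB, t.readBytes / KB, t.writeBytes / KB,
                    double(t.readBytes + t.writeBytes) / io);
    }
    return 0;
}
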
From: Vlad K. <hv...@us...> - 2009-10-27 08:44:25

> Vlad,
>
>> I think extents (implemented properly) will satisfy much better all
>> possible usage cases: not waste cache memory,
>
> If optimizing cache is a goal, shouldn't the cache be changed from a page
> cache to a row cache?

Not sure. Far, far from sure. If you are referring to Falcon - it has both caches, IIRC.

> Wouldn't this allow for the cache to truly represent the most active
> structures in the system ... rows?

Probably. But the cost of synchronization between the page and row caches could easily eat all the performance benefit. This question requires a lot of time to research.

> Further, wouldn't caching rows provide the most efficient memory usage?
> By allowing for the memory associated with inactive rows to be freed up...

Again, this could be good for a read-only database, but not for a read-write one. Or, if you need an in-memory database - take a ready one from the market...

Regards,
Vlad

From: Vlad K. <hv...@us...> - 2009-10-27 08:48:29

>> The restriction on page size (at least on page sizes above
>> 64Kb) is not entirely artificial. Lifting it could increase
>> the size of internal structures and make smaller page sizes
>> less efficient. There are ways to avoid the increase in size,
>> but they increase the computational load.
>
> Which, perhaps, represents "a really good reason to do
> otherwise". Although the performance impact is very difficult
> to predict given modern processor optimisations.
>
> Vlad, so far, seems to be arguing that there are no situations
> in which a larger page size will be useful. That seems an
> unrealistic view (eg: read-only databases, hardware with large
> block sizes etc).

Vlad, so far, has tried to show another way to achieve the same goal without the known drawbacks.

> It is also very difficult to predict exactly what effect a
> change in page-size will have without trying it. What works
> for some other engine may not work for Firebird... and vice
> versa.
>
> This does start to sound like "try just to see if it works",
> but to a certain extent many developments for performance can
> be like that. The original suggestion was:
>
> Dimitry Sibiryakov wrote:
>>> Change offset fields from USHORT to ULONG to allow page
>>> sizes >64k. Change meaning of hdr_page_size from number
>>> of bytes to number of kilobytes for the same purpose.
>
> Is there much of a downside to making this much change? To
> make it feasible for future experimentation on the subject of
> page sizes. (Or are we going to be stuck in the "you can't
> prove it's faster" but "you can't prove it's slower" argument?)

As Dmitry already pointed out, this change is < 1% of the required changes. And I won't even mention the amount of testing required.

Regards,
Vlad

From: Alexander P. <pes...@ma...> - 2009-10-27 10:08:46

On Tuesday 27 October 2009 11:30:37 Vlad Khorsun wrote:
>>>> They are disk block size! They are the size of the data which will be
>>>> read/written to disk on 1 operation.
>>>
>>> If I have NTFS with a cluster size of 4KB on RAID10 with a stripe size
>>> of 64KB: what is the smallest IO unit?
>>
>> For the OS: 4KB
>>
>> For the disk subsystem: 64KB
>>
>> The disk controller would read 64KB from disk and give the OS the 4KB it
>> wanted.
>>
>> When the OS would write 4KB, the disk controller would read 64KB, and
>> "stitch" the 4KB from the OS into the 64KB block, which would be written
>> to disk.
>
> Why do you think the OS/FB will write only 4KB? Why not 8 or 16 or 128?
> Remember I said "allocation policy"?

Well, when you perform a massive insert that allocation policy can help. But when you later need to update records here and there - the OS will anyway need to read/write all 64KB.

> An NTFS cluster of 64KB is the worst thing you can do for a database.

If it can't support 64KB blocks - yes.

Vlad, do you want to say that the old fact 'a database works optimally when logical block size == physical block size (the minimal amount of data that the OS reads from/writes to disk)' is wrong?

From: Vlad K. <hv...@us...> - 2009-10-27 10:17:18

> On Tuesday 27 October 2009 11:30:37 Vlad Khorsun wrote:
>>>>> They are disk block size! They are the size of the data which will be
>>>>> read/written to disk on 1 operation.
>>>>
>>>> If I have NTFS with a cluster size of 4KB on RAID10 with a stripe size
>>>> of 64KB: what is the smallest IO unit?
>>>
>>> For the OS: 4KB
>>>
>>> For the disk subsystem: 64KB
>>>
>>> The disk controller would read 64KB from disk and give the OS the 4KB it
>>> wanted.
>>>
>>> When the OS would write 4KB, the disk controller would read 64KB, and
>>> "stitch" the 4KB from the OS into the 64KB block, which would be written
>>> to disk.
>>
>> Why do you think the OS/FB will write only 4KB? Why not 8 or 16 or 128?
>> Remember I said "allocation policy"?
>
> Well, when you perform a massive insert that allocation policy can help. But
> when you later need to update records here and there - the OS will anyway
> need to read/write all 64KB.

Another example: we have an almost full page of 64KB and need to update a record on it. We and the OS will write two pages of 64KB. So, where is the performance gain?

BTW, a mass update should not differ much from the mass insert case.

>> An NTFS cluster of 64KB is the worst thing you can do for a database.
>
> If it can't support 64KB blocks - yes.

In any case, I'd say. I have never read a recommendation for a DBA to format an FS with a cluster larger than 8KB.

> Vlad, do you want to say that the old fact 'a database works optimally when
> logical block size == physical block size (the minimal amount of data that
> the OS reads from/writes to disk)' is wrong?

When we speak about huge physical blocks - yes, it's wrong. That's why people don't use huge physical blocks for databases.

Regards,
Vlad

PS Disk IO is a big part of database work, but think also about memory use efficiency, internal on-page structures, etc...

From: Alexander P. <pes...@ma...> - 2009-10-27 10:21:14

On Tuesday 27 October 2009 11:44:06 Vlad Khorsun wrote:
>> Vlad,
>>
>>> I think extents (implemented properly) will satisfy much better all
>>> possible usage cases: not waste cache memory,
>>
>> If optimizing cache is a goal, shouldn't the cache be changed from a page
>> cache to a row cache?
>
> Not sure. Far, far from sure. If you are referring to Falcon - it has both
> caches, IIRC.
>
>> Wouldn't this allow for the cache to truly represent the most active
>> structures in the system ... rows?
>
> Probably. But the cost of synchronization between the page and row caches
> could easily eat all the performance benefit. This question requires a lot
> of time to research.
>
>> Further, wouldn't caching rows provide the most efficient memory usage?
>> By allowing for the memory associated with inactive rows to be freed up...
>
> Again, this could be good for a read-only database, but not for a
> read-write one. Or, if you need an in-memory database - take a ready one
> from the market...

I suppose that a row cache is something totally unrealistic for FB3. Page sizes >64K - too. What should be done in the new ODS is to measure page size in KB, and support 32K pages. Maybe 64K too, but this depends upon the resources available to do it.

From: Leyne, S. <Se...@br...> - 2009-10-27 16:56:36

Alex,

> What should be done in the new ODS is to measure page size in KB, and
> support 32K pages. Maybe 64K too, but this depends upon the resources
> available to do it.

I would support that first step.

Sean

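
The header-side half of Dimitry's suggestion is tiny on its own: keep the page-size field 16 bits wide but let it count kilobytes instead of bytes. A hypothetical sketch (the names here are invented, not the real ODS declarations):

#include <cstdint>
#include <cstdio>

// Hypothetical: the on-disk page-size field stays 16 bits wide,
// but is interpreted as a count of kilobytes rather than bytes.
static uint32_t pageSizeBytes(uint16_t headerValueKb)
{
    return static_cast<uint32_t>(headerValueKb) * 1024;
}

int main()
{
    const uint16_t candidates[] = { 16, 32, 64, 128 };
    for (uint16_t kb : candidates)
        std::printf("header value %3u -> page size %u bytes\n",
                    static_cast<unsigned>(kb),
                    static_cast<unsigned>(pageSizeBytes(kb)));
    return 0;
}
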
From: Alexander P. <pes...@ma...> - 2009-10-27 10:54:27

On Tuesday 27 October 2009 13:17:05 Vlad Khorsun wrote:
>> Well, when you perform a massive insert that allocation policy can help.
>> But when you later need to update records here and there - the OS will
>> anyway need to read/write all 64KB.
>
> Another example: we have an almost full page of 64KB and need to update a
> record on it. We and the OS will write two pages of 64KB. So, where is the
> performance gain?

Same for an almost full 64K extent. As it is now, we write only 2 * 4KB, while the OS first reads 2 * 64KB and then writes 2 * 64KB. I do not know an allocation policy which makes a difference between an almost full 64K extent and an almost full 64K page.

> BTW, a mass update should not differ much from the mass insert case.

This depends upon what you understand by 'much' :))

>>> An NTFS cluster of 64KB is the worst thing you can do for a database.
>>
>> If it can't support 64KB blocks - yes.
>
> In any case, I'd say. I have never read a recommendation for a DBA to
> format an FS with a cluster larger than 8KB.

Maybe because databases don't support larger blocks anyway?

>> Vlad, do you want to say that the old fact 'a database works optimally when
>> logical block size == physical block size (the minimal amount of data that
>> the OS reads from/writes to disk)' is wrong?
>
> When we speak about huge physical blocks - yes, it's wrong. That's why
> people don't use huge physical blocks for databases.

How can they do it if databases do not support it?

> PS Disk IO is a big part of database work, but think also about memory use
> efficiency, internal on-page structures, etc...

Low efficiency of the page cache is really a problem here. Maybe we need (that's definitely not about 3.0) a row cache in addition to a (rather small, like in CS now) page cache to support huge pages efficiently.

What about on-page structures... Nobody forces us to measure offset/length in bytes; using 8-byte words (which also provides better alignment), we can move to 512K pages with the existing data structures. Hope this will be enough for a relatively long time :))

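
The 512K figure follows directly from the field widths. A hypothetical sketch of such an encoding (invented names, not the actual on-page structures): with 16-bit offsets counted in bytes the largest addressable page is 64KB, while the same 16-bit fields counted in 8-byte words reach 65536 * 8 = 512KB, and every stored offset is automatically 8-byte aligned.

#include <cassert>
#include <cstdint>
#include <cstdio>

// Hypothetical encoding: on-page offsets/lengths stay in 16-bit fields,
// but count 8-byte words instead of bytes.
static const unsigned ALIGNMENT = 8;

static uint16_t toWords(uint32_t byteOffset)
{
    assert(byteOffset % ALIGNMENT == 0);            // everything 8-byte aligned
    assert(byteOffset / ALIGNMENT <= UINT16_MAX);   // fits the 16-bit field
    return static_cast<uint16_t>(byteOffset / ALIGNMENT);
}

static uint32_t toBytes(uint16_t wordOffset)
{
    return static_cast<uint32_t>(wordOffset) * ALIGNMENT;
}

int main()
{
    const unsigned span = UINT16_MAX + 1u;   // number of distinct 16-bit values
    std::printf("max page with byte offsets: %u KB\n", span / 1024);
    std::printf("max page with word offsets: %u KB\n", span * ALIGNMENT / 1024);

    // A record slot near the end of a 512KB page still fits the 16-bit field.
    const uint32_t offset = 480 * 1024;
    const uint16_t stored = toWords(offset);
    std::printf("offset %u stored as %u, decoded back to %u\n",
                static_cast<unsigned>(offset), static_cast<unsigned>(stored),
                static_cast<unsigned>(toBytes(stored)));
    return 0;
}
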
From: Doychin B. <do...@ds...> - 2009-10-27 11:40:09

I can only add that a bigger page size will probably help a lot when the database is stored on a raw device.

Doychin

From: Vlad K. <hv...@us...> - 2009-10-27 12:29:59

>>>> An NTFS cluster of 64KB is the worst thing you can do for a database.
>>>
>>> If it can't support 64KB blocks - yes.
>>
>> In any case, I'd say. I have never read a recommendation for a DBA to
>> format an FS with a cluster larger than 8KB.
>
> Maybe because databases don't support larger blocks anyway?

So, why don't most DB vendors support large blocks?

>>> Vlad, do you want to say that the old fact 'a database works optimally when
>>> logical block size == physical block size (the minimal amount of data that
>>> the OS reads from/writes to disk)' is wrong?
>>
>> When we speak about huge physical blocks - yes, it's wrong. That's why
>> people don't use huge physical blocks for databases.
>
> How can they do it if databases do not support it?

The same question as above ;)

>> PS Disk IO is a big part of database work, but think also about memory use
>> efficiency, internal on-page structures, etc...
>
> Low efficiency of the page cache is really a problem here. Maybe we need
> (that's definitely not about 3.0) a row cache in addition to a (rather
> small, like in CS now) page cache to support huge pages efficiently.
>
> What about on-page structures... Nobody forces us to measure offset/length
> in bytes; using 8-byte words (which also provides better alignment), we can
> move to 512K pages with the existing data structures. Hope this will be
> enough for a relatively long time :))

It's not about how to represent big pages. It's about processing (b-tree) and reduced lock granularity creating additional hot spots.

Regards,
Vlad

From: Alexander P. <pes...@ma...> - 2009-10-27 12:43:54

On Tuesday 27 October 2009 15:13:04 Vlad Khorsun wrote:
> So, why don't most DB vendors support large blocks?
>
>>>> Vlad, do you want to say that the old fact 'a database works optimally when
>>>> logical block size == physical block size (the minimal amount of data that
>>>> the OS reads from/writes to disk)' is wrong?
>>>
>>> When we speak about huge physical blocks - yes, it's wrong. That's why
>>> people don't use huge physical blocks for databases.
>>
>> How can they do it if databases do not support it?
>
> The same question as above ;)

There can be different reasons. Including problems with the ODS of those databases.

>>> PS Disk IO is a big part of database work, but think also about memory use
>>> efficiency, internal on-page structures, etc...
>>
>> What about on-page structures... Nobody forces us to measure offset/length
>> in bytes; using 8-byte words (which also provides better alignment), we can
>> move to 512K pages with the existing data structures. Hope this will be
>> enough for a relatively long time :))
>
> It's not about how to represent big pages. It's about processing (b-tree)
> and reduced lock granularity creating additional hot spots.

Maybe I'm too blind, but I do not see serious b-tree issues. But reduced lock granularity is really a serious argument for using extents.

From: m. Th. <th...@va...> - 2009-10-27 11:30:20

Dmitry Yemanov wrote:
> All,
>
> Below is the initial proposal (to be corrected/extended by others) that
> describes the physical ODS changes we'd like to see in ODS 12.

Just wondering: is it possible to enhance the index engine to allow backwards scanning? I.e. to remove the need to have _two_ indexes (one ascending and one descending) just for the case in which the user wants to do a sort, max, min etc.

TIA,

m. Th.

From: Dimitry S. <sd...@ib...> - 2009-10-27 11:35:30

> Just wondering: is it possible to enhance the index engine to allow
> backwards scanning?

No. Unless you can suggest an algorithm for working with it without deadlocks.

SY, SD.

From: Stefan H. <li...@st...> - 2009-10-27 16:30:10

> "So, if I move my database to another server/folder, I will need to
> move the separate conf file as well!"
>
> Nuts to that!!!
>
> Store all of the *specified* database settings (remembering not all
> settings are *database* settings) in the database header.

I also don't like the idea of having multiple files per database. It is a great plus that it's just one comprehensive file. (Unless you specify otherwise, but then that's on purpose.)

We already have settings like Sweep Interval, SQL Dialect, and Buffer Pages in the database. Why introduce a new file for others? Where do you draw the line? Will GBAK also put that file into the .fbk backup or will I have to copy that? If it does, will a GBAK restore also restore that file?

Regards

Stefan

From: Stefan H. <li...@st...> - 2009-10-28 11:22:29

>> We already have settings like Sweep Interval, SQL Dialect, and Buffer
>> Pages in the database. Why introduce a new file for others?
>
> The point was to move these values (except SQL Dialect, which actually
> isn't a configuration option) out of the database into a text configuration
> file. The config file could be firebird.conf with a [databases] section, a
> separate databases.conf file or a database-specific file, whatever, as
> this wasn't settled yet.

But why? Why do you want to separate things from a database that are a property of the database? I don't get the point.

>> Where do you draw the line? Will GBAK also put that file into the
>> .fbk backup or will I have to copy that? If it does, will a GBAK
>> restore also restore that file?
>
> Configuration isn't actually something that should be part of the backup
> file itself. It should contain only your data, not configuration, as there
> is no direct link between your data and server configuration (even though
> it is database specific).

Sorry, I don't get this. For me, it belongs together. E.g. when I configure a Page Cache Size for my database I want it to stay until I configure another one. Of course I can configure it to be the "default", as specified by the central configuration. We already have that now.

> You can use the backup just fine on another system or on the same
> one it was created on but under different conditions (for example you
> switched from SuperServer to Classic meanwhile), where the
> configuration values would not be valid any more.

But why? What's the advantage?

> If you want to back up your configuration, then make a backup copy of
> your configuration file(s).

I cannot do that when I do a backup from a remote location. So I would lose the configuration data. In a crash recovery scenario it can be a nightmare to find out why things aren't working as smoothly as they did before.

This is leading away from Firebird's simplicity.

Best Regards

Stefan

--
Stefan Heymann, Tübingen, Germany

From: Pavel C. <pc...@ib...> - 2009-10-28 12:31:50

Stefan Heymann wrote:
>> The point was to move these values (except SQL Dialect, which actually
>> isn't a configuration option) out of the database into a text configuration
>> file. The config file could be firebird.conf with a [databases] section, a
>> separate databases.conf file or a database-specific file, whatever, as
>> this wasn't settled yet.
>
> But why? Why do you want to separate things from a database that are a
> property of the database? I don't get the point.

It's NOT a property of the database, it's a SERVER configuration OVERRIDE for a database. A database property is, for example, page size, owner or fill ratio, but page buffers, forced writes or sweep interval are server properties. They affect server operations, not the database itself. Don't get fooled by the fact that they can be database specific; they're still server operational parameters.

The worst thing about this is that currently you CAN adjust DATABASE ATTRIBUTES on restore via gbak options, but you can't affect these server attributes stored in the backup.

>> Configuration isn't actually something that should be part of the backup
>> file itself. It should contain only your data, not configuration, as there
>> is no direct link between your data and server configuration (even though
>> it is database specific).
>
> Sorry, I don't get this. For me, it belongs together. E.g. when I
> configure a Page Cache Size for my database I want it to stay until I
> configure another one. Of course I can configure it to be the
> "default", as specified by the central configuration. We already have
> that now.

Moving cache size out of the database into the server configuration doesn't make it miraculously volatile. Once you set it, it will stay the same until you change it. The sole difference is that you would finally treat database-specific server configuration as such. Cache size is bound to the SERVER environment; it is not a general value for the database that you should carry around when you move it to another box or change the server itself.

>> You can use the backup just fine on another system or on the same
>> one it was created on but under different conditions (for example you
>> switched from SuperServer to Classic meanwhile), where the
>> configuration values would not be valid any more.
>
> But why? What's the advantage?

It's not an advantage, it's a matter of fact. A backup file is just that, a pile of data in a portable format that can be used to create a usable database. You can create such a database in any environment, and you certainly wouldn't want those data to poison your server environment with server configuration overrides that are not applicable to it.

>> If you want to back up your configuration, then make a backup copy of
>> your configuration file(s).
>
> I cannot do that when I do a backup from a remote location. So I would
> lose the configuration data. In a crash recovery scenario it can be a
> nightmare to find out why things aren't working as smoothly as they
> did before.

Are you serious? It's normal to make a backup of your environment settings (and the FB server configuration is part of it) and refresh it whenever you change the configuration. There is no point in backing up environment settings twice a day if you don't change them. On the other hand, you can back up your database content every hour. If your server crashes (for example a HW failure), you have to restore its environment first and then the databases (if they were affected as well). If only your database gets corrupted, you'll just restore the database into a still perfectly working and configured environment.

As for the remote backup of your environment, it's expected that there will be a Services API that will allow you to get/write the configuration file(s).

> This is leading away from Firebird's simplicity.

Not at all, it actually fixes a confusing anomaly. It was a bad decision to put it in InterBase, and the fact that we got used to it over the years doesn't make it less wrong.

best regards
Pavel Cisar
IBPhoenix

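
For illustration only, a per-database override section along the lines discussed in this thread might look something like the sketch below. The file name, section syntax and exact option names are hypothetical; none of this was settled at the time.

# databases.conf (hypothetical) - server-wide defaults stay in firebird.conf;
# entries here override them for one database only.

[/data/sales.fdb]
DefaultDbCachePages = 4096     # page buffers for this database only
ForcedWrites = true
SweepInterval = 20000

[/data/archive.fdb]
DefaultDbCachePages = 512      # small cache for a rarely touched archive
ForcedWrites = false

Kept in such a file rather than in the database header, these values stay with the server environment: a gbak backup then carries only data, and a restore into a different environment picks up whatever overrides, if any, that environment defines.
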
From: Roman R. <ro...@ro...> - 2009-10-28 11:51:38

>> The point was to move these values (except SQL Dialect, which actually
>> isn't a configuration option) out of the database into a text configuration
>> file. The config file could be firebird.conf with a [databases] section, a
>> separate databases.conf file or a database-specific file, whatever, as
>> this wasn't settled yet.
>
> But why? Why do you want to separate things from a database that are a
> property of the database? I don't get the point.

I am not sure. Forced writes can be safe on a server with UPSs and a battery-backed controller, but are not safe on others. The sweep interval is OK for one environment and not appropriate for another. Same with page cache size.

>> If you want to back up your configuration, then make a backup copy of
>> your configuration file(s).
>
> I cannot do that when I do a backup from a remote location. So I would
> lose the configuration data. In a crash recovery scenario it can be a
> nightmare to find out why things aren't working as smoothly as they
> did before.

That means only one thing: we need a way to back up the configuration parameters from the server for a particular database and restore them on another one (the latter is optional; a simple dump into a text file is already a lot). Then we could decide whether it is a task for the Firebird project or we leave it for other vendors like Red Soft.

Roman

PS. For some time I have been contemplating creating a better managed version of the Firebird server, since the current one leaves too many things open. But to make it really nice would require forking the codebase. So I am still waiting for the appropriate provider architecture, where the configuration is external to the provider, authentication is external to the provider and so on.

From: Jiri C. <di...@ci...> - 2009-10-28 12:25:59

On Wed, Oct 28, 2009 at 12:51, Roman Rokytskyy <ro...@ro...> wrote:
> I am not sure. Forced writes can be safe on a server with UPSs and a
> battery-backed controller, but are not safe on others. The sweep interval
> is OK for one environment and not appropriate for another. Same with page
> cache size.

But if your server blows up, you get the new/backup one (or set up a new VM) and you want to restore the database, you probably want to restore it to the same state as before the server blew up, don't you? You're restoring the database, not only raw data.

--
Jiri {x2} Cincura (CTO x2develop.com)
http://blog.cincura.net/ | http://www.ID3renamer.com

From: Roman R. <ro...@ro...> - 2009-10-28 12:38:47

> On Wed, Oct 28, 2009 at 12:51, Roman Rokytskyy <ro...@ro...> wrote:
>> I am not sure. Forced writes can be safe on a server with UPSs and a
>> battery-backed controller, but are not safe on others. The sweep interval
>> is OK for one environment and not appropriate for another. Same with page
>> cache size.
>
> But if your server blows up, you get the new/backup one (or set up a new
> VM) and you want to restore the database, you probably want to restore it
> to the same state as before the server blew up, don't you? You're restoring
> the database, not only raw data.

Sure, but I do not use gbak to back up my /etc, /var, /usr/local and /home directories. Usually I mirror them with rsync.

Roman
