From: Peter C. <pet...@ne...> - 2002-01-14 12:11:28
> From: J.P. King [mailto:jp...@he...]
> Would it be feasible to create an object store, and detach
> the code engine from the object store engine?

Yes.

> I personally am convinced that this would be a technically good thing
> to do (given an object store), but I don't know how the game engine
> accesses the objects - is it directly, or is there an interface layer
> which can be mangled?

The database is an object with operator[] defined, specifically so that we
could do this.

- Peter
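To make that concrete, here is a minimal sketch of a database fronted by
operator[]. The names (Database, object) are made up and the real UglyMUG
classes will differ, but the shape is the point: game code indexes the
database and never sees where the objects actually live.

// Sketch only: hypothetical names, not the real UglyMUG classes.
#include <map>
#include <string>

struct object            // stand-in for the game's object type
{
    std::string name;
    int         location;
};

class Database
{
public:
    // Game code writes db[ref] and never sees where the object lives.
    object& operator[](int ref) { return objects_[ref]; }

private:
    // Today this is an in-memory map; a cache in front of an
    // out-of-process store could sit here instead, behind the same
    // operator[], which is why the interface layer matters.
    std::map<int, object> objects_;
};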
From: Adrian S. J. <AS...@pa...> - 2002-01-14 12:20:44
> From: Peter Crowther [mailto:pet...@ne...]
>
> > From: J.P. King [mailto:jp...@he...]
> > Would it be feasible to create an object store, and detach
> > the code engine from the object store engine?
>
> Yes.
>
> > I personally am convinced that this would be a technically good thing
> > to do (given an object store), but I don't know how the game engine
> > accesses the objects - is it directly, or is there an interface layer
> > which can be mangled?
>
> The database is an object with operator[] defined, specifically so
> that we could do this.

I think the bigger problem is how do we use it? The code wouldn't
be particularly efficient if every access went to an out-of-process
call.

I suppose if we had a write-through cache it wouldn't be too bad.
At least then we'd get semi-fault tolerant data.

We'd need a decent sanity-checker that could fix problems like the
contents lists being mangled due to a crash whilst moving an object.

Adrian.
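A write-through cache of the kind Adrian suggests could sit behind that
operator[]: reads are served from memory, writes update memory and are
pushed to the backing store straight away, so the store never lags far
behind the game. A rough sketch, with a hypothetical Store interface
standing in for whatever the out-of-process engine would expose:

// Sketch only: Store, read and write are invented names.
#include <map>
#include <string>
#include <utility>

struct object { std::string name; int location; };

// Hypothetical interface to the out-of-process store.
struct Store
{
    virtual ~Store() {}
    virtual object load(int ref) = 0;
    virtual void   save(int ref, const object& o) = 0;
};

class WriteThroughCache
{
public:
    explicit WriteThroughCache(Store& s) : store_(s) {}

    // Reads hit the cache; a miss pulls the object from the store.
    const object& read(int ref)
    {
        std::map<int, object>::iterator i = cache_.find(ref);
        if (i == cache_.end())
            i = cache_.insert(std::make_pair(ref, store_.load(ref))).first;
        return i->second;
    }

    // Writes update the cache and the store together, so the store is
    // never more than one operation behind the game.
    void write(int ref, const object& o)
    {
        cache_[ref] = o;
        store_.save(ref, o);
    }

private:
    Store&                store_;
    std::map<int, object> cache_;
};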
From: J.P. K. <jp...@he...> - 2002-01-14 12:44:50
> I think the bigger problem is how do we use it? The code wouldn't
> be particularly efficient if every access went to an out-of-process
> call.

No, it wouldn't. Is this really a problem? Does this problem outweigh
the potential benefits? For example the ability to have the engine crash
but the DB stay up, and in principle the possibility for the DB to
crash, but the game to stay up (long enough for the DB to reload).

> I suppose if we had a write-through cache it wouldn't be too bad.
> At least then we'd get semi-fault tolerant data.

There are all sorts of options, although there is still the issue of
finding the object store which works for us.

> We'd need a decent sanity-checker that could fix problems like the
> contents lists being mangled due to a crash whilst moving an object.

Yes, although ideally we'd like ways of atomising operations, or
supporting rollback.

> Adrian.

Julian
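On atomising operations: one common shape is a transaction object that
collects changes and only applies them on commit, so an error part-way
through a move never half-updates a contents list. A sketch under that
assumption (Change, Store and Transaction are all invented names);
genuine crash-safety during the commit itself still needs a log
underneath, which comes up later in the thread.

#include <cstddef>
#include <vector>

// Hypothetical change record: "object ref moves to dest".
struct Change { int ref; int dest; };

// Stand-in for whatever actually writes the data.
struct Store
{
    void apply(const Change&) { /* write through to the real store */ }
};

// Changes accumulate in memory; nothing reaches the store until
// commit().  Throwing the transaction away uncommitted is the rollback.
class Transaction
{
public:
    explicit Transaction(Store& s) : store_(s) {}

    void move(int ref, int dest)
    {
        Change c = { ref, dest };
        pending_.push_back(c);
    }

    void commit()
    {
        for (std::size_t i = 0; i < pending_.size(); ++i)
            store_.apply(pending_[i]);
        pending_.clear();
    }

private:
    Store&              store_;
    std::vector<Change> pending_;
};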
From: Peter C. <pet...@ne...> - 2002-01-14 13:10:38
> From: J.P. King [mailto:jp...@he...]
> > I think the bigger problem is how do we use it? The code wouldn't
> > be particularly efficient if every access went to an out-of-process
> > call.
>
> No, it wouldn't. Is this really a problem?

Yes. Users have come to expect the site to be fast.

> Does this problem outweigh the potential benefits?

Yes. Witness the number of times I've seen MUSH databases go tits-up.
However, see later.

> Yes, although ideally we'd like ways of atomising operations, or
> supporting rollback.

Quite.

[Below] The modern C++ object stores can do transparent, transactional
persistence and object caching in-process. If there's a free one, it
would certainly be worth investigating. But we shouldn't try to roll
our own.

- Peter
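"Transparent" persistence in products of that kind is usually built on a
smart-pointer type that faults the object in from the store the first
time it is used, so game code looks like ordinary pointer code. A
much-simplified sketch of the idea, not any particular vendor's API:

#include <string>

struct object { std::string name; int location; };

// Stand-in for reading one object out of the persistent store.
object load_from_store(int ref)
{
    object o;
    o.name     = "stub";
    o.location = ref;
    return o;
}

// Dereferencing faults the object in on first touch; that is all
// "transparent" persistence means to the calling code.
class persistent_ptr
{
public:
    explicit persistent_ptr(int ref) : ref_(ref), loaded_(false) {}

    object* operator->()
    {
        if (!loaded_)
        {
            obj_    = load_from_store(ref_);
            loaded_ = true;
        }
        return &obj_;
    }

private:
    int    ref_;
    bool   loaded_;
    object obj_;
};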
From: J.P. K. <jp...@he...> - 2002-01-14 13:25:36
> > No, it wouldn't. Is this really a problem?
> Yes. Users have come to expect the site to be fast.

And the overhead of another process is really going to impinge on this?
I don't see it myself - we aren't talking about starting a new process,
merely talking to an already running one.

> > Does this problem outweigh the potential benefits?
> Yes. Witness the number of times I've seen MUSH databases go tits-up.
> However, see later.

Sorry, I don't see why a MUSH database going tits-up means that Ugly
shouldn't have a separate persistent store from the main game engine
process. The object store should be fast enough that if it falls over
it can be made to bounce back up rather fast...

> > Yes, although ideally we'd like ways of atomising operations, or
> > supporting rollback.
>
> Quite.
>
> [Below] The modern C++ object stores can do transparent, transactional
> persistence and object caching in-process. If there's a free one, it
> would certainly be worth investigating. But we shouldn't try to roll
> our own.

I absolutely agree that we shouldn't start from scratch and roll our
own. I am trawling a few places to see what I can see; thus far only one
looks interesting enough to even investigate, although that is in Java.
How expensive are the non-free ones?

> - Peter

Julian
From: J.P. K. <jp...@he...> - 2002-01-14 13:30:15
Having said I could only find one so far, I then find another, which
is more our cup of tea:

http://sourceforge.net/projects/coldstore/

I haven't looked at the code itself yet. Thoughts welcome, as ever.

Julian
From: Adrian S. J. <AS...@pa...> - 2002-01-14 13:45:53
> From: J.P. King [mailto:jp...@he...]
>
> Having said I could only find one so far, I then find another, which
> is more our cup of tea:
>
> http://sourceforge.net/projects/coldstore/
>
> I haven't looked at the code itself yet.

Quote from their website:

  MUDs, MOOs, MUSHes, M*
  It's a good way to store those objects. [ColdStore could almost have
  been designed to implement a M* :) ]

I have to say it is a nice idea; we'd still need to dump the database
like we do at the moment for backup and for modifying the Object
classes (it states that you can modify the classes as long as you
don't change the layout or virtual methods, which is exactly what
we'd want to do).

When I've got time I'll have a play with it and see how painful it
might be. The only problem might be the lack of portability between
platforms. (But if we're keeping the database dumps, then that isn't
too much of a problem.)

Adrian.
From: J.P. K. <jp...@he...> - 2002-01-14 13:51:47
> Quote from their website:
>
>   MUDs, MOOs, MUSHes, M*
>   It's a good way to store those objects. [ColdStore could almost have
>   been designed to implement a M* :) ]

Cool.

> I have to say it is a nice idea; we'd still need to dump the database
> like we do at the moment for backup and for modifying the Object
> classes (it states that you can modify the classes as long as you
> don't change the layout or virtual methods, which is exactly what
> we'd want to do).

Well, we'd want it to be dumped anyway, so that we can do backups and
stuff... talking of which, we really should do that again...

> When I've got time I'll have a play with it and see how painful it
> might be. The only problem might be the lack of portability between
> platforms. (But if we're keeping the database dumps, then that isn't
> too much of a problem.)

OK, I'll have a look at getting the code onto the standard platform
(I guess Linux) and thence port it to Solaris and possibly FreeBSD - or
at least see how painful it is. Should be able to do that later today,
either before I leave work or perhaps between Farscape and Enterprise.

> Adrian.

Julian
From: Adrian S. J. <AS...@pa...> - 2002-01-14 14:04:14
> From: Adrian St. John
>
> > From: J.P. King [mailto:jp...@he...]
> >
> > Having said I could only find one so far, I then find another, which
> > is more our cup of tea:
> >
> > http://sourceforge.net/projects/coldstore/
> >
> > I haven't looked at the code itself yet.
>
> Quote from their website:
>
>   MUDs, MOOs, MUSHes, M*
>   It's a good way to store those objects. [ColdStore could almost
>   have been designed to implement a M* :) ]

(Yes, I'm following up to my own post...)

Taking a step back, what problem are we trying to solve?

If we're looking for a secure way of storing the data, then this isn't
it. It is in-process, and therefore prone to the same problems we have
already (if we crash unexpectedly the data could be in a bad state).

I think that what we're really after is an out-of-process object store
that can be connected to by different processes (UglyMUG engine, Web
interface, low-level db hacking tool) at the same time, and that
provides a minimal amount of consistency (e.g. no half-written strings,
no having to use a database snapshot from up to an hour ago).

Admittedly in front of that should be a local-process cache that holds
some details (the common strings, flags, etc), or possibly the whole
object, or even, depending on what we're doing, tells the database how
much to cache.

Adrian.
From: J.P. K. <jp...@he...> - 2002-01-14 14:17:58
> (Yes, I'm following up to my own post...)
>
> Taking a step back, what problem are we trying to solve?
>
> If we're looking for a secure way of storing the data, then this isn't
> it. It is in-process, and therefore prone to the same problems we have
> already (if we crash unexpectedly the data could be in a bad state).

*nod* - but I assume that you can make a separate object store with
this, and then link this and the original together via a
socket/pipe/rpc/shared mmap - am I wrong? This is why you want the
abstraction layer, so that you can start with the code in one place,
and then move it around, only needing to modify the abstraction layer
to change where the work happens.

> I think that what we're really after is an out-of-process object store
> that can be connected to by different processes (UglyMUG engine, Web
> interface, low-level db hacking tool) at the same time, and that
> provides a minimal amount of consistency (e.g. no half-written
> strings, no having to use a database snapshot from up to an hour ago).

*nod* This is something that I would consider a vital aim for some
point in the future, yes.

> Admittedly in front of that should be a local-process cache that holds
> some details (the common strings, flags, etc), or possibly the whole
> object, or even, depending on what we're doing, tells the database how
> much to cache.

Broadly I consider this to be a detail - an important one, but a detail
none the less. Given that we don't want to write our own object store,
we need one which more or less meets our requirements. Then we need to
make sure that it can be bent enough to exactly meet our requirements.
Then the work starts. :-)

Will ColdStore be able to replace the current 'object store' that the
game uses? If so then it is viable enough to consider. The consideration
needs to include: can we make it a separate process to the game engine?
Do we need write-through caching to give us enough speed? If so, can we
implement it with this product? etc.

I am relatively happy with the idea of moving our current object store
out into a separate process if people think that is a sane idea - but
my impression is that no one had a good opinion of our DB engine as it
stands at the moment. Is this wrong?

> Adrian.

Julian
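The abstraction layer Julian is describing would look something like
the following: one interface the engine codes against, with the current
in-process store and a possible future out-of-process store as
interchangeable implementations. All names here are invented and the
IPC half is only stubbed.

#include <map>
#include <string>

struct object { std::string name; int location; };

// The layer the game engine codes against.
struct ObjectStore
{
    virtual ~ObjectStore() {}
    virtual object get(int ref) = 0;
    virtual void   put(int ref, const object& o) = 0;
};

// What exists today: everything lives in the game's own address space.
class InProcessStore : public ObjectStore
{
public:
    object get(int ref)                  { return objects_[ref]; }
    void   put(int ref, const object& o) { objects_[ref] = o; }
private:
    std::map<int, object> objects_;
};

// What it could become: the same interface, but get/put turn into
// requests to a separate store process over a socket/pipe/shared map.
// Only this layer changes; the game code above it does not.
class RemoteStore : public ObjectStore
{
public:
    object get(int ref)
    {
        // Would marshal a fetch request to the store process and
        // unmarshal the reply; stubbed here.
        object o;
        o.location = ref;
        return o;
    }
    void put(int, const object&)
    {
        // Would marshal a write to the store process; stubbed here.
    }
};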
From: Peter C. <pet...@ne...> - 2002-01-14 14:24:20
> From: Adrian St. John [mailto:AS...@pa...]
> Taking a step back, what problem are we trying to solve?

I have no idea. I presume it's the problem that the game sometimes
crashes without a database dump. This is rare, and getting rarer. I can
see no other reason for using such a database.

> If we're looking for a secure way of storing the data, then this isn't
> it. It is in-process, and therefore prone to the same problems we have
> already (if we crash unexpectedly the data could be in a bad state).

Does it transaction-log?

- Peter
From: Peter C. <pet...@ne...> - 2002-01-14 14:28:01
> From: J.P. King [mailto:jp...@he...]
> > Yes. Users have come to expect the site to be fast.
> And the overhead of another process is really going to impinge on
> this? I don't see it myself - we aren't talking about starting a new
> process, merely talking to an already running one.

IPC via anything other than shared memory is roughly 10,000 times slower
than an in-process memory reference. IPC via shared memory would be
liable to exactly the same problems we have now, namely data corruption.

> Sorry, I don't see why a MUSH database going tits-up means that Ugly
> shouldn't have a separate persistent store from the main game engine
> process. The object store should be fast enough that if it falls over
> it can be made to bounce back up rather fast...

... assuming the data has remained internally consistent. This seems
unlikely.

> How expensive are the non-free ones?

Multiple hundreds to multiple thousands of pounds per seat, from memory.
Server licences are more.

- Peter
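The gap is easy to measure rather than argue about: time a loop of
in-process lookups against a loop of request/reply round-trips over a
local socketpair. A rough sketch follows (POSIX, so it should build on
Linux or Solaris; error handling is omitted and the exact ratio will
depend entirely on the hardware and the kernel).

#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <map>

static double seconds()
{
    timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main()
{
    const int N = 100000;

    // In-process: N lookups in an ordinary map.
    std::map<int, int> db;
    for (int i = 0; i < 1000; ++i) db[i] = i;
    volatile int sink = 0;
    double t0 = seconds();
    for (int i = 0; i < N; ++i) sink += db[i % 1000];
    double in_process = seconds() - t0;

    // "Out of process": N request/reply round-trips over a socketpair.
    // A forked child echoes each request, standing in for a store
    // process answering fetches.
    int fd[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, fd);
    if (fork() == 0)
    {
        close(fd[0]);
        int ref;
        while (read(fd[1], &ref, sizeof ref) == (ssize_t) sizeof ref)
            write(fd[1], &ref, sizeof ref);
        _exit(0);
    }
    close(fd[1]);
    t0 = seconds();
    for (int i = 0; i < N; ++i)
    {
        int ref = i, reply;
        write(fd[0], &ref, sizeof ref);
        read(fd[0], &reply, sizeof reply);
    }
    double ipc = seconds() - t0;
    close(fd[0]);
    wait(0);

    std::printf("in-process: %.3fs  ipc: %.3fs  ratio: %.0fx\n",
                in_process, ipc, ipc / in_process);
    return 0;
}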
From: J.P. K. <jp...@he...> - 2002-01-14 16:30:38
> IPC via anything other than shared memory is roughly 10,000 times
> slower than an in-process memory reference. IPC via shared memory
> would be liable to exactly the same problems we have now, namely data
> corruption.

_10,000_? I find that hard to swallow, but assuming that it is true, it
means that if you can make the access 10,000 times faster than you need
it to be then this isn't a problem. The game used to run on a 16MHz
68020 as I recall, but let's say we talk about the 20-25MHz IPC that it
used to run on - now compare that with a 167MHz Ultra 1, with its
improved memory access. Now compare that to a dual-CPU E250 (which is
what I expect the next machine to be at present). No, we aren't up to
10,000 times, but we are well into the hundreds. Once you get up to a
multiple-GHz CPU we should more or less be there. :-)

The main delay with multiple processes is context switching, but if you
only have two active processes this shouldn't be much of an issue. If
the delay were _quite_ as bad as you are suggesting then why don't all
these big database people build webservers, and the like, into their
SQL servers? I am sure that people would pay for a factor of 10
increase, let alone a factor of 10,000.

> ... assuming the data has remained internally consistent. This seems
> unlikely.

The object store will be able to bounce back faster than the MUD with
the object store built in, by virtue of the fact that there will be
less code. Whether the object store copes well with the stress we would
want to impose on it is another matter. If it doesn't then it isn't the
product for us.

> > How expensive are the non-free ones?
> Multiple hundreds to multiple thousands of pounds per seat, from
> memory. Server licences are more.

Oh well, bang goes that idea then ;-)

> - Peter

Julian
From: Adrian S. J. <AS...@pa...> - 2002-01-14 14:30:37
> From: Peter Crowther [mailto:pet...@ne...]
> > From: Adrian St. John [mailto:AS...@pa...]
> > Taking a step back, what problem are we trying to solve?
>
> I have no idea. I presume it's the problem that the game sometimes
> crashes without a database dump. This is rare, and getting rarer. I
> can see no other reason for using such a database.

I've found the reason for that happening; it occurs if you overwrite
the running executable. Solaris nicely trashes the entire game because
the 'mv' seems to truncate the file, rather than unlink & recreate.

> > If we're looking for a secure way of storing the data, then this
> > isn't it. It is in-process, and therefore prone to the same problems
> > we have already (if we crash unexpectedly the data could be in a bad
> > state).
>
> Does it transaction-log?

Not that I can tell. All it gives you is a way of storing arbitrary
objects to disk, and it looks like it does it by having a custom memory
allocator that writes into a mmap'd file.

Adrian.
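For context, the technique Adrian is describing - not ColdStore's actual
code, just the general shape of an allocator over an mmap'd file - looks
roughly like this. Everything built in the region persists because the
kernel writes the dirty pages back to the file, and a crash mid-update
persists the half-finished state just as faithfully, which is why there
is no crash consistency without a log on top.

// Sketch only: a minimal bump allocator over a memory-mapped file.
// No bounds checking, no error handling, no recovery of used_ across
// runs - just enough to show the technique.
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>

class MappedArena
{
public:
    MappedArena(const char* path, std::size_t size)
        : size_(size), used_(0)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        ftruncate(fd, size);
        base_ = static_cast<char*>(
            mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        close(fd);
    }

    ~MappedArena() { munmap(base_, size_); }

    // Objects built here end up in the file when the pages are
    // written back - whether or not they were finished.
    void* allocate(std::size_t n)
    {
        void* p = base_ + used_;
        used_ += n;
        return p;
    }

private:
    char*       base_;
    std::size_t size_;
    std::size_t used_;
};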
From: Peter C. <pet...@ne...> - 2002-01-14 14:37:01
> From: Adrian St. John [mailto:AS...@pa...]
> I've found the reason for that happening; it occurs if you overwrite
> the running executable. Solaris nicely trashes the entire game because
> the 'mv' seems to truncate the file, rather than unlink & recreate.

Er... I thought most *nixes did that. I always moved the old file out
of the way, then copied the new one to the old pathname, which is why
you got netmud.old... or am I talking out of my hat here?

> Not that I can tell. All it gives you is a way of storing arbitrary
> objects to disk, and it looks like it does it by having a custom
> memory allocator that writes into a mmap'd file.

Bugger that, then; it's no better, and in some ways worse, than the
current technique. At the minimum, I'd want one that could handle
transactions.

- Peter
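"Handles transactions" at its simplest means a write-ahead log: append a
description of the change and force it to disk before the real data is
touched, so recovery after a crash can replay complete records and
discard a torn one at the tail. A minimal sketch with an invented record
format:

#include <fcntl.h>
#include <unistd.h>

// Hypothetical log record: "set object ref's location to dest".
struct LogRecord
{
    int ref;
    int dest;
};

// Append the record and force it to disk *before* the in-memory or
// mapped data is modified.  Recovery then replays whole records and
// ignores a partial one at the end of the log.
class TransactionLog
{
public:
    explicit TransactionLog(const char* path)
        : fd_(open(path, O_WRONLY | O_CREAT | O_APPEND, 0644)) {}

    ~TransactionLog() { close(fd_); }

    void log(const LogRecord& r)
    {
        write(fd_, &r, sizeof r);
        fsync(fd_);    // the record is durable before we proceed
    }

private:
    int fd_;
};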
From: Peter C. <pet...@ne...> - 2002-01-14 16:56:59
> From: J.P. King [mailto:jp...@he...]
> _10,000_? I find that hard to swallow

By the time you've done all the data structure marshalling and
demarshalling, it's about right --- especially in a client-server
environment where you have thread locks and the like. You can get
c. 100M memory accesses per second, but 10k IPC calls per second is
pretty good. I can point you at some interesting Web sites if you want,
notably the TAO site (which is the ORB I'm using for work right now).

> No, we aren't up to 10,000 times, but we are well into the hundreds.
> Once you get up to a multiple-GHz CPU we should more or less be
> there. :-)

And? This leaves us where we were ten years ago: we can't have a large
system, because it's too slow.

> The main delay with multiple processes is context switching, but if
> you only have two active processes this shouldn't be much of an issue.
> If the delay were _quite_ as bad as you are suggesting then why don't
> all these big database people build webservers, and the like, into
> their SQL servers? I am sure that people would pay for a factor of 10
> increase, let alone a factor of 10,000.

Increasingly, they do. In-process data servers have a long and
distinguished history.

> The object store will be able to bounce back faster than the MUD with
> the object store built in, by virtue of the fact that there will be
> less code.

Er... what? Half a second, maybe, for loading the extra code into RAM.

- Peter