From: Nolan D. <no...@bi...> - 2000-12-30 02:47:59

I spotted my error. Seems I was a bit too enthusiastic when cut-n-pasting some earlier code. Anyhow, the problem seems to be caused by functions which aren't ever closed, and the initialization process doesn't report this. Has anyone else encountered this?
From: Cynbe ru T. <cy...@mu...> - 2000-11-09 10:04:31

Nolan Darilek <no...@bi...> writes:

> Thanks! Just one thing that I'm still confused on:
>
> >>>>> "Cynbe" == Cynbe ru Taren <cy...@mu...> writes:
>
>   Cynbe> So my expectation was that in such a case you'd write not
>
>   Cynbe>     defmethod: startElementHandler { 'TestXMLParser }
>
>   Cynbe> but
>
>   Cynbe>     defmethod: startElementHandler { 'TestXMLParser 't }
>
>   Cynbe> where 't is the CommonLisp father-of-all-types, which gets
>   Cynbe> used as a wildcard in such cases, meaning "allow anything
>   Cynbe> here". (Since t like nil is a symbol that evaluates to
>   Cynbe> itself, one can write just t instead of 't in such
>   Cynbe> situations, but I tend to write 't for consistency.)
>
> Ok, I partially get this. But, I'm still confused by the fact that
> { 'TestXMLParser 't } isn't the function's exact arity. I guess I
> could understand { 'TestXMLParser $ -> $ }, or even substituting the
> $'s with 't, but why only one 't?

Um? According to the arity, there are two required input parameters, and in the method we're specifying the type of each of them. We're specifying that the first has to be 'TestXMLParser or some class inheriting from it, and the second has to be 't or some class/type inheriting from it -- and CLOS defines that -all- classes automatically inherit from t.

Possibly one -should- have to specify the -> $ part (results arity) too, and maybe that should be at least allowed in some future release. I don't remember thinking about it, so it was probably an oversight when I was up to my ears just implementing the basic functionality. :) But that isn't logically required, whereas the method compilation process -has- to know the parameter specializers in order to compile code.

But the result arguments don't have any required type, and the method compilation code has no way of taking advantage of knowing their type even if specified, so it would be an odd hodgepodge sort of construct which required you to specify the types of the method input arguments, but only the number of the output arguments. We could just always specify the output argument types as 't, I suppose, and leave the door open to maybe specifying them more precisely, and having the compiler potentially make use of that information in some fashion, if only to issue warning messages when the code clearly did not match the specification...

Many object-oriented languages -only- allow you to specialize on the first parameter, and you may be thinking in that mindset, in which case having the option of specifying more than one type may seem strange. (These languages tend to call CLOS-style methods specializing on more than one parameter "multi-methods", I think.) This habit of only specializing on one parameter fits well with the tendency of most OOP languages to tightly link the concept of generic functions to the concept of class declarations -- you declare "in" the class all the generics "for" it.

The CLOS point of view is, rightly or wrongly, quite a bit different, with generics much more decoupled from classes. A generic function is first and foremost a real function, which can be passed around as a parameter and applied/called just like any other function, which is decidedly not the case with the typical C++ or Smalltalk "message" or "virtual function". In the CLOS point of view, generic functions being basically creatures in their own right, not minor appendages of some class, there's no reason whatever that you should be restricted to specializing on only one parameter, or only on the first parameter.

And if you -do- specialize on multiple parameters, then it becomes unclear why the generic function should be considered an appendage to some class -- if the multiple parameters it specializes on are specialized on different classes, which class then should it be considered an appendage to? Each point of view makes sense on its own terms. The CLOS one is less common today, but not clearly inferior.

The first argument does become a little special in the Muq context when it comes to the question of which users are allowed to redefine a generic by defining new methods: In the end, in Muq, it comes down to ownership or at least control of the class of the first argument. There are security issues there, in that one doesn't want malicious users to be able to subvert the operation of a generic defined by someone else. But diving into a discussion of that is probably getting ahead of the game. :)

Things get more fun when we start introducing the possibility of keyword arguments, optional arguments, arguments with default values and such -- that gets us into the {[ ... ]} syntax which you'll see some places in muq/pkg/Check/x-mos and maybe in the mufhack1 docs. These correspond to CommonLisp-style lambda lists. They require that the argument list be enclosed in a block, and automate the busywork otherwise required in dealing with things like optional arguments. They are actually quite nice for high-level functions where flexibility and extendability are more important than efficiency -- they are comparable in some ways to Unix commandline syntax, with its optional flags, optional keyword-value pairs which amount to optional arguments, and variable numbers of arguments:

    myprog -vBd --foo=bar this that those these

{[ ... ]} syntax lets you do much the same sorts of things. In fact, I expect we'll sooner or later write a shell-style language for Muq that translates the former syntax into the latter. :)

But you were sticking to the simple fixed-number-of-arguments style. (Thankfully! :) So we don't have to explore that just yet.

(Muq, being intended for use by whole online communities building distributed programs, rather than just one programmer hacking on one machine, has a much stronger need than most software systems to cater to multiple, differing programming esthetics and styles, so it attempts to remain style neutral, to support multiple styles and approaches to programming, and as far as possible to let the different styles interoperate smoothly.)

BTW, I'm pretty sure there's a bug in the woodpile somewhere in methods which specialize on more than one argument, but it only appears when I'm not in a mood to fix it. If you find a consistent way of reproducing it, let me know.

Hope somewhere in all this I answered your question...?

-- Cynbe
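[Editor's note: the CLOS-style multi-method dispatch discussed above -- choosing an implementation by the types of -all- required arguments, with the leftmost argument taking precedence -- can be sketched outside Muq. The following is a minimal, hypothetical Python illustration; the registry, `defmethod`, and `call_generic` names are invented for this note and are not part of Muq, MUF, or CLOS.]

```python
import itertools

# Registry mapping (generic name, tuple of parameter types) -> implementation.
_methods = {}

def defmethod(name, *types):
    """Register an implementation of generic `name` specialized on `types`."""
    def register(fn):
        _methods[(name, types)] = fn
        return fn
    return register

def call_generic(name, *args):
    """Dispatch on the types of ALL arguments, not just the first.

    itertools.product over the MROs varies the rightmost argument fastest,
    so the leftmost argument's specificity dominates -- a rough analogue
    of CLOS's left-to-right argument-precedence order.  `object` plays the
    role of 't, the type everything inherits from.
    """
    mros = [type(a).__mro__ for a in args]
    for signature in itertools.product(*mros):
        fn = _methods.get((name, signature))
        if fn is not None:
            return fn(*args)
    raise TypeError(f"no applicable method for {name}")

class XMLParser: pass
class TestXMLParser(XMLParser): pass

# Specialize the first parameter on TestXMLParser, wildcard the second.
@defmethod("startElementHandler", TestXMLParser, object)
def _(parser, element):
    return f"Element {element} opens"
```

Calling `call_generic("startElementHandler", TestXMLParser(), "foo")` yields `"Element foo opens"`, and an instance of any future subclass of TestXMLParser would inherit the same method via its MRO.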
From: Nolan D. <no...@bi...> - 2000-11-09 02:26:58

Thanks! Just one thing that I'm still confused on:

>>>>> "Cynbe" == Cynbe ru Taren <cy...@mu...> writes:

  Cynbe> So my expectation was that in such a case you'd write not
  Cynbe>     defmethod: startElementHandler { 'TestXMLParser }
  Cynbe> but
  Cynbe>     defmethod: startElementHandler { 'TestXMLParser 't }
  Cynbe> where 't is the CommonLisp father-of-all-types, which gets
  Cynbe> used as a wildcard in such cases, meaning "allow anything
  Cynbe> here". (Since t like nil is a symbol that evaluates to
  Cynbe> itself, one can write just t instead of 't in such
  Cynbe> situations, but I tend to write 't for consistency.)

Ok, I partially get this. But, I'm still confused by the fact that { 'TestXMLParser 't } isn't the function's exact arity. I guess I could understand { 'TestXMLParser $ -> $ }, or even substituting the $'s with 't, but why only one 't?

Thanks
From: Cynbe ru T. <cy...@mu...> - 2000-11-09 01:17:45

[My replies inline.]

Nolan Darilek <no...@bi...> writes:

> Being generally bored with life, the universe and everything, I
> started Muq hacking again.

Cool! (: *jealouslook* :)

> My most recent project is an XML parser which is vaguely SAX-like.
> The parsing is going well, and I'm trying to recode portions of the
> parser to behave SAX-ish, but that causes me problems.
>
> First, a brief overview. What I'm trying to design is an event-based
> XML parser which, I feel, would be great for a virtual world.
> Basically you define a parser object and write methods which are
> called when elements open and close, when cdata is encountered, etc.
> The methods can then dynamically swap parsers, so if you have a
> 'document' element, a new parser could be swapped in to handle
> document-specific elements.

Ok, I think I almost follow that... I skimmed some SAX docs once, and it looked interesting.

> I'm using classes and generics to accomplish this. Here's some code:
>
>     ( XMLParser -- Stores information about which functions to call and when. )
>     defclass: XMLParser ;
>     'XMLParser export
>
>     defgeneric: startElementHandler { $ $ -> $ } ;
>     defgeneric: endElementHandler   { $ $ -> $ } ;
>     defgeneric: CDATAHandler        { $ $ -> $ } ;
>     defgeneric: CommentHandler      { $ $ -> $ } ;
>
> Ok, so now I'd like a test parser. The purpose of the test parser is
> to define methods which basically print out "Element foo opens", etc.
>
>     ( Here we build a test XML parser. )
>     defclass: TestXMLParser
>         :isA 'XMLParser
>     ;
>
>     defmethod: startElementHandler { 'TestXMLParser }
>         -> parser
>         -> element
>         element , " opens\n" ,
>         parser
>     ;
>     (more of the same)

Very neat!

> When I try compiling that, however, I get:
>
>     Sorry: Needed block argument at top-of-stack[0]
>     xml:
>     xml:
>     Sorry: Stack underflow
>     xml:
>     Sorry: Stack underflow
>     xml:
>     Sorry: Undefined identifier: element
>     xml:
>     Sorry: Stack underflow
>     xml:
>     Sorry: Undefined identifier: parser
>     xml:
>     xml:
>     Sorry: pull: stack is empty.
>     xml:
>
> I'm a bit confused about how this is supposed to work. I've worked
> with Dylan which, AFAIK, uses a CLOS-ish model. So I'm fairly sure I
> understand generics and such. I'm a bit confused about how the arity
> for the generics is supposed to work, though. Since I've already
> specified the arity in the defgeneric, do I need to explicitly define
> it for each method? Or do I just list the class for which it was
> defined, as I've done above?
>
> Thanks.

:pokes quickly around the code and docs for a clue, having completely forgotten all the syntax and semantics here...

Ah, ok!

1) The Muq docs on OOP suck even more than I remembered, which is really pretty remarkable. That's definitely a prime spot for work.

2) The above has to be in the running for worst compiler error message of all time...! :) Pretty clearly the MUF compiler isn't anticipating this possible problem and is basically crashing rather than issuing a decent error message. (Muq being the robust system it is, such a crash as usual results in a return to the prompt rather than a coredump, which is at least nice as far as that goes. :)

3) Muq MUF is intended to stick to the spirit of MUF, or at least of Forth, which is to be very dumb and not do anything to speak of behind the scenes. In particular, not to build parsetrees or such during compilation. So in my interpretation, at least, magically "doing the right thing" because the arity has already been declared isn't really in the spirit of Forth.

So my expectation was that in such a case you'd write not

    defmethod: startElementHandler { 'TestXMLParser }

but

    defmethod: startElementHandler { 'TestXMLParser 't }

where 't is the CommonLisp father-of-all-types, which gets used as a wildcard in such cases, meaning "allow anything here". (Since t like nil is a symbol that evaluates to itself, one can write just t instead of 't in such situations, but I tend to write 't for consistency.)

I just tried that and it didn't give the error you describe, whereas I did get just the error you show when I tried it with your syntax. So try that and see if it works for you.

BTW, just as a personal status report, Cisco is still basically eating about 70-80 hours/week of my time, starving out Muq entirely, but current plans are for me to telecommute next year plus cut back from 5 to 4 "official" days a week (5 official days are currently coming out more like 7 actual days), so I'm currently hoping that next year I will once again have evenings and weekends free, and to devote a sizable fraction of them to Muq hacking. (The downside is that instead of living on the beach in California, I'll be in Austin *shudder*. But given that in general I leave before dawn and get back after sundown these days, the beach hasn't been doing me a lot of good...)

Commuting 3-5 hours/day -has- been giving me lots of time to think of neat hacks I'd like to add to Muq! :) On last night's drive home I was thinking about adding more support for pure-functional programming into Muq, and then some support for automatic memo-ization -- the two seem to me like potentially a very powerful combination, which essentially automates a lot of datastructure programming in much the same way that garbage collection automates a lot of storage management programming.

-- Cynbe
From: Nolan D. <no...@bi...> - 2000-11-08 23:39:51

Being generally bored with life, the universe and everything, I started Muq hacking again. My most recent project is an XML parser which is vaguely SAX-like. The parsing is going well, and I'm trying to recode portions of the parser to behave SAX-ish, but that causes me problems.

First, a brief overview. What I'm trying to design is an event-based XML parser which, I feel, would be great for a virtual world. Basically you define a parser object and write methods which are called when elements open and close, when cdata is encountered, etc. The methods can then dynamically swap parsers, so if you have a 'document' element, a new parser could be swapped in to handle document-specific elements.

I'm using classes and generics to accomplish this. Here's some code:

    ( XMLParser -- Stores information about which functions to call and when. )
    defclass: XMLParser ;
    'XMLParser export

    defgeneric: startElementHandler { $ $ -> $ } ;
    defgeneric: endElementHandler   { $ $ -> $ } ;
    defgeneric: CDATAHandler        { $ $ -> $ } ;
    defgeneric: CommentHandler      { $ $ -> $ } ;

Ok, so now I'd like a test parser. The purpose of the test parser is to define methods which basically print out "Element foo opens", etc.

    ( Here we build a test XML parser. )
    defclass: TestXMLParser
        :isA 'XMLParser
    ;

    defmethod: startElementHandler { 'TestXMLParser }
        -> parser
        -> element
        element , " opens\n" ,
        parser
    ;
    (more of the same)

When I try compiling that, however, I get:

    Sorry: Needed block argument at top-of-stack[0]
    xml:
    xml:
    Sorry: Stack underflow
    xml:
    Sorry: Stack underflow
    xml:
    Sorry: Undefined identifier: element
    xml:
    Sorry: Stack underflow
    xml:
    Sorry: Undefined identifier: parser
    xml:
    xml:
    Sorry: pull: stack is empty.
    xml:

I'm a bit confused about how this is supposed to work. I've worked with Dylan which, AFAIK, uses a CLOS-ish model. So I'm fairly sure I understand generics and such. I'm a bit confused about how the arity for the generics is supposed to work, though. Since I've already specified the arity in the defgeneric, do I need to explicitly define it for each method? Or do I just list the class for which it was defined, as I've done above?

Thanks.
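[Editor's note: Nolan's design -- an event-based parser whose handler object can be swapped mid-document -- maps directly onto SAX-style APIs in other languages. A minimal sketch using Python's standard-library pull parser; the Handler classes and the return-the-next-handler convention are invented here to mirror his "methods can dynamically swap parsers" idea.]

```python
from xml.etree.ElementTree import XMLPullParser

class Handler:
    """Base handler: subclasses override only the events they care about.
    Each event method returns the handler for SUBSEQUENT events, which is
    what lets e.g. a 'document' element swap in a document-specific handler."""
    def start(self, tag): return self
    def end(self, tag):   return self

class TestHandler(Handler):
    """Analogue of TestXMLParser: just records what opens and closes."""
    def __init__(self):
        self.log = []
    def start(self, tag):
        self.log.append(f"Element {tag} opens")
        return self
    def end(self, tag):
        self.log.append(f"Element {tag} closes")
        return self

def run(xml_text, handler):
    parser = XMLPullParser(events=("start", "end"))
    parser.feed(xml_text)
    for event, elem in parser.read_events():
        # The current handler decides who handles the NEXT event.
        if event == "start":
            handler = handler.start(elem.tag)
        else:
            handler = handler.end(elem.tag)
    return handler

h = TestHandler()
run("<doc><foo/></doc>", h)
```

Running this logs "Element doc opens", "Element foo opens", "Element foo closes", "Element doc closes", in document order.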
From: Cynbe ru T. <cy...@mu...> - 2000-09-17 17:28:10

"Eli Stevens" <wic...@wi...> writes:

> I am still trying to get a grasp of how exactly certain features of
> MUF work, so please forgive me if these questions are simplistic. :)

If the docs aren't clear, it is me who needs to apologize, not you. :)

> Is the following redundant?
>
>     :slot :aSetSlot
>         :initform :: makeSet
>         :initval  [ | ]set

There's one syntax error there, probably a typo while composing email:

    :slot :aSetSlot
        :initform :: makeSet ;
        :initval  [ | ]set

is the syntactically correct version. :initform takes an anonymous function as argument, and the syntax for those is

    :: bunchacode ;

-- think of it as the usual MUF

    : functionname bunchacode ;

syntax, but with the ": functionname" part collapsed down to just "::". (C++ memories are not your friend here...)

Anyhow, using both :initform and :initval is redundant, yes. I'm not sure offhand exactly what will happen. :) The difference between the two is that one has its expression evaluated at compile time, the other at runtime (object creation time):

The expression you give :initval will be evaluated at compile time, which means that all of your objects will have the SAME set in their aSetSlot. This might be what you want if you're using the set as a communication channel between the different instances of this class. (But using a class slot instead of an object slot would make more sense in that case, usually.)

The expression you give :initform is packaged up in an anonymous function, and each time an instance of :aSetSlot is created (i.e., each time an object of the class which you're defining in the code containing this fragment is created), the anonymous function will be evaluated once to produce the initial value for this slot.

Which is a long way of saying that your :initform example will result in each of your objects having its own separate empty set as the initial value of aSetSlot, which is probably what you intended.
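[Editor's note: the compile-time vs. creation-time distinction Cynbe draws between :initval and :initform has a direct analogue in Python, where a mutable class attribute is shared by all instances while a default factory runs once per object. A small illustrative sketch, not MUF; the class and slot names are invented.]

```python
from dataclasses import dataclass, field

class SharedSlot:
    # Evaluated ONCE, when the class body runs: every instance sees
    # the SAME set object -- the analogue of :initval.
    a_set = set()

@dataclass
class PerInstanceSlot:
    # The factory is called at each object creation: every instance
    # gets its OWN empty set -- the analogue of :initform.
    a_set: set = field(default_factory=set)

x, y = SharedSlot(), SharedSlot()
x.a_set.add("sword")              # visible through y as well

p, q = PerInstanceSlot(), PerInstanceSlot()
p.a_set.add("sword")              # q.a_set remains empty
```

As in the MUF case, the shared variant is only what you want if the slot is meant as a communication channel between instances; for ordinary per-object state you want the factory form.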
> If I have several groups of strings of varying sizes (the groups),
> would a vector of sets of strings be a decent way of storing the
> data? My intention is to show where items may be stored: a two-handed
> sword may go in <"primaryHand", "secondaryHand"> or in <"scabbard">
> or <"mantleRack">. From what I have read, this seems to be the way to
> go, but I seem to use the standard template library's vector class a
> lot, and I was not sure if this was fallout from that habit (when it
> comes down to it, you can use a vector for /anything/... It's like a
> Swiss army knife. </personalBias></rambling> :).

*giggle* In Muq, the vector is the closest thing there is to just allocating a chunk of ram and doing whatever you want with it, yupyup. Muq is like Perl in that it is intended to cater to a range of programming styles, and if you want to roll your own stuff from vectors, that's ok, and you'll probably get an insignificant win in time and space efficiency to boot.

Personally, I'd probably define a class or struct with :primaryHand and :secondaryHand fields and such and initialize them to Indexes or Hashes. Using Sets will be a bit more efficient, but I'd probably be too lazy to worry about that. :)

> If I want to upgrade from a previous version of Muq to the current
> one, and I am not concerned with what may reside in my current DBs
> etc., is it safe for me to simply delete the directory it resides in
> and install it anew?

I'm not quite following what potential problem you're thinking about, but that sounds fine to me. One thing I need to do is to kill the muq/pkg/000-C-version.muf.t file, since there should no longer be any need to keep the server and db in exact version sync. [hackhack... done in my version.]

> Also, concerning disk access, can Muq read/write files off/onto the
> disk? I ask because I am interested in being able to migrate code,
> world/area files, etc. without having to take along the entire DB.
> For example, if I have one test Muq server, and one production
> server, is there a way to move things tested on the beta server
> without having to cut and paste at a terminal/telnet window?

There are two or three good answers to that. :) (Geez, maybe we'll have to adopt the Perl slogan: There's More Than One Way To Do It!)

The FIRST answer is that Muq is intended to be a very secure sandbox system, so I've been at pains to absolutely minimize access to the host filesystem &tc from the server, especially access initiated by the server -- even if internal muq root gets cracked due to a softcode security hole, I'd like the host account to be safe. Defense in depth, you know, sort of the opposite of the Microsoft Outlook policy of running all untrusted code as root. :) So there's a deliberate absence of what you're most directly visualizing.

The SECOND answer is that Muq is intended to give you more than enough rope to hang yourself with. Muq is shipped secure because it is easier to add security holes to a secure system than to add security to an insecure system, not because I think every Muq site should run at top security setting -- which is to say, maximum inconvenience setting. :) If you want to read and write hostfiles from within the Muq db, the intended way for you to punch the needed security hole is:

* Write a little Perl script (or C program) which accepts text commands to read or write a file, and name it something like ~/muq/bin/my-security-hole :)

* Invoke this script from within the Muq server via ]rootPopenSocket

* Write a little MUF wrapper which makes the ]rootPopenSocket call transparent, so you have just readAHostfile and writeAHostfile functions which (say) accept and return strings. (Yes, we need to relax Muq's current limits on maximum string length...)

* If you want every Muq account to be able to read and write hostfiles, have the above wrappers use asMeDo{ } so that anyone can call them, and they still run with the required root privilege.

* For the ultimate in insecurity, you can then run the Muq server as root. :)

See the muf reference manual docs and examples on ]rootPopenSocket for further information on this approach.

The THIRD answer, and probably the one you really want in this case (but the above one is more general), is that I spent much of 1999 coding up what I hope is a nice clean general solution to just the problem you outline. Internally, the Muq db is not just one big undifferentiated glob the way it is in most servers, but instead consists of a series of logical partitions, with the boundaries between the partitions carefully hidden so they don't get in your way. (Switching to 64-bit operation internally gave us enough bits to implement this sort of functionality.) Normally there is basically one partition per user, and there should be one per major system library, but I think currently all the system libraries are just in root's partition.

These logical partitions can be individually installed, updated, backed up, deleted and so forth, to a first approximation without affecting other parts of the overall db. I say "to a first approximation" because if user A has a pointer to something in user B's partition and you then delete user B's partition, obviously -something- user-visible has to happen to that hanging pointer. (Unless we just forbid deletion in such cases, which admins wouldn't like.) Muq has various internal hacks to catch and handle such hanging pointers cleanly -- I think they turn to NIL in general, maybe they give an error when accessed. I thought for a long time about which of the two alternatives makes more sense, and I'm still not sure. Maybe it should be configurable.

Anyhow, read the "db functions" section of the MUF reference manual for further details. This is the intended way of handling the sort of problem you mention here: installing an updated library, moving a user from one server to another, installing a Christmas area for a week, moving code from a test server to a production server &tc.

This facility has never been used in production and isn't like anything else I've seen in the MU* world, so it may need some work before it does what people really want in practice: Only live users playing with it will tell. But I think it is basically The Right Idea. It might even make Muq .db files the ultimate interchange format, since they can in principle contain absolutely any data and associated code, and be read on any machine. :)

One caution: Don't do a rootRemoveDb followed immediately by a rootImportDb -- this will break all links into the db from other partitions. (Unless that is exactly what you want, of course.) Do rootReplaceDb instead, which will preserve all cross-partition links to objects which exist in both the old and new dbs.

To make all this work, you'll have to think a bit about cross-partition links and how to avoid breaking them needlessly. E.g., hacking the source code and then building a new db from scratch is likely to give all the objects new internal identities relative to the previous version of the db partition. It is probably better to compile just the required source changes into the existing db, preserving the identities of as many objects as possible.

In general, clean Muq / MUF programming practice is to have long-lived links, especially cross-partition ones, refer to Symbol objects (which implement global variables in a package) wherever reasonably possible, rather than directly to the objects in question. That is, try to have your long-lived cross-partition links refer in code to pkg:var.key rather than obj.key. This way, if obj gets recycled and a new replacement for it assigned to pkg:var, your code will keep working as expected. This allows people to assign a new object to the Symbol (change the value of the global package variable) and everything keeps working.

For example, the MUF compiler normally generates code which calls functions via their Symbols, rather than directly -- it internally generates

    'pkg::fnname.function call

rather than just

    #<compiledFunctionInstance> call

This way, when you compile new code for the function, which implicitly updates the pkg::fnname.function field, all existing calls to it automatically call the new version in future, so you don't have to run around recompiling every function which references it. The cost is one indirection on each function call, which I think is well worth it for the extra maintainability.

If you work this way on db partitions which you intend to import and update, cross-partition links into the updated module will keep working as long as the Symbol objects haven't been replaced, which is relatively easy to avoid. (I'm thinking about adding some hacks to the import/export machinery to handle links to symbols correctly even if they do -not- still have the same internal dbref, but that is just an idea in the back of my head at present. It would mean scanning packages being exported and annotating all hanging links to symbols with the names of the symbols, and then using this information in import and update commands.)

Cynbe
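[Editor's note: the call-via-Symbol indirection described above -- generating 'pkg::fnname.function call rather than a direct reference to a compiled function -- behaves like late-bound name lookup in other languages. A hypothetical Python sketch; the `pkg` dictionary stands in for a package of Symbols.]

```python
# A "package" mapping names (symbols) to function objects.
pkg = {"greet": lambda: "hello, v1"}

def make_direct_caller():
    fn = pkg["greet"]               # captures the function object itself,
    return lambda: fn()             # like "#<compiledFunctionInstance> call"

def make_symbol_caller():
    # Looks the name up at every call,
    # like "'pkg::greet.function call"
    return lambda: pkg["greet"]()

direct = make_direct_caller()
via_symbol = make_symbol_caller()

# "Recompile" the function: rebind the symbol to a new version.
pkg["greet"] = lambda: "hello, v2"

# The direct caller still runs the stale version; the symbol-indirected
# caller automatically picks up the new one.
```

The cost is one dictionary lookup per call, which mirrors the one-indirection-per-call cost the MUF compiler accepts for the same maintainability win.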
From: Eli S. <wic...@wi...> - 2000-09-17 06:31:25

I am still trying to get a grasp of how exactly certain features of MUF work, so please forgive me if these questions are simplistic. :)

Is the following redundant?

    :slot :aSetSlot
        :initform :: makeSet
        :initval  [ | ]set

If I have several groups of strings of varying sizes (the groups), would a vector of sets of strings be a decent way of storing the data? My intention is to show where items may be stored: a two-handed sword may go in <"primaryHand", "secondaryHand"> or in <"scabbard"> or <"mantleRack">. From what I have read, this seems to be the way to go, but I seem to use the standard template library's vector class a lot, and I was not sure if this was fallout from that habit (when it comes down to it, you can use a vector for /anything/... It's like a Swiss army knife. </personalBias></rambling> :).

If I want to upgrade from a previous version of Muq to the current one, and I am not concerned with what may reside in my current DBs etc., is it safe for me to simply delete the directory it resides in and install it anew?

Also, concerning disk access, can Muq read/write files off/onto the disk? I ask because I am interested in being able to migrate code, world/area files, etc. without having to take along the entire DB. For example, if I have one test Muq server and one production server, is there a way to move things tested on the beta server without having to cut and paste at a terminal/telnet window?

<//> Silence is golden        RUIN, v. To destroy.              <\\>
||   Eli                      Specifically, to destroy a maid's   ||
||   wic...@wi...             belief in the virtue of maids.      ||
<\\> www.wickedgrey.com       -- Ambrose Bierce                 <//>
From: Cynbe ru T. <cy...@mu...> - 2000-09-09 21:06:04
|
Jerry Hsu <jh...@cj...> writes: > Looking through the oldmud/oldmush example, it appears that commands take > the form of msh reads teh command, does some parsing and passes a request > on to the mud which processes the request. > > Presuming my understanding of that is correct, I'm unclear as to why this > approach was used. > > [...] Actually, now that I think about it, I may have been misinterpreting the above question, or at least concentrating too much on one issue to the exclusion of other interesting and relevant design issues in oldmud/oldmsh. There are a number of interesting division of responsibility issues involved, in terms of what the relative roles should be of: The user; The shell coder; The room builder/coder; The world builder/coder; The worldkit builder/coder. One of the design decisions in oldmud/oldmsh is to have per-user code which includes o A persistent daemon which is responsible for animating all of that user's possessions; o A shell job which is responsible for parsing and executing input from the user. This job goes away when the user logs out. The persistent per-user daemon shows off Muq's multitasking ability, and forms an interesting alternative to the more typical single-threaded mudserver design, and more importantly, it provides a firewall between world functionality and user functionality. Without this sort of firewall, allowing Joe Random User to hack code seriously can quickly compromise world stability, security or sanity, so there is a strong incentive for world admins to clamp down strongly and only allow very trusted users to do much in the way of hacking. This is a drag, in that it reduces the amount of interesting new hacking likely to happen. With a solid such firewall in place, users can be allowed to hack weird per-room code to their heart's content, and if they crash their room or area daemon *shrug* ok, so vistors have to go elsewhere for awhile. No big deal. 
A second design decision is to have almost all the "real work" done by/for a given user go through this per-user persistent daemon. The biggest advantage of this is that it avoids almost all locking and consistency sorts of issues: If objects are only modified by the player owning them, and only via a single animation daemon, then the scope is vastly reduced for the typical sorts of synchronization problems involving two separate jobs reading and writing the same information at about the same time. This is good because the sort of naive end-user who often winds up doing most of the creative building on mu* is quite unlikely to write correct locking. (I looked for a long time at transparent rollback based handling of such interactions, but concluded it would likely halve system performance, and that most worlds/admins would in the end rather have the performance.) The oldmud/oldmsh 'task' facility serves to implement this design philosophy in the face of arbitrary network lag by managing multiple outstanding requests, continuing processing in the interim. As long as responding to each request can be done in a fraction of a second, running them all out of the single persistent per-user daemon is a viable way to go. Operations which take significantly longer can always be handled as special cases by fork()ing off a separate thread, at which point one may have to start thinking about explicit locking. I think this reasonably well satisfies Alan Kay's dictum that Simple things should be simple; Complex things should be possible. The task mechanism keeps simple things simple; The primitives are available to do more complex things for those who have the need and skill to do so. It seems to me to make a lot of sense to separate off the user's shell from the user's persistent animation daemon: o The user's daemon needs to be very persisent and reliable; The user's shell will typically be stopped and started each time a telnet connection is made. 
o The user may want to run the shell on a different server from the one currently hosting most of his state -- logged in remote on a laptop or whatever. o The user might want to run several shells at once, maybe with a friend logged in or running multiple windows or running a batch script in one or whatever. o The user may want to experiment running different shells, without wanting to affect the stability of the persistent daemon. o If the user's persistent daemon -does- crash due to buggy per-user code, it is very nice if the user's shell is still running, so as to provide a springboard from which to fix and reboot the daemon! So the general architecture of a running oldmud/oldmsh system with two users sitting in a room chatting looks like:

+--------------+        +---------------+
| User A shell | <----> | User A daemon |
+--------------+        +---------------+
                               ^
                               |               +---------------------+
                               | <-----------> | Room owner's daemon |
                               v               +---------------------+
+--------------+        +---------------+
| User B shell | <----> | User B daemon |
+--------------+        +---------------+

The room owner's daemon is just one more persistent daemon, running exactly the same code as the User A and B daemons; it is drawn differently here because in this context we are interested in slightly different uses of it. When User A enters the room, the command to do so is parsed by the shell, then passed to the daemon for execution, since this involves changes in the general user state. The User A daemon asks the room daemon for permission to enter, and having received it, asks for a list of objects in the room and caches the list locally to speed up room operations. 
Being told that User B is in the room, the User A daemon queries the User B daemon to find out if this is true (the Room owner's daemon might be lying or out of date or buggy) and if the User B daemon agrees (which might involve a netlag of 30 seconds on a bad net day), the User A daemon then has the User A shell tell User A that User B is in the room. If User A and User B chat with each other, their messages will be passed back and forth directly between their two daemons, without involving the Room Owner's daemon at all. Naturally, each of the boxes in the above illustration might be running in a different server on a different machine in the most general case, although in general each shell/daemon pair is likely to be on the same machine and Muq server, and I don't think the current release yet supports separating them. But the three daemons could easily be on different machines in the current release without anybody noticing any difference. If machine or server boundaries -are- crossed, appropriate authentication and encryption will be done automatically: User daemons can trust Muq-supplied source attributions. Cynbe |
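The mutual-agreement presence check described above can be sketched in a few lines. This is a hypothetical illustration in plain Python (not Muq MUF, and the names are invented): a user counts as present only when both the room daemon and that user's own daemon agree, so a lying or stale room daemon cannot fake anyone's presence.

```python
# Hypothetical sketch: presence requires agreement between the room
# daemon's claim and the claimed user's own daemon.
room_claims = {"roomA": ["userB", "userC"]}           # what the room daemon says
daemon_claims = {"userB": "roomA", "userC": "roomZ"}  # what each user daemon says

def verified_occupants(room):
    """Users whose own daemon confirms the room daemon's claim."""
    return [user for user in room_claims.get(room, ())
            if daemon_claims.get(user) == room]

print(verified_occupants("roomA"))  # ['userB'] -- userC's daemon disagrees
```

In this sketch the room daemon claims userC is present, but userC's own daemon says it is in roomZ, so userC is not counted.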
From: Cynbe ru T. <cy...@mu...> - 2000-09-09 19:34:52
|
I just expanded the "Arity declarations" section of the MUF reference manual to include some discussion of the @ ! and ? arity operators: @c @node Arity declarations, Nested functions, Named functions, function definition syntax @subsection Arity declarations @cindex Arity declaration @findex @{ It is traditional Forth programming practice to include at the start of each function a comment giving the number of arguments it accepts and returns. (This sort of documentation is a good habit in any language.) Muq @sc{muf} continues this tradition, but extends it by making the declarations in a syntax understandable to the muf compiler, as do most modern languages other than Forth: @example Stack: : x { $ $ -> $ } * ; Stack: 2 3 x Stack: 6 @end example The arity declaration is enclosed in curly braces and contains one '$' for each input parameter, followed by an arrow, followed by one '$' for each value returned. Muq distinguishes single arguments from blocks of arguments: @example Stack: : |double { [] -> [] } |for i do{ i 2 * -> i } ; Stack: 5 seq[ Stack: [ 0 1 2 3 4 | |double Stack: [ 0 2 4 6 8 | @end example The arity declaration contains one @code{[]} for each block read, and one for each block returned. Block arguments must in each case precede nonblock arguments. The Muq @sc{muf} compiler is capable of computing the arity itself for simple @sc{muf} functions. This means that you can either leave the arity declaration off simple functions if you wish, or supply it and depend upon the @sc{muf} compiler to check your code against your declaration, catching simple errors. The current @sc{muf} compiler is unable to compute the correct arity for recursive functions, for functions which push variable numbers of things on the stack, and various other similar cases. For these sorts of functions, you should end your arity declaration with a '!' -- this tells the @sc{muf} compiler to accept your declaration without trying to verify it. @b{Mnemonic}: Yes! 
I really mean it! Here is an example: @example ( Demonstration of taking a block of strings and generating ) ( a block containing all of their characters in order, ) ( without generating any garbage (by Andrew Nelson): ) : stringchars { [] -> [] [] ! } |length 1 - -> len |for i do{ i stringChars[ } for i from 0 below len do{ ]|join } ; @end example In this case, the @sc{muf} compiler is not bright enough to reason that the last loop consumes one less block than the middle loop creates, and thus to deduce that in the end there will always be one more block on the stack than when it started. Functions which can never return (short of throwing an exception, or intervention in the debugger) should be declared with an arity result of '@': @example ( Endlessly spew a given value to standard out: ) : spam { $ -> @ } -> v do{ v , "\n" , } ; @end example @b{Mnemonic}: The '@' is a whirling little infinite loop that never returns. Finally, there is very occasionally a need for a function with no consistent arity. You should probably never use this declaration! The one clear need I have discovered for this to date is in writing mufshell loops which iteratively prompt for, read, compile and evaluate user-specified code. Each line of user code is compiled into a function, and in general that function may have just about any arity, so we cannot provide a sensible, consistent arity declaration in the loop. For these functions, we declare the arity return to be '?' -- @b{mnemonic}: Who knows? @example ( ===================================================================== ) ( - ]shell -- Simple muf shell. ) : ]shell { [] -> ? } ( Lotsa code... ) ; @end example |
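To see why a read-compile-eval shell loop genuinely has no consistent arity, consider this hypothetical Python sketch (not Muq MUF): each line the user types has a different net stack effect, so no single declaration can describe the loop that evaluates them.

```python
# Hypothetical toy stack evaluator: each input line has a different
# net stack effect, illustrating why the '?' arity exists for shell
# loops that evaluate arbitrary user code.
stack = []

def evaluate(line):
    """Integers push one value; '+' pops two and pushes their sum."""
    for token in line.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(int(token))

evaluate("1 2")       # net effect: +2 values
evaluate("+")         # net effect: -1 value
evaluate("10 20 30")  # net effect: +3 values
print(stack)  # [3, 10, 20, 30]
```

Each call to evaluate() is well defined at runtime, but the loop around it cannot be given one fixed stack-effect declaration.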
From: Cynbe ru T. <cy...@mu...> - 2000-09-08 23:46:09
|
Jerry Hsu <jh...@cj...> writes: > Looking through the oldmud/oldmush example, it appears that commands take > the form of msh reads the command, does some parsing and passes a request > on to the mud which processes the request. > > Presuming my understanding of that is correct, I'm unclear as to why this > approach was used. My intuitive design for a shell would be something > along the lines of pass most commands straight through to the server and > let the server handle parsing. My vision for the shell would be to > process the return and maybe do some formatting on it. I'd also envision > the shell providing some other support like possible filtering and command > macroing. > > It seems to me that following the oldmud/oldmsh example would require new > commands to be implemented both on the mud and in every shell that is > written. I guess this would facilitate something like having both a > graphical shell and a text shell, the graphical shell wouldn't present a > text UI to the user. Or maybe having foreign language shells. But those > don't seem like common cases. > > So overall, can anyone (Cynbe) give me some reasoning behind designing > things this way and the advantages it would present? > > _______________________________________________ > Muq-coder mailing list > Muq...@li... > http://lists.sourceforge.net/mailman/listinfo/muq-coder I suppose it depends where you're coming from, and where you see the system as headed. I'm sure what you're envisioning makes sense relative to what you're interested in doing; I'm inclined to think my design approach makes sense in terms of my vision. So here's a sketch of my notions on this subject. Keep in mind that your design problems and solutions may well vary for very good reasons, and that I'm not presenting oldmud/oldmsh as -the- way everything should be done on Muq, just as one direction in which people can get started. 
Personally, I remember the Bad Old Days when operating systems like TOPS-10 essentially did do all user shell parsing &tc in the kernel. Then I ran into Unix, where the user shell is just an unprivileged process, and Joe Random User is free to write and use his own shell without any impact on system security or stability -- the critical system services are religiously separated out from the random C library code, commandline parsing code and such. To me, this was just a Great Leap Forward! Now we can have our choice of sh bash ksh rsh csh tcsh wsh &tc &tc &tc -- support for diversity instead of imposition of The Army Way From On High. Not to mention people can layer on graphical shells of various sorts, again without impacting system security or sanity, since none of the shell code need be trusted by the system. Without necessarily taking the position that what current online text or graphics worlds do is wrong or inadequate, I'm very interested in using Muq to explore possibilities not well served by current servers. One particular frontier which I think is interesting to explore is scalable, decentralized worlds. Many mu* have in the past scaled to saturation of the central server and then either stalled or died. One can of course take this as an indication that allowing such unbridled growth is Just Plain Stupid and admins should impose more sensible limits sooner. There Aint No Such Thing As A Free Lunch. No exponential growth phase can last very long in an effectively finite universe. &tc. But one can also take the position that such experiences demonstrate that there is an existing demand for larger-scale online communities, and take it as a challenge to provide suitable support for them. 
Muq oldmud is in part an experiment in laying the groundwork for an online community which can scale fairly trivially to dozens or hundreds of servers and thousands to tens of thousands of users, with reasonable prospects of scaling up in upward compatible fashion to millions if the demand proves to be there. So it supports an automatically updated local db of known servers and users, transparent migration of servers from one IP address to another, transparent migration of users from one Muq server to another, connection of any exit to any room regardless of server boundaries, paging and posing between users without regard to machine boundaries, automatic transparent authentication of all cross-server user<->user traffic using Diffie-Hellman public key encryption, automatic transparent encryption of all cross-server user<->user traffic using Twofish encryption, and various other scalability hacks like that which aren't common in contemporary mu* systems. In the context of a shared world with potentially millions of users and server machines, there seem to me to be good reasons for maintaining a pretty clean firewall between the shell, which contains code local to the user and trusted by the user, and the world server, which contains code maybe owned and operated by "il4q2" physically located in Outer Neferia. Some specific examples: o I don't think il4q2 should be able to access or modify my local information as a user, to successfully fake messages so that they appear to come from me, to send me messages which appear to come from someone I trust (but do not actually do so), or to eavesdrop on private whispers I send to someone else in a room hosted by il4q2. 
oldmud accomplishes this by performing most of the relevant processing locally in the user's shell: il4q2 is allowed to list who is currently present in the room he hosts (which is his prerogative as owner and implementor of the room), a list which we then check with the people listed -- an object or user is counted as present in a room only if both room and object/user agree on that fact -- but all pages/poses/whispers to those objects/users are sent direct and do not in general even go through the room server -- if it crashes, people chatting in the room might not notice for some time. Similarly, the oldmud message display format is carefully structured to prevent malicious spoofing of trusted agents or people -- messages from the room server are always syntactically distinguishable from those by people within the room, and those of people in the room from each other. o I don't think all people passing through a given room should have to switch to using the favorite shell syntax of the user implementing that room. When server boundaries are largely invisible, this could create all sorts of major and minor problems. o I think shell design will evolve much faster if it is decoupled from world design issues, and in particular if you don't need a wizbit just to fiddle with your shell. Imagine if you had to go to the campus system administration people every time you wanted to switch to using a different shell or mail client. :-/ Now, someone may well -want- to implement a worldserver on Muq that doesn't scale past one host machine, which is highly vulnerable to spoofing, where all control is in the hands of the All Powerful, All Knowing, All Benevolent Wizards, where processing proceeds for only one user at a time, and so forth. That is fine! Muq has the tools; in many ways it is a much simpler implementation problem than what oldmud does, and it is a popular, proven model with a demonstrated audience. 
There's nothing wrong with that, and in fact I hope people write a whole variety of servers on top of Muq, including some on that model. That's why I bent over backwards to make the Muq server policy-free in the frequently excoriated X Window System tradition, and why I named oldmud "oldmud" as a hint that I expect -- hope! -- to see a variety of other mudservers written for it in due course. But my personal motivation comes in large part from my delight in seeing what novel things people create in shared online worlds, so oldmud is heavily oriented towards: o Implementing things which haven't been done before; o Pointing the way towards interesting new possibilities; o Maximizing the potential for others to create new things. Letting Joe Schmoe Average User hack up a killer new shell without having to go humbly begging to the wizards for special help is one aspect of that. Personally, if I were using oldmud much, the first thing I'd do would be to replace oldmsh with a custom shell of my own. I think traditional MU* syntax sucks. E.g., on typical social mu*, the most common commands are say and pose, so they should have the most convenient syntax, which for most people on most keyboards would be something like taking any line starting with an alphabetic as a 'say' and any line starting with a blank as a pose. (I'd use a leading '.' as an escape for the rest of the commandset.) But converting people to ergonomic mudshell designs is not a battle which it makes sense to fight at the same time as trying to establish Muq! There are right and wrong times to open a second front. So I implemented oldmsh as a painfully plain, middle-of-the-road syntax. Does that clarify the motivation behind oldmud/oldmsh's design and organization? Again, I don't want to get into a debate over whether the design goals and decisions for oldmud/oldmsh are The One True Religion. I'm just trying to clarify why they are the way they are, whether or not that is right. 
BTW, oldmud/oldmsh are among other things experiments in using Flavors-style multiple-inheritance heavily. I try to try something new on every project, and I hadn't tried using multiple inheritance heavily before. If I were to do it again now, I'd retain the basic structure, but base it instead on Dave Cheriton's warthog/trampoline style, which has the virtue of elevating interfaces to first-class objects. (That's a nice approach which would be nicer if given appropriate support by the compiler.) It would probably also be nice to support calling functions located on other hosts (with appropriate safeguards!) so that the oldmud network protocol can be more transparently extensible. Right now it has a painfully hardwired feel. I attribute that partly to Muq being young and providing less support than one might wish for distributed programming, and mostly to the fact that constructing really seriously scalable, decentralized shared worlds is a very new art, involving novel design trade-offs between lots of nontrivial issues. When the server software runs on one machine, with the wizards completely trusted and the users basically not allowed to do anything to speak of but run around and admire, the trust issues are pretty minimal. When you have lots of separately administered servers and lots of users free to move between servers and in general limited trust between any two admins or users, -plus- you want to maximize the creative space available to everyone involved, then things get ... well, frankly, more interesting! Fascinating design trade-offs. Which of course open lots of space for software designers to disagree as to what the Right Answer is. :) I hope Muq allows lots of designers to implement lots of variations, and for a larger community to form which decides which solution it likes best. Cynbe |
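The "interfaces as first-class objects" idea mentioned above can be sketched roughly. This is a hypothetical Python illustration (not Muq MUF, and not a faithful rendering of Cheriton's actual warthog/trampoline design): the interface is itself an object carrying a fixed set of named operations, so different implementations can be swapped behind it.

```python
# Hypothetical sketch: an interface as a first-class object that
# forwards a fixed set of operations to whatever currently implements
# them.  Swapping implementations does not change callers.
class Interface:
    def __init__(self, operations):
        self.operations = dict(operations)  # name -> callable

    def call(self, name, *args):
        if name not in self.operations:
            raise KeyError(f"interface does not export {name!r}")
        return self.operations[name](*args)

# Two implementations exporting the same one-operation interface:
tell_loudly = Interface({"tell": lambda msg: msg.upper() + "!"})
tell_quietly = Interface({"tell": lambda msg: msg.lower()})

for iface in (tell_loudly, tell_quietly):
    print(iface.call("tell", "Hello"))  # HELLO!  then  hello
```

Because the interface object can be inspected, stored, and passed around like any other value, callers depend on the interface rather than on any one implementation's class hierarchy.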
From: Jerry H. <jh...@cj...> - 2000-09-08 21:02:54
|
Looking through the oldmud/oldmush example, it appears that commands take the form of msh reads the command, does some parsing and passes a request on to the mud which processes the request. Presuming my understanding of that is correct, I'm unclear as to why this approach was used. My intuitive design for a shell would be something along the lines of pass most commands straight through to the server and let the server handle parsing. My vision for the shell would be to process the return and maybe do some formatting on it. I'd also envision the shell providing some other support like possible filtering and command macroing. It seems to me that following the oldmud/oldmsh example would require new commands to be implemented both on the mud and in every shell that is written. I guess this would facilitate something like having both a graphical shell and a text shell, the graphical shell wouldn't present a text UI to the user. Or maybe having foreign language shells. But those don't seem like common cases. So overall, can anyone (Cynbe) give me some reasoning behind designing things this way and the advantages it would present? |
From: Cynbe ru T. <cy...@mu...> - 2000-09-08 16:38:01
|
Artiste Extrordinaire! <ar...@mu...> writes: > I have a snippet that messes up the arity evaluator. It's something I > whipped up for borogove to show that it's possible to take a group of > strings in a block and convert them to one block of joined characters. He > wanted a way to do it without generating garbage. :) > > : stringchars { [] -> [] } > |length 1 - -> len > |for i do{ i stringChars[ } > for i from 0 below len do{ ]|join } > ; > > I think this should be { [] -> [] [] } for the arity, but that causes an > arity error that refuses to define the function. Changing it to what is > listed above works. > > Example code: > > root: [ "abc" "def" | stringchars > root: [ "abc" "def" | [ 'a' 'b' 'c' 'd' 'e' 'f' | > > -Andy You can force any arity you want using '!' : stringchars { [] -> [] [] ! } |length 1 - -> len |for i do{ i stringChars[ } for i from 0 below len do{ ]|join } ; You want to be a little careful when doing this, since if you make a mistake, or if you're lying to the compiler, all other functions calling that function will be using an incorrect arity. But there are times when '!' is the right solution, and this is one of them. Correctly computing the arity of this function is far beyond the capabilities of the current logic, which simply does a simple-minded constant-time symbolic execution of the code. (It would be nice if it at least knew that it doesn't know...) The current arity checker has no way of knowing that the second line in the function will push as many blocks as the third line eats (but one) -- this would involve much more sophisticated theorem proving techniques (probably much slower than the current arity checker, compiler and assembler combined) or else a very special-case hack. That's a pretty nice solution to the problem as stated, btw. 
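The "simple-minded constant-time symbolic execution" the arity checker does can be sketched in a few lines. This is a hypothetical Python illustration (not the actual Muq implementation): each opcode is assigned a fixed net stack effect, which is exactly why a loop that pushes a runtime-dependent number of blocks (like the |for in stringchars) cannot be summarized, and why the '!' override exists.

```python
# Hypothetical sketch of a fixed-effect arity checker: every opcode
# gets one constant net stack-depth change, so data-dependent loops
# cannot be modeled and must be declared with '!'.
STACK_EFFECT = {"push": +1, "pop": -1, "dup": +1, "add": -1}

def infer_arity(opcodes, n_inputs):
    """Return the net number of results, or None if the code would
    consume more values than are available (an arity error)."""
    depth = n_inputs
    for op in opcodes:
        depth += STACK_EFFECT[op]
        if depth < 0:
            return None
    return depth

# { $ $ -> $ }: two inputs, one 'add' leaves one result.
print(infer_arity(["add"], n_inputs=2))          # 1
# Underflow is caught:
print(infer_arity(["pop", "pop", "pop"], 2))     # None
```

A loop body executed a runtime-dependent number of times has no single constant effect to assign, so a checker of this shape must either give up or be told the answer.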
Since Muq's stacks are currently limited to fairly small sizes, to prevent one infinite recursion error from thrashing the whole system to death, you do run a risk of running out of stack space on relatively small problems. Having people start worrying about not generating garbage and distorting their programming style to avoid this is of course one of the disadvantages of not having a really first-rate garbage collector implemented yet *wrygrin*. * * * * * By the way, GeomView is now on SourceForge and out in a Linux edition. GeomView is a very nice little 3D graphics package driven by a tiny lisp interpreter. I've liked it for years. It was developed at an American university (U Maryland? I forget) to support mathematicians and such, and consequently has cool hacks like support for non-Euclidean spaces. The developers originally had even more ambitious plans, but their funding source and available time didn't allow for it. They've since been defunded and moved to geomview.sourceforge.net and become primarily a Linux open source project. I haven't looked at the new release, but I'd be surprised if it isn't worth a look by anyone interested in smart 3D graphics servers for Linux. :) -- Cynbe |
From: Scott A C. <cr...@qw...> - 2000-08-07 04:22:51
|
Oops, sent to the wrong list, if another comes up, sorry.. Scott ---------- Forwarded message ---------- Date: Sun, 6 Aug 2000 18:05:49 -0400 (EDT) From: Scott A Crosby <cr...@qw...> To: muq...@li... Cc: muq...@li... Subject: A full diff for my codebase. (fwd) Here's my patch from the stock 0.0.0 codebase, circa 2 months ago. It includes patches for: Parallel building and a lot of makefile cleanup. We're no longer lying to make anymore. ('make check' isn't parallelized.) Use libGL instead of libMesaGL Minor changes in the codebase. Get rid of brokenness of the attempted autoconfiguration in Muq-config.sh The configuration path is hardcoded in there, but need only be changed there and in MUQ_DIR. Advantages: A simple way to build MUQ anywhere. I'll write up a quick snippet for building muq in any location. Building the docs, and building an initial database, assuming that you're using my patches. Don't apply this patch blindly unless you look at what it does. Cynbe.. Can you dig out some of the other patches I sent you? Or shall I just rewrite them. There's close to a half-dozen more. Finally, there should be a record somewhere that MUQ isn't freeing sockets. Scott -- No DVD movie will ever enter the public domain, nor will any CD. The last CD and the last DVD will have moldered away decades before they leave copyright. This is not encouraging the creation of knowledge in the public domain. ---------- Forwarded message ---------- Date: Wed, 14 Jun 2000 16:24:38 -0400 (EDT) From: Scott A Crosby <cr...@qw...> To: Cynbe ru Taren <cy...@mu...> Subject: A full diff for my codebase. BTW, as a random patch. I'm submitting the full diff between my codebase and the stock 0.0.0. Most of the changes are either automatically-built stuff, or the 3 patches I already sent. Scott |
From: Cynbe ru T. <cy...@mu...> - 2000-08-05 03:43:07
|
Scott A Crosby <cr...@qw...> writes: > On 3 Aug 2000, Cynbe ru Taren wrote: > > > > > Scott A Crosby <cr...@qw...> writes: > > > > Tail recursion, in the sense you intend (looping rather than pushing > > a stack frame) isn't supported yet, alas. This needs to be > > implemented as a tweak in muq/c/asm.t, and when I wrote that I was > > (and am) very enthusiastic about the Scheme concept of mandating > > "proper" implementation of tail recursion, but the tweak turned out > > to be nontrivial, and you and I are probably the only Muq folks so > > far who would notice the difference, so I haven't done it yet. > > > > It comes up more than you'd think, especially in fundamental algorithms. > For example, I have a tail-recursive implementation of splay trees written > in LISP. If someone wants to port it? :) > > It's also used in a lot of sophisticated looping constructs. Tail > recursion is a generalization of looping. Yupyup -- I cut my teeth on this stuff about a quarter century ago on Guy L Steele's "Rabbit" compiler for Scheme, which gets a lot of mileage from macro-expanding all loop constructs into tail recursion and mandating efficient implementation of tail recursion. Still not a lot of people doing that in Muq yet though. :) > > (2) Infer the arity of a recursive function. -I-'m too stupid to > > see a simple algorithm for this (I'd be happy if anyone showed > > me an algorithm!), but this shouldn't matter in cases such as > > the above where the arity is given explicitly. > > > > Solving arity in general is equivalent to solving the halting problem. > Determining the arity of: : solvehalt 'testroutine call{ -> } pop ; > > Has an arity of { $ -> }, or { -> } based on whether 'testroutine halts > or throws an exception... Thus, there is no simple algorithm. Technically, { $ -> } vs { -> @ } -- Muq internally distinguishes between not returning and returning nothing. > I do not think that there's a good conservative approximation. 
Typing it as { $ -> } works as a conservative approximation; nothing in the code generation will break if in fact there is no return. > Having it > guess the arity and seeing if things work would breed programmer laziness. > If multiple arities are possible, it might choose the wrong one. Even > if it chose the right one, it might not next time the inference-algorithm > changes. > > Maybe, how's this sound: You have a special arity slot in each function > object, and allow 'declarations' of functions; just specifying their > arity. One might think you'd been reading the source code. :) There is in fact such a slot:

/* Our refinement of Obj_Header_Rec: */
struct Fun_Header_Rec {
    Obj_A_Header o;
    Vm_Obj compiler;
    Vm_Obj source;
    Vm_Obj doc;
    Vm_Obj executable;
    Vm_Obj arity;
    Vm_Obj file_name;    /* Source file function was compiled from.  */
    Vm_Obj fn_line;      /* Line number in above on which fn started. */
    Vm_Obj line_numbers; /* Vector of line numbers for each bytecode. */
                         /* 'line_numbers' is relative to line_number: */
                         /* Bytecodes on first line in fn are line 0.  */
    Vm_Obj local_variable_names; /* Vector of names of local vars in fn. */
    Vm_Obj specialized_parameters;
    Vm_Obj default_methods; /* NIL or a mosKey for methods which      */
                            /* have t as specializer for first arg.   */
    Vm_Obj reserved_slot[ FUN_RESERVED_SLOTS ];
};

And writing : myFun { $ -> @ ! } ; or whatever works fine as such a declaration -- the '!' forces the MUF compiler to accept the type as given, without attempting to verify it. A compiledFunction is generated as a side effect, but that's no big deal. > As soon as you parse the arity declaration on a function, you stuff > it in the slot. On assembling all function calls, you check that slot; it > is an error to compile a call to a function for which its arity-slot is > empty unless the call is done by 'foo call{ $ $ -> $} where the arity is > declared. 
Well, if we're checking the arity and find out later it is in fact wrong, things might not be quite that simple. But that's pretty much the idea I've had in mind, yup. > Singly recursive functions will work transparently. Yup. > Mutually recursive > will require predeclaring them. I sometimes just predeclare the first, in the few places in the existing MUF codebase where this is a problem. > And any calls to undefined functions will > be flagged. Typically true already... > Scott > > PS: Am I on the muq-coder list? If you get two copies of this, you must be. :) > > -- > No DVD movie will ever enter the public domain, nor will any CD. The last CD > and the last DVD will have moldered away decades before they leave copyright. > This is not encouraging the creation of knowledge in the public domain. |
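The arity-slot scheme discussed in this exchange can be sketched in miniature. This is a hypothetical Python illustration (not the actual Muq implementation): each function gets a slot recording its declared arity, call sites are checked against the slot, and mutually recursive functions work once at least one of them is predeclared.

```python
# Hypothetical sketch of arity slots: declarations fill the slot,
# call sites check it, and calling a function whose slot is empty
# is an error unless the caller supplies the arity itself.
arity_slots = {}  # function name -> (n_inputs, n_results)

def declare(name, n_in, n_out):
    arity_slots[name] = (n_in, n_out)

def check_call(name, n_in, n_out):
    """Raise if calling 'name' contradicts (or lacks) its arity slot."""
    if name not in arity_slots:
        raise NameError(f"{name}: arity slot empty -- predeclare it")
    if arity_slots[name] != (n_in, n_out):
        raise TypeError(f"{name}: arity mismatch")

declare("fact1", 2, 1)   # predeclaring fact1 lets fact2 call it...
declare("fact2", 2, 1)   # ...and vice versa: mutual recursion works
check_call("fact1", 2, 1)
check_call("fact2", 2, 1)
try:
    check_call("undeclared", 0, 0)
except NameError as e:
    print(e)  # undeclared: arity slot empty -- predeclare it
```

As the thread notes, the remaining wrinkle is what to do if a slot filled by an unverified declaration later turns out to be wrong.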
From: Scott A C. <cr...@qw...> - 2000-08-03 21:44:16
|
On 3 Aug 2000, Cynbe ru Taren wrote: > > Scott A Crosby <cr...@qw...> writes: > > Tail recursion, in the sense you intend (looping rather than pushing > a stack frame) isn't supported yet, alas. This needs to be > implemented as a tweak in muq/c/asm.t, and when I wrote that I was > (and am) very enthusiastic about the Scheme concept of mandating > "proper" implementation of tail recursion, but the tweak turned out > to be nontrivial, and you and I are probably the only Muq folks so > far who would notice the difference, so I haven't done it yet. > It comes up more than you'd think, especially in fundamental algorithms. For example, I have a tail-recursive implementation of splay trees written in LISP. If someone wants to port it? :) It's also used in a lot of sophisticated looping constructs. Tail recursion is a generalization of looping. > (2) Infer the arity of a recursive function. -I-'m too stupid to > see a simple algorithm for this (I'd be happy if anyone showed > me an algorithm!), but this shouldn't matter in cases such as > the above where the arity is given explicitly. > Solving arity in general is equivalent to solving the halting problem. Determining the arity of: : solvehalt 'testroutine call{ -> } pop ; Has an arity of { $ -> }, or { -> } based on whether 'testroutine halts or throws an exception... Thus, there is no simple algorithm. I do not think that there's a good conservative approximation. Having it guess the arity and seeing if things work would breed programmer laziness. If multiple arities are possible, it might choose the wrong one. Even if it chose the right one, it might not next time the inference-algorithm changes. Maybe, how's this sound: You have a special arity slot in each function object, and allow 'declarations' of functions; just specifying their arity. As soon as you parse the arity declaration on a function, you stuff it in the slot. 
On assembling all function calls, you check that slot; it is an error to compile a call to a function for which its arity-slot is empty unless the call is done by 'foo call{ $ $ -> $} where the arity is declared. Singly recursive functions will work transparently. Mutually recursive will require predeclaring them. And any calls to undefined functions will be flagged. Scott PS: Am I on the muq-coder list? -- No DVD movie will ever enter the public domain, nor will any CD. The last CD and the last DVD will have moldered away decades before they leave copyright. This is not encouraging the creation of knowledge in the public domain. |
From: Cynbe ru T. <cy...@mu...> - 2000-08-03 16:19:16
|
Scott A Crosby <cr...@qw...> writes: > A couple of random programming questions: > > First, how do we get recursion? Tail recursion? Mutual tail recursion? > > (A couple of examples would be nice.) Tail recursion, in the sense you intend (looping rather than pushing a stack frame) isn't supported yet, alas. This needs to be implemented as a tweak in muq/c/asm.t, and when I wrote that I was (and am) very enthusiastic about the Scheme concept of mandating "proper" implementation of tail recursion, but the tweak turned out to be nontrivial, and you and I are probably the only Muq folks so far who would notice the difference, so I haven't done it yet. For vanilla recursion, the simplest MUF hack is : fact { $ $ -> $ } -> product -> i i 2 < if product return fi product i * -> product -- i i product 'fact call{ $ $ -> $ } ; : factorial 1 fact ; root: 1 factorial root: 1 2 factorial root: 1 2 6 4 factorial root: 1 2 6 24 The crucial (and yes, bug-ugly) hack is the 'symbol call{ ... } in i product 'fact call{ $ $ -> $ } which works around the fact that at present the MUF compiler is too dumb to (1) Know how to compile a call to a function it hasn't compiled yet. This is easily fixable, because all calls are now indirected through symbols (in the Lisp sense of 'symbol') to ensure that recompiling a function has the intuitively expected effect on the existing codebase. (2) Infer the arity of a recursive function. -I-'m too stupid to see a simple algorithm for this (I'd be happy if anyone showed me an algorithm!), but this shouldn't matter in cases such as the above where the arity is given explicitly. This work-around just tells it to call the function (compiledFunction) hanging off the given symbol at runtime, and specifies what arity the compiler should presume that function will have. The MUF compiler doesn't notice or care that this happens to be the same as the function it is compiling at the moment. 
It is not pretty, but it gets us through the night.

Mutual recursion can be done the same way:

    : fact1 { $ $ -> $ }
        -> product
        -> i
        i 2 < if product return fi
        product i * -> product
        -- i
        i product 'fact2 call{ $ $ -> $ }
    ;
    : fact2 { $ $ -> $ }
        -> product
        -> i
        i 2 < if product return fi
        product i * -> product
        -- i
        i product 'fact1 call{ $ $ -> $ }
    ;
    : factorial2 1 fact1 ;

    root: for i from 1 upto 30 do{ i factorial2 , "\n" , }
    1
    2
    6
    24
    120
    720
    5040
    40320
    362880
    3628800
    39916800
    479001600
    6227020800
    87178291200
    1307674368000
    20922789888000
    355687428096000
    6402373705728000
    121645100408832000
    2432902008176640000
    51090942171709440000
    1124000727777607680000
    25852016738884976640000
    620448401733239439360000
    15511210043330985984000000
    403291461126605635584000000
    10888869450418352160768000000
    304888344611713860501504000000
    8841761993739701954543616000000
    265252859812191058636308480000000
    root:

 -- Cynbe
|
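The "call through a symbol" trick in the two messages above can be sketched in Python (not MUF; all names here are illustrative): every call is looked up by name at call time, so a function can invoke itself, or a not-yet-defined mutually recursive partner, and redefining a name takes effect for all existing callers.

```python
# Sketch (Python, not MUF) of late-bound calls through a symbol table,
# in the spirit of MUF's  'sym call{ ... }  workaround.
symbols = {}

def call(name, *args):
    """Look the function up at call time, not at compile time."""
    return symbols[name](*args)

def fact(i, product):
    if i < 2:
        return product
    return call('fact', i - 1, product * i)   # recursion via the table

symbols['fact'] = fact

def factorial(n):
    return call('fact', n, 1)

# Mutual recursion works identically: fact1 and fact2 reach each
# other through the table, so neither needs the other defined first.
def fact1(i, product):
    if i < 2:
        return product
    return call('fact2', i - 1, product * i)

def fact2(i, product):
    if i < 2:
        return product
    return call('fact1', i - 1, product * i)

symbols['fact1'] = fact1
symbols['fact2'] = fact2

print(factorial(4))           # 24
print(call('fact1', 30, 1))   # 265252859812191058636308480000000
```

As in MUF, the indirection also means a recompiled (reassigned) `fact` is immediately seen by every caller, since nothing holds a direct reference to the old function.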
From: Artiste E. <ar...@mu...> - 2000-08-03 00:08:19
|
Testing to see if names show up in 'from' fields now.  I had one of
the protection features on that hides the 'from' address, but
obviously that works less well if there's no way to identify the
sender (i.e., via PGP).

-Andy
|
From: <muq...@li...> - 2000-08-02 03:39:56
|
This is very useful feedback.  (I'm going to be vacationing in
Saas-Fee, Switzerland much of the last half of this month, and am
expecting to get back into actively working on Muq again at that
point, courtesy of my antique laptop. :)

After seven and a half years slaving over a hot terminal, it is hard
for me to see Muq as newcomers do, and to deduce what would be most
useful.

I was not aware that the beginner MUF tutorial had broken examples,
but your deduction about order of creation is correct. :)  I've been
trying to dream up a simple way of making the reference manual and
tutorial examples part of the regression suite, so that they get
tested regularly.  At the moment, maintenance of them is purely
manual, and as a result bitrot does tend to get to them. :(

We definitely need for someone other than me to develop and maintain
something like one or more "muqlib"s -- maintaining the C server and
docs is quite enough to keep me spread thin, without trying to take
on major softcode projects as well.  "oldmud", as the name implies,
is intended primarily as a springboard to get other people started on
such projects.  Unfortunately, as a springboard it seems to fall
considerably short: About half a dozen people have taken a shot at
such a project, with so far no shippable results.

Life is Good! :)

 -- Cynbe

> My personal votes are for:
>
> This
> > * Just generally working on the documentation and laying off
> > coding for awhile.
>
> then this,
> > * In-db access to a Gtk client to enable the above sorts of
> > stuff.
>
> then this,
> > * Heavy documentation on how to build a fairly conventional
> > text world-server on Muq, just because that's what most Muq
> > folks to date seem most interested in doing, but foiled by
> > the high first step on the current Muq learning curve.
>
> Mostly because the first two will help in people wanting to do the
> third.
> Though over the past couple of days, I think I've gotten a decent
> enough feel that I can mostly rely on the muf reference guide.
>
> But stuff like the examples for objects in the beginner muf guide
> appears wrong now.  So that was somewhat frustrating trying that out
> early on.  I'm guessing MUF for hackers was written much more
> recently as the examples in there work.
>
> I haven't tried the muc shell too much but if there's stuff you want
> to iron out with that compiler, I'd put that as a high priority too
> as that'd be likely to help bring in lpc types.
>
> My personal goal is to develop a muck-like "muqlib", to borrow the
> lpmud term, and then expand and branch from there.  I've looked
> around oldmud and I'm getting an idea of how to approach it.
>
> _______________________________________________
> Muq-coder mailing list
> Muq...@li...
> http://lists.sourceforge.net/mailman/listinfo/muq-coder
|
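The wish above, making the manual and tutorial examples part of the regression suite so bitrot gets caught automatically, is the problem Python's doctest module solves for Python code. A minimal sketch of the approach, with a hypothetical "muf> " prompt format and a stand-in RPN evaluator instead of a real MUF compiler:

```python
# Sketch of automatically testing documentation examples, doctest
# style: extract prompt/expected-output pairs from the docs and check
# them against a live evaluator.  The "muf> " prompt convention and
# the tiny evaluator below are invented stand-ins, not real Muq.
import re

DOC = """
Adding numbers:

    muf> 2 3 +
    5

    muf> 10 4 -
    6
"""

def evaluate(line):
    """Stand-in evaluator: a tiny RPN calculator instead of real MUF."""
    stack = []
    for tok in line.split():
        if tok == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == '-':
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        else:
            stack.append(int(tok))
    return str(stack[-1])

def run_doc_examples(doc):
    """Return (passed, failed) counts for all prompt/output pairs."""
    pairs = re.findall(r"muf> (.+)\n\s*(\S+)", doc)
    passed = failed = 0
    for source, expected in pairs:
        if evaluate(source) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_doc_examples(DOC))   # (2, 0)
```

Run over the real manuals with the real compiler behind `evaluate`, a harness like this would flag the broken beginner-tutorial examples the moment they rotted.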
From: <muq...@li...> - 2000-08-02 01:21:35
|
My personal votes are for:

This
> * Just generally working on the documentation and laying off
> coding for awhile.

then this,
> * In-db access to a Gtk client to enable the above sorts of
> stuff.

then this,
> * Heavy documentation on how to build a fairly conventional
> text world-server on Muq, just because that's what most Muq
> folks to date seem most interested in doing, but foiled by
> the high first step on the current Muq learning curve.

Mostly because the first two will help in people wanting to do the
third.

Though over the past couple of days, I think I've gotten a decent
enough feel that I can mostly rely on the muf reference guide.

But stuff like the examples for objects in the beginner muf guide
appears wrong now.  So that was somewhat frustrating trying that out
early on.  I'm guessing MUF for hackers was written much more
recently as the examples in there work.

I haven't tried the muc shell too much but if there's stuff you want
to iron out with that compiler, I'd put that as a high priority too
as that'd be likely to help bring in lpc types.

My personal goal is to develop a muck-like "muqlib", to borrow the
lpmud term, and then expand and branch from there.  I've looked
around oldmud and I'm getting an idea of how to approach it.
|
From: <muq...@li...> - 2000-08-01 22:49:58
|
jh...@cj... writes and cy...@mu... replies:

> On 1 Aug 2000 cynbe wrote in response to
> > ...
> >
> > > Option 2 isn't so horrible when working locally but working
> > > locally isn't always an option.
> >
> > Why do you restrict this to "working locally"?  Seems like a term
> > program would have much the same results whether working locally
> > or remotely -- am I missing your point?
>
> The factor i'm considering is the lag involved.

(Are there still people without gigabit ethernet to the door? :)

Ah, ok.

> However, now that I think about it, muq dev is different with the
> package system so emacs is probably the best way to go and just
> redefine specific functions that need it.

That is the typical mode of operation in comparable systems like Lisp
and Smalltalk, and more or less what I'd envisioned doing in practice
when doing production softcoding on a live Muq server.  (The current
Muq compiler and debugger machinery actually tries to keep track of
both line number relative to start of source file and line number
relative to start of function, since I'm not sure how all this will
actually shake out in practice...)

> Since specific functions should rarely be huge with any sort of
> reasonable programming style, lag concerns would be minimized.

Yes, this is a good way to go at present, possibly with the support
of an appropriate emacs mode.  We could write some sort of
"inferior-muq-mode" that left you editing the source file relatively
normally but fired the current function definition over the wire to
the currently open Muq server when you hit (say) ^J, if that looked
like a significant win...

Alternately, there is currently a very primitive line editor written
in-db (200-O-edit.muf.t) and also some simple (largely untested)
functions for VT100/ANSI cursor control (280-O-vt100.muf.t): Working
from these one could hack together in an evening to a weekend a
simple source-code editor that would let one edit source code in-db
if one wanted to.
This would subject one to round-trip network lag on a per-character
basis, of course.

(I actually have sick fantasies of hacking together Muq support for
the elisp bytecode set, so as to be able to run emacs directly inside
Muq.  There are people who think a multithreaded, distributed,
multi-user persistent emacs would be interesting...)

One basic policy decision one has to make is whether the canonical
copy of your MUF source code is what lives on your host file system,
or what lives in the Muq db.  Both are plausible choices, and both
have their advantages.  I'm not sure which is likely to appeal to the
most people in practice, initially.

> I've got a glimmer of understanding in what you have in mind for a
> GUI.  Since all the information is in the DB, the GUI could make a
> series of queries and download all the source information and what
> not to itself.  Anyways I think that's a little of what you're
> getting at.

You could pick various designs that cache more or less information at
the client end, depending on such things as how much of a problem lag
is.  Most GUI toolkits have a text-editing widget of some sort, so
one can squirt the source code for a function over the wire into such
a widget and have pretty lag-free local editing, and then hit some
button to squirt it back and compile it.  These sorts of widgets
don't offer editing facilities as fancy as emacs (or vi or...) of
course.

Muq is designed to be eminently amenable to file-system style
browsing where you go from object to object peering at the properties
on it: A single Muq API lets you do this consistently for objects,
functions, arrays, structs &tc &tc.  The existing 'ls' commandline
function uses this API to let you inspect just about any object in
the system (100-C-utils.muf.t):

    : ls
        foreach key val do{ key ,, "\t" , val ,, "\n" , }
    ;

(Is that short and sweet, or what?
:)

The typical Lisp or Smalltalk or such system has a "browser" mini-app
which lets one run around viewing and editing the state of the system
in a clean GUI-driven sort of fashion: I'd hope Muq would shortly
sport a similar sort of browser mini-app which lets one similarly
inspect and edit one's objects, not too much different from a
graphical shell for Linux.  The first generation of such a browser
could just be a very simple widget layout wrapped around the existing
'ls' logic.  It could grow from there by stages into a decent IDE.

Since all of it would be built on top of the existing server without
any special privileges or such, such a shell could be freely hackable
on a per-user basis without needing Muq root privileges or
endangering system security or stability or such, so I'd expect once
we have a minimal example working, we'll see a mini-explosion of
people customizing and improving it -- this is just the sort of
mini-app softcoders love doing weekend hacks on, in my impression. :)

What I'd really like to explore in any Muq GUI client is marrying
multi-user / distributed stuff nicely to standard GUI facilities.
For example, it would be nice if all state was always known on the
Muq server end, so that one could pick up a session without
interruption after a network outage, or trivially detach a session
from one X server and re-attach it to another.  (TOPS-10 used to let
you trivially do this with any terminal session, moving freely from
one terminal to another.  'screen' on Linux gives a similar ability,
but as a kind of special hack instead of a pervasive utility.  And it
doesn't work very well with GUI apps. :)

Also, it would be nice if the Muq GUI layer made it trivial to invite
any other Muq user to come "look over your shoulder" remotely,
without needing any special coding by the softcoder.
This shouldn't be hard if designed in from the outset: Again, just
ensure that all state is known at the Muq server end, and design the
library to be able to drive several network connections to Gtk
clients in parallel rather than just one.  Suddenly, every Muq
mini-app becomes automatically a potential collaboration tool. :)

So, anyhow, the current candidates for my next Top Priority Muq
Project currently seem to include:

 * In-db access to a Gtk client to enable the above sorts of stuff.

 * Improved OpenGL support to enable great eye-candy 3D distributed
   worlds sorts of apps to attract a Muq userbase.

 * Heavy documentation on how to build a fairly conventional text
   world-server on Muq, just because that's what most Muq folks to
   date seem most interested in doing, but foiled by the high first
   step on the current Muq learning curve.

 * Just generally working on the documentation and laying off coding
   for awhile.

I find it hard to pick. :)

 -- Cynbe
|
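The "single API for browsing any object" idea behind `ls` in the message above can be sketched in Python (not MUF; the function names are invented for illustration): one key/value iteration protocol covering mappings, sequences, and plain objects, with the whole `ls` command built on top of it in two lines.

```python
# Sketch (Python, not MUF) of a uniform browse API over heterogeneous
# objects, in the spirit of Muq's 'ls': one key/value iterator that
# works the same way on mappings, sequences, and plain objects.
def entries(obj):
    """Yield (key, value) pairs for any browsable object."""
    if isinstance(obj, dict):
        yield from obj.items()
    elif isinstance(obj, (list, tuple)):
        yield from enumerate(obj)
    else:                          # fall back to attribute browsing
        for name in sorted(vars(obj)):
            yield name, vars(obj)[name]

def ls(obj):
    """The whole 'ls' command, built on the generic iterator."""
    return "\n".join(f"{key}\t{val}" for key, val in entries(obj))

class Room:
    def __init__(self):
        self.name = "lobby"
        self.exits = 2

print(ls({"a": 1, "b": 2}))
print(ls(["x", "y"]))
print(ls(Room()))
```

Because everything above `entries` is generic, a GUI browser mini-app could reuse exactly the same iterator to populate its widgets, which is the point of funneling all inspection through one API.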
From: <muq...@li...> - 2000-08-01 21:37:39
|
On 1 Aug 2000 cynbe wrote in response to jh...@cj...:

> ...
>
> > Option 2 isn't so horrible when working locally but working
> > locally isn't always an option.
>
> Why do you restrict this to "working locally"?  Seems like a term
> program would have much the same results whether working locally
> or remotely -- am I missing your point?

The factor i'm considering is the lag involved.  My experiences come
from the tinymuck dev cycle which I'm pretty sure you're familiar
with.  Historically, it's been "/quote the entire source file to the
muck with the @prog muf/1 9999 d/i/etc...".  Lately, though, i've
been doing more of "edit the muf file on the local file system" then
"@prog muf/c/q" inside the muck.  Especially considering the size of
things like your page.muf type programs, it's a ton more convenient
to work this way.

However, now that I think about it, muq dev is different with the
package system so emacs is probably the best way to go and just
redefine specific functions that need it.  Since specific functions
should rarely be huge with any sort of reasonable programming style,
lag concerns would be minimized.

----

I've got a glimmer of understanding in what you have in mind for a
GUI.  Since all the information is in the DB, the GUI could make a
series of queries and download all the source information and what
not to itself.  Anyways I think that's a little of what you're
getting at.
|
From: <muq...@li...> - 2000-08-01 19:51:24
|
jh...@cj... writes:

> This ML software seems to be stating the sender as itself instead
> of me.  Is this intentional?
>
> My email is jh...@cj....
>
> Date: Tue, 1 Aug 2000 14:34:55 -0400 (EDT)
> From: muq...@li...
> Reply-To: muq...@li...
> To: muq...@li...
> Subject: [Muq-coder] Saving functions to disc (and loading them)

Yours is the first live post to one of these lists -- this is a most
unexpected side-effect, at least to me and Artie. :)  Possibly
SourceForge for some reason thinks it is a Good Idea.  Most likely,
this is something controllable via some knob on the web-based
SourceForge administration pages which we're just learning our way
around.  We'll look for a fix.

Cynbe
|
From: <muq...@li...> - 2000-08-01 19:48:14
|
jh...@cj... writes:

> Is there any way to pipe data to and from the local disc?  I can't
> find anything to suggest that it's available.
>
> So barring that, it seems that the only ways to introduce softcode
> are
> 1) place the file in the pkg directory, shut the muq down and use
> muq-c-lib to insert it into the db.

This is what I've actually mostly done to date, just because for
server development purposes it provides total reproducibility of any
odd behavior, and hence a high probability of fixing any server bug
encountered.  Obviously, this isn't a great model for softcode
development generally.

> 2) paste/quote it into your term program.

This is actually what I'd favor at present, probably using an
emacs-based term program, or running the term program in an emacs
shell window.  But I haven't done a lot of this to date.

> Option 2 isn't so horrible when working locally but working locally
> isn't always an option.

Why do you restrict this to "working locally"?  Seems like a term
program would have much the same results whether working locally or
remotely -- am I missing your point?

> Option 1 seems to require shutting the muq server down which I'm
> guessing should never be necessary.

That's certainly the intended spirit -- I've been musing about the
best way to do server upgrades without appearing to shut the server
down. :)  (Options include using dlopen() &tc to re/load individual
server modules, using a small watchdog program which passes the open
file descriptors to the new process -- hiding a quick
shutdown/restart from open sessions -- or having the server exec()
its update, passing open file descriptors directly and the remaining
state via disk.  The latter is probably the most portable, and the
former potentially the fastest, since it can avoid touching all data
and most code.)
> The only way to save softcode I've found from the system is to log
> the
>     'function.function.source.source ,
> output or something along that line though it seems like precompiled
> functions don't have anything stored in that slot.

The current intent is that the canonical source live on the host
filesystem, and get loaded from outside into the Muq server, but not
in general saved from the Muq server back to the host system.  (To
export a complete user's code/data state, the intent is that db parts
be exported via rootExportDb -- see 'db functions' in the reference
manual.)

> So I guess my question is, what's the intended way to do things?

Frankly, I'm still feeling my way and very open to suggestions.  This
is the sort of stuff which I short-changed while concentrating on
getting the C-coded server proper stable, and which now needs to
start getting its fair share of attention in turn.

Some of my mental guidelines on this front are that:

 * Muq is intended to be a very secure sandbox system: Sysadmins
   should be able to put up a Muq server with minimal fear of being
   cracked.  This militates against free-and-easy access to the host
   file system, especially access initiated within the Muq server.
   For this reason, I'm very, very wary of introducing hardcoded
   capabilities to access the host account from within the Muq
   server, and most especially a maze of various special-case such
   abilities which are hard to understand and control, such as a
   special one just to save out source code.

 * Muq is intended to support distributed communities and remote
   operation, so solutions ideally shouldn't be sensitive to the
   local/remote distinction.  This also militates against local disk
   accesses and in favor of operating via telnet (or similar)
   connections which are insensitive to local/remote server
   distinctions.

 * Muq is intended to support diverse communities, so it is likely
   that there won't be a single one-size-fits-all solution, but
   we'll need to cater to several popular models.
 * The Muq server proper is intended to be as policy-free as
   practical, so ideally solutions to this sort of problem should
   involve either no change to the C-coded server proper, or else
   the addition of facilities which are as generic and policy-free
   as possible, with the bulk of the preference-catering and
   functionality provided in softcode.

 * I would very much like to see Muq develop into a world-class
   programming environment in a class with Smalltalk and such.  In
   the long run, I envision moving to a system in which the Muq db
   is the primary repository of Muq source code, and saving or
   loading it to/from host filesystems is exceptional rather than
   normal.  I think it is premature to try operating this way right
   at the moment, however.

My current thoughts include providing solid Gtk widget support via
Muq softcode wrappers transparently communicating via pipe or socket
to some generic Gtk client such as GUILE+Gtk or Perl/Gtk.  This will
give us the ability to write first-class GUI interfaces in Muq
softcode which can run both locally and remotely, for both
single-user and multi-user applications.  Nice IDE-style programming
facilities for Muq are one of the things which we can build on this
foundation.

As a shorter-term programming environment, some sort of better
integration with emacs might be an idea: I'm unsure just what form
that should take.

Muq currently has two basic generic ways of providing new
communication between host file system (etc) and the db:

 * Writing a little softcode daemon which accepts connections (or
   makes a connection to another server) and then provides a service
   of some sort.
   For example, if one wanted a little Linux commandline facility
   called 'muf' that accepted a line of MUF code, evaluated it
   inside the Muq server, and then returned the result as a text
   string, I'd do it by writing a little softcode daemon of this
   sort in-db in Muq, and then a little Perl (or C) utility in Linux
   which read the line of text, fed it to the Muq server daemon via
   a telnet connection, and then printed out the result.  (This
   would actually be a cool weekend project and potentially very
   useful in making the Muq db more accessible without greatly
   weakening the sandbox.)

 * Spawning a subprocess of the Muq process in softcode via
   ]rootPopenSocket (again, see the reference manual).  This is the
   intended general mechanism for providing local weakening of the
   sandbox in the form of special access to the host system.  The
   idea is that by

   (1) CAREFULLY writing a simple C or Perl program to provide some
       specific service without otherwise compromising host
       security;

   (2) Writing a Muq softcode wrapper which transparently executes
       the above program via ]rootPopenSocket; and

   (3) If desired, using asMeDo{ ... } to make the above
       (Muq)root-privileged facility available to some or all other
       users on the Muq server,

   you can effectively add a new host-accessing primitive to your
   local Muq server without hacking the C server and taking the
   attendant risks of introducing crashing, security or performance
   bugs.

Either of the above would be a candidate for implementing richer
source code import/export facilities, if needed.

One current minor problem is that the current MUF compiler provides
no simple 'eval' function compiling an arbitrary given string.  (The
MUC compiler does have such a function, but is not yet anything like
a complete alternative to MUF for serious softcoding.)  This is
pretty high on my mental TO-FIX list, and I'd be happy to move it to
the top if lack of it was impeding a credible coding project on Muq.
:)

BTW: None of the above addresses Muq's currently lamentably primitive
debugging support facilities: Similar comments apply to them.  I'm
inclined to think that writing a good multi-thread debugger will
require having something like the Gtk toolkit available, and hence to
do Gtk support first, and then hammer on better debugging.

The Muq context poses some interesting issues for debug support,
since users need to be able to single-step through code they don't
own without affecting other users using the same code: The usual
breakpoint hack of modifying executables is probably a Bad Idea in
the Muq context.  Hashtables of breakpoints and watchpoints, and a
dual-mode interpreter which runs quickly most of the time but with a
lot of breakpoint and watchpoint checking in threads under debugger
control, looks like the most reasonable answer...

Anyhow, that's where my thoughts are on that front.  I'm very open to
suggestions and comments. :)

Cynbe
|
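The breakpoint scheme sketched at the end of the message above, a hashtable of breakpoints consulted only by threads under debugger control, so shared code is never modified, can be illustrated in miniature (the instruction set and all names below are invented for the example, not Muq's):

```python
# Sketch of hashtable breakpoints with a dual-mode interpreter:
# the fast path does no per-instruction breakpoint lookups at all,
# while a debugged thread runs the SAME unmodified code with a
# breakpoint check on every step.  Instruction set is invented.
def run(code, debug=False, breakpoints=frozenset()):
    """Execute `code`; in debug mode, record hits on breakpoint pcs."""
    stack, hits = [], []
    for pc, (op, arg) in enumerate(code):
        if debug and pc in breakpoints:   # checked only in debug mode
            hits.append(pc)
        if op == 'push':
            stack.append(arg)
        elif op == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1], hits

CODE = [('push', 2), ('push', 3), ('add', None)]

# Fast path: other users sharing CODE pay nothing for the debugger.
print(run(CODE))                                  # (5, [])
# Debugged thread: same shared code, breakpoints looked up per step.
print(run(CODE, debug=True, breakpoints={2}))     # (5, [2])
```

Since the code object itself is never patched, any number of users can run `CODE` concurrently while one of them single-steps through it, which is exactly the multi-user constraint the message raises.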
From: <muq...@li...> - 2000-08-01 18:44:40
|
This ML software seems to be stating the sender as itself instead of
me.  Is this intentional?

My email is jh...@cj....

Date: Tue, 1 Aug 2000 14:34:55 -0400 (EDT)
From: muq...@li...
Reply-To: muq...@li...
To: muq...@li...
Subject: [Muq-coder] Saving functions to disc (and loading them)
|
From: <muq...@li...> - 2000-08-01 18:35:28
|
Is there any way to pipe data to and from the local disc?  I can't
find anything to suggest that it's available.

So barring that, it seems that the only ways to introduce softcode
are

1) place the file in the pkg directory, shut the muq down and use
   muq-c-lib to insert it into the db.

2) paste/quote it into your term program.

Option 2 isn't so horrible when working locally but working locally
isn't always an option.  Option 1 seems to require shutting the muq
server down which I'm guessing should never be necessary.

The only way to save softcode I've found from the system is to log
the

    'function.function.source.source ,

output or something along that line though it seems like precompiled
functions don't have anything stored in that slot.

So I guess my question is, what's the intended way to do things?
|