exprla-devel Mailing List for XPL: eXtensible Programming Language (Page 4)
Status: Pre-Alpha
Brought to you by:
xpl2
From: reid_spencer <ras...@re...> - 2002-01-31 09:27:13
--- In xpl-dev@y..., Lucas Gonze <lucas@g...> wrote:

I haven't been able to find any public implementation at all. Does anyone know otherwise?

> Richard Anthony Hein wrote:
>
> The Blocks eXtensible eXchange Protocol (BXXP) was developed by Marshall Rose:
>
> [...]

--
L.U.C.A.S.: Lifeform Used for Calculation and Accurate Sabotage

--- End forwarded message ---
From: reid_spencer <ras...@re...> - 2002-01-31 09:26:58
--- In xpl-dev@y..., Lucas Gonze <lucas@g...> wrote:

I just met BXXP yesterday. It looks pretty compelling. It is very similar to the WorldOS protocol, though with an ability to layer multiple messages across a single stream. The request/response thing is somewhat different, since the WorldOS stuff considers a request/response pair to be just a special case of a message that can have 0 or n responses. I'm going to read the spec slowly later tonight. If I can use it instead of the WorldOS protocol, I'll consider either switching or adding it to the toolkit as a standard option. Anybody know if there is a Java implementation, preferably under the GPL?

> Richard Anthony Hein wrote:
>
> The Blocks eXtensible eXchange Protocol (BXXP) was developed by Marshall Rose:
>
> [...]

--- End forwarded message ---
From: reid_spencer <ras...@re...> - 2002-01-31 09:26:39
--- In xpl-dev@y..., "Richard Anthony Hein" <935551@i...> wrote:

The Blocks eXtensible eXchange Protocol (BXXP) was developed by Marshall Rose:

"BXXP is essentially a tool kit that developers can use to quickly create protocols for a range of applications including instant messaging, file transfer, content syndication, network management and metadata exchange. Because it uses a peer-to-peer architecture, BXXP is a good foundation for creating protocols that govern distributed file-sharing applications such as Gnutella, iMesh and Freenet."

"Application-specific protocols can be stacked on top of the reusable BXXP code, and developers can update the add-on protocols without changing the underlying BXXP foundation.

BXXP-enabled applications work by setting up and maintaining a network connection between two users, which can alternate between functioning as clients and servers. The two users can respond to requests for data as well as push data back and forth over a network connection.

One special feature of a BXXP connection is that it can carry multiple simultaneous exchanges of data - called channels - between users. For example, users can chat and transfer files at the same time from one application that employs a network connection. BXXP uses XML to frame the information it carries, but the information can be in any form, including images, data or text.

BXXP runs on top of TCP and acts as an alternative to HTTP or a custom-made data exchange protocol. HTTP was designed to handle the transport of hypertext documents and is ideal for Web browsing. However, HTTP doesn't work well for the transfer of XML data, nor does it support multiple simultaneous exchanges between users. For these types of applications, developers have had to create their own special-purpose protocols. Now they can use BXXP to speed that process."

Get more information at http://www.nwfusion.com/news/2000/0626bxxp.html (slow today), and follow a discussion at www.slashdot.org, where a variety of people will be discussing the ins and outs of BXXP.

What does Simon St. Laurent think about this, considering his viewpoints expressed in earlier messages (Simon?)? How does this compare to WorldOS (Lucas?)?

Richard A. Hein
<plug>who has finished up his last contract and is now looking for a new job <email_me>935551@i...</email_me></plug>

--- End forwarded message ---
From: reid_spencer <ras...@re...> - 2002-01-31 09:26:30
--- In xpl-dev@y..., "Michael Lauzon" <ce940@f...> wrote:

I don't think it's too early in the game to bring up the topic of what license we're going to use, since what we are creating is open source and will be freeware. I see three choices:

1. GPL
2. LGPL
3. MPL

There may be more. I still stick to the idea that once we get this fully working, any updates to the code or tags have to be sent back to us, just like they do with Linux.

Michael

--- End forwarded message ---
From: reid_spencer <ras...@re...> - 2002-01-31 09:26:13
--- In xpl-dev@y..., Michael Lauzon <ce940@f...> wrote:

That's what I meant to write, but SourceForge is still slow nevertheless.

On Sat, 24 Jun 2000, Pat McCotter wrote:

> This should be http://eXtenDE.sourceforge.net/
>
> On Sat, 24 Jun 2000 09:18:42 -0400 (EDT), Michael Lauzon <ce940@f...> wrote:
>
> | Here are some links to stuff we should read. The first one isn't finished yet, so the author says:
> |
> | http://www.treelight.com/software/bootstrap/encodingSource.html
> |
> | This link appears to be down, but SourceForge is a slow website:
> |
> | http://eXtenDE.sourceforge.com/
> |
> | I don't know what we'll find on the latter one, as I can't connect to it.
> |
> | There is also this one:
> |
> | http://www.idevresource.com/
> |
> | Michael
> | http://www.geocities.com/SiliconValley/Way/9180/
> |
> | 'Eat, drink, and be merry, for tomorrow you may work.'
>
> --
> Cheers
> Pat McCotter
> pat0@s...
> PGP Key - 0x4E5E46BB
> Fingerprint 82ED 8A86 68B9 BEBF 100B 06CF 603B E658

Michael
http://www.geocities.com/SiliconValley/Way/9180/

'Eat, drink, and be merry, for tomorrow you may work.'

--- End forwarded message ---
From: reid_spencer <ras...@re...> - 2002-01-31 09:25:38
--- In xpl-dev@y..., Lucas Gonze <lucas@g...> wrote:

Re: paradigms shifting away from procedural programming and HTTP limitations as an RPC transport - I believe that the move underway to have everything be a server _and_ client leads basically to an asynchronous way of thinking, where procedure calls have 0, 1 or more results.

The first major problem with HTTP is that it is hopelessly tied to synchronous call and response. The second major problem is that it is tied to HTML as a content type. As messages pass from node to node, the content and transport may change. HTTP 1.1 specifies specific rules for how browsers, proxies and servers interact; they are explicitly different animals. This approach works badly enough for HTML in a browser. For reliable procedure calling it is a flawed approach. For example, a proxy is supposed to convert chunked encoding to non-chunked. This is helpful to a client that is a browser, but if the actual client is another proxy then this will cause problems. If the data is streamed, it can cause serious problems.

Brooklyn is hot today. Totally hot. Apologies if this writing reflects the fact that I am moving (and thinking) as slowly as possible to keep the sweating to a minimum.

- Lucas

> We are shifting paradigms here, and as a general rule of thumb, as one paradigm becomes ascendant another becomes obsolete. It's my personal conviction that the paradigm being shifted is not HTML -- although that will in fact be subsumed by the technology over time. Rather, the paradigm is procedural programming itself. As I see it, over time VB, Java, and even (gasp) C++ will take on increasingly XML-like characteristics, to the point where eventually these languages largely disappear as independent entities. That is not necessarily to say that they will be cast in an XML description (though that may happen in some cases). Rather, the mechanisms for handling procedural languages will grow out of the hydra-like head of XML, with XML itself likely to evolve in the process.
>
> > I don't think XML-RPC and SOAP use the "HTTP protocol incorrectly", though I certainly think it's worth considering more efficient transfer protocols with clearer security.
>
> My biggest problem with any type of XML-based RPC is that, while you can write very efficient RPC code that effectively keeps most of the interaction in a periodic request from a client to a server to send in a larger stream of data (a use I don't have problems with at all), it can also be used to control component interactions on the server one command at a time. RPCs have typically been complex things to write, involving skills that were sufficiently specialized that there was a prerequisite understanding of how best to optimize the RPC's service. However, making RPCs much simpler lowers the bar on the skills necessary to create them, which in turn means that you'll see more people use XML-RPCs to handle routine component control commands that should typically be handled within a compiled (or at least protected) control. Thus I have this disturbing vision of lots of RPCs basically performing incremental procedural coding between environments using XML as the basis, which is the worst possible way of designing such systems.
>
> My second concern with RPCs is that they run over HTTP, typically through port 80. Firewalls exist for a purpose -- to keep critical subcomponents from being programmed illicitly. SOAP especially is designed (and this has been a common thread coming from a number of different vendors) to bypass firewalls. Thus, by explicit admission, SOAP in particular serves as a means of circumventing security measures put in place by sysadmins in the first place.
>
> These were the specific objections that I had when I said that XML-RPCs used the HTTP protocol incorrectly. The syntactic use of such techniques is undoubtedly correct, though they do rely on extensions to the protocol, but so do many other things. It is more in the usage of SOAP that I see a conflict with the design goals of HTTP, and the biggest danger from it. I should point out that I do see the obvious benefits of incorporating SOAP -- it is a powerful technology; it is, however, this exact power that worries me.
>
> > I think it's time to consider changing the Internet to make better use of XML and to simplify implementing the possibilities XML opens up. It's not yet time to throw everything out and start over, but I'm glad to hear that people are willing to at least consider the possibility of a transition.
>
> I'm not questioning the sentiment (which I personally agree with), but the feasibility. The Internet evolved -- at no point was there a single conscious decision for it to appear in the form that it did; rather, as different people contributed different facets to this communication protocol and all the technologies overlaying it, the Internet lurched and lunged through different embodiments of itself. XML as a language for creating the semantic web is laudable but far too late; as a consequence, the best approach may very well be to introduce a new meme into the existing infrastructure that would make it evolve toward the newer requirements. This is a very counterintuitive notion, that one should grow or evolve the web, but I suspect that it may be the only way that we can effectively integrate this new paradigm into the systems at hand.
>
> BTW, it's late, I've been chasing kids around all day, and I'm probably not making a lot of sense at the moment.
>
> - Kurt Cagle
>
> ----- Original Message -----
> From: "Simon St.Laurent" <simonstl@s...>
> To: <xpl@e...>
> Sent: Saturday, June 24, 2000 9:29 AM
> Subject: [XPL] XML/Web infrastructure
>
> > I'm going to reply to various bits, and hope this doesn't get too confusing.
> >
> > Richard Hein wrote:
> > > www.xml.com has articles about some things we need to be informed about, including the foundational infrastructure of the 'net and how XML makes too much of a demand on the current infrastructure,
> >
> > I'm not certain that XML makes 'too much of a demand', but it certainly makes demands and opens new possibilities. I tend to enjoy the 'disruptions' XML causes.
> >
> > Kurt Cagle wrote:
> > > I just spent three days closeted with Simon St. Laurent, who wrote the article, and while I agree with him in part, I would also keep in mind that he represents just one side of this issue.
> >
> > Kurt's completely right that there are _many_ sides to all of these issues. I wouldn't have written the article if it hadn't been for the strong opposition some of those opinions have generated. I'd like to see XML carry on disrupting things, but not everyone is happy about that.
> >
> > Kurt wrote:
> > > I do agree with Simon with regard to the issue of SOAP and XML-RPC, and I have made my feelings known about that on the VBXML board -- we are utilizing the HTTP protocol incorrectly, placing too many demands upon it to efficiently handle the use of the network as a common interchange medium for messages. It also gives me more than a little pause about both security concerns and architecture.
> >
> > I don't think XML-RPC and SOAP use the "HTTP protocol incorrectly", though I certainly think it's worth considering more efficient transfer protocols with clearer security.
> >
> > Then Richard wrote:
> > > Kurt says that he disagrees with Simon St. Laurent, who wrote an article on the problems with the current infrastructure on http://www.xml.com, in the sense that it may be too late to change a lot of it now. I don't really know what to think about that, but I wonder, because it only took a few years for the internet to grow out of obscurity into this huge thing, so why would it be so bad to change it somewhat for XML?
> >
> > I think it's time to consider changing the Internet to make better use of XML and to simplify implementing the possibilities XML opens up. It's not yet time to throw everything out and start over, but I'm glad to hear that people are willing to at least consider the possibility of a transition.

--- End forwarded message ---
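Lucas's opening point - that in an asynchronous, everyone-is-both-server-and-client world, a procedure call has 0, 1 or more results, and synchronous request/response is just the one-result special case - can be sketched with handlers that yield however many responses they like. The dispatcher below is hypothetical, for illustration only; it is not the WorldOS or BXXP API:

```python
# Sketch of asynchronous messaging where a "call" may yield 0, 1, or n
# responses, so synchronous request/response is just the one-response case.
# Hypothetical dispatcher for illustration; not an actual protocol API.

def ping(msg):
    yield f"pong: {msg}"           # exactly one response: classic RPC

def subscribe(msg):
    for i in range(3):              # many responses: a stream of events
        yield f"event {i} for {msg}"

def fire_and_forget(msg):
    return iter(())                 # zero responses: a pure notification

HANDLERS = {"ping": ping, "subscribe": subscribe, "log": fire_and_forget}

def send(kind, msg):
    """Dispatch a message and collect however many responses come back."""
    return list(HANDLERS[kind](msg))

print(send("ping", "hi"))         # one response
print(send("subscribe", "feed"))  # several responses
print(send("log", "note"))        # none
```

The point of the sketch is that the caller makes no assumption about response count, which is exactly what a strictly synchronous call-and-response transport like HTTP cannot express.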
From: reid_spencer <ras...@re...> - 2002-01-31 09:25:15
--- In xpl-dev@y..., cagle@o... wrote:

Jonathan, I was basically stumbling tired yesterday, or I would have responded more cogently on this.

I personally don't think you'll see "one" language emerge under the rubric of XPL. Rather, I sense that we're talking about a methodology for creating languages within the constraints of an XML environment, and that, just as multiple procedural languages have already filled their respective niches (you wouldn't program low-level system components with Visual Basic, nor would you write high-level "business logic" with C++), we'll see analogous low- and high-level XML-based languages, many based upon some variation of XSLT.

The way I'm handling XPipes (which I really hope to get up to the VBXML site sometime this week) is that it is essentially an uncompiled language that is then processed into XSLT. I liken it to the Java model: the original source code is first compiled into Java bytecode, which is then interpreted by the Java virtual machine into a binary representation. XPipes works on a similar premise: XPL precompiled code --> XSLT raw code --> XSLT compiled code. The difference is that whereas the Java VM is an event-driven message-loop environment (is stateful), XPipes is stateless and is driven by the movement of streams. That's not to say that you couldn't create an event-driven version of this in a client environment (a stream of XML triggers a series of events within the XPipes message loop, which in turn cascade into responses), but the model assumes that most of the code there is still XSLT-specific.

One of the goals that I have for XPipes and similar XPL languages is that they serve as testbeds or catalysts for the XSLT2 developments. XSLT is Turing complete, but it's also a language with some gaping holes. Its scoping model is rather skewed; you have to create multiple recursive instances of for-each loops to handle indexed-for expressions; and it really does need regular expressions as an integral part of XPath (why they took it out is beyond me -- that was the first thing to make sense in XSLT for me in a long time). The binding between XSLT and XML Schema needs to be written. There needs to be a much tighter story for module deployment standards (otherwise we have this proliferation of procedural scripting languages to handle the shortfall, reducing interoperability and adding to the general headache of developers).

I would agree with you, though, on the notion that compilation in and of itself for any XSLT language is a local rather than a global thing. I view an XSLT stylesheet as a filter which takes one or more incoming XML streams and creates zero or more outgoing XML streams, though it may in the process perform some side effect that is the actual desired result (much as a function may take information to blit an image to a screen and then return an error code as a result -- the error code is very much secondary to the blitting, but the blitting is effectively just a side effect). In short, there is a fairly high degree of correspondence between an XSLT stylesheet and a compiled function. Making the jump to the next level of abstraction -- between a stylesheet and a component -- is a more sophisticated process, but certainly doable; however, in this case you're effectively talking about the "methods" potentially spanning more than one computer, as would the component itself. In this case, the methods may be compiled, even though the component itself is most certainly not (it in fact exists not as a discrete entity but rather as a pattern of actions).

I think we're going to find this to be the case with a number of elements that have traditionally been compiled -- the compilation process creates a tightly bound entity, whereas XML tends to create decoupled systems, much more loosely bound than is traditional with procedural languages. This will in turn force us to reconsider our paradigms, and look more closely at the nature of programming across distributed systems. I think the results will be much more organic and self-organizing than procedural programming, but I'm not a hundred percent sure of this.

-- Kurt Cagle

----- Original Message -----
From: Jonathan Burns
To: xpl@e...
Sent: Sunday, June 25, 2000 4:57 PM
Subject: Re: [XPL] Oracle and Sun debut "translets" and virtual machine for XSLT

[...]

--- End forwarded message ---
From: reid_spencer <ras...@re...> - 2002-01-31 09:25:00
--- In xpl-dev@y..., Jonathan Burns <saski@w...> wrote:

cagle@o... wrote:

> Just a quick observation. I think we need to qualify what is specifically meant by compilation here, and to note that similar compiled stylesheets exist on the Microsoft side in the form of IXSLProcessor entities.

You're right. It's the first time I've pushed the argument right through, in my own understanding. You're a writer, you understand :-) What's implicit in my exposition is that the usual idea of compilation falls apart, into separate connotations, in the context of XML development.

Here's the definition - and I'll have to stick to it, because it's what most readers will understand by it: Compilation is the process which translates a definition of a process, expressed in a human-readable source syntax, to a series of instructions in a machine code architecture which actually carries out the process.

Do we want compilation for XPL, then? NO WAY! People go to all this trouble to define a platform-independent syntax for XML - and we propose to give it a machine-dependent semantics? We'd have to be nuts. But we still want speed, and memory economy. So we're bound to propose something like compilation, but machine-independent.

There are two dimensions along which we can modify the strict definition.

(1) We can define a virtual machine architecture, which is similar to actual machine architectures, and translate to that. Loosely speaking, we can "compile to JVM bytecode", for example. Problem solved - provided we include a JVM as part of the XPL environment.

(2) We can include under the heading of compilation, correctly, translation to a list of indirectly-expressed instructions, which is executed in traversal. Strictly speaking, this is what we do in (1). But the broadened definition includes executables such as Forth - subroutine-threaded code, expressed as a list of subroutine addresses, with embedded machine code for a small set of primitives.

(2a) And if we can do that, then why can't we traverse a tree structure of indirectly-expressed instructions, in memory, in the same form as parsed XML data trees? It's only a degree more abstract than (2).

Just where along the line the mechanism departs from the reader's understanding of compilation is a matter of the reader's background. Instead of saying "compilation", we should be saying "parsing" for translation of source (e.g. paths and templates) to logical tree structure; "realization" or perhaps "encoding" for implementation of the trees as instructions on one of the models above; and "execution" for the actual transform process.

This may all be clearer once I've researched SAX, and your XPipes.

Jonathan

--- End forwarded message ---
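Jonathan's option (2a) - traversing a tree of indirectly-expressed instructions held in the same form as parsed XML data - can be made concrete with a tiny tree-walking evaluator. The mini-language below (`num`, `add`, `mul`) is invented purely for illustration and is not XPL; the point is the pipeline he names: "parsing" builds the tree once, and "execution" is plain traversal, with no machine-code step anywhere:

```python
# Minimal tree-walking evaluator in the spirit of option (2a): the "program"
# is an XML tree, parsed once ("parsing"), then run by recursive traversal
# ("execution"). The num/add/mul mini-language is illustrative, not XPL.
import xml.etree.ElementTree as ET

def evaluate(node):
    if node.tag == "num":
        return int(node.text)
    if node.tag == "add":
        return sum(evaluate(child) for child in node)
    if node.tag == "mul":
        result = 1
        for child in node:
            result *= evaluate(child)
        return result
    raise ValueError(f"unknown instruction: {node.tag}")

# 2 + (3 * 4), written as an XML instruction tree:
program = ET.fromstring(
    "<add><num>2</num><mul><num>3</num><num>4</num></mul></add>"
)
print(evaluate(program))  # 14
```

Note that the program tree and an XML data tree are literally the same data structure, which is exactly why (2a) needs no separate bytecode format - at the cost of re-dispatching on tags at every node during traversal.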
From: reid_spencer <ras...@re...> - 2002-01-31 09:24:43
|
--- In xpl-dev@y..., cagle@o... wrote: Just a quick observation. I think we need to qualify what is specifically meant by compilation here, and to note that similar compiled stylesheets exist on the Microsoft side in the form of IXSLProcessor entitites. -- Kurt ----- Original Message ----- From: Jonathan Burns To: xpl@e... Sent: Sunday, June 25, 2000 5:29 AM Subject: Re: [XPL] Oracle and Sun debut "translets" and virtual machine for XSLT Richard Anthony Hein wrote: Everyone, www.xml.com has an articles about some things we need to be informed about, including the foundational infrastructure of the 'net and how XML makes too much of a demand on the current infrastructure, and one about "translets" and an XSLT virtual machine! Very important to XPL I think! No kidding. That www.xml.com/pub is a very interesting place. I just scanned the St. Laurent and Dodds articles - I get part of them, but more of them makes reference to issues I haven't begun to study. What they're talking about, though, is related to what I've been brooding about while offline. Does XML demand something new in our Web paradigm? Should we expect compiled XSLT to make a real difference? Compiling is what I'm talking about here. There is a persistent interest on the list, in compiling XPL. I share in it, but from a skewed perspective. From one angle, I'm keen on grammars - it's a disappointment for me that EBNF should be set into the foundations, as a means of defining correct source parsing, but ignored as a high-level mechanism for combining XML structures. From another angle... I think that the benefit of compilation will not be transferred easily (if at all) from complex applications resident on single machines, to complex interactions distributed via comms protocols. I think they will show up to a degree on servers that are dealing heavily in XML - but only when a whole lot of related efficiency issues are addressed at the same time. 
Roughly estimating, in the time my system downloads 1 kilobyte of HTML, the CPU can execute 100 million instructions. That wealth of processing power is employed by my browser, to access local resources like fonts, to render the content as X Windows primitives, and to pass them through to X Windows - which uses more CPU power to get them to the graphics board. Compared with all of that going on, the processing requirements of an XML parser should be marginal. It could be implemented quite inefficiently, and hardly make a dent. Which gives us valuable leeway, for more important requirements. I like it, that we are starting to see XML parsers being written in all the common scripting languages. It means you can choose your own platform-above-the- platform, and XML will be available to you. If you think about it, it's just an extension to CGI - i.e. processing in the interpreted language of your choice, including the generation of HTML output on the fly. I stress: your choice, of conceptual Web lubricant. The downside is, What happens when development efforts for the various scripting languages get out of step with one another? And, What happens when they get out of step with XML tech developments? There is the horrid potential for a Balkanization of the platform-independent platforms - with one crowd of developers rushing in to capitalize on XML-via-Java, while another exploits XML-via-Perl - with the same wheels (hell, with giant chains of interdependencies) being invented on both sides of the divide. Supplementing the chaos with compiled XML-via-C, or -via-i386 machine architecture, brings nothing to the table, except some additional processing speed in the parsing and transformations parts of XML processing - which, on the client side, would hardly be noticed. What about the server side, then? And what about the Internet relay in between? Naturally, I've thought about how that 1K of HTML or XML is The Bottleneck, and about how to pack more value into that 1K. 
We could compress the text, of course, before transmission, and unpack it on receipt. Or we could tokenize it - encode it into a stream of binary numbers. That would double or triple the content of the average kilobyte. Maybe it's worth doing, but my sense is that a compression stage would be so straightforward, that people will be doing it without advice from me :-) And as for tokenization, that has problems - namespace and addressing problems (i.e. any two processes communicating by numbers must share equivalent lookup tables for what the numbers mean).

On the server side, is there enough XML processing going on in one place, that compilation is a significant gain? Maybe - and the Oracle people must think so, if they're excited by compiled XSLT translets. I'm thinking about online transaction processing (OLTP). Here we are in the DB and application services context, surrounded by interface formats - SQL and a thousand COM and CORBA interface schemata. To filter and join and translate among them is relatively easy - but it takes a bit of effort to set up, and probably the effort has to be reinvented system by system to some degree. And above all, the result of the effort is a translation stage which could be a bad bottleneck in a high-transaction-rate pipeline.

If the translation stage can be compiled, no more bottleneck. And if it can be compiled automatically, from an XML document set which contains the source and target interfaces in XML form, then no more system-by-system reinvention of the translator. That's the rationale I'm seeing for compilation. I think that's what the translet stuff is about.

Does XPL change this context? Or is it changed in this context? There are a lot of factors here, and a huge discussion, of which this post just scratches the surface. For a minute, put yourself in the position of a server system - whether it's raw data you're serving, or personalized interactions.
Under your control is an inventory of data, the bulk of it perhaps of the same type, but generally heterogeneous. Your business is to search it, sort it, reformat and rearrange it, pack it up for transmission, unpack it on receipt, and maybe do some calculations on it. You are equipped with XPL, which we'll assume is some extension of XSLT. By default, what you're doing most of, is accessing XPL source (tags, indentations and all) and passing it to an interpreter. The interpreter parses the source, builds a tree structure (parse tree), and sets this tree to work on the data at hand. (Below, I'll split this into a parser stage and a tree-processing stage, and use "interpreter" for the latter.)

Some kind of cursor runs up and down the parse tree - as directed by the XML data it's working on - and as a result, cursors run up and down trees of XML data as well, identifying elements and leaves. As a further result, the parse tree elements are activated, causing elements to be added to output trees in process of construction. In some cases, activated parse tree elements will make requests of the native system, e.g. to render the state of processing in a window. But by and large the server system is self-contained, the way that an HTML browser is.

The basic rule of economy is, never do the same job three times. That is, if you find yourself doing something for the second time, and you could recognize a third time in advance if you saw it coming - then don't just do the job and forget about it. Instead, cache the results. When the third time comes, just output the cached results. You can save lots of time that way.

In the days when CPU time was expensive, this technique was taken to extremes, in respect of processing overhead. The entire range of jobs which an application was to perform was worked out in advance, coded in some language, and pre-translated to machine code. Compilation.
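The "never do the same job three times" rule is ordinary memoization. A minimal sketch in Python, where a hypothetical parse function stands in for the expensive, repeatable job (the function and its inputs are illustrative assumptions, not part of any XPL design):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parse(source: str) -> tuple:
    # Pretend this is the costly parse of a stylesheet or document.
    # Here it just splits the source into a tuple of tokens.
    return tuple(source.split())

parse("template match root")  # first call: do the work, cache the result
parse("template match root")  # second call: served straight from the cache
print(parse.cache_info())     # shows one miss (the work) and one hit
```

The cache turns every repeat of the same job into a table lookup, which is exactly the economy the paragraph above is after.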
Ironically, the art of caching data was an afterthought, effectively done only in major shops. The job that had been automated was the translation from high-level source to machine code. It had only to be done once, in advance, never on the fly. Interpreted languages, which repeated and repeated the parsing and machine-code-generation overhead, were regarded as something less than rocket science.

The catch was this: In compilation, information was thrown away. This was partly because memory space was also at a premium. That which actually did the job, raw machine code, contained no labels, no syntactic niceties, no structured programming constructs, and of course no comments. There was no possibility for decompilation into something legible, nor for reflexive operations on the working code.

With all this in mind, consider how that server is spending its time, given XPL. I find it plausible to suppose that the server is I/O bound - if its XPL-based software is simplistic. I think it will be spending most of its time queued on communications, with brief periods in which it is queued on local disk access - and eyeblinks, in which it is actually CPU-bound. During the I/O bound intervals, processing will be going on, though not nearly to the capacity of the CPU. There will be CPU time to waste - and it will indeed be wasted.

On the other hand, I find it plausible that the server may spend a good deal of time CPU-bound - if it is being fed a steady transaction stream and also its XPL-based software is sophisticated, with sorting and caching and hashing employed to supply the end-use XPL processes with precisely the data which needs to be worked on. In the latter case it makes sense to ask, Is the XPL processing efficient in itself? Or is it throwing away results, and repeating operations needlessly? Well, for one thing there will be a lot of parsing going on, by default, of both XPL code and XML data.
That is sensible, if the source text is usually different with each parse; but it is wasteful, if the same source is being parsed repeatedly, just to build the same parse trees over and over. Most of the XPL code will be fixed - and so, its parsing should be done just once, and its parse-trees retained. But most of the XML data will be heterogeneous, selected from all over the place, and some of it will be volatile, i.e. its content will be changing as it is updated, written out, read back in - and re-parsed to updated parse-trees. In that case, there will be benefit in making the parsing of data fast.

So let's assume that the parser will be compiled. As I've said in earlier posts, the way to get a fast parser for an EBNF language is to employ some equivalent of Yacc, to produce a recognizer automaton for the XML grammar. The form of the automaton is a lookup table - and looking up tables, and jumping from row to row, are based on a very small primitive set of operations, quite cheap to reimplement for multiple platforms.

This leaves us pretty much with the hard core of XPL processing - traversal and reconstruction of trees, with a little number-crunching on the side. Is there enough needless reproduction of results, to justify compilation?

On the negative side, we have here a process which can be considered a series of little processes, in which an XPL parse-tree is traversed, with the effect that a data tree is also traversed, and an output tree produced. Likely enough, parts of the code tree will be traversed many times - there has to be some equivalent of looping, after all. But also likely, there will not be much needlessly repeated overhead, merely from shifting from node to node of the code tree via links. The fact is, once we have parsed the source and created the code tree, we have more or less compiled the code already.
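The "automaton as lookup table" idea above can be sketched in a few lines. This toy recognizer accepts a single well-formed tag like &lt;name&gt;; it is an illustration of table-driven recognition, not a real XML tokenizer, and the states and character classes are my own simplification:

```python
def char_class(c):
    # Map each input character to a column of the table.
    if c == "<": return 0
    if c == ">": return 1
    if c.isalpha(): return 2
    return 3

# TABLE[state][class] -> next state; -1 rejects, state 3 accepts.
TABLE = [
    [1, -1, -1, -1],   # state 0: expect '<'
    [-1, -1, 2, -1],   # state 1: expect first name character
    [-1, 3, 2, -1],    # state 2: inside name; '>' accepts
]

def recognize(s):
    state = 0
    for c in s:
        if state == 3:
            return False          # trailing input after the accept state
        state = TABLE[state][char_class(c)]
        if state == -1:
            return False          # no transition: reject
    return state == 3

print(recognize("<name>"))   # True
print(recognize("<1bad>"))   # False
```

The whole inner loop is an index and a jump, which is why this style of recognizer is so cheap to port across platforms.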
Good compiled code - lean, mean machine code, Real Programmers' code - is a string of primitives translated directly to the machine instruction set, and held together by the brute fact that they follow one another in memory. The minimal overhead of loading up the address of the next instruction is being carried out by the CPU itself, except for loops and calls. Not an instruction is wasted.

Good semi-compiled code allows a bit more slack. It is permissible that the next instruction is not hardwired in, but discovered on the fly, by handing a token to a tiny interpreter, or indexing into a lookup table. Finite-state automata are in this class; so are the threaded languages like Forth; and so is Java, with its virtual machine architecture.

In the server scenarios I've sketched, we have the slack. To imagine the server being CPU-bound, I had to imagine it being driven to the limits of its I/O by a continuous transaction stream, and its code having been heroically engineered to squeeze out unnecessary repetitions of data fetching. Within reasonable bounds, we can implement our low-level tree processing on whatever little interpreter is appropriate - say the JVM - without accusations flying around that we're wasting CPU power.

On the positive side ... Yes, yes, there's a positive side :-) ... The ideal is that our server is spending most of its time traversing trees. That's where the work gets done. To approach the ideal, we need the XML data we're working on to be in tree form. Before even that, we need it to be in memory.

(I've just lately been to Tim Bray's Annotated XML 1.0 Spec - an intricately hyperlinked document, backed by a couple thousand lines of Javascript. Tim notes that there's a problem getting the whole document into memory. He suggests the need for a "virtual tree-walking" mechanism, analogous to virtual memory. It's a little scary to consider that one document can occupy several meg of RAM.
) I think - this is vague as yet - that we get the most use of our CPU, if most of our code and data are in tree form, and the tree form is succinct. I see a parsed document as a list of nodes, side by side in memory in tree-traversal order. Each node has addresses of parent, sibs and kiddies, token numbers for each attribute, and the address of a data structure which contains a property definition of its element type - including all values used for each attribute, by every element of its type within the document. I'd guess 20-40 bytes per node, average. With that, we can keep the tree structure of a good many kilonode documents in memory - and stand a fair chance of keeping one kilonode document in a hardware data cache, once we've read it from end to end.

CDATA leaves are special. They stand for the actual content, and read that content into memory when requested. They have some extra gear in them, to support hashing and sorting and stuff. XLink leaves are special too. They stand for separate documents and specific nodes in them. Physically, they contain the addresses of proxy elements, which specify whether the document in question is parsed in at present, and if so where it is, and if not, where to find it as a resource.

Put the pieces all together, and the picture emerges of our server comprising three major processes:

(1) The parser, running on a queue of document requests; compiled to EBNF automaton form, constantly converting XML text to tree form.

(2) The interpreter, running on a queue of execution requests; traversing in-memory parse trees, and building new ones; written in JVM code, or something similar.

(3) The deparser, converting new parse trees to source form, and flushing them back to disk; probably compiled, because it must maintain the free memory reserve.

That's the kind of system I think would keep a server I/O-bound, as it should be, with disk, RAM and CPU running pretty much in harmony.
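The compact node layout described above - nodes side by side in traversal order, linked by indices rather than pointers - might look like this. The field names are my own illustrative assumptions, not a specification:

```python
from dataclasses import dataclass

@dataclass
class Node:
    elem_type: int      # index into a shared element-type property table
    parent: int         # index of the parent node; -1 for the root
    next_sib: int       # index of the following sibling; -1 if none
    first_child: int    # index of the first child; -1 for a leaf

# The document <doc><a/><b/></doc>, stored flat in traversal order.
nodes = [
    Node(elem_type=0, parent=-1, next_sib=-1, first_child=1),   # doc
    Node(elem_type=1, parent=0,  next_sib=2,  first_child=-1),  # a
    Node(elem_type=2, parent=0,  next_sib=-1, first_child=-1),  # b
]

def children(nodes, i):
    # Walk the sibling chain starting at the first child.
    out, c = [], nodes[i].first_child
    while c != -1:
        out.append(c)
        c = nodes[c].next_sib
    return out

print(children(nodes, 0))   # the root's children: [1, 2]
```

Four small integers per node is in the spirit of the 20-40 byte estimate, and a flat array in traversal order is what gives the hardware cache a fair chance.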
There's more to a good XML processing system than I've described here. For instance, there's a content manager, which accesses and works through a mass of CDATA, searching and sorting - ultimately to return selected CDATA lists to the interpreter. Think of it as our internal search engine. There's need for an XML-based internal file system architecture, which can handle and cache directory searches and such.

Without taking those into account, though, I think I see the outlines of an XML system which runs, byte for byte of source text, about as fast as your average C compiler. More important than speed, is correctness. But that's another story.

Tata for now

Jonathan

A client! Okay, you guys start coding, and I'll go and see what they want.

------------------------------------------------------------------------------
------------------------------------------------------------------------------

To unsubscribe from this group, send an email to:
xpl-unsubscribe@o...

--- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:24:28
|
--- In xpl-dev@y..., cagle@o... wrote: Simon, Both so as not to risk ill will and to clarify a few points, I want to make the following responses:

> I'm not certain that XML makes 'too much of a demand', but it certainly
> makes demands and opens new possibilities. I tend to enjoy the
> 'disruptions' XML causes.

We are shifting paradigms here, and as a general rule of thumb, as one paradigm becomes ascendant another becomes obsolete. It's my personal conviction that the paradigm being shifted is not HTML -- although that will in fact be subsumed by the technology over time. Rather the paradigm is procedural programming itself. As I see it, over time VB, Java, and even (gasp) C++ will take on increasingly XML-like characteristics, to the point where eventually these languages largely disappear as independent entities. That is not necessarily to say that they will be cast in an XML description (though that may happen in some cases). Rather, the mechanisms for handling procedural languages will grow out of the hydra-like head of XML, with XML itself likely to evolve in the process.

> I don't think XML-RPC and SOAP use the "HTTP protocol incorrectly", though
> I certainly think it's worth considering more efficient transfer protocols
> with clearer security.

My biggest problem with any type of XML-based RPC is that, while you can write very efficient RPC code that effectively keeps most of the interaction in a periodic request from a client to a server to send in a larger stream of data (and this use I don't have problems with at all), it can also be used to control component interactions on the server one command at a time. RPCs have typically been complex things to write, involving skills that were sufficiently specialized that there was a prerequisite understanding of how best to optimize the RPC's service.
However, by making RPCs much simpler, it lowers the bar on the skills necessary to create such RPCs, which in turn means that you'll see more people use XML-RPCs to handle routine component control commands that should typically be handled within a compiled (or at least protected) control. Thus I have this disturbing vision of lots of RPCs basically performing incremental procedural coding between environments using XML as the basis, which is the worst possible way of designing such systems.

My second concern with RPCs is that they are running over http, typically through port 80. Firewalls exist for a purpose -- to keep critical subcomponents from being programmed illicitly. SOAP especially is designed (and this has been a common thread coming from a number of different vendors) to bypass the firewalls. Thus by explicit admission, SOAP in particular serves as a means of circumventing security measures put in place by sys admins in the first place.

These were the specific objections that I had when I said that XML-RPCs used the HTTP protocol incorrectly. The syntactic use of such techniques is undoubtedly correct, though they do rely on extensions to the protocol, but so do many other things. It is more in the usage of SOAP that I see a conflict with the design goals of HTTP, and the biggest danger from it. I should point out that I do see the obvious benefits of incorporating SOAP -- it is a powerful technology; it is however this exact power that worries me.

> I think it's time to consider changing the Internet to make better use of
> XML and to simplify implementing the possibilities XML opens up. It's not
> yet time to throw everything out and start over, but I'm glad to hear that
> people are willing to at least consider the possibility of a transition.

I'm not questioning the sentiment (which I personally agree with), but the feasibility.
The Internet evolved -- at no point was there a single conscious decision for it to appear in the form that it did; rather, as different people contributed different facets to this communication protocol and all the technologies overlaying it, the Internet lurched and lunged through different embodiments of itself. XML as a language for creating the semantic web is laudable but far too late; as a consequence, the best approach may very well be to introduce a new meme into the existing infrastructure that would make it evolve toward the newer requirements. This is a very counterintuitive notion, that one should grow or evolve the web, but I suspect that it may be the only way that we can effectively integrate this new paradigm into the systems at hand.

BTW, it's late, I've been chasing kids around all day, and I'm probably not making a lot of sense at the moment.

- Kurt Cagle

----- Original Message -----
From: "Simon St.Laurent" <simonstl@s...>
To: <xpl@e...>
Sent: Saturday, June 24, 2000 9:29 AM
Subject: [XPL] XML/Web infrastructure

> I'm going to reply to various bits, and hope this doesn't get too confusing.
>
> Richard Hein wrote:
> >www.xml.com has articles about some things we need to be
> >informed about,
> > including the foundational infrastructure of the 'net and how XML
> >makes too
> > much of a demand on the current infrastructure,
>
> I'm not certain that XML makes 'too much of a demand', but it certainly
> makes demands and opens new possibilities. I tend to enjoy the
> 'disruptions' XML causes.
>
> Kurt Cagle wrote:
> >I just spent three days closeted with Simon St. Laurent, who wrote the
> >article, and while I agree with him in part, I would also keep in mind that
> >he represents just one side of this issue.
>
> Kurt's completely right that there are _many_ sides to all of these issues.
> I wouldn't have written the article if it hadn't been for the strong
> opposition some of those opinions have generated.
I'd like to see XML
> carry on disrupting things, but not everyone is happy about that.
>
> Kurt wrote:
> >I do agree
> >with Simon with regard to the issue of SOAP and XML-RPC, and I have made my
> >feelings known about that in the VBXML board -- we are utilizing the HTTP
> >protocol incorrectly, placing too many demands upon it to efficiently handle
> >the use of the network as a common interchange medium for messages. It also
> >gives me more than a little pause about both security concerns and
> >architecture.
>
> I don't think XML-RPC and SOAP use the "HTTP protocol incorrectly", though
> I certainly think it's worth considering more efficient transfer protocols
> with clearer security.
>
> Then Richard wrote:
> >Kurt says that he
> >disagrees with Simon St. Laurent, who wrote an article on the problems with
> >the current infrastructure on http://www.xml.com in the sense that it may be
> >too late to change a lot of it now. I don't really know what to think about
> >that, but I wonder because it only took a few years for the internet to grow
> >out of obscurity into this huge thing, so why would it be so bad to change
> >it somewhat for XML?
>
> I think it's time to consider changing the Internet to make better use of
> XML and to simplify implementing the possibilities XML opens up. It's not
> yet time to throw everything out and start over, but I'm glad to hear that
> people are willing to at least consider the possibility of a transition.
>
> These are interesting times!
>
> Simon St.Laurent
> XML Elements of Style / XML: A Primer, 2nd Ed.
> http://www.simonstl.com - XML essays and books
>
> ------------------------------------------------------------------------
> Replace complicated scripts using 14 new HTML tags that work in
> current browsers.
> Form the Web today - visit:
> http://click.egroups.com/1/5769/2/_/809694/_/961864057/
> ------------------------------------------------------------------------
>
> To unsubscribe from this group, send an email to:
> xpl-unsubscribe@o...

--- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:24:16
|
--- In xpl-dev@y..., Jonathan Burns <saski@w...> wrote: Richard Anthony Hein wrote:

> Everyone,
>
> www.xml.com has articles about some things we need to be informed
> about,
> including the foundational infrastructure of the 'net and how XML
> makes too
> much of a demand on the current infrastructure, and one about
> "translets"
> and an XSLT virtual machine! Very important to XPL I think!

No kidding. That www.xml.com/pub is a very interesting place. I just scanned the St. Laurent and Dodds articles - I get part of them, but much of them makes reference to issues I haven't begun to study. What they're talking about, though, is related to what I've been brooding about while offline. Does XML demand something new in our Web paradigm? Should we expect compiled XSLT to make a real difference?

Compiling is what I'm talking about here. There is a persistent interest on the list in compiling XPL. I share in it, but from a skewed perspective. From one angle, I'm keen on grammars - it's a disappointment for me that EBNF should be set into the foundations, as a means of defining correct source parsing, but ignored as a high-level mechanism for combining XML structures. From another angle... I think that the benefit of compilation will not be transferred easily (if at all) from complex applications resident on single machines, to complex interactions distributed via comms protocols. I think they will show up to a degree on servers that are dealing heavily in XML - but only when a whole lot of related efficiency issues are addressed at the same time.

Roughly estimating, in the time my system downloads 1 kilobyte of HTML, the CPU can execute 100 million instructions. That wealth of processing power is employed by my browser, to access local resources like fonts, to render the content as X Windows primitives, and to pass them through to X Windows - which uses more CPU power to get them to the graphics board.
Compared with all of that going on, the processing requirements of an XML parser should be marginal. It could be implemented quite inefficiently, and hardly make a dent. Which gives us valuable leeway, for more important requirements.

I like it, that we are starting to see XML parsers being written in all the common scripting languages. It means you can choose your own platform-above-the-platform, and XML will be available to you. If you think about it, it's just an extension to CGI - i.e. processing in the interpreted language of your choice, including the generation of HTML output on the fly. I stress: your choice, of conceptual Web lubricant.

The downside is, What happens when development efforts for the various scripting languages get out of step with one another? And, What happens when they get out of step with XML tech developments? There is the horrid potential for a Balkanization of the platform-independent platforms - with one crowd of developers rushing in to capitalize on XML-via-Java, while another exploits XML-via-Perl - with the same wheels (hell, with giant chains of interdependencies) being invented on both sides of the divide. Supplementing the chaos with compiled XML-via-C, or -via-i386 machine architecture, brings nothing to the table, except some additional processing speed in the parsing and transformations parts of XML processing - which, on the client side, would hardly be noticed.

What about the server side, then? And what about the Internet relay in between? Naturally, I've thought about how that 1K of HTML or XML is The Bottleneck, and about how to pack more value into that 1K. We could compress the text, of course, before transmission, and unpack it on receipt. Or we could tokenize it - encode it into a stream of binary numbers. That would double or triple the content of the average kilobyte.
Maybe it's worth doing, but my sense is that a compression stage would be so straightforward, that people will be doing it without advice from me :-) And as for tokenization, that has problems - namespace and addressing problems (i.e. any two processes communicating by numbers must share equivalent lookup tables for what the numbers mean).

On the server side, is there enough XML processing going on in one place, that compilation is a significant gain? Maybe - and the Oracle people must think so, if they're excited by compiled XSLT translets. I'm thinking about online transaction processing (OLTP). Here we are in the DB and application services context, surrounded by interface formats - SQL and a thousand COM and CORBA interface schemata. To filter and join and translate among them is relatively easy - but it takes a bit of effort to set up, and probably the effort has to be reinvented system by system to some degree. And above all, the result of the effort is a translation stage which could be a bad bottleneck in a high-transaction-rate pipeline.

If the translation stage can be compiled, no more bottleneck. And if it can be compiled automatically, from an XML document set which contains the source and target interfaces in XML form, then no more system-by-system reinvention of the translator. That's the rationale I'm seeing for compilation. I think that's what the translet stuff is about.

Does XPL change this context? Or is it changed in this context? There are a lot of factors here, and a huge discussion, of which this post just scratches the surface. For a minute, put yourself in the position of a server system - whether it's raw data you're serving, or personalized interactions. Under your control is an inventory of data, the bulk of it perhaps of the same type, but generally heterogeneous. Your business is to search it, sort it, reformat and rearrange it, pack it up for transmission, unpack it on receipt, and maybe do some calculations on it.
You are equipped with XPL, which we'll assume is some extension of XSLT. By default, what you're doing most of, is accessing XPL source (tags, indentations and all) and passing it to an interpreter. The interpreter parses the source, builds a tree structure (parse tree), and sets this tree to work on the data at hand. (Below, I'll split this into a parser stage and a tree-processing stage, and use "interpreter" for the latter.)

Some kind of cursor runs up and down the parse tree - as directed by the XML data it's working on - and as a result, cursors run up and down trees of XML data as well, identifying elements and leaves. As a further result, the parse tree elements are activated, causing elements to be added to output trees in process of construction. In some cases, activated parse tree elements will make requests of the native system, e.g. to render the state of processing in a window. But by and large the server system is self-contained, the way that an HTML browser is.

The basic rule of economy is, never do the same job three times. That is, if you find yourself doing something for the second time, and you could recognize a third time in advance if you saw it coming - then don't just do the job and forget about it. Instead, cache the results. When the third time comes, just output the cached results. You can save lots of time that way.

In the days when CPU time was expensive, this technique was taken to extremes, in respect of processing overhead. The entire range of jobs which an application was to perform was worked out in advance, coded in some language, and pre-translated to machine code. Compilation. Ironically, the art of caching data was an afterthought, effectively done only in major shops. The job that had been automated was the translation from high-level source to machine code. It had only to be done once, in advance, never on the fly.
Interpreted languages, which repeated and repeated the parsing and machine-code-generation overhead, were regarded as something less than rocket science. The catch was this: In compilation, information was thrown away. This was partly because memory space was also at a premium. That which actually did the job, raw machine code, contained no labels, no syntactic niceties, no structured programming constructs, and of course no comments. There was no possibility for decompilation into something legible, nor for reflexive operations on the working code.

With all this in mind, consider how that server is spending its time, given XPL. I find it plausible to suppose that the server is I/O bound - if its XPL-based software is simplistic. I think it will be spending most of its time queued on communications, with brief periods in which it is queued on local disk access - and eyeblinks, in which it is actually CPU-bound. During the I/O bound intervals, processing will be going on, though not nearly to the capacity of the CPU. There will be CPU time to waste - and it will indeed be wasted.

On the other hand, I find it plausible that the server may spend a good deal of time CPU-bound - if it is being fed a steady transaction stream and also its XPL-based software is sophisticated, with sorting and caching and hashing employed to supply the end-use XPL processes with precisely the data which needs to be worked on. In the latter case it makes sense to ask, Is the XPL processing efficient in itself? Or is it throwing away results, and repeating operations needlessly?

Well, for one thing there will be a lot of parsing going on, by default, of both XPL code and XML data. That is sensible, if the source text is usually different with each parse; but it is wasteful, if the same source is being parsed repeatedly, just to build the same parse trees over and over. Most of the XPL code will be fixed - and so, its parsing should be done just once, and its parse-trees retained.
But most of the XML data will be heterogeneous, selected from all over the place, and some of it will be volatile, i.e. its content will be changing as it is updated, written out, read back in - and re-parsed to updated parse-trees. In that case, there will be benefit in making the parsing of data fast.

So let's assume that the parser will be compiled. As I've said in earlier posts, the way to get a fast parser for an EBNF language is to employ some equivalent of Yacc, to produce a recognizer automaton for the XML grammar. The form of the automaton is a lookup table - and looking up tables, and jumping from row to row, are based on a very small primitive set of operations, quite cheap to reimplement for multiple platforms.

This leaves us pretty much with the hard core of XPL processing - traversal and reconstruction of trees, with a little number-crunching on the side. Is there enough needless reproduction of results, to justify compilation?

On the negative side, we have here a process which can be considered a series of little processes, in which an XPL parse-tree is traversed, with the effect that a data tree is also traversed, and an output tree produced. Likely enough, parts of the code tree will be traversed many times - there has to be some equivalent of looping, after all. But also likely, there will not be much needlessly repeated overhead, merely from shifting from node to node of the code tree via links. The fact is, once we have parsed the source and created the code tree, we have more or less compiled the code already.

Good compiled code - lean, mean machine code, Real Programmers' code - is a string of primitives translated directly to the machine instruction set, and held together by the brute fact that they follow one another in memory. The minimal overhead of loading up the address of the next instruction is being carried out by the CPU itself, except for loops and calls. Not an instruction is wasted.
Good semi-compiled code allows a bit more slack. It is permissible that the next instruction is not hardwired in, but discovered on the fly, by handing a token to a tiny interpreter, or indexing into a lookup table. Finite-state automata are in this class; so are the threaded languages like Forth; and so is Java, with its virtual machine architecture.

In the server scenarios I've sketched, we have the slack. To imagine the server being CPU-bound, I had to imagine it being driven to the limits of its I/O by a continuous transaction stream, and its code having been heroically engineered to squeeze out unnecessary repetitions of data fetching. Within reasonable bounds, we can implement our low-level tree processing on whatever little interpreter is appropriate - say the JVM - without accusations flying around that we're wasting CPU power.

On the positive side ... Yes, yes, there's a positive side :-) ... The ideal is that our server is spending most of its time traversing trees. That's where the work gets done. To approach the ideal, we need the XML data we're working on to be in tree form. Before even that, we need it to be in memory.

(I've just lately been to Tim Bray's Annotated XML 1.0 Spec - an intricately hyperlinked document, backed by a couple thousand lines of Javascript. Tim notes that there's a problem getting the whole document into memory. He suggests the need for a "virtual tree-walking" mechanism, analogous to virtual memory. It's a little scary to consider that one document can occupy several meg of RAM. )

I think - this is vague as yet - that we get the most use of our CPU, if most of our code and data are in tree form, and the tree form is succinct. I see a parsed document as a list of nodes, side by side in memory in tree-traversal order.
Each node has addresses of parent, sibs and kiddies, token numbers for each attribute, and the address of a data structure which contains a property definition of its element type - including all values used for each attribute, by every element of its type within the document. I'd guess 20-40 bytes per node, average. With that, we can keep the tree structure of a good many kilonode documents in memory - and stand a fair chance of keeping one kilonode document in a hardware data cache, once we've read it from end to end. CDATA leaves are special. They stand for the actual content, and read that content into memory when requested. They have some extra gear in them, to support hashing and sorting and stuff. XLink leaves are special too. They stand for separate documents and specific nodes in them. Physically, they contain the addresses of proxy elements, which specify whether the document in question is parsed in at present, and if so where it is, and if not, where to find it as a resource. Put the pieces all together, and the picture emerges of our server comprising three major processes: (1) The parser, running on a queue of document requests; compiled to EBNF automaton form, constantly converting XML text to tree form. (2) The interpreter, running on a queue of execution requests; traversing in-memory parse trees, and building new ones; written in JVM code, or something similar. (3) The deparser, converting new parse trees to source form, and flushing them back to disk; probably compiled, because it must maintain the free memory reserve. That's the kind of system I think would keep a server I/O-bound, as it should be, with disk, RAM and CPU running pretty much in harmony. There's more to a good XML processing system than I've described here. For instance, there's a content manager, which accesses and works through a mass of CDATA, searching and sorting - ultimately to return selected CDATA lists to the interpreter. Think of it as our internal search engine. 
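The node layout described above - parent, sibling and child links, plus a pointer to a shared per-element-type record - can be sketched as follows. Field names and the shape of the type record are my own illustration of the idea, not a specification; in C these would be raw addresses, which is where the 20-40-byte estimate comes from.

```python
from dataclasses import dataclass, field
from typing import Optional

# Sketch of the proposed in-memory node: links to parent/sibs/kiddies,
# plus a shared ElementType record holding per-type attribute data.
# All names here are illustrative.

@dataclass
class ElementType:
    name: str
    # all values used for each attribute, across the whole document
    attr_values: dict = field(default_factory=dict)

@dataclass
class Node:
    etype: ElementType
    parent: Optional["Node"] = None
    first_child: Optional["Node"] = None
    next_sibling: Optional["Node"] = None

    def append(self, child):
        """Attach child as the last kid of this node."""
        child.parent = self
        if self.first_child is None:
            self.first_child = child
        else:
            n = self.first_child
            while n.next_sibling:
                n = n.next_sibling
            n.next_sibling = child
        return child

def walk(node):
    """Yield nodes in tree-traversal (document) order, following links
    only -- the cheap pointer-chasing the interpreter would spend its
    time on."""
    while node:
        yield node
        if node.first_child:
            yield from walk(node.first_child)
        node = node.next_sibling
```

Because the nodes of one document sit side by side in memory in this traversal order, a full walk touches memory sequentially - which is what gives a kilonode document a fair chance of staying in the data cache.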
There's need for an XML-based internal file system architecture, which can handle and cache directory searches and such. Without taking those into account, though, I think I see the outlines of an XML system which runs, byte for byte of source text, about as fast as your average C compiler. More important than speed, is correctness. But that's another story. Tata for now Jonathan A client! Okay, you guys start coding, and I'll go and see what they want. --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:22:53
|
--- In xpl-dev@y..., "Simon St.Laurent" <simonstl@s...> wrote: I'm going to reply to various bits, and hope this doesn't get too confusing. Richard Hein wrote: >www.xml.com has an articles about some things we need to be >informed about, > including the foundational infrastructure of the 'net and how XML >makes too > much of a demand on the current infrastructure, I'm not certain that XML makes 'too much of a demand', but it certainly makes demands and opens new possibilities. I tend to enjoy the 'disruptions' XML causes. Kurt Cagle wrote: >I just spent three days closeted with Simon St. Laurent, who wrote the >article, and while I agree with him in part, I would also keep in mind that >he represents just one side of this issue. Kurt's completely right that there are _many_ sides to all of these issues. I wouldn't have written the article if it hadn't been for the strong opposition some of those opinions have generated. I'd like to see XML carry on disrupting things, but not everyone is happy about that. Kurt wrote: >I do agree >with Simon with regard to the issue of SOAP and XML-RPC, and I have made me >feelings known about that in the VBXML board -- we are utilizing the HTTP >protocol incorrectly, placing too many demands upon it to efficiently handle >the use of the network as a common interchange medium for messagess. It also >gives me more than a little pause about both security concerns and >architecure. I don't think XML-RPC and SOAP use the "HTTP protocol incorrectly", though I certainly think it's worth considering more efficient transfer protocols with clearer security. Then Richard wrote: >Kurt says that he >disagrees with Simon St. Laurent, who wrote an article on the problems with >the current infrastructure on http://www.xml.com in the sense that it may be >too late to change a lot of it now. 
I don't really know what to think about >that, but I wonder because it only took a few years for the internet to grow >out of obscurity into this huge thing, so why would it be so bad to change >it somewhat for XML? I think it's time to consider changing the Internet to make better use of XML and to simplify implementing the possibilities XML opens up. It's not yet time to throw everything out and start over, but I'm glad to hear that people are willing to at least consider the possibility of a transition. These are interesting times! Simon St.Laurent XML Elements of Style / XML: A Primer, 2nd Ed. http://www.simonstl.com - XML essays and books --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:22:45
|
--- In xpl-dev@y..., Pat McCotter <pat0@s...> wrote: This should be http://eXtenDE.sourceforge.net/ On Sat, 24 Jun 2000 09:18:42 -0400 (EDT), Michael Lauzon <ce940@f...> wrote: | |Here are some links to stuff we should read, the first one isn't finished |yet, so the author says: | |http://www.treelight.com/software/bootstrap/encodingSource.html | |This link appears to be down, but SourceForge is a slow website: | |http://eXtenDE.sourceforge.com/ | |I don't know what we'll find on the latter one, as I can't connect to |it. | |There is also this one: | |http://www.idevresource.com/ | | |Michael |http://www.geocities.com/SiliconValley/Way/9180/ | |'Eat, drink, and be merry, for tomorrow you may work.' | -- Cheers Pat McCotter pat0@s... PGP Key - 0x4E5E46BB Fingerprint 82ED 8A86 68B9 BEBF 100B 06CF 603B E658 --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:21:27
|
--- In xpl-dev@y..., Michael Lauzon <ce940@f...> wrote: Here are some links to stuff we should read, the first one isn't finished yet, so the author says: http://www.treelight.com/software/bootstrap/encodingSource.html This link appears to be down, but SourceForge is a slow website: http://eXtenDE.sourceforge.com/ I don't know what we'll find on the latter one, as I can't connect to it. Michael http://www.geocities.com/SiliconValley/Way/9180/ 'Eat, drink, and be merry, for tomorrow you may work.' --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:21:13
|
--- In xpl-dev@y..., "Richard Anthony Hein" <935551@i...> wrote: Welcome back Jonathan, LOL! I hate it when my connection is down too ... but the reason for your problems is too funny! Hehehe. Sorry...! Thanks for the affirmation. I appreciate it. Yeah, I forgot about the neuroscience background (not neurology ... that's beyond nesc., at the medical school level). However, some of that may be useful to a degree, but perhaps it gets too far afield sometimes. <sidetrack>I see the 'net as a neural network that is forming in an emergent fashion, like the early development of connections in the brain, and the beginnings of simple communications using those early connections. Do I think the 'net will ever attain true consciousness? No. I don't think so; at least not in the next 100 years, and definitely not until quantum computers become a reality (I think the brain is a quantum computer - although we can simulate most of the functions classically, the inherent nature of consciousness is at the QM level, IMNSHO). I am not sure all people attain true consciousness either. But don't quote me on this; things change. </sidetrack> Sometimes it's like I am trying to build a brain when we (the world) don't even understand the basics of brain function fully, and have no real idea what the true nature of thought even IS. Your digital logic and assembler background will have to be tapped a lot here I think, as well as everything else there you listed. I am not so sure that sketchy web technology knowledge is that bad, given that the current infrastructure is probably not even well suited for XML, and XPL. That's not to say we shouldn't try to get a grip on the fundamentals, since that is necessary for learning what to do this time round. Kurt says that he disagrees with Simon St. Laurent, who wrote an article on the problems with the current infrastructure on http://www.xml.com in the sense that it may be too late to change a lot of it now. 
I don't really know what to think about that, but I wonder because it only took a few years for the internet to grow out of obscurity into this huge thing, so why would it be so bad to change it somewhat for XML? Anyways, that's it for now. I am actually working on compiling the discussions that have taken place in the group since the beginning into one condensed overview of everything that seems important, the requirements (so far not many), and ideas that people have thought were good, while weeding out the ones that people didn't really respond to, like my instruction set in XML to make compilers idea (which I thought was a damn good idea - but I don't know much at all about compilers, so please, if you respond to this Jonathan, explain to me why it's not a good idea?), or were just plain wrong. G'night, -----Original Message----- From: me@m... [mailto:me@m...]On Behalf Of Jonathan Burns Sent: June 23, 2000 7:41 PM To: xpl@e... Subject: Re: [XPL] strengths and weaknesses Kurt Cagle wrote: Richard, I'd dare say that simply keeping things organized around here is a better strength than many of us bring to this table -- you're doing good with it, and you're insight will carry you far. Hear, hear! For myself, Strengths -- working with most scripting technologies since the early 1980s, both client and server, a multimedia background, grounding in systems theory, complex analysis and chaos, and in general a fairly broad overview of programming principles and practices. Interest in both human and computer based languages, semantics, and philosophy. Writes pretty good science fiction and draws a sexy mermaid. Weaknesses -- not well organized (what do you expect, I study chaos!), database skills at the basic SQL level (I could tell you what a trigger was, but would have to look up its syntax to write one), no formal training as a computer programmer (which may or may not be a weakness), tendency to overcommit to projects. Kurt Cagle Hmmm. 
Strengths - background in mathematics (good for logical relationships) and physics (good for analogies). Long-term (25 years) interest in programming languages. Survivor of middle-era OOP disputes, and current contributor to Pattern Languages of Programming discussion group. Solid grounding in C/C++. Experience, mostly amateur, with a dozen languages. 8 years as university tutor, spec. digital logic and assembler programming. Plain English. Reasonably shrewd estimator of how much longer everything takes, and how much more it costs. Fierce believer in power of skill combinations in small groups. Weaknesses - gaps in databases, and communications protocols. Sketchy Web techs knowledge. Maker of mountains from molehills. Subject to bouts of despair. I actually think we've got a very good team already - at least for purposes of establishing goals and writing up design principles. It will take stamina, though. I'm certain there will be issues we have to go over and over again, and we'll probably feel we've sweated blood over every document we produce. Put it this way. We're already on the edge of the international standards community for XML technologies. What we can find out, by mining and studying the W3C and XML-DEV literature, will bring us level with the most experienced workers in the field, quite soon - just as Kurt promised. Not many people get this kind of opportunity. Hey, Richard. Those are good skills. And you left out the neurology, which I don't think is insignificant in the least. Both of us need a deeper XML background. Beyond that, your DB knowledge complements my programming history. Skills like those make it worthwhile for Kurt to spend attention on this group. Michael is getting into end-applications research. And the others, as far as I can see, have a pretty good grip on relevant topics. Nobody's wasting space here. I note that your researches have brought both Groves and WorldOS into the framework of discussion. Spot on target. 
Nobody begrudges time spent with your girlfriend. And when it comes to self-doubt, I guess I can still take the likes of you on points. :-) Seeya Jonathan but look who follows in my train a desert ant a tamerlane who ate a pyramid in half that he might get at and devour the mummies of six hundred kings who in remote antiquity stepped on and crushed ancestors of his - archy's life of mehitabel To unsubscribe from this group, send an email to: xpl-unsubscribe@o... --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:21:03
|
--- In xpl-dev@y..., "Richard Anthony Hein" <935551@i...> wrote: Thanks Kurt, for the kind words ... too kind I think! My organizational skills don't seem so good to me; my bedroom is a clear indicator of that! :-) In any case, thank you. I study chaos too! Actually, that's a large claim ... I should say, I have read some material on chaos, including a very interesting book called Chaos: Making of a New Science, by James Gleick, and various internet essays and papers. Hey, I have to look up syntax for things all the time. I remember my COBOL days ... ugh, I wish I didn't ... I had to double check the order of the divisions - Identification Division, Procedural Division ... ?? What the hell else?? Sheesh. See? I am not even sure if those first two are right! Only practice makes a difference, and if I am doing mostly maintenance, I don't get used to the whole syntax and structure enough to do it without references when it's time to write a program from scratch. I've got VB pretty well down, but when you consider all the objects ... well, I have to check references for them a lot. Thank goodness for autocompletion! Then again, I probably know the syntax of XML and XSL better than any other language I have used so far, because there isn't the rich tools available to provide the help I have become used to, and lean on, in Visual Studio. I am still not entirely used to the structure and syntax of ASP. I tend to mix up VBScript and VB syntax, get an error message, and sit there like a dummy, saying, "WHY!? WHY!?", before I clue in. I CAN'T WAIT until ASP+ and VB7!! By then I will know Java too, and C++, I keep promising myself. And hey, MS is supposed to be announcing C# (C-Sharp), a new language soon, which will be not unlike Java from the sounds of it, in a lot of ways, but intended to allow web services to be built more easily, around a distributed computing model. Sounds not unlike XPL in some ways too, which may be very interesting. 
Bill Gates gave a speech about "data clouds" using XML at the .NET (formerly Next Generation Windows Services) announcement yesterday, which harkens to our discussions about XPL-fog in an uncanny way. You probably know all of this already. Obviously this idea (data clouds, data fog, XML fog, whatever) has reached critical mass in the consciousness of the IT world, and is springing from the minds of many people at once. Just like my idea of a question and answer web site where you offer people money for information (how do I blah blah blah - I'll pay you $2.00) ... all of a sudden there was a whole bunch of sites starting up that did just that, in multiple variations and business models. Boy I was disappointed, especially since I had asked for a loan to get it started months before I ever saw one of these sites (and I searched for it) and was refused - "I don't see how that would work ... why wouldn't someone just use a search engine? That gets you answers for free ...." Grrr.... I know I am way off topic, so I'll shut up now. We actually have a lot in common - I have always been interested in language, and had an overview of linguistics in the context of neuroscience at university, plus I too can draw a pretty sexy mermaid! :-) LOL! Richard A. Hein -----Original Message----- From: Kurt Cagle [mailto:cagle@o...] Sent: June 21, 2000 8:12 PM To: xpl@e... Subject: Re: [XPL] strengths and weaknesses Richard, I'd dare say that simply keeping things organized around here is a better strength than many of us bring to this table -- you're doing good with it, and your insight will carry you far. For myself, Strengths -- working with most scripting technologies since the early 1980s, both client and server, a multimedia background, grounding in systems theory, complex analysis and chaos, and in general a fairly broad overview of programming principles and practices. Interest in both human and computer based languages, semantics, and philosophy. 
Writes pretty good science fiction and draws a sexy mermaid. Weaknesses -- not well organized (what do you expect, I study chaos!), database skills at the basic SQL level (I could tell you what a trigger was, but would have to look up its syntax to write one), no formal training as a computer programmer (which may or may not be a weakness), tendency to overcommit to projects. Kurt Cagle --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:20:53
|
--- In xpl-dev@y..., "Kurt Cagle" <cagle@o...> wrote: Richard, I just spent three days closeted with Simon St. Laurent, who wrote the article, and while I agree with him in part, I would also keep in mind that he represents just one side of this issue. I think it should be pointed out that James Clark's processor is a conceptual test bed, not a highly optimized performance engine, and this is a point that he himself readily acknowledges (which is why I'm a little surprised that IBM points out that people working with XSLT should use it). I would second the point about compiled XSLT, however. XSLT involves a two-step process -- interpreting an XSLT document into a formal tokenized structure, and then running the data through the tokenized structure. Not surprisingly, if you have to parse this information every time, you get exceedingly inefficient systems. However, you're seeing the rise of XSLT cached compilers that can optimize this process considerably. I suspect that over time this will be the preferred way of working with such structures. I don't honestly believe that starting over at this stage is a viable option. We are too far down the road now, and there are too many billions already invested to make any changes to these standards possible. I do agree with Simon with regard to the issue of SOAP and XML-RPC, and I have made my feelings known about that in the VBXML board -- we are utilizing the HTTP protocol incorrectly, placing too many demands upon it to efficiently handle the use of the network as a common interchange medium for messages. It also gives me more than a little pause about both security concerns and architecture. More later. -- Kurt ----- Original Message ----- From: "Richard Hein" <935551@i...> To: <xpl@e...> Sent: Friday, June 23, 2000 12:47 PM Subject: [XPL] Re: This is about compiled XSLT which allows STREAMING XSLT > Just thought I would rewrite the subject line, to catch your > attention. 
IIRC, Kurt mentioned something about streaming XSLT > transformations a while back ... I have to look for the message. > Unfortunately, there is not much more info. about it. The translets > are MUCH faster than James Clark's processor. > > Richard A. Hein > > > --- In xpl@e..., "Richard Anthony Hein" <935551@i...> wrote: > > Everyone, > > > > www.xml.com has articles about some things we need to be > informed about, > > including the foundational infrastructure of the 'net and how XML > makes too > > much of a demand on the current infrastructure, and one > about "translets" > > and an XSLT virtual machine! Very important to XPL I think! > > > > Richard A. Hein --- End forwarded message --- |
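The two-step process Kurt describes - compile the stylesheet once into a tokenized structure, then run many documents through it - is the cached-compiler pattern. Here is a minimal sketch of that pattern. The "compiled transform" is a deliberate stand-in (it just renames element tags according to a tiny rule string); a real XSLT engine would cache its tokenized stylesheet structure the same way. All names here are my own.

```python
import xml.etree.ElementTree as ET

# Cached-compiler sketch: pay the stylesheet-compilation cost once,
# keyed by the stylesheet source, then reuse the compiled form for
# every document. The transform itself is a toy tag-renamer.

_cache = {}

def compile_transform(stylesheet_src):
    """Pretend-compile a 'stylesheet' of the form "old=new,old2=new2"
    (a stand-in for parsing real XSLT into a tokenized structure)."""
    if stylesheet_src not in _cache:
        rules = dict(pair.split("=") for pair in stylesheet_src.split(","))

        def transform(doc_src, rules=rules):
            # The per-document work: parse input, apply compiled rules.
            root = ET.fromstring(doc_src)
            for el in root.iter():
                el.tag = rules.get(el.tag, el.tag)
            return ET.tostring(root, encoding="unicode")

        _cache[stylesheet_src] = transform
    return _cache[stylesheet_src]
```

The point of the pattern: repeated calls with the same stylesheet return the same compiled object, so only the document-side work is paid per request - exactly the inefficiency Kurt says re-parsing "every time" creates.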
From: reid_spencer <ras...@re...> - 2002-01-31 09:20:35
|
--- In xpl-dev@y..., "Richard Hein" <935551@i...> wrote: Just thought I would rewrite the subject line, to catch your attention. IIRC, Kurt mentioned something about streaming XSLT transformations a while back ... I have to look for the message. Unfortunately, there is not much more info. about it. The translets are MUCH faster than James Clark's processor. Richard A. Hein --- In xpl@e..., "Richard Anthony Hein" <935551@i...> wrote: > Everyone, > > www.xml.com has articles about some things we need to be informed about, > including the foundational infrastructure of the 'net and how XML makes too > much of a demand on the current infrastructure, and one about "translets" > and an XSLT virtual machine! Very important to XPL I think! > > Richard A. Hein --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:19:48
|
--- In xpl-dev@y..., Jonathan Burns <saski@w...> wrote: Kurt Cagle wrote: > Richard, > > I'd dare say that simply keeping things organized around here is a > better > strength than many of us bring to this table -- you're doing good with > it, > and your insight will carry you far. > Hear, hear! > For myself, > Strengths -- working with most scripting technologies since the early > 1980s, > both client and server, a multimedia background, grounding in systems > theory, complex analysis and chaos, and in general a fairly broad > overview > of programming principles and practices. Interest in both human and > computer > based languages, semantics, and philosophy. Writes pretty good > science > fiction and draws a sexy mermaid. > > Weaknesses -- not well organized (what do you expect, I study chaos!), > > database skills at the basic SQL level (I could tell you what a > trigger was, > but would have to look up its syntax to write one), no formal training > as a > computer programmer (which may or may not be a weakness), tendency to > overcommit to projects. > > Kurt Cagle Hmmm. Strengths - background in mathematics (good for logical relationships) and physics (good for analogies). Long-term (25 years) interest in programming languages. Survivor of middle-era OOP disputes, and current contributor to Pattern Languages of Programming discussion group. Solid grounding in C/C++. Experience, mostly amateur, with a dozen languages. 8 years as university tutor, spec. digital logic and assembler programming. Plain English. Reasonably shrewd estimator of how much longer everything takes, and how much more it costs. Fierce believer in power of skill combinations in small groups. Weaknesses - gaps in databases, and communications protocols. Sketchy Web techs knowledge. Maker of mountains from molehills. Subject to bouts of despair. I actually think we've got a very good team already - at least for purposes of establishing goals and writing up design principles. 
It will take stamina, though. I'm certain there will be issues we have to go over and over again, and we'll probably feel we've sweated blood over every document we produce. Put it this way. We're already on the edge of The international standards community for XML technologies. What we can find out, by mining and studying the W3C and XML-DEV literature, will bring us level with the most experienced workers in the field, quite soon - just as Kurt promised. Not many people get this kind of opportunity. Hey, Richard. Those are good skills. And you left out the neurology, which I don't think is insignificant in the least. Both of us need a deeper XML background. Beyond that, your DB knowledge complements my programming history. Skills like those make it worthwhile for Kurt to spend attention on this group. Michael is getting into end-applications research. And the others, as far as I can see, have a pretty good grip on relevant topics. Nobody's wasting space here. I note that your researches have brought both Groves and WorldOS into the framework of discussion. Spot on target. Nobody begrudges time spent with your girlfriend. And when it comes to self-doubt, I guess I can still take the likes of you on points. :-) Seeya Jonathan but look who follows in my train a desert ant a tamerlane who ate a pyramid in half that he might get at and devour the mummies of six hundred kings who in remote antiquity stepped on and crushed ancestors of his - archy's life of mehitabel --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:19:17
|
--- In xpl-dev@y..., "Richard Anthony Hein" <935551@i...> wrote: Everyone, www.xml.com has articles about some things we need to be informed about, including the foundational infrastructure of the 'net and how XML makes too much of a demand on the current infrastructure, and one about "translets" and an XSLT virtual machine! Very important to XPL I think! Richard A. Hein --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:18:56
|
--- In xpl-dev@y..., "Kurt Cagle" <cagle@o...> wrote: Richard, I'd dare say that simply keeping things organized around here is a better strength than many of us bring to this table -- you're doing good with it, and your insight will carry you far. For myself, Strengths -- working with most scripting technologies since the early 1980s, both client and server, a multimedia background, grounding in systems theory, complex analysis and chaos, and in general a fairly broad overview of programming principles and practices. Interest in both human and computer based languages, semantics, and philosophy. Writes pretty good science fiction and draws a sexy mermaid. Weaknesses -- not well organized (what do you expect, I study chaos!), database skills at the basic SQL level (I could tell you what a trigger was, but would have to look up its syntax to write one), no formal training as a computer programmer (which may or may not be a weakness), tendency to overcommit to projects. Kurt Cagle ----- Original Message ----- From: "Richard Hein" <935551@i...> To: <xpl@e...> Sent: Wednesday, June 21, 2000 4:22 PM Subject: [XPL] strengths and weaknesses > Dear XPLers, > > I want to make a few comments on the past little while I have been > part of this group, and the feelings I have so far about taking > part. I feel that everyone probably has strengths and weaknesses in > relation to what we are doing, and maybe it's time we let each other > know what we can and can't really do, so that we are able to > effectively split up some tasks among members. > > I personally am not a great programmer. That's a weakness. My OOP > experience is minimal, and only related to my own personal research > and tutorials I've taken. I've never written a program from start to > finish in Java, C++ or any other OOP language. This bites, because > it obviously limits what I can do. 
I program in VB and (mostly VBA, > for Access, in which I have completed 3 full lifecycle database > applications in the past year, from analysis to support, including > one that uses a Palm III to collect and scan inventory, and done lots > of maintenance and user-interface stuff for about 3 years), SQL (MS > SQL Server 7 - but I am not very experienced with it). I am not a > web developer yet ... but have been trying really hard to become > one. > > I think I understand most of the XML concepts and technologies, what > they are able to do, and how they relate, but I am far from a > complete understanding. I have tried and tried to make an > interactive database on the web using XML and ADO linked to SQL > Server, using stored procedures, for about 2 months, and spent about > 9 months so far, trying to absorb all the info related to XML I can > handle. I still haven't pulled off the interactive database yet. > > That's pretty sad, I know. > > The reason is almost certainly my weakness in ASP, COM and Java > related technologies, like EJBs. At this point I haven't been able > to work it all out yet. Presenting static data, or data input > controls in XML/XSL is easy, but trying to get it back to the > database from which it originated, has proven very hard for me. > > Just last night I figured it out - in theory, and soon in practice, I > HOPE. I didn't realize that it required some things I didn't know > about to get that data back in. I found a tutorial at www.xml- > zone.com that finally made it clear what I had to do, and I am pretty > happy to have read the author's comments that it was a lot of work > for him to figure out how to do the same thing I have been trying to > do. At least he had the benefit of other programmers to talk to > while figuring it out ... I work independently, and the only > programmer communication I get is online. Sometimes I am just too > embarrassed to ask for help, because I think it should be obvious. 
> Well, I finally realized that my pride has gotten in the way of my > success. > > So, I am putting pride aside, and saying I don't know much about any > of this stuff, even though I research and study it all day and into > the early morning hours. I have been coddled by VB and VBA for so > long I have forgotten most of what I learned about programming to > begin with (I started programming Basic games from code in magazines > back when I was 7 years old, on our TRS-80 - dad read the code letter > by letter, while I typed it in, and we'd switch when I got tired). > > So what do I have to offer the XPL group at all?? > > I hope I can offer my research ability. That's my strength, I > believe, more than anything. That's what I do best ... look for > information that relates, and pass it along. Sometimes it's useless, > and wrong, because of the limitations I mentioned above. Sometimes > it's helpful. > > For those of you that are really talented at programming, and have a > strong knowledge of compilers, languages, design, project management, > and internet protocols (that's another weakness for me), etc ..., you > can spend a lot more time doing those things, and get me to search > out the corners of the internet for information you need. > > I know enough about this stuff to be able to find the information you > might need, if you want my help, even if I don't know exactly how to > implement it. Then I can learn more, and grow, and you don't have to > waste as much time as you might otherwise. In return, I will be > gaining knowledge and wonderful experience from you all. By the time > this thing is done, I will be a great XPL programmer! :-) I am > learning Java right now as well, and studying up on computer language > design, semantics, compilers and more, just so I can be valuable to > this group. > > However, I recognize the fact that I say far-out things, and make > major mistakes. I need input concerning these mistakes, and > unrealistic ideas. 
Please, and I mean it sincerely, DON'T let me go > away believing something WRONG! Correct me, and I will learn and > grow. Maybe it's just annoying to have to respond to things I say > that are nonsense to you, and I can understand that. But the choice > is to either correct me, or to let me say stupid things forever, and > be a burden to the group, until I finally give up and leave. > > I really want to be a part of this, and it's become an obsession - > just ask my girlfriend - boy she hates XPL! I think that there are > other people, in similar position to me in this group who feel they > can't contribute much, but are far less vocal (um ... well, not > VOCAL, but you get my drift) than I am. > > However, they must have something to contribute, and together we can > be very beneficial ... just the very idea that if we don't understand > XPL, lots of other people won't either, is reason enough for us to be > here. > > On the other hand, it must be a pain in the proverbial you-know-what > to try to work on something like XPL, and be surrounded by > unknowledgable people, who interfere with the "real" work of the > experts and gurus among the group. > > But on the other-other-hand (if you are a three armed mutant, like > me, which explains why I type so bloody much ;-)), maybe the fact > that we are coming in with a pretty open (or empty - like Buddha :- )) > mind will help XPL break through the classic paradigms that other > language concepts hold to, but may not work well at all for XPL. > Older and more experienced people in OOP may be stuck on the idea > that XPL should be like [insert your favorite language here], but > that may be completely wrong for the new framework that XML demands > to make the "programmable web". > > I'm a dreamer and an idea man. Perhaps 90 - 99% of my ideas don't > work, and 90 - 99% of my dreams have never come true. But I have > hundreds of them, so one of them is going to work someday! 
I love to > study, and love to read about new things, but I am not the type of > person who is good at actually doing it ... I feel that the fun is in > discovery, not implementation. My motivation is mostly gone once I > figure out how something is done, and when I go to use it, I don't > weather through it well, because it's boring to me. So that means a > lot of the time I don't really know if it will work - which is why I > say, "perhaps this will be helpful". > > So, these are my strengths and weaknesses, and I hope that you all > can work with them, and maybe in time those weaknesses will turn to > strengths. I hope so. I have no illusions that XPL is a massive > undertaking, and will take a long time to bring into the world, so I > need to know if you all will be able to stand me, and if I am helping > or hurting the group, because I want to see it happen - even if I am > just a bystander (although it's way more fun to be part of it), and I > don't want to be a stumbling block. > > Sorry for the long email ... again. > > Sincerely, > > Richard A. Hein > > > > > -------------------------------------------------------------------- ---- > SALESFORCE.COM MAKES SOFTWARE OBSOLETE > Secure, online sales force automation with 5 users FREE for 1 year! > http://click.egroups.com/1/2658/2/_/809694/_/961629780/ > -------------------------------------------------------------------- ---- > > To unsubscribe from this group, send an email to: > xpl-unsubscribe@o... > > > --- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:18:04
|
--- In xpl-dev@y..., "Richard Hein" <935551@i...> wrote:

Dear XPLers,

I want to make a few comments on the past little while I have been part of this group, and the feelings I have so far about taking part. I feel that everyone probably has strengths and weaknesses in relation to what we are doing, and maybe it's time we let each other know what we can and can't really do, so that we are able to effectively split up some tasks among members.

I personally am not a great programmer. That's a weakness. My OOP experience is minimal, and only related to my own personal research and tutorials I've taken. I've never written a program from start to finish in Java, C++ or any other OOP language. This bites, because it obviously limits what I can do. I program in VB (and mostly VBA, for Access, in which I have completed 3 full-lifecycle database applications in the past year, from analysis to support, including one that uses a Palm III to collect and scan inventory, and have done lots of maintenance and user-interface work for about 3 years), and SQL (MS SQL Server 7 - but I am not very experienced with it).

I am not a web developer yet ... but have been trying really hard to become one. I think I understand most of the XML concepts and technologies, what they are able to do, and how they relate, but I am far from a complete understanding. I have tried and tried to make an interactive database on the web using XML and ADO linked to SQL Server, using stored procedures, for about 2 months, and have spent about 9 months so far trying to absorb all the info related to XML I can handle. I still haven't pulled off the interactive database yet. That's pretty sad, I know. The reason is almost certainly my weakness in ASP, COM and Java-related technologies, like EJBs. At this point I haven't been able to work it all out yet. Presenting static data, or data input controls, in XML/XSL is easy, but getting the data back to the database from which it originated has proven very hard for me.

Just last night I figured it out - in theory, and soon in practice, I HOPE. I didn't realize that it required some things I didn't know about to get that data back in. I found a tutorial at www.xml-zone.com that finally made it clear what I had to do, and I am pretty happy to have read the author's comments that it was a lot of work for him to figure out how to do the same thing I have been trying to do. At least he had the benefit of other programmers to talk to while figuring it out ... I work independently, and the only programmer communication I get is online. Sometimes I am just too embarrassed to ask for help, because I think it should be obvious.

Well, I finally realized that my pride has gotten in the way of my success.

So, I am putting pride aside, and saying I don't know much about any of this stuff, even though I research and study it all day and into the early morning hours. I have been coddled by VB and VBA for so long I have forgotten most of what I learned about programming to begin with (I started programming Basic games from code in magazines back when I was 7 years old, on our TRS-80 - dad read the code letter by letter, while I typed it in, and we'd switch when I got tired).

So what do I have to offer the XPL group at all?

I hope I can offer my research ability. That's my strength, I believe, more than anything. That's what I do best ... look for information that relates, and pass it along. Sometimes it's useless, and wrong, because of the limitations I mentioned above. Sometimes it's helpful.

For those of you who are really talented at programming, and have a strong knowledge of compilers, languages, design, project management, and internet protocols (that's another weakness for me), etc., you can spend a lot more time doing those things, and get me to search out the corners of the internet for information you need.

I know enough about this stuff to be able to find the information you might need, if you want my help, even if I don't know exactly how to implement it. Then I can learn more, and grow, and you don't have to waste as much time as you might otherwise. In return, I will be gaining knowledge and wonderful experience from you all. By the time this thing is done, I will be a great XPL programmer! :-) I am learning Java right now as well, and studying up on computer language design, semantics, compilers and more, just so I can be valuable to this group.

However, I recognize the fact that I say far-out things, and make major mistakes. I need input concerning these mistakes, and unrealistic ideas. Please, and I mean it sincerely, DON'T let me go away believing something WRONG! Correct me, and I will learn and grow. Maybe it's just annoying to have to respond to things I say that are nonsense to you, and I can understand that. But the choice is to either correct me, or to let me say stupid things forever, and be a burden to the group, until I finally give up and leave.

I really want to be a part of this, and it's become an obsession - just ask my girlfriend - boy, she hates XPL! I think that there are other people in a similar position to me in this group, who feel they can't contribute much, but are far less vocal (um ... well, not VOCAL, but you get my drift) than I am.

However, they must have something to contribute, and together we can be very beneficial ... just the very idea that if we don't understand XPL, lots of other people won't either, is reason enough for us to be here.

On the other hand, it must be a pain in the proverbial you-know-what to try to work on something like XPL, and be surrounded by unknowledgeable people, who interfere with the "real" work of the experts and gurus among the group.

But on the other-other-hand (if you are a three-armed mutant, like me, which explains why I type so bloody much ;-)), maybe the fact that we are coming in with a pretty open (or empty - like Buddha :-)) mind will help XPL break through the classic paradigms that other language concepts hold to, but may not work well at all for XPL. Older and more experienced people in OOP may be stuck on the idea that XPL should be like [insert your favorite language here], but that may be completely wrong for the new framework that XML demands to make the "programmable web".

I'm a dreamer and an idea man. Perhaps 90-99% of my ideas don't work, and 90-99% of my dreams have never come true. But I have hundreds of them, so one of them is going to work someday! I love to study, and love to read about new things, but I am not the type of person who is good at actually doing it ... I feel that the fun is in discovery, not implementation. My motivation is mostly gone once I figure out how something is done, and when I go to use it, I don't weather through it well, because it's boring to me. So that means a lot of the time I don't really know if it will work - which is why I say, "perhaps this will be helpful".

So, these are my strengths and weaknesses, and I hope that you all can work with them, and maybe in time those weaknesses will turn to strengths. I hope so. I have no illusions here - XPL is a massive undertaking, and will take a long time to bring into the world - so I need to know if you all will be able to stand me, and whether I am helping or hurting the group, because I want to see it happen - even if I am just a bystander (although it's way more fun to be part of it) - and I don't want to be a stumbling block.

Sorry for the long email ... again.

Sincerely,

Richard A. Hein

--- End forwarded message --- |
From: reid_spencer <ras...@re...> - 2002-01-31 09:17:38
|
--- In xpl-dev@y..., Rod Moten <rod@c...> wrote:

>X-Mailer: Windows Eudora Pro Version 3.0 (32)
>Date: Tue, 20 Jun 2000 14:16:46 -0500
>To: Rod Moten <rod@c...>
>Subject: Re: [XPL] formal semantics of XSLT and other XML langs - Jonathan, please read
>
>Well, actually, it's a bit more modest: denotational semantics for an old
>(and small) version of XPath, the pattern sublanguage used within XSLT.
>The actual XPath standard is much larger and does not have formal semantics
>yet, afaik (here's a challenge for you :).
>
>Sasha
>
>At 11:19 AM 6/20/00 -0500, you wrote:
>>
>>Phil Wadler has developed a formal semantics of XSLT.
>><a href="http://www.cs.bell-labs.com/who/wadler/papers/xsl-semantics/xsl-semantics.pdf">xsl-semantics.pdf</a>
>>
>><a href="http://www.cs.bell-labs.com/who/wadler/topics/xml.html">Wadler's XML topics</a>

********************************************
* Make affirming your wife a top priority. *
********************************************

--- End forwarded message --- |
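[Editor's illustration] The denotational approach Sasha describes - giving each pattern a mathematical meaning - can be sketched in miniature. In the sketch below (my own toy construction for discussion, not the definitions from Wadler's paper), a pattern denotes a function from a context node to the list of nodes it selects, and composite patterns are built by combining those functions:

```python
# Toy denotational semantics for a tiny XPath-like pattern language.
# A pattern's meaning is a function Node -> list[Node]; composition and
# alternation are defined on those functions. Illustration only.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def tag(name):
    """Denotation of a name test: select child elements with this tag."""
    return lambda node: [c for c in node.children if c.name == name]

def seq(p, q):
    """Denotation of a path step p/q: apply q to every node p selects."""
    return lambda node: [m for n in p(node) for m in q(n)]

def union(p, q):
    """Denotation of alternation p|q: everything either pattern selects."""
    return lambda node: p(node) + q(node)

# Example document: <doc><a><b/></a><c><b/></c></doc>
doc = Node("doc", [Node("a", [Node("b")]), Node("c", [Node("b")])])

# The pattern (a|c)/b selects both <b> elements.
pattern = seq(union(tag("a"), tag("c")), tag("b"))
selected = pattern(doc)
print([n.name for n in selected])  # ['b', 'b']
```

The point of the style is that equivalences between patterns (for example, that `(a|c)/b` selects the same nodes as `a/b | c/b`) become provable facts about ordinary functions, which is what an exact semantics for XPL would buy us.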
From: reid_spencer <ras...@re...> - 2002-01-31 09:17:12
|
--- In xpl-dev@y..., "Richard Anthony Hein" <935551@i...> wrote:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Some tags that might be useful, based on the paper at
http://www.cs.bell-labs.com/who/wadler/papers/xsl-semantics/xsl-semantics.pdf,
which describes a data model for XML, beginning on page 4:

<xpl:isRoot></xpl:isRoot>
<xpl:isElement></xpl:isElement>
<xpl:isAttribute></xpl:isAttribute>
<xpl:isText></xpl:isText>
<xpl:isComment></xpl:isComment>
<xpl:isPI></xpl:isPI>
<xpl:set></xpl:set>
<xpl:union></xpl:union>
<xpl:member-of></xpl:member-of>
<xpl:or></xpl:or>
<xpl:children-of></xpl:children-of>
<xpl:attributes-of></xpl:attributes-of>
<xpl:root-of></xpl:root-of>
<xpl:equals></xpl:equals>
<xpl:implies></xpl:implies>

Use these (and others; this is just a taste, to see if it's helpful to XPL)
to evaluate the semantics of any given well-formed XML document in XPL? -->

<!-- Example: a test to find out the node type -->
<xpl:if>
  <xpl:or>
    <a-node>
      <xpl:member-of>
        <xpl:children-of>
          <another-node>
            <xpl:implies>
              <!-- Between the following is* tag sets, put a reference to
                   a-node, which has its value in the document that XPL is
                   evaluating; not sure how we should reference it - XPath?
                   Then this returns true or false? Or is this redundant,
                   because if it gets this far, one of the following must
                   be true? -->
              <xpl:isElement></xpl:isElement>
              <xpl:isText></xpl:isText>
              <xpl:isComment></xpl:isComment>
              <xpl:isPI></xpl:isPI>
            </xpl:implies>
          </another-node>
        </xpl:children-of>
      </xpl:member-of>
    </a-node>
    <a-node>
      <xpl:member-of>
        <xpl:attributes-of>
          <another-node>
            <xpl:implies>
              <xpl:isAttribute><!-- a-node --></xpl:isAttribute>
            </xpl:implies>
          </another-node>
        </xpl:attributes-of>
      </xpl:member-of>
    </a-node>
    <a-node>
      <xpl:equals>
        <xpl:root-of>
          <another-node>
            <xpl:implies>
              <xpl:isRoot><!-- a-node --></xpl:isRoot>
            </xpl:implies>
          </another-node>
        </xpl:root-of>
      </xpl:equals>
    </a-node>
  </xpl:or>
</xpl:if>

<!-- So, using this kind of test, perhaps we can verify the semantic
correctness of a document.
I am not sure if this helps, but there it is. Comments?

Richard A. Hein -->

-----Original Message-----
From: Rod Moten [mailto:rod@c...]
Sent: June 20, 2000 12:12 PM
To: xpl@e...
Cc: xpl@e...
Subject: Re: [XPL] formal semantics of XSLT and other XML langs - Jonathan, please read

At 09:49 AM 6/20/00 +0000, Jonathan Burns wrote:
>
> Once we have an exact semantics for XPL, we're WAY ahead. To get this,
>we really want an exact semantics for XPath, XSLT etc - because most of this
>existing XML tech, we will want to reuse. What I'm hoping is that Groves
>will be the absolute foundation for semantic definitions.
>
> Jonathan

Phil Wadler has developed a formal semantics of XSLT.
<a href="http://www.cs.bell-labs.com/who/wadler/papers/xsl-semantics/xsl-semantics.pdf">xsl-semantics.pdf</a>
<a href="http://www.cs.bell-labs.com/who/wadler/topics/xml.html">Wadler's XML topics</a>

Rod

********************************************
* Make affirming your wife a top priority. *
********************************************

To unsubscribe from this group, send an email to:
xpl-unsubscribe@o...

--- End forwarded message --- |
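[Editor's illustration] The <xpl:is*> predicates Richard sketches correspond closely to the node kinds that existing XML processors already expose. As a rough mapping (my own, for discussion - not part of any XPL specification), the three branches of his example - classifying children as element/text/comment/PI, recognizing attributes, and recognizing the root - can be checked over a parsed tree using the DOM node types in Python's standard library:

```python
# Rough mapping of the sketched predicates (<xpl:isElement>, <xpl:isText>,
# <xpl:isComment>, <xpl:isPI>, <xpl:isAttribute>, <xpl:isRoot>) onto DOM
# node types. This is an illustrative correspondence, not XPL itself.
from xml.dom import minidom

doc = minidom.parseString(
    '<root a="1"><child>text<!--note--><?pi data?></child></root>'
)

KIND = {
    minidom.Node.ELEMENT_NODE: "element",
    minidom.Node.TEXT_NODE: "text",
    minidom.Node.COMMENT_NODE: "comment",
    minidom.Node.PROCESSING_INSTRUCTION_NODE: "pi",
    minidom.Node.ATTRIBUTE_NODE: "attribute",
    minidom.Node.DOCUMENT_NODE: "root",
}

def kind(node):
    """Classify a node the way the <xpl:is*> tests would."""
    return KIND.get(node.nodeType, "other")

child = doc.documentElement.firstChild        # the <child> element
kinds = [kind(n) for n in child.childNodes]   # kinds of its children
print(kinds)                                  # ['text', 'comment', 'pi']
print(kind(doc))                              # 'root'
print(kind(doc.documentElement.getAttributeNode("a")))  # 'attribute'
```

This suggests the <xpl:children-of>/<xpl:implies> tests in the sketch are decidable purely from the parsed data model, which is what makes a formal semantics over that model feasible.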
From: reid_spencer <ras...@re...> - 2002-01-31 09:16:44
|
--- In xpl-dev@y..., "Richard Anthony Hein" <935551@i...> wrote:

Fantastic! Thanks Rod!

Richard A. Hein

-----Original Message-----
From: Rod Moten [mailto:rod@c...]
Sent: June 20, 2000 12:12 PM
To: xpl@e...
Cc: xpl@e...
Subject: Re: [XPL] formal semantics of XSLT and other XML langs - Jonathan, please read

At 09:49 AM 6/20/00 +0000, Jonathan Burns wrote:
>
> Once we have an exact semantics for XPL, we're WAY ahead. To get this,
>we really want an exact semantics for XPath, XSLT etc - because most of this
>existing XML tech, we will want to reuse. What I'm hoping is that Groves
>will be the absolute foundation for semantic definitions.
>
> Jonathan

Phil Wadler has developed a formal semantics of XSLT.
<a href="http://www.cs.bell-labs.com/who/wadler/papers/xsl-semantics/xsl-semantics.pdf">xsl-semantics.pdf</a>
<a href="http://www.cs.bell-labs.com/who/wadler/topics/xml.html">Wadler's XML topics</a>

Rod

********************************************
* Make affirming your wife a top priority. *
********************************************

To unsubscribe from this group, send an email to:
xpl-unsubscribe@o...

--- End forwarded message --- |