At 07:31 PM 7/31/2001 +0200, Oren Ben-Kiki wrote:
Clark has done a great job in unifying the iterator and
visitor APIs into a coherent scheme. [...]

Clark and I are sure to chat some more about this...

There's a neat "sliding window" approach you can take which
avoids most of this. A "streaming parser" could only keep
the nodes along the path from the root to the current node
at any point in time...

Yes, I was one of the original proponents of this approach when I wrote webMethods' XML parser several years ago.  webMethods' parser can do this.
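The "sliding window" idea can be sketched in a few lines. This is an illustrative toy, not webMethods' actual parser: an event generator over a simple indented `key: value` format whose only state is the stack of open nodes, i.e. the path from the root to the current node, so memory is bounded by tree depth rather than document size.

```python
def stream_parse(lines):
    """Yield (event, path, value) tuples for a simple indented
    'key: value' format, keeping only the open-node stack in memory."""
    stack = []  # the sliding window: the root-to-current-node path
    for line in lines:
        if not line.strip():
            continue
        indent = (len(line) - len(line.lstrip())) // 2
        # close any nodes deeper than the current indentation
        while len(stack) > indent:
            yield ("end", tuple(stack), None)
            stack.pop()
        key, _, value = line.strip().partition(":")
        stack.append(key)
        yield ("start", tuple(stack), value.strip() or None)
    while stack:
        yield ("end", tuple(stack), None)
        stack.pop()

doc = """\
invoice:
  customer:
    name: Oren
  total: 42
""".splitlines()

for event in stream_parse(doc):
    print(event)
```

A higher-level (load/save) API could be layered on top of this event stream without the low-level layer ever holding the whole document.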

Obviously you need a low-level API in order to write a
higher level one. What I suggested was that we don't settle
on "the" low-level API yet, even though we write one (several,
in fact). It is much easier to settle on "the" highest-level
API (load/save) first.

ACK!  I'm with ya!

> What about an implementation that allows for traversing
> arbitrarily sized (or even unending) YAML streams?

That would be a future goal, requiring the "incremental" API.

I have a lot of experience with this approach.  It is a bit more complicated, but this time around I think I've found a simple way to do it.

I don't think there are that many, actually. String, Vector
and Hash have a natural (de)serialization. So that's the
first thing. Next we can layer a (de)serialization mechanism
on top of that - ideally interacting with Java's (JSX-like).
That's all there is to it, really.

... except that it calls for deferring dereferencing to a higher-level API.

If you are hinting at the serialization issue, then note that
Brian is working on an implementation, and he's definitely going
to include serialization in there! He wants to be able
to round-trip arbitrary Perl data - which also means
he'll be using references (for graphs). So there's
nothing in the spec which isn't going to be implemented.

Excellent, that's what we need.  But to avoid overlooking necessary generalizations, we should probably try it on at least two languages.
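Why references are unavoidable for round-tripping arbitrary data can be shown in a short sketch (invented names, not Brian's code): shared or cyclic nodes get an anchor id on first visit and a reference marker on every later visit, so the graph shape survives serialization as a tree.

```python
def to_refs(node, seen=None, counter=None):
    """Reduce a (possibly cyclic) dict graph to a tree with
    &anchor / *ref markers standing in for shared nodes."""
    seen = {} if seen is None else seen
    counter = counter if counter is not None else [0]
    if isinstance(node, dict):
        if id(node) in seen:
            return "*%d" % seen[id(node)]   # later visit: emit a reference
        counter[0] += 1
        seen[id(node)] = counter[0]
        out = {"&anchor": counter[0]}       # first visit: emit an anchor
        for k, v in node.items():
            out[k] = to_refs(v, seen, counter)
        return out
    return node

# a two-node cycle: each node points at the other
a, b = {}, {}
a["other"], b["other"] = b, a
print(to_refs(a))
```

Without the `seen` table the traversal would recurse forever on the cycle; with it, the output is a finite tree that a loader can rewire back into the original graph.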