At 07:31 PM 7/31/2001 +0200, Oren Ben-Kiki wrote:
Clark has done a great job in unifying the iterator and
visitor APIs into a coherent scheme. [...]
There's a neat "sliding window" approach you can take which
avoids most of this. A "streaming parser" need only keep
the nodes along the path from the root to the current node
at any point in time...
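A minimal sketch of that sliding-window idea, assuming a hypothetical event stream (the event names and callback here are illustrative, not any real parser's API): only the root-to-current-node path lives in memory, and finished nodes are handed to a callback and dropped.

```python
# Sketch: memory stays proportional to nesting depth, not document size.
def stream_parse(events, on_node):
    """events: (kind, value) tuples; kinds are 'start', 'scalar', 'end'."""
    path = []  # the root-to-current-node path, nothing more
    for kind, value in events:
        if kind == 'start':          # entering a mapping/sequence
            path.append({'tag': value})
        elif kind == 'scalar':       # a leaf node
            on_node(path, value)     # caller sees it once, then it's gone
        elif kind == 'end':          # leaving the current container
            node = path.pop()
            on_node(path, node)

# Toy event stream for something like {a: 1, b: [2, 3]}:
events = [
    ('start', 'map'), ('scalar', 'a'), ('scalar', 1),
    ('scalar', 'b'), ('start', 'seq'), ('scalar', 2), ('scalar', 3),
    ('end', None), ('end', None),
]
depths = []
stream_parse(events, lambda path, node: depths.append(len(path)))
```

Note that `depths` never exceeds the nesting depth (2 here), no matter how many siblings the stream carries.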
Obviously you need a low-level API in order to write a
higher-level one. What I suggested was that we not settle
on "the" low-level API yet, even as we write one (several,
in fact). It is much easier to settle on "the" highest-level
API (load/save) first.
> What about an implementation that allows for traversing
> arbitrarily sized (or even unending) YAML streams?
That would be a future goal, requiring the "incremental" API.
I don't think there are that many, actually. String, Vector
and Hash have a natural (de)serialization. So that's the
first thing. Next we can layer a (de)serialization mechanism
on top of that - ideally one that interacts with Java's (JSX-like).
That's all there is to it, really.
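The layering could look something like this sketch (the `to_native` name and the `!type`/`fields` tagging convention are assumptions for illustration, not a proposed format): strings, vectors and hashes pass through natively, and arbitrary objects are reduced to those three primitives first.

```python
# Sketch: natural (de)serialization for the three primitive shapes,
# with object serialization layered on top by reduction to a tagged hash.
def to_native(obj):
    if isinstance(obj, (str, int, float, bool)) or obj is None:
        return obj                                   # scalar: as-is
    if isinstance(obj, list):
        return [to_native(x) for x in obj]           # vector
    if isinstance(obj, dict):
        return {k: to_native(v) for k, v in obj.items()}  # hash
    # The layered step: any other object becomes a tagged hash.
    return {'!type': type(obj).__name__,
            'fields': {k: to_native(v) for k, v in vars(obj).items()}}

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

native = to_native({'points': [Point(1, 2)]})
```

Once everything is a String/Vector/Hash tree, any backend that handles those three can emit it.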
If you are hinting at the serialization issue, then note that
Brian is working on YAML.pm and he's definitely going
to include serialization in there! He wants to be able
to round-trip arbitrary Perl data - which also means
he'll be using references (for graphs). So there's
nothing in the spec that isn't going to be implemented.
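To see why references matter for round-tripping graphs, here is a hedged sketch (the `represent` function and anchor naming are illustrative, not YAML.pm's actual code): an emitter remembers each container by identity and emits an alias the second time it sees it, instead of recursing forever.

```python
# Sketch: shared/cyclic nodes get a YAML-style &anchor on first sight
# and a *alias on every later sight, so even self-referential data
# serializes to finite text.
def represent(node, seen=None):
    if seen is None:
        seen = {}
    if isinstance(node, list):
        if id(node) in seen:
            return '*%s' % seen[id(node)]        # alias to earlier anchor
        anchor = 'a%d' % (len(seen) + 1)
        seen[id(node)] = anchor
        body = ', '.join(represent(x, seen) for x in node)
        return '&%s [%s]' % (anchor, body)
    return str(node)

# A cyclic structure: the list contains itself.
cycle = [1, 2]
cycle.append(cycle)
text = represent(cycle)   # '&a1 [1, 2, *a1]'
```

A loader doing the reverse (binding `*a1` back to the `&a1` node) recovers the original graph, identity and all.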