From: Andy S. <ae...@st...> - 2001-04-18 22:17:14
Attendees:
  Bob Barned - LM ba...@sy...
  Burkhard Daniel - STG bu...@st...
  Andy Schweig - STG ae...@st...

Today's discussion involved the data link metalanguage and how multiplexing would be handled. We discussed providing operations on the control channel to create transmit and receive channels. Any number of these channels could be created between a data link user and a data link provider. When creating a receive channel, the user would specify a "SAP" (service access point), which the data link provider would use to determine whether an incoming frame should be delivered on that receive channel. The SAP would most likely be a byte array. The creator of a receive channel could also specify that all frames should be delivered.

There was also some discussion of whether the data link metalanguage should support operations to access multicast and promiscuous mode functions of the ND served by the data link provider, or whether these sorts of operations should be handled in some other way. Burk will be sending out more details of these issues.
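The SAP-matching rule described above can be sketched in C. This is a hypothetical illustration only; the `rx_channel_t` type and `sap_matches` function are invented for the sketch and are not part of any UDI specification. It assumes the provider compares the SAP byte array against the head of each incoming frame, with an empty SAP meaning "deliver all frames":

```c
#include <string.h>
#include <stdint.h>

/* Hypothetical receive-channel descriptor: the SAP is a byte array that
 * the data link provider compares against the head of each incoming
 * frame; sap_len == 0 means "deliver all frames" (the wildcard case
 * mentioned above). */
typedef struct {
    const uint8_t *sap;
    size_t sap_len;
} rx_channel_t;

/* Return 1 if the frame should be delivered on this receive channel. */
int sap_matches(const rx_channel_t *ch, const uint8_t *frame, size_t frame_len)
{
    if (ch->sap_len == 0)            /* wildcard channel: deliver everything */
        return 1;
    if (frame_len < ch->sap_len)
        return 0;
    return memcmp(frame, ch->sap, ch->sap_len) == 0;
}
```

A real provider would of course apply whatever match semantics the metalanguage ends up defining (exact header-field match rather than a prefix, for instance); the point is only that a byte-array SAP keeps the demultiplexing decision generic.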
From: Andy S. <ae...@st...> - 2001-04-18 22:07:06
(I apologize for the lateness of this mail.)

Attendees:
  James Hall - Sun jam...@su...
  Bob Barned - LM ba...@sy...
  Burkhard Daniel - STG bu...@st...
  Kevin Van Maren - Unisys kev...@un...
  Andy Schweig - STG ae...@st...

There was more discussion of how many channels are required per transport endpoint (socket), and it was generally agreed that one channel was sufficient. There was also further discussion of how to specify QoS requirements, and it was generally agreed that this information could be specified in the control blocks of "send" operations in both the transport and data link metas. The drivers that handle the data would be responsible for propagating QoS information. If anyone remembers anything additional or different, please speak up.
From: Burkhard D. <bu...@st...> - 2001-04-09 17:45:59
Hi all,

I looked really hard at that architecture drawing I proposed on this list a while ago, and the more I looked, the less happy I was with it. Throw in complaints from people working on classifier and scheduler semantics, as well as a general uneasiness about parts of the picture, and I soon knew it was time to rework (and simplify) the drawing. Along the way I decided to put everything into UDI terminology. The result is attached.

Marked red are the metalanguages that we are currently focusing on. I left the Network Meta in the picture because at some time we (or somebody) may decide to specify such a meta, and because it makes it easier to correlate the architecture drawing with the ISO/OSI layer model. Of course, that correlation becomes diffuse as we reach the ND portion of the drawing. This is because we really are talking about a different layering scheme here than the ISO/OSI model. While the OSI model distinguishes functional components of a network service, we need to specify interfaces between a network stack and NIC drivers that operate their respective NICs. These NICs provide hardware support for all of the Physical Layer and at least a portion of the Data Link layer (e.g. MAC). So, this drawing represents an implementation architecture, while the older drawing was meant to illustrate the functional layering.

Notice the bubble-like structures marked "SI"? These are plugins to the Data Link Driver providing scheduling (SI stands for "Scheduler Interface"; suggestions on a nicer christening of this interface are appreciated). I decided to make these "bubbles" rather than an extra layer because scheduling really is part of the data link functionality, and is something that a data link driver should have great control over. A scheduler really only specifies the order (and timing) of elements in the sending queue for a particular device, and is therefore only an extension of the data link driver. So I figured it would make sense to make the architecture reflect that relationship better. I think that schedulers could be conveniently implemented as UDI libraries.

The new drawing also accommodates the fact that the semantics of the data link driver vary widely for different interface types. In Ethernet, for example, most of the actual MAC protocol is implemented in hardware, and the data link driver has no real control over its operation. The ND can in most cases tune some aspects of MAC operation, but the data link driver cannot. I think I only recently realized that the existing interface between NSR and ND does not coincide with the Data Link-to-Physical interface, and in fact doesn't even try to cover the same ground. The NSR-to-ND interface is an implementation-specific interface, while the Data Link-to-Physical interface is an abstract functional interface.

Please comment on this new drawing.

Thanks,
Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
From: Van M. K. <kev...@un...> - 2001-04-05 15:28:35
> One approach for specifying QoS requirements is through a QoS constraints
> handle, much like the DMA constraints handle in the current spec. The
> constraints handle would be an opaque handle that would be passed from the
> TU to the TSP and from the TSP (or Network Service Provider) to the Data
> Link Provider, preferably as part of the control block for the channel ops.

This requires a core specification change to add a new handle (the spec says only Core and Physio can define new types), as well as the addition of udi-core routines to insert/extract information. Is it impractical to accomplish this without the addition of a new UDI type? For example, using typed data or a udi_buf_t to convey this information? A QoS handle seems very domain-specific, and I don't think we want to "encourage" every new user of UDI to create additional opaque types.

Perhaps a channel operation, kind of like the NIC control request (or even the bind/bind-ack request), could be used to convey changes in desired QoS info. The meta can define when it is valid to perform a QoS update, just like it could define when these handles are passed up. The downside here is that adding additional constraints is a little difficult without revving the spec; but that doesn't seem much different from using an opaque type to store this info. If a GIO buffer is used (like NIC multicast info), then new types can be added and ignored if they are not known by the receiver. But you still need a spec rev so that people aren't grabbing an arbitrary QoS id for their use and colliding.

Kevin
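The "new types can be added and ignored if they are not known by the receiver" property of a buffer-based encoding can be illustrated with a simple (type, length, value) scan. This is a hypothetical sketch, not anything from the UDI spec: the type codes and the flat `uint32_t` layout are invented here, and as Kevin notes, real type IDs would need a spec rev to avoid collisions:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical QoS attribute codes -- a real assignment would need a
 * spec rev so implementations don't grab colliding IDs. */
enum { QOS_PEAK_RATE = 1, QOS_MAX_BURST = 2 };

/* Scan a flat (type, length, value...) encoding for one QoS attribute.
 * Unknown types are skipped wholesale, so new attributes can be added
 * later without breaking older receivers.  Returns 1 and stores the
 * first value word on a hit, 0 if the attribute is absent. */
int qos_find(const uint32_t *buf, size_t nwords, uint32_t type, uint32_t *out)
{
    size_t i = 0;
    while (i + 2 <= nwords) {
        uint32_t t = buf[i], len = buf[i + 1];
        if (i + 2 + len > nwords)
            break;                   /* malformed: length runs past buffer */
        if (t == type && len >= 1) {
            *out = buf[i + 2];
            return 1;
        }
        i += 2 + len;                /* skip this attribute, known or not */
    }
    return 0;
}
```

A receiver built this way tolerates attributes defined after it shipped, which is exactly the forward-compatibility argument made above for using a GIO-style buffer instead of a new opaque handle type.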
From: Van M. K. <kev...@un...> - 2001-04-05 15:13:13
> We were discussing how many channels we would need between the socket
> mapper (i.e. transport user) and the transport service provider.
> Below I try to make the case for using 3.

I'll try to restate what I can remember of the "1" channel idea.

> Channels are anchored in an ops vector, and per the spec (as defined in
> udi_mei_ops_vec_template_t, Chapter 28) only one end of a channel can act
> as an initiator of a channel operation.

Yes, but that is per-operation. Both ends of a channel can initiate requests, as long as they are *different requests*. For example, look at GIO: the child initiates all the xfer requests, but the parent initiates all the event requests. So the child can initiate xmit operations and the parent can initiate receive operations on the same channel: they just need to be separate vectors.

> If we were to use one channel per connection between the TU and the TSP,
> we would have to allocate a pool of receive cb's in the TU, pass them down
> to the TSP (which would represent the initiation of a channel operation)
> and upon actual receive of a packet the TSP would pass one control block
> back up (representing the response to the initiation).

No; control block allocation and usage is not dependent on the channel assignment. You can have multiple control block types per channel, and you can have both ends allocating control blocks. This is defined by the metalanguage designer.

> Plus, the ops vectors used for this channel would have to sport operations
> for rx, tx and control ops on both sides (i.e. the TU side vector would
> have to have an rx_ind as well as a tx_ack operation in the same vector).

Yes, but those operations have to exist on both sides anyway; the only difference is whether they go in one channel's list of ops or 2 or 3 separate lists. The only reason for separate channels is if you want to bind them in different regions. Since the transmit and receive processing for protocol stacks have a lot of shared state, placing them in separate regions would involve a lot of communication between those regions, and you would lose much of the advantage of multiple regions.

> If on the other hand we want the TSP to allocate the rx cbs on its own (or
> pass through rx cb's it gets from lower layers), we at least need two
> channels (tx/control and rx) for each connection (with the TU acting as
> initiator for tx/control and the TSP acting as initiator for rx ops).
> However, since channels are not that heavyweight (and please correct me
> here if I'm mistaken), we can just as well allocate 3 channels and get the
> added advantage of having better parallelizability.

Channel creation is not free. Channels were expected to be long-lived; that isn't the case when a channel is created for every TCP/IP socket. Creating one channel is much faster than creating three of them.

> Any comments?

At the F2F we discussed several objectives. One was fast connection establishment and teardown. Additionally, it was thought that there would be potentially thousands of connections; the parallelism is achieved by having multiple connections, not by multithreading each connection. This contrasts with the NIC meta, where the parallelism comes from splitting send/receive, not from the individual packets (which the HW processes serially).

There is also the load-balancing issue: if a protocol server has hundreds of connections, it will want to farm those out to worker regions (probably as many as the CPUs it wants to use); this can be done statically with a few tweaks to UDI. Without tweaking UDI, I think two channels are necessary: one bind channel that goes to the main driver region and creates the worker channel that is bound in a secondary region (since the child bind region is specified in the udiprops.txt, can't refer to a "pool", and is pre-bound by the environment so the driver can't move it).

Kevin
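The "separate vectors, one channel" point can be sketched with plain function-pointer tables. Everything here is hypothetical (the type names, the stub handlers, and the trivial control block are invented for the illustration; they are not UDI types): the idea is only that each end owns a distinct set of operations it may initiate, so both directions coexist on one logical channel:

```c
typedef struct { int kind; } cb_t;   /* stand-in control block, not a UDI type */

/* Hypothetical per-direction ops vectors for one channel: the transport
 * user end initiates tx ops, the provider end initiates rx indications.
 * Different operations per direction, so a single channel suffices. */
typedef struct {
    void (*tx_req)(cb_t *cb);        /* initiated by the transport user */
} tu_ops_t;

typedef struct {
    void (*rx_ind)(cb_t *cb);        /* initiated by the provider       */
} tsp_ops_t;

static int tx_seen, rx_seen;
static void my_tx_req(cb_t *cb) { (void)cb; tx_seen++; }
static void my_rx_ind(cb_t *cb) { (void)cb; rx_seen++; }

/* Both ends anchored on the same logical channel, each with its own vector. */
static const tu_ops_t  tu_ops  = { my_tx_req };
static const tsp_ops_t tsp_ops = { my_rx_ind };

void channel_demo(void)
{
    cb_t cb = { 0 };
    tu_ops.tx_req(&cb);              /* user-initiated send              */
    tsp_ops.rx_ind(&cb);             /* provider-initiated receive ind.  */
}
```

This mirrors the GIO example Kevin gives: xfer requests flow one way, event indications the other, and neither end ever initiates the other's operations.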
From: Burkhard D. <bu...@st...> - 2001-04-05 01:02:32
One approach for specifying QoS requirements is through a QoS constraints handle, much like the DMA constraints handle in the current spec. The constraints handle would be an opaque handle that would be passed from the TU to the TSP and from the TSP (or Network Service Provider) to the Data Link Provider, preferably as part of the control block for the channel ops. QoS constraints could be filled in at different places in the stack. The socket mapper would be able to fill in some, the Classifier would be able to fill in some, etc., so by the time a buffer reaches the scheduler, all relevant constraints would be set.

Constraints we have identified so far are:
  - Rate Requirement (Constant Bit Rate, Variable Bit Rate, non-realtime/realtime)
  - Permissible Bit Error Rate
  - Permissible Packet Delay Variation (PDV)
  - Permissible PDV Tolerance
  - Permissible Packet Loss Ratio
  - Permissible Packet Transfer Delay
  - Maximum Burst Size
  - Minimum Data Rate
  - Peak Data Rate
  - Sustainable Data Rate
  - (for the full-fledged generic scheduler) Permissible Bit Error Rate, Packet Expiration, Packet Priority
  - (for a bulk scheduler, i.e. in routers) Flow Type (for a flow-specific scheduler)

Others may be relevant as well. Obviously, not all constraints make sense in all scenarios. A scheduler would have to pick the constraints that are applicable to its interface for its scheduling decision.

For scenarios where we have flow-based scheduling, we would also need to provide a flow id (that is, an identifier assigned to a certain type of flow with a specific set of QoS constraints) in order to identify incoming packets to upper layers. This flow id would have to be part of the control blocks for the channel ops in both the Transport and the Data Link Metalanguages.

Any comments will be highly appreciated.

Thanks, Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
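One hypothetical shape for such a constraints block, sketched in C: a struct with a validity bitmask, so each layer fills in only the fields it knows and the scheduler can tell which constraints were actually set. The field names and the `qos_constraints_t` type are invented for this sketch (they correspond to a few of the constraints listed above, not to any UDI definition):

```c
#include <stdint.h>

/* Which fields below have been filled in by some layer of the stack. */
enum {
    QOS_V_PEAK_RATE = 1u << 0,
    QOS_V_SUST_RATE = 1u << 1,
    QOS_V_MAX_BURST = 1u << 2,
    QOS_V_MAX_DELAY = 1u << 3,
    QOS_V_PRIORITY  = 1u << 4
};

/* Hypothetical QoS constraint block covering a subset of the list above. */
typedef struct {
    uint32_t valid;          /* bitmask of QOS_V_* fields set below */
    uint32_t peak_rate;      /* peak data rate, bits/s              */
    uint32_t sust_rate;      /* sustainable data rate, bits/s       */
    uint32_t max_burst;      /* maximum burst size, bytes           */
    uint32_t max_delay_us;   /* permissible packet transfer delay   */
    uint8_t  priority;       /* packet priority                     */
} qos_constraints_t;

/* e.g. the socket mapper sets a rate limit; the classifier might later
 * add a priority, each layer touching only its own fields. */
void qos_set_peak(qos_constraints_t *q, uint32_t bps)
{
    q->peak_rate = bps;
    q->valid |= QOS_V_PEAK_RATE;
}
```

The bitmask is what lets a scheduler "pick the constraints that are applicable to its interface": unset fields are simply ignored rather than interpreted as zero requirements.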
From: Burkhard D. <bu...@st...> - 2001-04-05 00:42:03
We were discussing how many channels we would need between the socket mapper (i.e. transport user) and the transport service provider. Andy mentioned that during the last F2F the general consensus was that one channel per connection was needed, which would be spawned by the socket mapper and anchored in the transport provider as a result of a bind operation between the two. James pointed out that instead of only one channel per connection we very likely will need 3: Rx, Tx, and control, the way it is currently done in the NSR/ND interface. Below I try to make the case for using 3.

Channels are anchored in an ops vector, and per the spec (as defined in udi_mei_ops_vec_template_t, Chapter 28) only one end of a channel can act as an initiator of a channel operation. If we were to use one channel per connection between the TU and the TSP, we would have to allocate a pool of receive cb's in the TU, pass them down to the TSP (which would represent the initiation of a channel operation) and upon actual receipt of a packet the TSP would pass one control block back up (representing the response to the initiation). Plus, the ops vectors used for this channel would have to sport operations for rx, tx and control ops on both sides (i.e. the TU side vector would have to have an rx_ind as well as a tx_ack operation in the same vector).

If on the other hand we want the TSP to allocate the rx cbs on its own (or pass through rx cb's it gets from lower layers), we at least need two channels (tx/control and rx) for each connection (with the TU acting as initiator for tx/control and the TSP acting as initiator for rx ops). However, since channels are not that heavyweight (and please correct me here if I'm mistaken), we can just as well allocate 3 channels and get the added advantage of better parallelizability.

Any comments?

Thanks, Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
From: Andy S. <ae...@st...> - 2001-04-04 17:24:44
Attendees:
  James Hall - Sun jam...@su...
  Bob Barned - LM ba...@sy...
  Burkhard Daniel - STG bu...@st...
  Andy Schweig - STG ae...@st...

Today's topics of discussion were ways to specify quality of service in the transport and data link metas, and whether there should be one or three (tx, rx, control) channels between the transport user and transport provider for each endpoint. Burk will be writing up more details on these issues.
From: Andy S. <ae...@st...> - 2001-04-03 20:40:15
STG is hosting the April UDI Protocols conference calls.
  Days: April 4, 11, 18, 25
  Time: 11:00 am - Noon CDT (GMT-5)
  Phone: 405.244.5555
  Code: 7734
From: Robert L. <ro...@sc...> - 2001-03-18 05:00:04
I see SourceForge finally fixed the archiver. It had been failing to archive this list. If any one (and only one) of you has an archive of the traffic on this list, and you bounce (not forward) it to arc...@db..., I *think* it'll do the right thing and preserve that in the archives.

RJL
From: Burkhard D. <bu...@st...> - 2001-03-14 22:03:54
I have now elevated the Link Selector (aka the Data Link Classifier) to full layer status, as it really is needed to select the appropriate data link protocol(s). For example, it is possible to run IP over Wireless LAN. The Link Selector layer would set up the bindings accordingly for flows that should be so routed. Note that the Link Selector is therefore a filter driver, as it implements the data link interface on both sides. The updated drawing is attached.

Thanks, Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
From: Andrew K. <an...@sc...> - 2001-03-14 20:14:53
Andy Schweig wrote:
> 2. We discussed how to incorporate data link address translation into
> the protocol metalanguages. Andy will look into and write up ideas about
> a generic implementation; KevinQ will look into existing data link
> protocols and how they handle translation.

I guess this was after I left. Let's not forget the option of having a separate address translation meta (option C in Burk's writeup). This would involve having a separate translation module for each data link type / network layer pair. The translator would have the data link as parent and the network layer as child; however, the network layer would still be a direct child of the data link (yes, multi-parent). The new meta would take an opaque network layer address and return an opaque data link address. The network layer would cache this address and pass it down to the data link with each traffic packet. The meta would also allow unsolicited updates, in case the translation changes.

Andrew
--
Andrew Knutsen
an...@sc...
Santa Cruz Operation (831) 427-7538
work: http://andrewk.pdev.sco.com
personal: http://www.goldbarlodge.com
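The caching behavior Andrew describes (resolve once, pass down per packet, overwrite on unsolicited update) can be sketched as a small fixed-size cache of opaque address pairs. This is purely illustrative; the types, sizes, and function names are invented here, and a real meta would deal in whatever opaque address representation it defines:

```c
#include <string.h>

/* Hypothetical translation cache for the proposed address translation
 * meta: the network layer resolves an opaque network address once,
 * caches the opaque data link address, and passes it down with each
 * packet.  Unsolicited updates from the translator overwrite the entry. */
#define XL_MAX   16
#define ADDR_LEN  8      /* fixed opaque-address size, for the sketch */

typedef struct {
    unsigned char net[ADDR_LEN];
    unsigned char dl[ADDR_LEN];
    int in_use;
} xl_entry_t;

static xl_entry_t cache[XL_MAX];

/* Insert or update a translation (also the path for unsolicited updates). */
void xl_update(const unsigned char *net, const unsigned char *dl)
{
    int free_slot = -1;
    for (int i = 0; i < XL_MAX; i++) {
        if (cache[i].in_use && memcmp(cache[i].net, net, ADDR_LEN) == 0) {
            memcpy(cache[i].dl, dl, ADDR_LEN);   /* overwrite existing */
            return;
        }
        if (!cache[i].in_use && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0) {
        memcpy(cache[free_slot].net, net, ADDR_LEN);
        memcpy(cache[free_slot].dl, dl, ADDR_LEN);
        cache[free_slot].in_use = 1;
    }
}

/* Per-packet lookup of the cached data link address; 1 on a hit. */
int xl_lookup(const unsigned char *net, unsigned char *dl_out)
{
    for (int i = 0; i < XL_MAX; i++)
        if (cache[i].in_use && memcmp(cache[i].net, net, ADDR_LEN) == 0) {
            memcpy(dl_out, cache[i].dl, ADDR_LEN);
            return 1;
        }
    return 0;
}
```

A miss in `xl_lookup` is where the network layer would issue the translation request over the new meta; the update path doubles as the handler for the unsolicited-update case where a translation changes underneath a live connection.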
From: Andy S. <ae...@st...> - 2001-03-14 18:55:06
Attendees:
  Kevin Quick
  Andrew Knutsen
  Bob Barned
  Burkhard Daniel
  Andy Schweig

1. In our discussion of Burk's classifier/scheduler/multiple interface thing, it was determined that quality of service can be handled with fields in control blocks in the transport & data link metalanguages. More details to come from Burk.
2. We discussed how to incorporate data link address translation into the protocol metalanguages. Andy will look into and write up ideas about a generic implementation; KevinQ will look into existing data link protocols and how they handle translation.
3. KevinQ will look into why the protocols mailing list is not being archived.
From: Burkhard D. <bu...@st...> - 2001-03-13 11:08:00
After talking with some folks here at the university and after the discussions we had on the topic, I have updated the architecture to reflect the issues.

- I have removed ATM from the picture, because as KevinVM pointed out it is a separate "technology pane" as well, and two are enough to make the point.
- Every interface now has its own scheduler instance. These will have to talk to each other (using some control mechanism not shown here, possibly GIO channels).
- I subdivided the Data Link layer for the mobile technology pane. This became necessary because we want to be able to replace all modules underneath the Interface Selector as we switch technologies.

Please comment.

Thanks, Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
From: Burkhard D. <bu...@st...> - 2001-03-08 12:45:10
On the call yesterday we discussed several concerns with the proposed classifier layer. I will try to explain exactly what the proposed functionality of that layer is meant to be.

The main purpose of the classifier is to detect traffic flows in the user data (coming in from the user process via the socket interface) and direct these flows to the appropriate protocol suite. So, if the user opens a UDP socket, and the data sent over that socket is a sequence of RTP packets, the payload type field in the RTP header specifies the type of contents, and therefore the type of flow. Similarly, for a TCP connection, if the data represents HTTP traffic, the MIME type for that HTTP connection defines the type of flow associated with that connection. Flows will then be routed to different protocol suites (TCP/UDP or ReSoA in the diagram I sent out) and will be transmitted using separate interfaces.

So, at least during connection setup, inspection of the actual data is necessary for the classification to occur. Once a classification for a connection has been determined, directing the flows to their appropriate protocols means looking up a "route" for the (destination address, port address) tuple to the correct protocol. Thus, it is not necessary to inspect every packet; the classification can be performed at connection setup.

For performance-conscious environments with fixed network topologies and access methods, the classifier layer need not be used; instead, the bindings of protocols to flows can be defined using static properties (and "device line matching" during the bind process).

Does that clear up some of the problems we talked about yesterday?

Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
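The classify-once, route-per-packet scheme can be sketched as a small route table keyed on the (destination address, port) tuple. This is a hypothetical illustration (the suite identifiers, table layout, and function names are all invented; a real classifier would also key on more of the tuple and use a hash rather than a linear scan):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical protocol suite identifiers, matching the two suites in
 * the diagram under discussion. */
enum { SUITE_TCPIP = 0, SUITE_RESOA = 1 };

typedef struct {
    uint32_t dest;           /* destination address               */
    uint16_t port;           /* destination port                  */
    int suite;               /* protocol suite this flow binds to */
} flow_route_t;

#define MAX_ROUTES 32
static flow_route_t routes[MAX_ROUTES];
static size_t nroutes;

/* Called once, at connection setup, after the classifier has inspected
 * the data (RTP payload type, HTTP MIME type, ...) and picked a suite. */
void route_add(uint32_t dest, uint16_t port, int suite)
{
    if (nroutes < MAX_ROUTES)
        routes[nroutes++] = (flow_route_t){ dest, port, suite };
}

/* Per-packet path: a table lookup only, no payload inspection.
 * Unclassified traffic falls back to the default suite. */
int route_lookup(uint32_t dest, uint16_t port)
{
    for (size_t i = 0; i < nroutes; i++)
        if (routes[i].dest == dest && routes[i].port == port)
            return routes[i].suite;
    return SUITE_TCPIP;
}
```

The point of the split is exactly the one made above: the expensive content inspection happens once per connection, while the per-packet cost is a lookup.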
From: Burkhard D. <bu...@st...> - 2001-03-06 17:24:26
"Van Maren, Kevin" wrote:
> > Apart from the bandwidth sharing/load optimization issue, an ND could
> > talk to the scheduler (and the classifier, for that matter) through
> > the GIO meta.
>
> I'd hesitate to do this, as then your scheduler would require a
> custom ND driver, and would not "just work" with a vendor-provided
> ND. I'd try really hard to make do with the information that is
> available via the NIC meta (although you may discover things that
> should be added to the meta), and have the administrator send other
> information through a configuration (probably GIO) channel.

I understand the concern, but we could make the scheduler work with and without the channel between the scheduler and the driver, i.e. use anything it can get from the ND if the ND supports it. In other words, if the driver underneath the scheduler doesn't talk to the scheduler, the scheduler will make a generic scheduling decision based on the information it has.

> > Classifier and Scheduler could also use driver-internal buffer tags to
> > communicate status information/parameter adjustments to/from the ND.
>
> No -- driver-internal buffer tags (according to spec, not impl :-)
> are not visible outside the driver that sets them. So they can't be
> used to convey information between drivers.
> However, if the buffer is returned unmodified, then the driver-internal
> tag you set will still exist.

Ok, then we may end up having to define extra buffer tags. I don't know how bad this feels to everybody else, but buffer tags would be a very nice way to propagate information from the classifier and the scheduler to the ND.

> I don't know what any of the ReSoA issues are.

ReSoA basically replaces TCP, IP and Ethernet in a mobile host with an Export Protocol and a Last Hop Protocol that sit on top of a (usually) wireless link. The base station then establishes a "proper" TCP connection on behalf of the mobile host. All of this is done transparently, and the main reason for doing this is that TCP performance on wireless links with quickly changing transmission characteristics is very poor.

> Good luck,

Thanks!

Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
From: Van M. K. <kev...@un...> - 2001-03-06 16:45:57
> Apart from the bandwidth sharing/load optimization issue, an ND could
> talk to the scheduler (and the classifier, for that matter) through
> the GIO meta.

I'd hesitate to do this, as then your scheduler would require a custom ND driver, and would not "just work" with a vendor-provided ND. I'd try really hard to make do with the information that is available via the NIC meta (although you may discover things that should be added to the meta), and have the administrator send other information through a configuration (probably GIO) channel.

> Classifier and Scheduler could also use driver-internal buffer tags to
> communicate status information/parameter adjustments to/from the ND.

No -- driver-internal buffer tags (according to spec, not impl :-) are not visible outside the driver that sets them. So they can't be used to convey information between drivers. However, if the buffer is returned unmodified, then the driver-internal tag you set will still exist.

> I'm pretty sure some mechanism like this will be needed for the
> applications I have in mind (wireless using ReSoA and wired using
> Ethernet running at the same time, with a flow classifier sitting
> under the socket layer).
>
> Burk.

I don't know what any of the ReSoA issues are.

Good luck,
Kevin
From: Burkhard D. <bu...@st...> - 2001-03-06 16:15:44
"Van Maren, Kevin" wrote:
> I recommend that someone research the existing bandwidth reservation
> systems (I mentioned two yesterday) and post a summary/references to
> the list before we get too far into this and try to reinvent a wheel
> that doesn't work with other wheels.

Ok, I'll take that one.

Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
From: Burkhard D. <bu...@st...> - 2001-03-06 16:10:39
Apart from the bandwidth sharing/load optimization issue, an ND could talk to the scheduler (and the classifier, for that matter) through the GIO meta. Classifier and Scheduler could also use driver-internal buffer tags to communicate status information/parameter adjustments to/from the ND. I'm pretty sure some mechanism like this will be needed for the applications I have in mind (wireless using ReSoA and wired using Ethernet running at the same time, with a flow classifier sitting under the socket layer).

Burk.
--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489
From: Van M. K. <kev...@un...> - 2001-03-06 15:43:32
> I was thinking that there needs to be information passed to the scheduler
> about the bandwidth that it was able to use on the network.

The ND side provides the hardware capability via the link_(m)bps entries in the udi_nic_info_cb_t. [Note that this number includes overheads, such as the inter-frame gap, preamble, and CRC on Ethernet, so the buffer size is only an approximation of how much BW is really taken.] That, combined with using the completion_urgent flag (which unfortunately causes the buffer to be freed, but not the cb to be returned immediately), should allow the scheduler to track the outstanding packets. [I seem to recall HPFQ requires a transmit-complete interrupt per packet (yes, yuck) to work properly.] Since this node is the only entity giving packets to the ND, it should be easy to track, unless the ND changes the number of xmit CBs.

> This bandwidth may be less than the bandwidth of the interface
> because there may be bottlenecks elsewhere in the system.

Which entity is supposed to know about bottlenecks "elsewhere"? I think the only way to deal with those is by observing how fast you get packets and can give them to the ND. In general, bandwidth-reservation systems want the driver to hold on to as few packets as possible -- if it processes one packet at a time, then the scheduler can determine which packet is the best at that time, instead of having to give packets far into the future with incomplete (present) information. This is at odds with normal bandwidth-sharing, where you want to give as many packets at a time as possible for maximum throughput.

> I am thinking of some sort of a bandwidth reservation system where
> applications request and are granted bandwidth resources on a network.
> The scheduler would not be involved in the reservation, but once the
> reservation is made, it would be the scheduler that enforced it. If
> there were burst traffic going out, the scheduler might smooth the
> burst so that packets did not back up or get dropped somewhere in the
> network.

Right: the scheduler would rate-limit the traffic by buffering it (or dropping it if there is too much).

I recommend that someone research the existing bandwidth reservation systems (I mentioned two yesterday) and post a summary/references to the list before we get too far into this and try to reinvent a wheel that doesn't work with other wheels.

Kevin
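The rate-limit-by-buffering behavior described here is classically implemented as a token bucket; the following is a minimal sketch of that well-known technique, not of any UDI interface (the `shaper_t` type and function names are invented). Tokens accrue at the granted rate up to a burst limit, and a packet may go out only if enough tokens remain:

```c
#include <stdint.h>

/* Token-bucket shaper of the kind a scheduler could use to enforce a
 * reservation: 'rate' bytes/s accrue as tokens, capped at 'burst'. */
typedef struct {
    uint64_t rate;       /* granted bandwidth, bytes per second */
    uint64_t burst;      /* maximum burst, bytes                */
    uint64_t tokens;     /* currently available bytes           */
    uint64_t last_us;    /* time of last refill, microseconds   */
} shaper_t;

static void shaper_refill(shaper_t *s, uint64_t now_us)
{
    uint64_t add = (now_us - s->last_us) * s->rate / 1000000;
    s->tokens = (s->tokens + add > s->burst) ? s->burst : s->tokens + add;
    s->last_us = now_us;
}

/* Return 1 if the packet may be sent now; 0 means queue (smoothing the
 * burst) or drop it, exactly the enforcement role described above. */
int shaper_send_ok(shaper_t *s, uint64_t now_us, uint64_t pkt_bytes)
{
    shaper_refill(s, now_us);
    if (s->tokens >= pkt_bytes) {
        s->tokens -= pkt_bytes;
        return 1;
    }
    return 0;
}
```

A scheduler built this way smooths bursts automatically: packets arriving faster than the granted rate are held until tokens accrue, rather than being pushed into the network to back up or get dropped downstream.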
From: Robert B. <ba...@sy...> - 2001-03-06 15:33:19
The following is a discussion of ATM that I was asked to provide in the last phone meeting.

------------------------------------

ATM has two approaches to work with IP. One is LAN Emulation (LANE). The other is Classic IP (CIP). LANE has the capabilities of Ethernet. It uses a MAC address to determine the destination of packets. There is a LAN Emulation Server (LES) which translates a MAC address to an ATM address. LANE is capable of broadcasting and multicasting IP packets. CIP also has a server to translate addresses. It is referred to as a CIP ATMARP server. The ATMARP server translates IP addresses to ATM addresses. CIP does not support broadcast or IP multicast.

ATM is a connection-oriented protocol. In general, it sends packets over a single connection between any two host computers. This is accomplished by using switched virtual circuits (SVCs) or permanent virtual circuits (PVCs). SVCs will be set up when needed and torn down if they are unused for an extended period of time. PVCs can be configured by a system administrator. They remain as long as they are configured.

In the case of LANE, when there is an IP packet to send to an unknown host, a normal ARP will be broadcast. This will be done with a broadcast server. The ARP reply will contain the MAC address. The ATM address is then found with an LE_ARP request to the LAN Emulation Server. The ATM address will be used to set up an SVC to the destination host.

In the case of CIP, when there is an IP packet to send to an unknown host, an ATMARP request will be sent to an ATMARP server to obtain the ATM address. The ATM address will be used to set up an SVC to the destination host.
From: Robert B. <ba...@sy...> - 2001-03-06 13:39:08
|
I was thinking that there needs to be information passes to the scheduler about the bandwidth that it was able to use on the network. This bandwidth may be less than the bandwith of the interface because there may be bottlenecks elsewhere in the system. I am thinking of some sort of a bandwidth reservation system where applications request and are granted bandwwidth resources on a network. The scheduler would not be involved in the reservation, but once the reservation is made, it would be the scheduler that enforced it. If there were burst traffic going out, the scheduler might smooth the burst so that packets did not backup or get dropped somewhere in the network. Bob -----Original Message----- From: Burkhard Daniel [SMTP:bu...@st...] Sent: Monday, March 05, 2001 9:21 PM To: Robert Barned Cc: Project UDI Protocol Metalanguages Discussions; ud...@st... Subject: Re: [UDI-protocols] Multiple Technology Network Architecture Robert Barned wrote: > An example of such a scheduler for real time might be a module which insures > that a computer does not exceed the bandwith that it has been allocated on > a network. Such a module might distrubute burst traffic over time so that > the network does not get overloaded. If someone wished to implement such a > scheduler portably, there would need to be a portable way to get the bandwidth > information to the scheduler. Ok, I thought about this comment more. Yes, the scheduler needs some information about what sort of service the interface (read ND here) supplies, and must make its scheduling decisions accordingly. Therefore, it may be useful to add some sort of status indication to the NSR/ND interface, such as "what type of interface is this?" and "what's the current load on the device?". 
We may be able to deduce some of the info for the latter question by looking at the number of queued CBs the NSR still has, but this number is heavily dependent on the environment implementation and on the driver's accurate estimation of control block usage. Is there currently a way of querying the ND/notifying the NSR as to these statistics?

Burk.

--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489

_______________________________________________
Projectudi-protocols mailing list
Pro...@li...
http://lists.sourceforge.net/lists/listinfo/projectudi-protocols |
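As a sketch of how a scheduler could enforce such a granted reservation and smooth bursts, here is a minimal token-bucket shaper. Everything here is illustrative; nothing is part of any existing UDI interface, and the rates are invented.

```python
# Token-bucket shaper: tokens accrue at the reserved rate; a packet may be
# sent only if enough tokens are available, which caps sustained throughput
# at the reservation and limits bursts to the bucket capacity.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps        # granted (reserved) bandwidth, bytes/sec
        self.capacity = burst_bytes # largest burst allowed through at once
        self.tokens = burst_bytes   # start with a full bucket
        self.last = 0.0             # time of the last refill

    def allow(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True             # send now
        return False                # hold: the scheduler would queue and retry

tb = TokenBucket(rate_bps=1000, burst_bytes=1500)
assert tb.allow(1500, now=0.0)      # an initial burst fits the bucket
assert not tb.allow(1500, now=0.0)  # an immediate second burst is smoothed
assert tb.allow(1000, now=1.0)      # after 1s, 1000 tokens have accrued
```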
From: Burkhard D. <bu...@st...> - 2001-03-06 02:18:50
|
Robert Barned wrote:
> An example of such a scheduler for real time might be a module which ensures
> that a computer does not exceed the bandwidth that it has been allocated on
> a network. Such a module might distribute burst traffic over time so that
> the network does not get overloaded. If someone wished to implement such a
> scheduler portably, there would need to be a portable way to get the bandwidth
> information to the scheduler.

Ok, I thought about this comment more. Yes, the scheduler needs some information about what sort of service the interface (read ND here) supplies, and must make its scheduling decisions accordingly. Therefore, it may be useful to add some sort of status indication to the NSR/ND interface, such as "what type of interface is this?" and "what's the current load on the device?". We may be able to deduce some of the info for the latter question by looking at the number of queued CBs the NSR still has, but this number is heavily dependent on the environment implementation and on the driver's accurate estimation of control block usage. Is there currently a way of querying the ND/notifying the NSR as to these statistics?

Burk.

--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489 |
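The "deduce load from queued CBs" idea could look something like this in the abstract. The pool size and the interpretation of the ratio are invented for illustration; a real environment would have to expose these statistics through the NSR/ND interface.

```python
# Rough load estimate: the fraction of the transmit control-block pool that
# is still outstanding (queued at the ND, not yet acknowledged back).

def estimate_load(cbs_outstanding, cbs_total):
    """Return a 0.0-1.0 load estimate from outstanding tx CBs."""
    if cbs_total == 0:
        return 0.0
    return min(1.0, cbs_outstanding / cbs_total)

assert estimate_load(0, 32) == 0.0   # idle: all CBs returned
assert estimate_load(16, 32) == 0.5  # half the pool in flight
assert estimate_load(40, 32) == 1.0  # saturated (clamped)
```

As the mail notes, this estimate is only as good as the driver's accounting of its control blocks, which is exactly why an explicit query operation might be preferable.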
From: Burkhard D. <bu...@st...> - 2001-03-06 02:06:08
|
"Van Maren, Kevin" wrote:
> > On the last conference call, we discussed several
> > possibilities for address
> > translation at the data link interface. Here's a quick summary.
>
> Address translation is generally specific to both the protocol and
> the media. I.e., ARP for IP on Ethernet.

Yes, that is what I meant.

> I think the "best" place for it would be as a small media-specific
> component in the network protocol. If it could be reused by multiple
> protocols, then providing it as a UDI library would make sense.
> Implementing the protocol so that additional media "modules" can be
> added makes sense.

Using a UDI library sounds cool, but we agreed that it might make sense to put it into the data link layer for some protocols (read ATM/AAL here). Thus, we should provide for a way to put it into either the data link protocol or the network protocol.

> I don't think there is enough complexity of processing involved to
> justify a separate driver module or channel operations, and I don't
> want to push the protocol down into the driver, which means we must
> push the media into the protocol.

The extra module would be useful if you have several network protocols accessing one data link protocol. I'm not sure this is a common case though, and you may be completely right. However, maybe the extra generality is worth the effort.

Burk.

--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489 |
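One possible shape for the "media modules" idea is a registry keyed by (protocol, media), so ARP-for-IP-on-Ethernet and an ATM resolver can coexist and new media can be added without touching the protocol core. All names and addresses here are hypothetical, not anything from the UDI specifications.

```python
# Per-(protocol, media) address-translation registry, sketching how
# media-specific resolver modules could plug into a network protocol.

RESOLVERS = {}

def register_resolver(protocol, media, fn):
    """Install a media-specific translation function for a protocol."""
    RESOLVERS[(protocol, media)] = fn

def resolve(protocol, media, addr):
    """Translate a protocol address to a media address via the registered module."""
    fn = RESOLVERS.get((protocol, media))
    if fn is None:
        raise LookupError("no resolver for %s over %s" % (protocol, media))
    return fn(addr)

# Two illustrative media modules for IP (static tables stand in for ARP/ATMARP):
register_resolver("ip", "ethernet", {"10.0.0.2": "00:a0:c9:14:c8:29"}.get)
register_resolver("ip", "atm", {"10.0.0.2": "47.0091.8100.0000"}.get)

assert resolve("ip", "ethernet", "10.0.0.2") == "00:a0:c9:14:c8:29"
assert resolve("ip", "atm", "10.0.0.2") == "47.0091.8100.0000"
```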
From: Burkhard D. <bu...@st...> - 2001-03-06 01:59:33
|
"Van Maren, Kevin" wrote:
> > The other important factor is the introduction of two additional layers, for
> > classifying (i.e. identifying the type of data to be sent & selecting the
> > appropriate protocols & access method) and scheduling (for providing QoS).
>
> Wouldn't the "classifier" normally be the socket mapper?

Not necessarily. It could be a UDI module that sits underneath a specially tailored socket mapper (which is the idea here). The classifier itself might not even be one single module, but could instead be a hierarchical collection of classifiers, each directing "their" specific flows to the appropriate protocol engine.

> Doesn't the protocol layer talk NIC meta (as the NSR), with the layer below
> acting as the ND? So the scheduler would be a driver that plugs in to the
> existing meta framework, right? [If the datalink layer in your diagram
> plays the NSR, then the scheduler would more clearly be a link between the ND
> and the NSR.]

Yes, the data link layer is the bottom-most layer of the NSR. The inception of UDI protocols means splitting up the NSR into pieces (isn't that true?). The scheduler could act as an ND, but with another ND underneath it. This is what I refer to as a filter: a driver that presents the same interface on its top and bottom boundaries.

> Your diagram has the scheduler routing Ethernet and ATM packets to multiple
> interfaces. Is this expected to be the typical configuration, or would
> a typical configuration have a separate QoS module for each physical
> interface?
> I think most QoS work only deals with a single interface (but link aggregation
> can be used to bundle multiple links together, which could then be QoS'd).

I tried to depict the most generic situation. It is indeed desired functionality to be able to route Ethernet and ATM packets to multiple interfaces, for instance if using the same IP address on different physical devices (and thus, potentially, data link interfaces, e.g.
with UMTS and Bluetooth). This is a declared goal in ubiquitous computing and has been implemented in Linux, so I think it is a good idea to provide support for it in UDI. It is expected that multiple interfaces using a similar technology would be QoS'd together, but I agree that ATM and Ethernet are probably far enough apart that they would get different schedulers (then again, maybe not; we'd do well to leave that as a management option).

> Is there a problem with just using a GIO-interface to the scheduler/filter
> module for configuration? One example filter could be a firewall module,
> which could be fed the ruleset through a gio channel; another scheduler could
> be a link-aggregation module that supports Cisco Etherchannel.

The diagram does not consider configuration. Configuration would indeed be adjusted through a "management pane", which could well be a GIO interface to the scheduler, but it could also be static properties. The behaviour of the scheduler is not expected to change much over its lifetime, as it is bound to a certain technology. Bind requests and reading static properties of the underlying ND driver should give it enough information to perform its function.

I think one arrow missing in the diagram is between two scheduler instances, as they may well need to talk to each other to come to an optimal (and unique) decision. Imagine a situation where the internet is available through both several wireless networks and through an Ethernet, which lie in different "technology panes" (the hatched areas in the diagram). The decision where to route the traffic would be made on costs and requirements on the traffic (e.g. email could be sent through cheap GSM channels, while multimedia contents might be delivered via the Ethernet).

By the way, by "filter", I mean a module that provides the same interface on its "top" and "bottom" boundaries, so that it can be plugged in and out. 
It only provides extra functionality, but a system could well work without it.

> Have you read much of the existing literature on network QoS work, such
> as HFSC and HPFQ? I'm a bit rusty.

Classical QoS is not really the goal of this, but (fair and appropriate) distribution of transmission resources is (remember, the incentive for this is in part mobile networking). I am not very familiar with either of the acronyms you mentioned (sorry).

Burk.

--
Burkhard Daniel
Software Technologies Group, Inc.
bu...@st... * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-333-5319489 |
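The filter idea above can be made concrete with a toy model: a module that exposes the same send() interface on its top and bottom boundaries, so it can be inserted into or removed from the stack without the layers around it noticing. The interfaces and the firewall example are invented for illustration (the firewall is one of the filter examples mentioned in the thread); none of this is actual UDI metalanguage code.

```python
# A "filter" in the sense used above: same interface on top and bottom,
# so the stack works identically with or without it.

class Device:
    """Stands in for the bottom ND; records what reaches the wire."""
    def __init__(self):
        self.sent = []
    def send(self, frame):
        self.sent.append(frame)

class Firewall:
    """A filter: presents send() above, calls send() below."""
    def __init__(self, below, blocked):
        self.below = below
        self.blocked = blocked
    def send(self, frame):
        if frame["dst"] not in self.blocked:
            self.below.send(frame)   # pass through unchanged

nd = Device()
stack = Firewall(nd, blocked={"10.0.0.9"})
stack.send({"dst": "10.0.0.2", "data": b"ok"})
stack.send({"dst": "10.0.0.9", "data": b"dropped"})
assert [f["dst"] for f in nd.sent] == ["10.0.0.2"]

# Pulling the filter out leaves the stack working, just without the policy:
nd.send({"dst": "10.0.0.9", "data": b"now allowed"})
assert len(nd.sent) == 2
```

Because the top and bottom interfaces match, filters of this kind (firewalls, schedulers, link aggregators) can also be chained in any order, which is what makes them attractive as a management option.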