Yuchen Zhang wrote:
> Jan,
>
>
>>BTW: Yuchen, we still need to think about that common rtskb format to
>>avoid the additional copy step in eth1394...
>>
>
> The solution currently in my mind is to also use the rtpkbuff module
> in RTnet, which will be exactly the same for RTnet and RT-FW. The
> rtpkbuff module takes care of the general memory management, so all
> the current rtskb functions will use the primitives from the rtpkbuff
> module, just like hpsb_packet does in RT-FW. That way, RTnet can still
> keep the same naming for all rtskb functions. See below for how rtpkb
> is currently defined:
>
> struct rtpkb {
>         /* meta stuff for general memory management */
>         ......
>
>         char specific_stuff[256]; /* rtskb-specific stuff goes here */
> };
>
> Would like to know your opinion:)
I checked the situation in more detail again. This is how I see it ATM:
o rt-buffers, as RT-FireWire and RTnet use them, consist of a single
memory fragment, starting with a buffer head that is followed by the
payload buffer. RTnet requires almost 1700 bytes per buffer,
RT-FireWire more than 4k. But RTnet may require even more in case we
ever start using jumbo frames for gigabit Ethernet (i.e. >8k). (A rough
layout sketch follows after this list.)
o In order to make buffers fully exchangeable between RT-FireWire and
RTnet, both subsystems must agree on
1. a common rt-buffer size
2. a minimum buffer head size to let every user put some private
data into the header as well
3. the same kernel slab pool to take additional buffers from
The third item actually enforces the first one - but may
unfortunately do this in the wrong direction, i.e. provide buffers
that are too small for one instance.
Requirement 3 could be avoided by using kmalloc instead of
kmem_cache_alloc for each individual buffer, but this may easily
increase the overhead, since kmalloc rounds each allocation up to a
generic, not optimally sized memory fragment.
o As we should not simply set the worst-case size as the default for
all, some flexibility, at least at compile time, is required regarding
the size parameters (1. and 2.). More precisely, the subsystem which
is closer to the physical medium dictates the rt-packet size.
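To make the layout explicit, here is a rough sketch of such a
single-fragment rt-buffer; the field names and sizes below are made up
for illustration and do not match the actual rtskb or rtpkb
definitions:

/* Illustrative sketch only - names and sizes are invented, not taken
   from the real rtskb/rtpkb code. */
#define RTBUF_HEAD_SIZE  256    /* minimum private head per user */
#define RTBUF_DATA_SIZE  4096   /* dictated by the medium closest to
                                   the wire, here RT-FireWire */

struct rt_buffer {
        /* generic management part (list pointers, owning pool, ...) */
        struct rt_buffer *next;
        void             *pool;

        /* private area each user may fill with its own metadata */
        char              head[RTBUF_HEAD_SIZE];

        /* the payload follows directly in the same memory fragment, so
           one allocation from the slab cache returns the complete
           buffer */
        unsigned char     data[RTBUF_DATA_SIZE];
};

With such a layout, sharing the slab cache (item 3) automatically means
agreeing on sizeof(struct rt_buffer), which is why item 3 effectively
enforces item 1.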
Now let's think about some approaches to fulfill these requirements in a
generic, maybe also extensible way:
o The rt-buffer pooling mechanism is a generic feature, probably suited
for more scenarios than just RTnet and RT-FireWire. I'm currently
playing with the idea of moving the functionality into the RTDM layer.
The core features would be:
- Providing non-real-time allocation of rt-buffers from what I would
call parent pools. Buffers from parent pools can be organised in
local real-time pools by the drivers and, in case they come from the
same parent pool, freely exchanged.
- Parent pools need to be identified by a unique name. The first
user that accesses a parent pool under a yet unregistered name
will create that pool with the rt-buffer size it provides with
the request.
- Any additional user of an existing parent pool specifying a
different rt-buffer size will receive an error. Thus we make sure
that all users cleanly agree on the same size beforehand. By
"size", I also mean the minimum header size, i.e. actually two
parameters. (See the sketch after this list for how such a
create-or-join call could look.)
- Moreover, one can discuss which features from RT-FireWire's rtpkb
and RTnet's rtskb implementation should make it into RTDM. I
haven't thought about this yet.
o Applying this new RTDM feature to the RTnet/RT-FireWire situation, we
would first have to establish some mechanism to agree on rt-buffer
sizes. As RTnet with its rt_eth1394 driver is stacked on top of
RT-FireWire and its buffers are (currently) smaller, it will simply
grab RT-FireWire's parameters from some exported header at compile
time. Then RTnet can call RTDM to either join or create the
fitting parent pool.
o Due to the significant overhead of using 4k buffers for standard
Ethernet and the possibility that RTnet may support jumbo frames in
the future, a scheme for using different parent pools, and thus
different rt-buffer sizes, for different network devices has to be
established in RTnet later on. But this requires some refactoring and
API extensions and is nothing for a quick shot.
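To make the parent pool idea a bit more tangible, here is a rough
sketch of how such a create-or-join call could look. rtdm_pool_attach()
and the helpers it uses are purely hypothetical names; nothing like
this exists in RTDM yet:

#include <linux/list.h>
#include <linux/err.h>

/* Hypothetical sketch - no such API exists in RTDM today. */
struct rtdm_parent_pool {
        struct list_head  entry;
        char              name[32];
        size_t            buf_size;    /* full rt-buffer size */
        size_t            head_size;   /* minimum private head size */
        unsigned int      users;
        /* slab cache, free list, locking etc. omitted */
};

/* The first caller under a yet unregistered name creates the pool with
   the given sizes; any later caller must request exactly the same
   sizes or receives an error. */
struct rtdm_parent_pool *rtdm_pool_attach(const char *name,
                                          size_t buf_size,
                                          size_t head_size)
{
        struct rtdm_parent_pool *pool = pool_lookup(name);  /* hypothetical */

        if (!pool)
                return pool_create(name, buf_size, head_size); /* hypothetical */

        if (pool->buf_size != buf_size || pool->head_size != head_size)
                return ERR_PTR(-EEXIST);

        pool->users++;
        return pool;
}

RTnet, stacked on top of RT-FireWire via rt_eth1394, would then pick up
RT-FireWire's size constants from the exported header and call
something like rtdm_pool_attach("rt-firewire", RTFW_BUF_SIZE,
RTFW_HEAD_SIZE) at module init time (again, all names invented here).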
So far my ideas. Comments are welcome, questions as well. I hope the
actual implementation is not half as long as this posting. ;)
Jan