Re: [RTnet-developers] Re: Common rt-buffers
From: Jan K. <jan...@we...> - 2005-12-07 22:53:07
Yuchen Zhang wrote:
> On 12/6/05, Jan Kiszka <jan...@we...> wrote:
>
>>Yuchen Zhang wrote:
>>
>>>Jan,
>>>
>>>
>>>
>>>>BTW: Yuchen, we still need to think about that common rtskb format to
>>>>avoid the additional copy step in eth1394...
>>>>
>>>
>>>The solution currently in my mind is to also use the rtpkbuff module
>>>in RTnet, which will be exactly the same for RTnet and RT-FW. The
>>>rtpkbuff module takes care of the general memory management, so all
>>>the current rtskb functions will use the primitives from the rtpkbuff
>>>module, just like hpsb_packet does in RT-FW. That way, RTnet can
>>>still keep the same naming for all rtskb functions. See below how
>>>rtpkb is currently defined:
>>>
>>>struct rtpkb {
>>> /*meta stuff for general memory management*/
>>> ......
>>>
>>> char specific_stuff[256]; /*rtskb specific stuff goes here*/
>>>};
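>>>
>>>The current rtskb calls would then just become thin wrappers around
>>>the rtpkb primitives, roughly like this (only a sketch, the names and
>>>signatures are not final):
>>>
>>>static inline struct rtpkb *alloc_rtskb(unsigned int size,
>>>                                        struct rtpkb_pool *pool)
>>>{
>>>    /* the actual memory management is done by the rtpkbuff module */
>>>    return alloc_rtpkb(size, pool);
>>>}
>>>
>>>static inline void kfree_rtskb(struct rtpkb *skb)
>>>{
>>>    kfree_rtpkb(skb);
>>>}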
>>>
>>>Would like to know your opinion:)
>>
>>I checked the situation in more detail again. This is how I see it ATM:
>>
>> o rt-buffers, as RT-FireWire and RTnet use them, consist of a single memory
>> fragment, starting with a buffer head, followed by a payload buffer.
>> RTnet requires almost 1700 bytes per buffer, RT-FireWire more than
>> 4k. But RTnet may require more in case we ever start using jumbo
>> frames for gigabit Ethernet (i.e. >8k).
>>
>> o In order to make buffers fully exchangeable between RT-FireWire and
>> RTnet, both subsystems must agree on
>>
>> 1. a common rt-buffer size
>> 2. a minimum buffer head size to let every user put some private
>> data into the header as well
>> 3. the same kernel slab pool to take additional buffers from
>>
>
> What I understand about the buffer header is: it is composed of the
> meta stuff, which is used both for general memory management (like
> payload size, the pool the buffer belongs to, etc.) and for
> application-specific management (like the Ethernet frame headers),
> plus the pointer to the payload memory. An rt-buffer struct pointer
> can be safely cast to a pointer to the application-specific buffer
> struct, and vice versa.
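>
> For example (just a sketch with invented names), if hpsb_packet
> mirrors the rtpkb layout and keeps its own fields inside the
> specific_stuff area, the casts are safe in both directions:
>
>     struct rtpkb *pkb;
>     struct hpsb_packet *packet;
>
>     pkb    = alloc_rtpkb(size, pool);        /* generic buffer    */
>     packet = (struct hpsb_packet *)pkb;      /* app view of it    */
>     /* ... fill in the FireWire-specific fields ... */
>     pkb    = (struct rtpkb *)packet;         /* and back again    */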
>
>
>>   The third item actually enforces the first one - but it may
>>   unfortunately do this in the wrong direction, i.e. provide buffers
>>   that are too small for one of the users.
>>   Requirement 3 could be avoided by using kmalloc instead of
>>   kmem_cache_alloc for each individual buffer, but this may easily
>>   increase the overhead due to not optimally sized memory fragments.
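>>
>>   Just to illustrate the difference (sketched from memory against the
>>   2.6 slab API, the names are placeholders):
>>
>>      /* shared slab pool: one fixed object size for all users */
>>      cache = kmem_cache_create("rtbuf", RTBUF_SIZE, 0, 0, NULL, NULL);
>>      buf   = kmem_cache_alloc(cache, GFP_KERNEL);
>>
>>      /* per-buffer kmalloc: sizes may differ per user, but each
>>         allocation gets rounded up to the next generic slab size,
>>         which can waste memory for odd-sized rt-buffers */
>>      buf   = kmalloc(my_rtbuf_size, GFP_KERNEL);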
>>
>
> Yes, I agree.
>
>
>> o As we should not simply set the worst-case size as default for all,
>> some flexibility at least during compile time is required regarding
>>   the size parameters (1. and 2.). More precisely, the subsystem that
>>   is closer to the physical media dictates the rt-packet size.
>>
>
> I also agree that the buffer size should be flexible. It could be an
> argument that is passed to the pool allocation API in the general
> memory management (say, RTDM).
>
>
>>Now let's think about some approaches to fulfill these requirements in a
>>generic, maybe also extensible way:
>>
>> o The rt-buffer pooling mechanism is a generic feature probably suited
>>   for more scenarios than RTnet and RT-FireWire. I'm currently playing
>>   with the idea of moving the functionality into the RTDM layer. The core
>> features would be:
>>
>
> Good idea. In RT-FireWire, the general memory management has been
> separated from application-specific buffer management: we have rtpkb
> (for general) and hpsb_packet (for FireWire protocol). But moving the
> functionality to a base layer like RTDM seems more reasonable.
>
>
>> - Providing non-real-time allocation of rt-buffers from what I would
>>   call parent pools. Buffers from parent pools can be organised in
>>   local real-time pools by the drivers and, in case they come from
>>   the same parent pool, freely exchanged.
>>
>> - Parent pools need to be identified by a unique name. The first
>>   user that accesses a parent pool under a yet unregistered name
>>   will create that pool with the rt-buffer size it provides with
>>   the request.
>>
>> - Any additional user of an existing parent pool specifying a
>> different rt-buffer size will receive an error. Thus we make sure
>> that all users cleanly agree on the same size beforehand. By
>> "size", I also mean the minimum header size, i.e. actually two
>> parameters.
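>>
>>   To make this more concrete, the API could look roughly like this
>>   (all names are invented, nothing of this exists in RTDM yet):
>>
>>      struct rtdm_parent_pool *pool;
>>
>>      /* creates the pool on first use, attaches otherwise; fails if
>>         the pool already exists with different size parameters */
>>      pool = rtdm_parent_pool_get("rtnet-rtfw", /* unique name      */
>>                                  4096,         /* rt-buffer size   */
>>                                  128);         /* min. buffer head */
>>      if (!pool)
>>          return -EINVAL;  /* size mismatch; exact error code t.b.d. */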
>>
>
>
> Above 3 seem fine to me.
>
>> - Moreover, one can discuss which features from RT-FireWire's rtpkb
>>   and RTnet's rtskb implementation should make it into RTDM. I
>>   haven't thought about this yet.
>>
>
>
> rtpkb has some functions that rtskb does not have. We can take a look
> into these details later.
>
>
>> o Applying this new RTDM feature to the RTnet/RT-FireWire situation, we
>> would first have to establish some mechanism to agree on rt-buffer
>> sizes. As RTnet with its rt_eth1394 driver is stacked on top of
>> RT-FireWire and its buffers are (currently) smaller, it will simply
>> grab RT-FireWire's parameters from some exported header during
>> compile time. Then RTnet can call RTDM to either join or create the
>> fitting parent pool.
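>>
>>   Roughly (the header name and constants are just placeholders):
>>
>>      /* exported by RT-FireWire, e.g. in some rtfw_config.h */
>>      #define RTFW_RTBUF_SIZE      4096
>>      #define RTFW_RTBUF_HEADROOM  128
>>
>>      /* in RTnet's init code: join (or create) the shared parent
>>         pool with exactly the sizes dictated by the lower layer */
>>      pool = rtdm_parent_pool_get("rtnet-rtfw", RTFW_RTBUF_SIZE,
>>                                  RTFW_RTBUF_HEADROOM);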
>>
>
> The pool allocation API of RTDM can be called whether or not the pool
> (whose name is specified as an argument) has already been allocated.
> If it has, the API simply returns a pointer to the existing pool.
>
>
>> o Due to the significant overhead of using 4k buffers for standard
>>   Ethernet and the possibility that RTnet may support jumbo frames in
>>   the future, a scheme for using different parent pools, and thus
>>   different rt-buffer sizes, for different network devices has to be
>>   established in RTnet later on. But this requires some refactoring
>>   and API extensions and is nothing for a quick shot.
>>
>
> Ok, again, the buffer size can be passed as an argument. But for now,
> I think we only need to focus on the service-provider part, i.e. RTDM.
Hey, convincing you of my idea was really easy. ;)
>
> Now, I have a further idea: what if we also bind the packet capturing
> service to the memory management? That is, if a real-time application
> needs real-time memory allocation, it can also ask for capturing of
> all the allocated memory (packets). We can use some user-space tools
> to analyze the captured memory. The captured memory can be delivered
> via a fake Ethernet interface, so tools like Ethereal can be used (we
> only need to extend it based on the application).
>
> The capturing of a packet can be done when that packet's memory is
> freed, so the capturing operation is transparent to the application.
> Well, the application must claim somewhere before the packet is freed
> that it should be captured, e.g. right after it is allocated.
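>
> As a rough sketch (the names are invented), it could look like this
> from the application's point of view:
>
>     pkb = alloc_rtpkb(size, pool);
>     pkb->cap_flags |= RTPKB_CAP;   /* claim: capture this buffer */
>     /* ... use the buffer ... */
>     kfree_rtpkb(pkb);              /* the free path clones the marked
>                                       buffer to the capture queue
>                                       before recycling it */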
>
> Concerning the packet capturing idea, we can explore it further if it
> is agreed to be interesting and worth the effort.
Actually, I'm not yet convinced that outsourcing /this/ job to RTDM is
useful.
At least for RTnet, capturing is a debugging feature you don't want
compiled in for production systems. It adds overhead to some critical
paths that may hurt on low-end boxes.
The question to me is then what functionality of the capturing subsystem
could be moved to RTDM while still keeping control over using it or not
(without runtime costs) at the configuration level of the driver using
it. I would like to be able to switch capturing off during "make
menuconfig" of RTnet, while maybe switching it on for RT-FireWire at the
same time.
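Just to illustrate what I mean (the config symbol and names are made
up), the driver-facing hook could compile away completely:

    #ifdef CONFIG_RTNET_CAPTURING
    #define rtnet_capture(buf)  rtdm_buf_capture(buf)  /* RTDM service */
    #else
    #define rtnet_capture(buf)  do { } while (0)       /* no-op, zero
                                                           runtime cost */
    #endif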
But maybe I'm too sceptical. I haven't looked at both code bases with
your idea in mind yet. Which packet fields, which functions would you
move to RTDM?
Jan