Re: [RTnet-developers] Re: Common rt-buffers
From: Yuchen Z. <yuc...@gm...> - 2005-12-13 00:06:22
On 12/9/05, Jan Kiszka <jan...@we...> wrote:
> Yuchen Zhang wrote:
> >>>Now, I have some further idea: what if we also bind the packet
> >>>capturing service with the memory management. That is if a real-time
> >>>application needs real-time memory allocation, it can also ask for
> >>>capturing of all those allocated memory (packets). We can use some
> >>>user-space tools to analyze the captured memory. The captured memory
> >>>can be delivered via fake Ethernet interface, so tools like Ethereal
> >>>can be used (only need to extend it based on the application).
> >>>
> >>>The capturng of a packet can be done when that packet memory is freed,
> >>>where the capturing operation is transparent to the application. Well,
> >>>the application must claim somewhere before the packet is freed that
> >>>it should be captured, e.g. right after it is allocated.
> >>>
> >>>Concerning the packet capturing idea, we can explore more, if it is
> >>>agreed to be interesting and effort-worth.
> >>
> >>Actually, I'm not yet convinced that outsourcing /this/ job to RTDM is
> >>useful.
> >>
> >
> > Ok, now time for real debating:)
> > Let's first sketch the main framework code to support this service.
> >
> > struct Cap_struct {
> > struct net_dev *fakedev; /*the fake ethernet dev to deliver
> > captured packet to user-space*/
> > struct rt-buffer-pool *cap_pool; /*where the buffers to
> > compensate application memory come from*/
> >     int buffer_size; /* the size of buffer that can be captured here */
> > }
>
> So, that structure describes a capturing instance, conceptually
> something like rtcap service of RTnet?
>
The difference here, compared with rtcap in RTnet, is that every single
capturer has its own buffer pool and its own interface to user space (the
fake Ethernet device). The latter is similar in RTnet, as you have a tap
device for each rtdev.
But all in all, the new capturing is oriented to the application instead
of the network interface:
when the rtcap module in RTnet is loaded, it replaces the xmit_hook and
rx_hook of a certain rtdev without informing the applications, i.e. the
capturing service is fully transparent to applications. (Correct me if I
am wrong. :)) The new capturing is new mainly because it lets
applications take over control of capturing.
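To make the application-oriented part concrete, here is a minimal
usage sketch of what I have in mind (everything here is an assumption
for illustration: rtpk_alloc() as the common allocation call, my_pool,
and the aCap tagging are not settled API):

    /*
     * Hypothetical application-side usage: the application creates its
     * own capturer and tags the buffers it wants analyzed.  create_cap()
     * is the function sketched below; rtpk_alloc() and my_pool are
     * made up.
     */
    static struct Cap_struct *mycap;

    int myapp_init(void)
    {
        /* one capturer per application, with its own compensation pool
         * and its own fake ethernet dev visible to Ethereal */
        mycap = create_cap(1024, &my_pool, "myapp");
        if (!mycap)
            return -ENOMEM;
        return 0;
    }

    void myapp_xmit(void)
    {
        struct rt-buffer *buf = rtpk_alloc(&my_pool, 1024);

        /* claim capturing right after allocation; the buffer is then
         * captured transparently when it is freed (or at whatever point
         * the application calls cap_rtbuffer() itself) */
        buf->aCap = mycap;

        /* ... fill and send the buffer as usual ... */
    }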
> >
> > struct rt-buffer{
> > .....
> > struct Cap_struct *aCap;
> > }
> >
>
> And this link is needed as there can be multiple capturers, right? But
> what about the additional fields required by rtcap to save the initial
> buffer layout, the compensation buffer, timestamp, and other stuff? Note
> that rtcap does deferred capturing. Does your service in RT-FireWire
> work the same way?
>
My idea was also deferred capturing, but somehow that was not clearly
expressed. :)
And yes, the timestamp will be in the rt-buffer struct as well. As for
the compensation buffer, my initial idea was to capture the rt-buffer
only when it is being freed. In that case we don't need the compensation
buffer pointer in the rt-buffer struct, since the compensation buffer is
immediately given to the application buffer pool, i.e. a buffer
exchange between application and capturer. That is also why the
cap_rtbuffer function should return the pointer to the compensation
buffer. BUT now I would rather let the application decide when
to capture the packet: stamping the time, assigning the compensation
packet and putting it into the queue of captured packets. That is more
similar to rtcap in RTnet.
So the rt-buffer struct will be
struct rt-buffer{
    ..........
    struct Cap_struct *aCap;    /* capturer this buffer is bound to */
    struct rt-buffer *comp_buf; /* compensation buffer, assigned at
                                   capture time */
}
> > /*function to create a Cap_struct*/
> > struct Cap_struct *create_cap(int size,   // the buffer size
> >                               struct kernel_slab_pool *parent_pool,
> >                               char *name)  // the name of the fake
> >                                            // ethernet dev, should be
> >                                            // specific to the application
> > {
> > if(*name already exists){
> > return (address of existing Cap_struct)
> > }
> > .............
> > }
> >
> > /*function to remove a Cap_struct*/
> > void remove_cap(struct Cap_struct *aCap_struct)
> >
>
> Ok, this makes sense.
>
> > /*function to capture a buffer*/
> > struct rt-buffer *cap_rtbuffer(struct rt-buffer *abuffer,
> >                                int buffer_size, struct Cap_struct *aCap)
> > {
> >     if(aCap->buffer_size != buffer_size)
> > //print error msg and return
> > else{
> > //allocate a compesating buffer from the cap pool
> > // exchange buffers
> > //deliver the captured buffer to fake ethernet dev
>
> I hope you do NOT deliver immediately, just pend the delivery for Linux
> to get in the CPU again.
>
I thought this would not kill real-time behavior, since the main work is
done in the signal handler invoked via netif_rx, but I just realized it
also involves a one-time dynamic memory allocation, so it will now be
wrapped in a Linux signal handler. Again like in RTnet.
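Roughly like this (only a sketch: I assume the RTDM non-RT signal
service, rtdm_nrtsig_init()/rtdm_nrtsig_pend(), can be used for the
hand-over, and cap_enqueue()/cap_dequeue()/fakedev_rx() are made-up
helpers):

    /*
     * Deferred delivery sketch: the real-time side only queues the
     * captured buffer and pends a non-RT signal; netif_rx() and any
     * dynamic memory allocation happen later, in Linux context.
     */
    #include <rtdm/rtdm_driver.h>

    static rtdm_nrtsig_t cap_sig;

    static void cap_deliver(rtdm_nrtsig_t *sig, void *arg)
    {
        struct Cap_struct *aCap = arg;
        struct rt-buffer *buf;

        /* Linux context: safe to allocate an skb and call netif_rx() */
        while ((buf = cap_dequeue(aCap)) != NULL)
            fakedev_rx(aCap->fakedev, buf);
    }

    /* real-time side, called from cap_rtbuffer() */
    static void cap_pend(struct Cap_struct *aCap, struct rt-buffer *buf)
    {
        cap_enqueue(aCap, buf);     /* lock-protected capture queue */
        rtdm_nrtsig_pend(&cap_sig); /* let Linux pick it up later */
    }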
> > //return the address of compensating buffer
> > return &comp_buffer;
> > }
> > }
>
> What's the caller supposed to do with that returned buffer?
>
The changed cap_rtbuffer is like:
void cap_rtbuffer(struct rt-buffer *abuffer, int buffer_size,
                  struct Cap_struct *aCap)
{
    if(aCap->buffer_size != buffer_size)
        //print error msg and return
    else{
        //allocate a compensating buffer from the cap pool
        //exchange buffers
        //pend the captured buffer for delivery via the fake ethernet dev
    }
}
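Fleshed out a bit more, with the stamping and queuing made explicit
(again just a sketch: the cap_timestamp field, alloc_from_pool() and
cap_pend() are assumptions, and I use rtdm_clock_read() for the
timestamp):

    /*
     * Sketch of the changed cap_rtbuffer(): stamp the time, assign the
     * compensation buffer and pend the captured one for deferred
     * delivery.
     */
    void cap_rtbuffer(struct rt-buffer *abuffer, int buffer_size,
                      struct Cap_struct *aCap)
    {
        struct rt-buffer *comp;

        if (aCap->buffer_size != buffer_size) {
            /* print error msg */
            return;
        }

        comp = alloc_from_pool(aCap->cap_pool);
        if (!comp)
            return;  /* cap pool exhausted: silently skip capturing */

        abuffer->cap_timestamp = rtdm_clock_read(); /* assumed field */
        abuffer->comp_buf = comp;   /* handed back to the application
                                     * pool when abuffer is freed */

        cap_pend(aCap, abuffer);    /* deferred delivery, see above */
    }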
>
> >
> > /*function to free a rt buffer*/
> > void free_rtbuffer(struct rt-buffer *abuffer){
> >     if(abuffer->aCap){ //pointer NOT null, so we should capture this one
> >         abuffer = cap_rtbuffer(abuffer, sizeof(abuffer), abuffer->aCap);
>
> Is this the only use case for cap_rtbuffer? How do you mark the actual
> capturing date? Only on buffer release? Or did you leave out this aspect
> in the draft?
>
> > }
> > //free abuffer
> > ......
> > }
> >
> >
> >>At least for RTnet, capturing is a debugging feature you don't want
> >>compiled in for production systems. It adds overhead to some critical
> >>paths that may hurt on low-end boxes.
> >>
> >
> > I don't think the overhead is noticeable. When the base module (say
> > RTDM) has the capturing option enabled, the only overhead for
> > applications that do NOT need capturing is the extra check of an
> > rt-buffer's cap_struct pointer in the free_rtbuffer function.
>
> Well, the current kfree_rtskb in RTnet has to do more. It iterates over
> all rtskbs in a (possible) chain and checks for each if it is a captured
> one. Anyway, this needless overhead in the capturing-compiled-in-but-
> not-used case can be avoided with a simple check before entering any
> loop. So, you are likely right, the overhead is acceptable - and could
> still be reduced when we add a CONFIG_switch to RTDM.
>
> Apropos chains: does RT-FireWire's rtpk support them? If we drag
> capturing into the core service layer, this is required.
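A single guard in front of the chain walk should indeed do it; a sketch
(rtskb naming follows RTnet, while cap_active and the marker field are
made up):

    /*
     * Sketch of the check-before-loop idea for kfree_rtskb(): when no
     * capturer is active at all, the per-buffer capture test is skipped.
     */
    void kfree_rtskb(struct rtskb *skb)
    {
        struct rtskb *chain;

        if (cap_active)                      /* any capturer at all? */
            for (chain = skb; chain; chain = chain->next)
                if (chain->cap_comp_skb)     /* assumed marker field */
                    cap_rtskb(chain);        /* deferred capture */

        /* ... release the whole chain back to its pool ... */
    }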
>
> >
> >
> >>The question to me is then what functionality of the capturing subsystem
> >>could be moved to RTDM while still keeping the control of using it or
> >>not (without runtime costs) at the configuration level of the driver
> >>using it. I would like to be able to switch capturing off during "make
> >>menuconfig" of RTnet, but maybe switching it on for RT-FireWire at the
> >>same time.
> >>
> >
> > The capturing I am proposing is conceptually different from the
> > capturing in RTnet (as I understand it). It is fully oriented to
> > applications, while RTnet's capturing is more or less per networking
> > interface. That means in this new capturing service, buffer capturing
> > can happen /anywhere/, even something totally away from network
> > transaction. This is especially useful when we develop multiple
> > applications over /one/ FireWire interface: we may only want to
> > capture buffers used by one high-level application. Besides FireWire
>
> I may not know my own code well enough, but I don't see a reason why
> setting the capturing mark in RTnet should not be movable, also away
> from the general places where it is right now.
>
> > and Ethernet, other applications based on RTDM, like socketCAN, can
> > also benefit from this service, as long as they need some analysis
> > over the used memory to gain some insight to the system behavior.
>
> I had Socket-CAN in mind as well first, but Sebastian's implementation
> is smartly not using discrete CAN-packet buffers. Rather, there are ring
> buffers per receiving socket. This is more efficient than throwing
> around 8 bytes data + some bytes head per CAN frame in individual
> packets. CAN packets are too small to justify the maintenance costs of
> skb-like approaches - that's at least my strong belief (but we should
> soon be able to do some benchmarks on this as well - there is a Linux
> skb-based variant on the way).
>
> Anyway, one may consider if "external" users of the rt-packet capturing
> services make sense as well. But that's more kind of a second or third st=
ep.
>
> >
> > BTW, are you going to work on these soon, at least the memory
> > management part? How can I contribute? My idea is to make RTDM
>
> My plan was to burden much of the development on you, as my time is more
> than just limited. ;)
>
> Well, I have no fixed schedule, but I guess things will move quickly as
> soon as we have some API and data structures.
>
> > temporarily a separate module that can be compiled over xenomai, and
> > put it either on RTnet svn or RT-FireWire svn.
>
> I wouldn't drag the whole RTDM module back into RTnet again (that's too
> platform dependent anyway). I would suggest developing the required
> services as an optional subsystem first inside RT-FireWire and RTnet. As
> soon as this API is stable for each individual project, we could then
> merge it into a private Xenomai tree, test the cooperation between RTnet
> and RT-FireWire, and then start submitting patches against Xenomai first
> and RTAI later.
>
> And, with the RTDM-packet subsystem still available in both projects, we
> could continue to provide support for older RTDM cores without that
> feature. Detecting the RTDM API revision is easy and could be used to
> decide if the internal rt-packet code should be built.
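For instance (a sketch, assuming the core exports RTDM_API_VER as it
does today; the revision number 5 is an arbitrary placeholder):

    /*
     * Sketch: compile the project-internal rt-packet code only against
     * RTDM cores that do not provide the feature yet.
     */
    #include <rtdm/rtdm_driver.h>

    #if !defined(RTDM_API_VER) || (RTDM_API_VER < 5)
    #define CONFIG_INTERNAL_RTPK 1  /* fall back to built-in rt-packet code */
    #endif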
>
> Jan
>
>
>