
#83 High CPU usage with Linux port definition (DED or Serial)

Milestone: 7.0.12
Status: pending
Labels: None
Updated: 2023-12-13
Created: 2023-06-08
Creator: Steve
Private: No

Tested on a Pi 4 / Raspbian GNU/Linux 11 (bullseye)
and also on an i386 VM running Mint/Sarah (old, I know)

FBB-7.0.11 (and fbb-7.0.8-beta8)
BPQ 6.0.23.74 (as well as an old ver 6.0.20.50)
Linux pi4 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023 aarch64 GNU/Linux

When using DED to connect to BPQ, I am seeing high CPU usage (~40%).

Config is as follows:

BPQ DED config:

TNCPORT
    COMPORT=/home/taj/fbbded
    TYPE=DED
    STREAMS=4
    APPLMASK=1
ENDPORT

port.sys

# FBB7.0.11
#
#Ports TNCs
 1     1
#
#Com Interface Adress (Hex)               Baud
 1       9      /home/taj/fbbded   9600
#
#TNC NbCh Com MultCh Pacln Maxfr NbFwd MxBloc M/P-Fwd Mode  Freq
 0   0    0   0      0     0     0     0      00/01   ----  File-fwd.
 1   4    1   1      250   2     1     10     00/15   DUWYL BPQ
#
# End of file.
#

Looking into this with gprof, it seems it is in a very tight loop, constantly calling is_cmd (this profile was taken over only about 30 seconds, maybe not even that; just long enough for it to init the TNCs and for me to look in htop).

 %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
  0.00      0.00     0.00  1503534     0.00     0.00  is_cmd
  0.00      0.00     0.00  1503534     0.00     0.00  teste_voies
  0.00      0.00     0.00  1252966     0.00     0.00  no_port
  0.00      0.00     0.00  1252945     0.00     0.00  inbuf_ok
  0.00      0.00     0.00   830213     0.00     0.00  btime
  0.00      0.00     0.00   561663     0.00     0.00  deb_io
  0.00      0.00     0.00   561662     0.00     0.00  fin_io
  0.00      0.00     0.00   250760     0.00     0.00  rcv_ded
  0.00      0.00     0.00   250760     0.00     0.00  rcv_drv

linbpq CPU% is <1%, usually 0, so I don't think it has anything to do with too much DED traffic. I have split the pipe and monitored it with socat, and there is just the normal checking traffic going on; no BBS forwarding, invalid commands, etc.

~~~
socat -v open:$HOME/fbbded,nonblock,raw PTY,link=$HOME/fbbded2,raw
~~~

(and then change port.sys to point to fbbded2)

If port.sys is changed to contain only a Telnet port, the CPU usage drops to ~0%.

If I switch the port type to /dev/ttyAMA0 (even though nothing is connected there), the CPU usage is still high.

Steve
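
For illustration of why an idle port can still burn CPU (this sketch is not taken from the FBB sources; the device path is just an example): with O_NONBLOCK set, read() returns immediately with EAGAIN when there is no data, so a loop that reads and retries straight away spins flat out even with nothing connected on the other side, which matches the ~40% CPU and the large call counts in the profile above.

~~~
/* Minimal sketch (not FBB code): busy-polling an idle serial port or pipe.
 * With O_NONBLOCK, read() returns -1/EAGAIN immediately when no data is
 * available, so this loop spins at full speed and shows high CPU in htop
 * even though nothing is connected on the other side. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Example device; any idle tty or FIFO behaves the same way. */
    int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[256];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) {
            /* ... hand the bytes to the protocol driver ... */
        } else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
            perror("read");
            break;
        }
        /* No data: fall straight through and poll again immediately. */
    }

    close(fd);
    return 0;
}
~~~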

Related

Tickets: #83

Discussion

  • Dave van der Locht

    These functions are constantly being invoked by LinFBB's main program in a tight loop, so gprof will show high call counts whenever AX.25 communication ports such as DED, Linux AX.25, etc. are configured. That seems to be normal.

    However... high CPU usage when using communication ports is something I don't recognize and haven't seen or heard about before. I'll have a look at it once my BPQ+LinFBB setup is ready for the testing/debugging of ticket #82.

    Some brain cells suggest it might have something to do with how LinFBB's DED driver communicates with its DED hostmode 'device'. There are signs of a polling implementation instead of blocking I/O...
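
A minimal sketch of that polling-versus-blocking contrast, purely as an illustration and not the actual FBB code or the eventual patch (the device path and the 100 ms timeout are assumptions): select() puts the process to sleep until the descriptor becomes readable or the timeout expires, so an idle port costs almost no CPU while periodic channel housekeeping can still run on each timeout.

~~~
/* Sketch only (not the actual FBB code or patch): waiting for serial/DED
 * data with select() instead of busy-polling.  The process sleeps until
 * the descriptor becomes readable or the timeout expires, so an idle
 * port uses almost no CPU. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    /* Example device; the same idea applies to the BPQ pipe/pty. */
    int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[256];
    for (;;) {
        fd_set rfds;
        struct timeval tv = { 0, 100 * 1000 };  /* 100 ms timeout */

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        int r = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (r > 0) {
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n > 0) {
                /* ... hand the bytes to the protocol driver ... */
            }
        } else if (r == 0) {
            /* Timeout: run periodic channel housekeeping here. */
        } else {
            perror("select");
            break;
        }
    }

    close(fd);
    return 0;
}
~~~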

     
  • Dave van der Locht

    • assigned_to: Dave van der Locht
     
  • Dave van der Locht

    Replicated the issue and found 'the place to be'...

    A 'quick and dirty' fix reduced the CPU load from 20% on my system to about 1%, without negative functional or performance side-effects as far as I have been able to test so far (which is limited).

    Due to its complexity, I'll have to take a good look at how the serial I/O handling is done in order to make a good and solid fix.

     
  • Steve

    Steve - 2023-06-08

    Dave,

    The issue was occurring with serial as well as DED, but as they are all just files the cause is probably the same. Good news on finding the place. I will be interested to see the fix.

     
    • Dave van der Locht

      Correct, it possibly affects all serial I/O comms (when used), and I think it has been doing that for a very, very long time now.

      I even think the Windows version of two decades ago, before FBB became Linux-only, must have had relatively high CPU usage when a DED port was used, and possibly with some other protocol drivers over serial I/O too (anywhere between slightly less and much less CPU usage).

       
  • Dave van der Locht

    • status: open --> accepted
     
  • Dave van der Locht

    A patch is available for testing for anyone who's interested. After doing some more testing with DED emulation software and hardware I'll commit the patch to SVN.

    If anyone would like to test please send me a message and you'll receive the patch file. You need to manually apply the patch, recompile and install.

     
    • Red Tuby

      Red Tuby - 2023-06-14

      Steve also applied the patch on my system, even though I didn't have the high CPU issue, and it is working well even so. Just thought I'd let you know it's good.

       
  • Steve

    Steve - 2023-06-10

    Thank you for the patch.
    I have applied it and tested, and all seems good;
    CPU is around 0 to 1.3% now.

     
  • Dave van der Locht

    Changes are committed [r239].

     

    Related

    Commit: [r239]

  • Dave van der Locht

    • status: accepted --> pending
     
