
pydb performance

  • Yang Zhang

    Yang Zhang - 2006-11-03

    I was using 1.17 for a while, and the performance of my debugged Python apps didn't seem to be too different from their normal speed of execution. However, I just upgraded to 1.19, and found that my Python apps were slowing down to a crawl. `top` shows all my CPU going into pydb. What's going on? Did pydb actually hook itself into every single line of execution via settrace()? Is this necessary for any reason? Is there any way to go back to the original mode of debugging? Thanks in advance.

     
    • Rocky Bernstein

      Rocky Bernstein - 2006-11-05

      The speed of debugging depends very much on how you are debugging. Some of this is described in the pydb documentation (http://bashdb.sourceforge.net/pydb/pydb/lib/module-pydb.html) in various sections, but perhaps a separate section on performance is needed.

      The fastest way to debug is to put a set_trace() call in your program at the point you want to debug. In that case there is no debugger overhead at all until execution reaches the point where you first start debugging.
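
      For example (a minimal sketch; pydb provides set_trace() following the same convention as pdb, and the function and values here are just illustrative):

         import pydb

         def compute():
             # No trace function is installed yet, so this runs at
             # normal interpreter speed with no per-line overhead.
             return sum(i * i for i in range(1000000))

         result = compute()

         pydb.set_trace()   # debugger attaches here; lines below are traced

         print(result)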

      Various debugger commands can also change the speed. The following things cause an additional check on every line executed:

      - line tracing, especially if you use "set linetrace delay" :-)
      - "set sigcheck on".
      - breakpoints: the more breakpoints, the more work, and more still if they have conditions or commands attached (see the sketch after this list).
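
      The mechanism behind all of these is CPython's sys.settrace() hook: once something asks for per-line events, the interpreter calls back into Python code on every line executed. A rough sketch of that cost, using only the standard library (the numbers are illustrative and this is not pydb's actual dispatch code):

         import sys, time

         def trace(frame, event, arg):
             # Returning the trace function from the 'call' event
             # requests 'line' events, so every line pays a callback.
             return trace

         def work():
             total = 0
             for i in range(200000):
                 total += i
             return total

         t0 = time.time(); work(); plain = time.time() - t0

         sys.settrace(trace)
         t0 = time.time(); work(); traced = time.time() - t0
         sys.settrace(None)

         print("untraced: %.3fs  traced: %.3fs" % (plain, traced))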

      One change between versions 1.17 and 1.19 that can cause a slowdown is the introduction of signal handling. To see whether this is what's causing the problem, apply the following patch to set.py:

      --- set.py    30 Oct 2006 15:28:16 -0000    1.12
      +++ set.py    5 Nov 2006 11:59:36 -0000
      @@ -213,9 +213,9 @@
                           self.trace_dispatch = self.trace_dispatch_gdb
                       else:
                           # Turn off signal checking/adjusting
      -                    self.break_anywhere = self.break_anywhere_bdb
      -                    self.set_continue   = self.set_continue_bdb
      -                    self.trace_dispatch = self.trace_dispatch_bdb
      +                    self.break_anywhere = self.break_anywhere_old
      +                    self.set_continue   = self.set_continue_old
      +                    self.trace_dispatch = self.trace_dispatch_old
                   self.sigcheck = sigcheck
               except ValueError:
                   pass

      Alternatively, just check out a copy of pydb from CVS. Then try running your program like this:

         pydb --exec  'set sigcheck off' *program* ...

      Or add this to your ~/.pydbrc:
        set sigcheck off

      However, again, you need to apply the above patch first, since there was a bug in the "set sigcheck" command in version 1.19.

      In the tests that I made, though, my CPU time never went to 100% and things were not significantly slowed down. However, it's also true that the program you debug may change things. Here are some timings for the hanoi.py test from the test directory of the source, on GNU/Linux with an Athlon CPU:

      time pydb --exec  'continue;;quit' hanoi.py 6
      (hanoi.py:2):
      user    0m0.212s
      sys    0m0.020s

      time pydb --exec  'set sigcheck off;;continue;;quit' hanoi.py 6
      user    0m0.136s
      sys    0m0.020s

      time python hanoi.py 6
      user    0m0.024s
      sys    0m0.008s

      If this is representative (and I don't know that it is), the default behavior added about 50% to the user time. If this is the problem and others notice it, I'll consider making "set sigcheck off" the default behavior.

       

