|
From: 齐玉华 <qi...@12...> - 2012-06-01 07:46:50
|
Because recompiling one patched php takes nearly 20 seconds, if the total slowdown under valgrind exceeds 20 seconds, then our method of indirecting functions will perform worse than native regression testing. So, can we group the test cases into one test script (.sh) and run that script under valgrind (with --trace-children=yes) to amortize the start-up time? In that case, will valgrind's function-wrapping mechanism still work for each test case? Thanks very much.

At 2012-06-01 15:24:26, "David Chapman" <dcc...@ac...> wrote:

Please remember to use "reply all" so messages go to the list.

On 5/31/2012 11:58 PM, 齐玉华 wrote:

Thanks for your quick reply again. It is probable that valgrind's start-up time is what slows down the short-running script; I will verify this later (the computer with Linux installed is in the laboratory). My plan is to redirect some functions in the PHP system to patched versions in a .so file, so that the recompilation overhead of the patched php is avoided. But to validate a candidate patch we still need regression testing (running the original PHP system under valgrind against numerous test cases), and the slowdown due to valgrind offsets the recompilation-time savings. Thus, if the valgrind start-up time could be reduced, the function-indirection approach would come out ahead on total time. Can you give me some advice on reducing valgrind's start-up time? Thanks very much.

I do not know whether valgrind startup time can be reduced; perhaps Julian Seward (the author of valgrind) would know. I suspect the answer is no: most programs spend much more time running than initializing, so valgrind startup time has probably not been a concern to anyone else. I am not sure I understand your run-time concerns. You would be using valgrind only during testing, so the penalty would be paid only during testing. Will the tests fail simply because they are slower?
Are there so many tests that they will take substantially more than 20 minutes?

On 2012-06-01 14:03:14, "David Chapman" <dcc...@ac...> wrote:

On 5/31/2012 10:18 PM, 齐玉华 wrote:

Thanks for your quick reply. Here is the detailed information (good2.php is attached):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ valgrind --version
valgrind-3.7.0
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ uname -a
Linux grace 2.6.32-21-generic-pae #32-Ubuntu SMP Fri Apr 16 09:39:35 UTC 2010 i686 GNU/Linux
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ /home/grace/software/WAutoRepair/experiment/php/php_null/php-src-5.3/sapi/cli/php --version
PHP 5.3.7RC4-dev (cli) (built: May 21 2012 23:48:55)
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ time /home/grace/software/WAutoRepair/experiment/php/php_null/php-src-5.3/sapi/cli/php good2.php
6
real    0m0.009s
user    0m0.004s
sys     0m0.008s
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ time valgrind --tool=none -q /home/grace/software/WAutoRepair/experiment/php/php_null/php-src-5.3/sapi/cli/php good2.php
6
real    0m1.086s
user    0m1.048s
sys     0m0.040s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This could very easily be valgrind startup time, not the slowdown from valgrind's instruction simulator. As requested in the previous post, please try "valgrind --tool=none ls" and compare the run time with "ls" by itself.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ time /home/grace/software/WAutoRepair/experiment/php/php_null/php-src-5.3/sapi/cli/php --version
PHP 5.3.7RC4-dev (cli) (built: May 21 2012 23:48:55)
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
real    0m0.008s
user    0m0.004s
sys     0m0.008s
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ time valgrind --tool=none -q /home/grace/software/WAutoRepair/experiment/php/php_null/php-src-5.3/sapi/cli/php --version
PHP 5.3.7RC4-dev (cli) (built: May 21 2012 23:48:55)
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
real    0m0.948s
user    0m0.904s
sys     0m0.044s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the set of test cases is numerous (e.g. 1000), then php will incur significant performance degradation. I look forward to your insightful advice. Thanks again.

If your test cases run in 0.01 seconds each now but will run in 1 second under valgrind, 1000 tests will take about 20 minutes. Why is this a problem? You won't be running valgrind every time you launch PHP. What happens when you run a large PHP script under valgrind? Do you still see a 25x or 100x slowdown?

On 2012-06-01 10:40:59, "John Reiser" <jr...@bi...> wrote:

>On 05/31/2012 07:10 PM, 齐玉华 wrote:
>> I am trying to use Valgrind to wrap some functions in target software such as php and
>> libtiff, but I found that valgrind slows them down significantly (up to 25 times slower),
>> even when run with '--tool=none'. The valgrind manual says execution should be about
>> 5 times slower than native. I am confused; can anyone suggest how to reduce the
>> instrumentation cost? I look forward to your advice. Thanks.
>
>Please tell us the version of valgrind (the output of "valgrind --version"
>when fed to a shell), which hardware (x86_64, i686, PowerPC 64-bit, ARMv7,
>etc.), and which operating system and release (Ubuntu Linux 12.04, Android 4.1,
>MacOS 10.8 Darwin, etc.)
>
>How much slow-down do you observe for simple programs, such as
>"valgrind --tool=none date", or 'ls', 'who', etc?

_______________________________________________
Valgrind-users mailing list
Val...@li...
https://lists.sourceforge.net/lists/listinfo/valgrind-users

--
David Chapman dcc...@ac...
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com |
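The grouping idea proposed above can be sketched as a small driver script. This is only an illustration under assumed names (tests/*.php, PHP_BIN, driver.sh are all hypothetical), and note one caveat: with --trace-children=yes valgrind follows each exec'd child, so every php child is still re-instrumented; grouping mainly avoids launching valgrind by hand once per test, not the per-process warm-up itself.

```shell
#!/bin/sh
# Sketch: run many test cases under a single valgrind invocation.
# Hypothetical layout: test inputs in tests/*.php, PHP_BIN points at the php binary.

export PHP_BIN="${PHP_BIN:-php}"

# Generate a driver that runs every test case as a child process.
cat > driver.sh <<'EOF'
#!/bin/sh
for t in tests/*.php; do
    [ -f "$t" ] || continue          # skip when no test cases are present
    "$PHP_BIN" "$t" > "$t.out" 2>&1  # one child per test case
done
EOF
chmod +x driver.sh

# One valgrind launch for the whole suite; --trace-children=yes makes
# valgrind follow each php child the driver spawns (each child is still
# instrumented afresh at exec). Guarded so the sketch degrades gracefully
# where valgrind is not installed.
if command -v valgrind >/dev/null 2>&1; then
    valgrind --tool=none -q --trace-children=yes ./driver.sh
else
    ./driver.sh
fi
echo "driver finished"
```

Whether function wrapping still fires in each child would need checking; valgrind re-reads the wrapped .so at each exec, so in principle the wraps apply per child as well.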
|
From: Dallman, J. <joh...@si...> - 2012-06-01 09:33:35
|
> Because the recompilation time of one patched php is nearly 20 seconds, so if the
> sum of degradation time is more than 20 seconds, then our method of indirecting
> functions will perform worse than native regression testing.

You must expect performance under Valgrind to be worse than under any normal kind of test harness for regression testing. Valgrind is not a substitute for normal kinds of testing, nor is it a general-purpose testing framework. Its value is that it can find errors that normal testing won't: errors that leave the code producing correct results most of the time, but wasting memory, using uninitialized variables whose contents are unpredictable, or other errors depending on the tool that you use.

For example, the products I work on have slowdowns of 20x to 30x under Valgrind. I don't attempt to run daily testing with Valgrind, because it would be far too slow. Instead, as the development phase of each release (there are two per year) comes to an end and pre-release maintenance gets under way, I run all of the appropriate testing once under Valgrind. This takes several weeks, and finds (some types of) coding errors made by the developers during the development of the release. Those get fixed in the release.

This method is not perfect; running all the testing every day under Valgrind would be preferable. But the amount of hardware it would require (perhaps 50 modern Linux machines), compared to the number of problems it would find (perhaps one per month), makes the perfect solution quite impractical. Using Valgrind has considerably reduced the number of errors reported by users that we can't reproduce, which is well worthwhile. That has been achieved using one or two machines.

-- John Dallman |
|
From: 齐玉华 <qi...@12...> - 2012-06-01 10:34:16
|
Thanks for your detailed account. According to the manual, the degradation with the "--tool=none" flag should be about 5 times relative to native execution. And "valgrind --tool=none -q date" takes about 0.370 seconds, so I assume the valgrind start-up time is no more than 0.370 seconds. But I am confused by the following degradation (about 10x):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ time /home/grace/software/WAutoRepair/experiment/php/php_null/php-src-5.3/sapi/cli/php good2.php
6
real    0m4.764s
user    0m4.756s
sys     0m0.004s
grace@grace:~/software/WAutoRepair/experiment/php/php_null$ time valgrind --tool=none -q /home/grace/software/WAutoRepair/experiment/php/php_null/php-src-5.3/sapi/cli/php good2.php
6
real    0m44.110s
user    0m43.971s
sys     0m0.140s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

So, what has happened?

At 2012-06-01 17:20:33, "Dallman, John" <joh...@si...> wrote:

> Because the recompilation time of one patched php is nearly 20 seconds, so if the
> sum of degradation time is more than 20 seconds, then our method of indirecting
> functions will perform worse than native regression testing.

You must expect performance under Valgrind to be worse than under any normal kind of test harness for regression testing. Valgrind is not a substitute for normal kinds of testing, nor is it a general-purpose testing framework. Its value is that it can find errors that normal testing won't: errors that leave the code producing correct results most of the time, but wasting memory, using uninitialized variables whose contents are unpredictable, or other errors depending on the tool that you use.

For example, the products I work on have slowdowns of 20x to 30x under Valgrind. I don't attempt to run daily testing with Valgrind, because it would be far too slow. Instead, as the development phase of each release (there are two per year) comes to an end and pre-release maintenance gets under way, I run all of the appropriate testing once under Valgrind. This takes several weeks, and finds (some types of) coding errors made by the developers during the development of the release. Those get fixed in the release.

This method is not perfect; running all the testing every day under Valgrind would be preferable. But the amount of hardware it would require (perhaps 50 modern Linux machines), compared to the number of problems it would find (perhaps one per month), makes the perfect solution quite impractical. Using Valgrind has considerably reduced the number of errors reported by users that we can't reproduce, which is well worthwhile. That has been achieved using one or two machines.

-- John Dallman |
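The two timings above can be compared systematically. The sketch below is a hypothetical helper (not part of the thread's tooling) that reports the wall-clock time of a command run natively and then under "valgrind --tool=none"; running it once on a short command and once on a long-running one helps separate start-up cost from steady-state slowdown. It assumes a GNU date that supports %N and that valgrind may or may not be installed.

```shell
#!/bin/bash
# Hypothetical helper: compare native vs valgrind --tool=none wall time.
# Usage: ./slowdown.sh <command ...>   (defaults to "sleep 0.1")

ms_now() { echo $(( $(date +%s%N) / 1000000 )); }   # millisecond clock (GNU date)

time_cmd() {                      # print wall-clock milliseconds for "$@"
    local t0 t1
    t0=$(ms_now)
    "$@" >/dev/null 2>&1
    t1=$(ms_now)
    echo $(( t1 - t0 ))
}

cmd=${*:-"sleep 0.1"}             # default workload if none is given
native_ms=$(time_cmd $cmd)
echo "native:   ${native_ms} ms"

# Only attempt the valgrind run where valgrind exists.
if command -v valgrind >/dev/null 2>&1; then
    vg_ms=$(time_cmd valgrind --tool=none -q $cmd)
    echo "valgrind: ${vg_ms} ms"
fi
```

On a short command the valgrind figure is dominated by start-up; on a long-running workload it approaches the steady-state slowdown the manual describes.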
|
From: Eliot M. <mo...@cs...> - 2012-06-01 11:23:33
|
On 6/1/2012 6:34 AM, 齐玉华 wrote:
> Thanks for your detailed account. According to the manual, the degradation with the
> "--tool=none" flag should be about 5 times relative to native execution. And
> "valgrind --tool=none -q date" takes about 0.370 seconds, so I assume the valgrind
> start time is no more than 0.370 seconds. But I am confused by the following
> degradation (about 10x):

--tool=none still rewrites code and in effect emulates your CPU's semantics on that same CPU. This emulation will indeed slow things down quite a bit. You can think of it as a somewhat optimized interpreter for your machine; it is still, at some level, an interpreter. So a 10x slowdown is *good*, not bad, given what is going on under the covers.

Eliot Moss |
|
From: Julian S. <js...@ac...> - 2012-06-01 11:34:51
|
> So, a 10x slowdown is *good*, not bad, given what is going on
> under the covers.

A 10x slowdown is actually pretty bad for --tool=none, especially after translation chaining landed last month. If the system is steady-stating at a 10x slowdown with --tool=none, that's a bug that I'd like to know about.

Starting large apps (apache, etc.) requires jitting tens or hundreds of thousands of basic blocks and can take several seconds, which makes these short-running tests unrepresentative of steady-state performance. Because of that, qi...@12..., you might get a better result by arranging your tests to do more useful work per process start-up.

J |
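Julian's suggestion, doing more useful work per process start-up, can be sketched by folding all the test cases into a single PHP process, so the basic blocks are jitted once per suite rather than once per test. The runall.php wrapper and the tests/ layout below are made-up names for illustration; tests that depend on a fresh interpreter state per run would need isolation added.

```shell
#!/bin/sh
# Sketch: one long-lived php process executes every test case, so valgrind's
# start-up and JIT warm-up are paid once per suite instead of once per test.
# runall.php and tests/ are hypothetical names.

cat > runall.php <<'EOF'
<?php
// Run each test case inside this single process.  Caveat: tests that
// assume a clean interpreter state per run would interfere with each
// other here and would need explicit isolation.
foreach (glob('tests/*.php') as $t) {
    include $t;
}
EOF

# One valgrind launch, one warm-up, N test cases:
#   valgrind --tool=none -q php runall.php
echo "wrote runall.php"
```

Compared with a .sh driver plus --trace-children=yes, this avoids re-instrumenting a fresh php image for every test case, which is where most of the per-test overhead observed in this thread comes from.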