|
Re: [Valgrind-developers] [LLVMdev] [GSoC 2014] Using LLVM as a
code-generation backend for Valgrind
From: Timur I. <tim...@go...> - 2014-02-25 18:21:22
|
Valgrind is still going to be single threaded, right?

On 25 Feb 2014 at 22:10, "Denis Steckelmacher" <ste...@ya...> wrote:
> On 02/25/2014 04:50 PM, John Criswell wrote:
>>
>> I think a more interesting idea would be to use LLVM to perform
>> instrumentation and then to use Valgrind to instrument third-party
>> libraries linked into the program.
>>
>> What I'm imagining is this: suppose you instrument a program with
>> SAFECode or ASan to find memory safety errors. When you run the
>> program under Valgrind, the portion of the code instrumented by
>> SAFECode or ASan runs natively, without dynamic binary
>> instrumentation, because it has already been instrumented. When the
>> program calls uninstrumented code (e.g., code in a dynamic library),
>> Valgrind falls back to dynamic binary instrumentation.
>>
>> A really neat thing you could do with this is to share run-time data
>> structures between the LLVM and Valgrind instrumentation. For
>> example, Valgrind could use SAFECode's metadata on object
>> allocations, and vice versa.
>
> Someone proposed caching the results of a JIT compilation. Caching
> LLVM bitcode is easy (and the LLVM optimizations operate on bitcode,
> so they don't need to be re-run when the bitcode is reloaded), and may
> be a good way to speed up Valgrind. Caching native binary code is more
> difficult, and would only be useful if LLVM's codegen is slow (I think
> the codegen can be configured to be fast, for instance by using a
> simpler register allocator).
>
> If every .so is cached in a separate bitcode file, loading an
> application would only require generating bitcode for the application
> itself, not for the libraries it uses, provided they haven't changed
> since another application using them was analyzed. That could speed up
> the start-up of Valgrind.
>
> _______________________________________________
> LLVM Developers mailing list
> LL...@cs...  http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
|
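[Editor's note] The per-library caching idea above hinges on an invalidation check: a cached bitcode file is only reusable if the .so it was generated from has not changed since. A minimal sketch of such a check, assuming a simple modification-time comparison (a content hash would be more robust); the function name and file layout are invented for the example:

```c
#include <stdbool.h>
#include <sys/stat.h>

/* Illustrative cache-freshness test for a per-library bitcode cache:
   the cached file for lib_path is reusable only if the library has not
   been modified since the cache was written.  mtime comparison is the
   simplest (if imperfect) criterion; names here are invented. */
static bool cache_is_fresh(const char *lib_path, const char *cache_path)
{
    struct stat lib_st, cache_st;
    if (stat(lib_path, &lib_st) != 0 || stat(cache_path, &cache_st) != 0)
        return false;                     /* missing file => regenerate */
    return cache_st.st_mtime >= lib_st.st_mtime;
}
```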
|
Re: [Valgrind-developers] [LLVMdev] [GSoC 2014] Using LLVM as a
code-generation backend for Valgrind
From: Kirill B. <bat...@is...> - 2014-02-26 11:23:28
|
Hi, only one letter made it to the valgrind-developers mailing list.
I'll quote the first message of the thread so that those who do not read
llvmdev know what this discussion is about.

=== Begin of the first message ===
> Hi,
>
> I've seen on LLVM's Open Projects page [1] an idea about using LLVM to
> generate native code in Valgrind. From what I know, Valgrind uses
> libVEX to translate native instructions into a bitcode, which is used
> to add the instrumentation and then translated back to native code for
> execution.
>
> Valgrind and LLVM are two tools that I use nearly every day. I'm also
> very interested in code generation and optimization, so adding the
> possibility to use LLVM to generate native code in libVEX interests me
> very much. Is it a good idea? Could an LLVM backend bring something
> useful to Valgrind (for instance, faster execution or more targets
> supported)?
>
> I've sent this message to the LLVM and Valgrind mailing lists because
> I originally found the idea on LLVM's website, but Valgrind is the
> object of the idea. By the way, does anyone already know whether LLVM
> or Valgrind will be a mentoring organization for this year's GSoC?
>
> You can find in [2] the list of my past projects. During GSoC 2011, I
> had the chance to use the Clang libraries to compile C code, and the
> LLVM JIT to execute it (with instrumented stdlib functions). I have
> also played with the LLVM C bindings to generate code when I explored
> some parts of Mesa.
>
> Denis Steckelmacher
>
> [1] : http://llvm.org/OpenProjects.html#misc_new
> [2] : http://steckdenis.be/page-projects.html
=== End of the first message ===

The idea of using an LLVM backend in a dynamic binary translation (DBT)
project has become popular recently. Unfortunately, it has not proved to
be a good one. I suggest you check the related work in QEMU. The DBT
parts of QEMU and Valgrind work in a similar way, and there have been
several attempts to use LLVM as a QEMU backend. They mostly resulted in
slowdowns.

In [1] the authors reported a 35x slowdown; in [2] there was around a 2x
slowdown. Finally, in [3] the authors reported a performance gain, but
there are some catches.

1. They used LLVM not only for the backend: they replaced the internal
representation with LLVM IR. This is not an option for Valgrind, because
you would need to rewrite all existing tools (including third-party
ones) to do it.

2. They used the SPEC CPU benchmarks to measure their speedup. The
important thing about these tests is that they have little code to
translate but a lot of computation to do with the translated code. Even
so, some of these tests do not do too well (403.gcc, for example). On
real-life applications (like Firefox), where there is a lot of library
code to translate and not so much computation to do, the results may be
totally different.

LLVM does not do well as a DBT backend mostly for two reasons. First, in
DBT you need to translate while the application is running, and you need
to do it really fast. A compiler is not optimized for that task. The
LLVM JIT? Maybe. Second, in DBT you translate code in small portions,
such as basic blocks or extended basic blocks. These have a very simple
structure: there are no loops, and there is no redundancy from
translating a high-level language to a low-level one. There is nothing
that sophisticated optimizations can do better than very simple ones.

In conclusion, I second what has already been said: this project sounds
like fun to do, but do not expect many practical results from it.

> It would also be interesting to cache the LLVM-generated code
> between runs

The tricky part here is building a mapping between binary code fragments
and cached translations from previous runs. In the worst case, all you
know about the binary code is its address (which can vary between runs)
and the code bytes themselves.

[1] : "Dynamically Translating x86 to LLVM using QEMU"
http://infoscience.epfl.ch/record/149975/files/x86-llvm-translator-chipounov_2.pdf
[2] : llvm-qemu project.
http://code.google.com/p/llvm-qemu/
[3] : "LnQ: Building High Performance Dynamic Binary Translator with
Existing Compiler Backends"
http://people.cs.nctu.edu.tw/~chenwj/slide/paper/lnq.pdf

-- Kirill
|
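[Editor's note] The matching problem Kirill describes in his last paragraph (load addresses vary between runs, so a cross-run translation cache cannot be keyed by address alone) is commonly attacked by keying the cache on the code bytes themselves. A minimal sketch, using FNV-1a purely as an illustrative hash; a real cache would also have to cope with hash collisions and with position-dependent code:

```c
#include <stddef.h>
#include <stdint.h>

/* Key a cross-run translation cache by a hash of the guest code bytes
   rather than by their (run-varying) load address.  FNV-1a 64-bit is
   used here only for illustration. */
static uint64_t code_key(const uint8_t *code, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;    /* FNV-1a 64-bit offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= code[i];
        h *= 0x100000001b3ULL;             /* FNV-1a 64-bit prime        */
    }
    return h;
}
```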
|
Re: [Valgrind-developers] [LLVMdev] [GSoC 2014] Using LLVM as a
code-generation backend for Valgrind
From: Julian S. <js...@ac...> - 2014-02-26 15:17:14
|
On 02/26/2014 12:23 PM, Kirill Batuzov wrote:
I tend to agree with Kirill. It would be great to make Valgrind/Memcheck
faster, and there are certainly ways to do that, but using LLVM is not
one of them.
> Second, in DBT you translate code in small portions, such as basic
> blocks or extended basic blocks. These have a very simple structure:
> there are no loops, and there is no redundancy from translating a
> high-level language to a low-level one. There is nothing that
> sophisticated optimizations can do better than very simple ones.
Yes. One of the problems of the "Let's use LLVM and it'll all go much
faster" concept is that it lacks a careful analysis of what makes Valgrind
(and QEMU, probably) run slowly in the first place.
As Kirill says, the short blocks of code that V generates make it
impossible for LLVM to do sophisticated loop optimisations etc.
Given what Valgrind's JIT has to work with -- straight line pieces
of code -- it generally does a not-bad job of instruction selection
and register allocation, and I wouldn't expect that substituting LLVM's
implementation thereof would make much of a difference.
What would make Valgrind faster is
(1) improve the caching of guest registers in host registers across
basic block boundaries. Currently all guest registers cached in
host registers are flushed back into memory at block boundaries,
and no host register holds any live value across the boundary.
This is simple but very suboptimal, creating large amounts of
memory traffic.
(2) improve the way that the guest program counter is represented.
Currently it is updated before every memory access, so that if an
unwind is required, it is possible. But this again causes lots of
excess memory traffic. This is closely related to (1).
(3) add some level of control-flow if-then-else support to the IR, so
that the fast-case paths for the memcheck helper functions
(helperc_LOADV64le etc) can be generated inline.
(4) Redesign Memcheck's shadow memory implementation to use a 1 level
map rather than 2 levels as at present. Or something more
TLB-like.
I suspect that the combination of (1) and (2) causes processor write
buffers to fill up and start stalling, although I don't have numbers
to prove that. What _is_ very obvious from profiling Memcheck using
Cachegrind is that the generated code contains a much higher proportion
of memory references than "normal integer code", and in particular
perhaps four times as many stores. That can't be a good thing.
(3) is a big exercise -- much work -- but potentially very beneficial.
(4) is also important if only because we need a multithreaded
implementation of Memcheck. (1) and (2) are smaller projects and would
constitute a refinement of the existing code generation framework.
> In conclusion, I second what has already been said: this project
> sounds like fun to do, but do not expect many practical results from it.
The above projects (1) .. (4) would also be fun :-) and might generate more
immediate speedups for Valgrind.
J
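[Editor's note] The shadow-memory scheme mentioned in (4) can be sketched as follows. This is an illustrative two-level layout in the general style described above, not Memcheck's actual implementation; all names and table sizes are invented for the example:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Two-level shadow map: a primary table indexed by the high bits of
   the guest address points at secondary tables holding one validity
   ("V") byte per guest byte.  A 32-bit guest is assumed to keep the
   example small. */
#define SEC_BITS 16                          /* secondary covers 64 KiB */
#define SEC_SIZE (1u << SEC_BITS)
#define PRI_SIZE (1u << (32 - SEC_BITS))

typedef struct { uint8_t vbits[SEC_SIZE]; } Secondary;

static Secondary *primary[PRI_SIZE];         /* NULL => all undefined   */

static uint8_t shadow_load8(uint32_t addr)
{
    Secondary *sec = primary[addr >> SEC_BITS];     /* 1st memory access */
    if (sec == NULL)
        return 0xFF;                                /* undefined         */
    return sec->vbits[addr & (SEC_SIZE - 1)];       /* 2nd memory access */
}

static void shadow_store8(uint32_t addr, uint8_t v)
{
    Secondary **slot = &primary[addr >> SEC_BITS];
    if (*slot == NULL) {
        *slot = malloc(sizeof(Secondary));          /* allocate lazily   */
        memset((*slot)->vbits, 0xFF, SEC_SIZE);     /* start undefined   */
    }
    (*slot)->vbits[addr & (SEC_SIZE - 1)] = v;
}
```

Every shadow access costs two dependent loads here. A one-level variant replaces the indirection with a single flat table (one load per access, at the cost of a much larger reservation), and a TLB-like variant would instead cache the most recently used secondaries.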
|
|
Re: [Valgrind-developers] [LLVMdev] [GSoC 2014] Using LLVM as a
code-generation backend for Valgrind
From: Yan <ya...@ya...> - 2014-02-26 15:21:51
|
For (3), would something like making all statements conditional (like
LoadG, StoreG, and Exit are) do, or are we talking about something more
complex?

On Wed, Feb 26, 2014 at 7:16 AM, Julian Seward <js...@ac...> wrote:
> (3) add some level of control-flow if-then-else support to the IR, so
>     that the fast-case paths for the memcheck helper functions
>     (helperc_LOADV64le etc) can be generated inline.
|
|
Re: [Valgrind-developers] [LLVMdev] [GSoC 2014] Using LLVM as a
code-generation backend for Valgrind
From: Julian S. <js...@ac...> - 2014-02-26 15:32:15
|
On 02/26/2014 04:21 PM, Yan wrote:
> For (3), would something like making all statements conditional (like
> LoadG, StoreG, and Exit are) do, or are we talking about something
> more complex?

>> (3) add some level of control-flow if-then-else support to the IR, so
>>     that the fast-case paths for the memcheck helper functions
>>     (helperc_LOADV64le etc) can be generated inline.

Something more complex: being able to add control-flow diamonds
(if-then-else-merge) into the IR. Then, for example, the load cases for
Memcheck could be put inline: the fast/slow-case check before the
diamond, the fast-case code in the then-branch, and the slow-case code
calling a helper in the else-branch.

Doing control-flow diamonds in the IR means that both the IR optimiser
and the register allocator will have to deal with control-flow merges,
which they don't at present. That would make them more complex, although
not as complex as they would be if they had to deal with loops as well.

J
|
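[Editor's note] The shape of the diamond Julian describes, written as C rather than VEX IR for concreteness. The shadow layout and the slow-path helper here are invented stand-ins; only the structure (check before the diamond, inline fast case in the then-branch, helper call in the else-branch, merge afterwards) corresponds to the proposal:

```c
#include <stdint.h>

/* Per-64KiB-region tag: 0 means "every byte defined" (fast case).
   Layout and names are invented for the example. */
static uint8_t region_tag[1u << 16];

/* Out-of-line slow path, standing in for a helperc_*-style function. */
static uint64_t helperc_slowpath_load64(uint64_t addr)
{
    (void)addr;
    return ~0ULL;                       /* pretend: all bits undefined  */
}

static uint64_t load64_vbits(uint64_t addr)
{
    uint64_t vbits;
    /* fast/slow-case check, before the diamond */
    int fast = ((addr & 7) == 0)
            && (region_tag[(addr >> 16) & 0xFFFF] == 0);
    if (fast)
        vbits = 0;                                  /* then: inline      */
    else
        vbits = helperc_slowpath_load64(addr);      /* else: call helper */
    return vbits;                                   /* merge point       */
}
```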
|
Re: [Valgrind-developers] [LLVMdev] [GSoC 2014] Using LLVM as a
code-generation backend for Valgrind
From: Patrick J. L. <lop...@gm...> - 2014-02-26 16:40:59
|
On Wed, Feb 26, 2014 at 7:16 AM, Julian Seward <js...@ac...> wrote:
> What would make Valgrind faster is
>
> (1) improve the caching of guest registers in host registers across
>     basic block boundaries. Currently all guest registers cached in
>     host registers are flushed back into memory at block boundaries,
>     and no host register holds any live value across the boundary.
>     This is simple but very suboptimal, creating large amounts of
>     memory traffic.

Sounds more like large amounts of L1 cache traffic.

> I suspect that the combination of (1) and (2) causes processor write
> buffers to fill up and start stalling, although I don't have numbers
> to prove that.

Maybe, but maybe not. (3) and (especially) (4) might well have a greater
impact. It is notoriously difficult to guess where a modern CPU is
spending its time without a profiler. Random memory access is of course
a disaster, but that sounds more like (4) than (1) or (2).

It would be very interesting to see a micro-profile of Valgrind.

- Pat
|
|
Re: [Valgrind-developers] [LLVMdev] [GSoC 2014] Using LLVM as a
code-generation backend for Valgrind
From: Julian S. <js...@ac...> - 2014-02-26 17:08:34
|
On 02/26/2014 05:40 PM, Patrick J. LoPresti wrote:
> It would be very interesting to see a micro-profile of Valgrind.

Are you able to do that? Did you have any specific profiler in mind?

J
|