From: <js...@ac...> - 2004-12-04 03:56:39

Nightly build on phoenix (SuSE 9.1) started at 2004-12-04 03:50:00 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 187 tests, 5 stderr failures, 0 stdout failures =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/zeropage (stderr)
make: *** [regtest] Error 1

From: Tom H. <to...@co...> - 2004-12-04 03:25:56

Nightly build on dunsmere (Fedora Core 3) started at 2004-12-04 03:20:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 12 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-12-04 03:20:11

Nightly build on audi (Red Hat 9) started at 2004-12-04 03:15:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 12 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-12-04 03:13:54

Nightly build on ginetta (Red Hat 8.0) started at 2004-12-04 03:10:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

insn_cmov: valgrind ./insn_cmov
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/scalar (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-12-04 03:08:35

Nightly build on alvis (Red Hat 7.3) started at 2004-12-04 03:05:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/susphello (stdout)
none/tests/susphello (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-12-04 03:04:05

Nightly build on standard (Red Hat 7.2) started at 2004-12-04 03:00:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/susphello (stdout)
none/tests/susphello (stderr)
make: *** [regtest] Error 1

From: Roland M. <ro...@re...> - 2004-12-04 02:46:39

> That could be tricky. It's generally pretty hard to do that kind of
> switch between real and virtual CPU, because the virtual CPU has a lot
> more state than the real one.

I don't think that matters, though I'm also not quite sure what it means. You don't have to handle "going real" in a general way. There are only a tiny number of ways a guest kernel can go into user mode, and in practice there is only one that you'll need to handle. It will be a block that pops every register off the stack and ends with an `iret' instruction. Incidentally, inside that block will be the only place you encounter an instruction in a xen/x86 guest kernel that sets a segment register.

You just have to identify a block that does this, and translate it into a special thing that escapes valgrindland into the implementation glue code and provides it all the register values the translated block wanted to load. Those values are going into a black box, just as if they were copied into a buffer passed to a `write' system call, in the userland valgrind's perspective. You check whether they are uninitialized data. But after that, you don't care what they are, or whether they are going to be segment register values or whatever, because it's not happening in your world.

The black box comes back at some point through some glue code that reenters valgrindland, telling you that the virtual CPU's kernel-mode eip and esp have these here values and that the other registers have just been filled in with random bits from the black box (again, as if loaded from the buffer filled by a `read' system call from the other side of the looking glass).

Thanks,
Roland
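The block-recognition step described above can be sketched roughly as follows. This is a simplified illustration, not Valgrind code: a real translator decodes every instruction in the block and tracks instruction lengths, whereas this check only inspects the final opcode byte (0xCF is the one-byte x86 `iret`).

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: during translation, flag a guest-kernel block
   that ends in `iret` so it can be redirected into the glue code that
   escapes valgrindland, instead of being translated normally.
   Checking only the last byte is a deliberate simplification. */
static int block_ends_in_iret(const uint8_t *code, size_t len)
{
    return len > 0 && code[len - 1] == 0xCF; /* 0xCF = x86 iret */
}
```

In the scheme Roland describes, a block matching this test would be translated into a call out to the glue code, handing over the register values the popped stack slots would have loaded.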

From: Roland M. <ro...@re...> - 2004-12-04 02:31:09

> Segments are a bit trickier, because Valgrind takes short-cuts.

As I said, in practice these assumptions are valid for what interesting guest kernels actually do. If you do want to cope with it, it's (a) not hard to identify the case, and (b) not all that complex to support. That is, when you hit a segment register load instruction or an intersegment jump/call instruction, stop the world, possibly throw away all your cached translations, and switch to the slower plan where the translated code does the segmented->linear translation work. I said it's not too complex to support, but that doesn't mean it wouldn't be slow. That's why it's a feature to support on demand that will never be demanded. Even the current hardware presumes flat segmentation will be used and, relatively speaking, is dog slow if a kernel tries to use it any other way.

> Valgrind maintains a cache of translated code, which is indexed by eip;
> like segments for data access, cs is assumed to be flat and unchanging.
> If cs changes around, we'd need to make sure that the translation cache
> is maintained appropriately.

You just want the translation cache to remain in terms of linear eip values, and apply segmentation transformations to virtual CPU eip values before going to the code translation step.

> One wart is that the mode of the CPU (16/32/64?) is a property of the
> current cs, and that affects how the instruction bytes are actually
> interpreted; if we wanted to support the different sizing modes, we'd
> need to make sure that we maintain the translation cache properly.

This is another thing that is straightforward and that in practice probably no one will ever ask us to do. That is, 16-bit mode. If we have a pure-virtual valgrind for xen/x86-64, it should support going into 32-bit user mode. Keeping the translations straight is simple; you just need to include some mode bits from the segmentation universe, along with the linear address, in what constitutes the lhs of the translation cache. More of the work is making the translator understand the 64-bit (and 16-bit) instruction set you get in 64-bit mode.

> At least changes to cs are pretty obvious; when translating any kind of
> trap/long jump/call/ret we generate code to observe how cs is changed.

All segment register moves are as easily noted in translation. Anyway, this is pretty much hypothetical. There is no call to support code that makes heavy use of segmentation, because none in circulation does. It's sufficient to have checks on segment register changes (rare, easily-identified instructions) and punt in flames when any happen that might mean something.

Thanks,
Roland
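The mode-bits-in-the-key idea can be sketched as a cache-key comparison. All names and fields here are illustrative assumptions, not Valgrind's actual data structures:

```c
#include <stdint.h>

/* Hypothetical sketch of the translation-cache lhs described above:
   translations are indexed by the linear address plus the mode bits
   that change how instruction bytes are decoded, so a cs/mode change
   can never alias a stale translation. */
typedef struct {
    uint64_t linear_ip;   /* guest eip after applying the cs base */
    uint8_t  default_32;  /* 32-bit default operand/address size? */
    uint8_t  long_mode;   /* executing in 64-bit mode? */
} xlate_key;

static int xlate_key_eq(const xlate_key *a, const xlate_key *b)
{
    return a->linear_ip  == b->linear_ip
        && a->default_32 == b->default_32
        && a->long_mode  == b->long_mode;
}
```

With a key like this, the same linear address executed in two different sizing modes simply misses the cache and gets retranslated under the right decode rules.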

From: Roland M. <ro...@re...> - 2004-12-04 02:09:27

> I'm not so worried about page table management. Valgrind already needs
> to support user-mode programs using mmap; that's just a high-level
> interface to the paging hardware.

Uh, sort of. I am less worried about implementation difficulties than about being clear, in my head and in our discussion, about what we are actually talking about. :-) (It probably doesn't help that, just in what I am myself clear on, I have at least two opposing plans of attack I'm describing.)

There are two areas of complication here. First, you can switch page tables (a virtualized %cr3 change). This changes the universe of what each flat address means. To a first approximation, this requires that valgrind have a big hook on which to hang its tables of memory state and translation cache lhs values, and swap different whole worlds onto that hook in response to the appropriate hypercall.

But slightly deeper contemplation reveals this approximation is way, way off. The page tables are like user programs' use of mmap in that writing page table entries constitutes a page-granularity allocation. But in user programs you are used to thinking about this as an extra allocation mechanism you don't do a lot with, and you focus more on the malloc allocations, where you know object boundaries from intercepting those known calls. In a xen guest kernel, the page table allocations are the basic thing underlying all the object allocators. It is necessary and useful just to track the use of the pages and the initializedness of their bytes, but not useful enough. I want to teach valgrind to understand calls to my xen guest kernel's object allocators, so it knows the object boundaries of the memory used in the guest kernel just as it knows the boundaries of malloc allocations in a user program. This is what I was getting at when I said page tables are intertwined with the allocation tracking plan. I'd like to explore how you think about this.

To get into the deeper contemplation I mentioned, I again diverge into two disparate implementation styles. I think both have merit for different purposes, and ultimately I would like to see a flexible hybrid approach. I have yet to settle on which I think is easier to get going first.

First, there is the "pure virtual" approach. That is, valgrind's task is to virtualize the entire xen/x86 virtual machine fully, just as vanilla valgrind virtualizes the entire linux-user/x86 virtual machine provided in each individual user process address space by a linux kernel. This is like unto moving towards the whole-machine emulation ability of something like vmware or qemu, but still a whole lot simpler than that, because the xen/x86 virtual machine is substantially simpler than the x86 privileged hardware. What this means is that valgrind directly simulates in software all the behavior of privilege modes and page tables that xen/x86 guests can see. valgrind runs the kernel, valgrind runs the user. For the memory handling, valgrind emulates the work of the MMU by translating virtual addresses according to the page tables, and indexes its shadow memory, translation caches, and whatnot on the physical addresses that come out of the MMU module. The translation hopefully can optimize this, analogous to how it's possible to optimize the memory checks done for a translated block containing multiple accesses to a single object; i.e., figure out that some blocks can access only a given address range, and do the MMU translation once at the start of a translated block. I probably said before this is straightforward. I didn't say it wouldn't be dog slow.

This is the completist approach, and it has some benefits. You get a system where one user process can copy around some uninitialized data, write it into a pipe, have it read and written by three other user processes through three other pipes, and come out someplace where valgrind says "this uninitialized garbage you see before you came from over here". That is pretty cool. Of more everyday use will be the ability you mentioned earlier to notice kernel code examining uninitialized data copied in from user memory. There are many worthwhile kinds of analysis that become possible when you are tracking the entire use of the machine with no boundary lines (such as process address space, or user/kernel mode) on your knowledge of the details.

The other approach is in fact what I had in mind at the genesis of the discussion. That is, take advantage of some knowledge of the guest kernels we're interested in running under valgrind, and only try to instrument kernel code, not user code. First, we decide that user-mode execution is a black box--we don't translate code in user mode, we just run it natively (more in another message about how this is done). When a page table entry is written with user-mode write permissions enabled, and we've then gone to user mode, we just assume something broad about what happened to all the bytes in those pages. (Well, actually, we can optimize this to know whether the page was touched at all or not.) Second, we observe that the guest kernels of interest actually use their page tables such that the kernel-mode-only pages are all the same in all the address spaces they ever switch among. These two assumptions allow us to go back to the innocent world we know, where there is just one single address space used by a single program: the guest kernel, in the kernel-only subset of the address space set up by its page tables. (Furthermore, kernels draw a fixed line between the two kinds of pages in the page tables; so in fact, the valgrind implementation of the guest hypercalls to install page table entries need only enforce that user-accessible pages are on one side of a threshold address and that kernel-only pages are on the other side, and then it need only keep track of this threshold for identifying user addresses on use, rather than consult the page tables.)

Now, the kernel does actually access user memory, and when it does so, the results depend on the address space switches that we just decided to pretend weren't happening. So the translated code needs to identify reads from user addresses as getting random bits from the black box. Basically, any load from user memory is equivalent to getting those bytes from a `read' system call, in userland valgrind's perspective. Usefully, in practice each block that loads from user memory always loads from user memory, and each block that stores to user memory always stores to user memory. So you can notice at the first execution of a block that it will refer to user memory, and translate that block to quickly check against the threshold address and then do the memory-tracking machinery appropriate for the black-box loads/stores. (In practice the guest kernels never call these blocks with an address not in the user side of the address space, so that quick check is just for wild bugs.)

Thanks,
Roland
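The fixed kernel/user threshold described above admits a one-compare classification. The boundary value below is an illustrative i386-style 3GB split, not something fixed by this thread:

```c
#include <stdint.h>

/* Sketch of the threshold test: because the guest kernels of interest
   keep all kernel-only pages above one fixed boundary in every address
   space, a single compare classifies a load/store address, with no
   page-table walk.  GUEST_KERNEL_THRESHOLD is an assumed example value. */
#define GUEST_KERNEL_THRESHOLD 0xC0000000u

static int is_user_address(uint32_t linear)
{
    return linear < GUEST_KERNEL_THRESHOLD;
}
```

A translated block flagged as touching user memory would run this check and then route the access through the black-box memory-tracking path rather than the ordinary shadow-memory path.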

From: Jeremy F. <je...@go...> - 2004-12-04 01:08:17

CVS commit by fitzhardinge: If we have included valgrind.h, but are not compiling for a supported target architecture, don't emit any inline asms.

M +11 -0 valgrind.h.in 1.5

--- valgrind/include/valgrind.h.in #1.4:1.5
@@ -65,4 +65,15 @@
 #define __@VG_ARCH@__ 1 // Architecture we're installed on
+
+/* If we're not compiling for our target architecture, don't generate
+   any inline asms. This would be a bit neater if we used the same
+   CPP symbols as the compiler for identifying architectures. */
+#if !(__x86__ && __i386__)
+# ifndef NVALGRIND
+#  define NVALGRIND 1
+# endif /* NVALGRIND */
+#endif
+
+
 /* This file is for inclusion into client (your!) code.
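The effect of the NVALGRIND guard can be illustrated with a stand-in macro; `MY_CLIENT_REQUEST` here is a hypothetical example, not Valgrind's real client-request interface:

```c
/* With NVALGRIND defined -- which the patch above now arranges
   automatically on unsupported architectures -- client-request-style
   macros should compile away to nothing, so including valgrind.h
   never emits inline asm the target cannot assemble. */
#define NVALGRIND 1  /* what valgrind.h now does off-target */

#ifndef NVALGRIND
# define MY_CLIENT_REQUEST(counter) ((counter)++)  /* would emit inline asm */
#else
# define MY_CLIENT_REQUEST(counter) ((void)0)      /* no-op build */
#endif
```

Client code can then invoke the macro unconditionally and still build cleanly on any architecture.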