|
From: Jeremy F. <je...@go...> - 2004-12-16 23:31:29
|
On Thu, 2004-12-16 at 12:33 +0000, Greg Parker wrote:

> In a month or two I should have a better idea of where Valgrind's
> OS-specific assumptions are, and perhaps some ideas of how it
> could be layered to make it more portable. My porting procedure
> is still mostly "doesn't build? comment it out!" so I haven't
> really looked at most of the system yet.

Nick has already made a good start on that; there's now a good
framework for determining where various pieces of code should live,
depending on whether they're CPU, OS or CPU+OS specific. There's
still a lot of stuff to be moved, but that process will necessarily
be driven by ports.

> My solution is as follows: Each thread in the inferior is a real
> Mach thread. Valgrind contains no scheduler and no reimplementation
> of the threading primitives. Instead, a single coarse-grained mutex
> is used to ensure that only one thread is executing in the Valgrind
> core at a time. If some thread is about to start a blocking syscall,
> Valgrind's syscall wrapper relinquishes the mutex, executes the
> syscall, and then blocks on the mutex before continuing execution.
> New threads block on the mutex before starting at their simulated
> entry point. Threads that have exhausted their basic block counter
> release the mutex and yield. The thread_suspend() trap throws in a
> few more curves, but nothing insurmountable. Almost everything else
> is handled automatically by the kernel's scheduler and threading
> primitives.

Yes, that's the direction we'd like to go anyway. I've been looking
at restructuring the core's thread support in a similar way. The plan
is to drop the whole Valgrind pthreads layer (mostly), and implement
Linux's threading at the clone level, layered on top of a very simple
internal thread model (which is basically "threads are either running
code or are blocked in a syscall").

One of my hesitations was that I wanted to see how other systems do
threading to make sure that we can use a similar mechanism across
multiple OSes: MacOS's Mach threading was a particular concern, so
the fact that you independently came up with the same scheme is very
reassuring.

I was planning on keeping the existing deterministic scheduling,
rather than letting the kernel scheduler make the decision; I'm
concerned that we'd get strange competition between IO-bound and
CPU-bound threads, since CPU-bound threads will consume roughly
20-50x more CPU under simulation, but IO-bound threads will look much
the same to the kernel. So I was thinking: one mutex per thread, and
each thread chooses and wakes its successor as it is about to give up
the CPU.

BTW, what's the thread_suspend trap, and what difficulties does it
introduce?

> So far, this mechanism works well, though I haven't thrown any
> heavily multithreaded tests at it yet. I haven't examined the
> possible interactions with signal handling, but signals are
> uncommon in Mac OS X applications so I'm willing to ignore them
> for now. The benefit is that I only need thin wrappers around
> a handful of Mach traps and some mutex management in the syscall
> handler.

Yeah, that's great.

J |
|
From: Jeremy F. <je...@go...> - 2004-12-16 23:03:25
|
On Wed, 2004-12-15 at 18:31 +0000, Julian Seward wrote:

> What one might interpret it to mean is that this is a
> microarchitectural hack from Intel. The different variants of each
> instruction effectively give a hint about which forwarding path the
> instruction's results should be sent along. If you keep the types
> consistent, data might get to the next functional unit (or whatever)
> sooner; if you mix up types, the results are still the same, but
> results have to be shunted along longer, slower forwarding paths.
>
> Any microarchitects out there have a clue about this?

Further confirmation: the Athlon64 performance counter documentation
lists events for "SSE reclass microfaults" and "SSE retype
microfaults".

J |
|
From: Tom H. <th...@cy...> - 2004-12-16 13:36:33
|
In message <200...@ka...>
Greg Parker <gp...@us...> wrote:
> I'm working on a port of Valgrind to Mac OS X, based on Paul
> Mackerras's Linux/PPC port. After about a week of bringup I
> have TextEdit.app running in simple recompilation mode - no
> optimization, no instrumentation, but mostly faithful simulation.
Sounds excellent.
> I started with Paul's "valgrind-2.3.0.CVS-ppc-tar.bz2", because
> it looked newest. Is there something else I should be working
> with? I have several PPC codegen fixes that I'll clean up in
> the next week or so, including a correction to stwux and similar;
> a correction to mfvrsave/mtvrsave; and a still-incomplete
> implementation of lswx/stswx.
As far as PPC goes that is probably the newest, but I'm not sure
of the date of the CVS tree it was built against.
> In a month or two I should have a better idea of where Valgrind's
> OS-specific assumptions are, and perhaps some ideas of how it
> could be layered to make it more portable. My porting procedure
> is still mostly "doesn't build? comment it out!" so I haven't
> really looked at most of the system yet.
Nicholas Nethercote has done a large amount of work on factoring
out OS and processor specific code - look at the current CVS head
to see where the layering is going on that front. Nick is away at
the moment though.
There is also work ongoing on a new model for the virtual CPU to
better support additional processors - that work isn't in CVS yet.
> One area I have looked at is the threading model, which I understand
> has been a difficult point for Valgrind in the past. Mac OS X is
> based entirely on Mach threads; pthreads are entirely a userspace
> construction. This means Valgrind's "reimplement libpthread and
> a scheduler" is incomplete here, because many libraries manipulate
> the Mach layer directly, and I'd rather not reimplement the Mach
> API as well.
I for one would dearly love to get rid of the current pthread
replacement model - it is a pain in the rear. I think there is
a pretty good consensus here that it should go as well.
One problem is that we still need to be able to track certain
events for things like helgrind that want to know when mutexes
are locked and unlocked.
> My solution is as follows: Each thread in the inferior is a real
> Mach thread. Valgrind contains no scheduler and no reimplementation
> of the threading primitives. Instead, a single coarse-grained mutex
> is used to ensure that only one thread is executing in the Valgrind
> core at a time. If some thread is about to start a blocking syscall,
> Valgrind's syscall wrapper relinquishes the mutex, executes the syscall,
> and then blocks on the mutex before continuing execution. New threads
> block on the mutex before starting at their simulated entry point.
> Threads that have exhausted their basic block counter release the
> mutex and yield. The thread_suspend() trap throws in a few more
> curves, but nothing insurmountable. Almost everything else is handled
> automatically by the kernel's scheduler and threading primitives.
That's more or less the sort of thing we were talking about although
that method of handling system calls hadn't occurred to me. On a
modern linux we can use a futex for the master lock which should be
fairly efficient.
We did actually have a solution along those lines working at one point
for wine although that was based around using sigsuspend when a thread
wanted to relinquish control and sending a signal to the next thread
to wake it up just before it suspended itself.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
From: Greg P. <gp...@us...> - 2004-12-16 12:33:16
|
I'm working on a port of Valgrind to Mac OS X, based on Paul
Mackerras's Linux/PPC port. After about a week of bringup I have
TextEdit.app running in simple recompilation mode - no optimization,
no instrumentation, but mostly faithful simulation.

So far, the bringup has been a rather destructive process. The
scheduler and libpthreads were ripped out completely because they're
inappropriate on Mac OS X. The dispatcher was rewritten because I
didn't like the one that was there. Finally, large sections like the
memory tracker and signal handling are disabled because I simply
haven't gotten around to them yet.

I started with Paul's "valgrind-2.3.0.CVS-ppc-tar.bz2", because it
looked newest. Is there something else I should be working with? I
have several PPC codegen fixes that I'll clean up in the next week or
so, including a correction to stwux and similar; a correction to
mfvrsave/mtvrsave; and a still-incomplete implementation of
lswx/stswx.

In a month or two I should have a better idea of where Valgrind's
OS-specific assumptions are, and perhaps some ideas of how it could
be layered to make it more portable. My porting procedure is still
mostly "doesn't build? comment it out!" so I haven't really looked at
most of the system yet.

One area I have looked at is the threading model, which I understand
has been a difficult point for Valgrind in the past. Mac OS X is
based entirely on Mach threads; pthreads are entirely a userspace
construction. This means Valgrind's "reimplement libpthread and a
scheduler" is incomplete here, because many libraries manipulate the
Mach layer directly, and I'd rather not reimplement the Mach API as
well.

My solution is as follows: Each thread in the inferior is a real Mach
thread. Valgrind contains no scheduler and no reimplementation of the
threading primitives. Instead, a single coarse-grained mutex is used
to ensure that only one thread is executing in the Valgrind core at a
time. If some thread is about to start a blocking syscall, Valgrind's
syscall wrapper relinquishes the mutex, executes the syscall, and
then blocks on the mutex before continuing execution. New threads
block on the mutex before starting at their simulated entry point.
Threads that have exhausted their basic block counter release the
mutex and yield. The thread_suspend() trap throws in a few more
curves, but nothing insurmountable. Almost everything else is handled
automatically by the kernel's scheduler and threading primitives.

So far, this mechanism works well, though I haven't thrown any
heavily multithreaded tests at it yet. I haven't examined the
possible interactions with signal handling, but signals are uncommon
in Mac OS X applications so I'm willing to ignore them for now. The
benefit is that I only need thin wrappers around a handful of Mach
traps and some mutex management in the syscall handler.

I used Purify on Solaris several years ago, and I've missed having
that kind of power on the platforms I've used since then. About a
year ago I looked at Valgrind, but the need for a new runtime and a
PowerPC engine was too much for me. Now thanks to Paul I should have
a good chance of pulling it together.

--
Greg Parker gp...@us... gp...@se... |
|
From: <js...@ac...> - 2004-12-16 03:50:15
|
Nightly build on phoenix ( SuSE 9.1 ) started at 2004-12-16 03:50:00 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
Nightly build on phoenix ( SuSE 9.1 ) started at 2004-12-16 03:50:00 GMT |
|
From: Tom H. <to...@co...> - 2004-12-16 03:25:40
|
Nightly build on dunsmere ( Fedora Core 3 ) started at 2004-12-16 03:20:03 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 12 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2004-12-16 03:20:39
|
Nightly build on audi ( Red Hat 9 ) started at 2004-12-16 03:15:01 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 12 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2004-12-16 03:14:04
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-12-16 03:10:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_cmov: valgrind ./insn_cmov
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/scalar (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2004-12-16 03:08:33
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-12-16 03:05:01 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/susphello (stdout)
none/tests/susphello (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2004-12-16 03:04:00
|
Nightly build on standard ( Red Hat 7.2 ) started at 2004-12-16 03:00:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 192 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/susphello (stdout)
none/tests/susphello (stderr)
make: *** [regtest] Error 1 |