From: Tom H. <th...@cy...> - 2004-01-25 22:56:15
In message <107...@ix...>
Jeremy Fitzhardinge <je...@go...> wrote:
> On Sun, 2004-01-25 at 02:12, Tom Hughes wrote:
> > We should really handle extended flags and set the MMXEXT bit as we
> > do support all the MMXEXT instructions on Athlons.
>
> Oh, I hadn't realized we were supporting some of the AMD extensions.
> Well, I guess we need to leave the vendor name unmolested.
Well it's only MMXEXT at the moment, which is actually a subset of
the original SSE extensions, but earlier Athlons had MMXEXT but not
full SSE support.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
From: Dirk M. <dm...@gm...> - 2004-01-25 22:54:05
On Sunday 25 January 2004 22:59, Jeremy Fitzhardinge wrote:

> Those are just typos. The real issue is whether we should claim to
> support instruction set features which we don't actually support.

No, of course not, but that's not what your patch was about, was it? To the best of my knowledge we already unmasked the 3dnow! feature before, so no application correctly using cpuid was ever going to run into 3dnow! instructions.

> If we claim to support SSE3 just because the underlying CPU supports it,
> how does that help anyone?

It doesn't. But do you want to "blacklist" features which we for sure don't support, or do you want to "whitelist" features which we do support? Since it's hard to say which kind of features some obscure CPU might have, I don't think whitelisting makes sense. We know the handful of features we don't support, so blacklist them. Leave the rest alone. Don't disturb the emulation when there is no urgent need to do so. Coincidentally, this is exactly what the old code was doing, and I'd have liked to see some discussion before we radically change such a decision, especially when there was apparently no concrete bug fix pending.

> The specific thing I wanted to address here was not reporting CPU
> capabilities which Valgrind doesn't implement. Since there's a new wave
> of CPUs being released with new feature bits (most importantly Intel's
> Prescott, but also AMD's Athlon64), we're going to see new feature flags
> appearing, and they need to be handled correctly.

Sure, when they cause a known problem which we can't fix otherwise right now, we disable them. I for one would like to know about those problems first, for deciding whether it's better to disable the feature in the cpuid or instead to implement the feature in the valgrind emulation. When you disable the feature in the first place, you'll never get told whether that feature is working or not, or what you have to look at to get it working.

Dirk
From: Eyal L. <ey...@ey...> - 2004-01-25 22:48:31
Pawel Kot wrote:

> On Thu, 22 Jan 2004, Marcelo Tosatti wrote:
> > What happened to this?
> > Was valgrind fixed or is its compilation still broken?
>
> I have successfully built valgrind versions 2.0.0 and 2.1.0 with 2.4.24
> includes. I have explicitly copied the includes from the sources to be
> sure it is correct.
>
> pkot@laptok:~$ gcc -v
> [...]
> gcc version 3.2.2

Sure. The problem only started with 2.4.25-pre. It was clear from the start that the linux headers changed. [BTW, it really started with 2.4.24-pre, but that series was withdrawn and became 2.4.25-pre, hence the 2.4.24 in the title, which I have now changed.]

The question is "who is at fault"? Are we (vg) using kernel headers incorrectly and have now been caught, or did the linux people really break their headers? Is any valgrind developer testing with 2.4.25-pre? This is not an area I am familiar with; it is best handled by a vg person who knows about the use of asm/timex.h.

--
Eyal Lebedinsky (ey...@ey...) <http://samba.org/eyal/>
From: Jeremy F. <je...@go...> - 2004-01-25 21:59:33
On Sun, 2004-01-25 at 13:27, Dirk Mueller wrote:

> On Sunday 25 January 2004 18:46, Julian Seward wrote:
> > I think a case can be made for both points of view. However, it
> > seems to me that Jeremy's approach is reasonable enough -- in fact
> > I quite like the sound of it.
>
> I complain less about the vendor-name change (though I find it pretty
> pointless; it just introduces an emulation breakage without any good
> reason for it as far as I can see) than about the "we only tell the user
> about those flags we know about". There are so many CPUs out there, and
> so far the only problem we had was that we don't support 3dnow!. I think
> it is wrong to artificially limit the "features" we advertise from those
> of the host we run on. Tom already found two features that were disabled
> with this patch which we support just fine - there might be more.

Those are just typos. The real issue is whether we should claim to support instruction set features which we don't actually support. If we claim to support SSE3 just because the underlying CPU supports it, how does that help anyone?

All the other feature flags are things which just don't matter to user-mode programs. They don't represent any kind of instruction set extension; they're related to other things like page-table format, power management, etc. We could pass them through, but there's no strong reason to do so.

> Personally, I find it rather frustrating during debugging that whatever
> I'm trying to look at is "healed" by running under valgrind. This is
> just another change that purposefully breaks the emulation for
> (apparently) no good reason. There are a few that are nasty, and are
> remaining: like syscalls being terribly fast compared to the "normal"
> case, and the resulting "timing"-based races.

Hm? What's this?

> I haven't tested yet. But to ask another question: which problems does
> this change fix? Be specific, please. Just give me one concrete bug that
> can't be fixed without that patch, and I'll shut up.

The specific thing I wanted to address here was not reporting CPU capabilities which Valgrind doesn't implement. Since there's a new wave of CPUs being released with new feature bits (most importantly Intel's Prescott, but also AMD's Athlon64), we're going to see new feature flags appearing, and they need to be handled correctly.

The ValgrindVCPU thing was more a spur-of-the-moment thing, and I think I'll back it out, since it obscures real information.

J
From: Jeremy F. <je...@go...> - 2004-01-25 21:53:19
On Sun, 2004-01-25 at 08:06, Nicholas Nethercote wrote:

> "ValgrindVCPU" seems ok to me, for two reasons:
>
> 1. Theoretical: The differences between Valgrind's VCPU and the
> underlying one are greater than just timing. For example, my Athlon
> supports 3dNow! instructions; Valgrind doesn't.
>
> 2. Practical: It's hard to imagine anyone actually using the vendor
> string ID in a real program; or certainly not in a way that changes any
> code paths taken.

Well, that's not quite true. The vendor name is what you need to look at to see what vendor-specific extensions are available (requests >0x80000000). One of the reasons for changing the vendor name is that it gives us scope to add our own vendor extensions.

Unfortunately, if we're implementing vendor-specific extensions like AMD's MMXEXT, then we can't play this game. We need to pass through the vendor string so that clients can know what parts of the CPUID request space they can use. I still think it's a good idea to suppress all the extensions which we either don't know about, or which aren't relevant to user-mode programs.

Also, things like the cache/TLB details are (naturally) very implementation specific, so the vendor ID will play a role there. We'll care about that when we get to self-virtualizing.

J
From: Jeremy F. <je...@go...> - 2004-01-25 21:41:36
On Sun, 2004-01-25 at 02:12, Tom Hughes wrote:

> We should really handle extended flags and set the MMXEXT bit as we
> do support all the MMXEXT instructions on Athlons.

Oh, I hadn't realized we were supporting some of the AMD extensions. Well, I guess we need to leave the vendor name unmolested.

J
From: Dirk M. <dm...@gm...> - 2004-01-25 21:27:11
On Sunday 25 January 2004 18:46, Julian Seward wrote:

> I think a case can be made for both points of view. However, it
> seems to me that Jeremy's approach is reasonable enough -- in fact
> I quite like the sound of it.

I complain less about the vendor-name change (though I find it pretty pointless; it just introduces an emulation breakage without any good reason for it as far as I can see) than about the "we only tell the user about those flags we know about". There are so many CPUs out there, and so far the only problem we had was that we don't support 3dnow!. I think it is wrong to artificially limit the "features" we advertise from those of the host we run on. Tom already found two features that were disabled with this patch which we support just fine - there might be more.

> If we get a lot of people complaining,
> we can always back it out and/or modify it.

How many people do you think will be able to track whatever weird behaviour they see when not running under valgrind, and which is gone when running under valgrind, back to exactly this change? So how do we expect to get a lot of complaints?

Personally, I find it rather frustrating during debugging that whatever I'm trying to look at is "healed" by running under valgrind. This is just another change that purposefully breaks the emulation for (apparently) no good reason. There are a few that are nasty and remain: like syscalls being terribly fast compared to the "normal" case, and the resulting "timing"-based races. Adding another one on top doesn't make things better.

> > What do you gain by breaking code which you don't have the sources of
> > (like for example the nvidia dri stuff)?
>
> Be specific -- what problem(s) is this change giving you?

I haven't tested yet. But to ask another question: which problems does this change fix? Be specific, please. Just give me one concrete bug that can't be fixed without that patch, and I'll shut up.

Dirk
"if it ain't broken, don't fix it"
From: Jeremy F. <je...@go...> - 2004-01-25 20:52:49
On Sun, 2004-01-25 at 10:35, Nicholas Nethercote wrote:

> Attached is my attempt to add epoll support to the CVS HEAD. I'm not
> sure about the use of the "#ifdef KERNEL_2_6" in vg_syscalls.c, however.
> It was my attempt to cope with "struct epoll_event" not being available
> in 2.4 and earlier kernels.

I would be inclined to copy the definition into vg_kerneliface.h, and make it all unconditional. If the underlying kernel doesn't support the syscall, then it will fail properly. As much as possible, I'd like to eliminate compile-time dependencies on particular kernel versions.

J
From: Nicholas N. <nj...@ca...> - 2004-01-25 20:50:49
CVS commit by nethercote:
Update description of Robert's patches.
M +2 -2 related.html 1.5
--- devel-home/valgrind/related.html #1.4:1.5
@@ -34,6 +34,6 @@
<li>Robert Walsh has two useful
<a href="http://www.durables.org/software/valgrind/">patches</a>. One
- adds watchpoints on memory locations, the other adds file descriptor
- leak checking.
+ adds watchpoints on memory locations, the other adds support for pool-based
+ allocators.
<p>
</ul>
From: Nicholas N. <nj...@ca...> - 2004-01-25 20:49:13
CVS commit by nethercote:
Include results of 2 late surveys.
M +29 -26 survey-summary 1.2
--- devel-home/valgrind/survey-summary #1.1:1.2
@@ -5,7 +5,7 @@
directly in the past 18 months (10 of those bounced).
-Got 114 full responses (plus 2 or 3 that gave no useful info).
+Got 116 full responses (plus 2 or 3 that gave no useful info).
-113 were in English. 1 was in French. Fortunately the French was pretty easy.
+115 were in English. 1 was in French. Fortunately the French was pretty easy.
Nationalities of the 226 people directly contacted (based on email suffixes;
@@ -156,5 +156,5 @@
private: lots of small personal projects.
-other: file format translator, job scheduling system.
+other: CAD, file format translator, job scheduling system.
unexpected: helped one guy learning C++, esp. for understanding destructors.
@@ -165,11 +165,12 @@
tell if they use both, or just consider them equivalent.
-C 54
-C++ 50
+C 56
+C++ 52
Fortran 6
Java 3
-asm 2
+asm 3
+Python 2
+TCL/TK 1
Objective C 1
-Python 1
Pike 1
ExaScript 1
@@ -184,5 +185,5 @@
two 7
~5 12
-~10 9
+~10 10
~15 4
~20 2
@@ -205,5 +206,5 @@
Raw figures:
-Memcheck 100% - 47, 99% - 5, 97% - 1, 95% - 11, 90% - 13, 80% - 8, 75% - 1,
+Memcheck 100% - 48, 99% - 5, 97% - 1, 95% - 11, 90% - 14, 80% - 8, 75% - 1,
70% - 3, 60% - 3, 50% - 2, 40% - 1, 30% - 1, 33% - 1, 25% - 1,
20% - 3, 10% - 2, used - 1
@@ -212,5 +213,5 @@
5% - 4
-Calltree 100% - 1, 80% - 2, 50% - 1, 40% - 1, 25% - 2, 20% - 7, 10% - 7,
+Calltree 100% - 1, 80% - 2, 50% - 1, 40% - 1, 25% - 2, 20% - 7, 10% - 8,
5% - 6, 1% - 1, used - 2
@@ -232,6 +233,6 @@
sum %
--- -
-Memcheck: 8920 85%
-Calltree: 641 6%
+Memcheck: 9110 85%
+Calltree: 651 6%
Addrcheck: 583 6%
Cachegrind: 234 2%
@@ -295,7 +296,7 @@
Event-based:
- when a bug occurs/suspected 41
- before releases 17
- on big changes 8
+ when a bug occurs/suspected 42
+ before releases 19
+ on big changes 9
Calltree/KCachegrind when I'm bored 2
on every change 1
@@ -319,6 +320,6 @@
command line, or via a script.
-in automated testing 11
-manually 102
+in automated testing 13
+manually 103
via script/makefile
(to avoid long command lines) 11
@@ -406,5 +407,5 @@
purify (?):
Valgrind pros:
- V easier to run 12
+ V easier to run 13
V is better 7
V has no horrible licence server 4
@@ -425,5 +426,5 @@
V finds free/mismatch errors 1
Purify pros:
- P GUI is nicer 7
+ P GUI is nicer 8
P faster 3
P allows interactive leak checks 2
@@ -661,5 +662,5 @@
usage:
ease of use/no recompilation 40
- "it works" 13
+ "it works"/"it just works" 14
more convenient to run than GDB 1
programs sometimes seg fault normally,
@@ -698,4 +699,5 @@
finds most bugs 2
finds bugs I wouldn't otherwise know about 2
+ finds uninitialised errors 2
find bugs more easily than with traditional tools 1
finds memory overruns 1
@@ -707,5 +709,4 @@
full code coverage 1
tests thing no other free software can test 1
- only way to find uninitialised errors 1
bit-level accuracy is good 1
@@ -897,4 +898,5 @@
threading/syscall msgs could be better 1
lack of type information in error messages 1
+ some errors could give more information 1
skins, usage:
@@ -915,4 +917,5 @@
code is complex 1
reinvents the wheel (viz. bochs, QEMU) 1
+ KCachegrind takes some understanding 1
10 had no complaints, 6 didn't answer, which presumably means no complaints.
@@ -936,5 +939,5 @@
# Good things about non-software stuff
-generally happy 71
+generally happy 72
[Ie. answered "yes" to the "are you happy with the way Valgrind is developed"
question. Some also had extra comments/quibbles.]
@@ -1075,6 +1078,6 @@
generally yes 5
3,2,1 total
-win32/2000/XP 3,16,15 56
-Solaris 5,12, 8 47
+win32/2000/XP 4,16,15 59
+Solaris 6,13, 8 52
OS X/Darwin 2, 5, 6 22
FreeBSD 1, 8, 4 21
@@ -1121,8 +1124,8 @@
generally yes 6
+SPARC 0, 17, 6 40
PowerPC 5, 7, 10 39
-SPARC 0, 16, 6 38
AMD-64 1, 11, 5 30
-ia64 1, 4, 4 15
+ia64 1, 5, 4 17
Power(4) 2, 3, 1 13
something 64-bit 1, 3, 0 9
From: Nicholas N. <nj...@ca...> - 2004-01-25 20:34:24
CVS commit by nethercote:
staticalise
M +1 -1 vg_errcontext.c 1.52
--- valgrind/coregrind/vg_errcontext.c #1.51:1.52
@@ -192,5 +192,5 @@ void construct_error ( Error* err, Threa
}
-void gen_suppression(Error* err)
+static void gen_suppression(Error* err)
{
Int i;
From: Dirk M. <dm...@gm...> - 2004-01-25 19:53:40
On Sunday 25 January 2004 20:30, Nicholas Nethercote wrote:

> Anti-globalisation
> -void VG_(gen_suppression)(Error* err)
> +void gen_suppression(Error* err)

static?
From: Nicholas N. <nj...@ca...> - 2004-01-25 19:30:59
CVS commit by nethercote:
Anti-globalisation
M +2 -2 vg_errcontext.c 1.51
M +0 -2 vg_include.h 1.175
--- valgrind/coregrind/vg_errcontext.c #1.50:1.51
@@ -192,5 +192,5 @@ void construct_error ( Error* err, Threa
}
-void VG_(gen_suppression)(Error* err)
+void gen_suppression(Error* err)
{
Int i;
@@ -260,5 +260,5 @@ void do_actions_on_error(Error* err, Boo
if (VG_(is_action_requested)( "Print suppression",
& VG_(clo_gen_suppressions) )) {
- VG_(gen_suppression)(err);
+ gen_suppression(err);
}
}
--- valgrind/coregrind/vg_include.h #1.174:1.175
@@ -1282,6 +1282,4 @@ extern void VG_(show_all_errors) (
extern Bool VG_(is_action_requested) ( Char* action, Bool* clo );
-extern void VG_(gen_suppression) ( Error* err );
-
extern UInt VG_(n_errs_found);
From: Nicholas N. <nj...@ca...> - 2004-01-25 19:07:10
On Sat, 24 Jan 2004, Julian Seward wrote:

> I think we should remove support for --stop-after.

Attached patch does so.

N
From: Nicholas N. <nj...@ca...> - 2004-01-25 18:35:08
On Tue, 20 Jan 2004, Tom Hughes wrote:

> Mukund has confirmed to me that it is only normal kernel locking that
> he is talking about, so I think it is only epoll_wait that we need to
> consider as blocking.

Attached is my attempt to add epoll support to the CVS HEAD. I'm not sure about the use of the "#ifdef KERNEL_2_6" in vg_syscalls.c, however. It was my attempt to cope with "struct epoll_event" not being available in 2.4 and earlier kernels.

Also, I'm not super confident about the checking of the arguments; my patch is based on the patch at www.fefe.de/diffs/, but I changed the argument checking a little bit, because I don't think it was correct. Finally, I haven't actually tried it because my machine is still running a 2.4 kernel...

N
From: Nicholas N. <nj...@ca...> - 2004-01-25 17:48:13
On Sat, 24 Jan 2004, Jeremy Fitzhardinge wrote:

> All the dependencies on having to read /proc/self/maps before/after
> allocating various pieces of memory should be obsolete now. This is
> because 1) the client hasn't run at all at this early stage, so there
> are no dynamically generated mappings, and 2) anyway, all client
> mappings are in the client address space, and all Valgrind mappings are
> above, so there's no likelihood of confusing the two. I don't know if
> this actually makes any difference, but it seemed like a couple of
> things were initialized in somewhat unnatural places because of this
> old constraint.

You're welcome to change it; I'm reluctant to do so because I'm not familiar enough with how memory is laid out at the start.

> Also, I think the read and parse of /proc/self/maps were separated
> because it was quite common to read /proc/self/maps once, then parse it
> multiple times for various reasons (in fact, main() does this now). I
> think this is still a useful thing to have, though reading
> /proc/self/maps should now only be necessary during startup, since
> after that the Segment list will tell you everything you need to know
> about the layout of the address space.

/proc/self/maps is read once, and parsed twice, all within main() before running the client. Also, the two parsings happen one after another, in a way that made me think they could be combined.

N
From: Julian S. <js...@ac...> - 2004-01-25 17:39:26
On Sunday 25 January 2004 13:17, Dirk Mueller wrote:

> On Sunday 25 January 2004 04:07, Jeremy Fitzhardinge wrote:
> > Well, exactly that. We're so far from being like the underlying CPU,
> > there's no point in pretending it is actually the underlying CPU.
>
> Well, I disagree. Besides timing, we're pretty much like the underlying
> CPU. I think it's important that the application, no matter whether I
> wrote it or not, and whether I have the sources or not, runs just the
> very same code it would run without valgrind as well. Otherwise trying
> to debug failures becomes pretty much moot.

I think a case can be made for both points of view. However, it seems to me that Jeremy's approach is reasonable enough -- in fact I quite like the sound of it. If we get a lot of people complaining, we can always back it out and/or modify it.

> What do you gain by breaking code which you don't have the sources of
> (like for example the nvidia dri stuff)?

Be specific -- what problem(s) is this change giving you? Obviously we want V to work for as many people as possible, and for that we need specifics.

J
From: Nicholas N. <nj...@ca...> - 2004-01-25 16:06:58
On Sun, 25 Jan 2004, Dirk Mueller wrote:

> > Well, exactly that. We're so far from being like the underlying CPU,
> > there's no point in pretending it is actually the underlying CPU.
>
> Well, I disagree. Besides timing, we're pretty much like the underlying
> CPU. I think it's important that the application, no matter whether I
> wrote it or not, and whether I have the sources or not, runs just the
> very same code it would run without valgrind as well. Otherwise trying
> to debug failures becomes pretty much moot.

"ValgrindVCPU" seems ok to me, for two reasons:

1. Theoretical: The differences between Valgrind's VCPU and the underlying one are greater than just timing. For example, my Athlon supports 3dNow! instructions; Valgrind doesn't.

2. Practical: It's hard to imagine anyone actually using the vendor string ID in a real program; or certainly not in a way that changes any code paths taken.

> What do you gain by breaking code which you don't have the sources of
> (like for example the nvidia dri stuff)?

I don't understand -- what code is being broken?

N
From: Dirk M. <dm...@gm...> - 2004-01-25 13:17:37
On Sunday 25 January 2004 04:07, Jeremy Fitzhardinge wrote:

> Well, exactly that. We're so far from being like the underlying CPU,
> there's no point in pretending it is actually the underlying CPU.

Well, I disagree. Besides timing, we're pretty much like the underlying CPU. I think it's important that the application, no matter whether I wrote it or not, and whether I have the sources or not, runs just the very same code it would run without valgrind as well. Otherwise trying to debug failures becomes pretty much moot.

> still emulates all the important parts of the CPUID instruction, and
> any program which correctly uses CPUID will be fine. Programs which
> don't correctly use the CPUID instruction should be given the
> opportunity to fail so they can be fixed.

What do you gain by breaking code which you don't have the sources of (like for example the nvidia dri stuff)?

> Also, it means you could use CPUID to implement RUNNING_ON_VALGRIND,
> which may be useful (for example, if you want to special-case
> something, without actually having a source dependency on valgrind/*.h).

That's not a reason IMHO. RUNNING_ON_VALGRIND is a 5-line #define, which you can just copy&paste into your sources instead of including something from valgrind/*.h.

I'm not sure how the others feel, but given that there was absolutely NO discussion about this patch, I'm surprised that it was just committed.
From: Tom H. <th...@cy...> - 2004-01-25 10:12:33
In message <200...@of...>
Jeremy Fitzhardinge <je...@go...> wrote:
> Virtualize CPUID. Rather than just using the host CPU's CPUID,
> we now completely virtualize it. The feature flags returned are the
> intersection of the set the CPU supports, and the set of flags Valgrind
> supports. This turns out to be a small number of features, like FPU,
> TSC, MMX, SSE, SSE2, FXSR. All mention of things which are only useful
> to kernel-mode code are also suppressed. This CPUID doesn't support
> any extended feature flags, or extended CPUID operations. It returns a
> vendor string of "ValgrindVCPU".
We should really handle extended flags and set the MMXEXT bit as we
do support all the MMXEXT instructions on Athlons.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
From: Tom H. <th...@cy...> - 2004-01-25 09:50:19
In message <200...@of...>
Jeremy Fitzhardinge <je...@go...> wrote:
> +/* The set of features we're willing to support for the client */
> +#define VG_SUPPORTED_FEATURES \
> + ((1 << VG_X86_FEAT_FPU) | \
> + (1 << VG_X86_FEAT_TSC) | \
> + (1 << VG_X86_FEAT_CMOV) | \
> + (1 << VG_X86_FEAT_MMX) | \
> + (1 << VG_X86_FEAT_FXSR) | \
> + (1 << VG_X86_FEAT_SSE) | \
> + (1 << VG_X86_FEAT_SSE2))
> +
The CX8 bit should be set as well, as valgrind supports cmpxchg8b.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
From: Jeremy F. <je...@go...> - 2004-01-25 03:44:48
CVS commit by fitzhardinge:
Oops, make base static.
M +1 -1 vg_mylibc.c 1.67
--- valgrind/coregrind/vg_mylibc.c #1.66:1.67
@@ -1532,5 +1532,5 @@ Int VG_(system) ( Char* cmd )
UInt VG_(read_millisecond_timer) ( void )
{
- ULong base;
+ static ULong base = 0;
struct vki_timeval tv_now;
ULong now;
From: Jeremy F. <je...@go...> - 2004-01-25 03:33:32
CVS commit by fitzhardinge:
Don't use TSC for internal timing purposes. This is for two reasons:
- old CPUs (and their modern embedded clones) don't implement the TSC
- on new machines with power management, the TSC changes rate, and so is
useless as a timebase
Valgrind doesn't use read_millisecond_timer very much these days, so
the expense of doing a gettimeofday syscall shouldn't be a huge issue.
Naturally, rdtsc is still available for client purposes (if the host CPU
supports it).
M +0 -4 vg_include.h 1.174
M +0 -14 vg_main.c 1.141
M +11 -113 vg_mylibc.c 1.66
--- valgrind/coregrind/vg_include.h #1.173:1.174
@@ -1059,8 +1059,4 @@ extern void* VG_(brk) ( void* end_data_s
extern Char* VG_(arena_strdup) ( ArenaId aid, const Char* s);
-/* Skins shouldn't need these...(?) */
-extern void VG_(start_rdtsc_calibration) ( void );
-extern void VG_(end_rdtsc_calibration) ( void );
-
extern Int VG_(fcntl) ( Int fd, Int cmd, Int arg );
extern Int VG_(select)( Int n,
--- valgrind/coregrind/vg_main.c #1.140:1.141
@@ -2886,10 +2886,4 @@ int main(int argc, char **argv)
//--------------------------------------------------------------
- // Start calibration of our RDTSC-based clock
- // p: n/a
- //--------------------------------------------------------------
- VG_(start_rdtsc_calibration)();
-
- //--------------------------------------------------------------
// Reserve Valgrind's kickstart, heap and stack
// p: XXX ???
@@ -2945,12 +2939,4 @@ int main(int argc, char **argv)
//--------------------------------------------------------------
- // End calibrating our RDTSC-based clock, having waited a while.
- // p: VG_(start_rdtsc_calibration)() [obviously]
- //--------------------------------------------------------------
- // Nb: Don't have to wait very long; it does pretty well even if
- // start_rdtsc_calibration() is immediately before this.
- VG_(end_rdtsc_calibration)();
-
- //--------------------------------------------------------------
// Initialise translation table and translation cache
// p: read_procselfmaps [so the anonymous mmaps for the TT/TC
--- valgrind/coregrind/vg_mylibc.c #1.65:1.66
@@ -1527,124 +1527,23 @@ Int VG_(system) ( Char* cmd )
/* ---------------------------------------------------------------------
- Support for a millisecond-granularity counter using RDTSC.
+ Support for a millisecond-granularity timer.
------------------------------------------------------------------ */
-static __inline__ ULong do_rdtsc_insn ( void )
-{
- ULong x;
- __asm__ volatile (".byte 0x0f, 0x31" : "=A" (x));
- return x;
-}
-
-/* 0 = pre-calibration, 1 = calibration, 2 = running */
-static Int rdtsc_calibration_state = 0;
-static ULong rdtsc_ticks_per_millisecond = 0; /* invalid value */
-
-static struct vki_timeval rdtsc_cal_start_timeval;
-static struct vki_timeval rdtsc_cal_end_timeval;
-
-static ULong rdtsc_cal_start_raw;
-static ULong rdtsc_cal_end_raw;
-
UInt VG_(read_millisecond_timer) ( void )
{
- ULong rdtsc_now;
- // If called before rdtsc setup completed (eg. from SK_(pre_clo_init)())
- // just return 0.
- if (rdtsc_calibration_state < 2) return 0;
- rdtsc_now = do_rdtsc_insn();
- vg_assert(rdtsc_now > rdtsc_cal_end_raw);
- rdtsc_now -= rdtsc_cal_end_raw;
- rdtsc_now /= rdtsc_ticks_per_millisecond;
- return (UInt)rdtsc_now;
-}
-
-
-void VG_(start_rdtsc_calibration) ( void )
-{
+ ULong base;
+ struct vki_timeval tv_now;
+ ULong now;
Int res;
- vg_assert(rdtsc_calibration_state == 0);
- rdtsc_calibration_state = 1;
- rdtsc_cal_start_raw = do_rdtsc_insn();
- res = VG_(do_syscall)(__NR_gettimeofday, (UInt)&rdtsc_cal_start_timeval,
- (UInt)NULL);
- vg_assert(!VG_(is_kerror)(res));
-}
-
-void VG_(end_rdtsc_calibration) ( void )
-{
- Int res, loops;
- ULong cpu_clock_MHZ;
- ULong cal_clock_ticks;
- ULong cal_wallclock_microseconds;
- ULong wallclock_start_microseconds;
- ULong wallclock_end_microseconds;
- struct vki_timespec req;
- struct vki_timespec rem;
-
- vg_assert(rdtsc_calibration_state == 1);
- rdtsc_calibration_state = 2;
-
- /* Try and delay for 20 milliseconds, so that we can at least have
- some minimum level of accuracy. */
- req.tv_sec = 0;
- req.tv_nsec = 20 * 1000 * 1000;
- loops = 0;
- while (True) {
- res = VG_(nanosleep)(&req, &rem);
- vg_assert(res == 0 /*ok*/ || res == 1 /*interrupted*/);
- if (res == 0)
- break;
- if (rem.tv_sec == 0 && rem.tv_nsec == 0)
- break;
- req = rem;
- loops++;
- if (loops > 100)
- VG_(core_panic)("calibration nanosleep loop failed?!");
- }
- /* Now read both timers, and do the Math. */
- rdtsc_cal_end_raw = do_rdtsc_insn();
- res = VG_(do_syscall)(__NR_gettimeofday, (UInt)&rdtsc_cal_end_timeval,
+ res = VG_(do_syscall)(__NR_gettimeofday, (UInt)&tv_now,
(UInt)NULL);
- vg_assert(rdtsc_cal_end_raw > rdtsc_cal_start_raw);
- cal_clock_ticks = rdtsc_cal_end_raw - rdtsc_cal_start_raw;
-
- wallclock_start_microseconds
- = (1000000ULL * (ULong)(rdtsc_cal_start_timeval.tv_sec))
- + (ULong)(rdtsc_cal_start_timeval.tv_usec);
- wallclock_end_microseconds
- = (1000000ULL * (ULong)(rdtsc_cal_end_timeval.tv_sec))
- + (ULong)(rdtsc_cal_end_timeval.tv_usec);
- vg_assert(wallclock_end_microseconds > wallclock_start_microseconds);
- cal_wallclock_microseconds
- = wallclock_end_microseconds - wallclock_start_microseconds;
-
- /* Since we just nanoslept for 20 ms ... */
- vg_assert(cal_wallclock_microseconds >= 20000);
+ now = tv_now.tv_sec * 1000000ULL + tv_now.tv_usec;
- /* Now we know (roughly) that cal_clock_ticks on RDTSC take
- cal_wallclock_microseconds elapsed time. Calculate the RDTSC
- ticks-per-millisecond value. */
- if (0)
- VG_(printf)("%lld ticks in %lld microseconds\n",
- cal_clock_ticks, cal_wallclock_microseconds );
+ if (base == 0)
+ base = now;
- rdtsc_ticks_per_millisecond
- = cal_clock_ticks / (cal_wallclock_microseconds / 1000ULL);
- cpu_clock_MHZ
- = (1000ULL * rdtsc_ticks_per_millisecond) / 1000000ULL;
- if (VG_(clo_verbosity) >= 1)
- VG_(message)(Vg_UserMsg, "Estimated CPU clock rate is %d MHz",
- (UInt)cpu_clock_MHZ);
- if (cpu_clock_MHZ < 50 || cpu_clock_MHZ > 10000)
- VG_(core_panic)("end_rdtsc_calibration: "
- "estimated CPU MHz outside range 50 .. 10000");
- /* Paranoia about division by zero later. */
- vg_assert(rdtsc_ticks_per_millisecond != 0);
- if (0)
- VG_(printf)("ticks per millisecond %llu\n",
- rdtsc_ticks_per_millisecond);
+ return (now - base) / 1000;
}
From: Jeremy F. <je...@go...> - 2004-01-25 03:07:14
On Sat, 2004-01-24 at 19:00, Dirk Mueller wrote:
> Euhm, I'm curious: why that?
>
> It seems most sensible to me to actually emulate the host we're running
> on as close as possible, instead of purposefully breaking the emulation
> by doing stuff like returning a vendor string of "ValgrindVCPU".
>
> I'm pretty sure that developers do want to figure out why an application
> doesn't work when it is not run under valgrind. When it is run under
> valgrind, the same code paths should be taken.
>
> It's hard to imagine what you were trying to fix here, though.

Well, exactly that. We're so far from being like the underlying CPU that
there's no point in pretending it actually is the underlying CPU. It still
emulates all the important parts of the CPUID instruction, and any program
which correctly uses CPUID will be fine. Programs which don't correctly use
the CPUID instruction should be given the opportunity to fail so they can
be fixed.

Also, it means you could use CPUID to implement RUNNING_ON_VALGRIND, which
may be useful (for example, if you want to special-case something without
actually having a source dependency on valgrind/*.h).

J
From: Dirk M. <dm...@gm...> - 2004-01-25 03:00:30
On Sunday 25 January 2004 03:38, Jeremy Fitzhardinge wrote:
> to kernel-mode code are also suppressed. This CPUID doesn't support
> any extended feature flags, or extended CPUID operations. It returns a
> vendor string of "ValgrindVCPU".

Euhm, I'm curious: why that?

It seems most sensible to me to actually emulate the host we're running on
as close as possible, instead of purposefully breaking the emulation by
doing stuff like returning a vendor string of "ValgrindVCPU".

I'm pretty sure that developers do want to figure out why an application
doesn't work when it is not run under valgrind. When it is run under
valgrind, the same code paths should be taken.

It's hard to imagine what you were trying to fix here, though.