From: Julian S. <js...@ac...> - 2008-02-20 22:54:50
|
On Sunday 17 February 2008 21:57, Nicholas Nethercote wrote:
> On Sun, 17 Feb 2008 sv...@va... wrote:
> > +to and reading from the shared memory. Since the invention of the
> > +multithreading concept, there is an ongoing debate about which way to
> > +model concurrent activities is better -- shared memory programming or
> > +message passing [Ousterhout 1996].
>
> Isn't what you've called here "multithreading" more typically called
> "shared memory multithreading" or something like that?
>
> Nice write-up, BTW.
I agree. I would add that POSIX pthreads is the de-facto standard way
to do shared memory programming, and MPI is the de-facto standard way
to do message passing.
I'm sure that message-passing has some failure modes (deadlocks) in
common with shared memory programming, and I wouldn't be at all
surprised to hear it could suffer from races too.
---
One of the things I have come to realise in the past year or so is
what a terrible programming model explicit shared-memory parallelism
is. It's simply too hard for humans to understand and reason about
(in all but the most trivial of applications): even small threaded
programs are extremely hard to make sense of.
J
|
From: Julian S. <js...@ac...> - 2008-02-20 22:36:43
|
> > For sure a lower miss rate on the cache is possible. Also, the
> > miss path -- function cmpGEQ_VTS -- is naively coded, we could
> > do a lot better there. Maybe it is possible to change the
> > representation of VTSs so that comparison using vector operations
> > (SSE insns, etc) is possible. Or at least so that the complex
> > alignment logic can be avoided.
>
> Right now I use the graph machinery, not vts.
> In order to support barrier I have to create a fake segment which does not
> belong to any thread.
Another way is with O(log N) new segments. No need for segments
that do not belong to any thread, and you can implement it using
the existing Segment representation easily.
Suppose there are 4 threads, t1, t2, t3, t4
* wait for all the threads to arrive at the barrier. When the
last one arrives:
* make a binary tree:
- "seg12": new segment belonging to t2, depending on t1 and old t2
- "seg34": new segment belonging to t4, depending on t3 and old t4
- "seg1234": new segment belonging to t4, depending on seg12 and seg34
* now seg1234 depends on all threads arriving at the barrier.
* now construct new segments for t1, t2, t3, t4, which of course
depend on the previous segments in the same thread, and also on
seg1234
I heard that this technique or something like it is explained in
Arndt Muehlenfeld's thesis, although I did not read it.
It is inefficient (in space) compared to having a single fake segment.
But at least you can implement it using the 2-input Segment nodes in
the original Helgrind. For a single fake segment, you would need
to represent N input dependencies, which makes the representation more
complex. Note that because the VTS effectively caches the result of
all hb-queries between all segment-pairs, this technique is not
inefficient in time, only in space (and then only O(log N) cost).
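The pairwise combining just described can be sketched with plain 2-input segment nodes. The `Segment` and `new_segment` names below are illustrative stand-ins, not Helgrind's actual types: the sketch reduces the arrival segments in rounds, leaving one segment (the "seg1234" of the 4-thread example) that depends on every thread's arrival.

```c
#include <assert.h>
#include <stddef.h>

typedef struct Segment {
    struct Segment *prev;   /* previous segment in the same thread */
    struct Segment *other;  /* cross-thread dependence, or NULL    */
} Segment;

static Segment pool[64];
static int pool_used;

static Segment *new_segment(Segment *prev, Segment *other)
{
    Segment *s = &pool[pool_used++];
    s->prev = prev;
    s->other = other;
    return s;
}

/* Combine the arrival segments of n threads (n a power of 2) into a
   single segment depending on all of them, in O(log n) rounds.
   After the round with stride s, the combined segment for the block
   of 2*s threads starting at i sits at index i + 2*s - 1, and it
   belongs to that block's last thread (as with seg12/seg34/seg1234). */
static Segment *barrier_combine(Segment *arrive[], int n)
{
    for (int s = 1; s < n; s *= 2)
        for (int i = 0; i + 2*s - 1 < n; i += 2*s)
            arrive[i + 2*s - 1] =
                new_segment(arrive[i + 2*s - 1], arrive[i + s - 1]);
    return arrive[n - 1];   /* depends on every thread's arrival */
}
```

Each thread would then start its post-barrier segment with a dependence on the returned combined segment, exactly as in the step-by-step recipe above.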
> I did not figure out what VTS to put in that fake segment (well, I did not
> try hard :))
General rule:
supposing thread T is starting a new segment, giving segments Told and Tnew
and also we want to add dependence on some other segment S, then
Tnew->vts = tickL_and_joinR( Told->vts, S->vts )
assuming that Tnew->prev == Told and Tnew->other == S
and it is important that Told and Tnew are from the same thread.
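The tickL_and_joinR rule can be made concrete with a small fixed-size vector-timestamp model. This is a hypothetical sketch, not Helgrind's actual VTS representation: tick the slot of the thread starting the new segment, then take the element-wise max with the other segment's VTS.

```c
#include <assert.h>

#define NTHREADS 4

typedef struct { unsigned clk[NTHREADS]; } VTS;

/* Tnew->vts = tickL_and_joinR(Told->vts, S->vts), where t is the
   thread to which both Told and Tnew belong.  "tick" advances the
   left (same-thread) clock; "join" is the element-wise maximum with
   the right-hand (cross-dependence) VTS. */
static VTS tickL_and_joinR(VTS told, VTS s, int t)
{
    VTS tnew = told;
    tnew.clk[t] += 1;                   /* tick: T starts a new segment */
    for (int i = 0; i < NTHREADS; i++)  /* join: element-wise max */
        if (s.clk[i] > tnew.clk[i])
            tnew.clk[i] = s.clk[i];
    return tnew;
}
```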
J
|
|
From: Julian S. <js...@ac...> - 2008-02-20 21:29:58
|
On Wednesday 20 February 2008 22:07, Nicholas Nethercote wrote:
> On Wed, 20 Feb 2008, Julian Seward wrote:
> >>> But programmers should know in advance which bits of memory should be
> >>> shared. Perhaps some client requests could be used which say "this
> >>> section of memory will be shared" or "this section of memory won't be
> >>> shared" could be useful. In the "won't be shared" sections the
> >>> checking might be a lot simpler?
> >>
> >> Sounds nice, but a very important task for data race detection tools
> >> is to detect which data is shared between threads unintentionally.
> >
> > For Helgrind-style schemes, data that is accessed only by one thread
> > stays in the exclusive state, and the state machine actions for exclusive
> > states are cheaper than for shared data, since there is no need
> > to do lockset intersections or threadset unions for Excl states.
> > So to some extent, there already is a less-expensive (I won't say fast
> > :-) handling case for data which is never shared.
>
> I think you're both missing the point. If you say "this memory shouldn't
> be shared", Helgrind could warn as soon as the memory leaves the exclusive
> state. Currently, Helgrind won't warn about memory shared unintentionally
> unless there's a race involved.
Ah. I did indeed miss the point.
Yes. So in effect it's a way for the program to communicate to the
checker, knowledge about properties of memory ("should always be
unshared") which it can check, and which it would not otherwise know.
Neat idea.
Typically only a small fraction of the memory in a program is shared;
most is unshared. It might be easier to turn it upside down, so the
tool is notified of areas which may be shared. Hmm. Not sure if
that is a good idea. It would give stronger checking but would
require exhaustively annotating all locations which might become
shared.
J
|
|
From: Nicholas N. <nj...@cs...> - 2008-02-20 21:19:52
|
On Wed, 20 Feb 2008, Tom Hughes wrote:
>> I just wanted to check if anyone out there had a serious go at it
>> already.
>
> I think the answer to that is yes:
>
> dellow [~/src/valgrind-3] % ls -d VEX/*/*arm*
> VEX/orig_arm/nanoarm VEX/priv/guest-arm/ VEX/pub/libvex_guest_arm.h
> VEX/orig_arm/nanoarm.orig VEX/priv/host-arm/
See also docs/internals/porting-to-ARM.txt.
Nick
|
From: Nicholas N. <nj...@cs...> - 2008-02-20 21:13:23
|
On Wed, 20 Feb 2008, Olivier Sarrouy wrote:
> But how can i, at run-time, access the content of temporaries (for
> example to obtain the result of an Add32) ?
Temporaries don't exist at run-time as such -- they all get converted
into real registers. But, at instrumentation time that isn't
relevant... if you want the value in a temporary, you just use it.
Eg. if you have
  t3 = Add32(t1, t2)
the result in t3 is accessed simply by using t3. So you can pass it
into a helper function, for example.
Look at Memcheck's code, and also the code after it has instrumented
it (using --trace-flags). Or you might find the example in
http://www.valgrind.org/docs/valgrind2007.pdf useful.
Nick
|
From: Nicholas N. <nj...@cs...> - 2008-02-20 21:08:40
|
On Wed, 20 Feb 2008, Julian Seward wrote:
>>> But programmers should know in advance which bits of memory should be
>>> shared. Perhaps some client requests could be used which say "this
>>> section of memory will be shared" or "this section of memory won't be
>>> shared" could be useful. In the "won't be shared" sections the checking
>>> might be a lot simpler?
>>
>> Sounds nice, but a very important task for data race detection tools
>> is to detect which data is shared between threads unintentionally.
>
> For Helgrind-style schemes, data that is accessed only by one thread
> stays in the exclusive state, and the state machine actions for exclusive
> states are cheaper than for shared data, since there is no need
> to do lockset intersections or threadset unions for Excl states.
> So to some extent, there already is a less-expensive (I won't say fast :-)
> handling case for data which is never shared.
I think you're both missing the point. If you say "this memory shouldn't
be shared", Helgrind could warn as soon as the memory leaves the exclusive
state. Currently, Helgrind won't warn about memory shared unintentionally
unless there's a race involved. Judging from Julian's comment, this
mightn't make much difference to speed, but it would give stronger
checking.
Nick
|
From: Julian S. <js...@ac...> - 2008-02-20 16:19:01
|
> Right now I use the graph machinery, not vts.
> In order to support barrier I have to create a fake segment which does
> not belong to any thread.
> I did not figure out what VTS to put in that fake segment (well, I did
> not try hard :))
Ah, ok, something to fix. The graph machinery is much too expensive
for realistic use. The cache miss rate may be low but the miss cost
can be extremely high.
J
|
From: Konstantin S. <kon...@gm...> - 2008-02-20 15:52:14
|
> Improved accuracy can only be a good thing. Is MSMProp1 accurate
> enough that you can find bugs in your real applications, without
> getting too many false errors now?
Yes! (I still see many false reports due to custom synchronization and
that needs code annotations, but that's another story...)
> > but noticeably slower
> > (mostly because it has to check happens-before in all states).
>
> Maybe some better caching of the happens-before queries would
> help? The current implementation (hbefore__cache et al) uses a
> 64-entry fully associative cache, in effect. Maybe it would be
> better to have a larger, set-associative cache, eg 256 lines of
> 4 entries each, for example.
I don't see many cache misses here. But MSMProp1 has a loop where
MSMHelgrind did not have it. Anyway, this needs more experiment...
> For sure a lower miss rate on the cache is possible. Also, the
> miss path -- function cmpGEQ_VTS -- is naively coded, we could
> do a lot better there. Maybe it is possible to change the
> representation of VTSs so that comparison using vector operations
> (SSE insns, etc) is possible. Or at least so that the complex
> alignment logic can be avoided.
Right now I use the graph machinery, not vts.
In order to support barrier I have to create a fake segment which does
not belong to any thread.
I did not figure out what VTS to put in that fake segment (well, I did
not try hard :))
--kcc
|
From: Bart V. A. <bar...@gm...> - 2008-02-20 15:51:11
|
On Feb 20, 2008 12:42 PM, Julian Seward <js...@ac...> wrote:
> But anyway. A flag which says "assume all stack accesses are thread-local"
> would drastically cut the number of references to be checked, and might
> be a useful addition. I think old Helgrind had such a feature. The
> only real difficulty is deciding for sure what is and isn't a stack access
> at JIT time (basically impossible, we'd need a run-time filter too).
There are applications that share thread-allocated data over threads.
Some examples:
* At least with Linuxthreads, when creating a new thread by calling
pthread_create(), a pointer to the thread ID is passed as the first
argument to pthread_create(). Such thread ID's are typically allocated
on the stack. The newly created thread then fills in this thread ID,
and some time later the creator thread reads this thread ID when
calling pthread_join(). So this is an example of stack-allocated data
initialized by one thread and read by another thread. I did not yet
check the behavior of NPTL with regard to filling in thread ID's.
* There exist several high-level C++ abstractions of threads and
synchronization objects. These libraries typically associate one
object with each thread, one object with each mutex, etc. It is
convenient when creating threads from inside main() to allocate the
instances of thread objects on the stack. The data of such objects is
typically accessed both by the creator and by the created thread.
Bart.
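Bart's first example fits in a minimal compilable program: the pthread_t lives in the caller's stack frame, is filled in at create time (under Linuxthreads, by the newly created thread itself), and the creator reads the same stack slot back at join time.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

static void *worker(void *arg)
{
    (void)arg;
    return NULL;
}

/* The thread ID is allocated on this function's stack.
   pthread_create() fills it in, and the creator reads that same
   stack slot back in pthread_join() -- stack-allocated data shared
   across threads.  Returns 0 on success. */
static int stack_tid_demo(void)
{
    pthread_t tid;  /* stack-allocated thread ID */
    if (pthread_create(&tid, NULL, worker, NULL) != 0)
        return -1;
    return pthread_join(tid, NULL);
}
```

This is exactly the kind of access a blanket "stack accesses are thread-local" flag would wrongly exclude from checking.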
|
From: Konstantin S. <kon...@gm...> - 2008-02-20 15:37:45
|
> So this increases the granularity by treating larger sections of
> memory as a single thing?
No. It just ignores a large portion of memory locations (thus missing
some races). What you suggest will also lead to speedup, but will show
races where there is no race actually, but likely a false sharing.
> But programmers should know in advance which bits of memory should be
> shared. Perhaps some client requests could be used which say "this
> section of memory will be shared" or "this section of memory won't be
> shared" could be useful. In the "won't be shared" sections the
> checking might be a lot simpler?
Yes, I was also thinking about such requests. But in order to be
helpful they must cover a large portion of all accesses (in dynamic).
Also, we will have to put all these addresses into some
map/hash_map/cache which may also be expensive.
One more idea I am going to try:
Consider we want to debug only some part of a large program. Then we
can simply ignore all memory accesses done in other parts of the
program. Helgrind could have a command line option saying 'handle only
those accesses that happen when the execution stack contains FOO() or
BAR()'
--kcc
|
From: <sv...@va...> - 2008-02-20 15:20:34
|
Author: sewardj
Date: 2008-02-20 15:20:33 +0000 (Wed, 20 Feb 2008)
New Revision: 7429
Log:
Minimal changes needed to make the regression tests build and run
again.
Modified:
branches/DATASYMS/coregrind/m_debuginfo/readdwarf3.c
branches/DATASYMS/coregrind/m_oset.c
branches/DATASYMS/memcheck/mc_malloc_wrappers.c
branches/DATASYMS/memcheck/tests/oset_test.c
Modified: branches/DATASYMS/coregrind/m_debuginfo/readdwarf3.c
===================================================================
--- branches/DATASYMS/coregrind/m_debuginfo/readdwarf3.c 2008-02-20 01:12:54 UTC (rev 7428)
+++ branches/DATASYMS/coregrind/m_debuginfo/readdwarf3.c 2008-02-20 15:20:33 UTC (rev 7429)
@@ -1490,7 +1490,7 @@
}
}
if (!found) {
- if (VG_(clo_verbosity) >= 0) {
+ if (0 && VG_(clo_verbosity) >= 0) {
VG_(message)(Vg_DebugMsg,
"warning: parse_var_DIE: non-external variable "
"outside DW_TAG_subprogram");
Modified: branches/DATASYMS/coregrind/m_oset.c
===================================================================
--- branches/DATASYMS/coregrind/m_oset.c 2008-02-20 01:12:54 UTC (rev 7428)
+++ branches/DATASYMS/coregrind/m_oset.c 2008-02-20 15:20:33 UTC (rev 7429)
@@ -786,7 +786,7 @@
Word cmpresS; /* signed */
UWord cmpresU; /* unsigned */
- tl_assert(oset);
+ vg_assert(oset);
stackClear(oset);
if (!oset->root)
@@ -833,7 +833,7 @@
if (stackPop(oset, &n, &i)) {
// If we've pushed something to stack and did not find the exact key,
// we must fix the top element of stack.
- tl_assert(i == 2);
+ vg_assert(i == 2);
stackPush(oset, n, 3);
// the stack looks like {2, 2, ..., 2, 3}
}
Modified: branches/DATASYMS/memcheck/mc_malloc_wrappers.c
===================================================================
--- branches/DATASYMS/memcheck/mc_malloc_wrappers.c 2008-02-20 01:12:54 UTC (rev 7428)
+++ branches/DATASYMS/memcheck/mc_malloc_wrappers.c 2008-02-20 15:20:33 UTC (rev 7429)
@@ -508,7 +508,9 @@
{
MC_Chunk* mc1 = *(MC_Chunk**)n1;
MC_Chunk* mc2 = *(MC_Chunk**)n2;
- return (mc1->data < mc2->data ? -1 : 1);
+ if (mc1->data < mc2->data) return -1;
+ if (mc1->data > mc2->data) return 1;
+ return 0;
}
static void
Modified: branches/DATASYMS/memcheck/tests/oset_test.c
===================================================================
--- branches/DATASYMS/memcheck/tests/oset_test.c 2008-02-20 01:12:54 UTC (rev 7428)
+++ branches/DATASYMS/memcheck/tests/oset_test.c 2008-02-20 15:20:33 UTC (rev 7429)
@@ -347,7 +347,7 @@
return buf;
}
-static Word blockCmp(void* vkey, void* velem)
+static Word blockCmp(const void* vkey, const void* velem)
{
Addr key = *(Addr*)vkey;
Block* elem = (Block*)velem;
@@ -369,8 +369,8 @@
// Create a dynamic OSet of Blocks. This one uses slow (custom)
// comparisons.
OSet* oset = VG_(OSetGen_Create)(offsetof(Block, first),
- blockCmp,
- (void*)malloc, free);
+ blockCmp,
+ malloc, free);
// Try some operations on an empty OSet to ensure they don't screw up.
vg_assert( ! VG_(OSetGen_Contains)(oset, &v) );
|
|
From: John R.
|
> I think the real question is how useful ARM support will be if we
> can only support relatively old instruction sets.
Supporting only ARMv5 (including Thumb mode) would be just as useful
as current x86 and x86_64 support, where the reports of unsupported
opcodes still dribble in, even 7 years after workable valgrind.
ARMv5 and ARMv4 cover the vast majority of installed chips today. The
Microsoft operating system products (WinCE, Windows Mobile) for mobile
devices that use ARM compile for ARMv4. The NSLU2 (200MHz CPU, 32MB
RAM, 64MB flash ROM, 2xUSB2.0, 10/100 ethernet; $100;
http://www.nslu2-linux.org/ ) and similar consumer devices use ARMv5
(ARMv5TE on mine.)
--
John Reiser, jreiser@BitWagon.com
|
From: John R. <joh...@gm...> - 2008-02-20 13:45:26
|
On 20/02/2008, Tom Hughes <to...@co...> wrote:
> In message <996...@ma...>
> John Ripley <joh...@gm...> wrote:
>
> > I realise this gets brought up often, but I'm thinking of having a
> > decent go at getting at least ARMv4 support into Valgrind. I know that
> > there are possibly some legal issues but:
> >
> > * I have never clicked through or signed an agreement not to develop
> > cores or models for ARM.
> > * The ARM reference manual (2nd ed) I have on my desk here doesn't
> > have that agreement.
> > * No datasheet I've downloaded in the past had that agreement.
> > * It seems to have only appeared recently and only on ARMv6/7 data.
> >
> > So I think I'll basically just go ahead regardless and just stay away
> > from anything ARMv6 and up (which the ARM book doesn't cover anyway).
>
> That is the situation as I understand it, yes. I too have an ARMv4
> data book at home that I purchased from a book shop without entering
> into any kind of restrictive agreement.
>
> I think the real question is how useful ARM support will be if we
> can only support relatively old instruction sets.
Very. The vast majority of shipping products with ARM cores in them
are ARMv6 or below, and sadly even the ARMv6 cores are usually running
at most ARMv5 code. It is also very easy to recompile most
applications with -march=armv5, compared to re-targeting everything
for x86-Linux and wrapping everything in a faked-up environment (e.g.
missing drivers, etc). Valgrind for even just ARMv4 would have been
incredibly useful to me for years, and even if it stayed ARMv4-only it
would be useful for years from now :) I've had plenty of bugs
disappear when compiled for x86, and it would have saved a lot of
debug time.
> > I just wanted to check if anyone out there had a serious go at it
> > already.
>
> I think the answer to that is yes:
>
> dellow [~/src/valgrind-3] % ls -d VEX/*/*arm*
> VEX/orig_arm/nanoarm VEX/priv/guest-arm/ VEX/pub/libvex_guest_arm.h
> VEX/orig_arm/nanoarm.orig VEX/priv/host-arm/
That's a decent start :) I saw the document saying that there was an
old effort to port for ARM, but it's so out of date that it doesn't
compile any more. If the instruction decode/encode is already there,
then that's a HUGE amount of work I won't need to do.
John.
|
From: Tom H. <to...@co...> - 2008-02-20 13:28:09
|
In message <996...@ma...>
John Ripley <joh...@gm...> wrote:
> I realise this gets brought up often, but I'm thinking of having a
> decent go at getting at least ARMv4 support into Valgrind. I know that
> there are possibly some legal issues but:
>
> * I have never clicked through or signed an agreement not to develop
> cores or models for ARM.
> * The ARM reference manual (2nd ed) I have on my desk here doesn't
> have that agreement.
> * No datasheet I've downloaded in the past had that agreement.
> * It seems to have only appeared recently and only on ARMv6/7 data.
>
> So I think I'll basically just go ahead regardless and just stay away
> from anything ARMv6 and up (which the ARM book doesn't cover anyway).
That is the situation as I understand it, yes. I too have an ARMv4
data book at home that I purchased from a book shop without entering
into any kind of restrictive agreement.
I think the real question is how useful ARM support will be if we
can only support relatively old instruction sets.
> I just wanted to check if anyone out there had a serious go at it
> already.
I think the answer to that is yes:
dellow [~/src/valgrind-3] % ls -d VEX/*/*arm*
VEX/orig_arm/nanoarm VEX/priv/guest-arm/ VEX/pub/libvex_guest_arm.h
VEX/orig_arm/nanoarm.orig VEX/priv/host-arm/
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: John R. <joh...@gm...> - 2008-02-20 12:57:19
|
I realise this gets brought up often, but I'm thinking of having a
decent go at getting at least ARMv4 support into Valgrind. I know that
there are possibly some legal issues but:
* I have never clicked through or signed an agreement not to develop
cores or models for ARM.
* The ARM reference manual (2nd ed) I have on my desk here doesn't
have that agreement.
* No datasheet I've downloaded in the past had that agreement.
* It seems to have only appeared recently and only on ARMv6/7 data.
So I think I'll basically just go ahead regardless and just stay away
from anything ARMv6 and up (which the ARM book doesn't cover anyway).
I just wanted to check if anyone out there had a serious go at it
already.
John Ripley.
|
From: Ashley P. <api...@co...> - 2008-02-20 11:52:18
|
On Wed, 2008-02-20 at 08:15 +0100, Bart Van Assche wrote:
> On Feb 19, 2008 10:09 PM, Nicholas Nethercote <nj...@cs...> wrote:
> > But programmers should know in advance which bits of memory should be
> > shared. Perhaps some client requests could be used which say "this
> > section of memory will be shared" or "this section of memory won't be
> > shared" could be useful. In the "won't be shared" sections the
> > checking might be a lot simpler?
>
> Hello Nick,
>
> Sounds nice, but a very important task for data race detection tools
> is to detect which data is shared between threads unintentionally.
That's one class of bug, but there is another class where data is
known to be shared but the locking is insufficient. Checking for the
latter but not the former doesn't sound useful, but if it gives a
significant performance improvement it would be useful to have as an
addition.
I think the problem with it is you would need to annotate *all*
libraries linked with the program for it to work, and that isn't going
to happen easily and would be extensive and hence error prone if it
did.
Ashley,
|
From: Julian S. <js...@ac...> - 2008-02-20 11:45:35
|
On Wednesday 20 February 2008 08:15, Bart Van Assche wrote:
> On Feb 19, 2008 10:09 PM, Nicholas Nethercote <nj...@cs...> wrote:
> > But programmers should know in advance which bits of memory should be
> > shared. Perhaps some client requests could be used which say "this
> > section of memory will be shared" or "this section of memory won't be
> > shared" could be useful. In the "won't be shared" sections the checking
> > might be a lot simpler?
>
> Sounds nice, but a very important task for data race detection tools
> is to detect which data is shared between threads unintentionally.
For Helgrind-style schemes, data that is accessed only by one thread
stays in the exclusive state, and the state machine actions for
exclusive states are cheaper than for shared data, since there is no
need to do lockset intersections or threadset unions for Excl states.
So to some extent, there already is a less-expensive (I won't say fast
:-) handling case for data which is never shared.
But anyway. A flag which says "assume all stack accesses are
thread-local" would drastically cut the number of references to be
checked, and might be a useful addition. I think old Helgrind had such
a feature. The only real difficulty is deciding for sure what is and
isn't a stack access at JIT time (basically impossible, we'd need a
run-time filter too).
J
|
From: Julian S. <js...@ac...> - 2008-02-20 11:38:51
|
On Wednesday 20 February 2008 09:26, you wrote:
> In fact, i think i've explained my problem in a very silly way (and i'm
> sorry for that ...).
> Let's suppose we're given an input IRSB with a t3 = Add32(t2,t5)
> statement. What i intend to do is to access, at run-time, the value
> of t3. My first idea was to use a helper c function which would, at
> run time, access the temporaries (via pointers f.e.). But how to
> access these temporaries at run time, via pointers ?
You can't access them via pointers. The temporaries are stored in
registers by a later stage of the compilation (JIT) pipeline.
Your instrumentation function will scan the input IRSB. It must copy
all the input code into a new IRSB (else the program won't work
properly). But when it does the copy, it can add new code of its own.
For example, if it sees
  t3 = Add32(t2,t5)
then after that you can create an IRStmt which contains an
IRExpr_RdTmp(t3). For example, you could create an IRStmt which calls
a C helper function, passing it the value of t3. Many tools do that
kind of thing.
Try studying lackey, it is relatively simple. Try to understand the
output of --trace-flags=10001000.
It is better to generate inline IR instrumentation than to generate
many calls to C helper functions, since calling C helpers a lot will
make your programs run slowly.
J
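The copy-and-add scheme Julian describes can be modelled in a few self-contained lines without the VEX API at all. Everything below (the `Stmt` encoding, the `helper` function, the tiny interpreter) is a toy stand-in for IRSB/IRStmt and dirty helper calls; it only demonstrates that a call inserted at instrumentation time receives the temporary's value at run time.

```c
#include <assert.h>

enum { OP_CONST, OP_ADD, OP_CALL };

typedef struct { int op, dst, a, b; } Stmt;  /* dst = a op b */

static int last_seen;                  /* what the helper observed  */
static void helper(int v) { last_seen = v; }

/* "Instrumentation time": copy the input block into the output block,
   and after every Add append a helper call that names the Add's
   destination temporary.  Returns the output statement count. */
static int instrument(const Stmt *in, int n_in, Stmt *out)
{
    int n_out = 0;
    for (int i = 0; i < n_in; i++) {
        out[n_out++] = in[i];
        if (in[i].op == OP_ADD)        /* pass the Add's result temp */
            out[n_out++] = (Stmt){ OP_CALL, 0, in[i].dst, 0 };
    }
    return n_out;
}

/* "Run time": temporaries are just slots in tmp[]; the helper call
   reads the temporary's current value, as IRExpr_RdTmp would. */
static void run(const Stmt *sb, int n, int *tmp)
{
    for (int i = 0; i < n; i++) {
        switch (sb[i].op) {
        case OP_CONST: tmp[sb[i].dst] = sb[i].a;                     break;
        case OP_ADD:   tmp[sb[i].dst] = tmp[sb[i].a] + tmp[sb[i].b]; break;
        case OP_CALL:  helper(tmp[sb[i].a]);                         break;
        }
    }
}
```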
|
From: Bart V. A. <bar...@gm...> - 2008-02-20 07:15:29
|
On Feb 19, 2008 10:09 PM, Nicholas Nethercote <nj...@cs...> wrote:
> But programmers should know in advance which bits of memory should be
> shared. Perhaps some client requests could be used which say "this
> section of memory will be shared" or "this section of memory won't be
> shared" could be useful. In the "won't be shared" sections the
> checking might be a lot simpler?
Hello Nick,
Sounds nice, but a very important task for data race detection tools
is to detect which data is shared between threads unintentionally.
Bart Van Assche.
|
From: Tom H. <th...@cy...> - 2008-02-20 05:07:40
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-02-20 03:15:02 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 338 tests, 80 stderr failures, 1 stdout failure, 29 post failures ==
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-pool-0 (stderr)
memcheck/tests/leak-pool-1 (stderr)
memcheck/tests/leak-pool-2 (stderr)
memcheck/tests/leak-pool-3 (stderr)
memcheck/tests/leak-pool-4 (stderr)
memcheck/tests/leak-pool-5 (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/lsframe1 (stderr)
memcheck/tests/lsframe2 (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/noisy_child (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/bug152022 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/xor-undef-x86 (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/alloc-fns-A (post)
massif/tests/alloc-fns-B (post)
massif/tests/basic (post)
massif/tests/basic2 (post)
massif/tests/big-alloc (post)
massif/tests/culling1 (stderr)
massif/tests/culling2 (stderr)
massif/tests/custom_alloc (post)
massif/tests/deep-A (post)
massif/tests/deep-B (stderr)
massif/tests/deep-B (post)
massif/tests/deep-C (stderr)
massif/tests/deep-C (post)
massif/tests/deep-D (post)
massif/tests/ignoring (post)
massif/tests/insig (post)
massif/tests/long-names (post)
massif/tests/long-time (post)
massif/tests/new-cpp (post)
massif/tests/null (post)
massif/tests/one (post)
massif/tests/overloaded-new (post)
massif/tests/peak (post)
massif/tests/peak2 (stderr)
massif/tests/peak2 (post)
massif/tests/realloc (stderr)
massif/tests/realloc (post)
massif/tests/thresholds_0_0 (post)
massif/tests/thresholds_0_10 (post)
massif/tests/thresholds_10_0 (post)
massif/tests/thresholds_10_10 (post)
massif/tests/thresholds_5_0 (post)
massif/tests/thresholds_5_10 (post)
massif/tests/zero1 (post)
massif/tests/zero2 (post)
none/tests/blockfault (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/hg06_readshared (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc02_simple_tls (stderr)
helgrind/tests/tc03_re_excl (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc07_hbl1 (stderr)
helgrind/tests/tc08_hbl2 (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc11_XCHG (stderr)
helgrind/tests/tc12_rwl_trivial (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
helgrind/tests/tc24_nonzero_sem (stderr)
exp-drd/tests/fp_race (stderr)
exp-drd/tests/fp_race2 (stderr)
exp-drd/tests/matinv (stderr)
exp-drd/tests/pth_barrier (stderr)
exp-drd/tests/pth_broadcast (stderr)
exp-drd/tests/pth_cond_race (stderr)
exp-drd/tests/pth_cond_race2 (stderr)
exp-drd/tests/pth_create_chain (stderr)
exp-drd/tests/pth_detached (stderr)
exp-drd/tests/pth_detached2 (stderr)
exp-drd/tests/sem_as_mutex (stderr)
exp-drd/tests/sem_as_mutex2 (stderr)
exp-drd/tests/sigalrm (stderr)
exp-drd/tests/tc17_sembar (stderr)
exp-drd/tests/tc18_semabuse (stderr)
From: Tom H. <th...@cy...> - 2008-02-20 04:04:53
Nightly build on lloyd (x86_64, Fedora 7) started at 2008-02-20 03:05:06 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 372 tests, 7 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-02-20 03:48:28
Nightly build on trojan (x86_64, Fedora Core 6) started at 2008-02-20 03:25:21 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 376 tests, 6 stderr failures, 5 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/bug133694 (stdout)
memcheck/tests/x86/bug133694 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/cmdline1 (stdout)
none/tests/cmdline2 (stdout)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-02-20 03:48:19
Nightly build on aston (x86_64, Fedora Core 5) started at 2008-02-20 03:20:08 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 378 tests, 9 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/blockfault (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/sem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-02-20 03:27:23
Nightly build on dellow (x86_64, Fedora 8) started at 2008-02-20 03:10:06 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 372 tests, 8 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_cvsimple (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Results from 24 hours ago                   ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 372 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Difference between 24 hours ago and now     ==
=================================================

*** old.short	Wed Feb 20 03:18:50 2008
--- new.short	Wed Feb 20 03:27:24 2008
***************
*** 8,10 ****
! == 372 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
--- 8,10 ----
! == 372 tests, 8 stderr failures, 3 stdout failures, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
***************
*** 16,17 ****
--- 16,18 ----
  none/tests/mremap2 (stdout)
+ none/tests/pth_cvsimple (stdout)
  helgrind/tests/tc18_semabuse (stderr)
From: Tom H. <th...@cy...> - 2008-02-20 03:14:53
Nightly build on gill (x86_64, Fedora Core 2) started at 2008-02-20 03:00:02 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 378 tests, 29 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/insn_ssse3 (stdout)
none/tests/amd64/insn_ssse3 (stderr)
none/tests/amd64/ssse3_misaligned (stderr)
none/tests/blockfault (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/insn_ssse3 (stdout)
none/tests/x86/insn_ssse3 (stderr)
none/tests/x86/ssse3_misaligned (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)