From: Philippe W. <phi...@sk...> - 2012-06-13 18:38:40
Below is a text describing the prototype, the problems, the open questions, etc. around making Valgrind run threads in parallel. The text and the related patches have been attached to bug https://bugs.kde.org/show_bug.cgi?id=301830 The idea is that very soon the text below will be put in SVN under docs/internals; it is initially inlined in this mail to trigger discussion/comments. Updates of the text will then be tracked under SVN.

Philippe

----------------------------------------------------------------------------

Valgrind (3.8.0) supports multi-threaded applications, but schedules only one thread at a time. In other words, a multi-threaded application running under Valgrind will not benefit from multiple CPUs.

A prototype of a "really" multi-threaded Valgrind (mtV) has been developed. The prototype is made of ugly and/or inefficient kludges/trials/... You have been warned: you will encounter horrors. The only objective of this prototype is to understand the problems of doing an mtV, and to see what the possible approaches and performance gains are. The current state of the patches and/or test results is maintained in bugzilla, in bug https://bugs.kde.org/show_bug.cgi?id=301830

The document starts with a description of the current behaviour and structure of Valgrind. It then describes the prototype that was developed, the techniques used, and the problems still open. The document is split into sections; section titles start with a #. The first section is general background, which can be skipped by most V developers.
# The 3 Valgrind "layers"
==========================

   ------------------------------------------------------------------
   |  TOOL "runtime code"       |  TOOL "instrument function"       |
   ------------------------------------------------------------------
   |  GUEST JIT code (generated/instrumented code)                  |
   ------------------------------------------------------------------
   |  CORE layer                                                    |
   ------------------------------------------------------------------

CORE layer: the V framework & modules used to support the TOOL code.

TOOL layer: made of two big parts.
1. The instrument function is called by the CORE layer. It instruments the code of the executable running under V, small blocks at a time, producing "translated and instrumented" small pieces of JIT code. These pieces of JIT code are called by the scheduler contained in the CORE layer.
2. The TOOL "runtime code": the C helper functions and data structures needed by the GUEST JIT code. For example, for memcheck, it contains the data structures that track allocated and freed memory.

GUEST layer: contains the GUEST JIT code, i.e. the guest process code translated and instrumented by the TOOL instrument function.

The typical control flow is:
1. the CORE layer calls the "instrument function", producing instrumented GUEST JIT code;
2. the CORE layer calls the generated GUEST JIT code;
3. the GUEST JIT code runs. It will typically make many calls into the tool runtime code.

Note: the none tool is special: its "instrument function" does no code transformation, and it has no runtime code.

The tool runtime code can itself call into the CORE layer. Example: the guest process code is

   char *s;
   s = malloc (10);
   strcpy(s, "abc");
   (*fnptr)();

The following sequence will happen:
1. CORE decodes the guest process code.
2. CORE calls the TOOL function to instrument this decoded code. For memcheck, the instrumented code will be made of: a call to the memcheck malloc replacement function (this replacement is part of the memcheck runtime code; it tracks, among other things, the allocated memory to allow leak search); an assignment of the result of the malloc replacement to s; a call to a tool C helper to indicate that s (the pointer) is now initialized; a call to the function strcpy; and a (dynamic) call to the function pointed to by fnptr.
3. CORE translates the instrumented code to executable code (JIT) and stores it in a data structure, the translation table. Basically, the translation table is a mapping between a guest code address and the corresponding translated JIT code.
4. the CORE scheduler calls the JIT code (the just-translated piece of code).
5. the JIT code does the following: it calls the tool malloc replacement function, which calls the CORE code to allocate a block of memory (10 bytes + some admin overhead), calls the CORE code to compute (and store) a stack trace of the guest call stack, calls the CORE code to store in a hash table the address of the allocated block plus the stack trace reference, and records in a memcheck data structure that the 10 bytes are accessible but not initialized. The JIT code should then call strcpy; because strcpy is not translated yet, the JIT code executes a return, giving control back to the CORE scheduler.
6. the scheduler translates the strcpy function (generating a new piece of instrumented JIT code) and adds it to the translation table. Possibly the table is full; then old translations have to be removed from the table.
7. the scheduler calls the new JIT code, which executes the instrumented strcpy (this code will typically make multiple calls into the tool runtime code to indicate that the first 4 bytes pointed to by s are now initialized).
8. the JIT code searches the translation table for the translated piece of code corresponding to *fnptr. If it exists, it is called; otherwise, control is given back to the scheduler for instrumentation.

For a typical tool and guest application, most of the time is spent in GUEST JIT code, in the TOOL runtime code and in the CORE code. The time spent in the TOOL instrument function is deemed not to be a major part: usually, a translated piece of code is executed often.

If the application contains multiple threads, there is (almost) no parallel execution: at any moment in time, at most one thread "really" executes some code. This thread can be busy in the CORE layer (possibly calling the TOOL instrument function), in GUEST JIT code, or in the TOOL runtime code. This "single thread busy" model is needed because neither the CORE layer nor the TOOL layer (runtime or instrument) is re-entrant: these layers contain global variables/data structures/... which can only be used by one thread at a time. To ensure thread-safety of this non-reentrant code, V uses the very simple approach of "single thread busy".

"Single thread busy" is ensured by the "Big Lock". A thread must acquire the Big Lock before it executes CORE or GUEST or TOOL code. When a thread acquires the Big Lock, it becomes THE running thread. The running thread releases the Big Lock after it has executed a certain quantity of GUEST code, allowing another thread to run for some time. The other (ready to run) threads are all blocked, trying to acquire the Big Lock. One of them will acquire it and start to execute some code: typically GUEST JIT code, but possibly CORE code because some guest code has to be instrumented.

If the running thread executing GUEST JIT code has to execute a syscall (e.g. reading data on a socket), it must release the Big Lock: if it did not release the Big Lock before doing the syscall, the whole application would be blocked waiting for this thread to finish its syscall.

In other words, the only real parallelism provided by Valgrind is between one thread doing some CPU work in CORE/GUEST/TOOL code and all the other threads doing a system call in GUEST JIT code. This is a very low level of parallelism. If an application makes intensive use of threads, the typical slowdown factor of the V tool (e.g. 5 to 20 for memcheck) is multiplied by the serial execution of the CORE/GUEST/TOOL code.

The objective of the prototype is to determine how best to increase the parallelism of Valgrind.

# the non-thread safe code
==========================

To increase performance, we want multiple threads running in parallel (e.g. executing the code of the example). For this, the Big Lock must be removed and replaced by other techniques: in all the above steps, global data structures are used that will be corrupted by parallel access. Almost none of the steps are thread safe if the Big Lock is removed.

Typically, for the translation part, the following are not thread safe:
* the VEX lib (used by the CORE and the tool instrument function)
* the tool instrument function itself
* the CORE translation framework
* ...

At runtime (executing JIT code), e.g. the following are not thread safe:
* the CORE scheduler
* the memcheck C helpers
* the memcheck malloc replacement
* the memcheck V and A bits C helpers
* the CORE malloc/free
* the CORE aspacemgr (used e.g. by the CORE malloc/free)
* ...

At many places in the code, we have statistical counters that help understand the behaviour/performance of V (e.g. CORE malloc counts how many bytes have been malloc-ed). These counters are all non-thread safe.

The guest code itself might or might not be thread safe (e.g. it might contain race conditions).
Such non-thread-safe guest code is not a problem for our objective: the non-thread-safety will only corrupt the guest process's own data structures (the developer of the guest code can use helgrind to search for such code). However, currently even thread-safe guest code can result in non-thread-safe JIT code. For example, two threads executing at the same time a call to a not-yet-translated function (e.g. strcpy) will corrupt the JIT code (t-chaining).

Making all of the above thread safe can be done using various techniques:
* thread local storage instead of global variables
* mutex locks
* read/write locks
* atomic instructions (e.g. for statistical counters)
* lock-less algorithms (maintaining data structures without locks; such algorithms are based on atomic instructions)

Note: a mutex can be implemented using atomic instructions and OS syscalls (e.g. based on futex). If there is no contention, the performance of a mutex-based approach will very probably be similar to (or maybe even better than) a lock-less algorithm. Lock-less algorithms might however help either to increase throughput or to avoid difficulties such as deadlocks.

The prototype has explored some of the above techniques. Some "testing" (ha ha) has been done; see # testing below. At this stage, only the thread-safety of the CORE and JIT layers was looked at; nothing has been done yet for the tool runtime layer.

# prototype
===========

The very first version of the prototype was based on the following idea (looking somewhat wrong a posteriori): when a thread executes guest code, it has to use some data structures (e.g. the translation table, the JIT code itself, ...). These structures can't be protected by a mutex, otherwise we are back to square one: no parallelism. So, let's transform the Big Lock into a rwlock. Before executing guest code, a thread must acquire the Big Lock in read mode.
Before doing any modification to any data structure, the thread must acquire the Big Lock in write mode.

So, the Big Lock was transformed into a rwlock implemented on top of low-level "semaphores". These semaphores are the same as the current trunk scheduler "big lock", i.e. implemented using a pipe. !!! The fair scheduler has not been looked at. The rwlock implemented on top of these semaphores is not efficient, as it implies several read/write syscalls to acquire/release the lock.

With this, a bunch of changes were needed to avoid assertions being raised. Among others:
* the concept of THE lock owner disappears, so this cannot be checked anymore
* the concept of THE running thread disappears similarly
* the concept of a global 'in generated code' disappears
* a bunch of failing asserts have been #ifdef-ed out or commented.

After fixing the above, the 4 threads were able to run in parallel. One observes the following on the "big_parallel_sleepers" test:
* during +- 40 seconds, about 125% CPU is used
* after that, a jump to 400%.

Interpretation: making the translations and storing them in the transtab is still very much serialized by the Big Lock. Once all the code is translated, full speed is possible. See also "duplicate t-chaining" below.

helgrind (trunk) was then used in an "outer trunk helgrind / inner none tool prototype" setup to find race conditions. A bunch of such race conditions were found. Typically:
* race conditions in statistical counters
* ... need to retrieve and document more of these changes.

There are some tricky races, as some actions look like they are just reading some data but in fact modify global variables. E.g. all the functions maintaining a small static local cache write to that cache; typical examples are in the aspacemgr or in the transtab. The transtab (VG_(tt_fast)) is the worst case because:
1. it is used a lot
2. it is used by the asm dispatch code.
=> using a rw lock for this might not be an ideal solution.
See # tt_fast below for another suggested approach. Currently, the VG_(tt_fast) race condition is not fixed in the prototype.

The other data structures related to the translation tables were first protected the following way: before searching or modifying the translation table, the Big Lock has to be acquired in W mode. So, if a thread had the Big Lock in R mode, it reacquired it in W mode (by releasing and re-acquiring). Really not efficient; the effect of this was really bad: the max CPU usage dropped from 400% to 140%. Worse, the total elapsed time to run the test was bigger than with the V trunk. The conclusion: a (reasonable) mtV cannot be based purely on a RW Big Lock.

* So, started a pub_tool_lock.h and m_lock.c module (currently it only contains a mutex, based on atomic instructions and the futex syscall). It should at least be completed with a rwlock. Maybe also spin locks? One constraint for this module: it must be initialisable very early in the startup sequence (e.g. aspacemgr will need it; even DebugLog might need locks). m_lock.c is derived from (LGPL 2.1) NPTL glibc 2.13 code. Atomic instructions for x86 and amd64 were also copied from the NPTL pthread lib and slightly transformed: basically, the 'catomic*' kinds of actions were removed; those are optimisations that reference a global pthread NPTL variable to "skip" the LOCK prefix when there is only one thread.
!!! There is one asm statement in priv_atomic_x86.h giving a compile/assembly error. It was replaced by __sync_add_and_fetch temporarily; this should be fixed.
!!! The fair scheduler should be rewritten based on these atomic instructions rather than the __sync.. and __builtin ones, so that it will be available everywhere.
!!! Need to see whether all 32-bit platforms support an atomic increment of a 64-bit counter. It might either not be supported or cause problems like the x86 compile error above; then these will have to be emulated.
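As a concrete illustration of the counter pattern in question, here is a minimal sketch using the GCC __sync builtins mentioned above (the counter name is hypothetical, not an actual V counter):

```c
#include <stdint.h>

/* Hypothetical statistical counter updated from several threads.
   __sync_add_and_fetch compiles to a single LOCK'd instruction on
   x86/amd64; on some 32-bit targets a 64-bit atomic add may have to
   be emulated, which is the concern raised above. */
static uint64_t n_bytes_allocd = 0;

static void count_alloc(uint64_t sz)
{
    /* atomic read-modify-write: safe without holding the Big Lock */
    __sync_add_and_fetch(&n_bytes_allocd, sz);
}
```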
This might all be costly, so we might have to avoid some of the 64-bit statistical counters.

* The above efficient futex-based mutex was used to protect the translation table. With this, many race conditions disappeared. CPU usage is back to 400%.

* VG_(unknown_SP_update) is a performance-critical function. It contains a piece of non-thread-safe code for detecting stack changes. This code has been disabled, waiting for a proper solution. A possible approach is to use TLS: the current_stack global variable would become a 'per thread' variable. See # TLS below for the thread local storage trial.

* aspacemgr locking: a race condition was detected e.g. between VG_(am_find_nsegment) and a mmap syscall calling VG_(am_notify_client_mmap). => a mutex was added in aspacemgr, protecting at least the race between these two functions. However, it is far from clear that the aspacemgr is "safe" with that. Effectively, VG_(am_find_nsegment) returns a pointer to an element of the segment array. So, if another thread modifies that entry in the array after the first thread has got access to it, then what? It looks like VG_(am_find_nsegment) is used for "small durations", but it is still not clear that this is a "clean thread safe interface". It is unclear exactly what should be protected in the aspacemgr, and how. Protecting the whole public interface of aspacemgr could be done (taking care to avoid recursive locking, as pub_tool_lock.h does not detect that); however, it is unclear whether this would be good enough.

* When trying to protect aspacemgr globally, a deadlock was obtained due to recursive locking. => we might need to make "safe" locks (currently, m_lock.c does not verify non-recursive locking). Also, if we do plenty of fine-grained locking everywhere, deadlocks might be difficult to avoid. Lock-free algorithms might help to avoid these. See # Lock free algorithms below.

* Need to avoid "duplicate t-chaining".
Got an assert failure in VEX: the place to t-chain did not contain the expected bytes. This was created by two threads detecting at the same time that a t-chaining is to be done. Even if they then take the Write Lock one after the other, the 2nd thread would assert, as the expected bytes to replace for the t-chain are no longer there. So, in the t-chaining-protected critical section, there is now a verification that the t-chaining has not been done in the meantime.

This causes a really bizarre perf impact (search for "really bizarre" in # Performance measurements): running 150 iterations with 4 parallel threads takes less total CPU than serially, but running only 2 iterations is significantly slower in parallel than serially. The most probable explanation: some useless work is done, e.g. during translation, which consumes CPU. A thread detects that it must do a t-chaining, exits to the scheduler, takes the write Big Lock, detects that this t-chaining has already been done by another thread, and so has done all this work for nothing. This hypothesis is confirmed when running with -d, which produces plenty of traces like:

--22919:1:transtab host code 0x40537438F already chained => no chaining redone

But by which miracle are the parallel threads "recuperating" this CPU burned for nothing later on, when doing more iterations?

Whatever: a possible solution might be to have VEX provide an efficient way to check that the current ip of the thread has already been chained. Then an "already done t-chaining" would only cost an exit to the C scheduler and a few "VEX" if-s. This condition looks easy to implement in VEX. Is it safe? I believe yes: at least VEX can detect that the t-chaining has been done already (or rather, that the place is not t-chainable anymore). I suppose just a "!" on this condition would do it: if a place has to be chained, then it contains (for x86)
   BA <4 bytes value == disp_cp_chain_me_EXPECTED> FF D2
otherwise it contains something different.

??? Is it so clear that this will avoid burning CPU? When a thread exits to C, it will try to acquire the Big Lock to do the t-chaining, but will not be able to until all threads leave the JIT code (either because they too need a t-chaining, or because their QUANTUM expired). If all exits are due to QUANTUM expiry, our thread will get the Big Lock, do the t-chaining, and all is well. But if the other threads have to do the same t-chaining, they will all exit, see that the t-chaining is not done, and then queue to acquire the write Big Lock. One will get it and do the t-chaining; then all the others will one by one get the write lock and see that the t-chaining is done. Threads have quite some chance of reaching the same not-yet-done t-chaining if they all execute the same code.

??? Or isn't it rather that the translation table, and the rd/wr Big Lock protecting it, are just not scalable: whenever some threads are executing JITted code not yet translated and/or t-chained, there is a high level of contention, causing a lot of lock/unlock, consuming user and sys CPU?

# tt_fast "xor" approach
========================

(The tt_fast race condition has not been solved yet. Here is a suggested, very elegant, approach. But does it really work?)

tt_fast is used (read only) by the asm dispatcher. tt_fast is accessed for every 'dynamic' call/jump/... (the 'static' calls/jumps/... are resolved using translation chaining). tt_fast is modified either when a new translation is done or when an already-translated piece of code is found in the translation cache. In other words, even though tt_fast + the translation table are logically only read, they are also modified by a search.

http://www.cis.uab.edu/hyatt/hashing.html describes a technique which (I believe) would allow using tt_fast without locking and without atomic instructions, with very few changes in the asm dispatcher.
Basically, the idea of the paper applied to tt_fast would be: tt_fast is an array of pairs (G, H), where G is a guest code address and H is the address of the JIT translation of G. (G, H) is stored at a position in tt_fast obtained by "hashing" G (basically, shifting and masking some bits of G). If we have pairs (G1,H1) and (G2,H2) with the same hash value, and these pairs are inserted (or modified) in parallel, a third thread reading the table might get one of the following 4 pairs:
   (G1,H1)  (good)
   (G2,H2)  (good)
   (G1,H2)  (bad)
   (G2,H1)  (bad)

The idea is that the asm dispatcher detects the bad cases and just falls back to the normal search (exiting the asm dispatcher to do a full search in the translation table). To differentiate the good from the bad (without an ugly lock :), the idea is to store in tt_fast the pair (G xor H, H). Then, when searching for G1, one does the following:

   k = hash(G1)
   g_xor_h = tt_fast(k).g
   h = tt_fast(k).h
   if (g_xor_h xor h == G1)
      then h is H1
      else a full search in the translation table is needed

In the else case, either we have (G2,H2) (a good case, but not the one we want) or one of the (temporary) bad cases. A bad case will be "repaired" by the next full search that goes to the same hash bucket. I believe that even without a memory __sync_synchronize this should all work: either the pair is consistent and usable, or it is inconsistent and not usable.

# remaining race conditions
===========================

Currently, there are (for the none tool) at least the following race conditions to fix.

* Counters used for sanity checking or gdbserver polling, probably to be fixed either by TLS or by atomic instructions.
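(Aside: the tt_fast "xor" lookup described above can be sketched in standalone C as follows. The table geometry, the hash, and all names here are illustrative; the real table is VG_(tt_fast), with its own layout and hash.)

```c
#include <stdint.h>
#include <stddef.h>

#define TT_FAST_BITS 15
#define TT_FAST_SIZE (1 << TT_FAST_BITS)

/* Each entry stores (G xor H, H) rather than (G, H). */
typedef struct { uintptr_t g_xor_h; uintptr_t h; } FastEntry;
static FastEntry tt_fast[TT_FAST_SIZE];

/* "hashing" G: shifting and masking some bits of G. */
static size_t tt_hash(uintptr_t g) { return (g >> 2) & (TT_FAST_SIZE - 1); }

/* Writer: store the xor-tagged pair. */
static void tt_fast_insert(uintptr_t g, uintptr_t h)
{
    size_t k = tt_hash(g);
    tt_fast[k].g_xor_h = g ^ h;
    tt_fast[k].h       = h;
}

/* Reader: returns H for G, or 0 meaning "fall back to the full
   translation-table search".  A torn pair (G1,H2) or (G2,H1) fails the
   xor check and is rejected, which is the whole point of the scheme. */
static uintptr_t tt_fast_lookup(uintptr_t g)
{
    size_t k = tt_hash(g);
    uintptr_t h = tt_fast[k].h;
    if ((tt_fast[k].g_xor_h ^ h) == g)
        return h;
    return 0; /* inconsistent or missing entry: do the slow search */
}
```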
==23743== Location 0x284d2880 is 0 bytes inside local var "slow_check_interval"
==23743== Location 0x284d2884 is 0 bytes inside local var "next_slow_check_at"
==23743== Location 0x28670dac is 0 bytes inside local var "sanity_slow_count"
==23743== Location 0x28670e30 is 0 bytes inside local var "vgdb_next_poll"
==23743== Location 0x2866aafc is 0 bytes inside local var "busy"

* To be analysed: many races reported on none/tests/pth_once (and other similar tests). It looks like the clone syscall needs to lock somewhat more things. Maybe it needs to lock rw? Or maybe helgrind needs a happens-before relationship on the clone syscall (helgrind creates a happens-before relationship at the pthread level but not at the OS level)?

==29332== Location 0x284fd2c8 is 0 bytes inside local var "nsegments_used"
==29332== Location 0x28c70010 is 0 bytes inside vgPlain_threads[2].exitreason,
==29332== Location 0x28c70020 is 0 bytes inside vgPlain_threads[2].arch.vex.host_EvC_FAILADDR,
==29332== Location 0x28c70028 is 0 bytes inside vgPlain_threads[2].arch.vex.host_EvC_COUNTER,
==29332== Location 0x28c71b00 is 0 bytes inside vgPlain_threads[2].sig_mask.sig[0],
==29332== Location 0x28c71b44 is 0 bytes inside vgPlain_threads[2].os_state.threadgroup,
.... too many races to list them all here ...

* The VG_(tt_fast) xor technique is still to be implemented:

==23743== Location 0x28fe1350 is 0 bytes inside vgPlain_tt_fast[*].guest,
==23743== Location 0x28fe1358 is 0 bytes inside vgPlain_tt_fast[*].host,

# debuglog
==========

Output can currently be done by multiple threads in parallel (e.g. in case of a crash/inconsistency, each thread might detect this and report the state of the scheduler at crash time). This causes the output to be mixed/unreadable. We could maybe add a mutex so that each DebugLog is a single syscall, with a flush for each write. But some messages (e.g. a stack trace) are made of several DebugLog calls?

# testing
=========

At this stage, very little testing has been done.
The "testing" (ha ha) of the prototype was done mostly using parallel_sleepers (a slightly modified version of gdbserver_tests/sleepers.c, removing the setaffinity to a single CPU) or big_parallel_sleepers (sleepers.c more heavily modified, to have multiple threads executing the code of perf/bigcode1 in parallel). These two test programs have command line arguments telling the 4 threads to do a mix of CPU burning and syscalls (sleeping for some milliseconds). When told to only burn CPU, for a perfectly parallel V, the 4 threads should consume 400% of CPU (on a >= 4 CPU system).

A whole additional bunch of tests will be needed before we can have reasonable trust in the race-free state of a V. For example, we will need test programs doing mmap/munmap and similar with multiple threads in parallel.

9 June: the none tests run successfully on f12/x86 and deb6/amd64 (this does not mean much: we surely still have race conditions).

12 June: ran the none tests in an outer helgrind / inner none config. A bunch of race conditions remain to be analysed (a lot seem caused by the clone syscall, which might not be understood by helgrind). Also, the outer.log file contains some null bytes. Need to see why these are produced (it might just be the outer/inner setup: to be verified with an untouched inner trunk).

# TLS
=====

It is highly probable that mtV will need an efficient way to retrieve "per thread" values. This looks needed e.g. for VG_(unknown_SP_update), but it will very probably also be needed by the tool runtime code (e.g. to retrieve the current thread calling the tool runtime code, or to retrieve per-thread tool data structures).

There are multiple ways to manage TLS. For example, a TLS variable defined in a shared lib is managed differently from one in the statically linked part of the executable. The runtime requirements are limited for what is called the ELF 'local-exec' model.
Basically, the only need is to have for each thread (including the main thread) a static zone of memory big enough for all the TLS variable in the statically linked part of the executable. TLS variables in shared libs are not supported (it is suspected that shared libs specified at link time might work but not tested. In any case, V does not use such linked shared libs). This static zone of memory has been added in the VG_(threads) array. For the main thread, very early in the V startup sequence, the thread register is modified to point at this zone. There are small differences depending on the platforms. All these differences are hidden in m_tls.c. For the non main threads, the clone syscall must specify the TLS area of the thread. Some operations might be needed before and after the clone. Again, the specifics of these operations is inside m_tls.c. Currently, m_tls.c is working on linux x86/amd64/ppc32/ppc64. It is unclear how to implement TLS on MacOS (missing documentation) Arm (Android or Linux) is also still to be looked at. Android TLS seems to be emulated (at least with current gcc NDK) so TLS will not be ultra fast. s390x also still to be looked at (probably easy). * TLS storage is not necessarily properly/efficiently supported in all environment, depending e.g. on the OS version and/or of the gcc version. Assuming we cannot make the __thread keyword properly supported in all the environments, here is a sketch of how such TLS could be replaced by something less comfortable but still providing the same 'per thread variable'. The assumption is that inside the CORE code, we (most of the time) have either the tid or the thread state pointer. With the thread state pointer, one can efficiently reference an offset in the thread state. So, thread local variables can be defined by an offset (obtained at module init time). Then the address of a local thread variable is obtained by e.g. 
  int *my_thread_specific_counter
     = GETTLS(tst, my_thread_specific_counter_offset);

  and then *my_thread_specific_counter can be safely used as a pointer
  to a per-thread variable. The JIT code must then pass the thread
  state to all the C helpers which need access to TLS. The thread
  state pointer is normally efficiently available in a register.

  This is less comfortable than __thread attributes, and will oblige
  us to pass the thread state as an argument to many C helpers. But if
  TLS storage is really needed, this might be better than nothing.

# outer helgrind difficulties
=============================
Running an outer helgrind on an inner (parallel) V is a very easy way
to find (some?) race conditions. However, several traps were
encountered, and are documented here.

1. Encountered a crash in the outer helgrind when adding some hg
   client requests in the inner. This crash seems to be linked with
   the stack of a new thread (or the main thread?) not being
   registered early enough: if the outer helgrind has to do a stack
   trace at this early stage, the stack trace code crashes. This has
   been bypassed by the following patch in the outer V. It is not very
   clear what to do; a proper (earlier) registration of the stack
   might solve the problem. Maybe the hack below is not such a
   horrible hack in the end: if stack_limits cannot find the stack of
   the stack pointer, it looks unwise to allow unwinding without
   taking reasonable stack limits into account.

===================================================================
--- coregrind/m_stacktrace.c (revision 12593)
+++ coregrind/m_stacktrace.c (working copy)
@@ -801,6 +801,8 @@
    /* See if we can get a better idea of the stack limits */
    VG_(stack_limits)( (Addr)startRegs.r_sp,
                       &stack_lowest_word, &stack_highest_word );
+   // Hack to avoid crash in outer Valgrind:
+   if (stack_lowest_word == 0) stack_highest_word = 0; // Hack
    /* Take into account the first_ip_delta. */
    startRegs.r_pc += (Long)(Word)first_ip_delta;

2.
   Non-thread-safe aspects of the guest process are detected by the
   outer helgrind. However, they are detected in the JIT code
   generated by the inner V, so the outer V does not understand the
   origin of these race conditions. This means the suppressions (e.g.
   for the normal libc race conditions) do not work. Adding
   --read-var-info=yes helped to identify these races, as the outer V
   was then able to point at global variables in libc. The assumption
   taken was that any stack trace that the outer V cannot properly
   associate with some inner code is considered an irrelevant race
   condition.

3. Mutex order: the Big Lock was transformed into a rwlock using two
   "lower level" locks (more exactly, token passing). I believe the
   rwlock code is correct, but helgrind still detects problems in it,
   e.g. because one thread releases a lock that was taken by another
   thread (not abnormal, as the low level "locks" are in fact tokens).
   The helgrind marking code in the low level locks was removed; only
   the Big Lock itself was marked with RW lock annotations.

4. Helgrind does not understand that the clone syscall introduces a
   happens-before relationship between the actions in the parent
   thread before the clone and the actions in the child thread after
   the clone. A trial was done to mark this relationship, but it did
   not help much (probably because the race condition is detected in
   the asm just after the clone syscall; the HG annotations cannot be
   put in asm code, and so might not be placed precisely enough to
   stop hg from reporting the error).

# how to schedule the changes
=============================
Obtaining an mtV might imply some significant changes in the code. It
might be more difficult to implement on MacOS or on Android than on
the Linux platforms. Making an mtV which works better for multi
threaded apps might make the (maybe more important?) case of a
non-threaded application slower. Also, each tool will have to be
updated to be really multi-threaded.
So, here are a bunch of questions:

* Should we try to have a "single threaded" core library and a
  "multi-threaded" one? Each tool would link with the appropriate lib.
  One might also imagine that a tool could work both in a single
  threaded setup and in a multi-threaded one. This all might make the
  transition and/or testing easier/less risky.

* Alternatively, one could imagine that the core would switch from
  "null" locking primitives to real locking primitives just before the
  first clone syscall is done. E.g. we would have a struct containing
  the addresses of the lock/unlock primitives to call. Initially, they
  would point at empty procedures. When a first clone syscall is about
  to be executed, the struct would be set to point at real locking
  primitives. Depending on the tool, one might then have a Bool
  singleV (or mtV) which serialises the threads for a not yet (or not
  to be) parallelised tool.

  The scheduler policy (currently --fair-sched) could be replaced by
  --sched=[a list of policies to try]. Depending on the tool and the
  OS/platform support, the first working policy would be selected. We
  would then have something like: --sched=parallel,fair,generic takes
  the first working policy in the order p/f/g, while --sched=fair
  fails if the fair scheduler is not available. The tool would have to
  agree on the policy, so a tool not yet mtV-ready would only agree on
  fair and generic, and the core would then ensure that the calls into
  the tool layer are serialised. This allows a gradual migration of
  (some of) the tools to an mtV.

  Note: this looks to be a good balance between having 2 different
  coregrind libraries (mtV and serial) and obliging all tools either
  to be migrated to mtV all at once or to have a lock/unlock for each
  helper call.

# parallelising memcheck
========================
The prototype shows that we can relatively easily parallelise the none
tool, and have good scalability. (There is however very probably poor
scalability when translating and t-chaining.
It is unclear if this is a real life problem, or only a problem for
the big_parallel_sleepers test program.)

We need at least one useful tool made parallel to be convinced that
mtV can be useful. memcheck (the most used tool) is a very good
candidate for that.

A simple approach is to have a "big tool lock" (BTL), similar to the
core BL (possibly the BTL could be a rwlock). Whenever a C helper is
called, it takes the BTL. This then makes the tool code itself
thread-safe. It is assumed (but is this really the case?) that the
tool instrument function generates JIT code which is itself
thread-safe; in other words, only the C helpers might need protection.

Note: this is not necessarily the case. E.g. if memcheck generates IR
which reads/writes a global data structure directly, then the JIT code
itself is not thread-safe. Such non-thread-safe code was OK with a big
lock allowing only one thread to execute JITted code at a time, but
that approach is dead for an mtV tool. If this is the case for
memcheck, then it looks like the IR will have to be changed.

The BTL will also automatically protect the data structures of the
core which are only used by the tool (for example, the error list
mgr). However, there are many core data structures which can be used
both directly by the core and (indirectly or directly) by the tool
runtime code. For example, VG_(malloc) can be called by the core code
and by the tool runtime code; the aspacemgr data structure is another
similar example. Probably we need specific locks for these core/tool
shared data structures?

However, using a BTL has a big performance drawback (at least for most
tools, which have C helpers called very frequently): acquiring and
releasing the BTL for each C helper will very probably be a perf
disaster. Even without contention, each C helper call will imply (for
a simple mutex) two atomic instructions.
But with such a simple approach, it is probable that there will be
very high contention on the BTL (are there statistics about the ratio
between CPU time spent in core versus JITted code versus tool
runtime?). In case of contention, the price for each C helper call
will be a few atomic instructions and two syscalls.

So, a BTL might be useful, but for operations done very frequently
(e.g. the V and A bits related C helpers), something better will be
needed. Options to look at:

1. Have "smaller grain" locking for some data structures: e.g. rather
   than using the BTL in some C helpers, have a different lock for
   each 64 KB of memory tracked by the memcheck memory map. This will
   reduce the contention, assuming that most threads do not often work
   with the same 64 KB of memory.

2. Use lock-less algorithms. See below, # lock free algorithms.

...
Not much done yet on the aspect of parallelising memcheck. Need to
find relevant ways to attack the problem.
...

# lock free algorithms
======================
Searched the internet to try to find lock free algorithms or code.
http://www.concurrencykit.org seems an attractive candidate, as it
provides:
* predefined atomic primitives for a bunch of architectures;
* a fallback to the gcc builtins if an architecture is not supported
  natively;
* multiple types of locks;
* relatively good documentation;
* several lock free data structures that might be useful;
* no dependency on pthread, libc, and similar;
* very little usage of standard headers.

Contact was made with the main developer, who was positive about doing
the changes needed to allow usage of ck inside Valgrind.

# Performance measurements
==========================
Mostly done on gcc20, amd64. Most recent measurements at the top.
time ./vg-in-place --tool=none ../small_programs/big_parallel_sleepers 150 1 300000 BSBSBSBS

9 June (Prototype V2 with futex rwlock, rather than 3 low level sema)
  Prototype V2:  real 2m49.817s   user 10m8.686s   sys 0m6.332s
  Trunk:         real 10m22.792s  user 10m20.171s  sys 0m5.212s

9 June, first measurements with an *UNPROTECTED* memcheck:
  Prototype V2:  real 9m23.239s   user 35m53.615s  sys 0m3.056s
  Trunk:         real 36m31.915s  user 36m26.805s  sys 0m7.952s

7 June
  Prototype V2:  real 2m52.839s   user 10m17.147s  sys 0m8.469s
  Trunk:         real 10m40.479s  user 10m36.788s  sys 0m6.808s

7 June, same test but replacing 150 by 2:
time ./vg-in-place --tool=none ../small_programs/big_parallel_sleepers 2 1 300000 BSBSBSBS
?????? really bizarre. See t-chaining above.
  Prototype V2:  real 0m23.689s   user 0m25.042s   sys 0m6.428s
  Trunk:         real 0m13.583s   user 0m13.477s   sys 0m0.124s

6 June
  Prototype V1:  real 3m7.894s    user 10m49.673s  sys 0m10.561s
  Trunk:         real 11m26.408s  user 11m22.483s  sys 0m7.600s
------------------------------------------------------------------------
|
|
From: <sv...@va...> - 2012-06-13 11:12:56
|
sewardj 2012-06-13 12:12:49 +0100 (Wed, 13 Jun 2012)
New Revision: 12635
Log:
Update with recent notes.
Modified files:
trunk/docs/internals/avx-notes.txt
Modified: trunk/docs/internals/avx-notes.txt (+19 -1)
===================================================================
--- trunk/docs/internals/avx-notes.txt 2012-06-13 12:12:06 +01:00 (rev 12634)
+++ trunk/docs/internals/avx-notes.txt 2012-06-13 12:12:49 +01:00 (rev 12635)
@@ -2,4 +2,22 @@
Cleanups
~~~~~~~~
-(none at present)
+* Important: iropt: Make sure XorV128 and XorV256 of identical
+ args gets folded to zero
+
+* add more iteration in test cases
+
+* math_UNPCKxPS_128: use xIsH ? InterleaveHI32x4 : InterleaveLO32x
+ I think this is safe w.r.t. the backend
+
+* math_UNPCKxPD_128: ditto
+
+* math_UNPCKxPD_256: split into 128 bit chunks and use math_UNPCKxPD_128
+
+
+Known limitations
+~~~~~~~~~~~~~~~~~
+
+* for many (all?) of the vector shift-by-imm cases (pre-existing as
+ well as AVX), out of range shifts are not handled properly and only
+ work I think because the host happens to have the same semantics.
|
|
From: <sv...@va...> - 2012-06-13 11:12:16
|
sewardj 2012-06-13 12:12:06 +0100 (Wed, 13 Jun 2012)
New Revision: 12634
Log:
Change the V output file name from out-V to out-VAL.
Modified files:
trunk/auxprogs/gsl16test
Modified: trunk/auxprogs/gsl16test (+6 -6)
===================================================================
--- trunk/auxprogs/gsl16test 2012-06-13 12:11:10 +01:00 (rev 12633)
+++ trunk/auxprogs/gsl16test 2012-06-13 12:12:06 +01:00 (rev 12634)
@@ -100,19 +100,19 @@
echo " ... done"
echo -n " Collecting valgrinded results "
-rm -f out-V
-(cd gsl-1.6-patched && for f in $ALL_TESTS ; do eval $GSL_VV -v --trace-children=yes "$GSL_VFLAGS" ./$f ; done) &> out-V
+rm -f out-VAL
+(cd gsl-1.6-patched && for f in $ALL_TESTS ; do eval $GSL_VV -v --trace-children=yes "$GSL_VFLAGS" ./$f ; done) &> out-VAL
echo " ... done"
echo -n " Native fails: " && (grep FAIL: out-REF | wc -l)
echo -n " Native passes: " && (grep PASS: out-REF | wc -l)
-echo -n " Valgrind fails: " && (grep FAIL: out-V | wc -l)
-echo -n " Valgrind passes: " && (grep PASS: out-V | wc -l)
+echo -n " Valgrind fails: " && (grep FAIL: out-VAL | wc -l)
+echo -n " Valgrind passes: " && (grep PASS: out-VAL | wc -l)
(echo -n " Native fails: " && (grep FAIL: out-REF | wc -l)) >> summary.txt
(echo -n " Native passes: " && (grep PASS: out-REF | wc -l)) >> summary.txt
-(echo -n " Valgrind fails: " && (grep FAIL: out-V | wc -l)) >> summary.txt
-(echo -n " Valgrind passes: " && (grep PASS: out-V | wc -l)) >> summary.txt
+(echo -n " Valgrind fails: " && (grep FAIL: out-VAL | wc -l)) >> summary.txt
+(echo -n " Valgrind passes: " && (grep PASS: out-VAL | wc -l)) >> summary.txt
echo >> summary.txt
echo
|
|
From: <sv...@va...> - 2012-06-13 11:11:24
|
sewardj 2012-06-13 12:11:10 +0100 (Wed, 13 Jun 2012)
New Revision: 12633
Log:
Update.
Modified files:
trunk/none/tests/amd64/avx-1.c
Modified: trunk/none/tests/amd64/avx-1.c (+53 -0)
===================================================================
--- trunk/none/tests/amd64/avx-1.c 2012-06-12 16:00:00 +01:00 (rev 12632)
+++ trunk/none/tests/amd64/avx-1.c 2012-06-13 12:11:10 +01:00 (rev 12633)
@@ -888,7 +888,49 @@
"vpermilpd $0x3, %%xmm6, %%xmm8",
"vpermilpd $0x2, (%%rax), %%xmm8")
+GEN_test_RandM(VUNPCKLPD_256,
+ "vunpcklpd %%ymm6, %%ymm8, %%ymm7",
+ "vunpcklpd (%%rax), %%ymm8, %%ymm7")
+GEN_test_RandM(VUNPCKHPD_256,
+ "vunpckhpd %%ymm6, %%ymm8, %%ymm7",
+ "vunpckhpd (%%rax), %%ymm8, %%ymm7")
+
+GEN_test_RandM(VSHUFPS_0x39_256,
+ "vshufps $0x39, %%ymm9, %%ymm8, %%ymm7",
+ "vshufps $0xC6, (%%rax), %%ymm8, %%ymm7")
+
+GEN_test_RandM(VUNPCKLPS_256,
+ "vunpcklps %%ymm6, %%ymm8, %%ymm7",
+ "vunpcklps (%%rax), %%ymm8, %%ymm7")
+
+GEN_test_RandM(VUNPCKHPS_256,
+ "vunpckhps %%ymm6, %%ymm8, %%ymm7",
+ "vunpckhps (%%rax), %%ymm8, %%ymm7")
+
+GEN_test_RandM(VXORPD_256,
+ "vxorpd %%ymm6, %%ymm8, %%ymm7",
+ "vxorpd (%%rax), %%ymm8, %%ymm7")
+
+GEN_test_Monly(VBROADCASTSD_256,
+ "vbroadcastsd (%%rax), %%ymm8")
+
+GEN_test_RandM(VCMPPD_128_0x4,
+ "vcmppd $4, %%xmm6, %%xmm8, %%xmm7",
+ "vcmppd $4, (%%rax), %%xmm8, %%xmm7")
+
+GEN_test_RandM(VCVTDQ2PD_128,
+ "vcvtdq2pd %%xmm6, %%xmm8",
+ "vcvtdq2pd (%%rax), %%xmm8")
+
+GEN_test_RandM(VDIVPD_128,
+ "vdivpd %%xmm6, %%xmm8, %%xmm7",
+ "vdivpd (%%rax), %%xmm8, %%xmm7")
+
+GEN_test_RandM(VANDPD_256,
+ "vandpd %%ymm6, %%ymm8, %%ymm7",
+ "vandpd (%%rax), %%ymm8, %%ymm7")
+
/* Comment duplicated above, for convenient reference:
Allowed operands in test insns:
Reg form: %ymm6, %ymm7, %ymm8, %ymm9 and %r14.
@@ -1100,5 +1142,16 @@
test_VPERMILPD_256_0x5();
test_VPERMILPD_128_0x0();
test_VPERMILPD_128_0x3();
+ test_VUNPCKLPD_256();
+ test_VUNPCKHPD_256();
+ test_VSHUFPS_0x39_256();
+ test_VUNPCKLPS_256();
+ test_VUNPCKHPS_256();
+ test_VXORPD_256();
+ test_VBROADCASTSD_256();
+ test_VCMPPD_128_0x4();
+ test_VCVTDQ2PD_128();
+ test_VDIVPD_128();
+ test_VANDPD_256();
return 0;
}
|
|
From: <sv...@va...> - 2012-06-13 11:10:41
|
sewardj 2012-06-13 12:10:20 +0100 (Wed, 13 Jun 2012)
New Revision: 2381
Log:
Implement even more instructions generated by "gcc-4.7.0 -mavx -O3".
This is the first point at which coverage for -O3 generated code could
be construed as "somewhat usable".
Modified files:
trunk/priv/guest_amd64_toIR.c
trunk/priv/host_amd64_isel.c
trunk/priv/ir_defs.c
trunk/pub/libvex_ir.h
Modified: trunk/pub/libvex_ir.h (+9 -2)
===================================================================
--- trunk/pub/libvex_ir.h 2012-06-12 15:59:17 +01:00 (rev 2380)
+++ trunk/pub/libvex_ir.h 2012-06-13 12:10:20 +01:00 (rev 2381)
@@ -1423,14 +1423,21 @@
/* ------------------ 256-bit SIMD Integer. ------------------ */
/* Pack/unpack */
- Iop_V256to64_0, // V256 -> I64, extract least sigificant lane
+ Iop_V256to64_0, // V256 -> I64, extract least significant lane
Iop_V256to64_1,
Iop_V256to64_2,
- Iop_V256to64_3, // V256 -> I64, extract most sigificant lane
+ Iop_V256to64_3, // V256 -> I64, extract most significant lane
Iop_64x4toV256, // (I64,I64,I64,I64)->V256
// first arg is most significant lane
+ Iop_V256toV128_0, // V256 -> V128, less significant lane
+ Iop_V256toV128_1, // V256 -> V128, more significant lane
+ Iop_V128HLtoV256, // (V128,V128)->V256, first arg is most signif
+
+ Iop_AndV256,
+ Iop_XorV256,
+
/* ------------------ 256-bit SIMD FP. ------------------ */
Iop_Add64Fx4,
Iop_Sub64Fx4,
Modified: trunk/priv/host_amd64_isel.c (+31 -0)
===================================================================
--- trunk/priv/host_amd64_isel.c 2012-06-12 15:59:17 +01:00 (rev 2380)
+++ trunk/priv/host_amd64_isel.c 2012-06-13 12:10:20 +01:00 (rev 2381)
@@ -2994,6 +2994,13 @@
return dst;
}
+ case Iop_V256toV128_0:
+ case Iop_V256toV128_1: {
+ HReg vHi, vLo;
+ iselDVecExpr(&vHi, &vLo, env, e->Iex.Unop.arg);
+ return (e->Iex.Unop.op == Iop_V256toV128_1) ? vHi : vLo;
+ }
+
default:
break;
} /* switch (e->Iex.Unop.op) */
@@ -3467,6 +3474,30 @@
return;
}
+ case Iop_AndV256: op = Asse_AND; goto do_SseReRg;
+ case Iop_XorV256: op = Asse_XOR; goto do_SseReRg;
+ do_SseReRg:
+ {
+ HReg argLhi, argLlo, argRhi, argRlo;
+ iselDVecExpr(&argLhi, &argLlo, env, e->Iex.Binop.arg1);
+ iselDVecExpr(&argRhi, &argRlo, env, e->Iex.Binop.arg2);
+ HReg dstHi = newVRegV(env);
+ HReg dstLo = newVRegV(env);
+ addInstr(env, mk_vMOVsd_RR(argLhi, dstHi));
+ addInstr(env, mk_vMOVsd_RR(argLlo, dstLo));
+ addInstr(env, AMD64Instr_SseReRg(op, argRhi, dstHi));
+ addInstr(env, AMD64Instr_SseReRg(op, argRlo, dstLo));
+ *rHi = dstHi;
+ *rLo = dstLo;
+ return;
+ }
+
+ case Iop_V128HLtoV256: {
+ *rHi = iselVecExpr(env, e->Iex.Binop.arg1);
+ *rLo = iselVecExpr(env, e->Iex.Binop.arg2);
+ return;
+ }
+
default:
break;
} /* switch (e->Iex.Binop.op) */
Modified: trunk/priv/ir_defs.c (+13 -0)
===================================================================
--- trunk/priv/ir_defs.c 2012-06-12 15:59:17 +01:00 (rev 2380)
+++ trunk/priv/ir_defs.c 2012-06-13 12:10:20 +01:00 (rev 2381)
@@ -982,6 +982,9 @@
case Iop_V256to64_2: vex_printf("V256to64_2"); return;
case Iop_V256to64_3: vex_printf("V256to64_3"); return;
case Iop_64x4toV256: vex_printf("64x4toV256"); return;
+ case Iop_V256toV128_0: vex_printf("V256toV128_0"); return;
+ case Iop_V256toV128_1: vex_printf("V256toV128_1"); return;
+ case Iop_V128HLtoV256: vex_printf("V128HLtoV256"); return;
case Iop_DPBtoBCD: vex_printf("DPBtoBCD"); return;
case Iop_BCDtoDPB: vex_printf("BCDtoDPB"); return;
case Iop_Add64Fx4: vex_printf("Add64Fx4"); return;
@@ -992,6 +995,8 @@
case Iop_Sub32Fx8: vex_printf("Sub32Fx8"); return;
case Iop_Mul32Fx8: vex_printf("Mul32Fx8"); return;
case Iop_Div32Fx8: vex_printf("Div32Fx8"); return;
+ case Iop_AndV256: vex_printf("AndV256"); return;
+ case Iop_XorV256: vex_printf("XorV256"); return;
default: vpanic("ppIROp(1)");
}
@@ -2799,8 +2804,16 @@
case Iop_Sub32Fx8:
case Iop_Mul32Fx8:
case Iop_Div32Fx8:
+ case Iop_AndV256:
+ case Iop_XorV256:
BINARY(Ity_V256,Ity_V256, Ity_V256);
+ case Iop_V256toV128_1: case Iop_V256toV128_0:
+ UNARY(Ity_V256, Ity_V128);
+
+ case Iop_V128HLtoV256:
+ BINARY(Ity_V128,Ity_V128, Ity_V256);
+
default:
ppIROp(op);
vpanic("typeOfPrimop");
Modified: trunk/priv/guest_amd64_toIR.c (+328 -93)
===================================================================
--- trunk/priv/guest_amd64_toIR.c 2012-06-12 15:59:17 +01:00 (rev 2380)
+++ trunk/priv/guest_amd64_toIR.c 2012-06-13 12:10:20 +01:00 (rev 2381)
@@ -8845,12 +8845,12 @@
unop(Iop_32Uto64,sseround) ) );
}
-/* Break a 128-bit value up into four 32-bit ints. */
+/* Break a V128-bit value up into four 32-bit ints. */
-static void breakup128to32s ( IRTemp t128,
- /*OUTs*/
- IRTemp* t3, IRTemp* t2,
- IRTemp* t1, IRTemp* t0 )
+static void breakupV128to32s ( IRTemp t128,
+ /*OUTs*/
+ IRTemp* t3, IRTemp* t2,
+ IRTemp* t1, IRTemp* t0 )
{
IRTemp hi64 = newTemp(Ity_I64);
IRTemp lo64 = newTemp(Ity_I64);
@@ -8872,10 +8872,10 @@
assign( *t3, unop(Iop_64HIto32, mkexpr(hi64)) );
}
-/* Construct a 128-bit value from four 32-bit ints. */
+/* Construct a V128-bit value from four 32-bit ints. */
-static IRExpr* mk128from32s ( IRTemp t3, IRTemp t2,
- IRTemp t1, IRTemp t0 )
+static IRExpr* mkV128from32s ( IRTemp t3, IRTemp t2,
+ IRTemp t1, IRTemp t0 )
{
return
binop( Iop_64HLtoV128,
@@ -8923,7 +8923,28 @@
);
}
+/* Break a V256-bit value up into four 64-bit ints. */
+static void breakupV256to64s ( IRTemp t256,
+ /*OUTs*/
+ IRTemp* t3, IRTemp* t2,
+ IRTemp* t1, IRTemp* t0 )
+{
+ vassert(t0 && *t0 == IRTemp_INVALID);
+ vassert(t1 && *t1 == IRTemp_INVALID);
+ vassert(t2 && *t2 == IRTemp_INVALID);
+ vassert(t3 && *t3 == IRTemp_INVALID);
+ *t0 = newTemp(Ity_I64);
+ *t1 = newTemp(Ity_I64);
+ *t2 = newTemp(Ity_I64);
+ *t3 = newTemp(Ity_I64);
+ assign( *t0, unop(Iop_V256to64_0, mkexpr(t256)) );
+ assign( *t1, unop(Iop_V256to64_1, mkexpr(t256)) );
+ assign( *t2, unop(Iop_V256to64_2, mkexpr(t256)) );
+ assign( *t3, unop(Iop_V256to64_3, mkexpr(t256)) );
+}
+
+
/* Helper for the SSSE3 (not SSE3) PMULHRSW insns. Given two 64-bit
values (aa,bb), computes, for each of the 4 16-bit lanes:
@@ -9385,13 +9406,13 @@
IRTemp s3, s2, s1, s0;
s3 = s2 = s1 = s0 = IRTemp_INVALID;
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
# define SEL(n) ((n)==0 ? s0 : ((n)==1 ? s1 : ((n)==2 ? s2 : s3)))
IRTemp dV = newTemp(Ity_V128);
assign(dV,
- mk128from32s( SEL((order>>6)&3), SEL((order>>4)&3),
- SEL((order>>2)&3), SEL((order>>0)&3) )
+ mkV128from32s( SEL((order>>6)&3), SEL((order>>4)&3),
+ SEL((order>>2)&3), SEL((order>>0)&3) )
);
# undef SEL
@@ -9704,7 +9725,7 @@
assign( rmode, r2zero ? mkU32((UInt)Irrm_ZERO)
: get_sse_roundingmode() );
t0 = t1 = t2 = t3 = IRTemp_INVALID;
- breakup128to32s( argV, &t3, &t2, &t1, &t0 );
+ breakupV128to32s( argV, &t3, &t2, &t1, &t0 );
/* This is less than ideal. If it turns out to be a performance
bottleneck it can be improved. */
# define CVT(_t) \
@@ -9750,17 +9771,18 @@
}
-/* FIXME: why not just use InterleaveLO / InterleaveHI ?? */
+/* FIXME: why not just use InterleaveLO / InterleaveHI? I think the
+ relevant ops are "xIsH ? InterleaveHI32x4 : InterleaveLO32x4". */
/* Does the maths for 128 bit versions of UNPCKLPS and UNPCKHPS */
static IRTemp math_UNPCKxPS_128 ( IRTemp sV, IRTemp dV, Bool xIsH )
{
IRTemp s3, s2, s1, s0, d3, d2, d1, d0;
s3 = s2 = s1 = s0 = d3 = d2 = d1 = d0 = IRTemp_INVALID;
- breakup128to32s( dV, &d3, &d2, &d1, &d0 );
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( dV, &d3, &d2, &d1, &d0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
IRTemp res = newTemp(Ity_V128);
- assign(res, xIsH ? mk128from32s( s3, d3, s2, d2 )
- : mk128from32s( s1, d1, s0, d0 ));
+ assign(res, xIsH ? mkV128from32s( s3, d3, s2, d2 )
+ : mkV128from32s( s1, d1, s0, d0 ));
return res;
}
@@ -9784,35 +9806,100 @@
}
-static IRTemp math_SHUFPS ( IRTemp sV, IRTemp dV, UInt imm8 )
+/* Does the maths for 256 bit versions of UNPCKLPD and UNPCKHPD.
+ Doesn't seem like this fits in either of the Iop_Interleave{LO,HI}
+ or the Iop_Cat{Odd,Even}Lanes idioms, hence just do it the stupid
+ way. */
+static IRTemp math_UNPCKxPD_256 ( IRTemp sV, IRTemp dV, Bool xIsH )
{
IRTemp s3, s2, s1, s0, d3, d2, d1, d0;
s3 = s2 = s1 = s0 = d3 = d2 = d1 = d0 = IRTemp_INVALID;
+ breakupV256to64s( dV, &d3, &d2, &d1, &d0 );
+ breakupV256to64s( sV, &s3, &s2, &s1, &s0 );
+ IRTemp res = newTemp(Ity_V256);
+ assign(res, xIsH
+ ? IRExpr_Qop(Iop_64x4toV256, mkexpr(s3), mkexpr(d3),
+ mkexpr(s1), mkexpr(d1))
+ : IRExpr_Qop(Iop_64x4toV256, mkexpr(s2), mkexpr(d2),
+ mkexpr(s0), mkexpr(d0)));
+ return res;
+}
+
+
+/* FIXME: this is really bad. Surely can do something better here?
+ One observation is that the steering in the upper and lower 128 bit
+ halves is the same as with math_UNPCKxPS_128, so we simply split
+ into two halves, and use that. Consequently any improvement in
+ math_UNPCKxPS_128 (probably, to use interleave-style primops)
+ benefits this too. */
+static IRTemp math_UNPCKxPS_256 ( IRTemp sV, IRTemp dV, Bool xIsH )
+{
+ IRTemp sVhi = newTemp(Ity_V128);
+ IRTemp sVlo = newTemp(Ity_V128);
+ IRTemp dVhi = newTemp(Ity_V128);
+ IRTemp dVlo = newTemp(Ity_V128);
+ assign(sVhi, unop(Iop_V256toV128_1, mkexpr(sV)));
+ assign(sVlo, unop(Iop_V256toV128_0, mkexpr(sV)));
+ assign(dVhi, unop(Iop_V256toV128_1, mkexpr(dV)));
+ assign(dVlo, unop(Iop_V256toV128_0, mkexpr(dV)));
+ IRTemp rVhi = math_UNPCKxPS_128(sVhi, dVhi, xIsH);
+ IRTemp rVlo = math_UNPCKxPS_128(sVlo, dVlo, xIsH);
+ IRTemp rV = newTemp(Ity_V256);
+ assign(rV, binop(Iop_V128HLtoV256, mkexpr(rVhi), mkexpr(rVlo)));
+ return rV;
+}
+
+
+static IRTemp math_SHUFPS_128 ( IRTemp sV, IRTemp dV, UInt imm8 )
+{
+ IRTemp s3, s2, s1, s0, d3, d2, d1, d0;
+ s3 = s2 = s1 = s0 = d3 = d2 = d1 = d0 = IRTemp_INVALID;
vassert(imm8 < 256);
- breakup128to32s( dV, &d3, &d2, &d1, &d0 );
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( dV, &d3, &d2, &d1, &d0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
# define SELD(n) ((n)==0 ? d0 : ((n)==1 ? d1 : ((n)==2 ? d2 : d3)))
# define SELS(n) ((n)==0 ? s0 : ((n)==1 ? s1 : ((n)==2 ? s2 : s3)))
IRTemp res = newTemp(Ity_V128);
assign(res,
- mk128from32s( SELS((imm8>>6)&3), SELS((imm8>>4)&3),
- SELD((imm8>>2)&3), SELD((imm8>>0)&3) ) );
+ mkV128from32s( SELS((imm8>>6)&3), SELS((imm8>>4)&3),
+ SELD((imm8>>2)&3), SELD((imm8>>0)&3) ) );
# undef SELD
# undef SELS
return res;
}
+/* 256-bit SHUFPS appears to steer each of the 128-bit halves
+ identically. Hence do the clueless thing and use math_SHUFPS_128
+ twice. */
+static IRTemp math_SHUFPS_256 ( IRTemp sV, IRTemp dV, UInt imm8 )
+{
+ IRTemp sVhi = newTemp(Ity_V128);
+ IRTemp sVlo = newTemp(Ity_V128);
+ IRTemp dVhi = newTemp(Ity_V128);
+ IRTemp dVlo = newTemp(Ity_V128);
+ assign(sVhi, unop(Iop_V256toV128_1, mkexpr(sV)));
+ assign(sVlo, unop(Iop_V256toV128_0, mkexpr(sV)));
+ assign(dVhi, unop(Iop_V256toV128_1, mkexpr(dV)));
+ assign(dVlo, unop(Iop_V256toV128_0, mkexpr(dV)));
+ IRTemp rVhi = math_SHUFPS_128(sVhi, dVhi, imm8);
+ IRTemp rVlo = math_SHUFPS_128(sVlo, dVlo, imm8);
+ IRTemp rV = newTemp(Ity_V256);
+ assign(rV, binop(Iop_V128HLtoV256, mkexpr(rVhi), mkexpr(rVlo)));
+ return rV;
+}
+
+
static IRTemp math_PMULUDQ_128 ( IRTemp sV, IRTemp dV )
{
/* This is a really poor translation -- could be improved if
performance critical */
IRTemp s3, s2, s1, s0, d3, d2, d1, d0;
s3 = s2 = s1 = s0 = d3 = d2 = d1 = d0 = IRTemp_INVALID;
- breakup128to32s( dV, &d3, &d2, &d1, &d0 );
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( dV, &d3, &d2, &d1, &d0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
IRTemp res = newTemp(Ity_V128);
assign(res, binop(Iop_64HLtoV128,
binop( Iop_MullU32, mkexpr(d2), mkexpr(s2)),
@@ -9898,7 +9985,7 @@
return deltaIN; /* FAIL */
}
s3 = s2 = s1 = s0 = IRTemp_INVALID;
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
switch (imm8) {
case 0: assign(d16, unop(Iop_32to16, mkexpr(s0))); break;
case 1: assign(d16, unop(Iop_32HIto16, mkexpr(s0))); break;
@@ -9915,6 +10002,41 @@
}
+static Long dis_CVTDQ2PD_128 ( VexAbiInfo* vbi, Prefix pfx,
+ Long delta, Bool isAvx )
+{
+ IRTemp addr = IRTemp_INVALID;
+ Int alen = 0;
+ HChar dis_buf[50];
+ UChar modrm = getUChar(delta);
+ IRTemp arg64 = newTemp(Ity_I64);
+ UInt rG = gregOfRexRM(pfx,modrm);
+ UChar* mbV = isAvx ? "v" : "";
+ if (epartIsReg(modrm)) {
+ UInt rE = eregOfRexRM(pfx,modrm);
+ assign( arg64, getXMMRegLane64(rE, 0) );
+ delta += 1;
+ DIP("%scvtdq2pd %s,%s\n", mbV, nameXMMReg(rE), nameXMMReg(rG));
+ } else {
+ addr = disAMode ( &alen, vbi, pfx, delta, dis_buf, 0 );
+ assign( arg64, loadLE(Ity_I64, mkexpr(addr)) );
+ delta += alen;
+ DIP("%scvtdq2pd %s,%s\n", mbV, dis_buf, nameXMMReg(rG) );
+ }
+ putXMMRegLane64F(
+ rG, 0,
+ unop(Iop_I32StoF64, unop(Iop_64to32, mkexpr(arg64)))
+ );
+ putXMMRegLane64F(
+ rG, 1,
+ unop(Iop_I32StoF64, unop(Iop_64HIto32, mkexpr(arg64)))
+ );
+ if (isAvx)
+ putYMMRegLane128(rG, 1, mkV128(0));
+ return delta;
+}
+
+
/* Note, this also handles SSE(1) insns. */
__attribute__((noinline))
static
@@ -11138,7 +11260,7 @@
}
assign( rmode, get_sse_roundingmode() );
- breakup128to32s( argV, &t3, &t2, &t1, &t0 );
+ breakupV128to32s( argV, &t3, &t2, &t1, &t0 );
# define CVT(_t) binop( Iop_F64toF32, \
mkexpr(rmode), \
@@ -12229,7 +12351,7 @@
delta += 1+alen;
DIP("shufps $%d,%s,%s\n", imm8, dis_buf, nameXMMReg(rG));
}
- IRTemp res = math_SHUFPS( sV, dV, imm8 );
+ IRTemp res = math_SHUFPS_128( sV, dV, imm8 );
putXMMReg( gregOfRexRM(pfx,modrm), mkexpr(res) );
goto decode_success;
}
@@ -12639,32 +12761,7 @@
/* F3 0F E6 = CVTDQ2PD -- convert 2 x I32 in mem/lo half xmm to 2 x
F64 in xmm(G) */
if (haveF3no66noF2(pfx) && sz == 4) {
- IRTemp arg64 = newTemp(Ity_I64);
-
- modrm = getUChar(delta);
- if (epartIsReg(modrm)) {
- assign( arg64, getXMMRegLane64(eregOfRexRM(pfx,modrm), 0) );
- delta += 1;
- DIP("cvtdq2pd %s,%s\n", nameXMMReg(eregOfRexRM(pfx,modrm)),
- nameXMMReg(gregOfRexRM(pfx,modrm)));
- } else {
- addr = disAMode ( &alen, vbi, pfx, delta, dis_buf, 0 );
- assign( arg64, loadLE(Ity_I64, mkexpr(addr)) );
- delta += alen;
- DIP("cvtdq2pd %s,%s\n", dis_buf,
- nameXMMReg(gregOfRexRM(pfx,modrm)) );
- }
-
- putXMMRegLane64F(
- gregOfRexRM(pfx,modrm), 0,
- unop(Iop_I32StoF64, unop(Iop_64to32, mkexpr(arg64)))
- );
-
- putXMMRegLane64F(
- gregOfRexRM(pfx,modrm), 1,
- unop(Iop_I32StoF64, unop(Iop_64HIto32, mkexpr(arg64)))
- );
-
+ delta = dis_CVTDQ2PD_128(vbi, pfx, delta, False/*!isAvx*/);
goto decode_success;
}
break;
@@ -13195,9 +13292,9 @@
delta += alen;
}
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
putXMMReg( gregOfRexRM(pfx,modrm),
- mk128from32s( s2, s2, s0, s0 ) );
+ mkV128from32s( s2, s2, s0, s0 ) );
goto decode_success;
}
/* F2 0F 12 = MOVDDUP -- move from E (mem or xmm) to G (xmm),
@@ -13232,9 +13329,9 @@
delta += alen;
}
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
putXMMReg( gregOfRexRM(pfx,modrm),
- mk128from32s( s3, s3, s1, s1 ) );
+ mkV128from32s( s3, s3, s1, s1 ) );
goto decode_success;
}
break;
@@ -13269,11 +13366,11 @@
assign( gV, getXMMReg(gregOfRexRM(pfx,modrm)) );
- breakup128to32s( eV, &e3, &e2, &e1, &e0 );
- breakup128to32s( gV, &g3, &g2, &g1, &g0 );
+ breakupV128to32s( eV, &e3, &e2, &e1, &e0 );
+ breakupV128to32s( gV, &g3, &g2, &g1, &g0 );
- assign( leftV, mk128from32s( e2, e0, g2, g0 ) );
- assign( rightV, mk128from32s( e3, e1, g3, g1 ) );
+ assign( leftV, mkV128from32s( e2, e0, g2, g0 ) );
+ assign( rightV, mkV128from32s( e3, e1, g3, g1 ) );
putXMMReg( gregOfRexRM(pfx,modrm),
binop(isAdd ? Iop_Add32Fx4 : Iop_Sub32Fx4,
@@ -13389,10 +13486,10 @@
assign( addV, binop(Iop_Add32Fx4, mkexpr(gV), mkexpr(eV)) );
assign( subV, binop(Iop_Sub32Fx4, mkexpr(gV), mkexpr(eV)) );
- breakup128to32s( addV, &a3, &a2, &a1, &a0 );
- breakup128to32s( subV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( addV, &a3, &a2, &a1, &a0 );
+ breakupV128to32s( subV, &s3, &s2, &s1, &s0 );
- putXMMReg( gregOfRexRM(pfx,modrm), mk128from32s( a3, s2, a1, s0 ));
+ putXMMReg( gregOfRexRM(pfx,modrm), mkV128from32s( a3, s2, a1, s0 ));
goto decode_success;
}
break;
@@ -15033,8 +15130,8 @@
nameXMMReg(gregOfRexRM(pfx,modrm)));
}
- breakup128to32s( dV, &d3, &d2, &d1, &d0 );
- breakup128to32s( sV, &s3, &s2, &s1, &s0 );
+ breakupV128to32s( dV, &d3, &d2, &d1, &d0 );
+ breakupV128to32s( sV, &s3, &s2, &s1, &s0 );
assign( t0, binop( Iop_MullS32, mkexpr(d0), mkexpr(s0)) );
putXMMRegLane64( gregOfRexRM(pfx,modrm), 0, mkexpr(t0) );
@@ -15578,7 +15675,7 @@
vassert(0==getRexW(pfx)); /* ensured by caller */
modrm = getUChar(delta);
assign( xmm_vec, getXMMReg( gregOfRexRM(pfx,modrm) ) );
- breakup128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
+ breakupV128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
if ( epartIsReg( modrm ) ) {
imm8_10 = (Int)(getUChar(delta+1) & 3);
@@ -15801,16 +15898,16 @@
UShort mask = 0;
switch (imm8) {
case 3: mask = 0x0FFF;
- assign(withZs, mk128from32s(u32, z32, z32, z32));
+ assign(withZs, mkV128from32s(u32, z32, z32, z32));
break;
case 2: mask = 0xF0FF;
- assign(withZs, mk128from32s(z32, u32, z32, z32));
+ assign(withZs, mkV128from32s(z32, u32, z32, z32));
break;
case 1: mask = 0xFF0F;
- assign(withZs, mk128from32s(z32, z32, u32, z32));
+ assign(withZs, mkV128from32s(z32, z32, u32, z32));
break;
case 0: mask = 0xFFF0;
- assign(withZs, mk128from32s(z32, z32, z32, u32));
+ assign(withZs, mkV128from32s(z32, z32, z32, u32));
break;
default: vassert(0);
}
@@ -15850,7 +15947,7 @@
{
const IRTemp inval = IRTemp_INVALID;
IRTemp dstDs[4] = { inval, inval, inval, inval };
- breakup128to32s( dstV, &dstDs[3], &dstDs[2], &dstDs[1], &dstDs[0] );
+ breakupV128to32s( dstV, &dstDs[3], &dstDs[2], &dstDs[1], &dstDs[0] );
vassert(imm8 <= 255);
dstDs[(imm8 >> 4) & 3] = toInsertD; /* "imm8_count_d" */
@@ -15859,7 +15956,7 @@
IRTemp zero_32 = newTemp(Ity_I32);
assign( zero_32, mkU32(0) );
IRTemp resV = newTemp(Ity_V128);
- assign( resV, mk128from32s(
+ assign( resV, mkV128from32s(
((imm8_zmask & 8) == 8) ? zero_32 : dstDs[3],
((imm8_zmask & 4) == 4) ? zero_32 : dstDs[2],
((imm8_zmask & 2) == 2) ? zero_32 : dstDs[1],
@@ -15883,7 +15980,7 @@
Int imm8;
assign( xmm_vec, getXMMReg( gregOfRexRM(pfx,modrm) ) );
t3 = t2 = t1 = t0 = IRTemp_INVALID;
- breakup128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
+ breakupV128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
if ( epartIsReg( modrm ) ) {
imm8 = (Int)getUChar(delta+1);
@@ -16297,7 +16394,7 @@
modrm = getUChar(delta);
assign( xmm_vec, getXMMReg( gregOfRexRM(pfx,modrm) ) );
- breakup128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
+ breakupV128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
if ( epartIsReg( modrm ) ) {
imm8_20 = (Int)(getUChar(delta+1) & 7);
@@ -16371,7 +16468,7 @@
modrm = getUChar(delta);
assign( xmm_vec, getXMMReg( gregOfRexRM(pfx,modrm) ) );
- breakup128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
+ breakupV128to32s( xmm_vec, &t3, &t2, &t1, &t0 );
if ( epartIsReg( modrm ) ) {
imm8_10 = (Int)(getUChar(delta+1) & 3);
@@ -16476,7 +16573,7 @@
IRTemp vE = newTemp(Ity_V128);
assign( vE, getXMMReg(rE) );
IRTemp dsE[4] = { inval, inval, inval, inval };
- breakup128to32s( vE, &dsE[3], &dsE[2], &dsE[1], &dsE[0] );
+ breakupV128to32s( vE, &dsE[3], &dsE[2], &dsE[1], &dsE[0] );
imm8 = getUChar(delta+1);
d2ins = dsE[(imm8 >> 6) & 3]; /* "imm8_count_s" */
delta += 1+1;
@@ -16610,8 +16707,8 @@
binop( Iop_Mul32Fx4, mkexpr(xmm1_vec),
mkexpr(xmm2_vec) ),
mkV128( imm8_perms[((imm8 >> 4)& 15)] ) ) );
- breakup128to32s( tmp_prod_vec, &v3, &v2, &v1, &v0 );
- assign( prod_vec, mk128from32s( v3, v1, v2, v0 ) );
+ breakupV128to32s( tmp_prod_vec, &v3, &v2, &v1, &v0 );
+ assign( prod_vec, mkV128from32s( v3, v1, v2, v0 ) );
assign( sum_vec, binop( Iop_Add32Fx4,
binop( Iop_InterleaveHI32x4,
@@ -20130,6 +20227,34 @@
*uses_vvvv = True;
goto decode_success;
}
+ /* VUNPCKLPS ymm3/m256, ymm2, ymm1 = VEX.NDS.256.0F.WIG 14 /r */
+ /* VUNPCKHPS ymm3/m256, ymm2, ymm1 = VEX.NDS.256.0F.WIG 15 /r */
+ if (haveNo66noF2noF3(pfx) && 1==getVexL(pfx)/*256*/) {
+ Bool hi = opc == 0x15;
+ UChar modrm = getUChar(delta);
+ UInt rG = gregOfRexRM(pfx,modrm);
+ UInt rV = getVexNvvvv(pfx);
+ IRTemp eV = newTemp(Ity_V256);
+ IRTemp vV = newTemp(Ity_V256);
+ assign( vV, getYMMReg(rV) );
+ if (epartIsReg(modrm)) {
+ UInt rE = eregOfRexRM(pfx,modrm);
+ assign( eV, getYMMReg(rE) );
+ delta += 1;
+ DIP("vunpck%sps %s,%s\n", hi ? "h" : "l",
+ nameYMMReg(rE), nameYMMReg(rG));
+ } else {
+ addr = disAMode ( &alen, vbi, pfx, delta, dis_buf, 0 );
+ assign( eV, loadLE(Ity_V256, mkexpr(addr)) );
+ delta += alen;
+ DIP("vunpck%sps %s,%s\n", hi ? "h" : "l",
+ dis_buf, nameYMMReg(rG));
+ }
+ IRTemp res = math_UNPCKxPS_256( eV, vV, hi );
+ putYMMReg( rG, mkexpr(res) );
+ *uses_vvvv = True;
+ goto decode_success;
+ }
/* VUNPCKLPD xmm3/m128, xmm2, xmm1 = VEX.NDS.128.66.0F.WIG 14 /r */
/* VUNPCKHPD xmm3/m128, xmm2, xmm1 = VEX.NDS.128.66.0F.WIG 15 /r */
if (have66noF2noF3(pfx) && 0==getVexL(pfx)/*128*/) {
@@ -20158,6 +20283,34 @@
*uses_vvvv = True;
goto decode_success;
}
+ /* VUNPCKLPD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 14 /r */
+ /* VUNPCKHPD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 15 /r */
+ if (have66noF2noF3(pfx) && 1==getVexL(pfx)/*256*/) {
+ Bool hi = opc == 0x15;
+ UChar modrm = getUChar(delta);
+ UInt rG = gregOfRexRM(pfx,modrm);
+ UInt rV = getVexNvvvv(pfx);
+ IRTemp eV = newTemp(Ity_V256);
+ IRTemp vV = newTemp(Ity_V256);
+ assign( vV, getYMMReg(rV) );
+ if (epartIsReg(modrm)) {
+ UInt rE = eregOfRexRM(pfx,modrm);
+ assign( eV, getYMMReg(rE) );
+ delta += 1;
+ DIP("vunpck%spd %s,%s\n", hi ? "h" : "l",
+ nameYMMReg(rE), nameYMMReg(rG));
+ } else {
+ addr = disAMode ( &alen, vbi, pfx, delta, dis_buf, 0 );
+ assign( eV, loadLE(Ity_V256, mkexpr(addr)) );
+ delta += alen;
+ DIP("vunpck%spd %s,%s\n", hi ? "h" : "l",
+ dis_buf, nameYMMReg(rG));
+ }
+ IRTemp res = math_UNPCKxPD_256( eV, vV, hi );
+ putYMMReg( rG, mkexpr(res) );
+ *uses_vvvv = True;
+ goto decode_success;
+ }
break;
case 0x16:
@@ -20503,6 +20656,13 @@
uses_vvvv, vbi, pfx, delta, "vandpd", Iop_AndV128 );
goto decode_success;
}
+ /* VANDPD r/m, rV, r ::: r = rV & r/m */
+ /* VANDPD = VEX.NDS.256.66.0F.WIG 54 /r */
+ if (have66noF2noF3(pfx) && 1==getVexL(pfx)/*256*/) {
+ delta = dis_AVX256_E_V_to_G(
+ uses_vvvv, vbi, pfx, delta, "vandpd", Iop_AndV256 );
+ goto decode_success;
+ }
/* VANDPS = VEX.NDS.128.0F.WIG 54 /r */
if (haveNo66noF2noF3(pfx) && 0==getVexL(pfx)/*128*/) {
delta = dis_VEX_NDS_128_AnySimdPfx_0F_WIG_simple(
@@ -20554,6 +20714,13 @@
uses_vvvv, vbi, pfx, delta, "vxorpd", Iop_XorV128 );
goto decode_success;
}
+ /* VXORPD r/m, rV, r ::: r = rV ^ r/m */
+ /* VXORPD = VEX.NDS.256.66.0F.WIG 57 /r */
+ if (have66noF2noF3(pfx) && 1==getVexL(pfx)/*256*/) {
+ delta = dis_AVX256_E_V_to_G(
+ uses_vvvv, vbi, pfx, delta, "vxorpd", Iop_XorV256 );
+ goto decode_success;
+ }
/* VXORPS r/m, rV, r ::: r = rV ^ r/m */
/* VXORPS = VEX.NDS.128.0F.WIG 57 /r */
if (haveNo66noF2noF3(pfx) && 0==getVexL(pfx)/*128*/) {
@@ -20798,6 +20965,12 @@
uses_vvvv, vbi, pfx, delta, "vdivps", Iop_Div32Fx8 );
goto decode_success;
}
+ /* VDIVPD xmm3/m128, xmm2, xmm1 = VEX.NDS.128.66.0F.WIG 5E /r */
+ if (have66noF2noF3(pfx) && 0==getVexL(pfx)/*128*/) {
+ delta = dis_AVX128_E_V_to_G(
+ uses_vvvv, vbi, pfx, delta, "vdivpd", Iop_Div64Fx2 );
+ goto decode_success;
+ }
/* VDIVPD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 5E /r */
if (have66noF2noF3(pfx) && 1==getVexL(pfx)/*256*/) {
delta = dis_AVX256_E_V_to_G(
@@ -21295,6 +21468,16 @@
if (delta > delta0) goto decode_success;
/* else fall through -- decoding has failed */
}
+ /* VCMPPD xmm3/m64(E=argL), xmm2(V=argR), xmm1(G) */
+ /* = VEX.NDS.128.66.0F.WIG C2 /r ib */
+ if (have66noF2noF3(pfx) && 0==getVexL(pfx)/*128*/) {
+ Long delta0 = delta;
+ delta = dis_AVX128_cmp_V_E_to_G( uses_vvvv, vbi, pfx, delta,
+ "vcmppd", True/*all_lanes*/,
+ 8/*sz*/);
+ if (delta > delta0) goto decode_success;
+ /* else fall through -- decoding has failed */
+ }
break;
case 0xC5:
@@ -21335,11 +21518,41 @@
DIP("vshufps $%d,%s,%s,%s\n",
imm8, dis_buf, nameXMMReg(rV), nameXMMReg(rG));
}
- IRTemp res = math_SHUFPS( eV, vV, imm8 );
- putYMMRegLoAndZU( gregOfRexRM(pfx,modrm), mkexpr(res) );
+ IRTemp res = math_SHUFPS_128( eV, vV, imm8 );
+ putYMMRegLoAndZU( rG, mkexpr(res) );
*uses_vvvv = True;
goto decode_success;
}
+ /* VSHUFPS imm8, ymm3/m256, ymm2, ymm1, ymm2 */
+ /* = VEX.NDS.256.0F.WIG C6 /r ib */
+ if (haveNo66noF2noF3(pfx) && 1==getVexL(pfx)/*256*/) {
+ Int imm8 = 0;
+ IRTemp eV = newTemp(Ity_V256);
+ IRTemp vV = newTemp(Ity_V256);
+ UInt modrm = getUChar(delta);
+ UInt rG = gregOfRexRM(pfx,modrm);
+ UInt rV = getVexNvvvv(pfx);
+ assign( vV, getYMMReg(rV) );
+ if (epartIsReg(modrm)) {
+ UInt rE = eregOfRexRM(pfx,modrm);
+ assign( eV, getYMMReg(rE) );
+ imm8 = (Int)getUChar(delta+1);
+ delta += 1+1;
+ DIP("vshufps $%d,%s,%s,%s\n",
+ imm8, nameYMMReg(rE), nameYMMReg(rV), nameYMMReg(rG));
+ } else {
+ addr = disAMode ( &alen, vbi, pfx, delta, dis_buf, 1 );
+ assign( eV, loadLE(Ity_V256, mkexpr(addr)) );
+ imm8 = (Int)getUChar(delta+alen);
+ delta += 1+alen;
+ DIP("vshufps $%d,%s,%s,%s\n",
+ imm8, dis_buf, nameYMMReg(rV), nameYMMReg(rG));
+ }
+ IRTemp res = math_SHUFPS_256( eV, vV, imm8 );
+ putYMMReg( rG, mkexpr(res) );
+ *uses_vvvv = True;
+ goto decode_success;
+ }
break;
case 0xD4:
@@ -21466,6 +21679,14 @@
}
break;
+ case 0xE6:
+ /* VCVTDQ2PD xmm2/m64, xmm1 = VEX.128.F3.0F.WIG E6 /r */
+ if (haveF3no66noF2(pfx) && 0==getVexL(pfx)/*128*/) {
+ delta = dis_CVTDQ2PD_128(vbi, pfx, delta, True/*isAvx*/);
+ goto decode_success;
+ }
+ break;
+
case 0xE7:
/* MOVNTDQ xmm1, m128 = VEX.128.66.0F.WIG E7 /r */
if (have66noF2noF3(pfx) && 0==getVexL(pfx)/*128*/) {
@@ -21625,9 +21846,9 @@
Prefix pfx, Int sz, Long deltaIN
)
{
- //IRTemp addr = IRTemp_INVALID;
- //Int alen = 0;
- //HChar dis_buf[50];
+ IRTemp addr = IRTemp_INVALID;
+ Int alen = 0;
+ HChar dis_buf[50];
Long delta = deltaIN;
UChar opc = getUChar(delta);
delta++;
@@ -21645,6 +21866,25 @@
}
break;
+ case 0x19:
+ /* VBROADCASTSD m64, ymm1 = VEX.256.66.0F38.W0 19 /r */
+ if (have66noF2noF3(pfx)
+ && 1==getVexL(pfx)/*256*/ && 0==getRexW(pfx)/*W0*/
+ && !epartIsReg(getUChar(delta))) {
+ UChar modrm = getUChar(delta);
+ UInt rG = gregOfRexRM(pfx, modrm);
+ addr = disAMode( &alen, vbi, pfx, delta, dis_buf, 0 );
+ delta += alen;
+ DIP("vbroadcastsd %s,%s\n", dis_buf, nameYMMReg(rG));
+ IRTemp t64 = newTemp(Ity_I64);
+ assign(t64, loadLE(Ity_I64, mkexpr(addr)));
+ IRExpr* res = IRExpr_Qop(Iop_64x4toV256, mkexpr(t64), mkexpr(t64),
+ mkexpr(t64), mkexpr(t64));
+ putYMMReg(rG, res);
+ goto decode_success;
+ }
+ break;
+
case 0x1E:
/* VPABSD xmm2/m128, xmm1 = VEX.128.66.0F38.WIG 1E /r */
if (have66noF2noF3(pfx) && 0==getVexL(pfx)/*128*/) {
@@ -21857,14 +22097,9 @@
assign(sV, loadLE(Ity_V256, mkexpr(addr)));
}
delta++;
- IRTemp s3 = newTemp(Ity_I64);
- IRTemp s2 = newTemp(Ity_I64);
- IRTemp s1 = newTemp(Ity_I64);
- IRTemp s0 = newTemp(Ity_I64);
- assign(s3, unop(Iop_V256to64_3, mkexpr(sV)));
- assign(s2, unop(Iop_V256to64_2, mkexpr(sV)));
- assign(s1, unop(Iop_V256to64_1, mkexpr(sV)));
- assign(s0, unop(Iop_V256to64_0, mkexpr(sV)));
+ IRTemp s3, s2, s1, s0;
+ s3 = s2 = s1 = s0 = IRTemp_INVALID;
+ breakupV256to64s(sV, &s3, &s2, &s1, &s0);
IRTemp dV = newTemp(Ity_V256);
assign(dV, IRExpr_Qop(Iop_64x4toV256,
mkexpr((imm8 & (1<<3)) ? s3 : s2),
@@ -22065,7 +22300,7 @@
IRTemp vE = newTemp(Ity_V128);
assign( vE, getXMMReg(rE) );
IRTemp dsE[4] = { inval, inval, inval, inval };
- breakup128to32s( vE, &dsE[3], &dsE[2], &dsE[1], &dsE[0] );
+ breakupV128to32s( vE, &dsE[3], &dsE[2], &dsE[1], &dsE[0] );
imm8 = getUChar(delta+1);
d2ins = dsE[(imm8 >> 6) & 3]; /* "imm8_count_s" */
delta += 1+1;
From: Rich C. <rc...@wi...> - 2012-06-13 04:32:31
valgrind revision: 12632
VEX revision: 2380
C compiler: i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5646)
Assembler:
C library: unknown
uname -mrs: Darwin 10.8.0 i386
Vendor version: unknown
Nightly build on macbook ( Darwin 10.8.0 i386 )
Started at 2012-06-12 23:05:00 CDT
Ended at 2012-06-12 23:32:01 CDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 493 tests, 478 stderr failures, 130 stdout failures, 3 stderrB failures, 3 stdoutB failures, 32 post failures ==
gdbserver_tests/mchelp (stdoutB) gdbserver_tests/mchelp (stderrB) gdbserver_tests/mcinvokeRU (stdoutB) gdbserver_tests/mcinvokeRU (stderrB) gdbserver_tests/mcinvokeWS (stdoutB) gdbserver_tests/mcinvokeWS (stderrB) gdbserver_tests/nlfork_chain (stdout) gdbserver_tests/nlfork_chain (stderr) memcheck/tests/accounting (stderr) memcheck/tests/addressable (stdout) memcheck/tests/addressable (stderr) memcheck/tests/atomic_incs (stdout) memcheck/tests/atomic_incs (stderr) memcheck/tests/badaddrvalue (stdout) memcheck/tests/badaddrvalue (stderr) memcheck/tests/badfree-2trace (stderr) memcheck/tests/badfree (stderr) memcheck/tests/badfree3 (stderr) memcheck/tests/badjump (stderr) memcheck/tests/badjump2 (stderr) memcheck/tests/badloop (stderr) memcheck/tests/badpoll (stderr) memcheck/tests/badrw (stderr) memcheck/tests/big_blocks_freed_list (stderr) memcheck/tests/brk2 (stderr) memcheck/tests/buflen_check (stderr) memcheck/tests/bug287260 (stderr) memcheck/tests/calloc-overflow (stderr) memcheck/tests/clientperm (stdout) memcheck/tests/clientperm (stderr) memcheck/tests/clireq_nofill (stdout) memcheck/tests/clireq_nofill (stderr) memcheck/tests/custom-overlap (stderr) memcheck/tests/custom_alloc (stderr) memcheck/tests/darwin/aio (stderr) memcheck/tests/darwin/env (stderr) memcheck/tests/darwin/pth-supp (stderr) memcheck/tests/darwin/scalar 
(stderr) memcheck/tests/darwin/scalar_fork (stderr) memcheck/tests/darwin/scalar_nocancel (stderr) memcheck/tests/darwin/scalar_vfork (stderr) memcheck/tests/deep_templates (stdout) memcheck/tests/deep_templates (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/doublefree (stderr) memcheck/tests/err_disable1 (stderr) memcheck/tests/err_disable2 (stderr) memcheck/tests/err_disable3 (stderr) memcheck/tests/err_disable4 (stderr) memcheck/tests/erringfds (stdout) memcheck/tests/erringfds (stderr) memcheck/tests/error_counts (stderr) memcheck/tests/errs1 (stderr) memcheck/tests/execve1 (stderr) memcheck/tests/execve2 (stderr) memcheck/tests/exitprog (stderr) memcheck/tests/file_locking (stderr) memcheck/tests/fprw (stderr) memcheck/tests/fwrite (stderr) memcheck/tests/holey_buffer_too_small (stderr) memcheck/tests/inits (stderr) memcheck/tests/inline (stdout) memcheck/tests/inline (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cases-full (stderr) memcheck/tests/leak-cases-possible (stderr) memcheck/tests/leak-cases-summary (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-delta (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long-supps (stderr) memcheck/tests/long_namespace_xml (stdout) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/mallinfo (stderr) memcheck/tests/malloc1 (stderr) memcheck/tests/malloc2 (stderr) memcheck/tests/malloc3 (stdout) memcheck/tests/malloc3 (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/malloc_usable (stderr) memcheck/tests/manuel1 (stdout) memcheck/tests/manuel1 (stderr) memcheck/tests/manuel2 (stdout) memcheck/tests/manuel2 (stderr) memcheck/tests/manuel3 (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/memalign2 (stderr) 
memcheck/tests/memalign_test (stderr) memcheck/tests/memcmptest (stdout) memcheck/tests/memcmptest (stderr) memcheck/tests/mempool (stderr) memcheck/tests/mempool2 (stderr) memcheck/tests/metadata (stdout) memcheck/tests/metadata (stderr) memcheck/tests/mismatches (stderr) memcheck/tests/mmaptest (stderr) memcheck/tests/nanoleak2 (stderr) memcheck/tests/nanoleak_supp (stderr) memcheck/tests/new_nothrow (stderr) memcheck/tests/new_override (stdout) memcheck/tests/new_override (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/null_socket (stderr) memcheck/tests/origin1-yes (stderr) memcheck/tests/origin2-not-quite (stderr) memcheck/tests/origin3-no (stderr) memcheck/tests/origin4-many (stderr) memcheck/tests/origin5-bz2 (stdout) memcheck/tests/origin5-bz2 (stderr) memcheck/tests/origin6-fp (stderr) memcheck/tests/overlap (stdout) memcheck/tests/overlap (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stdout) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pdb-realloc (stderr) memcheck/tests/pdb-realloc2 (stdout) memcheck/tests/pdb-realloc2 (stderr) memcheck/tests/pipe (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/post-syscall (stderr) memcheck/tests/realloc1 (stderr) memcheck/tests/realloc2 (stderr) memcheck/tests/realloc3 (stderr) memcheck/tests/sbfragment (stdout) memcheck/tests/sbfragment (stderr) memcheck/tests/sh-mem-random (stdout) memcheck/tests/sh-mem-random (stderr) memcheck/tests/sh-mem (stderr) memcheck/tests/sigaltstack (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/signal2 (stdout) memcheck/tests/signal2 (stderr) memcheck/tests/sigprocmask (stderr) memcheck/tests/static_malloc (stderr) memcheck/tests/str_tester (stderr) memcheck/tests/strchr (stderr) memcheck/tests/supp1 (stderr) memcheck/tests/supp2 (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/suppfree (stderr) memcheck/tests/test-plo-no (stderr) 
memcheck/tests/test-plo-yes (stderr) memcheck/tests/trivialleak (stderr) memcheck/tests/unit_libcbase (stderr) memcheck/tests/unit_oset (stdout) memcheck/tests/unit_oset (stderr) memcheck/tests/varinfo1 (stderr) memcheck/tests/varinfo2 (stderr) memcheck/tests/varinfo3 (stderr) memcheck/tests/varinfo4 (stdout) memcheck/tests/varinfo4 (stderr) memcheck/tests/varinfo5 (stderr) memcheck/tests/varinfo6 (stdout) memcheck/tests/varinfo6 (stderr) memcheck/tests/vcpu_bz2 (stdout) memcheck/tests/vcpu_bz2 (stderr) memcheck/tests/vcpu_fbench (stdout) memcheck/tests/vcpu_fbench (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/vcpu_fnfns (stderr) memcheck/tests/wrap1 (stdout) memcheck/tests/wrap1 (stderr) memcheck/tests/wrap2 (stdout) memcheck/tests/wrap2 (stderr) memcheck/tests/wrap3 (stdout) memcheck/tests/wrap3 (stderr) memcheck/tests/wrap4 (stdout) memcheck/tests/wrap4 (stderr) memcheck/tests/wrap5 (stdout) memcheck/tests/wrap5 (stderr) memcheck/tests/wrap6 (stdout) memcheck/tests/wrap6 (stderr) memcheck/tests/wrap7 (stdout) memcheck/tests/wrap7 (stderr) memcheck/tests/wrap8 (stdout) memcheck/tests/wrap8 (stderr) memcheck/tests/writev1 (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/espindola2 (stderr) memcheck/tests/x86/fpeflags (stderr) memcheck/tests/x86/fprem (stdout) memcheck/tests/x86/fprem (stderr) memcheck/tests/x86/fxsave (stdout) memcheck/tests/x86/fxsave (stderr) memcheck/tests/x86/insn_basic (stdout) memcheck/tests/x86/insn_basic (stderr) memcheck/tests/x86/insn_cmov (stdout) memcheck/tests/x86/insn_cmov (stderr) memcheck/tests/x86/insn_fpu (stdout) memcheck/tests/x86/insn_fpu (stderr) memcheck/tests/x86/insn_mmx (stdout) memcheck/tests/x86/insn_mmx (stderr) memcheck/tests/x86/insn_sse (stdout) memcheck/tests/x86/insn_sse (stderr) memcheck/tests/x86/insn_sse2 (stdout) memcheck/tests/x86/insn_sse2 (stderr) memcheck/tests/x86/more_x86_fp (stdout) memcheck/tests/x86/more_x86_fp (stderr) memcheck/tests/x86/pushfpopf (stdout) 
memcheck/tests/x86/pushfpopf (stderr) memcheck/tests/x86/pushfw_x86 (stdout) memcheck/tests/x86/pushfw_x86 (stderr) memcheck/tests/x86/pushpopmem (stdout) memcheck/tests/x86/pushpopmem (stderr) memcheck/tests/x86/sse1_memory (stdout) memcheck/tests/x86/sse1_memory (stderr) memcheck/tests/x86/sse2_memory (stdout) memcheck/tests/x86/sse2_memory (stderr) memcheck/tests/x86/tronical (stderr) memcheck/tests/x86/xor-undef-x86 (stdout) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stdout) memcheck/tests/xml1 (stderr) cachegrind/tests/chdir (stderr) cachegrind/tests/clreq (stderr) cachegrind/tests/dlclose (stdout) cachegrind/tests/dlclose (stderr) cachegrind/tests/notpower2 (stderr) cachegrind/tests/wrap5 (stdout) cachegrind/tests/wrap5 (stderr) cachegrind/tests/x86/fpu-28-108 (stderr) callgrind/tests/clreq (stderr) callgrind/tests/notpower2-hwpref (stderr) callgrind/tests/notpower2-use (stderr) callgrind/tests/notpower2-wb (stderr) callgrind/tests/notpower2 (stderr) callgrind/tests/simwork-both (stdout) callgrind/tests/simwork-both (stderr) callgrind/tests/simwork-branch (stdout) callgrind/tests/simwork-branch (stderr) callgrind/tests/simwork-cache (stdout) callgrind/tests/simwork-cache (stderr) callgrind/tests/simwork1 (stdout) callgrind/tests/simwork1 (stderr) callgrind/tests/simwork2 (stdout) callgrind/tests/simwork2 (stderr) callgrind/tests/simwork3 (stdout) callgrind/tests/simwork3 (stderr) callgrind/tests/threads-use (stderr) callgrind/tests/threads (stderr) massif/tests/alloc-fns-A (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (stderr) massif/tests/alloc-fns-B (post) massif/tests/basic (stderr) massif/tests/basic (post) massif/tests/basic2 (stderr) massif/tests/basic2 (post) massif/tests/big-alloc (stderr) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (stderr) massif/tests/deep-A (post) 
massif/tests/deep-B (stderr) massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (stderr) massif/tests/deep-D (post) massif/tests/ignored (stderr) massif/tests/ignored (post) massif/tests/ignoring (stderr) massif/tests/ignoring (post) massif/tests/insig (stderr) massif/tests/insig (post) massif/tests/long-names (stderr) massif/tests/long-names (post) massif/tests/long-time (stderr) massif/tests/long-time (post) massif/tests/malloc_usable (stderr) massif/tests/new-cpp (stderr) massif/tests/new-cpp (post) massif/tests/no-stack-no-heap (stderr) massif/tests/no-stack-no-heap (post) massif/tests/null (stderr) massif/tests/null (post) massif/tests/one (stderr) massif/tests/one (post) massif/tests/overloaded-new (stderr) massif/tests/overloaded-new (post) massif/tests/pages_as_heap (stderr) massif/tests/peak (stderr) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (stderr) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (stderr) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (stderr) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (stderr) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (stderr) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (stderr) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (stderr) massif/tests/zero1 (post) massif/tests/zero2 (stderr) massif/tests/zero2 (post) lackey/tests/true (stderr) none/tests/allexec32 (stdout) none/tests/allexec32 (stderr) none/tests/allexec64 (stdout) none/tests/allexec64 (stderr) none/tests/ansi (stderr) none/tests/args (stdout) none/tests/args (stderr) none/tests/async-sigs (stderr) none/tests/bitfield1 (stderr) none/tests/bug129866 (stdout) none/tests/bug129866 (stderr) none/tests/closeall (stderr) none/tests/cmd-with-special (stderr) none/tests/cmdline5 (stderr) 
none/tests/coolo_sigaction (stdout) none/tests/coolo_sigaction (stderr) none/tests/coolo_strlen (stderr) none/tests/darwin/access_extended (stderr) none/tests/darwin/apple-main-arg (stderr) none/tests/darwin/rlimit (stderr) none/tests/discard (stdout) none/tests/discard (stderr) none/tests/empty-exe (stderr) none/tests/exec-sigmask (stderr) none/tests/execve (stderr) none/tests/faultstatus (stderr) none/tests/fcntl_setown (stderr) none/tests/fdleak_cmsg (stderr) none/tests/fdleak_creat (stderr) none/tests/fdleak_dup (stderr) none/tests/fdleak_dup2 (stderr) none/tests/fdleak_fcntl (stderr) none/tests/fdleak_ipv4 (stdout) none/tests/fdleak_ipv4 (stderr) none/tests/fdleak_open (stderr) none/tests/fdleak_pipe (stderr) none/tests/fdleak_socketpair (stderr) none/tests/floored (stdout) none/tests/floored (stderr) none/tests/fork (stdout) none/tests/fork (stderr) none/tests/fucomip (stderr) none/tests/gxx304 (stderr) none/tests/manythreads (stdout) none/tests/manythreads (stderr) none/tests/map_unaligned (stderr) none/tests/map_unmap (stdout) none/tests/map_unmap (stderr) none/tests/mmap_fcntl_bug (stderr) none/tests/mq (stderr) none/tests/munmap_exe (stderr) none/tests/nestedfns (stdout) none/tests/nestedfns (stderr) none/tests/nodir (stderr) none/tests/pending (stdout) none/tests/pending (stderr) none/tests/process_vm_readv_writev (stderr) none/tests/procfs-non-linux (stderr) none/tests/pth_atfork1 (stdout) none/tests/pth_atfork1 (stderr) none/tests/pth_blockedsig (stdout) none/tests/pth_blockedsig (stderr) none/tests/pth_cancel1 (stdout) none/tests/pth_cancel1 (stderr) none/tests/pth_cancel2 (stderr) none/tests/pth_cvsimple (stdout) none/tests/pth_cvsimple (stderr) none/tests/pth_empty (stderr) none/tests/pth_exit (stderr) none/tests/pth_exit2 (stderr) none/tests/pth_mutexspeed (stdout) none/tests/pth_mutexspeed (stderr) none/tests/pth_once (stdout) none/tests/pth_once (stderr) none/tests/pth_rwlock (stderr) none/tests/pth_stackalign (stdout) none/tests/pth_stackalign 
(stderr) none/tests/rcrl (stdout) none/tests/rcrl (stderr) none/tests/readline1 (stdout) none/tests/readline1 (stderr) none/tests/require-text-symbol-1 (stderr) none/tests/require-text-symbol-2 (stderr) none/tests/res_search (stdout) none/tests/res_search (stderr) none/tests/resolv (stdout) none/tests/resolv (stderr) none/tests/rlimit64_nofile (stderr) none/tests/rlimit_nofile (stderr) none/tests/sem (stderr) none/tests/semlimit (stderr) none/tests/sha1_test (stderr) none/tests/shell (stdout) none/tests/shell (stderr) none/tests/shell_nosuchfile (stderr) none/tests/shell_valid1 (stderr) none/tests/shell_valid2 (stderr) none/tests/shell_valid3 (stderr) none/tests/shell_zerolength (stderr) none/tests/shortpush (stderr) none/tests/shorts (stderr) none/tests/sigstackgrowth (stdout) none/tests/sigstackgrowth (stderr) none/tests/stackgrowth (stdout) none/tests/stackgrowth (stderr) none/tests/syscall-restart1 (stderr) none/tests/syscall-restart2 (stderr) none/tests/syslog (stderr) none/tests/system (stderr) none/tests/thread-exits (stdout) none/tests/thread-exits (stderr) none/tests/threaded-fork (stdout) none/tests/threaded-fork (stderr) none/tests/threadederrno (stdout) none/tests/threadederrno (stderr) none/tests/timestamp (stderr) none/tests/vgprintf (stderr) none/tests/x86/aad_aam (stdout) none/tests/x86/aad_aam (stderr) none/tests/x86/badseg (stdout) none/tests/x86/badseg (stderr) none/tests/x86/bt_everything (stdout) none/tests/x86/bt_everything (stderr) none/tests/x86/bt_literal (stdout) none/tests/x86/bt_literal (stderr) none/tests/x86/bug125959-x86 (stdout) none/tests/x86/bug125959-x86 (stderr) none/tests/x86/bug126147-x86 (stdout) none/tests/x86/bug126147-x86 (stderr) none/tests/x86/bug132813-x86 (stdout) none/tests/x86/bug132813-x86 (stderr) none/tests/x86/bug135421-x86 (stdout) none/tests/x86/bug135421-x86 (stderr) none/tests/x86/bug137714-x86 (stdout) none/tests/x86/bug137714-x86 (stderr) none/tests/x86/bug152818-x86 (stdout) none/tests/x86/bug152818-x86 
(stderr) none/tests/x86/cmpxchg8b (stdout) none/tests/x86/cmpxchg8b (stderr) none/tests/x86/cpuid (stdout) none/tests/x86/cpuid (stderr) none/tests/x86/cse_fail (stdout) none/tests/x86/fcmovnu (stdout) none/tests/x86/fcmovnu (stderr) none/tests/x86/fpu_lazy_eflags (stdout) none/tests/x86/fpu_lazy_eflags (stderr) none/tests/x86/fxtract (stdout) none/tests/x86/fxtract (stderr) none/tests/x86/getseg (stdout) none/tests/x86/getseg (stderr) none/tests/x86/incdec_alt (stdout) none/tests/x86/incdec_alt (stderr) none/tests/x86/insn_basic (stdout) none/tests/x86/insn_basic (stderr) none/tests/x86/insn_cmov (stdout) none/tests/x86/insn_cmov (stderr) none/tests/x86/insn_fpu (stdout) none/tests/x86/insn_fpu (stderr) none/tests/x86/insn_mmx (stdout) none/tests/x86/insn_mmx (stderr) none/tests/x86/insn_sse (stdout) none/tests/x86/insn_sse (stderr) none/tests/x86/insn_sse2 (stdout) none/tests/x86/insn_sse2 (stderr) none/tests/x86/insn_sse3 (stdout) none/tests/x86/insn_sse3 (stderr) none/tests/x86/jcxz (stdout) none/tests/x86/jcxz (stderr) none/tests/x86/lahf (stdout) none/tests/x86/lahf (stderr) none/tests/x86/looper (stdout) none/tests/x86/looper (stderr) none/tests/x86/movx (stdout) none/tests/x86/movx (stderr) none/tests/x86/pushpopseg (stdout) none/tests/x86/pushpopseg (stderr) none/tests/x86/sbbmisc (stdout) none/tests/x86/sbbmisc (stderr) none/tests/x86/shift_ndep (stdout) none/tests/x86/shift_ndep (stderr) none/tests/x86/smc1 (stdout) none/tests/x86/smc1 (stderr) none/tests/x86/x86locked (stdout) none/tests/x86/x86locked (stderr) none/tests/x86/xadd (stdout) none/tests/x86/xadd (stderr) helgrind/tests/annotate_hbefore (stderr) helgrind/tests/annotate_rwlock (stderr) helgrind/tests/annotate_smart_pointer (stderr) helgrind/tests/cond_timedwait_invalid (stderr) helgrind/tests/free_is_write (stderr) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) 
helgrind/tests/hg06_readshared (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) helgrind/tests/locked_vs_unlocked1_rev (stderr) helgrind/tests/locked_vs_unlocked2 (stderr) helgrind/tests/locked_vs_unlocked3 (stderr) helgrind/tests/rwlock_race (stderr) helgrind/tests/rwlock_test (stderr) helgrind/tests/t2t_laog (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc04_free_lock (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc06_two_races_xml (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc10_rec_lock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc13_laog1 (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc15_laog_lockdel (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) drd/tests/annotate_barrier (stderr) drd/tests/annotate_barrier_xml (stderr) drd/tests/annotate_hb_err (stderr) drd/tests/annotate_hb_race (stderr) drd/tests/annotate_hbefore (stderr) drd/tests/annotate_ignore_read (stderr) drd/tests/annotate_ignore_rw (stderr) drd/tests/annotate_ignore_rw2 (stderr) drd/tests/annotate_ignore_write (stderr) drd/tests/annotate_ignore_write2 (stderr) drd/tests/annotate_order_1 (stderr) drd/tests/annotate_order_2 (stderr) drd/tests/annotate_order_3 (stderr) drd/tests/annotate_publish_hg (stderr) drd/tests/annotate_rwlock (stderr) drd/tests/annotate_rwlock_hg (stderr) drd/tests/annotate_smart_pointer (stderr) drd/tests/annotate_smart_pointer2 (stderr) drd/tests/annotate_spinlock (stderr) drd/tests/annotate_static (stderr) 
drd/tests/annotate_trace_memory (stderr) drd/tests/annotate_trace_memory_xml (stderr) drd/tests/atomic_var (stderr) drd/tests/bug-235681 (stderr) drd/tests/circular_buffer (stderr) drd/tests/custom_alloc (stderr) drd/tests/custom_alloc_fiw (stderr) drd/tests/fp_race (stderr) drd/tests/fp_race2 (stderr) drd/tests/fp_race_xml (stderr) drd/tests/free_is_write (stderr) drd/tests/free_is_write2 (stderr) drd/tests/hg01_all_ok (stderr) drd/tests/hg02_deadlock (stderr) drd/tests/hg03_inherit (stderr) drd/tests/hg04_race (stderr) drd/tests/hg05_race2 (stderr) drd/tests/hg06_readshared (stderr) drd/tests/hold_lock_1 (stderr) drd/tests/hold_lock_2 (stderr) drd/tests/linuxthreads_det (stderr) drd/tests/memory_allocation (stderr) drd/tests/monitor_example (stderr) drd/tests/new_delete (stderr) drd/tests/pth_broadcast (stderr) drd/tests/pth_cancel_locked (stderr) drd/tests/pth_cleanup_handler (stderr) drd/tests/pth_cond_race (stderr) drd/tests/pth_cond_race2 (stderr) drd/tests/pth_cond_race3 (stderr) drd/tests/pth_create_chain (stderr) drd/tests/pth_detached (stderr) drd/tests/pth_detached2 (stderr) drd/tests/pth_detached3 (stderr) drd/tests/pth_inconsistent_cond_wait (stderr) drd/tests/pth_mutex_reinit (stderr) drd/tests/pth_once (stderr) drd/tests/pth_process_shared_mutex (stderr) drd/tests/pth_uninitialized_cond (stderr) drd/tests/read_and_free_race (stderr) drd/tests/recursive_mutex (stderr) drd/tests/rwlock_race (stderr) drd/tests/rwlock_test (stderr) drd/tests/rwlock_type_checking (stderr) drd/tests/sem_open (stderr) drd/tests/sem_open2 (stderr) drd/tests/sem_open3 (stderr) drd/tests/sem_open_traced (stderr) drd/tests/sigalrm (stderr) drd/tests/sigaltstack (stderr) drd/tests/tc01_simple_race (stderr) drd/tests/tc02_simple_tls (stderr) drd/tests/tc03_re_excl (stderr) drd/tests/tc04_free_lock (stderr) drd/tests/tc05_simple_race (stderr) drd/tests/tc06_two_races (stderr) drd/tests/tc07_hbl1 (stdout) drd/tests/tc07_hbl1 (stderr) drd/tests/tc08_hbl2 (stdout) drd/tests/tc08_hbl2 
(stderr) drd/tests/tc09_bad_unlock (stderr) drd/tests/tc10_rec_lock (stderr) drd/tests/tc11_XCHG (stdout) drd/tests/tc11_XCHG (stderr) drd/tests/tc12_rwl_trivial (stderr) drd/tests/tc13_laog1 (stderr) drd/tests/tc15_laog_lockdel (stderr) drd/tests/tc16_byterace (stderr) drd/tests/tc17_sembar (stderr) drd/tests/tc19_shadowmem (stderr) drd/tests/tc21_pthonce (stdout) drd/tests/tc21_pthonce (stderr) drd/tests/tc23_bogus_condwait (stderr) drd/tests/thread_name (stderr) drd/tests/thread_name_xml (stderr) drd/tests/threaded-fork (stderr) drd/tests/trylock (stderr) drd/tests/unit_bitmap (stderr) drd/tests/unit_vc (stderr) exp-bbv/tests/x86/complex_rep (stderr) exp-bbv/tests/x86/fldcw_check (stderr) exp-bbv/tests/x86/million (stderr) exp-bbv/tests/x86/rep_prefix (stderr) ================================================= ./valgrind-new/cachegrind/tests/chdir.stderr.diff ================================================= --- chdir.stderr.exp 2012-06-12 23:17:07.000000000 -0500 +++ chdir.stderr.out 2012-06-12 23:29:16.000000000 -0500 @@ -1,17 +1,28 @@ -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3800D9C5: ??? + by 0x3800DB88: ??? + by 0x38054D57: ??? + by 0x38056BE7: ??? + by 0x3807C048: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. 
+ +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. + ================================================= ./valgrind-new/cachegrind/tests/clreq.stderr.diff ================================================= --- clreq.stderr.exp 2012-06-12 23:17:07.000000000 -0500 +++ clreq.stderr.out 2012-06-12 23:29:16.000000000 -0500 @@ -0,0 +1,27 @@ + +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3800D9C5: ??? + by 0x3800DB88: ??? + by 0x38054D57: ??? + by 0x38056BE7: ??? + by 0x3807C048: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. + ================================================= ./valgrind-new/cachegrind/tests/dlclose.stderr.diff ================================================= --- dlclose.stderr.exp 2012-06-12 23:17:07.000000000 -0500 +++ dlclose.stderr.out 2012-06-12 23:29:17.000000000 -0500 @@ -1,17 +1,28 @@ -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3800D9C5: ??? + by 0x3800DB88: ??? + by 0x38054D57: ??? + by 0x38056BE7: ??? + by 0x3807C048: ??? 
+ +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. + ================================================= ./valgrind-new/cachegrind/tests/dlclose.stdout.diff ================================================= --- dlclose.stdout.exp 2012-06-12 23:17:07.000000000 -0500 +++ dlclose.stdout.out 2012-06-12 23:29:16.000000000 -0500 @@ -1 +0,0 @@ -This is myprint! ================================================= ./valgrind-new/cachegrind/tests/notpower2.stderr.diff ================================================= --- notpower2.stderr.exp 2012-06-12 23:17:07.000000000 -0500 +++ notpower2.stderr.out 2012-06-12 23:29:17.000000000 -0500 @@ -1,17 +1,28 @@ -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3800D9C5: ??? + by 0x3800DB88: ??? + by 0x38054D57: ??? + by 0x38056BE7: ??? + by 0x3807C048: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. 
+In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. + ================================================= ./valgrind-new/cachegrind/tests/wrap5.stderr.diff ================================================= --- wrap5.stderr.exp 2012-06-12 23:17:07.000000000 -0500 +++ wrap5.stderr.out 2012-06-12 23:29:17.000000000 -0500 @@ -1,17 +1,28 @@ -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3800D9C5: ??? + by 0x3800DB88: ??? + by 0x38054D57: ??? + by 0x38056BE7: ??? + by 0x3807C048: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
+ ================================================= ./valgrind-new/cachegrind/tests/wrap5.stdout.diff ================================================= --- wrap5.stdout.exp 2012-06-12 23:17:07.000000000 -0500 +++ wrap5.stdout.out 2012-06-12 23:29:17.000000000 -0500 @@ -1,37 +0,0 @@ -computing fact1(7) -in wrapper1-pre: fact(7) -in wrapper2-pre: fact(6) -in wrapper1-pre: fact(5) -in wrapper2-pre: fact(4) -in wrapper1-pre: fact(3) -in wrapper2-pre: fact(2) -in wrapper1-pre: fact(1) -in wrapper2-pre: fact(0) -in wrapper2-post: fact(0) = 1 -in wrapper1-post: fact(1) = 1 -in wrapper2-post: fact(2) = 2 -in wrapper1-post: fact(3) = 6 -in wrapper2-pre: fact(2) -in wrapper1-pre: fact(1) -in wrapper2-pre: fact(0) -in wrapper2-post: fact(0) = 1 -in wrapper1-post: fact(1) = 1 -in wrapper2-post: fact(2) = 2 -in wrapper2-post: fact(4) = 32 -in wrapper1-post: fact(5) = 160 -in wrapper2-pre: fact(2) -in wrapper1-pre: fact(1) -in wrapper2-pre: fact(0) -in wrapper2-post: fact(0) = 1 -in wrapper1-post: fact(1) = 1 -in wrapper2-post: fact(2) = 2 -in wrapper2-post: fact(6) = 972 -in wrapper1-post: fact(7) = 6804 -in wrapper2-pre: fact(2) -in wrapper1-pre: fact(1) -in wrapper2-pre: fact(0) -in wrapper2-post: fact(0) = 1 -in wrapper1-post: fact(1) = 1 -in wrapper2-post: fact(2) = 2 -fact1(7) = 6806 -allocated 51 Lards ================================================= ./valgrind-new/cachegrind/tests/x86/fpu-28-108.stderr.diff ================================================= --- fpu-28-108.stderr.exp 2012-06-12 23:17:07.000000000 -0500 +++ fpu-28-108.stderr.out 2012-06-12 23:29:17.000000000 -0500 @@ -1,17 +1,28 @@ -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3800D9C5: ??? + by 0x3800DB88: ??? + by 0x38054D57: ??? + by 0x38056BE7: ??? + by 0x3807C048: ??? 
+ +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. + ================================================= ./valgrind-new/callgrind/tests/clreq.stderr.diff ================================================= --- clreq.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ clreq.stderr.out 2012-06-12 23:29:18.000000000 -0500 @@ -1,6 +1,28 @@ -Events : Ir -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-I refs: ================================================= ./valgrind-new/callgrind/tests/notpower2-hwpref.stderr.diff ================================================= --- notpower2-hwpref.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ notpower2-hwpref.stderr.out 2012-06-12 23:29:18.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/notpower2-use.stderr.diff ================================================= --- notpower2-use.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ notpower2-use.stderr.out 2012-06-12 23:29:18.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw AcCost1 SpLoss1 AcCost2 SpLoss2 -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? 
+ by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/notpower2-wb.stderr.diff ================================================= --- notpower2-wb.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ notpower2-wb.stderr.out 2012-06-12 23:29:18.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw ILdmr DLdmr DLdmw -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. 
+ +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. -I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/notpower2.stderr.diff ================================================= --- notpower2.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ notpower2.stderr.out 2012-06-12 23:29:18.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/simwork-both.stderr.diff ================================================= --- simwork-both.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork-both.stderr.out 2012-06-12 23:29:19.000000000 -0500 @@ -1,24 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw Bc Bcm Bi Bim -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: - -Branches: -Mispredicts: -Mispred rate: ================================================= ./valgrind-new/callgrind/tests/simwork-both.stdout.diff ================================================= --- simwork-both.stdout.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork-both.stdout.out 2012-06-12 23:29:18.000000000 -0500 @@ -1 +0,0 @@ -Sum: 1000000 ================================================= ./valgrind-new/callgrind/tests/simwork-branch.stderr.diff ================================================= --- simwork-branch.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork-branch.stderr.out 2012-06-12 23:29:19.000000000 -0500 @@ -1,10 +1,28 @@ -Events : Ir Bc Bcm Bi Bim -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? -I refs: +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-Branches: -Mispredicts: -Mispred rate: ================================================= ./valgrind-new/callgrind/tests/simwork-branch.stdout.diff ================================================= --- simwork-branch.stdout.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork-branch.stdout.out 2012-06-12 23:29:19.000000000 -0500 @@ -1 +0,0 @@ -Sum: 1000000 ================================================= ./valgrind-new/callgrind/tests/simwork-cache.stderr.diff ================================================= --- simwork-cache.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork-cache.stderr.out 2012-06-12 23:29:19.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/simwork-cache.stdout.diff ================================================= --- simwork-cache.stdout.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork-cache.stdout.out 2012-06-12 23:29:19.000000000 -0500 @@ -1 +0,0 @@ -Sum: 1000000 ================================================= ./valgrind-new/callgrind/tests/simwork1.stderr.diff ================================================= --- simwork1.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork1.stderr.out 2012-06-12 23:29:19.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/simwork1.stdout.diff ================================================= --- simwork1.stdout.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork1.stdout.out 2012-06-12 23:29:19.000000000 -0500 @@ -1 +0,0 @@ -Sum: 1000000 ================================================= ./valgrind-new/callgrind/tests/simwork2.stderr.diff ================================================= --- simwork2.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork2.stderr.out 2012-06-12 23:29:19.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw ILdmr DLdmr DLdmw -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/simwork2.stdout.diff ================================================= --- simwork2.stdout.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork2.stdout.out 2012-06-12 23:29:19.000000000 -0500 @@ -1 +0,0 @@ -Sum: 1000000 ================================================= ./valgrind-new/callgrind/tests/simwork3.stderr.diff ================================================= --- simwork3.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork3.stderr.out 2012-06-12 23:29:19.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw AcCost1 SpLoss1 AcCost2 SpLoss2 -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happened in m_mallocfree.c. + +If that doesn't help, please report this bug to: www.valgrind.org + +In the bug report, send all the above text, the valgrind +version, and what OS and version you are using. Thanks. 
-I refs: -I1 misses: -LLi misses: -I1 miss rate: -LLi miss rate: - -D refs: -D1 misses: -LLd misses: -D1 miss rate: -LLd miss rate: - -LL refs: -LL misses: -LL miss rate: ================================================= ./valgrind-new/callgrind/tests/simwork3.stdout.diff ================================================= --- simwork3.stdout.exp 2012-06-12 23:17:03.000000000 -0500 +++ simwork3.stdout.out 2012-06-12 23:29:19.000000000 -0500 @@ -1 +0,0 @@ -Sum: 1000000 ================================================= ./valgrind-new/callgrind/tests/threads-use.stderr.diff ================================================= --- threads-use.stderr.exp 2012-06-12 23:17:03.000000000 -0500 +++ threads-use.stderr.out 2012-06-12 23:29:19.000000000 -0500 @@ -1,20 +1,28 @@ -Events : Ir Dr Dw I1mr D1mr D1mw ILmr DLmr DLmw AcCost1 SpLoss1 AcCost2 SpLoss2 Ge sysCount sysTime -Collected : +valgrind: m_scheduler/scheduler.c:707 (do_pre_run_checks): Assertion 'VG_IS_32_ALIGNED(a_vex)' failed. + at 0x3801F4C5: ??? + by 0x3801F688: ??? + by 0x38064F47: ??? + by 0x38066DD7: ??? + by 0x3808C238: ??? + +sched status: + running_tid=1 + +Thread 1: status = VgTs_Runnable + at 0x8FE01030: _dyld_start (in /usr/lib/dyld) + + +Note: see also the FAQ in the source distribution. +It contains workarounds to several common problems. +In particular, if Valgrind aborted or crashed after +identifying problems in your program, there's a good chance +that fixing those problems will prevent Valgrind aborting or +crashing, especially if it happe... [truncated message content] |
From: Philippe W. <phi...@sk...> - 2012-06-13 03:46:07
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2)
Assembler: GNU assembler version 2.21.53.0.1-6.fc16 20110716
C library: GNU C Library development release version 2.14.90
uname -mrs: Linux 3.3.1-3.fc16.ppc64 ppc64
Vendor version: Fedora release 16 (Verne)
Nightly build on gcc110 ( Fedora release 16 (Verne), ppc64 )
Started at 2012-06-12 20:00:11 PDT
Ended at 2012-06-12 20:45:00 PDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 526 tests, 15 stderr failures, 8 stdout failures, 1 stderrB failure, 1 stdoutB failure, 2 post failures ==
gdbserver_tests/mcmain_pic (stdout)
gdbserver_tests/mcmain_pic (stderr)
gdbserver_tests/mcmain_pic (stdoutB)
gdbserver_tests/mcmain_pic (stderrB)
memcheck/tests/ppc32/power_ISA2_05 (stdout)
memcheck/tests/ppc32/power_ISA2_05 (stderr)
memcheck/tests/ppc64/power_ISA2_05 (stdout)
memcheck/tests/ppc64/power_ISA2_05 (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/trivialleak (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/wrap8 (stdout)
memcheck/tests/wrap8 (stderr)
massif/tests/big-alloc (post)
massif/tests/deep-D (post)
none/tests/empty-exe (stderr)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-vmx (stdout)
none/tests/ppc64/jm-fp (stdout)
none/tests/ppc64/jm-vmx (stdout)
none/tests/shell (stderr)
none/tests/shell_valid1 (stderr)
none/tests/shell_valid2 (stderr)
none/tests/shell_valid3 (stderr)
none/tests/shell_zerolength (stderr)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
From: Tom H. <to...@co...> - 2012-06-13 03:14:37
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.3.0 20080428 (Red Hat 4.3.0-8)
Assembler: GNU assembler version 2.18.50.0.6-2 20080403
C library: GNU C Library stable release version 2.8
uname -mrs: Linux 3.3.4-5.fc17.x86_64 x86_64
Vendor version: Fedora release 9 (Sulphur)
Nightly build on bristol ( x86_64, Fedora 9 )
Started at 2012-06-13 03:42:00 BST
Ended at 2012-06-13 04:14:21 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 600 tests, 0 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
none/tests/amd64/sse4-64 (stdout)
From: Tom H. <to...@co...> - 2012-06-13 03:03:01
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2)
Assembler: GNU assembler version 2.19.51.0.14-3.fc11 20090722
C library: GNU C Library stable release version 2.10.2
uname -mrs: Linux 3.3.4-5.fc17.x86_64 x86_64
Vendor version: Fedora release 11 (Leonidas)
Nightly build on bristol ( x86_64, Fedora 11 )
Started at 2012-06-13 03:30:56 BST
Ended at 2012-06-13 04:02:43 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 602 tests, 1 stderr failure, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/long_namespace_xml (stderr)
none/tests/amd64/sse4-64 (stdout)
From: <br...@ac...> - 2012-06-13 02:59:19
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-3)
Assembler: GNU assembler 2.15.92.0.2 20040927
C library: GNU C Library stable release version 2.3.4
uname -mrs: Linux 2.6.9-42.EL s390x
Vendor version: Red Hat Enterprise Linux AS release 4 (Nahant Update 4)
Nightly build on z10-ec ( s390x build on z10-EC )
Started at 2012-06-12 22:20:16 EDT
Ended at 2012-06-12 22:59:08 EDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 508 tests, 8 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/manuel3 (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/varinfo6 (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
drd/tests/tc04_free_lock (stderr)
drd/tests/tc09_bad_unlock (stderr)
=================================================
./valgrind-new/drd/tests/tc04_free_lock.stderr.diff-ppc
=================================================
--- tc04_free_lock.stderr.exp-ppc 2012-06-12 22:43:06.000000000 -0400
+++ tc04_free_lock.stderr.out 2012-06-12 22:58:25.000000000 -0400
@@ -7,28 +7,22 @@
by 0x........: main (tc04_free_lock.c:20)
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:26)
+ at 0x........: bar (tc04_free_lock.c:40)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
by 0x........: bar (tc04_free_lock.c:38)
by 0x........: main (tc04_free_lock.c:26)
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: foo (tc04_free_lock.c:47)
- by 0x........: main (tc04_free_lock.c:27)
+ at 0x........: foo (tc04_free_lock.c:49)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: foo (tc04_free_lock.c:46)
by 0x........: main (tc04_free_lock.c:27)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:28)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
- by 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:28)
-
-ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/drd/tests/tc04_free_lock.stderr.diff-x86
=================================================
--- tc04_free_lock.stderr.exp-x86 2012-06-12 22:43:06.000000000 -0400
+++ tc04_free_lock.stderr.out 2012-06-12 22:58:25.000000000 -0400
@@ -8,7 +8,8 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: bar (tc04_free_lock.c:40)
- by 0x........: main (tc04_free_lock.c:26)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
by 0x........: bar (tc04_free_lock.c:38)
@@ -16,19 +17,12 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: foo (tc04_free_lock.c:49)
- by 0x........: main (tc04_free_lock.c:27)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: foo (tc04_free_lock.c:46)
by 0x........: main (tc04_free_lock.c:27)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: bar (tc04_free_lock.c:40)
- by 0x........: main (tc04_free_lock.c:28)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
- by 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:28)
-
-ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/drd/tests/tc09_bad_unlock.stderr.diff-glibc2.8
=================================================
--- tc09_bad_unlock.stderr.exp-glibc2.8 2012-06-12 22:43:06.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:58:29.000000000 -0400
@@ -26,7 +26,7 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: (below main)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
@@ -47,13 +47,5 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: (below main)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-ERROR SUMMARY: 8 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 8 errors from 6 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/drd/tests/tc09_bad_unlock.stderr.diff-ppc
=================================================
--- tc09_bad_unlock.stderr.exp-ppc 2012-06-12 22:43:06.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:58:29.000000000 -0400
@@ -25,8 +25,8 @@
by 0x........: main (tc09_bad_unlock.c:49)
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:49)
+ at 0x........: nearly_main (tc09_bad_unlock.c:45)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
@@ -47,13 +47,5 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:50)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-ERROR SUMMARY: 8 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 8 errors from 6 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/drd/tests/tc09_bad_unlock.stderr.diff-x86
=================================================
--- tc09_bad_unlock.stderr.exp-x86 2012-06-12 22:43:06.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:58:29.000000000 -0400
@@ -26,7 +26,7 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:49)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
@@ -47,13 +47,5 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:50)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-ERROR SUMMARY: 8 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 8 errors from 6 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc09_bad_unlock.stderr.diff
=================================================
--- tc09_bad_unlock.stderr.exp 2012-06-12 22:38:48.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:54:36.000000000 -0400
@@ -42,14 +42,6 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:49)
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_unlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:49)
-
---------------------
----------------------------------------------------------------
@@ -110,16 +102,8 @@
----------------------------------------------------------------
-Thread #x's call to pthread_mutex_unlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-----------------------------------------------------------------
-
Thread #x: Exiting thread still holds 1 lock
...
-ERROR SUMMARY: 11 errors from 11 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 9 errors from 9 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc18_semabuse.stderr.diff
=================================================
--- tc18_semabuse.stderr.exp 2012-06-12 22:38:48.000000000 -0400
+++ tc18_semabuse.stderr.out 2012-06-12 22:54:44.000000000 -0400
@@ -18,13 +18,5 @@
by 0x........: sem_wait (hg_intercepts.c:...)
by 0x........: main (tc18_semabuse.c:34)
-----------------------------------------------------------------
-Thread #x's call to sem_post failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: sem_post_WRK (hg_intercepts.c:...)
- by 0x........: sem_post (hg_intercepts.c:...)
- by 0x........: main (tc18_semabuse.c:37)
-
-
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc20_verifywrap.stderr.diff
=================================================
--- tc20_verifywrap.stderr.exp 2012-06-12 22:38:48.000000000 -0400
+++ tc20_verifywrap.stderr.out 2012-06-12 22:54:54.000000000 -0400
@@ -1,7 +1,7 @@
------- This is output for >= glibc 2.4 ------
+------ This is output for < glibc 2.4 ------
---------------- pthread_create/join ----------------
@@ -45,13 +45,6 @@
----------------------------------------------------------------
-Thread #x's call to pthread_mutex_init failed
- with error code 95 (EOPNOTSUPP: Operation not supported on transport endpoint)
- at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:92)
-
-----------------------------------------------------------------
-
Thread #x: pthread_mutex_destroy of a locked mutex
at 0x........: pthread_mutex_destroy (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:102)
@@ -63,26 +56,8 @@
at 0x........: pthread_mutex_destroy (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:102)
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_lock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:108)
-
-----------------------------------------------------------------
-Thread #x's call to pthread_mutex_trylock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_trylock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:116)
-
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_timedlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_timedlock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:121)
+make pthread_mutex_lock fail: skipped on glibc < 2.4
----------------------------------------------------------------
@@ -90,13 +65,6 @@
at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:125)
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_unlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:125)
-
---------------- pthread_cond_wait et al ----------------
@@ -215,14 +183,6 @@
by 0x........: sem_wait (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:242)
-----------------------------------------------------------------
-
-Thread #x's call to sem_post failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: sem_post_WRK (hg_intercepts.c:...)
- by 0x........: sem_post (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:245)
-
FIXME: can't figure out how to verify wrap of sem_post
@@ -235,4 +195,4 @@
...
-ERROR SUMMARY: 23 errors from 23 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 17 errors from 17 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/memcheck/tests/manuel3.stderr.diff
=================================================
--- manuel3.stderr.exp 2012-06-12 22:39:48.000000000 -0400
+++ manuel3.stderr.out 2012-06-12 22:49:36.000000000 -0400
@@ -1,4 +1,3 @@
Conditional jump or move depends on uninitialised value(s)
- at 0x........: gcc_cant_inline_me (manuel3.c:22)
- by 0x........: main (manuel3.c:14)
+ at 0x........: main (manuel3.c:12)
=================================================
./valgrind-new/memcheck/tests/partial_load_ok.stderr.diff
=================================================
--- partial_load_ok.stderr.exp 2012-06-12 22:39:48.000000000 -0400
+++ partial_load_ok.stderr.out 2012-06-12 22:50:07.000000000 -0400
@@ -1,7 +1,13 @@
-Invalid read of size 4
+Invalid read of size 1
+ at 0x........: main (partial_load.c:16)
+ Address 0x........ is 0 bytes after a block of size 7 alloc'd
+ at 0x........: calloc (vg_replace_malloc.c:...)
+ by 0x........: main (partial_load.c:14)
+
+Invalid read of size 8
at 0x........: main (partial_load.c:23)
- Address 0x........ is 1 bytes inside a block of size 4 alloc'd
+ Address 0x........ is 1 bytes inside a block of size 8 alloc'd
at 0x........: calloc (vg_replace_malloc.c:...)
by 0x........: main (partial_load.c:20)
@@ -11,9 +17,9 @@
at 0x........: calloc (vg_replace_malloc.c:...)
by 0x........: main (partial_load.c:28)
-Invalid read of size 4
+Invalid read of size 8
at 0x........: main (partial_load.c:37)
- Address 0x........ is 0 bytes inside a block of size 4 free'd
+ Address 0x........ is 0 bytes inside a block of size 8 free'd
at 0x........: free (vg_replace_malloc.c:...)
by 0x........: main (partial_load.c:36)
@@ -25,4 +31,4 @@
For a detailed leak analysis, rerun with: --leak-check=full
For counts of detected and suppressed errors, rerun with: -v
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/memcheck/tests/partial_load_ok.stderr.diff64
=================================================
--- partial_load_ok.stderr.exp64 2012-06-12 22:39:48.000000000 -0400
+++ partial_load_ok.stderr.out 2012-06-12 22:50:07.000000000 -0400
@@ -1,4 +1,10 @@
+Invalid read of size 1
+ at 0x........: main (partial_load.c:16)
+ Address 0x........ is 0 bytes after a block of size 7 alloc'd
+ at 0x........: calloc (vg_replace_malloc.c:...)
+ by 0x........: main (partial_load.c:14)
+
Invalid read of size 8
at 0x........: main (partial_load.c:23)
Address 0x........ is 1 bytes inside a block of size 8 alloc'd
@@ -25,4 +31,4 @@
For a detailed leak analysis, rerun with: --leak-check=full
For counts of detected and suppressed errors, rerun with: -v
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/memcheck/tests/varinfo6.stderr.diff
=================================================
--- varinfo6.stderr.exp 2012-06-12 22:39:48.000000000 -0400
+++ varinfo6.stderr.out 2012-06-12 22:51:04.000000000 -0400
@@ -7,8 +7,7 @@
by 0x........: BZ2_bzCompress (varinfo6.c:4860)
by 0x........: BZ2_bzBuffToBuffCompress (varinfo6.c:5667)
by 0x........: main (varinfo6.c:6517)
- Location 0x........ is 2 bytes inside local var "budget"
- declared at varinfo6.c:3115, in frame #2 of thread 1
+ Address 0x........ is on thread 1's stack
Uninitialised byte(s) found during client check request
at 0x........: croak (varinfo6.c:34)
=================================================
./valgrind-new/memcheck/tests/varinfo6.stderr.diff-ppc64
=================================================
--- varinfo6.stderr.exp-ppc64 2012-06-12 22:39:48.000000000 -0400
+++ varinfo6.stderr.out 2012-06-12 22:51:04.000000000 -0400
@@ -1,5 +1,5 @@
Uninitialised byte(s) found during client check request
- at 0x........: croak (varinfo6.c:35)
+ at 0x........: croak (varinfo6.c:34)
by 0x........: mainSort (varinfo6.c:2999)
by 0x........: BZ2_blockSort (varinfo6.c:3143)
by 0x........: BZ2_compressBlock (varinfo6.c:4072)
@@ -10,7 +10,7 @@
Address 0x........ is on thread 1's stack
Uninitialised byte(s) found during client check request
- at 0x........: croak (varinfo6.c:35)
+ at 0x........: croak (varinfo6.c:34)
by 0x........: BZ2_decompress (varinfo6.c:1699)
by 0x........: BZ2_bzDecompress (varinfo6.c:5230)
by 0x........: BZ2_bzBuffToBuffDecompress (varinfo6.c:5715)
=================================================
./valgrind-old/drd/tests/tc04_free_lock.stderr.diff-ppc
=================================================
--- tc04_free_lock.stderr.exp-ppc 2012-06-12 22:22:30.000000000 -0400
+++ tc04_free_lock.stderr.out 2012-06-12 22:37:07.000000000 -0400
@@ -7,28 +7,22 @@
by 0x........: main (tc04_free_lock.c:20)
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:26)
+ at 0x........: bar (tc04_free_lock.c:40)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
by 0x........: bar (tc04_free_lock.c:38)
by 0x........: main (tc04_free_lock.c:26)
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: foo (tc04_free_lock.c:47)
- by 0x........: main (tc04_free_lock.c:27)
+ at 0x........: foo (tc04_free_lock.c:49)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: foo (tc04_free_lock.c:46)
by 0x........: main (tc04_free_lock.c:27)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:28)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
- by 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:28)
-
-ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/drd/tests/tc04_free_lock.stderr.diff-x86
=================================================
--- tc04_free_lock.stderr.exp-x86 2012-06-12 22:22:30.000000000 -0400
+++ tc04_free_lock.stderr.out 2012-06-12 22:37:07.000000000 -0400
@@ -8,7 +8,8 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: bar (tc04_free_lock.c:40)
- by 0x........: main (tc04_free_lock.c:26)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
by 0x........: bar (tc04_free_lock.c:38)
@@ -16,19 +17,12 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: foo (tc04_free_lock.c:49)
- by 0x........: main (tc04_free_lock.c:27)
+ by 0x........: process_dl_debug (in /lib64/ld-2.3.4.so)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: foo (tc04_free_lock.c:46)
by 0x........: main (tc04_free_lock.c:27)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: bar (tc04_free_lock.c:40)
- by 0x........: main (tc04_free_lock.c:28)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_lock (drd_pthread_intercepts.c:?)
- by 0x........: bar (tc04_free_lock.c:38)
- by 0x........: main (tc04_free_lock.c:28)
-
-ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/drd/tests/tc09_bad_unlock.stderr.diff-glibc2.8
=================================================
--- tc09_bad_unlock.stderr.exp-glibc2.8 2012-06-12 22:22:30.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:37:11.000000000 -0400
@@ -26,7 +26,7 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: (below main)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
@@ -47,13 +47,5 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: (below main)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-ERROR SUMMARY: 8 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 8 errors from 6 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/drd/tests/tc09_bad_unlock.stderr.diff-ppc
=================================================
--- tc09_bad_unlock.stderr.exp-ppc 2012-06-12 22:22:30.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:37:11.000000000 -0400
@@ -25,8 +25,8 @@
by 0x........: main (tc09_bad_unlock.c:49)
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:49)
+ at 0x........: nearly_main (tc09_bad_unlock.c:45)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
@@ -47,13 +47,5 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:50)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-ERROR SUMMARY: 8 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 8 errors from 6 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/drd/tests/tc09_bad_unlock.stderr.diff-x86
=================================================
--- tc09_bad_unlock.stderr.exp-x86 2012-06-12 22:22:30.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:37:11.000000000 -0400
@@ -26,7 +26,7 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:49)
+ by 0x........: ???
mutex 0x........ was first observed at:
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
@@ -47,13 +47,5 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:50)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-ERROR SUMMARY: 8 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 8 errors from 6 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/helgrind/tests/tc09_bad_unlock.stderr.diff
=================================================
--- tc09_bad_unlock.stderr.exp 2012-06-12 22:21:15.000000000 -0400
+++ tc09_bad_unlock.stderr.out 2012-06-12 22:33:19.000000000 -0400
@@ -42,14 +42,6 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:49)
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_unlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:49)
-
---------------------
----------------------------------------------------------------
@@ -110,16 +102,8 @@
----------------------------------------------------------------
-Thread #x's call to pthread_mutex_unlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:41)
- by 0x........: main (tc09_bad_unlock.c:50)
-
-----------------------------------------------------------------
-
Thread #x: Exiting thread still holds 1 lock
...
-ERROR SUMMARY: 11 errors from 11 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 9 errors from 9 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/helgrind/tests/tc18_semabuse.stderr.diff
=================================================
--- tc18_semabuse.stderr.exp 2012-06-12 22:21:15.000000000 -0400
+++ tc18_semabuse.stderr.out 2012-06-12 22:33:27.000000000 -0400
@@ -18,13 +18,5 @@
by 0x........: sem_wait (hg_intercepts.c:...)
by 0x........: main (tc18_semabuse.c:34)
-----------------------------------------------------------------
-Thread #x's call to sem_post failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: sem_post_WRK (hg_intercepts.c:...)
- by 0x........: sem_post (hg_intercepts.c:...)
- by 0x........: main (tc18_semabuse.c:37)
-
-
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/helgrind/tests/tc20_verifywrap.stderr.diff
=================================================
--- tc20_verifywrap.stderr.exp 2012-06-12 22:21:15.000000000 -0400
+++ tc20_verifywrap.stderr.out 2012-06-12 22:33:37.000000000 -0400
@@ -1,7 +1,7 @@
------- This is output for >= glibc 2.4 ------
+------ This is output for < glibc 2.4 ------
---------------- pthread_create/join ----------------
@@ -45,13 +45,6 @@
----------------------------------------------------------------
-Thread #x's call to pthread_mutex_init failed
- with error code 95 (EOPNOTSUPP: Operation not supported on transport endpoint)
- at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:92)
-
-----------------------------------------------------------------
-
Thread #x: pthread_mutex_destroy of a locked mutex
at 0x........: pthread_mutex_destroy (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:102)
@@ -63,26 +56,8 @@
at 0x........: pthread_mutex_destroy (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:102)
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_lock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:108)
-
-----------------------------------------------------------------
-Thread #x's call to pthread_mutex_trylock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_trylock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:116)
-
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_timedlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_timedlock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:121)
+make pthread_mutex_lock fail: skipped on glibc < 2.4
----------------------------------------------------------------
@@ -90,13 +65,6 @@
at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:125)
-----------------------------------------------------------------
-
-Thread #x's call to pthread_mutex_unlock failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:125)
-
---------------- pthread_cond_wait et al ----------------
@@ -215,14 +183,6 @@
by 0x........: sem_wait (hg_intercepts.c:...)
by 0x........: main (tc20_verifywrap.c:242)
-----------------------------------------------------------------
-
-Thread #x's call to sem_post failed
- with error code 22 (EINVAL: Invalid argument)
- at 0x........: sem_post_WRK (hg_intercepts.c:...)
- by 0x........: sem_post (hg_intercepts.c:...)
- by 0x........: main (tc20_verifywrap.c:245)
-
FIXME: can't figure out how to verify wrap of sem_post
@@ -235,4 +195,4 @@
...
-ERROR SUMMARY: 23 errors from 23 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 17 errors from 17 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/memcheck/tests/manuel3.stderr.diff
=================================================
--- manuel3.stderr.exp 2012-06-12 22:21:36.000000000 -0400
+++ manuel3.stderr.out 2012-06-12 22:28:20.000000000 -0400
@@ -1,4 +1,3 @@
Conditional jump or move depends on uninitialised value(s)
- at 0x........: gcc_cant_inline_me (manuel3.c:22)
- by 0x........: main (manuel3.c:14)
+ at 0x........: main (manuel3.c:12)
=================================================
./valgrind-old/memcheck/tests/partial_load_ok.stderr.diff
=================================================
--- partial_load_ok.stderr.exp 2012-06-12 22:21:36.000000000 -0400
+++ partial_load_ok.stderr.out 2012-06-12 22:28:51.000000000 -0400
@@ -1,7 +1,13 @@
-Invalid read of size 4
+Invalid read of size 1
+ at 0x........: main (partial_load.c:16)
+ Address 0x........ is 0 bytes after a block of size 7 alloc'd
+ at 0x........: calloc (vg_replace_malloc.c:...)
+ by 0x........: main (partial_load.c:14)
+
+Invalid read of size 8
at 0x........: main (partial_load.c:23)
- Address 0x........ is 1 bytes inside a block of size 4 alloc'd
+ Address 0x........ is 1 bytes inside a block of size 8 alloc'd
at 0x........: calloc (vg_replace_malloc.c:...)
by 0x........: main (partial_load.c:20)
@@ -11,9 +17,9 @@
at 0x........: calloc (vg_replace_malloc.c:...)
by 0x........: main (partial_load.c:28)
-Invalid read of size 4
+Invalid read of size 8
at 0x........: main (partial_load.c:37)
- Address 0x........ is 0 bytes inside a block of size 4 free'd
+ Address 0x........ is 0 bytes inside a block of size 8 free'd
at 0x........: free (vg_replace_malloc.c:...)
by 0x........: main (partial_load.c:36)
@@ -25,4 +31,4 @@
For a detailed leak analysis, rerun with: --leak-check=full
For counts of detected and suppressed errors, rerun with: -v
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/memcheck/tests/partial_load_ok.stderr.diff64
=================================================
--- partial_load_ok.stderr.exp64 2012-06-12 22:21:36.000000000 -0400
+++ partial_load_ok.stderr.out 2012-06-12 22:28:51.000000000 -0400
@@ -1,4 +1,10 @@
+Invalid read of size 1
+ at 0x........: main (partial_load.c:16)
+ Address 0x........ is 0 bytes after a block of size 7 alloc'd
+ at 0x........: calloc (vg_replace_malloc.c:...)
+ by 0x........: main (partial_load.c:14)
+
Invalid read of size 8
at 0x........: main (partial_load.c:23)
Address 0x........ is 1 bytes inside a block of size 8 alloc'd
@@ -25,4 +31,4 @@
For a detailed leak analysis, rerun with: --leak-check=full
For counts of detected and suppressed errors, rerun with: -v
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/memcheck/tests/varinfo6.stderr.diff
=================================================
--- varinfo6.stderr.exp 2012-06-12 22:21:36.000000000 -0400
+++ varinfo6.stderr.out 2012-06-12 22:29:48.000000000 -0400
@@ -7,8 +7,7 @@
by 0x........: BZ2_bzCompress (varinfo6.c:4860)
by 0x........: BZ2_bzBuffToBuffCompress (varinfo6.c:5667)
by 0x........: main (varinfo6.c:6517)
- Location 0x........ is 2 bytes inside local var "budget"
- declared at varinfo6.c:3115, in frame #2 of thread 1
+ Address 0x........ is on thread 1's stack
Uninitialised byte(s) found during client check request
at 0x........: croak (varinfo6.c:34)
=================================================
./valgrind-old/memcheck/tests/varinfo6.stderr.diff-ppc64
=================================================
--- varinfo6.stderr.exp-ppc64 2012-06-12 22:21:35.000000000 -0400
+++ varinfo6.stderr.out 2012-06-12 22:29:48.000000000 -0400
@@ -1,5 +1,5 @@
Uninitialised byte(s) found during client check request
- at 0x........: croak (varinfo6.c:35)
+ at 0x........: croak (varinfo6.c:34)
by 0x........: mainSort (varinfo6.c:2999)
by 0x........: BZ2_blockSort (varinfo6.c:3143)
by 0x........: BZ2_compressBlock (varinfo6.c:4072)
@@ -10,7 +10,7 @@
Address 0x........ is on thread 1's stack
Uninitialised byte(s) found during client check request
- at 0x........: croak (varinfo6.c:35)
+ at 0x........: croak (varinfo6.c:34)
by 0x........: BZ2_decompress (varinfo6.c:1699)
by 0x........: BZ2_bzDecompress (varinfo6.c:5230)
by 0x........: BZ2_bzBuffToBuffDecompress (varinfo6.c:5715)
From: Rich C. <rc...@wi...> - 2012-06-13 02:55:41
valgrind revision:
VEX revision:
C compiler: gcc (SUSE Linux) 4.5.1 20101208 [gcc-4_5-branch revision 167585]
Assembler: GNU assembler (GNU Binutils; openSUSE 11.4) 2.21
C library: GNU C Library stable release version 2.11.3 (20110203)
uname -mrs: Linux 2.6.37.6-0.7-desktop x86_64
Vendor version: Welcome to openSUSE 11.4 "Celadon" - Kernel %r (%t).

Nightly build on ultra ( gcc 4.5.1 Linux 2.6.37.6-0.7-desktop x86_64 )
Started at 2012-06-12 21:30:01 CDT
Ended at 2012-06-12 21:49:17 CDT
Results differ from 24 hours ago

Checking out valgrind source tree ... failed
Last 20 lines of verbose log follow
echo Checking out valgrind source tree ...
svn co svn://svn.valgrind.org/valgrind/trunk -r {2012-06-12T21:30:01} valgrind-new && svn update -r {2012-06-12T21:30:01} valgrind-new/VEX
+ eval 'svn co svn://svn.valgrind.org/valgrind/trunk -r {2012-06-12T21:30:01} valgrind-new && svn update -r {2012-06-12T21:30:01} valgrind-new/VEX'
++ svn co svn://svn.valgrind.org/valgrind/trunk -r '{2012-06-12T21:30:01}' valgrind-new
svn: Unknown hostname 'svn.valgrind.org'

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 610 tests, 1 stderr failure, 1 stdout failure, 6 stderrB failures, 0 stdoutB failures, 0 post failures ==
gdbserver_tests/mcbreak (stderrB)
gdbserver_tests/mcclean_after_fork (stderrB)
gdbserver_tests/mcleak (stderrB)
gdbserver_tests/mcmain_pic (stderrB)
gdbserver_tests/mcvabits (stderrB)
gdbserver_tests/mssnapshot (stderrB)
memcheck/tests/origin5-bz2 (stderr)
none/tests/res_search (stdout)

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Tue Jun 12 21:48:32 2012
--- new.short Tue Jun 12 21:49:17 2012
***************
*** 1,18 ****
! Checking out valgrind source tree ... done
! Configuring valgrind ... done
! Building valgrind ... done
! Running regression tests ... failed
! Regression test results follow
!
! == 610 tests, 1 stderr failure, 1 stdout failure, 6 stderrB failures, 0 stdoutB failures, 0 post failures ==
! gdbserver_tests/mcbreak (stderrB)
! gdbserver_tests/mcclean_after_fork (stderrB)
! gdbserver_tests/mcleak (stderrB)
! gdbserver_tests/mcmain_pic (stderrB)
! gdbserver_tests/mcvabits (stderrB)
! gdbserver_tests/mssnapshot (stderrB)
! memcheck/tests/origin5-bz2 (stderr)
! none/tests/res_search (stdout)
--- 1,9 ----
! Checking out valgrind source tree ... failed
! Last 20 lines of verbose log follow echo
+ Checking out valgrind source tree ...
svn co svn://svn.valgrind.org/valgrind/trunk -r {2012-06-12T21:30:01} valgrind-new && svn update -r {2012-06-12T21:30:01} valgrind-new/VEX
+ + eval 'svn co svn://svn.valgrind.org/valgrind/trunk -r {2012-06-12T21:30:01} valgrind-new && svn update -r {2012-06-12T21:30:01} valgrind-new/VEX'
+ ++ svn co svn://svn.valgrind.org/valgrind/trunk -r '{2012-06-12T21:30:01}' valgrind-new
+ svn: Unknown hostname 'svn.valgrind.org'

=================================================
./valgrind-old/gdbserver_tests/mcbreak.stderrB.diff
=================================================
--- mcbreak.stderrB.exp 2012-06-12 21:31:39.591948039 -0500
+++ mcbreak.stderrB.out 2012-06-12 21:38:30.170507616 -0500
@@ -1,5 +1,7 @@
 relaying data between gdb and process ....
 vgdb-error value changed from 0 to 999999
+
+
 vgdb-error value changed from 999999 to 0
 n_errs_found 1 n_errs_shown 1 (vgdb-error 0)
 vgdb-error value changed from 0 to 0

=================================================
./valgrind-old/gdbserver_tests/mcclean_after_fork.stderrB.diff
=================================================
--- mcclean_after_fork.stderrB.exp 2012-06-12 21:31:39.591948039 -0500
+++ mcclean_after_fork.stderrB.out 2012-06-12 21:38:31.834700388 -0500
@@ -1,4 +1,6 @@
 relaying data between gdb and process ....
 vgdb-error value changed from 0 to 999999
+
+
 monitor command request to kill this process
 Remote connection closed

=================================================
./valgrind-old/gdbserver_tests/mcleak.stderrB.diff
=================================================
--- mcleak.stderrB.exp 2012-06-12 21:31:39.586947524 -0500
+++ mcleak.stderrB.out 2012-06-12 21:38:50.375848102 -0500
@@ -1,5 +1,7 @@
 relaying data between gdb and process ....
 vgdb-error value changed from 0 to 999999
+
+
 10 bytes in 1 blocks are still reachable in loss record ... of ...
 at 0x........: malloc (vg_replace_malloc.c:...)
 by 0x........: f (leak-delta.c:14)

=================================================
./valgrind-old/gdbserver_tests/mcmain_pic.stderrB.diff
=================================================
--- mcmain_pic.stderrB.exp 2012-06-12 21:31:39.596948573 -0500
+++ mcmain_pic.stderrB.out 2012-06-12 21:38:51.929028015 -0500
@@ -1,3 +1,5 @@
 relaying data between gdb and process ....
 vgdb-error value changed from 0 to 999999
+
+
 Remote connection closed

=================================================
./valgrind-old/gdbserver_tests/mcvabits.stderrB.diff
=================================================
--- mcvabits.stderrB.exp 2012-06-12 21:31:39.598948797 -0500
+++ mcvabits.stderrB.out 2012-06-12 21:38:56.795591734 -0500
@@ -1,5 +1,7 @@
 relaying data between gdb and process ....
 vgdb-error value changed from 0 to 999999
+
+
 Address 0x........ len 10 addressable
 Address 0x........ is 0 bytes inside data symbol "undefined"
 Address 0x........ len 10 defined

=================================================
./valgrind-old/gdbserver_tests/mssnapshot.stderrB.diff
=================================================
--- mssnapshot.stderrB.exp 2012-06-12 21:31:39.596948573 -0500
+++ mssnapshot.stderrB.out 2012-06-12 21:38:59.842944723 -0500
@@ -1,5 +1,9 @@
 relaying data between gdb and process ....
 vgdb-error value changed from 0 to 999999
+
+
+Missing separate debuginfo for /lib64/libc.so.6
+Try: zypper install -C "debuginfo(build-id)=92ec8fe859846a62345f74696ab349721415587a"
 general valgrind monitor commands:
 help [debug] : monitor command help. With debug: + debugging commands
 v.wait [<ms>] : sleep <ms> (default 0) then continue

=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc212-s390x
=================================================
--- origin5-bz2.stderr.exp-glibc212-s390x 2012-06-12 21:33:09.652380235 -0500
+++ origin5-bz2.stderr.out 2012-06-12 21:40:19.740193672 -0500
@@ -75,17 +75,6 @@
 at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
- at 0x........: mainSort (origin5-bz2.c:2859)
- by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
- by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
- by 0x........: handle_compress (origin5-bz2.c:4753)
- by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
- by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
- by 0x........: main (origin5-bz2.c:6484)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
-
-Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2963)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -131,6 +120,12 @@
 Conditional jump or move depends on uninitialised value(s)
 at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)

=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc234-s390x
=================================================
--- origin5-bz2.stderr.exp-glibc234-s390x 2012-06-12 21:33:09.635378267 -0500
+++ origin5-bz2.stderr.out 2012-06-12 21:40:19.740193672 -0500
@@ -120,6 +120,12 @@
 Conditional jump or move depends on uninitialised value(s)
 at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)

=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc25-amd64
=================================================
--- origin5-bz2.stderr.exp-glibc25-amd64 2012-06-12 21:33:09.581372494 -0500
+++ origin5-bz2.stderr.out 2012-06-12 21:40:19.740193672 -0500
@@ -120,6 +120,12 @@
 Conditional jump or move depends on uninitialised value(s)
 at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)

=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc25-x86
=================================================
--- origin5-bz2.stderr.exp-glibc25-x86 2012-06-12 21:33:09.609375253 -0500
+++ origin5-bz2.stderr.out 2012-06-12 21:40:19.740193672 -0500
@@ -12,7 +12,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
 by 0x........: handle_compress (origin5-bz2.c:4750)
 by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
@@ -21,7 +21,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
 by 0x........: handle_compress (origin5-bz2.c:4750)
 by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
@@ -30,7 +30,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2820)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -41,7 +41,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2823)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -52,7 +52,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2854)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -63,7 +63,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2858)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -74,7 +74,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2963)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -85,7 +85,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2964)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -96,7 +96,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: fallbackSort (origin5-bz2.c:2269)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3116)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -107,7 +107,7 @@
 Uninitialised value was created by a client request
 at 0x........: main (origin5-bz2.c:6479)
 
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
 at 0x........: fallbackSort (origin5-bz2.c:2275)
 by 0x........: BZ2_blockSort (origin5-bz2.c:3116)
 by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -120,6 +120,12 @@
 Conditional jump or move depends on uninitialised value(s)
 at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
<truncated beyond 100 lines>

=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc27-ppc64
=================================================
--- origin5-bz2.stderr.exp-glibc27-ppc64 2012-06-12 21:33:09.622376761 -0500
+++ origin5-bz2.stderr.out 2012-06-12 21:40:19.740193672 -0500
@@ -1,7 +1,7 @@
 Conditional jump or move depends on uninitialised value(s)
 at 0x........: main (origin5-bz2.c:6481)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Conditional jump or move depends on uninitialised value(s)
 at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -10,7 +10,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -19,7 +19,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -28,7 +28,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2820)
@@ -39,7 +39,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2823)
@@ -50,7 +50,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2854)
@@ -61,7 +61,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2858)
@@ -72,7 +72,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2963)
@@ -83,7 +83,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: mainSort (origin5-bz2.c:2964)
@@ -94,7 +94,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
 at 0x........: fallbackSort (origin5-bz2.c:2269)
@@ -105,7 +105,7 @@
 by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
 by 0x........: main (origin5-bz2.c:6484)
 Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
 
 Use of uninitialised value of size 8
<truncated beyond 100 lines>

=================================================
./valgrind-old/none/tests/res_search.stdout.diff
=================================================
--- res_search.stdout.exp 2012-06-12 21:35:41.547975282 -0500
+++ res_search.stdout.out 2012-06-12 21:43:58.790546547 -0500
@@ -1 +1 @@
-Success!
+Error: res_search()
From: Tom H. <to...@co...> - 2012-06-13 02:51:03
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.4.5 20101112 (Red Hat 4.4.5-2)
Assembler: GNU assembler version 2.20.51.0.2-20.fc13 20091009
C library: GNU C Library stable release version 2.12.2
uname -mrs: Linux 3.3.4-5.fc17.x86_64 x86_64
Vendor version: Fedora release 13 (Goddard)

Nightly build on bristol ( x86_64, Fedora 13 )
Started at 2012-06-13 03:21:50 BST
Ended at 2012-06-13 03:50:47 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 602 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
helgrind/tests/pth_barrier3 (stderr)
From: Tom H. <to...@co...> - 2012-06-13 02:50:37
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4)
Assembler: GNU assembler version 2.20.51.0.7-8.fc14 20100318
C library: GNU C Library stable release version 2.13
uname -mrs: Linux 3.3.4-5.fc17.x86_64 x86_64
Vendor version: Fedora release 14 (Laughlin)

Nightly build on bristol ( x86_64, Fedora 14 )
Started at 2012-06-13 03:11:55 BST
Ended at 2012-06-13 03:50:21 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 617 tests, 1 stderr failure, 0 stdout failures, 1 stderrB failure, 2 stdoutB failures, 0 post failures ==
gdbserver_tests/mcinfcallWSRU (stderrB)
gdbserver_tests/nlcontrolc (stdoutB)
gdbserver_tests/nlpasssigalrm (stdoutB)
memcheck/tests/origin5-bz2 (stderr)
From: Tom H. <to...@co...> - 2012-06-13 02:39:57
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2)
Assembler: GNU assembler version 2.21.51.0.6-6.fc15 20110118
C library: GNU C Library stable release version 2.14.1
uname -mrs: Linux 3.3.4-5.fc17.x86_64 x86_64
Vendor version: Fedora release 15 (Lovelock)

Nightly build on bristol ( x86_64, Fedora 15 )
Started at 2012-06-13 03:03:11 BST
Ended at 2012-06-13 03:39:40 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 618 tests, 2 stderr failures, 0 stdout failures, 1 stderrB failure, 2 stdoutB failures, 0 post failures ==
gdbserver_tests/mcinfcallWSRU (stderrB)
gdbserver_tests/nlcontrolc (stdoutB)
gdbserver_tests/nlpasssigalrm (stdoutB)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/overlap (stderr)
From: Tom H. <to...@co...> - 2012-06-13 02:28:43
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2)
Assembler: GNU assembler version 2.21.53.0.1-6.fc16 20110716
C library: GNU C Library development release version 2.14.90
uname -mrs: Linux 3.3.4-5.fc17.x86_64 x86_64
Vendor version: Fedora release 16 (Verne)

Nightly build on bristol ( x86_64, Fedora 16 )
Started at 2012-06-13 02:51:45 BST
Ended at 2012-06-13 03:28:26 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 618 tests, 3 stderr failures, 0 stdout failures, 1 stderrB failure, 2 stdoutB failures, 0 post failures ==
gdbserver_tests/mcinfcallWSRU (stderrB)
gdbserver_tests/nlcontrolc (stdoutB)
gdbserver_tests/nlpasssigalrm (stdoutB)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/overlap (stderr)
memcheck/tests/str_tester (stderr)
From: Christian B. <bor...@de...> - 2012-06-13 02:23:30
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.5.3 20110121 (Red Hat 4.5.3-5)
Assembler: GNU assembler version 2.20.51.0.7-4bb6.fc13 20100318
C library: GNU C Library stable release version 2.12.1
uname -mrs: Linux 3.3.4-53.x.20120504-s390xperformance s390x
Vendor version: unknown

Nightly build on fedora390 ( Fedora 13/14/15 mix with gcc 3.5.3 on z196 (s390x) )
Started at 2012-06-13 03:45:02 CEST
Ended at 2012-06-13 04:23:41 CEST
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 543 tests, 7 stderr failures, 0 stdout failures, 1 stderrB failure, 1 stdoutB failure, 0 post failures ==
gdbserver_tests/mcinvokeWS (stdoutB)
gdbserver_tests/mcinvokeWS (stderrB)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
drd/tests/tc04_free_lock (stderr)
drd/tests/tc09_bad_unlock (stderr)
drd/tests/tc21_pthonce (stderr)

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 543 tests, 7 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
drd/tests/tc04_free_lock (stderr)
drd/tests/tc09_bad_unlock (stderr)
drd/tests/tc21_pthonce (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Wed Jun 13 04:10:19 2012
--- new.short Wed Jun 13 04:23:41 2012
***************
*** 8,10 ****
! == 543 tests, 7 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
  helgrind/tests/tc18_semabuse (stderr)
--- 8,12 ----
! == 543 tests, 7 stderr failures, 0 stdout failures, 1 stderrB failure, 1 stdoutB failure, 0 post failures ==
! gdbserver_tests/mcinvokeWS (stdoutB)
! gdbserver_tests/mcinvokeWS (stderrB)
  helgrind/tests/tc18_semabuse (stderr)
From: Christian B. <bor...@de...> - 2012-06-13 02:12:15
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973]
Assembler: GNU assembler (GNU Binutils; SUSE Linux Enterprise 11) 2.20.0.20100122-0.7.9
C library: GNU C Library stable release version 2.11.1 (20100118)
uname -mrs: Linux 2.6.32.54-0.3-default s390x
Vendor version: Welcome to SUSE Linux Enterprise Server 11 SP1 (s390x) - Kernel %r (%t).

Nightly build on sless390 ( SUSE Linux Enterprise Server 11 SP1 gcc 4.3.4 on z196 (s390x) )
Started at 2012-06-13 03:45:01 CEST
Ended at 2012-06-13 04:12:04 CEST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 544 tests, 4 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
drd/tests/tc04_free_lock (stderr)
drd/tests/tc09_bad_unlock (stderr)
From: Tom H. <to...@co...> - 2012-06-13 02:12:15
valgrind revision: 12632
VEX revision: 2380
C compiler: gcc (GCC) 4.7.0 20120507 (Red Hat 4.7.0-5)
Assembler: GNU assembler version 2.22.52.0.1-10.fc17 20120131
C library: GNU C Library stable release version 2.15
uname -mrs: Linux 3.3.4-5.fc17.x86_64 x86_64
Vendor version: Fedora release 17 (Beefy Miracle)

Nightly build on bristol ( x86_64, Fedora 17 (Beefy Miracle) )
Started at 2012-06-13 02:41:41 BST
Ended at 2012-06-13 03:11:57 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 618 tests, 9 stderr failures, 1 stdout failure, 1 stderrB failure, 2 stdoutB failures, 0 post failures ==
gdbserver_tests/mcinfcallWSRU (stderr)
gdbserver_tests/mcinfcallWSRU (stderrB)
gdbserver_tests/mcmain_pic (stderr)
gdbserver_tests/nlcontrolc (stdoutB)
gdbserver_tests/nlpasssigalrm (stdoutB)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/overlap (stderr)
memcheck/tests/str_tester (stderr)
drd/tests/bar_bad (stderr)
drd/tests/bar_bad_xml (stderr)
drd/tests/pth_cancel_locked (stderr)
exp-sgcheck/tests/preen_invars (stdout)
exp-sgcheck/tests/preen_invars (stderr)