From: Jerry B. <je...@ge...> - 2013-10-16 17:48:55
Hi Mark,

Thanks for the note. Here's the version I've been using:

    valgrind-3.8.1-15.fc19.x86_64

I wasn't able to find a more recent Fedora package. We run on servers at
Rackspace. There is an older generation of CPUs there and valgrind works
fine on them. It is the newer (about a year old) generation on which
valgrind doesn't work. This was a big surprise, since I think they are
stock processors. Might there be something in the VM/hypervisor layer
which is not translating correctly?

I did just go the SVN route. The results look similar.

valgrind]$ ll /usr/bin/valgrind
-rwxr-xr-x. 1 root root 54427 Oct 16 17:44 /usr/bin/valgrind
valgrind]$ valgrind ls -l
==21888== Memcheck, a memory error detector
==21888== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==21888== Using Valgrind-3.9.0.SVN and LibVEX; rerun with -h for copyright info
==21888== Command: ls -l
==21888==

vex: priv/main_main.c:338 (LibVEX_Translate): Assertion
`are_valid_hwcaps(VexArchAMD64, vta->archinfo_host.hwcaps)' failed.
vex storage: T total 0 bytes allocated
vex storage: P total 0 bytes allocated

valgrind: the 'impossible' happened:
   LibVEX called failure_exit().

==21888==    at 0x3804FBAC: report_and_quit (m_libcassert.c:260)
==21888==    by 0x3804FDB1: vgPlain_core_panic_at (m_libcassert.c:350)
==21888==    by 0x3804FDDA: vgPlain_core_panic (m_libcassert.c:360)
==21888==    by 0x38067E32: failure_exit (m_translate.c:731)
==21888==    by 0x380FA1F8: vex_assert_fail (main_util.c:219)
==21888==    by 0x380F90AF: LibVEX_Translate (main_main.c:338)
==21888==    by 0x3806A24E: vgPlain_translate (m_translate.c:1602)
==21888==    by 0x3809B596: vgPlain_scheduler (scheduler.c:1004)
==21888==    by 0x380AA6FC: run_a_thread_NORETURN (syswrap-linux.c:103)

sched status:
  running_tid=1

Thread 1: status = VgTs_Runnable
==21888==    at 0x315C601420: ??? (in /usr/lib64/ld-2.17.so)
==21888==    by 0x1: ???
==21888==    by 0xFFF0005CA: ???
==21888==    by 0xFFF0005CD: ???

Note: see also the FAQ in the source distribution. It contains workarounds
to several common problems. In particular, if Valgrind aborted or crashed
after identifying problems in your program, there's a good chance that
fixing those problems will prevent Valgrind aborting or crashing,
especially if it happened in m_mallocfree.c.

If that doesn't help, please report this bug to: www.valgrind.org
In the bug report, send all the above text, the valgrind version, and
what OS and version you are using. Thanks.

Thanks.
Jerry

On Wed, Oct 16, 2013 at 6:55 AM, Mark Wielaard <mj...@re...> wrote:
> On Tue, 2013-10-15 at 14:48 -0700, Jerry Blakley wrote:
> > I'm running valgrind on fedora 19. I checked libvex code which
> > executes the hwcaps code, but can't find anything in particular.
> >
> > vex: priv/main_main.c:319 (LibVEX_Translate): Assertion
> > `are_valid_hwcaps(VexArchAMD64, vta->archinfo_host.hwcaps)' failed.
>
> This will happen when valgrind/VEX doesn't understand the (combination
> of) cpuid capabilities of your CPU.
>
> Specifically, show_hwcaps_amd64 () in VEX/priv/main_main.c should return
> a valid string describing the configuration (not NULL).
>
> The version in fedora valgrind 3.8.1 only handles fixed combinations of
> flags; the version in valgrind SVN is more flexible.
> > > flags : fpu de tsc msr pae cx8 cmov pat clflush mmx fxsr sse sse2 ht
> > > syscall nx mmxext fxsr_opt lm rep_good nopl pni pclmulqdq ssse3 fma
> > > cx16 sse4_1 sse4_2 popcnt aes f16c hypervisor lahf_lm cmp_legacy
> > > extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch xop fma4 tce
> > > tbm perfctr_core perfctr_nb arat cpb hw_pstate
>
> If I am reading that right, you don't have AVX, RDTSCP or BMI. You do
> have SSE3 (pni), CX16 and LZCNT (abm).
>
> Which should give you:
>
>    case VEX_HWCAPS_AMD64_SSE3 | VEX_HWCAPS_AMD64_CX16
>         | VEX_HWCAPS_AMD64_LZCNT:
>       return "amd64-sse3-cx16-lzcnt";
>
> So I am probably missing some detail/flag.
>
> Which fedora valgrind package version is this exactly?
> You can tell by running "rpm -q valgrind".
> Have you tried valgrind built from SVN?
>
> Thanks,
>
> Mark
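To make the difference Mark describes concrete, here is a minimal sketch of
a fixed-combination lookup versus a flexible per-bit string builder. This is
not the real show_hwcaps_amd64() from VEX/priv/main_main.c: only the
VEX_HWCAPS_AMD64_* names come from the case statement quoted above; the bit
values and both function bodies are illustrative assumptions.

    /* Illustrative sketch only -- not the real show_hwcaps_amd64().
     * The VEX_HWCAPS_AMD64_* names come from the quoted case statement;
     * the bit positions and function bodies are made up for the example. */
    #include <stdio.h>
    #include <string.h>

    #define VEX_HWCAPS_AMD64_SSE3   (1u << 0)   /* assumed bit positions */
    #define VEX_HWCAPS_AMD64_CX16   (1u << 1)
    #define VEX_HWCAPS_AMD64_LZCNT  (1u << 2)
    #define VEX_HWCAPS_AMD64_AVX    (1u << 3)

    /* 3.8.1-style: only whitelisted exact combinations are recognised.
     * Anything else maps to NULL, after which LibVEX_Translate trips the
     * are_valid_hwcaps assertion shown in the log above. */
    static const char* show_hwcaps_fixed(unsigned int hwcaps)
    {
       switch (hwcaps) {
          case 0:
             return "amd64-baseline";
          case VEX_HWCAPS_AMD64_SSE3 | VEX_HWCAPS_AMD64_CX16
               | VEX_HWCAPS_AMD64_LZCNT:
             return "amd64-sse3-cx16-lzcnt";
          default:
             return NULL;   /* unrecognised combination */
       }
    }

    /* SVN-style flexibility: build the description from whichever bits
     * are present, so an unanticipated combination still yields a valid
     * (non-NULL) string. */
    static const char* show_hwcaps_flexible(unsigned int hwcaps)
    {
       static char buf[64];
       strcpy(buf, "amd64");
       if (hwcaps & VEX_HWCAPS_AMD64_SSE3)  strcat(buf, "-sse3");
       if (hwcaps & VEX_HWCAPS_AMD64_CX16)  strcat(buf, "-cx16");
       if (hwcaps & VEX_HWCAPS_AMD64_LZCNT) strcat(buf, "-lzcnt");
       if (hwcaps & VEX_HWCAPS_AMD64_AVX)   strcat(buf, "-avx");
       return buf;
    }

    int main(void)
    {
       /* e.g. a CPU where sse3 and lzcnt are seen, but cx16 is not */
       unsigned int caps = VEX_HWCAPS_AMD64_SSE3 | VEX_HWCAPS_AMD64_LZCNT;
       const char* fixed = show_hwcaps_fixed(caps);
       printf("fixed:    %s\n", fixed ? fixed : "(null -> assertion)");
       printf("flexible: %s\n", show_hwcaps_flexible(caps));
       return 0;
    }

Compiled with gcc, this prints "(null -> assertion)" for the fixed lookup and
"amd64-sse3-lzcnt" for the flexible builder, which is the whitelist-versus-
per-bit distinction Mark draws between the 3.8.1 package and SVN.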