From: Tom H. <to...@co...> - 2006-07-06 14:48:34
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> So: the moral is: if doing a syscall which may cause _some other_
> thread to die, I must retain the lock to avoid such a deadlock.
> OTOH, if doing a syscall which may cause _me_ to die, I must release
> the lock so that others may pick it up in that case.
But all syscalls may cause you to die if you allow for being kill -9'd
in the middle of one... I guess that is only likely for a blocking
call?
> So what do we do for a syscall which causes some arbitrarily (kernel-)
> chosen thread in the process to die? Do any such syscalls exist?
That would be an evil system call.
> All very ugly. Our locking strategy imposes the non-obvious
> requirement of knowing how each syscall affects the liveness of
> each thread in the process.
I don't believe any threaded program is robust in the face of hard
killing of threads in this way - even without valgrind you would be
hosed if the thread that was killed held a lock at the time.
Interestingly, on Linux at least there is a possible solution - if we
were using futexes instead of the pipe (there was such an
implementation at one point) then we could use the new robust
futexes stuff to make sure the lock is released if it was held
by a thread that died.
I'm not sure if that is safe though? Is there any guarantee that the
valgrind internal data structures are safe to handle in such a
circumstance?
Catching SIGCHLD would potentially allow us to do something similar
but would face similar problems about data structure state.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2006-07-06 14:22:10
|
There's an interesting conceptual hole in Valgrind's handling of client
syscalls in the presence of threads, which may have practical
consequences. I don't know what to do about it.

Anyway: V allows native thread libraries to run, but serialises
execution using a pipe-based lock (in scheduler/sema.c), so that only
one thread ever runs at once, even on SMPs.

The general case for handling a syscall is:

   do all preparations for the syscall
   drop the lock
   do the syscall
   reacquire the lock

Dropping the lock is necessary so that other thread(s) can get the lock
and hence run, if this syscall should block. Not doing so rapidly leads
to deadlocks.

Dropping the lock (and associated sigmask futzing) is expensive, and so
for syscalls we're sure won't block (eg getpid) there is a cheaper
route:

   do all preparations for the syscall
   do the syscall

ie, the same except we retain the lock the whole time.

---

You'd think the general case is always safe to use, but not so.
Consider a syscall which causes some other thread to die instantly,
which is probably possible with thread_kill(tid, 9) and is certainly
possible on AIX. Then the slow route gives a possible sequence:

   (target thread: waiting to acquire the lock)
   (running thread has the lock)
   running thread: do all preparations for the syscall
   running thread: drop the lock
   target thread:  acquire the lock
   running thread: do the syscall (target thread dies)
   running thread: (try to) reacquire the lock

Now we're hosed; the target thread picked up the lock and then died.
This really happens, especially on SMPs where the target thread has a
good opportunity to run the instant the lock is dropped.

---

So: the moral is: if doing a syscall which may cause _some other_
thread to die, I must retain the lock to avoid such a deadlock.
OTOH, if doing a syscall which may cause _me_ to die, I must release
the lock so that others may pick it up in that case.

So what do we do for a syscall which causes some arbitrarily (kernel-)
chosen thread in the process to die? Do any such syscalls exist?

All very ugly. Our locking strategy imposes the non-obvious
requirement of knowing how each syscall affects the liveness of
each thread in the process.

J |
|
From: Tom H. <th...@cy...> - 2006-07-06 03:22:23
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2006-07-06 03:00:04 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 260 tests, 4 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/tls (stdout)
|
|
From: <js...@ac...> - 2006-07-06 03:01:46
|
Nightly build on phoenix ( SuSE 10.0 ) started at 2006-07-06 03:30:02 BST
Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 235 tests, 4 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-07-06 02:55:42
|
Nightly build on ford ( i686, Fedora Core 4 ) started at 2006-07-06 03:25:07 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 235 tests, 5 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
|
|
From: Tom H. <to...@co...> - 2006-07-06 02:49:23
|
Nightly build on dunsmere ( athlon, Fedora Core 5 ) started at 2006-07-06 03:30:05 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 237 tests, 5 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-07-06 02:32:56
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2006-07-06 03:15:03 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 236 tests, 19 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/xml1 (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-07-06 02:25:39
|
Nightly build on dellow ( x86_64, Fedora Core 5 ) started at 2006-07-06 03:10:07 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 260 tests, 2 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
|
|
From: <sv...@va...> - 2006-07-06 01:54:39
|
Author: sewardj
Date: 2006-07-06 02:54:34 +0100 (Thu, 06 Jul 2006)
New Revision: 5982

Log:
A patch for the "Open POSIX Test Suite" (http://posixtest.sourceforge.net)
version 1.5.1, which makes it possible to run the suite on V and
conveniently compare results against a native run (using the
diff-results script).

Added:
   trunk/auxprogs/posixtestsuite-1.5.1-diff-results
   trunk/auxprogs/posixtestsuite-1.5.1-diff.txt
Modified:
   trunk/auxprogs/Makefile.am

Modified: trunk/auxprogs/Makefile.am
===================================================================
--- trunk/auxprogs/Makefile.am	2006-07-05 22:54:49 UTC (rev 5981)
+++ trunk/auxprogs/Makefile.am	2006-07-06 01:54:34 UTC (rev 5982)
@@ -5,7 +5,9 @@
 
 noinst_SCRIPTS = gen-mdg DotToScc.hs primes.c \
 	gsl16test gsl16-badfree.patch gsl16-wavelet.patch \
-	ppcfround.c ppc64shifts.c libmpiwrap.c mpiwrap_type_test.c
+	ppcfround.c ppc64shifts.c libmpiwrap.c mpiwrap_type_test.c \
+	posixtestsuite-1.5.1-diff-results \
+	posixtestsuite-1.5.1-diff.txt
 
 EXTRA_DIST = $(noinst_SCRIPTS)
 
Added: trunk/auxprogs/posixtestsuite-1.5.1-diff-results
===================================================================
--- trunk/auxprogs/posixtestsuite-1.5.1-diff-results	(rev 0)
+++ trunk/auxprogs/posixtestsuite-1.5.1-diff-results	2006-07-06 01:54:34 UTC (rev 5982)
@@ -0,0 +1,21 @@
+#!/bin/sh
+
+usage()
+{
+   cat <<EOF
+
+Usage: $0 result_file_1 result_file_2
+
+EOF
+}
+
+if [ $# != 2 ]; then
+   usage;
+   exit 1;
+else
+   echo $1 $2;
+   rm -f tmptmp_1 tmptmp_2;
+   grep -v GRIND= $1 > tmptmp_1;
+   grep -v GRIND= $2 > tmptmp_2;
+   diff -U2 tmptmp_1 tmptmp_2;
+fi

Property changes on: trunk/auxprogs/posixtestsuite-1.5.1-diff-results
___________________________________________________________________
Name: svn:executable
   + *

Added: trunk/auxprogs/posixtestsuite-1.5.1-diff.txt
===================================================================
--- trunk/auxprogs/posixtestsuite-1.5.1-diff.txt	(rev 0)
+++ trunk/auxprogs/posixtestsuite-1.5.1-diff.txt	2006-07-06 01:54:34 UTC (rev 5982)
@@ -0,0 +1,149 @@
+Only in posixtestsuite/conformance/interfaces/pthread_spin_lock: 1-1
+diff -U3 -r posixtestsuite.orig/conformance/interfaces/pthread_spin_lock/1-1.c posixtestsuite/conformance/interfaces/pthread_spin_lock/1-1.c
+--- posixtestsuite.orig/conformance/interfaces/pthread_spin_lock/1-1.c	2003-07-10 02:19:34.000000000 +0100
++++ posixtestsuite/conformance/interfaces/pthread_spin_lock/1-1.c	2006-07-05 15:38:27.000000000 +0100
+@@ -39,7 +39,8 @@
+ static void sig_handler()
+ {
+ 	/* Just return */
+-	pthread_exit(0);
++	/* pthread_exit(0); */
++	exit(0);
+ 	return;
+ }
+ 
+diff -U3 -r posixtestsuite.orig/LDFLAGS posixtestsuite/LDFLAGS
+--- posixtestsuite.orig/LDFLAGS	2005-06-03 02:32:42.000000000 +0100
++++ posixtestsuite/LDFLAGS	2006-07-05 13:03:09.000000000 +0100
+@@ -8,7 +8,7 @@
+ #-lpthread -D_GNU_SOURCE
+ #
+ #Recommended flags:
+-#-D_XOPEN_SOURCE=600 -lpthread -lrt -lm
++-D_XOPEN_SOURCE=600 -lpthread -lrt -lm
+ #
+ # For use with Linux, you may try the following flags to
+ # allow for the NPTL-specific compilation (used in some test cases)
+diff -U3 -r posixtestsuite.orig/locate-test posixtestsuite/locate-test
+--- posixtestsuite.orig/locate-test	2005-03-14 13:53:50.000000000 +0000
++++ posixtestsuite/locate-test	2006-07-05 13:16:52.000000000 +0100
+@@ -60,19 +60,19 @@
+ 		shift;
+ 		;;
+ 	"--fmake")
+-		find functional/ -type f -maxdepth 2 -mindepth 2 -name "Makefile" -exec dirname '{}' ';'
++		find functional/ -maxdepth 2 -mindepth 2 -type f -name "Makefile" -exec dirname '{}' ';'
+ 		exit 0;
+ 		;;
+ 	"--frun")
+-		find functional/ -type f -maxdepth 2 -mindepth 2 -name "run.sh" -exec dirname '{}' ';' 
++		find functional/ -maxdepth 2 -mindepth 2 -type f -name "run.sh" -exec dirname '{}' ';' 
+ 		exit 0;
+ 		;;
+ 	"--smake")
+-		find stress/ -type f -maxdepth 2 -mindepth 2 -name "Makefile" -exec dirname '{}' ';'
++		find stress/ -maxdepth 2 -mindepth 2 -type f -name "Makefile" -exec dirname '{}' ';'
+ 		exit 0;
+ 		;;
+ 	"--srun")
+-		find stress/ -type f -maxdepth 2 -mindepth 2 -name "run.sh" -exec dirname '{}' ';'
++		find stress/ -maxdepth 2 -mindepth 2 -type f -name "run.sh" -exec dirname '{}' ';'
+ 		exit 0;
+ 		;;
+ 	"--help")
+diff -U3 -r posixtestsuite.orig/Makefile posixtestsuite/Makefile
+--- posixtestsuite.orig/Makefile	2005-03-14 13:53:41.000000000 +0000
++++ posixtestsuite/Makefile	2006-07-05 16:28:57.000000000 +0100
+@@ -19,7 +19,7 @@
+ 
+ # Added tests timeout from Sebastien Decugis (http://nptl.bullopensource.org) 
+ # Expiration delay is 120 seconds
+-TIMEOUT_VAL = 120
++TIMEOUT_VAL = 15
+ # The following value is the shell return value of a timedout application.
+ # with the bash shell, the ret val of a killed application is 128 + signum
+ # and under Linux, SIGALRM=14, so we have (Linux+bash) 142.
+@@ -99,7 +99,8 @@
+ %.run-test: %.test $(top_builddir)/t0
+ 	@COMPLOG=$(LOGFILE).$$$$; \
+ 	[ -f $< ] || exit 0; \
+-	$(TIMEOUT) $< > $$COMPLOG 2>&1; \
++	echo "$(@:.run-test=): GRIND=$(GRIND)" | tee -a $(LOGFILE); \
++	$(TIMEOUT) $(GRIND) $< > $$COMPLOG 2>&1; \
+ 	RESULT=$$?; \
+ 	if [ $$RESULT -eq 1 ]; \
+ 	then \
+@@ -141,11 +142,12 @@
+ 	@echo Building timeout helper files; \
+ 	$(CC) -O2 -o $@ $< ; \
+ 	echo `$(top_builddir)/t0 0; echo $$?` > $(top_builddir)/t0.val
+-	
++
+ %.run-test: %.sh $(top_builddir)/t0
+ 	@COMPLOG=$(LOGFILE).$$$$; \
++	echo "$(@:.run-test=): GRIND=$(GRIND)" | tee -a $(LOGFILE); \
+ 	chmod +x $<; \
+-	$(TIMEOUT) $< > $$COMPLOG 2>&1; \
++	$(TIMEOUT) $(GRIND) $< > $$COMPLOG 2>&1; \
+ 	RESULT=$$?; \
+ 	if [ $$RESULT -eq 0 ]; \
+ 	then \
+diff -U3 -r posixtestsuite.orig/run_tests posixtestsuite/run_tests
+--- posixtestsuite.orig/run_tests	2004-12-16 09:56:18.000000000 +0000
++++ posixtestsuite/run_tests	2006-07-05 19:06:48.000000000 +0100
+@@ -12,11 +12,14 @@
+ usage()
+ {
+ 	cat <<EOF 
+-Usage: $0 [AIO|MEM|MSG|SEM|SIG|THR|TMR|TPS]
++Usage: $0 [AIO|MEM|MSG|SEM|SIG|THR|TMR|TPS |ALL]
+ 
+ Build and run the tests for POSIX area specified by the 3 letter tag
+ in the POSIX spec
+ 
++Optionally, set env variable GRIND to be the Valgrind and args used
++to run the tests (eg, GRIND="vTRUNK --tool=none").
++
+ EOF
+ }
+ 
+@@ -64,6 +67,39 @@
+ 	runtests "$BASEDIR/m*map"
+ 	runtests "$BASEDIR/shm_*"
+ 	;;
++
++
++	ALL) echo "Executing all tests"
++	echo "Executing asynchronous I/O tests"
++	runtests "$BASEDIR/aio_*"
++	runtests "$BASEDIR/lio_listio"
++	echo "Executing signals tests"
++	runtests "$BASEDIR/sig*"
++	runtests $BASEDIR/raise
++	runtests $BASEDIR/kill
++	runtests $BASEDIR/killpg
++	runtests $BASEDIR/pthread_kill
++	runtests $BASEDIR/pthread_sigmask
++	echo "Executing semaphores tests"
++	runtests "$BASEDIR/sem*"
++	echo "Executing threads tests"
++	runtests "$BASEDIR/pthread_*"
++	echo "Executing timers and clocks tests"
++	runtests "$BASEDIR/time*"
++	runtests "$BASEDIR/*time"
++	runtests "$BASEDIR/clock*"
++	runtests $BASEDIR/nanosleep
++	echo "Executing message queues tests"
++	runtests "$BASEDIR/mq_*"
++	echo "Executing process and thread scheduling tests"
++	runtests "$BASEDIR/*sched*"
++	echo "Executing mapped, process and shared memory tests"
++	runtests "$BASEDIR/m*lock*"
++	runtests "$BASEDIR/m*map"
++	runtests "$BASEDIR/shm_*"
++	;;
++
++
+ 	*)	usage
+ 		exit 1
+ 		;;
|