From: Philippe W. <phi...@sk...> - 2011-03-03 22:56:34
The complete helgrind code is not that big: you do not have to read the whole of valgrind to see what is happening before and after the helgrind code is called :). So the best approach is to read the full set of helgrind sources, which are in the helgrind directory, and (mostly) ignore what is in the other directories.

Philippe
From: Philippe W. <phi...@sk...> - 2011-03-03 22:47:29
With the recently checked-in changes, helgrind 3.7.0 SVN works faster and uses less memory (when using --track-lockorders=no). So I have looked at the memory used by the laog data structure when track-lockorders is yes.

I have (temporarily) added a gdbserver monitor command to output (on demand) the state of the arenas, plus the whole of the various laog data structures. While the arena statistics show that univ_laog takes about 1.4 GB of memory, dumping the data structures in printable layout only takes about 25 MB. I have copied below the output of the various data structures (the big ones truncated).

I do not understand where the memory is used. Any hint about what is missing to obtain a printout matching the 1.4 GB of memory?
All_Data_Structures (caller = "mon cmd") {
admin_threads (4 records) {
Thread 0x23EE3EF8 {
locksetA 67201
locksetW 67201
}
Thread 0x23F18E48 {
locksetA 67544
locksetW 67544
}
Thread 0x17AC51A0 {
locksetA 0
locksetW 0
}
Thread 0x42C68E0 {
locksetA 94609
locksetW 94609
}
}
map_threads (4 entries) {
coretid 1 -> Thread 0x42C68E0
coretid 2 -> Thread 0x17AC51A0
coretid 3 -> Thread 0x23F18E48
coretid 4 -> Thread 0x23EE3EF8
}
admin_locks (135689 records) {
Lock 0x23D57A10 (ga 0x26ef3544) {
unique 135798
kind nonRec
heldW no
heldBy 0x0
}
Lock 0x22FB3690 (ga 0x26ef66e4) {
unique 135797
kind nonRec
heldW no
heldBy 0x0
}
Lock 0x229A8038 (ga 0x26ef4fac) {
unique 135796
kind nonRec
heldW no
heldBy 0x0
}
......... (plenty of records removed here)
Lock 0x434A518 (ga 0x11651524) {
unique 1
kind mbRec
heldW no
heldBy 0x0
}
Lock 0x42C4FA8 (ga 0x381fcf80) {
unique 0
kind nonRec
heldW no
heldBy 0x0
}
}
map_locks (135689 entries) {
guest 0x441B240 -> Lock 0x4386108
guest 0x7EC1CA0 -> Lock 0x4352528
.... plenty of lines removed here
guest 0x26EF51D4 -> Lock 0x23E61140
guest 0x26EF66E4 -> Lock 0x22FB3690
guest 0x381FCF80 -> Lock 0x42C4FA8
guest 0xFE8AA04C -> Lock 0x22A8CA00
}
}
(In the below data structure, there are 27,431 nodes, 29,091 'inn' entries and 29,091 'out' entries.)
laog (requested by mon cmd) {
node 0x434A518:
inn 0x4360458
inn 0x438bf60
inn 0x438c848
inn 0x438c8e8
inn 0x43919e8
inn 0x4395a18
inn 0x4397c50
inn 0x4398330
inn 0x43983d0
inn 0x4399fc8
inn 0x439a388
inn 0x439a4c8
inn 0x439d290
inn 0x43c4200
inn 0x20a2ae88
........plenty of lines removed here
node 0x23E4D158:
inn 0x4395a18
inn 0x4399fc8
inn 0x439a4c8
inn 0x439d290
inn 0x23140890
out 0x43866f0
out 0x4386a00
out 0x14651c98
out 0x151a9180
out 0x151a9250
out 0x1789e1b0
out 0x178a5ce8
out 0x178a5e88
out 0x178ac550
out 0x178bb9c0
out 0x178c1410
out 0x178c2190
out 0x178c29b8
out 0x17a02188
out 0x17a03998
out 0x17a09468
out 0x2277f448
out 0x229ac6e8
out 0x2376a3d8
out 0x23eb24f0
node 0x23E4F828:
inn 0x2277f448
node 0x23E8AAD0:
inn 0x2277f448
node 0x23EB24F0:
inn 0x4395a18
inn 0x4399fc8
inn 0x439a4c8
inn 0x439d290
inn 0x2277f448
inn 0x23140890
inn 0x2376a3d8
inn 0x23d5bc00
inn 0x23e4d158
out 0x178c2190
node 0x23EDD7D0:
inn 0x43918a8
inn 0x4395a18
inn 0x4399fc8
inn 0x439a248
inn 0x439a4c8
inn 0x439d290
inn 0x22efa3f0
inn 0x2301ee70
inn 0x23620840
node 0x2402A5E8:
inn 0x4395a18
node 0x24E7E5E0:
}
WordSet "univ_tsets":
addTo 0 (0 uncached)
delFrom 0 (0 uncached)
union 0
intersect 0 (0 uncached) [nb. incl isSubsetOf]
minus 0 (0 uncached)
elem 0
doubleton 0
isEmpty 0
isSingleton 0
anyElementOf 0
isSubsetOf 0
WordSet "univ_lsets":
addTo 1716289 (769109 uncached)
delFrom 1712012 (767474 uncached)
union 0
intersect 13 (0 uncached) [nb. incl isSubsetOf]
minus 0 (0 uncached)
elem 1427451
doubleton 0
isEmpty 856013
isSingleton 0
anyElementOf 0
isSubsetOf 13
WordSet "univ_laog":
addTo 378757 (81292 uncached)
delFrom 28 (26 uncached)
union 0
intersect 0 (0 uncached) [nb. incl isSubsetOf]
minus 0 (0 uncached)
elem 61364
doubleton 27431
isEmpty 0
isSingleton 0
anyElementOf 0
isSubsetOf 0
locksets: 94,610 unique lock sets
threadsets: 1 unique thread sets
univ_laog: 30,048 unique lock sets
LockN-to-P map: 0 queries (0 map size)
string table map: 0 queries (0 map size)
LAOG: 27,431 map size
LAOG exposition: 29,105 map size
locks: 858,346 acquires, 858,339 releases
sanity checks: 6
-------- Arena "core": 1048576 mmap'd, 325568/309160 max/curr --------
64 in 4: stacks.rs.1
72 in 1: main.mpclo.3
368 in 27: errormgr.losf.4
1,008 in 46: errormgr.losf.3
5,056 in 158: errormgr.losf.1
5,864 in 157: errormgr.losf.2
34,584 in 22: gdbsrv
262,144 in 4: di.syswrap-x86.azxG.1
-------- Arena "tool": 1564475392 mmap'd, 1521820168/1521820168 max/curr --------
40 in 1: libhb.event_map_init.1 (RCEC groups)
40 in 1: libhb.event_map_init.3 (OldRef groups)
48 in 2: commandline.sua.3
64 in 2: libhb.Thr__new.4
80 in 2: hg.pSfs.1
104 in 4: libhb.Thr__new.1
104 in 2: commandline.sua.2
112 in 3: hashtable.Hc.1
152 in 1: libhb.event_map_init.4 (oldref tree)
168 in 4: hg.mk_Thread.1
176 in 4: libhb.Thr__new.3 (local_Kws_and_stacks)
184 in 1: initimg-linux.sce.5
304 in 2: hg.mstSs.1
368 in 4: hg.mpttT.1
608 in 37: hg.mctCloa.1
664 in 82: hg.lae.1
1,272 in 38: hg.mctCI.1
1,992 in 5: hg.ids.3
2,000 in 1: hg.ids.1
2,272 in 21: hg.lNaw.1
6,160 in 2: hashtable.Hc.2
8,976 in 120: errormgr.mre.2
49,152 in 4: libhb.Thr__new.2
196,632 in 1: hashtable.resize.1
251,632 in 27,349: hg.lae.2
491,456 in 29,105: hg.lae.3
514,264 in 21,266: libhb.zsm_init.1 (map_shmem)
697,584 in 27,432: hg.laog__init.1
730,384 in 29,106: hg.laog__init.2
786,456 in 1: libhb.event_map_init.2 (context table)
857,352 in 34,256: hg.new_MallocMeta.1
2,097,168 in 1: libhb.libhb_init.1
2,171,912 in 135,729: libhb.SO__Alloc.1
2,488,576 in 2: libhb.vts_tab_init.1
3,257,104 in 135,690: hg.ids.2
3,724,280 in 145,520: libhb.vts_set_init.1
5,970,696 in 283,832: hg.ids.4
6,513,272 in 135,689: hg.mk_Lock.1
7,084,728 in 145,519: libhb.vts_set_focaa.1
26,923,376 in 996: libhb.aFfw.1 (LineF storage)
1,456,988,256 in 90,146: hg.ids.5 (univ_laog)
-------- Arena "dinfo": 102596608 mmap'd, 100668584/100668584 max/curr --------
496 in 27: di.redi.1
1,056 in 63: redir.rnnD.4
1,736 in 107: redir.rnnD.2
1,904 in 107: redir.rnnD.3
2,544 in 71: redir.ri.1
3,120 in 63: di.debuginfo.aDI.2
3,424 in 107: redir.rnnD.1
23,712 in 63: di.debuginfo.aDI.1
211,216 in 4: di.ccCt.2
15,660,000 in 63: di.storage.addSym.1
22,952,000 in 24: di.storage.addLoc.1
26,151,440 in 45: di.storage.addDiCfSI.1
35,655,936 in 544: di.storage.addStr.1
-------- Arena "client": 16777216 mmap'd, 10953648/10947472 max/curr --------
10,947,472 in 34,256: replacemalloc.cm.1
-------- Arena "demangle": 65536 mmap'd, 256/0 max/curr --------
-------- Arena "exectxt": 12062720 mmap'd, 6966704/6812528 max/curr --------
1,572,968 in 1: execontext.reh1
5,239,560 in 215,720: execontext.rEw2.2
-------- Arena "errors": 65536 mmap'd, 4800/4800 max/curr --------
4,800 in 120: errormgr.mre.1
-------- Arena "ttaux": 524288 mmap'd, 466576/466352 max/curr --------
466,352 in 1,015: transtab.aECN.1
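As a back-of-the-envelope check on the question above, dividing the reported univ_laog arena bytes by its block count shows how large each live block is on average. This is just a sketch: only the two figures are copied from the dump; the variable names and the 25 MB estimate of the printable dump are taken from the text of this mail.

```python
# Figures copied from the "tool" arena line for univ_laog in the dump above.
univ_laog_bytes  = 1_456_988_256   # hg.ids.5 (univ_laog)
univ_laog_blocks = 90_146

# Average live block size: roughly 16 KB per block.
avg_block = univ_laog_bytes // univ_laog_blocks
print(avg_block)                                 # -> 16162

# The printable dump of the same structures is only ~25 MB, i.e. under 2%
# of what the arena reports, so the bulk of the 1.4 GB must sit in storage
# (or allocator overhead) that the printable dump never shows.
dump_fraction_pct = round(100 * 25e6 / univ_laog_bytes, 1)
print(dump_fraction_pct)                         # -> 1.7
```

So each univ_laog block averages about 16 KB while its printed representation is tiny; whether that gap is word-vector payload or allocator rounding is exactly the open question here.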
From: Philippe W. <phi...@sk...> - 2011-03-03 22:43:57
In evhH__pre_thread_releases_lock, I do not understand why a read lock is removed from the locksetW. From what I can see, it does no harm to remove it (as a read lock can't be present in the locksetW), but it is probably a no-op which can be avoided.
Is there a reason why the below:
   thr->locksetA
      = HG_(delFromWS)( univ_lsets, thr->locksetA, (Word)lock );
   thr->locksetW
      = HG_(delFromWS)( univ_lsets, thr->locksetW, (Word)lock );
   /* push our VC into the lock */
   tl_assert(thr->hbthr);
   tl_assert(lock->hbso);
   /* If the lock was previously W-held, then we want to do a
      strong send, and if previously R-held, then a weak send. */
   libhb_so_send( thr->hbthr, lock->hbso, was_heldW );
}
should not be replaced by:
   thr->locksetA
      = HG_(delFromWS)( univ_lsets, thr->locksetA, (Word)lock );
   if (was_heldW) {
      thr->locksetW
         = HG_(delFromWS)( univ_lsets, thr->locksetW, (Word)lock );
   }
   /* push our VC into the lock */
   tl_assert(thr->hbthr);
   tl_assert(lock->hbso);
   /* If the lock was previously W-held, then we want to do a
      strong send, and if previously R-held, then a weak send. */
   libhb_so_send( thr->hbthr, lock->hbso, was_heldW );
}
With this change, the helgrind regression tests are the same.
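For illustration only, the equivalence can be modelled outside helgrind. Below is a hypothetical, minimal Python sketch of the per-thread locksets (the set names mirror the thr->locksetA / thr->locksetW fields; everything else is made up): since a read-held lock never enters locksetW, the guarded and unguarded deletes end in identical state.

```python
# Hypothetical model: locksetA holds every lock the thread holds,
# locksetW only the write-held ones.
locksetA, locksetW = set(), set()

def acquire(lock, write):
    locksetA.add(lock)
    if write:
        locksetW.add(lock)

def release(lock, was_heldW, guarded):
    locksetA.discard(lock)
    # The unguarded variant discards unconditionally; for a read lock
    # this is a no-op because it was never added to locksetW.
    if not guarded or was_heldW:
        locksetW.discard(lock)

# Read lock, guarded release.
acquire("m", write=False)
release("m", was_heldW=False, guarded=True)
after_guarded = (set(locksetA), set(locksetW))

# Same sequence, unguarded release: identical end state.
acquire("m", write=False)
release("m", was_heldW=False, guarded=False)
assert after_guarded == (locksetA, locksetW)
print("states identical")
```

This only restates the reasoning in the mail above (the guard saves a wasted delete, it does not change behaviour); it is not helgrind code.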
From: Philippe W. <phi...@sk...> - 2011-03-03 22:39:47
I do not understand why the comparison function below is done like that. It looks very sophisticated compared to the simpler comparison given after it. Is there a reason to have the sophisticated version?
static Word cmp_WordVecs_for_FM ( UWord wv1W, UWord wv2W )
{
   UWord i;
   WordVec* wv1 = (WordVec*)wv1W;
   WordVec* wv2 = (WordVec*)wv2W;
   UWord common = wv1->size < wv2->size ? wv1->size : wv2->size;
   for (i = 0; i < common; i++) {
      if (wv1->words[i] == wv2->words[i])
         continue;
      if (wv1->words[i] < wv2->words[i])
         return -1;
      if (wv1->words[i] > wv2->words[i])
         return 1;
      tl_assert(0);
   }
   /* Ok, the common sections are identical.  So now consider the
      tails.  Both sets are considered to finish in an implied
      sequence of -infinity. */
   if (wv1->size < wv2->size) {
      tl_assert(common == wv1->size);
      return -1; /* impliedly, wv1 contains some -infinitys in places
                    where wv2 doesn't. */
   }
   if (wv1->size > wv2->size) {
      tl_assert(common == wv2->size);
      return 1;
   }
   tl_assert(common == wv1->size);
   return 0; /* identical */
}
The below looks simpler and, I believe, more efficient. The order will be different from that of the above, but I have not seen anywhere that the specific order implemented by the above cmp is significant. At least the helgrind tests give the same results with the below.
static Word cmp_WordVecs_for_FM ( UWord wv1W, UWord wv2W )
{
   UWord i;
   WordVec* wv1 = (WordVec*)wv1W;
   WordVec* wv2 = (WordVec*)wv2W;
   // WordVecs with smaller size are smaller.
   if (wv1->size < wv2->size) {
      return -1;
   }
   if (wv1->size > wv2->size) {
      return 1;
   }
   // Sizes are equal => order based on content.
   for (i = 0; i < wv1->size; i++) {
      if (wv1->words[i] == wv2->words[i])
         continue;
      if (wv1->words[i] < wv2->words[i])
         return -1;
      if (wv1->words[i] > wv2->words[i])
         return 1;
      tl_assert(0);
   }
   return 0; /* identical */
}
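To make the difference concrete, here is a hypothetical Python rendering of both comparators (plain tuples stand in for WordVecs; all names are mine). They produce different sort orders but agree on equality, which is all a lookup-only finite map needs:

```python
from functools import cmp_to_key

def cmp_lex_neg_inf_tail(wv1, wv2):
    # Original scheme: lexicographic over the common prefix; the shorter
    # vector is treated as padded with -infinity, so a prefix sorts first.
    for a, b in zip(wv1, wv2):
        if a != b:
            return -1 if a < b else 1
    if len(wv1) != len(wv2):
        return -1 if len(wv1) < len(wv2) else 1
    return 0

def cmp_size_first(wv1, wv2):
    # Proposed scheme: compare sizes first; unequal-size vectors are
    # ordered without ever touching their elements.
    if len(wv1) != len(wv2):
        return -1 if len(wv1) < len(wv2) else 1
    for a, b in zip(wv1, wv2):
        if a != b:
            return -1 if a < b else 1
    return 0

vecs = [(1, 5), (2,), (1, 2, 3)]
print(sorted(vecs, key=cmp_to_key(cmp_lex_neg_inf_tail)))  # [(1, 2, 3), (1, 5), (2,)]
print(sorted(vecs, key=cmp_to_key(cmp_size_first)))        # [(2,), (1, 5), (1, 2, 3)]
```

Both return 0 only for identical vectors, so map lookups behave identically; the size-first version is cheaper on average because mismatched sizes short-circuit before any element comparison.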
From: <sv...@va...> - 2011-03-03 19:59:37
Author: bart
Date: 2011-03-03 19:59:20 +0000 (Thu, 03 Mar 2011)
New Revision: 11578
Log:
DRD: avoid triggering an assertion failure if a thread is canceled while waiting inside pthread_mutex_lock(). Fixes #267413.
Modified:
trunk/drd/drd_thread.c
Modified: trunk/drd/drd_thread.c
===================================================================
--- trunk/drd/drd_thread.c 2011-02-28 10:26:42 UTC (rev 11577)
+++ trunk/drd/drd_thread.c 2011-03-03 19:59:20 UTC (rev 11578)
@@ -529,7 +529,9 @@
              && tid != DRD_INVALID_THREADID);
    tl_assert(DRD_(g_threadinfo)[tid].pt_threadid != INVALID_POSIX_THREADID);
-   DRD_(g_threadinfo)[tid].synchr_nesting = 0;
+   if (DRD_(thread_get_trace_fork_join)())
+      VG_(message)(Vg_UserMsg, "[%d] drd_thread_pre_cancel %d\n",
+                   DRD_(g_drd_running_tid), tid);
 }

 /**
From: Michael M. <mmc...@uc...> - 2011-03-03 18:03:18
Hello. This is Michael McThrow, a first-year PhD student at the University of California, Santa Cruz. I am working on a project where I want to obtain information about the number and types of interference points in a running program. (Interference points are locations in a program where the execution of threads could be interleaved; e.g., portions of a multithreaded program that are not protected by locks.)

I am interested in using Helgrind for this analysis since it maintains information about the happens-before relationship of instructions in order to identify race conditions. I may need to make some small modifications to Helgrind so that it outputs information about the happens-before relationships it discovers (I then plan on using a simple script to analyze the resulting table and output the results). However, I need some help figuring out which portions of the Helgrind source code actually discover and keep track of the happens-before relationships. Which areas of the Helgrind source code keep track of them?

Sincerely,
Michael McThrow
--------------------------------------------------------------
Michael McThrow
1st Year PhD Student, Computer Science
University of California, Santa Cruz
mmc...@uc...
From: Bart V. A. <bva...@ac...> - 2011-03-03 17:45:05
On Wed, Mar 2, 2011 at 12:45 PM, Christian Borntraeger <bor...@de...> wrote:
> Am 26.02.2011 14:46, schrieb Bart Van Assche:
>> Hello Julian,
>>
>> Maybe it would be easier for the people who are porting Valgrind if
>> both Valgrind and VEX were available in a public git repository?
>> Instead of filing patches in the KDE bugzilla, people who are porting
>> Valgrind could set up a clone of the public Valgrind git repository
>> and work in that repository. That would not only make reviewing those
>> patches easier but would also make merging them easier -- all that
>> would be necessary to merge such work is to issue a "git pull"
>> request. A brief guide about how to import a Subversion repository
>> into git can be found here: http://help.github.com/svn-importing/.
>
> I second Bart's opinion. Git (or mercurial) would allow private branches,
> and Julian doesn't have to do anything to make that work (e.g. setting up
> the svn users).
> This would help in several cases:
> 1. For example, after we merge the s390 port (hint hint :-) ) I certainly
> will have additional patches later on (new cpus etc.), but some aspects are
> too big to communicate efficiently via patches. One example is decimal
> floating point. It's available on power and s390 (with some plans for x86
> as well AFAIK), so it would certainly make sense for Maynard and myself
> to work out the necessary common code changes for both systems before we
> bother you. So we could have a decimal floating point tree where Power
> and s390x can work together until we are satisfied with the result.
> 2. Another potential benefit would be that Julian could spread the
> maintenance work among other people, e.g. we could have a trivial bug fix
> tree maintained by someone you trust and you would just pull once in a
> while.

A middle way could be to set up an internal git repository, import the Valgrind source code into it via e.g. svn2git, and make the results available as a patch attached to a bugzilla entry. More information about svn2git can be found here: https://github.com/nirvdrum/svn2git#readme. That should suit everyone's needs.

Bart.
From: Florian K. <br...@ac...> - 2011-03-03 00:33:52
On 03/02/2011 06:45 AM, Christian Borntraeger wrote:
> I second Bart's opinion. Git (or mercurial) would allow private branches,
> and Julian doesn't have to do anything to make that work (e.g. setting up
> the svn users).

svn allows restricting access to branches, and it's not all that complex: http://svnbook.red-bean.com/en/1.5/svn-book.html#svn.serverconfig.pathbasedauthz
It's probably less work doing this than setting up and syncing an additional repository. Although I wouldn't mind seeing valgrind move to git altogether.

> This would help in several cases:

It would also help with the s390 port maintenance. Most of my patches will be in VEX, and I wouldn't be able to check them into mainline. AFAICT Julian is the only one who can make changes there. Unless that changes, the current setup requires me to maintain a patch queue for VEX changes, which is less than ideal because it's clumsy and error-prone. Not to mention syncing those patches with collaborators. That is what I currently do, and it isn't very attractive.

Florian