From: Julian S. <js...@ac...> - 2010-01-27 16:54:54
> Also, I'd like to propose new annotations for barriers. We can annotate
> barriers in terms of the above 4, but that hardwires the assumption that
> the underlying checker has a concept of h-b edges. I'd prefer a set
> of macros which merely specify the state of the barrier:
>
> B_RESET(bar)
> state that nobody is waiting for the barrier
>
> B_ARRIVE(bar)
> i have arrived at the barrier
>
> B_DEPART(bar, Bool i_am_the_last_to_depart)
> i have left the barrier, and give an indication of whether or
> not I am the last to leave. (If so, the checker would
> probably want to behave like B_RESET).
Actually, even with those, I think it is impossible to guarantee a correct
annotation in the worst-case race condition. In the worst case we need an
arbitrary number of VTSs associated with the barrier, and we also need to
track how many threads are waiting at it.

This is because threads leaving the "real" barrier call/mechanism may
stall and not reach their B_DEPART requests (at which point their VTSs
are updated) before a whole different set of threads arrives at the
barrier, passes through it, and adds its own set of edges to it. When
the original threads finally reach their B_DEPART requests, they
incorrectly acquire the edges of these "overtaking" threads.

IOW, in the worst case we need to deal with an arbitrary delay between
a thread leaving the barrier proper and having its state updated at
the B_DEPART request.
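To make the overtaking scenario concrete, here is one possible
interleaving (thread names and the 2-thread barrier capacity are
invented for illustration):

```
T1, T2:  pthread_barrier_wait(bar) returns     -- generation 1 released
T1:      B_DEPART(bar, last=yes)               -- checker resets barrier state
T2:      ...stalled by the scheduler...
T3, T4:  arrive, pass through the real barrier -- generation 2 released
T3, T4:  B_DEPART                              -- generation-2 edges added to bar
T2:      finally reaches its B_DEPART          -- wrongly acquires T3/T4's edges
```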
Hence I am now thinking instead of the following annotations:

  B_INITIALISE(bar, int capacity)
     initialise the barrier, stating its capacity

  void* abstract_handle = B_ARRIVE(bar)
     arrive at the barrier; obtain an abstract handle (a VTS*,
     essentially) from which we need to update when leaving

  B_DEPART(bar, abstract_handle)
     depart from the barrier, updating our VTS using the one we were
     given by B_ARRIVE
---------------------
One other thing. I think, for the semantics of the annotations to make
sense generally, you need to say that, although the checker may run the
threads concurrently, it must process each annotation atomically. It
sounds like hair-splitting, I know, but it's hard to see what the
semantics would even be if processing of annotations were allowed to
happen concurrently.
J