On Thursday 18 July 2002 20:40, Daniel Gryniewicz wrote:
> On Thu, 2002-07-18 at 14:15, Nicolai Haehnle wrote:
> > I really doubt there's an architecture out there where simple
> > reads/writes in the native data type are not atomic (how could that be
> > possible anyway?). So it seems better to write for reasonable rather than
> > academic scenarios.
> > cu,
> > Nicolai
> Not true. On SMP, writes to anything are usually not atomic with
> respect to other processors, because of the cache. If two processors
> write to the same (for example) int at the same time, both write to
> their caches, and when the caches get flushed, they can trounce on each
> other. Memory ordering constraints can fix this. Things get more
> complicated if the memory can get written to by a DMA from the PCI bus.
> There was a thread on barrier() on the AtheOS list that covered this.
> True, a write to an int is atomic with respect to interrupts, unlike a
> long long on a 32-bit processor, but not with respect to SMP without an
The Intel Architecture Manual vol. 3: System Programming, chapter 7.1, says that reads and writes of aligned data are always atomic (read-modify-write operations are not atomic; you need a LOCK prefix for those). On SMP systems this is ensured by the cache coherency protocol.
This means that there is no effective difference between a plain store

	*address = v;

and the same store wrapped in a lock/unlock pair: if two CPUs execute the store in parallel on the same address, you can't tell which one takes precedence, but afterwards *address will always be either CPU#1's v or CPU#2's v. It won't be corrupted (this only applies to aligned data of native types). This is true with both sequences.
Of course, you will typically need some locking anyway, because the write is usually preceded by a read which determines v. This lock needs to be outside of set_tld though, e.g.:

	lock();
	get relevant data, compute v
	set_tld(handle, v);
	unlock();
On Thursday 18 July 2002 20:48, Kristian Van Der Vliet wrote:
> I'm not sure (I'm not a kernel hacker, I just play one while trying to
> compile Glibc ;) ), but I believe the danger lies in the possibility that
> set_tld could be taken out of context between the call
> Process_s* psProc=CURRENT_PROC;
True. But if that's what you worry about, the lock needs to be around those two lines, not around just the assignment ;)
Still, this would only protect you against a change of the return value of CURRENT_PROC. I don't think the kernel moves the process structures around, does it? That would cause some major headaches...
> On an SMP system it is possible that free_tld() could be called by a thread
> on the second CPU. When the first thread restarts nHandle is now invalid.
> The problem couldn't possibly exist with get_tld() as it stands (The handle
> could be invalid anyway, as it is. I'm a lazy non-bounds checking coder
> who was in a rush!).
This sounds like the lock needs to be higher up in the call hierarchy (see above). I'd have to look at the TLD implementation in detail to tell for sure, though.