From: Josef W. <Jos...@gm...> - 2013-07-12 09:06:35
Am 20.06.2013 14:15, schrieb Josef Weidendorfer:
> Am 20.06.2013 11:27, schrieb Julian Seward:
>> (2) translate "XBEGIN fail-addr" by handing it through to the real CPU,
>> in the same kind of way we handle CPUID, RDTSC, etc. Of course we will
>> have to put our own failure-handler address so we don't lose control
>> of execution if the transaction is aborted, but that's not difficult.
>
> I do not see a hidden problem at the moment. But the code added by VG +
> tool (just assume cache simulation) will raise the probability of
> transaction failure significantly.

And that actually may be the problem with (2):

* the probability of failure may not just rise, but turn into a
  persistent failure, e.g. if the code added by a tool does a system call.

* if the hardware ensures that a transaction will eventually succeed (as
  on s390), and the compiler therefore does not produce a failure path,
  any changes to the original instructions (already done by VEX itself)
  may destroy that hardware guarantee, and then we have a problem
  (i.e. probably a livelock).

TM detects conflicts by snooping the memory accesses of other cores.
Thus, LL/SC can be seen as a primitive for implementing a poor-man's TM
(covering just one address). VEX hands LL/SC through to the real CPU,
and this results in the following effects:

* with multithreaded ARM code, when verbose debug output is requested,
  e.g. with Callgrind, the program may get stuck, because LL/SC will then
  always fail (e.g. due to a printf in the middle), while the original
  program expects LL/SC to eventually succeed.

* I remember that for the MIPS port, LL/SC already failed for some reason
  with the regular tool handling of cachegrind/callgrind. The solution
  was to move the call to the cache simulator for the memory access of
  the LL *before* the passed-through LL instruction, thus reducing the
  amount of code executed between LL and SC, and thereby the probability
  of failure. Of course, this is not a real solution, but more a
  work-around which happens to work.
I think with (2) we will run into similar issues as seen with LL/SC.

It may not be too complex to emulate TM by using TM itself for
hardware-supported conflict detection.

Josef

> And doing a rollback + failure path
> is slower than doing the failure path from the beginning. So (1) may
> not be a bad option at all.
>
> What about
> (3) if XBEGIN/XEND is found in the same SB, remove them. As VG is
> serializing threads, there is no way for a conflict within a SB anyway.
> Hm. A problem would be exceptions raised between XBEGIN/XEND. A
> transaction would simply fail...
>
> Josef
>
> PS: Using TM ourselves should be a very nice solution to make memcheck
> fast once we remove the serializing of threads at some point. The way
> forward here would be to let the tool decide whether the VG core should
> do serialization or not. If TM is not available, memcheck would
> go with thread serialization as now.
>
> PS2: Making the handling of TM transparent to tools means that
> e.g. cache simulation cannot tell anything about how TM would
> influence the cache. Current TM implementations use the ways
> of cache sets to store transaction changes, thus influencing miss
> behavior.
> This is just a note; I do not suggest simulating TM in VG ;-)
>
>> This also assumes that the s390 and POWER instructions can be mapped to
>> this same structure: TRANSACTION-START(fail-addr) and TRANSACTION-END.
>> That's probably the first thing that we should investigate.
>>
>> J
>>
>> ------------------------------------------------------------------------------
>> This SF.net email is sponsored by Windows:
>>
>> Build for Windows Store.
>>
>> http://p.sf.net/sfu/windows-dev2dev
>> _______________________________________________
>> Valgrind-developers mailing list
>> Val...@li...
>> https://lists.sourceforge.net/lists/listinfo/valgrind-developers