From: Wanlong G. <gao...@cn...> - 2011-09-30 05:32:02
If read_mem() runs before *map_address* has been mapped, dereferencing
*map_address* (NULL) will cause a segmentation fault.

So, let read_mem() yield the CPU while *map_address* hasn't been mapped.

Signed-off-by: Wanlong Gao <gao...@cn...>
---
 testcases/kernel/mem/mtest06/mmap1.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/testcases/kernel/mem/mtest06/mmap1.c b/testcases/kernel/mem/mtest06/mmap1.c
index cd11912..d671291 100644
--- a/testcases/kernel/mem/mtest06/mmap1.c
+++ b/testcases/kernel/mem/mtest06/mmap1.c
@@ -352,6 +352,12 @@ read_mem(void *args) /* number of reads performed */
 	long *rmargs = args;	/* local pointer to the arguments */
 	long exit_val = 0;	/* pthread exit value */
 
+retry:
+	if (!map_address) {
+		sched_yield();
+		goto retry;
+	}
+
 	tst_resm(TINFO, "pid[%d] - read contents of memory %p %ld times",
 		getpid(), map_address, rmargs[2]);
 	if (verbose_print)
@@ -608,4 +614,4 @@ main(int argc,	/* number of input parameters. */
 		close(fd);
 	}while (TRUE);
 	exit (0);
-}
\ No newline at end of file
+}
-- 
1.7.7.rc1
From: Wanlong G. <gao...@cn...> - 2011-09-30 05:39:25
On 09/30/2011 01:30 PM, Wanlong Gao wrote:
> If read_mem() runs before *map_address* has been mapped, dereferencing
> *map_address* (NULL) will cause a segmentation fault.
>
> So, let read_mem() yield the CPU while *map_address* hasn't been mapped.
>
> Signed-off-by: Wanlong Gao <gao...@cn...>
> ---
>  testcases/kernel/mem/mtest06/mmap1.c |    8 +++++++-
>  1 files changed, 7 insertions(+), 1 deletions(-)
>
> diff --git a/testcases/kernel/mem/mtest06/mmap1.c b/testcases/kernel/mem/mtest06/mmap1.c
> index cd11912..d671291 100644
> --- a/testcases/kernel/mem/mtest06/mmap1.c
> +++ b/testcases/kernel/mem/mtest06/mmap1.c
> @@ -352,6 +352,12 @@ read_mem(void *args) /* number of reads performed */
>  	long *rmargs = args;	/* local pointer to the arguments */
>  	long exit_val = 0;	/* pthread exit value */
> 
> +retry:
> +	if (!map_address) {
> +		sched_yield();
> +		goto retry;
> +	}
> +
>  	tst_resm(TINFO, "pid[%d] - read contents of memory %p %ld times",
>  		getpid(), map_address, rmargs[2]);
>  	if (verbose_print)
> @@ -608,4 +614,4 @@ main(int argc,	/* number of input parameters. */
>  		close(fd);
>  	}while (TRUE);
>  	exit (0);
> -}
> \ No newline at end of file
> +}

As for the messy coding style of the pre-existing code, I just kept
consistent with the rest of the file in this patch.

Thanks
-Wanlong Gao
From: Wanlong G. <gao...@cn...> - 2011-09-30 06:10:49
If read_mem() runs before *map_address* has been mapped, dereferencing
*map_address* (NULL) will cause a segmentation fault.

So, let read_mem() yield the CPU while *map_address* hasn't been mapped.

Which version do you prefer?

Thanks

Signed-off-by: Wanlong Gao <gao...@cn...>
---
 testcases/kernel/mem/mtest06/mmap1.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/testcases/kernel/mem/mtest06/mmap1.c b/testcases/kernel/mem/mtest06/mmap1.c
index cd11912..0aa1c2b 100644
--- a/testcases/kernel/mem/mtest06/mmap1.c
+++ b/testcases/kernel/mem/mtest06/mmap1.c
@@ -352,6 +352,9 @@ read_mem(void *args) /* number of reads performed */
 	long *rmargs = args;	/* local pointer to the arguments */
 	long exit_val = 0;	/* pthread exit value */
 
+	while (!map_address)
+		sched_yield();
+
 	tst_resm(TINFO, "pid[%d] - read contents of memory %p %ld times",
 		getpid(), map_address, rmargs[2]);
 	if (verbose_print)
@@ -608,4 +611,4 @@ main(int argc,	/* number of input parameters. */
 		close(fd);
 	}while (TRUE);
 	exit (0);
-}
\ No newline at end of file
+}
-- 
1.7.7.rc1
From: Cyril H. <ch...@su...> - 2011-10-05 15:33:07
Hi!
> If read_mem() runs before *map_address* has been mapped, dereferencing
> *map_address* (NULL) will cause a segmentation fault.
>
> So, let read_mem() yield the CPU while *map_address* hasn't been mapped.
>

I've been reading the test source lately and IMHO this is wanted
behavior. As far as I understand the code, the test creates two threads,
one to read and one to map/unmap the memory concurrently. Look for the
setjmp() and longjmp() in read_mem() and the signal handler that is
expected to do exactly what you are trying to achieve. So if the test
fails with a segmentation fault, there is something wrong with the
signal/longjmp handling.

-- 
Cyril Hrubis
ch...@su...
From: Wanlong G. <gao...@cn...> - 2011-10-06 00:45:13
On 10/05/2011 11:36 PM, Cyril Hrubis wrote:
> Hi!
>> If read_mem() runs before *map_address* has been mapped, dereferencing
>> *map_address* (NULL) will cause a segmentation fault.
>>
>> So, let read_mem() yield the CPU while *map_address* hasn't been mapped.
>>
>
> I've been reading the test source lately and IMHO this is wanted
> behavior. As far as I understand the code, the test creates two threads,
> one to read and one to map/unmap the memory concurrently. Look for the
> setjmp() and longjmp() in read_mem() and the signal handler that is
> expected to do exactly what you are trying to achieve. So if the test
> fails with a segmentation fault, there is something wrong with the
> signal/longjmp handling.
>

Sure, but I saw that when read_mem() goes first and becomes the first
running thread, it causes a segmentation fault every time now.

So, if the first running thread is not read_mem(), it will not always fail
with a segmentation fault. So, I think there's nothing wrong with the
signal handling or anything else.

We just need to make sure that read_mem() is not the first running thread;
that will be enough.

Thanks
-Wanlong Gao
From: Wanlong G. <gao...@cn...> - 2011-10-10 02:04:38
Hi Cyril:

On 10/06/2011 08:44 AM, Wanlong Gao wrote:
> On 10/05/2011 11:36 PM, Cyril Hrubis wrote:
>> Hi!
>>> If read_mem() runs before *map_address* has been mapped, dereferencing
>>> *map_address* (NULL) will cause a segmentation fault.
>>>
>>> So, let read_mem() yield the CPU while *map_address* hasn't been mapped.
>>>
>>
>> I've been reading the test source lately and IMHO this is wanted
>> behavior. As far as I understand the code, the test creates two threads,
>> one to read and one to map/unmap the memory concurrently. Look for the
>> setjmp() and longjmp() in read_mem() and the signal handler that is
>> expected to do exactly what you are trying to achieve. So if the test
>> fails with a segmentation fault, there is something wrong with the
>> signal/longjmp handling.
>>
>
> Sure, but I saw that when read_mem() goes first and becomes the first
> running thread, it causes a segmentation fault every time now.
>
> So, if the first running thread is not read_mem(), it will not always fail
> with a segmentation fault. So, I think there's nothing wrong with the
> signal handling or anything else.
>
> We just need to make sure that read_mem() is not the first running thread;
> that will be enough.

Any comment?

Thanks

> Thanks
> -Wanlong Gao
From: Cyril H. <ch...@su...> - 2011-10-12 13:11:53
Hi!
> > Sure, but I saw that when read_mem() goes first and becomes the first
> > running thread, it causes a segmentation fault every time now.
> >
> > So, if the first running thread is not read_mem(), it will not always fail
> > with a segmentation fault. So, I think there's nothing wrong with the
> > signal handling or anything else.
> >
> > We just need to make sure that read_mem() is not the first running thread;
> > that will be enough.
>
> Any comment?

Stay tuned, I'll get to this by the end of the week.

-- 
Cyril Hrubis
ch...@su...
From: Wanlong G. <gao...@cn...> - 2011-10-31 07:21:26
Hi Cyril:

> Hi!
>>> Sure, but I saw that when read_mem() goes first and becomes the first
>>> running thread, it causes a segmentation fault every time now.
>>>
>>> So, if the first running thread is not read_mem(), it will not always fail
>>> with a segmentation fault. So, I think there's nothing wrong with the
>>> signal handling or anything else.
>>>
>>> We just need to make sure that read_mem() is not the first running thread;
>>> that will be enough.
>>
>> Any comment?
>
> Stay tuned, I'll get to this by the end of the week.

Maybe you missed this?

Thanks a lot
-Wanlong Gao
From: Cyril H. <ch...@su...> - 2011-11-03 14:26:12
Hi!
> >>> So, if the first running thread is not read_mem(), it will not always fail
> >>> with a segmentation fault. So, I think there's nothing wrong with the
> >>> signal handling or anything else.
> >>>
> >>> We just need to make sure that read_mem() is not the first running thread;
> >>> that will be enough.
> >>
> >> Any comment?
> >
> > Stay tuned, I'll get to this by the end of the week.
>
> Maybe you missed this?

Sorry, should get to this ASAP.

-- 
Cyril Hrubis
ch...@su...
From: Wanlong G. <gao...@cn...> - 2011-11-04 00:35:38
On 11/03/2011 10:31 PM, Cyril Hrubis wrote:
> Hi!
>>>>> So, if the first running thread is not read_mem(), it will not always
>>>>> fail with a segmentation fault. So, I think there's nothing wrong with
>>>>> the signal handling or anything else.
>>>>>
>>>>> We just need to make sure that read_mem() is not the first running
>>>>> thread; that will be enough.
>>>>
>>>> Any comment?
>>>
>>> Stay tuned, I'll get to this by the end of the week.
>>
>> Maybe you missed this?
>
> Sorry, should get to this ASAP.
>

Thanks a lot Cyril.
-Wanlong Gao
From: Wanlong G. <gao...@cn...> - 2011-11-09 09:14:36
On 11/03/2011 10:31 PM, Cyril Hrubis wrote:
> Hi!
>>>>> So, if the first running thread is not read_mem(), it will not always
>>>>> fail with a segmentation fault. So, I think there's nothing wrong with
>>>>> the signal handling or anything else.
>>>>>
>>>>> We just need to make sure that read_mem() is not the first running
>>>>> thread; that will be enough.
>>>>
>>>> Any comment?
>>>
>>> Stay tuned, I'll get to this by the end of the week.
>>
>> Maybe you missed this?
>
> Sorry, should get to this ASAP.
>

Yeah Cyril:
I see you are back.
Please don't miss this now.

Thanks a lot
-Wanlong Gao
From: Cyril H. <ch...@su...> - 2011-11-09 09:20:22
Hi!
> >>>>> So, if the first running thread is not read_mem(), it will not always
> >>>>> fail with a segmentation fault. So, I think there's nothing wrong with
> >>>>> the signal handling or anything else.
> >>>>>
> >>>>> We just need to make sure that read_mem() is not the first running
> >>>>> thread; that will be enough.
> >>>>
> >>>> Any comment?
> >>>
> >>> Stay tuned, I'll get to this by the end of the week.
> >>
> >> Maybe you missed this?
> >
> > Sorry, should get to this ASAP.
>
> Yeah Cyril:
> I see you are back.
> Please don't miss this now.

Don't worry. I've spent one afternoon playing with the test case and it's
full of misconceptions, race conditions and broken code (and one of the
bugs disables the reading thread shortly after the start, so no testing is
done at all no matter how long you run it). I have, so far, five
incremental patches that fix most of the problems, but it's not finished
yet. I expect to commit these, hopefully this week.

-- 
Cyril Hrubis
ch...@su...
From: Wanlong G. <gao...@cn...> - 2011-11-09 09:25:21
On 11/09/2011 05:26 PM, Cyril Hrubis wrote:
> Hi!
>>>>>>> So, if the first running thread is not read_mem(), it will not always
>>>>>>> fail with a segmentation fault. So, I think there's nothing wrong with
>>>>>>> the signal handling or anything else.
>>>>>>>
>>>>>>> We just need to make sure that read_mem() is not the first running
>>>>>>> thread; that will be enough.
>>>>>>
>>>>>> Any comment?
>>>>>
>>>>> Stay tuned, I'll get to this by the end of the week.
>>>>
>>>> Maybe you missed this?
>>>
>>> Sorry, should get to this ASAP.
>>
>> Yeah Cyril:
>> I see you are back.
>> Please don't miss this now.
>
> Don't worry. I've spent one afternoon playing with the test case and it's
> full of misconceptions, race conditions and broken code (and one of the
> bugs disables the reading thread shortly after the start, so no testing is
> done at all no matter how long you run it). I have, so far, five
> incremental patches that fix most of the problems, but it's not finished
> yet. I expect to commit these, hopefully this week.
>

Sure, no problem. You are busy, so please take your time. Thanks for your
hard work.

Best Regards
-Wanlong Gao