From: Feiyang C. <chr...@gm...> - 2022-05-11 10:15:22
Hi everyone,

I am trying to port Valgrind to LoongArch64/Linux. LoongArch is a brand-new ISA developed by Loongson Technology Corporation Limited, which is somewhat similar to RISC-V. LoongArch includes a reduced 32-bit version, a standard 32-bit version and a 64-bit version (LoongArch64). You can find the documentation for LoongArch here:
https://github.com/loongson/LoongArch-Documentation

Recently, GNU Binutils 2.38 was released with LoongArch support, and GCC 12 has merged the LoongArch port. Linux kernel support is also in progress:
https://lore.kernel.org/linux-arch/202...@lo.../

Although many projects are not yet upstreamed, most of the basic software is open source:
https://github.com/loongson
https://github.com/loongarch64

Loongson 3A5000 is the first LoongArch CPU, and it can be bought in China.

Currently, I am focusing on running Valgrind on LoongArch64/Linux. I develop Valgrind on the CLFS (Cross Linux From Scratch) distribution:
https://github.com/sunhaiyong1978/CLFS-for-LoongArch

I have enabled the basic integer and basic floating-point instructions of LoongArch64. I have successfully run many programs under Valgrind, such as the 'ls' command and compiling C programs with gcc. Valgrind for LoongArch64/Linux is able to pass about 1/3 of the regression tests.

I'm still working on improving the project. In the future, I will focus on the following:
1) Writing more tests and fixing bugs.
2) Codegen optimizations, especially for floating point instructions.
3) Getting the number of regtest failures down.

Here is my local tree:
https://github.com/loongson/valgrind-loongarch64

As written in the README, this project is at an early stage and I may use 'git push -f' when necessary. Please let me know if I missed anything or anything is incorrect. Is it possible for my work to be merged into the mainline?

Thanks,
Feiyang

From: Feiyang C. <chr...@gm...> - 2023-06-30 09:57:47
Hi,

I sent patches v5, which were rebased on master and squashed into 40 commits. I am now working on vector support for LoongArch64.
https://bugs.kde.org/show_bug.cgi?id=457504

Thanks,
Feiyang

From: Feiyang C. <chr...@gm...> - 2023-08-04 03:23:19
Hi,

I want to inquire about the status of my patch. As of now, I haven't received any feedback or response regarding its review or potential inclusion in the upcoming release. I understand that everyone in the team is busy with their respective tasks, but I am eager to know if there are any plans to consider my contribution for integration.

Thanks,
Feiyang

From: Mark W. <ma...@kl...> - 2023-08-22 13:16:59
Hi Feiyang,

On Fri, 2023-08-04 at 11:22 +0800, Feiyang Chen wrote:
> I want to inquire about the status of my patch. As of now, I haven't
> received any feedback or response regarding its review or potential
> inclusion in the upcoming release. I understand that everyone in the
> team is busy with their respective tasks, but I am eager to know if
> there are any plans to consider my contribution for integration.

Sorry review is taking so long. I think it is fair to say nobody really made time for this. And we promised a new release in October.

It seems LoongArch64 is making good progress overall. I saw it is a Debian port now:
https://lwn.net/Articles/941743/
And the GCC Compile Farm now has two machines for testing:
https://cfarm.tetaneutral.net/news/37

There are a couple of larger features that could use more people to take a look:

- AVX-512 support, incomplete and we lost contact with the original
  developer: https://bugs.kde.org/show_bug.cgi?id=383010
- RISC-V port, seems to have dedicated developers:
  https://bugs.kde.org/show_bug.cgi?id=468575
  There is also active development to extend it with Vector Register support:
  https://sourceforge.net/p/valgrind/mailman/valgrind-developers/thread/20230526135944.1959407-5-fei2.wu%40intel.com/
- LoongArch64 port, seems pretty complete, split out in 40 commits:
  https://bugs.kde.org/show_bug.cgi?id=457504

Unfortunately I cannot promise to have time before October to look at all of these. So if others could take a look and report on status, that would be great.

Cheers,

Mark

From: Floyd, P. <pj...@wa...> - 2023-08-22 13:58:40
On 22/08/2023 15:16, Mark Wielaard wrote:
> Hi Feiyang,
>
> On Fri, 2023-08-04 at 11:22 +0800, Feiyang Chen wrote:
>> I want to inquire about the status of my patch. As of now, I haven't
>> received any feedback or response regarding its review or potential
>> inclusion in the upcoming release. I understand that everyone in the
>> team is busy with their respective tasks, but I am eager to know if
>> there are any plans to consider my contribution for integration.
> Sorry review is taking so long. I think it is fair to say nobody really
> made time for this. And we promised a new release in October. It seems
> LoongArch64 is making good progress overall. I saw it is a Debian port
> now: https://lwn.net/Articles/941743/
> And the GCC Compile Farm now has two machines for testing:
> https://cfarm.tetaneutral.net/news/37
>
> There are a couple of larger features that could use more people to
> take a look:
>
> - AVX-512 support, incomplete and we lost contact with the original
>   developer: https://bugs.kde.org/show_bug.cgi?id=383010
> - RISC-V port, seems to have dedicated developers:
>   https://bugs.kde.org/show_bug.cgi?id=468575
>   There is also active development to extend it with
>   Vector Register support:
>   https://sourceforge.net/p/valgrind/mailman/valgrind-developers/thread/20230526135944.1959407-5-fei2.wu%40intel.com/
> - LoongArch64 port, seems pretty complete, split out in 40 commits:
>   https://bugs.kde.org/show_bug.cgi?id=457504
>
> Unfortunately I cannot promise to have time before October to look at
> all of these. So if others could take a look and report on status that
> would be great.

Hi Mark

Do we have any contacts at Intel (or AMD) for help with AVX512?

My wishlist for the October release of 3.22 includes all of the above plus

* get at least one dev each for RISC-V and Loongson on board with
  sourceware git write access in order to be able to support the
  platforms directly
* the long-running question of what to do with macOS
* memcheck aligned and sized checks, plus maybe C23 free_sized and
  free_aligned_sized, plus Linux aligned_alloc
* detect whether a debug version of libstdc++ is being used and then
  use that to automatically turn on or off mismatch detection

And I expect the usual steady stream of smaller fixes.

A+
Paul

From: Mark W. <ma...@kl...> - 2023-08-27 15:47:00
Hi Paul,

On Tue, Aug 22, 2023 at 03:58:26PM +0200, Floyd, Paul wrote:
> On 22/08/2023 15:16, Mark Wielaard wrote:
> > There are a couple of larger features that could use more people to
> > take a look:
> >
> > - AVX-512 support, incomplete and we lost contact with the original
> >   developer: https://bugs.kde.org/show_bug.cgi?id=383010
> > - RISC-V port, seems to have dedicated developers:
> >   https://bugs.kde.org/show_bug.cgi?id=468575
> >   There is also active development to extend it with
> >   Vector Register support:
> >   https://sourceforge.net/p/valgrind/mailman/valgrind-developers/thread/20230526135944.1959407-5-fei2.wu%40intel.com/
> > - LoongArch64 port, seems pretty complete, split out in 40 commits:
> >   https://bugs.kde.org/show_bug.cgi?id=457504
> >
> > Unfortunately I cannot promise to have time before October to look at
> > all of these. So if others could take a look and report on status that
> > would be great.
>
> Do we have any contacts at Intel (or AMD) for help with AVX512?

I don't have any, but I'll ask around.

> My wishlist for the October release of 3.22 includes all of the above plus
>
> * get at least one dev each for RISC-V and Loongson on board with
>   sourceware git write access in order to be able to support the
>   platforms directly

And hopefully nightly test results and/or buildbot builders.

> * the long-running question of what to do with macOS

Any testers/developers/volunteers?

> * memcheck aligned and sized checks, plus maybe C23 free_sized and
>   free_aligned_sized, plus Linux aligned_alloc
>
> * detect whether a debug version of libstdc++ is being used and then
>   use that to automatically turn on or off mismatch detection

That is an interesting idea.

> And I expect the usual steady stream of smaller fixes.

On irc we also discussed having a "rolling release branch" where we put small/essential bug fixes (some of which some distros now backport themselves).

Cheers,

Mark

From: Floyd, P. <pj...@wa...> - 2023-08-31 14:50:32
Hi Mark

On 27/08/2023 17:46, Mark Wielaard wrote:
> And hopefully nightly test results and/or buildbot builders.
>
>> * the long-running question of what to do with macOS
> Any testers/developers/volunteers?

One main developer but intermittent progress, plus occasional other contributors. After being just about totally broken on macOS versions from 10.15 onwards, memcheck is now just about usable for small non-GUI applications on Intel hardware. Work on Apple's ARM architecture is ongoing, but I don't have an ARM Mac so I can't test that.

>> * memcheck aligned and sized checks, plus maybe C23 free_sized and
>>   free_aligned_sized, plus Linux aligned_alloc
>>
>> * detect whether a debug version of libstdc++ is being used and then
>>   use that to automatically turn on or off mismatch detection
> That is an interesting idea.

I need to do more testing to see if detecting debuginfo would be enough, or whether we also need to know if the compiler used optimization.

>> And I expect the usual steady stream of smaller fixes.
> On irc we also discussed having a "rolling release branch" where we
> put small/essential bug fixes (some of which some distros now backport
> themselves).

I'd be interested in that. On FreeBSD I maintain two ports: one that is mostly stable and based on official Valgrind releases, and a second one that is a bit more "rolling" (actually it doesn't roll as fast as it should, for lack of time, etc.). That second one is based on a GitHub repo that shadows the sourceware repo. I'm planning to switch over to using sourceware snapshots the next time I bump that port (RSN, after de-borking FreeBSD 14/15 Valgrind amd64 today). The big advantage for me is that I don't have to maintain any icky patchsets for the port. I can just fix things, push to sourceware, and then base off that. Also, I don't have any constraints like LTS and paying customers. Do you think it would be feasible for such a release branch to satisfy most distro packagers' needs?

I do occasionally look to see what distros have in their patchsets - I merged one from Debian a few days ago.

A+
Paul

From: Floyd, P. <pj...@wa...> - 2022-05-20 07:47:06
On 2022-05-11 12:15, Feiyang Chen wrote:
> As written in the README, this project is in early stage and I may use 'git
> push -f' when necessary. Please let me know if I missed anything or anything
> is incorrect. Is it possible for my work to be merged into the mainline?

Hi Feiyang

I had a look at your github repo, and it seems like a good start.

For integration into the Valgrind source repo, do you have

a. someone committed to long term maintenance?
b. a decent number of regression tests for your platform?
c. an automatic build system (the source tree contains scripts that do the actual work)?

A few other things will be expected:

1. Minimize non-functional changes.
2. Respect the house coding rules.
3. Zero breakage of other platforms.
4. Added code must conform to the existing licensing.

Point a is probably the most important. A few years back there was a fairly bad example with TileGx. This got upstreamed. Shortly afterwards the company was bought and changed its product line. TileGx support was dropped. The Valgrind port was then left unmaintained until someone deleted TileGx from the source tree. Right now macOS is close to being in that state.

A+
Paul

From: Feiyang C. <chr...@gm...> - 2022-05-21 09:32:57
On Fri, 20 May 2022 at 15:47, Floyd, Paul <pj...@wa...> wrote:
> Hi Feiyang
>
> I had a look at your github repo, and it seems like a good start.
>
Hi Paul,
Thank you for your detailed reply.
>
> For integration into the Valgrind source repo, do you have
>
> a. someone committed to long term maintenance?
>
I am committed to the long term maintenance of the LoongArch64/Linux port.
> b. a decent number of regression tests for your platform?
>
I will try my best to fix the failed cases to reach a satisfactory result.
> c. an automatic build system (the source tree contains scripts that do
> the actual work)?
>
I have now set up nightly testing on my local machine according to the
nightly/README.txt. I can provide a Loongson 3A5000 desktop computer
if needed.
>
> A few other things will be expected
>
> 1. Minimize non-functional changes.
>
> 2. Respect the house coding rules.
>
> 3. Zero breakage of other platforms.
>
> 4. Added code to conform to the existing licensing.
>
Of course, I will remember and follow these things.
>
> Point a, is probably the most important. A few years back there was a
> fairly bad example with TileGx. This got upstreamed. Shortly afterwards
> the company was bought and changed product line. TileGx support was
> dropped. The Valgrind port was then left unmaintained until someone
> deleted TileGx from the source tree. Right now macOS is close to being
> in that state.
>
This sounds like a sad story. LoongArch is a brand-new ISA and Loongson
will release more LoongArch processors in the future. As I said before,
I will maintain this port for a long time.
At this stage, I am focusing on fixing bugs in some floating point
instructions. After that, I will add more tests and gradually reduce the
failure cases of regression tests.
I noticed that Memcheck reports a large number of warnings from glibc,
which led to a lot of test failures. I wonder if I can just ignore the
warnings by adding the following to glibc-2.X.supp.in:
{
   glibc-loongarch64-cond-1
   Memcheck:Cond
   obj:*/libc.so*
}
{
   glibc-loongarch64-cond-2
   Memcheck:Cond
   obj:*/ld-linux-loongarch-*.so*
}
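For what it's worth, suppression entries like these can be tried out before editing glibc-2.X.supp.in by putting them in a standalone file and passing it with Valgrind's --suppressions= option; --gen-suppressions=all makes Valgrind print a ready-to-paste entry for each error instead of writing them by hand. The file name below is illustrative, not from the source tree:

```shell
# Illustrative: 'loongarch-glibc.supp' is a hypothetical standalone file
# holding the two entries above.
valgrind --suppressions=loongarch-glibc.supp ls

# Have Valgrind emit a ready-made suppression for every error it reports.
valgrind --gen-suppressions=all ls
```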
Thanks,
Feiyang
>
> A+
>
> Paul
From: Feiyang C. <chr...@gm...> - 2022-06-10 08:38:53
Hi team,

After fixing some decoding and stack backtracking bugs, Valgrind for loongarch64-linux successfully passed most of the tests.
https://github.com/loongson/valgrind-loongarch64

I will try to get Valgrind to pass more tests. But now I have some doubts. Could you help me, please?

There are more dubious and reachable blocks than expected in some memcheck tests, and I don't know what's wrong:

$ valgrind --leak-check=full --leak-resolution=high memcheck/tests/leak-cases
==166906== Memcheck, a memory error detector
==166906== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==166906== Using Valgrind-3.20.0.GIT and LibVEX; rerun with -h for copyright info
==166906== Command: memcheck/tests/leak-cases
==166906==
==166906== All heap blocks were freed -- no leaks are possible
==166906==
==166906== LEAK SUMMARY:
==166906==    definitely lost: 32 bytes in 2 blocks
==166906==    indirectly lost: 16 bytes in 1 blocks
==166906==      possibly lost: 112 bytes in 7 blocks
==166906==    still reachable: 80 bytes in 5 blocks
==166906==         suppressed: 0 bytes in 0 blocks
==166906== Rerun with --leak-check=full to see details of leaked memory
==166906==
leaked:     48 bytes in 3 blocks
dubious:    112 bytes in 7 blocks
reachable:  80 bytes in 5 blocks
suppressed: 0 bytes in 0 blocks
==166906==
==166906== HEAP SUMMARY:
==166906==     in use at exit: 240 bytes in 15 blocks
==166906==   total heap usage: 15 allocs, 0 frees, 240 bytes allocated
==166906==
==166906== 16 bytes in 1 blocks are possibly lost in loss record 7 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120AC3: f (leak-cases.c:78)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 16 bytes in 1 blocks are possibly lost in loss record 8 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120AF7: f (leak-cases.c:81)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 16 bytes in 1 blocks are possibly lost in loss record 9 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120B33: f (leak-cases.c:84)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 16 bytes in 1 blocks are possibly lost in loss record 10 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120B3F: f (leak-cases.c:84)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 16 bytes in 1 blocks are possibly lost in loss record 11 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120B73: f (leak-cases.c:87)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 16 bytes in 1 blocks are possibly lost in loss record 12 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120B7F: f (leak-cases.c:87)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 16 bytes in 1 blocks are definitely lost in loss record 15 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120AA7: f (leak-cases.c:74)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 32 (16 direct, 16 indirect) bytes in 1 blocks are definitely lost in loss record 16 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120ABB: f (leak-cases.c:76)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== 32 (16 direct, 16 indirect) bytes in 1 blocks are definitely lost in loss record 17 of 17
==166906==    at 0x484704C: malloc (vg_replace_malloc.c:393)
==166906==    by 0x120A27: mk (leak-cases.c:52)
==166906==    by 0x120BD7: f (leak-cases.c:91)
==166906==    by 0x120DE7: main (leak-cases.c:107)
==166906==
==166906== LEAK SUMMARY:
==166906==    definitely lost: 48 bytes in 3 blocks
==166906==    indirectly lost: 32 bytes in 2 blocks
==166906==      possibly lost: 96 bytes in 6 blocks
==166906==    still reachable: 64 bytes in 4 blocks
==166906==         suppressed: 0 bytes in 0 blocks
==166906== Reachable blocks (those to which a pointer was found) are not shown.
==166906== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==166906==
==166906== For lists of detected and suppressed errors, rerun with: -s
==166906== ERROR SUMMARY: 9 errors from 9 contexts (suppressed: 0 from 0)

I also added suppressions for four functions to glibc-2.X-helgrind.supp.in: '_dl_lookup_symbol_x', '_dl_map_object_deps', '_dl_sort_maps_dfs' and '_dl_fini'. I don't know why they cause errors:

$ valgrind --tool=helgrind helgrind/tests/bar_bad
==166897== Helgrind, a thread error detector
==166897== Copyright (C) 2007-2017, and GNU GPL'd, by OpenWorks LLP et al.
==166897== Using Valgrind-3.20.0.GIT and LibVEX; rerun with -h for copyright info
==166897== Command: helgrind/tests/bar_bad
==166897==
initialise a barrier with zero count
==166897== ---Thread-Announcement------------------------------------------
==166897==
==166897== Thread #1 is the program's root thread
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Thread #1: pthread_barrier_init: 'count' argument is zero
==166897==    at 0x4855184: pthread_barrier_init (hg_intercepts.c:1869)
==166897==    by 0x120CEB: main (bar_bad.c:44)
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Thread #1's call to pthread_barrier_init failed
==166897==    with error code 22 (EINVAL: Invalid argument)
==166897==    at 0x4855244: pthread_barrier_init (hg_intercepts.c:1877)
==166897==    by 0x120CEB: main (bar_bad.c:44)
==166897==
initialise a barrier twice
==166897== ----------------------------------------------------------------
==166897==
==166897== Thread #1: pthread_barrier_init: barrier is already initialised
==166897==    at 0x4855184: pthread_barrier_init (hg_intercepts.c:1869)
==166897==    by 0x120D3F: main (bar_bad.c:50)
==166897==
initialise a barrier which has threads waiting on it
==166897== ---Thread-Announcement------------------------------------------
==166897==
==166897== Thread #2 was created
==166897==    at 0x4978F68: clone (clone.S:56)
==166897==    by 0x490F27B: create_thread (pthread_create.c:295)
==166897==    by 0x490FC47: pthread_create@@GLIBC_2.36 (pthread_create.c:828)
==166897==    by 0x4853883: pthread_create_WRK (hg_intercepts.c:445)
==166897==    by 0x4854B2F: pthread_create@* (hg_intercepts.c:478)
==166897==    by 0x120D9F: main (bar_bad.c:59)
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Possible data race during read of size 8 at 0x4030B10 by thread #2
==166897== Locks held: none
==166897==    at 0x400B1A4: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==
==166897== This conflicts with a previous write of size 8 by thread #1
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==  Address 0x4030b10 is 2712 bytes inside data symbol "_rtld_local"
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Possible data race during write of size 8 at 0x4030B10 by thread #2
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==
==166897== This conflicts with a previous write of size 8 by thread #1
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==  Address 0x4030b10 is 2712 bytes inside data symbol "_rtld_local"
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Thread #1: pthread_barrier_init: barrier is already initialised
==166897==    at 0x4855184: pthread_barrier_init (hg_intercepts.c:1869)
==166897==    by 0x120DD3: main (bar_bad.c:65)
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Thread #1: pthread_barrier_init: threads are waiting at barrier
==166897==    at 0x4855184: pthread_barrier_init (hg_intercepts.c:1869)
==166897==    by 0x120DD3: main (bar_bad.c:65)
==166897==
destroy a barrier that has waiting threads
==166897== ----------------------------------------------------------------
==166897==
==166897== Thread #1: pthread_barrier_destroy: threads are waiting at barrier
==166897==    at 0x4855444: pthread_barrier_destroy (hg_intercepts.c:1944)
==166897==    by 0x120E5F: main (bar_bad.c:83)
==166897==
==166897== ---Thread-Announcement------------------------------------------
==166897==
==166897== Thread #4 was created
==166897==    at 0x4978F68: clone (clone.S:56)
==166897==    by 0x490F27B: create_thread (pthread_create.c:295)
==166897==    by 0x490FC47: pthread_create@@GLIBC_2.36 (pthread_create.c:828)
==166897==    by 0x4853883: pthread_create_WRK (hg_intercepts.c:445)
==166897==    by 0x4854B2F: pthread_create@* (hg_intercepts.c:478)
==166897==    by 0x120E33: main (bar_bad.c:77)
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Thread #4: pthread_barrier_wait: barrier is uninitialised
==166897==    at 0x48552E8: pthread_barrier_wait (hg_intercepts.c:1910)
==166897==    by 0x120C5B: sleep1 (bar_bad.c:23)
==166897==    by 0x4853A5F: mythread_wrapper (hg_intercepts.c:406)
==166897==    by 0x490F4CB: start_thread (pthread_create.c:442)
==166897==    by 0x4978F8B: __thread_start (clone.S:87)
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Lock at 0x4030A80 was first observed
==166897==    at 0x4850BEC: mutex_lock_WRK (hg_intercepts.c:942)
==166897==    by 0x4854F63: pthread_mutex_lock (hg_intercepts.c:958)
==166897==    by 0x400CD17: _dl_open (dl-open.c:830)
==166897==    by 0x49B311F: do_dlopen (dl-libc.c:95)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B326F: __libc_dlopen_mode (dl-libc.c:162)
==166897==    by 0x4978BD3: __libc_unwind_link_get (unwind-link.c:50)
==166897==    by 0x490D783: pthread_cancel@@GLIBC_2.36 (pthread_cancel.c:99)
==166897==    by 0x120E6B: main (bar_bad.c:85)
==166897==  Address 0x4030a80 is 2568 bytes inside data symbol "_rtld_local"
==166897==
==166897== Lock at 0x4030AD0 was first observed
==166897==    at 0x4850BEC: mutex_lock_WRK (hg_intercepts.c:942)
==166897==    by 0x4854F63: pthread_mutex_lock (hg_intercepts.c:958)
==166897==    by 0x40120C3: _dl_allocate_tls_init (dl-tls.c:539)
==166897==    by 0x4910127: allocate_stack (allocatestack.c:428)
==166897==    by 0x4910127: pthread_create@@GLIBC_2.36 (pthread_create.c:647)
==166897==    by 0x4853883: pthread_create_WRK (hg_intercepts.c:445)
==166897==    by 0x4854B2F: pthread_create@* (hg_intercepts.c:478)
==166897==    by 0x120D9F: main (bar_bad.c:59)
==166897==  Address 0x4030ad0 is 2648 bytes inside data symbol "_rtld_local"
==166897==
==166897== Possible data race during write of size 2 at 0x4032774 by thread #1
==166897== Locks held: 2, at addresses 0x4030A80 0x4030AD0
==166897==    at 0x40040F8: _dl_map_object_deps (dl-deps.c:259)
==166897==    by 0x400D23F: dl_open_worker_begin (dl-open.c:592)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x400C917: dl_open_worker (dl-open.c:782)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x400CD5B: _dl_open (dl-open.c:883)
==166897==    by 0x49B311F: do_dlopen (dl-libc.c:95)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B326F: __libc_dlopen_mode (dl-libc.c:162)
==166897==    by 0x4978BD3: __libc_unwind_link_get (unwind-link.c:50)
==166897==
==166897== This conflicts with a previous read of size 2 by thread #4
==166897== Locks held: none
==166897==    at 0x400B274: _dl_lookup_symbol_x (dl-lookup.c:907)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==  Address 0x4032774 is in a rw- mapped file /usr/lib64/ld-linux-loongarch-lp64d.so.1 segment
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Lock at 0x4030A80 was first observed
==166897==    at 0x4850BEC: mutex_lock_WRK (hg_intercepts.c:942)
==166897==    by 0x4854F63: pthread_mutex_lock (hg_intercepts.c:958)
==166897==    by 0x400CD17: _dl_open (dl-open.c:830)
==166897==    by 0x49B311F: do_dlopen (dl-libc.c:95)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B326F: __libc_dlopen_mode (dl-libc.c:162)
==166897==    by 0x4978BD3: __libc_unwind_link_get (unwind-link.c:50)
==166897==    by 0x490D783: pthread_cancel@@GLIBC_2.36 (pthread_cancel.c:99)
==166897==    by 0x120E6B: main (bar_bad.c:85)
==166897==  Address 0x4030a80 is 2568 bytes inside data symbol "_rtld_local"
==166897==
==166897== Lock at 0x4030AD0 was first observed
==166897==    at 0x4850BEC: mutex_lock_WRK (hg_intercepts.c:942)
==166897==    by 0x4854F63: pthread_mutex_lock (hg_intercepts.c:958)
==166897==    by 0x40120C3: _dl_allocate_tls_init (dl-tls.c:539)
==166897==    by 0x4910127: allocate_stack (allocatestack.c:428)
==166897==    by 0x4910127: pthread_create@@GLIBC_2.36 (pthread_create.c:647)
==166897==    by 0x4853883: pthread_create_WRK (hg_intercepts.c:445)
==166897==    by 0x4854B2F: pthread_create@* (hg_intercepts.c:478)
==166897==    by 0x120D9F: main (bar_bad.c:59)
==166897==  Address 0x4030ad0 is 2648 bytes inside data symbol "_rtld_local"
==166897==
==166897== Possible data race during read of size 8 at 0x4030B10 by thread #1
==166897== Locks held: 2, at addresses 0x4030A80 0x4030AD0
==166897==    at 0x400B1A4: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x400F6BB: elf_machine_rela (dl-machine.h:186)
==166897==    by 0x400F6BB: elf_dynamic_do_Rela (do-rel.h:147)
==166897==    by 0x400F6BB: _dl_relocate_object (dl-reloc.c:288)
==166897==    by 0x400D373: dl_open_worker_begin (dl-open.c:702)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x400C917: dl_open_worker (dl-open.c:782)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x400CD5B: _dl_open (dl-open.c:883)
==166897==    by 0x49B311F: do_dlopen (dl-libc.c:95)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B326F: __libc_dlopen_mode (dl-libc.c:162)
==166897==
==166897== This conflicts with a previous write of size 8 by thread #4
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==  Address 0x4030b10 is 2712 bytes inside data symbol "_rtld_local"
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Lock at 0x4030A80 was first observed
==166897==    at 0x4850BEC: mutex_lock_WRK (hg_intercepts.c:942)
==166897==    by 0x4854F63: pthread_mutex_lock (hg_intercepts.c:958)
==166897==    by 0x400CD17: _dl_open (dl-open.c:830)
==166897==    by 0x49B311F: do_dlopen (dl-libc.c:95)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B326F: __libc_dlopen_mode (dl-libc.c:162)
==166897==    by 0x4978BD3: __libc_unwind_link_get (unwind-link.c:50)
==166897==    by 0x490D783: pthread_cancel@@GLIBC_2.36 (pthread_cancel.c:99)
==166897==    by 0x120E6B: main (bar_bad.c:85)
==166897==  Address 0x4030a80 is 2568 bytes inside data symbol "_rtld_local"
==166897==
==166897== Lock at 0x4030AD0 was first observed
==166897==    at 0x4850BEC: mutex_lock_WRK (hg_intercepts.c:942)
==166897==    by 0x4854F63: pthread_mutex_lock (hg_intercepts.c:958)
==166897==    by 0x40120C3: _dl_allocate_tls_init (dl-tls.c:539)
==166897==    by 0x4910127: allocate_stack (allocatestack.c:428)
==166897==    by 0x4910127: pthread_create@@GLIBC_2.36 (pthread_create.c:647)
==166897==    by 0x4853883: pthread_create_WRK (hg_intercepts.c:445)
==166897==    by 0x4854B2F: pthread_create@* (hg_intercepts.c:478)
==166897==    by 0x120D9F: main (bar_bad.c:59)
==166897==  Address 0x4030ad0 is 2648 bytes inside data symbol "_rtld_local"
==166897==
==166897== Possible data race during write of size 8 at 0x4030B10 by thread #1
==166897== Locks held: 2, at addresses 0x4030A80 0x4030AD0
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x400F6BB: elf_machine_rela (dl-machine.h:186)
==166897==    by 0x400F6BB: elf_dynamic_do_Rela (do-rel.h:147)
==166897==    by 0x400F6BB: _dl_relocate_object (dl-reloc.c:288)
==166897==    by 0x400D373: dl_open_worker_begin (dl-open.c:702)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x400C917: dl_open_worker (dl-open.c:782)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x400CD5B: _dl_open (dl-open.c:883)
==166897==    by 0x49B311F: do_dlopen (dl-libc.c:95)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B326F: __libc_dlopen_mode (dl-libc.c:162)
==166897==
==166897== This conflicts with a previous write of size 8 by thread #4
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==  Address 0x4030b10 is 2712 bytes inside data symbol "_rtld_local"
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Possible data race during read of size 8 at 0x4030B10 by thread #1
==166897== Locks held: none
==166897==    at 0x400B1A4: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x49B3177: do_dlsym (dl-libc.c:105)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B32F7: __libc_dlsym (dl-libc.c:190)
==166897==    by 0x4978BE7: __libc_unwind_link_get (unwind-link.c:59)
==166897==    by 0x490D783: pthread_cancel@@GLIBC_2.36 (pthread_cancel.c:99)
==166897==    by 0x120E6B: main (bar_bad.c:85)
==166897==
==166897== This conflicts with a previous write of size 8 by thread #4
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897==  Address 0x4030b10 is 2712 bytes inside data symbol "_rtld_local"
==166897==
==166897== ----------------------------------------------------------------
==166897==
==166897== Possible data race during write of size 8 at 0x4030B10 by thread #1
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x49B3177: do_dlsym (dl-libc.c:105)
==166897==    by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208)
==166897==    by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227)
==166897==    by 0x49B3073: dlerror_run (dl-libc.c:45)
==166897==    by 0x49B32F7: __libc_dlsym (dl-libc.c:190)
==166897==    by 0x4978BE7: __libc_unwind_link_get (unwind-link.c:59)
==166897==    by 0x490D783: pthread_cancel@@GLIBC_2.36 (pthread_cancel.c:99)
==166897==    by 0x120E6B: main (bar_bad.c:85)
==166897==
==166897== This conflicts with a previous write of size 8 by thread #4
==166897== Locks held: none
==166897==    at 0x400B1C0: _dl_lookup_symbol_x (dl-lookup.c:824)
==166897==    by 0x4010953: _dl_fixup (dl-runtime.c:95)
==166897==    by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62)
==166897==    by 0xFFFFFFFFFFFFFFFF: ???
==166897== Address 0x4030b10 is 2712 bytes inside data symbol "_rtld_local" ==166897== destroy a barrier that was never initialised ==166897== ---------------------------------------------------------------- ==166897== ==166897== Thread #1: pthread_barrier_destroy: barrier was never initialised ==166897== at 0x4855444: pthread_barrier_destroy (hg_intercepts.c:1944) ==166897== by 0x120EFB: main (bar_bad.c:100) ==166897== ==166897== ---------------------------------------------------------------- ==166897== ==166897== Lock at 0x4030A80 was first observed ==166897== at 0x4850BEC: mutex_lock_WRK (hg_intercepts.c:942) ==166897== by 0x4854F63: pthread_mutex_lock (hg_intercepts.c:958) ==166897== by 0x400CD17: _dl_open (dl-open.c:830) ==166897== by 0x49B311F: do_dlopen (dl-libc.c:95) ==166897== by 0x49B2D2B: _dl_catch_exception (dl-error-skeleton.c:208) ==166897== by 0x49B2DEF: _dl_catch_error (dl-error-skeleton.c:227) ==166897== by 0x49B3073: dlerror_run (dl-libc.c:45) ==166897== by 0x49B326F: __libc_dlopen_mode (dl-libc.c:162) ==166897== by 0x4978BD3: __libc_unwind_link_get (unwind-link.c:50) ==166897== by 0x490D783: pthread_cancel@@GLIBC_2.36 (pthread_cancel.c:99) ==166897== by 0x120E6B: main (bar_bad.c:85) ==166897== Address 0x4030a80 is 2568 bytes inside data symbol "_rtld_local" ==166897== ==166897== Possible data race during write of size 2 at 0x4032774 by thread #1 ==166897== Locks held: 1, at address 0x4030A80 ==166897== at 0x4011470: _dl_sort_maps_dfs (dl-sort-maps.c:188) ==166897== by 0x4011470: _dl_sort_maps (dl-sort-maps.c:301) ==166897== by 0x4006133: _dl_fini (dl-fini.c:99) ==166897== by 0x48CDF63: __run_exit_handlers (exit.c:113) ==166897== by 0x48CE0D7: exit (exit.c:143) ==166897== by 0x120F2B: main (bar_bad.c:110) ==166897== ==166897== This conflicts with a previous read of size 2 by thread #4 ==166897== Locks held: none ==166897== at 0x400B274: _dl_lookup_symbol_x (dl-lookup.c:907) ==166897== by 0x4010953: _dl_fixup (dl-runtime.c:95) ==166897== by 
0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62) ==166897== by 0xFFFFFFFFFFFFFFFF: ??? ==166897== Address 0x4032774 is in a rw- mapped file /usr/lib64/ld-linux-loongarch-lp64d.so.1 segment ==166897== ==166897== ---------------------------------------------------------------- ==166897== ==166897== Possible data race during write of size 2 at 0x4032774 by thread #1 ==166897== Locks held: none ==166897== at 0x4006178: _dl_fini (dl-fini.c:120) ==166897== by 0x48CDF63: __run_exit_handlers (exit.c:113) ==166897== by 0x48CE0D7: exit (exit.c:143) ==166897== by 0x120F2B: main (bar_bad.c:110) ==166897== ==166897== This conflicts with a previous read of size 2 by thread #4 ==166897== Locks held: none ==166897== at 0x400B274: _dl_lookup_symbol_x (dl-lookup.c:907) ==166897== by 0x4010953: _dl_fixup (dl-runtime.c:95) ==166897== by 0x4012B27: _dl_runtime_resolve (dl-trampoline.S:62) ==166897== by 0xFFFFFFFFFFFFFFFF: ??? ==166897== Address 0x4032774 is in a rw- mapped file /usr/lib64/ld-linux-loongarch-lp64d.so.1 segment ==166897== ==166897== ==166897== Use --history-level=approx or =none to gain increased speed, at ==166897== the cost of reduced accuracy of conflicting-access information ==166897== For lists of detected and suppressed errors, rerun with: -s ==166897== ERROR SUMMARY: 37 errors from 17 contexts (suppressed: 8 from 7) Thanks, Feiyang |
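[Editorial note: the `_rtld_local` races in the log above are all inside glibc's dynamic loader, not in the test program. Reports like these are conventionally silenced with Helgrind suppressions rather than fixed in the port itself. A minimal sketch of such an entry is shown below; the entry name is arbitrary and the frame to match would be taken from the stack traces above (Valgrind's shipped glibc-*.supp files are where entries of this kind normally live):]

```
{
   loongarch64-glibc-rtld-lookup-race
   Helgrind:Race
   fun:_dl_lookup_symbol_x
}
```

Such a file is passed with --suppressions=<file>, or generated interactively with --gen-suppressions=yes.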
|
From: Feiyang C. <chr...@gm...> - 2022-07-01 10:04:18
|
Hi, team,

I recently added some tests for loongarch64-linux. And after I applied a patch for glibc, only a few tests fail now. I will keep working on these tests. Any advice for me?

== 642 tests, 8 stderr failures, 0 stdout failures, 2 stderrB failures, 0 stdoutB failures, 1 post failure ==
gdbserver_tests/hginfo            (stderrB)
gdbserver_tests/mcblocklistsearch (stderrB)
memcheck/tests/leak-cases-full    (stderr)
memcheck/tests/leak-cases-summary (stderr)
memcheck/tests/leak-cycle         (stderr)
memcheck/tests/leak-segv-jmp      (stderr)
memcheck/tests/leak-tree          (stderr)
memcheck/tests/lks                (stderr)
helgrind/tests/tls_threads        (stderr)
drd/tests/pth_mutex_signal        (stderr)
massif/tests/big-alloc            (post)

Thanks,
Feiyang
|
|
From: Floyd, P. <pj...@wa...> - 2022-07-01 10:15:22
|
On 2022-07-01 12:03, Feiyang Chen wrote:
> Hi, team,
>
> I recently added some tests for loongarch64-linux. And after I applied
> a patch for glibc, only a few tests fail now. I will keep working on
> these tests. Any advice for me?
>
> == 642 tests, 8 stderr failures, 0 stdout failures, 2 stderrB
> failures, 0 stdoutB failures, 1 post failure ==
> gdbserver_tests/hginfo (stderrB)
> gdbserver_tests/mcblocklistsearch (stderrB)
> memcheck/tests/leak-cases-full (stderr)
> memcheck/tests/leak-cases-summary (stderr)
> memcheck/tests/leak-cycle (stderr)
> memcheck/tests/leak-segv-jmp (stderr)
> memcheck/tests/leak-tree (stderr)
> memcheck/tests/lks (stderr)
> helgrind/tests/tls_threads (stderr)
> drd/tests/pth_mutex_signal (stderr)
> massif/tests/big-alloc (post)

Hi

Looks promising.

drd/tests/pth_mutex_signal fails on most platforms; it needs work in Valgrind to correctly resume syscalls that are supposed to be resumed by the system.

helgrind/tests/tls_threads is very platform-specific and requires debuginfo to turn off the pthread stack cache.

The leak testcases can be tricky. I recently fixed a bunch of these on x86 FreeBSD, where the last allocation was leaving the address of the allocated memory in the ECX register. You may need to do something similar to clobber register contents to get clean results.

For massif/tests/big-alloc, you may need to add a platform-specific filter to big-alloc.vgtest, like the existing --ignore-fn= options.

A+
Paul
|
|
From: Feiyang C. <chr...@gm...> - 2022-07-04 08:55:10
|
Hi,

Thank you very much.

For the memcheck leak tests, I added CLEAR_CALLER_SAVED_REGS and do_syscall_WRK macros. For massif/tests/big-alloc, I added a .exp file like the ones for ppc64 and x86 FreeBSD.

== 642 tests, 2 stderr failures, 0 stdout failures, 1 stderrB failure, 0 stdoutB failures, 0 post failures ==
gdbserver_tests/hginfo     (stderrB)
helgrind/tests/tls_threads (stderr)
drd/tests/pth_mutex_signal (stderr)

gdbserver_tests/hginfo fails because the gdb/vgdb monitor shows one extra lock record, about "_rtld_local". Could I filter out all Helgrind information about locks except the one named "mx" on Linux, as is already done for Solaris?

tls_threads fails because the pthread stack cache cannot be disabled with glibc >= 2.34: "stack_cache_actsize" cannot be found, as it has been hidden. It seems that we should use the new tunable "glibc.pthread.stack_cache_size" to configure the size of the thread stack cache, but I don't know how to make it work with the existing mechanism.

Thanks,
Feiyang

On Fri, 1 Jul 2022 at 18:15, Floyd, Paul <pj...@wa...> wrote:
>
>
> On 2022-07-01 12:03, Feiyang Chen wrote:
> > Hi, team,
> >
> > I recently added some tests for loongarch64-linux. And after I applied
> > a patch for glibc, only a few tests fail now. I will keep working on
> > these tests. Any advice for me?
> >
> > == 642 tests, 8 stderr failures, 0 stdout failures, 2 stderrB
> > failures, 0 stdoutB failures, 1 post failure ==
> > gdbserver_tests/hginfo (stderrB)
> > gdbserver_tests/mcblocklistsearch (stderrB)
> > memcheck/tests/leak-cases-full (stderr)
> > memcheck/tests/leak-cases-summary (stderr)
> > memcheck/tests/leak-cycle (stderr)
> > memcheck/tests/leak-segv-jmp (stderr)
> > memcheck/tests/leak-tree (stderr)
> > memcheck/tests/lks (stderr)
> > helgrind/tests/tls_threads (stderr)
> > drd/tests/pth_mutex_signal (stderr)
> > massif/tests/big-alloc (post)
>
> Hi
>
> Looks promising.
>
> drd/tests/pth_mutex_signal fails on most platforms and it needs work for
> Valgrind to correctly resume syscalls that are supposed to be resumed by
> the system.
>
> helgrind/tests/tls_threads is very platform-specific and requires
> debuginfo to turn off the pthread stackcache.
>
> The leak testcases can be tricky. I recently fixed a bunch of these on
> x86 FreeBSD where the last allocation was leaving the address of
> allocated memory in the ECX register. You may need to do something
> similar to clobber register contents to get clean results.
>
> massif/tests/big-alloc you may need to add a platform-specific filter to
> big-alloc.vgtest like the existing --ignore-fn= options
>
> A+
> Paul
>
>
> _______________________________________________
> Valgrind-developers mailing list
> Val...@li...
> https://lists.sourceforge.net/lists/listinfo/valgrind-developers
|
|
From: Feiyang C. <chr...@gm...> - 2022-07-06 09:42:22
|
Hi,

For gdbserver_tests/hginfo, I filtered only on loongarch by using arch_test for now.

For helgrind/tests/tls_threads, I added a new script to determine whether the pthread tunable is supported. If it is, a new test that deactivates the NPTL pthread stack cache via the environment variable is used; otherwise the original test is used.

The current regression test results are as follows:

== 642 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
drd/tests/pth_mutex_signal (stderr)

Could this indicate that the loongarch port is almost complete (at least on the current system)? Do you have any advice on what I should do in the next phase?

Repo: https://github.com/loongson/valgrind-loongarch64

P.S. The loongarch port brings some new IROps and a new IRMBusEvent:

* Iop_ScaleBF64
* Iop_ScaleBF32
* Iop_RSqrtF64
* Iop_RSqrtF32
* Iop_LogBF64
* Iop_LogBF32
* Iop_MaxNumAbsF64
* Iop_MinNumAbsF64
* Iop_MaxNumF32
* Iop_MinNumF32
* Iop_MaxNumAbsF32
* Iop_MinNumAbsF32
* Imbe_InsnFence

Thanks,
Feiyang
|
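[Editorial note: the gating described above fits Valgrind's regression harness, where a .vgtest file's prereq: command must exit 0 for the test to run. A hypothetical sketch of such a variant follows; the file name and the check script are made-up illustrations, not the port's actual files:]

```
# tls_threads_tunable.vgtest -- hypothetical tunable-gated variant
prereq: ../../tests/check_stack_cache_tunable
prog: tls_threads
env: GLIBC_TUNABLES=glibc.pthread.stack_cache_size=1
```

If the prereq script exits non-zero (tunable unsupported), vg_regtest skips this variant and the original test can run instead.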
|
From: Feiyang C. <chr...@gm...> - 2022-07-15 08:31:10
|
Hi,

I've been working on creating some new tests for loongarch64 recently. I found some bugs and fixed them. Meanwhile, I will use git rebase to keep my code up to date.

Can I get some guidance on the next steps for having this port upstreamed? Where should I submit these patches for review?

Thanks,
Feiyang
|