When I run donate-cpu, I receive the following errors:
Uploading results.. 571532 bytes
Results have been successfully uploaded.
Uploading information output.. 14094142 bytes
Upload error: [Errno 104] Connection reset by peer
Retrying upload in 30 seconds
Upload error: [Errno 104] Connection reset by peer
Retrying upload in 30 seconds
Upload error: [Errno 104] Connection reset by peer
Upload permanently failed!
Sleep 5 seconds..
Get Cppcheck..
error: pathspec 'master' did not match any file(s) known to git.
The entire output of the command python /cppcheck-2.1/tools/donate-cpu.py -j2 can be found at https://pastebin.com/R1F7WXjx
Thanks! Sorry, but that script is a bit old. I made some changes on June 15th, two days after the cppcheck-2.1 release.
Thanks. I now pull from GitHub main before running donate-cpu and it is working better. There are, however, two issues:
[1] Timeout when processing gcc-riscv64-unknown-elf_8.3.0.
Get Cppcheck..
Already on 'main'
Your branch is up to date with 'origin/main'.
Already up to date.
Connecting to server to get Cppcheck versions..
Compiling Cppcheck..
make: 'cppcheck' is up to date.
Cppcheck 2.2 dev
Connecting to server to get assigned work..
Download package ftp://ftp.de.debian.org/debian/pool/main/g/gcc-riscv64-unknown-elf/gcc-riscv64-unknown-elf_8.3.0.2019.08+dfsg.orig.tar.gz
--2020-07-20 11:41:58-- ftp://ftp.de.debian.org/debian/pool/main/g/gcc-riscv64-unknown-elf/gcc-riscv64-unknown-elf_8.3.0.2019.08+dfsg.orig.tar.gz
=> ‘/home/user/cppcheck-donate-cpu-workfolder/temp.tgz’
Resolving ftp.de.debian.org (ftp.de.debian.org)... 141.76.2.4
Connecting to ftp.de.debian.org (ftp.de.debian.org)|141.76.2.4|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /debian/pool/main/g/gcc-riscv64-unknown-elf ... done.
==> SIZE gcc-riscv64-unknown-elf_8.3.0.2019.08+dfsg.orig.tar.gz ... 90152098
==> PASV ... done. ==> RETR gcc-riscv64-unknown-elf_8.3.0.2019.08+dfsg.orig.tar.gz ... done.
Length: 90152098 (86M) (unauthoritative)
gcc-riscv64-unknown-elf_8.3.0 100%[=================================================>] 85.98M 4.56MB/s in 22s
[2] Upload failures when the size exceeds 1.5 MB. Server-side/web-server limit? All of these appear when "Uploading information output" is attempted.
Last edit: Anand Bhat 2020-07-20
Command to replicate the upload failure:
python donate-cpu.py --package=ftp://ftp.de.debian.org/debian/pool/main/g/glide/glide_2002.04.10ds1.orig.tar.gz
It's an intentional limitation at 1 MB. We could abort more nicely.
Ok. If this is deliberate, an early exit from upload_info when len(info_output) exceeds the limit may be better, as it would save the time and connection overhead of the 3 retries.
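For illustration, a rough sketch of what such an early exit could look like (the function signature, the 1 MB constant, the socket handling and the printed messages below are assumptions for the sketch, not the actual donate-cpu.py code):

import socket
import time

MAX_INFO_SIZE = 1024 * 1024  # assumed 1 MB server-side limit

def upload_info(package, info_output, server_address, max_tries=3):
    # Illustrative only: names and protocol are assumptions, not donate-cpu.py itself.
    data = (package + '\n' + info_output).encode('utf-8', errors='ignore')
    if len(data) > MAX_INFO_SIZE:
        # Early exit: skip the upload (and the 2 x 30 s retry sleeps) when the
        # server would reject the payload anyway.
        print('Info output too large ({} bytes), skipping upload.'.format(len(data)))
        return False
    for attempt in range(max_tries):
        try:
            with socket.create_connection(server_address, timeout=30) as sock:
                sock.sendall(data)
            print('Information output has been successfully uploaded.')
            return True
        except OSError as err:
            print('Upload error: ' + str(err))
            if attempt + 1 < max_tries:
                print('Retrying upload in 30 seconds')
                time.sleep(30)
    print('Upload permanently failed!')
    return False

The point is only the size check before the retry loop; the rest just mirrors the retry/sleep behaviour visible in the log above.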
Would it also be possible to increase the limit? I have noticed a few packages falling into this category. That is, unless you really don't need those.
But yes, the more cores you throw at the problem, the more significant those 2 x 30s become.
Last edit: Andreas Grob 2020-07-25
It can be increased, but can we make this safer somehow? If an attacker sends random garbage to this server, it will happily store it on disk. Do you have suggestions? Maybe start using some personal logins.
You could use replication, i.e. distribute one job to multiple clients and then compare the results with a suitable metric. Of course this would slow down overall progress.
The problem with logins, e.g. for elevated upload permissions, might be how to assess trustworthiness in the first place.
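As a rough illustration of the replication idea (hypothetical names; it assumes each client's result arrives as plain text and that stripping a timing line is enough normalisation), the server could hash a normalised copy of each upload and only keep a result once a quorum of independent clients agree:

import hashlib
from collections import Counter

QUORUM = 2  # assumed: number of independent clients that must agree

def normalise(result_text):
    # Drop obviously client-specific lines (timings etc.) before comparing;
    # deciding exactly what to strip is the "suitable metric" question.
    return '\n'.join(line for line in result_text.splitlines()
                     if not line.startswith('elapsed-time:'))

def accept_result(uploads):
    # uploads: list of (client_id, result_text) pairs for the same package.
    if not uploads:
        return None
    digests = Counter(hashlib.sha256(normalise(text).encode('utf-8')).hexdigest()
                      for _client, text in uploads)
    digest, votes = digests.most_common(1)[0]
    # Keep the result only if at least QUORUM clients produced the same output.
    return digest if votes >= QUORUM else None

This would not authenticate anyone, but a single broken or malicious client could no longer poison stored results on its own; the cost is that each package has to be processed several times, which is the slowdown mentioned above.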