Just wanted to pass back some feedback that confused me for an hour or two. I am trying to use sync mode, but files were never being skipped, even when running the same command twice in rapid succession. After a lot of adding log lines, recompiling, running, and repeating, I finally realised that since I was using the uftpd flags "-D /tmp/dest -T /tmp/receiving", completed files were being moved to a different directory, so when uftpd looked for them to see if they were the same, there was nothing in /tmp/receiving and each file was simply downloaded again. Dropped -T and everything worked perfectly - obvious once you know!! It may be worth adding this to the documentation to help others avoid hitting the same thing.
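For anyone else who hits this, here are the two invocations side by side (the paths are just the ones from my setup, and check the uftpd man page for the exact flag semantics):

```shell
# Broken for sync mode: -T moves completed files out of /tmp/receiving,
# so the sync comparison finds nothing there and re-downloads every file.
uftpd -D /tmp/dest -T /tmp/receiving

# Works: without -T, files end up where the sync check looks for them.
uftpd -D /tmp/dest
```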
One question - do you have any intention of putting the source code on GitHub to allow others to fork, submit pull requests, etc.? (I'd like to submit a few pull requests based on the work I'm doing.) If not, would you have any objections to someone else doing this (with the correct attributions per the GPLv3, of course)? I'm looking at uftp as a mechanism for Docker container distribution over networks where multicast is highly desirable.
I've also been using Docker to do some scale testing, i.e. running uftpd in containers and sending from the host. I can run about 300-400 instances on one large host, and I'm hoping to use more hosts to really stress this (happy to share my test scripts if they're of any use). How high has anyone pushed uftpd? Are you aware of any fundamental bottlenecks preventing tens or even hundreds of thousands of receivers? I'm wondering if the proxies would help with scale, i.e. by creating some sort of tiered architecture?
Many thanks in advance for a fantastic system that really does exactly what it says it will!
Guy
Thanks for the feedback. I'll update the documentation to note that sync mode is not compatible with temp directories.
Regarding the source, I prefer to keep it in a private SVN repo so I have full control over it. I sometimes issue commercial licenses as well as GPLv3, and allowing others to contribute directly to the codebase can result in mixed copyright ownership that would hinder the issuance of commercial licenses. If you want to push the code to GitHub, feel free, although you wouldn't be the first to do so.
The largest deployment I'm aware of is around 3000 receivers. The server can handle feedback from over 1000 receivers (clients, or proxies on behalf of clients), and each proxy can handle over 1000 clients, so in theory a server could handle over a million receivers (1000 proxies with 1000 receivers each). The server is currently limited to 100,000 receivers to keep memory usage down, but that can be easily changed via #define MAXDEST in server.h.
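For reference, raising the cap is a one-line edit to server.h (the exact line may differ slightly between versions):

```c
/* server.h: cap on the number of receivers the server will track.
   Raise this if you need more; server memory usage grows accordingly. */
#define MAXDEST 100000
```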