Activity for dispy

  • Mikhail T. posted a comment on discussion Open Discussion

    Hello there. The last release is years old, and no one seems to be addressing the bug reports -- at least, not those on GitHub. Is the project still alive, or do users need to migrate to one of the other parallelization frameworks?

  • Mikhail T. posted a comment on ticket #21

    Thank you for such a prompt reaction. It will take me a while to get to testing it, though -- for now I'm still working with dispy-4.10.5...

  • Giridhar Pemmasani posted a comment on ticket #21

    I have emailed you an implementation of this feature.

  • Giridhar Pemmasani posted a comment on ticket #21

    I will look into this and find a generic way to add resource requirements. I will post a patch to try once ready.

  • Mikhail T. posted a comment on ticket #21

    Your examples presume the tasks are all homogeneous -- that a node either is or is not suitable for all of them. That's different from the problems I'm trying to solve, both of which stem from the tasks being heterogeneous. Any node in my setup can process any task -- there is no need for any node to be excluded. But the tasks have different resource requirements -- something a smart scheduler can help address by, for example, mixing heavy CPU users together with heavy RAM users. Currently, when determining,...

  • Giridhar Pemmasani posted a comment on ticket #21

    The second type of resource check is also possible with a setup function, which runs on each node before any jobs are submitted. For example, this function can check available GPUs / packages etc. In fact, using setup to load all modules is better, as it not only checks that packages are available on a node but avoids loading modules in computations (saves memory and time); e.g.,

        def node_setup(data_file):
            global tensorflow, data  # global variables are available to jobs
            import tensorflow
            with open(data_file,...
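
    A self-contained sketch of this pattern, assuming dispy's setup/depends parameters as described above; the data file name and compute body are hypothetical, and tensorflow is only an example dependency:

        import functools
        import dispy

        def node_setup(data_file):
            # Names declared global here become available to jobs on this node.
            global data
            try:
                import tensorflow  # fails early on nodes lacking the package
            except ImportError:
                return -1  # non-zero: dispy will not schedule jobs on this node
            with open(data_file, 'rb') as fd:
                data = fd.read()  # loaded once per node, reused by every job
            return 0

        def compute(n):
            # 'data' was loaded by node_setup, once per node
            return len(data) + n

        if __name__ == '__main__':
            data_file = 'input.dat'  # hypothetical input file
            cluster = dispy.JobCluster(compute, depends=[data_file],
                                       setup=functools.partial(node_setup, data_file))
            job = cluster.submit(0)
            print(job())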

  • Giridhar Pemmasani posted a comment on ticket #21

    This is already possible with the NodeAllocate feature. For example, if you would like to use only nodes that have 32G RAM and 16 cores, you can filter nodes with:

        if __name__ == '__main__':
            ...
            class AllocateNode(dispy.NodeAllocate):
                def allocate(self, cluster, ip_addr, name, cpus, avail_info=None, platform='*'):
                    if cpus < 16:
                        return 0  # don't use this node
                    if avail_info.memory < (32*1024*1024*1024):
                        return 0  # don't use this node
                    return (cpus - 2)  # reserve 2 cores for other uses
            cluster = dispy.JobCluster(compute,...
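
    A runnable sketch of the same filter, assuming the NodeAllocate class and the avail_info.memory field shown above; the compute body is a placeholder:

        import dispy

        class AllocateNode(dispy.NodeAllocate):
            def allocate(self, cluster, ip_addr, name, cpus, avail_info=None, platform='*'):
                if cpus < 16:
                    return 0  # too few cores: don't use this node
                if avail_info is None or avail_info.memory < (32 * 1024 * 1024 * 1024):
                    return 0  # less than 32G RAM (or unknown): don't use this node
                return cpus - 2  # reserve 2 cores for other uses

        def compute(n):
            return n * n

        if __name__ == '__main__':
            # '*' asks dispy to discover nodes; each discovered node is vetted
            # by AllocateNode.allocate before any jobs are sent to it.
            cluster = dispy.JobCluster(compute, nodes=[AllocateNode('*')])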

  • Mikhail T. created ticket #21

    Feature-request: add different resource-consumption per task

  • dispy released /dispy-4.15.2.tar.gz

  • Mikhail T. modified a comment on ticket #20

    Ok, then I've misunderstood the usage/purpose of SharedJobCluster -- I only attempted to use it when hitting bug 19... Never mind then -- we don't even have the scheduler running. Sorry for the noise.

  • Giridhar Pemmasani posted a comment on ticket #19

    Well, for the last couple of years I have been working on improving and adding features to dispy to make it an enterprise product (with support). One major feature is integration with multiple cloud platforms, so applications can dynamically allocate and use cloud computing (in addition to local/remote compute nodes) to meet deadlines / peak demands. If you are using dispy in an enterprise, would you or your team be interested in exploring? If so, email me at pgiri@yahoo.com or giri@uniaccel.com and I will...

  • Giridhar Pemmasani posted a comment on ticket #20

    Hmm, a client using SharedJobCluster only connects to the scheduler; it doesn't communicate with nodes. So there is no need to support more than one address for the client. You may want to use the -i option more than once when starting dispyscheduler.py to use multiple addresses at the scheduler.
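
    For example, a hypothetical invocation binding the scheduler to two addresses by repeating -i (the addresses are placeholders):

        dispyscheduler.py -i 192.168.1.10 -i 10.0.0.5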

  • Mikhail T. posted a comment on ticket #20

    More specifically, this is our code using JobCluster. I wish the same worked for SharedJobCluster, but it does not...

        def my_ip_addresses(family = socket.AF_INET):
            for i, nics in psutil.net_if_addrs().items():
                for nic in nics:
                    if nic.family == family:
                        yield nic.address
        ...
        cluster = dispy.JobCluster(
            execTask,
            ...
            ip_addr = list(my_ip_addresses())
        )
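
    For reference, a self-contained version of that snippet (psutil is a third-party package; execTask here is a stand-in for the poster's job function):

        import socket

        import dispy
        import psutil

        def my_ip_addresses(family=socket.AF_INET):
            # Yield every address of the given family across all interfaces.
            for name, nics in psutil.net_if_addrs().items():
                for nic in nics:
                    if nic.family == family:
                        yield nic.address

        def execTask(n):
            return n * n

        if __name__ == '__main__':
            # Bind the client to all of its IPv4 addresses, as in the comment.
            cluster = dispy.JobCluster(execTask, ip_addr=list(my_ip_addresses()))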

  • Mikhail T. created ticket #20

    SharedJobCluster does not accept a list of IP-addresses in 4.10.x

  • Mikhail T. posted a comment on ticket #19

    If you don't mind, can you describe your use case? I forget the details, but I recall that when I attempted to upgrade our dispy pip here 2+ years ago -- to 4.11.0 -- the existing code stopped working. I think it had to do with the tasks remaining mutable in 4.10.x -- but in 4.11 they became private copies after cluster.submit() returns. Our code is written to retry any failed task once more. The failure count is part of the task -- which became impossible in 4.11 -- that's my best recollection...

  • Giridhar Pemmasani posted a comment on ticket #19

    Thanks for submitting the bug and fix. As you know, 4.10.x is quite an old release. While I can commit this fix in GitHub, releasing it would confuse many users who use newer releases. Since this issue is fixed in 4.11.0, I would prefer not to push this. If you don't mind, can you describe your use case? Thanks again!

  • Mikhail T. created ticket #19

    JobCluster.submit_node() broken in 4.10.6

  • dispy released /dispy-4.15.1.tar.gz

  • dispy released /dispy-4.15.0.tar.gz

  • dispy released /dispy-4.14.0.tar.gz

  • dispy released /dispy-4.13.0.tar.gz

  • dispy released /dispy-4.12.4.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.12.3 released

  • Giridhar Pemmasani created a blog post

    Version 4.12.2 released

  • Giridhar Pemmasani created a blog post

    Version 4.12.1 released

  • dispy released /dispy-4.12.1.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.12.0 released

  • dispy released /dispy-4.12.0.tar.gz

  • Giridhar Pemmasani posted a comment on discussion Open Discussion

    I have tested with Windows, but not as much as with Linux, so likely there are issues with Windows. Can you try passing type to getaddrinfo, so the try block is:

        info = None
        for addr in socket.getaddrinfo(node, None, 0, socket.SOCK_STREAM):
            if not info or addr[0] == socket.AF_INET:
                info = addr
        assert info

    Thanks.
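
    A self-contained version of that block, with 'localhost' standing in for the node being resolved:

        import socket

        node = 'localhost'  # placeholder for the node's host name or address
        info = None
        for addr in socket.getaddrinfo(node, None, 0, socket.SOCK_STREAM):
            # Prefer an IPv4 result when one is available
            if not info or addr[0] == socket.AF_INET:
                info = addr
        assert info
        print(info[4])  # the resolved socket address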

  • Stephen Fan posted a comment on discussion Open Discussion

    Also note I am using dispy 4.11.0

  • Stephen Fan posted a comment on discussion Open Discussion

    Hi, I am running dispy on the Windows platform (both Windows 7 and 10 were tested). I noticed that the example code obj_instances.py won't work if I pass in the nodes as either localhost's IP address or a remote Windows machine's IP address.

        if __name__ == '__main__':
            import random, dispy
            cluster = dispy.JobCluster(compute, nodes=["IP address for my PC"], depends=[C], secret="xxx")
            jobs = []
            for i in range(10):
                c = C(i, random.uniform(1, 3))  # create object of C
                job = cluster.submit(c)  # it is sent to a node...
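
    For context, a self-contained version of that snippet, modeled on dispy's obj_instances.py example (the class body, node address, and secret are placeholders):

        import random
        import dispy

        class C(object):
            def __init__(self, i, n):
                self.i = i
                self.n = n

        def compute(obj):
            # Runs on a node; obj is the instance of C sent with the job.
            import time
            time.sleep(obj.n)
            return (obj.i, obj.n)

        if __name__ == '__main__':
            cluster = dispy.JobCluster(compute, nodes=['a.b.c.d'],
                                       depends=[C], secret='xxx')
            jobs = []
            for i in range(10):
                c = C(i, random.uniform(1, 3))  # create an object of C
                jobs.append(cluster.submit(c))  # sent to a node for execution
            for job in jobs:
                print(job())  # waits for the job and prints its result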

  • SagarD modified a comment on discussion Help

    sample.py from dispy/examples is giving an error, as follows. I have started the dispynode on the node (server), which is outside the network (suppose the IP address of the node is a.b.c.d, whereas the client's IP address is e.f.g.h). After running sample.py, which has the following code for the cluster: cluster = dispy.JobCluster(compute, nodes = ['a.b.c.d']). I am using Python 3.8.0. It is giving the error shown in the attached screenshot.

  • Giridhar Pemmasani created a blog post

    Version 4.11.1 released

  • dispy released /dispy-4.11.1.tar.gz

  • dispy released /dispy-4.11.0.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.11.0 released

  • Giridhar Pemmasani created a blog post

    Version 4.10.6 released

  • dispy released /dispy-4.10.6.tar.gz

  • dispy released /dispy-4.10.5.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.10.5 released

  • dispy released /dispy-4.10.4.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.10.4 released

  • Giridhar Pemmasani created a blog post

    Version 4.10.3 released

  • dispy released /dispy-4.10.3.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.10.2 released

  • Giridhar Pemmasani created a blog post

    Version 4.10.1 released

  • dispy released /dispy-4.10.1.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.10.0 released

  • dispy released /dispy-4.10.0.tar.gz

  • Giridhar Pemmasani posted a comment on discussion Help

    It depends on your setup and how you would like to use remote nodes. If you would like to use SSH port forwarding, sshportfw.py can be used. This should work even if the client is in a private network (behind a router that is on a public network). However, it is not viable if there are many nodes or if you want to use it many times. An alternative would be to use the setup in Cloud Computing. This requires changing the router to forward port 51347 to your client computer. Let me know which setup is suitable and if you...
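
    A hypothetical forwarding command for the first approach (key file, user, and host are placeholders); it forwards the client's job-result port 51347 from the remote node back to the client:

        ssh -i key.pem -R 51347:127.0.0.1:51347 user@remote-node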

  • David William Carroll posted a comment on discussion Help

    Hi, I wondered if you might have a simple template to get dispy working on AWS for distributed processing -- one Python process per server, with a remote client (master) at my office. The documentation and examples seem more complex than my use case. Thanks

  • Giridhar Pemmasani created a blog post

    Version 4.9.1 released

  • dispy released /dispy-4.9.1.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.9.0 released

  • dispy released /dispy-4.9.0.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.8.9 released

  • dispy released /dispy-4.8.9.tar.gz

  • Giridhar Pemmasani created a blog post

    Version 4.8.8 released

  • dispy released /dispy-4.8.8.tar.gz

  • Prabhu posted a comment on discussion Help

    May I know how to deploy the whole dispy process as executable (.exe) files instead of executing a script on each node (a separate .exe file for both the master and slave nodes)? If I execute the script, Python and the supporting modules have to be installed as dependencies. Is there any provision for this process?

  • Giridhar Pemmasani posted a comment on discussion Help

    It is difficult for me to follow your compute function to understand what the problem is. Moreover, the dispy version you are using is quite old. Can you try the latest release (4.8.7) and post the issue with it in detail? As explained in the documentation, you can use 'pip' to install dispy instead of 'pyinstaller' (I am not familiar with it). If you have issues with the latest release and your compute function, try to simplify it, e.g., start with 'sample.py' in 'examples' and work up to figure out what feature causes...

  • Prabhu posted a comment on discussion Help

    Hi Giridhar, I am using dispy 4.6.17 on Windows (the latest version is not processing commands using subprocess in the compute method). I tried to make the script executable using pyinstaller. The build for the dispy client works fine, but I don't know how to convert the dispynode program to an executable. If I build the dispynode program using pyinstaller, the compute method does not work. The error below is raised in the executable for the script: 2018-06-15 16:19:55 asyncoro - Could not load pywin32...

  • Giridhar Pemmasani modified a blog post

    Version 4.8.7 released

  • Giridhar Pemmasani created a blog post

    Version 4.8.7 released

  • dispy released /dispy-4.8.7.tar.gz

  • dispy released /dispy-4.8.7.zip

  • dispy released /dispy-4.8.6.tar.gz

  • dispy released /dispy-4.8.6.zip

  • Giridhar Pemmasani created a blog post

    Version 4.8.6 released

  • Giridhar Pemmasani modified a blog post

    Version 4.8.5 released

  • Giridhar Pemmasani created a blog post

    Version 4.8.5 released

  • dispy released /dispy-4.8.5.tar.gz

  • dispy released /dispy-4.8.5.zip

  • Giridhar Pemmasani posted a comment on discussion Open Discussion

    As sourceforge.net has been unreliable the past few weeks, the documentation has been mirrored on GitHub at https://pgiri.github.io/dispy; once it is possible to update webpages at sourceforge, this note will be added to the documentation pages.

  • Joel Millage posted a comment on discussion Help

    So it looks like I fixed most of it. The odd thing is that it runs now, but I get the error:

        2018-02-13 13:28:35 dispy - SSL connection failed: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:579)
        2018-02-13 13:28:35 dispy - SSL connection failed: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:579)

    But it still executes the job, so I'm not sure if it is just doing it the old way or actually using SSL.

  • Joel Millage posted a comment on discussion Help

    I just updated to dispy 4.8.4 and pycos 4.6.5 a few days ago, though I was just a minor rev back before that. I'll keep playing around with it and see what is going on.

  • Giridhar Pemmasani posted a comment on discussion Help

    What version of dispy are you using? If not the latest version, upgrade to it; asyncoro has been replaced with pycos.

  • Joel Millage posted a comment on discussion Help

    Just going to comment on this thread since it seems similar. I am trying to use an OpenSSL key I created (I am just running both client and server on the same machine at the moment). When I try to connect to the local node I get the error: SSL connection failed: EOF occurred in violation of protocol (_ssl.c:579). This is on CentOS 7.4. I thought maybe it was the file permissions, but that doesn't seem to do anything. I also tried separate and combined cert/key files and got the same thing both ways.

  • Joel Millage posted a comment on discussion Help

    Thanks for the reply! I think I figured it out; I think it was a mistake on my side reading the documentation. First, the port-forwarding part of the cloud computing section doesn't specify the ssh switch, but I believe it should be -R (I used -L at first by accident). Second, for some reason host names don't work for me, like this: --ext_ip_addr ec2-x-x-x-x.y.amazonaws.com, but if I just change it to the IP address: --ext_ip_addr x.x.x.x, it works fine. Not sure why that is; I tried with and without quotes too. Also...

  • Giridhar Pemmasani modified a comment on discussion Help

    One possible issue is that dispynode on AWS may not be able to communicate with your client (e.g., if the client is behind a router). In that case your router should forward port 51347 to your client. You also need to pass ext_ip_addr=<your router IP> to dispy.JobCluster.
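
    A minimal client-side sketch of that NAT setup (the router address and compute body are placeholders; ext_ip_addr as a JobCluster parameter follows the comment above):

        import dispy

        def compute(n):
            return n * 2

        if __name__ == '__main__':
            # Nodes reply to the router's public address, which forwards
            # port 51347 back to this client.
            cluster = dispy.JobCluster(compute, ext_ip_addr='203.0.113.10')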

  • Joel Millage posted a comment on discussion Help

    Curious if anyone here can point out my issue. I feel like I am so close! I am trying to use AWS to supplement my home servers. I followed the instructions on cloud computing on the SourceForge page, so in my EC2 instance I have: dispynode.py --debug --ext_ip_addr <elastic IP> --clean. I can telnet to it via: telnet <elastic IP> 51478 and get the expected output there. I have opened the security rules / ACL for those ports. In my client on my local machine I am running: cluster = dispy.JobCluster(compute,...

  • Mattia posted a comment on discussion Help

    Hi Giridhar, Consider the following scenario: 5 nodes are allocated to cluster1 and they have to compute 5 tasks (1 each). 2 of them are done, whereas 3 are still computing. Now, the 2 nodes are idle, but you cannot allocate them to a second cluster2. That is the reason why a "deallocation" function would really help: by deallocating the nodes you can make them available to other cluster objects. What do you think?

  • Mattia created ticket #18

    minor bug while adding nodes with similar IPs to the JobCluster object

  • Giridhar Pemmasani created a blog post

    Version 4.8.4 released

  • dispy released /dispy-4.8.4.tar.gz

  • dispy released /dispy-4.8.4.zip
