Hi Nielsen
I have started using pypar for cluster computing. Since I am very new to
this field, I have been learning by working through the demos from the
pypar website. I modified demo3.py and tried to run it. It works except
for the tag: the slave does not receive the tag that the master sends,
and the slaves always show tag=1. I am attaching the program (test3.py)
and the output (661.out) from it. Could you please help me sort out the
problem?
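
I also wondered whether I am supposed to pass the tag explicitly as a
keyword argument to pypar.send and pypar.receive. The small test below is
only a sketch of what I mean (the tag= keyword and the file name
tagtest.py are my guesses from reading the documentation, so they may well
be wrong):

# tagtest.py -- my guess at how tags might be passed explicitly
# (I am assuming pypar.send/receive accept a tag keyword argument).
# Run with e.g.: mpirun -np 2 python tagtest.py
import pypar

WORKTAG = 1
DIETAG = 2

myid = pypar.rank()

if myid == 0:
    # master: send two messages to slave 1, one with each tag
    pypar.send(2.0, 1, tag=WORKTAG)
    pypar.send(0.0, 1, tag=DIETAG)
else:
    # slave: receive both messages and print the tag reported by the status object
    for i in range(2):
        x, status = pypar.receive(0, tag=pypar.any_tag, return_status=True)
        print "[SLAVE %d]: got %f with tag %d" %(myid, x, status.tag)

pypar.Finalize()

If that usage is right, I would expect the slave to report tag 1 for the
first message and tag 2 for the second.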
Thanks
Vinu V.



#!/data/home/vvinu/software/local/bin/python

import numarray as n
import sys
import Numeric
import pypar

WORKTAG = 1
DIETAG =  2


def master():
    numCompleted = 0
    
    print "[MASTER]: I am processor %d of %d on node %s\n" %(MPI_myid, MPI_numproc,
MPI_node)
    
    # start slaves distributing the first work slot
    for i in range(1, min(MPI_numproc, numWorks)):
        work = workList[i]
        pypar.send(work, i)
        print "[MASTER]: sent work '%f' to node '%d'\n" %(work, i)

    # dispatch the remaining work slots using a dynamic load-balancing policy:
    # the quicker a slave finishes a job, the more jobs it gets
    for work in workList[MPI_numproc:]:
        R, status = pypar.receive(pypar.any_source, return_status=True)
        print "[MASTER]: received result '%f' from node '%d'\n" %(R, status.source)
        numCompleted += 1
        pypar.send(work, status.source)
        print "[MASTER]: sent work '%f' to node '%d'\n" %(work, status.source)
    
    # all work items have been dispatched
    print "[MASTER]: toDo : %d\n" %numWorks
    print "[MASTER]: done : %d\n" %numCompleted
    
    # still need to collect the remaining completions
    while(numCompleted < numWorks):
        R, status = pypar.receive(pypar.any_source, return_status=True)
        print "[MASTER]: received (final) result '%f' from node '%d'\n" %(R,
status.source)
        numCompleted += 1
        print "[MASTER]: %d completed\n" %numCompleted
        
    print "[MASTER]: about to terminate slaves\n"

    # tell the slaves to stop working
    for i in range(1, MPI_numproc):
        pypar.send(0, i)
        print "[MASTER]: sent (final) work '%f' to node '%d'\n" %(0, i)
        
    return
    
def slave():

    print "[SLAVE %d]: I am processor %d of %d on node %s\n" %(MPI_myid, MPI_myid,
MPI_numproc, MPI_node)

    while 1:
        R, status = pypar.receive(pypar.any_source, pypar.any_tag, return_status=True)
        print "[SLAVE %d]: received work '%f' with tag '%d' from node '%d'\n"\
              %(MPI_myid, R, status.tag, status.source)
      
        if (R == 0):
            print "[SLAVE %d]: received termination from node '%d'\n" %(MPI_myid, 0)
            return
        else:
            A = R * R
            pypar.send(A, 0)
            print "[SLAVE %d]: sent result '%f' to node '%d'\n" %(MPI_myid, A, 0)
            
      

if __name__ == '__main__':
    MPI_myid =    pypar.rank()
    MPI_numproc = pypar.size()
    MPI_node =    pypar.Get_processor_name()

#    workList = ('_dummy_', 'a', 'b', 'c')
    workList = n.array([1,2,3,4,5,6,8])
    numWorks = len(workList) - 1
    
    
    #FIXME, better control here
    if MPI_numproc > numWorks or MPI_numproc < 2:
        pypar.Finalize()
        if MPI_myid == 0:
          print "ERROR: Number of processors must be in the interval [2,%d].\n"
%numWorks
          
        sys.exit(-1)

    if MPI_myid == 0:
        master()
    else:
        slave()

    pypar.Finalize()
    print "MPI environment finalized.\n"
                
Pypar (version 2.0alpha) initialised MPI OK with 3 processors
[SLAVE 1]: I am processor 1 of 3 on node n1
[MASTER]: I am processor 0 of 3 on node n1


[SLAVE 2]: I am processor 2 of 3 on node n1

[MASTER]: sent work '2.000000' to node '1'

[MASTER]: sent work '3.000000' to node '2'

[SLAVE 1]: received work '2.000000' with tag '1' from node '0'

[SLAVE 2]: received work '3.000000' with tag '1' from node '0'

[SLAVE 1]: sent result '4.000000' to node '0'

[SLAVE 2]: sent result '9.000000' to node '0'

[MASTER]: received result '4.000000' from node '1'

[MASTER]: sent work '4.000000' to node '1'
[SLAVE 1]: received work '4.000000' with tag '1' from node '0'

[SLAVE 1]: sent result '16.000000' to node '0'

[MASTER]: received result '9.000000' from node '2'

[MASTER]: sent work '5.000000' to node '2'

[SLAVE 2]: received work '5.000000' with tag '1' from node '0'

[SLAVE 2]: sent result '25.000000' to node '0'

[MASTER]: received result '16.000000' from node '1'

[MASTER]: sent work '6.000000' to node '1'

[SLAVE 1]: received work '6.000000' with tag '1' from node '0'

[SLAVE 1]: sent result '36.000000' to node '0'
[MASTER]: received result '25.000000' from node '2'

[MASTER]: sent work '8.000000' to node '2'

[SLAVE 2]: received work '8.000000' with tag '1' from node '0'

[MASTER]: toDo : 6

[MASTER]: done : 4

[SLAVE 2]: sent result '64.000000' to node '0'

[MASTER]: received (final) result '36.000000' from node '1'

[MASTER]: 5 completed

[MASTER]: received (final) result '64.000000' from node '2'

[MASTER]: 6 completed

[MASTER]: about to terminate slaves

[MASTER]: sent (final) work '0.000000' to node '1'

[SLAVE 1]: received work '0.000000' with tag '1' from node '0'

[SLAVE 1]: received termination from node '0'

[MASTER]: sent (final) work '0.000000' to node '2'

[SLAVE 2]: received work '0.000000' with tag '1' from node '0'

[SLAVE 2]: received termination from node '0'

MPI environment finalized.

MPI environment finalized.

MPI environment finalized.


--
VINU VIKRAM
http://iucaa.ernet.in/~vvinuv/