SF.net SVN: fclient: [20] trunk/fclient/fclient_lib/fcp/fcp2_0.py
Status: Pre-Alpha
Brought to you by: jurner
From: <jU...@us...> - 2007-10-28 21:56:17
Revision: 20
http://fclient.svn.sourceforge.net/fclient/?rev=20&view=rev
Author: jUrner
Date: 2007-10-28 14:56:19 -0700 (Sun, 28 Oct 2007)
Log Message:
-----------
some more comments
Modified Paths:
--------------
trunk/fclient/fclient_lib/fcp/fcp2_0.py
Modified: trunk/fclient/fclient_lib/fcp/fcp2_0.py
===================================================================
--- trunk/fclient/fclient_lib/fcp/fcp2_0.py 2007-10-28 09:45:09 UTC (rev 19)
+++ trunk/fclient/fclient_lib/fcp/fcp2_0.py 2007-10-28 21:56:19 UTC (rev 20)
@@ -51,14 +51,6 @@
'''
-#NOTE:
-#
-# downloading data to disk is not supported at the moment. TestDDA code is quite unwritable
-# and as far as I can see there are plans to get rid of it. So wait...
-#
-#
-
-
import atexit
import base64
import logging
@@ -688,6 +680,8 @@
#**************************************************************************
# jobs
#**************************************************************************
+#TODO: maybe remove the synchronous functionality and rely only on signals
+# ...if so, remove timeStart and timeStop as well and leave timing to the caller
class JobBase(object):
"""Base class for jobs"""
@@ -1165,10 +1159,9 @@
#TODO: no idea what happens on reconnect if the socket died. What about running jobs?
#TODO: name as specified in NodeHello seems to be usable to keep jobs alive. Have to test this.
#TODO: not sure whether to add support for pending jobs and queue management here
-
+#
#TODO: do not mix directories as identifiers with identifiers (might lead to collisions)
#TODO: how to handle (ProtocolError code 18: Shutting down)?
-
class FcpClient(events.Events):
"""Fcp client implementation
"""
@@ -1316,6 +1309,7 @@
# check if there is something like an identifier in the message
#TODO: we run into troubles when using directories and NodeIdentifiers as identifiers
+ # have to maintain extra queues to prevent this. jobDispatchMessage(queue='directories')
if msg.name == Message.TestDDAReply:
identifier = msg['Directory']
elif msg.name == Message.TestDDAComplete:
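A rough sketch of the extra-queue idea from the TODO above (JobDispatcher and its method names are illustrative, not taken from the module): messages keyed by a directory path, such as TestDDAReply and TestDDAComplete, are looked up in their own queue, so a directory can never collide with a regular job identifier.

# Sketch only, assumed names: separate queues per identifier namespace.
class JobDispatcher(object):
    def __init__(self):
        self._queues = {'jobs': {}, 'directories': {}}

    def register(self, queue, key, job):
        self._queues[queue][key] = job

    def dispatch(self, msg_name, key, msg):
        """Route a message to the job registered under key in the matching queue."""
        queue = 'directories' if msg_name.startswith('TestDDA') else 'jobs'
        job = self._queues[queue].get(key)
        if job is not None:
            job.handleMessage(msg)      # assumed job method
        return job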
@@ -1534,17 +1528,30 @@
#########################################################
- ## boilerplate code to tackle TestDDA
+ ## how to tackle TestDDA?
##
- ## ...but I don't trust it ;-) I was not yet able to wrap my head around
- ## jobAdd(synchron=True) enough to know whether it is safe (thread, deadlock) or not.
+ ## best idea here so far is to wait for ProtocolError 25, Test DDA denied (or PersistentGet),
+ ## and reinsert the job if necessary after TestDDA completion.
##
- ## Another problem is that there is no way to know when a directory is no longer
- ## needed. And I don't want to write code in a Gui to tackle a problem that will hopefully
- ## go away in the near future.
+ ## The problem is how to wait for a message without flooding the caller. The basic idea of the client
+ ## is to ensure a Gui can stay responsive by letting the caller decide when to process the
+ ## next message. So waiting would require buffering messages and watching them carefully
+ ## as they flood in.
+ ##
+ ## If we do not wait, I fear the caller may flood us with download requests faster than
+ ## the node and we can catch the error and go through the TestDDA drill. Have to
+ ## do some tests to see how the node reacts.
##
- ## see: https://bugs.freenetproject.org/view.php?id=1753
+ ## the easiest approach would be to let the caller test a directory explicitly when HE thinks
+ ## it might be necessary. But then this code would hang around forever, even though an already
+ ## filed bug report [https://bugs.freenetproject.org/view.php?id=1753] suggests
+ ## a much easier way to test DDA (DDA Challenge)
##
+ ##
+ ## so.. maybe best is to lurk around a while and keep an eye on the tracker
+ ##
+ ##
+ ## below is just some old boilerplate code.. to be removed sooner or later
#########################################################
def testWriteAccess(self, directory):
canRead, canWrite = False, False
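To make the "wait for ProtocolError 25 and reinsert" idea above a little more concrete, here is a rough sketch under assumed names: sendMessage, pendingAfterDDA, the handler function names and the job.directory attribute are illustrative only; jobAdd and the TestDDA message names follow the usage mentioned in the comments, but the real FcpClient wiring may look quite different.

# Sketch only, assumed names: react to "ProtocolError code 25 (Test DDA denied)"
# by starting the TestDDA handshake for the directory and re-adding the job
# once TestDDAComplete arrives.
def handleProtocolError(client, job, msg):
    if int(msg['Code']) == 25:                          # Test DDA denied
        client.sendMessage('TestDDARequest',
                           Directory=job.directory,     # assumed job attribute
                           WantReadDirectory=True,
                           WantWriteDirectory=False)
        client.pendingAfterDDA.setdefault(job.directory, []).append(job)

def handleTestDDAComplete(client, msg):
    directory = msg['Directory']
    for job in client.pendingAfterDDA.pop(directory, []):
        client.jobAdd(job)                              # reinsert after TestDDA settles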