asyncoro version 4.1 has been released. Until now, coroutines were expected to yield frequently, as the entire asyncoro framework, including processing of I/O events, runs in a single thread. This is (mostly) acceptable for concurrent programming, but in distributed programming, if a coroutine executes a long-running computation, even for a few seconds, network packets may be lost and remote peers may even disconnect. To avoid this, the advice was to use threads for long-running computations, or for any computation that blocks the asyncoro framework for a significant amount of time. However, this can be error prone.

In this release, I/O events are processed in a separate thread, and an additional asyncoro scheduler is used to execute "system" coroutines, created with ReactCoro (so called to indicate that these are for "reactive systems", implemented, somewhat ironically, with synchronous programming). With this setup, coroutines created with Coro don't affect the system coroutines that send/receive network traffic, process messages from peers, etc. Now, for example, clients can distribute arbitrary computations (as long as they are generator functions) for distributed computing, even if those computations are long running. To demonstrate this, the discoro_client8.py example has been changed in this release to call time.sleep (which blocks the asyncoro framework and all other coroutines from executing during that time). In earlier releases, a thread was used to avoid blocking asyncoro.
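The separation can be illustrated with a minimal, stdlib-only sketch. This is not asyncoro code; the dedicated thread and queue merely stand in for asyncoro's notifier thread and I/O event queue, showing how a blocking "user computation" in one thread no longer stalls event processing in the other:

```python
import threading
import time
import queue

events = queue.Queue()
processed = []

def io_thread():
    # stands in for asyncoro's separate thread that processes I/O events
    while True:
        ev = events.get()
        if ev is None:      # sentinel: shut down
            break
        processed.append(ev)

t = threading.Thread(target=io_thread)
t.start()

events.put('heartbeat-1')
time.sleep(0.2)             # the "user computation" blocks this thread...
events.put('heartbeat-2')
events.put(None)
t.join()
print(processed)            # ...but both events were still handled
```

In asyncoro 4.1 this split is internal: user coroutines may block without starving the system coroutines that keep network connections alive.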

All of this is transparent to the user - earlier programs work without any modifications. In fact, ReactCoro itself is not documented, as it is meant for internal use only (at least for now).

Following are other changes:

  • Added ping_interval option to Computation (for distributed computing). The scheduler broadcasts messages to discover nodes in the local network when it starts, so nodes that start later may not be detected. ping_interval can be set to a number (of seconds) so that the scheduler periodically re-broadcasts discover messages; this finds nodes that were not running when the scheduler started, as well as nodes that were missed earlier (which may happen on noisy networks, such as WiFi, where the UDP packets used for broadcasting can be lost).
  • Coroutines' and Channels' repr has been changed to indicate whether they run in the "user space" asyncoro or the "reactive" asyncoro. If a coroutine's string representation starts with ~, it was created with Coro (a user coroutine); if it starts with !, it was created with ReactCoro. This is purely for disambiguation (and to help with debugging in case of issues), and programs shouldn't rely on it, as it could change in the future.
  • discoronode.py runs an additional asyncoro in the __main__ process to send periodic heartbeat messages to the current scheduler, to check whether the scheduler is a zombie, etc. This asyncoro doesn't run user computations. If the --tcp_ports option is used, it should include one additional port for this asyncoro, in addition to the ports for the server processes.
  • Coroutines can be created with Coro using the same syntax used for threads; e.g., Coro(target=coro_proc, args=(42,), kwargs={'a': 'test'}). Other keyword arguments used for threads, such as group or name, are not supported, though. The older syntax of specifying the process and its arguments is also supported, so Coro(coro_proc, 42, a='test') can be used.
  • Default timeout for sending/receiving messages is now given by the module variable MsgTimeout, which has a default value of 10 (seconds); in earlier releases this was hardcoded as 5 seconds. When working with slow networks, the module variable can be set in user programs (e.g., asyncoro.MsgTimeout = 15). Smaller timeouts detect network failures quicker, while bigger timeouts give enough time for sending large messages, etc.
  • Added discover_peers method to the asyncoro scheduler. This method broadcasts a discover message in the local network to detect peers. Peers are detected when asyncoro starts; however, the UDP packets sent for broadcasting can be lost in some cases (e.g., with WiFi). In such cases, broadcasting periodically (or until the desired peers are found) may be useful.
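For reference, the target/args/kwargs form that Coro now accepts mirrors the standard threading syntax exactly. The demo below uses plain threading.Thread (Coro itself requires asyncoro to be installed); the call shape is the same:

```python
import threading

results = []

def proc(n, a=None):
    # stand-in for a coroutine body; just records its arguments
    results.append((n, a))

# same target/args/kwargs form that Coro now accepts:
# Coro(target=coro_proc, args=(42,), kwargs={'a': 'test'})
t = threading.Thread(target=proc, args=(42,), kwargs={'a': 'test'})
t.start()
t.join()
print(results)  # [(42, 'test')]
```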
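A configuration sketch combining several of the options above. This assumes the usual asyncoro/discoro import paths and uses a hypothetical compute function; treat it as an outline, not a complete client:

```python
import asyncoro.disasyncoro as asyncoro
import asyncoro.discoro as discoro

# larger message timeout for slow networks (default is 10 seconds)
asyncoro.MsgTimeout = 15

def compute(n, coro=None):
    # computations distributed with discoro must be generator functions
    yield coro.sleep(n)

# ping_interval makes the scheduler re-broadcast discover messages every
# 300 seconds, picking up nodes that started late or missed earlier pings
computation = discoro.Computation([compute], ping_interval=300)

# peers can also be re-discovered explicitly from the asyncoro scheduler
asyncoro.AsynCoro().discover_peers()
```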