[Sop-svn] SF.net SVN: sop:[45] trunk/sopf
From: <El...@us...> - 2009-08-21 13:55:46
Revision: 45
http://sop.svn.sourceforge.net/sop/?rev=45&view=rev
Author: Elhanan
Date: 2009-08-21 13:55:36 +0000 (Fri, 21 Aug 2009)
Log Message:
-----------
added formatting spaces for readability in apt viewer
Modified Paths:
--------------
trunk/sopf/cache/src/site/apt/cache.apt
trunk/sopf/kernel/src/site/apt/sfde.apt
trunk/sopf/model/src/site/apt/architecture.apt
Modified: trunk/sopf/cache/src/site/apt/cache.apt
===================================================================
--- trunk/sopf/cache/src/site/apt/cache.apt 2009-07-28 10:56:53 UTC (rev 44)
+++ trunk/sopf/cache/src/site/apt/cache.apt 2009-08-21 13:55:36 UTC (rev 45)
@@ -1,12 +1,19 @@
-----------------------
SOP-CACHE Technical Requirements & Design
-----------------------
+
Owolabi Oyapero
+
-----------------------
+
2009-07-16
+
-------------------------
+
Copyright
+
This file is part of SOPF.
+
SOPF is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -19,353 +26,668 @@
along with SOPF. If not, see <http://www.gnu.org/licenses/>.
Table of Contents
+
* Goals (In order of importance):
+
* Concepts
* Assumptions
+
* Goal Derived Requirements
+
* Technical Requirements
+
* System Requirements
+
* Dependencies
+
* Activity Sequence
+
* Definitions
+
* Interfaces
+
* Notes
Concepts:
- * Cache:
+
+ * Cache:
+
* Stores and manages data synchronization; a cache consists of one or more nodes.
+
* There are two types: lateralCache (LC) and CentralCache (CC)
+
* Lateral cache is actively involved in execution within a session.
+
* The CentralCache coordinates a session and serves as a data backup for the lateralCaches in that session.
+
* A session consists of a centralCache with all required lateralCaches connected (i.e. all gp-data allocated).
+
* Data-Types:
+
* Global-Partitioned (gp): allocated between nodes
+
* Global-Unpartitioned (gu): all nodes have a copy
+
* Local : specific to one node
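The three data types above could be modeled as a small enum; this is a sketch, and the names and helper method are illustrative assumptions, not part of the SOPF codebase:

```java
// Sketch of the three cache data types described above.
// Names are illustrative assumptions, not from the SOPF sources.
enum DataType {
    GLOBAL_PARTITIONED,   // gp: allocated between nodes
    GLOBAL_UNPARTITIONED, // gu: every node holds a copy
    LOCAL;                // specific to one node

    /** True if every node in the session must hold a copy. */
    public boolean replicatedEverywhere() {
        return this == GLOBAL_UNPARTITIONED;
    }
}
```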
+
Assumptions
+
* Neural structure will exceed 500KB in most real-life applications
+
* Maximum disk-write speed is around 100MB/s in most affordable systems for developers.
+
* Maximum network speed is around 125MB/s (1Gb/s) in affordable systems for developers.
+
Goal Derived Requirements
+
* Cache must support arbitrary sized neural-networks (even terabytes if desired)
+
* Cache may be distributed in case of large-networks
+
* Local and backup persistence must be as efficient as possible.
+
* Cycle execution must support maximum throughput
-Technical Requirements
+
+Technical Requirements
+
* Cache Types:
+
* LateralCache (LC)
+
* Cache nodes that execute cycles are lateralCaches, referred to as siblings.
+
* All siblings must have the same centralCache, neural-network, and cyclestamp before and after a cycle.
+
* Must be pausable, although the pause signal must be broadcast to all siblings
+
* Must have an alive msg
+
* Must list sibling lateralCaches
+
* Get sibling ip from centralCache
+
* Cannot start cycles until all siblings are up
+
* Sync persistence interval: ensure that persistence occurs at the same interval on all nodes
+
* Reallocation is determined by centralCache config
+
* data-obj must be binary-sorted internally on the basis of key (id)
+
* tasks must be stored on a FIFO queue and persisted with order preserved
+
* cache could use two ports to transfer data
+
* {LC-internals} may be composed of queues (e.g. task-queue) and metadata:
+
* local-data
+
* Dynamic size (keep that in mind)
+
* allocation-info
+
* gu-data that are relevant to execution in kernel.
+
* gp-data: Data-obj must be sorted by type then key (id), collection? (TreeSet)
+
* gu-data delta is sent to CC
+
* Initiates request for backing up the data to CentralCache (interval-configurable)
+
+
* LN node must gracefully quit, if it has to quit the session.
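The "sorted by type then key (id)" requirement above maps naturally onto a TreeSet with an explicit comparator. A minimal sketch, assuming illustrative field names for the data-obj:

```java
import java.util.Comparator;
import java.util.TreeSet;

// Sketch: keeping gp-data sorted by data-type-id, then by key (id), as the
// spec requires in both LateralCache and CentralCache. Field names are
// illustrative assumptions.
class GpDataOrdering {
    public static final class DataObj {
        public final short dataTypeId;
        public final long id;
        public DataObj(short dataTypeId, long id) {
            this.dataTypeId = dataTypeId;
            this.id = id;
        }
    }

    /** Orders by type first, then by key, matching the spec's sort rule. */
    public static final Comparator<DataObj> BY_TYPE_THEN_KEY =
            Comparator.<DataObj>comparingInt(d -> d.dataTypeId)
                      .thenComparingLong(d -> d.id);

    public static TreeSet<DataObj> newGpDataSet() {
        return new TreeSet<>(BY_TYPE_THEN_KEY);
    }
}
```

Because both LC and CC use the same comparator, binary search and delta exchange see identical orderings on every node.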
+
* CentralCache
+
* gp-data must be binary-sorted exactly as in LC
+
* Does not participate in data execution
+
* Reallocation works differently here: each lateralCache will send the data-obj to the appropriate cache
+
* Internal structure MUST be the same as {{LC-internals}}, with the exception of the local-data.
+
* Persisted data structure MAY be the same as the LateralCache's
+
* Must not accept new-LC if all gp-data are allocated and LCs are not dead.
+
* Accepts new-LC if the session has not started
+
* Accepts new-LC if connected LC quits.
- * Accepts new-LC if connected LC dies AND waited for maxWait4DLc.
+
+ * Accepts new-LC if connected LC dies AND waited for maxWait4DLc.
+
* Must allocate only unallocated-gp-data in-case of LC-joins.
+
* reallocation-conditions,variables?
+
* New data obj should be added directly to the backup data (only) with a change in designVersion variable
+
* Must be able to send reallocate event to listeners (LCs).
+
* If dead-CC reawake and all LCs are wake & cyclestamp-synced, sync gu-data& backup (no rollback).
* Data:
+
* Data-Types
+
* Global-Partitioned (gp)
+
* Global-Unpartitioned (gu):
- * must be deltable
+
+ * must be deltable
+
* must be synced at the end of each cycle
+
* Local
+
* <strikethrough>PreCycleTaskQueue</strikethrough>
+
* CycleTaskQueues {NextCycleTaskQueue, [SiblingNextCycleTaskQueues]*} (FIFO queue)
+
* NewCycleTaskQueue consist of tasks belonging to the LC's kernel.
+
* <strikethrough>PostCycleTaskQueue</strikethrough>
+
* Does not contain the cycle count; that is tracked by the cache
+
* All global-data should be managed by the cache.
+
* All global-data must have an identifier.
+
* Data-Obj
+
* data-type must uniquely identify a type of data
+
* data-typeId must uniquely identify a data-type
+
* Some properties in the data-obj such as conns, are not changed by the task
+
* File Structure
- * For the cache file, the first 100 bytes consist of 7 fields and must be
+
+ * For the cache file, the first 100 bytes consist of 7 fields and must be
+
* neuralNetId (long)
+
* cyclestamp (long)
+
* designVersion (int)
* maxLocalData (long): max slots for local data, value: maxDataCount * 400 (40/obj, 10 obj/data)
+
* maxDataTypeHeaderCount (int): max-num data-types supported, default:500 (configurable)
+
* maxDataCount (long): the max-num of components supported, default: 100000(configurable)
- * allocationInfoSize (short): num of slots for allocating-info
+
+ * allocationInfoSize (short): num of slots for allocating-info
+
* status (byte): [ DESIGN (1)| NEW (3)| RESUME (5)| STOP (7)| QUIT (9), RUNNING (11)]
+
* reserved (57-bytes): reserved for future use (possibly include header-start-bits and end-bits)
+
* Local-data
+
* Dynamic size (keep that in mind, see maxLocalData)
+
* local-task-queue
+
* lateral-caches-task-queue
+
* Allocation-info
+
* ccip, ccport [, siblingIp, siblingPort, siblingAllocation]*
+
* gu-data that are relevant to execution in kernel.
+
* gp-data: Data-obj must be sorted by type then key (id), collection? (TreeSet)
+
* data-type-headers must begin the global-data section
+
* format: [data-type-header-start-bits, data-type-id, version, [propId,size]*, data-type-header-end-bits ]
+
* size: byte , short , short , short*, byte
+
* There should be a fixed number of data-type-header-slots; the value is maxDataTypeHeaderCount * 150 (assumes an average of 50 fields)
+
* data-header must have maxDataCount*16 slots and must be sorted
+
* format: data-header-section-start-bits,[data-id, location]*, data-header-section-end-bits
+
* size: byte , long , long , byte
+
* data-format
+
[data-type-id,data-id,[propId, propValue]*]
+
* Use negative values as indices/values except where logic overrules; this provides a wider range.
- * The data-bytes-serializer & data-bytes-parser should be generated automatically from class.
+
+ * The data-bytes-serializer & data-bytes-parser should be generated automatically from class.
+
* Persistence
+
* I suggest using memory maps, since our global-data will often exceed 500KB in real applications (see Assumptions).
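As a sketch of the memory-map suggestion applied to the 100-byte header described earlier: packing the seven fields in the listed order takes 43 bytes, leaving exactly the 57 reserved bytes the spec names. The packed layout is my assumption; the spec fixes the field list but not the byte offsets.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: memory-mapping the 100-byte cache-file header. Assumes the seven
// fields are packed in the listed order; an illustration, not the actual
// SOPF on-disk format.
class CacheHeader {
    static final int HEADER_SIZE = 100;

    public static void writeHeader(Path file, long neuralNetId, long cyclestamp,
                                   int designVersion, long maxLocalData,
                                   int maxDataTypeHeaderCount, long maxDataCount,
                                   short allocationInfoSize, byte status) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, HEADER_SIZE);
            buf.putLong(neuralNetId);           // 8 bytes
            buf.putLong(cyclestamp);            // 8
            buf.putInt(designVersion);          // 4
            buf.putLong(maxLocalData);          // 8
            buf.putInt(maxDataTypeHeaderCount); // 4
            buf.putLong(maxDataCount);          // 8
            buf.putShort(allocationInfoSize);   // 2
            buf.put(status);                    // 1 -> 43 bytes used, 57 reserved
            buf.force();                        // flush the mapped region to disk
        }
    }
}
```

The same mapping approach extends to the global-data sections, which is where memory maps pay off for data larger than RAM.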
Dependencies
+
* Task must inherently identify data-id
+
Activity Sequence
+
* Assumptions and Terminology
+
* LCs without qualification implies member-LCs
+
* START_NODES
+
* CentralCache:
- * startup-params: data-file, max-mem, port, maxWait4Ready, maxWait4PauseResume,maxWait4DeadResume,
+
+ * startup-params: data-file, max-mem, port, maxWait4Ready, maxWait4PauseResume, maxWait4DeadResume, saveInterval, backupInterval, maxSkippedCycles
+
* ? Listen for msg from lateralCaches
+
* if status equals RUNNING|PAUSE, do CC_RESUME
+
* if status equals STOP, do JOIN
- * LateralCache:
+
+ * LateralCache:
+
* startup-params: data-file (may be empty), max-mem, port, ccIp, ccPort, nni?, cyclestamp?, dataPerGd?, sizePerLd?, pingCCInterval. The optional variables are omitted if data-file is not empty.
* if status equals RUNNING|PAUSE, do LC_RESUME
+
* if status equals STOP, do JOIN
+
* JOIN
- * CentralCache:
+
+ * CentralCache:
+
* Ignore non-JOIN requests from non-members.
+
* Listen for JOIN from lateralCaches, record their ip, port, act & am & timestamp.
+
* Send JOIN-ACCEPT back
+
* do ALLOCATE_TO_NODE for that LC
+
* if all gp-data is not allocated, send AWAIT_JOIN signal to that LC
+
* repeat until all gp-data has been allocated
- * LateralCache:
+
+ * LateralCache:
+
* Send JOIN-req to a CentralCache at a given-port, provide available cpu time (act) & available memory (am).
+
* If response is JOIN-ACCEPT, accept response and do ALLOCATE_TO_NODE
+
* If response is JOIN-REJECT, record then send notification to user/UI and halt.
+
* ALLOCATE_TO_NODE
- * CentralCache:
+
+ * CentralCache:
+
* Determine data-size, assign selector-value-range (id) for the lateralCache based on their act-am.
+
* Send cyclestamp, allocation-info to LC
+
* Send global-data to the LC based on the selector-affinity (id).
+
* LateralCache:
+
* Receive the cyclestamp, allocation-info, maxWait4Ready, maxWait4PauseResume, maxWait4DeadResume, saveInterval,backupInterval
+
* Receive the gu-data & gp-data for this node
+
* update data file and load-data-file
+
* Do CACHE_READY.
+
* CACHE_READY
- * CentralCache:
+
+ * CentralCache:
+
* Receive CACHE_READY signal from all member LCs.
+
* If any member does not send CACHE_READY signal within maxWait4Ready time
+
* remove that member (send JOIN-REJECT to it) and do JOIN
+
* send CACHE_SYNC-signal with all-allocation-info to all lateralCaches (LCs)
- * LateralCache:
+
+ * LateralCache:
+
* Send tasks in SiblingNextCycleTaskQueue to appropriate LCs with zero or more tasks.
+
* Receive TASK signal from all siblings, putting tasks in NextCycleTaskQueue
+
* Send CACHE_READY signal to CC
+
* CC_RESUME (DEAD/PAUSE)
+
* CentralCache:
+
* send RESUME_PAUSE/RESUME_DEAD signal with most variables and persisted-cycle-stamp except allocation-info to all LC
+
* If response is received from ALL LCs within maxWait4PauseResume/maxWait4DeadResume (depends on context)
+
* If initialStatus is RUNNING (RESUME_DEAD), request gu-data delta
- * If persisted data-cycle-num is less than current-data-cycle-num, do REQ_DATA
+
+ * If persisted data-cycle-num is less than current-data-cycle-num, do REQ_DATA
+
* do CACHE_SYNC
- * Else do ROLL_BACK
+
+ * Else do ROLL_BACK
+
* LateralCache:
+
* If receive RESUME_DEAD/PAUSE signal
+
* send RESUME signal to CC
- * if receive RESUME_DEAD & persisted-cyclestamp < current-cyclestamp
+
+ * if receive RESUME_DEAD & persisted-cyclestamp \< current-cyclestamp
+
* await REQ_DATA
+
* Otherwise wait
+
* LC_RESUME
+
* CentralCache:
+
* accept RESUME_PAUSE/DEAD signal from LC
+
* if RESUME_DEAD
+
* if (cycle-stamp - current-cycle-stamp > maxSkippedCycles)
+
* do ROLLBACK
+
* LateralCache:
+
* send RESUME_PAUSE/RESUME_DEAD signal with cycle-stamp & nni
+
* CACHE_SYNC (on the commencement of a new-cycle at all times)
- * CentralCache:
- * send cache-sync signal & summated-gu-data delta & persistence-interval (for all global-data)
+
+ * CentralCache:
+
+ * send cache-sync signal & summated-gu-data delta & persistence-interval (for all global-data)
(if you want to set/change it).
- LateralCache:
+
+ * LateralCache:
+
* receive cache-sync signal & summated-gu-data-delta
+
* send cache-sync signal to listeners (executor: then executor will send cycle-start signal)
+
* listen to cycle-start signal, track cycle-count
+
* END_CYCLE (At the end of each cycle)
- * CentralCache:
+
+ * CentralCache:
+
* receive cycle-end signal from each LCs
+
* if last-save-cycle matches backupInterval, do REQ_DATA
+
* global-field delta provided by each lateralCache is used to track alive lateralCaches.
+
* if any lateralCache is dead, go-to DEAD_CACHE
+
* else do CACHE_SYNC
+
* LateralCache:
+
* receive cycle-end signal from its CycleEventSource (e.g. executors), track completed-cycles
+
* if last-save-cycle matches saveInterval, do CACHE_SAVE
+
* send end-cycle signal with gu-data-delta to the centralCache, even if it is 0.
+
* CACHE_SAVE
+
* CentralCache:
- * send SAVE signal to LCs
- * LateralCache:
+
+ * send SAVE signal to LCs
+
+ * LateralCache:
+
* receive SAVE signal from CC
+
* local persistence of state
+
* REQ_DATA (data persistence mechanism)
- * CentralCache:
+
+ * CentralCache:
+
* broadcast data-request-signal with a specific cycle-num to all lateralCaches.
+
* wait for all match-data-signal
+
* if there is a no-match-data-signal
+
* send abort-data-request-signal
+
* send start-transfer signal to one LC and get all data from it
+
* repeat with next LC until all LC have sent data
+
* store received gp-data & gu-data in temp, store gu-data in memory
+
* if data is received from all allocated lateralCaches, migrate temp to permanent.
+
* else roll-back (clear temp).
- * LateralCache:
+
+ * LateralCache:
+
* receive REQ_DATA signal
- * if current-cycle does not match req-data-cycle-num,
+
+ * if current-cycle does not match req-data-cycle-num,
+
* send no-match-data-signal to the centralCache,
- * else send match-data-signal
- * receive start-transfer, then send all gp-data upon receiving request
+
+ * else send match-data-signal
+
+ * receive start-transfer, then send all gp-data upon receiving request
+
* CACHE_PAUSE
+
* LateralCache:
+
* optionally send PAUSE signal to CC
- * receive pause
+
+ * receive pause
+
* wait for resume signal from CC
- * CentralCache:
+
+ * CentralCache:
+
* optionally, receive PAUSE signal and
+
* request gu-data delta
- * do REQ_DATA
+
+ * do REQ_DATA
+
* send PAUSE signal
+
* QUIT_LC (When a lateralCache decides to quit)
+
* LateralCache: (Quitting-LC)
+
* receive quit-signal from QuitEventSource
+
* send cache-quit signal and last-completed cycle to centralCache, with gu-data-delta (if current-cycle = completed-cycles)
+
* await data-request
+
* cleanly shutdown, closing ports and send cache-shutdown event to listeners
- * CentralCache:
+
+ * CentralCache:
+
* receive cache-quit signal and last-completed cycle from the quitting lateralCache, with gu-data-delta
+
* store the gu-data of the quit-lateralCache in temp-place
+
* if completed-cyclestamp != persisted-cyclestamp, do REQ_DATA starting with quitting-LC
+
* store the gp-data of the quit-lateralCache in temp-place.
+
* send AWAIT_JOIN signal to LCs
+
* do JOIN
+
* QUIT_CC
- * CentralCache:
+
+ * CentralCache:
+
* receive quit-signal from QuitEventSource
+
* do REQ_DATA
+
* send quit-cc signal to LCs
+
* cleanly shutdown, closing ports and send cache-shutdown event to listeners
+
* LateralCache:
+
* receive quit-cc signal then halt, notify user
+
* await reconfiguration for a new CC
+
* continuously ping the CC at pingCCInterval to see if it awakes
+
* DEAD_LC (LC abruptly terminates)
+
* CentralCache:
+
* send AWAIT_JOIN signal to LCs
+
* if RESUME_DEAD signal received within maxWait4DeadResume
+
* do RESUME_LC
+
* else
- * if (persisted-cyclestamp < current-cyclestamp), do ROLLBACK
+
+ * if (persisted-cyclestamp \< current-cyclestamp), do ROLLBACK
+
* do JOIN
+
* DEAD_CC (CC abruptly terminates)
+
* LateralCache:
+
* continuously ping the CC at pingCCInterval to see if it awakes
+
* ROLL_BACK
- * CentralCache:
+
+ * CentralCache:
+
* send ROLLBACK signal to LCs
+
* REQ_ACT_AM
+
* do ALLOCATE_TO_NODE
+
* REQ_ACT_AM (request for act and am)
+
* CentralCache:
+
* Broadcast a request for act-am to LCs whose recorded act-am has expired, based on the record timestamp
* LateralCache:
+
* Send act-am to centralCache
+
* CentralCache:
+
* Record lateralCache's act & am.
-Definitions
+
+Definitions
+
* CACHE-EVENTS:
+
* Allocate
+
* Started
+
* Ready
+
* Syncing
+
* Sync
+
* Saving
+
* BackingUp
+
* Rollback
+
* Stop
+
* Messages:
+
* LateralCache to LateralCache
- * DistTaskMsg - for distributing task to appropriate cache
+
+ * DistTaskMsg - for distributing task to appropriate cache
Format: [dist-task-msg-bits, [task]*, end-bits]
- * DistDataMsg - for moving data-obj to its allocated lateralCache
+
+ * DistDataMsg - for moving data-obj to its allocated lateralCache
Format: [dist-data-msg-bits, [data]*, end-bits]
+
* CentralCache to LateralCache
- *
+
+ *
+
* LateralCache to CentralCache
+
*
- * Interfaces:
- * GP-Data
+ * Interfaces:
+
+ * GP-Data
+
* implements IData
- * GU-Data
- * implements IDeltable
+
+ * GU-Data
+
+ * implements IDeltable
CentralCacheProxy
* public List<LateralCacheProxy> listLateralCaches();
- * public void addListeners(CacheEventListener cel);
+
+ * public void addListeners(CacheEventListener cel);
+
LateralCache
+
* Collection<Task> fetchTasks();
+
* void addTask(Task task); - must never be called once a cycle is started
+
* addNewTask(Task task); - must never be called outside the context of a cycle
+
* extends TaskAware
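A minimal in-memory sketch of the LateralCache methods listed above, preserving FIFO task order as the spec requires. Task is stubbed as a marker interface, and the cycle-state guard on addTask is an illustrative way to enforce the "never once a cycle is started" rule:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Queue;

// Sketch of the LateralCache task methods; FIFO order is preserved by
// backing the queue with an ArrayDeque. Task is stubbed; the guard flag
// is an illustrative assumption.
interface Task {}

class SketchLateralCache {
    private final Queue<Task> taskQueue = new ArrayDeque<>();
    private boolean cycleStarted = false;

    /** Must never be called once a cycle is started. */
    public void addTask(Task task) {
        if (cycleStarted) throw new IllegalStateException("cycle already started");
        taskQueue.add(task);
    }

    /** Marks the start of a cycle; further addTask calls are rejected. */
    public void startCycle() { cycleStarted = true; }

    /** Returns the queued tasks for this cycle, in FIFO order. */
    public Collection<Task> fetchTasks() { return new ArrayList<>(taskQueue); }
}
```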
* Variables
+
* Common
+
* maxDataTypeHeaderCount
+
* maxDataCount
+
* Assumption-variables: provides fine tuned control
local.dataPerGd (local obj/globalData),
local.sizePerLd (size/localData)
global.sizePerDtHd (size/dataTypeHeader),
global.sizePerGd (size/globalData)
- * LateralCache:
- local.ccip, local.ccport,
- * CentralCache:
+
+ * LateralCache:
+ local.ccip, local.ccport,
+ * CentralCache:
data.designVersion, data.cycleStamp,
maxWait4DLc [-1,0,>0] : -1 implies forever, 0 implies no wait.
Modified: trunk/sopf/kernel/src/site/apt/sfde.apt
===================================================================
--- trunk/sopf/kernel/src/site/apt/sfde.apt 2009-07-28 10:56:53 UTC (rev 44)
+++ trunk/sopf/kernel/src/site/apt/sfde.apt 2009-08-21 13:55:36 UTC (rev 45)
@@ -1,11 +1,19 @@
--------------------------
+
SOP Stateful Data Executor
+
--------------------------
+
Owolabi Oyapero
+
--------------------------
+
2009-07-16
+
-------------------------
+
Copyright
+
This file is part of SOPF.
SOPF is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
@@ -19,127 +27,229 @@
along with SOPF. If not, see <http://www.gnu.org/licenses/>.
Executor
+
* Goals
+
* Concepts
+
* Assumptions
+
* Goal Derived Requirements
+
* Technical Requirements
+
* System Requirements
+
* Dependencies
+
* Activity Sequence
+
* Interfaces
+
* Other Requirements
+
* Notes
+
* Old-Notes
* Goals (In order of importance):
+
* Stateful distributed execution (sfde)
+
-* Concepts:
+* Concepts:
+
* Task (command-pattern)
+
* Stateless object that encapsulates operations for any instance of a specific objType
+
* Will be stored in the queue
+
* Support for specific objType must be static (compile-time)
+
* Serialized-task format: [taskTypeId, taskTypeVersion, operationNum, [objTypeId, objId],[taskParamIndex, taskParamValue]]
+
* TypeId must uniquely identify a task-type that supports a specific data-type
+
* Tasks have a createTime (identifying the cycle in which they were created).
+
* Time is defined as the number of execution cycles for a given neural-network.
+
* An execution session is defined around the instantiation of a cache.
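The serialized-task format above can be sketched with a ByteBuffer; the field widths here are my assumptions, since the spec lists the field order but not the sizes:

```java
import java.nio.ByteBuffer;

// Sketch of the serialized-task format:
// [taskTypeId, taskTypeVersion, operationNum, [objTypeId, objId],
//  [taskParamIndex, taskParamValue]*]
// Field widths are illustrative assumptions.
class TaskCodec {
    public static byte[] serialize(short taskTypeId, short taskTypeVersion,
                                   byte operationNum, short objTypeId, long objId,
                                   short[] paramIndices, long[] paramValues) {
        if (paramIndices.length != paramValues.length)
            throw new IllegalArgumentException("index/value arrays must align");
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + 1 + 2 + 8
                + paramIndices.length * (2 + 8));
        buf.putShort(taskTypeId).putShort(taskTypeVersion).put(operationNum);
        buf.putShort(objTypeId).putLong(objId);
        for (int i = 0; i < paramIndices.length; i++) {
            buf.putShort(paramIndices[i]).putLong(paramValues[i]);
        }
        return buf.array();
    }
}
```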
Assumptions
+
* Task types are defined and limited.
+
* Neurons will not be created/deleted frequently.
+
* All global-data should be managed by the cache
Goal Derived Requirements
- * There will be a set of predefined tasks, represented by a Runnable obj.
+
+ * There will be a set of predefined tasks, represented by a Runnable obj.
+
* Must determine and define the various task types.
+
* The queues must be ordered in a specific sequential manner?
+
* Upon task execution, the task must be inactivated.
+
* Queues must be demarcated based on time.
+
* All components must use the same unit of time, even peripheral receptors.
+
* Queues sequence must be based on actual literature.???
+
* The time demarcator is run before any task-queue.
+
* Peripheral components must process within the proposed time demarcation.
+
* Time must be demarcated in all queues.
+
* All precycle tasks must complete prior to starting cycle.
+
*
Technical Requirements
+
* The task will contain references to the involved components.
+
* Task is sent to each executor based on the selector-affinity.
+
* Each task-type will be assigned a pool of empty tasks and a queue for active task.
+
* Each main task type will have its own queue; the last runnable will be latched with a binary semaphore release.
+
* The processor-pool will process tasks, as it is moved from the task-queue and submitted.
+
* Task execution may produce new tasks, which must have a correct selector-value (id) and createTime.
+
* Queue processing can start only after the binary semaphore is available.
- * The system task invocator will get the empty task-instance from a pool,
- \n the empty task will be filled fields related to the current invocation.
+
+ * The system task invocator will get an empty task-instance from a pool;
+ the empty task will be filled with the fields related to the current invocation.
+
* Upon task execution, the task obj will be returned to the pool.
+
* There will be a pool for task-type.
+
* There will be no neuron pool.
+
* PreCycleTasks (derived from data) should be submitted directly to the PausableThreadPoolExecutor
+
* The preCycle tasks should be split evenly among the PausableThreadPoolExecutor threads.
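The binary-semaphore latching described above can be sketched as follows: each queue's tasks run on a pool, a watcher releases the latch once every task in the queue is done, and the next queue may only start after acquiring it. This illustrates the idea, not the SOPF executor itself:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Sketch: sequential queues of parallel tasks, latched by a binary semaphore.
class QueueLatch {
    public static void runQueues(List<List<Runnable>> queues) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Semaphore queueDone = new Semaphore(1); // binary semaphore latch
        try {
            for (List<Runnable> queue : queues) {
                queueDone.acquire(); // wait until the previous queue finished
                Semaphore remaining = new Semaphore(0);
                for (Runnable task : queue) {
                    pool.execute(() -> { try { task.run(); } finally { remaining.release(); } });
                }
                // Watcher releases the latch once every task in this queue is done.
                pool.execute(() -> {
                    try {
                        remaining.acquire(queue.size());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    queueDone.release();
                });
            }
            queueDone.acquire(); // wait for the final queue to finish
        } finally {
            pool.shutdown();
        }
    }
}
```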
-* Data
+
+* Data
+
* PreCycleTasks- the tasks should be relatively static and defined once, early in kernel init.
+
* CycleQueue- task queues for current cycle of the local-kernel
+
* New tasks against gu-data should be replicated transparently into all sibling kernels by the cache.
System Requirements
- *
-Dependencies
- * Task distribution is encapsulated in the cache
+ *
+
+
+Dependencies
+
+ * Task distribution is encapsulated in the cache
+
* Cache will provide the expected queue structure
+
* Cache will transparently put a new-task to the appropriate queue [NextCycleTaskQueue/SiblingNextCycleTaskQueue]
+
Activity Sequence
+
* START_SESSION:
- * Kernel:
+
+ * Kernel:
+
* startup-params: cache
+
* Get params from cache and instantiate appropriate variables and queues.
+
* if isCacheSync() is false;
+
* cache-sync signal
+
* do PRE_CYCLE
+
* RE-ALLOCATE:
- * Kernel:
+
+ * Kernel:
+
* Receive reallocated signal from cache
+
* change appropriate variables
+
* PRE_CYCLE:
+
* Receive cache-sync event
+
* Add pre-Cycle-task for each item (gp-data) to CycleQueue
+
* Copy New-Tasks from cache and add to CycleQueue
+
* do NEW_CYCLE
+
* NEW_CYCLE: (on the commencement of a new-Cycle at all times)
- * Kernel:
+
+ * Kernel:
+
* send start-Cycle-signal with the cycle-count
- * FIFO execution of tasks in the CycleQueue
+
+ * FIFO execution of tasks in the CycleQueue
+
* END_CYCLE: (At the end of each cycle)
- * Kernel:
+
+ * Kernel:
+
* send end-Cycle-signal with the cycle-count
+
* Cache:
- *
+
+ *
+
* QUIT_EXEC: (When an executor decides to quit)
- * Kernel:
+
+ * Kernel:
+
* Receive quit signal ?
+
* clean up, gc, etc.
- * Send quit signal to cache
+
+ * Send quit signal to cache
+
* Cache:
+
*
* Interfaces
+
* Cache:
+
* addNewTask(Task) - will be inserted either in NextCycleTaskQueue or SiblingNextCycleTaskQueue.
+
* if Task.objTypeId is gu-data
+
* task must be added to NextCycleTaskQueue and all SiblingNextCycleTaskQueue
+
* else if Task.objTypeId is gp-data
+
* task must be added to the appropriate NextCycleTaskQueue (local/sibling).
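The addNewTask routing rule above can be sketched directly: gu-data tasks fan out to the local queue and every SiblingNextCycleTaskQueue, while gp-data tasks go to exactly one queue chosen by the data's allocation. The router and its parameters are illustrative assumptions:

```java
import java.util.List;
import java.util.Queue;

// Sketch of the addNewTask routing rule. ownerIndex < 0 means the local
// node owns the gp-data; otherwise it indexes a sibling queue.
class TaskRouter {
    public static void addNewTask(boolean isGuData, int ownerIndex,
                                  Queue<Object> nextCycleQueue,
                                  List<Queue<Object>> siblingQueues,
                                  Object task) {
        if (isGuData) {
            // gu-data: replicate to the local queue and all sibling queues
            nextCycleQueue.add(task);
            for (Queue<Object> q : siblingQueues) q.add(task);
        } else {
            // gp-data: route to the single queue whose node owns the data
            if (ownerIndex < 0) nextCycleQueue.add(task);
            else siblingQueues.get(ownerIndex).add(task);
        }
    }
}
```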
-
+
* Other Requirements
+
* Notes:
+
* Todo List
+
\ No newline at end of file
Modified: trunk/sopf/model/src/site/apt/architecture.apt
===================================================================
--- trunk/sopf/model/src/site/apt/architecture.apt 2009-07-28 10:56:53 UTC (rev 44)
+++ trunk/sopf/model/src/site/apt/architecture.apt 2009-08-21 13:55:36 UTC (rev 45)
@@ -1,12 +1,21 @@
-----------------------------
+
SOP COMPONENTS
+
-----------------------------
+
Owolabi Oyapero
+
-----------------------------
+
2008/12/12
+
-----------------------------
+
+
Copyright
- This file is part of SOPF.
+
+ This file is part of SOPF.
SOPF is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -19,59 +28,105 @@
along with SOPF. If not, see <http://www.gnu.org/licenses/>.
Components
+
* Base Component
+
* Component
+
* Core Components
+
* Synapse
+
* Receiver
+
* Sender
+
* Neuron
+
* Logical Components
+
* Population [id,label, comps]
+
* Layer [id,label, pops]
+
* Layers [id,label, layers] (may contain other Layers as well)
+
* System [id,label, layerss]
-Components Associations:
+
+Components Associations:
+
* Keys
+
* <> - implies composition
+
* 1....* - implies one-to-many
+
* Composition & Aggregation
+
* Population 1....* components
+
* Layer <>..... population
+
* Layers <>..... Layer
+
* System <>..... Layers
+
* Inheritance
+
* Synapse--->Component,Receiver
- * Receiver--->Component,
- * Sender--->Component,
+
+ * Receiver--->Component,
+
+ * Sender--->Component,
+
* Neuron--->Component, Receiver, Sender
+
* Population--->DeltableData, Layer--->DeltableData, Layers--->DeltableData, System--->DeltableData
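The inheritance relations above translate directly into Java interfaces; member lists are omitted, only the type relations from the spec are shown:

```java
// Sketch of the inheritance relations above as Java interfaces.
interface Component {}
interface Receiver extends Component {}
interface Sender extends Component {}
interface Synapse extends Component, Receiver {}
interface Neuron extends Component, Receiver, Sender {}
```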
+
Static Structure
+
All core components (and base) must have an interface-definition.
+
* Component properties
- * supports :
+
+ * supports :
+
* Returns an int, which is interpreted as aggregated flags represented by bits.
+
* Each support is represented by a flag, e.g. LTP, Facilitation, PostSynapticFeedback, etc.
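The bit-flag scheme for supports() could look like this; the concrete flag values are illustrative assumptions, since the spec only says each support occupies one bit:

```java
// Sketch of the supports() bit-flag scheme. Flag values are assumptions.
class Supports {
    public static final int LTP = 1 << 0;
    public static final int FACILITATION = 1 << 1;
    public static final int POST_SYNAPTIC_FEEDBACK = 1 << 2;

    /** True if the aggregated flags returned by supports() include flag. */
    public static boolean has(int supports, int flag) {
        return (supports & flag) != 0;
    }
}
```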
+
Interfaces
+
* Component
{id, supports, systemState}
- * SystemState
+
+ * SystemState
{age, }
+
* Synapse
{}
- * Receiver
+
+ * Receiver
{}
- * Sender
+
+ * Sender
{}
- * Neuron
+
+ * Neuron
{}
+
*
+
* Population
+
* Layer
+
* Layers
+
* System
+
\ No newline at end of file