
CAI - Conscious Artificial Intelligence
========================================
Developed by Joao Paulo Schwarz Schuler.
The CAI project contains various small prototypes showing common artificial
intelligence techniques, plus one big subproject.
These are the small prototypes:
* SOM Neural Network Example.
* Cellular Automata:
  John Horton Conway's Game of Life (a minimal sketch of the rule follows this list).
  Life Appearance - a cellular automaton showing self-replication.
  A 3D cellular automaton sliced into 6 layers.
* Evolutionary Algorithm Example: Magic Square Maker.
* Minimax Algorithm Example: Nine Men's Morris.
* OpenCL parallel computing example.
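For readers new to cellular automata, the sketch below shows the standard Game
of Life update rule in Object Pascal: a live cell survives with 2 or 3 live
neighbours, and a dead cell becomes alive with exactly 3. It only illustrates
the rule itself; the grid type, constant and routine names are invented here
and are not the project's prototype code.

const
  cGridSize = 32; // arbitrary grid size chosen for this illustration

type
  TGrid = array[0..cGridSize - 1, 0..cGridSize - 1] of boolean;

function CountLiveNeighbours(const Grid: TGrid; X, Y: integer): integer;
var
  DX, DY, NX, NY: integer;
begin
  Result := 0;
  for DX := -1 to 1 do
    for DY := -1 to 1 do
      if (DX <> 0) or (DY <> 0) then
      begin
        // Wrap around the borders (toroidal grid).
        NX := (X + DX + cGridSize) mod cGridSize;
        NY := (Y + DY + cGridSize) mod cGridSize;
        if Grid[NX, NY] then inc(Result);
      end;
end;

procedure NextGeneration(const Current: TGrid; var Next: TGrid);
var
  X, Y, Neighbours: integer;
begin
  for X := 0 to cGridSize - 1 do
    for Y := 0 to cGridSize - 1 do
    begin
      Neighbours := CountLiveNeighbours(Current, X, Y);
      if Current[X, Y] then
        Next[X, Y] := (Neighbours = 2) or (Neighbours = 3) // survival
      else
        Next[X, Y] := (Neighbours = 3); // birth
    end;
end;
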
There is also the BIG SUBPROJECT: a conscious artificial intelligence with its
own neural network and its own planning module. A detailed description follows.
The most recent source code can be downloaded from:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/
CAI Neural Network
===================
This project contains a large amount of Object Pascal code. If you intend to
reuse code, you should first look at the neural network implemented in
libs/UBUP3 - Universal Byte Prediction Unit. You can use UBUP3 to predict
and/or classify data. Start by looking at the class TEasyLearnAndPredictClass.
The basic code of UBUP3 has been functional since 2001; it was first developed
as part of Joao's studies at university.
UBUP3 is inspired by the combinatorial neural network. The Combinatorial Neural
Network concept is explained here:
A Connectionist Model for Knowledge Based Systems
http://dl.acm.org/citation.cfm?id=661467
Although UBUP3 was originally inspired by the CNN described above, it is NOT a
CNN as originally invented.
Here are some characteristics implemented in libs/UBUP3:
* TEasyLearnAndPredictClass is an easy-to-use class that you can embed in your
own project. You can use it with both small and large neural networks.
* Classes are problem independent (universal).
* After learning, these classes can predict/classify future states.
* Neurons in this unit resemble microprocessor OPERATIONS and TESTS.
* Neurons in this unit ARE NOT floating point operations. Therefore, this
implementation doesn't benefit from GPUs, nor is it intended to run on them.
* There are 2 neural network layers: tests and operations.
* Some neurons are "tests" that relate a condition (current state) to an
OPERATION. Tests form the FIRST NEURAL NETWORK LAYER.
* Some neurons are "operations" that transform the current state into the next
predicted state. Operations compose the SECOND NEURAL NETWORK LAYER.
* The current and the predicted states are arrays of bytes.
* Neurons that correctly predict future states get stronger.
* Stronger neurons win the prediction (see the sketch after this list).
* UBUP3 has been tested under:
  Linux (amd64) / Lazarus
  Windows (amd64) / Lazarus
  Android (armv7a) / Laz4Android plus LAMW
* These platforms haven't been tested yet, but they will probably work:
  MacOS / Lazarus
  Raspberry Pi / Lazarus with armv7a Linux variants.
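The last two items above are easiest to see in code. The fragment below is a
deliberately simplified, hypothetical sketch of the "correct predictions make a
neuron stronger, and the strongest neuron wins" idea; the type and field names
(TSketchNeuron, Strength, Predicted) are invented for this README and do not
match the actual declarations in libs/UBUP3.

type
  TByteState = array of byte;

  // Hypothetical neuron used only for this illustration.
  TSketchNeuron = record
    Strength: integer;     // reinforced whenever the neuron predicts correctly
    Predicted: TByteState; // this neuron's guess for the next state
  end;

// Rewards every neuron whose prediction matched the state that actually
// occurred and returns the index of the strongest neuron, which "wins"
// the next prediction.
function ReinforceAndPickWinner(var Neurons: array of TSketchNeuron;
  const ObservedNext: TByteState): integer;
var
  I, J: integer;
  Match: boolean;
begin
  Result := 0;
  for I := 0 to High(Neurons) do
  begin
    // Compare this neuron's prediction with the observed next state.
    Match := true;
    for J := 0 to High(ObservedNext) do
      if Neurons[I].Predicted[J] <> ObservedNext[J] then Match := false;
    // Neurons that correctly predict future states get stronger.
    if Match then inc(Neurons[I].Strength);
    // Stronger neurons win the prediction.
    if Neurons[I].Strength > Neurons[Result].Strength then Result := I;
  end;
end;
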
NEURON TYPES
=============
These are the types of neurons available in libs/UABFUN:
// Available operations. Some operations are logic/test operations such as
// <, > and <>. Other operations are math operations such as +, - and *.
const
  csNop     =  0; // no operation
  csEqual   =  1; // NextState[Base] := (Op1 = State[Op2]);
  csEqualM  =  2; // NextState[Base] := (State[Op1] = State[Op2]);
  csDifer   =  3; // NextState[Base] := (State[Op1] <> State[Op2]);
  csGreater =  4; // NextState[Base] := (State[Op1] > State[Op2]);
  csLesser  =  5; // NextState[Base] := (State[Op1] < State[Op2]);
  csTrue    =  6; // NextState[Base] := TRUE;
  csSet     =  7; // NextState[Base] := Op1;
  csInc     =  8; // NextState[Base] := State[Base] + 1;
  csDec     =  9; // NextState[Base] := State[Base] - 1;
  csAdd     = 10; // NextState[Base] := State[Op1] + State[Op2];
  csSub     = 11; // NextState[Base] := State[Op1] - State[Op2];
  csMul     = 12; // NextState[Base] := State[Op1] * State[Op2];
  csDiv     = 13; // NextState[Base] := State[Op1] div State[Op2];
  csMod     = 14; // NextState[Base] := State[Op1] mod State[Op2];
  csAnd     = 15; // NextState[Base] := State[Op1] and State[Op2];
  csOr      = 16; // NextState[Base] := State[Op1] or State[Op2];
  csXor     = 17; // NextState[Base] := State[Op1] xor State[Op2];
  csInj     = 18; // NextState[Base] := State[Op1];
  csNot     = 19; // NextState[Base] := not(PreviousState[Base]);
// An operation contains: an operation code, 2 operands and boolean operand
// modifiers.
type
  TOperation = record
    OpCode: byte;    // operation code (one of the cs* constants above)
    Op1: integer;    // operand 1
    Op2: integer;    // operand 2
    RelativeOperandPosition1,           // operand position is relative
    RelativeOperandPosition2: boolean;
    RunOnAction: boolean;
  end;
// "RelativeOperandPosition" Modifier Examples
// As an example, if RelativeOperandPosition1 is false, then we have
// NextState[Base] := State[Op1] + State[Op2];
// If RelativeOperandPosition1 is TRUE, then we have
// NextState[Base] := State[BASE + Op1] + State[Op2];
// If RunOnAction is TRUE and RelativeOperandPosition1 is FALSE, then we have:
// NextState[Base] := State[Op1] + Action[Op2];
// "RunOnAction" modifies first operator in Unary operations and
// modifies second operator in binary operations.
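To make the modifier rules above concrete, here is a hypothetical sketch of how
a single binary operation (csAdd) could be evaluated under them. The routine
names (ResolveOperand, ApplyAdd) are invented for this README; the real
evaluation code lives in libs/UABFUN and may be organized differently.

// Reads one operand, honouring the position and source modifiers described
// above. This helper is invented for the illustration.
function ResolveOperand(const State, Action: array of byte;
  Base, Op: integer; Relative, UseAction: boolean): byte;
var
  Position: integer;
begin
  if Relative then
    Position := Base + Op  // operand position is relative to Base
  else
    Position := Op;        // operand position is absolute
  if UseAction then
    Result := Action[Position]  // RunOnAction: read from the action array
  else
    Result := State[Position];  // default: read from the current state
end;

// csAdd with modifiers: NextState[Base] := State[Op1] + State[Op2], where the
// operand positions and source arrays depend on the modifier flags.
procedure ApplyAdd(const Op: TOperation; const State, Action: array of byte;
  var NextState: array of byte; Base: integer);
begin
  NextState[Base] :=
    ResolveOperand(State, Action, Base, Op.Op1, Op.RelativeOperandPosition1, false) +
    ResolveOperand(State, Action, Base, Op.Op2, Op.RelativeOperandPosition2, Op.RunOnAction);
end;
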
INTRODUCTORY NEURAL NETWORK EXAMPLE:
=====================================
procedure trainingNeuralNetworkExample();
var
  FNeural: TEasyLearnAndPredictClass;
  aInternalState, aCurrentState, aPredictedState: array of byte;
  internalStateSize, stateSize: integer;
  secondNeuralNetworkLayerSize: integer;
  I, error_cnt: integer;
begin
  secondNeuralNetworkLayerSize := 1000; // 1000 neurons on the second layer.
  internalStateSize := 5;  // the internal state is composed of 5 bytes.
  stateSize := 10;         // the current and next states are composed of 10 bytes.
  SetLength(aInternalState , internalStateSize);
  SetLength(aCurrentState  , stateSize);
  SetLength(aPredictedState, stateSize);
  FNeural.Initiate(internalStateSize, stateSize, false,
    secondNeuralNetworkLayerSize, {search size} 40, {use cache} false);
  // INCLUDE YOUR CODE HERE: code that updates the internal and current states.
  error_cnt := 0;
  for I := 1 to 10000 do
  begin
    // Predicts the next state from aInternalState and aCurrentState into aPredictedState.
    FNeural.Predict(aInternalState, aCurrentState, aPredictedState);
    // INCLUDE YOUR CODE HERE: code that updates the current state.
    // INCLUDE YOUR CODE HERE: code that compares aPredictedState with the new
    // current state.
    // INCLUDE YOUR CODE HERE: if the predicted and current states don't match,
    // then inc(error_cnt);
    // This method is responsible for training. You can use the same code for
    // training and actually predicting.
    FNeural.newStateFound(aCurrentState);
  end;
end;
SIMPLEST NEURAL NETWORK EXAMPLE:
=====================================
// In this example, the NN will learn how to count from 0 to 9 and restart.
procedure countTo9NeuralNetworkExample();
var
  FNeural: TEasyLearnAndPredictClass;
  aInternalState, aCurrentState, aPredictedState: array of byte;
  internalStateSize, stateSize: integer;
  secondNeuralNetworkLayerSize: integer;
  I, error_cnt: integer;
begin
  secondNeuralNetworkLayerSize := 1000; // 1000 neurons on the second layer.
  internalStateSize := 1; // the internal state is composed of 1 byte.
  stateSize := 1;         // the current and next states are composed of 1 byte.
  SetLength(aInternalState , internalStateSize);
  SetLength(aCurrentState  , stateSize);
  SetLength(aPredictedState, stateSize);
  FNeural.Initiate(internalStateSize, stateSize, false,
    secondNeuralNetworkLayerSize, {search size} 40, {use cache} false);
  // Initializes the internal, current and predicted states.
  ABClear(aInternalState);
  ABClear(aCurrentState);
  ABClear(aPredictedState);
  error_cnt := 0;
  writeln('Starting...');
  for I := 1 to 10000 do
  begin
    // Predicts the next state from aInternalState and aCurrentState into aPredictedState.
    FNeural.Predict(aInternalState, aCurrentState, aPredictedState);
    // Updates the current state: count from 0 to 9 and restart.
    aCurrentState[0] := (aCurrentState[0] + 1) mod 10;
    // Compares aPredictedState with the new current state; if they don't
    // match, increments the error counter.
    if not ABCmp(aCurrentState, aPredictedState) then
      inc(error_cnt);
    // This method is responsible for training. You can use the same code for
    // training and actually predicting.
    FNeural.newStateFound(aCurrentState);
  end;
  // The smaller the number of errors, the faster the NN was able to learn.
  writeln('Finished. Errors found: ', error_cnt);
end;
