CAI - Conscious Artificial Intelligence
========================================
Developed by Joao Paulo Schwarz Schuler.
The CAI project contains several libraries and prototypes demonstrating common
techniques in artificial intelligence, plus 2 subprojects: the CAI convolutional
neural network and the TEasyLearnAndPredict neural network.
These are the libraries (Free Pascal and Lazarus):
* An ultra-fast single-precision vector processing unit supporting AVX and AVX2
instructions at libs/uvolume.pas.
* A convolutional neural network implemented at libs/uconvolutionneuralnetwork.pas.
* A generic evolutionary algorithm implemented at libs/uevolutionary.pas.
* An easy-to-use OpenCL wrapper implemented at libs/ueasyopencl.pas.
* CIFAR-10 file support implemented at libs/ucifar10.pas.
These are the small prototypes:
* CIFAR-10 classification examples at /testcnnalgo/testcnnalgo.lpr
3 binaries are provided: amd64, AVX and AVX2.
* Increase image resolution from 32x32 to 256x256 RGB.
* Web server that allows remote/distributed NN computing and backpropagation.
* Cellular Automata:
John Horton Conway's Game of Life.
Life Appearance - a cellular automaton showing self-replication.
A 3D cellular automaton sliced into 6 layers.
* Evolutionary Algorithm Example: Magic Square Maker.
* Minimax Algorithm Example: Nine Men's Morris.
* SOM Neural Network Example.
* OpenCL parallel computing example: Trillion Test.
* OpenCL wrapper example: Easy Trillion Test.
Available Videos:
Increasing Image Resolution with Neural Networks
https://www.youtube.com/watch?v=jdFixaZ2P4w
Ultra Fast Single Precision Floating Point Computing
https://www.youtube.com/watch?v=qGnfwpKUTIQ
Popperian Mining Agent Artificial Intelligence
https://www.youtube.com/watch?v=qH-IQgYy9zg
The most recent source code can be downloaded from:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/
Convolutional Neural Network
=============================
This unit was made to be easy to use and understand.
Implemented at libs/uconvolutionneuralnetwork.pas, this unit offers these layers:
* TNNetInput (input/output: 1D, 2D or 3D).
* TNNetFullConnect (input/output: 1D, 2D or 3D).
* TNNetFullConnectReLU (input/output: 1D, 2D or 3D).
* TNNetLocalConnect (input/output: 1D, 2D or 3D - feature size: 1D or 2D). Similar to full connect with individual neurons.
* TNNetLocalConnectReLU (input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolution (input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolutionReLU (input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetMaxPool (input/output: 1D, 2D or 3D - max is done per layer of depth).
* TNNetAvgPool (input/output: 1D, 2D or 3D - avg is done per layer of depth).
* TNNetConcat (input/output: 1D, 2D or 3D) - Allows concatenating the result from previous layers.
* TNNetReshape (input/output: 1D, 2D or 3D).
* TNNetSoftMax (input/output: 1D, 2D or 3D).
There are also layers that perform opposing operations. They do not share data with the layer types above.
* TNNetDeLocalConnect (input/output: 1D, 2D or 3D - feature size: 1D or 2D). Similar to full connect with individual neurons.
* TNNetDeLocalConnectReLU (input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetDeconvolution (input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetDeconvolutionReLU (input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetDeMaxPool (input/output: 1D, 2D or 3D - max is done on a single layer).
These are the available weight initializers:
* InitUniform(Value: TNeuralFloat = 1);
* InitLeCunUniform(Value: TNeuralFloat = 1);
* InitHeUniform(Value: TNeuralFloat = 1);
* InitGlorotBengioUniform(Value: TNeuralFloat = 1);
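A hedged usage sketch: assuming that AddLayer returns the created layer object
and that the initializers above are methods of that layer (both are assumptions
of this sketch, not confirmed API), a layer's weights could be re-initialized
like this:

MyLayer := NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
MyLayer.InitHeUniform(1); // assumed call site; 1 is the scaling value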
The API allows you to create divergent/parallel and convergent layers, as in
the example below:
// The Neural Network
NN := TNNet.Create();
// Network that splits into 2 branches which are later concatenated
TA := NN.AddLayer(TNNetInput.Create(32, 32, 3));
// First branch starting from TA (5x5 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create(16, 5, 0, 0), TA);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
NN.AddLayer(TNNetMaxPool.Create(2));
TB := NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
// Second branch starting from TA (3x3 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create(16, 3, 0, 0), TA);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
NN.AddLayer(TNNetMaxPool.Create(2));
TC := NN.AddLayer(TNNetConvolutionReLU.Create(64, 6, 0, 0));
// Concatenates both branches so the NN has only one output path.
NN.AddLayer(TNNetConcat.Create([TB, TC]));
NN.AddLayer(TNNetFullConnectReLU.Create(64));
NN.AddLayer(TNNetFullConnectReLU.Create(NumClasses));
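After the layers are wired, a single training step could be sketched as
follows. This is a sketch under assumptions: it presumes the unit exposes
TNNetVolume and the TNNet methods Compute, GetOutput and Backpropagate, and
that filling the input and desired output is done elsewhere:

// Sketch of one training step (assumed methods: Compute, GetOutput, Backpropagate).
vInput   := TNNetVolume.Create(32, 32, 3);
vOutput  := TNNetVolume.Create(NumClasses, 1, 1);
vDesired := TNNetVolume.Create(NumClasses, 1, 1);
// ... fill vInput with an image and vDesired with its one-hot label ...
NN.Compute(vInput);         // forward pass
NN.GetOutput(vOutput);      // copies the output of the last layer
NN.Backpropagate(vDesired); // backward pass adjusts the weights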
TEasyLearnAndPredict Neural Network
====================================
This NN contains plenty of Object Pascal code. In case you intend to reuse
code, you should first look at the neural network implemented at
libs/UBUP3 - Universal Byte Prediction Unit. You can use UBUP3 to predict and/or
classify data. Start by looking at the class TEasyLearnAndPredictClass.
The basic code of UBUP3 has been functional since 2001. This code was first
developed as part of Joao's studies at university.
UBUP3 is inspired by the combinatorial neural network. The combinatorial
neural network concept is explained here:
A Connectionist Model for Knowledge Based Systems
http://dl.acm.org/citation.cfm?id=661467
Although UBUP3 was originally inspired by the combinatorial neural network
described above, it is NOT that network as originally invented.
These are some of the characteristics implemented in libs/UBUP3:
* TEasyLearnAndPredictClass is an easy-to-use class that you can embed in your
own project. You can use it with both small and large neural networks.
* Classes are problem independent (universal).
* After learning, these classes can predict/classify future states.
* Neurons in this unit resemble microprocessor OPERATIONS and TESTS.
* Neurons in this unit ARE NOT floating point operations. Therefore, this
implementation doesn't benefit from GPUs and isn't intended to run on them.
* There are 2 neural network layers: tests and operations.
* Some neurons are a "test" that relates a condition (current state) to an
OPERATION. "tests" are the FIRST NEURAL NETWORK LAYER.
* Some neurons are "operations" that transform the current state into the next
predicted state. Operations compose the SECOND NEURAL NETWORK LAYER.
* The current and the predicted states are arrays of bytes.
* Neurons that correctly predict future states get stronger.
* Stronger neurons win the prediction.
* Neurons are born and killed at runtime! The number of neurons isn't static.
* UBUP3 has been tested under:
Linux (amd64) / Lazarus
Windows(amd64) / Lazarus
Android(armv7a) / Laz4Android plus lamw
* These platforms haven't been tested yet but they will probably work:
MacOS / Lazarus
Raspberry PI / Lazarus with armv7a Linux variants.
NEURON TYPES
=============
These are the available neuron types in libs/UABFUN:
// Available operations. Some are logic/test operations such as <, > and <>.
// Others are math operations such as +, - and *.
const csNop = 0; // no operation
csEqual = 1; // NextState[Base] := (Op1 = State[Op2]);
csEqualM = 2; // NextState[Base] := (State[Op1] = State[Op2]);
csDifer = 3; // NextState[Base] := (State[Op1] <> State[Op2]);
csGreater = 4; // NextState[Base] := (State[Op1] > State[Op2]);
csLesser = 5; // NextState[Base] := (State[Op1] < State[Op2]);
csTrue = 6; // NextState[Base] := TRUE;
csSet = 7; // NextState[Base] := Op1;
csInc = 8; // NextState[Base] := State[Base] + 1;
csDec = 9; // NextState[Base] := State[Base] - 1;
csAdd = 10; // NextState[Base] := State[Op1] + State[Op2];
csSub = 11; // NextState[Base] := State[Op1] - State[Op2];
csMul = 12; // NextState[Base] := State[Op1] * State[Op2];
csDiv = 13; // NextState[Base] := State[Op1] div State[Op2];
csMod = 14; // NextState[Base] := State[Op1] mod State[Op2];
csAnd = 15; // NextState[Base] := State[Op1] and State[Op2];
csOr = 16; // NextState[Base] := State[Op1] or State[Op2];
csXor = 17; // NextState[Base] := State[Op1] xor State[Op2];
csInj = 18; // NextState[Base] := State[Op1];
csNot = 19; // NextState[BASE] := not(PreviousState[BASE])
// An Operation type contains: an operation, 2 operands and boolean operand modifiers.
type
TOperation = record
OpCode:byte; //Operation Code
Op1:integer; //Operand 1
Op2:integer; //Operand 2
RelativeOperandPosition1, //Operand position is relative
RelativeOperandPosition2:boolean;
RunOnAction:boolean;
end;
// "RelativeOperandPosition" Modifier Examples
// As an example, if RelativeOperandPosition1 is false, then we have
// NextState[Base] := State[Op1] + State[Op2];
// If RelativeOperandPosition1 is TRUE, then we have
// NextState[Base] := State[BASE + Op1] + State[Op2];
// If RunOnAction is TRUE and RelativeOperandPosition1 is FALSE, then we have:
// NextState[Base] := State[Op1] + Action[Op2];
// "RunOnAction" modifies first operator in Unary operations and
// modifies second operator in binary operations.
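To make the opcode table above concrete, the following self-contained sketch
shows how a binary operation record could be interpreted, ignoring the operand
modifiers. ExecuteOperation and TByteState are hypothetical names written for
this README; they are not the actual UABFUN API:

type TByteState = array of byte;

// Hypothetical interpreter for a few of the opcodes listed above.
procedure ExecuteOperation(const Op: TOperation; Base: integer;
  const State: TByteState; var NextState: TByteState);
begin
  case Op.OpCode of
    csAdd: NextState[Base] := State[Op.Op1] + State[Op.Op2]; // byte arithmetic
    csSub: NextState[Base] := State[Op.Op1] - State[Op.Op2];
    csInj: NextState[Base] := State[Op.Op1];
  end;
end;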
INTRODUCTORY NEURAL NETWORK EXAMPLE:
=====================================
procedure trainingNeuralNetworkExample();
var
FNeural:TEasyLearnAndPredictClass;
aInternalState, aCurrentState, aPredictedState: array of byte;
internalStateSize, stateSize: integer;
secondNeuralNetworkLayerSize: integer;
I,error_cnt: integer;
begin
secondNeuralNetworkLayerSize := 1000; // 1000 neurons on the second layer.
internalStateSize := 5; // the internal state is composed of 5 bytes.
stateSize := 10; // the current and next states are composed of 10 bytes.
SetLength(aInternalState , internalStateSize);
SetLength(aCurrentState , stateSize );
SetLength(aPredictedState, stateSize );
FNeural.Initiate(internalStateSize, stateSize, false, secondNeuralNetworkLayerSize, {search size} 40, {use cache} false);
// INCLUDE YOUR CODE HERE: some code here that updates the internal and current states.
error_cnt := 0;
for I := 1 to 10000 do
begin
// predicts the next state from aInternalState, aCurrentState into aPredictedState
FNeural.Predict(aInternalState, aCurrentState, aPredictedState);
// INCLUDE YOUR CODE HERE: some code here that updates the current state.
// INCLUDE YOUR CODE HERE: some code here that compares aPredictedState
// with new current state.
// INCLUDE YOUR CODE HERE: if predicted and current states don't match,
// then inc(error_cnt);
// This method is responsible for training. You can use the same code for
// training and actually predicting.
FNeural.newStateFound(aCurrentState);
end;
end;
SIMPLEST NEURAL NETWORK EXAMPLE:
=====================================
// In this example, the NN will learn how to count from 0 to 9 and restart.
procedure countTo9NeuralNetworkExample();
var
FNeural:TEasyLearnAndPredictClass;
aInternalState, aCurrentState, aPredictedState: array of byte;
internalStateSize, stateSize: integer;
secondNeuralNetworkLayerSize: integer;
I,error_cnt: integer;
begin
secondNeuralNetworkLayerSize := 1000; // 1000 neurons on the second layer.
internalStateSize := 1; // the internal state is composed of 1 byte.
stateSize := 1; // the current and next states are composed of 1 byte.
SetLength(aInternalState , internalStateSize);
SetLength(aCurrentState , stateSize );
SetLength(aPredictedState, stateSize );
FNeural.Initiate(internalStateSize, stateSize, false, secondNeuralNetworkLayerSize, {search size} 40, {use cache} false);
// INCLUDE YOUR CODE HERE: some code here that updates the internal and current states.
ABClear(aInternalState);
ABClear(aCurrentState);
ABClear(aPredictedState);
error_cnt := 0;
writeln('Starting...');
for I := 1 to 10000 do
begin
// predicts the next state from aInternalState, aCurrentState into aPredictedState
FNeural.Predict(aInternalState, aCurrentState, aPredictedState);
// INCLUDE YOUR CODE HERE: some code here that updates the current state.
aCurrentState[0] := (aCurrentState[0] + 1) mod 10;
// INCLUDE YOUR CODE HERE: some code here that compares aPredictedState
// with new current state.
// If the predicted and current states don't match, count an error.
if not ABCmp(aCurrentState, aPredictedState) then inc(error_cnt);
// This method is responsible for training. You can use the same code for
// training and actually predicting.
FNeural.newStateFound(aCurrentState);
end;
// The smaller the number of errors, the faster the NN was able to learn.
writeln('Finished. Errors found:',error_cnt);
end;
Let Me Know
============
In case you are using this project, or even small parts of it, I would love
to know about it. Please post here:
https://sourceforge.net/p/cai/discussion/637501/thread/068972e4/?limit=25#903d

Source: readme.txt, updated 2018-01-01