This is a plugin for the Bevy game engine.
It can be used for creating massive-scale, distributed,
peer-to-peer apps with built-in data synchronization.

This plugin is an EXPERIMENTAL piece of software.
So are its underlying dependencies, so you SHOULD NOT
build any mission-critical software on top of it.
Some features are barely implemented, and some might not work.
There are currently NO TESTS defined for this crate.
```rust
use std::env::args;
use std::path::PathBuf;

use bevy_swarm_sync::prelude::*;

fn main() {
    // Take the config directory from the first CLI argument,
    // or fall back to a default path.
    let config_dir = if let Some(c_d) = args().nth(1) {
        PathBuf::from(c_d)
    } else {
        PathBuf::from("/default/config/dir")
    };
    App::new()
        .insert_resource(SwarmConfigDir(config_dir))
        .add_plugins(SwarmSyncPlugin)
        // Wire up the example observer and system defined below.
        .add_observer(when_new_content)
        .add_systems(Update, read_content_messages)
        .run();
}

// Observer: spawn a UI label whenever a new Content arrives from a Swarm,
// laid out in one column per Gnome and one row per Content id.
fn when_new_content(nc: On<NewContent>, mut commands: Commands) {
    let (_tags, header) = read_tags_and_header(nc.d_type, nc.data.clone());
    let text = format!("NC: {}-{} {:?}: {header}", nc.id, nc.c_id, nc.d_type);
    commands.spawn((
        Text::new(text),
        Node {
            position_type: PositionType::Absolute,
            top: px(36 + (24 * nc.c_id)),
            left: px(12 + (336 * nc.id.0 as usize)),
            height: px(24),
            ..default()
        },
    ));
}

// System: drain incoming MContent messages every frame.
fn read_content_messages(mut reader_a: MessageReader<MContent>) {
    for _cont in reader_a.read() {
        // eprintln!("App read MContent");
    }
}

// Trigger this from anywhere to ask the plugin's application manager to quit.
fn some_system(mut commands: Commands) {
    commands.trigger(EToAppMgr {
        msg: ToAppMgr::Quit,
    });
}
```
When you run an application with SwarmSyncPlugin added,
you start an instance of a Gnome.
That Gnome immediately starts searching for other Gnomes on the Internet.
(You can define some initial IPv4 or IPv6 endpoints in the conf_dir/neigh.conf file; see the example near the end of this document.)
When it finds some Neighbors, it tries to join them, forming one or more Swarms.
Every Swarm has a type and contains a single Datastore.
A Datastore is an ordered collection of up to 65,536 synchronized Contents.
Contents have a DataType: there can be up to 256 different DataTypes per Swarm.
Every Content is an ordered collection of up to 65,536 Data blocks (aka Pages).
Every Data block (Page) consists of up to 1,024 bytes.
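Taken together, a single Datastore can therefore address at most 65,536 × 65,536 × 1,024 bytes = 2^42 bytes, or 4 TiB, per Swarm.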
There are also around 240 vacant sync message types (within the 0-247 application range described below) that can be defined and interpreted
directly by the application's logic, bypassing the Datastore entirely.
In addition to the above, every Gnome can have up to 256
Unicast channels defined with its direct Neighbors.
Every Swarm can also have up to 256 Broadcast channels
and up to 256 Multicast channels.
Casts have one originator, but every member of a cast can send an
uplink message to the originator, and may get its message cast out to the other members.
Every Swarm can have a set of Policies, with Requirements that need to be fulfilled to exercise those Policies.
A Requirement is a logical pyramid of Capabilities a Gnome should have in order to be allowed
to exercise a given Policy and add/change certain Content in a Datastore.
Every Swarm has one Founder Gnome, and that Gnome decides what Capabilities should be assigned
to any particular Gnome.
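Requirements can nest arbitrarily. Here is a minimal sketch, reusing the Requirement and Capabilities types that appear in the examples below (the And variant is assumed from the Policy::Default discussion further down):

```rust
// Hypothetical nested Requirement: pass if the Gnome is the Founder,
// or if it is both an Owner and an Admin.
fn nested_requirement() -> Requirement {
    Requirement::Or(
        Box::new(Requirement::Has(Capabilities::Founder)),
        Box::new(Requirement::And(
            Box::new(Requirement::Has(Capabilities::Owner)),
            Box::new(Requirement::Has(Capabilities::Admin)),
        )),
    )
}
```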
An example (non-Bevy) app is available at the following location:
https://sourceforge.net/p/village-tui/code/ci/master/tree/
If a Gnome tries to post a message without sufficient Capabilities, that message gets immediately
rejected by the local verification procedure, and this event is propagated up to the application level.
When an application wants to synchronize some data with the Swarm, it sends a SyncMessage.
This message has a SyncMessageType, which is a one-byte value from 0 to 255.
Values from 248 to 255 are predefined and support the internal logic of this library.
Values from 0 to 247 are available for general use by an application.
If you want to send an AppDefinedMsg, here is how it can be done:
```rust
// Send an application-defined sync message into Swarm 0.
fn send_app_msg(mut commands: Commands) {
    let s_id: SwarmID = SwarmID(0);
    let m_type: u8 = 0; // has to be <= 247, Policy verified
    let op_type: u16 = 1; // all values allowed, Policy verified
    let op_subtype: u16 = 65535; // all values allowed, no verification
    let data: Data = Data::new(vec![1, 2, 3]).unwrap();
    let app_msg = AppDefinedMsg::new(m_type, op_type, op_subtype, data).unwrap();
    commands.trigger(EToAppMgr {
        msg: ToAppMgr::AppDefined(s_id, app_msg),
    });
}
```
**This logic is in an early stage of development and subject to change.**
Check fn verify_policy() in the a-swarm-consensus crate (src/swarm.rs) to see
whether a Policy you are about to define is actually going to affect your app's behavior!
Now, what would happen if this message were critical to the application's logic and anybody could trigger it?
The application would be of no use!
So before we even post the above message, we have to ensure that only Gnomes with certain privileges
are allowed to make such a change.
To do this we need to define a RunningPolicy.
There are some predefined Policies, and for this case we should
be fine with Policy::DataWithFirstByte(byte), where byte = 0.
```rust
// Restrict Policy::DataWithFirstByte(0) to the Founder or any Owner.
fn some_system(mut commands: Commands, my_id: Res<MyGnomeId>) {
    // s_name has to correspond to s_id from the previous example
    let s_name = SwarmName::new(my_id.0, "/".to_string()).unwrap();
    let req_l = Requirement::Has(Capabilities::Founder);
    let req_r = Requirement::Has(Capabilities::Owner);
    let req = Requirement::Or(Box::new(req_l), Box::new(req_r));
    commands.trigger(EToAppMgr {
        msg: ToAppMgr::SetRunningPolicy(s_name, Policy::DataWithFirstByte(0), req),
    });
}
```
Now only the Founder Gnome and any Gnome that has been given the Owner capability can make such a change effective.

Right now the verification logic for Policy::UserDefined is not implemented!
You can define such a policy, but it will not be enforced.
Only a bare minimum of Policy enforcement is defined, as a starting point for further development.

How about a situation where we have a set of App messages and want a single Policy
to cover the entire set?
You can define a ByteSet, and a Policy that uses it, but for now the enforcement logic is very limited.
```rust
use std::collections::HashSet;

// Register a ByteSet that Policies can refer to by its set_id.
fn some_system(mut commands: Commands, my_id: Res<MyGnomeId>) {
    // s_name has to correspond to s_id from the previous example
    let s_name = SwarmName::new(my_id.0, "/".to_string()).unwrap();
    let set_id: u8 = 0;
    let mut h_set: HashSet<u8> = HashSet::new();
    // add some bytes to h_set...
    let b_set = ByteSet::new(h_set);
    // You can also define a ByteSet of u16 values:
    // let mut h_set: HashSet<u16> = HashSet::new();
    // let b_set = ByteSet::new_pair(h_set);
    commands.trigger(EToAppMgr {
        msg: ToAppMgr::SetRunningByteSet(s_name, set_id, b_set),
    });
}
```
Let's say you made an app that turned out to be an overwhelming success.
Until now you have managed it on your own, but the time has come to delegate some of
your responsibilities to other Gnomes that have proven themselves trustworthy.
Maybe you happen to know them in person, or have some other reason to extend them
credit of trust.
How can this be done?
```rust
// Grant the Admin capability to a list of trusted Gnomes.
fn some_system(mut commands: Commands, my_id: Res<MyGnomeId>) {
    // s_name has to correspond to s_id from the previous example
    let s_name = SwarmName::new(my_id.0, "/".to_string()).unwrap();
    let mut trusted_gnomes: Vec<GnomeId> = Vec::new();
    // TODO: add trusted Gnomes here
    commands.trigger(EToAppMgr {
        msg: ToAppMgr::SetRunningCapability(s_name, Capabilities::Admin, trusted_gnomes),
    });
}
```
Every time an instance of a given Swarm is started from scratch,
it has only the Default Policy running.
We don't want to enter these settings from scratch again and again, so we can just put
them in the Datastore. When a swarm is being restarted, only the Founder has the Capability to
change the Default Policy. Instead of writing everything from scratch,
he can read those settings from the Datastore and make them Running.
This is what the SetStored... messages are for.
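This is only a sketch of that flow; the exact SetStored... variant name and signature are assumptions here (check the ToAppMgr enum for the real ones):

```rust
// Hypothetical: persist a Policy in the Datastore so the Founder can
// restore it as Running after a restart. Assumes a SetStoredPolicy
// counterpart mirroring SetRunningPolicy's signature.
fn store_policy(mut commands: Commands, my_id: Res<MyGnomeId>) {
    let s_name = SwarmName::new(my_id.0, "/".to_string()).unwrap();
    let req = Requirement::Has(Capabilities::Founder);
    commands.trigger(EToAppMgr {
        msg: ToAppMgr::SetStoredPolicy(s_name, Policy::DataWithFirstByte(0), req),
    });
}
```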
This entire mechanism is also implemented in order to prevent swarm fragmentation,
where you end up with two separate swarms that have exactly the same name but are not synced.
Let's say you have your swarm running and you have Gnomes all over the place.
But, for example, in Japan you had a few members who were synced to your swarm, and after syncing
they all disconnected. The next day they start their apps again and want to
connect to your swarm, but only local Gnomes are available, so they sync to the state from the
previous day; in the meantime some changes were made to this swarm that they don't know about.
If they see that the only running Policy is the Default one, they know that they are not connected
to the core swarm. Even if some of them had Admin or other Capabilities yesterday, they are now
out of luck and have to wait for the Founder to join and restore order.
I have not tested such a scenario; if a Founder joins that swarm, the swarm will "go back in time"
from the Founder's perspective, since he is the one who is joining.
So it is best to keep your swarm always running and spread out all over the place as much as possible.
If one swarm has two disjoint sets of Gnomes and at some point those sets start to intersect,
then this will not work. This scenario is also not tested.
The smartest way to behave would be to implement some sort of reunion protocol...
And the simplest way to do a reunion is this:
once the first Gnome bridges the two Swarms, he sees that his Neighbor has a different root hash from his own.
Now he can compare his RunningPolicy set with that of his Neighbor.
Once he finds out that he and his existing Neighbors are members of a stalled swarm,
he can send a Neighbor request notifying all of his existing Neighbors:
"We are behind in this swarm, I'm disconnecting. The core swarm Neighbor is <this guy>; join him if
you want to be up to date, or you can just join me in a moment."
Other Gnomes can decide whether they believe him or not.
The simplest way to verify that a core swarm is not fake is to compare the core's RunningPolicies with
the "default swarm's" StoredPolicies. If those match, it is probably OK to make the transition.
Now, what about Swarms that want to keep the Founder as the only Gnome with "write access"?
Simple: just add the Admin capability to the Founder Gnome and set Policy::Default = And(Founder, Admin).
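A minimal sketch of that change, assuming Policy::Default is a valid Policy variant and reusing the Requirement types from the earlier examples:

```rust
// Sketch: only a Gnome that is both the Founder and an Admin
// will satisfy the Default policy's Requirement.
fn founder_only_writes(mut commands: Commands, my_id: Res<MyGnomeId>) {
    let s_name = SwarmName::new(my_id.0, "/".to_string()).unwrap();
    let req = Requirement::And(
        Box::new(Requirement::Has(Capabilities::Founder)),
        Box::new(Requirement::Has(Capabilities::Admin)),
    );
    commands.trigger(EToAppMgr {
        msg: ToAppMgr::SetRunningPolicy(s_name, Policy::Default, req),
    });
}
```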
Every Swarm has a type, called AppType. It will soon be renamed to SwarmType. Here is why.
An application should not be limited to only a single swarm. In fact, it should consist of multiple
swarms. This approach gives developers and users tremendous flexibility.
Apps can be partitioned into "chunks", each of them living in a separate Swarm.
Let's take a game as an example.
Every game has its logic, current state, audio artifacts, 3D models, textures, shaders, etc.
Currently we store those in files. But what if we stored them in Swarms?
One Swarm for game logic, one for state, one for audio, one for localization, etc.
A Swarm could be considered a module that can be swapped in and out.
If the app was deployed as WebAssembly code, then we could store the compiled logic inside a Swarm.
It would even be possible to hot-swap the logic of the game while in the game, not to mention other assets.
Developers would release a "template swarm" for some kind of artifact, let's say 3D models, and
the community could pick it up and create their own assets.
Users would then have a set of swarms to choose from for any kind of customization.
For efficient gaming, only the "game state" swarm with BCast/MCast channels
would need to stay in sync; all other assets could be retrieved from local storage.
If someone has an older machine, he can still play the most recent games just by selecting Assets
that give him the desired quality of experience.
The config dir can contain the following text files:
* id_rsa
* id_rsa.pub
* dapp-lib.conf
* neigh.conf
* storage.rules

Currently this is also where the actual Data blocks are stored, under the storage subdirectory.
You can also create a 'search' dir and put in it text files containing search queries;
every line starting with # is ignored.

dapp-lib.conf can contain the following options:
* AUTOSAVE - Contents will be automatically saved when changed by the Swarm
* LISTEN_PORT - a UDP and TCP port for IPv4 to listen on (default is 1026)
* LISTEN_PORT_IPV6 - a UDP and TCP port for IPv6 to listen on (default is 1027)
* MAX_CONNECTED_SWARMS - the number (<= 255) of swarms we can be simultaneously talking to
* MAX_UPLOAD_BYTES_PER_SECOND - how many bytes of bandwidth we can use for uploads
* STORAGE_DIR - set the storage dir (not sure if implemented)
* SEARCH_DIR - set the search dir (not sure if implemented)

See config.rs for additional info.
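A minimal dapp-lib.conf sketch using the options above (the values and the exact key/value syntax are assumptions; config.rs is the authoritative reference):

```
# Example dapp-lib.conf
AUTOSAVE
LISTEN_PORT 1026
LISTEN_PORT_IPV6 1027
MAX_CONNECTED_SWARMS 8
MAX_UPLOAD_BYTES_PER_SECOND 65536
```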
An example neigh.conf defining one initial endpoint:

```
# IP PORT NAT PortAlloc Transport
192.168.0.31 48969 1 0 0
```
An example storage.rules file:

```
IamFounder All
FounderIs GID-abcdef0123456789 Manifest
SwarmName GID-abcdef0123456789 aname FirstPages
CatalogApp MatchOrForget
ForumApp MatchOrFirstPages
SearchMatch MatchAndManifestOrFirstPages
SearchMatch MatchAndManifestOrForget
Default Forget
```