ai-world-general Mailing List for AI World
Status: Abandoned
Brought to you by: dnnrly
2004: Jan | Feb | Mar (12) | Apr (5) | May (1) | Jun | Jul | Aug (6) | Sep (27) | Oct (1) | Nov (5) | Dec (2)
From: Pascal D. <dn...@gm...> - 2004-12-07 14:12:20
Hi Alex, you mentioned that someone has had this compiling in VC++. Does anyone have the .dsp and .dsw files? I've been trying to kick things into shape but not having much luck.

Pascal
From: Alex R. <rub...@ui...> - 2004-12-06 10:40:12
...one step closer to failing out of college...

http://netfiles.uiuc.edu/rubnstyn/shared/

New stuff:
* Switched everything to 3d... agents inherit from Object3d, and have vector3<float> position, velocity and acceleration, plus a matrix4<float> rotation matrix.
* Moved camera control to a CameraController class. Use the arrow keys, CTRL and SHIFT to move and orient the camera. The camera is now framerate independent.
* Added a green grid to give the scene more depth. Replaced the disc with a pyramid for agent representation.
* Made the Stopwatch timer more accurate by switching to SDL_GetTicks (instead of clock() in <time.h>).
* Blah blah blah, no one reads this... Picel, you should download/compile this version if you have time...

Bad things:
* Managed to break the network code somehow... it's garbage anyway; I'll replace it with something better after finals.
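The "framerate independent" camera change above can be sketched as follows. This is not the project's actual math3d code; Vector3 and Integrate are illustrative stand-ins showing the core idea: movement is scaled by elapsed seconds, so the camera covers the same distance regardless of how many frames are drawn.

```cpp
#include <cassert>

// Illustrative stand-in for a math3d-style vector (names are assumptions).
struct Vector3 {
    float x, y, z;
};

// Framerate-independent update: displacement depends only on elapsed time,
// not on how often this function is called.
Vector3 Integrate(Vector3 pos, const Vector3& vel, float dtSeconds) {
    pos.x += vel.x * dtSeconds;
    pos.y += vel.y * dtSeconds;
    pos.z += vel.z * dtSeconds;
    return pos;
}
```

Calling Integrate once with dt = 0.5 or five times with dt = 0.1 moves the camera the same total distance, which is the property the change was after.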
From: Alex R. <rub...@ui...> - 2004-11-30 12:01:36
http://netfiles.uiuc.edu/rubnstyn/shared/ (latest version is 0.14)

Pascal: The demo (if you can get it to compile) is a single neuron client causing its agent (the blue ball in the server window) to wiggle around in random directions.

Alex: You probably don't have time... but if you put any more work into the renderer, start from this source. The class structure is much neater and saves headaches.

Most of this code is still a mess (Networking, Actions) or completely nonfunctional (Console, Sensors). Still... it's improving incrementally. There's still a good deal of thinking that needs to be done... my design is really patchy. Problems as of now:

* I have no way to transition between Application states (Intro Menu, Running Simulation, Paused Simulation, etc.).
* Each agent has a single Action it can take, which can only affect a single game tick and is then deleted. The network task creates a new Action based on network input... this is way too inflexible.
* The controller/agent-body connection right now can't accommodate more than one agent in the simulation. I need to redesign it so that clients can run multiple agent controllers and multiple clients can log in.

Lots of problems... someone solve them... it's 6am and I'm too groggy to articulate anything. You guys get regular insight into my sleep-deprived mind...

Oh, one thing I am going to manage to do before the finals-crunch-insanity is dump my sketchy vector2d class in favour of the math3d library (http://trenki.50free.org/math3d/). Soon each game object will have a vector3 position and a 3x3 matrix orientation. How does this sound for an object hierarchy:

SpatialObject -> DrawableObject
SpatialObject -> LightingObject
SpatialObject -> Camera

Also, for a simulation interface: one camera high up in the sky, looking over the complete map and rendering to a minimap in the corner of the screen, and another user-controlled close-up camera.

Blargh. Going to sleep.

-Alex
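The hierarchy floated in that mail might look like the skeleton below. Member names and types are guesses (the mail only says "a vector3 position and a 3x3 matrix orientation"), and Pyramid is added purely to make the abstract DrawableObject concrete.

```cpp
#include <cassert>

// Placeholder math types (the real project would use math3d's).
struct Vector3 { float x, y, z; };
struct Matrix3 { float m[9]; };

// Base class: anything that occupies a place in the world.
class SpatialObject {
public:
    virtual ~SpatialObject() {}
    Vector3 position;
    Matrix3 orientation;
};

// Anything that knows how to render itself.
class DrawableObject : public SpatialObject {
public:
    virtual void Draw() = 0;
};

class LightingObject : public SpatialObject {
    // colour, intensity, falloff would go here
};

class Camera : public SpatialObject {
    // field of view, near/far planes would go here
};

// Hypothetical concrete drawable, matching the pyramid agent marker.
class Pyramid : public DrawableObject {
public:
    void Draw() { /* issue GL calls here */ }
};
```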
From: Pascal D. <dn...@gm...> - 2004-11-17 13:39:17
Hi all, sorry for not replying sooner; I've just got back from Scotland and from talking to recruiters/slave traders.

I haven't managed to build this properly yet, but I get the feeling that this isn't quite 'production quality' yet. I think what I'll do is create a special directory in CVS for prototypes, experiments etc. and check it into there. I'd like to keep the trunk branch of CVS as ready to release as possible. This just avoids problems in the long run.

Pascal
From: Alex R. <rub...@ui...> - 2004-11-15 09:35:42
Ok, don't get the last code, get this slightly updated version:

http://netfiles.uiuc.edu/rubnstyn/shared/
Click on ai-world-0.12.zip

Things I changed:
* Collision detection sort-of working, but agents get stuck in corners... Alex, maybe you can figure it out?
* I've moved all the capture/dispatch of SDL events out of TaskManager into its own task (EventDispatcher).
* Reorganized the code into three directories:
  [server] - the world simulation (right now a simple ball collision sim)
  [client] - an agent's brain that logs into the simulation (right now does nothing)
  [engine] - any code that both server and client might share (the task scheduling, for example)

Things I'll change after my test on Weds:
* Get all the class code out of headers into .cpp files.

Things I'll be thinking about:
* How can tasks announce events to the TaskManager/other tasks? Alex: I don't think this can be done with int return values... take for example the network task picking up some packets that tell it to create an Agent. The network task shouldn't have to know how to do that; it should only broadcast "Somebody better create a red SlugAgent"... but a return value can't store the parameters of the Agent.
* How to split up the Movement, Action and Collision detection? Should Actions stick around until replaced, or be deleted each turn? Right now the only way for an Agent to move is to have a MoveAction sitting in its agent->action pointer that turn. This seems... bad.

Things I know other people will be doing sometime this week:
* Alex: an OpenGL renderer task... and if you get done with that, look into how to render the text for a console? (Oh, and if the collision problem is something obvious, maybe fix that?)
* Andrew: You'll be crying at the feet of Prof. Hildebrand... I'll leave you alone this week.
* Pascal: Could you put the code in CVS? We'll probably need source control... I'll read about how to use it later this week (again, after my test on Weds).

Parts of the code we'll need to work on:

An AgentController class:
accepts vector<float> sensor data, returns vector<float> action commands

The network connection between the client and server:
* Translates the AgentController's output vector into MoveCommand(0.3, 3.0)
* Sends that move command in serialized form to the server
* Server-side: deserializes it
* Figures out which agent moved, and sticks that MoveCommand into agent->action

(That sounds too complicated... we can probably slim down the sequence a bit.)

The agent sensors:
Right now Sensor is an abstract class that takes a World*. We need to make some concrete sensors... a Proximity sensor for all objects would be a good start.

Sensor data transmitter:
Something that takes each agent's sensor data, serializes it, and transmits it to the client, which deserializes it and feeds it into the AgentController.

Again: the way I've been designing things, the classes I've come up with, is by no means official, or even good. It's just what came to mind while coding. If you want to improve the design... please do so; we probably need it.

-Alex
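One possible answer to the "int return values can't store the parameters" question raised above: tasks push small event objects onto a shared queue instead of returning codes, and whichever task knows how to handle an event drains the queue in its Update(). The Event fields below are invented for illustration (nothing in the thread fixes them); this is a sketch, not the project's design.

```cpp
#include <cassert>
#include <queue>
#include <string>

// A broadcast event carrying its own parameters, unlike a bare int.
struct Event {
    std::string type;       // e.g. "CreateAgent" (hypothetical)
    std::string agentKind;  // e.g. "SlugAgent"
    std::string colour;     // e.g. "red"
};

// Shared queue the TaskManager could hand to every task.
class EventQueue {
    std::queue<Event> events;
public:
    void Broadcast(const Event& e) { events.push(e); }
    bool Pending() const { return !events.empty(); }
    Event Next() {
        Event e = events.front();
        events.pop();
        return e;
    }
};
```

The network task would Broadcast() a "CreateAgent" event when the packets arrive; the task that actually owns agent construction consumes it later, so the network code never needs to know how agents are built.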
From: Alex R. <rub...@ui...> - 2004-11-14 10:49:33
Shit, I'm so tired I'm in pain. Soon the sweet relief of sleep. But first:

Download my code:
http://netfiles.uiuc.edu/rubnstyn/shared/
Click on ai-world-0.11.zip

I've been compiling it with gcc 3.3 (makefile included), but I know nothing of getting it into some other dev environment. You do for sure need to link to SDL and SDL_Net... and maybe OpenGL (though I haven't used it yet).

What does this code do, you ask? Wondrous, wondrous things... Well, not really... run aiserver (the more developed of the two binaries). It asks you for a port to listen on, and it does just that: listens for UDP network input on that port, and couts the details of the packets it receives. You used to be able to type in packets to send via aiclient, but I messed with it... the networking does work, though; it just doesn't do anything useful yet. Now, after aiserver starts listening on whatever port you typed, it brings up an SDL window and you'll get to see three precious squiggling circles... Their motion is being driven by a random number generator... but again, the principles work; we just need to fill in the specifics.

Now what doesn't work? Collision detection (in the very pathetic form I have it) is completely busted (claims collisions are happening all over the place). And a good deal of other stuff is busted or buggy... but I'm tired and don't remember what.

Things we need to do: I need you fine folks to go over the design, and we need to commit this beast to a specific blueprint. If it continues to grow organically (as it has in my hands), it will get really ugly. As for specific stuff to write:
* The current renderer is crap; let's get some OpenGL going.
* The ConsoleTask doesn't work... we need to use SDL_Console or SDL_ttf to get something like the Quake drop-down console.
* The CollisionDetection is horrible.
* There is nothing sending world-state back to AgentControllers.
* There really are no AgentControllers... aiclient is a mess and doesn't control anything.
That's only scratching the surface.

I admit it, I am dumping a turd on this project... but it's the only turd we've got. So get curious, look through the code, ask me about it... so we'll all be on the same page, and soon this project will take off.

This email is nonsense. The sun will soon rise... that would be my third sunrise since I last slept. Bad. I won't let its death rays melt me. I'm hiding under 3 blankets and closing my shades. Shit, I've gone crazy.

-Alex
From: Alex R. <rub...@ui...> - 2004-11-07 03:24:59
Ok, here's what I've designed/coded thus far.

The main loop of the program is the TaskManager. It contains a list of tasks, each task associated with how many times per second it ought to execute. Tasks are things like the updater/physics engine, the renderer, networking, user input, etc.

Here is the header for Task (the class all tasks inherit from):

    class Task : public SmartObject {
    private:
        MillisecondType refreshTime;
    public:
        Task(MillisecondType t);
        virtual ~Task();

        virtual MillisecondType GetRefreshTime();
        virtual void SetRefreshTime(MillisecondType refresh);

        virtual void Update(WorldState world, CommandQueue commands);
        virtual void Init();
        virtual void OnPause();
        virtual void OnResume();
        virtual void Shutdown();

        bool running;
        bool canKill;
    };

Don't worry about "SmartObject"; that's just reference counting for memory management. The WorldState is currently a list<Agent>, but will need to be more complicated; the CommandQueue is just that, a queue of commands. Each task can add Commands (changes to the world), but only one task (CommandProcessor) will actually apply these commands to the world.

So! When you design a task... to get basic functionality, just override the Update() function. If you need anything initialized (external libs, buffers, etc.) when the task starts, stick that in the Init() function.

Is this design acceptable to people? Right now only Picel (so I'm posting this for his benefit) is working on a task (Renderer), but hopefully we'll all soon have some piece of ai-world to hack away at.

-Alex

(I haven't explained a lot that I should... soon I'll email the headers for Agent and Command, and things will become clear.)

Also... if you find my coding style annoying, please say so... I'm experimenting with what I capitalize; I'm not sure what results in maximum readability.
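A concrete task under this interface might look like the sketch below. The base class here is a simplified stand-in (SmartObject's reference counting is omitted, and the Agent/Command shapes are guesses, since their headers haven't been posted yet). One deviation worth noting: Update() takes its parameters by reference here, because with the posted by-value signature, queued commands would be lost when the copy is destroyed.

```cpp
#include <cassert>
#include <list>
#include <queue>

// Simplified stand-ins for types used by the posted Task header.
typedef unsigned long MillisecondType;
struct Agent { float x, y; };              // shape is an assumption
typedef std::list<Agent> WorldState;
struct Command { float dx, dy; };          // shape is an assumption
typedef std::queue<Command> CommandQueue;

// Cut-down Task base, keeping only what the example needs.
class Task {
    MillisecondType refreshTime;
public:
    explicit Task(MillisecondType t) : refreshTime(t), running(true), canKill(false) {}
    virtual ~Task() {}
    MillisecondType GetRefreshTime() const { return refreshTime; }
    virtual void Update(WorldState& world, CommandQueue& commands) = 0;
    virtual void Init() {}
    bool running;
    bool canKill;
};

// A concrete task: queues a "drift east" command for every agent.
// Note that it never mutates the world directly; per the design,
// only the CommandProcessor applies queued commands.
class DriftTask : public Task {
public:
    DriftTask() : Task(100) {}  // ask the TaskManager for ~10 updates/sec
    void Update(WorldState& world, CommandQueue& commands) {
        for (WorldState::iterator it = world.begin(); it != world.end(); ++it) {
            Command c = {1.0f, 0.0f};
            commands.push(c);
        }
    }
};
```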
From: Alex R. <rub...@ui...> - 2004-10-22 09:03:38
1) WE'RE NOT DEAD (despite appearances).

2) Current coding: I wrote basic network client/server code that does the following:
Server: wait for connections on PORT, print the contents of packets received.
Client: connect to SERVER_IP:PORT, send out [agent_id, action_id, target_x, target_y].
It uses NET2 (a layer over SDL_Net)... and without all the other parts of ai-world, it is by itself quite useless.

Alex (Picel) is going to write a simple SDL/OpenGL graphics window that renders every object in the world. Andrew is going to write a simple world update function (update the coords of each agent based on its desired action). Once we have these three chunks together, we can quickly tear them apart and write something better. The point is to stop the vagueness and get a better hold on the problem. Hopefully code will be done by Monday. Andrew, Alex and I are going to meet over the weekend and forge a solid design (classes, functions). Pascal: we'll post any fruit of our babbling.

3) Good tutorial on engine design:
http://www.gamedev.net/reference/programming/features/enginuity1/
This covers all the same elements we'll have to worry about... everyone should read this... it'll help us come to a final design plan and start chugging out more code.

-Alex (must sleep... ugh)
From: Alex R. <rub...@ui...> - 2004-09-30 19:35:46
> The body details stuff sounds good! DOCUMENT IT!!

I'll tear into the wiki later today...

> Please tell me more about decoupling the behaviour/control and also
> control system. Do you mean the client app?

I mean the client app (loads body, logs in to server, caches local copy of map, keeps status of all objects on map up to date, figures out sensor readings) ought to be totally separate from the control application. I think we should develop the client app first, leaving only a simple hook-in mechanism for the control app (a DLL mechanism, or maybe an agreed-upon network port to exchange RPC calls?)...

Once we have the client and server running, the control app could be anything that uses that hook-in (accepts sensor data, and sends actions), whether we write it or someone else does.

I don't want to do away with our "modular control system" idea... it would be the default way to design agent intelligence... but it wouldn't be an essential component of ai-world (easily swapped out for something that better suits the user).

-Alex
(should I use Mik...@gm...?)
From: Pascal D. <dn...@gm...> - 2004-09-30 11:33:33
The body details stuff sounds good! DOCUMENT IT!!

Please tell me more about decoupling the behaviour/control and also control system. Do you mean the client app?

Pascal

PS. I now have a GMail address. *titter*
From: Alex R. <rub...@ui...> - 2004-09-29 17:42:23
One more bit: Pascal, for chemo-sensors you described the TYPE as PROXIMITY... but this doesn't specify to what the sensor is sensitive. A subfield is needed that tells the sensor to what objects it ought to respond.

    <AGENT id="CyberSlug">
        <SIZE>10</SIZE>
        <SENSOR id="CHEM_GOOD_RIGHT">
            <TYPE>PROXIMITY</TYPE>
            <SENSITIVITY>EvilPotion</SENSITIVITY>
            <POSITION>0.25</POSITION>
        </SENSOR>
    </AGENT>

In this case the CHEM_GOOD_RIGHT sensor responds to any object carrying the id "EvilPotion". I've (mostly) finished a parser for this file format (using the tinyxml library... cute little thing).

-----

Andrew: What else did we talk about last night that I've forgotten?

-Alex
From: Alex R. <rub...@ui...> - 2004-09-29 17:36:03
Sir Bravealot's Journey to the Dreaded ai-world Mailing List:
"Hello?" [echo: "Hello... hello... hello"]
"Is anybody here?" [demon replies: "Yess... yess... come in my pet, come in..."]

-------------------

Yes, this list has been a dark and lonely place for a week, deprived of its usual Pascal-Alex dialogue. Time to start it up again!

Aleph) Andrew and I were talking last night about the level of detail the physical world model should get into...
Andrew wanted a very discretized world (a tiled grid, where each agent/object takes up one tile, and can move into any neighboring tile).
I was arguing for a continuous space with very detailed body models (in the case of slugs: slugs have a pair of opposing muscles that allow them to move forward by squiggling from side to side).

So I found a decent compromise: http://www.cogs.susx.ac.uk/users/christ/bugworks/
Here the space is continuous (so we can have varying sizes, velocities, etc.), but the body models are simple circles... it works nicely... I like it. Try out the Java simulator on that page... it's fun. I think something like the above would be fantastic for an alpha release. What sayeth the group?

Another point Andrew brought up: we should exclude the behaviour/control system from our work initially... totally decouple it, and leave only a simple mechanism by which any application can hook in and control the agent.

One more goodie: why not have the control system also communicate via TCP/IP, and skip any DLL or IPC headaches?

Server <-> Client <-> Control App

The server keeps track of a global world state, resolves actions, etc. (all the usual).
The client keeps a local copy of the map, figures out sensor readings, and sends them to the Control App (usually running on localhost).
The Control App does the mysterious business of looking smart (err... returning actions, accepting sensor data).

-Alex
From: Pascal D. <pas...@ho...> - 2004-09-18 12:31:57
I've started some stuff for the agents on the site. Going slowly at the moment, as you will see. I've also added some new use cases for the client design. Still needs fleshing out, but I'd like to know what people think so far. Here's some XML for you to chew on.

    <?xml version="1.0"?>
    <ROOT>
        <AGENTS>
            <AGENT id="CyberSlug">
                <SIZE>10</SIZE>
                <SENSOR id="CHEM_GOOD_RIGHT">
                    <TYPE>PROXIMITY</TYPE>
                    <POSITION>0.25</POSITION>
                </SENSOR>
                <SENSOR id="CHEM_BAD_RIGHT">
                    <TYPE>PROXIMITY</TYPE>
                    <POSITION>0.25</POSITION>
                </SENSOR>
                <SENSOR id="TOUCH_RIGHT">
                    <TYPE>TOUCH</TYPE>
                    <POSITION>0.25</POSITION>
                </SENSOR>
                <SENSOR id="CHEM_GOOD_LEFT">
                    <TYPE>PROXIMITY</TYPE>
                    <POSITION>-0.25</POSITION>
                </SENSOR>
                <SENSOR id="CHEM_BAD_LEFT">
                    <TYPE>PROXIMITY</TYPE>
                    <POSITION>-0.25</POSITION>
                </SENSOR>
                <SENSOR id="TOUCH_LEFT">
                    <TYPE>TOUCH</TYPE>
                    <POSITION>-0.25</POSITION>
                </SENSOR>
                <!-- More stuff here -->
            </AGENT>
        </AGENTS>
        <WORLD>
            <OBJECT_LOCATIONS />
            <GLOBAL_ENVIRONMENT_CONSTANTS />
        </WORLD>
    </ROOT>

Hope this looks alright.

Pascal
From: Alex R. <rub...@ui...> - 2004-09-18 03:25:13
Ick... replying to my own email... bad netiquette? Anyway, more about sensors.

I talked to Prof. Gillette today, and he told me a bit about the sensor needs of a slug agent. The real (in the squishy flesh) slugs live in a world of smell. They have no vision, and use smell (chemo-sense) to figure out their surroundings. On top of chemo-sense, they also have touch sensors (slugs sometimes follow each other by feeling out slime trails).

We can model chemicals as point particles, each emitting a "chemical field" in a circle around it (so the strength of a chemical sensor's activation depends on its distance from the chemical emitter).

The minimum sensors an agent must have to imitate basic slug behaviour are as follows:
- 4 chemical sensors, 2 on each side of its head (to discriminate between two chemicals)
- 2 touch sensors, one on each side of its head

Anyone have a draft of what the XML file storing this info ought to look like?

-Alex
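The point-particle chemical field described above might be computed like this. The falloff function is an assumption (nothing in the thread fixes it): here activation is 1.0 on top of the emitter and decays linearly to 0.0 at the edge of the field's radius.

```cpp
#include <cassert>
#include <cmath>

// Sensor activation for one chemical emitter. Linear falloff is a
// placeholder choice; inverse-square or exponential would also work.
float ChemicalActivation(float sensorX, float sensorY,
                         float emitterX, float emitterY,
                         float fieldRadius) {
    float dx = sensorX - emitterX;
    float dy = sensorY - emitterY;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist >= fieldRadius)
        return 0.0f;                      // outside the field: no smell
    return 1.0f - dist / fieldRadius;     // stronger as the slug closes in
}
```

With this shape, the difference between the left and right chemical sensors gives the slug a gradient to steer along, which is all the CyberSlug-style behaviour needs.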
From: Alex R. <rub...@ui...> - 2004-09-17 03:42:45
Sensor types I'm imagining...

Bump sensors - output 1.0 when contacting, 0.0 otherwise... maybe vary the sensor level depending on the force of impact.

Internal state sensors - if agents can "eat" an object such as food or water, then they ought to have a sensor telling them how full they are for that object.

Vision - for 2d worlds (such as we're starting with), an overhead camera with the agent in the middle... once we get into 3d, vision will get tricky.

"Smell" sensors - sensitivity to a specific class of object; sensors activate when the agent is adjacent to that object. For example: in CyberSlug there are two chemicals the slug is sensitive to, one that causes pain and one the slug mistakes for food. In ai-world, particles of each chemical would float through the world, and when encountered by an agent would set off a smell sensor.

Velocity - mmmm... obvious...

Heading - one North-South sensor (scaling from -1.0 to 1.0) and one East-West sensor (also -1.0 to 1.0).

Any others? Maybe some rudiments of sound? (If we want to play with agent-generated languages.) There could be a set of encoded phonemes ("Aa" "Oh" "K" etc.) and... this would get complicated... Ok, maybe ignore sound for now.

What say you, people? (Yes, people plural: Andrew and Alex, please... please...)

This email sucks. So does Differential Equations. Which I have to learn now. That is all.

-Argo the Angry Chipmunk
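The heading sensor pair above falls out of basic trigonometry. The convention here is an assumption (heading in radians, 0 = north, pi/2 = east); whatever convention is picked, cosine and sine give the two components, each naturally in [-1.0, 1.0].

```cpp
#include <cassert>
#include <cmath>

// Two-component heading sensor, as proposed. Assumed convention:
// heading is in radians, 0 = north, pi/2 = east.
void HeadingSensors(float heading, float& northSouth, float& eastWest) {
    northSouth = std::cos(heading);  // +1 facing north, -1 facing south
    eastWest   = std::sin(heading);  // +1 facing east,  -1 facing west
}
```

The nice property of this pair (versus a single angle value) is that it has no wrap-around discontinuity at 360 degrees, which matters if the values feed a neural network.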
From: Pascal D. <pas...@ho...> - 2004-09-14 07:44:40
Looks good, go for it.

Pascal
From: Alex R. <rub...@ui...> - 2004-09-13 21:17:29
Shit, I think I got carried away and started sounding like a marketroid...

> > Uses for:
> > Connectionism - The Agent Designer can be used to
> > construct almost any form of neural network...
> Do you already have some sort of designer? One that does all this?

I have a neural net library that can do Feed Forward and Recurrent networks with backprop or hebbian learning... and I think adding LSTM won't be much of a problem... SOMs and Hopfield might be cumbersome. The code is messy and undocumented, but it works. Getting it into shape might take a week... I'll rephrase all of this stuff into the future tense, keeping it focused strictly on what the complete ai-world will have.

As for where to put all this text... how about the Problem & Solution on the front page (a good intro to the project), with a link to a Features and Uses page. What say ye?

-Alex
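For readers unfamiliar with the Hebbian learning mentioned here, the core rule is tiny: the weight change is proportional to the product of pre- and post-synaptic activity ("neurons that fire together wire together"). The library's actual interface isn't shown in the thread, so this is just the formula, not its code.

```cpp
#include <cassert>
#include <cmath>

// Plain Hebbian weight update: dw = eta * pre * post.
// (Real libraries usually add a decay or normalization term so
// weights don't grow without bound.)
float HebbianUpdate(float weight, float pre, float post, float learningRate) {
    return weight + learningRate * pre * post;
}
```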
From: Pascal D. <pas...@ho...> - 2004-09-13 08:59:43
|
> Problem:
> Researchers and hobbyists implementing multi-agent simulations waste much
> of their time on mundane and uninteresting aspects of their experiments.

---spend a great deal of their time working on parts of their experiment
that are simply not interesting. These are the bits that allow the
experiments to work without contributing to the results, parts such as the
mechanics of running a simulation, allocating resources and mediating
between different agents.

> Our Solution:
> AIWorld is a highly modular and reusable environment within
> which intelligent agents can be designed, tested, and let
> loose to interact with their world. The world can be running
> on the same machine as an agent, or across a network. Since
> agent processing is distributed, large populations of complex
> agents are (for the first time) possible.

I'd avoid saying it's for the first time unless we can prove it for
certain. Say something like "...complex agents are possible for
considerably less effort".

> Uses for:
> Connectionism - The Agent Designer can be used to
> construct almost any form of neural network (Feed Forward,
> Recurrent, Self Organizing Map, Hopfield, ART, LSTM, etc...)
> with a very fine grain of control (learning rate, momentum,
> integration and output functions can be different for every
> neuron). In the future there will also be a plugin for the
> use of spiking (Integrate and Fire) neural networks.

Do you already have some sort of designer? One that does all this?

> Computational Neuroscience - Support is planned for plugins
> connecting to the GENESIS and NEURON neural simulators,
> allowing an agent's behaviour within AIWorld to be controlled
> by a biologically realistic nervous system.

Good.

> Artificial Life - AIWorld will include support for genetic
> algorithms and agent mating behaviours. With a large agent
> population distributed across a network, an ecosystem of
> complicated organisms can develop and evolve.

Also good.

> Traditional Artificial Intelligence - Traditional (logic-based) reasoning
> techniques can be implemented using either an external plugin or the
> AIWorld Scripting Engine.

...developing a plugin using our easy-to-use plugin architecture, the
AIWorld Scripting Engine, or creating a link to another application.

Looks good, is this for the front page?

Pascal
|
From: Alex R. <rub...@ui...> - 2004-09-13 07:32:48
|
What do you folks think of the following text explaining our project?

Problem:
Researchers and hobbyists implementing multi-agent simulations waste much
of their time on mundane and uninteresting aspects of their experiments.

Our Solution:
AIWorld is a highly modular and reusable environment within
which intelligent agents can be designed, tested, and let
loose to interact with their world. The world can be running
on the same machine as an agent, or across a network. Since
agent processing is distributed, large populations of complex
agents are (for the first time) possible.

Uses for:
Connectionism - The Agent Designer can be used to
construct almost any form of neural network (Feed Forward,
Recurrent, Self Organizing Map, Hopfield, ART, LSTM, etc...)
with a very fine grain of control (learning rate, momentum,
integration and output functions can be different for every
neuron). In the future there will also be a plugin for the
use of spiking (Integrate and Fire) neural networks.

Computational Neuroscience - Support is planned for plugins
connecting to the GENESIS and NEURON neural simulators,
allowing an agent's behaviour within AIWorld to be controlled
by a biologically realistic nervous system.

Artificial Life - AIWorld will include support for genetic
algorithms and agent mating behaviours. With a large agent
population distributed across a network, an ecosystem of
complicated organisms can develop and evolve.

Traditional Artificial Intelligence - Traditional (logic-based) reasoning
techniques can be implemented using either an external plugin or the
AIWorld Scripting Engine.
|
From: Pascal D. <pas...@ho...> - 2004-09-10 15:19:51
|
Looks good :) Would you be able to make it a bit smaller so we can embed
it in a page more conveniently? Should be quite useful.

Also, would you be able to have a look at the template file for the site
and see if you can persuade it to format lists? I've tried some small
mods to the client design but it's quite hard to define stuff properly
without the right list types. Either that or just transfer the sf.net
logo to the original template and use that. Don't worry if you can't -
this is strictly an "if you can be bothered" request.

Pascal

> Put up a new ai-world component diagram...
>
> http://ai-world.sourceforge.net/images/aiWorldDiag2.png
>
> Does that look better?
>
> -Alex
|
From: Alex R. <rub...@ui...> - 2004-09-10 09:04:44
|
Put up a new ai-world component diagram... http://ai-world.sourceforge.net/images/aiWorldDiag2.png Does that look better? -Alex |
From: Pascal D. <pas...@ho...> - 2004-09-08 14:37:02
|
> I'm OK with all of this except the "location at time t" bit --- ???
> Location calculated in the motion module? Isn't the motion module part
> of the agent? Location is a property of the world...to be calculated by
> the server as a result of an agent action...
>
> Also...I don't think this should be a hard split...but rather something
> optional. What if someone's AI consisted of layers of reflexes (like the
> subsumption architecture for BEAM robots)...then there is no abstract
> output...but definite motor commands.

Agreed, the movement module is expendable. I think I used the idea mostly
to illustrate but got into too much detail. I just like the idea of being
able to break the agent down into discrete blocks. I think that will be
quite useful. We can decide on actual modules when we get to designing
individual agents.

I'll try and put some more detail in the wiki tomorrow too.

Have fun.

Pascal
|
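The intelligence/movement split discussed in the message above (abstract
direction-and-speed signals in, physics out, with the server owning the
resulting location) could be sketched roughly like this in C++. All names
here (Signal, Body, stepMovement) are hypothetical illustrations, not part
of the actual AIWorld codebase:

```cpp
#include <cassert>
#include <cmath>

// Abstract output of a hypothetical intelligence module:
// desired heading and speed, nothing motor-specific.
struct Signal {
    float desiredSpeed;    // world units / second
    float desiredHeading;  // radians
};

// Physical state owned by the world/server, not by the agent.
struct Body {
    float x, y;      // position
    float speed;     // current speed
    float heading;   // current heading
};

// A toy movement module: turns an abstract signal into a physical
// update, clamping acceleration so speed can't change instantly.
void stepMovement(Body& b, const Signal& s, float dt, float maxAccel) {
    float dv = s.desiredSpeed - b.speed;
    float limit = maxAccel * dt;
    if (dv >  limit) dv =  limit;   // clamp acceleration
    if (dv < -limit) dv = -limit;   // clamp deceleration
    b.speed  += dv;
    b.heading = s.desiredHeading;   // instant turning, for simplicity
    b.x += std::cos(b.heading) * b.speed * dt;
    b.y += std::sin(b.heading) * b.speed * dt;
}
```

The point of the split is that the same intelligence module could drive
different bodies simply by swapping in a different movement module.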
From: Alex R. <rub...@ui...> - 2004-09-08 14:18:21
|
>That's more what I was thinking. The intelligence module will be concerned
>with giving signals like direction and speed (and mode if we have this
>hypothetical horse mod) and the movement module will deal with the physics
>of "given signal x for the desired speed, y for direction (, z for mode),
>previous conditions, location at time t is calculated to be there".

I'm OK with all of this except the "location at time t" bit --- ???
Location calculated in the motion module? Isn't the motion module part of
the agent? Location is a property of the world...to be calculated by the
server as a result of an agent action...

Also...I don't think this should be a hard split...but rather something
optional. What if someone's AI consisted of layers of reflexes (like the
subsumption architecture for BEAM robots)...then there is no abstract
output...but definite motor commands.

>Inside the plugin new classes will be defined, both CModule and CAgent
>derived classes. The client app will detect the presence of the plugin
>(just having the app check a certain directory for dlls), do the required
>DLL stuff and call a function in the dll that returns a list of strings
>(or some other data type) that describes what classes are contained in
>that plugin. The developer will know what classes there are in the plugin
>(if not, why is the plugin being used?) and put the required entries in
>the world description that will ask for agents made up of certain modules.
>For the modules that are described in the plugin, the client app will call
>some plugin function --void *GetPluginObject("module or agent name
>here")-- which will return a pointer to an object of the requested class.

Ok....now I understand. I guess the confusion was due to my ignorance of
how plugins are written. I imagined a plugin with only one function:
Action GetSensors( sensor data )

>A developer will want to create his own intelligence module but this will
>be unique for every type of agent. Some may want to connect to another app
>but not all apps can be used in the same way. The developer may also want
>to use a new type of neural network that doesn't exist in the standard
>components provided by us. He will create the intelligence module class
>and put it in a plugin, remembering to add the descriptions to the code
>that sends information to the client on request. This will allow the
>developer to have extended functionality without having to recompile the
>client app.

Ok...so CModule (the superclass for all modules) is a black box with a
labeled set of input/output pins, which connect to the input/output pins
of other modules. This totally fits into what me and Alex talked about for
an AI designer/editor...

>> Alex, did you make any progress with that editor?
>???? I'm intrigued. Is this like a windows wizard for AI? Could we use it
>as a tool for creating custom intelligence modules?

More like LogicWorks (http://w3.msi.vxu.se/multimedia/rapporter/
HBinEngEdu.pic/Fig8.gif) with less cryptic labeling and prettier
buttons...and now that you mention it, a wizard on top of that would be
useful for the clueless. But yes, definitely, definitely for creating
custom modules.

I've already got a neural net engine where each neuron has a variable
Integration function (Sum, Product, FUZZY_AND, FUZZY_OR) and a variable
Transfer function (Sigmoid, Hyperbolic, Hard Threshold etc...)...neurons
can be strung together into any network topology, which the user could
stick inside a module and use as needed.

Furthermore...I've been drafting a simple scripting host to allow even
further customization. I looked into Lua for scripting...but even that is
too big/heavy for the simple purposes of scripting module behavior...

>It's a good starting point but I would like to have proper real-time
>support eventually. There's not really any substitute when you get down
>to it, but what you describe will carry us quite far so I wouldn't worry
>about it for a while.

Cool, agreed.

(hey! I think we're almost settled on a design...I'll add some more to the
wiki later tonight...reading wx tutorials, soon the coding can begin in
earnest...)

-Alex
|
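The variable Integration/Transfer neuron engine Alex describes in the
message above could look something like this minimal sketch. The names and
signatures are illustrative guesses, not the actual AIWorld neural net
library:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <numeric>
#include <vector>

// A neuron whose Integration and Transfer functions are both
// swappable at construction time, per the design in the thread.
using Integrate = std::function<float(const std::vector<float>&)>;
using Transfer  = std::function<float(float)>;

// Two of the integration functions mentioned (Sum, Product)...
float sumIntegrate(const std::vector<float>& in) {
    return std::accumulate(in.begin(), in.end(), 0.0f);
}
float productIntegrate(const std::vector<float>& in) {
    return std::accumulate(in.begin(), in.end(), 1.0f,
                           [](float a, float b) { return a * b; });
}

// ...and two of the transfer functions (Sigmoid, Hard Threshold).
float sigmoid(float x)       { return 1.0f / (1.0f + std::exp(-x)); }
float hardThreshold(float x) { return x >= 0.0f ? 1.0f : 0.0f; }

struct Neuron {
    Integrate integrate;
    Transfer  transfer;
    // Collapse the inputs, then shape the result.
    float fire(const std::vector<float>& inputs) const {
        return transfer(integrate(inputs));
    }
};
```

Because each neuron carries its own pair of functions, arbitrary mixes
(e.g. a FUZZY_AND layer feeding sigmoid units) fall out of the same
structure, which is what makes the per-neuron control described above
possible.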
From: Pascal D. <pas...@ho...> - 2004-09-08 08:44:43
|
> Cool, agreed. (except for the XML specifying how many to create...think
> it ought to be the user, or world specified...but it's a minor point).

Yes, XML for world description. (Oopsy - my fault, should have been
clearer)

> This sounds like the "movement component" is an extension to any
> intelligence component...like two clumps of neurons in the brain, one
> generates a signal to RUN_AWAY, the second accepts that signal and
> converts it into detailed motor-neuron signals.
>
> The movement component implies a split between the decision-making
> portions of the agent's "nervous system" and the supportive components.
>
> Mmmm...maybe the supportive components should come packaged with a body
> description (since they will be very specific to what motors a body
> has)...giving the decision-making freedom to output either general
> signals (EAT, RUN, etc..) or motor-specific signals (increase left wheel
> velocity by 10 world units/second)

That's more what I was thinking. The intelligence module will be concerned
with giving signals like direction and speed (and mode if we have this
hypothetical horse mod) and the movement module will deal with the physics
of "given signal x for the desired speed, y for direction (, z for mode),
previous conditions, location at time t is calculated to be there".

> (Could we call these modules? I like that name more than component...)

Good idea, less confusing. Consider the term changed.

> This confused me a bit...tells a client "what is inside"...inside of
> what? Returns pointers to what objects?

Inside the plugin new classes will be defined, both CModule and CAgent
derived classes. The client app will detect the presence of the plugin
(just having the app check a certain directory for dlls), do the required
DLL stuff and call a function in the dll that returns a list of strings
(or some other data type) that describes what classes are contained in
that plugin. The developer will know what classes there are in the plugin
(if not, why is the plugin being used?) and put the required entries in
the world description that will ask for agents made up of certain modules.
For the modules that are described in the plugin, the client app will call
some plugin function --void *GetPluginObject("module or agent name
here")-- which will return a pointer to an object of the requested class.

> I agree that developers ought to be able to use something other than our
> code for the control of an agent...but it seems that you want plugins to
> do more than that...though I'm not sure what....

A developer will want to create his own intelligence module but this will
be unique for every type of agent. Some may want to connect to another app
but not all apps can be used in the same way. The developer may also want
to use a new type of neural network that doesn't exist in the standard
components provided by us. He will create the intelligence module class
and put it in a plugin, remembering to add the descriptions to the code
that sends information to the client on request. This will allow the
developer to have extended functionality without having to recompile the
client app.

> Picel and I drew up a pretty good model for module-based intelligence
> (where each portion of the intelligence can be designed separately,
> using a variety of neural networks or scripting)...Maybe we can
> integrate AsyncNN with that later.

Don't worry about AsyncNN, I was just using it as an example. I'll
probably get around to writing an intelligence module for it some time.

> Alex, did you make any progress with that editor?

???? I'm intrigued. Is this like a windows wizard for AI? Could we use it
as a tool for creating custom intelligence modules?

> Mmmm...how does this sound:
> - 1 time step = x real-world milliseconds
> - World can switch between waiting for a READY signal from all agents or
>   ignoring their ready states and plodding along in "real time".
> - variable tile size
>
> So a "realistic simulation" will have tiny time steps (in which very
> little physical action can be accomplished), tiny tiles (giving the
> world a continuous feel), and will ignore agent ready states (so they
> will operate asynchronously).
>
> In contrast, a very discretized simulation like CyberSlug will have
> large time steps (.1 of a second maybe...during which major actions can
> be performed), large tiles (so the world is a visible grid like it is
> in CyberSlug), and in the case of a multiagent simulation, the world
> waits for all agents before advancing a time step.
>
> What do you think?

It's a good starting point but I would like to have proper real-time
support eventually. There's not really any substitute when you get down to
it, but what you describe will carry us quite far so I wouldn't worry
about it for a while.

Pascal
|
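The two stepping modes proposed in the quoted list above (the world waits
for a READY signal from every agent, versus plodding along in "real time")
could be sketched like this. AgentStub, StepMode, and tryAdvance are made-up
names for illustration only:

```cpp
#include <cassert>
#include <vector>

// Stand-in for a connected agent; only its readiness matters here.
struct AgentStub {
    bool ready = false;
};

enum class StepMode { WaitForReady, RealTime };

// Returns true if the world advanced one time step.
// In WaitForReady mode the step is refused until every agent has
// signalled READY; in RealTime mode the world advances regardless.
bool tryAdvance(std::vector<AgentStub>& agents, StepMode mode) {
    if (mode == StepMode::WaitForReady) {
        for (const AgentStub& a : agents)
            if (!a.ready) return false;       // somebody is still thinking
    }
    for (AgentStub& a : agents) a.ready = false;  // consume READY flags
    return true;
}
```

A real implementation would also tie each step to the "x real-world
milliseconds" clock from the list, but the mode switch itself reduces to
this single check.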