3D Desktop GUI (GLOGO-1)
A new UI system concept by Brian Chabot
Like most computer geeks, I was enthralled by authors such as Gibson and
Stephenson. Since the mid-1980s I've watched the advancement of
technology. By the mid-1990s VR was right around the corner and we
could enter the Matrix. Then the bottom fell out and the tech boom went
bust. Since then, processor and graphics technology has continued to
advance, but the whole concept of VR seems to have been scrapped; the
technology that made it so clunky in 1996 has kept growing and
improving, yet user interfaces have seen no major improvement.
The goal of this project is to make 3D user environments both available
and inexpensive so as to finally realize the dream of virtual reality
and to make computing experiences as exciting as they were when the
Internet first became popular.
Computer to run it all
HMD with head tracker
3D pointing Device (6DOF? Space-tracking mouse?)
Second input device (Nostromo? Slider? Second pointer?)
Window Manager for X
3D world and object creation
We live in a 3D world. The goal here is to make a usable 3DGUI which
puts the third dimension in the concept of a computer desktop. The user
will have a resizable 3D workspace which will contain objects such as 2D
windows, 3D icons, etc., which can be arranged within this space. This
will enable computer users to work with the tools on their computers in
a way that more closely mimics how the world around them operates and
how the mind works, and thus to work more efficiently.
The idea here is to design a standard for the technology, implement it
so that it's affordable, and possibly/probably make a few bucks selling
the customized hardware... kind of like getting an IBM brand keyboard
circa 1985 or a Sun branded mouse circa 1989...
Locate/Modify/Create HW devices needed.
Create a basic interface (I/O, Workspace, V-Room)
Add sample objects, layouts, etc. location-based (V-Room) and
relative to the user (Workspace).
Expand to multiple workspaces based on linked locations, objects,
or actions (V-World)
Continue expansion to create a networked series of worlds and rooms.
Add multi-user support
The idea is that the user would don the HMD and use pointing and
navigation input devices. As s/he looks in various directions, the view
pans accordingly. One controller is used to manipulate objects and the
other is used to navigate. These could be combined into one or more
multifunction input devices. The HMD should ideally be optional, as it
would be nice to also be able to use a regular monitor for those who
don't have or can't afford a set of goggles.
Single Multifunction Input Device (MFID)
My idea was to make the input into a trackable ring that is worn near
the tip of the index finger. It would have one or more buttons which
could be pressed by the thumb. No button pressed means pointing; the top
button is select (like grabbing an object with thumb and finger); a side
or angled button is for navigation (like grabbing a joystick); and a
lower button would be an option button (like the middle-button default
in Enlightenment).
One hand would use the MFID above while either the other hand or perhaps
foot pedals would control movement within a workspace/world. With dual
inputs, the user could move and work with relative-location objects.
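The button scheme above can be sketched as a simple mode lookup. This is
a hypothetical illustration only: the button names, the Mode values, and
the priority order (select over navigate over option) are my assumptions,
not part of any existing driver.

```python
# Hypothetical sketch of the MFID button-to-mode mapping described above.
# Button names, Mode values, and priority order are assumptions.
from enum import Enum, auto

class Mode(Enum):
    POINT = auto()     # no buttons pressed: just pointing
    SELECT = auto()    # top button: "pinch" an object with thumb and finger
    NAVIGATE = auto()  # side/angled button: "grab the joystick"
    OPTION = auto()    # lower button: option/context action

def mfid_mode(top: bool, side: bool, lower: bool) -> Mode:
    """Resolve the current MFID mode from raw button states."""
    if top:
        return Mode.SELECT
    if side:
        return Mode.NAVIGATE
    if lower:
        return Mode.OPTION
    return Mode.POINT
```

A real device would debounce the buttons and stream the mode alongside
the ring's tracked position.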
Throwing your keyboard away
As part of the tracking in dual MFIDs, it would conceivably be possible
to include a virtual keyboard either suspended where the user wants it
or creating a sort of augmented reality on the desk the user sits at.
File Formats and Protocols
The idea here is to create a way that multiple users can interact in the
same V-Room. The Workspace will be relative to the user and not seen by
other users. This Workspace would include menus, readouts, tools, etc.
that each user could manipulate. It would be a lot like a cockpit with
a HUD. Servers would send the V-World file and the current V-Room file
to each user on connect and/or patches to the same if the user already
has a copy. This is not unlike connecting to a custom level in an
online FPS game. Once the V-Room is transferred or patched, the user is
connected and a data stream is established with the server to track
where everyone is located and what they are doing. This stream should
involve minimal bandwidth: coordinates, facing direction, avatar,
actions, etc., where the avatar would already have been transferred upon
connection. There will need to be a set of common file formats for
V-Worlds, V-Rooms, Objects, Avatars, and actions with a 2-way
communications protocol to update these as needed. My proposal is to
use something standard and easily rendered which can be patched.
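To give a feel for how small those stream updates could be, here is a
sketch of one possible packet layout. The field set and byte layout are
purely illustrative assumptions; an actual protocol standard would pin
these down.

```python
# Minimal sketch of a low-bandwidth state-update packet, assuming the
# server already holds each user's avatar. Field layout is illustrative.
import struct

# little-endian: user id (u32), x/y/z position (3 x f32),
# yaw/pitch facing (2 x f32), action code (u16) = 26 bytes total
STATE_FMT = "<I3f2fH"

def pack_state(user_id, pos, yaw, pitch, action):
    x, y, z = pos
    return struct.pack(STATE_FMT, user_id, x, y, z, yaw, pitch, action)

def unpack_state(data):
    user_id, x, y, z, yaw, pitch, action = struct.unpack(STATE_FMT, data)
    return user_id, (x, y, z), yaw, pitch, action
```

At 26 bytes per update, even a dial-up connection of the era could carry
many users' positions several times per second.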
Perhaps the worlds may be best rendered in VRML97 or something similarly
available. I would propose some URI format such as
"vroom://world.server.com/someworld/enter.vrm". The desktop would
connect to the server. The server would list the files needed and their
current versions. The client would then tell the server which files it
already has and the versions. The server would then send the
appropriate files or patches. When it's ready, the client would then
establish the stream and the client's Avatar would enter the V-Room.
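The connect-time negotiation above can be sketched as follows. The
message shapes (a server manifest and a client cache, both mapping
filenames to versions) and the vroom:// parsing are my assumptions for
illustration, not a defined protocol.

```python
# Sketch of the connect-time file negotiation described above.
# Manifest/cache shapes and vroom:// handling are assumptions.
from urllib.parse import urlparse

def parse_vroom_uri(uri):
    """Split a vroom:// URI into (host, path)."""
    parts = urlparse(uri)
    assert parts.scheme == "vroom"
    return parts.netloc, parts.path

def files_to_request(server_manifest, client_cache):
    """server_manifest: {filename: version} the V-Room needs.
    client_cache: {filename: version} the client already has.
    Returns (full_downloads, patches) per the scheme described above."""
    full, patches = [], []
    for name, version in server_manifest.items():
        have = client_cache.get(name)
        if have is None:
            full.append(name)                      # never seen: whole file
        elif have != version:
            patches.append((name, have, version))  # stale: send a patch
    return full, patches
```

Once both lists are empty, the client is up to date and the live state
stream can begin.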
The Desktop would work as a V-World server locally with the user's
desktop as the V-Room. 2D windows would be objects in the workspace and
it would be nice to have a 2D HUD for some sort of a console readout.
When connecting to outside V-Worlds/V-Rooms, objects would be made to
serve as links, or the URI could be typed into a menu option or console.
The level of realism in a V-Room should be established prior to
connection. The proposal of standards would be something like this:
WorldLevel 0: There are no laws of physics. No gravity, no collisions.
WorldLevel 1: Collision detection/avoidance only.
WorldLevel 2: Collision detection and limited gravity. (Low-G, and/or
only Avatars affected)
WorldLevel 3: Normal physics for objects and avatars (override possible
with permissions; links need doors), no other restrictions. (Fantasy world)
WorldLevel 4: As 3, but no overrides. (Basic mimic of physical reality)
WorldLevel 5: Realistic Physical Simulacrum.
AvatarLevel 0: No restrictions on Avatar definitions.
AvatarLevel 1: Size Restricted. (Must be defined.)
AvatarLevel 2: Local Manipulation only (no fireballs, lightning bolts, etc.)
AvatarLevel 3: Gravity Restriction (None, Fantasy, Real)
AvatarLevel 4: Humanoid mandated (no fantasy creatures, blobs, boxes, etc.)
AvatarLevel 5: ShapeShift Restrictions (can't change the basic look of
the Avatar inside a V-Room)
The WorldLevel is a single option, whereas the AvatarLevels are not
mutually exclusive and may be combined. This is to maintain some sense
of continuity within each environment and to establish security to
prevent invisible Avatars, etc.
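That distinction (one WorldLevel, combinable AvatarLevels) maps naturally
onto an enum plus a set of flags. A minimal sketch, with names I've made
up from the descriptions above:

```python
# Sketch of the proposed levels: WorldLevel is a single value, while the
# AvatarLevel restrictions combine as flags. Names are my assumptions.
from enum import IntEnum, Flag, auto

class WorldLevel(IntEnum):
    NO_PHYSICS = 0       # no gravity, no collisions
    COLLISION = 1        # collision detection/avoidance only
    LIMITED_GRAVITY = 2  # low-G and/or only avatars affected
    FANTASY = 3          # normal physics, overrides with permissions
    REAL_BASIC = 4       # as 3, but no overrides
    SIMULACRUM = 5       # realistic physical simulacrum

class AvatarRestriction(Flag):
    NONE = 0
    SIZE = auto()                # size must be defined
    LOCAL_MANIPULATION = auto()  # no fireballs, lightning bolts, etc.
    GRAVITY = auto()             # none / fantasy / real
    HUMANOID = auto()            # no fantasy creatures, blobs, boxes
    NO_SHAPESHIFT = auto()       # basic look fixed inside the V-Room

# A strict "real-world" room might advertise:
strict_room = (WorldLevel.REAL_BASIC,
               AvatarRestriction.SIZE | AvatarRestriction.HUMANOID
               | AvatarRestriction.NO_SHAPESHIFT)
```

A server would publish this pair before connection so clients can accept
or refuse the rules up front.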
MFID: Multi-Function Input Device. A 3D pointing device, worn like a
glove or ring.
V-World: Virtual World. The low-resolution characteristics of your area
of observation from within a V-Room.
V-Room: Virtual Room. The high-res immediate environment around the
user. Roughly equivalent to an old VRML .wrl file.
Workspace: The area around the user within immediate reach, represented by
objects only that specific user can see, such as menus, files, command
sensors, displays, etc.
Avatar: Same as it always has been. The representation of a user as seen
by other users.
Related Projects and other links
I'd like to work with http://www.cwonline.com if they're interested.
The same goes for http://www.microoptical.net/
http://www.3dwm.org looks wicked cool, but hasn't been worked on in a while.
http://fmc.sourceforge.net/ is another idea I'd like to look into.
http://sourceforge.net/projects/layer3web/ Nice idea, no code.
http://www.blaxxun.com has wonderful products, but ditched Linux support
http://www.essentialreality.com/ makes a very inexpensive USB controller
glove. They have an SDK that includes both Linux and Windows drivers.