Unified Information Model
A unified data model specifies a canonical atomic unit (like the unit integer) and an abstract grouping construct in which these atomic units can be arranged. By themselves, these two can express arbitrary levels of data complexity. Add the ability to apply names to these levels, and you have a complete data model for a happy programming environment. Think of it as OOPv2. You heard it here first.
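The three ingredients above can be illustrated concretely. This is a minimal sketch, assuming the atomic unit is modeled as an integer restricted to 0/1, the grouping construct as a tuple, and naming as a dictionary entry; none of these representation choices are prescribed by the model itself.

```python
# Hypothetical rendering of the unified data model's three ingredients.

Atom = int  # the canonical atomic unit; restricted to 0 or 1 in this sketch

def group(*members):
    """The abstract grouping construct: arranges atoms or other groups."""
    return tuple(members)

# Arbitrary levels of complexity arise from atoms and groups alone:
byte = group(0, 1, 1, 0, 1, 0, 0, 1)
pair = group(byte, group(1, 0))

# Applying names to levels completes the data model:
names = {"byte": byte, "pair": pair}

print(names["pair"][0] is byte)  # named levels compose by reference -> True
```

Note that "byte" and "pair" here are just named levels of grouping, not machine types; that is the point of the model.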
We're going to be developing some cutting-edge computer science along with novel visualization models. This work involves the invention of a fractal graph sometimes referred to as a "unified object model" because a fractal graph, I argue, can represent every part of the physical world at every scale. Everything in the world can be related with a [mathematical] graph. It is a most flexible data structure, the world-wide-web being one example. But by itself it doesn't encode any sense of scale (hence the need to define the atomic unit).
Note that everything in a computer is a series of these "atomic" bits, organized by the machine into "words" (merely for the sake of the efficiency that such parallelization affords), yet we have these human-language constructs such as lists and sets (or records, files, arrays, etc.) where *no such things exist in the computer*. Hence the usefulness of considering a unified data model as part of the computer science.
A unified object model shouldn't be confused with similar ideas, like in Python, where everything is derived from "object". The point is actually rather subtle and has to do with where the abstract conceptual space is merged with the actual physical reality of machine states.
In any case, one never operates on or concerns oneself with copies of atomic elements, because they are all the same. It's a subtle, meta-philosophical(?) point, not that different from the one that arises in the science of physics regarding electrons and protons ("Is there only one electron?").
Along with the graph and a probabilistic model for interacting with the vast amount of information that will be developed, there are two main *forces* which create the physics. Together they comprise the voting model: user-ranking and per-item voting. With multi-scale voting, I contend the resulting flow network is provably optimal for self-organized collaboration.
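One way the two forces might combine is for each per-item vote to be scaled by the rank of the user casting it. This is a speculative sketch under that assumption; the text does not specify a weighting scheme, and all names and numbers below are illustrative.

```python
# Hypothetical combination of the two forces: per-item votes weighted
# by user rank. The multiplication rule is an assumption, not the spec.

user_rank = {"alice": 0.9, "bob": 0.4, "carol": 0.7}  # ranks in [0, 1]

# per-item votes: item -> list of (user, vote), with vote in {-1, +1}
votes = {
    "node42": [("alice", +1), ("bob", -1), ("carol", +1)],
}

def item_score(item):
    """Each vote contributes in proportion to the voter's rank."""
    return sum(user_rank[user] * vote for user, vote in votes[item])

score = item_score("node42")  # 0.9 - 0.4 + 0.7
```

Under this rule, a vote from a highly ranked user moves an item's score more than the same vote from a low-ranked one, which is the intended interplay of the two forces.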
So, we'll be creating a fractal graph in which space and time are derived purely through relationship to dominant "centers of gravity".
With this project, each node in the graph can be a graph itself, thereby embedding a nested graph structure. I call this data structure a Fractal Graph. I contend that it is the first generic, maximally abstract, fractal data structure and that it can represent every complex relationship of the natural world. Here, the graph will be naturally employed in a knowledge and people-networking platform.
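The nesting idea can be sketched directly: a node optionally carries an inner graph, so the structure recurses to arbitrary depth. This is a hedged illustration; the class names (`Node`, `FractalGraph`) and the example labels are mine, not from the text.

```python
# A minimal sketch of a Fractal Graph: every node may itself contain a graph.

class Node:
    def __init__(self, label, inner=None):
        self.label = label
        self.inner = inner  # optionally another FractalGraph

class FractalGraph:
    def __init__(self):
        self.nodes = {}      # label -> Node
        self.edges = set()   # set of (label, label) pairs

    def add_node(self, label, inner=None):
        node = Node(label, inner)
        self.nodes[label] = node
        return node

    def add_edge(self, a, b):
        self.edges.add((a, b))

# A city-scale graph nested inside a country-scale node:
cities = FractalGraph()
cities.add_node("Lyon")
cities.add_node("Paris")
cities.add_edge("Lyon", "Paris")

world = FractalGraph()
world.add_node("France", inner=cities)
world.add_node("Spain")
world.add_edge("France", "Spain")

print(world.nodes["France"].inner.edges)  # {('Lyon', 'Paris')}
```

Because the inner structure is itself a full graph, the same operations apply at every scale, which is what makes the structure fractal rather than merely hierarchical.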
STUB Orthogonal to nodes, edges, and the activity thereon, there are schemas or "templates". These are procedures that describe how to (auto-)organize the nodes and edges for, e.g., an incoming email stream. Also known as interfaces.
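On one reading of the stub above, a template is simply a procedure that maps an incoming item onto nodes and edges. A minimal sketch under that assumption, with a plain-dict graph and illustrative email fields (the function name and field names are hypothetical):

```python
# Hypothetical "template" for an incoming email stream: a procedure
# that auto-organizes each email into nodes and edges of a graph.

def email_template(graph, email):
    """Map one email onto nodes and edges (fields are assumptions)."""
    nodes = graph.setdefault("nodes", set())
    edges = graph.setdefault("edges", set())
    nodes.update({email["from"], email["to"], email["subject"]})
    edges.add((email["from"], email["subject"]))  # sender -> message
    edges.add((email["subject"], email["to"]))    # message -> recipient

graph = {}
email_template(graph, {"from": "ada", "to": "carl",
                       "subject": "re: fractal graphs"})
print(sorted(graph["edges"]))
```

The same pattern would apply to any stream: the template encodes the organizing convention once, and the graph grows by applying it per item.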
...It should be noted that node creation represents an enormous injection of information. Given that we're offering a blank slate, anything that appears out of the Apeiron represents an extraction from the boundless. Coupled with the knowledge that it comes from a self-aware individual (and so is non-arbitrary), its value should be considered immeasurably large, something just one bit less than infinity. Likewise, to form a relationship to that node is an act of supreme novelty, akin to two ships passing each other on a vast ocean.