From: Matthew P. <mat...@nc...> - 2008-01-25 13:37:39
On Friday 25 January 2008, Ryan Brinkman wrote:
> > Just to start you off: In my case there are intentions to perform
> > actions fulfilling the content of the phrase and to accept the
> > obligations consequent on uttering the phrase.
> > (There are similar sorts of intentions involved e.g. in protocol
> > applications. If the protocol requires that I feed the mouse with
> > rice, and my brother, for a joke, leaves rice in the same room with
> > the mouse, my brother is not performing a protocol application.)
> > BS
>
> Assuming the manner in which the rice was left followed the requirements
> laid out in the protocol, and your brother did not introduce any
> potentially confounding conditions (e.g., the protocol did not specify
> that it was BS specifically who had to feed the mice, the amount and
> condition of the rice were correct, he didn't jump around the room),
> then from an experimental standpoint were not the requirements of the
> protocol realized, even if that was not your brother's intent? I believe
> this is what matters to those who would use OBI, since operationally I
> would not handle the results of the experiment any differently than if
> you had fed the mice (again, since there was a proper application of the
> protocol, even if that was not the intent).

Or, to say essentially the same thing differently: in the one case there
was, in the agent that facilitated the action, a plan to feed the mice
according to experimental requirements. In the other case there was not.
In the first case, the mouse-feeding activity is a realization of the
plan; in the second case it is not. In both cases, the physical activity
of mouse-feeding is exactly the kind required by the experiment. We
should not conflate these two features. If you conflate the intent (that
an agent realized a plan through an activity) with the activity itself,
you naturally get promiscuous multiple inheritance on the activity side.
Any given physical activity could be the outcome of no plan being
followed, or of any one of a large number of plans, possibly but not
necessarily with the plans falling into some sort of hierarchy.

Plan: make a cup of tea
Activity 1: pour boiling water into cup; add teabag; stew; remove
teabag; add milk; add 2 sugars
Activity 2: pour milk into cup; add sugar; add tea brewed in teapot
Agent 1: me
Agent 2: tea-making robot, non-sentient

So, there are two physical activities that result in a cup of tea.
Either I or my tea robot can participate in these activities. If I make
the tea, I am realizing the plan to make tea. If the robot does, it is
not (by analogy to the earlier parrot). Hence, we have 4 combinations
here, which by the BS logic would give rise to 4 universals of activity,
and clearly 2 super-types of these, one co-ordinated by realization, the
other by base activity.

I take from this example that whether a physical activity is the
realization of a plan or not is not a good basis for categorising it,
much as categorising balloons by whether or not they are red is not a
good basis for categorising them. Whether something is a realization of
a plan is, IMHO, very clearly and naturally a defined class. For
deciding what the consequences are in the real world, this information
is eclipsed by what actually happened.

> Ryan

Matthew
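To make the combinatorics concrete, here is a minimal sketch (the class
and attribute names are my own illustration, not OBI or BFO terms) of
the alternative I am arguing for: classify activities only by what
physically happened, and record plan realization as a relation on the
particular activity. The four agent/activity situations then still need
only two activity kinds, instead of four activity universals.

```python
# Illustrative sketch only: model the physical activity and the plan as
# separate entities, with "realizes" as a link between a particular
# activity and a plan, rather than minting a new activity subtype for
# every (activity kind, plan) combination.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    description: str            # e.g. "make a cup of tea"

@dataclass
class Agent:
    name: str
    sentient: bool              # the tea robot is non-sentient

@dataclass
class Activity:
    kind: str                   # classified by what physically happened
    agent: Agent
    realizes: Optional[Plan] = None   # set only when the agent held the plan

tea_plan = Plan("make a cup of tea")
me = Agent("Matthew", sentient=True)
robot = Agent("tea-making robot", sentient=False)

# Two activity kinds x two agents = four situations, but realization
# stays a property of the particular activity, not a subtype of it.
a1 = Activity("water-first tea making", me, realizes=tea_plan)
a2 = Activity("water-first tea making", robot)   # same kind, no plan realized
a3 = Activity("milk-first tea making", me, realizes=tea_plan)
a4 = Activity("milk-first tea making", robot)

activity_kinds = {a.kind for a in (a1, a2, a3, a4)}
print(len(activity_kinds))      # 2 activity kinds, not 4 universals
print(a1.realizes is tea_plan)  # True: my activity realizes the plan
print(a4.realizes is None)      # True: the robot's does not
```

The point of the sketch is only that "is a realization of a plan"
behaves like a defined class over activities, exactly as argued above:
the consequences in the world are determined by the activity kind, and
the realization link can be queried separately when intent matters.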