From: Matthew P. <mat...@nc...> - 2008-01-25 14:44:00
On Friday 25 January 2008, Smith, Barry wrote:
> At 08:37 AM 1/25/2008, Matthew Pocock wrote:
> > I am assuming that the protocol will have a good ontology in-built
> > (the ontology people use when they speak English and say things like:
> > feed the mice at regular intervals with ...)
>
> So let's twist the wheel one bit further.
> You are a lazy, but lucky, experimenter.
> You need to feed the mice twice a day, but you have learned that the
> mice are good at opening bags of rice with their teeth, and that it
> takes them about 12 hours to get them open.
> You leave two bags of rice there every morning.
> All the mice get fed, exactly according to the protocol. (You are lucky.)
> Will you still say: 'from an experimental standpoint ... the
> requirements of the protocol were realized'

I would say that the requirements were met, not that the protocol was
realised. In this case, it is, as you say, a lucky accident that the
requirements are met. This is distinguishable from when some agent follows
the relevant plan and either does or does not meet the requirements. It is
always possible to follow a plan to the best of your abilities and fail to
get the desired result. It happens often to me during baking.

> > Or to say essentially the same thing differently, in the one case there
> > was a plan to feed the mice according to experimental requirements in the
> > agent that facilitated the action. In the other there was not. In the
> > first case, the mouse-feeding activity is a realization of the plan, in
> > the second case it is not. In both cases, the physical activity of
> > mouse-feeding is exactly the kind required by the experiment. We should
> > not conflate these two features.
>
> Physical events and activities are entities of two different kinds.
> The protocol relates to the latter.

No, you've lost me - I didn't mention events, I only mentioned activities.
> > If you conflate the intent (that an agent realized a plan through an
> > activity) with the activity, you naturally get promiscuous multiple
> > inheritance on the activity side.
>
> I hope that I am not doing that.

Your explanation of the parrot example sounded a lot like you were.

> > Any given physical activity could be the outcome of no plan
> > being followed, or any one of a large number of plans, possibly but not
> > necessarily with the plans falling into some sort of hierarchy.
> >
> > Plan: make a cup of tea
> > Activity 1: pour boiling water into cup ; add teabag ; stew ; remove
> > teabag ; add milk ; add 2 sugars
> > Activity 2: pour milk into cup ; add sugar ; add tea brewed in teapot
> > Agent 1: me
> > Agent 2: tea-making robot, non-sentient
> >
> > So, there are two physical activities that result in a cup of tea. Either
> > me or my tea robot can participate in these activities. If I make the
> > tea, I am realizing the plan to make tea. If the robot does, it is not
> > (by analogy to the earlier parrot). Hence, we have 4 combinations here,
> > which by the BS logic would give rise to 4 universals of activity, and
> > clearly have 2 super-types of these, one co-ordinated by realization, the
> > other by base-activity.
>
> The robot needs to be programmed and switched on by you, with the
> same intention (to make tea).

This sounds like the old creationist chestnut of intelligent design. Let's
assume the robot was a happy accident of a whirlwind in a junkyard and that
it turns itself on and off on a whim to make tea.

> If you, a careful and non-lazy experimenter, wake up one morning and
> find that the mouse has nearly pulled open one of your bags of rice,
> you may decide (intentionally) to use that half open bag of rice. You
> convert mouse activity into human intentional activity according to
> the protocol.
You are no longer following the protocol, but instead doing different things
that fulfill the post-condition (desired outcome) of the protocol.

> > I take from this example that whether a physical activity is the
> > realization of a plan or not, is not a good basis for categorising it,
> > much as categorising balloons by whether they are red or not is also not
> > a good basis for categorising them.
>
> If OBI were interested exclusively in physical activities we might
> need a different basis of classification.
> But OBI is interested in two kinds of activities, the intentional
> (performed by experimenters; conceivably also by subjects), and the
> non-intentional, e.g. cells dying.

From the point of view of the cell, there is no physical difference between a
cell dying because its neighbour told it to and a cell dying because an
experimenter added a chemical to its media that mimics that signal.
Similarly, there is no difference to a cat between me cutting it open to see
what is inside and cutting it open to kill it, given I make the same cut - the
cat will be transiently surprised and then terminally upset either way.

If you want to draw a distinction, it is that the intention is different. At
the moment, in OBI, the intention is captured in the plan. I fail to
understand why you want this to leak through to taint the activity.

> > Whether something is a realization of a plan is IMHO very clearly and
> > naturally a defined class. For deciding what the consequences are in the
> > real world, this information is eclipsed by what actually happened.
>
> A scientific investigation is not only a matter of consequences, but
> also a matter of how those consequences were reached (deliberately,
> carefully, on the basis of the application of these and those
> protocols, honestly, etc.)

Sure, but we are not discussing investigations, but activities.
Investigations have all sorts of things - motivations, ideas, hypotheses,
models, expectations and so on in addition to the activities, which provide
ample room for the intentionality to be attached.

Perhaps the sticking point is that we have no entity for the act of realizing
a plan in a resulting activity, as distinct from the resulting activity
itself, and therefore when thinking about this and discussing it we are
conflating this enactment partly into the plan and partly into the target
activity? It seems that the agency involved (and therefore any
intentionality) is naturally associated with this enactment rather than with
either the plan or the resulting activity.

> BS

Matthew
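[Editor's note: the separation argued for in this thread - a plan, a purely
physical activity, and a distinct "enactment" record that carries the agency -
can be sketched as a toy data model. This is only an illustration of the
proposal, not anything from OBI; all class and field names are invented.]

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Plan:
    goal: str  # e.g. "make a cup of tea"


@dataclass(frozen=True)
class Agent:
    name: str


@dataclass(frozen=True)
class Activity:
    # A purely physical description of what happened; no intent recorded here.
    steps: tuple


@dataclass(frozen=True)
class Realization:
    # The enactment: the only place where agency (and hence intentionality)
    # attaches, linking an agent and a plan to a physical activity.
    agent: Agent
    plan: Plan
    activity: Activity


tea_plan = Plan(goal="make a cup of tea")
brew = Activity(steps=("pour boiling water", "add teabag", "remove teabag",
                       "add milk"))

# Intentional case: an agent enacts the plan through the activity.
me = Agent(name="Matthew")
intentional = Realization(agent=me, plan=tea_plan, activity=brew)

# Accidental case: the very same physical activity occurs (the lucky
# experimenter, the junkyard robot), with no realization at all.
accidental: Optional[Realization] = None

# The activity token is identical in both cases; only the presence or
# absence of a Realization distinguishes them, so classifying the
# activity itself does not need to mention plans or intent.
assert intentional.activity == brew
```

On this sketch, the four tea-making combinations in the thread are four
Realization instances over just two Activity values, rather than four
universals of activity.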