From: Smith, B. <phi...@bu...> - 2008-01-25 14:56:17
At 09:43 AM 1/25/2008, Matthew Pocock wrote:

>On Friday 25 January 2008, Smith, Barry wrote:
>> At 08:37 AM 1/25/2008, Matthew Pocock wrote:
>>> I am assuming that the protocol will have a good ontology in-built
>>> (the ontology people use when they speak English and say things like:
>>> feed the mice at regular intervals with ...)
>>
>> So let's twist the wheel one bit further.
>> You are a lazy, but lucky, experimenter.
>> You need to feed the mice twice a day, but you have learned that the
>> mice are good at opening bags of rice with their teeth, and that it
>> takes them about 12 hours to get them open.
>> You leave two bags of rice there every morning.
>> All the mice get fed, exactly according to the protocol. (You are lucky.)
>> Will you still say: 'from an experimental standpoint ... the
>> requirements of the protocol were realized'?
>
>I would say that the requirements were met, not that the protocol was
>realised. In this case, it is, as you say, a lucky accident that the
>requirements are met. This is distinguishable from when some agent follows
>the relevant plan and either does or does not meet the requirements. It is
>always possible to follow a plan to the best of your abilities and fail to
>get the desired result. It happens often to me during baking.
>
>>> Or to say essentially the same thing differently, in the one case there
>>> was a plan to feed the mice according to experimental requirements in
>>> the agent that facilitated the action. In the other there was not. In
>>> the first case, the mouse-feeding activity is a realization of the plan;
>>> in the second case it is not. In both cases, the physical activity of
>>> mouse-feeding is exactly the kind required by the experiment. We should
>>> not conflate these two features.
>>
>> Physical events and activities are entities of two different kinds.
>> The protocol relates to the latter.
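The distinction drawn above (requirements met vs. protocol realised) amounts to two independent questions, which can be sketched in a small hypothetical model; the function name `verdict` and its labels are illustrative, not anything defined in OBI:

```python
from itertools import product

def verdict(followed_plan: bool, requirements_met: bool) -> str:
    """Classify one of the four combinations the thread distinguishes:
    whether an intentional agent followed the plan is independent of
    whether the protocol's post-condition actually came to hold."""
    if followed_plan and requirements_met:
        return "plan realized and requirements met"
    if followed_plan:
        return "plan followed but requirements unmet (failed baking)"
    if requirements_met:
        return "requirements met by lucky accident (self-feeding mice)"
    return "neither followed nor met"

# Enumerate all four combinations.
for followed, met in product([True, False], repeat=2):
    print(f"followed={followed}, met={met}: {verdict(followed, met)}")
```

On this sketch, the lucky-experimenter case is the `followed=False, met=True` cell: the outcome fact holds while the realization fact does not.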
>No, you've lost me - I didn't mention events, I only mentioned activities.
>How do you distinguish a physical activity from a physical event?
>
>>> If you conflate the intent (that an agent realized a plan through an
>>> activity) with the activity, you naturally get promiscuous multiple
>>> inheritance on the activity side.
>>
>> I hope that I am not doing that.
>
>Your explanation of the parrot example sounded a lot like you were.

An intent is something psychological; it can obtain without any physical
realization.

>>> Any given physical activity could be the outcome of no plan being
>>> followed, or any one of a large number of plans, possibly but not
>>> necessarily with the plans falling into some sort of hierarchy.
>>>
>>> Plan: make a cup of tea
>>> Activity 1: pour boiling water into cup; add teabag; stew; remove
>>> teabag; add milk; add 2 sugars
>>> Activity 2: pour milk into cup; add sugar; add tea brewed in teapot
>>> Agent 1: me
>>> Agent 2: tea-making robot, non-sentient
>>>
>>> So, there are two physical activities that result in a cup of tea.
>>> Either me or my tea robot can participate in these activities. If I
>>> make the tea, I am realizing the plan to make tea. If the robot does,
>>> it is not (by analogy to the earlier parrot). Hence, we have 4
>>> combinations here, which by the BS logic would give rise to 4 universals
>>> of activity, and clearly have 2 super-types of these, one co-ordinated
>>> by realization, the other by base-activity.
>>
>> The robot needs to be programmed and switched on by you, with the
>> same intention (to make tea).
>
>This sounds like the old creationist chestnut of intelligent design. Let's
>assume the robot was a happy accident of a whirlwind in a junkyard and that
>it turns itself on and off on a whim to make tea.
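The tea example's 2-activities-times-2-agents structure can be sketched as follows; the classes `Agent` and `Activity` and the predicate `realizes` are hypothetical names for illustration only, and the sketch takes Matthew's side in modelling realization as a relation over agents, activities and plans rather than as extra subclasses of activity:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str
    intentional: bool  # can this agent form intentions on the basis of a plan?

@dataclass(frozen=True)
class Activity:
    description: str

TEA_PLAN = "make a cup of tea"

def realizes(agent: Agent, activity: Activity, plan: str) -> bool:
    """Realization as a relational fact, not a subtype of Activity.
    In a fuller model this would also check that the activity was
    actually performed on the basis of the plan."""
    return agent.intentional

me = Agent("me", intentional=True)
robot = Agent("tea robot", intentional=False)
water_first = Activity("boiling water; teabag; stew; remove; milk; 2 sugars")
milk_first = Activity("milk; sugar; add tea brewed in teapot")

# The four combinations: only two Activity universals are needed;
# realization varies with the agent, not with the activity type.
for agent in (me, robot):
    for activity in (water_first, milk_first):
        print(agent.name, "|", activity.description, "->",
              realizes(agent, activity, TEA_PLAN))
```

On this modelling choice the "4 universals of activity" never arise, because the realization facts live in the relation rather than in the activity taxonomy.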
>> If you, a careful and non-lazy experimenter, wake up one morning and
>> find that the mouse has nearly pulled open one of your bags of rice,
>> you may decide (intentionally) to use that half-open bag of rice. You
>> convert mouse activity into human intentional activity according to
>> the protocol.
>
>You are no longer following the protocol, but instead doing different things
>that fulfill the post-condition (desired outcome) of the protocol.
>
>>> I take from this example that whether a physical activity is the
>>> realization of a plan or not is not a good basis for categorising it,
>>> much as categorising balloons by whether they are red or not is also
>>> not a good basis for categorising them.
>>
>> If OBI were interested exclusively in physical activities we might
>> need a different basis of classification.
>> But OBI is interested in two kinds of activities, the intentional
>> (performed by experimenters; conceivably also by subjects), and the
>> non-intentional, e.g. cells dying.
>
>From the point of view of the cell,

I hope the cells have good ontologists working for them.

>there is no physical difference between a cell dying because its neighbour
>told it to or a cell dying because an experimenter added a chemical to its
>media that mimics that signal. Similarly, there is no difference to a cat
>between me cutting it open to see what is inside and cutting it open to
>kill it, given I make the same cut - the cat will be transiently surprised
>and then terminally upset either way.

Again, when we are building OBI we are concerned not to make abstract
simplifying models of experiments, but rather to understand how experiments
are built up. This "there is no difference" talk fits well where we are
doing the former; not where we are doing the latter.

>If you want to draw a distinction, it is that the intention is different.
>At the moment, in OBI, the intention is captured in the plan.
>I fail to understand why you want this to leak through to taint the
>activity.

The plan gets realized. We are trying to understand the nature of the
occurrents involved in such realization.

>>> Whether something is a realization of a plan is IMHO very clearly and
>>> naturally a defined class. For deciding what the consequences are in
>>> the real world, this information is eclipsed by what actually happened.

Again, OBI is not only interested in consequences.

>> A scientific investigation is not only a matter of consequences, but
>> also a matter of how those consequences were reached (deliberately,
>> carefully, on the basis of the application of these and those
>> protocols, honestly, etc.)
>
>Sure, but we are not discussing investigations,

What does the 'I' stand for in OBI?

>but activities. Investigations have all sorts of things - motivations,
>ideas, hypotheses, models, expectations and so on in addition to the
>activities, which provide ample room for the intentionality to be attached.

I want to know where the intentionality is; not where we might attach it in
some simplifying model.

>Perhaps the sticking point is that we have no activity of realizing a plan
>as a resulting activity, as distinct from the resulting activity, and
>therefore when thinking about this and discussing it are we conflating this
>enactment partly into the plan and partly into the target activity? It
>seems that the agency involved (and therefore any intentionality) is
>naturally associated with this enactment rather than either the plan or
>the resulting activity.

Parrots cannot realize a plan (of the sort specified in a protocol) because
parrots cannot read the plan, cannot understand the plan, cannot form
intentions on the basis of the plan, etc., etc. I don't think adding an
extra layer called 'enactment' is going to help here.

BS

>> BS
>
>Matthew