Hello Soar People
I am not sure I understand the argument yet. But, here's an attempt ...
If we are measuring intelligence with respect to an environment, and if
a simple agent does as well at maximizing expected utility as a more
sophisticated agent, then who is to say that the simple agent is not as
intelligent as the more sophisticated agent? It would seem that they can be
judged as being equally intelligent because they both maximize expected
utility, i.e. they both accomplish the task they were made to accomplish.
Or if the issue is whether the mechanisms used to achieve the goals are
"reasoning" (i.e. conscious?) mechanisms or not, can intelligence not be
unconscious, e.g. habit, instinct, reflex, etc? (Can information be stored
and even intelligently used in an unconscious way?) So, regardless of what
kinds of mechanisms are available to an agent to maximize utility,
whichever agent does a better job of doing what it was made to do could be
viewed as the more intelligent.
Let me know if I have missed the point entirely (or even partially).
I personally tie the reason for an agent's being to a particular set of
values -- my values. If an agent's intelligence is to be useful to me, then
the agent must be able
to maximize my own set of values. I suppose that the intelligence of an
agent could be separated from its usefulness if people define it that way,
in which case I can now see your point. Then my concern is not so much how
intelligence is defined but rather how we define a _good_ agent.
Can someone now give me their definition of "a good agent" and point out
how this differs from an intelligent agent?
It might help if I clarified my original concern about Soar: As far as
I know, the programmer must be the one to define the goal states for an
agent -- at least the end-goal states. (I have heard of subgoaling.)
This seems limiting to me. Does the programmer always know what the
best goal states are?
Given some task in some environment (or some reason for the agent's
being), it seems as though the programmer will easily know how to evaluate a
few potential end states, in an axiological (utility) sense of the word
(e)value(ate). Or at least he will be able to evaluate a few features of
some potential end states, or perhaps a few features of all generic states
the agent could experience along the way.
But he may not know what goals are possible. He may not know which
features of a potential goal state are jointly achievable, and so may not be
able to define a definite goal state. He may not even know which possible
goals or collections of state features would maximize the utility he is more
directly concerned with. Furthermore, the programmer may be lazy like me and
want the agent to figure out the appropriate goals itself, even when he does
know what a good goal state will look like.
The point is, I think the programmer is often more confident in
assigning the relative utility of a few state features than he is in
defining one or more definite goal states. And the programmer may not care
what a goal state looks like as long as value has been added/achieved along
the way.
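To make "the relative utility of a few state features" concrete, here is a
rough sketch in Python (this is not Soar or ACT-R code; the feature names and
weights are just ones I invented for illustration):

    # The programmer states only how much a few features are worth, never a goal.
    FEATURE_WEIGHTS = {
        "friendly_lives_preserved": 10.0,   # strongly valued
        "fuel_remaining":            0.1,   # weakly valued
        "enemies_active":           -2.0,   # negatively valued
    }

    def utility(state):
        """Weighted sum of whatever valued features a state happens to have."""
        return sum(w * state.get(f, 0.0) for f, w in FEATURE_WEIGHTS.items())

    # Any state can now be scored without ever having been declared a goal:
    utility({"friendly_lives_preserved": 4, "enemies_active": 1})   # 38.0

Nothing in that table says what a goal state looks like; it only says how much
a few features are worth relative to each other.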
So, instead of the programmer having to define particular goal states,
would it not be more effective in some environments/tasks if the programmer
could give a few real-valued or relative descriptions of the utility of a
few states (or features of states), then let the agent figure out how to
back-propagate utility estimates to preliminary states, and then
forward-propagate decisions based on its more complete utility function,
until it reaches what it believes to be an optimal state?
That is, could the agent somehow define its own goal states, directly or
indirectly, based on the original incomplete utility metrics given to it?
It seems like the addition of utility tied to decision theory could help here.
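To show the kind of thing I have in mind, here is a rough Python sketch of
back-propagating utility and then acting on it (essentially value iteration
over a toy model; the states, actions, probabilities, and numbers are all
invented for illustration and say nothing about how Soar actually works):

    # The programmer supplies utility for only a couple of valued outcome states.
    REWARD = {"lives_saved": 1.0, "lives_lost": -1.0}

    # Causal model: state -> action -> list of (probability, next_state).
    TRANSITIONS = {
        "patrol":     {"engage":  [(0.7, "enemy_down"), (0.3, "lives_lost")],
                       "retreat": [(1.0, "patrol")]},
        "enemy_down": {"cover":   [(0.9, "lives_saved"), (0.1, "lives_lost")]},
    }

    GAMMA = 0.95  # discount on future utility

    def back_propagate(iterations=50):
        """Propagate utility estimates from the valued states back to the rest."""
        states = set(TRANSITIONS) | {s for acts in TRANSITIONS.values()
                                     for outs in acts.values() for _, s in outs}
        value = {s: REWARD.get(s, 0.0) for s in states}
        for _ in range(iterations):
            for s, acts in TRANSITIONS.items():
                value[s] = REWARD.get(s, 0.0) + GAMMA * max(
                    sum(p * value[n] for p, n in outs) for outs in acts.values())
        return value

    def best_action(state, value):
        """Forward-propagate a decision: take the highest expected-utility action."""
        return max(TRANSITIONS[state],
                   key=lambda a: sum(p * value[n] for p, n in TRANSITIONS[state][a]))

    values = back_propagate()
    best_action("patrol", values)   # "engage", given these made-up numbers

The only thing I had to state as the programmer was the worth of the two
outcome states; the agent worked out the worth of every intermediate state,
and its "goals" fall out of that.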
As an example, suppose I want to write a fighter-pilot agent (one that
did not need to follow the already well-defined rules of combat or standard
goals of missions). All I know as a civilian programmer is that there is
some value in human life being preserved, and that is all. Do I need to
define a particular goal? I just want the agent to exist, to act on its own
and be a benefit to me based on the values I know are important to me.
(An axiology that is even less-derived, i.e. more fundamental to an
agent than valuing other people's lives, could be given, such as the
pleasure/happiness of the agent, but this example should illustrate what I
want it to. And perhaps the emotion that Dr. Laird referred to earlier will
have a lot to do with what I am asking about here.)
By back-propagating utility, I mean that the agent can figure out that
shooting down enemies reduces their number, and that a reduction in number
of enemies reduces the chances that an enemy will attack, and the less the
enemy attacks the fewer human lives are lost. The agent starts with a value
description of one state and derives the values of other states from it.
The same backward chaining could also teach the agent about the value of
conserving fuel, or not running into missiles, etc. In other words, the
agent can derive an axiological (utility) value for a lot of states that
were not explicitly defined as goals. Then using that utility value, the
agent could invent its own goals that would benefit me, the programmer/user,
without me ever needing to define a goal for it. (And isn't that what we
really want from an agent?)
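As a toy version of that derivation (every number here is an assumption I am
inventing purely to show how value flows backward along the causal chain):

    VALUE_PER_LIFE = 1.0                  # the only value I actually stated
    P_ATTACK_PER_ENEMY = 0.3              # assumed chance an active enemy attacks
    EXPECTED_LIVES_LOST_PER_ATTACK = 2.0  # assumed cost of one attack

    def derived_utility_of_removing_one_enemy():
        # one fewer enemy -> fewer expected attacks -> fewer expected lives lost
        lives_saved = P_ATTACK_PER_ENEMY * EXPECTED_LIVES_LOST_PER_ATTACK
        return VALUE_PER_LIFE * lives_saved   # 0.6 with these assumptions

The utility of shooting down an enemy was never given to the agent; it follows
from the one value I did give it plus its causal knowledge.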
It would seem reasonable that a "good" agent would not need any
prescribed goals but would carry out actions intended to maximize a utility
function, formulating its own goals along the way from the knowledge it has
gained about how its fundamental axiology connects to the probabilities and
causal structure of the states it can reach.
This question may be based on significant ignorance of Soar on my part,
in which case I apologize. But I am hoping that your answers will benefit
more people than me either way.
----- Original Message -----
From: "Peter Pirolli" <pirolli@...>
To: "Wai-Tat Fu" <wfu@...>; "Peter Pirolli" <pirolli@...>;
"Thomas L. Packer in BYU CS DEG" <ThomasAndMegan@...>;
Sent: 2004.Mar.05/Fri 11:45
Subject: Re: [soar-group] Utility Theory
> I agree with Wai that it is probably a mistake to think that utility
> maximization is a "requirement" for an intelligent agent. It might be
> reasonable to assume that evolutionary processes will have
> utility-optimizing tendencies with respect to the selection of agent
> mechanisms, but those mechanisms do not need to be ones that reason about
> the maximization of utility. To use a degenerate example, if a single
> celled organism foraged for food in an environment in which food was
> distributed in a Poisson manner, then a random move generator would be
> optimal. A utility-maximization reasoner could be no more intelligent in
> that task environment (to use Newell's definition of "intelligence"),
> although it might be more intelligent in a different environment.
> At 09:12 AM 3/5/2004 -0800, Wai-Tat Fu wrote:
> >In ACT-R, each production has a utility value. There is a choice
> >mechanism that selects which production to fire based on the utility
> >values of the productions in the conflict set. As pointed out by Pete,
> >this is virtually identical to RUM.
> >There is also a learning mechanism that updates the utility values of
> >productions based on past experiences. In general, ACT-R is similar to
> >other adaptive choice models such as the ASCM of Rob Siegler and the
> >adaptive decision maker of Payne, Bettman, and Johnson: productions are
> >reinforced by success, i.e. their utility values increase when positive
> >outcomes are experienced.
> >However, it is not clear whether "maximizing utility" should be a
> >"requirement for intelligent system". It seems that the process could be
> >more complicated (e.g. foraging behavior) than simple maximization (if
> >that could be clearly characterized). See also the many studies by
> >Herrnstein and his colleagues (see e.g. The matching law : papers in
> >psychology and economics / Richard J. Herrnstein ; edited by Howard
> >Rachlin and David I. Laibson).
> >--On Thursday, March 04, 2004 1:25 PM -0800 Peter Pirolli
> ><pirolli@...> wrote:
> >>Wai-Tat Fu and John Anderson have been working on a revised theory of
> >>utility and utility learning in ACT-R. I've forwarded this message to
> >>Wai-Tat so he might be able to fill in more of those details.
> >>More generally, it turns out that the ACT-R theory of utility and choice
> >>is virtually identical to the Random Utility Model developed by Daniel
> >>McFadden, who won a Nobel in Economics for that work in 2000 (although
> >>the ACT-R model was developed independently).
> >>At 12:18 PM 3/4/2004 -0800, Thomas L. Packer in BYU CS DEG wrote:
> >>Hello Soar People
> >> I have been reading the Russell and Norvig text book called
> >>Artificial Intelligence: A Modern Approach and see many principles and
> >>techniques I like, many of which I see implemented in the Soar
> >>architecture -- except for utility theory, as far as I am aware.
> >>Russell and Norvig make the principle of maximizing expected utility look very
> >>attractive to me as one key requirement, if not the definition, of an
> >>intelligent system -- not merely making use of all available information
> >>which I have heard Soar people use as the definition of intelligence.
> >> Are there any freely available discussions of the relationship (or
> >>the purposeful lacking of a relationship) between decision theory /
> >>utility theory and the Soar implementation of agent-based AI?
> >> I think I remember reading about ACT-R using a utility measurement of
> >>some kind and combining it with an expectation measurement, so I am
> >>guessing that ACT-R is based on decision theory. Any comments on this?
> >> Thanks.
> >>Thomas Packer
> >>Brigham Young University
> >Wai-Tat Fu
> >Dept. of Psychology, BH 446J
> >Carnegie Mellon University,
> >Pittsburgh, PA 15213
> >Phone: 412-268-3323
> >Email: wfu@...
> >WWW: http://www.andrew.cmu.edu/user/wfu/home.htm