[Freeciv-Dev] Re: [Wishlist] Attitude manager (agent and AI)
On Mon, Mar 04, 2002 at 02:35:21PM -0500, Jason Short wrote:
> Gregory Berkolaiko wrote:
> > On Mon, 4 Mar 2002, Raimar Falke wrote:
> >
> >
> >>On Mon, Mar 04, 2002 at 05:42:25PM +0000, Gregory Berkolaiko wrote:
> >>
> >>>>>So I would suggest the following extension to the CMA: once the
> >>>>>target multipliers are set, you should be able to evaluate the
> >>>>>effect of building a happiness-related improvement. It would output
> >>>>>(for user) the increase in surpluses you'd be able to get (maybe
> >>>>>minus the cost of the upkeep). And for AI you just need to combine
> >>>>>the values of increases multiplied by their _WEIGHTINGs
> >>>>>
> >>>>I don't know how this relates to CMA and I don't know what target
> >>>>multipliers you mean.
> >>>>
> >>>Now let's consider evaluating the impact of a Cathedral. Algorithm
> >>>number 2 above will not give us anything useful (unless the whole
> >>>city changes its happiness status).
> >>>
> >>I don't see a big problem in creating a happiness surplus value based
> >>on the ppl_* arrays.
> >>
> >
> > Well it would be useless, that's the problem. The value of happiness is
> > to allow more people to work. To estimate the effect of them working
> > you'd need to estimate F/S/T obtained through their (best) placement. AI
> > can do it poorly for itself, CMA can do it perfectly for the human.
>
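(Aside: to make this concrete, I read "combine the values of the increases
multiplied by their weightings" as something like the sketch below. All the
names are invented for illustration; this is not the current CMA code, and
cma_optimize() is only an assumed helper.)

  /* Stand-in for the real struct city; illustration only. */
  struct city {
    int improvements[256];        /* 1 if the improvement is present */
    /* ... many other fields in the real struct ... */
  };

  struct surplus { int food, shield, trade; };
  struct weights { int food, shield, trade; };

  /* Assumed helper: fills in the best surpluses the CMA can reach. */
  extern void cma_optimize(const struct city *pcity, struct surplus *result);

  /* Value of one improvement: optimize with and without it and combine
   * the surplus increases with the caller's weightings (upkeep could be
   * subtracted from the shield term). */
  static int improvement_value(struct city *pcity, int id,
                               const struct weights *w)
  {
    struct surplus before, after;

    cma_optimize(pcity, &before);
    pcity->improvements[id] = 1;        /* pretend it is built */
    cma_optimize(pcity, &after);
    pcity->improvements[id] = 0;        /* undo the change */

    return w->food   * (after.food   - before.food)
         + w->shield * (after.shield - before.shield)
         + w->trade  * (after.trade  - before.trade);
  }
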
> I think you guys are missing (or at least ignoring) something here.
>
> Yes, the value of happiness is to allow more people to work. A simple
> CMA should be able to evaluate building a cathedral, building a factory,
> or changing the global luxuries rate.
>
> But where this model falls short again is that it only evaluates the
> current situation. For example, it is easy to deduce that with no
> military units in the field, a police station will not provide any
> benefit. It is less easy to determine that we want a police station
> before churning out units.
>
> As a more extreme example, you've said that the value of happiness is to
> allow more people to work. Well I say the value of _food_ is to allow
> more people to work. If a human is using the CMA, it is enough that the
> CMA tell the human what extra productivity they will get out of a
> cathedral. But for an AI to use this, we need to know more to choose
> our goals. Spending 30 turns to build a cathedral may give us 2 extra
> workers in 30 turns. Or we may divert more production into food and
> have 1 extra worker in 15 turns instead of 30, but at the cost of
> delaying the cathedral by 15 turns. In this case, I think a CMA that
> simply does a linear optimization will not be very helpful - the
> higher-level agent will need to know more. Alternately, it might be
> possible for the CMA to itself determine the usefulness of food -
> perhaps given an extra constraint of time weighting.
>
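(That trade-off can be put into numbers. The toy program below just counts
extra worker-turns up to a planning horizon; the constants come from the
example above and everything else is invented. The result depends on the
horizon and the delays you assume, which is exactly why a plain per-turn
optimization cannot decide this on its own.)

  #include <stdio.h>

  /* Toy model only: count "extra worker turns" up to a horizon for one
   * plan.  Real numbers would have to come from the CMA. */
  static int extra_worker_turns(int horizon,
                                int growth_turn, int growth_workers,
                                int cathedral_turn, int cathedral_workers)
  {
    int t, total = 0;

    for (t = 0; t < horizon; t++) {
      if (t >= growth_turn)    total += growth_workers;
      if (t >= cathedral_turn) total += cathedral_workers;
    }
    return total;
  }

  int main(void)
  {
    /* Plan A: cathedral first, 2 extra workers from turn 30 on. */
    int plan_a = extra_worker_turns(60, 0, 0, 30, 2);
    /* Plan B: food first, 1 extra worker from turn 15, cathedral
     * (and its 2 workers) delayed until turn 45. */
    int plan_b = extra_worker_turns(60, 15, 1, 45, 2);

    printf("plan A: %d, plan B: %d extra worker-turns\n", plan_a, plan_b);
    return 0;
  }
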
> This problem becomes even harder when considering, say, founding a new
> city. 40 units of production + 1 citizen (which we can think of in
> terms of food, perhaps) gives us 2 citizens. It is basically a straight
> conversion from production to food, but with a slight overhead because
> we have to move the settler. Then we have to consider how productive
> the citizens are (which will include our increased happiness at the
> current city after reducing our population). Finally, we need to
> consider that, since granary size increases as the city increases,
> founding a new city will effectively reduce the cost for future growth.
>
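(The granary point is easy to see with a small example. The formula below is
only a stand-in, not the real ruleset value; the cheap food box of a size-1
city is what makes founding effectively reduce the cost of future growth.)

  #include <stdio.h>

  /* Stand-in formula, not the real ruleset value: food needed for the
   * next citizen grows with city size. */
  static int granary_size(int city_size)
  {
    return (city_size + 1) * 10;
  }

  int main(void)
  {
    printf("food to grow size 8 -> 9: %d\n", granary_size(8));
    printf("food to grow size 1 -> 2: %d\n", granary_size(1));
    return 0;
  }
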
> I don't think the agents-AI is at a point yet where we should begin
> thinking about implementing these things. Things should go forward in a
> straightforward manner until they hit a wall. At that point, hopefully
> we'll know enough to work on the _real_ problem.
In general, ack. Two points: first, the CMA will not be extended this
much. Supporting virtual cities is probably needed, but not the other
stuff mentioned.
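Roughly, what I have in mind with virtual cities is the pattern below.
None of these functions exist yet; this is only a sketch of the idea.

  /* Sketch only: evaluation always runs on a throw-away copy, so the
   * real city is never touched.  None of these functions exist. */
  struct city;

  struct city *create_virtual_city(const struct city *pcity); /* deep copy   */
  void city_add_improvement(struct city *pcity, int id);      /* modify copy */
  int  cma_query_result(const struct city *pcity);            /* best result */
  void remove_virtual_city(struct city *pcity);                /* free copy  */

  /* "What would the CMA give us if this city had improvement id?" */
  static int query_with_improvement(const struct city *pcity, int id)
  {
    struct city *copy = create_virtual_city(pcity);
    int result;

    city_add_improvement(copy, id);
    result = cma_query_result(copy);
    remove_virtual_city(copy);

    return result;
  }
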
Second point: these models are IMHO too fine-grained. You get larger
fluctuations if an enemy is near and your settler needs to take a safer
path, or if an enemy is near and the settler has to abort its mission
completely. If we ever reach the point where we can equip our models in
such a way, we could run tests to see whether this really gives a
better AI.
Raimar
--
email: rf13@xxxxxxxxxxxxxxxxx
"The primary purpose of the DATA statement is to give names to
constants; instead of referring to pi as 3.141592653589793 at every
appearance, the variable PI can be given that value with a DATA
statement and used instead of the longer form of the constant. This
also simplifies modifying the program, should the value of pi
change."
-- FORTRAN manual for Xerox Computers