Complete.Org: Mailing Lists: Archives: freeciv-dev: March 2002:
[Freeciv-Dev] Re: [Wishlist] Attitude manager (agent and AI)
To: jdorje@xxxxxxxxxxxx
Cc: freeciv-dev@xxxxxxxxxxx
Subject: [Freeciv-Dev] Re: [Wishlist] Attitude manager (agent and AI)
From: Gregory Berkolaiko <Gregory.Berkolaiko@xxxxxxxxxxxx>
Date: Mon, 4 Mar 2002 19:46:33 +0000 (GMT)

On Mon, 4 Mar 2002, Jason Short wrote:

> Gregory Berkolaiko wrote:
> >
> > Well it would be useless, that's the problem.  The value of happiness is 
> > to allow more people to work.  To estimate the effect of them working, 
> > you'd need to estimate the F/S/T (food/shields/trade) obtained through 
> > their (best) placement.  The AI can do this poorly for itself; the CMA 
> > can do it perfectly for the human.
> 
> I think you guys are missing (or at least ignoring) something here.

Yes, ignoring.  And yes, I agree with everything you wrote below.  I 
especially agree with your final paragraph.  Let us learn to walk before 
trying to ride a bike.

I think that such an estimate for Temple/Cathedral/Coliseum would be a
user-friendly, informative and relatively easy-to-implement addition to
the CMA.  In time it might be of value for the AI as well.

Best,
G.

> 
> Yes, the value of happiness is to allow more people to work.  A simple 
> CMA should be able to evaluate building a cathedral, building a factory, 
> or changing the global luxuries rate.
> 
> But where this model falls short again is that it only evaluates the 
> current situation.  For example, it is easy to deduce that with no 
> military units in the field a police station will not provide any 
> benefit.  It is less easy to determine that we want a police station 
> before churning out units.
> 
> As a more extreme example, you've said that the value of happiness is to 
> allow more people to work.  Well, I say the value of _food_ is to allow 
> more people to work.  If a human is using the CMA, it is enough that the 
> CMA tell the human what extra productivity they will get out of a 
> cathedral.  But for an AI to use this, we need to know more to choose 
> our goals.  Spending 30 turns to build a cathedral may give us 2 extra 
> workers in 30 turns.  Or we may divert more production into food and 
> have 1 extra worker in 15 turns instead of 30, but at the cost of 
> delaying the cathedral by 15 turns.  In this case, I think a CMA that 
> simply does a linear optimization will not be very helpful - the 
> higher-level agent will need to know more.  Alternatively, it might be 
> possible for the CMA itself to determine the usefulness of food - 
> perhaps given an extra constraint of time weighting.
> 
> This problem becomes even harder when considering, say, founding a new 
> city.  40 units of production + 1 citizen (which we can think of in 
> terms of food, perhaps) gives us 2 citizens.  It is basically a straight 
> conversion from production to food, but with a slight overhead because 
> we have to move the settler.  Then we have to consider how productive 
> the citizens are (which will include our increased happiness at the 
> current city after reducing our population).  Finally, we need to 
> consider that, since granary size increases as the city increases, 
> founding a new city will effectively reduce the cost for future growth.
> 
> I don't think the agent-based AI is at a point yet where we should begin 
> thinking about implementing these things.  Things should go forward in a 
> straightforward manner until they hit a wall.  At that point, hopefully 
> we'll know enough to work on the _real_ problem.
> 
> jason
> 