[freeciv-ai] Re: Borg AI.

To: Per I Mathisen <per@xxxxxxxxxxx>
Cc: Freeciv AI development <freeciv-ai@xxxxxxxxxxx>
Subject: [freeciv-ai] Re: Borg AI.
From: "Ross W. Wetmore" <rwetmore@xxxxxxxxxxxx>
Date: Sat, 06 Jul 2002 11:41:56 -0400

At 10:30 AM 02/07/03 +0200, Per I Mathisen wrote:
>On Wed, 3 Jul 2002, Raimar Falke wrote:
>> How does the current AI and how would another AI answer the scenario:
>> "I want to get this enemy city. How strong it is defended and how many
>> units do I have to send?".
>
>The current AI just checks... it is omniscient!

You are always allowed to know if a city is defended or not - some 
levels of omniscience are part of the game.

You almost always know what type of unit is used for defence, and can
send a diplomat to get all the specific details.

Rather than shutting down all higher brain functions by falling back
on stale ground covered by religious terms like "omniscient", it might
be more useful to work out the various rationales for, or an
understanding of, such actions, and then plan for various stages of
implementation :-).

In the current game world the "omniscience" of the AI is really just
a "free" diplomat probe. The human user has the same "omniscient"
ability when using a diplomat at some cost and with random chance
reliability, i.e. a human-fuzziness factor. The latter is easy to
implement for AIs to reduce this perceived advantage, leaving just
the mechanics of diplomat strategy.
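
For example, something like this rough sketch (standalone C, not actual
Freeciv code - all names and numbers are invented) shows how an AI probe
could be given human-style, random-chance reliability:

  #include <stdlib.h>
  #include <stdbool.h>

  struct probe_result {
    bool reliable;        /* did the probe "succeed" this turn? */
    int  defender_count;  /* reported count, possibly fuzzed */
  };

  /* true_count: the real number of defenders (the server knows this).
   * reliability_pct: chance in [0,100] that the AI gets the exact value. */
  struct probe_result ai_probe_city(int true_count, int reliability_pct)
  {
    struct probe_result r;

    r.reliable = (rand() % 100) < reliability_pct;
    if (r.reliable) {
      r.defender_count = true_count;
    } else {
      /* Fuzzed answer: off by up to +/-1, never negative. */
      r.defender_count = true_count + (rand() % 3) - 1;
      if (r.defender_count < 0) {
        r.defender_count = 0;
      }
    }
    return r;
  }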

Carry this a little further, and you might want to move to a Civ III
model where such probes are handled by paying money, with no unit
required as the delivery agent - this simplifies the mechanics of
dealing with the AI by reducing everything to the same simplified model.

In any event, in this view the AI problem is not one of developing a
full human consciousness, but of managing the diplomatic probe strategy
for immediate queries, or of paying some penalty in return for help
with some minor part of implementing this.

Note that fog-of-war is a human penalty, and should not be applied
to AIs that have no separate model of game data other than direct
query. Once a tiered model data structure is in place for AIs, the
updates to the AI's model data can perhaps be governed by such effects,
and the AI can redirect all its queries to this (incomplete) source.
Until the AI is enhanced with mechanisms to deal intelligently with
estimations of unknowns, the "penalty" should not be fully applied,
i.e. it should be applied with some level of AI-fuzziness for balance.
The same arguments apply to intuition-challenged humans - so there is
nothing AI-specific in doing things this way. Penalizing scores for
such abilities is a reasonable way to deal with the competitive aspect.
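
A rough sketch of what such a tiered layer might look like
(illustrative C only, names invented): the AI queries a per-tile memory
record instead of the live game state, and its confidence in the answer
decays with age down to a floor that stands in for a pure guess:

  struct ai_tile_memory {
    int last_seen_turn;   /* turn the tile was last observed, -1 = never */
    int seen_defenders;   /* defenders counted at that time */
  };

  /* Confidence in the remembered value: 100% when seen this turn,
   * dropping with age toward a minimum "educated guess" level. */
  int memory_confidence(const struct ai_tile_memory *m, int current_turn)
  {
    if (m->last_seen_turn < 0) {
      return 0;                          /* never seen: pure guess */
    }
    int age  = current_turn - m->last_seen_turn;
    int conf = 100 - 10 * age;           /* hypothetical decay rate */
    return conf > 20 ? conf : 20;        /* floor for stale memories */
  }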

>A non-omniscient AI would have to remember units spotted on the continent
>and assume they are still around and remember the last spotting of the
>city and assume it is still as well/poorly defended. Or it might simply
>make an educated guess depending on information we have about the techs
>the player has.

Some data is more static than others and should be so managed.

The general concept of "data" needs to be enlarged as well, beyond just
enemy military units.

Educated guessing (estimation) is a critical part of dealing with
unknown data, and must be a prerequisite for any "penalized" AI model
so as not to penalize it unfairly.

Educated guesses can be thought of as a constant "base" level influence.
All other influence effects should be computed as a delta to this.
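
In code this might look roughly like the following (illustrative C,
names invented): a constant base guess, with later, smarter agents
adding their corrections as separate delta terms:

  #define MAX_DELTAS 8

  struct influence {
    double base;                /* constant educated guess */
    double delta[MAX_DELTAS];   /* refinements added by smarter agents */
    int    n_deltas;
  };

  double influence_value(const struct influence *inf)
  {
    double v = inf->base;
    for (int i = 0; i < inf->n_deltas; i++) {
      v += inf->delta[i];       /* each effect is a delta to the base */
    }
    return v;
  }

  void influence_add_delta(struct influence *inf, double d)
  {
    if (inf->n_deltas < MAX_DELTAS) {
      inf->delta[inf->n_deltas++] = d;
    }
  }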

>When our non-omniscient AI spots a city (say with an exploring caravel),
>we don't know how many defenders the city has nor what kind they are. We
>just note if it is defended or not. Often a number of cities will be
>undefended, so we can go for them. When we don't find undefended cities,
>things get more difficult. Often we will simply have to do an attack in
>the dark without much knowledge of the enemy's real strength. Then we can
>use the knowledge of how this attack fared to determine the quality of the
>defenses of the area.

Some sort of static/dynamic influence memory is probably useful for the
latter. Note that the AI shouldn't try to store details like exact unit
parameters, but should use the level of influence threat to estimate
the sort of attack/defence strength needed. This may mean substituting
the best known enemy unit, or some other heuristic, to assess danger in
some specific situations.
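
A sketch of that "best known enemy unit" heuristic (illustrative C -
the unit list and values are just placeholders for whatever the ruleset
actually provides):

  struct unit_type { const char *name; int defense; };

  /* Unit types the enemy is believed able to build (hypothetical data). */
  static const struct unit_type known_enemy_types[] = {
    { "Phalanx",    2 },
    { "Pikemen",    2 },
    { "Musketeers", 3 },
  };

  /* Estimate the defence of a city we cannot see into: assume each
   * presumed defender is the strongest type the enemy can build. */
  int estimate_city_defense(int presumed_defenders)
  {
    int n = (int)(sizeof(known_enemy_types) / sizeof(known_enemy_types[0]));
    int best = 0;

    for (int i = 0; i < n; i++) {
      if (known_enemy_types[i].defense > best) {
        best = known_enemy_types[i].defense;
      }
    }
    return presumed_defenders * best;
  }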

>Coming up with a good solution is hard. I don't have one at the moment.

One of the key concepts required is to get away from detailed
estimations of limited elements, and to move to a more general
assessment function and its application in generating decision weights.

Specific details should only be used in limited cases involving
immediate or critical tactical decisions, with more general influence
weights used for most management decisions, i.e. advisor weight
computations. There should be an open-ended way to update the
influences, starting with a base (constant) guesstimate which more
sophisticated AIs can tweak by adding new factors as they are developed
and/or experimented with.
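
As a rough illustration (C, all names and factors invented), an advisor
weight could start from the constant base guesstimate and be adjusted
by whatever general influence values are available, with further terms
added later:

  struct attack_inputs {
    double enemy_threat;    /* influence-map estimate near the target */
    double our_strength;    /* what we can bring to bear */
    double base_want;       /* constant baseline "guesstimate" */
  };

  double attack_want(const struct attack_inputs *in)
  {
    /* Scale the base want by the strength ratio; more sophisticated
     * agents can add or replace terms over time. */
    double ratio = (in->enemy_threat > 0.0)
                     ? in->our_strength / in->enemy_threat
                     : 2.0;  /* nothing known to defend: very tempting */
    return in->base_want * ratio;
  }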

Develop the general model first. Apply it to each manager function and
tweak its constant settings to provide a reasonable base level of play.
Then start developing "agents" to do more sophisticated adjustments of
the base level strategy.

Develop a means to do pluggable agents, or a generalized agent that is
just an interface to a stream pipe or scripting engine, and let the
Civbot designers go crazy experimenting with agent/influence tweaks.
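
A minimal sketch of such a pluggable hook (illustrative C, names
invented): each agent is just a named callback the core AI runs over
its decision weights, and a scripted or external Civbot would simply be
one more entry in the table:

  #include <stddef.h>

  struct ai_agent {
    const char *name;
    /* Adjust a decision weight; return the modified value. */
    double (*adjust_weight)(double current, void *context);
  };

  double apply_agents(const struct ai_agent *agents, int n_agents,
                      double weight, void *context)
  {
    for (int i = 0; i < n_agents; i++) {
      if (agents[i].adjust_weight != NULL) {
        weight = agents[i].adjust_weight(weight, context);
      }
    }
    return weight;
  }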

>Yours
>Per

Cheers,
RossW
=====



