[freeciv-ai] Re: README.AI
--- Gregory Berkolaiko <Gregory.Berkolaiko@xxxxxxxxxxxx> wrote:
Attached is my version with two changes. I got rid of the cached city tile
values, and I changed "AI doesn't know about democracy" to "AI doesn't know
about democracy and fundamentalism".
[..]
> > > * struct choice should have a priority indicator in it. This will
> > >   reduce the number of "special" want values and remove the necessity
> > >   to have want capped, thus reducing confusion.
> >
> > Aargh. Let me restate my position. Want should work in such a way that
> > we don't need priority indicators. This means:
> >
> > Getting rid of the ability of want to reach > 100
> >
> > 0-80 normal want
> > 80-90 critical
> > 90 buy now.
> >
> > Smart, sensible and obvious.
>
> Now please come up with a scheme of computing such want in a consistent
> way.
That's difficult. I'll have to get back to you on that. I don't fully
understand the current way yet.
> > The priority scheme means a want of 40 with the highest priority would
> > supersede a want of 100 with a lower priority. This is ugly and stupid.
>
> It is more sensible than the present system and implementable too, which I
> cannot say about your proposal yet.
Your proposal is easier to implement, but it's continuing the ugliness.
> G.
Aloha,
RK.
That's why it's always worth having a few philosophers around the place. One
minute it's all Is Truth Beauty and Is Beauty Truth, and Does a Falling Tree in
the Forest Make a Sound if There's No One There to Hear It, and then just when
you think they're going to start dribbling one of 'em says, "Incidentally,
putting a thirty-foot parabolic reflector on a high place to shoot the rays of
the sun at an enemy's ships would be a very interesting demonstration of
optical principles."
{Small Gods, 1992, Terry Pratchett}
==============
THE FREECIV AI
==============
CONTENTS
========
Introduction
The AI crew
Want calculations
Amortize
Estimation of profit from a military operation
Diplomacy
Difficulty levels
Things that need to be fixed
Idea space
INTRODUCTION
============
The Freeciv AI is widely recognized as being as good as, or better than,
the AI of certain other games it is natural to compare it with, at least
militarily. It does, however, lack diplomacy and can only make war. It is
also too hard on novice players and too easy for experienced players.
The code base is not in a good state. It has a number of problems, from
a very messy and not very readable codebase, to many missing features,
to bugs which aren't easy to fix because of the unreadable code. The
problem is that most of the code was written by someone who didn't care
much about code readability. After he left the project, various people
contributed their own, mostly unfinished, hacks without really fixing
the main issues in the AI code, resulting in even more mess.
Another problem is that not all of the code resides in ai/ (which is
currently still linked into the server, though there are plans to
separate it out completely into some kind of client, working name
"civbot"); it is also scattered in little chunks throughout server/.
In addition, server/settlers.c is purely AI code, but most of it is
also used by the autosettlers, so we can't separate it from the server.
This file aims to describe all such problems, in addition to the various
not entirely self-explanatory constants and equations commonly used in
the code.
THE AI CREW
===========
If you wish to get in touch with the Crew about AI problems and have
save games and a method for reproducing the problem in hand, here are
the email addresses:
Petr Baudis <pasky@xxxxxxxxxxx>
Gregory Berkolaiko <gregory.berkolaiko@xxxxxxxxxxxx>
Per I. Mathisen <per.inge.mathisen@xxxxxxxxxxx>
Raahul Kumar <raahul_da_man@xxxxxxxxx>
WANT CALCULATIONS
=================
Build calculations are expressed through a structure called ai_choice.
This has a variable called "want", which determines how much the AI
wants whatever item is pointed to by choice->type. choice->want is
  -199     get_a_boat
  < 0      an error
  == 0     no want, nothing to do
  <= 100   normal want
  > 100    critical want, used to requisition emergency needs
  > 200    frequently used as a cap; when want exceeds this value,
           it is reduced to a lower number
  > ???    probably an error (1024 is a reasonable upper bound)
These are ideal numbers; your mileage while travelling through the code
may vary considerably. Technology and diplomats, in particular, seem to
violate these standards.
AMORTIZE
========
Hard fact:
amortize(benefit, delay) returns benefit * ((MORT - 1)/MORT)^delay
(where ^ = to the power of)
Speculation:
Which is better: to receive $10 annually starting 5 years from now, or
$5 annually starting this year? The function amortize is meant to help
answer this question. To do so, it rescales the future benefit in terms
of today's money.
Suppose we have a constant rate of inflation, x percent. Then in five
years' time, $10 will buy as much as 10*(100/(100+x))^5 dollars will
buy today. Denoting 100/(100+x) by q, we get the general formula: $N
in Y years' time will be equivalent to N*q^Y in today's money. If we
will receive $N every year starting Y years from now, the total amount
receivable (in today's money) is
N*q^Y + N*q^{Y+1} + N*q^{Y+2} + ...
= N*q^Y * (1 + q + q^2 + q^3 + ...)
= N*q^Y / (1-q)
Here we used the formula for the sum of a geometric series. Note that
the factor 1/(1-q) does not depend on the parameters N and Y, so it can
be ignored. In this setting, the current value of MORT = 24 corresponds
to an inflation rate (or rate of expansion of your civ) of about 4.3%.
Most likely this explanation is not what the authors of amortize() had
in mind, but the basic idea is correct: the value of the payoff decays
exponentially with the delay.
The version of amortize used in the military code (military_amortize())
remains a complete mystery.
ESTIMATION OF PROFIT FROM A MILITARY OPERATION
==============================================
This estimation is implemented by the kill_desire function (which isn't
perfect: the multi-victim part is flawed) plus some corrections. In
general,
Want = Operation_Profit * Amortization_Factor
where
* Amortization_Factor is completely beyond me (but it's a function of the
estimated time length of the operation).
* Operation_Profit = Battle_Profit - Maintenance
where
* Maintenance
= (1 shield + Unhappiness_Compensation) * Operation_Time
(here unhappiness is from military unit being away from home)
* Battle_Profit
= Shields_Lost_By_Enemy * Probability_To_Win
- Shields_Lost_By_Us * Probability_To_Lose
That is, Battle_Profit is a probabilistic average. It answers the
question "how much better off, on average, would we be from attacking
this enemy unit?"
DIPLOMACY
=========
At the moment, the AI cannot change its diplomatic state. The AI
starts out in NO_CONTACT mode, and proceeds to WAR on first contact.
The AI knows about friendly units and cities, and considers them to be
neither targets nor dangers. Caravans are sent to friendly cities, and
ships that do not have targets are sent on a goto to the closest allied
port.
It is currently totally trusting and does not expect diplomatic states
to ever change. So if one is to add active diplomacy to the AI, this
must be changed.
For people who want to hack at this part of the AI code, please note
* pplayers_at_war(p1,p2) returns FALSE if p1==p2
* pplayers_non_attack(p1,p2) returns FALSE if p1==p2
* pplayers_allied(p1,p2) returns TRUE if p1==p2
i.e. we never consider a player to be at war with himself, we never
consider a player to have any kind of non-attack treaty with himself,
and we always consider a player to have an alliance with himself.
Note, however, that while perfectly logical, player_has_embassy(p1,p2)
does _not_ return TRUE if p1==p2. This should probably be changed.
The introduction of diplomacy is fraught with many problems. One is
that it usually benefits only human players, not AI players, since
humans are so much smarter and know how to exploit diplomacy, while for
AIs it mostly only adds constraints on what they can do. Another is
that it can be very difficult to write diplomacy that is useful for,
and not in the way of, modpacks. This means diplomacy either has to be
optional, or have fine-grained controls, set from rulesets, on who can
make what diplomatic deals with whom.
But one diplomacy possibility that would be easy to introduce is an
initial PEACE mode for AIs under 'easy' difficulty. This could be
turned to WAR by a simple countdown timer started after first contact.
This way 'easy' would be easier - a frequently requested feature.
DIFFICULTY LEVELS
=================
There are currently three difficulty levels: 'easy', 'medium' and
'hard'. The 'hard' level is no-holds-barred, while 'medium' has a
number of handicaps. In 'easy', the AI also does random stupid things
through the ai_fuzzy function.
The handicaps used are:
H_RATES, can't set its rates beyond government limits
H_TARGETS, can't target anything it doesn't know exists
H_HUTS, doesn't know which unseen tiles have huts on them
H_FOG, can't see through fog of war
The other defined handicaps (in common/player.h) are not currently in
use.
THINGS THAT NEED TO BE FIXED
============================
* The AI difficulty levels aren't fully implemented. Either add more
handicaps to 'easy', or use easy diplomacy mode.
* AI doesn't understand when to become a DEMOCRACY or FUNDAMENTALISM
government. Actually it doesn't evaluate governments much at all.
* Cities don't realize units are on their way to defend them.
* AI doesn't understand that some wonders are obsolete, that some
wonders become obsolete, and doesn't upgrade units.
* AI doesn't understand how to favor trade when it needs luxury.
* AI builds cities without regard to danger at that location.
* Food tiles should be less wanted if city can't expand.
* AI won't build cross-country roads outside of city radii.
[Note: There is a patch that permits the AI to build cross-country
roads/rail. Unfortunately, it makes it too easy for the AI to be
invaded.]
* Non-military units need to stop going where they will be killed.
* Locally_zero_minimap is not implemented when wilderness tiles
change.
* Settlers won't treat about-to-be-built ferryboats as ferryboats.
* If no path to chosen victim is found, new victim should be chosen.
* AI doesn't know how or when to make trade routes. It should try to
build trade routes for its best cities (most building bonuses and
least corruption) by moving caravans there and changing homecity.
* Boats sometimes sail away from landlocked would-be passengers.
* Ferryboats crossing at sea might lead to unwanted behavior.
* Emergencies in two cities at once aren't handled properly.
* AI sometimes will get locked into a zero science rate and stay
there.
* Explorers will not use ferryboats to get to new lands to explore.
* AI autoattack is never activated (probably a good thing too) (PR#1340)
* AI sometimes believes that wasting a horde of weak military units to
kill one enemy is profitable (PR#1340)
THINGS PEOPLE ARE WORKING ON (for latest info contact the Crew)
===============================================================
* teach AI to use planes and missiles. [GB]
* teach AI to use diplomats [Per]
* teach AI to do diplomacy (see Diplomacy section) [Per]
IDEA SPACE
==========
* Friendly cities can be used as beachheads
* Assess_danger should acknowledge positive feedback between multiple
attackers
* Urgency and grave_danger should probably be ints showing magnitude
of danger
* It would be nice for bodyguard and charge to meet en-route more
elegantly.
* It may be correct to starve workers instead of allowing disorder to
continue. Ideal if CMA code or similar is used here.
* Bodyguards could be used much more often. Actually it would be nice
if the bodyguard code was taken out and shot, too. Then rewritten, of
course.
* struct choice should have a priority indicator in it. This will
reduce the number of "special" want values, thus reducing confusion.