[freeciv-ai] Re: learning from experience
Per I. Mathisen wrote:
> In any case, there is a problem: changing context. A strategy that works
> fine in, say, gen 1 may not work as well in gen 2. Some slight changes in
> server options might change the context enough to throw its accumulated
> weights into question.
That problem is really one of poor implementation. The AI needs a feedback
loop that can update its weights, or switch strategies (which is just another
way of saying "update weights"), on a timeframe shorter than a game or a
release cycle.
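As a minimal sketch of such a within-game feedback loop (all names here are
illustrative, not taken from the Freeciv source): each strategy carries a
weight, and the one used in a given turn is reinforced or weakened by a
success signal, with a floor so no strategy is ever ruled out entirely.

```c
#include <assert.h>

/* Hypothetical sketch: per-strategy weights adjusted each turn from a
 * success/failure signal, so learning happens within one game rather
 * than across a release cycle. */

#define NUM_STRATEGIES 3

static double weight[NUM_STRATEGIES] = { 1.0, 1.0, 1.0 };

/* reward > 0 reinforces the strategy used this turn, reward < 0
 * weakens it; the other weights are left untouched. */
static void update_weight(int used, double reward)
{
  weight[used] *= 1.0 + 0.1 * reward;
  if (weight[used] < 0.05) {
    weight[used] = 0.05;  /* never rule a strategy out completely */
  }
}
```

The multiplicative update and the floor are design choices, not requirements;
the point is only that the weights move during play, not between releases.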
> What you describe would be quite useful in order to optimize the AI. We
> could put researched weights into the rulesets. However, I am pessimistic
> about the state of AI research when it comes to writing an AI that can
> figure out new strategies on the fly, on its own, and read something
> meaningful into accumulated statistical data. But I would be happy to be
> proven wrong.
Again, if the implementation is explicitly coded in fine detail, then you are
correct that this is going to be a total failure.
The solution is more likely to succeed if it is developed with a fuzzy
feedback flavour. Small tactical operations might be handled in a
micro-managed fashion, but overall strategy should not be explicitly
planned. Rather, it should be handled as a weighted selection of responses
to changing game state, where feedback parameters adjust the weighting
selection on a chosen timescale.
At any given moment, one makes the best selection based on the
instantaneous set of strategic weights. But over a longer period, one
adjusts or reselects the strategic imperatives based on feedback from
events, needs, successes, and failures.
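These two timescales can be sketched roughly as follows (again, a
hypothetical illustration with made-up names, not actual Freeciv code):
each turn the AI simply takes the strategy with the highest current weight,
and every REVIEW_PERIOD turns the accumulated success scores are folded
back into the weights.

```c
#include <assert.h>

/* Two timescales: choose_strategy() is the instantaneous decision
 * under the current weights; review_strategies() is the slower
 * feedback step that reweights based on accumulated results. */

#define NUM_STRATEGIES 3
#define REVIEW_PERIOD  10

static double weight[NUM_STRATEGIES]  = { 1.0, 1.0, 1.0 };
static double success[NUM_STRATEGIES] = { 0.0, 0.0, 0.0 };

/* Instantaneous decision: best strategy under current weights. */
static int choose_strategy(void)
{
  int best = 0;
  for (int i = 1; i < NUM_STRATEGIES; i++) {
    if (weight[i] > weight[best]) {
      best = i;
    }
  }
  return best;
}

/* Longer-period feedback: fold accumulated success/failure back
 * into the weights, then reset the accumulators. */
static void review_strategies(void)
{
  for (int i = 0; i < NUM_STRATEGIES; i++) {
    weight[i] *= 1.0 + 0.05 * success[i];
    success[i] = 0.0;
  }
}
```

How success[] is filled in (city growth, units lost, tech acquired, and so
on) is exactly the part that stays game-specific; the loop structure itself
does not need to know.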