Complete.Org: Mailing Lists: Archives: freeciv-ai: May 2002:
[freeciv-ai] Re: long-term ai goals
To: Gregory Berkolaiko <Gregory.Berkolaiko@xxxxxxxxxxxx>
Cc: Per I Mathisen <per@xxxxxxxxxxx>, freeciv-ai@xxxxxxxxxxx
Subject: [freeciv-ai] Re: long-term ai goals
From: Raimar Falke <rf13@xxxxxxxxxxxxxxxxx>
Date: Sat, 25 May 2002 12:11:47 +0200

On Fri, May 24, 2002 at 05:08:53PM +0100, Gregory Berkolaiko wrote:
> On Fri, 24 May 2002, Per I Mathisen wrote:
> 
> > On Thu, 23 May 2002, Ross W. Wetmore wrote:
> > > If the server returns data which is not governed by the strict rules
> > > it may introduce programmed/random errors into this data.
> > 
> > A good AI is dependent on a world governed by consistent rules, and adding
> > random errors into this world may cause random errors in the code as well.
> > 
> > Not a good idea.
> > 
> > > There are
> > > viable rationales for this effect which can be introduced into the
> > > rulesets and/or controlled by console settings - "Bardic travellers
> > > arrive from <blort> giving you information about its countryside",
> > 
> > The AI could easily just explore undiscovered territory like any other
> > player. This is good also for making the other players nervous and
> > paranoid (so they stop expanding) and for making them believe that the AI
> > is "real" (behaving like a real player and playing on the same terms as
> > them).
> > 
> > > When playing solitary against an AI this is much preferable to a
> > > poor AI that lacks both insight and external sources of information.
> > > Adjustable levels make an excellent learning curve. This also fixes
> > > some of the unfair restrictions on the AI of not being able to move
> > > places it has not yet moved to. A human can figure out how to do
> > > this easily, so such code restrictions don't really hamper them.
> > 
> > I don't see what you are getting at here. The AI and the human have
> > exactly the same restrictions and possibilities.
> 
> I think I read it in one of the earlier emails by Ross and I agree with
> him: the difference is memory.  If two turns ago you (the human) used your
> diplomat to investigate Cairo and saw 15 chariots, you will remember that
> the Egyptians have a big force in Cairo and will keep fortifying your
> recently acquired Alexandria.
> 
> To teach a non-cheating AI to "remember" useful facts is a major project.
> To teach the AI to extrapolate from the known facts (something that humans
> do without even noticing) is probably almost impossible.
> 
> Letting AI know some of the restricted information is a much cheaper way 
> to emulate conscious behaviour.

I currently also think that there has to be a history part in the
client-side AI. This should record all information the client receives,
in a compressed form. A possible compression is to save only the
changes. So you would have: "unit foo (pos, id, owner, hp, ...) moved
from (12,34) to (12,35) at turn 56.7".

No detailed plans yet but I also agree that some kind of memory is
needed.

        Raimar

-- 
 email: rf13@xxxxxxxxxxxxxxxxx
 "We've all heard that a million monkeys banging on a million typewriters
  will eventually reproduce the entire works of Shakespeare.
  Now, thanks to the Internet, we know this is not true."

