
[freeciv-ai] Re: time table for ai restructuring

To: "Per I. Mathisen" <per@xxxxxxxxxxx>
Cc: freeciv-ai@xxxxxxxxxxx
Subject: [freeciv-ai] Re: time table for ai restructuring
From: "Ross W. Wetmore" <rwetmore@xxxxxxxxxxxx>
Date: Wed, 17 Jul 2002 00:31:40 -0400

At 12:22 PM 02/07/16 +0000, Per I. Mathisen wrote:
>On Mon, 15 Jul 2002, Ross W. Wetmore wrote:
>> There is a terminology problem that I think is skewing your perceptions
>> and/or restricting your options. To repeat an earlier comment, agents
>> are not AI, and their purpose as user GUI tools is not the model to use
>> for a real AI.
>
>I believe the problem is one of terminology only. I don't disagree with
>you here at all.

Ok, the final statement is reassuring as well :-).

I do think we all (including Raimar) have the same general goals, but
the devil is always in the details and in the choice of path to get there.
I always prefer lazy choices over hasty prejudgments :-).

[...]
>> Library routines might be shared by the client and whatever form the AI
>> takes.
>
>Actually I don't think an "AI-client" needs to share any code with the GUI
>clients. It will use code from common/ and client/ai/common/.

Shouldn't rule out server/ai/common if you are making these distinctions.

There is also a role for server-side processing with a well-defined API
accessible over the packet stream.

Which is one reason I prefer to focus more on the elements and not on
where they are located yesterday, today or tomorrow. Location, for me, is
an implementation detail of the moment, or one driven by need/efficiency
concerns.

[...]
>In some parts of the AI code we use packet structures which we push over
>to handle_* functions that are the same as those on the receiving end of
>packets. In order to turn these into code acceptable for a client they can
>just be renamed to calls to the appropriate send_packet_* call.
>
>Rewriting more code to use this kind of interface will bring us much
>closer to clientifying the AI.

Agreed.
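
To make that concrete, here is a toy sketch of the switch you describe.
The names (packet_ai_request, handle_ai_request, send_packet_ai_request)
are made-up stand-ins for illustration, not the actual generated packet
code:

/* Toy sketch only -- packet_ai_request, handle_ai_request and
   send_packet_ai_request are hypothetical names, not real Freeciv
   identifiers. */
#include <stdio.h>

struct packet_ai_request {
  int unit_id;
  int x, y;                       /* requested destination */
};

/* Server-side handler: the same function the packet layer would call
   when such a packet arrives from a connection.  Stubbed here. */
static void handle_ai_request(const struct packet_ai_request *preq)
{
  printf("server: move unit %d to (%d,%d)\n",
         preq->unit_id, preq->x, preq->y);
}

/* Client-side counterpart: would serialize the struct onto the packet
   stream.  Also stubbed here. */
static void send_packet_ai_request(const struct packet_ai_request *preq)
{
  printf("client: sending request for unit %d\n", preq->unit_id);
}

/* The AI fills the struct once; which call it ends in is the only
   thing that changes when the AI is "clientified". */
static void ai_issue_request(int unit_id, int x, int y, int as_client)
{
  struct packet_ai_request req = { unit_id, x, y };

  if (as_client) {
    send_packet_ai_request(&req);   /* over the wire to the server */
  } else {
    handle_ai_request(&req);        /* direct call inside the server */
  }
}

int main(void)
{
  ai_issue_request(42, 10, 7, 0);   /* current server-side AI path */
  ai_issue_request(42, 10, 7, 1);   /* future client-AI path */
  return 0;
}

The point being that only the last call changes; filling the struct and
the decision logic behind it stay where they are.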

[...]
>> Once the AI runs essentially in its own part of the server process, it
>> would be trivial to move it to a forked process talking over a stream
>> connector between the packet queues and the server. At the top level
>> the current AI is not far off this goal.
>
>Here I agree with Raimar: This is a very bad idea. In order to become a
>client, you need to instantiate your own copy of the common code. The
>alternative you paint here means duplicating the common code inside the
>same process - a much bigger job.

I don't think we are fundamentally at odds here. Using the common code
base to do a lot of this makes sense. Updating or rewriting the common
code to do it better in spots (if needed) and using that is not much
different. When the AI runs as its own process it should reuse as much
of the current code as it can, wherever that code comes from. The same
applies at every step along the way. Some of its code may be AI-specific
rather than client- or server-specific as well.
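
As a miniature illustration of the forked-process idea (a POSIX-only toy,
nothing taken from the Freeciv tree): the parent stands in for the server,
the child for the AI, and a socketpair plays the stream connector between
the packet queues:

/* Toy sketch: forked AI process talking to the server over a stream
   connector.  The string payloads stand in for real packets. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
  int sv[2];

  if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) {
    perror("socketpair");
    return 1;
  }

  pid_t pid = fork();
  if (pid == 0) {
    /* Child: the AI process.  Reads a "packet" and answers. */
    char buf[64];
    close(sv[0]);
    ssize_t n = read(sv[1], buf, sizeof(buf) - 1);
    if (n > 0) {
      buf[n] = '\0';
      printf("ai: got \"%s\", deciding...\n", buf);
      const char *reply = "move unit 42";
      write(sv[1], reply, strlen(reply));
    }
    close(sv[1]);
    return 0;
  }

  /* Parent: the server.  Pushes a packet down the connector and waits
     for the AI's request to come back. */
  close(sv[1]);
  const char *msg = "start of turn";
  write(sv[0], msg, strlen(msg));

  char buf[64];
  ssize_t n = read(sv[0], buf, sizeof(buf) - 1);
  if (n > 0) {
    buf[n] = '\0';
    printf("server: AI requested \"%s\"\n", buf);
  }
  close(sv[0]);
  waitpid(pid, NULL, 0);
  return 0;
}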

I have no problem running multiple instances of parts of the common code
(i.e. one per AI) inside the server process, at some point along the way.
I have no problem not doing it - it becomes an implementation detail. If
the current code were properly structured to take instance data, rather
than relying so much on globals, sharing all the code would be no problem,
as the data would always be separate. At the start some tricks might be
needed to run everything inside a single process space because of the
code's current state. And the point at which it moves off to a separate
process can be governed by which way is faster or easier, or which
complications one wants to tackle first.
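
A toy illustration of the instance-data point (again, none of these names
come from the Freeciv tree):

/* Sketch: instance data passed in explicitly instead of file-scope
   globals, so several AI players can share one copy of the code. */
#include <stdio.h>

/* Before: state in a global means only one instance per process. */
static int want_city_global;

/* After: the same state packed into a context that is passed in. */
struct ai_context {
  int player_id;
  int want_city;
};

static void ai_eval(struct ai_context *ctx)
{
  /* All reads/writes go through ctx instead of the global. */
  ctx->want_city += ctx->player_id;     /* placeholder "evaluation" */
}

int main(void)
{
  struct ai_context a = { .player_id = 1, .want_city = 0 };
  struct ai_context b = { .player_id = 2, .want_city = 0 };

  ai_eval(&a);
  ai_eval(&b);
  printf("player 1 wants %d, player 2 wants %d\n",
         a.want_city, b.want_city);

  (void)want_city_global;   /* the global is only here for contrast */
  return 0;
}

With the context passed explicitly, two AI players can run through the
same code in one process without stepping on each other's state.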

There are going to have to be new elements developed for the AI to deal
with data limitations, so its concept or use of "common" elements and the 
server's or client's will definitely be different in some respects.

I do have a bit of a problem with ruling things out a priori, as this
smacks of hasty pre-judgments :-).

[...]
>Cleanups should definitely go in first.

It is nice to be able to refactor and fix things in a testable environment.

[...]
>A write from scratch should preferably be done as a client AI, and can
>live comfortably side-by-side with the current AI. We already have two
>such initiatives: Raimar's agents and Vasco's BORG AI.

Since we already have a server AI, and understand its uses and limitations,
and since IMHO fixing it rather than rewriting it is more efficient, I agree.

I haven't seen Vasco's BORG AI, but from the discussion it seems to be
sufficiently different from Raimar's approach that the lessons will be 
useful and not redundant. That is good.

And as long as we remember that Raimar's agents are still human-oriented
tools and not real AI (at least for some time to come :-).

As one of many initiatives, the agent work is very beneficial though. It
will help to ensure that shared code is modular and generic if it has a
quite different set of needs and priorities from the rest of the
development.

>> Neither should
>> any evolutionary or other process that people are strongly interested
>> in be actively discouraged. Vigorous discussion is of course ok, but
>> keeping options open at this point is a good thing (TM).
>
>That is what is most important to me: We should choose the way to do this
>that least hampers development. How much we improve the AI before we start
>putting it into client directory or when or how we do it is secondary.

As I said, I like this sentiment very much. 

>Yours
>Per

Cheers,
RossW
=====



