Complete.Org: Mailing Lists: Archives: freeciv-dev: June 2002:
To: freeciv development list <freeciv-dev@xxxxxxxxxxx>
Subject: [Freeciv-Dev] Re: packet batches? (was: [Patch] Making city report list faster)
From: Reinier Post <rp@xxxxxxxxxx>
Date: Mon, 10 Jun 2002 20:27:13 +0200

On Mon, Jun 10, 2002 at 08:04:35PM +0200, Raimar Falke wrote:
> On Mon, Jun 10, 2002 at 07:52:40PM +0200, Reinier Post wrote:
> > The amount of possible "cheating" is limited by the fixed TCP packet size.
> 
> I'm not sure about this. The server executes all requests which have
> found its way into the receiving socket buffer of the connection. Why
> should this be limited to a TCP packet size if the server is slow? 

Good point, with AI the server can become very slow.

So adding an AI player decreases the interleaving of client requests.
This means that if A, B and C are playing, and A attacks B's city
with 5 units at once, A has a much better chance of success when C is
an AI player!

> $ cat /proc/sys/net/core/rmem_default
> 65535
> 
> So the default socket buffer size of incoming data is 64k.
 
> > At the moment, the client only calls connection_do_buffer() in the CMA.
> 
> > Perhaps the server should prohibit the client from sending multiple
> > move requests in the same TCP packet.
> 
> We could make it more fair by executing just one request per
> connection/player at a time. But is this really a problem?

I think so.  Players can hack their clients
to improve their attacking chances.

It's only a problem with moves, I think, and only
if the server is normally fast enough to process them one at a time.

We'd have to look at the typical number of moves processed in one batch;
it's easy to hack the server to log that information.

>       Raimar

-- 
Reinier

