Just a bit of background first: lots of small packets work well for digital connections, but on ADSL, where processing latency is higher and bandwidth more limited, lots of small packets aren't as beneficial. Latency increases reciprocally as the remaining bandwidth decreases (how sharply depends on the technology).
However, splitting the rate into lots of small, quickly transmitted packets means that updates arrive more quickly.
Hence there is always a compromise between processing latency and update rate.
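To make that compromise concrete, here's a back-of-envelope sketch. All the numbers are illustrative (a hypothetical 512 kbit/s ADSL link and standard UDP/IPv4 header sizes), not measurements from any particular game:

```python
# Illustrative trade-off: small packets mean frequent updates but high
# per-packet header overhead; large packets are efficient but take
# longer to serialize onto a slow link.
LINK_BPS = 512_000          # hypothetical ADSL link, bits per second
HEADER_BYTES = 28           # UDP (8) + IPv4 (20) headers per packet

def serialization_ms(payload_bytes):
    """Time to push one packet (payload + headers) onto the wire."""
    total_bits = (payload_bytes + HEADER_BYTES) * 8
    return total_bits / LINK_BPS * 1000

def overhead_fraction(payload_bytes):
    """Share of each packet's bytes spent on headers rather than data."""
    return HEADER_BYTES / (payload_bytes + HEADER_BYTES)

for size in (20, 300, 700, 1400):
    print(f"{size:5d} B payload: "
          f"{serialization_ms(size):6.2f} ms on the wire, "
          f"{overhead_fraction(size) * 100:5.1f}% header overhead")
```

A 20-byte payload spends over half its bytes on headers, while a 1400-byte one takes roughly ten times longer to serialize: that's the compromise in one table.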
What I noticed is that NetUpdateMaxPacketSize affects the server-to-client packet count and packet size: the server rate is split up according to the maximum packet size. The range is 20-1400; 20 gives nearly 200 packets per second, which is unusable. With high values the update rate drops to around 50/s (obviously it won't drop below the base update rate, which is controlled by NetUpdateSendPeriod).
What concerns me a lot here is that the default is 300. That might be fine for servers with only a few players, but as soon as the server starts getting busy your packets will get fragmented, and more packets mean more overhead, which means more bandwidth used, which means even higher latency (on top of the increased update latency).
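A rough model of that split: each server tick's update payload gets fragmented into packets no larger than the cap. The numbers below are my own guesses (a 50 Hz tick and an 80-byte average update), chosen only because they reproduce the figures quoted above, not taken from the game:

```python
# Rough model: each server update is fragmented into packets of at most
# NetUpdateMaxPacketSize bytes. UPDATE_RATE and BYTES_PER_UPDATE are
# hypothetical values, not measured from the game.
import math

UPDATE_RATE = 50          # server updates per second (assumed)
BYTES_PER_UPDATE = 80     # average update payload per client (assumed)

def packets_per_second(max_packet_size):
    """Packets per second once updates are split by the size cap."""
    fragments = math.ceil(BYTES_PER_UPDATE / max_packet_size)
    return UPDATE_RATE * fragments

print(packets_per_second(20))    # 200 -- the "nearly 200/s" case
print(packets_per_second(700))   # 50  -- large caps just hit the update rate
```

Once the cap exceeds the update payload, raising it further buys nothing: you're pinned at the base update rate.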
Cfg.PushLatency is analogous to negative cl_timenudge in Q3. As in Q3, it is important to get this setting right, and that takes some experimentation.
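The general idea behind a timenudge/PushLatency-style offset is that the client renders the world slightly in the past and interpolates between buffered server snapshots. This is a minimal sketch of that technique in general, not the engine's actual code:

```python
# Sketch of snapshot interpolation with a PushLatency-style offset:
# render the world at (now - offset) and interpolate between the two
# buffered snapshots that straddle that time. Illustration only.

def sample(snapshots, now_ms, push_latency_ms):
    """snapshots: list of (timestamp_ms, position), oldest first."""
    render_time = now_ms - push_latency_ms
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)   # linear interpolation
    return snapshots[-1][1]                 # fall back to newest state

snaps = [(0, 0.0), (20, 10.0), (40, 20.0)]  # one snapshot every 20 ms
print(sample(snaps, 50, 25))  # renders at t=25 -> position 12.5
```

Too small an offset and you run out of snapshots to interpolate between (visible jitter); too large and everything you see is needlessly stale. That's why it needs tuning per connection.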
I also used the new prediction, but it is important to get your basic connection settings right before turning prediction on. Prediction should just *help*; you need a playable connection without it first to get the best effect from it.
This is what seemed to work well for me:
Cfg.NetUpdateMaxPacketSize = 700
Cfg.NetUpdateSendPeriod = 10
Cfg.PlayerPrediction = true
Cfg.NewPrediction = true
Cfg.PushLatency = 25
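For a rough sense of what these numbers imply, here's a worst-case bandwidth estimate. Two assumptions of mine that may be wrong: that SendPeriod is in milliseconds, and that every update fills a full-size packet (real traffic is far smaller, so this is only a ceiling):

```python
# Back-of-envelope ceiling for the settings above. Assumes SendPeriod
# is in milliseconds and every update fills a maximum-size packet --
# both assumptions, so treat this as a worst case only.
MAX_PACKET = 700            # Cfg.NetUpdateMaxPacketSize
SEND_PERIOD_MS = 10         # Cfg.NetUpdateSendPeriod, assumed ms
HEADER_BYTES = 28           # UDP + IPv4 headers per packet

updates_per_sec = 1000 / SEND_PERIOD_MS
worst_case_bps = updates_per_sec * (MAX_PACKET + HEADER_BYTES) * 8
print(f"{updates_per_sec:.0f} updates/s, "
      f"worst case {worst_case_bps / 1000:.0f} kbit/s per client")
```

Even as a ceiling that's a lot for ADSL, which is another reason not to let the packet size (and player count) run away from you.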
Let me know what works well for you.