Socket.IO (TCP) holds back later packets while it retransmits a lost
one (head-of-line blocking), which stalls worldUpdate delivery on
lossy long-distance links — exactly the pattern game state suffers
worst from. WebRTC DataChannels in unreliable mode (ordered:false,
maxRetransmits:0) drop late packets instead of queueing them, which
is what we want for high-frequency state sync.
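
For reference, a minimal sketch of the channel setup (channel name
and handlers are illustrative, not the actual integration code):

    // An unordered, no-retransmit DataChannel for state sync.
    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel('gameCommand', {
      ordered: false,     // don't hold later messages behind a lost one
      maxRetransmits: 0,  // drop lost messages instead of retransmitting
    });
    channel.onopen = () => channel.send(JSON.stringify({ t: 'hello' }));
    channel.onmessage = (event) => {
      // Messages can arrive out of order or not at all; consumers
      // should keep only the newest state per object.
      console.log('received', JSON.parse(event.data));
    };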
Adds a per-user WebRTCTransport on top of the existing Socket.IO
connection. Socket.IO stays in charge of bootstrap, signaling
(SDP/ICE exchange), and control messages — only gameCommand payloads
get routed onto the unreliable channel once it's open. If WebRTC
fails to negotiate, gameCommand transparently falls back to
Socket.IO, so the game keeps working unchanged.
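
Roughly, the routing decision looks like this (function and event
names here are illustrative, not the real WebRTCTransport API):

    function sendGameCommand(socket, rtcChannel, payload) {
      if (rtcChannel && rtcChannel.readyState === 'open') {
        rtcChannel.send(JSON.stringify(payload)); // unreliable fast path
      } else {
        socket.emit('gameCommand', payload);      // Socket.IO fallback
      }
    }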
A new StatsLogger writes per-session JSONL events (session_start,
webrtc_ready with negotiation time, per-second stats with transport,
RTT samples, recv/send rates, seq gaps) so we can compare real-world
runs (e.g. Germany server <-> Korea client) instead of guessing.
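
Example of what a session's JSONL might look like (the exact field
names are assumptions; the real events carry at least the data
listed above):

    {"event":"session_start","sessionId":"abc123","ts":1700000000000}
    {"event":"webrtc_ready","negotiationMs":412,"ts":1700000000412}
    {"event":"stats","transport":"webrtc","rttMs":[181,190],"recvPerSec":14,"sendPerSec":20,"seqGaps":1,"ts":1700000001412}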
URL flag ?webrtc=0 forces fallback for A/B testing.
scripts/webrtc-browser-test.js spins up a headless Chromium against
a freshly-started server and asserts the unreliable channel opens
and gameCommand traffic actually rides it.
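
The test presumably reduces to something like this (Puppeteer and
the window.__transportStats hook are assumptions, not necessarily
what the script uses):

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();
      await page.goto('http://localhost:3000/?webrtc=1');
      // Fail unless the unreliable channel opens within 10 seconds.
      await page.waitForFunction(
        () => window.__transportStats && window.__transportStats.webrtcOpen,
        { timeout: 10000 }
      );
      const stats = await page.evaluate(() => window.__transportStats);
      if (!stats.gameCommandsViaWebrtc) {
        throw new Error('gameCommand traffic is not riding WebRTC');
      }
      await browser.close();
    })();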
Self-contained test under poc-webrtc/ that does not touch the game.
Spins up an Express + WebSocket signaling + node-datachannel server
alongside a Socket.IO server, serves a simple browser client that
runs the same game-like traffic pattern (14Hz worldUpdates, input
events, ping/pong) over either transport based on a URL flag.
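
The core of the traffic pattern is simple to sketch (send() stands
in for whichever transport the URL flag selected):

    function send(msg) { /* Socket.IO emit or DataChannel send */ }

    const WORLD_UPDATE_HZ = 14;
    let seq = 0;
    setInterval(() => {
      send({ type: 'worldUpdate', seq: seq++, ts: Date.now() });
    }, 1000 / WORLD_UPDATE_HZ);
    setInterval(() => {
      send({ type: 'ping', ts: Date.now() }); // echoed back as 'pong'
    }, 1000);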
Captures per-session stats to a JSONL file and ships an analyze.js
that prints a per-(transport, phase) summary of RTT percentiles,
receive rate, and seq-gap counts, so the TCP-vs-UDP-style comparison
becomes quantitative rather than eyeballed.
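
A sketch of the summarization analyze.js performs, assuming the
JSONL field names from the example above:

    const fs = require('fs');

    // Read the JSONL stats file: one event object per line.
    const events = fs.readFileSync(process.argv[2], 'utf8')
      .split('\n').filter(Boolean).map((line) => JSON.parse(line));

    function percentile(sorted, p) {
      if (!sorted.length) return NaN;
      return sorted[Math.min(sorted.length - 1,
                             Math.floor((p / 100) * sorted.length))];
    }

    // Group per-second stats events by (transport, phase) and summarize.
    const groups = new Map();
    for (const e of events) {
      if (e.event !== 'stats') continue;
      const key = `${e.transport}/${e.phase}`;
      const g = groups.get(key) || { rtt: [], gaps: 0, recv: 0, n: 0 };
      g.rtt.push(...(e.rttMs || []));
      g.gaps += e.seqGaps || 0;
      g.recv += e.recvPerSec || 0;
      g.n += 1;
      groups.set(key, g);
    }
    for (const [key, g] of groups) {
      g.rtt.sort((a, b) => a - b);
      console.log(key, {
        rttP50: percentile(g.rtt, 50),
        rttP95: percentile(g.rtt, 95),
        rttP99: percentile(g.rtt, 99),
        recvPerSec: g.recv / g.n,
        seqGaps: g.gaps,
      });
    }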
Confirms node-datachannel installs and works on this platform and
that the dual-channel (reliable + unreliable) pattern is feasible
to maintain — both prerequisites for the real integration.
The old PUNKBUSTER check compared client-reported position to server
position and snapped the player back when latency made them diverge,
which felt like getting teleported under any real network conditions.
Replaces that with proper client-side prediction + reconciliation:
client tags each input with a sequence number and keeps an input
buffer; server tracks the last processed sequence and reports its
authoritative position via a per-user inputAck alongside each
worldUpdate. The client only corrects when the actual disagreement
exceeds what the unacked input time can explain — so steady-state
movement runs purely on local physics, and only genuine unexpected
events (collisions, being hit) trigger a smooth blend toward the
server state.
Includes adaptive threshold scaling so high-latency sessions don't
trigger false-positive corrections during normal running.
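
A condensed sketch of the reconciliation step, assuming a simple 2D
position model (names and the exact latency scaling are
illustrative):

    // Unacked inputs: { seq, dt (s), dir: {x, y} }.
    const MAX_SPEED = 5;                 // units/s, illustrative
    let localPos = { x: 0, y: 0 };       // locally predicted position
    const pendingInputs = [];

    const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);
    const applyInput = (p, i) => ({
      x: p.x + i.dir.x * MAX_SPEED * i.dt,
      y: p.y + i.dir.y * MAX_SPEED * i.dt,
    });

    // Called for each per-user inputAck riding along a worldUpdate.
    function onInputAck(ack, rttMs) {
      // Drop inputs the server has already processed.
      while (pendingInputs.length &&
             pendingInputs[0].seq <= ack.lastProcessedSeq) {
        pendingInputs.shift();
      }
      // How much disagreement the unacked inputs alone can explain,
      // scaled up on high-RTT sessions to avoid false positives.
      const unackedTime = pendingInputs.reduce((t, i) => t + i.dt, 0);
      const threshold = unackedTime * MAX_SPEED * (1 + rttMs / 1000);

      if (dist(localPos, ack.serverPos) > threshold) {
        // Genuine divergence (collision, being hit): blend toward the
        // server state, then replay unacked inputs on top of it.
        localPos = {
          x: localPos.x + (ack.serverPos.x - localPos.x) * 0.2,
          y: localPos.y + (ack.serverPos.y - localPos.y) * 0.2,
        };
        for (const i of pendingInputs) localPos = applyInput(localPos, i);
      }
      // Otherwise steady-state movement stays purely local.
    }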
The state of the shift modifier is now distributed across the
network; walking speed and animation state update according to it
(see the sketch below).
Fixes #130
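
A minimal sketch of how the modifier state might flow (event and
field names are assumptions; a connected socket and a players Map
are assumed to be in scope):

    const WALK_SPEED = 2, RUN_SPEED = 5;  // illustrative values

    // The local client reports its own modifier state...
    window.addEventListener('keydown', (e) => {
      if (e.key === 'Shift') socket.emit('modifierChanged', { shift: true });
    });
    window.addEventListener('keyup', (e) => {
      if (e.key === 'Shift') socket.emit('modifierChanged', { shift: false });
    });

    // ...and applies what the server relays for everyone else.
    socket.on('modifierChanged', ({ userId, shift }) => {
      const player = players.get(userId);
      player.speed = shift ? WALK_SPEED : RUN_SPEED;
      player.setAnimation(shift ? 'walk' : 'run');
    });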
When a user leaves the channel, some items need to be cleared of
their fingerprint (lastTouchedBy). This feature was broken because
it used the this.gameObjects pool, which was no longer in use.
The channel GameController now emits an event that all items
subscribe to; when it fires, every item carrying the leaving user's
fingerprint clears it (see the sketch below).
Fixes #170
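
The pattern, sketched with Node's EventEmitter (event name and
class shapes are assumptions):

    const { EventEmitter } = require('events');

    class GameController extends EventEmitter {
      onUserLeft(userId) {
        // Broadcast instead of iterating a (possibly stale) pool.
        this.emit('userLeft', userId);
      }
    }

    class Item {
      constructor(controller) {
        this.lastTouchedBy = null;
        controller.on('userLeft', (userId) => {
          if (this.lastTouchedBy === userId) this.lastTouchedBy = null;
        });
      }
    }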
To avoid deep exposure of PlayerController, we refactored it so that
it is no longer visible outside Player. We also renamed
isInBetweenGames to inBetweenRounds. Creation of the
PlayerController moved from the GameController(s) to the channel
Player and the client Me.
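
The resulting ownership, sketched with a private field (simplified;
the real classes carry more state):

    class PlayerController {
      handleInput(input) { /* movement, physics, ... */ }
    }

    class Player {
      #controller = new PlayerController(); // invisible outside Player

      handleInput(input) {
        this.#controller.handleInput(input); // narrow, deliberate surface
      }
    }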