After searching around, it seems that synchronizing the clocks of two or more computers is not a trivial task. A protocol like NTP does a good job, but it's supposedly too slow and complex to be practical in games. It also runs over UDP, which won't work for me because I'm working with WebSockets, and those don't support UDP.
I did find a method here, however, which seems relatively simple:
It claims to synchronize clocks to within 150ms (or better) of each other.
I don't know if that will be good enough for my purposes, but I haven't been able to find a more precise alternative.
Here's the algorithm it provides:
A simple clock synchronization technique is required for games. Ideally, it should have the following properties: reasonably accurate (150ms or better), quick to converge, simple to implement, able to run on stream-based protocols such as TCP.
A simple algorithm with these properties is as follows:
- Client stamps current local time on a "time request" packet and sends to server
- Upon receipt by server, server stamps server-time and returns
- Upon receipt by client, client subtracts the sent time from the current time and divides by two to compute latency. It subtracts its current time from the server time to determine the client-server time delta and adds the half-latency to get the correct clock delta. (So far this algorithm is very similar to SNTP. A sketch of this computation appears after the list.)
- The first result should immediately be used to update the clock since it will get the local clock into at least the right ballpark (at least the right timezone!)
- The client repeats steps 1 through 3 five or more times, pausing a few seconds each time. Other traffic may be allowed in the interim, but should be minimized for best results
- The results of the packet receipts are accumulated and sorted in lowest-latency to highest-latency order. The median latency is determined by picking the mid-point sample from this ordered list.
- All samples above approximately 1 standard-deviation from the median are discarded and the remaining samples are averaged using an arithmetic mean.
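To make the request/response and delta math concrete, here is a minimal client-side sketch in TypeScript. It assumes a WebSocket server that answers every `timeRequest` message with a `timeResponse` carrying its own timestamp; the message shape and field names are my own invention, not something specified by the article.

```typescript
// A sample pairs the estimated one-way latency with the estimated clock delta
// (server clock minus client clock), both in milliseconds.
interface TimeSample {
  latency: number;
  clockDelta: number;
}

// Steps 1–3: stamp local time, send, and compute latency and clock delta
// from the server's reply. Assumes JSON text messages over the socket.
function requestTimeSample(ws: WebSocket): Promise<TimeSample> {
  return new Promise<TimeSample>((resolve) => {
    const sentAt = Date.now(); // step 1: stamp local time on the request

    const onMessage = (event: MessageEvent) => {
      const msg = JSON.parse(event.data as string);
      if (msg.type !== "timeResponse") return; // ignore unrelated traffic

      const receivedAt = Date.now();               // step 3: client receive time
      const latency = (receivedAt - sentAt) / 2;   // half the round trip
      // Server's clock minus ours, corrected for the reply's time in flight.
      const clockDelta = msg.serverTime - receivedAt + latency;

      ws.removeEventListener("message", onMessage);
      resolve({ latency, clockDelta });
    };

    ws.addEventListener("message", onMessage);
    ws.send(JSON.stringify({ type: "timeRequest", clientTime: sentAt }));
  });
}

// Steps 4–5: repeat five or more times, pausing a few seconds between requests.
async function collectSamples(ws: WebSocket, count = 5, pauseMs = 2000): Promise<TimeSample[]> {
  const samples: TimeSample[] = [];
  for (let i = 0; i < count; i++) {
    samples.push(await requestTimeSample(ws));
    if (i < count - 1) {
      await new Promise((r) => setTimeout(r, pauseMs));
    }
  }
  return samples;
}
```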
The only subtlety of this algorithm is that samples more than one standard deviation above the median are discarded. The purpose of this is to eliminate packets that were retransmitted by TCP. To visualize this, imagine that a sample of five packets was sent over TCP and there happened to be no retransmission. In this case, the latency histogram will have a single mode (cluster) centered around the median latency. Now imagine that in another trial, a single packet of the five is retransmitted. The retransmission will cause this one sample to fall far to the right on the latency histogram, on average twice as far away as the median of the primary mode. By simply cutting out all samples that fall more than one standard deviation away from the median, these stray modes are easily eliminated, assuming that they do not comprise the bulk of the statistics.
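And here is a sketch of that statistics step: sort by latency, find the median, discard anything more than one standard deviation above it, and average the clock deltas of the samples that survive. It reuses the `TimeSample` type from the previous sketch.

```typescript
// Given the collected samples, return the clock delta to apply to the local clock.
function computeClockDelta(samples: TimeSample[]): number {
  const sorted = [...samples].sort((a, b) => a.latency - b.latency);
  const medianLatency = sorted[Math.floor(sorted.length / 2)].latency;

  // Standard deviation of the latencies around their mean.
  const meanLatency = sorted.reduce((sum, s) => sum + s.latency, 0) / sorted.length;
  const variance =
    sorted.reduce((sum, s) => sum + (s.latency - meanLatency) ** 2, 0) / sorted.length;
  const stdDev = Math.sqrt(variance);

  // Drop likely TCP retransmissions: anything more than one standard
  // deviation above the median latency.
  const kept = sorted.filter((s) => s.latency <= medianLatency + stdDev);

  // Arithmetic mean of the remaining clock deltas.
  return kept.reduce((sum, s) => sum + s.clockDelta, 0) / kept.length;
}
```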
This solution appears to answer my question well, because it synchronizes the clock once and then stops, allowing time to flow linearly. My initial method, by contrast, updated the clock constantly, causing time to jump around a bit as snapshots were received.
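For what it's worth, this is roughly how I picture using it: run the sync once when the connection opens, keep the resulting delta, and derive server time from the local clock from then on, so nothing jumps when later snapshots arrive. The `syncOnce` helper below is hypothetical and just ties the earlier sketches together.

```typescript
// Run the sync once and return a function that reports estimated server time.
async function syncOnce(ws: WebSocket): Promise<() => number> {
  const samples = await collectSamples(ws);
  const clockDelta = computeClockDelta(samples);
  // Server time is now a pure function of the local clock plus a fixed offset,
  // so game time advances monotonically after the initial sync.
  return () => Date.now() + clockDelta;
}
```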