Some may now conclude that the theory of relativity has received yet another confirmation with the discovery that the rate at which an aluminum-ion clock "ticks" is faster at higher elevations and slower at lower ones. Researchers in Boulder, Colorado have shown that a difference in elevation as small as one foot has a measurable impact on the "speed of time."
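To put a rough number on that, the fractional rate shift for a small height change is approximately g·Δh/c² in the weak-field approximation. A back-of-the-envelope sketch (the variable names are mine):

```python
# Back-of-the-envelope: gravitational time dilation for a 1-foot height change.
# Fractional rate shift ≈ g * Δh / c² (weak-field approximation).

g = 9.80665          # standard gravity, m/s²
c = 299_792_458.0    # speed of light, m/s
dh = 0.3048          # one foot, in meters

fractional_shift = g * dh / c**2        # ~3.3e-17
seconds_per_day = 86_400
drift_per_day = fractional_shift * seconds_per_day  # ~3 picoseconds per day

print(f"fractional shift: {fractional_shift:.2e}")
print(f"drift per day:    {drift_per_day:.2e} s")
```

A few picoseconds per day per foot of elevation is invisible to NTP, but it is exactly the scale the next paragraphs worry about.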
Most computers on the planet synchronize their time across the network with an atomic-clock NTP service, which works fine for coarse-grained precision. However, Einstein's work on synchronization is worth reading up on if we are to design for picosecond accuracy in a distributed infrastructure.
Time servers are distributed globally, and every one of them is subject to the force of gravity at a different elevation. The question is: can we really trust the value of time NOW on a nanoscale? And without a world of quantum computers integrated with synchronous data teleportation, how can we?
Even with a daily NTP time synchronization message sent to every server on the planet (assuming all of the packets were actually processed and the time change was executed simultaneously), the CPU clocks of globally distributed computers will still drift apart due to time dilation.
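As an illustration only, assuming two otherwise-perfect clocks that differ solely by a gravitational rate shift, here is how much they diverge between daily syncs (the elevation difference is an invented example):

```python
# Hypothetical sketch: two ideal oscillators at different elevations,
# resynchronized once per day, still disagree just before each sync.

g = 9.80665          # standard gravity, m/s²
c = 299_792_458.0    # speed of light, m/s

def divergence(delta_h_m: float, seconds_since_sync: float) -> float:
    """Seconds of disagreement accumulated since the last synchronization."""
    return (g * delta_h_m / c**2) * seconds_since_sync

# A server 1,000 m above its peer, one day after the last sync:
print(f"{divergence(1000.0, 86_400):.2e} s")  # on the order of 10 nanoseconds
```

Ten nanoseconds is negligible for a web login, but it is 5% of the 200-nanosecond window discussed below.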
And simply increasing the frequency of NTP synchronization events to once per second is not workable, since that would put too much load on all systems. Besides, NIST only allows clients to connect once every 4 seconds, and anything faster risks being treated as a potential denial-of-service technique. And since an enterprise doesn't want to integrate every server directly with an external NTP server, there is also latency in updating the clocks of privately hosted time servers.
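In practice, clients enforce this politeness themselves. A hedged example using chrony's `server` directive (the poll intervals are illustrative, not a recommendation): `minpoll 6` means no more often than every 2⁶ = 64 seconds, well clear of NIST's 4-second limit.

```
# chrony.conf fragment: poll NIST between every 64 s and every 1024 s.
server time.nist.gov iburst minpoll 6 maxpoll 10
```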
From a security technology perspective, I find this matter quite interesting as it relates to when a globally distributed system session should expire. The challenge of using time-since-last-trusted-event as a factor in authentication and authorization decisions is even greater when transactions are expected to execute with nanosecond latencies and millisecond timeouts.
When connecting to a system that has only one front door, expiring a user session is relatively simple: the session typically lasts up to 20 minutes after the last authorized event or transaction execution. But in a system of multiple computers integrated globally to execute transactions, getting a list of every event that has executed in that collective system in the past 200 nanoseconds becomes very difficult as we approach the paradigm of quantum computing and nanoevents.
When searching the log files of 200 globally distributed computers for the events executed in the past 200 nanoseconds, counting back from the point in time NOW, there is a good probability that the list of events reported back would vary from computer to computer, even if all of the events were executed locally and simultaneously, ruling out network latency.
The reason is that the present value of NOW varies from location to location due to the time-dilation effect of gravity on matter. One may even deduce that since time moves faster at higher elevations, more events will execute in the past 200 nanoseconds on a server located in position RU42, relatively perceived, that is.
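A toy simulation of that effect, with invented event timestamps and clock skews: the same "last 200 ns" query returns different event sets depending on each server's local value of NOW.

```python
# Toy model: each server's local clock carries a tiny offset from "true" time.
# The same query ("events in the last 200 ns") then selects different events.

WINDOW_NS = 200

# Hypothetical true timestamps (ns) of events that all executed locally.
events_ns = [9_850, 9_910, 9_975, 10_040]

true_now_ns = 10_050
clock_skew_ns = {"server_a": 0, "server_b": -60}  # illustrative skews

def events_in_window(local_now_ns: int) -> list[int]:
    """Events whose timestamps fall within the last WINDOW_NS of local NOW."""
    return [t for t in events_ns if local_now_ns - WINDOW_NS <= t <= local_now_ns]

for name, skew in clock_skew_ns.items():
    print(name, events_in_window(true_now_ns + skew))
# server_a sees all four events; server_b, running 60 ns behind, misses one.
```

The skews here are made up, but the mechanism is the point: with no shared NOW, "the past 200 nanoseconds" is not a single well-defined set of events.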