[ntp:questions] Make NTP timestamps leap-second-neutral (like GPS time)

Michel Hack hack at watson.ibm.com
Wed Jan 7 06:10:05 UTC 2004


 Currently, NTP timestamps are defined to represent UTC based on a sliding
 epoch such that UTC can be derived from seconds-since-epoch using simple
 Gregorian conversion (where each day has exactly 86400 seconds).

 I propose to redefine NTP to be tied to TAI (International Atomic Time)
 but referenced to 2000, so that:  NTP(2000) = UTC(2000) = TAI-32 = GPS-13
 and from now on:  NTP = TAI-32 = GPS-13
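
 The fixed offsets above can be checked arithmetically.  A minimal sketch
 (function and constant names are mine, not from any NTP implementation):

```python
# Fixed offsets in force since the last leap second, as stated in the
# proposal.  GPS time has run at TAI - 19 seconds since the GPS epoch,
# so NTP = TAI - 32 implies NTP = GPS - 13.
TAI_MINUS_NTP = 32   # proposed: NTP = TAI - 32
GPS_MINUS_NTP = 13   # follows from GPS = TAI - 19

def ntp_from_tai(tai):
    """Proposed rule: NTP time is TAI minus a constant 32 seconds."""
    return tai - TAI_MINUS_NTP

def ntp_from_gps(gps):
    """Equivalent rule via GPS time, which runs at TAI - 19 seconds."""
    return gps - GPS_MINUS_NTP
```

 Note that both functions are pure constant offsets: no leap-second
 table is needed for present or future times under the proposal.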

 This is a good time to propose such a change: there have been no leap
 seconds since July 1999.  I wish I had done so sooner (this posting is
 based on an internal memo I wrote a year ago).


 My primary reason for opposing the current definition is to eliminate
 the timestamp jiggle that occurs for several minutes to tens of minutes
 after a leap second event when machines following NTP adjust their epoch
 to absorb the new leap second.  Under the new proposal, there would be no
 clock adjustment due to the leap second -- only the rule for converting
 NTP time to UTC would be affected.

 A second argument is that current NTP timestamps make it difficult to
 measure an interval by subtracting time stamps.  If the end points
 straddle one or more leap seconds, these have to be taken into account
 (based on a table of leap second events).  Worse, if one of the end
 points is within the jiggle period that follows a leap second event
 (LSE), there will be an unpredictable error that could even exceed one
 second (due to overshoot effects during the adjustment period).  If the
 entire interval falls within the jiggle period, the error might be as
 large as two seconds -- a very large relative error, since the interval
 itself would span only several minutes, i.e. possibly 1%.
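
 To illustrate the bookkeeping that UTC-based timestamps force on interval
 measurement (the leap-second instants below are the real 1997 and 1999
 insertions, expressed as Unix timestamps purely for illustration):

```python
# UTC seconds-since-1970 at which a leap second insertion took effect:
# end of 1997-06-30 and end of 1998-12-31.
LEAP_EVENTS = [867715200, 915148800]

def true_interval(t_start, t_end, leap_events=LEAP_EVENTS):
    """Elapsed SI seconds between two UTC-based timestamps: the raw
    difference plus one second for each leap second straddled."""
    straddled = sum(1 for e in leap_events if t_start < e <= t_end)
    return (t_end - t_start) + straddled
```

 Under the proposed TAI-based timestamps, the plain difference t_end -
 t_start would already be correct, and no table would be consulted.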


 An immediate problem is how NTP would adjust the local clock after a
 Leap Second Event, in an OS that (so far) does not support this well.
 One possibility would be for NTP to defer dealing with it until it is
 in a position to perform a step change, e.g. during bringup.  It would
 then maintain an artificial 1-second offset between NTP timestamps and
 OS (e.g. Unix) timestamps until the OS time can safely be changed.  NTP
 could also use a linear slew (predictable and invertible) on top of the
 OS time adjustments made as part of the NTP protocol, so as to absorb
 this artificial offset gradually.  I'm not sure if this would mesh with
 a Unix adjtimex() that has support for inserting a leap second at the
 next midnight UTC, because the handling of this may use internal slewing
 that NTP cannot control or know precisely -- I only just came across a
 mention of this when scanning this newsgroup, and haven't studied it yet.
 The proper solution for Unix would be to listen to Markus Kuhn (see below).
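
 The "predictable and invertible" linear slew mentioned above could look
 like the following sketch (the slew rate and function names are my own
 assumptions, not part of any NTP specification):

```python
SLEW_RATE = 0.0005   # assumed: 500 microseconds absorbed per second

def remaining_offset(initial_offset, elapsed):
    """Artificial offset (seconds) still to absorb after `elapsed`
    seconds of linear slewing; reaches zero and stays there."""
    return max(0.0, initial_offset - SLEW_RATE * elapsed)

def os_to_ntp(os_time, slew_start, initial_offset=1.0):
    """Invert the slew: recover the NTP timestamp from an OS timestamp
    taken while the artificial 1-second offset is being slewed away.
    Because the slew is linear in elapsed time, this mapping is exact."""
    return os_time + remaining_offset(initial_offset, os_time - slew_start)
```

 At this rate a one-second offset is absorbed in 2000 seconds; any other
 fixed rate works, as long as both sides of the conversion know it.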

 The argument has been made that tracking TAI as opposed to UTC (when UTC
 is what most users want to see) requires a historical record of leap
 second events.  For the purpose of converting past timestamps, this is
 required for the current definition too, since each leap second has
 redefined the effective epoch.  For the purpose of converting current
 time, only the current LSO (Leap Second Offset) is required.  For the
 purposes of near-future timestamps, the next LSE (Leap Second Event)
 must be known, but NTP already deals with that for times less than one
 day in the future (and other means could be used to cover a few months
 into the future).  The most common time reference these days is GPS,
 and its time signal includes the LSO.  What is missing however is an
 LSO field in the standard NTP packet format.  (*** Need to check whether
 NTP maintenance messages include means to report LSO, which changes
 infrequently after all.  For past timestamps, a means to distribute a
 Leap Second Table might be useful -- but I would argue that if any
 program needs updating to pick up this table, it might as well have
 the table canned in.  Leap seconds after 2000 may need a distribution
 mechanism -- assuming there will be more leap seconds.  ***)
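
 The "canned table" conversion for past timestamps amounts to a simple
 lookup.  A sketch, with a purely hypothetical future leap second as the
 only entry (timestamps here are seconds since the proposed 2000 epoch):

```python
# Hypothetical canned table: (TAI-based NTP timestamp at which a leap
# second took effect, cumulative offset in force from then on).
LEAP_TABLE = [(300_000_000, 1)]   # illustrative entry, not a real event

def utc_from_ntp(ntp, table=LEAP_TABLE):
    """UTC seconds-since-2000 from a proposed TAI-based NTP timestamp:
    subtract the cumulative leap-second offset in force at that time."""
    offset = 0
    for event_time, cum_offset in table:
        if ntp >= event_time:
            offset = cum_offset
    return ntp - offset
```

 For current time the whole table collapses to the single current LSO,
 which is why carrying the LSO in the packet format would suffice for
 live operation.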

 Coexistence of the new and old protocol has to be considered.  If the
 new LSO field is placed where current packets carry zeros, it could be
 used to recognise the new protocol, because as long as the LSO is zero,
 old and new are fully compatible.  Old versions receiving timestamps
 fully compatible.  Old versions receiving timestamps from new versions
 might not be so lucky: they might treat them as falsetickers if they
 are in the minority.  Hopefully everybody would upgrade soon enough.
 If they are *requested* timestamps, the new version could convert them
 so as not to disturb the old version, and that problem could be avoided.
 Only old receivers of new broadcast-mode packets would then be at risk.
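
 The detection rule and the conversion a new server would apply for old
 clients can both be stated in a few lines (a sketch of the scheme just
 described, not of any existing NTP code):

```python
def is_new_protocol(lso_field):
    """A nonzero LSO field can only come from a new-version server;
    zero is ambiguous while old and new timescales still coincide."""
    return lso_field != 0

def timestamp_for_old_client(new_ntp_time, lso_since_2000):
    """A new-version server answering an old-version client's request
    converts its TAI-based timestamp back to the UTC-based scale by
    subtracting the leap seconds accumulated since 2000."""
    return new_ntp_time - lso_since_2000
```

 Broadcast mode offers no such per-client conversion, which is why old
 broadcast receivers remain the residual risk.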


 I'd also like to mention that IBM S/390 (and now zSeries) defines its
 TOD clock with reference to TAI -- specifically, IBM = TAI-10 (because
 it was defined to have IBM(1972)=UTC(1972), whereas TAI(1958)=UTC(1958),
 and UTC used corrections more complicated than leap seconds from 1958
 to 1972).  This new definition was introduced in the second edition of
 S/390 Principles of Operation, in 1991 I think; the prior definition
 used the same rule that NTP, Unix, and everybody else seem to use, namely
 make number-of-seconds-in-epoch come out as UTC when converted using
 plain Gregorian rules.  The operating system (MVS, OS/390, z/OS) keeps
 the current LSO in a public variable.

 Finally, I'm grateful to Markus Kuhn for having written such a thoughtful
 proposal for Unix to deal with UTC vs TAI, Leap Seconds, and local time
 zones:  http://www.cl.cam.ac.uk/~mgk25/c-time

