NTP: the rebirth of ailing, failing core network infrastructure

Originally published at: http://boingboing.net/2016/11/29/ntp-the-rebirth-of-ailing-fa.html

6 Likes

An important and interesting topic.

Thanks for posting this!

8 Likes

NTP is a touchy little service. I know, I use it.

3 Likes

We all use it! Even though most of us don’t know it.

11 Likes

Such a great interview, and she raises really good points about what’s coming for the internet’s infrastructure. We’ve seen this problem crop up with several very important pieces of what makes the internet tick and the small organizations that run them - like the precarious financial position OpenBSD was in a few years back.

3 Likes

It’s interesting that this work has been done just as many distributions have begun to make the switch to chrony - I wonder how this rewrite will affect adoption?

In reality, for most of what people use NTP for nowadays (time sync as a client, rather than running a time server), chrony is the better choice. Most people don’t need a time server, let alone a client that can do broadcast/multicast time synchronization anymore.
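
For what it’s worth, the client-only setup really is tiny; a minimal chrony.conf sketch (the pool hostname and drift-file path are placeholders, and most distros ship something close to this by default):

    # Minimal client-only chrony.conf sketch
    pool 2.pool.ntp.org iburst        # take time from the public pool
    driftfile /var/lib/chrony/drift   # remember the clock's frequency error across restarts
    makestep 1.0 3                    # allow stepping the clock, but only during the first few updates
    rtcsync                           # let the kernel keep the hardware clock in sync
    # With no "allow" directives, chronyd will not serve time to anyone - it is purely a client.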

4 Likes

How are we supposed to take this person seriously when she’s not talking about monetising NTP, securitising it and slicing it into tranches, and “disrupting” time itself to Make the World a Better Place™?

[seriously, what @davide405 said]

10 Likes

Oh, I’m not just a client.

2 Likes

I think it’s worth pointing out that the issue wasn’t with the actual protocol but with the specific implementation, no?

2 Likes

I would say the biggest problem with NTP is that people keep reinventing this particular wheel (yes, I’m looking at you, Microsoft) and doing it badly. In that regard, Susan Sons’ project, which is a fork of the NTP reference implementation, will probably be a major improvement.

Until ICEI and CACR got involved with NTP, it was supported by one person, part time, who had lost the root passwords to the machine where the source code was maintained (so that machine hadn’t received security updates in many years), and that machine ran a proprietary source-control system that almost no one had access to, so it was very hard to contribute to it.

The “proprietary source-control system” mentioned would be BitKeeper, so not exactly RCS ifyouknowwhatImsayin. Moving code in and out of BitKeeper is pretty simple and well understood; it’s what Linus Torvalds used for the Linux kernel until Tridge pissed off Larry McVoy.

NTP was designed and implemented by David Mills, who also invented the fuzzballs as well as the family of exterior gateway protocols that has since grown into BGP, the most important global routing protocol of the Internet. It’s not a big stretch to say he made the modern Internet possible, and he’s still kind of the Big Kahuna of network time synchronization, despite his age and fading eyesight.

I’m pretty certain Dr. Mills has never been formally responsible for security patching the University of Delaware’s servers, although I suppose I could be wrong. I wouldn’t expect him to have root access since his partial retirement in 2009, anyway, so somebody else would be responsible for patching today.

I run the reference implementation, patched to the 2016-11-21 release. I haven’t seen any pressing reason to move to NTPsec.

Chrony’s fine for laptops, I suppose, if you want more accuracy than OpenNTPD. As you’ve noted, most people don’t need a datacenter-grade time server; they just want a reasonably efficient NTP client running in the background.

6 Likes

Not to pick nits, but all LANs (and some WANs) should have their own time servers, which sync (only) themselves externally. If it weren’t for all the junior sysadmins out there who don’t really know what NTP is, we’d need several orders of magnitude less public NTP infrastructure (kudos to ntp.org for running “the pool”, you guys are awesome!), and what was there could be more reliable and less easily leveraged for DoS attacks. [/rant]
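
For the record, the setup I mean is only a handful of lines in the reference ntpd’s ntp.conf; a rough sketch (the 192.168.1.0/24 range and the pool hostnames are placeholders for your own LAN and upstream servers):

    driftfile /var/lib/ntp/ntp.drift

    # This box, and only this box, talks to the outside world for time.
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    server 2.pool.ntp.org iburst

    # Strangers can't reconfigure the daemon, peer with it, or dump its status.
    restrict default kod limited nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict ::1

    # LAN machines may ask for the time, nothing more.
    restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap nopeer noquery

Everything else on the LAN then points its NTP client at this box instead of at the pool.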

3 Likes

Maybe this is a good place to mention this?

Don’t run NTP clients on virtual machines. Seriously. Don’t. It just mucks up your logs.

Run a solid, efficient NTP client on each of the hypervisor nodes hosting the virtual machines, and let the VMs get their time from the fake hardware clock that exists as part of their virtualized environment.

If you control large networks, take the time to design a real NTP hierarchy, with a few canonical servers (one per site usually suffices) that sync to high-value servers on the Internet (such as NIST’s), and everything else syncing from them.
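
As a rough illustration, the “everything else” side really is trivial; an ordinary internal host needs nothing more than something like this (ntp1/ntp2 are hypothetical names for the canonical site servers):

    driftfile /var/lib/ntp/ntp.drift
    server ntp1.example.internal iburst
    server ntp2.example.internal iburst
    restrict default kod limited nomodify notrap nopeer noquery
    restrict 127.0.0.1

On the one or two canonical servers per site, the server lines point instead at the high-value upstreams (time.nist.gov, or your national lab of choice), and nothing else on the network ever has to speak NTP to the outside world.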

5 Likes

I think, given the power of modern CPUs, this matters a lot less than it used to. Both MS and Apple, for example, run timeservers that all their clients hit (and that’s an awful lot of clients!).

Sure, in a datacenter, fine - I mean, we do that for BB - but for home? I don’t know if a packet or two per server every 1024 seconds is the end of the world. Especially when most of those hosts can hit commercial timeservers (or governmental ones - BB, for example, has been using the National Research Council of Canada’s timeservers for well over a decade).

I’d even go further and suggest that most people running NTP timeservers today probably shouldn’t - we’ve already seen NTP amplification attacks caused by misconfigured timeservers.

If it’s your job, then by all means run a timeserver, dedicate the effort required to keep it secure and up-to-date, and use it to serve your own internal LAN (and even better, don’t let it be accessible outside your LAN). If it’s not your job, then use one of the donated/governmental/educational/commercial time sources and leave securing your time sources to those who are really focused on doing so (and, of course, have the atomic clocks needed to make it so).
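
To make the “keep it secure” part concrete: the big amplification vector in the reference ntpd was the monlist query, and the usual hardening is only a few ntp.conf lines (a sketch, assuming stock ntpd; adjust ranges and paths for your site):

    disable monitor                    # removes the monlist amplification vector
    restrict default kod limited nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict ::1

Better still, firewall udp/123 so the daemon simply isn’t reachable from outside your own network.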

2 Likes

Mostly agreed on all counts, but if you ever tried to, y’know, actually use Microsoft’s in a production environment you’d see that they never really got the memo… :stuck_out_tongue:

(To be fair, I’ve heard that their issue is actually a hard-coded open-session limit between their public-facing firewalls and the actual time servers, but…)

Amplification attacks are trivial regardless of protocol, because so many of the really big corporations and ISPs are so profoundly bad at running a network.

If I could just get the major ISPs to implement ingress and egress filtering I’d declare victory. I’ve been in meetings where I had to argue with sysadmins and netadmins about following the most basic, obviously desirable standards… I did not always win those arguments.

Over the years, I’ve seen many packets delivered across the Internet with source addresses in the 10.0.0.0/8 range.

3 Likes

So… what about Harlan Stenn, who has actually spent years doing the work of maintaining NTP?

I don’t see his name mentioned once in Cory’s write-up, nor the Network Time Foundation, which is currently paying for his work, and both are barely mentioned in ICEI’s slides. It sounds to me a lot like these ICEI folks are scarfing up somebody else’s work, forking it, and giving themselves all the credit for the whole shebang, while alternately praising and bad-mouthing the guy who made their work possible at all.

A rewritten, improved implementation is a good thing in the long term, but right now, if you’re running ntpd on your machine (as opposed to something like OpenBSD’s OpenNTPD), it’s virtually guaranteed that you’re running “Classic NTP”, as ICEI refers to it, and not the shiny new NTPsec. If you want the mainstream NTP implementation maintained and fixed, donations to ICEI will not directly help with that, but donating to the Network Time Foundation will.

Here’s one place you could do that:

2 Likes

Is this article about the NTPsec project, or something else? Hard to tell when the project name is never mentioned.

If you don’t need Stratum 1 or authenticated NTP, OpenNTPD has offered a minimalist, security-focused open-source rewrite of the client/server code since 2004.
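
“Minimalist” is not an exaggeration; a typical OpenNTPD ntpd.conf is more or less the whole sketch below (the pool hostname is a placeholder):

    servers pool.ntp.org    # act as a client of the pool
    #listen on *            # uncomment only if you also want to serve time to others

That tiny configuration surface is a big part of its security story.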

1 Like

Ahh, what would open source be without the politics?

Seriously though, as much as I respect Harlan Stenn, the state of ntpd is what has led a lot of people to new implementations (chrony, etc.), so clearly something was lacking. If these new kids on the block can improve the situation, they should be welcome, and an eventual merging should be considered. Let a thousand forks bloom!

3 Likes

The complete list of code contributors, including Harlan Stenn and Dave Mills, is here.

Given the participants listed at NTPsec, it could be that they just wanted a more rapid and open programming environment, and weren’t socially adept enough to reform the extant structure. A very old project like NTP will often form an idiosyncratic development culture that is difficult to enter. Think of how Keith Packard forked X11 because it was so tediously difficult to navigate the XFree86 bureaucratic maze required for approval of his patches, for example.

So yeah, it could be as you say, and it does kind of look that way from the outside, but I’m going to give them the benefit of the doubt and merely echo your praise for the maintainers of the original NTP here. And as @toyg says, it’s all FOSS, so forks are expected.

1 Like

All y’all a bunch of time leeches!
Get your own primary standard.