If you are using the standard ntpd daemon to serve time to the public internet, it’s important to make sure it is configured not to reply to “monlist” queries. This includes many routers and other network equipment.

The configuration recommendations include the appropriate “restrict” lines to disallow any management queries to ntpd. Most Linux distributions should have an updated ntpd package by now that simply disables “monlist” queries, which also solves the primary problem.
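For reference, the usual advice boils down to restrict lines roughly like the ones below (a sketch only; adapt the addresses to your own setup), or alternatively disabling the monitoring feature entirely:

    # ntp.conf: serve time, but refuse management queries
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery

    # keep full access from localhost
    restrict 127.0.0.1
    restrict -6 ::1

    # alternatively (or additionally), turn off the monlist feature
    disable monitor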

The NTP Support wiki has more information.

If your server is part of the NTP Pool, the system also periodically checks whether your server responds to mode 6/mode 7 information queries and will warn you on the manage page if it does.
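You can run the same kind of check yourself from another host with the standard ntpq and ntpdc tools; the hostname below is a placeholder for your own server. If both queries time out, you're in good shape:

    # mode 6 query (readvar)
    ntpq -c rv ntp.example.com

    # mode 7 query (monlist)
    ntpdc -n -c monlist ntp.example.com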

If you operate a network, you can use the Open NTP Project to see whether you have vulnerable devices on it.

This week we had a period of weird behavior in the monitoring system for (mostly) German IPv6 servers.

After much back and forth on the mailing list and numerous debugging sessions we got this information from a network engineer at Hurricane Electric:

A bug was recently discovered in Force10 switches that causes unicast IPv6 NTP traffic to be erroneously broadcast to all ports. Due to this, there are currently access lists in place preventing some IPv6 NTP traffic from traversing the DECIX exchange, as it was causing a storm that generated nearly 1 terabit per second of traffic. This should be resolved in the near future.

The number of IPv6 servers active in the pool appears to be about back to normal.

This is also the answer to "why don't we have IPv6 servers by default on all the pool zones yet". As you might know, only "2.pool.ntp.org" (and 2.debian.pool.ntp.org, etc.) currently returns AAAA records.
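You can see the difference with a couple of ordinary dig queries:

    dig +short AAAA 2.pool.ntp.org   # returns AAAA records
    dig +short AAAA 0.pool.ntp.org   # returns nothing; this zone is IPv4 only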

The NTP Pool "backend systems" are moving racks at Phyber. To minimize the risk of things going wrong, we're doing it the old-fashioned, simple way: turning everything off, moving it, and turning it on again. It will mean about an hour where servers are not monitored and we can't add new ones or access the www.pool.ntp.org site.

In the new rack there'll be more power available, so when the move is done we'll have more capacity.

Server upgrades at ntppool.org

Over the last couple of months we had a couple of the "central servers" fail. It hasn't caused any service outage for the NTP clients, but some of you might have noticed that the NTP Pool manage site has been sluggish at times.

A few months ago I bought a few new servers and sent them down to our friends at Phyber Communications, who wired them up in their hosting facility. Over the last few weeks I've added Puppet declarations to configure them, and since earlier this evening they're in production for the websites and a few other services.

I have a long road map for the NTP Pool system and many of the items involve processing and storing more data to make our system better. The new servers are going to be helpful for that.

My other project over the last few months has been upgrading the GeoDNS server to support EDNS-CLIENT-SUBNET. It has been live for users of Google DNS for a while. We're still working out some kinks with the OpenDNS folks to get it fully enabled there.
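With EDNS-CLIENT-SUBNET the resolver passes part of the client's IP address along with the query, so the GeoDNS server can choose servers near the actual user rather than near the resolver. If you have a reasonably recent dig you can simulate it yourself; the network below is just an example prefix:

    # ask Google DNS for pool servers as if the client were in 192.0.2.0/24
    dig @8.8.8.8 pool.ntp.org +subnet=192.0.2.0/24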

Over the last month the NTP Pool has gotten the biggest upgrade it has had in years. The changes have given us much more scalability and performance.

As you might know, the NTP Pool system is essentially a monitoring system and a smart DNS server. Server operators register their server in the system, the monitoring system checks and evaluates the submitted servers and the DNS server gives end-users a (hopefully) local selection of servers, weighted by preferences given by the server operator and other factors.
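The easiest way to see that selection in action is to query the pool name a couple of times and watch the small, rotating set of (hopefully nearby) addresses come back:

    dig +short pool.ntp.org
    dig +short pool.ntp.org   # run it again and a different set of A records comes back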

Last month there was a big change to the DNS server.

For years the GeoDNS server had a misconfiguration, so users in Great Britain accessing the default (non-country-code) domain would get a European server rather than a more local one.

The zone in the NTP Pool system has always been called ‘uk’, but the GeoIP library returns ‘gb’ for the relevant users. Oops! The system didn’t have a ‘gb’ zone configured, but it knew ‘gb’ was in Europe, so it would fall back to the European zone.

I fixed it about 3 weeks ago, so since then users in that region should be getting better service.

Relatedly, servers registered in the ‘uk’ zone will have seen their traffic go up considerably. If you need to adjust how much traffic your server gets, you can change the netspeed setting on the manage site.

Brief maintenance window

To safely upgrade some of the DNS configuration infrastructure, updates to the DNS data will be suspended for 20-45 minutes. Some parts of the website might also return errors while everything is being updated.

For end-users of the pool there should be no interruption.

Update: Maintenance was completed in 20 minutes. The changes were in part to get ready to deploy a new Go-based DNS server to replace the current one.

Meinberg has long been generously supporting the NTP Pool and other open source projects. The monitoring system uses a Meinberg NTP server for "reference time" when checking the more than 3000 servers in the pool. I can't recommend their equipment or expertise enough.

This month they are raffling off seven DCF77 computer clocks and three GPS time receivers to current and soon-to-be participants in the NTP Pool.

The form and rules are short and simple, but the deadline is July 29th, so don't delay!

The client base for the NTP Pool continues to grow, so we also need to increase the number of servers. Being a "public utility" of sorts (you likely use it for some computer or device in your house, office, or both, even if you don't know it), we need help from, well, the public. At least the particular kind of public that is running a server or two with static IP addresses and knows how to configure a new daemon on them.
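If that describes you, the configuration is small. A minimal ntp.conf sketch might look like the following; the upstream hostnames are placeholders, and the usual advice is to pick a handful of good, nearby stratum 1 or 2 servers (not the pool itself) as your sources:

    # upstream time sources (placeholders - substitute real stratum 1/2 servers)
    server ntp-a.example.net iburst
    server ntp-b.example.net iburst
    server ntp-c.example.net iburst
    server ntp-d.example.net iburst

    # serve time to anyone, but refuse management queries
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict -6 ::1

Once the daemon has been running and keeping good time for a while, you register the server's IP address on the manage site and set its netspeed.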

There are several thousand servers and new ones are added regularly; however, through natural attrition the total number of servers has been stagnating or even going down lately, even in Europe. Some countries still have very good coverage (Germany, for example), but many others really could use more.

In Asia virtually all countries could use more servers, even (or maybe especially) Japan, China, and India. In South America there are virtually no servers outside Brazil.

Iceland recently joined the pool as a "full zone"; so far just with two servers.

More servers are very welcome in any country, but it would be particularly great to get more in the countries with sparse coverage.

Today I am experimenting with hosting www.pool.ntp.org through Fastly. If you don’t know about them, they run an excellent Varnish-based CDN serving billions of requests a day.

The downside is that it is currently IPv4-only, but all the “static assets” (CSS files, images, etc.) were already served by them, so using the site over IPv6 only was already not a good experience.

Fastly is also hosting Perldoc.perl.org and have been doing so for a while.

Anyway, while the experiment is ongoing, accessing the pool site should be even faster than before, in particular for those of you who are in Europe or in the eastern US.
