NTP Pool servers on Kubernetes on Packet
Packet is awesome.
When we started planning our recent unplanned server move, we investigated options for having not one but two sites for the “hub” systems of the NTP Pool. With 4000 NTP servers and hundreds of millions of clients using the system, that kind of redundancy really should be a given!
As we evaluated our options on a ridiculously short timeframe, Packet stood out as an interesting choice, though at first we were a little apprehensive that their setup might be too unusual compared to more familiar options.
After a quick chat with some of the friendly staff at Packet, we were off to the races to see if we could get everything migrated in less than a week of nights and weekends. If we could, we’d be able to move the physical servers the following Sunday without downtime to any critical services, getting us closer to having proper redundancy.
Working with the Packet system has been fascinating and extremely productive. Despite having done this sort of work for several decades, we were surprised by how mixing familiar capabilities, APIs and abstractions opened new ways to quickly build and manage powerful, reliable and scalable infrastructure.
In these modern times, there are lots of ways to run your services, for example:
- Physical servers, with all the fuss that entails: hardware failures, hardware that’s never quite the same, PXE booting that’s never quite automated enough (unless you have hundreds of servers), and so on.
- Virtual servers, which are much better for automation and easy to spin up, with less “hardware fuss”, but relatively low-powered and with unpredictable performance.
- Completely abstracted services (think Heroku, AWS Lambda, “cloud databases”, etc.), which are great if you can buy into that particular way of doing things and don’t need too much of anything else.
- “Serverless”, where someone else runs a platform you can run code on, with whatever limitations and constraints the platform brings.
Packet has built a system that manages to straddle the abstractions between “virtual servers with APIs” and “performance and features from physical servers” in a way that keeps the best advantages of both and minimizes the drawbacks. It was quite a revelation to work with and a lot of fun. It’s an old-fashioned physical server, but it works like computing as a utility, with plenty of “platform features”.
Kubernetes setup
When getting started, we wondered whether using another provider with a “one-click” managed Kubernetes service would be faster, but it turned out that the Packet APIs, powerful hardware and modern open-source Kubernetes installation and management software made setting up a cluster a breeze.
Because it was so fast and easy to work with, we experimented with a couple of different OS options that were new to us before settling on RancherOS for the controller nodes (simple, easy) and CentOS (familiar) for the workers. We could easily spin up a few extra nodes for half an hour, run some experiments, and then either keep them, do a clean re-install (with the same IP, provisioning with cloud-init and so on) or “return” them. This was great for quickly evaluating our server and OS options, even under time pressure.
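To give an idea of how little is involved, the user data for such a (re-)install is just an ordinary cloud-init file. Everything below (hostname, key, packages) is an illustrative placeholder rather than our actual configuration:

```yaml
#cloud-config
# Minimal, illustrative user data for (re-)provisioning a node via the
# Packet portal or API. All values here are placeholders.
hostname: worker-test-1
ssh_authorized_keys:
  - ssh-ed25519 AAAA... ops@example.com
package_update: true
packages:
  - chrony
runcmd:
  # any one-off setup steps go here
  - systemctl enable --now chronyd
```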
If you have specific requirements (and a little more time), you can even use iPXE for a completely custom install.
Hardware setup
The controllers are running on 3 t1.small.x86 nodes, which are plenty for our small cluster. Packet calls them “tiny”, but they have a 2.5Gbps network, 4 cores, 8GB of memory and the full feature set for installation options, BGP routing and everything else.
We’ve also configured a few of the “tiny” boxes as NTP monitoring nodes in [other locations](https://www.packet.com/locations/). Being on a dedicated box is great for this: on a virtual machine, there’s a risk that a “noisy neighbor” would introduce jitter in the NTP monitoring.
The worker nodes in the Kubernetes cluster (plus a node or two for some legacy VMs) are running on amazingly powerful m1.xlarge.x86 nodes. They have 256GB of memory (except for one snowflake we got with 512GB…), almost 3TB of SSD and a 20Gbps network. Even our most resource-intensive jobs run easily in a small corner of these boxes without impacting the rest of the workloads (and if they did, having full control of the physical system means there are no random surprises).
Software setup + Load balancing
For installing and managing the Kubernetes cluster, we use Rancher, which integrated beautifully with the Packet system and made setup take just minutes once we were done experimenting with Packet’s features. There isn’t much to tell about this; it was that easy! After setting up Rancher, we were able to batch boot controllers and workers with a short cloud-init config. The servers automatically joined the cluster, and when we tested wiping and re-installing a server, it re-joined the cluster.
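The worker user data was essentially the node registration command Rancher generates for the cluster, wrapped in cloud-init. A sketch of what that looks like, with the server URL, token, checksum and image tag all placeholders rather than our real values:

```yaml
#cloud-config
# Illustrative worker user data: run the rancher-agent registration command
# that Rancher generates for the cluster (all values below are placeholders).
runcmd:
  - >
    docker run -d --privileged --restart=unless-stopped --net=host
    -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run
    rancher/rancher-agent:<version>
    --server https://rancher.example.com
    --token <registration-token>
    --ca-checksum <checksum>
    --worker
```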
In our physical server setup, we have a couple of virtual machines running a high-availability TCP load balancer (FreeBSD, carp and haproxy). While it would have been possible, we preferred not to take the time to duplicate this setup.
Instead, we had a small subnet allocated and, via the portal (or API), set up BGP peering from the Kubernetes workers to the Packet network. Any of the workers can announce the elastic IPs over BGP. No NAT, layer 7 proxies or other weird stuff in between.
To integrate this with the Kubernetes API, we use MetalLB, which has worked beautifully (use the upcoming 0.8.0 release). Creating a LoadBalancer resource in Kubernetes just works, and traffic automatically fails over if a worker disappears (while rebooting to a new kernel, for example).
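As a sketch of what this looks like with MetalLB releases of that era, which are configured through a ConfigMap (the addresses, ASNs, pool and service names below are illustrative, not our real values): the BGP peers and the elastic IP pool are declared once, and each service is then an ordinary LoadBalancer resource.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
      # the per-host BGP gateway on the Packet network (illustrative values)
      - peer-address: 169.254.255.1
        peer-asn: 65530
        my-asn: 65000
    address-pools:
      - name: elastic
        protocol: bgp
        addresses:
          # the small subnet of elastic IPs (placeholder range)
          - 203.0.113.0/28
---
apiVersion: v1
kind: Service
metadata:
  name: web                 # hypothetical service name
  annotations:
    # optionally pin the service to a specific address pool
    metallb.universe.tf/address-pool: elastic
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 8443
```

Each worker running a MetalLB speaker announces the service address to the Packet routers; when a node is drained or rebooted, its announcement is withdrawn and traffic shifts to the remaining workers.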
On AWS or Google Cloud, you get a plethora of features you can buy into. On Packet, you don’t get those pre-determined features, but with powerful hardware and open-source software, you can choose your own tools. Packet has a network block storage product, but with almost 3TB of SSD per server, we chose to use Rook to manage a Ceph cluster for block storage instead. For monitoring, we use Prometheus, which has integration available to automatically discover the cluster’s nodes.
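To make that concrete (with hypothetical names, and assuming the storage class name used in the standard Rook examples), a workload claims Ceph-backed storage with an ordinary PersistentVolumeClaim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data              # hypothetical claim name
spec:
  storageClassName: rook-ceph-block   # storage class name from the Rook examples
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```

And a small fragment of Prometheus configuration is enough to discover the cluster’s nodes through the Kubernetes API:

```yaml
# prometheus.yml fragment: discover and scrape the Kubernetes nodes
scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node
```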
Packet generously supports open source and if you are interested in new work, they are hiring.