Continuing on my project to re-do my internal services, today we’ll talk about the Preboot eXecution Environment, AKA PXE.
The principle is quite simple, and widely spread in everyday infrastructures.

Most PXE setups I’ve heard about involve serving installation images for a single system.
Most tutorials I’ve found while struggling to configure my service stick to serving a specific image, most likely taken from the host system, …
Yet PXE is able to provide a complete set of systems, as well as live environments, from memtests and BIOS update utilities to liveCDs.

Being reluctant to set up an NFS server on a PXE server, I’m avoiding this latter usage.
The main reason being my former work involved OpenVZ virtualization: using NFS is usually a pain in the ass, assuming your physical host/virtual host combination actually supports NFS.
While setting up an NFS client is most likely doable, setting up an NFS server remains a bad idea, … and well, I ended up avoiding all kinds of NFS services on my core services.

Anyway, back to PXE.
Relying on industry standards such as BOOTP (RFC951), DHCP (RFC1531, RFC4578) and TFTP (RFC783, RFC906), PXE uses a set of low-level protocol clients, allowing lightweight implementations with a footprint small enough to fit on regular network controllers.
Most DHCP servers can be reconfigured to provide their clients with a PXE server address (‘next-server’) and a file (‘filename’) to be distributed by your PXE server.
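As a sketch, an ISC dhcpd subnet declaration pointing clients to a PXE server could look like the following (all addresses and the file name are examples, not my actual setup):

```
subnet netmask {
  range;
  next-server;   # TFTP/PXE server handing out boot files
  filename "pxelinux.0";     # first image the client will download
}
```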
Once the client has retrieved these info from its DHCP server, it is able to download and execute the given file. Once the first image (usually named pxelinux.0, pxeboot.0, or whatever you want: there are lots of variants, mostly doing the same thing) is executed, the client queries the TFTP server again for files contained within the pxelinux.cfg directory.
Looking for additional configuration, the PXE client would first look up a file matching its hardware address, prefixed by its ARP type (eg pxelinux.cfg/01-aa-bb-cc-dd-ee-ff when the booting NIC’s hardware address is AA:BB:CC:DD:EE:FF). The client would then query for a file matching its IP address in uppercase hexadecimal (eg pxelinux.cfg/C000025A when the client IP address is If not found, the last digit of the previous query is removed, and so on, until the string is empty, at which point the client queries for a file named ‘default’.
The ‘default’ file, as any file from pxelinux.cfg, can either define menu layouts and several boot options, or simply boot a given image without further notice.

PXE uses a very small set of commands, kernel, initrd and append being the most common when booting a system.
Setting up a system requires you to download the proper kernel and initramfs, then declare a menu item loading them. Automating the process is thus pretty straightforward: two files to download, a template to include from your main menu, … See puppet defines, downloading CentOS, CoreOS, Debian, Fedora, FreeBSD, mfsBSD, OpenBSD, OpenSUSE and Ubuntu.
Most initramfs can be started with specific arguments, such as which device to use, which keyboard layout to load, which repository to use, or even a URL the system installation process would use to retrieve a preseed, a kickstart, or anything that may help automating the installation process.
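Putting it together, a pxelinux.cfg/default could define a menu entry passing such arguments; paths, file names and the preseed URL below are purely illustrative:

```
DEFAULT menu.c32
PROMPT 0
TIMEOUT 100

LABEL debian-auto
  MENU LABEL Debian (unattended install)
  KERNEL debian/linux
  INITRD debian/initrd.gz
  APPEND auto=true priority=critical url=
```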

If unattended installations have limited benefits within smaller networks (not using CDs is still a big one), they are tremendous within huge ones.
Targeting such infrastructures, products have emerged bringing PXE to the next level.
Cobbler, for instance, is a system-agnostic, Python-based command-line tool managing PXE configurations. Its primary functionality is to automate repetitive actions and encourage the reuse of existing data through templating.
MAAS, from Canonical, in combination with Juju and based on Ubuntu, is able to bootstrap complete infrastructures, which is what they do, I guess, with BootStack.


DNS

Wandering on Wikipedia, we can learn that in the early ages of the Internet, someone at SRI International maintained a file mapping alphanumeric hostnames to their numeric addresses on the ARPANET.
Later on, the first concepts were defined (RFC882 and RFC883, superseded by RFC1034 and RFC1035), leading to the first implementation (BIND, 1984).

I’m not especially familiar with the history of a technology older than me, and yet DNS servers are one of the few cornerstones of networks, from your local network to the Internet as we know it.
The idea hasn’t changed since ARPANET: to access a service, we prefer using human-readable juxtaposed words over a four-byte numeric identifier. Thus, we’ve multiplied contributions to scale, add redundancy to, decentralize and secure a unified, standardized directory.

While hosting your own DNS server can be pretty straightforward, popular services are usually popular targets, with their flaws. Taking care of defining the exact role your server should fulfill is crucial, though usually overlooked.

The most common attack targeting DNS servers is well known, exploiting poorly configured servers for over ten years: amplification attacks.
An attacker would spoof his victim’s IP address and query for ? ANY (about 64 bytes in), resulting in an answer of around 3200 bytes. It all depends on UDP: there is no connection state, so spoofing only requires writing your headers properly (no need to be able to answer an ACK), which makes any UDP protocol especially vulnerable.
Assuming your DNS server answers such queries, our attacker would have amplified his traffic by a factor of 50, as well as hidden his address.
Note that while you may configure ACLs at the software level, even a denied client gets answered. To avoid sending anything that may end up in a DDoS attempt, there’s nothing like a good ol’ firewall.
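As an illustration, a minimal iptables policy answering only known clients could look like this (addresses are examples, not my actual setup):

```shell
# Allow DNS from known clients only; drop everything else, so even spoofed
# queries never get an answer (software-level REFUSED replies are still traffic)
iptables -A INPUT -p udp --dport 53 -s  -j ACCEPT  # local network
iptables -A INPUT -p udp --dport 53 -s -j ACCEPT  # known secondary
iptables -A INPUT -p udp --dport 53 -j DROP
iptables -A INPUT -p tcp --dport 53 -s -j ACCEPT  # zone transfers
iptables -A INPUT -p tcp --dport 53 -j DROP
```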

While carrier-grade solutions may involve reverse-path checking of inbound traffic, the only thing to do for us regular folks is to restrict access to our DNS servers to the very clients we know.
Renting servers in Dedibox and Leaseweb facilities, I’ve found out both management interfaces allow me to replicate my zones on their DNS servers. Thus, my split-horizon is configured to publicly announce Leaseweb’s and Dedibox’s NS as my masters, while these are the only public clients allowed to reach my actual masters.

Bind / named

The historic implementation, and the most widely spread. Its only competitor in terms of features being PowerDNS.
Bind implements what they call DLZ (Dynamically-Loadable Zones), allowing records to be stored in a database such as PostgreSQL, or anything reachable over ODBC.
Having tested the OpenLDAP connector for a few years, I’d mostly complain about being forced to patch and build my own package (RFC3986, …), and about being unable to resolve addresses containing characters such as ‘(’. In the end, keeping my databases as plain files is easier to maintain.

Its usages include authoritative zone serving, zone replication, records caching, split-horizon, TSIG, IPv6 and wildcard records, DNS & DNSSEC resolution and recursion, lying DNS using RPZ, …
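Among these, split-horizon is built on BIND views: a hedged sketch of two views serving different versions of the same zone could look like this (names, networks and file paths are examples):

```
acl internal {;; };

view "internal" {
  match-clients { internal; };   # LAN clients get the private records
  zone "example.com" { type master; file "internal/example.com.zone"; };
};

view "public" {
  match-clients { any; };        # everyone else gets the public zone
  zone "example.com" { type master; file "public/example.com.zone"; };
};
```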



Unbound

A relatively new solution (2004), targeting caching and DNSSEC-related features.
While you may declare records in its configuration, Unbound won’t answer transfer queries, and thus does not qualify as an authoritative name server.
Unbound is perfect either for home usage, or as a local cache for your servers.
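A minimal sketch of such a local cache, answering only the clients it is meant for (interface and networks are examples):

```
server:
  interface:
  access-control: allow     # local clients
  access-control: allow      # this host
  access-control: refuse        # everyone else
```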

Also vulnerable to amplification attacks: keep in mind to deny access from unexpected clients.



NSD

NSD is a drop-in replacement for BIND’s zone-serving features, though it won’t provide split-horizon, caching or recursive DNS resolution.
Its features being restricted to serving and replicating zones, NSD only applies to authoritative usage, and should be used in conjunction with some caching solution such as Unbound.
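For illustration, serving a single zone and letting a known secondary transfer it could look like this in NSD 4 syntax (names and addresses are examples):

```
zone:
  name: "example.com"
  zonefile: "example.com.zone"
  notify: NOKEY       # tell the secondary about updates
  provide-xfr: NOKEY  # allow it to transfer the zone
```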

Dnsmasq

One of the most popular implementations, embedded in devices such as ISP boxes, Linksys WRT54G mods, or even Ubuntu desktop installations. The key feature being that Dnsmasq is a DNS server embedding a DHCP server. Or vice versa.
Like Unbound, serving configured records is possible, though Dnsmasq is not authoritative.
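A hypothetical fragment showing both sides at once: a local record, a DHCP range, and even the PXE options discussed earlier (all values are examples):

```
domain-needed
bogus-priv
address=/printer.lan/           # local DNS record
dhcp-range=,,12h
dhcp-boot=pxelinux.0,pxeserver,   # next-server/filename equivalent
```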

Vulnerable to amplification attacks as well, but most likely not meant to be exposed.


UTGB Refactoring

Back on the subject of refactoring my services. Today’s topic, obviously: DNS.

Being particularly fond of my split-horizon setup, NSD does not apply in my case.
Continuing to validate and deploy my puppet modules while reinstalling my self-hosted services, I’ve ended up setting up a new BIND server, rewriting my zones from LDAP to plain-text files.
After several spontaneous rewrites from scratch over the last few years, one thing I often missed while managing my DNS zones is the ability to synchronize a set of values (let’s say, NS, MX and TXT records) across all my zones, without having to edit 30 files. Thus, you’ll note the latest named module used in my puppet repository stores temporary zones in /usr/share/dnsgen, and allows you to use a single SOA record template, as well as a couple of zone headers and trailers, to generate an exhaustive split-horizon configuration.
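A sketch of the idea behind such a layout, one shared template (SOA plus the records common to every zone) concatenated with per-zone headers and trailers into full zone files; paths, names and records below are illustrative, not the actual module’s output:

```shell
set -e
mkdir -p dnsgen/out
# Shared template: SOA and the records every zone should carry
cat > dnsgen/soa.tpl <<'EOF'
@  IN SOA ns1.example.com. hostmaster.example.com. (
      2024010101 ; serial
      3600 900 1209600 300 )
   IN NS ns1.example.com.
   IN MX 10 mx.example.com.
EOF
for zone in example.com example.net; do
  {
    echo "\$ORIGIN ${zone}."
    cat dnsgen/soa.tpl                 # records shared by every zone
    echo "www IN A"       # per-zone trailer would go here
  } > "dnsgen/out/${zone}.zone"
done
```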

The next step on the subject would be to set up some key infrastructure, then publish my keys to Gandi and serve my own DNSSEC records, …

Meanwhile, I’ve started replacing my DNS caches as well. From unbound to unbound.
The specificity of my caches is that their configuration declares about 100,000 names, redirecting them onto some locally-hosted pixel server.
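The trick relies on Unbound’s local zones; redirecting a blacklisted name onto a pixel server boils down to a pair of directives per name (the zone and address below are examples):

```
server:
  # any name under the zone resolves to the pixel server's address
  local-zone: "ads.example.invalid." redirect
  local-data: "ads.example.invalid. A"
```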
The big news in this new version of the unbound module is that the names source isn’t static any more, and is regularly downloaded and updated. The actual list now includes around 6,000 entries, and might be completed later on.

UTGB Refactoring

Since December, and until further notice, I’ve been experimenting on my services, replacing my old VMs one by one.

Corresponding puppet modules are available at

Having experienced some Ceph disaster (a lost PG), the next big step is to drop two hosts from my current crushmap, using them to start a new Ceph cluster, and migrate my disks progressively.


EDIT from May 17th:

All my hosts are now dependent on this new puppetmaster.