How to Assign IPv6 Addresses to LXD Containers on a VPS

Good news for the future of the Internet: IPv6 connectivity is an increasingly common feature offered by VPS providers these days. Unfortunately, many of them also cheap out. Instead of providing a true IPv6 prefix like 2607:f8b0:4004:811::1/64, they’ll provision you some arbitrary range, like the set of 10 addresses between 2607:f8b0:4004:811::1/128 and 2607:f8b0:4004:811::10/128.

I can’t fathom why the providers do this. A single /48 prefix contains 2^80 addresses, more than a trillion trillion; if you can afford to hand out scarce IPv4 addresses at no extra charge, you can surely afford to give everyone a /48, too. Alas, such is the cloud business.

I use my VPSes to run containers using Ubuntu’s LXD stack, and my current provider gives me 10 IPv6 addresses to work with. The other day, I was trying to figure out how to assign each address to a container in this scenario—only to find that no such guide existed.

  • This guy accomplished this with NAT66. (Ew.) Unfortunately, this is your only option if your host only provides a single /128 address.
  • This guy avoided NAT, but he did use an old program (npd6) that’s no longer relevant for modern kernels. He was also writing for LXC, not LXD.

So here, I’m going to present my solution, which avoids NAT66, doesn’t rely on manual firewall rules, and is tailored for the Ubuntu sysadmin stack (LXD, systemd, netplan).

Set up LXD networking

When setting up LXD, say “yes” to IPv6 addressing, but “no” to IPv6 NAT. Then add the addresses you want to assign to your containers as extra routes on the LXD bridge, like so:

$ lxc network set lxdbr0 ipv6.routes '2607:f8b0:4004:811::2/128, 2607:f8b0:4004:811::3/128'

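Typing out every address gets tedious with a larger allocation. As a sketch—assuming ::1 stays on the host and the remaining suffixes go to containers—a small loop can build the routes value for you; the echo prints the final command for review rather than running it:

```shell
# Hypothetical sketch: build the comma-separated ipv6.routes value for a
# block of addresses (here ::2 through ::9, reserving ::1 for the host)
# and print the resulting lxc command for review before running it.
PREFIX=2607:f8b0:4004:811
routes=""
for suffix in 2 3 4 5 6 7 8 9; do
    # append ", " only when routes is already non-empty
    routes="${routes:+$routes, }${PREFIX}::${suffix}/128"
done
echo lxc network set lxdbr0 ipv6.routes "$routes"
```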
The host’s network config should look something like this:

$ lxc network show lxdbr0
config:
  ipv4.nat: "true"
  ipv6.address: fc02::1/120
  ipv6.dhcp.stateful: "true"
  ipv6.routes: 2607:f8b0:4004:811::2/128, 2607:f8b0:4004:811::3/128
$ ip -6 route
2607:f8b0:4004:811::2 dev lxdbr0 proto static metric 1024 pref medium
2607:f8b0:4004:811::3 dev lxdbr0 proto static metric 1024 pref medium

Set up NDP proxies

The kernel needs to answer neighbor solicitations for the containers’ addresses on the upstream interface, using the IPv6 neighbor discovery protocol (NDP). You arrange this with the ip neighbour add proxy (yes, British spelling) command, once for each address:

$ ip -6 neighbour add proxy 2607:f8b0:4004:811::2 dev net0
$ ip -6 neighbour add proxy 2607:f8b0:4004:811::3 dev net0
$ ip -6 neighbour list proxy
2607:f8b0:4004:811::2 dev net0  proxy
2607:f8b0:4004:811::3 dev net0  proxy
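With a handful of addresses, a short loop saves typing. This sketch prints one proxy command per address so you can review them first; drop the echo to execute them as root. The prefix and suffixes match the two example addresses above—extend the list to cover your whole allocation:

```shell
# Generate an "ip -6 neighbour add proxy" command for each allocated address.
# The echo prints the commands for review; remove it to actually run them.
PREFIX=2607:f8b0:4004:811
for suffix in 2 3; do
    echo ip -6 neighbour add proxy "${PREFIX}::${suffix}" dev net0
done
```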

This list needs to be recreated each boot, so I have a systemd service to run these commands:

# /etc/systemd/system/proxy-ndp.service

[Unit]
Description=Announce all IPv6 addresses allocated to this server
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip -6 neighbour add proxy 2607:f8b0:4004:811::2 dev net0
ExecStart=/sbin/ip -6 neighbour add proxy 2607:f8b0:4004:811::3 dev net0

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now proxy-ndp.service and the proxy entries will be recreated on every boot.


Enable IPv6 packet forwarding and proxy relaying

The kernel disables these features by default, so you also need to set two sysctl properties: one to forward IPv6 packets between interfaces, and one to answer neighbor solicitations on the containers’ behalf:

# /etc/sysctl.d/91-forward-ipv6.conf

net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1

Run sysctl --system (or reboot) to apply.

Set up container networking

Finally, you can assign each container the IPv6 address you desire. This configuration is done from within the container, just as if it were a virtual machine or a real computer. Each container already receives a private IPv6 address from LXD (like fc02::6e); you just need to assign its public address as an additional static address. Here’s how that’s done by hand using ip,

# ip -6 address add 2607:f8b0:4004:811::2/128 dev eth0

and here’s how it’s done in Ubuntu’s netplan:

# /etc/netplan/50-cloud-init.yaml

# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
            dhcp6: true
            addresses:
                - 2607:f8b0:4004:811::2/128

It’s not necessary to specify a gateway (as in IPv4) thanks to the magic of IPv6 router advertisements.


And that’s it! Your containers should now have IPv6 networking with Internet-reachable addresses. Under glibc’s default address-selection policy, traffic will be preferentially routed over IPv6.

# ip -6 route
2607:f8b0:4004:811::2 dev eth0 proto kernel metric 256 pref medium
fc02::/120 dev eth0 proto ra metric 100 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::c0c4:4aff:fedf:89a6 dev eth0 proto ra metric 100 mtu 1500 pref medium
# ping
PING (2a00:1450:400e:80b::200e) 56 data bytes
64 bytes from (2a00:1450:400e:80b::200e): icmp_seq=1 ttl=57 time=5.16 ms
64 bytes from (2a00:1450:400e:80b::200e): icmp_seq=2 ttl=57 time=5.33 ms
64 bytes from (2a00:1450:400e:80b::200e): icmp_seq=3 ttl=57 time=5.49 ms
64 bytes from (2a00:1450:400e:80b::200e): icmp_seq=4 ttl=57 time=5.30 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 5.169/5.326/5.499/0.128 ms

I didn’t touch upon the IPv4 stack in this short tutorial, but you will most likely want to stick with LXD’s default configuration: DHCP and NAT on a /24.

$ lxc list
+------------------+---------+-------------------+------------------------------+------------+-----------+
|       NAME       |  STATE  |       IPV4        |            IPV6              |    TYPE    | SNAPSHOTS |
+------------------+---------+-------------------+------------------------------+------------+-----------+
| container1       | RUNNING | (eth0) | fc02::6e (eth0)              | PERSISTENT | 0         |
|                  |         |                   | 2607:f8b0:4004:811::2 (eth0) |            |           |
+------------------+---------+-------------------+------------------------------+------------+-----------+
| container2       | RUNNING | (eth0)  | fc02::5 (eth0)               | PERSISTENT | 0         |
|                  |         |                   | 2607:f8b0:4004:811::3 (eth0) |            |           |
+------------------+---------+-------------------+------------------------------+------------+-----------+

Enjoy having end-to-end connectivity on your containers, the way the Internet was meant to be experienced.

My VGA Passthrough Notes

Introduction: What is VGA passthrough?

Answer: Attaching a graphics card to a Windows virtual machine, on a Linux host, for near-native graphical performance. This is evolving technology.

This post reflects my experience running my VGA passthrough setup for several years. It is not intended as a complete step-by-step guide, but rather a collection of notes to supplement the existing literature (notably, Alex Williamson’s VFIO blog) given my specific configuration and objectives. In particular, I am interested in achieving a smooth gaming experience, maintaining access to the attached graphics card from the Linux host, and staying as close to a stock Fedora configuration as possible. Hopefully, my notes will be useful to someone.

Continue reading “My VGA Passthrough Notes”

Print Your Stuff from the Terminal with

Recently — in the spring of 2016, I believe — the UT Austin libraries rolled out a new printing system that allows students and staff to upload documents via a web interface. This was a huge deal to me because previously, I had to get off my laptop and sign in to a library computer to print things.

Functional but frustratingly slow.

It works well enough, but as is always the case for university computer systems, it’s a little cumbersome to use. My typical workflow looked like this:

  1. Log in
  2. Upload my essay
  3. Set my standard printing options: no color, duplex

That works out to about ten clicks and a trip to the password manager. The horror! We can do much better. We have the technology.

Over the last two weekends, I put together a Python script that can send documents straight from the command line. It stays authenticated for two weeks at a time and there’s a configuration file to specify preferred printing settings.

$ ./ ~/Documents/utcs.pdf
Print settings:
  - Full color
  - Simplex
  - Copies: 1
  - Page range: all
Logging in with saved token ... done
Uploading utcs.pdf ... done
Processing ... done
    Available balance: $1.16
    Cost to print:     $0.42

    Remaining balance: $0.74

I’m sure it will prove useful to all the… one… UT Austin students who are handy with a terminal and do a lot of writing. Find it on GitHub.

Prototype Sustainable Map for the UT Austin Campus

I usually keep my writings on my personal blog and The Daily Texan separate, but I’ll make an exception here: this post is an addendum to my recent opinion column on sustainability and the UT campus map.

Here’s how I think the campus map should look. I took the visitor map from the Parking and Transportation Services website and added the UT shuttle campus circulators and some suggested bicycling routes.

(I’m not a huge fan of the segregated bike lanes on Guadalupe and Dean Keeton, but they are depicted for posterity’s sake.)

If we want to claim the mantle of a sustainable campus, a map like this is just plain common sense.

Hacking Piazza with Cross-Site Scripting

Piazza is a free classroom discussion service marketed for science and mathematics classes. It is best described as a hybrid wiki and forum; students can post questions, and other students can collaborate on answers. Like WordPress, content can be formatted with a rich-text editor or with plain HTML with a restricted set of features. Piazza’s distinguishing feature is the ability to post anonymously, which it claims makes underrepresented groups in the sciences more comfortable with interacting with the class. At UT, the computer science department makes extensive use of Piazza for most of its classes.

Piazza is primarily accessed through its web interface. Of great interest, there is also a “lite” web interface designed for mobile devices and accessible browsers. I will demonstrate that Piazza is susceptible to common client-side web attacks, such as cross-site scripting, as a result of its reliance on web apps. (There are also native iOS and Android apps, but they are awful, and nobody uses them.) Continue reading “Hacking Piazza with Cross-Site Scripting”

Creating a Guest Network with a Tomato Router

Here are my notes on how to portion off a guest wireless network for… you know, guests… if you have a router powered by the excellent Tomato third-party firmware. (I run Tomato RAF on a Linksys E4200.)

It’s not meant to be an exhaustive guide, because there are a few already on the Internet. Rather this is how I achieved my specific setup:

  • Do not allow guests to make connections to the router, thus preventing them from accessing the web interface or making DNS requests.
  • Firewall guests from the main network and any connected VPNs.
  • Push different DNS servers and a different domain to the guest network.

First you’ll need to create a separate VLAN and a virtual SSID for your guest network. My router has two antennas, so I could have used a dedicated antenna for the guest network, but I opted to use a virtual SSID anyway because the second antenna is used for the 5 GHz band.

By default, VLAN 1 is the LAN and VLAN 2 is the WAN (the Internet). So, I created VLAN 3 for my guest network. I then attached a virtual wireless network on wl0.1 named

This is where most guides stop, since Tomato already firewalls the new guest network from the rest of your LAN. Instead of bothering to tweak the firewall, they simply advise you to set a strong administrator password on the web interface.

This didn’t satisfy me, though – I wanted firewall-level separation. Also, the guest network is still able to access any VPNs the router is running. So here’s some iptables magic:

# Add a special forward chain for guests. Accept all Internet-bound traffic but drop anything else.
iptables -N guestforward
iptables -A guestforward -o vlan2 -j ACCEPT
iptables -A guestforward -j DROP
iptables -I FORWARD -i br1 -j guestforward

# Add an input chain for guests. Make an exception for DHCP traffic (UDP 67/68) but refuse any other connections.
iptables -N guestin
iptables -A guestin -p udp -m udp --sport 67:68 --dport 67:68 -j ACCEPT
iptables -A guestin -j REJECT
iptables -I INPUT -i br1 -j guestin

This goes in Administration > Scripts > Firewall. Simple and easy to understand. Note that ‘br1’ is the network bridge for your guest network and ‘vlan2’ is the WAN VLAN. You probably don’t have to change these.

Last thing that bothered me was that Tomato by default assigns both networks the same DNS and domain settings. This means that guests can make DNS queries to your router for system hostnames, like ‘owl,’ and get back legitimate IP addresses. Overly paranoid? Probably, but here’s the fix:

# DNS servers for guest network (OpenDNS)
dhcp-option=tag:br1,option:dns-server,,
# Domain name for guest network (any private name will do)
dhcp-option=tag:br1,option:domain-name,guest.lan

This goes in Advanced > DHCP/DNS > Dnsmasq custom configuration. Combined with the iptables rules above, this forces your guests to use these servers instead of the router’s DNS.

Once again, ‘br1’ is the guest bridge. You can also specify your own DNS servers instead of OpenDNS.

And there you have it – a secure network for your own devices and a guest network, carefully partitioned off from everything else, solely for Internet access.

There are two pitfalls with this setup: no bandwidth prioritization and the possibility that someone could do illegal things with your IP address.

I don’t really care about bandwidth, because I already have a QoS setup, and I live in a suburban neighborhood so users of my guest network will be few and far between.

However, I am considering forcing all my guest traffic through the Tor network. That may be a future post.

Windows: Combat Evolved: a Halo Satire

What would Microsoft’s Halo video game series be like if it involved Microsoft itself?

The Introduction

Halo tells the story of 26th century humanity, which has organized itself under the auspices of the United Nations Space Command (Microsoft). Humans are fighting a losing war against the Covenant (Apple), a theocratic collection of alien races that worship a long-dead alien species called the Forerunners (pre-2000 Macs). Already, many colony worlds, including the military stronghold Reach (IBM), have fallen.

In the Beginning

In the first game, Halo: Combat Evolved, a lone starship (Windows XP) crash lands on a mysterious Forerunner ringworld (Best Buy) that is thought to be some kind of superweapon. Its human survivors, including the superhuman cyborg Master Chief (Bill Gates), fight the Covenant for control of the ring (store). However, the Covenant accidentally release a zombie-like parasite known as the Flood (Android). It is discovered that the purpose of the halo is actually to cleanse the galaxy of all sentient life, thereby depriving the Flood of all possible infection vectors. The Chief then destroys the ring and its Flood infestation before returning to Earth (Redmond, Washington) to warn of an impending invasion by a new Covenant fleet (the Intel Macintosh).

The Story Continues

In the sequel, Halo 2, the Covenant locate and invade Earth. Despite a valiant defense by the UNSC Home Fleet and Earth’s orbital defense platforms (Windows Vista), a single Covenant carrier punches through and lands at New Mombasa, an African metropolis. With the Master Chief’s help, the UNSC destroys most of the initial assault. However, the carrier makes a hasty slipspace jump to Delta Halo, another halo installation (New Egg). The Covenant and the UNSC once again battle for control of it. Meanwhile, the Chief assassinates a key Covenant leader (Steve Jobs) and the Flood are once again released. This sets off a complicated chain of events that leads to the primary warrior race of the Covenant, the Elites (Mac OS X), seceding from the theocracy. They are opposed by the new warrior race, the Brutes (iOS).

The Elites make a temporary truce with the humans to stop the rest of the Covenant from firing the halo ring. They succeed, but all rings are put on standby, ready to fire remotely from a location known only as “the Ark” (Amazon). The remaining Covenant leadership plan to bring the entire fleet to Earth and uncover a major Forerunner artifact (iOS 7).

In one of the worst cliffhangers in gaming history, the Master Chief stows away and prepares to “finish the fight.”

Finish the Fight

Halo 3 opens with the Chief jumping from the ship and landing outside the ruins of New Mombasa. He helps the UNSC (Windows Phone 7) launch a last-ditch attack against the Covenant excavation site, but they fail to put a dent in the operation. The artifact is activated by the Covenant; it turns out to be a portal to the Ark (flat UI design). The Elites and the UNSC (Windows 7) follow the Covenant through the portal to stop them once and for all. After an epic three-way battle, the Master Chief kills the Covenant leadership and blows up the Ark to eradicate the Flood. Unfortunately, his ship fails to make it back through the portal in one piece, and he is left stranded in unknown space.

A New Era

Halo 3 was followed years later by Halo 4, which is intended to begin a new Halo trilogy.

In Halo 4, the Master Chief crash-lands on a mysterious Forerunner planet called Requiem (Power Mac G4). The UNSC Infinity (Windows 8), a massive capital ship commissioned after the war with the Covenant, attempts a rescue mission, but instead finds itself trapped in Requiem’s gravity well. The Chief helps to free it, but accidentally releases an immensely powerful Forerunner warlord known as the Didact (Steve Wozniak). The Didact intends to take a Forerunner ship to Earth and wipe out humanity with the Composer (Mac OS), a Forerunner weapon that allows him to turn sentient beings into his own soldiers. However, he is stopped in the nick of time thanks to the efforts of the Chief and the Infinity.