How to Fix Grandma’s Network on Verizon FiOS

In my family, the person with the fastest Internet connection is… Grandma, a Vietnam War refugee who has never used a computer in her life. This is by virtue of her residence on a main road in the great state of Delaware, which gets fiber TV and Internet service through Verizon FiOS. She subscribes to the cheapest Internet plan so that the grandkids can tap away at their tablets during family gatherings. And on FiOS, the “lowest tier” is a blazing-fast symmetric connection: 100 Mbps down, 100 Mbps up.

It really isn’t fair, is it?

Grandma’s 20th-century tract home, like Grandma herself, was thrust only reluctantly into the digital age. It has no data cabling whatsoever besides two landlines and two coax ports, which, naturally, are both located on the extreme corners of the house—the worst possible positions to place a Wi-Fi access point. So for many years, the family ISP shitbox sat on one end of the house or the other, saddling the opposite side with all of the classic symptoms of crappy Wi-Fi coverage: buffering videos, sluggish webpages, frequent disassociations, and frustrated kids. This fall, I discovered one of my uncles (bless his heart) had attempted to cover the dead spot with a cheap access point and powerline networking kit from TP-Link. Immediately, my heart sank—powerline networking is almost always bad news. I marshaled together two laptops and ran iperf to test the performance of the link. It was a bottleneck… to put it mildly. On a network with a 100 Mbps uplink, the powerline connection achieved a whopping 12 Mbps.
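If you want to run the same kind of link test yourself, iperf3 makes it a two-minute job. A minimal sketch—the laptop placement and the 192.168.1.50 address are placeholders for your own setup:

```shell
# On the laptop plugged into the far side of the powerline link:
iperf3 -s

# On a laptop wired directly to the router (replace 192.168.1.50 with
# the first laptop's address); -R repeats the test in reverse:
iperf3 -c 192.168.1.50 -t 30
iperf3 -c 192.168.1.50 -t 30 -R
```

If the number comes back far below your ISP tier, the link—not the ISP—is your bottleneck.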

The prehistoric MI-424WR is a dime a dozen on the Philadelphia-area Craigslist.

Right then and there, I decided it was time to blow up Grandma’s home network and start over. It had to go, all of it—the extra AP; the powerline adapters; even the Verizon router itself, a venerable Actiontec MI-424WR that hasn’t received a security patch in over a decade. My plan was to junk everything and install a whole-home Wi-Fi mesh system using Ethernet-over-coax (MoCA) technology for reliable backhaul. (A cross between “option 9” and “option 10,” for those of you who made your way here from DSL Reports’ FiOS guide.) We’re talking 802.11ac, dual-band, wired backbone, baby. At first, I set my sights on Google Wifi, but I found the price tag of Linksys Velop—the economy dual-band model can be had in 2-packs for just $100-150 total—a little more palatable. I wasn’t worried about cheaping out because, thanks to the MoCA backbone, I wouldn’t be relying on the performance (or lack thereof) of Velop’s wireless repeating.

I consider Belkin a third-rate brand, but I have to hand it to them for the job they’ve done on their Velop product. For example, if you run a home network with multiple access points, it’s important that they support roaming assistance—the 802.11k, v, and r standards—without which Wi-Fi clients tend to “stick” to the first AP they see and refuse to switch to another station, even if the-signal-quality-is-garbage-and-another-AP-is-right-there-so-why-the-hell-wouldn’t-you-switch-god-damnit. In the consumer space, basically nothing supports roaming assistance except for whole-home mesh systems—including, of course, Linksys Velop. Velop also autodetects the presence of an Ethernet connection between its nodes, and makes use of it for backhaul. (Some contemporary mesh systems, unbelievably, lack Ethernet ports altogether!) And bonus points for Velop’s online management interface; it’s refreshing to be able to manage a network without installing yet another smartphone app.

For my MoCA adapters, I cheaped out and bought a pair of Actiontec WCB3000N’s, which regularly go for $20 each, used. Testing with iperf clocked their maximum speed at about 100 Mbps. (The newest stuff on the market can exceed gigabit speeds, but I couldn’t justify paying triple the cost for speeds nobody in Grandma’s house would ever need or use.) Each WCB3000N comes with a pair of very outdated 802.11n Wi-Fi radios, the idea being that the device can act as a coax-backed “Wi-Fi extender.” Um, thanks, but no thanks; I’d just like the MoCA part, please. But what’s this? No web interface option to disable Wi-Fi? WTF?! Hilariously, there is indeed a control—it’s just hidden by a little bit of CSS.

And for my next trick, I shall make the “Wireless Radio” checkbox disappear!

You see, WCB3000N’s are so cheap because the market is flooded with examples that were handed out by ISP’s. Mine came from Spectrum, who apparently removed the ability to disable Wi-Fi to make their product “idiot-proof.” One brave soul on GitHub got the GPL source to compile and released a custom build that restores the missing control. Unfortunately, there’s one bug still unresolved: The setting to disable the 2.4GHz radio doesn’t stick after a reboot. Pooey. I turned off SSID broadcast on that network and called it a day.

There are some special considerations to keep in mind when working on a FiOS network. First, the cable boxes require an IP connection to download TV guide data from Verizon. Although some models (including the one my Grandma has) have Ethernet ports, they are not activated, and the connection has to be made using their builtin MoCA adapters. Fortunately, the boxes can link with any commodity MoCA adapter—including the WCB3000N I was using to network the Velops. Second, the remote DVR and on-screen caller ID features won’t work if a Verizon router isn’t the gateway. In my case, neither loss mattered to Grandma…

Another complication is the connection from the router (my base Velop node) to the Optical Network Terminal on the side of the house, which can be made via either MoCA or Ethernet. Contemporary installs use Ethernet, but older FiOS installs—including, you guessed it, Grandma’s—used coax, probably so the installers could spare themselves the trouble of running a new Ethernet line. The coax connector on a FiOS-branded router conceals two MoCA adapters: the “LAN-side” one, which runs on channel D1 and connects to the cable boxes, and the “WAN-side” one, which runs on the less-common channel C4 and connects to the ONT. The use of differing frequencies keeps both MoCA network segments logically separate.

Network Segment   MoCA Frequency   Connected Devices
WAN               C4               Router; ONT
LAN               D1               Router; cable boxes; Wi-Fi; Ethernet

Running a new Ethernet line wasn’t an option—Grandma would’ve strangled me if I broke out the drill and started punching holes in her precious house. So, I needed a MoCA adapter that could operate on channel C4 and talk to the ONT. Turns out these adapters have gone nearly extinct! The Arris MEB1100, which Verizon distributes to FiOS customers, seems to be the only one still in production. But I wondered if it was possible to dodge this purchase by placing Grandma’s existing FiOS router into bridge mode. The Actiontec web interface has no obvious option to do this; but as it turns out, it is indeed possible!

The key is the “Network Connections” screen, which allows you to modify the router’s internal network topology to your heart’s content. You can accomplish a bridging configuration by detaching the “Ethernet/Coax” interface from the “Network” bridge and bridging it with the “Broadband Connection” interface. Unfortunately, if your model lacks the ability to separate the Ethernet and coax interfaces, you’ll have to disable the LAN-side MoCA adapter.

(A screenshot found on Google Images. Not my actual configuration!)

By doing this, you’ll lose all access to the web interface except through the Wi-Fi hotspot, which will become its own, isolated network segment. Just set the SSID to something unique, like MI424WR_Admin, and use the same WPA password printed on the unit so that it’s easy to remember in the event you need to access the configuration screen again. (It’s not a security concern to keep an Actiontec router in service like this, because the web interface is not accessible from the Internet—or even your own LAN.) Then, when you plug the WAN port of your own router into one of the LAN ports on the Actiontec, your router will receive a public IP address, and you’ll be off to the races.

So, several equipment overhauls and a few coax splitters later, Grandma’s network went from this:

To this:

Did I over-engineer the crap out of it? Probably. But at least you can start a download anywhere in Grandma’s house and get the full 100 Mbps.

The 30-Second WebRTC Guide

(Web technology changes fast! Mind the date this post was written, which was November 2019.)

I get the feeling nobody uses WebRTC in the real world, since all of the tutorials use the same toy examples that don’t involve any actual network connectivity. That’s a shame, because WebRTC makes peer-to-peer communication a cakewalk. Somewhere in our imaginations, there’s a whole category of decentralized web apps, just waiting to get written!

Anyway, this post serves as a quick, practical guide to WebRTC. The first thing to realize is that it’s not just another web API that’s ready to go out of the box—WebRTC requires three distinct services to work its magic. Fortunately, the browser handles much of the communication behind the scenes, so you don’t need to worry about all of the nitty-gritty details.

A network diagram illustrating the relationships between signalling, STUN, and TURN servers and browsers.
The relationships between browsers and servers in WebRTC. Diagram courtesy of
let me = { isInitiatingEnd: () => { ... },
           sendToOtherEnd: (type, data) => { ... } };

To use WebRTC, you need some kind of out-of-band signalling system—in other words, a middleman—to deliver messages between the browsers. This is how they exchange the networking information necessary to negotiate a direct connection. Obviously, if they could deliver it directly, then they would have no need for WebRTC!

The design of the signalling system itself is left entirely up to you. Choose any technology you please—WebSocket, QR code, email, carrier pigeon. As we will see, WebRTC provides the necessary hooks to abstract over the underlying technology.

const STUN_SERVERS = { urls: [""] },
      TURN_SERVERS = { urls: "", username: ..., credential: ... };
let rtc = new RTCPeerConnection({ iceServers: [STUN_SERVERS, TURN_SERVERS]});

If you expect your WebRTC session to traverse different networks, your clients will also need access to a Session Traversal Utilities for NAT (STUN) server. This is a service that informs browsers of their public IP address and port number, which can only be determined from a host on the public Internet. (STUN servers consume very little resources, so there are many that are freely available.)

Sometimes, despite the browsers’ best efforts, the network topology is too restrictive to achieve a direct connection. When this happens, WebRTC can fall back to a Traversal Using Relays around NAT (TURN) server, which is another middleman that can forward network traffic between clients. It’s like your signalling server, except it uses a standardized protocol explicitly designed for high-bandwidth streams. The more clients need such a middleman, the more bandwidth the TURN server will consume; therefore, if you want one, you will most likely need to run your own.
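If you do end up running your own relay, coturn is the usual open-source choice. A minimal sketch of its turnserver invocation—the realm, username, and password here are placeholders:

```shell
# Start a TURN server using the long-term credential mechanism.
# Clients must supply the matching username/credential in iceServers.
turnserver --listening-port 3478 --realm example.org \
        --lt-cred-mech --user=demo:s3cret --fingerprint --verbose
```

A production deployment would also want TLS and a firewall rule opening the relay port range, but this is enough to test the fallback path.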

if (me.isInitiatingEnd())
        rtc.addEventListener("negotiationneeded", async (event) => {
                const sdpOffer = await rtc.createOffer();
                await rtc.setLocalDescription(sdpOffer);
                me.sendToOtherEnd("SDP-OFFER", sdpOffer);
        });

rtc.addEventListener("icecandidate", (event) => {
        if (event.candidate)
                me.sendToOtherEnd("ICE-CAND", event.candidate);
});

me.receiveFromOtherEnd = async (type, data) => {
        switch (type) {
        case "SDP-OFFER":
                await rtc.setRemoteDescription(data);
                const sdpAnswer = await rtc.createAnswer();
                await rtc.setLocalDescription(sdpAnswer);
                me.sendToOtherEnd("SDP-ANSWER", sdpAnswer);
                break;
        case "SDP-ANSWER":
                await rtc.setRemoteDescription(data);
                break;
        case "ICE-CAND":
                await rtc.addIceCandidate(data);
                break;
        }
};

Okay, this is the big one—here, the browsers use your signalling system to perform a two-phase pairing operation. First, in the Session Description Protocol (SDP) phase, they share information about audio, video, and data streams and their corresponding metadata; second, in the Interactive Connectivity Establishment (ICE) phase, they exchange IP addresses and port numbers and attempt to punch holes in each other’s firewalls.

WebRTC provides the negotiationneeded and icecandidate events to abstract over your signalling system. The RTCPeerConnection object fires these events whenever the browser needs to exchange SDP or ICE information (respectively), which can happen multiple times over the course of a WebRTC session as network conditions change.

Only the side that initiates the connection need be concerned with negotiationneeded. There’s a specific protocol both sides need to follow when responding to these events, or to messages from each other—it’s best to let the code speak for itself.

let dataChannel = rtc.createDataChannel("data", { negotiated: true, id: 0 });
dataChannel.addEventListener("open", (event) => { ... });

Finally, set up your media and data streams. (For data channels, you can get away with a negotiated opening, which means the stream is pre-programmed on both ends and doesn’t require another handshake.) Wait for any open events to be fired.

You’re all done!

Introducing Sia Slice, My Absurdly Cheap Block Storage Solution

Sia Slice in action. (On a remote system, with tmux.)

I dabble in cryptocurrencies, occasionally. I hesitate to get too partisan on a subject the Internet takes very seriously, but it seems to me that the fairest judge of a coin’s value is the utility it provides to its holders. So Bitcoin is useful because everyone recognizes and accepts Bitcoin, Monero is useful because it facilitates anonymous transactions, Ethereum has that smart contracts thing going for it, and so on and so forth.

Continue reading “Introducing Sia Slice, My Absurdly Cheap Block Storage Solution”

How to Assign IPv6 Addresses to LXD Containers on a VPS

This post was rewritten on July 26, 2019 to incorporate a cleaner solution. The original version can be viewed here.

LXD is my favorite containerization stack. For my use case, which is running various services on the same machine with isolated root filesystems, it’s more flexible and easier to use than Docker, particularly in terms of networking capabilities.

Using LXD, I can bridge all of my containers to my local LAN, thereby providing each of them a unique local IPv4 and global IPv6 address. This makes it very easy to forward ports and set firewall rules to open services to the outside world—no more fumbling around with awkward PORT directives and multiple levels of NAT44, as is the case in Docker land.
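For reference, attaching a container to an existing host bridge is a one-liner in LXD. In this sketch, br0 and web1 are placeholder names for your LAN bridge and container:

```shell
# Give the container a NIC bridged directly onto the host's LAN,
# so it picks up its own IPv4/IPv6 addresses like any other machine:
lxc config device add web1 eth0 nic nictype=bridged parent=br0
```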

But this setup gets complicated when you attempt to use LXD on a commodity Virtual Private Server (VPS), because the IPv6 configuration these providers use is rather strange and counter-intuitive. (I’ll tell you exactly why when we get there.) So, here is how you can get globally routable, public-facing IP addresses for your containers on your $30/year VPS, without any application-level hacks like TCP/UDP proxying, port forwarding, or that abomination known as NAT66.

The Setup: The host is a VPS running Ubuntu, or your choice of contemporary distribution. The provider has allocated a virtualized network interface, net0, to connect to the Internet with IPv4 and IPv6 addresses. The containers will be attached to lxdbr0, a bridge interface managed by LXD.

$ ip link show
2: net0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 54:52:00:4f:5c:b3 brd ff:ff:ff:ff:ff:ff
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:34:61:66:1b:ae brd ff:ff:ff:ff:ff:ff

So far, every VPS seller I’ve purchased from assigns each customer an entire /64 “prefix” (or a small subset of a prefix, or even a single address), but instead of using prefix delegation to advertise and route this prefix—as an Internet provider or cellular operator with native IPv6 would—they unceremoniously dump your server, and the servers of your “neighbors,” onto a common /48 prefix with a static gateway.

The following table, copied verbatim from my VPS provider’s network configuration page, suggests this is the result of a misguided attempt to translate a legacy IPv4 configuration into IPv6-speak:

IP Address        Gateway        Netmask

The Red Herring: You can’t just bridge your containers with net0, because the VPS’s network usually drops traffic from unexpected MAC addresses. Try it yourself: run ip link set net0 address 02:11:22:33:44:55 and see if you lose connectivity.

The Solution

Delegate your /64 prefix, or some subset of it, to lxdbr0, and configure LXD to use your choice of SLAAC or DHCPv6 to assign addresses to your containers. Then use the NDP Proxy Daemon to advertise the presence of your containers to the wider /48 prefix.

Set up LXD networking

Assign an IPv6 prefix to lxdbr0 with LXD. If you allocate your entire /64, you may use SLAAC:

$ lxc network set lxdbr0 ipv6.address 2602:ff75:7:373c::1/64

But if you want to reserve parts of your prefix for other purposes, you must use stateful DHCPv6:

$ lxc network set lxdbr0 ipv6.address 2602:ff75:7:373c::ea:bad:1/112
$ lxc network set lxdbr0 ipv6.dhcp.stateful true
$ lxc network set lxdbr0 ipv6.dhcp.ranges 2602:ff75:7:373c::ea:bad:2-2602:ff75:7:373c::ea:bad:255 # optionally

Sample configuration:

$ lxc network show lxdbr0
  ipv4.address: none
  ipv6.address: 2602:ff75:7:373c::1/64
  ipv6.dhcp: "false"
  ipv6.firewall: "true"
  ipv6.nat: "false"
  ipv6.routing: "true"

Set up host networking

You must use on-link addressing for net0; do not attach the shared /48 prefix. If the prefixes assigned to two different interfaces (e.g., a /48 on net0 and a /64 on lxdbr0) overlap, dnsmasq will seemingly fail to send Router Advertisements, breaking automatic IPv6 configuration.

On Ubuntu, netplan is supposed to be able to configure this, but the on-link addressing option is currently broken for IPv6. (May 2020 update: Micha Cassola of Foundyn found a way to accomplish this with pure netplan; see this thread.) Therefore, you must use ifupdown, augmented with some scripted iproute2 glue:

# apt install ifupdown
# cat >>/etc/network/interfaces
auto net0
iface net0 inet static
        up ip -6 address add 2602:ff75:7:373c::/128 dev net0
        up ip -6 route add 2602:ff75:7::1/128 onlink dev net0
        up ip -6 route add default via 2602:ff75:7::1
        down ip -6 route delete default via 2602:ff75:7::1
        down ip -6 route delete 2602:ff75:7::1/128 onlink dev net0
        down ip -6 address delete 2602:ff75:7:373c::/128 dev net0

Your IPv6 routing table should thus resemble:

$ ip -6 route show
2602:ff75:7::1 dev net0 metric 1024 pref medium
2602:ff75:7:373c:: dev net0 proto kernel metric 256 pref medium
default via 2602:ff75:7::1 dev net0 metric 1024 pref medium

Set up NDP proxying

Finally, use ndppd to make your containers “appear” on the same broadcast domain attached to net0. Here is a sample configuration file (for further information, see the manual):

# cat >/etc/ndppd.conf
proxy net0 {
    router no
    rule 2602:ff75:7:373c::/64 {
        iface lxdbr0
    }
}

Alternatively, you can use the kernel’s builtin NDP proxy facility. You have to insert each address one-by-one, and the command does not stick across reboots:

# sysctl -w net.ipv6.conf.all.proxy_ndp=1
# ip -6 neighbour add proxy 2607:f8b0:4004:811::2 dev net0
# ip -6 neighbour add proxy 2607:f8b0:4004:811::3 dev net0
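If you go the kernel route, you can make it survive reboots by persisting the sysctl in /etc/sysctl.d and re-adding the neighbour entries at interface bring-up. A sketch, reusing the sample addresses from above:

```shell
# Persist the proxy-NDP sysctl across reboots:
echo 'net.ipv6.conf.all.proxy_ndp = 1' > /etc/sysctl.d/90-proxy-ndp.conf

# Re-add each proxy entry when net0 comes up, e.g. as extra "up"
# lines in the /etc/network/interfaces stanza:
#   up ip -6 neighbour add proxy 2607:f8b0:4004:811::2 dev net0
#   up ip -6 neighbour add proxy 2607:f8b0:4004:811::3 dev net0
```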


You’re all done!

$ lxc list
|    NAME    |  STATE  | IPV4 |                    IPV6                    |    TYPE    | SNAPSHOTS |
| container1 | RUNNING |      | 2602:ff75:7:373c:216:3eff:fedd:3f4e (eth0) | PERSISTENT | 0         |
| container2 | RUNNING |      | 2602:ff75:7:373c:216:3eff:fe5d:5f6a (eth0) | PERSISTENT | 0         |
$ lxc exec container1 -- ip -6 addr show eth0
13: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2602:ff75:7:373c:216:3eff:fedd:3f4e/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 3145sec preferred_lft 3145sec
    inet6 fe80::216:3eff:fedd:3f4e/64 scope link 
       valid_lft forever preferred_lft forever
$ lxc exec container1 -- ip -6 route show
2602:ff75:7:373c::/64 dev eth0 proto ra metric 1024 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::e432:28ff:fe6c:b421 dev eth0 proto ra metric 1024 hoplimit 64 pref medium
$ lxc exec container1 -- ping -c 4
PING (2a00:1450:400d:805::200e) 56 data bytes
64 bytes from (2a00:1450:400d:805::200e): icmp_seq=1 ttl=47 time=153 ms
64 bytes from (2a00:1450:400d:805::200e): icmp_seq=2 ttl=47 time=153 ms
64 bytes from (2a00:1450:400d:805::200e): icmp_seq=3 ttl=47 time=153 ms
64 bytes from (2a00:1450:400d:805::200e): icmp_seq=4 ttl=47 time=153 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 153.202/153.324/153.412/0.486 ms

Enjoy having end-to-end connectivity on your containers, the way the Internet was intended to be experienced.

Post-script: If you still need IPv4 (looking at you,, you can let LXD handle the NAT44 configuration, or use a public NAT64/DNS64 gateway.
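Letting LXD handle the NAT44 side is two commands; the RFC 1918 range here is an arbitrary choice:

```shell
# Hand out private IPv4 addresses on lxdbr0 and masquerade them
# behind the host's public address:
lxc network set lxdbr0 ipv4.address 10.199.0.1/24
lxc network set lxdbr0 ipv4.nat true
```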

My VGA Passthrough Notes

Introduction: What is VGA passthrough?

Answer: Attaching a graphics card to a Windows virtual machine, on a Linux host, for near-native graphical performance. This is evolving technology.

This post reflects my experience running my VGA passthrough setup for several years. It is not intended as a complete step-by-step guide, but rather a collection of notes to supplement the existing literature (notably, Alex Williamson’s VFIO blog) given my specific configuration and objectives. In particular, I am interested in achieving a smooth gaming experience, maintaining access to the attached graphics card from the Linux host, and staying as close to a stock Fedora configuration as possible. Hopefully, my notes will be useful to someone.

Continue reading “My VGA Passthrough Notes”

Print Your Stuff from the Terminal with

Recently — in the spring of 2016, I believe — the UT Austin libraries rolled out a new printing system that allows students and staff to upload documents via a web interface. This was a huge deal to me because previously, I had to get off my laptop and sign in to a library computer to print things.

Functional but frustratingly slow.

It works well enough, but as is always the case for university computer systems, it’s a little cumbersome to use. My typical workflow looked like this:

  1. Log in
  2. Upload my essay
  3. Set my standard printing options: no color, duplex

That works out to about ten clicks and a password manager access. The horror! We can do much better. We have the technology.

Over the last two weekends, I put together a Python script that can send documents straight from the command line. It stays authenticated for two weeks at a time and there’s a configuration file to specify preferred printing settings.

$ ./ ~/Documents/utcs.pdf
Print settings:
  - Full color
  - Simplex
  - Copies: 1
  - Page range: all
Logging in with saved token ... done
Uploading utcs.pdf ... done
Processing ... done
    Available balance: $1.16
    Cost to print:     $0.42

    Remaining balance: $0.74

I’m sure it will prove useful to all the… one… UT Austin students who are handy with a terminal and do a lot of writing. Find it on GitHub.

Prototype Sustainable Map for the UT Austin Campus

I usually keep my writings on my personal blog and The Daily Texan separate, but I’ll make an exception here: this post is an addendum to my recent opinion column on sustainability and the UT campus map.

Here’s how I think the campus map should look. I took the visitor map from the Parking and Transportation Services website and added the UT shuttle campus circulators and some suggested bicycling routes.

(I’m not a huge fan of the segregated bike lanes on Guadalupe and Dean Keeton, but they are depicted for posterity’s sake.)

If we want to claim the mantle of a sustainable campus, a map like this is just plain common sense.

Hacking Piazza with Cross-Site Scripting

Piazza is a free classroom discussion service marketed for science and mathematics classes. It is best described as a hybrid wiki and forum; students can post questions, and other students can collaborate on answers. Like WordPress, content can be formatted with a rich-text editor or with plain HTML with a restricted set of features. Piazza’s distinguishing feature is the ability to post anonymously, which it claims makes underrepresented groups in the sciences more comfortable with interacting with the class. At UT, the computer science department makes extensive use of Piazza for most of its classes.

Piazza is primarily accessed through the web interface on Of great interest, there is also a “lite” web interface designed for mobile devices and accessible browsers at I will demonstrate that Piazza is susceptible to common client-side web attacks, such as cross-site scripting, as a result of its reliance on web apps. (There are also native iOS and Android apps, but they are awful, and nobody uses them.) Continue reading “Hacking Piazza with Cross-Site Scripting”

Creating a Guest Network with a Tomato Router

Here are my notes on how to portion off a guest wireless network for… you know, guests… if you have a router powered by the excellent Tomato third-party firmware. (I run Tomato RAF on a Linksys E4200.)

It’s not meant to be an exhaustive guide, because there are a few already on the Internet. Rather, this is how I achieved my specific setup:

  • Do not allow guests to make connections to the router, thus preventing them from accessing the web interface or making DNS requests.
  • Firewall guests from the main network and any connected VPN’s.
  • Push different DNS servers and a different domain to the guest network.

First you’ll need to create a separate VLAN and a virtual SSID for your guest network. My router has two antennas, so I could have used a dedicated antenna for the guest network, but I opted to use a virtual SSID anyway because the second antenna is used for the 5 GHz band.

By default, VLAN 1 is the LAN and VLAN 2 is the WAN (the Internet). So, I created VLAN 3 for my guest network. I then attached a virtual wireless network on wl0.1 named

This is where most guides stop, since Tomato already firewalls the new guest network from the rest of your LAN. Instead of bothering to tweak the firewall, they simply advise you to set a strong administrator password on the web interface.

This didn’t satisfy me, though – I wanted firewall-level separation. Also, the guest network is still able to access any VPN’s the router is running. So here’s some iptables magic:

# Add a special forward chain for guests. Accept all Internet-bound traffic but drop anything else.
iptables -N guestforward
iptables -A guestforward -o vlan2 -j ACCEPT
iptables -A guestforward -j DROP
iptables -I FORWARD -i br1 -j guestforward

# Add an input chain for guests. Make an exception for DHCP traffic (UDP 67/68) but refuse any other connections.
iptables -N guestin
iptables -A guestin -p udp -m udp --sport 67:68 --dport 67:68 -j ACCEPT
iptables -A guestin -j REJECT
iptables -I INPUT -i br1 -j guestin

This goes in Administration > Scripts > Firewall. Simple and easy to understand. Note that ‘br1’ is the network bridge for your guest network and ‘vlan2’ is the WAN VLAN. You probably don’t have to change these.

Last thing that bothered me was that Tomato by default assigns both networks the same DNS and domain settings. This means that guests can make DNS queries to your router for system hostnames, like ‘owl,’ and get back legitimate IP addresses. Overly paranoid? Probably, but here’s the fix:

# DNS servers for guest network
# Domain name for guest network

This goes in Advanced > DHCP/DNS > Dnsmasq custom configuration. Combined with the iptables rules above, this will force your guests to not use the router’s DNS.

Once again, ‘br1’ is the guest bridge. You can also specify your own DNS servers instead of OpenDNS.

And there you have it – a secure network for your own devices and a guest network, carefully partitioned off from everything else, solely for Internet access.

There are two pitfalls with this setup: no bandwidth prioritization and the possibility that someone could do illegal things with your IP address.

I don’t really care about bandwidth, because I already have a QoS setup, and I live in a suburban neighborhood so users of my guest network will be few and far between.

However, I am considering forcing all my guest traffic through the Tor network. That may be a future post.