GET Upcoming Service Changes

Golden Empire Transit is going to hold a public hearing on some proposed service changes.

A few thoughts:

  • 21/22/44 are due to see more buses (which means slight frequency increases, I guess). Always a good thing! The 21/22 especially deserve 20-minute or better frequencies on the weekends.
  • 61 finally gets evening service! (As this is my go-to bus route this is advantageous for me personally 🙂 .) It sucked not having a ride after 6 PM. Still needs a frequency increase (which is actually on GET’s long-term plans). One can dream.
  • 62 will get evening service too.
  • 82 will now provide evening service to the Northwest Promenade, meaning better access to the businesses there and connections (albeit poorly-timed) to the 61.
  • Holiday service has been eliminated, but with those abysmal boarding numbers, I suppose it’s not hard to see why.

Since GET is just now proposing these changes, perhaps their ridership is finally starting to see an increase since the route reorganization and the summer 2014 strike.

BIOS Mods and Integrated GPUs: a Tale of Hybrid Graphics

Well, today I called it quits with my designated “home gaming” laptop. It was an HP dv7t-6000 laptop with the following equipped:

  • CPU: Intel Core i7-2630QM
  • Integrated GPU: Intel HD Graphics 3000
  • Discrete GPU: AMD Radeon HD 7400M

Despite being augmented with a discrete GPU, its gaming performance was never anything to write home about. However, I had been keeping it serviceable for the last couple years by using modified AMD graphics drivers from Leshcat. The performance was slightly improved with the newer drivers, especially with games like Wargame: Red Dragon that were released long after HP had ceased supporting the machine’s software.

This month, however, during a routine upgrade to the latest Leshcat release, it appeared that my discrete graphics stopped working. Programs were suddenly reporting that they were being run on the HD Graphics 3000 iGPU.

I’m still not sure what happened, but I think the latest Leshcat release dropped support for the older “fixed” switchable graphics. That left me on “dynamic” switchable graphics, where demanding work is transparently offloaded from the iGPU to the dGPU. Programs still see the active graphics processor as the HD Graphics 3000 even though the 7400M is doing all the work.

In the process of figuring all this out I discovered that the InsydeH2O BIOS that comes with many HP laptops actually has a few hidden screens that grant access to a plethora of settings.

The advertised easy method (F10 + A) didn’t work for me, so naturally the next step was to flash a modded BIOS – which not only granted access to the secret settings but also removed the infamous HP wireless card whitelist.

With switchable graphics working again (after forcing the now-modded BIOS to use “dynamic” mode to work with Leshcat), my graphics performance was back but seemed… lackluster.

The benchmarks confirmed this. Here are the PassMark scores for every laptop GPU I’ve ever owned:

  • Nvidia GeForce 310M: 221
  • AMD Radeon HD 7400M: 634 (?)
  • Intel HD Graphics 4600: 726

That’s right, the entry-level integrated graphics from 2013 smoked the 2010 and 2011-era discrete GPUs. It’s surprising what difference a couple of years can make.

I’ve now retired the Sandy Bridge HP. My Haswell laptop, of all things, now holds the crown for the most powerful GPU in the household.

Capital Metro: What’s Wrong, and How to Fix It

Having lived in Austin for some months now I’ve been surprised by the lack of quality public transportation in a so-called “liberal” and “weird” city.

Austinites like Mike Dahmus criticize Capital Metro constantly for being inefficient, opaque, and making all the wrong decisions.

After many sleepless nights thinking about Capital Metro’s shortcomings, I thought I’d add my viewpoint to the Austin transit scene.

  • Poorly laid out routes. Just look at the Capital Metro system map (warning: ridiculously large PDF) and see how Austin’s transit lines are structured. It’s clear that they are designed to do one thing: provide single-seat rides from the suburbs to the downtown core. But this is the 21st century, and as modern transit planners have noted, downtown just isn’t that important anymore. What happens if you want to go somewhere else besides downtown? On Capital Metro, be prepared for numerous transfers, long waits, and very, very long travel times.
  • Too focused on coverage. So many of Capital Metro’s routes zigzag through neighborhoods and detour into strip malls to provide “service” to those special-interest areas. Also, particularly in the downtown/UT core, there are far too many stops. This is the safe way to run transit if you’re looking to appease your political base, but the fact is that streamlining routes and consolidating stops would go a long way in speeding up service. This would result in a more efficient system, increased frequencies, and lower operating costs.
  • Not enough frequency or weekend service. Routes have 30-minute headways if you’re lucky. On weekends, particularly Sundays, be prepared to wait 45 to 60 minutes for the next bus. Also, there are no express buses running on the weekend.
  • Too confusing. Bus stop signs are nearly impossible to decipher, and in a lot of cases the information is incomplete (e.g. no indication that certain stops are drop-off only). The headways are not only long, they’re also inconsistent (varying between 25 and 35 minutes), forcing you to check the schedule to really know when the bus will arrive. I have also seen riders get on the wrong route or attempt to pay for an express bus with a local day pass. Maybe Capital Metro should stop flashing random messages like “ATX IS HANDS FREE” on the destination blinds.
  • Commuter-oriented. Far too many of Capital Metro’s services are only useful to commuters. I’m talking about buses that only run one direction in the morning and the other direction in the evening. Or coach-style buses to nowhere with no obvious connections to local service. Good transit isn’t about getting you to work and back – it’s about being there for all your transportation needs, anytime, anywhere.
  • Trains to nowhere. Austin’s over-hyped MetroRail service connects North Austin’s far-flung suburbs with downtown. Well, in theory. The downtown station is a half-mile walk from the local bus routes, so walking that distance to make a transfer is a pain. (Furthermore, forget about making a transfer to Amtrak or Lone Star Rail if it ever gets built.) And MetroRail completely misses the UT campus, making it useless for students. Capital Metro’s latest rail expansion plan (November 2014) called for light rail tracks in a very low-ridership corridor. It reeked of developer speculation and special-interest lobbying; Austinites were smart to turn it down.
  • Poor downtown coverage. So you ride the bus downtown. Great, the mediocre bus system is working to your advantage. But where do you go from here? Capital Metro recently realigned all services onto the same pair of streets, so many downtown attractions are a half-mile walk or more from the nearest bus stop. Oh, and the downtown circulator named the “Dillo” was cut a few years ago.

“Crap Metro” should follow the lead of other cities like Houston, which is transforming its bus system into an efficient grid network and has built a cost-effective, high-ridership light rail system. (Oh, the irony! A conservative Texas city with progressive transit policies!)

Instead, it has poured all of its resources into more commuter-centric services and a “Bus Rapid Transit” project that isn’t actually BRT and charges a premium fare.

I am shocked that Austinites aren’t demanding better.

The Technology Gap

As written in an application for the East Bakersfield Rotary Scholarship:

The great paradox we face today is that people do not understand how technology works.

While we are quick to admit that our elders have trouble using computers, the younger generation gets a pass on technology education. We don’t usually think of Internet hipsters using Facebook on the latest iPhone as “technologically challenged.” But in reality, we are all in the same boat. We do not really understand the devices that we use every day.

The average person knows how to use a web browser to open websites and type a document using Microsoft Word. That is all. He can barely navigate files and folders, he cannot solve computer problems by himself, and he almost certainly cannot maintain the machine properly. Good security practices will stop nearly any computer virus, but he runs out to buy the latest copy of his favorite anti-virus software. Computers are modular and can be progressively upgraded, but he purchases a new system every year. And the mere thought of the average man being able to program a computer is simply ludicrous.

This lack of comprehension is disturbing because it can be so easily exploited. TV infomercials advertise miracle virus-removal programs that actually scam ignorant computer users out of their money; shady websites and tech startups offer low-quality software that over-promises and fails to deliver, frustrating customers who didn’t know any better; people buy the latest and greatest models every year because their old devices, thanks to neglected maintenance, have become “too slow.” It’s almost as if the tech industry profits from our lack of computer education.

But the most concerning development has been the rise of cloud computing: services that entice computer users to upload their data onto Internet servers. Google, Microsoft, and Apple tempt consumers by marketing these services as easy to use and safe. What most people don’t realize is that there is a hidden cost. Companies make money on their cloud services by selling the data to advertisers – and government agencies such as the NSA can also snoop through it.

If American youth expect to get ahead in the 21st century, they must be able to use technology to its fullest potential. Today, computer education is stuck in the 1990s. Students are only taught how to write documents and, occasionally, create presentations. We need to change this! Computer classes should expand their curricula with lessons about keyboard shortcuts, installing new software, using files and folders, and maintaining operating systems. And of course, schools must embrace the exciting new field of computer science! Teaching basic programming logic could benefit all students by giving them new insights into science and mathematics. And for those who want to dive deeper, low-cost single-board computers like the Raspberry Pi could allow schools to create truly innovative robotics and electronics courses.

We have been taught how to use computers, not how to understand them. Today’s children deserve better.

Creating a Guest Network with a Tomato Router

Here are my notes on how to partition off a guest wireless network for… you know, guests… if you have a router powered by the excellent Tomato third-party firmware. (I run Tomato RAF on a Linksys E4200.)

It’s not meant to be an exhaustive guide, because there are a few already on the Internet. Rather this is how I achieved my specific setup:

  • Do not allow guests to make connections to the router, thus preventing them from accessing the web interface or making DNS requests.
  • Firewall guests from the main network and any connected VPNs.
  • Push different DNS servers and a different domain to the guest network.

First you’ll need to create a separate VLAN and a virtual SSID for your guest network. My router has two radios, so I could have dedicated one to the guest network, but I opted for a virtual SSID anyway because the second radio serves the 5 GHz band.

By default, VLAN 1 is the LAN and VLAN 2 is the WAN (the Internet). So, I created VLAN 3 for my guest network. I then attached a virtual wireless network on wl0.1 named openwireless.org.

This is where most guides stop, since Tomato already firewalls the new guest network from the rest of your LAN. Instead of bothering to tweak the firewall, they simply advise you to set a strong administrator password on the web interface.

This didn’t satisfy me, though – I wanted firewall-level separation. Also, the guest network could still access any VPNs the router is running. So here’s some iptables magic:

# Add a special forward chain for guests. Accept all Internet-bound traffic but drop anything else.
iptables -N guestforward
iptables -A guestforward -o vlan2 -j ACCEPT
iptables -A guestforward -j DROP
iptables -I FORWARD -i br1 -j guestforward

# Add an input chain for guests. Make an exception for DHCP traffic (UDP 67/68) but refuse any other connections.
iptables -N guestin
iptables -A guestin -p udp -m udp --sport 67:68 --dport 67:68 -j ACCEPT
iptables -A guestin -j REJECT
iptables -I INPUT -i br1 -j guestin

This goes in Administration > Scripts > Firewall. Simple and easy to understand. Note that ‘br1’ is the network bridge for your guest network and ‘vlan2’ is the WAN VLAN. You probably don’t have to change these.
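If you want to confirm the rules took effect, you can list the new chains from a shell on the router (via SSH or telnet). This is just a sanity check – the chain names match the script above:

```shell
# Show the guest chains along with packet/byte counters.
iptables -vnL guestforward
iptables -vnL guestin

# Verify the jumps from the built-in chains are in place.
iptables -vnL FORWARD | grep guestforward
iptables -vnL INPUT | grep guestin
```

Non-zero packet counters on the DROP and REJECT rules are a good sign that guest traffic is actually being filtered.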

Last thing that bothered me was that Tomato by default assigns both networks the same DNS and domain settings. This means that guests can make DNS queries to your router for system hostnames, like ‘owl,’ and get back legitimate IP addresses. Overly paranoid? Probably, but here’s the fix:

# DNS servers for guest network
dhcp-option=tag:br1,6,208.67.222.222,208.67.220.220
# Domain name for guest network
dhcp-option=tag:br1,15,guest

This goes in Advanced > DHCP/DNS > Dnsmasq custom configuration. Combined with the iptables rules above, this keeps your guests from using the router’s DNS.

Once again, ‘br1’ is the guest bridge. You can also specify your own DNS servers instead of OpenDNS.
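From a device connected to the guest SSID, you can spot-check the partitioning. (The 192.168.1.1 router address and the ‘owl’ hostname below are assumptions from my setup – substitute whatever your LAN uses.)

```shell
# Should succeed: ordinary Internet-bound traffic is forwarded out the WAN.
ping -c 3 8.8.8.8

# Should fail: the router's web interface is firewalled off from guests.
curl --max-time 5 http://192.168.1.1/

# Should fail: DNS queries to the router itself are rejected,
# so guests can't resolve internal hostnames.
nslookup owl 192.168.1.1
```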

And there you have it – a secure network for your own devices and a guest network, carefully partitioned off from everything else, solely for Internet access.

There are two pitfalls with this setup: no bandwidth prioritization and the possibility that someone could do illegal things with your IP address.

I don’t really care about bandwidth, because I already have a QoS setup, and I live in a suburban neighborhood so users of my guest network will be few and far between.

However, I am considering forcing all my guest traffic through the Tor network. That may be a future post.

The Problem with City-Building Games

My gateway into city-building games was SimCity 4.

I loved it – and that is probably an understatement. I built villages and towns on rolling hills, coastlines, and plains. I enjoyed laying out cities and watching the grand effects of my policies. I loved watching the skyscrapers spring up and the wealthy sims move in.

My cities always ran deficits and eventually went bankrupt, but then I discovered cheats!

…And then I discovered that by governing conservatively, you could actually make money and play the game as intended.

The problem with city-building games these days is that you build everything yourself. Literally. You want a house? Click the “house” button and plop it down wherever your heart desires. A business? Gas station? Factory? Same idea! These games might seem fun at first, but they quickly get stale. Because you control everything, they’re predictable and boring.

Contrast this to the SimCity series, which is organic: the game is built around zoning. You zone areas for residential, commercial, and industrial establishments, and they build themselves automatically. Not only that, but they also upgrade themselves when they are redeveloped by wealthier sims or when you zone the area for higher-density buildings.

That’s what makes SimCity so great – you build the infrastructure, you set the policies, and then you kick back and watch as your awesome city starts growing. Compared to those dumb “move the cursor around and collect all the products made by the factories” games, there’s very little micromanagement.

Another area where SimCity stands out is transportation. In most city-building games, transportation is limited to roads (and sometimes railways). But SimCity gives you buses, subways, freeways, airports and seaports, and proper train stations. Best of all, transportation in SimCity is organic too. The cars that move around on the map aren’t random – they represent the actual commutes of sims going to work. And since sims can interchange between multiple modes of transportation and commute across city limits, you could spend hours designing an incredibly complex network dedicated to moving them between their homes and their jobs.

SimCity 4 went one step further, introducing graphs and overlay views that could show you exactly how sims moved along roads and networks. That let you get really in depth and build bus stops and subway stations exactly where they were needed. Then you hit the fast forward button and watched as the sims adapted to your new infrastructure. It was magical.

Maxis eventually released a single expansion pack for SimCity 4, dubbed Rush Hour. It was a must-have, adding even more transportation possibilities, like elevated metros (compatible with existing subways), more freeway options, toll booths, and ferry terminals. But Rush Hour also opened the floodgates for modding. Soon, SimCity 4 players found they could make nearly limitless changes to the gameplay – from custom buildings to game-changing mods that added even more transportation options.

Compared to this level of customization and simulation depth, everything else was a joke.

Thus, EA caused quite a stir when it announced the development of a SimCity reboot, called not “SimCity 5” but simply SimCity, set for release in 2013. There was a closed beta and the pre-release press gave it excellent reviews. The Maxis developers released a number of “making of” videos detailing SimCity’s new “Glassbox engine” and the depth of simulation now possible using newer hardware and cloud computing technologies. It looked like it was going to be a slam dunk.

Then release day hit.

Players were extremely disappointed to find that EA had once again screwed up their servers – nobody could log on for hours at a time, and loss of progress was frequent. SimCity was no longer a refined, solitary experience. It had devolved into a cookie-cutter MMORPG that happened to be a city builder.

“Cloud computing” in SimCity also turned out to be a guise for always-online DRM. Hackers proved that the game would run just fine without an Internet connection, at least until it tried to check in with EA.

Now most of the server issues are ironed out, Maxis has committed itself to content updates, and recently an offline mode was even announced. But the new SimCity is still fundamentally broken:

  • Sims go to work at a random business and return to a random home. They no longer have an identity! If sims do not have predictable commutes, then it is almost pointless to design proper transportation infrastructure.
  • Traffic pathfinding is still simplistic. Sims take the shortest route, rather than the fastest or least congested one. This was forgivable in older SimCity games because computers were not fast enough back then, but SimCity promised to fix this with “the cloud.” Obviously, that did not happen.
  • City sizes are ridiculously small compared to previous SimCity installments. Maxis’s reason? “Technical limitations.”
  • Transportation options have been axed severely. There are still railways and highways, but they only connect cities; you cannot build them for intracity transportation. Trams have been added, but only within the medians of 6-lane avenues. Meanwhile, subways have been cut entirely!

Thus, many hardcore city builders are sticking with SimCity 4. But the game isn’t aging well. It was designed for single-core processors, while today’s computers have two or four cores (not to mention hyperthreading). It runs slowly and chugs on large metropolises, even on modern machines. Crashes are common, the game forces a 2D isometric perspective, and there are no updates in sight. Mods can only go so far.

Thanks, gaming industry. You killed another genre. Was it because it wasn’t a first-person shooter?

My Take: Malaysia Airlines Flight 370

For the past few days, the world has been captivated by the mysterious disappearance of Malaysia Airlines Flight 370. Everyone is demanding answers, so naturally the media are eating up every possible lead. And most of them are false.

Plenty of these “media myths” are big exaggerations or just speculation; the networks are irresponsible to report them as fact. For example, we do not know for sure that the plane turned back in the direction of Malaysia. That assertion is based on preliminary radar data that is incomplete and possibly wrong.

Another widely circulated myth is that the passengers are still reachable via their cell phones (voice rings and SMS delivery reports). This idea is simply ludicrous – if the phones were really connected to the cell towers, their location could be pinpointed by the rescue crews immediately. The rings and the delivery reports are the well-documented results of roaming phones and international calls.

The aviation enthusiasts at airliners.net have put together a wiki to compile all known information on the accident and refute some of these misunderstandings. If you are at all interested in the facts, you should check it out.

But a dishonest press is a topic for another day. The subject of this post is my personal theory on what happened to MH 370.

The plane

Flight 370 was operated by a Boeing 777-200ER twinjet airliner, one of the safest passenger aircraft in the industry. Since its introduction in the mid-90s, its record has been marred only by a non-fatal fuel-starvation crash landing at London Heathrow and last year’s crash of Asiana Airlines Flight 214 (which was attributed to pilot error).

Could the crash have been caused by a mechanical failure or even a design flaw? It’s always a possibility, but it’s extremely unlikely. After almost two decades of service, the 777 has proven a rock-solid workhorse, and most aviation accidents are caused by pilot error or botched maintenance.

If it was indeed a problem with the aircraft, an electrical problem of some kind would be most likely. A loss of the electrical system would explain the loss of the transponder and the lack of communication from the flight crew – at least, initially. They could always use battery-powered radios or satellite phones to raise ATC or home base. And an electrical problem alone should not have brought the plane down.

A fire related to the electrical system, however, is another story. The aircraft would be crippled and the crew would be disoriented. This is what happened to Swissair Flight 111 back in 1998. However, that flight crew had enough time to declare an emergency and attempt a diversion.

If the oil rig worker’s report of a jet going down in flames is true, the possibility of an electrical fire could have some substance. However, it is too early to say.

The pilots

The most likely cause of the plane’s loss is pilot error of some kind. The captain was very experienced and had thousands of hours on the 777, but the first officer was a 777 pilot-in-training. (This is a typical setup for Asian airlines – in fact, this is exactly how the Asiana crew operated.)

The accident has stark parallels with Air France Flight 447. That crash happened while the captain was taking a break and an inexperienced crew was in charge. When their airspeed indicator broke down, they panicked and inadvertently stalled the airplane – and then exhibited very poor airmanship by not actually recovering from the stall.

One can imagine a similar occurrence on our doomed aircraft. However, AF 447’s airspeed indicators failed in the first place because the plane’s pitot tubes iced over near a storm. The weather for the Malaysian flight was crystal clear.

Another hypothetical situation – which is somewhat more controversial – is that one of the crew members intentionally brought the aircraft down, either as a suicide or a complicated life insurance scam. This too has happened before – EgyptAir Flight 990 was brought down by a suicidal pilot. (To date, Egypt disputes the official conclusion of the American investigation, and insists the crash was caused by jammed flight controls. There is no substance to this – it seems Egyptian culture does not take kindly to the idea of suicide.)

I personally think the suicide theory deserves some serious investigation. It would perfectly explain the loss of the transponder and the lack of a distress call.

The passengers

Many media reports are pointing to two stolen passports used by Iranian individuals on the flight. Naturally, terrorism comes to mind. However, I think they are a red herring. It is quite common in this part of the world to travel with falsified documents. Furthermore, based on his Twitter feed, it appears that one of the Iranians was a legitimate immigrant using the passport to return to his mother in Germany.

So what about some other kind of terrorist attack? The lack of debris indicates that the airplane was not blown out of the sky in a huge explosion (which also discounts the missile strike theory). But a carefully placed bomb could simultaneously disrupt the electrical system and cripple the airliner only to the extent that it would enter a dive and crash reasonably intact.

Interpol doesn’t think a terrorist attack on this flight is likely, especially given the lack of a group claiming responsibility, but they cannot yet rule it out.

Conclusion

Why did Malaysia Airlines Flight 370 vanish into thin air? Ultimately, we don’t know. There are a myriad of possibilities, but to me, electrical fire, pilot suicide, and a medium-size terrorist bomb are the more plausible ones. Some of the conspiracy theories are really outlandish, like the idea that pirates hijacked the plane and flew it under the radar to Somalia, or that it was struck by a missile launched by North Korea.

Ultimately, we need to look at what we know, as well as what we don’t know.

And remember: as far as getting from point A to point B is concerned, commercial aviation is still the safest way to travel.

Windows: Combat Evolved: a Halo Satire

What would Microsoft’s Halo video game series be like if it involved Microsoft itself?

The Introduction

Halo tells the story of 26th century humanity, which has organized itself under the auspices of the United Nations Space Command (Microsoft). Humans are fighting a losing war against the Covenant (Apple), a theocratic collection of alien races that worship a long-dead alien species called the Forerunners (pre-2000 Macs). Already, many colony worlds, including the military stronghold Reach (IBM), have fallen.

In the Beginning

In the first game, Halo: Combat Evolved, a lone starship (Windows XP) crash-lands on a mysterious Forerunner ringworld (Best Buy) that is thought to be some kind of superweapon. Its human survivors, including the superhuman cyborg Master Chief (Bill Gates), fight the Covenant for control of the ring (store). However, the Covenant accidentally release a zombie-like parasite known as the Flood (Android). It is discovered that the purpose of the halo is actually to cleanse the galaxy of all sentient life, thereby depriving the Flood of all possible infection vectors. The Chief then destroys the ring and its Flood infestation before returning to Earth (Redmond, Washington) to warn of an impending invasion by a new Covenant fleet (the Intel Macintosh).

The Story Continues

In the sequel, Halo 2, the Covenant locate and invade Earth. Despite a valiant defense by the UNSC Home Fleet and Earth’s orbital defense platforms (Windows Vista), a single Covenant carrier punches through and lands at New Mombasa, an African metropolis. With the Master Chief’s help, the UNSC destroys most of the initial assault. However, the carrier makes a hasty slipspace jump to Delta Halo, another halo installation (Newegg). The Covenant and the UNSC once again battle for control of it. Meanwhile, the Chief assassinates a key Covenant leader (Steve Jobs) and the Flood are once again released. This sets off a complicated chain of events that leads to the primary warrior race of the Covenant, the Elites (Mac OS X), seceding from the theocracy. They are opposed by the new warrior race, the Brutes (iOS).

The Elites make a temporary truce with the humans to stop the rest of the Covenant from firing the halo ring. They succeed, but all rings are put on standby, ready to fire remotely from a location known only as “the Ark” (Amazon). The remaining Covenant leadership plan to bring the entire fleet to Earth and uncover a major Forerunner artifact (iOS 7).

In one of the worst cliffhangers in gaming history, the Master Chief stows away and prepares to “finish the fight.”

Finish the Fight

Halo 3 opens with the Chief jumping from the ship and landing outside the ruins of New Mombasa. He helps the UNSC (Windows Phone 7) launch a last-ditch attack against the Covenant excavation site, but they fail to put a dent in the operation. The artifact is activated by the Covenant; it turns out to be a portal to the Ark (flat UI design). The Elites and the UNSC (Windows 7) follow the Covenant through the portal to stop them once and for all. After an epic three-way battle, the Master Chief kills the Covenant leadership and blows up the Ark to eradicate the Flood. Unfortunately, his ship fails to make it back through the portal in one piece, and he is left stranded in unknown space.

A New Era

Halo 3 was followed years later by Halo 4, which is intended to begin a new Halo trilogy.

In Halo 4, the Master Chief crash-lands on a mysterious Forerunner planet called Requiem (Power Mac G4). The UNSC Infinity (Windows 8), a massive capital ship commissioned after the war with the Covenant, attempts a rescue mission, but instead finds itself trapped in Requiem’s gravity well. The Chief helps to free it, but accidentally releases an immensely powerful Forerunner warlord known as the Didact (Steve Wozniak). The Didact intends to take a Forerunner ship to Earth and wipe out humanity with the Composer (Mac OS), a Forerunner weapon that allows him to turn sentient beings into his own soldiers. However, he is stopped in the nick of time thanks to the efforts of the Chief and the Infinity.