All posts by Waldo Jaquith

Waldo Jaquith (JAKE-with) is an open government technologist who lives near Charlottesville, VA, USA.

Truth, earned credibility, and a publisher’s responsibility.

I spent much of the ’00s as a political blogger. I wrote here, mostly about state politics. When I decided to start writing about state politics, in 2003, I sought out other political blogs in Virginia. There weren’t many, maybe a half-dozen. I added them all to my blogroll, made a point of reading those sites and linking to them, and they did the same. Despite our often-oppositional political perspectives, our exchanges were friendly, informative, and fun. I’m still friends with those folks.

In the spring of 2006, I was casting around for how to elevate lesser-known Virginia political blogs, as it didn’t strike me as entirely fair that my site should get so much of the readership. So I set up a blog aggregator—a bit of software that would check each Virginia political blog’s RSS feed every half-hour or so, and syndicate all new blog entries on a central site, creatively named Virginia Political Blogs. It didn’t take long to set up, and having a central site to serve as a commons was immediately popular. Every blog entry was shown in the same context as every other, all in the same typeface, all on an equal footing, listed chronologically.
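The mechanism is simple enough to sketch in a few lines of Python. This is an illustration rather than the original code; the feed URLs are invented, and it assumes the third-party feedparser library:

import time

import feedparser  # third-party; pip install feedparser

FEEDS = [
    "https://example-virginia-blog.com/feed/",  # hypothetical member blogs
    "https://another-example-blog.com/rss",
]

seen = set()  # IDs of entries already syndicated

while True:
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            entry_id = entry.get("id", entry.link)
            if entry_id not in seen:
                seen.add(entry_id)
                # The real aggregator published the entry on the central
                # site; printing stands in for that here.
                print(entry.title, entry.link)
    time.sleep(30 * 60)  # poll every half-hour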

In the months that followed, blogging exploded in popularity, in part because it became much easier to set up a blog. No longer was it necessary to install Movable Type, or the nascent WordPress—Blogger.com would host your site for free. For a while, in that lead-up to the 2006 midterm elections, there was a new Virginia political blog every week. In retrospect, this must have been the peak of the popularity of blogging, before the rise of Facebook and Twitter.

Inevitably, the lowered technological bar meant that some less-knowledgeable folks were able to participate in this commons. But that was OK, because this was a marketplace of ideas, and if people wanted to promote their foolish ideas, they could do that.

This went badly.

Any fool could start a website, get added to my aggregator, and immediately have an audience that numbered in the thousands. And fools did that, by the dozen. They didn’t have to earn an audience by writing things that other blogs would want to link to. They didn’t have to prove themselves in any way. When they wrote things that were completely wrong, offensive, or even dangerous, those things went not to an audience of a dozen people, but to an audience that included not just every political reporter in Virginia and the DC metro area, but a great many crackpots and fringe-theorists as well.

By putting terrible ideas on even footing with great ones, and by replacing a free marketplace of ideas with a leveled one, I inadvertently created a promotional vehicle for the ignorant, the rage-filled, and the chronically dishonest. Presenting them all in the same, shared context gave them an aura of legitimacy. And by automatically reposting their occasional hate speech, violent imagery, and calls to violence, I was enabling—even endorsing—those things.

I knew what I had to do. In December, I pruned the worst actors from the list. They were enraged, and their rage was only magnified by their ignorance (“your taking my free expression that’s against the constitution!!!!”) and their partisanship (“classic libtard”). I’d created a commons, given them access to it, and then taken it away. They tried to create their own blog aggregator, as I recall, but it proved to be beyond their technological capabilities.

* * *
In 2016, my little mistake has been repeated as a huge, national one. We call it “Facebook.”

Any fool can create a Facebook account, get added to their friends’ feeds, and immediately have an audience in the hundreds, potentially in the millions. They don’t have to earn an audience with their writing, but instead rely on social obligations around relationships to guarantee them a readership. Everybody’s Facebook posts are displayed in the same, shared context, with reshared, weaponized Russian propaganda adjacent to class reunion photos and New York Times articles.

Propaganda on Facebook has the aura of legitimacy. It’s been shared by a friend or family member, preying on our sense of trust. The name of the outlet that published the news is displayed in light gray at the bottom of the post, while the friend’s name and photo is displayed prominently at the top. News from eaglepatriot.co looks the same as any article from The Washington Post.

It only took me eight months to figure out that I’d inadvertently created a terrible system that was enabling dangerously stupid people. It’s been twelve years, and Facebook hasn’t learned that lesson yet.

I very much doubt that my poor editorial policy changed the outcome of any elections. Can Facebook say the same?

I want you to become a government tech vendor.

Hey, competent tech folks: your country needs you. Your knowledge, your experience, and your connections can improve the United States for everybody.

I’m not asking you to go work for the federal government.

I’m not asking you to go work for a non-profit.

I’m asking you to become a government technology vendor. I want you to sign up at SAM.gov, start bidding on 18F microcontracts, and eventually pay an attorney to help you navigate the procurement process to get a multi-million-dollar federal technology contract.

Uncle Sam drawing, captioned: I want you to become a gov't tech vendor.

* * *
The same handful of vendors bid on every federal tech project, and often the bids are between one and two orders of magnitude higher than they should be. FedScoop recently published a list of the top 100 federal IT vendors, ranked by income. (I’d only heard of 13 of them before.) 71% of the contract income in this list goes to the top 10 vendors. 22% goes to Lockheed Martin. Look at this distribution of the top 100:

A chart of contract income among the top 100 vendors, which drops off very steeply. Numbers are in millions.

If we performed this exercise with all federal IT contracts—and there will be $86 billion in federal IT spending in FY2017—we’d see that there is a very long tail, though it probably wouldn’t change the fact that most of the spoils are divided among a handful of vendors.

Despite all of this spending (or perhaps because of it), only 6.4% of large federal IT projects succeed. The failure and subsequent rescue of Healthcare.gov shined a light on the pitfalls of our legacy procurement model, and the enormous benefits that can come of working with small, agile teams of software developers who are given the space to do their job.

* * *

I’ve spent the last few years working in tech as a non-profit partner to government and, before that, I worked in tech within government. I am here to tell you that you can effect more positive change as a government vendor than as a helpful non-profit, and that you can be at least as helpful to our nation as a government vendor as you can by working on tech within government. (After all, government outsources thousands of times more technical positions than it staffs directly.)

Generally speaking, free software is useless to governments. If you spent the next eight months feverishly producing the perfect regulatory management platform, and then handed a copy of it to an agency (complete with a FOSS license), odds are slim to none that they would be able to use it. There are far too many hurdles. But if you bid $100,000 on a government RFP for that system, you’d ensure that government would save a bundle and be able to actually use your software. Government has a system for acquiring new technology: the procurement process. Realistically, the change you can provide will come from working within that system, not trying to work outside of it.

The United States needs a small army of competent developers to start hundreds of businesses, bid on federal contracts, and do top-notch work for a fair rate. We need people whose goal isn’t an IPO and fabulous wealth, but instead to earn a nice living for themselves and everybody who works for them while making their country better by creating better technology for government.

There is good and important work that needs to be done in government technology, at federal, state, and local levels. Doing that work in exchange for payment isn’t merely not a bad thing, it may be the only thing you can viably do that actually makes a difference. It’s not reasonable to expect talented developers to perform free work at government hackathons in exchange for pizza while major vendors produce failing software for hundreds of millions of dollars.

A 6.4% success rate isn’t good enough. Fixing this will involve a lot of work beyond merely attracting new vendors—and most of that work is within government—but attracting new vendors is an important part in this improvement. Become a government tech vendor. Help make the U.S. a better place for everybody.

How I built a chicken coop.

I am not a carpenter. My occasional effort to build something, no matter how mundane, ends badly. That’s because carpentry is hard. There are a hundred ways to screw up, and ninety of them are only obvious in retrospect. I took shop class in middle school, so I’m generally comfortable with the tools of the trade, but of course I don’t actually own a drill press or a band saw or any of the other sixties-era industrial equipment that my school trained on. One time I tried to build a temporary woodshed—just something to cover some freshly-milled lumber until it dried—and I wound up blowing a couple of hundred bucks on the structural equivalent of the deformed Ripley clone in Alien Resurrection.

The photo accompanying the plans.

But our chicken coop’s lifespan was reaching its end, and I was faced with a choice: repair our six-year-old (to us), second-hand chicken coop? Or build a new one? One thing led to another, and before I knew it I’d decided to build a pretty non-trivial shed-cum-coop. The cost of the materials was steep—about $800—and I didn’t understand easily a quarter of the instructions. (“Pocket holes”? “Kreg Jig”? “Toenail”?) I figured I could knock it out in a week. That was in May. I finished it this week.


Preparation

There were a few things I figured out quickly. The first was that my tools were not up to the task. Using a combination of American Express points and cash, I got a cordless drill, aviation shears, some stronger drill bits, sawhorses, an 8′ ladder, and a jigsaw. I filled up my pickup’s bed in the first of a dozen trips to Lowe’s, with dozens of 2×3s and 2×4s, a stack of plywood and T-11 (which turned out to be a kind of cheap, wooden siding), screws, hinges, concrete deck blocks, and bags of gravel. The second was that I was in way over my head. The third was that I was determined to follow through to completion, not cutting any corners, but getting it right for once.

The Foundation

The framework of the foundation.

The foundation was a huge pain in the ass. I had no idea. Living on the side of a mountain, as I do, there is no flat ground. Getting each of the six concrete deck blocks both level and located correctly relative to the other five was an exercise in frustration. I did this work on a 90°F day in May, and within 30 minutes I found myself wanting to cut corners. (“This is level enough.” “This is square enough.”) A bad foundation would be a bad…er…foundation, so I stopped, relaxed, thought about it, took measurements, and did things right. Framing the foundation was easy—it was just a grid of treated 2×4s.

I put a plywood/insulation sandwich atop the foundation framing, to which the walls could be attached.


Framing the Walls

“Measure twice, cut once.” This aphorism is wrong. Twice is not nearly enough times. I built each one of the coop walls at least two times. It’s pretty simple, in theory—cut sticks of wood at the prescribed size, screw them together with an impact driver—and yet. Three of them were pretty straightforward, at least in retrospect, but the back wall was tough, because the header (the piece of wood that spans the top) had to be mounted at an angle, for the shed roof to slant down. This meant learning to use my speed square. It’s basically a protractor for carpentry. Once I figured it out, it was easy. When they were all finished, my shed was completely full of framed walls.

The framed walls.

Mounting the walls to the foundation was not a one-man job. I pressed my wife into service; she held each wall up while I used structural wood screws (very long, thick screws) to affix it to the foundation. When all four were up, I had walls that were all wrong. The front wall was not wide enough—falling maybe 1.5″ short of reaching all the way to the left side of the shed—and the side walls were also too narrow, meaning that the whole shed had a 1.5″ lip on the front of it. At this point, I could have taken down the walls and rebuilt them, but I decided these problems were within reason, and also rebuilding the walls sounded awful.

At this point it looked real! Or, at least, like a real-world wireframe of something real.

Skinning It

The next step was to put T-11 siding on, and plywood to form the roof. I could have done this by transferring the framing measurements onto the waiting sheets, but instead I went with something more direct: my wife held the sheets in place while I traced the outline of the framing. Using the sawhorses and clamps to hold the sheets of wood in place, I used the jigsaw to cut along the lines. After mounting them on the framing, using the impact driver to quickly affix a couple of screws, I used the jigsaw to trim off the remaining bits. Since the roof is just a big rectangle, I was able to cut out the plywood for that pretty easily.

The basic structure, completed.

At this point, I thought most of the work was finished. That was not even a little bit true.

Doors, Windows, and Roofing

Shutters are easy.

I immediately set to building doors and shutters, to keep the rain out. Both turned out to be surprisingly easy. The shutters are just a long piece of wood cut into five segments and screwed together. I covered the windows with hardware cloth, to keep out predators. The door is just a piece of plywood with some framing to keep it rigid. (That’s not to say they’re great. More on that below.) Trim was also pretty simple, although complicated a bit by the need to make sure that the style is kept consistent (e.g., horizontal trim overhangs vertical trim, and 4″ wide corner trim overhangs corners by 1″, where it meets 3″ trim). I made a bunch of mistakes on the trim, despite its hypothetical simplicity, but I eventually got it more or less correct.

The next protective measure was the roof. I’d hoped to use metal, but the more that I read about working with metal roofing, the more convinced I became of the inevitability of grievous injury. So I went with cheap, plastic roofing, which was really hard to cut (the saw made it vibrate all over the place), and I wound up hiding the terribly ragged cut ends on the back side of the coop. It was easy enough to screw down and, later, use gap sealer to close off the space under each of the ridges in its wavy profile.

Painting

I blew $20 on brushes by not cleaning them promptly.

Painting was not so easy. I used red barn paint on the T-11 and white latex paint on the trim. (I still haven’t treated the wooden foundation, to protect it from rot, but I intend to do that shortly.) The trim needed two coats, the painter’s tape refused to stick to the T-11, cleaning paintbrushes turned out to be frustrating and hard, and I kept splashing things with the wrong color of paint and forgetting to paint every surface (e.g., the top edge of the window trim). I had to paint the inside, too, to protect it from the messy realities of poultry. I did that with (white) Kilz primer, painting the walls and the floor.

The freshly painted interior.

Floor

Paint would not be enough to protect the floor. I bought a roll of cheap, remnant vinyl flooring and a small bucket of flooring glue. Following the instructions carefully, I trimmed it to fit and glued it down. After it dried, I used silicone caulk to seal the edges, to keep moisture from getting under the floor.

Poultry Accouterments

A perch for the chickens.

At this point, I had a shed, which would do my chickens and ducks no good in its present form. Using some industrial-strength shelving brackets, I made a mount for a perch—which would consist of a 2×4—and, below it, a wide shelf (known as a “poop deck,” for obvious reasons). I cut a 12″ square hole in the wall for the hens’ door, and built a little glide-track frame for a guillotine-style door. And I used some leftover T-11 and lumber to assemble a ramp for the hens to use to get in and out of the shed, as their door is elevated a good two feet above the ground. I still need to build nesting boxes and a second door, so that we can rotate their pasture access.

A guillotine-style door for the hens.

Acceptance Testing

They’re not thrilled with the change. Our hens have spent their whole lives in their rapidly decaying mobile coop. One even flew over the fence to get back to the old coop, where I found her in the nesting box at dusk. On the first night, I had to carry each of them into the new coop, one by one, and lock them up to prevent their escape. So, not a home run.

The completed coop.

Mistakes

Oh, God, the mistakes. I see nothing but mistakes when I look at the shed. A door latch that doesn’t quite line up. Shutters with a 1″ gap between them. Little holes everywhere, where I had to re-drill and later patch with wood filler. Blobs of gap filler, which expanded far beyond my expectations, requiring that the excess be trimmed off with a knife. Crooked trim, angles that aren’t even close to 90°. I could go on. Some of these I’ve fixed, or will fix. Others I’ll have to live with. But I think all are within reasonable parameters.

Patches in the T-11.
Overflowing gap filler.
The shutters that don’t quite close.
A misaligned door latch. (It started out aligned correctly, but the door settled.)
Sloppy paint.

Lessons Learned

So, so many lessons learned.

  • Learning new things is hard. Learning new things in the physical world—i.e., not writing code—is really hard.
  • Building things is expensive. Even if the individual components are inexpensive, the tools required to build stuff cost a lot of money.
  • When I hear a nagging voice in my head that says “hey, you’re making this wrong, maybe,” I should stop and listen to it.
  • Painting sucks.
  • I should expect to do everything at least twice, because I’m going to mess it up the first time.
  • Building physical things scratches much the same itch as building software, with the frustrating exception that a physical thing can only be used by me, while software that I build can benefit many people.

All in all, this has been a great experience. I’ve gotten a lot better at basic carpentry in the intervening two months, making fewer mistakes and increasing my understanding of what’s possible. I intend to continue to build things, as life necessitates. But now that this project is wrapping up, I look forward to getting back to writing code.

Term-limiting your organization can be a gift to future you.

For a great many mission-based organizations, there is some point in time by which they should have accomplished their mission. If an organization has done that, then it should stop. And if it hasn’t accomplished its mission by then, it should still stop, because it’s apparently not able to get the job done.

The landscape is littered with zombie non-profits that exist to exist, doing work that is almost wholly unrelated to their original purpose, so that their employees may continue to be employed. Is your goal to promote the building of roads suitable for automobiles? Cool, do that, and stop once you’ve accomplished it. Otherwise, a century later, you’ll just be a sad, sprawling towing insurance company that could be easily replaced by an app. A review of the non-profits supported by any local community foundation will yield vast numbers of organizations that long ago ceased to be useful, as those community foundations know, but a board member golfs with the board chair of that non-profit, and, hey, they employ people…so.

Three years ago, when a few of us were planning U.S. Open Data, then-US-Deputy-CTO Nick Sinai made a suggestion premised on this notion. He said I should plan to shut down the organization after a few years. At first, the proposal struck me as bizarre. Wouldn’t it make more sense to build something lasting? When you’re trying to foster a movement, doesn’t a time limit hinder that goal? The answer, as it turned out, was a very clear no.

Having a clear, public drop-dead date (which we set at four years) for U.S. Open Data has been a gift. It’s informed my work on a daily, even hourly, basis.

The organization was created to further the cause of open data—that is, to advance something larger than any one organization, something that will outlast any one organization. Trying to do that with U.S. Open Data on a permanent basis (whatever that means) would mean working to make U.S. Open Data important enough that funders would want to support it, and creating a permanent fundraising infrastructure. And in building the larger network infrastructure of open data, our incentive would be to place ourselves at the center of that network, so that we’d be too important to go away. When people approached us about creating new businesses or organizations in this space, our incentive would be to discourage them, to reduce competition. So we see that the best interests of a single actor don’t necessarily overlap with the best interests of a cause.

This organizational term limit informed every major decision that we made, most every minor decision, and a great many trivial ones. With an overall incentive to build up the open data ecosystem, instead of building up ourselves, we’ve been forced to consider the decisions before us not in terms of “what is best for us?” but instead in terms of “what is best for open data?” This approach clarifies every decision, forcing the morally superior choice. And the most reliable way to make morally superior decisions is to make sure that doing so is in your best interest.

Like Odysseus lashing himself to his ship’s mast, we declared our time limit publicly, at the outset, to ensure that we’d be held to it. Am I a good enough person to shut down an organization just to keep it from potentially losing its way in years to come? Maybe. Best not to test it.

To anybody looking to start a mission-based non-profit, especially one with a goal-based mission (“build a monument”) as opposed to an unlimited mission (“support the arts”), I heartily recommend establishing a publicly-promoted end date.

Making it work requires 1) setting a date certain, instead of something vague, 2) having a board that is committed to making you adhere to that date, and 3) reminding people early and often of your term limit. These three things will ensure that you’ll be held to your own standards, and that nobody will be caught by surprise when your organization goes away.

As a benefit, this creates a sort of sustainability in funding that’s appealing to the right funders. When your organization’s goal isn’t existence in perpetuity, funding it is no longer an exercise in long-term strategy. Depending on the organization’s time limit, it might be possible to fund its entire existence in a single grant. (We had two general funding grants: one from the John S. and James L. Knight Foundation and one from the Shuttleworth Foundation.)

Every grant-funded organization will eventually be terminated by an inability to obtain additional funding, at the point at which it either has accomplished its mission (and funders see no reason to continue their support) or has failed to accomplish its mission (and, again, funders see no reason to continue their support). In short, you can decide to term-limit your own organization, or you can wait for funders to do that for you, at a time of their choosing.

Term limits aren’t right for all mission-based organizations, but many of them should regard the use of term limits as their null hypothesis. It may be harder to reject than you suspect.

How to get started with continuous integration.

I’ve put off learning to use continuous integration tools for a few years now. There’s never a good time to complicate my development process. But today I finally did it. It works differently than I thought, and was easier than I expected, so it’s worth documenting the process for others.

I have a non-trivial number of GitHub repositories that are primarily data-based—that is, there’s no software to be tested, but instead the validity of data. I’m forever lazily checking in YAML, JSON, and XML that’s malformed, or doesn’t comply with the schema. I try to remember to test locally, but sometimes I forget. And sometimes others contribute code to my projects, and I have no way of knowing how they’ve tested those modifications.

So today I set up a Travis CI account. I started with their “Travis CI for Complete Beginners” guide, which immediately proved to be poorly named and generally confusing. Instead, I mucked around a bit until I figured things out.

Here are the salient points about Travis CI:

  • Once you link your GitHub account to Travis CI, every commit that you make is reported to their server, to (potentially) be tested.
  • By creating a .travis.yml file in a repo, you are telling Travis CI “please test this repository with each commit.”
  • The .travis.yml file tells Travis CI exactly what to test and how.
  • Travis CI runs tests by launching a little Docker container (I assume), per the specs established in your .travis.yml config file, and executing the program that you instruct it to run. That program might be a shell script or it might be something you write in any language that you want. You keep it in your repo (perhaps in a /tests/ directory) with everything else.
  • A test fails or succeeds based on your test program’s return status. If it returns 0, it succeeded, otherwise it failed. If the build fails, Travis CI will email you with your program’s output.

tl;dr: With every commit that you make, Travis CI runs one or more commands of your choosing (e.g., a linter) and, if that throws an error, you’ll be emailed a notification.

For example, here’s the .travis.yml file that I wrote:

language: node_js            # run the build in a Node.js environment
node_js:
  - "stable"                 # use the most recent stable Node release
before_install:
  - npm install jsonlint -g  # install the JSON validator globally
script: tests/validate.sh    # the test command; its exit status determines pass/fail

This tells Travis CI to launch a Docker instance that supports Node.js—using the most recent stable version—and install the jsonlint Node module (a JSON validator). Then it should run the script at tests/validate.sh within the repository.

validate.sh looks like this:

#!/bin/sh
# Exit non-zero if any file under schemas/ fails JSON validation.
find schemas -type f -exec jsonlint -q '{}' +

It’s simply making sure that every file in the /schemas/ directory is valid JSON. I could run all kinds of tests within this script, of course, but this is all that I’m interested in right now.
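Since Travis CI only cares about the exit status, the test program needn’t be shell. Here’s a minimal sketch of the same check in Python (an illustration, not what this repo actually uses):

#!/usr/bin/env python3
# Exit non-zero if any file under schemas/ is not well-formed JSON.
import json
import pathlib
import sys

failed = False
for path in pathlib.Path("schemas").rglob("*"):
    if not path.is_file():
        continue
    try:
        json.loads(path.read_text())
    except (json.JSONDecodeError, UnicodeDecodeError) as err:
        print(f"{path}: {err}", file=sys.stderr)
        failed = True

sys.exit(1 if failed else 0)  # a non-zero exit fails the Travis CI build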

And that’s it! Every time that I make a commit to this repository, Travis CI is notified, it loads the config file, runs that test program, and tells me if it returns an error.

Of course, there’s a lot more that I need to learn about continuous integration. I should be writing tests for all of my software and running those tests on each commit. I imagine that continuous integration gets a lot more highfalutin’ than what I’ve done so far. But this new practice, if I’m good about making a habit of it, will improve the quality of my work and help make me a better member of the open source software world.

Shuttleworth fellowship.

I am very happy that, as of this week, my work at U.S. Open Data is funded entirely by the Shuttleworth Foundation. The South African organization has awarded me a one-year fellowship, which covers my salary and also provides up to $250,000 in project funding in that time. I’m grateful for their support, and I’m excited about the work I’ll be able to do in the year ahead to make open data more vibrant and sustainable in the U.S., especially within government.

“Accidental APIs”: Naming a design pattern.

Like many open data developers, I’m sick of scraping. Writing yet another script to extract data from thousands of pages of HTML is exhausting, made worse by the sneaking sense that I’m enabling the continuation of terrible information-sharing practices by government. Luckily, it’s becoming more common for government websites to create a sort of accidental API—populating web pages with JSON retrieved asynchronously. Because these are simply APIs, albeit undocumented ones, retrieving that JSON directly is a far better method of obtaining data than scraping. There is no standard term to describe them. I’ve been using the phrase “accidental API,” but that’s wrong, because it implies a lack of intent that can’t be inferred. (Perhaps the developer intended to create an API?)
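Here’s a minimal sketch of what consuming one of these looks like. The endpoint is hypothetical, the sort of URL you’d find by watching a page’s asynchronous requests in the browser’s developer tools:

import requests  # third-party; pip install requests

# Hypothetical endpoint, discovered via the browser's network inspector.
resp = requests.get(
    "https://example.gov/api/grants",
    params={"page": 1},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# The response is already structured data; no HTML parsing required.
for record in resp.json():
    print(record)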

Recently, I solicited suggestions for a better name for these.

The best ones are immediately understandable and don’t ascribe intent to the developer. I suspect I’m going to find myself using Bill Hunt’s “incidental API” and my (and Tony Becker’s) “undocumented API.” I particularly like “undocumented API” because it begins with the assumption of competency on the part of the developer—that the only shortcoming of the API is its documentation—but I’ll try out a few of them in the coming weeks and see what sticks.

Dynamic electrical pricing demands dynamic price data.

The power industry has begun its long-anticipated shift towards demand-based pricing of electricity. Dominion Power, my electric company here in Virginia, has two basic rates: winter and summer. Although the math is a bit complicated, electricity costs about 50% more in the summer than in the winter, averaging 12¢ per kilowatt hour. (One can also pay for sustainably sourced energy, as I do, and this raises these rates by 1.3¢ per kilowatt hour.) While this price system is very simple, it is also bad, because it fails to respond to consumer demand or the realities of electrical generation.

Here’s an explanation of the problem and the proposed solution: open electrical rate data.

Excess Demand

On a very hot day—say, north of 100°F—everybody wants to keep their house at 72°. This requires a great deal of electricity, which means that Dominion has to generate a great deal of electricity. And that’s fine, because people are paying per kilowatt hour. If they want to pay $1 an hour to keep their house cool, that’s their prerogative. They pay, and Dominion uses the money to run their plants. But this all starts to fall apart when Dominion nears its maximum capacity.

As demand approaches capacity, Dominion is faced with a dilemma. Like most power companies, Dominion probably has a standby boiler in its coal-based power plants. This is not normally fired up, because it’s the oldest, most polluting boiler that they have, falling well below the efficiency standards of modern state and federal regulations. Turning it on might increase the plant’s emissions of regulated pollutants tenfold, and guarantees that they’re going to be paying fines. At 10¢ per kilowatt hour, running their modern boilers is a profitable enterprise, but running the ancient, standby one is a money-losing endeavor.

In order to avoid brown-outs—demand exceeding capacity, resulting in insufficient amounts of power being delivered to customers—Dominion has to start up this nasty old boiler, even though it might only be needed to provide power to a few thousand customers. The incremental cost of serving those few customers is enormous, but necessary to keep the whole enterprise going.

Worse still, imagine if the temperature continues to climb. Demand spikes further. More power is needed than Dominion can generate or buy from other power companies (who are dealing with the same problem). Brown-outs or rolling blackouts are now impossible to avoid. Customers are angry. Dominion is losing money.

Dynamic Pricing Models

Enter dynamic—aka “demand-based”—pricing. There are two ways that dynamic pricing can work.

Dominion’s summer rate plan.

The first dynamic pricing model is based on a schedule of rates relative to demand. This tells customers how much power costs on low-demand days versus high-demand days, with any number of gradients between the two. And within that daily rate difference, there are price changes throughout the day. A low-demand day might average around 9¢ per kilowatt hour, and a high-demand day might top out at 20, 30, even 50¢ per kilowatt hour. The advantage of this system is that it’s controlled and limited—people know what the possibilities are, and there’s a theoretical cap on how much power can cost. The disadvantage is that customers have no way of knowing how much collective demand exists: while Dominion understands that a high-demand day is anything north of (say) 25,000 megawatts, customers can’t see that number. This is an actual system that exists around the nation right now, and one that Dominion allows customers to opt into.

The second dynamic pricing model is based on a real-time auction of electrical rates. For this approach to work, you’d tell your appliances how much you’re willing to pay to run them. You’ll pay no more than 35¢ to dry a load of laundry. You’ll pay no more than $2.50/day to keep your house cool, unless your house gets above 78°, in which case you’ll pay up to $5.00/day. Your water heater will keep water at 130°, unless power goes above 15¢ per kilowatt hour, in which case it will drop to 120°. And so on. Then your home power meter aggregates this data, and makes bids for power, bidding against every other customer. This works somewhat like eBay’s automatic bid system, and very much like Google Ads’ pricing model. Of course, this infrastructure does not exist yet, and so this is entirely in the realm of the imaginary. Still, I feel comfortable saying that this system is inevitable.
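To make the mechanics concrete, here’s a toy sketch in Python; every appliance name and number is invented:

# Each appliance states the most it will pay, in cents per kilowatt hour.
appliance_bids = {
    "clothes_dryer": 12,
    "air_conditioner": 25,
    "water_heater": 15,
}

def loads_to_run(clearing_price):
    """Return the appliances whose bids meet or beat the market price."""
    return [name for name, limit in appliance_bids.items()
            if limit >= clearing_price]

print(loads_to_run(14))  # ['air_conditioner', 'water_heater']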

Returning to the reality of the first model—a published rate schedule—there’s a serious problem with information asymmetry. How are you to know the cost of electricity at any given time, if you don’t know whether it’s a low-, medium-, or high-cost day? Dominion’s solution to this is both straightforward and complicated: they’ll e-mail you at 6 PM every day and tell you which of three rate structures they’ll use the following day. Each rate structure changes over the course of the day, with different prices overnight, in the morning, through the bulk of the day, and in the evening.

But, wait, it gets harder. Dominion also institutes a “demand charge.” Every half hour, they sample how much power you’re using at that moment. Then your monthly bill includes a fee based on the largest amount of power that your home was using at any one of those sampled moments in the prior 30 days. If you used no power all month, except for one minute in which you used a very large amount of power, you would be billed a correspondingly large amount, despite your near-zero average.
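A toy illustration of that arithmetic, with an invented demand rate of $1.75 per kilowatt of peak usage:

# One half-hourly sample spikes to 9.6 kW; the other 1,439 are near zero.
samples_kw = [0.1] * 1439 + [9.6]

demand_rate = 1.75  # hypothetical fee, in dollars per kW of peak demand
demand_charge = max(samples_kw) * demand_rate

print(f"${demand_charge:.2f}")  # $16.80, despite a near-zero average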

For customers, Dominion’s approach is dizzying. It requires that people keep track of electrical rates on a day-to-day and hour-to-hour basis, peak home power usage at all times, and provides nothing that would support the growing industry of home automation and energy saving devices, which could manage electrical use automatically. The popular Nest thermostat can be automatically reprogrammed via the internet. Apple recently announced an entire platform of home automation tools, controllable and configurable via iPhone, iPad, or desktop computer. Philips makes a light bulb kit that permits each bulb to be controlled remotely, the brightness and color of the bulbs configurable individually. There’s a whole ecosystem of hardware, software, and data to allow one’s home’s energy use to be adjusted in response to external factors. But what they can’t do is read Dominion’s e-mails at 6 PM every night. That’s an unbridgeable air gap, a failure on the part of Dominion that is perhaps mystifying or perhaps rational, depending on one’s level of cynicism.

Open Electrical Rate Data

There’s a simple solution to this: open electrical rate data. In addition to sending out an e-mail at 6 PM every day, Dominion could maintain a file on their server that provides machine-readable data about current and near-future power rates. It might look something like this hypothetical sketch, in which the field names, classifications, and prices are all invented:
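{
  "utility": "Dominion",
  "updated": "2014-06-17T18:00:00-04:00",
  "schedule": [
    {
      "date": "2014-06-18",
      "classification": "high",
      "rates": [
        { "start": "00:00", "end": "06:00", "cents_per_kwh": 6.5 },
        { "start": "06:00", "end": "11:00", "cents_per_kwh": 12.0 },
        { "start": "11:00", "end": "19:00", "cents_per_kwh": 38.0 },
        { "start": "19:00", "end": "24:00", "cents_per_kwh": 12.0 }
      ]
    }
  ]
}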

Right now, the closest that they get is a retrospective page, which has allotted space for the next day’s price (“Classification for tomorrow:”), but the page text ends with the colon—I’ve yet to see that classification provided. [Hours after I published this, Dominion finally wrote something in that space, I assume prompted by the 90°F forecast.]

If this data were provided, it would be trivial for home automation and energy management tools to use it to schedule and control energy-intensive home services and appliances.
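For instance, here’s a minimal sketch of a home automation script that reads the hypothetical rate file above and decides whether to pre-cool the house; the URL and field names are invented:

import json
import urllib.request

RATES_URL = "https://example.com/rates.json"  # hypothetical endpoint

with urllib.request.urlopen(RATES_URL) as response:
    rates = json.load(response)

today = rates["schedule"][0]
peak = max(block["cents_per_kwh"] for block in today["rates"])

if peak > 20:  # arbitrary threshold, in cents per kWh
    print("High-price day: pre-cool the house overnight.")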

And, in fact, that’s a feature that Nest supports. The thermostat will dynamically adjust the temperature of a home during the highest-priced periods, generally very hot summer afternoons and very cold winter nights. But because precious few power companies provide the necessary data to support this feature, it’s not useful to most Nest customers. Nest doesn’t provide a comprehensive list of participating power companies, and after searching through their press releases and trying out a handful of ZIP codes from across the country in their form, I have to conclude it’s because there are very few participating power companies.

Publishing open electrical rate data is not difficult. If they can send out an e-mail, they can certainly update a JSON file. For a competent developer, it would be an afternoon project. A company that is capable of managing an entire electrical grid and the entire usage tracking and billing system that accompanies it is certainly capable of a tiny project like this.

I’ll warrant that Nest—which is owned by Google—is in a good position to establish a standard JSON schema for power companies to use. Some power companies would probably welcome being told what schema to use, giving them one fewer thing to worry about. Right now, it appears that Nest is basically taking any data that they can get. (It wouldn’t shock me to find out that they’re intercepting night-before e-mail alerts and using those to update thermostats with rate data.) Power companies are going to catch on to the enormous importance of rate data, and Nest has the first-mover advantage. I hope that Nest puts together an uncomplicated schema, advertises it on a developer page, encourages existing and new partners to publish in that schema, and eventually requires that participating power companies comply with it, assuming that they end up in a position where they can make such demands.

Open electrical rate data will provide real savings to consumers and utilities alike. It’s a necessary and inevitable development in power distribution and home automation. I hope that power companies and Nest take the simple steps necessary to usher in this era of open energy data, and soon.

What’s wrong with Puckett’s resignation?

Further to the matter of Sen. Phil Puckett’s retirement, I want to play out the shades of inappropriateness here. While what he has done clearly feels wrong (allegedly quitting his seat in the Senate of Virginia in exchange for a job running a state-chartered organization and a judgeship for his daughter, all done immediately prior to the deadline for the legislature to hold a vote on the budget, in which his absence will give Republicans a one-member majority and the ability to prevent healthcare reform), I think it’s worth exploring what about it is wrong.

Imagine that Sen. Puckett had sold his vote. In exchange for $150,000 in cash, he would vote against a budget that included healthcare reform. Any reasonable person would regard that as wrong.

Imagine that Sen. Puckett had sold his non-vote. In exchange for $150,000 in cash, he would take a walk when the bill came up for a vote. I think that any reasonable person would regard that as wrong, too.

Imagine that Sen. Puckett had sold his absence. In exchange for $150,000 in cash, he would make sure that he was thousands of miles away when the legislature reconvened to hold the vote. I also think that any reasonable person would regard that as wrong.

Now imagine that Sen. Puckett sold his resignation. In exchange for $150,000, he would quit so that he could not cast a vote on the budget bill. Many reasonable people would regard that as wrong.

And now we have the alleged reality, of Sen. Puckett selling his resignation in exchange for perhaps $150,000 annually, so that he could not cast a vote on a budget bill. Many reasonable people would also regard that as wrong.

In that real-life scenario, we have two possible parties who might have done something wrong. First, we have Sen. Puckett, who may or may not have intended to quit in order to prevent a vote from happening—he might argue that he simply quit the legislature for a tantalizing job offer that was only available within a small window, but that he couldn’t have held while also serving in the legislature. Puckett of course knew that his resignation would prevent him from voting on the most important bill before the legislature, and unless he is a very stupid man, he would have known that Kilgore was offering him the job so that he could not cast that vote. Then we have Del. Terry Kilgore, the chair of the tobacco commission, who offered Puckett the job. Kilgore likewise knew what Puckett’s resignation would mean. I suspect that the two key questions here are these: Did Kilgore intend to prevent Puckett from voting on the budget bill by offering him a job? And did Puckett intend to not vote on the budget bill in exchange for accepting a job?

Kilgore, of course, thinks that he’s being clever by saying that he never offered Puckett a job, but that “if he’s available, we would like to have him.” This is almost certainly bullshit, which I define as meaning that the statement is a) untrue, b) Kilgore knows that it’s untrue, and c) we know it to be untrue. It insults our collective intelligence to claim that Puckett resigned from the Senate of Virginia on the vague hope of a job heading the tobacco commission. (Or, at least, it insults Puckett.) And that leads us to the third key question: Have Puckett and Kilgore already negotiated the terms of employment for the tobacco commission?

If state investigators look into a violation of § 18.2-447, they’re going to get records of communications between Puckett and Kilgore. If they actually struck a deal here, and they didn’t have the good sense to only negotiate the terms privately and in-person (making discovery impossible), this could get ugly, and fast. On the other hand, if they’re aboveboard and have any sense, they made sure that all negotiations occurred in the presence of attorneys, and were done either exclusively by writing or were recorded as audio or video.

Evidence has a way of disappearing. I hope investigators are looking into this.