Category Archives: Tech

Truth, earned credibility, and a publisher’s responsibility.

I spent much of the ’00s as a political blogger. I wrote here, mostly about state politics. When I decided to start writing about state politics, in 2003, I sought out other political blogs in Virginia. There weren’t many, maybe a half-dozen. I added them all to my blogroll, made a point of reading those sites and linking to them, and they did the same. Despite our often-oppositional political perspectives, our exchanges were friendly, informative, and fun. I’m still friends with those folks.

In the spring of 2006, I was casting around for how to elevate lesser-known Virginia political blogs, as it didn’t strike me as entirely fair that my site should get so much of the readership. So I set up a blog aggregator—a bit of software that would check each Virginia political blog’s RSS feed every half-hour or so, and syndicate all new blog entries on a central site, creatively named Virginia Political Blogs. It didn’t take long to set up, and having a central site to serve as a commons was immediately popular. Every blog entry was shown in the same context as every other, all in the same typeface, all on an equal footing, listed chronologically.

In the few months afterward, blogging exploded in popularity, in part because it became much easier to set up a blog. No longer was it necessary to install Movable Type or the nascent WordPress; hosted services would run your site for free. For a while, in that lead-up to the 2006 midterm elections, there was a new Virginia political blog every week. In retrospect, this must have been the peak of the popularity of blogging, before the rise of Facebook and Twitter.

Inevitably, the lowered technological bar meant that some less-knowledgeable folks were able to participate in this commons. But that was OK, because this was a marketplace of ideas, and if people wanted to promote their foolish ideas, they could do that.

This went badly.

Any fool could start a website, get added to my aggregator, and immediately have an audience that numbered in the thousands. And fools did that, by the dozen. They didn’t have to earn an audience by writing things that other blogs would want to link to. They didn’t have to prove themselves in any way. When they wrote things that were completely wrong, offensive, or even dangerous, instead of going to an audience of a dozen people, it went to an audience that included not just every political reporter in Virginia and the DC metro area, but a great many crackpots and fringe-theorists as well.

By putting terrible ideas on even footing with great ones, and by replacing a free marketplace of ideas with a leveled one, I inadvertently created a promotional vehicle for the ignorant, the rage-filled, and the chronically dishonest. By presenting them all in the same, shared context, it gave them an aura of legitimacy. And by automatically reposting their occasional hate speech, violent imagery, and calls to violence, I was enabling—even endorsing—those things.

I knew what I had to do. In December, I pruned the worst actors from the list. They were enraged, and their rage was only magnified by their ignorance (“your taking my free expression that’s against the constitution!!!!”) and their partisanship (“classic libtard”). I’d created a commons, given them access to it, and then taken it away. They tried to create their own blog aggregator, as I recall, but it proved to be beyond their technological capabilities.

* * *
In 2016, my little mistake has been repeated as a huge, national mistake. We call it “Facebook.”

Any fool can create a Facebook account, get added to their friends’ feeds, and immediately have an audience in the hundreds, potentially in the millions. They don’t have to earn an audience with their writing; instead, they rely on the social obligations of their relationships to guarantee a readership. Everybody’s Facebook posts are displayed in the same, shared context, with reshared, weaponized Russian propaganda adjacent to class reunion photos and New York Times articles.

Propaganda on Facebook has the aura of legitimacy. It’s been shared by a friend or family member, preying on our sense of trust. The name of the outlet that published the news is displayed in light gray at the bottom of the post, while the friend’s name and photo are displayed prominently at the top. News from a no-name propaganda mill looks the same as an article from The Washington Post.

It only took me eight months to figure out that I’d inadvertently created a terrible system that was enabling dangerously stupid people. It’s been twelve years, and Facebook hasn’t learned that lesson yet.

I quite doubt that my poor editorial policy changed the outcome of any elections. Can Facebook say the same?

How to get started with continuous integration.

I’ve put off learning to use continuous integration tools for a few years now. There’s never a good time to complicate my development process. But today I finally did it. It works differently than I thought, and was easier than I expected, so it’s worth documenting the process for others.

I have a non-trivial number of GitHub repositories that are primarily data-based—that is, there’s no software to be tested, but rather the validity of data. I’m forever lazily checking in YAML, JSON, and XML that’s malformed or doesn’t comply with a schema. I try to remember to test locally, but sometimes I forget. And sometimes others contribute code to my projects, and I have no way of knowing how they’ve tested those modifications.

So today I set up a Travis CI account. I started with their “Travis CI for Complete Beginners” guide, which immediately proved to be poorly named and generally confusing. Instead, I mucked around a bit until I figured things out.

Here are the salient points about Travis CI:

  • Once you link your GitHub account to Travis CI, every commit that you make is reported to their server, to (potentially) be tested.
  • By creating a .travis.yml file in a repo, you are telling Travis CI “please test this repository with each commit.”
  • The .travis.yml file tells Travis CI exactly what to test and how.
  • Travis CI runs tests by launching a little Docker container (I assume), per the specs established in your .travis.yml config file, and executing the program that you instruct it to run. That program might be a shell script or it might be something you write in any language that you want. You keep it in your repo (perhaps in a /tests/ directory) with everything else.
  • A test fails or succeeds based on your test program’s return status. If it returns 0, it succeeded, otherwise it failed. If the build fails, Travis CI will email you with your program’s output.

tl;dr: With every commit that you make, Travis CI runs one or more commands of your choosing (e.g., a linter) and, if that throws an error, you’ll be emailed a notification.

For example, here’s the .travis.yml file that I wrote:

language: node_js
node_js:
  - "stable"
before_script:
  - npm install jsonlint -g
script: tests/

This tells Travis CI to launch a Docker instance that supports Node.js—using the most recent stable version—and install the jsonlint Node module (a JSON validator). Then it runs the script in the repository’s tests/ directory, which looks like this:

find schemas -type f -exec jsonlint -q '{}' +

It’s simply making sure that every file in the /schemas/ directory is valid JSON. I could run all kinds of tests within this script, of course, but this is all that I’m interested in right now.

And that’s it! Every time that I make a commit to this repository, Travis CI is notified, it loads the config file, runs that test program, and tells me if it returns an error.

Of course, there’s a lot more that I need to learn about continuous integration. I should be writing tests for all of my software and running those tests on each commit. I imagine that continuous integration gets a lot more highfalutin’ than what I’ve done so far. But this new practice, if I’m good about making a habit of it, will improve the quality of my work and help make me a better member of the open source software world.

“Accidental APIs”: Naming a design pattern.

Like many open data developers, I’m sick of scraping. Writing yet another script to extract data from thousands of pages of HTML is exhausting, made worse by the sneaking sense that I’m enabling the continuation of terrible information-sharing practices by government. Luckily, it’s becoming more common for government websites to create a sort of an accidental API—populating web pages with JSON retrieved asynchronously. Because these are simply APIs, albeit without documentation, this is a far better method of obtaining data than via scraping. There is no standard term to describe this. I’ve been using the phrase “accidental API,” but that’s wrong, because it implies a lack of intent that can’t be inferred. (Perhaps the developer intended to create an API?)

Recently, I solicited suggestions for a better name for these.

The best ones are immediately understandable and don’t ascribe intent to the developer. I suspect I’m going to find myself using Bill Hunt’s “incidental API” and my (and Tony Becker’s) “undocumented API.” I particularly like “undocumented API” because it assumes competence on the part of the developer, implying that the API’s only shortcoming is its documentation. But I’ll try out a few of them in the coming weeks and see what sticks.

Dynamic electrical pricing demands dynamic price data.

The power industry has begun its long-anticipated shift towards demand-based pricing of electricity. Dominion Power, my electric company here in Virginia, has two basic rates: winter and summer. Although the math is a bit complicated, electricity costs about 50% more in the summer than in the winter, averaging 12¢ per kilowatt hour. (One can also pay for sustainably sourced energy, as I do, and this raises these rates by 1.3¢ per kilowatt hour.) While this price system is very simple, it is also bad, because it fails to respond to consumer demand or the realities of electrical generation.

Here’s an explanation of the problem and the proposed solution: open electrical rate data.

Excess Demand

On a very hot day—say, north of 100°F—everybody wants to keep their house at 72°. This requires a great deal of electricity, which means that Dominion has to generate a great deal of electricity. And that’s fine, because people are paying per kilowatt hour. If they want to pay $1 an hour to keep their house cool, that’s their prerogative. They pay, and Dominion uses the money to run their plants. But this all starts to fall apart when Dominion nears its maximum capacity.

As demand approaches capacity, Dominion is faced with a dilemma. Like most power companies, Dominion probably has a standby boiler in their coal-based power plants. It’s not normally fired up, because it’s the oldest, polluting-ist boiler that they have, falling well below the efficiency standards set by state and federal regulations. Turning it on might increase the power plant’s emissions of regulated pollutants tenfold, and guarantees that they’re going to be paying fines. At 10¢ per kilowatt hour, running their modern boilers is a profitable enterprise, but running the ancient standby one is a money-losing endeavor.

In order to avoid brown-outs—demand exceeding capacity, resulting in insufficient amounts of power being delivered to customers—Dominion has to start up this nasty old boiler, even though it might only be needed to provide power to a few thousand customers. The incremental cost of serving those few customers is enormous, but necessary to keep the whole enterprise going.

Worse still, imagine if the temperature continues to climb. Demand spikes further. More power is needed than Dominion can generate or buy from other power companies (who are dealing with the same problem). Brown-outs or rolling blackouts are now impossible to avoid. Customers are angry. Dominion is losing money.

Dynamic Pricing Models

Enter dynamic—aka “demand-based”—pricing. There are two ways that dynamic pricing can work.

Dominion’s summer rate plan.

The first dynamic pricing model is based on a schedule of rates relative to demand. This tells customers how much power costs on low-demand days versus high-demand days, with any number of gradients between the two. And within that daily rate difference, there are price changes throughout the day. A low-demand day might average around 9¢ per kilowatt hour, and a high-demand day might top out at 20, 30, even 50¢ per kilowatt hour. The advantage of this system is that it’s controlled and limited—people know what the possibilities are, and there’s a theoretical cap on how much power can cost. The disadvantage to this system is that there’s no way for customers to know how much collective demand exists. While Dominion understands that a high-capacity day is anything north of (say) 25,000 megawatts, customers have no way of knowing how high that collective demand is. This is an actual system that exists around the nation right now, and that Dominion allows customers to opt into.

The second dynamic pricing model is based on a real-time auction of electrical rates. For this approach to work, you’d tell your appliances how much you’re willing to pay to run them. You’ll pay no more than 35¢ to dry a load of laundry. You’ll pay no more than $2.50/day to keep your house cool, unless your house gets above 78°, in which case you’ll pay up to $5.00/day. Your water heater will keep water at 130°, unless power goes above 15¢ per kilowatt hour, in which case it will drop to 120°. And so on. Then your home power meter aggregates this data, and makes bids for power, bidding against every other customer. This works somewhat like eBay’s automatic bid system, and very much like Google Ads’ pricing model. Of course, this infrastructure does not exist yet, and so this is entirely in the realm of the imaginary. Still, I feel comfortable saying that this system is inevitable.
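The bidding behavior described above can be sketched in a few lines. This is a toy model: the appliance names and price caps are invented for illustration, and no real home power meter exposes anything like this today.

```python
# Toy model of the proxy-bidding scheme described above. Appliance
# names and price caps are invented for illustration; no real home
# power meter exposes an API like this today.

def appliances_to_run(price_per_kwh, indoor_temp_f):
    """Return the appliances whose owner-set price caps meet or beat
    the current clearing price."""
    caps = {
        "water_heater": 0.15,  # above 15 cents/kWh, let the water cool
        "dryer": 0.12,
        # Willing to pay more for cooling once the house gets hot:
        "air_conditioner": 0.25 if indoor_temp_f <= 78 else 0.50,
    }
    return sorted(name for name, cap in caps.items() if cap >= price_per_kwh)

# Cheap overnight power runs everything; a pricey afternoon peak
# sheds every load but cooling.
print(appliances_to_run(0.09, 75))  # ['air_conditioner', 'dryer', 'water_heater']
print(appliances_to_run(0.30, 82))  # ['air_conditioner']
```

Like eBay’s proxy bidding, the customer states a maximum once, and the system does the moment-to-moment haggling.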

Returning to the reality of the first model—a published rate schedule—there’s a serious problem with information asymmetry. How is one to know the cost of electricity at any given time, if you don’t know if it’s a low-, medium-, or high-cost day? Dominion’s solution to this is both straightforward and complicated: they’ll e-mail you at 6 PM every day and tell you which of three rate structures they’ll use the following day. Each rate structure changes over the course of the day, with different prices overnight, in the morning, through the bulk of the day, and in the evening.

But, wait, it gets harder. Dominion also institutes a “demand charge.” Every half hour, they sample how much power you’re using at that moment. Then your monthly bill includes a fee based on the largest amount of power that your home was using at any of those sampled moments in the prior 30 days. If you used no power all month, except for one minute in which you used a very large amount of power, you would be billed a correspondingly large amount, despite your near-zero average.
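The arithmetic of that demand charge is easy to sketch. Here’s a toy calculation; the per-kilowatt rate is invented, and Dominion’s actual tariff certainly differs:

```python
# Toy illustration of a demand charge: the monthly fee is keyed to the
# single highest half-hourly usage sample. The $8/kW rate is invented;
# Dominion's actual tariff differs.

def demand_charge(half_hour_samples_kw, rate_per_kw=8.00):
    """Fee based on the peak half-hourly sample of the month."""
    return max(half_hour_samples_kw) * rate_per_kw

# A month of near-zero usage with one 9 kW spike (say, the dryer and
# oven running at once) is billed on that one spike.
samples = [0.2] * 1439 + [9.0]  # one month of half-hour samples
print(demand_charge(samples))   # 72.0
```

The point of the sketch: the 1,439 near-zero samples contribute nothing; the bill is driven entirely by the single peak moment.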

For customers, Dominion’s approach is dizzying. It requires that people keep track of electrical rates on a day-to-day and hour-to-hour basis, and of their peak home power usage at all times, and it provides nothing to support the growing industry of home automation and energy-saving devices that could manage electrical use automatically. The popular Nest thermostat can be automatically reprogrammed via the internet. Apple recently announced an entire platform of home automation tools, controllable and configurable via iPhone, iPad, or desktop computer. Philips makes a light bulb kit that permits each bulb to be controlled remotely, with the brightness and color of each bulb configurable individually. There’s a whole ecosystem of hardware, software, and data to allow a home’s energy use to be adjusted in response to external factors. But what none of it can do is read Dominion’s e-mails at 6 PM every night. That’s an unbridgeable air gap, a failure on the part of Dominion that is perhaps mystifying or perhaps rational, depending on one’s level of cynicism.

Open Electrical Rate Data

There’s a simple solution to this: open electrical rate data. In addition to sending out an e-mail at 6 PM every day, Dominion could maintain a file on their server that provides machine-readable data about current and near-future power rates. It might look like this:
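A sketch of what such a file could contain follows; these field names and values are my own invention, not any published schema:

```json
{
  "updated": "2014-06-17T18:00:00-04:00",
  "units": "USD_per_kWh",
  "rates": [
    {"start": "2014-06-18T00:00:00-04:00", "end": "2014-06-18T06:00:00-04:00", "rate": 0.06},
    {"start": "2014-06-18T06:00:00-04:00", "end": "2014-06-18T11:00:00-04:00", "rate": 0.11},
    {"start": "2014-06-18T11:00:00-04:00", "end": "2014-06-18T19:00:00-04:00", "rate": 0.32},
    {"start": "2014-06-18T19:00:00-04:00", "end": "2014-06-19T00:00:00-04:00", "rate": 0.13}
  ]
}
```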

Right now, the closest that they get is a retrospective page, which has allotted space for the next day’s price (“Classification for tomorrow:”), but the page text ends with the colon—I’ve yet to see that classification provided. [Hours after I published this, Dominion finally wrote something in that space, I assume prompted by the 90°F forecast.]

If this data was provided, it would be trivial to use it to enable home automation and energy management tools to schedule and control energy-intensive home services and appliances.
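To make that concrete, here’s a sketch of how an energy-management tool might consume a machine-readable rate feed. The JSON field names here are hypothetical; no utility publishes this schema today.

```python
# Sketch of a client for a hypothetical machine-readable rate feed.
# The field names ("rates", "start", "end", "rate") are invented.
import json
from datetime import datetime

def rate_at(feed, when):
    """Return the $/kWh rate in effect at a given moment, or None."""
    for period in feed["rates"]:
        start = datetime.fromisoformat(period["start"])
        end = datetime.fromisoformat(period["end"])
        if start <= when < end:
            return period["rate"]
    return None

feed = json.loads("""
{"rates": [{"start": "2014-06-18T11:00:00",
            "end":   "2014-06-18T19:00:00",
            "rate":  0.32}]}
""")

# A thermostat could pre-cool the house before the expensive block:
print(rate_at(feed, datetime(2014, 6, 18, 14, 0)))  # 0.32
print(rate_at(feed, datetime(2014, 6, 18, 9, 0)))   # None
```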

And, in fact, that’s a feature that Nest supports. The thermostat will dynamically adjust the temperature of a home during the highest-priced periods, generally very hot summer afternoons and very cold winter nights. But because precious few power companies provide the necessary data to support this feature, it’s not useful to most Nest customers. Nest doesn’t provide a comprehensive list of participating power companies, and after searching through their press releases and trying out a handful of ZIP codes from across the country in their form, I have to conclude it’s because there are very few participating power companies.

Publishing open electrical rate data is not difficult. If they can send out an e-mail, they can certainly update a JSON file. For a competent developer, it would be an afternoon project. A company that is capable of managing an entire electrical grid and the entire usage tracking and billing system that accompanies it is certainly capable of a tiny project like this.

I’ll warrant that Nest—which is owned by Google—is in a good position to establish a standard JSON schema for power companies to use. Some power companies would probably welcome being told what schema to use, giving them one fewer thing to worry about. Right now, it appears that Nest is basically taking any data that they can get. (It wouldn’t shock me to find out that they’re intercepting night-before e-mail alerts and using those to update thermostats with rate data.) Power companies are going to catch on to the enormous importance of rate data, and Nest has the first-mover advantage. I hope that Nest puts together an uncomplicated schema, advertises it on a developer page, encourages existing and new partners to publish in that schema, and eventually requires that participating power companies comply with their schema, assuming that they end up in a position where they can make such demands.

Open electrical rate data will provide real savings to consumers and utilities alike. It’s a necessary and inevitable development in power distribution and home automation. I hope that power companies and Nest take the simple steps necessary to usher in this era of open energy data, and soon.

Opening up Virginia corporate data.

In Virginia, you can’t just get a list of all of the registered corporations. That’s not a thing. If you dig for a while on the State Corporation Commission’s website, you’ll find their “Business Entity Search,” where you can search for a business by name. But if you want to get a list of all businesses in your county, all businesses that have been formed in the past month, all businesses located at a particular address, etc., then you’re just out of luck.

Except. The SCC will sell you their database of all 1,126,069 companies. It’s not cheap, at $150/month, with a minimum three-month commitment. You have to sign a five-page contract, and the data is a hot mess, of no value to anybody other than a programmer.

So, naturally, I wrote the SCC a check for $450 at the end of April, bought the data, and now give it away for free, updated weekly: early every Wednesday morning, I automatically transfer the enormous file to a public server. Because it’s not right that people should have to pay for public data. The SCC is already generating this data, and they’re already hosting the file on their website—why sell it? We’ve already paid for it, out of our taxes and out of our business incorporation fees. I FOIAed the list of customers for this data. There are just six, so it’s not like this is a money-making endeavor for the SCC. (Only one of them, Attentive Law Group, is in Virginia.)

Now people can have this terrible file, useful only to programmers. So what are they to do with that file? Well, maybe nothing. So I’ve also written some software to turn that data into modern, useful formats. Named “Crump” (for Beverley T. Crump, the first-ever member of the State Corporation Commission), it is, naturally, free and open source. Crump turns the SCC’s fixed-width text file into JSON and CSV files. Optionally, it will clean up the data and produce Elasticsearch import files, basically allowing the data to be quickly loaded into a database and made searchable. Again, anybody can have the data for free, and anybody can have Crump for free, to turn that data into useful data.
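To give a flavor of what Crump does (without reproducing its actual code), here’s the general shape of fixed-width parsing. The column offsets and field names below are invented, not the SCC’s real record layout.

```python
# Sketch of fixed-width-record parsing, the core of what Crump does.
# These offsets and field names are invented; the SCC's real layout differs.
import json

FIELDS = [("id", 0, 7), ("name", 7, 37), ("status", 37, 45)]

def parse_record(line):
    """Slice one fixed-width line into a dict of trimmed fields."""
    return {name: line[start:end].strip() for name, start, end in FIELDS}

# Build a sample record with the same padding a real file would have:
record = f"{'0123456':<7}{'ACME EXPLOSIVES INC':<30}{'ACTIVE':<8}"
print(json.dumps(parse_record(record)))
# {"id": "0123456", "name": "ACME EXPLOSIVES INC", "status": "ACTIVE"}
```

Once each line is a dict, emitting JSON, CSV, or Elasticsearch import files is a matter of serialization.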

And, finally, I’ve created a website, creatively named “Virginia Businesses,” where non-programmers can access that data and do things with it. I’ve barely gotten started on the website—at this point, one can download individual data files as either CSV or JSON, download the original data file from the SCC, or search through the data. The search results are terrible looking, and not all of the data is loaded in at the moment, but by the time you read this blog entry, perhaps that will all be much improved. I intend to add functionality to generate statistics, maps, charts, etc., to let people dig into this really interesting data. The website updates its data, automatically, every week. Naturally, the website itself is also an open source project—anybody can have the website, too, and can set up a duplicate to compete with me, or perhaps create a similar site for another state.

So, free data, free software, and a free website. There’s no catch.

OpenCorporates, whose excellent work inspired this project, has imported the data into their own system, meaning that Virginia’s corporate data is now available in a common system with 69 million other corporations from around the U.S. and the world.

Then, a couple of weeks ago, a happy surprise: the Shuttleworth Foundation e-mailed me, out of the blue, informing me that they’re giving me $5,000 to support my work in open data, as a part of their “flash grant” program. I can do whatever I want with that money, and I’m going to use a chunk of it to support this work. That means that I’m not out of pocket on that $450 check, and that I can continue to pay for this data for a while, so that others can continue to benefit from it.

I don’t know where this project is going—it’s just a hobby—but even if I stopped doing any more work on it tomorrow, I know I’d be leaving Virginians with much better business data than they had before.

In addition to the Shuttleworth Foundation, my thanks to the ACLU of Virginia and the EFF for providing me with legal advice, without which I couldn’t have even begun this project, and to Blue Ridge InternetWorks, who generously donates the website hosting and server power to crunch and distribute all of this data.

Cloud corporations.

Many months ago, my friend Tim Hwang told me that he’d like to see an API created for corporate registrations, because that would enable all kinds of interesting things. Tim runs the semi-serious Robot Robot & Hwang, a legal startup that aspires to be a law firm run entirely in software. I’ve been chewing over this idea for the past year or so, and I’m convinced that, writ large, this could constitute a major rethinking of the Virginia State Corporation Commission. Or, really, any state’s business regulation agency, but my familiarity and interest lies with Virginia. But first I have to explain Amazon Web Services. (If you don’t need that explained, you can skip past that bit.)

Amazon Web Services

Not so long ago, if you wanted to have a web server, you needed to actually acquire a computer, or pay a website host to do so on your behalf. That might cost a couple of thousand dollars, and it took days or weeks. Then you had to set it up, which probably meant somebody installing Linux or Windows from CD-ROMs, configuring it to have the software that you needed, mounting it in a rack, and connecting it to the internet. You’d have to sign a contract with the host, agreeing to pay a certain amount of money over a year or more in exchange for them housing your server and providing it with a connection to the internet. That server required maintenance throughout its life, some of which could be done online, but occasionally somebody had to go in to reboot it or swap out a broken part. But what if your website suddenly got popular, if your planned 100 orders per day turned into 10,000 orders per day? Well, you had to place orders for new servers, install operating systems on them, mount them in more racks, and connect them to the internet. That might take a few weeks, in which time you could have missed out on hundreds of thousands of orders. And when your orders drop back to 100 per day, you’ve still got the infrastructure—and the bills—for a much more popular website.

And then, in 2006, Amazon launched Amazon Web Services, a revolutionary computing-on-demand service. AWS upended all of this business of requisitioning servers. AWS consists of vast warehouses of servers that, clustered together, host virtual servers—simulated computers that exist in software. To set up a web server via AWS, you need only complete a form, select how powerful a server you want, agree to pay a particular hourly rate for it (ranging from a few cents to a few dollars per hour), and it’s ready within a few minutes. Did your planned 100 orders turn into 10,000? No problem—just step up to a more powerful server, or add a few more small ones. Did your 10,000 orders go back to 100? Scale your servers back down again. Better still, AWS has a powerful API (application programming interface), so you don’t even have to intervene—you can set your own servers to create and destroy themselves, control them all from an iPhone app, or let software on your desktop start up and shut down servers without any involvement on your part.

There are other companies providing similar cloud computing services—Rackspace, Google, and Microsoft, among others—but Amazon dominates the industry, in part because they were first, and in part because they have the most robust, mature platform. There remain many traditional website hosts, which you can pay to house your physical servers, but they’re surely just a few years away from being niche players. Amazon did it first, Amazon did it best, and Amazon is the hosting company to beat now.

Cloud Corporations

Imagine Virginia’s State Corporation Commission (SCC) using the Amazon Web Services model. Virginia Business Services, if you will. One could create a business trivially, use it for whatever its purpose is, and then shut it down again. That might span an hour, a day, or a week. Or one could start a dozen or a hundred businesses, for different amounts of time, with some businesses owned by other businesses.

Why would you do this? This is actually done already, albeit awkwardly. Famously, the Koch brothers maintain a complicated, sophisticated web of LLCs, which they create, destroy, and rename to make it difficult to track their political contributions. This probably costs them millions of dollars in attorneys’ fees alone. Doing so is perfectly legal. Why should that only be available to billionaires? Or perhaps you want to give a political contribution to a candidate, but not in your own name. Wealthy people create a quick LLC to do this. Maybe you want to host a one-off event, or print and sell a few hundred T-shirts as a one-time thing—a corporate shield would be helpful, but hardly worth the time and effort, except for the wealthy. There’s no reason why the rest of us shouldn’t be able to enjoy these same protections and abilities.

Cloud corporations would be particularly useful to law firms that specialize in managing legal entities. Right now, they spend a lot of time filing paperwork. Imagine if they could just use a desktop program to establish a corporation in a few minutes. Instead of charging clients $1,500, they could charge $500 and make an even larger profit. Although Delaware would surely remain attractive for registering many corporations, due to its friendly tax laws, the ease of registering a corporation in Virginia would make it attractive for certain types of business.

So what would the SCC need to do to make this happen? Well, right now, one can register for an account on their site, complete a form on their website, pay $75 via credit card, and have a corporation formed instantly. From there on out, it costs $100/year, plus they require that an annual report be filed. Both of these things can be done via forms on their website. (Note that these dollar values are for stock corporations. There are different rates for non-stock corporations and limited liability companies.) All of which is to say that they’ve got the infrastructure in place for purely digital transactions.

But to support an AWS model, they’d need to make a few changes. First, they’d have to expose the API behind those forms, to allow programmatic access to the SCC’s services. Then they’d have to add a few new services, such as the ability to dissolve a business. And they’d need to change their pricing, so that instead of being billed annually, pricing would be based on units of weeks, days, or even hours. (That pricing could be elevated significantly over standard pricing, as a trade-off for convenience.) The SCC has some antiquated regulations that would need to be fixed, such as their requirement that a business have a physical address where its official documents are stored (“Google Docs” is not an acceptable location). Finally, to do this right, I suspect that the Virginia Department of Taxation would need to get involved, to allow automated payment of business taxes (something that Intuit has spent a great deal of money to prevent) via an API.
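Programmatic access might look something like this. To be clear, this is entirely hypothetical: the endpoint, parameters, and schema are all invented, since no such API exists.

```python
# Entirely hypothetical sketch of an SCC corporate-registration API.
# The endpoint, fields, and authorization scheme are all invented.
import json
import urllib.request

API_BASE = "https://api.scc.virginia.example/v1"  # invented endpoint

def create_corporation_request(name, duration_days, api_key):
    """Build a request to register a short-lived stock corporation,
    billed by the day rather than the year."""
    payload = json.dumps({
        "name": name,
        "type": "stock",
        "duration_days": duration_days,
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/corporations",
        data=payload,
        method="POST",
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = create_corporation_request("Pop-Up Tees, Inc.", 7, "secret-key")
print(req.get_full_url())  # https://api.scc.virginia.example/v1/corporations
# urllib.request.urlopen(req) would submit it, if the API existed;
# a matching DELETE a week later would dissolve the corporation.
```

This is the AWS pattern applied to incorporation: create, use, and destroy a legal entity the way one creates, uses, and destroys a virtual server.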

Next Steps

I regret that this is unlikely to happen in Virginia. The State Corporation Commission is like its own mini-government within Virginia, with its own executive, legislative, and judicial functions, and seems accountable to nobody but itself. FOIA doesn’t even apply to them. They’re not known as a forward-thinking or responsive organization, and I’m dubious that either the legislature or the governor could persuade them or even make them do this.

But I am confident that some state will do this (I hope it won’t be Delaware) and that, eventually, all states will do this. It’s inevitable. Whoever does it first, though, will enjoy a first-mover advantage, perhaps on the scale of Amazon Web Services. I’ll enjoy watching it. Maybe I’ll even register a few corporations myself.

A Virginia campaign finance API.

Last year, I wrote here that I was working on an open-source campaign finance parser for Virginia State Board of Elections data. Thanks to the good work of the folks at the SBE, who are making enormous advances in opening up their data, I’ve been able to make some great progress on this recently. That open-source project, named “Saberva,” is now a fully-functioning program. When run, it gathers a host of data from the State Board of Elections’ great new campaign finance site and saves it all as a series of machine-readable JSON files. (And a simple CSV file of basic committee data, which is more useful for some folks.) The program is running on Open Virginia, which means that, at long last, Virginia has an API and bulk downloads for campaign finance data.

This is now the source of Richmond Sunlight’s campaign finance data about each candidate (currently limited to their cash-on-hand and a link to their most recent filing), which provides me with a good incentive to continue to improve it.

If you’ve got ideas for how to improve this still-young project, you’re welcome to comment here, open a ticket on GitHub, or make a pull request. Hate it, and want to copy it and make your own, radically different version? Fork it! It’s released under the MIT License, so you can do anything you want with it. I look forward to seeing where this goes.

New site, new datasets.

Since creating Richmond Sunlight and Virginia Decoded, I’ve been building up a public trove of datasets about Virginia government: legislative video, the court system’s definitions of legal terms, court rulings, all registered dangerous dogs, etc. But they’re all scattered about on different websites. A couple of years ago, I slapped together a quick site to list all of them, but I outgrew it pretty quickly.

So now I’m launching a new site: the Open Virginia data repository. It’s an implementation of the excellent CKAN data repository software. The idea is to provide a single, searchable, extensible website where every known state dataset can be listed, making the data easy to find and interact with. It’s built on the industry’s best software in part because I’m hopeful that, eventually, I can persuade Virginia to simply take the site from me, establishing a long-overdue official state data repository.

There are a few new datasets that accompany this launch:

  • The Dangerous Dog Registry as JSON, meaning that programmers can take these records and do something interesting with them. (Imagine an iPhone app that tells you when you’re close to a registered dangerous dog.) Previously I provided this only as HTML.
  • VDOT 511 Geodata. This is the GeoJSON that powers Virginia 511, exposed here for the first time. Road work, traffic cameras, accidents—all kinds of great data, updated constantly, with each GeoJSON feed listed here.
  • Public comments on proposed regulations. Over 28,000 comments have been posted by members of the public about regulations to the Virginia Regulatory Town Hall site over the past decade. Now they’re all available in a single file (formatted as JSON), for programmers to do interesting things with.
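To give a taste of working with feeds like these, here’s a minimal example of walking a GeoJSON FeatureCollection. The feature below is made up, in the general shape of a 511 feed; the property names are illustrative, not the feed’s actual schema.

```python
import json

# A minimal, made-up GeoJSON feature in the general shape of a 511 feed;
# the property names are illustrative, not the feed's actual schema.
feed = json.loads("""
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [-78.48, 38.03]},
      "properties": {"category": "incident", "description": "Disabled vehicle"}
    }
  ]
}
""")

for feature in feed["features"]:
    lon, lat = feature["geometry"]["coordinates"]
    props = feature["properties"]
    print(f"{props['category']}: {props['description']} at ({lat}, {lon})")
```

Swap the inline sample for a fetch of the live feed URL and this same loop becomes the core of, say, a traffic-incident map.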

There’s so much more to come—good datasets already available, and datasets that need to be scraped from government sites and normalized—but this is a good start. I’m optimistic that providing an open, accessible home for this data will encourage others to join in and help create a comprehensive collection of data about the Virginia government and its services.

$500 speech transcription bounty claimed.

It took just 27 hours for the $500 speech transcription bounty to be claimed. Aaron Williamson produced youtube-transcription, a Python-based pair of scripts that upload video to YouTube and download the resulting machine-generated transcripts of speech. It took me longer to find the time to test it out than it did for Aaron to write it. But I finally did test it, and it works quite well.

There are lots of changes and features that I’d like to see, and the beauty of open source software is that those changes don’t need to be Aaron’s problem—I (and anybody else) can make whatever changes that I see fit.

This will be pressed into service on Richmond Sunlight ASAP. Thanks to Matt Cutts for the idea, and to the 95 people who backed this project on Kickstarter, since they’re the ones who funded this effort.

$500 bounty for a speech transcription program.

The world needs an API to automatically generate transcript captions for videos. I am offering a $500 bounty for a program that does this via YouTube’s built-in machine transcription functionality. It should work in approximately this manner:

  1. Accepts a manifest that lists one or more video URLs and other metadata fields. The manifest may be in any common, reasonable format (e.g., JSON, CSV, XML).
  2. Retrieves the video from the URL and stores it on the filesystem.
  3. Uploads the video to YouTube, appending the other metadata fields to the request.
  4. Deletes the video from the filesystem.
  5. Downloads the resulting caption file, storing it with a unique name that can be connected back to a unique field contained within the manifest (e.g., a unique ID metadata field).
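For step 1 of the workflow, a manifest parser might look like the sketch below. The JSON layout shown is just one plausible format (the spec permits any common one), and the field names are my own invention, not a requirement.

```python
import json

# Step 1 of the workflow: parse a manifest and report what would be done.
# The manifest fields here (id, url, title) are one plausible layout; the
# spec allows any common format.
manifest_json = """
[
  {"id": "floor-2013-01-09", "url": "https://example.com/video1.mp4",
   "title": "House floor session"},
  {"id": "floor-2013-01-10", "url": "https://example.com/video2.mp4",
   "title": "Senate floor session"}
]
"""

def parse_manifest(text):
    """Return a list of video records, requiring a unique id per entry."""
    records = json.loads(text)
    ids = [r["id"] for r in records]
    if len(ids) != len(set(ids)):
        raise ValueError("manifest ids must be unique")
    return records

for record in parse_manifest(manifest_json):
    # Steps 2-5 (download, upload to YouTube, delete, fetch captions)
    # would hang off this loop.
    print(f"{record['id']}: would fetch {record['url']}")
```

Requiring a unique id up front matters because step 5 depends on it: each downloaded caption file has to be connectable back to exactly one manifest entry.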

The program must also meet these requirements:
  • Must be written in a common, non-compiled language (e.g., Python, PHP, Perl, Ruby), require no special setup or server configuration, and run on any standard, out-of-the-box Linux distribution.
  • Must run at the command line. (It’s fine to provide additional interfaces.)
  • May have additional features and options.
  • May use existing open source components (of course). This is not a clean-room implementation.
  • May be divided into multiple programs (e.g., one to parse the manifest and retrieve the specified videos, one to submit the video to YouTube, and one to poll YouTube for the completed transcripts), or combined as one.
  • Must be licensed under the GPL, MIT, or Apache licenses. Other licenses may be considered.
  • If multiple parties develop the program collaboratively, it’s up to them to determine how to divide the bounty. If they cannot come to agreement within seven days, the bounty will be donated to the 501(c)(3) of my choosing.
  • The first person to provide functioning code that meets the specifications will receive the bounty.
  • Anybody who delivers incomplete code, or who delivers complete code after somebody else has already done so, will receive a firm handshake and the thanks of a grateful nation.
  • If nobody delivers a completed product within 30 days then I may, at my discretion, award some or all of the bounty to whoever has gotten closest to completion.

Participants are encouraged to develop in the open, on GitHub, and to comment here with a link to their repository, so that others may observe their work, and perhaps join in.

This bounty is funded entirely by the 95 folks who backed this Kickstarter project, though I suppose especially by those people who kept backing the project even after the goal was met. I deserve zero credit for it.