Vote “yes” on VA’s redistricting constitutional amendment.

Gerrymandering persists because it’s the rational choice for elected officials.

In most of the U.S., electoral districts are drawn by state legislators themselves: they decide what they want their own districts to look like. The majority party that controls the legislature draws its own districts so its members can cruise to reelection, using fancy redistricting software with household-level party membership data. Then it draws the minority party’s districts to make their lives difficult, such as by pitting incumbents against each other, lumping together unrelated communities, and creating sprawling districts that are difficult to travel. This might sound complex, but the redistricting software makes it easy.

In short, legislators draw their own districts, ensuring their reelection. And why wouldn’t they, given the choice? It’d be completely irrational to do otherwise.

An illustration of how district lines can be drawn so that a 60% “blue” / 40% “red” area is divided into five districts under the total control of either “red” or “blue.”

Legislators generally get to do this once each decade, right after the decennial census determines how many people live where. The census is in 2000, 2010, 2020, etc., so redistricting is in 2001, 2011, 2021, etc.

Democrats and Republicans both gerrymander. It’s not an affliction of partisanship, it’s an affliction of power. Every time, the majority party defends their gerrymandering as mere redistricting, and the minority party decries the evils of gerrymandering and promises they’ll get rid of it when they’re in charge. This has gone back and forth for many decades.

Gerrymandering has lots of terrible effects. Start with the very premise: that legislators choose their constituents, instead of vice versa. There’s the effect on voters of the opposite party from their representatives, who know that their votes will make no difference. Perhaps most important, there’s the resulting extremism. When the general election isn’t competitive, the competition happens in the primaries. That rewards candidates who have no incentive to appeal to the opposing party, who win the nomination by promising to give no quarter to their opponents. This creates a spiral of extremism, and it eliminates the possibility of bipartisan cooperation within the legislature.

In much of the U.S., there’s nothing that you and I can do about gerrymandering, because fixing it requires a constitutional amendment…which legislatures must approve. (Except in Delaware, which doesn’t bother with voter approval.) The majority party is quite happy with how redistricting is working for them, so they don’t pass those constitutional amendments. There’s nothing the public can do about it. (You might be thinking “well, people could vote them out of office.” No, they couldn’t. That’s the underlying problem here.)

So we’re stuck with gerrymandered districts.

* * *

In Virginia, right now, we have a chance to fix this, thanks to an extraordinary coincidence of timing.

Advocates for redistricting reform got to work a decade ago to get a constitutional amendment in place by 2021. With Republicans firmly in control of the General Assembly, it was easy to persuade Democrats in the legislature to get behind the cause. In fact, Democrats were so eager to support it that there was real danger that the movement would be perceived as a partisan one, so organizers had to go to considerable pains to avoid that.

In the 2019 session, the Virginia Senate was on the knife’s edge of control between Democrats and Republicans. Demographic changes in Virginia over the past decade had reached the point where Democrats had made big gains a few months prior, and Republicans were deeply concerned that they could find themselves a minority party by 2021. So the General Assembly was able to pass the legislation to amend the constitution to reform the redistricting process, by an 85-13 margin. Democrats supported it overwhelmingly. Sure, they watered it down, moving from nonpartisan redistricting to bipartisan redistricting, but it passed. But passing it once wasn’t enough — by law, they needed to pass the same amendment again the next year, after an intervening General Assembly election.

Before that second vote could happen, though, something remarkable occurred. Last November, Democrats flipped the House and the Senate, taking control of both chambers. Those new legislators were seated this year. And then the constitutional amendment came up for its second vote. But something had changed in the interim: Democrats were no longer so enthusiastic.

The same Democrats who cheerfully voted for the bill a year prior now had “concerns.” They wanted to “tweak” the legislation and perhaps delay when it would take effect, proposing substitute legislation.

But here’s the crucial fact about that substitute legislation: those changes would have reset the clock. That is, it wouldn’t have been the second time that the legislature had voted on this amendment, but the first time they had voted on a new amendment, requiring a pause of two years (so there could be an intervening legislative election) before the legislature could have the second vote. If it passed in 2023, and then voters backed it that November, that amendment would happen years too late for the 2021 redistricting, and have no effect at all until 2031.

Whatever the legitimacy of these newfound concerns, these Democratic legislators well knew that there were only two options: pass the amendment as it was, or vote against the amendment so they could gerrymander next year. There was no third option.

When forced to vote on the very bill they’d passed the prior year, what was the outcome? Well, every House Republican voted for the amendment, and nearly every House Democrat voted against it. Just nine House Democrats joined with Republicans in supporting it. (The Senate, on the other hand, passed it 38-2, with two Democrats dissenting.)

Here are the 23 legislators who voted for the bill in 2019, and then voted against it this year:

  • Hala Ayala (D-51)
  • Betsy Carr (D-69)
  • Jennifer Carroll Foy (D-2)
  • Lee J. Carter (D-50)
  • Karrie Delaney (D-67)
  • Eileen Filler-Corn (D-41)
  • Wendy W. Gooditis (D-10)
  • Elizabeth Guzman (D-31)
  • Charniele Herring (D-46)
  • Patrick Hope (D-47)
  • Chris Hurst (D-12)
  • Mark Keam (D-35)
  • Kaye Kory (D-38)
  • Paul Krizek (D-44)
  • Mark Levine (D-45)
  • Kathleen Murphy (D-34)
  • David Reid (D-32)
  • Danica A. Roem (D-13)
  • Mark Sickles (D-43)
  • Marcus Simon (D-53)
  • Rip Sullivan (D-48)
  • Kathy Tran (D-42)
  • Vivian Watts (D-39)

(Of course, thanks to the intervening election, some legislators who voted for it in 2019 were gone by 2020, and some legislators were new in 2020. That’s the purpose of requiring an election between the first and second votes.)

When these Democrats were in the minority, they were all for redistricting reform. Now that they’re in the majority, one year later, they’re against it.

As St. Augustine prayed, “Grant me chastity and self-control…but not yet.”

* * *

Gerrymandering persists because it’s the rational choice for elected officials.

Are legislators being hypocritical in voting for a bill and then voting against it? Yes. Are they behaving rationally? Absolutely. Left to their own devices, legislators will never vote to restrict their own power, only others’ power.

In 2007, then-House Minority Leader Ward Armstrong came to Charlottesville to speak at a public event that I held on redistricting. I asked him, before an audience of fifty or so Democrats, whether he supported redistricting reform. He delivered some relatively impassioned remarks about the importance of redistricting reform, about how we’ve got to get rid of gerrymandering. Then I asked him whether he’d still support that if Democrats controlled the legislature. He didn’t even pause before saying that, no, then he’d be against any kind of reform.

Extraordinarily, Virginia managed to get redistricting reform on the ballot, through an amazing coincidence of timing and, again, hard work by a small, dedicated group who pushed this for years.

Could we do better than bipartisan redistricting? Absolutely. Is bipartisan redistricting better than partisan redistricting? Absolutely. Should we let the perfect be the enemy of the good? Absolutely not.

Let’s make bipartisan redistricting a stop on the way to nonpartisan redistricting. Let’s move past the harms of gerrymandering, the extremism that comes of packed districts, the constant back and forth of each party punishing the other after they claw their way back from redistricting oblivion.

Let’s vote to pass amendment #1, making bipartisan redistricting the law of the land.

Make sure your UI modernization plan includes an open source clause.

Tens of millions of Americans have lost their jobs this year; more than 35 million people have filed unemployment benefit claims since the crisis began. State unemployment systems are crumbling under the load, and states are desperate to modernize.

The CARES Act, passed March 27th, expands benefits for those who lost work due to the pandemic, notably $600 per week atop state-provided unemployment insurance (UI). It has an expanded scope, too — even people who were self-employed or worked in the gig economy can apply for Pandemic Unemployment Assistance (PUA) to receive that $600 per week. (At this writing, that funding expired a week ago, and Congress is wrangling over what to do.)

States’ legacy UI systems have struggled to keep up with the volume of applications, and are too inflexible to accommodate the changes in criteria.

At U.S. Digital Response, we spent April, May, June, and July volunteering at several state labor agencies, helping to patch struggling unemployment insurance systems. We’ve helped scale websites, troubleshoot call-center volume spikes, and implement new web forms for PUA applications. As state labor agencies implement these programs, they’ve also started revisiting their modernization plans and, in some cases, have asked us for advice and recommendations.

We’ve seen systems from all of the vendors in this space, from big firms like IBM, Deloitte, and Tata, to smaller firms like Fast Enterprises and Geographic Solutions. No matter which vendor states work with, we advise them to include an open source clause in their software development contracts. Here’s why.

* * *

State government requires custom software to operate. Unemployment insurance, Medicaid, DMVs, child welfare, SNAP, and other core government infrastructure all depend on custom software that states buy at great expense. Most of that software is overpriced and bad, but there is an easy way to improve the situation: open source software.

When it comes to important state infrastructure, top government software vendors use a double-dip business model. First, they charge us to build the software, then they charge us to use it. Nobody would pay to build a house and then pay to rent that house, so we shouldn’t do this with important state infrastructure. Vendors often claim that states are merely licensing software they’ve already built (“Commercial Off-The-Shelf,” aka “COTS”). This is often untrue. Instead, the software they’ve already built is custom for another state and will require large changes to work in a new state. These changes will be done at our collective expense, but with the resulting software owned by the vendor. In this scenario, we’re paying for a house to be rehabbed before we start to pay rent on it. This is absurd.

When paying a vendor to build custom software, vendors should be paid for their time, not for their time and for a license for the resulting software. The contract should reflect that, giving the agency ownership over the software. But it’s best to go a step further than that and place the software in the public domain, making it open source. Publishing government software as open source has many benefits with no drawbacks.

First and foremost, open source helps to prevent “lock-in,” in which a vendor tries to make it impractical or impossible for a client to ever switch to a competitor. When the source code is available for open replication, the vendor can’t use it as a weapon of monopoly. For future projects, vendors can inspect the existing source code prior to bidding, reducing uncertainty, and lowering the cost of the bids. This also reduces the switching costs associated with a system, which is important for government to be able to participate in a competitive marketplace.

Open source is also a crucial prerequisite for building secure software. Source code should never contain any secrets, like passwords, but it’s common for developers to have a casual attitude about this when the source code is closed. This is a mistake. Software can be decompiled — those secrets can be extracted. Source code can be leaked, as Edward Snowden demonstrated. And in many states, works of government are inherently in the public domain, which may include the source code that was thought to be secret. The Department of Defense maintains the excellent Open Source Software FAQ, and their answer to “Doesn’t hiding source code automatically make software more secure?” is summed up with the first sentence: “No.” Software projects should be open source from day one to avoid these problems.
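
To make the point concrete, here’s a minimal, hypothetical sketch of the pattern in question: credentials come from the environment at runtime, so nothing sensitive ever lives in the published source code. The variable name and connection details below are made up for illustration.

#!/bin/sh
# Hypothetical deployment snippet: the credential is supplied by the
# environment (or a secrets manager), never committed to the repository.

# Fail loudly if the secret wasn't provided.
: "${DB_PASSWORD:?DB_PASSWORD must be set in the environment}"

# The connection string is assembled at runtime; the source code itself
# can be published without revealing anything sensitive.
psql "postgresql://ui_app:${DB_PASSWORD}@db.example.gov:5432/claims" -c 'SELECT 1;'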

Finally, the work done by developers will be enormously improved if it’s open source. Requiring all development to be performed on a social coding website like GitHub (think Facebook but for software developers) completely changes the incentive structure. Developers who work on closed source software generally work away by themselves — other than their immediate coworkers, nobody is ever likely to see their work. Government agencies rarely have anybody on staff who is capable of reviewing vendors’ source code to know if the work is any good, so they’re not inspecting their purchases. These are circumstances that provide developers with little reason to perform their best work. But if the RFP declares that the work will be open source, all of this changes. Vendors won’t want to respond if they know that their work is low-quality. Those who do respond will want to put their best team on the project. Those team members know that their work is being watched by friends and colleagues—and will be available for inspection by future employers forever—so their work is correspondingly high-quality. Open source gets the best work out of the best team at the best vendor.

How does an agency publish their procured software as open source? Simply, with a data rights clause:

Data Rights and Ownership of Deliverables — The Agency intends that all software and documentation delivered by the Contractor will be owned by the Agency and committed to the public domain. This software and documentation includes, but is not limited to, data, documents, graphics, code, plans, reports, schedules, schemas, metadata, architecture designs, and the like; all new open source software created by the Contractor and forks or branches of current open source software where the Contractor has made a modification; and all new tooling, scripting configuration management, infrastructure as code, or any other final changes or edits to successfully deploy or operate the software.

To the extent that the Contractor seeks to incorporate in the software delivered under this task order any software that was not first produced in the performance of this task order, the Agency encourages the Contractor to incorporate either software that is in the public domain or free and open source software that qualifies under the Open Source Definition promulgated by the Open Source Initiative. In any event, the Contractor must promptly disclose to the Agency in writing and list in the documentation, any software incorporated in the delivered software that is subject to a license.

If software delivered by the Contractor incorporates software that is subject to an open source license that provides implementation guidance, then the Contractor must ensure compliance with that guidance. If software delivered by the Contractor incorporates software that is subject to an open source license that does not provide implementation guidance, then the Contractor must attach or include the terms of the license within the work itself, such as in code comments at the beginning of a file, or in a license file within a software repository.

In addition, the Contractor must obtain written permission from the Agency before incorporating into the delivered software any software that is subject to a license that does not qualify under the Open Source Definition promulgated by the Open Source Initiative. If the Agency grants such written permission, then the Contractor’s rights to use that software must be promptly assigned to the Agency.

This can be dropped into an RFP or a contract as-is.

* * *

It’s trivial to commit to open source within an RFP, and the benefits are enormous. It reduces switching costs, improves security, and provides vendors with an incentive to do their best work. When states are procuring custom software, they should default to open source.

If you have questions about your unemployment insurance modernization process or need another set of eyes on your RFP, U.S. Digital Response is here to help.

How to stop failures of major custom software procurements.

When government pays companies to build big custom software programs for them, they succeed just 13% of the time. Here’s why failure is so common, and the simple change that turns those outcomes on their head.

Major government software projects fail because government has learned, over many years, to do exactly the wrong thing: specify down to the last pixel exactly how the software should look and function. All of those specs go in the request for proposals (or “RFP,” also known as the “solicitation” or the “tender”), telling the vendor exactly what to do.

(If you’re thinking “how is agile software development possible if you specify everything up front?” then congratulations, you already know the punchline.)

All those specs are created with no user research, with no regard for user needs.

The vendor who gets the contract is then required to make precisely the software outlined in that 800-page RFP, even if their own user research shows that it’s wrong.

This is, of course, madness.

Half of the major government software projects that “succeed” succeed only in the sense that they do what the RFP laid out. Was that what the users needed? Probably not!

The vendors who bid on this work know full well that it ain’t gonna work out. The staff assigned to these projects are staff who are OK with doing the wrong work, work that will probably never be deployed, because what’s the point? It’s an industry built on an expectation of failure.

How do you stop these failures from happening? Pretty easily! Stop prescribing exactly what contractors need to do. Instead, state the outcomes that are desired and leave it at that. The 800-page RFP is now 20 pages.

Of course, this requires a government product owner overseeing the work every day, competent scrum teams doing work as prioritized by that product owner, and a time & materials contract that pays vendors for their labor, not their software.

Basically, stop thinking of these RFPs as procuring software. You’re not buying a thing! You’re buying developers’ time, time in which they do work as prioritized by the government product owner.

That’s it! The contract is dead simple.

Is the vendor’s work not good enough? No problem — stop assigning them work and they’ll go away. You don’t even need to terminate the contract. Because they were using agile, you can hire a new vendor and have them pick up where the old one left off. Done.

And that is the story of why major government software projects fail, and the simple change that stops that from happening.

For more, see the handbook that Randy Hart, Robin Carnahan, and I wrote about budgeting for agile software procurements.

My videoconferencing setup.

My job on a distributed team requires me to spend 1–6 hours meeting with folks via videoconference. Google Hangouts, Zoom, Appear.in, and GoToMeeting intermediate my professional interactions with co-workers, vendors, and clients. Initially, I joined these meetings using my MacBook’s standard camera and iPhone earbuds, but spending so much time on calls, I needed to make some changes. Over the past six months, I’ve created a setup in my home office that I’m really happy with, at a total cost of about five hundred bucks.

Having a comfortable, flexible videoconferencing setup makes it easier to spend hours in meetings, and it has also substantially improved how I look and sound (and, presumably, how I’m perceived by others). These are the upgrades that I made.

 * * *

Camera

I am 6’3″ tall, meaning that I loom over my MacBook camera threateningly. I also use a 23″ display with my MacBook, but displaying video on that screen means that I appear to be gazing over the heads of the other folks on the call.

The solution is an external, USB camera. Initially, I used a $30 Logitech standard definition camera (I can’t even find a model number on it). That addressed the matters of comfort and looking folks in the eye, but the video just didn’t look great.

I recently upgraded to a Wirecutter-recommended Logitech HD Pro C920 ($62). The 1080p video is a huge improvement, even at the downscaled quality supported by most videoconferencing clients right now. The images are much sharper, clearly better than the MacBook’s internal camera.

Headphones

I don’t find EarPods comfortable for hours at a time. I also spend a non-trivial amount of time on voice-only calls. I was not happy with the Bluetooth earbud options out there, and just before I pulled the trigger on a pair of Plantronics, Apple started shipping AirPods ($160). I took a gamble on them, and I’m awfully glad that I did. They move seamlessly between my laptop and phone, they’re so unobtrusive that I often appear to be wearing no headphones at all, and the sound quality is fine for voice. They also have a built-in mic that’s OK, certainly better than that thing with EarPods where you spend half of the call holding the wire-attached mic up to your mouth.

I never realized how much time I spent subconsciously managing my EarPods cables until I switched to AirPods—my first time on a call with EarPods again seemed comparatively oppressive. The short length of EarPods’ cable kept me tethered to my display, keeping me from changing positions often. AirPods have provided freedom of movement while on calls.

Microphone

My webcam, my laptop, and my headphones all have built-in mics, but the audio quality is pretty low.

After some research, I went with that favorite of podcasters, the Audio-Technica ATR2100-USB ($62). I actually sprang for the extra $17 to get the kit that comes with a boom arm stand and a pop filter. I have it mounted on the side of my desk, where I can keep it swung back toward the wall when I’m not using it, or pull it adjacent to my display when I’m on a call. I used to work in radio, so this setup is particularly comfortable for me.

This is perhaps the least-necessary upgrade for my videoconferencing setup, but I’m really pleased with the improved sound quality.

Lighting

Poor lighting is the Achilles heel of most people’s videoconferencing setups. That’s because what makes for good work lighting is really different from what makes for good lighting on video. In any call with more than a few people, at least one person looks like they’re recording a proof-of-life video. Window backlighting blows out the exposure, or single-source lighting casts their face in shadows. Lighting is so important to us as a culture that it’s part of our language: we describe unsavory people as “working in the shadows”; if you portray yourself badly, you “aren’t presenting yourself in the best light.”

I bought the Neewer 18″ LED Ring Light ($100), with a Fovitec StudioPRO 18″ Light Stand ($12). An 18″ ring light turns out to be enormous—perhaps a smaller one would do the trick. This has been transformative. It makes it look like I’m being interviewed on broadcast TV. The lighting is just excellent. This was the purchase that I was most dubious about, and the upgrade that I’m happiest with. I’m looking around for a swing arm that I could mount this on, though, so that I can get it off my desk and out of the way when I’m not using it.

Monitor Arm

This might not seem like it’s an important part of a videoconference setup, but it’s my experience that it makes a big difference. Having a screen-mounted camera means that being positioned properly within the frame requires that you stay in place relative to your monitor. This is uncomfortable and, ultimately, unhealthy.

Again, I went with the Wirecutter-recommended option: the AmazonBasics Display Mount Arm ($100). Throughout a long call, I can move my screen as I change positions at my desk, to ensure that I remain well framed in the video. This has been great.

 * * *

By way of comparison, here’s how I look with my old webcam:

Here’s how I look with my new webcam:

And here’s how I look with my new webcam with my ring light on:

The difference is striking.

Obviously, $500 is a lot of money for most people. I justify the expense by recognizing both that my future employment will involve lots of videoconferencing and that I’m saving a lot of money on commuting and my wardrobe.

If you’re going to acquire these one piece at a time, as I did, I recommend this order of priority: camera, light, monitor arm, microphone, headphones. This is also the order of impact: the camera provides the most, the headphones the least.

Have alternate suggestions for equipment? Email me, tweet at me, or let me know in the comments.


June 2019 Update: I’ve been using Webcam Settings, the generically-named macOS program, and it’s become an indispensable part of my setup. Being able to adjust lighting settings in software is great, especially compensating for unwanted backlighting. Recommended.

Truth, earned credibility, and a publisher’s responsibility.

I spent much of the ’00s as a political blogger. I wrote here, mostly about state politics. When I decided to start writing about state politics, in 2003, I sought out other political blogs in Virginia. There weren’t many, maybe a half-dozen. I added them all to my blogroll, made a point of reading those sites and linking to them, and they did the same. Despite our often-oppositional political perspectives, our exchanges were friendly, informative, and fun. I’m still friends with those folks.

In the spring of 2006, I was casting around for how to elevate lesser-known Virginia political blogs, as it didn’t strike me as entirely fair that my site should get so much of the readership. So I set up a blog aggregator—a bit of software that would check each Virginia political blog’s RSS feed every half-hour or so, and syndicate all new blog entries on a central site, creatively named Virginia Political Blogs. It didn’t take long to set up, and having a central site to serve as a commons was immediately popular. Every blog entry was shown in the same context as every other, all in the same typeface, all on an equal footing, listed chronologically.

In the few months afterward, blogging exploded in popularity, in part because it became much easier to set up a blog. No longer was it necessary to install Movable Type, or the nascent WordPress—Blogger.com would host your site for free. For a while, in that lead-up to the 2006 midterm elections, there was a new Virginia political blog every week. In retrospect, this must have been the peak of the popularity of blogging, before the rise of Facebook and Twitter.

Inevitably, the lowered technological bar meant that some less-knowledgeable folks were able to participate in this commons. But that was OK, because this was a marketplace of ideas, and if people wanted to promote their foolish ideas, they could do that.

This went badly.

Any fool could start a website, get added to my aggregator, and immediately have an audience that numbered in the thousands. And fools did that, by the dozen. They didn’t have to earn an audience by writing things that other blogs would want to link to. They didn’t have to prove themselves in any way. When they wrote things that were completely wrong, offensive, or even dangerous, instead of going to an audience of a dozen people, it went to an audience that included not just every political reporter in Virginia and the DC metro area, but a great many crackpots and fringe-theorists as well.

By putting terrible ideas on even footing with great ones, and by replacing a free marketplace of ideas with a leveled one, I inadvertently created a promotional vehicle for the ignorant, the rage-filled, and the chronically dishonest. By presenting them all in the same, shared context, it gave them an aura of legitimacy. And by automatically reposting their occasional hate speech, violent imagery, and calls to violence, I was enabling—even endorsing—those things.

I knew what I had to do. In December, I pruned the worst actors from the list. They were enraged, and their rage was only magnified by their ignorance (“your taking my free expression that’s against the constitution!!!!”) and their partisanship (“classic libtard”). I’d created a commons, given them access to it, and then taken it away. They tried to create their own blog aggregator, as I recall, but it proved to be beyond their technological capabilities.

* * *
In 2016, my little mistake was repeated as a huge, national mistake. We call it “Facebook.”

Any fool can create a Facebook account, get added to their friends’ feeds, and immediately have an audience in the hundreds, potentially in the millions. They don’t have to earn an audience with their writings, but instead rely on social obligations around relationships to be guaranteed a readership. Everybody’s Facebook posts are displayed in the same, shared context, with reshared, weaponized Russian propaganda adjacent to class reunion photos and New York Times articles.

Propaganda on Facebook has the aura of legitimacy. It’s been shared by a friend or family member, preying on our sense of trust. The name of the outlet that published the news is displayed in light gray at the bottom of the post, while the friend’s name and photo is displayed prominently at the top. News from eaglepatriot.co looks the same as any article from The Washington Post.

It only took me eight months to figure out that I’d inadvertently created a terrible system that was enabling dangerously stupid people. It’s been twelve years, and Facebook hasn’t learned that lesson yet.

I quite doubt that my poor editorial policy changed the outcome of any elections. Can Facebook say the same?

I want you to become a government tech vendor.

Hey, competent tech folks: your country needs you. Your knowledge, your experience, and your connections can improve the United States for everybody.

I’m not asking you to go work for the federal government.

I’m not asking you to go work for a non-profit.

I’m asking you to become a government technology vendor. I want you to sign up at SAM.gov, start bidding on 18F microcontracts, and eventually pay an attorney to help you navigate the procurement process to get a multi-million-dollar federal technology contract.

Uncle Sam drawing, captioned: I want you to become a gov't tech vendor.

* * *
The same handful of vendors bid on every federal tech project, and often the bids are between one and two orders of magnitude higher than they should be. FedScoop recently published a list of the top 100 federal IT vendors, ranked by income. (I’d only heard of 13 of them before.) 71% of the contract income in this list goes to the top 10 vendors. 22% goes to Lockheed Martin. Look at this distribution of the top 100:

A chart that drops off very steeply. (Numbers are in millions.)

If we performed this exercise with all federal IT contracts—and there will be $86 billion in federal IT spending in FY2017—we’d see that there is a very long tail, though it probably wouldn’t change the fact that most of the spoils are divided among a handful of vendors.

Despite all of this spending (or perhaps because of it), only 6.4% of large federal IT projects succeed. The failure and subsequent rescue of Healthcare.gov shined a light on the pitfalls of our legacy procurement model, and the enormous benefits that can come of working with small, agile teams of software developers who are given the space to do their job.

* * *

I’ve spent the last few years working in tech as a non-profit partner to government and, before that, I worked in tech within government. I am here to tell you that you can effect more positive change as a government vendor than as a helpful non-profit, and that you can be at least as helpful to our nation as a government vendor as you can by working on tech within government. (After all, government outsources work to thousands of times more technical positions than it employs directly.)

Generally speaking, free software is useless to governments. If you spent the next eight months feverishly producing the perfect regulatory management platform, and then handed a copy of it to an agency (complete with a FOSS license), odds are slim to none that they would be able to use it. There are far too many hurdles. But if you bid $100,000 on a government RFP for that system, you’d ensure that government would save a bundle and be able to actually use your software. Government has a system for acquiring new technology: the procurement process. Realistically, the change you can provide will come from working within that system, not trying to work outside of it.

The United States needs a small army of competent developers to start hundreds of businesses, bid on federal contracts, and do top-notch work for a fair rate. We need people whose goal isn’t an IPO and fabulous wealth, but instead to earn a nice living for themselves and everybody who works for them while making their country better by creating better technology for government.

There is good and important work that needs to be done in government technology, at federal, state, and local levels. Doing that work in exchange for payment isn’t merely not a bad thing, it may be the only thing you can viably do that actually makes a difference. It’s not reasonable to expect talented developers to perform free work at government hackathons in exchange for pizza while major vendors produce failing software for hundreds of millions of dollars.

A 6.4% success rate isn’t good enough. Fixing this will involve a lot of work beyond merely attracting new vendors—and most of that work is within government—but attracting new vendors is an important part in this improvement. Become a government tech vendor. Help make the U.S. a better place for everybody.

How I built a chicken coop.

I am not a carpenter. My occasional effort to build something, no matter how mundane, ends badly. That’s because carpentry is hard. There are a hundred ways to screw up, and ninety of them are only obvious in retrospect. I took shop class in middle school, so I’m generally comfortable with the tools of the trade, but of course I don’t actually own a drill press or a band saw or any of the other sixties-era industrial equipment that my school trained on. One time I tried to build a temporary woodshed—just something to cover some freshly-milled lumber until it dried—and I wound up blowing a couple of hundred bucks on the structural equivalent of the deformed Ripley clone in Alien Resurrection.

The photo accompanying the plans.

But our chicken coop’s lifespan was reaching its end, and I was faced with a choice: repair our six-year-old (to us), second-hand chicken coop? Or build a new one? One thing led to another, and before I knew it I’d decided to build a pretty non-trivial shed-cum-coop. The cost of the materials was steep—about $800—and I didn’t understand easily a quarter of the instructions. (“Pocket holes”? “Kreg Jig”? “Toenail”?) I figured I could knock it out in a week. That was in May. I finished it this week.

 

Preparation

There were a few things I figured out quickly. The first was that my tools were not up to the task. Using a combination of American Express points and cash, I got a cordless drill, aviation shears, some stronger drill bits, sawhorses, an 8′ ladder, and a jigsaw. I filled up my pickup’s bed in the first of a dozen trips to Lowe’s, with dozens of 2×3s and 2×4s, a stack of plywood and T-11 (which turned out to be a kind of cheap, wooden siding), screws, hinges, concrete deck blocks, and bags of gravel. The second was that I was in way over my head. The third was that I was determined to follow through to completion, not cutting any corners, but getting it right for once.

The Foundation

The framework of the foundation.

The foundation was a huge pain in the ass. I had no idea. Living on the side of a mountain, as I do, there is no flat ground. Getting each of the six concrete deck blocks both level and correctly located relative to the other five was an exercise in frustration. I did this work on a 90° day in May, and within 30 minutes I found myself wanting to cut corners. (“This is level enough.” “This is square enough.”) A bad foundation would be a bad…er…foundation, so I stopped, relaxed, thought about it, took measurements, and did things right. Framing the foundation was easy—it was just a grid of treated 2×4s.

I put a plywood/insulation sandwich atop the foundation framing, to which the walls could be attached.

The platform.

Framing the Walls

“Measure twice, cut once.” This aphorism is wrong. Twice is not nearly enough times. I built each one of the coop walls at least two times. It’s pretty simple, in theory—cut sticks of wood at the prescribed size, screw them together with an impact driver—and yet. Three of them were pretty straightforward, at least in retrospect, but the back wall was tough, because the header (the piece of wood that spans the top) had to be mounted at an angle, for the shed roof to slant down. This meant learning to use my speed square. It’s basically a protractor for carpentry. Once I figured it out, it was easy. When they were all finished, my shed was completely full of framed walls.

The framed walls.

Mounting the walls to the foundation was not a one-man job. I pressed my wife into service; she held each wall up while I used structural wood screws (very long, thick screws) to affix it to the foundation. When all four were up, I had walls that were all wrong. The front wall was not wide enough—falling maybe 1.5″ short of reaching all the way to the left side of the shed—and the side walls were also too narrow, meaning that the whole shed had a 1.5″ lip on the front of it. At this point, I could have taken down the walls and rebuilt them, but I decided these problems were within reason, and also rebuilding the walls sounded awful.

At this point it looked real! Or, at least, like a real-world wireframe of something real.

Skinning It

The next step was to put T-11 siding on, and plywood to form the roof. I could have done this by transferring the framing measurements onto the waiting sheets, but instead I went with something more direct—my wife held the sheets in place while I traced the outline of the framing. Using the sawhorses and clamps to hold the sheets of wood in place, I used the jigsaw to cut along the lines. After mounting them on the framing, using the impact driver to quickly affix a couple of screws, I used the jigsaw to trim off the remaining bits. Since the roof is just a big rectangle, I was able to cut out the plywood for that pretty easily.

The basic structure, completed.

At this point, I thought most of the work was finished. That was not even a little bit true.

Doors, Windows, and Roofing

Shutters are easy.

I immediately set to building doors and shutters, to keep the rain out. Both turned out to be surprisingly easy. The shutters are just a long piece of wood cut into five segments and screwed together. I covered the windows with hardware cloth, to keep out predators. The door is just a piece of plywood with some framing to keep it rigid. (That’s not to say they’re great. More on that below.) Trim was also pretty simple, although complicated a bit by the need to make sure that the style was kept consistent (e.g., horizontal trim overhangs vertical trim, and 4″ wide corner trim overhangs corners by 1″, where it meets 3″ trim). I made a bunch of mistakes on the trim, despite its hypothetical simplicity, but I eventually got it more or less correct.

The next protective measure was the roof. I’d hoped to use metal, but the more that I read about working with metal roofing, the more convinced I became of the inevitability of grievous injury. So I went with cheap, plastic roofing, which was really hard to cut (the saw made it vibrate all over the place), and I wound up hiding the terribly ragged cut ends on the back side of the coop. It was easy enough to screw down and, later, use gap sealer to close off the space under each of the ridges in its wavy profile.

Painting

I blew $20 on brushes by not cleaning them promptly.

Painting was not so easy. I used red barn paint on the T-11 and white latex paint on the trim. (I still haven’t treated the wooden foundation, to protect it from rot, but I intend to do that shortly.) The trim needed two coats, the painter’s tape refused to stick to the T-11, cleaning paintbrushes turned out to be frustrating and hard, and I kept splashing things with the wrong color of paint and forgetting to paint every surface (e.g., the top edge of the window trim). I had to paint the inside, too, to protect it from the messy realities of poultry. I did that with (white) Kilz primer, painting the walls and the floor.

The freshly painted interior.

Floor

Paint would not be enough to protect the floor. I bought a roll of cheap, remnant vinyl flooring and a small bucket of flooring glue. Following the instructions carefully, I trimmed it to fit and glued it down. After it dried, I used silicone caulk to seal the edges, to keep moisture from getting under the floor.

Poultry Accouterments

A perch for the chickens.

At this point, I had a shed, which would do my chickens and ducks no good in its present form. Using some industrial-strength shelving brackets, I made a mount for a perch—which would consist of a 2×4—and, below it, a wide shelf (known as a “poop deck” for obvious reasons). I cut a 12″ square hole in the wall for the hens’ door, and built a little glide-track frame for a guillotine-style door. And I used some leftover T-11 and lumber to assemble a ramp for the hens to use to get in and out of the shed, as their door is elevated a good two feet above the ground. I still need to build nesting boxes and a second door, so that we can rotate their pasture access.

A guillotine-style door for the hens.

Acceptance Testing

Our hens are not thrilled with the change. They’ve spent their whole lives in their rapidly-decaying mobile coop. One even flew over the fence to get back to their old coop, where I found her in the nesting box at dusk. On the first night, I had to carry each of them into the new coop, one by one, and lock them up to prevent their escape. So, not a home run.

The completed coop.

Mistakes

Oh, God, the mistakes. I see nothing but mistakes when I look at the shed. A door latch that doesn’t quite line up. Shutters with a 1″ gap between them. Little holes everywhere, where I had to re-drill and later patch with wood filler. Blobs of gap filler, which expanded far beyond my expectations, requiring that the excess be trimmed off with a knife. Crooked trim, angles that aren’t even close to 90°. I could go on. Some of these I’ve fixed, or will fix. Others I’ll have to live with. But I think all are within reasonable parameters.

Patches in the T-11.

Overflowing gap filler.

The shutters that don’t quite close.

A misaligned door latch. (It started out aligned correctly, but the door settled.)

Sloppy paint.

Lessons Learned

So, so many lessons learned.

  • Learning new things is hard. Learning new things in the physical world—i.e., not writing code—is really hard.
  • Building things is expensive. Even if the individual components are inexpensive, the tools required to build stuff cost a lot of money.
  • When I hear a nagging voice in my head that says “hey, you’re making this wrong, maybe,” I should stop and listen to it.
  • Painting sucks.
  • Expect to do everything at least twice, because I’m going to mess it up the first time.
  • Building physical things scratches a very similar itch as building software, with the frustrating exception that a physical thing can only be used by me, while software that I build can benefit many people.

All in all, this has been a great experience. I’ve gotten a lot better at basic carpentry in the intervening two months, making fewer mistakes and increasing my understanding of what’s possible. I intend to continue to build things, as life necessitates. But now that this project is wrapping up, I look forward to getting back to writing code.

Term-limiting your organization can be a gift to future you.

For a great many mission-based organizations, there is some point at which they should have accomplished their mission. If they’ve done that, then they should stop. And if they haven’t accomplished their mission by then, they should still stop, because they’re apparently not able to get the job done.

The landscape is littered with zombie non-profits that exist to exist, doing work that is almost wholly unrelated to their original purpose, so that their employees may continue to be employed. Is your goal to promote the building of roads suitable for automobiles? Cool, do that, and stop once you’ve accomplished it. Otherwise, a century later, you’ll just be a sad, sprawling towing insurance company that could be easily replaced by an app. A review of the non-profits supported by any local community foundation will yield vast numbers of organizations that long ago ceased to be useful, as those community foundations know, but a board member golfs with the board chair of that non-profit, and, hey, they employ people…so.

Three years ago, when a few of us were planning U.S. Open Data, then-US-Deputy-CTO Nick Sinai made a suggestion premised on this notion. He said I should plan to shut down the organization after a few years. At first, the proposal struck me as bizarre. Wouldn’t it make more sense to build something lasting? When you’re trying to foster a movement, doesn’t a time limit hinder that goal? The answer, as it turned out, was a very clear no.

Having a clear, public drop-dead date (which we set at four years) for U.S. Open Data has been a gift. It’s informed my work on a daily, even hourly, basis.

The organization was created to further the cause of open data—that is, to advance something larger than any one organization, something that will outlast any one organization. Trying to do that with U.S. Open Data on a permanent basis (whatever that means) would mean working to make U.S. Open Data important enough that funders would want to support it, and creating a permanent fundraising infrastructure. And in building the larger network infrastructure of open data, our incentive would be to place ourselves at the center of that network, so that we’d be too important to go away. When people approached us about creating new businesses or organizations in this space, our incentive would be to discourage them, to reduce competition. So we see that the best interests of a single actor don’t necessarily overlap with the best interests of a cause.

This organizational term limit informed every major decision that we made, most every minor decision, and a great many trivial ones. With an overall incentive to build up the open data ecosystem, instead of building up ourselves, we’ve been forced to consider the decisions before us not in terms of “what is best for us?” but instead in terms of “what is best for open data?” This approach clarifies every decision, forcing the morally superior choice. And the most reliable way to make morally superior decisions is to make sure that doing so is in your best interest.

Like Odysseus lashing himself to his ship’s mast, we declared our time limit publicly, at the outset, to ensure that we’d be held to it. Am I a good enough person to shut down an organization just to keep it from potentially losing its way in years to come? Maybe. Best not to test it.

To anybody looking to start a mission-based non-profit, especially one with a goal-based mission (“build a monument”) as opposed to an unlimited mission (“support the arts”), I heartily recommend establishing a publicly-promoted end date.

Making it work requires 1) setting a date certain instead of something vague, 2) having a board that is committed to making you adhere to that date, and 3) reminding people early and often of your term limit. These three things will ensure that you’ll be held to your own standards, and that nobody will be caught by surprise when your organization goes away.

As a benefit, this creates a sort of sustainability in funding that’s appealing to the right funders. When your organization’s goal isn’t existence in perpetuity, funding it no longer becomes an exercise in long-term strategy. Depending on the organization’s time limit, it might be possible to fund its entire existence in a single grant. (We had two general funding grants: one from the John S. and James L. Knight Foundation and one from the Shuttleworth Foundation.)

Every grant-funded organization will eventually be terminated by an inability to obtain additional funding, at the point at which it has either accomplished its mission (and funders see no reason to continue to support it) or failed to accomplish its mission (and, again, funders see no reason to continue to support it). In short, you can decide to term-limit your own organization, or you can wait for funders to do that for you, at a time of their choosing.

Term limits aren’t right for all mission-based organizations, but many of them should regard the use of term limits as their null hypothesis. It may be harder to reject than you suspect.

How to get started with continuous integration.

I’ve put off learning to use continuous integration tools for a few years now. There’s never a good time to complicate my development process. But today I finally did it. It works differently than I thought, and was easier than I expected, so it’s worth documenting the process for others.

I have a non-trivial number of GitHub repositories that are primarily data-based—that is, there’s no software to be tested; it’s the validity of the data that needs testing. I’m forever lazily checking in YAML, JSON, and XML that’s malformed or that doesn’t comply with its schema. I try to remember to test locally, but sometimes I forget. And sometimes others contribute code to my projects, and I have no way of knowing how they’ve tested those modifications.

So today I set up a Travis CI account. I started with their “Travis CI for Complete Beginners” guide, which immediately proved to be poorly named and generally confusing. Instead, I mucked around a bit until I figured things out.

Here are the salient points about Travis CI:

  • Once you link your GitHub account to Travis CI, every commit that you make is reported to their server, to (potentially) be tested.
  • By creating a .travis.yml file in a repo, you are telling Travis CI “please test this repository with each commit.”
  • The .travis.yml file tells Travis CI exactly what to test and how.
  • Travis CI runs tests by launching a little Docker container (I assume), per the specs established in your .travis.yml config file, and executing the program that you instruct it to run. That program might be a shell script or it might be something you write in any language that you want. You keep it in your repo (perhaps in a /tests/ directory) with everything else.
  • A test fails or succeeds based on your test program’s return status. If it returns 0, it succeeded; otherwise, it failed. If the build fails, Travis CI will email you with your program’s output.

tl;dr: With every commit that you make, Travis CI runs one or more commands of your choosing (e.g., a linter) and, if that throws an error, you’ll be emailed a notification.

For example, here’s the .travis.yml file that I wrote:

language: node_js
node_js:
  - "stable"
script: tests/validate.sh
before_install:
  - npm install jsonlint -g

This tells Travis CI to launch a Docker instance that supports Node.js—using the most recent stable version—and install the jsonlint Node module (a JSON validator). Then it should run the script at tests/validate.sh within the repository.

validate.sh looks like this:

#!/bin/sh
find schemas -type f -exec jsonlint -q '{}' +

It’s simply making sure that every file in the /schemas/ directory is valid JSON. I could run all kinds of tests within this script, of course, but this is all that I’m interested in right now.
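
If I wanted broader coverage, the script could be extended to check other formats too. Here’s a hypothetical sketch (it assumes yamllint is also installed during before_install, which the .travis.yml above doesn’t do):

#!/bin/sh
# Hypothetical, expanded version of validate.sh: fail the build if any JSON
# schema or YAML file is malformed.
set -e

# Validate every file in /schemas/ as JSON, as before.
find schemas -type f -exec jsonlint -q '{}' +

# Also validate any YAML files. (Assumes something like "pip install yamllint"
# was added to before_install in .travis.yml.)
find . -type f \( -name '*.yml' -o -name '*.yaml' \) -exec yamllint -d relaxed '{}' +

Either way, the logic is the same: the script’s exit status is all that Travis CI cares about.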

And that’s it! Every time that I make a commit to this repository, Travis CI is notified, it loads the config file, runs that test program, and tells me if it returns an error.

Of course, there’s a lot more that I need to learn about continuous integration. I should be writing tests for all of my software and running those tests on each commit. I imagine that continuous integration gets a lot more highfalutin’ than what I’ve done so far. But this new practice, if I’m good about making a habit of it, will improve the quality of my work and help make me a better member of the open source software world.

Shuttleworth fellowship.

I am very happy that, as of this week, my work at U.S. Open Data is funded entirely by the Shuttleworth Foundation. The South African organization has awarded me a one-year fellowship, which covers my salary and also provides up to $250,000 in project funding in that time. I’m very happy to have their support, and I’m excited about the work I’ll be able to do in the year ahead to make open data more vibrant and sustainable in the U.S., especially within government.