Axiomatic CSS and Lobotomized Owls

 ∗ A List Apart: The Full Feed

At CSS Day last June I introduced, with some trepidation, a peculiar three-character CSS selector. Called the “lobotomized owl selector” for its resemblance to an owl’s vacant stare, it proved to be the most popular section of my talk.

I couldn’t tell you whether the attendees were applauding the thinking behind the invention or were, instead, nervously laughing at my audacity for including such an odd and seemingly useless construct. Perhaps I was unwittingly speaking to a room full of paid-up owl sanctuary supporters. I don’t know.

The lobotomized owl selector looks like this:

* + *

Despite its irreverent name and precarious form, the lobotomized owl selector is no mere thought experiment for me. It is the result of ongoing experimentation into automating the layout of flow content. The owl selector is an “axiomatic” selector with a voracious purview. As such, many will be hesitant to use it, and it will terrify some that I include it in production code. I aim to demonstrate how the selector can reduce bloat, speed up development, and help automate the styling of arbitrary, dynamic content.

Styling by prescription

Almost universally, professional web interface designers (engineers, whatever) have accustomed themselves to styling HTML elements prescriptively. We conceive of an interface object, then author styles for the object that are inscribed manually in the markup as “hooks.”

Despite only pertaining to presentation, not semantic interoperability, the class selector is what we reach for most often. While elements and most attributes are predetermined and standardized, classes are the placeholders that gift us with the freedom of authorship. Classes give us control.

.my-module {
	/* ... */
}

CSS frameworks are essentially libraries of non-standard class-based ciphers, intended for forming explicit relationships between styles and their elements. They are vaunted for their ability to help designers produce attractive interfaces quickly, and criticized for the inevitable accessibility shortcomings that result from leading with style (form) rather than content (function).

<!-- An unfocusable, semantically inaccurate "button" -->
<a class="ui-button">press me</a>

Whether you use a framework or your own methodology, the prescriptive styling mode also shuts out non-technical content editors. It requires not just knowledge of presentational markup, but also access to that markup to encode the prescribed styles. WYSIWYG editors and tools like Markdown necessarily lack this complexity so that styling does not impede the editorial process.

Bloat

Regardless of whether you can create and maintain presentational markup, the question of whether you should remains. Adding presentational ciphers to your previously terse markup necessarily engorges it, but what’s the tradeoff? Does this allow us to reduce bloat in the stylesheet?

By choosing to style entirely in terms of named elements, we make the mistake of asserting that HTML elements exist in a vacuum, not subject to inheritance or commonality. By treating the element as “this thing that needs to be styled,” we are liable to redundantly set some values for the element in hand that should have already been defined higher in the cascade. Adding new modules to a project invites bloat, which is a hard thing to keep in check.

.module-new {
	/* So… what’s actually new here? */
}

From pre-processors with their addition of variables to object-based CSS methodologies and their application of reusable class “objects,” we are grappling with sandbags to stem this tide of bloat. It is our industry’s obsession. However, few remedies actually eschew the prescriptive philosophy that invites bloat in the first place. Some interpretations of object-oriented CSS even insist on a flattened hierarchy of styles, citing specificity as a problem to be overcome—effectively reducing CSS to SS and denying one of its key features.

I am not writing to condemn these approaches and technologies outright, but there are other methods that just may be more effective for certain conditions. Hold onto your hats.

Selector performance

I’m happy to concede that when some of you saw the two asterisks in * + * at the beginning of this article, you started shaking your head with vigorous disapproval. There is a precedent for that. The universal selector is indeed a powerful tool. But it can be good powerful, not just bad powerful. Before we get into that, though, I want to address the perceived performance issue.

All the studies I’ve read, including Steve Souders’ and Ben Frain’s, have concluded that the comparative performance of different CSS selector types is negligible. In fact, Frain concludes that “sweating over the selectors used in modern browsers is futile.” I’ve yet to read any compelling evidence to counter these findings.

According to Frain, it is, instead, the quantity of CSS selectors—the bloat—that may cause issues; he mentions unused declarations specifically. In other words, embracing class selectors for their “speed” is of little use when their proliferation is causing the real performance issue. Well, that and the giant JPEGs and un-subsetted web fonts.

Contrariwise, the * selector’s simultaneous control of multiple elements increases brevity, helping to reduce file size and improve performance.

The real trouble with the universal selector is that it alone doesn’t represent a very compelling axiom—nothing more intelligent than “style whatever,” anyway. The trick is in harnessing this basic selector and forming more complex expressions that are context-aware.

Dispensing with margins

The trouble with confining styles to objects is that not everything should be considered a property of an object per se. Take margins: margins are something that exist between elements. Simply giving an element a top margin makes no sense, no matter how few or how many times you do it. It’s like applying glue to one side of an object before you’ve determined whether you actually want to stick it to something or what that something might be.

.module-new {
	margin-bottom: 3em; /* what, all the time? */
}

What we need is an expression (a selector) that matches elements only in need of margin. That is, only elements in a contextual relationship with other sibling elements. The adjacent sibling combinator does just this: using the form x + n, we can add a top margin to any n where x has come before it.

This would, as with standard prescriptive styling, become verbose very quickly if we were to create rules for each different element pairing within the interface. Hence, we adopt the aforementioned universal selector, creating our owl face. The axiom is as follows: “All elements in the flow of the document that follow other elements must receive a top margin of one line.”

* + * {
	margin-top: 1.5em;
}

Completeness

Assuming that your paragraph font-size is 1em and its line-height is 1.5, we just set a default margin of one line between all successive flow elements of all varieties occurring in any order. Neither we developers nor the folks building content for the project have to worry about any elements being forgotten and not adopting at least a standard margin when rendered one after the other. To achieve this the prescriptive way, we’d have to anticipate specific elements and give them individual margin values. Boring, verbose, and liable to be incomplete.

Instead of writing styles, we’ve created a style axiom: an overarching principle for the layout of flow content. It’s highly maintainable, too; if you change the line-height, just change this singular margin-top value to match.

Contextual awareness

It’s better than that, though. By applying margin between elements only, we don’t generate any redundant margin (exposed glue) destined to combine with the padding of parent elements. Compare solution (a), which adds a top margin to all elements, with solution (b), which uses the owl selector.

Diagram showing elements with margins, with and without the owl selector.
The diagrams in the left column show margin in dark grey and padding in light gray.

Now consider how this behaves in regard to nesting. As illustrated, using the owl selector and just a margin-top value, no first or last element of a set will ever present redundant margin. Whenever you create a subset of these elements, by wrapping them in a nested parent, the same rules that apply to the superset will apply to the subset. No margin, regardless of nesting level, will ever meet padding. With a sort of algorithmic elegance, we protect against compound whitespace throughout our interface.

Diagram showing nested elements with margins using the owl selector.

This is eminently less verbose and more robust than approaching the problem unaxiomatically and removing the leftover glue after the fact, as Chris Coyier reluctantly proposed in “Spacing The Bottom of Modules”. It was this article, I should point out, that helped give me the idea for the lobotomized owl.

.module > *:last-child,
.module > *:last-child > *:last-child,
.module > *:last-child > *:last-child > *:last-child {
	margin: 0;
}

Note that this only works having defined a “module” context (a big ask of a content editor), and requires estimating possible nesting levels. Here, it supports up to three.

Exception-driven design

So far, we’ve not named a single element. We’ve simply written a rule. Now we can take advantage of the owl selector’s low specificity and start judiciously building in exceptions, taking advantage of the cascade rather than condemning it as other methods do.

Book-like, justified paragraphs

p {
	text-align: justify;
}

p + p {
	margin-top: 0;
	text-indent: 2em;
}

Note that only successive paragraphs are indented, which is conventional—another win for the adjacent sibling combinator.

Compact modules

.compact * + * {
	margin-top: 0.75em;
}

You can employ a little class-based object orientation if you like, to create a reusable style for more compact modules. In this example, all elements that need margin receive a margin of only half a line.

Widgets with positioning

.margins-off > * {
	margin-top: 0;
}

The owl selector is an expressive selector and will affect widgets like maps, where everything is positioned exactly. This is a simple off switch. Increasingly, widgets like these will occur as web components where our margin algorithm will not be inherited anyway. This is thanks to the style encapsulation feature of Shadow DOM.

The beauty of ems

Although a few exceptions are inevitable, by harnessing the em unit in our margin value, margins already adjust automatically according to another property: font-size. In any instances that we adjust font-size, the margin will adapt to it: one-line spaces remain one-line spaces. This is especially helpful when setting an increased or reduced body font-size via a @media query.

When it comes to headings, there’s still more good fortune. Having set heading font sizes in your stylesheet in ems, appropriate margin (leading whitespace) for each heading has been set without you writing a single line of additional code.
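For instance (a sketch with illustrative values): a level-two heading sized at 2em computes the inherited margin-top of 1.5em against its own font-size, producing 3em of leading space in body-text terms, twice the spacing an ordinary paragraph receives, with no extra rule written.

h2 {
	font-size: 2em; /* the owl’s margin-top: 1.5em now computes to 3em of space above */
}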

Diagram showing automatically adjusted margins based on font-size.

Phrasing elements

This style declaration is intended to be inherited. That is how it, and CSS in general, is designed to work. However, I appreciate that some will be uncomfortable with just how voracious this selector is, especially after they have become accustomed to avoiding inheritance wherever possible.

I have already covered the few exceptions you may wish to employ, but, if it helps further, remember that phrasing elements with a typical display value of inline will inherit the top margin but be unaffected in terms of layout. Inline elements only respect horizontal margin; that is specified, standard behavior across all browsers.

Diagram showing inline elements with margin.

If you find yourself overriding the owl selector frequently, there may be deeper systemic issues with the design. The owl selector deals with flow content, and flow content should make up the majority of your content. I don’t advise depending heavily on positioned content in most interfaces, because it breaks implicit flow relationships. Even grid systems, with their floated columns, should require no more than a simple .row > * selector applying margin-top: 0 to reset them.
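That reset, written out (a sketch assuming a .row class on each grid row):

.row > * {
	margin-top: 0; /* columns sit flush; the row itself still receives the owl’s margin */
}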

Diagram showing floated columns with margins.

Conclusion

I am a very poor mathematician, but I have a great fondness for Euclid’s postulates: a set of irreducible rules, or axioms, that form the basis for complex and beautiful geometries. Thanks to Euclid, I understand that even the most complex systems must depend on foundational rules, and CSS is no different. Although modularization of a complex interface is a necessary step in its maturation, any interface that does not follow basic governing tenets is going to lack clarity.

The owl selector allows you to control flow content, but it is also a way of relinquishing control. By styling elements according to context and circumstance, we accept that the structure of content is—and should be—mutable. Instead of prescribing the appearance of individual items, we build systems to anticipate them. Instead of prescribing the appearance of the interface as a whole, we let the content determine it. We give control back to the people who would make it.

When turning off CSS for a webpage altogether, you should notice two things. First, the page is unfalteringly flexible: the content fits the viewport regardless of its dimensions. Second—provided you have written standard, accessible markup—you should see that the content is already styled in a way that is, if not highly attractive, then reasonably traversable. The browser’s user agent styles take care of that, too.

Our endeavors to reclaim and enhance the innate device independence offered by user agents are ongoing. It’s time we worked on reinstating content independence as well.

The Specialized Web: Working with Subject-Matter Experts

 ∗ A List Apart: The Full Feed

The time had come for The Big Departmental Website Redesign, and my content strategist heart was all aflutter. Since I work at a research university, the scope wasn’t just the department’s site—there were also 20 microsites focusing on specific faculty projects. Each one got an audit, an inventory, and a new strategy proposal.

I met one-on-one with each faculty member to go over the plans, and they loved them. Specific strategy related to their users and their work! Streamlined and clarified content to help people do what needed doing! “Somebody pinch me,” I enthused after another successful and energizing meeting.

Don’t worry, the pinch came.

I waltzed into my next microsite meeting, proud of my work and capabilities. I outlined my grand plan to this professor, but instead of being met with the enthusiasm I expected, I ran into a brick wall of “not interested.” She dismissed my big strategy with frowns and flat refusals without elaboration. Not to be deterred, I took a more specific tack, pointing out that the photos on the site felt disconnected from the research. No dice: she insisted that the photos not only needed to stay, but were critical to understanding the heart of the research itself.

She shot down idea after idea, all the while maintaining that the site should somehow be better without changing. My frustration mounted, and I finally pulled my papers together and asked, “Do you really even need a website?!” Of course, she scoffed. Meeting over.

Struggles with subject-matter experts (SMEs) are as diverse as the subject-matter experts themselves. Whether they’re surgeons, C-level executives, engineers, policy makers, faculty—we as web workers need SMEs for their specialized content knowledge. Arming yourself with the right tools, skills, and mentalities will make your work and projects run smoother for everyone on your team—SME included.

The right frame of mind

Know that nobody comes to the table with a clean slate. While the particulars may be new—a web presence, a social media campaign, a new database-driven tool—projects aren’t.

When starting off a project, I’ll ask each person why they’re at the table. Even though it may be obvious why the SME is on the team, each person gets equal time (no more than a minute or two) to state how what they do relates to the project or outcome. You’re all qualified to be there, and stating those qualifications not only builds familiarity, but provides everyone with a picture of the team as a whole.

I see SMEs as colleagues and co-collaborators, no matter what they may think of me and my lack of similar specialized knowledge. I don’t come to them from a service mentality—that they give me their great ideas and I humbly craft them to be web-ready. We’re working together to create something that can serve and help the user.

Listening for context

After my disastrous initial meeting with the prickly professor, I gave myself some time to calm down, and scheduled another meeting with one thing on my agenda: listening. I knew I was missing part of the story, and when we sat down again, I told the SME that I only wanted her to talk to me about the site, the content, and the research.

I wasn’t satisfied with her initial surface-level responses, because they weren’t solvable problems. To find the deeper root causes of her reluctance, I busted out my friends the Five Ws (and that tagalong how). When she insisted something couldn’t be removed or changed, I breezed right past why, because it wasn’t getting me anywhere. Instead, I asked: when did you choose this image? Where did this image come from? If it’s so essential to the site, it must have a history. I kept asking questions until I understood the context that existed already around this site. Once I understood the context, I could identify the need that content served, and could make sure that need was addressed, rather than just cutting it out.

Through this deeper line of questioning, I learned that the SME had been through an earlier redesign process with a different web team. They started off much in the same way I had—with big plans for her content, and not a lot of time for her. The design elements and photos that she was determined to hang on to? That was all that she had been able to control in the process before.

By swooping in with my ideas, I was just another Web Person to her, with mistrust feeding off old—but still very real—feelings of being ignored and passed over. It was my responsibility to build the working relationship back up and make it productive.

In the end, the SME and I agreed to start off with only a few changes—moving to a 960-pixel width and removing dead links—and left the rest of the content and structure as-is in the migration. This helped build her trust that I would not only listen to her, but be a good steward of her content. When we revisited the content later on, she was much more receptive to all my big ideas.

If someone seems afraid, ornery, reluctant, distrustful, or any other work-hampering trait, they’re likely not doing it just to be a jerk—there are histories, insecurities, and fears at work beneath the less-than-ideal behavior, as Kerry-Anne Gilowey points out in her presentation “The People Puzzle: Making the Pieces Fit.”

Listening is a key skill here: let them be heard, and try to uncover what’s at the root of their resistance. Some people may have a natural affinity for these people skills, but anyone will benefit from spending time practicing and working on them.

Tools before strategy, heading for tragedy

Being a good listener, however, is not a simple Underpants Gnome scheme toward project success:

  1. Listen to your frustrating SME
  2. PROFIT

Sometimes you and your SME are on the same page, ready to hop right in to Shiny New Project World! And hey, they have a great idea of what that new project is, and it is totally a Facebook page. Or a Twitter feed. Or an Instagram account, even though there is nothing to take photographs of.

This doesn’t necessarily indicate a mentality of “Social media, how hard can it be!” Instead, such exuberance signals your SME’s desire to be involved in the work.

In the case of social media like Facebook or Twitter, the SME knows there is a conversation, a connection, happening somewhere, and they want to be a part of it. They may latch onto the thing they’ve heard of—maybe they check out photos of their friend’s kids on Facebook, or saw the use of hashtags mentioned during a big event like the World Cup. They’re not picking a specific tool just to be stubborn—they often just don’t have a clue as to how many options they actually have.

Sometimes the web is a little freaky, so we might as well stare it in the face together:

Graphic showing the range of available social media tools.

Each wedge of this graphic is a different type of service, and inside the wedge are the sites or tools or apps that offer that service. This is a great way to show the SME the large toolbox at our disposal, and the need to be mindful and strategic in our selection.

After peering at the glorious toolbox of possible options, it becomes clear we’ll need a strategy to pick the right tool—an Allen wrench is great for building an IKEA bookshelf, but is lousy for tearing down drywall. I start my SME off with homework—a few simple, top-level questions:

  1. Who is this project/site/page for?
  2. Who is this project/site/page not for?

Oftentimes, this is the first time the SME has really thought about audience. If the answer to Question 1 is “everyone,” I start to ask about specific stakeholder groups: Customers? Instructors? Board Members? Legislators? Volunteers? As soon as we can get one group in the “not for” column, the conversation moves forward more easily.

An SME who says a website is “for everyone” is not coming from a place of laziness or obstinacy; the SMEs I work with simply want their website to be the most helpful to the most people.

  3. What other sites out there are like, or related to, the one we hope to make?

SMEs know who their peers, their competitors, and their colleagues are. While you may toil for hours, days, or weeks looking at material you think is comparable, your SME will be able to rattle off people or projects for you to check out in the blink of an eye. Their expertise saves you time.

There are obviously a lot more questions that get asked about a project, but these questions are a great start to collaborative work, and they function independently of specific tools. They facilitate an ongoing discussion about the Five Ws, and lay a good foundation to think about the practical side of the how.

Get yourself (and your project, and your team) right

It is possible to have a great working relationship with an SME. The place to start is with people—meet your SME, and introduce yourself! Meet as soon as the project starts, or even earlier.

Go to their stomping grounds

If you work with a large group of SMEs, find out where and when they gather (board meetings, staff retreats, weekly team check-ins) and get on the agenda. I managed to grab five minutes of a faculty meeting and told everyone who I was, a very basic overview of what I did, and that I was looking forward to working together—which sped up putting faces to names.

Find other avenues

If you’re having trouble locating the SMEs—either because they’re phenomenally busy or reluctant to work together—try tracking down an assistant. These assistants might have access to some of the same specialized knowledge as your SME (in the case of a research assistant), or they could have more access to your SME themselves (in the case of an executive assistant or other calendar-wrangler). Assistants are phenomenal people to know and respect in either of these cases; build a good, trusting relationship with them, and projects will move forward without having to wait for one person’s calendar to clear.

Make yourself available

Similarly, making yourself easier to find can open doors. I told people it was fine to “drop by any time,” and, while true, actually left people with no sense of my availability. When I started establishing set “office hours” instead, I found that drop-in meetings happened more often and more predictably. People knew that from 9 to 11 on Tuesdays and Thursdays, I was happy to talk about any random thing on their mind. I ended up having more impromptu meetings that led to better work.

For those of you about to collaborate, we salute you

SMEs, as stated in their very name, have specialized knowledge I don’t. However, the flip side is also true: my specialized knowledge is something they need so their content can be useful and usable on the web. Though you and your SMEs may be coming from different places, with different approaches to your work, and different skill sets and knowledge, you’ve got to work together to advance the project.

Do the hard work to understand each other, and move forward even if the steps seem tiny (don’t let perfect be the enemy of the good!). Find a seed of respect for each other’s knowledge, nurture it as it grows, and bask in the fruits of your labor—together.

It's not you, it's me

 ∗ journal

Dear web conferences,

It’s not you, it’s me. Something’s changed and it’s not your fault. I’m just on a different path to you. Maybe we’ll be friends in a while, but at the moment I just want some space to do and try other things. I still love you. But we just need a break.

Love, Mark


I’m taking next year off speaking at web conferences. It’s not that I don’t have anything to say, or contribute, but more that I have better things to do with my time right now. Speaking at conferences takes about two weeks per conference if it’s overseas once you factor in preparing and writing the talk, rehearsing, travel, and the conference itself. That’s two weeks away from my wife, my daughters, my new job and a team that needs me.

Two conferences the world over

What I’ve noticed this past year or so is that, largely, we have two different types of web conference running the world over: small independents and larger corporate affairs. The former is generally run by one person with hordes of volunteers and is community-focussed (cheap ticket price, single track). The latter is big-budget, aimed at corporations as a training expense, maybe multi-track, and has A-list speakers.

As well as these two trends, I see others in the material and the way that material is presented. ‘Corporate’ conferences expect valuable, actionable content; that is what corporations are paying for. Slickly delivered for maximum ROI. ‘Community’ conferences have their own trends, too: talks about people, empathy, community, and how start-ups are changing the world. Community conferences are frequently an excuse to hang out with your internet mates. Which is fine, I guess.

My problem with both of these is I’m not sure I fit anymore. I’m not what you would call a slick presenter: I ‘um’ and ‘ah’, I swear, I get excited and stumble on stage in more ways than one. Some would say I’m disrespectful to the audience I’m talking to. I’m lazy with my slides, preferring to hand-write single words and the odd picture. I’ve never used a Keynote transition. I’m not really at home amongst the world’s corporate presenters who deliver scripted, rehearsed, beautifully crafted presentations. They’re great and everything, but it’s just not me. Not for the first time in my life, I don’t quite fit.

And then there’s the community conferences. I feel more at home here. Or at least I used to. This year, not so much. A lot of my friends in this industry just don’t really go to conferences that much anymore. They have family commitments, work to do, and – frankly – just aren’t that into getting pissed up in a night-club after some talks with 90% men. Younger men at that.

Time for something different

All of that may sound like I’m dissing the conference industry. That’s not my intention; it’s more a realisation that, after nearly ten years of speaking at events, I think it’s time I had a little break. Time away to refresh myself and explore other industries that interest me, like typography and architecture. Maybe an opportunity to present at one of those types of conferences would present itself; now that would be cool.

I know it’s a bit weird me posting about this when I could quietly just not accept any invitations to speak. To be honest, I’ve been doing that for a little while, but not for the first time, writing things down helps me clarify my position on things. For a while I was angry at web conferences in general. Angry at the content, disappointed with speakers, disappointed at myself. Then I realised, like so many times before, that when I feel like that it’s just that my ‘norm’ has changed. I’m no longer where I used to be and I’m getting my head around it.

It’s just this time, I’m going to listen to my head instead of burying it two feet in some sand.

Laters.

Infocon: green

 ∗ SANS Internet Storm Center, InfoCON: green

CSAM Month of False Positives: Ghosts in the Pentest Report

CSAM Month of False Positives: Ghosts in the Pentest Report, (Tue, Oct 21st)

 ∗ SANS Internet Storm Center, InfoCON: green

As part of most vulnerability assessments and penetration tests against a website, we almost alwa ...(more)...

ISC StormCast for Tuesday, October 21st 2014 http://isc.sans.edu/podcastdetail.html?id=4201, (Tue, Oct 21st)

 ∗ SANS Internet Storm Center, InfoCON: green

...(more)...

Apple Multiple Security Updates, (Mon, Oct 20th)

 ∗ SANS Internet Storm Center, InfoCON: green


Adventures with Plex

 ∗ journal

I’ve written before about going completely digital for our home entertainment. To recap: I have a big, shared hard drive attached to an iMac that two Apple TVs connect to using ATV Flash. This was fine for a while, but, frankly, ATV Flash is a little buggy over our network, and the Apple TV struggled with any transcoding (converting one file type to another) and streaming – especially in HD. So, we needed something better. In step a few things: Netflix, Plex and a Mac Mini.

Plex has been on my radar for a few years and up until recently didn’t really make much sense for me. But as ATV Flash was becoming more unstable as Apple updated their OS, then Plex started to look like a good alternative.

The hardware

As you may have read in my older post, I had a shared hard drive, with all the media on it, hooked up to an iMac that the Apple TVs connected to in order to browse the media. The issue here became network and sharing reliability. Quite often, the shared hard drive was invisible because the iMac was asleep, or the network had dropped. Sometimes this happened in the middle of a movie. Not ideal.

The new setup is almost identical, but instead of using the Apple TVs as hardware to browse the library, they are now used just as devices to AirPlay to. I barely use the Apple TV UI at all; I browse from my iPad and then AirPlay to the Apple TV. What’s cool here is that the iPad just acts as a remote: the file itself is transcoded on the server and pushed to the Apple TV directly.

What about a standalone NAS (Network Attached Storage)?

Plex does run on a NAS, but the issue there is most consumer NAS boxes don’t have the hardware grunt to do the on-the-fly transcoding. So, I finally decided to ditch my iMac in favour of a headless Mac Mini to run as a decent media box, running Plex.

Getting started with Plex

  1. Download it. Get the Media Server on your computer or NAS of choice (Plex has huge device support). Also, get hold of the mobile apps. Once you’re done there, download Plex for your connected apps: from Chromecast, Amazon Fire TV, Roku, Google TV or native Samsung apps and, now, the Xbox One, too. The app support is really quite incredible.

  2. Plex Pass. Even though the Plex software is free, some additional features are reserved for a subscription that you have to buy. The good thing is, you can get a lifetime subscription, and the cost is very reasonable at $149.99. For that, you get early access to new builds, remote syncing of content, and things like playlists and trailers. But the killer feature of the Plex Pass is the ability to create user accounts for your content. This is something I’ve been after for ages on the Apple TV, and it’s even more important now that my eldest daughter regularly watches films on it. I need the ability to filter the content appropriately for her.

  3. Setting up a server is a breeze. Once you’ve installed the server software, get yourself a user account on the Plex website and set up a server. This launches some web software for you to start adding files to your libraries and fiddle away to your heart’s content with all the settings.

  4. If you did get the Plex Pass, I’d recommend creating multiple user accounts and playlists with the features Plex Pass gives you. The way I did this was to have email addresses and user accounts for server-plex, parents-plex and kids-plex. server-plex is for administering the account and has all the libraries shared with it, ‘parents’ is for Emma and me, and ‘kids’ just has the ‘children’s’ library shared with it. Now, by simply signing in and out on the iPad, I can access the appropriate content for each user group.

Next up: streaming, or ‘How do I watch the film on my telly!?’

There are a few options:

  1. Native apps (Samsung, Xbox One, etc.) These are apps installed directly on your TV or Xbox. To watch your content, simply fire up the app and away you go. Yesterday, I installed the Xbox One app and was up and running in less than three minutes.

  2. iOS and Airplay This is what I described earlier. Simply download the iOS apps and hook up to your plex server. Once you’re done, browse your library, press play and then airplay to your Apple TV.

  3. iOS and Chromecast Exactly the same as above!

Now, there are some disadvantages and advantages to streaming.

Disadvantages: from what I understand, adding AirPlay into the mix does have a slight performance hit. Not that I’ve seen it, though; I’m generally streaming at 720 rather than 1080 resolution, so the file sizes aren’t coming up against network limitations. I expect this to change in the coming years as resolutions increase.

Advantages: it’s a breeze. I use the Plex app on my iPad, choose a film or TV show I want to watch and then just stream it via AirPlay. When I’m travelling, I take a Chromecast with me to plug into the TV and stream to that (more on that in another post).

‘Hacking’ the Apple TV

Currently there is no native app for the Apple TV, but there is a way to get around this by ‘hacking’ the Trailers app to browse content directly on your Plex server using PlexConnect or OpenPlex. Now, there’s a lot to read to get up to speed on this, so I’d recommend a good look through the Plex forums. I followed the instructions here to install the OS X app and add an IP address to the Apple TV (to point at the Plex server) and, so far, so good.

To be honest, though, I tend to just Airplay these days. The iPad remote / Apple TV combination is quite hard to beat. It’s fast, flexible and stable.

Is this it for my digital home needs?

For a good few years now I’ve been looking for the optimum solution to this problem. My home media centre needed the following:

  • Multi-user accounts
  • Full-featured remote
  • Large file format support
  • Manage music, photos and movies
  • Fast transcoding and streaming (minimum 720)

iTunes, ATV Flash and Drobo (in fact, any domestic NAS) all fail on most or all of these points. Plex not only ticks every single box (if it’s run on a decent machine for transcoding), but provides very broad device support, an active developer community and a really good UX for the interface.

Who knows how long I’ll stick with Plex as I do have a habit of switching this around as often as I change my email client (quite often!). But, for now, it’s working just fine!

Cote D'Azur

 ∗ jmoiron.net blog

the fairmont hairpin, photo by steve harris, cc

The Monte Carlo method describes a class of algorithms which use repeated random samplings to produce statistical or approximate results. A famous algorithm employing such a method, brought to my attention by my friend Jeremy Self recently, is one which calculates the digits of pi.

It starts by considering the ratio between the area of a circle and a bounding rectangle. To make the math easier, you can take the unit circle with radius \(r = 1\) and a bounding box with sides of length 2, presented below. The area of a circle is \(\pi r^2\), and of a square \(a^2\), where \(a\) is the length of a side. The areas of our unit circle and bounding square are therefore \(\pi\) and 4, respectively.

If we randomly plot points somewhere inside the square, we can expect for the ratio of points inside and outside the circle to be equivalent to the ratio between the areas of the circle and the square, or \(\pi/4\). The more points we plot, the lower the error rate we should encounter: you can understand this intuitively, as it's possible to plot 5 random points inside the circle and get \(\pi = 4\), but for 10,000 points it's incredibly unlikely.
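Stated as an estimator (a restatement of the ratio above): plot \(N\) points, count the number \(N_{in}\) that fall inside the circle, and compute

\[\pi \approx 4 \cdot \frac{N_{in}}{N}.\]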

We can make the further observation that each quadrant represents an area of a quarter circle and a unit square with the same ratio discussed above, so we can use random samplings where \(0 \le x \le 1\) and \(0 \le y \le 1\), which works well for lots of random number generators that generate floats in this range.

This is an incredibly simple algorithm to implement, so we decided to have a little shootout and see how the naive solutions fared in multiple languages, running this algorithm with 100,000,000 iterations.
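For reference, here is a minimal Go sketch of the naive approach — my own illustration of the algorithm, not the benchmarked code, which isn't reproduced here:

package main

import (
    "fmt"
    "math"
    "math/rand"
)

// inCircle tests whether the point (x, y) falls inside the unit quarter circle.
func inCircle(x, y float64) bool {
    return math.Sqrt(x*x+y*y) <= 1
}

func main() {
    const n = 100000000
    inside := 0
    for i := 0; i < n; i++ {
        if inCircle(rand.Float64(), rand.Float64()) {
            inside++
        }
    }
    // the inside/total ratio approximates pi/4
    fmt.Println(4 * float64(inside) / float64(n))
}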

I implemented the algorithm in C, Rust, Go, Python and Ruby, and between Jeremy and the other Jeremy we also had a PHP implementation, which I used below, and a Common Lisp implementation kicking about. The point really wasn't to compare languages to each other; more to challenge our own preconceptions about their performance characteristics.

These are the runtimes on my fairly old Intel I5 750:

Language     Runtime (s)
C            2.15
Rust         1.70
Go           4.79
Go (gccgo)   1.96
Ruby         25.53
Python 2     51.74
Python 3     50.82
PyPy         7.24
PHP          54.6

Despite only testing function call overhead (or compiler inlining), arithmetic, sqrt, and rand, I still learned some interesting things and encountered some surprising results.

There are large differences in the speed of random number generators; having never looked at the implementations of any, I had assumed that most would use the same algorithms. Rust has quite a few at its disposal, and XorShiftRng gave me the fastest results of any language in my tests, while using rng::task_rng() clocked in at 9.45s, slower than PyPy. Since these processes have to seed the generator and produce 200,000,000 random numbers, the tradeoffs made in each RNG implementation can represent huge shifts in elapsed time.

Function calls have surprisingly variable overhead across languages. While I expected manual inlining of the various in_circle test functions to improve the performance of most languages, in PHP in particular the effect was drastic: runtime drops from 54s to 36s. To compare, the Python runtime only improved by 2s, to 48s.

After using Go for so long, I'd blissfully forgotten about compiler optimization flags and other build-time tuning.

When I'd initially implemented the rust version, I used the task_rng random number generator and no optimization switches on rustc, which resulted in a surprisingly high runtime of 24s. Turning on the optimizer lowered that to 9s, nearly 3x improvement. Running with XorShiftRng unoptimized was 9s itself, and flipping on the optimizer got nearly 5x improvement. I'm unsure if this says good things about the optimizer or bad things about the quality of output when not running it.

This made me think to turn on the optimizer for C as well, and finally to try out gccgo and see what it would produce. The runtimes for unoptimized gccgo and C were both around 3.6s, which means that simply adding -O2 to the build roughly doubles performance. Again, this surprised me; I'd always thought, based primarily on ignorance and scoffing at gentoo, that optimizers would produce some incremental improvements at best, and perhaps for general purpose desktop software where you're rarely CPU bound, they do. I wish I had some better knowledge about compiler optimizers to share; for now, these observations will have to suffice.

The star of the show in my mind has to be PyPy: its results on some unmodified, straightforward Python code are pretty astonishing, given the dismal performance of CPython. This is just the sort of thing that should be its bread and butter, but it really came through in a big way. Ruby's results were also quite impressive, as I'd run the Python code first and expected Ruby to be around the same. I was expecting the massive gulf between the scripting and compiled languages, but PyPy shows that this needn't be the case.

These programs represent the tiniest slice of their respective languages, and surely say more about my relative abilities between them than some deeper truths on the language of implementation. Although it should be obvious, I strongly warn against using the results to determine what language to use for a real project, but perhaps this exercise can inspire you to do your own experiments and learn some interesting algorithms in the process!

Microsoft MSRT October Update, (Sun, Oct 19th)

 ∗ SANS Internet Storm Center, InfoCON: green

This past week Microsoft

Alice – Painless Middleware Chaining for Go

 ∗ Justinas Stankevičius

According to a recent thread on Reddit, many people like it simple when doing web development in Go and use net/http with useful addons (like the Gorilla toolkit) instead of a full-fledged framework.

Such applications often make use of middleware, but wrapping lots of layers of handlers can become messy in the long run:

final := gzipHandler(rateLimitHandler(securityHandler(authHandler(myApp))))

Sure, that works. But then you want to remove a handler, add one or reorder them. Suddenly, you're drowning in parentheses.

Alice (available on GitHub) was created as a solution to simplify chaining while remaining flexible and playing nice with the existing net/http middleware.

It's not a framework, a mux or a toolkit. Its sole functionality is to let you create the same middleware chain like this:

final := alice.New(gzipHandler, rateLimitHandler,
    securityHandler, authHandler).Then(myApp)

Flaws of other approaches

Many might point out that there are existing solutions for chaining middleware. That's true, but every solution I had looked into had at least one thing that I thought should be done differently.

The recent Negroni package allows handlers to be added as middleware like this:

n := negroni.Classic()
// func (n *Negroni) UseHandler(handler http.Handler)
n.UseHandler(myMiddleware)

See the fault here? Traditional net/http middleware wraps the next handler so that the wrapper can have total control. Here, there's simply no handler to pass in. You can't pass in the Negroni instance itself, as it would result in infinite recursion (Negroni calls middleware calls Negroni).

Negroni has its own mechanism for control flow (next() to call the following handlers), but you have to modify your middleware to fully utilize it, which is not ideal.

go-httppipe has the same problem: it's suited for successive handlers, but not wrapper-type middleware.

go-stackbuilder forcibly uses http.ServeMux as the main handler, instead of http.Handler.

func New(mux *http.ServeMux) Builder {
    return Builder{mux}
}

func Build(hs ...interface{}) http.Handler {
    return New(http.DefaultServeMux).Build(hs...)
}

That is not ideal as one might like to bypass the default mux completely, especially when using an alternative one.

Muxchain, like Negroni, doesn't provide a reference to the next handler.

Matt Silverlock's use.go snippet came closest to what I wanted. My only complaint is that the ordering of handlers here is counter-intuitive. Reading the chaining code makes it obvious that

use(myApp, csrf, logging, recovery)

is equivalent to this code:

recovery(logging(csrf(myApp)))

and this request cycle:

recovery -> logging -> csrf -> myApp

So, a reversed order from what you've written in your code.

How Alice is better

Alice adopts Matt's model and fixes the small imperfections. It still requires middleware constructors of form func (http.Handler) http.Handler, but requests now flow exactly the way you order your handlers.

This code:

alice.New(recovery, logging, csrf).Then(myApp)

will result in this request cycle:

recovery -> logging -> csrf -> myApp

Here, recovery receives the request and has a reference to logging. It may or may not call logging: it's completely up to recovery, just as without Alice.
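For illustration, a middleware written in that constructor form might look like this — a hypothetical logging example, assuming net/http and log are imported; it is not part of Alice:

func logging(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Printf("%s %s", r.Method, r.URL.Path) // record the request...
        next.ServeHTTP(w, r)                      // ...then hand control to the next handler
    })
}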

One more thing

Deciding on a unified constructor for middleware is the main reason why creating such a convenient API for chaining is even possible.

However, limiting ourselves to one function signature has a drawback. Many middleware have settings one might want (or have) to set. Not only does that not fit into our chosen signature, but there is also no standard way of passing options adopted by developers.

Middleware found in the standard library take the options as additional arguments to the same function:

handler = http.StripPrefix("/old/", handler)

Throttled constructs a rate limiting strategy which has a method to wrap our handler:

th := throttled.RateLimit(throttled.PerMin(30), 
    &throttled.VaryBy{RemoteAddr: true}, 
    store.NewMemStore(1000))
handler := th.Throttle(handler)

And nosurf has additional methods on the handler:

handler := nosurf.New(handler)
handler.ExemptPath("/")
handler.SetFailureHandler(http.NotFoundHandler())

It's nearly impossible for Alice to solve this without resorting to ugly reflection tricks, so it doesn't try to. Instead, when in need of customization, one should create their own constructor.

func myStripPrefix(h http.Handler) http.Handler {
    return http.StripPrefix("/old/", h)
}

This now conforms to the constructor form, so we can plug it into Alice.

alice.New(myStripPrefix).Then(myApp)

Prove me wrong, though

Alice was born in a matter of minutes and based on my own perception of what's right and what's not. Just like I've found other solutions non-ideal, some might find inherent flaws in how Alice works. Although it's unlikely that the very essence of how Alice functions will change, any feedback is welcome.

Check out Alice on GitHub.

Best Practices for Errors in Go

 ∗ Justinas Stankevičius

Error handling seems to be one of the more controversial areas of Go. Some are pleased with it, while others hate it with passion. Nevertheless, there is a handful of best practices that will make dealing with errors less painful if you're a sceptic and even better if you like it as it is.

Know when to panic

The idiomatic way of reporting errors in Go is having the error as the last return value of a function. However, Go also offers an alternative error mechanism called panic that is similar to what is known as exceptions in other programming languages.

Despite not being suitable everywhere, panics are useful in several specific scenarios.

When the user explicitly asks: Okay

In certain cases, the user might want to panic and thus abort the whole application if an important part of it can not be initialized. Go's standard library is rife with examples of variants of functions that panic instead of returning an error, e.g. regexp.MustCompile. As long as there remains a function that does this in an idiomatic way (regexp.Compile), providing the user with a nifty shortcut is okay.
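The same pattern is easy to provide for your own APIs; a sketch, where DoStuff and Thing are hypothetical stand-ins:

// MustDoStuff wraps DoStuff, panicking instead of returning an error.
func MustDoStuff() *Thing {
    t, err := DoStuff()
    if err != nil {
        panic(err)
    }
    return t
}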

When setting up: Maybe

A scenario that, in a way, intersects with the previous one, is the setting up phase of an application. In most cases, if preparations fail, there is no point in continuing. One case of this might be a web application failing to bind the required port or being unable to connect to the database server. In that case, there's not much left to do, so panicking is acceptable, even if not explicitly requested by the user.

This behavior can also be observed in the standard library: the net/http muxer will panic if a pattern is invalid in any way.

func (mux *ServeMux) Handle(pattern string, handler Handler) {
    mux.mu.Lock()
    defer mux.mu.Unlock()

    if pattern == "" {
        panic("http: invalid pattern " + pattern)
    }
    if handler == nil {
        panic("http: nil handler")
    }
    if mux.m[pattern].explicit {
        panic("http: multiple registrations for " + pattern)
    }

    // ...

Otherwise: Not really

Although there might be other cases where panic is useful, returned errors remain preferable in most scenarios.

Predefine errors

It's not unusual to see code returning errors like this

func doStuff() error {
    if someCondition {
        return errors.New("no space left in the device")
    } else {
        return errors.New("permission denied")
    }
}

The wrongdoing here is that it's not convenient to check which error has been returned. Comparing strings is error prone: misspelling a string in comparison will result in an error that cannot be caught at compile time, while a cosmetic change of the returned error will break the checking just as much.

Given a small set of errors, the best way to handle this is to predefine each error publicly at the package level.

var ErrNoSpaceLeft = errors.New("no space left in the device")
var ErrPermissionDenied = errors.New("permission denied")

func doStuff() error {
    if someCondition {
        return ErrNoSpaceLeft
    } else {
        return ErrPermissionDenied 
    }
}

Now the previous problems disappear.

if err == ErrNoSpaceLeft {
    // handle this particular error
}

Provide information

Sometimes, an error can happen because of a whole lot of different reasons. Wikipedia lists 41 different HTTP client errors. Let's say we want to treat them as errors in Go (net/http does not). What's more, we'd like to be able to look into the specifics of the error we received and find out whether the error was 404, 410 or 418.

In the last section we developed a pattern for discerning errors from one another, but here it gets a bit messy. Predefining 41 separate errors like this:

var ErrBadRequest = errors.New("HTTP 400: Bad Request")
var ErrUnauthorized = errors.New("HTTP 401: Unauthorized")
// ...

will make our code and documentation messy and our fingers sore.

A custom error type is the best solution to this problem. Go's implicit interfaces make creating one easy: to conform to the error interface, we only need to have an Error() method that returns a string.

type HttpError struct {
    Code        int
    Description string
}

func (h HttpError) Error() string {
    return fmt.Sprintf("HTTP %d: %s", h.Code, h.Description)
}

Not only does this act like a regular error, it also contains the status code as an integer. We can then use it as follows:

func request() error {
    return HttpError{404, "Not Found"}
}

func main() {
    err := request()

    if err != nil {
        // an error occurred
        if err.(HttpError).Code == 404 {
            // handle a "not found" error
        } else {
            // handle a different error
        }
    }

}

The only minor annoyance left here is the need to do a type assertion to a concrete error type. But it's a small price to pay compared to the fragility of parsing the error code from a string.
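If you can't be sure of the concrete type, the comma-ok form of the assertion avoids a panic on mismatch — a defensive variant of the code above:

if httpErr, ok := err.(HttpError); ok {
    if httpErr.Code == 404 {
        // handle a "not found" error
    }
} else {
    // not an HttpError at all
}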

Provide stack traces

Errors as they exist in Go remain inferior to panics in one important way: they do not provide important information about where in the call stack the error happened.

A solution to this problem has recently been created by the Juju team at Canonical. Their package errgo provides the functionality of wrapping an error into another one that records where the error happened.

Building up on the HTTP error handling example, we'll now put this to use.

package main

import (
    "fmt"

    "github.com/juju/errgo"
)

type HttpError struct {
    Code        int
    Description string
}

func (h HttpError) Error() string {
    return fmt.Sprintf("HTTP %d: %s", h.Code, h.Description)
}

func request() error {
    return errgo.Mask(HttpError{404, "Not Found"})
}

func main() {
    err := request()

    fmt.Println(err.(errgo.Locationer).Location())

    realErr := err.(errgo.Wrapper).Underlying()

    if realErr.(HttpError).Code == 404 {
        // handle a "not found" error
    } else {
        // handle a different error
    }
}

Our code has changed in several ways. Firstly, we wrap an error into an errgo-provided type. This code now prints the information on where the error happened. On my machine it outputs

/private/tmp/example.go:19

referencing a line in request().

However, our code has become somewhat messier. To get to the real HttpError we need to do more unwrapping. Sadly, I'm not aware of a real way to make this nicer, so it's all about tradeoffs. If your codebase is small and you can always tell where the error came from, you might not need to use errgo at all.

What about concrete types?

Some might point out that a portion of the type assertions could have been avoided if we returned a concrete type from a function instead of an error interface.

However, doing that in conjunction with the := operator will bring trouble. As a result, the error variable will not be reusable.

func f1() HttpError { ... }
func f2() OsError { ... }

func main() {
    // err automatically declared as HttpError
    err := f1()

    // OsError is a completely different type
    // The compiler does not allow this
    err = f2()
}

To avoid this, errors are best returned using the error type. The standard library avoids returning concrete types as well; e.g., the os package states:

Often, more information is available within the error. For example, if a call that takes a file name fails, such as Open or Stat, the error will include the failing file name when printed and will be of type *PathError, which may be unpacked for more information.

This concludes my list of best practices for errors in Go. As in other areas, a different mindset has to be adopted when coming into Go from elsewhere. "Different" does not imply "worse" though and deciding on a set of conventions is vital to making Go development even better.

Writing HTTP Middleware in Go

 ∗ Justinas Stankevičius

In the context of web development, "middleware" usually stands for "a part of an application that wraps the original application, adding additional functionality". It's a concept that usually seems to be somewhat underappreciated, but I think middleware is great.

For one, a good middleware has a single responsibility, is pluggable and self-contained. That means you can plug it in your app at the interface level and have it just work. It doesn't affect your coding style, it isn't a framework, but merely another layer in your request handling cycle. There's no need to rewrite your code: if you decide that you want the middleware, you add it into the equation, if you change your mind, you remove it. That's it.

Looking at Go, HTTP middleware is quite prevalent, even in the standard library. Although it might not be obvious at first, functions in the net/http package, like StripPrefix or TimeoutHandler are exactly what we defined middleware to be: they wrap your handler and take additional steps when dealing with requests or responses.

My recent Go package nosurf is middleware too. I intentionally designed it as one from the very beginning. In most cases you don't need to be aware of things happening at the application layer to do a CSRF check: nosurf, like any proper middleware, stands completely on its own and works with any tools that use the standard net/http interface.

You can also use middleware for:

  • Mitigating BREACH attack by length hiding
  • Rate-limiting
  • Blocking evil bots
  • Providing debugging information
  • Adding HSTS, X-Frame-Options headers
  • Recovering gracefully from panics
  • ...and probably many others

Writing a simple middleware

For the first example, we'll write middleware that only allows users to visit our website through a single domain (specified by the HTTP Host header). Middleware like that could serve to protect the web application from host spoofing attacks.

Constructing the type

For starters, let's define a type for the middleware. We'll call it SingleHost.

type SingleHost struct {
    handler     http.Handler
    allowedHost string
}

It consists of only two fields:

  • the wrapped Handler we'll call if the request comes with a valid Host.
  • the allowed host value itself.

As we made the field names lowercase, keeping them private to our package, we should also make a constructor for our type.

func NewSingleHost(handler http.Handler, allowedHost string) *SingleHost {
    return &SingleHost{handler: handler, allowedHost: allowedHost}
}

Request handling

Now, for the actual logic. To implement http.Handler, our type only needs to have one method:

type Handler interface {
        ServeHTTP(ResponseWriter, *Request)
}

And here it is:

func (s *SingleHost) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    host := r.Host
    if host == s.allowedHost {
        s.handler.ServeHTTP(w, r)
    } else {
        w.WriteHeader(403)
    }
}

The ServeHTTP method simply checks the Host header on the request:

  • if it matches the allowedHost set by the constructor, it calls the wrapped handler's ServeHTTP method, thus passing the responsibility for handling the request.
  • if it doesn't match, it returns the 403 (Forbidden) status code and the request is dealt with.

The original handler's ServeHTTP is never called in the latter case, so not only does it not get a say in this, it won't even know such a request arrived at all.

Now that we're done coding our middleware, we just need to plug it in. Instead of passing our original Handler directly into the net/http server, we wrap it in the middleware.

singleHosted := NewSingleHost(myHandler, "example.com")
http.ListenAndServe(":8080", singleHosted)

An alternative approach

The middleware we just wrote is really simple: it literally consists of 15 lines of code. For writing such middleware, there exists a method with less boilerplate. Thanks to Go's support for first-class functions and closures, and the neat http.HandlerFunc wrapper, we'll be able to implement this as a simple function, rather than a separate struct type. Here is the function-based version of our middleware in its entirety.

func SingleHost(handler http.Handler, allowedHost string) http.Handler {
    ourFunc := func(w http.ResponseWriter, r *http.Request) {
        host := r.Host
        if host == allowedHost {
            handler.ServeHTTP(w, r)
        } else {
            w.WriteHeader(403)
        }
    }
    return http.HandlerFunc(ourFunc)
}

Here we declare a simple function called SingleHost that takes in a Handler to wrap and the allowed hostname. Inside it, we construct a function analogous to ServeHTTP from the previous version of our middleware. Our inner function is actually a closure, so it can access the variables of the outer function. Finally, HandlerFunc lets us use this function as an http.Handler.

Deciding whether to use a HandlerFunc or to roll your own http.Handler type is ultimately up to you. While for basic cases a function might be enough, if you find your middleware growing, you might want to consider making your own struct type and separating the logic into several methods.

Meanwhile, the standard library actually uses both ways of building middleware. StripPrefix is a function that returns a HandlerFunc, while TimeoutHandler, although a function too, returns a custom struct type that handles the requests.
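For illustration, here is how those two helpers might wrap a handler in practice (the path and timeout here are made up for the example):

fileServer := http.FileServer(http.Dir("./static"))
// StripPrefix removes "/static/" from the URL before the file server sees it
handler := http.StripPrefix("/static/", fileServer)
// TimeoutHandler gives up on requests that take longer than a second
handler = http.TimeoutHandler(handler, 1*time.Second, "request timed out")
http.ListenAndServe(":8080", handler)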

A more complex case

Our SingleHost middleware was trivial: we checked one attribute of the request and either passed the request to the original handler, not caring about it anymore, or returned a response ourselves and didn't let the original handler touch it at all. Nevertheless, there are cases where, rather than acting based on what the request is, our middleware has to post-process the response after the original handler has written it, modifying it in some way.

Appending data is easy

If we just want to append some data after the body written by the wrapped handler, all we have to do is call Write() after it finishes:

type AppendMiddleware struct {
    handler http.Handler
}

func (a *AppendMiddleware) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    a.handler.ServeHTTP(w, r)
    w.Write([]byte("Middleware says hello."))
}

The response body will now consist of whatever the original handler wrote, followed by “Middleware says hello.”

The problem

Doing other types of response manipulation is a bit harder, though. Say we'd like to prepend data to the response instead of appending it. If we call Write() before the original handler does, it will lose control over the status code and headers, since the first Write() writes them out immediately.

Modifying the original output in any other way (say, replacing strings in it), changing certain response headers or setting a different status code won't work for a similar reason: by the time the wrapped handler returns, those will have already been sent to the client.

To counter this we need a particular kind of ResponseWriter that would work as a buffer, gathering the response and storing it for later use (and modifications). We would then pass this buffering ResponseWriter to the original handler instead of giving it the real RW, thus preventing it from actually sending the response to the user just yet.

Luckily, there's a tool just like that in the Go standard library. ResponseRecorder in the net/http/httptest package does all we need: it saves the response status code, a map of response headers and accumulates the body into a buffer of bytes. Although (like the package name implies) it's intended to be used in tests, it fits our use case as well.

Let's look at an example of middleware that uses ResponseRecorder and modifies everything in a response, just for the sake of completeness.

type ModifierMiddleware struct {
    handler http.Handler
}

func (m *ModifierMiddleware) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    rec := httptest.NewRecorder()
    // passing a ResponseRecorder instead of the original RW
    m.handler.ServeHTTP(rec, r)
    // after this finishes, we have the response recorded
    // and can modify it before copying it to the original RW

    // we copy the original headers first
    for k, v := range rec.Header() {
        w.Header()[k] = v
    }
    // and set an additional one
    w.Header().Set("X-We-Modified-This", "Yup")
    // only then the status code, as this call writes out the headers 
    w.WriteHeader(418)
    // the body hasn't been written (to the real RW) yet,
    // so we can prepend some data.
    w.Write([]byte("Middleware says hello again. "))
    // then write out the original body
    w.Write(rec.Body.Bytes())
}

And here's the response we get by wrapping a handler that would otherwise simply return "Success!" with our middleware.

HTTP/1.1 418 I'm a teapot
X-We-Modified-This: Yup
Content-Type: text/plain; charset=utf-8
Content-Length: 37
Date: Tue, 03 Sep 2013 18:41:39 GMT

Middleware says hello again. Success!

This opens up a whole lot of new possibilities. The wrapped handler is now completely in our control: even after it handles the request, we can manipulate the response in any way we want.

Sharing data with other handlers

In various cases, your middleware might need to expose certain information to other middleware or to your app itself. For example, nosurf needs to give the user a way to access the CSRF token and the reason for failure (if any).

A nice pattern for this is to use a map, usually an unexported one, that maps http.Request pointers to pieces of the data needed, and then expose package (or handler) level functions to access the data.

I used this pattern for nosurf too. Here, I created a global context map. Note that a mutex is also needed, since Go's maps aren't safe for concurrent access by default.

type csrfContext struct {
    token string
    reason error
}

var (
    contextMap = make(map[*http.Request]*csrfContext)
    cmMutex    = new(sync.RWMutex)
)

The data is set by the handler and exposed via exported functions like Token().

func Token(req *http.Request) string {
    cmMutex.RLock()
    defer cmMutex.RUnlock()

    ctx, ok := contextMap[req]
    if !ok {
            return ""
    }

    return ctx.token
}
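The setter side is symmetric, taking the write lock instead. A minimal sketch (the function name and exact shape here are illustrative, not nosurf's actual code):

func setToken(req *http.Request, token string) {
    cmMutex.Lock()
    defer cmMutex.Unlock()

    ctx, ok := contextMap[req]
    if !ok {
        ctx = new(csrfContext)
        contextMap[req] = ctx
    }
    ctx.token = token
}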

You can find the whole implementation in the context.go file in nosurf's repository.

While I chose to implement this on my own for nosurf, there exists a handy gorilla/context package that implements a generic map for saving request information. In most cases, it should suffice and protect you from the pitfalls of implementing shared storage on your own. It even has middleware of its own that clears the request data after the request has been served.
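Roughly, the same pattern with gorilla/context looks like this (a sketch from memory; check the package docs for the authoritative API):

// store a value for this request
context.Set(req, "csrfToken", token)

// retrieve it elsewhere; Get returns an interface{},
// so assert to the concrete type before use
tok := context.Get(req, "csrfToken")

// wrap your top-level handler so per-request data gets cleared
http.ListenAndServe(":8080", context.ClearHandler(myHandler))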

All in all

The intention of this article was both to draw fellow gophers' attention to middleware as a concept and to demonstrate some of the basic building blocks for writing middleware in Go. Despite being a relatively young language, Go has an amazing standard HTTP interface. It's one of the factors that make coding middleware for Go a painless and even fun process.

Nevertheless, there is still a lack of quality HTTP tools for Go. Most, if not all, of the middleware ideas for Go I mentioned earlier are yet to come to life. Now that you know how to build middleware for Go, why not do it yourself? ;)

P.S. You can find samples for all the middleware written for this post in a GitHub gist.

Embrace Go's HTTP Tools

 ∗ Justinas Stankevičius

Some time ago I released nosurf, a Go middleware for mitigating Cross-Site Request Forgery attacks. Writing a seemingly simple and small package was enough to fall in love with how Go handles HTTP. Yet it's up to us to either embrace the standard HTTP facilities or fragment, sacrificing composability and modularity.

http.Handler is THE interface

Unified HTTP interfaces for web apps written in certain programming languages, like WSGI for Python and Rack for Ruby, are a great idea, but they weren't always there. For instance, Rack only emerged in 2007, when Rails had already been going strong for a while.

Meanwhile in Go, the only interface needed has been in development since 2009, and although it went through some serious changes along the way, by the end of 2011, months before Go 1.0 was released, it had already stabilized.

Of course, I'm talking about the mighty http.Handler.

type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}

To be able to handle HTTP requests, your type only needs to implement this one method. The method reads the request info from the given *Request and writes a response into the given ResponseWriter. Seems simple enough, right?
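A complete (if trivial) server needs little more than that one method; the greeter type here is just an invented example:

type greeter struct{}

func (g greeter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, %s!", r.URL.Path[1:])
}

func main() {
    http.ListenAndServe(":8080", greeter{})
}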

Complement, don't replace

Yet, when building abstractions on top of that, some get it wrong. Take for example Mango, described by its author as "a modular web-application framework for Go, inspired by Rack and PEP333".

This is what a Mango application looks like:

func Hello(env mango.Env) (mango.Status, mango.Headers, mango.Body) {
  return 200, mango.Headers{}, mango.Body("Hello World!")
}

Looks simple, concise and very similar to WSGI or Rack, right? Except for one thing. While with dynamic/duck typing you could have any iterable as a body, here mango.Body is simply a string. Essentially, that takes away the ability to do any sort of streaming responses with Mango. Even if it were to expose a ResponseWriter, anything written to it would clash with the returned values, since they're only returned at the end of the function, after the calls to ResponseWriter have already been made.

That's bad. Whether you need another interface on top of existing net/http is a matter of taste, but even if you do, it should not take functionality away. An interface that is nicer to code with, but takes away important functions is clearly inferior.

The right way

A popular "micro" web framework web.go deals with this in a simple, yet much better way. Its handlers take a pointer to web.Context as an optional first argument.

type Context struct {
    Request *http.Request
    Params  map[string]string
    Server  *Server
    http.ResponseWriter
}
// ...
func hello(ctx *web.Context, val string) string {
    return "hello " + val 
}

web.Context does not take the standard HTTP handler structures away. Instead, the *Request argument is available as a struct member, and Context embeds the original ResponseWriter, so the required ResponseWriter methods come along with it. The string you return from the function (if any) is simply appended to the response.

That is a good design choice and I think it goes well with Go's philosophy. Even though you get a nice higher-level API, you don't have to sacrifice the low-level control over the request handling.

Start now

Go's HTTP library infrastructure, despite growing rapidly, still has some gaps left to fill. But the last thing we need is fragmentation and annoying incompatibilities due to poor design and abstractions that actually take important functionality away. Embracing and supporting the standard Go HTTP facilities is, in my humble opinion, the straightest way to having functional and modular 3rd-party HTTP tools.

GamerGate (Doesn’t) Speak!

 ∗ andlabs's blog

I’m surprised by the lack of responses I’ve received. I’m also unsurprised, since it seems like I have to actually try to get GamerGate’s attention. I won’t bother closing comments on the other post since there was only one. So without further ado…

Willy
‘I believe that those who participated in the August tirade with Zoe’s ex are the same people who attacked her on Steam’

You believe it, eh? Evidence stronk!

There are several ways I could respond to this. I’ll use them all.

  1. Sorry, I don’t answer to “stronk”. *walks away*
  2. You ask me to raise evidence when your whole movement is about anonymity and untraceability? Wow, what a Batman gambit!
  3. Go find the Steam Greenlight page for Depression Quest and look at all the people who bashed her there; see if they’re a member of the GamerGate group(s) on Steam.

‘Yes you are. Just ask Brianna Wu.’

What? I can’t tell if this is a joke or your pre-supposed opinions are really blinding you this much.

I cannot respond to this since I do not know what your stance is, but I have a hypothesis, discussed later.

‘Isn’t it weird that you have to go around constantly defending criticism of GamerGate?’

No, it’s not weird at all. Unjustified criticism of a whole movement for the actions of a few should be defended against, since it drowns out the message of better games journalism with a misrepresentation of everything. Sure, GamerGate is an umbrella that harbours vicious worms. But nobody else condones it.
ISIS are a small section of Muslims, and some right-wingers like to think that all Muslims are terrorists because of them. Yet an imam defending his beliefs wouldn’t be met with accusations of ‘Why are you so quick to defend against blatantly false accusations? Huh? HUH?’

Because ISIS didn’t come first. Islamist terrorism erupted within the past century, for a variety of reasons I am not qualified to discuss. Islam is, instead, traditionally known as a rather peaceful religion, and in the centuries that the religion has existed, the Arab people have been responsible for some of the most important technological innovations in human history.

GamerGate, on the other hand, was started by people in the Zoe Quinn hate camp who decided to shift focus and attack games journalism instead. You were born of bad blood, and the attacks on Brianna Wu and Anita Sarkeesian, which I must say were done in your name (and I can show proof of the latter), show that the bad blood is still there.

So yeah. It’s quite obvious you’re presuming GG is toxic and that attempts to defend its reputation are ‘whitewashing’, which clouds your argument with all sorts of presumption and ‘belief’ backed up with nothing.

There’s a difference between defending something’s reputation and ignoring its actions.

I specifically chose the wording “I believe” because it is what I believe, not what I can easily confirm. But the two things in my original post that I said were beliefs were both backstory and origin-related; they are separate from the actual timeline.

THE BONUS TWITTER CHAPTER

There were two tweets from one “@Captain_Biglou” but I can’t tell if they were for or against GamerGate so I’ll omit them here.

@thepestoa
https://www.youtube.com/watch?v=ipcWm4B3EU4 – #Gamergate in 60s
https://www.youtube.com/watch?v=tRaAJBKmi5I – Longer Video

First I’ll point out that this was in response to a tweet I made asking if Twitter tailored the “Trends” list based on your recent activity, so I’m not sure what the point is here.

The 60 Seconds video talks about integrity in the games industry only. It makes no mention of the whole women in gaming being abused problem. Its description also repeats the whole “ALL Gamers Are Dead!!!11” misunderstanding, so now we’re just talking in circles.

The second video, which is much longer, falls flat on its face in the first minute, by claiming that Brianna made up her death threats herself. Who the fuck does that? Why would you stage having the police remove you from your home out of fear?

Criticized on the basis of strawman fallacy. #gamergate isn’t about misogyny talking about misogyny is not debating #corruption

Is it just me, or are all of you missing the point that this movement started as a branching-off of the Zoe Quinn harassment? I don’t know if you’re deliberately doing this or are just delusional.

Closing Comment: More Proof

Just this. Here’s a whole Storify full of proof. Enjoy

I want to make a page that collects things like the above and other images and news articles together in a giant “Evidence stronk!” page.

But at this point I’m just starting to wonder if this is all going to be hopeless in the end…

Postscript: To Those Who Still Believe in Journalistic Ethics

To those of you who think you are being unfairly silenced or ignored: we are trying to convince you that you are being duped into following an organization that does not actually believe any of the things you stand for. I wish I could explain this in a nicer or more thorough way, but this is all I can say.

Have a parting gift. Hopefully this will be the last current events post here for a long time.

Personalizing Git with Aliases

 ∗ A List Apart: The Full Feed

Part of getting comfortable with the command line is making it your own. Small customizations, shortcuts, and time saving techniques become second nature once you spend enough time fiddling around in your terminal. Since Git is my Version Control System of choice (due partially to its incredible popularity via GitHub), I like to spend lots of time optimizing my experience there.

Once you’ve become comfortable enough with Git to push, pull, add, and commit, and you feel like you’d like to pursue more, you can customize it to make it your own. A great way to start doing this is with aliases. Aliases can help you by providing shorthand commands so you can move faster and have to remember less of Git’s sometimes very murky UI. Luckily, Git makes itself easy to customize by setting global options in a file named .gitconfig in our home directory.

Quick note: for me, the home directory is /Users/jlembeck, you can get there on OSX or most any other Unix platform by typing cd ~ and hitting enter or return. On Windows, if you’re using Powershell, you can get there with the same command and if you’re not using Powershell, cd %userprofile% should do the trick.

Now, let’s take a look. First, open your .gitconfig file (from your home directory):

~/code/grunticon master* $ cd ~
~ $ open .gitconfig

You might see a file that looks similar to this:

[user]
  name = Jeff Lembeck
  email = your.email.address@host.com
[alias]
  st = status
  ci = commit
  di = diff
  co = checkout
  amend = commit --amend
  b = branch

Let’s look at the different lines and what they mean.

[user]
  name = Jeff Lembeck
  email = your.email.address@host.com

First up, the global user configuration. This is what Git references to say who you are when you make commits.

[alias]
  st = status
  ci = commit
  di = diff
  co = checkout
  amend = commit --amend
  b = branch

Following the user information is what we’re here for, aliases.

Any command given in that screen is prefaced with git. For example, git st is an alias for git status and git ci is git commit. This allows you to save a little time while you’re typing out commands. Soon, the muscle memory kicks in and git ci -m “Update version to 1.0.2” becomes your keystroke-saving go-to.
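Incidentally, you don’t have to edit the file by hand: git config can write an alias into your global .gitconfig for you.

git config --global alias.st status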

Ok, so aliases can be used to shorten commands you normally type and that’s nice, but a lot of people don’t really care about saving 10 keystrokes here and there. For them, I submit the use case of aliases for those ridiculous functions that you can never remember how to do. As an example, let’s make one for learning about a file that was deleted. I use this all of the time.

Now, to check the information on a deleted file, you can use git log --diff-filter=D -- path/to/file. Using this information I can create an alias.

d = log --diff-filter=D --

Let’s break that down piece by piece.

This should look pretty familiar. It is almost the exact command from above, with a couple of changes. The first change you’ll notice is that it is missing git. Since we are in the context of git, it is assumed in the alias. The second is that the path is gone: Git appends whatever arguments you pass to an alias onto the end of the expanded command, so the file name you type lands exactly where path/to/file was.

Now, with an example. git d lib/fileIDeleted.js. d is not a normal command in git, so git checks your config file for an alias. It finds one, expands it, and appends your argument, running the equivalent of calling git log --diff-filter=D -- lib/fileIDeleted.js.

Now you never have to remember how to do that again. Time to celebrate the time you saved that would normally be spent on Google trying to figure out how to even search for this. I suggest ice cream.

For further digging into this stuff: I got most of my ideas from Gary Bernhardt’s wonderful dotfiles repository. I strongly recommend checking out dotfiles repos to see what wild stuff you can do out there with your command line. Gary’s is an excellent resource and Mathias’s might be the most famous. To learn more about Git aliases from the source, check them out in the Git documentation.

Nishant Kothary on the Human Web: The Politics of Feedback

 ∗ A List Apart: The Full Feed

“Were you going for ‘not classy’? Because if you were, that’s cool. This isn’t classy like some of your other work,” said my wife, glancing at a long day’s work on my screen.

“Yep. That’s what I was going for!” I responded with forced cheer. I knew she was right, though, and that I’d be back to the drawing board the next morning.

This is a fairly typical exchange between us. We quit our jobs last year to bootstrap an app (for lack of a better word) that we’re designing and building ourselves. I’m the front-end guy, she’s the back-end girl. And currently, she’s the only user who gives me design feedback. Not because it’s hard to find people to give you feedback these days; we all know that’s hardly the case. She’s the only one providing feedback because I think that’s actually the right approach here.

I realize this flies in the face of conventional wisdom today, though. From VCs and startup founders emphatically endorsing the idea that a successful entrepreneur is characterized by her willingness—scratch that: her obsession with seeking out feedback from anyone willing to give it, to a corporate culture around “constructive” feedback so pervasive that the seven-perpendicular-lines-drawing Expert can have us laughing and crying with recognition, we’ve come to begrudgingly accept that when it comes to feedback, the more, the merrier.

This conventional wisdom flies in the face of some opposing conventional wisdom, though, that’s best captured by the adage, “Too many cooks spoil the broth.” Or if you’d prefer a far more contemporary reference, look no further than Steve Jobs when he talked to Business Week about the iMac back in ’98: “For something this complicated, it’s really hard to design products by focus groups. A lot of times, people don’t know what they (customers) want until you show it to them.”

So which is it? Should we run out and get as much feedback as possible? Or should we create in a vacuum? As with most matters of conventional wisdom, the answer is: Yes.

In theory, neither camp is wrong. The ability to place your ego aside and calmly listen to someone tell you why the color scheme of your design or the architecture of your app is wrong is not just admirable and imitable, but extremely logical. Quite often, it’s exactly these interactions that help preempt disasters. On the flip side, there is too much self-evident wisdom in the notion that, borrowing words from Michael Harris, “Our ideas wilt when exposed to scrutiny too early.” Indeed, some of the most significant breakthroughs in the world can be traced back to the stubbornness of an individual who saw her vision through in solitude, and usually in opposition to contemporary opinion.

In practice, however, we can trace most of our failures to a blind affiliation to one of the two camps. In the real world, the more-the-merrier camp typically leaves us stumbling through a self-inflicted field of feedback landmines until we step on one that takes with it our sense of direction and, often more dramatically, our faith in humanity. The camp of shunners, on the other hand, leads us to fortify our worst decisions with flimsy rationales that inevitably cave in on us like a wall of desolate Zunes.

Over the years I’ve learned that we’re exceptionally poor at determining whether the task at hand calls for truly seeking feedback about our vision, or simply calls for managing the, pardon my French, politics of feedback: ensuring that stakeholders feel involved and represented fairly in the process. Ninety-nine out of a hundred times, it is the latter, but we approach it as the former. And, quite expectedly, ninety-nine out of a hundred times the consequences are catastrophic.

At the root of this miscalculation is our repugnance at the idea of politics. Our perception of politics in the office—that thing our oh-so-despicable middle managers mask using words like “trade-off,” “diplomacy,” “partnership,” “process,” “metrics,” “review” and our favorite, “collaboration”—tracks pretty closely to our perception of governmental politics: it’s a charade that people with no real skills use to oppress us. What we conveniently forget is that politics probably leads to the inclusion of our own voice in the first place.

We deceive ourselves into believing that our voice is the most important one. That the world would be better served if the voices of those incompetent, non-technical stakeholders were muted or at the very least, ignored. And while this is a perfectly fine conclusion in some cases, it’s far from true for a majority of them. But this fact usually escapes most of us, and we frequently find ourselves clumsily waging a tense war on our clients and stakeholders: a war that is for the greater good, and thus, a necessary evil, we argue. And the irony of finding ourselves hastily forgoing a politically-savvy, diplomatic design process in favor of more aggressive (or worse, passive-aggressive) tactics is lost on us thanks to our proficiency with what Ariely dubs the fudge factor in his book The (Honest) Truth About Dishonesty: “How can we secure the benefits of cheating and at the same time still view ourselves as honest, wonderful people? As long as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvelous human beings. This balancing act is the process of rationalization, and it is the basis of what we’ll call the fudge factor theory.”

Whether we like it or not, we’re all alike: we’re deeply political and our level of self-deception about our own political natures is really the only distinguishing factor between us.

And the worst part is that politics isn’t even a bad thing.

On the contrary, when you embrace it and do it right, politics is a win-win, with you delivering your best work, and your clients, stakeholders, and colleagues feeling a deep sense of accomplishment and satisfaction as well. It’s hard to find examples of these situations, and even harder to drive oneself to search for them over the noise of the two camps, but there are plenty out there if you keep your eyes open. One of my favorites, particularly because the scenarios are in the form of video and have to do with design and development, comes in the form of the hit HGTV show Property Brothers. Starring 6'4" identical twins Drew (the “business guy” realtor) and Jonathan (the “designer/developer” builder), every episode is a goldmine for learning the right way to make clients, stakeholders, and colleagues (first-time home owners) a part of the feedback loop for a project (remodeling a fixer-upper) without compromising on your value system.

Now, on the off-chance you are actually looking for someone to validate your vision—say you’re building a new product for a market that doesn’t exist or is already saturated, or if someone specifically hired you to run with a radical new concept of your own genius (hey, it can happen)—it’ll be a little trickier. You will need feedback, and it’ll have to be from someone who is attuned to the kind of abstract thinking that would let them imagine and navigate the alternate universe that is so vivid in your mind. If you are able to find such a person, paint them the best picture you can with whatever tools are at your disposal, leave your ego at the door, and pay close attention to what they say.

But bear in mind that if they are unable to see your alternate universe, it’s hardly evidence that it’s just a pipe dream with no place in the real world. After all, at first not just the most abstract thinkers, but even the rest of us couldn’t imagine an alternate universe with the internet. Or the iPhone. Or Twitter. The list is endless.

For now, I’m exhilarated that there’s at least one person who sees mine. And I’d be a fool to ignore her feedback.

Only a poor student of history could fail to notice history's cycles. The future can't ...

 ∗ iRi

Only a poor student of history could fail to notice history's cycles. The future can't be foretold in detail, but asking the question "Where are the cycles taking us?" gives you a better chance of guessing general shapes than anything else I know.

So it's easy for a student of history to look out at the United States and guess that we're approaching a libertine peak, and that over the next couple of decades we should expect to see the pendulum swing away from the wild excesses of the Baby Boomers back in a more "conservative" direction.

But at my age, I've never lived through a shift. So had I guessed how the counter-libertine shift would occur last week, I would have guessed a gradual cultural waning of the libertines and a gradual cultural waxing of those of a more conservative bent, with the advocates not changing their own views but their relative influence changing over time.

Read the rest...

GamerGate

 ∗ andlabs's blog

Facts are simple and facts are straight
Facts are lazy and facts are late
Facts all come with points of view
Facts don’t do what I want them to
Facts just twist the truth around
Facts are living turned inside out
Facts are getting the best of them
Facts are nothing on the face of things
- Talking Heads, “Crosseyed and Painless”

Okay, I didn’t want to do this, but I’m postponing talking about coding for another two blog posts.

In the month and a half since I posted this, the situation has gone waaaaaay downhill. People have been driven out of their homes, major corporations have stopped working with publications, lots and lots of black-hat hacking, and the like. And the people who are doing all this are doing so while attempting to silence anyone who dares say anything other than high praise for a movement that claims to support journalistic integrity while everyone else can see the real goal (to remove women from the video game industry).

So let’s get the facts straight.

But this isn’t about Zoe Quinn anymore!

Oh yes it is. It was always about Zoe Quinn.

For those of you unfamiliar, Zoe Quinn is an independent game designer who has been accused of doing some awful things. The first accusation was made by her ex-boyfriend of her sleeping with several video game journalists for review points (as far as I can tell, anyway). But rather than reveal this information in, say, a periodical publication, or on Facebook, he waltzed into the 4chan Internet community. Why? Simple: he wanted to hurt Zoe in more than just reputation, and he knew 4chan could do it. They’ve done it before, under the guise of their ad-hoc Anonymous network of volunteer hackers.

My point of view is simple: last year, Zoe tried to put her game Depression Quest (an adventure game that aims to guide players through the life of a person suffering depression) on the Steam digital game market, and was met with a barrage of misogynistic attacks against her for trying. I believe that those who participated in the August tirade with Zoe’s ex are the same people who attacked her on Steam, and were just waiting for something that they could use to say “aha! See guys, she did something wrong after all! Now our misogynistic attacks are completely justified!”

Which any sane person will tell you is wrong.

But it happened anyway. And this time, it didn’t stop with just Zoe; these attackers went after Zoe’s (female) friends as well.

And ever since, the number of targeted women has increased. At least one female journalist has quit the gaming industry over this. And a few days ago, another indie game developer, Brianna Wu, was forced to leave her home after receiving death threats.

All right, so I explained that misogyny is still rampant, but what about Zoe Quinn? Even if this wasn’t the case, people on the GamerGate front are still posting infographics that try to compare Zoe to Soviet dictators. Clearly you still care about her.

But GamerGate isn’t about misogyny, it’s about journalistic integrity!

There are two ways I could respond.

Had this been last month, I would have said “wrong”. Before Zoe revealed she was spying on her core attackers, the e-violence against women was at its peak. It hasn’t disappeared, mind (see what I said about Brianna Wu above), but it has diminished enough that the people doing it are resorting to a different tactic, described below.

Now, I must respond with “too bad”.

As I just went to great lengths explaining, this whole campaign started because some people decided to be misogynists against a woman they just found out was a cheater. And we’ve reached the point where several major media outlets (yes, even though Jezebel is a partner to your much-loathed Kotaku under the ownership of Gawker Media, they are still a major media outlet because of said ownership) are calling the movement a misogynist hate group.

The public thinks you are misogynist. No amount of trying to separate yourselves from the abusers will remedy that at this point. Hell, you had your chance to do so when we (the people not on your side) tried to break apart (vis a vis #NotYourShield and other such movements). But instead, people in the movement followed us out, claiming that we were trying to “break up the hashtag”. And now a lot of GamerGate supporters are trying to convince people that they do not condone the actions of the abusers! Which leads me to my next point…

But we didn’t do it! 8chan did it! We’re not 8chan!

Yes you are. Just ask Brianna Wu. Oh wait, you drove her out of her home.

Moving right along…

(This one is to our allies.) But this will all just blow over! It’s just like the Occupy movement and any other silly Internet uprising!

No, we saw this already. In addition to all of the above, a major corporation, namely Intel, pulled a marketing partnership with games journalism website Gamasutra because GamerGate was louder than the rest of us and convinced them to.

This was ostensibly in response to this post by Leigh Alexander, which I’m pretty sure the entire world misread as being about all video game players. It’s not. It’s about the toxic part of the gamer community; the one that’s responsible for all this madness. It’s pretty obvious that’s what she meant the moment she first says “it’s about getting mad on the Internet”, and it’s pretty evident at the end too.

And now (linked earlier) people are threatening to shoot up a university if Anita Sarkeesian speaks out. How can you say this isn’t having an effect?

(Back to attacking GamerGate talking points.) But we’re being misrepresented! We are trying to stop corruption in games journalism; why won’t anyone believe us?

Again, I have two responses, one to each side of the GamerGate camp.

First, to the attackers: congratulations! You are on a whitewashing campaign! How typical; how expected. In fact, I believe this whole lie-fest about corruption in games journalism (see previous section) is just an attempt to cover up your misogyny to the general public! Nice try, but I was never fooled.

Now, to those of you who support journalistic integrity and don’t want to associate with the attackers anymore: I’m going to once again paraphrase icculus:

Isn’t it weird that you have to go around constantly defending criticism of GamerGate? Isn’t it weird that you have to follow Twitter users around like hawks, commenting whenever they say something disapproving? Isn’t it weird that Wikipedia presently has 10 pages and counting of people arguing that their article on GamerGate doesn’t line up with what they claim is the truth?

I think it’s weird. I hope you do too.

But Pietro, it doesn’t matter! This is the truth about GamerGate!

I’m going to quote a friend from a heated discussion in a private IRC channel earlier this morning:

[12:22] <Xkeeper> there are basically 4 facts about gg:
[12:23] <Xkeeper> - stop talking about it
[12:23] <Xkeeper> - it is a documented hate campaign
[12:23] <Xkeeper> - there is nothing good about it
[12:23] <Xkeeper> - stop talking about it 
(when told the person he is talking to thinks he's wrong)
[12:23] <Xkeeper> you know what:
[12:23] <Xkeeper> tell that to the people who are run out of their fucking homes, [person].
[12:23] <Xkeeper> tell that to the people who have had nude images posted, [person].
[12:24] <Xkeeper> tell that to the people who receive fucking death threats about how their families are going to be murdered.
[12:24] <Xkeeper> no.
[12:24] <Xkeeper> there is nothing to talk about.
[12:24] <Xkeeper> I am saying:
[12:24] <Xkeeper> this is the end of the discussion.
[12:24] <Xkeeper> if anyone seriously believes there is any good in gg: just get out now and save me the trouble.

(Allies again.) Pietro, what now?

We’re losing.

First, no one is saying anything about all the abuse. That’s the reason why I made this post: to talk. And I’m not done talking; I’ll explain in a bit.

But please, go out and talk. Spread the facts around. Convince people that siding with GamerGate when all you care about is journalistic integrity is the same as siding with a hate group and thus is committing political suicide.

Don’t be afraid to speak out. (And I don’t mean saying just “this is bad”; I mean actual protest speech, like what this post aims to be.) I have nothing to lose. I know some people have received threats against their incomes and families and lives for speaking out. Taking up a cause like this comes at a cost. If you aren’t willing to pay the price, then don’t bother showing up.

That Intel was enticed to pull their support from Gamasutra shows that GamerGate has a voice and they’re not afraid to use it. But if we truly outnumber them, we need to do the same. If you’re an avid Gamasutra reader, call up Intel and tell them to bring their advertising back. And if another company starts being enticed, pull them away too. They listen to their customers; it’s how they get their profit. Don’t pretend we’re outnumbered.

Epilogue: Come At Me, Bro

Against my better judgment, I will keep this open to comments. I will keep comments open for an unspecified amount of time. After that point, I will collect and respond to those of you who still consider yourselves part of the GamerGate movement and still want to whitewash me for saying what I’ve said today. I’m waiting, and I’m ready. I won’t edit the comments to say “fart fart fart”, but I stand with Matthew Garrett for daring to do so.

My Handbook – Environment

 ∗ journal

I’ve been doing a talk this year called ‘My Handbook’. It’s a rather silly little title for a bunch of principles I work to. They are my ‘star to sail my ship by’, and I’m going to start documenting them here over the coming months, starting with Environment – a post about how, for me, design is more about the conditions in which you work.

I’d describe myself as an armchair mountaineer. I enjoy reading about man’s exploits to get to the roof of the world, or to scale precipitous walls under harsh conditions for no other reason than the same reason George Mallory said he was climbing Everest: ‘Because it’s there’.

In any expedition to a mountain, great care and consideration is taken over the kit, the climber’s skill, the team around them, the communications, the list is seemingly endless. But, the biggest single factor in a successful trip are the conditions of the mountain. Will the mountain let them up. And back down again. Assessing the condition of a mountain takes experience, time and careful consideration; it may be snowing, too warm, too much snow on the ground, too cold, too windy. The list of variables is endless, but the climber considers all of them, and if necessary moves to adjust the route, or simply doesn’t attempt the climb.

Now, let’s shift to design – not necessarily web design, but commercial design of almost any kind. Let’s say you take a brief for a project, you begin the work, and suddenly, mid-project, other stakeholders come on board and start to comment on your work and give direction on strategy that was unknown to you. We’ve all had projects like those, right? Suddenly, your work becomes less about what you may think of as ‘design’, and more about meetings, project management, account management, sales, production work. You know, all of those things that have a bad reputation in design. Meetings are, apparently, toxic. Well, I’ve started to look at this in a different light over the past few years.

As I’ve grown as a designer, like many, I’ve found myself doing less ‘design’. Or, rather, less of what I thought was design. Five years ago, I thought design was creating beautiful layouts, or building clean HTML and CSS, or poring over typefaces for just that right combination. Now, this is design. But, so are meetings.

Experienced designers spend time making the environment right whilst they are doing the work. Because, frankly, you can push pixels around forever, but if the conditions aren’t right for the work to be created and received by the client in the right way, the work will never be as good as it could be. But, what do I mean by ‘conditions’? Here are a few practical things:

  • The physical space: I see a large part of my job as making the environment in the studio as conducive as possible for good work to happen. That means it’s relaxed, and up-beat. Happy people make good things.

  • A Shit Umbrella: It’s my job to be a filter between client and my team on certain things. Someone recently described this as being a ‘Shit Umbrella’.

  • Politics: Wherever you get people, you get politics – because people are weird. I spend a lot of time on client projects trying to traverse a landscape of people to understand motivations, problems, history or direction. Once you understand the landscape, you can assess, and work to change, the conditions.

  • People first, process second: We fit the processes to the people rather than the other way around. Our team runs things that works for us, but that’s the result of a lot of trying & discarding. Like tending a garden, this is a continual process of improvement.

  • Just enough process: I’m a firm believer in working to the path of least resistance. Being in tune with how people work, and changing your processes to suit, helps create a good environment. But we ensure we impose just enough structure. Too much, and it gets in the way. This doesn’t work if you don’t do the previous point, in my experience.

  • Talk. Do. Talk.: It really is true that the more we talk, the better work we do. We talk in person, on Slack, on Skype, on email. Just like meetings, there is an industry-wide backlash against more communication because the general consensus is we’re getting bombarded. But recently, we’ve been working to change that perception in the team so that talking, and meetings, and writing is the work. It’s tending the garden. Making the conditions right for good work to happen.

  • Making things is messy: This is actually another point from my ‘handbook’. Since the 1950s, clients and designers have been sold a lie by advertising. Design generally isn’t something that happens from point A to Z with three rounds of revisions. It’s squiggly, with hundreds or thousands of points of change. A degree of my time is spent getting people – clients, internal clients, the team – comfortable with the mess we may feel we’re in. It’s all part of it.

I see all of this as design work. It’s also my view that much of the dysfunction in everything from large agencies to other organisations comes from this work not being done by designers, because they don’t see it as the work. It’s being done by other people, like account managers, who may not be best placed to get the conditions right. Designers need to take responsibility for changing the environment to make their work as good as it can be. Sometimes, that means sitting in a board room, or having a difficult discussion with a CEO.

Mountaineering is so often not about climbing. You may do some if the conditions are right. Design is so often not about designing beautiful, useful products. But, you may do some if the conditions are right.

Ingredients

 ∗ journal

Jeremy wrote something special yesterday. That’s not unlike Jeremy, but this blog post in particular struck a chord with me.

A couple of weeks ago, Google Chrome toyed with the idea of removing most of the URL, on the grounds that it’s a “power user” feature, in favour of a simple, easy-to-understand signpost of where the user is. Jeremy’s point is that there is a deeper warning here about ease of use.

… it really doesn’t matter what we think about Chrome removing visible URLs. What appears to be a design decision about the user interface is in fact a manifestation of a much deeper vision. It’s a vision of a future where people can have everything their heart desires without having to expend needless thought. It’s a bright future filled with seamless experiences.

I read Jeremy’s post and kept re-reading it. My instant thought was of food.

I enjoy cooking – have done for a decade – and the more I do, the more I care about ingredients. Good produce matters. Now, I’m not talking about organic artisan satsumas here, but well-grown, tasty ingredients; in season, picked at the right time, prepared in the right way. The interesting thing is most people who eat the resulting dish don’t think about food in this way. They experience the dish, but not the constituent parts. It’s the same way some people experience music – if you play an instrument, you may hear basslines, or a particular harmony. If you enjoy cooking, you appreciate ingredients and the combination of them.

But ingredients matter.

And they matter for websites, too. The URL is an ingredient. Just because a non-power user has no particular need for a unique identifier doesn’t mean it’s any less valuable. They just experience the web in a different way than I do.

Without URLs, or ‘view source’, or seeing performance data – without access to the unique ingredients of websites – we’ll be forced into experiencing the web in the same way we eat fast food. And we’ll grow fat. And lazy. And stop caring how it’s grown.

As Jeremy says: Welcome aboard the Axiom.

A new beginning for Five Simple Steps

 ∗ journal

I’m so happy to tell you that Five Simple Steps has been acquired by Craig Lockwood and Amie Duggan, the dynamic duo behind the Handheld conference, The Web Is, FoundersHub and BeSquare. Before I tell you again how thrilled I am, let me take you way back to 2005…

Next year, it will be ten years since I wrote a blog post called Five Simple Steps to better typography. The motivation behind the post was simple: the elements of good typesetting are not difficult, and, with a few simple guidelines, anyone could create good typographic design. That one article became part of a small series of five posts: five simple steps, with each article containing five simple steps. It was a simple formula, but it turned out pretty well.

Soon after that initial post, I wrote Five Simple Steps to designing grid systems for the web, then the same for colour theory. This was now 2006 and I’d just left my job at the BBC. It was a dreary October day and, whilst sat in a coffee shop in Bristol after just visiting one of my first freelance clients, I was talking over email to the Britpack mailing list about compiling my posts into a book. In 2008, Emma and I hired my brother to help me design it and in early 2009, we finally released it. And with the release of that first book, Five Simple Steps Publishing was born. But we didn’t know it at the time.

Over subsequent months and years other authors saw what we produced and wanted us to publish their books. Before we really knew it, we were a publisher with a catalogue of titles and providing a uniquely British voice to the web community. But publishing is tough. As we found out.

All over the world, publishers’ profits are being eroded; from production costs to cost-difference in digital versions. And – except for a couple of notable companies – you see it in the physical books that were being produced for our customers by competitors: terrible paper quality, templatised design, automated eBook production. Everywhere, margins are being squeezed, and the product really suffers.

Our biggest challenge was that Five Simple Steps started as a side project, and always stayed that way. Over time, we just couldn’t commit the time and money it needed to really scale. We had so much we wanted to do – there was never any shortage of great authors who wanted to write a book – but we could never find the time and energy when we had to run a client services business. Oh, and also during this time, Emma and I had two children. Running and growing two businesses is somewhat challenging when you’re being thrown up on and have barely four hours’ sleep a night.

So about a year ago, Emma and I sat in our dining room and faced a tough decision: wind down Five Simple Steps, sell it, or give it one more year. We chose the latter. It was a tough year, but Emma, Nick and the team worked to make the Pocket Guide series a great success. So much so, it required tons of work and compounded the problem we had: Five Simple Steps needed to take centre stage rather than be a side project.

A month ago today, Emma and I announced that Five Simple Steps was closing. The team were joining Monotype, and Five Simple Steps could no longer be sustained as a side project. The writing had been on the wall for a while, but the stop was abrupt for us, the authors and the team. We tried to find the right people to take the company forward before the sale, but couldn’t. Luckily, immediately following the announcement, a few people got in touch to see if they could help. Two of them said some really interesting things and got us excited about the possibilities: Craig Lockwood and Amie Duggan.

Craig and Amie live locally in Wales. They run conferences: Handheld conference and The Web Is conference later this year. They also run a co-working space in Cardiff called FoundersHub. They have a background in education and training, and together with their conferences and BeSquare – a conference video streaming site – they have the ecosystem in place to take Five Simple Steps to places we could only dream of. As you may gather, we’re chuffed to bits that Five Simple Steps is going to live on. Not only that, but it’s in Wales and in the competent hands of friends who we know are going to give it the attention it deserves.

Emma and I can’t wait to see where it goes from here.

Conference speakers, what are you worth?

 ∗ journal

Over the past couple of days, there have been rumblings and grumblings about speaking at conferences. How, if you’re a speaker, you should be compensated for your time and efforts. My question to this is: does this just mean money?

I’ve been lucky enough to speak at quite a few conferences over the years. Some of them paid me for my time, some of them didn’t. All of them – with the exception of any DrupalCon – paid for my travel and expenses.

When I get asked to speak at a conference, I try to gauge what type of conference it is. Is it an event with a high ticket price and the potential for large corporate attendance? A middle-sized conference with a notable lineup? Or is it a grassroots event organised by a single person? In other words, is it ‘for-lots-of-profit’, ‘for-profit’, or ‘barely-breaking-even’? This will not only determine any speaker fee I may have charged, but also other opportunities that I could take as compensation instead of cash.

Back to bartering

When I ran a design studio, speaking at conferences brought us work. It was our sales activity. In all honesty, every conference I’ve spoken at brought project leads, which sometimes led to projects, which more than compensated me for my time and effort if it kept my company afloat and food on the table for myself and my team. The time away from my family and team was a risk I weighed against that return. Conference spec-work, if you will.

In addition to speculative project leads for getting on stage and talking about what I do, I also bartered for other things instead of cash for myself or my company. Maybe a stand so we could sell some books, or a sponsorship deal for Gridset. Maybe the opportunity to sponsor the speaker dinner at a reduced rate. There was always a deal to be done where I felt I wasn’t being undervalued, I could benefit my company, product or team, but still get the benefit of speaking, sharing, hanging out with peers and being at a conference together.

It’s about sharing

If every speaker I knew insisted on charging $5000 per gig, there would be a lot fewer conferences in the future, apart from the big, corporate, bland pizza-huts of the web design conference world.

My advice to anyone starting out speaking, or maybe a year or so in, is to have a think about why you do it. If you’re a freelancer, let me ask you: is speaking at a conference time away from your work, to be billed at your hourly rate? Or is it an investment in yourself, your new business opportunities, and the opportunity to share? Of course, the answer to this is personal, and – for me – depends on what type of conference it is.

This community is unique. We share everything we do. We organise conferences to do just that. Most of the conference organisers I know come from that starting point, but then the business gets in the way. Most speakers I know, get on stage from that starting point, but then the business gets in the way.

There’s nothing wrong with valuing yourself and your work. If speaking is part of your work, then you should be compensated. But next time you’re asked to speak by a conference, just stop for a moment and think about what that compensation should be.

Collaborative Moodboards

 ∗ journal

Creating moodboards is something I was taught from a very early age. In primary school, they were a simple mixed-media way of expressing a form of an idea.

The thing I find interesting about moodboards is not the end result, but the process of creation. Watching my children make posters from torn-up bits of newspaper and magazines is really no different to watching my clients do it. As with watching other activities – such as affinity sorting, or depth interviewing – it’s the listening that I find interesting. Every moodboard tells a story, and as a designer, listening to your clients tell that story as they make them can be very insightful.

Making moodboards for you, not for me.

I have to be honest: I don’t make moodboards for myself. Not physical ones, anyway. When I familiarise myself with a brand, or make some suggestions for design context, I always try to place those things in a context the client understands. This is where design visuals are important. They are almost unsurpassed in their immediacy of understanding for a client, because they show the design in context. Of course, replace that with a high-fidelity prototype and you get the same thing. But I want to step back a little here and talk about when I find creating moodboards valuable.

Let me ask you a question: how many times have you heard this from a client?

‘I’m not so sure I think the design is heading in the right direction’. ‘It needs more pop’. ‘It’s just not us’.

These are all because a client cannot communicate about design at the same level we do. So, it’s abstract. Either that, or:

‘I don’t like that green’. ‘That button is great! But, it needs more pop’. ‘The logo needs to be bigger’.

Then things get subjective and extremely detailed. Why? Because these are approachable things people can comment on. More often than not, these comments are a failing that should rest firmly on our shoulders. We need to give our clients the words and understanding to express their thoughts. Either that, or we tease out these issues earlier in the process, in a way that is abstracted from the design work that will come later. This is where I feel collaborative moodboards work extremely well.

So, why would you want to try and run one of these sessions?

  1. When a client’s brand is repositioning, sometimes we’re brought in very early on the back of a strategy. No tactical work has been done. So, it’s up to us to navigate the waters of implementing the branding strategy. Making design work on the back of a few bullet points in a slide deck can be challenging.

  2. Usually in a discovery process, I will get a few red flags from speaking with a client. Generally these come through when talking about competitors, or things they like.

  3. When I get conflicting stories from different stakeholders. The homepage team has a completely different view on the branding than the marketing team.

  4. When branding needs evolving. A lot of organisations have mature branding collateral for print and advertising. Not so much for web (still!), so these are useful exercises to start to tease out differences or how they can align to the web in future.

I’m sure there are more, but those are a few I can think of off the top of my head for now.

How to run a collaborative moodboard session

  1. Get the stakeholders in a room. 3-4 is ideal. 9 is way too many.
  2. Bring with you lots of magazines, newspapers, flyers – just physical paper stuff – that you can all cut up.
  3. Glue. Lots of glue. One tub each.
  4. Large (A1) pieces of paper.

The thing about this that I find interesting, from a people-watching/behaviour perspective, is that the act of cutting things up and sticking them down is something most of these people won’t have done since school. The process involves collaborating, getting stuck in and discussing the work. I find it a great leveller for the client team (hierarchy quickly disappears), and a very good ice breaker.

You set the brief for the morning/afternoon (all day is generally too long for the making part of this process). The idea is to find content that communicates part of the visual story of the product – and that could be anything: colour, type, texture, image – and stick it down.

For the agency team, it’s our job to ask questions throughout the day. To tease out the insights while people are in the moment of choice – before they’ve had a chance to post-rationalise. And you know what? Answers like ‘I just really like this green’ are great, because our next question is ‘Why?’, and that forces rationale. Without us being there asking that, post-rationalising and ‘business stuff’ almost always get in the way of finding the truth behind those choices.

Quite often, just like cave paintings, moodboards are an artefact of a conversation. We often discard them at this point because they have served their purpose. We have the insights. The marketing team are best buddies with the homepage team. We’re all heading in the same direction.

So, next time you start a project and you need some steer on branding, or reconciling differences of opinion on a client team, try collaborative moodboarding as a way of coming together to try and solve the problem.

Responsive Web Design – Defining The Damn Thing

 ∗ journal

Unlike many design disciplines, web design goes through cyclical discussions about how to define itself and what it does – anyone who’s ever spent any time in the UX community will know about this.

I was prompted to write about this from reading Lyza’s column on A List Apart, and Jeffrey’s follow-up post this weekend.

In 2010, I attended An Event Apart in Seattle. During that show, I saw three or four presentations – from Eric Meyer, Dan Cederholm, Jeremy, and of course, Ethan. All of them, independently, talked about how, using media queries and CSS, we could change how content was presented within a fluid layout. It was a perfect storm, and indicative of the thinking that led Ethan to write – and A Book Apart to publish – Responsive Web Design a year later. The rest, they say, is history.

Responsive Web Design had a simple formula: fluid grids, media queries and flexible images. Put them all together, and your web product will be responsive. As Jeffrey said:

If Ethan hadn’t included three simple executional requirements as part of his definition, the concept might have quickly fallen by the wayside, as previous insights into the fluid nature of the web have done. The simplicity, elegance, and completeness of the package—here’s why, and here’s how—sold the idea to thousands of designers and developers, whose work and advocacy in turn sold it to hundreds of thousands more. This wouldn’t have happened if Ethan had promoted a more amorphous notion. Our world wouldn’t have changed overnight if developers had had too much to think about. Cutting to the heart of things and keeping it simple was as powerful a creative act on Ethan’s part as the “discovery” of #RWD itself.

The idea of responsive design has taken a few years to go from cubicle to board room. But now it is a project requirement coming directly from there. For the past eighteen months, at Mark Boulton Design, we’ve seen it as a requirement on RFPs. And with that, it brings a whole other set of problems. Because what does it mean? Hence, we have to Define The Damn Thing all over again. And recently, to be honest with you, I’ve stopped doing it. Because, depending on who you speak to, responsive web design has come to mean everything and nothing.

There are some who see it as media queries, fluid grids and scalable images. There are those who see it as adaptive content, or smarter queries to the server to make better use of bandwidth available. There are those who just see it as web design.

Me? I think it’s just like Web 2.0. And AJAX. It’s just like Web Standards (although to a lesser extent) and exactly like HTML5 (in the minds of those of you who aren’t developers) and its rather splendid branding. Responsive design has grown into a term that represents change above all else. To me, responsive design is more about a change in the browser and device landscape. A change in how people consume content. A change in how we make things for the web. And responsive design is just the term to encapsulate that change in a nice, easy solution that can get sold to a board of directors worrying about their profit and loss.

‘Responsive design is forward-thinking and means it will work on a phone, and that’s where things are headed’.

We’ve heard this line time and time again over the past couple of years. You see, responsive design is a useful term and one that will stick around for a while whilst we’re going through this change. How else do we describe it, otherwise? Web design? I don’t think so. No board member is going to get behind that; it’s not new enough.

How we work

 ∗ journal

I’ve had a few people ask me recently about how we work at Mark Boulton Design. And, truth be told, it differs slightly from project to project, from client to client. But the main point is that we work in an iterative way, with prototypes at the heart of our work every step of the way.

Work from facts AND your intuition

We always start by trying to understand the problem: the users of the website or product, the organisation and their customer strategy, the goals and needs of the project, who’s in charge and who isn’t. There’s a lot to take in during those early meetings with a client. One of the first things we do is put in place some kind of research plan: what do we need to know, and how are we going to get it?

This could be as simple as running some face-to-face interviews with existing or potential customers, coupled with a new survey. Of course, good research should provide some data on a problem, not just ‘what do you think of our website?’. Emma has written up some good, quick methods for doing this yourself.

We couple that with trying to extract the scope from the client. I say that because, half the time, we’re given a briefing document – or something similar – and most of the time that document hasn’t been written for us. It’s been written for internal management to sign off on the budget of the project. So, rather than ask for a new document, we run a couple of workshops to tease out those problems:

User story workshop

This workshop is designed to tease out the scope of the project – everything we can think of. We ask the client to write user stories describing the product. Nothing is off the table at this point and our aim is to exhaust the possibilities.

Persona / user modelling workshop

Personas have been called bullshit in UX circles for years now. Some say they pay lip-service to a process, or they’re ignored by organisations. Whatever. I think, sometimes, something like personas are useful for putting a face to that big, amorphous blob of a customer group. Maybe that’s just a set of indicative behaviours or maybe a lightweight pen-portrait of an archetypical user. The tool is not the important thing here, but how you can use something to help people think of other people. To help an organisation to think of their customers, or designers to think of the audience they’re designing for, or the CEO to think in terms of someone’s disability rather than the P&L.

What I find generally useful about running a workshop like this is that it exposes weaknesses in an organisation. If a client pays lip-service to a customer-centric approach, it will soon become very evident in a meeting like this that that’s what’s going on.

Brand workshop

This is a vital workshop for me. As a design lead on a project, I need to understand the tone of a company: from the way it talks about itself, through to the corporate guidelines. But, in my experience, that’s only half the story if you’re lucky. So much of a brand is a shared, consensual understanding in an organisation. Quite a lot of that can go unsaid. This workshop is, again, about teasing out those opinions, views and arguments.

Bonus!

The first three workshops have the added bonus of revealing who runs the show in an organisation. I make it my business to seek out – and get on side – the following people:

  • The founders / CEO. This should be a given.
  • The people with a loud mouth. It’s useful to find the people who have a loud voice and get them around to our way of thinking. Then they can shout about our work internally.
  • The people with influence. Sometimes, these are the quiet, unassuming people, but they carry great sway. If we want things done, these people need to be our friends.

That’s quite a lot of people to keep happy, but if we get these three groups on side, we find projects run a lot smoother.

Prototype your UX strategy

Leisa gave a great talk at last year’s Generate conference in London about prototyping your UX strategy. The crux of it was that it is far more efficient to demonstrate your thinking and design than it is to talk about it. If you can quickly make something, test it, iterate a bit, and then present it, you can make massive gains in cutting down on procrastination and cut through organisational politics like a hot knife through butter. Showing that something works is infinitely preferable to me over arguing about whether it would work or not.

Wherever possible, we’ve been making prototypes in HTML. It gives us something tangible and portable to work with. We can put it in front of users, or show it to a CEO on their mobile device to demonstrate something.

The right tool at the right time

I’ve spoken before about designing in the browser, or in Photoshop, or with a pencil, or whatever. Frankly, we try to use the most appropriate tool at the right time. Sometimes that’s a browser, but a client may respond dreadfully to that because they’re used to seeing work presented to them in a completely different way. Then we change tack and do something else. My feeling is that the best design tool you can use is the one that requires the least amount of work to use: be it a pencil, Photoshop or HTML.

agile not Agile

I feel that design is a naturally iterative process. We make things and then fix things as we go. Commercial design, though, has to be paid for. And so, in the 1950s, the ad industry imposed limits on this iteration – ‘you have three changes, then you must sign off on this creative’. Of course, I can understand this thinking; you can’t just hand over a blank cheque for as many iterations as you like until something does (or does not) work. But what we gained in commercial control, I’ve found we’ve definitely lost in design quality. It takes time to make useful, beautiful things.

So, from about 2009, Mark Boulton Design have been working in the following way:

  • We work in sprints that are two weeks long. We never have a deadline on a Friday. Sprints run from Monday to Monday, with a release end of play Monday.

  • ‘Releases’ are output. Sometimes code. Sometimes research. Sometimes design visuals.

  • We front-load research into a discovery sprint. This is to get a head-start and give the designers (and clients) some of the facts to work around. Organising, running and feeding back on research takes time.

  • Together with the client, we capture the scope of the project with user stories. These are not typical Agile user stories – for example, we don’t find estimating complexity and points useful in our process – but they are small, user-centred sentences that describe a core piece of the product. It could be a need, or a bit of functionality, or a piece of research data. The key point here is that, for us, they are points of discussion that are small and focussed. This helps keep us arrow-straight when we prioritise them sprint on sprint.

  • We conduct research each sprint if it’s required. This is determined by the priorities for that sprint. For example, if the priority for the sprint is focussed on aesthetics, or typography, or browser testing, then usability testing is not going to be of much use for those.

And now for some of the commercial considerations:

  • Contracts are most often fixed-price, but broken down into sprints. Each sprint has an identical price.

  • We bill as we go. The client pays a portion up-front, and that is then factored into the cost of each sprint.

  • We explain to prospective clients how we work: each sprint, we work on agreed priorities, with no detailed functional spec to work against.

  • Points. In the past, we’ve worked on Agile agreements where we would be delivering against agreed estimated points. This was to see if we could make agile web development work in a project environment. It didn’t. We found we were delivering to the points, rather than to the project. Plus, if we didn’t hit the points for that sprint, we were penalised financially.

  • Coaching our clients through this process is as challenging as coaching clients through a responsive design project. When the project is in the early-to-mid messy stages – when client preconceptions are being challenged, and the prototype is not being received well by users – it takes a strong partnership to push through it. Design is messy. Iteration, by its very nature, is about failing to some degree or another. Everyone has to get used to that feeling of things not working out the way they first thought.

  • The sticky end. When we get to the final stages of a project, we should be in a good place. The highest-priority items should be addressed, we will have buy-in and sign-off from the right people, and we should be focussed on low-priority features. But sometimes that’s not the case. Sometimes we’ve got high-priority things left over which are critical. And that’s the time when we have to go back to the client and discuss how these need to be addressed. Sometimes that’s an extra sprint or two. Sometimes it’s an entirely new contract.

What we don’t do from ‘Agile’

We don’t do:

  • Estimating tasks. We don’t assign time to design tasks. In our studio, work just doesn’t happen that way. Generally, things are a bit more holistic.

  • Tracking velocity. For the same reason as above: if we’re not measuring delivery against user stories numerically, we can’t track our velocity.

  • Retrospectives. We don’t run traditional retrospectives on sprints. Maybe this is a symptom of the close, high-communication nature of our team – we’re talking all the time anyway. We have found retrospectives a useful forum for clients to feed back on how they’re feeling about progress, but it has felt like a somewhat forced environment for doing so. So, recently, we have set points of checking in with a client to see how they’re feeling about things.

So, that’s about it. A whistle-stop tour of how we like to work. As much as possible, we’ve tried to tailor our process to what works for us, built on some useful structures that agile gives us. I guess the most important thing for us is that we’re not wedded to our processes at all. We regularly shift focus, or the way we work, to meet the needs of particular clients or projects. Just as long as we align those processes to how design naturally happens, then I’m happy.

Al Jazeera & Content shelf-life

 ∗ journal

From speaking at the phenomenal MK Geek Night All Dayer, to launching a project three years in the making for Al Jazeera, to releasing a new design language for one of the oldest universities in London, to Mark Boulton Design being nominated in four categories in the Net Awards. It’s been a busy couple of weeks.

Last week, I was up in London visiting a client when I heard that another project of ours was to be launched shortly. It was part of a project we’ve been working on for just over three years: the global design language for Al Jazeera Network digital, with the first two products being launched in Turkey and a beta of the Arabic news channel.

There is so much to talk about on a project of this scale. Here are just a few highlights:

  • Spending time with journalists and the newsroom to understand how news is reported.
  • Working with Al Jazeera during the Arab Spring; from the uprising in Egypt to Libya.
  • Course-correcting throughout the project. Responsive Design wasn’t really a thing three years ago.
  • Designing in four languages – Arabic, English, Turkish and Slavic – when the MBD team primarily speaks one.
  • Adopting an Object Oriented approach from content through to code. Modular, transferrable and scalable. It required a level of detailed thought right down to how content types were defined in the CMS.
  • Working with three development partners across three independent content management systems.

I could go on and on. And I probably will at some point. Needless to say, none of the above could have been achieved without a patient, smart and agile client-side team. Good job the Al Jazeera team are just that.

There are many buzzwords you could label this project with: content-first, responsive, atomic, OOCSS. Again, I could go on. But the one thing that was first, central and constant – through prototyping and early strategy – was good research. It was a research-first project. That probably won’t come as a surprise to some of you, given we have our own in-house researcher, Emma. What may come as a surprise, however, is the degree to which that early, research-led approach laid the foundation for a fundamental shift in how Al Jazeera thought about their content.

Content shelf-life.

Many news journalists think of their content as a few distinct types:

  • Rolling news: Typically taken straight from the wire and edited over time to fit the growing needs of the story.
  • Editorial: Longer form piece. Still highly topical and timely.
  • Op-ed: Opinion piece from a named author.
  • Feature: A story. With a beginning, a middle and an end. Long-form content, and not necessarily timely.

These can all be mapped to timeliness, both in terms of how long they take to create and in terms of their editorial shelf-life. The more timely a piece, the less time it takes to create and the shorter its shelf-life.

  • Rolling news: timely, short shelf-life.
  • Editorial: timely, long-form, short to mid shelf-life.
  • Op-ed: long-form, mid shelf-life.
  • Feature: long-form, long shelf-life.

Publication schedules are often organised around this model of creation, with journalists having several pieces of different types, in various degrees of completion, against various deadlines for different stories. This is a comfortable mental model, and one that newspapers have been arranged around for decades. But it isn’t necessarily how users of websites look for content. Users will not typically look for a type of content; they look first for the context of a story: the topic.

The new information architecture of the Al Jazeera platform has been built around a topic-first approach. But the modular content and design also allow the display of news to change rapidly as a topic or story moves through the various content types. It’s a design system, connected to a CMS, that accommodates what news naturally does. It changes.

The Design System

The whole platform is built on top of Gridset using modular design principles. The content is modular and multifaceted, designed for re-use, as is the design. For years now at Mark Boulton Design, we’ve not designed websites, but an underpinning design system with naming conventions, rules and patterns. This is particularly useful because much CMS software thinks of content objects in this way. Our systematic thinking can be applied all the way through CMS integration. Software engineers love designers giving them rules.

It’s funny, we seem to have just discovered this in web design, but many other design disciplines have been approaching their work in this way for decades. Some for centuries. Take typography, for example. The design process of creating a typographic design is systematic thinking at its purest. Designing heading hierarchies and the constituent parts of written language can be approached in an abstracted way. This is exactly the right approach when designing for other languages.

Arabic has obvious challenges for an English-speaker. Not only is it written right to left, but the glyphs are non-roman. To approach this as an English-speaker, we needed to create tools and processes to help. Words no longer look like words, but shapes of words. Page designs no longer look like familiar blocks of text, type hierarchy and colour. We saw form more than we saw function.

Just the start

Three years is a long time to work on a project. I’m so delighted to finally see the design system in the wild. For such a long time, we only saw it in prototype form, but you can only take prototypes so far. We needed to pressure-test content types, see where things broke, and adjust a hundred and one small details to make it work. All of this underpins the fact that, now the system is being rolled out, changes need to be made every day to evolve it. This is the web, after all. It’s a feature, not a bug.

Some social good

 ∗ journal

I was going to do the usual year-end wrap-up for this blog post, as I have done in previous years. But, as it’s already the start of the new year, I thought I’d set my stall out for the coming year instead. What do I want to do, rather than what have I just done?

A couple of days ago, I was reminded of a video I watched a while ago about Free Enterprise via my friend Andy Rutledge.

“Don’t Eat Your Dog: The Surprising Moral Case for Free Enterprise. Based on his best-selling book “The Road to Freedom,” AEI President Arthur C. Brooks explains how we can win the fight for free enterprise by articulating what’s written on our hearts.”

I’m always interested in how other countries’ politics, viewpoints and economics work, and this was no exception. Rather than bat down things like capitalism, I’m making a concerted effort to understand the nuance in such things.

As someone who runs a design studio, a publishing company and a web-based design tool, you could count me in the group of people who work hard for what they get. And I’m rewarded for that. I don’t expect a free ride. I don’t expect anything beyond the realms of what is offered in the country I live in (such as state health care and education – and in fact, I pay for those through my taxes; the NHS is not free). But before I disappear into a politics hole, I want to bring this back to design.

Running a design company, we charge clients for the work we do, and our customers pay to use our products and buy our books. In doing so, we create jobs, and more tax revenue for the government. But one thing I don’t agree with from the video above is that what I’m doing is a purely selfish exercise. I’m not just doing business to pay the bills, design great products for clients, and give people work to do. To me, there is more to making things than just making things.

I believe my job is not only about doing work for clients but that I have a social responsibility to make the world a better place through the work I do. Design is a powerful tool to affect social change. However small.

Let me give you an example.

You’re out for a walk at lunchtime. You come to a road crossing, and there is a family by your side waiting to cross the road. The crossing indicator is counting down the seconds, but you spot a small gap in the traffic for you to cross. You skip across the road, running in between cars, and carry on. The family is left waiting for a safe gap in the traffic.

Do you:

  1. Think that what you did was fine? It was safe for you to cross. No problem.

or

  2. Think that you should’ve waited next to the family, to build upon the good example the parents were trying to set their small children: that you wait for a safe gap in the traffic to cross?

It’s a small but important thing. And this is social responsibility. A responsibility to help the community around you, and not through just helping yourself. Next time you take on a design project, just stop for a second and think:

“beyond getting paid for this, and making my client’s business better, what is the benefit in doing this work? What is the social good?”

In addition to the work itself, ask them if you could blog about the process, or speak about the work at a conference. If it’s something you really believe in, could you offer to do it pro bono, or heavily discounted? Could you open-source the code produced? How about aspects of the design – such as icons? Could you have one of their team members sit with your team for the whole project to soak up your skills? How could you benefit the web design and development community and still get paid well?

We’re in an incredibly fortunate position as designers to create change in the world. Many people can’t. Or simply won’t. Through our products, our work, and how we talk about it, we can have a much greater benefit to society than just lining our pockets.

This is exactly what I plan on doing in 2014. Happy New Year!

Running ragged

 ∗ journal

In my fourth article for 24ways over the years, I wrote about typesetting the right rag.

One of the first little typesetting tricks I was taught – in my internship at an advertising agency all those years ago – was how to make text fit within a given space and still read well. This involved a dance of hyphenation, letter-spacing, leading and type size. But a crucial ingredient of this recipe was the soft return.

Scanning a piece of text, I was looking for certain criteria – or violations – that needed a soft return (or, in QuarkXPress, shift-return). Using those violations, I would typeset the right rag of the piece of text, and then use hyphenation, and what-not, to tease the rag into as smooth a line as possible – all whilst ensuring the content was pleasurable to read. In a perverse kind of way, I always enjoyed this part of the typesetting process.

My article on 24ways is about how we can apply this thinking to the web, where the inherent lack of control on the medium means we have to apply things in a slightly different (read: clumsy) way.

Emma read the article this morning and pretty much summed up the way I feel when I read text sometimes.

“Another article by @markboulton which gives me a glimpse into how broken the world looks through his eyes” – Emma Boulton

Just like a musician listens to music, I view text in a different way to most people. I just forget that I do it most of the time.

I can hardly believe that 24ways has been running since 2005. In web years, that’s like 72 years ago. It’s a credit to Drew, Brian, Anna, and Owen. It’s not easy running this, year in, year out, on a daily publishing schedule for a month. Hours and hours of work go into it, and we should all be thankful for their time and effort. Oh, and let’s not forget Paul, who has given 24ways a lovely redesign this year (you can read more about that on his blog).

The Undemocracy of Vale of Glamorgan Planning

 ∗ journal

The democracy of the Vale of Glamorgan planning process is absent and favours wealthy developers over citizens.

My house is situated between a narrow country road leading into an old village and farmers’ fields. In April this year, we were sent a letter by the Vale of Glamorgan planning department informing us that a housing development of 115 houses was planned, and that we had just a few weeks to register our objections.

Now, it’s understandable we’d object; we live next to the proposed site. But, there are some issues that have some serious cause for concern:

  1. A public right of way crosses the site, over an unmanned crossing of a railway line soon to be electrified.

  2. A pond is proposed to capture the water from the often-waterlogged field. The town has a history of flooding and this field acts like a large sponge safeguarding this part of the town.

  3. The roads leading to and from the development are designed for sheep and horse carts. Fifty percent of them are unpaved, single-carriage and pose a risk to pedestrians.

You can read more about this application, if you like, on our website and the planning department’s website. But this journal post is not just about documenting the history and problems of the site (that could take a while). This is a journal post about the sickeningly undemocratic process that the Vale of Glamorgan council has undertaken – and, I’m sure, it is much the same throughout the country.

  • As a citizen of England and Wales (Scotland and Northern Ireland have different planning laws), I am not entitled to appeal against a planning application directly. A developer, however, can appeal.

  • The Vale of Glamorgan undertook no independent review of the reports commissioned and presented – with questionable findings, might I add – by the developer.

  • The Vale of Glamorgan are recommending the development despite Network Rail objecting, on the grounds of safety, to the unmanned right of way over the railway line.

  • The Vale of Glamorgan undertook no independent risk assessment of either the open pond or the unguarded railway crossing. Both are a grave concern for me, having two small children.

  • The Vale of Glamorgan undertook no independent review of the flood risk assessments despite the developer’s reports showing multiple failed percolation tests.

I could go on about the deviations from what I’d view as an independent and democratic process. There have been many, but these pose the greatest dangers in my view.

As a father of two small children, I worry about them. My children are beautifully curious. Wonderfully full of energy. But, despite my best efforts, woefully oblivious to the danger they put themselves in. Just like every other child out there. The Vale of Glamorgan planners and planning committee – who we elect into those positions, let’s not forget – have a social responsibility for the well-being of the citizens of this country. Instead, throughout this process, I saw the opposite.

  • I saw an over-worked, under-resourced council making bad decisions.

  • I saw an undemocratic process that favoured negotiation with the developers over hearing the concerns of local residents.

  • I saw a council under pressure to meet housing quotas by paving over green fields rather than taking the more difficult option of brownfield sites.

And I’ve had enough of it.

Many people locally have been saying that the building of this development will result in more flooding in the village. It will result in more congestion on roads designed for horse carts and sheep. With the dangers of the railway line and the open pond, it may result in the injury or death of one of the new residents. Is that when the planners would sit up and listen? Maybe then verify the developer’s reports with an independent review?

The cynic in me says ‘probably not’. The apathetic in me says ‘who cares? We all know politics is corrupt’. But, this is the second time in a year that I’ve voiced my concerns about the local government’s ability to make good decisions.

I know we need more, and affordable, houses in this country. But new developments need proper, independent scrutiny from experts. And this proposal has not had that.

The proposed development goes before the planning committee on Thursday 19th December. It is being recommended by the overworked planning officer – despite the points above. A copy of this journal post is being forwarded to the local councillors; the members of the Vale of Glamorgan planning committee; the Vale of Glamorgan planning department; my local Welsh Assembly Member and Member of Parliament, in addition to the BBC and local newspapers.

Just as local government has a duty to behave in a democratic way, I have a duty to act as a citizen of the UK and to stand up and say when something is not right.

Design Abstraction Escalation

 ∗ journal

What are we losing by abstracting our design processes? Could it be as fundamental as losing a sense of humanity in our work?

A few years ago, Michael Bierut wrote about a natural progression in a designer’s career.

“The client asks you to design a business card. You respond that the problem is really the client’s logo. The client asks you to design a logo. You say the problem is the entire identity system. The client asks you to design the identity. You say that the problem is the client’s business plan. And so forth.”

He calls this Problem Definition Escalation. Where a designer takes one problem and escalates it to a ‘higher’ plane of benefit and worth – one where it will have greater impact, and ultimately, make the designer feel like they’re doing their real job.

Constituent parts

Designing in a browser, in your head, on paper, on a wall, on post-it notes. It doesn’t really matter. What matters is the work. Is it appropriate? Does it do the job well? Will you get paid for it? Does the client understand the benefits?

Really. Who cares how you get there? We’re all coming around to the idea that designing responsive web sites in Photoshop is inefficient and inaccurate (if things like web font rendering matter to you).

Let’s look at the arguments:

  1. For those familiar with the tools, designing in Photoshop is just as efficient as designing in code.

  2. I design using the tools of least resistance. Preferably a pencil, sometimes Photoshop, and a lot of HTML. Photoshop is my tool of choice for creating website designs.

  3. Presenting static visuals to clients is different than using them as a tool yourself as a means to an end.

All of that is good news. Good for clients. Good for the work. Good for us.

A natural result of this is abstraction.

Design patterns are everywhere. The often-repeated chunks of content that we find ourselves designing and building time and time again. Users get used to seeing them in certain ways and, over time, performance can be hindered by deviating from the norm. We see this all the time on e-commerce websites, or in new user registrations. Over time, we all collect these little bits of content, design and code. They build up, and eventually they need organising.

Why not group them all together, categorise them, and iterate on them over time? Throw in your boilerplate templates, too. Maybe group them together as a ‘starter kit’ with included navigation, indicative content – for different types of sites like ecommerce, blogs or magazine sites?

And… wait a second, you’ve got all you need to churn out site after site, product after product for clients now. Excellent. All we need to do is change the CSS, right? Maximise our profits.

No. It’s not right.

Conformity and efficiency have a price. And that price is design. That price is a feeling of humanity. Of something that’s been created from scratch. What I described is not a design process. It’s manufacturing. It’s a cupcake machine churning out identical cakes with different icing. But they all taste the same.

Documenting things that repeat is an important thing to do. I have my own pattern library that I’ve been adding to for years now – it’s an electronic scrapbook where I take snapshots of little content bits and bobs that I find interesting, and that keep on cropping up. It’ll never see the light of day. I’ll never use it on a project, because what I’m doing is building up a head full of this stuff so that when a problem presents itself, I have a fuzzy recollection of something – maybe – that is similar. Instead of going straight to my big ol’ database of coded examples, I’ll try to recreate the little pattern from memory – and that’s when something interesting happens.

Recreating something just slightly differently – from memory – means you end up with something new.

That’s why I wanted to be a designer, after all. To create new, beautiful things.

mgo r2014.10.12

 ∗ Labix Blog

A new release of the mgo MongoDB driver for Go is out, packed with contributions and features. But before jumping into the change list, there’s a note in the MongoDB 2.7.7 release from a few days ago that is worth celebrating:

New Tools!
– The MongoDB tools have been completely re-written in Go
– Moved to a new repository: https://github.com/mongodb/mongo-tools
– Have their own JIRA project: https://jira.mongodb.org/browse/TOOLS

So far this is part of an unstable release of the MongoDB server, but it implies that, if the experiment works out, every MongoDB server release will carry client tools developed in Go and leveraging the mgo driver. This extends the collaboration with MongoDB Inc. (mgo is already in use in the MMS product), and some of the features in release r2014.10.12 were made to support that work.

The specific changes available in this release are presented below. These changes do not introduce compatibility issues, and most of them are new features.

Fix in txn package

This release fixes a rare bug in the txn package. The bug would be visible as an invariant being broken, and the transaction application logic would panic until the txn metadata was cleaned up. The bug did not cause any data loss, nor incorrect transactions to be silently applied. More stress tests were added to prevent that kind of issue in the future.

Debug information contributed by the juju team at Canonical.

MONGODB-X509 auth support

The MONGODB-X509 authentication mechanism, which allows authentication via SSL client certificates, is now supported.

Feature contributed by Gabriel Russel.
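For illustration, here’s a minimal sketch of how that might be wired up with DialWithInfo. The certificate files, server address and subject below are invented for the example; X509 users live in the $external database, and the username is the client certificate’s subject:

package main

import (
	"crypto/tls"
	"net"

	"gopkg.in/mgo.v2"
)

func main() {
	// Hypothetical certificate pair; adjust for your deployment.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	config := &tls.Config{Certificates: []tls.Certificate{cert}}

	session, err := mgo.DialWithInfo(&mgo.DialInfo{
		Addrs: []string{"db.example.com:27017"},
		// X509 authentication runs over SSL, so dial each server with TLS.
		DialServer: func(addr *mgo.ServerAddr) (net.Conn, error) {
			return tls.Dial("tcp", addr.String(), config)
		},
	})
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The username is the subject of the client certificate.
	err = session.Login(&mgo.Credential{
		Username:  "CN=client,OU=users,O=Example",
		Mechanism: "MONGODB-X509",
		Source:    "$external",
	})
	if err != nil {
		panic(err)
	}
}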

SCRAM-SHA-1 auth support

The MongoDB server is changing the default authentication protocol to SCRAM-SHA-1. This release of mgo defaults to authenticating over SCRAM-SHA-1 if the server supports it (2.7.7 and later).

Feature requested by Cailin Nelson.

GSSAPI auth on Windows too

The driver can now authenticate with the GSSAPI (Kerberos) mechanism on Windows using the standard operating system support (SSPI). The GSSAPI support on Linux remains via the cyrus-sasl library.

Feature contributed by Valeri Karpov.

Struct document ids on txn package

The txn package can now handle documents that use struct value keys.

Feature contributed by Jesse Meek.
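As a rough sketch of what this allows – collection and field names invented for the example – a struct value can now be used directly as a document id in a transaction operation:

package main

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
	"gopkg.in/mgo.v2/txn"
)

// AccountKey is a struct value used directly as a document _id.
type AccountKey struct {
	Owner string `bson:"owner"`
	Name  string `bson:"name"`
}

// transfer moves 100 units between two accounts keyed by struct ids.
func transfer(db *mgo.Database) error {
	runner := txn.NewRunner(db.C("txns"))
	ops := []txn.Op{{
		C:      "accounts",
		Id:     AccountKey{Owner: "alice", Name: "savings"},
		Assert: txn.DocExists,
		Update: bson.M{"$inc": bson.M{"balance": -100}},
	}, {
		C:      "accounts",
		Id:     AccountKey{Owner: "alice", Name: "checking"},
		Assert: txn.DocExists,
		Update: bson.M{"$inc": bson.M{"balance": 100}},
	}}
	return runner.Run(ops, "", nil)
}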

Improved text index support

The EnsureIndex family of functions may now conveniently define text indexes via the usual shorthand syntax ("$text:field"), and Sort can use equivalent syntax ("$textScore:field") to inject the text indexing score.

Feature contributed by Las Zenow.
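A small sketch of both shorthands, against an invented posts collection with a body field; per the note above, the "$textScore:" prefix on Sort injects the search score into the named result field:

package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer session.Close()

	posts := session.DB("blog").C("posts")

	// "$text:body" defines a text index over the body field.
	if err := posts.EnsureIndexKey("$text:body"); err != nil {
		panic(err)
	}

	var results []struct {
		Body  string  `bson:"body"`
		Score float64 `bson:"score"`
	}
	// Order matches by relevance, with the score in the "score" field.
	err = posts.Find(bson.M{"$text": bson.M{"$search": "owls"}}).
		Sort("$textScore:score").
		All(&results)
	if err != nil {
		panic(err)
	}
	fmt.Println(results)
}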

Support for BSON’s deprecated DBPointer

Although the BSON specification defines DBPointer as deprecated, some ancient applications still depend on it. To enable the migration of these applications to Go, the type is now supported.

Feature contributed by Mike O’Brien.
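A minimal round-trip sketch, with an invented namespace:

package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson"
)

func main() {
	// A document holding a deprecated DBPointer reference.
	in := bson.M{"ref": bson.DBPointer{
		Namespace: "mydb.mycollection",
		Id:        bson.NewObjectId(),
	}}
	data, err := bson.Marshal(in)
	if err != nil {
		panic(err)
	}
	var out bson.M
	if err := bson.Unmarshal(data, &out); err != nil {
		panic(err)
	}
	fmt.Println(out["ref"])
}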

Generic Getter/Setter document types

The Getter/Setter interfaces are now respected when unmarshaling documents on any type. Previously they would only be respected on maps and structs.

Feature requested by Thomas Bouldin.
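For instance, a named numeric type – neither a map nor a struct – can now marshal itself through those interfaces. A sketch:

package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson"
)

// Celsius is neither a map nor a struct, yet its Getter/Setter
// implementations are now honoured when marshaling and unmarshaling.
type Celsius float64

// GetBSON implements bson.Getter, storing the value as a plain number.
func (c Celsius) GetBSON() (interface{}, error) {
	return float64(c), nil
}

// SetBSON implements bson.Setter, reading the plain number back.
func (c *Celsius) SetBSON(raw bson.Raw) error {
	var f float64
	if err := raw.Unmarshal(&f); err != nil {
		return err
	}
	*c = Celsius(f)
	return nil
}

type reading struct {
	Temp Celsius `bson:"temp"`
}

func main() {
	data, err := bson.Marshal(reading{Temp: 21.5})
	if err != nil {
		panic(err)
	}
	var out reading
	if err := bson.Unmarshal(data, &out); err != nil {
		panic(err)
	}
	fmt.Println(out.Temp) // 21.5
}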

Improvements on aggregation pipelines

The Pipe.Iter method will now return aggregation results using cursors when possible (MongoDB 2.6+), and there are also new methods to tweak the aggregation behavior: Pipe.AllowDiskUse, Pipe.Batch, and Pipe.Explain.

Features requested by Roman Konz.
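A hedged sketch of the new knobs, against an invented articles collection:

package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer session.Close()

	articles := session.DB("blog").C("articles")

	pipe := articles.Pipe([]bson.M{
		{"$match": bson.M{"published": true}},
		{"$group": bson.M{"_id": "$author", "count": bson.M{"$sum": 1}}},
	})

	// Permit server-side temporary files for large aggregations,
	// and fetch cursor results in batches of 100.
	iter := pipe.AllowDiskUse().Batch(100).Iter()

	var result struct {
		Author string `bson:"_id"`
		Count  int    `bson:"count"`
	}
	for iter.Next(&result) {
		fmt.Println(result.Author, result.Count)
	}
	if err := iter.Close(); err != nil {
		panic(err)
	}
}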

Decoding into custom bson.D types

Unmarshaling will now work for types that are slices of bson.DocElem in an equivalent way to bson.D.

Feature requested by Daniel Gottlieb.
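In other words, a custom ordered-document type – invented here as OrderedDoc – now unmarshals just as bson.D would:

package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson"
)

// OrderedDoc is a custom slice of bson.DocElem, distinct from bson.D.
type OrderedDoc []bson.DocElem

func main() {
	data, err := bson.Marshal(bson.D{{"a", 1}, {"b", 2}})
	if err != nil {
		panic(err)
	}
	// Unmarshaling into the custom slice type preserves element order.
	var doc OrderedDoc
	if err := bson.Unmarshal(data, &doc); err != nil {
		panic(err)
	}
	fmt.Println(doc)
}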

Indexes and CollectionNames via commands

The Indexes and CollectionNames methods will both attempt to use the new command-based protocol, and fall back to the old method if that doesn’t work.

GridFS default chunk size

The default GridFS chunk size changed from 256k to 255k, to ensure that the total document size won’t go over 256k with the additional metadata. Going over 256k would force the reservation of a 512k block when using the power-of-two allocation scheme.

Performance of bson.Raw decoding

Unmarshaling data into a bson.Raw will now bypass the decoding process and record the provided data directly into the bson.Raw value. This significantly improves the performance of dumping raw data during iteration.

Benchmarks contributed by Kyle Erf.

Performance of seeking to end of GridFile

Seeking to the end of a GridFile will now not read any data. This enables a client to find the size of the file using only the io.ReadSeeker interface with low overhead.

Improvement contributed by Roger Peppe.

Added Query.SetMaxScan method

The SetMaxScan method constrains the server to only scan the specified number of documents when fulfilling the query.

Improvement contributed by Abhishek Kona.
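A short, hypothetical use, with an invented collection and query:

package main

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// findSome lets the server examine at most 1000 documents while
// satisfying the query, trading completeness for a bounded cost.
func findSome(c *mgo.Collection) ([]bson.M, error) {
	var results []bson.M
	err := c.Find(bson.M{"tags": "owls"}).SetMaxScan(1000).All(&results)
	return results, err
}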

Added GridFile.SetUploadDate method

The SetUploadDate method allows changing the upload date at file writing time.
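A brief sketch, with an invented file name and date:

package main

import (
	"time"

	"gopkg.in/mgo.v2"
)

// writeBackdated stores a GridFS file with an explicit upload date
// in place of the default write time.
func writeBackdated(fs *mgo.GridFS) error {
	file, err := fs.Create("report.txt")
	if err != nil {
		return err
	}
	if _, err := file.Write([]byte("contents")); err != nil {
		file.Close()
		return err
	}
	file.SetUploadDate(time.Date(2014, 10, 12, 0, 0, 0, 0, time.UTC))
	return file.Close()
}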

A Sickroom With a View

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

CHICAGO is a dynamite town, but it may not be the best place to recover from a cold. Since I arrived, my virus has gone from a 4 to an 11. There’s a spectacular view out my hotel window, which I’ve spent the day ignoring by sleeping. I have several nice friends in this town who I’m similarly ignoring, having canceled plans with them today because of this fershlugginer cold. I was flat on my back, sleeping, my phone like a cat on my chest, when my dad called this afternoon to recommend gargling with a three percent peroxide solution. My trainer texted a moment later to ixnay the peroxide. She recommended going back to bed to finish sweating it out, and that looks like my plan for the next twelve hours, give or take a hot bath.

I brought a heap of work with me to Chicago, planning to tackle it between visits with Chicagoland friends, but the cold has pushed all chance of work aside. I got one sentence written for an Ask Dr Web column—the easiest task on my plate—and if I’m being completely honest, I didn’t so much write that sentence as copy and paste it from a reader’s email. Come to think of it, it wasn’t even a sentence. It was a question, which the column I was going to write was supposed to answer. So the sum total of my work today consisted of selecting and copying a question and pasting it into a blank piece of digital paper. Also answering the phone, and removing the Do Not Disturb sign from my door just long enough to admit Room Service.

I get colds a lot. My daughter brings them home from school to visit, and when they see my lungs they move in for the winter. And who can blame them? I’ve got great lungs. All the years I smoked cigarettes, I never caught colds, go figure. There’s a message in that, or maybe not. Maybe I just never caught cold when I was young and had no kid, but time has corrected both of those things.

It’s nice to be awake for a few minutes, listening to the inane chatter that passes for my consciousness and sharing it with you. Thank you for reading. And thank you, Chicago, for your marathon winds. I thought New York was a tough town. New York ain’t nothing to this.