Infocon: green

 ∗ SANS Internet Storm Center, InfoCON: green

Scanning for Single Critical Vulnerabilities

Scanning for Single Critical Vulnerabilities, (Fri, Oct 24th)

 ∗ SANS Internet Storm Center, InfoCON: green

Where I work, we hav ...(more)...

Shellshock via SMTP, (Fri, Oct 24th)

 ∗ SANS Internet Storm Center, InfoCON: green

I've received several reports of what appear to be Shellshock exploit attempts via SMTP. The sour ...(more)...

Beyond You

 ∗ A List Apart: The Full Feed

In client work, it’s our responsibility to ensure that our work lives beyond ourselves. Sometimes that means making sure the CMS can handle clients’ ever-changing business needs, or making sure it continually teaches its users. For clients with an internal development team that will be taking over after you, it means making sure the design system you create is flexible enough to handle changes, yet rigid enough to maintain consistency.

Making your work live beyond you starts by changing your approach to design and development. Rather than defining and building a certain set of pages, focus on building an extensible design system of styles, modules, and page types.

Clients can use the system like LEGO bricks, by taking apart and rearranging modules in new and different ways. The key to a successful system is keeping that modularity in mind while going through design, front-end, and backend development. Every piece you build should be self-contained and reusable.

But a system like that only survives by educating your clients on how to use, maintain, and update it. Show how the components function, independently and together. Document and teach everything you can in whatever way works best—writing, screencasting, or in-person training sessions.

The most common mistake made in this process—and I’ve made it plenty of times before—is stopping at education. Building and teaching a modular system is a great start, but success hinges on your clients being able to use it entirely on their own.

In order for that to be the case, there is an important role reversal that must happen: the client team must become the doer, and you become the feedback-provider. That may sound weird, but think about how the design process normally goes. Say the client gives feedback on a checkout form—the functionality needs to be tweaked a bit to match their business operations. You take that feedback, design a solution, and present it back to them.

Instead, after this role reversal, the client team identifies the change that needs to be made, they use their knowledge of the system to make it, and you provide feedback on their implementation. That process gives their team a chance to work with your system while you’re still involved, and it lets you ensure that things are being used the way you intended.

So if you’re on the agency side like I am, remember that it’s your responsibility to make your work live on beyond your involvement. If you’re on the client side, hold your partner accountable for that. Ask all the necessary questions to really learn the system. Late in the project, ask how you can make changes, instead of letting the agency (or freelancer) make changes themselves. Force them to teach you how to care for what they’ve built, and everyone will be happier with, and more confident in, the result.

Are you receiving Empty or "Hi" emails?, (Fri, Oct 24th)

 ∗ SANS Internet Storm Center, InfoCON: green

I wanted to perform a little unscientific information gathering. I'm working with a small group w ...(more)...

Houston

 ∗ xkcd.com

'Oh, hey Mom. No, nothing important, just at work.'

ISC StormCast for Friday, October 24th 2014 http://isc.sans.edu/podcastdetail.html?id=4207, (Fri, Oct 24th)

 ∗ SANS Internet Storm Center, InfoCON: green

...(more)...

Digest: 23 OCT 2014, (Thu, Oct 23rd)

 ∗ SANS Internet Storm Center, InfoCON: green

A number of items for your consideration today, readers. Thanks as always to our own

Rian van der Merwe on A View from a Different Valley: How to Do What You Love, the Right Way

 ∗ A List Apart: The Full Feed

Every time I start a new job I take my dad to see my office. He loves seeing where I work, and I love showing him. It’s a thing. As much as I enjoy this unspoken ritual of ours, there’s always a predictable response from my dad that serves as a clear indicator of our large generation gap. At some point he’ll ask a question along the lines of, “So… no one has an office? You just sit out here in the open?” I’ve tried many times to explain the idea of colocation and collaborative work, but I don’t think it’s something that will ever compute for him.

This isn’t a criticism of how he’s used to doing things (especially if he’s reading this… Hi Dad!). But it shows how our generation’s career goals have changed from “I want the corner office!” to “I just want a space where I’m able to do good work.” We’ve mostly gotten over our obsession with the size and location of our physical workspaces. But we haven’t completely managed to let go of that corner office in our minds: the job title.

Even that’s starting to change, though. This tweet from Jack Dorsey has received over 1,700 retweets so far:

In episode 60 of Back to Work, Merlin Mann and Dan Benjamin discuss what they call “work as platform.” The basic idea is that we need to stop looking at work as a thing you do for a company. If you view your career like that, your success will always be linked to the success of the company, as well as your ability to survive within that particular culture. You will be at the mercy of people who are concerned about their own careers, not yours.

Instead, if you think about your work as platform, your attention starts to shift to using whatever job you are doing to develop your skills further, so that you’re never at the mercy of one company. Here’s Merlin, from about 31 minutes into that episode of Back to Work (edited down slightly):

If you think just in terms of jobs, you become a little bit short-sighted, because you tend to think in terms of, “What’s my next job?”, or “If I want good jobs in my career, what do I put on my resume?” So in terms of what you can do to make the kinds of things you want, and have the kind of career you like, I think it’s very interesting to think about what you do in terms of having a platform for what you do.

There’s always this thing about “doing what you love.” Well, doing what you love might not ever make you a nickel. And if doing what you love sucks, no one is ever going to see it, like it, and buy it, which is problematic. That’s not a branding problem, that’s a “you suck” problem. So the platform part is thinking about what you do not simply in terms of what your next job is — it’s a way of thinking about how all of the things that you do can and should and do feed into each other.

I think it’s worth giving yourself permission to take a dip into the douche-pool, and think a little bit about what platform thinking might mean to you. Because if you are just thinking about how unhappy you are with your job your horizons are going to become pretty short, and your options are going to be very limited.

So here’s how I want to pull this all together. Just like we’ve moved on from the idea that the big office is a big deal, we have to let go of the idea that a big enough title is equal to a successful career. Much more important is that we figure out what it is that we want to spend our time and attention on — and then work at our craft to make that our platform. Take a realistic look at how much agency you have at work — it may be more than you realize — and try to get the responsibilities that interest you most, just to see where it takes you.

This is also why side projects are so important. They help you use the areas you’re truly interested in to hone your skills by making something real, just for you, because you want to. And as you get really good, you’ll be able to use those skills more in your current role, which will almost certainly make for a more enjoyable job. But it could even turn into a new role at your company — or who knows, maybe even your own startup.

If you go down this path, little by little you’ll discover that you suddenly start loving what you do more and more. Doing what you love doesn’t necessarily mean quitting your job and starting a coffee shop. Most often, it means building your own platform, and crafting your own work, one step at a time.

Adventures with Plex

 ∗ journal

I’ve written before about going completely digital for our home entertainment. To recap: I have a big, shared hard drive attached to an iMac that two Apple TVs connect to using ATV Flash. This was fine for a while, but, frankly, ATV Flash is a little buggy over our network and the Apple TV struggled with any transcoding (converting one file type to another) and streaming – especially in HD. So, we needed something better. In step a few things: Netflix, Plex and a Mac Mini.

Plex has been on my radar for a few years and up until recently didn’t really make much sense for me. But as ATV Flash became more unstable with each Apple OS update, Plex started to look like a good alternative.

The hardware

As you may have read from my older post, I had a shared hard drive with all the media on it hooked up to an iMac, which the Apple TVs connected to in order to browse the media. The issue here became network and sharing reliability. Quite often, the shared hard drive was invisible because the iMac was asleep, or the network had dropped. Sometimes this happened in the middle of a movie. Not ideal.

The new setup is almost identical, but instead of using the Apple TVs as hardware to browse the library, they are now used just as devices to AirPlay to. I barely use the Apple TV UI at all, browsing from my iPad and then AirPlaying to the Apple TV. What’s cool here is that the iPad just acts as a remote; the file itself is transcoded on the server and pushed to the Apple TV directly.

What about a standalone NAS (Network Attached Storage)?

Plex does run on a NAS, but the issue there is that most consumer NAS boxes don’t have the hardware grunt to do the on-the-fly transcoding. So, I finally decided to ditch my iMac in favour of a headless Mac Mini to act as a decent media box running Plex.

Getting started with Plex

  1. Download it. Get the Media Server on your computer or NAS of choice (Plex has huge device support). Also, get hold of the mobile apps. Once you’re done there, download Plex for your connected devices: Chromecast, Amazon Fire TV, Roku, Google TV, native Samsung apps and, now, the Xbox One, too. The app support is really quite incredible.

  2. Plex Pass. Even though the Plex software is free, some additional features are reserved for a paid subscription. The good thing is, you can get a lifetime subscription and the cost is very reasonable at $149.99. For that, you get early access to new builds, remote syncing of content, and things like playlists and trailers. But the killer feature of the Plex Pass is the ability to create user accounts for your content. This is something I’ve been after for ages on the Apple TV, and it’s even more important now that my eldest daughter regularly watches films on it. I need the ability to filter the content appropriately for her.

  3. Setting up a server is a breeze. Once you’ve installed the server software, get yourself a user account on the Plex website and set up a server. This launches some web software for you to start adding files to your libraries and fiddle away to your heart’s content with all the settings.

  4. If you did get the Plex Pass, I’d recommend creating multiple user accounts and playlists with the features it gives you. The way I did this was to have email addresses and user accounts for server-plex, parents-plex and kids-plex. server-plex is for administering the account and has all the libraries shared with it. ‘parents’ is for Emma and me, and ‘kids’ just has the ‘children’s’ library shared with it. Now, by simply signing in and out of the iPad, I can access the appropriate content for each user group.

Next up: streaming, or ‘How do I watch the film on my telly!?’

There are a few options:

  1. Native apps (Samsung, Xbox One, etc.). These are apps installed directly on your TV or Xbox. To watch your content, simply fire up the app and away you go. Yesterday, I installed the Xbox One app and was up and running in less than three minutes.

  2. iOS and AirPlay. This is what I described earlier. Simply download the iOS apps and hook up to your Plex server. Once you’re done, browse your library, press play and then AirPlay to your Apple TV.

  3. iOS and Chromecast Exactly the same as above!

Now, there are some disadvantages and advantages to streaming.

Disadvantages: from what I understand, adding AirPlay into the mix does have a slight performance hit – not that I’ve seen it, though. I generally stream at 720 rather than 1080 resolution, so the file sizes aren’t yet coming up against network limitations. I do expect this to change in the coming years as resolution increases.

Advantages: it’s a breeze. I use my Plex app on my iPad, choose a film or TV show I want to watch and then just stream it via AirPlay. When I’m travelling, I take a Chromecast with me to plug into the TV and stream to that (more on that in another post).

‘Hacking’ the Apple TV

Currently there is no native app for the Apple TV, but there is a way to get around this by ‘hacking’ the Trailers app to directly browse your content on your Plex server using PlexConnect or OpenPlex. Now, there’s a lot to read to get up to speed on this, so I’d recommend a good look through the Plex forums. I followed the instructions here to install the OS X app, add an IP address to the Apple TV (to point to the Plex server) and, so far, so good.

To be honest, though, I tend to just Airplay these days. The iPad remote / Apple TV combination is quite hard to beat. It’s fast, flexible and stable.

Is this it for my digital home needs?

For a good few years now I’ve been looking for the optimum solution to this problem. My home media centre needed the following:

  • Multi-user accounts
  • Full-featured remote
  • Large file format support
  • Manage music, photos and movies
  • Fast transcoding and streaming (minimum 720)

iTunes, ATV Flash and Drobo (in fact, any domestic NAS) all fail on most of these points. Plex not only ticks every single box (if it’s run on a decent machine for transcoding), but provides very broad device support, an active developer community and a really good UX for the interface.

Who knows how long I’ll stick with Plex as I do have a habit of switching this around as often as I change my email client (quite often!). But, for now, it’s working just fine!

It's not you, it's me

 ∗ journal

Dear web conferences, It’s not you, it’s me. Something’s changed and it’s not your fault. I’m just on a different path to you. Maybe we’ll be friends in a while, but at the moment I just want some space to do and try other things. I still love you. But we just need a break. Love, Mark


I’m taking next year off speaking at web conferences. It’s not that I don’t have anything to say, or contribute, but more that I have better things to do with my time right now. Speaking at conferences takes about two weeks per conference if it’s overseas once you factor in preparing and writing the talk, rehearsing, travel, and the conference itself. That’s two weeks away from my wife, my daughters, my new job and a team that needs me.

Two conferences the world over

What I’ve noticed this past year or so is that, largely, we have two different types of web conference running the world over: small independents and larger corporate affairs. The former is generally run by one person with hordes of volunteers and is community-focussed (cheap ticket price, single track). The latter is big-budget, aimed at corporations as a training expense, maybe multi-track, and has A-list speakers.

As well as these two trends, I see others in the material and the way that material is presented. ‘Corporate’ conferences expect valuable, actionable content; that is what corporations are paying for. Slickly delivered for maximum ROI. ‘Community’ conferences have their own trends, too. Talks about people, empathy, community, and how start-ups are changing the world. Community conferences are frequently an excuse to hang out with your internet mates. Which is fine, I guess.

My problem with both of these is I’m not sure I fit anymore. I’m not what you would call a slick presenter: I ‘um’ and ‘ah’, I swear, I get excited and stumble on stage in more ways than one. Some would say I’m disrespectful to the audience I’m talking to. I’m lazy with my slides, preferring to hand-write single words and the odd picture. I’ve never used a keynote transition. I’m not really at home amongst the world’s corporate presenters who deliver scripted, rehearsed, beautifully crafted presentations. They’re great and everything, but it’s just not me. Not for the first time in my life, I don’t quite fit.

And then there’s the community conferences. I feel more at home here. Or at least I used to. This year, not so much. A lot of my friends in this industry just don’t really go to conferences that much anymore. They have family commitments, work to do, and – frankly – just aren’t that into getting pissed up in a night-club after some talks with 90% men. Younger men at that.

Time for something different

All of that may sound like I’m dissing the conference industry. That’s not my intention; it’s more a realisation that, after nearly ten years of speaking at events, I think it’s time I had a little break. Time away to refresh myself, explore other industries that interest me like typography and architecture. Maybe an opportunity to present at one of those types of conferences would present itself; now that would be cool.

I know it’s a bit weird me posting about this when I could quietly just not accept any invitations to speak. To be honest, I’ve been doing that for a little while, but not for the first time, writing things down helps me clarify my position on things. For a while I was angry at web conferences in general. Angry at the content, disappointed with speakers, disappointed at myself. Then I realised, like so many times before, that when I feel like that it’s just that my ‘norm’ has changed. I’m no longer where I used to be and I’m getting my head around it.

It’s just this time, I’m going to listen to my head instead of burying it two feet in some sand.

Laters.

My Handbook – Environment

 ∗ journal

I’ve been doing a talk this year called ‘My Handbook’. It’s a rather silly little title for a bunch of principles I work to. They are my ‘star to sail my ship by’, and I’m going to start documenting them here over the coming months, starting with Environment – a post about how, for me, design is more about the conditions in which you work.

I’d describe myself as an armchair mountaineer. I enjoy reading about man’s exploits to get to the roof of the world, or to scale precipitous walls under harsh conditions for no other reason than the same reason George Mallory said he was climbing Everest: ‘Because it’s there’.

In any expedition to a mountain, great care and consideration is taken over the kit, the climber’s skill, the team around them, the communications – the list is seemingly endless. But the biggest single factor in a successful trip is the condition of the mountain. Will the mountain let them up. And back down again. Assessing the condition of a mountain takes experience, time and careful consideration; it may be snowing, too warm, too much snow on the ground, too cold, too windy. The list of variables is endless, but the climber considers all of them, and if necessary moves to adjust the route, or simply doesn’t attempt the climb.

Now, let’s shift to design – not necessarily web design, but commercial design of almost any kind. Let’s say you take a brief for a project, you begin the work and suddenly, mid-project, other stakeholders come on board and start to comment on your work and give direction on strategy that was unknown to you. We’ve all had projects like those, right? Suddenly, your work becomes less about what you may think of as ‘design’, and more about meetings, project management, account management, sales, production work. You know, all of those things that have a bad reputation in design. Meetings are, apparently, toxic. Well, I’ve started to look at this in a different light over the past few years.

As I’ve grown as a designer, like many, I’ve found myself doing less ‘design’. Or, rather, less of what I thought was design. Five years ago, I thought design was creating beautiful layouts, or building clean HTML and CSS, or poring over typefaces for just that right combination. Now, this is design. But, so are meetings.

Experienced designers spend time making the environment right whilst they are doing the work. Because, frankly, you can push pixels around forever, but if the conditions aren’t right for the work to be created and received by the client in the right way, the work will never be as good as it could be. But, what do I mean by ‘conditions’? Here are a few practical things:

  • The physical space: I see a large part of my job as making the environment in the studio as conducive as possible for good work to happen. That means it’s relaxed, and up-beat. Happy people make good things.

  • A Shit Umbrella: It’s my job to be a filter between client and my team on certain things. Someone recently described this as being a ‘Shit Umbrella’.

  • Politics: Wherever you get people, you get politics – because people are weird. I spend a lot of time on client projects trying to traverse a landscape of people to understand motivations, problems, history or direction. Once you understand the landscape, you can assess, and work to change, the conditions.

  • People first, process second: We fit the processes to the people rather than the other way around. Our team runs things in a way that works for us, but that’s the result of a lot of trying & discarding. Like tending a garden, this is a continual process of improvement.

  • Just enough process: I’m a firm believer in working to the path of least resistance. Being in tune with how people work, and changing your processes to suit, helps create a good environment. But we ensure we impose just enough structure. Too much, and it gets in the way. This doesn’t work if you don’t do the previous point, in my experience.

  • Talk. Do. Talk.: It really is true that the more we talk, the better work we do. We talk in person, on Slack, on Skype, on email. Just like meetings, there is an industry-wide backlash against more communication because the general consensus is we’re getting bombarded. But recently, we’ve been working to change that perception in the team so that talking, and meetings, and writing is the work. It’s tending the garden. Making the conditions right for good work to happen.

  • Making things is messy: This is actually another point from my ‘handbook’. Since the 1950s, clients and designers have been sold a lie by advertising. Design generally isn’t something that happens from point A to Z with three rounds of revisions. It’s squiggly, with hundreds or thousands of points of change. A degree of my time is spent getting people – clients, internal clients, the team – comfortable with the mess we may feel we’re in. It’s all part of it.

I see all of this as design work. It’s also my view that much of the dysfunction – from large agencies to other organisations – exists because this work isn’t being done by designers, because they don’t see it as the work. It’s being done by other people, like account managers, who may not be best placed to get the conditions right. Designers need to take responsibility for changing the environment to make their work as good as it can be. Sometimes, that means sitting in a board room, or having a difficult discussion with a CEO.

Mountaineering is so often not about climbing. You may do some if the conditions are right. Design is so often not about designing beautiful, useful products. But, you may do some if the conditions are right.

Ingredients

 ∗ journal

Jeremy wrote something special yesterday. That’s not unlike Jeremy, but this blog post in particular struck a chord with me.

A couple of weeks ago, Google Chrome toyed with the idea of removing most of the URL – because it’s a “power user” feature – in favour of a simple, easy-to-understand signpost of where the user is. Jeremy’s point is that there is a deeper warning here about ease of use.

… it really doesn’t matter what we think about Chrome removing visible URLs. What appears to be a design decision about the user interface is in fact a manifestation of a much deeper vision. It’s a vision of a future where people can have everything their heart desires without having to expend needless thought. It’s a bright future filled with seamless experiences.

I read Jeremy’s post and kept re-reading it. My instant thought was of food.

I enjoy cooking – have done for a decade – and the more I do, the more I care about ingredients. Good produce matters. Now, I’m not talking about organic artisan satsumas here, but well grown, tasty ingredients; in season, picked at the right time, prepared in the right way. The interesting thing is most people who eat the resulting dish don’t think about food in this way. They experience the dish, but not the constituent parts. The same way some people experience music – if you play an instrument, you may hear basslines, or a particular harmony. If you enjoy cooking, you appreciate ingredients and the combination of them.

But ingredients matter.

And they matter for websites, too. And the URL is an ingredient. Just because a non-power user has no particular need for a unique identifier doesn’t mean it’s any less valuable. They just experience the web in a different way than I do.

Without URLs, or ‘view source’, or seeing performance data – without access to the unique ingredients of websites – we’ll be forced into experiencing the web in the same way we eat fast food. And we’ll grow fat. And lazy. And stop caring how it’s grown.

As Jeremy says: Welcome aboard the Axiom.

A new beginning for Five Simple Steps

 ∗ journal

I’m so happy to tell you that Five Simple Steps has been acquired by Craig Lockwood and Amie Duggan – the dynamic duo behind the Handheld conference, The Web Is, FoundersHub and BeSquare. Before I tell you again how thrilled I am, let me take you way back to 2005…

Next year, it will be ten years since I wrote a blog post called Five Simple Steps to better typography. The motivation behind the post was simple: the elements of good typesetting are not difficult, and, with a few simple guidelines, anyone could create good typographic design. That one article became part of a small series of five posts: five simple steps, with each article containing five simple steps. It was a simple formula, but it turned out pretty well.

Soon after that initial post, I wrote Five Simple Steps to designing grid systems for the web, then the same for colour theory. This was now 2006 and I’d just left my job at the BBC. It was a dreary October day and, whilst sat in a coffee shop in Bristol after just visiting one of my first freelance clients, I was talking over email to the Britpack mailing list about compiling my posts into a book. In 2008, Emma and I hired my brother to help me design it and in early 2009, we finally released it. And with the release of that first book, Five Simple Steps Publishing was born. But we didn’t know it at the time.

Over subsequent months and years other authors saw what we produced and wanted us to publish their books. Before we really knew it, we were a publisher with a catalogue of titles and providing a uniquely British voice to the web community. But publishing is tough. As we found out.

All over the world, publishers’ profits are being eroded, from production costs to the cost-difference of digital versions. And – except for a couple of notable companies – you see it in the physical books being produced for customers by competitors: terrible paper quality, templatised design, automated eBook production. Everywhere, margins are being squeezed, and the product really suffers.

Our biggest challenge was that Five Simple Steps started as a side project, and always stayed that way. Over time, we just couldn’t commit the time and money it needed to really scale. We had so much we wanted to do – there was never any shortage of great authors wanting to write a book – but we could never find the time and energy while also running a client services business. Oh, and also during this time, Emma and I had two children. Running and growing two businesses is somewhat challenging when you’re being thrown up on and have barely four hours’ sleep a night.

So about a year ago, Emma and I sat in our dining room and faced a tough decision: wind down Five Simple Steps, sell it, or give it one more year. We chose the latter. It was a tough year, but Emma, Nick and the team worked to make the Pocket Guide series a great success. So much so, it required tons of work and compounded the problem we had: Five Simple Steps needed to take centre stage rather than be a side project.

A month ago today, Emma and I announced that Five Simple Steps was closing. The team were joining Monotype, and Five Simple Steps could no longer be sustained as a side project. The writing had been on the wall for a while, but the stop was abrupt for us, the authors and the team. We tried to find the right people to take the company forward before the sale, but we couldn’t. Luckily, immediately following the announcement, a few people got in touch to see if they could help. Two of those people really said some interesting things and got us excited about the possibilities: Craig Lockwood and Amie Duggan.

Craig and Amie live locally in Wales. They run conferences: Handheld conference and The Web Is conference later this year. They also run a co-working space in Cardiff called FoundersHub. They have a background in education and training, and together with their conferences and BeSquare – a conference video streaming site – they have the ecosystem in place to take Five Simple Steps to places we could only dream of. As you may gather, we’re chuffed to bits that Five Simple Steps is going to live on. Not only that, but it’s in Wales and in the competent hands of friends who we know are going to give it the attention it deserves.

Emma and I can’t wait to see where it goes from here.

Conference speakers, what are you worth?

 ∗ journal

Over the past couple of days, there have been rumblings and grumblings about speaking at conferences. How, if you’re a speaker, you should be compensated for your time and efforts. My question to this is: does this just mean money?

I’ve been lucky enough to speak at quite a few conferences over the years. Some of them paid me for my time, some of them didn’t. All of them – with the exception of any DrupalCon – paid for my travel and expenses.

When I get asked to speak at a conference, I try to gauge what type of conference it is. Is it an event with a high ticket price and the potential for large corporate attendance? A middle-sized conference with a notable lineup? Or is it a grassroots event organised by a single person? In other words, is it ‘for-lots-of-profit’, ‘for-profit’, or ‘barely-breaking-even’? This not only determines any speaker fee I might charge, but also the other opportunities I could take as compensation instead of cash.

Back to bartering

When I ran a design studio, speaking at conferences brought us work. It was our sales activity. In all honesty, every conference I’ve spoken at brought project leads, which sometimes led to projects, which more than compensated me for my time and effort by keeping my company afloat and food on the table for myself and my team. The time away from my family and team was a risk I speculated against this return. Conference spec-work, if you will.

In addition to speculative project leads for getting on stage and talking about what I do, I also bartered for other things instead of cash for myself or my company. Maybe a stand so we could sell some books, or a sponsorship deal for Gridset. Maybe the opportunity to sponsor the speaker dinner at a reduced rate. There was always a deal to be done where I felt I wasn’t being undervalued: I could benefit my company, product or team, and still get the benefit of speaking, sharing, hanging out with peers and being at a conference together.

It’s about sharing

If every speaker I knew insisted on charging $5000 per gig, there would be a lot fewer conferences in the future, apart from the big, corporate, bland pizza-huts of the web design conference world.

My advice to anyone starting out speaking, or maybe a year or so in, is to have a think about why you do it. If you’re a freelancer, let me ask you: is speaking at a conference time away from your work, and therefore something you should charge for based on your hourly rate? Or is it an investment in yourself, your new business opportunities, and the opportunity to share? Of course, the answer to this is personal, and – for me – depends on what type of conference it is.

This community is unique. We share everything we do. We organise conferences to do just that. Most of the conference organisers I know come from that starting point, but then the business gets in the way. Most speakers I know, get on stage from that starting point, but then the business gets in the way.

There’s nothing wrong with valuing yourself and your work. If speaking is part of your work, then you should be compensated. But next time you’re asked to speak by a conference, just stop for a moment and think about what that compensation should be.

Collaborative Moodboards

 ∗ journal

Creating moodboards is something I was taught from a very early age. In primary school, they were a simple mixed-media way of expressing a form of an idea.

The thing I find interesting about mood boards is not the end-result, but the process of creation. Watching my children make posters from torn up bits of newspaper and magazines is really no different to watching my clients do it. Similar to watching other activities – such as affinity sorting, or depth interviewing – it’s the listening that I find interesting. Every moodboard tells a story, and as a designer, listening to your clients tell that story when they make them can be very insightful.

Making moodboards for you, not for me.

I have to be honest, I don’t make moodboards for myself. Not physical ones anyway. When I familiarise myself with a brand, or make some suggestions for design context, I always try to place those things in a context the client understands. This is where design visuals are important. They are almost unsurpassed in their immediacy of understanding for a client because they show the design in context. Of course, replace that with a high fidelity prototype and you get the same thing. But I want to step back a little here and talk about when I find creating moodboards valuable.

Let me ask you a question: how many times have you heard this from a client?

‘I’m not so sure I think the design is heading in the right direction’. ‘It needs more pop’. ‘It’s just not us’.

These are all because a client cannot communicate about design at the same level we do. So, it’s abstract. Either that, or:

‘I don’t like that green’. ‘That button is great! But, it needs more pop’. ‘The logo needs to be bigger’.

Then things get subjective and extremely detailed. Why? Because these are approachable things people can comment on. More often than not, these comments are a failing that should rest firmly on our shoulders. We need to give our clients the words and understanding to express their thoughts. Either that, or we tease out these issues earlier in the process, in a way that is abstracted from the design work that will come later. This is where I feel collaborative moodboards work extremely well.

So, why would you want to try and run one of these sessions?

  1. When a client’s brand is repositioning, sometimes we’re brought in very early on the back of a strategy. No tactical work has been done. So, it’s up to us to navigate the waters of implementing the branding strategy. Making design work on the back of a few bullet points in a slide deck can be challenging.

  2. Usually in a discovery process, I will get a few red flags from speaking with a client. Generally these come through when talking about competitors, or things they like.

  3. When I get conflicting stories from different stakeholders. The homepage team has a completely different view on the branding than the marketing team.

  4. When branding needs evolving. A lot of organisations have mature branding collateral for print and advertising. Not so much for web (still!), so these are useful exercises to start to tease out differences or how they can align to the web in future.

I’m sure there are more, but those are few I can think of off the top of my head for now.

How to run a collaborative moodboard session

  1. Get the stakeholders in a room. 3-4 is ideal. 9 is way too many.
  2. Bring with you lots of magazines, newspapers, flyers – just physical paper stuff – that you can all cut up.
  3. Glue. Lots of glue. One tub each.
  4. Large (A1) pieces of paper.

The thing about this that I find interesting from a people-watching/behaviour perspective, is that the act of cutting things up and sticking them down is something that most of these people wouldn’t have done since school. The process involves collaborating, getting stuck-in and discussing the work. I find it a great leveller for the client team (hierarchy quickly disappears), and a very good ice breaker.

You set the brief for the morning/afternoon (all day is generally too long for the making part of this process). The idea is to find content that communicates part of the visual story of the product – and that could be anything: colour, type, texture, image – and stick it down.

For the agency team, it’s our job to ask questions throughout the day. To tease out the insights as people are in the moment of choice – before they’ve had a chance to post-rationalise. And you know what? Answers like ‘I just really like this green’ are great, because our next question is ‘Why?’ and it forces rationale. Without us being there, and asking that, post-rationalising and ‘business stuff’ almost always get in the way of finding the truth behind those choices.

Quite often, just like cave paintings, moodboards are an artefact of a conversation. We often discard them at this point because they have served their purpose. We have the insights. The marketing team are best buddies with the homepage team. We’re all heading in the same direction.

So, next time you start a project and you need some steer on branding, or reconciling differences of opinion on a client team, try collaborative moodboarding as a way of coming together to try and solve the problem.

Responsive Web Design – Defining The Damn Thing

 ∗ journal

Unlike many design disciplines, web design goes through cyclical discussions about how to define itself and what it does – anyone who’s ever spent any time in the UX community will know about this.

I was prompted to write about this from reading Lyza’s column on A List Apart, and Jeffrey’s follow-up post this weekend.

In 2010, I attended An Event Apart in Seattle. During that show, I saw three or four presentations – from Eric Meyer, Dan Cederholm, Jeremy, and of course, Ethan. All of them, independently, talked about how using media queries and CSS we could change the content using a fluid layout. It was a perfect storm, and indicative of the thinking that led Ethan to write – and A Book Apart to publish – Responsive Web Design a year later. The rest, they say, is history.

Responsive Web Design had a simple formula: fluid grids, media queries and flexible images. Put them all together, and your web product will be responsive. As Jeffrey said:

If Ethan hadn’t included three simple executional requirements as part of his definition, the concept might have quickly fallen by the wayside, as previous insights into the fluid nature of the web have done. The simplicity, elegance, and completeness of the package—here’s why, and here’s how—sold the idea to thousands of designers and developers, whose work and advocacy in turn sold it to hundreds of thousands more. This wouldn’t have happened if Ethan had promoted a more amorphous notion. Our world wouldn’t have changed overnight if developers had had too much to think about. Cutting to the heart of things and keeping it simple was as powerful a creative act on Ethan’s part as the “discovery” of #RWD itself.
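To make that three-part formula concrete, here’s a minimal sketch in CSS. The class names, pixel values and breakpoint are purely illustrative – my own, not taken from Ethan’s book or from any project mentioned here:

```css
/* Fluid grid: column widths expressed as percentages of their container,
   using the target ÷ context formula (e.g. 600px ÷ 960px = 62.5%). */
.main    { float: left; width: 62.5%;  } /* 600 ÷ 960 */
.sidebar { float: left; width: 31.25%; } /* 300 ÷ 960 */

/* Flexible images: never wider than the column that contains them. */
img { max-width: 100%; }

/* Media query: below an (illustrative) 600px breakpoint,
   the columns stop floating and simply stack. */
@media screen and (max-width: 600px) {
  .main,
  .sidebar { float: none; width: auto; }
}
```

Nothing more exotic than that – which is exactly the simplicity of the package Jeffrey is pointing to.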

The idea of responsive design has taken a few years to go from cubicle to board room. But now it is a project requirement coming directly from there. For the past eighteen months, at Mark Boulton Design, we’ve seen it as a requirement on RFPs. And with that, it brings a whole other set of problems. Because what does it mean? Hence, we have to Define The Damn Thing all over again. And recently, to be honest with you, I’ve stopped doing it. Because, depending on who you speak to, responsive web design has come to mean everything and nothing.

There are some who see it as media queries, fluid grids and scalable images. There are those who see it as adaptive content, or smarter queries to the server to make better use of bandwidth available. There are those who just see it as web design.

Me? I think it’s just like Web 2.0. And AJAX. It’s just like Web Standards (although to a lesser extent) and exactly like HTML5 (in the minds of those of you who aren’t developers) and its rather splendid branding. Responsive design has grown into a term that represents change above all else. To me, responsive design is more about a change in the browser and device landscape. A change in how people consume content. A change in how we make things for the web. And responsive design is just the term to encapsulate that change in a nice, easy solution that can get sold to a board of directors worrying about their profit and loss.

‘Responsive design is forward-thinking and means it will work on a phone, and that’s where things are headed’.

We’ve heard this line time and time again over the past couple of years. You see, responsive design is a useful term and one that will stick around for a while whilst we’re going through this change. How else do we describe it? Web design? I don’t think so. No board member is going to get behind that; it’s not new enough.

How we work

 ∗ journal

I’ve had a few people ask me recently about how we work at Mark Boulton Design. And, the truth be told, it slightly differs from project to project, from client to client. But the main point is that we work in an iterative way with prototypes at the heart of our work every step of the way.

Work from facts AND your intuition

We always start by trying to understand the problem: the users of the website or product, the organisation and their customer strategy, the goals and needs of the project, who’s in charge and who isn’t. There’s a lot to take in during those early meetings with a client. One of the first things we do is try to put in place some kind of research plan: what do we need to know, and how are we going to get it.

This could be as simple as running some face to face interviews with existing or potential customers coupled with a new survey. Of course, good research should provide some data to a problem, not just ‘what do you think of our website?’. Emma has written some good, quick methods for doing this yourself.

We couple that with trying to extract the scope from the client. I say that because, half the time, we’re given a briefing document – or something similar – and most of the time that document hasn’t been written for us. It’s been written for internal management to sign off on the budget of the project. So, rather than ask for a new document, we run a couple of workshops to tease out those problems:

User story workshop

This workshop is designed to tease out the scope of the project – everything we can think of. We ask the client to write user stories describing the product. Nothing is off the table at this point and our aim is to exhaust the possibilities.

Persona / user modelling workshop

Personas have been called bullshit in UX circles for years now. Some say they pay lip-service to a process, or they’re ignored by organisations. Whatever. I think, sometimes, something like personas are useful for putting a face to that big, amorphous blob of a customer group. Maybe that’s just a set of indicative behaviours or maybe a lightweight pen-portrait of an archetypical user. The tool is not the important thing here, but how you can use something to help people think of other people. To help an organisation to think of their customers, or designers to think of the audience they’re designing for, or the CEO to think in terms of someone’s disability rather than the P&L.

What I find generally useful about running a workshop like this is that it exposes weaknesses in an organisation. If a client pays lip-service to a customer-centric approach, it will soon become very evident in a meeting like this that that’s what’s going on.

Brand workshop

This is a vital workshop for me. As a design lead on a project, I need to understand the tone of a company. From the way it talks about itself, through to the corporate guidelines. But, my experience is, that’s only half the story if you’re lucky. So much of a brand is a shared, consensual understanding in an organisation. Quite a lot of that can go un-said. This workshop is, again, about teasing out those opinions, views and arguments.

Bonus!

The first three workshops have the added bonus of finding out who runs the show in an organisation. I make it my business to find out – and get on side – the following people:

  • The founders / CEO. This should be a given.
  • The people with a loud mouth. It’s useful to find the people who have a loud voice and get them around to our way of thinking. Then they can shout about our work internally.
  • The people with influence. Sometimes, these are the quiet, unassuming people, but they carry great sway. If we want things done, these people need to be our friends.

That’s quite a lot of people to keep happy, but if we get these three groups on side, we find projects run a lot smoother.

Prototype your UX strategy

Leisa gave a great talk at last year’s Generate conference in London about prototyping your UX strategy. The crux of this was that it is way more efficient to demonstrate your thinking and design than it is to talk about it. If you can quickly make something, test it, iterate a bit, and then present it, you can make massive gains in cutting down on procrastination and cut through organisational politics like a hot knife through butter. Showing that something works is infinitely preferable to me than arguing about whether something would work or not.

Wherever possible, we’ve been making prototypes in HTML. It gives us something tangible and portable to work with. We can put it in front of users, or show it to a CEO on their mobile device to demonstrate something.

The right tool at the right time

I’ve spoken before about designing in the browser, or in Photoshop, or in pencil, or whatever. Frankly, we try to use the most appropriate tool at the right time. Sometimes that’s a browser, but a client may respond dreadfully to that because they’re used to seeing work presented to them in a completely different way. Then we change tack and do something else. My feeling is the best design tool you can use is the one that requires the least amount of work to use: be it a pencil, Photoshop or HTML.

agile not Agile

I feel that design is a naturally iterative process. We make things and then fix things as we go. Commercial design, though, has to be paid for. And so, in the 1950s, the ad industry imposed limits on this iteration – ‘you have three changes, then you must sign off on this creative’. Of course, I can understand this thinking; you can’t just get a blank cheque for as many iterations as you like on a project until something does (or does not) work. But what we’ve gained in commercial control, I’ve found we’ve definitely lost in design quality. It takes time to make useful, beautiful things.

So, from about 2009, Mark Boulton Design have been working in the following way:

  • We work in sprints that are two weeks long. We never have a deadline on a Friday. Sprints run from Monday to Monday, with a release end of play Monday.

  • ‘Releases’ are output. Sometimes code. Sometimes research. Sometimes design visuals.

  • We front-load research into a discovery sprint. This is to get a head-start and give the designers (and clients) some of the facts to work around. Organising, running and feeding back on research takes time.

  • Together with the client, we capture the scope of the project with user stories. These are not typical Agile user stories – for example, we don’t find estimating complexity and points useful in our process – but they are small, user-centred sentences that describe a core piece of the product. It could be a need, or a bit of functionality, or a piece of research data. The key point here is that, for us, they are points of discussion that are small and focussed. This helps keep us arrow-straight when we prioritise them sprint on sprint.

  • We conduct research each sprint if it’s required. This is determined by the priorities for that sprint. For example, if the priority for the sprint is focussed on aesthetics, or typography, or browser testing, then usability testing is not going to be of much use for those.

And now for some of the commercial considerations:

  • Contracts are most often fixed-price, but broken down into sprints. Each sprint has an identical price.

  • We bill as we go. The client pays a degree up-front, and that is then factored into the cost of each sprint.

  • We explain to prospective clients how we work: each sprint, we work on agreed priorities, with no detailed functional spec to work against.

  • Points. In the past, we’ve worked on Agile agreements where we would be delivering against agreed estimated points. This was to see if we could make agile web development work in a project environment. It didn’t. We found we were delivering to the points, rather than to the project. Plus, if we didn’t hit the points for that sprint, we were penalised financially.

  • Coaching our clients through this process is as challenging as coaching clients through a responsive design project. When the project is in the early-to-mid messy stages – when client preconceptions are being challenged, or the prototype is not being received well by users – it takes a strong partnership to push through it. Design is messy. Iteration, by its very nature, is about failing to some degree or another. Everyone has to get used to that feeling of things not working out the way they first thought.

  • The sticky end. When we get to the final stages of a project, we should be in a good place. The highest priority items should be addressed, we will have buy in and sign off from the right people and we should be focussed on low priority features. But sometimes, that’s not the case. Sometimes, we’ve got high priority things left over which are critical. And that’s the time when we have to go back to the client and discuss how these need to be addressed. Sometimes that’s an extra sprint or two. Sometimes it’s an entirely new contract.

What we don’t do from ‘Agile’

We don’t do:

  • Estimating tasks. We don’t assign time to design tasks. In our studio, work just doesn’t happen that way. Generally, things are a bit more holistic.

  • Tracking velocity. For the same reason as above, if we’re not measuring delivery against user stories in a numeric way, we can’t track our velocity.

  • Retrospectives. We don’t run traditional retrospectives on sprints. Maybe this is more a symptom of a close, high-communication level of our team. We’re talking all the time anyway. We have found that retrospectives have been a useful forum for clients to feed back on how they’re feeling about progress in the past, but this has felt like a somewhat forced environment to do it. So, recently, we have points of checking in with a client to see how they’re feeling about things.

So, that’s about it. A whistle-stop tour of how we like to work. As much as possible, we’ve tried to tailor our process to what works for us, built on some useful structures that agile gives us. I guess the most important thing for us is that we’re not wedded to our processes at all. We regularly shift focus, or the way we work, to meet the needs of particular clients or projects. Just as long as we align those processes to how design naturally happens, then I’m happy.

Al Jazeera & Content shelf-life

 ∗ journal

From speaking at the phenomenal MK Geek Night All Dayer, to launching a project three years in the making for Al Jazeera, to releasing a new design language for one of the oldest universities in London, to Mark Boulton Design being nominated in four categories in the Net Awards. It’s been a busy couple of weeks.

Last week, I was up in London visiting a client when I heard that another project of ours was to be launched shortly. It was part of a project we’ve been working on for just over three years: the global design language for Al Jazeera Network digital, with the first two products being launched in Turkey and a beta of the Arabic news channel.

There is so much to talk about on a project of this scale. Here are just a few highlights:

  • Spending time with journalists and the newsroom to understand how news is reported.
  • Working with Al Jazeera during the Arab Spring; from the uprising in Egypt to Libya.
  • Course-correcting throughout the project. Responsive Design wasn’t really a thing three years ago.
  • Designing in four languages – Arabic, English, Turkish and Slavic – when the MBD team primarily speaks one.
  • Adopting an Object Oriented approach from content through to code. Modular, transferrable and scalable. It required a level of detailed thought right down to how content types were defined in the CMS.
  • Working with three development partners across three independent content management systems.

I could go on and on. And I probably will at some point. Needless to say, none of the above could be achieved without a patient, smart and agile client-side team. Good job the Al Jazeera team are just that.

There are many buzz words you could label this project with: content-first, responsive, atomic, OOCSS. Again, I could go on. But the one thing that was first, central and always – through prototyping and early strategy – was good research. It was a research-first project. That probably won’t come as a surprise to some of you, given we have our own in-house researcher, Emma. What may come as a surprise, however, is the degree to which that early research-led approach laid the foundation for a fundamental shift in how Al Jazeera thought about their content.

Content shelf-life.

Many news journalists think of their content as a few distinct types:

  • Rolling news: Typically taken straight from the wire and edited over time to fit the growing needs of the story.
  • Editorial: Longer form piece. Still highly topical and timely.
  • Op-ed: Opinion piece from a named author.
  • Feature: A story. With a beginning, a middle and an end. Long-form content, and not necessarily timely.

These can all be mapped to timeliness, both in terms of how long they take to create and their editorial shelf-life. The more timely a piece, the shorter it takes to create and the shorter the shelf-life.

  • Rolling news: timely, short shelf-life.
  • Editorial: timely, long-form, short to mid shelf-life.
  • Op-ed: Long-form, mid shelf-life.
  • Feature: Long-form, long shelf-life.

Publication schedules are often organised around this creation, with journalists having several pieces of different types, in various degrees of completion, to various deadlines, focussed on different stories. This is a comfortable mental model, one that newspapers have been arranged around for decades. But it isn’t necessarily how users of websites look for content. Users will not typically look for a type of content, but will look for the context of a story first: the topic.

The new information architecture of the Al Jazeera platform has been built around a topic-first approach. But also, the modular content and design allows for the rapid changing of display of the news as a topic or news story moves through the various content types. It’s a design system, connected to a CMS that accommodates what news naturally does. It changes.

The Design System

The whole platform is built on top of Gridset using modular design principles. The content is modular and multifaceted, designed for re-use, as is the design. For years now at Mark Boulton Design, we’ve not designed websites, but an underpinning design system with naming conventions, rules and patterns. This is particularly useful because much CMS software thinks of content as objects in the same way. Our systematic thinking can be applied all the way through CMS integration. Software engineers love designers giving them rules.

It’s funny, we seem to have just discovered this in web design, but many other design disciplines have been approaching their work in this way for decades. Some for centuries. Take typography, for example. The design process of creating a typographic design is systematic thinking at its purest. Designing heading hierarchies and the constituent parts of written language can be approached in an abstracted way. This is exactly the right approach when designing for other languages.

Arabic has obvious challenges for an English-speaker. Not only is it written right to left, but the glyphs are non-roman. To approach this as an English-speaker, we needed to create tools and processes to help. Words no longer look like words, but shapes of words. Page designs no longer look like familiar blocks of text, type hierarchy and colour. We saw form more than we saw function.

Just the start

Three years is a long time to work on a project. I’m so delighted to finally see the design system in the wild. For such a long time, we only saw it in prototype form, but you can only take prototypes so far. We needed to pressure-test content types, see where it breaks, and adjust a hundred and one small details to make it work. All of this just underpins the fact that now that the system is being rolled out, changes need to be made every day to evolve it. This is the web, after all. It’s a feature, not a bug.

Some social good

 ∗ journal

I was going to do a usual year-end wrap up for this blog post as I have done in previous years. But, as it’s the start of the new year already, I thought I’d set out my stall for the coming year. What do I want to do, rather than what have I just done?

A couple of days ago, I was reminded of a video I watched a while ago about Free Enterprise via my friend Andy Rutledge.

“Don’t Eat Your Dog: The Surprising Moral Case for Free Enterprise. Based on his best-selling book “The Road to Freedom,” AEI President Arthur C. Brooks explains how we can win the fight for free enterprise by articulating what’s written on our hearts.”

I’m always interested in how other countries’ politics, viewpoints and economics work, and this was no exception. Rather than bat down things like capitalism, I’m making a concerted effort to understand the nuance in such things.

As someone who runs a design studio, a publishing company and a web-based design tool, you could count me in the group of people who work hard for what I get. And I’m rewarded for that. I don’t expect a free ride. I don’t expect anything beyond the realms of what is offered in the country I live in (such as state health care and education etc. In fact, I pay for that through my taxes – the NHS is not free). But before I disappear into a politics hole, I want to bring this back to design.

Running a design company, we charge clients for the work we do, and we charge the customers who use our products and buy our books. In doing so, we create jobs, and more tax revenue for the government. But one thing I don’t agree with from the video above is that what I’m doing is a purely selfish exercise. I’m not just doing business to pay the bills, design great products for clients, and give people work to do. To me, there is more to making things than just making things.

I believe my job is not only about doing work for clients but that I have a social responsibility to make the world a better place through the work I do. Design is a powerful tool to affect social change. However small.

Let me give you an example.

You’re out for a walk at lunchtime. You come to a road crossing and there is a family by your side waiting to cross the road. The crossing indicator is counting down the seconds, but you spot a small gap in the traffic for you to cross. You skip across the road, running in between cars, and carry on. The family is left waiting for a safe gap in the traffic.

Do you:

  1. Think that what you did was fine? It was safe for you to cross. No problem.

or

  2. Think that you should have waited next to the family, building on the good example the parents were trying to set for their small children: wait for a safe gap in the traffic before crossing?

It’s a small but important thing. And this is social responsibility. A responsibility to help the community around you, and not through just helping yourself. Next time you take on a design project, just stop for a second and think:

“beyond getting paid for this, and making my client’s business better, what is the benefit in doing this work? What is the social good?”

In addition to the work itself, ask them if you could blog about the process, or speak about the work at a conference. If it’s something you really believe in, could you offer to do it pro bono, or heavily discounted? Could you open source the code produced? How about aspects of the design, such as icons? Could you have one of their team members sit with your team for the whole project to soak up your skills? How could you benefit the web design and development community and still get paid well?

We’re in an incredibly fortunate position as designers to create change in the world. Many people can’t. Or simply won’t. Through our products, our work, and how we talk about it, we can have a much greater benefit to society than just lining our pockets.

This is exactly what I plan on doing in 2014. Happy New Year!

Running ragged

 ∗ journal

In my fourth article for 24ways over the years, I wrote about typesetting the right rag.

One of the first little typesetting tricks I was taught – in my internship at an advertising agency all those years ago – was how to make text fit within a given space, but still read well. This involved a dance of hyphenation, letter-spacing, leading and type-size. But a crucial ingredient of this recipe was the soft-return.

Scanning a piece of text I was looking for certain criteria – or violations – that needed a soft-return (or, in Quark XPress, shift-return). Using those violations, I would typeset the right-rag of the piece of text, and then use hyphenation, and what-not, to tease the rag into as smooth a line as possible. All whilst ensuring the content was pleasurable to read. In a perverse kind of way, I always enjoyed this part of the typesetting process.

My article on 24ways is about how we can apply this thinking to the web, where the inherent lack of control on the medium means we have to apply things in a slightly different (read: clumsy) way.

Emma read the article this morning and pretty much summed up the way I feel when I read text sometimes.

“Another article by @markboulton which gives me a glimpse into how broken the world looks through his eyes” – Emma Boulton

Just like a musician listens to music, I view text in a different way to most people. I just forget that I do it most of the time.

I can hardly believe that 24ways has been running since 2005. In web years, that’s like 72 years ago. It’s a credit to Drew, Brian, Anna, and Owen. It’s not easy running it year in, year out, on a daily publishing schedule for a month. Hours and hours of work go into this, and we should all be thankful for their time and effort. Oh, and let’s not forget Paul, who has given 24ways a lovely redesign this year (you can read more about that on his blog).

The Undemocracy of Vale of Glamorgan Planning

 ∗ journal

The democracy of the Vale of Glamorgan planning process is absent and favours wealthy developers over citizens.

My house is situated between a narrow country road leading into an old village and farmers’ fields. In April this year, we were sent a letter by the Vale of Glamorgan planning department saying that a housing development of 115 houses was planned and that we had just a few weeks to register our objections.

Now, it’s understandable we’d object; we live next to the proposed site. But, there are some issues that have some serious cause for concern:

  1. A public right of way crosses the site and over an unmanned railway crossing of a line soon to be electrified.

  2. A pond is proposed to capture the water from the often-waterlogged field. The town has a history of flooding and this field acts like a large sponge safeguarding this part of the town.

  3. The roads leading to and from the development are designed for sheep and horse carts. Fifty percent of them are unpaved, single-carriage and pose a risk to pedestrians.

You can read more about this application, if you like, on our website and the planning department website. But, this journal post is not just about documenting the history and problems of the site (that could last a while). This is a journal post about the sickeningly undemocratic process that the Vale of Glamorgan council has undertaken, and I’m sure, it is somewhat similar throughout the country.

  • As a citizen of England and Wales (Scotland and Northern Ireland have different planning laws), I am not entitled to appeal against a planning application directly. However, a developer can appeal.

  • The Vale of Glamorgan undertook no independent review of reports commissioned and presented, with questionable findings, might I add, by the developer.

  • The Vale of Glamorgan are recommending the development despite Network Rail objecting on the grounds of safety with the un-manned right of way over the railway line.

  • The Vale of Glamorgan undertook no independent risk assessment of either the open pond or the unguarded railway crossing, both of which are a grave concern for me, having two small children.

  • The Vale of Glamorgan undertook no independent review of the flood risk assessments despite the developer’s reports showing multiple failed percolation tests.

I could go on about the deviations from what I’d view as an independent and democratic process. There have been many, but these pose the greatest dangers in my view.

As a father of two small children, I worry about them. My children are beautifully curious. Wonderfully full of energy. But, despite my best efforts, woefully oblivious to the danger they put themselves in. Just like every other child out there. The Vale of Glamorgan planners, and planning committee – who we elect into those positions, let’s not forget that – has a social responsibility for the well-being of the citizens of this country. Instead, throughout this process, I saw the opposite.

  • I saw an over-worked, under-resourced council making bad decisions.

  • I saw an undemocratic process that favoured negotiation with the developers over hearing the concerns of local residents.

  • I saw a council under pressure to meet housing quotas by paving over green fields rather than taking the more difficult option of brownfield sites.

And I’ve had enough of it.

Many people locally have been saying that the building of this development will result in more flooding in the village. It will result in more congestion on roads designed for horse carts and sheep. With the dangers of the railway line and open pond, it may result in the injury or death of one of the new residents. Is that when the planners would sit up and listen? Maybe verify the developer’s reports with independent review?

The cynic in me says ‘probably not’. The apathetic in me says ‘who cares? We all know politics is corrupt’. But, this is the second time in a year that I’ve voiced my concerns about the local government’s ability to make good decisions.

I know we need more, and affordable, houses in this country. But new developments need proper, independent scrutiny from experts. And this proposal has not had that.

The proposed development goes before the planning committee on Thursday 19th December. It is being recommended by the overworked planning officer – despite the points above. A copy of this journal post is being forwarded to the local councillors; the members of the Vale of Glamorgan planning committee; the Vale of Glamorgan planning department; my local Welsh Assembly Member and Member of Parliament, in addition to the BBC and local newspapers.

Just as local government has a duty to behave in a democratic way, I have a duty to act as a citizen of the UK and to stand up and say when something is not right.

Design Abstraction Escalation

 ∗ journal

What are we losing by abstracting our design processes? Could it be as fundamental as losing a sense of humanity in our work?

A few years ago, Michael Bierut, wrote about a natural progression in a designer’s career.

“The client asks you to design a business card. You respond that the problem is really the client’s logo. The client asks you to design a logo. You say the problem is the entire identity system. The client asks you to design the identity. You say that the problem is the client’s business plan. And so forth.”

He calls this Problem Definition Escalation. Where a designer takes one problem and escalates it to a ‘higher’ plane of benefit and worth – one where it will have greater impact, and ultimately, make the designer feel like they’re doing their real job.

Constituent parts

Designing in a browser, in your head, on paper, on a wall, on post-it notes. It doesn’t really matter. What matters is the work. Is it appropriate? Does it do the job well? Will you get paid for it? Does the client understand the benefits?

Really. Who cares how you get there? We’re all coming around to the idea that designing responsive web sites in Photoshop is inefficient and inaccurate (if things like web font rendering matter to you).

Let’s look at the arguments:

  1. For those familiar with the tools, designing in Photoshop is just as efficient as designing in code.

  2. I design using the tools of least resistance. Preferably a pencil, sometimes Photoshop, and a lot of HTML. Photoshop is my tool of choice for creating website designs.

  3. Presenting static visuals to clients is different than using them as a tool yourself as a means to an end.

All of that is good news. Good for clients. Good for the work. Good for us.

A natural result of this is abstraction.


Design patterns are everywhere. The often-repeated chunks of content that we find ourselves designing and building time and time again. Users get used to seeing them in certain ways, and over time, perhaps their performance is hindered by deviating from the norm. We see this all the time on e-commerce websites, or in new user registrations. Over time, we all collect these little bits of content, design and code. They build up, and eventually they need organising.

Why not group them all together, categorise them, and iterate on them over time? Throw in your boilerplate templates, too. Maybe group them together as a ‘starter kit’ with included navigation, indicative content – for different types of sites like ecommerce, blogs or magazine sites?

And… wait a second, you’ve got all you need to churn out site after site, product after product for clients now. Excellent. All we need to do is change the CSS, right? Maximise our profits.

No. It’s not right.

Conformity and efficiency have a price. And that price is design. That price is a feeling of humanity. Of something that’s been created from scratch. What I described is not a design process. It’s manufacturing. It’s a cupcake machine churning out identical cakes with different icing. But they all taste the same.

Documenting things that repeat is an important thing to do. I have my own pattern library that I’ve been adding to for years now – it’s an electronic scrapbook where I take snapshots of little content bits and bobs that I find interesting, and that keep on cropping up. It’ll never see the light of day. I’ll never use it on a project, because what I’m doing is building up a head full of this stuff so that when a problem presents itself, I will have a fuzzy recollection of something – maybe – that is similar. Instead of going straight to my big ‘ol database of coded examples, I’ll try to recreate this little pattern from memory – and that’s when something interesting happens.

Recreating something just slightly differently – from memory – means you end up with something new.

That’s why I wanted to be a designer, after all. To create new, beautiful things.

Blue Beanie Day Tees & Hoodies

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

Blue Beanie Day tee shirt

JUST IN TIME for Blue Beanie Day 2014, I’ve teamed up with our friends at Cotton Bureau to bring you Blue Beanie Day Tees and Blue Beanie Day Hoodies. For sale at cost (no profit). Hurry! Only 14 days left to buy:

cottonbureau.com/products/the-blue-beanie-tee


The eighth annual Blue Beanie Day in support of web standards will be celebrated around the world on November 30, 2014.

No more JS frameworks

 ∗ BitWorking

Stop writing Javascript frameworks.

Translations: Japanese

JavaScript frameworks seem like death and taxes; inevitable and unavoidable. I'm sure that if I could be a fly on the wall every time someone started a new web project, the very first question they'd ask is, which JS framework are we using? That's how ingrained the role of JS frameworks is in the industry today. But that's not the way it needs to be, and actually, it needs to stop.

Let's back up and see how we got here.

Angular and Backbone and Ember, oh my.

For a long time the web platform, the technology stack most succinctly described as HTML+CSS+JS, was, for lack of a better term, a disaster. Who can forget the IE box model, or the layer tag? I'm sure I just started several of you twitching with flashbacks to the bad old days of web development with just those words.

For a long time there was a whole lot of inconsistency between browsers and we, as an industry, had to write frameworks to paper over them. The problem is that there was disagreement even on the fundamental issues among browsers, like how events propagate, or what tags to support, so every framework not only papered over the holes, but designed their own model of how the browser should work. Actually their own models, plural, because you got to invent a model for how events propagate, a model for how to interact with the DOM, etc. A lot of inventing went on. So frameworks were written, each one a snowflake, a thousand flowers bloomed and gave us the likes of jQuery and Dojo and MochiKit and Ext JS and AngularJS and Backbone and Ember and React. For the past ten years we’ve been churning out a steady parade of JS frameworks.

But something else has happened over the past ten years; browsers got better. Their support for standards improved, and now there are evergreen browsers: automatically updating browsers, each version more capable and standards compliant than the last, with support for newer standards arriving all the time.

I think it's time to rethink the model of JS frameworks. There's no need to invent yet another way to do something, just use HTML+CSS+JS.

So why are we still writing JS frameworks? I think a large part of it is inertia, it's habit. But is that so bad, it's not like frameworks are actively harmful, right? Well, let's first start off by defining what I mean by web framework. There's actually a gradient of code that starts with a simple snippet of code, such as a Gist, and that moves to larger and larger collections of code, moving up to libraries, and finally frameworks:

gist -> library -> framework

Frameworks aren't just big libraries, they have their own models for how to interact with events, with the DOM, etc. So why avoid frameworks?

Abstractions

Well, one of the problems of frameworks is usually one of their selling points, that they abstract away the platform so you can concentrate on building your own software. The problem is that now you have two systems to learn, HTML+CSS+JS, and the framework. Sure, if the framework was a perfect abstraction of the web as a platform you would never have to go beyond the framework, but guess what, abstractions leak. So you need to know HTML+CSS+JS because at some point your program won't work the way you expect it to, and you’ll have to dig down through all the layers in the framework to figure out what's wrong, all the way down to HTML+CSS+JS.

Mapping the iceberg.

A framework is like an iceberg, that 10% floating above the water doesn't look dangerous, it's the hidden 90% that will eventually get you. Actually it's even more apt than that, learning a framework is like mapping an iceberg, in order to use the framework you need to learn the whole thing, apply the effort of mapping out the entire thing, and in the long run the process is pointless because the iceberg is going to melt anyway.

Widgets

Another selling point of frameworks is that you can get access to a library of widgets. But really, you shouldn't need to adopt a framework to get access to widgets, they should all be orthogonal and independent. A good example of this today is CodeMirror, a syntax highlighting code editor built in JavaScript. You can use it anywhere, no framework needed.

There is also the lost effort of building widgets for a framework. Remember all those MochiKit widgets you wrote? Yeah, how much good are they doing you now that you've migrated to Ember, or Angular?

Data Binding

Honestly I've never needed it, but if you do, it should come in the form of a library and not a framework.

The longer term problem with frameworks is that they end up being silos, they segment the landscape, widgets built for framework A don't work in framework B. That's lost effort.

So what does a post-framework world look like?

HTML+CSS+JS are my framework.

The fundamental idea is that frameworks aren't needed, use the capabilities already built into HTML+CSS+JS to build your widgets. Break apart the monoliths into orthogonal components that can be mixed in any combination. The final pieces that enable all of this fall under the umbrella of Web Components.

HTML Imports, HTML Templates, Custom Elements, and Shadow DOM are the enabling technologies that should allow us to cut the cord from frameworks, allowing the creation of reusable elements and functionality. There are plenty of articles and libraries covering them that give a much better introduction than I could here.

So, we all create <x-flipbox>'s, declare victory, and go home?

No, not actually, the first thing you need for working with Web Components are polyfills for that functionality, such as X-Tag and Polymer. The need for those will decrease over time as browsers flesh out their implementations of those specifications.

A point to be stressed here is that these polyfills aren't frameworks that introduce their own models to developing on the web, they enable the HTML 5 model. But that isn't really the only need, there are still minor gaps in the platform where one browser deviates in a small way from current standards, and that's something we need to polyfill. MDN seems to have much of the needed code, as the documentation frequently contains short per-function polyfills.

So one huge HTML 5 Polyfill library would be good, but even better would be what I call html-5-polyfill-o-matic, a set of tools that allows me to write Web Components via bog standard HTML+JS and then after analyzing my code, either via static analysis or via Object.observe at runtime, it produces a precise subset of the full HTML 5 polyfill for my project.

This sort of functionality will be even more important as I start trying to mix and match web components and libraries from multiple sources, i.e. an <x-foo> from X-Tag and a <core-bar> from Polymer. Does that mean I should have to include both of their polyfill libraries? (It turns out the answer is no.) And how exactly should I get these custom elements? Both X-Tag and Brick have custom bundle generators.

If I start creating custom elements do I need to create my own custom bundler too? I don't think that's a scalable idea, I believe we need idioms and tools that handle this much better. This may actually mean changing how we do open source; a 'widget' isn't a project, so our handling of these things needs to change. Sure, still put the code in Git, but do you need the full overhead of a GitHub project? Something lighter weight, closer to a Gist than a current project might be a better fit. How do I minimize/vulcanize all of this code into the right form for use in my project? Something like Asset Graph might be a good start on that.

So what do we need now?

  1. Idioms and guidelines for building reusable components.
  2. Tools that work under those idioms to compile, crush, etc. all that HTML, CSS, and JS.
  3. A scalable HTML 5 polyfill, full or scaled down based on what's really used.

That's what we need to build a future where we don't need to learn the latest model of the newest framework, instead we just work directly with the platform, pulling in custom elements and libraries to fill specific needs, and spend our time building applications, not mapping icebergs.

Q&A

Q: Why do you hate framework authors?

A: I don’t hate them. Some of my best friends are framework authors. I will admit a bit of inspiration from the tongue-in-cheek you have ruined javascript, but again, no derision intended for framework authors.

Q: You can’t do ____ in HTML5, for that you need a framework.

A: First, that's not a question. Second, thanks for pointing that out. Now let's work together to add the capabilities to HTML 5 that allows ____ to be done w/o a framework.

Q: But ___ isn't a framework, it's a library!

A: Yeah, like I said, it’s a gradient from gist to framework, and you might draw the lines slightly differently from me. That's OK, this isn't about the categorization of any one particular piece of software, it's about moving away from frameworks.

Q: I've been doing this for years, with ___ and ___ and ___.

A: Again, that's not a question, but regardless, good for you, you should be in good shape to help everyone else.

Q: So everyone needs to rewrite dropdown menus, tabs, sliders and toggles themselves?

A: Absolutely not, the point is there should be a way to create those elements in a way that doesn't require buying into one particular framework.

Q: Dude, all those HTML Imports are going to kill my sites performance.

A: Yes, if you implemented all this stuff naively it would, which is why I mentioned the need for tools to compile and crush all the HTML, CSS, and JS.

Q: So I'm not supposed to use any libraries?

A: No, that's not what I said, I was very careful to delineate a line between libraries and frameworks, a library providing an orthogonal piece of functionality that can be used with other libraries. Libraries are fine, it's the frameworks that demand 100% buyin that I'd like to see us move away from.

Q: But I like data binding!

A: Lots of people do; I was only expressing a personal preference. I didn't say that you shouldn't use data binding, only that you don't need to adopt an entire framework to get it; there are standalone libraries for that.

Learning to Be Flexible

 ∗ A List Apart: The Full Feed

As a freelancer, I work in a lot of different code repos. Almost every team I work with has different ideas of how code should be organized, maintained, and structured.

Now, I’m not here to start a battle about tabs versus spaces or alphabetical order of CSS properties versus organizing in terms of concerns (positioning styles, then element layout styles, then whatever else), because I’m honestly not attached to any one system anymore. I used to be a one-tab kind of person, along with not really even thinking about the ordering of my properties, but slowly, over time, I’ve realized that most of that doesn’t really matter. In all the projects I’ve worked on, the code got written and the product or site worked for the users—which is really the most important thing. What gets me excited about projects now is the code, making something work, seeing it work across different devices, seeing people use something I built, not getting upset about how it’s written.

Since I went down the freelance route again earlier this year, I’m working with many different teams and they all have different standards for how their code should be written. What I really want to know when I start a project is what the standards are, so I can adhere to them. For many teams that means a quick look through their documentation (when they have it, it’s a dream come true—there are no questions and I can just get to work). For other teams, it means I ask a lot of questions after I’ve taken a look at the code to verify how they prefer to do things.

Even more so than just thinking about how to write code, there’s the fact that I may be working in straight CSS, Sass, Stylus, Handlebars, plain old HTML, or Jade and I usually roll right along with that as well. Every team makes decisions that suit them and their way of working—I’m there to make life easier by coming in and helping them get a job done, not tell them their whole setup is wrong. The variety keeps me on my toes, but it also helps me remember that there isn’t just one way to do any of this.

What has this really done for me? I’ve started letting go of some things. I have opinions on how to structure and write CSS, but whether it’s written with a pre-processor or not, I don’t always care, and which pre-processor matters less to me as well. Any way you do it, you can get the job done. Choosing what works best for your team is what’s most important, not what anyone outside the team says is the “right” or “only” way to do something.

The Specialized Web: Working with Subject-Matter Experts

 ∗ A List Apart: The Full Feed

The time had come for The Big Departmental Website Redesign, and my content strategist heart was all aflutter. Since I work at a research university, the scope wasn’t just the department’s site—there were also 20 microsites focusing on specific faculty projects. Each one got an audit, an inventory, and a new strategy proposal.

I met one-on-one with each faculty member to go over the plans, and they loved them. Specific strategy related to their users and their work! Streamlined and clarified content to help people do what needed doing! “Somebody pinch me,” I enthused after another successful and energizing meeting.

Don’t worry, the pinch came.

I waltzed into my next microsite meeting, proud of my work and capabilities. I outlined my grand plan to this professor, but instead of the enthusiasm I expected, I was promptly met with a brick wall of “not interested.” She dismissed my big strategy with frowns and flat refusals without elaboration. Not to be deterred, I took a more specific tack, pointing out that the photos on the site felt disconnected from the research. No dice: she insisted that the photos not only needed to stay, but were critical to understanding the heart of the research itself.

She shot down idea after idea, all the while maintaining that the site should somehow be better yet not change. My frustration mounted, and I finally pulled my papers together and asked, “Do you really even need a website?!” Of course, she scoffed. Meeting over.

Struggles with subject-matter experts (SMEs) are as diverse as the subject-matter experts themselves. Whether they’re surgeons, C-level executives, engineers, policy makers, faculty—we as web workers need SMEs for their specialized content knowledge. Arming yourself with the right tools, skills, and mentalities will make your work and projects run smoother for everyone on your team—SME included.

The right frame of mind

Know that nobody comes to the table with a clean slate. While the particulars may be new—a web presence, a social media campaign, a new database-driven tool—projects aren’t.

When starting off a project, I’ll ask each person why they’re at the table. Even though it may be obvious why the SME is on the team, each person gets equal time (no more than a minute or two) to state how what they do relates to the project or outcome. You’re all qualified to be there, and stating those qualifications not only builds familiarity, but provides everyone with a picture of the team as a whole.

I see SMEs as colleagues and co-collaborators, no matter what they may think of me and my lack of similar specialized knowledge. I don’t come to them from a service mentality—that they give me their great ideas and I humbly craft them to be web-ready. We’re working together to create something that can serve and help the user.

Listening for context

After my disastrous initial meeting with the prickly professor, I gave myself some time to calm down, and scheduled another meeting with one thing on my agenda: listening. I knew I was missing part of the story, and when we sat down again, I told the SME that I only wanted her to talk to me about the site, the content, and the research.

I wasn’t satisfied with her initial surface-level responses, because they weren’t solvable problems. To find the deeper root causes of her reluctance, I busted out my friends the Five Ws (and that tagalong how). When she insisted something couldn’t be removed or changed, I breezed right past why, because it wasn’t getting me anywhere. Instead, I asked: when did you choose this image? Where did this image come from? If it’s so essential to the site, it must have a history. I kept asking questions until I understood the context that existed already around this site. Once I understood the context, I could identify the need that content served, and could make sure that need was addressed, rather than just cutting it out.

Through this deeper line of questioning, I learned that the SME had been through an earlier redesign process with a different web team. They started off much in the same way I had—with big plans for her content, and not a lot of time for her. The design elements and photos that she was determined to hang on to? That was all that she had been able to control in the process before.

By swooping in with my ideas, I was just another Web Person to her, with mistrust feeding off old—but still very real—feelings of being ignored and passed over. It was my responsibility to build the working relationship back up and make it productive.

In the end, the SME and I agreed to start off with only a few changes—moving to a 960-pixel width and removing dead links—and left the rest of the content and structure as-is in the migration. This helped build her trust that I would not only listen to her, but be a good steward of her content. When we revisited the content later on, she was much more receptive to all my big ideas.

If someone seems afraid, ornery, reluctant, distrustful, or any other work-hampering trait, they’re likely not doing it just to be a jerk—there are histories, insecurities, and fears at work beneath the less-than-ideal behavior, as Kerry-Anne Gilowey points out in her presentation “The People Puzzle: Making the Pieces Fit.”

Listening is a key skill here: let them be heard, and try to uncover what’s at the root of their resistance. Some people may have a natural affinity for these people skills, but anyone will benefit from spending time practicing and working on them.

Tools before strategy, heading for tragedy

Being a good listener, however, is not a simple Underpants Gnome scheme toward project success:

  1. Listen to your frustrating SME
  2. PROFIT

Sometimes you and your SME are on the same page, ready to hop right in to Shiny New Project World! And hey, they have a great idea of what that new project is, and it is totally a Facebook page. Or a Twitter feed. Or an Instagram account, even though there is nothing to take photographs of.

This doesn’t necessarily indicate a mentality of “Social media, how hard can it be!”–instead, exuberance signals your SME’s desire to be involved in the work.

In the case of social media like Facebook or Twitter, the SME knows there is a conversation, a connection, happening somewhere, and they want to be a part of it. They may latch onto the thing they’ve heard of—maybe they check out photos of their friend’s kids on Facebook, or saw the use of hashtags mentioned during a big event like the World Cup. They’re not picking a specific tool just to be stubborn—they often just don’t have a clue as to how many options they actually have.

Sometimes the web is a little freaky, so we might as well stare it in the face together:

The Conversation Prism, a visual map of social media.


Each wedge of the graphic is a different type of service, and inside the wedge are the sites or tools or apps that offer that service. This is a great way to show the SME the large toolbox at our disposal, and the need to be mindful and strategic in our selection.

After peering at the glorious toolbox of possible options, it becomes clear we’ll need a strategy to pick the right tool—an Allen wrench is great for building an IKEA bookshelf, but is lousy for tearing down drywall. I start my SME off with homework—a few simple, top-level questions:

  1. Who is this project/site/page for?
  2. Who is this project/site/page not for?

Oftentimes, this is the first time the SME has really thought about audience. If the answer to Question 1 is “everyone,” I start to ask about specific stakeholder groups: Customers? Instructors? Board Members? Legislators? Volunteers? As soon as we can get one group in the “not for” column, the conversation moves forward more easily.

An SME who says a website is “for everyone” is not coming from a place of laziness or obstinacy; the SMEs I work with simply want their website to be the most helpful to the most people.

  3. What other sites out there are like, or related to, the one we hope to make?

SMEs know who their peers, their competitors, and their colleagues are. While you may toil for hours, days, or weeks looking at material you think is comparable, your SME will be able to rattle off people or projects for you to check out in the blink of an eye. Their expertise saves you time.

There are obviously a lot more questions that get asked about a project, but these are a great start to collaborative work, and they function independently of specific tools. They facilitate an ongoing discussion about the Five Ws, and lay a good foundation to think about the practical side of the how.

Get yourself (and your project, and your team) right

It is possible to have a great working relationship with an SME. The place to start is with people—meet your SME, and introduce yourself! Meet as soon as the project starts, or even earlier.

Go to their stomping grounds

If you work with a large group of SMEs, find out where and when they gather (board meetings, staff retreats, weekly team check-ins) and get on the agenda. I managed to grab five minutes of a faculty meeting and told everyone who I was, a very basic overview of what I did, and that I was looking forward to working together—which sped up putting faces to names.

Find other avenues

If you’re having trouble locating the SMEs—either because they’re phenomenally busy or reluctant to work together—try tracking down an assistant. These assistants might have access to some of the same specialized knowledge as your SME (in the case of a research assistant), or they could have more access to your SME themselves (in the case of an executive assistant or other calendar-wrangler). Assistants are phenomenal people to know and respect in either of these cases; build a good, trusting relationship with them, and projects will move forward without having to wait for one person’s calendar to clear.

Make yourself available

Similarly, making yourself easier to find can open doors. I told people it was fine to “drop by any time,” and, while true, actually left people with no sense of my availability. When I started establishing set “office hours” instead, I found that drop-in meetings happened more often and more predictably. People knew that from 9 to 11 on Tuesdays and Thursdays, I was happy to talk about any random thing on their mind. I ended up having more impromptu meetings that led to better work.

For those of you about to collaborate, we salute you

SMEs, as stated in their very name, have specialized knowledge I don’t. However, the flip side is also true: my specialized knowledge is something they need so their content can be useful and usable on the web. Though you and your SMEs may be coming from different places, with different approaches to your work, and different skill sets and knowledge, you’ve got to work together to advance the project.

Do the hard work to understand each other, and move forward even if the steps seem tiny (don’t let perfect be the enemy of the good!). Find a seed of respect for each other’s knowledge, nurture it as it grows, and bask in the fruits of your labor—together.

Writing HTTP Middleware in Go

 ∗ Justinas Stankevičius

In the context of web development, "middleware" usually stands for "a part of an application that wraps the original application, adding additional functionality". It's a concept that usually seems to be somewhat underappreciated, but I think middleware is great.

For one, a good middleware has a single responsibility, is pluggable and self-contained. That means you can plug it in your app at the interface level and have it just work. It doesn't affect your coding style, it isn't a framework, but merely another layer in your request handling cycle. There's no need to rewrite your code: if you decide that you want the middleware, you add it into the equation, if you change your mind, you remove it. That's it.

Looking at Go, HTTP middleware is quite prevalent, even in the standard library. Although it might not be obvious at first, functions in the net/http package, like StripPrefix or TimeoutHandler are exactly what we defined middleware to be: they wrap your handler and take additional steps when dealing with requests or responses.

My recent Go package nosurf is middleware too. I intentionally designed it as one from the very beginning. In most cases you don't need to be aware of things happening at the application layer to do a CSRF check: nosurf, like any proper middleware, stands completely on its own and works with any tools that use the standard net/http interface.

You can also use middleware for:

  • Mitigating BREACH attack by length hiding
  • Rate-limiting
  • Blocking evil bots
  • Providing debugging information
  • Adding HSTS, X-Frame-Options headers
  • Recovering gracefully from panics (see the sketch after this list)
  • ...and probably many others
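To make the panic-recovery item above concrete, here's a minimal sketch of such a middleware. It's my own illustration, not taken from nosurf or the standard library, and it assumes the usual "log" and "net/http" imports:

func recoverPanics(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // If any downstream handler panics, turn it into a 500
        // response instead of letting net/http abort the connection.
        // (For simplicity this ignores the case where the handler
        // had already started writing the response.)
        defer func() {
            if err := recover(); err != nil {
                log.Printf("panic while serving %s: %v", r.URL.Path, err)
                http.Error(w, http.StatusText(500), 500)
            }
        }()
        next.ServeHTTP(w, r)
    })
}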

Writing a simple middleware

For the first example, we'll write middleware that only allows users to visit our website through a single domain (specified by the HTTP Host header). Middleware like that could serve to protect the web application from host spoofing attacks.

Constructing the type

For starters, let's define a type for the middleware. We'll call it SingleHost.

type SingleHost struct {
    handler     http.Handler
    allowedHost string
}

It consists of only two fields:

  • the wrapped Handler we'll call if the request comes with a valid Host.
  • the allowed host value itself.

As we made the field names lowercase, making them private to our package, we should also make a constructor for our type.

func NewSingleHost(handler http.Handler, allowedHost string) *SingleHost {
    return &SingleHost{handler: handler, allowedHost: allowedHost}
}

Request handling

Now, for the actual logic. To implement http.Handler, our type only needs to have one method:

type Handler interface {
        ServeHTTP(ResponseWriter, *Request)
}

And here it is:

func (s *SingleHost) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    host := r.Host
    if host == s.allowedHost {
        s.handler.ServeHTTP(w, r)
    } else {
        w.WriteHeader(403)
    }
}

ServeHTTP method simply checks the Host header on the request:

  • if it matches the allowedHost set by the constructor, it calls the wrapped handler's ServeHTTP method, thus passing the responsibility for handling the request.
  • if it doesn't match, it returns the 403 (Forbidden) status code and the request is dealt with.

The original handler's ServeHTTP is never called in the latter case, so not only does it not get a say in this, it won't even know such a request arrived at all.

Now that we're done coding our middleware, we just need to plug it in. Instead of passing our original Handler directly into the net/http server, we wrap it in the middleware.

singleHosted := NewSingleHost(myHandler, "example.com")
http.ListenAndServe(":8080", singleHosted)

An alternative approach

The middleware we just wrote is really simple: it literally consists of 15 lines of code. For writing such middleware, there exists a method with less boilerplate. Thanks to Go's support of first class functions and closures, and having the neat http.HandlerFunc wrapper, we'll be able to implement this as a simple function, rather than a separate struct type. Here is the function-based version of our middleware in its entirety.

func SingleHost(handler http.Handler, allowedHost string) http.Handler {
    ourFunc := func(w http.ResponseWriter, r *http.Request) {
        host := r.Host
        if host == allowedHost {
            handler.ServeHTTP(w, r)
        } else {
            w.WriteHeader(403)
        }
    }
    return http.HandlerFunc(ourFunc)
}

Here we declare a simple function called SingleHost that takes in a Handler to wrap and the allowed hostname. Inside it, we construct a function analogous to ServeHTTP from the previous version of our middleware. Our inner function is actually a closure, so it can access the variables from the outer function. Finally, HandlerFunc lets us use this function as a http.Handler.

Deciding whether to use a HandlerFunc or to roll out your own http.Handler type is ultimately up to you. While for basic cases a function might be enough, if you find your middleware growing, you might want to consider making your own struct type and separating the logic into several methods.

Meanwhile, the standard library actually uses both ways of building middleware. StripPrefix is a function that returns a HandlerFunc, while TimeoutHandler, although a function too, returns a custom struct type that handles the requests.
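Both helpers slot into a chain like any other middleware. Here's a short, illustrative snippet of my own (not from the original article; the ./static directory, the two-second timeout and the :8080 port are arbitrary choices) showing them used together:

package main

import (
    "net/http"
    "time"
)

func main() {
    // Serve files from ./static under the /static/ URL prefix,
    // and give each request a two-second budget before the
    // "request timed out" message is returned instead.
    static := http.StripPrefix("/static/", http.FileServer(http.Dir("./static")))
    timed := http.TimeoutHandler(static, 2*time.Second, "request timed out")

    http.Handle("/static/", timed)
    http.ListenAndServe(":8080", nil)
}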

A more complex case

Our SingleHost middleware was trivial: we checked one attribute of the request and either passed the request to the original handler, not caring about it anymore, or returned a response ourselves and didn't let the original handler touch it at all. Nevertheless, there are cases where, rather than acting based on what the request is, our middleware has to post-process the response after the original handler has written it, modifying it in some way.

Appending data is easy

If we just want to append some data after the body written by the wrapped handler, all we have to do is call Write() after it finishes:

type AppendMiddleware struct {
    handler http.Handler
}

func (a *AppendMiddleware) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    a.handler.ServeHTTP(w, r)
    w.Write([]byte("Middleware says hello."))
}

The response body will now consist of whatever the original handler outputted, followed by Middleware says hello..

The problem

Doing other types of response manipulation is a bit harder, though. Say we'd like to prepend data to the response instead of appending it. If we call Write() before the original handler does, the default status code and headers get written out immediately, and the original handler no longer has any control over what those will be.

Modifying the original output in any other way (say, replacing strings in it), changing certain response headers or setting a different status code won't work for a similar reason: by the time the wrapped handler returns, those will already have been sent to the client.

To counter this we need a particular kind of ResponseWriter that would work as a buffer, gathering the response and storing it for later use (and modifications). We would then pass this buffering ResponseWriter to the original handler instead of giving it the real RW, thus preventing it from actually sending the response to the user just yet.

Luckily, there's a tool just like that in the Go standard library. ResponseRecorder in the net/http/httptest package does all we need: it saves the response status code, a map of response headers and accumulates the body into a buffer of bytes. Although (like the package name implies) it's intended to be used in tests, it fits our use case as well.

Let's look at an example of middleware that uses ResponseRecorder and modifies everything in a response, just for the sake of completeness.

type ModifierMiddleware struct {
    handler http.Handler
}

func (m *ModifierMiddleware) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    rec := httptest.NewRecorder()
    // passing a ResponseRecorder instead of the original RW
    m.handler.ServeHTTP(rec, r)
    // after this finishes, we have the response recorded
    // and can modify it before copying it to the original RW

    // we copy the original headers first
    for k, v := range rec.Header() {
        w.Header()[k] = v
    }
    // and set an additional one
    w.Header().Set("X-We-Modified-This", "Yup")

    // The body hasn't been written (to the real RW) yet,
    // so we can prepend some data.
    data := []byte("Middleware says hello again. ")

    // If the wrapped handler set a Content-Length, it no longer
    // matches, so we add the length of our own data to it.
    // This has to happen before WriteHeader below, because header
    // changes made after that call are ignored. If no Content-Length
    // was recorded, we leave it out and let net/http work it out.
    if cl := rec.Header().Get("Content-Length"); cl != "" {
        clen, _ := strconv.Atoi(cl)
        clen += len(data)
        w.Header().Set("Content-Length", strconv.Itoa(clen))
    }

    // only then the status code, as this call finalizes the headers
    w.WriteHeader(418)

    // finally, write out our data
    w.Write(data)
    // then write out the original body
    w.Write(rec.Body.Bytes())
}

And here's the response we get by wrapping a handler that would otherwise simply return "Success!" with our middleware.

HTTP/1.1 418 I'm a teapot
X-We-Modified-This: Yup
Content-Type: text/plain; charset=utf-8
Content-Length: 37
Date: Tue, 03 Sep 2013 18:41:39 GMT

Middleware says hello again. Success!

This opens up a whole lot of new possibilities. The wrapped handler is now completely in our control: even after it has handled the request, we can manipulate the response in any way we want.

Sharing data with other handlers

In various cases, your middleware might need to expose certain information to other middleware or to your app itself. For example, nosurf needs to give the user a way to access the CSRF token and the reason for failure (if any).

A nice pattern for this is to use a map, usually an unexported one, that maps http.Request pointers to pieces of the data needed, and then expose package (or handler) level functions to access the data.

I used this pattern for nosurf too. Here, I created a global context map. Note that a mutex is also needed, since Go's maps aren't safe for concurrent access by default.

type csrfContext struct {
    token string
    reason error
}

var (
    contextMap = make(map[*http.Request]*csrfContext)
    cmMutex    = new(sync.RWMutex)
)

The data is set by the handler and exposed via exported functions like Token().

func Token(req *http.Request) string {
    cmMutex.RLock()
    defer cmMutex.RUnlock()

    ctx, ok := contextMap[req]
    if !ok {
            return ""
    }

    return ctx.token
}
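The setting side isn't shown in the article. As a rough sketch of the same pattern (the names setToken and clearContext here are hypothetical, not nosurf's actual internals), the middleware would write to the map under the write lock before calling the wrapped handler, and clean up afterwards:

// setToken records the token for a request; the middleware would
// call this in ServeHTTP before invoking the wrapped handler.
// (Hypothetical helper, not nosurf's real internal API.)
func setToken(req *http.Request, token string) {
    cmMutex.Lock()
    defer cmMutex.Unlock()

    ctx, ok := contextMap[req]
    if !ok {
        ctx = new(csrfContext)
        contextMap[req] = ctx
    }
    ctx.token = token
}

// clearContext removes the entry once the request has been served,
// so the map doesn't grow without bound. (Also hypothetical.)
func clearContext(req *http.Request) {
    cmMutex.Lock()
    defer cmMutex.Unlock()
    delete(contextMap, req)
}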

You can find the whole implementation in the context.go file in nosurf's repository.

While I chose to implement this on my own for nosurf, there exists a handy gorilla/context package that implements a generic map for saving request information. In most cases, it should suffice and protect you from pitfalls of implementing a shared storage on your own. It even has middleware of its own that clears the request data after it's been served.

All in all

The intention of this article was both to draw fellow gophers' attention to middleware as a concept and to demonstrate some of the basic building blocks for writing middleware in Go. Despite being a relatively young language, Go has an amazing standard HTTP interface. It's one of the factors that make coding middleware for Go a painless and even fun process.

Nevertheless, there is still a lack of quality HTTP tools for Go. Most, if not all, of the middleware ideas for Go I mentioned earlier are yet to come to life. Now that you know how to build middleware for Go, why not do it yourself? ;)

P.S. You can find samples for all the middleware written for this post in a GitHub gist.

Making and Using HTTP Middleware

 ∗ Alex Edwards

When you're building a web application there's probably some shared functionality that you want to run for many (or even all) HTTP requests. You might want to log every request, gzip every response, or check a cache before doing some heavy processing.

One way of organising this shared functionality is to set it up as middleware – self-contained code which independently acts on a request before or after your normal application handlers. In Go a common place to use middleware is between a ServeMux and your application handlers, so that the flow of control for a HTTP request looks like:

ServeMux => Middleware Handler => Application Handler

In this post I'm going to explain how to make custom middleware that works in this pattern, as well as running through some concrete examples of using third-party middleware packages.

The Basic Principles

Making and using middleware in Go is fundamentally simple. We want to:

  • Implement our middleware so that it satisfies the http.Handler interface.
  • Build up a chain of handlers containing both our middleware handler and our normal application handler, which we can register with a http.ServeMux.

I'll explain how.

Hopefully you're already familiar with the following method for constructing a handler (if not, it's probably best to read this primer before continuing).

    func messageHandler(message string) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte(message))
      })
    }
    

In this snippet we're placing our handler logic (a simple w.Write) in an anonymous function and closing-over the message variable to form a closure. We're then converting this closure to a handler by using the http.HandlerFunc adapter and returning it.

We can use this same approach to create a chain of handlers. Instead of passing a string into the closure (like above) we could pass the next handler in the chain as a variable, and then transfer control to this next handler by calling its ServeHTTP() method.

This gives us a complete pattern for constructing middleware:

    func exampleMiddleware(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Our middleware logic goes here...
        next.ServeHTTP(w, r)
      })
    }
    

You'll notice that this middleware function has a func(http.Handler) http.Handler signature. It accepts a handler as a parameter and returns a handler. This is useful for two reasons:

  • Because it returns a handler we can register the middleware function directly with the standard ServeMux provided by the net/http package.
  • We can create an arbitrarily long handler chain by nesting middleware functions inside each other. For example:

http.Handle("/", middlewareOne(middlewareTwo(finalHandler)))

Illustrating the Flow of Control

Let's look at a stripped-down example with some middleware that simply writes log messages to stdout:

File: main.go
    package main

    import (
      "log"
      "net/http"
    )

    func middlewareOne(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Println("Executing middlewareOne")
        next.ServeHTTP(w, r)
        log.Println("Executing middlewareOne again")
      })
    }

    func middlewareTwo(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Println("Executing middlewareTwo")
        if r.URL.Path != "/" {
          return
        }
        next.ServeHTTP(w, r)
        log.Println("Executing middlewareTwo again")
      })
    }

    func final(w http.ResponseWriter, r *http.Request) {
      log.Println("Executing finalHandler")
      w.Write([]byte("OK"))
    }

    func main() {
      finalHandler := http.HandlerFunc(final)

      http.Handle("/", middlewareOne(middlewareTwo(finalHandler)))
      http.ListenAndServe(":3000", nil)
    }
    

Run this application and make a request to http://localhost:3000. You should get log output similar to this:

    $ go run main.go
    2014/10/13 20:27:36 Executing middlewareOne
    2014/10/13 20:27:36 Executing middlewareTwo
    2014/10/13 20:27:36 Executing finalHandler
    2014/10/13 20:27:36 Executing middlewareTwo again
    2014/10/13 20:27:36 Executing middlewareOne again
    

It's clear to see how control is being passed through the handler chain in the order we nested them, and then back up again in the reverse direction.

We can stop control propagating through the chain at any point by issuing a return from a middleware handler.

In the example above I've included a conditional return in the middlewareTwo function. Try it by visiting http://localhost:3000/foo and checking the log again – you'll see that this time the request gets no further than middlewareTwo before passing back up the chain.

Understood. How About a Proper Example?

OK, let's say that we're building a service which processes requests containing an XML body. We want to create some middleware which a) checks for the existence of a request body, and b) sniffs the body to make sure it is XML. If either of those checks fails, we want our middleware to write an error message and to stop the request from reaching our application handlers.

File: main.go
    package main

    import (
      "bytes"
      "net/http"
    )

    func enforceXMLHandler(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Check for a request body
        if r.ContentLength == 0 {
          http.Error(w, http.StatusText(400), 400)
          return
        }
        // Check its MIME type
        buf := new(bytes.Buffer)
        buf.ReadFrom(r.Body)
        if http.DetectContentType(buf.Bytes()) != "text/xml; charset=utf-8" {
          http.Error(w, http.StatusText(415), 415)
          return
        }
        next.ServeHTTP(w, r)
      })
    }

    func main() {
      finalHandler := http.HandlerFunc(final)

      http.Handle("/", enforceXMLHandler(finalHandler))
      http.ListenAndServe(":3000", nil)
    }

    func final(w http.ResponseWriter, r *http.Request) {
      w.Write([]byte("OK"))
    }
    

This looks good. Let's test it by creating a simple XML file:

    $ cat > books.xml
    <?xml version="1.0"?>
    <books>
      <book>
        <author>H. G. Wells</author>
        <title>The Time Machine</title>
        <price>8.50</price>
      </book>
    </books>
    

And making some requests using cURL:

    $ curl -i localhost:3000
    HTTP/1.1 400 Bad Request
    Content-Type: text/plain; charset=utf-8
    Content-Length: 12

    Bad Request
    $ curl -i -d "This is not XML" localhost:3000
    HTTP/1.1 415 Unsupported Media Type
    Content-Type: text/plain; charset=utf-8
    Content-Length: 23

    Unsupported Media Type
    $ curl -i -d @books.xml localhost:3000
    HTTP/1.1 200 OK
    Date: Fri, 17 Oct 2014 13:42:10 GMT
    Content-Length: 2
    Content-Type: text/plain; charset=utf-8

    OK
    

Using Third-Party Middleware

Rather than rolling your own middleware all the time you might want to use a third-party package. We're going to look at a couple here: goji/httpauth and Gorilla's LoggingHandler.

The goji/httpauth package provides HTTP Basic Authentication functionality. It has a SimpleBasicAuth helper which returns a function with the signature func(http.Handler) http.Handler. This means we can use it in exactly the same way as our custom-built middleware.

    $ go get github.com/goji/httpauth
    
File: main.go
    package main

    import (
      "github.com/goji/httpauth"
      "net/http"
    )

    func main() {
      finalHandler := http.HandlerFunc(final)
      authHandler := httpauth.SimpleBasicAuth("username", "password")

      http.Handle("/", authHandler(finalHandler))
      http.ListenAndServe(":3000", nil)
    }

    func final(w http.ResponseWriter, r *http.Request) {
      w.Write([]byte("OK"))
    }
    

If you run this example you should get the responses you'd expect for valid and invalid credentials:

    $ curl -i username:password@localhost:3000
    HTTP/1.1 200 OK
    Content-Length: 2
    Content-Type: text/plain; charset=utf-8

    OK
    $ curl -i username:wrongpassword@localhost:3000
    HTTP/1.1 401 Unauthorized
    Content-Type: text/plain; charset=utf-8
    Www-Authenticate: Basic realm="Restricted"
    Content-Length: 13

    Unauthorized
    

Gorilla's LoggingHandler – which records Apache-style logs – is a bit different.

It uses the signature func(out io.Writer, h http.Handler) http.Handler, so it takes not only the next handler but also the io.Writer that the log will be written to.

Here's a simple example in which we write logs to a server.log file:

    go get github.com/gorilla/handlers
    
File: main.go
    package main

    import (
      "github.com/gorilla/handlers"
      "net/http"
      "os"
    )

    func main() {
      finalHandler := http.HandlerFunc(final)

      logFile, err := os.OpenFile("server.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
      if err != nil {
        panic(err)
      }

      http.Handle("/", handlers.LoggingHandler(logFile, finalHandler))
      http.ListenAndServe(":3000", nil)
    }

    func final(w http.ResponseWriter, r *http.Request) {
      w.Write([]byte("OK"))
    }
    

In a trivial case like this our code is fairly clear. But what happens if we want to use LoggingHandler as part of a larger middleware chain? We could easily end up with a declaration looking something like this:

http.Handle("/", handlers.LoggingHandler(logFile, authHandler(enforceXMLHandler(finalHandler))))

Woah! That makes my brain hurt.

We can clean this up by creating a constructor function (let's call it myLoggingHandler) with the signature func(http.Handler) http.Handler. This will allow us to nest it more neatly with other middleware:

    func myLoggingHandler(h http.Handler) http.Handler {
      logFile, err := os.OpenFile("server.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
      if err != nil {
        panic(err)
      }
      return handlers.LoggingHandler(logFile, h)
    }

    func main() {
      finalHandler := http.HandlerFunc(final)

      http.Handle("/", myLoggingHandler(finalHandler))
      http.ListenAndServe(":3000", nil)
    }
    

If you run this application and make a few requests your server.log file should look something like this:

    $ cat server.log
    127.0.0.1 - - [21/Oct/2014:18:56:43 +0100] "GET / HTTP/1.1" 200 2
    127.0.0.1 - - [21/Oct/2014:18:56:36 +0100] "POST / HTTP/1.1" 200 2
    127.0.0.1 - - [21/Oct/2014:18:56:43 +0100] "PUT / HTTP/1.1" 200 2
    

If you're interested, here's a gist of the three middleware handlers from this post combined in one example.

As a side note: notice that the Gorilla LoggingHandler is recording the response status (200) and response length (2) in the logs. This is interesting. How did the upstream logging middleware get to know about the response body written by our application handler?

It does this by defining its own responseLogger type which wraps http.ResponseWriter, and creating custom responseLogger.Write() and responseLogger.WriteHeader() methods. These methods not only write the response but also store the size and status for later examination. Gorilla's LoggingHandler then passes the responseLogger on to the next handler in the chain, instead of the normal http.ResponseWriter.

(I'll probably write a proper tutorial about this sometime, in which case I'll add a link here!)
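
In the meantime, here's a rough sketch of the general idea. This is not Gorilla's actual code – the statusRecorder type and recordingMiddleware function are made-up names purely for illustration, and the snippet assumes the log and net/http packages are imported – but it shows how wrapping http.ResponseWriter lets upstream middleware observe the response status and size:

    // An illustrative wrapper around http.ResponseWriter which records the
    // status code and the number of bytes written to the response.
    type statusRecorder struct {
      http.ResponseWriter
      status int
      size   int
    }

    func (rec *statusRecorder) WriteHeader(code int) {
      rec.status = code
      rec.ResponseWriter.WriteHeader(code)
    }

    func (rec *statusRecorder) Write(b []byte) (int, error) {
      if rec.status == 0 {
        rec.status = http.StatusOK // implicit 200 if WriteHeader was never called
      }
      n, err := rec.ResponseWriter.Write(b)
      rec.size += n
      return n, err
    }

    func recordingMiddleware(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Hand the wrapper (not the original writer) to the next handler.
        rec := &statusRecorder{ResponseWriter: w}
        next.ServeHTTP(rec, r)
        log.Printf("%s %s -> %d (%d bytes)", r.Method, r.URL.Path, rec.status, rec.size)
      })
    }
    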

Additional Tools

Alice by Justinas Stankevičius is a clever and very lightweight package which provides some syntactic sugar for chaining middleware handlers. At its most basic, Alice lets you rewrite this:

http.Handle("/", myLoggingHandler(authHandler(enforceXMLHandler(finalHandler))))

As this:

http.Handle("/", alice.New(myLoggingHandler, authHandler, enforceXMLHandler).Then(finalHandler))

In my eyes at least, that code is slightly clearer to understand at a glance. However, the real benefit of Alice is that it lets you specify a handler chain once and reuse it for multiple routes. Like so:

    stdChain := alice.New(myLoggingHandler, authHandler, enforceXMLHandler)

    http.Handle("/foo", stdChain.Then(fooHandler))
    http.Handle("/bar", stdChain.Then(barHandler))
    

If you found this post useful, you might like to subscribe to my RSS feed.

Form Validation and Processing in Go

 ∗ Alex Edwards

In this post I'm going to run through a start-to-finish tutorial for building an online contact form in Go. I'll be trying to explain the different steps in detail, as well as outlining a sensible processing pattern that can be extended to other forms.

Let's begin the example by creating a new directory for the application, along with a main.go file for our code and a couple of vanilla HTML templates:

    $ mkdir -p contact-form/templates
    $ cd contact-form
    $ touch main.go templates/index.html templates/confirmation.html
    
File: templates/index.html
    <h1>Contact</h1>
    <form action="/" method="POST" novalidate>
      <div>
        <label>Your email:</label>
        <input type="email" name="email">
      </div>
      <div>
        <label>Your message:</label>
        <textarea name="content"></textarea>
      </div>
      <div>
        <input type="submit" value="Send message">
      </div>
    </form>
    
File: templates/confirmation.html
    <h1>Confirmation</h1>
    <p>Your message has been sent!</p>
    

Our contact form will issue a POST request to /, which will be the same URL path that we use for presenting the form. This means that we'll need to route requests for the same URL to different handlers based on the HTTP method.

There are a few ways of achieving this, but we'll use Pat – a third-party routing library which I've talked about before. You'll need to install it if you're following along:

    $ go get github.com/bmizerany/pat
    

Go ahead and create a skeleton for the application:

File: main.go
    package main

    import (
      "github.com/bmizerany/pat"
      "html/template"
      "log"
      "net/http"
    )

    func main() {
      mux := pat.New()
      mux.Get("/", http.HandlerFunc(index))
      mux.Post("/", http.HandlerFunc(send))
      mux.Get("/confirmation", http.HandlerFunc(confirmation))

      log.Println("Listening...")
      http.ListenAndServe(":3000", mux)
    }

    func index(w http.ResponseWriter, r *http.Request) {
      render(w, "templates/index.html", nil)
    }

    func send(w http.ResponseWriter, r *http.Request) {
      // Validate form
      // Send message in an email
      // Redirect to confirmation page
    }

    func confirmation(w http.ResponseWriter, r *http.Request) {
      render(w, "templates/confirmation.html", nil)
    }

    func render(w http.ResponseWriter, filename string, data interface{}) {
      tmpl, err := template.ParseFiles(filename)
      if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
      }
      if err := tmpl.Execute(w, data); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
      }
    }
    

This is fairly straightforward stuff so far. The only real point of note is that we've put the template handling into a render function to cut down on boilerplate code.

If you run the application:

    $ go run main.go
    Listening...
    

And visit localhost:3000 in your browser you should see the contact form, although it doesn't do anything yet.

Now for the interesting part. Let's add a couple of validation rules to our contact form, display the errors if there are any, and make sure that the form values get presented back if there's an error so the user doesn't need to retype them.

One approach to setting this up is to add the code inline in our send handler, but personally I find it cleaner and neater to break out the logic into a separate message.go file:

    $ touch message.go
    
File: message.go
    package main

    import (
      "regexp"
      "strings"
    )

    type Message struct {
      Email   string
      Content string
      Errors  map[string]string
    }

    func (msg *Message) Validate() bool {
      msg.Errors = make(map[string]string)

      re := regexp.MustCompile(".+@.+\\..+")
      matched := re.Match([]byte(msg.Email))
      if matched == false {
        msg.Errors["Email"] = "Please enter a valid email address"
      }

      if strings.TrimSpace(msg.Content) == "" {
        msg.Errors["Content"] = "Please write a message"
      }

      return len(msg.Errors) == 0
    }

    

So what's going on here?

We've started by defining a new Message type, consisting of the Email and Content values (which will hold the data from the submitted form), along with an Errors map to hold any validation error messages.

We then created a Validate method that acts on a given Message, which summarily checks the format of the email address and makes sure that the content isn't blank. In the event of any errors we add them to the Errors map, and finally return a true or false value to indicate whether validation passed successfully or not.

This approach means that we can keep the code in our send handler fantastically light. All we need it to do is retrieve the form values from the POST request, create a new Message object with them, and call Validate(). If the validation fails we'll then want to reshow the contact form, passing back the relevant Message object.

File: main.go
    ...
    func send(w http.ResponseWriter, r *http.Request) {
      msg := &Message{
        Email: r.FormValue("email"),
        Content: r.FormValue("content"),
      }

      if msg.Validate() == false {
        render(w, "templates/index.html", msg)
        return
      }

      // Send message in an email
      // Redirect to confirmation page
    }
    ...
    

As a side note, in this example above we're using the FormValue method on the request to access the POST data. We could also access the data directly via r.Form, but there is a gotcha to point out – by default r.Form will be empty until it is filled by calling ParseForm on the request. Once that's done, we can access it in the same way as any url.Values type. For example:

    err := r.ParseForm()
    // Handle error
    msg := &Message{
      Email: r.Form.Get("email"),
      Content: r.Form.Get("content"),
    }
    

Anyway, let's update our template so it shows the validation errors (if they exist) above the relevant fields, and repopulate the form inputs with any information that the user previously typed in:

File: templates/index.html
    <style type="text/css">.error {color: red;}</style>

    <h1>Contact</h1>
    <form action="/" method="POST" novalidate>
      <div>
        {{ with .Errors.Email }}
        <p class="error">{{ . }}</p>
        {{ end }}
        <label>Your email:</label>
        <input type="email" name="email" value="{{ .Email }}">
      </div>
      <div>
        {{ with .Errors.Content }}
        <p class="error" >{{ . }}</p>
        {{ end }}
        <label>Your message:</label>
        <textarea name="content">{{ .Content }}</textarea>
      </div>
      <div>
        <input type="submit" value="Send message">
      </div>
    </form>
    

Go ahead and give it a try:

    $ go run main.go message.go
    Listening...
    

Still, our contact form is pretty useless unless we actually do something with it. Let's add a Deliver method which sends the contact form message to a particular email address. In the code below I'm using Gmail, but the same thing should work with any other SMTP server.

File: message.go
    package main

    import (
      "fmt"
      "net/smtp"
      "regexp"
      "strings"
    )
    ...

    func (msg *Message) Deliver() error {
      to := []string{"someone@example.com"}
      body := fmt.Sprintf("Reply-To: %v\r\nSubject: New Message\r\n\r\n%v", msg.Email, msg.Content)

      username := "you@gmail.com"
      password := "..."
      auth := smtp.PlainAuth("", username, password, "smtp.gmail.com")

      return smtp.SendMail("smtp.gmail.com:587", auth, msg.Email, to, []byte(body))
    }
    

The final step is to head back to our main.go file, add some code to call Deliver(), and issue a 303 redirect to the confirmation page that we made earlier:

File: main.go
    ...
    func send(w http.ResponseWriter, r *http.Request) {
      msg := &Message{
        Email: r.FormValue("email"),
        Content: r.FormValue("content"),
      }

      if msg.Validate() == false {
        render(w, "templates/index.html", msg)
        return
      }

      if err := msg.Deliver(); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
      }
      http.Redirect(w, r, "/confirmation", http.StatusSeeOther)
    }
    ...
    

Additional Tools

If mapping form data to objects is something that you're doing a lot of, you may find Gorilla Schema's automatic decoder useful. If we were using it for our contact form example, the code would look a bit like this:

    import "github.com/gorilla/schema"
    ...
    err := r.ParseForm()
    // Handle error
    msg := new(Message)
    decoder := schema.NewDecoder()
    decoder.Decode(msg, r.Form)
    

Additionally, Goforms appears to be a promising idea, with a fairly slick Django-like approach to dealing with forms. However, the existing validation options are fairly limited and the library doesn't seem to be under active development at the moment. It's still worth a look though, especially if you're thinking of rolling something a bit more generic for your form handling.

If you found this post useful, you might like to subscribe to my RSS feed.

Golang Automatic Reloads

 ∗ Alex Edwards

I wrote a short Bash script to automatically reload Go programs.

The script acts as a light wrapper around go run, stopping and restarting it whenever a .go file in your current directory or $GOPATH/src folder is saved. I've been using it mainly when developing web applications, in the same way that I use Shotgun or Guard when working with Ruby.

You can grab this from the GitHub repository.

File: go-reload
    #!/bin/bash

    # Watch all *.go files in the specified directory
    # Call the restart function when they are saved
    function monitor() {
      inotifywait -q -m -r -e close_write --exclude '[^g][^o]$' $1 |
      while read line; do
        restart
      done
    }

    # Terminate and rerun the main Go program
    function restart {
      if [ "$(pidof $PROCESS_NAME)" ]; then
        killall -q -w -9 $PROCESS_NAME
      fi
      echo ">> Reloading..."
      go run $FILE_PATH $ARGS &
    }

    # Make sure all background processes get terminated
    function close {
      killall -q -w -9 inotifywait
      exit 0
    }

    trap close INT
    echo "== Go-reload"
    echo ">> Watching directories, CTRL+C to stop"

    FILE_PATH=$1
    FILE_NAME=$(basename $FILE_PATH)
    PROCESS_NAME=${FILE_NAME%%.*}

    shift
    ARGS=$@

    # Start the main Go program
    go run $FILE_PATH $ARGS &

    # Monitor the /src directories in all directories on the GOPATH
    OIFS="$IFS"
    IFS=':'
    for path in $GOPATH
    do
      monitor $path/src &
    done
    IFS="$OIFS"

    # Monitor the current directory
    monitor .
    

Usage

The only dependency for this script is inotify-tools, which is used to monitor the filesystem for changes.

    $ sudo apt-get install inotify-tools
    

Once you've downloaded (or copy-pasted) the script, you'll need to make it executable and move it to /usr/local/bin or another directory on your system path:

    $ wget https://raw.github.com/alexedwards/go-reload/master/go-reload
    $ chmod +x go-reload
    $ sudo mv go-reload /usr/local/bin/
    

You should then be able to use the go-reload command in place of go run:

    $ go-reload main.go
    == Go-reload
    >> Watching directories, CTRL+C to stop
    

If you found this post useful, you might like to subscribe to my RSS feed.

A Recap of Request Handling in Go

 ∗ Alex Edwards

Processing HTTP requests with Go is primarily about two things: ServeMuxes and Handlers.

A ServeMux is essentially a HTTP request router (or multiplexor). It compares incoming requests against a list of predefined URL paths, and calls the associated handler for the path whenever a match is found.

Handlers are responsible for writing response headers and bodies. Almost any object can be a handler, so long as it satisfies the Handler interface. In lay terms, that simply means it must have a ServeHTTP method with the following signature:

ServeHTTP(http.ResponseWriter, *http.Request)

Go's HTTP package ships with a few functions to generate common handlers, such as FileServer, NotFoundHandler and RedirectHandler. Let's begin with a simple but contrived example:

    $ mkdir handler-example
    $ cd handler-example
    $ touch main.go
    
File: main.go
    package main

    import (
      "log"
      "net/http"
    )

    func main() {
      mux := http.NewServeMux()

      rh := http.RedirectHandler("http://example.org", 307)
      mux.Handle("/foo", rh)

      log.Println("Listening...")
      http.ListenAndServe(":3000", mux)
    }
    

Let's step through this quickly:

  • In the main function we use the http.NewServeMux function to create an empty ServeMux.
  • We then use the http.RedirectHandler function to create a new handler. This handler 307 redirects all requests it receives to http://example.org.
  • Next we use the ServeMux.Handle function to register this with our new ServeMux, so it acts as the handler for all incoming requests with the URL path /foo.
  • Finally we create a new server and start listening for incoming requests with the http.ListenAndServe function, passing in our ServeMux for it to match requests against.

Go ahead and run the application:

    $ go run main.go
    Listening...
    

And visit http://localhost:3000/foo in your browser. You should find that your request gets successfully redirected.

The eagle-eyed of you might have noticed something interesting: The signature for the ListenAndServe function is ListenAndServe(addr string, handler Handler), but we passed a ServeMux as the second parameter.

We were able to do this because ServeMux also has a ServeHTTP method, meaning that it too satisfies the Handler interface.

For me it simplifies things to think of a ServeMux as just being a special kind of handler, which instead of providing a response itself passes the request on to a second handler. This isn't as much of a leap as it first sounds – chaining handlers together is fairly commonplace in Go.

Custom Handlers

Let's create a custom handler which responds with the current local time in a given format:

    type timeHandler struct {
      format string
    }

    func (th *timeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
      tm := time.Now().Format(th.format)
      w.Write([]byte("The time is: " + tm))
    }
    

The exact code here isn't too important.

All that really matters is that we have an object (in this case it's a timeHandler struct, but it could equally be a string or function or anything else), and we've implemented a method with the signature ServeHTTP(http.ResponseWriter, *http.Request) on it. That's all we need to make a handler.

Let's embed this in a concrete example:

File: main.go
    package main

    import (
      "log"
      "net/http"
      "time"
    )

    type timeHandler struct {
      format string
    }

    func (th *timeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
      tm := time.Now().Format(th.format)
      w.Write([]byte("The time is: " + tm))
    }

    func main() {
      mux := http.NewServeMux()

      th := &timeHandler{format: time.RFC1123}
      mux.Handle("/time", th)

      log.Println("Listening...")
      http.ListenAndServe(":3000", mux)
    }
    

In the main function we initialised the timeHandler in exactly the same way we would any normal struct, using the & symbol to yield a pointer. And then, like the previous example, we use the mux.Handle function to register this with our ServeMux.

Now when we run the application, the ServeMux will pass any request for /time straight on to our timeHandler.ServeHTTP method.

Go ahead and give it a try: http://localhost:3000/time.

Notice too that we could easily reuse the timeHandler in multiple routes:

    func main() {
      mux := http.NewServeMux()

      th1123 := &timeHandler{format: time.RFC1123}
      mux.Handle("/time/rfc1123", th1123)

      th3339 := &timeHandler{format: time.RFC3339}
      mux.Handle("/time/rfc3339", th3339)

      log.Println("Listening...")
      http.ListenAndServe(":3000", mux)
    }
    

Functions as Handlers

For simple cases (like the example above) defining new custom types and ServeHTTP methods feels a bit verbose. Let's look at an alternative approach, where we leverage Go's http.HandlerFunc type to coerce a normal function into satisfying the Handler interface.

Any function which has the signature func(http.ResponseWriter, *http.Request) can be converted into a HandlerFunc type. This is useful because HandlerFunc objects come with an inbuilt ServeHTTP method which – rather cleverly and conveniently – executes the content of the original function.

If that sounds confusing, try taking a look at the relevant source code. You'll see that it's a very succinct way of making a function satisfy the Handler interface.
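
For reference, the definition in the net/http source boils down to this – a function type with a ServeHTTP method that simply calls the function itself:

    type HandlerFunc func(ResponseWriter, *Request)

    // ServeHTTP calls f(w, r).
    func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
      f(w, r)
    }
    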

Let's reproduce the timeHandler application using this technique:

File: main.go
    package main

    import (
      "log"
      "net/http"
      "time"
    )

    func timeHandler(w http.ResponseWriter, r *http.Request) {
      tm := time.Now().Format(time.RFC1123)
      w.Write([]byte("The time is: " + tm))
    }

    func main() {
      mux := http.NewServeMux()

      // Convert the timeHandler function to a HandlerFunc type
      th := http.HandlerFunc(timeHandler)
      // And add it to the ServeMux
      mux.Handle("/time", th)

      log.Println("Listening...")
      http.ListenAndServe(":3000", mux)
    }
    

In fact, converting a function to a HandlerFunc type and then adding it to a ServeMux like this is so common that Go provides a shortcut: the ServeMux.HandleFunc method.

This is what the main() function would have looked like if we'd used this shortcut instead:

    func main() {
      mux := http.NewServeMux()

      mux.HandleFunc("/time", timeHandler)

      log.Println("Listening...")
      http.ListenAndServe(":3000", mux)
    }
    

Most of the time using a function as a handler like this works well. But there is a bit of a limitation when things start getting more complex.

You've probably noticed that, unlike the method before, we've had to hardcode the time format in the timeHandler function. What happens when we want to pass information or variables from main() to a handler?

A neat approach is to put our handler logic into a closure, and close over the variables we want to use:

File: main.go
    package main

    import (
      "log"
      "net/http"
      "time"
    )

    func timeHandler(format string) http.Handler {
      fn := func(w http.ResponseWriter, r *http.Request) {
        tm := time.Now().Format(format)
        w.Write([]byte("The time is: " + tm))
      }
      return http.HandlerFunc(fn)
    }

    func main() {
      mux := http.NewServeMux()

      th := timeHandler(time.RFC1123)
      mux.Handle("/time", th)

      log.Println("Listening...")
      http.ListenAndServe(":3000", mux)
    }
    

The timeHandler function now has a subtly different role. Instead of coercing the function into a handler (like we did previously), we are now using it to return a handler. There are two key elements to making this work.

First it creates fn, an anonymous function which accesses – or closes over – the format variable, forming a closure. Regardless of what we do with the closure it will always be able to access the variables that are local to the scope it was created in – which in this case means it'll always have access to the format variable.

Secondly our closure has the signature func(http.ResponseWriter, *http.Request). As you may remember from earlier, this means that we can convert it into a HandlerFunc type (so that it satisfies the Handler interface). Our timeHandler function then returns this converted closure.

In this example we've just been passing a simple string to a handler. But in a real-world application you could use this method to pass a database connection, a template map, or any other application-level context. It's a good alternative to using global variables, and has the added benefit of making neat self-contained handlers for testing.

You might also see this same pattern written as:

    func timeHandler(format string) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        tm := time.Now().Format(format)
        w.Write([]byte("The time is: " + tm))
      })
    }
    

Or using an implicit conversion to the HandlerFunc type on return:

    func timeHandler(format string) http.HandlerFunc {
      return func(w http.ResponseWriter, r *http.Request) {
        tm := time.Now().Format(format)
        w.Write([]byte("The time is: " + tm))
      }
    }
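
To make that concrete, here's a hypothetical sketch of a handler constructor that closes over a database connection instead of a string. The userCountHandler name, the users table and the query are made up purely for illustration, and the snippet assumes the database/sql, fmt and net/http packages are imported:

    func userCountHandler(db *sql.DB) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // The closure keeps access to the db variable passed in from main().
        var count int
        if err := db.QueryRow("SELECT COUNT(*) FROM users").Scan(&count); err != nil {
          http.Error(w, http.StatusText(500), 500)
          return
        }
        fmt.Fprintf(w, "There are %d users", count)
      })
    }
    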
    

The DefaultServeMux

You've probably seen DefaultServeMux mentioned in lots of places, from the simplest Hello World examples to the Go source code.

It took me a long time to realise it isn't anything special. The DefaultServeMux is just a plain ol' ServeMux like we've already been using, which gets instantiated by default when the HTTP package is used. Here's the relevant line from the Go source:

var DefaultServeMux = NewServeMux()

The HTTP package provides a couple of shortcuts for working with the DefaultServeMux: http.Handle and http.HandleFunc. These do exactly the same as their namesake functions we've already looked at, with the difference that they add handlers to the DefaultServeMux instead of one that you've created.

Additionally, ListenAndServe will fall back to using the DefaultServeMux if no other handler is provided (that is, the second parameter is set to nil).

So as a final step, let's update our timeHandler application to use the DefaultServeMux instead:

File: main.go
    package main

    import (
      "log"
      "net/http"
      "time"
    )

    func timeHandler(format string) http.Handler {
      fn := func(w http.ResponseWriter, r *http.Request) {
        tm := time.Now().Format(format)
        w.Write([]byte("The time is: " + tm))
      }
      return http.HandlerFunc(fn)
    }

    func main() {
      // Note that we skip creating the ServeMux...

      var format string = time.RFC1123
      th := timeHandler(format)

      // We use http.Handle instead of mux.Handle...
      http.Handle("/time", th)

      log.Println("Listening...")
      // And pass nil as the handler to ListenAndServe.
      http.ListenAndServe(":3000", nil)
    }
    

If you found this post useful, you might like to subscribe to my RSS feed.

Gorilla vs Pat vs Routes: A Mux Showdown

 ∗ Alex Edwards

One of the first things I missed when learning Go was being able to route HTTP requests to handlers based on the pattern of a URL path, as you can with web frameworks like Sinatra and Django.

Although Go's ServeMux does a great job at routing incoming requests, it only works for fixed URL paths. To support pretty URLs with variable parameters we either need to roll a custom router (or HTTP request multiplexer in Go terminology), or look to a third-party package.

In this post we'll compare and contrast three popular packages for the job: Pat, Routes and Gorilla Mux. If you're already familiar with them, you might want to skip to the benchmarks and summary.

Pat

Pat by Blake Mizerany is the simplest and lightest of the three packages. It supports basic pattern matching on request paths, matching on request method (GET, POST etc), and the capture of named parameters.

The syntax for defining URL patterns should feel familiar if you're from the Ruby world – named parameters start with a colon, with the remainder of the path matched literally. For example, the pattern /user/:name/profile would match a request to /user/oomi/profile, with the name oomi captured as a parameter.

It's worth pointing out that behind the scenes Pat uses a custom algorithm for pattern matching, rather than a regular expression based approach like the other two packages. In theory this means it should be a little more optimised for the task at hand.

Let's take a look at a sample application using Pat:

    $ mkdir pat-example && cd pat-example
    $ touch app.go
    $ go get github.com/bmizerany/pat
    
File: app.go
    package main

    import (
      "github.com/bmizerany/pat"
      "log"
      "net/http"
    )

    func main() {
      mux := pat.New()
      mux.Get("/user/:name/profile", http.HandlerFunc(profile))

      http.Handle("/", mux)

      log.Println("Listening...")
      http.ListenAndServe(":3000", nil)
    }

    func profile(w http.ResponseWriter, r *http.Request) {
      params := r.URL.Query()
      name := params.Get(":name")
      w.Write([]byte("Hello " + name))
    }
    

We'll quickly step through the interesting bits.

In the main function we start by creating a new HTTP request multiplexer (or mux for short) with Pat. Then we add a rule to the mux so that all GET requests which match the specified pattern are routed to the profile function.

Next we use the Handle function to register our custom mux as the handler for all incoming requests in Go's DefaultServeMux.

Because we're only using a single handler in this code, an alternative approach would be to skip registering with the DefaultServeMux, and pass our custom mux directly to ListenAndServe as the handler instead.
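
In that case the main function could be trimmed down to something like this:

    func main() {
      mux := pat.New()
      mux.Get("/user/:name/profile", http.HandlerFunc(profile))

      log.Println("Listening...")
      // Pass the Pat mux directly to ListenAndServe instead of registering
      // it with the DefaultServeMux and passing nil.
      http.ListenAndServe(":3000", mux)
    }
    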

When a request gets matched, Pat adds any named parameters to the URL RawQuery. In the profile function we then access these in the same way as a normal query string value.

Go ahead and run the application:

    $ go run app.go
    Listening...
    

And visit localhost:3000/user/oomi/profile in your browser. You should see a Hello oomi response.

Pat also provides a couple of other nice touches, including redirecting paths with trailing slashes. Here's the full documentation.

Routes

Routes by Drone provides a similar interface to Pat, with the additional benefit that patterns can be more tightly controlled with optional Regular Expressions. For example, the two patterns below are both valid, with the second one matching if the name parameter contains lowercase letters only:

  • /user/:name/profile
  • /user/:name([a-z]+)/profile

Routes also provides a few other nice features, including:

  • Built-in routing for static files.
  • A before filter, so specific code can be run before each request is handled.
  • Helpers for returning JSON and XML responses.

Basic usage of Routes is almost identical to Pat:

    $ mkdir routes-example && cd routes-example
    $ touch app.go
    $ go get github.com/drone/routes
    
File: app.go
    package main

    import (
      "github.com/drone/routes"
      "log"
      "net/http"
    )

    func main() {
      mux := routes.New()
      mux.Get("/user/:name([a-z]+)/profile", profile)

      http.Handle("/", mux)

      log.Println("Listening...")
      http.ListenAndServe(":3000", nil)
    }

    func profile(w http.ResponseWriter, r *http.Request) {
      params := r.URL.Query()
      name := params.Get(":name")
      w.Write([]byte("Hello " + name))
    }
    

Gorilla Mux

Gorilla Mux is the most full-featured of the three packages. It supports:

  • Pattern matching on request paths, with optional regular expressions.
  • Matching on URL host and scheme, request method, header and query values.
  • Matching based on custom functions.
  • Use of sub-routers for easy nested routing.

Additionally the matchers can be chained together, giving a lot of potential for granular routing rules if you need them.

The pattern syntax that Gorilla uses is slightly different to the other packages, with named parameters surrounded by curly braces. For example: /user/{name}/profile and /user/{name:[a-z]+}/profile.

Let's take a look at an example:

    $ mkdir gorilla-example && cd gorilla-example
    $ touch app.go
    $ go get github.com/gorilla/mux
    
File: app.go
    package main

    import (
      "github.com/gorilla/mux"
      "log"
      "net/http"
    )

    func main() {
      rtr := mux.NewRouter()
      rtr.HandleFunc("/user/{name:[a-z]+}/profile", profile).Methods("GET")

      http.Handle("/", rtr)

      log.Println("Listening...")
      http.ListenAndServe(":3000", nil)
    }

    func profile(w http.ResponseWriter, r *http.Request) {
      params := mux.Vars(r)
      name := params["name"]
      w.Write([]byte("Hello " + name))
    }
    

Fundamentally there's the same thing going on here as in the previous two examples. So although the syntax looks a bit different I won't dwell on it – the Gorilla documentation does a fine job of explaining it if it's not immediately clear.

Relative Performance

I ran two different sets of benchmarks on the packages. The first was a stripped-down benchmark to look at their performance in isolation, and the second was an attempt at profiling a more real-world use case.

In both tests I measured the number of successful requests across a ten second period, and took the average over 50 iterations, all running on my local machine.

For the 'stripped-down' benchmark, requests were simply routed to a handler that returned a 200 status code and message. Here are the code samples and results:

In this test the best performing package appeared to be Pat by a large margin. It handled around 30% more requests than Routes and Gorilla Mux, which were very evenly matched.

In the second benchmark requests were routed to a handler which accessed a named parameter from the URL, and then merged it with a HTML template read from disk. Here are the code samples and results:

In this benchmark the performance difference between the three packages was negligible.

Although it's always dangerous to draw conclusions from just one set of tests, it does point toward the overall performance impact of a router being much smaller for higher-latency applications, such as those with a lot of file system or database access in the handlers.

Summary

Pat would appear to be a good choice for scenarios where performance is important, you have a low-latency application, and only require simple pattern-based routing.

If you're likely to be validating a lot of parameter input with regular expressions in your application, then it probably makes sense to skip Pat and use Routes or Gorilla Mux instead, with the expressions built into your routing patterns.

For higher-latency applications, where there appears to be less of an overall impact due to router performance, Gorilla Mux may be a wise choice because of the sheer number of options and the flexibility it provides. Although I haven't looked at it in detail, larger applications with a lot of URL endpoints may also get a performance benefit from using Gorilla's nested routing too.

If you found this post useful, you might like to subscribe to my RSS feed.

Serving Static Sites with Go

 ∗ Alex Edwards

I've recently moved the site you're reading right now from a Sinatra application to an (almost) static one served by Go. While it's fresh in my head, here's an explanation of the principles behind creating and serving static sites with Go.

Let's begin with a simple but real-world example: serving vanilla HTML and CSS files from a particular location.

Start by creating a directory to hold the project:

    $ mkdir static-site
    $ cd static-site
    

Along with an app.go file to hold our code, and some sample HTML and CSS files in a static directory.

    $ touch app.go
    $ mkdir -p static/stylesheets
    $ touch static/example.html static/stylesheets/main.css
    
File: static/example.html
    <!doctype html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>A static page</title>
      <link rel="stylesheet" href="/stylesheets/main.css">
    </head>
    <body>
      <h1>Hello from a static page</h1>
    </body>
    </html>
    
File: static/stylesheets/main.css
    body {color: #c0392b}
    

Once those files are created, the code we need to get up and running is wonderfully compact:

File: app.go
    package main

    import (
      "log"
      "net/http"
    )

    func main() {
      fs := http.FileServer(http.Dir("static"))
      http.Handle("/", fs)

      log.Println("Listening...")
      http.ListenAndServe(":3000", nil)
    }
    

Let's step through this. First we use the FileServer function to create a handler that responds to HTTP requests with the contents of a given FileSystem. Here we've used the static directory relative to our application, but you could use any other directory on your system (or indeed anything that implements the FileSystem interface). Next we use the Handle function to register it as the handler for all requests, and launch the server listening on port 3000.

Go ahead and run the application:

    $ go run app.go
    Listening...
    

And open localhost:3000/example.html in your browser. You should see the HTML page we made with a big red heading.

Almost-Static Sites

If you're creating a lot of static HTML files by hand, it can be tedious to keep repeating boilerplate content. Let's explore using the Template package to put shared markup in a layout file.

At the moment all requests are being handled by our file server. Let's make a slight adjustment so that it only handles request paths that begin with the pattern /static/ instead.

File: app.go
    ...
    fs := http.FileServer(http.Dir("static"))
    http.Handle("/static/", http.StripPrefix("/static/", fs))
    ...
    

If you restart the application, you should find the CSS file we made earlier available at localhost:3000/static/stylesheets/main.css.

Now let's create a templates directory, containing a layout.html file with shared markup, and an example.html file with some page-specific content.

    $ mkdir templates
    $ touch templates/layout.html templates/example.html
    
File: templates/layout.html
    {{define "layout"}}
    <!doctype html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>{{template "title"}}</title>
      <link rel="stylesheet" href="/static/stylesheets/main.css">
    </head>
    <body>
      {{template "body"}}
    </body>
    </html>
    {{end}}
    
File: templates/example.html
    {{define "title"}}A templated page{{end}}

    {{define "body"}}
    <h1>Hello from a templated page</h1>
    {{end}}
    

If you've used templating in other web frameworks or languages before, this should hopefully feel familiar.

Go templates – in the way we're using them here – are essentially just named text blocks surrounded by {{define}} and {{end}} tags. Templates can be embedded into each other, as we do above where the layout template embeds both the title and body templates.

Let's update the application code to use these:

File: app.go
    package main

    import (
      "html/template"
      "log"
      "net/http"
      "path"
    )

    func main() {
      fs := http.FileServer(http.Dir("static"))
      http.Handle("/static/", http.StripPrefix("/static/", fs))

      http.HandleFunc("/", serveTemplate)

      log.Println("Listening...")
      http.ListenAndServe(":3000", nil)
    }

    func serveTemplate(w http.ResponseWriter, r *http.Request) {
      lp := path.Join("templates", "layout.html")
      fp := path.Join("templates", r.URL.Path)

      templates, _ := template.ParseFiles(lp, fp)
      templates.ExecuteTemplate(w, "layout", nil)
    }
    

So what's changed here?

First we've added the html/template and path packages to the import statement.

We've then specified that all the requests not picked up by the static file server should be handled with a new serveTemplate function.

In the serveTemplate function, we build paths to the layout file and the template file corresponding with the request. Rather than manual concatenation we use Join, which has the advantage of cleaning the path to help prevent directory traversal attacks.

We then use the ParseFiles function to bundle the requested template and layout into a template set. Finally, we use the ExecuteTemplate function to render a named template in the set, in our case the layout template.

Restart the application:

    $ go run app.go
    Listening...
    

And open localhost:3000/example.html in your browser. If you look at the source you should find the markup from both templates merged together. You might also notice that the Content-Type and Content-Length headers have been set for us, courtesy of the ExecuteTemplate function.

Lastly, let's make the code a bit more robust. We should:

  • Send a 404 response if the requested template doesn't exist.
  • Send a 404 response if the requested template path is a directory.
  • Send and log a 500 response if the template.ParseFiles function returns an error.
File: app.go
    package main

    import (
      "html/template"
      "log"
      "net/http"
      "os"
      "path"
    )

    func main() {
      fs := http.FileServer(http.Dir("static"))
      http.Handle("/static/", http.StripPrefix("/static/", fs))

      http.HandleFunc("/", serveTemplate)

      log.Println("Listening...")
      http.ListenAndServe(":3000", nil)
    }

    func serveTemplate(w http.ResponseWriter, r *http.Request) {
      lp := path.Join("templates", "layout.html")
      fp := path.Join("templates", r.URL.Path)

      // Return a 404 if the template doesn't exist
      info, err := os.Stat(fp)
      if err != nil {
        if os.IsNotExist(err) {
          http.NotFound(w, r)
          return
        }
        // Treat any other Stat error as a server error
        http.Error(w, "500 Internal Server Error", 500)
        return
      }

      // Return a 404 if the request is for a directory
      if info.IsDir() {
        http.NotFound(w, r)
        return
      }

      templates, err := template.ParseFiles(lp, fp)
      if err != nil {
        log.Print(err)
        http.Error(w, "500 Internal Server Error", 500)
        return
      }
      templates.ExecuteTemplate(w, "layout", nil)
    }
    

If you found this post useful, you might like to subscribe to my RSS feed.

Axiomatic CSS and Lobotomized Owls

 ∗ A List Apart: The Full Feed

At CSS Day last June I introduced, with some trepidation, a peculiar three-character CSS selector. Called the “lobotomized owl selector” for its resemblance to an owl’s vacant stare, it proved to be the most popular section of my talk.

I couldn’t tell you whether the attendees were applauding the thinking behind the invention or were, instead, nervously laughing at my audacity for including such an odd and seemingly useless construct. Perhaps I was unwittingly speaking to a room full of paid-up owl sanctuary supporters. I don’t know.

The lobotomized owl selector looks like this:

* + *

Despite its irreverent name and precarious form, the lobotomized owl selector is no mere thought experiment for me. It is the result of ongoing experimentation into automating the layout of flow content. The owl selector is an “axiomatic” selector with a voracious purview. As such, many will be hesitant to use it, and it will terrify some that I include it in production code. I aim to demonstrate how the selector can reduce bloat, speed up development, and help automate the styling of arbitrary, dynamic content.

Styling by prescription

Almost universally, professional web interface designers (engineers, whatever) have accustomed themselves to styling HTML elements prescriptively. We conceive of an interface object, then author styles for the object that are inscribed manually in the markup as “hooks.”

Despite only pertaining to presentation, not semantic interoperability, the class selector is what we reach for most often. While elements and most attributes are predetermined and standardized, classes are the placeholders that gift us with the freedom of authorship. Classes give us control.

.my-module {
	/* ... */
}

CSS frameworks are essentially libraries of non-standard class-based ciphers, intended for forming explicit relationships between styles and their elements. They are vaunted for their ability to help designers produce attractive interfaces quickly, and criticized for the inevitable accessibility shortcomings that result from leading with style (form) rather than content (function).

<!-- An unfocusable, semantically inaccurate "button" -->
<a class="ui-button">press me</a>

Whether you use a framework or your own methodology, the prescriptive styling mode also prohibits non-technical content editors. It requires not just knowledge of presentational markup, but also access to that markup to encode the prescribed styles. WYSIWYG editors and tools like Markdown necessarily lack this complexity so that styling does not impede the editorial process.

Bloat

Regardless of whether you can create and maintain presentational markup, the question of whether you should remains. Adding presentational ciphers to your previously terse markup necessarily engorges it, but what’s the tradeoff? Does this allow us to reduce bloat in the stylesheet?

By choosing to style entirely in terms of named elements, we make the mistake of asserting that HTML elements exist in a vacuum, not subject to inheritance or commonality. By treating the element as “this thing that needs to be styled,” we are liable to redundantly set some values for the element in hand that should have already been defined higher in the cascade. Adding new modules to a project invites bloat, which is a hard thing to keep in check.

.module-new {
	/* So… what’s actually new here? */
}

From pre-processors with their addition of variables to object-based CSS methodologies and their application of reusable class “objects,” we are grappling with sandbags to stem this tide of bloat. It is our industry’s obsession. However, few remedies actually eschew the prescriptive philosophy that invites bloat in the first place. Some interpretations of object-oriented CSS even insist on a flattened hierarchy of styles, citing specificity as a problem to be overcome—effectively reducing CSS to SS and denying one of its key features.

I am not writing to condemn these approaches and technologies outright, but there are other methods that just may be more effective for certain conditions. Hold onto your hats.

Selector performance

I’m happy to concede that when some of you saw the two asterisks in * + * at the beginning of this article, you started shaking your head with vigorous disapproval. There is a precedent for that. The universal selector is indeed a powerful tool. But it can be good powerful, not just bad powerful. Before we get into that, though, I want to address the perceived performance issue.

All the studies I’ve read, including Steve Souders’ and Ben Frain’s, have concluded that the comparative performance of different CSS selector types is negligible. In fact, Frain concludes that “sweating over the selectors used in modern browsers is futile.” I’ve yet to read any compelling evidence to counter these findings.

According to Frain, it is, instead, the quantity of CSS selectors—the bloat—that may cause issues; he mentions unused declarations specifically. In other words, embracing class selectors for their “speed” is of little use when their proliferation is causing the real performance issue. Well, that and the giant JPEGs and un-subsetted web fonts.

Contrariwise, the * selector’s simultaneous control of multiple elements increases brevity, helping to reduce file size and improve performance.

The real trouble with the universal selector is that it alone doesn’t represent a very compelling axiom—nothing more intelligent than “style whatever,” anyway. The trick is in harnessing this basic selector and forming more complex expressions that are context-aware.

Dispensing with margins

The trouble with confining styles to objects is that not everything should be considered a property of an object per se. Take margins: margins are something that exist between elements. Simply giving an element a top margin makes no sense, no matter how few or how many times you do it. It’s like applying glue to one side of an object before you’ve determined whether you actually want to stick it to something or what that something might be.

.module-new {
	margin-bottom: 3em; /* what, all the time? */
}

What we need is an expression (a selector) that matches elements only in need of margin. That is, only elements in a contextual relationship with other sibling elements. The adjacent sibling combinator does just this: using the form x + n, we can add a top margin to any n where x has come before it.

This would, as with standard prescriptive styling, become verbose very quickly if we were to create rules for each different element pairing within the interface. Hence, we adopt the aforementioned universal selector, creating our owl face. The axiom is as follows: “All elements in the flow of the document that follow other elements must receive a top margin of one line.”

* + * {
	margin-top: 1.5em;
}

Completeness

Assuming that your paragraphs’ font-size is 1 em and their line-height is 1.5, we just set a default margin of one line between all successive flow elements of all varieties occurring in any order. Neither we developers nor the folks building content for the project have to worry about any elements being forgotten and not adopting at least a standard margin when rendered one after the other. To achieve this the prescriptive way, we’d have to anticipate specific elements and give them individual margin values. Boring, verbose, and liable to be incomplete.

Instead of writing styles, we’ve created a style axiom: an overarching principle for the layout of flow content. It’s highly maintainable, too; if you change the line-height, just change this singular margin-top value to match.

Contextual awareness

It’s better than that, though. By applying margin between elements only, we don’t generate any redundant margin (exposed glue) destined to combine with the padding of parent elements. Compare solution (a), which adds a top margin to all elements, with solution (b), which uses the owl selector.

Diagram showing elements with margins, with and without the owl selector.
The diagrams in the left column show margin in dark gray and padding in light gray.

Now consider how this behaves in regard to nesting. As illustrated, using the owl selector and just a margin-top value, no first or last element of a set will ever present redundant margin. Whenever you create a subset of these elements, by wrapping them in a nested parent, the same rules that apply to the superset will apply to the subset. No margin, regardless of nesting level, will ever meet padding. With a sort of algorithmic elegance, we protect against compound whitespace throughout our interface.

Diagram showing nested elements with margins using the owl selector.

This is eminently less verbose and more robust than approaching the problem unaxiomatically and removing the leftover glue after the fact, as Chris Coyier reluctantly proposed in “Spacing The Bottom of Modules”. It was this article, I should point out, that helped give me the idea for the lobotomized owl.

.module > *:last-child,
.module > *:last-child > *:last-child,
.module > *:last-child > *:last-child > *:last-child {
	margin: 0;
}

Note that this only works having defined a “module” context (a big ask of a content editor), and requires estimating possible nesting levels. Here, it supports up to three.

Exception-driven design

So far, we’ve not named a single element. We’ve simply written a rule. Now we can take advantage of the owl selector’s low specificity and start judiciously building in exceptions, taking advantage of the cascade rather than condemning it as other methods do.

Book-like, justified paragraphs

p {
	text-align: justify;
}

p + p {
	margin-top: 0;
	text-indent: 2em;
}

Note that only successive paragraphs are indented, which is conventional—another win for the adjacent sibling combinator.

Compact modules

.compact * + * {
	margin-top: 0.75em;
}

You can employ a little class-based object orientation if you like, to create a reusable style for more compact modules. In this example, all elements that need margin receive a margin of only half a line.

Widgets with positioning

.margins-off > * {
	margin-top: 0;
}

The owl selector is an expressive selector and will affect widgets like maps, where everything is positioned exactly. This is a simple off switch. Increasingly, widgets like these will occur as web components where our margin algorithm will not be inherited anyway. This is thanks to the style encapsulation feature of Shadow DOM.

The beauty of ems

Although a few exceptions are inevitable, because we harness the em unit in our margin value, margins already adjust automatically according to another property: font-size. In any instance where we adjust font-size, the margin adapts to it: one-line spaces remain one-line spaces. This is especially helpful when setting an increased or reduced body font-size via a @media query.

When it comes to headings, there’s still more good fortune. If you’ve set heading font sizes in your stylesheet in ems, appropriate margin (leading whitespace) for each heading has already been set, without you writing a single line of additional code.
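For instance (a sketch, not taken from any particular stylesheet), a heading sized in ems picks up a proportionally larger top margin, because the em in the owl rule’s margin-top resolves against the heading’s own font-size:

h2 {
	font-size: 2em;
}

/* With * + * { margin-top: 1.5em; } still in effect, an h2 that follows
   other content gets a top margin of 1.5 × its own 2em font-size —
   one “line” at the heading’s own scale. */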

Diagram showing automatically adjusted margins based on font-size.

Phrasing elements

This style declaration is intended to be inherited. That is how it, and CSS in general, is designed to work. However, I appreciate that some will be uncomfortable with just how voracious this selector is, especially after they have become accustomed to avoiding inheritance wherever possible.

I have already covered the few exceptions you may wish to employ, but, if it helps further, remember that phrasing elements with a typical display value of inline will inherit the top margin but be unaffected in terms of layout. Inline elements only respect horizontal margin, which is specified, standard behavior across all browsers.

Diagram showing inline elements with margin.

If you find yourself overriding the owl selector frequently, there may be deeper systemic issues with the design. The owl selector deals with flow content, and flow content should make up the majority of your content. I don’t advise depending heavily on positioned content in most interfaces because it breaks implicit flow relationships. Even grid systems, with their floated columns, should require no more than a simple .row > * selector applying margin-top: 0 to reset them.

Diagram showing floated columns with margins.

Conclusion

I am a very poor mathematician, but I have a great fondness for Euclid’s postulates: a set of irreducible rules, or axioms, that form the basis for complex and beautiful geometries. Thanks to Euclid, I understand that even the most complex systems must depend on foundational rules, and CSS is no different. Although modularization of a complex interface is a necessary step in its maturation, any interface that does not follow basic governing tenets is going to lack clarity.

The owl selector allows you to control flow content, but it is also a way of relinquishing control. By styling elements according to context and circumstance, we accept that the structure of content is—and should be—mutable. Instead of prescribing the appearance of individual items, we build systems to anticipate them. Instead of prescribing the appearance of the interface as a whole, we let the content determine it. We give control back to the people who would make it.

When turning off CSS for a webpage altogether, you should notice two things. First, the page is unfalteringly flexible: the content fits the viewport regardless of its dimensions. Second—provided you have written standard, accessible markup—you should see that the content is already styled in a way that is, if not highly attractive, then reasonably traversable. The browser’s user agent styles take care of that, too.

Our endeavors to reclaim and enhance the innate device independence offered by user agents are ongoing. It’s time we worked on reinstating content independence as well.

Apple Multiple Security Updates, (Mon, Oct 20th)

 ∗ SANS Internet Storm Center, InfoCON: green


Cote D'Azur

 ∗ jmoiron.net blog

the fairmont hairpin, photo by steve harris, cc

The Monte Carlo method describes a class of algorithms which use repeated random samplings to produce statistical or approximate results. A famous algorithm employing such a method, brought to my attention by my friend Jeremy Self recently, is one which calculates the digits of pi.

It starts by considering the ratio between the area of a circle and a bounding rectangle. To make the math easier, you can take the unit circle with radius \(r = 1\) and a bounding box with sides of length 2, presented below. The area of a circle is \(\pi r^2\), and that of a square is \(a^2\), where \(a\) is the length of a side. The areas of our unit circle and bounding square are therefore \(\pi\) and 4, respectively.

If we randomly plot points somewhere inside the square, we can expect the ratio of points that land inside the circle to the total number of points plotted to approach the ratio between the areas of the circle and the square, or \(\pi/4\). The more points we plot, the lower the error rate we should encounter: you can understand this intuitively, as it's possible to plot 5 random points that all land inside the circle and get \(\pi = 4\), but for 10,000 points it's incredibly unlikely.

We can make the further observation that each quadrant contains a quarter circle and a unit square with the same area ratio discussed above, so we can use random samplings where \(0 \le x \le 1\) and \(0 \le y \le 1\), which works well for the many random number generators that produce floats in this range.
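A minimal Python sketch of that naive approach (not the exact code used in the shootout) looks something like this:

import random

def estimate_pi(iterations):
    inside = 0
    for _ in range(iterations):
        x = random.random()  # uniform in [0, 1)
        y = random.random()
        if x * x + y * y <= 1.0:  # point lands inside the quarter circle
            inside += 1
    # inside/iterations approximates pi/4, so scale back up by 4
    return 4.0 * inside / iterations

print(estimate_pi(1000000))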

This is an incredibly simple algorithm to implement, so we decided to have a little shootout and see how the naive solutions fared in multiple languages, running this algorithm with 100,000,000 iterations.

I implemented the algorithm in C, Rust, Go, Python and Ruby, and between Jeremy and the other Jeremy we also had a PHP implementation, which I used below, and a Common Lisp implementation kicking about. The point really wasn't to compare languages to each other; more to challenge our own preconceptions about their performance characteristics.

These are the runtimes on my fairly old Intel i5 750:

Language      Runtime (s)
C             2.15
Rust          1.70
Go            4.79
Go (gccgo)    1.96
Ruby          25.53
Python2       51.74
Python3       50.82
PyPy          7.24
PHP           54.6

Despite only testing function call overhead (or compiler inlining), arithmetic, sqrt, and rand, I still learned some interesting things and encountered some surprising results.

There are large differences in the speed of random number generators; having never looked at the implementations of any, I had assumed that most would use the same algorithms. Rust has quite a few at its disposal, and XorShiftRng gave me the fastest results of any language in my tests, while rng::task_rng() clocked in at 9.45s, slower than PyPy. Since these processes have to seed the generator and produce 200,000,000 random numbers, the tradeoffs made in each RNG implementation can represent huge shifts in elapsed time.

Function calls have surprisingly variable overhead across languages. While I expected manual inlining of the various in_circle test functions to improve the performance of most languages, in PHP the effect was particularly drastic: runtime drops from 54s to 36s. For comparison, the Python runtime only improved by 2s, to 48s.

After using Go for so long, I'd blissfully forgotten about compiler optimization flags and other build-time tuning.

When I'd initially implemented the Rust version, I used the task_rng random number generator and no optimization switches on rustc, which resulted in a surprisingly high runtime of 24s. Turning on the optimizer lowered that to 9s, a nearly 3x improvement. Running with XorShiftRng unoptimized was 9s itself, and flipping on the optimizer yielded a nearly 5x improvement. I'm unsure if this says good things about the optimizer or bad things about the quality of the output when not running it.

This made me think to turn on the optimizer for C as well, and finally to try out gccgo and see what it would produce. The runtimes for unoptimized gccgo and C were both around 3.6s, which means that simply adding -O2 to the build roughly doubles performance. Again, this surprised me; I'd always thought, based primarily on ignorance and scoffing at Gentoo, that optimizers would produce incremental improvements at best, and perhaps for general-purpose desktop software where you're rarely CPU bound, they do. I wish I had some better knowledge about compiler optimizers to share; for now, these observations will have to suffice.

The star of the show in my mind has to be PyPy: its results on some unmodified, straightforward Python code are pretty astonishing, given the dismal performance of CPython. This is just the sort of thing that should be its bread and butter, but it really came through in a big way. Ruby's results were also quite impressive, as I'd run the Python code first and expected Ruby to be around the same. I was expecting the massive gulf between the scripting and compiled languages, but PyPy shows that this needn't be the case.

These programs represent the tiniest slice of their respective languages, and surely say more about my relative abilities in each than any deeper truth about the languages themselves. Although it should be obvious, I strongly warn against using the results to determine what language to use for a real project, but perhaps this exercise can inspire you to do your own experiments and learn some interesting algorithms in the process!

Microsoft MSRT October Update, (Sun, Oct 19th)

 ∗ SANS Internet Storm Center, InfoCON: green

This past week Microsoft

Alice – Painless Middleware Chaining for Go

 ∗ Justinas Stankevičius

According to a recent thread on Reddit, many people like it simple when doing web development in Go and use net/http with useful addons (like the Gorilla toolkit) instead of a full-fledged framework.

Such applications often make use of middleware, but wrapping lots of layers of handlers can become messy in the long run:

final := gzipHandler(rateLimitHandler(securityHandler(authHandler(myApp))))

Sure, that works. But then you want to remove a handler, add one or reorder them. Suddenly, you're drowning in parentheses.

Alice (available on GitHub) was created as a solution to simplify chaining while remaining flexible and playing nice with the existing net/http middleware.

It's not a framework, a mux or a toolkit. Its sole functionality is to let you create the same middleware chain like this:

final := alice.New(gzipHandler, rateLimitHandler,
    securityHandler, authHandler).Then(myApp)

Flaws of other approaches

Many might point out that there are existing solutions for chaining middleware. That's true, but each of the solutions I had looked into had at least one thing that I thought should be done differently.

The recent Negroni package allows handlers to be added as middleware like this:

n := negroni.Classic()
// func (n *Negroni) UseHandler(handler http.Handler)
n.UseHandler(myMiddleware)

See the fault here? Traditional net/http middleware wrap the next handler so that the wrapper can have total control. Here, there's simply no handler to pass in. You can't pass in the Negroni instance, as it would result in infinite recursion (Negroni calls middleware calls Negroni).

Negroni has its own mechanism for control flow (next() to call the following handlers), but you have to modify your middleware to fully utilize it, which is not ideal.

go-httppipe has the same problem: it's suited for successive handlers, but not wrapper-type middleware.

go-stackbuilder forcibly uses http.ServeMux as the main handler, instead of http.Handler.

func New(mux *http.ServeMux) Builder {
    return Builder{mux}
}

func Build(hs ...interface{}) http.Handler {
    return New(http.DefaultServeMux).Build(hs...)
}

That is not ideal as one might like to bypass the default mux completely, especially when using an alternative one.

Muxchain, like Negroni, doesn't provide a reference to the next handler.

Matt Silverlock's use.go snippet came closest to what I wanted. My only complaint is that the ordering of handlers here is counter-intuitive. Reading the chaining code makes it obvious that

use(myApp, csrf, logging, recovery)

is equivalent to this code:

recovery(logging(csrf(myApp)))

and this request cycle:

recovery -> logging -> csrf -> myApp

So, a reversed order from what you've written in your code.

How Alice is better

Alice adopts Matt's model and fixes the small imperfections. It still requires middleware constructors of the form func(http.Handler) http.Handler, but requests now flow exactly the way you order your handlers.

This code:

alice.Chain(recovery, logging, csrf).Then(myApp)

will result in this request cycle:

recovery -> logging -> csrf -> myApp

Here, recovery receives the request and has a reference to logging. It may or may not call logging: it's completely up to recovery, just as without Alice.
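For reference, a constructor in the form Alice expects is just a function that takes the next handler and returns a new handler wrapping it. A minimal sketch (logging here is a hypothetical example, assuming the standard log and net/http packages are imported):

func logging(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Printf("%s %s", r.Method, r.URL.Path) // do this middleware's work
        next.ServeHTTP(w, r)                      // then hand off to the next handler
    })
}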

One more thing

Settling on a unified constructor for middleware is what makes such a convenient chaining API possible in the first place.

However, limiting ourselves to one function signature has a drawback. Many middleware have settings one might want (or have) to set. Not only does that not fit into our chosen signature, but there is also no standard way of passing options that developers have adopted.

Middleware found in the standard library take the options as additional arguments to the same function:

handler = http.StripPrefix("/old/", handler)

Throttled constructs a rate limiting strategy which has a method to wrap our handler:

th := throttled.RateLimit(throttled.PerMin(30), 
    &throttled.VaryBy{RemoteAddr: true}, 
    store.NewMemStore(1000))
handler := th.Throttle(handler)

And nosurf has additional methods on the handler:

handler := nosurf.New(handler)
handler.ExemptPath("/")
handler.SetFailureHandler(http.NotFoundHandler())

It's nearly impossible for Alice to solve this without resorting to ugly reflection tricks, so it doesn't try to. Instead, when in need of customization, one should create their own constructor.

func myStripPrefix(h http.Handler) http.Handler {
    return http.StripPrefix("/old/", h)
}

This now conforms to the constructor signature, so we can plug it into Alice.

alice.New(myStripPrefix).Then(myApp)

Prove me wrong, though

Alice was born in a matter of minutes and based on my own perception of what's right and what's not. Just like I've found other solutions non-ideal, some might find inherent flaws in how Alice works. Although it's unlikely that the very essence of how Alice functions will change, any feedback is welcome.

Check out Alice on GitHub.

Best Practices for Errors in Go

 ∗ Justinas Stankevičius

Error handling seems to be one of the more controversial areas of Go. Some are pleased with it, while others hate it with passion. Nevertheless, there is a handful of best practices that will make dealing with errors less painful if you're a sceptic and even better if you like it as it is.

Know when to panic

The idiomatic way of reporting errors in Go is having the error as the last return value of a function. However, Go also offers an alternative error mechanism called panic that is similar to what is known as exceptions in other programming languages.

Despite not being suitable everywhere, panics are useful in several specific scenarios.

When the user explicitly asks: Okay

In certain cases, the user might want to panic and thus abort the whole application if an important part of it can not be initialized. Go's standard library is rife with examples of variants of functions that panic instead of returning an error, e.g. regexp.MustCompile. As long as there remains a function that does this in an idiomatic way (regexp.Compile), providing the user with a nifty shortcut is okay.
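A quick sketch of the two variants side by side (assuming the regexp import; the find helper is made up for illustration):

// MustCompile panics on an invalid pattern — convenient for package-level
// variables the program cannot run without.
var wordRe = regexp.MustCompile(`\w+`)

// Compile reports an invalid pattern as an error, leaving the decision
// to the caller.
func find(pattern, s string) (string, error) {
    re, err := regexp.Compile(pattern)
    if err != nil {
        return "", err
    }
    return re.FindString(s), nil
}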

When setting up: Maybe

A scenario that, in a way, intersects with the previous one, is the setting up phase of an application. In most cases, if preparations fail, there is no point in continuing. One case of this might be a web application failing to bind the required port or being unable to connect to the database server. In that case, there's not much left to do, so panicking is acceptable, even if not explicitly requested by the user.

This behavior can also be observed in the standard library: the net/http muxer will panic if a pattern is invalid in any way.

func (mux *ServeMux) Handle(pattern string, handler Handler) {
    mux.mu.Lock()
    defer mux.mu.Unlock()

    if pattern == "" {
        panic("http: invalid pattern " + pattern)
    }
    if handler == nil {
        panic("http: nil handler")
    }
    if mux.m[pattern].explicit {
        panic("http: multiple registrations for " + pattern)
    }

    // ...

Otherwise: Not really

Although there might be other cases where panic is useful, returned errors remain preferable in most scenarios.

Predefine errors

It's not unusual to see code returning errors like this

func doStuff() error {
    if someCondition {
        return errors.New("no space left in the device")
    } else {
        return errors.New("permission denied")
    }
}

The wrongdoing here is that it's not convenient to check which error has been returned. Comparing strings is error prone: misspelling a string in a comparison will result in an error that cannot be caught at compile time, while a cosmetic change to the returned error message will break the checking just as much.

Given a small set of errors, the best way to handle this is to predefine each error publicly at the package level.

var ErrNoSpaceLeft = errors.New("no space left in the device")
var ErrPermissionDenied = errors.New("permission denied")

func doStuff() error {
    if someCondition {
        return ErrNoSpaceLeft
    } else {
        return ErrPermissionDenied 
    }
}

Now the previous problems are gone.

if err == ErrNoSpaceLeft {
    // handle this particular error
}

Provide information

Sometimes, an error can happen for a whole lot of different reasons. Wikipedia lists 41 different HTTP client errors. Let's say we want to treat them as errors in Go (net/http does not). What's more, we'd like to be able to look into the specifics of the error we received and find out whether the error was 404, 410 or 418.

In the previous section we developed a pattern for discerning errors from one another, but here it gets a bit messy. Predefining 41 separate errors like this:

var ErrBadRequest = errors.New("HTTP 400: Bad Request")
var ErrUnauthorized = errors.New("HTTP 401: Unauthorized")
// ...

will make our code and documentation messy and our fingers sore.

A custom error type is the best solution to this problem. Go's implicit interfaces make creating one easy: to conform to the error interface, we only need to have an Error() method that returns a string.

type HttpError struct {
    Code        int
    Description string
}

func (h HttpError) Error() string {
    return fmt.Sprintf("HTTP %d: %s", h.Code, h.Description)
}

Not only does this act like a regular error, it also contains the status code as an integer. We can then use it as follows:

func request() error {
    return HttpError{404, "Not Found"}
}

func main() {
    err := request()

    if err != nil {
        // an error occurred
        if err.(HttpError).Code == 404 {
            // handle a "not found" error
        } else {
            // handle a different error
        }
    }

}

The only minor annoyance left here is the need to do a type assertion to a concrete error type. But it's a small price to pay compared to the fragility of parsing the error code from a string.
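If other error types can reach the same code path, the bare assertion err.(HttpError) will itself panic when the dynamic type doesn't match. The comma-ok form sidesteps that; a minimal sketch, reusing the HttpError type defined above:

if httpErr, ok := err.(HttpError); ok {
    if httpErr.Code == 404 {
        // handle a "not found" error
    }
} else {
    // err is some other kind of error
}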

Provide stack traces

Errors as they exist in Go remain inferior to panics in one important way: they do not record where in the call stack the error happened.

A solution to this problem has recently been created by the Juju team at Canonical. Their package errgo provides the functionality of wrapping an error into another one that records where the error happened.

Building on the HTTP error handling example, we'll now put this to use.

package main

import (
    "fmt"

    "github.com/juju/errgo"
)

type HttpError struct {
    Code        int
    Description string
}

func (h HttpError) Error() string {
    return fmt.Sprintf("HTTP %d: %s", h.Code, h.Description)
}

func request() error {
    return errgo.Mask(HttpError{404, "Not Found"})
}

func main() {
    err := request()

    fmt.Println(err.(errgo.Locationer).Location())

    realErr := err.(errgo.Wrapper).Underlying()

    if realErr.(HttpError).Code == 404 {
        // handle a "not found" error
    } else {
        // handle a different error
    }
}

Our code has changed in several ways. Firstly, we wrap the error in an errgo-provided type. This code now prints information about where the error happened. On my machine it outputs

/private/tmp/example.go:19

referencing a line in request().

However, our code has become somewhat messier. To get to the real HttpError we need to do more unwrapping. Sadly, I'm not aware of a real way to make this nicer, so it's all about tradeoffs. If your codebase is small and you can always tell where the error came from, you might not need to use errgo at all.

What about concrete types?

Some might point out that a portion of the type assertions could have been avoided if we returned a concrete type from a function instead of an error interface.

However, doing that in conjunction with the := operator will bring trouble. As a result, the error variable will not be reusable.

func f1() HttpError { ... }
func f2() OsError { ... }

func main() {
    // err automatically declared as HttpError
    err := f1()

    // OsError is a completely different type
    // The compiler does not allow this
    err = f2()
}

To avoid this, errors are best returned as the error interface type. The standard library avoids returning concrete types as well; the os package documentation, for example, states:

Often, more information is available within the error. For example, if a call that takes a file name fails, such as Open or Stat, the error will include the failing file name when printed and will be of type *PathError, which may be unpacked for more information.
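As a sketch of that unpacking (the file path is made up; *os.PathError and its Op, Path and Err fields come from the os package):

package main

import (
    "fmt"
    "os"
)

func main() {
    _, err := os.Open("/no/such/file")
    if pathErr, ok := err.(*os.PathError); ok {
        fmt.Println("op:", pathErr.Op)     // "open"
        fmt.Println("path:", pathErr.Path) // "/no/such/file"
        fmt.Println("cause:", pathErr.Err) // the underlying system error
    }
}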

This concludes my list of best practices for errors in Go. As in other areas, a different mindset has to be adopted when coming into Go from elsewhere. "Different" does not imply "worse" though and deciding on a set of conventions is vital to making Go development even better.

Embrace Go's HTTP Tools

 ∗ Justinas Stankevičius

Some time ago I released nosurf, a Go middleware for mitigating Cross-Site Request Forgery attacks. Writing a seemingly simple and small package was enough to fall in love with how Go handles HTTP. Yet, it's up to us to either embrace the standard HTTP facilities or fragment, sacrificing composability and modularity.

http.Handler is THE interface

Unified HTTP interfaces for web apps written in certain programming languages, like WSGI for Python and Rack for Ruby, are a great idea, but they weren't always there. For instance, Rack only emerged in 2007, when Rails had already been going strong for a while.

Meanwhile in Go, the only interface needed has been in development since 2009, and although it went through some serious changes along the way, by the end of 2011, months before Go 1.0 was released, it had already stabilized.

Of course, I'm talking about the mighty http.Handler.

type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}

To be able to handle HTTP requests, your type only needs to implement this one method. The method reads the request info from the given *Request and writes a response into the given ResponseWriter. Seems simple enough, right?
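As a bare-bones illustration (greeter is a made-up name, not from any library), a complete program satisfying that interface might look like this:

package main

import (
    "fmt"
    "net/http"
)

// greeter satisfies http.Handler by implementing ServeHTTP.
type greeter struct{}

func (g greeter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello from %s\n", r.URL.Path)
}

func main() {
    http.ListenAndServe(":8080", greeter{})
}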

Complement, don't replace

Yet, when building abstractions on top of that, some get it wrong. Take for example Mango, described by its author as "a modular web-application framework for Go, inspired by Rack and PEP333".

This is what a Mango application looks like:

func Hello(env mango.Env) (mango.Status, mango.Headers, mango.Body) {
  return 200, mango.Headers{}, mango.Body("Hello World!")
}

Looks simple, concise and very similar to WSGI or Rack, right? Except for one thing. While with dynamic/duck typing, you could have any iterable for a body, here mango.Body is simply a string. Essentially, that takes away the ability to do any sort of streaming responses with Mango. Even if it were to expose a ResponseWriter, anything written to it would clash with the returned values, since they're only returned at the end of the function, after the calls to ResponseWriter have already been made.

That's bad. Whether you need another interface on top of existing net/http is a matter of taste, but even if you do, it should not take functionality away. An interface that is nicer to code with, but takes away important functions is clearly inferior.

The right way

A popular "micro" web framework web.go deals with this in a simple, yet much better way. Its handlers take a pointer to web.Context as an optional first argument.

type Context struct {
    Request *http.Request
    Params  map[string]string
    Server  *Server
    http.ResponseWriter
}
// ...
func hello(ctx *web.Context, val string) string {
    return "hello " + val 
}

web.Context does not take the standard HTTP handler structures away. Instead, the *Request is available as a struct member, and Context embeds the original ResponseWriter, so it satisfies that interface itself. The string you return from the function (if any) is simply appended to the response.

That is a good design choice and I think it goes well with Go's philosophy. Even though you get a nice higher-level API, you don't have to sacrifice the low-level control over the request handling.

Start now

Go's HTTP library infrastructure, despite growing rapidly, still has some gaps left to fill. But the last thing we need is fragmentation and annoying incompatibilities due to poor design and abstractions that actually take important functionality away. Embracing and supporting the standard Go HTTP facilities is, in my humble opinion, the straightest way to having functional and modular 3rd-party HTTP tools.

GamerGate (Doesn’t) Speak!

 ∗ andlabs's blog

I’m surprised by the lack of responses I’ve received. I’m also unsurprised, since it seems like I have to actually try to get GamerGate’s attention. I won’t bother closing comments on the other post since there was only one. So without further ado…

Willy
‘I believe that those who participated in the August tirade with Zoe’s ex are the same people who attacked her on Steam’

You believe it, eh? Evidence stronk!

There are several ways I could respond to this. I’ll use them all.

  1. Sorry, I don’t answer to “stronk”. *walks away*
  2. You ask me to raise evidence when your whole movement is about anonymity and untraceability? Wow, what a Batman gambit!
  3. Go find the Steam Greenlight page for Depression Quest and look at all the people who bashed her there; see if they’re a member of the GamerGate group(s) on Steam.

‘Yes you are. Just ask Brianna Wu.’

What? I can’t tell if this is a joke or your pre-supposed opinions are really blinding you this much.

I cannot respond to this since I do not know what your stance is, but I have a hypothesis, discussed later.

‘Isn’t it weird that you have to go around constantly defending criticism of GamerGate?’

No, it’s not weird at all. Unjustified criticism of a whole movement for the actions of a few should be defended against, since it drowns out the message of better games journalism with a misrepresentation of everything. Sure, GamerGate is an umbrella that harbours vicious worms. But nobody else condones it.
ISIS are a small section of Muslims, and some right-wingers like to think that all Muslims are terrorists because of them. Yet an imam defending his beliefs wouldn’t be met with accusations of ‘Why are so quick to defend against blatantly false accusations? Huh? HUH?’

Because ISIS didn’t come first. Islamist terrorism erupted within the past century, for a variety of reasons I am not qualified to discuss. Islam is, instead, traditionally known as a rather peaceful religion, and in the centuries that the religion has existed, the Arab people have been responsible for some of the most important technological innovations in human history.

GamerGate, on the other hand, was started by people in the Zoe Quinn hate camp who decided to shift focus and attack games journalism instead. You were born of bad blood, and the attacks on Brianna Wu and Anita Sarkeesian, which I must say were done in your name (and I can show proof of the latter), shows that the bad blood is still there.

So yeah. It’s quite obvious you’re presuming GG is toxic and that attempts to defend its reputation are ‘whitewashing’, which clouds your argument with all sorts of presumption and ‘belief’ backed up with nothing.

There’s a difference between defending something’s reputation and ignoring its actions.

I specifically chose the wording “I believe” because it is what I believe, not what I can easily confirm. But the two things in my original post that I said were beliefs were both backstory and origin-related; they are separate from the actual timeline.

THE BONUS TWITTER CHAPTER

There were two tweets from one “@Captain_Biglou” but I can’t tell if they were for or against GamerGate so I’ll omit them here.

@thepestoa
https://www.youtube.com/watch?v=ipcWm4B3EU4 – #Gamergate in 60s
https://www.youtube.com/watch?v=tRaAJBKmi5I – Longer Video

First I’ll point out that this was in response to a tweet I made asking if Twitter tailored the “Trends” list based on your recent activity, so I’m not sure what the point is here.

The 60 Seconds video talks about integrity in the games industry only. It makes no mention of the whole women in gaming being abused problem. Its description also repeats the whole “ALL Gamers Are Dead!!!11″ misunderstanding, so now we’re just talking in circles.

The second video, which is much longer, falls flat on its face in the first minute, by claiming that Brianna made up her death threats herself. Who the fuck does that? Why would you stage having the police remove you from your home out of fear?

Criticized on the basis of strawman fallacy. #gamergate isn’t about misogyny talking about misogyny is not debating #corruption

Is it just me, or are all of you missing the point that this movement started as a branching-off of the Zoe Quinn harassment? I don’t know if you’re deliberately doing this or are just delusional.

Closing Comment: More Proof

Just this. Here’s a whole Storify full of proof. Enjoy

I want to make a page that collects things like the above and other images and news articles together in a giant “Evidence stronk!” page.

But at this point I’m just starting to wonder if this is all going to be hopeless in the end…

Postscript: To Those Who Still Believe in Journalistic Ethics

To those of you who think you are being unfairly silenced or ignored: we are trying to convince you that you are being duped into following an organization that does not actually believe any of the things you stand for. I wish I could explain this in a nicer or more thorough way, but this is all I can say.

Have a parting gift. Hopefully this will be the last current events post here for a long time.

Personalizing Git with Aliases

 ∗ A List Apart: The Full Feed

Part of getting comfortable with the command line is making it your own. Small customizations, shortcuts, and time saving techniques become second nature once you spend enough time fiddling around in your terminal. Since Git is my Version Control System of choice (due partially to its incredible popularity via GitHub), I like to spend lots of time optimizing my experience there.

Once you’ve become comfortable enough with Git to push, pull, add, and commit, and you feel like you’d like to pursue more, you can customize it to make it your own. A great way to start doing this is with aliases. Aliases can help you by providing shorthand commands so you can move faster and have to remember less of Git’s sometimes very murky UI. Luckily, Git makes itself easy to customize by setting global options in a file named .gitconfig in our home directory.

Quick note: for me, the home directory is /Users/jlembeck; you can get there on OS X or most any other Unix platform by typing cd ~ and hitting enter or return. On Windows, if you’re using PowerShell, you can get there with the same command, and if you’re not using PowerShell, cd %userprofile% should do the trick.

Now, let’s take a look. First, open your .gitconfig file (from your home directory):

~/code/grunticon master* $ cd ~
~ $ open .gitconfig

You might see a file that looks similar to this:

[user]
  name = Jeff Lembeck
  email = your.email.address@host.com
[alias]
  st = status
  ci = commit
  di = diff
  co = checkout
  amend = commit --amend
  b = branch

Let’s look at the different lines and what they mean.

[user]
  name = Jeff Lembeck
  email = your.email.address@host.com

First up, the global user configuration. This is what Git references to say who you are when you make commits.

[alias]
  st = status
  ci = commit
  di = diff
  co = checkout
  amend = commit --amend
  b = branch

Following the user information is what we’re here for, aliases.

Any command given in that screen is prefaced with git. For example, git st is an alias for git status and git ci is git commit. This allows you to save a little time while you’re typing out commands. Soon, the muscle memory kicks in and git ci -m “Update version to 1.0.2” becomes your keystroke-saving go-to.

Ok, so aliases can be used to shorten commands you normally type and that’s nice, but a lot of people don’t really care about saving 10 keystrokes here and there. For them, I submit the use case of aliases for those ridiculous functions that you can never remember how to do. As an example, let’s make one for learning about a file that was deleted. I use this all of the time.

Now, to check the information on a deleted file, you can use git log --diff-filter=D -- path/to/file. Using this information I can create an alias.

d = log --diff-filter=D -- $1

Let’s break that down piece by piece.

This should look pretty familiar. It is almost the exact command from above, with a few changes. The first change you’ll notice is that it is missing git. Since we are in the context of git, it is assumed in the alias. Next, you’ll see a $1; this allows you to pass an argument into the alias command, and it will be referenced there.

Now, with an example. git d lib/fileIDeleted.js. d is not a normal command in git, so git checks your config file for an alias. It finds one. It calls git log --diff-filter=D -- $1. And passes the argument lib/fileIDeleted.js into it. That will be the equivalent of calling git log --diff-filter=D -- lib/fileIDeleted.js.

Now you never have to remember how to do that again. Time to celebrate the time you saved that would normally be spent on Google trying to figure out how to even search for this. I suggest ice cream.

For further digging into this stuff: I got most of my ideas from Gary Bernhardt’s wonderful dotfiles repository. I strongly recommend checking out dotfiles repos to see what wild stuff you can do out there with your command line. Gary’s is an excellent resource and Mathias’s might be the most famous. To learn more about Git aliases from the source, check them out in the Git documentation.

Nishant Kothary on the Human Web: The Politics of Feedback

 ∗ A List Apart: The Full Feed

“Were you going for ‘not classy’? Because if you were, that’s cool. This isn’t classy like some of your other work,” said my wife, glancing at a long day’s work on my screen.

“Yep. That’s what I was going for!” I responded with forced cheer. I knew she was right, though, and that I’d be back to the drawing board the next morning.

This is a fairly typical exchange between us. We quit our jobs last year to bootstrap an app (for lack of a better word) that we’re designing and building ourselves. I’m the front-end guy, she’s the back-end girl. And currently, she’s the only user who gives me design feedback. Not because it’s hard to find people to give you feedback these days; we all know that’s hardly the case. She’s the only one providing feedback because I think that’s actually the right approach here.

I realize this flies in the face of conventional wisdom today, though. From VC’s and startup founders emphatically endorsing the idea that a successful entrepreneur is characterized by her willingness—scratch that: her obsession with seeking out feedback from anyone willing to give it, to a corporate culture around “constructive” feedback so pervasive that the seven perpendicular lines-drawing Expert can have us laughing and crying with recognition, we’ve come to begrudgingly accept that when it comes to feedback—the more, the merrier.

This conventional wisdom flies in the face of some opposing conventional wisdom, though, that’s best captured by the adage, “Too many cooks spoil the broth.” Or if you’d prefer a far more contemporary reference, look no further than Steve Jobs when he talked to Business Week about the iMac back in ’98: “For something this complicated, it’s really hard to design products by focus groups. A lot of times, people don’t know what they (customers) want until you show it to them.”

So which is it? Should we run out and get as much feedback as possible? Or should we create in a vacuum? As with most matters of conventional wisdom, the answer is: Yes.

In theory, neither camp is wrong. The ability to place your ego aside and calmly listen to someone tell you why the color scheme of your design or the architecture of your app is wrong is not just admirable and imitable, but extremely logical. Quite often, it’s exactly these interactions that help preempt disasters. On the flip side, there is too much self-evident wisdom in the notion that, borrowing words from Michael Harris, “Our ideas wilt when exposed to scrutiny too early.” Indeed, some of the most significant breakthroughs in the world can be traced back to the stubbornness of an individual who saw her vision through in solitude, and usually in opposition to contemporary opinion.

In practice, however, we can trace most of our failures to a blind affiliation to one of the two camps. In the real world, the more-the-merrier camp typically leaves us stumbling through a self-inflicted field of feedback landmines until we step on one that takes with it our sense of direction and, often more dramatically, our faith in humanity. The camp of shunners, on the other hand, leads us to fortify our worst decisions with flimsy rationales that inevitably cave in on us like a wall of desolate Zunes.

Over the years I’ve learned that we’re exceptionally poor at determining whether the task at hand calls for truly seeking feedback about our vision, or simply calls for managing the, pardon my French, politics of feedback: ensuring that stakeholders feel involved and represented fairly in the process. Ninety-nine out of a hundred times, it is the latter, but we approach it as the former. And, quite expectedly, ninety-nine out of a hundred times the consequences are catastrophic.

At the root of this miscalculation is our repugnance at the idea of politics. Our perception of politics in the office—that thing our oh-so-despicable middle managers mask using words like “trade-off,” “diplomacy,” “partnership,” “process,” “metrics,” “review” and our favorite, “collaboration”—tracks pretty closely to our perception of governmental politics: it’s a charade that people with no real skills use to oppress us. What we conveniently forget is that politics probably leads to the inclusion of our own voice in the first place.

We deceive ourselves into believing that our voice is the most important one. That the world would be better served if the voices of those incompetent, non-technical stakeholders were muted or at the very least, ignored. And while this is a perfectly fine conclusion in some cases, it’s far from true for a majority of them. But this fact usually escapes most of us, and we frequently find ourselves clumsily waging a tense war on our clients and stakeholders: a war that is for the greater good, and thus, a necessary evil, we argue. And the irony of finding ourselves hastily forgoing a politically-savvy, diplomatic design process in favor of more aggressive (or worse, passive-aggressive) tactics is lost on us thanks to our proficiency with what Ariely dubs the fudge factor in his book The (Honest) Truth About Dishonesty: “How can we secure the benefits of cheating and at the same time still view ourselves as honest, wonderful people? As long as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvelous human beings. This balancing act is the process of rationalization, and it is the basis of what we’ll call the fudge factor theory.”

Whether we like it or not, we’re all alike: we’re deeply political and our level of self-deception about our own political natures is really the only distinguishing factor between us.

And the worst part is that politics isn’t even a bad thing.

On the contrary, when you embrace it and do it right, politics is a win-win, with you delivering your best work, and your clients, stakeholders, and colleagues feeling a deep sense of accomplishment and satisfaction as well. It’s hard to find examples of these situations, and even harder to drive oneself to search for them over the noise of the two camps, but there are plenty out there if you keep your eyes open. One of my favorites, particularly because the scenarios are in the form of video and have to do with design and development, comes in the form of the hit HGTV show Property Brothers. Starring 6'4" identical twins Drew (the “business guy” realtor) and Jonathan (the “designer/developer” builder), every episode is a goldmine for learning the right way to make clients, stakeholders, and colleagues (first-time home owners) a part of the feedback loop for a project (remodeling a fixer-upper) without compromising on your value system.

Now, on the off-chance you are actually looking for someone to validate your vision—say you’re building a new product for a market that doesn’t exist or is already saturated, or if someone specifically hired you to run with a radical new concept of your own genius (hey, it can happen)—it’ll be a little trickier. You will need feedback, and it’ll have to be from someone who is attuned to the kind of abstract thinking that would let them imagine and navigate the alternate universe that is so vivid in your mind. If you are able to find such a person, paint them the best picture you can with whatever tools are at your disposal, leave your ego at the door, and pay close attention to what they say.

But bear in mind that if they are unable to see your alternate universe, it’s hardly evidence that it’s just a pipe dream with no place in the real world. After all, at first not just the most abstract thinkers, but even the rest of us couldn’t imagine an alternate universe with the internet. Or the iPhone. Or Twitter. The list is endless.

For now, I’m exhilarated that there’s at least one person who sees mine. And I’d be a fool to ignore her feedback.

Only a poor student of history could fail to notice history's cycles. The future can't ...

 ∗ iRi

Only a poor student of history could fail to notice history's cycles. The future can't be foretold in detail, but asking the question "Where are the cycles taking us?" gives you a better chance of guessing general shapes than anything else I know.

So it's easy for a student of history to look out at the United States and guess that we're approaching a libertine peak, and that over the next couple of decades we should expect to see the pendulum swing away from the wild excesses of the Baby Boomers back in a more "conservative" direction.

But at my age, I've never lived through a shift. So had I guessed how the counter-libertine shift would occur last week, I would have guessed a gradual cultural waning of the libertines and a gradual cultural waxing of those of a more conservative bent, with the advocates not changing their own views but their relative influence changing over time.

Read the rest...
