Tech tip follow-up: Using the data Invoked with R's system command, (Fri, Jul 31st)

 ∗ SANS Internet Storm Center, InfoCON: green

In follow-up to yesterday's

Ask Dr. Web with Jeffrey Zeldman: If Ever I Should Leave You: Job Hunting For Web Designers and Developers

 ∗ A List Apart: The Full Feed

In our last installment, we discussed what to do when your boss is satisfied with third-party code that would make Stalin yak. This time out, we’ll discuss when, why, and how to quit your job.

When is the right time to leave your first job for something new? How do you know you’re ready to take the plunge?
Wet Behind The Ears

Dear Wet Behind:

From frying an egg to proposing marriage, you can never know for sure when it’s the right time to do anything—let alone anything as momentous as leaving your first job. First, search your heart: most times, you already know what you want to do. (Hint: if you’re thinking about leaving your job, you probably want to.) This doesn’t mean you should heedlessly stomp off to do what you want. Other factors must be carefully considered. But knowing what your heart wants is vital to framing a question that will provide your best answer.

So ask yourself, do I want to leave? And if the answer is yes, ask yourself why. Are you the only girl in a boys’ club? Perhaps the only one with a real passion for the web? Are other folks, including your boss, dialing it in? Have you lost your passion for the work? Are you dialing it in? Is the place you work political? Do your coworkers or boss undervalue you? Have you been there two years or more without a raise or a promotion? Most vital of all, are you still learning on the job?

Stagnation is fine for some jobs—when I was a dishwasher at The Earth Kitchen vegetarian restaurant, I enjoyed shutting off my brain and focusing on the rhythmic scrubbing of burnt pans, the slosh and swirl of peas and carrots in a soapy drain—but professionals, particularly web professionals, are either learning and growing or, like the love between Annie Hall and Alvy Singer, becoming a dead shark. If you’ve stopped learning on the job, it’s past time to look around.

Likewise for situations where you face on-the-job discrimination. Or where you’re the only one who cares about designing and building sites and applications that meet real human needs, and of which you can truly be proud. Or where, after three years of taking on senior-level tasks, and making mature decisions that helped the company, you’re still seen as entry-level because you came in as an intern—and first impressions are forever. Or where you will never be promoted, because the person above you is young, healthy, adored by the owner, or has burrowed in like a tick.

Some companies are smart enough to promote from within. These are the companies that tend to give you an annual professional development budget to attend conferences, buy books, or take classes; that encourage you to blog and attend meet-ups. Companies that ignore or actively discourage your professional growth are not places where you will succeed. (And in most cases, they won’t do that well themselves—although some bad places do attain a kind of financial success by taking on the same kinds of boring jobs over and over again, and hiring employees they can treat as chattel. But that ain’t you, babe.)

It’s important, when answering these questions about your organization and your place within it, to be ruthlessly honest with yourself. If you work alongside a friend whose judgment you trust, ask her what she thinks. It is all too easy, as fallible human beings, to believe that we should be promoted before we may actually be ready; to think that people are treating us unfairly when they may actually be supporting and mentoring us; to ignore valuable knowledge we pick up on the job because we think we should be learning something different.

If there’s no one at your workplace you can trust with these questions, talk to a solid friend, sibling, or love partner—one who is brave enough to tell you what you need to hear. Or check in with a professional—be they a recruiter, job counselor, yoga instructor, barista, or therapist. But be careful not to confide in someone who may have a vested interest in betraying your confidence. (For example, a recruiter who earns $100,000 per year in commissions from your company may not be the best person to talk to about your sense that said company grossly undervalues you.)

Assuming you have legitimate reasons to move on, it’s time to consider those other factors: namely, have you identified the right place to move on to? And have you protected yourself and your family by setting aside a small financial cushion (at least three months’ rent in the bank) and lining up a freelance gig?

Don’t just make a move to make a move—that’s how careers die. Identify the characteristics of the kind of place you want to work for. What kind of work do they do? If they are agencies, what do their former customers say about them? If friends work for them, what do they say about the place? What’s their company culture like? Do they boast a diverse workforce—diverse psychologically, creatively, and politically as well as physically? Is there a sameness to the kind of person they hire, and if so, will you fit in or be uncomfortable? If you’d be comfortable, might you be too comfortable (i.e. not learning anything new)? Human factors are every bit as important as the work, and, career-wise, more important than the money.

If five of your friends work for your current employer’s biggest competitor, don’t assume you can walk across the street and interview with that competitor. The competitor may feel honor-bound to tell your boss how unhappy you are—and that won’t do you any good. Your boss might also feel personally betrayed if you take a job with her biggest competitor, and that could burn a bridge.

Don’t burn any bridges you don’t have to. After all, you never know who you might work for—or who you might want to hire—five years from now. Leaving on good terms is as important as securing the right next job. Word of mouth is everything in this business, and one powerful enemy can really hurt your career. More importantly, regardless of what they can do for or against your career, it’s always best to avoid hurting others when possible. After all, everyone you meet is fighting their own hard battle, so why add to their burdens?

This isn’t to say you don’t have the right to work for anyone you choose who chooses you back. You absolutely have the right. Just be smart and empathetic about it.

In some places, with some bosses, you can honestly say you’re looking for a new job, and the boss will not only understand, she’ll actually recommend you for a good job elsewhere. But that saintly a boss is rare—and if you work for one, are you sure you want to quit? Most bosses, however professional they may be, take it personally when you leave. So be discreet while job hunting. Once you decide to take a new job, let your boss know well ahead of time, and be honest but helpful if they ask why you’re leaving—share reasons that are true and actionable and that, if listened to, could improve the company you’re leaving.

Lastly, before job hunting, line up those three months’ rent and that freelance gig. This protects you and your family if you work for a vindictive boss who fires employees he finds out are seeking outside jobs. Besides, having cash in the bank and a freelance creative challenge will boost your confidence and self-esteem, helping you do better in interviews.

A good job is like a good friend. But people grow and change, and sometimes even the best of friends must part. Knowing when to make your move will keep you ahead of the curve—and the axe. Happy hunting!

If Ever I Should Leave You: Job Hunting For Web Designers & Developers

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995


Source: If Ever I Should Leave You: Job Hunting For Web Designers and Developers · An A List Apart Column

The Designer As Writer and Public Speaker

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

I’M ALWAYS telling my colleagues and students that a designer who wants to make a difference and enjoy a long career must master writing and public speaking. I’ve seen way too many designers with far greater natural gifts than I possess struggle in early, mid-, and late career, because, despite their great talent and the thoughtful rigor of their work, they lack the practiced, confident communications ability that every designer needs to sell their best work.

Few designers currently working better exemplify the designer-as-communicator than Mike Monteiro, co-founder of Mule Design and author of two great design books (so far). If you’ve heard Monteiro speak, or if you’ve read his books, you already know this. If you haven’t, you’re in for a treat: last month, because he loves designers and clients, and with the blessing of his publisher, Mike posted an entire chapter of his latest book, You’re My Favorite Client, for your free reading pleasure; now he has done the same thing with his best-selling first book, Design Is A Job.

Chapter 1 of Design Is A Job, posted for free last night on Medium, poses the question, just what exactly is a designer, anyway? After dispensing with the myth of the magical pixie creative and explaining why it is destructive, Mike settles in to explain what a good designer actually does—and why design, in its role as societal gatekeeper, is a high calling with a tough ethical mandate.

Reading good writing about design, by someone who actually works as a designer, is a rare pleasure. What are you waiting for? Enjoy Chapter 1 of Design Is A Job.

(And, heck, if you want more, you can always buy the book.)

Top animation by Mike Essl for Mike Monteiro.

Publishing Versus Performance: Our Struggle for the Soul of the Web

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

MY SOUL is in twain. Two principles on which clued-in web folk heartily agree are coming more and more often into conflict—a conflict most recently thrust into relief by discussions around the brilliant Vox Media team, publishers of The Verge.

The two principles are:

  1. Building performant websites is not only a key differentiator that separates successful sites from those which don’t get read; it’s also an ethical obligation, whose fulfillment falls mainly on developers, but can only happen with the buy-in of the whole team, from marketing to editorial, from advertising to design.
  2. Publishing and journalism are pillars of civilized society, and the opportunity to distribute news and information via the internet (and to let anyone who is willing to do the work become a publisher) has long been a foundational benefit of the web. As the sad, painful, slow-motion decline of traditional publishing and journalism is being offset by the rise of new, primarily web-based publications and news organizations, the need to sustain these new publications and organizations—to “pay for the content,” in popular parlance—is chiefly being borne by advertising…which, however, pays less and less and demands more and more as customers increasingly find ways to route around it.

The conflict between these two principles is best summarized, as is often the case, by the wonderfully succinct Jeremy Keith (author, HTML5 For Web Designers). In his 27 July post, “On The Verge,” Jeremy takes us through prior articles beginning with Nilay Patel’s Verge piece, “The Mobile Web Sucks,” in which Nilay blames browsers and a nonexistent realm he calls “the mobile web” for the slow performance of websites built with bloated frameworks and laden with fat, invasive ad platforms—like The Verge itself.

“The Verge’s Web Sucks,” by Les Orchard, quickly countered Nilay’s piece, as Jeremy chronicles (“Les Orchard says what we’re all thinking”). Jeremy then points to a half-humorous letter of surrender posted by Vox Media’s developers, who announce their new Vox Media Performance Team in a piece facetiously declaring performance bankruptcy.

A survey of follow-up barbs and exchanges on Twitter concludes Jeremy’s piece (which you must read; do not settle for this sloppy summary). After describing everything that has so far been said, Mr Keith weighs in with his own opinion, and it’s what you might expect from a highly thoughtful, open-source-contributing, standards-flag-flying, creative developer:

I’m hearing an awful lot of false dichotomies here: either you can have a performant website or you have a business model based on advertising. …

Tracking and advertising scripts are today’s equivalent of pop-up windows. …

For such a young, supposedly-innovative industry, I’m often amazed at what people choose to treat as immovable, unchangeable, carved-in-stone issues. Bloated, invasive ad tracking isn’t a law of nature. It’s a choice. We can choose to change.

Me, I’m torn. As a 20-year-exponent of lean web development (yes, I know how pretentious that sounds), I absolutely believe that the web is for everybody, regardless of ability or device. The web’s strength lies precisely in its unique position as the world’s first universal platform. Tim Berners-Lee didn’t invent hypertext, and his (and his creation’s) genius doesn’t lie in the deployment of tags; it subsists in the principle that, developed rightly, content on the web is as accessible to the Nigerian farmer with a feature phone as it is to a wealthy American sporting this year’s device. I absolutely believe this. I’ve fought for it for too many years, alongside too many of you, to think otherwise.

And yet, as a 20-year publisher of independent content (and an advertising professional before that), I am equally certain that content requires funding as much as it demands research, motivation, talent, and nurturing. Somebody has to pay our editors, writers, journalists, designers, developers, and all the other specialists whose passion and tears go into every chunk of worthwhile web content. Many of you reading this will feel I’m copping out here, so let me explain:

It may indeed be a false dichotomy that “either you can have a performant website or you have a business model based on advertising” but it is also a truth that advertisers demand more and more for their dollar. They want to know what page you read, how long you looked at it, where on the web you went next, and a thousand other invasive things that make thoughtful people everywhere uncomfortable—but are the price we currently pay to access the earth’s largest library.

I don’t like this, and I don’t do it in the magazine I publish, but A List Apart, as a direct consequence, will always lack certain resources to expand its offerings as quickly and richly as we’d like, or to pay staff and contributors at anything approaching the level that Vox Media, by accepting a different tradeoff, has achieved. (Let me also acknowledge ALA’s wonderful sponsors and our longtime partnership with The Deck ad network, lest I seem to speak from an ivory tower. Folks who’ve never had to pay for content cannot lay claim to moral authority on this issue; untested virtue is not, and so on.)

To be clear, Vox Media could not exist if its owners had made the decisions A List Apart made in terms of advertising—and Vox Media’s decisions about advertising are far better, in terms of consumer advocacy and privacy, than those made by most web publishing groups. Also to be clear, I don’t regret A List Apart’s decisions about advertising—they are right for us and our community.

I know and have worked alongside some of the designers, developers, and editors at Vox Media; you’d be proud to work with any of them. I know they are painfully aware of the toll advertising takes on their site’s performance; I know they are also doing some of the best editorial and publishing work currently being performed on the web—which is what happens when great teams from different disciplines get together to push boundaries and create something of value. This super team couldn’t do their super work without salaries, desks, and computers; acquiring those things meant coming to some compromise with the state of web advertising today. (And of course it was the owners, and not the employees, who made the precise compromise to which Vox Media currently adheres.)

Put a gun to my head, and I will take the same position as Jeremy Keith. I’ll even do it without a gun to my head, as my decisions as a publisher probably already make clear. And yet, two equally compelling urgencies in my core being—love of web content, and love of the web’s potential—make me hope that web and editorial teams can work with advertisers going forward, so that one day soon we can have amazing content, brilliantly presented, without the invasive bloat. In the words of another great web developer I know, “Hope is a dangerous currency—but it’s all I’ve got.”

Also published in Medium.

GopherCon 2015 Roundup

 ∗ The Go Programming Language Blog

A few weeks ago, Go programmers from around the world descended on Denver, Colorado for GopherCon 2015. The two-day, single-track conference attracted more than 1,250 attendees—nearly double last year's number—and featured 22 talks presented by Go community members.

The Cowboy Gopher (a toy given to each attendee) watches over the ranch.
Photograph by Nathan Youngman. Gopher by Renee French.

Today the organizers have posted the videos online so you can now enjoy the conference from afar:

Day 1:

  • Go, Open Source, Community — Russ Cox (video) (text)
  • Go kit: A Standard Library for Distributed Programming — Peter Bourgon (video) (slides)
  • Delve Into Go — Derek Parker (video) (slides)
  • How a complete beginner learned Go as her first backend language in 5 weeks — Audrey Lim (video) (slides)
  • A Practical Guide to Preventing Deadlocks and Leaks in Go — Richard Fliam (video)
  • Go GC: Solving the Latency Problem — Rick Hudson (video) (slides)
  • Simplicity and Go — Katherine Cox-Buday (video) (slides)
  • Rebuilding in Go - an opinionated rewrite — Abhishek Kona (video) (slides)
  • Prometheus: Designing and Implementing a Modern Monitoring Solution in Go — Björn Rabenstein (video) (slides)
  • What Could Go Wrong? — Kevin Cantwell (video)
  • The Roots of Go — Baishampayan Ghose (video) (slides)

Day 2:

  • The Evolution of Go — Robert Griesemer (video) (slides)
  • Static Code Analysis Using SSA — Ben Johnson (video) (slides)
  • Go on Mobile — Hana Kim (video) (slides)
  • Go Dynamic Tools — Dmitry Vyukov (video) (slides)
  • Embrace the Interface — Tomás Senart (video) (slides)
  • Uptime: Building Resilient Services with Go — Blake Caldwell (video) (slides)
  • Cayley: Building a Graph Database — Barak Michener (video) (slides)
  • Code Generation For The Sake Of Consistency — Sarah Adams (video)
  • The Many Faces of Struct Tags — Sam Helman and Kyle Erf (video) (slides)
  • Betting the Company on Go and Winning — Kelsey Hightower (video)
  • How Go Was Made — Andrew Gerrand (video) (slides)

The hack day was also a ton of fun, with hours of lightning talks and a range of activities from programming robots to a Magic: the Gathering tournament.

Huge thanks to the event organizers Brian Ketelsen and Eric St. Martin and their production team, the sponsors, the speakers, and the attendees for making this such a fun and action-packed conference. Hope to see you there next year!

Go, Open Source, Community

 ∗ The Go Programming Language Blog


[This is the text of my opening keynote at Gophercon 2015. The video is available here.]

Thank you all for traveling to Denver to be here, and thank you to everyone watching on video. If this is your first Gophercon, welcome. If you were here last year, welcome back. Thank you to the organizers for all the work it takes to make a conference like this happen. I am thrilled to be here and to be able to talk to all of you.

I am the tech lead for the Go project and the Go team at Google. I share that role with Rob Pike. In that role, I spend a lot of time thinking about the overall Go open source project, in particular the way it runs, what it means to be open source, and the interaction between contributors inside and outside Google. Today I want to share with you how I see the Go project as a whole and then based on that explain how I see the Go open source project evolving.

Why Go?

To get started, we have to go back to the beginning. Why did we start working on Go?

Go is an attempt to make programmers more productive. We wanted to improve the software development process at Google, but the problems Google has are not unique to Google.

There were two overarching goals.

The first goal is to make a better language to meet the challenges of scalable concurrency. By scalable concurrency I mean software that deals with many concerns simultaneously, such as coordinating a thousand back end servers by sending network traffic back and forth.

Today, that kind of software has a shorter name: we call it cloud software. It's fair to say that Go was designed for the cloud before clouds ran software.

The larger goal is to make a better environment to meet the challenges of scalable software development, software worked on and used by many people, with limited coordination between them, and maintained for years. At Google we have thousands of engineers writing and sharing their code with each other, trying to get their work done, reusing the work of others as much as possible, and working in a code base with a history dating back over ten years. Engineers often work on or at least look at code originally written by someone else, or that they wrote years ago, which often amounts to the same thing.

That situation inside Google has a lot in common with large scale, modern open source development as practiced on sites like GitHub. Because of this, Go is a great fit for open source projects, helping them accept and manage contributions from a large community over a long period of time.

I believe much of Go's success is explained by the fact that Go is a great fit for cloud software, Go is a great fit for open source projects, and, serendipitously, both of those are growing in popularity and importance in the software industry.

Other people have made similar observations. Here are two. Last year, Donnie Berkholz wrote about “Go as the emerging language of cloud infrastructure,” observing that “[Go's] marquee projects ... are cloud-centric or otherwise made for dealing with distributed systems or transient environments.”

This year, the article “Why Golang is doomed to succeed” pointed out that this focus on large-scale development was possibly even better suited to open source than to Google itself: “This open source fitness is why I think you are about to see more and more Go around ...”

The Go Balance

How does Go accomplish those things?

How does it make scalable concurrency and scalable software development easier?

Most people answer this question by talking about channels and goroutines, and interfaces, and fast builds, and the go command, and good tool support. Those are all important parts of the answer, but I think there is a broader idea behind them.

I think of that idea as Go's balance. There are competing concerns in any software design, and there is a very natural tendency to try to solve all the problems you foresee. In Go, we have explicitly tried not to solve everything. Instead, we've tried to do just enough that you can build your own custom solutions easily.

The way I would summarize Go's chosen balance is this: Do Less. Enable More.

Do less, but enable more.

Go can't do everything. We shouldn't try. But if we work at it, Go can probably do a few things well. If we select those things carefully, we can lay a foundation on which developers can easily build the solutions and tools they need, and ideally can interoperate with the solutions and tools built by others.


Let me illustrate this with some examples.

First, the size of the Go language itself. We worked hard to put in as few concepts as possible, to avoid the problem of mutually incomprehensible dialects forming in different parts of a large developer community. No idea went into Go until it had been simplified to its essence and then had clear benefits that justified the complexity being added.

In general, if we have 100 things we want Go to do well, we can't make 100 separate changes. Instead, we try to research and understand the design space and then identify a few changes that work well together and that enable maybe 90 of those things. We're willing to sacrifice the remaining 10 to avoid bloating the language, to avoid adding complexity only to address specific use cases that seem important today but might be gone tomorrow.

Keeping the language small enables more important goals. Being small makes Go easier to learn, easier to understand, easier to implement, easier to reimplement, easier to debug, easier to adjust, and easier to evolve. Doing less enables more.

I should point out that this means we say no to a lot of other people's ideas, but I assure you we've said no to even more of our own ideas.

Next, channels and goroutines. How should we structure and coordinate concurrent and parallel computations? Mutexes and condition variables are very general but so low-level that they're difficult to use correctly. Parallel execution frameworks like OpenMP are so high-level that they can only be used to solve a narrow range of problems. Channels and goroutines sit between these two extremes. By themselves, they aren't a solution to much. But they are powerful enough to be easily arranged to enable solutions to many common problems in concurrent software. Doing less—really doing just enough—enables more.
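That middle ground can be shown in a few lines. The fan-out below is only an illustrative sketch (the function names, the squaring "work," and the worker count are all invented for this example): goroutines share work through channels, with no explicit locking of the results.

```go
package main

import (
	"fmt"
	"sync"
)

// square stands in for real work, such as querying one of many back ends.
func square(n int) int { return n * n }

// fanOut distributes jobs across nWorkers goroutines and collects the
// results on a channel -- no mutexes or condition variables in sight.
func fanOut(jobs []int, nWorkers int) []int {
	in := make(chan int)
	out := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in {
				out <- square(n)
			}
		}()
	}

	// Close out once every worker has finished, so the collector loop ends.
	go func() {
		wg.Wait()
		close(out)
	}()

	// Feed the jobs in a separate goroutine so the collector below can drain out.
	go func() {
		for _, n := range jobs {
			in <- n
		}
		close(in)
	}()

	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(fanOut([]int{1, 2, 3, 4}, 3)) // order may vary between runs
}
```

The same channels compose just as easily into pipelines, worker pools, or timeouts; the primitives don't dictate which.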

Next, types and interfaces. Having static types enables useful compile-time checking, something lacking in dynamically-typed languages like Python or Ruby. At the same time, Go's static typing avoids much of the repetition of traditional statically typed languages, making it feel more lightweight, more like the dynamically-typed languages. This was one of the first things people noticed, and many of Go's early adopters came from dynamically-typed languages.

Go's interfaces are a key part of that. In particular, omitting the “implements” declarations of Java or other languages with static hierarchy makes interfaces lighter weight and more flexible. Not having that rigid hierarchy enables idioms such as test interfaces that describe existing, unrelated production implementations. Doing less enables more.
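A minimal sketch of that implicit satisfaction (all names here are invented for illustration): the consumer defines the interface it needs, and an unrelated type satisfies it simply by having the right method, with no declaration tying the two together.

```go
package main

import "fmt"

// Store is defined by the consumer. Nothing below says "implements Store";
// any type with a matching Get method satisfies it automatically.
type Store interface {
	Get(key string) (string, bool)
}

// mapStore could be an existing production type written long before Store.
type mapStore map[string]string

func (m mapStore) Get(key string) (string, bool) {
	v, ok := m[key]
	return v, ok
}

// Greet depends only on the interface, so a test can pass any fake Store.
func Greet(s Store, key string) string {
	name, ok := s.Get(key)
	if !ok {
		name = "stranger"
	}
	return "Hello, " + name
}

func main() {
	s := mapStore{"u1": "gopher"}
	fmt.Println(Greet(s, "u1")) // Hello, gopher
	fmt.Println(Greet(s, "u2")) // Hello, stranger
}
```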

Next, testing and benchmarking. Is there any shortage of testing and benchmarking frameworks in most languages? Is there any agreement between them?

Go's testing package is not meant to address every possible facet of these topics. Instead, it is meant to provide the basic concepts necessary for most higher-level tooling. Packages have test cases that pass, fail, or are skipped. Packages have benchmarks that run and can be measured by various metrics.

Doing less here is an attempt to reduce these concepts to their essence, to create a shared vocabulary so that richer tools can interoperate. That agreement enables higher-level testing software like Miki Tebeka's go2xunit converter, or the benchcmp and benchstat benchmark analysis tools.

Because there is agreement about the representation of the basic concepts, these higher-level tools work for all Go packages, not just ones that make the effort to opt in, and they interoperate with each other, in that using, say, go2xunit does not preclude also using benchstat, the way it would if these tools were, say, plugins for competing testing frameworks. Doing less enables more.

Next, refactoring and program analysis. Because Go is for large code bases, we knew it would need to support automatic maintenance and updating of source code. We also knew that this topic was too large to build in directly. But we knew one thing that we had to do. In our experience attempting automated program changes in other settings, the most significant barrier we hit was actually writing the modified program out in a format that developers can accept.

In other languages, it's common for different teams to use different formatting conventions. If an edit by a program uses the wrong convention, it either writes a section of the source file that looks nothing like the rest of the file, or it reformats the entire file, causing unnecessary and unwanted diffs.

Go does not have this problem. We designed the language to make gofmt possible, we worked hard to make gofmt's formatting acceptable for all Go programs, and we made sure gofmt was there from day one of the original public release. Gofmt imposes such uniformity that automated changes blend into the rest of the file. You can't tell whether a particular change was made by a person or a computer. We didn't build explicit refactoring support. Establishing an agreed-upon formatting algorithm was enough of a shared base for independent tools to develop and to interoperate. Gofmt enabled gofix, goimports, eg, and other tools. I believe the work here is only just getting started. Even more can be done.

Last, building and sharing software. In the run up to Go 1, we built goinstall, which became what we all know as "go get". That tool defined a standard zero-configuration way to resolve import paths on sites like GitHub, and later a way to resolve paths on other sites by making HTTP requests. This agreed-upon resolution algorithm enabled other tools that work in terms of those paths, most notably Gary Burd's creation of godoc.org. In case you haven't used it, you go to godoc.org followed by any valid "go get" import path, and the web site will fetch the code and show you the documentation for it. A nice side effect of this has been that godoc.org serves as a rough master list of the Go packages publicly available. All we did was give import paths a clear meaning. Do less, enable more.

You'll notice that many of these tooling examples are about establishing a shared convention. Sometimes people refer to this as Go being “opinionated,” but there's something deeper going on. Agreeing to the limitations of a shared convention is a way to enable a broad class of tools that interoperate, because they all speak the same base language. This is a very effective way to do less but enable more. Specifically, in many cases we can do the minimum required to establish a shared understanding of a particular concept, like remote imports, or the proper formatting of a source file, and thereby enable the creation of packages and tools that work together because they all agree about those core details.

I'm going to return to that idea later.

Why is Go open source?

But first, as I said earlier, I want to explain how I see the balance of Do Less and Enable More guiding our work on the broader Go open source project. To do that, I need to start with why Go is open source at all.

Google pays me and others to work on Go, because, if Google's programmers are more productive, Google can build products faster, maintain them more easily, and so on. But why open source Go? Why should Google share this benefit with the world?

Of course, many of us worked on open source projects before Go, and we naturally wanted Go to be part of that open source world. But our preferences are not a business justification. The business justification is that Go is open source because that's the only way that Go can succeed. We, the team that built Go within Google, knew this from day one. We knew that Go had to be made available to as many people as possible for it to succeed.

Closed languages die.

A language needs large, broad communities.

A language needs lots of people writing lots of software, so that when you need a particular tool or library, there's a good chance it has already been written, by someone who knows the topic better than you, and who spent more time than you have to make it great.

A language needs lots of people reporting bugs, so that problems are identified and fixed quickly. Because of the much larger user base, the Go compilers are much more robust and spec-compliant than the Plan 9 C compilers they're loosely based on ever were.

A language needs lots of people using it for lots of different purposes, so that the language doesn't overfit to one use case and end up useless when the technology landscape changes.

A language needs lots of people who want to learn it, so that there is a market for people to write books or teach courses, or run conferences like this one.

None of this could have happened if Go had stayed within Google. Go would have suffocated inside Google, or inside any single company or closed environment.

Fundamentally, Go must be open, and Go needs you. Go can't succeed without all of you, without all the people using Go for all different kinds of projects all over the world.

In turn, the Go team at Google could never be large enough to support the entire Go community. To keep scaling, we need to enable all this "more" while doing less. Open source is a huge part of that.

Go's open source

What does open source mean? The minimum requirement is to open the source code, making it available under an open source license, and we've done that.

But we also opened our development process: since announcing Go, we've done all our development in public, on public mailing lists open to all. We accept and review source code contributions from anyone. The process is the same whether you work for Google or not. We maintain our bug tracker in public, we discuss and develop proposals for changes in public, and we work toward releases in public. The public source tree is the authoritative copy. Changes happen there first. They are only brought into Google's internal source tree later. For Go, being open source means that this is a collective effort that extends beyond Google, open to all.

Any open source project starts with a few people, often just one, but with Go it was three: Robert Griesemer, Rob Pike, and Ken Thompson. They had a vision of what they wanted Go to be, what they thought Go could do better than existing languages, and Robert will talk more about that tomorrow morning. I was the next person to join the team, and then Ian Taylor, and then, one by one, we've ended up where we are today, with hundreds of contributors.

Thank you to the many people who have contributed code or ideas or bug reports to the Go project so far. We tried to list everyone we could in our space in the program today. If your name is not there, I apologize, but thank you.

I believe the hundreds of contributors so far are working toward a shared vision of what Go can be. It's hard to put words to these things, but I did my best to explain one part of the vision earlier: Do Less, Enable More.

Google's role

A natural question is: What is the role of the Go team at Google, compared to other contributors? I believe that role has changed over time, and it continues to change. The general trend is that over time the Go team at Google should be doing less and enabling more.

In the very early days, before Go was known to the public, the Go team at Google was obviously working by itself. We wrote the first draft of everything: the specification, the compiler, the runtime, the standard library.

Once Go was open sourced, though, our role began to change. The most important thing we needed to do was communicate our vision for Go. That's difficult, and we're still working at it. The initial implementation was an important way to communicate that vision, as were the development work we led that resulted in Go 1 and the various blog posts, articles, and talks we've published.

But as Rob said at Gophercon last year, "the language is done." Now we need to see how it works, to see how people use it, to see what people build. The focus now is on expanding the kind of work that Go can help with.

Google's primary role is now to enable the community, to coordinate, to make sure changes work well together, and to keep Go true to the original vision.

Google's primary role is: Do Less. Enable More.

I mentioned earlier that we'd rather have a small number of features that enable, say, 90% of the target use cases, and avoid the orders of magnitude more features necessary to reach 99 or 100%. We've been successful in applying that strategy to the areas of software that we know well. But if Go is to become useful in many new domains, we need experts in those areas to bring their expertise to our discussions, so that together we can design small adjustments that enable many new applications for Go.

This shift applies not just to design but also to development. The role of the Go team at Google continues to shift more to one of guidance and less of pure development. I certainly spend much more time doing code reviews than writing code, more time processing bug reports than filing bug reports myself. We need to do less and enable more.

As design and development shift to the broader Go community, one of the most important things we the original authors of Go can offer is consistency of vision, to help keep Go Go. The balance that we must strike is certainly subjective. For example, a mechanism for extensible syntax would be a way to enable more ways to write Go code, but that would run counter to our goal of having a consistent language without different dialects.

We have to say no sometimes, perhaps more than in other language communities, but when we do, we aim to do so constructively and respectfully, to take that as an opportunity to clarify the vision for Go.

Of course, it's not all coordination and vision. Google still funds Go development work. Rick Hudson is going to talk later today about his work on reducing garbage collector latency, and Hana Kim is going to talk tomorrow about her work on bringing Go to mobile devices. But I want to make clear that, as much as possible, we aim to treat development funded by Google as equal to development funded by other companies or contributed by individuals using their spare time. We do this because we don't know where the next great idea will come from. Everyone contributing to Go should have the opportunity to be heard.


I want to share some evidence for this claim that, over time, the original Go team at Google is focusing more on coordination than direct development.

First, the sources of funding for Go development are expanding. Before the open source release, obviously Google paid for all Go development. After the open source release, many individuals started contributing their time, and we've slowly but steadily been growing the number of contributors supported by other companies to work on Go at least part-time, especially as it relates to making Go more useful for those companies. Today, that list includes Canonical, Dropbox, Intel, Oracle, and others. And of course Gophercon and the other regional Go conferences are organized entirely by people outside Google, and they have many corporate sponsors besides Google.

Second, the conceptual depth of Go development done outside the original team is expanding.

Immediately after the open source release, one of the first large contributions was the port to Microsoft Windows, started by Hector Chu and completed by Alex Brainman and others. More contributors ported Go to other operating systems. Even more contributors rewrote most of our numeric code to be faster or more precise or both. These were all important contributions, and very much appreciated, but for the most part they did not involve new designs.

More recently, a group of contributors led by Aram Hăvărneanu ported Go to the ARM 64 architecture. This was the first architecture port by contributors outside Google. This is significant because, in general, support for a new architecture requires more design work than support for a new operating system: there is more variation between architectures than between operating systems.

Another example is the introduction over the past few releases of preliminary support for building Go programs using shared libraries. This feature is important for many Linux distributions but not as important for Google, because we deploy static binaries. We have been helping guide the overall strategy, but most of the design and nearly all of the implementation has been done by contributors outside Google, especially Michael Hudson-Doyle.

My last example is the go command's approach to vendoring. I define vendoring as copying the source code of external dependencies into your tree to make sure that it doesn't disappear or change underfoot.

Vendoring is not a problem Google suffers, at least not the way the rest of the world does. We copy open source libraries we want to use into our shared source tree, record what version we copied, and only update the copy when there is a need to do so. We have a rule that there can only be one version of a particular library in the source tree, and it's the job of whoever wants to upgrade that library to make sure it keeps working as expected by the Google code that depends on it. None of this happens often. This is the lazy approach to vendoring.

In contrast, most projects outside Google take a more eager approach, importing and updating code using automated tools and making sure that they are always using the latest versions.

Because Google has relatively little experience with this vendoring problem, we left it to users outside Google to develop solutions. Over the past five years, people have built a series of tools. The main ones in use today are Keith Rarick's godep, Owen Ou's nut, and the gb-vendor plugin for Dave Cheney's gb.

There are two problems with the current situation. The first is that these tools are not compatible out of the box with the go command's "go get". The second is that the tools are not even compatible with each other. Both of these problems fragment the developer community by tool.

Last fall, we started a public design discussion to try to build consensus on some basics about how these tools all operate, so that they can work alongside "go get" and each other.

Our basic proposal was that all tools agree on the approach of rewriting import paths during vendoring, to fit with "go get"'s model, and also that all tools agree on a file format describing the source and version of the copied code, so that the different vendoring tools can be used together even by a single project. If you use one today, you should still be able to use another tomorrow.

Finding common ground in this way was very much in the spirit of Do Less, Enable More. If we could build consensus about these basic semantic aspects, that would enable "go get" and all these tools to interoperate, and it would enable switching between tools, the same way that agreement about how Go programs are stored in text files enables the Go compiler and all text editors to interoperate. So we sent out our proposal for common ground.

Two things happened.

First, Daniel Theophanes started a vendor-spec project on GitHub with a new proposal and took over coordination and design of the spec for vendoring metadata.

Second, the community spoke with essentially one voice to say that rewriting import paths during vendoring was not tenable. Vendoring works much more smoothly if code can be copied without changes.

Keith Rarick posted an alternate proposal for a minimal change to the go command to support vendoring without rewriting import paths. Keith's proposal was configuration-free and fit in well with the rest of the go command's approach. That proposal will ship as an experimental feature in Go 1.5 and will likely be enabled by default in Go 1.6. And I believe that the various vendoring tool authors have agreed to adopt Daniel's spec once it is finalized.

The result is that at the next Gophercon we should have broad interoperability between vendoring tools and the go command, and the design to make that happen was done entirely by contributors outside the original Go team.

Not only that, the Go team's proposal for how to do this was essentially completely wrong. The Go community told us that very clearly. We took that advice, and now there's a plan for vendoring support that I believe everyone involved is happy with.

This is also a good example of our general approach to design. We try not to make any changes to Go until we feel there is broad consensus on a well-understood solution. For vendoring, feedback and design from the Go community was critical to reaching that point.

This general trend toward both code and design coming from the broader Go community is important for Go. You, the broader Go community, know what is working and what is not in the environments where you use Go. We at Google don't. More and more, we will rely on your expertise, and we will try to help you develop designs and code that extend Go to be useful in more settings and fit well with Go's original vision. At the same time, we will continue to wait for broad consensus on well-understood solutions.

This brings me to my last point.

Code of Conduct

I've argued that Go must be open, and that Go needs your help.

But in fact Go needs everyone's help. And everyone isn't here.

Go needs ideas from as many people as possible.

To make that a reality, the Go community needs to be as inclusive, welcoming, helpful, and respectful as possible.

The Go community is large enough now that, instead of assuming that everyone involved knows what is expected, I and others believe that it makes sense to write down those expectations explicitly. Much like the Go spec sets expectations for all Go compilers, we can write a spec setting expectations for our behavior in online discussions and in offline meetings like this one.

Like any good spec, it must be general enough to allow many implementations but specific enough that it can identify important problems. When our behavior doesn't meet the spec, people can point that out to us, and we can fix the problem. At the same time, it's important to understand that this kind of spec cannot be as precise as a language spec. We must start with the assumption that we will all be reasonable in applying it.

This kind of spec is often referred to as a Code of Conduct. Gophercon has one, which we've all agreed to follow by being here, but the Go community does not. I and others believe the Go community needs a Code of Conduct.

But what should it say?

I believe the most important overall statement we can make is that if you want to use or discuss Go, then you are welcome here, in our community. That is the standard I believe we aspire to.

If for no other reason (and, to be clear, there are excellent other reasons), Go needs as large a community as possible. To the extent that behavior limits the size of the community, it holds Go back. And behavior can easily limit the size of the community.

The tech community in general and the Go community in particular is skewed toward people who communicate bluntly. I don't believe this is fundamental. I don't believe this is necessary. But it's especially easy to do in online discussions like email and IRC, where plain text is not supplemented by the other cues and signals we have in face-to-face interactions.

For example, I have learned that when I am pressed for time I tend to write fewer words, with the end result that my emails seem not just hurried but blunt, impatient, even dismissive. That's not how I feel, but it's how I can come across, and that impression can be enough to make people think twice about using or contributing to Go. I realized I was doing this when some Go contributors sent me private email to let me know. Now, when I am pressed for time, I pay extra attention to what I'm writing, and I often write more than I naturally would, to make sure I'm sending the message I intend.

I believe that correcting the parts of our everyday interactions, intended or not, that drive away potential users and contributors is one of the most important things we can all do to make sure the Go community continues to grow. A good Code of Conduct can help us do that.

We have no experience writing a Code of Conduct, so we have been reading existing ones, and we will probably adopt an existing one, perhaps with minor adjustments. The one I like the most is the Django Code of Conduct, which originated with another project called SpeakUp! It is structured as an elaboration of a list of reminders for everyday interaction.

"Be friendly and patient. Be welcoming. Be considerate. Be respectful. Be careful in the words that you choose. When we disagree, try to understand why."

I believe this captures the tone we want to set, the message we want to send, the environment we want to create for new contributors. I certainly want to be friendly, patient, welcoming, considerate, and respectful. I won't get it exactly right all the time, and I would welcome a helpful note if I'm not living up to that. I believe most of us feel the same way.

I haven't mentioned active exclusion based on or disproportionately affecting race, gender, disability, or other personal characteristics, and I haven't mentioned harassment. For me, it follows from what I just said that exclusionary behavior or explicit harassment is absolutely unacceptable, online and offline. Every Code of Conduct says this explicitly, and I expect that ours will too. But I believe the SpeakUp! reminders about everyday interactions are an equally important statement. I believe that setting a high standard for those everyday interactions makes extreme behavior that much clearer and easier to deal with.

I have no doubts that the Go community can be one of the most friendly, welcoming, considerate, and respectful communities in the tech industry. We can make that happen, and it will be a benefit and credit to us all.

Andrew Gerrand has been leading the effort to adopt an appropriate Code of Conduct for the Go community. If you have suggestions, or concerns, or experience with Codes of Conduct, or want to be involved, please find Andrew or me during the conference. If you'll still be here on Friday, Andrew and I are going to block off some time for Code of Conduct discussions during Hack Day.

Again, we don't know where the next great idea will come from. We need all the help we can get. We need a large, diverse Go community.

Thank You

I consider the many people releasing software for download using “go get,” sharing their insights via blog posts, or helping others on the mailing lists or IRC to be part of this broad open source effort, part of the Go community. Everyone here today is also part of that community.

Thank you in advance to the presenters who over the next few days will take time to share their experiences using and extending Go.

Thank you in advance to all of you in the audience for taking the time to be here, to ask questions, and to let us know how Go is working for you. When you go back home, please continue to share what you've learned. Even if you don't use Go for daily work, we'd love to see what's working for Go adopted in other contexts, just as we're always looking for good ideas to bring back into Go.

Thank you all again for making the effort to be here and for being part of the Go community.

For the next few days, please: tell us what we're doing right, tell us what we're doing wrong, and help us all work together to make Go even better.

Remember to be friendly, patient, welcoming, considerate, and respectful.

Above all, enjoy the conference.

2015 Summer Reading Issue

 ∗ A List Apart: The Full Feed

Summer is halfway over. Have you hid out for a day of reading yet? Grab a shady spot and a picnic blanket (or just park it in front of the nearest AC unit), turn off your notifications, and unwrap this tasty treat: our 2015 summer reader.

Refresh your mind, heart, and spirit with this curated list of articles, videos, and other goodies from the recent past—from A List Apart and across the web.

Which web do we want?

Is the web “a place to connect knowledge, people, and cats,” or do “hordes threaten all that we have built for one another”? Where will native-versus-web fights end up? And why are we all here, doing this work, anyhow?

From us

From elsewhere

Toward an inclusive industry

This web is what we make of it. We can use it to insult strangers in a comments field, or to fight for greater fairness and opportunity in our world. We are inspired by those who choose the path of inclusion:

From us

From elsewhere

Trying out new techniques

Today’s code is so complex no individual can master it all—but that also means there’s always something new to learn…like these new, niche, or just plain cool techniques.

From us

Speeding up

Big, lumbering websites, endless load times, and crappy experiences on mobile? No, thanks! Here’s to those in the trenches of performance, fighting the good fight.

From us

From elsewhere

Accessibility for everyone

“The power of the Web is in its universality,” Tim Berners-Lee once said—and that means working for all kinds of people, with all kinds of abilities. Let’s stop leaving accessibility for last, and instead start from a place that embraces the needs of all our users.

From us

From elsewhere

Working better, together

What comes after static comps and toss-it-over-the-wall processes? We’re still figuring that out—but one thing’s for sure: people are at the center.

From us

From elsewhere

Becoming mentors

The more we teach, the more we learn—and the more our industry benefits. Discover the joys of mentoring and the future of web education.

From us

From elsewhere

Getting content right

In the beginning was content—and it’s the core of every experience we design and build. Connect the right person to the right content at the right time using strategy, design, and writing.

From us

From elsewhere

The evolution of type

If not yet ubiquitous, sophisticated typography on the web is now at least possible. It continues to evolve apace—virtually anything we could do in print, we can now do on screens.

From us

From elsewhere

A List Apart Summer Reading

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

Our 2015 compilation of articles, blogs, and other gems from across the web: perfect summer reading, poolside and beyond.

Source: 2015 Summer Reading Issue · An A List Apart Article

Android Stagefright multimedia viewer prone to remote exploitation, (Tue, Jul 28th)

 ∗ SANS Internet Storm Center, InfoCON: green

Guest Diary: Xavier Mertens - Integrating VirusTotal within ELK, (Tue, Jul 28th)

 ∗ SANS Internet Storm Center, InfoCON: green

Redesigning the Apple Watch UI

 ∗ LukeW | Digital Product Design + Strategy

After wearing my Apple watch daily for the past two+ months, I've found myself wishing for a simpler interaction model for moving between content and apps. Here's what I'd propose and why.

The vast majority of my interactions with the Apple Watch involve notifications. I get a light tap on my wrist, raise my arm, and get a bit of useful information. This awareness of what's happening helps me stay connected to the real world and is the primary reason I enjoy wearing a smartwatch.

use case for smartwatches: awareness

Thanks to "long look" notifications, I'm also able to take action on some of them—mostly to triage things quickly. Much more rarely do I engage with Glances, and even less with apps. This creates a hierarchy of use: notifications, glances, apps.

priorities when designing for apple watch

Unfortunately, the Apple Watch's interaction model doesn't echo this hierarchy. Instead, there's a lot of emphasis on apps, which seems to create more UI than is necessary on your wrist. Consider, for starters, all the ways to access an app on the Apple Watch (not counting the special case of the Friends app).

ways to access apps on apple watch

This structure makes the times when I do use apps needlessly complex. As an example, the common use case of listening to music while working out requires moving between watch faces, glances, and apps.

switching between two tasks on apple watch

There's also an app-only way to accomplish the same task but with different results.

switching between two active apps on apple watch

So switching tasks using Glances requires a trip to the watch face and a swipe up but switching tasks using apps requires double-tapping the digital crown. All this is made more confusing when you compare the Music Glance and app: which is which and why does using each result in different OS-level behavior?

switching between two active apps on apple watch

Looking at the interaction model of the iPhone suggests an alternative. Why not echo that (now familiar) structure on the Apple Watch?

proposal for new interaction model on apple watch

comparing iOS  interaction model

comparing apple watch  interaction model

Comparing the current Apple Watch interaction model with this change illustrates the simplification. In the redesign, there's one layer for home screens (filled with apps and/or complications), one for notifications, and one for currently running apps.

Glances are replaced with the ability to scroll through active apps. But that doesn't mean "glance-able content" goes away. In fact, each app could have a "glance-like" home screen as its default (it could even be the Glance) but also display its last-used screen when appropriate (like after you've recently interacted with the app).

With this model, switching between two active apps is trivial: just swipe up from the bottom to move between them, with no confusion between Glances and apps. To return to the watch face, simply press the home button.

switching between two active apps on apple watch proposed

Android Wear has had a similar interaction model from the start. Just swipe up from the bottom of the watch face to scroll through active (relevant) apps. Then tap on any app to go deeper.

comparing apple watch and android wear interaction models

This approach also allows you to scroll through the content within each app easily. Want to go deeper? Simply tap a card to scroll within it, or swipe across to access other features in the app.

access app content within Android Wear

When wearing an Android Wear smartwatch, I found myself keeping up with more than I do when wearing the Apple Watch. A simple scroll up on Wear would give me the latest content from several apps ordered by relevance. In their current state, Glances on the Apple Watch don't give me that lightweight way of staying on top of the information I care about. Their inclusion in the Apple Watch interaction model seems, instead, to complicate moving between tasks (and apps).

Perhaps this is simply an artifact of what's possible with the first version of Apple Watch given the current long loading times for apps and Glances. If so, Apple's interaction model may well be a good fit when performance on the Watch improves.

Or perhaps third party complications will offer a better way to keep up with timely information. In which case, the multiple home screen model I proposed above could give you an effective way to move across multiple watch faces complete with distinct app launchers and complications.

3rd party complications on Apple Watch

Regardless of how things end up, it's hard to argue there's not room for improvement on the wrist.

Co-Design, Not Redesign, by Kevin M. Hoffman – An Event Apart Video

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

Kevin M. Hoffman

MOVE from the nightmare of design by committee to the joys of design collaboration.

In this 60-minute video captured live at An Event Apart Orlando: Special Edition, Kevin M. Hoffman explains how service design thinking, lean approaches to user experience, and co-design processes offer an alternative to the usual (expensive) design project frustrations, and deliver experiences to delight your users.

Source: An Event Apart News: Co-Design, Not Redesign, by Kevin M. Hoffman – An Event Apart Video

A Beautiful Life

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

LIZZIE VELASQUEZ, age 25, weighs 64 pounds. Born with a rare syndrome that prevents her from gaining weight, she was not expected to survive. Her parents took her home, raised her normally, and, when she turned five, sent her to kindergarten, where she discovered, through bullying, that she was different.

The bullying peaked when an adult male posted a photo of thirteen-year-old Lizzie labeled “World’s Ugliest Woman” on YouTube. The video got four million views. The uniformly unkind comments included sentiments like, “Do the world a favor. Put a gun to your head, and kill yourself.”

Rather than take the advice of anonymous cowards, Lizzie determined not to let their cruelty define her. Instead, as she reveals in this inspiring video captured at TEDxAustinWomen, Lizzie channeled the experience into a beautiful and fulfilling life.

An Introduction to Pointers for Go Programmers Not Coming from C Family Languages

 ∗ andlabs's blog

As co-designer Rob Pike noted in 2012, much to his surprise, most Go programmers come from languages like Python and Ruby, not from C or C++ or the like. As such, they have trouble with one of the most important properties of all these languages, including Go: pointers. Very high-level languages like Python don't usually make you worry about pointers, so it's understandable that these new Go users would have trouble. Unfortunately, Go runs at a lower level, and thus the relationship between variables and memory becomes very important.

This confusion usually manifests itself in the form of “what’s the difference between (t T) and (t *T) in method receivers?” asked on a near-daily basis in #go-nuts. I originally had a post planned talking about that specifically, but other people in #go-nuts argued that this is a problem of not understanding pointers, so I’ll start with this topic instead.
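
Since that question comes up so often, here is a minimal sketch of the difference (the Counter type and method names are illustrations of mine, not from any post): a value receiver gets a copy, a pointer receiver gets an address.

```go
package main

import "fmt"

type Counter struct{ n int }

// Value receiver: the method operates on a copy of the Counter,
// so the increment is invisible to the caller.
func (c Counter) IncValue() { c.n++ }

// Pointer receiver: the method operates through the Counter's address,
// so it modifies the caller's value.
func (c *Counter) IncPointer() { c.n++ }

func main() {
	c := Counter{}
	c.IncValue()
	fmt.Println(c.n) // 0: only the copy was incremented
	c.IncPointer()
	fmt.Println(c.n) // 1: the method changed c through its address
}
```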

The good thing about Go that'll make this post much easier to consume is that Go restricts what you can do with pointers quite heavily and won't let you do the things that'd cause the neighbor's cat to explode in other languages, so we won't have to go into the murky waters of C pointer usage. So if you're already crouching behind your chair in fear, don't worry, this won't hurt.

Thanks to buro9, Noeble, jescalan, bhenderson, and NHO in #go-nuts for input.

Data is stored in memory, memory is referenced by addresses, and a pointer is data that stores an address.

I… can’t really make that any less concise, so there’s the thesis statement as a topic header.

Let’s say you have

var x int16 = 5
var y float64 = 2.34

These two variables would be stored in your computer's memory. Memory is organized into byte-sized cells, and each cell has a numerical identifier called an address. For example, a possible configuration of the above could be:


The exact in-memory representation of the values is not important, just that the data is stored in memory, has a definite size (the number at the end of the type name is the size in bits), and has a definite address.

The address of a variable will not change throughout the lifetime of that variable. Knowing this, you’d probably start wondering if there is a way to reference the address of that variable from other parts of the program, so as to change the value of the variable from an outside function, for instance. And you’d be right: that’s what a pointer does.

The expression

&x

returns the address of x. Simple as that, no? The type of this expression is the same as the type of x but with a * prefixed; in this case, *int16. The * in the type is read “pointer to”, just as [] is read “slice of”.

So now let’s create a pointer:

var p *int16 = &x

If you print the value of p, you should see some random number, possibly in hexadecimal. This number is the address in memory. Again, its exact value is not important, apart from not being equal to zero. We’ll get to that later.

Here’s a picture of what p and x look like in memory.


Notice that a pointer takes a very small amount of memory. This makes pointers really useful if you want to carry around large structures of data: instead of copying all that data back and forth, just pass around a pointer!

So how do you get the original value out of a pointer? Easy: just prefix the pointer with * again:

if *p != x {
    panic("shouldn't happen")
}

(Some of you might wonder why the * goes before, but things like [] and . go after. This was a historical design decision.)

You can use pointers to change variables indirectly as well:

*p = 20
fmt.Println(x) // will print 20

This last bit is very important: it’s how you can change variables passed as parameters to functions!

func changeFloat(f *float64) {
    *f = 987.65
}

changeFloat(&y)
fmt.Println(y) // prints 987.65

You can compare two pointers to see if they represent the same thing. The pointers must point to the same type, otherwise the compiler will complain:

if p != &x {
    panic("shouldn't happen")
}
if p != &y {
    panic("shouldn't compile")
}

The special keyword nil can be used to represent an uninitialized pointer; that is, a pointer that points nowhere. You cannot get the thing pointed to by a nil pointer, because it points nowhere. To be specific, the address value of a nil pointer is always zero, which is guaranteed to panic at runtime if you try to access the memory there.

You can use the special nil value as an indicator, however:

// DoSomething does something.
// If options is nil, default options are used instead.
func DoSomething(options *Options) {
    // load default options
    if options != nil {
        // load specified options
    }
    // do work
}

nil is a typeless expression; the compiler will infer the proper type out of the rest of the expression. Because nil is the zero value for a pointer,

var z *string

sets z to nil.

Pointers don’t have to point only at variables, either. The built-in function new(T) creates a new object of type T and returns a pointer to that new object.

var q = new(string)
*q = "hello, world"

The object created by new() is initially given its type’s zero value.

That’s pretty much all you can do with (that is, once you have) pointers. Unlike C, Go doesn’t let you add pointers or subtract pointers or compare pointers using inequalities (that is, pointers in Go cannot be ordered). You can, however, use pointers as map keys; this might come in handy, so keep that in mind.

But Go does let you do two other special things involving pointers.

First, once you take the address of anything, that thing is kept alive as long as at least one pointer to that thing exists. It does not matter where, or what. It can be a local variable!

func NewThing() *Thing {
    var thing Thing
    // set thing's fields
    return &thing
}

This is completely legal in Go, and works as expected; in fact, it’s even encouraged. Do this in most other languages and you’ll suddenly find that gravity now points in a Möbius strip shape or that fish can sing like Barry White or something.

Second, if you have a composite literal — one of those T{} things where T is a structure type or a slice type or a map type or a named version of one of those or something — then &T{} returns a pointer to that new composite literal instance. This isn’t just encouraged; it’s idiom:

address := &url.URL{
    Scheme: "http",
    Host:   "",
    Path:   resource,
}
fmt.Printf("%p\n", address) // prints the memory address stored in address

Finally, to access the fields of a pointer to a structure, you can just use . like you would with a normal struct:

if secure {
    address.Scheme = "https"
    address.Host += ":12345"
}

You can even omit the * when indexing a pointer to an array (this shorthand does not extend to pointers to slices, though).

Wait, Pointers to Slices? (Okay, I Lied; There’s One Pitfall After All)

Some of you may have been wondering why you don’t need to use a pointer when passing a slice into a function:

func change(slice []int) {
    slice[4] = 4
}

j := []int{1, 2, 3, 4, 5}
change(j)
fmt.Println(j) // prints [1 2 3 4 4]

This is because a slice value is a reference to an underlying array, so changing an element actually changes the underlying array, which can be shared by multiple slices. Or in other words, slice values are pointers themselves! However, the slice variable is not a pointer. That is, you can’t change j, you can only change j’s elements:

func change(slice []int) {
    slice = []int{6, 7, 8, 9, 10}
}

j := []int{1, 2, 3, 4, 5}
change(j)
fmt.Println(j) // prints [1 2 3 4 5]

This means you need to use a pointer to a slice if you want to change a slice’s length or capacity (such as with append()).

The same applies to maps and channels.

That’ll be all for this post. I won’t go into the unsafe package, which lets you break all the rules, but if you ever venture into the world of the underlying system components like I have, you’re going to need it. And you’ll need to be really damn good at it, too.

My ui package: lessons learned about threading and plans for the future

 ∗ andlabs's blog

Update: I’ve now drafted a formal proposal; see the bottom of the post for details.

Update 2: Some people are confused: this applies to all platforms, not just Microsoft Windows. When I say “windows”, I mean actual top-level windows.

In case you aren’t aware, for the past five months my main project has been ui, a package for Go which provides cross-platform GUI development backed by each platform’s native API (or the closest thing to native if not applicable) and with a focus on having a simple API. This was an interesting experience: it stood as a testing ground for my idea about using channels for events, showing me situations that both seemed to work and that flat-out didn’t. I learned the Windows API (something I was always curious about) and GTK+, revived my knowledge of Cocoa and extended it with how to build Cocoa GUIs without Interface Builder, and discovered that all three were not really as daunting as they appear at first glance. (In fact, I’ve now written moderately-sized programs in C for both the Windows API and GTK+, the latter of which I plan on writing a blog post about once finished.)

Now, I could go on about lessons learned when designing frameworks, such as the limitations of providing a common interface and that you can’t really provide the same feel across platforms. But that’s not what today’s post is about. Today’s post is about the very design of the package, why it’s such a time bomb, and a proposal for fixing it. These are realizations that came after I started the project and got the ball rolling with some of the package’s more advanced features. It’s a shame this has to happen after the package suddenly became popular, but that’s what you get, I suppose :/

Before we begin, I should say that the current feature set of the package encapsulates almost everything I wanted to write my own software with. I could theoretically start now, but if I’m also going to write an audio library, I’m now going to have to think twice…

This post and the new proposal were sparked by discussion with James Tucker (raggi) and Dominik Honnef (dominikh) in #go-nuts. Thanks to them for their input and brainstorming. In addition, thanks to mb0, Tv`, hyphenated, ChikkaChiCHi, lidaaa, and lid_ in #go-nuts and tadvi in the Gopher Academy Slack for providing input on drafts of this post.

The Original Plan

I had originally written callback-based Go code in two places: one with PortAudio and one with some IRC client framework that used callbacks for all possible IRC events. I was never particularly satisfied with either. With PortAudio, I found myself scratching my head as to how to hook up the callback with the rest of the program. Now don’t get me wrong, in C and C++, PortAudio works well, but in Go I wasn’t sure how to communicate between the sample pump and the rest of the program effectively. And the IRC client framework (which I used to write a bot whose details are irrelevant) just felt really really clunky.

Now flash back to 2007-2009. (Or if you were part of 9fans, don’t; I admit/consign to being an idiot at the time.) I had a fixation on old Bell Labs stuff, and I had read and re-read Rob Pike’s Newsqueak papers and talks and noticed how it uses channels (like Go’s channels) for handling the low-level user interface (displays and input devices).

My idea for package ui was to use the channel-based event model for user interface and widget events. For example:

for {
    select {
    case <-submit.Clicked:
        name := nameField.Text()
        number := numberField.Text()
        addToAddressBook(name, number)
    case <-numberField.Typing:
        _, err := strconv.Atoi(numberField.Text())
        if err != nil {
            // not a number; revert to the previous text
        }
        prevText = numberField.Text()
    }
}

This design seemed really elegant and simple to me. And I wanted to keep the API minimal, only providing the events I needed.

Over time, I realized a potential advantage of using channels: chaining equivalent events. You could say

menuItem.Clicked = toolbarButton.Clicked

to bind a menu item and a toolbar button so they would both do the same thing; you simply poll menuItem.Clicked. The ui.AppQuit mechanism (to deal with Mac OS X's application-centric (as opposed to window-centric) design) already works in this way.

The other big thing is that I wanted to be able to operate on the GUI from multiple goroutines. My original motive for this is long forgotten, but my history with threading and GUI toolkits was full of confusion and people being confusing, and I figured that being able to do things from arbitrary goroutines would allow a simpler program structure.

Pretty much every GUI toolkit demands that it be run on a single thread. So to make all this work, we would have the GUI thread used for all the backend work; the backend sends on all the event channels directly, and we use a master channel (called uitask in the package source) to issue other requests (like the nameField.Text() above). For this to work, these other requests need to be notified when they have been completed; we use temporary return-value channels for this.

Okay, So What Went Wrong?

First, when initially adding button clicks, I found that I needed to wrap the event send in its own goroutine in order to not hang up the GUI loop waiting for the event to be registered. (I forget if this was a bug that I actually hit or not. But this is important; keep it in mind.)

The biggest thing in my package right now is Area, which is a blank slate canvas for drawing on. There are three events that an Area would need to expose:

  • Paint, for when the OS wants to redraw a part of the Area,
  • Mouse, for when the user moves the mouse or clicks one of its buttons, and
  • Key, for when the user presses or releases a key on the keyboard

The killer here is Paint.

Paint sends the Area the rectangle that the OS wants to redraw and a channel to send back the image to draw. From my original plan for Area, an example usage would be

select {
case req := <-area.Paint:
    req.Out <- img.SubImage(req.Rect).(*image.NRGBA)
case e := <-area.Mouse:
    // draw on a mouse click, for instance
}

(Note that I originally wanted to use image.NRGBA; Areas now use image.RGBA instead.) So let's write this, and add a timer to test that everything's working as intended:

timer := time.NewTicker(1 * time.Second)
for {
    select {
    case req := <-area.Paint:
        req.Out <- img.SubImage(req.Rect).(*image.RGBA)
    case t := <-timer.C:
        label.SetText(t.String())
    }
}

Compile this, and... things stop working, if not total deadlock.


Here's what's going on: timer fires, so we call label.SetText(). But before we can issue the request to the GUI thread to change the text, the OS makes a redraw request. The redraw code then sends the redraw request across the Area's channel. Oops, we're still waiting for label.SetText() to finish running! Neither the area.Paint send nor the label.SetText() return channel receive completes, and we get deadlock.

Ultimately, I had to introduce an interface ui.AreaHandler that contains the three Area events as callbacks. This does have one advantage in that you can simply reuse the AreaHandler if you need multiple Areas that work the same.

But there's a flaw that's even more insidious, and that no one will possibly run across in a production program until I add a few more functions.

Let us assume that ui.Control has a Disable() method that disables the control, preventing the user from interacting with it. What's wrong with the following code?

case <-button.Clicked:
    stack.Disable()
    // do some work, then re-enable the controls

Answer: in the current design of package ui, by the time stack.Disable() is called, we are already out of the OS-side event handler for the button click! It is entirely possible that the user clicks the button multiple times in this short period of time (for instance, if they hold down a keyboard key to do the button press). Anyone who has ever double-posted on an online forum knows what happens next.

Single-threaded environments cannot be made multi-threaded with the use of channels alone. In fact, channels used this way are asynchronous, even though channels are supposed to be synchronous! There's no solution to this problem that still uses channels; the design is fatally flawed.

Go on, try to think of a solution. One of the three things I discovered above will kill you. If not, one of these three will:

  1. You might wonder if we could use a return channel like we did with Area.Paint, but run a new message processing loop on each event so other things can continue to run. But then you just get a linear chain of modal message loops which you cannot break in a non-linear fashion. That is, if a button is clicked and then some text entered, you must leave the entered text handler before you can leave the button press handler.
  2. And if you suggest that all events must be responded to before anything else can happen, then we lose the ability to call methods on objects in package ui from different goroutines. Plus, one window not responding or waiting for something that takes too long to run (such as a network request) will cause the entire program to lock up during this time. This isn't what we want.
  3. And having just a single event channel that delivers every single event in the entire system and waits for a response before continuing only leads to more work in a multi-goroutine/multi-window program.

Now the reason why everyone uses callbacks should be clear. But I wasn't interested in writing callbacks because that just felt too clumsy!

Window Handlers: The Proposal

Therefore, I propose that I move package ui to use a handler-based system, similar to net/http. To avoid handlers being turned into callbacks, the handler granularity is the Window: there is a single WindowEvent() function for every window that either receives an event itself or has a child control that receives an event. For example, the name/number example from the top of this post would be written

func windowHandler(e ui.Event, c ui.Control) {
    switch e {
    case Clicked:
        name := nameField.Text()
        number := numberField.Text()
        addToAddressBook(name, number)
    case Typing:
        if c != numberField {
            return
        }
        _, err := strconv.Atoi(numberField.Text())
        if err != nil {
            // not a number; revert to the previous text
        }
        prevText = numberField.Text()
    }
}

ui.Go(), which is what performs the main event loop, would need to be called on the main thread still (as Mac OS X demands it) but rather than taking a main() to run in its own goroutine, it would take nothing. You would do all your GUI initialization and opening in main(), just like with GTK+.

By making handlers window-global, there's also the advantage that you can build programs by structuring around windows. For instance, you can have a MainWindow struct that contains all the controls and other metadata for the main window, and have its window handler be a method. Or for the example above:

type AddContactWindow struct {
    nameField   *ui.LineEdit
    numberField *ui.LineEdit
    prevText    string
}

func (w *AddContactWindow) windowHandler(e ui.Event, c ui.Control) {
    switch e {
    case Clicked:
        name := w.nameField.Text()
        number := w.numberField.Text()
        addToAddressBook(name, number)
    case Typing:
        if c != w.numberField {
            return
        }
        _, err := strconv.Atoi(w.numberField.Text())
        if err != nil {
            // not a number; revert to the previous text
        }
        w.prevText = w.numberField.Text()
    }
}

Furthermore, ui.AppQuit will be obviated: we can just have the event handler that handles ui.AppQuit ask to close all the windows. Closing a window would now require you to send back whether the window should be closed, or at least provide a way to cancel the closing. If one window declines, the quit is aborted. Platforms that don't use ui.AppQuit won't need special casing with this design.

There are two issues here.

First is that opening a window would still need to be done on the GUI thread. However, this would be the only situation where calling across goroutines would be necessary.

Second is that we lose the original goal of allowing any goroutine to touch controls. However, with the struct-based handler I just described above, this might not really be much of an issue anymore, as just like with net/http, the programmer would be responsible for treating each window event as if it was running in its own little world. Perhaps dropping this goal is necessary to fix the design...

Worse, however, is that you won't be able to send your own events to windows using channels anymore. Perhaps I could provide a method func CustomEvent(e interface{}) which allows you to send arbitrary messages to a window. I don't know if I like this approach or not...

What do you all think? (That's why this is a proposal.) I'll continue working on package ui and my various other projects in the meantime, but there will come a time when I have to implement something.

Thank you for your understanding.

Update: I now have a drafted formal proposal for the new API. You can read it here. Please comment below this blog post to leave input and opinions. Thanks again!

Mark Llobrera · Professional Amateurs: Memory Management

 ∗ A List Apart: The Full Feed

When I was starting out as a web designer, one of my chief joys was simply observing how my mentors went about their job—the way they prepared for projects, the way they organized their work. I knew that it would take a while for my skills to catch up to theirs, but I had an inkling that developing a foundation of good work habits was something that would stay with me throughout my career.

Many of those habits centered around creating a personal system for organizing all the associated bits and pieces that contributed to the actual code I wrote. These days as I mentor Bluecadet’s dev apprentices, I frequently get asked how I keep all this information in my head. And my answer is always: I don’t. It’s simply not possible for me. I don’t have a “memory palace” like you’d see onscreen in Sherlock (or described in Hilary Mantel’s Wolf Hall). But I have tried a few things over the years, and what follows are a few habits and tools that have helped me.

Extend your memory

Remember this: you will forget. It may not seem like it, hammering away with everything so freshly-imprinted in your mind. But you will forget, at least long enough to drive you absolutely batty—or you’ll remember too late to do any good. So the trick is figuring out a way to augment your fickle memory.

The core of my personal memory system has remained fairly stable over the years: networked notes, lots of bookmarks, and a couple of buffer utilities. I’ve mixed and matched many different tools on top of those components, like a chef trying out new knives, but the general setup remains the same. I describe some OS X/iOS tools that I use as part of my system, but those are not a requirement and can be substituted with applications for your preferred operating system.

Networked notes

Think of these as breadcrumbs for yourself. You want to be able to quickly jot things down, true—but more importantly, you have to be able to find them once some time has passed.

I use a loose system of text notes, hooked up to a single folder in Dropbox. I settled on text for a number of reasons:

  • It’s not strongly tied to any piece of software. I use nvALT to create, name, and search through most of my notes, but I tend to edit them in Byword, which is available on both OS X and iOS.
  • It’s easily searchable, it’s extremely portable, and it’s lightweight.
  • It’s easily backed up.
  • I can scan my notes at the file system level in addition to within an app.
  • It’s fast. Start typing a word in the nvALT search bar and it whittles down the results. I use a system of “tags” when naming my files, where each tag is preceded by an @ symbol, like so: @bluecadet. Multiple tags can be chained together, for example: @bluecadet @laphamsquarterly. Generally I use anywhere from one to four tags per note. Common ones are a project tag, or a subject (say, @drupal or @wordpress). So a note about setting up Drupal on a project could be named “@bluecadet @drupal @projectname Setup Notes.txt.” There are lots of naming systems. I used this nvALT 101 primer by Michael Schechter as a jumping-off point, but I found it useful to just put my tags directly into the filename. Try a few conventions out and see what sticks for you.
[Screenshot: my file naming system for text notes.]

What do I use notes for? Every time I run into anything on a project, whether it’s something that confuses me, or something I just figured out, I put that in a note. If I have a commonly-used snippet for a project (say, a deploy command), then I put that in a note too. I try to keep the notes short and specific—if I find myself adding more and more to a note I will often break it out into separate notes that are related by a shared tag. This makes it easier to find things when searching (or even just scanning the file directory of all the notes).

Later on those notes could form the basis for a blog post, a talk, or simply a lunch-and-learn session with my coworkers.

Scratch pad

I have one special note that I keep open during the day, a “scratch pad” for things that pop into my brain while I’m focusing on a specific task. (Ironically, this is a tip that I read somewhere and failed to bookmark). These aren’t necessarily things that are related to what I’m doing at that moment—in fact, they might be things that could potentially distract me from my current task. I jot a quick line in the scratch pad and when I have a break I can follow up on those items. I like to write this as a note in nvALT instead of in a notebook because I can later copy-and-paste bits and pieces into specific, tagged notes.

Bookmarking: Pinboard

So notes cover my stuff, but what about everyone else’s? Bookmarks can be extremely useful for building up a body of links around a subject, but like my text notes they only started to have value when I could access them anywhere. I save my bookmarks to Pinboard. I used to use Delicious, but after its near-death, I imported my links to Pinboard when a friend gave me a gift subscription. I like that Pinboard gives you a (paid) option to archive your bookmarks, so you can retrieve a cached copy of a page if link rot has set in with the original.

Anything that could potentially help me down the line gets tagged and saved. When I’m doing research in the browser, I will follow links off Google searches, skim them quickly, and bookmark things for later, in-depth reading. When I’m following links off Twitter I dump stuff to Pocket, since I have Pinboard set to automatically grab all my Pocket articles. Before I enabled that last feature, I had some links in Pocket and some in Pinboard, so I had to look for things in two separate places.

Whatever system you use, make sure it’s accessible from your mobile devices. I use Pinner for iOS, which works pretty well with iOS 8’s share sheets. Every few days I sit down with my iPad and sift through the links that are auto-saved from Pocket and add more tags to them.

Buffers: clipboard history and command line lookup

These last two tips are both very small, but they’ve saved me so much time (and countless keystrokes) over the years, especially given how often cut-and-paste figures into my job.

Find a clipboard history tool that works for you. I suggest using the clipboard history in your launcher application of choice (I use Launchbar since it has one built in, but Alfred has one as part of its Powerpack). On iOS I use Clips (although it does require an in-app purchase to store unlimited items and sync them across all your devices). Having multiple items available means less time spent moving between windows and applications—you can grab several items, and then paste them back from your history. I’m excited to see how the recently-announced multitasking features in iOS 9 help in this regard. (It also looks like Android M will have multiple window support.) If you don’t use a launcher, Macworld has a fairly recent roundup of standalone Mac apps.

If you use the command line bash shell, CTRL+R is your friend: it will allow you to do a string search through your recent commands. Hit CTRL+R repeatedly to cycle through multiple matches in your command history. When you deal with repetitive terminal commands like I do (deploying to remote servers, for instance), it’s even faster than copying-and-pasting from a clipboard history. (zsh users: looks like there’s some key bindings involved.)

Finding your way

I like to tell Bluecadet’s dev apprentices that they should pay close attention to the little pieces that form the “glue” of their mentor’s process. Developing a personal way of working that transcends projects and code can assist you through many changes in roles and skills over the course of your career.

Rather than opting in to a single do-it-all tool, I’ve found it helpful to craft my own system out of pieces that are lightweight, simple, flexible, and low-maintenance. The tools I use are just layers on top of that system. For example, as I wrote this column I tested out two Markdown text editors without having to change how I organize my notes.

Your personal system may look very different from the one I’ve described here. I have colleagues who use Evernote, Google Docs, or Simplenote as their primary tool. The common thread is that they invested some time and found something that worked for them.

What’s missing? I still don’t have a great tool for compiling visual references. I’ve seen some colleagues use Pinterest and Gimmebar. I’ll close by asking: what are you using?

WHATWG Weekly: Path objects for canvas and creating paths through SVG syntax

 ∗ The WHATWG Blog

Jonas Sicking proposed an API for decoding ArrayBuffer objects as strings, and encoding strings as ArrayBuffer objects. The thread also touched on a proposal mentioned here earlier, StringEncoding. This is the mid-March WHATWG Weekly.

Revision 7023 added the Path object to HTML for use with the canvas element, and the next revision made it possible to actually use it:

var path = new Path()

A new method addPathData() (introduced in revision 7026) can be used to construct canvas paths using SVG path data. Revision 7025 meanwhile added ellipse support to canvas.

Tune in next week for more additions to canvas.

Developing Empathy

 ∗ A List Apart: The Full Feed

I recently wrote about how to have empathy for our teammates when working to make a great site or application. I care a lot about this because being able to understand and relate to others is vital to creating teams that work well together and makes it easier for us to reach people we don’t know.

I see a lot of talk about empathy, but I find it hard to take the more theory-driven talk and boil that down into things that I can do in my day-to-day work. In my last post, I talked about how I practice empathy with my team members, but after writing that piece I got to thinking about how I, as a developer in particular, can practice empathy with the users of the things I make as well.

Since my work is a bit removed from the design and user experience layer, I don’t always have interactions and usability front of mind while coding. Sometimes I get lost in the code as I focus on making the design work across various screen sizes in a compact, modular way. I have to continually remind myself of ways I can work to make sure the application will be easy to use.

To that end, there are things I’ve started thinking about as I code and even ways I’ve gone outside the traditional developer role to ensure I understand how people are using the software and sites I help make.


Accessibility

From a pure coding standpoint, I do as much as I can to make sure the things I make are accessible to everyone. This is still a work in progress for me, as I try to learn more and more about accessibility. Keeping the A11Y Project checklist open while I work means I can keep accessibility in mind. Because all the people who want to use what I’m building should be able to.

In addition to focusing on what I can do with code to make sure I’m thinking about all users, I’ve also tried a few other things.


Support work

In a job I had a few years ago, the entire team was expected to be involved with support. One of the best ways to understand how people were using our product was to read through the questions and issues they were having.

I was quite nervous at first, feeling like I didn’t have the knowledge or experience to adequately answer user emails, but I came to really enjoy it. I was lucky to be mentored by my boss on how to write those support messages better, by acknowledging and listening to the people writing in, and hopefully, helping them out when I could.

Just recently I spent a week doing support work for an application while my coworker was on vacation, reminding me yet again how much I learn from it. Since this was the first time I’d been involved with the app, I learned about the ways our users were getting tripped up, and saw pitfalls which I may never have thought about otherwise.

As I’ve done support, I’ve learned quite a bit. I’ve seen browser and operating system bugs, especially on devices that I may not test or use regularly. I’ve learned that having things like receipts on demand and easy flows for renewal is crucial to paid application models. I’ve found out about issues when English may not be the users’ native language—internationalization is huge and also hard. Whatever comes up, I’m always reminded (in a good way!), that not everyone uses an application or computer in the same ways that I do.

For developers specifically, support work also helps jolt us out of our worlds and reminds us that not everyone thinks the same way, nor should they. I’ve found that while answering questions, or having to explain how to do certain tasks, I come to realizations of ways we can make things better. It’s also an important reminder that not everyone has the technical know how I do, so helping someone learn to use Fluid to make a web app behave more like a native app, or even just showing how to dock a URL in the OS X dock can make a difference. And best of all? When you do help someone out, they’re usually so grateful for it—it’s always great to get the happy email in response.

Usability testing

Another way I’ve found to get a better sense of what users are doing with the application is to sit in on usability testing when possible. I’ve only been able to do this once, but it was eye-opening. There’s nothing better than watching someone use the software you’re making—or, in my case, stumble through trying to use it.

In the one instance where I was able to watch usability testing, I found it fascinating on several levels. We were testing a mobile website for an industry that has a lot of jargon. So, people were stumbling not just with the application itself, but also with the language—it wasn’t just the UI that caused problems, but the words the industry uses regularly that people didn’t understand. With limited space on a small screen, we’d shortened things up too much, and it was not working for many of the people trying to use the site.

Since I’m not doing user experience work myself, I don’t get the opportunity to watch usability testing often, but I’m grateful for the time I was able to, and I’m hopeful that I’ll be able to observe it again in the future. Like answering support emails, it puts you on the front lines with your users and helps you understand how to make things better for them.

Getting in touch with users, in whatever ways are available to you, makes a big difference in how you think about them. Rather than faceless people typing away at keyboards, users become people with names who want to use what you are helping to create—but who may not think exactly the same way you do, and for whom things may not work as they expect.

Even though many of us have roles where we aren’t directly involved in designing the interfaces of the sites and apps we build, we can all learn to be more empathetic to users. This matters. It makes us better at what we do, and we create better applications and sites because of it. When you care about the person at the other end, you want to write more performant, accessible code to make their lives easier. And when the entire team cares—not just the people who interact with users most on a day-to-day basis—the application can only get better as you iterate and improve it for your users.
