Infocon: green

 ∗ SANS Internet Storm Center, InfoCON: green

Asset Inventory: Do you have yours?, (Sun, Feb 1st)

 ∗ SANS Internet Storm Center, InfoCON: green

The year is hardly a month old and we have people racing around as if their hair is on fire, dema ...(more)...

Improving SSL Warnings, (Sun, Feb 1st)

 ∗ SANS Internet Storm Center, InfoCON: green

One of the things that has concerned me for the last few years is how we are slowly creating a cli ...(more)...

The Long Web – An Event Apart Video

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

Jeremy Keith at An Event Apart

IN THIS 60-minute video caught live at An Event Apart Austin, Jeremy Keith bets on HTML for the long haul:

The pace of change in our industry is relentless. New frameworks, processes, and technologies are popping up daily. If you’re feeling overwhelmed, you are not alone. Let’s take a step back and look at the over-arching trajectory of web design. Instead of focusing all our attention on the real-time web, let’s see which design principles and development approaches have stood the test of time. Those who cannot remember the past are condemned to repeat it, but those who can learn from the past will create a future-friendly web.

Enjoy The Long Web by Jeremy Keith – An Event Apart Video.


Follow An Event Apart on Twitter, Google+, or Facebook. And for the latest web design news plus special offers, discounts, and more, subscribe to the AEA mailing list.

Big Web Show № 125: “You’re My Favorite Client” with Mike Monteiro

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

Mike Monteiro and Jeffrey Zeldman

Monteiro and I talk design:

Designers Mike Monteiro (author, “You’re My Favorite Client”) and Jeffrey Zeldman discuss why humility is expensive, how to reassure the client at every moment that you know what you’re doing, and how to design websites that look as good on Day 400 as they do on Day 1. Plus old age, unsung heroines of the early web, and a book for designers to give to their clients.

5by5 | The Big Web Show № 125: “You’re My Favorite Client,” with Mike Monteiro.

Beware of Phishing and Spam Super Bowl Fans!, (Sat, Jan 31st)

 ∗ SANS Internet Storm Center, InfoCON: green

Beware of Super Bowl spam that may come to your email inbox this weekend. The big game is Sunday ...(more)...

Super Bowl

 ∗ xkcd.com

My hobby: Pretending to miss the sarcasm when people show off their lack of interest in football by talking about 'sportsball' and acting excited to find someone else who's interested, then acting confused when they try to clarify.

ISC StormCast for Friday, January 30th 2015 http://isc.sans.edu/podcastdetail.html?id=4335, (Fri, Jan 30th)

 ∗ SANS Internet Storm Center, InfoCON: green

...(more)...

Laura Kalbag on Freelance Design: How Big is Big Enough to Pick On?

 ∗ A List Apart: The Full Feed

I’m a firm believer in constructive criticism. As I said in a previous column, being professional in the way we give and receive criticism is a large part of being a designer.

However, criticism of the work has to be separated from criticism of the person. It can be all too easy to look at your own work and think “This is rubbish, so I’m rubbish,” or have somebody else say “This isn’t good enough” and hear “You’re not good enough.” Unfortunately, it’s also easy to go from critical to judgmental when we’re evaluating other people’s work.

Being able to criticize someone’s work without heaping scorn on them constitutes professionalism. I’ve occasionally been guilty of forgetting that: pumped up by my own sense of self-worth and a compulsion to give good drama to my followers on social networks, I’ve blurted unconstructive criticism into a text field and hit “send.”

Deriding businesses and products is a day-to-day occurrence on Twitter and Facebook, one that’s generally considered acceptable since real live individuals aren’t under attack. But we should consider that businesses come in all sizes, from the one-person shop to the truly faceless multinational corporation.

As Ashley Baxter wrote, we tend to jump on social networks as a first means of contact, rather than attempting to communicate our issues privately first. This naming and shaming perhaps stems from years of being let down by unanswered emails and being put on hold by juggernaut corporations. Fair enough: in our collective memory is an era when big business seemingly could ignore customer service without suffering many repercussions. Now that we as consumers have been handed the weapon of social media, we’ve become intent on no longer being ignored.

When we’re out for some online humiliation, we often don’t realize how small our targets can be. Some businesses of one operate under a company name rather than a personal name. And yet people who may approach a customer service issue differently if faced with an individual will be incredibly abusive to “Acme Ltd.” Some choice reviews from an app I regularly use:

Should be free

Crap. Total rip off I want my money back

Whoever designed this app should put a gun to there [sic] head. How complicated does if [sic] have to be…

In the public eye

We even have special rules that allow us to rationalize our behavior toward a certain class of individual. Somehow being a celebrity, or someone with many followers, means that cruel and unconstructive criticism doesn’t hurt—either because we mix up the special status of public figures in matters of libel with emotional invincibility, or because any hurt is supposed to be balanced out by positivity and praise from fans and supporters. Jimmy Kimmel’s Celebrities Read Mean Tweets shows hurt reactions veiled with humor. Harvard’s Q Guide allows students to comment anonymously on their professors and classes, so even Harvard profs get to read mean comments.

Why do we do it?

We love controversial declarations that get attention and give us all something to talk about, rally for, or rally against. Commentators who deliver incisive criticism in an entertaining way become leaders and celebrities.

Snarky jokes and sarcastic remarks often act as indirect criticisms of others’ opinions of the business. It might not be the critic’s intention from the beginning, but that tends to be the effect. No wonder companies try so hard to win back public favor.

Perhaps we’re quick to take to Twitter and Facebook to complain because we know that most companies will fall all over themselves to placate us. Businesses want to win back our affections and do damage control, and we’ve learned that we can get something out of it.

We’re only human

When an individual from a large company occasionally responds to unfair criticism, we usually become apologetic and reassure them that we have nothing personal against them. We need to remember that on the other side of our comments there are human beings, and that they have feelings that can be hurt too.

If we can’t be fair or nuanced in our arguments on social media, maybe we should consider writing longform critical pieces where we have more space and time for thoughtful arguments. That way, we could give our outbursts greater context (as well as their own URLs for greater longevity and findability).

If that doesn’t sound worthwhile, perhaps our outbursts just aren’t worth the bandwidth. Imagine that.

Blindly confirming XXE, (Thu, Jan 29th)

 ∗ SANS Internet Storm Center, InfoCON: green

Almost exactly a year ago I posted a diary called “Is XXE the new SQLi?”; you can read it at

Patent clauses

 ∗ One Big Fluke

Interesting discussion of the license Facebook is now using for their open-sourced code:

For those who aren't aware, it says

1. If facebook sues you, and you counterclaim over patents (whether about software or not), you will lose rights under all these patent grants.

So essentially you can't defend yourself.

This is different than the typical apache style patent grant, which instead would say "if you sue me over patents in apache licensed software x, you lose rights to software x" (IE it's limited to software, and limited to the thing you sued over)

2. It terminates if you challenge the validity of any facebook patent in any way. So no shitty software patent busting!

I think React is a great tool, but I didn't realize this license change happened on October 8th, 2014. Prior to that it was Apache 2.0 licensed (which is my license of choice). After that, React is licensed with BSD plus their special patent document (described above). Bummer.

Live Font Interpolation on the Web

 ∗ A List Apart: The Full Feed

We all want to design great typographic experiences. We also want to serve users on an increasing range of devices and contexts. But today’s webfonts tie our responsive sites and applications to inflexible type that doesn’t scale. As a result, our users get poor reading experiences and longer loading times from additional font weights.

As typographers, designers, and developers, we can solve this problem. But we’ll need to work together to make webfonts more systemized and context-aware. Live webfont interpolation—the modification of a font’s design in the browser—exists today and can serve as an inroad for using truly responsive typography.

An introduction to font interpolation

Traditional font interpolation is a process used by type designers to generate new intermediary fonts from a series of master fonts. Master fonts represent key archetypal designs across different points in a font family. By using math to automatically find the in-betweens of these points, type designers can derive additional font variants/weights from interpolation instead of designing each one manually. We can apply the same concept to our webfonts to serve different font variants for our users. For example, the H letter (H glyph) in this proof of concept (currently for desktop browsers) has light and heavy masters in order to interpolate a new font weight.

An interpolated H glyph using 50 percent of the light weight and 50 percent of the black weight. There can be virtually any number of poles and axes linked to combinations of properties, but in this example everything is being interpolated at once between two poles.

Normally these interpolated type designs end up being exported as separate fonts. For example, the TheSans type family contains individual font files for Extra Light, Light, Semi Light, Plain, SemiBold, Bold, Extra Bold, and Black weights generated using interpolation.

Individual font weights generated from interpolation from the TheSans type family.

Interpolation can alter more than just font weight. It also allows us to change the fundamental structure of a font’s glyphs. Things like serifs (or lack thereof), stroke contrast/direction, and character proportions can all be changed with the right master fonts.

A Noordzij cube showing an interpolation space with multiple poles and axes.

Although generating fonts with standard interpolation gives us a great deal of flexibility, the resulting webfont files are still static once they reach the browser. Because of this, we’ll need something more to work with the web’s responsiveness.

Web typography’s medium

Type is tied to its medium. Both movable type and phototypesetting methods influenced the way that type was designed and set in their time. Today, the inherent responsiveness of the web necessitates flexible elements and relative units—both of which are used when setting type. Media queries are used to make more significant adjustments at different breakpoints.

An approximation of typical responsive design breakpoints.

However, fonts are treated as another resource that needs to be loaded, instead of a living, integral part of a responsive design. Changing font styles and swapping out font weights with media queries represent the same design compromises inherent in breakpoints.

Breakpoints set by media queries often reflect best-case design tradeoffs—such as the key breakpoint where the navigation collapses under a menu icon. Likewise, siloed font files reflect best-case design tradeoffs—there’s no font in between TheMix Light and TheSans SemiLight.

Enter live webfont interpolation

Live webfont interpolation just means interpolating a font on the fly inside the browser, rather than exporting each variant as a separate file resource. By doing this, our fonts themselves can respond to their context. Because type reflows and is partially independent of a responsive layout, there’s less need to set abrupt points of change. Fonts can adhere to bending points—not just breaking points—to adapt type to the design.

Live interpolation doesn’t have to adhere to any specific font weight or design.

Precise typographic control

With live font interpolation, we can bring the same level of finesse to our sites and applications that type designers do. Just as we take different devices into account when designing, type designers consider how type communicates and performs at small sizes, low screen resolutions, large displays, economical body copy, and everything in between. These considerations are largely dependent on the typeface’s anatomy, which requires live font interpolation to be changed in the browser. Properties like stroke weight and contrast, counter size, x-height, and character proportions all affect how users read. These properties are typically balanced across a type family. For example, the JAF Lapture family includes separate designs for text, display, subheads, and captions. Live font interpolation allows a single font to fit any specific role. The same font can be optimized for captions set at .8em, body text set at 1.2em, or H1s set at 4.8em in a light color.

JAF Lapture Display (top) and JAF Lapture Text (bottom). Set as display type at 40 pixels, rendered on Chrome 38. Note how the display version uses thinner stroke weights and more delicate features that support its sharp, authoritative character without becoming too heavy at larger sizes. (For the best examples, compare live type in your own device and browser.)

JAF Lapture Text. Set as body copy at 16 pixels, rendered on Chrome. Note how features like the increased character width, thicker stroke weights, and shorter ascenders and descenders make the text version more appropriate for smaller body copy set in paragraph blocks.

JAF Lapture Display. Set as body copy at 16 pixels, rendered on Chrome.

Live font interpolation also allows precise size-specific adjustments to be made for the different distances at which a reader can perceive type. Finer typographic details can generally be removed at sizes where they won’t be perceived by the reader—on far-away billboards, for example, or in captions and disclaimers set at small sizes.

Adaptive relationships

Live font interpolation’s context-awareness builds inherent flexibility into the font’s design. A font’s legibility and readability adjustments can be linked to accessibility options. People with low vision who increase the default text size or zoom the browser can get type optimized for them. Fonts can start to respond to combinations of factors like viewport size, screen resolution, ambient light, screen brightness, and viewing distance. Live font interpolation offers us the ability to extend great reading experiences to everyone, regardless of how their context changes.

Live font interpolation on the web today

While font interpolation can be done with images or canvas, these approaches don’t allow text to be selectable, accessible via screen readers, or crawlable by search engines. SVG fonts offer accessible type manipulation, but they currently miss out on the properties that make a font robust: hinting and OpenType tables with language support, ligatures, stylistic alternates, and small caps. An SVG OpenType spec exists, but still suffers from limited browser support.

Unlike SVG files, which are made of easily modifiable XML, font file formats (ttf, otf, woff2, etc.) are compiled as binary files, complicating the process of making live changes. Sets of information describing a font are stored in tables. These tables can range from things like a head table containing global settings for the font to a name table holding author’s notes. Different font file formats contain different sets of information. For example, the OpenType font format, a superset of TrueType, contains additional tables supporting more features and controls (per Microsoft’s OpenType spec):

  • cmap: Character to glyph mapping
  • head: Font header
  • hhea: Horizontal header
  • hmtx: Horizontal metrics
  • maxp: Maximum profile
  • name: Naming table
  • OS/2: OS/2 and Windows-specific metrics
  • post: PostScript information

For live webfont interpolation, we need a web version of something like ttx, a tool for converting font files into a format we can read and parse.

Accessing font tables

Projects like jsfont and opentype.js allow us to easily access and modify font tables in the browser. Much like a game of connect-the-dots, each glyph (stored in the glyf table in OpenType) is made up of a series of points positioned on an x-y grid.

A series of numbered points on an H glyph. The first x-y coordinate set determines where the first point is placed on the glyph’s grid and is relative to the grid itself. After the first point, all points are relative to the point right before it. Measurements are set in font design units.
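
To make this concrete, here is a minimal sketch of reading a glyph’s data in the browser with opentype.js. The font path is a placeholder, and note that opentype.js exposes parsed path commands with absolute coordinates, rather than the raw relative deltas stored in the font file:

// A sketch of inspecting glyph data with opentype.js.
// 'fonts/Light.ttf' is a placeholder path.
opentype.load('fonts/Light.ttf', function (err, font) {
  if (err) { throw err; }
  var glyph = font.charToGlyph('H');   // look up the H glyph
  console.log(glyph.advanceWidth);     // horizontal metric, in design units
  glyph.path.commands.forEach(function (cmd) {
    // Each command is a move (M), line (L), curve (Q/C), or close (Z),
    // with coordinates on the glyph's x-y design grid.
    console.log(cmd.type, cmd.x, cmd.y);
  });
});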

Interpolation involves the modification of a glyph to fall somewhere between master font poles—similar to the crossfading of audio tracks. In order to make changes to glyphs on the web with live webfonts, we need to compare and move individual points.

The first points for the light and heavy H glyph have different x coordinates, so they can be interpolated.

Interpolating a glyph via coordinates is essentially a matter of averaging points. More robust methods exist, but aren’t available for the web yet.

Other glyph-related properties (like xMin and xMax) must also be interpolated in order to ensure the glyph bounding box is large enough to show the whole glyph. Additionally, padding—or bearings, in font terminology—can be added to position a glyph in its bounding box (the leftSideBearing and advanceWidth properties). This becomes important when considering the typeface’s larger system. Any combination of glyphs can end up adjacent to each other, so changes must be made considering their relationship to the typeface’s system as a whole.

Glyph properties. Both xMin/xMax and advanceWidth must be scaled in addition to the glyph’s coordinate points.
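
Here is a minimal sketch of that averaging, assuming two masters whose glyphs share identical command counts and ordering (a precondition for interpolation); the function names are illustrative:

// Naive linear interpolation between two compatible master glyphs.
// t = 0 yields the light master, t = 1 the heavy one.
function lerp(a, b, t) { return a + (b - a) * t; }

function interpolateGlyph(light, heavy, t) {
  var commands = light.path.commands.map(function (cmd, i) {
    var o = heavy.path.commands[i];
    var out = { type: cmd.type };
    // Interpolate every coordinate the command carries (curve commands
    // also carry control points x1/y1 and, for cubics, x2/y2).
    ['x', 'y', 'x1', 'y1', 'x2', 'y2'].forEach(function (k) {
      if (cmd[k] !== undefined) { out[k] = lerp(cmd[k], o[k], t); }
    });
    return out;
  });
  return {
    commands: commands,
    // The bounding box and bearings must move too, or the interpolated
    // glyph may be clipped or spaced incorrectly.
    xMin: lerp(light.xMin, heavy.xMin, t),
    xMax: lerp(light.xMax, heavy.xMax, t),
    advanceWidth: lerp(light.advanceWidth, heavy.advanceWidth, t)
  };
}

An instance at t = 0.5 reproduces the 50/50 weight shown earlier; any other t between 0 and 1 yields a new in-between design.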

Doing it responsibly

Our job is to give users the best experience possible—whether they’re viewing the design on a low-end mobile device, a laptop with high resolution, or distant digital signage. Both poorly selected and slowly loading fonts hinder the reading experience. With CSS @font-face as a baseline, fonts can be progressively enhanced with interpolation where appropriate. Users on less capable devices and browsers are best served with standard @font-face fonts.

After the first interpolation and render, we can set a series of thresholds where re-renders are triggered, to avoid constant recalculations for insignificant changes (like every single change in width as the browser is resized). Element queries are a natural fit here (pun intended) because they’re based at the module level, which is where type often lives within layouts. Because information for interpolation is stored with JavaScript, there’s no need to load an entirely different font—just the data required for interpolation. Task runners can also save this interpolation data in JavaScript during the website or application build process, and caching can be used to avoid font recalculations when a user returns to a view a second time.
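
As a sketch of that thresholding idea (the breakpoint values and the reinterpolate() hook are illustrative, not part of any library):

// Re-render only when the viewport crosses a threshold, instead of on
// every pixel of resize.
var thresholds = [320, 480, 768, 1024];
var lastBucket = -1;

function bucketFor(width) {
  var bucket = 0;
  for (var i = 0; i < thresholds.length; i++) {
    if (width >= thresholds[i]) { bucket = i + 1; }
  }
  return bucket;
}

window.addEventListener('resize', function () {
  var bucket = bucketFor(document.documentElement.clientWidth);
  if (bucket !== lastBucket) {
    lastBucket = bucket;
    // Map the bucket to a 0-1 interpolation value and re-render.
    reinterpolate(bucket / thresholds.length);
  }
});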

Another challenge is rendering interpolated type quickly and smoothly. Transitioning in an interpolated font lined up with the original can minimize the visual change. Other techniques, like loading JavaScript asynchronously, or just caching the font for next time if the browser cannot load the font fast enough, could also improve perceived performance.

As noted by Nick Sherman, all these techniques illustrate the need for a standardized font format that wraps everything up into a single sustainable solution. Modifying live files with JavaScript serves only as an inroad for future font formats that can adapt to the widely varied conditions they’re subjected to.

Fonts that interpolate well

Like responsive design, font interpolation requires considerations for the design at both extremes, as well as everything in the middle. Finch—the typeface in these examples—lends itself well to interpolation. David Jonathan Ross, Finch’s designer, explains:

Interpolation is easiest when letter structure, contrast, and shapes stay relatively consistent across a family. Some typeface designs (like Finch) lend themselves well to that approach, and can get by interpolating between two extremes. However, other designs need more care and attention when traversing axes like weight or width. For example, very high-contrast or low-contrast designs often require separately-drawn poles between the extremes to help maintain the relationship between thick and thin, especially as certain elements are forced to get thin, such as the crossbar of the lowercase ’e’. Additionally, some designs get so extreme that letter shape is forced to change, such as replacing a decorative cursive form of lowercase ’k’ with a less-confusing one at text sizes, or omitting the dollar sign’s bar in the heaviest weights.

Finch’s consistency across weights allows it to avoid a complex interpolation space—there’s no need for additional master fonts or rules to make intermediate changes between two extremes.

Glyphs also don’t have to scale linearly across an axis’s delta. Guidelines like Lucas De Groot’s interpolation theory help us increase the contrast between near-middle designs, which may appear too similar to the user.
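
The core of De Groot’s theory is that stem thicknesses should step geometrically rather than linearly, so adjacent weights appear evenly spaced to the eye. A small sketch of that calculation (the stem values are illustrative):

// Space intermediate stem weights on a geometric progression between
// a thin and a black master, per Lucas de Groot's interpolation theory.
function stemWeights(thinStem, blackStem, steps) {
  var ratio = Math.pow(blackStem / thinStem, 1 / (steps - 1));
  var weights = [];
  for (var i = 0; i < steps; i++) {
    weights.push(thinStem * Math.pow(ratio, i));
  }
  return weights;
}

// stemWeights(20, 220, 7) gives seven stems where each weight is a
// constant ~49% heavier than the previous, rather than a constant
// number of units heavier.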

A call to responsive typography

We already have the tools to make this happen. For example, jsfont loads a font file and uses the DataView API to create a modifiable font object that it then embeds through CSS @font-face. The newer project opentype.js has active contributors and a robust API for modifying glyphs.

As font designers, typographers, designers, and developers, the only way to take advantage of responsive typography on the web is to work together and make sure it’s done beautifully and responsibly. In addition to implementing live webfont interpolation in your projects, you can get involved in the discussion, contribute to projects like opentype.js, and let type designers know there’s a demand for interpolatable fonts.

Adobe Flash Update Available for CVE-2015-0311 & -0312, (Wed, Jan 28th)

 ∗ SANS Internet Storm Center, InfoCON: green

VMware Security Advisories - 1 New, 1 Updated, (Wed, Jan 28th)

 ∗ SANS Internet Storm Center, InfoCON: green

Adobe updates Security Advisory for Adobe Flash Player, Infocon returns to green, (Mon, Jan 26th)

 ∗ SANS Internet Storm Center, InfoCON: green

On Saturday, 24 JAN 2015, Adobe updated their

Link

 ∗ One Big Fluke

Seems that some folks are having problems using Open ID to post comments here. I've disabled that mode until I can figure it out. In the meantime, please use the other comment form. It'd be nice to hear from you. Sorry for the trouble!

Readying mgo for MongoDB 3.0

 ∗ Labix Blog

MongoDB 3.0 (previously known as 2.8) is right around the corner, and it’s time to release a few fixes and improvements on the mgo driver for Go to ensure it works fine with that new major server version. Compatibility is being preserved both with old applications and with old servers, so updating should be a smooth experience.

Release r2015.01.24 of mgo includes the following changes:


Support ReplicaSetName in DialInfo

DialInfo now offers a ReplicaSetName field that may contain the name of the MongoDB replica set being connected to. If set, the cluster synchronization routines will prevent communication with any server that does not report itself as part of that replica set.

Feature implemented by Wisdom Omuya.

MongoDB 3.0 support for collection and index listing

MongoDB 3.0 requires the use of commands for listing collections and indexes, and may report long results via cursors that must be iterated over. The CollectionNames and Indexes methods were adapted to support both the old and the new cases.

Introduced Collection.NewIter method

In the last few releases of MongoDB, a growing number of low-level database commands are returning results that include an initial set of documents and one or more cursor ids that should be iterated over for obtaining the remaining documents. Such results defeated one of the goals in mgo’s design: developers should be able to walk around the convenient pre-defined static interfaces when they must, so they don’t have to patch the driver when a feature is not yet covered by the convenience layer.

The introduced NewIter method solves that problem by enabling developers to create normal iterators by providing the initial batch of documents and optionally the cursor id for obtaining the remaining documents, if any.

Thanks to John Morales, Daniel Gottlieb, and Jeff Yemin, from MongoDB Inc, for their help polishing the feature.

Improved JSON unmarshaling of ObjectId

bson.ObjectId can now be unmarshaled correctly from an empty or null JSON string, when it is used as a field in a struct submitted for unmarshaling by the json package.

Improvement suggested by Jason Raede.

Remove GridFS chunks if file insertion fails

When writing a GridFS file, the chunks that hold the file content are written into the database before the document representing the file itself is inserted. This ensures the file is made visible to concurrent readers atomically, when it’s ready to be used by the application. If writing a chunk fails, the call to the file’s Close method will do a best effort to clean up previously written chunks. This logic was improved so that calling Close will also attempt to remove chunks if inserting the file document itself failed.

Improvement suggested by Ed Pelc.

Field weight support for text indexing

The new Index.Weights field allows providing a map of field name to field weight for fine tuning text index creation, as described in the MongoDB documentation.

Feature requested by Egon Elbre.

Fixed support for $** text index field name

Support for the special $** field name, which enables the indexing of all document fields, was fixed.

Problem reported by Egon Elbre.

Consider only exported fields on omitempty of structs

The implementation of bson’s omitempty feature was also taking the value of non-exported fields into account. This was fixed so that only exported fields are considered, which is both in line with the overall behavior of the package and also prevents crashes in cases where the field value cannot be evaluated.

Fix potential deadlock on Iter.Close

It was possible for Iter.Close to deadlock when the associated server was concurrently detected unavailable.

Problem investigated and reported by John Morales.

Return ErrCursor on server cursor timeouts

Attempting to iterate over a cursor that has timed out at the server side will now return mgo.ErrCursor.

Feature implemented by Daniel Gottlieb.

Support for collection repairing

The new Collection.Repair method returns an iterator that goes over all recovered documents in the collection, in a best-effort manner. This is most useful when there are damaged data files. Multiple copies of the same document may be returned by the iterator.

Feature contributed by Mike O’Brien.

Variable Fonts for Responsive Design

 ∗ A List Apart: The Full Feed

Choosing typefaces for use on the web today is a practice of specifying static fonts with fixed designs. But what if the design of a typeface could be as flexible and responsive as the layout it exists within?

The glass floor of responsive typography

Except for low-level font hinting and proof-of-concept demos like the one Andrew Johnson published earlier this week, the glyph shapes in modern fonts are restricted to a single, static configuration. Any variation in weight, width, stroke contrast, etc.—no matter how subtle—requires separate font files. This concept may not seem so bad in the realm of print design, where layouts are also static. On the web, though, this limitation is what I refer to as the “glass floor” of responsive typography: while higher-level typographic variables like margins, line spacing, and font size can adjust dynamically to each reader’s viewing environment, that flexibility disappears for lower-level variables that are defined within the font. Each glyph is like an ice cube floating in a sea of otherwise fluid design.

The “glass floor” of responsive typography
The continuum of responsive design is severed for variables below the “glass floor” in the typographic hierarchy.

Flattening of dynamic typeface systems

The irony of this situation is that so many type families today are designed and produced as flexible systems, with dynamic relationships between multiple styles. As Erik van Blokland explained during the 2013 ATypI conference:

If you design a single font, it’s an island. If you design more than one, you’re designing the relationships, the recipe.

Erik is the author of Superpolator, a tool for blending type styles across multiple dimensions. Such interpolation saves type designers thousands of hours by allowing them to mathematically mix design variables like weight, width, x-height, stroke contrast, etc.

Superpolator allows type designers to generate variations of a typeface mathematically by interpolating between a small number of master styles.

The newest version of Superpolator even allows designers to define complex conditional rules for activating alternate glyph forms based on interpolation numbers. For example, a complex ‘$’ glyph with two vertical strokes can be automatically replaced with a simplified single-stroke form when the weight gets too bold or the width gets too narrow.

Unfortunately, because of current font format limitations, all this intelligence and flexibility must be flattened before the fonts end up in the user’s hands. It’s only in the final stages of font production that static instances are generated for each interpolated style, frozen and detached from their siblings and parent except in name.

The potential for 100–900 (and beyond)

The lobotomization of dynamic type systems is especially disappointing in the context of CSS—a system that has variable stylization in its DNA. The numeric weight system that has existed in the CSS spec since it was first published in 1996 was intended to support a dynamic stylistic range from the get-go. This kind of system makes perfect sense for variable fonts, especially if you introduce more than just weight and the standard nine incremental options from 100 to 900. Håkon Wium Lie (the inventor of CSS!) agrees, saying:

One of the reasons we chose to use three-digit numbers [in the spec for CSS font-weight values] was to support intermediate values in the future. And the future is now :)

Beyond increased granularity for font-weight values, imagine the other stylistic values that could be harnessed with variable fonts by tying them to numeric values. Digital typographers could fine-tune typeface specifics such as x-height, descender length, or optical size, and even tie those values to media queries as desired to improve readability or layout.

Toward responsive fonts

It’d be hard to write about variable fonts without mentioning Adobe’s Multiple Master font format from the 1990s. It allowed smooth interpolation between various extremes, but the format was abandoned and is now mostly obsolete for typesetting by end-users. We’ll get back to Multiple Master later, but for now it suffices to say that—despite a meager adoption rate—it was perhaps the most widely used variable font format in history.

More recently, there have been a number of projects that touch on ideas of variable fonts and dynamic typeface adjustment. For example, Matthew Carter’s Sitka typeface for Microsoft comes in six size-specific designs that are selected automatically based on the size used. While the implementation doesn’t involve fluid interpolation between styles (as was originally planned), it does approximate the effect with live size-aware selections.

The Sitka type system by Matthew Carter for Microsoft
The Sitka type family, designed by Matthew Carter, automatically switches between optical sizes in Microsoft applications. From left to right: Banner, Display, Heading, Subheading, Text, Small. All shown at the same point size for comparison. Image courtesy of John Hudson / Tiro Typeworks.

There are also some options for responsive type adjustments on the web using groups of static fonts. In 2014 at An Event Apart Seattle, my colleague Chris Lewis and I introduced a project, called Font-To-Width, that takes advantage of large multi-width and multi-weight type families to fit pieces of text snugly within their containers. Our demo shows what I call “detect and serve” responsive type solutions: swapping static fonts based on the specifics of the layout or reading environment.

One of the more interesting recent developments in the world of variable font development was the publication of Erik van Blokland’s MutatorMath under an open source license. MutatorMath is the interpolation engine inside Superpolator. It allows for special kinds of font extrapolation that aren’t possible with Multiple Master technology. Drawing on masters for Regular, Condensed, and Bold styles, MutatorMath can calculate a Bold Condensed style. For an example of MutatorMath’s power, I recommend checking out some type tools that are utilizing it, like the Interpolation Matrix by Loïc Sander.

Loïc Sander’s Interpolation Matrix tool harnesses the power of Erik van Blokland’s MutatorMath

A new variable font format

All of these ideas seem to be leading to the creation of a new variable font format. Though none of the aforementioned projects offers a complete solution on its own, there are definitely ideas from all of them that could be adopted. Proposals for variable font formats are starting to show up around the web, too. Recently on the W3C Public Webfonts Working Group list, FontLab employee Adam Twardoch made an interesting proposal for a “Multiple Master webfonts resurrection.”

And while such a thing would help improve typographic control, it could also improve a lot of technicalities related to serving fonts on the web. Currently, accessing variations of a typeface requires loading multiple files. With a variable font format, a set of masters could be packaged in a single file, allowing not only for more efficient files, but also for a vast increase in design flexibility.

Consider, for example, how multiple styles from within a type family are currently served, compared to how that process might work with a variable font format.


Static fonts vs. variable fonts

                             With static fonts   With a variable font
Number of weights            3                   Virtually infinite
Number of widths             2                   Virtually infinite
Number of masters            6                   4*
Number of files              6                   1
Data @ 120 kB/master**       720 kB              480 kB
Download time @ 500 kB/s     1.44 sec            0.96 sec
Latency @ 100 ms/file        0.6 sec             0.1 sec
Total load time              2.04 sec            1.06 sec

*It is actually possible to use three masters to achieve the same range of styles, but it is harder to achieve the desired glyph shapes. I opted to be conservative for this test.

**This table presumes 120 kB per master for both static and variable fonts. In actual implementation, the savings for variable fonts compared with static fonts would likely be even greater due to reduction in repeated/redundant data and increased efficiency in compression.

A variable font would mean less bandwidth, fewer round-trips to the server, faster load times, and decidedly more typographic flexibility. It’s a win across the board. (The still-untested variable here is how much time might be taken for additional computational processing.)

But! But! But!

You may feel some skepticism about a new variable font format. In anticipation of that, I’ll address the most obvious questions.

This all seems like overkill. What real-world problems would be solved by introducing a new variable font format?

This could address any problem where a change in the reading environment would inform the weight, width, descender length, x-height, etc. Usually these changes are implemented by changing fonts, but there’s no reason you shouldn’t be able to build those changes around some fluid and dynamic logic instead. Some examples:

  • Condensing the width of a typeface for narrow columns
  • Subtly tweaking the weight for light type on a dark background
  • Showing finer details at large sizes
  • Increasing the x-height at small sizes
  • Adjusting the stroke contrast for low resolutions
  • Adjusting the weight to maintain the same stem thickness across different sizes
  • Adjusting glyphs set on a circle according to the curvature of the baseline. (Okay, maybe that’s pushing it, but why should manhole covers and beer coasters have all the fun?)

Multiple Master was a failure. What makes you think variable fonts will take off now?

For starters, the web now offers the capability for responsive design that print never could. Variable fonts are right at home in the context of responsive layouts. Secondly, we are already seeing real-world attempts to achieve similar results via “detect and serve” solutions. The world is already moving in this direction with or without a variable font format. Also, the reasons the Multiple Master format was abandoned include a lot of political and/or technical issues that are less problematic today. Furthermore, the tools to design variable typefaces are much more advanced and accessible now than in the heyday of Multiple Master, so type designers are better equipped to produce such fonts.

How are we supposed to get fonts that are as compressed as possible if we’re introducing all of this extra flexibility into their design?

One of the amazing things about variable fonts is that they can potentially reduce file sizes while simultaneously increasing design flexibility (see the “Static fonts vs. variable fonts” comparison).

Most interpolated font families have additional masters between the extremes. Aren’t your examples a bit optimistic about the efficiency of interpolation?

The most efficient variable fonts will be those that were designed from scratch with streamlined interpolation in mind. As David Jonathan Ross explained, some styles are better suited for interpolation than others.

Will the additional processing power required for interpolation outweigh the benefits of variable fonts?

Like many things today, especially on the web, it depends on the complexity of the computation, processing speed, rendering engine, etc. If interpolated styles are cached to memory as static instances, the related processing may be negligible. It’s also worth noting that calculations of comparable or higher complexity happen constantly in web browsers without any issues related to processing (think SVG scaling and animation, responsive layouts, etc). Another relevant comparison would be the relatively minimal processing power and time required for Adobe Acrobat to interpolate styles of Adobe Sans MM and Adobe Serif MM when filling in for missing fonts.

But what about hinting? How would that work with interpolation for variable fonts?

Any data that is stored as numbers can be interpolated. With that said, some hinting instructions are better suited for interpolation than others, and some fonts are less dependent on hinting than others. For example, the hinting instructions are decidedly less crucial for “PostScript-flavored” CFF-based fonts that are meant to be set at large sizes. Some new hinting tables may be helpful for a variable font format, but more experimentation would be in order to determine the issues.

If Donald Knuth’s MetaFont was used as a variable font model, it could be even more efficient because it wouldn’t require data for multiple masters. Why not focus more on a parametric type system like that?

Parametric type systems like MetaFont are brilliant, and indeed can be more efficient, but in my observation the design results they bear are decidedly less impressive or useful for quality typography.

What about licensing? How would you pay for a variable font that can provide a range of stylistic variation?

This is an interesting question, and one that I imagine would be approached differently depending on the foundry or distributor. One potential solution might be to license ranges of stylistic variation. So it would cost less to license a limited weight range from Light to Medium (300–500) than a wide gamut from Thin to Black (100–900).

What if I don’t need or want these fancy-pants variable fonts? I’m fine with my old-school static fonts just the way they are!

There are plenty of cases where variable fonts would be unnecessary and even undesirable. In those cases, nothing would stop you from using static fonts.

Web designers are already horrible at formatting text. Do we really want to introduce more opportunities for bad design choices?

People said similar things about digital typesetting on the Mac, mechanical typesetting on the Linotype, and indeed the whole practice of typography back in Gutenberg’s day. I’d rather advance the state of the art with some growing pains than avoid progress on the grounds of snobbery.

Okay, I’m sold. What should I do now?

Experiment with things like Andrew Johnson’s proof-of-concept demo. Read up on MutatorMath. Learn more about the inner workings of digital fonts. Get in touch with your favorite type foundries and tell them you’re interested in this kind of stuff. Then get ready for a future of responsive typography.

A Vision for Our Sass

 ∗ A List Apart: The Full Feed

At a recent CSS meetup, I asked, “Who uses Sass in their daily workflow?” The response was overwhelmingly positive; no longer reserved for pet projects and experiments, Sass is fast becoming the standard way for writing CSS.

This is great news! Sass gives us a lot more power over complex, ever-growing stylesheets, including new features like variables, control directives, and mixins that the original CSS spec (intentionally) lacked. Sass is a stylesheet language that’s robust yet flexible enough to keep pace with us.

Yet alongside the wide-scale adoption of Sass (which I applaud), I’ve observed a steady decline in the quality of outputted CSS (which I bemoan). It makes sense: Sass introduces a layer of abstraction between the author and the stylesheets. But we need a way to translate the web standards—that we fought so hard for—into this new environment. The problem is, the Sass specification is expanding so much that any set of standards would require constant revision. Instead, what we need is a charter—one that sits outside Sass, yet informs the way we code.

To see a way forward, let’s first examine some trouble spots.

The symptoms

One well-documented abuse of Sass’s feature-set is the tendency to heavily nest our CSS selectors. Now don’t get me wrong, nesting is beneficial; it groups code together to make style management easier. However, deep nesting can be problematic.

For one, it creates long selector strings, which are a performance hit:

body #main .content .left-col .box .heading { font-size: 2em; }

It can muck with specificity, forcing you to create subsequent selectors with greater specificity to override styles further up in the cascade—or, God forbid, resort to using !important:

body #main .content .left-col .box .heading  [0,1,4,1]
.box .heading  [0,0,2,0]

Comparative specificity between two selectors.

Last, nesting can reduce the portability and maintainability of styles, since selectors are tied to the HTML structure. If we wanted to repeat the heading style for a box that wasn’t in the left-col, we would need to write a separate rule to accomplish that.

Complicated nesting is probably the biggest culprit in churning out CSS soup. Others include code duplication and tight coupling—and again, these are the results of poorly formed Sass. So, how can we learn to use Sass more judiciously?

Working toward a cure

One option is to create rules that act as limits and rein in some of that power. For example, Mario Ricalde uses an Inception-inspired guideline for nesting: “Don’t go more than four levels deep.”

Rules like this are especially helpful for newcomers, because they provide clear boundaries to work within. But few universal rules exist; the Sass spec is sprawling and growing (as I write this, Sass is at version 3.4.5). With each new release, more features are introduced, and with them more rope with which to hang ourselves. A rule set alone would be ineffective.

We need a proactive, higher-level stance toward developing best practices rather than an emphasis on amassing individual rules. This could take the form of a:

  • Code standard, or guidelines for a specific programming language that recommend programming style, practices, and methods.
  • Framework, or a system of files and folders of standardized code, which can be used as the foundation of a website.
  • Style guide, or a living document of code, which details all the various elements and coded modules of your site or application.

Each approach has distinct advantages:

  • Code standards provide a great way of unifying a team and improving maintainability across a large codebase (see Chris Coyier’s Sass guidelines).
  • Frameworks are both practical and flexible, offering the lowest barrier to entry and removing the burden of decision. As every seasoned front-end developer knows, even deciding on a CSS class name can become debilitating.
  • Style guides make the relationship between the code and the output explicit by illustrating each of the components within the system.

Each also has its difficulties:

  • Code standards are unwieldy. They must be kept up-to-date and can become a barrier to entry for new or inexperienced users.
  • Frameworks tend to become bloated. Their flexibility comes at a cost.
  • Style guides suffer from being context-specific; they are unique to the brand they represent.

Unfortunately, while these methods address the technical side of Sass, they don’t get to our real problem. Our difficulties with Sass don’t stem from the specification itself but from the way we choose to use it. Sass is, after all, a CSS preprocessor; our Sass problem, therefore, is one of process.

So, what are we left with?

Re-examining the patient

Every job has its artifacts, but problems arise if we elevate these by-products above the final work. We must remember that Sass helps us construct our CSS, but it isn’t the end game. In fact, if the introduction of CSS variables is anything to go by, the CSS and Sass specs are beginning to converge, which means one day we may do away with Sass entirely.

What we need, then, is a solution directed not at the code itself but at us as practitioners—something that provides technical guidelines as we write our Sass, but simultaneously lifts our gaze toward the future. We need a public declaration of intentions and objectives, or, in other words, a manifesto.

Sass manifesto

When I first discovered Sass, I developed some personal guidelines. Over time, they formalized into a manifesto that I could then use to evaluate new features and techniques—and whether they’d make sense for my workflow. This became particularly important as Sass grew and became more widely used within my team.

My Sass manifesto is composed of six tenets, or articles, outlined below:

  1. Output over input
  2. Proximity over abstraction
  3. Understanding over brevity
  4. Consolidation over repetition
  5. Function over presentation
  6. Consistency over novelty

It’s worth noting that while the particular application of each article may evolve as the specification advances, the articles themselves should remain unchanged. Let’s cover each in a little more depth.

1. Output over input

The quality and integrity of the generated CSS is of greater importance than the precompiled code.

This is the tenet from which all the others hang. Remember that Sass is one step in the process toward our goal, delivering CSS files to the browser. This doesn’t mean the CSS has to be beautifully formatted or readable (this will never be the case if you’re following best practices and minimizing CSS), but you must keep performance at the forefront of your mind.

When you adopt new features in the Sass spec, you should ask yourself, “What is the CSS output?” If in doubt, take a look under the hood—open the processed CSS. Developing a deeper understanding of the relationship between Sass and CSS will help you identify potential performance issues and structure your Sass accordingly.

For example, using @extend targets every instance of the selector. The following Sass

.box {
	background: #eee;
	border: 1px solid #ccc;

	.heading {
	  font-size: 2em;
	}
}

.box2 {
	@extend .box;
	padding: 10px;
}


compiles to

.box, .box2 {
  background: #eee;
  border: 1px solid #ccc;
}
.box .heading, .box2 .heading {
  font-size: 2em;
}

.box2 {
  padding: 10px;
}

As you can see, not only has .box2 inherited from .box, but .box2 has also inherited from the instances where .box is used in an ancestor selector. It’s a small example, but it shows how you can arrive at some unexpected results if you don’t understand the output of your Sass.

2. Proximity over abstraction

Projects should be portable without over-reliance on external dependencies.

Anytime you use Sass, you’re introducing a dependency—the simplest installation of Sass depends on Ruby and the Sass gem to compile. But keep in mind that the more dependencies you introduce, the more you risk compromising one of Sass’s greatest benefits: the way it enables a large team to work on the same project without stepping on one another’s toes.

For instance, along with the Sass gem you can install a host of extra packages to accomplish almost any task you can imagine. The most common library is Compass (maintained by Chris Eppstein, one of Sass’s original contributors), but you can also install gems for grid systems, and frameworks such as Bootstrap, right down to gems that help with much smaller tasks like creating a color palette and adding shadows.

These gems provide a set of pre-built mixins that you can draw upon in your Sass files. Unlike the mixins you write inside your project files, a gem is written to your computer’s installation directory. Gems are used out-of-the-box, like Sass’s core functions, and the only reference to them is via an @include directive.

Here’s where gems get tricky. Let’s return to the scenario where a team is contributing to the same project: one team member, whom we’ll call John, decides to install a gem to facilitate managing grids. He installs the gem, includes it in the project, and uses it in his files; meanwhile another team member—say, Mary—pulls down the latest version of the repository to change the fonts on the website. She downloads the files, runs the compiler, but suddenly gets an error. Since Mary last worked on the project, John has introduced an external dependency; before Mary can do her work, she must debug the error and download the correct gem.

You see how this problem can be multiplied across a larger team. Add in the complexity of versioning and inter-gem-dependency, and things can get very hairy. Best practices exist to maintain consistent environments for Ruby projects by tracking and installing the exact necessary gems and versions, but the simplest approach is to avoid using additional gems altogether.

Disclaimer: I currently use the Compass library as I find its benefits outweigh the disadvantages. However, as the core Sass specification advances, I’m considering when to say goodbye to Compass.

3. Understanding over brevity

Write Sass code that is clearly structured. Always consider the developer who comes after you.

Sass is capable of outputting super-compressed CSS, so you don’t need to be heavy-handed in optimizing your precompiled code. Further, unlike regular CSS comments, inline comments in Sass aren’t outputted to the final CSS.

This is particularly helpful when documenting mixins, where the output isn’t always transparent:

// Force overly long spans of text to truncate, e.g.:
// @include truncate(100%);
// Where $truncation-boundary is a measurement with a unit.

@mixin truncate($truncation-boundary){
    max-width:$truncation-boundary;
    white-space:nowrap;
    overflow:hidden;
    text-overflow:ellipsis;
}

However, do consider which parts of your Sass will make it to the final CSS file.

4. Consolidation over repetition

Don’t Repeat Yourself. Recognize and codify repeating patterns.

Before you start any project, it’s sensible to sit down and try to identify all the different modules in a design. This is the first step in writing object-oriented CSS. Inevitably some patterns won’t become apparent until you’ve written the same (or similar) line of CSS three or four times.

As soon as you recognize these patterns, codify them in your Sass.

Add variables for recurring values:

$base-font-size: 16px;
$gutter: 1.5em;

Use placeholders for repeating visual styles:

%dotted-border { border: 1px dotted #eee; }

Write mixins where the pattern takes variables:

//transparency for image features
@mixin transparent($color, $alpha) {
  $rgba: rgba($color, $alpha);
  $ie-hex-str: ie-hex-str($rgba);
  background-color: transparent;
  background-color: $rgba;
  filter:progid:DXImageTransform.Microsoft.gradient(startColorstr=#{$ie-hex-str},endColorstr=#{$ie-hex-str});
  zoom: 1;
}

If you adopt this approach, you’ll notice that both your Sass files and resulting CSS will become smaller and more manageable.

5. Function over presentation

Choose naming conventions that focus on your HTML’s function and not its visual presentation.

Sass variables make it incredibly easy to theme a website. However, too often I see code that looks like this:

$red-color: #cc3939; //red
$green-color: #2f6b49; //green

Connecting your variables to their appearance might make sense in the moment. But if the design changes, and the red is replaced with another color, you end up with a mismatch between the variable name and its value.

$red-color: #b32293; //magenta
$green-color: #2f6b49; //green

A better approach is to name these color variables based on their function on the site:

$primary-color: #b32293; //magenta
$secondary-color: #2f6b49; //green

Presentational classes with placeholder selectors

What happens when we can’t map a visual style to a functional class name? Say we have a website with two call-out boxes, “Contact” and “References.” The designer has styled both with a blue border and background. We want to maximize the flexibility of these boxes but minimize any redundant code.

We could choose to chain the classes in our HTML, but this can become quite restrictive:

<div class="contact-box blue-box">
<div class="references-box blue-box">

Remember, we want to focus on function over presentation. Fortunately, using the Sass @extend method together with a placeholder class makes this a cinch:

%blue-box {
	background: #bac3d6;
	border: 1px solid #3f2adf;
}

.contact-box {
	@extend %blue-box;
	...
}
.references-box {
	@extend %blue-box;
	...
}

This generates the following CSS, with no visible references to %blue-box anywhere, except in the styles that carry forward.

.contact-box,
.references-box {
	background: #bac3d6;
	border: 1px solid #3f2adf;
}

This approach cuts references in our HTML to presentational class names, but it still lets us use them in our Sass files in a descriptive way. Trying to devise functional names for common styles can have us reaching for terms like base-box, which is far less meaningful here.

6. Consistency over novelty

Avoid introducing unnecessary changes to the processed CSS.

If you’re keen to introduce Sass into your workflow but don’t have any new projects, you might wonder how best to use Sass inside a legacy codebase. Sass fully supports CSS, so initially it’s as simple as changing the extension from .css to .scss.

Once you’ve made this move, it may be tempting to dive straight in and refactor all your files, separating them into partials, nesting your selectors, and introducing variables and mixins. But this can cause trouble down the line for anyone who is picking up your processed CSS. The refactoring may not have affected the display of anything on your website, but it has generated a completely different CSS file. And any changes can be extremely hard to isolate.

Instead, the best way to switch to a Sass workflow is to update files as you go. If you need to change the navigation, separate that portion into its own partial before working on it. This will preserve the cascade and make it much easier to pinpoint any changes later.
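
As a sketch of that incremental approach (file names here are hypothetical), the navigation rules move verbatim into a partial, and an @import at the same point in the old file preserves the cascade:

// _navigation.scss: rules extracted unchanged from the legacy stylesheet
.nav li {
  display: inline-block;
}

// main.scss: import the partial where the rules used to sit
@import "navigation";

Because @import inlines the partial at that exact spot, the compiled CSS keeps the same rule order, and later changes to the navigation are easy to isolate.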

The prognosis

I like to think of our current difficulties with Sass as growing pains. They’re symptoms of the adjustments we must make as we move to a new way of working. And an eventual cure does exist, as we mature in our understanding of Sass.

It’s my vision that this manifesto will help us get our bearings as we travel along this path: use it, change it, or write your own—but start by focusing on how you write your code, not just what you write.

It is impossible to deeply understand a solution before you have the problem. Let me ...

 ∗ iRi

It is impossible to deeply understand a solution before you have the problem.

Let me give you an example that probably all my readers can relate to: Mathematics education. Do you remember first seeing the quadratic equation and wondering why you should care? Or even if you were a math nerd like me, can you understand why someone would be asking themselves that at that point?

The problem is that the students have not had the problem. For one thing, all the quadratic equations they've been solving were simple ones with integer solutions. For another, even the factoring problems themselves were artificial; they're not factoring because they've encountered a real life problem that requires a factoring solution, they're factoring because they were given homework. It's hard to care about the quadratic equation when it's a solution to a problem you don't have.

Also, the problem is recursive in mathematical education... students don’t care about the quadratic formula, because they have no quadratic equation problems, except in school. They don’t care about quadratic equations, because they only needed to work them to learn about polynomials, which they have no reason to care about, except that it was homework. They have no reason to care about polynomials, except that they were assigned to teach algebraic manipulation, which they have no reason to care about except that it was assigned in school... and so on, recursively, until you arrive at simple arithmetic, at which point most people would agree that it corresponds to real life.

In some sense this is the complement to the classic Lockhart's Lament, in which a mathematician observes that current math education is nearly worthless from a pure mathematical point of view; I observe that it is also nearly worthless from a practical point of view. The ideal education would encompass both; an education that at least covered one would be better than what we have now, but we offer neither.

As is my way, this theoretical-sounding observation can easily be converted into practice, informing both how I teach myself and others. For myself, when learning something I give the documentation a good skimming to sort of load in the "table of contents", then I just charge in. When I encounter a problem, I go back to the documentation. If what I'm using is solid and the documentation is well-written, I'll find the solution to my problem. It will be in some feature or command that previously did not fully make sense to me (proved by the fact that I had the problem in the first place), but now it will make complete sense, because I've actually had the problem it was trying to solve.

At work the amount of time I spend training others is slowly but surely increasing, and this observation very strongly informs how I structure my training. The fact is, most "training" fails, because the average training session consists of essentially reading out a glossary and then reading out the solutions to some problems. This is a waste of time when the trainees have never had the problems you are giving them the solutions to.

Solution: If at all possible, give them the problems. Right now I'm training people on git at work, and despite the abundance of tutorials on the topic I still ended up writing my own training course. I find most tutorials consist of huge blocks of text explaining the basics of git, then they offer various solutions to source control problems, then that's the end. And I have found most people's idea of what being "trained on git" means is that we'll pile into a room, I'll stand in front of some slides gesturing wildly and reading these things off, and then, presumably by Magic, they'll all know git by virtue of having heard some words.

That is not how I run this training. Instead, I make sure everyone is in front of a computer. We all create a repo and follow along, issuing the same commands to the repo. And I do not cover the happy path of what happens when everything is working properly; in fact, I probably spend more of the presentation on error conditions and confusing situations than on things working correctly. People need to know why adding something before switching branches does what it does. People need to learn how to deal with the "detached HEAD" state. People need to learn what to do when a branch is merged and there's a conflict. They can't understand the solutions until they've had the problem, so if you hope for people to walk away understanding the solution, you have to have given them the real problem, too.

If you're a programmer, hopefully that makes sense to you. Yeah, the problems themselves are still ultimately artificial, but hopefully it's still enough to trigger recognition when they occur for real. If you aren't a programmer and the last few sentences of the previous paragraph were so much gibberish, well, I ask you to imagine what use it would be to you if I were to describe the solutions here. You can't understand, partially due to glossary issues, sure, but also because you are so far from having those problems that you have nothing to hang a solution on to.

It's early going yet on whether this training is sticking, but early results are positive. I'm seeing better uptake on these topics than I was expecting.

I also have to admit it's a bit funny to hear how completely unused to being pushed into problem situations people are. I explain up front that I'm going to work us through some problems, but I've found people often still end up surprised when I tell them to type something and git yields an error. It's clearly a violation of deeply-held expectations.

At some point I'm going to be delivering security training, which also has its own long, sordid history of failure. For that, I plan on pursuing an even more advanced variation based on this observation: the most important result that security training can have is for the developers to come away aware that they have a problem at all. The vast bulk of security failures can be traced back to that core issue: the developers didn't even realize they had problems, because problems aren't jumping up and down in front of them wearing signs that clearly label them as problems. It is the thinking on that topic that led me to write my previous post. Security issues are about more than just not making the "top 10" errors; they're about realizing that, frankly, the deck is utterly and totally stacked against you in almost every conceivable way, that dealing with that problem one line at a time is virtually impossible, and that we must start by recognizing these problems before we can even start solving them.

Providing solutions in training is a distant second-rate concern in this case; if developers realize they have problems, they'll seek out and create solutions. Solving problems is what they do, after all. It's that first clause of the "if" where the problem lies.

I hope this can help you improve your own training, and any teaching you may do for others.

Showing Passwords on Log-In Screens

 ∗ LukeW | Digital Product Design + Strategy

In 2012 I outlined why we should let people see their password when logging in to an application, especially on mobile devices. Now, two years later, with many large-scale implementations released, here’s a compendium of why and how to show passwords, and what’s coming next.

Why Show Passwords?

Passwords have long been riddled with usability issues. Because of overly complex security requirements (a minimum number of characters, some punctuation, the birthdate of at least one French king) and difficult-to-use input fields, password entry often results in frustrated customers and lost business.

Around 82% of people have forgotten their passwords. Password recovery is the number one request to Intranet help desks, and if people attempt to recover a password while checking out on an e-commerce site, 75% won’t complete their purchase. In other words, passwords are broken. And the situation is worse on mobile, where small screens and imprecise fingers are the norm.

Password Fields

Masking passwords makes complex input even harder while doing little to improve security, especially on mobile, where there’s a visible touch keyboard directly below every input field that highlights each key as you press it. These bits of feedback show the characters in your password at a larger size than most input fields. So in reality, masking characters isn’t really hiding your password from prying eyes. And if you do suspect an eavesdropper is peering at your screen, just move your mobile device out of their line of sight!

Hide/Show Passwords

For all these reasons and more, we opted to display your password as readable text on Polar’s Log In screen. A simple Hide action was present right next to the password field, so in situations where you might need to, you could instantly switch the password to a string of ••••’s.

Hide Password on Polar

While I was confident we were doing the right thing for increased usability and ease of access, I nevertheless worried that people would respond negatively to seeing their password in clear text despite having the ability to mask it. After all, years of log-in forms had trained people that masked fields meant “secure” fields.

So I was pleasantly surprised when we instead heard from people who had not only launched software with visible passwords but been successful with it as well. Steven Hoober shared how he removed password masking for 20M Sprint customers: no issues, well measured. Mike Lee told us how Yahoo! showed passwords by default and saw double-digit improvements (among other changes) with no security issues.

Quite quickly I came to see password-masking as a rut. A design pattern that has been around so long that no one thinks about it much. We all just go through the motions when assembling a log-in screen and add password-masking by default. Lost business and usability issues just come along for the ride.

Design Solutions

More recently many companies have started to take a critical look at the password-masking problem and launched several different solutions to address it. PayPal and Foursquare have implemented a hide/show text-based interaction similar to the one we employed at Polar.

PayPal and Foursquare  Show Password

LinkedIn, Adobe, and Twitter have opted to let people hide and show their passwords by tapping on an open/closed eye icon. While this type of visual symbol might be less obvious than written text, chances are it has localization advantages for companies that operate globally as it avoids variable lengths of translated copy.

LinkedIn, Adobe, and Twitter  Show Password

Microsoft may have the strangest implementation of the hide/show password pattern out there. In Windows 8, you have to press down on their eye icon in order to reveal your password. Once you remove your finger from the icon, the mask comes back and your password is no longer visible. Awkward.

Show Password on Windows 8

Amazon has been iterating on their log-in form repeatedly. Their progression went from no ability to view your password, to revealing it with an explicit action (tapping a checkbox), to displaying it by default and letting you hide it with a tap.

Amazon Show Password

While there are pros and cons to each of these design solutions, the more important take-away is that Microsoft, Adobe, Twitter, LinkedIn, PayPal, Amazon, and more are recognizing the issues that exist on Log In screens and designing accordingly.

Design is in the Details

Ok, so many companies now let people see their passwords when they try to log in. That’s great but it doesn’t necessarily mean everyone should go and copy their interfaces. That kind of thinking is what got us masked passwords in the first place and probably lots of other questionable design “patterns” like security questions.

Instead, it’s worth spending time to develop the right solution. Especially when you consider even small design details can have a big impact. To illustrate, let’s look at a study Jack Holmes ran that analyzed the impact of removing password masking.

In Jack’s tests, when passwords were displayed as clear text by default in an e-commerce form, 60% of people surveyed said they became suspicious of the site, while only 45% identified not masking the password as a usability benefit. In contrast, when a simple checkbox was added that indicated a Show Password setting was on, 100% of participants noticed the checkbox and interpreted the clear text password as a feature.

the impact of removing password masking

From 60% suspicious of the site to 100% viewing it as a usability feature with a single checkbox. Design really is in the details. And it’s no wonder Amazon’s included this checkbox on their Log In screen.

Web Vs. Native


As we’ve seen through several examples, many companies now let people view their passwords when logging in to mobile apps. Yet few allow the same behavior on the Web. Why should people in native apps have an easier time logging in than those accessing the same services in a Web browser?

Web vs. native password fields

Once again the issue boils down to security. Specifically, if:

  1. You are able to get possession of my device,
  2. And unlock it,
  3. And navigate to a Web site
  4. For which I auto-saved my password in the browser,
  5. And that Web site allows you to view passwords on its Log In screen,
  6. Then you can see my password.

auto-saving password in the browser

This intermingling of the Web browser’s password-saving and auto-fill features with visible passwords on a site can cause real security problems. One way to potentially mitigate this issue is to hide the password field if you detect it was filled in by the browser. Then, if someone tries to view it, clear the saved entry and force them to re-enter the password.

hide the password field

Sadly, the design and development work required to implement this solution starts to outweigh the potential benefits, and it makes more sense to look beyond the password field entirely to ease Log In issues.

Beyond the Password

Amazon hasn't stopped iterating on their mobile Log In screen, and in their latest iteration on iOS, they removed the need to enter a password altogether. To log in to your account, just touch your phone’s home button to verify your fingerprint using Apple’s Touch ID.

evolution of passwords on amazon app

The car-ride service Uber has gone one step further. Instead of signing up for an account, creating a password, and entering your payment information to order and pay for a car, you simply place your finger on the phone’s Touch ID sensor. No forms and no passwords required.

uber's use of touch ID

While it’s true Touch ID is a solution limited to Apple’s latest iOS devices, it sets a new bar for Log In by reducing the level of effort to a single touch. Given the choice, how will people decide to log in or check out? Through lengthy, complicated forms with arduous password requirements and masks? Or with a single touch?

Looking at things from this perspective… the future of Log In forms and password fields is pretty clear.

The People are the Work

 ∗ A List Apart: The Full Feed

Not long ago at the Refresh Pittsburgh meetup, I saw my good friend Ben Callahan give his short talk called Creating Something Timeless. In his talk, he used examples ranging from the Miles Davis sextet to the giant sequoias to try to get at how we—as makers of things that seem innately ephemeral—might make things that stand the test of time.

And that talk got me thinking.

Very few of the web things I’ve made over the years are still in existence—at least not in their original state. The evolution and flux of these things is something I love about the web. It’s never finished; there’s always a chance to improve or pivot.

And yet we all want to make something that lasts. So what could that be?

For me, it’s not the things I make, but the experience of making them. Every project we’ve worked on at Bearded has informed the next one, building on the successes and failures of its predecessors. The people on the team are the vessels for that accumulated experience, and together we’re the engine that makes better and better work each time.

From that perspective it’s not the project that’s the timeless work, it’s us. But it doesn’t stop there, either. It’s also our clients. When we do our jobs well, we leave our clients and their teams more knowledgeable and capable, more empowered to use the web to further the goals of their organization and meet the needs of their users. So how do we give our clients more power to—ultimately—help themselves?

Not content (kənˈtent) with content (ˈkäntent)

Back in 2008 (when we started Bearded), one of our differentiators was that we built every site on a CMS. At the time, many agencies had not-insignificant revenue streams derived from updating their clients’ site content on their behalf.

But we didn’t want to do that work, and our clients didn’t want to pay for it. Building their site on a CMS and training them to use it was a natural solution. It solved both of our problems, recurring revenue be damned! It gave our clients power that they wanted and needed.

And there are other things like this that gnaw at me. Like site support.

Ask any web business owner what they do for post-launch site support, and you’re likely to get a number of different answers. Most of those answers, if we’re honest with ourselves, will have a thread of doubt in their tone. That’s because none of the available options feel super good.

We’ll do it ourselves!

For years at Bearded we did our own site support. When there were upgrades, feature changes, or (gasp!) bugs, we’d take care of it. Even for sites that had launched years ago.

But this created a big problem for us. We were only six people, and only three of us could handle those sorts of development tasks. Those three people also had all the important duties of building the backend features for all our new client projects. Does the word bottleneck mean anything to you? Because, brother, it does to me!

Not only that, but just like updating content, this was not work we enjoyed (nor was it work our clients liked paying for, but we’ll get to that later).

We’ll let someone else do it!

The next thing we did was find a development partner that specialized in site support. If you’re lucky enough to find a good shop like this (especially in your area), hang on to them, my friend! They can be invaluable.

This situation was great because it instantly relieved our bottleneck problem. But it also put us in a potentially awkward position, because it relied on someone else’s business to support our clients.

If they started making decisions that I didn’t agree with, or they went out of business, I’d be in trouble and it could start affecting my client relationships. And without healthy client relationships, you’ve got nothing.

But what else is there to do?

We’ll empower our clients!

For the last year or two, we’ve been doing something totally different. For most of our projects now, we’re not doing support—because we’re not building the whole site. Instead we’ve started working closely with our clients’ internal teams, who build the site with us.

We listen to them, pair with them, and train them. We bring them into our team, transfer every bit of knowledge we can throughout the whole project, and build the site together. At the end there’s no hand-off, because we’ve been handing off since day one. They don’t need site support because they know the site as well as we do, and can handle things themselves.

It’s just like giving our clients control of their own content. We give them access to the tools they need, lend them our expertise, and give them the guidance they’ll need to make good decisions after we’re gone.

At the end of it, we’ve probably built a website, but we’ve also done something more valuable: we’ve helped their team grow along with us. Just like us, they’re now better at what they do. They can take that knowledge and experience with them to their next projects, share that knowledge with other team members, and on, and on, and on.

What we develop is not websites, it’s people. And if that’s not timeless work, what is?

 
