Monday, 22 September

09:30

02:10

Saturday, 20 September

20:20

The debate about the reproducibility of science bubbles onward, with everyone agreeing that it's a ...

 ∗ iRi

The debate about the reproducibility of science bubbles onward, with everyone agreeing that it's a problem but of course nobody with power to fix it doing anything about it.

Recently I've been thinking that science as we know it sits in a very unpleasant middle ground.

On the one hand, despite the propaganda, institutional science is biased against replication. This holes it below the waterline, and any serious scientist (alas) must make fixing this in their field their top priority, or they are consenting to just spin their wheels forever. We do not work formally enough to produce good results, because merely reaching "Peer Approved Once" and getting published is provably not a solid foundation to build on.

If one is inclined to take offense at that, consider that scientists are supposed to be building on the work of others. It's very simple math to see that even if a uniformly-distributed 95% of published papers are perfectly correct, the remaining 5% has a disproportionate impact on the accuracy of a tower of knowledge; as the tower grows, the chance that any particular new result is building on at least one false result approaches 1 quickly.

Many scientific disciplines would be lucky to have a 95% accuracy rate.
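The arithmetic above is easy to sketch. Assuming, purely for illustration, that each of the n results a new paper builds on is independently correct with probability p:

```javascript
// Chance that at least one of n foundational results is flawed,
// when each is independently correct with probability p.
const pFlawedFoundation = (n, p = 0.95) => 1 - p ** n;

for (const n of [1, 10, 50, 100]) {
  console.log(n, pFlawedFoundation(n).toFixed(3));
}
// 1  -> 0.050
// 10 -> 0.401
// 50 -> 0.923
```

Even at a generous 95% accuracy rate, a result resting on fifty prior results is more likely than not standing on something false.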

On the other hand, scientists are also not allowed to just "fool around", by virtue of not being able to get funding for it. Even simple experiments must be submitted, approved, funded, etc., all involving processes a great deal more complicated than the simple little English words imply. As a second-order effect, it becomes a waste of time to go through the process for a small experiment, making small experiments even less likely to be conducted than you would initially think. And yet, historically, a lot of great stuff happened from very skilled, knowledgeable scientists just fooling around. In only a few fields, mostly the mathematical ones, can a scientist afford to fool around on their own time and money.

The system both crushes away the rigor we're promised in the brochure, and also crushes away any chance of serendipity or discovery on the cheap. The miracle is when we get any science at all.

16:30

00:00

CipherShed Fork from TrueCrypt Project, Support Windows, Mac OS and Linux - https://ciphershed.org, (Fri, Sep 19th)

 ∗ SANS Internet Storm Center, InfoCON: green

 

-----------

Guy Bruneau

Friday, 19 September

16:30

Widespread angst about school quality is easy to fix... schools just need to look around ...

 ∗ iRi

Widespread angst about school quality is easy to fix... schools just need to look around and copy what's working out there in the real world.

Five Solids that you have to see to believe!

7 Places You Have To See Before You Die

This Woman's SHOCKING Actions Will Restore Your Faith In Humanity

See why this guy thinks he can make you dance to his tune

This one weird chemical will BLOW YOUR MIND!

Oh... uh... I may have gotten carried away on that last one. Maybe it should be, uh, covered differently....

10:30

03:40

Getting Started With CSS Audits

 ∗ A List Apart: The Full Feed

This week I wrote about conducting CSS audits to organize your code, keeping it clean and performant—resulting in faster sites that are easier to maintain. Now that you understand the hows and whys of auditing, let’s take a look at some more resources that will help you maintain your CSS architecture. Here are some I’ve recently discovered and find helpful.

Organizing CSS

  • Harry Roberts has put together a fantastic resource for thinking about how to write large CSS systems, CSS Guidelines.
  • Interested in making the style guide part of the audit easier? This GitHub repo includes a whole bunch of info on different generators.

Help from task runners

Do you like task runners such as grunt or gulp? Addy Osmani’s tutorial walks through using all kinds of task runners to find unused CSS selectors: Spring Cleaning Unused CSS Selectors.

Accessibility

Are you interested in auditing for accessibility as well (hopefully you are!)? There are tools for that, too. This article helps you audit your site for accessibility—it’s a great outline of exactly how to do it.

Performance

  • Sitepoint takes a look at trimming down overall page weight, which would optimize your site quite a bit.
  • Google Chrome’s dev tools include a built-in audit tool, which suggests ways you could improve performance. A great article on HTML5 Rocks goes through this tool in depth.

With these tools, you’ll be better prepared to clean up your CSS, optimize your site, and make the entire experience better for users. When people talk about auditing code, they often focus on performance, which is a great benefit for all involved, but don’t forget that maintainability and speedier development time come along with a faster site.

Thursday, 18 September

Wednesday, 17 September

22:40

03:20

Client Education and Post-Launch Success

 ∗ A List Apart: The Full Feed

What our clients do with their websites is just as important as the websites themselves. We may pride ourselves on building a great product, but it’s ultimately up to the client to see it succeed or fail. Even the best website can become neglected, underused, or messy without a little education and training.

Too often, my company used to create amazing tools for clients and then send them out into the world without enough guidance. We’d watch our sites slowly become stale, and we’d see our strategic content overwritten with fluffy filler.

It was no one’s fault but our own.

As passionate and knowledgeable web enthusiasts, it’s literally our job to help our clients succeed in any way we can, even after launch. Every project is an opportunity to educate clients and build a mutually beneficial learning experience.

Meeting in the middle

If we want our clients to use our products to their full potential, we have to meet them in the middle. We have to balance our technical expertise with their existing processes and skills.

At my company, Brolik, we learned this the hard way.

We had a financial client whose main revenue came from selling in-depth PDF reports. Customers would select a report, generating an email to an employee who would manually create and email an unprotected PDF to the customer. The whole process would take about two days.

To make the process faster and more secure, we built an advanced, password-protected portal where their customers could purchase and access only the reports they’d paid for. The PDFs themselves were generated on the fly from the content management system. They were protected even after they were downloaded and only viewable with a unique username and password generated with the PDF.

The system itself was technically advanced and thoroughly solved our client’s needs. When the job was done, we patted ourselves on the back, added the project to our portfolio, and moved on to the next thing.

The client, however, was generally confused by the system we’d built. They didn’t quite know how to explain it to their customers. Processes had been automated to the point where they seemed untrustworthy. After about a month, they asked us if we’d revert back to their previous system.

We had created too large of a process change for our client. We upended a large part of their business model without really considering whether they were ready for a new approach.

From that experience, we learned not only to create online tools that complement our clients’ existing business processes, but also that we can be instrumental in helping clients embrace new processes. We now see it as part of our job to educate our clients and explain the technical and strategic thought behind all of our decisions.

Leading by example

We put this lesson to work on a more recent project, developing a site-wide content tagging system where images, video, and other media could be displayed in different ways based on how they were tagged.

We could have left our clients to figure out this new system on their own, but we wanted to help them adopt it. So we pre-populated content and tags to demonstrate functionality. We walked through the tagging process with as many stakeholders as we could. We even created a PDF guide to explain the how and why behind the new system.

In this case, our approach worked, and the client’s cumbersome media management time was significantly reduced. The difference between the outcome of the two projects was simply education and support.

Education and support can, and usually does, take the form of setting an example. Some clients may not fully understand the benefits of a content strategy, for instance, so you have to show them results. Create relevant and well-written sample blog posts for them, and show how they can drive website traffic. Share articles and case studies that relate to the new tools you’re building for them. Show them that you’re excited, because excitement is contagious. If you’re lucky and smart enough to follow Geoff Dimasi’s advice and work with clients who align with your values, this process will be automatic, because you’ll already be invested in their success.

We should be teaching our clients to use their website, app, content management system, or social media correctly and wisely. The more adept they are at putting our products to use, the better our products perform.

Dealing with budgets

Client education means new deliverables, which have to be prepared by those directly involved in the project. Developers, designers, project managers, and other team members are responsible for creating the PDFs, training workshops, interactive guides, and other educational material.

That means more organizing, writing, designing, planning, and coding—all things we normally bill for, but now we have to bill in the name of client education.

Take this into account at the beginning of a project. The amount of education a client needs can be a consideration for taking a job at all, but it should at least factor into pricing. Hours spent helping your client use your product is billable time that you shouldn’t give away for free.

At Brolik, we’ve helped a range of clients—from those who have “just accepted that the Web isn’t a fad” (that’s an actual quote from 2013), to businesses that have a team of in-house developers. We consider this information and price accordingly, because it directly affects the success of the entire product and partnership. If they need a lot of education but they’re not willing to pay for it, it may be smart to pass on the job.

Most clients actually understand this. Those who are interested in improving their business are interested in improving themselves as well. This is the foundation for a truly fulfilling and mutually beneficial client relationship. Seek out these relationships.

It’s sometimes challenging to justify a “client education” line item in your proposals, however. If you can’t, try to at least work some wiggle room into your price. More specifically, try adding a 10 percent contingency for “Support and Training” or “Onboarding.”

If you can’t justify a price increase at all, but you still want the job, consider factoring in a few client education hours and their opportunity cost as part of your company’s overall marketing budget. Teaching your client to use your product is your responsibility as a digital business.

This never ends (hopefully)

What’s better than arming your clients with knowledge and tools, pumping them up, and then sending them out into the world to succeed? Venturing out with them!

At Brolik, we’ve started signing clients onto digital strategy retainers once their websites are completed. Digital strategy is an overarching term that covers anything and everything to grow a business online. Specifically for us, it includes audience research, content creation, SEO, search and display advertising, website maintenance, social media, and all kinds of analysis and reporting.

This allows us to continue to educate (and learn) on an ongoing basis. It keeps things interesting—and as a bonus, we usually upsell more work.

We’ve found that by fostering collaboration post-launch, we not only help our clients use our product more effectively and grow their business, but we also alleviate a lot of the panic that kicks in right before a site goes live. They know we’ll still be there to fix, tweak, analyze, and even experiment.

This ongoing digital strategy concept was so natural for our business that it’s surprising it took us so long to implement it. After 10 years making websites, we’ve only offered digital strategy for the last two, and it’s already driving 50 percent of our revenue.

It pays to be along for the ride

The extra effort required for client education is worth it. By giving our clients the tools, knowledge, and passion they need to be successful with what we’ve built for them, we help them improve their business.

Anything that drives their success ultimately drives ours. When the tools we build work well for our clients, they return to us for more work. When their websites perform well, our portfolios look better and live longer. Overall, when their business improves, it reflects well on us.

A fulfilling and mutually beneficial client relationship is good for the client and good for future business. It’s an area where we can follow our passion and do what’s right, because we get back as much as we put in.

CSS Audits: Taking Stock of Your Code

 ∗ A List Apart: The Full Feed

Most people aren’t excited at the prospect of auditing code, but it’s become one of my favorite types of projects. A CSS audit is really detective work. You start with a site’s code and dig deeper: you look at how many stylesheets are being called, how that affects site performance, and how the CSS itself is written. Your goal is to look for ways to improve on what’s there—to sleuth out fixes to make your codebase better and your site faster.

I’ll share tips on how to approach your own audit, along with the advantages of taking a full inventory of your CSS and various tools.

Benefits of an audit

An audit helps you to organize your code and eliminate repetition. You don’t write any code during an audit; you simply take stock of what’s there and document recommendations to pass off to a client or discuss with your team. These recommendations ensure new code won’t repeat past mistakes. Let’s take a closer look at other benefits:

  • Reduce file sizes. A complete overview of the CSS lets you take the time to find ways to refactor the code: to clean it up and perhaps cut down on the number of properties. You can also hunt for any odds and ends, such as outdated versions of browser prefixes, that aren’t in use anymore. Getting rid of unused or unnecessary code trims down the file people have to download when they visit your site.
  • Ensure consistency with guidelines. As you audit, create documentation regarding your styles and what’s happening with the site or application. You could make a formal style guide, or you could just write out recommendations to note how different pieces of your code are used. Whatever form your documentation takes, it’ll save anyone coming onto your team a lot of time and trouble, as they can easily familiarize themselves with your site’s CSS and architecture.
  • Standardize your code. Code organization—which certainly attracts differing opinions—is essential to keeping your codebase more maintainable into the future. For instance, if you choose to alphabetize your properties, you can readily spot duplicates, because you’d end up with two sets of margin properties right next to each other. Or you may prefer to group properties according to their function: positioning, box model-related, etc. Having a system in place helps you guard against repetition.
  • Increase performance. I’ve saved the best for last. Auditing code, along with combining and zipping up stylesheets, leads to markedly faster site speeds. For example, Harry Roberts, a front-end architect in the UK who conducts regular audits, told me about a site he recently worked on:
    I rebuilt Fasetto.com with a view to improving its performance; it went from 27 separate stylesheets for a single-page site (mainly UI toolkits like Bootstrap, etc.) down to just one stylesheet (which is actually minified and inlined, to save on the HTTP request), which weighs in at just 5.4 kB post-gzip.

    This is a huge win, especially for people on slower connections—but everyone gains when sites load quickly.

How to audit: take inventory

Now that audits have won you over, how do you go about doing one? I like to start with a few tools that provide an overview of the site’s current codebase. You may approach your own audit differently, based on your site’s problem areas or your philosophy of how you write code (whether OOCSS or BEM). The important thing is to keep in mind what will be most useful to you and your own site.

Once I’ve diagnosed my code through tools, I examine it line by line.

Tools

The first tool I reach for is Nicole Sullivan’s invaluable Type-o-matic, an add-on for Firebug that generates a JSON report of all the type styles in use across a site. As an added bonus, Type-o-matic creates a visual report as it runs. By looking at both reports, you know at a glance when to combine type styles that are too similar, eliminating unnecessary styles. I’ve found that the detail of the JSON report makes it easy to see how to create a more reusable type system.

In addition to Type-o-matic, I run CSS Lint, an extremely flexible tool that flags a wide range of potential bugs, from missing fallback colors to shorthand properties for better performance. To use CSS Lint, click the arrow next to the word “Lint” and choose the options you want. I like to check for repeated properties or too many font sizes, so I always run Maintainability & Duplication along with Performance. CSS Lint then returns recommendations for changes; some may be related to known issues that will break in older browsers and others may be best practices (as the tool sees them). CSS Lint isn’t perfect. If you run it leaving every option checked, you are bound to see things in the end report that you may not agree with, like warnings for IE6. That said, this is a quick way to get a handle on the overall state of your CSS.

Next, I search through the CSS to review how often I repeat common properties, like float or margin. (If you’re comfortable with the command line, type grep along with instructions and plug in something like grep “float” styles/styles.scss to find all instances of “float”.) Note any properties you may cut or bundle into other modules. Trimming your properties is a balancing act: to reduce the number of repeated properties, you may need to add more classes to your HTML, so that’s something you’ll need to gauge according to your project.
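The grep pass above can also be scripted. Here’s a rough node-runnable sketch of the same idea, run against an inline sample stylesheet (the CSS and the property list are made up for illustration):

```javascript
// Tally how often common properties recur in a stylesheet string.
// The sample CSS below is illustrative, not from a real project.
const css = `
.promo { float: left; margin: 0 auto; }
.aside { float: right; margin: 1em; }
.card  { margin: 2em 0; }
`;

// Match "prop" followed by optional whitespace and a colon,
// so selector names that merely contain the word aren't counted.
const countProperty = (prop) =>
  (css.match(new RegExp(`${prop}\\s*:`, "g")) || []).length;

for (const prop of ["float", "margin"]) {
  console.log(`${prop}: ${countProperty(prop)}`);
}
// float: 2
// margin: 3
```

High counts are a hint that a property belongs in a shared module rather than repeated rule by rule.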

I like to do this step by hand, as it forces me to walk through the CSS on my own, which in turn helps me better understand what’s going on. But if you’re short on time, or if you’re not yet comfortable with the command line, tools can smooth the way:

  • CSS Dig is an automated script that runs through all of your code to help you see it visually. A similar tool is StyleStats, where you type in a url to survey its CSS.
  • CSS Colorguard is a brand-new tool that runs on Node and outputs a report based on your colors, so you know if any colors are too alike. This helps limit your color palette, making it easier to maintain in the future.
  • Dust-Me Selectors is an add-on for Firebug in Firefox that finds unused selectors.
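The idea behind a color check like CSS Colorguard can be sketched in a few lines. This is an illustrative approximation, not Colorguard’s actual algorithm, and the palette and threshold are made up:

```javascript
// Flag pairs of palette colors whose RGB distance falls under a threshold,
// suggesting they could be consolidated into one.
const palette = {
  brand: [51, 102, 204],
  link: [52, 100, 200], // nearly identical to brand
  accent: [220, 60, 60],
};

// Euclidean distance in RGB space (a crude but serviceable similarity measure).
const distance = (a, b) =>
  Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);

const names = Object.keys(palette);
const nearDuplicates = [];
for (let i = 0; i < names.length; i++) {
  for (let j = i + 1; j < names.length; j++) {
    if (distance(palette[names[i]], palette[names[j]]) < 10) {
      nearDuplicates.push([names[i], names[j]]);
    }
  }
}
console.log(nearDuplicates); // [ [ 'brand', 'link' ] ]
```

A real tool would compare in a perceptual color space rather than raw RGB, but the report it produces is essentially this list.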

Line by line

After you run your tools, take the time to read through the CSS; it’s worth it to get a real sense of what’s happening. For instance, comments in the code—that tools miss—may explain why some quirk persists.

One big thing I double-check is the depth of applicability, or how far down an attribute string applies. Does your CSS rely on a lot of specificity? Are you seeing long strings of selectors, either in the style files themselves or in the output from a preprocessor? A high depth of applicability means your code will require a very specific HTML structure for styles to work. If you can scale it back, you’ll get more reusable code and speedier performance.

Review and recommend

Now to the fun part. Once you have all your data, you can figure out how to improve the CSS and make some recommendations.

The recommendation document doesn’t have to be heavily designed or formatted, but it should be easy to read. Splitting it into two parts is a good idea. The first consists of your review, listing the things you’ve found. If you refer to the results of CSS Lint or Type-o-matic, be sure to include either screenshots or the JSON report itself as an attachment. The second half contains your actionable recommendations to improve the code. This can be as simple as a list, with items like “Consolidate type styles that are closely related and create mixins for use sitewide.”

As you analyze all the information you’ve collected, look for areas where you can:

  • Tighten code. Do you have four different sets of styles for a call-out box, several similar link styles, or way too many exceptions to your standard grid? These are great candidates for repeatable modular styles. To make consolidation even easier, you could use a preprocessor like Sass to turn them into mixins or extend, allowing styles to be applied when you call them on a class. (Just check that the outputted code is sensible too.)
  • Keep code consistent. A good audit makes sure the code adheres to its own philosophy. If your CSS is written based on a particular approach, such as BEM or OOCSS, is it consistent? Or do styles veer from time to time, and are there acceptable deviations? Make sure you document these exceptions, so others on your team are aware.

If you’re working with a client, it’s also important to explain the approaches you favor, so they understand where you’re coming from—and what things you may consider as issues with the code. For example, I prefer OOCSS, so I tend to push for more modularity and reusability; a few classes stacked up (if you aren’t using a preprocessor) don’t bother me. Making sure your client understands the context of your work is particularly crucial when you’re not on the implementation team.

Hand off to the client

You did it! Once you’ve written your recommendations (and taken some time to think on them and ensure they’re solid), you can hand them off to the client—be prepared for any questions they may have. If this is for your team, congratulations: get cracking on your list.

But wait—an audit has even more rewards. Now that you’ve got this prime documentation, take it a step further: use it as the springboard to talk about how to maintain your CSS going forward. If the same issues kept popping up throughout your code, document how you solved them, so everyone knows how to proceed in the future when creating new features or sections. You may turn this document into a style guide. Another thing to consider is how often to revisit your audit to ensure your codebase stays squeaky clean. The timing will vary by team and project, but set a realistic, regular schedule—this is a key part of the auditing process.

Conducting an audit is a vital first step to keeping your CSS lean and mean. It also helps your documentation stay up to date, allowing your team to have a good handle on how to move forward with new features. When your code is structured well, it’s more performant—and everyone benefits. So find the time, grab your best sleuthing hat, and get started.

One Hug

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

JUST WEEKS ago, my daughter’s mother moved out of state. The kid’s been having a tough time with it, and with school, and with her upcoming tenth birthday, which won’t work out the way she hoped. And then, over the weekend, her laptop and mine both broke—hers by cat-and-ginger-ale misfortune, mine by gravity abetted by my stupidity.

To lighten the mood, this morning broke grey, pounding rain. We pulled on our hoodies, scooped up our bodega umbrellas, and shrugged on our backpacks—hers heavy with school books, mine with gym clothes, a camera, and two busted laptops.

We were standing by the elevator when an apartment door burst open and Ava’s best friend in the world sprinted down the hall to hug her good morning. The two girls embraced until the elevator arrived.

The whole dark wet walk to school, my child hummed happily to herself.

#1hug

00:00

FreeBSD Denial of Service advisory (CVE-2004-0230), (Tue, Sep 16th)

 ∗ SANS Internet Storm Center, InfoCON: green

Tuesday, 16 September

20:30

While Star Trek was ahead of its time in many ways, you could tell they ...

 ∗ iRi

While Star Trek was ahead of its time in many ways, you could tell they never lived with the technology they hypothesized. For instance, there's no episode in which Wesley Crusher walks around with his PADD unlocked while cradling it on his chest, causing a Major Interstellar Diplomatic Incident when he accidentally ends up emailing pictures of his armpit to the Klingon High Council along with a text message consisting of "klxitijtjqtktkjjt", which is of course an ancient and dishonorable way of challenging the entire Council to a mandatory duel to the death.

Fortunately, all I managed to do with my accidentally unlocked phone today was start the stopwatch and bring up the texting screen without actually sending anything. But still, it's sorta scary just what socially-horrible things you can do from that touchscreen.

03:10

Mobile & Multi-Device Design: Lessons Learned

 ∗ LukeW | Digital Product Design + Strategy

My new book compiles the articles I published over the past two years about Polar’s mobile and multi-device design decisions. It's filled with nuanced user interface design details and big-picture thinking on software design for PCs, tablets, TVs, and beyond. And it's free.

Over the past two years, I served as co-founder and CEO of Polar. During that time, we built mobile apps, responsive Web apps, second screen experiences, and... we learned a lot. When Polar joined Google last week, we took some time to package up the articles I wrote over the past two years about our thinking, our failures, and our successes. We’re making this compilation freely available:

Free Mobile & Multi-Device Design Book by Luke Wroblewski

  • A beautiful iBook complete with hidden treats and awesome videos.
  • A PDF of the same content and layout but without the slick UI and videos.
  • The original Web pages published on this blog from 2012-2014.

I hope that some of our experiences, insights, and missteps will ultimately help others as they wrestle with similar issues and ideas in their products. It’s a small way for us to formally say thanks and give back to everyone that helped us do the same. Thanks.

Monday, 15 September

14:20

03:10

On to The Next Journey…

 ∗ LukeW | Digital Product Design + Strategy

Creating products is a journey. And like any journey, it’s filled with new experiences, missteps, and perhaps most importantly, lots of opportunities to learn. My most recent journey started nearly two years ago when we began working on Polar.

Today I’m delighted to announce we’re joining Google. But before embarking on this new journey, I wanted to thank everyone that was part of Polar.

Polar is Joining Google

We started with the simple idea that everyone has an opinion worth hearing. But the tools that existed online to meet this need weren’t up to the task: think Web forms, radio buttons, and worse. Ugh. We felt we could do much better by making opinions easy and fun for everyone.

Since then, one in every 449 Internet users told us their opinion by voting on a Polar poll. We served more than half a billion polls in the past eight months and had 1.1 million active voters in September. To everyone that made this possible, our heartfelt thanks.

If you voted on a Polar poll, downloaded our app, or embedded us in your site, we learned from you. Personally, I learned more than I could imagine from the Polar team, our partners, and investors. I can’t imagine a better gift than knowledge, so I’m grateful to all of you.

Sharing What We Learned

In an effort to return the favor, we took some time to package up the articles I wrote over the past two years about Polar’s mobile and multi-device designs. We’re making this compilation freely available in:

  • A beautiful iBook complete with hidden treats and awesome videos.
  • A PDF of the same content and layout but without the slick UI and videos.
  • The original Web pages published on this blog from 2012-2014.

Free Mobile & Multi-Device Design Book by Luke Wroblewski

Each version is filled with nuanced user interface design details and big-picture thinking on mobile and multi-device design for PCs, tablets, TVs, and beyond. It’s our hope that some of our experiences, insights, and missteps will ultimately help you with the product journey you’re on now or will be in the future.

If you used Polar, we’ve made it super-easy for you to download an archive of the polls and data you created—they’re yours, after all. We’ll also be keeping the site running for a while to help with this transition.

Big bear hugs & thanks,
Luke Wroblewski & Team Polar

WHATWG Weekly: Fullscreen dialog

 ∗ The WHATWG Blog

Ian Hickson made a proposal to unify Web Intents with registerProtocolHandler() and registerContentHandler(). The Encoding Standard now has all its decoders defined. This is the WHATWG Weekly.

The big news this week is the new dialog element. It was introduced in revision 7050, along with a new global attribute called inert, a new form element method attribute value "dialog", and a new CSS property, anchor-point.

Yours truly updated the Fullscreen Standard just in time for the dialog element. It defines a new CSS ::backdrop pseudo-element as well as a new rendering layer to address the combined use cases of Fullscreen and the dialog element.

WHATWG Weekly: HTML canvas version 5 has arrived

 ∗ The WHATWG Blog

The StringEncoding proposal is getting closer to consensus. It now consists of a TextEncoder and a TextDecoder object that can be used both for streaming and non-streaming use cases. This is the WHATWG Weekly.

Some bad news for a change. It may turn out that the web platform will only work on little-endian devices, as the vast majority of devices are little-endian today (computers, phones, …) and code written using ArrayBuffer today assumes little-endian. Boris Zbarsky gives a rundown of options for browsers on big-endian devices. Kenneth Russell thinks the situation can still be saved by universal deployment of DataView and sufficient developer advocacy.

Over the past couple of weeks the canvas element 2D API has gotten some major new features. Ian Hickson wrote a lengthy email detailing the canvas v5 API additions: path primitives, dashed lines, ellipses, SVG path syntax, text along a path, hit testing, more text metrics, transforming patterns, and a bunch more.
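The point about DataView is that, unlike typed arrays, it makes byte order explicit on every read and write. A quick node-runnable illustration:

```javascript
// Typed arrays use the platform's byte order; DataView lets you choose.
const buf = new ArrayBuffer(4);
const view = new DataView(buf);

// Write 0x11223344 as little-endian: bytes become 44 33 22 11.
view.setUint32(0, 0x11223344, /* littleEndian */ true);
console.log([...new Uint8Array(buf)]); // [ 68, 51, 34, 17 ]

// Read it back as big-endian and you get a different number.
console.log(view.getUint32(0, false).toString(16)); // "44332211"
```

Code written this way behaves identically on little- and big-endian hardware, which is exactly why universal DataView deployment is offered as a way out.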

WHATWG Weekly: Path objects for canvas and creating paths through SVG syntax

 ∗ The WHATWG Blog

Jonas Sicking proposed an API for decoding ArrayBuffer objects as strings, and encoding strings as ArrayBuffer objects. The thread also touched on a proposal mentioned here earlier, StringEncoding. This is the mid-March WHATWG Weekly.

Revision 7023 added the Path object to HTML for use with the canvas element, and the next revision made it possible to actually use it:

    var path = new Path();
    path.rect(1, 1, 10, 10);
    context.stroke(path);

A new method addPathData() (introduced in revision 7026) can be used to construct canvas paths using SVG path data. Revision 7025 meanwhile added ellipse support to canvas.

Tune in next week for more additions to canvas.

    WHATWG Weekly: http+aes URL scheme, control Referer, …

     ∗ The WHATWG Blog

    Apple's Safari team provided feedback to the Web Notifications Working Group. That group, incidentally, is looking for an active editor to address that and other feedback. Opera Mobile shipped with WebGL support. This is March's first WHATWG Weekly.

    Simon Pieters overhauled much of HTML5 differences from HTML4 and the document now provides information on added/changed APIs, differences between HTML and W3C HTML5, content model changes, and more.

    Ian Hickson introduced a new URL scheme named http+aes (and also https+aes) in revision 7012 that allows for hosting private data on content distribution networks. Revision 7009, by the way, added the necessary hooks for the DOM mutation observers feature to HTML.

    A new "referrer" metadata name for the meta element has been proposed on the WHATWG Wiki. This allows for controlling the Referer header on outgoing links.

    WHATWG Weekly: New canvas API goodies

     ∗ The WHATWG Blog

    A draft for the SPDY protocol has been submitted, the W3C HTML WG mailing list goes crazy over media DRM. This is the WHATWG Weekly.

    In response to feedback Adam Barth changed the getRandomValues() method to return the array the method modifies. The method is part of the window.crypto proposal.

    Ian Hickson has been busy updating the Canvas Wiki page with proposals for dashed lines, ellipses, hit regions, using SVG path syntax for paths, and path primitives. Updates to HTML itself seem imminent.

    WHATWG Weekly: Unicode for the platform?

     ∗ The WHATWG Blog

    In less than a year we reached another arbitrary milestone. HTML is another thousand revisions further, now over 7000 (not quite 9000). This is the WHATWG Weekly.

    Over on public-script-coord@w3.org, the mailing list used by TC39 (responsible for JavaScript) and the WebApps WG to coordinate development of JavaScript, IDL, and APIs, Brendan Eich launched a mega thread on full Unicode for ES6. The entire platform is currently built around 16-bit code units, which are not quite sufficient to encompass all code points. Some code points therefore require two code units, but string manipulation, length information, etc. are all in code units, making it difficult to deal with code points that require two (in practice nobody seems to bother much). The idea is to introduce some kind of switch which, when used, would let you deal with code points exclusively, rather than code units.
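    The mismatch is easy to demonstrate with the code-point-aware helpers the thread is pushing toward (names here follow the ES6 proposals):

```javascript
// U+1F600 is one code point but two 16-bit code units.
const s = "\u{1F600}";

const units = s.length;           // 2 -- counts code units
const unit0 = s.charCodeAt(0);    // 0xD83D, a lone surrogate half
const point = s.codePointAt(0);   // 0x1F600, the full code point
const points = [...s].length;     // 1 -- iteration by code points
```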

    HTML did not change much last week as its editor was playing in the snow. The DOM meanwhile now has mutation observers defined, the replacement for mutation events. Adam Klein did all the heavy lifting and yours truly cleaned it up a bit. An introduction to DOM events has been added as well.

    WHATWG Weekly: Quirks Mode and Error Recovery for XML

     ∗ The WHATWG Blog

    Quirks Mode has its first public draft and a group working on XML Error Recovery just started. This is the WHATWG Weekly.

    Simon Pieters published a first draft of the Quirks Mode Standard. This should help align implementations of quirks mode and reduce the overall number of quirks implementations currently have. In other words, making the quirks that are needed for compatibility with legacy content more interoperable.

    In a message to the W3C TAG Jeni Tennison introduced the XML Error Recovery Community Group whose charter is about creating a newish version of XML 1.0 that is fault tolerant. Community Groups are open for everyone to join, so if you want to help out, you can!

    That is all, be sure to keep an eye on the HTML5 Tracker for recent changes to HTML!

    WHATWG Weekly: translate attribute and other changes to HTML

     ∗ The WHATWG Blog

    Since the last WHATWG Weekly, almost a month ago now, over a hundred changes have been committed to the HTML standard. This is the WHATWG Weekly and it will cover those changes so you don’t have to. Also, remember kids, that fancy email regular expression is non-normative.

    translate attribute

    To aid translators and automated translation, HTML sports a translate attribute since revision 6971. By default everything can be translated. You can override that by setting the translate attribute to the "no" value. This can be used for names, computer code, expressions that only make sense in a given language, etc.
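    A minimal sketch of the attribute in use (the markup is my own example, not from the spec):

```html
<!-- translatable by default; the product name is opted out -->
<p>Download the <span translate="no">HTML5 Tracker</span> today.</p>
```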

    Selector and CSS related changes

    In revision 6888 the :valid and :invalid pseudo-classes were made applicable to the form element. This way you can determine whether all controls in a given form are correctly filled in.
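    As a sketch, this lets a stylesheet flag an entire form at once (the selectors are per the revision; the styling is my own example):

```css
/* flag a form whose controls aren't all correctly filled in */
form:invalid { border: 2px solid crimson; }
form:valid   { border: 2px solid seagreen; }
```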

    Revision 6898 made the wbr element less magical. Well, it defined the element fully in terms of CSS rather than using prose.

    A new CSS feature was introduced in revision 6935. The @global at-rule allows for selectors to “escape” scoped stylesheets as it were, by letting them apply to the whole document. It will likely be moved out of HTML and into a CSS specification once a suitable location has been found.

    APIs; teehee!

    It turns out that clearTimeout() and clearInterval() can be used interchangeably. Revision 6949 makes sure that new implementors make it work that way too.
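    That is, a timer handle from either setter can be cleared by either clearer. A sketch:

```javascript
// Per the change, clearTimeout() and clearInterval() must clear
// whichever timer handle they are given.
let fired = false;
const id = setInterval(() => { fired = true; }, 5);
clearTimeout(id);  // cancels the interval, same as clearInterval(id)
```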

    Per a request from Adrian Bateman revision 6957 added a fourth argument to the window.onerror callback, providing scripts with the script error column position.

    Speaking of scripts, in revision 6964 script elements gained two new events: beforescriptexecute, which is dispatched before the script executes and can be cancelled to prevent execution altogether, and afterscriptexecute, for when script execution has completed.

    Revision 6966 implemented a change that allows browsers to not execute alert(), showModalDialog(), and friends during pagehide, beforeunload, and unload events. This can improve the end user experience.

    WHATWG Weekly: Happy New Year!

     ∗ The WHATWG Blog

    Happy new year everyone! We made great progress in standardizing the platform in 2011 and plan to continue doing just that with your help. You can join our mailing list to discuss issues with web development or join IRC if you prefer more lively interaction.

    I will be taking the remainder of the month off and as nobody has volunteered thus far, WHATWG Weekly is unlikely to be updated in January. All the more reason to follow email and IRC.

    Since last time the toBlob() method of the canvas element has been updated in revisions 6879 and 6880 to make sure it honors the same-origin policy (for exposure of image data) and handles the empty grid.

    In the land of ECMAScript, David Herman made a proposal to avoid versioning, which if successful will keep ECMAScript simple and more in line with other languages used on the web.

    WHATWG Weekly: Shadow DOM and more encoding fun!

     ∗ The WHATWG Blog

    You might have missed this. Because of this lengthy thread on throwing for atob(), space characters will no longer cause the method to throw from revision 6874 onwards. This is the WHATWG Weekly, with some standards related updates just before the world slacks off to feast and watch reindeer on Google Earth.
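    Concretely, ASCII whitespace in the input is now skipped rather than fatal. A sketch ("aGVsbG8=" is base64 for "hello"):

```javascript
// Whitespace in base64 input no longer makes atob() throw;
// it is simply ignored while decoding.
const clean = atob("aGVsbG8=");    // "hello"
const spaced = atob("aGVs bG8=");  // also "hello" -- the space is skipped
```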

    Shadow DOM

    Dimitri Glazkov (from good morning, WHATWG!) published Shadow DOM. A while earlier he also published, together with Dominic Cooney, Web Components Explained. The general idea is to be able to change the behavior and style of elements without changing their intrinsic semantics. A very basic example would be adding a bunch of children to a certain element to have more styling hooks (since this is the shadow DOM the children will not appear as actual children in the normal DOM, but can be styled).

    Encoding Standard

    Two weeklies ago you were informed about the encoding problem we have on the platform. While HTML already took quite a few steps to tighten things up (discouraging support for UTF-7, UTF-32, etc., and defining encoding label matching more accurately), more were needed, especially when it comes to actually decoding and encoding with legacy encodings. The Encoding Standard aims to tackle these issues and your input is much appreciated, especially with regards to the implementation details of multi-octet encodings.

    Saturday, 13 September

    03:00

    Running Code Reviews with Confidence

     ∗ A List Apart: The Full Feed

    Growing up, I learned there were two kinds of reviews I could seek out from my parents. One parent gave reviews in the form of a shower of praise. The other parent, the one with a degree from the Royal College of Art, would put me through a design crit. Today the reviews I seek are for my code, not my horse drawings, but it continues to be a process I both dread and crave.

    In this article, I’ll describe my battle-tested process for conducting code reviews, highlighting the questions you should ask during the review process as well as the necessary version control commands to download and review someone’s work. I’ll assume your team uses Git to store its code, but the process works much the same if you’re using any other source control system.

    Completing a peer review is time-consuming. In the last project where I introduced mandatory peer reviews, the senior developer and I estimated that it doubled the time to complete each ticket. The reviews introduced more context-switching for the developers, and were a source of increased frustration when it came to keeping the branches up to date while waiting for a code review.

    The benefits, however, were huge. Coders gained a greater understanding of the whole project through their reviews, reducing silos and making onboarding easier for new people. Senior developers had better opportunities to ask why decisions were being made in the codebase that could potentially affect future work. And by adopting an ongoing peer review process, we reduced the amount of time needed for human quality assurance testing at the end of each sprint.

    Let’s walk through the process. Our first step is to figure out exactly what we’re looking for.

    Determine the purpose of the proposed change

    Our code review should always begin in a ticketing system, such as Jira or GitHub. It doesn’t matter if the proposed change is a new feature, a bug fix, a security fix, or a typo: every change should start with a description of why the change is necessary, and what the desired outcome will be once the change has been applied. This allows us to accurately assess when the proposed change is complete.

    The ticketing system is where you’ll track the discussion about the changes that need to be made after reviewing the proposed work. From the ticketing system, you’ll determine which branch contains the proposed code. Let’s pretend the ticket we’re reviewing today is 61524—it was created to fix a broken link in our website. It could just as equally be a refactoring, or a new feature, but I’ve chosen a bug fix for the example. No matter what the nature of the proposed change is, having each ticket correspond to only one branch in the repository will make it easier to review, and close, tickets.

    Set up your local environment and ensure that you can reproduce what is currently the live site—complete with the broken link that needs fixing. When you apply the new code locally, you want to catch any regressions or problems it might introduce. You can only do this if you know, for sure, the difference between what is old and what is new.

    Review the proposed changes

    At this point you’re ready to dive into the code. I’m going to assume you’re working with Git repositories, on a branch-per-issue setup, and that the proposed change is part of a remote team repository. Working directly from the command line is a good universal approach, and allows me to create copy-paste instructions for teams regardless of platform.

    To begin, update your local list of branches.

    git fetch
    

    Then list all available branches.

    git branch -a
    

    A list of branches will be displayed to your terminal window. It may appear something like this:

    * master
    remotes/origin/master
    remotes/origin/HEAD -> origin/master
    remotes/origin/61524-broken-link
    

    The * denotes the name of the branch you are currently viewing (or have “checked out”). Lines beginning with remotes/origin are references to branches we’ve downloaded. We are going to work with a new, local copy of branch 61524-broken-link.

    When you clone your project, you’ll have a connection to the remote repository as a whole, but you won’t have a read-write relationship with each of the individual branches in the remote repository. You’ll make an explicit connection as you switch to the branch. This means if you need to run the command git push to upload your changes, Git will know which remote repository you want to publish your changes to.

    git checkout --track origin/61524-broken-link
    

    Ta-da! You now have your own copy of the branch for ticket 61524, which is connected (“tracked”) to the origin copy in the remote repository. You can now begin your review!

    First, let’s take a look at the commit history for this branch with the command log.

    git log master..
    

    Sample output:

    Author: emmajane 
    Date: Mon Jun 30 17:23:09 2014 -0400
    
    Link to resources page was incorrectly spelled. Fixed.
    
    Resolves #61524.
    

    This gives you the full log message of all the commits that are in the branch 61524-broken-link, but are not also in the master branch. Skim through the messages to get a sense of what’s happening.

    Next, take a brief gander through the commit itself using the diff command. This command shows the difference between two snapshots in your repository. You want to compare the code on your checked-out branch to the branch you’ll be merging “to”—which conventionally is the master branch.

    git diff master
    

    How to read patch files

    When you run the command to output the difference, the information will be presented as a patch file. Patch files are ugly to read. You’re looking for lines beginning with + or -. These are lines that have been added or removed, respectively. Scroll through the changes using the up and down arrows, and press q to quit when you’ve finished reviewing. If you need an even more concise comparison of what’s happened in the patch, consider modifying the diff command to list the changed files, and then look at the changed files one at a time:

    git diff master --name-only
    git diff master <filename>
    

    Let’s take a look at the format of a patch file.

    diff --git a/about.html b/about.html
    index a3aa100..a660181 100644
    --- a/about.html
    +++ b/about.html
    @@ -48,5 +48,5 @@
     (2004-05)
    
    - A full list of <a href="emmajane.net/events">public 
    + A full list of <a href="http://emmajane.net/events">public 
     presentations and workshops</a> Emma has given is available
    

    I tend to skim past the metadata when reading patches and just focus on the lines that start with - or +. This means I start reading at the line immediately following @@. There are a few lines of context provided leading up to the changes. These lines are indented by one space each. The changed lines of code are then displayed with a preceding - (line removed), or + (line added).
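    If you want to rehearse these commands without touching a real project, the whole flow can be replayed in a throwaway repository. The repository contents below are my own stand-ins for the article's example, and $base stands in for master, since your Git's default branch name may differ:

```shell
#!/bin/sh
# Build a tiny throwaway repo and replay the review commands on it.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "reviewer@example.com"
git config user.name "Reviewer"

printf '<a href="emmajane.net/events">public presentations</a>\n' > about.html
git add about.html
git commit -qm "Initial commit"
base=$(git rev-parse --abbrev-ref HEAD)   # "master" or "main"

git checkout -qb 61524-broken-link
printf '<a href="http://emmajane.net/events">public presentations</a>\n' > about.html
git commit -qam "Link to resources page was incorrectly spelled. Fixed. Resolves #61524."

git log --oneline "$base"..     # only the branch's commits
git diff "$base" --name-only    # the files the branch touches
```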

    Going beyond the command line

    Using a Git repository browser, such as gitk, allows you to get a slightly better visual summary of the information we’ve looked at to date. The version of Git that Apple ships with does not include gitk—I used Homebrew to re-install Git and get this utility. Any repository browser will suffice, though, and there are many GUI clients available on the Git website.

    gitk
    

    When you run the command gitk, a graphical tool will launch from the command line. An example of the output is given in the following screenshot. Click on each of the commits to get more information about it. Many ticket systems will also allow you to look at the changes in a merge proposal side-by-side, so if you’re finding this cumbersome, click around in your ticketing system to find the comparison tools they might have—I know for sure GitHub offers this feature.

    Screenshot of the gitk repository browser.

    Now that you’ve had a good look at the code, jot down your answers to the following questions:

    1. Does the code comply with your project’s identified coding standards?
    2. Does the code limit itself to the scope identified in the ticket?
    3. Does the code follow industry best practices in the most efficient way possible?
    4. Has the code been implemented in the best possible way according to all of your internal specifications? It’s important to separate your preferences and stylistic differences from actual problems with the code.

    Apply the proposed changes

    Now is the time to start up your testing environment and view the proposed change in context. How does it look? Does your solution match what the coder thinks they’ve built? If it doesn’t look right, do you need to clear the cache, or perhaps rebuild the Sass output to update the CSS for the project?

    Now is the time to also test the code against whatever test suite you use.

    1. Does the code introduce any regressions?
    2. Does the new code perform as well as the old code? Does it still fall within your project’s performance budget for download and page rendering times?
    3. Are the words all spelled correctly, and do they follow any brand-specific guidelines you have?

    Depending on the context for this particular code change, there may be other obvious questions you need to address as part of your code review.

    Do your best to create the most comprehensive list of everything you can find wrong (and right) with the code. It’s annoying to get dribbles of feedback from someone as part of the review process, so we’ll try to avoid “just one more thing” wherever we can.

    Prepare your feedback

    Let’s assume you’ve now got a big juicy list of feedback. Maybe you have no feedback, but I doubt it. If you’ve made it this far in the article, it’s because you love to comb through code as much as I do. Let your freak flag fly and let’s get your review structured in a usable manner for your teammates.

    For all the notes you’ve assembled to date, sort them into the following categories:

    1. The code is broken. It doesn’t compile, introduces a regression, it doesn’t pass the testing suite, or in some way actually fails demonstrably. These are problems which absolutely must be fixed.
    2. The code does not follow best practices. You have some conventions, the web industry has some guidelines. These fixes are pretty important to make, but they may have some nuances which the developer might not be aware of.
    3. The code isn’t how you would have written it. You’re a developer with battle-tested opinions, and you know you’re right, you just haven’t had the chance to update the Wikipedia page yet to prove it.

    Submit your evaluation

    Based on this new categorization, you are ready to engage in passive-aggressive coding. If the problem is clearly a typo and falls into one of the first two categories, go ahead and fix it. Obvious typos don’t really need to go back to the original author, do they? Sure, your teammate will be a little embarrassed, but they’ll appreciate you having saved them a bit of time, and you’ll increase the efficiency of the team by reducing the number of round trips the code needs to take between the developer and the reviewer.

    If the change you are itching to make falls into the third category: stop. Do not touch the code. Instead, go back to your colleague and get them to describe their approach. Asking “why” might lead to a really interesting conversation about the merits of the approach taken. It may also reveal limitations of the approach to the original developer. By starting the conversation, you open yourself to the possibility that just maybe your way of doing things isn’t the only viable solution.

    If you needed to make any changes to the code, they should be absolutely tiny and minor. You should not be making substantive edits in a peer review process. Make the tiny edits, and then add the changes to your local repository as follows:

    git add .
    git commit -m "[#61524] Correcting <list problem> identified in peer review."
    

    You can keep the message brief, as your changes should be minor. At this point you should push the reviewed code back up to the server for the original developer to double-check and review. Assuming you’ve set up the branch as a tracking branch, it should just be a matter of running the command as follows:

    git push
    

    Update the issue in your ticketing system as is appropriate for your review. Perhaps the code needs more work, or perhaps it was good as written and it is now time to close the issue.

    Repeat the steps in this section until the proposed change is complete, and ready to be merged into the main branch.

    Merge the approved change into the trunk

    Up to this point you’ve been comparing a ticket branch to the master branch in the repository. This main branch is referred to as the “trunk” of your project. (It’s a tree thing, not an elephant thing.) The final step in the review process will be to merge the ticket branch into the trunk, and clean up the corresponding ticket branches.

    Begin by updating your master branch to ensure you can publish your changes after the merge.

    git checkout master
    git pull origin master
    

    Take a deep breath, and merge your ticket branch back into the main repository. As written, the following command will not create a new commit in your repository history. The commits will simply shuffle into line on the master branch, making git log --graph appear as though a separate branch has never existed. If you would like to maintain the illusion of a past branch, simply add the parameter --no-ff to the merge command, which will make it clear, via the graph history and a new commit message, that you have merged a branch at this point. Check with your team to see what’s preferred.

    git merge 61524-broken-link
    

    The merge will either fail, or it will succeed. If there are no merge errors, you are ready to share the revised master branch by uploading it to the central repository.

    git push
    

    If there are merge errors, the original coders are often better equipped to figure out how to fix them, so you may need to ask them to resolve the conflicts for you.

    Once the new commits have been successfully integrated into the master branch, you can delete the old copies of the ticket branches both from your local repository and on the central repository. It’s just basic housekeeping at this point.

    git branch -d 61524-broken-link
    git push origin --delete 61524-broken-link
    

    Conclusion

    This is the process that has worked for the teams I’ve been a part of. Without a peer review process, it can be difficult to address problems in a codebase without blame. With it, the code becomes much more collaborative; when a mistake gets in, it’s because we both missed it. And when a mistake is found before it’s committed, we both breathe a sigh of relief that it was found when it was.

    Regardless of whether you’re using Git or another source control system, the peer review process can help your team. Peer-reviewed code might take more time to develop, but it contains fewer mistakes, and has a strong, more diverse team supporting it. And, yes, I’ve been known to learn the habits of my reviewers and choose the most appropriate review style for my work, just like I did as a kid.

    Friday, 12 September

    17:20

    So you want to write a Monad tutorial in Not-Haskell...

     ∗ iRi

    There are a number of errors made in putative Monad tutorials in languages other than Haskell. Any implementation of monadic computations should be able to implement the equivalent of the following in Haskell:

    minimal :: Bool -> [(Int, String)]
    minimal b = do
        x <- if b then [1, 2] else [3, 4]
        if x `mod` 2 == 0
            then do
                y <- ["a", "b"]
                return (x, y)
            else do
                y <- ["y", "z"]
                return (x, y)

    This should yield the local equivalent of:

    Prelude> minimal True
    [(1,"y"),(1,"z"),(2,"a"),(2,"b")]
    Prelude> minimal False
    [(3,"y"),(3,"z"),(4,"a"),(4,"b")]

    At the risk of being offensive, you, ahhh... really ought to understand why that's the result too, without too much effort... or you really shouldn't be writing a Monad tutorial. Ahem.

    In particular:

    • Many putative monadic computation solutions only work with a "container" that contains zero or one elements, and therefore do not work on lists. >>= is allowed to call its second argument (a -> m b) an arbitrary number of times. It may be once, it may be dozens, it may be none. If you can't do that, you don't have a monadic computation.
    • A monadic computation has the ability to examine the intermediate results of the computation, and make decisions, as shown by the if statement. If you can't do that, you don't have a monadic computation.
    • In statically-typed languages, the type of the inner value is not determined by the incoming argument. It's a -> m b, not a -> m a, which is quite different. Note how x and y are of different types.
    • The monadic computation builds up a namespace as it goes along; note we determine x, then somewhat later use it in the return, regardless of which branch we go down, and in both cases, we do not use it right away. Many putative implementations end up with a pipeline, where each stage can use the previous stage's values, but can not refer back to values before that.
    • Monads are not "about effects". The monadic computation I show above is in fact perfectly pure, in every sense of the term. And yes, in practice monad notation is used this way in real Haskell all the time, it isn't just an incidental side-effect.

    A common misconception is that you can implement this in Javascript or similar languages using "method chaining". I do not believe this is possible; for monadic computations to work in Javascript at all, you must be nesting functions within calls to bind within functions within calls to bind... basically, it's impossibly inconvenient to use monadic computations in Javascript, and a number of other languages. A mere implementation of method chaining is not "monadic", and libraries that use method chaining are not "monadic" (unless they really do implement the rest of what it takes to be a monad, but I've so far never seen one).
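    To make that concrete, here is a sketch of the list monad in JavaScript; bind and unit are my own names, not a library's. The nesting below is exactly the inconvenience described above: it is Haskell's do-notation spelled out by hand.

```javascript
// bind may call f zero, one, or many times -- here, once per element.
const bind = (xs, f) => xs.flatMap(f);
const unit = (x) => [x];

// The `minimal` example translated, with pairs as two-element arrays.
// Note that `x` stays in scope for the inner bind, and that the code
// branches on its intermediate value.
const minimal = (b) =>
  bind(b ? [1, 2] : [3, 4], (x) =>
    x % 2 === 0
      ? bind(["a", "b"], (y) => unit([x, y]))
      : bind(["y", "z"], (y) => unit([x, y])));
```

    Running minimal(true) yields [[1,"y"],[1,"z"],[2,"a"],[2,"b"]], matching the Haskell output above, with each pair rendered as a two-element array.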

    If you can translate the above code correctly, and obtain the correct result, I don't guarantee that you have a proper monadic computation, but if you've got a bind or a join function with the right type signatures, and you can do the above, you're probably at least on the right track. This is the approximately minimal example that a putative implementation of a monadic computation ought to be able to do.

    15:00