Cyber Security Awareness Month: What's your favorite/most scary false positive
As in prior years, we would like to use a theme for our October diaries, in order to participate ...(more)...
The debate about the reproducibility of science bubbles onward, with everyone agreeing that it's a problem but, of course, nobody with the power to fix it doing anything about it.
Recently I've been thinking that science as we know it sits in a very unpleasant middle ground.
On the one hand, despite the propaganda, institutional science is biased against replication. This holes it below the waterline, and any serious scientist must (alas) consider fixing this in their field their top priority, or consent to just spin their wheels forever. We do not work rigorously enough to produce good results, because merely reaching "Peer Approved Once" and getting published is demonstrably not a solid foundation to build on.
If one is inclined to take offense at that, consider that scientists are supposed to be building on the work of others. Simple math shows that even if a uniformly distributed 95% of published papers are perfectly correct, the remaining 5% has a disproportionate impact on the accuracy of a tower of knowledge; as the tower grows, the chance that any particular new result is building on at least one false result approaches 1 quickly.
Many scientific disciplines would be lucky to have a 95% accuracy rate.
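The arithmetic is easy to check. A quick sketch (assuming, as above, that each prior result is independently correct with probability 0.95): a new result is only as sound as every prior result it rests on.

```javascript
// Probability that a result resting on n prior published results is
// built entirely on correct work, if each prior result is independently
// correct with probability `accuracy` (0.95 here, per the text).
function soundFoundationProbability(n, accuracy = 0.95) {
  return Math.pow(accuracy, n);
}

for (const n of [1, 10, 50, 100]) {
  console.log(`${n} prior results: ${soundFoundationProbability(n).toFixed(3)}`);
}
// Ten prior results already drop the odds below 60%;
// at a hundred they are under 1%.
```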
On the other hand, scientists are also not allowed to just "fool around", by virtue of not being able to get funding for it. Even simple experiments must be submitted, approved, funded, etc., all involving processes a great deal more complicated than those simple little English words imply. As a second-order effect, it becomes a waste of time to go through the process for a small experiment, making small experiments even less likely to be conducted than you would initially think. And yet, historically, a lot of great science happened because very skilled, knowledgeable scientists were just fooling around. In only a few fields, mostly the mathematical ones, can a scientist afford to fool around on their own time and money.
The system both crushes away the rigor we're promised in the brochure, and also crushes away any chance of serendipity or discovery on the cheap. The miracle is when we get any science at all.
Reader Ronnie provided us today a packet capture with a very interesting situation:
PHP announced the release of version 5.5 ...(more)...
Friday, 19 September
Help from task runners
Widespread angst about school quality is easy to fix... schools just need to look around and copy what's working out there in the real world.
Oh... uh... I may have gotten carried away on that last one. Maybe it should be, uh, covered differently....
This week I wrote about conducting CSS audits to organize your code, keeping it clean and performant—resulting in faster sites that are easier to maintain. Now that you understand the hows and whys of auditing, let’s take a look at some more resources that will help you maintain your CSS architecture. Here are some I’ve recently discovered and find helpful.
Do you like task runners such as grunt or gulp? Addy Osmani’s tutorial walks through using all kinds of task runners to find unused CSS selectors: Spring Cleaning Unused CSS Selectors.
Are you interested in auditing for accessibility as well (hopefully you are!)? There are tools for that, too. This article helps you audit your site for accessibility— it’s a great outline of exactly how to do it.
With these tools, you’ll be better prepared to clean up your CSS, optimize your site, and make the entire experience better for users. When talking about auditing code, many people focus on performance, which is a great benefit for all involved, but don’t forget that maintainability and speedier development time come along with a faster site.
Nathan reported today that he has been seeing a new trend of web scanning against his webservers ...(more)...
Johannes B. Ullrich, Ph ...(more)...
An email titled "Your online background check is now public" might be half-scary if it was sent t ...(more)...
What our clients do with their websites is just as important as the websites themselves. We may pride ourselves on building a great product, but it’s ultimately up to the client to see it succeed or fail. Even the best website can become neglected, underused, or messy without a little education and training.
Too often, my company used to create amazing tools for clients and then send them out into the world without enough guidance. We’d watch our sites slowly become stale, and we’d see our strategic content overwritten with fluffy filler.
It was no one’s fault but our own.
As passionate and knowledgeable web enthusiasts, it’s literally our job to help our clients succeed in any way we can, even after launch. Every project is an opportunity to educate clients and build a mutually beneficial learning experience.
If we want our clients to use our products to their full potential, we have to meet them in the middle. We have to balance our technical expertise with their existing processes and skills.
At my company, Brolik, we learned this the hard way.
We had a financial client whose main revenue came from selling in-depth PDF reports. Customers would select a report, generating an email to an employee who would manually create and email an unprotected PDF to the customer. The whole process would take about two days.
To make the process faster and more secure, we built an advanced, password-protected portal where their customers could purchase and access only the reports they’d paid for. The PDFs themselves were generated on the fly from the content management system. They were protected even after they were downloaded and only viewable with a unique username and password generated with the PDF.
The system itself was technically advanced and thoroughly solved our client’s needs. When the job was done, we patted ourselves on the back, added the project to our portfolio, and moved on to the next thing.
The client, however, was generally confused by the system we’d built. They didn’t quite know how to explain it to their customers. Processes had been automated to the point where they seemed untrustworthy. After about a month, they asked us if we’d revert to their previous system.
We had created too large of a process change for our client. We upended a large part of their business model without really considering whether they were ready for a new approach.
From that experience, we learned not only to create online tools that complement our clients’ existing business processes, but also that we can be instrumental in helping clients embrace new processes. We now see it as part of our job to educate our clients and explain the technical and strategic thought behind all of our decisions.
We put this lesson to work on a more recent project, developing a site-wide content tagging system where images, video, and other media could be displayed in different ways based on how they were tagged.
We could have left our clients to figure out this new system on their own, but we wanted to help them adopt it. So we pre-populated content and tags to demonstrate functionality. We walked through the tagging process with as many stakeholders as we could. We even created a PDF guide to explain the how and why behind the new system.
In this case, our approach worked, and the client’s cumbersome media management time was significantly reduced. The difference between the outcome of the two projects was simply education and support.
Education and support can, and usually does, take the form of setting an example. Some clients may not fully understand the benefits of a content strategy, for instance, so you have to show them results. Create relevant and well-written sample blog posts for them, and show how they can drive website traffic. Share articles and case studies that relate to the new tools you’re building for them. Show them that you’re excited, because excitement is contagious. If you’re lucky and smart enough to follow Geoff Dimasi’s advice and work with clients who align with your values, this process will be automatic, because you’ll already be invested in their success.
We should be teaching our clients to use their website, app, content management system, or social media correctly and wisely. The more adept they are at putting our products to use, the better our products perform.
Client education means new deliverables, which have to be prepared by those directly involved in the project. Developers, designers, project managers, and other team members are responsible for creating the PDFs, training workshops, interactive guides, and other educational material.
That means more organizing, writing, designing, planning, and coding—all things we normally bill for, but now we have to bill in the name of client education.
Take this into account at the beginning of a project. The amount of education a client needs can be a consideration for taking a job at all, but it should at least factor into pricing. Hours spent helping your client use your product are billable time that you shouldn’t give away for free.
At Brolik, we’ve helped a range of clients—from those who have “just accepted that the Web isn’t a fad” (that’s an actual quote from 2013), to businesses that have a team of in-house developers. We consider this information and price accordingly, because it directly affects the success of the entire product and partnership. If they need a lot of education but they’re not willing to pay for it, it may be smart to pass on the job.
Most clients actually understand this. Those who are interested in improving their business are interested in improving themselves as well. This is the foundation for a truly fulfilling and mutually beneficial client relationship. Seek out these relationships.
It’s sometimes challenging to justify a “client education” line item in your proposals, however. If you can’t, try to at least work some wiggle room into your price. More specifically, try adding a 10 percent contingency for “Support and Training” or “Onboarding.”
If you can’t justify a price increase at all, but you still want the job, consider factoring in a few client education hours and their opportunity cost as part of your company’s overall marketing budget. Teaching your client to use your product is your responsibility as a digital business.
What’s better than arming your clients with knowledge and tools, pumping them up, and then sending them out into the world to succeed? Venturing out with them!
At Brolik, we’ve started signing clients onto digital strategy retainers once their websites are completed. Digital strategy is an overarching term that covers anything and everything to grow a business online. Specifically for us, it includes audience research, content creation, SEO, search and display advertising, website maintenance, social media, and all kinds of analysis and reporting.
This allows us to continue to educate (and learn) on an ongoing basis. It keeps things interesting—and as a bonus, we usually upsell more work.
We’ve found that by fostering collaboration post-launch, we not only help our clients use our product more effectively and grow their business, but we also alleviate a lot of the panic that kicks in right before a site goes live. They know we’ll still be there to fix, tweak, analyze, and even experiment.
This ongoing digital strategy concept was so natural for our business that it’s surprising it took us so long to implement it. After 10 years making websites, we’ve only offered digital strategy for the last two, and it’s already driving 50 percent of our revenue.
The extra effort required for client education is worth it. By giving our clients the tools, knowledge, and passion they need to be successful with what we’ve built for them, we help them improve their business.
Anything that drives their success ultimately drives ours. When the tools we build work well for our clients, they return to us for more work. When their websites perform well, our portfolios look better and live longer. Overall, when their business improves, it reflects well on us.
A fulfilling and mutually beneficial client relationship is good for the client and good for future business. It’s an area where we can follow our passion and do what’s right, because we get back as much as we put in.
Most people aren’t excited at the prospect of auditing code, but it’s become one of my favorite types of projects. A CSS audit is really detective work. You start with a site’s code and dig deeper: you look at how many stylesheets are being called, how that affects site performance, and how the CSS itself is written. Your goal is to look for ways to improve on what’s there—to sleuth out fixes to make your codebase better and your site faster.
I’ll share tips on how to approach your own audit, along with the advantages of taking a full inventory of your CSS and various tools.
An audit helps you to organize your code and eliminate repetition. You don’t write any code during an audit; you simply take stock of what’s there and document recommendations to pass off to a client or discuss with your team. These recommendations ensure new code won’t repeat past mistakes. Let’s take a closer look at other benefits:
This is a huge win, especially for people on slower connections—but everyone gains when sites load quickly.
Now that audits have won you over, how do you go about doing one? I like to start with a few tools that provide an overview of the site’s current codebase. You may approach your own audit differently, based on your site’s problem areas or your philosophy of how you write code (whether OOCSS or BEM). The important thing is to keep in mind what will be most useful to you and your own site.
Once I’ve diagnosed my code through tools, I examine it line by line.
The first tool I reach for is Nicole Sullivan’s invaluable Type-o-matic, an add-on for Firebug that generates a JSON report of all the type styles in use across a site. As an added bonus, Type-o-matic creates a visual report as it runs. By looking at both reports, you know at a glance when to combine type styles that are too similar, eliminating unnecessary styles. I’ve found that the detail of the JSON report makes it easy to see how to create a more reusable type system.
In addition to Type-o-matic, I run CSS Lint, an extremely flexible tool that flags a wide range of potential bugs from missing fallback colors to shorthand properties for better performance. To use CSS Lint, click the arrow next to the word “Lint” and choose the options you want. I like to check for repeated properties or too many font sizes, so I always run Maintainability & Duplication along with Performance. CSS Lint then returns recommendations for changes; some may be related to known issues that will break in older browsers and others may be best practices (as the tool sees them). CSS Lint isn’t perfect. If you run it leaving every option checked, you are bound to see things in the end report that you may not agree with, like warnings for IE6. That said, this is a quick way to get a handle on the overall state of your CSS.
Next, I search through the CSS to review how often I repeat common properties, like margin. (If you’re comfortable with the command line, run grep with a pattern and a file, something like grep "float" styles/styles.scss, to find all instances of "float".) Note any properties you may cut or bundle into other modules. Trimming your properties is a balancing act: to reduce the number of repeated properties, you may need to add more classes to your HTML, so that’s something you’ll need to gauge according to your project.
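If you’d rather script the counting than run grep by hand, a small Node sketch does the same job. The stylesheet is inlined here for illustration; in practice you’d read your own file with fs.readFileSync.

```javascript
// Tally how often a few common properties appear in a stylesheet --
// the scripted equivalent of running `grep "float" styles/styles.scss`.
const css = `
.media  { float: left;  margin: 0 0 1em; }
.aside  { float: right; margin: 0 0 1em; }
.button { margin: 0; padding: 0.5em 1em; }
`;

function countProperty(source, property) {
  const matches = source.match(new RegExp(`\\b${property}\\s*:`, 'g'));
  return matches ? matches.length : 0;
}

for (const prop of ['float', 'margin', 'padding']) {
  console.log(`${prop}: ${countProperty(css, prop)}`);
}
// float: 2, margin: 3, padding: 1 -- three margin declarations hint
// at a candidate for a shared module.
```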
I like to do this step by hand, as it forces me to walk through the CSS on my own, which in turn helps me better understand what’s going on. But if you’re short on time, or if you’re not yet comfortable with the command line, tools can smooth the way:
After you run your tools, take the time to read through the CSS; it’s worth it to get a real sense of what’s happening. For instance, comments in the code—that tools miss—may explain why some quirk persists.
One big thing I double-check is the depth of applicability, or how far down an attribute string applies. Does your CSS rely on a lot of specificity? Are you seeing long strings of selectors, either in the style files themselves or in the output from a preprocessor? A high depth of applicability means your code will require a very specific HTML structure for styles to work. If you can scale it back, you’ll get more reusable code and speedier performance.
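One rough way to spot a high depth of applicability is to count the compound selectors in each selector chain. A minimal sketch (the selectors and the threshold are made up for illustration):

```javascript
// Flag selectors whose descendant/child chains run deep -- a hint that
// the CSS depends on a rigid HTML structure to work at all.
const selectors = [
  '.nav a',
  'header .nav ul li a span', // deep: six levels of structure assumed
  '.btn--primary',
];

function depth(selector) {
  // Split on descendant (whitespace) and child (>) combinators.
  return selector.split(/\s*>\s*|\s+/).filter(Boolean).length;
}

for (const s of selectors) {
  if (depth(s) > 3) {
    console.log(`Deep selector (${depth(s)} levels): ${s}`);
  }
}
// Only "header .nav ul li a span" is flagged.
```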
Now to the fun part. Once you have all your data, you can figure out how to improve the CSS and make some recommendations.
The recommendation document doesn’t have to be heavily designed or formatted, but it should be easy to read. Splitting it into two parts is a good idea. The first consists of your review, listing the things you’ve found. If you refer to the results of CSS Lint or Type-o-matic, be sure to include either screenshots or the JSON report itself as an attachment. The second half contains your actionable recommendations to improve the code. This can be as simple as a list, with items like “Consolidate type styles that are closely related and create mixins for use sitewide.”
As you analyze all the information you’ve collected, look for areas where you can:
If you’re working with a client, it’s also important to explain the approaches you favor, so they understand where you’re coming from—and what things you may consider as issues with the code. For example, I prefer OOCSS, so I tend to push for more modularity and reusability; a few classes stacked up (if you aren’t using a preprocessor) don’t bother me. Making sure your client understands the context of your work is particularly crucial when you’re not on the implementation team.
You did it! Once you’ve written your recommendations (and taken some time to think on them and ensure they’re solid), you can hand them off to the client—be prepared for any questions they may have. If this is for your team, congratulations: get cracking on your list.
But wait—an audit has even more rewards. Now that you’ve got this prime documentation, take it a step further: use it as the springboard to talk about how to maintain your CSS going forward. If the same issues kept popping up throughout your code, document how you solved them, so everyone knows how to proceed in the future when creating new features or sections. You may turn this document into a style guide. Another thing to consider is how often to revisit your audit to ensure your codebase stays squeaky clean. The timing will vary by team and project, but set a realistic, regular schedule—this is a key part of the auditing process.
Conducting an audit is a vital first step to keeping your CSS lean and mean. It also helps your documentation stay up to date, allowing your team to have a good handle on how to move forward with new features. When your code is structured well, it’s more performant—and everyone benefits. So find the time, grab your best sleuthing hat, and get started.
JUST WEEKS ago, my daughter’s mother moved out of state. The kid’s been having a tough time with it, and with school, and with her upcoming tenth birthday, which won’t work out the way she hoped. And then, over the weekend, her laptop and mine both broke—hers by cat-and-ginger-ale misfortune, mine by gravity abetted by my stupidity.
To lighten the mood, this morning broke grey, pounding rain. We pulled on our hoodies, scooped up our bodega umbrellas, and shrugged on our backpacks—hers heavy with school books, mine with gym clothes, a camera, and two busted laptops.
We were standing by the elevator when an apartment door burst open and Ava’s best friend in the world sprinted down the hall to hug her good morning. The two girls embraced until the elevator arrived.
The whole dark wet walk to school, my child hummed happily to herself.
Pretty much ever since the new top level domain (TLD) ".biz" went online a couple years ago, and ...(more)...
While Star Trek was ahead of its time in many ways, you could tell they never lived with the technology they hypothesized. For instance, there's no episode in which Wesley Crusher walks around with his PADD unlocked while cradling it on his chest, causing a Major Interstellar Diplomatic Incident when he accidentally ends up emailing pictures of his armpit to the Klingon High Council along with a text message consisting of "klxitijtjqtktkjjt", which is of course an ancient and dishonorable way of challenging the entire Council to a mandatory duel to the death.
Fortunately, all I managed to do with my accidentally unlocked phone today was start the stopwatch and bring up the texting screen without actually sending anything. But still, it's sorta scary just what socially-horrible things you can do from that touchscreen.
All the packet captures we received so far show the same behavior. The scans ...(more)...
My new book compiles the articles I published over the past two years about Polar’s mobile and multi-device design decisions. It's filled with nuanced user interface design details and big-picture thinking on software design for PCs, tablets, TVs, and beyond. And it's free.
Over the past two years, I served as co-founder and CEO of Polar. During that time, we built mobile apps, responsive Web apps, second screen experiences, and... we learned a lot. When Polar joined Google last week, we took some time to package up the articles I wrote over the past two years about our thinking, our failures, and our successes. We’re making this compilation freely available:
I hope that some of our experiences, insights, and missteps will ultimately help others as they wrestle with similar issues and ideas in their products. It’s a small way for us to formally say thanks and give back to everyone that helped us do the same. Thanks.
Creating products is a journey. And like any journey, it’s filled with new experiences, missteps, and perhaps most importantly, lots of opportunities to learn. My most recent journey started nearly two years ago when we began working on Polar.
Today I’m delighted to announce we’re joining Google. But before embarking on this new journey, I wanted to thank everyone that was part of Polar.
We started with the simple idea that everyone has an opinion worth hearing. But the tools that existed online to meet this need weren’t up to the task: think Web forms, radio buttons, and worse. Ugh. We felt we could do much better by making opinions easy and fun for everyone.
Since then one in every 449 Internet users told us their opinion by voting on a Polar poll. We served more than half a billion polls in the past eight months and had 1.1 million active voters in September. To everyone that made this possible, our heartfelt thanks.
If you voted on a Polar poll, downloaded our app, or embedded us in your site, we learned from you. Personally, I learned more than I could imagine from the Polar team, our partners, and investors. I can’t imagine a better gift than knowledge, so I’m grateful to all of you.
In an effort to return the favor, we took some time to package up the articles I wrote over the past two years about Polar’s mobile and multi-device designs. We’re making this compilation freely available in:
Each version is filled with nuanced user interface design details and big-picture thinking on mobile and multi-device design for PCs, tablets, TVs, and beyond. It’s our hope that some of our experiences, insights, and missteps will ultimately help you with the product journey you’re on now or will be in the future.
If you used Polar, we’ve made it super-easy for you to download an archive of the polls and data you created; they’re yours, after all. We’ll also be keeping the site running for a while to help with this transition.
Big bear hugs & thanks,
Luke Wroblewski & Team Polar
The big news this week is the new dialog element, introduced in revision 7050 along with a new global attribute called inert, a new method attribute value "dialog", and a new CSS property.
Yours truly updated the Fullscreen Standard just in time for the dialog element. It defines a new CSS ::backdrop pseudo-element as well as a new rendering layer to address the combined use cases of Fullscreen and the dialog element.
The StringEncoding proposal is getting closer to consensus. It now consists of a TextEncoder and a TextDecoder object that can be used for both streaming and non-streaming use cases. This is the WHATWG Weekly.
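For reference, here is what the pair looks like as eventually standardized (and available today in browsers and Node); the streaming case hands the decoder partial chunks:

```javascript
// Non-streaming: a string to UTF-8 bytes and back.
const bytes = new TextEncoder().encode('Hyvää päivää');
const text = new TextDecoder('utf-8').decode(bytes);
console.log(text); // → "Hyvää päivää"

// Streaming: a multi-byte character split across two chunks still
// decodes correctly when the decoder is told more input is coming.
const decoder = new TextDecoder('utf-8');
const part1 = new Uint8Array([0xe2, 0x82]); // first two bytes of "€"
const part2 = new Uint8Array([0xac]);       // final byte
const streamed =
  decoder.decode(part1, { stream: true }) + decoder.decode(part2);
console.log(streamed); // → "€"
```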
Some bad news for a change. It may turn out that the web platform will only work on little-endian devices, as the vast majority of devices today is little-endian (computers, phones, …) and code written using ArrayBuffer today assumes little-endian. Boris Zbarsky gives a rundown of options for browsers on big-endian devices. Kenneth Russell thinks the situation can still be saved by universal deployment of DataView and sufficient developer advocacy.
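DataView helps because it makes byte order explicit at every read and write, so code that uses it behaves the same on either architecture. A quick illustration:

```javascript
// Typed-array views adopt the platform's byte order; DataView forces
// you to say which order you mean, so results match on any machine.
const buf = new ArrayBuffer(4);
new DataView(buf).setUint32(0, 0x12345678, /* littleEndian */ true);
console.log(Array.from(new Uint8Array(buf)).map(b => b.toString(16)));
// → ["78", "56", "34", "12"] regardless of the host's endianness
```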
Over the past couple of weeks the canvas element's 2D API has gotten some major new features. Ian Hickson wrote a lengthy email detailing the canvas v5 API additions: path primitives, dashed lines, ellipses, SVG path syntax, text along a path, hit testing, more text metrics, transforming patterns, and a bunch more.
Jonas Sicking proposed an API for decoding ArrayBuffer objects as strings, and encoding strings as ArrayBuffer objects. The thread also touched on a proposal mentioned here earlier, StringEncoding. This is the mid-March WHATWG Weekly.
var path = new Path();
path.rect(1, 1, 10, 10);
context.stroke(path);
Tune in next week for more additions to canvas.
Apple's Safari team provided feedback to the Web Notifications Working Group. That group, incidentally, is looking for an active editor to address that and other feedback. Opera Mobile shipped with WebGL support. This is March's first WHATWG Weekly.
Simon Pieters overhauled much of HTML5 differences from HTML4 and the document now provides information on added/changed APIs, differences between HTML and W3C HTML5, content model changes, and more.
Ian Hickson introduced a new URL scheme named http+aes (and also https+aes) in revision 7012 that allows for hosting private data on content distribution networks. Revision 7009, by the way, added the necessary hooks for the DOM mutation observers feature to HTML.
A new "
referrer" metadata name for the
meta element has been proposed on the WHATWG Wiki. This allows for controlling the
Referer header on outgoing links.
A draft for the SPDY protocol has been submitted, and the W3C HTML WG mailing list goes crazy over media DRM. This is the WHATWG Weekly.
In response to feedback, Adam Barth changed the getRandomValues() method to return the array the method modifies. The method is part of the window.crypto API.
Ian Hickson has been busy updating the Canvas Wiki page with proposals for dashed lines, ellipses, hit regions, using SVG path syntax for paths, and path primitives. Updates to HTML itself seem imminent.
In less than a year we reached another arbitrary milestone. HTML is another thousand revisions further, now over 7000 (not quite 9000). This is the WHATWG Weekly.
HTML did not change much last week as its editor was playing in the snow. The DOM meanwhile now has mutation observers defined, the replacement for mutation events. Adam Klein did all the heavy lifting and yours truly cleaned it up a bit. An introduction to DOM events has been added as well.
Quirks Mode has its first public draft and a group working on XML Error Recovery just started. This is the WHATWG Weekly.
Simon Pieters published a first draft of the Quirks Mode Standard. This should help align implementations of quirks mode and reduce the overall number of quirks implementations currently have. In other words, making the quirks that are needed for compatibility with legacy content more interoperable.
In a message to the W3C TAG Jeni Tennison introduced the XML Error Recovery Community Group whose charter is about creating a newish version of XML 1.0 that is fault tolerant. Community Groups are open for everyone to join, so if you want to help out, you can!
That is all, be sure to keep an eye on the HTML5 Tracker for recent changes to HTML!
Since the last WHATWG Weekly, almost a month ago now, over a hundred changes have been committed to the HTML standard. This is the WHATWG Weekly and it will cover those changes so you don’t have to. Also, remember kids, that fancy email regular expression is non-normative.
To aid translators and automated translation, HTML sports a translate attribute since revision 6971. By default everything can be translated; you can override that by setting the translate attribute to the "no" value. This can be used for names, computer code, expressions that only make sense in a given language, etc.
In revision 6888 the :valid and :invalid pseudo-classes were made applicable to the form element. This way you can determine whether all controls in a given form are correctly filled in.
Revision 6898 made the wbr element less magical. Well, it defined the element fully in terms of CSS rather than using prose.
A new CSS feature was introduced in revision 6935. The @global at-rule allows selectors to “escape” scoped stylesheets, as it were, by letting them apply to the whole document. It will likely be moved out of HTML and into a CSS specification once a suitable location has been found.
It turns out that clearTimeout() and clearInterval() can be used interchangeably. Revision 6949 makes sure that new implementors make it work that way too.
Per a request from Adrian Bateman, revision 6957 added a fourth argument to the window.onerror callback, providing scripts with the column position of the script error.
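A sketch of a handler using the new four-argument signature (a stub window object stands in here so the snippet runs outside a browser; in a page you would assign to the real window.onerror):

```javascript
const window = {}; // stand-in for the browser global, for illustration only

// The fourth argument is the column position added in revision 6957;
// older browsers pass only the first three arguments.
window.onerror = function (message, source, line, column) {
  return `${message} at ${source}:${line}:${column}`;
};

// Simulate the browser reporting a script error:
console.log(window.onerror('ReferenceError: x is not defined', 'app.js', 12, 7));
// → "ReferenceError: x is not defined at app.js:12:7"
```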
Speaking of scripts, in revision 6964 script elements gained two new events: beforescriptexecute, which is dispatched before the script executes and can be cancelled to prevent execution altogether, and afterscriptexecute, for when script execution has completed.
Revision 6966 implemented a change that allows browsers to not execute showModalDialog() and friends during unload events. This can improve the end user experience.
Happy new year everyone! We made great progress in standardizing the platform in 2011 and plan to continue doing just that with your help. You can join our mailing list to discuss issues with web development or join IRC if you prefer more lively interaction.
I will be taking the remainder of the month off and as nobody has volunteered thus far, WHATWG Weekly is unlikely to be updated in January. All the more reason to follow email and IRC.
This is the WHATWG Weekly, with some standards-related updates just before the world slacks off to feast and watch reindeer on Google Earth.
Since last time the toBlob() method of the canvas element has been updated in revisions 6879 and 6880 to make sure it honors the same-origin policy (for exposure of image data) and handles the empty grid.
In the land of ECMAScript, David Herman made a proposal to avoid versioning, which if successful will keep ECMAScript simple and more in line with other languages used on the web.
You might have missed this: because of a lengthy thread on throwing for atob(), space characters will no longer cause the method to throw from revision 6874 onwards.
Dimitri Glazkov (from good morning, WHATWG!) published Shadow DOM. A while earlier he also published, together with Dominic Cooney, Web Components Explained. The general idea is to be able to change the behavior and style of elements without changing their intrinsic semantics. A very basic example would be adding a bunch of children to a certain element to have more styling hooks (since this is the shadow DOM the children will not appear as actual children in the normal DOM, but can be styled).
Two weeklies ago you were informed about the encoding problem we have on the platform. While HTML already took quite a few steps to tighten things up (discouraging support for UTF-7, UTF-32, etc., and defining encoding label matching more accurately), more were needed, especially when it comes to actually decoding and encoding with legacy encodings. The Encoding Standard aims to tackle these issues and your input is much appreciated, especially with regard to the implementation details of multi-octet encodings.
Growing up, I learned there were two kinds of reviews I could seek out from my parents. One parent gave reviews in the form of a shower of praise. The other parent, the one with a degree from the Royal College of Art, would put me through a design crit. Today the reviews I seek are for my code, not my horse drawings, but it continues to be a process I both dread and crave.
In this article, I’ll describe my battle-tested process for conducting code reviews, highlighting the questions you should ask during the review process as well as the necessary version control commands to download and review someone’s work. I’ll assume your team uses Git to store its code, but the process works much the same if you’re using any other source control system.
Completing a peer review is time-consuming. In the last project where I introduced mandatory peer reviews, the senior developer and I estimated that it doubled the time to complete each ticket. The reviews introduced more context-switching for the developers, and were a source of increased frustration when it came to keeping the branches up to date while waiting for a code review.
The benefits, however, were huge. Coders gained a greater understanding of the whole project through their reviews, reducing silos and making onboarding easier for new people. Senior developers had better opportunities to ask why decisions were being made in the codebase that could potentially affect future work. And by adopting an ongoing peer review process, we reduced the amount of time needed for human quality assurance testing at the end of each sprint.
Let’s walk through the process. Our first step is to figure out exactly what we’re looking for.
Our code review should always begin in a ticketing system, such as Jira or GitHub. It doesn’t matter if the proposed change is a new feature, a bug fix, a security fix, or a typo: every change should start with a description of why the change is necessary, and what the desired outcome will be once the change has been applied. This allows us to accurately assess when the proposed change is complete.
The ticketing system is where you’ll track the discussion about the changes that need to be made after reviewing the proposed work. From the ticketing system, you’ll determine which branch contains the proposed code. Let’s pretend the ticket we’re reviewing today is 61524—it was created to fix a broken link in our website. It could just as easily be a refactoring, or a new feature, but I’ve chosen a bug fix for the example. No matter what the nature of the proposed change is, having each ticket correspond to only one branch in the repository will make it easier to review, and close, tickets.
Set up your local environment and ensure that you can reproduce what is currently the live site—complete with the broken link that needs fixing. When you apply the new code locally, you want to catch any regressions or problems it might introduce. You can only do this if you know, for sure, the difference between what is old and what is new.
At this point you’re ready to dive into the code. I’m going to assume you’re working with Git repositories, on a branch-per-issue setup, and that the proposed change is part of a remote team repository. Working directly from the command line is a good universal approach, and allows me to create copy-paste instructions for teams regardless of platform.
To begin, update your local list of branches.
git fetch
Then list all available branches.
git branch -a
A list of branches will be displayed to your terminal window. It may appear something like this:
* master
  remotes/origin/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/61524-broken-link
* denotes the name of the branch you are currently viewing (or have “checked out”). Lines beginning with
remotes/origin are references to branches we’ve downloaded. We are going to work with a new, local copy of the branch 61524-broken-link.
When you clone your project, you’ll have a connection to the remote repository as a whole, but you won’t have a read-write relationship with each of the individual branches in the remote repository. You’ll make an explicit connection as you switch to the branch. This means if you need to run the command git push to upload your changes, Git will know which remote repository you want to publish your changes to.
git checkout --track origin/61524-broken-link
Ta-da! You now have your own copy of the branch for ticket 61524, which is connected (“tracked”) to the origin copy in the remote repository. You can now begin your review!
First, let’s take a look at the commit history for this branch with the command
git log master..
Author: emmajane
Date:   Mon Jun 30 17:23:09 2014 -0400

    Link to resources page was incorrectly spelled. Fixed.

    Resolves #61524.
This gives you the full log message of all the commits that are in the branch 61524-broken-link, but are not also in the master branch. Skim through the messages to get a sense of what’s happening.
Next, take a brief gander through the commit itself using the diff command. This command shows the difference between two snapshots in your repository. You want to compare the code on your checked-out branch to the branch you’ll be merging “to”—which conventionally is the master branch.
git diff master
When you run the command to output the difference, the information will be presented as a patch file. Patch files are ugly to read. You’re looking for lines beginning with + or -. These are lines that have been added or removed, respectively. Scroll through the changes using the up and down arrows, and press q to quit when you’ve finished reviewing. If you need an even more concise comparison of what’s happened in the patch, consider modifying the diff command to list the changed files, and then look at the changed files one at a time:
git diff master --name-only
git diff master <filename>
Let’s take a look at the format of a patch file.
diff --git a/about.html b/about.html
index a3aa100..a660181 100644
--- a/about.html
+++ b/about.html
@@ -48,5 +48,5 @@
 (2004-05)
-A full list of <a href="emmajane.net/events">public
+A full list of <a href="http://emmajane.net/events">public
 presentations and workshops</a> Emma has given is available
I tend to skim past the metadata when reading patches and just focus on the lines that start with - or +. This means I start reading at the line immediately following @@. There are a few lines of context provided leading up to the changes. These lines are indented by one space each. The changed lines of code are then displayed with a preceding - (line removed) or + (line added).
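If you want to experiment with reading patch files without touching a real project, a throwaway repository works well; a minimal sketch (all paths, names, and file contents are illustrative):

```shell
# Create a throwaway repository and commit a file containing a typo
demo=$(mktemp -d)
git init -q "$demo"
cd "$demo"
git config user.email "reviewer@example.com"
git config user.name "Reviewer"
printf 'A full list of publc presentations\n' > about.html
git add about.html
git commit -q -m "Initial commit"

# Fix the typo, then inspect the unstaged change as a patch
printf 'A full list of public presentations\n' > about.html
git diff
```

The final git diff prints a patch in the format described above, with the old line prefixed by - and the new line prefixed by +.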
Using a Git repository browser, such as gitk, allows you to get a slightly better visual summary of the information we’ve looked at to date. The version of Git that Apple ships with does not include gitk—I used Homebrew to re-install Git and get this utility. Any repository browser will suffice, though, and there are many GUI clients available on the Git website.
When you run the command gitk, a graphical tool will launch from the command line. An example of the output is given in the following screenshot. Click on each of the commits to get more information about it. Many ticket systems will also allow you to look at the changes in a merge proposal side-by-side, so if you’re finding this cumbersome, click around in your ticketing system to find the comparison tools they might have—I know for sure GitHub offers this feature.
Now that you’ve had a good look at the code, jot down your answers to the following questions:
Now is the time to start up your testing environment and view the proposed change in context. How does it look? Does your solution match what the coder thinks they’ve built? If it doesn’t look right, do you need to clear the cache, or perhaps rebuild the Sass output to update the CSS for the project?
Now is the time to also test the code against whatever test suite you use.
Depending on the context for this particular code change, there may be other obvious questions you need to address as part of your code review.
Do your best to create the most comprehensive list of everything you can find wrong (and right) with the code. It’s annoying to get dribbles of feedback from someone as part of the review process, so we’ll try to avoid “just one more thing” wherever we can.
Let’s assume you’ve now got a big juicy list of feedback. Maybe you have no feedback, but I doubt it. If you’ve made it this far in the article, it’s because you love to comb through code as much as I do. Let your freak flag fly and let’s get your review structured in a usable manner for your teammates.
For all the notes you’ve assembled to date, sort them into the following categories:
Based on this new categorization, you are ready to engage in passive-aggressive coding. If the problem is clearly a typo and falls into one of the first two categories, go ahead and fix it. Obvious typos don’t really need to go back to the original author, do they? Sure, your teammate will be a little embarrassed, but they’ll appreciate you having saved them a bit of time, and you’ll increase the efficiency of the team by reducing the number of round trips the code needs to take between the developer and the reviewer.
If the change you are itching to make falls into the third category: stop. Do not touch the code. Instead, go back to your colleague and get them to describe their approach. Asking “why” might lead to a really interesting conversation about the merits of the approach taken. It may also reveal limitations of the approach to the original developer. By starting the conversation, you open yourself to the possibility that just maybe your way of doing things isn’t the only viable solution.
If you needed to make any changes to the code, they should be absolutely tiny and minor. You should not be making substantive edits in a peer review process. Make the tiny edits, and then add the changes to your local repository as follows:
git add .
git commit -m "[#61524] Correcting <list problem> identified in peer review."
You can keep the message brief, as your changes should be minor. At this point you should push the reviewed code back up to the server for the original developer to double-check and review. Assuming you’ve set up the branch as a tracking branch, it should just be a matter of running the command as follows:
git push
Update the issue in your ticketing system as is appropriate for your review. Perhaps the code needs more work, or perhaps it was good as written and it is now time to close the ticket.
Repeat the steps in this section until the proposed change is complete, and ready to be merged into the main branch.
Up to this point you’ve been comparing a ticket branch to the master branch in the repository. This main branch is referred to as the “trunk” of your project. (It’s a tree thing, not an elephant thing.) The final step in the review process will be to merge the ticket branch into the trunk, and clean up the corresponding ticket branches.
Begin by updating your master branch to ensure you can publish your changes after the merge.
git checkout master
git pull origin master
Take a deep breath, and merge your ticket branch back into the main repository. As written, the following command will not create a new commit in your repository history. The commits will simply shuffle into line on the master branch, making git log --graph appear as though a separate branch has never existed. If you would like to maintain the illusion of a past branch, simply add the parameter --no-ff to the merge command, which will make it clear, via the graph history and a new commit message, that you have merged a branch at this point. Check with your team to see what’s preferred.
git merge 61524-broken-link
The merge will either fail, or it will succeed. If there are no merge errors, you are ready to share the revised master branch by uploading it to the central repository.
If there are merge errors, the original coders are often better equipped to figure out how to fix them, so you may need to ask them to resolve the conflicts for you.
Once the new commits have been successfully integrated into the master branch, you can delete the old copies of the ticket branches both from your local repository and on the central repository. It’s just basic housekeeping at this point.
git branch -d 61524-broken-link
git push origin --delete 61524-broken-link
This is the process that has worked for the teams I’ve been a part of. Without a peer review process, it can be difficult to address problems in a codebase without blame. With it, the code becomes much more collaborative; when a mistake gets in, it’s because we both missed it. And when a mistake is found before it’s committed, we both breathe a sigh of relief that it was found when it was.
Regardless of whether you’re using Git or another source control system, the peer review process can help your team. Peer-reviewed code might take more time to develop, but it contains fewer mistakes, and has a strong, more diverse team supporting it. And, yes, I’ve been known to learn the habits of my reviewers and choose the most appropriate review style for my work, just like I did as a kid.
There are a number of errors made in putative Monad tutorials in languages other than Haskell. Any implementation of monadic computations should be able to implement the equivalent of the following in Haskell:
minimal :: Bool -> [(Int, String)]
minimal b = do
  x <- if b then [1, 2] else [3, 4]
  if x `mod` 2 == 0
    then do
      y <- ["a", "b"]
      return (x, y)
    else do
      y <- ["y", "z"]
      return (x, y)
This should yield the local equivalent of:
Prelude> minimal True
[(1,"y"),(1,"z"),(2,"a"),(2,"b")]
Prelude> minimal False
[(3,"y"),(3,"z"),(4,"a"),(4,"b")]
At the risk of being offensive, you, ahhh... really ought to understand why that's the result too, without too much effort... or you really shouldn't be writing a Monad tutorial. Ahem.
If you can translate the above code correctly, and obtain the correct result, I don't guarantee that you have a proper monadic computation, but if you've got a bind or a join function with the right type signatures, and you can do the above, you're probably at least on the right track. This is the approximately minimal example that a putative implementation of a monadic computation ought to be able to do.
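Since the whole point is translating this example into other languages, here is one hedged sketch of what it might look like in Python; bind and unit are my own names for the list monad's operations, not part of any library:

```python
def bind(xs, f):
    """List-monad bind: apply f to each element and concatenate the results."""
    return [y for x in xs for y in f(x)]

def unit(x):
    """List-monad return: wrap a value in a singleton list."""
    return [x]

def minimal(b):
    # x <- if b then [1, 2] else [3, 4]
    return bind([1, 2] if b else [3, 4],
                lambda x: bind(["a", "b"] if x % 2 == 0 else ["y", "z"],
                               lambda y: unit((x, y))))

print(minimal(True))   # [(1, 'y'), (1, 'z'), (2, 'a'), (2, 'b')]
print(minimal(False))  # [(3, 'y'), (3, 'z'), (4, 'a'), (4, 'b')]
```

If a translation like this produces the pairs in a different order, or loses the odd/even split, the bind being tested is not behaving like the list monad.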