Infocon: green

 ∗ SANS Internet Storm Center, InfoCON: green

DDOS is down, but still a concern for ISPs

DDOS is down, but still a concern for ISPs, (Sun, Feb 7th)

 ∗ SANS Internet Storm Center, InfoCON: green

For many reasons, most ISPs are finding that service-affecting DDOSes, which were a common occurre ...(more)...

More updates to kippo-log2db, (Sat, Feb 6th)

 ∗ SANS Internet Storm Center, InfoCON: green

It has been a while, but I finally got around to fixing a bug in my script for putting kippo text ...(more)...

To Taste


Look, recipe, if I knew how much was gonna taste good, I wouldn't need you.



I searched my .bash_history for the line with the highest ratio of special characters to regular alphanumeric characters, and the winner was: cat out.txt | grep -o "\[[(].*\[])][^)]]*$" ... I have no memory of this and no idea what I was trying to do, but I sure hope it worked.

A trip through the spam filters: more malspam with zip attachments containing .js files, (Fri, Feb 5th)

 ∗ SANS Internet Storm Center, InfoCON: green


I was discussing malicious spam (malspam) with a ...(more)...

ISC Stormcast For Friday, February 5th 2016, (Fri, Feb 5th)

 ∗ SANS Internet Storm Center, InfoCON: green


mgo r2016.02.04

 ∗ Labix Blog

This is one of the most packed releases of the mgo driver for Go in recent times. There are new features, important fixes, and relevant internal restructuring to support the ongoing server improvements.

As usual for the driver, compatibility is being preserved both with old applications and with old servers, so updating should be a smooth experience.

Release r2016.02.04 of mgo includes the following changes which were requested, proposed, and performed by a great community.


Exposed access to individual bulk error cases

Accessing the individual errors obtained while attempting a set of bulk operations is now possible via the new mgo.BulkError error type and its Cases method, which returns a slice of mgo.BulkErrorCase values properly indexed according to the operation order used. There are documented server limitations for MongoDB version 2.4 and older.

This change completes the bulk API. It can now perform optimized bulk queries when communicating with recent servers (MongoDB 2.6+), perform the same operations in older servers using compatible but less performant options, and in both cases provide more details about obtained errors.

Feature first requested by pjebs.
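As a rough sketch of how the completed API fits together (the database, collection, and document fields below are illustrative, and an established *mgo.Session named session is assumed):

```go
bulk := session.DB("test").C("people").Bulk()
bulk.Unordered() // keep attempting operations even if some fail
bulk.Insert(bson.M{"_id": 1, "name": "Ann"}, bson.M{"_id": 2, "name": "Ben"})
bulk.Update(bson.M{"_id": 1}, bson.M{"$set": bson.M{"name": "Anna"}})

if _, err := bulk.Run(); err != nil {
	if bulkErr, ok := err.(*mgo.BulkError); ok {
		for _, c := range bulkErr.Cases() {
			// c.Index maps back to the order the operations were queued in.
			fmt.Printf("operation %d failed: %v\n", c.Index, c.Err)
		}
	}
}
```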

New fields in CollectionInfo

The CollectionInfo type has new fields for dealing with the recently introduced document validation MongoDB feature, and also the storage engine-specific options.

Features requested by nexcode and pjebs.
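A sketch of how the new fields might be used when creating a collection (the validator expression and storage engine options below are illustrative; see the CollectionInfo documentation for the exact semantics):

```go
err := session.DB("test").C("people").Create(&mgo.CollectionInfo{
	// Document validation (MongoDB 3.2+): reject documents with a negative age.
	Validator: bson.M{"age": bson.M{"$gte": 0}},
	// Storage engine-specific options, handed to the server as provided.
	StorageEngine: bson.M{"wiredTiger": bson.M{}},
})
if err != nil {
	log.Fatal(err)
}
```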

New Find and GetMore command support

MongoDB is moving towards replacing the old wire protocol with a command-based implementation, and every recent release has introduced changes in that direction. This release of mgo introduces support for the find and getMore commands which were added to MongoDB 3.2. These are exercised whenever querying the database or iterating over queried results.

Previous server releases will continue to use the classical mechanism, and the two approaches should be compatible. Please report any issues in that regard.

Do not fallback to Monotonic mode improperly

Recent driver changes adapted the Pipe.Iter, Collection.Indexes, and Database.CollectionNames methods to work with recent server releases. These changes also introduced a bug that could cause the driver to talk to a secondary server improperly, when that operation was the first operation performed on the session. This has been fixed.

Problem reported by Sundar.

Fix crash in new bulk update API

The new methods introduced in the bulk update API in the last release were crashing when a connection error occurred.

Fix contributed by Maciej Galkowski.

Enable TCP keep-alives for all connections

As requested by developers, TCP keep-alives are now enabled for all connections. No timing is specified, so the default operating system setting will be used.

Feature requested by Hunor Kovács, Berni Varga, and Martin Garton.

ChangeInfo.Updated now behaves as documented

The ChangeInfo.Updated field is documented to report the number of documents that were changed, but in fact that was not possible in old releases of the driver, since the server did not provide that information. Instead, the server only reported the number of documents matched by the selection document.

This has been fixed, so starting with MongoDB 2.6, the driver will behave as documented, and inform the number of documents that were indeed updated. This is related to the next driver change:

New ChangeInfo.Matched field

The new ChangeInfo.Matched field will report the number of documents that matched the selection document, whether the performed change was a removal, an update, or an upsert.

Feature requested by Žygimantas and other list members.
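A small sketch of both fields in action (selector and update documents are illustrative, and c is assumed to be an *mgo.Collection):

```go
info, err := c.UpdateAll(
	bson.M{"age": bson.M{"$gte": 18}},     // selection document
	bson.M{"$set": bson.M{"adult": true}}, // change to apply
)
if err != nil {
	log.Fatal(err)
}
// With MongoDB 2.6+, Updated may be smaller than Matched when some of the
// matched documents already held the requested values.
fmt.Printf("matched %d, updated %d\n", info.Matched, info.Updated)
```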

ObjectId now supports TextMarshaler/TextUnmarshaler

ObjectId now knows how to marshal/unmarshal itself as text in hex format when using its encoding.TextMarshaler and TextUnmarshaler interfaces.

Contributed by Jack Spirou.

Created GridFS index is now unique

The index on {"files_id", "n"} automatically created for GridFS chunks when a file write completes now enforces the uniqueness of the key.

Contributed by Wisdom Omuya.

Use SIGINT in dbtest.DBServer

The dbtest.DBServer was stopping the server with SIGKILL, which would not give it enough time for a clean shutdown. It will now stop it with SIGINT.

Contributed by Haijun Wang.

Ancient field tag logic dropped

The very old field tag format parser, in use several years back even before Go 1 was released, was still around in the code base for no benefit.

This has been removed by Alexandre Cesaro.

Documentation improvements

Documentation improvements were contributed by David Glasser, Ryan Chipman, and Shawn Smith.

Fixed BSON skipping of incorrect slice types

The BSON parser was crashing when an array value was unmarshaled into an existing field that was not of an appropriate type for such values. This has been fixed so that the bogus field is ignored and the value skipped.

Fix contributed by Gabriel Russel.

Announcing mgo r2015.10.05

 ∗ Labix Blog

Another stable release of the mgo Go driver for MongoDB hits the shelves, and this one brings some important improvements and fixes. As usual, it also remains fully compatible with prior releases in the v2 series.

Please read along for the complete list of changes.

New read preference modes

In addition to the traditional Strong, Monotonic, and Eventual modes, the driver now supports all read preference modes defined by the MongoDB specification: Primary, PrimaryPreferred, Secondary, SecondaryPreferred, and Nearest.

See Session.SetMode for details on how to switch modes.

bson.NewObjectId random initial counter

The bson.NewObjectId function now uses a random initial counter value, as defined by the specification.

Documentation improvements

Various documentation improvements have been made, as suggested or submitted by contributors.

Bulk API improvements

Several improvements have been made to the Bulk API, including support for update and upsert operations, error reporting improvements, and a more efficient implementation based on write commands where that’s supported (MongoDB 2.6+).

Custom index name support

Indexes created by Collection.EnsureIndex may now declare a custom name during creation, if convenient. The Collection.DropIndexName method was also added to support the dropping of indexes by name.
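A brief sketch of both additions (collection and field names are illustrative, and c is assumed to be an *mgo.Collection):

```go
err := c.EnsureIndex(mgo.Index{
	Key:  []string{"lastname", "firstname"},
	Name: "people_by_name", // custom name instead of the auto-generated one
})
if err != nil {
	log.Fatal(err)
}

// Later, the index may be dropped by that same name.
if err := c.DropIndexName("people_by_name"); err != nil {
	log.Fatal(err)
}
```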

Collection.Indexes method fixes

The Collection.Indexes method was returning results that lacked some of the information provided to the Collection.EnsureIndex method during creation. This has been fixed.

Problem reported by Louisa Berger.

Introduced Index.Minf/Maxf

The Min and Max fields currently offered by the Index type for tuning geographical indexes were incorrectly assumed to be integers. Since these types cannot be changed without a backwards-incompatible modification, two new Minf and Maxf fields were introduced with the float64 type. The old fields still work, but are now considered obsolete.

Problem reported by Louisa Berger.

Name resolution fixed for Go 1.5

Go 1.5 modified the behavior of address resolution in a way that breaks the procedure implemented by mgo to resolve names within a given time span. This was addressed and now both IPv4 and IPv6 servers should be working correctly. This change was also applied as a hot fix to the previous release of the driver, to ensure developers could make use of the newly released compiler with mgo.

Issue reported and collaborated around by several contributors.

mgo r2015.05.29

 ∗ Labix Blog

Another release of mgo hits the shelves, just in time for the upcoming MongoDB World event.

A number of relevant improvements have landed since the last stable release:

New package for having a MongoDB server in test suites

The new package makes it comfortable to plug a real MongoDB server into test suites. Its simple interface consists of a handful of methods, which together allow obtaining a new mgo session to the server, wiping all existent data, or stopping it altogether once the suite is done. This design encourages an efficient use of resources, by only starting the server if necessary, and then quickly cleaning data across runs instead of restarting the server.

See the documentation for more details.

(UPDATE: The type was originally testserver.TestServer and was renamed dbtest.DBServer to improve readability within test suites. The old package and name remain working for the time being, to avoid breakage)
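A sketch of how the package fits into a test suite (this assumes the mongod binary is available in $PATH; the method names follow the dbtest documentation):

```go
package mypkg_test

import (
	"io/ioutil"
	"testing"

	"gopkg.in/mgo.v2/dbtest"
)

var server dbtest.DBServer

func TestSomething(t *testing.T) {
	path, err := ioutil.TempDir("", "dbtest")
	if err != nil {
		t.Fatal(err)
	}
	server.SetPath(path) // where the server keeps its data files
	defer server.Stop()  // shut the server down when the test ends

	session := server.Session() // starts mongod lazily on first use
	defer session.Close()

	// ... exercise application code against session ...

	server.Wipe() // drop all data, keeping the server up for further tests
}
```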

Full support for write commands

This release includes full support for the write commands first introduced in MongoDB 2.6. This was done in a compatible way, both in the sense that the driver will continue to use the wire protocol to perform writes on older servers, and also in the sense that the public API has not changed.

Tests for the new code path have been successfully run against MongoDB 2.6 and 3.0. Even then, as an additional measure to prevent breakage of existent applications, in this release the new code path will be enabled only when interacting with MongoDB 3.0+. The next stable release should then enable it for earlier releases as well, after some additional real world usage took place.

New ParseURL function

As perhaps one of the most requested features of all time, there’s now a public ParseURL function which allows code to parse a URL in any of the formats accepted by Dial into a DialInfo value, which may then be provided back into DialWithInfo.
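A short sketch of the round trip (the URL is illustrative):

```go
info, err := mgo.ParseURL("mongodb://user:secret@db1.example.com:27017/mydb")
if err != nil {
	log.Fatal(err)
}
info.Timeout = 10 * time.Second // the DialInfo may be tweaked before dialing

session, err := mgo.DialWithInfo(info)
if err != nil {
	log.Fatal(err)
}
defer session.Close()
```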

New BucketSize field in mgo.Index

The new BucketSize field in mgo.Index supports the use of indexes of type geoHaystack.

Contributed by Deiwin Sarjas.

Handle Setter and Getter interfaces in slice types

Slice types that implement the Getter and/or Setter interfaces will now be custom encoded/decoded as usual for other types.

Problem reported by Thomas Bouldin.

New Query.SetMaxTime method

The new Query.SetMaxTime method enables the use of the special $maxTimeMS query parameter, which constrains the query to stop after running for the specified time. See the method documentation for details.

Feature implemented by Min-Young Wu.
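A minimal sketch (the query is illustrative, and c is assumed to be an *mgo.Collection):

```go
var results []bson.M
err := c.Find(bson.M{"status": "pending"}).
	SetMaxTime(2 * time.Second). // ask the server to give up after 2s
	All(&results)
if err != nil {
	// A query stopped for exceeding the limit surfaces as an error here.
	log.Fatal(err)
}
```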

New Query.Comment method

The new Query.Comment method may be used to annotate queries for further analysis within the profiling data.

Feature requested by Mike O’Brien.

sasl sub-package moved into internal

The sasl sub-package is part of the implementation of SASL support in mgo, and is not meant to be accessed directly. For that reason, that package was moved to internal/sasl, which according to recent Go conventions is meant to explicitly flag that this is part of mgo’s implementation rather than its public API.

Improvements in txn’s PurgeMissing

The PurgeMissing logic was improved to work better in older server versions which retained all aggregation pipeline results in memory.

Improvements made by Menno Smits.

Fix connection statistics bug

This change prevents the number of slave connections from going negative in a particular case.

Fix by Oleg Bulatov.

EnsureIndex support for createIndexes command

The EnsureIndex method will now use the createIndexes command where available.

Feature requested by Louisa Berger.

Support encoding byte arrays

Support encoding byte arrays in an equivalent way to byte slices.

Contributed by Tej Chajed.

Readying mgo for MongoDB 3.0

 ∗ Labix Blog

MongoDB 3.0 (previously known as 2.8) is right around the corner, and it’s time to release a few fixes and improvements on the mgo driver for Go to ensure it works fine on that new major server version. Compatibility is being preserved both with old applications and with old servers, so updating should be a smooth experience.

Release r2015.01.24 of mgo includes the following changes:

Support ReplicaSetName in DialInfo

DialInfo now offers a ReplicaSetName field that may contain the name of the MongoDB replica set being connected to. If set, the cluster synchronization routines will prevent communication with any server that does not report itself as part of that replica set.

Feature implemented by Wisdom Omuya.
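A brief sketch (addresses and the set name are illustrative):

```go
session, err := mgo.DialWithInfo(&mgo.DialInfo{
	Addrs:          []string{"db1.example.com:27017", "db2.example.com:27017"},
	ReplicaSetName: "rs0", // refuse servers not reporting membership in rs0
	Timeout:        10 * time.Second,
})
if err != nil {
	log.Fatal(err)
}
defer session.Close()
```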

MongoDB 3.0 support for collection and index listing

MongoDB 3.0 requires the use of commands for listing collections and indexes, and may report long results via cursors that must be iterated over. The CollectionNames and Indexes methods were adapted to support both the old and the new cases.

Introduced Collection.NewIter method

In the last few releases of MongoDB, a growing number of low-level database commands are returning results that include an initial set of documents and one or more cursor ids that should be iterated over for obtaining the remaining documents. Such results defeated one of the goals in mgo’s design: developers should be able to walk around the convenient pre-defined static interfaces when they must, so they don’t have to patch the driver when a feature is not yet covered by the convenience layer.

The introduced NewIter method solves that problem by enabling developers to create normal iterators by providing the initial batch of documents and optionally the cursor id for obtaining the remaining documents, if any.

Thanks to John Morales, Daniel Gottlieb, and Jeff Yemin, from MongoDB Inc, for their help polishing the feature.

Improved JSON unmarshaling of ObjectId

bson.ObjectId can now be unmarshaled correctly from an empty or null JSON string, when it is used as a field in a struct submitted for unmarshaling by the json package.

Improvement suggested by Jason Raede.

Remove GridFS chunks if file insertion fails

When writing a GridFS file, the chunks that hold the file content are written into the database before the document representing the file itself is inserted. This ensures the file is made visible to concurrent readers atomically, when it’s ready to be used by the application. If writing a chunk fails, the call to the file’s Close method will do a best effort to clean up previously written chunks. This logic was improved so that calling Close will also attempt to remove chunks if inserting the file document itself failed.

Improvement suggested by Ed Pelc.

Field weight support for text indexing

The new Index.Weights field allows providing a map of field name to field weight for fine tuning text index creation, as described in the MongoDB documentation.

Feature requested by Egon Elbre.
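A minimal sketch (field names and weights are illustrative, and c is assumed to be an *mgo.Collection):

```go
err := c.EnsureIndex(mgo.Index{
	Key: []string{"$text:title", "$text:body"},
	Weights: map[string]int{
		"title": 10, // a match in title weighs ten times a match in body
		"body":  1,
	},
})
if err != nil {
	log.Fatal(err)
}
```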

Fixed support for $** text index field name

Support for the special $** field name, which enables the indexing of all document fields, was fixed.

Problem reported by Egon Elbre.

Consider only exported fields on omitempty of structs

The implementation of bson’s omitempty feature was also considering the value of non-exported fields. This was fixed so that only exported fields are taken into account, which is both in line with the overall behavior of the package, and also prevents crashes in cases where the field value cannot be evaluated.

Fix potential deadlock on Iter.Close

It was possible for Iter.Close to deadlock when the associated server was concurrently detected unavailable.

Problem investigated and reported by John Morales.

Return ErrCursor on server cursor timeouts

Attempting to iterate over a cursor that has timed out at the server side will now return mgo.ErrCursor.

Feature implemented by Daniel Gottlieb.

Support for collection repairing

The new Collection.Repair method returns an iterator that goes over all recovered documents in the collection, in a best-effort manner. This is most useful when there are damaged data files. Multiple copies of the same document may be returned by the iterator.

Feature contributed by Mike O’Brien.

A timely coffee hack

 ∗ Labix Blog

It’s somewhat ironic that just as Ubuntu readies itself for the starting wave of smart connected devices, my latest hardware hack was in fact a disconnected one. In my defense, it’s quite important for these smart devices to preserve a convenient physical interface with the user, so this one was a personal lesson on that.

The device hacked was a capsule-based coffee machine which originally had just a manual handle for on/off operation. This was both boring to use and unfortunate in terms of the outcome being somewhat unpredictable given the variations in amount of water through the capsule. While the manufacturer does offer a modern version of the same machine with an automated system, buying a new one wouldn’t be nearly as satisfying.

So the first act was to take the machine apart and see how it basically worked. To my surprise, this one model is quite difficult to take apart, but it was doable without any visible damage. Once in, the machine was “enhanced” with an external barrel connector that can command the operation of the machine:

Open Coffee Machine

The connector wire was soldered to the right spots, routed away from the hot components, and includes a relay that does the operation safely without bridging the internal circuit into the external world. The proper way to do that would have been with an optocoupler, but without one at hand a relay should do.

With the external connector in place, it was easy to evolve the controlling circuit without bothering with the mechanical side of it. The current version is based on an atmega328p MCU that sits inside a small box exposing a high-quality LED bargraph and a single button that selects the level, turns on the machine on long press, and cancels if pressed again before the selected level is completed:

The MCU stays on 24/7, and when unused goes back into a deep sleep mode consuming only a few microamps from an old laptop battery cell that sits within the same box.

Being a for-fun exercise, the controlling logic was written in assembly to get acquainted with the details of that MCU. The short amount of code is available if you are curious.

Inline maps in the yaml package

 ∗ Labix Blog

After the recent updates to the mgo driver, it’s time for the yaml package to receive some attention. The following improvements are now available in the yaml package:

Support for omitempty on struct values

The omitempty attribute can now be used in tags of fields with a struct type. In those cases, the given field and its value only become part of the generated yaml document if one or more of the fields exported by the field type contain non-empty values, according to the usual conventions for omitempty .

For instance, considering these two types:

type TypeA struct {
        Maybe TypeB `yaml:",omitempty"`
}

type TypeB struct {
        N int
}
the yaml package would only serialize the Maybe mapping into the generated yaml document if its N field was non-zero.

Support for inlined maps

The yaml package was previously able to handle the inlining of structs. For example, in the following snippet TypeB would be handled as if its fields were part of TypeA during serialization or deserialization:

type TypeA struct {
        Field TypeB `yaml:",inline"`
}

This convention for inlining differs from the standard json package, which inlines anonymous fields instead of considering such an attribute. That difference is mainly historical: the base of the yaml package was copied from mgo’s bson package, which had this convention before the standard json package supported any inlining at all.

Now the support for inlining maps, previously available in the bson package, is also being copied over. In practice, it allows unmarshaling a yaml document such as

a: 1
b: 2
c: 3

into a type that looks like

type T struct {
        A int
        Rest map[string]int `yaml:",inline"`
}
and obtaining in the resulting Rest field the value map[string]int{"b": 2, "c": 3}, while field A is set to 1 as usual. Serializing that resulting value back out would reverse the process, and generate the original document including the extra fields.

That’s a convenient way to read documents with a partially known structure and manipulate them in a non-destructive way.

Bug fixes

A few problems were also fixed in this release. Most notably:

No minor versions in Go import paths

 ∗ Labix Blog

This post provides the background for a deliberate and important decision in the design of the service that people often wonder about: while the service does support full versions in tag and branch names (as in “v1.2” or “v1.2.3”), the import URL must contain only the major version, which gets mapped to the best matching version in the repository.

As will be detailed, there are multiple reasons for that behavior. The critical one is ensuring all packages in a build tree that depend on the same API of a given dependency (different majors means different APIs) may use the exact same version of that dependency. Without that, an application might easily get multiple copies unnecessarily and perhaps incorrectly.

Consider this example:

Under that scenario, when someone executes go get on application A, two independent copies of D would be embedded in the binary. This happens because both B and C have exact control of the version in use. When everybody can pick their own preferred version, it’s easy to end up with multiple copies.

The current implementation solves that problem by requiring that both B and C necessarily depend on the major version which defines the API version they were coded against. So the scenario becomes:

With that approach, when someone runs go get to import the application it would get the newest version of D that is still compatible with both B and C (might be 3.0.3, 3.1, etc), and both would use that one version. While by default this would just pick up the most recent version, the package might also be moved back to 3.0.2 or 3.0.1 without touching the code. So the approach in fact empowers the person assembling the binary to experiment with specific versions, and gives package authors a framework where the default behavior tends to remain sane.

This is the most important reason why the service works like this, but there are others. For example, to encode the micro version of a dependency on a package, the import paths of dependent code must be patched on every single minor release of the package (internal and external to the package itself), and the code must be repositioned in the local system to please the go tool. This is rather inconvenient in practice.

It’s worth noting that the issues above describe the problem in terms of minor and patch versions, but the problem exists and is intensified when using individual source code revision numbers to refer to import paths, as it would be equivalent in this context to having a minor version on every single commit.

Finally, when you do want exact control over what builds, godep may be used as a complement to the service. That partnership offers exact reproducibility via godep, while giving people stable APIs they can rely upon over longer periods. A good match.

Improvements on

 ∗ Labix Blog

Early last year the service was introduced with the goal of encouraging Go developers to establish strategies that enable existent software to remain working while package APIs evolve. After the initial discussions and experimentation that went into defining the (simple) design and feature set of the service, it’s great to see that the approach is proving reasonable in practice, with steady growth in usage. Meanwhile, the service has been up and unchanged for several months while we learned more about which areas needed improvement.

Now it’s time to release some of these improvements:

Source code links

Thanks to Gary Burd, was improved to support custom source code links, which means all packages in can now properly reference, for any given package version, the exact location of functions, methods, structs, etc. For example, the function name in the documentation at is clickable, and redirects to the correct source code line in GitHub.

Unstable releases

As detailed in the documentation, a major version must not have any breaking changes done to it so that dependent packages can rely on the exposed API once it goes live. Often, though, there’s a need to experiment with the upcoming changes before they are ready for prime time, and while small packages can afford to have that testing done locally, it’s usual for non-trivial software to have external validation with experienced developers before the release goes fully public.

To support that scenario properly, now allows the version string in exposed branch and tag names to be suffixed with “-unstable”. For example:

Such unstable versions are hidden from the version list in the package page, except for the specific version being looked at, and their use in released software is also explicitly discouraged via a warning message.

For the package to work properly during development, any imports (both internal and external to the package) must be modified to import the unstable version. While doing that by hand is easy, thanks to Roger Peppe’s govers there’s a very convenient way to do that.

For example, to use mgo.v2-unstable, run:


and to go back:


Repositories with no master branch

Some people have opted to omit the traditional “master” branch altogether and have only versioned branches and tags. Unfortunately, did not accept such repositories as valid. This was fixed.

These changes are all live right now at

Holiday hardware hacks

 ∗ Labix Blog

Last night I did a trivial yet surprisingly satisfying hardware hack, of the kind that can only be accomplished when the brain is in holiday mode. Our son has a very simple airplane toy, which turned out to become one of his most loved ones, enough to have deserved wheel repairs before. He’s also reportedly a fan of all kinds of light-emitting or reflecting objects (including the sun, and especially the moon). So the idea sparked: how easily could that airplane get a blinking LED?

With an attiny85, a CR2032 battery, a LED, and half an hour of soldering work, this was the result:

The code loaded in the chip is small enough to be listed here, and it gets away with blinking without waking up the main CPU clock:

    ; Set inverse OC1B pin as output for the led.
    sbi _SFR_IO_ADDR(DDRB), 3

    ; Enable timer TC1 with PCK/16k prescaling (attiny85 p.89)
    ldi r18, (1<<CS10)|(1<<CS11)|(1<<CS12)|(1<<CS13)
    out _SFR_IO_ADDR(TCCR1), r18

    ; Set OC1B on compare match (250), clear on 0x00 (attiny85 p.86,90)
    ldi r18, (1<<PWM1B) | (1<<COM1B0)
    out _SFR_IO_ADDR(GTCCR), r18
    ldi r18, 250
    out _SFR_IO_ADDR(OCR1B), r18

    ; Set the sleep mode to idle (attiny85 p.39).
    ldi r18, (1<<SE)
    out _SFR_IO_ADDR(MCUCR), r18

    ; Shutdown unnecessary MCU modules (attiny85 p.38)
    ldi r18, (1<<PRTIM0)|(1<<PRUSI)|(1<<PRADC)
    out _SFR_IO_ADDR(PRR), r18

    rjmp .-4

The power consumption in the idle mode plus the blinks should keep the coin battery running for a couple of weeks, at least. A vibration sensor would improve that significantly, by enabling the MCU to go into power-down mode and be awakened externally, but I don’t have a sensor at hand that is small enough.

This is the assembly, and the final result:

Toy Airplane Hack

He’s enjoying it. :-)

mgo r2014.10.12

 ∗ Labix Blog

A new release of the mgo MongoDB driver for Go is out, packed with contributions and features. But before jumping into the change list, there’s a note in the release notes of MongoDB 2.7.7, from a few days ago, that is worth celebrating:

New Tools!
– The MongoDB tools have been completely re-written in Go
– Moved to a new repository:
– Have their own JIRA project:

So far this is part of an unstable release of the MongoDB server, but it implies that if the experiment works out, every MongoDB server release will be carrying client tools developed in Go and leveraging the mgo driver. This extends the collaboration with MongoDB Inc. (mgo is already in use in the MMS product), and some of the features in release r2014.10.12 were made to support that work.

The specific changes available in this release are presented below. These changes do not introduce compatibility issues, and most of them are new features.

Fix in txn package

The bug would be visible as an invariant being broken, and the transaction application logic would panic until the txn metadata was cleaned up. The bug does not cause any data loss nor incorrect transactions to be silently applied. More stress tests were added to prevent that kind of issue in the future.

Debug information contributed by the juju team at Canonical.

MONGODB-X509 auth support

The MONGODB-X509 authentication mechanism, which allows authentication via SSL client certificates, is now supported.

Feature contributed by Gabriel Russel.

SCRAM-SHA-1 auth support

The MongoDB server is changing the default authentication protocol to SCRAM-SHA-1. This release of mgo defaults to authenticating over SCRAM-SHA-1 if the server supports it (2.7.7 and later).

Feature requested by Cailin Nelson.

GSSAPI auth on Windows too

The driver can now authenticate with the GSSAPI (Kerberos) mechanism on Windows using the standard operating system support (SSPI). The GSSAPI support on Linux remains via the cyrus-sasl library.

Feature contributed by Valeri Karpov.

Struct document ids on txn package

The txn package can now handle documents that use struct value keys.

Feature contributed by Jesse Meek.

Improved text index support

The EnsureIndex family of functions may now conveniently define text indexes via the usual shorthand syntax ("$text:field"), and Sort can use equivalent syntax ("$textScore:field") to inject the text indexing score.

Feature contributed by Las Zenow.
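A minimal sketch of both shorthands (field and collection names are illustrative, and c is assumed to be an *mgo.Collection):

```go
// Create a text index over the body field using the shorthand syntax.
if err := c.EnsureIndexKey("$text:body"); err != nil {
	log.Fatal(err)
}

// Query the index, sorting results by their text score via the
// equivalent sort shorthand.
var results []bson.M
err := c.Find(bson.M{"$text": bson.M{"$search": "coffee"}}).
	Sort("$textScore:score").
	All(&results)
if err != nil {
	log.Fatal(err)
}
```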

Support for BSON’s deprecated DBPointer

Although the BSON specification defines DBPointer as deprecated, some ancient applications still depend on it. To enable the migration of these applications to Go, the type is now supported.

Feature contributed by Mike O’Brien.

Generic Getter/Setter document types

The Getter/Setter interfaces are now respected when unmarshaling documents on any type. Previously they would only be respected on maps and structs.

Feature requested by Thomas Bouldin.

Improvements on aggregation pipelines

The Pipe.Iter method will now return aggregation results using cursors when possible (MongoDB 2.6+), and there are also new methods to tweak the aggregation behavior: Pipe.AllowDiskUse, Pipe.Batch, and Pipe.Explain.

Features requested by Roman Konz.
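A sketch of the new knobs together (the pipeline stages are illustrative, and c is assumed to be an *mgo.Collection):

```go
pipe := c.Pipe([]bson.M{
	{"$match": bson.M{"status": "done"}},
	{"$group": bson.M{"_id": "$owner", "total": bson.M{"$sum": 1}}},
}).AllowDiskUse().Batch(100) // allow spilling to disk; 100-document batches

var result struct {
	Owner string `bson:"_id"`
	Total int    `bson:"total"`
}
iter := pipe.Iter() // served via cursors on MongoDB 2.6+
for iter.Next(&result) {
	fmt.Println(result.Owner, result.Total)
}
if err := iter.Close(); err != nil {
	log.Fatal(err)
}
```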

Decoding into custom bson.D types

Unmarshaling will now work for types that are slices of bson.DocElem in an equivalent way to bson.D.

Feature requested by Daniel Gottlieb.

Indexes and CollectionNames via commands

The Indexes and CollectionNames methods will both attempt to use the new command-based protocol, and fall back to the old mechanism if that doesn’t work.

GridFS default chunk size

The default GridFS chunk size changed from 256k to 255k, to ensure that the total document size won’t go over 256k with the additional metadata. Going over 256k would force the reservation of a 512k block when using the power-of-two allocation schema.

Performance of bson.Raw decoding

Unmarshaling data into a bson.Raw will now bypass the decoding process and record the provided data directly into the bson.Raw value. This significantly improves the performance of dumping raw data during iteration.

Benchmarks contributed by Kyle Erf.

Performance of seeking to end of GridFile

Seeking to the end of a GridFile no longer reads any data. This lets a client find the size of a file through the io.ReadSeeker interface alone, with low overhead.

Improvement contributed by Roger Peppe.

Added Query.SetMaxScan method

The SetMaxScan method constrains the server to only scan the specified number of documents when fulfilling the query.

Improvement contributed by Abhishek Kona.

Added GridFile.SetUploadDate method

The SetUploadDate method allows changing the upload date at file writing time.

Fake Adobe Flash Update OS X Malware, (Thu, Feb 4th)

 ∗ SANS Internet Storm Center, InfoCON: green

Yesterday, while investigating some Facebook click-bait, I came across a fake Flash update that i ...(more)...

ISC Stormcast For Thursday, February 4th 2016, (Thu, Feb 4th)

 ∗ SANS Internet Storm Center, InfoCON: green


Exploring the Just-in-Time Social Web

 ∗ LukeW | Digital Product Design + Strategy

In her presentation at Google today, Stephanie Rieger shared how commerce is evolving (and in many cases innovating) in emerging Internet markets. Here are my notes from her talk: Exploring the Just-in-Time Social Web.

A New Kind of Web

Starting with Social


The Impact of Culture

The Art of the Commit

 ∗ A List Apart: The Full Feed

A note from the editors: We’re pleased to share an excerpt from Chapter 5 of David Demaree’s new book, Git for Humans, available now from A Book Apart.

Git and tools like GitHub offer many ways to view what has changed in a commit. But a well-crafted commit message can save you from having to use those tools by neatly (and succinctly) summarizing what has changed.

The log message is arguably the most important part of a commit, because it’s the only place that captures not only what was changed, but why.

What goes into a good message? First, it needs to be short, and not just because brevity is the soul of wit. Most of the time, you’ll be viewing commit messages in the context of Git’s commit log, where there’s often not a lot of space to display text.

Think of the commit log as a newsfeed for your project, in which the log message is the headline for each commit. Have you ever skimmed the headlines in a newspaper (or, for a more current example, BuzzFeed) and come away thinking you’d gotten a summary of what was happening in the world? A good headline doesn’t have to tell the whole story, but it should tell you enough to know what the story is about before you read it.

If you’re working by yourself, or closely with one or two collaborators, the log may seem interesting just for historical purposes, because you would have been there for most of the commits. But in Git repositories with a lot of collaborators, the commit log can be more valuable as a way of knowing what happened when you weren’t looking.

Commit messages can, strictly speaking, span multiple lines, and can be as long or as detailed as you want. Git doesn’t place any hard limit on what goes into a commit message, and in fact, if a given commit does call for additional context, you can add additional paragraphs to a message, like so:

Updated Ruby on Rails version because security

Bumped Rails version to 3.2.11 to fix JSON security bug. 
See also

Note that although this message contains a lot more context than just one line, the first line is important because only the first line will be shown in the log:

commit f0c8f185e677026f0832a9c13ab72322773ad9cf
Author: David Demaree 
Date:   Sat Jan 3 15:49:03 2013 -0500

Updated Ruby on Rails version because security

Like a good headline, the first line here summarizes the reason for the commit; the rest of the message goes into more detail.
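
From the command line, a multi-paragraph message like this can be written by repeating the -m flag; each -m becomes its own paragraph. The throwaway repository below is just scaffolding for the demonstration; in real use you would run only the git commit itself:

```shell
# Scaffolding: a disposable repository with one staged file.
dir=$(mktemp -d) && cd "$dir"
git init -q .
git config user.name "Example" && git config user.email "example@example.com"
echo "gem 'rails', '3.2.11'" > Gemfile && git add Gemfile

# Each -m flag becomes its own paragraph of the commit message.
git commit -q -m "Updated Ruby on Rails version because security" \
              -m "Bumped Rails version to 3.2.11 to fix JSON security bug."

# Only the first paragraph appears in the one-line log view...
git log --oneline -1

# ...but the full message is still recorded.
git log -1 --pretty=%B
```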

Writing commit messages in your favorite text editor

Although the examples in this book all have you type your message inline, using the --message or -m argument to git commit, you may be more comfortable writing in your preferred text editor. Git integrates nicely with many popular editors, both command-line ones (e.g., Vim, Emacs) and more modern, graphical apps like Atom, Sublime Text, or TextMate. With an editor configured, you can omit the --message flag and Git will hand off a draft commit message to that other program for authoring. When you’re done, you can usually just close the window and Git will automatically pick up the message you entered.

To take advantage of this sweet integration, first you’ll need to configure Git to use your editor (specifically, your editor’s command-line program, if it has one). Here, I’m telling Git to hand off commit messages to Atom:

$: git config --global core.editor "atom --wait"

Every text editor has a slightly different set of arguments or options to pass in to integrate nicely with Git. (As you can see here, we had to pass the --wait option to Atom to get it to work.) GitHub’s help documentation has a good, brief guide to setting up several popular editors.

Elements of commit message style

There are few hard rules for crafting effective commit messages—just lots of guidelines and good practices, which, if you were to try to follow all of them all of the time, would quickly tie your mind in knots.

To ease the way, here are a few guidelines I’d recommend always following.

Be useful

The purpose of a commit message is to summarize a change. But the purpose of summarizing a change is to help you and your team understand what is going on in your project. The information you put into a message, therefore, should be valuable and useful to the people who will read it.

As fun as it is to use the commit message space for cursing—at a bug, or Git, or your own clumsiness—avoid editorializing. Avoid the temptation to write a commit message like “Aaaaahhh stupid bugs.” Instead, take a deep breath, grab a coffee or some herbal tea or do whatever you need to do to clear your head. Then write a message that describes what changed in the commit, as clearly and succinctly as you can.

In addition to a short, clear description, when a commit is relevant to some piece of information in another system—for instance, if it fixes a bug logged in your bug tracker—it’s also common to include the issue or bug number, like so:

Replace jQuery onReady listener with plain JS; fixes #1357

Some bug trackers (including the one built into every GitHub project) can even be hooked into Git so that commit messages like this one will automatically mark the bug numbered 1357 as done as soon as the commit with this message is merged into master.

Be detailed (enough)

As a recovering software engineer, I understand the temptation to fill the commit message—and emails, and status reports, and stand-up meetings—with nerdy details. I love nerdy details. However, while some details are important for understanding a change, there’s almost always a more general reason for a change that can be explained more succinctly. Besides, there’s often not enough room to list every single detail about a change and still yield a commit log that’s easy to scan in a Terminal window. Finding simpler ways to describe something doesn’t just make the changes you’ve made more comprehensible to your teammates; it’s also a great way to save space.

A good rule of thumb is to keep the “subject” portion of your commit messages to one line, or about 70 characters. If there are important details worth including in the message, but that don’t need to be in the subject line, remember you can still include them as a separate paragraph.
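
One quick way to audit this rule of thumb is to list any subjects in the log that run long; the repository set up below is only scaffolding for the demonstration:

```shell
# Scaffolding: a disposable repository with one short and one
# overlong commit subject.
dir=$(mktemp -d) && cd "$dir"
git init -q .
git config user.name "Example" && git config user.email "example@example.com"
echo one > notes.txt && git add notes.txt
git commit -q -m "Add notes file"
echo two >> notes.txt
git commit -q -am "This commit subject is intentionally padded out so that it runs well past the seventy character guideline"

# List any commit subjects longer than the ~70 character guideline.
git log --pretty=format:%s | awk 'length($0) > 70'
```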

Be consistent

However you and your colleagues decide to write commit messages, your commit log will be more valuable if you all try to follow a similar set of rules. Commit messages are too short to require an elaborate style guide, but having a conversation to establish some conventions, or making a short wiki page with some examples of particularly good (or bad) commit messages, will help things run more smoothly.

Use the active voice

The commit log isn’t a list of static things; it’s a list of changes. It’s a list of actions you (or someone) have taken that have resulted in versions of your work. Although it may be tempting to use a commit message to label a version of the work—“Version 1.0,” “Jan 24th deliverable”—there are other, better ways of doing that. Besides, it’s all too easy to end up in an embarrassing situation like this:

# Making the last homepage update before releasing the new site
$: git commit -m "Version 1.0"

# Ten minutes later, after discovering a typo in your CSS
$: git commit -m "Version 1.0 (really)"

# Forty minutes later, after discovering another typo
$: git commit -m "Version 1.0 (oh FFS)"

Describing changes is not only the most correct format for a commit message, but it’s also one of the easiest rules to stick to. Rather than concern yourself with abstract questions like whether a given commit is the release version of a thing, you can focus on a much simpler story: I just did a thing, and this is the thing I just did.

Those “Version 1.0” commits, therefore, could be described much more simply and accurately:

$: git commit -m "Update homepage for launch"
$: git commit -m "Fix typo in screen.scss"
$: git commit -m "Fix misspelled name on about page"

I also recommend picking a tense and sticking with it, for consistency’s sake. I tend to use the imperative present tense to describe commits: Fix misspelled name on About page rather than fixed or fixing. There’s nothing wrong with fixed or fixing, except that they’re slightly longer. If another style works better for you or your team, go for it—just try to go for it consistently.

What happens if your commit message style isn’t consistent? Your Git repo will collapse into itself and all of your work will be ruined. Kidding! People are fallible, lapses will happen, and a little bit of nonsense in your logs is inevitable. Note, though, that following style rules like these gets easier the more practice you get. Aim to write the best commit messages you can, and your logs will be better and more valuable for it.

Mark Llobrera · Professional Amateurs: Write What You Know (Now)

 ∗ A List Apart: The Full Feed

EMET 5.5 Released, (Wed, Feb 3rd)

 ∗ SANS Internet Storm Center, InfoCON: green


Automating Vulnerability Scans, (Wed, Feb 3rd)

 ∗ SANS Internet Storm Center, InfoCON: green

ISC Stormcast For Wednesday, February 3rd 2016, (Wed, Feb 3rd)

 ∗ SANS Internet Storm Center, InfoCON: green


Targeted IPv6 Scans Using ., (Tue, Feb 2nd)

 ∗ SANS Internet Storm Center, InfoCON: green

IPv6 poses a problem for systems like Shodan, which try to enumerate vulnerabilities Internet-wide. ...(more)...

ISC Stormcast For Tuesday, February 2nd 2016, (Tue, Feb 2nd)

 ∗ SANS Internet Storm Center, InfoCON: green


Around about 1998, I was talking to my electronic music teacher and enthused about the ...

 ∗ iRi

Around about 1998, I was talking to my electronic music teacher and enthused about the day that would come when we could put, say, everything Mozart ever did on a single cube, holding my fingers up in the air separated by about an inch. "You know, not everything Mozart did was great." "No, I get it, I just mean him as someone who put out a lot of stuff. Everything the Beatles ever did would work, too."

Silly me. I thought it would take a cube.

For $3, you can get the Big Mozart Box and for another $1, the complete Mozart symphonies. This clocks in at approximately 2.6GB of reasonable quality MP3s. (Not the bestest possible, but certainly pretty good.) It's not everything he did, I don't think, so let's call it 4GB total. That's a mere 1/32nd of a 128GB SD card, which is very much an areal storage device and not a cubic one.

In other news, if you've been inclined to buff your classical music collection, it turns out to be really cheap to just buy the MP3s, even if you've already got Amazon prime.

This week's sponsor: Hired

 ∗ A List Apart: The Full Feed

Get 5+ job offers with 1 application from companies like Uber, Square, and Facebook—plus a $1000 bonus when you get a job—with our sponsor Hired. Join today.

Salt Mine


This one is a little bland. Pass the saltshaker?

ISC Stormcast For Monday, February 1st 2016, (Mon, Feb 1st)

 ∗ SANS Internet Storm Center, InfoCON: green


Windows 10 and System Protection for DATA Default is OFF, (Sun, Jan 31st)

 ∗ SANS Internet Storm Center, InfoCON: green

I suffered the unfortunate consequences of a main hard drive failure this week and had to rebuild my ...(more)...

OpenSSL 1.0.2 Advisory and Update, (Sun, Jan 31st)

 ∗ SANS Internet Storm Center, InfoCON: green

On the 26th, ISC handler Rob posted a

All CVE Details at Your Fingertips, (Sat, Jan 30th)

 ∗ SANS Internet Storm Center, InfoCON: green

CVE (Common Vulnerabilities and Exposures) is a system developed to provide structured data for in ...(more)...

Scripting Web Categorization, (Fri, Jan 29th)

 ∗ SANS Internet Storm Center, InfoCON: green

When you are dealing with a huge amount of data, it can be very useful to enhance it by adding ...(more)...

XKCD Stack


This site requires Sun Java (32-bit) or higher. You have Macromedia Java¾ (48-bit). Click here [link to main page] to download an installer which will run fine but not really change anything.
