Infocon: green

 ∗ SANS Internet Storm Center, InfoCON: green

Rig Exploit Kit Changes Traffic Patterns, (Wed, Apr 1st)

 ∗ SANS Internet Storm Center, InfoCON: green

Sometime within the past month, Rig exploit kit (EK) changed URL structure.

Not ...(more)...

ISC StormCast for Tuesday, March 31st 2015 http://isc.sans.edu/podcastdetail.html?id=4419, (Tue, Mar 31st)

 ∗ SANS Internet Storm Center, InfoCON: green

...(more)...

Select Star from PCAP - Treating Packet Captures as Databases, (Tue, Mar 31st)

 ∗ SANS Internet Storm Center, InfoCON: green

Have you ever had to work with a large packet capture, and after getting past the initial stage o ...(more)...

YARA Rules For Shellcode, (Mon, Mar 30th)

 ∗ SANS Internet Storm Center, InfoCON: green

I had a

On Our Radar: Self-Centered Edition

 ∗ A List Apart: The Full Feed

Okay, we admit it: it’s all about us. From steps to sleep to social activities, we’re counting every kind of personal data you can think of. But what’s all that data add up to? How could we look at it—and ourselves—differently? This week, we’re asking ourselves—and our self—the tough questions. 

My so-called lifelog

While waiting for an invite from gyrosco.pe, which promises to help me lead a healthier and happier life by harnessing my personal data, I started reading about life resource planning: the idea that we can administer every aspect of our lives using our timeline, our life feed, as a tool. LRP isn’t just the lifelogging data gathered by all the apps we use (health, finance, commuting, social graph, etc.). It’s about a user interface to make sense of it—a personal agent telling my story.

This has me thinking, how can I ever reinvent myself if my life feed becomes part of a documented history? The answer seems to lie in the notion of storytelling, becoming active autobiographers ourselves, using the same tools that tell our history, only to tell it better. When people are prompted to “tell a story” rather than state “what’s on their mind,” a character emerges—a qualified self (as opposed to the notion of the quantified self)—that may defy “big” data.

Michelle Kondou, developer

Mirror, mirror

A couple of days ago, I came across dear-data.com, a project by data visualization pros Giorgia Lupi and Stefanie Posavec. Instead of building digital charts and graphs, they’re documenting details of their lives onto handmade postcards—translating quiet moments of the everyday into colors and lines of self-awareness, and reinventing the rules each week. With a flickering edge of whimsy and objectivity, those moments are real life—through a filter.

What I love about Dear Data is that their conditions create new filters; they end up with a different view of themselves each week. Getting out of their usual medium and having to create new ways to tell each story is a tactic for hunting down catalysts. I also like how they went back to square one: paper and colored pens, no expectations to be fancy, no need for neat lines.

Dear Data has me thinking about how we can all gain momentum from reimagining our digital selves every once in a while—from ditching our habitual means of describing and defining. How I can so easily show myself a new mirror and allow a situation to filter through me—I’d discover a different result each time. Those moments are grounding: they’re a sharp instant of humility, a moment of recognition that you’ll never see anything in the same way again.

Mica McPheeters, submissions and events manager

My birthday, my self

Ah, spring—that special time of year when a young developer’s fancy soon turns to thoughts of lexical scoping, and I’ve got ECMAScript 6 arrow functions on the brain.

Defining a function as usual introduces a new value for the this keyword, meaning we sometimes need to write code like the following:

function Wilto() {
	var self = this;
	self.age = 32;

	setInterval( function constantBirthdays() {
		self.age++;
		console.log( "I am now " + self.age + " years old");
	}, 3000 );
}

Since the meaning of this is going to change inside the constantBirthdays function, we alias the enclosing function’s this value as the variable self—or sometimes as that, depending on your own preference.

Arrow functions will maintain the this value of the enclosing context, however, so we can do away with that variable altogether:

function Wilto() {
	this.age = 32;

	setInterval(() => {
		this.age++;
		console.log( "I am now " + this.age + " years old");
	}, 3000 );
}
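To see the two behaviors side by side in one runnable snippet, here's a small sketch (the Counter name and the forEach loops are made up for illustration, not from the article): the regular callback needs the self alias, while the arrow callback inherits the enclosing this directly.

```javascript
function Counter() {
	this.count = 0;

	// Regular function callback: it gets its own `this`,
	// so we fall back on the `self` alias from the article.
	var self = this;
	[1, 2, 3].forEach(function () {
		self.count++;
	});

	// Arrow function callback: it inherits the enclosing `this`,
	// so no alias is needed.
	[1, 2, 3].forEach(() => {
		this.count++;
	});
}

var c = new Counter();
console.log(c.count); // 6
```

Both loops increment the same counter three times each; the arrow version just gets there without the extra variable.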

Thanks to ES6, we can finally start getting over our selfs.

Mat Marquis, technical editor

A gif about: self(ie) love

[GIF: President Obama using a selfie stick]
Haters gonna hate.

Laura Kalbag on Freelance Design: The Illusion of Free

 ∗ A List Apart: The Full Feed

Our data is out of our control. We might (wisely or unwisely) choose to publicly share our statuses, personal information, media and locations, or we might choose to only share this data with our friends. But it’s just an illusion of choice—however we share, we’re exposing ourselves to a wide audience. We have so much more to worry about than future employers seeing photos of us when we’ve had too much to drink.

Corporations hold a lot of information about us. They store the stuff we share on their sites and apps, and provide us with data storage for our emails, files, and much more. When we or our friends share stuff on their services, either publicly or privately, clever algorithms can derive a lot of detailed knowledge from a small amount of information. Did you know that you’re pregnant? Did you know that you’re not considered intelligent? Did you know that your relationship is about to end? The algorithms know us better than our families and only need to know ten of our Facebook Likes before they know us better than our average work colleague.

A combination of analytics and big data can be used in a huge variety of ways. Many sites use our data just to ensure a web page is in the language we speak. Recommendation engines are used by companies like Netflix to deliver fantastic personalized experiences. Google creates profiles of us to understand what makes us tick and sell us the right products. 23andme analyzes our DNA for genetic risk factors and sells the data to pharmaceutical companies. Ecommerce sites like Amazon know how to appeal to you as an individual, and whether you’re more persuaded by social proof when your friends also buy a product, or authority when an expert recommends a product. Facebook can predict the likelihood that you drink alcohol or do drugs, or determine if you’re physically and mentally healthy. It also experiments on us and influences our emotions. What can be done with all this data varies wildly, from the incredibly convenient and useful to the downright terrifying.

This data has a huge value to people who may not have your best interests at heart. What if this information is sold to your boss? Your insurance company? Your potential partner?

As Tim Cook said, “Some companies are not transparent that the connection of these data points produces five other things that you didn’t know that you gave up. It becomes a gigantic trove of data.” The data is so valuable that cognitive scientists are giddy with excitement at the size of studies they can conduct using Facebook. For neuroscience studies, a sample of twenty white undergraduates used to be considered sufficient to say something general about how brains work. Now Facebook works with scientists on sample sizes of hundreds of thousands to millions. The difference between more traditional scientific studies and Facebook’s studies is that Facebook’s users don’t know that they’re probably taking part in ten “experiments” at any given time. (Of course, you give your consent when you agree to the terms and conditions. But very few people ever read the terms and conditions, or privacy policies. They’re not designed to be read or understood.)

There is the potential for big data to be collected and used for good. Apple’s ResearchKit is supported by an open source framework that makes it easy for researchers and developers to create apps to collect iPhone users’ health data on a huge scale. Apple says they’ve designed ResearchKit with people’s privacy values in mind, “You choose what studies you want to join, you are in control of what information you provide to which apps, and you can see the data you’re sharing.”

But the allure of capturing huge, valuable amounts of data may encourage developers to design without ethics. An app may pressure users to quickly sign the consent form when they first open the app, without considering the consequences, in the same way we’re encouraged to quickly hit “Agree” when we’re presented with terms and conditions, or the way apps tell us we need to allow constant access to our location so the app can, they tell us, provide us with the best experience.

The intent of the developers, their bosses, and the corporations as a whole, is key. They didn’t just decide to utilize this data because they could. They can’t afford to provide free services for nothing, and that was never their intention. It’s a lucrative business. The business model of these companies is to exploit our data, to be our corporate surveillers. It’s their good fortune that we share it like—as Zuckerberg said—dumb fucks.

To say that this is a privacy issue is to give it a loaded term. The word “privacy” has been hijacked to suggest that you’re hiding things you’re ashamed about. That’s why Google’s Eric Schmidt said “if you’ve got something to hide, you shouldn’t be doing it in the first place.” (That line is immortalized in the fantastic song, Sergey Says.) But privacy is our right to choose what we do and don’t share. It’s enshrined in the Universal Declaration of Human Rights.

So when we’re deciding which cool new tools and services to use, how are we supposed to make the right decision? Those of us who vaguely understand the technology live in a tech bubble where we value convenience and a good user experience so highly that we’re willing to trade it for our information, privacy and future security. It’s the same argument I hear again and again from people who choose to use Gmail. But will the tracking and algorithmic analysis of our data give us a good user experience? We just don’t know enough about what the companies are doing with our data to judge whether it’s a worthwhile risk. What we do know is horrifying enough. And whatever corporations are doing with our data now, who knows how they’re going to use it in the future.

And what about people outside the bubble, who aren’t as well-informed when it comes to the consequences of using services that exploit our data? The everyday consumer will choose a product based on free and fantastic user experiences. They don’t know about the cost of running, and the data required to sustain, such businesses.

We need to be aware that our choice of communication tools, such as Gmail or Facebook, doesn’t just affect us, but also those who want to communicate with us.

We need tools and services that enable us to own our own data, and give us the option to share it however we like, without conditions attached. I’m not an Apple fangirl, but Tim Cook is at least talking about privacy in the right way:

None of us should accept that the government or a company or anybody should have access to all of our private information. This is a basic human right. We all have a right to privacy. We shouldn’t give it up.

“Apple has a very straightforward business model,” he said. “We make money if you buy one of these [pointing at an iPhone]. That’s our product. You [the consumer] are not our product. We design our products such that we keep a very minimal level of information on our customers.”

But Apple is only one potential alternative to corporate surveillance. Their services may have some security benefits if our data is encrypted and can’t be read by Apple, but our data is still locked into their proprietary system. We need more *genuine* alternatives.

What can we do?

It’s a big scary issue. And that’s why I think people don’t talk about it. When you don’t know the solution, you don’t want to talk about the problem. We’re so entrenched in using Google’s tools, communicating via Facebook, and benefitting from a multitude of other services that feed on our data, it feels wildly out of our control. When we feel like we’ve lost control, we don’t want to admit it was our mistake. We’re naturally defensive of the choices of our past selves.

The first step is understanding and acknowledging that there’s a problem. There’s a lot of research, articles, and information out there if you want to learn how to regain control.

The second step is questioning the corporations and their motives. Speak up and ask these companies to be transparent about the data they collect, and how they use it. Encourage government oversight and regulation to protect our data. Have the heart to stand up against a model you think is toxic to our privacy and human rights.

The third, and hardest, step is doing something about it. We need to take control of our data, and begin an exodus from the services and tools that don’t respect our human rights. We need to demand, find and fund alternatives where we can be together without being an algorithm’s cash crop. It’s the only way we can prove we care about our data, and create a viable environment for the alternatives to exist.

Progressive Enhancement FTW with Aaron Gustafson

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

IN EPISODE № 130 of The Big Web Show (“Everything Web That Matters”), I interview long-time web standards evangelist Aaron Gustafson, author of Adaptive Web Design, on web design then and now; why Flipboard’s 60fps web launch is anti-web and anti-user; design versus art; and the 2nd Edition of Aaron’s book, coming from New Riders this year.

Enjoy Episode № 130 of The Big Web Show.

Show Links

A Bit About Aaron Gustafson
Adaptive Web Design: Crafting Rich Experiences with Progressive Enhancement
Responsive Issues Community Group
Easy Designs – Web Design, Development & Consulting
Web Standards Sherpa
Code & Creativity
WebStandardsProject (@wasp) | Twitter
A List Apart: For People Who Make Websites
Genesis – Land Of Confusion [Official Music Video] – YouTube

PHP 5.5.23 is available, (Wed, Mar 25th)

 ∗ SANS Internet Storm Center, InfoCON: green

From the fine folks at php.net:

Marchgasm!

 ∗ Jeffrey Zeldman Presents The Daily Report: Web Design News & Insights Since 1995

I’VE BEEN BUSY this month:

And March is only half over.

Receiving Kindle books as donation for my open source project

 ∗ Fatih Arslan

A side note: I'm the author and maintainer of the popular Vim plugin for the Go programming language, github.com/fatih/vim-go. It's widely used in the Go community to develop Go with Vim.

Recently I've been asked a lot whether I accept donations. I didn't, because I didn't need them and didn't think they were necessary. But the number of requests for a donation option kept growing. People wanted to give something back, and I wasn't giving them a way to do it.

So I immediately started thinking about how to add a donation button to my GitHub page. At first I considered a PayPal donation button, since that's the one I see most often. But money wasn't what mattered to me. I tried to think about it differently and asked myself what I would actually do with a donation. The answer was obvious: I would buy a Kindle book from my wish list.

So instead of putting up a donation button, I added a URL pointing to my Amazon.com wish list.

I didn't expect it to become much of a thing, but since adding it I've received over ten books in total, averaging about a book a week. It feels great: I get to read books I enjoy, and people get to give something back. Many thanks to all the contributors and donors of vim-go. It's a great feeling!
