Using File Entropy to Identify "Ransomwared" Files
Any engineer or physicist will tell you that entropy is like gravity - there's no fighting it, it's ...
Earlier today, I posted
2016-08-07 update: I've posted a follow-up diary [
"With two inputs, a neuron can classify the data points in two-dimensional space into two kinds with a straight line. If you have three inputs, a neuron can classify data points in three-dimensional space into two parts with a flat plane, and so on. This is called 'dividing n-dimensional space with a hyperplane.'"
— Understanding neural networks with TensorFlow Playground
"We were looking to make usage of Kafka and Python together just as fast as using Kafka from a JVM language. That’s what led us to develop the pykafka.rdkafka module. This is a Python C extension module that wraps the highly performant librdkafka client library written by Magnus Edenhill."
— PyKafka: Fast, Pythonic Kafka, at Last!
"To satisfy this claim, we need to see a complete set of statically checkable rules and a plausible argument that a program adhering to these rules cannot exhibit memory safety bugs. Notably, languages that offer memory safety are not just claiming you can write safe programs in the language, nor that there is a static checker that finds most memory safety bugs; they are claiming that code written in that language (or the safe subset thereof) cannot exhibit memory safety bugs."
— "Safe C++ Subset" Is Vapourware
"For each mutant the tool runs the unit test suite; and if that suite fails, the mutant is said to have been killed. That's a good thing. If, on the other hand, a mutant passes the test suite, it is said to have survived. This is a bad thing."
— Mutation Testing
"This meant that literally everything was asynchronous: all file and network IO, all message passing, and any “synchronization” activities like rendezvousing with other asynchronous work. The resulting system was highly concurrent, responsive to user input, and scaled like the dickens. But as you can imagine, it also came with some fascinating challenges."
— Asynchronous Everything
I've said several times that XML Namespaces are simple, but I don't have a good proof document. So here it is.
Our reader submitted several odd packets to us. Of course, I can't resist figuring out what is exa ...
Johannes B. Ullrich, Ph ...
Roughly six years into my software development career, I had worked on interesting projects, met amazing people, and had the opportunity to travel to exotic cities. Yet I was frustrated. I was burning the candle at both ends to get things done. I didn’t look back to see if I could improve on how things were being done; I had no time. Deep down I knew it wasn’t feasible. I was working hard, not smart; I felt like I wasn’t working toward anything; I was falling behind with technology. I was burning out.
I started searching for an opportunity to facilitate my technical growth. Two years later I was based at an enterprise client who adopted agile software development methodologies, and everything changed for me. This new world exposed me to a diverse working environment and new perspectives, and encouraged me to ask even more questions than before. This is when I discovered the power of the words “reflect, inspect, and adapt.”
It wasn’t a walk in the park with unicorns and rainbows, but the experience has aided me in officially branding my career as one exciting journey of professional and self-discovery. Now ten years into my career, I realize that for most of that time I have been in survival mode. After looking back, I’d like to share how I found opportunities in the mistakes I made.
Define clear career and personal goals
Computers weren’t a household name when I was growing up in South Africa, but I was lucky to have access to my dad’s Pentium 386. I was amazed at this technology. When we got internet access, I was immediately hooked on the online world. I taught myself HTML and later built my own machine with the money I made from designing a website for the local newspaper.
When I chose my higher education path I had one goal—I wanted to make websites. I didn’t want a degree; I wanted experience. I studied at a college for two years, then excitedly entered the workforce to follow my passion.
As I entered the workforce, I wasn’t prepared for the politics: managers expecting things to be done almost immediately; clients who don’t engage and are unsure of what they want; clients who express urgency, yet wait until the last minute to provide you with everything you need; an increased workload due to colleagues who stay well inside their comfort zone. These are just some examples of the politics that initiated my frustrations.
I wondered if this is where I’d still be in five or ten years and if I would be able to sustain it. I didn’t know the answer to the former, but to the latter it was definitely no.
Coupled with turning thirty, the new perspectives I developed in the agile environment made me really evaluate my future. I realized that I didn’t have goals; I was only chasing my passion. Granted, it is fun and I gained a lot of experience in many different areas in IT, but I don’t have anything tangible to show for it now.
After much reflection, I discovered these goals for myself:
- Increase productivity. I minimize distractions like email, social media, and uninvited guests to improve my productivity. To make sure I am working on the right tasks, I need to have a clear understanding about what I am working on and why.
- Develop software that has a positive impact on people. It is important to understand business thinking and impact on users. I need to ask appropriate questions, and I need to guide and negotiate with product stakeholders.
- Share my knowledge. I can create an online identity (publish articles, blog), possibly speak at events, and contribute to open-source software. I can find projects on GitHub of libraries and tools that I regularly use and create a pull request.
- Better my craftsmanship. I can learn through code reviews and peer conversations, listen to podcasts, read up on best practices, read more craft-related reference books, and reflect on my implementation.
- Learn to live mindfully. To have a positive impact on people, I can make small adjustments and engage those around me to help me grow. Meditation, reflection, and motivational books are tools I could use to guide me.
- Showcase my career. Create a tangible timeline of projects I have worked on including screenshots, descriptions, technologies, and learnings.
These goals feel more defined to me than just making cool websites. I wish I had set some goals a little sooner but luckily — as cliché as it sounds — it’s never too late. Goals give you direction and purpose. Like me, you may have worked many late nights on personal projects that never materialized. It helps to have focus and something definite to achieve. I find what’s best of all is that I don’t feel constrained by having these goals. They represent what’s important to me now but if my values change, I can inspect and adapt my goals.
Put people before technology
For too long, I worked alone on my own codebases and wondered if I was doing things the right way. I had little to no exposure to working in teams and dealing with industry buzzwords like agile, TDD/BDD, Gang of Four, SOLID, code reviews, continuous integration/delivery, DevOps, and <insert your favorite technical jargon here>. I was in a bubble falling further behind in the fast-paced technical world. I was focused on working with technology and never realized how important it is to collaborate.
If you work in a company with a silo-based culture or one- or two-person teams, try not to accept things for what they are:
- Get involved with your coworkers by communicating and collaborating on projects.
- Try introducing knowledge-sharing sessions and code reviews.
- Reflect on what worked and what didn’t and also unpack why, so that you can learn from it.
- Approach management with suggestions on how you and your colleagues can produce more solid and effective software.
- Attend conferences or smaller community meetups. Not only can you learn a lot through the content but you have the chance to network and learn from an array of people with different skills.
Prioritize your tasks
I often worked about twelve to sixteen hours a day on projects with short deadlines. I spent my official work hours helping colleagues with problems, immediately responding to email, attending to people with queries or friendly drop-ins, supporting projects that were in production, or fighting fires resulting from errors that usually came from miscommunication. This left me with very little time to be productive. When I finally got to work on my project, my perfectionism only increased my stress levels. Regardless, I never missed a deadline.
I thought everything was important. If I didn’t do what I was doing the world would end, right? No! The reality is that when everything is important, nothing is important.
This working behavior sets unrealistic expectations for the business, your colleagues, and yourself. It hides underlying issues that need to be addressed and resolved. If you are working at an unsustainable pace, you can’t deliver your best work, and you end up missing out on actually living your life.
The power of retrospectives
The most important ceremony (or activity) I was introduced to in the agile environment was the retrospective, which is “the process of retrospecting at the heart of Scrum (Inspect and Adapt), eXtreme Programming (fix it when it breaks) and Lean Software Development (Kaizen or Continuous Improvement)”.1
Through retrospection you are granted the opportunity to reflect on how you — and the team — did something, so that you can improve the process. Let’s run through this technique to identify some pain points using the situation I had found myself in:
- Working unsustainable hours because there was too much to do. I helped everyone else before I worked on my own tasks, I worked on things that didn’t add much value, and I thought that all the features needed to be ready for launch. I was blind to asking for help when I needed it.
- Dealing with too many distractions. I allowed the distractions by immediately switching context to help others because it was important to them.
- Key-person dependency. I was the only person working on one of the projects.
- Miscommunication resulting in errors. Communication was done via email and the stakeholders were off-site. There wasn’t quick feedback to indicate if the project was going in the right direction.
Once the pain points are identified, adjustments need to be made in order to see improvement. Large adjustments could take too long to implement or adjust to, which leads to disruptions. Smaller adjustments are better. These adjustments may or may not work in the long haul, so we can look at them as experiments.
- To work more sustainably I need to know what I need to work on — and why — so that I can add value without wearing myself out. Perhaps I could find out what needs to be available for launch and create a prioritized list of things to do. This list could help me focus and get into the “zone.”
- To manage client expectations, we can try open communication. This can also help me prioritize my tasks.
- To overcome some of the distractions I could reap the benefits of being selfish by saying no (within reason). This could help me stay in the zone for longer. If anything must be expedited I can start offering trade-offs: if I do X now, can Y wait?
- To alleviate the pressures of being the sole person able to do certain things, I could have more conversations with my manager and train a colleague so that they are aware of what is going on and someone can take over in the event that I get sick or am on vacation.
- To reduce errors from miscommunication, perhaps we could create visibility for stakeholders. Introduce a physical workflow board and have constant feedback loops by requesting frequent reviews to demonstrate what we have done.
Experiments run for a period of time and need to be measured. This is a grey area. Measurements aren’t always accurate, but it always boils down to the pain. If the pain is the same or has increased, then the experiment needs to be adjusted or a new experiment introduced. If it has been alleviated, even slightly, then there is improvement.
Learning through experimentation
Many of the experiments mentioned above already form part of the agile Scrum framework, so let me introduce you to real-world experiments we did in our team.
Based on the way our development stories were deployed, we experienced pain with testing stories in the appropriate order. We were using Jenkins for automated deployments and each one got a number incremented from the previous one, but the testers weren’t testing the stories in any particular order. If a story was ready to be deployed, they wouldn’t know if there was another, untested story that they were unwittingly promoting to production along with it, or if the story they tried to deploy was being held back by other stories still awaiting testing.
Without waiting for a retrospective we had a conversation to highlight the pain. We chose to write the build number on a note stuck on the story card on our wall and add a comment to our digital storyboard. This created quick visibility on the chronological order of the possible deployments of our stories.
A change control process was later introduced that required details of a production deployment and a rollback plan for that change. We couldn’t quickly access the last few production build numbers, so we started writing them on stickies and put those onto a new section on our physical board. Now we didn’t have to search through email or log in to Jenkins to find these numbers. One day, we were asked when we last deployed and had to go back to email for the answer, so we started adding the date to the deployment number stickies.
These were simple experiments but they added a lot of value by saving time. We acted on alleviating pain as it happened.
Don’t be afraid to experiment if you are not in an Agile world. If you simply run to business with problems and offer no solutions then business will frown at you. The goal here is simple: identify your pain points and find simple solutions (or improvements) to try to alleviate the pain. Experiment, inspect, and adapt often.
Believe in yourself
Survival mode never did me any good. I didn’t get an award for working long hours to make deadlines. Letting my mistakes and frustrations build up over the years made me stop believing in myself.
I was stuck in a rut; technology was changing around me fast and I was burnt out and falling behind. I’d scroll through Stack Overflow and instantly feel stupid. I’d spend time looking at all the amazing websites winning awards on Awwwards and feel inadequate. I didn’t have a life as it was consumed by my obsession for work. I didn’t know what I wanted anymore, or what I wanted to aspire to.
Introspection helped me. By inspecting my behavior, I was able to make minor adjustments that I would then inspect again to see if they worked. This simple activity can show you what you are capable of and lead you to learning more about yourself and those around you. I am applying what I have learned in software in a personal capacity. I have my life back, and I feel empowered and freed.
My final thoughts
I’ve definitely made a lot of mistakes in my career. What I have shared with you is probably only a fraction of them. I don’t regret my mistakes at all; that is how I got my experience. The only regret I have is that I wish I had begun reflecting on them sooner.
When a mistake is made, an opportunity is born: learn from that mistake to do something differently next time. Take time to step out of the subjective into the objective, so that you can reflect and consider what you could do to change it. (And don’t be too hard on yourself!)
My journey has taught me to implement small experiments that can be measured and to run them for short periods of time. If something works, keep it. If not, adjust it or throw it away. By making small changes, there are fewer disruptions. If you too are in survival mode — stop and breathe now! Reflect, inspect, and adapt.
- 1. From the Agile Retrospective Wiki.
I have adopted a parsimonious approach towards the inclusion of library dependencies in my software projects. I've been meaning to write about this since before the left-pad debacle, and at the time I even teased it a bit:
left-pad exposes cultural problems with respect to dependencies and code reuse, the technology is just an enabler
First, I want to go on an absolutely massive tangent, and start with 3 axioms of dependency management complexity:
1. Cool APIs Don't Change; all other things being equal, a stable API is better than one that keeps changing.
2. Dependency graphs have a total complexity on the order of their edge count.
3. Edge weight isn't constant; it scales up by some factor based on the number of paths that use it.
(Henceforth, "dependency" will be abbreviated "dep")
Axiom 2 has some good data going for it, and I suspect there may be ways to prove axiom 3, though that may also be the kind of thing that graph theorists argue about when they aren't trying to color in high-dimensional cubes. If you have a very basic familiarity with graph theory and how it applies to networks, take the opposite approach to deps: flow and connectivity are bad, because they propagate the breakages that develop when deps change; star topology is good, because isolation is both insulation and loose coupling, which is a computer scientist's way of saying "superior."
These axioms suggest that relying on lots of deps isn't necessarily bad if their arrangement is simple. This is good news, because it means I'm not just some stuffy Luddite admonishing you to stop installing code off the internet.
If you want to try and guess how much a dep will cost, here's a napkin estimate. Come up with 3 numbers: the frequency that a dep changes (a real number from 0-1), its distance from the root of your dep graph, and the number of edges coming out of it. Since I'm not very sophisticated, I think it's a reasonable approximation to multiply all the terms together 1. Deps that are isolated, stable, and directly imported are cheap and those that are volatile, tightly coupled with others, and a distant concern are expensive.
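As a rough sketch, that napkin estimate can be written down directly. The function and variable names below are my own shorthand for the article's three terms, not anything from an actual tool:

```go
package main

import "fmt"

// moironScore is a napkin estimate of a dependency's cost:
// churn (a real number from 0-1, how often the dep changes) times
// its depth from the root of the dep graph times its out-degree
// (the number of edges coming out of it).
func moironScore(churn float64, depth, outDegree int) float64 {
	return churn * float64(depth) * float64(outDegree)
}

func main() {
	// A stable, directly imported dep with few edges: cheap.
	fmt.Println(moironScore(0.05, 1, 2)) // 0.1
	// A volatile dep deep in the graph with many edges: expensive.
	fmt.Println(moironScore(0.9, 4, 10)) // 36
}
```

By this estimate, a standard-library-like dep scores near zero, while a churny, distant, well-connected dep scores a couple of orders of magnitude higher, which matches the intuition in the paragraph above.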
Obviously, you cannot arbitrarily add code to a project without it getting more complex. The first term having a limit at zero is meant to suggest that deps that rarely change (like standard libraries) do not impose a lot of extra management overhead as you upgrade them. It's why stable libraries are better than unstable ones, and it's why people in the Go community have been banging on about vendoring even though nobody wants to hear it.
The elephant in the room about vendoring is that it works great for applications and large projects being undertaken by teams at companies, but it's rubbish for libraries. Peter Bourgon's Go best practices post has a nice section on dependency management, and in it he claims:
Libraries with vendored dependencies are very difficult to use; so difficult that it is probably better said that they are impossible to use. [...] Without getting too deep in the weeds, the lesson is clear: libraries should never vendor dependencies.
This is an uncontroversial opinion among those who have actually tried it.
Lots of languages today ship with or encourage the use of a tool 2 and repository that attempts to fetch deps automatically. Go 1.0 shipped with a set of tools that included:
- a way to fetch deps based on the contents of another package
- a canonical way to build and link packages based on conventions
- static linkage by default
This made dep resolution a build-time concern. Version resolution for production deployments was something you solved once, albeit manually, and not on every box you deployed to. The simplistic go get lowered the barrier to entry for tinkerers and library authors; if you had a URL or a GitHub account, you could "publish" a package without worrying about the land rush.
With no solution for reproducible builds or pinning other than "we prefer vendoring" and "go get is not a package manager", the community tried out a bunch of stuff. Amazingly, lots of these focus on vendoring, and most of them differ from the standard approach. By contrast, without a pre-existing automatic build system, Rust's build tool and package manager were combined into one project. It follows all of the standard practices and, in a move of unprecedented irony, was named cargo. 3
It's this class of tools that I tweeted about.
The manifest+lock pattern allows you to fetch, freeze, and pin dep versions, but it doesn't make any improvements to the management process. The total cost of ownership doesn't change. Over time, people start to underestimate the cost of longer-term maintenance. At its worst, you get an almost comical combinatorial explosion and you're told not to worry and not to look behind the curtain. Eventually, someone decides it's overly complex and they build something new and simple, but the culture ensures they end up right back where they started.
If you think the culture isn't informed by the tooling, next time you go to include a dep in your project ask yourself if you'd still do so if it were a C project and the library was a shared object file instead. This process is so daunting that a special type of hell was invented to describe it. Many Go bindings for C projects don't bother and just bundle all of the C code.
I'm cautiously optimistic that once we've solved the minor technical hurdles to conveniently modelling these real long-term complexities, we'll be able to keep our culture of suspicion and aversion when it comes to deps. Aside from those on the core team, we've long had influential members of the community supporting this; Blake Mizerany's dotGo 2014 talk on The Three Fallacies of Dependencies is an early example, and its themes echo even today in Peter's best practices blog post.
If you're writing a library, focus on what your library is trying to do, and don't go including a bunch of deps for a few bells and whistles. If you want a simpler project and a simpler way to manage your deps, then by far the easiest and most effective thing to do is use fewer libraries.
- 1. If this doesn't exist, we can call it the Moiron score. It might not have a ring to it, but people might start to learn how to pronounce my name.
- 2. Central open source repos for languages probably started with Perl's CPAN, but many languages have adopted one (or more!) tools to manage their own dependencies: npm, a number of PHP attempts culminating in Composer, pypi/cheeseshop, gem/rubygems, cabal/hackage, rebar/hex, and now cargo/crates, et al.
- 3. Sincere apologies for this, I could not help myself. I really do think, given the calibre of the people who built it, that they might have thought a bit out of the box (UGH, sorry again) if the build system was already there.
I am a big fan of the idea behind Certificate Transparency. The real problem with SSL (and TL ...
- Serve your /api/ routes from your Go service
- Also have it serve the static content for your application (e.g. your JS bundle, CSS, assets)
- Any other route will serve the index.html, so deep links still reach your client-side application
The Folder Layout
Here's a fairly simple folder layout: we have a simple Vue.js application sitting alongside a Go service. Our Go main() is contained in serve.go, with the datastore interface and handlers inside their own packages.

```
~ gorilla-vue tree -L 1
.
├── README.md
├── datastore
├── dist
├── handlers
├── index.html
├── node_modules
├── package.json
├── serve.go
├── src
└── webpack.config.js

~ gorilla-vue tree -L 1 dist
dist
├── build.js
└── build.js.map
```
With this in mind, let's see how we can serve index.html and the contents of our dist/ directory.
Note: If you're looking for tips on how to structure a Go service, read through @benbjohnson's excellent Gophercon 2016 talk.
The example below uses gorilla/mux, but you can achieve this with vanilla net/http or httprouter, too.
The main takeaway is the combination of a catchall route and http.ServeFile, which effectively serves our index.html for any unknown routes (instead of 404'ing). This allows something like example.com/deep-link to still run your JS application, letting it handle the route explicitly.
Build that, and then run it, specifying where to find the files:
```
go build serve.go
./serve -entry=~/gorilla-vue/index.html -static=~/gorilla-vue/dist/
```
You can see an example app live here
That's it! It's pretty simple to get this up and running, and there are already a few "next steps" we could take: some useful caching middleware for setting Cache-Control headers when serving our static content or index.html, or using Go's html/template package to render the initial index.html (adding a CSRF meta tag, injecting hashed asset URLs).
If something is non-obvious and/or you get stuck, reach out via Twitter.
A major new release of the mgo MongoDB driver for Go is out, including the following enhancements and fixes:
Introduces the new bson.Decimal128 type, upcoming in MongoDB 3.4 and already available in the 3.3 development releases.
Extended JSON support
Introduces support for marshaling and unmarshaling MongoDB’s extended JSON specification, which extends the syntax of JSON to include support for other BSON types and also allows for extensions such as unquoted map keys and trailing commas.
The new functionality is available via the bson.MarshalJSON and bson.UnmarshalJSON functions.
New Iter.Done method
The new Iter.Done method allows querying whether an iterator is completely done or there’s some likelihood of more items being returned on the next call to Iter.Next.
Feature implemented by Evan Broder.
Retry on Upsert key-dup errors
Curiously, as documented, the server can actually report a key-conflict error on upserts. The driver will now retry a number of times in these situations.
Fix submitted by Christian Muirhead.
Switched test suite to daemontools
Support for supervisord has been removed and replaced by daemontools, as the latter is easier to support across environments.
Travis CI support
All pull requests and master branch changes are now tested against several server releases.
Initial collation support in indexes
Support for collation is being widely introduced in the 3.4 release, with experimental support already visible in 3.3.
This release introduces the Index.Collation field, which may be set to a mgo.Collation value.
Removed unnecessary unmarshal when running commands
Code which marshaled the command result for debugging purposes was being run out of debug mode. This has been fixed.
Reported by John Morales.
Fixed Secondary mode over mongos
Secondary mode wasn’t behaving properly when connecting to the cluster over a mongos. This has been fixed.
Reported by Gabriel Russell.
Fixed VersionAtLeast comparison
The VersionAtLeast comparison was broken when comparing certain strings. The logic was fixed and properly tested.
Reported by John Morales.
Fixed unmarshaling of ,inline structs on 1.6
Go 1.6 changed the behavior on unexported anonymous structs.
Livio Soares submitted a fix addressing that.
Fixed Apply on results containing errmsg
The Apply method was confusing a resulting object containing an errmsg field as an actual error.
Reported by Moshe Revah.
Several contributions on documentation improvements and fixes.
I found this gem from 2006 that explains how the Linux kernel interacts with memory barriers. Amazingly thorough!
The big Windows 10 patch starts today. It includes many different things but here are the highlights ...
One year ago, I already covered the impact that
I think almost every one of us working in the IR/Threat Intel area has faced this question at lea ...
About a year ago I received RTF samples that I could not analyze with