Mistakes happen – Learn from them


I’ve had this blog post brewing for some time, but in light of Harlan Carvey’s “Uncertainty” post and Christa Miller’s “Uncertainty” book review, I felt it appropriate to release it.

Given the recent press coverage of several “mistakes,” I wanted to write a quick post about learning from them. But first, perception and reality deserve some discussion. You will see a lot of “shock media” when incidents relating to technology are presented to the general public. Sometimes the perception is that there was a major problem when, in reality, the damage was minimal. However, perception is (almost) everything. Perception is what causes a company’s stock to drop and its customers to go elsewhere. We must always keep an eye on perception.

It’s critical to have someone seasoned and experienced at the helm of an investigation. The flow of information, and the vetting of that intelligence, are equally important.

In the water pump “hack,” someone reviewed logs from a SCADA system and saw a Russian IP address that had authenticated with valid credentials five months prior to the water pump’s failure. Someone then wrongly linked that event to the pump’s ultimate failure.

Jim Mimlitz with his family in Russia – he connected to the SCADA system while on vacation (photo from Wired.com)
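To make the vetting point concrete, here is a minimal, hypothetical sketch of the kind of sanity check that treats a months-old remote login as a lead to verify rather than as the cause of a failure. Everything in it (the log fields, timestamps, and the seven-day threshold) is an assumption for illustration, not anything from the actual investigation.

```python
from datetime import datetime, timedelta

# Hypothetical SCADA authentication log entries (illustrative only).
auth_events = [
    {"timestamp": datetime(2011, 6, 1, 14, 3), "source_ip": "203.0.113.7",
     "country": "RU", "username": "contractor"},
]

pump_failure_time = datetime(2011, 11, 8, 9, 30)  # assumed failure time

# A login months before a failure is, at best, a lead to verify,
# never evidence of causation on its own.
MAX_PLAUSIBLE_GAP = timedelta(days=7)

def triage(events, failure_time):
    """Label each remote login as a lead needing verification,
    not as a confirmed cause of the incident."""
    leads = []
    for event in events:
        gap = failure_time - event["timestamp"]
        leads.append({
            "event": event,
            "gap_days": gap.days,
            "assessment": (
                "close in time - still requires verification"
                if gap <= MAX_PLAUSIBLE_GAP
                else "months-old login - verify with the account owner "
                     "before linking it to the failure"
            ),
        })
    return leads

for lead in triage(auth_events, pump_failure_time):
    print(lead["gap_days"], "days earlier:", lead["assessment"])
```

A five-minute phone call to the account owner accomplishes the same thing this sketch is hinting at: confirm the lead before it becomes a conclusion.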

In the Carrier IQ situation, Trevor Eckhart posted a YouTube video claiming that Carrier IQ was sniffing typed keys and HTTPS URLs on Android devices. A few days later, Dan Rosenberg conducted some interesting research and demonstrated Carrier IQ’s actual transmission capabilities, showing that only a finite amount of information could be transmitted back to the mobile phone carrier. The larger issue Trevor (and the media) raised was that phone carriers could read the content of messages, emails, etc. According to Rosenberg’s research, that just wasn’t the case. Another point worth mentioning is the insecurity of smartphone applications in general, but that will be a future blog post.

In the alleged CIA drone capture, we may never know the details, but it should be assumed something went wrong. Unless, of course, this is all a ruse by Iran to point fingers at the United States, but that’s a conversation for the conspiracy theorists. (For the record, President Obama did publicly ask for it back.)

Iran in possession of the CIA’s stealth RQ-170 Sentinel

The reality is, mistakes happen.

To circle back to the theme of this blog: perception outside of the digital forensics / incident response (DFIR) industry is blurred by things like the CSI effect. The reality is that people are in positions to make decisions that could impact the lives of a few or of many. Practitioners base their decisions on years of training and the situation presented to them. The variables are ever-changing, and you’re rarely presented with the same technical problem twice. Therein lies the issue: there isn’t a playbook for every situation. Given that, people are inevitably going to fail from time to time. One way to mitigate “mistakes” is to promote peer review and collaboration.

I really love to hold people accountable for their work. In an organization, we expect everyone to help out and contribute where they can, but at the end of the day, everyone has their own roles and responsibilities. At times, those accountable for particular activities fail. Sometimes the failure is reconcilable; other times it’s not. That said, people make mistakes, and from those mistakes we learn. What is most unforgivable, however, are the mistakes that occur over and over and over again. These situations typically require some sort of intervention.

Theoretically, in the alleged water pump hack, if the fusion center’s analyst had only a snippet of the information (that a login from Russia directly caused the failure) and made the decision to broadcast a national alert, which was ultimately leaked to the media, and that call turned out to be wrong, then shame on the system, not the person. That person was operating with the variables presented to them, and everyone in that investigation had a role to play. If the investigators reviewing the logs had called Jim Mimlitz to ask whether he was the person who logged into the SCADA system, the mistake could have been avoided. (Note: this is what DHS/FBI eventually did, but by then it was too late.)

We are not perfect. We are not Borg, nor Cylon (I had to introduce some nerd sci-fi reference); we are human.

We as an industry are only as strong as the weakest link in our process or investigation. One of the messages I’m trying to convey in this post is that we need to be held accountable for our mistakes, but also given an opportunity to learn and take corrective action. For example, every website or organization must assume that it will one day be attacked or compromised, and that doesn’t mean we should immediately terminate the employees managing those systems when it happens.
Holding a post-event debrief to discuss what went wrong and what can be done to prevent it from happening again is a far more productive approach.

It’s very easy to play “Monday morning quarterback,” live in the world of 20/20 hindsight, and point fingers. We are in an ever-evolving industry, with new threats and risks presented on a daily basis. We must be ready to learn and be prepared to fail. The healthy balance, of course, is trying not to fail as often 🙂