Archive for the ‘Computer Security’ Category

Rap News on Surveillance

Posted on November 16th, 2012 in Computer Security, Entertainment, Politics and Law | No Comments »

I think Bruce Schneier said it best: “Wow.” There’s not much else to say about something like this.

Phone-based Microsoft Scam

Posted on January 2nd, 2012 in Computer Security, Technology | No Comments »

A phishing scammer called me this afternoon. He spoke with a strong Indian accent and said he was from “Microsoft Technical Division.” He told me that my computer sent them error reports indicating that it was infected with a virus. They wanted to help me remove the virus.

I was rather stunned since the last time I used a Microsoft operating system by choice was 1999. Still, I wanted to see where this went, so I asked them what I should do.

They wanted to start by verifying that I had the virus, which involved looking for warnings in some part of the Control Panel. I just agreed with the prompts the man on the other end of the phone gave me. I'm still familiar enough with Windows to understand what sorts of screens he was walking me through, but since I'm not an active Windows user, I wasn't able to learn much about their procedure here.

Eventually, they wanted me to visit www.teamviewer.com, which I will not link to here. This allowed me to determine that they were, as I suspected, scammers. You can read more about a previous version of the scam on Microsoft’s website.

At this point, I knew I wouldn't be able to string them along any further. I told them I wasn't comfortable with the fact that they had my phone number, and I wanted to call them back to ensure that they were a legitimate operation. The man on the other end of the phone didn't bat an eye at this. He immediately gave me a phone number, which I Googled on the spot. The number had been used in previous scams.

I thought I would write about this experience for two reasons. First, it's worth knowing that a scam like this could happen to less tech-savvy folks. Second, this is a data point in a trend of phishing attacks becoming more personalized. I expect to see more attacks like this, not fewer.

Book: The Friar and The Cipher

Posted on January 2nd, 2012 in Books, Computer Security | No Comments »

I picked up a copy of The Friar and The Cipher by Lawrence and Nancy Goldstone from a used book store called Recycled Books in Denton, TX over Thanksgiving weekend. Sometimes, when I'm in a book store with that much personality, I'll find a book that fascinates me, pick it up, and read it all rather quickly. The last time I did that was with Surveillance. I enjoyed finding both of those books in their respective used book stores, and reading something unscheduled and totally off my radar is something I plan to do again.

I finished reading The Friar and The Cipher in less than a week, but it's taken me quite some time to write the review of it. I was interested in the book because I love cryptography, and everything on the dust jacket indicated that it was roughly half about the Voynich manuscript. The Voynich manuscript is one of the most interesting puzzles in cryptography and linguistics. It's a 240-page book written in the early 15th century, and its contents remain a complete mystery. The script is not Latin, Arabic, or any other recognizable alphabet, but the arrangement and frequency of the characters appear to share many of the statistical characteristics of natural languages. Deciphering it would almost certainly become a worldwide story regardless of what it actually says.
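That claim about character frequencies is testable. One standard cryptanalytic statistic is the index of coincidence: the probability that two randomly chosen characters of a text match. Here's a minimal sketch in Python (the sample string is just illustrative; Voynich researchers run this kind of analysis on transliterations of the manuscript's glyphs):

    from collections import Counter

    def index_of_coincidence(text):
        """Probability that two randomly chosen characters of the text match.
        English prose scores roughly 0.066; uniformly random text over a
        26-letter alphabet scores about 0.038."""
        letters = [c for c in text.lower() if c.isalpha()]
        n = len(letters)
        counts = Counter(letters)
        return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

    sample = "the arrangement and frequency of the characters appear to share many of the statistical characteristics of natural languages"
    print(index_of_coincidence(sample))  # well above the ~0.038 of random text

Measurements like this are part of why most researchers believe the manuscript encodes something language-like rather than random gibberish.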

However, having read the book, saying that The Friar and The Cipher is about the Voynich manuscript feels extremely misleading. Most of the book is about Roger Bacon, whom the authors believe to be the author of the Voynich manuscript, and about the history of science and the Catholic church. Although the book is extremely readable, chapter after chapter about Roger Bacon, Thomas Aquinas, and the early debate between science and religion was not what I expected. Luckily, I find those topics interesting as well. In fact, I've read quite a bit about the interplay of science and religion. If I didn't have that background, I would have felt rather cheated.

Frankly, this book didn't satisfy my desire to learn more about the Voynich manuscript, and I would only recommend it as a book about the manuscript because of the dearth of material covering that mystery. Even then, you might be better served by picking randomly from the further reading list on its Wikipedia page.

(Photo: "Bacon" by Lawrence OP, used under a Creative Commons license.)

The authors don't present the material as unbiased journalists; they emphatically present their opinions that Roger Bacon has yet to receive proper recognition for his work and that Bacon is the most likely author of the Voynich manuscript. As a result, the book reads as rather one-sided. They lay much of the blame for Bacon's obscurity on the Catholic church, which may be justified, but they also seem willing to take shots at Christianity in general. They take the position that science and religion are completely incompatible, as if there weren't even a debate about it. They also present broad statements about "the church" as fact even when they directly contradict other authors, without so much as mentioning the competing interpretations. (For examples, read the Amazon reviews; I won't repeat them here.)

Books that purport to explore a mystery should at least attempt to be unbiased. I wouldn't recommend The Friar and The Cipher to anyone who wasn't already familiar with both cryptography and the debate between science and religion: if you don't feel comfortable discussing those topics with knowledgeable folks, you will find it hard to separate fact from opinion while reading it. Still, the book was not irredeemably bad. I learned a little more about church history, it sparked my interest in reading more balanced accounts of that time period, and it was a quick read. There probably aren't many people who would be interested in The Friar and The Cipher, but if you count yourself among them, it's worth a look.

Movie: Enigma

Posted on May 2nd, 2010 in Computer Security, Entertainment, Movies | No Comments »

It's probably not a stretch to imagine that few people are interested in watching a movie about cryptography. Cryptography isn't exactly a sexy topic in pop culture, which is somewhat ironic given the huge number of movies set in World War 2 and the incredible importance of cryptography during that war. Enigma is an attempt at a cryptography movie set in World War 2, and I had the opportunity to watch it this past weekend.

Enigma is a mixed bag of incredible accuracy and complete fantasy. Perhaps the best example of this is the setting. Although Enigma is set at Bletchley Park, where the Enigma cipher was actually broken during the war, the film doesn't even use the actual Bletchley Park mansion. Almost the entire plot is fictional, but the details about the Enigma machines themselves are extremely accurate. There are a couple of scenes that explain the purpose, construction, and use of the Enigma machines in a concise and easily understandable manner. If you've ever tried to do this, you know it isn't easy. The depictions of the huts, the bombe, and other elements of Bletchley Park were similarly well done. For example, the windows were shuttered at night to ensure that Bletchley Park wasn't visible to the Germans from the air, and the movie uses this detail during an excellent depiction of an all-nighter.
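If you're curious how the machine's core trick works, it fits in a few lines of Python. This is a toy single-rotor sketch using the historical rotor I and reflector B wirings; a real Enigma added two more rotors, ring settings, and a plugboard, but the final assert shows the property the movie's plot depends on: the same settings both encrypt and decrypt.

    import string

    A = string.ascii_uppercase
    ROTOR_I   = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # historical Enigma rotor I wiring
    REFLECT_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # historical reflector B (a fixed pairing)

    def enigma(text, offset=0):
        """Toy one-rotor Enigma: the rotor steps before each letter, the signal
        passes forward through the rotor, bounces off the reflector, and
        returns through the rotor's inverse wiring."""
        out = []
        for ch in text.upper():
            if ch not in A:
                continue
            offset = (offset + 1) % 26
            # forward through the rotor, shifted by the current offset
            c = ROTOR_I[(A.index(ch) + offset) % 26]
            c = A[(A.index(c) - offset) % 26]
            # the reflector swaps letters in fixed pairs (never to themselves)
            c = REFLECT_B[A.index(c)]
            # backward through the rotor's inverse wiring
            c = A[ROTOR_I.index(A[(A.index(c) + offset) % 26])]
            c = A[(A.index(c) - offset) % 26]
            out.append(c)
        return "".join(out)

    ciphertext = enigma("ATTACKATDAWN")
    assert enigma(ciphertext) == "ATTACKATDAWN"  # same settings decrypt

The reflector is also the machine's great weakness: no letter can ever encrypt to itself, and the bombes at Bletchley exploited exactly that.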

The main character, Tom Jericho, is loosely based on a real person, Alan Turing. In the movie, Tom's life is turned upside down by a romantic relationship with a woman named Claire, who disappears without a trace. Tom spends most of the movie trying to figure out what happened to Claire, and he ends up falling in love with his primary collaborator in the search. I hesitate to give away much more than that because the movie is primarily a mystery involving spies, codes, and secret love. It's a fun little movie if you like guessing at what really happened.

Tom's real-life counterpart, Alan Turing, proposed to Joan Clarke while at Bletchley Park despite being a homosexual. Turing was also famously eccentric. For example, he was an avid runner, but he would run anywhere he needed to be regardless of the distance or circumstances; he even ran the 40 miles to London from time to time. As you might imagine, this didn't always make a socially pleasant impression. Turing spent the rest of his (short) life struggling with society's treatment of his homosexuality. He committed suicide at age 41 in a strange and ambiguous fashion that might best be explored on film.

Although I really enjoyed Enigma, it mostly left me thinking that someone desperately needs to make a movie about what really happened at Bletchley Park. So much has been forgotten or overlooked simply because it is so hard to describe to folks who aren't mathematically inclined. There's a common cliché in science fiction films where a geeky guy does some indescribably hard fictional math and saves the day. This usually occupies a tiny slice of the film in between insane computer-generated fight scenes. This cliché applies to World War 2 films with two important exceptions. First, the cryptographers at Bletchley Park actually solved some incredibly hard problems. Like, it really, truly happened, and it actually saved the day. Second, World War 2 movies usually don't even give Bletchley Park that tiny slice of the film in between the computer-generated fight scenes! As a result, pop culture is ignorant of one of the greatest stories of World War 2.

In short, if you’re interested in a fun, fictional mystery or if you enjoy cryptography at all, then Enigma is for you. However, if you’re looking for a movie that will accurately represent the reality of Bletchley Park’s role in World War 2, then you’re going to have to make it yourself. I recommend starting with some sort of combination of A Beautiful Mind and Saving Private Ryan.

Ten Plus Systems

Posted on December 20th, 2009 in Computer Security, Education, Technology | No Comments »

On Friday, December 11th, my MacBook Pro stopped working properly. I couldn't get video regardless of what I did. I took it to the Apple store the next day, where I learned that my graphics logic board was a victim of the infamous NVIDIA recall. I was told that it would take up to 10 days to get it repaired. Just as I was starting to recover from the shock of facing 10 full days without my computer, the Apple employee who examined my laptop said they would need my username and password to complete the repairs.

There is no valid reason Apple needs a username and password to repair a graphics logic board. This is a basic principle of computer security: Do not give anyone your username and password. I asked why they wanted it, and I was told that they needed to be able to log into the machine to verify that it works. This is simply false, and I'm disappointed that Apple would claim it was true. Graphics can be tested in a variety of ways without an existing username and password. First, they could have used the guest account on the machine. Second, they could have booted into an operating system on a CD/DVD such as Knoppix. Third, they could have used a bootable USB drive. Fourth, they could have booted from an external hard drive. These options are even documented on Apple's website. Needless to say, I refused to give them my username and password, and they refused to send the computer off to be fixed. I asked if there was anywhere else I could get it fixed, and to their credit, the Apple store employees were prepared to recommend Ten Plus Systems.

I knew almost immediately after walking into their store that Ten Plus Systems was a quality computer repair shop. First, I saw one of the technicians talking with the receptionist about a repair. They were clearly organized, and my gut told me immediately that the technician was a genuine computer geek. Second, they were selling an original, fully restored 1984 Macintosh. It was absolutely beautiful. It looked almost new, and a great deal of care clearly went into restoring this machine. I strongly believe that people who are experts in their field have an intuitive sense that allows them to identify other experts rapidly. (Read Blink by Malcolm Gladwell if you are interested in exploring this concept.) As a computer science PhD student who has built at least a dozen computers from parts, I consider myself an expert in this field. I could tell this store was run by experts.

I arrived Monday morning, and my computer was fixed 26 hours later. That's basically a one-day turnaround on a repair that Apple said would probably take 10 days. They didn't need my username or password. They didn't even ask. Ten Plus Systems is an Apple-certified repair store, which means that any machine covered by AppleCare can be repaired there. They also repair Apple and PC machines not covered by AppleCare, and they recycle old computer parts for their customers. If you are near Raleigh and need computer repair work done, I strongly recommend Ten Plus Systems based on my experiences with them.

Disclosure #1: According to the relatively new FTC rules for bloggers, I should disclose my connection with the companies I’m endorsing. I haven’t been paid for this post. I haven’t been given any gift of any kind for this post. I haven’t had an out-of-body experience in which I was in any way compensated for this post. (At least, not yet…) I’m just a genuinely satisfied customer.

Disclosure #2: I agree with Adam Thierer: the relatively new FTC rules for bloggers are almost completely unenforceable.

Hiring Felons to do Computer Security?

Posted on October 13th, 2009 in Computer Security, Movies, Television | 3 Comments »

Last week Bruce Schneier commented on a story about a prison that let an inmate convicted of credit card fraud reprogram a prison computer. Schneier believes this sort of thing should be an "obvious" no-no, and I agree. However, it isn't obvious to a lot of intelligent and well-intentioned people. In fact, there has been consistent debate over whether criminals should be hired for computer security positions. Some people fervently believe the myth that being an excellent criminal carries over into being an excellent law enforcement officer or security adviser.

Unfortunately, pop culture continues to prop this myth up with TV shows like the USA Network's upcoming White Collar, about an FBI agent who teams up with his nemesis-turned-good-guy to solve crimes no one else can. Showtime's Dexter similarly portrays a forensics expert who secretly murders the criminals he finds through his work. Both shows operate on the premise that experience committing crimes is useful in preventing them.

In reality, committing crimes and preventing them are fundamentally different activities, not because of the skill sets but because of the motivations and interests involved. The skill sets may in fact be strikingly similar: some pirates are excellent sailors, some outlaws can shoot extremely well, and some hackers know a lot about computers. Don't focus on asking whether the skill sets overlap. Instead, focus on questions like these: Are they dependable? Can they work well with other people in your particular work environment? How do you know they are actually interested in helping your organization? How do you know they are truly reformed?

After focusing on these questions, the truth comes to light: it is very rare that an excellent criminal history translates to an excellent crime-prevention future. There is a reason that police departments do a criminal background check before hiring someone. There is a reason that day care providers don’t hire convicted child molesters. There is a reason that banks don’t hire convicted felons to do security. Why wouldn’t the same rationale carry over to information or computer-based crimes?

Now, there are instances of convicts making amends and turning their lives around. Frank Abagnale is perhaps the most famous of these reformed con men, and Hollywood capitalized on his story with the highly successful movie Catch Me If You Can. Several people I know who have heard him speak at security conferences tell me that he still opens his talks with an apology for his crimes, decades after they occurred. In fact, he may be a good model of how to lead a life of contrite contribution to law enforcement after being an extremely skilled criminal: he worked long and hard to earn the trust of banks and the FBI, was initially paid only for positive results, and used the money he earned as a security consultant to pay back his debts.

Still, as a general rule, it should be obvious that hiring anyone convicted of computer fraud to do computer security work is a bad idea. Why take the risk? There are a lot of extraordinarily talented computer security experts who do not have the baggage of a criminal record. If you find, after searching for a non-felon, that you need the particular skills or expertise of a convicted computer fraudster, then don’t put them in a position of power. Don’t trust them without oversight. Don’t get caught up in the Hollywood story. The Frank Abagnales of the world are exceedingly rare; hiring a felon to do computer security almost never ends well.

Using the Tools We Have

Posted on June 26th, 2009 in Computer Security, Technology | No Comments »

Recent cryptography news serves as a microcosm of the development of computer security technologies. The discovery of fully homomorphic encryption by Craig Gentry, a Stanford PhD student working at IBM this summer, is by far the biggest headline in cryptography theory this week, month, year, and (probably) decade. Essentially, fully homomorphic encryption allows arbitrary computations to be performed on encrypted data while preserving the encryption. For example, a spam filter could identify encrypted emails containing spam without reading them, or an audit logging system could append an entry to an encrypted log file without decrypting and re-encrypting it.

Now, nothing is perfect right out of the gate, and there are caveats to this discovery. For the scheme to work, one must know in advance the maximum number of computations that can be performed on an encrypted file. It’s not practical; the discovery shows only that it is possible. Last but not least, we’ve already developed schemes that allow some limited operations, such as search, on encrypted data. These have been around for years, and some have even been reported on technical news sites. But even taking these concerns into account, the discovery is legitimately headline news.
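To make "computing on encrypted data" concrete, here's a toy version of one of those limited schemes: Paillier encryption, which is additively homomorphic, so multiplying two ciphertexts adds the underlying plaintexts. This sketch uses tiny hard-coded primes for illustration only; it is completely insecure, and it is emphatically not Gentry's construction.

    import random
    from math import gcd

    # Toy Paillier keypair from tiny fixed primes (real keys use large random primes).
    p, q = 293, 433
    n = p * q
    n_sq = n * n
    g = n + 1                                      # standard choice of generator
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n_sq)), -1, n)          # modular inverse (Python 3.8+)

    def encrypt(m):
        r = random.randrange(2, n)
        while gcd(r, n) != 1:                      # r must be coprime to n
            r = random.randrange(2, n)
        return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

    def decrypt(c):
        return (L(pow(c, lam, n_sq)) * mu) % n

    c1, c2 = encrypt(42), encrypt(58)
    c_sum = (c1 * c2) % n_sq        # multiply the ciphertexts...
    assert decrypt(c_sum) == 100    # ...and the plaintexts are added

A fully homomorphic scheme generalizes this idea from addition alone to arbitrary computation, which is what makes Gentry's result such a big deal.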

The media loves to report juicy computer security stories, particularly relating to the discovery of new cryptographic techniques. Unfortunately, these headlines distract from the primary concern of the average computer security professional: We are just not using the tools we have! Consider last summer when a flaw in the DNS protocol became huge news. It was a problem that could have been completely avoided using existing cryptography. We just weren’t using it. In fact, despite Dan Kaminsky’s recent efforts, we still aren’t using it. Here’s a great quote from Dan:

DNS is the world's largest PKI without the 'K.' All DNSSEC does is add keys.

Why haven't we "added the 'K'" yet? DNSSEC has been sitting in a drawer, and even after last summer, it doesn't appear to be a priority. It was designed with security in mind from the start; it is real, it is practical, and it can be deployed without another breakthrough in cryptography. Yet we aren't using it. (Checking whether a zone is signed takes only a few lines of code; see the sketch after the list below.) And this has been the pattern of cryptography technologies for the last few decades:

  1. Some smart people create something like public key encryption and/or fight against ludicrous export controls on cryptography tools.
  2. The story becomes headline news for a day or two, and we all walk around feeling great about how we ‘solved’ the security problem and we’re all going to be ‘safe’ soon.
  3. A few weeks pass and we find that no one is actually using the inventions that were just created and/or saved from oppressive regulation.
  4. Eventually, we start all over from Step 1 with a new miracle discovery in computer security. That’s what happened this week.
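Returning to DNSSEC: here's the quick check promised above, a sketch using the third-party dnspython library (the zone and resolver address are arbitrary examples):

    import dns.message
    import dns.query
    import dns.rdatatype

    # Ask a resolver for an A record, requesting DNSSEC data (the DO bit).
    query = dns.message.make_query("example.com", dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    # A signed zone returns RRSIG records alongside the answer.
    signed = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in response.answer)
    print("RRSIG records present:", signed)

Run this against most zones and you'll see the gap between what DNSSEC offers and what is actually deployed.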

Consider email encryption. Gmail (and most other webmail providers) still doesn't support GPG. Gmail also doesn't use persistent SSL connections by default, which means that your emails are delivered to your web browser in plain text even though there's a cheap and effective form of encryption that could easily be enabled. This was old news when I blogged about it here nearly two years ago, yet Google is only now "looking into whether it would make sense," perhaps because of a letter organized earlier this month by Chris Soghoian and signed by numerous computer security experts.
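The webmail default is just as easy to check: fetch the plain-HTTP address and see whether you end up on HTTPS. A quick standard-library sketch (the URL is just an example):

    import urllib.request

    # Follow any redirects from the plain-HTTP address and inspect the final URL.
    response = urllib.request.urlopen("http://mail.google.com/", timeout=10)
    print("Served over HTTPS:", response.geturl().startswith("https://"))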

I’m not saying that fully homomorphic encryption isn’t important, or that solving this longstanding, open academic question isn’t an achievement. It is important, exciting, and a huge achievement. All I’m saying is that fully homomorphic encryption, or any security technology, won’t solve computer security and privacy problems unless we start using the tools we have.

Edited to add: Here's a nice piece by Brian Krebs that talks more about the letter sent to Google about encrypting by default. In particular, I love this quote:

“What we’re saying in this letter is that as an iconic service, and one that professes to be concerned about user safety, Google could set a good example and set the right defaults, and if users want to switch back to something less secure, then they can.”

Dr. Eugene Spafford

(Full Disclosure: I am working with Dr. Spafford this summer at CERIAS on campus at Purdue University.)

The Twitter Monoculture

Posted on January 19th, 2009 in Computer Security, Technology | 3 Comments »

Currently, Twitter is the Internet's dominant micro-blogging service. It has shown that micro-blogging is a distinctly different form of communication deserving of its own niche, and it has done so well with its own service that micro-blogging itself is perhaps better known as Twittering.

Of course, there is one small problem. Twitter is a closed platform. As Tim Bray put it:

The basic problem is that Twitter is centralized; that’s not how the Internet works.

A quick look at history tells us that open communication protocols win in the long run. When you call someone on the phone, you aren't limited to people using the same telephone service provider. When you email someone, you aren't limited to people who are using the same Internet service provider. Even actual blogging has standardized norms (RSS and Atom) that allow people using Blogger, WordPress, LiveJournal, or any other blogging platform to easily follow blogs on other platforms. (Though cross-blog commenting is still a bit of a problem.)

Although I could talk about the Network Effect or Metcalfe’s Law, for the purposes of this post, I will focus on the key security design problem facing Twitter. This is not to say that the Network Effect and Metcalfe’s Law aren’t important. They are. I’m just talking about another, unrelated reason that supports the need for diversity in the micro-blogging industry.

A recent incident is a perfect example of the real problems caused by a centralized service like Twitter. An attacker was able to hijack several high-profile Twitter feeds, including Barack Obama's campaign feed and the official Fox News feed. How did this happen? It turns out that a security design flaw on the Twitter site allowed unlimited, rapid login attempts, which let an attacker run a dictionary attack against the account of a member of Twitter's support staff. Once that password was guessed, the attacker had access to any feed in all of Twitter-dom.
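The fix for that particular flaw is old and boring: throttle login attempts. Even an in-memory sketch like the one below makes a dictionary attack impractical (a real service would persist the counters and add lockouts or CAPTCHAs):

    import time
    from collections import defaultdict

    WINDOW_SECONDS = 300    # look-back window for failed attempts
    MAX_FAILURES = 5        # failures allowed per account per window

    _failures = defaultdict(list)   # username -> timestamps of recent failures

    def login_allowed(username):
        """Refuse further attempts once an account has too many recent failures."""
        now = time.time()
        _failures[username] = [t for t in _failures[username]
                               if now - t < WINDOW_SECONDS]
        return len(_failures[username]) < MAX_FAILURES

    def record_failed_login(username):
        _failures[username].append(time.time())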

The key security flaw in any centralized protocol is that such protocols are monocultures. Bananas are a great example of the danger of monocultures. Bananas are an extremely important crop worldwide, but the vast majority of bananas grown are of the Cavendish variety. Why? Because the tastier Gros Michel bananas were wiped out by a disease. They were all essentially genetically identical. There was almost no diversity in the banana ecosystem. As a result, they were unable to adapt to the disease, and since the same problem exists with the Cavendish, we’re still one bad disease away from a worldwide shortage of bananas.

The same problem exists for micro-blogging. If you want to micro-blog, you effectively need a Twitter account. Twitter is so dominant that almost all micro-bloggers use it, which makes it a monoculture, and that means micro-blogging itself is one bad security incident away from obliteration. Likewise, if Twitter were to go belly up (which is not, as Tim Bray discussed, outside the realm of possibility for an Internet-based company), then the entire micro-blogging industry would effectively be eliminated.

At this point you might say, “Wait! Twitter has an open API!” This is not the same as open source, and it does not eliminate the threats posed by monocultures. It does mean that it is very easy to add functionality to the Twitter protocol, but it does not mean that you can participate freely without a Twitter account.

Micro-blogging needs a viable open source alternative to create a federated micro-blogging protocol. Tim Bray proffered Laconica, and one of the commenters in his thread mentioned the soon-to-be open source Jaiku, which was recently shut down by Google.

Whatever happens, a federated micro-blogging protocol would be far more robust than the current Twitter monoculture. If I were to add a single gutsy prediction to the list over at Freedom to Tinker, it would be that a major security incident at Twitter allows an open source alternative to gain a foothold in micro-blogging. It may not happen this year, but I think it’s inevitable with any monoculture.

Reports on Electronic Voting

Posted on November 6th, 2008 in Computer Security, Politics and Law, Technology | No Comments »

As a technologist with a strong interest in computer security, privacy, and public policy, I am naturally drawn to the topic of electronic voting. I have written about electronic voting several times before, including this piece on Ed Felten's work. Recently, I have seen lists of things that could have gone wrong and some lists of things that actually did go wrong. I have even seen a hilarious account of the worst case scenario, but the most interesting accounts I've seen have been the personal accounts of computer science professors who volunteered to operate the polls as election workers.

Avi Rubin, a Professor of Computer Science at Johns Hopkins and director of the ACCURATE voting center, wrote a post describing his experience working the polls and posted it only minutes before most news outlets announced that Barack Obama would be the 44th President of the United States. Professor Rubin is the author of Brave New Ballot, an excellent book on the dangers of electronic voting machines that I have reviewed here. His account of working the polls in Maryland describes the very practical and non-technical aspects of just what a poll worker does during the day.

Steven Bellovin, a Professor of Computer Science at Columbia, also wrote about his experience as an election official. Professor Bellovin is another well-respected authority on computer security, and his post focuses on the non-technical details of the responsibilities of poll workers in New Jersey. Andrew Appel, a Professor of Computer Science at Princeton, wrote about the use of voting machines in New Jersey as well.

Both New Jersey and Maryland used direct-recording electronic (DRE) voting machines, which have a myriad of security concerns that have been detailed extensively elsewhere. Essentially, DREs store the official record of an election in electronic form rather than on paper. If you are interested in some of the problems with DREs and proposed solutions to those problems, check out the USACM's page on electronic voting.

You may be asking yourself: Why would a computer science professor volunteer to work a poll as an election official? It's not like there's anything technical going on there. Well, any computer security expert will tell you that the first line of defense must be controlling physical access. You can have all the technology you want, all the cryptography you want, and spend all the money you have and still not be secure without common sense. There was a great video on no-tech hacking at DefCon in 2007 that covers what I'm talking about.

Physical access is one of the key problems with DREs: thousands of people must have physical access to the machines themselves to cast their votes. The environment is filled with opportunities for absolutely simple no-tech hacking. Even if these systems weren't notoriously bad in terms of the technology used, the physical access alone would make them difficult to secure.

The challenges of physical access and the stakes of a Presidential election are both great reasons that computer science professors are interested. It’s a unique opportunity to see how these machines are actually used, and some of their observations are excellent. Their posts are worth reading if you’re interested in electronic voting or computer security: Avi Rubin’s post; Steven Bellovin’s post.

ABC News Exclusive: Inside Account of U.S. Eavesdropping on Americans

Posted on October 9th, 2008 in Computer Security, Life, Politics and Law, Technology | No Comments »

ABC News has an article on eavesdropping on Americans that answers any remaining questions regarding the FISA Amendments passed this past summer. Essentially, the article details the use of surveillance systems to spy on ordinary Americans. Here's a quote from the article:

“These were just really everyday, average, ordinary Americans who happened to be in the Middle East, in our area of intercept and happened to be making these phone calls on satellite phones,” said Adrienne Kinne, a 31-year old US Army Reserves Arab linguist assigned to a special military program at the NSA’s Back Hall at Fort Gordon from November 2001 to 2003.

Kinne described the contents of the calls as “personal, private things with Americans who are not in any way, shape or form associated with anything to do with terrorism.”

The article goes on to describe the nature of some of the phone calls as pillow talk or phone sex. Some of the individuals involved were from the US military, the International Red Cross, and Doctors Without Borders. Naturally, the Senate is investigating. The article further states that some especially juicy clips were saved by employees of the NSA.

Unfortunately, abuse of surveillance systems by insiders is nothing new. Bruce Schneier has shown us that surveillance cameras are abused and ineffective. Six well-known security and privacy researchers have warned about this sort of abuse with telephone surveillance as well (pdf).

The only thing that is remotely surprising about this is that we have specific details from whistleblowers, who are risking their careers and livelihoods to tell us about this abuse. Given that the agency involved was the notoriously secretive NSA, it is even more surprising that not one but two independent whistleblowers came forward.

GCHQ, the British equivalent of the NSA, recently dealt with its own whistleblower: Katharine Gun. Gun was a GCHQ translator who leaked a memo requesting help with surveillance of UN Security Council delegations in the run-up to the Iraq war. Her case was dropped almost immediately at trial, speculatively because the government calculated that producing the evidence required to prosecute her would have been more embarrassing for GCHQ than simply letting her go.

Many whistleblowers find betraying their employer for the greater good an excruciating ethical dilemma. Check out this BBC News interview with Katharine Gun if you are interested in how she weighed the decision. (There's a book about her if you are more ambitious.) For these reasons and many more, whistleblowers like Mark Klein in the AT&T case that prompted the FISA Amendments, and now David Murfee Faulk and Adrienne Kinne in this more recent NSA case, shouldn't be our last line of defense.

Essentially, the lesson from this ABC News article is simple: surveillance tools will be abused. It is human nature for power to corrupt. The Founding Fathers of the United States recognized this and tried to limit the power of the government explicitly for this reason. They built checks and balances into our government because they knew that hoping for whistleblowers to highlight problems was not reliable. Why does the current US government not seem to comprehend this? How many more whistleblowers and ABC News stories will it take for our government to catch on?