Greetings sports fans! (I really like this. Yeah, this is going to be a thing from now on.) Today I want to fill you in on one of the most asked questions in the field of computer security: "Who should I tell about my latest discovery?" There are a few possible answers to that question, most commonly (in order of size): nobody, the people involved, the people affected, the research community, everybody and, for completeness, TeH I/\/t3W3bzzz!!1!! It's not always clear what the real answer is, or even if there is a real answer, as we shall soon see.
So, let's start off with the case I am most familiar with, as it is what I do: theoretical constructive cryptography. Sounds fancy, don't it? Basically, what I do is look at existing schemes and try to make a better one, either by improving the extant scheme or by creating a new one. In this case it's obvious that what you have found should be shared with at least the research community, and maybe the whole world if it has any real-world applications/impacts/etc. The same goes for the implementation side of cryptography.
One would assume advances in constructions or protocols are somewhat non-threatening to the security of any other system. That is normally the case, if we consider only the security of a system. A better version of an extant protocol may pose a financial threat to any parties selling the aforementioned protocol, but it would not compromise it in any other way. The real difference is on "The Other Side of the Coin." (Heyooo!)
All silly self-referencing puns aside, what I am really referring to is cryptanalysis. These are the guys whose job it is to take cryptographic schemes and find ways to break them. They sound evil, right? Well, they aren't. The idea behind cryptanalysis is to find out which schemes can and cannot be broken using a variety of techniques. If a given scheme, or indeed a class of schemes, is broken, it gives cryptographers insight into what they should not do. You may think of cryptanalysts as safety inspectors.
Now, here's the problem. Consider this: I make a new and particularly bad crypto scheme, let's call it AVeryBadIdea or AVBI (C)(TM)(Pat. Pend.). I publish this scheme and I'm happy. A cryptanalyst has a look at it and breaks it completely within days of its publication. They publish the attack and life goes on. Number of people affected: 2. Doesn't sound like a problem? Well, consider the following scenario: I sell this very same cryptosystem to a couple of small-time businesses to secure their data, blah, blah. Now when the attack comes out, number of people affected: 2 + all the people who bought AVBI.
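(For the curious, here is a toy sketch of just how quickly a scheme like AVBI dies. Everything here is invented for illustration: the "scheme" XORs every byte of the message with a single secret key byte, and the "cryptanalysis" is nothing more than a loop over all 256 possible keys.)

```python
# A deliberately terrible toy cipher standing in for "AVBI" (entirely hypothetical).
def avbi_encrypt(key: int, message: bytes) -> bytes:
    # XOR every byte with the same one-byte key: please never do this for real.
    return bytes(b ^ key for b in message)

def avbi_decrypt(key: int, ciphertext: bytes) -> bytes:
    return bytes(b ^ key for b in ciphertext)  # XOR is its own inverse

def break_avbi(ciphertext: bytes):
    # The "cryptanalysis": try all 256 keys and keep any candidate that decodes
    # to printable ASCII. Days of work compressed into a few lines.
    candidates = []
    for key in range(256):
        guess = avbi_decrypt(key, ciphertext)
        if all(32 <= b < 127 for b in guess):
            candidates.append((key, guess))
    return candidates

if __name__ == "__main__":
    ct = avbi_encrypt(key=42, message=b"card number: 1234 5678")
    for key, plaintext in break_avbi(ct):
        print(key, plaintext)
```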
Let's take this a step further. What if AVBI is used for something important, say credit cards? Well, then if the system is broken, we have a problem. Now every credit card in existence is under threat of being used by malicious parties. Affected people: 2 + banks + credit institutions + everybody who has a credit card. Here the responsible thing to do is to tell the banks and credit institutions so they can try and find a remedy for it. The wrong thing to do is tell everybody else first.
Then you get into more complex issues. A large number of schemes have one "master secret." The gist of it is that if anybody knew this, they could do whatever they wanted and not be found out. Suppose AVBI is now an industry standard of some description or other. Somebody comes up with an attack that allows them to recover the master secret, and indeed they do. What do they do? Tell the industry governing body? Sounds like a good idea, right?
It is, if the concerned party or parties are not overtly hostile. The classic example of this is HDCP, as explained by Niels Ferguson. On the flip side you have the Stony Brook researchers who released the source code that allows you to do this. It's quite a grey area and I'm not sure there is a real right answer to it. There is a middle ground, which is publishing the idea of the attack but not releasing the implementation. I believe this is what has been done by my colleagues at the Ruhr University of Bochum with regard to their recent work on HDCP. However, this does also leave open the question: could someone develop a similar attack on their own? It's possible, but then consider that the master secret is already out there, so is it really a bigger threat?
There is scope for even more potential pitfalls and possible permutations of the present problem regarding all participating parties (that's a lot of p's), and the waters can get even murkier. Yes, there are sometimes clear-cut consequences of cryptographic and cryptanalytic creations (and a few c's), but not always. There is so much room for error and personal judgment, and it can be quite a burden trying to tackle such a dilemma. So in short, responsible disclosure can be an irresponsible thing to do.
Saturday, 26 November 2011
Sunday, 6 November 2011
BBM and Siri outages, a failure in more ways than you think.
Morning sports fans! Yes, I've missed you too, but I'm having a super perfectionist phase and none of my posts seem good enough to publish. This should all blow over and there will be quite a few posts some time in the future. So, let's wind the clock back a smidge and remember one of the biggest fails of the year: The Great BlackBerry Outage of 2011! (Yeah, I'm expecting more to come.)
So, cast your mind back to October 10th-ish when the first reports of a RIM server crash came in. Millions of people were left without access to BBM and some Internet services, such as Facebook. Ah, the many jokes we made that they didn't see. Well it quickly spread to North America and then other planets! (BONUS QUESTION: How many of these planets do you know?) It was somewhat fitting that BlackBerry users who were fairly vain about BBM had it ripped from them for a couple of days. It was a good thing.
Eventually, RIM apologised, and service and the status quo were restored. There was still the great debate of BlackBerry vs. iPhone (as explained here by Jimmy Carr and Sean Locke on 8 out of 10 Cats), but the iPhone users had a little chip on their shoulder that said "We never have service outages." This was compounded by the fact that the release of the iPhone 4S, and with it Siri, was imminent. Just to catch you up, Siri is the voice-activated personal assistant that comes with the iPhone 4S. (For further details see this.)
Anywho, Siri is now here and people are enjoying asking it silly questions, demonstrating which accents it can't understand and showing that it's only fully functional in the USA. What I was, until recently, unaware of is that Siri runs in the cloud. I have no love for cloud computing, but will ignore that at this juncture. A couple of days ago a failure caused Siri to be unable to connect to the Apple servers and thus not work. Wait, you mean Apple has service outages as well? *le gasp*! Well, of course they do! The reason is simple: they seem to have overlooked a very basic principle of computer security: critical infrastructure.
What is critical infrastructure, you ask? Good question! Critical infrastructure is an old-ish field which studies a setup and asks what it would take for that setup to stop working. The classical example is a very nice graph-theoretic problem, which is quite nicely demonstrated by the London Underground map. Assume this is your only means of transport. Pick any station and/or section of the map. The problem is: can you make a single cut and isolate that station/section from the rest of the map? There are variants, such as the minimum number of cuts needed to isolate a station/section, and also versions for other things such as electricity, water and gas supply. You get the gist of it all, right?
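For the code-inclined, here is a minimal sketch of that game played on a made-up mini network (the station names and connections are invented): the question asked is exactly the one above, namely whether a single cut can isolate a chosen station.

```python
from collections import deque

# A made-up mini "tube map": station -> set of directly connected stations.
NETWORK = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},          # E hangs off the network by a single link
}

def reachable(graph, start, cut):
    """Stations reachable from `start` when the edge `cut` is severed."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if {node, nxt} == set(cut):   # this edge has been cut
                continue
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def single_cut_isolates(graph, station):
    """Return an edge whose removal isolates `station`, if one exists."""
    edges = {frozenset((u, v)) for u in graph for v in graph[u]}
    for edge in edges:
        if reachable(graph, station, tuple(edge)) == {station}:
            return tuple(edge)
    return None

print(single_cut_isolates(NETWORK, "E"))  # the D-E link: one cut and E is stranded
print(single_cut_isolates(NETWORK, "B"))  # None: B needs more than one cut
```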
The same can be done for communication and telecommunication networks. This is normally done, but it can be a bit tricky. With wired communications, it's easy to draw up a graph-style map, with each wire as an edge and each device as a vertex. However, the same is not really true of wireless communications. To stop wired communication between points A and B, you need to sever the wire joining them. It's not as clear what the equivalent for wireless communication is. There is also the issue that unlike wired devices, which are immobile, wireless devices are by definition mobile.
So, now do we consider simply the connection between the devices or do we also have to consider the location? Can we only consider one or do we have to consider both? If I go into a lift and lose wireless connectivity is that a failure of the network or the device or both or neither? If you are thinking such distinctions are a moot point, then you are pretty much correct. Yes, it's not a major issue, but it should not be completely overlooked. There are a lot more examples of this, but that would mean delving into technicalities, which I would rather not do.
And then there is the issue of time. These things take time, quite often a lot of it. There are so many contingencies to consider, such as the classic: the CTO chokes on sushi, the rest of the department is killed in a meteor strike and the only other guy who knows the password gets retrograde amnesia. Yes, that is a tad far-fetched, and one should probably stop when retrograde amnesia is the most likely event in your scenario. The digital market thrives on speed. You need to get the next product out there 2 weeks before the previous one is launched.
So, as you can see, owing to several constraints, critical infrastructure analysis is possibly not done as well as it should be, which can cause these kinds of outages. On the other hand, you can do the most thorough analysis and the worst-case scenario may still occur, thus causing an outage. So basically it's all a roll of the dice, and remember: "God doesn't play dice!"
Sunday, 9 October 2011
Privacy? Is that a vegetable?
So, here we are opening this can of worms. Yeah, I know there are other stories going on, but I'm working on a couple of posts, which should surface sometime soon. OK, so let's talk about privacy on the Internet. It's the one thing you will hear over and over again: "There is no privacy on the Internet." Which is part of the truth, but not the whole truth.
This is normally the cry of the anti-social narwhal (not an actual meme, yet!) against social networks, but it is a smidge unfair. The main complaint people have is that all your information is out there and anybody can see it, and so on and so forth. Well, yes, because you put it out there. It's like complaining that your diary contains all these personal and embarrassing things about you. Yes, in this case the diary is actually owned by somebody else, but you knew what you were getting into. There really is no way around it, except, you know, not posting stuff like that on social networks.
Another issue regarding posting stuff is visibility. People seem to be unable to comprehend the very basic fact that stuff you post will be visible to other people. You can control who those people are, granted not always in the most obvious way. There always exists some mechanism to limit the visibility of your post. There are countless stories of students putting up statuses about teachers they had added, and employees doing the same with employers.
Well, let's say you've mastered all the above; there is still one small problem. The people you are sharing this content with may not be so discerning. This is especially true for "amusing" content, exemplified best by the sites Lamebook and Failbook. Both these sites allow users to post screenshots of posts on Facebook, or any other social network in the case of Failbook, that they found amusing. The best are then shared on these sites for consumption by the general public.
Even as I write this I can hear the anti-social narwhal (this should totally be a meme) bellowing in my ears "BUT WHAT ABOUT THE PRIVACY!!!" Well, these sites do apply some discretion and redact names and profile pictures so as to preserve the identity of the posters. This does not always work. The trouble is that most posts get submitted to both sites. The really good ones show up on both. And, well, if you mess up the redaction then it gets a bit hinky.
A perfect example of this is the following post on Lamebook and Failbook. Lamebook redacted the surnames and Failbook redacted the forenames. The end result is that you can find the original people and the original post. In all fairness, this post is public, so there is not really much of an issue of privacy at this point, but try telling that to a narwhal (don't ask, I'm just going with it now).
So, in conclusion children: be aware of what you post on the Internet, for there are no secrets. Also, always brush your teeth before going to bed.
Wednesday, 14 September 2011
Hackers = Mobsters? Redux
So, I earlier wrote a post about how they want to try hackers under organised crime laws. Well, I must admit, much to my chagrin, that I may have overlooked some details. Well, not so much details as scenarios and/or types of attackers. My previous post focused primarily on the "breaking and entering" breed of hacker, specifically the kind without any financial motivations. Therein lies my folly.
The attacker I described was the kind that will break a system, to quote the famed LulzSec group, "just for lulz," or with some form of activist agenda, a la Operation Payback. Here the attackers' main objective was to point out a weakness in a system, cripple a system as a form of protest, or simply to entertain themselves. Well, in any case, here the idea of organised crime does fall a tad flat, as explained previously.
Now, we move to something a colleague pointed out to me today. If we consider fiscally motivated crimes, then we begin to see the motivation for this kind of approach. Consider the case of identity theft via phishing, for argument's sake. Although this kind of attack can be carried out alone, there is essentially a mafia that controls large parts of this trade. It is very reminiscent of the classical mobsters, to the extent that there is considerable speculation that they are linked. Of course, I have no knowledge beyond the rumblings of their existence, but I am convinced.
Although there are other, and arguably more sophisticated, ways of committing digital identity fraud, they all have the same mafia-esque touch to them. Here, the idea of treating these in the same manner as organised crime is not a far-fetched idea at all. In fact, I believe it is the right idea.
So, in summary, this idea is not all bad and in fact is very good for certain classes of digital criminals, but not so much for others. Hopefully, the law all over will catch up to all the crazy types of security threats in our crazy world.
Monday, 12 September 2011
Hackers = Mobsters?
OK, so as promised: post number 2 of today (just to be pedantic, my today). So, I recently read this, in which President Obama said that he wants hackers to be treated, for the purposes of the law, in a manner similar to organised crime. Yes, people, that means mobsters, as in Tony Montana or Al Capone. That does make hackers sound so much cooler now that we are imagining them in pinstripe suits and not nerdy T-shirts, but we must question the validity of this.
My main objection to this is the term "organized", not only due to the fact that I prefer British spelling, but mainly because it's not always true. Yes, one could say that LulzSec is/was somewhat akin to the famed "Cosa Nostra," but they do indeed prove to be the exception to the rule. The next closest thing is Anonymous, but they are at best a loose collection of similar-ish minded individuals, who got together for one job and then disbanded. Of course, some members will carry out attacks in unison after that, but it would almost certainly not be the whole group again.
Furthermore, there is a somewhat implicit assumption of some form of hierarchy amongst hackers. There may well be "senior" and "junior" members of a group, and there may well be some people with more influence or more authority, but there is no real chain of command, so to speak. To the best of my knowledge there is no Godfather in hacker communities. So, here again the organised argument breaks down.
Of course, the previous is in the case where there is actually more than one person involved. It is neither impossible nor uncommon for a single hacker to mount attacks on a fairly large scale. Yes, I know that the article states that "complex and sophisticated electronic crimes are rarely perpetrated by a lone individual," but there have been reports of single attackers mounting somewhat complex attacks. Granted, they may have obtained resources from other individuals/groups, but they did mainly act alone. In the case of this single perpetrator, the term organised seems to be a dash irrelevant. I can imagine that the lawmen would be hard pressed to somehow fit such a scenario into these laws.
Furthermore, one would assume that the Racketeer Influenced and Corrupt Organizations (RICO) Act would be the basis for this new version of the Computer Fraud and Abuse Act (CFAA). I am not a legal expert, but I can imagine that this would be quite challenging. You see, these two classes of criminals live and operate in very different environments, thus making any sort of analogy difficult. However, this idea is not without merit.
Recently, this story emerged. An Australian blogger wrote about domain-name fraud and found himself in a spot of bother. He was, and still is from what I gather, being DDoSed. The thing is that the log files show the traffic coming from non-existent websites, which are actually death threats to him. One example is “http://lastwarning-shutdown-yourblog-or-die-withyourparklogic.com”. This does seem to be very much like old school mafia behaviour, which lends great credence to this new idea.
So, although it may not be, IMHO, the best way to go about it, it is not without its merits. As this develops further, it may even become an excellent law. Until then, all we can do is wait and see.
Sunday, 11 September 2011
(Distributed) Denial of Service attacks, intentional or otherwise.
So, I have been away for a bit, hence the lack of posting. To make up for that, there will be two posts today and at least one more this week. Right, let's get into it, shall we? Today's topic is (Distributed) Denial of Service attacks and how they can be inadvertently caused. So, first off, what exactly is a Denial of Service (DoS) and indeed a Distributed Denial of Service (DDoS) attack?
A Denial of Service (DoS) attack involves sending an excessive amount of data/requests/pings to a server with the aim of overloading it so that legitimate users cannot access it. Imagine the following scenario: there is an office with an information counter. Normally, people walk up to the counter, get the information they need and then leave. After this the next person does the same, and so on and so forth. A DoS would essentially be one person standing at the counter and asking so many questions that nobody else can get up to the counter.
A Distributed DoS (DDoS) is the same thing, except with one minor difference. In a standard DoS, there is only one attacker and one attacking system. In a DDoS, there may still be one attacker, but there are several systems involved in the attack. For all intents and purposes, plain DoS attacks really only exist in textbooks, so we will only consider DDoS attacks.
So, now that we know what DDoS attacks are, let's look at how they happen. The normal scenario is that our attacker(s) pick a target and then bombard them with requests. At a technical level, there are several ways to do this intelligently, but the simplest is just overwhelming the server with requests. I would rather not get into the details because, to be quite honest, I find them inane and boring. So, let's just say there are many ways of doing it.
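As a rough illustration of the "one person hogging the counter" idea, here is a toy simulation with invented numbers: a server that can answer a fixed number of requests per second, a trickle of legitimate users, and a flood of junk requests competing for the same slots.

```python
import random

def simulate(seconds=10, capacity=100, legit_per_sec=20, junk_per_sec=0):
    """Toy model: each second the server answers `capacity` requests,
    picked at random from everything that arrived that second."""
    served_legit = 0
    for _ in range(seconds):
        arrivals = ["legit"] * legit_per_sec + ["junk"] * junk_per_sec
        random.shuffle(arrivals)
        served = arrivals[:capacity]          # everything else is dropped
        served_legit += served.count("legit")
    return served_legit / (seconds * legit_per_sec)

print(f"quiet day : {simulate():.0%} of legitimate requests served")
print(f"under DDoS: {simulate(junk_per_sec=5000):.0%} of legitimate requests served")
```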
Now, if you recall, I did say we were going to discuss how one may inadvertently perform a DDoS. First off, we need to realise that different websites require different levels of hardware. Right at the top you have the likes of Google, who require server farms of sizes that are difficult to fathom. Then you go down to the bottom, where you have tiny websites that get a couple of hits a week, which probably run on a single machine. Obviously, the smaller the server, the easier it is to DDoS. Now, the unintentional DDoS attacks happen to these smaller sites. How, you ask? Well, simple: they get very popular, very fast.
There are a few ways this can happen. Firstly, you start off with a small website which then becomes popular. Then, when you post new content, the number of people accessing your site goes through the roof and your site becomes temporarily unavailable. Don't think this is possible? I refer you to a delightful webcomic (in a manner of speaking), The Oatmeal, run by Matthew Inman. He even says something about it on his Facebook page. He does somewhat DDoS himself, by being awesome!
Another way is best explained by using Stephen Fry as an example. Stephen had built up quite a fan base as an entertainer and television personality over the years, so when he ended up on Twitter, well, naturally he had a smattering of followers (myself included). He is quite an avid user and, apart from the usual tweets of his current activities (and of course his tweets for charity), he does tweet links to amusing content from time to time. The moment that tweet hits the net, there are thousands of people clicking that link, and, well, it has caused more than one site to go down.
As we can see, in both cases neither party had any malicious intent towards the sites that they inadvertently DDoS'ed, but it did happen. The unfortunate part of this is that there is no way to defend against it. Well, there is no practical way to defend against it. Of course, everybody could use industrial-size server farms, but that is not really practical. There may be some sort of gains to be made if everything was hosted in the cloud, but I'm not sure how feasible that is.
Thursday, 18 August 2011
rankmyhack.com - WHY?
So, recently it has come to my attention that there is a website called rankmyhack.com [twitter account] (at last attempt the site was unreachable and isup.me said it looks down) which basically encourages the general populace to hack stuff, post details of it and get points based on how good it was. So, something simple like logging into a system where they left the guest account open would score minimal points, but a more complex exploit, such as, say, an SQL injection, would score more. Sounds fun, right?
WRONG! I for one will tell you how important it is to secure your web-facing interfaces, devices and any combination thereof till the cows come home, but there is a proper way to do that. There are some standard known practices and counter-measures against exploits that you can put in place. Of course, this process is fairly mechanical and does not account for human ingenuity.
Tiger Teams (that term always makes me think of this), enter stage left. Now, a Tiger Team or Red Team is a bunch of in-house or outsourced hackers whose sole job is to attack the system. They do this in a contained environment and report all the exploits to the developers, who then correct any flaws. Ideally, they will find everything, but there is no guarantee of that. If they are good, they will find most of them.
That's the normal way of doing it. This site, however, basically sets the dogs loose on every single person on the Internet. You, my dear beloved reader, are at risk. If you are reading this, it is a safe assumption that you have access to the Internet. A further safe assumption is that you have at least one e-mail account. BOOM! Target numero uno. But it gets better. Do you have: Facebook? Twitter? Social media sites? Other sites? Your own website? A smartphone? All targets. There are plenty more, and I will not list them all, but you get the idea.
The very idea that a website would be dedicated to this kind of malicious and illegal behaviour is utterly beyond me. Why don't we have a website dedicated to videos of us crashing our cars into walls and rate those? ratemycrash.com! Brilliant idea! And as I typed that, I realised that it probably exists, which it does. When I saw that page, my soul died a little bit. But I digress.
This website is the digital equivalent of a bunch of mobsters gathered around a dinner table bragging about all the crimes they have committed. Yes, I know I have shown a little bit of annoyance at the Black Hat conferences and the like, but in the end it is serious security research. I know they sexy it up and throw in a bit of FUD but at the end of the day it is valid research with some useful insights and is helpful in the design of future systems.
This site, not so much. It also helps further perpetuate the whole image of hackers portrayed in the media. You may remember my previous comment about the green on black terminals. Yeah, that's all there. There are times when people do things which we don't agree on and we move on. Then there are times when people do things and it just about turns you into a misanthrope. This isn't one of the latter, but it sure as hell ain't helping!
Sunday, 14 August 2011
Black Hat and the constant accompanying headlines!
So, recently there was the Black Hat conference in Vegas. For those of you who are less informed, this is basically a large gathering of security researchers presenting their latest findings. And by findings I mean what they have recently broken. Most people dub this a "hacker" conference, which is not too unreasonable, but I have one issue with it: the media coverage of it.
The only reason the term "hacker" is used is to sound sexy to the media. They hear that word and they are doing backflips through rings of fire to get the story. And, as we are aware, the media doesn't always get it right when reporting computer security related issues. Black Hat presentations are geared towards getting media attention and causing a bit of a frenzy.
A prime example of that is Don Bailey's presentation, entitled "War Texting: Identifying and Interacting with Devices on the Telephone Network", which does raise some valid points about the connectivity of critical devices (details in another post), but it was also well marketed. He showed that he could unlock cars just by sending a few text messages. When normal people hear something like "vulnerability in FPGA-based control systems" or something similar, they do not really know what it means.
Say "I can unlock your car with my phone" and they are scared. Don did say (quote in this article) "I could care less if I could unlock a car door. It's cool. It's sexy. But the same system is used to control phone, power, traffic systems. I think that's the real threat." Which is basically my grievance. As security researchers, we have to sexy up our ideas and then present them to the general populous. Which in turn leads to what I would deem to be an inconvenience.
If you want media attention, then you research topics that you can sell with a little bit of FUD. Which restricts your scope quite a lot. This then has the further effect that people view security researchers as only doing this kind of research. This leaves the more theoretical people, like myself, out in the cold, so to speak. Which may or may not be a bad thing, I am not really sure, but I am very sure that it does grind my gears a smidge.
Wednesday, 27 July 2011
Security the MS way: Protecting you from yourself!
I have always maintained that Microsoft's security policy is essentially to stop you from doing anything stupid. The concept in itself is fairly sound, but the implementation is not. In the classic operating system debate of Windows versus Linux, the biggest point Linux users make is that they can modify any part of the operating system to suit their needs and desires. When I used Windows XP, I had found all the little secrets to get my machine to do what I wanted it to do. But, I digress.
Microsoft adopted the "protect the users from themselves" approach in earnest in Windows Vista. There are several reasons why I (and others) am not too fond of Vista, but that aside. The idea is sound in theory, but the implementation of it left so much to be desired. In hiding all the knives from the kids, they also hid all the forks and spoons. Yes, I agree that some functionality should not be available to normal users, but it should be available to admin users.
A whole plethora of useful features were hidden, but we shan't go into that now. The main thing is this article. Now, I know I'm a bit late to jump onto this, but I have been a tad lazy. Moving on. So it seems that Hotmail will ban common and quite frankly shit passwords. This is both a good and a bad thing.
As I have pointed out before, passwords can be tricky things. For something like your e-mail account, you need a decent password. So now if Hotmail will reject your password because it's shit, that's good, right? Well, yes and no. It does stop dictionary attacks; however, it drastically changes the search space.
Previously, an attacker would run dictionary attacks in the hope that somebody was a fool. Now that that cannot happen, the system is foolproof, right? Yes, but to quote Douglas Adams: "A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools." It may sound a touch misanthropic, but people are stupid.
Eventually what is going to happen is that people will find the least complex passwords that pass through the Hotmail filter and then use those passwords repeatedly. Now dictionary-based attacks kick in again, just with a new dictionary. The dictionaries may be larger than before, but perhaps not by a significant amount.
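Here is a small sketch of that arms race (the banned list and the mutation rules are invented for illustration): a filter that rejects the classic terrible passwords, and an attacker who simply rebuilds a dictionary out of the minimal tweaks people make to sneak past such a filter.

```python
# Hypothetical stand-ins for a "common password" filter and the attacker's response.
BANNED = {"password", "123456", "qwerty", "letmein", "iloveyou"}

def filter_ok(password: str) -> bool:
    """Reject passwords on the banned list (a toy version of such a policy)."""
    return password.lower() not in BANNED

def lazy_tweaks(word: str):
    """The least-effort changes people make to sneak past the filter."""
    yield word + "1"
    yield word + "!"
    yield word.capitalize()
    yield word.replace("o", "0").replace("a", "@")

# The attacker's new dictionary: old favourites, minimally mutated,
# keeping only the variants the filter now accepts.
new_dictionary = sorted(
    tweak for word in BANNED for tweak in lazy_tweaks(word) if filter_ok(tweak)
)
print(len(BANNED), "banned words ->", len(new_dictionary), "filter-passing guesses")
print(new_dictionary[:5])
```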
So, it is a good idea and I am very much in favour of this, but it could also backfire. Only time will tell, we shall wait and see.
Wednesday, 29 June 2011
FIFTY!!! YAAAYY!!!
So, this is my 50th post. WOOHOO!!! Enough with the celebrations, we have work to do. To mark this momentous milestone, I will discuss the basics of cryptography. Now, I know I have covered some bits of this before, so feel free to skip any parts you think you know. So, let's start by naming the three principal components of cryptography: encryption, signatures/Message Authentication Codes (MACs) and proofs.
So, let's start with the first, and probably most well-known, section: encryption. The scenario is simple: you have a message that you wish to communicate to somebody else, without anybody else finding out that message. The way to do this is encryption. Think back to when you were a kid and you and your friends made up a secret language. That could be viewed as a crude form of encryption, as can all languages. But now, let's talk about the more modern stuff.
An encryption scheme is basically two functions: an encryption function, which I will denote by ENC, and a decryption function, which I will denote by DEC. ENC takes a normal message and turns it into something unintelligible, called a ciphertext. DEC takes a ciphertext and turns it back into the message. To do this, both functions need some additional information, called a key. There are two flavours of encryption, defined by the relation between the keys used by ENC and DEC. In symmetric or secret-key encryption, both ENC and DEC use the same key, whereas in asymmetric or public-key encryption, they use distinct but related keys.
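For the practically minded, here is what the symmetric flavour looks like as a minimal sketch using the third-party Python cryptography package (assuming it is installed); Fernet is just one convenient authenticated ENC/DEC pair, not the only way to do it.

```python
from cryptography.fernet import Fernet

# One shared secret key plays the role of "the key" for both ENC and DEC.
key = Fernet.generate_key()
box = Fernet(key)

ciphertext = box.encrypt(b"meet me at noon")   # ENC: message -> unintelligible token
message = box.decrypt(ciphertext)              # DEC: token -> original message

print(ciphertext)   # looks like random gibberish
print(message)      # b'meet me at noon'
```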
Now, you must be wondering where all these keys come from and how they get to the right people. For the asymmetric setting it's fairly simple: the receiver generates the keys, publishes the public encryption key, PK, and keeps the decryption key, SK, secret. The publication of PK leads to some very interesting problems, which are out of scope but worth mentioning. In the symmetric setting it's a tad more complicated, and this problem is a whole research area on its own. Let's just say that there are clever ways of doing it in several circumstances.
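And here is the asymmetric side of the story as a similar sketch, again assuming a reasonably recent version of the same cryptography package: the receiver generates the key pair, publishes PK and keeps SK to themselves.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Receiver: generate the key pair, publish PK, keep SK secret.
sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk = sk.public_key()

# Sender: anyone holding PK can encrypt...
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = pk.encrypt(b"meet me at noon", oaep)

# ...but only the holder of SK can decrypt.
print(sk.decrypt(ciphertext, oaep))   # b'meet me at noon'
```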
Now, signatures and MACs are essentially the same thing, with different key settings. MACs use symmetric keys and signatures use asymmetric keys. They both serve the same purpose, that is, to authenticate the source of a message. For simplicity, I will refer to both of them as codes, which is an abuse of many notations, but will suffice for our purposes. All code systems have two functions: generate, denoted GEN, and verify, denoted VER, and they work as follows: GEN takes in a message and outputs a tag, and VER takes in a message and a tag and checks whether the tag was indeed generated by the sender for that message.
Here we must note two very important things. Tags are both sender and message dependent. Thus a valid tag for one message cannot be a valid tag for another message sent by the same person. Also, a valid tag for a message sent by one person cannot be a valid tag for the same message sent by another person. Thus the VER function takes as input the message and the identity of the person, implicitly in the key. The main assumption here is that only the sender can generate a valid tag, and thus any valid tag implies that the message was sent by them. This is very close to real-world signatures and how we view them. Yes, I know that real signatures can be forged, but we also consider that in cryptography. There are some subtleties as to how this is done, but we shan't venture into that.
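For the symmetric case, Python's standard library already provides a GEN/VER pair of exactly this shape; here is a minimal sketch using HMAC-SHA256, where the shared secret key implicitly plays the role of the sender's identity.

```python
import hmac
import hashlib
import secrets

key = secrets.token_bytes(32)   # shared secret between sender and receiver

def gen(key: bytes, message: bytes) -> bytes:
    """GEN: produce a tag for the message under this key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def ver(key: bytes, message: bytes, tag: bytes) -> bool:
    """VER: check that the tag matches this message and this key."""
    return hmac.compare_digest(gen(key, message), tag)

tag = gen(key, b"pay Alice 10 pounds")
print(ver(key, b"pay Alice 10 pounds", tag))    # True
print(ver(key, b"pay Mallory 10 pounds", tag))  # False: tags are message dependent
print(ver(secrets.token_bytes(32), b"pay Alice 10 pounds", tag))  # False: and key (sender) dependent
```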
Finally, we come upon proofs. These are the least well-known and, some would say, the least understood of the three. The basic idea behind a proof system is that one party wants to prove to the other that a secret value x has some property. The property could be proven by revealing x, but then that would defeat the point of it being secret. So you need to somehow show that x has this property, but without revealing x. We seem to be in a bit of a pickle. That is, until our hero, zero knowledge proofs, shows up. These are protocols which allow you to prove certain statements about a value (or values) without revealing the value and without the other person learning anything about the secret value.
A simple way to view this is by the simple statement: "Pick a card, any card!" Let's say I hand you a deck of cards and you pick one out at random and put it aside. Now I ask you what suit it is. You tell me it's a spade. I ask you to prove it. You could show me your card, but that's a secret, right? Well, how about you show me 13 clubs, 13 hearts and 13 diamonds. *DING*! Now I know nothing more about your card than that it is in fact a spade. This example is great for explaining not only zero knowledge proofs, but also some of their stranger variants. But that's all for later (read: maybe if I feel like it, but don't hold your breath).
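Here is that card trick written out as a toy script, purely to pin down who reveals what; it illustrates the intuition, not a real zero-knowledge protocol. The prover shows the 39 cards of the other three suits, and the verifier checks that exactly the claimed suit is missing from what was shown.

```python
import random

SUITS = ["clubs", "diamonds", "hearts", "spades"]
DECK = [(rank, suit) for suit in SUITS for rank in range(1, 14)]

# The prover draws a secret card and sets it aside.
secret_card = random.choice(DECK)
claimed_suit = secret_card[1]                      # "my card is a <suit>"

# Proof: show every card of the OTHER three suits. All 39 of them are still
# in the prover's hand, precisely because the secret card is not one of them.
hand = [card for card in DECK if card != secret_card]
revealed = [card for card in hand if card[1] != claimed_suit]

def verify(claimed_suit, revealed):
    """Verifier: if all 39 non-claimed-suit cards are on the table, the one
    card set aside can only belong to the claimed suit."""
    expected = {card for card in DECK if card[1] != claimed_suit}
    return set(revealed) == expected

print("claim verified:", verify(claimed_suit, revealed))
print("but the card itself was never shown:", secret_card not in revealed)
```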
So, all in all, we covered the three basic concepts in cryptography because I'm too lazy to do real posts and the number 50 was mainly just an excuse. We hope to have a resumption of normal service soon-ish.
So, let's start with the first, and probably most well-known, section: Encryption. The scenario is simple, you have a message that you wish to communicate to somebody else, without anybody else finding out that message. The way to do this is encryption. Think back to when you were a kid and you and your friends made up a secret language. That could be viewed as a crude form of encryption, as can all languages. But now, lets talk about the more modern stuff.
An encryption scheme is basically two functions: an encryption function, which I will denote by ENC, and a decryption function, which I will denote by DEC. ENC takes a normal message and turns it into something unintelligible, called a ciphertext. DEC takes a ciphertext and turns it back into the message. To do this, both functions need some additional information, called a key. There are two flavours of encryption, distinguished by the relationship between the keys used by ENC and DEC. In symmetric or secret key encryption, both ENC and DEC use the same key, whereas in asymmetric or public key encryption, they use distinct but related keys.
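To make the ENC/DEC picture concrete, here's a minimal sketch of the symmetric flavour in Python. I'm assuming the third-party `cryptography` package is installed, and Fernet is just one convenient off-the-shelf scheme, not anything special to this post.

```python
# Minimal sketch of the ENC/DEC interface for symmetric (secret key)
# encryption, using Fernet from the third-party `cryptography` package.
# Purely illustrative; any secure symmetric scheme would do.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the shared secret key
scheme = Fernet(key)

ciphertext = scheme.encrypt(b"attack at dawn")   # ENC: message -> unintelligible ciphertext
message = scheme.decrypt(ciphertext)             # DEC: ciphertext -> original message

assert message == b"attack at dawn"
```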
Now you must be wondering where all these keys come from and how they get to the right people. For the asymmetric setting it's fairly simple. The receiver generates the keys, publishes the public encryption key, PK, and keeps the decryption key, SK, secret. The publication of PK leads to some very interesting problems, but those are out of scope here, though worth mentioning. In the symmetric setting it's a tad more complicated, and this problem is a whole research area on its own. Let's just say that there are clever ways of doing it in several circumstances.
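For the asymmetric flavour, a sketch of that "generate the keys, publish PK, keep SK" step might look like this. Again I'm assuming the `cryptography` package, and RSA with OAEP padding is just one standard choice, not the only one.

```python
# Sketch of asymmetric (public key) encryption: the receiver generates the
# key pair, publishes PK and keeps SK secret. Assumes the `cryptography`
# package; RSA with OAEP padding is one standard instantiation.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # kept secret by the receiver
pk = sk.public_key()                                                 # published for the world to see

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = pk.encrypt(b"for your eyes only", oaep)  # anyone holding PK can encrypt
message = sk.decrypt(ciphertext, oaep)                # only the SK holder can decrypt
assert message == b"for your eyes only"
```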
Now signatures and MACs are essentially the same thing, with different key settings. MACs use symmetric keys and signatures use asymmetric keys. They both serve the same purpose, that is to authenticate the source of a message. For simplicity, I will refer to both of them as codes, which is an abuse of notation, but will suffice for our purposes. All code systems have two functions: generate, denoted GEN, and verify, denoted VER, and they work as follows: generate takes in a message and outputs a tag, and verify takes in a message and a tag and checks whether the tag was indeed generated by the sender for that message.
Here we must note two very important things. Tags are both sender and message dependent. Thus a valid tag for one message cannot be a valid tag for another message sent by the same person. Also, a valid tag for a message sent by one person cannot be a valid tag for the same message sent by another person. Thus the VER function takes as input the message, the tag and the identity of the sender, the last of these implicitly via the key. The main assumption here is that only the sender can generate a valid tag, and thus any valid tag implies that the message was sent by them. This is very close to real-world signatures and how we view them. Yes, I know that real signatures can be forged, but we also consider that in cryptography. There are some subtleties as to how this is done, but we shan't venture into that.
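As a sketch of the GEN/VER interface in the MAC (symmetric) setting, the Python standard library already has everything we need; HMAC-SHA256 below is just one standard choice.

```python
# Sketch of the GEN/VER interface for a MAC, using HMAC-SHA256 from the
# Python standard library. The same secret key is held by sender and receiver.
import hashlib
import hmac
import os

key = os.urandom(32)  # shared secret key

def gen(key: bytes, message: bytes) -> bytes:
    """GEN: produce a tag for this message under this key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def ver(key: bytes, message: bytes, tag: bytes) -> bool:
    """VER: accept only if the tag matches this exact message and key."""
    return hmac.compare_digest(gen(key, message), tag)

tag = gen(key, b"pay Bob 10 pounds")
assert ver(key, b"pay Bob 10 pounds", tag)
assert not ver(key, b"pay Bob 1000 pounds", tag)            # message dependent
assert not ver(os.urandom(32), b"pay Bob 10 pounds", tag)   # sender (key) dependent
```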
Finally, we come upon proofs. These are the least well-known and, some would say, the least understood of the three. The basic idea behind a proof system is that one party wants to prove to the other that a secret value x has some property. The property could be proven by revealing x, but that would defeat the point of it being secret. So you need to somehow show that x has this property, but without revealing x. We seem to be in a bit of a pickle. That is, until our hero, the zero-knowledge proof, shows up. These are protocols which allow you to prove certain statements about a value (or values) without revealing it and without the other person learning anything about the secret value.
A simple way to view this is by the simple statement: "Pick a card, any card!" Let's say I hand you a deck of cards and you pick one out at random and put it aside. Now I ask you what suit it is. You tell me it's a spade. I ask you to prove it. You could show me your card, but that's a secret, right? Well, how about you show me the 13 clubs, 13 hearts and 13 diamonds left in the deck. *DING*! Now I know nothing more about your card than that it is in fact a spade. This example is great when explaining not only zero-knowledge proofs, but also some of their stranger variants. But that's all for later (read: maybe if I feel like it, but don't hold your breath).
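Just for fun, here is the card trick as a toy Python sketch. To be clear, this is only an analogy for the zero-knowledge idea, not a real proof system.

```python
# Toy sketch of the "pick a card" analogy: the prover hides one card from a
# full deck and proves it is a spade by revealing the 39 non-spade cards
# still in the deck. The verifier learns the suit but nothing about the rank.
# An analogy only, not an actual zero-knowledge protocol.
import random
from collections import Counter

SUITS = ["spades", "clubs", "hearts", "diamonds"]
DECK = [(rank, suit) for suit in SUITS for rank in range(1, 14)]

def prover_pick():
    # The prover happens to pick a spade and claims "my card is a spade".
    secret = random.choice([card for card in DECK if card[1] == "spades"])
    remaining = [card for card in DECK if card != secret]
    revealed = [card for card in remaining if card[1] != "spades"]  # the 39 non-spades
    return secret, revealed

def verifier_check(revealed):
    counts = Counter(suit for _, suit in revealed)
    # If all 13 clubs, 13 hearts and 13 diamonds are still in the deck,
    # the hidden card cannot be any of them, so it must be a spade.
    return all(counts[suit] == 13 for suit in ["clubs", "hearts", "diamonds"])

secret, revealed = prover_pick()
assert verifier_check(revealed)   # convinced of the suit, clueless about the rank
```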
So, all in all, we covered the three basic concepts in cryptography, because I'm too lazy to do real posts and the number 50 was mainly just an excuse. We hope to have a resumption of normal service soon-ish.
Saturday, 25 June 2011
Why digital money may be a bad idea.
Basically right after I posted this, I read this. Kind of an "I told you so" moment. But apart from that, I am at a loss for words. I've been staring at my screen for a couple of days now and I have nothing (useful or insightful) to write. Apart from the fact that this whole real currency-Bitcoin exchange business is a little bit hinky, and this is one of the problems you can have with it. For this post, please insert the accustomed amount of wit, cynicism and all that jazz you are used to. Thanks :)
Monday, 20 June 2011
Let's talk money, digital money!
Alrighty then, I'm going to assume that everybody has some basic understanding of the concept of money. Next, I assume you all have some idea of how to spend money online using things like PayPal, credit cards, debit cards and so forth. Also, the reason people shouldn't be able to steal your details, and thus your money, if it's all done right, is that it all depends heavily on crypto. The best way to explain what I do is to ask "Have you ever bought anything online?" When they answer in the affirmative, I say "You're welcome."
All levity aside, let's talk about money. Money is official-looking paper and bits of metal that carry some value. This value is backed by some central authority. This would normally be the central bank of the country, but could be something larger such as the Eurozone. There's a whole lot of economics behind how and why this works, inflation, deflation, devaluation, exchange rates and so on, that I don't even pretend to understand. We all accept this at face value and move on with our lives.
In the online world, it's basically the same thing. The authorities may have changed to credit card issuers, certification authorities and others, but the principle remains the same. Now, this idea doesn't sit too well with the über-privacy people. They are afraid of the digital "paper trail", if you will, that is created by all of this. They ask: if we can use crypto to secure our transactions, then why not use it to preserve our privacy and create anonymity?
Well, there is quite a lot of cryptographic research in the field of what we like to call e-cash. All this research is completely agnostic of the economic aspects and focuses on the crypto stuff. Until a few days ago, I thought there was no real implementation of any sort of e-cash. Then I heard about Bitcoin. Just as a brief side-note, cryptographers love coins. It's somewhat of a convention that all randomness is generated using coins and that all e-cash schemes are described in terms of coins. There is good reasoning behind it, but I shan't go into details.
So, back to Bitcoin, which is "the first decentralised digital currency" according to the introductory video. They then go on to explain how it all works and what the advantages are. I'll just recap it for you, for completeness. Bitcoin works using identifiers called addresses, which are essentially random strings. Each user gets one when they download the client software. They can then create more, so as to have different types of payments come and/or go to/from different addresses. All of these are tied to the same wallet. So if a person has addresses a and b, then sending money to either address amounts to the same thing. This is how anonymity is preserved.
When Bitcoins are sent from person to person, the transaction is hashed and signed. The hash value and digital signature are then verified by the other users in the system. Once a transaction is verified, the Bitcoins are added to and subtracted from the relevant accounts. This is the decentralised aspect. In normal e-commerce transactions, the verification would be done by a centralised authority such as a bank or clearing house. With Bitcoin this is done in a peer-to-peer (P2P) manner. Another interesting thing is that Bitcoins are super divisible: you can go down to 0.00000001 BTC, which is one advantage of having a digital currency.
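To give a flavour of the "hashed and signed" step, here is a rough sketch. Bitcoin actually uses ECDSA over secp256k1 and its own transaction format; I'm using Ed25519 from the (assumed installed) `cryptography` package purely to show the hash-then-sign-then-verify flow, and the transaction string is a made-up placeholder.

```python
# Rough sketch of hashing and signing a transaction, then verifying it.
# Bitcoin really uses ECDSA over secp256k1 and its own serialisation; this
# uses Ed25519 from the `cryptography` package just to illustrate the flow.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()
sender_pub = sender_key.public_key()

transaction = b"from:1SenderAddr... to:1ReceiverAddr... amount:0.5BTC"  # placeholder data
tx_hash = hashlib.sha256(transaction).digest()   # the transaction is hashed...
signature = sender_key.sign(tx_hash)             # ...and signed by the sender

# Any other node can now check the signature against the sender's public key;
# verify() raises an exception if the signature does not match.
sender_pub.verify(signature, tx_hash)
print("transaction verified")
```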
So I thought I'd give this a try. I downloaded the client software, started reading through the literature and all the wikis, and got a feeling for how this all works. There is a whole sub-culture built around Bitcoin and it is quite fascinating. There are entire forums and IRC channels dedicated to trading in, and with, Bitcoins. However, as I dug deeper I discovered two very interesting points.
Firstly, Bitcoins are more of a commodity than a currency, IMHO. I would like to think of Bitcoins as digital gold. This analogy is fairly apt given the way the currency works, especially with respect to generation. The generation of Bitcoins is called "mining" and essentially involves finding an input whose hash satisfies a very restrictive condition (roughly, a partial pre-image for a hash function). This requires a huge amount of computation, but once done, a "block" is created. The creation of this block gives the creator some Bitcoins; at the time of writing this reward stands at 50 BTC. For those of us that do not have a supercomputer, there are still options.
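A stripped-down sketch of the hash puzzle, ignoring every real protocol detail: keep trying nonces until the hash of the block data falls below a target. The difficulty below is tiny so the sketch finishes quickly; the real network's target is astronomically harder, which is exactly why mining burns so much computation.

```python
# Stripped-down sketch of "mining" as a hash puzzle: find a nonce such that
# SHA-256(block_data + nonce) starts with a given number of zero bits.
# The real protocol has a moving difficulty target and a specific block
# format; this only illustrates the brute-force nature of the search.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    target = 1 << (256 - difficulty_bits)   # hash value must fall below this
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"previous-block-hash|transactions|timestamp")
print("found a valid nonce:", nonce)
```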
The basic technique is called "pooled mining." Here what you do is combine your computational power with that of others and split up the profits according to how much work each of you did. One way of doing this, if you have a reasonably large amount of computational power, is to join a mining pool. There are several ways this can be done and there are a few technical details that need to be considered. Mostly these depend on a central server, which is ironically what Bitcoin was trying to avoid. For those of us with less computational power, there are alternatives, such as this (BTW if you are feeling really nice, you could try and generate a few coins for me here or you could just send some to 1KbnDDaS3UTAMZkqHSJwGuWgdApQr3wAqp).
However, there are other ways. Carrying on the gold analogy, there are people who own gold but have never even been near a mine. How? They buy it! The same goes for Bitcoins. There are marketplaces where you can buy and sell Bitcoins for real money. It's easy to compare this to, let's say, a fresh fish market. Basically, the fishermen catch the fish (in this case they mine Bitcoins) and then go to a fixed place to sell it. The public knows this place and comes there to buy some fish (or in our case Bitcoins). The reason I use the fish market analogy is that there is some haggling and negotiation involved, which is not unlike the Bitcoin marketplaces. In these places you can buy and/or sell Bitcoins for USD, GBP, EUR, or even SLL, the currency of Second Life. Not kidding on the last one.
Which sort of brings me to the second point. Even though Bitcoin is supposed to be decentralised, it seems to be doing its best to achieve the exact opposite. The whole idea is not to trust one monolithic central institution, but instead to distribute the trust amongst all participants in the system, that is, using P2P. Yet there is always some sort of large trust placed in central entities of varying size, and the point still stands. Transaction verification is still very much P2P, but not much else is. And therein lie the problems.
"With great power comes great responsibility" said Uncle Ben, rightly so. In the mining context, there are ways that servers and miners can cheat. The details of this are fairly technical and thus I will skip them. The essence is that if you control a large enough share of the mining pool, you can control the outcome of the pool, in that who receives how much money. Some people would argue that such attacks are infeasible, but I think they are possible. Further more, with all the multiple currency exchanges, it's not unlikely that somebody could be making, or trying to make, money speculating of price rises and drops. The problem here is that because it's so decentralised, there is the risk of somebody "making a run on the currency." I'm not entirely sure I know how that works, but I believe them.
The most recent problem that has surfaced is that of theft. All the "money" is stored locally on your hard drive in a single file called "wallet.dat". After reading a few of the forums, it became painfully obvious that everybody knows exactly what this file is and what it does. I thought to myself "That's quite a nice target for an attack". Hey presto, somebody did it. The thing with attacks of this kind is that they are pretty much untraceable. Remember, Bitcoin operates on anonymous identities, so even if you get the address that the money was sent to, you don't really learn anything.
So, there are some really cool things about Bitcoin and some not so cool things. I really have no strong opinions about it either way at this point in time. I am just going to let things develop and see what happens. There is a lot of talk about how these may be used to buy and sell drugs, which could lead to the whole thing being shut down, but we shall have to wait and see.
Monday, 13 June 2011
Something that has been bugging me for a while
Do you have a facebook account? Rhetorical question, of course you do. If you don't, well, then you can leave now because this post is all about *drumroll* FACEBOOK! Seeing as how it is on my blog, one can safely assume that it is about facebook security. So, what have facebook done now? They are protecting you from them.
Confused? So was I. Basically they have started up this new scheme to prevent Cross-Site Scripting (XSS), Clickjacking and other scripting-based vulnerabilities. Some of you may be unfamiliar with scripting and the vulnerabilities therein. Most modern webpages serve up dynamic content, making the experience different for each user. A good example of this is your facebook newsfeed, which is different from your friend's feed.
This is all achieved using scripting. A script is essentially some sort of program code that runs within your web-browser. The catch is, you never explicitly execute the scripts like you do programs. They are embedded in the webpage and are executed when you open the webpage, or at some other suitable trigger. The problem then is that people could embed malicious scripts into pages and you will not realise they have run, until it's too late.
I'm sure you've all had that one friend who has posted the same spam link to everybody and 10 minutes later warned you not to click it. That is basically what these malicious scripts do. So, facebook decided that they needed to address the issue, which they did pathetically.
What they have done is they now "read" your URLs and check them for any script. Again, ANY script. That means that if any script is detected, you will be logged out of facebook instantly as a security precaution. You may wonder why this is a problem. Well, as alluded to earlier, almost all actions on facebook are scripts. Seeing more items in your news feed, liking a post, commenting on a post, writing on somebody's wall, the chat feature: everything is a script. So now, facebook sees you trying to do something legitimate and decides to kick you out. It doesn't always happen, but it happens often enough to be mildly aggravating.
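I obviously have no idea what facebook's filter actually does under the hood; the sketch below is just my guess at what an over-eager "anything script-looking in the URL means log them out" check might look like, and why it would trip on the site's own perfectly legitimate, script-driven requests. Every name in it is hypothetical.

```python
# Purely hypothetical guess at an over-eager "script in the URL" filter,
# to illustrate why it fires on legitimate requests: the site's own
# AJAX endpoints look just as "scripty" as an attacker's payload.
import re

SUSPICIOUS = re.compile(r"(<script|javascript:|\.js\b|ajax)", re.IGNORECASE)

def should_log_out(url: str) -> bool:
    """Log the user out if *anything* script-like appears in the URL."""
    return bool(SUSPICIOUS.search(url))

print(should_log_out("http://example.com/photo?caption=<script>steal()</script>"))  # True: actual XSS attempt
print(should_log_out("http://example.com/ajax/newsfeed/more_items?user=42"))        # True: legitimate feature, kicked out anyway
```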
It's bad enough that you have to log in again, but what's even worse is that you have to go through the following two screens: (full size images here and here)
and then a third asking if you would like to share a link explaining how great facebook security is. Honestly, I would rather have a red-hot iron bar slapped onto my arm. This is because if you read the messages carefully, you will notice a couple of "< br >" tags popping up.
This is not a security issue, but it does mean that whosoever wrote those pages is probably a moron! "< br >" is the tag used in HTML to produce a line break. However, XHTML insists on the self-closing "< br />" form for technical reasons (HTML5 is happy with either). Believe me, it's a good idea. So, this led me to the conclusion that some facebook web developers have written sloppy HTML/PHP/JScript/whatever else they use, and that is causing the "safety filter" to go off at least twice a week on my account. Also, I'm not sure how good the code for the "filter" is. I have very low expectations.
The first time I was logged out by this "filter", I was impressed that facebook had implemented such a feature. I guessed that they had some bugs in it, which was understandable. With each subsequent occurrence of me being "filtered out", I grew more sceptical. Then, when I saw the horrendous HTML code in the warnings, I gave up hope and waited for it to happen again so I could take screenshots.
I didn't have to wait too long. Most people would say "Well at least they tried!" To which I reply, "Welcome to cyber-security, where a half-assed attempt doesn't count!" Really, facebook, get your act together and actually make an attempt; then maybe I'll be impressed and stop writing evil comments on your security fan pages.
Sunday, 12 June 2011
Quick post on how I may be kind of wrong.
If you know me at all, you will know that I have strong opinions on some things. If you don't know me, you now know that I have strong opinions on certain things. Now that everybody is caught up, let's all sit back and enjoy me being wrong-ish. I had a post earlier, which really was based on the premise that access to the Internet is a privilege that some people abuse. Well, now the United Nations has declared it a human right. My argument falls flat on its face. I'm a big boy and I am willing to admit that, in light of this, those arguments no longer hold water. Things change, people's ideas turn out to be wrong, that's life.
Also, just a minor side-note: read this article!
Monday, 6 June 2011
Cyberwarfare Part 2 (No more lazy me, for now)
Alrighty then, we had a basic intro to cyberwar in my previous post. In between then and now, the clever chaps at the SIS, commonly (if incorrectly) referred to as MI6, told us about this little gem. This has to be one of the funniest things in existence... EVER!!! But minor state-sponsored hacktivism aside, back to the crux of the matter: the issues arising from cyberwar.
One of the main problems is that you may not even know that you were attacked. If somebody blows up a building, the sound, and the lack of building, would alert you pretty quickly to the fact that there was an attack. A cyber attacker may have installed some malicious software on your system or copied some data and you would be none the wiser. Yes, there are ways to detect this, but it is very possible that you wouldn't even notice.
It's not only the lack of physical evidence, but also the time scale. Normal wars tend to take a long time. If you don't notice you are at war, well then you have bigger problems than the army barrelling down your front driveway. A cyberwar or cyber attack can be executed and completed within a matter of hours, if not minutes. It really is that fast. Yes, there is a lot of prep time required, but this is analogous to training your army, building your tanks and so on.
Then there is the last (I promise, well, for now) issue arising in cyberwar: non-interactivity. To put a cryptographic twist on the whole matter: war is an interactive protocol. Sure, if you surprise the enemy they won't know they are at war right away, but they will catch on pretty quickly and then return in kind. The thing with cyberwar is that not only is the decision to go to war unilateral, but in some sense so is the war itself. One party decides to attack another party and does so. The other may or may not discover this and may or may not respond in kind. But again, the whole thing is done very non-interactively (despite what pop culture (couldn't find anything for that, sorry) and video games may tell you).
So, to sum up: cyberwar is confusing, unclear, hard to track, pinpoint and pin on the perpetrators, and is inherently non-interactive. And if that wasn't bad enough, the actual definition of cyberwar is pretty fuzzy and very much up in the air right now. Most likely I will revert back to lazy me. Unless something cool happens.
Friday, 3 June 2011
Cyberwarfare Part 1 (A post I have been procrastinating on)
Well, this post has been in the works for a couple of weeks now. I have been procrastinating on an epic level about finishing it off. However, the universe decided to give me a kick in the backside in the form of these related recent articles (all links to separate slashdot stories).
So, in recent times, there has been a lot of talk of digital warfare, internet wars, cyberwar and so forth. The most recent being the aforementioned. The general idea behind them is all the same, we have a strategy/army/assets/whatever for cyberwarfare. What happens when warfare goes from being about things in the real world to things in the digital world?
So let's start from the start, shall we? What is modern warfare? (Apart from a terrible pun on a pretty good video game.) War as a concept is fairly simple. Two nation states (in general) disagree on something and wish to resolve the issue. So basically they start blowing each other up until they get bored or one party is very, very dead. Yes, that is a gross oversimplification, but the concept holds. Now, onto the crux of the matter: what is cyberwarfare?
Cyberwar (which is the term I shall be using from now on, because I think it's the coolest) is essentially a war fought in the digital realm. This is generally in tandem with conventional warfare with the aim of disabling digital assets. There could also be political goals, achieved by defacing websites and so on, but IMHO the main goal is the destruction of digital assets.
Well, this is all pretty fine and dandy when the war is being carried out by nation states, because there is some inherent chain of command and somebody who would be responsible for ordering these attacks. However, this is not always the case with cyberwar. Now you may ask "why is this possible?"
Good question. The thing with conventional war (ignoring any peace negotiations) is that the winner is the side with the most and/or better equipment and/or training. Therein lies the main reason cyberwar becomes so much easier. To build a real army you need to train people to drive tanks and fly planes and shoot guns and blah blah blah. To build a cyber army, you need to teach people how to download a program and run it.
Here the "army" is recruited by word of mouth and because there is no physical danger caused by participating in this attack the number of people who join in are much more numerous. However, we do fall into an interesting problem: who is responsible for this attack, which is essentially tantamount to an act of war?
The answer to that question is ill-defined at best. A prime example would be the recent attack on the Playstation Network (another blog post I will finish soon). First Sony said it was Anonymous, who then claimed it wasn't them, but it later turned out to be a "faction" (for lack of a better word) of Anonymous. So here we see no chain of command, and the leaders of the group had no idea what the other members were up to.
And therein lie the first complications of cyberwar. First off, there is the ability to engage in cyberwar at all. Conventional warfare requires a substantial amount of resources, which are pretty much never available to the average individual. In the cyber realm, all you need is an Internet connection and possibly some more people to help out, or just their computers (a whole other problem there, which I will cover later). And then there is the problem of accountability. At best you get IP addresses for the attacking platforms, which may just be machines under the control of, but not owned by, the attacker (again, to be covered in more detail in another post) and thus may not yield anything useful.
Now, this post is getting pretty long and falling into TL;DR territory. That and I really don't want to write anything more at this point in time. So, I will end here and will pick this up later (note the "Part 1" in the title of the post).
Sunday, 8 May 2011
Password Lockers Part 2
So, this is becoming a trend, well, two trends: follow-up posts and data breaches. As you may or may not know, there was a MASSIVE breach involving Sony Entertainment, specifically the Playstation, but more on that later. More to the point, you may recall my previous post on password lockers etc. Well, this post is about what can go wrong with a password locker.
LastPass is a company that provides a password locker service. What you do is register and download their software. Your master password, which unlocks the locker, is then stored there. Now it recently came to light that some of these passwords were compromised (or not). Well, LastPass, if you are reading this, have a gander over here for a sec, k? We assume, hypothetically, that the master passwords were compromised (mainly because I have already written out most of this post and I'm kinda lazy). LastPass issued a warning to all its users to change their master passwords and they all did. Their servers could not handle the load and so they had to restrict the number of users allowed to change their passwords. This actually happened before they announced they were not hacked.
Well, I would like to say that I am somewhat impressed by the expediency with which the users tried to change their passwords. I am also impressed by LastPass's inability to deal with the situation. Agreed, they had issues dealing with the load, but according to their blog they have put affected accounts in "lock-down" mode. Kudos to you.
After all of this, LastPass then claimed they were not hacked. It seems that they had just broken their own system. After users changed their master passwords, they were met with garbage characters, random images and occasionally the deep dark void of nothing. Somewhere, somebody thought that implied a hack. And that brings us to today's lesson.
When you think you have been breached, DO NOT PANIC! Check, re-check, double-check and confirm that there has been a breach. Immediately put in place counter-measures and check for other possible backdoors opened by this breach. Take a deep breath. Notify the affected users as required by law and/or company policy. If you follow these steps properly, then there should be no need to ever retract a security warning. Issuing a security warning scares people, retracting it causes doubt. We are trying to bring digital security out of the realm of FUD (Fear, Uncertainty, Doubt)!
Wednesday, 27 April 2011
Why the movies are wrong (Surprise, Surprise)
On the lighter side of life, my friend @zarino tweeted this link, which got me thinking about hackers in popular culture. Think about your favourite movie and/or TV hacker. My vote goes to Alec Hardison, but that's irrelevant. In any "hacking sequence" you see the hacker typing away furiously on a keyboard and all sorts of random green text on a black background. The green on black dates way back to the old days and I have no clue as to why they used those colours, but everybody loves it.
Anyway, you see them typing away furiously at a console screen and all sorts of text just popping up.
IT'S ALL WRONG!
Sadly, hacking is really not that glamorous. It's mainly typing one or two commands or even just a button click. That is preceded by actually coding the tool you are using, but nobody types that fast, especially not when programming. Just by the by, the text that appears in the link is a program of some sort. I haven't read all the code, so I'm not sure what it does. All I can say is that it looks like something from the C family.
*****EDIT******
Turns out they have changed the site a touch since I last visited it. It appears the code is part of the Linux kernel.
Sunday, 24 April 2011
Location, Location, Location! What you don't know that they know! (Part 2)
So, some of you may remember this post. Well, this is part two of that. I contemplated for about 15 minutes whether I should end that post with the fact that your phone is also capable of tracking your movements, but decided against it. That would have been pretty cool, and mildly prophetic, but hindsight is always 20/20. Well, back to the present and how your phone tracks you.
So, recently people discovered, much to their surprise, that the iPhone stores an unencrypted history of where you have been for the past 10 months. I seem to be the only person whom this did not surprise. In fact, it would have surprised me if the phone did not store any location history. I often, mostly jokingly, say to my friends who own Apple products that Steve Jobs owns their souls. After reading this, some of them are starting to think it's true (side-note: this article seems to agree).
It also surfaced that Android phones do exactly the same thing. So much for being the free and open platform, right? So, I would normally take this time to be smug that I use a Symbian smartphone, but in all honesty, I would not be surprised if it did the exact same thing. Of course I haven't forgotten all you lovely Blackberry users. RIM may well be doing the exact same thing, but I have not found any solid evidence either way.
So, base assumption: if you have a smartphone, it has a record of where you have been for the past x amount of time. Why is this a) done? and b) a problem? Well, in the previous post I covered most of the answer to b), so let's move on to why it is done. The official answer: "to improve the quality of our location based services." The real answer: "to improve the quality of our location based services." SHOCKER!
Yes, I am aware that law enforcement agencies know about this data and sometimes use it in the course of enforcing the law. But in all fairness, when the cops are looking for you, the normal rules don't totally apply. So, back to the main point: it really does help them improve the location based services. There is no way to do that other than to use your actual location data. If you want a great app that finds the nearest bar, restaurant or even condoms in New York (I was very amused when I read that article), your handset manufacturer needs to collect this data.
The upshot: this is something you have to give in order for you to get the services that you want. I for one think it's a fair trade-off. I have no proof that my phone does this, but if it turns out that it does, I'm OK with that. Again, in the digital age, privacy is not quite what it used to be, which is a fact we all have to deal with.
Wednesday, 20 April 2011
Why you are not dead from the robot-induced nuclear apocalypse (or why CAPTCHAs still work)
If you are reading this then you are not dead. That is generally a good thing. Now, you may ask yourself why you should be dead. Well, according to the popular Terminator series of movies, the 18th of April was the day when we all bite the big one. Unless you happen to be John Connor.
The original premise of the movies was that the Artificial Intelligence (AI) known as "Skynet" became self-aware and decided that it reaaaaallly hated humans. So, it decided to get rid of them the best way it knew how; it nuked the living daylights out of EVERYTHING (barring John Connor and the other lucky guys) and then some. And then we have time-travel and fights and craziness spread over 4 movies and a TV series. Why, you may ask, do I care about this? Well, apart from being a massive geek?
There is a mild connection between Skynet and computer security. What Skynet represents is a sentient AI, which is basically a computer that can think for itself. Now, having personally worked with an AI (for my undergraduate thesis), I look at all movies/TV shows that feature real, full AI with a great amount of scepticism. I know I am not an expert by any metric, but it took me close to 2 months to write an AI agent that learns to play blackjack. Nothing fancy at all, it just plays a basic strategy. The upshot: it's really, really hard.
I know that they have made some massive leaps in the field, such as Watson and Deep Blue, but that's not quite the same. These are supervised AI agents that can only do one thing and had to be fed a ton of data beforehand. Watson, for example, has the whole of Wikipedia stored, which is in my book cheating just a little bit. Although these are very impressive, they are still far away from full self-awareness and sentience.
The only way that could happen is if we had an unsupervised agent, that is, an AI agent which is given only knowledge of the problem and needs to learn how to solve it. For example, you could tell an agent what a maze is, then put it in a maze and tell it to find its way out. It will (eventually) learn how to do that. Then you give it another maze and it will learn, and so on and so forth. And then you have to make it able to learn new tasks. But once you hammer out those little details, then you have the computer that will end the world.
So, to the main point: AI is interesting from a computer security point of view for several reasons, but the main one is CAPTCHAs. Now, it's a little bit of an abuse of the term, but for simplicity I use CAPTCHA to mean all such systems and the implementations thereof. The general idea is that you are shown a picture of somewhat distorted letters and you have to type the letters in to prove you are a human.
The reason for this is that computers, such as automated sign-up scripts and spambots, were used to make THOUSANDS of fake email accounts for the purpose of spreading spam, viruses, boredom, marketing maybe, but generally evil stuff. If you can prove you are human, then all will be hunky-dory and you can sign up. A computer will almost always fail these tests. AIs can try to learn these things, for example using neural networks, and some of these algorithms have had reasonable success. This is exactly why you sometimes get a CAPTCHA that is completely unintelligible: to protect against smart AIs, with the side effect of annoying the humans, ironically, just as the machines wanted.
But, if we could for a second just shift back to self-aware AI. Well, the upshot is if the AI were self-aware, it's well beyond breaking CAPTCHA. We would basically have a robot with human cognitive skills, but much more computing power. I leave it to your imagination (mostly sculpted by TV and movies) to do the rest.
SIDENOTE: More real/relevant blog posts to come soon.
The original premise of the movie was that the Artificial Intelligence (AI) known as "Skynet" became self aware and decided that it reaaaaallly hated humans. So, it decided to get rid of them the best way it knew how; it nuked the living daylights out of EVERYTHING (Barring John Conner and the other lucky guys) and then some. And then we have time-travel and fights and craziness spread over 4 movies and a TV series. Why, you may ask, do I care about this. Well, apart from being a massive geek?
There is a mild connection between the Skynet and computer security. What Skynet represents is a sentient AI, which is basically a computer that can think for itself. Now, having personally worked with an AI (for my undergraduate thesis) I now look at all movies/TV shows that have real full AI with a great amount of scepticism. I know I am not an expert by any metric, but it took me close to 2 months to write an AI agent that learns to play blackjack. Nothing fancy at all, just plays a basic strategy. The upshot: it's really really hard.
I know that there have been some massive leaps in the field, such as Watson and Deep Blue, but that's not quite the same. These are supervised AI agents that can do just one thing and had to be fed a ton of data beforehand. Watson, for example, has the whole of Wikipedia stored, which in my book is cheating just a little bit. Although these are very impressive, they are still far away from full self-awareness and sentience.
The only way that could happen is if we had an unsupervised agent, that is, an AI agent which is given only knowledge of the problem and needs to learn how to solve it. For example, you could tell an agent what a maze is, then put it in a maze and tell it to find its way out. It will (eventually) learn how to do that. Then you give it another maze and it will learn, and so on and so forth. And then you have to make it able to learn new tasks. But once you hammer out those little details, then you have the computer that will end the world.
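Just to give a flavour of what "learning by trial and error" looks like in practice, here is a minimal sketch in Python of one classic technique, tabular Q-learning, on a toy 4x4 grid standing in for a maze. The grid size, rewards and learning parameters are all made up for illustration; a real agent (and a real maze) would be far more involved.

import random

SIZE = 4                                        # a 4x4 grid: start at (0, 0), exit at (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up
GOAL = (SIZE - 1, SIZE - 1)
Q = {}                                          # Q[(state, action)] -> learned value estimate

def step(state, action):
    # Apply an action; bumping into a wall leaves the state unchanged
    r, c = state
    nr = max(0, min(SIZE - 1, r + action[0]))
    nc = max(0, min(SIZE - 1, c + action[1]))
    new_state = (nr, nc)
    reward = 10 if new_state == GOAL else -1    # every wasted step costs a little
    return new_state, reward

def best_value(state):
    return max(Q.get((state, a), 0.0) for a in ACTIONS)

for episode in range(500):                      # let the agent wander 500 times
    state = (0, 0)
    while state != GOAL:
        if random.random() < 0.1:               # occasionally explore at random...
            action = random.choice(ACTIONS)
        else:                                   # ...otherwise use what it has learned so far
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        new_state, reward = step(state, action)
        old = Q.get((state, action), 0.0)
        # Q-learning update: nudge the estimate towards (reward + discounted future value)
        Q[(state, action)] = old + 0.5 * (reward + 0.9 * best_value(new_state) - old)
        state = new_state

print("Value of the start square after learning:", best_value((0, 0)))

Nothing about this little agent knows what a "maze" is in any deep sense; it just keeps trying moves and remembering which ones paid off.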
So, to the main point: AI is interesting from a computer security point of view for several reasons, but the main one is CAPTCHAs. Now, it's a little bit of an abuse of the term, but for simplicity I use CAPTCHA to mean all such systems and the implementations thereof. The general idea is that you are shown a picture of somewhat distorted letters and you have to type the letters in to prove you are a human.
The reason for this is that automated programs, such as sign-up scripts and spambots, were used to make THOUSANDS of false email accounts for the purpose of spreading spam, viruses, boredom, maybe marketing, but generally evil stuff. If you can prove you are human, then all will be hunky-dory and you can sign up. A computer will almost always fail these tests. AIs can try to learn to pass them, for example, using neural networks, and some of these algorithms have had reasonable success. This is exactly why you sometimes get a CAPTCHA that is completely unintelligible: to protect against smart AIs, with the side effect of annoying the humans, ironically, just as the machines wanted.
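To give a rough idea of how a machine "learns to read" characters, here is a small sketch in Python using scikit-learn. It is emphatically not a CAPTCHA breaker; it just trains a tiny neural network on the library's built-in handwritten digits dataset, which I am using as a stand-in for distorted letters. Real attacks use much bigger models and much messier training data.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                          # 8x8 greyscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small neural network trained on labelled examples (i.e. supervised learning)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print("Accuracy on images it has never seen:", model.score(X_test, y_test))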
But let's shift back to self-aware AI for a second. The upshot is that if an AI were truly self-aware, it would be well beyond breaking CAPTCHAs. We would basically have a robot with human cognitive skills, but much more computing power. I leave it to your imagination (mostly sculpted by TV and movies) to do the rest.
SIDENOTE: More real/relevant blog posts to come soon.
Friday, 1 April 2011
More irony
So, after this post went up, this story surfaced pretty soon. I never got round to writing about it, because I have just moved from my old flat to a new one. So I've been kinda preoccupied. There really isn't more to say about this than how ironic it is. I may be tempted to do a post on Cross-Site Scripting soon, but we'll see how that goes.
Monday, 28 March 2011
Irony, thy name is SQL injection
As I clicked on my Slashdot bookmark, I for some reason said to my browser "Please give me something juicy", and it did not disappoint. It gave me this article. The sheer irony alone made me chuckle for 2-3 minutes. So, meine Damen und Herren (that's "ladies and gentlemen"; I just had to throw a little German in there), let's talk about SQL injections. I promise this won't hurt (much)!
So, to understand a SQL injection, we need to understand SQL. To understand SQL, we need to know what a database is. And that's where we will start. This may be a bit roundabout, because, to be frank, I find databases to be a dull and boring topic. We start at the bottom, with data elements. A data element is a single piece of data about an entity, e.g. Name, Gender, Age, Favourite Star Wars Character and so on. A record is all the specific data elements about a specific entity, e.g. {Saqib A Kakvi, Male, 23, Yoda} would be a record about me. If we have several such records stored as rows, we get a table. If we have several (generally related) tables, we have a database. In summary: a database is a collection of tables, each of which is a collection of records, each of which is a set of data elements.
Agreed, it's all fine and dandy having all this data nicely stored, but how do we access specific parts of it? The answer is Structured Query Language, or SQL (sometimes pronounced "sequel") for short. SQL is basically a language that allows us to get a section of a database based on some criteria, e.g. all the records of people who are over the age of 30. Although SQL gives you quite a lot of leeway in what you can ask for, it has a very strict grammar, which means that all SQL statements must have a very specific form and syntax, with all the right symbols in all the right places.
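To make this concrete, here is a tiny sketch in Python using the built-in sqlite3 module. The table, columns and records are just made-up examples along the lines of the ones above.

import sqlite3

conn = sqlite3.connect(":memory:")              # a throwaway, in-memory database
conn.execute("CREATE TABLE people (name TEXT, gender TEXT, age INTEGER, favourite TEXT)")
conn.execute("INSERT INTO people VALUES ('Saqib A Kakvi', 'Male', 23, 'Yoda')")
conn.execute("INSERT INTO people VALUES ('A N Other', 'Female', 42, 'Leia')")

# Ask for just the slice of data we care about: everybody over the age of 30
for row in conn.execute("SELECT name, age FROM people WHERE age > 30"):
    print(row)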
And this brings us to SQL injection. A SQL injection exploits the strict syntax of SQL together with the way many applications glue user input straight into their queries: carefully crafted input causes the SQL interpreter to go a little bit bonkers and produce some crazy result. By supplying very, for lack of a better phrase, well-formed malformed input, an attacker can change the structure of the query itself and recover parts of (and even all of) the database. When implementing anything that talks to a database, one must ensure that all malformed input is rejected and that user input can never alter the structure of a query (parameterised queries do exactly this), thus making SQL injections irrelevant.
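To see why that matters, here is a sketch of the classic mistake next to the usual fix, again using sqlite3. The table, the users and the malicious input are all invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
conn.execute("INSERT INTO users VALUES ('bob', 'hunter2')")

user_input = "' OR '1'='1"                      # a well-formed "malformed" input

# BAD: the input is glued straight into the query text, so it can change the
# structure of the query itself. This happily returns every user in the table.
query = "SELECT * FROM users WHERE username = '" + user_input + "'"
print(conn.execute(query).fetchall())

# BETTER: a parameterised query. The input is treated purely as data, never as
# part of the SQL statement, so the same trick returns nothing at all.
print(conn.execute("SELECT * FROM users WHERE username = ?", (user_input,)).fetchall())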
MySQL is software that helps you implement, run and maintain a database (known as a Relational DataBase Management System, or RDBMS). The MySQL company seems to have forgotten about this vulnerability in a primary part of their own system. As we have seen, MySQL's own website (and apparently sun.com) has been, oh so ironically, compromised due to a SQL injection vulnerability. Well, who would have thought it?
ME! ME! ME! Well, actually the thought had crossed my mind a few times and I thought it was funny, but sincerely hoped that it would never happen. Well done world, you continue to surprise me.
Sunday, 27 March 2011
Location, Location, Location! What you don't know that they know!
Alrighty then folks, I have been away for about a month. Between my holiday, work and trying to write another post (which I hope to publish some time soon), you have seen zero output from me. This is me correcting that. So, as I was browsing through the magical interwebz, I happened upon this article. This set off all kinds of crazy alarm bells in my mind. So, let's look at this issue in a bit more detail.
Historically, your mobile provider has always known where you are to some extent. With GSM (I believe there is a difference with CDMA, but I am not too familiar with it, so I will skip it) the provider would know which base station was nearest to you. With this information, they would know that you were in a certain area. The reason they need to know this is so that when you make a phone call, they know which base station to forward the authentication information to.
One little point to make here is that one can tell approximately how far you are from a base station based on signal strength. If you can find out the distance from several base stations, you can use a method called multilateration to calculate a more accurate location. The more distances you know, the more accurate the location is. This is how location-based services, such as Google Maps, work on a handset with no GPS.
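For the curious, here is a rough sketch in Python of how multilateration can be done with a least-squares fit, given the base-station positions and the estimated distances. The coordinates, the distances and the noise level are all made up; real implementations have to cope with far messier measurements.

import numpy as np

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # base stations
true_pos = np.array([3.0, 7.0])                                            # the handset
# Distances as estimated from signal strength: the true values plus a bit of noise
dists = np.linalg.norm(stations - true_pos, axis=1) + np.random.normal(0, 0.1, len(stations))

# Subtracting the first distance equation from the others gives a linear system
# A @ p = b, which we solve for the position p with least squares
A = 2 * (stations[1:] - stations[0])
b = (np.sum(stations[1:] ** 2, axis=1) - np.sum(stations[0] ** 2)
     - dists[1:] ** 2 + dists[0] ** 2)
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)

print("Estimated position:", estimate, "   true position:", true_pos)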
Now, it would be very, very, very easy for a service provider to obtain the location of any customer and store it, but it may be ILLEGAL!!! Under the European Data Protection Directive (and analogous legislation in other countries) no company may collect any personal data about you without your explicit consent. Now we need to clear up two points (in the simplest case):
1) Is your location personal data?
2) Did the company have your consent?
So, let's start with point 1. The definition of personal data is given in Article 2, Clause (a) of the Directive:
'personal data' shall mean any information relating to an identified or identifiable natural person ('data subject'); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity;
I think we can safely say that a person's location and their movements would definitely qualify. So that's one point out of the way.
Next, we need to know if this information was collected legally. I'm going to go out on a limb and say probably. Most companies have you agree to a Terms of Service, which nobody ever reads. This is because it tends to be dozens of pages written in legal parlance. It's enough to make any sane non-lawyer cry tears of sheer anguish. We all sign our consent to it having read the summary and hope we haven't signed away one of our kidneys.
In this case, it's not really the end of the world if our cellphone provider knows where we are. The problem arises when they decide to share that data. In the Terms of Service it may say that they can share this information with certain 3rd parties for any reason. This means that marketing companies could potentially track your every move and learn a lot about your preferences. This could be a problem.
This is an example of why privacy experts complain bitterly about the loss of privacy in the digital age. And they have every right to: with things like this, less and less information remains private. However, their constant and sometimes annoyingly repetitive rants tend to fall on deaf ears. Unfortunately, some people release this information themselves using applications such as Foursquare. It's the classic case of taking a horse to the river, except the horse drowns itself.
Despite all this, people such as Malte Spitz (link is in German) still have concerns about the privacy of their data. I would not recommend that anybody try to get their hands on the location data held about them, as it would probably not go down well. According to the article, it took 6 months of legal wrangling for Herr Spitz to get this data. It would take at least as long for you.
Now to sum up I would say "Big Brother is watching you!" but that is trite and cliché. And frankly a tad more alarmist than I would like to be at dark-and-scary-o'clock in the morning. So, I will go with the slightly milder "Be careful what you share on the Internet!"
Sunday, 20 February 2011
Just a little bit more on P vs. NP: Computational Complexity
So, after having a conversation with my friend Daniel (who will at some time help me redesign this blog), I realised that I had made one very big, and fairly erroneous, assumption with regards to P vs. NP: People know about computational complexity analysis. So, now I will correct that and explain computational complexity.
So, every computer program is basically a set of steps, which is called an algorithm. When designing any software, you first design the algorithm and then implement it. There are several reasons for this; the one of interest to us is complexity analysis. This is basically counting how many steps the program will take and how many CPU cycles are needed. The basic rule of thumb is that each addition, subtraction, multiplication, division, memory lookup and assignment takes 1 CPU cycle.
Of course, this is just a heuristic measure and the actual time taken will depend on the actual hardware, but it gives us a fair idea. Another advantage is that it allows us to compare two algorithms for solving the same problem and find out which one is more efficient. Just to illustrate this, I will demonstrate one of the most classic examples: adding up the numbers from 1 to n.
The simplest way to do that is to have a counter and add each number to it in sequence. The pseudocode algorithm is below. Anything after // is a comment and is simply there to explain what is happening.
int counter = 0; //Creates a counter, with an initial value of 0
int n = getUserInput(); //Asks the user to enter a number
for i = 1 to n //Repeat the following step, with i taking values from 1 to n
    counter += i; //Add i to the counter
next i //Increase the value of i by 1 and repeat the above
output counter //Output the result
If we count the number of steps, we see we need 2 steps for initialisation (the first two lines), n additions and 1 output. This gives a total of n + 3 steps. Now we look at another way of doing this, using the formula n(n+1)/2. In pseudocode, it would look like this:
int n = getUserInput(); //Asks the user to enter a number
int counter = (n*(n+1))/2; //Creates a counter and assigns it the value of the sum
output counter //Output the result
Now, if we count the steps, we get 1 step for initialisation, 1 addition, 1 multiplication, 1 division and 1 step for output. This gives us a grand total of 5 steps. Comparing that with the previous algorithm, we see that this is more efficient and does not depend on the actual input.
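For anyone who prefers running code to pseudocode, here are the two algorithms written out in Python; the choice of n is arbitrary.

def sum_loop(n):
    counter = 0
    for i in range(1, n + 1):     # n additions, just like algorithm 1
        counter += i
    return counter

def sum_formula(n):
    return (n * (n + 1)) // 2     # a handful of steps, whatever the size of n

n = 1000
print(sum_loop(n), sum_formula(n))   # both print 500500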
Most programs are more complex than this and have more complicated expressions for their running time. For these we use big-O notation. This gives us an approximate estimate of the effective running time. It also gives us very nice and neat complexity classes. Looking at the example above, algorithm 1 has a running time linear in the input size, O(n), while algorithm 2 has constant running time, O(1). We also have logarithmic time, O(log n), polynomial time, O(n^c), exponential time, O(c^n), and so on and so forth. Which, quite neatly, brings us to the main problem: P vs. NP.
All problems in computing have a solution, but not all solutions are efficient. Some would take minutes, others hours, and some would take hundreds of years to compute. Again, this all depends on your hardware, so actual time measures are not exactly useful. And this is why we have P and NP.
All the problems that can be solved by an algorithm with polynomial (or better) time complexity are classed as P. These can be solved in an acceptable amount of time, e.g. a couple of days. NP, strictly speaking, is the class of problems whose solutions can be checked in polynomial time; the interesting ones are those for which nobody knows a polynomial-time algorithm to find a solution, so the best known algorithms take more than polynomial time. The trick is that even though for small enough instances of such problems you can find a solution in reasonable time, it is the scaling that matters.
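To see why scaling is the whole story, here is a small Python snippet that compares a polynomial step count with an exponential one, assuming a (generous) billion steps per second. The speed and the exponents are just illustrative assumptions.

SPEED = 10 ** 9                   # assumed: a billion steps per second

for n in (10, 20, 40, 60, 80):
    poly = n ** 3                 # a polynomial-time step count
    expo = 2 ** n                 # an exponential-time step count
    print(f"n = {n:2d}: n^3 takes {poly / SPEED:.1e} s, 2^n takes {expo / SPEED:.1e} s")

The polynomial column stays tiny throughout, while the exponential column goes from fractions of a second to geological timescales before n even reaches 100.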
So, hopefully the above means that this post now makes a little bit more sense.
Tuesday, 15 February 2011
A note on passphrase guessing time calculations
Alrighty then, I had a few thoughts about the figures that I mentioned here and decided to do some math. It all falls apart very quickly. If we actually crunch the numbers, the titular hacker's computing power changes for every equation, producing some interesting values. I will assume there is some rounding off being done, and thus we lose accuracy in the answers, which would explain the fluctuations.
Now, don't just take my word for it; you can join in at home. Grab a pen, paper and a calculator, because we are dealing with HUGE numbers. Before we can begin, we need to do some housekeeping and define some variables. If you recall from my last post, I stated the 3 characteristics a good password should have, but here we only consider the two under attack, that is, length and complexity.
We define the complexity by the size of the alphabet, a, that is, the number of possible characters the password may contain. For lower case only, a = 26; for lower and upper case, a = 52. With symbols it depends on how many symbols are considered valid, but in general we have a = 52 + the number of valid symbols. The length, l, is fairly straightforward and self-explanatory. We define the total complexity of our password as
c = a^l (where ^ denotes exponentiation.)
Now, to calculate the time it would take for a hacker to guess your password, we need to know how many guesses they can make per second (or other appropriate time unit), which we denote by g. We can see that the time required is
t = c/g (in the worst case)
You may want to consider the average case, which is obtained by dividing by 2g instead of g.
If you compute g for lower case passwords of length 6 and 7, you see that there is a discrepancy in the g value. However, if you take the g from length 6 and plug it into the equation, you get a t of 4.333... hours, which is close enough to the 4 hours they have stated. This lends credence to my rounding theory, but does not prove it.
The rest is left as an exercise for the reader. So, go on and give it a whirl. Try this with different combinations and see how long it would take somebody to crack your password.
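If you would rather skip the pen and paper, here is a small Python sketch of the formulas above. The guesses-per-second figure and the alphabet sizes are assumptions; plug in whatever numbers the article you are checking claims.

def crack_time_seconds(a, l, g, average=False):
    c = a ** l                    # total complexity: c = a^l
    t = c / g                     # worst case: t = c/g
    return t / 2 if average else t

g = 10 ** 9                       # assumed: one billion guesses per second
for a, label in [(26, "lower case only"), (52, "mixed case"), (94, "full keyboard")]:
    for l in (6, 8, 10, 12):
        hours = crack_time_seconds(a, l, g) / 3600
        print(f"{label:16s} length {l:2d}: {hours:,.2f} hours (worst case)")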