Friday 26 March 2010

Small side-note

As I have been very busy the past couple of weeks, I haven't been keeping up to date with most of my TV viewing. I only recently managed to watch FlashForward S01E11. What made me laugh was the part where Dr. Simon Campos is trying to "brute force" an encryption algorithm. As with the portrayal of most computer-based security in the media, they try, but fail, at accuracy. Points for trying though.

More complaints about the iPhone (from other people this time)

As anybody who knows me will tell you right away, I abhor the iPhone (cf. my previous post). Now it has emerged that security professionals rank the iPhone as the "worst workplace risk." Essentially all "smartphones" are a risk in a secure environment; however, some are more so than others. The iPhone came 1st with 57%, followed by Android phones at 39%, BlackBerry at 28% and Nokia Symbian smartphones last with 13%. (Please note these figures are straight from the article and I am not exactly sure how the percentages were calculated.)

Apple's constant "bare-minimum" approach to security is what has landed them in this position. This philosophy of "just enough security to keep us afloat" is about the worst idea possible. Throughout the field's relatively brief history, Information Security professionals have realised one thing very quickly: the weakest get attacked the most.

It's simple: if your system is constantly being attacked, you should upgrade your security. This means that, if you did it right, there is now somebody who is less secure than you are. No prizes for guessing who the most attacked party is. Theoretically, this cycle of the least secure upgrading should eventually push everybody up to a decent threshold of security, and the world would be a better place. Of course, not everybody sees it that way, or they simply don't care.

As far as I am concerned, no mobile devices should be allowed to enter a secure environment. There is a plethora of possible security risks involved (which I will cover in another post). As the article says at the end, some companies are discouraging or even banning iPhones in the workplace. It's a start, but I think that all smartphones should be discouraged or banned. Just in case people think I simply don't like smartphones: I own one myself. Even so, they are still a security risk.

Friday 19 March 2010

JCSecurity (This may partly be my fault)

Recently, while discussing my blog with a colleague, he mentioned his former blog. He also mentioned that it had been offline for a while. The conversation moved on swiftly and I mostly forgot about it. Just a few minutes ago he informed me that it is back on track and he's *hopefully* going to keep it alive for quite a while. I would recommend you all read his blog.

Thursday 11 March 2010

Fault-Based Attack on RSA

So, right in the wake of Ross Anderson's attack on Chip-and-Pin comes another scary story, this time about the famous RSA encryption scheme. As a quick detour, I will briefly go over the key concepts of private-/public-key cryptography, after which I will go over the attack and then the implications. Readers familiar with the concepts are welcome to skip over the next 4 paragraphs.

Private-key or symmetric cryptography involves 2 parties sharing some secret key K (and obviously a cryptographic scheme). Party A can then encrypt a message m using K and get the resulting encryption or ciphertext c. This can then be transmitted to Party B (details of transmission and the various attacks therein are skipped). Once Party B has received c, they can decrypt it using K and read m. Which is all fine and dandy, until you add in more people.
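To make that concrete, here is a tiny toy sketch in Python (a simple XOR "cipher" I made up purely for illustration; it is emphatically not a real cryptographic scheme):

```python
import os

def toy_encrypt(key: bytes, message: bytes) -> bytes:
    """Toy shared-key 'encryption': XOR the message with a repeating key stream.
    Illustration only -- do not use as real cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

def toy_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR is its own inverse, so decryption reuses the same shared key K."""
    return toy_encrypt(key, ciphertext)

# Party A and Party B share the secret key K in advance.
K = os.urandom(16)
c = toy_encrypt(K, b"meet me at noon")   # A encrypts m and sends c
m = toy_decrypt(K, c)                    # B decrypts c with the same K
assert m == b"meet me at noon"
```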

If you add Party C to the system, then you need 2 more keys: 1 for A & C and 1 for B & C. If you re-use the same key, then anybody can read the message, which is not what we want. As the system grows, the number of keys grows quadratically (for n users, you need n(n-1)/2 keys). Even if you could generate and store that many keys, how do you give them to everybody in a secure manner? This is called the key distribution problem, which motivated public-key cryptography.
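A quick sanity check of that count (nothing assumed here beyond the n(n-1)/2 formula itself):

```python
def pairwise_keys(n: int) -> int:
    """Number of distinct shared keys needed so every pair of n users has its own key."""
    return n * (n - 1) // 2

for n in (3, 10, 100, 10_000):
    print(n, "users ->", pairwise_keys(n), "keys")
# 3 users need 3 keys, 100 users already need 4,950, and 10,000 users need 49,995,000.
```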

Public-key or asymmetric cryptography is based on the principle that every user creates 2 distinct but related keys, called the public key and the private key. The public key, as the name suggests, is public and is published in a "telephone directory" of sorts (details skipped; this leads to a few open problems). The secret key is kept secret, as the name so aptly states. Now, when Party A wants to send m to Party B, they look up B's public key in the directory, encrypt m using that key and send c as above. B can then decrypt c using their secret key. "So that solves the problem, right?" Not quite, as public-key cryptography is much slower than private-key.
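As a toy illustration of the public/private key idea, here is textbook RSA with tiny primes that I picked purely for the example (real keys are 1024 bits and up, with proper padding schemes on top):

```python
# Toy textbook RSA with tiny primes -- illustration only.
p, q = 61, 53
N = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent (published in the "directory")
d = pow(e, -1, phi)       # private exponent, kept secret (modular inverse; Python 3.8+)

m = 42                    # the message, encoded as a number smaller than N
c = pow(m, e, N)          # Party A encrypts with B's *public* key
assert pow(c, d, N) == m  # Party B decrypts with their *private* key
```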

So this leads us naturally to a compromise: hybrid encryption. The principle is that you send 1 message using public-key cryptography, which contains a key for private-key cryptography. All communications from then onwards are done using private-key cryptography. Again, this is just a brief idea, but it should be sufficient to appreciate the problems given below.
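Putting the two toy pieces together gives a rough sketch of hybrid encryption (the tiny RSA numbers are carried over from the toy example above; everything here is illustrative only):

```python
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    """The same toy symmetric idea as above: XOR with a repeating key stream."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# B's toy RSA key pair from the sketch above (tiny numbers, illustration only).
N, e, d = 3233, 17, 2753

# 1. A picks a fresh session key and wraps it with B's *public* key (the slow step, done once).
#    The toy modulus only fits one byte; real schemes wrap a full 128/256-bit key.
session_key = os.urandom(1)
wrapped = pow(session_key[0], e, N)

# 2. B unwraps it with the *private* key d.
recovered = bytes([pow(wrapped, d, N)])
assert recovered == session_key

# 3. Everything afterwards uses the fast symmetric scheme with that session key.
ciphertext = xor_stream(recovered, b"all further messages go this way")
plaintext = xor_stream(recovered, ciphertext)
```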

What this attack does (the full paper can be found here) is basically exploit a vulnerability that arises from the combination of hardware and software used to implement this scheme. As most of you are aware, computers have gotten really, really, REALLY fast. This is because transistors are getting smaller and smaller, allowing chip designers to fit more into the same space. Of course, this leads to the problem of bit errors: as transistors get smaller and closer together, small changes in voltage/current/magnetic field can cause a single bit to flip, which then propagates through the whole calculation. Having said that, there are solutions, which I really don't understand, so take my word for it: this problem is managed.

However, you can still induce these errors by passing a small, transient electrical pulse to the processor. These pulses last for less than 1 clock cycle, hence affecting at most 1 bit. Normally all this would do is give you an erroneous output, and be of no use to anybody, but this is where the software part comes in. This specific attack relies on the Fixed-Window Exponentiation (FWE) algorithm being employed. I will not detail the algorithm; those curious can look it up in the paper (link given above) and read up further if they so choose.

The basic operation in RSA is a single exponentiation: the message m raised to the secret key d (in standard RSA notation), that is m^d, modulo an RSA modulus N. Suffice to say, these calculations are not easy to perform on a chip and can be VERY SLOW. So there are several clever algorithms that do these computations in an acceptable time, such as FWE. It basically splits the binary representation of d into "windows", performs the calculations on each window and then moves on.
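For the curious, here is roughly what a fixed-window exponentiation looks like; this is my own simplified sketch with a window width w chosen for illustration, and the implementation attacked in the paper will differ in its details:

```python
def fixed_window_exp(m: int, d: int, N: int, w: int = 4) -> int:
    """Compute m**d mod N by scanning d in fixed w-bit windows, most significant first."""
    # Precompute m^0 .. m^(2^w - 1) mod N.
    table = [1] * (1 << w)
    for i in range(1, 1 << w):
        table[i] = (table[i - 1] * m) % N

    # Split the exponent d into w-bit windows, most significant window first.
    windows = []
    while d > 0:
        windows.append(d & ((1 << w) - 1))
        d >>= w
    windows.reverse()

    result = 1
    for win in windows:
        # Shift the partial result up by one window (square w times) ...
        for _ in range(w):
            result = (result * result) % N
        # ... then multiply in the precomputed power for this window of d.
        result = (result * table[win]) % N
    return result

# Quick check against Python's built-in modular exponentiation.
assert fixed_window_exp(1234, 2753, 3233) == pow(1234, 2753, 3233)
```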

What the attack does is induce a bit error in the calculation by sending a small, transient pulse, as described above. This generates a "broken" signature s', which the receiver will reject as incorrect. The attacker collects these broken signatures and, based on some complex maths, can start to extract single bits of the key. As the attacker learns more and more bits, it becomes easier to calculate the rest. I will at this point admit that even I am at a loss as to exactly how this works (mainly due to not having had enough time to fully study the attack).
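Here is a tiny simulation of just the "broken signature" part, using the toy RSA numbers from above; the fault is injected by hand and none of the key-recovery mathematics is attempted:

```python
# Toy RSA parameters from the earlier sketch (illustration only).
N, e, d = 3233, 17, 2753

m = 1234                       # message (already hashed/encoded in a real scheme)
s = pow(m, d, N)               # correct signature: verifies, since s^e mod N == m
assert pow(s, e, N) == m

# Simulate a transient fault: flip one bit somewhere in the computed signature.
s_broken = s ^ (1 << 3)

# The receiver's verification check fails, so s' is rejected as incorrect...
assert pow(s_broken, e, N) != m
# ...but the attacker, who also observes s', is the one who puts it to use.
```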

Although most people would think that this is just a theoretical result, they did present the results of an experiment as well. They set up a system using an FPGA board running a SPARC-based Linux system and managed to recover the key in 104 hours, which is just over 4 days. It must be noted that this was achieved using 81 machines running a distributed algorithm (104 hours × 81 machines is roughly 8,400 machine-hours, close to a year), giving an estimated 1 year on a single system. One must realise, though, that most attackers would have control of several computers on which to run their algorithm, thus making the distributed attack plausible.

So you may think, "Why do I care about this RSA cryptography stuff?" Well, simple: this is the basic scheme used in a hybrid key exchange as described above, to establish secure connections. So if this attack is being run against you, then after you have signed ~700 key-exchange messages, the attackers know your private key.

But then you may think, "700 messages is quite a lot!" Which, in all fairness, for a single user it probably is. But let's think of large corporations which, on a daily basis, engage in large numbers of secured connections with their customers for financial transactions (think of any medium to large e-retailer). They need to send signed certificates to large numbers of users (possibly in the thousands). If you were able to run this attack against them, well, that would be a problem.

I'm fairly sure you would not be able to achieve the attack speeds presented in the paper due to certain practical considerations, but you would still be in a fairly comfortable place. Even if it took you 20 days to crack a retailer's secret key, that's not that bad. Once you have their secret key, you can impersonate them very easily. Phishing sites may look real, but they cannot produce authentically signed certificates, which is where people sometimes catch on. With the secret key, you can sign certificates on behalf of the retailer and convince even the most aware and security-conscious users that you are legitimate. "Please enter your card details" and fraud ensues.

But now we look at the slightly less scary point of view: it's still fairly hard. It depends on a specific software/hardware combination, and it is no trivial task to determine whether a certain site's servers use that combination. Even if one were in possession of such information, reliably accessing the communications channels and performing this attack would be another Herculean task. Furthermore, to remain in the spirit of this attack, one must do all of this UNDETECTED, which is easier said than done.

So, although this could be a major problem for some people, there is a simple solution: change the software. This removes half the vulnerability, and may be enough to completely invalidate the attack. Of course, you could also change all your hardware, but this is more costly and a tad more difficult. There could be issues with your new hardware not working well with your old hardware, amongst other things.

But as I always say, there are two sides to the security coin. There is also the fact that it takes them roughly 100 hours to recover a 1024-bit secret key; if you use a larger key (2048-bit or, for really secure things, 4096-bit), the complexity of the attack, and thus the time required, greatly increases. Also, compromised keys can be revoked, albeit with some difficulty (the key revocation problem in itself entails at least a 2-hour lecture, so let's not go there). Key compromise is not the end of the world; it may cost you some reputation and some effort compensating your customers, but it may not kill you.

Tuesday 9 March 2010

Bloggers!

cf. my post on the 24th of Feb, mainly the part about bloggers. It seems somebody important actually reads this and decided to capture the phenomenon in a TV series (that last bit is the sarcasm; they did not get the idea from my blog). This week's episode of House MD dealt with the issue, albeit wrapped in the usual medical drama. I highly recommend this episode, and the show in general.

Friday 5 March 2010

Slight edit on Chip-and-Pin

cf. my post on 25th Feb; there is a small correction. It apparently is possible to fool online terminals; however, the point still remains that you cannot fool the bank verifying the PIN.