I have always maintained that Microsoft's security policy is essentially to stop you from doing anything stupid. The concept in itself is fairly sound, but the implementation is not. In the classic operating system debate of Windows versus Linux, the biggest point Linux users make is that they can modify any part of the operating system to suit their needs and desires. When I used Windows XP, I found all the little secrets to get my machine to do what I wanted it to do. But, I digress.
Microsoft basically adopted the "protect the users from themselves" approach in earnest in Windows Vista. There are several reasons why I (and others) am not too fond of Vista, but that's an aside. The idea is sound in theory, but the implementation left much to be desired. In hiding all the knives from the kids, they also hid all the forks and spoons. Yes, I agree that some of this functionality should not be available to normal users, but it should be available to admin users.
A whole plethora of useful features were hidden, but we shan't go into that now. The main thing is this article. Now, I know I'm a bit late to jump onto this, but I have been a tad lazy. Moving on. It seems that Hotmail will now ban common and, quite frankly, shit passwords. This is both a good and a bad thing.
As I have pointed out before, passwords can be tricky things. For something like your e-mail account, you need a decent password. So now, if Hotmail will reject your password because it's shit, that's good, right? Well, yes and no. It does stop dictionary attacks; however, it drastically changes the search space.
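To make that concrete, here is a minimal sketch of what a common-password filter boils down to. This is purely my own illustration of the idea, with a made-up word list; it is not Hotmail's actual code.

    // Purely illustrative sketch of a common-password filter; not Hotmail's real code or list.
    const bannedPasswords = new Set([
      "password", "123456", "qwerty", "letmein", "iloveyou", "abc123"
    ]);

    function isPasswordAllowed(candidate) {
      // Reject anything that appears verbatim on the banned list.
      return !bannedPasswords.has(candidate.toLowerCase());
    }

    console.log(isPasswordAllowed("password"));       // false - rejected as too common
    console.log(isPasswordAllowed("tr0ub4dor&3"));    // true - not on the list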
Previously, an attacker would run dictionary attacks in the hope that somebody was a fool. Now that that cannot happen, the system is foolproof, right? Yes, but to quote Douglas Adams: "A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools." It may sound a touch misanthropic, but people are stupid.
Eventually, people will find the least complex passwords that still pass through the Hotmail filter and then use those passwords repeatedly. Now dictionary-based attacks kick in again, just with a new dictionary. The new dictionaries may be larger than the old ones, but perhaps not by a significant amount.
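As a sketch of that counter-move (again my own illustration, with made-up tweaks and words, not anything observed in the wild): take the old dictionary, apply the laziest modifications people use to squeeze past a filter, and keep whatever gets through.

    // Illustrative sketch of how an attacker could rebuild a dictionary around the filter.
    const banned = new Set(["password", "qwerty", "letmein"]);   // the filter's blacklist
    const oldDictionary = ["password", "qwerty", "letmein"];
    const lazyTweaks = [w => w + "1", w => w + "123", w => w + "!"];

    const newDictionary = oldDictionary
      .flatMap(word => lazyTweaks.map(tweak => tweak(word)))
      .filter(candidate => !banned.has(candidate));   // keep only what still passes the filter

    console.log(newDictionary);
    // [ 'password1', 'password123', 'password!', 'qwerty1', ... ]
    // Only a handful of variants per word - the search space grows, but not by much.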
So, it is a good idea and I am very much in favour of it, but it could also backfire. Only time will tell; we shall wait and see.
Monday, 13 June 2011
Something that has been bugging me for a while
Do you have a facebook account? Rhetorical question, of course you do. If you don't, well, then you can leave now, because this post is all about *drumroll* FACEBOOK! Seeing as it is on my blog, one can safely assume that it is about facebook security. So, what have facebook done now? They are protecting you from them.
Confused? So was I. Basically, they have started up this new scheme to prevent Cross-Site Scripting (XSS), Clickjacking and other scripting-based vulnerabilities. Some of you may be unfamiliar with scripting and the vulnerabilities therein. Most modern webpages serve up dynamic content, making the experience different for each user. A good example of this is your facebook news feed, which is different from your friend's feed.
This is all achieved using scripting. A script is essentially some sort of program code that runs within your web browser. The catch is that you never explicitly execute scripts the way you do programs. They are embedded in the webpage and are executed when you open the webpage, or at some other suitable trigger. The problem, then, is that people can embed malicious scripts into pages and you will not realise they have run until it's too late.
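As a harmless illustration (my own toy page, not code from any real site), this is all it takes for a script to be "embedded"; it runs the moment the page loads, without the visitor clicking anything:

    <!-- Toy example of an embedded script; it runs as soon as the page is loaded. -->
    <html>
      <body>
        <p>Just an innocent-looking page.</p>
        <script>
          // Executed automatically while the browser parses the page.
          // A malicious version could post spam links or grab session data instead.
          document.title = "This script already ran - you never clicked anything";
        </script>
      </body>
    </html>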
I'm sure you've all had that one friend who has posted the same spam link to everybody and 10 minutes later warned you not to click it. That is basically what these malicious scripts do. So, facebook decided that they needed to address the issue, which they did pathetically.
What they have done is this: they now "read" your URLs and check them for any script. Again, ANY script. That means that if any script is detected, you will be logged out of facebook instantly as a security precaution. You may wonder why this is a problem. Well, as alluded to earlier, almost all actions on facebook are scripts. Seeing more items in your news feed, liking a post, commenting on a post, writing on somebody's wall, the chat feature: everything is a script. So now facebook sees you trying to do something legitimate and decides to kick you out. It doesn't always happen, but it happens often enough to be mildly aggravating.
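To show why "any script" is such a blunt rule, here is a guess at what a filter of this sort reduces to. This is my own sketch with made-up URLs and patterns, not facebook's actual code:

    // My own sketch of an over-eager "safety filter"; not facebook's actual code.
    function looksLikeScript(url) {
      // Flags anything remotely script-ish, with no attempt to distinguish
      // a real attack from the site's own dynamic features.
      return /(script|javascript:|onerror=|onload=)/i.test(url);
    }

    function handleRequest(url) {
      return looksLikeScript(url) ? "LOG OUT USER" : "SERVE PAGE";
    }

    // A perfectly ordinary internal request can trip it:
    console.log(handleRequest("/ajax/newsfeed.php?loader=feed_script&page=2"));
    // -> "LOG OUT USER"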
It's bad enough that you have to re-login, but what's even worse is that you go through the following two screens: (full size images here and here)
and then a third asking if you would like to share a link explaining how great facebook security is. Honestly, I would rather have a red-hot iron bar slapped onto my arm. This is because, if you read the messages carefully, you will notice a couple of "< br >" tags popping up.
This is not a security issue, but it does mean that whoever wrote those pages is probably a moron! "< br >" is the tag used in HTML to induce a line break. However, stricter standards such as XHTML insist on the self-closing "< br />" form for technical reasons (HTML5 accepts either). Believe me, it's a good idea. So, this led me to the conclusion that most facebook web developers have written sloppy HTML/PHP/JScript/whatever else they use, and that is what causes the "safety filter" to go off at least twice a week on my account. Also, I'm not sure how good the code for the "filter" is. I have very low expectations.
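My guess at the kind of slip that produces this, and it is only a guess on my part: the warning text was HTML-escaped after the line-break tags were already baked into the string, so the browser shows the tags as literal text instead of breaking the line. A minimal reproduction:

    // Speculative reconstruction of the bug: escaping a string that already
    // contains markup turns the tags into literal, visible text.
    function escapeHtml(text) {
      return text.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
    }

    const warning = "You have been logged out for your safety.<br>Please log in again.";
    console.log(escapeHtml(warning));
    // -> "You have been logged out for your safety.&lt;br&gt;Please log in again."
    // Rendered in the browser, that shows up as a literal "<br>" in the middle of the message.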
The first time I was logged out by this "filter", I was impressed that facebook had implemented such a feature. I guessed that it had some bugs, which was understandable. With each subsequent occurrence of being "filtered out", I grew more sceptical. Then, when I saw the horrendous HTML in the warnings, I gave up hope and waited for it to happen again so I could take screenshots.
I didn't have to wait too long. Most people would say, "Well, at least they tried!" To which I reply, "Welcome to cyber-security, where a half-assed attempt doesn't count!" Really, facebook, get your act together and actually make an attempt; then maybe I'll be impressed and stop writing evil comments on your security fan pages.