
Whenever possible, you should utilize existing frameworks that are well-maintained to handle your user logins. However, if you’ve inherited a project that handles its own logins, sometimes you simply have to make do and try to secure your application to the best of your ability.

Beyond using SSL, hashing your passwords, and making sure you salt your hashes…one of the other things you can do to improve the security of your website is to ensure you have some form of brute-force protection.

What is a Brute-Force Attack?

A brute-force attack is when an attacker tries many different passwords or keys, systematically or at random, hoping that one of them gets them into your system. Examples of brute-force attacks are:

  1. Session Hijacking: After a user logs in, they are often given a session key, typically placed in a cookie, that identifies them so that you know who they are on subsequent requests. If a person manages to “guess” a session key, they’ve gained access to your system and are able to perform actions as if they were the valid user.
  2. Password Cracking: An attacker, especially one who knows a username or e-mail that is in your system, can attempt to “guess” the password. If you have no brute-force prevention measures, they can make thousands upon thousands of login attempts, limited only by your network capacity. If they manage to get a successful login, not only do they have access to that user’s account on your server, but since people often reuse the same passwords all over the place, they will probably be able to use that password to gain access to that user’s other accounts.

Securing Against Brute-Force Attacks

The goal of brute-force protection is not to completely prevent someone doing a brute-force attack, but rather to make it more time-consuming and costly than it is worth. Basically, you are just trying to slow them down.

In any case, first, to prevent Session Hijacking, you should ensure that your session keys are being generated by a cryptographically secure random generator. If you use something like PHP’s uniqid or a time-based GUID, which are derived from the clock rather than from a secure random source, it becomes quite feasible for a person to predict the session keys that will get generated. Non-random keys are more guessable, and–thus–your other brute-force prevention measures will not be as effective.
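
For example, here’s a minimal sketch in PHP (random_bytes() requires PHP 7; on older versions you could substitute openssl_random_pseudo_bytes()):

// Generate a 256-bit session key from a cryptographically secure source.
$sessionKey = bin2hex(random_bytes(32));

// By contrast, uniqid() is derived from the current time in microseconds,
// so its output is largely predictable and NOT suitable for session keys.
$predictableKey = uniqid();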

Second, for both Session Hijacking and Password Cracking prevention, you should log failed validations by IP address. Here’s a quick design. I’ll use a database table, though you can also use a cache, which may be more practical unless you want IPs to be permanently blocked once they exceed some high threshold.

NOTE: IP addresses can be stored as integers (e.g. via the ip2long function in PHP). This means you have to convert them on the way in and out, but it makes the index more efficient.

CREATE TABLE BruteForceLog (
    IpAddress    BIGINT PRIMARY KEY,
    FailedCount  INTEGER UNSIGNED NOT NULL,
    LastFailure  DATETIME NOT NULL
);

Once you have this table, the process is simple.  I’ll try to turn this into a work-flow in the near future, but for now, hopefully this is simple enough to understand:

  1. When a user-request comes in, load up its IP address (if any) from your BruteForceLog.
  2. If the FailedCount > YourPreferredCutOff and the LastFailure > (Now – YourPreferredTimeBlock), then block the request.
  3. Else, validate the session key / password against your sessions or user account.
  4. If the session key or password is invalid, create or increment the BruteForceLog and end the Request.
  5. Else, user is validated, continue.
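
Here’s a rough PHP sketch of that flow, assuming a PDO connection to the MySQL table above and a couple of made-up thresholds (MAX_FAILURES and BLOCK_SECONDS); it converts the IP with ip2long as mentioned in the note above:

define('MAX_FAILURES', 10);    // YourPreferredCutOff
define('BLOCK_SECONDS', 900);  // YourPreferredTimeBlock (15 minutes)

function isBlocked(PDO $db, $ipAddress) {
    $stmt = $db->prepare(
        'SELECT FailedCount, LastFailure FROM BruteForceLog WHERE IpAddress = ?'
    );
    $stmt->execute([ip2long($ipAddress)]); // stored as an integer, per the note above
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if (!$row) {
        return false; // no failures logged for this IP
    }
    return $row['FailedCount'] > MAX_FAILURES
        && strtotime($row['LastFailure']) > (time() - BLOCK_SECONDS);
}

function logFailure(PDO $db, $ipAddress) {
    // Creates the row on the first failure, increments it on subsequent failures.
    $stmt = $db->prepare(
        'INSERT INTO BruteForceLog (IpAddress, FailedCount, LastFailure)
         VALUES (?, 1, NOW())
         ON DUPLICATE KEY UPDATE FailedCount = FailedCount + 1, LastFailure = NOW()'
    );
    $stmt->execute([ip2long($ipAddress)]);
}

On each request, you would call isBlocked() before validating the session key or password, and logFailure() whenever that validation fails.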

You may have noticed that some sites that implement this still tell you that your password is invalid once you’ve exceeded the limit, rather than telling you that your IP is blocked, so you technically don’t know when you’ve exceeded the FailedCount limit. This is an additional measure that keeps brute-force attackers from knowing how many of their attempts are actually being checked, which adds a small amount of extra security. However, this ambiguity is less important than delaying their attempts.

August 22nd, 2017

Posted In: Security


If you sign each request as described in the post Security Without SSL, Creating a Shared Secret to Sign Requests, you can be fairly confident that an attacker who doesn’t know the shared secret won’t be able to modify the user’s request. However, they can potentially “replay” the request over and over again. Imagine if an attacker were able to replay Amazon’s 1-click purchase request over and over: instead of paying for and getting the 1 hard drive you wanted, you pay for and get 100. Without SSL, a passive attacker can capture a request and replay the same POST over and over again, and if you don’t defend against it, it can create a lot of havoc that could ruin your relationship with your users. So, we need a way to prevent replay.

On the simple side, you can just have the client put a Request Time into each request. Then, when you receive the request on the server, you can validate that the Request Time is within a reasonable grace period of the server’s current time. If the request is older than the grace period allows, then you can reject the message. However, this relies on the user’s clock being set correctly, and a replay attack could still occur within the grace period.
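
A minimal sketch of that check in PHP, assuming the client sends a Unix timestamp in a requestTime field and you allow a (hypothetical) five-minute grace period:

$gracePeriod = 300; // seconds; five minutes is just an example
$requestTime = isset($_POST['requestTime']) ? (int) $_POST['requestTime'] : 0;

// Reject requests whose stated time falls outside the grace period
// (this also rejects times set too far in the future).
if (abs(time() - $requestTime) > $gracePeriod) {
    http_response_code(400);
    exit('Request expired');
}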

So, the better way is to use a nonce. In some cases, the client requests a nonce from the server and is then allowed to use that nonce one time. However, for this case, I’d say to do something easier. Since we already have a unique string (the signature we created from the shared secret), we can simply never allow the same signature to be used twice. This reduces the overhead on the server and makes the client’s life much easier since it won’t have to request any nonces.

Then, since we’ll probably want to keep our nonce database small so that we aren’t looking through millions of records on each request, we can delete any records older than 24 hours or so. However, once we start deleting records, we need to do Request Time checking. If we keep records in the database for 24 hours, then we just need to ensure that all requests coming in are less than 24 hours old (or whatever timeframe we use to store nonces). This way, we can keep our table of nonces small and still rest assured that a request from 5 days ago can’t ever be replayed again.
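
Here’s a rough sketch of that idea in PHP with MySQL, assuming a hypothetical Nonces table keyed on the signature and the same grace-period check as above:

// Assumed table: CREATE TABLE Nonces (Signature CHAR(64) PRIMARY KEY, CreatedAt DATETIME NOT NULL);
function isReplay(PDO $db, $signature) {
    // INSERT IGNORE inserts nothing if this signature (the primary key) was already stored.
    $stmt = $db->prepare('INSERT IGNORE INTO Nonces (Signature, CreatedAt) VALUES (?, NOW())');
    $stmt->execute([$signature]);
    return $stmt->rowCount() === 0; // 0 rows inserted => the signature has been seen before
}

function purgeOldNonces(PDO $db) {
    // Anything older than the Request Time grace window is rejected by the
    // timestamp check anyway, so it is safe to purge those signatures.
    $db->exec('DELETE FROM Nonces WHERE CreatedAt < NOW() - INTERVAL 24 HOUR');
}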

So, you may wonder what happens if the same user makes the same GET request twice in a row and produces the same signature hash. Well, obviously, the second request would get blocked. However, there is probably an overall application problem if the same person requests the same data twice within the same second. If the requests are even 1 second apart, the nonce will be completely different because the RequestTime in the request changes. And if you’re still worried about it, just add a milliseconds field to the request; the server doesn’t even have to know what the field is for, it just has to make the request signature unique.

As far as other legitimate requests go, the odds of producing the same signature on two different requests (a hash collision) with a hashing algorithm like SHA256 or better, which produces at least 64 characters, are so vanishingly small that I would place more bets on you being the first to create an FTL warp drive. So, I wouldn’t worry about it. Worst case scenario, the client script will have to make a second request when the first fails, the time will have changed by then, and there won’t be another hash collision until our sun dies out 5 billion years from now.

February 9th, 2015

Posted In: Security


Prerequisites: To sign requests with a shared secret, you will have to require your users to enable JavaScript. This may alienate some users, and ultimately I’m not sure it’s worth the effort just to avoid purchasing an SSL cert. But for some open source apps that have heavy JavaScript dependencies anyway, this might be a good fit, since you can’t control whether your users will set up a cert.
Strengths: A request signed with a Shared Secret prevents session hijacking by passive attackers (e.g. Firesheep).
Weaknesses: All non-SSL security is vulnerable to MitM attacks. Also, if you use a password hash as the Shared Secret, it can potentially be compromised at registration and during password changes.

As discussed in a previous post, Why you MUST use SSL, without SSL, there is no way to prevent a Man-in-the-Middle (MitM) attack from bypassing any security measures you take. However, by using a Shared Secret to sign your requests, you can make the task more difficult for an MitM as well as reduce the risk of a passive attacker getting the password, hijacking the session, or running a replay attack. A Shared Secret is something that only the server and client know, which implies that you never send it over the wire in either direction. This can be a bit problematic. How can both sides know a shared secret without ever communicating that secret across the web?

Well, ideally, if we truly want the communication to be a shared secret, we would need to communicate it to the user via an SMS message or an e-mail. By using an alternate channel, you reduce the risk of interception by any MitM or passive attacker. An SMS would probably be safest, since if the e-mail is transmitted over HTTP or unsecured POP3, an MitM would possibly be able to see that communication as well. But, at the very least, by transmitting through another method of communication, you greatly reduce the risk of the shared secret getting intercepted in conjunction with an MitM or side-jacking attempt.

A slightly less secure Shared Secret is the user’s password. The reason it is less secure is because it does have to be transmitted at least once: during registration and also anytime the user changes his or her password. Anyone listening at that point in time will know the Shared Secret, and your security can be bypassed. However, this is still significantly more secure than if you don’t use a Shared Secret. The advantage to your users in this case, versus an SMS message, is that they don’t have to jump through any additional hoops beyond what they are already used to. However, if we do this, we have to make sure that the password Shared Secret is not communicated during login…it should ONLY be sent during the initial registration or a password change, to reduce the frequency of communicating the shared secret.

How to Login Without Forfeiting the Shared Secret

Assuming that we dutifully hashed our password with a salt, the server doesn’t actually know the password, so our Shared Secret is actually the hash of salt+password rather than the password itself. As our Shared Secret, it is important that we never send the hash of salt+password across the wire in either direction. So, when the user logs in, we must first send them TWO salts. The first, passwordSalt, the client will use to generate the shared secret; the second, oneTimeSalt, will be used to prove knowledge of the shared secret to the server for the login.

var sharedSecret = hash( passwordSalt + password ),
    loginHash    = hash( oneTimeSalt + sharedSecret )
;
$.post(loginPath, {"email":email,"password":loginHash}, callback);

The server then validates in a method something like this:

function isValidPassword( $hashFromClient, $oneTimeSalt, $storedPasswordHash ) {
    // hash_equals() is a slow, time-constant equality check (PHP 5.6+).
    // The hashing algorithm must match whatever the client-side hash() uses.
    return hash_equals(
        hash( 'sha256', $oneTimeSalt . $storedPasswordHash ),
        $hashFromClient
    );
}

If the above method is successful, then we destroy the one-time salt and put a new one in its place for the next use. We then send the client a session token that it is able to use for the rest of its session to remain logged in. However, if the session token is all we use for validating the user on subsequent requests, then any novice MitM, or person using Firesheep, will be able to hijack the session. So, we need to use our shared secret to “sign” each request in a way that the server can ensure the data is actually from someone that knows the shared secret and hasn’t been modified.

Fortunately, this is actually quite simple. We already have our shared secret, the first hash. So, we can use that hash to sign each request, something like the following:

var signature = hash( sharedSecret + sessionToken + requestMethod + requestUrl + requestBody );

We append this signature to our request, and when the server receives the request, it can find the Shared Secret associated with that session token and then run the same hash. If the two hashes are equal, then the server can assume that the request comes from someone who knows the Shared Secret. So, about the only thing an attacker can do without knowing the Shared Secret is replay the exact same request multiple times. However, this is still potentially problematic. Imagine an attacker adding a million rows to your database by replaying the same POST request a million times…it could get ugly. So, next on the list is to look at Security Without SSL, How to Prevent Replay Attacks.
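
On the server, the verification might look something like this (a sketch: the session lookup function is hypothetical, and the hash algorithm just has to match whatever the client used):

function isValidSignature(array $request) {
    // Look up the shared secret stored for this session (hypothetical lookup function).
    $sharedSecret = findSharedSecretBySessionToken($request['sessionToken']);
    if ($sharedSecret === null) {
        return false;
    }

    // Recompute the signature exactly as the client did.
    $expected = hash('sha256',
        $sharedSecret .
        $request['sessionToken'] .
        $request['method'] .
        $request['url'] .
        $request['body']
    );

    // Constant-time comparison (hash_equals requires PHP 5.6+).
    return hash_equals($expected, $request['signature']);
}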

February 8th, 2015

Posted In: Security


Firesheep is one app that proved just how easy it is to hijack sessions from sites like Twitter and Facebook when the communication is sent plain-text over HTTP. Thus, if you have any kind of high-profile site, SSL has become a MUST. Without SSL, it doesn’t matter what security measures you use or how clever you are, because a Man-in-the-Middle (MitM) can simply send your users his own, unsecured version of your website without all your clever security. Your users will happily send the MitM all their information unsecured, and then the MitM can send the data up to your server using your clever security measures…and nobody will ever know the MitM was even there. So, I wish there were another answer, but if you truly want your site to be secure, you must use SSL.

The prevention for an MitM attack is NOT the asymmetrical RSA encryption, like some people probably think. By itself, RSA encryption only prevents passive attackers from reading communications sent between two parties. If encryption were the only security used, then someone who controls the router at your local Starbucks could route all amazon.com traffic to their own computer, and when a user requests amazon.com, they could send the user their own public key, and the user would happily encrypt all their user and credit card information using the attacker’s public key and send it to the attacker. The attacker would then decrypt it, harvest all the information, encrypt it with amazon’s public key, and send it up to amazon.com, and vice-versa. The MitM would be able to sit in the middle harvesting all the information sent between the two parties and be completely invisible…that is…IF encryption were the only security. Fortunately for us, it’s not.

It is not the RSA encryption that makes SSL secure, but rather the Trusted Certificate Authority (CA). When a user connects to a website over SSL, the browser takes the domain name they THINK they are on and the certificate (“public key”) they were given, and verifies that the certificate was actually issued for that domain and signed by a CA the browser knows it can trust. If the information doesn’t match, or the website used a CA that your browser doesn’t trust, then the user gets a very ugly security warning from the browser telling the user not to trust the site.

Using a signed certificate from a CA, given the same scenario above where the attacker routes all amazon.com traffic to their own computer at IP 10.10.0.10, the user’s browser will read the certificate, try to verify it, and find that the amazon.com certificate it received was “self-signed” or from a CA it doesn’t know or trust, etc., and the resulting browser warning will hopefully scare the user away. Thus, the only way for an MitM to get away with such a scenario would be to get a Trusted CA to issue them a certificate for amazon.com, which is why CAs make you jump through hoops to get a certificate and charge so much money. CAs actually have a lot of responsibility to their clients and to the users whose browsers trust them.

In any case, if you need a secure site, you MUST use SSL. If you don’t use SSL, you will ALWAYS be vulnerable to an MitM attack. By adding more complicated security, you can make it a lot more work for an MitM, and if you have a small user-base, fewer MitMs will ever be in a position to compromise your site. However, ultimately, no matter what you do, your site is vulnerable to MitM attacks without SSL and a signed certificate from a Trusted CA.

February 7th, 2015

Posted In: Security


If you store your users’ passwords in plain text, life is easy for you, and it’s equally easy for someone to steal them. If someone manages to log in to your database; if someone makes a successful SQL-injection attack; if the next developer on the project, or that contractor you hired for a week to optimize the database, is less trustworthy than you…any of these scenarios could compromise every one of your users’ accounts. And maybe those accounts can’t hurt anything on your site, but unfortunately, people often use the same password for your site that they use for their e-mail account, etc. So, if someone gains access to your users’ usernames and passwords, they can potentially access those users’ e-mail, bank accounts, amazon accounts, facebook accounts, etc. So, just don’t do it. This is the first and most sacred rule in handling your users’ passwords. You should treat passwords as carefully as you would credit card details and social security numbers. I don’t care what your reason is for storing passwords in plain text…your reason is not worth the risk to your users, and there is always another way to do what you are trying to do. There is NO valid reason to store a password in plain text.

Additionally, you should ALWAYS store the password using a ONE-WAY hash, and that hash must be salted as pointed out in Why You MUST Salt Your Hashes. A one-way hash is NOT “encryption”. A password that is encrypted can also be decrypted. If you store a password using strong AES-256 encryption with a key that is stored carefully in your source code in a protected folder that can’t even be accessed from the web at all…it may seem like the passwords are safe, but they aren’t. Ask yourself: would YOU, the developer, be able to decrypt the passwords? If YOU can do it, then so can someone else. Remember, the next developer, or that contractor, may not be as honest as you are. My goal, and your goal, should be to prevent anyone, including yourself, from being able to access the passwords.

I can hear the comments rolling in before I’ve even posted this about how you NEED to have the password encrypted so you can decrypt it, because you have to use the password to login to some other legacy system from your system, and that other system requires the password. Well, okay, if you don’t control another system, and that system was designed poorly, then maybe–just maybe–there is a valid reason…but that’s about the only reason I can think of. And there is still no excuse to store it in plain text.

Which Hashing Algorithm to Use?

Now, there is a lot of information out there saying to not use MD5 and use SHA256 instead. This is only mediocre advice. SHA256 is definitely safer than MD5. However, SHA256 is still a “fast” algorithm. While my quick tests show that it would take my computer hundreds of years to crack an 8 character password through random brute-force methods, I have heard reports of a custom 25 GPU system that is able to do 22 billion hashes per second with SHA256, which means an 8 character password could be brute-forced in a few hours. So, it is important to use a “slow” hashing algorithm.

Based on some quick, incomplete research, it appears that the most common or most talked-about algorithms that do “slow” hashing are:

  • bcrypt
  • scrypt
  • PBKDF2
  • SHA512crypt

Among these, it seems that bcrypt is the most trusted, while scrypt might have some good future promise but doesn’t have as much software support from what I could find. These algorithms work by hashing the password thousands of times with configuration options to increase the number of rounds as computer processing speeds up. Forcing the password to be hashed thousands of times to get the end-result will only take a couple-hundred milliseconds when the user types in a valid password, so it’s barely noticeable to the user. However, to the person trying to brute-force the password, it becomes expensive in terms of time and financial cost of processing power.
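
On PHP 5.5+, for example, the built-in password_hash() and password_verify() functions use bcrypt by default, generate a random salt for you, and let you raise the cost factor as hardware gets faster:

// Hashing at registration or password change. A cost of 12 is just an example;
// raise it over time as hardware speeds up.
$hash = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]);

// Verification at login (the salt is stored inside $hash, and the
// comparison is handled internally).
if (password_verify($submittedPassword, $hash)) {
    // valid password
}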

So, pick a slow hashing algorithm and hash your passwords with a salt. It’s the only way to truly protect your users’ information to the best of your ability. Your users will never know how grateful they really are, but you’ll also never have to tell them about a compromised password if your database table gets leaked.

February 6th, 2015

Posted In: Security


In another post, Why You MUST Hash Your User’s Passwords, I talked about the importance of hashing rather than encrypting (or, worse, storing in plain text) your users’ passwords. I also discussed why it is important to use a hashing algorithm designed to be “slow” such as bcrypt, scrypt, PBKDF2, or SHA512crypt. However, even if you use the slowest possible hashing algorithm ever known to man, if you don’t also salt the passwords, they will be much more easily compromised.

Unfortunately, your users are DUMB. They are going to use passwords like “molly77”. As a quick example, a SHA256 hash is considered fairly secure: brute-forcing a strong password hashed with SHA256 would take hours even for the best machines. However, a SHA256 hash of a DUMB password is NOT secure at all. E.g., here’s the SHA256 hash of “molly77”:

1a17ea140e6d274c92bd053559f649370e35866c88a3da5e8065178810a82a03

Take that hash and go to this link: https://crackstation.net/, paste it into the textbox, submit it, and you’ll find that it very quickly finds the result from a Rainbow Table lookup. A Rainbow Table is a list of hashes and their corresponding passwords, and a rainbow table can be easily generated when either you don’t salt your passwords or you use the same salt for every password. Thus, it is important that you create a random salt for each password whenever the user registers or updates their password. Salting is extremely easy: all you do is prepend or append a string to the password, e.g. our molly77 password becomes something like:

molly77257401b6e671b38ec98fc3fb107530f8

When we hash this new password string, it becomes:

d2131964d0071fa9d191199ad218632866fbf4744511d1eb29ac4f6247e7497e

And when we look up this hash at the link I provided above, the rainbow table doesn’t work. It’s so easy that there is no valid reason not to do it. The only additional work is storing the salt in the user row alongside the hashed password so that when the user logs in with the right password, we can produce the same hash. The validation method looks something like this:

function isValidPassword( $submittedPassword, $salt, $hashedPassword ) {
    // areHashesEqual needs to be a "length-constant" comparison method,
    // as using "==" is vulnerable to timing attacks
    return areHashesEqual( hashMethod( $submittedPassword . $salt ), $hashedPassword );
}

Randomly salting your passwords makes rainbow tables useless and forces a would-be attacker to use brute-force methods if they want to crack your users’ passwords. If you’ve, additionally, used a slow hash like PBKDF2, bcrypt, etc., then brute-force methods become much more expensive, and it becomes much less likely that an attacker will bother with the passwords even if they happen to get a dump of your database table.

Best

  • Use a well-supported PBKDF2, BCrypt, or SCrypt library that takes care of randomly generating the salt for you and allows you to increase the rounds progressively.
  • Use said library’s “length-constant” method to compare the stored hash with the password submitted by the user.

Second Best

  • Use a “slow” hashing algorithm
  • Generate a sufficiently long salt. A rainbow table can still be generated if your salt has too few possible combinations. The PBKDF2 recommendation is a minimum of 8 bytes.
  • At the bottom of this page, I’ve posted a PHP example for using the crypt method correctly before PHP 5.5, when password_hash() (and, in PHP 5.6, hash_equals()) became available.

Don’t

  • Try to improve the hashing by coming up with some super-secret way to salt your passwords. It’s a waste of time. You should assume that if an attacker gets a data dump, they also have your source code, in which case you’ve done nothing except add more work for no additional security.

<?php
namespace Modules\Authentication\Helpers {
    class BCrypt {
        public static function hash( $password, $cost=12 ) {
            // Build a 22-character salt from a cryptographically secure source,
            // mapping '+' (not in crypt's ./A-Za-z0-9 alphabet) to '.'.
            $base64 = base64_encode(openssl_random_pseudo_bytes(17));
            $salt = str_replace('+','.',substr($base64,0,22));
            $cost = str_pad($cost,2,'0',STR_PAD_LEFT);
            // '2y' is the corrected bcrypt identifier available from PHP 5.3.7 onward.
            $algo = version_compare(phpversion(),'5.3.7') >= 0 ? '2y' : '2a';
            $prefix = "\${$algo}\${$cost}\${$salt}";

            return crypt($password, $prefix);
        }
        public static function isValidPassword( $password, $storedHash ) {
            // crypt() reuses the algorithm, cost, and salt embedded in the stored hash.
            $newHash = crypt( $password, $storedHash );
            return self::areHashesEqual($newHash,$storedHash);
        }
        private static function areHashesEqual( $hash1, $hash2 ) {
            // Length-constant comparison so timing doesn't reveal how many
            // leading characters of the hashes match.
            $length1 = strlen($hash1);
            $length2 = strlen($hash2);
            $diff = $length1 ^ $length2;
            for($i = 0; $i < $length1 && $i < $length2; $i++) {
                $diff |= ord($hash1[$i]) ^ ord($hash2[$i]);
            }
            return $diff === 0;
        }
    }
}

February 5th, 2015

Posted In: Security
