Xing \ Creative \ Coding

Web Application Software Development

Whenever possible, you should use a well-maintained existing framework to handle your user logins. However, if you’ve inherited a project that handles its own logins, sometimes you simply have to make do and secure your application to the best of your ability.

Beyond using SSL, hashing your passwords, and making sure you salt your hashes…one of the other things you can do to improve the security of your website is to ensure you have some form of brute-force protection.

What is a Brute-Force Attack?

A brute-force attack is when someone randomly or pseudo-randomly tries different passwords or keys to get into your system. Examples of brute-force attacks are:

  1. Session Hijacking: After a user logs in, they are often given a session key, typically placed in a cookie, that identifies them so that you know who they are on subsequent requests. If a person manages to “guess” a session key, they’ve gained access to your system and are able to perform actions as if they were the valid user.
  2. Password Cracking: An attacker, especially if they know a username or e-mail that is in your system, can attempt to “guess” the password. If you have no brute-force prevention measures, they are able to try thousands upon thousands of login attempts to try to get the right answer…only limited by your network capacity. If they manage to get a successful login, not only do they have access to that user’s account on your server, but since people often use the same passwords all over the place, they will probably be able to use that password to gain access to that user’s other accounts.

Securing Against Brute-Force Attacks

The goal of brute-force protection is not to completely prevent someone doing a brute-force attack, but rather to make it more time-consuming and costly than it is worth. Basically, you are just trying to slow them down.

In any case, first, to prevent Session Hijacking, you should ensure that your session keys are being generated by a cryptographically secure random generator. If you use something like PHP’s uniqid or a non-random GUID, since these are often time-based rather than random, it’s quite possible for a person to predict the session keys that will get generated. Non-random keys are more guessable, and–thus–your other brute-force prevention measures will not be as effective.
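
For example, in PHP 7+, a cryptographically secure session key can come from the built-in random_bytes() function. This is a minimal sketch; the 32-byte length is my own choice, not a requirement:

```php
<?php
// Generate a 32-byte (256-bit) session key from the OS's CSPRNG.
// Never use uniqid(), mt_rand(), or anything time-based for this.
function generateSessionKey(): string
{
    return bin2hex(random_bytes(32)); // 64 hex characters
}

$sessionKey = generateSessionKey();
```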

Second, for both Session Hijacking and Password Cracking prevention, you should log failed validations by IP address. Here’s a quick design. I’ll use a database table, though you could also use a cache, which may be more practical unless you want IPs that exceed some high threshold to be permanently blocked.

NOTE: IP addresses can be stored as integers (e.g. with the ip2long function in PHP). This means you have to convert them on the way in and out, but it makes the indexing more efficient.
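
That conversion looks like this (note that ip2long() only handles IPv4; IPv6 addresses need a different approach, such as inet_pton()):

```php
<?php
// Convert an IPv4 address to an integer for compact, indexable storage.
// On 32-bit systems the result can be negative, so store it unsigned.
$ipInt = ip2long('192.168.1.100');

// ...and convert back to dotted notation on the way out.
$ip = long2ip($ipInt); // '192.168.1.100'
```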

CREATE TABLE BruteForceLog (
    IP           INT UNSIGNED NOT NULL PRIMARY KEY,
    FailedCount  INT NOT NULL DEFAULT 1,
    LastFailure  DATETIME NOT NULL
);

Once you have this table, the process is simple. I’ll try to turn this into a workflow diagram in the near future, but for now, hopefully this is simple enough to understand:

  1. When a user-request comes in, look up its IP address (if any) in your BruteForceLog.
  2. If the FailedCount > YourPreferredCutOff and the LastFailure > (Now – YourPreferredTimeBlock), then block the request.
  3. Else, validate the session key / password against your sessions or user account.
  4. If the session key or password is invalid, create or increment the BruteForceLog and end the Request.
  5. Else, user is validated, continue.
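
A sketch of that flow in PHP might look like the following. The table matches the design above, but the threshold constants and the use of PDO are my own placeholders; adapt them to your data layer:

```php
<?php
const MAX_FAILURES  = 10;  // YourPreferredCutOff
const BLOCK_SECONDS = 900; // YourPreferredTimeBlock (15 minutes)

// Step 1 & 2: look up the IP and decide whether to block the request.
function isBlocked(PDO $db, string $ip): bool
{
    $stmt = $db->prepare(
        'SELECT FailedCount, LastFailure FROM BruteForceLog WHERE IP = ?'
    );
    $stmt->execute([ip2long($ip)]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    return $row !== false
        && $row['FailedCount'] > MAX_FAILURES
        && strtotime($row['LastFailure']) > time() - BLOCK_SECONDS;
}

// Step 4: create or increment the log entry on a failed validation.
function logFailure(PDO $db, string $ip): void
{
    $now  = date('Y-m-d H:i:s');
    $stmt = $db->prepare(
        'UPDATE BruteForceLog
            SET FailedCount = FailedCount + 1, LastFailure = ?
          WHERE IP = ?'
    );
    $stmt->execute([$now, ip2long($ip)]);

    if ($stmt->rowCount() === 0) { // no row yet for this IP
        $db->prepare('INSERT INTO BruteForceLog (IP, FailedCount, LastFailure)
                      VALUES (?, 1, ?)')
           ->execute([ip2long($ip), $now]);
    }
}
```

The two-statement upsert in logFailure is portable across databases; on MySQL you could collapse it into a single INSERT ... ON DUPLICATE KEY UPDATE.
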
You may have noticed that some sites that implement this still tell you that you have an invalid password once you’ve exceeded the limit, rather than telling you that your IP is blocked, so you technically can’t tell when you’ve exceeded the FailedCount limit. This is an additional security measure: brute-force attackers don’t learn how many of their attempts are actually being checked. However, this ambiguity is less important than delaying their attempts in the first place.

August 22nd, 2017

Posted In: Security


The examples below are in PHP, but a knowledge of any OOP language should suffice in understanding the examples.

Agile programmers like to throw around their acronyms. Sometimes it’s just to make them look smart, but the better purpose is so that we can communicate a plethora of information without taking up a lot of time. SOLID programming is one of those acronyms that contains quite a few principles about code design that are foundational to good OOP programming.

Single Responsibility Principle

To start us off, the first letter “S” stands for Single Responsibility Principle. Perhaps the easiest way to understand it is by knowing its opposite. The opposite of this principle would be the god-object, which does everything. A god-object might format your strings, open the connection to the database, query for results, and handle business logic. The god-object is a jack of all trades and master of none. The Single Responsibility Principle in contrast states that your objects should only do one thing (and, if I might add, do it well).

Why is this important?

Let’s say that you have an e-mailing class MailMessage in your open source project. The class is, initially, just an OOP wrapper for PHP’s mail() function, with methods like addRecipient(name,email) that streamline the composition of the e-mail. However, you start to realize that various platforms and requirements are going to require very different ways of sending e-mail:

  • Unix sendmail requires headers to be separated with \n.
  • Windows MTAs can’t handle the name in the to parameter of the mail() function.
  • Some people will likely need to send e-mail via SMTP.
  • It would be useful for unit tests to mock the sending process but not actually send e-mail.

So, in thinking about this problem, we have discovered two responsibilities. The first responsibility is the interface by which a user builds the e-mail, and the second responsibility is how that message is sent. Thinking of the Single Responsibility Principle we decide that we are going to separate these responsibilities into two separate classes: MailMessage and MailSender.

The MailMessage class is how the user builds the e-mail. This is unlikely to ever need to change. So, we’ll build this as a concrete class. However, we’ve already identified that our other responsibility–how to send the e-mail–is likely to change a lot. Just by separating these responsibilities, and knowing where things are likely to change, we’ve somewhat stumbled into the Open/Closed Principle (the “O” in SOLID).

Open/Closed Principle

If you write the MailMessage class with \r\n between headers, and someone using sendmail needs \n, they are going to have to directly modify your MailMessage class. The Open/Closed Principle reminds us that we want to avoid that scenario, because once they modify your class they can’t update your library to the latest without breaking their implementation. After we complete our MailMessage and MailSender classes they should be “closed to modification,” but “open to extension”. Anything that may need to change for different implementations should have an avenue of extension that does not require the original classes to be modified.

The obvious avenue of “open to extension” is inheritance. If you can inherit the class and modify what you need to modify, you can do whatever you want. However, that has problems of its own when the parent class’s behavior changes unexpectedly, so we don’t want to force users to extend our class in order to modify its behavior. In this case, we already know where the extension points need to be, which is that we need multiple implementations of MailSender. To give an easy method of extension without modification, we are going to use another SOLID principle–out of order this time–the Dependency Inversion Principle.

Dependency Inversion Principle

The Dependency Inversion Principle states that our high-level library class, MailMessage, should not depend on the low-level implementation details of our MailSender class. Instead, both the library and the client code should depend on abstractions. So, to avoid implementation details in our high-level class, we create an interface: IMailSender. Then, both our MailMessage class and our implementations will depend on this abstraction but not have any knowledge about each other. Here’s a rough concept of what that would look like:

interface IMailSender {
    /**
     * Returns true on success, or false on failure
     * @return bool
     */
    public function send( $to, $subject, $message, array $headers );
}

Then, in our MailMessage class, we implement dependency-injection:

class MailMessage {
    private $_sender;

    public function __construct( IMailSender $sender ) {
        $this->_sender = $sender;
    }
}

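To illustrate the payoff, here’s what an injected implementation could look like. The FakeMailSender class and the send() wrapper on MailMessage are my own inventions for the example, covering the unit-testing scenario identified earlier:

```php
<?php
interface IMailSender {
    /** @return bool true on success, false on failure */
    public function send( $to, $subject, $message, array $headers );
}

// A test double that records messages instead of sending them.
class FakeMailSender implements IMailSender {
    public $sent = [];

    public function send( $to, $subject, $message, array $headers ) {
        $this->sent[] = compact('to', 'subject', 'message', 'headers');
        return true; // per the interface contract: true on success
    }
}

class MailMessage {
    private $_sender;

    public function __construct( IMailSender $sender ) {
        $this->_sender = $sender;
    }

    // Delegate actual delivery to whichever sender was injected.
    public function send( $to, $subject, $body ) {
        return $this->_sender->send( $to, $subject, $body, [] );
    }
}

// Client code chooses the implementation; MailMessage never knows which.
$mail = new MailMessage( new FakeMailSender() );
```

An SMTP or sendmail implementation would slot in the same way, with no change to MailMessage.
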
So, that takes care of the SO**D, but unless you’re British, SOD isn’t a useful acronym. So, what about the “L”, and the “I”?

Now that we have an interface IMailSender, and we have dependencies in our MailMessage, we can discuss these two items.

Liskov Substitution Principle

I put a comment in the interface that it returns true on success and false on failure. Any implementation and any subtype to those implementations should be able to be “substituted” into the MailMessage class without altering the correctness of the MailMessage class. In this simplistic example, if an implementation started returning “success”/”failure” instead of true/false, it would alter the behavior of the program, violate this principle, and break the MailMessage implementation.

Now, this is where I stick in my shameless plug about strongly-typed languages being better, because in a strongly typed language like C#, trying to return a string from a bool method would cause a compile-time failure. However, PhpStorm and possibly some other IDEs do try to give intellisense that communicates the same problems when you utilize DocBlocks.
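
For what it’s worth, PHP 7+ can get part of the way there too: declaring a bool return type on the interface makes the contract explicit, and under strict_types a misbehaving implementation fails loudly instead of silently coercing. A sketch:

```php
<?php
declare(strict_types=1);

interface IMailSender {
    // The bool return type documents and enforces the contract; under
    // strict_types, an implementation returning "success" would raise
    // a TypeError instead of quietly being coerced to true.
    public function send( string $to, string $subject,
                          string $message, array $headers ): bool;
}
```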

Interface Segregation Principle

In a nutshell, interfaces should be small and not have a lot of methods that an implementation has to define (or throw NotImplementedException for). In our case, we’ve done this. An implementation will only need to implement the send() method. If we had added a method setHeaderDelimiter() to our interface, this might be good for the default mail() implementations, but would not be needed or used by the SMTP implementations which would always use \r\n. Those implementations would have no need for such a method, so it’s better to leave it out and handle such concerns in the objects they pertain to.


It’s not the end of the world if you don’t follow all of these principles all of the time. These principles can also add to the complexity of your application, so if You-Aren’t-Going-to-Need-It (YAGNI), it may not be worth the time investment. However, it is a good idea to fully understand the principles so that you know which ones you are breaking, why you are breaking them, and possible code smells that can result as the application changes.

March 24th, 2015

Posted In: Software Design


I was reading up on some interview questions on Scott Hanselman’s blog today, and in the comments I saw someone list Singleton and Service Locator as Anti-Patterns. I thought, “Why would someone go against the almighty Martin Fowler?” These are very common patterns of development. I can kind of see how the Singleton Pattern might be an Anti-Pattern–as it forces a dependency on a static method to instantiate the class. In fact, one of the very reasons that I don’t like that aspect of the Singleton Pattern is that it makes it unusable with some other patterns, like the Service Locator…the Service Locator can’t instantiate it. In any case, I was mildly surprised to see that the Service Locator Pattern was viewed by some as an Anti-Pattern, and I wondered how I’d never come across the anti-SL ravings before.

What it seemed to come down to was that–with the Service Locator pattern–you get runtime errors instead of compile-time errors. Even more problematically, if you are using an API designed by someone else, you may not know that you need to configure a JsonSerializer for their library to work (or something to that effect). In such a case, being required to pass a JsonSerializer into the constructor of the API’s object would be more clear. Okay, that makes sense. I can see how being more explicit in what your object requires, especially when developing an API, is useful…and how using the Service Locator might be viewed as a code-smell. I do know that I have a grudge against Zend Framework 2. They seem to haphazardly create services on the fly. It seems that almost every ZF2 object is accessed via the service locator, and it just hasn’t seemed very developer-friendly. So, I can totally get on board with the idea that the Service Locator Pattern can have a code-smell. However, I’m still not sure I’m willing to call it an anti-pattern.

If you’ve ever set up database connection details in a config file (e.g. web.config), then–in essence–you were utilizing the Service Locator Pattern to specify the driver and connection details. To say that connecting to a database is an anti-pattern seems a bit silly. However, it has all the same problems. If you don’t know that a module requires a database connection, and you don’t configure it or you configure it incorrectly, then you get runtime errors instead of compile-time errors. So, it has all the same potential for “code smell” that the Service Locator pattern has in terms of external modules, but to my recollection we don’t call connecting to a database an anti-pattern.

Personally, ZF2 aside, I still like using the Service Locator. Call it a personal weakness. When using something like Zend Framework 2, I don’t have control over the instantiation of the Controller classes. So, I can’t inject my services into the constructor of the controller without rewriting Zend. The Service Locator is also extremely useful in handling Convention over Configuration dependencies that can still be overridden with configuration. For instance, given the URL /user/register, we can dynamically get the dependency for the string ‘UserController‘. If none is found, we can look for the namespace ‘Ui.UserController‘, but the developer can easily define ‘UserController‘ to be assigned to ‘Modules.Authentication.Controllers.UserController‘ by explicitly giving the Service Locator a definition of ‘UserController’ so that the router doesn’t have to fallback to the default convention.
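
A minimal sketch of that convention-with-override lookup might look like the following. The ServiceLocator class and its method names are hypothetical, not taken from any particular framework:

```php
<?php
class ServiceLocator {
    private $definitions = [];

    // Explicit configuration: the developer assigns a name to a class.
    public function define( $name, $class ) {
        $this->definitions[$name] = $class;
    }

    // Resolution: an explicit definition wins; otherwise fall back
    // to the convention-based namespace.
    public function resolve( $name, $conventionNamespace = 'Ui' ) {
        if (isset($this->definitions[$name])) {
            return $this->definitions[$name];
        }
        return $conventionNamespace . '.' . $name;
    }
}
```

With this, the router asks for 'UserController' and gets 'Ui.UserController' by convention, unless the developer has explicitly mapped it to 'Modules.Authentication.Controllers.UserController'.
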

In the end, I think ANY pattern can create a code-smell if used carelessly. Imagine the mess you could make by doing almost everything with the Decorator Pattern. Heck, we could even write about all the problems that can arise from inheritance, but the problem isn’t inheritance, which is a useful and needed pattern…the problem is the implementation. So, I’m not ready to write off the Service Locator quite yet. However, I will try to keep this potential problem in mind, especially if I ever develop an API that requires the consumer to provide the implementation.

February 10th, 2015

Posted In: Software Design


If you sign each request as described in the post Security Without SSL, Creating a Shared Secret to Sign Requests, you can be fairly confident that a passive MitM attacker won’t be able to modify the user’s request. However, they can potentially “replay” the request over and over again. Imagine if an attacker were able to replay Amazon’s 1-click purchase request over and over again and instead of paying for and getting 1 hard-drive like you wanted, you get 100. Without SSL, a passive attacker can replay the same post over and over again, and if you don’t defend against it, it can potentially create a lot of havoc that could ruin your relationship with your users. So, we need a way to prevent replay.

On the simple side, you can just have the client put a Request Time into each request. Then, when you receive the request on the server, you can validate that the time is within a reasonable grace period of the Request Time. If the request is older than the grace period allows, then you can reject the message. However, this relies on the user’s clock being set right, and a replay attack could still occur during the grace period.

So, the better way is to use a nonce. In some cases, the client requests a nonce from the server, and then is allowed to use that nonce one time. However, for this case, I’d say to do something easier. Since we already have a unique string (the signature we created from the shared secret), we can simply never allow the same signature to be used twice. This will reduce the overhead on the server and make the client’s life much easier since it won’t have to request any nonces. Then, since we’ll probably want to keep our nonce database small so that we aren’t looking through millions of records on each request, we can delete any records older than 24 hours or something. However, once we start deleting records, we need to do Request Time checking. If we keep records in the database for 24 hours, then we just need to ensure that all requests coming in are less than 24 hours old (or whatever timeframe we use to store nonces). This way, we can keep our database table of nonces smaller and still rest assured that a request from 5 days ago can’t be replayed ever again.
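
As a sketch, the signature-as-nonce check described above might look like this in PHP. The Nonce table, its columns, and the use of PDO are my own placeholders:

```php
<?php
const NONCE_LIFETIME = 86400; // keep nonces (and accept requests) for 24 hours

// Returns true if the request is fresh and its signature has never been
// seen; returns false for a replayed or too-old request.
function checkAndStoreNonce(PDO $db, string $signature, int $requestTime): bool
{
    // Reject anything older than the window we keep nonces for.
    if ($requestTime < time() - NONCE_LIFETIME) {
        return false;
    }

    // Purge expired nonces so the table stays small.
    $db->prepare('DELETE FROM Nonce WHERE SeenAt < ?')
       ->execute([time() - NONCE_LIFETIME]);

    // A unique index on Signature makes the replay check atomic:
    // a second insert of the same signature fails.
    try {
        $ok = $db->prepare('INSERT INTO Nonce (Signature, SeenAt) VALUES (?, ?)')
                 ->execute([$signature, time()]);
        return $ok !== false;
    } catch (PDOException $e) {
        return false; // duplicate signature => replay
    }
}
```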

So, you may wonder what happens if the same user makes the same GET request twice in a row and produces the same signature hash. Obviously, the second request would get blocked. However, there is arguably an overall application problem if the same person requests the same data twice within the same second. If the requests are even one second apart, the signature will be completely different, because the RequestTime in the request changes. If you’re still worried about it, just add a milliseconds field to the request; the server doesn’t even have to know what the field is for, since its only purpose is to make the request signature unique.

As far as other legitimate requests, the odds of producing the same signature on two different requests (a hash collision) with a hashing algorithm like SHA256 or better, which produces at least 64 hex characters, are so vanishingly small that I would place more bets on you being the first to create an FTL warp drive. So, I wouldn’t worry about it. Worst case scenario, the client script will have to make a second request when the first fails, the time will have changed by then, and there won’t be another hash collision until our sun dies out 5 billion years from now.

February 9th, 2015

Posted In: Security


Prerequisites: To sign requests with a shared secret, you will have to force your users to enable JavaScript. This may alienate some users, and ultimately I’m not sure it’s worth the effort just to avoid purchasing an SSL cert. But for some open source apps that have heavy JavaScript dependencies anyway, this might be a good fit, since you can’t control whether your users will get a cert.
Strengths: A request signed with a Shared Secret prevents session hijacking by passive attackers (e.g. Firesheep)
Weaknesses: All Non-SSL security is vulnerable to MitM attacks. Also, if you use a password hash as the Shared Secret, it can potentially be compromised at registration and during password changes.

As discussed in a previous post, Why you MUST use SSL, without SSL, there is no way to prevent a Man-in-the-Middle (MitM) attack from bypassing any security measures you take. However, by using a Shared Secret to sign your requests, you can make the task more difficult for an MitM as well as reduce the risk of a passive attacker getting the password, hijacking the session, or running a replay attack. A Shared Secret is something that only the server and client know, which implies that you never send it over the wire in either direction. This can be a bit problematic. How can both sides know a shared secret without ever communicating that secret across the web?

Well, ideally, if we truly want the secret to stay shared between only the two parties, we would need to communicate it to the user via an SMS message or an e-mail. By using an alternate channel, you reduce the risk of interception by any MitM or passive attacker. An SMS would probably be safest: if the e-mail is transmitted over HTTP or unsecured POP3, an MitM might be able to see that communication as well. But, at the very least, by transmitting over another channel, you greatly reduce the risk of the shared secret getting intercepted in conjunction with an MitM or side-jacking attempt.

A slightly less secure Shared Secret is the user’s password. The reason it is less secure is because it does have to be transmitted at least once: during registration and also anytime the user changes his or her password. Anyone listening at that point in time will know the Shared Secret, and your security can be bypassed. However, this is still significantly more secure than if you don’t use a Shared Secret. The advantage to your users in this case, versus an SMS message, is that they don’t have to jump through any additional hoops beyond what they are already used to. However, if we do this, we have to make sure that the password Shared Secret is not communicated during the login…we want to make sure that it is ONLY sent during the initial registration or password change to reduce the frequency of communicating the shared secret.

How to Login Without Forfeiting the Shared Secret

Assuming that we dutifully hashed our password with a salt, the server doesn’t actually know the password, so our Shared Secret is actually the hash of salt+password rather than the password itself. As our Shared Secret, it is important that we never send the hash of salt+password to the server or vice-versa. So, when the user logs in, we must first send them TWO salts. The first passwordSalt the client will use to generate the shared secret, and the second oneTimeSalt will be used to communicate the shared secret to the server for the login.

var sharedSecret = hash( passwordSalt + password ),
    loginHash    = hash( oneTimeSalt + sharedSecret );
$.post(loginPath, {"email":email,"password":loginHash}, callback);

The server then validates in a method something like this:

isValidPassword( hashFromClient ) {
    return hash_equals(
        hash( oneTimeSalt + storedPasswordHash ),
        hashFromClient
    ); // slow, time-constant equality check
}

If the above method is successful, then we destroy the one-time salt and put a new one in its place for the next use. We then send the client a session token that it is able to use for the rest of its session to remain logged in. However, if the session token is all we use for validating the user on subsequent requests, then any novice MitM, or person using Firesheep, will be able to hijack the session. So, we need to use our shared secret to “sign” each request in a way that the server can ensure the data is actually from someone that knows the shared secret and hasn’t been modified.

Fortunately, this is actually quite simple. We already have our shared secret, the first hash. So, we can use the first hash to sign each request something like the following:

var signature = hash( sharedSecret + sessionToken + requestMethod + requestUrl + requestBody );

We append this signature to our request, and when the server receives the request, it can find the Shared Secret associated with that session token and then run the same hash. If the two hashes are equal, then the server can assume that the request comes from someone who knows the Shared Secret. So, about the only thing an attacker can do without knowing the Shared Secret is replay the exact same request multiple times. However, this is still potentially problematic. Imagine an attacker adding a million rows to your database by replaying the same POST request a million times…it could get ugly. So, next on the list is to look at Security Without SSL, How to Prevent Replay Attacks.
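
A server-side sketch of that verification follows. I’ve used SHA-256 for the generic hash() of the post’s pseudocode, which is an assumption; an HMAC keyed with the shared secret would be the slightly more standard construction:

```php
<?php
// Recompute the signature from the stored shared secret and compare it,
// in constant time, to the signature the client sent.
function isValidSignature(
    string $sharedSecret,
    string $sessionToken,
    string $method,
    string $url,
    string $body,
    string $clientSignature
): bool {
    $expected = hash('sha256',
        $sharedSecret . $sessionToken . $method . $url . $body);
    return hash_equals($expected, $clientSignature);
}
```

Any change to the method, URL, or body changes the expected hash, so a tampered request fails verification.
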

February 8th, 2015

Posted In: Security


Firesheep is one app that proved just how easy it is to hijack sessions from sites like Twitter and Facebook when the communication is sent plain-text over HTTP. Thus, if you have any kind of high-profile site, SSL has become a MUST. Without SSL, it doesn’t matter what security measures you use or how clever you are: a Man-in-the-Middle (MitM) can simply send your users his own, unsecured version of your website without all your clever security; your users will happily send the MitM all their information unsecured; and the MitM can then send the data up to your server using your clever security measures. Nobody will ever know the MitM was even there. So, I wish there were another answer, but if you truly want your site to be secure, you must use SSL.

The prevention for an MitM attack is NOT the asymmetric RSA encryption, as some people probably think. By itself, RSA encryption only prevents passive attackers from reading communications sent between two parties. If encryption were the only security used, then someone who controls the router at your local Starbucks could route all traffic to their own computer, and when a user requested a site like Amazon, they could send the user their own public key. The user would happily encrypt all their user and credit card information using the attacker’s public key and send it to the attacker. The attacker would then decrypt it, harvest all the information, encrypt it with Amazon’s public key, and send it up to Amazon, and vice-versa. The MitM would be able to sit in the middle harvesting all the information sent between the two parties and be completely invisible…that is…IF encryption were the only security. Fortunately for us, it’s not.

It is not the RSA encryption that makes SSL secure, but rather the Trusted Certificate Authority (CA). When a user connects to a website over SSL, the browser takes the domain name they THINK they are on, the IP address they are actually connected to, and the “public key” (certificate) they were given, and verifies all that information against a CA that the browser knows it can trust. If some of the information doesn’t match what the CA has on file, or the website used a CA that your browser doesn’t trust, then the user gets a very ugly security warning from the browser telling the user not to trust the site.

Using a signed certificate from a CA, in the same scenario above where the attacker routes all traffic to their own computer, when the user’s browser reads the certificate and tries to verify it, it will find that the certificate it received was “self-signed” or issued by a CA it doesn’t know or trust, and the subsequent browser warning will hopefully scare the user away. Thus, the only way for an MitM to get away with such a scenario would be to get a Trusted CA to issue them a certificate for the domain, which is why CAs make you jump through hoops to get a certificate and charge so much money. CAs actually have a lot of responsibility to their clients and to the users whose browsers trust them.

In any case, if you need a secure site, you MUST use SSL. If you don’t use SSL, you will ALWAYS be vulnerable to an MitM attack. By adding more complicated security, you can make it a lot more work for an MitM, and if you have a small user-base, fewer MitMs will have the opportunity to be in a position to compromise your site. However, ultimately, no matter what you do, your site is vulnerable to MitM attacks without SSL and a signed certificate from a Trusted CA.

February 7th, 2015

Posted In: Security


If you store your users’ passwords in plain text, life is easy for you, and it’s equally easy for someone to steal them. If someone manages to log in to your database; if someone makes a successful SQL-injection attack; if the next developer on the project, or that contractor you hired for a week to optimize the database, is less trustworthy than you…any of these scenarios could compromise every one of your users’ accounts. And maybe users can’t hurt anything on your site, but unfortunately, people often use the same password for your site that they use for their e-mail account, etc. So, if someone gains access to your users’ usernames and passwords, they can potentially access those users’ e-mail, bank accounts, amazon accounts, facebook accounts, etc. So, just don’t do it. This is the first and most sacred rule in handling your users’ passwords. You should treat passwords as carefully as you would credit card details and social security numbers. I don’t care what your reason is for storing passwords in plain text…your reason is not worth the risk to your users, and there is always another way to do what you are trying to do. There is NO valid reason to store a password in plain text.

Additionally, you should ALWAYS store the password using a ONE-WAY hash, and that hash must be salted, as pointed out in Why You MUST Salt Your Hashes. A one-way hash is NOT “encryption”. A password that is encrypted can also be decrypted. If you store a password using very secure AES-256 encryption, with a key that is stored carefully in your source code in a protected folder that can’t even be accessed from the web at all…it may seem like the passwords are safe, but they aren’t. Ask yourself: would YOU, the developer, be able to decrypt the passwords? If YOU can do it, then so can someone else. Remember, the next developer, or that contractor, may not be as honest as you are. My goal, and your goal, should be to make it impossible for anyone, including yourself, to access the passwords.

I can hear the comments rolling in before I’ve even posted this about how you NEED to have the password encrypted so you can decrypt it, because you have to use the password to login to some other legacy system from your system, and that other system requires the password. Well, okay, if you don’t control another system, and that system was designed poorly, then maybe–just maybe–there is a valid reason…but that’s about the only reason I can think of. And there is still no excuse to store it in plain text.

Which Hashing Algorithm to Use?

Now, there is a lot of information out there saying to not use MD5 and use SHA256 instead. This is only mediocre advice. SHA256 is definitely safer than MD5. However, SHA256 is still a “fast” algorithm. While my quick tests show that it would take my computer hundreds of years to crack an 8 character password through random brute-force methods, I have heard reports of a custom 25 GPU system that is able to do 22 billion hashes per second with SHA256, which means an 8 character password could be brute-forced in a few hours. So, it is important to use a “slow” hashing algorithm.

Based on some quick, incomplete research, it appears that the most commonly discussed “slow” hashing algorithms are:

  • bcrypt
  • scrypt
  • PBKDF2
  • SHA512crypt

Among these, it seems that bcrypt is the most trusted, while scrypt might have some good future promise but doesn’t have as much software support from what I could find. These algorithms work by hashing the password thousands of times with configuration options to increase the number of rounds as computer processing speeds up. Forcing the password to be hashed thousands of times to get the end-result will only take a couple-hundred milliseconds when the user types in a valid password, so it’s barely noticeable to the user. However, to the person trying to brute-force the password, it becomes expensive in terms of time and financial cost of processing power.
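
In PHP 5.5+, bcrypt is available out of the box via the built-in password_hash()/password_verify() functions, which also generate and store a random salt for you; the cost option is the “number of rounds” knob described above (12 here is my own choice of cost):

```php
<?php
// PASSWORD_BCRYPT embeds a random salt and the cost in the output string,
// so verification needs no separate salt column.
$hash = password_hash('correct horse battery staple', PASSWORD_BCRYPT, ['cost' => 12]);

// Raising 'cost' by one doubles the work, letting you keep pace with hardware.
$ok  = password_verify('correct horse battery staple', $hash); // true
$bad = password_verify('wrong guess', $hash);                  // false
```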

So, pick a slow hashing algorithm and hash your passwords with a salt. It’s the only way to truly protect your users’ information to the best of your ability. Your users will never know how grateful they really are, but you’ll also never have to tell them about a compromised password if your database table gets leaked.

February 6th, 2015

Posted In: Security


In another post, Why You MUST Hash Your User’s Passwords, I talked about the importance of hashing your users’ passwords rather than encrypting them (or, worse, storing them in plain text). I also discussed why it is important to use a hashing algorithm designed to be “slow”, such as bcrypt, scrypt, PBKDF2, or SHA512crypt. However, even if you use the slowest hashing algorithm known to man, if you don’t also salt the passwords, they will be much more easily compromised.

Unfortunately, your users are DUMB. They are going to use passwords like “molly77”. As a quick example, a SHA256 hash is considered fairly secure; brute-forcing a strong password hashed with SHA256 would take hours even for the best machines. However, a SHA256 hash of a DUMB password is NOT secure at all. E.g. here’s the SHA256 hash of “molly77”:


Take that hash, go to this link, paste it into the textbox, submit it, and you’ll find that it very quickly finds the result from a Rainbow Table lookup. A Rainbow Table is a list of hashes and their corresponding passwords, and one can be easily generated when you either don’t salt your passwords or use the same salt for every password. Thus, it is important that you create a random salt for each password whenever the user registers or updates their password. Salting is extremely easy: all you do is prepend or append the password with a string, e.g. our molly77 password becomes something like:


When we hash this new password string, it becomes:


And when we lookup this hash in the link I provided above, the rainbow table doesn’t work. It’s so easy that there is no valid reason not to do it. The only additional work is storing the salt in the user row alongside the hashed password so that when the user logs in with the right password, we can produce the same hash. The method for doing this is something like this:

isValidPassword( submittedPassword ) {
    // areHashesEqual needs to be a "length-constant" comparison method,
    // as using "==" is vulnerable to timing attacks
    return areHashesEqual(hashMethod(submittedPassword + salt), hashedPassword);
}

Randomly salting your passwords makes rainbow tables useless and forces a would-be attacker to use brute-force methods if they want to crack your users’ passwords. If you’ve additionally used a slow hash like PBKDF2, bcrypt, etc., then brute-force methods become much more expensive, and it becomes much less likely that an attacker will bother with the passwords even if they happen to get a dump of your database table.
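As a sketch of the whole flow (in Python rather than PHP, with SHA256 standing in for a proper slow hash purely to keep the example short): generate a fresh random salt at registration, store it alongside the hash, and use a constant-time comparison at login:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[str, str]:
    salt = secrets.token_hex(16)  # fresh random salt per password
    digest = hashlib.sha256((password + salt).encode()).hexdigest()
    return salt, digest  # store both in the user's row

def is_valid_password(submitted: str, salt: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256((submitted + salt).encode()).hexdigest()
    # compare_digest is a constant-time ("length-constant") comparison
    return hmac.compare_digest(candidate, stored_hash)

salt, stored = hash_password("molly77")
print(is_valid_password("molly77", salt, stored))  # True
print(is_valid_password("molly78", salt, stored))  # False
```

Note that hashing the same password twice now produces two completely different stored hashes, which is exactly what makes a precomputed rainbow table useless.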


Best

  • Use a well-supported PBKDF2, BCrypt, or SCrypt library that takes care of randomly generating the salt for you and allows you to increase the rounds progressively.
  • Use said library’s “length-constant” method to compare the stored hash with the password submitted by the user.

Second Best

  • Use a “slow” hashing algorithm
  • Generate a sufficiently long salt. A rainbow table can still be generated if your salt has too few possible combinations. The PBKDF2 recommendation is a minimum of 8 bytes.
  • At the bottom of this page, I’ve posted a PHP example for using the crypt method correctly before PHP 5.5’s password_hash() and PHP 5.6’s hash_equals() became available.


Don’t

  • Try to improve the hashing by coming up with some super-secret way to salt your passwords. It’s a waste of time. You should assume that if an attacker gets a data dump, they also have your source code, in which case you’ve done nothing except add more work for no additional security.
namespace Modules\Authentication\Helpers {
    class BCrypt {
        public static function hash( $password, $cost=12 ) {
            // 17 random bytes -> 24 base64 characters; bcrypt wants a 22-character salt
            $base64 = base64_encode(openssl_random_pseudo_bytes(17));
            $salt = str_replace('+','.',substr($base64,0,22));
            $cost = str_pad($cost,2,'0',STR_PAD_LEFT);
            // "2y" fixes a bug in the older "2a" bcrypt implementation
            $algo = version_compare(phpversion(),'5.3.7') >= 0 ? '2y' : '2a';
            $prefix = "\${$algo}\${$cost}\${$salt}";

            return crypt($password, $prefix);
        }

        public static function isValidPassword( $password, $storedHash ) {
            // crypt() reuses the algorithm, cost, and salt embedded in $storedHash
            $newHash = crypt( $password, $storedHash );
            return self::areHashesEqual($newHash,$storedHash);
        }

        private static function areHashesEqual( $hash1, $hash2 ) {
            // Length-constant comparison to avoid timing attacks
            $length1 = strlen($hash1);
            $length2 = strlen($hash2);
            $diff = $length1 ^ $length2;
            for($i = 0; $i < $length1 && $i < $length2; $i++) {
                $diff |= ord($hash1[$i]) ^ ord($hash2[$i]);
            }
            return $diff === 0;
        }
    }
}

February 5th, 2015

Posted In: Security


Let’s say that your server runs in Central Time but it logs data for facilities all across the US. 99% of the time you won’t have any problem. You’ll grab the Date/Time out of the database, and if displaying to a facility on the East Coast, you convert the Date/Time to Eastern for the user to see. It’s all good. However, you set yourself up for a very big headache in the Fall.

In the Fall, the clocks will move back, which means that your server will experience 1am – 2am twice. If MySQL stored the offset in the DateTime field (e.g. -05:00 before the change and -06:00 after the change), then this wouldn’t be a problem. However, since it doesn’t, when you pull a log out of the table and get the time 1:30am, you have no way of knowing which 1:30am it is, the one before or the one after the change.

If only you had stored the times in UTC, you would know which was which.

However, we have a bigger problem. Your Eastern timezone facilities’ clocks don’t change back at the same time…they change back at their own 2am. This means that when you translate back and forth between Central and Eastern time, you are going to run into some odd glitches, illustrated below. The “Time In” is the valid time sent from the Eastern timezone facility, and the “Time Stored” is the time that it gets converted to on the local machine before storage. The “Time Out” is the translation back to local machine time when it is pulled out of the table, and the “Display Time” is the end result that you display in your logs for the Eastern timezone facility.

Time In     Time Stored   Time Out    Display Time
01:00 EDT   00:00 CDT     00:00 CDT   01:00 EDT
01:30 EDT   00:30 CDT     00:30 CDT   01:30 EDT
01:00 EST   01:00 CDT     01:00 CDT   01:00 EST
01:30 EST   01:30 CDT     01:30 CDT   01:30 EST
02:00 EST   01:00 CST     01:00 CDT   01:00 EST
02:30 EST   01:30 CST     01:30 CDT   01:30 EST
03:00 EST   02:00 CST     02:00 CST   03:00 EST

You’ll notice that you’re all good up until you get to 2:00am EST. Your local machine converts it to local time as 1:00am CST, which is correct. However, since MySQL doesn’t store the offset, when your local machine pulls it out of the database table, it doesn’t know which 1:00am it was and interprets it as 1:00am CDT. This results in your Eastern timezone logs having 3 instances of 1am – 2am instead of 2, and a complete loss of the logs from 2am – 3am.
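You can see this ambiguity directly with Python’s zoneinfo module (a sketch, using the Fall 2014 change from the table above; `fold=1` selects the second occurrence of a repeated wall-clock time):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

central = ZoneInfo("America/Chicago")

# On 2014-11-02, 1:30am happens twice in Central time.
first  = datetime(2014, 11, 2, 1, 30, tzinfo=central)           # 1:30am CDT
second = datetime(2014, 11, 2, 1, 30, fold=1, tzinfo=central)   # 1:30am CST

print(first.utcoffset(), second.utcoffset())  # different instants...

# ...but strip the offset, as a naive DATETIME column effectively does,
# and the two become indistinguishable:
print(first.replace(tzinfo=None) == second.replace(tzinfo=None))  # True
```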

Therefore, if you are working with multiple timezones, or your business might grow to include multiple timezones…start things off the right way. You can avoid this problem by storing your DateTimes in any format that is immune to DST changes. So, the following will all work:

  • Store your Date/Time in UTC. This is the “best-practice” standard.
  • Store the offset as part of the DateTime (e.g. 2014-11-02T02:00:00-05:00). SQL Server has a DateTimeOffset column type, but for MySQL you would have to use a VARCHAR column. Using VARCHAR increases your storage space requirements while also reducing the inherent functionality provided with the DateTime column, so I don’t recommend it.
  • Store in a Unix Timestamp format. This is functionally equivalent to storing in UTC, and is great if you’re storing the time for “right now”, but it is not good for storing past or future Date/Times because a 32-bit timestamp is only good for 1970 – 2038. Also, when doing manual queries into the database just to look at the data, timestamps are not human-readable, so I never use them for anything myself.
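Here is a sketch of the store-in-UTC round trip with Python’s zoneinfo, using the ambiguous fall-back time from the table above (the Eastern facility’s second 1:30am):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

# The *second* 1:30am on fall-back day, i.e. 1:30am EST (fold=1).
local_event = datetime(2014, 11, 2, 1, 30, fold=1, tzinfo=eastern)

# Store: normalize to UTC before writing to the database.
stored = local_event.astimezone(timezone.utc)
print(stored.isoformat())  # 2014-11-02T06:30:00+00:00 -- unambiguous

# Display: convert back to the facility's timezone when rendering logs.
displayed = stored.astimezone(eastern)
print(displayed.strftime("%H:%M %Z"))  # 01:30 EST -- nothing was lost
```

Because the UTC value carries no DST ambiguity, the round trip is lossless even for the repeated 1am – 2am hour.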

It can be a bit of a pain getting used to storing data in UTC and then translating it into the display timezone later. However, it avoids many pitfalls, and the work you do up-front to get used to it will pay off in the long run.

December 18th, 2014

Posted In: Software Design


So, in my years in the programming business, I’ve seen a few people just stop showing up for work, or cuss out the boss and walk out the door. They are burning bridges, and in the long run they are hurting themselves a lot more than they are hurting the company that they are lashing out at. Whether you give a company a two week notice or not, you are hurting the company about the same amount…they still have to find a replacement, and two weeks is not all that much time to find a new hire and train them. So, really, the only person you are hurting when you don’t follow quitting-etiquette is yourself.

When you storm out of a job without following quitting-etiquette, 2 things happen. First, you go onto the D.N.R. (Do Not Rehire) list. Then, when your next potential employer’s HR staff calls up your previous employers, they aren’t allowed to ask all that many questions, but one they can ask is, “Is [so and so] rehire-able?” If your previous employer says “no”, you may not make it past HR to be able to show the company what a good programmer you are. If they have 10 great candidates, you’re the only one that’s not rehire-able, and they can only do 5 interviews, you’re out of luck.

Since my very first job, at McDonald’s, I have never been fired from a job. I have always given a two week (or longer) notice. And 3 of the jobs that I’ve quit, I later went back to. When I went back to those jobs I not only got my job back, but in all 3 cases I got a raise, and in 2 cases I got a promotion. So, following quitting-etiquette can literally pay.

All his life has he looked away… to the future, to the horizon. Never his mind on where he was. (Yoda)

There is something more to it than just quitting-etiquette, however. There’s a mentality that you have to have if you want to be the guy that companies want to hire back. It’s an extremely simple thing in my opinion: own your work, right now! I’ve seen quite a few developers who just did their work to get their paycheck until “the next job” comes along. This makes them unhappy and lowers the quality of their work. In the long run, these developers are less valuable. They lose learning opportunities to improve themselves because they aren’t focused on doing their best work right now.

Being happy at your job is 10% your company and 90% you. Trust me on this. I worked for McDonald’s for THREE years and I liked it (yes, the jokes are rolling through my head right now too). In fact, I’d work for McDonald’s flipping burgers in a heartbeat if it paid enough to support my family. I enjoyed it because I took ownership of my work. I took pride in the work that I did and in trying to be the best at what I did. My work is a reflection of me. It’s my name that goes in the @author tag, it’s my face that the customer sees…not my employer’s. I am constantly seeking to improve my skills as a programmer, not because my company needs me to, but because I need to for me.

When you take ownership of your work and take pride in always trying to do your best, the whole “grass is greener” mentality slips away. You are focused on the “now”, doing your best in the present. In 3 years you’ll look back at your glaring programming mistakes and ask yourself how you ever wrote such crappy code. However, for the “now” it’s gotta be your best. There is something absolutely magical about doing your best work, finishing it, and knowing that you did the best work you’ve ever done and that tomorrow, you’ll do better.

September 28th, 2012

Posted In: Career

