Tag Archives: web app security

Design a hack-proof password storage

I would like to thank Dale Johnstone and Meng-Chow Kang for their useful comments and suggestions.

Recent security breaches further confirm that a password alone is not an effective security measure. An increasing number of cyber attackers are going after the password storage itself, be it a file or a database. Once password storage is exposed, the impact has repeatedly been shown to be catastrophic. Password storage today needs to be designed on the assumption that the storage mechanism will be compromised and a malicious party will obtain a copy of the password credentials. In both the Target and eBay cases the cause was hacking; in other cases it may be a lost backup tape or an insider exploiting authorised access for unauthorised purposes. Applying the defense in depth principle, an additional layer of control to mitigate the risk of compromised password storage is becoming ever more critical.

RSA tried to tackle this issue with threshold cryptography, which splits an individual password across two servers, addressing the risk of a single point of compromise at any one server. That may be a solution for the well-resourced. Here I am going to suggest a DIY way for the more cost-conscious and tech-savvy to implement this additional layer of defence.

A better strategy is to NOT store every input alongside the password hash, making it less feasible to automate a password-cracking system. When retrieving a password requires a manual process with human interaction, the game changes completely: scaling up an attack becomes extremely resource intensive. It is easy to add hundreds of servers, but it is cost prohibitive to recruit a hundred computer operators to perform password screening.

Best Practises of Password Storage

So what do the cyber attackers get after gaining access to your system? When cyber attackers gain control of a system, they will likely make a copy of your security configuration files, including database files, password files, private keys and logs. From these files, a cyber attacker obtains full knowledge of:

  1. The password hash
  2. Username, email, phone number, DoB etc
  3. User login records and transaction records (like invoices)
  4. Password reset questions plus answers

With the above information, a cyber attacker could launch brute force, dictionary and birthday attacks. There is an excellent article on how to defend against these attacks: “How to encrypt user passwords?” provides six rules for storing passwords securely:

  1. Encrypt passwords using one-way techniques, that is, digests
  2. Match input and stored passwords by comparing digests, not unencrypted strings
  3. Use a salt containing at least 8 random bytes, and attach these random bytes, undigested, to the result
  4. Iterate the hash function at least 1,000 times
  5. Prior to digesting, perform string-to-byte sequence translation using a fixed encoding, preferably UTF-8
  6. Finally, apply BASE64 encoding and store the digest as an US-ASCII character string
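A minimal sketch of the six rules in Python, using SHA-256 and the rule-4 minimum of 1,000 iterations. The parameter choices are illustrative only, not a production recommendation:

```python
import base64
import hashlib
import os

ITERATIONS = 1000   # rule 4: iterate the hash function at least 1,000 times
SALT_BYTES = 8      # rule 3: a salt of at least 8 random bytes

def digest_password(password, salt=None):
    """Digest a password following the six rules (sketch)."""
    if salt is None:
        salt = os.urandom(SALT_BYTES)           # rule 3: random salt
    data = salt + password.encode("utf-8")      # rule 5: fixed encoding (UTF-8)
    digest = hashlib.sha256(data).digest()      # rule 1: one-way digest
    for _ in range(ITERATIONS - 1):             # rule 4: iterate
        digest = hashlib.sha256(digest).digest()
    # rule 3: attach the salt, undigested, to the result;
    # rule 6: Base64-encode and store as a US-ASCII string
    return base64.b64encode(salt + digest).decode("ascii")

def verify_password(password, stored):
    """Rule 2: compare digests, never unencrypted strings."""
    raw = base64.b64decode(stored)
    salt = raw[:SALT_BYTES]
    return digest_password(password, salt) == stored
```

In practice a purpose-built scheme such as PBKDF2 or bcrypt packages the same ideas, but the sketch shows where each rule fits.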

Assume a developer follows these rules strictly and correctly. What are the risks when a cyber attacker gets a copy of the password storage and user information? The password is still safe with cryptographically sound hashing, right?

No. These six rules offer good protection only if the following assumptions hold:

  • The password is complex enough
  • The password is stored totally independent of the user’s personal details, like Date-of-Birth, spouse’s name, address etc

A brute force attack is still a very efficient way to uncover passwords like “123456” or “imapassword”! If the user creates a password from a date of birth or home street name, a cyber attacker can generate a set of candidate passwords from the user’s personal information, so a dictionary attack remains a high risk. Users who create passwords from personal information are the low-hanging fruit. “PaulLivein_Florance” is a complex password in the mathematical sense but is not complicated at all to an attacker armed with the user’s profile.

Let us assume you are tasked to crack passwords stored under the above-mentioned six rules. What would you do? Here are some examples:

1. Find the shared parameters using a few simple passwords such as “123456” or “abcd1234”. The shared parameters include the hash iteration count from rule 4 and the minimum and maximum password lengths.

2. Optimise the cracking mechanism with the knowledge from step 1 and harvest ALL simple passwords.

3. For the remaining passwords, create hashes using combinations of personal information (e.g. “DoB_name”, “NameInCity”). Then match them against the stolen password storage.

These three simple steps will most likely yield enough user/password pairs to keep a cyber attacker busy for a few weeks, and possibly to start profiting. While the attacker is busy shopping with the stolen information, the cracking process does not stop; it continues to yield golden eggs every day.
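Step 3 might be sketched as follows. The token list and separators are hypothetical examples of harvested personal details, not drawn from any real breach:

```python
from itertools import permutations

def candidate_passwords(tokens):
    """Generate password guesses by joining personal-info tokens
    with a few common separators and suffixes."""
    separators = ["", "_", "@", ".", "123"]
    guesses = set(tokens)
    for a, b in permutations(tokens, 2):
        for sep in separators:
            guesses.add(a + sep + b)
    return guesses

# Hypothetical personal details pulled from a breached user record
guesses = candidate_passwords(["Paul", "Florance", "19800101"])
```

Each guess is then hashed with the stolen salt and parameters and compared with the stored digests.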

Applying defense in depth: an additional layer of defence

When password storage is compromised, our only defence boils down to protecting the secret by making it computationally expensive to transform and reveal passwords in plain text. This worked in the past, before cloud computing and GPU arrays. The Bitcoin gold rush has already created cloud-based infrastructure for GPU mining, so relying on a computationally expensive hash process alone is no longer the best strategy.

My suggestion involves changing rule 3 and rule 4.

The above six rules focus on hashing and making it computationally intensive. But they ignore the fact that the good guys have an advantage over the cyber attacker, and they do not make effective use of it: knowledge of the correct password’s attributes gives us an advantage over the cyber attacker.

For rule 3, the simple and traditional way is to combine the salt and password directly. But we do not need to store the random salt side by side with the password hash! Instead, store the password hash and the salt completely separately, with no logical relationship between them. The good guys associate the right salt with the right password hash using user input, which could be the answer to a security question or, say, the sum of the ASCII codes of the user’s password.

Let me provide an example:

When the user enters the password “Pass123”, the characters are converted to a number using a formula like ASCII(P)x10 + ASCII(a)x100 + ASCII(s)x1000 + ASCII(s)x10000, and so on, which results in “566175500”. Let us call this number the salt index. The system then uses the salt at location 566175500 as an input to the hash function.

How the formula is designed is not important; anyone can create one, as long as it maps the user-entered password string to a large number with few collisions. Collisions will still happen when two passwords generate the same number, which means those two passwords use the same salt. Using the example formula above, “Pass123” and “Pkrs123” both give the salt index “566175500”, so the same salt is used to generate the password hashes of both. Collisions are inherent because we derive the salt index from password attributes (length, ASCII codes, etc.). This collision gives us another advantage over a one-to-one mapping.
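The example formula can be written as a one-line function; running it confirms both the salt index for “Pass123” and the collision with “Pkrs123”:

```python
def salt_index(password):
    """Map a password to a salt-table location: each character's
    ASCII code weighted by an increasing power of ten.
    Illustrative formula only; any low-collision mapping works."""
    return sum(ord(ch) * 10 ** (i + 1) for i, ch in enumerate(password))

print(salt_index("Pass123"))  # 566175500
print(salt_index("Pkrs123"))  # 566175500 -- same salt index, a collision
```

The server would then fetch the salt stored at that index and feed it into the digest computation.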

Since the cyber attacker does not know the correct password, the attacker first needs to determine the formula that generates the salt index, and then spend time cracking the passwords. Because of collisions, the cracking output will give multiple candidate passwords for one user. Among these candidates, which one did the user actually choose? “Paul@2014” is more likely to be a password than “Pkvl@2014”, right? Distinguishing such linguistic differences and making the right guess requires a human cognitive brain; a computer program cannot tell immediately without help from some linguistic algorithm. This is the game changer mentioned earlier in the article: it makes a password-cracking system far less feasible to automate. An attacker trying to retrieve passwords from stolen storage must go through manual steps that require human interaction, which changes the game completely by making it resource intensive to scale up an attack. We choose the right battlefield on which to set up our defense.

Another advantage of this approach is that it limits the spillover when users reuse a password across multiple websites/applications. When eBay suffers a data breach with possible user password leakage, the real risk is users reusing the same eBay password for their email or bank accounts. With the suggested method, the attacker recovers a list of possible passwords instead of one, so it is less likely that the attacker gains immediate access to other websites/applications belonging to the same user.

I have to admit the headline is a bit of an exaggeration and the scheme is not entirely hack proof, but engineering a mechanism that targets a weak point of password cracking by forcing manual interpretation certainly introduces some advantages.

Reference: https://crackstation.net/hashing-security.htm

Layer 7 DDoS Attack: A Web Architect's Perspective

The arms race in cyber security makes protecting Internet resources harder and harder. In the past, DDoS attacks mostly targeted Layer 3 and Layer 4, but reports from various sources identify Layer 7 DDoS as the prevalent threat. The slide below from Radware explains the change in the DDoS trend: as protection against network traffic flooding has matured, attackers have shifted their target to the application layer.

[Slide: Radware, DDoS attack trends shifting to Layer 7]

As DDoS attacks evolve to target the application layer, the responsibility to protect web applications no longer rests solely on the shoulders of the CISO or the network security team. It is time for web application architects to go to the front line. This article analyses Layer 7 DDoS attacks and discusses measures web application architects can deploy to mitigate their impact.

Types of Layer 7 DDoS Attack 

A study conducted by Arbor Networks showed that while attacks on DNS and SMTP exist, the most prevalent attack vector is still HTTP.

[Chart: Arbor Networks survey of DDoS attack types]

A Layer 7 DDoS attack usually targets the landing page of a website or a specific URL. If an attacker successfully consumes either the bandwidth or the OS resources (such as sockets) of the target, normal users are unable to access these resources.

A closer look at the targeted web resources

When developing a protection strategy for a website against Layer 7 DDoS attacks, we need to understand that not all webpages are created equal. Diagram 1 shows the different types of web pages. There are two ways to classify webserver resources: by access rights and by content type. Content for registered users usually sits behind a user authentication process that blocks unauthenticated HTTP requests, so pages accessible only to authenticated users are rarely the target of a DDoS attack unless the attacker pre-registers a large number of user accounts and automates the login process. The login page, which usually uses HTTPS, is another webserver resource exploited by DDoS attackers, since the HTTPS handshake places a high load on the webserver. Publicly accessible content carries a higher risk of HTTP flooding, and the impact of HTTP flooding differs by content type: a DDoS attack on static web pages mainly hits the outbound bandwidth and web server resources (such as the HTTP connection pool, sockets, memory and CPU), while dynamic pages generate backend database queries and therefore impact the application server and database server. Overloading these resources for a long period of time is what makes a DDoS attack successful. The types marked in red face a higher risk of DDoS attack.

[Diagram 1: Web resources classified by access rights and content type]

In the paragraphs above, we established a general understanding of Layer 7 DDoS attacks on different types of web resources. Below is a discussion of how to minimise the impact of a DDoS attack on website users. These steps do not replace Layer 3 and 4 DDoS defenses and traffic filtering. However, unless your website is designed with these principles in mind, a carefully crafted Layer 7 DDoS attack could still bring it down.

Protecting the Static Page

The most effective, and also most expensive, way to protect static pages against DDoS attacks is to buy services from a reputable Content Delivery Network (CDN). However, the cost of running the whole website on a CDN may add up to a large sum. An alternative is to make use of cloud platforms and distribute graphics, Flash and JS files to one or more web servers located in another network. This method is already practised by most high-traffic websites, which have dedicated servers for delivering images only.

Nowadays, most webpages exceed 100k bytes once all graphics, CSS and JS are loaded. Using the LOIC DDoS tool, an attacker can easily consume all the bandwidth and web server CPU resources. In an HTTP GET flood attack, both the inbound link and the outbound link can become a bottleneck. A web developer cannot do much about managing the inbound traffic, but there are ways to lower the risk of the outbound link becoming a bottleneck under DDoS attack. Web architects should monitor the inbound/outbound traffic ratio on web server network interfaces.
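A back-of-envelope calculation shows how quickly the outbound link saturates; the request rate here is an assumed figure for illustration:

```python
page_bytes = 100 * 1024       # ~100 KB per fully loaded page (HTML + CSS + JS + images)
requests_per_second = 1000    # hypothetical flood rate from a modest botnet

outbound_mbps = page_bytes * 8 * requests_per_second / 1e6
print(outbound_mbps)          # 819.2 Mbit/s -- enough to saturate most uplinks
```

A mere thousand requests per second against a 100 KB page already demands roughly 800 Mbit/s of outbound capacity, which is why offloading static content matters.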

One simple way to strengthen a website's defense against HTTP GET flooding is to store images and other large media files on another server (either within the same data centre or in another data centre). Using a separate web server to deliver non-HTML static content lowers both the CPU load and the bandwidth consumption of the main website. This method amounts to a DIY CDN, and cloud platforms with on-demand charging schemes are an excellent resource for deploying it. Since the graphic files are publicly accessible, placing them on a public cloud platform does not increase data security risk. Although this is a simple solution for static pages, there are a few points to note. First, the “height” and “width” attributes of each image should be defined in the HTML code, so that users see a proper screen layout even when the image server is inaccessible. Secondly, when choosing a cloud platform it is best to find one that does not share the same upstream connectivity provider as your primary data centre. When a DDoS attack happens, a good architecture should eliminate performance bottlenecks as much as possible.

Protecting the Dynamic Page

Dynamic pages are generated based on user input and involve business logic at the application server and data queries at the database server. If the web application displays large amounts of data or accepts user-uploaded media files, an attacker can easily write a script to generate a large number of legitimate-looking requests and consume both bandwidth and CPU power. DDoS defense mechanisms that rely on filtering out malicious traffic via pattern recognition are of little use when the attack targets this weakness in dynamic pages. It is the web application developer's responsibility to identify high-risk web pages and develop mitigation measures against DDoS attacks.

As displaying large amounts of data based on user input becomes more popular, web architects should develop a strategy to prevent misuse of high-loading webpages, particularly when they are publicly available.

One way is to use a form-specific cookie and verify that the HTTP request was submitted from a valid HTML form. The steps would be:

  1. When the browser loads an HTML form, the web server sets a cookie specific to that form. When the user clicks the submit button, the cookie is included in the HTTP request.
  2. When the web server processes the user request, the server-side logic first checks whether the cookie is properly set before processing the request.
  3. If the cookie does not exist or its value is incorrect, the server-side logic should forward the request to a landing page that displays a URL link pointing to the HTML form.

The diagram below shows the logical flow.

[Diagram: logical flow of the form-specific cookie check]

The principle of this method is to establish an extra check before executing the server-side logic behind the dynamic page, thus preserving CPU resources at the application server and database server. Most DDoS attack tools have only simple HTTP functions and cannot process cookies. Capturing the HTTP requests of the LOIC DDoS attack tool with Wireshark shows that LOIC does not accept cookies from websites; a similar result is obtained with the HOIC tool.

[Screenshot: Wireshark capture of LOIC traffic, showing no cookie handling]

A DDoS attacker may record this cookie and replay it within the attack traffic, which could easily be done with a script. To compensate for this, the form-specific cookie should be changed regularly or rotated according to a triggering logic.
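The three steps above, together with the rotation idea, could be sketched as an HMAC-signed, time-limited cookie value. The secret, field layout and timeout are assumptions for illustration, not a specific framework's API:

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # hypothetical key; rotate it regularly

def issue_form_cookie(form_id, now=None):
    """Set when the HTML form is served (step 1)."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, (form_id + ts).encode(), hashlib.sha256).hexdigest()
    return "{}:{}:{}".format(form_id, ts, sig)

def is_valid_form_cookie(cookie, max_age=600, now=None):
    """Check before running any server-side logic (steps 2 and 3).
    A missing, forged or expired cookie fails, and the request
    should be redirected to the landing page."""
    try:
        form_id, ts, sig = cookie.split(":")
    except (AttributeError, ValueError):
        return False
    expected = hmac.new(SECRET, (form_id + ts).encode(), hashlib.sha256).hexdigest()
    current = now if now is not None else time.time()
    return hmac.compare_digest(sig, expected) and current - int(ts) <= max_age
```

Embedding a timestamp in the signed value means the cookie expires by itself, which implements the regular rotation without any server-side state.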

Unfinished Story

This article shows only some of the ways web architects can build DDoS defense into the DNA of a web application. The methods described will increase a web application's ability to survive HTTP flooding attacks. As attacks become more application specific, web architects should take on DDoS defense responsibility and design systems that are both secure and scalable.

[1] http://www.owasp.org/index.php/OWASP_HTTP_Post_Tool

[2] http://www.securelist.com/en/analysis/204792126/Black_DDoS

[3] http://www.ihteam.net/advisory/make-requests-through-google-servers-ddos/