Category Archives: Technology

Microsoft tries to address PKI issues in IE11 (SmartScreen and SNDS)

Digital certificates are widely used, and the Internet cannot work without them. However, PKI (the framework digital certificates are based on) has many issues. Last year at the ISO SC27 meeting at ENISA there was a special session on PKI. Many issues were raised without reaching a conclusion, as with most issues brought to international meetings.

Microsoft, with a 10%–20% footprint of the browser market (depending on which report you read), is taking steps to manage this madness. A recent blog post, “A novel method in IE11 for dealing with fraudulent digital certificates”, explains their strategy. I think Microsoft’s action is very responsible and will help mitigate issues with fraudulent digital certificates. A certificate and its associated private key are very sensitive and must be handled with security in mind. In my more than ten years of audit experience, I have seen many engineers and administrators treat a private key the same as a configuration file. In most enterprises there is a general lack of documented procedures or best practices for administering digital certificates. Malicious attackers may abuse this weakness and create fraudulent certificates.

In IE11, Microsoft uses the SmartScreen Filter to detect and report high-risk uses of certificates. Three scenarios are explained in the blog post:

1. A website is using a certificate that is capable of being used as a subordinate CA. This would indicate the certificate was issued wrongly.

2. A website suddenly presents a different certificate only in a certain region, where a different CA issued the certificate. This might indicate a possible MITM attack in a specific country or region.

3. There was a sudden and significant change in the fields a CA includes in certificates it issues; for example, omission of or a change in the OCSP responder location. This would indicate the CA was either compromised or has not followed standard operating procedures.

There is a practicality issue with item 2 above for a 24×7 website. Suppose Apple’s administrators update the SSL certificate at midnight; users in the APAC region will be the first batch to see this updated, and therefore different, certificate. Will IE11 warn users about the new SSL certificate even though it was replaced as part of a normal refresh? I hope Microsoft will add intelligence to its detection algorithm and take the validity period of the old SSL certificate into consideration.
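To make the idea concrete, here is a minimal sketch (my own illustration, not Microsoft’s actual algorithm; the 30-day grace window is an arbitrary assumption) of how a detector could treat a certificate swap near the old certificate’s expiry as a probable routine renewal:

```python
# Sketch only: a renewal-awareness heuristic for certificate-change alerts.
# This is NOT Microsoft's algorithm; the 30-day grace window is an assumption.
from datetime import datetime, timedelta

def likely_routine_renewal(old_cert_not_after: datetime,
                           change_observed_at: datetime,
                           grace: timedelta = timedelta(days=30)) -> bool:
    """A certificate replaced within `grace` of the old certificate's
    expiry is most likely a scheduled refresh, not a MITM indicator."""
    return old_cert_not_after - change_observed_at <= grace

# Example: the old cert expires on 15 Jan and the new cert appears on 1 Jan,
# so the change looks like a normal refresh and the warning can be softened.
print(likely_routine_renewal(datetime(2014, 1, 15), datetime(2014, 1, 1)))
```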

Another important control Microsoft implemented is ” domain registrants could be notified by email when new certificates with their domain names appear in our database. The domain registrant would have the option to report suspicious certificates to us and notify the CA to revoke the suspicious certificate.” In short, Microsoft is sharing the uses of certificate of specific domain to who claimed to the domain owner. The domain owner will need to take action accordingly. This is a responsive strategy by increasing transparency. (There is a new trend in security industry on sharing info and responding timely, in additional to defence in depth principle. Will write on this trend later when I finish reading “Responsive Security” by Meng-Chow Kang)

It is a perfect design in theory. My first question is: who reads such warning emails? Does the email recipient understand the risks reported by SNDS? Time will tell.



Preparations for a blended IT environment

Although the author discussed preparations for hybrid cloud, his points apply to most IT organisations now: this growth in the use of cloud services requires IT managers to re-evaluate their role.

What role? Not only a builder but a broker. In most enterprises, IT managers will not build applications from scratch; cost and time constraints require them to source cloud applications while managing outsourcing risk, data privacy and security issues.

VMware Virtual Machine file descriptors security

Around 18 months ago, a security researcher reported a bug in the VMDK descriptor that allowed a user to access all drives on a VMware hypervisor. Today VMware released another vulnerability, “VMware ESXi and ESX unauthorized file access through vCenter Server and ESX”, which in their words allows “an unprivileged vCenter Server user with the privilege ‘Add Existing Disk’ to obtain read and write access to arbitrary files on ESXi or ESX.”

It seems the security design of ESX needs beefing up. The file permission checking and access right verification in ESX have major issues that lead to this type of privilege escalation. VMware should disclose more about the file access right design and fundamentally upgrade it, not just patch it.

With the Xmas holiday coming, I am not sure how soon this patch will be pushed to production environments!

Layer 7 DDoS Attack : A Web Architect Perspective

The arms race in cyber security makes protecting Internet resources harder and harder. In the past, DDoS was mostly at Layer 3 and Layer 4, but reports from various sources identify Layer 7 DDoS as the prevalent threat. The slide below from Radware explains the change in the DDoS trend: as protection against network traffic flooding matures, attackers shift their target to the application layer.

[Slide: Radware on the shift of DDoS attacks to Layer 7]

As DDoS attacks evolve and now target the application layer, the responsibility for protecting web applications no longer rests only on the shoulders of the CISO or the network security team. It is time for web application architects to go to the front line. This article analyses Layer 7 DDoS attacks and discusses some measures web application architects can deploy to mitigate their impact.

Types of Layer 7 DDoS Attack 

A study conducted by Arbor Networks showed that while attacks on DNS and SMTP exist, the most prevalent attack vector is still HTTP.

[Chart: Arbor Networks survey of DDoS attack types]

A Layer 7 DDoS attack usually targets the landing page of a website or a specific URL. If an attacker successfully consumes either the bandwidth or the OS resources (such as sockets) of the target, normal users are unable to access those resources.

A closer look at the targeted web resources

When developing a protection strategy for a website against Layer 7 DDoS attacks, we need to understand that not all web pages are created equal. Diagram 1 shows the different types of web pages. There are two ways to classify web server resources: one is based on their access rights, the other on their content type. When content is for registered users, there is usually a user authentication process that blocks unauthenticated HTTP requests, so pages accessible only to authenticated users are usually not the target of a DDoS attack unless the attacker has pre-registered a large number of user accounts and automated the login process. The login page, which usually uses HTTPS, is another web server resource that DDoS attackers will exploit, since the HTTPS handshake is a high-load process for the web server. Publicly accessible content carries a higher risk of HTTP flooding, and the impact of HTTP flooding differs by content type; overloading these resources for a long period of time means the DDoS attack has succeeded. A DDoS attack on static web pages usually impacts the outbound bandwidth and web server resources (such as the HTTP connection pool, sockets, memory and CPU), while dynamic pages generate backend database queries and so also impact the application server and database server. The types marked in red in the diagram face a higher risk of DDoS attack.

[Diagram 1: Web server resources classified by access right and content type; higher-risk types marked in red]

In the above paragraphs, we established a general understanding of Layer 7 DDoS attacks on different types of web resources. Below is a discussion of how to minimise the impact of a DDoS attack on website users. These steps do not replace Layer 3 and 4 DDoS defences and traffic filtering; however, unless your website is designed with these principles in mind, a carefully crafted Layer 7 DDoS attack could still bring down your website.

Protecting the Static Page

The most effective, and also most expensive, way to protect static pages against DDoS attack is to buy services from a reputable Content Delivery Network (CDN). However, the cost of running the whole website on a CDN may add up to a large sum. An alternative is to make use of cloud platforms and distribute graphics, Flash and JS files to one or more web servers located in another network. This method is already practised by most high-traffic websites, which have dedicated servers used for delivering images only.

Nowadays, most web pages are over 100K bytes once all graphics, CSS and JS are loaded. Using the LOIC DDoS tool, an attacker can easily consume all of the bandwidth and web server CPU resources. In an HTTP GET flood attack, both the inbound link and the outbound link can become a bottleneck. A web developer cannot do much about managing the inbound traffic; however, there are ways to lower the risk of the outbound link becoming a bottleneck while under DDoS attack. Web architects should monitor the inbound/outbound traffic ratio on web server network interfaces.
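As a rough illustration of that monitoring point (a sketch of my own; the psutil package and the interface name "eth0" are assumptions), the ratio can be sampled from the OS interface counters:

```python
# Sketch: sample the outbound/inbound byte ratio of a web server NIC.
# Assumes the psutil package and an interface named "eth0".
import time
import psutil

def traffic_ratio(interface: str = "eth0", interval: float = 10.0) -> float:
    """Return bytes_sent / bytes_recv over `interval` seconds."""
    before = psutil.net_io_counters(pernic=True)[interface]
    time.sleep(interval)
    after = psutil.net_io_counters(pernic=True)[interface]
    sent = after.bytes_sent - before.bytes_sent
    recv = after.bytes_recv - before.bytes_recv
    return sent / max(recv, 1)  # guard against division by zero

# A page-serving web server normally has a high out/in ratio; a sudden
# swing in either direction during peak traffic is worth an alert.
if __name__ == "__main__":
    print(f"outbound/inbound ratio: {traffic_ratio():.1f}")
```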

One simple way to increase a website’s defence against an HTTP GET flooding attack is to store images and other large media files on another server (either within the same data centre or in another data centre). Using a separate web server to deliver non-HTML static content lowers both the CPU load and the bandwidth consumption of the main website. This method amounts to a DIY CDN, and cloud platforms with on-demand charging schemes are excellent resources for deploying it. Since the graphic files are publicly accessible, placing them on a public cloud platform does not increase data security risk. Although this is a simple solution for static pages, there are a few points to note. First, the “height” and “width” attributes of each image should be defined in the HTML code; this ensures users see a proper screen layout even when the image server is inaccessible. Secondly, when choosing a cloud platform it is best to find one that does not share the same upstream connectivity provider as your primary data centre. When a DDoS attack happens, a good architecture should eliminate performance bottlenecks as much as possible.
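As a small sketch of the first point (my own illustration; the Pillow package and the image host name are assumptions), the real image dimensions can be baked into the markup when the page is generated:

```python
# Sketch: emit an <img> tag with explicit width/height so the page layout
# holds even when the separate image server is unreachable.
# Assumes the Pillow package; IMAGE_HOST is a hypothetical DIY-CDN server.
from PIL import Image

IMAGE_HOST = "http://img.example.net"  # served from a different network

def img_tag(local_path: str, url_path: str, alt: str = "") -> str:
    """Read the real dimensions at build time and fix them in the HTML."""
    with Image.open(local_path) as im:
        width, height = im.size
    return (f'<img src="{IMAGE_HOST}{url_path}" '
            f'width="{width}" height="{height}" alt="{alt}">')

# e.g. img_tag("static/banner.png", "/static/banner.png", "banner")
```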

Protecting the Dynamic Page

Dynamic pages are generated based on user input; they involve business logic at the application server and data queries at the database server. If the web application displays large amounts of data or accepts user-uploaded media files, an attacker could easily write a script to generate a large number of legitimate-looking requests and consume both bandwidth and CPU power. DDoS defence mechanisms that rely on filtering out malicious traffic via pattern recognition will not be of much use if the attack targets this weakness in dynamic pages. It is the web application developer’s responsibility to identify high-risk web pages and develop mitigation measures against DDoS attack.

As displaying large amounts of data based on user input becomes popular, web architects should develop a strategy to prevent misuse of high-load web pages, particularly when they are publicly available.

One way is to make use of a form-specific cookie and verify that the HTTP request was submitted from a valid HTML form. The steps would be:

  1. When the browser loads an HTML form, the web server sets a cookie specific to that form. When the user clicks the submit button, the cookie is sent along with the HTTP request.
  2. When the web server processes the user request, the server-side logic first checks whether the cookie is properly set before processing the request.
  3. If the cookie does not exist or its value is not correct, the server-side logic should forward the request to a landing page that displays a URL link pointing back to the HTML form.

The diagram below shows the logical flow.

[Diagram: logical flow of the form-specific cookie check]

The principle of this method is to establish an extra check before executing the server-side logic behind the dynamic page, thus preserving CPU resources on the application server and database server. Most DDoS attack tools have only simple HTTP functions and are unable to process cookies. Capturing the HTTP requests of the LOIC DDoS attack tool with Wireshark shows that LOIC does not accept cookies from websites; a similar result is obtained with the HOIC tool.

[Screenshot: Wireshark capture of LOIC traffic showing no cookies in the HTTP requests]
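A minimal sketch of the three steps above, assuming Flask (the route names and cookie name are my own, purely for illustration):

```python
# Sketch of the form-specific cookie check, assuming Flask.
# Route names and the cookie name are illustrative only.
import secrets
from flask import Flask, request, redirect, make_response

app = Flask(__name__)
FORM_COOKIE = "form_token"

@app.route("/report-form")
def report_form():
    # Step 1: set a form-specific cookie when the HTML form is served.
    resp = make_response('<form action="/report" method="post">'
                         '<input name="q"><input type="submit"></form>')
    resp.set_cookie(FORM_COOKIE, secrets.token_hex(16))
    return resp

@app.route("/report", methods=["POST"])
def report():
    # Steps 2 and 3: reject requests without the cookie before any
    # application-server or database work is done.
    if not request.cookies.get(FORM_COOKIE):
        return redirect("/report-form")  # landing page with the form link
    return f"results for {request.form.get('q', '')}"
```

A presence check alone defeats tools that ignore cookies; validating the value, discussed next, also defeats simple replay.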

A DDoS attacker may record this cookie and send it within the attack traffic, which could easily be done with a script. To compensate, the form-specific cookie should be changed regularly or rotated according to a triggering logic.
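One way to implement that rotation (again a sketch of my own; the secret and the ten-minute bucket are arbitrary choices) is to derive the cookie value from an HMAC over a time bucket, so a recorded value goes stale by itself:

```python
# Sketch: self-rotating form token as an HMAC over a time bucket, so a
# cookie value replayed by an attack script expires automatically.
# SECRET and the 10-minute bucket size are illustrative assumptions.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # in practice, load from configuration
BUCKET_SECONDS = 600             # token rotates every 10 minutes

def current_token(offset: int = 0) -> str:
    bucket = int(time.time()) // BUCKET_SECONDS + offset
    return hmac.new(SECRET, str(bucket).encode(), hashlib.sha256).hexdigest()

def token_is_valid(token: str) -> bool:
    # Accept the current and previous bucket to tolerate form pages
    # loaded just before a rotation boundary.
    return any(hmac.compare_digest(token, current_token(o)) for o in (0, -1))
```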

Unfinished Story

This article has shown some ways web architects can build DDoS defence into the DNA of a web application. The methods described will increase a web application’s ability to survive an HTTP flooding attack. As attacks become more application specific, web architects should take on DDoS defence responsibility and design systems that are both secure and scalable.

[1] http://www.owasp.org/index.php/OWASP_HTTP_Post_Tool

[2] http://www.securelist.com/en/analysis/204792126/Black_DDoS

[3] http://www.ihteam.net/advisory/make-requests-through-google-servers-ddos/

Cloud Computing in Singapore Financial Industry

The cloud computing industry is well developed in Singapore, so it is not a big surprise to see that the MAS TRM guidelines have a section dedicated to cloud computing. Reading the document as a whole, it seems MAS is accepting the fact that cloud computing is, or will be, part of the financial industry’s development.

Section 5.2 Cloud Computing is grouped under a bigger topic, IT Outsourcing. For banks, the use of third-party computing resources is indeed a form of outsourcing; operationally and legally, the relationship between banks and cloud service providers is not much different.

From the text “Outsourcing can involve the provision of IT capabilities and facilities by a single third party or multiple vendors located in Singapore or abroad”, one can assume that outsourcing to overseas cloud computing is possible. The statement does not restrict Singapore data from being stored or processed abroad. This is important, as most international organisations host their applications centrally in regional hubs. However, it does have some catches.

The TRM guideline (5.2.3 and 5.2.4) does not discuss much of the technical side of cloud computing; rather, it stresses the importance of data governance, which includes data segregation and removal of data on exit. I believe this is due to the enforcement of the banking secrecy principle (more details are available on the MAS website).

In a cloud computing setup, deleting all information related to one entity is tricky and costly. It would be possible for an IaaS deployment, where the data is stored in disk images. For SaaS or other data services, identifying every piece of data owned by the exiting entity will be a daunting task! The data schema must be able to cater for this unique requirement. Unless it is considered while the cloud service provider is developing the system, the cost of manually deleting data is going to escalate.
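A hypothetical sketch of that schema requirement (my own illustration, using SQLite only for brevity): if every record carries the owning entity’s identifier from day one, removal on exit becomes one bounded delete instead of a forensic search.

```python
# Sketch: tag every record with the owning tenant so "removal of data on
# exit" is a single bounded delete. SQLite is used here only for brevity.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transactions (
                    id        INTEGER PRIMARY KEY,
                    tenant_id TEXT NOT NULL,   -- identifies the bank/entity
                    payload   TEXT)""")
conn.execute("CREATE INDEX idx_txn_tenant ON transactions(tenant_id)")
conn.executemany(
    "INSERT INTO transactions (tenant_id, payload) VALUES (?, ?)",
    [("bank_a", "txn 1"), ("bank_b", "txn 2"), ("bank_a", "txn 3")])

# When bank_a exits, its data is removed with one indexed statement:
conn.execute("DELETE FROM transactions WHERE tenant_id = ?", ("bank_a",))
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0])  # -> 1
```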


Car Rental is more promising than ever

When capture and storage technology is so cheap, it is tempting for governments to store everything. In this case, car plate images.

I guess the car rental business has another marketing theme to explore! Soon we will see computer rental and mobile phone rental. When trust is gone, people are willing to try extreme measures.

There is a book that offers a critical review of the abundance of surveillance technology:

Critical Issues in Crime and Society: Surveillance in the Time of Insecurity. New Brunswick, NJ, USA: Rutgers University Press.

You are being tracked. How license plate readers are being used to record Americans’ movements (ACLU, July 2013) – A little noticed surveillance technology, designed to track the movements of every passing driver, is fast proliferating on America’s streets. Automatic license plate readers, mounted on police cars or on objects like road signs and bridges, use small, high-speed cameras to photograph thousands of plates per minute. The information captured by the readers – including the license plate number, and the date, time, and location of every scan – is being collected and sometimes pooled into regional sharing systems. As a result, enormous databases of innocent motorists’ location information are growing rapidly. This information is often retained for years or even indefinitely, with few or no restrictions to protect privacy rights.

What a 4-hour RTO means

In the last post I mentioned an analysis done by a group of VCPs. In their presentation, one slide is worth more discussion: the 4-hour RTO defined in the MAS notice to banks.

Recovery time objective is a well-established concept, and I have been seeing it in large-scale project design documents and procurement RFPs. Wikipedia has this definition: “The recovery time objective (RTO) is the duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity.”

The reader has to distinguish between recovering to full service and recovering to a service level. When disaster happens, everything has to be prioritised; not all programs are equal when you have limited resources and time. We may not expect to pay a telephone bill via ATM during serious flooding, but we do expect the ATM to still let us withdraw money.

The slide (shown below) highlighted the time gap between when an event happens and when a disaster is declared. Due to the complexity of current systems and networks, fully assessing a system malfunction may take hours, and the incident handling procedure usually requires a few rounds of clarification (if not finger pointing) before senior staff are informed of the major outage. For example, if detection and assessment take ninety minutes and escalation another hour, only ninety minutes remain for the actual recovery work. How a bank responds to an outage is now a critical element in meeting the MAS requirement on RTO. The authors of this slide contended that the time left is far less than four hours and that manual steps are not going to meet the requirement. I believe they have a point.

Will the MAS TRM requirements and notice make 24×7 internet banking a white elephant? Let us wait for the 2014 DBS annual report and find out their cost ratio.
