Category Archives: Cloud

The Most Read Terms of Service: what an age-guessing tool tells us about our uses of personal information

In the last 48 hours, age became a hot topic on Facebook, thanks to How-Old.net, Microsoft's free age-guessing online tool. It proves age is still a contentious topic, regardless of gender, race and, obviously, age. A marvellous marketing gimmick!

As always happens, once a story catches fire, a few risk-averse or investigative minds start to dig deeper and uncover an inconvenient truth: the terms of this service authorise Microsoft to use user photos for more than just age-guessing. Exactly what the future uses are is unknown!

Having worked in cloud computing and outsourcing for the last five years, I do not find such user terms and conditions unusual. Most of them are crafted so that almost all risks are excluded from the service provider's liability. Legal counsel are paid to read all reported and unreported court cases and to protect a company like Microsoft in this case.

The basic assumptions of data privacy protection are in question here, and this case offers a chance to review them.

Is consent from the user enough?

For How-Old.net, clearly the user's intention in uploading a photo is to find out age and gender. Users don't expect it to tell whether they have diabetes, or their sexuality (that may be possible with enough data points!). However, the service provider's terms open the possibility of other uses of the photo without specifying what those will be. Service providers are giving themselves elbow room for future innovation. This is actually a typical way commercial terms respond to data privacy legislation.

Most data privacy laws require that uses of personal data be informed and specific. The rationale is that as long as users consent to the uses of their PII, there is NO violation of data privacy law. However, we have seen software and web service terms try to cover an extensive, and sometimes unrestricted, scope of uses. Users are either lured into giving consent or ignore the terms completely. Users give consent rather spontaneously!

 

For those who like to read the legal terms, here is an extract.

However, by posting, uploading, inputting, providing, or submitting your Submission, you are granting Microsoft, its affiliated companies, and necessary sublicensees permission to use your Submission in connection with the operation of their Internet businesses (including, without limitation, all Microsoft services), including, without limitation, the license rights to: copy, distribute, transmit, publicly display, publicly perform, reproduce, edit, translate, and reformat your Submission; to publish your name in connection with your Submission; and to sublicense such rights to any supplier of the Website Services.

In AWS cloud contracts (as in life), read before signing

Gigaom

Lawyers say never to sign (or click on) anything without reading it first, but that rule typically goes out the window when it comes to complex-yet-boring end user licensing agreements (EULAs) and other software licenses.

As John Oliver said in his epic net neutrality screed: “If you want to do something evil, put it inside something boring. Apple could put the entire text of Mein Kampf inside the iTunes user agreement and you’d just go: Agree. Agree. Agree.”

That read-before-clicking mantra holds true for license agreements from cloud providers as well. For example, I would bet that when many startups — which often don’t have legal departments — sign on for Amazon Web Services, they don’t check out all the verbiage fully. And they should.

In particular, there is a provision in the AWS customer agreement that they really should scrutinize. The contract’s Section 8.5 on license restrictions includes the usual restrictions…


No single prediction is perfect, so I look at four

As 2015 approaches, it is time for new year resolutions and wishes. In the security industry, we are busy preparing for another eventful year!

When preparing our budgets and project portfolios, it may be useful to look at predictions from leading security vendors. Cyber security is an intelligence game: can the Websense, Sophos, FireEye and TrendMicro predictions help us? I will write another post with my thoughts.

Legend: orange cells relate directly to smartphones; red text relates to payment systems.

2015 Cyber Security Predictions

| Websense | Sophos | FireEye | TrendMicro |
| --- | --- | --- | --- |
| Healthcare will see a substantial increase in data-stealing attack campaigns | Exploit mitigations reduce the number of useful vulnerabilities | Mobile and web-based viruses remain a scourge, and hardly a week goes by without another data breach or new malware | More cybercriminals will turn to darknets to share attack tools, stage attacks, and market stolen goods |
| Attacks on the Internet of Things will focus on business use cases, not consumer products | Internet of Things attacks move from proof-of-concept to mainstream risks | Mobile ransomware will surge in popularity; after Cryptolocker's measure of success this year, attention is expected to turn further to mobile, to gain access to phones and contacts | There will be bolder hacking attempts as cyber activity increases |
| Credit card thieves will morph into information dealers | Encryption becomes standard, but not everyone is happy about it | Point-of-sale (PoS) attacks will become a more popular method of stealing data and money, striking a broader group of victims with increasing frequency | An exploit kit that specifically targets Android users will surface |
| Authentication consolidation on the phone will trigger data-specific exploits, but not for stealing data on the phone | More major flaws in widely used software that escaped the security industry's notice over the past 15 years | As retailers strengthen their defenses and more criminals get into the game, cyberattacks will spread to "middle layer" targets, including payment processors and PoS management firms | Targeted attacks will become the norm |
| New vulnerabilities will emerge from decades-old source code | Regulatory landscape forces greater disclosure and liability, particularly in Europe | Attacks on the enterprise supply chain will surge, as less mature or less financially able companies become weak links in an ecosystem where only top firms can bolster their defenses to acceptable standards | Bugs in open source apps will continue to be exploited |
| Email threats will take on a new level of sophistication and evasiveness | Attackers increase focus on mobile payment systems, but stick more to traditional payment fraud for a while | Lack of adequate response could result in a major brand going out of business | New mobile payment methods will introduce new threats |
| As companies increase access to cloud and social media tools, command and control instructions will increasingly be hosted on legitimate sites | Global skills gap continues to increase, with incident response and education a key focus | With such risks in the corporate realm, cyber insurance as an industry is expected to grow | We won't see head-on IoE/IoT device attacks, but the data they process will tell another story |
| There will be new (or newly revealed) players on the global cyber espionage/cyber war battlefield | Attack services and exploit kits arise for mobile (and other) platforms | | More severe online banking and other financially motivated threats will surface |
| | The gap between ICS/SCADA and real-world security only grows bigger | | |
| | Interesting rootkit and bot capabilities may turn up new attack vectors | | |

Soon will come the software defined transaction (SDT) age.

“It’s comforting to imagine that, in the end, the power of innovative technologies and business models will win out over status-quo thinking and entrenched interests, all for the public good.”

From a security and risk management point of view, a central party, or in the author's words "the powers that have traditionally controlled those transactions", provides assurance on quality of service, security and privacy protection. However, with new technologies, most of these assurance features could be delivered by software.

Soon will come the software defined transaction (SDT) age.

 

April 27, 2014


Preparations for a blended IT environment

Although the author discusses preparations for hybrid cloud, his points apply to most IT organisations now: this growth in the use of cloud services requires IT managers to re-evaluate their role.

What role? Not only a builder but a broker. In most enterprises, IT managers will not build applications from scratch; cost and time constraints require them to source cloud applications while managing outsourcing risk, data privacy and security issues.

VMware virtual machine file descriptor security

Around 18 months ago, a security researcher reported a bug in the VMDK descriptor that allowed a user to access all drives on a VMware hypervisor. Today VMware disclosed another vulnerability, "VMware ESXi and ESX unauthorized file access through vCenter Server and ESX", which in their words allows "an unprivileged vCenter Server user with the privilege 'Add Existing Disk' to obtain read and write access to arbitrary files on ESXi or ESX".

It seems the security design of ESX needs to be beefed up. The file permission checking and access right verification in ESX have a major issue that causes this type of privilege escalation. VMware should disclose more of the file access right design and fundamentally upgrade it, not just patch it.

With the Christmas holiday coming, it is unclear how soon this patch will be pushed to production environments!

Layer 7 DDoS Attacks: A Web Architect's Perspective

The arms race in cyber security makes protecting Internet resources harder and harder. In the past, DDoS attacks were mostly at Layer 3 and Layer 4, but reports from various sources identify Layer 7 DDoS as the prevalent threat. The slide below from Radware explains this shift: while protection against network traffic flooding is mature, attackers have moved their target to the application layer.

[Slide: Radware, the shift of DDoS attacks from the network layers to Layer 7]

As DDoS attacks evolve to target the application layer, the responsibility for protecting web applications no longer rests only on the shoulders of the CISO or the network security team; it is time for web application architects to go to the front line. This article analyses Layer 7 DDoS attacks and discusses measures web application architects can deploy to mitigate their impact.

Types of Layer 7 DDoS Attack 

A study conducted by Arbor Networks showed that while attacks on DNS and SMTP exist, the most prevalent attack vector is still HTTP.

[Chart: Arbor Networks, prevalence of Layer 7 DDoS attack vectors (HTTP, DNS, SMTP)]

A Layer 7 DDoS attack usually targets the landing page of a website or a specific URL. If an attacker successfully consumes either the bandwidth or the OS resources (such as sockets) of the target, normal users can no longer access these resources.

A closer look at the targeted web resources

When developing a protection strategy against Layer 7 DDoS attacks, we need to understand that not all web pages are created equal. Diagram 1 shows the different types. There are two ways to classify web server resources: by access rights and by content type.

When content is restricted to registered users, a user authentication process usually blocks unauthenticated HTTP requests, so pages accessible only to authenticated users are rarely the target of a DDoS attack unless the attacker pre-registers a large number of accounts and automates the login process. The login page itself, which usually uses HTTPS, is another web server resource that DDoS attackers exploit, since the HTTPS handshake is a high-load process for the web server. Publicly accessible content carries a higher risk of HTTP flooding.

The impact of HTTP flooding also differs by content type. A DDoS attack on static pages mainly consumes outbound bandwidth and web server resources (HTTP connection pool, sockets, memory and CPU), while dynamic pages generate backend database queries and so also load the application server and database server. Overloading any of these resources for a long period means the DDoS attack has succeeded. The types marked in red in the diagram face a higher risk of DDoS attack.

[Diagram 1: Types of web server resources, classified by access rights and content type]

The paragraphs above establish a general understanding of Layer 7 DDoS attacks on different types of web resources. Below is a discussion of how to minimise the impact of a DDoS attack on website users. These measures do not replace Layer 3 and 4 DDoS defenses and traffic filtering; however, unless your website is designed with these principles in mind, a carefully crafted Layer 7 DDoS attack could still bring it down.

Protecting the Static Page

The most effective, and also the most expensive, way to protect static pages against DDoS attacks is to buy service from a reputable Content Delivery Network (CDN). However, the cost of running a whole website on a CDN may add up to a large sum. An alternative is to make use of cloud platforms and distribute graphics, Flash and JS files to one or more web servers located on another network. Most high-traffic websites already practise this, with dedicated servers used for delivering images only.

Nowadays, most web pages exceed 100 KB once all graphics, CSS and JS are loaded. Using the LOIC DDoS tool, an attacker can easily consume all the bandwidth and web server CPU resources. In an HTTP GET flood attack, both the inbound and the outbound link can become a bottleneck. A web developer cannot do much about inbound traffic, but there are ways to lower the risk of the outbound link becoming a bottleneck under DDoS attack. Web architects should monitor the inbound/outbound traffic ratio on web server network interfaces.
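As a rough illustration of that monitoring, the sketch below samples the OS network counters and computes the ratio. It assumes the third-party psutil library and an interface named eth0; neither is mentioned in the original post.

```python
import time
import psutil  # third-party: pip install psutil

IFACE = "eth0"  # assumed interface name; adjust for your server

def sample(iface):
    # Read cumulative byte counters for one network interface.
    counters = psutil.net_io_counters(pernic=True)[iface]
    return counters.bytes_recv, counters.bytes_sent

rx0, tx0 = sample(IFACE)
time.sleep(60)  # sampling window of one minute
rx1, tx1 = sample(IFACE)

inbound, outbound = rx1 - rx0, tx1 - tx0
ratio = outbound / max(inbound, 1)  # avoid division by zero
print(f"in={inbound}B out={outbound}B out/in ratio={ratio:.1f}")
# A sudden jump in the out/in ratio can hint that the outbound link
# is being saturated by an HTTP GET flood against large static files.
```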

One simple way to harden a website against HTTP GET flooding is to store images and other large media files on another server (either within the same data centre or in a different one). Using a separate web server to deliver non-HTML static content lowers both the CPU load and the bandwidth consumption of the main website. This method amounts to a DIY CDN, and cloud platforms with on-demand charging are an excellent resource for deploying it. Since the graphic files are publicly accessible, placing them on a public cloud platform does not increase data security risk. Although this is a simple solution for static pages, there are a few points to note. First, the "height" and "width" attributes of each image should be defined in the HTML code; this ensures users see a proper screen layout even when the image server is inaccessible. Second, when choosing a cloud platform, it is best to find one that does not share the same upstream connectivity provider as your primary data centre. When a DDoS attack happens, a good architecture should eliminate performance bottlenecks as much as possible.
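Both points can be illustrated with a small helper that emits img tags pointing at the offload server, with explicit dimensions read from the file. This is a sketch only; the Pillow dependency and the host name static.example-cdn.com are assumptions for illustration.

```python
from PIL import Image  # third-party: pip install Pillow

STATIC_HOST = "https://static.example-cdn.com"  # hypothetical offload server

def img_tag(local_path, url_path, alt=""):
    # Read the real pixel dimensions so the browser can reserve the
    # layout box even if the image server is unreachable during an attack.
    with Image.open(local_path) as im:
        width, height = im.size
    return (f'<img src="{STATIC_HOST}{url_path}" '
            f'width="{width}" height="{height}" alt="{alt}">')

print(img_tag("assets/logo.png", "/assets/logo.png", alt="Company logo"))
# e.g. <img src="https://static.example-cdn.com/assets/logo.png"
#      width="320" height="120" alt="Company logo">
```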

Protecting the Dynamic Page

Dynamic pages are generated based on user input; they involve business logic on the application server and data queries on the database server. If the web application displays large amounts of data or accepts user-uploaded media files, an attacker can easily write a script that generates a large number of legitimate-looking requests and consumes both bandwidth and CPU power. DDoS defense mechanisms that rely on filtering out malicious traffic via pattern recognition will not be of much use if an attack targets this weakness in dynamic pages. It is the web application developer's responsibility to identify high-risk web pages and develop mitigation measures to prevent DDoS attacks.

As displaying large amounts of data based on user input becomes more popular, web architects should develop strategies to prevent misuse of high-load web pages, particularly when they are publicly available.

One way is to make use of a form-specific cookie and verify that each HTTP request was submitted from a valid HTML form; a code sketch follows the diagram below. The steps would be:

  1. When the browser loads an HTML form, the web server sets a cookie specific to this form. When the user clicks the submit button, the cookie is sent along with the HTTP request.
  2. When the web server processes the user request, the server-side logic first checks whether the cookie is properly set before processing the request.
  3. If the cookie does not exist or its value is not correct, the server-side logic should forward the request to a landing page that displays a URL link pointing to the HTML form.

The diagram below shows the logical flow.

[Diagram: logical flow of the form-specific cookie check for dynamic pages]
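To make the flow concrete, here is a minimal sketch in Python; Flask, the routes and the cookie name are illustrative assumptions, as the original post does not name a framework.

```python
import secrets
from flask import Flask, request, make_response, redirect

app = Flask(__name__)
FORM_COOKIE = "form_token"  # assumed cookie name

@app.route("/search", methods=["GET"])
def show_form():
    # Step 1: set a form-specific cookie when the HTML form is served.
    resp = make_response('<form method="post" action="/search">'
                         '<input name="q"><button>Search</button></form>')
    resp.set_cookie(FORM_COOKIE, secrets.token_urlsafe(16))
    return resp

@app.route("/search", methods=["POST"])
def handle_form():
    # Step 2: check the cookie before running any expensive backend logic.
    if not request.cookies.get(FORM_COOKIE):
        # Step 3: no valid cookie -- send the client back to the form page
        # instead of hitting the application and database servers.
        return redirect("/search")
    return run_expensive_query(request.form.get("q", ""))

def run_expensive_query(q):
    # Placeholder for the real server-side logic (database queries etc.).
    return f"results for {q!r}"
```

This bare version only checks that the cookie exists; verifying that its value is correct, and rotating it, is sketched further below.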

The principle of this method is to establish an extra check before executing the server-side logic behind the dynamic page, thus preserving CPU resources on the application server and database server. Most DDoS attack tools have only simple HTTP functions and cannot process cookies. Capturing HTTP requests from the LOIC DDoS attack tool with Wireshark shows that LOIC does not accept cookies from websites. A similar result is obtained with the HOIC tool.

[Screenshot: Wireshark capture of LOIC traffic, showing no cookie handling]

A DDoS attacker may record this cookie and replay it within the attack traffic, which could easily be done with a script. To compensate, the form-specific cookie should be changed regularly or rotated according to some triggering logic.
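One possible implementation of such rotation, offered as an assumption rather than the author's method, derives the expected cookie value from a server secret and the current time bucket, so that recorded values expire on their own:

```python
import hashlib
import hmac
import time

SECRET = b"change-me"      # server-side secret (assumed)
BUCKET_SECONDS = 300       # rotate the expected value every 5 minutes

def expected_token(offset=0):
    # HMAC over the current time bucket; changes automatically as time passes.
    bucket = int(time.time()) // BUCKET_SECONDS + offset
    return hmac.new(SECRET, str(bucket).encode(), hashlib.sha256).hexdigest()

def token_is_valid(value):
    # Accept the current and the previous bucket so users who loaded the
    # form just before a rotation are not rejected.
    return any(hmac.compare_digest(value, expected_token(o)) for o in (0, -1))
```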

Unfinished Story

This article has shown some of the ways web architects can build DDoS defense into the DNA of a web application. The methods described will increase a web application's ability to survive HTTP flooding attacks. As attacks become more application-specific, web architects should take on DDoS defense responsibility and design systems that are both secure and scalable.

[1] http://www.owasp.org/index.php/OWASP_HTTP_Post_Tool

[2] http://www.securelist.com/en/analysis/204792126/Black_DDoS

[3] http://www.ihteam.net/advisory/make-requests-through-google-servers-ddos/

Cloud Computing in Singapore Financial Industry

The cloud computing industry is well developed in Singapore, so it is not a big surprise to see that the MAS TRM guidelines have a section dedicated to cloud computing. Reading the document as a whole, it seems MAS accepts that cloud computing is, or will be, part of the financial industry's development.

Section 5.2, Cloud Computing, is grouped under the bigger topic of IT Outsourcing. For banks, the use of third-party computing resources is indeed a form of outsourcing; operationally and legally, the relationship between banks and cloud service providers is not much different.

From the text "Outsourcing can involve the provision of IT capabilities and facilities by a single third party or multiple vendors located in Singapore or abroad", one can assume that outsourcing to overseas cloud computing is possible. The statement does not restrict Singapore data from being stored or processed abroad. This is important, as most international organisations host their applications centrally in regional hubs. However, there are some catches.

The TRM guidelines (5.2.3 and 5.2.4) do not discuss much of the technical side of cloud computing; rather, they stress the importance of data governance, which includes data segregation and removal of data on exit. I believe this is due to the enforcement of the banking secrecy principle (more details are available on the MAS website).

In a cloud computing setup, deleting all information related to one entity is tricky and costly. It would be feasible for IaaS deployments, where the data are stored in disk images. For SaaS or other data services, identifying every piece of data owned by the exiting entity is a daunting task! The data schema must be able to cater for this unique requirement; unless it is considered while the cloud service provider is developing the system, the cost of manually deleting data will escalate.
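As an illustration of what catering for this in the schema could mean (my sketch, not something from the guideline): if every table carries a tenant identifier, the provider can segregate and delete an exiting customer's rows with a single query; without such a column, each record would have to be classified by hand.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Every business table carries tenant_id so data can be
    -- segregated per customer and removed on exit.
    CREATE TABLE accounts (
        id        INTEGER PRIMARY KEY,
        tenant_id TEXT NOT NULL,
        name      TEXT NOT NULL
    );
    CREATE INDEX idx_accounts_tenant ON accounts(tenant_id);
""")

def remove_tenant_data(tenant_id):
    # Exit clause: delete everything owned by the departing entity.
    conn.execute("DELETE FROM accounts WHERE tenant_id = ?", (tenant_id,))
    conn.commit()

conn.execute("INSERT INTO accounts (tenant_id, name) VALUES (?, ?)",
             ("bank-a", "Settlement account"))
remove_tenant_data("bank-a")
```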

 

What a 4-hour RTO means

In the last post I mentioned an analysis done by a group of VCPs. In their presentation, one slide is worth further discussion: the 4-hour RTO defined in the MAS notice to banks.

Recovery time objective is a well-established concept; I have been seeing it in large-scale project design documents and procurement RFPs. Wikipedia has this definition: "The recovery time objective (RTO) is the duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity."

The reader has to distinguish between recovery to full service and recovery to a service level. When disaster happens, everything has to be prioritised; not all programs are equal when you have limited resources and time. We may not expect to pay a telephone bill via ATM when there is serious flooding, but we do expect the ATM to still let us draw money.

The slide (shown below) highlights the time gap between the moment an event happens and the moment a disaster is declared. Due to the complexity of current systems and networks, fully assessing a system malfunction may take hours, and the incident handling procedure usually requires a few rounds of clarification (if not finger pointing) before senior staff are informed of the major outage. How a bank responds to an outage is now a critical element in meeting the MAS requirement on RTO. The authors of the slide contend that the time actually left for recovery is far less than four hours (if detection and escalation consume, say, two hours, only two remain for the failover itself) and that manual steps are not going to meet the requirement. I believe they have a point.

Will the MAS TRM requirements and notice make 24×7 internet banking a white elephant? Let us wait for the 2014 DBS annual report and find out their cost ratio.
