Writing secure code

31 May 2006

Security can seem a daunting subject but there are a few basic concepts and simple techniques that can help you build more secure applications. As Matt Nicholson explains, you need to think like a hacker and adopt a mind-set that makes you suspicious of every item of data that can come into your system.

Originally published on DNJ Online, May 2006

Every hour of the day, every day of the year, someone is trying to break into your system. Most of these attacks are automated – spiders tirelessly scanning your ports, looking for a way in. It doesn’t matter whether you’re an international bank or a one-man band, these programs are looking for weaknesses that they can report back to their owners for evaluation – and possibly a more sophisticated follow-up attack.

Until recently, such attacks have concentrated on your operating systems and network infrastructure. However, as companies like Microsoft put more resources into plugging the security holes in their software, attackers have realised there is an easier way: through the applications that you write to run on these systems. Few companies have the resources or expertise of Microsoft when it comes to resolving security issues, and if the application is on the Internet then the attacker can access it in a fairly anonymous fashion from almost anywhere in the world.

The problem – or opportunity, if you look at it from the attacker’s point of view – is that the application development processes adopted by most organisations rarely take security into account. Applications are tested in ways that will often throw up bugs that could be exploited by a wilful attack, but tests are rarely formulated with that goal specifically in mind. Furthermore, security is rarely part of the initial design process, where analysing the application architecture from an attacker’s point of view could lock doors and block holes right from the start.

Lines of defence

From the attacker’s point of view, an application has much in common with a medieval fortress. Until the invention of gunpowder, there was little point in a direct attack on the fabric of the walls themselves. Instead, attacks focused on points of weakness such as the main gates – and if a secret tunnel could be found that might not be so well guarded, then so much the better.

Ken Thompson, who together with Dennis Ritchie had created Unix more than a decade earlier, admitted in his 1983 Turing Award lecture that he had constructed a version of the C compiler used to compile Unix in such a way that it would recognise when the ‘login’ command was being compiled and insert code accepting a password known only to him, so allowing him to log in to any Unix system. The C compiler was itself written in C, so in addition he arranged for the compiler to recognise when it was compiling a version of itself and re-insert the code that created the back door, even if someone had realised what was happening and removed it from the compiler’s own source code. Thompson has stated that this compiler was never distributed. (see ‘Reflections on Trusting Trust’ at http://cm.bell-labs.com/who/ken/trust.html)

If your systems are accessible from the Internet, then it is your firewall and your Web server that form the first line of defence. Even if your system does not offer a Web or FTP interface, it may still be accessible through a VPN, a wireless LAN or even a dial-up modem. Failing that, a little social engineering and your attacker could be sitting at a secluded terminal in your accounts department with not much more than a log-in screen between him or her and your crown jewels.

Your attacker may have been contracted to steal a copy of your database of credit card details, or the plans for your prototype, or your client list, but his first goal is to find as many valid username and password combinations as possible. Once he has those, he can pass himself off as someone that the system will accept as a legitimate user, and eventually as an administrator with full control of your whole network. Furthermore, if your attacker is one of your company’s employees, then he may well already have a foot in the door.

Which is where we can learn from history. Only the most basic of medieval castles presented one line of defence. Most had an outer wall, defended by the common foot-soldiers, and then an inner keep guarded by an elite force charged with granting entrance to only the most trusted. Even today, we lock our front doors and set our alarms, but our most precious valuables are stored in a safe with a combination lock so they remain protected even if the burglar does make it into the house.

Most applications similarly break down into two distinct domains: the application itself, perhaps running on the .NET Framework, and the underlying database which could be running on SQL Server or Oracle. While the developer will need administration rights over the database, the same does not apply to the application.

Indeed, if written in a secure fashion, the application doesn’t need direct access to the tables at all. It makes much more sense to restrict access to stored procedures that provide only the functionality that the application requires, and that do at least check the type of any parameters that may be thrown in their direction. A typical data access snippet that offers some degree of protection, written in Visual Basic for an ASP.NET application, might look like this:

Dim sDataSource As String = ConfigurationSettings.AppSettings("DSN")
Dim oConnection As New SqlConnection(sDataSource)
Dim oCommand As New SqlCommand("GetRecord", oConnection)
oCommand.CommandType = CommandType.StoredProcedure
oCommand.Parameters.Add("@RecordID", SqlDbType.Int).Value = sRecordID
Dim oDataReader As SqlDataReader
oConnection.Open()
oDataReader = oCommand.ExecuteReader()

The corresponding stored procedure might look like this:

CREATE PROCEDURE dbo.GetRecord
@RecordID INT
AS SELECT * FROM Users WHERE UserID=@RecordID

It is also of course important to hide the log-in details of the application itself, and in the above we are storing them in the web.config file for the application under the key “DSN”. Internet Information Server will protect this file from outside access, but the same is not necessarily true of any backup copies or ‘works in progress’ that you may have accidentally uploaded to the Web server.
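For example, web.config might hold the connection string under the “DSN” key like this (the connection details below are hypothetical; better still, use integrated security rather than embedding a password at all):

```xml
<configuration>
  <appSettings>
    <!-- Hypothetical connection string for the examples above -->
    <add key="DSN"
         value="server=dbserver;database=Shop;uid=webapp;pwd=..." />
  </appSettings>
</configuration>
```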

If you are building a more complex solution that combines a number of interacting applications, resources and services, then you need to think defensively right from the initial design process. The idea is to ensure that the components that interact directly with the end user, or with external systems, operate at a lower privilege level than components deeper into the system. For each application in the solution:

  1. List all the resources that the application needs to access;
  2. Write down what the application needs to do with each resource (such as read, write, create, delete);
  3. Work out what is the lowest privilege level that allows these actions.
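The three steps above amount to building a simple audit table. A minimal Python sketch, in which the resource names and the action-to-privilege mapping are purely illustrative:

```python
# Step 1: list the resources the application touches, and
# Step 2: what it needs to do with each one.
resources = {
    "Orders table":   {"read", "write"},
    "Products table": {"read"},
    "Audit log":      {"create"},
}

# Step 3: map each action to the least SQL-style privilege that permits it.
privilege_for = {"read": "SELECT", "write": "UPDATE",
                 "create": "INSERT", "delete": "DELETE"}

# The grant list per resource is the smallest set covering its actions.
grants = {res: sorted(privilege_for[a] for a in actions)
          for res, actions in resources.items()}
```

Anything not in `grants` – in particular, broad rights such as table creation or schema changes – is simply never granted to the application’s account.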

As you proceed you may find that one particular operation requires a higher level of access than the others. In this case you may be able to move it to another application, further away from the outside world, that is already running at a higher privilege level.

The Distributed System Designers that come with Visual Studio Team System for Software Architects are particularly well suited to this kind of exercise. The .NET Framework also introduced Code Access Security, which could be useful here as it allows you to specify the privilege level at which your code runs, irrespective of that assigned to the end user.

Guarding the gate

Creating a totally secure application is actually very simple: just don’t let anyone use it. Unfortunately no-one is going to pay you to do that, so you need to strike a balance between usability and security. You are going to have to let users enter data into your application, but you need to know who they are so you can verify that they are authorised to do so; and you need to ensure that the data they enter is not going to affect your application in an unexpected fashion.

If you are writing a Windows Forms application that is going to run on the client’s machine, then you can probably find out all you need to know about the user from the operating system. However, you might still want to use Code Access Security to control the privilege level at which the application runs. If you are writing a server-side application that could be accessed from a Web browser then you need to be more careful, as you really have no idea who the user is, or where in the world they may be.

Either way, it makes sense to check that input data does at least conform to expectations before allowing it into your application. If a particular data field is expecting a six-digit part number, then reject any entry that doesn’t consist of six digits. If you are asking for the user’s surname, then reject any entry that contains suspicious characters such as angle brackets or semi-colons. This is very easy to implement using regular expressions, and failing to do so could leave you open to an SQL injection attack (see panel). More complex is a cross-site scripting (XSS) attack. Any Web site that takes user input, such as a username, and then displays it on a subsequent page, perhaps in a ‘welcome’ banner, is vulnerable to such an attack. To determine if your site is vulnerable to XSS, enter the following into the relevant field:

<script>alert('XSS')</script>

If the subsequent page pops up an alert box containing the letters ‘XSS’ then you know the string has been returned to your browser and executed.

This is a trivial example – after all, there’s not much point in attacking yourself. The trick with XSS is to insert the script into a URL which the attacker can persuade someone else to click. Thankfully, .NET Framework 1.1 introduced Request Validation which means that sites running under ASP.NET 1.1 or later automatically check requests for such content.
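The whitelist checks described above are straightforward to express as regular expressions. A minimal sketch in Python; the patterns are illustrative and would need adapting to your own fields:

```python
import re

# Whitelist validation: accept only what the field is supposed to contain.
PART_NUMBER = re.compile(r"^\d{6}$")               # exactly six digits
SURNAME     = re.compile(r"^[A-Za-z' -]{1,40}$")   # letters, apostrophe, hyphen, space

def is_valid_part_number(s: str) -> bool:
    return PART_NUMBER.fullmatch(s) is not None

def is_valid_surname(s: str) -> bool:
    return SURNAME.fullmatch(s) is not None
```

Note that these are whitelists rather than blacklists: instead of hunting for known-bad characters, they reject everything that isn’t explicitly expected.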

A secure Web application

Web applications are particularly vulnerable to attack. The Internet is by nature a stateless system, which means there is nothing to inherently link one page request to another. As a result, Web applications have to find other ways to maintain state across transactions that involve multiple requests. This can be done in three ways: by storing a ‘cookie’ on the client computer; using the information returned when a form is submitted; or by adding parameters to the URL requested by the browser.

The problem with all these solutions is that the session identifier is stored on the client, where it can be tampered with by an attacker. Cookies can be poisoned, form data changed and parameters edited before the next request, including the session identifier, is returned to the server.

So it is very important that you store as little information as possible on the client. Indeed the session identifier need only hold a single ‘handle’ value that your application can use to reference relevant details about the user, such as user name or account number, which are maintained on the server.
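The ‘handle’ approach might be sketched as follows. This is an illustrative Python sketch, with the session store held in server memory for simplicity; a real application would also expire sessions:

```python
import secrets

# Server-side session store: the client only ever sees the opaque handle.
_sessions = {}

def create_session(user_name: str, account_id: int) -> str:
    handle = secrets.token_hex(16)   # 32 hex chars, unguessable
    _sessions[handle] = {"user": user_name, "account": account_id}
    return handle

def lookup_session(handle: str):
    # Returns None for a forged, tampered or expired handle.
    return _sessions.get(handle)
```

An attacker who edits the handle gains nothing: a forged value simply fails to look up, and the sensitive details never left the server.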

ASP.NET supports a number of techniques for securing such mechanisms. If you are using Forms Authentication to authenticate users, for example, you can set the Protection attribute in your web.config file. This will automatically encrypt the authentication cookie using Triple DES, and ensure that it hasn’t been tampered with by checking it against a MAC (Message Authentication Code).
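A minimal web.config fragment for this; the cookie name and login page are hypothetical, while protection="All" requests both encryption and MAC validation:

```xml
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms name=".AUTHCOOKIE" loginUrl="login.aspx" protection="All" />
    </authentication>
  </system.web>
</configuration>
```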

Form data is similarly vulnerable. However, here you can use ASP.NET’s ViewState facility to maintain state, which uses a hidden field to pass data back to the server concerning the state of the form. ViewState data is not passed in a particularly transparent form and you can use the page’s ViewState property to store data of your own. You can further enhance security by setting the EnableViewStateMac directive for the page, to have the ASP.NET runtime run a MAC check against it. This is rather more secure than passing ‘1234’ back in a hidden field called ‘SessionID’.
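Enabling the MAC check is a single page directive (shown here on a hypothetical VB page):

```aspx
<%@ Page Language="VB" EnableViewStateMac="true" %>
```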

Using URL parameters to pass data is particularly vulnerable as the URL is clearly visible in editable form within the browser window. For example, if you are in the process of ordering something from an online shop, you might see the following URL displayed:

http://www.shop.com?orderid=4334

Unless the site performs some additional verification behind the scenes, the chances are that editing the orderid to ‘4333’ and hitting return would reveal the previous order, which could provide a competitor with valuable information. Rather more secure would be to check that the identity of the user, stored in a tamper-proof form, matches the customer associated with the requested order before sending the page.
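That server-side ownership check might be sketched as follows; the order store and user names here are hypothetical:

```python
# Orders live on the server, keyed by order id.
orders = {4334: {"customer": "alice", "items": ["widget"]}}

def get_order(order_id: int, current_user: str):
    """Serve an order only to the authenticated customer who placed it."""
    order = orders.get(order_id)
    if order is None or order["customer"] != current_user:
        return None   # same response either way: don't reveal the order exists
    return order
```

Returning the same ‘not found’ result whether the order is missing or merely belongs to someone else avoids leaking which order ids are in use.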

The illusion of flow

An important function of many modern Web applications is to take the user through the sequence of stages that make up a business process, working more like a Win32 or Windows Forms application than a traditional Web site. If you are implementing an eCommerce application, for example, then the user must set up an account, add items to a shopping basket and go through a checkout procedure.

However, this is a dangerous mindset to adopt in the stateless world of the Internet, where anyone can jump into the middle of your application simply by typing in the appropriate URL. It is therefore important that you map out every possible state that your application needs to handle and then assign a page to each. Having done that you can work out what assumptions you are making about the user when they reach that page (for instance that they are a subscriber, or that they have logged in) and make sure you test those assumptions before allowing them to proceed further.
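Those per-page assumptions can be captured in a table that is consulted before any page is served. An illustrative Python sketch, with hypothetical pages and states:

```python
# Map each page to the assumptions that must hold before it is served.
page_requirements = {
    "/basket":   {"logged_in"},
    "/checkout": {"logged_in", "has_basket"},
    "/confirm":  {"logged_in", "has_basket", "address_entered"},
}

def may_serve(page: str, user_state: set) -> bool:
    required = page_requirements.get(page)
    if required is None:
        return False          # unknown page: deny by default
    return required <= user_state  # every assumption must hold
```

A user who types the checkout URL directly, without having logged in or filled a basket, simply fails the test and can be redirected to the appropriate earlier stage.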

Just as with your home, making your application completely invulnerable is unlikely to be possible, or even desirable. No-one wants to live in a house that is surrounded by barbed-wire and security guards, and no-one wants to use an application that is so secure it is well-nigh impossible to log in, let alone use.

If you’re developing an online bank, or a Web site for a politically-sensitive organisation, then you need to take extra care. Otherwise, a knowledge of some basic techniques and an awareness of security issues is probably enough to encourage most attackers to look elsewhere.

For more information, check out the Microsoft Application Security Web site. Microsoft has also put together The Developer Highway Code; for a more comprehensive treatment, see Writing Secure Code, 2nd Edition by Michael Howard and David LeBlanc (Microsoft Press).


SQL injection

A typical SQL statement designed to verify a login might look like this:

"SELECT COUNT(*) FROM Users
WHERE UserName='" & sUserName & "'
AND Password='" & sPassword & "'"

If the attacker was to enter the following string as the password:

' OR '1'='1

Then the final clause becomes:

WHERE UserName='' AND Password='' OR '1'='1'

As one is always equal to one, and OR binds more loosely than AND, the clause as a whole evaluates as true for every row, so the procedure will always return a count greater than zero and the login succeeds.

This is just one example of an SQL injection attack, so called because it attempts to inject additional SQL into an existing statement to make it operate in a fashion not intended. A more malicious example might be to enter the following user name:

'; DROP TABLE Users --

The semicolon signifies the end of one statement, which can then be followed by another of the attacker’s choosing. The double-dash tells SQL Server to treat the remainder of the statement as a comment.

The easiest way to prevent this kind of attack is not to allow data entry containing dangerous characters, which can be done using a regular expression:

If Regex.IsMatch(sPassword, "'|\%27|\-\-|<|>|\%3C|\%3E") Then
    ' handle problem
Else
    ' proceed as normal
End If
This traps any password that contains a single quote, its hex equivalent, or a double dash. It also traps angle brackets that might be used to launch a cross-site scripting attack. As a further precaution you might want to check the length of the string entered – if passwords are limited in length to ten characters then either truncate after ten characters or reject anything longer.

Three researchers at Vrije University in Amsterdam recently demonstrated how just 127 bytes stored in a Radio Frequency Identification (RFID) tag could perform an SQL injection attack against an Oracle database. Such an attack could be launched against an airport baggage-handling system if the malicious tag were attached to a suitcase, for example.

This is all very well for passwords, but what about note fields, or users with the surname O’Reilly? A simple solution here is to double up all occurrences of a single quote before the data is included in the SQL statement. SQL Server interprets the doubled quote as a literal single quote within the string, so the value is stored correctly and needs no special handling when read back out:

sValue = Replace(sValue, "'", "''")

This provides at least some defence against an SQL injection attack.
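The typed stored-procedure parameters used in the main article are one instance of a more general defence: parameterized queries, which keep user data out of the SQL text entirely, so a name like O’Reilly (or a crafted payload) cannot change the statement’s meaning. A minimal sketch using Python’s built-in sqlite3 module; the principle is the same with SQL Server and ADO.NET:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT, Password TEXT)")
# The ? placeholders carry the values separately from the SQL text.
conn.execute("INSERT INTO Users VALUES (?, ?)", ("O'Reilly", "secret"))

def count_matches(user: str, pwd: str) -> int:
    row = conn.execute(
        "SELECT COUNT(*) FROM Users WHERE UserName=? AND Password=?",
        (user, pwd)).fetchone()
    return row[0]
```

Here the classic `' OR '1'='1` payload is just an eleven-character password that matches nothing, rather than executable SQL.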
