
Chapter 5

Filtering User Input

Introduction

Throughout this book we discuss a wide variety of threats and security risks. But none of these threats is as serious as when an attacker goes straight for the heart and directly attacks your application code. Many developers have faced the somber reality that their code is insecure; that some hacker broke in using nothing other than a flaw found in the Web application itself. By manipulating program input, attackers can often trick the server into revealing customer data, or allowing access to unauthorized files or execution of program code on the server itself. Indeed, insecure code is the source of countless intrusions.

The risks of insecure code are great for several reasons:

 There are so many different ways to exploit insecure code.

 There is no need to obtain a password because the code is already running in the context of an authenticated user.

 The attacker gains access to anything that the Web application can access, which usually includes sensitive user data.

 Most Web applications are not properly configured to detect and prevent these types of attacks.

Every week, security researchers flood mailing lists such as BugTraq and VulnWatch with discoveries of security flaws in commercial or widely available Web-based applications. While many programmers are finally learning the skills to avoid insecure code, there seems to be a never-ending supply of programmers who put users at risk because they don’t filter the input coming into their application.

Here are some common threats caused by poor input filtering:

 SQL injection Manipulating user input to construct SQL statements that execute on the database server

 Directory traversal Accessing files outside the bounds of the Web application by manipulating input with directory traversal characters. This is also known as the double dot attack.

 Server-side code access Revealing the content of server-side code or configuration files by manipulating input to disguise the true file extension

 File system access Manipulating input to read, write, or delete protected files on disk

 Denial of service Causing the application to consume system resources excessively or to stop functioning altogether

 Information leakage Intentionally sending invalid input to produce error messages with information that may facilitate an attack

 Cross site scripting Injecting HTML or script commands, causing the Web application to attack other users

 Command injection Injecting special shell metacharacters or otherwise manipulating input to cause the server to run code of the attacker’s choice

 Buffer overflows Overwriting a buffer by sending more data than a buffer can handle, resulting in the application crashing or executing code of the attacker’s choice

Despite the wide variety of input injection attacks, Web developers have one great advantage: these attacks are completely preventable through careful input filtering and smart coding practices.

Handling Malicious Input

Before an attacker can exploit your application with malicious input, the attacker has to get the input to your code. And that is where Web developers have the advantage. By carefully identifying and controlling input, you can prevent the attacks before they ever get to the sensitive code.

The input handling strategies we will address here are:

 Identifying Input Sources

 Programming Defensively

Identifying Input Sources

Summary: Sometimes the hidden sources of user input are the most dangerous.
Threats: Malicious input

Half the challenge of stopping malicious input is identifying the numerous ways your application accepts input. All attacks on the application itself are based on manipulating user input in some form. If you handle that input properly, you can eliminate most, if not all, of these vulnerabilities. Not only should you handle form input and query strings, but you must also consider any other data that an attacker can modify. Often overlooked are indirect sources of input and data that you might think an attacker cannot access.

With an ASP.NET application the most obvious place to look for input is any place you use the Request object. Classic ASP provided the Request object as one of its built-in objects. To maintain compatibility with existing ASP code, ASP.NET provides the HttpRequest class through the Request property of the Page class. The HttpRequest class exposes elements of the HTTP request through its various properties. Table 5.1 shows some of these properties and how they relate to different elements of the HTTP request.

Table 5.1

HttpRequest Class and HTTP Elements

Property Source
Browser Guessed based on the client-provided User-Agent header
ClientCertificate Based on client certificate headers
Cookies Cookie header from the client
Form Post data from the client
Headers All HTTP headers
Path Parsed from the URL
PathInfo Parsed from the URL
QueryString Parsed from the URL
ServerVariables Combination of client and server data
UrlReferrer Referer header from the client
UserAgent User-Agent header from the client
UserHostName May be controlled by user if user controls the DNS server

An easy way to identify potential flaws in your code is to search for all references to the Request object to make sure that you handle user input properly. As described later in this chapter, you should always filter data coming from the Request object, and you should never concatenate the Request object directly to a string. For example, consider this code:
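
A minimal sketch of the kind of code being described follows; the control and variable names are illustrative, not taken from the original listing.

// Unsafe: raw user input is written straight back to the browser.
string UserName = Request.QueryString["UserName"];
Response.Write("Hello, " + UserName);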

In this example, the code directly outputs the contents of the UserName variable without validating or filtering its contents. This is a common, but definitely not a safe, practice. A better solution is to filter the input before acting on the data, as shown here:
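
A corresponding sketch, again with illustrative names, in which the input passes through a filtering function before the application acts on it:

// Safer: the input is filtered before it is used.
string UserName = FilterInput(Request.QueryString["UserName"]);
Response.Write("Hello, " + UserName);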

In this example, the code calls a custom function FilterInput that you create to perform whatever filtering is necessary for the type of data.

Other Sources of Input

The Request object is the most common source of input, but it is possible for a user to inject input indirectly or through less obvious sources. A user may input data directly into your database or manipulate HTTP headers sent to your server. For example, consider the ASP code shown in Figure 5.1. This code is from an ASP error handler page (found by default at C:\WINNT\Help\iisHelp\common\500-100.asp) that IIS 5 uses to handle server-side errors. The intended behavior is that, when an error occurs, the page shows full error details only if the request appears to be addressed to “LocalHost,” which it determines by checking the contents of the SERVER_NAME server variable.

Figure 5.1 ASP Source From 500-100.ASP

The problem is that the server derives the SERVER_NAME variable from the Host header provided by the client, and the Host header simply reflects whatever name the client used to reach the server. Normally the name “LocalHost” resolves to the loopback address, 127.0.0.1, but anyone can edit their HOSTS file to point it to any IP address. So if an attacker were to enter your Web site’s IP address as the LocalHost entry in their HOSTS file, they could browse to LocalHost and reach your Web site instead. The client’s Host header would then contain LocalHost, and the server would show all the error details. With IIS 6, Microsoft fixed this by removing the check for LocalHost and showing error details only if the server IP address matches the remote IP address.
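
To illustrate the difference, here is a hedged C# sketch expressed in ASP.NET terms (not the actual 500-100.asp code); the variable names are standard IIS server variables:

// Unsafe: SERVER_NAME ultimately derives from the client-supplied Host header.
bool claimsLocal = Request.ServerVariables["SERVER_NAME"].ToLower() == "localhost";

// Safer (the IIS 6 approach): compare the server's own address to the client's.
bool isLocal = Request.ServerVariables["LOCAL_ADDR"] ==
               Request.ServerVariables["REMOTE_ADDR"];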

NOTE

Editing the HOSTS file is just one way to accomplish this type of attack. An attacker could also use a tool or a script to build a custom HTTP request with any of the headers he wants.

Another example of unexpected user input is the DNS host name for the client. If an attacker controls the reverse DNS entries for his IP address, he could potentially use that hostname to inject malicious input or fool access control restrictions that rely merely upon the hostname.

It is important to consider all sources of input, including input that comes from your internal network or from internal users. Rather than attack your server directly, an attacker may find it easier to go in the back way and inject malicious code where you’d least expect it—from a trusted resource.

Security Policy

 Identify all instances of the Request object to be sure you properly filtered this input.

 Search for and filter other forms of indirect input, including input from the application itself.

Programming Defensively

Summary: Preventing application vulnerabilities requires smart coding practices.
Threats: Malicious input

Of all the different types of application attacks brought to the world’s attention in recent years, none required anything more than reasonable and foreseeable defensive coding practices. Unfortunately, until recently, few Web developers had the time, motivation, or training to build this extra code into their applications. But the never-ending discovery of application vulnerabilities painfully emphasizes the need for defensive coding practices. Although defensive coding may not always be your highest priority, it doesn’t take much effort to follow some simple best practices for improving code security.

Controlling Variables

Because all user input at some point is connected to a variable (or a property or method result of an object variable), if you control variables, then you control user input. The Perl programming language has a feature called Taint Mode that treats all user-supplied input as tainted, and therefore unsafe. Furthermore, any variable derived from a tainted variable also becomes tainted. You cannot perform certain operations with tainted variables until you untaint or filter them with a regular expression.

ASP.NET has no equivalent taint feature, but Web developers can write code using the same approach. To do this, simply make sure you always assign user input to a variable, first running the input through a filtering function to make sure the input is safe to use. Next, make sure you work only with the untainted variables, and never raw user input.

TIP

To help you keep track of variable taintedness, it might be helpful to append a suffix to variable names after you check the data for safety, for example: userName_safe = FilterString(Request.Form("UserName")).

Now, just make sure you never act on a variable unless it contains the _safe suffix.

Classic ASP allows Web developers to use variables without first defining them. There is an Option Explicit directive you can use to enforce variable declaration, but this is not enabled by default. Explicit variable declaration is always a good practice, but it also has some security benefits. By declaring variables, you have a list of all the data that your code will use—an excellent way to identify sources of user input. By declaring your variables, you are controlling them. Fortunately, VB.NET enables Option Explicit by default; just make sure it always stays that way. C# always requires variable declaration, so this issue does not apply there.

Classic ASP has another weakness: there is no way to define a variable type; all variables are Variants. Fortunately, the CLR provides a strong type system, but VB.NET does not enforce this by default. It is important to strongly type your variables because this helps to enforce data validity and limit exposure to attacks. By defining the variable type you are limiting the type of data that variable can hold. For example, if you are passing numeric input into an SQL query, you can prevent SQL injection because a numeric variable will not accept the string data required to inject SQL commands. With classic ASP, the variant type would automatically adjust to accommodate the string data.
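
As a brief illustration (a sketch with hypothetical names), converting input to a numeric type before using it in a query rejects string payloads outright:

int productId;
// int.TryParse fails on anything that is not a plain integer, so input such as
// "1; DROP TABLE Orders" never reaches the query.
if (!int.TryParse(Request.QueryString["id"], out productId))
{
    Response.Write("Invalid product ID.");
    return;
}
string sql = "SELECT * FROM Products WHERE ProductID = " + productId;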

Because VB.NET does not enforce strict data typing, you must enable this option. In Visual Studio .NET 2003 you can enable this by selecting Tools | Options to bring up the dialog box shown in Figure 5.2. Select the Projects folder, and then click on VB Defaults. From there, set Option Strict to On. As with variable declaration, C# automatically requires variable typing so this issue does not apply.

Figure 5.2 Enabling Option Strict for VB.NET

Centralizing Code

Controlling variables at some point involves filtering or sanitizing the data in those variables. Rather than writing code for each time you accept user input, it is a good practice to centralize your filtering code. As you build your ASP.NET application, use centralized filtering functions on every source of user input. Centralizing your code has several security benefits:

 It organizes your code and reduces complexity.

 It reduces the attack surface by reducing the amount of code.

 It allows you to make quick fixes to deal with future attacks as they surface.

Complexity is the enemy of security. By keeping your code organized and under control, you reduce the likelihood of application vulnerabilities. In general, reducing the code volume reduces bugs, while keeping your code simple and reusing code decreases the number of attack vectors in your code. Most of all, having centralized code allows you to easily adjust your filtering functions to address new attacks as security knowledge and research evolves.

Another benefit of using centralized code is that it is easy to identify user input that you have not properly filtered because it is not wrapped in a filtering function. For example, if you have a filtering function named FilterInput, you should never refer to the Request object without running it through that function, like this: safeInput = FilterInput(Request.QueryString("Username")). If you follow this practice, you can easily search for all references to the Request object that are not inside a FilterInput call using a tool such as Grep.
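
As a rough sketch, a centralized filtering function might look something like the following; the allowed character set is illustrative and should be tailored to each type of input:

using System.Text.RegularExpressions;

static string FilterInput(string input)
{
    if (input == null)
        return string.Empty;

    // Allow letters, digits, whitespace, and a small set of punctuation;
    // strip everything else.
    return Regex.Replace(input, @"[^\w\s@.\-]", string.Empty);
}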

TIP

You can download a native Win32 port of Grep and other Unix utilities from http://unxutils.sourceforge.net/

Testing and Auditing

Due to the complexity and variety of application-level attacks, it is easy to overlook simple mistakes. You should always test your security code to verify that it in fact does what you expect. For example, one commercial Web application used a regular expression to restrict access to certain administration pages so that only users on the local system could browse those pages. To do this, it checked the client’s IP address against the regular expression “127.*”. Since any IP address that begins with 127 refers to the local host, the programmer expected that this expression would properly restrict access. However, because the programmer did not use the ^ anchor to force matching from the beginning of the string, and because the .* portion of the expression means zero or more occurrences of any character, the regular expression in fact matches any IP address that contains 127 in any position, such as 192.168.1.127. It would not be difficult for an attacker to find an IP address containing 127 and completely bypass this restriction.

By building a proper audit plan and testing with different IP addresses, the programmer could have prevented this flaw.
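
A quick test, sketched below in C#, would have exposed the flaw; the addresses are illustrative:

using System.Text.RegularExpressions;

// The unanchored pattern matches any address that merely contains "127".
bool looseMatch = Regex.IsMatch("192.168.1.127", "127.*");     // true
// Anchoring the pattern and escaping the dot restricts it to loopback addresses.
bool strictMatch = Regex.IsMatch("192.168.1.127", @"^127\.");  // false
bool loopback = Regex.IsMatch("127.0.0.1", @"^127\.");         // true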

Using Explicit References

Many programming languages allow programmers to take shortcuts to save typing by allowing certain implicit defaults. For example, if you do not provide a fully qualified path when accessing a file, the system assumes that the file is in the current working directory.

This is important when it comes to filtering user input because ASP.NET allows you to reference items in the Request object without explicitly naming a specific collection. For example, Request(“Password”) is the same as Request.Form(“Password”). When you refer to the generic Request object, ASP.NET searches the QueryString, Form, Cookies, ClientCertificate, and ServerVariables collections, in that order, to find a match. Therefore, by not explicitly stating the collection, you could inadvertently take input from the wrong source. The problem here is that QueryString is the first collection searched.

Now consider the code in Figures 5.3 (C#) and 5.4 (VB.NET). This is a simple ASP.NET page that restricts access to LocalHost by checking the IP address of the client using the REMOTE_ADDR variable. The server itself provides this value, so it is a reliable method for checking the IP address as shown in Figure 5.5.

Figure 5.3 Using Generic Request References [C#]

Figure 5.4 Using Generic Request References [VB.NET]
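
In rough outline, the vulnerable check looks something like this C# sketch (the names and messages are illustrative, not the figure’s exact listing):

// The generic Request indexer searches QueryString, Form, and Cookies
// before it reaches ServerVariables.
if (Request["REMOTE_ADDR"] == "127.0.0.1")
{
    Response.Write("Welcome, local user.");
}
else
{
    Response.Write("Access denied.");
}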

Figure 5.5 IP Address Blocked

The problem with this code is that the programmer failed to specify the specific collection to use so ASP.NET will search the QueryString, Form, Cookies, and ClientCertificate collections before it tries the ServerVariables collection. Knowing this, an attacker could populate any of these collections with an item matching the server variable name and bypass the protection. For example, adding a query string variable named REMOTE_ADDR using the IP address 127.0.0.1 will fool the application’s IP restriction as shown in Figure 5.6.
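
For example, a request along these lines (the page name is hypothetical) slips a fake REMOTE_ADDR value into the QueryString collection:

http://www.example.com/checkip.aspx?REMOTE_ADDR=127.0.0.1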

Figure 5.6 IP Address Allowed with REMOTE_ADDR in QueryString

In a similar manner, an attacker could trick another user by passing variables on the URL that override form, cookie, or certificate values. The solution for this is simple: always explicitly name the collection from which you expect to pull the variable. This is illustrated in Figures 5.7 (C#) and 5.8 (VB.NET) as explicitly referring to the Request.ServerVariables object. By avoiding implied references, you can prevent attackers from exploiting ambiguities in your code.

Figure 5.7 Using Explicit Request References [C#]

Figure 5.8 Using Explicit Request References [VB.NET]
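
A corresponding C# sketch of the corrected check, explicitly naming the ServerVariables collection (names are illustrative):

// Only the server-supplied value is consulted.
if (Request.ServerVariables["REMOTE_ADDR"] == "127.0.0.1")
{
    Response.Write("Welcome, local user.");
}
else
{
    Response.Write("Access denied.");
}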

Security Policy

 Always assign filtered user input to variables to distinguish it from the raw data.

 When using VB.NET, always use Option Explicit and Option Strict.

 Use centralized filtering functions on all user input.

 Never use the generic Request collection.

Constraining Input

The key to protecting your application from malicious data is to validate all user input. There are numerous actions in your code that user input may affect, and therefore many different techniques for validating this input. Examples of actions that user input might have an effect on are:

 Accessing a database

 Reading the file system

 Allowing users to upload or save files

 Running a shell command

 Sending an e-mail

 Sending HTML output to the client

 Setting an object property

 Processing a shopping cart purchase

Each of these actions is vulnerable to one or more of the threats mentioned at the beginning of this chapter. To counter these threats, I have established the following techniques, which I will describe in more detail throughout this section:

 Bounds checking Checking input values for appropriate data type, string length, format, characters, and range

 Pattern matching Using regular expressions to match and allow known good input or match and block known bad characters or strings

 Data reflecting Passing data to the system, reading back the system’s interpretation of the data, and then comparing it to the original data

 Encoding Transforming string characters to a specific format for safe handling

 Encapsulating Deriving a digest from user input that contains a known set of characters, making it safe to use.

 Parameterizing Taking user input as a parameter to fix its scope, for example, appending a file extension or prefixing a directory path

 Double decoding Decoding data twice to ensure that both instances match

 Escaping Quoting or escaping special characters so the target treats them as literal characters

 Syntax checking Checking a finished string to make sure the syntax is appropriate for the target

 Exception handling Checking for errors and performing sanity checks to ensure that results are returned as expected

 Honey drops Small pieces of data that work as a honey pot to help detect intrusions.

The following are examples of how to use these techniques with various programming tasks.

Bounds Checking

Summary: Check input data to make sure it is appropriate for its purpose.
Threats: Malicious input

Bounds checking is a quick and easy way to prevent many application-level attacks. Check input values to be sure they comply with the expected data type, string length, string format, set of characters, and range of values. ASP.NET provides several easy methods to check input data:

 Validator Controls

 Type Conversion

 SqlParameters

Validator Controls

ASP.NET provides a set of controls to validate all form data entered by a user. By attaching a validator control to a form control and setting a few properties, you can have ASP.NET automatically check user input values. Table 5.2 summarizes the validator controls available with ASP.NET.

Table 5.2

ASP.NET Validator Controls

Control Description
CompareValidator Compares a control’s value to a fixed value or to the value of another control. Also performs data-type checks.
CustomValidator Runs a user-defined validation function
RangeValidator Checks to make sure numeric values fall between minimum and maximum values
RegularExpressionValidator Matches a control’s value against a regular expression pattern
RequiredFieldValidator Ensures that a control is not left empty
ValidationSummary Summarizes all validation errors on a page

To use a validator control, set the ControlToValidate property and then set any other properties to define the validation to perform. Figure 5.9 (C#) demonstrates how to use a validator control to check a numeric input field.

Figure 5.9 Validating Numeric Input (C#)
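
As a sketch of what such a page might contain (the control names and bounds are illustrative, not the figure’s exact listing), a RangeValidator attached to a quantity field could look like this:

<asp:TextBox id="txtQuantity" runat="server" />
<asp:RequiredFieldValidator id="valQuantityRequired" runat="server"
    ControlToValidate="txtQuantity"
    ErrorMessage="Quantity is required." />
<asp:RangeValidator id="valQuantityRange" runat="server"
    ControlToValidate="txtQuantity"
    Type="Integer" MinimumValue="1" MaximumValue="100"
    ErrorMessage="Quantity must be a whole number between 1 and 100." />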

The most powerful of these validators is the RegularExpressionValidator, which allows complex pattern matching to ensure that input falls within very specific parameters.

But it is important to note that although validator controls are powerful, they do have some limitations:

 You can use them only to validate form controls.

 You can validate form controls only when the page posts back to itself, not to another page.

 They work only with server-side controls.

 ASP.NET does not perform any validation with validator controls before it fires the Page_Load event.

 They tend to decentralize validation code, moving it to individual pages rather than having a centralized mechanism for input filtering.

Because validator controls focus exclusively on form input, it is easy to neglect filtering other forms of user input. To deal with these limitations, you will need to develop custom functions for validating other input. Nevertheless, because of their automated error messages and the addition of client-side code, you should still always use validator controls for form input.

WARNING

The client-side validation features of validation controls speed up validation for the client and prevent extra load on your server from continual post backs, but they are not reliable as a security measure. An attacker can easily disable client-side scripting, or use a custom tool to post forms that bypass client-side validation.

Security Policy

 Use validator controls to validate form input if a page posts back to itself.

 Never rely on client-side validation for security.

Pattern Matching

Summary: Pattern matching is an effective technique for filtering malicious input.
Threats: Malicious input

The most common and effective method for addressing malicious input is to apply pattern matching through regular expressions. With pattern matching you block input that contains specific malicious characters or permit only input that contains a set of known safe characters. Under most circumstances, the latter is the preferred method for checking input.

Because it is difficult to anticipate the numerous ways one could exploit your application, it is usually best to establish which characters you will allow and then block everything else. Figures 5.10 (C#) and 5.11 (VB.NET) show how to use regular expressions to allow only specific characters. Using this method, however, does require some forethought. Users will quickly get frustrated if you do not allow certain characters, such as apostrophes or hyphens, in a last name field.

Figure 5.10 Allowing Known Good Characters (C#)

Figure 5.11 Allowing Known Good Characters (VB.NET)
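
The following C# sketch illustrates the allow-list approach with a hypothetical last-name field; the permitted character set and length limit are illustrative and would need to be adjusted for real data:

using System.Text.RegularExpressions;

string lastName = Request.Form["LastName"] ?? string.Empty;

// Permit only letters, spaces, apostrophes, and hyphens, up to 40 characters.
if (!Regex.IsMatch(lastName, @"^[a-zA-Z '\-]{1,40}$"))
{
    Response.Write("Please enter a valid last name.");
    return;
}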

But pattern matching is more than blocking and allowing individual characters. Some attacks might use no invalid characters but still be malicious. For example, consider a Web application that saves data to a file, and selects the filename based on user input. To prevent directory traversal or file access attacks, you might allow users to input only alphanumeric data, which you can enforce with a regular expression. But what happens if the user selects a filename using a reserved DOS device name such as COM1, PRN, or NUL? Although these device names do not contain anything other than alphabetic characters, accessing these devices might cause a denial of service or facilitate some other kind of attack. For some types of input you should allow only known good data and then perform a follow-up check to make sure that input does not contain known bad data. Figures 5.12 (C#) and 5.13 (VB.NET) show how to use a regular expression to check for these patterns.

Figure 5.12 Matching Known Bad Input (C#)

Figure 5.13 Matching Known Bad Input (VB.NET)
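
A C# sketch of the follow-up check for reserved DOS device names (the pattern is illustrative and not exhaustive):

using System.Text.RegularExpressions;

string fileName = Request.QueryString["file"] ?? string.Empty;

// Follow-up check: reject otherwise-alphanumeric input that names
// a reserved DOS device.
if (Regex.IsMatch(fileName, @"^(?:AUX|CON|NUL|PRN|COM\d|LPT\d)$",
                  RegexOptions.IgnoreCase))
{
    Response.Write("That name is not allowed.");
    return;
}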

Table 5.3 shows some common input scenarios and examples of regular expression patterns you might use to identify malicious input. Sometimes you will allow only known good data and other times you might filter out known bad data, but usually you should perform both checks. Note that the patterns in this table do not address every possible exploit, and you should customize them for your particular application.

Table 5.3

Regular Expression Patterns for Filtering Input

Action Regex for matching known bad input
File system access (?:AUX|CO(?:M\d|N\.\.)|LPT\d|NUL|PRN|\n|\r|progra(?:\~1|m\sfiles)|system32|winnt|[\?\*\<\>\|\"\:\%\&])
Database access (?:\;\-\-|d(?:elete\sfrom|rop\stable)|insert\sinto|s(?:elect\s\*|p_)|union\sselect|xp_)
Sending e-mail (?:rcpt\sto|[\,\<\>\;])
Formatting HTML (?:\<(?:applet|img\ssrc|object|s(?:cript|tyle)|a)|javascript|onmouseover|vbscript)

TIP

Matching with regular expressions can be complex, and the examples in Table 5.3 may or may not be sufficient for your needs. See www.ex-parrot.com/~pdw/Mail-RFC822-Address.html for an example of the complexity of validating something as simple as an e-mail address.

Escaping Data

Sometimes you want users to be able to enter special characters without restrictions. But allowing these characters might expose your application to attacks such as SQL injection. You might, for example, want to allow users to enter an apostrophe in their last name to allow for names such as O’Brien, but this character has special meaning in an SQL statement. Allowing this character might make the application vulnerable to SQL injection. But the fix is easy: replace every single quote with two single quotes. This allows you to build SQL statements without having to worry about users passing in single quotes. By escaping (or quoting) the single quote character, it no longer has any special meaning to the SQL interpreter.
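
A minimal C# sketch of escaping the apostrophe for SQL (the names are illustrative; note that parameterized queries, covered in Chapter 6, are generally a stronger defense):

string lastName = Request.Form["LastName"] ?? string.Empty;

// Escape single quotes so the value cannot break out of the SQL string literal.
string escapedName = lastName.Replace("'", "''");

string sql = "SELECT * FROM Customers WHERE LastName = '" + escapedName + "'";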

Here are some common types of data that would require escaping:

 Shell commands

 SQL statements

 Regular expression patterns

 HTML

Security Policy

 Use regular expressions to either block known bad data or allow only known good data.

 Use regular expressions to identify malicious keywords or other patterns.

 Escape all special characters from user input.

Data Reflecting

Summary: Data reflection verifies path or other information using trusted system functions.
Threats: Directory traversal

When Microsoft first released Windows 2000, security was a long-neglected issue that rapidly gained attention. Security researchers found numerous holes in the operating system, particularly in Microsoft’s Internet Information Services (IIS). Some of the most serious flaws allowed the viewing of protected files and traversing the file system to access files outside the Web content directories. Security researchers found ways to fool IIS into thinking it was retrieving a file with a different extension or a file in the same directory when it was in fact pulling a file from a parent directory. While these techniques fooled IIS, the operating system itself used a different mechanism to access files and therefore accessed them correctly. By discovering subtle differences between how IIS interpreted file paths and how the OS interpreted file paths, researchers exposed some serious vulnerabilities.

Unauthorized File Access

One of the early vulnerabilities discovered in IIS 5 was the ability to view portions of server-side source code by simply appending the string “+.htr” to any URL. Instead of processing the server-side script normally, IIS would return the source code of the file itself, often revealing sensitive information such as database connection strings and passwords. To exploit this vulnerability, an attacker could enter a URL such as this:

www.example.com/global.asa+.htr

Normally IIS will not return the contents of files with .ASP or .ASA extensions, but adding the extra characters fooled IIS into thinking it was accessing a file with the .HTR extension. However, the ISAPI extension that handled .HTR files discarded the extra data and returned the contents of the file itself.

Microsoft quickly released a hotfix to address this vulnerability but another researcher found that you could still fool IIS by simply adjusting the string to “%3F+.htr” like this:

www.example.com/global.asa%3F+.htr

Once again, the server returned the source code for global.asa rather than blocking the request. Although Microsoft fixed the specific known vulnerability the first time around, they failed to address the underlying weakness that made it possible to fool IIS in the first place.

IIS was also vulnerable to various directory traversal vulnerabilities. In these attacks, an attacker requests files outside the bounds of the Web application. Normally, IIS will not allow requests outside the Web root, but by disguising the double dots (“..”) through encoding and other techniques, researchers found ways to trick IIS into thinking it was accessing a file within the Web root when it was in fact accessing a file in a parent directory. These turned out to be very serious vulnerabilities because they usually allowed attackers to execute commands and quickly gain control of the server. Furthermore, Internet worms such as Nimda exploited these vulnerabilities to propagate themselves from server to server.

Reflecting the Data

To prevent Directory Traversal and Server-Side Code Access, developers usually check file extensions and watch for paths that contain double dots. However, this is not always effective because there are techniques, such as encoding, that attackers use to disguise these characters. Rather than attempting to anticipate every way an attacker can fool your code, a more effective technique is data reflection. With this technique, you take the user input and pass it to a trusted system function. You then read back the system interpretation of that data and compare it to the user input. The steps you would take to reflect a file path are:

1. Decode the path and expand any environment variables.

2. Use the System.IO.Path.GetFullPath() method to reflect back a normalized path.

3.  Compare the directory of user input to the directory of the reflected path.

4. Make sure this path falls within the constraints of the application.

5. Use only the reflected path from that point on in your code.

The code in Figures 5.14 (C#) and 5.15 (VB.NET) demonstrates these steps.

Figure 5.14 Reflecting Data (C#)

Figure 5.15 Reflecting Data (VB.NET)
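
In outline, the reflection steps look something like this C# sketch; the base directory and parameter name are illustrative, not the figure’s exact listing:

using System;
using System.IO;
using System.Web;

string baseDir = Server.MapPath("~/userfiles");

// Step 1: decode the input and expand any environment variables.
string rawPath = HttpUtility.UrlDecode(Request.QueryString["file"] ?? string.Empty);
rawPath = Environment.ExpandEnvironmentVariables(rawPath);

// Step 2: let the system reflect back the normalized, canonical path.
string reflectedPath = Path.GetFullPath(Path.Combine(baseDir, rawPath));

// Steps 3 and 4: make sure the reflected path stays within the application.
if (!reflectedPath.StartsWith(baseDir, StringComparison.OrdinalIgnoreCase))
{
    Response.Write("Invalid file request.");
    return;
}

// Step 5: use only the reflected path from this point on.
string contents = File.ReadAllText(reflectedPath);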

The advantage of this technique is that because the operating system will ultimately decide which file to access, you have the system tell you which file it intends to access based on the given user input. You validate the path and use that same path when actually accessing the file.

Security Policy

 Reflect data using trusted system functions to prevent attacks such as directory traversal.

 Always work with the reflected path in subsequent operations.

Encoding Data

Summary: Data encoding neutralizes malicious HTML content.
Threats: Cross-site scripting

Sometimes hackers are not trying to break into your Web site but instead want to exploit your Web application to target other users or glean user data. For example, an attacker may want to gain access to another user’s online bank account or personal e-mail. Using a technique called cross-site scripting (sometimes referred to as XSS), an attacker injects active HTML content into a Web page to exploit other users. This content may contain malicious HTML markup, including:

 Deceptive links

 HTML form tags

 Client-side script

 ActiveX components

At the heart of this attack is the abuse of trust that results from the malicious content running on a trusted Web site. Attackers can exploit cross-site scripting vulnerabilities to carry out a large number of attacks, such as:

 Stealing client cookies

 Bypassing policy restrictions

 Accessing restricted Web content

 Gathering Web-user IP addresses

 Modifying the behavior of links or forms

 Redirecting users to an untrusted Web site

Indeed, many developers underestimate the seriousness of cross-site scripting attacks.

Cross-Site Scripting vulnerabilities occur when a Web application dynamically displays HTML output to one user based on input from another user, such as displaying the unfiltered results of a guestbook or feedback system. Attackers can exploit this by injecting HTML tags that modify the behavior of the Web page. For example, an attacker might inject JavaScript code that redirects a user to another site or steals a cookie that contains authentication information. Web-based e-mail services such as Hotmail have long been a target of cross-site scripting attacks because they display HTML content in e-mail messages. An attacker simply has to send the target a specially crafted e-mail to execute the attack.

For cross-site scripting to work, the attacker must send HTML markup through some form of input. This might include an HTML form, a cookie, a QueryString parameter, or even an HTTP header. For example, there are many login pages that pass error messages back to the user like this:

www.example.com/login.aspx?err=Invalid+username+or+password

The page checks the Err parameter and if it exists, displays the contents back to the user as an error message. If the page does not filter this input, an attacker might be able to inject malicious code.

Fortunately, ASP.NET will automatically block any user input that appears to contain HTML code. Figure 5.16 shows how ASP.NET blocks a request for the URL:

http://localhost/input.aspx?text=<a href=""></a>

Figure 5.16 Built-In ASP.NET HTML Blocking

This method is limited, because it is easy to overlook all the different character sets and encoding methods that ASP.NET or the client browser supports. Some character sets allow for multi-byte or other encoded representations of special characters. Character sequences that may seem benign in one character set could in fact represent malicious code in another. While you can often filter out special characters, you cannot completely rely upon this method for total security.

Encoding is a technique that neutralizes special characters by modifying the representation of those characters. HTML encoding in particular is a technique that replaces special characters with character-entity equivalents that prevent the browser from interpreting the characters as active HTML. Table 5.5 shows the HTML-encoded representations of some common characters. If a browser encounters any of these HTML-encoded characters, it displays the character itself rather than treating it as a special character.

Table 5.5

Example HTML Character Entity Encoding

Character Encoded form
< &lt;
> &gt;
" &quot;
& &amp;

Using Table 5.5, if we had this HTML markup:

<a href="www.asp.net">ASP.NET</a>

We would encode it as follows:

&lt;a href=&quot;www.asp.net&quot;&gt;ASP.NET&lt;/a&gt;

The first example would show up as an active link, whereas the second example would display the HTML markup itself.

The .NET Framework provides methods in the Server object (an instance of the HttpServerUtility class) to encode strings that could potentially be dangerous if left unencoded. These methods are:

 HtmlEncode Encodes an HTML string to safely output to a client browser

 UrlEncode Encodes a string to safely pass it as a URL

 UrlPathEncode Encodes a string to safely pass it as the path portion of a URL

Figures 5.17 (C#) and 5.18 (VB.NET) demonstrate how to use the HtmlEncode method.

Figure 5.17 Using HtmlEncode (C#)

Figure 5.18 Using HtmlEncode (VB.NET)
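
A short C# sketch of HtmlEncode in use (the field name is illustrative):

string comment = Request.Form["Comment"];

// Any markup in the comment is displayed as text rather than interpreted as HTML.
Response.Write(Server.HtmlEncode(comment));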

Another type of encoding is URL encoding for URLs and query strings embedded in HTML. You should use UrlEncode and UrlPathEncode anywhere you reference a URL or query string in an HTML document. This includes the A, APPLET, AREA, BASE, BGSOUND, BODY, EMBED, FORM, FRAME, IFRAME, ILAYER, IMG, ISINDEX, INPUT, LAYER, LINK, OBJECT, SCRIPT, SOUND, TABLE, TD, TH, and TR HTML elements. Use UrlEncode on full URLs and UrlPathEncode to encode a path only. The difference is that UrlPathEncode encodes spaces as %20, rather than the plus sign (“+”) that UrlEncode uses. Furthermore, UrlPathEncode does not encode all punctuation characters as UrlEncode does.
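
For example (a sketch with an illustrative path and query value):

// UrlEncode for query string values; spaces become "+".
string searchLink = "search.aspx?q=" + Server.UrlEncode(Request.Form["q"]);

// UrlPathEncode for the path portion of a URL; spaces become "%20".
string reportPath = Server.UrlPathEncode("/reports/annual report 2004.aspx");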

Security Policy

 Use HtmlEncode to encode a string for browser output.

 Use UrlEncode to encode a URL string for output.

 Use UrlPathEncode to encode the path portion of a URL for output.

Encapsulating

Summary: Hashing encapsulates data for safe handling.
Threats: Malicious input

Sometimes you need to act on user input but you may not care about the actual value of the input. For example, you might want a unique identifier based on user input or want to store a value such as a password for later comparison. You can use a hash to encapsulate the data in a safe string format while still maintaining a link to the original data.

Good hashing functions have some properties that make them useful for encapsulating data:

 They produce long, random digests that make use of the entire key space.

 They produce few collisions; it would be extremely rare for two input strings to generate the same hash.

 They always produce digest strings of the same length.

 Given a hash you cannot derive the original data.

With a hash you can neutralize any malicious content because the hash mangles the string into a safe format. You can then format the hash as a hex string for safe handling.
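
A brief C# sketch of hashing input and formatting the digest as a hex string; SHA-1 is shown here purely as an example algorithm:

using System.Security.Cryptography;
using System.Text;

static string HashToHex(string input)
{
    using (SHA1 sha = SHA1.Create())
    {
        byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(input));

        // Format each byte as two hex characters, producing a fixed-length,
        // purely alphanumeric string.
        StringBuilder hex = new StringBuilder(digest.Length * 2);
        foreach (byte b in digest)
            hex.AppendFormat("{0:x2}", b);
        return hex.ToString();
    }
}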

If you hash a password before saving it, you never need to bother with checking it for invalid characters because the hash will not contain malicious content. This allows users to enter characters they want in a password without you having to worry about the impact of special characters in the string.

Another example is when you must create a file based on user input. Because any file operation based on user input could be dangerous, you might want to first convert the input to a safe hash string. This specific technique is described in more detail in Chapter 6.

Hashes are also effective at disguising data to make it less vulnerable to guessing attacks. One e-commerce application used temporary XML files for its shopping cart. The filenames were based on the user ID and the current date. However, a flaw in the application often left the files orphaned so they were not deleted at the end of the session, leaving a directory full of temporary files containing private user information that included customer credit card details. An attacker needed simply to employ smart guessing tactics to gain access to this information. Instead, using a filename based on a hash would make it unpredictable and would use a large enough key space to prevent guessing.

Security Policy

 Use hashes to encapsulate data for safe handling.

 Convert hashes to hex values to create safe alphanumeric strings.

Parameterizing

Summary: Parameterizing allows you to fix the context of user input.
Threats: Directory traversal, file system access, SQL injection, command injection

Parameterizing is a technique in which you take user input and place it within a fixed context so that you can control the scope of access. Consider a Web application in which you access files based on a selected link. A link may take you to a URL such as this:

www.example.org/articles.aspx?xml=/articles/a0318.xml

The first problem with this URL is that it is immediately apparent to attackers that you are accessing the file system, perhaps prompting them to experiment to find ways to break the application: what happens if you pass a filename with a different extension? Or what if you add additional path information?

To prevent abuse of your Web application, accept only the minimal amount of information required and insert this as a parameter to a full path. If the path and filename are fixed, a better version of the URL may be this:

www.example.org/articles.aspx?article=a0318

The code then takes the /articles path, appends the article parameter, and adds the .xml extension. No matter what the user enters, the resulting path starts in the /articles directory and ends with an .xml extension.
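
A C# sketch of the parameterized approach, combined with a simple allow-list check (the names and length limit are illustrative):

using System.Text.RegularExpressions;

string article = Request.QueryString["article"] ?? string.Empty;

// Accept only a short alphanumeric identifier.
if (!Regex.IsMatch(article, @"^[a-zA-Z0-9]{1,12}$"))
{
    Response.Write("Invalid article.");
    return;
}

// The input is only ever a parameter inside a fixed path and extension.
string xmlPath = Server.MapPath("~/articles/" + article + ".xml");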

WARNING

Be careful not to rely on parameterizing alone to guarantee scope. Microsoft made this mistake with the showcode.asp sample included with early versions of IIS (see www.securityfocus.com/bid/167). The programmer checked to see if the final path string contained a specific directory, but did not check for double-dots (“..”) that allowed attackers to request files from a parent directory. The best way to handle this is to combine parameterizing with data reflecting, pattern matching, and other techniques described in this chapter.

Parameterizing is not just for file access; it is an effective technique for limiting many types of attacks. Chapter 6 shows how to use parameters to prevent SQL injection.

Security Policy

 Use parameterizing to fix the context and scope of user data.

 Combine parameterization with other techniques to prevent directory traversal.

Double Decoding

Summary: Double decoding helps detect multiple layers of encoding.
Threats: Directory traversal, file system access, server-side code access

Double decoding is a technique specifically designed to counter a type of encoding attack called double encoding. Vulnerability to this type of attack occurs because your application may decode an encoded string more than once from different areas of the application. Attackers can take advantage of this by creating multiple layers of encoded strings, usually in a path or query string. In other words, you encode a string, and then encode that string again. This might allow an attacker to bypass pattern matching or other security checks in your application code.

IIS 5 was vulnerable to this type of attack. In May of 2001, Microsoft issued a cumulative security patch for IIS (see www.microsoft.com/technet/security/bulletin/MS01-026.aspx) that included a fix for a double decoding vulnerability. IIS decoded incoming requests to render the path in a canonical form, then performed the necessary security checks on this path to be sure that the user was requesting a file within a Web content directory. After this check, IIS performed a second, superfluous decoding pass. An attacker could take advantage of this by encoding the path twice. The first decoding pass removed the first layer of encoding, but because the path still contained another layer of encoding, it passed the IIS security checks. The second decoding pass then decoded the remaining layer and produced the final path. The end result was that an attacker could bypass the IIS security checks and access files outside of the Web root.

Because it is difficult to anticipate a string being decoded twice in your application, a more effective strategy is to initially check user input for multiple layers of encoding. By decoding a string twice, you can detect multiple layers of encoding, but what happens if someone uses more than two levels of encoding? How do you know how many times to decode to get to the final layer? Could someone cause a denial of service by encoding a string a hundred times? The solution is to decode the string only twice, comparing the first result with the second result. If these do not match, then you know that the string contains two or more levels of encoding and is likely not a valid request. If you encounter this, simply reject the request and return an error to the client. Figures 5.19 (C#) and 5.20 (VB.NET) demonstrate the double decoding technique.

Figure 5.19 Double Decoding (C#)

Figure 5.20 Double Decoding (VB.NET)
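
A C# sketch of the double-decoding check (the parameter name is illustrative):

using System.Web;

string input = Request.QueryString["file"] ?? string.Empty;

// Decode once, then decode again, and compare the two results.
string decodedOnce = HttpUtility.UrlDecode(input);
string decodedTwice = HttpUtility.UrlDecode(decodedOnce);

if (decodedOnce != decodedTwice)
{
    // The value still contained encoded characters after the first pass,
    // so it carried at least two layers of encoding; reject it.
    Response.Write("Invalid request.");
    return;
}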

Security Policy

 Use double decoding to detect multiple layers of encoding.

 Reject all requests that contain more than one layer of encoding.

Syntax Checking

Summary: Syntax checking is a last line of defense against those attacks that get past other filters.
Threats: Malicious input

After accepting user input and applying one or more of the techniques described in this chapter, you will eventually need to do something with the data. You may, for instance, build an SQL statement to look up account information based on a given username. You might use one or more techniques in this chapter to check user input, but before executing that SQL statement on the server, you might want to perform a final check to be sure that the SQL syntax follows the format you expect. For example, you don’t want to send an SQL statement with multiple verbs such as two SELECT statements or a SELECT and a DELETE. Passing the final string through a pattern-matching function can be extremely effective in stopping attacks, albeit at the cost of some additional processing overhead.
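
For example, a final sanity check on a dynamically built SQL string might look like this C# sketch; the pattern is deliberately simple and illustrative:

using System.Text.RegularExpressions;

// 'sql' is the statement built earlier from filtered user input.
bool syntaxOk =
    Regex.IsMatch(sql, @"^SELECT\s", RegexOptions.IgnoreCase) &&
    !Regex.IsMatch(sql, @";|--|\b(?:INSERT|UPDATE|DELETE|DROP|UNION)\b",
                   RegexOptions.IgnoreCase);

if (!syntaxOk)
{
    // Log the attempt and reject the request rather than sending
    // the statement to the database.
    Response.Write("Invalid request.");
    return;
}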

Syntax checking serves as a last line of defense against those attacks that get past all your other filters. Examples of syntax checking are:

 Ensuring that a shelled command does not contain piping, redirection, command-concatenation characters, or carriage returns

 Ensuring that e-mail address strings contain only a single address

 Ensuring that file paths are relative to a Web content directory and do not contain drive designators, UNC paths, directory traversal characters, or reserved DOS device names

Security Policy

 When appropriate, check the final syntax of any string that is based on user input.

Exception Handling

Summary: Exception handling can catch errors before hackers exploit them.
Threats: Malicious input

Hackers don’t exploit normal operations of your Web application; they usually go after the exceptions that you failed to anticipate. Properly handling exceptions is a powerful defense in stopping a large percentage of Web application vulnerabilities. Although your code might fail to catch malicious user input, an exception handler might catch an error before an attacker can exploit it.

Exception handling is a long-standing best practice, but the limited error handling capabilities in classic ASP have resulted in many programmers failing to properly deal with exceptions. ASP.NET provides a much more robust error handling system that you should take advantage of.

Exception handling is much more than handling errors. Some components do not raise an error but provide error information through events or properties. Furthermore, sometimes an error never occurs, but the results are not what you would expect. For example, if you perform a database query to look up a particular user’s record, you would expect only that record to be returned. If it returns more than one record, you have reason to be suspicious. You should always check results to be sure they are as you would expect them to be.
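
For instance, here is a C# sketch of a sanity check on a lookup that should return exactly one record; the data-access details are illustrative:

// 'results' is a DataTable returned by the user lookup query.
if (results.Rows.Count != 1)
{
    // Zero rows or more than one row is unexpected here; treat it as an
    // error and log it rather than processing the data.
    throw new ApplicationException("Unexpected number of records returned.");
}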

Other exceptions to consider are:

 Return codes from shelled DOS commands

 Return codes from mail servers

 The size, length, or data type of returned data

 Operation timeouts

TIP

Sometimes the nature of the error is not as important as its frequency. For higher security you might want to consider adding code to check for multiple errors from the same client within a given period of time. Because many attacks depend on exploiting errors, encountering too many errors from one user might be a strong indicator of an attack.

Security Policy

 Take advantage of the robust error handling features in ASP.NET.

 Check return results to be sure they are consistent with what you expected.

Honey Drops

Summary: Honey drops work as mini intrusion detectors.
Threats: Server-side code access, file system access, command execution, SQL injection

Many people are familiar with the concept of a honey pot, which is a system designed to lure and ensnare hackers, giving an administrator time to gather evidence and track down the intruder. Sometimes you can’t anticipate all possible attacks, but you’ll at least want to detect and log intrusions. Honey pots, if carefully managed, can prove to be effective intrusion detection systems. You can integrate this same concept into your Web application by using small honey pots, or honey drops. This is how it works:

1. Place unique strings throughout your application or data that you can use as honey drops. For example, create fake database records, fields, tables, or even complete databases, depending on the type of intrusion you want to monitor.

2. Configure your application so that it will never normally access this data. For example, if you created a fake database field, never use a wildcard select statement (such as “SELECT * FROM”), but instead list the specific fields you require.

3. Configure your application or an external packet sniffer (or both) to watch for these strings leaving your database or Web server.

Suppose that you have an e-commerce Web site that accepts credit card transactions and you want to use honey drops to detect any unauthorized access to your data. To do this, create a single fake record in your database using a unique credit card number that you would not otherwise encounter, perhaps one containing all zeros. Structure your SQL queries so that this record never appears under normal circumstances; if it ever does appear in a query result, there is a good chance it is an intrusion, such as a hacker using SQL injection to access your database.

There are several ways for you to watch for this string. One method is to write code to check every query result to see if it contains that record, although this might add a considerable amount of processing overhead. Another method is to use an intrusion detection system (IDS), such as Snort (www.snort.org), to sniff the network link between the Web server and the database, and also between the Web server and the Internet. Finally, configure the sniffer to look for the fake record you created and alert you anytime this value travels from the database to the Web server or from your Web server to the Internet. Note that encrypted network connections prevent sniffing, so you might need to adjust your strategy based on your particular configuration.
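
A small C# sketch of the in-application approach, checking query results for a honey-drop value (the fake card number and column name are illustrative):

using System.Data;

const string honeyCardNumber = "0000000000000000";

// 'results' is a DataTable returned by the orders query.
foreach (DataRow row in results.Rows)
{
    if (honeyCardNumber.Equals(row["CardNumber"] as string))
    {
        // The fake record should never appear in a normal query;
        // log the request details and alert an administrator here.
    }
}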

Honey drops are not just for databases. You can also use them to detect access to files, directories, or even commands. Here are some more ideas:

 Place a conspicuous, blank text file with a unique filename within your Web content directories. Then, configure your IDS to watch for this filename string leaving the network.

 Place server-side comments with a unique string in your source code to detect access to server-side scripts.

 Change the prompt variable in your command prompt to a unique string to detect remote command access.

Honey drops are not appropriate for all applications, but they can provide an extra layer of protection by allowing early detection of application attacks.

Security Policy

 Use honey drops in your database to detect SQL injection attacks.

 Use honey drops in your file system to detect file system access.

 Use honey drops in your source code to detect server-side code access.

Limiting Exposure to Malicious Input

Application attacks are widespread and varied, but we have yet to discover all the possible ways a hacker could exploit your Web application. It is also improbable that every developer will write secure code 100% of the time. Security flaws are bugs, and no amount of developer training or funds can guarantee bug-free code. So while you should take every opportunity to secure your code, you must also take measures to limit exposure to attacks and make your application more resilient to hackers. In this section we will cover:

 Reducing the attack surface

 Limiting attack scope

 Hardening server applications

Reducing the Attack Surface

Summary: Reduce the attack surface of your application to provide fewer opportunities to hackers.
Threats: Malicious input

All code has a certain probability of containing flaws. The more code you have, the higher the probability your application will have flaws. The more flaws you have, the greater the attack surface of your application. Attack surface represents your application’s exposure to attack, but not necessarily its vulnerability to attack. Consider a bank, for example: the outer walls and roof of the building are its attack surface. Some areas, such as windows and doors, are more vulnerable to attack than other areas, such as brick walls. And although brick walls are exposed and are part of the attack surface, they are likely not going to be vulnerable to attack. Nevertheless, a bank robber could drive a tank through a bank wall, so therefore it is part of the attack surface.

Vulnerability depends greatly on other factors, such as how easily a bank robber could get his hands on a tank, his willingness to rob the bank, or how much money is in the bank itself. It also depends on how quickly the robber could execute the plan without getting caught. Despite all these factors, the bank’s attack surface remains the same. In fact, the bigger the bank, the bigger the attack surface. And if a bank has multiple branches, each one increases the overall attack surface for the bank as a whole.

A Web application also has an attack surface. This attack surface is made up of every dynamic Web page, every open TCP/IP port, every system account, and every running application or service, among a list of other factors. Many security efforts address the need to reduce an application’s attack surface. A firewall, for example, limits the number of accessible TCP/IP ports. There are also a number of techniques for limiting attack surface within your application itself.

The attack surface consists of any component of your application that meets these requirements:

 The component is visible or discoverable by the attacker.

 The component is accessible to the attacker.

 The component is potentially exploitable, even though actual vulnerability might not be foreseeable or likely.

Note that addressing any of these items will reduce your application’s attack surface. With this in mind, there are many creative strategies you could use to reduce exposure to attack.

Unused Code

As your Web application matures and grows in features, you might find yourself adding more and more functionality to key modules. Sometimes a central module expands to handle much of the functionality of the application. Consider, for example, this URL from Microsoft’s search engine application that contains nine parameters:

Notice that this particular search does not even make use of all the parameters, so their values are empty. While this is not a vulnerability, it increases the attack surface because it offers the hacker a variety of potential attack vectors. If you are not using a parameter, don’t even show the parameter. The less the hacker can see, the less there is to exploit. Although hiding parameters does not reduce the actual attack surface of the application, it has a similar effect because it limits the attacker’s ability to discover all the available parameters.

WARNING

Hiding parameters does not mean that you need not secure the code that handles them. Obscurity does not replace security, but it does enhance other security measures you might have in place. If fewer people see the parameters, fewer will attack them; therefore it has the effect of reducing the attack surface.

Even though you should hide unused parameters, you must also consider parameters that should not even be there in the first place. You might, for instance, have code that handles parameters that should not exist in the production application. Carefully review each module to identify any debugging, testing, or dead code. Never rely on obscurity to hide this type of parameter.

TIP

Software developers do need to test and debug code, and you will inevitably end up with code that someone forgot to remove. To help prevent this, establish a coding policy to always use the same naming scheme with testing or other temporary variables. This makes it easy to quickly search for any leftover code that should not go into the production environment.

Limiting Access to Code

The most obvious way to limit attack surface is to limit the code in your application. A single static HTML page is much more secure than a fully functional e-commerce application: the less code you have, the less there is to attack. While serving nothing but static pages is rarely a realistic strategy, you can accomplish much of the same effect by limiting access to components of your application.

Carefully consider how you allow access to these features:

 Online demos You might want to showcase the features of your application with an online demo, but this gives everyone access to all of your code. Instead, consider providing static HTML demos that only simulate the features of the full application. Doing this doesn’t fix any vulnerabilities you might have, but it reduces the attack surface by limiting access to the live code to customers or registered users.

 Administration or content-management modules Your administration pages might require authentication to gain access, but the authentication page itself might be vulnerable. Limit access to administration modules by enforcing IP restrictions, running them on obscure ports, requiring client certificates, or moving them to a separate Web site (see the sketch following this list for an application-level IP check).

 Intranet or extranet modules Restrict access to intranet or extranet modules using the same strategies as with administration modules.

 Sample code and applications Many Web servers and applications ship with sample or default code and programs; always remove these when migrating to a production environment.

 Third-party applications Many organizations opt to buy rather than build certain features of their Web application. There are thousands of widely available search engines, shopping carts, guest books, and user-management and content-management systems. Running one of these might not make you vulnerable, but consider that just about anyone can obtain the source code. Wide code availability effectively increases the attack surface, especially for a popular component used on many different Web sites: some hackers will find a vulnerability in such a component and then use a search engine to discover which sites use it. If you do use third-party components, try obscuring their identities, and always review and test the code carefully.
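As a complement to the IIS-level restrictions mentioned for administration modules above, you can also enforce an address check inside the application itself. The sketch below is only an illustration of the idea, not a complete access-control solution: the /admin/ path and the management workstation address are hypothetical, and the check is assumed to run from Global.asax:

    ' Global.asax.vb sketch: reject requests to the administration module
    ' unless they come from an approved internal address.
    Private Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs)
        If Request.Path.ToLower().StartsWith("/admin/") Then
            ' Hypothetical management workstation address.
            If Request.UserHostAddress <> "10.0.0.50" Then
                Response.StatusCode = 403
                Response.End()
            End If
        End If
    End Sub

A check like this should supplement, not replace, the authentication and IIS restrictions on the module itself.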

Security Policy

 Reduce the attack surface of the application to limit exposure to hackers.

 Don’t show query string parameters if you do not use them in a particular context.

 Remove testing, debug, and dead code from production applications.

 If possible, use static content in application demos.

 Limit access to administration or other private modules.

 Remove sample code and programs from production servers.

 Avoid or carefully audit third-party components.

Limiting Attack Scope

Summary: Use security permissions to limit the scope of attacks.
Threats: Malicious input

It might be impossible to build a bullet-proof application that is impervious to all current and future application-level attacks. You can filter input and reduce your attack surface, but you must also consider that someone might eventually find a way to exploit your code. Build your application so that even a successful exploit gains the attacker as little as possible.

Least Privilege

An important strategy is to always follow the principle of least privilege. Consider the security context of the Web application user and evaluate this user’s access to the following:

 The file system

 Registry keys

 Executables

 COM components

 WMI classes

 TCP/IP ports

 Databases

 Other Web sites on the same server

Plan the security context of your Web application to properly limit access to these items. Careful attention to user security helps contain the Web application and separate it from the rest of the operating system.
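A practical first step in this review is to confirm which Windows account your pages actually execute under, and then audit that account’s rights against the list above. A minimal, development-only VB.NET sketch:

    ' Development-only diagnostic: report the Windows identity the Web
    ' application runs under so that account's rights can be audited.
    Imports System.Security.Principal

    Public Class IdentityCheck
        Public Shared Function CurrentAccountName() As String
            ' Without impersonation, this is the worker process account.
            Return WindowsIdentity.GetCurrent().Name
        End Function
    End Class

During testing, write the result to a trace log and verify that the reported account has no more file system, registry, or database access than it genuinely needs.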

Server-Side Code

A common mistake Web developers make is assuming that server-side code is protected from intruders. Although it is meant to be protected, experience has shown that this is not always the case. Work with the assumption that this code is not safe, and take appropriate precautions with what you include in these files. Server-side code is not an appropriate place to store secrets such as passwords, database connection strings, or other sensitive information. Sometimes something as simple as a comment can reveal information that helps an intruder further an attack. Look at your server-side code from the perspective of a hacker to see what information might pose a security risk.
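One alternative, among several, is to keep connection strings out of pages and code-behind files entirely and read them at run time from a store that can be locked down with ACLs, such as a protected registry key. The key path and value name below are hypothetical; this is a sketch of the pattern, not a complete secrets-management solution:

    ' Sketch: read the connection string from an ACL-protected registry key
    ' instead of embedding it in server-side code.
    Imports Microsoft.Win32

    Public Class SecureConfig
        Public Shared Function GetConnectionString() As String
            ' Hypothetical key; grant read access only to the Web application account.
            Dim key As RegistryKey = Registry.LocalMachine.OpenSubKey("SOFTWARE\ExampleApp\Secure")
            If key Is Nothing Then
                Throw New ApplicationException("Configuration key not found.")
            End If
            Try
                Return CStr(key.GetValue("ConnectionString", String.Empty))
            Finally
                key.Close()
            End Try
        End Function
    End Class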

Security Policy

 Use the principle of least privilege to limit the access of Web users.

 Avoid storing passwords, private comments, or other sensitive information in server-side code.

Hardening Server Applications

Summary: Many Web applications have settings to protect from various types of attacks.
Threats: Malicious input

Writing secure code is an important way to defend yourself from attack, but ASP.NET and IIS both help in this effort by providing settings to prevent or mitigate application-level attacks. Some settings you can use to harden your Web server against attack are as follows:

Request Length

Some attacks rely upon being able to send data beyond expected limits. A buffer overflow, for example, might require sending a very large string as part of the Web request. IIS 6.0 allows you to limit the size of the entity body of a request with the MaxRequestEntityAllowed and AspMaxRequestEntityAllowed metabase settings. Both settings specify the maximum size, in bytes, for the entity body of a request, as declared by the HTTP Content-Length header; in other words, the value of a request’s Content-Length header cannot exceed the limits these settings impose. MaxRequestEntityAllowed can be set at any level of the metabase, such as for the server, a specific site, a virtual directory, or even a single file. The AspMaxRequestEntityAllowed setting is similar but applies only to ASP files.
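These IIS settings are applied before your code ever runs, but you can add a complementary check inside the application as well, rejecting requests whose declared entity body is larger than anything your pages legitimately accept. The 100 KB limit below is an arbitrary illustration, and, as with the earlier administration-module sketch, the check is assumed to live in Global.asax:

    ' Global.asax.vb sketch: refuse requests whose declared Content-Length
    ' exceeds what the application ever legitimately needs.
    Private Const MaxEntityBytes As Integer = 102400   ' illustrative 100 KB limit

    Private Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs)
        If Request.ContentLength > MaxEntityBytes Then
            Response.StatusCode = 413   ' Request Entity Too Large
            Response.End()
        End If
    End Sub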

IIS 6 also provides registry settings for specific control over the length of various parts of a request. Table 5.6 summarizes these settings.

Table 5.6

IIS 6 Registry Settings to Limit Request Length

    Setting                Description
    MaxFieldLength         Maximum size, in bytes, of any individual request header
    MaxRequestBytes        Maximum combined size, in bytes, of the request line and headers
    UrlSegmentMaxLength    Maximum number of characters allowed in a single URL path segment
    UrlSegmentMaxCount     Maximum number of path segments allowed in a request URL

Allowed Characters

To limit exposure to directory traversal and encoding attacks, IIS 6 provides registry settings that restrict which characters clients can send in a request. Two of these settings are shown in Table 5.7.

Table 5.7

IIS 6 Registry Settings to Restrict Characters

    Setting            Description
    EnableNonUTF8      Controls whether HTTP.sys accepts request URLs that are not UTF-8 encoded
    PercentUAllowed    Controls whether HTTP.sys accepts the %uNNNN Unicode encoding in request URLs

The first of these settings, EnableNonUTF8, lets you restrict requests to UTF-8 encoded URLs only (set the value to 0). This helps prevent ambiguity among the various character encodings.

The second setting, PercentUAllowed, controls whether users can send request URLs in the %uNNNN format, where NNNN is the hexadecimal Unicode value of the character being submitted. Allowing this encoding can also cause ambiguity, so it is best to disable it unless you have a specific use for it.
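Both values are DWORD settings in the HTTP.sys parameters area of the registry. The following administrative sketch assumes the standard HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters key; it must be run with administrative rights, and the HTTP service must be restarted for the changes to take effect:

    ' Administrative sketch: restrict requests to UTF-8 and reject %uNNNN encoding.
    Imports Microsoft.Win32

    Module HardenHttpSys
        Sub Main()
            Dim key As RegistryKey = Registry.LocalMachine.OpenSubKey( _
                "SYSTEM\CurrentControlSet\Services\HTTP\Parameters", True)
            If key Is Nothing Then
                Console.WriteLine("HTTP.sys parameters key not found.")
                Return
            End If
            Try
                key.SetValue("EnableNonUTF8", 0)     ' accept UTF-8 encoded URLs only
                key.SetValue("PercentUAllowed", 0)   ' reject the %uNNNN encoding
                Console.WriteLine("Settings updated; restart the HTTP service to apply them.")
            Finally
                key.Close()
            End Try
        End Sub
    End Module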

Security Policy

 Use the MaxRequestEntityAllowed and AspMaxRequestEntityAllowed metabase settings to limit the overall length of a request.

 Use the MaxFieldLength, MaxRequestBytes, UrlSegmentMaxLength, and UrlSegmentMaxCount registry settings to limit the length of specific parts of a request.

 Use the EnableNonUTF8 and PercentUAllowed registry keys to limit valid characters in a request.

Coding Standards Fast Track

Handling Malicious Input

Identifying Input Sources

 Always identify any source of user input, including all references to the Request object.

 Carefully identify other indirect or less obvious sources of input.

Programming Defensively

 Always assign filtered user input to variables to distinguish it from the raw data.

 When using VB.NET, use Option Explicit and Option Strict.

 Use centralized filtering functions on all user input.

 Never use the generic Request collection when gathering user input.

Constraining Input

Bounds Checking

 Use validator controls to validate form input if a page posts back to itself.

 Never rely on client-side validation for security.

Pattern Matching

 Use regular expressions either to block known bad data or allow only known good data.

 Use regular expressions to identify malicious keywords or other patterns.

Data Reflecting

 Reflect data using trusted system functions to prevent attacks such as directory traversal.

 Always work with the reflected path in subsequent operations.

Encoding Data

 Use HtmlEncode to encode a string for browser output.

 Use UrlEncode to encode a URL string for output.

 Use UrlPathEncode to encode the path portion of a URL for output.

Encapsulating

 Use hashes to encapsulate data for safe handling.

 Convert hashes to hex values to create safe alphanumeric strings.

Parameterizing

 Use parameterizing to fix the context and scope of user data.

 Combine parameterization with other techniques to prevent directory traversal.

Double Decoding

 Use double decoding to detect multiple layers of encoding.

 Reject all requests that contain more than one layer of encoding.

Syntax Checking

 Check the final syntax of any string that is based on user input to be sure it matches the expected format.

Exception Handling

 Take advantage of the robust error handling features in ASP.NET.

 Check return results to be sure they are consistent with what you expected.

Honey Drops

 Use honey drops in your database to detect SQL injection attacks.

 Use honey drops in your file system to detect file system access.

 Use honey drops in your source code to detect server-side code access.

Limiting Exposure to Malicious Input

Reducing the Attack Surface

 Reduce the attack surface of the application to limit exposure to hackers.

 Don’t show query string parameters if you do not use them in a particular context.

 Remove testing, debug, and dead code from production applications.

 If possible, use static content in application demos.

 Limit access to administration or other private modules.

 Remove sample code and programs from production servers.

 Avoid or carefully audit third-party components.

Limiting Attack Scope

 Use the principle of least privilege to limit the access of Web users.

 Avoid storing passwords, private comments, or other sensitive information in server-side code.

Hardening Server Applications

 Use the MaxRequestEntityAllowed and AspMaxRequestEntityAllowed metabase settings to limit overall length of a request.

 Use the MaxFieldLength, MaxRequestBytes, UrlSegmentMaxLength, and UrlSegmentMaxCount registry settings to limit the length of specific parts of a request.

 Use the EnableNonUTF8 and PercentUAllowed registry keys to limit valid characters in a request.

Code Audit Fast Track

Handling Malicious Input

Identifying Input Sources

 Does the application properly identify all possible sources of user input, including less obvious and secondary input sources?

Programming Defensively

 Does the application assign filtered user input to variables to distinguish it from the raw data?

 When using VB.NET, does the application use Option Explicit and Option Strict?

 Does the application use centralized filtering functions on all user input?

 Does the application avoid using the generic Request collection when gathering user input?

Constraining Input

Bounds Checking

 Does the application use validator controls to validate form input if a page posts back to itself?

 Does the application avoid enforcing security through client-side validation?

Pattern Matching

 Does the application use regular expressions to either block known bad data or allow only known good data?

 Does the application use regular expressions to identify malicious keywords or other patterns?

Data Reflecting

 Does the application reflect data using trusted system functions to prevent attacks such as directory traversal?

 Does the application always work with the reflected path in subsequent operations?

Encoding Data

 Does the application use HtmlEncode to encode all strings for browser output?

 Does the application use UrlEncode to encode all URL strings for output?

 Does the application use UrlPathEncode to encode the path portion of all URLs for output?

Encapsulating

 Does the application use hashes to encapsulate data for safe handling?

 Does the application convert hashes to hex values to create a safe alphanumeric string?

Parameterizing

 Does the application use parameterizing to fix the context and scope of user data?

 Does the application combine parameterization with other filtering techniques to prevent directory traversal?

Double Decoding

 Does the application use double decoding to detect multiple layers of encoding?

 Does the application reject all requests that contain more than one layer of encoding?

Syntax Checking

 Does the application check the final syntax of any string that is based on user input?

Exception Handling

 Does the application take advantage of the robust error handling features in ASP.NET?

 Does the application check return results to be sure they are consistent with what is expected?

Honey Drops

 Does the application use honey drops in the database to detect SQL injection attacks?

 Does the application use honey drops in the file system to detect file system access?

 Does the application use honey drops in the source code to detect server-side code access?

Limiting Exposure to Malicious Input

Reducing the Attack Surface

 Does the application reduce the attack surface of the application to limit exposure to hackers?

 Does the application avoid showing unused query string parameters?

 Is the code devoid of any testing, debug, or other dead code?

 Does the application use static content in application demos?

 Does the application limit access to administration or other private modules?

 Is the production server devoid of sample code?

 Did any third-party components undergo a thorough security audit?

Limiting Attack Scope

 Does the application use the principle of least privilege to limit the access of Web users?

 Does the application avoid storing passwords, private comments, or other sensitive information in server-side code?

Hardening Server Applications

 Does the application use the MaxRequestEntityAllowed and AspMaxRequestEntityAllowed metabase settings to limit overall length of a request?

 Does the application use the MaxFieldLength, MaxRequestBytes, UrlSegmentMaxLength, and UrlSegmentMaxCount registry settings to limit the length of specific parts of a request?

 Does the application use the EnableNonUTF8 and PercentUAllowed registry keys to limit valid characters in a request?

Frequently Asked Questions

The following Frequently Asked Questions, answered by the authors of this book, are designed both to measure your understanding of the concepts presented in this chapter and to assist you with real-life implementation of these concepts. To have your questions about this chapter answered by the author, browse to www.syngress.com/solutions and click on the “Ask the Author” form. You will also gain access to thousands of other FAQs at ITFAQnet.com.

Q: I want to allow users to enter some HTML tags such as <b> and <i> but the built-in validation feature will not allow it. How do I configure ASP.NET to allow this and how do I allow these tags without exposing other users to cross-site scripting attacks?

A: To allow users to send HTML markup from a form field, query string, or cookie, you must first disable the built-in request validation feature. This feature is controlled by the validateRequest attribute of the <pages> element in machine.config. You can also disable it on a per-page basis with this directive on the page itself:

    <%@ Page ValidateRequest="false" %>

    Once you disable this feature, you must manually HTML-encode user input so that any markup it contains is rendered as harmless text. Next, search for the encoded forms of the tags you want to allow and change them back to markup. For example, to allow bold markup, search for the string &lt;b&gt; and replace it with <b>; likewise, replace all occurrences of &lt;/b&gt; with </b>. Repeat this for each allowed tag.

    Note that since you have disabled the built-in validation you must be very careful to always check user input for HTML markup.
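    A minimal VB.NET sketch of this encode-then-selectively-restore approach follows. The list of allowed tags is illustrative, and only exact, lowercase, attribute-free tags are restored; anything else stays encoded, which is the safe default:

    ' Sketch: encode everything, then restore a small allow-list of harmless tags.
    Imports System.Web

    Public Class HtmlFilter
        Public Shared Function AllowSimpleFormatting(ByVal input As String) As String
            ' Neutralize all markup first.
            Dim safe As String = HttpUtility.HtmlEncode(input)
            ' Restore only explicitly allowed, attribute-free tags.
            Dim allowedTags() As String = {"b", "i"}
            For Each tag As String In allowedTags
                safe = safe.Replace("&lt;" & tag & "&gt;", "<" & tag & ">")
                safe = safe.Replace("&lt;/" & tag & "&gt;", "</" & tag & ">")
            Next
            Return safe
        End Function
    End Class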

Q: Is ASP.NET vulnerable to buffer overflow attacks?

A: Managed code is generally not vulnerable to buffer overflow attacks, but that does not mean you are completely safe. Because fully trusted code has the capability of calling unmanaged code such as external components or Windows API calls, buffer overflows are still a risk. Always use caution to check string lengths when calling unmanaged code. Also, be sure to run code with a low privilege account, and use strong NTFS permissions to limit access to the file system. For example, Web users should not have access to files outside the Web content directories.
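    For example, a managed wrapper can enforce a length check before user data ever reaches a native buffer. The unmanaged call (NativeMethods.Lookup) and the 255-character limit below are hypothetical placeholders for whatever component you actually call:

    ' Sketch: validate length before passing user input to unmanaged code.
    Private Const MaxNativeBuffer As Integer = 255   ' hypothetical native buffer size

    Public Function SafeNativeLookup(ByVal userValue As String) As String
        If userValue Is Nothing OrElse userValue.Length > MaxNativeBuffer Then
            Throw New ArgumentException("Input exceeds the buffer expected by the unmanaged component.")
        End If
        ' The native call happens only after the bounds check.
        Return NativeMethods.Lookup(userValue)   ' hypothetical P/Invoke wrapper
    End Function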

Q: Can I configure IIS to stop or at least minimize source-code viewing and file access attacks?

A: You can often minimize the effects of file system attacks with how you configure IIS and the file system. Here are some tips:

 In the Internet Services Manager, remove Read permissions for all scripts and executables. Neither scripts nor executables need read permissions to run.

 Set strong NTFS permissions on Web content files and directories to prevent Web users from modifying or creating files. If you need Web users to be able to create and modify data files, consider placing these files outside the Web root. If you must allow users to create or modify files in a Web content directory, use specific NTFS permissions to allow access only to those particular files, rather than the entire directory.

 Use the file system to set the read-only attribute on Web content files and directories to prevent easy modification of these files.

Q: Microsoft states that you don’t need URLScan with IIS 6. Is there any benefit to using URLScan to stop application-level attacks?

A: IIS 6 has many security features that for the most part make URLScan irrelevant. However, URLScan still offers some features that can reduce the attack surface and help limit application-level attacks. For example, you can use URLScan to block requests that contain certain strings, and you can limit the length of specific HTTP headers. See www.microsoft.com/technet/security/tools/urlscan.mspx for more information on using URLScan with IIS 6.