Common client-side vulnerabilities of web applications
You may be aware of the OWASP Top 10, which commonly refers to the OWASP list of the top 10 web application security risks. But it is less well known that OWASP has published several such top 10 lists, for example one for APIs and one for mobile applications. In the same vein, in 2022 they published a proposal for a “Client-Side Security Risks” top 10, focused on web applications’ front ends, which can be compared to the mobile list as both address client-side security.
This list is not definitive and is still under construction, but it is interesting because it highlights some of the most important points front-end developers should consider to avoid security mistakes.
In this newsletter, we will go through the 10 elements of the list, explain them, and propose solutions to avoid the associated issues. As we will see, some of them are not straightforward to understand, and some even require interpretation.
Broken Client-side Access Control
"Insufficient control of JavaScript access to client-side assets (data and code), exfiltration of sensitive data, or manipulation of the DOM for malicious purposes (to access those assets)."
This first vulnerability is common in applications that rely heavily on JavaScript or other client-side processing (such as WASM). Sometimes, right after authentication, the client sends a request to an API to retrieve the rights associated with the user. If the response from the server is tampered with, using Burp Suite for example, the client will display more features than initially intended. Depending on the implementation, the features that should not be visible may or may not be functional: if the back end performs access control checks on incoming requests, the features will not work. However, it happens that the back end relies far too heavily on the front end and trusts every request it sends.
To avoid such issues, the best approach is to avoid handling authorization in the front end. The back end should be responsible for authorization checks and reply solely with the features allowed to the user. The JavaScript should not decide whether a feature is available to a user. Moreover, every incoming request should be subject to access control checks to defeat user-crafted requests.
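The server-side check described above can be sketched in a few lines. This is a minimal illustration only: the roles, feature names, and permission table are hypothetical, not taken from the article.

```javascript
// Hypothetical server-side permission table; the back end, not the
// client-side JavaScript, decides which features a role may use.
const PERMISSIONS = {
  admin: new Set(['view_dashboard', 'delete_user']),
  user: new Set(['view_dashboard']),
};

// Called on EVERY incoming request, regardless of what the front end
// chose to display to the user.
function isAuthorized(role, feature) {
  const allowed = PERMISSIONS[role];
  return allowed !== undefined && allowed.has(feature);
}
```

Even if a tampered API response tricks the front end into rendering an admin button, the request it produces still fails this server-side check.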
DOM-based XSS
"Vulnerabilities that permit XSS attacks through DOM manipulation or abuse."
This type of XSS (Cross-Site Scripting) results from JavaScript modifying the HTML of a page on the fly based on user input. This is where the DOM (which stands for Document Object Model) part of the name comes from.
The idea is to abuse JavaScript that modifies a page based on different kinds of user-controllable inputs, such as URL parameters, form inputs, and so on. These are called “sources”, and here are the most targeted ones:
▪️ document.URL
▪️ document.documentURI
▪️ location.href
▪️ location.search
▪️ location.*
▪️ window.name
▪️ document.referrer
The execution of code arises when the input from one of the mentioned sources is passed to a “sink”, where the user input is written. The most popular sinks for DOM XSS are the following:
▪️ document.write
▪️ (element).innerHTML
▪️ eval
▪️ setTimeout
▪️ setInterval
▪️ execScript
To avoid such issues, the JavaScript should use what are called “safe sinks”. The idea is to write untrusted data through the “textContent” property of the target element instead of through its HTML. Moreover, data from a source should never be passed to a JavaScript function that executes code, such as “eval” or “setTimeout”.
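The contrast between an HTML sink and a safe sink can be illustrated as follows. The DOM lines are shown as comments (they only run in a browser), and the escaping helper is our own sketch for cases where markup must be assembled as a string; its name is not from any standard API.

```javascript
// Vulnerable pattern: a user-controlled source flows into an HTML sink.
//   element.innerHTML = location.hash;      // DOM-based XSS
// Safe sink: the browser treats the value as plain text, never as markup.
//   element.textContent = location.hash;

// When markup must be built as a string, escaping the HTML
// metacharacters is a common fallback (helper name is ours):
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```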
A newer protection proposed against such vulnerabilities is “Trusted Types”. They behave similarly to SQL prepared statements. A specification for this mechanism has been published by the W3C. This solution could become the default defence against DOM vulnerabilities if its development continues.
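As an illustration, Trusted Types can be enforced through the Content-Security-Policy response header; the policy name below is ours, chosen for the example:

```http
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types app-sanitizer
```

With this header set, direct assignments to dangerous sinks like innerHTML throw an exception unless the value was produced by a policy registered under that name via window.trustedTypes.createPolicy.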
As a side note on this point of the list, it should be noted that DOM-based XSS is not the only kind of DOM vulnerability; take, for example, DOM-based open redirection.
Sensitive Data Leakage
"Inability to detect/prevent digital trackers and pixels across a web property to ensure national and international privacy laws are complied with."
The ability to ensure proper handling of, and full control over, the data gathered from a client is now required by law in many countries and regions (GDPR, for example). This task is complicated by the many components a modern application needs in order to function. The detection and/or prevention of digital trackers is thus a requirement for any application wanting to operate in those parts of the world. However, it is often overlooked because it is very challenging to implement.
The most efficient way to guarantee control over users’ data is to deeply analyze the business need of a feature in terms of data, to ensure that only the minimum needed is gathered. The same applies to user behavior tracking: a strict analysis of the required data must be performed to ensure that only the data needed to provide the expected metrics is collected.
Vulnerable and Outdated Components
"Lack of detection and updates to JavaScript libraries that are outdated or contain known vulnerabilities."
This is one of the most frequently encountered client-side vulnerabilities. Components can be hard to keep up to date, mainly because of time constraints.
This can be linked to many factors, such as breaking changes or unawareness of the vulnerabilities. It also happens that developers are not aware that a component is used at all (it is pulled in as a dependency of another one).
The best example of this issue is jQuery. Many of its old versions have known vulnerabilities, sometimes with public exploits. In two web application vulnerability assessments out of three, the application uses a vulnerable version of jQuery.
The goal is not to blame jQuery’s developers, nor the developers using it: jQuery is a great tool. But it is important to be aware that using external components comes at the price of keeping them updated, to avoid being exposed to their vulnerabilities. Tools such as Retire.js can help identify components in use that have known vulnerabilities.
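Running such a scan on every build keeps outdated components from slipping into production unnoticed. A hypothetical CI step (GitHub Actions syntax; the flags shown are Retire.js options as we understand them, so verify against the tool’s own documentation):

```yaml
# Hypothetical CI step: scan the repository with Retire.js and fail
# the build when a component with known high-severity issues is found.
- name: Scan for vulnerable JavaScript components
  run: npx retire --path . --severity high
```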
Lack of Third-party Origin Control
"Origin control allows the restriction of certain web assets or resources by comparing the origin of the resource to the origin of the third-party library. Without leveraging such controls, supply chain risk increases due to inclusion of unknown or uncontrolled third-party code that has access to data in the site's origin."
CORS, or Cross-Origin Resource Sharing, should be leveraged to ensure that externally loaded JavaScript or code never queries API endpoints it is not supposed to. The objective is to prevent uncontrolled code from accessing sensitive data.
CORS is a browser-enforced mechanism controlled by response headers such as “Access-Control-Allow-Origin”: the server declares which origins may read its responses, and the browser enforces that declaration.
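A minimal sketch of the server-side half of this mechanism, using an origin allow-list (the origin values and function name are hypothetical):

```javascript
// Only origins on this allow-list may read our API responses
// cross-origin; everything else gets no CORS header at all.
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

function corsHeaders(requestOrigin) {
  if (ALLOWED_ORIGINS.has(requestOrigin)) {
    return {
      'Access-Control-Allow-Origin': requestOrigin,
      // Tell caches the response varies per requesting origin.
      'Vary': 'Origin',
    };
  }
  // No header: the browser refuses to expose the response to the caller.
  return {};
}
```

Note that CORS protects reads from the browser; it is a complement to, not a replacement for, server-side access control.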
JavaScript Drift
"Inability to detect changes at the asset and code level of JavaScript used client-side. This includes the inability to detect behavioral changes of this code to determine if the changes are potentially malicious in nature. This is particularly important for third-party libraries."
When an application pulls third-party components directly from the provider, a bug or malicious behavior may be introduced whenever an update is performed.
The easiest way to prevent such issues is to verify a checksum of the imported resource, ensuring that no unverified changes have been made. However, this process is demanding, as it requires the maintainer of the application to verify and adapt the application very quickly after each third-party component update.
Another option is to enforce the usage of a specific version of the component; this gives the developers some time to review new versions before using them. It is preferable to pin exact versions rather than use permissive requirements such as “Version = 1.*” or “Version >= 1.0”, so that each version can be reviewed before being used.
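In an npm-based project, pinning looks like the fragment below (the jQuery version is only an example of an exact pin, as opposed to a range like “^3.7.1” or “3.*” that would silently pull in newer releases):

```json
{
  "dependencies": {
    "jquery": "3.7.1"
  }
}
```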
The Content-Security-Policy and Subresource Integrity browser security features can also be used to prevent unwanted behaviors.
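With Subresource Integrity, for example, the include pins the exact file content; the URL below is hypothetical and the hash is a placeholder to be computed from the real file:

```html
<!-- Hypothetical third-party include; if the fetched file's SHA-384
     hash does not match, the browser refuses to execute it. -->
<script src="https://cdn.example.com/lib.js"
        integrity="sha384-<base64-hash-of-the-file>"
        crossorigin="anonymous"></script>
```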
Sensitive Data Stored Client-Side
"Storage of sensitive data like passwords, crypto secrets, API tokens, or PII data in persistent client-side storage like LocalStorage, browser cache, or transient storage like JavaScript variables in a data layer."
This issue is common in thick clients but is also present in web clients. As detailed in the OWASP description, some applications store critical information on the client side, such as passwords or cryptographic keys. The location of the secret on the client side can vary: it can be in the browser’s various storage mechanisms, in code comments, in JavaScript variables, and so on.
Sometimes this issue is an oversight, and the secret has no need to be accessible to the client side. To avoid this, it is important to include scripts in your CI/CD pipeline that check that no secrets are present in the released version of the product.
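The core of such a check is simple pattern matching over the built artifacts. The sketch below is illustrative only; dedicated scanners ship far more complete rule sets, and the patterns and function name here are ours.

```javascript
// Illustrative secret patterns only; real scanners maintain
// hundreds of rules and entropy checks.
const SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                          // AWS access key ID format
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,        // PEM private key header
  /api[_-]?key\s*[:=]\s*['"][^'"]{16,}['"]/i,  // generic "api_key = '...'"
];

// Returns the patterns that matched, so CI can fail when non-empty.
function findSecrets(source) {
  return SECRET_PATTERNS.filter((pattern) => pattern.test(source));
}
```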
In cases where the secret is deliberately used by the front end, this is simply a design mistake: the processing requiring the secret should be performed by the back end.
With the rise of WASM, it is important to note that the same restrictions apply. Inspecting the code and memory of a WASM application in search of secrets is not that hard.
Client-side Security Logging and Monitoring Failures
"Insufficient monitoring and detection of client-side changes and data accesses, particularly failures and errors, in real-time as each page is assembled and executed using both first-party and third-party code."
It can be important to keep track of what is happening on the client side of the application, since it allows bugs or attacks to be detected. These can be due to an update of a third-party component or to user actions. Logging such errors in the back end makes it possible to act quickly and avoid damage. This kind of logging can also help detect attacks early, provided the attacker does not block the logging requests, as testing payloads often results in errors.
Content-Security-Policy reporting is a good starting point for logging unexpected resource loading and code execution, and for preventing them.
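In Report-Only mode, for instance, the browser reports violations to the server without blocking anything, which is useful while tuning the policy. The reporting endpoint path below is illustrative:

```http
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-reports
```

Newer browsers also support the report-to directive together with the Reporting-Endpoints header, which supersedes report-uri.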
Not Using Standard Browser Security Controls
"Not using common standards-based security controls built into browsers such as iframe sandboxes, and security headers like Content Security Policy (CSP), subresource integrity, and many other standard security features."
Nowadays, browsers embed many security features that prevent the application from loading content from unwanted origins or executing unwanted code, and that enforce TLS encryption. These features are a great way to easily improve an application’s security, especially when one of the other security measures fails.
However, they can sometimes be challenging to set up properly (like the Content-Security-Policy, for example), which is why they are too often left aside or misconfigured by developers to make their lives easier.
This can be acceptable in a test or development environment, but when pushed to production, these features need to be added and properly configured to maximize the robustness of the application against various attacks such as XSS.
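As an illustration, a baseline set of response headers might look like the fragment below; the exact values must of course be adapted to each application, and a policy this strict will break pages that load cross-origin resources:

```http
Content-Security-Policy: default-src 'self'
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Referrer-Policy: no-referrer
```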
Including Proprietary Information on the Client-Side
"Presence of sensitive business logic, developer comments, proprietary algorithms, or system information contained in client-side code or stored data."
This point is closely related to the first point of this list: security-wise, all critical processing of an application should be performed by the back end, not the front end. Everything executed and/or stored on the front end can be considered public (or at least not secret).
All sensitive or proprietary processing should be performed by the back end. When pushed to production, the front end of the application should be stripped of its comments.
Conclusion
Even though this list is not definitive and some points are redundant, it offers interesting guidelines on what to check when building a web application front end.
Authors
Elliot Rasch
Alexis Pain