To summarize the three-day B-SidesDC conference: Be Afraid. In all seriousness, many of the systems we use daily are quite vulnerable. The solution is to be vigilant, know what to look for, and understand how to fix it.

It is good to know that the industry mindset is migrating towards an “Assume Breach” model. Companies are learning that their outer defenses are not impenetrable, so it is imperative to improve internal detection methods to limit massive losses of data.

 

Credit Cards – Review those monthly statements, preferably more frequently

There was a large focus on credit cards and fraud across the talks, and that is not a bad thing, given the recent high-profile compromises. Ken Westin and the keynote speaker, G. Mark Hardy, both presented separate takes on the industry, where the gaps exist, and how they need to be improved.

As of this month, American credit card companies are migrating towards the Europay, MasterCard, Visa (EMV) chip. This is a step in the right direction. However, much like any configuration or security process, a poor implementation remains vulnerable.

What this means is that it really comes down to the cardholder. Don’t shrug aside a seemingly random $1.00 charge, even if it never appears on the end-of-month statement. That might be a test of your stolen card, and reviewing card transactions is the strongest means of detection. You alone know what you are purchasing. Yes, there are some delays between when a sale is made and when it appears on a statement, and an amount may change once a tip is added. Overall, the outliers and anomalies do stand out and warrant a second look.

Bank protections are automated, relying on heuristic models and alert thresholds to detect fraud. These can be thwarted with a little bit of testing: small charges at soda machines, or pre-authorization charges at hotels. That leaves us more vulnerable to those who understand the triggers and test carefully around them.

One of Ken Westin’s slides showed 50 credit cards making purchases at a single soda machine. These transactions did nothing more than determine whether the cards were valid. The problem is that each credit card company only sees its own cards, so every purchase was treated as an isolated transaction. When the data was correlated the way we viewed it, the malicious activity was immediately apparent. What other plausible explanation exists for so many cards from around the country being used at a single location, in rapid succession? Unfortunately, privacy concerns prevent the card companies from sharing this data with one another, and an easy opportunity for early detection of fraud is missed.
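
This kind of cross-issuer correlation is straightforward once the data sits in one place. Below is a minimal Python sketch of the idea, not anything shown in the talk; the transaction records, field layout, and thresholds are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical transaction log: (card_id, merchant_id, timestamp).
transactions = [
    ("card-001", "soda-machine-42", datetime(2015, 10, 10, 14, 0)),
    ("card-002", "soda-machine-42", datetime(2015, 10, 10, 14, 1)),
    ("card-003", "soda-machine-42", datetime(2015, 10, 10, 14, 2)),
    # ... dozens more cards hitting the same terminal ...
]

WINDOW = timedelta(hours=1)   # how tight a burst to look for
THRESHOLD = 20                # distinct cards at one terminal before alerting

def flag_card_testing(records):
    """Flag terminals where many distinct cards appear in rapid succession --
    the pattern no single issuer sees, but a combined view makes obvious."""
    by_merchant = defaultdict(list)
    for card, merchant, ts in records:
        by_merchant[merchant].append((ts, card))

    alerts = []
    for merchant, events in by_merchant.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            cards_in_window = {c for t, c in events[i:] if t - start <= WINDOW}
            if len(cards_in_window) >= THRESHOLD:
                alerts.append((merchant, len(cards_in_window)))
                break
    return alerts

print(flag_card_testing(transactions))   # [] until enough cards hit one machine
```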

Yes, your card is insured and, in most situations, you can get back the money that was charged. But let’s be honest, have you tried calling your credit card company? If it is for anything more complicated than declaring travel or verifying payments, you are guaranteed a multi-hour time sink with no guarantee of a timely reimbursement.

 

 

Web Application Testing – Three simple tests

Joseph McCray presented an interactive explanation of how to perform testing for web applications. He boiled a complex series of varying methodologies down to 3 simple questions. The obvious caveat to any and all security testing: only test systems you have explicit permission to test, and only within the boundaries you are given.

  1. Is the web page talking to a database?
  2. Can I or someone else see what I am typing?
  3. Does it reference a file?

To determine if the site is connecting to a database, look no further than its URL.


Parameters passed via a GET request are a strong indicator of a database on the backend. Alternatively, such parameters may be passed in the body of a POST request. Using Tamper Data or another intercepting proxy, POST requests may be viewed or altered. Logins should pass their parameters using POST, as the URL can be read by anyone listening on the wire. If you ever see your credentials passed in the URL, be extremely wary of the site.
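
To make the GET/POST distinction concrete, here is a short Python sketch using the requests library; the endpoint is hypothetical, and as always you should only send traffic to a site you are explicitly authorized to test.

```python
import requests

# Hypothetical endpoint -- substitute a site you are authorized to test.
LOGIN = "http://testsite.example/login"

# Parameters sent via GET end up in the URL itself, visible to proxies,
# server logs, and browser history.
get_resp = requests.get(LOGIN, params={"user": "alice", "pass": "secret"})
print(get_resp.url)             # .../login?user=alice&pass=secret

# The same parameters sent via POST travel in the request body instead,
# which is where login credentials belong.
post_resp = requests.post(LOGIN, data={"user": "alice", "pass": "secret"})
print(post_resp.request.url)    # no credentials in the URL
print(post_resp.request.body)   # user=alice&pass=secret (viewable in a proxy)
```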

Once database communication is identified, it makes an ideal location to look for SQL-related vulnerabilities, or even error messages that leak system information. Simple tests range from adding an extra single quote, ‘, which breaks the SQL statement and may trigger an error, to simple arithmetic, +1, to see if the database performs the calculation. Should a request of the form id=1+1 return the content associated with id=2, the parameter is being evaluated by the database and is likely injectable.
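
Those two probes are easy to automate. The sketch below, in Python with the requests library, runs them against a hypothetical endpoint and parameter name (the URL, id, and string matching are all assumptions); it is a rough illustration, not a substitute for a proper scanner.

```python
import requests

# Hypothetical, explicitly authorized target with a numeric id parameter.
URL = "http://testsite.example/article"

def probe_sql_injection(url):
    baseline = requests.get(url, params={"id": "2"}).text
    quoted   = requests.get(url, params={"id": "1'"}).text   # stray quote -> error?
    math     = requests.get(url, params={"id": "1+1"}).text  # does the DB do the math?

    # Naive checks: real tools compare responses far more carefully.
    if "sql" in quoted.lower() or "syntax" in quoted.lower():
        print("Error message leaked -- input likely reaches the database unescaped")
    if math == baseline:
        print("id=1+1 returned the id=2 page -- the database evaluated the expression")

probe_sql_injection(URL)
```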

 

Search bars, dropdown menus, and comment forms are a good starting point for the second question, since they are the parts of the application users directly control and interact with.


Here is where you shift focus towards cross-site scripting (XSS), or attempting to change how the user’s browser interprets the page. For example, injected JavaScript can steal a user’s session cookies, allowing an attacker to hijack the session and read the communications between the server and end-user.
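
A first step before anything as elaborate as cookie theft is simply checking whether user input is reflected back unencoded. A minimal Python sketch, assuming a hypothetical search endpoint and parameter name you are authorized to test:

```python
import requests

# Hypothetical, explicitly authorized search endpoint.
URL = "http://testsite.example/search"

# A harmless probe string: if it comes back in the page unencoded,
# the browser would treat it as script rather than text.
PROBE = "<script>alert('xss-probe')</script>"

def probe_reflected_xss(url, param="q"):
    resp = requests.get(url, params={param: PROBE})
    if PROBE in resp.text:
        print(f"'{param}' is reflected without encoding -- likely XSS")
    elif "&lt;script&gt;" in resp.text:
        print(f"'{param}' is reflected but HTML-encoded -- the probe was neutralized")
    else:
        print(f"'{param}' does not appear to be reflected")

probe_reflected_xss(URL)
```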

 

The last place to look for vulnerabilities is any reference to files stored locally on the hosting server.


Imagine if, instead of serving a harmless robots.txt or index.html, this returned /etc/passwd.
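
A quick way to check that third question is a handful of directory traversal payloads. Another minimal Python sketch, again with a hypothetical URL and parameter name, and only for systems you are authorized to test:

```python
import requests

# Hypothetical, explicitly authorized endpoint that serves files by name,
# e.g. /view?file=report.txt
URL = "http://testsite.example/view"

# Classic traversal payloads: if any of these return passwd-style content,
# the file parameter is not confined to the web root.
PAYLOADS = [
    "../../../../etc/passwd",
    "..%2f..%2f..%2f..%2fetc%2fpasswd",   # URL-encoded variant
]

def probe_file_inclusion(url, param="file"):
    for payload in PAYLOADS:
        resp = requests.get(f"{url}?{param}={payload}")
        if "root:" in resp.text:
            print(f"Traversal succeeded with payload: {payload}")
            return True
    print("No traversal detected with these payloads")
    return False

probe_file_inclusion(URL)
```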

 

Yes, these 3 questions cover basics that should not be exploitable by such simple means, but next to the long lists of steps prescribed by OWASP or SANS, they make a highly valuable place to begin testing.

As someone who has spent years in the security industry, I still saw the value in taking a complex process and summarizing it in a way that is relatable to anyone.

Moreover, using these methods can provide a much-needed human element. For those who would otherwise just hand over a dry report of vulnerabilities, these questions provide an interpretation of the results. Where a report lists a cross-site scripting vulnerability, the analyst can now demonstrate a custom script that creates a popup to steal login credentials. Or, expanding on a SQL injection finding, the analyst can show how a crafted request returns the data from the users table right in the browser.

 

Closing Thoughts

This by no means covers the entire conference. There were also great presentations by Jared Atkinson on using PowerShell to highlight signs of exploitation and by Michael Gough on methodologies for detecting APTs. The biggest takeaway from both is to log as much as possible, but more importantly, to understand how the monitored environment behaves and what is expected. I highly recommend watching the rest of the talks given this past weekend.

 

Assorted thoughts and gotchas:

  • Don’t just run a vulnerability scan. Have a clearly defined process to prioritize, remediate, and verify that an issue is resolved.
  • Be better at identifying indicators of compromise
    • Look for new IP communications, especially to new external sources or during off hours
    • Watch for unusually large files or registry keys. If a registry key is over 20k, that alone is suspicious
    • Certain programs should never make outbound connections (e.g., WINWORD attempting to connect to a non-Microsoft address). Any such connection should almost certainly be blacklisted, and further attempts to connect may indicate other sources of compromise (a minimal review sketch follows this list)
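
As a rough illustration of that last point, here is a minimal Python sketch that reviews a hypothetical CSV export of host connection logs; the column names, process allowlist, and address prefixes are made-up placeholders, not a real detection rule.

```python
import csv

# Hypothetical connection-log export with columns: process, remote_ip, hour (0-23).
LOG_FILE = "connections.csv"

ALLOWED_OUTBOUND = {"chrome.exe", "firefox.exe", "outlook.exe"}   # placeholder allowlist
MICROSOFT_PREFIXES = ("40.", "52.", "13.")                        # illustrative only

def review_connections(path):
    """Flag processes that should never call out (e.g. WINWORD.EXE to a
    non-Microsoft address) and connections made during off hours."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            proc = row["process"].lower()
            ip = row["remote_ip"]
            hour = int(row["hour"])

            if proc == "winword.exe" and not ip.startswith(MICROSOFT_PREFIXES):
                print(f"SUSPECT: {proc} -> {ip} (Word should not be connecting here)")
            elif proc not in ALLOWED_OUTBOUND:
                print(f"REVIEW: unexpected outbound process {proc} -> {ip}")

            if hour < 6 or hour > 20:
                print(f"REVIEW: off-hours connection {proc} -> {ip} at {hour}:00")

review_connections(LOG_FILE)
```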
