5.31.2016

Manually send HTTP requests using Netcat

In my last post, I talked about the importance of periodically reviewing your web server's log files for any unusual behavior.  This is a good practice and should be implemented in your organization.  In OWASP's Log review and management guidance, they point out that the frequency "depends on the criticality (i.e. payment system, customer information, business secret, etc.) of the system labelled by the organization, logs could be reviewed ranging from minute, every day, weekly, monthly or even 3 months." Assessing your log management needs doesn't end there.  Keep in mind that there are other considerations, including but not limited to: whether to centralize your log files, how long to retain them, and how to protect the integrity of the log data.  The last one can be handled by computing a simple checksum for each log file so you can tell whether the file has been tampered with in any way.  Also consider including log files in automatic backups and disaster recovery plans.  How long you choose to keep these files matters, too.  Fold all of these decisions into your organization's security policies.
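Speaking of checksums, you don't need anything fancy; the hashing utility built into Windows will do. A minimal sketch, assuming the default IIS log naming (the file name here is just a placeholder) and that you stash the resulting hash somewhere the web server account can't write to:

certutil -hashfile u_ex160531.log SHA256

Re-run it later and compare; if the hash changed on a log that should already be closed out, someone (or something) touched it.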

Let's say you come across more than the usual number of requests coming in on port 80 that should be coming in on the secured port 443.  You may become suspicious, but if you force your site to communicate over HTTPS, it's more than likely just a few users who have bookmarked the old HTTP URL.  If the site really is answering those requests over HTTPS like it should, there's no harm done.  Still, it might be a good time to do some manual testing to be doubly sure.

The tool we want to use is Netcat. It is a network utility used for sending and receiving data from networked computers. You can transfer files, serve up a single web page, and send messages from one system to another.  In our case we will be sending simple HTTP requests.
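As a quick aside, the send-messages-between-systems part is about as simple as it gets: run a listener on one machine and connect to it from another. A rough sketch (the classic netcat wants -l -p for the listening port, while Ncat takes just -l, and 1234 is an arbitrary port):

nc -l -p 1234
nc [listener-host] 1234

Whatever you type on one end shows up on the other.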

Its history goes back as far as 1996, though I suspect it goes back further than that. The original Netcat and today's Ncat, which we can download from here, are the same tool in the sense that they perform the same function, but they don't share a code base.  The original included a port scanner; it wasn't carried over into Ncat because Nmap had already replaced it as the de facto tool for that job.

Visit the download page and install it for your system. If you're on Windows like I am, open a command prompt and navigate to the folder where it was installed.  Type in the commands as shown in the following screenshot.  The request takes the following format:

nc [host] [port]
[httpMethod] [path] [httpVersion]
Host: [host]
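For instance (example.com here is purely a stand-in for a real host), the exchange looks something like this. With HTTP/1.1 the Host header is required, and you finish the request by pressing Enter on an empty line:

nc example.com 80
GET / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
...

Everything after the blank line is the server's reply: the status line, the response headers, and then the body.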



You should get back the raw HTML from the requested page. How often will you use this? Probably not too often, but let's appreciate its simplicity. If you need to quickly test some security settings, look at some raw JSON, or fire off a few test requests and then find them in your log files, you can add a custom User-Agent and CTRL-F for the value you put in there.  Like so:

nc yourdomain.com 80
GET / HTTP/1.1
Host: yourdomain.com
User-Agent: blah

Search for the term "blah" and you'll quickly find those requests in the log files. Or set up a batch job in Log Parser Studio, as mentioned in my previous blog post, and view them there.  There are many other things you can do with this tool. Let me know if you do anything cool with it.
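And to close the loop on the port 80 versus 443 question from earlier: if you grabbed Ncat (the Nmap project's build), its --ssl flag wraps the connection in TLS, so you can send the same request to the HTTPS side of the site. A sketch, again with a placeholder domain:

ncat --ssl yourdomain.com 443
GET / HTTP/1.1
Host: yourdomain.com
User-Agent: blah

If both the port 80 and port 443 requests come back the way you expect (ideally with port 80 redirecting to HTTPS), those stray entries in the log are probably nothing more than old bookmarks.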


4.29.2016

Review your Web Server's log files!

Let's face it. Viewing log files is not the most glamorous thing to do.  If your company has a high-traffic web site, web server, API, or web anything, someone in your organization has got to do it.  And chances are, since you are reading this, that person is probably going to be YOU.

Web servers are configured to log URLs, just as cash registers are designed to spit out unnecessarily long receipts.  That is a sad reality, and those receipts are a real problem.  For our purposes, though, these logs are an invaluable tool for preventing and analyzing attacks, especially after the fact.  In the case of IIS servers (IIS 7 and later), logging is turned on by default, capturing data in the W3C format.  IIS logs the following fields under the W3C format, which should be sufficient, but the option exists to log more.  Bytes Sent and Bytes Received seem like good candidates, depending on what and how your web applications transmit and receive data.
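If you have never opened one of these files, the top of a default W3C log looks roughly like the following. Treat this as an illustration; the exact field list depends on what you've enabled, and the version banner will match your server:

#Software: Microsoft Internet Information Services 8.5
#Version: 1.0
#Date: 2016-04-29 00:00:00
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken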



This means you should wrangle these files, located in %SystemDrive%\inetpub\logs\LogFiles by default, and review them periodically.  That may be every hour, every day or every week - whatever makes sense to you and your organization.  If the schedule feels too infrequent, then it's time to change the very policies that exist to protect your company's data.  Either way, it must be done.
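IIS keeps a separate W3SVC<siteId> subfolder per site, so tracking down the current file usually amounts to something like this (the site ID of 1 and the u_ex*.log naming are the defaults, so adjust for your setup):

cd %SystemDrive%\inetpub\logs\LogFiles\W3SVC1
dir /od u_ex*.log

The /od switch sorts by date, so the most recent log lands at the bottom of the listing.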

The good news is that you can do this pretty easily using a tool called Log Parser Studio.  It's an easy-to-use utility that comes with a number of preset queries showing, for example, the top 20 URLs requested.  That may be nice for usage metrics, but we are more concerned with errors and unusual behavior outside of two standard deviations.  Look at the data, determine whether anything seems suspicious, and really look for the outliers.  As the motto goes, if you see something, say something.
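Under the hood, Log Parser Studio sits on top of Log Parser 2.2, and its presets are plain Log Parser SQL, so you can tweak them or run them straight from the command line. A sketch of the kind of query involved (the u_ex*.log file mask and field names assume default IIS W3C logging):

LogParser -i:IISW3C "SELECT TOP 20 cs-uri-stem, COUNT(*) AS Hits FROM u_ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"

Swap the field in the SELECT and GROUP BY clauses and the same pattern gives you top status codes, top client IPs, and so on.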


Haven't reviewed logs before?  Not a problem.  I would suggest running a number of these pre-defined queries daily in a batch.  That way, you can peruse the data within a few minutes and a few button clicks.  A few things you might want to look for in your logs: 
  • HTTP verbs used
  • GET Requests w/sensitive data
  • Requests sent over Port 80 that shouldn't have been 
Long-running queries, errors by error code, and requests per hour are nice to see, but a couple of things are more interesting here.  IIS: Top 20 HTTP Verbs shows us all the methods used.  If you know you don't allow the PUT and DELETE methods but they show up in the logs, something is wrong.  Double check your IIS server's settings.  Better yet, disable everything else, including PUT, DELETE, TRACE, CONNECT, and OPTIONS, and allow only GET, POST, and HEAD.  This may vary from application to application, so if you are unsure, ask someone.

Also, if you see any GET requests coming in with sensitive data in the query string, take note of them.  Furthermore, if you see requests arriving over the unsecured channel (port 80) that should have been secured, this could be a client explicitly requesting HTTP instead of HTTPS.  Determine whether there is a pattern and investigate.  And last but not least, it's good practice to check which client IPs are sending most of the requests.  Do a quick lookup and see where most of the requests originate.  If you are a local business, or do most of your business in a confined geographical area, and most of your requests are coming from China or North Korea, that would be a cause for concern.
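If you would rather pull these numbers yourself, the same Log Parser syntax from above works here too. A rough sketch: one query for the verb counts and one for anything that arrived on port 80 (field names again assume default W3C logging):

LogParser -i:IISW3C "SELECT cs-method, COUNT(*) AS Hits FROM u_ex*.log GROUP BY cs-method ORDER BY Hits DESC"
LogParser -i:IISW3C "SELECT date, time, c-ip, cs-method, cs-uri-stem FROM u_ex*.log WHERE s-port = 80"

An unexpected verb in the first result, or a steady stream of rows from the second on a site that should be HTTPS-only, is exactly the kind of outlier worth chasing down.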

There are many other ways of dissecting this log data, and attackers are only getting smarter.  Revisit your review strategy, the data points you track, and the other markers that may serve as warning signs of an attack.  Do this every quarter to stay on top of this perpetual cat-and-mouse game.

If you are looking at PCI DSS or HIPAA compliance, look into OSSEC, a host-based IDS.  It does a lot more than log inspection, but that's something I will delve into in a blog post in the near future, so stay tuned.

4.28.2016

Which HTTP method, GET or POST, is more secure? How about over a secure connection (HTTPS)?

There are many considerations when deciding whether to send an HTTP GET or POST request when submitting form data.  Some of those reasons include ease of use, support for the browser's back and reload buttons, and so on.  Some implementations use one type or the other exclusively.  But the main consideration we will look into is security.

We all know that when we send a GET request, the URL is visible to you and to the person right next to you.  Well, of course that's insecure!  In a POST request, the form data is sent as a block in the body of the request rather than in the URL.  What about GET and POST requests sent via HTTPS?  Surely that's secure, right?

Submitting data via POST is the more secure way, or rather, the less insecure way.  The reasons are pretty simple.  URLs are saved or transmitted in at least a couple of places: 1) in the browser's history, 2) in the HTTP Referer field, and 3) in the web server's log files.  Attackers have at least these places to look to get at the juicy URLs.
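To make that concrete, here is the kind of entry a web server's access log ends up holding when sensitive data rides in a GET query string. This is a made-up line in the IIS W3C style, with obviously fake values:

2016-04-28 14:02:11 10.0.0.5 GET /login user=alice&password=hunter2 443 - 203.0.113.7 Mozilla/5.0 - 200 0 0 31

Had the same credentials been sent via POST, the log would show only the method and path; the request body does not appear in the log, the browser history, or the Referer header.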

How hard would it be to scatter a piece of malicious software on USB sticks around the office, or better yet at various conferences and event halls, labelled "try our demo today"?  Once run, it can crawl your browser history and upload it periodically.  How about another attack vector via the ad networks: an ad is displayed and the referer, i.e. the last page the user visited, is logged, and that URL can very well be the GET request with all kinds of query string information in it.  Don't even get me started on CDNs and the danger of leaking your URLs in the referer when fetching images and JavaScript files.  Just about all web pages these days do this unless you add a meta element like <meta name="referrer" content="never"> (newer browsers use the value "no-referrer") to every page of your site.  Of course, as of yet, not all browsers support this under HTML5.  What's even worse is that most web servers keep logs of all URLs, and every single URL can potentially be logged, whether it arrives over a secured connection or not.

As a security minded developer, if you stick to this one rule your users and employers will thank you: Never send sensitive data using the GET method.  Ask yourself this question the next time you are working on a web application: "Am I relying too heavily on passing data via the GET request and the query string?"  If the answer is yes, choose POST.  To help you remember, think of the POST OFFICE as being more secure because they package up your data as opposed to the GET OFFICE.  :[