Manually send HTTP requests using Netcat

In my last post, I talked about the importance of periodically reviewing your web server's log files for any unusual behavior.  This is a good practice and should be implemented in your organization.  In OWASP's Log Review and Management guidance, they point out that the frequency "depends on the criticality (i.e. payment system, customer information, business secret, etc.) of the system labelled by the organization, logs could be reviewed ranging from minute, every day, weekly, monthly or even 3 months."  Assessing your log management needs doesn't end there.  Keep in mind that there are other considerations, including but not limited to: the need to centralize your log files, retention periods, and the protection and integrity of log information.  The latter can be handled by applying a simple checksum to each log file so you can determine whether the file has been tampered with in any way.  Also consider including log files in automatic backups and disaster recovery plans.  How long you choose to keep these files is very important.  Capture all of these decisions in your organization's security policies.
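The checksum idea above takes only a few lines of Python to sketch. The function names and the choice of SHA-256 are just illustrative here, not part of any particular logging standard:

```python
import hashlib

def file_checksum(path, algorithm="sha256"):
    """Compute a checksum of a log file, reading in chunks so large files are fine."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_checksum(path, expected):
    """Return True if the file still matches the checksum recorded earlier."""
    return file_checksum(path) == expected
```

Record the checksum somewhere the web server can't write to; if `verify_checksum` later returns False, the log was modified after it was sealed.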

Let's say you come across more than the usual number of requests coming in on port 80 that should be coming in on the secured port 443.  You may become suspicious, but if you've forced your site to communicate over HTTPS, it's more than likely just a few users who have bookmarked the old HTTP URL.  If the site is indeed redirecting those requests to HTTPS like it should, there's no harm in that.  Still, it might be a good time to do some manual testing to be doubly sure.

The tool we want to use is Netcat, a network utility for sending and receiving data over network connections.  You can transfer files, serve up a single web page, and send messages from one system to another.  In our case, we will be sending simple HTTP requests.

Its history goes back as far as 1996, although I have some suspicions that it goes back further than that.  The original Netcat and today's Ncat, which we can download from here, are the same tool in the sense that they perform the same function, but they don't share the same code base.  The original included a port scanner; it wasn't carried over into Ncat because Nmap had already replaced it as the de facto tool.

Visit the download page and install it for your system.  If you're on Windows like I am, open a command prompt and navigate to the install folder.  The commands take the following format:

nc [host] [port]
[httpMethod] [path] [httpVersion]

You should get back the raw HTML from the requested page. How often will you use this? Probably not too often. But let's appreciate its simplicity. If you need to quickly test some security settings, want to look at some raw JSON, or want to find your test requests in the log files in a hurry, you can add a custom User-Agent header and CTRL-F for the value you put in there.  Like so:

nc yourdomain.com 80
GET / HTTP/1.1
Host: yourdomain.com
User-Agent: blah

(HTTP/1.1 requires the Host header, and a blank line - press Enter twice - tells the server the request is complete.)

Search for the term "blah" and you'll quickly find those requests in the log files. Or, set up a batch job in Log Parser Studio as mentioned in my previous blog post and view them there.  There are many other things you can do with this tool. Let me know if you do anything cool with it.
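If nc isn't handy, a few lines of Python sockets do the same job. This is a rough sketch: yourdomain.com is a placeholder host, and the User-Agent value is chosen the same way as above.

```python
import socket

def build_request(host, path="/", user_agent="blah"):
    """Build the same raw HTTP/1.1 request we typed into Netcat."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"User-Agent: {user_agent}\r\n"
            f"Connection: close\r\n"
            f"\r\n").encode("ascii")

def fetch(host, port=80, path="/", user_agent="blah"):
    """Send the request over a plain TCP socket and return the raw response bytes."""
    with socket.create_connection((host, port)) as s:
        s.sendall(build_request(host, path, user_agent))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```

Calling `fetch("yourdomain.com")` gives you the status line, headers, and body exactly as they came off the wire, just like the nc session.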


Review your Web Server's log files!

Let's face it: viewing log files is not the most glamorous thing to do.  But if your company has a high-traffic web site, web server, API, or web anything, someone in your organization has got to do it.  And chances are, since you are reading this, that person is probably going to be YOU.

Web servers are configured to log URLs, just as cash registers are designed to spit out unnecessarily long receipts.  Those receipts are mostly waste; these logs, however, are an invaluable tool for preventing and analyzing attacks, especially after the fact.  In the case of IIS servers (IIS 7 and later), logging is turned on by default, capturing data in the W3C format.  The default set of W3C fields should be sufficient, but the option exists to log more.  Bytes Sent and Bytes Received seem like good candidates depending on what and how your web applications transmit and receive data.
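To give a feel for the format, here is a minimal sketch of a W3C log parser in Python. The field names in the sample below are a typical subset, not the full default list:

```python
def parse_w3c_log(lines):
    """Parse IIS W3C extended log lines into dicts keyed by field name.

    The '#Fields:' directive in the log declares the column order;
    other lines starting with '#' are comments and are skipped.
    """
    fields = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
        elif line.startswith("#") or not line:
            continue
        else:
            yield dict(zip(fields, line.split()))
```

Feed it the lines of a log file and each record comes back as a dict, so filtering on `cs-method` or `sc-status` is one list comprehension away.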

This means you should wrangle these files, located in %SystemDrive%\inetpub\logs\LogFiles by default, and review them periodically.  That may be every hour, every day, or every week - whatever makes sense for you and your organization.  If that cadence feels too infrequent, then it's time to change the very policies that exist to protect your company's data.  Either way, it must be done.

The good news is that you can do this pretty easily using a tool called Log Parser Studio.  It's an easy-to-use utility with a number of preset queries that show, for example, the top 20 URLs requested.  That may be nice for usage metrics, but we are more concerned with errors and unusual behavior - anything outside of two standard deviations.  Look at the data, determine if anything seems suspicious, and really look for the outliers.  As the motto goes: if you see something, say something.

Haven't reviewed logs before?  Not a problem.  I would suggest running a number of these predefined queries daily in a batch.  That way, you can peruse the data within a few minutes with a few button clicks.  A few things you might want to look for in your logs:
  • HTTP verbs used
  • GET Requests w/sensitive data
  • Requests sent over Port 80 that shouldn't have been 
Long-running queries, errors by error code, and requests per hour are nice to see.  A couple of things are more interesting here, however.  IIS: Top 20 HTTP Verbs shows us all the methods used.  If you know you don't allow the PUT and DELETE methods but they show up in the logs, something is wrong.  Double-check your IIS server for these settings.  Better yet, disable all methods including PUT, DELETE, TRACE, CONNECT, and OPTIONS, and only allow GET, POST, and HEAD.  This may vary from application to application, so if you are unsure, ask someone.

Also, if you see any GET requests being sent with sensitive data in the query string, please take note of it.  Furthermore, if you see requests coming in via the unsecured channel (port 80) that should be secured, this could be a client explicitly requesting HTTP instead of HTTPS.  Determine if there is a pattern and investigate.

And last but not least, a good practice might be to check the client IPs that are sending most of the requests.  Do a quick lookup and see which regions most of the requests are coming from.  If you are a local business, or do most of your business in a confined geographical area, and most of your requests are coming from China or North Korea, that would be a cause for concern.
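The "two standard deviations" idea is easy to automate once you have per-hour counts. A minimal sketch in Python, assuming you've already aggregated requests per hour into a dict:

```python
from statistics import mean, stdev

def flag_outliers(requests_per_hour, threshold=2.0):
    """Return the hours whose request counts fall more than
    `threshold` standard deviations from the mean count."""
    counts = list(requests_per_hour.values())
    mu, sigma = mean(counts), stdev(counts)
    return {hour: n for hour, n in requests_per_hour.items()
            if sigma and abs(n - mu) > threshold * sigma}
```

A quiet site that suddenly takes ten thousand requests in one hour will light up here; a steady baseline won't.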

There are many other ways of dissecting this log data, and attackers are only getting smarter.  Revisit your review strategy, the data points you track, and the other markers that may warn of an attack.  Do this every quarter to stay on top of this perpetual cat-and-mouse game.

If you are looking for PCI DSS or HIPAA compliance, look into OSSEC, a host-based IDS.  It does a lot more than just log inspection - something I will delve into in a blog post in the near future, so stay tuned.


Which HTTP method, GET or POST, is more secure? How about over a secure connection (HTTPS)?

There are many considerations when deciding whether to send an HTTP GET or POST request when submitting form data.  Some of those considerations include ease of use and allowing the use of the back and reload buttons in the browser.  Some may implement solutions that use one type or the other exclusively.  But the main consideration we will look into is security.

We all know that when we send a GET request, the URL is visible to you and the person right next to you.  Well of course that's insecure!  In a POST request, the form data is sent as a block in the request body.  What about GET and POST requests sent via HTTPS?  Surely that's secure, right?

Submitting data via POST is the more secure way - or rather, the less insecure way.  The reasons are pretty simple.  URLs are saved or transmitted in at least a couple of places: 1) the browser's history, 2) the HTTP Referer field, and 3) the web server's log files.  Attackers have at least these places to look to get at the juicy URLs.
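A quick Python sketch makes the difference concrete. The example.com URL and the form fields are made up, but the mechanics are exactly what a browser does:

```python
from urllib.parse import urlencode, urlsplit

# Hypothetical sensitive form data.
form = {"user": "alice", "ssn": "123-45-6789"}

# GET: the form data becomes part of the URL itself...
get_url = "https://example.com/search?" + urlencode(form)

# ...so anything that stores URLs (history, Referer, server logs) sees it.
assert "ssn=123-45-6789" in urlsplit(get_url).query

# POST: the same data travels in the request body; the logged URL stays clean.
post_url = "https://example.com/search"
post_body = urlencode(form).encode("ascii")
```

HTTPS encrypts both on the wire, but only the GET version leaves the sensitive values sitting in URL storage at both ends of the connection.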

How hard would it be to put a piece of malicious software on USB sticks around the office, or better yet at various conferences and event halls, with the label "try our demo today"?  Once run, it can crawl your browser history and upload it periodically.  How about another attack vector via the ad networks, which will display an ad and log the referer - the last page that was visited by the user?  That URL can very well be a GET request with all kinds of query string information.  Don't even get me started on CDNs and the danger of leaking your URLs when fetching images and JavaScript files with the referer info.  Just about all webpages do this these days unless you specify this meta element on every page of your site: meta name="referrer" content="never".  Of course, as of yet, not all browsers support this under HTML5.  What's even worse is that most web servers keep logs of all URLs.  Every single URL can potentially be logged, whether it comes over a secured TCP connection or not.

As a security minded developer, if you stick to this one rule your users and employers will thank you: Never send sensitive data using the GET method.  Ask yourself this question the next time you are working on a web application: "Am I relying too heavily on passing data via the GET request and the query string?"  If the answer is yes, choose POST.  To help you remember, think of the POST OFFICE as being more secure because they package up your data as opposed to the GET OFFICE.  :[


How to add a program exception to Windows Firewall for SQL Server

Every now and then when installing a new instance of SQL Server you may want to connect to it from other machines via Management Studio.  Here are the instructions on how to do that.  

To add a program exception to the firewall using the Windows Firewall item in Control Panel:

  1. On the Exceptions tab of the Windows Firewall item in Control Panel, click Add a program.
  2. Browse to the location of the instance of SQL Server that you want to allow through the firewall, for example C:\Program Files\Microsoft SQL Server\MSSQL11.\MSSQL\Binn, select sqlservr.exe, and then click Open.
  3. Click OK.


My File Recovery Story

These days hard drive manufacturers are pushing the terabyte limits, so I thought this would be a good time to find a cheap backup solution for personal and business use at least for the interim.  As already implied from reading the title of this post, I ended up losing data.  But I learned far more than I ever expected.  Here's my story.

I had just bought a shiny new 500GB external storage device to be used as a central backup location.  I figured it would be large enough to hold all my files from various places: my home PC, an external hard drive enclosure, and two USB flash drives.  So I had every device plugged into my PC and started to move data to this new backup device.  I had multiple Explorer windows open and saw a flurry of dialog messages with the all-familiar file-transfer animation.  Everything worked just fine, and after an hour or so it was all complete.  Success.  At this point, I was proud.  I had been responsible enough to back up proactively before an imminent hard drive failure occurred.

The HDD enclosure was then reformatted to NTFS and the USB drives to FAT32.  (If you want to know why, read here.)  A few seconds into the format of one of the flash drives, I had a deer-in-headlights uh-oh moment.  I was formatting the one good backup that I had.  Talk about failure.  I quickly did what anyone else in that moment would have done - I pulled the cable.  I had used diskpart, a command-line utility, but had indicated the wrong drive to format.  This was all in an attempt to create a bootable USB flash drive.  OK, enough of the excuses.  It was all my own fault.

Initially I didn't worry too much, because I knew a reformat - at least a quick one - doesn't visit every sector of every cylinder to do an erase.  All it does, I thought, is remove the references to those files on disk.  So I researched several recovery products online.

As is the case with emergency data recovery, I was insensitive to the price of the software.  The only thing I wasn't willing to do was send the drive to a lab.  The commercial products were not free (though all under $100), but Data Recovery Wizard also comes in a free edition.  The caveat is that the free edition limits you to 1GB of recovered data.  As far as time investment is concerned, let's just say you have to be very patient.  A full scan with each of these products took anywhere between 5 and 8 hours.  Remember, this is only a 5400 RPM, 500GB drive.  Also, keep the drive well ventilated, because the constant head movement will make it run hotter than you ever want it to.

I ran each program at least twice just to be sure it got all my files.  Unfortunately, none of them was able to recover everything in its entirety, because file names were lost.

Here are my findings:

Data Recovery Wizard Professional v5.0.1 - A very intuitive product.  Great for the novice user.  The free edition recovers up to 1GB.

SpinRite 6.0 - I was excited to use this product, but it couldn't even find my damaged disk drive.  Somewhat of a disappointment.  It is still, however, a great product.  It just didn't help me in my situation here.

Recuva - Another great product for the novice user.  Recommended as it is free.

PhotoRec - Open Source +1 (distributed under GNU General Public License).  Please see the list of file formats recovered with this tool.  No GUI so this is probably best for advanced users only.


Sometimes the best things in life are free.  The best products were Recuva and PhotoRec.  I recommend Recuva for those users who require a GUI and want a no fuss solution.


Speed Tracer (Chrome Extension)

A few months back, I discovered this amazing tool created by Google called Speed Tracer. It is a Chrome extension that allows just about anyone, I suppose, to diagnose performance issues in web applications. Take a look at the description below.

Using Speed Tracer you are able to get a better picture of where time is being spent in your application. This includes problems caused by:
  • JavaScript parsing and execution
  • Layout
  • CSS style recalculation and selector matching
  • DOM Event handling
  • Network resource loading
  • Timer fires
  • XMLHttpRequest callbacks
  • Painting
  • and more ...


32-bit and 64-bit software on Windows 7

Maintenance programming - yes, we all have to do it. Most of us experience it by way of coercion. The fortunate ones have the privilege of delegating this task to an eager intern willing to get their hands on any production code. I for one do not have such a privilege.

I have the responsibility of updating a VB6 application every year, and part of doing so is setting up an ODBC data source on each client machine.  This year has been a little different in that these machines run Windows 7.

In Windows 7 there are two folders where DLL, driver, and executable files live: System32 and SysWOW64.


I was surprised to find out that on a 64-bit machine the System32 folder actually holds 64-bit files, not 32-bit files as the name suggests. Why? Backwards compatibility. That's right. That means the SysWOW64 folder contains the 32-bit files. So let me give it to you again: System32 holds 64-bit files and SysWOW64 holds 32-bit files. That certainly is backwards.

And no, the WOW in SysWOW64 is not an acronym for World of Warcraft.  It actually stands for Windows 32-bit on Windows 64-bit, if that helps you remember at all.