12.18.2009

The transition from C# to VB

Not having written any Visual Basic code in about a year, I've found the transition from C# back to VB a slightly rough one.  An endless barrage of squiggly lines indicating syntax errors has plagued my code.  VB's compile-time syntax checking in Visual Studio didn't help either; it just made me even more aware of my failure to comply with VB's rules.

The biggest offender was my variable declarations.  Yes!  The variable declarations!  I shall declare all my variables with a Dim.

I shall declare all my variables with a Dim.
I shall declare all my variables with a Dim.
I shall declare all my variables with a Dim.
I shall declare all my variables with a Dim.
I shall declare all my variables with a Dim.
All your base are belong to us.

The point here is that, like spoken languages, computer languages need to be learned and perpetually practiced.  Just like that violin you quit playing the moment you got accepted to a good university: don't neglect it or you may never get it back.

Ok, that's being overly dramatic.  Fine.  But, honestly, going from language to language is more of a nuisance than anything else unless you program in all of them every day.  I quickly got over the Dim thing.  I got used to the absence of the beloved C-style symbols: { }, [ ], and ;.  I didn't like the fact that I had to google everything from "VB ternary operators" to "VB LINQ to SQL examples."  I'm pretty good with LINQ in C#, but in VB it seems so foreign to me.  Man, I wish I had saved my search history.  But, in the end, everything got done... with a 30% drop in productivity.  Just kidding.
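For anyone making the same jump, here's a quick sketch of the syntax I kept having to look up.  (The names and numbers are made up, and I'm using plain LINQ to Objects instead of LINQ to SQL so the sketch runs on its own.)

    Imports System
    Imports System.Linq

    Module VbRefresher
        Sub Main()
            ' C#: var price = 9.99;  -- in VB, everything gets a Dim.
            Dim price As Double = 9.99

            ' C#: price > 5 ? "big" : "small"  -- VB's ternary is the If() operator.
            Dim label As String = If(price > 5, "big", "small")

            ' C#: var cheap = from p in prices where p < 10 select p;
            ' VB query syntax reads almost the same, minus the braces and semicolons.
            Dim prices() As Double = {3.5, 12.0, 7.25}
            Dim cheap = From p In prices Where p < 10 Select p

            Console.WriteLine("{0}: {1} items under 10", label, cheap.Count())
        End Sub
    End Module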

As for the long-standing C# vs. VB debate, here's my view.  Who cares?  For those of you who share the same passion as I do, we can have a language preference, but that doesn't mean we should refuse to work on projects written in a different language.  Besides the fact that we might get fired, it's just not right.  We programmers are taught to learn how to learn.  Our programming languages course showed us how to adapt to other languages.  Besides, isn't there some fun in exploring new ways to do the same thing?  So what if C# developers get paid more than their VB counterparts on average?  Wait, I do.

12.16.2009

Backpropagation in Neural Networks

Backpropagation is a form of supervised training that teaches a neural network how to operate.  The training is done prior to using the network, and it works only for feed-forward networks.  There are many other ways to train a neural network, including unsupervised methods, but backpropagation is a widely popular training method because of its "learn by example" applicability to many real-world cases.  This kind of network operates on the premise that, given an input, it will produce the known and "correct" output.  This is analogous to training a cell phone to recognize your voice and how you pronounce certain words.  So you can train a network with inputs and what their corresponding outputs should be.  You can't train a network, however, to decipher what your cat's mood is at any given point; there's no known "correct" output to check it against.  Maybe someday?

Let's look into the well-known XOR example.  We all know what this is, right?  The bitwise exclusive OR produces a known and correct output given two inputs, as shown below.  Download the source code / demo for the Exclusive OR (XOR) problem.
Input data      Output data
(1, 1)          (0)
(1, 0)          (1)
(0, 1)          (1)
(0, 0)          (0)


In neural networks, neurons have weighted inputs, an activation function, and an output.  The input layer in this example has two elements (one for each bit).  Each neuron in the hidden layer calculates its value with the formula f(Sum(inputs * weights)).  The weights are initialized to small random values, let's say between -1 and 1, with a mean of 0.  This produces an output value, and since we know what the expected output value is, we calculate the difference and call it the error.  Then this error is backpropagated to the hidden layer and the input layer, whereby the weights are adjusted so that each time the same input pattern is presented to the network, the output will be a little closer to the expected output.  The goal of training is to shrink this error a little bit during each iteration, aka epoch.
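In code, that per-neuron formula might look something like this.  (A sketch, not the demo code; the post doesn't pin down a particular f, so I'm assuming the common sigmoid.)

    ' One neuron: output = f(Sum(inputs * weights)), with the sigmoid standing in for f.
    Function NeuronOutput(inputs() As Double, weights() As Double) As Double
        Dim sum As Double = 0
        For i As Integer = 0 To inputs.Length - 1
            sum += inputs(i) * weights(i)
        Next
        Return 1.0 / (1.0 + Math.Exp(-sum)) ' squashes the sum into (0, 1)
    End Function

    ' Weights start out as small random values between -1 and 1, with a mean of 0.
    Sub InitializeWeights(weights() As Double, rng As Random)
        For i As Integer = 0 To weights.Length - 1
            weights(i) = rng.NextDouble() * 2 - 1
        Next
    End Sub

Here's a snippet from my PowerPoint presentation that sums this process up.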

     - For each input-output pattern:
          - Evaluate the output
          - Calculate the error between the output and the expected output
          - Adjust the weights in the output layer
          - Do the same for the hidden layer(s)
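And here's roughly what those bullet points look like in code: a minimal sketch of a 2-2-1 network (two inputs, two hidden neurons, one output) trained on the XOR table above.  It isn't the downloadable demo, just an illustration; the sigmoid activation and the learning rate of 0.5 are my own assumptions.

    Imports System

    Module XorBackprop
        Const NumHidden As Integer = 2
        Const LearningRate As Double = 0.5 ' assumed; any smallish value works

        ' hiddenWeights(i, 0..1) are neuron i's input weights; index 2 is its bias.
        Dim hiddenWeights(NumHidden - 1, 2) As Double
        ' outputWeights(0..NumHidden - 1) weight the hidden outputs; the last slot is the bias.
        Dim outputWeights(NumHidden) As Double

        Function Sigmoid(x As Double) As Double
            Return 1.0 / (1.0 + Math.Exp(-x))
        End Function

        ' Forward pass: fills hidden() and returns the network's output for one pattern.
        Function Evaluate(x0 As Double, x1 As Double, hidden() As Double) As Double
            For i As Integer = 0 To NumHidden - 1
                hidden(i) = Sigmoid(hiddenWeights(i, 0) * x0 + hiddenWeights(i, 1) * x1 + hiddenWeights(i, 2))
            Next
            Dim sum As Double = outputWeights(NumHidden)
            For i As Integer = 0 To NumHidden - 1
                sum += hidden(i) * outputWeights(i)
            Next
            Return Sigmoid(sum)
        End Function

        Sub Main()
            ' Initialize every weight to a small random value between -1 and 1.
            Dim rng As New Random()
            For i As Integer = 0 To NumHidden - 1
                For j As Integer = 0 To 2
                    hiddenWeights(i, j) = rng.NextDouble() * 2 - 1
                Next
                outputWeights(i) = rng.NextDouble() * 2 - 1
            Next
            outputWeights(NumHidden) = rng.NextDouble() * 2 - 1

            Dim inputs(,) As Double = {{1, 1}, {1, 0}, {0, 1}, {0, 0}}
            Dim expected() As Double = {0, 1, 1, 0}
            Dim hidden(NumHidden - 1) As Double

            For epoch As Integer = 1 To 10000 ' each full pass over the patterns is one epoch
                For p As Integer = 0 To 3 ' for each input-output pattern...
                    ' ...evaluate the output.
                    Dim output As Double = Evaluate(inputs(p, 0), inputs(p, 1), hidden)

                    ' Calculate the error between output and expected output,
                    ' scaled by the sigmoid's derivative, output * (1 - output).
                    Dim outputDelta As Double = (expected(p) - output) * output * (1 - output)

                    ' Adjust the weights in the output layer, then do the same for the hidden layer.
                    For i As Integer = 0 To NumHidden - 1
                        Dim hiddenDelta As Double = outputDelta * outputWeights(i) * hidden(i) * (1 - hidden(i))
                        outputWeights(i) += LearningRate * outputDelta * hidden(i)
                        hiddenWeights(i, 0) += LearningRate * hiddenDelta * inputs(p, 0)
                        hiddenWeights(i, 1) += LearningRate * hiddenDelta * inputs(p, 1)
                        hiddenWeights(i, 2) += LearningRate * hiddenDelta
                    Next
                    outputWeights(NumHidden) += LearningRate * outputDelta
                Next
            Next

            ' After training, the four outputs should be close to 0, 1, 1, 0.
            For p As Integer = 0 To 3
                Console.WriteLine("({0}, {1}) -> {2:F3}", inputs(p, 0), inputs(p, 1), Evaluate(inputs(p, 0), inputs(p, 1), hidden))
            Next
        End Sub
    End Module

Every once in a while the random starting weights land a tiny network like this in a local minimum and the error stops shrinking; re-running it, which re-rolls the initial weights, usually fixes that.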

You're probably asking the question: why do we need a neural network to give us the answer to an XOR operation?  We don't.  It's here for theoretical and teaching purposes.  The real use of this technique, though, is widely seen in the AI of video games.  Here is a statement that caught my eye while researching this topic.
An agent was trained in Quake II to collect items, engage in combat, and navigate a map. The controller was a neural network that learned by backpropagation on pre-recorded demos of human players, using the player's weapon information and location as inputs.
* From "Backpropagation without Human Supervision for Visual Control in Quake II" by Matt Parker and Bobby D. Bryant