Neural Nets
TUTORIAL ON MACHINE LEARNING AND NEURAL NETWORKS
Let's just dive right in... Below I've embedded a neural network classifier rendered using Tensorflow Playground. There are a variety of knobs and buttons on the interface; as we move along, more of these options will become available. Don't worry, though; all of these will be explained in detail, in due time. For now, let's define our primary goal throughout this tutorial: classification.
Our primary task is to train neural nets to classify items into categories based on some limited information: fruit or vegetable; undergrad major; Alzheimer's Disease patient (CASE) or control participant (CTRL). In the Tensorflow playground below, you can see a bunch of orange and blue dots. Instead of thinking about these as dots at arbitrary spatial coordinates, it will be helpful to think of them as representing people in a clinical study. Let's say the blue dots are patients from the CASE group, and the orange dots are CTRL participants. What is our 'limited information' about them? Let's say we have collected each person's age and their score on a dementia screening exam (scores represent the number of items forgotten). So the first thing we'd probably want to do is make a scatter plot of these two variables (a toy version of this plot is sketched in code just after the list below). Let's define their respective dimensions on the plot axes as:
- x-axis | dim1 | current age (AGE)
- y-axis | dim2 | exam score (SCORE)
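Before turning to the playground itself, here is a minimal sketch of what such a dataset and scatter plot might look like in code. The group means, spreads, and sample sizes are made-up numbers chosen only so that two clusters appear; this is not the Playground's actual data.

```python
# Minimal sketch (illustrative data, not the Playground's): simulate two groups
# of participants and scatter-plot AGE vs. SCORE, colored by group.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 100  # participants per group (arbitrary)

# CASE (blue): older, more items forgotten; CTRL (orange): younger, fewer
age_case, score_case = rng.normal(75, 5, n), rng.normal(8, 2, n)
age_ctrl, score_ctrl = rng.normal(65, 5, n), rng.normal(3, 2, n)

plt.scatter(age_case, score_case, c="tab:blue", label="CASE")
plt.scatter(age_ctrl, score_ctrl, c="tab:orange", label="CTRL")
plt.xlabel("AGE (dim1)")
plt.ylabel("SCORE (dim2)")
plt.legend()
plt.show()
```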
Notice that after plotting, the dots seem to form clusters. That's very promising! If you were asked to draw a line on this plane to separate the two clusters, you could do it easily. Our brain's neural nets have already solved the spatial problem. Now let's see if an artificial neural net can solve the same problem.
Go ahead and click the blue start button below; let it run for about 500 epochs (~5 seconds), then click pause.
[Embedded Tensorflow Playground classifier]
Finished?
How'd it do? Is one neuron with input from a single feature (the dim1 data: AGE) performing well in the separation task? If so, an orange-colored background should have formed behind the orange dots, while a blue-colored background should have formed behind the blue dots. This colored surface gradient can be understood as the neural network's prediction value at that given coordinate. We will explore prediction values in more detail later on in the tutorial. First let's take a look at what the neural net is taking as inputs.
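To make the idea of a prediction surface concrete, here is a small sketch that trains a single logistic "neuron" on AGE alone and then evaluates its prediction at every coordinate of the plane, coloring the background by that value. The data are simulated and scikit-learn stands in for the Playground's own network, so treat this as an illustration of the concept rather than the Playground's internals.

```python
# Sketch: fit one logistic "neuron" on AGE only, then paint the plane with its
# prediction value (the colored surface gradient). Illustrative data only.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
age = np.r_[rng.normal(75, 5, 100), rng.normal(65, 5, 100)]
score = np.r_[rng.normal(8, 2, 100), rng.normal(3, 2, 100)]
label = np.r_[np.ones(100), np.zeros(100)]  # 1 = CASE, 0 = CTRL

clf = LogisticRegression().fit(age.reshape(-1, 1), label)  # AGE is the only input

# Prediction value at every coordinate of the AGE-SCORE plane
aa, ss = np.meshgrid(np.linspace(50, 90, 200), np.linspace(-2, 14, 200))
pred = clf.predict_proba(aa.ravel().reshape(-1, 1))[:, 1].reshape(aa.shape)

plt.contourf(aa, ss, pred, levels=20, cmap="coolwarm", alpha=0.5)
plt.scatter(age, score, c=label, cmap="coolwarm", edgecolor="k")
plt.xlabel("AGE")
plt.ylabel("SCORE")
plt.show()
```

Because the model only sees AGE, the background in this sketch changes horizontally but is identical at every height, which previews the point made about the inputs below.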
Inputs
Take a close look at the input options in Figure-1 on the right. There are a bunch of X variables with subscripts and superscripts, and next to each is a box with various color gradients. For now, let's focus on just two of those symbols, and what they mean to us...
X₁ | AGE
X₂ | SCORE
These are parsed such that subscripts (X₁, X₂, ... Xᵢ) represent each predictor variable, like AGE and SCORE. As you can see, the first two input options, X₁ and X₂, are just the raw AGE and SCORE values. Note that since X₁ is plotted on the x-axis, its box has a color gradient that changes horizontally but is constant in the vertical dimension. Conversely, the X₂ feature, plotted on the y-axis, has a vertical color gradient. To clarify why this happens...
If the only thing we know about these study participants is their AGE (X₁), we can only make a 1-D plot, with each person's age along the x-axis such that [ x = AGEᵢ, y = 0 ]. If you take a look at Figure-2, it should be clear that when the information is collapsed onto this single dimension and plotted along the x-axis, the best line we can draw to separate the dim-1 data will be orthogonal to the x-axis (a vertical line). As you move horizontally along the x-axis, your categorical guess will likely change, along with your confidence in that guess, which is precisely what the color gradient represents. On the other hand, knowing nothing about exam score, moving up and down the y-axis has no effect on your decision, which is why the color is constant in the y-dimension.
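Here is a brief sketch of that collapse onto one dimension: each participant sits at (AGEᵢ, 0) on a number line, and the best separator we can draw is a single vertical threshold. The ages and the threshold value are invented for illustration.

```python
# Sketch of the Figure-2 idea: AGE alone collapses everyone onto a number line,
# so the best classifier is a vertical cut. Illustrative values only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
age_case = rng.normal(75, 5, 100)  # hypothetical CASE ages
age_ctrl = rng.normal(65, 5, 100)  # hypothetical CTRL ages

plt.scatter(age_case, np.zeros_like(age_case), c="tab:blue", label="CASE")
plt.scatter(age_ctrl, np.zeros_like(age_ctrl), c="tab:orange", label="CTRL")
plt.axvline(x=70, linestyle="--", color="k", label="vertical decision line")
plt.xlabel("AGE")
plt.yticks([])
plt.legend()
plt.show()
```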
When the neural net only gets input about a single feature of each person in the dataset, its synaptic weights can only adapt the output along that one dimension. Thus if, for example, the network sees that a person is 3 years above the dataset's average age (the data having been mean-centered), it won't matter what that person's cognitive SCORE was (the neural net doesn't have access to that info): the network will always make the same guess for anyone 3 years above the average age. This is why the color is constant at x = 3 for any y value.
This isn't a shortcoming of having just a single neuron in the entire network. You could add as many neurons and layers as you want (go ahead and try it); if the network only gets input about one feature dimension, the output will be essentially the same whether there is 1 neuron or 1 billion. To see this, pretend you can only see the dots as they are plotted along the 1-D number line in Figure-2; if we were unable to see the 2-D cluster clouds above that line, the billions of neurons in our brain would tell us to draw the classification line in basically the same place as that one neuron in our artificial neural net. This is a very interesting concept worth noting: neural net classifiers can fail for two very different reasons.
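A quick way to convince yourself of this claim is to compare a single logistic neuron with a much larger network when both receive only the AGE feature. The sketch below uses simulated data and scikit-learn models as stand-ins for the Playground network; the exact accuracies will vary from run to run, but the two should land in roughly the same place.

```python
# Sketch: with one input feature, a deep network cannot beat a single logistic
# neuron by much -- the missing information simply isn't there. Illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
age = np.r_[rng.normal(75, 5, 500), rng.normal(65, 5, 500)].reshape(-1, 1)
label = np.r_[np.ones(500), np.zeros(500)]  # 1 = CASE, 0 = CTRL

one_neuron = LogisticRegression().fit(age, label)
deep_net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000,
                         random_state=0).fit(age, label)

print("single neuron accuracy:", one_neuron.score(age, label))
print("deep network accuracy: ", deep_net.score(age, label))
# Both hover around the same value: extra neurons can't add information
# that was never fed into the network.
```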
- (1) The neural network itself might be ill-formulated, such that no matter how much information you provide, it never learns to solve the classification problem.
- (2) You might have implemented a perfectly apposite deep neural network, yet the input data are insufficient to solve the classification problem, so this potentially very good network appears to perform like garbage.
"A perfectly capable neural net might end up performing like garbage because, with the info you were feeding in, it never stood a chance."
— anonymous social worker
With that said, there are ways to help prevent the latter scenario. These involve constructing additional inputs from the raw features, like the ones you see in the rest of the input panel. The next page will discuss the full set of possible network inputs, which includes the following (a sketch of how these derived features can be computed by hand follows the list):
X₁ | AGE
X₂ | SCORE
X₁² | AGE²
X₂² | SCORE²
X₁X₂ | AGE × SCORE
sin(X₁) | sin(AGE)
sin(X₂) | sin(SCORE)
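As a rough illustration of what those derived inputs are, here is a sketch that builds each of them by hand from raw AGE and SCORE vectors. The data are simulated; in the Playground these transformed features are computed for you.

```python
# Sketch: hand-built versions of the Playground's derived input features,
# computed from raw AGE and SCORE columns. Illustrative data only.
import numpy as np

rng = np.random.default_rng(0)
AGE = rng.normal(70, 7, 200)    # hypothetical ages
SCORE = rng.normal(5, 3, 200)   # hypothetical exam scores

features = np.column_stack([
    AGE,              # X1
    SCORE,            # X2
    AGE ** 2,         # X1^2
    SCORE ** 2,       # X2^2
    AGE * SCORE,      # X1 * X2
    np.sin(AGE),      # sin(X1)
    np.sin(SCORE),    # sin(X2)
])
print(features.shape)  # (200, 7): one row per participant, one column per input
```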