Neural Nets 2

4. [[Neural Nets 4|Network Outputs]]<br>
}}
<br>


The previous page ended by mentioning that neural net classifiers can fail for two independent reasons:
# The neural network itself is ill-formulated, so no matter how much information is provided it cannot learn to solve the classification problem.
# The input data is insufficient in quantity or format to solve the classification problem.
<br>


That said, we found that a single neuron with input from a single feature was sufficient to solve the previous classification problem. Let's try another one. Without changing anything yet, press the play button on the Tensorflow playground below and let it run for about 5 seconds, and then press pause...
{{#widget:Tensorflow2}}


<br><br>
====''Finished?''====

How'd it do on this new classification task? Is one neuron with input from ''a single feature'' (the dim1 data: AGE) still performing well in the separation task, creating a background color gradient that matches the color of the dots? If so, an orange-colored background should have formed behind the orange dots, while a blue-colored background should have formed behind the blue dots. It probably didn't do so well this time around, right?


Reset Tensorflow, add more neurons, and add an additional layer with several neurons. Then rerun.


How'd it do? It still can't solve the classification problem, right? It seems like we'll need to add an additional feature as input. Let's take a look at our options. Here is the full set of possible network inputs...


{| class="wikitable" width=30% align=center
|+ style="font-weight:bold;"|Input Features
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| ''X''<sub>1</sub>
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| AGE
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| ''X''<sub>2</sub>
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| SCORE
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| ''X''<sub>1</sub><sup>2</sup>
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| AGE<sup>2</sup>
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| ''X''<sub>2</sub><sup>2</sup>
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| SCORE<sup>2</sup>
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| ''X''<sub>1</sub>''X''<sub>2</sub>
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| AGE × SCORE
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| sin(''X''<sub>1</sub>)
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| sin(AGE)
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| sin(''X''<sub>2</sub>)
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| sin(SCORE)
|}

Start by running ''X''<sub>1</sub> and ''X''<sub>2</sub> together, just to see what happens. Then feel free to test out whatever combinations you think might work.

----


Take a close look at the input options in Figure-1 on the right. There are a bunch of ''X'' variables with subscripts and superscripts, and next to each is a box with various color gradients. For now, let's focus on just two of those symbols, and what they mean to us...


{| class="wikitable" width=30% align=center
|+ style="font-weight:bold;"|Input Features
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| ''X''<sub>1</sub>
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"| AGE
|- style="height:30px"
| style="background:#f7f7f7; border:3px solid #ffffff"| ''X''<sub>2</sub> 
|colspan=2 style="background:#f7f7f7; border:3px solid #ffffff"|  SCORE
|}
 
These are parsed such that subscripts (''X''<sub>1</sub> , ''X''<sub>2</sub> ,... ''X''<sub>i</sub> ) represent each predictor variable, like AGE and SCORE. As you can see, the first two input options ''X''<sub>1</sub> and ''X''<sub>2</sub> are just ''X''<sub>AGE</sub> and ''X''<sub>SCORE</sub>. Note that since ''X''<sub>1</sub> is plotted on the x-axis, it has a color gradient that changes horizontally, but is constant in the vertical dimension. Conversely the ''X''<sub>2</sub> feature plotted on the y-axis has a vertical color gradient. To clarify why this happens...
 
If the only thing we know about these study participants is their AGE, ''X''<sub>1</sub>, we can only make a 1-D plot with each person's age along the x-axis, such that [ x = ''AGE''<sub>i</sub> , y = 0 ]. If you take a look at Figure-2, it should be clear that when information is collapsed onto its single dimension and plotted along the x-axis, the best line we can draw to separate the dim-1 data will be orthogonal to the x-axis (a vertical line). As you move horizontally along the x-axis your categorical guess will likely change, along with the confidence in that guess, which is precisely what is being represented by the color gradient. On the other hand, knowing nothing about exam score, moving up and down on the y-axis will have no effect on your decision, which is why color is constant in the y-dimension.
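To make that concrete, here is a minimal sketch in Python (with made-up, mean-centered AGE values and labels, not the playground's actual dataset) showing that the best 1-D separator is nothing more than a threshold, i.e. a vertical line:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical mean-centered AGE values (sorted) and their class labels;
# these are illustrative numbers, not the playground's actual dataset.
age    = np.array([-3.1, -2.4, -1.0, 0.2, 1.1, 2.5, 3.0])
labels = np.array([0, 0, 0, 0, 1, 1, 1])

# In 1-D the only possible separator is a threshold t (a vertical line x = t):
# scan the midpoints between neighboring ages and keep the most accurate one.
candidates = (age[:-1] + age[1:]) / 2
best_t = max(candidates, key=lambda t: np.mean((age > t) == labels))
print(f"best vertical boundary at x = {best_t:.2f}")
</syntaxhighlight>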


When the neural net only gets input about a single feature of each person in the dataset, its synaptic weights will only adapt output along that one dimension. Thus, if for example the network sees that a person is 3 years above the dataset average (considering the data has been ''mean deviated'' and centered), it won't matter what that person's cognitive SCORE was (since the neural net doesn't have access to that info), the network will always make the same guess for anyone 3 years above average age. This is why color is constant at ''x''=3 for any ''y'' value.
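As a quick sanity check, here is that single neuron in Python, with a hypothetical weight and bias (we don't know the values the playground actually learned). Because SCORE never enters the computation, the output for a person 3 years above average age is identical no matter what their score is:

<syntaxhighlight lang="python">
import numpy as np

w, b = 0.8, 0.1   # hypothetical learned weight and bias, not the playground's actual values
age = 3.0         # a person 3 years above the (mean-centered) average age

# SCORE never appears in the computation, so the guess cannot depend on it:
for score in [-4.0, 0.0, 4.0]:
    print(f"score = {score:+.1f}  ->  output = {np.tanh(w * age + b):.4f}")
</syntaxhighlight>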


{{SmallBox|display=block
|float=right
|clear=none
|width=420px
|margin=25px -10% 5px 10px
|border-width=2px
|border-radius=2px
|[[File:NN NumberLine.png|400px]]
| Figure 2
}}


This isn't a shortcoming of having just one single neuron in the entire network. You could add as many neurons and layers as you want (go ahead and try it); if the network only gets input about one feature dimension, the output will be the same, whether there is 1 neuron or 1 billion. To realize this fact, pretend you can only see the dots as they are plotted along the number line in 1-D (in Figure 2); if we were unable to see the 2-D cluster clouds above that line, the billions of neurons in our brain would tell us to draw the classification line in basically the same place as that one single neuron in our artificial neural net. This is a very interesting concept worth noting: neural net classifiers can fail for two very different reasons.
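If you'd rather convince yourself in code than in the playground, here is a sketch of a deeper tanh network with random, untrained (made-up) weights that is still fed only ''X''<sub>1</sub>. However many neurons or layers you stack, the output remains a function of ''x'' alone, so it is constant along the ''y''-axis:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)   # hidden layer: 8 tanh neurons
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)   # output layer: 1 tanh neuron

def net(x1):
    """A deeper net than before, but its only input is still X1 (AGE)."""
    h = np.tanh(np.array([[x1]]) @ W1 + b1)
    return np.tanh(h @ W2 + b2).item()

# The output at x = 3 is one fixed number; no value of SCORE can change it,
# because SCORE is simply not an argument of the function.
print(net(3.0))
</syntaxhighlight>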
 
(1) The neural network itself might be ill-formulated in such a way that, no matter how much information you provide, it cannot learn to solve the classification problem. (2) On the other hand, you might have implemented an apposite deep neural network; yet if the input data is insufficient to solve the classification problem, it will appear to you that this potentially very good neural network performs like garbage. <br>
 
{{Quote|A perfectly capable neural net might end up performing like garbage because, with the info you were feeding in, it never stood a chance.|source=anonymous social worker}} <br><br>
 
With that said, there are ways to help prevent that latter scenario from happening. These involve transforming the raw inputs, like you see with the rest of the input features in Figure-1: squares, products, and sines of AGE and SCORE.
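Here is a minimal sketch of that idea, assuming nothing beyond NumPy: expand the two raw measurements into the same derived features the playground offers (the columns mirror Figure-1; the sample numbers are made up):

<syntaxhighlight lang="python">
import numpy as np

def expand_features(age, score):
    """Expand raw AGE and SCORE into the playground's full input set:
    X1, X2, X1^2, X2^2, X1*X2, sin(X1), sin(X2)."""
    return np.column_stack([
        age, score,
        age**2, score**2,
        age * score,
        np.sin(age), np.sin(score),
    ])

# Two hypothetical mean-centered participants
age   = np.array([3.0, -1.5])
score = np.array([-0.5, 2.0])
print(expand_features(age, score))   # shape (2, 7): one row per person
</syntaxhighlight>

A network fed these extra columns can carve out curved boundaries (circles from the squared terms, stripes from the sines) even though each individual neuron is still computing something simple.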
 
 
<br>
 
 
 
 
 
 
<!-- [[File: Tensorflow Tutorial img1.png]] -->
 
{{Clear}}
<br><br><br>
 
===Neuron Activation Function===
----


The output function from each neuron in our neural net was chosen to be the '''tanh''' function.


The tanh function is an extremely common choice for an output function in artificial neural network machine learning frameworks because it yields a nice sigmoid shape, and no matter the magnitude of its inputs, the output from the tanh function is bounded between {-1 : 1}. These are very desirable properties for neural net nodes. Here you see the tanh function evaluated across various x-dim inputs...
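A quick numerical sketch of that squashing behavior (just NumPy, nothing playground-specific): no matter how large the input, the output never escapes the interval:

<syntaxhighlight lang="python">
import numpy as np

x = np.array([-100.0, -2.0, 0.0, 2.0, 100.0])
print(np.tanh(x))
# [-1.         -0.96402758  0.          0.96402758  1.        ]
# (the printed +/-1 values are just saturation at machine precision;
#  the true bound is the open interval (-1, 1))
</syntaxhighlight>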
<br><br><br><br>
{{Clear}}
[[File: Tanh.png|thumb|400px|left|see [http://reference.wolfram.com/language/ref/Tanh.html tanh on wolfram alpha] for many details about tanh function.]]
{{Clear}}


Tanh produces most of its sigmoid transition over the input range {-2 : 2}, and it automatically evaluates to exact values when its argument is the natural logarithm of a rational number. Speaking of sigmoid shapes, the logistic function is another very common choice of output function, for the same reasons as tanh.
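Both claims are easy to check numerically. For example, tanh(ln ''a'') simplifies to (''a''<sup>2</sup> - 1)/(''a''<sup>2</sup> + 1), so tanh(ln 2) is exactly 3/5, and the logistic function is just a shifted, rescaled tanh:

<syntaxhighlight lang="python">
import numpy as np

# tanh(ln a) = (a**2 - 1) / (a**2 + 1), so tanh(ln 2) = 3/5 exactly
print(np.isclose(np.tanh(np.log(2.0)), 3 / 5))                        # True

# The logistic sigmoid is a shifted, rescaled tanh:
# 1 / (1 + exp(-x)) == (1 + tanh(x / 2)) / 2
x = np.linspace(-5.0, 5.0, 11)
print(np.allclose(1 / (1 + np.exp(-x)), (1 + np.tanh(x / 2)) / 2))    # True
</syntaxhighlight>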


This is something to keep in mind if you're wondering what sorts of numbers are travelling along the axons of these neurons, and ultimately what paints those colored gradients underneath the dots.


This tutorial continues on the next page. Don't worry about playing around too much with the TensorFlow GUI, there will be plenty of that on the next page, and those that follow.
 
 
====''TO BE CONTINUED...''====
 
<br>
<!-- {{SmallBox|'''[[Neural Nets 2|Continue to Neural Nets Tutorial Page 2]]'''}} -->
<!-- <btn data-toggle="tooltip">Neural Nets 2</btn> -->
