Below I've embedded a neural network classifier that was rendered in [http://playground.tensorflow.org TensorFlow Playground]. There are a variety of knobs and buttons on the interface; there are even some that I've hidden. There is no need to go looking for documentation about what each one does. I will explain what they all mean in due time. For now, though, let's define our primary goal throughout this tutorial: '''categorization'''
Our primary task is to train neural nets to classify items into one of two categories. Here we represent those categories as either an orange dot or a blue dot. You can think of these dots as CASE and CTRL participants in an Alzheimer's Disease (AD) study. For the heck of it, say blue dots represent CASE and orange dots represent CTRL.
In this first example, let's say...
* dim-1 (x-axis): '''''Braak score'''''
* dim-2 (y-axis): '''''age'''''
Notice the dots form clusters. If you were asked to draw a line on this plane to separate these two clusters, it could be done easily. Our brain's neural nets have already solved the spatial problem. Now let's see if an artificial neural net can solve the same problem.
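If you'd like to tinker with data like this outside the playground, here's a minimal Python sketch of the same kind of setup: two Gaussian clusters in two dimensions, one per group. The means, spreads, and sample sizes are invented for illustration and are not taken from any real AD dataset.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cluster parameters: dim-1 = Braak score, dim-2 = age.
case = rng.normal(loc=[4.5, 78.0], scale=[0.6, 4.0], size=(50, 2))  # blue dots
ctrl = rng.normal(loc=[1.5, 72.0], scale=[0.6, 4.0], size=(50, 2))  # orange dots

# A vertical line between the two Braak-score means, e.g. x = 3.0,
# is the kind of separator your eye draws instantly.
boundary = 3.0
print("CASE on the right of the line:", (case[:, 0] > boundary).mean())
print("CTRL on the left of the line: ", (ctrl[:, 0] < boundary).mean())
</syntaxhighlight>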
Go ahead and click the blue ''start'' button below; let it run for about 500 epochs (~5 seconds), then click pause.
{{#widget:Tensorflow1}}
How'd it do? Does one neuron, with a single feature input (the value of each dot in the first dimension, the x-axis), perform well in the separation task? If it did well, an orange background should have formed behind the orange dots, and a blue background behind the blue dots.
The color gradient along the surface (under the dots) can be understood as the neural net's prediction 'confidence' at that coordinate. More directly, it is the value output by the activation function of the 'output layer'. Here, since we only have a single layer, our 'hidden layer' and 'output layer' are one and the same. The output function of our neuron is known as the '''tanh''' function.
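To make that concrete, here is a sketch of the computation a single tanh neuron performs at every coordinate of the plane. The weight and bias values below are hypothetical stand-ins for what training would actually learn; only the shape of the computation matters here.
<syntaxhighlight lang="python">
import numpy as np

w = np.array([1.8, 0.0])  # one weight per input dimension (hypothetical values)
b = -0.5                  # bias term (hypothetical value)

def neuron(point):
    """Output of a single tanh neuron for one 2-D coordinate."""
    return np.tanh(np.dot(w, point) + b)

# The colored surface is just this value sampled on a grid: strongly
# positive outputs shade toward one class, strongly negative toward the other.
print(neuron(np.array([3.0, 0.2])))   # near +1 -> confident in one class
print(neuron(np.array([-3.0, 0.2])))  # near -1 -> confident in the other
</syntaxhighlight>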
The tanh function is an extremely common choice for an output function in artificial neural network machine learning frameworks because it yields a nice sigmoid shape, and no matter the magnitude of its inputs, the output from the tanh function is bounded between -1 and 1. These are very desirable properties for neural net nodes. Here you see the tanh function evaluated across various x-dim inputs...
{{Clear}}
[[File: Tanh.png|thumb|500px|left|see [http://reference.wolfram.com/language/ref/Tanh.html tanh in the Wolfram Language reference] for many details about the tanh function.]]
{{Clear}}
Tanh does most of its transitioning over inputs in roughly the range -2 to 2, and (as the Wolfram reference above notes) automatically evaluates to exact values when its argument is the natural logarithm of a rational number. Another very common choice of output function, for the same reasons as tanh, is the logistic sigmoid, which is bounded between 0 and 1 rather than -1 and 1.
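Here is a quick numerical check of tanh's behavior: the output stays strictly inside (-1, 1), and nearly all of the transition happens for inputs between about -2 and 2.
<syntaxhighlight lang="python">
import numpy as np

for x in [-10.0, -2.0, -1.0, 0.0, 1.0, 2.0, 10.0]:
    print(f"tanh({x:6.1f}) = {np.tanh(x):+.4f}")

# tanh(+/-2) is already about +/-0.964; tanh(+/-10) is about +/-0.99999996,
# extremely close to the bound but never exactly +/-1.
</syntaxhighlight>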
For now, let's not belabor the point that our neuron (and, going forward, all our neurons) use the tanh function. Maybe just keep this in mind if you're wondering what sorts of numbers are travelling along the axons of these neurons, and what ultimately produces those colored gradients underneath the dots.
This tutorial continues on the next page. Don't worry about playing around too much with the TensorFlow GUI; there will be plenty of that on the next page, and on those that follow.
<br>
{{SmallBox|width=450px|padding=18px 10px 5px 10px|margin=20px 10px 5px 10px|'''Continue to [[TensorFlow Tutorial Page 2]]'''}}