Neural Nets 2
<sup>(Back to [[Tensorflow|Tensorflow Tutorial Page 1]])</sup>
<big>TUTORIAL ON MACHINE LEARNING AND NEURAL NETWORKS (PAGE 2)</big>{{SmallBox|float=right|clear=none|margin=0px 0px 8px 18px|width=170px|font-size=13px|Tutorial Pages|txt-size=11px|
1. [[Neural Nets|Intro]]<br>
2. [[Neural Nets 2|Network Inputs]]<br>
3. [[Neural Nets 3|Network Activation]]<br>
4. [[Neural Nets 4|Network Outputs]]<br>
}}
Below I've embedded another Tensorflow neural net playground.
{{#widget:Tensorflow}}
<!-- [[File: Tensorflow Tutorial img1.png]] -->
{{Clear}} | |||
<br><br><br> | |||
===Outputs=== | |||
---- | |||
More directly, the output is the value spit out by the activation function of the 'output layer'. Here, since we only have a single layer, our 'hidden layer' and 'output layer' are one and the same. The output function of our neuron is known as the '''tanh''' function.
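To make that concrete, here is a minimal sketch (in plain NumPy, not the playground's actual code) of what a single tanh neuron computes; the weights, bias, and inputs below are made-up illustrative values:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical weights, bias, and inputs -- illustrative values only,
# not taken from the playground above.
w = np.array([0.8, -0.5])   # one weight per input feature
x = np.array([1.0,  2.0])   # the neuron's two inputs
b = 0.1                     # bias term

# The neuron's output: the weighted sum of its inputs, plus the bias,
# passed through the tanh activation function.
output = np.tanh(np.dot(w, x) + b)
print(output)               # -0.0996..., a value in the open interval (-1, 1)
</syntaxhighlight>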
The tanh function is an extremely common choice of output function in artificial neural network machine learning frameworks because it yields a nice sigmoid shape, and no matter the magnitude of its inputs, the output of the tanh function is bounded between {-1 : 1}. These are very desirable properties for neural net nodes. Here you see the tanh function evaluated across various x-dim inputs...
<br><br><br><br> | |||
{{Clear}} | |||
[[File: Tanh.png|thumb|500px|left|see [http://reference.wolfram.com/language/ref/Tanh.html tanh in the Wolfram Language documentation] for many details about the tanh function.]]
{{Clear}} | |||
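As a quick numeric check of that bound (a sketch using NumPy's built-in tanh): no matter how large the input gets, the output never escapes {-1 : 1}:

<syntaxhighlight lang="python">
import numpy as np

# Evaluate tanh across a spread of input magnitudes.
xs = np.array([-100.0, -2.0, -1.0, 0.0, 1.0, 2.0, 100.0])
print(np.tanh(xs))
# [-1.         -0.96402758 -0.76159416  0.          0.76159416  0.96402758  1.        ]
# The extreme inputs print as -1 and 1 only because of floating-point
# rounding; mathematically tanh approaches but never reaches +/-1.
</syntaxhighlight>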
Tanh makes its sigmoid transition over roughly the input range {-2 : 2}, and automatically evaluates to exact values when its argument is the natural logarithm of a rational number. Speaking of exponentials, the logistic sigmoid function (built from the natural exponential, and bounded between { 0 : 1}) is another very common choice of output function, for the same reasons as tanh.
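For example, with the argument ln 2 the exponentials collapse to plain fractions: tanh(ln 2) = (e^(ln 2) - e^(-ln 2)) / (e^(ln 2) + e^(-ln 2)) = (2 - 1/2) / (2 + 1/2) = 3/5. A one-line check:

<syntaxhighlight lang="python">
import math

# tanh(ln 2) = (2 - 1/2) / (2 + 1/2) = 3/5 exactly
print(math.tanh(math.log(2)))   # 0.6 (up to floating-point rounding)
</syntaxhighlight>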
For now, let's not belabor the point that our neuron (and, going forward, all our neurons) uses the tanh function. Maybe just keep this in mind if you're wondering what sorts of numbers are travelling along the axons of these neurons, and ultimately painting those colored gradients underneath the dots.
This tutorial continues on the next page. Don't worry about playing around too much with the TensorFlow GUI; there will be plenty of that on the next page, and those that follow.