Deep Learning

Forward Propagation, Forward Propagation Computation

Naranjito 2021. 4. 7. 12:06
  • Forward Propagation Computation

z : the weighted sum accumulated from the previous layer (x * w), before the sigmoid (or whatever activation function) is applied

z1=W1x1+W2x2=0.3×0.1+0.25×0.2=0.08

z2=W3x1+W4x2=0.4×0.1+0.35×0.2=0.11
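
A minimal Python sketch of this weighted-sum step, using the inputs (x1=0.1, x2=0.2) and weights (W1..W4) implied by the numbers above:

```python
# Weighted sums flowing into the hidden layer (values from the example above)
x1, x2 = 0.1, 0.2                        # inputs
W1, W2, W3, W4 = 0.3, 0.25, 0.4, 0.35    # input-to-hidden weights

z1 = W1 * x1 + W2 * x2   # ≈ 0.08
z2 = W3 * x1 + W4 * x2   # ≈ 0.11
```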

 

Then, z1 and z2 pass through the sigmoid (or whatever activation function) at each hidden layer node.

h1=sigmoid(z1)=0.51998934

h2=sigmoid(z2)=0.52747230
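
A short sketch of this activation step, using the standard logistic sigmoid:

```python
import math

def sigmoid(z):
    # Standard logistic function: 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

h1 = sigmoid(0.08)   # ≈ 0.51998934
h2 = sigmoid(0.11)   # ≈ 0.52747230
```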

 

As h1 and h2 head toward the output layer, multiply each by its corresponding weight and accumulate the results.

z3=W5h1+W6h2=0.45×h1+0.4×h2=0.44498412

z4=W7h1+W8h2=0.7×h1+0.6×h2=0.68047592
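
The same pattern as the first layer, now with the hidden activations as inputs; a minimal sketch using the values above:

```python
# Weighted sums flowing into the output layer
h1, h2 = 0.51998934, 0.52747230          # hidden activations from the previous step
W5, W6, W7, W8 = 0.45, 0.4, 0.7, 0.6     # hidden-to-output weights

z3 = W5 * h1 + W6 * h2   # ≈ 0.44498412
z4 = W7 * h1 + W8 * h2   # ≈ 0.68047592
```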

 

Then, the final computed result (y-hat) comes out of the sigmoid (or whatever activation function).

o1=sigmoid(z3)=0.60944600

o2=sigmoid(z4)=0.66384491
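
Applying the sigmoid once more gives the network's predictions; a minimal sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

o1 = sigmoid(0.44498412)   # ≈ 0.60944600, prediction for the first output
o2 = sigmoid(0.68047592)   # ≈ 0.66384491, prediction for the second output
```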

 

Then, compute the loss with MSE. Here y is the target and the predicted y (y-hat) is the output.

Eo1=1/2(target_o1−output_o1)²=0.02193381

Eo2=1/2(target_o2−output_o2)²=0.00203809

 

Then, sum them to get the total loss.

Etotal=Eo1+Eo2=0.02397190
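
A sketch of the loss computation in Python. The post does not list the target values explicitly, but targets of 0.4 and 0.6 reproduce the loss numbers above, so they are assumed here:

```python
# Per-output MSE loss and the total loss
target_o1, target_o2 = 0.4, 0.6          # assumed targets (inferred from the loss values above)
o1, o2 = 0.60944600, 0.66384491          # predictions from the forward pass

E_o1 = 0.5 * (target_o1 - o1) ** 2       # ≈ 0.02193381
E_o2 = 0.5 * (target_o2 - o2) ** 2       # ≈ 0.00203809
E_total = E_o1 + E_o2                    # ≈ 0.02397190
```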

 

  • Back Propagation Step Computation

The weights should be updated in reverse order, starting from W5, W6, W7, W8.
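
As a sketch of what that first backward step looks like for a single weight, the chain rule gives ∂Etotal/∂W5 = ∂Etotal/∂o1 × ∂o1/∂z3 × ∂z3/∂W5. The target value 0.4 and the learning rate 0.5 below are assumptions taken from the referenced wikidocs example, not values stated in this post:

```python
# Gradient of the total loss with respect to W5, then a gradient-descent update
h1 = 0.51998934            # hidden activation feeding W5
o1 = 0.60944600            # output prediction
target_o1 = 0.4            # assumed target (see note above)
learning_rate = 0.5        # assumed learning rate (see note above)

dE_do1  = o1 - target_o1   # derivative of 1/2*(target_o1 - o1)^2 w.r.t. o1
do1_dz3 = o1 * (1 - o1)    # derivative of the sigmoid at z3
dz3_dW5 = h1               # because z3 = W5*h1 + W6*h2

grad_W5 = dE_do1 * do1_dz3 * dz3_dW5         # ≈ 0.02592286
W5_updated = 0.45 - learning_rate * grad_W5  # ≈ 0.43703857
```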

- reference : wikidocs.net/37406 

- reference : medium.com/@14prakash/back-propagation-is-very-simple-who-made-it-complicated-97b794c97e5c