Learning representations by back-propagating errors


Resource | v1 | created by semantic-scholar-bot
Type Paper
Created 1986-01-01
Identifier DOI: 10.1038/323533a0

Description

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
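
The procedure the abstract describes has two phases: a forward pass that computes the net's actual output vector, and a backward pass that propagates error derivatives back through the hidden units so that every connection weight can be adjusted to reduce the error measure. The sketch below is a minimal illustration in Python/NumPy, trained on XOR as a toy task; the 2-2-1 architecture, sigmoid units, learning rate, and step count are assumptions chosen for the example, not details taken from the paper.

```python
import numpy as np

# Minimal back-propagation sketch for a 2-2-1 sigmoid network on XOR.
# Architecture, learning rate and step count are illustrative assumptions.

rng = np.random.default_rng(0)

# XOR task: input vectors and desired output vectors.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection weights and biases: input -> hidden (2 units) -> output (1 unit).
W1 = rng.normal(scale=0.5, size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: hidden activations, then the actual output vector.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error measure: derivative of 1/2 * sum((out - y)^2) w.r.t. out.
    err = out - y

    # Backward pass: propagate error derivatives through each sigmoid.
    d_out = err * out * (1 - out)        # dE/d(net input) at the output unit
    d_h = (d_out @ W2.T) * h * (1 - h)   # dE/d(net input) at the hidden units

    # Repeatedly adjust the weights so as to minimize the error measure.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# With a favourable initialization this approaches [[0], [1], [1], [0]];
# a net this small can stall in a poor minimum, so rerun with another seed if so.
print(np.round(out, 2))
```

When training converges, the two hidden units end up encoding intermediate features of the input that make the output linearly separable, which is the "useful new features" property the abstract contrasts with the perceptron-convergence procedure.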

Relations

about Computer science

Computer science is the study of computation and information. Computer science deals with theory of c...

relates to Sequence to Sequence Learning with Neural Networks

Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult...

