Learning representations by back-propagating errors

April 27, 2023
From 5:00pm to 6:00pm

Blue Room

Specialist level
Speaker: João Seabra Fonseca
Institution: IFT

Abstract: 

João Seabra Fonseca will present the paper 'Learning representations by back-propagating errors' by David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams (Nature 323, 533–536, 1986).

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure. 
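The abstract states the procedure in words; the sketch below is one way to make it concrete. It is not the paper's code: the XOR task, layer sizes, learning rate and NumPy implementation are illustrative assumptions, using sigmoid units and taking the error measure to be the usual squared difference between the actual and desired output vectors.

```python
# Minimal back-propagation sketch (assumptions: XOR task, 2 hidden sigmoid
# units, learning rate and iteration count chosen for illustration; whether
# training reaches zero error depends on the random initialization).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a task that cannot be solved without hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)   # desired output vectors

# Weights and biases for the input->hidden and hidden->output connections.
W1 = rng.normal(scale=1.0, size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(scale=1.0, size=(2, 1))
b2 = np.zeros(1)

lr = 0.5
for step in range(20000):
    # Forward pass: compute hidden-unit states and the actual output vector.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Error measure: half the summed squared difference, E = 1/2 * sum((y - d)^2).
    E = 0.5 * np.sum((y - D) ** 2)

    # Backward pass: error signals for the output layer, then for the hidden
    # layer by propagating back through the transposed weights.
    delta_out = (y - D) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # Adjust the connection weights by gradient descent on E.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print("final error:", E)
print("outputs:", y.round(3).ravel())
```

The backward pass is the essential step: each layer's error signal is obtained from the layer above it via the transposed weights and the local sigmoid derivative, which is how the hidden units receive a gradient even though no desired values are specified for them.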

PDF: rumelhart1986.pdf