ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Cambridge, Mass. : Westview  (1)
  • Molecular Diversity Preservation International  (1)
  • 1
    Title: Introduction to the Theory of Neural Computation / John Hertz, Anders Krogh, Richard G. Palmer
    Published: Cambridge, Mass. : Westview
    Call number: PIK M 370-18-91350
    Type of Medium: Monograph available for loan
    Pages: XXII, 327 pages, illustrations, diagrams
    Edition: [Reprint]
    ISBN: 0201515601, 9780201515602, 0201503956, 9780201503951
    Series Statement: Santa Fe Institute studies in the sciences of complexity 1
    Language: English
    Note: Contents: ONE Introduction; TWO The Hopfield Model; THREE Extensions of the Hopfield Model; FOUR Optimization Problems; FIVE Simple Perceptrons; SIX Multi-Layer Networks; SEVEN Recurrent Networks; EIGHT Unsupervised Hebbian Learning; NINE Unsupervised Competitive Learning; TEN Formal Statistical Mechanics of Neural Networks; APPENDIX Statistical Mechanics. (A rough illustration of the Hopfield update rule, the subject of chapter TWO, follows the results list below.)
    Location: A 18 - must be ordered
    Branch Library: PIK Library
  • 2
    Publication Date: 2021-10-25
    Description: Autoencoders are commonly used in representation learning. They consist of an encoder and a decoder, which provide a straightforward method to map n-dimensional data in input space to a lower m-dimensional representation space and back. The decoder itself defines an m-dimensional manifold in input space. Inspired by manifold learning, we showed that the decoder can be trained on its own by learning the representations of the training samples along with the decoder weights using gradient descent. A sum-of-squares loss then corresponds to optimizing the manifold to have the smallest Euclidean distance to the training samples, and similarly for other loss functions. We derived expressions for the number of samples needed to specify the encoder and decoder and showed that the decoder generally requires far fewer training samples than the encoder to be well specified. We discuss the training of autoencoders in this perspective and relate it to previous work in the field that uses noisy training examples and other types of regularization. On the natural image data sets MNIST and CIFAR10, we demonstrated that the decoder is much better suited to learning a low-dimensional representation, especially when trained on small data sets. Using simulated gene regulatory data, we further showed that training the decoder alone leads to better generalization and meaningful representations. Our approach of training the decoder alone facilitates representation learning even on small data sets and can lead to improved training of autoencoders. We hope that the simple analyses presented will also contribute to an improved conceptual understanding of representation learning. (A minimal sketch of this decoder-only training scheme follows the results list below.)
    Electronic ISSN: 1099-4300
    Topics: Chemistry and Pharmacology, Physics
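The table of contents of the first record centers on the Hopfield model. As a rough illustration of that topic (a generic sketch, not code from the book), the snippet below stores two random ±1 patterns with a Hebbian weight rule and recovers one of them from a corrupted probe; the pattern size, number of patterns, and number of update sweeps are arbitrary choices for the example.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian storage: W = (1/N) * sum_mu xi_mu xi_mu^T, with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, probe, sweeps=20, seed=None):
    """Asynchronous updates: set s_i = sign(sum_j W_ij s_j), one unit at a time."""
    rng = np.random.default_rng(seed)
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(s.size):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 64))   # two random 64-unit patterns
W = hopfield_store(patterns)
probe = patterns[0].copy()
probe[:10] *= -1                               # corrupt 10 of the 64 units
recovered = hopfield_recall(W, probe, seed=1)
print("overlap with stored pattern:", recovered @ patterns[0] / 64)
```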
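The abstract of the second record describes training only the decoder of an autoencoder: each training sample is assigned its own learnable representation, and gradient descent updates those representations together with the decoder weights under a sum-of-squares loss. The sketch below is one minimal way to render that idea in PyTorch; the random stand-in data, the 2-dimensional representation space, the small Tanh decoder, and the optimizer settings are illustrative assumptions, not the paper's actual setup.

```python
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(200, 50)          # stand-in for 200 training samples in a 50-d input space

latent_dim = 2
# One trainable m-dimensional code per training sample.
Z = nn.Parameter(0.01 * torch.randn(X.shape[0], latent_dim))
# Small nonlinear decoder g: R^m -> R^n; its image is the learned manifold.
decoder = nn.Sequential(
    nn.Linear(latent_dim, 32),
    nn.Tanh(),
    nn.Linear(32, X.shape[1]),
)

opt = torch.optim.Adam([Z] + list(decoder.parameters()), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    # Sum-of-squares reconstruction loss, averaged over samples.
    loss = ((decoder(Z) - X) ** 2).sum(dim=1).mean()
    loss.backward()               # gradients flow into both the codes Z and the decoder weights
    opt.step()

# decoder(Z[i]) now lies (approximately) at the point of the learned manifold
# nearest to training sample X[i] in Euclidean distance.
```

One natural way to embed a new sample with this setup (an assumption here, not spelled out in the abstract) is to freeze the decoder and run the same gradient descent on a fresh code for that sample.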