ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Monograph available for loan
    Cambridge, Massachusetts : The MIT Press
    Call number: 19/M 18.91404
    Description / Table of Contents: Applied math and machine learning basics. Linear algebra -- Probability and information theory -- Numerical computation -- Machine learning basics -- Deep networks: modern practices. Deep feedforward networks -- Regularization for deep learning -- Optimization for training deep models -- Convolutional networks -- Sequence modeling: recurrent and recursive nets -- Practical methodology -- Applications -- Deep learning research. Linear factor models -- Autoencoders -- Representation learning -- Structured probabilistic models for deep learning -- Monte Carlo methods -- Confronting the partition function -- Approximate inference -- Deep generative models
    Type of Medium: Monograph available for loan
    Pages: xxii, 775 pages, illustrations, diagrams
    ISBN: 9780262035613
    Series Statement: Adaptive computation and machine learning
    Classification: Mathematics
    Parallel Title: Also published as: Deep learning
    Language: English
    Location: Reading room
    Branch Library: GFZ Library
  • 2
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Computational Intelligence 5 (1989), p. 0
    ISSN: 1467-8640
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Computer Science
    Notes: Artificial neural networks capable of doing hard learning offer a new way to undertake automatic speech recognition. The Boltzmann machine algorithm and the error back-propagation algorithm have been used to perform speaker normalization. Spectral segments are represented by spectral lines. Speaker-independent recognition of place of articulation for vowels is performed on lines. Performance of the networks is shown to depend on the coding of the input data. Samples were extracted from continuous speech of 38 speakers. The error rate obtained (4.2% error on test set of 72 samples with the Boltzmann machine and 6.9% error with error back-propagation) is better than that of previous experiments, using the same data, with continuous Hidden Markov Models (7.3% error on test set and 3% error on training set). These experiments are part of an attempt to construct a data-driven speech recognition system with multiple neural networks specialized to different tasks. Results are also reported on the recognition performance of other trained networks, such as one trained on the E-set consonants.
    Type of Medium: Electronic Resource
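
The notes above describe speaker-independent classification of vowel place of articulation, with networks trained by the Boltzmann machine algorithm and by error back-propagation on coded spectral-line features. As a purely illustrative sketch of the second of these techniques, the short program below trains a small feedforward network with error back-propagation on synthetic data; the feature size, the three assumed place-of-articulation classes, the random data, and all hyperparameters are demonstration assumptions and do not reproduce the experiments reported in the article.

# Minimal feedforward network trained with error back-propagation (NumPy only).
# All sizes, classes, and data below are illustrative assumptions, not values
# taken from the cataloged article.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 20      # assumed length of a coded spectral-line feature vector
N_HIDDEN = 16        # assumed hidden-layer size
N_CLASSES = 3        # assumed classes, e.g. front / central / back vowels

# Synthetic stand-in data: random feature vectors with random class labels.
X = rng.normal(size=(300, N_FEATURES))
y = rng.integers(0, N_CLASSES, size=300)
Y = np.eye(N_CLASSES)[y]                       # one-hot targets

# Weight initialisation
W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for epoch in range(200):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = softmax(h @ W2 + b2)

    # Gradient of the mean cross-entropy loss at the output layer
    grad_out = (p - Y) / len(X)

    # Back-propagate the error to every weight and bias
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1.0 - h ** 2)  # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient-descent update
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

error_rate = (p.argmax(axis=1) != y).mean()
print(f"training error rate: {error_rate:.1%}")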