ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2007-04-11
    Description: The leaky competing accumulator (LCA) is a biologically inspired model of choice. It describes the processes of leaky accumulation and competition observed in neuronal populations during choice tasks and it accounts for reaction time distributions observed in psychophysical experiments. This paper discusses recent analyses and extensions of the LCA model. First, it reviews the dynamics and examines the conditions that make the model achieve optimal performance. Second, it shows that nonlinearities of the type present in biological neurons improve performance when the number of choice alternatives increases. Third, the model is extended to value-based choice, where it is shown that nonlinearities in the value function explain risk aversion in risky choice and preference reversals in choice between alternatives characterized across multiple dimensions.
    Print ISSN: 0962-8436
    Electronic ISSN: 1471-2970
    Topics: Biology
    Published by The Royal Society
  • 2
    Publication Date: 2019-12-16
    Description: We argue that natural language can be usefully described as quasi-compositional and we suggest that deep learning-based neural language models bear long-term promise to capture how language conveys meaning. We also note that a successful account of human language processing should explain both the outcome of the comprehension process and the continuous internal processes underlying this performance. These points motivate our discussion of a neural network model of sentence comprehension, the Sentence Gestalt model, which we have used to account for the N400 component of the event-related brain potential (ERP), which tracks meaning processing as it happens in real time. The model, which shares features with recent deep learning-based language models, simulates N400 amplitude as the automatic update of a probabilistic representation of the situation or event described by the sentence, corresponding to a temporal difference learning signal at the level of meaning. We suggest that this process happens relatively automatically, and that sometimes a more-controlled attention-dependent process is necessary for successful comprehension, which may be reflected in the subsequent P600 ERP component. We relate this account to current deep learning models as well as classic linguistic theory, and use it to illustrate a domain general perspective on some specific linguistic operations postulated based on compositional analyses of natural language. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.
    Print ISSN: 0962-8436
    Electronic ISSN: 1471-2970
    Topics: Biology
    Published by The Royal Society
  • 3
    Publication Date: 2020-04-06
    Description: According to complementary learning systems theory, integrating new memories into the neocortex of the brain without interfering with what is already known depends on a gradual learning process, interleaving new items with previously learned items. However, empirical studies show that information consistent with prior knowledge can sometimes be integrated very quickly. We use artificial neural networks with properties like those we attribute to the neocortex to develop an understanding of the role of consistency with prior knowledge in putatively neocortex-like learning systems, providing new insights into when integration will be fast or slow and how integration might be made more efficient when the items to be learned are hierarchically structured. The work relies on deep linear networks that capture the qualitative aspects of the learning dynamics of the more complex nonlinear networks used in previous work. The time course of learning in these networks can be linked to the hierarchical structure in the training data, captured mathematically as a set of dimensions that correspond to the branches in the hierarchy. In this context, a new item to be learned can be characterized as having aspects that project onto previously known dimensions, and others that require adding a new branch/dimension. The projection onto the known dimensions can be learned rapidly without interleaving, but learning the new dimension requires gradual interleaved learning. When a new item only overlaps with items within one branch of a hierarchy, interleaving can focus on the previously known items within this branch, resulting in faster integration with less interleaving overall. The discussion considers how the brain might exploit these facts to make learning more efficient and highlights predictions about what aspects of new information might be hard or easy to learn. This article is part of the Theo Murphy meeting issue ‘Memory reactivation: replaying events past, present and future’.
    Print ISSN: 0962-8436
    Electronic ISSN: 1471-2970
    Topics: Biology
    Published by The Royal Society
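The leaky competing accumulator model summarized in record 1 lends itself to a short simulation. The sketch below is illustrative only and is not the authors' implementation: the leak, inhibition, noise, and threshold values, the Euler step size, and the three-alternative input vector are assumptions chosen for demonstration; only the general form of the dynamics (leaky accumulation, mutual inhibition between accumulators, truncation of activations at zero) follows the model as described in the abstract.

```python
import numpy as np

def simulate_lca(inputs, leak=0.2, inhibition=0.2, noise_sd=0.1,
                 threshold=1.0, dt=0.01, max_steps=10000, rng=None):
    """One trial of a leaky competing accumulator (illustrative parameters).

    Each accumulator integrates its input, leaks at rate `leak`, is inhibited
    by the summed activity of the other accumulators, and is truncated at zero
    (the nonlinearity discussed in the paper). The first accumulator to reach
    `threshold` determines the choice; the crossing time is the simulated RT.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(len(inputs))
    for step in range(1, max_steps + 1):
        others = x.sum() - x                      # inhibition from competitors
        dx = (inputs - leak * x - inhibition * others) * dt \
             + noise_sd * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)               # activations cannot go negative
        if x.max() >= threshold:
            return int(x.argmax()), step * dt     # (choice, reaction time)
    return int(x.argmax()), max_steps * dt        # no boundary crossing

# Toy experiment: alternative 0 has slightly stronger evidence than the others.
rng = np.random.default_rng(0)
trials = [simulate_lca(np.array([1.1, 1.0, 1.0]), rng=rng) for _ in range(500)]
choices, rts = zip(*trials)
print("P(choose 0):", np.mean(np.array(choices) == 0))
print("mean RT:", np.mean(rts))
```

Adding more alternatives to the input vector, or removing the `np.maximum` truncation, gives a simple way to explore the effect of the nonlinearity that the abstract discusses.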
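Record 2 describes simulated N400 amplitude as the automatic update of a probabilistic representation of the event described by a sentence. The toy sketch below is not the Sentence Gestalt network; it only illustrates the update idea under stated assumptions: a hand-made table of word likelihoods over three candidate events, Bayesian conditioning on each incoming word, and the summed absolute change in the distribution as a crude N400 proxy.

```python
import numpy as np

# Toy "event knowledge": P(word | event) for a handful of candidate events.
events = ["chef cooks meal", "chef reads book", "dog chews bone"]
likelihood = {
    "the":   np.array([1.0, 1.0, 1.0]),
    "chef":  np.array([0.9, 0.9, 0.05]),
    "cooks": np.array([0.9, 0.1, 0.05]),
    "reads": np.array([0.1, 0.9, 0.05]),
    "meal":  np.array([0.9, 0.1, 0.1]),
    "book":  np.array([0.05, 0.9, 0.05]),
}

def n400_proxy(sentence, prior=None):
    """Word-by-word update of a probability distribution over events.

    Returns, for each word, the summed absolute change in the distribution
    that the word causes -- a crude stand-in for an update-based N400
    measure (a change signal at the level of meaning).
    """
    p = np.ones(len(events)) / len(events) if prior is None else prior
    updates = []
    for word in sentence.split():
        posterior = p * likelihood[word]          # condition on the new word
        posterior /= posterior.sum()
        updates.append((word, np.abs(posterior - p).sum()))
        p = posterior
    return updates

# An expected continuation produces a smaller update than a surprising one.
print(n400_proxy("the chef cooks meal"))
print(n400_proxy("the chef cooks book"))
```

In this toy setting, a word consistent with the current event estimate produces a small update, while a surprising continuation produces a large one, mirroring the qualitative pattern the abstract describes.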
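Record 3 links the time course of learning in deep linear networks to hierarchical structure in the training data, with each branch of the hierarchy corresponding to a dimension (singular mode) of the input-output map. The sketch below is a minimal illustration under assumed settings (a four-item, two-branch toy hierarchy, a two-layer linear network with small random initialization, full-batch gradient descent, and arbitrary learning rate and epoch counts); it is not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four items in a two-branch hierarchy (one-hot inputs).
# Output features: shared by all | shared within each branch | item-specific.
X = np.eye(4)
Y = np.array([
    # all  brA  brB  it0  it1  it2  it3
    [1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0],   # item 0 (branch A)
    [1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0],   # item 1 (branch A)
    [1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0],   # item 2 (branch B)
    [1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0],   # item 3 (branch B)
])

# Singular modes of the input-output map; broader distinctions carry larger
# singular values and should therefore be learned earlier.
U, S, Vt = np.linalg.svd(Y.T, full_matrices=False)

# Two-layer linear network y = W2 @ W1 @ x, small random initialization.
hidden = 8
W1 = 0.01 * rng.standard_normal((hidden, 4))
W2 = 0.01 * rng.standard_normal((7, hidden))
lr = 0.05

for epoch in range(1, 3001):
    pred = W2 @ W1 @ X.T                  # predictions for all items (7 x 4)
    err = pred - Y.T
    grad_W2 = err @ (W1 @ X.T).T / 4      # full-batch MSE gradients
    grad_W1 = W2.T @ err @ X / 4
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    if epoch in (100, 500, 1000, 3000):
        # Strength of each singular mode in the learned map, relative to target.
        learned = np.diag(U.T @ (W2 @ W1) @ Vt.T) / S
        print(epoch, np.round(learned, 2))
```

In this toy run, the broad distinctions (the mode shared by all items and the branch-level mode) have larger singular values and approach their target strength earlier than the item-specific modes, which is the ordering the abstract builds on.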