ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
Collection
  • Articles  (3)
Keywords
  • Artificial intelligence  (3)
Publisher
  • Springer  (3)
  • Annual Reviews
  • Blackwell Publishing Ltd
  • Elsevier
  • Periodicals Archive Online (PAO)
  • Wiley
Years
  • 2005-2009
  • 1990-1994  (3)
  • 1980-1984
Year
  • 2008
  • 1991  (3)
  • 1982
  • 1980
Topics
  • Philosophy  (3)
  • Process Engineering, Biotechnology, Nutrition Technology
  • 1
    Electronic Resource
    Springer
    Minds and Machines 1 (1991), pp. 43-54
    ISSN: 1572-8641
    Keywords: Artificial intelligence; causality; cognition; computation; explanation; mind/body problem; other-minds problem; robotics; Searle; symbol grounding; Turing Test
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science, Philosophy
    Notes: Abstract: Any attempt to explain the mind by building machines with minds must confront the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is “everything” a body with a mind can do? Turing's original “pen-pal” version of the Turing Test (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls instead for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol-manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.
    Type of Medium: Electronic Resource
  • 2
    Electronic Resource
    Springer
    Minds and Machines 1 (1991), pp. 1-30
    ISSN: 1572-8641
    Keywords: Artificial intelligence; content; cognitive science; mind-body problem; representation; semantic; syntax
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science, Philosophy
    Notes: Abstract: I argue that there are no mental representations, in the sense of “representation” used in standard computational theories of the mind. I take Cummins' Meaning and Mental Representation as my stalking-horse, and argue that his view, once properly developed, is self-defeating. The argument implicitly undermines Fodor's view of the mind; I draw that conclusion out explicitly. The idea of mental representations can then only be saved by appeal to a Dennett-like instrumentalism; so I argue against that too. Finally, I argue that there is no good metaphysical reason in favour of believing in mental representations and that cognitive science can manage perfectly well without them.
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Minds and Machines 1 (1991), pp. 259-277
    ISSN: 1572-8641
    Keywords: Artificial intelligence; ethics; intelligence; anthropocentrism
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science, Philosophy
    Notes: Abstract: Intuitive conceptions guide practice, but practice reciprocally reshapes intuition. The intuitive conception of intelligence in AI was originally highly anthropocentric. However, the internal dynamics of AI research have resulted in a divergence from anthropocentric concerns. In particular, the increasing emphasis on commonsense knowledge and peripheral intelligence (perception and movement) in effect constitutes an incipient reorientation of intuitions about the nature of intelligence in a non-anthropocentric direction. I argue that this conceptual shift undermines Joseph Weizenbaum's claim that the project of artificial intelligence is inherently dehumanizing.
    Type of Medium: Electronic Resource