ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Monograph available for loan
    Amsterdam [etc.] : Morgan Kaufmann
    Call number: M 09.0096
    Description / Table of Contents: Contents: Preface 1. What's it all about? 2. Input: Concepts, instances, attributes 3. Output: Knowledge representation 4. Algorithms: The basic methods 5. Credibility: Evaluating what's been learned 6. Implementations: Real machine learning schemes 7. Transformations: Engineering the input and output 8. Moving on: Extensions and applications Part II: The Weka machine learning workbench 9. Introduction to Weka 10. The Explorer 11. The Knowledge Flow interface 12. The Experimenter 13. The command-line interface 14. Embedded machine learning 15. Writing new learning schemes
    Type of Medium: Monograph available for loan
    Pages: xxxi, 525 p., ill., 24 cm
    Edition: 2nd ed.
    ISBN: 0120884070
    Series Statement: Morgan Kaufmann series in data management systems
    Classification: Informatics
    Location: Upper compact magazine
    Branch Library: GFZ Library
  • 2
    Monograph available for loan
    Amsterdam [etc.] : Morgan Kaufmann
    Call number: M 15.0104
    Description / Table of Contents: Data Mining: Practical Machine Learning Tools and Techniques, Third Edition, offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. The book is targeted at information systems practitioners, programmers, consultants, developers, information technology managers, specification writers, data analysts, data modelers, database R&D professionals, data warehouse engineers, and data mining professionals. The book will also be useful for professors and students of upper-level undergraduate and graduate-level data mining and machine learning courses who want to incorporate data mining as part of their data management knowledge base and expertise.
    Type of Medium: Monograph available for loan
    Pages: xxxiii, 629 pages, illustrations, diagrams, 24 cm
    Edition: 3rd ed.
    ISBN: 978-0-12-374856-0
    Series Statement: Morgan Kaufmann series in data management systems
    Classification: Informatics
    Language: English
    Note: Part I: Introduction to Data Mining Chapter 1 - What's It All About? Chapter 2 - Input: Concepts, Instances, and Attributes Chapter 3 - Output: Knowledge Representation Chapter 4 - Algorithms: The Basic Methods Chapter 5 - Credibility: Evaluating What's Been Learned Part II: Advanced Data Mining Chapter 6 - Implementations: Real Machine Learning Schemes Chapter 7 - Data Transformations Chapter 8 - Ensemble Learning Chapter 9 - Moving on: Applications and Beyond Part III: The Weka Data Mining Workbench Chapter 10 - Introduction to Weka Chapter 11 - The Explorer Chapter 12 - The Knowledge Flow Interface Chapter 13 - The Experimenter Chapter 14 - The Command-Line Interface Chapter 15 - Embedded Machine Learning Chapter 16 - Writing New Learning Schemes Chapter 17 - Tutorial Exercises for the Weka Explorer
    Location: Upper compact magazine
    Branch Library: GFZ Library
  • 3
    Electronic Resource
    Springer
    Machine learning 16 (1994), pp. 203-225
    ISSN: 0885-6125
    Keywords: Inductive logic programming ; data compression ; minimum description length principle ; model complexity ; learning from positive-only examples ; theory preference criterion
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract A central problem in inductive logic programming is theory evaluation. Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories. (A generic two-part MDL sketch of this kind of preference criterion follows the result list below.)
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    Springer
    Machine learning 16 (1994), pp. 203-225
    ISSN: 0885-6125
    Keywords: Inductive logic programming ; data compression ; minimum description length principle ; model complexity ; learning from positive-only examples ; theory preference criterion
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract A central problem in inductive logic programming is theory evaluation. Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.
    Type of Medium: Electronic Resource
  • 5
    Electronic Resource
    Springer
    Machine learning 41 (2000), pp. 5-25
    ISSN: 0885-6125
    Keywords: naive Bayes ; regression ; model trees ; linear regression ; locally weighted regression
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Despite its simplicity, the naive Bayes learning scheme performs well on most classification tasks, and is often significantly more accurate than more sophisticated methods. Although the probability estimates that it produces can be inaccurate, it often assigns maximum probability to the correct class. This suggests that its good performance might be restricted to situations where the output is categorical. It is therefore interesting to see how it performs in domains where the predicted value is numeric, because in this case, predictions are more sensitive to inaccurate probability estimates. This paper shows how to apply the naive Bayes methodology to numeric prediction (i.e., regression) tasks by modeling the probability distribution of the target value with kernel density estimators, and compares it to linear regression, locally weighted linear regression, and a method that produces “model trees”—decision trees with linear regression functions at the leaves. Although we exhibit an artificial dataset for which naive Bayes is the method of choice, on real-world datasets it is almost uniformly worse than locally weighted linear regression and model trees. The comparison with linear regression depends on the error measure: for one measure naive Bayes performs similarly, while for another it is worse. We also show that standard naive Bayes applied to regression problems by discretizing the target value performs similarly badly. We then present empirical evidence that isolates naive Bayes' independence assumption as the culprit for its poor performance in the regression setting. These results indicate that the simplistic statistical assumption that naive Bayes makes is indeed more restrictive for regression than for classification. (A minimal sketch of the kernel-density approach follows the result list below.)
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Springer
    Machine learning 32 (1998), pp. 63-76
    ISSN: 0885-6125
    Keywords: Model trees ; classification algorithms ; M5 ; C5.0 ; decision trees
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Model trees, which are a type of decision tree with linear regression functions at the leaves, form the basis of a recent successful technique for predicting continuous numeric values. They can be applied to classification problems by employing a standard method of transforming a classification problem into a problem of function approximation. Surprisingly, using this simple transformation the model tree inducer M5′, based on Quinlan's M5, generates more accurate classifiers than the state-of-the-art decision tree learner C5.0, particularly when most of the attributes are numeric. (A sketch of the class-indicator transformation follows the result list below.)
    Type of Medium: Electronic Resource
  • 7
    Electronic Resource
    Springer
    Multimedia tools and applications 10 (2000), pp. 113-132
    ISSN: 1573-7721
    Keywords: music retrieval ; melody recall ; acoustic interfaces ; relevance ranking
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Musical scores are traditionally retrieved by title, composer or subject classification. Just as multimedia computer systems increase the range of opportunities available for presenting musical information, so they also offer new ways of posing musically-oriented queries. This paper shows how scores can be retrieved from a database on the basis of a few notes sung or hummed into a microphone. The design of such a facility raises several interesting issues pertaining to music retrieval. We first describe an interface that transcribes acoustic input into standard music notation. We then analyze string matching requirements for ranked retrieval of music and present the results of an experiment which tests how accurately people sing well known melodies. The performance of several string matching criteria is analyzed using two folk song databases. Finally, we describe a prototype system which has been developed for retrieval of tunes from acoustic input and evaluate its performance. (A sketch of contour-based approximate matching follows the result list below.)
    Type of Medium: Electronic Resource
  • 8
    Electronic Resource
    Springer
    AI & society 6 (1992), pp. 166-180
    ISSN: 1435-5655
    Keywords: Attention focusing ; Instructible systems ; Intentional stance ; Machine learning ; Programming by example ; Teaching metaphor
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract It is argued that “human-centredness” will be an important characteristic of systems that learn tasks from human users, as the difficulties in inductive inference rule out learning without human assistance. The aim of “programming by example” is to create systems that learn how to perform tasks from their human users by being shown examples of what is to be done. Just as the user creates a learning environment for the system, so the system provides a teaching opportunity for the user, and emphasis is placed as much on facilitating successful teaching as on incorporating techniques of machine learning. If systems can “learn” repetitive tasks, their users will have the power to decide for themselves which parts of their jobs should be automated, and teach the system how to do them — reducing their dependence on intermediaries such as system designers and programmers. This paper presents principles for programming by example derived from experience in creating four prototype learners: for technical drawing, text editing, office tasks, and robot assembly. A teaching metaphor (a) enables the user to demonstrate a task by performing it manually, (b) helps to explain the learner's limited capabilities in terms of a persona, and (c) allows users to attribute intentionality. Tasks are represented procedurally, and augmented with constraints. Suitable mechanisms for attention focusing are necessary in order to control inductive search. Hidden features of a task should be made explicit so that the learner need not embark on the huge search entailed by hypothesizing missing steps.
    Type of Medium: Electronic Resource
  • 9
    Publication Date: 1976-01-01
    Print ISSN: 0016-0032
    Electronic ISSN: 1879-2693
    Topics: Mathematics , Technology
    Published by Elsevier
  • 10
    Publication Date: 1999-02-01
    Print ISSN: 0219-1377
    Electronic ISSN: 0219-3116
    Topics: Computer Science
    Published by Springer
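The theory-preference criterion in entries 3 and 4 builds on the minimum description length principle: prefer the theory that minimizes the bits needed to state it plus the bits needed to encode the examples it leaves unexplained. The sketch below is a generic two-part MDL score under hypothetical bit costs and a hypothetical example space; it is not the paper's model-complexity measure.

```python
import math

def two_part_mdl(theory_bits, n_examples, n_explained, example_space):
    """Two-part code length: bits to state the theory, plus bits to
    name each unexplained example uniformly from a hypothetical
    example space. Lower is better."""
    unexplained = n_examples - n_explained
    return theory_bits + unexplained * math.log2(example_space)

# Made-up numbers: a compact theory explaining 95 of 100 examples
# beats a verbose one that explains all of them.
compact = two_part_mdl(theory_bits=40.0, n_examples=100,
                       n_explained=95, example_space=2**10)
verbose = two_part_mdl(theory_bits=600.0, n_examples=100,
                       n_explained=100, example_space=2**10)
print(compact, verbose)  # 90.0 600.0 -> prefer the compact theory
```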
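Entry 5 applies naive Bayes to numeric prediction by modeling the distribution of the target value with kernel density estimators. The following is a minimal sketch of that idea under simplifying assumptions (Gaussian kernels, fixed bandwidths, a discrete evaluation grid); it is not the authors' exact estimator.

```python
import numpy as np

def _gauss(u):
    return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

def nb_kde_regress(X, y, x_new, grid, h_y=0.5, h_x=0.5):
    """Score each candidate target t on `grid` with
    p(t) * prod_i p(x_new[i] | t): the prior p(t) is a kernel density
    over training targets, and each conditional is a kernel density of
    attribute i weighted by how close each training target is to t.
    Returns the posterior mean over the grid."""
    w = _gauss((grid[:, None] - y[None, :]) / h_y)    # (grid, train) weights
    prior = w.mean(axis=1) / h_y                      # p(t) on the grid
    w = w / w.sum(axis=1, keepdims=True)
    scores = prior.copy()
    for i in range(X.shape[1]):
        # weighted kernel density of attribute i, evaluated at x_new[i]
        k = _gauss((x_new[i] - X[:, i]) / h_x) / h_x
        scores *= w @ k
    scores /= scores.sum()
    return float(grid @ scores)

# Toy usage: the target is essentially the first attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X[:, 0] + 0.1 * rng.normal(size=200)
grid = np.linspace(y.min(), y.max(), 200)
print(nb_kde_regress(X, y, np.array([1.0, 0.0]), grid))  # close to 1.0
```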
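Entry 6 turns classification into function approximation: train one regressor per class on a 0/1 indicator of class membership, then predict the class whose regressor outputs the largest value. In the sketch below, scikit-learn's DecisionTreeRegressor stands in for the M5′ model-tree inducer; everything else follows the transformation described in the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class IndicatorRegressionClassifier:
    """One regressor per class, trained on a 0/1 class-indicator
    target; prediction picks the class with the highest regressed
    membership value."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = []
        for c in self.classes_:
            m = DecisionTreeRegressor(max_depth=5)
            m.fit(X, (y == c).astype(float))  # indicator target for class c
            self.models_.append(m)
        return self

    def predict(self, X):
        scores = np.column_stack([m.predict(X) for m in self.models_])
        return self.classes_[scores.argmax(axis=1)]

# Toy usage on a synthetic two-class problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = IndicatorRegressionClassifier().fit(X, y)
print((clf.predict(X) == y).mean())  # training accuracy, near 1.0
```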
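Entry 7 ranks tunes by approximate string matching between a sung query and the melodies in a database. One simple criterion of this kind, edit distance over up/down/repeat pitch contours, is sketched below; it illustrates the general approach rather than the paper's exact ranking method.

```python
def contour(pitches):
    """Reduce a pitch sequence to an up/down/repeat contour, which is
    robust to queries sung in the wrong key."""
    return ["U" if b > a else "D" if b < a else "R"
            for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Standard dynamic-programming edit distance (one rolling row)."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (ca != cb))
    return d[-1]

def rank_tunes(query_pitches, database):
    """Rank (title, pitches) pairs by contour distance to the query."""
    q = contour(query_pitches)
    return sorted(database, key=lambda t: edit_distance(q, contour(t[1])))

# Toy usage with MIDI-style pitch numbers.
tunes = [("Tune A", [60, 62, 64, 65, 64, 62, 60]),
         ("Tune B", [60, 60, 67, 67, 69, 69, 67])]
print(rank_tunes([62, 64, 66, 67, 66], tunes)[0][0])  # Tune A ranks first
```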