ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • lazy learning  (12)
Collection
  • Article  (12)
Publisher
  • Springer  (12)
  • American Chemical Society
  • American Institute of Physics (AIP)
  • Cambridge University Press
  • Emerald
Publication period
  • 2010-2014
  • 2005-2009
  • 2000-2004
  • 1995-1999  (12)
  • 1935-1939
Year
  • 1997  (12)
Subject
  • Computer Science  (12)
  • Classical Archaeology
  • History
  • Mechanical Engineering
  • 1
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 193-225
    ISSN: 1573-7462
Keyword(s): lazy learning ; model selection ; cross validation ; optimization ; attribute selection
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: Given a set of models and some training data, we would like to find the model that best describes the data. Finding the model with the lowest generalization error is a computationally expensive process, especially if the number of testing points is high or if the number of models is large. Optimization techniques such as hill climbing or genetic algorithms are helpful but can end up with a model that is arbitrarily worse than the best one, or cannot be used because there is no distance metric on the space of discrete models. In this paper we develop a technique called “racing” that tests the set of models in parallel, quickly discards those models that are clearly inferior, and concentrates the computational effort on differentiating among the better models. Racing is especially suitable for selecting among lazy learners since training requires negligible expense, and incremental testing using leave-one-out cross validation is efficient. We use racing to select among various lazy learning algorithms and to find relevant features in applications ranging from robot juggling to lesion detection in MRI scans.
Material type: Digital media
Location  Call number  Expected  Availability
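The abstract describes the racing procedure concretely enough to sketch. A minimal sketch, assuming a Hoeffding-style elimination bound and 1-d k-NN regressors as the competing lazy learners; all names, constants, and the toy data are illustrative:

```python
import numpy as np

def race(models, X, y, delta=0.05, err_range=1.0):
    """Racing sketch: evaluate all candidates on one leave-one-out point at
    a time and discard any model whose error interval is clearly worse than
    the current leader's. The Hoeffding-style bound and the assumed error
    range err_range are simplifications for this sketch."""
    alive = list(range(len(models)))
    errors = {m: [] for m in alive}
    n = len(X)
    for i in range(n):
        mask = np.arange(n) != i            # leave point i out
        for m in list(alive):
            pred = models[m](X[mask], y[mask], X[i])
            errors[m].append((pred - y[i]) ** 2)
        t = i + 1
        bound = err_range * np.sqrt(np.log(2 / delta) / (2 * t))
        means = {m: np.mean(errors[m]) for m in alive}
        best = min(means.values())
        # keep a model only if its optimistic estimate could still beat
        # the pessimistic estimate of the current best
        alive = [m for m in alive if means[m] - bound <= best + bound]
        if len(alive) == 1:
            break
    return alive

def knn_model(k):
    """A lazy learner: 1-d k-NN regression; 'training' just stores the data."""
    def predict(Xtr, ytr, xq):
        d = np.abs(Xtr - xq)
        return ytr[np.argsort(d)[:k]].mean()
    return predict

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 200)
y = np.sin(6 * X) + rng.normal(0, 0.1, 200)
survivors = race([knn_model(k) for k in (1, 3, 5, 9, 15)], X, y)
print("surviving model indices:", survivors)
```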
  • 2
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 75-113
    ISSN: 1573-7462
Keyword(s): locally weighted regression ; LOESS ; LWR ; lazy learning ; memory-based learning ; least commitment learning ; forward models ; inverse models ; linear quadratic regulation (LQR) ; shifting setpoint algorithm ; dynamic programming
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control.
Material type: Digital media
Location  Call number  Expected  Availability
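As a rough illustration of the memory-based control setting this survey covers, here is a sketch of a nearest-neighbor forward model driving one-step greedy control. The toy linear plant and all constants are assumptions for the sketch, not the paper's tasks:

```python
import numpy as np

class NNForwardModel:
    """Lazy forward model: store (state, action) -> next_state experiences
    and predict by nearest-neighbor lookup over the stored memory."""
    def __init__(self):
        self.inputs, self.outputs = [], []

    def remember(self, state, action, next_state):
        self.inputs.append([state, action])
        self.outputs.append(next_state)

    def predict(self, state, action):
        q = np.array([state, action])
        d = np.linalg.norm(np.array(self.inputs) - q, axis=1)
        return self.outputs[int(np.argmin(d))]

    def choose_action(self, state, goal, candidates):
        # greedy one-step control: pick the action whose predicted
        # next state lands closest to the goal
        return min(candidates, key=lambda a: abs(self.predict(state, a) - goal))

# toy linear plant x' = 0.9*x + 0.5*u, unknown to the learner (an assumption)
model, x, goal = NNForwardModel(), 0.0, 1.0
rng = np.random.default_rng(1)
for _ in range(200):                        # exploration phase
    u = rng.uniform(-1, 1)
    x_next = 0.9 * x + 0.5 * u
    model.remember(x, u, x_next)
    x = x_next
x = 0.0
for _ in range(20):                         # control phase
    u = model.choose_action(x, goal, np.linspace(-1, 1, 21))
    x = 0.9 * x + 0.5 * u
print(f"final state {x:.3f} (goal {goal})")
```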
  • 3
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 133-155
    ISSN: 1573-7462
Keyword(s): lazy learning ; non-stationary function ; concept drift ; nearest neighbor learning ; time-varying functions
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: In their unmodified form, lazy-learning algorithms may have difficulty learning and tracking time-varying input/output function maps such as those that occur in concept shift. Extensions of these algorithms, such as Time-Windowed forgetting (TWF), can permit learning of time-varying mappings by deleting older exemplars, but have decreased classification accuracy when the input-space sampling distribution of the learning set is time-varying. Additionally, TWF suffers from lower asymptotic classification accuracy than equivalent non-forgetting algorithms when the input sampling distributions are stationary. Other shift-sensitive algorithms, such as Locally-Weighted forgetting (LWF), avoid the negative effects of time-varying sampling distributions, but still have lower asymptotic classification accuracy in non-varying cases. We introduce Prediction Error Context Switching (PECS), which allows lazy-learning algorithms to achieve good classification accuracy under a time-varying function mapping and time-varying input sampling distributions, while still maintaining their asymptotic classification accuracy in static tasks. PECS works by selecting and re-activating previously stored instances based on their most recent consistency record. The classification accuracy and active learning set sizes for the above algorithms are compared in a set of learning tasks that illustrate the differing time-varying conditions described above. The results show that the PECS algorithm has the best overall classification accuracy across these differing time-varying conditions, while retaining asymptotic classification accuracy competitive with unmodified lazy learners intended for static environments.
Material type: Digital media
Location  Call number  Expected  Availability
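The abstract gives enough of PECS's core mechanism to approximate: keep every instance, but let only instances with a good recent consistency record participate in classification. A sketch, assuming a fixed consistency window and activation threshold (the paper's actual selection rule may differ):

```python
import numpy as np
from collections import deque

class PECSLikeLearner:
    """PECS-like sketch: instances are never deleted, but an instance votes
    in 1-NN prediction only while its recent consistency record is good, so
    instances invalidated by concept shift go dormant and can re-activate."""
    def __init__(self, window=5, threshold=0.6):
        self.X, self.y = [], []
        self.history = []            # per-instance recent correctness record
        self.window, self.threshold = window, threshold

    def _active(self, i):
        h = self.history[i]
        return len(h) == 0 or np.mean(h) >= self.threshold

    def predict(self, x):
        idx = [i for i in range(len(self.X)) if self._active(i)]
        if not idx:
            idx = range(len(self.X))
        j = min(idx, key=lambda i: abs(self.X[i] - x))
        return self.y[j], j

    def learn(self, x, label):
        if self.X:
            pred, j = self.predict(x)
            # record whether the predicting instance was consistent
            self.history[j].append(1.0 if pred == label else 0.0)
        self.X.append(x)
        self.y.append(label)
        self.history.append(deque(maxlen=self.window))

# concept shift: the class boundary on [0, 1] jumps from 0.5 to 0.3
learner, correct = PECSLikeLearner(), 0
rng = np.random.default_rng(2)
for t in range(1000):
    x = rng.uniform(0, 1)
    boundary = 0.5 if t < 500 else 0.3
    label = int(x > boundary)
    if t >= 500:
        correct += int(learner.predict(x)[0] == label)
    learner.learn(x, label)
print(f"accuracy after the shift: {correct / 500:.2f}")
```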
  • 4
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 11-73
    ISSN: 1573-7462
Keyword(s): locally weighted regression ; LOESS ; LWR ; lazy learning ; memory-based learning ; least commitment learning ; distance functions ; smoothing parameters ; weighting functions ; global tuning ; local tuning ; interference
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
Material type: Digital media
Location  Call number  Expected  Availability
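The core computation this survey covers, locally weighted linear regression, fits in a few lines. A minimal sketch, assuming a Gaussian kernel and a fixed bandwidth (both are choices the survey discusses rather than fixed prescriptions):

```python
import numpy as np

def lwr(X, y, xq, bandwidth=0.2):
    """Locally weighted linear regression: weight each stored example by a
    Gaussian kernel of its distance to the query xq, then solve a weighted
    least-squares fit of a local linear model."""
    w = np.exp(-((X - xq) ** 2) / (2 * bandwidth ** 2))   # kernel weights
    A = np.column_stack([np.ones_like(X), X])             # design matrix [1, x]
    sw = np.sqrt(w)
    # weighted least squares: minimize sum_i w_i (y_i - a - b x_i)^2
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[0] + beta[1] * xq

rng = np.random.default_rng(3)
X = rng.uniform(0, 2 * np.pi, 300)
y = np.sin(X) + rng.normal(0, 0.1, 300)
for xq in (0.5, np.pi / 2, 3.0):
    print(f"f({xq:.2f}) ~ {lwr(X, y, xq):.3f} (true {np.sin(xq):.3f})")
```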
  • 5
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 157-174
    ISSN: 1573-7462
Keyword(s): lazy learning ; discretisation ; bias ; axis-orthogonal representation ; empirical evaluation
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: This paper adopts the idea of discretising continuous attributes (Fayyad and Irani 1993) and applies it to lazy learning algorithms (Aha 1990; Aha, Kibler and Albert 1991). This approach converts continuous attributes into nominal attributes at the outset. We investigate the effects of this approach on the performance of lazy learning algorithms and examine it empirically using both real-world and artificial data to characterise the benefits of discretisation in lazy learning algorithms. Specifically, we show that discretisation achieves an effect of noise reduction and increases lazy learning algorithms' tolerance of irrelevant continuous attributes. The proposed approach constrains the representation space in lazy learning algorithms to hyper-rectangular regions that are orthogonal to the attribute axes. The generally better results obtained using a more restricted representation language indicate that employing a powerful representation language in a learning algorithm is not always the best choice, as it can lead to a loss of accuracy.
Material type: Digital media
Location  Call number  Expected  Availability
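A small sketch of the approach the abstract describes: discretize continuous attributes up front, then run a lazy learner on the nominal data. Equal-width binning is used here only for brevity; the paper applies entropy-based discretization (Fayyad and Irani 1993), and the toy data are assumptions:

```python
import numpy as np

def discretize(col, edges):
    """Map a continuous column to nominal bin indices."""
    return np.searchsorted(edges, col)

def one_nn_nominal(Xtr, ytr, Xte):
    """1-NN with Hamming distance on nominal (discretized) attributes."""
    preds = []
    for row in Xte:
        d = (Xtr != row).sum(axis=1)        # count of mismatching attributes
        preds.append(ytr[np.argmin(d)])
    return np.array(preds)

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (400, 3))
X[:, 2] = rng.uniform(0, 1, 400)            # an irrelevant attribute
y = (X[:, 0] + X[:, 1] > 1).astype(int)
# equal-width bins per attribute (a simplification of the paper's method)
edges = [np.linspace(0, 1, 6)[1:-1] for _ in range(3)]
Xd = np.column_stack([discretize(X[:, j], edges[j]) for j in range(3)])
tr, te = slice(0, 300), slice(300, 400)
acc = (one_nn_nominal(Xd[tr], y[tr], Xd[te]) == y[te]).mean()
print(f"1-NN accuracy on discretized attributes: {acc:.2f}")
```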
  • 6
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 371-405
    ISSN: 1573-7462
Keyword(s): speedup learning ; nonlinear planning ; lazy learning ; multistrategy learning ; learning to improve plan quality
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: General-purpose generative planners use domain-independent search heuristics to generate solutions for problems in a variety of domains. However, in some situations these heuristics force the planner to perform inefficiently or obtain solutions of poor quality. Learning from experience can help to identify the particular situations for which the domain-independent heuristics need to be overridden. Most of the past learning approaches are fully deductive and eagerly acquire correct control knowledge from a necessarily complete domain theory and a few examples to focus their scope. These learning strategies are hard to generalize in the case of nonlinear planning, where it is difficult to capture correct explanations of the interactions among goals, multiple planning operator choices, and situational data. In this article, we present a lazy learning method that combines a deductive and an inductive strategy to efficiently learn control knowledge incrementally with experience. We present hamlet, a system we developed that learns control knowledge to improve both search efficiency and the quality of the solutions generated by a nonlinear planner, namely prodigy4.0. We have identified three lazy aspects of our approach from which we believe hamlet greatly benefits: lazy explanation of successes, incremental refinement of acquired knowledge, and lazy learning to override only the default behavior of the problem solver. We show empirical results that support the effectiveness of this overall lazy learning approach, in terms of improving the efficiency of the problem solver and the quality of the solutions produced.
Material type: Digital media
Location  Call number  Expected  Availability
  • 7
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 407-423
    ISSN: 1573-7462
Keyword(s): lazy learning ; eager learning ; decision trees ; information gain ; data compression ; instance base indexing
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: We describe the IGTree learning algorithm, which compresses an instance base into a tree structure. The concept of information gain is used as a heuristic function for performing this compression. IGTree produces trees that, compared to other lazy learning approaches, reduce storage requirements and the time required to compute classifications. Furthermore, we obtained similar or better generalization accuracy with IGTree when trained on two complex linguistic tasks, viz. letter–phoneme transliteration and part-of-speech tagging, when compared to alternative lazy learning and decision tree approaches (viz., IB1, information-gain-weighted IB1, and C4.5). A third experiment, with the task of word hyphenation, demonstrates that when the mutual differences in information gain between features are too small, both IGTree and information-gain-weighted IB1 perform worse than IB1. These results indicate that IGTree is a useful algorithm for problems characterized by the availability of a large number of training instances described by symbolic features with sufficiently differing information gain values.
Material type: Digital media
Location  Call number  Expected  Availability
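The abstract describes IGTree's compression scheme concretely enough to sketch: expand features in a fixed order of information gain, store a default class at every node, and stop expanding at homogeneous nodes. A sketch on toy data; the data and the stopping details are assumptions:

```python
import math
from collections import Counter

def info_gain(instances, f):
    """Information gain of feature index f over a list of (features, label)."""
    def entropy(items):
        n = len(items)
        return -sum(c / n * math.log2(c / n) for c in Counter(items).values())
    labels = [lab for _, lab in instances]
    gain = entropy(labels)
    for v in set(feats[f] for feats, _ in instances):
        sub = [lab for feats, lab in instances if feats[f] == v]
        gain -= len(sub) / len(instances) * entropy(sub)
    return gain

def build_igtree(instances, order):
    """IGTree-style compression: split on features in the given gain order,
    keep a default class at every node, stop when a node is homogeneous."""
    labels = [lab for _, lab in instances]
    node = {"default": Counter(labels).most_common(1)[0][0], "children": {}}
    if len(set(labels)) == 1 or not order:
        return node
    f, rest = order[0], order[1:]
    node["feature"] = f
    for v in set(feats[f] for feats, _ in instances):
        sub = [(feats, lab) for feats, lab in instances if feats[f] == v]
        node["children"][v] = build_igtree(sub, rest)
    return node

def classify(node, feats):
    f = node.get("feature")
    if f is None or feats[f] not in node["children"]:
        return node["default"]              # fall back on the stored default
    return classify(node["children"][feats[f]], feats)

# tiny symbolic task (hypothetical data)
data = [(("a", "x"), 0), (("a", "y"), 1), (("b", "x"), 1), (("b", "y"), 1)]
order = sorted(range(2), key=lambda f: -info_gain(data, f))
tree = build_igtree(data, order)
print(classify(tree, ("a", "y")), classify(tree, ("c", "x")))  # 1, then default 1
```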
  • 8
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 115-132
    ISSN: 1573-7462
Keyword(s): lazy learning ; nonparametric estimation ; k-nearest neighbor ; condensed nearest neighbor ; voting
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: Lazy learning methods like the k-nearest neighbor classifier require storing the whole training set and may be too costly when this set is large. The condensed nearest neighbor classifier incrementally stores a subset of the sample, thus decreasing storage and computation requirements. We propose to train multiple such subsets and take a vote over them, thus combining predictions from a set of concept descriptions. We investigate two voting schemes: simple voting, where voters have equal weight, and weighted voting, where weights depend on classifiers' confidences in their predictions. We consider ways to form such subsets for improved performance: when the training set is small, voting improves performance considerably. If the training set is not small, then voters converge to similar solutions and we do not gain anything by voting. To alleviate this, when the training set is of intermediate size, we use bootstrapping to generate smaller training sets over which we train the voters. When the training set is large, we partition it into smaller, mutually exclusive subsets and then train the voters. Simulation results on six datasets show good performance. We give a review of methods for combining multiple learners. The idea of taking a vote over multiple learners can be applied with any type of learning scheme.
Material type: Digital media
Location  Call number  Expected  Availability
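A sketch of the two ingredients the abstract combines: a condensed-nearest-neighbor rule (a single-pass simplification of Hart's method) and equal-weight voting over subsets trained on bootstrap samples. The toy problem and all constants are illustrative:

```python
import numpy as np

def condense(X, y, rng):
    """Single-pass condensed nearest neighbor: keep only instances that the
    subset built so far misclassifies."""
    order = rng.permutation(len(X))
    keep = [order[0]]
    for i in order[1:]:
        j = keep[int(np.argmin(np.linalg.norm(X[keep] - X[i], axis=1)))]
        if y[j] != y[i]:
            keep.append(i)
    return np.array(keep)

def vote_predict(subsets, X, y, xq):
    """Simple (equal-weight) voting over several condensed subsets."""
    votes = []
    for keep in subsets:
        j = keep[int(np.argmin(np.linalg.norm(X[keep] - xq, axis=1)))]
        votes.append(y[j])
    return np.bincount(votes).argmax()

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # XOR-like toy problem
# train each voter on a bootstrap sample, then condense it
subsets = []
for _ in range(7):
    idx = rng.choice(len(X), size=150, replace=True)
    keep = condense(X[idx], y[idx], rng)
    subsets.append(idx[keep])
xq = np.array([0.4, 0.6])
print("vote:", vote_predict(subsets, X, y, xq))  # expect class 1
```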
  • 9
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 227-253
    ISSN: 1573-7462
Keyword(s): lazy learning ; feature selection ; nearest neighbor ; induction ; machine learning
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: High sensitivity to irrelevant features is arguably the main shortcoming of simple lazy learners. In response to it, many feature selection methods have been proposed, including forward sequential selection (FSS) and backward sequential selection (BSS). Although they often produce substantial improvements in accuracy, these methods select the same set of relevant features everywhere in the instance space, and thus represent only a partial solution to the problem. In general, some features will be relevant only in some parts of the space; deleting them may hurt accuracy in those parts, but selecting them will have the same effect in parts where they are irrelevant. This article introduces RC, a new feature selection algorithm that uses a clustering-like approach to select sets of locally relevant features (i.e., the features it selects may vary from one instance to another). Experiments in a large number of domains from the UCI repository show that RC almost always improves accuracy with respect to FSS and BSS, often with high significance. A study using artificial domains confirms the hypothesis that this difference in performance is due to RC's context sensitivity, and also suggests conditions where this sensitivity will and will not be an advantage. Another feature of RC is that it is faster than FSS and BSS, often by an order of magnitude or more.
Material type: Digital media
Location  Call number  Expected  Availability
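RC's clustering-like local selection is not specified in the abstract, but forward sequential selection (FSS), the baseline it is compared against, is. A minimal FSS sketch using leave-one-out 1-NN accuracy as the selection criterion; the toy data are assumptions:

```python
import numpy as np

def loo_accuracy(X, y, features):
    """Leave-one-out 1-NN accuracy using only the given feature subset."""
    if not features:
        return 0.0
    Z = X[:, features]
    correct = 0
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                        # exclude the held-out point
        correct += int(y[int(np.argmin(d))] == y[i])
    return correct / len(Z)

def forward_sequential_selection(X, y):
    """FSS sketch: greedily add the feature that most improves leave-one-out
    accuracy, stopping when no addition helps. Note this selects one global
    feature set; RC, by contrast, selects features locally per instance."""
    selected, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        scores = {f: loo_accuracy(X, y, selected + [f]) for f in remaining}
        f, score = max(scores.items(), key=lambda kv: kv[1])
        if score <= best:
            break
        selected.append(f)
        remaining.remove(f)
        best = score
    return selected, best

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(int)      # only features 0 and 3 matter
sel, acc = forward_sequential_selection(X, y)
print(f"selected features {sel}, LOO accuracy {acc:.2f}")
```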
  • 10
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 255-272
    ISSN: 1573-7462
Keyword(s): nearest neighbor learning algorithms ; lazy learning ; attribute weight setting ; theoretical analyses
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: Nearest neighbor (NN) learning algorithms, examples of the lazy learning paradigm, rely on a distance function to measure the similarity of testing examples with the stored training examples. Since certain attributes are more discriminative, while others can be less or totally irrelevant, attributes should be weighted differently in the distance function. Most previous studies on weight setting for NN learning algorithms are empirical. In this paper we describe our attempt at deriving theoretically optimal weights that minimize the predictive error for NN algorithms. Assuming a uniform distribution of examples in a 2-d continuous space, we first derive the average predictive error introduced by a linear classification boundary, and then determine the optimal weight setting for any polygonal classification region. Our theoretical results on optimal attribute weights can serve as a baseline or lower bound for comparing other empirical weight-setting methods.
Material type: Digital media
Location  Call number  Expected  Availability
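The object of study here is the weighted distance function itself. A small sketch showing how per-attribute weights change 1-NN accuracy when one attribute is irrelevant; the weights below are illustrative, not the paper's derived optima:

```python
import numpy as np

def weighted_nn(Xtr, ytr, xq, w):
    """1-NN under the weighted Euclidean distance
    d(x, q) = sqrt(sum_i w_i * (x_i - q_i)^2)."""
    d = np.sqrt(((Xtr - xq) ** 2 * w).sum(axis=1))
    return ytr[int(np.argmin(d))]

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, (300, 2))
y = (X[:, 0] > 0.5).astype(int)              # only attribute 0 is relevant
test = rng.uniform(0, 1, (200, 2))
truth = (test[:, 0] > 0.5).astype(int)
for w in (np.array([1.0, 1.0]), np.array([1.0, 0.0])):
    acc = np.mean([weighted_nn(X, y, q, w) == t for q, t in zip(test, truth)])
    print(f"weights {w}: accuracy {acc:.2f}")   # down-weighting noise helps
```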
  • 11
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 343-370
    ISSN: 1573-7462
Keyword(s): lazy learning ; nearest neighbor ; genetic algorithms ; differential games ; pursuit games ; teaching ; reinforcement learning
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: Combining different machine learning algorithms in the same system can produce benefits above and beyond what either method could achieve alone. This paper demonstrates that genetic algorithms can be used in conjunction with lazy learning to solve examples of a difficult class of delayed reinforcement learning problems better than either method alone. This class, the class of differential games, includes numerous important control problems that arise in robotics, planning, game playing, and other areas, and solutions for differential games suggest solution strategies for the general class of planning and control problems. We conducted a series of experiments applying three learning approaches – lazy Q-learning, k-nearest neighbor (k-NN), and a genetic algorithm – to a particular differential game called a pursuit game. Our experiments demonstrate that k-NN had great difficulty solving the problem, while a lazy version of Q-learning performed moderately well and the genetic algorithm performed even better. These results motivated the next step in the experiments, where we hypothesized that k-NN was having difficulty because it did not have good examples – a common source of difficulty for lazy learning. Therefore, we used the genetic algorithm as a bootstrapping method for k-NN to create a system to provide these examples. Our experiments demonstrate that the resulting joint system learned to solve the pursuit games with a high degree of accuracy – outperforming either method alone – and with relatively small memory requirements.
Material type: Digital media
Location  Call number  Expected  Availability
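A rough sketch of the lazy Q-learning ingredient the abstract describes: store (state, action, value) experiences and estimate Q(s, a) by averaging the k nearest stored cases. The 1-d chase toy task and all constants are assumptions; the paper's domain is a two-player pursuit game, and it additionally uses a genetic algorithm to bootstrap the case base:

```python
import numpy as np

class LazyQ:
    """Memory-based Q-learning sketch: no parametric value function, just a
    growing memory of [state, action, q] cases queried by k-NN averaging."""
    def __init__(self, k=3, alpha=0.5, gamma=0.9):
        self.mem = []                        # list of [state, action, q]
        self.k, self.alpha, self.gamma = k, alpha, gamma

    def q(self, s, a):
        if not self.mem:
            return 0.0
        m = np.array(self.mem)
        d = np.abs(m[:, 0] - s) + np.abs(m[:, 1] - a)
        return m[np.argsort(d)[:self.k], 2].mean()

    def best(self, s, actions):
        return max(actions, key=lambda a: self.q(s, a))

    def update(self, s, a, reward, s2, actions):
        # one-step Q backup, stored as a new case rather than a table entry
        target = reward + self.gamma * max(self.q(s2, b) for b in actions)
        old = self.q(s, a)
        self.mem.append([s, a, old + self.alpha * (target - old)])

actions = (-1.0, 0.0, 1.0)
agent, rng = LazyQ(), np.random.default_rng(8)
for episode in range(200):
    s = rng.uniform(-5, 5)                   # relative position to the target
    for _ in range(30):
        a = rng.choice(actions) if rng.random() < 0.2 else agent.best(s, actions)
        s2 = s + a
        reward = 1.0 if abs(s2) < 0.5 else -0.1
        agent.update(s, a, reward, s2, actions)
        s = s2
print("Q(4, -1) vs Q(4, +1):", round(agent.q(4, -1), 2), round(agent.q(4, 1), 2))
```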
  • 12
Digital media
Springer
Artificial intelligence review 11 (1997), pp. 273-314
    ISSN: 1573-7462
Keyword(s): lazy learning ; k-nearest neighbor ; feature weights ; comparison
Source: Springer Online Journal Archives 1860-2000
Subject: Computer Science
Abstract: Many lazy learning algorithms are derivatives of the k-nearest neighbor (k-NN) classifier, which uses a distance function to generate predictions from stored instances. Several studies have shown that k-NN's performance is highly sensitive to the definition of its distance function. Many k-NN variants have been proposed to reduce this sensitivity by parameterizing the distance function with feature weights. However, these variants have been neither categorized nor empirically compared. This paper reviews a class of weight-setting methods for lazy learning algorithms. We introduce a framework for distinguishing these methods and empirically compare them. We observed four trends from our experiments and conducted further studies to highlight them. Our results suggest that methods which use performance feedback to assign weight settings demonstrated three advantages over other methods: they require less pre-processing, perform better in the presence of interacting features, and generally require less training data to learn good settings. We also found that continuous weighting methods tend to outperform feature selection algorithms for tasks where some features are useful but less important than others.
Material type: Digital media
Location  Call number  Expected  Availability
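As a sketch of the performance-feedback weighting family the abstract favors, here is a simple hill climber that perturbs the weight vector and keeps a change only when leave-one-out accuracy does not drop. It illustrates the feedback idea, not any specific method from the paper's survey:

```python
import numpy as np

def loo_accuracy(X, y, w):
    """Leave-one-out 1-NN accuracy under a weighted Euclidean distance."""
    correct = 0
    for i in range(len(X)):
        d = ((X - X[i]) ** 2 * w).sum(axis=1)
        d[i] = np.inf
        correct += int(y[int(np.argmin(d))] == y[i])
    return correct / len(X)

def feedback_weights(X, y, steps=60, rng=None):
    """Performance-feedback weight setting, sketched as random-restart-free
    hill climbing: propose a tweak, keep it if feedback (LOO accuracy) says
    it is no worse. Step size and step count are assumptions."""
    rng = rng or np.random.default_rng()
    w = np.ones(X.shape[1])
    best = loo_accuracy(X, y, w)
    for _ in range(steps):
        cand = np.clip(w + rng.normal(0, 0.3, size=w.shape), 0, None)
        acc = loo_accuracy(X, y, cand)
        if acc >= best:
            w, best = cand, acc
    return w, best

rng = np.random.default_rng(9)
X = rng.normal(size=(150, 4))
y = (X[:, 0] > 0).astype(int)                # features 1-3 are noise
w, acc = feedback_weights(X, y, rng=rng)
print("learned weights:", np.round(w, 2), f"LOO accuracy {acc:.2f}")
```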