ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Filter
Collection
  • Articles  (66)
Publisher
  • Springer  (66)
  • Blackwell Publishing Ltd
Years
  • 1995-1999  (64)
  • 1980-1984  (2)
  • 1925-1929
Topics
  • Computer Science  (66)
Keywords
  • simulation  (35)
  • data mining  (31)
  • 1
    Electronic Resource
    Springer
    Computing 33 (1984), S. 247-257 
    ISSN: 1436-5057
    Keywords: Random variate generation ; simulation ; log concavity ; inequalities
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Description / Table of Contents: Abstract (translated from the German): We present a short algorithm for generating random variates with log-concave density f on R with known median. The average number of required operations is independent of f. The log-concave densities include, among others, the normal, gamma, Weibull, beta, exponential power (all with shape parameter at least 1), Perks, and extreme-value distributions.
    Notes: Abstract We present a short algorithm for generating random variates with log-concave density f on R and known mode in an average number of operations independent of f. Included in this class are the normal, gamma, Weibull, beta and exponential power (all with shape parameters at least 1), logistic, hyperbolic secant and extreme value distributions. The algorithm merely requires the presence of a uniform [0, 1] random number generator and a subprogram for computing f. It can be implemented in about 10 lines of FORTRAN code.
    Type of Medium: Electronic Resource
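The abstract above describes a roughly ten-line rejection sampler for log-concave densities. The paper's own algorithm is not reproduced here; as a loose illustration of the same ingredients (a uniform generator plus evaluations of f), the following sketch samples the standard normal, a log-concave density with mode 0, by rejection from a Laplace envelope. The function name and the choice of envelope are this sketch's, not the paper's.

```python
import math
import random

def sample_normal_by_rejection(rng):
    """Sample N(0, 1) by rejection from a Laplace(0, 1) envelope.

    Uses only a uniform generator and evaluations of the target
    density f, with f(x) <= M * g(x) for M = sqrt(2e/pi).
    """
    M = math.sqrt(2.0 * math.e / math.pi)
    while True:
        u = rng.random()
        # Inverse-CDF draw from the Laplace(0, 1) envelope g.
        x = math.log(2.0 * u) if u < 0.5 else -math.log(2.0 * (1.0 - u))
        f = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        g = 0.5 * math.exp(-abs(x))
        # Accept with probability f(x) / (M * g(x)).
        if rng.random() * M * g <= f:
            return x
```

The acceptance rate is 1/M, about 76%, so the expected number of operations per variate is a small constant, in the spirit of the abstract's claim.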
  • 2
    Electronic Resource
    Springer
    Formal methods in system design 6 (1995), S. 11-44 
    ISSN: 1572-8102
    Keywords: abstract interpretation ; simulation ; property preservation ; model-checking
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We study property preserving transformations for reactive systems. The main idea is the use of simulations parameterized by Galois connections (α, γ), relating the lattices of properties of two systems. We propose and study a notion of preservation of properties expressed by formulas of a logic, by a function α mapping sets of states of a system S into sets of states of a system S'. We give results on the preservation of properties expressed in sublanguages of the branching time μ-calculus when two systems S and S' are related via (α, γ)-simulations. They can be used to verify a property for a system by verifying the same property on a simpler system which is an abstraction of it. We show also under which conditions abstraction of concurrent systems can be computed from the abstraction of their components. This allows a compositional application of the proposed verification method. This is a revised version of the papers [2] and [16]; the results are fully developed in [28].
    Type of Medium: Electronic Resource
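The (α, γ) machinery in the abstract above can be made concrete with the textbook example of interval abstraction, where α maps a set of integers to its enclosing interval and γ maps an interval back to the set it denotes. This is a minimal sketch of a Galois connection and a sound abstract operation, not the paper's construction; all names are invented here.

```python
def alpha(s):
    """Abstraction: a non-empty finite set of ints -> its enclosing interval."""
    return (min(s), max(s))

def gamma(iv):
    """Concretization: an interval -> the set of ints it denotes."""
    lo, hi = iv
    return set(range(lo, hi + 1))

def add_abstract(a, b):
    """Abstract addition on intervals (over-approximates concrete sums)."""
    return (a[0] + b[0], a[1] + b[1])
```

The defining Galois property is s ⊆ γ(α(s)) for every set s, and soundness means the set of concrete sums is contained in the concretization of the abstract sum.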
  • 3
    Electronic Resource
    Springer
    Queueing systems 19 (1995), S. 169-192 
    ISSN: 1572-9443
    Keywords: Continuity ; rates of convergence ; robust estimation ; queueing systems ; simulation ; regenerative processes
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Let a and s denote the interarrival times and service times in a GI/GI/1 queue. Let a(n), s(n) be the r.v.s with distributions as the estimated distributions of a and s from iid samples of a and s of sizes n. Let w be a r.v. with the stationary distribution π of the waiting times of the queue with input (a, s). We consider the problem of estimating E[w^α], α > 0, and π via simulations when (a(n), s(n)) are used as input. Conditions for the accuracy of the asymptotic estimate, continuity of the asymptotic variance and uniformity in the rate of convergence to the estimate are obtained. We also obtain rates of convergence for sample moments, the empirical process and the quantile process for the regenerative processes. Robust estimates are also obtained when an outlier-contaminated sample of a and s is provided. In the process we obtain consistency, continuity and asymptotic normality of M-estimators for stationary sequences. Some robustness results for Markov processes are included.
    Type of Medium: Electronic Resource
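The waiting-time quantities studied above are typically simulated with the Lindley recursion. As a minimal sketch (an M/M/1 special case with invented parameters, not the paper's GI/GI/1 estimation setup), the following estimates E[w^α] from a simulated path; for M/M/1 with λ = 0.5 and μ = 1 the stationary mean wait is ρ/(μ-λ) = 1.

```python
import random

def simulate_waiting_times(lam, mu, n, rng):
    """Lindley recursion: W_{k+1} = max(0, W_k + S_k - A_{k+1})."""
    w, ws = 0.0, []
    for _ in range(n):
        ws.append(w)
        s = rng.expovariate(mu)    # service time of customer k
        a = rng.expovariate(lam)   # interarrival time to customer k+1
        w = max(0.0, w + s - a)
    return ws

def moment_estimate(ws, alpha, burn_in):
    """Estimate E[w^alpha] from the post-burn-in portion of the path."""
    tail = ws[burn_in:]
    return sum(w ** alpha for w in tail) / len(tail)
```

A burn-in is discarded so the time average approximates the stationary expectation; the paper's concern is what happens when the input distributions themselves are only estimated.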
  • 4
    Electronic Resource
    Springer
    Queueing systems 21 (1995), S. 391-413 
    ISSN: 1572-9443
    Keywords: Dynamic scheduling control ; queueing network ; fluid network ; queueing theory ; flow control ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Motivated by dynamic scheduling control for queueing networks, Chen and Yao [8] developed a systematic method to generate dynamic scheduling control policies for a fluid network, a simple and highly aggregated model that approximates the queueing network. This study addresses the question of how good these fluid policies are as heuristic scheduling policies for queueing networks. Using simulation on some examples these heuristic policies are compared with traditional simple scheduling rules. The results show that the heuristic policies perform at least comparably to classical priority rules, regardless of the assumptions made about the traffic intensities and the arrival and service time distributions. However, they are certainly not always the best and, even when they are, the improvement is seldom dramatic. The comparative advantage of these policies may lie in their application to nonstationary situations such as might occur with unreliable machines or nonstationary demand patterns.
    Type of Medium: Electronic Resource
  • 5
    Electronic Resource
    Springer
    Information systems frontiers 1 (1999), S. 259-266 
    ISSN: 1572-9419
    Keywords: data mining ; statistics ; patterns in data ; fitting distributions ; lambda ; beta
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Data mining has, in the past, tended to use simplistic statistical methods (or even none at all). In this paper we show by example how cutting edge (but easy to use and comprehend) statistical methods can yield substantial gains in data mining. The role of statistics in IS/IT (information systems and information technology) in general can be substantial, yielding more nearly optimal performance of problems at the emerging frontiers in all their aspects.
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Springer
    Queueing systems 27 (1997), S. 37-54 
    ISSN: 1572-9443
    Keywords: massive parallel processing ; simulation ; MIMD and SIMD parallel computers ; scalable algorithm ; G/G/1 queue ; longest path ; queueing networks
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract This paper presents several basic algorithms for the parallel simulation of G/G/1 queueing systems and certain networks of such systems. The coverage includes systems subject to manufacturing or communication blocking, or to loss of customers due to capacity constraints. The key idea is that the customer departure times are represented by longest-path distances in directed graphs instead of by the usual recursive equations. This representation leads to scalable algorithms with a high degree of parallelism that can be implemented on either MIMD or SIMD parallel computers.
    Type of Medium: Electronic Resource
  • 7
    Electronic Resource
    Springer
    Queueing systems 31 (1999), S. 43-58 
    ISSN: 1572-9443
    Keywords: perturbation analysis ; sample path analysis ; queueing theory ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We study infinitesimal perturbation analysis (IPA) for queueing networks with general service time distributions. By “general” we mean that the distributions may have discrete components. We show that in the presence of service time distributions with discrete components, the commuting condition (CC) is no longer sufficient for unbiasedness of IPA. To overcome this difficulty, we introduce the notion of separability of real-valued random variables, and show that separability of service times together with (CC) establishes unbiasedness of IPA for queueing systems with general service time distributions. It turns out that piecewise analyticity of the service times is a sufficient condition for separability.
    Type of Medium: Electronic Resource
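For service-time distributions with densities (no discrete component), the classical IPA estimator can be sketched for a single queue: scale the service times as S = θX and propagate dW/dθ along the Lindley recursion, resetting it at the start of each idle period. This toy sketch (invented parameters, a single G/G/1 queue rather than the paper's general networks) can be checked against a finite difference computed on the same random numbers.

```python
import random

def waiting_and_ipa(theta, xs, inter):
    """Lindley recursion W_{k+1} = max(0, W_k + theta*x_k - a_{k+1})
    with the IPA accumulator dW/dtheta propagated along the path."""
    w, dw = 0.0, 0.0
    w_sum, dw_sum = 0.0, 0.0
    for x, a in zip(xs, inter):
        w_next = w + theta * x - a
        if w_next > 0.0:
            w, dw = w_next, dw + x   # d(theta*x)/dtheta = x
        else:
            w, dw = 0.0, 0.0         # idle period: derivative resets
        w_sum += w
        dw_sum += dw
    n = len(xs)
    return w_sum / n, dw_sum / n
```

For a fixed sample path the average wait is piecewise linear in θ, so a small-step finite difference on common random numbers should agree with the IPA accumulator almost exactly.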
  • 8
    Electronic Resource
    Springer
    Queueing systems 19 (1995), S. 247-268 
    ISSN: 1572-9443
    Keywords: Sensitivity coefficient ; score function ; driving sequence ; queuing model ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Queuing models described as regenerative processes driven by several sequences are under investigation. Occurrence times in these sequences are supposed to be asynchronous, in general. Sensitivity coefficients that are derivatives of some stationary performance measures with respect to definite parameters as well as derivatives of higher orders are derived for such queuing models. These sensitivity coefficients can be evaluated by simulation.
    Type of Medium: Electronic Resource
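The score-function idea above can be illustrated on the simplest possible case (a single exponential random variable, invented for this sketch rather than the paper's queueing models): the sensitivity d/dλ E[X] for X ~ Exp(λ) equals E[X·(1/λ - X)], since ∂/∂λ log(λ e^{-λx}) = 1/λ - x, and the true value is -1/λ².

```python
import random

def score_function_derivative(lam, n, rng):
    """Estimate d/dlam E[X] for X ~ Exp(lam) via the score function:
    d/dlam E[f(X)] = E[f(X) * d/dlam log p(X; lam)], here f(x) = x and
    d/dlam log p(x; lam) = 1/lam - x for p(x) = lam * exp(-lam * x)."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)
        total += x * (1.0 / lam - x)
    return total / n
```

The same pattern, weighting a performance measure by the score of the driving sequence, is what yields the sensitivity coefficients evaluated by simulation in the abstract.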
  • 9
    Electronic Resource
    Springer
    Machine learning 26 (1997), S. 177-211 
    ISSN: 0885-6125
    Keywords: inductive logic programming ; qualitative modelling ; system identification ; PAC learning ; physiological modelling ; cardiovascular system ; data mining ; patient monitoring
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The automated construction of dynamic system models is an important application area for ILP. We describe a method that learns qualitative models from time-varying physiological signals. The goal is to understand the complexity of the learning task when faced with numerical data, what signal processing techniques are required, and how this affects learning. The qualitative representation is based on Kuipers' QSIM. The learning algorithm for model construction is based on Coiera's GENMODEL. We show that QSIM models are efficiently PAC learnable from positive examples only, and that GENMODEL is an ILP algorithm for efficiently constructing a QSIM model. We describe both GENMODEL, which performs RLGG on qualitative states to learn a QSIM model, and the front-end processing and segmenting stages that transform a signal into a set of qualitative states. Next we describe results of experiments on data from six cardiac bypass patients. Useful models were obtained, representing both normal and abnormal physiological states. Model variation across time and across different levels of temporal abstraction and fault tolerance is explored. The assumption made by many previous workers that the abstraction of examples from data can be separated from the learning task is not supported by this study. Firstly, the effects of noise in the numerical data manifest themselves in the qualitative examples. Secondly, the models learned are directly dependent on the initial qualitative abstraction chosen.
    Type of Medium: Electronic Resource
  • 10
    Electronic Resource
    Springer
    Computing 30 (1983), S. 185-188 
    ISSN: 1436-5057
    Keywords: 62E30 ; 62E25 ; Random numbers ; simulation ; gamma distribution ; pseudo-random
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Description / Table of Contents: Abstract (translated from the German): A modification of the algorithm of Ahrens and Dieter [1] is given, which generates gamma-distributed random variates with shape parameter less than one. The modified algorithm is considerably faster, although it is hardly more complex than the original.
    Notes: Abstract A modification is given for an algorithm of Ahrens and Dieter [1] which generates random Gamma variates with shape parameter less than unity. The modified algorithm is substantially faster, although hardly more complex than the original one.
    Type of Medium: Electronic Resource
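For reference, the unmodified Ahrens-Dieter "GS" sampler that the abstract modifies is short enough to sketch, as commonly presented in the literature; the paper's faster variant is not reproduced here.

```python
import math
import random

def gamma_gs(a, rng):
    """Ahrens-Dieter GS rejection sampler for Gamma(a, 1), 0 < a < 1."""
    assert 0.0 < a < 1.0
    b = (math.e + a) / math.e
    while True:
        p = b * rng.random()
        if p <= 1.0:
            # Candidate from the x^(a-1) part of the envelope.
            x = p ** (1.0 / a)
            if rng.random() <= math.exp(-x):
                return x
        else:
            # Candidate from the exp(-x) tail of the envelope.
            x = -math.log((b - p) / a)
            if rng.random() <= x ** (a - 1.0):
                return x
```

The mean of Gamma(a, 1) is a, which gives a quick statistical check of the sampler.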
  • 11
    Electronic Resource
    Springer
    Higher-order and symbolic computation 10 (1998), S. 237-271 
    ISSN: 1573-0557
    Keywords: abstract interpretation ; operational semantics ; collecting semantics ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We present trace-based abstract interpretation, a unification of several lines of research on applying Cousot-Cousot-style abstract interpretation (a.i.) to operational semantics definitions (such as flowchart, big-step, and small-step semantics) that express a program's semantics as a concrete computation tree of trace paths. A program's trace-based a.i. is also a computation tree whose nodes contain abstractions of state and whose paths simulate the paths in the program's concrete computation tree. Using such computation trees, we provide a simple explanation of the central concept of collecting semantics, and we distinguish concrete from abstract collecting semantics and state-based from path-based collecting semantics. We also expose the relationship between collecting semantics extraction and results garnered from flow-analytic and model-checking-based analysis techniques. We adapt concepts from concurrency theory to formalize "safe" and "live" a.i.'s for computation trees; in particular, coinduction techniques help extend fundamental results to infinite computation trees. Problems specific to the various operational semantics methodologies are discussed: big-step semantics cannot express divergence, so we employ a mixture of induction and coinduction in response; small-step semantics generate sequences of program configurations unbounded in size, so we abstractly interpret source language syntax. Applications of trace-based a.i. to data-flow analysis, model checking, closure analysis, and concurrency theory are demonstrated.
    Type of Medium: Electronic Resource
  • 12
    Electronic Resource
    Springer
    The journal of supercomputing 11 (1997), S. 255-278 
    ISSN: 1573-0484
    Keywords: Ewald ; parallel ; T3E ; simulation ; electrostatic ; molecular dynamics ; PME
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We report our work to parallelize the Particle Mesh Ewald (PME) method to compute the long-range electrostatic interactions in the molecular dynamics program AMBER and to extend the scalability of the PME method to hundreds of processors.
    Type of Medium: Electronic Resource
  • 13
    Electronic Resource
    Springer
    Journal of intelligent and robotic systems 17 (1996), S. 309-325 
    ISSN: 1573-0409
    Keywords: modeling ; camera ; CCD ; subpixel ; simulation ; vision ; image ; diffusion ; CAD ; CIM ; bias
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mechanical Engineering, Materials Science, Production Engineering, Mining and Metallurgy, Traffic Engineering, Precision Mechanics
    Notes: Abstract In this paper we propose a modeling of an acquisition line made up of a CCD camera, a lens and a frame grabber card. The purpose of this modeling is to simulate the acquisition process in order to obtain images of virtual objects. The response time has to be short enough to permit interactive simulation. All the stages are modelled: in the first phase, we present a geometric model which supplies a point-to-point transformation that provides, for a space point in the camera field, the corresponding point on the plane of the CCD sensor. The second phase consists of modeling the discrete space, which implies passing from the continuous known object view to a discrete image, in accordance with the different origins of the contrast loss. In the third phase, the video signal is reconstituted in order to be sampled by the frame grabber card. The practical results are close to reality when compared to image processing. This tool makes it possible to obtain a short-computation-time simulation of a vision sensor. This enables interactivity either with the user or with software for the design/simulation of an industrial workshop equipped with a vision system. It makes testing possible and validates the choice of sensor placement and image processing and analysis. Thanks to this simulation tool, we can perfectly control the position of the object image placed under the camera and, in this way, we can characterise the performance of subpixel-accuracy determining methods for object positioning.
    Type of Medium: Electronic Resource
  • 14
    Electronic Resource
    Springer
    Journal of intelligent and robotic systems 24 (1999), S. 125-149 
    ISSN: 1573-0409
    Keywords: planning ; control ; multi-fingered robot hand ; optimization ; internal force ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mechanical Engineering, Materials Science, Production Engineering, Mining and Metallurgy, Traffic Engineering, Precision Mechanics
    Notes: Abstract In this paper, the problem of controlling multi-fingered robot hands with rolling and sliding contacts is addressed. Several issues are explored. These issues involve the kinematic analysis and modeling, the dynamic analysis and control, and the coordination of a multi-fingered robot hand system. Based on a hand-object system in which the contacts are allowed to both roll and slide, a kinematic model is derived and analyzed. Also, the dynamic model of the hand-object system with relative motion contacts is studied. A control law is proposed to guarantee the asymptotic tracking of the object trajectory together with the desired rolling and/or sliding motions along the surface of the object. A planning approach is then introduced to minimize the contact forces so that the desired motion of the object and the relative motions between the fingers and the object can be achieved. Simulation results which support the theoretical development are presented.
    Type of Medium: Electronic Resource
  • 15
    ISSN: 0885-6125
    Keywords: machine learning ; pattern recognition ; learning from examples ; large image databases ; data mining ; automatic cataloging ; detection of natural objects ; Magellan SAR ; JARtool ; volcanoes ; Venus ; principal components analysis ; trainable
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Dramatic improvements in sensor and image acquisition technology have created a demand for automated tools that can aid in the analysis of large image databases. We describe the development of JARtool, a trainable software system that learns to recognize volcanoes in a large data set of Venusian imagery. A machine learning approach is used because it is much easier for geologists to identify examples of volcanoes in the imagery than it is to specify domain knowledge as a set of pixel-level constraints. This approach can also provide portability to other domains without the need for explicit reprogramming; the user simply supplies the system with a new set of training examples. We show how the development of such a system requires a completely different set of skills than are required for applying machine learning to “toy world” domains. This paper discusses important aspects of the application process not commonly encountered in the “toy world,” including obtaining labeled training data, the difficulties of working with pixel data, and the automatic extraction of higher-level features.
    Type of Medium: Electronic Resource
  • 16
    Electronic Resource
    Springer
    Computational economics 13 (1999), S. 93-101 
    ISSN: 1572-9974
    Keywords: efficiency ; multivariate probit ; quasi maximum likelihood ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Economics
    Notes: Abstract This paper discusses the most efficient estimator among Quasi Maximum Likelihood Estimators using at most two levels of numerical integration, for the multivariate probit model. Simulations show that this estimator is more efficient but not more costly than the second-best alternative. However, its added efficiency depends on the correlation structure.
    Type of Medium: Electronic Resource
  • 17
    Electronic Resource
    Springer
    Computational economics 10 (1997), S. 107-118 
    ISSN: 1572-9974
    Keywords: visualisation ; simulation ; linear quadratic control ; observer ; Kalman filter
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Economics
    Notes: Abstract Simulation of economic models is frequently used in the investigation of economic policy. Yet one of the problems with simulation is that it can be difficult to appreciate the model properties due to the nature of the simulation process. Stochastic simulation, for example, can produce large quantities of output which can be difficult to comprehend. Further, when mathematically sophisticated techniques such as the use of optimal control and Kalman Filtering are applied to models, the simulation process can become even more complex. Visualisation techniques in model building, simulation, and analysis of simulation output can help reduce the complexity. This is especially the case with interactive simulation. In this paper we investigate the use of visualisation in simulation by examining the application of optimal control techniques to a stochastic forward looking analytic economic model. We also use interactive object oriented simulation software where objects, such as components of models or graphs of outputs, can be visually manipulated to form simulation systems. We find that the use of visualisation can make the investigation of policy analysis issues with such models more comprehensible.
    Type of Medium: Electronic Resource
  • 18
    Electronic Resource
    Springer
    Computational economics 10 (1997), S. 267-277 
    ISSN: 1572-9974
    Keywords: data mining ; forecasting ; genetic algorithms.
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Economics
    Notes: Abstract This paper presents an algorithm that permits the search for dependencies among sets of data (univariate or multivariate time-series, or cross-sectional observations). The procedure is modeled after genetic theories and Darwinian concepts, such as natural selection and survival of the fittest. It permits the discovery of equations of the data-generating process in symbolic form. The genetic algorithm that is described here uses parts of equations as building blocks to breed ever better formulas. Apart from furnishing a deeper understanding of the dynamics of a process, the method also permits global predictions and forecasts. The algorithm is successfully tested with artificial and with economic time-series and also with cross-sectional data on the performance and salaries of NBA players during the 94–95 season.
    Type of Medium: Electronic Resource
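The genetic machinery described above (selection, crossover, mutation, survival of the fittest) can be sketched in a much-reduced form: instead of breeding symbolic formulas, this toy GA (all names and parameters invented here) evolves the two coefficients of a straight line to fit data.

```python
import random

def evolve_linear_fit(xs, ys, rng, pop_size=40, generations=60):
    """Toy GA: evolve coefficients (a, b) of y = a*x + b to minimise SSE."""
    def sse(ind):
        a, b = ind
        return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    best = min(pop, key=sse)
    for _ in range(generations):
        new_pop = [best]                          # elitism: keep the fittest
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = min(rng.sample(pop, 3), key=sse)
            p2 = min(rng.sample(pop, 3), key=sse)
            # Blend crossover plus Gaussian mutation.
            w = rng.random()
            child = (w * p1[0] + (1 - w) * p2[0] + rng.gauss(0, 0.1),
                     w * p1[1] + (1 - w) * p2[1] + rng.gauss(0, 0.1))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=sse)
    return best, sse(best)
```

Elitism guarantees the best fitness never worsens between generations; the paper's algorithm breeds equation fragments rather than fixed-length coefficient vectors.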
  • 19
    Electronic Resource
    Springer
    Artificial intelligence review 13 (1999), S. 345-364 
    ISSN: 1573-7462
    Keywords: data mining ; document filtering ; exploratory data analysis ; information retrieval ; self-organizing map ; SOM ; text document collection ; WEBSOM
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract New methods that are user-friendly and efficient are needed for guidance among the masses of textual information available in the Internet and the World Wide Web. We have developed a method and a tool called the WEBSOM which utilizes the self-organizing map algorithm (SOM) for organizing large collections of text documents onto visual document maps. The approach to processing text is statistically oriented, computationally feasible, and scalable: over a million text documents have been ordered on a single map. In the article we consider different kinds of information needs and tasks regarding organizing, visualizing, searching, categorizing and filtering textual data. Furthermore, we discuss and illustrate with examples how document maps can aid in these situations. An example is presented where a document map is utilized as a tool for visualizing and filtering a stream of incoming electronic mail messages.
    Type of Medium: Electronic Resource
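The SOM algorithm at the heart of WEBSOM is easy to sketch in one dimension (a toy version with invented parameters, nowhere near the million-document scale described above): each input is assigned to its best-matching unit, and that unit and its neighbours are pulled toward the input under a shrinking neighbourhood and learning rate.

```python
import math
import random

def train_som(data, n_nodes, epochs, rng):
    """Minimal 1-D self-organizing map on scalar data in [0, 1]."""
    weights = [rng.random() for _ in range(n_nodes)]
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)                     # decaying rate
        radius = max(1.0, n_nodes / 2 * (1.0 - t / epochs))
        for x in data:
            # Best-matching unit: node whose weight is closest to x.
            bmu = min(range(n_nodes), key=lambda j: abs(weights[j] - x))
            for j in range(n_nodes):
                h = math.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                weights[j] += lr * h * (x - weights[j])
    return weights
```

After training on data clustered at two values, some node should sit near each cluster, which is the 1-D analogue of similar documents landing in nearby map regions.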
  • 20
    Electronic Resource
    Springer
    Applied intelligence 11 (1999), S. 277-284 
    ISSN: 1573-7497
    Keywords: genetic algorithms ; classification ; data mining
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract A common approach to evaluating competing models in a classification context is via accuracy on a test set or on cross-validation sets. However, this can be computationally costly when using genetic algorithms with large datasets and the benefits of performing a wide search are compromised by the fact that estimates of the generalization abilities of competing models are subject to noise. This paper shows that clear advantages can be gained by using samples of the test set when evaluating competing models. Further, that applying statistical tests in combination with Occam's razor produces parsimonious models, matches the level of evaluation to the state of the search and retains the speed advantages of test set sampling.
    Type of Medium: Electronic Resource
  • 21
    ISSN: 1573-7497
    Keywords: discretisation ; data mining ; simulated annealing
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract An introduction to the approaches used to discretise continuous database features is given, together with a discussion of the potential benefits of such techniques. These benefits are investigated by applying discretisation algorithms to two large commercial databases; the discretisations yielded are then evaluated using a simulated annealing based data mining algorithm. The results produced suggest that dramatic reductions in problem size may be achieved, yielding improvements in the speed of the data mining algorithm. However, it is also demonstrated under certain circumstances that the discretisation produced may give an increase in problem size or allow overfitting by the data mining algorithm. Such cases, within which often only a small proportion of the database belongs to the class of interest, highlight the need both for caution when producing discretisations and for the development of more robust discretisation algorithms.
    Type of Medium: Electronic Resource
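Of the standard discretisation baselines in this literature, equal-frequency binning is the simplest to sketch. This generic sketch is invented here; the paper's simulated-annealing-driven evaluation is not reproduced.

```python
def equal_frequency_bins(values, n_bins):
    """Cut points that split the sorted values into roughly
    equal-count bins."""
    s = sorted(values)
    n = len(s)
    return [s[(i * n) // n_bins] for i in range(1, n_bins)]

def discretise(x, cuts):
    """Map a continuous value to the index of its bin."""
    return sum(1 for c in cuts if x >= c)
```

Replacing a continuous feature by a handful of bin indices is exactly the problem-size reduction whose benefits and overfitting risks the abstract investigates.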
  • 22
    Electronic Resource
    Springer
    Applied intelligence 11 (1999), S. 297-304 
    ISSN: 1573-7497
    Keywords: data mining ; rule discovery ; interest measure ; distinctive features ; characteristic rules
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract One strategy for increasing the efficiency of rule discovery in data mining is to target a restricted class of rules, such as exact or almost exact rules, rules with a limited number of conditions, or rules in which each condition, on its own, eliminates a competing outcome class. An algorithm is presented for the discovery of rules in which each condition is a distinctive feature of the outcome class on its right-hand side in the subset of the data set defined by the conditions, if any, which precede it. Such a rule is said to be characteristic for the outcome class. A feature is defined as distinctive for an outcome class if it maximises a well-known measure of rule interest or is unique to the outcome class in the data set. In the special case of data mining which arises when each outcome class is represented by a single instance in the data set, a feature of an object is shown to be distinctive if and only if no other feature is shared by fewer objects in the data set.
    Type of Medium: Electronic Resource
  • 23
    Electronic Resource
    Springer
    Journal of intelligent information systems 5 (1995), S. 229-248 
    ISSN: 1573-7675
    Keywords: machine discovery ; data mining ; data compression ; inexact graph match ; scene analysis ; chemical analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Discovering repetitive substructure in a structural database improves the ability to interpret and compress the data. This paper describes the Subdue system that uses domain-independent and domain-dependent heuristics to find interesting and repetitive structures in structural data. This substructure discovery technique can be used to discover fuzzy concepts, compress the data description, and formulate hierarchical substructure definitions. Examples from the domains of scene analysis, chemical compound analysis, computer-aided design, and program analysis demonstrate the benefits of the discovery technique.
    Type of Medium: Electronic Resource
  • 24
    Electronic Resource
    Springer
    Journal of network and systems management 3 (1995), S. 371-380 
    ISSN: 1573-7705
    Keywords: Telephone traffic ; network management ; control theory ; dynamic flows ; stability ; routing algorithms ; broadband networks ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The control of telephony traffic is the task of network management and routing algorithms. In this paper, a study of two trunk groups carrying telephony traffic is used to show that instabilities can arise if there is a delay in getting feedback information for a network controller. The network controller seeks to balance the traffic in the two trunk groups, which may represent two paths from a source to a destination. An analysis shows how factors such as holding time, controller gain and feedback delay influence stability. Simulation of a two service case is also carried out to show that the same instabilities can arise.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
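The instability mechanism the abstract analyses, a controller acting on stale measurements, can be reproduced with a few lines of discrete-time simulation. A hedged Python sketch: the gain, delay, and load model below are invented for illustration and are not taken from the paper.

```python
def simulate(gain, delay, steps=200, target=0.5, x0=1.0):
    """Proportional controller driving a trunk-group load x toward target,
    but computing each correction from a measurement `delay` steps old."""
    xs = [x0] * (delay + 1)            # history buffer seeds the delayed feedback
    for _ in range(steps):
        err = xs[-1 - delay] - target  # controller sees a stale error signal
        xs.append(xs[-1] - gain * err)
    return xs
```

With zero delay this gain converges smoothly to the target; the same gain with a four-step feedback delay overshoots and oscillates with growing amplitude, mirroring the paper's point that holding time, controller gain and feedback delay jointly determine stability.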
  • 25
    ISSN: 1573-7462
    Keywords: CancerLit ; concept spaces ; data mining ; Hopfield net ; information retrieval ; Kohonen net ; medical knowledge ; neural networks
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract This paper discusses several data mining algorithms and techniques that we have developed at the University of Arizona Artificial Intelligence Lab. We have implemented these algorithms and techniques into several prototypes, one of which focuses on medical information developed in cooperation with the National Cancer Institute (NCI) and the University of Illinois at Urbana-Champaign. We propose an architecture for medical knowledge information systems that will permit data mining across several medical information sources and discuss a suite of data mining tools that we are developing to assist NCI in improving public access to and use of their existing vast cancer information collections.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 26
    Electronic Resource
    Electronic Resource
    Springer
    Journal of intelligent information systems 10 (1998), S. 281-300 
    ISSN: 1573-7675
    Keywords: data mining ; text mining ; text categorization ; distribution comparison ; trend analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Knowledge Discovery in Databases (KDD) focuses on the computerized exploration of large amounts of data and on the discovery of interesting patterns within them. While most work on KDD has been concerned with structured databases, there has been little work on handling the huge amount of information that is available only in unstructured textual form. This paper describes the KDT system for Knowledge Discovery in Text, in which documents are labeled by keywords, and knowledge discovery is performed by analyzing the co-occurrence frequencies of the various keywords labeling the documents. We show how this keyword-frequency approach supports a range of KDD operations, providing a suitable foundation for knowledge discovery and exploration for collections of unstructured text.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
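The keyword-frequency approach reduces, at its core, to counting how often pairs of keywords co-occur on the same document. A minimal Python sketch (the example keywords in the test are invented, not the KDT system's data):

```python
from itertools import combinations
from collections import Counter

def keyword_cooccurrence(docs):
    """docs: iterable of keyword sets, one per document.
    Returns a Counter over sorted keyword pairs with their co-occurrence counts."""
    pair_counts = Counter()
    for keywords in docs:
        for a, b in combinations(sorted(set(keywords)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts
```

Patterns and trends are then read off these counts, e.g. by comparing a pair's frequency against what the individual keyword frequencies would predict under independence.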
  • 27
    Electronic Resource
    Electronic Resource
    Springer
    Journal of intelligent information systems 9 (1997), S. 57-81 
    ISSN: 1573-7675
    Keywords: Feature subset selection ; data mining ; simulated annealing
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract An overview of the principal feature subset selection methods is given. We investigate a number of measures of feature subset quality, using large commercial databases. We develop an entropic measure, based upon the information gain approach used within ID3 and C4.5 to build trees, which is shown to give the best performance over our databases. This measure is used within a simple feature subset selection algorithm and the technique is used to generate subsets of high quality features from the databases. A simulated annealing based data mining technique is presented and applied to the databases. The performance using all features is compared to that achieved using the subset selected by our algorithm. We show that a substantial reduction in the number of features may be achieved together with an improvement in the performance of our data mining system. We also present a modification of the data mining algorithm, which allows it to simultaneously search for promising feature subsets and high quality rules. The effect of varying the generality level of the desired pattern is also investigated.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
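The entropic measure described here is the information gain familiar from ID3 and C4.5: the drop in class entropy after partitioning the data on a feature. A small Python sketch (the toy data in the test are invented; this is the textbook formula, not the paper's exact implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy, in bits, of a class label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature_index):
    """Entropy of the labels minus the weighted entropy after
    partitioning the rows on the value of one feature."""
    partitions = {}
    for row, lab in zip(rows, labels):
        partitions.setdefault(row[feature_index], []).append(lab)
    remainder = sum(len(p) / len(labels) * entropy(p) for p in partitions.values())
    return entropy(labels) - remainder
```

A feature subset can then be scored, as in the paper's selection algorithm, by ranking features on this measure.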
  • 28
    Electronic Resource
    Electronic Resource
    Springer
    Journal of intelligent information systems 9 (1997), S. 33-56 
    ISSN: 1573-7675
    Keywords: combinatorial pattern matching ; data mining ; sequential pattern ; suffix tree ; update
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Most daily and scientific data are sequential in nature. Discovering important patterns from such data can benefit the user and scientist by predicting coming activities, interpreting recurring phenomena, extracting outstanding similarities and differences for close attention, compressing data, and detecting intrusion. We consider the following incremental discovery problem for large and dynamic sequential data. Suppose that patterns were previously discovered and materialized. An update is made to the sequential database. An incremental discovery will take advantage of discovered patterns and compute only the change by accessing the affected part of the database and data structures. In addition to patterns, the statistics and position information of patterns need to be updated to allow further analysis and processing on patterns. We present an efficient algorithm for the incremental discovery problem. The algorithm is applied to sequential data that honors several sequential patterns modeling weather changes in Singapore. The algorithm finds what it is supposed to find. Experiments show that for small updates and large databases, the incremental discovery algorithm runs in time independent of the data size.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 29
    Electronic Resource
    Electronic Resource
    Springer
    Neural processing letters 5 (1997), S. 69-81 
    ISSN: 1573-773X
    Keywords: data mining ; feature extraction ; information retrieval ; Self-Organizing Map (SOM) ; text analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract WEBSOM is a recently developed neural method for exploring full-text document collections, for information retrieval, and for information filtering. In WEBSOM the full-text documents are encoded as vectors in a document space somewhat like in earlier information retrieval methods, but in WEBSOM the document space is formed in an unsupervised manner using the Self-Organizing Map algorithm. In this article the document representations the WEBSOM creates are shown to be computationally efficient approximations of the results of a certain probabilistic model. The probabilistic model incorporates information about the similarity of use of different words to take into account their semantic relations.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
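The unsupervised organisation WEBSOM relies on comes from the Self-Organizing Map update rule: move the best-matching unit, and its map neighbours to a lesser degree, toward each input. A stdlib-only Python sketch of a tiny one-dimensional SOM (all parameters and the test data are illustrative, not WEBSOM's document vectors):

```python
import math
import random

def train_som(data, n_units=4, epochs=50, lr=0.5, radius=0.3, seed=0):
    """Fit a 1-D Self-Organizing Map to a list of equal-length vectors.
    Each sample pulls its best-matching unit (BMU) toward it, with a
    Gaussian falloff over map distance for the BMU's neighbours."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights
```

After training on two well-separated clusters, distinct units end up near each cluster, which is the self-organisation WEBSOM exploits to lay documents out on a map.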
  • 30
    Electronic Resource
    Electronic Resource
    Springer
    Applied intelligence 8 (1998), S. 195-218 
    ISSN: 1573-7497
    Keywords: modelling ; simulation ; teams ; coordination ; organizations
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The problem of modelling air missions is part of a larger problem—simulating possible war-like scenarios in the air, sea, and on land. In modelling such military systems one is required to model the behaviour of various actors and the resources that are available to them. One aspect of this problem is the modelling of a group of actors as a team and then modelling the coordinated behaviour of such a team to achieve a joint goal. In the domain of air mission modelling the actors are pilots who control aircraft and their behaviour is referred to as tactics. In this paper we present the approach we adopted in modelling teams and team tactics as part of the development of the Smart Whole AiR Mission Model (SWARMM) for the Air Operations Division of the Australian Defence Science and Technology Organization. In our approach teams are composed of sub-teams and adopt organizational structures. Such structures define the responsibilities of the sub-teams towards the mission to be achieved as well as towards the control and coordination of the sub-teams. We also describe how communication is used when adopting a variety of control and coordination strategies and how one could reason about the choice of organizational structures for a given mission and situation.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 31
    Electronic Resource
    Electronic Resource
    Springer
    Autonomous robots 3 (1996), S. 253-268 
    ISSN: 1573-7527
    Keywords: underwater robotics ; hydrodynamics ; simulation ; articulated mechanisms ; tree topologies ; object-oriented design
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mechanical Engineering, Materials Science, Production Engineering, Mining and Metallurgy, Traffic Engineering, Precision Mechanics
    Notes: Abstract This paper presents a computational framework for efficiently simulating the dynamics and hydrodynamics of Underwater Robotic Vehicle (URV) systems. Through the use of object-oriented mechanisms, a very general yet efficient version of the Articulated-Body (AB) algorithm has been implemented. An efficient solution to branching within chains is developed in the paper so that the algorithm can be used to compute the dynamics for the entire class of open-chain, tree-structured mechanisms. By including compliant contacts with the environment, most closed-chain systems can also be modeled. URV systems with an extended set of topologies can be simulated including proposed underwater walking machines with intra-body powered articulations. Using the encapsulation inherent in C++, the hydrodynamics code has been confined to a single class, thereby explicitly defining this framework and providing an environment for readily implementing desired hydrodynamics algorithms. Resulting simulations are very efficient and can be used in a number of applications both in the development and use of URV systems.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 32
    Electronic Resource
    Electronic Resource
    Springer
    Autonomous robots 6 (1999), S. 281-292 
    ISSN: 1573-7527
    Keywords: mobile robotics ; reinforcement learning ; artificial neural networks ; simulation ; real world
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mechanical Engineering, Materials Science, Production Engineering, Mining and Metallurgy, Traffic Engineering, Precision Mechanics
    Notes: Abstract We present a case study of reinforcement learning on a real robot that learns how to back up a trailer and discuss the lessons learned about the importance of proper experimental procedure and design. We identify areas of particular concern to the experimental robotics community at large. In particular, we address concerns pertinent to robotics simulation research, implementing learning algorithms on real robotic hardware, and the difficulties involved with transferring research between the two.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 33
    Electronic Resource
    Electronic Resource
    Springer
    Journal of intelligent information systems 12 (1999), S. 61-73 
    ISSN: 1573-7675
    Keywords: association rules ; knowledge discovery ; data mining
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We consider the problem of finding association rules in a database with binary attributes. Most algorithms for finding such rules assume that all the data is available at the start of the data mining session. In practice, the data in the database may change over time, with records being added and deleted. At any given time, the rules for the current set of data are of interest. The naive, and highly inefficient, solution would be to rerun the association generation algorithm from scratch following the arrival of each new batch of data. This paper describes the Borders algorithm, which provides an efficient method for generating associations incrementally, from dynamically changing databases. Experimental results show an improved performance of the new algorithm when compared with previous solutions to the problem.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
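The naive rerun baseline that the Borders algorithm improves on is easy to state: recount itemset supports from scratch over the whole database after every batch of updates. A brute-force Python sketch for small examples (the Borders algorithm itself instead maintains a border of almost-frequent itemsets so that only the change is computed; the transactions in the test are invented):

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Exhaustively count every itemset's support and keep those whose
    relative support reaches min_support. Exponential; for illustration only."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    n = len(transactions)
    return {iset: c for iset, c in counts.items() if c / n >= min_support}
```

Rerunning this after each insertion or deletion is exactly the inefficiency the paper's incremental approach avoids.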
  • 34
    Electronic Resource
    Electronic Resource
    Springer
    Journal of intelligent information systems 13 (1999), S. 195-234 
    ISSN: 1573-7675
    Keywords: data mining ; knowledge discovery ; machine learning ; knowledge representation ; attribute-oriented generalization ; domain generalization graphs
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Attribute-oriented generalization summarizes the information in a relational database by repeatedly replacing specific attribute values with more general concepts according to user-defined concept hierarchies. We introduce domain generalization graphs (DGGs) for controlling the generalization of a set of attributes and show how they are constructed. We then present serial and parallel versions of the Multi-Attribute Generalization algorithm for traversing the generalization state space described by joining the domain generalization graphs for multiple attributes. Based upon a generate-and-test approach, the algorithm generates all possible summaries consistent with the domain generalization graphs. Our experimental results show that significant speedups are possible by partitioning path combinations from the DGGs across multiple processors. We also rank the interestingness of the resulting summaries using measures based upon variance and relative entropy. Our experimental results also show that these measures provide an effective basis for analyzing summary data generated from relational databases. Variance appears more useful because it tends to rank the less complex summaries (i.e., those with few attributes and/or tuples) as more interesting.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
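One generalization step, replacing attribute values by their parents in a user-defined concept hierarchy and merging the now-identical tuples, can be sketched as follows (illustrative Python; the place names and hierarchy in the test are invented, and the full algorithm traverses DGGs rather than a single flat hierarchy):

```python
from collections import Counter

def generalize(table, attribute, hierarchy):
    """Replace each value of `attribute` with its parent concept.
    hierarchy: dict mapping child value -> parent concept."""
    out = []
    for row in table:
        row = dict(row)
        row[attribute] = hierarchy.get(row[attribute], row[attribute])
        out.append(row)
    return out

def summarize(table):
    """Merge identical tuples, keeping a count per distinct tuple."""
    return Counter(tuple(sorted(r.items())) for r in table)
```

Repeating generalize-then-summarize over several attributes yields the compact summaries whose interestingness the paper then ranks.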
  • 35
    Electronic Resource
    Electronic Resource
    Springer
    Journal of intelligent information systems 6 (1996), S. 131-150 
    ISSN: 1573-7675
    Keywords: information mediation ; data mining ; semantic integration ; ontologies ; declarative interoperability
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract An end-to-end discussion, from logical architecture to implementation, of issues and design decisions in declarative information networks is presented. A declarative information network is defined to be a dynamic and decentralized structure where value-added services are declared and applied as mediators in a scalable and controlled manner. A primary result is the need to adopt dynamically linked ontologies as the semantic basis for knowledge sharing in scalable networks. It is shown that data mining techniques provide a promising basis upon which to explore and develop this result. Our prototype system, entitled Mystique, is described in terms of KQML, distributed object management, and distributed agent execution. An example shows how we map our architecture into the World Wide Web (WWW) and transform the appearance of the WWW into an intelligently integrated and multi-subject distributed information network.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 36
    Electronic Resource
    Electronic Resource
    Springer
    Journal of intelligent information systems 8 (1997), S. 5-28 
    ISSN: 1573-7675
    Keywords: machine learning ; meta-learning ; scalability ; data mining ; classifiers
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract In this paper, we describe a general approach to scaling data mining applications that we have come to call meta-learning. Meta-learning refers to a general strategy that seeks to learn how to combine a number of separate learning processes in an intelligent fashion. We desire a meta-learning architecture that exhibits two key behaviors. First, the meta-learning strategy must produce an accurate final classification system. This means that a meta-learning architecture must produce a final outcome that is at least as accurate as a conventional learning algorithm applied to all available data. Second, it must be fast relative to an individual sequential learning algorithm when applied to massive databases of examples, and operate in a reasonable amount of time. This paper focuses primarily on issues related to the accuracy and efficacy of meta-learning as a general strategy. A number of empirical results are presented demonstrating that meta-learning is technically feasible in wide-area, network computing environments.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
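The simplest combiner in this spirit is majority voting over the separately learned classifiers. A minimal Python sketch (the paper's arbiter and combiner strategies are more elaborate than plain voting; the predictions in the test are invented):

```python
from collections import Counter

def combine_predictions(base_predictions):
    """base_predictions: one prediction list per base classifier,
    all over the same examples. Returns the per-example majority vote."""
    merged = []
    for votes in zip(*base_predictions):
        merged.append(Counter(votes).most_common(1)[0][0])
    return merged
```

Because each base classifier can be trained on its own partition of the data, on its own machine, this is the kind of scheme that scales to the wide-area network setting the abstract describes.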
  • 37
    Electronic Resource
    Electronic Resource
    Springer
    Journal of computational neuroscience 3 (1996), S. 73-82 
    ISSN: 1573-6873
    Keywords: primate retina ; simulation ; ganglion cells ; spatial vision ; hyperacuity
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine , Physics
    Notes: Abstract We developed a physiologically plausible model of the first steps of spatial visual information processing in the fovea of the human retina. With the predictions of this model we could support the hypothesis that, for moderate contrasts (≤ 40%), hyperacuity is mediated by the magnocellular (MC-) pathway. Despite the lower sampling density in the MC pathway, as compared to the parvocellular (PC-) pathway, the information that is transferred by the MC ganglion cells is sufficient to achieve thresholds comparable to those of human subjects in psychophysical tasks. This is a result of the much higher signal-to-noise ratio of the MC pathway cell signals. The PC pathway cells do not transfer enough information for hyperacuity thresholds.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 38
    ISSN: 1573-6873
    Keywords: reflex pathway ; neuronal network ; simulation ; parliamentary principle ; neural basis of behavior
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine , Physics
    Notes: Abstract In inactive stick insects, sensory information from the femoral chordotonal organ (fCO) about position and movement of the femur-tibia joint is transferred via local nonspiking interneurons onto extensor and flexor tibiae motoneurons. Information is processed by the interaction of antagonistic parallel pathways at two levels: (1) at the input side of the nonspiking interneurons and (2) at the input side of the motoneurons. We tested by a combination of physiological experiments and computer simulation whether the known network topology and the properties of its elements are sufficient to explain the generation of the motor output in response to passive joint movements, that is resistance reflexes. In reinvestigating the quantitative characteristics of interneuronal pathways we identified 10 distinct types of nonspiking interneurons. Synaptic inputs from fCO afferents onto these interneurons are direct excitatory and indirect inhibitory. These connections were investigated with respect to position and velocity signals from the fCO. The results were introduced in the network simulation. The motor output of the simulation has the same characteristics as the real system, even when particular types of interneurons were removed in the simulation and the real system.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 39
    Electronic Resource
    Electronic Resource
    Springer
    Journal of computational neuroscience 3 (1996), S. 137-153 
    ISSN: 1573-6873
    Keywords: potassium ; compartmental ; learning ; plasticity ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine , Physics
    Notes: Abstract We developed a multicompartmental Hodgkin-Huxley model of the Hermissenda type-B photoreceptor and used it to address the relative contributions of reductions of two K+ currents, I_A and I_C, to changes in cellular excitability and synaptic strength that occur in these cells after associative learning. We found that reductions of g_C, the peak conductance of I_C, substantially increased the firing frequency of the type-B cell during the plateau phase of a simulated light response, whereas reductions of g_A had only a modest contribution to the plateau frequency. This can be understood at least in part by the contributions of these currents to the light-induced (nonspiking) generator potential, the plateau of which was enhanced by g_C reductions, but not by g_A reductions. In contrast, however, reductions of g_A broadened the type-B cell action potential, increased Ca2+ influx, and increased the size of the postsynaptic potential produced in a type-A cell, whereas similar reductions of g_C had only negligible contributions to these measures. These results suggest that reductions of I_A and I_C play important but different roles in type-B cell plasticity.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 40
    Electronic Resource
    Electronic Resource
    Springer
    Statistics and computing 6 (1996), S. 277-287 
    ISSN: 1573-1375
    Keywords: Bayesian inference ; contingency tables ; Gibbs sampling ; graphical methods ; hypothesis testing ; independence ; intraclass tables ; model comparison ; predictive densities ; quasisymmetry ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract In this paper we present a simulation and graphics-based model checking and model comparison methodology for the Bayesian analysis of contingency tables. We illustrate the approach by testing the hypotheses of independence and symmetry on complete and incomplete simulated tables.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 41
    Electronic Resource
    Electronic Resource
    Springer
    Real-time systems 14 (1998), S. 251-267 
    ISSN: 1573-1383
    Keywords: development environment ; real-time ; scheduling ; simulation ; design tool
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The development of time critical applications needs specific tools able to cope with both functional and non-functional requirements. In this paper we describe a design and programming environment to assist the development of hard real-time applications. An interactive graphic interface is provided to facilitate the design of the application according to three hierarchical levels. The development model we propose is based on an iterative process in which the real-time scheduling support is considered from the beginning of the design phases. Our graphic environment integrates several tools to analyze, test, and simulate the real-time application under development. In particular, the tools we have implemented are: a Design Tool, to describe the structure of the application, a Schedulability Analyser Tool (SAT), to verify off-line the feasibility of the schedule of a critical task set, a Scheduling Simulator, to test the average behavior of the application, and a Maximum Execution Time (MET) estimator to bound the worst case duration of each task.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 42
    Electronic Resource
    Electronic Resource
    Springer
    Journal of clinical monitoring and computing 15 (1999), S. 227-232 
    ISSN: 1573-2614
    Keywords: simulation ; problem-based learning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine
    Notes: Abstract Background. We developed a problem-based learning exercise with a full-scale human patient simulator to teach residents the emergency management and differential diagnosis of acute intraoperative hypotension. Methods. We developed the exercise through the following steps: clear definition of learning objectives, preparation of an appropriate case stem, development of clinically realistic scenarios to illustrate objectives, and an interactive instructor to stimulate discussion. Results. The exercise focused on the differential diagnosis of intraoperative hypotension, and the acute treatment of hypovolemia, cardiac tamponade, tension pneumothorax, and anaphylaxis. Conclusions. Exercises on a full-scale patient simulator are a natural extension of problem-based learning. Recent research in learning theory provides the rationale for this teaching modality's potential as a learning tool.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 43
    Electronic Resource
    Electronic Resource
    Springer
    Journal of clinical monitoring and computing 15 (1999), S. 481-491 
    ISSN: 1573-2614
    Keywords: Cerebral blood flow ; cerebral vascular resistance ; cerebral perfusion pressure ; PaCO2 ; PaO2 ; cerebral metabolic rate of oxygen consumption ; educational model ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine
    Notes: Abstract Objective. To implement a realistic autoregulation mechanism to enhance an existing educational brain model that displays in real time the cerebral metabolic rate (CMRO2), cerebral blood flow (CBF), cerebral blood volume (CBV), intracranial pressure (ICP), and cerebral perfusion pressure (CPP). Methods. A dynamic cerebrovascular resistance (CVR) feedback loop adjusts automatically to maintain CBF within a range of the CPP and defines autoregulation. The model obtains physiologic parameters from a full-scale patient simulator. We assumed that oxygen demand and arterial partial pressure of carbon dioxide (CO2 responsivity) are the two major factors involved in determining CBF. In addition, our brain model increases oxygen extraction up to 70% once CBF becomes insufficient to support CMRO2. The model was validated against data from the literature. Results. The model's response varied less than 9% from the literature data. Similarly, based on correlation coefficients between the brain model and experimental data, a good fit was obtained for curves describing the relationship between CBF and PaCO2 at a mean arterial blood pressure of 150 mm Hg (R2 = 0.92) and 100 mm Hg (R2 = 0.70). Discussion. The autoregulated brain model, with incorporated CO2 responsivity and a variable oxygen extraction, automatically produces changes in CVR, CBF, CBV, and ICP consistent with literature reports, when run concurrently with a METI full-scale patient simulator (Medical Education Technologies, Inc., Sarasota, Florida). Once the model is enhanced to include herniation, vasospasm, and drug effects, its utility will be expanded beyond demonstrating only basic neuroanesthesia concepts.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
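The CVR feedback idea can be caricatured in a few lines: resistance adapts to hold CBF at a set point, but only within physical limits, outside of which CBF again follows CPP. A hedged Python sketch; the target CBF and the CVR limits below are illustrative placeholders, not the paper's calibrated values:

```python
def autoregulated_cbf(cpp, target_cbf=50.0, cvr_min=0.5, cvr_max=3.0):
    """CBF = CPP / CVR; CVR adapts toward the resistance that would hold
    CBF at target, clamped to the range the vasculature can actually reach."""
    cvr = cpp / target_cbf                  # resistance needed for perfect regulation
    cvr = max(cvr_min, min(cvr_max, cvr))   # vessels can only dilate/constrict so far
    return cpp / cvr
```

Inside the plateau the function returns the target exactly; below and above the limits CBF falls and rises with pressure, which is the qualitative autoregulation curve the educational model reproduces.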
  • 44
    Electronic Resource
    Electronic Resource
    Springer
    Journal of clinical monitoring and computing 12 (1995), S. 89-95 
    ISSN: 1573-2614
    Keywords: anesthesia ; atracurium ; closed-loop control ; drug delivery ; feedback control ; infusion protocol ; mivacurium ; neuromuscular blockade ; simulation ; vecuronium
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine
    Notes: Abstract Neuromuscular blockade is controlled during anesthesia by administering either bolus doses or a continuous infusion of a blocking agent. To test whether a constant infusion technique requires less attention and provides better control we used a computer to simulate neuromuscular blockade. Using the model we maintained 95% blockade with mivacurium, atracurium, and vecuronium. It required 1.2 changes per hour to maintain the blockade by continuous infusion; an average of 4.5 boluses per hour were required to maintain blockade by the bolus technique. When the bolus and continuous infusion techniques were combined, only 0.16 changes per hour were required. Atracurium was then given to ten patients during anesthesia, following the bolus plus continuous infusion protocol. After a bolus was given to obtain 100% twitch depression for tracheal intubation, neuromuscular function was assessed by train-of-four stimulation of the ulnar or facial nerves, observing the resultant muscle movement. When the first twitch of the train-of-four returned, relaxation was maintained by continuous infusion. A bolus was given and the drug infusion rate was changed whenever the level of relaxation changed from the desired one twitch of the train-of-four. The infusion rate was adjusted only 1.12±0.79 times per hour. The desired level of muscle relaxation was easily controlled using the bolus plus continuous infusion protocol. The infusion scheme might be implemented in future drug infusion pumps.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
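A simple way to see why the combined bolus-plus-infusion protocol needs so few adjustments is a one-compartment model with first-order elimination: the bolus fills the compartment quickly, and the infusion then balances elimination near the desired level. An illustrative Python sketch; the compartment model and every parameter value here are my assumptions, not the authors' pharmacokinetics:

```python
import math

def plasma_level(t, doses, infusion_rate, k_elim, volume=1.0):
    """Drug level at time t from bolus doses [(time, amount)] plus a constant
    infusion started at t=0, in a one-compartment first-order model."""
    level = sum(a / volume * math.exp(-k_elim * (t - t0))
                for t0, a in doses if t >= t0)
    # constant infusion approaches the steady state rate / (k_elim * volume)
    level += infusion_rate / (k_elim * volume) * (1 - math.exp(-k_elim * t))
    return level
```

Choosing the infusion rate so that the steady state matches the level the bolus achieved leaves only drift and inter-patient variation to correct, consistent with the low adjustment rate reported above.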
  • 45
    Electronic Resource
    Electronic Resource
    Springer
    Journal of clinical monitoring and computing 12 (1995), S. 105-112 
    ISSN: 1573-2614
    Keywords: modelling ; parameter estimation ; simulation ; ventilation mechanics ; data acquisition
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine
    Notes: Abstract Routine application of ‘rule of thumb’ parameter sets in clinical practice pushes model visions to the background, including the complete framework of assumptions, simplifications, suppositions and conditions. Yet models can be a very strong tool when applied selectively, that is, with a clear idea of purpose, definition, parameter selection and verification. This article discusses universal issues of modelling, based on ventilation mechanics models in intensive care medicine.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 46
    Electronic Resource
    Electronic Resource
    Springer
    Journal of clinical monitoring and computing 12 (1995), S. 231-239 
    ISSN: 1573-2614
    Keywords: gas exchange ; mechanical respiration ; model ; simulation ; ventilation mechanics
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine
    Notes: Abstract The mechanical respiration of intubated patients is a process influenced by many parameters and, through their many interactions, extremely complex. Logical analysis of this complexity by itself is extraordinarily difficult and very often leads to false conclusions. Computer simulation is therefore an important tool in the analysis of these processes. What is required is a model that describes the actual behavior of the system; however, it should not be overlooked that a model always describes only a portion of reality. Models dealing exclusively with ventilation mechanics or with gas exchange cannot simulate the interactions between the two. For this purpose an improved model is necessary, one that includes both partial processes and thereby captures the complexity of the system. Accordingly, both ventilation mechanics and gas exchange have been integrated equally into the newly developed model for the simulation program Simu Vent. The core of the program is a functional multicompartment model of the lungs that covers the partial processes of ventilation mechanics, gas transport, gas mixing and gas exchange. The respirator, blood circulation and peripheral compartments are also considered. The program runs under a graphical user interface and is easy to use, thanks to a specially designed graphic screen mask and the user's ability to interact with the program while a simulation is in progress. Comparison of measured with simulated values demonstrated that the measured curve can be reproduced with minimal error. Furthermore, the multicompartment model describes disturbances in distribution. Simu Vent is especially well suited to the description and analysis of the theoretical fundamentals of mechanical respiration, an aspect meaningful above all in education and research, which are the model's main areas of use.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 47
    Electronic Resource
    Electronic Resource
    Springer
    Journal of clinical monitoring and computing 14 (1998), S. 135-140 
    ISSN: 1573-2614
    Keywords: simulation ; peripheral nerve stimulation ; neuromuscular blockade ; thumb adduction
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine
    Notes: Abstract Objective. To design and fabricate a device to simulate evoked thumb adduction in response to ulnar nerve stimulation. Methods. We implemented a computer-controlled, motorized thumb (TWITCHER) that responds to ulnar nerve stimulation by an unmodified peripheral nerve stimulator. Clinically realistic response patterns are generated for both depolarizing and non-depolarizing muscle relaxants and three modes of stimulation (single twitch, train-of-four, tetanus). Results. The device has been used in a full-scale patient simulator for the last six years. Discussion. TWITCHER has been well received by participants in simulation exercises including the use of neuromuscular blocking drugs.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 48
    Electronic Resource
    Electronic Resource
    Springer
    Journal of clinical monitoring and computing 15 (1999), S. 17-21 
    ISSN: 1573-2614
    Keywords: Low flow anesthesia ; inhaled anesthetics ; anesthetic cost ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Medicine
    Notes: Abstract Introduction. Inducing general anesthesia often involves mask ventilation using high fresh gas flow (FGF) to administer anesthetic vapor prior to endotracheal intubation. A common practice is to turn the vaporizer off when the mask is removed from the patient's face to avoid room contamination (VAPOff). An alternative approach is to leave the vaporizer on and turn the FGF to minimum to reduce the amount of vapor-laden gas that can enter the room (FGFOff). The objective of this study is to compare the relative induction times and vapor costs associated with each induction strategy. Methods. Each induction method was simulated using Gasman® (MedMan Simulations, Chestnut Hill, MA) for Windows assuming a 70 kg patient. To simulate a period of mask ventilation with anesthetic vapor prior to intubation, the FGF was set to 6 l/min and the isoflurane vapor concentration to 1.2% (1 MAC) for three minutes with an alveolar ventilation of 5 l/min and cardiac output of 5 l/min. For the first simulation of the intubation period (FGFOff), FGF was turned to 150 ml/min, minute ventilation was set to zero and the vaporizer setting unchanged for one minute. Initial settings were then restored and the rate of change of anesthetic vapor concentration in the circuit (Vckt) and alveolus (Valv) followed for 10 minutes along with the cost of delivered vapor (V$). For the second simulation (VAPOff), after the initial three minutes of vapor delivery, the vaporizer was set to zero, minute ventilation was set to zero and the FGF left unchanged for one minute. The initial settings were then restored and Vckt, Valv and V$ followed for ten minutes. The cost calculation was based upon a 100 ml bottle of isoflurane at $72 per bottle. Actual gas flow was measured at the y-piece of a circle system for fresh gas flows from 0.15 to 6 l/min. Results. 
At the end of the simulated intubation period (minute 4), Vckt was unchanged with the FGFOff method whereas it had fallen by more than half with the VAPOff method. Using VAPOff, it took until the 6-minute mark for Vckt to return to the concentration that existed at minute three, prior to intubation. Throughout the 10-minute simulated induction, Valv using FGFOff exceeded Valv using VAPOff, although the difference became small at the end of the period. V$ was essentially identical at all time points. No flow into the room was measured at the minimum fresh gas flow, whereas higher fresh gas flows resulted in a significant portion of the fresh gas flowing into the room. Conclusions. The strategy of turning the FGF to minimum and leaving the vaporizer on during intubation does not contaminate the room and speeds induction by fostering a greater alveolar concentration than the VAPOff method. Cost savings are derived using FGFOff since a higher alveolar concentration is achieved at the same vapor cost. Additional cost savings are demonstrated since a low-flow technique is possible immediately after intubation when using FGFOff. The practice of turning off the vaporizer during endotracheal intubation while FGF remains high should be abandoned.
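The circuit-concentration contrast in the abstract can be illustrated with a toy single-compartment washout model. This is a minimal sketch, not Gasman's actual multi-compartment model: it assumes an ideally mixed 6 l circle system, no patient uptake, and simple Euler integration.

```python
# Toy single-compartment model of circuit anesthetic concentration.
# Illustrative assumptions only: 6 l circuit volume, ideal mixing,
# no patient uptake; dC/dt = FGF * (C_delivered - C) / V_circuit.

def simulate_circuit(fgf_l_min, delivered_pct, c0_pct, minutes,
                     v_circuit=6.0, dt=0.01):
    c = c0_pct
    steps = int(minutes / dt)
    for _ in range(steps):
        c += dt * fgf_l_min * (delivered_pct - c) / v_circuit
    return c

# VAPOff: vaporizer off (0% delivered), FGF stays at 6 l/min for one minute.
vap_off = simulate_circuit(fgf_l_min=6.0, delivered_pct=0.0,
                           c0_pct=1.2, minutes=1.0)

# FGFOff: FGF turned down to 0.15 l/min, vaporizer still delivering 1.2%.
fgf_off = simulate_circuit(fgf_l_min=0.15, delivered_pct=1.2,
                           c0_pct=1.2, minutes=1.0)

print(f"VAPOff circuit conc. after 1 min: {vap_off:.2f}%")   # below half of 1.2%
print(f"FGFOff circuit conc. after 1 min: {fgf_off:.2f}%")   # unchanged
```

Even this crude model reproduces the qualitative result: with the vaporizer off and high flow the circuit concentration falls by more than half within a minute, while with low flow and the vaporizer on it is essentially unchanged.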
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 49
    Electronic Resource
    Electronic Resource
    Springer
    Computational optimization and applications 12 (1999), S. 53-79 
    ISSN: 1573-2894
    Keywords: support vector machines ; linear programming ; classification ; data mining ; machine learning.
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We examine the problem of how to discriminate between objects of three or more classes. Specifically, we investigate how two-class discrimination methods can be extended to the multiclass case. We show how the linear programming (LP) approaches based on the work of Mangasarian and quadratic programming (QP) approaches based on Vapnik's Support Vector Machine (SVM) can be combined to yield two new approaches to the multiclass problem. In LP multiclass discrimination, a single linear program is used to construct a piecewise-linear classification function. In our proposed multiclass SVM method, a single quadratic program is used to construct a piecewise-nonlinear classification function. Each piece of this function can take the form of a polynomial, a radial basis function, or even a neural network. For problems with k > 2 classes, the SVM method as originally proposed required the construction of a two-class SVM to separate each class from the remaining classes. Similarly, k two-class linear programs can be used for the multiclass problem. We performed an empirical study of the original LP method, the proposed k LP method, the proposed single QP method and the original k QP methods. We discuss the advantages and disadvantages of each approach.
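The "k two-class classifiers" decomposition described above can be sketched in a few lines. This is an illustrative one-vs-rest skeleton with simple perceptrons standing in for the paper's LP and QP (SVM) solvers; the data and class names are invented.

```python
# One-vs-rest decomposition: train one two-class separator per class,
# then classify a point by the separator giving the highest score.
# Perceptrons replace the LP/QP solvers of the paper (illustration only).

def train_perceptron(xs, ys, epochs=200, lr=0.1):
    """Train a linear separator; ys are +1/-1 labels."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def one_vs_rest(xs, labels):
    classes = sorted(set(labels))
    models = {c: train_perceptron(xs, [1 if l == c else -1 for l in labels])
              for c in classes}
    def predict(x):
        def score(c):
            w, b = models[c]
            return sum(wi * xi for wi, xi in zip(w, x)) + b
        return max(classes, key=score)
    return predict

# Three linearly separable classes in the plane (illustrative only).
xs = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0), (10, 1)]
labels = ["a", "a", "b", "b", "c", "c"]
predict = one_vs_rest(xs, labels)
print([predict(x) for x in xs])   # ['a', 'a', 'b', 'b', 'c', 'c']
```

The single-program LP and QP formulations of the paper avoid exactly this decomposition by constructing the whole piecewise classifier in one optimization.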
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 50
    Electronic Resource
    Electronic Resource
    Springer
    User modeling and user adapted interaction 5 (1995), S. 67-86 
    ISSN: 1573-1391
    Keywords: Adaptive ; planning ; planning recognition ; simulation ; multi-agent ; multimedia ; tutoring ; artificial intelligence ; knowledge representation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract An intelligent tutoring system customizes its presentation of knowledge to the individual needs of each student based on a model of the student. Student models are more complex than other user models because the student is likely to have misconceptions. We have addressed several difficult issues in reasoning about a student's knowledge and skills within a real-time simulation-based training system. Our conceptual framework enables important aspects of the tutor's reasoning to be based upon simple, comprehensible representations that are the basis for a Student Centered Curriculum. We have built a system for teaching cardiac resuscitation techniques in which the decisions about how to teach are separated from the decisions about what to teach. The training context (i.e., choice of topics) is changed based on a tight interaction between student modeling techniques and simulation management. Although complex student models are still required to support detailed reasoning about how to teach, we argue that the decision about what to teach can be adequately supported by qualitatively simpler techniques, such as overlay models. This system was evaluated in formative studies involving medical school faculty and students. Construction of the student model involves monitoring student actions during a simulation and evaluating these actions in comparison with an expert model encoded as a multi-agent plan. The plan recognition techniques used in this system are novel and allow the expert knowledge to be expressed in a form that is natural for domain experts.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 51
    Electronic Resource
    Electronic Resource
    Springer
    Real-time systems 15 (1998), S. 61-90 
    ISSN: 1573-1383
    Keywords: system design ; executable specifications ; class-based modeling ; simulation ; formal methods ; model checking
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The Production Cell example was chosen by FZI (the Computer Science Research Center) in Karlsruhe to examine the benefits of formal methods for industrial applications. This example was implemented in more than 30 formalisms. This paper describes the implementation of the Production Cell in OBSERV. The OBSERV methodology for software development is based on rapid construction of an executable specification, or prototype, of a system, which may be examined and modified repeatedly to achieve the desired functionality. The objectives of OBSERV also include facilitating a smooth transition to a target system, and providing means for reusing specification, design, and code of systems, particularly real-time reactive systems. In this paper we show how the methods used in the OBSERV implementation address the requirements imposed by reactive systems. We describe the OBSERV implementation of the Production Cell and explain design decisions, with special emphasis on reusability and safety issues. We demonstrate how to take care of the safety and liveness properties required for this example. These properties are checked by means of simulation and formally proved with a model checker.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 52
    Electronic Resource
    Electronic Resource
    Springer
    Statistics and computing 6 (1996), S. 85-92 
    ISSN: 1573-1375
    Keywords: Cluster analysis ; Conditional Gaussian distribution ; EM algorithm ; graphical modelling ; location model ; mixture maximum likelihood ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract One possible approach to cluster analysis is the mixture maximum likelihood method, in which the data to be clustered are assumed to come from a finite mixture of populations. The method has been well developed, and much used, for the case of multivariate normal populations. Practical applications, however, often involve mixtures of categorical and continuous variables. Everitt (1988) and Everitt and Merette (1990) recently extended the normal model to deal with such data by incorporating the use of thresholds for the categorical variables. The computations involved in this model are so extensive, however, that it is only feasible for data containing very few categorical variables. In the present paper we consider an alternative model, known as the homogeneous Conditional Gaussian model in graphical modelling and as the location model in discriminant analysis. We extend this model to the finite mixture situation, obtain maximum likelihood estimates for the population parameters, and show that computation is feasible for an arbitrary number of variables. Some data sets are clustered by this method, and a small simulation study demonstrates characteristics of its performance.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Electronic Resource
    Electronic Resource
    Springer
    Applied intelligence 8 (1998), S. 113-121 
    ISSN: 1573-7497
    Keywords: genetic algorithms ; neural networks ; pole-cart system ; neuro-controller ; simulation ; gene activation ; multi-level chromosome
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract This paper describes the application of the Structured Genetic Algorithm (sGA) to design neuro-controllers for an unstable physical system. In particular, the approach uses a single unified genetic process to automatically evolve complete neural nets (both architectures and their weights) for controlling a simulated pole-cart system. Experimental results demonstrate the effectiveness of the sGA-evolved neuro-controllers for the task—to keep the pole upright (within a specified vertical angle) and the cart within the limits of the given track.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Electronic Resource
    Electronic Resource
    Springer
    Applied intelligence 9 (1998), S. 7-23 
    ISSN: 1573-7497
    Keywords: intelligent vehicles ; evolutionary algorithms ; simulation ; distributed AI
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Recent research in automated highway systems has ranged from low-level vision-based controllers to high-level route-guidance software. However, there is currently no system for tactical-level reasoning. Such a system should address tasks such as passing cars, making exits on time, and merging into a traffic stream. Many previous approaches have attempted to hand construct large rule-based systems which capture the interactions between multiple input sensors, dynamic and potentially conflicting subgoals, and changing roadway conditions. However, these systems are extremely difficult to design due to the large number of rules, the manual tuning of parameters within the rules, and the complex interactions between the rules. Our approach to this intermediate-level planning is a system which consists of a collection of autonomous agents, each of which specializes in a particular aspect of tactical driving. Each agent examines a subset of the intelligent vehicle's sensors and independently recommends driving decisions based on its local assessment of the tactical situation. This distributed framework allows different reasoning agents to be implemented using different algorithms. When using a collection of agents to solve a single task, it is vital to carefully consider the interactions between the agents. Since each reasoning agent contains several internal parameters, manually finding values for these parameters while accounting for the agents' possible interactions is a tedious and error-prone task. In our system, these parameters, and the system's overall dependence on each agent, are automatically tuned using a novel evolutionary optimization strategy, termed Population-Based Incremental Learning (PBIL). 
Our system, which employs multiple automatically trained agents, can competently drive a vehicle, both in terms of the user-defined evaluation metric and as measured by its behavior in several driving situations culled from real-life experience. In this article, we describe a method for multiple agent integration which is applied to the automated highway system domain. However, it also generalizes to many complex robotics tasks where multiple interacting modules must simultaneously be configured without individual module feedback.
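The PBIL strategy named above is simple enough to sketch. This is a minimal illustration on a toy "OneMax" objective (maximize the number of 1-bits); the bitstring stands in for the agent-parameter encoding used in the paper, and all parameter values are illustrative.

```python
import random

# Population-Based Incremental Learning (PBIL), minimal sketch:
# maintain a probability vector over bits, sample a population from it,
# and nudge the vector toward the best sample of each generation.

def pbil(n_bits=20, pop_size=30, lr=0.1, generations=100, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits                        # P(bit i = 1)
    best, best_fit = None, -1
    for _ in range(generations):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop_size)]
        elite = max(samples, key=sum)         # fitness = number of 1-bits
        if sum(elite) > best_fit:
            best, best_fit = elite, sum(elite)
        # Shift the probability vector toward this generation's best sample.
        p = [pi * (1 - lr) + bi * lr for pi, bi in zip(p, elite)]
    return best, best_fit

best, fit = pbil()
print(fit)   # approaches the optimum of 20
```

Unlike a conventional genetic algorithm, PBIL carries no explicit population between generations; the probability vector is the entire memory of the search.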
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    ISSN: 1573-756X
    Keywords: data cube ; data mining ; aggregation ; summarization ; database ; analysis ; query
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Data analysis applications typically aggregate data across many dimensions looking for anomalies or unusual patterns. The SQL aggregate functions and the GROUP BY operator produce zero-dimensional or one-dimensional aggregates. Applications need the N-dimensional generalization of these operators. This paper defines that operator, called the data cube or simply cube. The cube operator generalizes the histogram, cross-tabulation, roll-up, drill-down, and sub-total constructs found in most report writers. The novelty is that cubes are relations. Consequently, the cube operator can be embedded in more complex non-procedural data analysis programs. The cube operator treats each of the N aggregation attributes as a dimension of N-space. The aggregate of a particular set of attribute values is a point in this space. The set of points forms an N-dimensional cube. Super-aggregates are computed by aggregating the N-cube to lower dimensional spaces. This paper (1) explains the cube and roll-up operators, (2) shows how they fit in SQL, (3) explains how users can define new aggregate functions for cubes, and (4) discusses efficient techniques to compute the cube. Many of these features are being added to the SQL Standard.
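The cube operator described above can be sketched by aggregating over every subset of the dimension attributes, with an 'ALL' placeholder marking each rolled-up dimension. The table and column names below are invented for illustration.

```python
from itertools import combinations

# Minimal CUBE sketch: for each subset of the dimensions, group the rows
# on the kept dimensions and substitute 'ALL' for the rolled-up ones.

def cube(rows, dims, measure, agg=sum):
    out = {}
    for r in range(len(dims) + 1):
        for kept in combinations(dims, r):
            groups = {}
            for row in rows:
                key = tuple(row[d] if d in kept else "ALL" for d in dims)
                groups.setdefault(key, []).append(row[measure])
            for key, vals in groups.items():
                out[key] = agg(vals)
    return out

sales = [
    {"model": "chevy", "year": 1994, "sales": 10},
    {"model": "chevy", "year": 1995, "sales": 20},
    {"model": "ford",  "year": 1994, "sales": 5},
]
result = cube(sales, dims=("model", "year"), measure="sales")
print(result[("ALL", "ALL")])      # grand total: 35
print(result[("chevy", "ALL")])    # chevy across all years: 30
```

With N dimensions this produces 2^N group-bys in one pass over the subsets, which is exactly the super-aggregate lattice the paper's efficient computation techniques address.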
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 56
    ISSN: 1573-756X
    Keywords: data mining ; knowledge discovery ; attribute focusing ; basketball ; NBA
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Advanced Scout is a PC-based data mining application used by National Basketball Association (NBA) coaching staffs to discover interesting patterns in basketball game data. We describe the Advanced Scout software from the perspective of data mining and knowledge discovery. This paper highlights the pre-processing of raw data that the program performs, describes the data mining aspects of the software and how the interpretation of patterns supports the process of knowledge discovery. The underlying technique of attribute focusing as the basis of the algorithm is also described. The process of pattern interpretation is facilitated by allowing the user to relate patterns to video tape.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 57
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 1 (1997), S. 203-224 
    ISSN: 1573-756X
    Keywords: causal discovery ; data mining ; observational data
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract This paper presents a simple, efficient computer-based method for discovering causal relationships from databases that contain observational data. Observational data is passively observed, as contrasted with experimental data. Most of the databases available for data mining are observational. There is great potential for mining such databases to discover causal relationships. We illustrate how observational data can constrain the causal relationships among measured variables, sometimes to the point that we can conclude that one variable is causing another variable. The presentation here is based on a constraint-based approach to causal discovery. A primary purpose of this paper is to present the constraint-based causal discovery method in the simplest possible fashion in order to (1) readily convey the basic ideas that underlie more complex constraint-based causal discovery techniques, and (2) permit interested readers to rapidly program and apply the method to their own databases, as a start toward using more elaborate causal discovery algorithms.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 2 (1998), S. 39-68 
    ISSN: 1573-756X
    Keywords: data mining ; market basket ; association rules ; dependence rules ; closure properties ; text mining
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract One of the more well-studied problems in data mining is the search for association rules in market basket data. Association rules are intended to identify patterns of the type: “A customer purchasing item A often also purchases item B.” Motivated partly by the goal of generalizing beyond market basket data and partly by the goal of ironing out some problems in the definition of association rules, we develop the notion of dependence rules that identify statistical dependence in both the presence and absence of items in itemsets. We propose measuring significance of dependence via the chi-squared test for independence from classical statistics. This leads to a measure that is upward-closed in the itemset lattice, enabling us to reduce the mining problem to the search for a border between dependent and independent itemsets in the lattice. We develop pruning strategies based on the closure property and thereby devise an efficient algorithm for discovering dependence rules. We demonstrate our algorithm's effectiveness by testing it on census data, text data (wherein we seek term dependence), and synthetic data.
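The chi-squared test for item dependence described above can be sketched for a single pair of items: build the 2x2 presence/absence contingency table and compare the statistic against the 5% critical value (3.84 for one degree of freedom). The basket data are invented.

```python
# Chi-squared dependence test for two items in market-basket data:
# observed counts of (A present?, B present?) vs. counts expected
# under independence.

def chi_squared_2x2(baskets, a, b):
    n = len(baskets)
    obs = {(pa, pb): 0 for pa in (True, False) for pb in (True, False)}
    for basket in baskets:
        obs[(a in basket, b in basket)] += 1
    count_a = sum(obs[(True, pb)] for pb in (True, False))
    count_b = sum(obs[(pa, True)] for pa in (True, False))
    stat = 0.0
    for pa in (True, False):
        for pb in (True, False):
            pa_n = count_a if pa else n - count_a
            pb_n = count_b if pb else n - count_b
            expected = pa_n * pb_n / n       # independence assumption
            stat += (obs[(pa, pb)] - expected) ** 2 / expected
    return stat

baskets = ([{"tea", "coffee"}] * 20 + [{"tea"}] * 5 +
           [{"coffee"}] * 5 + [set()] * 20)
stat = chi_squared_2x2(baskets, "tea", "coffee")
print(stat, stat > 3.84)   # 18.0 True: dependent at the 5% level
```

Because the statistic is upward-closed in the itemset lattice, a dependent pair like this one guarantees that every superset containing it is also flagged dependent, which is what enables the paper's border search.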
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 2 (1998), S. 283-304 
    ISSN: 1573-756X
    Keywords: data mining ; cluster analysis ; clustering algorithms ; categorical data
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The k-means algorithm is well known for its efficiency in clustering large data sets. However, working only on numeric values prohibits it from being used to cluster real world data containing categorical values. In this paper we present two algorithms which extend the k-means algorithm to categorical domains and domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes in the clustering process to minimise the clustering cost function. With these extensions the k-modes algorithm enables the clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow for clustering objects described by mixed numeric and categorical attributes. We use the well known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real world data sets with half a million objects each show that the two algorithms are efficient when clustering large data sets, which is critical to data mining applications.
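The k-modes procedure described above can be sketched directly: simple-matching dissimilarity, modes in place of means, and frequency-based mode updates. The naive "first k objects" initialization and the toy data are illustrative shortcuts, not the paper's method.

```python
# Minimal k-modes sketch for purely categorical objects (tuples).

def matching_dissimilarity(x, y):
    return sum(xi != yi for xi, yi in zip(x, y))

def mode_of(cluster):
    # Per-attribute most frequent category.
    return tuple(max(set(col), key=col.count) for col in zip(*cluster))

def k_modes(objects, k, iters=10):
    modes = list(objects[:k])                 # naive init: first k objects
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for obj in objects:
            j = min(range(k),
                    key=lambda j: matching_dissimilarity(obj, modes[j]))
            clusters[j].append(obj)
        modes = [mode_of(c) if c else modes[j]
                 for j, c in enumerate(clusters)]
    labels = [min(range(k),
                  key=lambda j: matching_dissimilarity(obj, modes[j]))
              for obj in objects]
    return labels, modes

data = [
    ("red", "small", "round"),
    ("blue", "large", "flat"),
    ("red", "small", "oval"),
    ("blue", "large", "round"),
    ("red", "medium", "round"),
    ("green", "large", "flat"),
]
labels, modes = k_modes(data, k=2)
print(labels)   # [0, 1, 0, 1, 0, 1]
print(modes)
```

The k-prototypes extension of the paper simply adds a weighted squared-Euclidean term for the numeric attributes to this matching dissimilarity.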
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 2 (1998), S. 233-262 
    ISSN: 1573-756X
    Keywords: sampling ; data mining ; knowledge discovery ; association rules ; update ; maintenance ; confidence interval
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract By nature, sampling is an appealing technique for data mining, because approximate solutions in most cases may already satisfy the needs of the users. We attempt to use sampling techniques to address the problem of maintaining discovered association rules. Some studies have been done on the problem of maintaining the discovered association rules when updates are made to the database. All proposed methods must examine not only the changed part but also the unchanged part of the original database, which is very large, and hence take much time. Worse yet, if such maintenance is performed frequently but the underlying rule set has not changed much, the effort is mostly wasted. In this paper, we devise an algorithm which employs sampling techniques to estimate the difference between the association rules in a database before and after the database is updated. The estimated difference can be used to determine whether we should update the mined association rules or not. If the estimated difference is small, then the rules in the original database are still a good approximation to those in the updated database. Hence, we do not have to spend the resources to update the rules. We can accumulate more updates before actually updating the rules, thereby avoiding the overhead of updating the rules too frequently. Experimental results show that our algorithm is very efficient and highly accurate.
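The core idea above can be sketched for a single itemset: estimate its support in the old and updated databases from small random samples, and trigger re-mining only when the estimated drift exceeds a tolerance. The sample size, tolerance, and data are illustrative choices, not the paper's derived bounds.

```python
import random

# Sampling-based drift check for one itemset's support.

def estimated_support(transactions, itemset, sample_size, rng):
    sample = rng.sample(transactions, sample_size)
    return sum(itemset <= t for t in sample) / sample_size

rng = random.Random(42)
old_db = [{"bread", "butter"}] * 600 + [{"bread"}] * 400   # true support 0.6
new_db = [{"bread", "butter"}] * 300 + [{"bread"}] * 700   # true support 0.3

old_est = estimated_support(old_db, {"bread", "butter"}, 200, rng)
new_est = estimated_support(new_db, {"bread", "butter"}, 200, rng)

if abs(new_est - old_est) > 0.05:      # illustrative tolerance
    print("support drifted; update the mined rules")
else:
    print("rules still approximately valid; defer the update")
```

The saving is that each check touches only a sample of the updated database rather than rescanning both the changed and unchanged parts in full.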
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 61
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 2 (1998), S. 391-398 
    ISSN: 1573-756X
    Keywords: data mining ; knowledge discovery ; pharmacy ; point of sales
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Pharma, a drugstore chain in Japan, has been remarkably successful in the effective use of data mining. From over one terabyte of sales data accumulated in databases, it has derived much interesting and useful knowledge that in turn has been applied to produce profits. In this paper, we explain several interesting cases of knowledge discovery at Pharma. We then discuss the innovative features of the data mining system developed at Pharma that led to meaningful knowledge discovery.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 62
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 3 (1999), S. 197-217 
    ISSN: 1573-756X
    Keywords: binary decision tree ; classification ; data mining ; entropy ; Gini index ; impurity ; optimal splitting
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract To find the optimal branching of a nominal attribute at a node in an L-ary decision tree, one is often forced to search over all possible L-ary partitions for the one that yields the minimum impurity measure. For binary trees (L = 2), when there are just two classes, a short-cut search is possible that is linear in n, the number of distinct values of the attribute. For the general case in which the number of classes, k, may be greater than two, Burshtein et al. have shown that the optimal partition satisfies a condition that involves the existence of 2L hyperplanes in the class probability space. We derive a property of the optimal partition for concave impurity measures (including in particular the Gini and entropy impurity measures) in terms of the existence of L vectors in the dual of the class probability space, which implies the earlier condition. Unfortunately, these insights still do not offer a practical search method when n and k are large, even for binary trees. We therefore present a new heuristic search algorithm to find a good partition. It is based on ordering the attribute's values according to their principal component scores in the class probability space, and is linear in n. We demonstrate the effectiveness of the new method through Monte Carlo simulation experiments and compare its performance against other heuristic methods.
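The two-class shortcut mentioned above can be sketched directly: sort the nominal values by their class-1 probability, then only the n-1 contiguous splits of that ordering need to be examined for the minimum weighted impurity. The Gini measure and the example counts are illustrative.

```python
# Two-class optimal binary split of a nominal attribute: after sorting
# values by class-1 probability, the best subset is one of the n-1
# contiguous prefixes of that ordering.

def gini(counts):
    total = sum(counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in counts)

def best_binary_split(value_class_counts):
    """value_class_counts: {value: [n_class0, n_class1]}"""
    ordered = sorted(value_class_counts,
                     key=lambda v: (value_class_counts[v][1]
                                    / sum(value_class_counts[v])))
    totals = [sum(c[0] for c in value_class_counts.values()),
              sum(c[1] for c in value_class_counts.values())]
    n = sum(totals)
    left, right = [0, 0], totals[:]
    best = (float("inf"), None)
    for i in range(len(ordered) - 1):        # the n-1 contiguous splits
        for k in (0, 1):
            left[k] += value_class_counts[ordered[i]][k]
            right[k] -= value_class_counts[ordered[i]][k]
        impurity = (sum(left) * gini(left) + sum(right) * gini(right)) / n
        if impurity < best[0]:
            best = (impurity, set(ordered[:i + 1]))
    return best

counts = {"red": [8, 2], "blue": [1, 9], "green": [7, 3], "yellow": [2, 8]}
impurity, left_values = best_binary_split(counts)
print(left_values)   # {'red', 'green'} vs {'blue', 'yellow'}
```

The paper's heuristic for k > 2 classes generalizes exactly this trick, replacing the scalar class-1 probability with the values' first principal component scores in the class probability space.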
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 63
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 3 (1999), S. 219-225 
    ISSN: 1573-756X
    Keywords: data mining ; knowledge discovery ; churn prediction application ; predictive modeling
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We describe CHAMP (CHurn Analysis, Modeling, and Prediction), an automated system for modeling cellular customer behavior on a large scale. Using historical data from GTE's data warehouse for cellular phone customers, every month CHAMP identifies churn factors for several geographic regions and updates models to generate churn scores predicting who is likely to churn within the near future. CHAMP is capable of developing customized monthly models and churn scores for over one hundred GTE cellular phone markets totaling over 5 million customers.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 3 (1999), S. 237-261 
    ISSN: 1573-756X
    Keywords: data mining ; parallel processing ; classification ; scalability ; decision trees
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Classification decision tree algorithms are used extensively for data mining in many domains, such as retail target marketing and fraud detection. Highly parallel algorithms for constructing classification decision trees are desirable for handling large data sets in a reasonable amount of time. Algorithms for building classification decision trees have natural concurrency, but are difficult to parallelize due to the inherently dynamic nature of the computation. In this paper, we present parallel formulations of a classification decision tree learning algorithm based on induction. We describe two basic parallel formulations: one based on the Synchronous Tree Construction approach and the other on the Partitioned Tree Construction approach. We discuss the advantages and disadvantages of each and propose a hybrid method that combines the strengths of both. We also provide an analysis of the computation and communication costs of the proposed hybrid method. Moreover, experimental results on an IBM SP-2 demonstrate excellent speedups and scalability.
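    The Synchronous Tree Construction approach can be illustrated with a small single-process simulation. This is a hypothetical sketch (the function names, the one-vs-rest split, and the choice of Gini impurity are illustrative, not taken from the paper): each chunk plays the role of a data-partitioned processor, and only small class-count matrices are exchanged (summed, as an all-reduce would do) so the split can be chosen globally without moving raw records.

    ```python
    import numpy as np

    def local_class_counts(x_chunk, y_chunk, n_values, n_classes):
        # Each "processor" summarizes its chunk of one categorical feature
        # as a small (n_values x n_classes) class-count matrix.
        counts = np.zeros((n_values, n_classes), dtype=np.int64)
        np.add.at(counts, (x_chunk, y_chunk), 1)
        return counts

    def gini(c):
        t = c.sum()
        if t == 0:
            return 0.0
        p = c / t
        return 1.0 - float(np.sum(p ** 2))

    def synchronous_best_value(chunks, n_values, n_classes):
        # "All-reduce" step: only the count matrices are summed across
        # processors; the records themselves never move.
        total = sum(local_class_counts(x, y, n_values, n_classes)
                    for x, y in chunks)
        grand = total.sum(axis=0)
        best_v, best_imp = None, float("inf")
        for v in range(n_values):  # evaluate each one-vs-rest split
            left, right = total[v], grand - total[v]
            imp = (left.sum() * gini(left) + right.sum() * gini(right)) / grand.sum()
            if imp < best_imp:
                best_v, best_imp = v, imp
        return best_v, best_imp, total
    ```

    The communication volume here depends only on n_values × n_classes, not on the number of records, which is what makes the synchronous scheme attractive when the data set is large but the statistics are small.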
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    Electronic Resource
    Electronic Resource
    Springer
    Data mining and knowledge discovery 3 (1999), S. 291-314 
    ISSN: 1573-756X
    Keywords: association rules ; data mining ; data skewness ; workload balance ; parallel mining ; parallel computing
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Association rule mining is an important new problem in data mining, with crucial applications in decision support and marketing strategy. We propose FPM, an efficient parallel algorithm for mining association rules on a distributed shared-nothing parallel system. Its efficiency is attributed to the incorporation of two powerful candidate-set pruning techniques, distributed and global pruning, which are sensitive to two data distribution characteristics: data skewness and workload balance. The prunings are very effective when both the skewness and balance are high. We have implemented FPM on an IBM SP2 parallel system. The performance studies show that FPM consistently outperforms CD, a parallel version of the representative Apriori algorithm (Agrawal and Srikant, 1994). The results also validate our observations on the effectiveness of the two pruning techniques with respect to the data distribution characteristics, and show that FPM has good scalability and parallelism, which can be tuned for different business applications.
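    Distributed pruning rests on a pigeonhole argument: if an itemset is below the local support threshold at every site, its global count must be below the global threshold, so a k-itemset can be globally frequent only if it is locally frequent at at least one site. A minimal sketch of that principle (not the FPM implementation itself; the names are illustrative):

    ```python
    from collections import Counter
    from itertools import combinations

    def local_counts(transactions, k):
        # Count every k-itemset occurring in this site's transactions.
        c = Counter()
        for t in transactions:
            for itemset in combinations(sorted(t), k):
                c[itemset] += 1
        return c

    def distributed_prune(partitions, k, min_support):
        """Keep a k-itemset as a candidate only if some site finds it
        locally frequent; then verify exact global counts for the
        survivors.  min_support is a fraction of the transaction count."""
        n_total = sum(len(p) for p in partitions)
        site_counts = [local_counts(p, k) for p in partitions]
        candidates = set()
        for part, counts in zip(partitions, site_counts):
            local_min = min_support * len(part)
            candidates |= {i for i, n in counts.items() if n >= local_min}
        # global check restricted to the pruned candidate set
        return {i for i in candidates
                if sum(c[i] for c in site_counts) >= min_support * n_total}
    ```

    Itemsets frequent nowhere locally are never counted globally, which is where the savings come from; the more skewed the data and the better the workload balance, the more candidates this step eliminates, consistent with the abstract's observation.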
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    ISSN: 1573-7578
    Keywords: enterprise integration ; workflow management ; agents interoperation ; heterogeneous databases ; scientific decision support ; data mining
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The Carnot project was an ambitious research project in heterogeneous databases. It integrated a variety of techniques to address a wide range of problems in achieving interoperation in heterogeneous environments. Here we describe some of the major implemented applications of this project. These applications concern (a) accessing a legacy scientific database, (b) automating a workflow involving legacy systems, (c) cleaning data, and (d) retrieving semantically appropriate information from structured databases in response to text queries. These applications support scientific decision support, business process management, data integrity enhancement, and analytical decision support, respectively. They demonstrate Carnot's capabilities for (a) heterogeneous query processing, (b) relaxed transaction and workflow management, (c) knowledge discovery, and (d) heterogeneous resource model integration.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...