ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Document type: Articles (85)
  • Keyword: Neural networks (85)
  • Publisher: Springer (85); further publisher facets: American Association for the Advancement of Science, Wiley
  • Topic: Computer Science (85)
  • 1
    Springer
    Neural computing & applications 5 (1997), pp. 99-105
    ISSN: 1433-3058
    Keywords: Neural networks ; Taxonomic expertise ; Committee classification
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract It has been established that committee classifiers, in which the outputs of different individual network classifiers are combined in various ways, can produce better accuracy than the best individual in the committee. We describe results showing that these advantages are obtained when neural networks are applied to a taxonomic problem in marine science: the classification of images of marine phytoplankton. Significant benefits were found when individual networks trained on different classes of input, with comparable individual performances, were combined. Combining networks of very different accuracy did not improve performance when measured against the best single network, but neither was it reduced. An alternative architecture, which we term a collective machine, in which the different data types are combined in a single network, was found to have significantly better accuracy than the committee machine architectures. The performance gains and resilience to non-discriminatory types of data suggest the techniques have great utility in the development of general-purpose network classifiers. (A minimal sketch of the committee-combination rule appears after this result list.)
    Type of Medium: Electronic Resource
  • 2
    Springer
    Machine learning 16 (1994), pp. 161-183
    ISSN: 0885-6125
    Keywords: Neural networks ; PAC learning ; nonoverlapping ; read-once formula ; learning with queries
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We investigate, within the PAC learning model, the problem of learning nonoverlapping perceptron networks (also known as read-once formulas over a weighted threshold basis). These are loop-free neural nets in which each node has only one outgoing weight. We give a polynomial-time algorithm that PAC learns any nonoverlapping perceptron network using examples and membership queries. The algorithm is able to identify both the architecture and the weight values necessary to represent the function to be learned. Our results shed some light on the effect of the overlap on the complexity of learning in neural networks.
    Type of Medium: Electronic Resource
  • 3
    Springer
    Machine learning 27 (1997), pp. 173-200
    ISSN: 0885-6125
    Keywords: Neural networks ; theory refinement ; knowledge-based neural networks ; probability density estimation ; knowledge extraction ; mixture densities ; combining knowledge bases ; Bayesian learning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction is focussed on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions, which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data, either by a Bayesian approach, by a weighted combination of knowledge bases, or by generating artificial training data representing the prior knowledge. We validate our approach using a standard statistical data set. (A minimal sketch of the rule-to-network correspondence appears after this result list.)
    Type of Medium: Electronic Resource
  • 4
    Springer
    Minds and machines 2 (1992), pp. 71-83
    ISSN: 1572-8641
    Keywords: Neural networks ; dynetic problem ; infinite networks ; neural computability ; neurocomputing ; scalability ; stability problem ; Turing machine ; universal neural network ; cellular automata
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Philosophy
    Notes: Abstract The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks. Stability must be the key ingredient in the solution of a problem by a neural network without external intervention. Infinite discrete networks seem to be the proper objects of study for a theory of neural computability which aims at characterizing problems solvable, in principle, by a neural network. Precise definitions of such problems and their solutions are given. Some consequences are explored, in particular, the neural unsolvability of the Stability Problem for neural networks.
    Type of Medium: Electronic Resource
  • 5
    Springer
    Neural computing & applications 1 (1993), pp. 23-31
    ISSN: 1433-3058
    Keywords: GANNET ; Genetic algorithms ; Neural networks ; Exclusive-or ; Two-spirals
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper describes the use of an evolutionary design system known as GANNET to synthesize the structure of neural networks. Initial results are presented for two benchmark problems: the exclusive-or and the two-spirals. A variety of performance criteria and design components are used and comparisons are drawn between the performance of genetic algorithms and other related techniques on these problems.
    Type of Medium: Electronic Resource
  • 6
    Springer
    Neural computing & applications 1 (1993), pp. 17-22
    ISSN: 1433-3058
    Keywords: Adaptive architecture ; Construction ; Modelling ; Neural networks ; Pruning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract A constructive algorithm is proposed for feed-forward neural networks which uses node-splitting in the hidden layers to build large networks from smaller ones. The small network forms an approximate model of a set of training data, and the split creates a larger, more powerful network which is initialised with the approximate solution already found. The insufficiency of the smaller network in modelling the system which generated the data leads to oscillation in those hidden nodes whose weight vectors cover regions of the input space where more detail is required in the model. These nodes are identified and split in two, allowing the new nodes to cover the two main modes of the oscillating vector. Nodes are selected for splitting either by principal component analysis of the oscillating weight vectors or by examining the Hessian matrix of second derivatives of the network error with respect to the weights. (A minimal sketch of the splitting step appears after this result list.)
    Type of Medium: Electronic Resource
  • 7
    Springer
    Neural computing & applications 1 (1993), pp. 107-114
    ISSN: 1433-3058
    Keywords: Neural networks ; Principal component analysis ; Diffraction tomography ; Preprocessing
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The use of principal component analysis in preprocessing neural network input data is explored. Four preprocessing schemes are compared in an example problem, and the theoretical basis for the results is discussed. A preconditioning method for the principal components is introduced here, combining normalisation and improved conditioning. The techniques are applied to an object location problem in diffraction tomography. The spectrally analysed scattered field from an irradiated object forms the input to a multilayer perceptron neural network, trained by backpropagation to calculate the coordinates of the object's centre in 2D. (A minimal sketch of this kind of preprocessing appears after this result list.)
    Type of Medium: Electronic Resource
  • 8
    Springer
    Neural computing & applications 1 (1993), pp. 248-255
    ISSN: 1433-3058
    Keywords: Neural networks ; Feedback networks ; Recurrent networks ; Meat industry ; Pruning ; Preprocessing ; Real-life application
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper describes a full-sized integrated industrial application of neural networks of considerable extent and economic importance. Since May 1991, 16 million pig carcases have been individually graded every year in the Danish meat industry using a neural information processing system (NIPS) implemented at the 31 Classification Centres (CC). A CC is an 8 × 4 × 4 metre robot which fixes the pigs on a carousel and automatically positions and inserts nine probes into the carcases. A probe records a one-dimensional image, which is processed using multilayer perceptrons, yielding meat and fat thicknesses. The thicknesses from all the probes are subsequently fed to a recurrent neural network, which locates and predicts faulty or missing thicknesses. If too many values have to be predicted, the robot repeats the measurement on the corresponding probes. The recurrent network employs a non-standard training method which alters the training data as well as the weights.
    Type of Medium: Electronic Resource
  • 9
    Springer
    Neural computing & applications 1 (1993), pp. 280-286
    ISSN: 1433-3058
    Keywords: Neural networks ; Financial forecast ; Parameter significance ; Currency exchange rate
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper deals with the problem of parameter significance estimation, and its application to currency exchange rate prediction. The basic problem is that, over the years, practitioners in the field of financial engineering have developed dozens of technical and fundamental indicators on the basis of which they try to predict financial time series. The practitioners are now faced with the problem of finding out which combinations of those indicators are most significant or relevant, and how their significance changes over time. The authors propose a novel neural architecture called SupNet for estimating the significance of various parameters. The methodology is based on the principle of penalizing those features that are the largest contributors to the error term. Two algorithms based on this principle are proposed. This approach differs from related methodologies, which are based on the principle of removing the parameters with the least significance. The proposed methodology is demonstrated on the next-day returns of the DM-US currency exchange rate, and promising results are obtained. (A minimal sketch of the error-contribution principle appears after this result list.)
    Type of Medium: Electronic Resource
  • 10
    Springer
    Neural computing & applications 2 (1994), pp. 2-12
    ISSN: 1433-3058
    Keywords: Hand-eye co-ordination ; Kohonen network ; Neural networks
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The work presented in this paper shows how the association of proprioceptive and exteroceptive stimuli can enable a Kohonen neural network, controlling a robot arm, to learn hand-eye co-ordination so that the arm can reach for and track a visually presented target. The approach presented in this work assumes no a priori model of arm kinematics or of the imaging characteristics of the cameras. No explicit representation, such as homogeneous transformations, is used for the specification of robot pose, and camera calibration and triangulation are done implicitly as the system adapts and learns its hand-eye co-ordination by experience. This research is validated on physical devices and not by simulation. (A minimal sketch of the underlying self-organising-map update appears after this result list.)
    Type of Medium: Electronic Resource
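
Record 1 above combines the outputs of several independently trained networks into a committee classifier, while its "collective machine" feeds the different data types into a single network instead. The sketch below shows two common combination rules (averaging class probabilities, and majority voting); the arrays, class count, and random inputs are illustrative placeholders, not data or code from the paper.

```python
import numpy as np

def committee_average(prob_outputs):
    """Combine classifiers by averaging their class-probability outputs.

    prob_outputs: list of arrays, each of shape (n_samples, n_classes),
    one array per committee member. Returns predicted class indices.
    """
    stacked = np.stack(prob_outputs)        # (n_members, n_samples, n_classes)
    mean_probs = stacked.mean(axis=0)       # average over committee members
    return mean_probs.argmax(axis=1)

def committee_vote(prob_outputs):
    """Alternative rule: each member votes for its own argmax; majority wins."""
    votes = np.stack([p.argmax(axis=1) for p in prob_outputs])  # (n_members, n_samples)
    n_samples, n_classes = prob_outputs[0].shape
    counts = np.zeros((n_samples, n_classes), dtype=int)
    for member_votes in votes:
        counts[np.arange(n_samples), member_votes] += 1
    return counts.argmax(axis=1)

# Illustrative outputs of three networks trained on different input features
# (e.g. different descriptors of the same phytoplankton image).
rng = np.random.default_rng(0)
members = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
print("averaged committee:", committee_average(members))
print("majority vote:     ", committee_vote(members))
```
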
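
Record 3 above states that networks of Gaussian basis functions can be generated from simple probabilistic rules. The sketch below only illustrates that correspondence in a generic way: each hypothetical rule contributes one Gaussian basis function (centre = typical feature values, width = tolerance, weight = the rule's conclusion), and the network output is the responsibility-weighted combination of the conclusions. The rules and numbers are invented for illustration and are not the authors' construction.

```python
import numpy as np

# Each hypothetical rule: (centre of typical feature values, width, asserted class-1 probability).
rules = [
    (np.array([0.0, 0.0]), 0.5, 0.9),   # "near the origin -> probably class 1"
    (np.array([2.0, 2.0]), 0.8, 0.1),   # "near (2, 2)     -> probably class 0"
]

def rbf_from_rules(x, rules):
    """Evaluate a Gaussian-basis-function network generated from the rules above.

    Each rule becomes one basis function; the output mixes the rules'
    conclusions weighted by their normalised activations (responsibilities).
    """
    acts = np.array([np.exp(-np.sum((x - c) ** 2) / (2.0 * w ** 2)) for c, w, _ in rules])
    resp = acts / acts.sum()                      # responsibility of each rule
    conclusions = np.array([p for _, _, p in rules])
    return float(resp @ conclusions)              # probability of class 1

print(rbf_from_rules(np.array([0.1, -0.2]), rules))  # close to 0.9
print(rbf_from_rules(np.array([1.9, 2.1]), rules))   # close to 0.1
```
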
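
Record 6 above splits an oscillating hidden node into two along the principal component of its weight-vector fluctuations. The fragment below sketches only that splitting step, assuming the fluctuations of one node's incoming weight vector have already been recorded during training; the function name, offset scale, and the synthetic oscillation data are illustrative assumptions, not the paper's code.

```python
import numpy as np

def split_node(weight_history, scale=1.0):
    """Split one hidden node into two children.

    weight_history: array of shape (n_epochs, n_inputs) holding the node's
    incoming weight vector as it oscillated during training. Returns two new
    weight vectors offset along the first principal component of the
    recorded fluctuations, so each child can cover one mode.
    """
    mean_w = weight_history.mean(axis=0)
    centred = weight_history - mean_w
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]                                 # first principal component
    spread = s[0] / np.sqrt(len(weight_history))      # spread along that direction
    return mean_w + scale * spread * direction, mean_w - scale * spread * direction

# Illustrative history: a 3-input weight vector oscillating between two modes.
rng = np.random.default_rng(1)
modes = np.array([[1.0, -0.5, 0.2], [0.2, 0.8, -0.1]])
history = modes[rng.integers(0, 2, size=50)] + 0.02 * rng.normal(size=(50, 3))
w_a, w_b = split_node(history)
print(w_a, w_b)
```
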
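
Record 7 above compares principal-component preprocessing schemes and introduces a preconditioning step that combines normalisation and improved conditioning. The sketch below shows a plain, generic version of that kind of pipeline (centre the data, project onto the leading principal components, then rescale each component to unit variance); it is not the paper's specific scheme, and all names and data are illustrative.

```python
import numpy as np

def pca_precondition(X, n_components):
    """Project data onto the leading principal components and normalise them.

    Returns the transformed data plus the quantities needed to apply the
    same transformation to new network inputs.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    components = vt[:n_components]                  # (n_components, n_features)
    projected = Xc @ components.T                   # principal-component scores
    scale = projected.std(axis=0) + 1e-12           # unit variance per component
    return projected / scale, mean, components, scale

def apply_precondition(X_new, mean, components, scale):
    """Apply the stored transformation to fresh inputs before the network sees them."""
    return (X_new - mean) @ components.T / scale

# Illustrative correlated input data, reduced to 2 normalised components.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0, 1.0], [0.0, 0.5, 0.2]])
Z, mean, comps, scale = pca_precondition(X, n_components=2)
print(Z.std(axis=0))   # ~1.0 for each retained component
```
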
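
Record 9 above proposes SupNet, which penalizes the features that contribute most to the error term. The abstract does not describe its two algorithms, so the sketch below only illustrates the stated principle with a simple ablation test on a fixed predictor: a feature is "destroyed" by clamping it to its mean, and features whose removal lowers the error are the candidates for penalisation. The predictor, data, and function name are placeholders, not the authors' method.

```python
import numpy as np

def error_contribution(predict, X, y):
    """Estimate each feature's contribution to the squared error.

    Positive values mean the error drops when the feature is clamped to its
    mean, i.e. the feature was contributing to the error (penalise it);
    negative values mean it was helping (keep it).
    """
    base_error = np.mean((predict(X) - y) ** 2)
    contrib = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = X[:, j].mean()                       # destroy feature j only
        contrib.append(base_error - np.mean((predict(Xp) - y) ** 2))
    return np.array(contrib)

# Illustrative stand-in for a trained predictor that relies on a useful
# feature (0) and on a pure-noise feature (2).
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=1000)
predict = lambda Z: 2.0 * Z[:, 0] + 0.5 * Z[:, 2]
print(error_contribution(predict, X, y))
# Feature 2 gets a large positive score (penalise); feature 0 a negative one (keep).
```
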
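
Record 10 above uses a Kohonen network to associate visual target positions with arm configurations. The sketch below shows the generic self-organising-map update such approaches rest on: each map node stores an image-space position (exteroceptive) together with joint angles (proprioceptive), the winner is chosen from the visual input, and both parts of the winning neighbourhood move towards the observed pair. The map size, learning rates, class name, and the made-up arm/camera relation are assumptions for illustration, not the paper's set-up.

```python
import numpy as np

class HandEyeSOM:
    """A 1-D Kohonen map whose nodes pair an image position with joint angles."""

    def __init__(self, n_nodes, n_image_dims, n_joints, rng):
        self.image_w = rng.normal(size=(n_nodes, n_image_dims))
        self.joint_w = rng.normal(size=(n_nodes, n_joints))
        self.idx = np.arange(n_nodes)

    def train_step(self, image_pos, joint_angles, lr=0.2, sigma=2.0):
        # The winner is chosen from the visual input alone.
        winner = np.argmin(np.sum((self.image_w - image_pos) ** 2, axis=1))
        # Gaussian neighbourhood on the map lattice.
        h = np.exp(-((self.idx - winner) ** 2) / (2.0 * sigma ** 2))[:, None]
        # Both weight sets move towards the observed (image, joint) pair,
        # so the map learns the association between the two.
        self.image_w += lr * h * (image_pos - self.image_w)
        self.joint_w += lr * h * (joint_angles - self.joint_w)

    def joints_for(self, image_pos):
        winner = np.argmin(np.sum((self.image_w - image_pos) ** 2, axis=1))
        return self.joint_w[winner]

# Illustrative training pairs from a made-up arm/camera relation.
rng = np.random.default_rng(4)
som = HandEyeSOM(n_nodes=25, n_image_dims=2, n_joints=2, rng=rng)
for _ in range(2000):
    angles = rng.uniform(-1.0, 1.0, size=2)
    image = np.array([np.cos(angles[0]) + angles[1], np.sin(angles[0]) - angles[1]])
    som.train_step(image, angles)
print(som.joints_for(np.array([1.0, 0.0])))   # joint angles associated with that view
```
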