ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Springer
    Algorithmica 22 (1998), pp. 35-52
    ISSN: 1432-0541
    Keywords: Computational learning theory, Linear threshold functions, Perceptron algorithm, Learning with noise
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science, Mathematics
    Notes: Abstract. In this paper we consider the problem of learning a linear threshold function (a half-space in n dimensions, also called a "perceptron"). Methods for solving this problem generally fall into two categories. In the absence of noise, the problem can be formulated as a linear program and solved in polynomial time with the Ellipsoid Algorithm or interior-point methods. Alternatively, simple greedy algorithms such as the Perceptron Algorithm are often used in practice and have certain provable noise-tolerance properties, but their running time depends on a separation parameter, which quantifies the amount of "wiggle room" available for a solution and can be exponential in the description length of the input. In this paper we show how simple greedy methods can be used to find weak hypotheses (hypotheses that correctly classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter. Suitably combining these hypotheses yields a polynomial-time algorithm for learning linear threshold functions in the PAC model in the presence of random classification noise, and likewise a polynomial-time algorithm in the Statistical Query model of Kearns. Our algorithm is based on a new method for removing outliers in data. Specifically, for any set S of points in R^n, each given to b bits of precision, we show that one can remove only a small fraction of S so that in the remaining set T, for every vector v, max_{x ∈ T} (v·x)^2 ≤ poly(n,b) · E_{x ∈ T} (v·x)^2; i.e., for any hyperplane through the origin, the maximum squared distance from a point in T to the plane is at most polynomially larger than the average. After removing these outliers, we show that a modified version of the Perceptron Algorithm finds a weak hypothesis in polynomial time, even in the presence of random classification noise. (Illustrative code sketches of the perceptron update and of the outlier condition follow this record.)
    Type of Medium: Electronic Resource
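
For reference, the Perceptron Algorithm mentioned in the abstract can be sketched in a few lines. This is the textbook variant, not the paper's modified version; the function name, the pass limit, and the zero initialization are illustrative choices. A minimal sketch, assuming numpy:

    import numpy as np

    def perceptron(X, y, max_passes=100):
        """Textbook Perceptron Algorithm (not the paper's modified variant).
        X holds one example per row; labels y are in {-1, +1}."""
        w = np.zeros(X.shape[1])
        for _ in range(max_passes):
            mistakes = 0
            for x, label in zip(X, y):
                if label * (w @ x) <= 0:   # misclassified or on the boundary
                    w += label * x         # additive update toward the example
                    mistakes += 1
            if mistakes == 0:              # consistent hypothesis found
                break
        return w

On data separable with margin gamma (examples normalized to unit length), the classical mistake bound is O(1/gamma^2); this dependence on the separation parameter is exactly what the abstract's polynomial-time result avoids.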
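
The abstract's outlier condition can be checked for all directions v at once: writing Sigma = E_{x ∈ T}[x x^T], the maximum of (v·x)^2 / E_{x ∈ T}(v·x)^2 over v for a fixed x equals x^T Sigma^{-1} x, so the condition holds exactly when the largest such per-point score is at most poly(n,b). A minimal sketch, assuming numpy; the greedy drop-the-worst-point loop is a hypothetical stand-in, not the paper's removal procedure:

    import numpy as np

    def worst_outlier_ratio(T):
        """Largest value over directions v of
        max_{x in T} (v.x)^2 / mean_{x in T} (v.x)^2,
        computed as max_x x^T Sigma^{-1} x with Sigma = E[x x^T]."""
        T = np.asarray(T, dtype=float)
        sigma = T.T @ T / len(T)              # second-moment matrix E[x x^T]
        sigma_inv = np.linalg.pinv(sigma)     # pseudo-inverse handles rank deficiency
        scores = np.einsum("ij,jk,ik->i", T, sigma_inv, T)
        return scores.max(), scores

    def remove_outliers(S, bound):
        """Hypothetical greedy variant: drop the most extreme point
        until every direction satisfies the given bound."""
        S = np.asarray(S, dtype=float)
        while len(S) > 1:
            ratio, scores = worst_outlier_ratio(S)
            if ratio <= bound:
                break
            S = np.delete(S, int(np.argmax(scores)), axis=0)
        return S

The paper guarantees that only a small fraction of S must be removed to meet the poly(n,b) bound; the sketch above only illustrates how the quantifier over all directions v reduces to a per-point score.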