ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Collection: Articles  (20,544)
  • Springer  (20,544)
  • Years: 1995-1999 (19,523), 1955-1959 (592), 1950-1954 (370), 1945-1949 (59)
  • Topics: Computer Science (15,041), Geography (5,503)
  • 1
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 1-15 
    ISSN: 1436-3259
    Keywords: Spacings ; quantiles ; generalized Pareto distribution ; log-logistic distribution
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The maximum product of spacings (MPS) method is discussed from the standpoint of information theory. MPS parameter and quantile estimates for the generalized Pareto distribution and the two-parameter log-logistic distribution are compared with the maximum likelihood (ML) and probability weighted moment (PWM) estimates.
    Type of Medium: Electronic Resource
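
A minimal sketch of the maximum product of spacings idea summarized in entry 1, assuming SciPy's genpareto parameterization and a synthetic sample; this is my own illustration, not the paper's code, and the PWM and log-logistic comparisons are omitted.

```python
# Hedged sketch: MPS estimation for the generalized Pareto distribution (GPD).
# The spacings are differences of the fitted CDF at consecutive order statistics;
# MPS maximizes the sum of their logs (here: minimizes the negative sum).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

def mps_objective(params, x_sorted):
    """Negative log product of spacings for a GPD with shape c and scale sigma."""
    c, sigma = params
    if sigma <= 0:
        return np.inf
    u = genpareto.cdf(x_sorted, c, loc=0.0, scale=sigma)
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
    if np.any(spacings <= 0):          # ties or points outside the support
        return np.inf
    return -np.sum(np.log(spacings))

rng = np.random.default_rng(0)
sample = np.sort(genpareto.rvs(0.2, scale=1.0, size=100, random_state=rng))
fit = minimize(mps_objective, x0=[0.1, 1.0], args=(sample,), method="Nelder-Mead")
print("MPS estimates (shape, scale):", np.round(fit.x, 3))
```
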
  • 2
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 17-37 
    ISSN: 1436-3259
    Keywords: Diffusion ; network ; reservoir ; power law
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A diffusion approximation for a network of continuous time reservoirs with power law release rules is examined. Under a mild assumption on the inflow processes, we show that for physically reasonable values of the power law constants, the system of processes converges to a multi-dimensional Gaussian diffusion process. We also illustrate how the limiting Gaussian process may be used to compute approximations to the original system of reservoirs. In addition, we study the quality of our approximations by comparing them to results obtained by simulations of the original watershed model. The simulations offer support for the use of the approximation developed here.
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 39-63 
    ISSN: 1436-3259
    Keywords: Saturated flow ; rainfall ; groundwater monitoring
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A numerical experiment of flow in variably saturated porous media was performed in order to evaluate the spatial and temporal distribution of the groundwater recharge at the phreatic surface for a shallow aquifer as a function of the input rainfall process and soil heterogeneity. The study focused on the groundwater recharge which resulted from the percolation of the excess rainfall for a 90-day period of an actual precipitation record. Groundwater recharge was defined as the water flux across the moving phreatic surface. The observed spatial non-uniformity of the groundwater recharge was caused by soil heterogeneity and was particularly pronounced during the stage of peak recharge (the substantial percolation stage). During that stage the recharge is associated with preferential flow paths defined as soil zones of locally higher hydraulic conductivity. For periods of low percolation intensity the groundwater recharge exhibited more uniform spatial characteristics. The temporal distribution of the recharge was found to be a function of the frequency and intensity of the rainfall events. Application of sampling design demonstrates the joint influence of the spatial and temporal recharge variability on the cost-effective monitoring of groundwater potentiometric surfaces.
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 65-85 
    ISSN: 1436-3259
    Keywords: Streamflow ; drought ; tree-ring data ; renewal model ; geometric variables
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract It is shown that runs of low-flow annual streamflow in a coastal semiarid basin of Central California can be adequately modelled by renewal theory. For example, runs of below-median annual streamflows are shown to follow a geometric distribution. The elapsed times between runs of below-median streamflow are also geometrically distributed. The sum of these two independently distributed geometric time variables defines the renewal time elapsing between the initiation of one low-flow run and the next. The probability distribution of the renewal time is then derived from first principles, ultimately leading to the distribution of the number of low-flow runs in a specified time period, the expected number of low-flow runs, the risk of drought, and other important probabilistic indicators of low flow. The authors argue that if one identifies drought threat with the occurrence of multiyear low-flow runs, as is done by water supply managers in the study area, then their renewal model provides a number of interesting results concerning drought threat in areas historically subject to an inclement, dry climate. A 430-year long annual streamflow time series reconstructed by tree-ring analysis serves as the basis for testing the renewal model of low-flow sequences.
    Type of Medium: Electronic Resource
  • 5
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 87-106 
    ISSN: 1436-3259
    Keywords: Climate change ; daily precipitation modelling ; generalized linear models ; iteratively reweighted least squares ; spline functions
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The precipitation amounts on wet days at De Bilt (the Netherlands) are linked to temperature and surface air pressure through advanced regression techniques. Temperature is chosen as a covariate to use the model for generating synthetic time series of daily precipitation in a CO2 induced warmer climate. The precipitation-temperature dependence can partly be ascribed to the phenomenon that warmer air can contain more moisture. Spline functions are introduced to reproduce the non-monotonous change of the mean daily precipitation amount with temperature. Because the model is non-linear and the variance of the errors depends on the expected response, an iteratively reweighted least-squares technique is needed to estimate the regression coefficients. A representative rainfall sequence for the situation of a systematic temperature rise is obtained by multiplying the precipitation amounts in the observed record with a temperature dependent factor based on a fitted regression model. For a temperature change of 3°C (reasonable guess for a doubled CO2 climate according to the present-day general circulation models) this results in an increase in the annual average amount of 9% (20% in winter and 4% in summer). An extended model with both temperature and surface air pressure is presented which makes it possible to study the additional effects of a potential systematic change in surface air pressure on precipitation.
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 107-126 
    ISSN: 1436-3259
    Keywords: Gaussian process ; spatial correlation ; anisotropy ; Fourier transform ; Gauss-Newton ; ECM ; measurement error ; signal extraction ; irregular data
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract This paper is concerned with developing computational methods and approximations for maximum likelihood estimation and minimum mean square error smoothing of irregularly observed two-dimensional stationary spatial processes. The approximations are based on various Fourier expansions of the covariance function of the spatial process, expressed in terms of the inverse discrete Fourier transform of the spectral density function of the underlying spatial process. We assume that the underlying spatial process is governed by elliptic stochastic partial differential equations (SPDEs) driven by a Gaussian white noise process. SPDEs have often been used to model the underlying physical phenomenon, and elliptic SPDEs are generally associated with steady-state problems. A central problem in estimation of the underlying model parameters is to identify the covariance function of the process. The cumbersome exact analytical calculation of the covariance function by inverting the spectral density function of the process has commonly been used in the literature. The present work develops various Fourier approximations for the covariance function of the underlying process which are in easily computable form and allow easy application of Newton-type algorithms for maximum likelihood estimation of the model parameters. This work also develops an iterative search algorithm which combines the Gauss-Newton algorithm and a type of generalized expectation-maximization (EM) algorithm, namely the expectation-conditional maximization (ECM) algorithm, for maximum likelihood estimation of the parameters. We analyze the accuracy of the covariance function approximations for the spatial autoregressive-moving average (ARMA) models analyzed in Vecchia (1988) and illustrate the performance of our iterative search algorithm in obtaining the maximum likelihood estimates of the model parameters on simulated and actual data.
    Type of Medium: Electronic Resource
  • 7
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 127-150 
    ISSN: 1436-3259
    Keywords: Rainfall estimation ; indicator cokriging ; rain gage measurements
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Indicator cokriging (Journel 1983) is examined as a tool for real-time estimation of rainfall from rain gage measurements. The approach proposed in this work obviates real-time estimation of real-time statistics of rainfall by using ensemble or climatological statistics exclusively, and reduces the computational requirements attendant to indicator cokriging by employing only a few auxiliary cutoffs in estimation of conditional probabilities. Due to unavailability of suitable rain gage measurements, hourly radar rainfall data were used both for indicator covariance estimation and for a comparative evaluation. Preliminary results suggest that the indicator cokriging approach is clearly superior to its ordinary kriging counterpart, whereas the indicator kriging approach is not. The improvement is most significant in estimation of light rainfall, but drops off significantly for heavy rainfall. The lack of predictability in spatial estimation of heavy rainfall is borne out in the integral scale of indicator correlation: reaching its maximum for cutoffs near the median, the indicator correlation scale becomes increasingly smaller for larger cutoffs of rainfall depth. A derived-distribution analysis, based on the assumption that radar rainfall is a linear sum of ground truth and a random error, suggests that, at low cutoffs, the indicator correlation scale of ground truth can significantly differ from that of radar rainfall, and points toward inclusion of rainfall intermittency, for example, within the framework proposed in this work.
    Type of Medium: Electronic Resource
  • 8
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 151-161 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Type of Medium: Electronic Resource
  • 9
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 163-166 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Type of Medium: Electronic Resource
  • 10
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 187-207 
    ISSN: 1436-3259
    Keywords: log-Gumbel distribution ; flood frequency analysis ; quantile estimation ; confidence intervals
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The log-Gumbel distribution is one of the extreme value distributions which has been widely used in flood frequency analysis. This distribution has been examined in this paper regarding quantile estimation and confidence intervals of quantiles. Specific estimation algorithms based on the methods of moments (MOM), probability weighted moments (PWM) and maximum likelihood (ML) are presented. The applicability of the estimation procedures and comparison among the methods have been illustrated based on an application example considering the flood data of the St. Mary's River.
    Type of Medium: Electronic Resource
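
To make the quantile-estimation setting of entry 10 concrete, here is a small sketch of a method-of-moments fit for the log-Gumbel distribution (a Gumbel fit to the log-transformed flows); the formulation and the sample values are my own assumptions, not the paper's algorithms or data.

```python
# Hedged sketch: T-year quantile of a log-Gumbel variable fitted by the method of moments.
import numpy as np

def log_gumbel_quantile_mom(flows, return_period):
    """Assume ln(X) ~ Gumbel(u, alpha); fit by moments and return the T-year quantile of X."""
    y = np.log(flows)
    alpha = np.sqrt(6.0) * np.std(y, ddof=1) / np.pi   # Gumbel scale from the variance
    u = np.mean(y) - 0.5772 * alpha                     # Gumbel location from the mean
    p = 1.0 - 1.0 / return_period                       # non-exceedance probability
    return np.exp(u - alpha * np.log(-np.log(p)))       # back-transform the Gumbel quantile

annual_peaks = np.array([820.0, 640.0, 1150.0, 900.0, 710.0, 1320.0, 560.0, 980.0])  # made-up data
print("100-year quantile (log-Gumbel, MOM):", round(log_gumbel_quantile_mom(annual_peaks, 100), 1))
```
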
  • 11
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 167-186 
    ISSN: 1436-3259
    Keywords: Reservoir stochastic theory ; reliability ; mean ; variance ; indicator function ; storage bounds ; nonlinear programming ; simulation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A new formulation is presented for the analysis of reservoir systems, synthesizing concepts from the traditional stochastic theory of reservoir storage, moments analysis and reliability programming. The analysis is based on the development of the first and second moments for the stochastic storage state variable. These expressions include terms for the failure probabilities (probabilities of spill or deficit) and consider the storage bounds explicitly. Using this analysis, expected values of the storage state, variances of storage, optimal release policies and failure probabilities (useful information in the context of reservoir operations and design) can be obtained from a nonlinear programming solution. The solutions developed from studies of single reservoir operations on both an annual and a monthly basis compare favorably with those obtained from simulation. The presentation herein is directed both to traditional reservoir storage theorists who are interested in the design of a reservoir and to modern reservoir analysts who are interested in the long-term operation of reservoirs.
    Type of Medium: Electronic Resource
  • 12
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 209-229 
    ISSN: 1436-3259
    Keywords: Infiltration-advance equation ; water spreading ; cellular automata ; irrigation ; surface hydrology ; hydrodynamics ; stochastic processes
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A technique has been developed for predicting the irregular advance pattern often observed as water spreads on the surface of the ground. The technique is a combination of stochastic sketching, potential theory, probability theory, and a mass balance equation in the form of an advance equation. The technique can be used on flat as well as sloping terrain and addresses any form of obstructions or constraints to the flow of the water. The stochastic sketching portion of the technique uses cellular automata with transition probability movement rules to sketch the dynamics of small volume water elements in the defined environment. Randomly selected small volume flow path segments are computed and plotted. The envelope of these segments defines the wetted area and the advance front. Several examples are presented showing the patterns produced for various situations.
    Type of Medium: Electronic Resource
  • 13
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 231-251 
    ISSN: 1436-3259
    Keywords: Stochastic ; multiphase ; three phase ; heterogeneity
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of the numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed a good agreement between the two methods over a wide range of log k variability with three different combinations of input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
    Type of Medium: Electronic Resource
  • 14
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 295-317 
    ISSN: 1436-3259
    Keywords: Bayesian methods ; time series ; hydrology
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A review of the literature reveals the inadequacy of intervention analysis and spectrum-based methods for quantifying changes in hydrologic time series. A Bayesian method is used to investigate the statistical significance of observed changes in hydrologic time series, and the results are reported herein. The Bayesian method is superior to the previous methods.
    Type of Medium: Electronic Resource
  • 15
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 253-278 
    ISSN: 1436-3259
    Keywords: Random fields ; stochastic processes ; fractals
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract This paper describes a new method for generating spatially-correlated random fields. Such fields are often encountered in hydrology and hydrogeology and in the earth sciences. The method is based on two observations: (i) spatially distributed attributes usually display a stationary correlation structure, and (ii) the screening effect of measurements leads to the sufficiency of a small search neighborhood when it comes to projecting measurements and data in space. The algorithm which was developed based on these principles is called HYDRO_GEN, and its features and properties are discussed in depth. HYDRO_GEN is found to be accurate and extremely fast. It is also versatile: it can simulate fields of different nature, starting from weakly stationary fields with a prescribed covariance and ending with fractal fields. The simulated fields can display statistical isotropy or anisotropy.
    Type of Medium: Electronic Resource
  • 16
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 279-294 
    ISSN: 1436-3259
    Keywords: Linear estimation ; interpolation ; kriging ; splines ; conditional
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract This work presents analytical expressions for the best estimate, conditional covariance function, and conditional realizations of a function from sparse observations. In contrast to the prevalent approach in kriging where the best estimates at every point are determined from the solution of a system of linear equations, the best-estimate function can be represented analytically in terms of basis functions, whose number depends on the observations. This approach is computationally superior when graphing a function estimate and is also valuable in understanding what the solution should look like. For example, one can immediately see that all “singularities” in the best-estimate function are at observation points.
    Type of Medium: Electronic Resource
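
The basis-function view of kriging described in entry 16 can be illustrated with a toy simple-kriging example (zero mean, known covariance); this dual form with one coefficient per observation is a standard construction and only loosely mirrors the paper's derivation, and all values below are hypothetical.

```python
# Hedged sketch: the best estimate written as a sum of covariance basis functions
# centred at the observation points (dual form of simple kriging with zero mean).
import numpy as np

def gaussian_cov(r, variance=1.0, length=2.0):
    """Squared-exponential covariance as a function of separation distance r."""
    return variance * np.exp(-((r / length) ** 2))

x_obs = np.array([0.0, 1.5, 4.0, 7.0])        # observation locations (hypothetical)
y_obs = np.array([1.0, 0.4, -0.8, 0.3])       # observed values (hypothetical)

C = gaussian_cov(np.abs(x_obs[:, None] - x_obs[None, :]))
b = np.linalg.solve(C, y_obs)                  # one basis-function coefficient per observation

x_grid = np.linspace(-1.0, 8.0, 5)
estimate = gaussian_cov(np.abs(x_grid[:, None] - x_obs[None, :])) @ b
print(np.round(estimate, 3))                   # reproduces y_obs exactly at the data points
```
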
  • 17
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 319-329 
    ISSN: 1436-3259
    Keywords: Particle tracking ; numerical methods ; random walks ; advection-dispersion equation ; stochastic processes
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A formal statistical discussion of the origins of the random walk and its relation to the classic advection-dispersion equation is given. At issue is the common use of Gaussian distributed steps in producing the desired dispersive effects. Shown are alternative solutions to the basic Langevin equation describing mass displacements based on non-Gaussian, white increments. In particular, the results reveal that uniform or symmetric-triangular steps can be employed without loss of generality in accuracy of the solution (over all Peclet numbers) and may yield significant savings in the computational generation of the random deviates required in the Monte Carlo procedures of the random walk method.
    Type of Medium: Electronic Resource
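
Entry 17's point about non-Gaussian steps can be checked with a short experiment: a 1-D random walk for the advection-dispersion equation run once with Gaussian and once with uniform increments of equal variance. The parameter values are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: random walk particle tracking for 1-D advection-dispersion,
# comparing Gaussian steps with unit-variance uniform steps.
import numpy as np

def random_walk(n_particles, n_steps, v, D, dt, rng, step="gaussian"):
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        if step == "gaussian":
            z = rng.standard_normal(n_particles)
        else:                                    # U(-sqrt(3), sqrt(3)) has unit variance
            z = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n_particles)
        x += v * dt + np.sqrt(2.0 * D * dt) * z  # drift plus dispersive displacement
    return x

rng = np.random.default_rng(1)
for kind in ("gaussian", "uniform"):
    plume = random_walk(10_000, 200, v=1.0, D=0.1, dt=0.05, rng=rng, step=kind)
    print(f"{kind:8s}  mean={plume.mean():.3f}  var={plume.var():.3f}")  # both near (10, 2)
```
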
  • 18
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 10 (1996), S. 330-330 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Type of Medium: Electronic Resource
  • 19
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 17-31 
    ISSN: 1436-3259
    Keywords: Bivariate density ; meta-Gaussian density ; normal quantile transform ; likelihood ratio dependence ; correlation coefficient
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Convenient bivariate densities found in the literature are often unsuitable for modeling hydrologic variates. They either constrain the range of association between variates, or fix the form of the marginal distributions. The bivariate meta-Gaussian density is constructed by embedding the normal quantile transform of each variate into the Gaussian law. The density can represent a full range of association between variates and admits arbitrarily specified marginal distributions. Modeling and estimation can be decomposed into i) independent analyses of the marginal distributions, and ii) investigation of the dependence structure. Both statistical and judgmental estimation procedures are possible. Some comparisons to recent applications of bivariate densities in the hydrologic literature motivate and illustrate the model.
    Type of Medium: Electronic Resource
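
A compact way to see the construction in entry 19 is to sample from a bivariate meta-Gaussian model: Gaussian dependence embedded under two arbitrary marginals via the normal quantile transform. The marginals and correlation below are arbitrary illustrative choices.

```python
# Hedged sketch: sampling a bivariate meta-Gaussian model with arbitrary marginals.
import numpy as np
from scipy.stats import norm, gamma, lognorm

def sample_meta_gaussian(n, rho, marginal_1, marginal_2, rng):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)  # Gaussian dependence
    u = norm.cdf(z)                                                # normal quantile transform
    return marginal_1.ppf(u[:, 0]), marginal_2.ppf(u[:, 1])        # arbitrary marginals

rng = np.random.default_rng(2)
x1, x2 = sample_meta_gaussian(5_000, rho=0.7,
                              marginal_1=gamma(a=2.0, scale=3.0),
                              marginal_2=lognorm(s=0.8), rng=rng)
print(f"sample correlation: {np.corrcoef(x1, x2)[0, 1]:.2f}")
```
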
  • 20
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 33-50 
    ISSN: 1436-3259
    Keywords: Unit hydrograph ; uncertainty analysis ; linearly constrained Monte-Carlo simulation ; reliability analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Unit hydrographs (UHs), along with design rainfalls, are frequently used to determine the discharge hydrograph for design and evaluation of hydraulic structures. Due to the presence of various uncertainties in its derivation, the resulting UH is inevitably subject to uncertainty. Consequently, the performance of hydraulic structures under the design storm condition is uncertain. This paper integrates the linearly constrained Monte-Carlo simulation with the UH theory and routing techniques to evaluate the reliability of hydraulic structures. The linear constraint is considered because the water volume of each generated design direct runoff hydrograph should be equal to that of the design effective rainfall hyetograph or the water volume of each generated UH must be equal to one inch (or cm) over the watershed. For illustration, the proposed methodology is applied to evaluate the overtopping risk of a hypothetical flood detention reservoir downstream of Tong-Tou watershed in Taiwan.
    Type of Medium: Electronic Resource
  • 21
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 1-16 
    ISSN: 1436-3259
    Keywords: Nash cascade reservoir model ; rainfall-runoff ; EM algorithm ; filtering ; maximum likelihood estimation ; martingale estimating function
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Linear continuous-time stochastic Nash cascade conceptual models for runoff are developed. The runoff is modeled as a simple system of linear stochastic differential equations driven by white Gaussian and marked point process noises. In the case of d reservoirs, the outputs of these reservoirs form a d-dimensional vector Markov process, of which only the d-th coordinate process is observed, usually at a discrete sample of time points. The d-th coordinate process is not Markovian. Thus runoff is a partially observed Markov process if it is modeled using the stochastic Nash cascade model. We consider how to estimate the parameters in such models. In principle, maximum likelihood estimation for the complete process parameters can be carried out directly or through some form of the EM (expectation-maximization) algorithm or a variation thereof, applied to the observed process data. In this research we consider a direct approximate likelihood approach and a filtering approach to an algorithm of EM type, as developed in Thompson and Kaseke (1994). These two methods are applied to some real-life runoff data from a catchment in Wales. We also consider a special case of the martingale estimating function approach on the runoff model in the presence of rainfall. Finally, some simulations of the runoff process are given based on the estimated parameters.
    Type of Medium: Electronic Resource
  • 22
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 173-192 
    ISSN: 1436-3259
    Keywords: Uncertainty analysis ; unit hydrograph ; regression analysis ; probabilistic point estimation methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Hydrologic model parameters obtained from regional regression equations are subject to uncertainty. Consequently, hydrologic model outputs based on the stochastic parameters are random. This paper presents a systematic analysis of uncertainty associated with the two parameters, N and K, in Nash's IUH model from different regional regression equations. The uncertainty features associated with N and K are further incorporated to assess the uncertainty of the resulting IUH. Numerical results indicate that uncertainty of N and K from the regional regression equations are too significant to be ignored.
    Type of Medium: Electronic Resource
  • 23
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 145-171 
    ISSN: 1436-3259
    Keywords: Hydrologic regionalization ; unit hydrograph ; regression analysis ; multivariate regression ; seemingly unrelated regression ; validation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Hydrologic regionalization is a useful tool that allows for the transfer of hydrological information from gaged sites to ungaged sites. This study developed regional regression equations that relate the two parameters in Nash's IUH model to the basin characteristics for 42 major watersheds in Taiwan. In the process of developing the regional equations, different regression procedures including the conventional univariate regression, multivariate regression, and seemingly unrelated regression were used. Multivariate regression and seemingly unrelated regression were applied because there exists a rather strong correlation between Nash's IUH parameters. Furthermore, a validation study was conducted to examine the predictability of regional equations derived by the different regression procedures. The study indicates that hydrologic regionalization involving several dependent variables should consider their correlations in the process of establishing the regional equations. The consideration of such correlation will enhance the predictability of the resulting regional equations as compared with the ones from the conventional univariate regression procedure.
    Type of Medium: Electronic Resource
  • 24
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 193-210 
    ISSN: 1436-3259
    Keywords: Turbulence ; sediment ; fluvial ; river ; bursting process ; statistics
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Entrainment of sediment particles from channel beds into the channel flow is influenced by the characteristics of the flow turbulence, which produces stochastic shear stress fluctuations at the bed. Recent studies of the structure of turbulent flow have recognized bursting processes as important mechanisms for the transfer of momentum into the laminar boundary layer. Of these processes, the sweep event has been recognized as the most important bursting event for entrainment of sediment particles, as it imposes forces in the direction of the flow, resulting in movement of particles by rolling, sliding and occasionally saltating. Similarly, the ejection event has been recognized as important for sediment transport since these events maintain the sediment particles in suspension. In this study, the characteristics of bursting processes and, in particular, the sweep event were investigated in a flume with a rough bed. The instantaneous velocity fluctuations of the flow were measured in two dimensions using a small electromagnetic velocity meter, and the turbulent shear stresses were determined from these velocity fluctuations. It was found that the shear stress applied to the sediment particles on the bed resulting from sweep events depends on the magnitude of the turbulent shear stress and its probability distribution. A statistical analysis of the experimental data was undertaken, and it was found necessary to apply a Box-Cox transformation to transform the data into a normally distributed sample. This enabled determination of the mean shear stress, angle of action and standard error of estimate for sweep and ejection events. These instantaneous shear stresses were found to be greater than the mean flow shear stress and, for the sweep event, to be approximately 40 percent greater near the channel bed. Results from this analysis suggest that the critical shear stress determined from the Shields diagram is not sufficient to predict the initiation of motion because it is based on the temporal mean shear stress. It is suggested that initiation of particle motion, but not continuous motion, can occur earlier than suggested by the Shields diagram due to the higher shear stresses imposed on the particles by the stochastic shear stresses resulting from turbulence within the flow.
    Type of Medium: Electronic Resource
  • 25
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 211-227 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The principle of maximum entropy (POME) was employed to derive a new method of parameter estimation for the 2-parameter generalized Pareto (GP2) distribution. Monte Carlo simulated data were used to evaluate this method and compare it with the methods of moments (MOM), probability weighted moments (PWM), and maximum likelihood estimation (MLE). The parameter estimates yielded by POME were comparable or better within certain ranges of sample size and coefficient of variation.
    Type of Medium: Electronic Resource
  • 26
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 523-547 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Kernel density estimators are useful building blocks for empirical statistical modeling of precipitation and other hydroclimatic variables. Data-driven estimates of the marginal probability density function of these variables (which may have discrete or continuous arguments) provide a useful basis for Monte Carlo resampling and are also useful for posing and testing hypotheses (e.g. bimodality) as to the frequency distributions of the variable. In this paper, some issues related to the selection and design of univariate kernel density estimators are reviewed. Some strategies for bandwidth and kernel selection are discussed in an applied context and recommendations for parameter selection are offered. This paper complements the nonparametric wet/dry spell resampling methodology presented in Lall et al. (1996).
    Type of Medium: Electronic Resource
  • 27
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 11 (1997), S. 459-482 
    ISSN: 1436-3259
    Keywords: Karhunen-Loève expansion ; Empirical Orthogonal Functions ; stochastic simulation ; Gaussian fields ; analytical covariance functions ; eigenfunctions ; kriging
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Simulation of multigaussian stochastic fields can be made after a Karhunen-Loève expansion of a given covariance function. This method is also called simulation by Empirical Orthogonal Functions. The simulations are made by drawing stochastic coefficients from a random generator. These numbers are multiplied with eigenfunctions and eigenvalues derived from the predefined covariance model. The number of eigenfunctions necessary to reproduce the stochastic process within a predefined variance error turns out to be a cardinal question. Some ordinary analytical covariance functions are used to evaluate how quickly the series of eigenfunctions can be truncated. This analysis demonstrates extremely quick convergence to 99.5% of total variance for the 2nd-order exponential (‘gaussian’) covariance function, while the opposite is true for the 1st-order exponential covariance function. Due to these convergence characteristics, the Karhunen-Loève method is most suitable for simulating smooth fields with ‘gaussian’-shaped covariance functions. Practical applications of Karhunen-Loève simulations can be improved by spatial interpolation of the eigenfunctions. In this paper, we suggest interpolation by kriging, and limits for reproduction of the predefined covariance functions are evaluated.
    Type of Medium: Electronic Resource
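
Entry 27's truncation question can be reproduced on a small grid: a discrete Karhunen-Loève (EOF) simulation that keeps only enough eigenmodes to reach 99.5% of the variance of a squared-exponential ('gaussian') covariance. The grid size and length scale are my own illustrative choices, not the paper's settings.

```python
# Hedged sketch: truncated Karhunen-Loeve / EOF simulation of a Gaussian field on a 1-D grid.
import numpy as np

n, length_scale, variance = 200, 10.0, 1.0
x = np.arange(n, dtype=float)
C = variance * np.exp(-(((x[:, None] - x[None, :]) / length_scale) ** 2))  # 2nd-order exponential

eigvals, eigvecs = np.linalg.eigh(C)                  # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.995)) + 1   # 99.5% of variance
rng = np.random.default_rng(3)
field = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * rng.standard_normal(k))  # truncated expansion
print(f"modes retained: {k} of {n}")
```
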
  • 28
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 1-14 
    ISSN: 1436-3259
    Keywords: Exceedance probability ; trend ; stochastic variables ; non-stationarity
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Studying the hypothetical case of a trend superimposed on a random stationary variable, we highlight the strong influence of possible non-stationarities on exceedance probability. After a general outline, the subject is developed analytically using the Gumbel distribution, emphasizing the quick increase of the exceedance probability over time in the presence of weak rising trends, and its substantial underestimation when the non-stationarity goes unnoticed or is considered negligible. Finally, the work is applied to hydrological series of rainfall and river flow.
    Type of Medium: Electronic Resource
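
The effect described in entry 28 is easy to quantify numerically: under a Gumbel variable whose location drifts upward slowly, the exceedance probability of a fixed design level grows well beyond the stationary value. The parameter values below are arbitrary illustrations, not the paper's case study.

```python
# Hedged sketch: exceedance probability of a fixed 100-year level under a weak rising trend.
import numpy as np

u0, alpha = 100.0, 15.0                              # stationary Gumbel location and scale
z = u0 - alpha * np.log(-np.log(1.0 - 1.0 / 100.0))  # stationary 100-year design level
trend = 0.5                                          # slow rise of the location per year

def exceedance_prob(t):
    """P(X_t > z) when the Gumbel location has drifted to u0 + trend * t."""
    return 1.0 - np.exp(-np.exp(-(z - (u0 + trend * t)) / alpha))

for t in (0, 10, 25, 50):
    print(f"year {t:3d}: P(exceedance) = {exceedance_prob(t):.3f}  (stationary: 0.010)")
```
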
  • 29
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 53-64 
    ISSN: 1436-3259
    Keywords: Risk ; clustering ; point process ; Poisson ; flood
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Since its introduction into flood risk analysis, the partial duration series method has gained increasing acceptance as an appealing alternative to the annual maximum series method. However, when the base flow is low, there is clustering in the flood peak or flow volume point process. In this case, the general stochastic point process model is not suitable for risk analysis. Therefore, two types of models for flood risk analysis are derived on the basis of clustering stochastic point process theory in this paper. The most remarkable characteristic of these models is that the flood risk is considered directly within the time domain. The acceptability of the different models is also discussed using the flood peak counting process over twenty years at Yichang station on the Yangtze River. The results show that both kinds of models are suitable for flood risk analysis and are more flexible than the traditional flood risk models derived from the annual maximum series method or from general stochastic point process theory.
    Type of Medium: Electronic Resource
  • 30
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 33-52 
    ISSN: 1436-3259
    Keywords: Streamflow ; simulation ; nonparametric
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A new approach for streamflow simulation using nonparametric methods was described in a recent publication (Sharma et al. 1997). Use of nonparametric methods has the advantage that they avoid the issue of selecting a probability distribution and can represent nonlinear features, such as asymmetry and bimodality, that hitherto were difficult to represent in the probability structure of hydrologic variables such as streamflow and precipitation. The nonparametric method used was kernel density estimation, which requires the selection of bandwidth (smoothing) parameters. This study documents some of the tests that were conducted to evaluate the performance of bandwidth estimation methods for kernel density estimation. Issues related to selection of optimal smoothing parameters for kernel density estimation with small samples (200 or fewer data points) are examined. Both reference to a Gaussian density and data-based specifications are applied to estimate bandwidths for samples from bivariate normal mixture densities. The three data-based methods studied are Maximum Likelihood Cross Validation (MLCV), Least Squares Cross Validation (LSCV) and Biased Cross Validation (BCV2). Modifications for estimating optimal local bandwidths using MLCV and LSCV are also examined. We found that the use of local bandwidths does not necessarily improve the density estimate with small samples. Of the global bandwidth estimators compared, we found that MLCV and LSCV are better because they show lower variability and higher accuracy, while Biased Cross Validation suffers from multiple optimal bandwidths for samples from strongly bimodal densities. These results, of particular interest in stochastic hydrology where small samples are common, may have importance in other applications of nonparametric density estimation methods with similar sample sizes and distribution shapes.
    Type of Medium: Electronic Resource
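
As a simplified companion to entry 30, here is a 1-D maximum likelihood cross-validation (MLCV) bandwidth search for a Gaussian-kernel density estimate on a small bimodal sample; the study's bivariate mixtures and the LSCV/BCV2 criteria are not reproduced, and the sample is synthetic.

```python
# Hedged sketch: MLCV bandwidth selection for a 1-D Gaussian-kernel density estimate.
import numpy as np

def mlcv_score(h, x):
    """Leave-one-out log-likelihood of the kernel density estimate with bandwidth h."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d ** 2) / np.sqrt(2.0 * np.pi)
    np.fill_diagonal(K, 0.0)                      # leave each point out of its own estimate
    f_loo = K.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(f_loo + 1e-300))

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2.0, 1.0, 100), rng.normal(3.0, 0.5, 100)])  # bimodal sample
grid = np.linspace(0.05, 2.0, 80)
h_best = grid[np.argmax([mlcv_score(h, x) for h in grid])]
print(f"MLCV bandwidth: {h_best:.3f}")
```
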
  • 31
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 15-32 
    ISSN: 1436-3259
    Keywords: Kalman filtering ; groundwater modelling ; inverse methods ; uncertainty analysis ; state prediction ; parameter estimation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The popularity of applying filtering theory in the environmental and hydrological sciences passed its first climax in the 1970s. Like so many other new mathematical methods, it was simply the fashion at the time. The study of groundwater systems was not immune to this fashion, but neither was it by any means a prominent area of application. The spatial-temporal characteristics of groundwater flow are customarily described by analytical or, more frequently, numerical, physics-based models. Consequently, the state-space representations associated with filtering must be of a high order, with an immediately apparent computational over-burden. And therein lies part of the reason for the only modest interest there has been in applying Kalman filtering to groundwater systems, as reviewed critically in this paper. Filtering theory may be used to address a variety of problems, such as: state estimation and reconstruction, parameter estimation (including the study of uncertainty and its propagation), combined state-parameter estimation, input estimation, estimation of the variance-covariance properties of stochastic disturbances, the design of observation networks, and the analysis of parameter identifiability. A large proportion of previous studies has dealt with the problem of parameter estimation in one form or another. This may well not remain the focus of attention in the future. Instead, filtering theory may find wider application in the context of data assimilation, that is, in reconstructing fields of flow and the migration of sub-surface contaminant plumes from relatively sparse observations.
    Type of Medium: Electronic Resource
  • 32
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 65-82 
    ISSN: 1436-3259
    Keywords: Flood flow ; threshold ; generalized Pareto ; Poisson
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract This study uses the method of peaks over threshold (P.O.T.) to estimate the flood flow quantiles for a number of hydrometric stations in the province of New Brunswick, Canada. The peak values exceeding the base level (threshold), or 'exceedances', are fitted by a generalized Pareto distribution. It is known that under the assumption of Poisson process arrival for flood exceedances, the P.O.T. model leads to a generalized extreme value distribution (GEV) for yearly maximum discharge values. The P.O.T. model can then be applied to calculate the quantiles X_T corresponding to different return periods T, in years. A regionalization of floods in New Brunswick, which consists of dividing the province into 'homogeneous regions', is performed using the method of the 'region of influence'. The 100-year flood is subsequently estimated using a regionally estimated value of the shape parameter of the generalized Pareto distribution and a regression of the 100-year flood on the drainage area. The jackknife sampling method is then used to contrast the regional results with the values estimated at site. The variability of these results is presented in box-plot form.
    Type of Medium: Electronic Resource
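
The peaks-over-threshold chain in entry 32 (generalized Pareto exceedances plus Poisson arrivals giving a GEV for annual maxima) can be sketched with standard textbook formulas; the threshold, record length and simulated peaks below are assumptions, and the regional and jackknife steps are not shown.

```python
# Hedged sketch: T-year quantile from a peaks-over-threshold (P.O.T.) analysis.
import numpy as np
from scipy.stats import genpareto

def pot_quantile(peaks, threshold, n_years, T):
    exceedances = peaks[peaks > threshold] - threshold
    lam = len(exceedances) / n_years                     # mean number of exceedances per year
    xi, _, sigma = genpareto.fit(exceedances, floc=0.0)  # GPD shape and scale (location fixed)
    rate = -np.log(1.0 - 1.0 / T)                        # annual-maximum (GEV) exceedance rate
    if abs(xi) < 1e-6:                                   # exponential-tail limit
        return threshold + sigma * np.log(lam / rate)
    return threshold + (sigma / xi) * ((lam / rate) ** xi - 1.0)

rng = np.random.default_rng(5)
peaks = 50.0 + genpareto.rvs(0.1, scale=20.0, size=120, random_state=rng)  # ~40 yrs, 3 peaks/yr
print("100-year flood estimate:", round(pot_quantile(peaks, threshold=50.0, n_years=40, T=100), 1))
```
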
  • 33
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 97-116 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The knowledge of the volume and duration of low-flow events in river channels is essential for water management and the design of hydraulic structures. In this study, both of the preceding characteristics, X_1 and X_2, are considered simultaneously via two types of bivariate distributions whose marginals are exponential. One of these bivariate distributions has been presented by Nagao and Kadoya (1971), and the other has been used by Singh and Singh (1991) in the study of rainfall intensity and rainfall depth. The results are applied to the low-flow series (“peaks-below-threshold”) of the Lepreau River (station 01AQ001) in New Brunswick, Canada. These results show that the model that was successfully employed by Singh and Singh (1991) to study rainfall presents certain difficulties when a very strong correlation, ρ, exists between the two random variables X_1 and X_2. The model by Nagao and Kadoya (1971) seems to be more satisfactory for such situations, although it also seems to be quite sensitive to variations in ρ.
    Type of Medium: Electronic Resource
  • 34
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 83-96 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Many natural porous geological rock formations, as well as engineered porous structures, have fractal properties, i.e., they are self-similar over several length scales. While there have been many experimental and theoretical studies on how to quantify a fractal porous medium and on how to determine its fractal dimension, the numerical generation of a fractal pore structure with predefined statistical and scaling properties is somewhat scarcer. In the present paper a new numerical method for generating a three-dimensional porous medium with any desired probability density function (PDF) and autocorrelation function (ACF) is presented. The well-known Turning Bands Method (TBM) is modified to generate three-dimensional synthetic isotropic and anisotropic porous media with a Gaussian PDF and an exponential-decay ACF. Porous media with other PDFs and ACFs are constructed with a nonlinear, iterative PDF and ACF transformation, whereby the arbitrary PDF is converted to an equivalent Gaussian PDF which is then simulated with the classical TBM. Employing a new method for the estimation of the surface area for a given porosity, the fractal dimensions of the surface area of the synthetic porous media generated in this way are then measured by classical fractal perimeter/area relationships. Different 3D porous media are simulated by varying the porosity and the correlation structure of the random field. The performance of the simulations is evaluated by checking the ensemble statistics, the mean, variance and ACF of the simulated random field. For a porous medium with a Gaussian PDF, an average fractal dimension of approximately 2.76 is obtained, which is in the range of values of actually measured fractal dimensions of molecular surfaces. For a porous medium with a non-Gaussian quadratic PDF the calculated fractal dimension appears to be consistently higher and averages 2.82. The results also show that the fractal dimension is neither strongly dependent on the porosity nor on the degree of anisotropy assumed.
    Type of Medium: Electronic Resource
  • 35
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 117-140 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Transport of non-ergodic solute plumes by steady-state groundwater flow with a uniform mean velocity, μ, was simulated with a Monte Carlo approach in a two-dimensional heterogeneous and statistically isotropic aquifer whose transmissivity, T, is log-normally distributed with an exponential covariance. The ensemble averages of the second spatial moments of the plume about its center of mass, ⟨S_ii(t)⟩, and the plume centroid covariance, R_ii(t) (i = 1, 2), were simulated for the variance of Y = log T, σ_Y² = 0.1, 0.5 and 1.0, and for line sources normal or parallel to μ of three dimensionless lengths, 1, 5, and 10. For σ_Y² = 0.1, all simulated ⟨S_ii(t)⟩ − S_ii(0) and R_ii(t) agree well with the first-order theoretical values, where S_ii(0) are the initial values of S_ii(t). For σ_Y² = 0.5 and 1.0 and the line sources normal to μ, the simulated longitudinal moments, ⟨S_11(t)⟩ − S_11(0) and R_11(t), agree well with the first-order theoretical results, but the simulated transverse moments ⟨S_22(t)⟩ − S_22(0) and R_22(t) are significantly larger than the first-order values. For the same two larger values of σ_Y² but the line sources parallel to μ, the simulated ⟨S_11(t)⟩ − S_11(0) are larger than, but the simulated R_11 are smaller than, the first-order values, and both simulated ⟨S_22(t)⟩ − S_22(0) and R_22(t) stay larger than the first-order values. For a fixed value of σ_Y², the summations of ⟨S_ii(t)⟩ − S_ii(0) and R_ii, i.e., X_ii (i = 1, 2), remain almost the same no matter what kind of source is simulated. The simulated X_11 are in good agreement with the first-order theory, but the simulated X_22 are significantly larger than the first-order values. The simulated X_22, however, are in excellent agreement with a previous modeling result, and both of them are very close to the values derived using Corrsin's conjecture. It is found that the transverse moments may be significantly underestimated if less accurate hydraulic head solutions are used, and that a decrease of ⟨S_22(t)⟩ − S_22(0) with time, or a negative effective dispersivity, may occur in the case of a line source parallel to μ where σ_Y² is small.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
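A minimal sketch of the moment bookkeeping described in the abstract above, assuming each Monte Carlo realization represents the plume by particle positions; averaging S_ii over realizations and taking the covariance of the centroids (giving R_ii) follow the definitions quoted in the abstract, but the helper names and array layout are hypothetical.

```python
import numpy as np

def plume_moments(particles):
    """Centroid and second spatial moments S_ij(t) of one realization.
    particles: array of shape (n_particles, 2) with positions at time t."""
    centroid = particles.mean(axis=0)
    d = particles - centroid
    S = d.T @ d / len(particles)
    return centroid, S

def ensemble_moments(realizations):
    """<S_ii(t)> and centroid covariance R_ii(t) over Monte Carlo realizations."""
    centroids, S_all = zip(*(plume_moments(p) for p in realizations))
    S_mean = np.mean(S_all, axis=0)                 # <S_ij(t)>
    R = np.cov(np.array(centroids).T, bias=True)    # R_ij(t)
    return np.diag(S_mean), np.diag(R)
```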
  • 36
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 141-154 
    ISSN: 1436-3259
    Keywords: Ground truth ; geostatistical techniques ; areal reduction factor ; rainfall process ; linear relationship
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Geostatistical techniques are used to quantify the reference mean areal rainfall (ground truth) from sparse raingauge networks. Based on the EPSAT-Niger event cumulative rainfall, a linear relationship is derived between the ground truth, taken as the mean areal rainfall estimated from the dense raingauge network, and the areal rainfall estimated from a sparse network. A linear relationship between the ground truth and point rainfall is also established. As has been reported experimentally by several authors, the slope of these relationships is less than one. Within the geostatistical framework, the slope and the intercept can be estimated as functions of the spatial structure of the rainfall process, and it is shown that the slope is smaller than one. For the special case of one gauge inside a fixed area or Field Of View (FOV), an areal reduction factor is derived. It has a limit value which depends only on the size of the area and the spatial structure of the rainfall process. The relative error variance of estimating the FOV cumulative rainfall from point rainfall is also given.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
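The limit behaviour of the areal reduction factor mentioned above depends on the FOV size relative to the spatial correlation of the rainfall field. Below is a small Monte Carlo sketch of the variance reduction factor Var(areal mean)/Var(point value) for an assumed isotropic exponential correlogram over a square FOV; the correlogram form, distances and FOV geometry are illustrative assumptions, not the EPSAT-Niger values.

```python
import numpy as np

def variance_reduction_factor(side_km=10.0, corr_len_km=25.0, n=200_000, seed=1):
    """Monte Carlo estimate of Var(areal mean)/Var(point value) over a square FOV,
    assuming an isotropic exponential correlogram rho(h) = exp(-h / corr_len)."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.0, side_km, size=(n, 2))
    q = rng.uniform(0.0, side_km, size=(n, 2))
    h = np.linalg.norm(p - q, axis=1)
    # E[rho(|U - V|)] for U, V uniform on the FOV equals the normalized double integral
    # of the correlogram over the area, i.e. the variance reduction factor.
    return float(np.exp(-h / corr_len_km).mean())

print(variance_reduction_factor())
```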
  • 37
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 223-245 
    ISSN: 1436-3259
    Keywords: Stochastic differential equation ; spatial data ; irregularly sampled data ; parameter estimation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract A second-order stochastic differential equation is used for modeling water-table elevation. The data were sampled at the Borden Aquifer as part of a tracer experiment. The purpose of the water-table data collection was to determine the presence of water flow. We argue that the water-table surface is a simple plane oscillating up and down in time according to the equation of a stochastic oscillator. We derive the model, estimate its parameters and provide arguments for the goodness-of-fit of the model.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
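A minimal Euler-Maruyama sketch of a second-order stochastic oscillator of the kind argued for in the abstract above; the parameter names, damping form and forcing level are assumptions for illustration, not the fitted Borden model.

```python
import numpy as np

def stochastic_oscillator(omega=2 * np.pi / 30.0, zeta=0.15, sigma=0.02,
                          h0=0.0, v0=0.0, dt=0.25, steps=2000, seed=0):
    """Euler-Maruyama for  h'' + 2*zeta*omega*h' + omega**2 * h = sigma * dW/dt."""
    rng = np.random.default_rng(seed)
    h = np.empty(steps)
    v = np.empty(steps)
    h[0], v[0] = h0, v0
    for k in range(steps - 1):
        dW = rng.normal(0.0, np.sqrt(dt))                     # Brownian increment
        h[k + 1] = h[k] + v[k] * dt
        v[k + 1] = v[k] + (-2 * zeta * omega * v[k] - omega**2 * h[k]) * dt + sigma * dW
    return h

series = stochastic_oscillator()
```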
  • 38
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 267-283 
    ISSN: 1436-3259
    Keywords: Flood frequency analysis ; TCEV ; non-systematic information ; regional ; statistical gain
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Because of its social and economic implications, flood frequency analysis must be carried out with the highest possible precision. For this reason, the most suitable statistical model must be selected and the maximum amount of information must be used. Floods in Mediterranean rivers can be produced by two different mechanisms, which forces the use of a non-traditional distribution such as the TCEV. The information can be increased by using additional non-systematic data, by a regional analysis, or both. Through the statistical gain concept, it is shown that in most cases the use of additional non-systematic information can decrease the quantile estimation error by about 50%. In a regional analysis, the benefit of additional information at one station is propagated to the rest of the stations with only a small decrease with respect to the equivalent at-site analysis.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 39
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 285-298 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract We present a geostatistically based inverse model for characterizing heterogeneity in parameters of unsaturated hydraulic conductivity for three-dimensional flow. Pressure and moisture content are related to perturbations in hydraulic parameters through cross-covariances, which are calculated to first-order. Sensitivities needed for covariance calculations are derived using the adjoint state sensitivity method. Approximations of the conditional mean parameter fields are then obtained from the cokriging estimator. Correlation between parameters and pressure – moisture content perturbations is seen to be strongly dependent on mean pressure or moisture content. High correlation between parameters and pressure data was obtained under saturated or near saturated flow conditions, providing accurate estimation of saturated hydraulic conductivity, while moisture content measurements provided accurate estimation of the pore size distribution parameter under unsaturated flow conditions.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 40
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 247-266 
    ISSN: 1436-3259
    Keywords: Stochastic control ; dynamic programming ; reservoir systems ; hydrologic forecasting ; hydropower ; feedback control ; autoregressive models
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract As with all dynamic programming formulations, differential dynamic programming (DDP) successfully exploits the sequential decision structure of multi-reservoir optimization problems, overcomes difficulties with the nonconvexity of energy production functions for hydropower systems, and provides optimal feedback release policies. DDP is particularly well suited to optimizing large-scale multi-reservoir systems due to its relative insensitivity to state-space dimensionality. This advantage of DDP encourages expansion of the state vector to include additional multi-lag hydrologic information and/or future inflow forecasts in developing optimal reservoir release policies. Unfortunately, attempts at extending DDP to the stochastic case have not been entirely successful. A modified stochastic DDP algorithm is presented which overcomes difficulties in previous formulations. Application of the algorithm to a four-reservoir hydropower system demonstrates its capabilities as an efficient approach to solving stochastic multi-reservoir optimization problems. The algorithm is also applied to a single reservoir problem with inclusion of multi-lag hydrologic information in the state vector. Results provide evidence of significant benefits in direct inclusion of expanded hydrologic state information in optimal feedback release policies.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 41
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 12 (1998), S. 299-316 
    ISSN: 1436-3259
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract This paper presents a geostatistical approach to multi-directional aquifer stimulation in order to better identify the transmissivity field. Hydraulic head measurements, taken at a few locations but under a number of different steady-state flow conditions, are used to estimate the transmissivity. Well installation is generally the most costly aspect of obtaining hydraulic head measurements. Therefore, it is advantageous to obtain as many informative measurements from each sampling location as possible. This can be achieved by hydraulically stimulating the aquifer through pumping, in order to set up a variety of flow conditions. We illustrate the method by applying it to a synthetic aquifer. The simulations provide evidence that a few sampling locations may provide enough information to estimate the transmissivity field. Furthermore, the innovation of, or new information provided by, each measurement can be examined by looking at the corresponding spline and sensitivity matrix. Estimates from multi-directional stimulation are found to be clearly superior to estimates using data taken under one flow condition. We describe the geostatistical methodology for using data from multi-directional stimulations and address computational issues.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 42
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 9 (1995), S. 33-47 
    ISSN: 1436-3259
    Keywords: Hidden Markov models ; maximum likelihood estimation ; EM algorithm ; martingale estimating function ; forward-backward algorithm ; Monte Carlo ; filtering ; Nash cascade model ; rainfall runoff modeling
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Many stochastic process models for environmental data sets assume a process of relatively simple structure which is in some sense partially observed. That is, there is an underlying process (X_n, n ≥ 0) or (X_t, t ≥ 0) for which the parameters are of interest and physically meaningful, and an observable process (Y_n, n ≥ 0) or (Y_t, t ≥ 0) which depends on the X process but not otherwise on those parameters. Examples are wide ranging: the Y process may be the X process with missing observations; the Y process may be the X process observed with a noise component; the X process might constitute a random environment for the Y process, as with hidden Markov models; the Y process might be a lower-dimensional function or reduction of the X process. In principle, maximum likelihood estimation for the X process parameters can be carried out by some form of the EM algorithm applied to the Y process data. In the paper we review some current methods for exact and approximate maximum likelihood estimation. We illustrate some of the issues by considering how to estimate the parameters of a stochastic Nash cascade model for runoff. In the case of k reservoirs, the outputs of these reservoirs form a k-dimensional vector Markov process, of which only the kth coordinate process is observed, usually at a discrete sample of time points.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
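The Nash cascade example in the abstract above treats the outputs of k linear reservoirs as a vector Markov process of which only the last coordinate is observed. A deterministic skeleton of such a cascade is sketched below; the stochastic version would add noise to the inflows and storages, and the coefficient values here are arbitrary assumptions.

```python
import numpy as np

def nash_cascade(rainfall, k=3, storage_coef=2.5, dt=1.0):
    """Route rainfall through k identical linear reservoirs in series.
    Only the outflow of the k-th reservoir would be 'observed' in the partially
    observed (hidden-state) setting described in the abstract."""
    storages = np.zeros(k)
    observed = np.empty(len(rainfall))
    for t, r in enumerate(rainfall):
        inflow = r
        for i in range(k):
            outflow = storages[i] / storage_coef      # linear release rule
            storages[i] += (inflow - outflow) * dt
            inflow = outflow
        observed[t] = inflow
    return observed

hydrograph = nash_cascade(np.r_[np.ones(5) * 4.0, np.zeros(45)])
```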
  • 43
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 9 (1995), S. 117-132 
    ISSN: 1436-3259
    Keywords: River quality ; network ; computer model ; thermodynamics
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract In this paper, concepts of network thermodynamics are applied to a river water quality model based on the Streeter-Phelps equations, in order to identify the corresponding physical components and their topology. The randomness in the parameters, input coefficients and initial conditions is then modeled by Gaussian white noises. From the stochastic components of the physical-system description of the problem and the concepts of physical system theory, a set of stochastic differential equations can be generated automatically in a computer, and recent developments on the automatic formulation of the moment equations based on Ito calculus can be used. This procedure is illustrated through the solution of an example stochastic river water quality problem, and it is also shown how other related problems with different configurations can be solved automatically with a single piece of software.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
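The water quality model referred to above is built on the Streeter-Phelps equations for BOD decay and oxygen deficit. A deterministic Euler sketch of those two equations is given below; the stochastic formulation in the paper would perturb the rate coefficients and inputs with Gaussian white noise, and all coefficient values here are illustrative assumptions.

```python
import numpy as np

def streeter_phelps(L0=10.0, D0=2.0, kd=0.3, ka=0.5, dt=0.05, days=20.0):
    """dL/dt = -kd*L (BOD decay), dD/dt = kd*L - ka*D (oxygen deficit)."""
    n = int(days / dt)
    L = np.empty(n)
    D = np.empty(n)
    L[0], D[0] = L0, D0
    for t in range(n - 1):
        L[t + 1] = L[t] - kd * L[t] * dt
        D[t + 1] = D[t] + (kd * L[t] - ka * D[t]) * dt
    return L, D

bod, deficit = streeter_phelps()
```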
  • 44
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 9 (1995), S. 171-205 
    ISSN: 1436-3259
    Keywords: AR-AIC-Bayes filter ; autoregressive spectral density estimation ; diagnostic checks for ARMA models ; exploratory data analysis ; fast Fourier transform ; Hurst coefficient ; long-memory time series ; periodogram smoothing ; riverflow time series ; spectral density plots
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Current methods of estimation of the univariate spectral density are reviewed and some improvements are made. It is suggested that spectral analysis may perhaps be best thought of as another exploratory data analysis (EDA) tool which complements, rather than competes with, the popular ARMA model building approach. A new diagnostic check for ARMA model adequacy based on the nonparametric spectral density is introduced. Additionally, two new algorithms for fast computation of the autoregressive spectral density function are presented. For improved interpretation of results, a new style of plotting the spectral density function is suggested. Exploratory spectral analyses of a number of hydrological time series are performed and some interesting periodicities are suggested for further investigation. The application of spectral analysis to determine the possible existence of long memory in natural time series is discussed with respect to long riverflow, tree-ring and mud varve series. Moreover, a comparison of the estimated spectral densities suggests that the ARMA models fitted previously to these datasets adequately describe the low-frequency component. Finally, the software and data used in this paper are available by anonymous ftp from fisher.stats.uwo.ca.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
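A minimal sketch of one of the nonparametric estimators reviewed above: the raw periodogram smoothed with a simple moving-average (Daniell-type) window. The smoothing span and sampling interval are arbitrary choices for illustration.

```python
import numpy as np

def smoothed_periodogram(x, span=7):
    """Raw periodogram via the FFT, smoothed with a moving average (Daniell window)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                                   # remove the sample mean
    n = len(x)
    pgram = np.abs(np.fft.rfft(x)) ** 2 / n            # raw periodogram ordinates
    freqs = np.fft.rfftfreq(n, d=1.0)
    kernel = np.ones(span) / span
    return freqs, np.convolve(pgram, kernel, mode="same")

freqs, sdf = smoothed_periodogram(np.random.default_rng(0).standard_normal(512))
```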
  • 45
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 9 (1995), S. 215-237 
    ISSN: 1436-3259
    Keywords: Computation ; discretization ; entropy ; networks ; time averaging ; water quality
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The computational aspects of using a new, entropy-based theory to predict water quality values at discontinued water quality monitoring stations are discussed. The main computational issues addressed are the level of discretization used in converting the continuous probability distribution of water quality values to the discrete levels required by the entropy function, and the choice of the interval of time over which the water quality value is assigned (the period of time averaging). Unlike most applications of entropy involving discretization of continuous functions, the results of using entropy theory to predict water quality values at discontinued monitoring stations appear to be insensitive to the level of discretization, even down to the very coarse discretization associated with only eight intervals. However, depending on the length of record available, the choice of the time interval over which the water quality values are assigned (the period of time averaging) appears to have a significant impact on the accuracy of the results.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
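A small sketch of the discretization step discussed above: the continuous distribution of water quality values is reduced to a handful of intervals before a Shannon entropy is computed. The eight-interval default mirrors the coarse level mentioned in the abstract; the sample data and bin scheme are hypothetical.

```python
import numpy as np

def binned_entropy(values, n_bins=8):
    """Shannon entropy (in nats) of a sample after equal-width discretization."""
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                        # ignore empty bins
    return float(-(p * np.log(p)).sum())

sample = np.random.default_rng(0).lognormal(size=365)
print(binned_entropy(sample, n_bins=8), binned_entropy(sample, n_bins=64))
```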
  • 46
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 9 (1995), S. 13-32 
    ISSN: 1436-3259
    Keywords: Distributed parameter filter ; shallow water equations ; distributed dynamical systems ; data assimilation ; white Gaussian noise
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Distributed parameter filtering theory is employed for estimating the state variables and associated error covariances of a dynamical distributed system under highly random tidal and meteorological influences. The stochastic-deterministic mathematical model of the physical system under study consists of the shallow water equations described by the momentum and continuity equations in which the external forces such as Coriolis force, wind friction, and atmospheric pressure are considered. White Gaussian noises in the system and measurement equations are used to account for the inherent stochasticity of the system. By using an optimal distributed parameter filter, the information provided by the stochastic dynamical model and the noisy measurements taken from the actual system are combined to obtain an optimal estimate of the state of the system, which in turn is used as the initial condition for the prediction procedure. The approach followed here has numerical approximation carried out at the end, which means that the numerical discretization is performed in the filtering equations, and not in the equations modelling the system. Therefore, the continuous distributed nature of the original system is maintained as long as possible and the propagation of modelling errors in the problem is minimized. The appropriateness of the distributed parameter filter is demonstrated in an application involving the prediction of storm surges in the North Sea. The results confirm excellent filter performance with considerable improvement with respect to the deterministic prediction.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
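The filtering cycle described in the entry above combines a model forecast with noisy measurements. A minimal finite-dimensional analogue of one forecast/analysis step (a discrete Kalman filter) is sketched below; the distributed-parameter filter in the paper operates on the continuous shallow water equations, so this is only the conceptual skeleton, with hypothetical operator names.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One forecast/analysis cycle: x, P = state estimate and its covariance,
    z = new measurement, F/H = model/observation operators, Q/R = noise covariances."""
    # forecast with the dynamical model
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # analysis: assimilate the measurement
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a
```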
  • 47
    Electronic Resource
    Electronic Resource
    Springer
    Stochastic environmental research and risk assessment 9 (1995), S. 77-88 
    ISSN: 1436-3259
    Keywords: Extreme rainfalls ; partial duration series ; regional estimation ; Bayes' theory
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying , Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract Based on the Partial Duration Series model, a regional Bayesian approach is introduced for the modelling of extreme rainfalls from a country-wide system of recording raingauges in Denmark. The application of the Bayesian principles is derived in the case of both exponential and generalized Pareto distributed exceedances. The method is applied to, respectively, the total precipitation depth and the maximum 10-minute rain intensity of individual storms from 41 stations. By means of the regional analysis, prior distributions of the parameters in the Partial Duration Series model are estimated. It is shown that the regional approach significantly reduces the uncertainty of the T-year event estimator compared to estimation based solely on at-site data. In addition, the regional approach provides quantile estimates at non-monitored sites.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
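For orientation, here is a purely at-site (non-Bayesian, non-regional) sketch of the Partial Duration Series quantile estimate with generalized Pareto exceedances; scipy's genpareto is used as the fitting routine, and the threshold and event-rate handling are simplified assumptions rather than the paper's regional estimator.

```python
import numpy as np
from scipy import stats

def t_year_event(exceedances, threshold, events_per_year, T=10.0):
    """At-site T-year level from a Partial Duration Series with GPD-distributed exceedances."""
    c, _, scale = stats.genpareto.fit(exceedances, floc=0.0)
    # An individual exceedance must fall below the T-year level with probability p.
    p = 1.0 - 1.0 / (events_per_year * T)
    return threshold + stats.genpareto.ppf(p, c, loc=0.0, scale=scale)

rng = np.random.default_rng(0)
exc = stats.genpareto.rvs(0.1, scale=5.0, size=120, random_state=rng)
print(t_year_event(exc, threshold=10.0, events_per_year=3.0, T=10.0))
```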
  • 48
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 68 (1995), S. 105-130 
    ISSN: 1436-4646
    Keywords: primary 49B34 ; secondary 90C31 ; 93C30 ; Variational inequalities ; Sensitivity analysis ; Generalized Jacobian
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Optimization problems with variational inequality constraints are converted to constrained minimization of a locally Lipschitz function. A nondifferentiable optimization method is used for this minimization; the required subgradients of the objective are computed by means of a special adjoint equation. Besides tests with some academic examples, the approach is applied to the computation of the Stackelberg—Cournot—Nash equilibria and to the numerical solution of a class of quasi-variational inequalities.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 49
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 69 (1995), S. 1-43 
    ISSN: 1436-4646
    Keywords: Mathematical programming ; Cutting planes ; Analytic center
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract An oracle for a convex set S ⊂ ℝ^n accepts as input any point z in ℝ^n, and if z ∈ S, then it returns 'yes', while if z ∉ S, then it returns 'no' along with a separating hyperplane. We give a new algorithm that finds a feasible point in S in cases where an oracle is available. Our algorithm uses the analytic center of a polytope as test point, and successively modifies the polytope with the separating hyperplanes returned by the oracle. The key to establishing convergence is that hyperplanes judged to be 'unimportant' are pruned from the polytope. If a ball of radius 2^−L is contained in S, and S is contained in a cube of side 2^(L+1), then we can show our algorithm converges after O(nL²) iterations and performs a total of O(n⁴L³ + TnL²) arithmetic operations, where T is the number of arithmetic operations required for a call to the oracle. The bound is independent of the number of hyperplanes generated in the algorithm. An important application in which an oracle is available is minimizing a convex function over S.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
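The test point used by the algorithm above is the analytic center of the current polytope. A minimal damped-Newton sketch for computing the analytic center of {x : Ax < b} from a strictly feasible starting point is given below; the oracle calls and cut pruning of the actual method are omitted, and the example data are hypothetical.

```python
import numpy as np

def analytic_center(A, b, x0, iters=50):
    """Maximize sum(log(b - A x)) by damped Newton steps; x0 must satisfy A x0 < b."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        s = b - A @ x                                # slack variables (> 0)
        grad = A.T @ (1.0 / s)                       # gradient of -sum log s
        hess = A.T @ np.diag(1.0 / s**2) @ A         # Hessian of -sum log s
        dx = np.linalg.solve(hess, -grad)
        t = 1.0
        while np.any(b - A @ (x + t * dx) <= 0):     # damping: stay strictly inside
            t *= 0.5
        x = x + t * dx
    return x

A = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])   # unit box around the origin
b = np.ones(4)
print(analytic_center(A, b, x0=np.array([0.5, -0.3])))  # approx. [0, 0]
```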
  • 50
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 69 (1995), S. 45-73 
    ISSN: 1436-4646
    Keywords: Cutting plane ; Stochastic programming ; Analytic center ; Interior-point method
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The stochastic linear programming problem with recourse has a dual block-angular structure. It can thus be handled by Benders' decomposition or by Kelley's method of cutting planes; equivalently, the dual problem has a primal block-angular structure and can be handled by Dantzig-Wolfe decomposition; the two approaches are in fact identical by duality. Here we investigate the use of the method of cutting planes from analytic centers applied to similar formulations. The only significant difference from the aforementioned methods is that new cutting planes (or columns, by duality) are generated not from the optimum of the linear programming relaxation, but from the analytic center of the localization set.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 51
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 69 (1995), S. 237-253 
    ISSN: 1436-4646
    Keywords: Variational inequality ; Nonlinear complementarity ; Nonlinear programming ; Continuation method
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper presents a continuation method for monotone variational inequality problems based on a new smooth equation formulation. The existence, uniqueness and limiting behavior of the path generated by the method are analyzed.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 52
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 69 (1995), S. 269-309 
    ISSN: 1436-4646
    Keywords: Quadratic programming ; Submodular constraints ; Kuhn-Tucker conditions ; Lexicographically optimal flow ; Parametric maximum flow
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract We present new strongly polynomial algorithms for special cases of convex separable quadratic minimization over submodular constraints. The main results are: an O(NM log(N²/M)) algorithm for the Network problem defined on a network of M arcs and N nodes; an O(n log n) algorithm for the Tree problem on n variables; an O(n log n) algorithm for the Nested problem; and a linear time algorithm for the Generalized Upper Bound problem. These algorithms are the best known so far for these problems. The status of the general problem and open questions are presented as well.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 69 (1995), S. 335-349 
    ISSN: 1436-4646
    Keywords: Polyhedral combinatorics ; Valid inequalities ; Travelling salesman ; Worst-case analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract We consider most of the known classes of valid inequalities for the graphical travelling salesman polyhedron and compute the worst-case improvement resulting from their addition to the subtour polyhedron. For example, we show that the comb inequalities cannot improve the subtour bound by a factor greater than 10/9. The corresponding factor for the class of clique tree inequalities is 8/7, while it is 4/3 for the path configuration inequalities.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 1-16 
    ISSN: 1436-4646
    Keywords: Stochastic programming ; Polyhedral functions ; Simplicial functions
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract A dual method is presented to solve a linearly constrained optimization problem with convex, polyhedral objective function, along with a fast bounding technique for the optimum value. The method can be used to solve problems obtained from LPs, where some of the constraints are not required to be exactly satisfied but are instead penalized by piecewise linear functions, which are added to the objective function of the original problem. The method generalizes an earlier solution technique developed by Prékopa (1990). Applications to stochastic programming are also presented.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 107-122 
    ISSN: 1436-4646
    Keywords: Linear complementarity problem ; Predictor—corrector algorithm ; Complexity analysis ; Central trajectory ; Curvature integral ; Interior-point methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract In this paper, we propose a predictor—corrector-type algorithm for solving the linear complementarity problem (LCP), and prove that the actual number of iterations needed by the algorithm is bounded from above and from below by a curvature integral along the central trajectory of the problem. This curvature integral is not greater than, and possibly smaller than, the best upper bound obtained in the literature to date.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 56
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 159-172 
    ISSN: 1436-4646
    Keywords: Parametric nonlinear programming ; Directional differentiability ; B-derivative ; Piecewise smooth function ; Nonunique multipliers ; Degeneracy
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Consider a parametric nonlinear optimization problem subject to equality and inequality constraints. Conditions under which a locally optimal solution exists and depends in a continuous way on the parameter are well known. We show, under the additional assumption of constant rank of the active constraint gradients, that the optimal solution is actually piecewise smooth, hence B-differentiable. We show, for the first time to our knowledge, a practical application of quadratic programming to calculate the directional derivative in the case when the optimal multipliers are not unique.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 57
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 191-200 
    ISSN: 1436-4646
    Keywords: Strictly pseudomonotone map ; Z-map ; Complementarity problem ; Least element problem
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Strictly pseudomonotone Z-maps operating on Banach lattices are considered. Equivalence of complementarity problems and least-element problems is established under certain regularity and growth conditions. This extends a recent result by Riddell (1981) for strictly monotone Z-maps to the pseudomonotone case. Some other problems equivalent to the above are discussed as well.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 251-277 
    ISSN: 1436-4646
    Keywords: Linear programming ; Barrier methods ; Interior-point methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Many interior-point methods for linear programming are based on the properties of the logarithmic barrier function. After a preliminary discussion of the convergence of the (primal) projected Newton barrier method, three types of barrier method are analyzed. These methods may be categorized as primal, dual and primal—dual, and may be derived from the application of Newton's method to different variants of the same system of nonlinear equations. A fourth variant of the same equations leads to a new primal—dual method. In each of the methods discussed, convergence is demonstrated without the need for a nondegeneracy assumption or a transformation that makes the provision of a feasible point trivial. In particular, convergence is established for a primal—dual algorithm that allows a different step in the primal and dual variables and does not require primal and dual feasibility. Finally, a new method for treating free variables is proposed.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    ISSN: 1436-4646
    Keywords: Linear programming ; Mixed-integer programming ; Large-scale optimization ; Airline fleet assignment
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Given a flight schedule and set of aircraft, the fleet assignment problem is to determine which type of aircraft should fly each flight segment. This paper describes a basic daily, domestic fleet assignment problem and then presents chronologically the steps taken to solve it efficiently. Our model of the fleet assignment problem is a large multi-commodity flow problem with side constraints defined on a time-expanded network. These problems are often severely degenerate, which leads to poor performance of standard linear programming techniques. Also, the large number of integer variables can make finding optimal integer solutions difficult and time-consuming. The methods used to attack this problem include an interior-point algorithm, dual steepest edge simplex, cost perturbation, model aggregation, branching on set-partitioning constraints and prioritizing the order of branching. The computational results show that the algorithm finds solutions with a maximum optimality gap of 0.02% and is more than two orders of magnitude faster than using default options of a standard LP-based branch-and-bound code.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 77-100 
    ISSN: 1436-4646
    Keywords: Convex linearly constrained problems ; Variational inequalities ; Interior methods ; Entropy-like proximal method ; Maximal monotone operator
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract In this paper, an entropy-like proximal method for the minimization of a convex function subject to positivity constraints is extended to an interior algorithm in two directions. First, to general linearly constrained convex minimization problems and second, to variational inequalities on polyhedra. For linear programming, numerical results are presented and quadratic convergence is established.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 61
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 29-50 
    ISSN: 1436-4646
    Keywords: Max-cut ; Cut polytope ; Metric polytope ; Linear relaxation ; One-third-integrality ; Box one-third-integrality ; Forbidden minor
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Given a graph G = (V, E), the metric polytope S(G) is defined by the inequalities x(F) − x(C∖F) ⩽ |F| − 1 for F ⊆ C, |F| odd, C a cycle of G, and 0 ⩽ x_e ⩽ 1 for e ∈ E. Optimization over S(G) provides an approximation for the max-cut problem. The graph G is called 1/d-integral if all the vertices of S(G) have their coordinates in {i/d ∣ 0 ⩽ i ⩽ d}. We prove that the class of 1/d-integral graphs is closed under minors, and we present several minimal forbidden minors for 1/3-integrality. In particular, we characterize the 1/3-integral graphs on seven nodes. We study several operations preserving 1/d-integrality, in particular the k-sum operation for 0 ⩽ k ⩽ 3. We prove that series parallel graphs are characterized by the following stronger property: all vertices of the polytope S(G) ∩ {x ∣ ℓ ⩽ x ⩽ u} are 1/3-integral for every choice of 1/3-integral bounds ℓ, u on the edges of G.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 62
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 101-112 
    ISSN: 1436-4646
    Keywords: Minmax ; Maximal covering problems ; Multi criteria decision-making
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract In this paper we introduce the parametric minquantile problem, a weighted generalisation of kth maximum minimisation. It is shown that, under suitable quasiconvexity assumptions, its resolution can be reduced to solving a polynomial number of minmax problems. It is also shown how this simultaneously solves (parametric) maximal covering problems. It follows that bicriteria problems, where the aim is to both maximize the covering and minimize the cover-level, are reducible to a discrete problem, on which any multiple criteria method may be applied.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 63
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 71-76 
    ISSN: 1436-4646
    Keywords: Location theory ; Fermat—Weber problem ; Weiszfeld's iterative algorithm
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The Fermat—Weber location problem requires finding a point in ℝ^N that minimizes the sum of weighted Euclidean distances to m given points. A one-point iterative method was first introduced by Weiszfeld in 1937 to solve this problem. Since then several research articles have been published on the method and generalizations thereof. Global convergence of Weiszfeld's algorithm was proven in a seminal paper by Kuhn in 1973. However, since the m given points are singular points of the iteration functions, convergence is conditional on none of the iterates coinciding with one of the given points. In addressing this problem, Kuhn concluded that whenever the m given points are not collinear, Weiszfeld's algorithm will converge to the unique optimal solution except for a denumerable set of starting points. As late as 1989, Chandrasekaran and Tamir demonstrated with counter-examples that convergence may not occur for continuous sets of starting points when the given points are contained in an affine subspace of ℝ^N. We resolve this open question by proving that Weiszfeld's algorithm converges to the unique optimal solution for all but a denumerable set of starting points if, and only if, the convex hull of the given points is of dimension N.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
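A minimal sketch of the Weiszfeld iteration discussed in the entry above. The handling of an iterate that lands exactly on a given point is a simplification (the paper's analysis is precisely about such coincidences), and the tolerances and test points are arbitrary.

```python
import numpy as np

def weiszfeld(points, weights=None, tol=1e-10, max_iter=10_000):
    """Weighted Fermat-Weber point of m points in R^N by Weiszfeld's fixed-point iteration."""
    P = np.asarray(points, dtype=float)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, dtype=float)
    x = np.average(P, axis=0, weights=w)              # weighted centroid as starting point
    for _ in range(max_iter):
        d = np.linalg.norm(P - x, axis=1)
        if np.any(d < 1e-12):                         # iterate coincides with a data point
            return P[np.argmin(d)]                    # simplistic: stop there
        c = w / d
        x_new = (c[:, None] * P).sum(axis=0) / c.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
print(weiszfeld(pts))
```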
  • 64
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 153-177 
    ISSN: 1436-4646
    Keywords: Network optimization ; Assignment problem ; Algorithms ; Experimental evaluation ; Cost scaling
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The cost scaling push-relabel method has been shown to be efficient for solving minimum-cost flow problems. In this paper we apply the method to the assignment problem and investigate implementations of the method that take advantage of assignment's special structure. The results show that the method is very promising for practical use.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 195-206 
    ISSN: 1436-4646
    Keywords: Superfluous matrix ; Linear complementarity problem
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Superfluous matrices were introduced by Howe (1983) in linear complementarity. In general, producing examples of this class is tedious (a few examples can be found in Chapter 6 of Cottle, Pang and Stone (1992)). To overcome this problem, we define a new class of matrices $$\bar Z$$ and establish that in $$\bar Z$$ superfluous matrices of any order n ⩾ 4 can easily be constructed. For every integer k, an example of a superfluous matrix of degree k is exhibited at the end.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 249-258 
    ISSN: 1436-4646
    Keywords: Combinatorial optimization ; Integrality of polyhedra ; Generalized set packing ; Covering
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract A 0, ±1-matrix A is balanced if, in every submatrix with two nonzero entries per row and column, the sum of the entries is a multiple of four. This definition was introduced by Truemper (1978) and generalizes the notion of a balanced 0, 1-matrix introduced by Berge (1970). In this paper, we extend a bicoloring theorem of Berge (1970) and total dual integrality results of Fulkerson, Hoffman and Oppenheim (1974) to balanced 0, ±1-matrices.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 71 (1995), S. 369-370 
    ISSN: 1436-4646
    Keywords: Local Lipschitz property ; Infinite-dimensional Hilbert space
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract An oversight in a paper of Correa and Lemaréchal (this journal, 1993) is noted; a counterexample is given.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 68
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 80 (1998), S. 17-33 
    ISSN: 1436-4646
    Keywords: Dual simplex method ; Maximum flow ; Strongly polynomial ; Preflow algorithm ; Valid distance labels
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper presents dual network simplex algorithms that require at most 2nm pivots and O(n²m) time for solving a maximum flow problem on a network of n nodes and m arcs. Refined implementations of these algorithms and a related simplex variant that is not strictly speaking a dual simplex algorithm are shown to have a complexity of O(n³). The algorithms are based on the concept of a preflow and depend upon the use of node labels that are underestimates of the distances from the nodes to the sink node in the extended residual graph associated with the current flow. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 80 (1998), S. 35-61 
    ISSN: 1436-4646
    Keywords: Graph partitioning ; Linear programming ; Bundle method ; Parallel optimization
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper describes heuristics for partitioning a general M × N matrix into arrowhead form. Such heuristics are useful for decomposing large, constrained optimization problems into forms that are amenable to parallel processing. The heuristics presented can be easily implemented using publicly available graph partitioning algorithms. The application of such techniques to solving large linear programs is described. Extensive computational results on the effectiveness of our partitioning procedures and their usefulness for parallel optimization are presented. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 80 (1998), S. 129-160 
    ISSN: 1436-4646
    Keywords: Semidefinite programming ; Infeasible-interior-point method ; Predictor—corrector method ; Superlinear convergence ; Primal—dual nondegeneracy
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract An example of an SDP (semidefinite program) exhibits a substantial difficulty in proving the superlinear convergence of a direct extension of the Mizuno—Todd—Ye type predictor—corrector primal-dual interior-point method for LPs (linear programs) to SDPs, and suggests that we need to force the generated sequence to converge to a solution tangentially to the central path (or trajectory). A Mizuno—Todd—Ye type predictor—corrector infeasible-interior-point algorithm incorporating this additional restriction for monotone SDLCPs (semidefinite linear complementarity problems) enjoys superlinear convergence under strict complementarity and nondegeneracy conditions. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 71
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 80 (1998), S. 161-169 
    ISSN: 1436-4646
    Keywords: Variational inequalities ; Complementarity problems ; Walrasian equilibrium ; Computational general equilibrium
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper explains a method by which the number of variables in a variational inequality having a certain form can be substantially reduced by changing the set over which the variational inequality is posed. The method applies in particular to certain economic equilibrium problems occurring in applications. We explain and justify the method, and give examples of its application, including a numerical example in which the solution time for the reduced problem was approximately 2% of that for the problem in its original form. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 81 (1998), S. 201-214 
    ISSN: 1436-4646
    Keywords: Integer programming ; Cutting planes ; Cover inequalities ; Lifting ; Gomory mixed integer cuts ; Cut-and-branch
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract We investigate the use of cutting planes for integer programs with general integer variables. We show how cutting planes arising from knapsack inequalities can be generated and lifted as in the case of 0–1 variables. We also explore the use of Gomory's mixed-integer cuts. We address both theoretical and computational issues and show how to embed these cutting planes in a branch-and-bound framework. We compare results obtained by using our cut generation routines in two existing systems with a commercially available branch-and-bound code on a range of test problems arising from practical applications. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
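For a flavour of the cut generation discussed in the entry above, here is a sketch of the classical pure-integer Gomory fractional cut read off a simplex tableau row; the paper's lifted cover and Gomory mixed-integer cuts are more involved, and the row data below are hypothetical.

```python
import numpy as np

def gomory_fractional_cut(row_coeffs, rhs, eps=1e-9):
    """From a tableau row  x_B + sum_j a_j * x_j = rhs  of an all-integer LP relaxation,
    return the cut  sum_j frac(a_j) * x_j >= frac(rhs), or None if rhs is integral."""
    frac = lambda v: v - np.floor(v)
    f0 = frac(rhs)
    if f0 < eps or 1.0 - f0 < eps:
        return None                      # basic variable already integral; no cut
    return frac(np.asarray(row_coeffs, dtype=float)), f0

cut = gomory_fractional_cut([0.5, -1.25, 2.0], 3.75)
print(cut)   # coefficients (0.5, 0.75, 0.0) and right-hand side 0.75
```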
  • 73
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 81 (1998), S. 229-256 
    ISSN: 1436-4646
    Keywords: Branch-and-cut algorithm ; Clustering ; Compiler design ; Equipartitioning ; Finite element method ; Graph partitioning ; Layout of electronic circuits ; Separation heuristics
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract In this paper we consider the problem of k-partitioning the nodes of a graph with capacity restrictions on the sum of the node weights in each subset of the partition, and the objective of minimizing the sum of the costs of the edges between the subsets of the partition. Based on a study of valid inequalities, we present a variety of separation heuristics for cycle, cycle with ears, knapsack tree and path-block cycle inequalities among others. The separation heuristics, plus primal heuristics, have been implemented in a branch-and-cut routine using a formulation including variables for the edges with nonzero costs and node partition variables. Results are presented for three classes of problems: equipartitioning problems arising in finite element methods and partitioning problems associated with electronic circuit layout and compiler design. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 81 (1998), S. 327-347 
    ISSN: 1436-4646
    Keywords: Convex composite function ; Second-order global optimality ; Second-order duality ; Variational inequality
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract In recent years second-order sufficient conditions of an isolated local minimizer for convex composite optimization problems have been established. In this paper, second-order optimality conditions are obtained of a global minimizer for convex composite problems with a non-finite valued convex function and a twice strictly differentiable function by introducing a generalized representation condition. This result is applied to a minimization problem with a closed convex set constraint which is shown to satisfy the basic constraint qualification. In particular, second-order necessary and sufficient conditions of a solution for a variational inequality problem with convex composite inequality constraints are obtained. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 82 (1998), S. 1-1 
    ISSN: 1436-4646
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 82 (1998), S. 3-12 
    ISSN: 1436-4646
    Keywords: Symmetric submodular function minimization ; Submodular function minimization ; Symmetric submodular functions ; Submodular functions ; Submodular systems
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract We describe a purely combinatorial algorithm which, given a submodular set function f on a finite set V, finds a nontrivial subset A of V minimizing f[A] + f[V ∖ A]. This algorithm, an extension of the Nagamochi—Ibaraki minimum cut algorithm as simplified by Stoer and Wagner [M. Stoer, F. Wagner, A simple min cut algorithm, Proceedings of the European Symposium on Algorithms ESA '94, LNCS 855, Springer, Berlin, 1994, pp. 141–147] and by Frank [A. Frank, On the edge-connectivity algorithm of Nagamochi and Ibaraki, Laboratoire Artémis, IMAG, Université J. Fourier, Grenoble, 1994], minimizes any symmetric submodular function using O(|V|³) calls to a function value oracle. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 77
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 82 (1998), S. 41-81 
    ISSN: 1436-4646
    Keywords: Random sampling ; Greedy algorithm ; Matroid basis ; Matroid partitioning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Random sampling is a powerful tool for gathering information about a group by considering only a small part of it. We discuss some broadly applicable paradigms for using random sampling in combinatorial optimization, and demonstrate the effectiveness of these paradigms for two optimization problems on matroids: finding an optimum matroid basis and packing disjoint matroid bases. Application of these ideas to the graphic matroid led to fast algorithms for minimum spanning trees and minimum cuts. An optimum matroid basis is typically found by a greedy algorithm that grows an independent set into an optimum basis one element at a time. This continuous change in the independent set can make it hard to perform the independence tests needed by the greedy algorithm. We simplify matters by using sampling to reduce the problem of finding an optimum matroid basis to the problem of verifying that a given fixed basis is optimum, showing that the two problems can be solved in roughly the same time. Another application of sampling is to packing matroid bases, also known as matroid partitioning. Sampling reduces the number of bases that must be packed. We combine sampling with a greedy packing strategy that reduces the size of the matroid. Together, these techniques give accelerated packing algorithms. We give particular attention to the problem of packing spanning trees in graphs, which has applications in network reliability analysis. Our results can be seen as generalizing certain results from random graph theory. The techniques have also been effective for other packing problems. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    ISSN: 1436-4646
    Keywords: Quadratic assignment ; Special cases ; Polynomially solvable ; Anti-Monge matrices ; Toeplitz matrices
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper investigates a restricted version of the Quadratic Assignment Problem (QAP), where one of the coefficient matrices is an Anti-Monge matrix with non-decreasing rows and columns and the other coefficient matrix is a symmetric Toeplitz matrix. This restricted version is called the Anti-Monge—Toeplitz QAP. There are three well-known combinatorial problems that can be modeled via the Anti-Monge—Toeplitz QAP: (P1) The “Turbine Problem”, i.e. the assignment of given masses to the vertices of a regular polygon such that the distance of the center of gravity of the resulting system to the center of the polygon is minimized. (P2) The Traveling Salesman Problem on symmetric Monge distance matrices. (P3) The arrangement of data records with given access probabilities in a linear storage medium in order to minimize the average access time. We identify conditions on the Toeplitz matrix B that lead to a simple solution for the Anti-Monge—Toeplitz QAP: the optimal permutation can be given in advance without regarding the numerical values of the data. The resulting theorems generalize and unify several known results on problems (P1), (P2), and (P3). We also show that the Turbine Problem is NP-hard and consequently, that the Anti-Monge—Toeplitz QAP is NP-hard in general. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
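As a small illustration of the matrix classes named in the abstract above, the sketch below checks the defining inequalities of an Anti-Monge matrix (testing adjacent 2x2 submatrices suffices) and the symmetric Toeplitz property. The example matrices are hypothetical and the functions are not taken from the paper.

```python
# A matrix A is Anti-Monge iff a[i][j] + a[i+1][j+1] >= a[i][j+1] + a[i+1][j]
# holds for every adjacent 2x2 submatrix; a symmetric Toeplitz matrix B has
# entries b[i][j] that depend only on |i - j|.

def is_anti_monge(a):
    rows, cols = len(a), len(a[0])
    return all(
        a[i][j] + a[i + 1][j + 1] >= a[i][j + 1] + a[i + 1][j]
        for i in range(rows - 1)
        for j in range(cols - 1)
    )

def is_symmetric_toeplitz(b):
    n = len(b)
    return all(b[i][j] == b[abs(i - j)][0] for i in range(n) for j in range(n))

# Hypothetical 3x3 data, for illustration only:
A = [[1, 2, 4], [2, 4, 7], [4, 7, 11]]    # Anti-Monge, rows/columns non-decreasing
B = [[0, 5, 2], [5, 0, 5], [2, 5, 0]]     # symmetric Toeplitz
print(is_anti_monge(A), is_symmetric_toeplitz(B))   # True True
```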
  • 79
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 135-156 
    ISSN: 1436-4646
    Keywords: Key words: bound constrained quadratic programming – Huber’s M-estimator – condition estimation – Newton iteration – factorization update
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: …λ1, the smallest eigenvalue of a symmetric, positive definite matrix, and is solved by Newton iteration with line search. The paper describes the algorithm and its implementation including estimation of λ1, how to get a good starting point for the iteration, and up- and downdating of Cholesky factorization. Results of extensive testing and comparison with other methods for constrained QP are given.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 80
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 35-49 
    ISSN: 1436-4646
    Keywords: Key words: bimatrix game – quasi-strict equilibrium ; Mathematics Subject Classification (1991): 90D05
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 81
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 107-134 
    ISSN: 1436-4646
    Keywords: Key words: equilibrium constraints – variational inequality problems – strong monotonicity – optimality conditions – global convergence ; Mathematics Subject Classification (1991): 90C30, 90C33, 65K05
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 363-377 
    ISSN: 1436-4646
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: …A1,...,Am are true, then at least ℓ of the propositions B1,...,Bn are true. The main result of the paper is that the procedure in fact provides a convex hull description.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 83
    ISSN: 1436-4646
    Keywords: Key words: maximum-entropy sampling – branch and bound – nonlinear programming
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 259-276 
    ISSN: 1436-4646
    Keywords: Key words: linear complementary problems – Q-matrices – polyhedral combinatorics – triangulations of point configurations – 0-1 polytopes
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: …x⊤(Mx+q)=0, Mx+q≥0, x≥0 has a solution. We explain how one can use the polyhedral structure of the set of all triangulations of a finite point set to determine if an n×n matrix M is a Q-matrix. Our implementation of the algorithm is practical for deciding the Q-nature for all M with n≤8. (A minimal complementarity check in code follows this record.)
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
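The Q-property asks whether the complementarity system in the abstract above is solvable for every q. The hedged sketch below only verifies whether a given candidate x solves one instance LCP(M, q), which is the elementary test underlying such checks; the 2x2 data are hypothetical.

```python
import numpy as np

# x solves LCP(M, q) iff x >= 0, Mx + q >= 0, and x^T (Mx + q) = 0.

def solves_lcp(M, q, x, tol=1e-9):
    w = M @ x + q
    return (x >= -tol).all() and (w >= -tol).all() and abs(x @ w) <= tol

# Hypothetical 2x2 instance, for illustration only:
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = np.array([0.5, 0.0])          # candidate: w = Mx + q = (0, 1.5), x^T w = 0
print(solves_lcp(M, q, x))        # True
```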
  • 85
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 593-616 
    ISSN: 1436-4646
    Keywords: Key words: concave optimization – conical algorithms – ω-subdivisions Mathematics Subject Classification (1991): 90C26, 65K05
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. In this paper the problem of finding the global optimum of a concave function over a polytope is considered. A well-known class of algorithms for this problem is the class of conical algorithms. In particular, the conical algorithm based on the so called ω-subdivision strategy is considered. It is proved that, for any given accuracy ε〉0, this algorithm stops in a finite time by returning an ε-optimal solution for the problem, while it is convergent for ε=0.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 86
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 439-467 
    ISSN: 1436-4646
    Keywords: Mathematics Subject Classification (1991): 90C11
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. We investigate strong inequalities for mixed 0-1 integer programs derived from flow cover inequalities. Flow cover inequalities are usually not facet defining and need to be lifted to obtain stronger inequalities. However, because of the sequential nature of the standard lifting techniques and the complexity of the optimization problems that have to be solved to obtain lifting coefficients, lifting of flow cover inequalities is computationally very demanding. We present a computationally efficient way to lift flow cover inequalities based on sequence independent lifting techniques and give computational results that show the effectiveness of our lifting procedures.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 87
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 85 (1999), S. 525-540 
    ISSN: 1436-4646
    Keywords: Key words: semidefinite programming – perturbation theory – Kantorovič theory – condition number Mathematics Subject Classification (1991): 90C31, 90C25, 90C05
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: …Kantorovič theory. This approach also quantifies the size of permissible perturbations. We include a discussion of these results for block diagonal semidefinite programs, of which linear programming is a special case.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 88
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 219-223 
    ISSN: 1436-4646
    Keywords: Key words: linear programming – computational complexity – complexity measure Mathematics Subject Classification (1991): 90C05, 90C60, 68Q25
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. Given an m×n integer matrix A of full row rank, we consider the problem of computing the maximum of ∥B^{-1}A∥₂ where B varies over all bases of A. This quantity appears in various places in the mathematical programming literature. More recently, the logarithm of this number was the determining factor in the complexity bound of Vavasis and Ye’s primal-dual interior-point algorithm. We prove that the problem of approximating this maximum norm, even within an exponential (in the dimension of A) factor, is NP-hard. Our proof is based on a closely related result of L. Khachiyan [1]. (A brute-force evaluation sketch for small instances follows this record.)
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
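For very small instances the quantity discussed above can simply be evaluated by brute force over all bases, which makes the hardness statement concrete. The sketch below does exactly that with NumPy; the example matrix is hypothetical.

```python
import itertools
import numpy as np

# Brute-force evaluation of max_B ||B^{-1} A||_2 over all bases B of A,
# where B is a nonsingular m x m submatrix formed by m columns of A.
# Only feasible for tiny instances; approximating it is NP-hard in general.

def max_basis_norm(A):
    m, n = A.shape
    best = 0.0
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                      # column set is not a basis
        best = max(best, np.linalg.norm(np.linalg.solve(B, A), 2))
    return best

# Hypothetical 2x3 full-row-rank matrix, for illustration only:
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
print(max_basis_norm(A))
```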
  • 89
    ISSN: 1436-4646
    Keywords: Key words: non-interior point method – complementarity problem – smoothing function – homotopy method
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. We propose a class of non-interior point algorithms for solving the complementarity problem (CP): Find a nonnegative pair (x,y)∈ℝ^{2n} satisfying y=f(x) and x_i y_i=0 for every i∈{1,2,...,n}, where f is a continuous mapping from ℝ^n to ℝ^n. The algorithms are based on the Chen-Harker-Kanzow-Smale smoothing functions for the CP, and have the following features: (a) it traces a trajectory in ℝ^{3n} which consists of solutions of a family of systems of equations with a parameter, (b) it can be started from an arbitrary (not necessarily positive) point in ℝ^{2n}, in contrast to most interior-point methods, and (c) its global convergence is ensured for a class of problems including (not strongly) monotone complementarity problems having a feasible interior point. To construct the algorithms, we give a homotopy and show the existence of a trajectory leading to a solution under a relatively mild condition, and propose a class of algorithms involving suitable neighborhoods of the trajectory. We also give a sufficient condition on the neighborhoods for global convergence and two examples satisfying it. (A one-dimensional smoothing-and-Newton sketch follows this record.)
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
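A minimal illustration of the Chen-Harker-Kanzow-Smale smoothing idea mentioned above: the function phi_mu(a, b) = a + b - sqrt((a - b)^2 + 4 mu^2) vanishes exactly when a >= 0, b >= 0 and a*b = mu^2, so driving mu to zero recovers complementarity. The one-dimensional mapping f(x) = x - 1 and the parameter schedule below are hypothetical choices, not the algorithm of the paper.

```python
import numpy as np

# Solve the 1-D complementarity problem  x >= 0, f(x) >= 0, x * f(x) = 0
# (with f(x) = x - 1, solution x = 1) by Newton steps on the smoothed
# equation phi_mu(x, f(x)) = 0 while the smoothing parameter mu -> 0.

def f(x):  return x - 1.0          # hypothetical mapping, for illustration only
def df(x): return 1.0

def phi(a, b, mu):
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

def solve_cp(x=5.0, mu=1.0):
    for _ in range(50):
        a, b = x, f(x)
        r = np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)
        # derivative of phi(x, f(x)) with respect to x
        dphi = 1.0 + df(x) - (a - b) * (1.0 - df(x)) / r
        x -= phi(a, b, mu) / dphi   # Newton step on the smoothed equation
        mu *= 0.5                   # drive the smoothing parameter to zero
    return x

print(solve_cp())                   # approximately 1.0
```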
  • 90
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 41-50 
    ISSN: 1436-4646
    Keywords: Key words: resolvent method – proximal point method – bundle method – bundle-trust region method – subgradient Mathematics Subject Classification (1991): 90C25, 49M45
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. This paper establishes a linear convergence rate for a class of epsilon-subgradient descent methods for minimizing certain convex functions on ℝ^n. Currently prominent methods belonging to this class include the resolvent (proximal point) method and the bundle method in proximal form (considered as a sequence of serious steps). Other methods, such as a variant of the proximal point method given by Correa and Lemaréchal, can also fit within this framework, depending on how they are implemented. The convex functions covered by the analysis are those whose conjugates have subdifferentials that are locally upper Lipschitzian at the origin, a property generalizing classical regularity conditions. (A minimal one-dimensional proximal point iteration follows this record.)
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
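The resolvent (proximal point) method named in the abstract above iterates x_{k+1} = argmin_x f(x) + (1/(2*lam)) * (x - x_k)^2. Below is a hedged one-dimensional sketch with f(x) = |x|, whose proximal map is soft-thresholding; the step size and starting point are arbitrary and the example is not meant to reproduce the paper's rate analysis.

```python
# Proximal point iteration on f(x) = |x|:
#   x_{k+1} = prox_{lam * f}(x_k) = soft-threshold of x_k at level lam.

def prox_abs(x, lam):
    # Proximal map of f(x) = |x|: shrink x toward 0 by lam.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x, lam = 3.7, 1.0
trajectory = [x]
for _ in range(6):
    x = prox_abs(x, lam)
    trajectory.append(x)
print(trajectory)    # [3.7, 2.7, 1.7, 0.7, 0.0, 0.0, 0.0] -- minimizer reached
```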
  • 91
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 387-415 
    ISSN: 1436-4646
    Keywords: Key words: semi-infinite optimization – reduction approach – stationary point – strong stability – extended Mangasarian-Fromovitz constraint qualification Mathematics Subject Classification (1991): 90C30, 90C31, 90C34, 49M39
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. The paper deals with semi-infinite optimization problems which are defined by finitely many equality constraints and infinitely many inequality constraints. We generalize the concept of strongly stable stationary points which was introduced by Kojima for finite problems; it refers to the local existence and uniqueness of a stationary point for each sufficiently small perturbed problem, where perturbations up to second order are allowed. Under the extended Mangasarian-Fromovitz constraint qualification we present equivalent conditions for the strong stability of a considered stationary point in terms of first and second derivatives of the involved functions. In particular, we discuss the case where the reduction approach is not satisfied.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 92
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 313-334 
    ISSN: 1436-4646
    Keywords: Key words: linear programming – potential functions – infeasible-interior-point methods – homogeneity – self-dual
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 93
    ISSN: 1436-4646
    Keywords: Key words: basis recovery – partition – principal pivot transforms – Balinski-Tucker tableaus – quadratic programming – linear complementarity problems – interior point methods – sufficient matrices – crossover – Criss-Cross method
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximally complementary solutions. Maximally complementary solutions can be characterized by optimal partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A partition identification algorithm is an algorithm which generates a maximally complementary solution (and its corresponding partition), starting from any complementary solution. In linear programming such algorithms were respectively proposed by Megiddo in 1991 and Balinski and Tucker in 1969. In this paper we will present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal pivot transform and the orthogonality property of basis tableaus.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 94
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 417-431 
    ISSN: 1436-4646
    Keywords: Mathematics Subject Classification (1991): 90A11, 90B50, 90C90, 90D65
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. We study the following decision-making scenario: A linear program is solved by a set of agents arranged hierarchically in a tree, where each agent decides the level of certain variables, and has a distinct objective function, known to all agents. Authority is reflected in two ways: Agents higher in the tree set their variables first; and agents that are siblings in the tree resolve their game by focusing on the Nash equilibrium that is optimum for the agent above them. We give a necessary and sufficient condition for such a hierarchy to be efficient (i.e., to have perfect coordination, to ultimately optimize the objective of the firm). We study problems related to designing a hierarchy (assigning decision makers to positions in the tree) in order to achieve efficiency or otherwise optimize coordination.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 95
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 533-563 
    ISSN: 1436-4646
    Keywords: Key words: generalized linear complementarity problem – non-interior continuation method – Newton method – Q-quadratical convergence Mathematics Subject Classification (1991): 90C33
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. In this paper, we propose a non-interior continuation method for solving generalized linear complementarity problems (GLCP) introduced by Cottle and Dantzig. The method is based on a smoothing function derived from the exponential penalty function first introduced by Kort and Bertsekas for constrained minimization. This smoothing function can also be viewed as a natural extension of Chen-Mangasarian’s neural network smooth function. By using the smoothing function, we approximate the GLCP by a family of parameterized smooth equations. An algorithm is presented to follow the smoothing path. Under suitable assumptions, it is shown that the algorithm is globally convergent and locally Q-quadratically convergent. A few preliminary numerical results are also reported. (A short sketch of the smoothing function follows this record.)
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
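The exponential-penalty-based smoothing mentioned above can be illustrated on the plus function: p(t, mu) = t + mu * ln(1 + exp(-t/mu)) is the Chen-Mangasarian neural-network smoothing of max(t, 0), and its error is at most mu * ln 2. The sketch below only prints this approximation error for a few hypothetical sample points; it is not the paper's GLCP algorithm.

```python
import numpy as np

# Chen-Mangasarian smoothing of the plus function:
#   p(t, mu) = t + mu * log(1 + exp(-t / mu))  ~  max(t, 0),
# with 0 <= p(t, mu) - max(t, 0) <= mu * log(2), so mu -> 0 recovers max(t, 0).

def smoothed_plus(t, mu):
    # Numerically stable rewrite of t + mu*log(1 + exp(-t/mu)).
    return np.maximum(t, 0.0) + mu * np.log1p(np.exp(-np.abs(t) / mu))

t = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])      # arbitrary sample points
for mu in (1.0, 0.1, 0.01):
    err = np.max(np.abs(smoothed_plus(t, mu) - np.maximum(t, 0.0)))
    print(mu, err)   # maximal error shrinks roughly like mu * log(2)
```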
  • 96
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 463-473 
    ISSN: 1436-4646
    Keywords: Key words: semidefinite relaxations – quadratic programming Mathematics Subject Classification (1991): 20E28, 20G40, 20C20
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. We demonstrate that if A_1,...,A_m are symmetric positive semidefinite n×n matrices with positive definite sum and A is an arbitrary symmetric n×n matrix, then the relative accuracy, in terms of the optimal value, of the semidefinite relaxation max_X { Tr(AX) : Tr(A_iX) ≤ 1, i=1,...,m; X ⪰ 0 } (SDP) of the optimization program max { x^T A x : x^T A_i x ≤ 1, i=1,...,m } (P) is not worse than 1 - 1/(2 ln(2m²)). It is shown that this bound is sharp in order, as far as the dependence on m is concerned, and that a feasible solution x to (P) with x^T A x ≥ Opt(SDP)/(2 ln(2m²)) (*) can be found efficiently. This somewhat improves one of the results of Nesterov [4], where a bound similar to (*) is established for the case when all A_i are of rank 1. (A small modelling sketch of the relaxation follows this record.)
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
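A minimal modelling sketch of the relaxation (SDP) described above, assuming the cvxpy package and its default conic solver are available; the random data, sizes, and seed are purely illustrative and not taken from the paper.

```python
import numpy as np
import cvxpy as cp

# Semidefinite relaxation from the abstract above:
#   maximize Tr(A X)  subject to  Tr(A_i X) <= 1 (i = 1..m),  X PSD.

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                                  # arbitrary symmetric A
As = []
for _ in range(m):
    G = rng.standard_normal((n, n))
    As.append(G @ G.T + np.eye(n))                 # PSD A_i with positive definite sum

X = cp.Variable((n, n), PSD=True)
constraints = [cp.trace(Ai @ X) <= 1 for Ai in As]
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints)
prob.solve()

guarantee = prob.value / (2.0 * np.log(2.0 * m ** 2))
print("Opt(SDP) =", prob.value)
print("per the abstract's bound (*), some feasible x of (P) attains at least", guarantee)
```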
  • 97
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 86 (1999), S. 161-185 
    ISSN: 1436-4646
    Keywords: Mathematics Subject Classification (1991): 90C10
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract. This paper considers the precedence constrained knapsack problem. More specifically, we are interested in classes of valid inequalities which are facet-defining for the precedence constrained knapsack polytope. We study the complexity of obtaining these facets using the standard sequential lifting procedure. Applying this procedure requires solving a combinatorial problem. For valid inequalities arising from minimal induced covers, we identify a class of lifting coefficients for which this problem can be solved in polynomial time, by using a supermodular function, and for which the values of the lifting coefficients have a combinatorial interpretation. For the remaining lifting coefficients it is shown that this optimization problem is strongly NP-hard. The same lifting procedure can be applied to (1,k)-configurations, although in this case, the same combinatorial interpretation no longer applies. We also consider K-covers, to which the same procedure need not apply in general. We show that facets of the polytope can still be generated using a similar lifting technique. For tree knapsack problems, we observe that all lifting coefficients can be obtained in polynomial time. Computational experiments indicate that these facets significantly strengthen the LP-relaxation.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 98
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 27-45 
    ISSN: 1436-4646
    Keywords: Convex polytopes ; Enumeration of faces ; Adjacency ; Segments
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract We introduce the concept of a segment of a degenerate convex polytope specified by a system of linear constraints, and explain its importance in developing algorithms for enumerating the faces. Using segments, we describe an algorithm that enumerates all the faces, in time polynomial in their number. The role of segments in the unsolved problem of enumerating the extreme points of a convex polytope specified by a degenerate system of linear constraints, in time polynomial in the number of extreme points, is discussed.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 99
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 47-72 
    ISSN: 1436-4646
    Keywords: Bilevel programming ; Nonlinear nonconvex ; Nondifferentiable optimization ; Economic planning ; Sensitivity analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract This paper is concerned with general nonlinear nonconvex bilevel programming problems (BLPP). We derive necessary and sufficient conditions at a local solution and investigate the stability and sensitivity analysis at a local solution in the BLPP. We then explore an approach in which a bundle method is used in the upper-level problem with subgradient information from the lower-level problem. Two algorithms are proposed to solve the general nonlinear BLPP and are shown to converge to regular points of the BLPP under appropriate conditions. The theoretical analysis conducted in this paper seems to indicate that a sensitivity-based approach is rather promising for solving general nonlinear BLPP.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 100
    Electronic Resource
    Electronic Resource
    Springer
    Mathematical programming 70 (1995), S. 123-148 
    ISSN: 1436-4646
    Keywords: Generalized equations ; Variational inequalities ; Nonlinear programming ; Sensitivity analysis ; Power series ; Strong regularity ; Constrained optimization ; Perturbation theory
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract We show that the solution of a strongly regular generalized equation subject to a scalar perturbation expands in pseudopower series in terms of the perturbation parameter, i.e., the expansion of orderk is the solution of generalized equations expanded to orderk and thus depends itself on the perturbation parameter. In the polyhedral case, this expansion reduces to a usual Taylor expansion. These results are applied to the problem of regular perturbation in constrained optimization. We show that, if the strong regularity condition is satisfied, the property of quadratic growth holds and, at least locally, the solutions of the optimization problem and of the associated optimality system coincide. If, in addition the number of inequality constraints is finite, the solution and the Lagrange multiplier can be expanded in Taylor series. If the data are analytic, the solution and the multiplier are analytic functions of the perturbation parameter.
    Type of Medium: Electronic Resource
    Location Call Number Expected Availability
    BibTip Others were also interested in ...