ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2012-08-23
    Description: Detailed analyses of the chromatin around the BIM promoter have revealed that latent Epstein–Barr virus (EBV) triggers the recruitment of polycomb repressive complex 2 (PRC2) core subunits and the trimethylation of histone H3 lysine 27 (H3K27me3) at this locus. The recruitment is absolutely dependent on the nuclear proteins EBNA3A and EBNA3C; what is more, epitope-tagged EBNA3C was shown to be bound near the transcription start site (TSS). EBV induces no consistent changes in the steady-state expression of PRC2 components, but lentiviral delivery of shRNAs against PRC2 and PRC1 subunits disrupted EBV repression of BIM. The activation mark H3K4me3 is largely unaltered at this locus irrespective of H3K27me3 status, suggesting the establishment of a ‘bivalent’ chromatin domain. Consistent with the ‘poised’ nature of these domains, RNA polymerase II (Pol II) occupancy was not altered by EBV at the BIM TSS, but analysis of phospho-serine 5 on Pol II indicated that EBNA3A and EBNA3C together inhibit the initiation of BIM transcripts. B cell lines carrying EBV encoding a conditional EBNA3C-oestrogen receptor fusion revealed that this epigenetic repression of BIM was reversible, but took more than 3 weeks after EBNA3C was inactivated.
    Print ISSN: 0305-1048
    Electronic ISSN: 1362-4962
    Topics: Biology
  • 2
    Publication Date: 2014-06-20
    Description: We investigate the effects of galaxy formation on the baryonic acoustic oscillation (BAO) peak by applying semi-analytic modelling techniques to the Millennium-XXL, a 3 × 10¹¹-particle N-body simulation of similar volume to the future Euclid survey. Our approach explicitly incorporates the effects of tidal fields and stochasticity on halo formation, as well as the presence of velocity bias, spatially correlated merger histories, and the connection of all these with the observable and physical properties of galaxies. We measure significant deviations in the shape of the BAO peak from the expectations of a linear bias model built on top of the non-linear dark matter distribution. We find that the galaxy correlation function shows an excess close to the maximum of the BAO peak (r ~ 110 h⁻¹ Mpc) and a deficit at r ~ 90 h⁻¹ Mpc. Depending on the redshift, selection criteria and number density of the galaxy samples, these biased distortions can be up to 5 per cent in amplitude. They are, however, largely absorbed by marginalization over nuisance parameters in current analytical modelling of the BAO peak in configuration space, in particular into the parameter that controls the broadening due to non-linear evolution. As a result, the galaxy formation effects detected here are unlikely to bias the high-precision measurements planned by the upcoming generation of wide-field galaxy surveys.
    Print ISSN: 0035-8711
    Electronic ISSN: 1365-2966
    Topics: Physics
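The "broadening due to non-linear evolution" mentioned in the abstract is commonly modelled as Gaussian damping of the BAO feature. The sketch below is illustrative only, not the authors' pipeline: the toy correlation function, peak position (105 h⁻¹ Mpc) and widths are invented for demonstration. It shows how adding a damping scale in quadrature lowers and widens the peak while conserving its area:

```python
import numpy as np

def bao_peak(r, r_bao=105.0, width=8.0):
    # Toy correlation function: smooth power law plus a Gaussian BAO bump.
    return (r / 100.0) ** -2 + 0.02 * np.exp(-0.5 * ((r - r_bao) / width) ** 2)

def broadened_peak(r, sigma_nl, r_bao=105.0, width=8.0):
    # Non-linear evolution broadens the bump: widths add in quadrature,
    # the amplitude drops so that the area of the bump is conserved.
    w = np.sqrt(width ** 2 + sigma_nl ** 2)
    return (r / 100.0) ** -2 + 0.02 * (width / w) * np.exp(-0.5 * ((r - r_bao) / w) ** 2)

r = np.linspace(60.0, 150.0, 901)        # separations in h^-1 Mpc, 0.1 steps
xi_lin = bao_peak(r)                     # "linear" peak
xi_nl = broadened_peak(r, sigma_nl=10.0) # damped, broadened peak
```

At the peak position the damped curve sits below the undamped one, while in the wings (e.g. r ~ 120 h⁻¹ Mpc) it sits above it, which is the shape distortion that the broadening nuisance parameter absorbs.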
  • 3
    Publication Date: 2016-03-14
    Description: We introduce methods which allow observed galaxy clustering to be used together with observed luminosity or stellar mass functions to constrain the physics of galaxy formation. We show how the projected two-point correlation function of galaxies in a large semi-analytic simulation can be estimated to better than ~10 per cent using only a very small subsample of the subhalo merger trees. This allows measured correlations to be used as constraints in a Monte Carlo Markov Chain exploration of the astrophysical and cosmological parameter space. An important part of our scheme is an analytic profile which captures the simulated satellite distribution extremely well out to several halo virial radii. This is essential to reproduce the correlation properties of the full simulation at intermediate separations. As a first application, we use low-redshift clustering and abundance measurements to constrain a recent version of the Munich semi-analytic model. The preferred values of most parameters are consistent with those found previously, with significantly improved constraints and somewhat shifted ‘best’ values for parameters that primarily affect spatial distributions. Our methods allow multi-epoch data on galaxy clustering and abundance to be used as joint constraints on galaxy formation. This may lead to significant constraints on cosmological parameters even after marginalizing over galaxy formation physics.
    Print ISSN: 0035-8711
    Electronic ISSN: 1365-2966
    Topics: Physics
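A two-point correlation function like the one used as a constraint above can be estimated from pair counts. A minimal brute-force sketch using the natural DD/RR estimator on an invented toy point set (not the authors' tree-based subsampling scheme):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def pair_separation_hist(pos, edges):
    # Histogram of all unique pair separations (condensed distance matrix).
    return np.histogram(pdist(pos), bins=edges)[0]

def xi_natural(data, randoms, edges):
    # Natural estimator: xi = (DD/RR) * (pairs in randoms / pairs in data) - 1.
    dd = pair_separation_hist(data, edges)
    rr = pair_separation_hist(randoms, edges)
    nd, nr = len(data), len(randoms)
    return (dd / rr) * (nr * (nr - 1)) / (nd * (nd - 1)) - 1.0

edges = np.array([0.05, 0.10, 0.15, 0.20])
data = rng.random((1500, 3))     # unclustered toy "galaxies" in a unit box
randoms = rng.random((3000, 3))  # random comparison catalogue
xi = xi_natural(data, randoms, edges)
```

For an unclustered sample xi should be consistent with zero in every bin; real galaxy catalogues give a positive xi at small separations.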
  • 4
    Publication Date: 2014-08-01
    Description: We develop and test a new statistical method to measure the kinematic Sunyaev-Zel'dovich (kSZ) effect. A sample of independently detected clusters is combined with the cosmic flow field predicted from a galaxy redshift survey in order to derive a matched filter that optimally weights the kSZ signal for the sample as a whole, given the noise involved in the problem. We apply this formalism to realistic mock microwave skies based on cosmological N-body simulations, and demonstrate its robustness and performance. In particular, we carefully assess the various sources of uncertainty: cosmic microwave background primary fluctuations, instrumental noise, uncertainties in the determination of the velocity field, and effects introduced by miscentring of clusters and by uncertainties in the mass-observable relation (normalization and scatter). We show that available data (Planck maps and the MaxBCG catalogue) should deliver a 7.7σ detection of the kSZ. A similar cluster catalogue with broader sky coverage should increase the detection significance to ~13σ. We point out that such measurements could be binned in order to study the properties of the cosmic gas and velocity fields, or combined into a single measurement to constrain cosmological parameters or deviations of the law of gravity from General Relativity.
    Print ISSN: 0035-8711
    Electronic ISSN: 1365-2966
    Topics: Physics
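The core matched-filter idea, inverse-variance weighting of a known template against noisy data, can be illustrated in one dimension. This is a generic sketch with an invented Gaussian template, white noise and made-up numbers, not the paper's kSZ pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_filter_amplitude(data, template, noise_var):
    # For d = A * template + white noise of variance noise_var, the optimal
    # (minimum-variance unbiased) estimate of A is the template-weighted sum:
    #   A_hat = sum(t * d) / sum(t * t),  sigma_A = sqrt(noise_var / sum(t^2)).
    a_hat = np.dot(template, data) / np.dot(template, template)
    sigma = np.sqrt(noise_var / np.dot(template, template))
    return a_hat, sigma

n = 4096
template = np.exp(-0.5 * ((np.arange(n) - n / 2) / 50.0) ** 2)  # toy cluster profile
true_a = 2.0
data = true_a * template + rng.normal(0.0, 1.0, n)              # unit white noise
a_hat, sigma = matched_filter_amplitude(data, template, 1.0)
```

The recovered amplitude divided by sigma is the detection significance, the same quantity quoted in the abstract (7.7σ for Planck plus MaxBCG).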
  • 5
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Annals of the New York Academy of Sciences 635 (1991), p. 0
    ISSN: 1749-6632
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Natural Sciences in General
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical prospecting 38 (1990), p. 0
    ISSN: 1365-2478
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences , Physics
    Notes: This paper presents some results from an investigation into the utility of pattern recognition methods in seismic interpretation. The seismic instantaneous attributes of amplitude, phase and frequency provide a way of quantifying the character of a simple reflection. Measures of character can be developed from cross-plots and cluster analysis of these attributes. It is demonstrated that such seismic character can produce better-defined maps than a single attribute. These procedures can be extended to attributes derived from seismic trace segments, such as trace energy and centre frequency, and to multitrace attributes, but more effort is then needed to analyse the attributes and search out useful ones. An introduction is given to projection pursuit, which has proved a useful exploratory tool for the analysis of attribute relationships. It is important to stress that pattern recognition techniques simply help bring relationships and patterns in the data to the attention of the interpreter, and the most persistent problem in applying these techniques is the evaluation of potentially interesting patterns. The decision on what use can be made of them is highly interpretive and their calibration is difficult. Well control is vital but it normally allows only very limited supervision of a seismic classifier. An example is presented to illustrate these problems.
    Type of Medium: Electronic Resource
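The instantaneous attributes mentioned above are conventionally computed from the analytic signal (the trace plus i times its Hilbert transform). A minimal sketch on a synthetic 30 Hz trace; the sampling interval and test frequency are invented for demonstration:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(trace, dt):
    # Analytic signal z(t) = trace(t) + i * H[trace](t).
    z = hilbert(trace)
    amplitude = np.abs(z)                              # reflection strength (envelope)
    phase = np.unwrap(np.angle(z))                     # instantaneous phase (radians)
    frequency = np.gradient(phase, dt) / (2 * np.pi)   # instantaneous frequency (Hz)
    return amplitude, phase, frequency

dt = 0.002                            # 2 ms sampling
t = np.arange(0.0, 1.0, dt)
trace = np.sin(2 * np.pi * 30.0 * t)  # 30 Hz test signal
amp, ph, freq = instantaneous_attributes(trace, dt)
```

Cross-plotting these attributes sample by sample is the starting point for the cluster analysis of "seismic character" described in the notes.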
  • 7
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical prospecting 36 (1988), p. 0
    ISSN: 1365-2478
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences , Physics
    Notes: Methods of minimum entropy deconvolution (MED) try to take advantage of the non-Gaussian distribution of primary reflectivities in the design of deconvolution operators. Of these, Wiggins’ (1978) original method performs as well as any in practice. However, we present examples to show that it does not provide a reliable means of deconvolving seismic data: its operators are not stable and, instead of whitening the data, they often band-pass filter it severely. The method could be more appropriately called maximum kurtosis deconvolution, since the varimax norm it employs is really an estimate of kurtosis. Its poor performance is explained in terms of the relation between the kurtosis of a noisy band-limited seismic trace and the kurtosis of the underlying reflectivity sequence, and between the estimation errors in a maximum kurtosis operator and the data and design parameters. The scheme put forward by Fourmann in 1984, whereby the data are corrected by the phase rotation that maximizes their kurtosis, is a more practical method. This preserves the main attraction of MED, its potential for phase control, and leaves trace whitening and noise control to proven conventional methods. The correction can be determined without actually applying a whole series of phase shifts to the data. The application of the method is illustrated by means of practical and synthetic examples, and summarized by rules derived from theory. In particular, the signal-dominated bandwidth must exceed a threshold for the method to work at all, and estimation of the phase correction requires a considerable amount of data. Kurtosis can estimate phase better than other norms that are misleadingly declared to be more efficient by theory based on full-band, noise-free data.
    Type of Medium: Electronic Resource
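The Fourmann-style correction described above, finding the constant phase rotation that maximizes kurtosis, can be sketched as a grid search over rotations of the analytic signal. The wavelet, spike positions and grid size below are invented for demonstration; the paper notes the correction can also be found without an explicit scan:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def best_phase_rotation(trace, n_angles=180):
    # A constant phase rotation of the trace is rotated(t) = Re[z(t) * exp(-i*phi)],
    # where z is the analytic signal. Return the phi in [0, pi) that maximizes
    # sample kurtosis (the criterion behind the varimax norm).
    z = hilbert(trace)
    phis = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    ks = [kurtosis(np.real(z * np.exp(-1j * p))) for p in phis]
    return phis[int(np.argmax(ks))]

# Sparse reflectivity convolved with a 90-degree-rotated Gaussian wavelet:
# the kurtosis-maximizing rotation should undo that 90-degree phase shift.
refl = np.zeros(2000)
for i, pos in enumerate(range(100, 2000, 160)):
    refl[pos] = (-1.0) ** i * (1.0 + 0.2 * i)
zero_phase = np.exp(-0.5 * ((np.arange(81) - 40) / 6.0) ** 2)
wavelet_90 = np.imag(hilbert(zero_phase))   # 90-degree rotation of the wavelet
trace = np.convolve(refl, wavelet_90, mode="same")
phi = best_phase_rotation(trace)
```

Because a zero-phase wavelet on a sparse spike train is "spikier" (higher kurtosis) than any of its rotations, the search recovers a rotation close to 90 degrees.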
  • 8
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical prospecting 25 (1977), p. 0
    ISSN: 1365-2478
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences , Physics
    Notes: Optimum stacking filters based on estimates of trace signal-to-uncorrelated noise ratios are assessed and compared in performance with conventional straight stacking. It is shown that for the trace durations and signal bandwidths normally encountered in seismic reflection data the errors in estimating signal/noise ratios largely counteract the theoretical advantages of the optimum filter. The more specific the filter (e.g. the more frequency components included in its design) the more this is true. Even for a simple weighted stack independent of frequency, the performance is likely to be better than a straight (equal weights) stack only for relatively high signal/noise ratios, when the performance is not critical anyway.
    Type of Medium: Electronic Resource
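The comparison above, an SNR-weighted stack versus a straight (equal-weights) stack, can be sketched directly. Note that here the per-trace noise levels are known exactly, which is precisely the idealization the abstract warns about: in practice they must be estimated, and the estimation errors erode the advantage. All signal and noise numbers are invented:

```python
import numpy as np

def weighted_stack(traces, snr):
    # Frequency-independent optimum weights for equal signal amplitude and
    # independent noise: w_j proportional to the S/N power ratio (i.e. 1/sigma_j^2),
    # normalized to sum to one.
    w = np.asarray(snr, dtype=float)
    w = w / w.sum()
    return w @ traces

rng = np.random.default_rng(3)
n_traces, n_samples = 6, 2048
signal = np.sin(2 * np.pi * np.arange(n_samples) / 64.0)
noise_std = np.array([0.5, 0.5, 0.5, 3.0, 3.0, 3.0])    # three clean, three noisy traces
traces = signal + rng.normal(0.0, 1.0, (n_traces, n_samples)) * noise_std[:, None]

snr = 1.0 / noise_std ** 2            # known S/N power ratios (the idealized case)
opt = weighted_stack(traces, snr)
straight = traces.mean(axis=0)
```

With exact SNRs the weighted stack clearly beats the straight stack; the abstract's point is that with estimated SNRs this margin largely disappears except at high signal/noise ratios.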
  • 9
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical prospecting 21 (1973), p. 0
    ISSN: 1365-2478
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences , Physics
    Notes: A seismic trace recorded with suitable gain control can be treated as a stationary time series. Each trace, xj(t), from a set of traces can be broken down into two stationary components: a signal sequence, αj(t) * s(t − τj), which correlates from trace to trace, and an incoherent noise sequence, nj(t), which does not correlate from trace to trace. The model for a seismic trace used in this paper is thus xj(t) = αj(t) * s(t − τj) + nj(t), where the signal wavelet αj(t), the lag (moveout) of the signal τj, and the noise sequence nj(t) can vary in any manner from trace to trace. Given this model, a method for estimating the power spectra of the signal and incoherent noise components on each trace is presented. The method requires the calculation of the multiple coherence function γj(f) of each trace. γj(f) is the fraction of the power on trace j at frequency f that can be predicted in a least-square-error sense from all other traces. It is related to the signal-to-noise power ratio ρj(f) by an expression (not reproduced in this record) involving a factor Kj(f), which can be computed and is in general close to 1.0. The theory leading to this relation is given in an Appendix. Particular attention is paid to the statistical distributions of all estimated quantities. The statistical behaviour of cross-spectral and coherence estimates is complicated by the presence of bias as well as random deviations. Straightforward methods for removing this bias and setting up confidence limits, based on the principle of maximum likelihood and the Goodman distribution for the sample multiple coherence, are described. Actual field records differ from the assumed model mainly in having more than one correlatable component, components other than the required sequence of reflections being lumped together as correlated noise. When more than one correlatable component is present, the estimate for the signal power spectrum obtained by the multiple coherence method is approximately the sum of the power spectra of the correlatable components. A further practical drawback to estimating spectra from seismic data is the limited number of degrees of freedom available. Usually at least one second of stationary data on each trace is needed to estimate the signal spectrum with an accuracy of about 10%. Examples using synthetic data are presented to illustrate the method.
    Type of Medium: Electronic Resource
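The idea of recovering a signal-to-noise ratio from coherence can be illustrated with the two-trace (ordinary coherence) analogue of the paper's multiple-coherence method. For two traces sharing a common signal with equal, independent noise, the magnitude-squared coherence C satisfies sqrt(C) = ρ/(ρ + 1), so ρ = sqrt(C)/(1 − sqrt(C)); this two-trace relation is a simplification, not the paper's Kj(f) formula. Signal and noise levels are invented (true ρ = 1):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs = 500.0
n = int(60 * fs)                          # 60 s of "stationary" data
signal = rng.normal(0.0, 1.0, n)          # broadband common signal, unit power
x1 = signal + rng.normal(0.0, 1.0, n)     # trace 1: independent unit-power noise
x2 = signal + rng.normal(0.0, 1.0, n)     # trace 2: independent unit-power noise

f, c = coherence(x1, x2, fs=fs, nperseg=1024)   # Welch magnitude-squared coherence
g = np.sqrt(c)                                  # |coherency|
rho = g / (1.0 - g)                             # implied S/N power ratio per frequency
rho_mid = np.median(rho[5:-5])                  # avoid DC/Nyquist edge bins
```

The recovered median ρ sits near the true value of 1; the scatter and bias per frequency bin reflect exactly the limited degrees of freedom that the abstract warns about.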
  • 10
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical prospecting 40 (1992), p. 0
    ISSN: 1365-2478
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences , Physics
    Notes: In many branches of science, techniques designed for use in one context are used in other contexts, often with the belief that results which hold in the former will also hold or be relevant in the latter. Practical limitations are frequently overlooked or ignored. Three techniques used in seismic data analysis are often misused or their limitations poorly understood: (1) maximum entropy spectral analysis; (2) the role of goodness-of-fit and the real meaning of a wavelet estimate; (3) the use of multiple confidence intervals. It is demonstrated that in practice maximum entropy spectral estimates depend on a data-dependent smoothing window with unpleasant properties, which can result in poor spectral estimates for seismic data. Secondly, it is pointed out that the level of smoothing needed to give the least errors in a wavelet estimate will not give rise to the best goodness-of-fit between the seismic trace and the wavelet estimate convolved with the broadband synthetic. Even if the smoothing used corresponds to near-minimum errors in the wavelet, the actual noise realization on the seismic data can cause important perturbations in residual wavelets following wavelet deconvolution. Finally, the computation of multiple confidence intervals (e.g. at several spatial positions) is considered. Suppose a nominal, say 90%, confidence interval is calculated at each location. The confidence attaching to the simultaneous use of the confidence intervals is not then 90%. Methods do exist for working out suitable confidence levels. This is illustrated using porosity maps computed using conditional simulation.
    Type of Medium: Electronic Resource
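The last point, that m nominal 90% intervals do not give 90% joint confidence, is easy to quantify. A minimal sketch using the generic Šidák and Bonferroni adjustments (standard textbook corrections, not the conditional-simulation approach the notes describe):

```python
def sidak_level(joint_conf, m):
    # Per-interval confidence needed so that m *independent* intervals
    # all hold simultaneously with probability joint_conf.
    return joint_conf ** (1.0 / m)

def bonferroni_level(joint_conf, m):
    # Conservative alternative that requires no independence assumption.
    return 1.0 - (1.0 - joint_conf) / m

# Ten nominal 90% intervals hold jointly only with probability 0.9**10 ~ 0.35,
# so each interval must be widened to ~98.95% (Sidak) or 99% (Bonferroni)
# for 90% simultaneous confidence.
joint_if_naive = 0.90 ** 10
per_interval = sidak_level(0.90, 10)
```

This is the correction one would apply before, say, shading a porosity map with pointwise confidence bounds at many spatial positions.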