Abstract
Estimating the expected value of an observable appearing in a nonequilibrium stochastic process usually involves sampling. If the observable’s variance is high, many samples are required. In contrast, we show that performing the same task without sampling, using tensor network compression, efficiently captures high variances in systems of various geometries and dimensions. We provide examples for which matching the accuracy of our efficient method would require a sample size scaling exponentially with system size. In particular, the high-variance observable e^{−βW}, motivated by Jarzynski’s equality, with W the work done quenching from equilibrium at inverse temperature β, is exactly and efficiently captured by tensor networks.
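To illustrate why direct sampling of the Jarzynski observable e^{−βW} is costly, here is a minimal sketch, not from the paper, assuming for concreteness that the work W is Gaussian with mean mu and spread sigma (in that case the exact average exp(−βμ + β²σ²/2) is known in closed form, so the sample mean can be compared against it):

```python
import numpy as np

# Illustrative sketch (not the paper's method): naive Monte Carlo estimate of
# <exp(-beta*W)> when the work W is Gaussian. The estimator is dominated by
# rare trajectories with very negative W, so its variance is large and the
# sample mean converges slowly.
rng = np.random.default_rng(0)

beta, mu, sigma = 1.0, 1.0, 3.0  # inverse temperature; assumed work mean/spread
exact = np.exp(-beta * mu + 0.5 * beta**2 * sigma**2)  # closed form for Gaussian W

work = rng.normal(mu, sigma, size=100_000)
estimate = np.exp(-beta * work).mean()

rel_error = abs(estimate - exact) / exact
print(f"exact={exact:.3f} estimate={estimate:.3f} rel_error={rel_error:.2%}")
```

Even with 10^5 samples the relative error is typically sizable here, because the variance of exp(−βW) grows exponentially with β²σ²; this is the regime in which the paper argues a sampling-free tensor network contraction is efficient where sampling is not.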
Received 14 October 2014
DOI: https://doi.org/10.1103/PhysRevLett.114.090602
This article is available under the terms of the Creative Commons Attribution 3.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
© 2015 American Physical Society