ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2001-09-18
    Print ISSN: 1050-2947
    Electronic ISSN: 1094-1622
    Topics: Electrical Engineering, Measurement and Control Technology; Physics
  • 2
    Publication Date: 2021-01-01
    Print ISSN: 2168-6831
    Topics: Architecture, Civil Engineering, Surveying; Electrical Engineering, Measurement and Control Technology; Geosciences; Computer Science
  • 3
    Publication Date: 2019-07-12
    Description: A relatively simple algorithm, devised for use in an image-data-compression application, partitions a rectangular pixelated image (or any other rectangle on which a regular rectangular grid has already been superimposed) into a specified number of smaller rectangles, hereafter denoted segments. The algorithm has the following properties: No floating-point operations are needed. The segments tend to be nearly square (in the sense that their widths and heights in pixel units tend to be nearly equal). The segments tend to have nearly equal areas. The algorithm yields valid results (no zero-width or zero-height segments) as long as the specified number of segments, s, does not exceed the number of pixels (equivalently, the number of grid cells). The inputs to the algorithm are the positive integer s and the positive integers h and w, denoting the height and width, respectively, of the rectangle in pixel units. The limit on s for a valid result is thus s ≤ wh.
    Keywords: Man/System Technology and Life Support
    Type: NPO-30479, NASA Tech Briefs, July 2004; 29-30
    Format: application/pdf
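The integer-only partitioning described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the NPO-30479 algorithm itself: the row-band heuristic and the `partition` interface are inventions of the sketch, chosen only to exhibit the stated properties (integer arithmetic, near-square and near-equal-area segments, no empty segments for s ≤ wh).

```python
from math import isqrt

def partition(h, w, s):
    """Partition an h-by-w pixel grid into s rectangles using only integer
    arithmetic.  Returns (row, col, height, width) tuples.  Illustrative
    sketch only; the band-count heuristic is an assumption, not NPO-30479."""
    assert 1 <= s <= h * w, "valid only for s <= number of pixels"
    # Aim for roughly sqrt(s*h/w) row bands so segments come out near-square,
    # then clamp so every band holds >= 1 segment and no segment is empty.
    r = isqrt(s * h // w)
    r = max(r, 1, -(-s // w))   # at least ceil(s/w) bands, so widths stay >= 1
    r = min(r, s, h)            # at most s bands, and at most h pixel rows
    segments = []
    for i in range(r):
        y0, y1 = h * i // r, h * (i + 1) // r      # rows of this band
        n = s * (i + 1) // r - s * i // r          # segments in this band
        for j in range(n):
            x0, x1 = w * j // n, w * (j + 1) // n  # columns of this segment
            segments.append((y0, x0, y1 - y0, x1 - x0))
    return segments
```

Distributing s among the bands via the floor expression `s*(i+1)//r - s*i//r` is what makes the segment counts, heights, and widths differ by at most one, which in turn yields the near-equal areas.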
  • 4
    Publication Date: 2019-07-12
    Description: Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to a traditional ASIC (application-specific integrated circuit) and can be integrated as an intellectual property (IP) core into, e.g., a design that manages the instrument interface.
The FPGA implementation was benchmarked on the Xilinx Virtex IV LX25 device and ported to a Xilinx prototype board. The current implementation has a critical path of 29.5 ns, which dictated a clock speed of 33 MHz. The critical-path delay is an end-to-end measurement between the uncompressed input data and the compressed output data stream. The implementation compresses one sample every clock cycle, giving a throughput of 33 Msample/s. The implementation has rather low device utilization on the Xilinx Virtex IV LX25, keeping the total power consumption of the implementation to about 1.27 W.
    Keywords: Man/System Technology and Life Support
    Type: NPO-47103, NASA Tech Briefs, June 2012; 11-12
    Format: application/pdf
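The essence of the FL approach, adaptive linear prediction with the sign algorithm driving the filter adaptation, can be sketched as a toy lossless round trip. The fixed-point shift, filter order, and step size below are illustrative assumptions, not the actual FL parameters, and the real algorithm predicts across spectral bands rather than along a single sample stream.

```python
def _predict(weights, history, shift=8):
    # Fixed-point linear prediction (the shift is arbitrary for this sketch).
    return sum(w * x for w, x in zip(weights, history)) >> shift

def _adapt(weights, history, err, mu=1):
    # Sign algorithm: nudge each weight by mu * sign(error) * sample.
    s = (err > 0) - (err < 0)
    return [w + mu * s * x for w, x in zip(weights, history)]

def fl_encode(samples, order=3):
    """Emit prediction residuals; a toy stand-in for the FL predictor."""
    weights, history, res = [0] * order, [0] * order, []
    for x in samples:
        e = x - _predict(weights, history)
        res.append(e)
        weights = _adapt(weights, history, e)
        history = [x] + history[:-1]
    return res

def fl_decode(residuals, order=3):
    """Mirror of fl_encode: identical state updates make the round trip exact."""
    weights, history, out = [0] * order, [0] * order, []
    for e in residuals:
        x = _predict(weights, history) + e
        out.append(x)
        weights = _adapt(weights, history, e)
        history = [x] + history[:-1]
    return out
```

Because the decoder recovers each sample exactly and then performs the same adaptation step as the encoder, the two stay in lockstep and the compression is lossless regardless of how well the predictor tracks the data.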
  • 5
    Publication Date: 2019-07-13
    Description: This paper describes the emerging Issue 2 of the CCSDS-123.0-B standard for low-complexity compression of multispectral and hyperspectral imagery, focusing on its new features and capabilities. Most significantly, this new issue incorporates a closed-loop quantization scheme to provide near-lossless compression capability while still supporting lossless compression, and introduces a new entropy coding option that provides better compression of low-entropy data.
    Keywords: Computer Programming and Software
    Type: GSFC-E-DAA-TN61202, International Workshop on On-Board Payload Data Compression; Sep 20, 2018 - Sep 21, 2018; Torre Spagnola, Matera; Italy
    Format: application/pdf
  • 6
    Publication Date: 2019-07-12
    Description: A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use.
The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average length and the code parameter. When this difference falls outside a fixed range, the code parameter is updated (increased or decreased). The Golomb code parameter is selected based on the average magnitude of recently encoded nonzero samples. The coding method requires no floating-point operations, and adapts more readily to local statistics than other methods. The method can also accommodate arbitrarily large input values and arbitrarily long runs of zeros. In practice, this means that changes in the dynamic range or size of the input data set would not require a change to the compressor. The algorithm has been tested in computational experiments on test images. A comparison with a previously developed algorithm that uses large code tables (generated via Huffman coding on training data) suggests that the data-compression effectiveness of the present algorithm is comparable to the best performance achievable by the previously developed algorithm.
    Keywords: Man/System Technology and Life Support
    Type: NPO-40490, NASA Tech Briefs, November 2005; 27
    Format: application/pdf
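The two code families named above are standard and can be sketched directly. The run-parsing wrapper, its sign-bit convention, and the fixed (non-adaptive) parameters are simplifications of this sketch; the actual method also handles a trailing run of zeros and adapts k and m on the fly, which is not shown.

```python
def golomb(n, m):
    """Golomb code of n >= 0 with parameter m >= 1, as a bit string:
    unary quotient terminated by a 0, then a truncated-binary remainder."""
    q, r = divmod(n, m)
    out = "1" * q + "0"
    b = m.bit_length() - 1
    if 1 << b == m:                              # m a power of two (Rice case)
        return (out + format(r, "0%db" % b)) if b else out
    cutoff = (1 << (b + 1)) - m                  # truncated binary: short codes
    if r < cutoff:                               # for the first `cutoff` values
        return out + format(r, "0%db" % b)
    return out + format(r + cutoff, "0%db" % (b + 1))

def exp_golomb(n, k):
    """Order-k exponential-Golomb code of n >= 0, as a bit string."""
    v = (n >> k) + 1
    prefix = "0" * (v.bit_length() - 1) + format(v, "b")
    return prefix + (format(n & ((1 << k) - 1), "0%db" % k) if k else "")

def encode_runs(data, k=0, m=1):
    """Run-length pass: exp-Golomb for each zero-run length, Golomb for the
    nonzero magnitude minus one, plus a sign bit (a convention assumed here)."""
    bits, run = [], 0
    for x in data:
        if x == 0:
            run += 1
        else:
            bits.append(exp_golomb(run, k))
            bits.append(golomb(abs(x) - 1, m) + ("0" if x > 0 else "1"))
            run = 0
    return "".join(bits)
```

For example, `encode_runs([0, 0, 3])` codes the run length 2 as the order-0 exp-Golomb word `011`, then the magnitude 2 as the unary Golomb word `110` plus a `0` sign bit.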
  • 7
    Publication Date: 2019-07-12
    Description: A computer program that preprocesses multispectral image data has been developed to provide the Mars Exploration Rover (MER) mission with a means of exploiting the additional correlation present in such data without appreciably increasing the complexity of compressing the data.
    Keywords: Computer Programming and Software
    Type: NPO-40835, NASA Tech Briefs, September 2006; 53-54
    Format: application/pdf
  • 8
    Publication Date: 2019-07-12
    Description: ICER is computer software that can perform both lossless and lossy compression and decompression of gray-scale-image data using discrete wavelet transforms. Designed for primary use in transmitting scientific image data from distant spacecraft to Earth, ICER incorporates an error-containment scheme that limits the adverse effects of loss of data and is well suited to the data packets transmitted by deep-space probes. The error-containment scheme includes utilization of the algorithm described in "Partitioning a Gridded Rectangle Into Smaller Rectangles" (NPO-30479), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 56. ICER has performed well in onboard compression of thousands of images transmitted from the Mars Exploration Rovers.
    Keywords: Man/System Technology and Life Support
    Type: NPO-40982, NASA Tech Briefs, September 2006; 54
    Format: application/pdf
  • 9
    Publication Date: 2019-07-12
    Description: An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
    Keywords: Computer Programming and Software
    Type: NPO-42517, NASA Tech Briefs, August 2006; 6-7
    Format: application/pdf
  • 10
    Publication Date: 2019-07-12
    Description: This work extends the lossless data compression technique described in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
    Keywords: Man/System Technology and Life Support
    Type: NPO-46625, NASA Tech Briefs, December 2009; 6
    Format: application/pdf
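The in-the-loop quantization described above (and in the CCSDS-123.0-B Issue 2 entry) can be sketched with a trivial previous-sample predictor. The step size 2δ+1, which bounds the per-sample reconstruction error by δ, is a standard choice assumed here; the previous-sample predictor is an illustrative stand-in for the actual multispectral predictor.

```python
def near_lossless_encode(samples, delta):
    """Closed-loop quantization sketch: residuals are quantized with step
    2*delta + 1, and prediction uses the encoder's own reconstruction so
    encoder and decoder stay in lockstep (no error accumulation)."""
    step = 2 * delta + 1
    prev, codes = 0, []
    for x in samples:
        e = x - prev                      # prediction residual
        q = (abs(e) + delta) // step      # uniform quantizer index
        q = q if e >= 0 else -q
        codes.append(q)
        prev += q * step                  # the value the decoder will see
    return codes

def near_lossless_decode(codes, delta):
    step = 2 * delta + 1
    prev, out = 0, []
    for q in codes:
        prev += q * step
        out.append(prev)
    return out
```

With delta = 0 the step is 1, the quantizer is the identity, and the scheme reduces to the lossless case; with delta > 0 every reconstructed sample is within delta of the original, because the encoder predicts from the reconstructions rather than the originals.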