ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 3
    Publication Date: 2010-06-01
    Description: A probabilistic tracking model is introduced that identifies storm tracks from feature vectors that are extracted from meteorological analysis data. The model assumes that the genesis and lysis times of each track are unknown and estimates their values along with the track’s position and storm intensity over time. A hidden-state dynamics model (Kalman filter) characterizes the temporal evolution of the storms. The model uses a Bayesian methodology for estimating the unknown lifetimes (genesis–lysis pairs) and tracks of the storms. Prior distributions are placed over the unknown parameters and their posterior distributions are estimated using a Markov Chain Monte Carlo (MCMC) sampling algorithm. The posterior distributions are used to identify and report the most likely storm tracks in the data. This approach provides a unified probabilistic framework that accounts for uncertainty in storm timing (genesis and lysis), storm location and intensity, and the feature detection process. Thus, issues such as missing observations can be accommodated in a statistical manner without human intervention. The model is applied to the field of relative vorticity at the 975-hPa level of analysis from the National Centers for Environmental Prediction Global Forecast System during May–October 2000–02, in the tropical east Pacific. Storm tracks in the National Hurricane Center best-track data (HURDAT) for the same period are used to assess the performance of the storm identification and tracking model.
    Print ISSN: 0027-0644
    Electronic ISSN: 1520-0493
    Topics: Geography, Geosciences, Physics
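The hidden-state dynamics step of the entry above can be sketched compactly. Below is a minimal, hypothetical Kalman filter assuming a five-dimensional track state (position, velocity, intensity) observed through feature vectors of position and intensity; the state layout and noise levels are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# State: [lon, lat, dlon, dlat, intensity]; constant-velocity motion
# for position, random walk for intensity. All noise levels are assumed.
dt = 1.0
F = np.eye(5)
F[0, 2] = F[1, 3] = dt                      # position integrates velocity
H = np.array([[1., 0., 0., 0., 0.],
              [0., 1., 0., 0., 0.],
              [0., 0., 0., 0., 1.]])        # observe position and intensity
Q = np.diag([.01, .01, .05, .05, .1])       # process noise (assumed)
R = np.diag([.25, .25, .5])                 # measurement noise (assumed)

def predict(x, P):
    """Propagate the track state one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Fold in one detected feature vector z = [lon, lat, intensity]."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    return x, (np.eye(5) - K @ H) @ P

x, P = np.array([0., 10., .1, 0., 5.]), np.eye(5)   # state at genesis
for z in ([.1, 10., 5.2], [.2, 10.1, 5.5]):         # extracted features
    x, P = predict(x, P)
    x, P = update(x, P, np.array(z))
```

In the paper's framework a missing observation at a time step would simply skip the update and carry the predicted state forward, while the MCMC sampler proposes genesis and lysis times around such filtered tracks.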
  • 4
    Publication Date: 2019-07-12
    Description: An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range, so coverage relies on the movement of the sensor through the region. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. While an object is within sensor range, it can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates from the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was introduced, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach was chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case one must adjust for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and on whether that object was detected in the most recent time step. This value then feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and to update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
    Keywords: Instrumentation and Photography
    Type: NPO-47274, NASA Tech Briefs, October 2011; 30
    Format: application/pdf
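A minimal sketch of how a probability of existence might be updated in a Bayesian manner and fed into an SPRT, as described above. The detection and false-alarm probabilities and the test thresholds are assumed placeholder values; the actual re-derived hypothesis equations are not reproduced here.

```python
P_FA = 0.05            # probability a detection is clutter (assumed)
A, B = 10.0, 0.1       # SPRT confirm/delete likelihood-ratio bounds (assumed)

def p_detect(in_range, pd_max=0.9):
    """Variable probability of detection: high inside sensor range,
    zero outside, so out-of-range misses do not penalize existence."""
    return pd_max if in_range else 0.0

def update_existence(p_exist, detected, in_range):
    """One Bayesian step for P(object exists) given this scan's outcome."""
    pd = p_detect(in_range)
    if detected:
        lik_exist, lik_not = pd, P_FA               # real object vs. clutter
    else:
        lik_exist, lik_not = 1.0 - pd, 1.0 - P_FA   # miss vs. true negative
    num = lik_exist * p_exist
    return num / (num + lik_not * (1.0 - p_exist))

def sprt_status(p_exist):
    """Likelihood-ratio test deciding suspected / confirmed / deleted."""
    lr = p_exist / (1.0 - p_exist)                  # against a 50/50 prior
    if lr >= A:
        return "confirmed"
    if lr <= B:
        return "deleted"
    return "suspected"

p = 0.5
for detected, in_range in [(True, True), (True, True), (False, False)]:
    p = update_existence(p, detected, in_range)
    print(round(p, 3), sprt_status(p))
```

Note how the third step, a non-detection while out of sensor range, leaves the existence probability essentially unchanged; this is the behavior the variable probability of detection is meant to produce.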
  • 5
    Publication Date: 2019-07-12
    Description: A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem": determining which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to make discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a maximally informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs learn from examples (here, the input-output pairs generated by running the simulator) and use a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM models the knowledge gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM, and the process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) against exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one SVM but an ensemble. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor, for example whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next; SimLearn includes a formal mechanism for balancing these preferences so that a single choice can be made for the next set of trials.
    Keywords: Man/System Technology and Life Support
    Type: NPO-43399, NASA Tech Briefs, July 2007; 26-27
    Format: application/pdf
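The iterative loop described above resembles query-by-committee active learning over an SVM ensemble, sketched below. The toy simulator, the hyper-parameter grid, and the disagreement criterion are illustrative assumptions, not SimLearn's actual mechanism.

```python
import numpy as np
from sklearn.svm import SVC

def simulator(x):
    """Stand-in oracle: does this input land in the target output region?"""
    return int(np.sin(3 * x[0]) + x[1] > 1.0)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(10, 2))              # initial simulation trials
y = np.array([simulator(x) for x in X])
while len(set(y)) < 2:                            # ensure both classes exist
    x_new = rng.uniform(-2, 2, size=2)
    X, y = np.vstack([X, x_new]), np.append(y, simulator(x_new))

gammas = [0.1, 1.0, 10.0]      # ensemble: different smoothness assumptions
for _ in range(20):                               # active-learning iterations
    models = [SVC(gamma=g).fit(X, y) for g in gammas]
    pool = rng.uniform(-2, 2, size=(200, 2))      # candidate input points
    votes = np.stack([m.predict(pool) for m in models])
    pick = pool[votes.var(axis=0).argmax()]       # highest disagreement wins
    X, y = np.vstack([X, pick]), np.append(y, simulator(pick))
```

When the committee agrees everywhere, the argmax falls back to an arbitrary candidate from the random pool, which plays the exploration role; disagreement-driven picks are the exploitation side.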
  • 6
    Publication Date: 2019-07-13
    Description: Accurate, on-board classification of instrument data can be used to increase science return by autonomously identifying regions of interest for priority transmission or by generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM classifiers will lead to significant improvement in on-board classification capability and accuracy.
    Keywords: Electronics and Electrical Engineering
    Type: IEEEAC Paper 1230, IEEE Aerospace Conference; Mar 06, 2008; Big Sky, MT; United States
    Format: text
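The kind of classifier such a co-processor evaluates can be sketched in fixed-point integer arithmetic, the style an FPGA multiply-accumulate datapath implements directly. The weights, scale factor, and band count below are hypothetical, not taken from the paper.

```python
# Linear SVM decision function f(x) = w.x + b in Q8 fixed point:
# each value is stored as the integer round(value * 2**8).
SCALE = 1 << 8

def to_fixed(v):
    return int(round(v * SCALE))

# Hypothetical trained weights over 4 spectral bands, plus a bias term.
W = [to_fixed(w) for w in (0.75, -1.20, 0.30, 0.05)]
B = to_fixed(-0.40)

def classify(pixel_bands):
    """Return 1 (region of interest) if w.x + b > 0, else 0."""
    acc = B * SCALE                        # promote bias to the 2**16 scale
    for w, x in zip(W, (to_fixed(v) for v in pixel_bands)):
        acc += w * x                       # Q8 * Q8 product is at 2**16 scale
    return 1 if acc > 0 else 0

print(classify([0.9, 0.1, 0.5, 0.3]))      # -> 1 for this toy pixel
```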
  • 7
    Publication Date: 2019-07-13
    Description: We have been operating an autonomous, integrated sensorweb linking numerous space and ground sensors in 24/7 operations since 2004. This sensorweb includes space data acquisition (MODIS, GOES, and EO-1), space asset retasking (EO-1), and integration of data acquired from ground sensor networks, with on-demand ground processing of the data into science products. These assets are integrated using web service standards from the Open Geospatial Consortium. Future plans include extension to fixed and mobile surface and subsurface sea assets as part of the NSF's ORION Program.
    Keywords: Computer Systems; Earth Resources and Remote Sensing; Communications and Radar
    Type: SpaceOps 2008; May 12, 2008 - May 16, 2008; Heidelberg; Germany
    Format: text
  • 8
    Publication Date: 2019-07-13
    Description: We discuss an ongoing demonstration of in situ onboard detection in which the Earth Observing-1 spacecraft detects surface sulfur deposits that originate from underlying springs by distinguishing the sulfur from the ice-rich glacial background, a good analogue for the Europan surface. In this paper, we describe the process of developing the onboard classifier for detecting the presence of sulfur in a hyperspectral scene, including the use of a training/testing set that is not exhaustively labeled (i.e., not all true positives are marked) and the selection of 12 of Hyperion's 242 instrument wavelength bands for use in the onboard detector. This study aims to demonstrate the potential for future missions to capture short-lived science events, make decisions onboard, identify high-priority data for downlink, and perform onboard change detection. In the future, such capability could help maximize the science return of downlink-bandwidth-limited missions, addressing a significant constraint in all deep-space missions.
    Keywords: Space Sciences (General)
    Type: International Symposium on Artificial Intelligence, Robotics, and Automation in Space; Feb 26, 2008; Los Angeles, CA; United States
    Format: text
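One plausible way to pick a small band subset like the 12-of-242 selection above is greedy forward selection against a classifier score, sketched below on synthetic stand-in data. The paper does not state its selection method, so the criterion, the classifier, and the toy labels are all assumptions, and the brute-force search is only practical at this toy scale.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 242))              # stand-in hyperspectral pixels
y = (X[:, 40] + X[:, 90] > 0).astype(int)    # toy "sulfur present" label

selected, remaining = [], list(range(242))
while len(selected) < 12:                    # choose 12 of the 242 bands
    def gain(band):
        cols = X[:, selected + [band]]
        return cross_val_score(LogisticRegression(max_iter=200),
                               cols, y, cv=3).mean()
    best = max(remaining, key=gain)          # band that helps accuracy most
    selected.append(best)
    remaining.remove(best)
print(sorted(selected))
```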
  • 9
    Publication Date: 2019-07-12
    Description: An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the FPGA's reconfigurable hardware resources. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, yielding greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm that classifies parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as the one on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program and two newly formulated programs, for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations exhibited significant speedups over conventional software implementations running on general-purpose hardware.
    Keywords: Electronics and Electrical Engineering
    Type: NPO-45961, NASA Tech Briefs, December 2008; 9
    Format: application/pdf
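The kernel variants mentioned above differ only in the kernel K inside the SVM decision function f(x) = sum_i a_i * K(s_i, x) + b. A minimal sketch of that difference, with made-up support vectors and dual coefficients standing in for a trained model:

```python
import numpy as np

def linear_kernel(s, x):
    return s @ x

def poly_kernel(s, x, degree=2, c=1.0):
    return (s @ x + c) ** degree

def decision(x, svs, alphas, b, kernel):
    """SVM decision value: sum of kernel responses over support vectors."""
    return sum(a * kernel(s, x) for a, s in zip(alphas, svs)) + b

# Hypothetical 3-band support vectors and coefficients (not a real model).
svs = np.array([[0.2, 0.8, 0.1], [0.7, 0.3, 0.9]])
alphas = np.array([1.5, -1.5])
b = 0.1

pixel = np.array([0.6, 0.4, 0.5])
for name, k in [("linear", linear_kernel), ("polynomial", poly_kernel)]:
    label = "water" if decision(pixel, svs, alphas, b, k) > 0 else "other"
    print(name, label)
```

Since only the kernel call changes between variants, the same multiply-accumulate structure can plausibly be reused across the FPGA implementations, which may be part of why all three fit the same co-processor design.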