ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2019-06-28
    Description: Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
    Keywords: Computer Programming and Software
    Type: NASA-CR-201396 , RIACS-TR-96-11 , NAS 1.26:201396
    Format: application/pdf
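    A quick way to read the speedup figures in this entry is as parallel efficiency on 64 processors. The short Python sketch below is illustrative only (the function and case names are not from the report) and assumes speedup = serial time / parallel time.

        # Illustrative arithmetic only: parallel efficiency implied by the
        # speedups quoted above, assuming speedup = T_serial / T_parallel.
        def efficiency(speedup: float, processors: int) -> float:
            """Parallel efficiency is speedup divided by processor count."""
            return speedup / processors

        cases = {
            "10% random refinement": 47.0,
            "highly localized refinement": 7.7,
            "localized refinement with repartitioning": 43.6,
        }
        for name, s in cases.items():
            print(f"{name}: {s:.1f}x on 64 processors -> {efficiency(s, 64):.0%} efficiency")
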
  • 2
    Publication Date: 2018-06-06
    Description: Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
    Keywords: Computer Operations and Hardware
    Format: application/pdf
  • 3
    Publication Date: 2019-07-18
    Description: Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We describe a novel method to dynamically balance the processor workloads with a global view. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model will also be presented that predicts the remapping cost. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented will demonstrate that this is an effective dynamic load balancing strategy which remains viable on a large number of processors.
    Keywords: Computer Programming and Software
    Type: NEC Europe Ltd. Conference; May 04, 1998 - May 08, 1998; Sankt Augustin; Germany
    Format: text
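    The redistribution cost model mentioned in this entry decides whether rebalancing pays for itself. The sketch below is a hedged illustration of that trade-off, not the paper's model; all names and numbers are assumptions.

        # Illustrative sketch only: remap when the predicted gain from a balanced
        # workload, accumulated over the remaining iterations, exceeds the one-time
        # cost of moving the data. The variable names and numbers are assumptions.
        def should_remap(time_per_iter_imbalanced: float,
                         time_per_iter_balanced: float,
                         predicted_remap_cost: float,
                         remaining_iterations: int) -> bool:
            gain_per_iteration = time_per_iter_imbalanced - time_per_iter_balanced
            return gain_per_iteration * remaining_iterations > predicted_remap_cost

        # Example: 2.0 s/iteration unbalanced vs 1.2 s balanced, 50 iterations left,
        # remapping predicted to cost 15 s of data movement.
        print(should_remap(2.0, 1.2, 15.0, 50))  # True: 40 s saved > 15 s cost
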
  • 4
    Publication Date: 2019-07-18
    Description: This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.
    Keywords: Computer Programming and Software
    Format: text
  • 5
    Publication Date: 2019-07-13
    Description: This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error sources benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. The Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.
    Keywords: Computer Operations and Hardware
    Type: ADAPT03: Conference on Adaptive Methods for PDEs and Large-Scale Computation; Oct 11, 2003 - Oct 12, 2003; Troy, NY; United States
    Format: application/pdf
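    The benchmark discussed in this entry targets irregular, data-dependent memory access. The NumPy fragment below only illustrates that access pattern (an indirect gather and scatter-add through an index array); it is not part of any NAS benchmark.

        import numpy as np

        # Indirect addressing: which memory locations are touched depends on an
        # index array, so access is irregular and cache behaviour data-dependent.
        rng = np.random.default_rng(0)
        values = rng.random(1_000_000)
        index = rng.integers(0, values.size, size=200_000)  # irregular targets

        gathered = values[index]             # gather through the index array
        accum = np.zeros_like(values)
        np.add.at(accum, index, gathered)    # scatter-add back through the same indices
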
  • 6
    Publication Date: 2019-07-13
    Description: In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n x log(n)), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.
    Keywords: Computer Programming and Software
    Type: 39th Symposium on Foundations of Computer Science; Nov 08, 1998 - Nov 11, 1998; Palo Alto, CA; United States
    Format: application/pdf
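    This entry constructs a walk through a triangular mesh from connectivity alone. The sketch below is a naive illustration of ordering triangles via the edge-adjacency (dual) graph; it is not the paper's O(n x log(n)) self-avoiding-walk algorithm, and consecutive triangles in this ordering are not guaranteed to share an edge or vertex after backtracking.

        from collections import defaultdict

        # Illustrative only: order the triangles of a mesh using connectivity alone,
        # via a depth-first traversal of the edge-adjacency (dual) graph.
        # Assumes a connected mesh; not the paper's construction.
        def triangle_ordering(triangles):
            """triangles: list of 3-tuples of vertex ids."""
            edge_to_tris = defaultdict(list)
            for t, (a, b, c) in enumerate(triangles):
                for e in ((a, b), (b, c), (c, a)):
                    edge_to_tris[frozenset(e)].append(t)

            neighbors = defaultdict(set)          # triangles sharing an edge
            for tris in edge_to_tris.values():
                for t in tris:
                    neighbors[t].update(u for u in tris if u != t)

            order, seen, stack = [], set(), [0]
            while stack:
                t = stack.pop()
                if t in seen:
                    continue
                seen.add(t)
                order.append(t)
                stack.extend(neighbors[t] - seen)
            return order

        print(triangle_ordering([(0, 1, 2), (1, 2, 3), (2, 3, 4)]))  # e.g. [0, 1, 2]
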
  • 7
    Publication Date: 2019-07-13
    Description: The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
    Keywords: Computer Programming and Software
    Type: Parallel and Distributed Computing Systems; Aug 08, 2000 - Aug 10, 2000; Las Vegas, NV; United States
    Format: application/pdf
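    The entry above centers on the Conjugate Gradient method, where the sparse matrix-vector product dominates each iteration. Below is a minimal serial CG sketch using SciPy sparse matrices, with no ordering or partitioning applied; it illustrates the kernel structure, not the paper's parallel implementations.

        import numpy as np
        import scipy.sparse as sp

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Solve A x = b for a symmetric positive definite sparse matrix A.
            The sparse matrix-vector product (A @ p) dominates each iteration."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p                      # the SpMV kernel
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Small SPD test case: a 1-D Laplacian with a constant right-hand side.
        n = 100
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)
        x = conjugate_gradient(A, b)
        print(np.linalg.norm(A @ x - b))        # residual norm, close to zero
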
  • 8
    Publication Date: 2019-07-13
    Description: No abstract available
    Keywords: Computer Programming and Software
    Type: JSC-CN-40257-2 , IEEE International Conference on Computer Science & Education; Aug 22, 2017 - Aug 25, 2017; Houston, TX; United States
    Format: application/pdf
  • 9
    Publication Date: 2019-07-13
    Description: NASA's latest spacecraft, Orion, is being developed to take humans deeper into space. Orion is equipped with three main displays to monitor and control the spacecraft. To ensure the software behind the glass displays operates without faults, rigorous testing is needed. To conduct such testing, the Rapid Prototyping Lab at NASA's Johnson Space Center, along with the University of Texas at Tyler, employed a software verification tool, EggPlant Functional by TestPlant. It is an image-based test automation tool that allows users to create scripts to verify the functionality within a program. An edge key framework and a set of Common EggPlant Functions were developed to enable efficient script creation. This framework standardized the way to code and to simulate user inputs in the verification process. Moreover, the Common EggPlant Functions can be reused in the verification of different displays.
    Keywords: Computer Programming and Software
    Type: JSC-CN-40257-1 , International Conference on Computer Science & Education; Aug 22, 2017 - Aug 25, 2017; Houston, TX; United States
    Format: application/pdf
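    This entry describes shared helper functions that standardize image-based verification scripts. The actual scripts are written in the tool's own scripting language; the Python fragment below is only a generic illustration of wrapping a repeated verify-and-act step behind a common helper. Every name in it is hypothetical and not the tool's API.

        # Hypothetical illustration only: a "common functions" layer that test
        # scripts call instead of repeating low-level image checks. find_image()
        # and click() stand in for whatever the tool provides; they are not real
        # API calls.
        def verify_and_press(screen, button_image, expected_image, find_image, click):
            """Press a button identified by an image and confirm the expected
            result appears, reporting a uniform pass/fail message."""
            if not find_image(screen, button_image):
                return f"FAIL: could not locate {button_image}"
            click(screen, button_image)
            if not find_image(screen, expected_image):
                return f"FAIL: {expected_image} did not appear after {button_image}"
            return f"PASS: {button_image} -> {expected_image}"
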
  • 10
    Publication Date: 2019-07-13
    Description: NASA has increasingly relied on high-performance computing (HPC) resources for computational modeling, simulation, and data analysis to meet the science and engineering goals of its missions in space exploration, aeronautics, and Earth and space science. The NASA Advanced Supercomputing (NAS) Division at Ames Research Center in Silicon Valley, Calif., hosts NASA's premier supercomputing resources, integral to achieving and enhancing the success of the agency's missions. NAS provides a balanced environment, funded under the High-End Computing Capability (HECC) project, comprising world-class supercomputers, including its flagship distributed-memory cluster, Pleiades; high-speed networking; and massive data storage facilities, along with multi-disciplinary support teams for user support, code porting and optimization, and large-scale data analysis and scientific visualization. However, as scientists have increased the fidelity of their simulations and engineers are conducting larger parameter-space studies, the requirements for supercomputing resources have been growing by leaps and bounds. With the facility housing the HECC systems reaching its power and cooling capacity, NAS undertook a prototype project to investigate an alternative approach for housing supercomputers. Modular supercomputing, or container-based computing, is an innovative concept for expanding NASA's HPC capabilities. With modular supercomputing, additional containers, similar to portable storage pods, can be connected together as needed to accommodate the agency's ever-increasing demand for computing resources. In addition, taking advantage of the local weather permits the use of cooling technologies that further save energy and reduce annual water usage. The first stage of NASA's Modular Supercomputing Facility (MSF) prototype, which resulted in a 1,000 square-foot module on a concrete pad with room for 16 compute racks, was completed in Fall 2016, and an SGI (now HPE) computer system, named Electra, was deployed there in early 2017. Cooling is performed via an evaporative system built into the module, and preliminary experience shows a Power Usage Effectiveness (PUE) measurement of 1.03. Electra achieved over a petaflop on the LINPACK benchmark, sufficient to rank number 96 on the November 2016 TOP500 list [14]. The system consists of 1,152 InfiniBand-connected Intel Xeon Broadwell-based nodes. Its users access their files on a facility-wide file system shared by all HECC compute assets via Mellanox MetroX InfiniBand extenders, which connect the Electra fabric to Lustre routers in the primary facility over fiber-optic links about 900 feet long. The MSF prototype has exceeded expectations and is serving as a blueprint for future expansions. In the remainder of this chapter, we detail how modular data center technology can be used to expand an existing compute resource. We begin by describing NASA's requirements for supercomputing and how resources were provided prior to the integration of the Electra module-based system.
    Keywords: Computer Operations and Hardware
    Type: ARC-E-DAA-TN47639 , Contemporary High Performance Computing: From Petascale toward Exascale; 3
    Format: application/pdf
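    The entry above quotes a Power Usage Effectiveness (PUE) of 1.03 for the Electra module. PUE is total facility power divided by IT equipment power; the snippet below shows what that figure implies, using a made-up IT load for illustration.

        # PUE = total facility power / IT equipment power. The IT load below is an
        # arbitrary illustrative number, not a published figure for Electra.
        it_power_kw = 1000.0               # hypothetical IT equipment load
        overhead_kw = 30.0                 # cooling, power conversion, etc.
        pue = (it_power_kw + overhead_kw) / it_power_kw
        print(f"PUE = {pue:.2f}")          # 1.03: only 3% overhead beyond the IT load
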