ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2013-08-31
    Description: The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form. (A minimal code sketch of such an edge-based structure follows this entry.)
    Keywords: COMPUTER PROGRAMMING AND SOFTWARE
    Type: NASA. Langley Research Center, Unstructured Grid Generation Techniques and Software; p. 181-192
    Format: application/pdf
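The viewgraph topics above (edge marking, refinement, element subdivision) can be illustrated compactly. The following is a minimal Python sketch of an edge-based adaptive-mesh structure; it is our own illustration, not code from the viewgraphs, and all names are made up.

```python
# Minimal sketch of an edge-based adaptive-mesh data structure: an error
# indicator marks edges, refinement bisects marked edges by inserting
# midpoint vertices, and element subdivision patterns would then be
# chosen per element from its marked edges. Illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Edge:
    v0: int
    v1: int
    marked: bool = False            # set by an (anisotropic) error indicator
    midpoint: Optional[int] = None  # vertex created when the edge is bisected

@dataclass
class Mesh:
    vertices: list = field(default_factory=list)  # (x, y, z) tuples
    edges: list = field(default_factory=list)     # Edge objects

    def refine_marked_edges(self):
        """Bisect every marked edge by inserting a midpoint vertex."""
        for e in self.edges:
            if e.marked and e.midpoint is None:
                a, b = self.vertices[e.v0], self.vertices[e.v1]
                self.vertices.append(tuple((p + q) / 2 for p, q in zip(a, b)))
                e.midpoint = len(self.vertices) - 1
        # A subdivision type (e.g., 1:2, 1:4, or 1:8 for tetrahedra) would
        # then be selected per element from its pattern of marked edges.

mesh = Mesh(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], edges=[Edge(0, 1)])
mesh.edges[0].marked = True
mesh.refine_marked_edges()
print(mesh.vertices[mesh.edges[0].midpoint])   # (0.5, 0.0, 0.0)
```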
  • 2
    Publication Date: 2019-07-18
    Description: High-End Computing (HEC) has always played a major role in meeting the modeling and simulation needs of various NASA missions. With NASA's newest 62-teraflops Columbia supercomputer, HEC is having an even greater impact within the Agency and beyond. Significant cutting-edge science and engineering simulations in the areas of space exploration, Shuttle operations, Earth sciences, and aeronautics research are already occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. The talk will describe how the integrated supercomputing production environment is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions.
    Keywords: Computer Systems
    Type: International Supercomputer Conference (ISC 2006); Jun 27, 2006 - Jun 30, 2006; Dresden; Germany
    Format: text
  • 3
    Publication Date: 2019-07-18
    Description: Using a collection of benchmark problems of increasing levels of realism and computational effort, we will characterize the strengths and limitations of the 10,240-processor Columbia system to deliver supercomputing value to application scientists. Scientists need to be able to determine if and how they can utilize Columbia to carry out extreme workloads, either in terms of ultra-large applications that cannot be run otherwise (capability), or in terms of very large ensembles of medium-scale applications to populate response matrices (capacity). We select existing application benchmarks that scale from a small number of processors to the entire machine, and that highlight different issues in running supercomputing-class applications, such as the various types of memory access, file I/O, inter- and intra-node communications, and parallelization paradigms. http://www.nas.nasa.gov/Software/NPB/ (A small access-pattern sketch follows this entry.)
    Keywords: Computer Systems
    Type: Supercomputing 2004; Nov 06, 2004 - Nov 12, 2004; Pittsburgh, PA; United States
    Format: text
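The abstract names memory access patterns among the issues its benchmarks highlight. As a loose illustration of why access pattern matters (our own sketch, not an NPB kernel), the following times contiguous, strided, and random gathers of the same element count over one array:

```python
# Times three access patterns over the same data; on most machines the
# contiguous sweep is fastest and the random gather slowest, which is the
# kind of behavior architecture benchmarks are designed to expose.
import time
import numpy as np

n = 10_000_000
data = np.random.rand(n)
patterns = {
    "contiguous": np.arange(n // 8),
    "strided":    np.arange(0, n, 8),
    "random":     np.random.permutation(n)[: n // 8],
}
for label, idx in patterns.items():
    t0 = time.perf_counter()
    total = data[idx].sum()   # gather then reduce over n/8 elements
    print(f"{label:10s} {time.perf_counter() - t0:.4f} s")
```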
  • 4
    Publication Date: 2019-07-18
    Description: High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2. (A sketch of the two-step projection idea follows this entry.)
    Keywords: Fluid Mechanics and Thermodynamics
    Type: International Conference on Spectral and High Order Methods; Jun 21, 2004 - Jun 25, 2004; RI; United States
    Format: text
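The two-step decomposition described above can be illustrated with the standard tensor-product identity it relies on. The sketch below is our own, with random stand-in matrices and assumed sizes, not the paper's implementation: applying the smaller direction-wise projection matrices in two steps reproduces the result of the explicitly formed large projection without ever building it.

```python
import numpy as np

n_elem, n_mortar = 5, 7                   # nodes per direction (assumed sizes)
P_xi  = np.random.rand(n_mortar, n_elem)  # stand-in projection, xi direction
P_eta = np.random.rand(n_mortar, n_elem)  # stand-in projection, eta direction
U = np.random.rand(n_elem, n_elem)        # data on one nonconforming face

# One step: explicitly form the (n_mortar^2 x n_elem^2) matrix and apply it.
big = np.kron(P_eta, P_xi)
v_one_step = big @ U.reshape(-1, order="F")

# Two steps: project in xi, then in eta, never forming `big`.
V_two_step = P_xi @ U @ P_eta.T

assert np.allclose(v_one_step, V_two_step.reshape(-1, order="F"))
```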
  • 5
    Publication Date: 2019-07-18
    Description: In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes, and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system. (A sketch of this style of dynamic balancing follows this entry.)
    Keywords: Numerical Analysis
    Type: 5th International Symposium on Solving Irregularly Structured Problems in Parallel; Aug 09, 1998 - Aug 11, 1998; Berkeley, CA; United States
    Format: text
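The EARTH token mechanism itself belongs to that runtime system; as a generic illustration of the underlying idea (our sketch in plain Python threads, with made-up task sizes), the code below lets idle workers pull tasks from a shared pool, so a highly nonuniform workload still evens out across workers:

```python
# Idle workers repeatedly grab the next task from a shared pool; nonuniform
# task costs then balance across workers automatically. Illustrative only.
import queue
import threading
import time

tasks = queue.Queue()
for size in [1, 50, 2, 40, 3, 30, 4, 20, 5, 10]:   # highly nonuniform tasks
    tasks.put(size)

done = {}

def worker(wid):
    total = 0
    while True:
        try:
            size = tasks.get_nowait()   # pull work whenever we are idle
        except queue.Empty:
            break
        time.sleep(size * 0.001)        # stand-in for adapting `size` elements
        total += size
    done[wid] = total

threads = [threading.Thread(target=worker, args=(w,)) for w in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(done)   # per-worker totals even out despite unequal task sizes
```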
  • 6
    Publication Date: 2019-07-18
    Description: Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We describe a novel method to dynamically balance the processor workloads with a global view. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model will also be presented that predicts the remapping cost. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented will demonstrate that this is an effective dynamic load balancing strategy which remains viable on a large number of processors. (A sketch of such a remap cost-model test follows this entry.)
    Keywords: Computer Programming and Software
    Type: NEC Europe Ltd. Conference; May 04, 1998 - May 08, 1998; Sankt Augustin; Germany
    Format: text
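A remap cost model of the kind described can be reduced to a simple amortization test. The following is a minimal sketch under assumed inputs, not the paper's actual model:

```python
# Remap only if the per-step gain from a better-balanced workload, summed
# over the remaining solver steps, exceeds the one-time data-movement cost.
def should_remap(loads_now, loads_new, moved_units, cost_per_unit, steps_left):
    gain_per_step = max(loads_now) - max(loads_new)  # max load bounds step time
    remap_cost = moved_units * cost_per_unit         # predicted redistribution
    return gain_per_step * steps_left > remap_cost

# Example: a modest per-step gain amortizes the remap over 100 solver steps.
print(should_remap(loads_now=[90, 40, 50, 60], loads_new=[61, 60, 60, 59],
                   moved_units=500, cost_per_unit=2.0, steps_left=100))  # True
```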
  • 7
    Publication Date: 2019-07-18
    Description: This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes. (A sketch of buffer-zone identification follows this entry.)
    Keywords: Fluid Mechanics and Thermodynamics
    Type: NAS-96-007
    Format: text
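The buffer-element idea above, transition elements placed between refined and unrefined hexahedra so that no hanging vertices remain, can be sketched as a neighbor sweep. The sketch below is illustrative only, with made-up mesh connectivity; the actual choice of pyramid/prism/tetrahedron templates is not shown.

```python
# Find unrefined hexes that touch refined ones; these form the buffer zone
# that would be filled with pyramids, prisms, and tetrahedra.
refined = {3, 4, 7}                                   # hexes marked to refine
neighbors = {2: {1, 3}, 3: {2, 4}, 4: {3, 5, 7},
             5: {4, 6}, 7: {4, 8}, 8: {7}}            # face adjacency (made up)

buffer_zone = {nb for h in refined for nb in neighbors.get(h, ())
               if nb not in refined}
print(sorted(buffer_zone))                            # -> [2, 5, 8]
```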
  • 8
    Publication Date: 2019-07-13
    Description: NASA has increasingly relied on high-performance computing (HPC) resources for computational modeling, simulation, and data analysis to meet the science and engineering goals of its missions in space exploration, aeronautics, and Earth and space science. The NASA Advanced Supercomputing (NAS) Division at Ames Research Center in Silicon Valley, Calif., hosts NASA's premier supercomputing resources, integral to achieving and enhancing the success of the agency's missions. NAS provides a balanced environment, funded under the High-End Computing Capability (HECC) project, comprising world-class supercomputers, including its flagship distributed-memory cluster, Pleiades; high-speed networking; and massive data storage facilities, along with multi-disciplinary support teams for user support, code porting and optimization, and large-scale data analysis and scientific visualization. However, as scientists have increased the fidelity of their simulations and engineers conduct larger parameter-space studies, the requirements for supercomputing resources have been growing by leaps and bounds. With the facility housing the HECC systems reaching its power and cooling capacity, NAS undertook a prototype project to investigate an alternative approach for housing supercomputers. Modular supercomputing, or container-based computing, is an innovative concept for expanding NASA's HPC capabilities. With modular supercomputing, additional containers, similar to portable storage pods, can be connected together as needed to accommodate the agency's ever-increasing demand for computing resources. In addition, taking advantage of the local weather permits the use of cooling technologies that would additionally save energy and reduce annual water usage. The first stage of NASA's Modular Supercomputing Facility (MSF) prototype, which resulted in a 1,000 square-foot module on a concrete pad with room for 16 compute racks, was completed in Fall 2016, and an SGI (now HPE) computer system, named Electra, was deployed there in early 2017. Cooling is performed via an evaporative system built into the module, and preliminary experience shows a Power Usage Effectiveness (PUE) measurement of 1.03. Electra achieved over a petaflop on the LINPACK benchmark, sufficient to rank number 96 on the November 2016 TOP500 list [14]. The system consists of 1,152 InfiniBand-connected Intel Xeon Broadwell-based nodes. Its users access their files on a facility-wide file system shared by all HECC compute assets via Mellanox MetroX InfiniBand extenders, which connect the Electra fabric to Lustre routers in the primary facility over fiber-optic links about 900 feet long. The MSF prototype has exceeded expectations and is serving as a blueprint for future expansions. In the remainder of this chapter, we detail how modular data center technology can be used to expand an existing compute resource. We begin by describing NASA's requirements for supercomputing and how resources were provided prior to the integration of the Electra module-based system. (A one-line PUE calculation follows this entry.)
    Keywords: Computer Operations and Hardware
    Type: ARC-E-DAA-TN47639 , Contemporary High Performance Computing: From Petascale toward Exascale; 3
    Format: application/pdf
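For reference, the PUE figure quoted above is a simple ratio; a PUE of 1.03 means cooling and power distribution add only about 3% on top of the IT load. A short illustration with made-up power numbers, not measured Electra data:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1030.0, it_equipment_kw=1000.0))  # -> 1.03
```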
  • 9
    Publication Date: 2019-07-13
    Description: While scientific and engineering advancements used to rely primarily on theoretical studies and physical experiments, today digital technology enabled by petaflops-scale supercomputers is an equal, if not greater, contributor to such achievements. In addition, computational modeling and simulation serves as a predictive tool that is not otherwise available. As a result, the use of high performance computing is integral to NASA's work in all mission areas such as space exploration, aeronautics, and scientific discovery. But traditional supercomputing alone is not sufficient for all of the space agency's needs. The success of many NASA missions depends on solving complex computing challenges, some of which are NP-hard (as decision problems) when using classical solution methods. Quantum computing promises an unprecedented ability to solve such intractable problems by harnessing quantum mechanical effects such as tunneling, superposition, and entanglement. Another disruptive digital technology is neuromorphic computing, which uses brain-inspired lessons to generate new architectures that are much more energy-efficient and capable of massively parallel processing and in-situ learning. Finally, with large amounts of observational and computational data sets, the opportunities of big data and data analytics can be leveraged to enable deep learning and knowledge discovery - all part of a massive digital transformation. This talk will be an overview of how NASA utilizes digital technologies for its science and engineering efforts.
    Keywords: Systems Analysis and Operations Research
    Type: ARC-E-DAA-TN60761 , Digital Thailand Big Bang 2018; Sep 21, 2018 - Sep 24, 2018; Bangkok; Thailand
    Format: application/pdf
  • 10
    Publication Date: 2011-08-24
    Description: We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method. (A minimal 1-D sketch of these ingredients follows this entry.)
    Keywords: COMPUTER PROGRAMMING AND SOFTWARE
    Type: Applied Numerical Mathematics (ISSN 0168-9274); p. 255-283
    Format: text
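The abstract's main ingredients, a discontinuous Galerkin discretization with a piecewise Legendre basis, an upwind (dissipative) numerical flux, and Runge-Kutta time stepping, fit in a short 1-D sketch. The code below is our own illustration for linear advection at p = 1, not the paper's method or its parallel implementation; projection limiting and h-/p-refinement are omitted.

```python
# 1-D discontinuous Galerkin for u_t + a u_x = 0 on [0, 1], periodic,
# piecewise-linear Legendre modes per element, upwind flux, RK2 in time.
import numpy as np

a, K = 1.0, 64                      # wave speed, number of elements
h = 1.0 / K
x_mid = (np.arange(K) + 0.5) * h    # element midpoints

# Modal coefficients: u(xi) = u0 * P0(xi) + u1 * P1(xi), xi in [-1, 1].
u = np.zeros((2, K))
u[0] = np.sin(2 * np.pi * x_mid)                 # cell averages
u[1] = np.pi * h * np.cos(2 * np.pi * x_mid)     # slope moment ~ u'(x) * h / 2

def rhs(u):
    u0, u1 = u
    f_star = a * (u0 + u1)          # upwind flux (a > 0): left trace at xi = +1
    fR = f_star                     # flux on each element's right face
    fL = np.roll(f_star, 1)         # flux on its left face (periodic wrap)
    du0 = -(fR - fL) / h
    du1 = 3.0 * (2.0 * a * u0 - (fR + fL)) / h
    return np.array([du0, du1])

dt = 0.2 * h / a                    # CFL-limited explicit step
for _ in range(round(1.0 / dt)):    # advect for one full period
    k1 = rhs(u)                     # two-stage (Heun) Runge-Kutta
    k2 = rhs(u + dt * k1)
    u += 0.5 * dt * (k1 + k2)

err = np.max(np.abs(u[0] - np.sin(2 * np.pi * x_mid)))
print(f"max cell-average error after one period: {err:.2e}")
```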