ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2018-06-11
    Description: The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human-Robot Interaction (HRI) can also be addressed by human-assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.
    Keywords: Cybernetics, Artificial Intelligence and Robotics
    Format: application/pdf
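The hybrid deliberative-reasoning/reactive-execution architecture mentioned in this abstract can be illustrated with a minimal sketch. All class and method names below are invented for illustration; this is not code from the ERA project:

```python
# Minimal sketch of a hybrid deliberative/reactive robot architecture.
# The deliberative layer plans a waypoint sequence in advance; the
# reactive layer can preempt that plan whenever a sensor reports danger.

class DeliberativeLayer:
    """Slow, model-based planner: produces a waypoint queue."""
    def plan(self, start, goal):
        # Trivial straight-line plan on a 1-D track, for illustration only.
        step = 1 if goal >= start else -1
        return list(range(start + step, goal + step, step))

class ReactiveLayer:
    """Fast, sensor-driven behaviors that can override the plan."""
    def __init__(self, obstacles):
        self.obstacles = set(obstacles)
    def safe(self, position):
        return position not in self.obstacles

class HybridRobot:
    def __init__(self, obstacles):
        self.planner = DeliberativeLayer()
        self.reactor = ReactiveLayer(obstacles)
    def run(self, start, goal):
        trace = [start]
        for waypoint in self.planner.plan(start, goal):
            if not self.reactor.safe(waypoint):
                trace.append("halt")   # reactive override wins over the plan
                break
            trace.append(waypoint)
        return trace

robot = HybridRobot(obstacles=[3])
print(robot.run(0, 5))  # plan toward 5, interrupted reactively at 3
```

The point of the hybrid design is exactly this division of labor: the planner reasons globally but slowly, while the reactive layer guarantees fast, safe responses without waiting on deliberation.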
  • 2
    Publication Date: 2018-06-06
    Description: We have developed and tested an advanced EVA communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. This system, called Mobile Agents (MA), is voice controlled and provides information verbally to the astronauts through programs called personal agents. The system partly automates the role of CapCom in Apollo, including monitoring and managing EVA navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. EVA data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in the context of use, including six years of ethnographic observation of field geology. Our approach is to develop automation that supports human work practices, allowing people to do what they do well and to work in the ways with which they are most familiar. Field experiments in Utah have enabled us to empirically discover requirements and test alternative technologies and protocols. This paper reports on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to move and wait in various locations to serve as a relay on the wireless network. The MA system is applicable to many space work situations that involve creating and navigating from maps (including configuring equipment for local topology), interacting with piloted and unpiloted rovers, adapting to environmental conditions, and remote team collaboration involving people and robots.
    Keywords: Cybernetics, Artificial Intelligence and Robotics
    Type: 1st Space Exploration Conference; Unknown
    Format: application/pdf
  • 3
    In:  CASI
    Publication Date: 2019-07-20
    Description: This is a presentation for students who will be participating in the App Development Challenge (ADC). It tells the students which data format NASA will be using (and with which their software must be compatible). The presentation contains only publicly available images and information, and is provided to help the students with the code that they will submit to NASA.
    Keywords: Computer Programming and Software
    Type: JSC-E-DAA-TN66531
    Format: application/pdf
  • 4
    Publication Date: 2019-07-13
    Description: The NASA Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project developed a suite of prototype sensors to enable autonomous and safe precision landing of robotic or crewed vehicles under any terrain lighting conditions. Development of the ALHAT sensor suite was a cross-NASA effort, culminating in integration and testing on-board a variety of terrestrial vehicles toward infusion into future spaceflight applications. Terrestrial tests were conducted on specialized test gantries, moving trucks, helicopter flights, and a flight test onboard the NASA Morpheus free-flying, rocket-propulsive flight-test vehicle. To accomplish these tests, a tedious integration process was developed and followed, which included both command and telemetry interfacing, as well as sensor alignment and calibration verification to ensure valid test data to analyze ALHAT and Guidance, Navigation and Control (GNC) performance. This was especially true for the flight test campaign of ALHAT onboard Morpheus. For interfacing of ALHAT sensors to the Morpheus flight system, an adaptable command and telemetry architecture was developed to allow for the evolution of per-sensor Interface Control Design/Documents (ICDs). Additionally, individual-sensor and on-vehicle verification testing was developed to ensure functional operation of the ALHAT sensors onboard the vehicle, as well as precision-measurement validity for each ALHAT sensor when integrated within the Morpheus GNC system. This paper provides some insight into the interface development and the integrated-systems verification that were a part of the build-up toward success of the ALHAT and Morpheus flight test campaigns in 2014. These campaigns provided valuable performance data that is refining the path toward spaceflight infusion of the ALHAT sensor suite.
    Keywords: Spacecraft Instrumentation and Astrionics; Space Transportation and Safety
    Type: JSC-CN-32396 , AIAA SciTech 2015 Conference; Jan 05, 2015 - Jan 09, 2015; Kissimmee, FL; United States
    Format: application/pdf
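The abstract above describes per-sensor Interface Control Documents (ICDs) feeding an adaptable command and telemetry architecture. One common way to make such an interface adaptable is to express each ICD as data rather than code, so a sensor revision changes only a table. The sensor names, field names, and formats below are invented for illustration; they are not the actual ALHAT/Morpheus ICDs:

```python
import struct

# Hedged sketch: a table-driven telemetry interface. Each ICD entry maps
# field names to struct format characters (big-endian network order).
SENSOR_ICDS = {
    "lidar":   [("range_m", "f"), ("intensity", "H"), ("status", "B")],
    "doppler": [("velocity_mps", "f"), ("status", "B")],
}

def pack_telemetry(sensor, values):
    """Serialize a tuple of field values per the sensor's ICD."""
    fmt = ">" + "".join(f for _, f in SENSOR_ICDS[sensor])
    return struct.pack(fmt, *values)

def unpack_telemetry(sensor, payload):
    """Decode a raw payload into a named-field dict per the sensor's ICD."""
    fields = SENSOR_ICDS[sensor]
    fmt = ">" + "".join(f for _, f in fields)
    return dict(zip((n for n, _ in fields), struct.unpack(fmt, payload)))

pkt = pack_telemetry("lidar", (123.5, 900, 0))
print(unpack_telemetry("lidar", pkt))
```

A round-trip pack/unpack check like this is also the shape of the on-vehicle verification testing the abstract mentions: confirming that each sensor's telemetry decodes to valid values before trusting it in the GNC loop.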
  • 5
    Publication Date: 2019-07-13
    Description: Three NASA centers are working together to address the challenge of operating robotic assets in support of human exploration of the Moon. This paper describes the combined work to date of the Ames Research Center (ARC), Jet Propulsion Laboratory (JPL) and Johnson Space Center (JSC) on a common support framework to control and monitor lunar robotic assets. We discuss how we have addressed specific challenges, including time-delayed operations and geographically distributed collaborative monitoring and control, to build an effective architecture for integrating a heterogeneous collection of robotic assets into a common framework. We describe the design of the Robot Application Programming Interface Delegate (RAPID) architecture, which effectively addresses the problem of interfacing a family of robots including the JSC Chariot, ARC K-10 and JPL ATHLETE rovers. We report on lessons learned from the June 2008 field test in which RAPID was used to monitor and control all of these assets. We conclude by discussing some future directions to extend the RAPID architecture to add further support for NASA's lunar exploration program.
    Keywords: Man/System Technology and Life Support
    Type: IEEE Aerospace Conference; Mar 07, 2009 - Mar 14, 2009; Big Sky, MT; United States
    Format: text
  • 6
    Publication Date: 2019-07-13
    Description: My project involved improving upon existing software and writing new software for the Project Morpheus team. Specifically, I created and updated Integrated Test and Operations System (ITOS) user interfaces for on-board interaction with the vehicle during archive playback as well as live streaming data. These interfaces are an integral part of the testing and operations for the Morpheus vehicle, providing any and all information from the vehicle to evaluate instruments and ensure coherence and control of the vehicle during Morpheus missions. I also created a "bridge" program for interfacing "live" telemetry data with the Engineering DOUG Graphics Engine (EDGE) software for a graphical (standalone or VR dome) view of live Morpheus flights or archive replays, providing graphical representation of vehicle flight and movement during subsequent tests and in real missions.
    Keywords: Computer Programming and Software
    Type: JSC-CN-27205 , NASA URC Virtual Poster Session and Symposium; Oct 24, 2012 - Oct 31, 2012; United States
    Format: application/pdf
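The "bridge" program described above essentially reshapes a stream of decoded telemetry samples (live or replayed from an archive) into the pose updates a graphics engine can consume. A minimal sketch of that pattern, with field names and the update format invented for illustration (not the real ITOS or EDGE interfaces):

```python
# Hedged sketch of a telemetry-to-graphics bridge. Each telemetry sample
# carries position, attitude, and a timestamp; the bridge emits one
# graphics update per sample, tagged with mission elapsed time so a
# replay can be paced at the original rate.

def bridge(samples):
    """Yield one graphics-engine update per decoded telemetry sample."""
    for s in samples:
        yield {
            "pos": (s["x"], s["y"], s["z"]),
            "quat": s["attitude"],       # attitude quaternion, passed through
            "t": s["met"],               # mission elapsed time, seconds
        }

# A two-sample "archive replay" driving the bridge:
archive = [
    {"met": 0.0, "x": 0.0, "y": 0.0, "z": 10.0, "attitude": (1, 0, 0, 0)},
    {"met": 0.5, "x": 0.1, "y": 0.0, "z": 10.4, "attitude": (1, 0, 0, 0)},
]
updates = list(bridge(archive))
print(updates[1]["pos"])
```

Because the bridge is a pure transformation over a sample stream, the same code path serves both live data and archive playback, which is the property the abstract highlights.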
  • 7
    Publication Date: 2019-07-20
    Description: Final paper is attached. The NASA-developed Core Flight System (cFS) is a reusable software architecture that has been used on multiple spaceflight missions. By using this framework, missions are able to reuse code from other missions, as well as leverage deployment onto similar computer architectures (i.e., not "reinvent the wheel" on each new mission). The success of the cFS concept can be seen in the large number of projects using cFS at FSW-2018. The Habulous project is an Earth-based testbed, used for hardware and software that may one day be used on a future space habitat unit, with many participating groups from various NASA centers and aerospace organizations around the country. The distributed nature of the various teams means that defining (and following) an interface definition is critical on the project. Additionally, since the various groups use different types of computer hardware (32/64-bit, big/little endian, Linux/VxWorks/Windows), many additional complications exist in interfacing all the various components into a final integrated system. cFS is used for the majority of the flight software (FSW) running in Habulous, but some subsystems have elected not to use cFS and instead use a software bridge (called SBN_lib) to interact with the other cFS nodes in Habulous. In order to most efficiently develop the FSW, a central database is used to define and store each message sent by cFS. A Command and Data Dictionary (CDD) is nearly universal on spacecraft, but as a team we worked to develop the CDD before the software development was complete, rather than treating it as "as built" documentation. To manage the CDD, the cFS Command and Data Dictionary (CCDD) tool was chosen (available from NASA as open source software). The CCDD tool has successfully been used to automate/autocode a large amount of software used on Habulous, and we are hoping to use it to define even more items in the future (time-triggered Ethernet (TTE) network maps, CPU scheduling).
Additionally, Habulous has been exploring the use of cFS on wildly heterogeneous CPUs, and how to coordinate all those various machines using/extending the software bus network (SBN) application in cFS, as well as TTE to coordinate message passing between various synchronized machines. The major topics to be covered in the presentation are: (1) Updating to the CCSDS_v2 extended headers (and using CPU# as the subsystem ID). (2) Managing all the message identification numbers for each cFS message sent/received on any of the various CPUs. (3) Using the CCDD information to automatically generate the C header files that define the structures for all software bus (SB) command/telemetry messages. (4) Using the CCDD to automatically generate XML Telemetry and Command Exchange (XTCE) files, which streamlines display production/integration/testing in a web-based display architecture. (5) Extending/customizing SBN to pass messages among computers on multiple networks. (6) Using "Protobetter" inside SBN to manage different endianness/architectures. (7) Using SBN_lib to allow non-cFS nodes to communicate with cFS nodes. (8) Developing TTE network and schedule tables for all the various CPUs to use.
    Keywords: Space Communications, Spacecraft Communications, Command and Tracking; Computer Programming and Software
    Type: JSC-E-DAA-TN63306 , Annual Workshop on Spacecraft Flight Software; Dec 03, 2018 - Dec 06, 2018; San Antonio, TX; United States
    Format: application/pdf
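The message-definition autocoding described in this abstract ultimately produces structures whose first bytes are a standard CCSDS space packet header. As a hedged sketch of what such generated code does, here is a pack/unpack of the standard 6-byte CCSDS primary header; the CCSDS_v2 extended header fields the abstract mentions (e.g. the subsystem ID) are omitted, and this is not output of the actual CCDD tool:

```python
import struct

# CCSDS primary header: two 16-bit words plus a 16-bit length field.
#   word 1: version (3 bits) | type (1) | secondary-header flag (1) | APID (11)
#   word 2: sequence flags (2 bits) | sequence count (14 bits)
#   word 3: packet data field length in bytes, minus one (per the standard)

def pack_primary_header(apid, seq_count, data_len,
                        version=0, pkt_type=0, sec_hdr=1, seq_flags=3):
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    return struct.pack(">HHH", word1, word2, data_len - 1)

def unpack_apid(header):
    (word1,) = struct.unpack(">H", header[:2])
    return word1 & 0x7FF

hdr = pack_primary_header(apid=0x123, seq_count=42, data_len=64)
print(len(hdr), hex(unpack_apid(hdr)))
```

Generating this kind of bit-packing code from a central dictionary, instead of hand-writing it per CPU, is what keeps message IDs and structure layouts consistent across mixed-endianness machines on the software bus.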
  • 8
    Publication Date: 2019-07-20
    Description: No abstract available
    Keywords: Computer Programming and Software
    Type: JSC-E-DAA-TN63350 , 2018 Flight Software Workshop; Dec 03, 2018 - Dec 06, 2018; San Antonio, TX; United States
    Format: application/pdf
  • 9
    Publication Date: 2019-07-12
    Description: RAPID (Robot Application Programming Interface Delegate) software utilizes highly robust technology to facilitate commanding and monitoring of lunar assets. RAPID provides the ability for intercenter communication, since these assets are developed at multiple NASA centers. RAPID is targeted at the task of lunar operations; specifically, operations that deal with robotic assets, cranes, and astronaut spacesuits, often developed at different NASA centers. RAPID allows for a uniform way to command and monitor these assets. Commands can be issued to take images, and monitoring is done via telemetry data from the asset. There are two unique features to RAPID: First, it allows any operator from any NASA center to control any NASA lunar asset, regardless of location. Second, by abstracting the native language for specific assets to a common set of messages, an operator may control and monitor any NASA lunar asset by being trained only on the use of RAPID, rather than on each specific asset. RAPID is easier to use and more powerful than its predecessor, the Astronaut Interface Device (AID). By utilizing the robust new DDS (Data Distribution Service) middleware, the pace of development in RAPID has increased significantly over the old middleware. The API is built upon the Java Eclipse Platform, which, combined with DDS, provides a platform-independent software architecture, simplifying development of RAPID components. As RAPID continues to evolve and new messages are designed and implemented, operators for future lunar missions will have a rich environment for commanding and monitoring assets.
    Keywords: Man/System Technology and Life Support
    Type: NPO-46332 , NASA Tech Brief, May 2011; 11
    Format: application/pdf
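The second feature the abstract highlights, abstracting each asset's native language to a common message set, is a delegate pattern. A minimal sketch of that idea, with robot names taken from the source but the native command strings and class layout invented for illustration (this is not the actual RAPID code, which is Java-based):

```python
# Hedged sketch of the "delegate" idea behind RAPID: each asset-specific
# adapter translates a common command vocabulary into that asset's native
# interface, so an operator only needs to learn the common messages.

class AssetDelegate:
    """Translate common commands into a native command string."""
    native = {}  # verb -> native command template, filled in per asset
    def command(self, verb, **params):
        template = self.native[verb]   # KeyError means "unsupported command"
        return template.format(**params)

class K10Delegate(AssetDelegate):
    native = {"take_image": "K10:CAM_SNAP pan={pan}",
              "move_to":    "K10:NAV_GOTO {x},{y}"}

class AthleteDelegate(AssetDelegate):
    native = {"take_image": "ATHLETE IMG {pan}",
              "move_to":    "ATHLETE WALK {x} {y}"}

fleet = {"K10": K10Delegate(), "ATHLETE": AthleteDelegate()}
# One common message, translated to two different native dialects:
for name, delegate in fleet.items():
    print(name, "->", delegate.command("move_to", x=4, y=7))
```

In the real system the common vocabulary is a set of DDS message types rather than method calls, but the training benefit is the same: the operator-facing interface is identical across every asset in the fleet.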
  • 10
    Publication Date: 2019-07-13
    Description: Human missions to the Moon or Mars will likely be accompanied by many useful robots that will assist in all aspects of the mission, from construction to maintenance to surface exploration. Such robots might scout terrain, carry tools, take pictures, curate samples, or provide status information during a traverse. At NASA/JSC, the EVA Robotic Assistant (ERA) project has developed a robot testbed for exploring the issues of astronaut-robot interaction. Together with JSC's Advanced Spacesuit Lab, the ERA team has been developing robot capabilities and testing them with space-suited test subjects at planetary surface analog sites. In this paper, we describe the current state of the ERA testbed and two weeks of remote field tests in Arizona in September 2002. A number of teams with a broad range of interests participated in these experiments to explore different aspects of what must be done to develop a program for robotic assistance to surface EVA. Technologies explored in the field experiments included a fuel cell, new mobility platform and manipulator, novel software and communications infrastructure for multi-agent modeling and planning, a mobile science lab, an "InfoPak" for monitoring the spacesuit, and delayed satellite communication to a remote operations team. In this paper, we will describe this latest round of field tests in detail.
    Keywords: Cybernetics, Artificial Intelligence and Robotics
    Type: JSC-CN-7773 , International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS 2003); May 19, 2003 - May 23, 2003; Nara; Japan
    Format: application/pdf