Open source software powers NASA's Mars VR project

Parker Abercrombie is a software engineer at NASA Jet Propulsion Laboratory, where he builds software to support Mars science missions. He has a special interest in geographic information systems (GIS) and has worked with teams at NASA and the U.S. Department of Energy on systems for geographic visualization and data management.

Parker holds an M.A. in geography from Boston University and a B.S. in creative studies with emphasis in computer science (which he swears is more technical than it sounds) from the University of California, Santa Barbara. In his spare time, Parker enjoys baking bread and playing the Irish wooden flute.

At SCaLE 14x in Pasadena, California, he's speaking about OnSight, a blend of open source and proprietary software that lets users experience Mars in virtual reality. We sat down with him to get a sneak peek at the talk and learn more about the project.

What is OnSight?

OnSight allows scientists and engineers to work virtually on Mars using mixed reality. A user dons a Microsoft HoloLens headset, and OnSight software running on the headset contacts our Mars terrain server and downloads the latest 3D terrain. The scientist has a first-person view of Mars, as if they were standing next to the Curiosity rover. This view gives scientists and engineers a better sense of the scale and nature of the Martian terrain surrounding the rover. The really powerful thing about this is that everything you see in OnSight is rendered at 1:1 scale—the same size it would be if you were really there. You don't need to puzzle over the size of a rock in a panorama photo—you can just look at it and use the spatial skills we've used all our lives. We can also detect where the user's desk and computer are located and cut them out of the virtual world, which lets the user keep working with familiar tools on their desktop while exploring Mars.
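
As a rough illustration of that client-server flow, a terrain client might pull the newest scene from a terrain server with something like the Python sketch below. The server URL, endpoint, and JSON fields are hypothetical, invented for illustration rather than taken from OnSight.

```python
# Hypothetical sketch of a terrain client asking a terrain server for the
# newest scene and downloading it. The URL, endpoint, and JSON fields are
# invented for illustration; they are not OnSight's actual API.
import requests

TERRAIN_SERVER = "https://terrain.example.nasa.gov"  # hypothetical

def fetch_latest_scene(destination="latest_scene.bundle"):
    # Ask the server which reconstruction is newest.
    latest = requests.get(f"{TERRAIN_SERVER}/scenes/latest", timeout=30).json()
    # Stream the packaged mesh and textures to disk.
    with requests.get(latest["bundle_url"], stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with open(destination, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)
    return destination
```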

To make this immersive experience possible, the OnSight team needed to create 3D reconstructions of Mars and produce new scenes on a daily basis as the rover drives and new images are sent back to Earth. The team developed a custom image processing pipeline that produces 3D scenes from stereo images sent back from the Curiosity rover. We also created an automated build system to generate new reconstructions when new images are available, dynamically allocating cloud resources to handle the work as needed. As soon as new images are downlinked to Earth, cloud machines spin up and spring into action crunching the new data into 3D scenes. The next time our users launch OnSight, they will see the latest from Mars.
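
The interview doesn't spell out how those builds are triggered, but a minimal sketch of the idea, assuming a watched downlink directory and the Jenkins setup described later in the article, might look like this. The paths, job name, credentials, and use of the python-jenkins client are all assumptions.

```python
# Sketch of an automated trigger: poll for newly downlinked rover images and
# start a terrain build when a new batch appears. The directory, Jenkins URL,
# job name, credentials, and the python-jenkins client are all assumptions.
import time
from pathlib import Path

import jenkins  # python-jenkins

DOWNLINK_DIR = Path("/data/downlink")        # hypothetical landing area
JENKINS_URL = "http://jenkins.example:8080"  # hypothetical server
JOB_NAME = "terrain-reconstruction"          # hypothetical job

def watch_and_build(poll_seconds=60):
    server = jenkins.Jenkins(JENKINS_URL, username="bot", password="token")
    seen = set()
    while True:
        fresh = [p for p in DOWNLINK_DIR.glob("*.IMG") if p not in seen]
        if fresh:
            # One parameterized build per batch of new stereo images.
            server.build_job(JOB_NAME, {"IMAGE_COUNT": str(len(fresh))})
            seen.update(fresh)
        time.sleep(poll_seconds)
```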

How big is the project? How much data does it handle?

The OnSight terrain build system automatically builds new Mars scenes when new data is downlinked from Curiosity. This happens on a roughly daily basis. The input data is a set of stereo images from the rover that our image processing pipeline crunches into a textured 3D mesh. That mesh can be loaded into our application on the HoloLens.
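
JPL's pipeline is custom, but to make the stereo-to-3D step concrete, here is a heavily simplified OpenCV sketch that turns a rectified stereo pair into a 3D point cloud. It leaves out meshing, texturing, and the camera calibration that would supply a real disparity-to-depth matrix, so treat it as an illustration of the general technique rather than the OnSight code.

```python
# Simplified stereo reconstruction with OpenCV: compute a disparity map from a
# rectified stereo pair, then reproject it to 3D points. This illustrates the
# general technique only; the real pipeline also handles calibration, meshing,
# and texturing.
import cv2
import numpy as np

left = cv2.imread("left_eye.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_eye.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

Q = np.eye(4, dtype=np.float32)  # placeholder; a real Q comes from calibration
points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 grid of 3D points
valid = disparity > disparity.min()
cloud = points[valid]  # point cloud that a meshing step could turn into terrain
```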

The number of source images varies from place to place. If the rover has been exploring an area for a while, we may have thousands of images. If it's a new place where the rover has just arrived, we may have only a handful of pictures. A typical scene is about 1,000 images, or around 5GB of data. We've built hundreds of these scenes, all along Curiosity's path.

What open source software is being used to handle the data?

We use lots of open source software! MeshLab and Blender help us view and process some of our 3D models. For that matter, most of our terrain pipeline is implemented in .NET, and Microsoft released the .NET Core libraries as open source in 2014.

We also use several open source tools and frameworks in our cloud build system. We use the Jenkins continuous integration system both to compile our code and to run our image processing jobs. We store metadata about source images and completed builds in a MySQL database, which we access through a REST interface built using the LoopBack framework. And we have a web dashboard that we built using AngularJS and Bootstrap. We also use Ansible to help us configure our cloud machines.
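
For a flavor of what querying that build metadata could look like, here is a short sketch against a hypothetical LoopBack-style REST endpoint. The base URL, "builds" model, and field names are invented for illustration.

```python
# Sketch of querying build metadata through a LoopBack-style REST interface.
# The base URL, "builds" model, and field names are invented for illustration;
# LoopBack itself does accept a JSON "filter" query parameter like this.
import json
import requests

API = "http://builds.example:3000/api"  # hypothetical base URL

def latest_completed_builds(limit=5):
    flt = {"where": {"status": "complete"},
           "order": "finishedAt DESC",
           "limit": limit}
    resp = requests.get(f"{API}/builds",
                        params={"filter": json.dumps(flt)}, timeout=30)
    resp.raise_for_status()
    return resp.json()

for build in latest_completed_builds():
    print(build.get("sol"), build.get("imageCount"), build.get("finishedAt"))
```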

Why did you choose to use open source software for such a challenging project?

When solving a technical problem, I start by asking "What's the most appropriate tool for the job?" There's a lot of great open source software available, and many times the appropriate tool is open source.

What are some of the biggest challenges you've faced in this project?

Wow, where to start? One of the exciting things about this project is that it cuts across a lot of different types of technology, from graphics and UI code that runs on the HoloLens to image processing code that performs the terrain reconstruction, and of course the back-end cloud infrastructure that drives the whole process. I could go on for hours about challenges faced in each of these layers.

Creating terrain reconstructions that are both good-looking and scientifically accurate has been a challenge, especially when trying to do it completely automatically, with no human in the loop. If you're making a game, you can have a team of artists create an awesome-looking environment, or you can even generate the environment procedurally. But we're restricted to using real data and manipulating it as little as possible.

As a software engineer, the challenges that stick out in my mind are technical ones—bugs that were completely baffling at the time. One of the challenges of running terrain builds in the cloud is that debugging the code can be difficult. We've run into a number of cases in which the code behaves differently on a cloud machine than in a local development environment.

For example, parts of the surface reconstruction process take advantage of GPU computing to accelerate processing, and getting our software to work with GPUs on cloud instances was pretty tricky. At one point we were trying to use GPU-enabled cloud computing instances, but for some reason the GPU wasn't recognized on our cloud machine until we connected to the machine with remote desktop. So we'd try to run a build on the cloud, it would fail, we'd connect with remote desktop to see what was wrong, and then the GPU would work. It would continue working until the machine was rebooted and then mysteriously start to fail again, and as soon as we connected with remote desktop to see what was wrong, it would start working again. That was a tough one to track down.
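
One practical mitigation, sketched here as an assumption rather than the team's actual fix, is to fail fast: check that the GPU is actually visible at the start of each cloud build, so a missing device surfaces as a clear error instead of a mysterious failure hours into processing.

```python
# Fail-fast GPU sanity check for the start of a build job. nvidia-smi is a
# real NVIDIA utility; wiring it into the build like this is an illustrative
# assumption, not the team's actual fix.
import subprocess
import sys

def assert_gpu_available():
    try:
        result = subprocess.run(["nvidia-smi", "-L"], capture_output=True,
                                text=True, timeout=30)
    except FileNotFoundError:
        sys.exit("nvidia-smi not found: GPU drivers/tools missing on this node")
    if result.returncode != 0 or "GPU" not in result.stdout:
        sys.exit(f"No usable GPU detected:\n{result.stderr or result.stdout}")
    print("Detected:", result.stdout.strip())

if __name__ == "__main__":
    assert_gpu_available()
```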

What stage is the project in?

We're in a beta testing stage with a small group of scientists. We'll be rolling the system out to more scientists in the coming year.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.