Manipulating data in 3D with LidarViewer


For many people, a cave is just that, but for the NASA Graphics and Visualization (GVIS) team and me, it's a CAVE™, or Cave Automatic Virtual Environment. In simpler terms, a CAVE™ is an environment that allows the user to view and manipulate data in 3D.

I am lucky enough to work in a lab that specializes in virtual reality, and my lab has a CAVE™ of its own, called the GRUVE lab. Some data came in that needed to be processed so that it could be displayed in the CAVE™. I'm a lover of all things open source, so when my boss challenged me to process the data using open source software, I was eager to begin.

The software I decided to use is called LidarViewer. It's open source, which is absolutely fantastic, but it's also Linux-specific. As a Linux user, that made me really happy; the catch was that I could only use a Windows machine while at NASA, which meant I had to create a Linux virtual machine on top of a Windows computer. I had a lot of options for the virtual machine's operating system.

At first I wanted to use Fedora, because it was Linux and I had used it before. I spent a day trying and failing to get LidarViewer installed on Fedora. I then realized that it might be a good idea to use the operating system specified in the manual, which was Ubuntu. Once I created an Ubuntu virtual machine, I started installing LidarViewer. That meant first installing VRUI, the prerequisite for LidarViewer. After I installed VRUI, I was able to install LidarViewer itself.
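For anyone following along, the build went roughly like this. This is a minimal sketch that assumes the VRUI and LidarViewer source tarballs have already been downloaded from the KeckCAVES site; the version numbers are placeholders, and each package's README has the exact steps:

    # Build and install VRUI first (it installs under ~/Vrui-<version> by default)
    tar xfz Vrui-<version>.tar.gz
    cd Vrui-<version>
    make
    make install
    cd ..

    # Then build LidarViewer against that VRUI installation; its makefile
    # has a variable pointing at VRUI's build system
    tar xfz LidarViewer-<version>.tar.gz
    cd LidarViewer-<version>
    make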

Finally, LidarViewer was installed. Now all I had to do was convert the scans from the .rcs file format to the .pts file format so that LidarViewer could process them. To convert the scans, I imported each individual scan into Autodesk ReCap and then exported it as a .pts file. Once that was done, I put the .pts files onto a USB drive. I had never connected a USB drive to a virtual machine before, and after a lot of struggling, googling, and yelling at my computer, I finally figured out how to use shared folders instead. As it turns out, if you specify that a folder on the host is shared with a virtual machine, you can access its contents from inside the virtual machine.
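Assuming VirtualBox as the hypervisor (the article doesn't say which one; any major hypervisor can do this), with Guest Additions installed in the guest, the guest-side setup is only a couple of commands. The share name "scans" here is hypothetical:

    # Shared folders are auto-mounted under /media/sf_<share name>,
    # but only members of the vboxsf group may read them
    sudo usermod -aG vboxsf $USER   # log out and back in afterwards
    ls /media/sf_scans

    # Alternatively, mount the share at a path of your choosing
    sudo mkdir -p /mnt/scans
    sudo mount -t vboxsf scans /mnt/scans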

After my day-long detour into the realm of shared folders, I ventured into the world of preprocessing the files. Preprocessing means turning the .pts files into .lidar files, which can be viewed in 3D in the GRUVE lab. I thought it would be easy, since I had managed to work through the examples on the LidarViewer wiki page without too much trouble. Preprocessing wasn't as easy as I had hoped. As it turns out, there are a lot of options to choose from when preprocessing a file, and it's also very easy to delete the data you're working with and have to start again. Fortunately, it didn't take too long to get the hang of it. After all the files were preprocessed, I transferred them off of the virtual machine, onto the physical computer, and then onto the USB drive. I then moved to the GRUVE lab to finally view the files in 3D.
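The basic invocation looks something like this. The option names (-o for the output file, -np for the maximum number of points per octree node) are from my reading of the LidarViewer documentation, so run LidarPreprocessor with no arguments to see exactly what your version supports:

    # Convert one .pts scan into LidarViewer's octree-based .lidar format
    LidarPreprocessor -np 1024 -o scan1.lidar scan1.pts

    # Then view the result
    LidarViewer scan1.lidar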

I was able to run the files successfully inside the GRUVE CAVE™. However, I was only viewing one scan at a time, and all of the scans had odd colors. To fix the color problem, I ended up re-preprocessing all of the files with the correct parameters. To fix the problem of only viewing one scan at a time, I "stitched" several files together: I ran the preprocessing command and told it to merge multiple files into one. This resulted in a single large scan.
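Merging falls out of the same tool: listing several input scans in one LidarPreprocessor run folds them into a single octree. A sketch, with hypothetical file names and the same caveat about option names as above:

    # Merge several scans into one .lidar file
    LidarPreprocessor -np 1024 -o all-scans.lidar scan1.pts scan2.pts scan3.pts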

The problem with having one large scan is exactly that: it's large. There are a lot of points for LidarViewer to render, and when it runs out of capacity, it crashes. Clearly, this needed to be fixed. The first thing I tried was decreasing the point density, reducing the number of points in each portion of the scan. Unfortunately, that didn't work.
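One crude way to thin a scan before preprocessing, which is not necessarily how the preprocessor itself does it, is to decimate the .pts file by keeping every Nth point. A minimal sketch, assuming a single scan whose first line is the point count:

    # Keep every 4th point and rewrite the point-count header
    tail -n +2 scan1.pts | awk 'NR % 4 == 1' > points.tmp
    wc -l < points.tmp > scan1-thinned.pts
    cat points.tmp >> scan1-thinned.pts
    rm points.tmp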

The next way I tried to stop the crashes was changing the near clip plane and far clip plane values. Essentially, the near and far clip planes determine what is rendered, or viewed by the user: anything closer to the viewer than the near clip plane is cut off, and so is anything farther away than the far clip plane. By increasing the near clip plane value and decreasing the far clip plane value, the depth range that LidarViewer has to render is made smaller. This resulted in less lag and no crashes.
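In VRUI-based applications like LidarViewer, these values typically live in the Vrui.cfg configuration file. A minimal sketch, where the setting names (frontplaneDist and backplaneDist) come from my reading of the VRUI configuration documentation, and the section name depends on your environment (the GRUVE lab would have its own):

    section Vrui
        section Desktop
            # Distances in VRUI's physical units; shrinking the range
            # between them reduces how much gets rendered
            frontplaneDist 1.0
            backplaneDist 100.0
        endsection
    endsection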

The next step is to recreate what I have done here with other point cloud datasets and optimize them for the GRUVE lab.

Lauren Egts, a student at Rochester Institute of Technology, has interned at Bank of America, GE Aviation, and the NASA Glenn Research Center Graphics and Visualization Lab (GVIS). She has won the National Center for Women and Information Technology (NCWIT) Ohio Affiliate Award four times, and the National Runner-Up Award once.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.