[How-to] Easy way to 3D scan the surrounding environment (RealSense)

This article summarizes an easy way to 3D scan your surroundings using the Intel RealSense Depth Cameras (such as the D435 and D455) and the LiDAR Camera L515.

Some software is required for 3D scanning (converting an object into 3D digital data), but it can be done in relatively simple steps. The procedure scans the surrounding environment in 3D and creates a room in Mozilla Hubs, so we hope you find it helpful.

* This article is based on the following web page:
How-To: A Simple Way to 3D Scan an Environment


You need to choose different file formats to suit your purpose and software, whether for 3D scanning or 3D modeling. Several data and file formats appear in this article, so first, a brief explanation of each.

・ PLY file format
A 3D image file format for storing 3D point-cloud data

・ OBJ file format
A simple data format that represents three-dimensional geometry; one of the common 3D model formats

・ GLB file format
A binary file format for 3D models stored in GL Transmission Format (glTF), used for transmitting 3D models
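To make the PLY format concrete, the sketch below builds a tiny ASCII PLY point cloud by hand and reads the vertex count back out of its header. This is an illustrative sketch using only the Python standard library; a real scan exported from Dot3D Pro contains far more vertices and usually extra per-vertex properties such as color.

```python
# Minimal illustration of the ASCII PLY point-cloud format.
# Hand-rolled for explanation only; real scans are much larger.

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

ply_text = "\n".join([
    "ply",
    "format ascii 1.0",
    f"element vertex {len(points)}",   # how many points follow the header
    "property float x",
    "property float y",
    "property float z",
    "end_header",
    *[f"{x} {y} {z}" for x, y, z in points],
]) + "\n"

# Reading the vertex count back out of the header:
n_vertices = next(
    int(line.split()[-1])
    for line in ply_text.splitlines()
    if line.startswith("element vertex")
)
print(n_vertices)  # 3
```

The header is plain text even in binary PLY files, which is why tools like Meshlab can always report the point count before loading the full cloud.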

After 3D scanning the surrounding environment, the goal is to publish it on the VR-friendly platform "Mozilla Hubs", saving the data along the way in the formats each piece of software requires.


About Hubs

Hubs is a free, VR-friendly platform created by the Mixed Reality team at Mozilla, the company known for its web browser. Through this open-source social VR space, various simulated experiences are possible. Not only can you bring third parties into your space, you can also create new rooms and import models to share with them.


Step 1: Perform a scan

There are many applications for 3D scanning with Intel's RealSense depth cameras, but this time we will use the file-export function of "Dot3D Pro", which has a reputation for ease of use.

This demonstration uses a Microsoft Surface Pro 4 (6th generation). Any device (desktop, laptop, etc.) compatible with "Dot3D Pro" will work, but you may need a USB cable of sufficient length. Depending on the distance to the object being scanned, you may also need to think about where the hardware sits and how it is wired. In that respect, a tablet imposes fewer restrictions on placement and hardware, making the process of scanning the surrounding environment easier.


1-1. Start scanning

Be sure to use USB 3.0 cables and ports when using Dot3D Pro.

Connect the camera to your tablet and open the software. Select the connected camera in the software's settings window.

Then choose to create a new scan and select the Scan icon to start the scan.

As you move the camera, pixels in the camera feed turn white, yellow, or green. Green means that the area has been completely scanned. For a complete scan, take your time moving around the area in small movements to fill the gaps.

You can also use the host device's camera while scanning to capture the area as high-resolution RGB still images. You can refer to these stills later when working with the model in a 3D package such as "Blender" or "Maya".

When you are satisfied with the scan data, tap the scan button again to finish. The model is then optimized, which may take several minutes depending on the size of the scanned file. After optimization, export the scan as a PLY file (a 3D point-cloud file format).


Precautions when scanning

1. Avoid extremely bright places
Avoid extremely bright areas such as direct sunlight when scanning outdoors, as they can interfere with accurately scanning the space and leave the resulting model looking inconsistent. When using the L515, scan only indoors for better model results.

2. Avoid dark and glossy materials
Objects made of very dark, light-reflecting materials, such as glossy black tables, should be avoided as much as possible, because parallax may not be obtainable from them and the distance cannot be calculated.

See this article for a comparison of scan data in different environments.


* Reference video

The entire process of capturing and processing scan data from the newly supported Intel RealSense Depth Camera D455 and LiDAR Camera L515 is shown in the video below.

Master Class in Dot 3D ™ 4.0: Intel® RealSense ™ Handheld 3D Scanning



Step 2: Convert to OBJ file

A PLY file (3D point-cloud file format) stores the 3D data (3D model) acquired from the 3D scanner and can be used as a 3D model as it is. For example, you can upload the model to the 3D viewer "Sketchfab" and easily share it (publicly or privately) as 3D, VR, or AR content.

However, PLY files are very large, so converting them to a mesh format is recommended if you want faster processing and more flexibility. The procedure for converting point-cloud data (a PLY file) to an OBJ file using "Meshlab" is as follows.
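Meshlab handles the conversion through its GUI, but for intuition about what the two formats share, the sketch below translates the vertex lines of a simple ASCII PLY into OBJ `v` lines using only the Python standard library. This is a simplified sketch under strong assumptions: ASCII PLY only, with x, y, z as the first three vertex properties; faces, colors, and binary PLY are deliberately not handled.

```python
def ply_vertices_to_obj(ply_text: str) -> str:
    """Convert the vertex lines of a simple ASCII PLY into OBJ 'v' lines.

    Sketch only: assumes "format ascii 1.0" and that each vertex line
    starts with x, y, z. Faces and colors are ignored.
    """
    lines = ply_text.splitlines()
    n_vertices = 0
    body_start = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        if line.strip() == "end_header":
            body_start = i + 1
            break
    obj_lines = []
    for line in lines[body_start:body_start + n_vertices]:
        x, y, z = line.split()[:3]
        obj_lines.append(f"v {x} {y} {z}")
    return "\n".join(obj_lines) + "\n"

sample = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 0.0 0.0
1.0 2.0 3.0
"""
print(ply_vertices_to_obj(sample))
```

The real work Meshlab does in the steps below is not this translation but reconstructing faces from the bare points, which is why the intermediate filters are needed.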

2-1. Import PLY file

Open "Meshlab" and go to File > Import Mesh on the toolbar. Import the PLY file you exported in Step 1.

"Meshlab" where the imported mesh is displayed


2-2. Clean the mesh data

The imported data is not yet complete. Perform a few further operations to clean the point-cloud data and convert it to mesh data. Depending on how the file looks, you may want to adjust settings or remove unnecessary vertices. In that case, the following procedure is recommended.

Use the Select Vertices tool at the top of the toolbar to select a group of vertices, then use the Delete the current set of selected vertices tool to delete them.

Cleaning the mesh (the two tools are marked on the toolbar)

Go to Filters > Sampling > Poisson-disk Sampling.

Make sure "Base Mesh Subsampling" is selected in the settings screen and change the number of samples to the tens of thousands (35,000 was used here). The higher this number, the more detailed the final mesh. Note that the polygon count (number of triangles) affects how the mesh performs in other programs and applications, so do not set it too high.

Although there is no image here, the layer menu on the right shows the original point cloud and the Poisson-disk sample. Delete the original mesh, as it is no longer needed.


Go to Filters > Point Set > Compute normals for point sets.

Change the neighbour count to 16 and execute. This automatically estimates the normal vector of each point (which direction each surface faces) so that faces can be generated during surface reconstruction.
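Meshlab's normal estimation is more involved than this (it fits a local surface to each point's 16 nearest neighbours, since a raw point cloud has no triangles yet), but the underlying idea of a normal is easy to show: for a triangle, it is the normalized cross product of two edge vectors. A minimal stdlib-only sketch:

```python
import math

def triangle_normal(a, b, c):
    """Unit normal of a triangle given three (x, y, z) vertices.

    Illustrates what a surface normal is; per-point normals on a
    point cloud are instead estimated from neighbouring points.
    """
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # The cross product u x v is perpendicular to the triangle's plane.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

# A triangle lying flat in the XY plane points straight up (+Z):
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

Consistent normal directions are what let the reconstruction filter in the next step decide which side of each generated face is "outside".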


Go to Filters > Remeshing, Simplification and Reconstruction > Surface Reconstruction: Ball Pivoting.

A single click on the up arrow in the World Unit box next to "Pivoting Ball Radius" will auto-fill an appropriate value. When applied, a mesh is created from the point cloud. Go back and tweak the parameters little by little, repeating until you are satisfied with the resulting mesh.


Acquisition of color information

Here we also show how to carry the existing color information along when exporting from "Meshlab".

Run Filters > Texture > Parametrization: Trivial Per-Triangle. If an error occurs, change the inter-triangle border value to 1.

Run Filters > Texture > Transfer Vertex Color to Texture. At this point you will be asked to save the project; save the texture file under the suggested name. The saved name will be the project name with "_tex.png" appended.

Export the result as an OBJ file to the same folder. Make sure all available checkboxes are selected; the texture file you just created appears in the box on the right. This file type can also be used with 3D packages and game engines such as "Unity" and "Unreal".


The next step is to move from “Meshlab” to the open source 3D modeling tool “Blender”.


Step 3: Convert to GLB file

Open "Blender" and import the OBJ file. If you're having trouble with the imported file, click View in the upper-left corner of the viewport and enable the Sidebar checkbox.

To the right of the viewport there is a tab labeled "View". Change the Clip Start and End parameters to 0.01 m and 10000 m respectively.

Zoom in and out until you can see the model. It may be upside down, so rotate it and scale it down a little.

Model in Blender: the highlighted Rotate tool is on the left and the View panel is on the right.


Next, click the model. Select the Rotate icon on the left side of the screen and use the direction rings to adjust until the floor is in the correct orientation.

Shrink the model at this stage as well. You can fine-tune the final size in the next step, but a scale of around 10% is recommended.

You may also notice that the model has no texture. At the top right of the viewport window are several viewport-shading icons; select "Material Preview" to see the model's colors.

You can also use Blender's editing tools to fill holes and remove stray outer faces to make the mesh look better (there are many tutorials on cleaning meshes, so it is not covered here).

When you are satisfied with the mesh, export it as "glTF 2.0". The file extension actually required is ".glb", the binary version of glTF.
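For completeness, the ".glb" container Blender writes is just the glTF JSON wrapped in a small binary envelope: a 12-byte header (magic "glTF", version 2, total length) followed by length-prefixed chunks. The sketch below, which follows the public glTF 2.0 layout using only the standard library, builds the smallest possible GLB and reads its header back. It is for illustration, not a replacement for Blender's exporter.

```python
import json
import struct

# Smallest valid glTF asset, encoded as the JSON chunk of a GLB.
gltf_json = json.dumps({"asset": {"version": "2.0"}}).encode("utf-8")
# JSON chunks must be padded to a 4-byte boundary with spaces.
gltf_json += b" " * (-len(gltf_json) % 4)

JSON_CHUNK = 0x4E4F534A  # ASCII "JSON" as a little-endian uint32
total_length = 12 + 8 + len(gltf_json)  # header + chunk header + chunk data

glb = struct.pack("<4sII", b"glTF", 2, total_length)   # 12-byte file header
glb += struct.pack("<II", len(gltf_json), JSON_CHUNK)  # chunk header
glb += gltf_json

# Parsing the file header back:
magic, version, length = struct.unpack_from("<4sII", glb, 0)
print(magic, version, length)
```

Real GLB files from Blender add a second binary chunk holding vertex and texture data, but the header layout is the same, which is why viewers can reject a corrupt file just by checking the first 12 bytes.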


Step 4: Import to Hubs

Open Firefox, go to hubs.mozilla.com, and sign in to your account. Click Create a room, choose Scene, and select Create a scene with Spoke.

Select Empty Project to create a new project.

An avatar representing the spawn point and the crater terrain are displayed. Click "My Assets" in the lower-left panel. Upload the .glb file you exported in Step 3 here, then drag and drop it into the hierarchy panel just above the crater terrain.

Use the spawn-point icon and the crater terrain as guides to scale your scene. This time the mesh was scaled to 0.002 of its original size. You can keep or hide the crater terrain and add objects such as lights.

Spoke window showing the final scene in the hierarchy and viewport, with the model scaled to the spawn point


When you're happy with the result, select Publish to Hubs. Because the mesh is very detailed, it may not run well on mobile if there are too many polygons (triangles). Ideally, reduce the mesh to fewer than 50,000 polygons during the Blender stage to improve performance. Make sure all the other performance parameters are okay, then publish the scene by selecting View your scene > Create a room with this scene.
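The 50,000-polygon budget mentioned above can be sanity-checked before uploading. One rough way, sketched below with only the standard library, is to count faces in the exported text OBJ, remembering that a face with n vertices triangulates into n - 2 triangles. This is an approximation for plain ASCII OBJ files only, which is what Meshlab and Blender produce in this workflow.

```python
def count_triangles_in_obj(obj_text: str, budget: int = 50_000):
    """Rough triangle count for a text OBJ, plus a budget check.

    Sketch only: each "f" line with n vertex references contributes
    n - 2 triangles after triangulation.
    """
    triangles = 0
    for line in obj_text.splitlines():
        if line.startswith("f "):
            n_verts = len(line.split()) - 1
            triangles += max(n_verts - 2, 0)
    return triangles, triangles <= budget

# Two triangles and one quad -> 1 + 1 + 2 = 4 triangles, under budget.
sample_obj = "v 0 0 0\nf 1 2 3\nf 1 3 4\nf 1 2 3 4\n"
print(count_triangles_in_obj(sample_obj))  # (4, True)
```

If the count comes back over budget, Blender's Decimate modifier is the usual place to bring it down before exporting the .glb.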

In the room, you can share the link with others to show off your 3D scan. This flow should also work for point-cloud objects scanned with "Dot3D Pro" or other Intel RealSense-enabled software.

This will create a social VR space where anyone can chat with the creator (you) via a browser and enjoy a virtual space using a VR headset.


The image below shows the final scene created this time. Of course, further work is needed to optimize and refine the mesh created from the original high-quality scan, but this demonstrates that the workflow can be used to create a 3D environment.

Social VR 3D scanning environment in Mozilla Hubs portal




We offer the Intel RealSense D series, LiDAR Camera L515, and the other devices mentioned in this article through our R&D rental service "tegakari". Feel free to try the actual hardware, and please contact us about usage.