[Feature Article] Products related to SLAM technology (Visual SLAM / LiDAR SLAM)

■ This article was posted in 2020, so the information it contains may be out of date.

Nowadays, autonomous driving and autonomous robots are in the news daily. One of the technologies supporting them is SLAM (Simultaneous Localization and Mapping). SLAM estimates the position of a device while simultaneously building a map of its surroundings, based on information from sensors such as cameras and LiDAR mounted on a moving platform (a robot or machine). Think of walking through an unfamiliar town on your own: as you move, you take in the layout of roads and buildings while keeping track of where you are.

Many of the products handled by our company (Tegara Corporation) are related to SLAM technology.
This time, we will introduce these products along with the two types of technology used to realize SLAM.

 

Where is SLAM technology used?

There are many products related to SLAM technology; a familiar example is the cleaning robot that moves around a room automatically. SLAM technology is also used in the autonomous transport solutions frequently mentioned in the news (such as those in Amazon warehouses), package delivery by autonomously piloted drones and UAVs (unmanned aerial vehicles), self-driving cars, and Mars exploration rovers.

Cleans a Whole Level of Your Home | Roomba® 900 series | iRobot®

 

SLAM comes in two main types: Visual SLAM, based on image information captured by a camera, and LiDAR SLAM, based on point cloud data measured by LiDAR. Both are attracting strong attention. Below, we introduce representative products for each technology.

What is Visual SLAM?

Visual SLAM (Simultaneous Localization and Mapping) is a technology that simultaneously estimates 3D information about the environment (the map) and the position and orientation of the camera, from images the camera captures. Monocular cameras, stereo cameras, RGB-D cameras (D = depth), and others are used.

-Monocular camera: A typical camera like those in digital cameras and smartphones. A single lens captures a 2D image of the environment.

-Stereo camera: A camera with two lenses (two "eyes"), like human vision. Depth is estimated from the two captured images, yielding 3D information.

-RGB-D camera: A camera that pairs a normal lens with a depth sensor. It measures visual information in real space and produces a 3D image.
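As a rough illustration of the stereo principle above: once the same feature is found in both images, its depth follows from the disparity (the horizontal pixel shift between the two views) as Z = f·B/d. The sketch below uses hypothetical camera numbers (focal length, baseline) not tied to any product in this article:

```python
import numpy as np

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo disparity: Z = f * B / d.

    focal_px     -- focal length in pixels (hypothetical value)
    baseline_m   -- distance between the two lenses in meters
    disparity_px -- horizontal pixel shift of a feature between the two images
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: 700 px focal length, 12 cm baseline.
depths = stereo_depth(700.0, 0.12, np.array([84.0, 42.0, 21.0]))
print(depths)  # [1. 2. 4.] -- larger disparity means a closer object
```

Note the inverse relationship: distant objects produce tiny disparities, which is why stereo depth accuracy degrades with range.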

There are many types of cameras, so please consult us about which suits your application.

Image of Visual SLAM (for MYNT EYE)

MYNT EYE SLAM Demo 180 FOV

 

The following are Visual SLAM related products.

Duckietown

Duckietown is a platform for learning robotics and AI that originated at MIT (Massachusetts Institute of Technology). It consists of small autonomous vehicles (Duckiebots), roads, traffic lights, signs, obstacles, passengers in need of a ride (Duckies), and the city they live in (Duckietown).

The Duckiebot running in Duckietown is equipped with a single monocular camera (a fisheye lens with a wide 160° field of view) and simultaneously estimates the environment and the camera's position and orientation from the captured images.
With it, you can learn the autonomous control behaviors (detecting and avoiding hazards and obstacles, and driving safely) required for self-driving technology.

 

MYNT EYE S series

The MYNT EYE S series is a 3D stereo camera designed for Visual SLAM that can combine vision sensors, structured light, and inertial navigation for 3D data creation. In addition to indoor and outdoor applications, models equipped with IR sensors can detect indoor obstacles (such as walls) even in complete darkness.

The 6-axis IMU built into the MYNT EYE has an IMU synchronization accuracy of up to 0.02 ms, which is useful for developing visual-inertial odometry (VIO) and for estimating position and orientation in 3D space.
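To give a feel for why tight IMU timing matters in visual-inertial odometry, here is a toy dead-reckoning sketch that integrates a gyroscope's yaw rate over time. The sample rate and values are hypothetical; real VIO fuses this kind of integration with camera observations to cancel the drift that pure integration accumulates:

```python
import numpy as np

def integrate_yaw(gyro_z, dt):
    """Dead-reckon heading by integrating gyroscope yaw rate over time.

    gyro_z -- angular velocity around the z axis (rad/s), one sample per step
    dt     -- sample interval in seconds (timestamps assumed evenly spaced;
              timing error here translates directly into heading error)
    """
    return np.cumsum(np.asarray(gyro_z) * dt)

# Hypothetical 200 Hz IMU turning at a constant 0.5 rad/s for one second.
dt = 1.0 / 200.0
yaw = integrate_yaw(np.full(200, 0.5), dt)
print(round(float(yaw[-1]), 3))  # 0.5 rad of heading change after one second
```

Because every sample's timestamp error is multiplied into the integral, sub-millisecond synchronization between IMU and camera frames is what keeps the visual and inertial estimates consistent.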

 

Intel RealSense Tracking Camera T265

The T265 tracking camera has two fisheye sensors (approximately 163° field of view), an IMU, and a visual processing unit (VPU), the Intel Movidius Myriad 2. A proprietary algorithm (high-precision visual-inertial odometry (VIO) plus SLAM) fuses visual odometry (VO) with IMU odometry to run the full SLAM pipeline. With stereo image analysis, on-device pose estimation, and spatial orientation and translation data, it can track and navigate accurately without GPS.

 

What is LiDAR SLAM?

This method mainly uses a laser ranging sensor called LiDAR (Light Detection and Ranging). More precisely, LiDAR belongs to the Time of Flight (ToF) sensor category: it measures how long emitted light takes to return, and from that how far an obstacle is from the sensor. (Sensors in general collect physical parameters such as temperature, humidity, light, weight, and distance.) Scanners are classified as 1D, 2D, or 3D according to the number of axes; the current mainstream is 2D/3D.

-1D: Measures the distance between the target (obstacle) and the scanner along a single axis (one dimension).

-2D: To obtain two-dimensional point cloud data (X and Y axes), a single laser beam is emitted while the scanner rotates, measuring the horizontal distance to targets.

-3D: To obtain three-dimensional point cloud data (X, Y, and Z axes), multiple laser beams spread along the vertical axis are emitted, measuring the shape and position of targets.
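The ToF principle and the 2D scanning described above can be sketched in a few lines: range follows distance = c·t/2 (the light travels out and back), and each (angle, range) reading from a rotating scanner converts to an (X, Y) point. All numbers below are hypothetical:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """ToF ranging: distance = c * t / 2, halved because the pulse travels out and back."""
    return C * round_trip_s / 2.0

def scan_to_points(angles_rad, ranges_m):
    """Convert a 2D rotating scan's (angle, range) readings into X/Y points."""
    angles = np.asarray(angles_rad)
    ranges = np.asarray(ranges_m)
    return np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

# A pulse returning after ~66.7 ns corresponds to roughly 10 m of range.
print(round(tof_distance(66.7e-9), 2))
# Four beams at 0/90/180/270 degrees, each hitting a wall 2 m away.
pts = scan_to_points(np.deg2rad([0, 90, 180, 270]), [2.0, 2.0, 2.0, 2.0])
print(np.round(pts, 3))
```

The nanosecond-scale round-trip times are why ToF scanners need very precise timing electronics: one nanosecond of error is about 15 cm of range error.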

Since we handle several LiDARs, please consult us about which suits your application.

Image of LiDAR SLAM (for RPLIDAR S1)

Product Showcase: RPLIDAR S1 360 ° TOF Laser Range Scanner

 

The following are LiDAR SLAM related products.

Intel RealSense LiDAR Camera L515

The L515 is a compact, highly flexible solid-state 3D LiDAR depth camera for developing indoor applications that require high-resolution, high-precision depth data. Intel's proprietary Micro-Electro-Mechanical System (MEMS)* mirror scanning technology delivers high laser output efficiency and low power consumption for depth streaming.

* MEMS is a type of solid-state device, also called a microelectromechanical system.

To see the LiDAR in action, please refer to our L515 introduction video (around 1:36).

[RealSense's first LiDAR type] L515 arrived, so I moved it [Free rental reception]

 

 

RPLIDAR S1

Slamtec's RPLIDAR S1 is a very compact ToF (Time of Flight) 360° laser scanner, measuring 55.5 mm wide with a height of 51 mm. It consists of a range-scanner core and the motor components that spin the core at high speed, rotating clockwise to perform a 360° scan. It is not affected by reflections from objects and can be used outdoors. Despite its small size, it can measure a full 360° range with a maximum radius of 40 m, and is suitable for a wide range of applications.
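As a toy illustration of the map-building half of LiDAR SLAM, the (X, Y) points produced by a 360° scanner like this can be binned into an occupancy grid, with each return marking its cell as occupied. The cell size and scan values below are hypothetical:

```python
import numpy as np

def occupancy_grid(points_xy, cell_m=0.5, size_cells=20):
    """Mark grid cells hit by scan points; the sensor sits at the grid center.

    A toy version of LiDAR SLAM's mapping step: each (x, y) return is
    binned into a square cell of side cell_m and flagged as occupied.
    Real systems also trace the free space along each beam and fuse
    many scans probabilistically.
    """
    grid = np.zeros((size_cells, size_cells), dtype=bool)
    half = size_cells // 2
    for x, y in points_xy:
        i, j = int(y / cell_m) + half, int(x / cell_m) + half  # row = y, col = x
        if 0 <= i < size_cells and 0 <= j < size_cells:
            grid[i, j] = True
    return grid

# Three hypothetical returns from walls around the sensor.
g = occupancy_grid([(2.0, 0.0), (0.0, 3.0), (-1.5, -1.5)])
print(int(g.sum()))  # 3 occupied cells
```

Combining this mapping step with the localization step (aligning each new scan against the map so far) is what closes the SLAM loop.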

 

If you have other requests along the lines of "introduce hardware from overseas manufacturers related to SLAM technology," please feel free to contact Unipos.

In addition to the products introduced this time, we will also investigate whether we can handle products we have not carried before, and provide an estimate.