Structure of Velodyne sensor


  1. The HDL-32E is a time-of-flight (ToF) range sensor.

    1. In the vertical direction, it has 32/64 laser beams, each pointing at a different (fixed) vertical angle, providing a total 40-degree vertical field of view (FOV).
    2. The entire device rotates about its vertical axis, providing a full 360-degree horizontal FOV.
      1. In a full 360 revolution, each laser takes 2172 measurements
    3. Thus, for a 360° rotation, an unordered 32/64 × 2172 × 5 array is returned.
      1. [x, y, z, intensity, ring]. The ring data field corresponds to which of the 32 lasers captured the point being transmitted.
      2. or [r, intensity, ring], where r is the measured range.
    4. By structuring the collected data by angular position and ring number, a ‘cylindrical’ depth-image representation is obtained, where each row of the image corresponds to one of the 32/64 lasers, and each column corresponds to a rotational position. Hence, for a full revolution, the image dimensions are 32/64 × 2172, so there is no many-to-one projection problem.
    5. However, in a vehicle moving at high speeds this rotation does not happen fast enough to ignore the skewing generated by this sort of “rolling shutter” behavior.
      1. To obtain a more geometrically consistent representation of the environment for each scan, we must account for the vehicle motion, resulting in a point cloud which no longer contains a range measurement for every pixel, but contains multiple measurements for some other pixels. [This is not handled at present; put differently, preserving every single measurement across the full 360° is not very important.]
      2. Hence the range image is usually analyzed at a resolution of 32/64 × 1024 or 32/64 × 512.
    6. The intensity field could be used as another avenue to filter noise; however, the high variation in the reported intensity of snow would likely hinder performance.
    7. Lidar 2D Images
      1. distance matrix D
      2. intensity image I
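The cylindrical projection described above can be sketched as follows. This is a minimal illustration, not the sensor's official decoding pipeline: the function name, the overwrite-on-collision policy, and the default column count (1024, matching the downsampled resolution mentioned earlier) are all assumptions.

```python
import numpy as np

def to_range_image(points, n_rings=32, n_cols=1024):
    """Project an unordered lidar point cloud into a cylindrical range image.

    points: (N, 5) array of [x, y, z, intensity, ring].
    Returns the distance matrix D and intensity image I, both (n_rings, n_cols):
    row = ring (laser index), column = quantized azimuth (rotational position).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    intensity = points[:, 3]
    ring = points[:, 4].astype(int)

    r = np.sqrt(x**2 + y**2 + z**2)            # range per point
    azimuth = np.arctan2(y, x)                  # angle in [-pi, pi)
    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols

    D = np.zeros((n_rings, n_cols))
    I = np.zeros((n_rings, n_cols))
    # If two points fall into the same cell (the many-to-one case that appears
    # once motion distortion is corrected), the later point simply overwrites.
    D[ring, col] = r
    I[ring, col] = intensity
    return D, I
```

A cell left at zero means no return was received for that laser/azimuth bin.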

    Distribution of Snow, Fog & Rain in Lidar Data


    Smaller objects at medium to large distances might be falsely marked as noise. Modern lidar sensors, e.g. the Velodyne VLP32C, do not necessarily perceive drops of water from fog or rain as a single point, but often as multi-point clutter in the near to mid range.

    Most modern LiDAR sensors are capable of returning multiple echoes for each LiDAR ray (multi-echo).

    This can be utilized for segmentation and airborne particle classification.


    LiDAR point cloud from the example scene from Fig. 1. Left: Colored by label with particles in white and non-particles in red. Right: Colored by echo feature with green when both echoes match, blue for the first echo return, and purple for the second echo return.
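The echo feature described in the caption above can be sketched as a per-pixel comparison of the first- and second-echo range images. This is an assumed reading of the coloring scheme, not the paper's actual method: the function name, the 0.1 m mismatch tolerance, and the three-way label encoding are hypothetical.

```python
import numpy as np

def echo_feature(r_first, r_second, tol=0.1):
    """Label each range-image pixel by comparing two echo returns.

    r_first, r_second: (H, W) range images for the first and second echo.
    Returns an int label map mirroring the caption's coloring:
    0 = both echoes match (green, likely a solid surface),
    1 = first echo in front of the second (blue),
    2 = second echo in front or only the second present (purple).
    A mismatch between echoes often indicates an airborne particle
    (snow, fog, rain) partially reflecting the ray before a solid target.
    """
    labels = np.zeros(r_first.shape, dtype=np.int8)
    mismatch = np.abs(r_first - r_second) > tol
    labels[mismatch & (r_first < r_second)] = 1
    labels[mismatch & (r_first >= r_second)] = 2
    return labels
```

Such a label map could then feed a particle classifier as an extra per-pixel feature alongside distance and intensity.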

    References:

    Airborne Particle Classification, FSR 2021.