Recent Developments in Airborne Lidar

Multispectral Laser Scanning, Single-photon Lidar, Hybrid Sensors and UAVs

The arrival of airborne Lidar, also referred to as airborne laser scanning (ALS), has revolutionized area-wide 3D data acquisition of topography, bathymetry, vegetation, buildings and infrastructure. This article presents an overview of the progress made since its introduction in both sensor and aircraft technology. It also highlights how ongoing miniaturization has recently enabled the use of Lidar on unmanned aerial vehicles (UAVs), resulting in ultra-high-resolution 3D point clouds with centimetre precision.

From Single to Multiple Pulses in the Air

Current trends include a steady increase in the measurement rate, achievements in full-waveform processing, the spread of multispectral and topo-bathymetric laser scanners, the introduction of single-photon-sensitive airborne sensors and the advent of integrated active and passive systems (Figure 1). Airborne Lidar is a dynamic, polar and active multi-sensor system comprising a navigation unit (GNSS, IMU) for continuous measurement of the sensor platform’s position and attitude, and the laser scanner itself. The latter provides the direction of the laser beam and the distance between the sensor and the reflecting targets. The sensor-to-target ranges are estimated by measuring the round-trip time of a short laser pulse. Due to the constant increase of the pulse repetition frequency (PRF), it is commonplace today that multiple laser pulses are in the air at the same time, i.e. new laser pulses are emitted before the returns of previously emitted pulses have arrived. Assuming a constant flying velocity, higher scan rates provide a higher laser point density on the ground. Depending on the application, densities typically range from 5 to 20 points/m2. State-of-the-art laser scanners feature a PRF of up to 2MHz (see Table 1), corresponding to an unambiguous range of 75m. In other words, flying such a sensor at an altitude of 2,600m results in 35 concurrent pulses in the air. This is also referred to as multiple time around (MTA). Manufacturers provide various solutions for coping with this additional complexity, such as varying the PRF or using a preliminary digital terrain model (DTM) to assign each return to its causative pulse.
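As a back-of-the-envelope illustration of the MTA figures quoted above, the following minimal Python sketch reproduces the arithmetic; the function names are illustrative only and the speed of light is rounded to 3e8 m/s.

```python
# Minimal sketch of the multiple-time-around (MTA) arithmetic quoted above.
# Values are taken from the text; the speed of light is rounded to 3e8 m/s.

C = 3.0e8  # speed of light in m/s (rounded)


def unambiguous_range(prf_hz: float) -> float:
    """Maximum range at which a return arrives before the next pulse is emitted."""
    return C / (2.0 * prf_hz)


def pulses_in_the_air(altitude_m: float, prf_hz: float) -> int:
    """Approximate number of pulses simultaneously in flight at a given altitude."""
    return round(altitude_m / unambiguous_range(prf_hz))


if __name__ == "__main__":
    prf = 2.0e6        # 2 MHz pulse repetition frequency
    altitude = 2600.0  # flying altitude in metres
    print(unambiguous_range(prf))            # 75.0 m
    print(pulses_in_the_air(altitude, prf))  # ~35 concurrent pulses
```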

Figure 1: Airborne laser scanning timeline.


From Multi-photon to Single-photon Lidar

For conventional airborne Lidar, also referred to as linear-mode Lidar, several hundred photons are needed to detect an echo. In contrast, single-photon-sensitive sensors have entered the airborne mapping market in recent years, advertised as offering a higher area coverage performance while enabling the same point density as traditional laser scanners. Two approaches have gained particular importance: Geiger-mode Lidar (GmLidar) and single-photon Lidar (SPL).

Table 1: Recent airborne Lidar sensors for manned and unmanned platforms based on the spec sheets of the respective instruments. Footprint sizes are reported for the typical flying altitude.

In GmLidar, a divergent laser pulse illuminates a relatively large spot on the ground. The reflected signal is captured by an array of 32x128 Geiger-mode avalanche photodiodes (GmAPD). An additional bias above the breakdown voltage puts the APD of each matrix element into a state in which the arrival of a single photon or a few photons is sufficient to trigger the avalanche effect, leading to an abrupt rise of voltage at the detector’s output. This event is used as the stop pulse for the round-trip time measurement and, once triggered, the respective cell stays inactive for a few microseconds, i.e. generally until the emission of the next laser pulse. On the one hand, each GmAPD is a binary detector (i.e. no multi-target capability), but on the other hand 4,096 cells are simultaneously active for echo detection and ranging. In short, GmLidar constitutes a scanning range camera. To reduce the inevitable measurement noise and to estimate signal intensity, redundant echoes from largely overlapping laser footprints are analysed in post-processing.
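How such redundant echoes might be thinned in post-processing can be sketched with a simple coincidence criterion: keep a return only if enough returns from overlapping footprints fall into the same small volume. The Python fragment below is an illustrative sketch only; the voxel size, threshold and function name are arbitrary assumptions, not any vendor’s actual algorithm. In a similar spirit, the number of detections per location can serve as a crude proxy for signal intensity.

```python
# Illustrative sketch (not a vendor algorithm): suppress Geiger-mode noise by
# keeping only returns confirmed by neighbouring returns from overlapping
# footprints. Voxel size and minimum hit count are arbitrary assumptions.
from collections import defaultdict
from typing import Iterable, List, Tuple

Point = Tuple[float, float, float]  # x, y, z in metres


def coincidence_filter(points: Iterable[Point],
                       voxel: float = 0.5,
                       min_hits: int = 3) -> List[Point]:
    """Keep a point only if its voxel collected at least min_hits returns."""
    pts = list(points)
    counts = defaultdict(int)
    for x, y, z in pts:
        counts[(int(x // voxel), int(y // voxel), int(z // voxel))] += 1
    return [(x, y, z) for x, y, z in pts
            if counts[(int(x // voxel), int(y // voxel), int(z // voxel))] >= min_hits]
```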

Figure 2: Perspective view of a multispectral 3D point cloud. Pseudocolour RGB image composed of intensity measurements of three laser channels: red=1,550nm, green=1,064nm, blue=532nm. Sensor: Teledyne Optech Titan, data captured by SLU, Umeå, Sweden.

In contrast to GmLidar, SPL technology utilizes a highly collimated laser pulse which is split into 10x10 sub-beams (‘beamlets’) by a diffractive optical element (DOE). Each laser beamlet can return multiple echoes, as the backscattered radiation of each beamlet is captured by a single-photon-sensitive detector array which acts like an APD operated in linear mode within a certain dynamic range. Multi-target capability is especially important for vegetation penetration. Due to their high receiver sensitivity, both systems can be operated from high altitudes (4,000–10,000m AGL). They therefore enable a higher area performance than conventional laser scanners while providing similar point densities (800–1,000km2/h with 8 points/m2), in exchange for a lower height accuracy (~10cm) and a higher rate of outlier points. Recent SPL data evaluations have revealed good measurement precision in the centimetre range for smooth horizontal targets (e.g. asphalt) and a faster decrease of precision in more complex target situations (tilted and/or natural surfaces) compared to conventional full-waveform scanners. Figure 4 depicts the 3D point cloud of a church in Austria’s capital Vienna, captured with both a full-waveform laser scanner and a single-photon Lidar sensor at approximately the same point density (waveform Lidar: 20 points/m2; SPL: 16 points/m2).
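A quick plausibility check of the coverage figures quoted above: the effective number of ground points per second implied by 800–1,000km2/h at 8 points/m2 can be worked out as follows (a rough sketch assuming uniform coverage and ignoring strip overlap).

```python
# Back-of-the-envelope check (assuming uniform coverage, no strip overlap) of
# the coverage figures quoted above: what effective ground-point rate does
# 800-1,000 km²/h at 8 points/m² imply?

def required_point_rate(area_rate_km2_per_h: float, density_pts_per_m2: float) -> float:
    """Effective ground points per second for a given coverage rate and density."""
    area_rate_m2_per_s = area_rate_km2_per_h * 1.0e6 / 3600.0
    return area_rate_m2_per_s * density_pts_per_m2


if __name__ == "__main__":
    for rate in (800.0, 1000.0):
        print(f"{rate:6.0f} km²/h -> {required_point_rate(rate, 8.0) / 1e6:.1f} million points/s")
    # ~1.8-2.2 million ground points per second - a rate that single-photon-sensitive
    # systems reach through parallel detection (beamlets / detector arrays).
```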

Figure 3: Deposition and erosion caused by a 30-year flood event, captured by a repeat topo-bathymetric survey of the River Pielach, Austria. Sensor: RIEGL VQ-880-G. Data: RIEGL LMS.

From Single-purpose Scanners to Hybrid Sensor Systems

Another apparent trend in airborne mapping is the combination of active and passive sensors. Practically all new state-of-the-art airborne sensors feature both laser scanners and RGB, CIR or even multispectral cameras (see Table 1). While concurrent image acquisition is valuable due to the documentary character of the images and the inherent possibility of deriving digital orthophotos from them, recent progress in computer vision in general – and dense image matching (DIM) in particular – now enables height estimation for each image pixel based on multi-view stereo. With typical ground sampling distances (GSDs) of aerial images ranging from 5cm to 20cm, photogrammetric techniques even outperform Lidar in terms of 3D point density. However, this comes with higher processing times and a higher outlier rate. Especially in image areas with low texture, shadows and semi-transparent objects like vegetation, the reliability of image-derived height estimates still lags behind Lidar.
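The density argument can be made concrete: dense image matching delivers at most one height estimate per pixel, i.e. roughly 1/GSD² points per square metre. The following sketch (assuming one successfully matched height per pixel, which is optimistic) shows why DIM outpaces typical ALS point densities.

```python
# Why DIM can outperform Lidar in raw point density: dense image matching yields
# at most one height estimate per pixel, i.e. roughly 1/GSD² points per m².
# Assumption: one matched height per pixel, ignoring matching failures.

def dim_point_density(gsd_m: float) -> float:
    """Approximate DIM point density (points/m²) for a given ground sampling distance."""
    return 1.0 / (gsd_m ** 2)


if __name__ == "__main__":
    for gsd_cm in (5, 10, 20):
        print(f"GSD {gsd_cm:2d} cm -> ~{dim_point_density(gsd_cm / 100.0):4.0f} points/m²")
    # 5 cm -> ~400, 10 cm -> ~100, 20 cm -> ~25 points/m², versus the
    # 5-20 points/m² typical of conventional ALS mentioned earlier.
```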

Nevertheless, rigorous fusion of the two data sources, beyond mere colourization of the Lidar point cloud with the radiometric image content, remains a topic of ongoing research. One of the open research questions in this context is the semantic labelling of Lidar and DIM point clouds using their mutual information content. Joint processing of Lidar and DIM point clouds relies on proper co-registration of the different data sources. This requires both knowledge of the underlying sensor models and an understanding of the properties of the respective active and passive capturing techniques. As an example of the latter, Lidar penetrates the grass layer to a certain extent, whereas DIM only delivers the top of the grass layer. When combining both data sources in a hybrid orientation (i.e. strip adjustment plus aerial triangulation), these differences need to be considered when searching for corresponding points and surface patches in the scans and images.

Figure 4: Perspective view of 3D point clouds from single-photon Lidar (left) and full-waveform Lidar (right) coloured by intensity. Sensors: Hexagon Leica SPL100, RIEGL VQ-1560i. Data: City of Vienna, Municipal Department 41.

From Manned to Unmanned Platforms

The latest and most obvious trend in the aerial mapping sector is the meteoric rise of unmanned aerial vehicles (UAVs or drones). Tremendous progress in automation, navigation, robotics, miniaturization and commercialization has enabled the mass production of consumer drones capable of carrying payloads of several hundred grams at a comparatively low price. Lightweight compact cameras were therefore the first tools for systematic close-range aerial 3D mapping using both nadir and oblique images. Looking at the professional UAV market, metric 100–150MP cameras weighing about 1kg are available nowadays and allow 3D data acquisition with a resolution in the sub-centimetre domain. For such high-end sensor equipment, professional multi-rotor or fixed-wing UAV platforms in the 10kg maximum take-off mass (MTOM) category are necessary.

As laser scanners are typically much heavier than cameras, it took a couple of years longer to integrate survey-grade time-of-flight Lidar sensors on UAV platforms. Today, however, complete scanners including an IMU are available that weigh less than 3kg and feature full-waveform recording, scan rates of up to 1MHz and ranging precision in the centimetre range. These compete with the specifications of mature scanners designed for manned platforms (see Table 1). With flying altitudes of 50–100m AGL, the resulting laser footprints are typically smaller than 10cm and the achievable point density is in the range of several hundred points/m2. However, UAVs in the 25kg MTOM category are necessary to lift such high-end sensors.
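The footprint and density figures quoted above follow directly from flying altitude, beam divergence, field of view, measurement rate and flying speed. The sketch below uses illustrative assumptions (0.5mrad divergence, 70° field of view, 8m/s flying speed) rather than the specifications of any particular instrument.

```python
# Rough sketch of the UAV-Lidar figures quoted above. Beam divergence (0.5 mrad),
# field of view (70°) and flying speed (8 m/s) are illustrative assumptions,
# not the specification of any particular instrument.
import math


def footprint_diameter(altitude_m: float, divergence_rad: float) -> float:
    """Laser footprint diameter on the ground for a nadir-looking beam."""
    return altitude_m * divergence_rad


def mean_point_density(prf_hz: float, altitude_m: float,
                       fov_deg: float, speed_m_s: float) -> float:
    """Mean point density (points/m²) for a line scanner, assuming every shot hits the ground."""
    swath = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    return prf_hz / (swath * speed_m_s)


if __name__ == "__main__":
    print(f"{footprint_diameter(100.0, 0.5e-3) * 100:.0f} cm footprint at 100 m AGL")   # ~5 cm
    print(f"{mean_point_density(1.0e6, 100.0, 70.0, 8.0):.0f} points/m² at 100 m AGL")  # several hundred
```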

In terms of scan mechanisms, different concepts are in use. Some instruments use a rotating mirror to deflect the laser beam into a (vertical) plane with a scan angle range of more than 300°. This is especially useful for scanning downwards, sideways and even above the horizon, e.g. for capturing narrow streets or river canyons. This concept is also pursued by manufacturers of short-range laser scanners primarily designed for the automotive market, which use a fan of laser channels and thereby increase the point density. As many laser shots do not hit a target with such full-circle scan planes, some instruments use conical downward scanning (Palmer scanner), with the benefit that each laser pulse is directed towards the ground so that the pulse repetition rate equals the effective measurement rate. Although not only pulsed lasers but also continuous-wave lasers are lightweight enough for integration on UAVs, the main benefit of pulsed lasers is their inherent multi-target capability. This is especially important for applications in forestry, which is one of the driving forces behind the progress in UAV Lidar. Figure 5 illustrates 3D point clouds captured with different UAV-borne laser scanners.
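The trade-off between a full-circle scan plane and conical scanning can be quantified with a simple, admittedly idealized, comparison of effective measurement rates; the 120° ‘useful sector’ below is an assumed value for illustration, not a measured one.

```python
# Illustrative comparison (assumed numbers) of effective measurement rates for a
# full-circle scan plane versus conical (Palmer) scanning: with a 360° scan
# plane, only the sector pointing at terrain or objects returns echoes.

def effective_rate_full_circle(prf_hz: float, useful_sector_deg: float) -> float:
    """Shots per second that can hit a target when scanning a full 360° plane."""
    return prf_hz * useful_sector_deg / 360.0


def effective_rate_conical(prf_hz: float) -> float:
    """Conical downward scanning: every pulse is directed towards the ground."""
    return prf_hz


if __name__ == "__main__":
    prf = 1.0e6  # 1 MHz pulse repetition frequency
    # Assume only a 120° downward/sideways sector actually hits terrain or objects.
    print(f"{effective_rate_full_circle(prf, 120.0) / 1e3:.0f} kHz effective (360° scan plane)")
    print(f"{effective_rate_conical(prf) / 1e3:.0f} kHz effective (conical scan)")
```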

Figure 5: Perspective view of 3D UAV-Lidar point clouds. Left: Aubergwarte, Lower Austria. Sensor: 2x Velodyne Puck. Sensor integration: 4DIT. Data: Umweltdata GmbH. Right: Hessigheim, Germany. Sensor: RIEGL VUX-1LR. Data: Federal Institute of Hydrology, Koblenz, Germany.

Another prominent field of application is hydrology/hydrography, where the trend is towards the development of bathymetric sensors for UAV platforms. While the first such instrument was a UAV laser profiler with a fixed beam axis, UAV-borne bathymetric scanners promising high point densities and small laser footprints have already been announced for release in 2019. These might be a game changer for capturing fluvial morphology, especially for shallow rivers with abundant riverside vegetation.

Conclusions

This article has provided an overview of recent trends in airborne Lidar. Clear progress is being made – and even accelerating – in many fields such as sensor and aircraft technology, robotics, automation and miniaturization. This is leading to higher point densities through increased pulse repetition frequencies or the use of single-photon-sensitive receiver arrays, routine online and offline processing of the recorded return waveforms, the use of multispectral lasers, simultaneous acquisition of topography and bathymetry, concurrent capturing and processing of laser scans and images and, finally, the advent of Lidar and image sensors mounted on multi-rotor and fixed-wing UAV platforms. With no end in sight to airborne Lidar’s success story in the foreseeable future, it will be intriguing to keep an eye on the next developments.
