Three-Dimensional Navigation with Scanning Ladars: Concept & Initial Verification

ANDREY SOLOVIEV
University of Florida

MAARTEN UIJT DE HAAG
Ohio University

This paper investigates the use of scanning laser radars (LADARs) for 3D navigation of autonomous vehicles in structured environments such as outdoor urban navigation scenarios. The navigation solution (position and orientation) is determined in unknown environments where no a priori map information is available. The navigation is based on the use of planar surfaces (planes) extracted from LADAR scan images. Changes in plane parameters between scans are applied to compute position and orientation changes. Feasibility of the algorithms developed is verified using simulation results and initial results of live data tests.

Manuscript received September 13, 2007; revised April 2, 2008; released for publication July 22, 2008.

IEEE Log No. T-AES/46/1/935926.

Refereeing of this contribution was handled by D. Gebre-Egziabher.

The work presented in this paper was supported in part through the Air Force Office of Scientific Research (AFOSR) 07NE174 research grant. The authors would especially like to thank Jon Sjogren of the AFOSR for supporting this research.

Authors' addresses: A. Soloviev, University of Florida, Research and Engineering Education Facility, 1350 N. Poquito Rd., Shalimar, FL 32579-1163, E-mail: (soloviev@ufl.edu); M. U. de Haag, Ohio University, 210 Stocker Center, Athens, OH 45701.

0018-9251/10/$26.00 © 2010 IEEE

IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 46, NO. 1, JANUARY 2010

I. INTRODUCTION

This paper focuses on the use of laser radars (LADARs) for navigation of unmanned aerial vehicles (UAVs) in an urban environment. To enable operation of UAVs at any time in any environment, a precision navigation, attitude, and time (PNAT) capability onboard the vehicle is required. This capability should be robust and not solely dependent on the Global Positioning System (GPS), since GPS may not be available due to shadowing, significant signal attenuation, and multipath caused by buildings, or due to intentional denial or deception. The following operational scenario is considered in this paper. A UAV takes off at a known position in a known environment. After the take-off phase, the UAV enters an unknown or partially known environment and starts its mission toward the urban target environment. Upon arrival in the urban environment, the UAV may perform tasks such as surveillance. Navigation during the en-route phase is based on terrain-referenced navigation (TRN) techniques. The urban environment is fundamentally different from the en-route flight environment: whereas during en-route flight most information is found in the environment below the UAV, in the urban environment navigable information is mostly found around the aerial vehicle. Hence, the UAV platform must be capable of observing features in a wide field-of-view (FOV); 2D LADARs and 3D imaging sensors are excellent candidates for this approach. A conceptual picture of the urban UAV scenario is shown in Fig. 1.

During the en-route phase of flight of the UAV, terrain-referenced techniques may be applied in the absence of GPS. Many TRN techniques have been successfully employed in the past. Two well-known TRN schemes are terrain contour matching (TERCOM) and Sandia inertial terrain-aided navigation (SITAN) [1]. These methods integrate sensor data from a radar altimeter and a baro-altimeter with an inertial navigation system (INS) and an a priori known terrain database to obtain an estimate of user position and velocity. A more recent variant of a TRN system that exploits the use of an airborne scanning LADAR instead of a radar altimeter has been shown to provide meter-level accuracies [2]. However, TRN techniques are operationally limited by the availability of an onboard terrain database at the location of interest. Reference [3] describes a system that uses a passive sensor such as a vision camera or an active sensor such as a radar to detect terrain features and perform self-localization and mapping for UAVs. That paper bases the position estimates on features extracted from the imagery data. More recently, a method has been proposed to perform the navigation function using two airborne scanning LADARs integrated with an INS [4].
Fig. 1. Sensor configurations for feature-based navigation in urban environment. (a) 2D LADAR configuration. (b) 3D imaging
LADAR configuration.
Due to the lack of terrain references during UAV navigation in an urban environment, the LADAR-based methods must exploit features such as surfaces, corners, and points. For the
feature-based navigation, changes in position and
orientation are estimated from changes in the
parameters of features that are extracted from LADAR
images. Two-dimensional (2D) laser scanners and
feature-based localization methods have been used
extensively to enable navigation of robots in an
indoor environment. For example [5] describes a
method to estimate the translation and rotation of
a robot platform from a set of extracted lines and
points using a 2D sensor. Reference [6] discusses
the feature extraction and localization aspects of
mobile robots and addresses the statistical aspects
of these methods, whereas [7] introduces improved
environment-dependent error models and establishes
relationships between the position and heading
uncertainty and the laser observations, thus enabling
a statistical assessment of the quality of the estimates.
In [8], 2D scanning LADAR measurements are
tightly integrated with inertial measurement unit
(IMU) measurements to estimate the relative position
of a van in an urban environment. The idea of
using 3D measurements and planar surfaces for
2D localization is introduced in [9]. Note that the
above applications focus on 2D navigation (two
position coordinates and a platform heading angle).
However, for applications such as autonomous UAVs,
a 3D navigation solution is required, especially,
for those cases where the platform attitude varies
in pitch, roll, and yaw directions. To enable 3D
navigation, the utilization of the laser range scanner
measurements must, somehow, be expanded for
estimation of 3D position and attitude. The use
of 3D features from 3D flash LADAR imagery
was introduced in [10]. Existing flash LADARs, however, have a limited measurement range and a limited FOV: 8 m and 45 deg, typically [10].
This limits the feature availability for navigation
in urban environments. Scanning LADARs have
a significantly larger measurement range (80 m,
typical) and a FOV of up to 360 deg. Hence, this
paper extends the 3D navigation methodology
presented in [10] for the case of 2D scanning
LADARs.
This paper develops a methodology for using measurements of a 2D scanning LADAR for 3D navigation during the urban part of the UAV mission. Navigation herein is performed in completely unknown environments. No map information is assumed to be available a priori.
Fully autonomous 3D relative positioning and
3D relative attitude determination are considered.
The navigation solution is computed in a local
coordinate frame that is defined by the LADAR
position and orientation at the initial scan. A
relative navigation solution is thus provided.
Estimating the local frame position and orientation in one of the commonly used navigation frames (e.g., the East-North-Up or Earth-Centered-Earth-Fixed frame) allows for the transformation of the relative navigation solution into an absolute navigation solution.
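The relative-to-absolute conversion described above amounts to a single rigid transform by the local frame's pose. A minimal sketch follows; the function name, frame choice, and numeric values are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

def to_absolute(p_rel, R_local_to_nav, p_local_origin):
    """Map a relative position (expressed in the local LADAR frame of the
    initial scan) into a navigation frame, given the local frame's
    orientation and origin in that navigation frame."""
    return R_local_to_nav @ p_rel + p_local_origin

# Assumed example: local frame rotated 90 deg about the Up axis of an
# East-North-Up frame, with its origin at (100, 200, 30) in ENU.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
origin_enu = np.array([100.0, 200.0, 30.0])

# A point 10 m along the local x axis lands 10 m North of the origin.
p_abs = to_absolute(np.array([10.0, 0.0, 0.0]), R, origin_enu)
print(p_abs)  # (100, 210, 30) in ENU
```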
The remainder of the paper is organized as
follows. Key aspects of the 3D LADAR-based
navigation are first summarized. The 3D navigation
approach proposed uses planar surfaces as the basis
navigation feature. LADAR imaging technologies
are then discussed. Next, a method for extracting
planar surfaces from LADAR images is developed.
The paper then discusses algorithms for computing
relative 3D position and orientation solution based
on parameters of planar surfaces that are extracted
from scan images. Simulation results and live data test
results are used to initially demonstrate the feasibility
of the 3D plane-based navigation developed. The
paper is concluded by summarizing the main results
achieved.
Fig. 2. Examples of planar surfaces observed in urban images: multiple planes can be extracted for the indoor and outdoor image examples.

Fig. 3. Generic routine of 3D navigation that uses images of a scanning LADAR.

II. 3D LADAR-BASED NAVIGATION

This paper exploits planar surfaces (planes) as the basic feature for the 3D navigation solution. The rationale for the use of planes for navigation in 3D urban environments is that planes are common in man-made environments. To exemplify, Fig. 2 shows typical urban indoor (hallway) and outdoor (urban canyon) images. Multiple planes can be extracted from both images, as illustrated in Fig. 2. Since changes in image feature parameters between two different scans are used for navigation, a feature must be observed in both scans. Feature repeatability is thus essential for LADAR-based navigation. Planar surfaces satisfy this requirement as they are highly repeatable from scan to scan. If a wall of a building stays within the LADAR measurement range, then the plane associated with that wall repeats in the scan images.

Fig. 3 illustrates a generic navigation routine that exploits planar surfaces to derive the navigation solution. A 3D scan image of the environment is obtained by a scanning LADAR. Planes are extracted from LADAR images and used to estimate the navigation solution, which is comprised of changes in LADAR position and orientation between scans. In order to use a planar surface for the estimation of position and orientation changes from one scan to the next, this planar surface must be observed in both scans, and it must be known with certainty that a plane in one scan corresponds to the plane in the next scan. Hence, the feature matching procedure establishes a correspondence between planes extracted from the current scan and planes extracted from previous scans. The navigation routine stores planes extracted from previous scans in a plane list. The plane list is initially populated at the initial scan. If a new plane is observed during one of the following scans, the plane list is updated to include this new plane. In [8], INS data are exploited to match lines extracted from 2D LADAR images for a 2D navigation case. In order to use INS data for plane matching, the line matching algorithms developed in [8] must be extended to the 3D case. Hence, the feature matching procedure has to use position and orientation outputs of the INS to predict plane location and orientation in the current scan based on plane parameters observed in previous scans. If the predicted plane parameters closely match the parameters of a plane extracted from the current scan, a match is declared and the matched plane is used for navigation computations. Note that INS data can also be applied to compensate for LADAR motion during scans in those cases where such motion can introduce significant distortions in the LADAR scan images. Following feature matching, changes in the parameters of planes that are matched between different scans are exploited to estimate the navigation solution. Changes in plane parameters are also applied to periodically recalibrate the INS to reduce drift terms in the inertial navigation outputs, thereby improving the quality of the INS-based plane prediction used by the feature matching procedure.

This paper focuses on the key aspects of the plane-based navigation that are related to LADAR data processing only. Development of the LADAR/INS integrated components will be addressed by future research. Accordingly, the two key questions addressed by the remainder of the paper are: 1) how to extract planes from LADAR scan images, and 2) how to use the parameters of extracted planes to compute the navigation solution. To address these questions, LADAR imaging technologies are discussed first. A method for extracting plane parameters from LADAR measurements is then developed. The use of plane parameters for the estimation of relative position and orientation is finally described.

Aspects of the 3D navigation routine that are related to the LADAR/INS integration will be considered in future development. Particularly, future development will address the use of INS data for feature matching, INS-based compensation of distortions in scan images created by LADAR motion during scans, and LADAR-based INS calibration.
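The INS-aided plane matching step can be sketched as follows: the INS-predicted rotation R (previous to current body frame) and translation t (current-frame origin expressed in the previous frame) map a plane (n, ρ) from the previous scan into the current frame, and a match is declared when predicted and extracted parameters agree within thresholds. The transform derivation, function names, and threshold values below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def predict_plane(n_prev, rho_prev, R, t):
    """Predict plane parameters in the current body frame.

    From n_prev . p_prev = rho_prev and p_prev = R.T @ p_curr + t:
        n_curr = R @ n_prev,   rho_curr = rho_prev - n_prev . t
    """
    return R @ n_prev, rho_prev - float(np.dot(n_prev, t))

def match(n_pred, rho_pred, n_obs, rho_obs, ang_tol_rad=0.05, rng_tol_m=0.2):
    """Declare a plane match if predicted and extracted parameters agree."""
    ang_err = np.arccos(np.clip(np.dot(n_pred, n_obs), -1.0, 1.0))
    return bool(ang_err < ang_tol_rad and abs(rho_pred - rho_obs) < rng_tol_m)

# Assumed scenario: a wall 10 m ahead along x; the LADAR moves 1 m toward
# it between scans with no rotation, so the predicted range drops to 9 m.
n_prev, rho_prev = np.array([1.0, 0.0, 0.0]), 10.0
R, t = np.eye(3), np.array([1.0, 0.0, 0.0])
n_pred, rho_pred = predict_plane(n_prev, rho_prev, R, t)
print(rho_pred)  # -> 9.0
print(match(n_pred, rho_pred, np.array([1.0, 0.0, 0.0]), 9.05))  # True
```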
III. 3D IMAGING TECHNOLOGIES
Various optical approaches exist to obtain 3D
imagery of the environment such as stereo-vision
camera systems, the combination of a digital camera
and projected light from a laser source, flash LADAR
systems, and systems based on a LADAR scanning in
both azimuth and elevation directions.
Flash LADAR sensors consist of a modulated
laser emitter coupled with a focal plane array detector
and the required optics. Similar to a conventional
camera this sensor creates an “image” of the
environment, but instead of producing a 2D image
where each pixel has associated intensity values,
the flash LADAR generates an image where each
pixel measurement consists of an associated range
and intensity value. Current low-cost flash LADAR technology is capable of greater than 100 × 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame rate. Example commercial products are
produced by MESA Imaging, Canesta, Inc., and PMD
Technologies GmbH. These cameras derive the range
by measuring the phase difference (shift) between the
transmitted and received (from the target) signal from
a modulated light source and have a range limitation
determined by the wavelength of the modulation.
Other commercial sensors such as the sensors by
Advanced Scientific Concepts, Inc. (ASC) measure
the time-of-flight of a light pulse to compute distance.
The advantage of all these 3D imaging sensors is
the instantaneous acquisition of all pixels within the
FOV. The disadvantage is their often limited range
and limited FOV. The limited FOV can significantly
limit the availability of features that can be used for
navigation. Note that the limited FOV mainly depends
on the optics used for the camera and that a larger
FOV results in a higher power requirement since the
light source must provide the same light density over
a larger spherical area.
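The range limitation set by the modulation wavelength can be made concrete: a continuous-wave phase-shift sensor cannot distinguish ranges that differ by half the modulation wavelength. A short sketch, where the 20 MHz modulation frequency is an illustrative assumption rather than a vendor specification:

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(mod_freq_hz):
    """Maximum unambiguous range of a phase-shift LADAR:
    half the modulation wavelength, c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

# A 20 MHz modulated source wraps its measured phase every ~7.5 m,
# consistent with the short ranges of flash LADARs noted above.
print(round(unambiguous_range(20e6), 2))  # -> 7.49
```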
3D imaging sensors based on scanning LADARs
are also commercially available, for example, from
Velodyne, AutonoSys, Riegl, and Optech. In contrast
to the flash LADAR sensors, these scanning systems
require a large amount of optics and precise scanning
mechanisms and are, therefore, expensive. Since
these systems are pulsed and have a very narrow
instantaneous FOV, their ranges are longer and
the range accuracy is higher. The FOV of these
sensors is, furthermore, determined by the scanning
mechanism and is in general much larger (as large as
360 deg). This type of scanner is designed primarily
for mapping applications. The scan rate is generally
slow (from a few seconds to a few minutes per FOV) due
to extensive scans at different elevation angles, which
is not required for navigation applications as shown in
the following sections.
This paper proposes a low-cost alternative to
existing 3D scanning LADARs in order to develop
Fig. 4. Zero elevation scan: lines observed in scan image are
created by intersection of LADAR scanning beam with planar
surfaces such as building walls.
and verify 3D navigation methods. An inexpensive 2D
scanning LADAR (SICK LMS-200) is augmented by
a low-cost servo motor that enables LADAR rotations
in a limited elevation range. The elevation range is
chosen to allow for plane reconstruction as described
in the next section. The 3D navigation methods
described in this paper are also developed to meet the
UAV payload requirements, since the limited elevation scan range allows for a simple and light sensor design and requires only limited processing power for the LADAR data.
2D LADAR sensor imagery has been previously
considered for 3D plane reconstruction in mapping
applications. Particularly, in [11], 2D LADAR images
are used to construct planar maps of indoor office
environments. Specifically, [11] employs an upward
looking 2D LADAR that is mounted on a robotic
vehicle. Planar surfaces are extracted from multiple
LADAR images that are collected as the robot moves
through the indoor hallway. While [11] performs 3D mapping, the navigation task is still carried out in
two dimensions using data of a 2D forward-looking
LADAR. As mentioned previously, the focus of this
paper is 3D autonomous navigation as opposed to
3D mapping. Hence, the plane extraction method
described in the following section is not optimized for
mapping purposes but for estimation of the UAV’s
3D navigation solution from the changes in plane
parameters between scans.
IV. PLANE RECONSTRUCTION USING 2D LADAR
ROTATIONS IN A LIMITED ELEVATION RANGE
This paper proposes the use of a 2D LADAR that
is rotated in a limited elevation range for 3D plane
reconstruction. A 2D LADAR first performs a scan at
zero elevation as shown in Fig. 4.
The LADAR scanning beam intersects with a
planar surface created, for example, by a wall of
a building. A line is obtained in the scan image
as a result of this intersection. This line can be
extracted from the scan image using line extraction
techniques such as the ones reported in [12]. One line
is obviously insufficient for the plane reconstruction
since this line can belong to multiple planes as
illustrated in Fig. 5.
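The line extraction step can be sketched with a minimal total-least-squares fit to the 2D scan points; this is a stand-in for the techniques of [12], which additionally handle segmentation of the scan into wall segments and rejection of outliers:

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares line fit to 2D scan points.

    Returns (n, d): unit normal n and offset d such that n . p = d for
    points p on the line (the same normal-form parameterization used
    for planes later in the paper, restricted to 2D).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The line direction is the principal axis of the centered points;
    # the normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(np.dot(n, centroid))

# Points sampled along a wall at y = 2 m in the zero-elevation scan plane
wall = [(x, 2.0) for x in np.linspace(-3.0, 3.0, 20)]
n, d = fit_line_tls(wall)
print(np.abs(n), abs(d))  # normal ~ (0, 1), offset ~ 2
```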
Fig. 5. Zero elevation scan: multiple planes can be fit through a single line extracted from the zero elevation scan; hence, one scan is insufficient for plane reconstruction.

Fig. 6. First elevated scan: a second intersect line is obtained for each planar surface in the LADAR FOV; fictitious planes can still exist since a plane can be fit through two lines that belong to different real planes.

Fig. 7. Second elevated scan: a third intersect line is extracted from the LADAR scan image; use of this line allows for removal of fictitious planes.
The LADAR is thus elevated and a second scan
is taken as shown in Fig. 6. Two intersect lines are
obtained after the elevated scan is performed: 1)
intersection of the planar surface with the nonelevated
LADAR scanning plane (Fig. 4) and 2) intersection of
the planar surface with the elevated LADAR scanning
plane (Fig. 6). These two lines are applied to the
plane reconstruction. A plane reconstruction that is
solely based on two lines can still be ambiguous.
Particularly, if there is a second planar surface present
within the FOV of the LADAR, a fictitious plane can
be fit through two lines that belong to different real
planes as illustrated in Fig. 6. Hence, information
contained in two LADAR images is insufficient to
separate real and fictitious planes.
A third scan (second elevated scan) is taken to
resolve the plane reconstruction ambiguity. Fig. 7
illustrates the second elevated scan. A third intersect
line is extracted from the third scan image. This line
belongs to the real plane but does not belong to the
fictitious plane. The fictitious plane is thus removed,
which completes the plane reconstruction.
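The ambiguity-removal step above can be sketched as a residual test: a candidate plane fit through the first two intersect lines is kept only if the third scan's line also lies on it. The fitting method, tolerance, and geometry below are illustrative assumptions:

```python
import numpy as np

def plane_from_points(points):
    """Least-squares plane through 3D points: unit normal n and offset d
    such that n . p = d (normal form, as in the plane equation later)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(np.dot(n, centroid))

def line_on_plane(line_pts, n, d, tol=0.05):
    """True if every sampled point of an intersect line lies within
    tol meters of the candidate plane."""
    residuals = np.abs(np.asarray(line_pts, dtype=float) @ n - d)
    return bool(np.all(residuals < tol))

# Real wall at x = 5 m: intersect lines from the zero-elevation scan (z = 0)
# and the first elevated scan (z = 1) define the candidate plane.
line0 = [(5.0, y, 0.0) for y in np.linspace(-2.0, 2.0, 10)]
line1 = [(5.0, y, 1.0) for y in np.linspace(-2.0, 2.0, 10)]
candidate_n, candidate_d = plane_from_points(line0 + line1)

# The third (second elevated) scan confirms the real plane...
line2_real = [(5.0, y, 2.0) for y in np.linspace(-2.0, 2.0, 10)]
print(line_on_plane(line2_real, candidate_n, candidate_d))  # True
# ...but would reject a fictitious plane whose third line lies elsewhere.
line2_fake = [(4.0, y, 2.0) for y in np.linspace(-2.0, 2.0, 10)]
print(line_on_plane(line2_fake, candidate_n, candidate_d))  # False
```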
The above consideration demonstrates that
three consecutive LADAR scans (zero elevation
scan and two elevated scans) are sufficient for the
reconstruction of planar surfaces. A formal description
of the reconstruction procedure is offered next. Fig. 8
illustrates the LADAR body frame. Fig. 9 represents a
planar surface. In Fig. 9, n is the plane normal vector, which is the unit vector that originates from the LADAR body frame origin perpendicular to the planar surface; ρ is the plane range, which is the closest distance from the body-frame origin to the plane; θ is the plane tilt angle, which is the angle between the plane normal vector and the x_b, y_b plane; α is the plane azimuth angle, which is the angle between the projection of n on the x_b, y_b plane and the x_b axis. Note that the plane normal vector is related to the plane angular parameters (azimuth and tilt angles) as follows:

$$ \mathbf{n} = \begin{bmatrix} \cos(\alpha)\cos(\theta) \\ \sin(\alpha)\cos(\theta) \\ \sin(\theta) \end{bmatrix}. \qquad (1) $$

A plane can also be represented by its normal point, where the normal point is the intersection of the plane and a line originating from the LADAR location perpendicular to the plane of interest.

Equation (2) formulates the plane equation in Cartesian coordinates:

$$ x_b\cos(\alpha)\cos(\theta) + y_b\sin(\alpha)\cos(\theta) + z_b\sin(\theta) = \rho. \qquad (2) $$
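As a concrete check of (1) and (2), the sketch below builds the normal vector from the azimuth and tilt angles and verifies that the normal point ρ·n satisfies the plane equation (the function names and numeric values are illustrative):

```python
import numpy as np

def plane_normal(azimuth, tilt):
    """Unit normal vector n from the plane azimuth and tilt angles, per (1)."""
    return np.array([
        np.cos(azimuth) * np.cos(tilt),
        np.sin(azimuth) * np.cos(tilt),
        np.sin(tilt),
    ])

def plane_equation_lhs(point, azimuth, tilt):
    """Left-hand side of the Cartesian plane equation (2);
    equals the plane range rho for any point on the plane."""
    return float(np.dot(point, plane_normal(azimuth, tilt)))

# The normal point rho * n lies on the plane, so (2) evaluates to rho there.
azimuth, tilt, rho = np.deg2rad(30.0), np.deg2rad(10.0), 25.0
normal_point = rho * plane_normal(azimuth, tilt)
print(round(plane_equation_lhs(normal_point, azimuth, tilt), 6))  # -> 25.0
```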