Original article
J. King Saud Univ. Sci. 2022;34:102298. doi: 10.1016/j.jksus.2022.102298

GAN based simultaneous localization and mapping framework in dynamic environment

Academic Affairs Office, Xi'an Technological University, Xi'an 710021, China
School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710021, China
School of Electronic Information Engineering, Xi'an Technological University, Xi'an 710021, China
Library, Xi'an Technological University, Xi'an 710021, China
School of Computer Science and Engineering, Xi'an Technological University, Xi'an 710021, China

⁎Corresponding author at: Academic Affairs Office, School of Mechatronic, Xi'an Technological University, Xi'an 710021, China. bosun12021@163.com (Bo Sun)

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Peer review under responsibility of King Saud University.

Abstract

Objective

Because of interference from dynamic objects, the traditional simultaneous localization and mapping (SLAM) framework performs poorly when operating in a dynamic environment. The Dynamic-SLAM model is introduced by exploiting the merits of deep learning in object detection. At the semantic level, the dynamic objects in each new detection are identified with an SSD object detector to construct prior knowledge. Time synchronization and conversion calibration between the radar and the LiDAR enable the multi-sensor fusion. Localizing with a camera in a dynamic environment is more challenging because the localization process takes place over moving segments, which results in unstable pose estimation. The large amount of visual point cloud information and the high precision of the laser radar enhance the accuracy of real-time positioning, yielding a grid map and a 3-D point cloud map.

Methods

Here, we propose a Generative Adversarial Network (GAN) with the Aquila Optimizer (AO) for moving object detection. The LiDAR measurements verify the radar outcomes. The targeted moving objects are identified via the Doppler velocity from the radar, and their exact location and mass are estimated with the LiDAR and the proposed GAN-AO approach. The GAN-based AO approach is used to segment the objects inside the point clouds. The point clouds are arranged within a given range so that the number of vertical points is multiplied by the number of laser channels. If the same object is identified, the angles between the image vectors are analyzed and the points of that object are labeled identically. In addition, velocity compensation is applied to estimate the actual moving target in the world frame, because the velocity estimated by the mmW-radar is the radial component with respect to the sensor.

Results

The investigation compares the proposed method with state-of-the-art methods, namely Deep Learning (DL), Generative Adversarial Network (GAN), Artificial Neural Network (ANN) and Deep Neural Network (DNN). The proposed method provides 93.8% detection accuracy, higher than the existing DL, ANN, GAN and DNN methods.

Conclusions

Compared with the state-of-the-art techniques, the proposed method demonstrates superior performance in terms of tracking, detection, root mean square error (RMSE) and accuracy.

Keywords

Localization
Mapping
Cloud
Generative Adversarial Network and Aquila Optimizer

1 Introduction

Visual Synchronization Localization is a process in wireless sensor networks for determining the location of sensor nodes. Localization algorithms are designed for the field in which the sensors are deployed (Ding et al., 2018; Zhang et al., 2019; Labbé and Michaud, 2019; Fuentes-Pacheco et al., 2015; Tsintotas et al., 2022). Some sensor nodes are static and some are dynamic, and localization algorithms are built according to their nature. Localizing with a camera (Zheng et al., 2022; Ruan et al., 2022; Chen et al., 2021; Wang et al., 2021a; Wang et al., 2021b; Wang et al., 2021c; Feng, 2021; Lee et al., 2021; Zhao et al., 2022; Sodhi et al., 2022) in a dynamic environment is more challenging because the localization takes place over moving segments and leads to unstable pose estimation. In this work, we use a GAN with the Aquila Optimizer for Visual Synchronization Localization and Mapping based on the visual point cloud information and the laser in a dynamic environment (Shastri et al., 2022; Lindqvist et al., 2021; Venator et al., 2021; Rajendran et al., 2021; Wang et al., 2021a; Wang et al., 2021b; Wang et al., 2021c; Shin et al., 2019). The proposed scheme uses laser and visual point cloud information (Zhao et al., 2022; Ghaffari et al., 2019) to support Visual Synchronization Localization and to optimize the pose. The large amount of visual point cloud information and the high precision of the laser radar enhance the accuracy of real-time positioning, yielding a grid map and a 3-D point cloud map (Morales and Kassas, 2017).

To improve the performance of Visual Synchronization Localization, we combine radar and LiDAR (Schultz and Zarzycki, 2021) to localize the dynamic objects effectively. The static objects are canceled via the LiDAR measurements, and the visual point clouds from the LiDAR are segmented. The filtered visual point clouds are the input data for the LiDAR-based Visual Synchronization Localization (Shao et al., 2015). Our proposed method works efficiently in real time. The accuracy and robustness of Visual Synchronization Localization are improved by the laser and visual point cloud information, while the dynamic objects are efficiently identified (Yu et al., 2020; Xiao et al., 2019). The major contributions of this study are summarized as follows:

  • The dynamic objects in each new detection are identified with an SSD object detector to construct prior knowledge at the semantic level.

  • For moving object detection, we use a Generative Adversarial Network (GAN) with the Aquila Optimizer.

  • The Doppler velocity from the radar detects the targeted moving objects, and their exact location and mass are estimated with the LiDAR and the proposed GAN-AO approach.

The remaining sections of this article are organized as follows: Section 2 reviews the related literature. Section 3 explains the system model, and the GAN with AO is formulated in Section 4. Section 5 delineates the proposed methodology, and the experimental results are discussed in Section 6. Finally, Section 7 concludes the article.


2 Related works

By fusing mmW-radar and LiDAR, Dang et al. (2020) suggested an effective method for reducing the impact of dynamic environments on SLAM. The mapping and localization accuracy was enhanced. Exploiting the Doppler effect, moving objects were eliminated via localization and segmentation with efficient moving object detection. Different real-world scenarios were used to validate the performance of moving object elimination. Calibration and synchronization provided quick dynamic object detection. The resulting point clouds, with moving objects removed, were fed to SLAM.

A robust visual-LiDAR simultaneous localization and mapping system was suggested by Qian et al. (2021) for unmanned aerial vehicles. Plane features and more stable line features are extracted from the point clouds via clustering. A least-squares iterative closest point algorithm calculates the relative pose between consecutive frames. At a lower frequency, the texture information is combined with the 3-D map to refine the pose estimation. For unmanned aerial vehicles (UAV), more precise and robust mapping and localization were achieved, though at a higher cost.

A robust framework was suggested by Wang et al. (2021a), Wang et al. (2021b), Wang et al. (2021c) for simultaneous localization and mapping with multiple non-repetitive scanning LiDARs. The map alignment configures the transformation between two LiDARs based on the rigidity assumption of the geometric structure. The original information from the various LiDARs is time-synchronized, and all feature candidates are used when estimating the LiDAR odometry. For enhanced loop detection, a novel place descriptor is integrated and the dynamic objects are removed. Experimental results verify the performance under large motion and feature-less scenarios.

Xiao et al. (2019) suggested deep learning (DL) based semantic monocular visual localization and mapping in a dynamic environment. In a separate thread, a selective tracking algorithm processes the feature points of dynamic objects, thereby constructing a feature-oriented visual SLAM system. Compared with the original SSD network, the recall rate of the system is raised from 82.3 % to 99.8 %. In a real-world dynamic environment, an accurate environmental map was built and localized. However, the approach was time-consuming in more complex environments.


3 System model

The system model of the suggested method is depicted in Fig. 1. The multi-sensor fusion is achieved through time synchronization and conversion calibration between the radar and the LiDAR. Dense and precise point clouds are produced by the LiDAR (Nguyen and Le, 2013). Segmentation using the GAN-AO approach is then conducted to detect the objects in the surroundings. The targeted objects are analyzed and detected using this approach, and the outcomes are collected on the radar. The radar outcomes are then checked against the LiDAR measurements, so the ghost targets are filtered effectively (Dubayah and Drake, 2000). The targeted moving objects are detected via the Doppler velocity from the radar, and their exact location and mass are estimated with the LiDAR and the proposed GAN-AO segmentation approach.

Fig. 1 System model.


4 Formulation of Generative Adversarial Network and Aquila Optimizer

This section describes the background of the adopted GAN and AO approaches for segmentation.


4.1 Generative Adversarial Network (GAN)

The discriminator D and generator G are the two sub-networks of the GAN. During training, the generator attempts to produce data representations that resemble the ground truth, while the discriminator attempts to distinguish the generated data from the true ground-truth information. According to the data distribution $P_{data}$, the binary segmentation map $Y$ is produced by the generator sub-network through the map $F : X \to Y$, where $Y$ also denotes the ground-truth mask. The discriminator maps its input, the pair $(X, Y)$ of data and segmentation map, to a value between 0 and 1.

Equation (1) expresses the GAN objective function for segmentation:

(1)
$\ell_{GAN}(F, D) = \mathbb{E}_{X,Y \sim P_{data}(X,Y)}\left[\log D(X, Y)\right] + \mathbb{E}_{X \sim P_{data}(X)}\left[\log\left(1 - D(X, F(X))\right)\right]$

To make the right decision, the discriminator sub-network $D$ is trained by maximizing $D(X, Y)$ and minimizing $D(X, F(X))$, while the generator sub-network $F$ is trained to produce outputs that fool the discriminator. An auxiliary binary cross-entropy loss $\ell_{A}(F)$ is added to the GAN objective function (Abdollahi et al., 2021):

(2)
$\ell_{A}(F) = \mathbb{E}_{X,Y \sim P_{data}(X,Y)}\left[-Y \cdot \log F(X) - (1 - Y) \cdot \log\left(1 - F(X)\right)\right]$

The GAN objective function with the optimal result is delineated as

(3)
$F^{*} = \arg\min_{F}\max_{D}\;\ell_{GAN}(F, D) + \gamma\,\ell_{A}(F)$

Here, $\gamma$ is the weighting parameter. The generator progressively increases the spatial resolution of the generated map, and the final probability outputs are obtained through the forward and backward passes. To enhance the segmentation capability, the Aquila Optimizer is adopted to tune the parameters. A minimal sketch of the combined objective is given below.
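The following is a minimal sketch of the adversarial segmentation objective of Eqs. (1)-(3), assuming PyTorch; the generator F and discriminator D are placeholder callables whose architectures are not specified here, the generator output Y_fake = F(X) is assumed to be sigmoid probabilities in [0, 1], and the 1e-8 terms are added only for numerical stability.

```python
import torch
import torch.nn.functional as func

def discriminator_loss(D, X, Y_true, Y_fake):
    # D maximizes log D(X, Y) + log(1 - D(X, F(X))), so we minimize the negative.
    real = D(X, Y_true)              # probability that (X, Y) is a true pair
    fake = D(X, Y_fake.detach())     # detach so only D's parameters are updated
    return -(torch.log(real + 1e-8) + torch.log(1.0 - fake + 1e-8)).mean()

def generator_loss(D, X, Y_true, Y_fake, gamma=1.0):
    # Non-saturating adversarial term plus the auxiliary BCE term of Eq. (2),
    # weighted by gamma as in Eq. (3).
    adv = -torch.log(D(X, Y_fake) + 1e-8).mean()
    bce = func.binary_cross_entropy(Y_fake, Y_true)
    return adv + gamma * bce
```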


4.2 Aquila optimization

The parameters of the GAN are tuned using the Aquila Optimizer (AO) (AlRassas et al., 2021), which is a swarm-based approach. The set of candidate solutions from which the best solution is estimated is given by,

(4)
$H = \begin{bmatrix} h_{1,1} & \cdots & h_{1,j} & \cdots & h_{1,Dim-1} & h_{1,Dim} \\ h_{2,1} & \cdots & h_{2,j} & \cdots & \cdots & h_{2,Dim} \\ \vdots & \vdots & h_{i,j} & \vdots & \vdots & \vdots \\ h_{N-1,1} & \cdots & h_{N-1,j} & \cdots & \cdots & h_{N-1,Dim} \\ h_{N,1} & \cdots & h_{N,j} & \cdots & h_{N,Dim-1} & h_{N,Dim} \end{bmatrix}$

The dimensionality of the segmentation problem in dynamic object detection is given as Dim, and N is the total number of candidate solutions. The best solution is determined from H, and the value attained by the ith solution is represented as $H_i$. The population is initialized as,

(5)
$H_{ij} = rand \times \left(U_j - L_j\right) + L_j, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, Dim$

The randomly generated number rand lies in the range 0 to 1, $L_j$ is the jth lower bound, and $U_j$ is the jth upper bound.
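Below is a minimal sketch of the population initialization of Eqs. (4)-(5), assuming NumPy; the population size, dimensionality and bounds are illustrative values only.

```python
import numpy as np

def init_population(n_solutions, dim, lower, upper, rng=np.random.default_rng(0)):
    # H_ij = rand * (U_j - L_j) + L_j with rand ~ Uniform(0, 1), Eq. (5)
    return rng.random((n_solutions, dim)) * (upper - lower) + lower

# Example: 20 candidate solutions in a 4-dimensional search space
lower = np.zeros(4)
upper = np.ones(4)
H = init_population(20, 4, lower, upper)
```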

Numerical expression of AO

The hunting behavior of the AO can be classified as (i) expanded exploration, (ii) encircling, (iii) expanded exploitation, and (iv) narrowed exploitation.

  (i) Expanded exploration (H1)

The targeted ghosts are chosen following the characteristic behavior of the Aquila selecting the area for its hunt,

(6)
$H_1(t+1) = H_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(H_M(t) - H_{best}(t)\right) \times rand$

The solution obtained after the first search update is denoted $H_1(t+1)$, and the best solution found so far is $H_{best}(t)$, which is used to analyze the exact object. The exploration is managed by the term $\left(1 - \frac{t}{T}\right)$ over the iterations, and $H_M(t)$ is the mean position of the current solutions, evaluated as,

(7)
$H_M(t) = \frac{1}{N}\sum_{i=1}^{N} H_i(t), \quad j = 1, 2, \ldots, Dim$

The maximum number of iterations is represented as T.
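As a minimal sketch (assuming NumPy), the expanded-exploration step of Eqs. (6)-(7) can be written as follows; H is the current population, H_best the best solution so far, t the current iteration and T the maximum number of iterations.

```python
import numpy as np

def expanded_exploration(H, H_best, t, T, rng=np.random.default_rng(0)):
    H_mean = H.mean(axis=0)                 # H_M(t) of Eq. (7)
    rand = rng.random(H.shape[1])
    # Eq. (6): H1(t+1) = H_best * (1 - t/T) + (H_M(t) - H_best) * rand
    return H_best * (1.0 - t / T) + (H_mean - H_best) * rand
```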


4.3 Encircling (H2)

According to the behavior of the Aquila encircling and attacking its prey, the targeted ghosts are encircled and analyzed for detection. This is termed the Aquila contour flight with a short glide attack and is given as,

(8)
$H_2(t+1) = H_{best}(t) \times Levy(D) + H_R(t) + (y - x) \times rand$

The solution produced by the encircling process is $H_2(t+1)$. $Levy(D)$ denotes the levy flight distribution function, D is the dimensionality of the space, and $H_R(t)$ is a solution selected at random from the range 1 to N. The levy flight distribution function is estimated as,

(9)
$Levy(D) = \sigma \times \frac{u \times \beta}{\left|v\right|^{\frac{1}{\delta}}}$

Here, $\sigma$ is a constant value, and u and v are randomly selected numbers between 0 and 1. The value of $\beta$ is calculated as,
(10)
$\beta = \frac{\Gamma\left(1 + \delta\right) \times \sin\left(\frac{\pi\delta}{2}\right)}{\Gamma\left(\frac{1 + \delta}{2}\right) \times \delta \times 2^{\frac{\delta - 1}{2}}}$

Here, $\delta = 1.6$, and the constant values x and y are evaluated as,
(11)
$y = a \times \cos(\theta), \quad x = a \times \sin(\theta)$

where $a = r_1 + U \times D_1$ and $\theta = -\omega \times D_1 + \theta_1$ with $\theta_1 = \frac{3\pi}{2}$; $r_1$ takes values between 1 and 20, U is a constant equal to 0.00564, $D_1$ contains the integer values from 1 to Dim, and the constant $\omega$ is 0.006.
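The following is a minimal sketch of the contour-flight (encircling) update of Eqs. (8)-(11), assuming NumPy and Python's math.gamma; the value of the constant sigma in Eq. (9) is not given in the text, so 0.01 is used here purely as an illustrative assumption.

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(dim, delta=1.6, sigma=0.01, rng=np.random.default_rng(0)):
    # Eq. (10): beta depends only on delta
    beta = (gamma(1 + delta) * sin(pi * delta / 2)
            / (gamma((1 + delta) / 2) * delta * 2 ** ((delta - 1) / 2)))
    u, v = rng.random(dim), rng.random(dim)
    # Eq. (9): Levy(D) = sigma * u * beta / |v|^(1/delta)
    return sigma * u * beta / np.abs(v) ** (1.0 / delta)

def encircling(H_best, H_random, dim, r1=10, rng=np.random.default_rng(0)):
    d1 = np.arange(1, dim + 1)                   # D1: integer values 1..Dim
    theta = -0.006 * d1 + 3 * np.pi / 2          # omega = 0.006, theta1 = 3*pi/2
    a = r1 + 0.00564 * d1                        # r1 in [1, 20], U = 0.00564
    x, y = a * np.sin(theta), a * np.cos(theta)  # Eq. (11)
    # Eq. (8): H2(t+1) = H_best * Levy(D) + H_R(t) + (y - x) * rand
    return H_best * levy_flight(dim, rng=rng) + H_random + (y - x) * rng.random(dim)
```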


4.4 Expanded exploitation (H3)

This stage performs the segmentation of objects based on the identified exact location, slowly and carefully. It is determined as,

(12)
$H_3(t+1) = \left(H_{best}(t) - H_M(t)\right) \times \alpha - rand + \left(\left(U - L\right) \times rand + L\right) \times \upsilon$

The solution obtained in the third stage is $H_3(t+1)$. The parameters used to adjust the exploitation stage are denoted $\alpha$ and $\upsilon$ and are set to 0.1. The lower and upper boundaries of the problem are denoted L and U.
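A minimal sketch of the expanded-exploitation update of Eq. (12), assuming NumPy; alpha and upsilon follow the value 0.1 quoted in the text, and lower/upper are the problem bounds.

```python
import numpy as np

def expanded_exploitation(H, H_best, lower, upper, alpha=0.1, upsilon=0.1,
                          rng=np.random.default_rng(0)):
    H_mean = H.mean(axis=0)                      # H_M(t)
    rand = rng.random(H.shape[1])
    # Eq. (12): (H_best - H_M) * alpha - rand + ((U - L) * rand + L) * upsilon
    return (H_best - H_mean) * alpha - rand + ((upper - lower) * rand + lower) * upsilon
```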


4.5 Narrowed exploitation (H4)

In this stage, object segmentation is performed with the help of a walk-and-grab-prey strategy. It is evaluated as,

(13)
$H_4(t+1) = F \times H_{best}(t) - \left(G_1 \times H(t) \times rand\right) - G_2 \times Levy(D) + rand \times G_1$

The quality function is denoted F and the solution obtained in this stage is $H_4(t+1)$. $G_1$ is the motion tracker and $G_2$ is the flight slope between the first and last locations.
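As a minimal sketch of the narrowed-exploitation update of Eq. (13) (assuming NumPy): the expressions for the quality function F, the motion tracker G1 and the flight slope G2 are not given in the text, so the forms below follow the original Aquila Optimizer formulation and are assumptions; levy is a levy-flight step such as the one sketched for Eq. (9).

```python
import numpy as np

def narrowed_exploitation(H_current, H_best, levy, t, T, rng=np.random.default_rng(0)):
    qf = t ** ((2 * rng.random() - 1) / (1 - T) ** 2)  # quality function F (assumed form)
    g1 = 2 * rng.random() - 1                          # G1: motion tracker in [-1, 1] (assumed)
    g2 = 2 * (1 - t / T)                               # G2: flight slope, decreasing with t (assumed)
    rand = rng.random(H_current.shape[0])
    # Eq. (13): F * H_best - (G1 * H(t) * rand) - G2 * Levy(D) + rand * G1
    return qf * H_best - g1 * H_current * rand - g2 * levy + rand * g1
```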

To enhance the segmentation accuracy of the GAN, we merge it with the AO approach; Fig. 2 demonstrates the proposed segmentation method and its working mechanism, and a high-level sketch of this coupling follows the figure.

Fig. 2 Proposed GAN-AO segmentation approach.
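The following high-level sketch (assuming NumPy) illustrates one way AO can tune GAN training parameters as in Fig. 2. Which GAN parameters are tuned, and the train_gan/evaluate_gan routines, are not specified in the text and are hypothetical placeholders here; the candidate-update line is likewise only a stand-in for the four AO moves of Eqs. (6), (8), (12) and (13).

```python
import numpy as np

def ao_tune_gan(train_gan, evaluate_gan, lower, upper, n_solutions=10, T=20):
    rng = np.random.default_rng(0)
    # Eq. (5): each row encodes one candidate parameter set (e.g. gamma, learning rate)
    H = rng.random((n_solutions, lower.size)) * (upper - lower) + lower
    best, best_fit = None, np.inf
    for t in range(1, T + 1):
        for i in range(n_solutions):
            fitness = evaluate_gan(train_gan(H[i]))   # e.g. validation segmentation error
            if fitness < best_fit:
                best, best_fit = H[i].copy(), fitness
        # Placeholder move: in the full method the candidates would be updated with
        # the exploration/exploitation rules of Eqs. (6), (8), (12) and (13).
        H = np.clip(H + rng.normal(scale=0.05, size=H.shape), lower, upper)
    return best, best_fit
```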


5 Proposed GAN-AO for the elimination of moving objects

This section expresses the process of Generative Adversarial Network and Aquila Optimizer for the elimination of moving objects.


5.1 Point cloud segmentation

During the segmentation process, the ground plane is removed from the point clouds. This estimate is inaccurate and does not match the SLAM output (Bailey and Durrant-Whyte, 2006), hence the GAN-based AO approach is used to segment the objects inside the point clouds. The point clouds are arranged within a given range so that the number of vertical points is multiplied by the number of laser channels. If the same object is identified, the angles between the image vectors are analyzed and the points belonging to that object are labeled identically.
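A minimal sketch of arranging a LiDAR sweep into a channel-by-azimuth range image for this segmentation, assuming NumPy and a 16-channel sensor (VLP-16); the horizontal resolution of 0.2° is an illustrative assumption.

```python
import numpy as np

def to_range_image(points, rings, n_channels=16, horiz_res_deg=0.2):
    """points: (N, 3) xyz coordinates; rings: (N,) integer laser channel per point."""
    n_cols = int(360.0 / horiz_res_deg)
    image = np.zeros((n_channels, n_cols), dtype=np.float32)
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    cols = np.minimum((azimuth / horiz_res_deg).astype(int), n_cols - 1)
    ranges = np.linalg.norm(points, axis=1)
    image[rings, cols] = ranges   # one range value per (channel, azimuth) cell
    return image
```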


5.2 Filtering outcomes from the radar

The objects detected by the radar include noise because of the propagation characteristics and multi-path effect of electromagnetic waves. Hence, to counter the negative impact of false alarms, a verification strategy based on the LiDAR measurements is applied to detect the targeted ghosts. The targeted points are converted into the LiDAR coordinate frame based on the calibration (Dawkins et al., 2001) outcomes.
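A minimal sketch of this verification step, assuming NumPy; T_lidar_radar denotes the extrinsic calibration from the radar frame to the LiDAR frame, and the 0.5 m support radius is an illustrative assumption.

```python
import numpy as np

def verify_radar_targets(radar_pts, lidar_pts, T_lidar_radar, radius=0.5):
    """radar_pts: (M, 3) targets in the radar frame; lidar_pts: (N, 3) LiDAR points."""
    homog = np.hstack([radar_pts, np.ones((radar_pts.shape[0], 1))])
    in_lidar = (T_lidar_radar @ homog.T).T[:, :3]   # radar targets in the LiDAR frame
    keep = np.empty(in_lidar.shape[0], dtype=bool)
    for i, p in enumerate(in_lidar):
        dists = np.linalg.norm(lidar_pts - p, axis=1)
        keep[i] = np.any(dists < radius)            # a ghost target has no LiDAR support
    return keep
```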


5.3 Association of data

Velocity compensation is applied to estimate the actual moving target in the world frame, because the velocity estimated by the mmW-radar is the radial component with respect to the sensor. The Doppler velocity (Berger, 1957) $V_t^{sensor}$ is acquired from the radar for each target. The absolute velocity $V_t^{A}$ of the target with respect to the current static frame is then determined by integrating the sensor velocity $V_{sensor}^{world}$ with the outcome of SLAM, and is given as,

(14)
$V_t^{A} = P_{world}^{Sensor}\,V_{sensor}^{world} + V_t^{sensor}$

Here, $P_{world}^{Sensor}$ denotes the inverse of the current sensor pose in the world frame. The moving status of the object can then be evaluated.
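A minimal sketch of this compensation, assuming NumPy; R_world_sensor is the rotation of the sensor in the world frame estimated by SLAM, v_sensor_world the sensor's own velocity in the world frame, and the radar target is described by its Doppler (radial) speed and the unit direction from the sensor to the target.

```python
import numpy as np

def absolute_velocity(R_world_sensor, v_sensor_world, doppler_speed, direction_sensor):
    # Target velocity observed in the sensor frame (radial component only)
    v_t_sensor = doppler_speed * direction_sensor
    # Eq. (14): apply the inverse sensor pose (here the transposed rotation) to the
    # sensor velocity and add the radar-measured target velocity
    return R_world_sensor.T @ v_sensor_world + v_t_sensor

# Illustrative usage with an identity orientation
v_abs = absolute_velocity(np.eye(3), np.array([1.0, 0.0, 0.0]),
                          doppler_speed=2.0,
                          direction_sensor=np.array([0.0, 1.0, 0.0]))
```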


6 Experimental results

This section validates the performance of the GAN-based Aquila Optimizer for dynamic object detection. A wheeled robot is equipped with an mmW-radar (Delphi ESR) and a LiDAR (VLP-16), and the evaluation follows the TUM RGB-D benchmark dataset (Sturm et al., 2012). The LiDAR has a 30°×360° field of view (FOV) and a 100 m detection range, a 90° FOV at mid-range is contributed by the radar, and the scenarios are recorded through a camera. A series of experiments uses the open-source LOAM code as the baseline, and real-time implementations are performed using the Robot Operating System (ROS). Fig. 3 shows the sample dataset images.

Fig. 3 Sample dataset images based on dynamic moving object detection.

The performance analysis based on the tracking and detection results for the various experimental conditions is plotted in Fig. 4. Tracking and detection performance is evaluated under eight experimental conditions. Following the enhancement measures, the transplanted results are evaluated on the RGB-D benchmark dataset. The investigation shows that as the experimental conditions increase, the detection and tracking results decrease.

Fig. 4 Performance analysis based on tracking and detection results.

The root mean square error (RMSE) performance analysis is delineated in Fig. 5. The experimental conditions vary from 1 to 8 with varying error values. The RMSE values obtained for conditions 1 to 8 are 0.23, 0.018, 0.20, 0.22, 0.21, 0.21, 0.25 and 0.15, respectively.

Fig. 5 Performance analysis in terms of RMSE.

Table 1 presents the operational performance evaluation based on three examination time results for ORB-SLAM2, Dynamic SLAM and the improvement. Compared with ORB-SLAM2, the dynamic SLAM accuracy is 7.84 % when taking the RMSE as the standard.

Table 1 Three examination times for operation performance evaluation.
Time (s) ORB-SLAM2 Dynamic SLAM Improvements
Median 0.50 0.044 12 %
Mean 0.50 0.045 10 %
Total 42.90 38.40 10.49 %

Fig. 6 presents the performance analysis based on the number of frames. Three conditions are considered in this experiment: static, dynamic, and both static and dynamic. The average number of feature points in each keyframe is 3090, which highlights the proportions of dynamic feature points. Proportions of 47.96 % static feature points and 74.11 % dynamic feature points are obtained.

Fig. 6 Performance analysis based on number of frames with varying numbers.

The comparative analysis of accuracy is presented in Table 2. The investigation compares the proposed method (Bojaj et al., 2021; anghera et al., 2021) with state-of-the-art methods, namely Deep Learning (DL) (Xiao et al., 2019), Generative Adversarial Network (GAN), Artificial Neural Network (ANN) and Deep Neural Network (DNN). The proposed method provides 93.8 % detection accuracy, higher than the existing DL, ANN, GAN and DNN methods.

Table 2 Comparative analysis of accuracy.
Methods Accuracy
DL 86 %
GAN 90 %
DNN 90.4 %
ANN 89.45 %
Proposed 93.8 %


7 Conclusion

In this paper, Visual Synchronization Localization and mapping with dynamic object detection is performed using laser and visual point cloud information. Localizing with a camera in a dynamic environment is more challenging because the localization takes place over moving segments and leads to unstable pose estimation. To overcome this problem, the proposed work uses a GAN with the Aquila Optimizer in a dynamic environment. Three conditions are considered in the experiments: static, dynamic, and both static and dynamic. The average number of feature points in each keyframe is 3090, which highlights the proportions of dynamic feature points; proportions of 47.96 % static feature points and 74.11 % dynamic feature points are obtained. The proposed technique offers 93.8 % detection accuracy, higher than the existing DL, ANN, GAN and DNN methods. In future research, other metrics, namely execution time and computational cost, will be considered.

Funding

Key R & D plan project of Shaanxi Province in 2021 “digital evaluation system of whole dentition based on coded structured light measurement” (Item No: 2021GY-005).

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. , , , , , . Improving road semantic segmentation using generative adversarial network. IEEE Access. 2021;9:64381-64392.
  2. , , , , , , , . Optimized ANFIS model using Aquila Optimizer for oil production forecasting. Processes. 2021;9(7):1194.
  3. , , , , , . Assessment and role of social media in dental education. SPR. 2021;1(2):52-57.
  4. , , . Simultaneous localization and mapping (SLAM): Part II. IEEE Rob. Autom. Mag. 2006;13(3):108-117.
  5. , . The nature of Doppler velocity measurement. IRE Trans. Aeronaut. Navig. Electron. 1957;3:103-112.
  6. , , , . Treatment of the first COVID-19 case in Kosovo and management of the pandemic. SPR. 2021;1(3):58-62.
  7. , , , , , , . 3D global mapping of large-scale unstructured orchard integrating eye-in-hand stereo vision and SLAM. Comput. Electron. Agric. 2021;187:106237.
  8. , , , , . Moving objects elimination towards enhanced dynamic SLAM fusing LiDAR and mmW-radar. In: 2020 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM). IEEE; p. 1-4.
  9. , , , . Calibration. Handbook of Econometrics. 2001;5:3653-3703.
  10. , , , , , , . Laser map aided visual inertial localization in changing environment. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; p. 4794-4801.
  11. , , . Lidar remote sensing for forestry. J. Forest. 2000;98(6):44-46.
  12. , . Deep Learning for Depth, Ego-Motion, Optical Flow Estimation, and Semantic Segmentation. University of Essex; Doctoral dissertation.
  13. , , , . Visual simultaneous localization and mapping: a survey. Artif. Intell. Rev. 2015;43(1):55-81.
  14. Ghaffari, M., Clark, W., Bloch, A., Eustice, R.M., Grizzle, J.W., 2019. Continuous direct sparse visual odometry from RGB-D images. arXiv preprint arXiv:1904.02266.
  15. , , . RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. J. Field Rob. 2019;36(2):416-446.
  16. Lee, K.M.B., Kong, F.H., Cannizzaro, R., Palmer, J.L., Johnson, D., Yoo, C., Fitch, R., 2021. Decentralised intelligence, surveillance, and reconnaissance in unknown environments with heterogeneous multi-robot systems. arXiv preprint arXiv:2106.09219.
  17. , , , . Exploration-RRT: A multi-objective path planning and exploration framework for unknown and unstructured environments. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; p. 3429-3435.
  18. Morales, J.J., Kassas, Z.M., 2017, September. Distributed signals of opportunity aided inertial navigation with intermittent communication. In: Proceedings of the 30th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2017), pp. 2519-2530.
  19. Nguyen, A., Le, B., 2013, November. 3D point cloud segmentation: A survey. In: 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), pp. 225-230. IEEE.
  20. , , , , , , . Robust visual-lidar simultaneous localization and mapping system for UAV. IEEE Geosci. Remote Sens. Lett. 2021.
  21. Rajendran, G., Uma, V., O'Brien, B., 2021. Unified robot task and motion planning with extended planner using ROS simulator. J. King Saud Univ.-Comp. Inf. Sci.
  22. , , , . A semantic OctoMap mapping method based on CBAM-PSPNet. J. Web Eng. 2022:879-910.
  23. , , . The anthropomorphism of intelligence. Technol. | Architecture + Design. 2021;5(2).
  24. , , , , , , , , , , , . Advances in molecular quantum chemistry contained in the Q-Chem 4 program package. Mol. Phys. 2015;113(2):184-215.
  25. , , , , , , , , . A review of millimeter wave device-based localization and device-free sensing technologies and applications. IEEE Commun. Surv. Tutorials. 2022.
  26. , , , . Loop closure detection in simultaneous localization and mapping using descriptor from generative adversarial network. J. Electron. Imaging. 2019;28(1):013014.
  27. Sodhi, P., Dexheimer, E., Mukadam, M., Anderson, S., Kaess, M., 2022, January. Leo: Learning energy-based models in factor graph optimization. In: Conference on Robot Learning, pp. 234-244. PMLR.
  28. , , , , , . A benchmark for the evaluation of RGB-D SLAM systems. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE; p. 573-580.
  29. , , , . The revisiting problem in simultaneous localization and mapping: a survey on visual loop closure detection. IEEE Trans. Intell. Transp. Syst. 2022.
  30. , , , , . Enhancing collaborative road scene reconstruction with unsupervised domain alignment. Mach. Vis. Appl. 2021;32(1):1-16.
  31. , , , , , . Deep neural network enhanced sampling-based path planning in 3D space. IEEE Trans. Autom. Sci. Eng. 2021.
  32. , , , , , , . A robust framework for simultaneous localization and mapping with multiple non-repetitive scanning lidars. Remote Sensing. 2021;13(10):2015.
  33. , , , , . SBAS: Salient bundle adjustment for visual SLAM. IEEE Trans. Instrum. Meas. 2021;70:1-9.
  34. , , , , , . Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment. Rob. Auton. Syst. 2019;117:1-16.
  35. , , , , , , . GAN-based differential private image privacy protection framework for the internet of multimedia things. Sensors. 2020;21(1):58.
  36. , , , , . A visual SLAM system with laser assisted optimization. In: 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE; p. 187-192.
  37. , , , . Unsupervised monocular depth estimation in highly complex environments. IEEE Trans. Emerg. Topics Comput. Intelligence. 2022.
  38. , , , , , . 3D point cloud mapping based on intensity feature. In: Artificial Intelligence in China. Singapore: Springer; p. 514-521.