RLH-MAPPING: REAL-TIME DENSE MAPPING FOR ROBOTS USING LOW-LIGHT FIELD AND HYBRID REPRESENTATIONS

Xiang Wang∗ and Peter Xiaoping Liu∗,∗∗

References

[1] S. Saeedi, L. Paull, M. Trentini, and H. Li, Occupancy grid map merging for multiple robot simultaneous localization and mapping, International Journal of Robotics and Automation, 30, 2015, 149–157.
[2] L. Shi, F. Zheng, and Y. Shi, Multi-constraint SLAM optimization algorithm for indoor scenes, International Journal of Robotics and Automation, 206, 2023, 375–382.
[3] F. Ruetz, E. Hernández, M. Pfeiffer, H. Oleynikova, M. Cox, T. Lowe, and P. Borges, OVPC mesh: 3D free-space representation for local ground vehicle navigation, in Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, 2019, 8648–8654.
[4] O. Kähler, V.A. Prisacariu, C.Y. Ren, X. Sun, P.H.S. Torr, and D.W. Murray, Very high frame rate volumetric integration of depth images on mobile devices, IEEE Transactions on Visualization and Computer Graphics, 21, 2015, 1241–1250.
[5] B. Mildenhall, P.P. Srinivasan, M. Tancik, J.T. Barron, R. Ramamoorthi, and R. Ng, NeRF: Representing scenes as neural radiance fields for view synthesis, Communications of the ACM, 65(1), 2021, 99–106.
[6] B. Kerbl, G. Kopanas, T. Leimkuehler, and G. Drettakis, 3D Gaussian splatting for real-time radiance field rendering, ACM Transactions on Graphics (TOG), 42, 2023, 1–14.
[7] X. Yang, H. Li, H. Zhai, Y. Ming, Y. Liu, and G. Zhang, Vox-Fusion: Dense tracking and mapping with voxel-based neural implicit representation, in Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Singapore, 2022, 499–507.
[8] C. Jiang, H. Zhang, P. Liu, Z. Yu, H. Cheng, B. Zhou, and S. Shen, H2-Mapping: Real-time dense mapping using hierarchical hybrid representation, IEEE Robotics and Automation Letters, 8, 2023, 6787–6794.
[9] P.P. Srinivasan, B. Deng, X. Zhang, M. Tancik, B. Mildenhall, and J.T. Barron, NeRV: Neural reflectance and visibility fields for relighting and view synthesis, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, 2021, 7491–7500.
[10] K. Wei, Y. Fu, J. Yang, and H. Huang, A physics-based noise formation model for extreme low-light raw denoising, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, 2020, 2755–2764.
[11] X. Zhang, P.P. Srinivasan, B. Deng, P.E. Debevec, W.T. Freeman, and J.T. Barron, NeRFactor: Neural factorization of shape and reflectance under an unknown illumination, ACM Transactions on Graphics (TOG), 40, 2021, 1–18.
[12] T. Müller, A. Evans, C. Schied, and A. Keller, Instant neural graphics primitives with a multiresolution hash encoding, ACM Transactions on Graphics (TOG), 41, 2022, 1–15.
[13] Z. Rahman, D.J. Jobson, and G.A. Woodell, Multi-scale retinex for color image enhancement, in Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, 1996, 1003–1006.
[14] X. Guo, Y. Li, and H. Ling, LIME: Low-light image enhancement via illumination map estimation, IEEE Transactions on Image Processing, 26, 2017, 982–993.
[15] W. Burger and M. Burge, Digital Image Processing, Texts in Computer Science, (Cham: Springer, 2016).
[16] F. Lv, F. Lu, J. Wu, and C.S. Lim, MBLLEN: Low-light image/video enhancement using CNNs, in Proceedings of the British Machine Vision Conference, Newcastle, 2018, 1–4.
[17] S.J. Moran, P. Marza, S.G. McDonagh, S. Parisot, and G.G. Slabaugh, DeepLPF: Deep local parametric filters for image enhancement, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, 2020, 12823–12832.
[18] Y. Song, H. Qian, and X. Du, StarEnhancer: Learning real-time and style-aware image enhancement, in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, 2021, 4106–4115.
[19] Y. Wang, B. Li, and X. Yuan, BrightFormer: A transformer to brighten the image, Computers & Graphics, 110, 2022, 49–57.
[20] Y. Wang, P. Su, X. Pan, H. Wang, and Y. Gao, Channel self-attention based low-light image enhancement network, Computers & Graphics, 120, 2024, 103921.
[21] Y. Jin, W. Yang, and R.T. Tan, Unsupervised night image enhancement: When layer decomposition meets light-effects suppression, in Proceedings of the European Conference on Computer Vision, Cham, 2022, 404–421.
[22] H. Nguyen, D. Tran, K.D.M. Nguyen, and R.H.M. Nguyen, PSENet: Progressive self-enhancement network for unsupervised extreme-light image enhancement, in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, 2023, 1756–1765.
[23] R. Martin-Brualla, N. Radwan, M.S.M. Sajjadi, J.T. Barron, A. Dosovitskiy, and D. Duckworth, NeRF in the wild: Neural radiance fields for unconstrained photo collections, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, 2021, 7206–7215.
[24] B. Mildenhall, P. Hedman, R. Martin-Brualla, P.P. Srinivasan, and J.T. Barron, NeRF in the dark: High dynamic range view synthesis from noisy raw images, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, 2022, 16169–16178.
[25] H.C. Karaimer and M.S. Brown, A software platform for manipulating the camera imaging pipeline, in Proceedings of the European Conference on Computer Vision, Cham, 2016, 429–444.
[26] J. Straub, T. Whelan, L. Ma, Y. Chen, E. Wijmans, S. Green, J.J. Engel, R. Mur-Artal, C.Y. Ren, S. Verma, A. Clarkson, M. Yan, B. Budge, Y. Yan, X. Pan, J. Yon, Y. Zou, K. Leon, N. Carter, J. Briales, T. Gillingham, E. Mueggler, L. Pesqueira, M. Savva, D. Batra, H.M. Strasdat, R.D. Nardi, M. Goesele, S. Lovegrove, and R.A. Newcombe, The Replica dataset: A digital replica of indoor spaces, arXiv:1906.05797, 2019.
[27] A. Dai, A.X. Chang, M. Savva, M. Halber, T.A. Funkhouser, and M. Nießner, ScanNet: Richly-annotated 3D reconstructions of indoor scenes, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, 2432–2443.
[28] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, PyTorch: An imperative style, high-performance deep learning library, arXiv:1912.01703, 2019.
[29] M.M. Johari, C. Carta, and F. Fleuret, ESLAM: Efficient dense SLAM system based on hybrid representation of signed distance fields, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, 2023, 17408–17419.
[30] T. Zhang, K. Huang, W. Zhi, and M. Johnson-Roberson, DarkGS: Learning neural illumination and 3D Gaussians relighting for robotic exploration in the dark, arXiv:2403.10814, 2024.
[31] S. Ye, Z.-H. Dong, Y. Hu, Y.-H. Wen, and Y.-J. Liu, Gaussian in the dark: Real-time view synthesis from inconsistent dark images using Gaussian splatting, Computer Graphics Forum, 43, 2024, e15213.
