New Lane Detection Work in 2022: Both 2D and 3D
Lane detection is a fundamental and important task in autonomous driving, and both academia and industry have invested heavily in it. 小汤 has long been interested in the lane detection task, has built related features at work, and has shared several related articles:
Related links (click to open):
New lane detection SOTA — CLRNet: Cross Layer Refinement Network for Lane Detec
CVPR 2022 lane detection: Efficient Lane Detection via Curve Modeling
New lane detection work — VIL-100: A New Dataset and A Baseline Model for Video In
HybridNet, an end-to-end multi-task perception network that outperforms YOLOP
A few days ago, a member of Autonomous Driving Tech Exchange Group 4 asked about recent lane detection work, so 小汤 compiled the recent work in this area to share with friends working in this direction. Colleagues and friends interested in lane detection, parking-slot detection, object detection, depth estimation, and related tasks are welcome to join the exchange group for discussion.
Ultra Fast Deep Lane Detection with Hybrid Anchor Driven Ordinal Classification(UFLDv2)
Venue: TPAMI 2022
Affiliation: Zhejiang University
Paper: https://arxiv.org/abs/2206.07389
Code: https://github.com/cfzd/Ultra-Fast-Lane-Detection-v2
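As the title suggests, the UFLD line of work casts lane detection as location classification over sparse anchors instead of per-pixel segmentation. As a rough illustration only (not the authors' code; the function name and grid layout are made up here), per-row-anchor logits can be decoded into lane x-coordinates with a soft-argmax over horizontal grid cells:

```python
import numpy as np

def decode_row_anchors(logits, img_width):
    """Decode lane x-coordinates from row-wise classification logits.

    logits: (num_rows, num_cells) scores over horizontal grid cells,
            one row of scores per row anchor.
    Returns one x-coordinate per row anchor, computed as the
    softmax-weighted expectation over cell centers (soft-argmax),
    which gives sub-cell localization.
    """
    num_rows, num_cells = logits.shape
    # numerically stable softmax over the cell dimension
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # x-coordinate of each grid cell's center in image space
    cell_centers = (np.arange(num_cells) + 0.5) * img_width / num_cells
    return probs @ cell_centers
```

With a strong peak in one cell, the decoded x lands at that cell's center; softer distributions interpolate between cells.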

Demo:

Rethinking Efficient Lane Detection via Curve Modeling
Venue: CVPR 2022
Affiliation: Shanghai Jiao Tong University, East China Normal University, City University of Hong Kong, SenseTime
Paper: https://arxiv.org/abs/2203.02431
Code: https://github.com/voldemortX/pytorch-auto-drive


Paper walkthrough and demo: (click to open)
CVPR 2022 lane detection: Efficient Lane Detection via Curve Modeling
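For context on the curve modeling in this paper: each lane is represented as a parametric curve, specifically a cubic Bézier curve predicted via its control points. A minimal sketch (hypothetical helper, not from the released code) of turning four control points into sampled lane points:

```python
import numpy as np

def sample_bezier(control_points, n=20):
    """Sample n points on a cubic Bezier curve from its 4 control points.

    control_points: (4, 2) array of (x, y); returns an (n, 2) array of
    lane points traced from the first control point to the last.
    """
    t = np.linspace(0.0, 1.0, n)[:, None]  # curve parameter in [0, 1]
    p0, p1, p2, p3 = control_points
    # Bernstein basis of degree 3
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)
```

Four control points per lane make the output representation compact and naturally smooth, compared with regressing dozens of per-row points.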
CLRNet: Cross Layer Refinement Network for Lane Detection
Venue: CVPR 2022
Affiliation: Fabu Inc., Zhejiang University
Paper: https://arxiv.org/pdf/2203.10350.pdf
Code: https://github.com/Turoad/CLRNet


Paper walkthrough and demo: (click to open)
New lane detection SOTA — CLRNet: Cross Layer Refinement Network for Lane Detec
A Keypoint-based Global Association Network for Lane Detection
Venue: CVPR 2022
Affiliation: Peking University, University of Science and Technology of China, SenseTime
Paper: https://arxiv.org/pdf/2204.07335.pdf
Code: https://github.com/Wolfwjs/GANet
The paper proposes a Global Association Network (GANet) that formulates lane detection from a new perspective: each keypoint is regressed directly to the starting point of its lane, instead of extending the lane point by point.
Specifically, a keypoint is associated with its lane by predicting its offset to the corresponding lane's starting point in a global view, independently of the other keypoints; this can be done for all keypoints in parallel, which greatly improves efficiency. In addition, the paper proposes a Lane-aware Feature Aggregator (LFA) that adaptively captures local correlations between adjacent keypoints, complementing the global association with local information. GANet reaches an F1 score of 79.63% on CULane and 97.71% on TuSimple while maintaining high FPS.
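The global association step described above can be sketched roughly as follows (a simplified NumPy illustration with hypothetical names, not the authors' implementation): each keypoint plus its predicted offset should land near the starting point of its lane, so assignment reduces to a nearest-start-point lookup done for all keypoints in parallel:

```python
import numpy as np

def associate_keypoints(keypoints, offsets, start_points, max_dist=5.0):
    """Assign each lane keypoint to a lane via its regressed start point.

    keypoints:    (N, 2) detected keypoint coordinates (x, y)
    offsets:      (N, 2) predicted offsets from each keypoint to its
                  lane's starting point
    start_points: (M, 2) detected lane starting points
    Returns a length-N array of lane indices; -1 marks keypoints whose
    projected start is too far from every candidate (rejected).
    """
    projected = keypoints + offsets  # where each keypoint says its lane starts
    # pairwise distances: every projected point vs. every start point
    dists = np.linalg.norm(projected[:, None, :] - start_points[None, :, :],
                           axis=-1)
    lane_ids = dists.argmin(axis=1)
    lane_ids[dists.min(axis=1) > max_dist] = -1  # reject poor associations
    return lane_ids
```

Because each keypoint is resolved independently, the whole assignment is a single vectorized operation, which is the efficiency argument the paper makes against point-by-point extension.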

Demo:

Eigenlanes: Data-Driven Lane Descriptors for Structurally Diverse Lanes (with the SDLane Dataset)
Venue: CVPR 2022
Affiliation: Korea University, dot.ai
Paper: https://arxiv.org/abs/2203.15302
Code: https://github.com/dongkwonjin/Eigenlanes

Demo:

Towards Driving-Oriented Metric for Lane Detection Models
Venue: CVPR 2022
Affiliation: University of California
Paper: https://arxiv.org/abs/2203.16851
Code: https://github.com/ASGuard-UCI/ld-metric
The paper designs two new driving-oriented metrics for lane detection: the End-to-End Lateral Deviation metric (E2E-LD), formulated directly from the requirements of autonomous driving, a core downstream task of lane detection; and the Per-frame Simulated Lateral Deviation metric (PSLD), a lightweight surrogate for E2E-LD.
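Both metrics are built around lateral deviation, i.e., how far the (simulated) vehicle drifts from the lane center when driving on the detected lanes. A toy sketch of that core quantity (hypothetical helper; the paper's actual metrics involve a closed-loop driving model, which this deliberately omits):

```python
import numpy as np

def lateral_deviation(vehicle_xy, centerline_xy):
    """Maximum lateral deviation of a trajectory from the lane centerline.

    For each vehicle position, take the distance to the nearest
    centerline point; report the maximum over the whole trajectory.
    vehicle_xy:    (N, 2) vehicle positions over time
    centerline_xy: (M, 2) sampled lane-centerline points
    """
    d = np.linalg.norm(vehicle_xy[:, None, :] - centerline_xy[None, :, :],
                       axis=-1)
    return d.min(axis=1).max()
```

The point of metrics like E2E-LD is that a detection error only matters to the extent it moves this downstream quantity, which per-frame accuracy metrics such as F1 do not capture.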

ONCE-3DLanes: Building Monocular 3D Lane Detection (with the ONCE-3DLanes Dataset)
Venue: CVPR 2022
Affiliation: Fudan University, Huawei Noah's Ark Lab
Paper: https://arxiv.org/pdf/2205.00301.pdf
Code: https://github.com/once-3dlanes/once_3dlanes_benchmark
The largest real-world lane detection dataset published to date, covering complex road scenarios with various weather conditions, different lighting conditions, and a variety of geographical locations.

Demo:

PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark (with the OpenLane Dataset)
Venue: not yet announced
Affiliation: Shanghai AI Laboratory, Purdue University, Carnegie Mellon University, SenseTime Research, Shanghai Jiao Tong University
Paper: https://arxiv.org/abs/2203.11089
Code: https://github.com/OpenPerceptionX/PersFormer_3DLane
The first real-world, large-scale 3D lane dataset and corresponding benchmark, OpenLane, to support research into the problem.

Demo:
HybridNets: End-to-End Perception Network
Venue: not yet announced
Paper: https://arxiv.org/ftp/arxiv/papers/2203/2203.09035.pdf
Code: https://github.com/datvuthanh/HybridNets
Demo:
HybridNet, an end-to-end multi-task perception network that outperforms YOLOP