2023.03.24 ArXiv Highlights
Areas of focus:
AIGC
3D computer vision learning
Fine-grained learning
GNN
Others
Disclaimer
With many papers and limited time, this column cannot provide detailed walkthroughs. We only select papers that match PaperABC's research interests and current hot topics. If your research topic falls within the areas above, this column can serve as your paper update feed or paper reading list.

Paper list:
136 new papers were posted on ArXiv today.
NeRF
Persistent Nature: A Generative Model of Unbounded 3D Worlds
https://arxiv.org/pdf/2303.13515.pdf

AIGC
Ablating Concepts in Text-to-Image Diffusion Models
https://arxiv.org/pdf/2303.13516.pdf

Work from Jun-Yan Zhu's group at CMU, addressing the problem of erasing copyrighted concepts in text-to-image generation.
DreamBooth3D: Subject-Driven Text-to-3D Generation
https://arxiv.org/pdf/2303.13508.pdf

Needs little introduction: a star-studded author list and an instantly popular piece of work. The competition in this area is fierce!
CoBIT: A Contrastive Bi-directional Image-Text Generation Model
https://arxiv.org/pdf/2303.13455.pdf

New work from UCLA: a single large model that supports many downstream tasks, not limited to generation.
Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators
https://arxiv.org/pdf/2303.13439.pdf

Text-driven video generation; it is surprising how quickly this area is advancing!
TriPlaneNet: An Encoder for EG3D Inversion
https://arxiv.org/pdf/2303.13497.pdf

VLP
The effectiveness of MAE pre-pretraining for billion-scale pretraining
https://arxiv.org/pdf/2303.13496.pdf

Work by Ross Girshick and colleagues, exploring MAE pre-pretraining for billion-scale pretraining.
NS3D: Neuro-Symbolic Grounding of 3D Objects and Relations
https://arxiv.org/pdf/2303.13483.pdf
