[June 19 2022 3:30pm] Poster sessions: Due to technical difficulties causing delays, we will hold spotlight presentations 2 and 3 from 3:30-4:30pm, the panel session from 4:30-5:00pm, and the poster session from 5:00-6:00pm.
[June 19 2022 11AM] Poster sessions: As we confirmed with the organizers and staff, there are NO ID numbers assigned to the poster boards. Please look for a board with a brown paper note reading "Transformer for Vision 33b-82b" in the Hall-E lobby. We recommend that authors put their posters in that area so all posters are gathered together.
[June 19 2022 11AM] Poster sessions: Per the latest update from the workshop chairs this morning, the new poster location is the lobby of Halls D-E, spots 33b-82b. Please use any open poster spot in that range.
Transformers have recently emerged as a promising and versatile deep neural architecture across various domains. Since the introduction of Vision Transformers (ViT) in 2020, the vision community has witnessed an explosion of transformer-based computer vision models, with applications ranging from image classification to dense prediction (e.g., object detection, segmentation), video, self-supervised learning, 3D, and multi-modal learning. This workshop presents a timely opportunity to bring together researchers across the computer vision and machine learning communities to discuss the opportunities and open challenges in designing transformer models for vision tasks.
Google Brain
Microsoft Research
Google AI
DeepMind
Google AI
Google AI
Meta AI
Google Brain
Meta AI
Nvidia Research
Meta AI
We accept abstract submissions to our workshop. Submissions are limited to 4 pages (excluding references) and must follow the CVPR 2022 author guidelines.
Abstract Submission Due: April 15th, 2022
Notification to Authors: May 23rd, 2022
Workshop: June 19th, 2022
Andreas P Steiner (Google)
Antoine Miech (DeepMind)
Chen Feng (NYU)
Chongjian GE (HKU)
Chuhan Zhang (Oxford)
Daquan Zhou (NUS)
Dong Yang (NVIDIA)
Feng Cheng (UNC)
Haotian Liu (UW-M)
Hongxu Yin (NVIDIA)
Huaizheng Zhang (NTU)
Huaizu Jiang (Northeastern)
Ioannis Siglidis (ENPC)
Jyoti Aneja (Microsoft)
Liliane Momeni (Oxford)
Linxi Fan (NVIDIA)
Maitrey Gramopadhye (UNC)
Mannat Singh (Meta)
Max Bain (Oxford)
Md Mohaiminul Islam (UNC)
Miao Yin (Rutgers)
Mingyang Zhou (UC Davis)
Noureldien Hussein (UoA)
Peize Sun (HKU)
Pichao Wang (Alibaba)
Prajwal K R (Oxford)
Romain Loiseau (ENPC)
Ruohan Gao (Stanford)
Shalini De Mello (NVIDIA)
Shiyang Li (UCSB)
Shoufa Chen (HKU)
Sifei Liu (NVIDIA)
Tao Wang (NUS)
Tengda Han (Oxford)
Tsung-Yi Lin (NVIDIA)
Vivek Sharma (MIT)
Weijian Xu (Microsoft)
Weili Nie (NVIDIA)
Wenhai Wang (Nanjing U.)
Xiao Wang (Google)
Xiaohua Zhai (Google)
Xudong Lin (Columbia U.)
Yan-Bo Lin (UNC)
Yang Sui (Rutgers)
Yiming Li (NYU)
Yong Jae Lee (UW-M)
Yuchen Lu (U. Montreal)
Yue Zhao (UT Austin)
Zhe Wang (UC Irvine)
Zhiqi Li (Nanjing U.)
Zhiyuan Fang (ASU)