Code | Paper | Bibtex
Myopia. OpenAI's CLIP learns one-to-one contrastive alignment from crude (image, text) web data, causing a serious myopic bias toward monotonous short texts and shallow visual expressivity. Holism. We advance CLIP into a holistic paradigm by upgrading the data to colorful (image, multi-texts) pairs drawn from diverse views and levels, and by designing multi-to-multi contrastive learning for image-text part-to-part matching.
In the rapidly evolving field of vision-language models (VLMs), contrastive language-image pre-training (CLIP) has made significant strides and become the foundation for various downstream tasks. However, by relying on a one-to-one (image, text) contrastive paradigm to learn alignment from large-scale, messy web data, CLIP faces a serious myopic dilemma: biases toward monotonous short texts and shallow visual expressivity. To overcome these issues, this paper advances CLIP into a novel holistic paradigm by upgrading both the data and the alignment optimization. To obtain colorful data at low cost, we use image-to-text captioning to generate multiple texts for each image from multiple perspectives, granularities, and hierarchies, and we propose two gadgets to encourage textual diversity. To match such (image, multi-texts) pairs, we modify the CLIP image encoder into a multi-branch structure and propose multi-to-multi contrastive optimization for image-text part-to-part matching. As a result, diverse visual embeddings are learned for each image, bringing good interpretability and generalization. Extensive experiments and ablations on over ten benchmarks, covering image-text retrieval, open-vocabulary classification, and dense visual tasks, show that our holistic CLIP significantly outperforms the existing myopic CLIP. Code for holistic CLIP will be released upon publication to further promote the prosperity of VLMs.
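To make the multi-to-multi contrastive optimization concrete, here is a minimal PyTorch sketch, assuming each image yields H branch embeddings matched part-to-part with M = H caption embeddings; the function name and the symmetric InfoNCE form are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def multi_to_multi_clip_loss(img_embs, txt_embs, temperature=0.07):
    """Illustrative multi-to-multi contrastive loss (not the official code).

    img_embs: (B, H, D)  -- H visual embeddings per image, one per branch
    txt_embs: (B, M, D)  -- M caption embeddings per image
    Assumes H == M and part-to-part matching (branch h <-> caption h).
    """
    B, H, _ = img_embs.shape
    img_embs = F.normalize(img_embs, dim=-1)
    txt_embs = F.normalize(txt_embs, dim=-1)

    total = 0.0
    for h in range(H):
        # Symmetric InfoNCE over the batch for each matched part.
        logits = img_embs[:, h] @ txt_embs[:, h].t() / temperature  # (B, B)
        targets = torch.arange(B, device=logits.device)
        total = total + 0.5 * (F.cross_entropy(logits, targets)
                               + F.cross_entropy(logits.t(), targets))
    return total / H
```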
Pipeline Overview of Holistic CLIP. To evolve the data from monotonous (image, text) pairs into colorful (image, multi-texts) pairs, we use powerful VLMs for captioning from multiple views, levels, and granularities, with diverse prompts defined to encourage diversity. We then modify the CLIP image encoder into a multi-branch structure and optimize with multi-to-multi contrastive learning for part-to-part matching. During inference, flexible embedding customizations are available for different tasks, showing good interpretability and generalization.
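The captioning step can be pictured with the following sketch, assuming a generic `caption(image, prompt)` callable backed by an off-the-shelf VLM; the prompt list below is a hypothetical stand-in for the paper's actual prompt design.

```python
# Hypothetical (image, multi-texts) data step; prompts and the `caption`
# helper are illustrative stand-ins for an off-the-shelf VLM captioner.
PROMPTS = [
    "Describe the image in one short sentence.",                # global view
    "Describe the main object and its attributes.",             # object level
    "Describe the background and scene layout.",                # scene level
    "List fine-grained details such as colors and textures.",   # fine granularity
]

def build_multi_text_pair(image, caption):
    """Return one (image, multi-texts) training pair.

    `caption(image, prompt) -> str` is any image-to-text VLM callable.
    """
    texts = [caption(image, p) for p in PROMPTS]
    return {"image": image, "texts": texts}
```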
Architecture Overview of Holistic CLIP. To generate \( H \) image features, we leverage two different structures: \( \Psi_{\mathrm{CLS}} \) and \( \Psi_{\mathrm{MLP}} \). The \( H \) image features are then matched with \( M \) text features; normally \( H = M \), and we apply one-to-one (part-to-part) matching.
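One plausible reading of the multi-branch design, in which the H embeddings come from separate projection heads over a shared backbone feature, is sketched below; the actual \( \Psi_{\mathrm{CLS}} \) and \( \Psi_{\mathrm{MLP}} \) structures may differ, so this is an assumption for illustration only.

```python
import torch
import torch.nn as nn

class MultiBranchHeads(nn.Module):
    """Illustrative sketch: produce H visual embeddings from one backbone.

    Here the MLP-style branch is read as H separate projection heads over a
    shared pooled feature; the real Psi_CLS / Psi_MLP designs may differ.
    """
    def __init__(self, feat_dim, embed_dim, num_branches):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.GELU(),
                          nn.Linear(feat_dim, embed_dim))
            for _ in range(num_branches)
        ])

    def forward(self, pooled_feat):  # pooled_feat: (B, feat_dim)
        # Stack per-branch embeddings into (B, H, embed_dim).
        return torch.stack([head(pooled_feat) for head in self.heads], dim=1)
```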
Attention Visualization of Holistic CLIP's Vision Encoder. The visual representation is naturally decomposed by aligning with diverse texts.
This research is supported by Alibaba Group.