Advancing Myopia To Holism: Fully Contrastive Language-Image Pre-training


Haicheng Wang1,2*
Chen Ju1*✉️
Weixiong Lin2
Shuai Xiao1✉️

Mengting Chen1
Yixuan Huang1
Chang Liu2
Mingshuai Yao1

Jinsong Lan1
Ying Chen1
Qingwen Liu1
Yanfeng Wang2

1Alibaba Group
2Shanghai Jiao Tong University

CVPR 2025



Code

Paper

Bibtex


Problem



Myopia. OpenAI's CLIP uses crude (image, text) web data for one-to-one contrastive alignment, causing serious myopia, i.e., a bias toward monotonous short texts and shallow visual expressivity. Holism. We advance one holistic CLIP paradigm by upgrading to colorful (image, multi-texts) data from diverse views and levels, and by designing multi-to-multi contrastive learning for image-text part-to-part matching.



Abstract

In the rapidly evolving field of vision-language models (VLMs), contrastive language-image pre-training (CLIP) has made significant strides, becoming the foundation for various downstream tasks. However, by relying on a one-to-one (image, text) contrastive paradigm to learn alignment from large-scale, messy web data, CLIP faces a serious myopic dilemma, resulting in biases toward monotonous short texts and shallow visual expressivity. To overcome these issues, this paper advances CLIP into a novel holistic paradigm by updating both the data and the alignment optimization. To obtain colorful data at low cost, we use image-to-text captioning to generate multi-texts for each image, from multiple perspectives, granularities, and hierarchies. Two gadgets are proposed to encourage textual diversity. To match such (image, multi-texts) pairs, we modify the CLIP image encoder into a multi-branch design and propose multi-to-multi contrastive optimization for image-text part-to-part matching. As a result, diverse visual embeddings are learned for each image, bringing good interpretability and generalization. Extensive experiments and ablations across more than ten benchmarks, covering image-text retrieval, open-vocabulary classification, and dense visual tasks, show that our holistic CLIP significantly outperforms the existing myopic CLIP. Code for holistic CLIP will be released upon publication, to further promote the prosperity of VLMs.
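For intuition, below is a minimal PyTorch sketch of the multi-to-multi contrastive objective described above. It is not the released implementation; the tensor shapes, the diagonal part-to-part pairing, and the temperature value are assumptions based on this abstract (with M = H).

```python
# Minimal sketch of multi-to-multi contrastive alignment (not the official code).
# Assumes each image yields H visual embeddings and its caption set yields M = H
# text embeddings, matched part-to-part: the h-th image branch vs. the h-th text view.
import torch
import torch.nn.functional as F

def holistic_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """
    img_emb: (B, H, D) -- H visual embeddings per image
    txt_emb: (B, H, D) -- one text embedding per captioning view (M = H assumed)
    """
    B, H, D = img_emb.shape
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    targets = torch.arange(B, device=img_emb.device)
    loss = 0.0
    for h in range(H):  # part-to-part matching along the branch/view index
        logits = img_emb[:, h] @ txt_emb[:, h].t() / temperature  # (B, B)
        loss += 0.5 * (F.cross_entropy(logits, targets)
                       + F.cross_entropy(logits.t(), targets))
    return loss / H
```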


Framework Overview



Pipeline Overview of Holistic CLIP. To evolve the data from monotonous (image, text) pairs into colorful (image, multi-texts) pairs, we use powerful VLMs for captioning from multiple views, levels, and granularities, with diverse prompts defined to encourage diversity. We then modify the CLIP image encoder into a multi-branch design and optimize with multi-to-multi contrastive learning for part-to-part matching. During inference, flexible embedding customizations are available for different tasks, showing good interpretability and generalization.
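The snippet below illustrates how (image, multi-texts) pairs could be built with a captioning VLM and diverse prompts. The prompt wording and the `caption_model.generate` interface are hypothetical placeholders, not the prompts or captioner used in the paper.

```python
# Illustrative sketch only: one caption per prompt yields M texts for a single image,
# covering different views, levels, and granularities.
PROMPTS = [
    "Describe the overall scene in one short sentence.",        # global, coarse
    "Describe the main object and its attributes in detail.",   # object level
    "Describe the background and spatial layout.",              # context level
    "List fine-grained details such as colors and textures.",   # fine granularity
]

def build_multi_text_pair(image, caption_model):
    # Hypothetical captioner interface; returns the multi-text side of one pair.
    return [caption_model.generate(image, prompt) for prompt in PROMPTS]
```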


Model Structure



Architecture Overview of Holistic CLIP. To generate \( H \) image features, we leverage two different structures: \( \Psi_{\mathrm{CLS}} \) and \( \Psi_{\mathrm{MLP}} \). We then match the \( H \) image features with the \( M \) text features. Normally \( H = M \), and we apply one-to-one matching.
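As a rough sketch of the multi-branch idea (closest to the \( \Psi_{\mathrm{MLP}} \) variant), the module below applies \( H \) separate MLP heads to one shared backbone feature to produce \( H \) image embeddings. This is an assumption for illustration; the exact \( \Psi_{\mathrm{CLS}} \) and \( \Psi_{\mathrm{MLP}} \) designs follow the paper, not this stub.

```python
# Plausible multi-branch head: H projection heads over a shared backbone feature.
import torch
import torch.nn as nn

class MultiBranchHead(nn.Module):
    def __init__(self, dim=768, embed_dim=512, num_branches=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, embed_dim))
            for _ in range(num_branches)
        )

    def forward(self, backbone_feat):  # (B, dim), e.g. the pooled CLS feature
        # Returns (B, H, embed_dim): one embedding per branch, ready for
        # part-to-part matching against the H text features.
        return torch.stack([branch(backbone_feat) for branch in self.branches], dim=1)
```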


Experiments

Elementary Short-Text Retrieval

Complex Long-Text Retrieval

Zero-Shot Image Classification

Image-to-Text Captioning & All-Round Abilities

Ablation Study

Ablation Study for the Embedding Fusion

Embedding Customization

Ablation Study for the Text Number on CC3M

Visualization

Attention Visualization of Holistic CLIP's Vision. The visual representation is naturally decomposed by aligning with various texts.


Acknowledgements

This research is supported by Alibaba Group.