Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Large Models


Chen Ju*1
Haicheng Wang*1,2
Haozhe Cheng2

Xu Chen1
Zhonghua Zhai1
Weilin Huang1

Jinsong Lan1
Shuai Xiao1
Bo Zheng1

1TAO Technology, Alibaba Group
2Shanghai Jiao Tong University

ECCV 2024 Oral



Code

Paper

Bibtex


Problem



Left: the main obstacle to applying VLMs in practice is their high computational cost. Right: to accelerate VLMs, most existing methods work from the model perspective (pruning & quantization), while our Turbo explores de-redundancy from the data perspective.



Abstract

Vision-Language Large Models (VLMs) have recently become a primary backbone of AI due to their impressive performance. However, their expensive computational costs, i.e., throughput and latency, impede their potential in real-world scenarios. To accelerate VLMs, most existing methods focus on the model perspective: pruning, distillation, quantization, while completely overlooking the data-perspective redundancy. To fill this gap, this paper reveals the severity of data redundancy and designs a plug-and-play Turbo module, guided by information degree, to prune inefficient tokens from visual or textual data. In pursuit of efficiency-performance trade-offs, information degree takes two crucial factors into consideration: mutual redundancy and semantic value. Concretely, the former evaluates data duplication between sequential tokens, while the latter evaluates each token by its contribution to the overall semantics. As a result, tokens with a high information degree carry less redundancy and stronger semantics. During VLM computation, Turbo works as a user-friendly plug-in that sorts data by information degree and uses only the top-ranked tokens to save costs. Its advantages are multifaceted, e.g., general compatibility with various VLMs across understanding and generation, and simple usage without re-training or non-trivial engineering effort. On multiple VLM benchmarks, extensive experiments show that Turbo achieves good acceleration with a negligible performance drop.


Framework Overview



As a plug-in, Turbo compresses data to cut the computing overhead of VLMs across understanding/generation and uni-/multi-modality. For understanding tasks, it sorts and then merges tokens by information degree (mutual redundancy and semantic value); for generation tasks, it sorts, merges, and then restores the VLM's tokens, offering good universality and practicality. A small illustrative sketch of the scoring-and-selection idea follows below.
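To make the idea concrete, here is a minimal, illustrative PyTorch sketch of scoring tokens by an information degree and keeping only the top-ranked ones. It is not the authors' implementation: approximating semantic value by [CLS] attention, combining the two terms as `alpha * semantic - redundancy`, and dropping (rather than merging and restoring) low-scoring tokens are all simplifying assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F


def information_degree(tokens, cls_attn, alpha=1.0):
    """Score tokens by an illustrative information degree.

    tokens:   (N, D) token embeddings from one encoder layer
    cls_attn: (N,)   attention the [CLS] token pays to each token,
                     used here as a proxy for semantic value
    alpha:    balancing coefficient between the two terms
    """
    # Mutual redundancy: each token's highest cosine similarity to any
    # other token in the sequence (heavily duplicated tokens score high).
    x = F.normalize(tokens, dim=-1)
    sim = x @ x.t()
    sim.fill_diagonal_(float("-inf"))
    redundancy = sim.max(dim=-1).values          # (N,)

    # Semantic value: contribution to overall semantics, approximated here
    # by the [CLS] attention weight of each token.
    semantic = cls_attn                          # (N,)

    # High information degree = less redundant and more semantically important.
    return alpha * semantic - redundancy


def turbo_keep_topk(tokens, cls_attn, keep_ratio=0.7, alpha=1.0):
    """Keep only the top-ranked tokens; a simplified stand-in for Turbo's merge step."""
    scores = information_degree(tokens, cls_attn, alpha)
    n_keep = max(1, int(tokens.shape[0] * keep_ratio))
    keep_idx = scores.topk(n_keep).indices.sort().values  # preserve original order
    return tokens[keep_idx], keep_idx
```

In the actual Turbo module, low-informativity tokens are merged into their most similar peers rather than simply dropped, and for generation tasks the merged tokens are later restored.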


Empirical Study



Empirical evaluation of token redundancy and attention concentration on BLIP fine-tuned for multi-modal retrieval. The results reveal non-negligible redundancy in the token sequence from the perspectives of both semantics and similarity; a sketch of such diagnostics follows below.
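The two quantities behind this study can be probed with a few lines of PyTorch. The sketch below is an assumption about how such diagnostics could be computed (mean pairwise-maximum cosine similarity for redundancy, and the [CLS] attention mass captured by the top tokens for concentration); it is not the paper's exact measurement protocol.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def redundancy_and_concentration(tokens, cls_attn, top_frac=0.2):
    """Two illustrative diagnostics for a token sequence.

    tokens:   (N, D) token embeddings from an encoder layer
    cls_attn: (N,)   attention weights of the [CLS] token over the sequence
    """
    # Similarity perspective: average over tokens of the highest cosine
    # similarity to any other token; a high value indicates duplication.
    x = F.normalize(tokens, dim=-1)
    sim = x @ x.t()
    sim.fill_diagonal_(-1.0)
    mean_max_sim = sim.max(dim=-1).values.mean().item()

    # Semantic perspective: fraction of [CLS] attention mass captured by the
    # top `top_frac` of tokens; a high value indicates that the semantics
    # concentrate on a small subset of tokens.
    k = max(1, int(top_frac * cls_attn.numel()))
    top_mass = (cls_attn.topk(k).values.sum() / cls_attn.sum()).item()

    return mean_max_sim, top_mass
```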


Experiments

Ablation Study

Ablation study on the drop ratio. Semantic value alone retains superior performance when the ratio is small, while mutual redundancy offers better stability at large ratios. By combining the two components, Turbo achieves competitive results and stability over the whole range.

Robustness

Ablation study of the balancing coefficient. On image captioning with BLIP (ViT-Base and ViT-Large), performance varies only slightly across coefficient values, demonstrating the robustness of our method.
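For reference, the coefficient ablated here balances the two ingredients of the information degree. One plausible form, written only for illustration (the symbols and the exact sign convention are assumptions, not the paper's definition), is:

```latex
% Illustrative information degree of token i:
% semantic value A_i traded off against mutual redundancy R_i
% via a balancing coefficient \alpha.
E_i = \alpha \, A_i - R_i
```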

Visualization Results

Left: Turbo merges background patches while retaining semantically meaningful foreground patches, thus preserving more key information. Right: the quality of text-to-image generation is nearly the same before and after Turbo acceleration.

Comparison with Previous SOTA

Generation comparisons between ToMe and Turbo. Compared with ToMe, Turbo retains more detail and produces higher-quality outputs.


Acknowledgements

This research was completed during a research internship and was supported by Alibaba Group.