Code | Paper | Bibtex
Left: the main obstacle to deploying VLMs is their high computational cost. Right: to accelerate VLMs, most existing ideas focus on the model perspective (pruning & quantization), while our Turbo explores de-redundancy from the data perspective.
Vision-Language Large Models (VLMs) have recently become a primary backbone of AI due to their impressive performance. However, their expensive computational costs, i.e., low throughput and high latency, impede their potential in real-world scenarios. To accelerate VLMs, most existing methods focus on the model perspective: pruning, distillation, quantization, but completely overlook the redundancy on the data side. To fill this gap, this paper reveals the severity of data redundancy and designs a plug-and-play Turbo module, guided by an information degree, to prune inefficient tokens from visual or textual data. In pursuit of an efficiency-performance trade-off, the information degree takes two crucial factors into consideration: mutual redundancy and semantic value. Concretely, the former evaluates data duplication between sequential tokens, while the latter evaluates each token by its contribution to the overall semantics. As a result, tokens with a high information degree carry less redundancy and stronger semantics. During VLM computation, Turbo works as a user-friendly plug-in that sorts tokens by information degree and utilizes only the top-level ones to save costs. Its advantages are multifaceted, e.g., general compatibility with various VLMs across understanding and generation, simple usage without re-training, and trivial engineering effort. On multiple VLM benchmarks, extensive experiments reveal the strong acceleration of Turbo under a negligible performance drop.
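The scoring idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch sketch, not the released implementation: mutual redundancy is approximated here by cosine similarity between sequential tokens, semantic value by each token's attention from the [CLS] token, and only the highest-scoring tokens are kept. The function names, the `alpha` weight and `keep_ratio` are illustrative assumptions.

```python
# Hypothetical sketch of information-degree scoring; names and the exact
# combination rule are assumptions, not the official Turbo code.
import torch
import torch.nn.functional as F

def information_degree(tokens: torch.Tensor, cls_attn: torch.Tensor, alpha: float = 1.0):
    """tokens: (B, N, D) patch/word embeddings; cls_attn: (B, N) attention from the [CLS] token."""
    x = F.normalize(tokens, dim=-1)
    # Mutual redundancy: cosine similarity between sequential (neighbouring) tokens.
    sim = (x[:, :-1] * x[:, 1:]).sum(-1)               # (B, N-1)
    sim = F.pad(sim, (1, 0), value=sim.mean().item())  # pad so every token has a score
    redundancy = -sim                                   # lower duplication -> higher information
    # Semantic value: each token's contribution to the overall semantics.
    semantic = cls_attn
    return redundancy + alpha * semantic                # (B, N)

def keep_informative(tokens, cls_attn, keep_ratio=0.7, alpha=1.0):
    """Sort tokens by information degree and keep only the top-level ones."""
    score = information_degree(tokens, cls_attn, alpha)
    k = max(1, int(tokens.shape[1] * keep_ratio))
    idx = score.topk(k, dim=1).indices                  # (B, k)
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
```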
As a plug-in, Turbo compresses data to cut the computational overhead of VLMs across understanding/generation and uni-/multi-modal tasks. It sorts and then merges tokens by information degree (mutual redundancy and semantic value) for understanding tasks, while it sorts, merges, and restores tokens for generation tasks, offering good universality and practicality.
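For understanding tasks, low-information tokens can be merged rather than simply dropped; for generation tasks, the merged positions must be restorable so the output keeps its original sequence length. The sketch below is a simplified, assumed variant of such a merge-and-restore step (nearest-neighbour assignment with plain averaging; `merge_tokens` and `restore_tokens` are hypothetical names), not the official Turbo code.

```python
# Assumed merge-and-restore sketch: dropped tokens are averaged into their most
# similar kept token, and an index map allows restoring the original length.
import torch
import torch.nn.functional as F

def merge_tokens(tokens, score, keep_ratio=0.7):
    """tokens: (N, D); score: (N,) information degree. Returns merged tokens and an index map."""
    n_keep = max(1, int(tokens.shape[0] * keep_ratio))
    order = score.argsort(descending=True)
    keep_idx, drop_idx = order[:n_keep], order[n_keep:]
    kept, dropped = tokens[keep_idx], tokens[drop_idx]
    # Assign each dropped token to its most similar kept token, then average them in.
    sim = F.normalize(dropped, dim=-1) @ F.normalize(kept, dim=-1).T   # (N_drop, N_keep)
    assign = sim.argmax(dim=-1)                                        # (N_drop,)
    merged = kept.clone()
    counts = torch.ones(n_keep, device=tokens.device)
    merged.index_add_(0, assign, dropped)
    counts.index_add_(0, assign, torch.ones_like(assign, dtype=counts.dtype))
    merged = merged / counts.unsqueeze(-1)
    # Map every original position to a row of `merged`, so generation can restore length.
    mapping = torch.empty(tokens.shape[0], dtype=torch.long, device=tokens.device)
    mapping[keep_idx] = torch.arange(n_keep, device=tokens.device)
    mapping[drop_idx] = assign
    return merged, mapping

def restore_tokens(merged, mapping):
    """Broadcast merged tokens back to the original sequence length (generation tasks)."""
    return merged[mapping]
```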
Empirical Evaluation of Token Redundancy & Attention Concentration on BLIP fine-tuned for multi-modal retrieval. The results reveal non-negligible redundancy in the token sequence, from the perspectives of both semantics and similarity.
Ablation Study on Drop Ratio. Semantic value retains superior performance when the ratio is small, while mutual redundancy is more stable at large ratios. By combining the two components, Turbo achieves competitive results and stability across the whole range.
Ablation Study of Balancing Coefficient. On image captioning with BLIP (ViT-Base and ViT-Large), these results demonstrate robustness, as the performance varies only slightly.
Left: Turbo merges background patches while retaining semantically meaningful foreground patches, preserving more key information. Right: the quality of text-to-image generation is close before and after Turbo acceleration.
Generation comparisons between ToMe and Turbo. Compared with ToMe, Turbo retains more details and yields better quality.
This research was completed during a research internship, supported by Alibaba Group.