Code: https://github.com/sdc17/UPop
Paper: https://proceedings.mlr.press/v202/shi23e/shi23e.pdf
A Quick Look
- What is it: UPop is the first structured pruning framework for vision-language Transformers. It enables effective structured pruning across multi-modal and uni-modal tasks (including Visual Reasoning, Image Captioning, Visual Question Answering, Image-Text Retrieval, Text-Image Retrieval, Image Classification, and Image Segmentation), datasets (including NLVR2, COCO Caption, VQAv2, COCO, Flickr30K, ImageNet, and ADE20K), and model architectures (including BLIP, CLIP, DeiT, and Segmenter).
https://preview.redd.it/gfbjnxjm95fb1.png?width=2145&format=png&auto=webp&s=108898690f66a1f0afa068b69487859213055928
- What challenge does it tackle: As the figure above shows, the Unified Search adopted by UPop removes the burden of repeated experiments (e.g., grid search) for finding optimal compression ratios across different modalities and structures, because the ratios are searched jointly. Furthermore, the Progressive Pruning adopted by UPop eliminates the weight gap between the searched model and the pruned subnet to be retrained, thereby achieving better convergence and performance, especially at high compression ratios. A toy sketch of both ideas follows the performance summary below.
- How about the performance: On multimodal tasks, for example, UPop achieves 2x compression with only 1.2% and 2.0% accuracy loss on the VQAv2 dataset for Visual Question Answering and the NLVR2 dataset for Visual Reasoning, respectively. On unimodal tasks, UPop achieves 1.5x and 1.2x compression without any loss of accuracy on the ImageNet dataset for Image Classification and the ADE20K dataset for Image Segmentation, respectively. Some examples of vector-level structured granularity are shown below.
https://preview.redd.it/lifz1n1ia5fb1.png?width=1187&format=png&auto=webp&s=f419d9c5fb4d80a2a564198eba356021e1c275e4
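For readers who want the gist in code: below is a minimal sketch (assuming PyTorch) of what unified search, progressive pruning, and vector-level structured granularity could look like on a toy model. It is not the authors' implementation; names such as `MaskedLinear`, `progressive_prune`, and `target_ratio` are hypothetical and chosen for illustration only. See the GitHub repo above for the real code.

```python
# Illustrative only: learnable per-vector saliency scores ranked under one global
# budget (unified search) plus a pruning ratio that ramps up over training
# (progressive pruning). All names here are hypothetical, not from the UPop repo.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose output neurons (whole weight vectors) carry a learnable
    saliency score: vector-level structured granularity."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.saliency = nn.Parameter(torch.ones(out_features))  # one score per output vector
        self.register_buffer("mask", torch.ones(out_features))  # 0 = pruned, 1 = kept

    def forward(self, x):
        return self.linear(x) * (self.saliency * self.mask)

def progressive_prune(layers, step, total_steps, target_ratio):
    """Rank saliency scores of ALL layers together (unified search) and mask the
    smallest ones. The masked fraction grows linearly with `step`, so the searched
    model converges smoothly into the final pruned subnet (progressive pruning)."""
    scores = torch.cat([l.saliency.detach().abs() for l in layers])
    current_ratio = target_ratio * min(1.0, step / total_steps)
    k = int(current_ratio * scores.numel())
    if k == 0:
        return
    threshold = scores.sort().values[k - 1]  # the k smallest scores fall at or below this
    for l in layers:
        l.mask.copy_((l.saliency.detach().abs() > threshold).float())

# Toy usage: two branches (say, a vision FFN and a text FFN) compressed under a
# single 2x budget; how much each branch is pruned emerges from the joint ranking.
vision_ffn, text_ffn = MaskedLinear(16, 64), MaskedLinear(16, 64)
layers = [vision_ffn, text_ffn]
opt = torch.optim.Adam([p for l in layers for p in l.parameters()], lr=1e-3)

x, y = torch.randn(8, 16), torch.randn(8, 64)  # stand-in batch and regression target
for step in range(1, 101):
    loss = nn.functional.mse_loss(vision_ffn(x), y) + nn.functional.mse_loss(text_ffn(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    progressive_prune(layers, step, total_steps=100, target_ratio=0.5)

print("kept vectors per branch:", int(vision_ffn.mask.sum()), int(text_ffn.mask.sum()))
```

In UPop itself the search covers structures such as attention heads and FFN neurons of both the vision and language Transformers, and the compression constraint is enforced during training rather than by this toy hard-thresholding; the sketch only conveys the shape of the idea.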
submitted by /u/Salty-Situation2606