---
license: mit
datasets:
  - laion/laion2B-en
  - laion/laion-coco
  - laion/laion2B-multi
  - kakaobrain/coyo-700m
  - conceptual_captions
  - wanng/wukong100m
pipeline_tag: visual-question-answering
---

# Model Card for InternVL-Chat-V1-1


[🆕 Blog] [📜 InternVL 1.0 Paper] [📜 InternVL 1.5 Report] [🗨️ Chat Demo]

[🤗 HF Demo] [🚀 Quick Start] [🌐 Community-hosted API] [📖 中文解读 (Chinese Explainer)]

We released InternVL-Chat-V1-1, which adopts a LLaVA-like architecture: a ViT, an MLP projector, and an LLM. In this version, we explored increasing the input resolution to 448x448, enhancing OCR capabilities, and improving support for Chinese conversations.
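At a high level, the forward path is: image patches → ViT → MLP projector → visual tokens concatenated with the text embeddings → LLM. The toy sketch below shows this wiring; the module names are illustrative stand-ins (`nn.Identity` in place of the real encoders), and only the hidden sizes (3200 for InternViT-6B, 5120 for LLaMA2-13B) are taken from the real models.

```python
import torch
import torch.nn as nn

class ToyMLLM(nn.Module):
    """LLaVA-style wiring: ViT features -> MLP projector -> LLM input space."""
    def __init__(self, vit_dim=3200, llm_dim=5120, vocab_size=32000):
        super().__init__()
        self.vit = nn.Identity()                       # stand-in for InternViT-6B
        self.projector = nn.Sequential(                # the MLP projector
            nn.Linear(vit_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim))
        self.embed_tokens = nn.Embedding(vocab_size, llm_dim)
        self.llm = nn.Identity()                       # stand-in for LLaMA2-13B

    def forward(self, image_tokens, input_ids):
        visual = self.projector(self.vit(image_tokens))      # (B, 256, llm_dim)
        text = self.embed_tokens(input_ids)                  # (B, T, llm_dim)
        # visual tokens are placed in front of the text tokens
        return self.llm(torch.cat([visual, text], dim=1))    # (B, 256 + T, llm_dim)

model = ToyMLLM()
out = model(torch.randn(1, 256, 3200), torch.randint(0, 32000, (1, 8)))
print(out.shape)  # torch.Size([1, 264, 5120])
```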

## Model Details

- **Model Type:** multimodal large language model (MLLM)
- **Model Stats:**
  - Architecture: InternViT-6B-448px + MLP + LLaMA2-13B (our internal SFT version)
  - Image size: 448 x 448 (256 visual tokens)
  - Params: 19B
- **Training Strategy:**
  - Pretraining Stage
    - Learnable component: InternViT-6B + LLaMA2-13B
    - Data: trained on 72M samples, including COYO, LAION, CC12M, CC3M, SBU, Wukong, GRIT, Objects365, OpenImages, and OCR-related datasets.
    - Note: in this stage, we load the pretrained weights of the original InternViT-6B-224px and interpolate its position embedding to the size corresponding to 448 x 448 inputs. Moreover, to reduce the number of visual tokens, we apply a pixel shuffle operation that merges the 1024 tokens into 256 (see the sketch after this list).
  - Supervised Finetuning Stage
    - Learnable component: MLP + LLaMA2-13B
    - Data: a comprehensive collection of open-source datasets, along with their Chinese translations, totaling approximately 6M samples.
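To make the pretraining note concrete, here is a minimal sketch of the two operations it describes: bicubic interpolation of the ViT position embedding from the 224px grid (16 x 16 patches at patch size 14) to the 448px grid (32 x 32 patches), and a pixel shuffle (space-to-depth) step that merges each 2 x 2 group of the resulting 1024 visual tokens into one, leaving 256 higher-dimensional tokens. The function names and shapes below are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed: torch.Tensor, new_grid: int) -> torch.Tensor:
    """pos_embed: (1, 1 + old_grid**2, C), with a leading [CLS] position."""
    cls_pos, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    old_grid = int(patch_pos.shape[1] ** 0.5)
    c = patch_pos.shape[-1]
    # (1, N, C) -> (1, C, H, W) so we can interpolate spatially
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, c).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode='bicubic', align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, c)
    return torch.cat([cls_pos, patch_pos], dim=1)

def pixel_shuffle_tokens(x: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Merge each factor x factor neighborhood of tokens into one:
    (B, H*W, C) -> (B, H*W // factor**2, C * factor**2)."""
    b, n, c = x.shape
    h = w = int(n ** 0.5)
    x = x.reshape(b, h, w, c)
    x = x.reshape(b, h // factor, factor, w // factor, factor, c)
    x = x.permute(0, 1, 3, 2, 4, 5)  # group the factor x factor neighbors last
    return x.reshape(b, (h // factor) * (w // factor), c * factor * factor)

pos = torch.randn(1, 1 + 16 * 16, 3200)           # InternViT-6B-224px positions
print(interpolate_pos_embed(pos, 32).shape)       # torch.Size([1, 1025, 3200])

tokens = torch.randn(1, 1024, 3200)               # 32 x 32 patch tokens at 448px
print(pixel_shuffle_tokens(tokens).shape)         # torch.Size([1, 256, 12800])
```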

## Released Models

### Vision Foundation Model

| Model | Date | Download | Note |
| ----- | ---- | -------- | ---- |
| InternViT-6B-448px-V1-5 | 2024.04.20 | 🤗 HF link | supports dynamic resolution; very strong OCR (🔥new) |
| InternViT-6B-448px-V1-2 | 2024.02.11 | 🤗 HF link | 448 resolution |
| InternViT-6B-448px-V1-0 | 2024.01.30 | 🤗 HF link | 448 resolution |
| InternViT-6B-224px | 2023.12.22 | 🤗 HF link | vision foundation model |
| InternVL-14B-224px | 2023.12.22 | 🤗 HF link | vision-language foundation model |

### Multimodal Large Language Model (MLLM)

| Model | Date | Download | Note |
| ----- | ---- | -------- | ---- |
| InternVL-Chat-V1-5 | 2024.04.18 | 🤗 HF link | supports 4K images; very strong OCR; approaches the performance of GPT-4V and Gemini Pro on benchmarks such as MMMU, DocVQA, ChartQA, and MathVista (🔥new) |
| InternVL-Chat-V1-2-Plus | 2024.02.21 | 🤗 HF link | more SFT data and stronger performance |
| InternVL-Chat-V1-2 | 2024.02.11 | 🤗 HF link | scales the LLM up to 34B |
| InternVL-Chat-V1-1 | 2024.01.24 | 🤗 HF link | supports Chinese; stronger OCR |

## Model Usage

We provide example code below to run InternVL-Chat-V1-1 with `transformers`.

You can also use our online demo for a quick experience of this model.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor

path = "OpenGVLab/InternVL-Chat-V1-1"
# If your GPU has more than 40G memory, you can put the entire model on a single GPU.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
# Otherwise, set device_map='auto' to shard the model across multiple GPUs for inference.
# model = AutoModel.from_pretrained(
#     path,
#     torch_dtype=torch.bfloat16,
#     low_cpu_mem_usage=True,
#     trust_remote_code=True,
#     device_map='auto').eval()

tokenizer = AutoTokenizer.from_pretrained(path)

# The model expects 448x448 inputs (256 visual tokens after pixel shuffle).
image = Image.open('./examples/image2.jpg').convert('RGB')
image = image.resize((448, 448))
image_processor = CLIPImageProcessor.from_pretrained(path)

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

generation_config = dict(
    num_beams=1,
    max_new_tokens=512,
    do_sample=False,  # greedy decoding
)

# single-round conversation
question = "请详细描述图片"  # "Please describe the image in detail."
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)

# multi-round conversation
question = "请详细描述图片"  # "Please describe the image in detail."
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)

question = "请根据图片写一首诗"  # "Please write a poem based on the image."
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)
```
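As a back-of-the-envelope check on the 40G comment above: at bfloat16 (2 bytes per parameter), the weights of a 19B-parameter model alone take roughly 38 GB (about 35.4 GiB), before activations and the KV cache, so a single 40 GB GPU is a tight fit and smaller GPUs need `device_map='auto'`. A quick sanity calculation:

```python
# Rough weight-only memory estimate for a 19B-parameter model in bfloat16.
# Activations and the KV cache add to this, so 40 GB is a tight fit.
params = 19e9          # ~19B parameters (ViT + projector + LLM)
bytes_per_param = 2    # bfloat16
print(f"{params * bytes_per_param / 1e9:.0f} GB "
      f"({params * bytes_per_param / 2**30:.1f} GiB)")  # 38 GB (35.4 GiB)
```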

## Examples

In this update, InternVL-Chat has improved support for Chinese and OCR.

As shown below, although some letters of "Lynyrd Skynyrd" in the image fall outside the camera's frame and the "T" in "TOUR" is occluded, the model still recognizes the text correctly.


This model can also conduct an in-depth analysis of AAAI's official website and identify important information on the web page.


## Evaluation

See here for detailed evaluation results.

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}

@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
```

## License

This project is released under the MIT license. Parts of this project contain code and models (e.g., LLaMA2) from other sources, which are subject to their respective licenses.

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Acknowledgement

InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!