---
language:
- multilingual
- en
license: apache-2.0
library_name: transformers
tags:
- nlp
- code
- vision
- chemistry
- engineering
- biology
- bio-inspired
- text-generation-inference
- materials science
- mixture-of-experts
- science
- latex
datasets:
- lamm-mit/Cephalo-Bioinspired-Mechanics-Materials
- lamm-mit/Cephalo-Wikipedia-Materials
- OleehyO/latex-formulas
- lamm-mit/OleehyO-latex-formulas
pipeline_tag: image-text-to-text
inference:
  parameters:
    temperature: 0.3
widget:
- messages:
  - role: user
    content: <|image_1|>Can you describe what you see in the image?
---

## Model Summary

Cephalo is a series of multimodal, materials-science-focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks.

A novel aspect of Cephalo's development is its dataset generation method. The extraction process uses advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents, creating well-reasoned image-text pairs with the help of large language models (LLMs). These pairs are then refined and validated through LLM-based natural language processing, ensuring high-quality and contextually relevant training data.

Cephalo can interpret complex visual scenes, generate contextually accurate language descriptions, and answer queries. The model is designed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder and an autoregressive transformer to support complex natural language understanding of multimodal inputs.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png)

Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods.

This version of Cephalo, lamm-mit/Cephalo-Idefics2-3x8b-beta, is a Mixture-of-Experts model based on the Idefics-2 model.
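The dataset-generation step described above can be illustrated with a minimal sketch. The example below uses PyMuPDF and a simple "caption starts with Figure" heuristic purely for illustration; the function name, heuristic, and library choice are assumptions and do not reflect the actual Cephalo extraction pipeline, and the LLM-based refinement step is only indicated in a comment.

```python
# Illustrative sketch only -- not the actual Cephalo extraction pipeline.
# Assumes PyMuPDF is available: pip install pymupdf
import fitz  # PyMuPDF

def extract_image_caption_pairs(pdf_path):
    doc = fitz.open(pdf_path)
    pairs = []
    for page in doc:
        # Candidate captions: text blocks that start with "Figure" or "Fig."
        captions = [b[4].strip() for b in page.get_text("blocks")
                    if b[4].strip().lower().startswith(("figure", "fig."))]
        # Images embedded on this page
        for img_index, img in enumerate(page.get_images(full=True)):
            xref = img[0]
            image_bytes = doc.extract_image(xref)["image"]
            if img_index < len(captions):
                pairs.append({"image": image_bytes, "caption": captions[img_index]})
    return pairs

# The extracted pairs would then be refined and validated with LLM-based
# processing, as described above (not shown here).
```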
The basic model architecture is as follows:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/b7BK8ZtDzTMsyFDi0wP3w.png)

### Download Idefics-2 MoE Model and Sample inference code

```markdown
pip install transformers -U
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig

def count_parameters(model):
    total_params = sum(p.numel() for p in model.parameters())
    trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    # number of parameters in billions
    return total_params / 1e9, trainable_params / 1e9

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_name_moe = "lamm-mit/Cephalo-Idefics2-3x8b-beta"

config = AutoConfig.from_pretrained(model_name_moe, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name_moe, trust_remote_code=True)
moe_model = AutoModelForCausalLM.from_pretrained(
    model_name_moe,
    config=config,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    # quantization_config=quantization_config,
).to(device)

count_parameters(moe_model)
```

Now use the downloaded model for inference:

```python
from transformers.image_utils import load_image

DEVICE = 'cuda'

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```

## Make an Idefics-2-MoE model from scratch using several pre-trained models

Download the .py files that implement the Idefics-2 Mixture-of-Experts vision model:

```markdown
pip install huggingface_hub
```

```python
from huggingface_hub import HfApi, hf_hub_download
from tqdm.notebook import tqdm
import os
import shutil

# Repository details
repo_id = "lamm-mit/Cephalo-Idefics2-3x8b-beta"
api = HfApi()

# List all files in the repository
files_in_repo = api.list_repo_files(repo_id)

# Filter for .py files
py_files = [file for file in files_in_repo if file.endswith('.py')]

# Directory to save the downloaded files
save_dir = "./Idefics2_MoE/"
os.makedirs(save_dir, exist_ok=True)

# Download each .py file
for file_name in tqdm(py_files):
    file_path = hf_hub_download(repo_id=repo_id, filename=file_name)
    new_path = os.path.join(save_dir, file_name)
    shutil.move(file_path, new_path)
    print(f"Downloaded: {file_name}")

print("Download completed.")
```
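The downloaded .py files provide the `Idefics2ForCausalLMMoE` and `Idefics2ForCausalLMMoEConfig` classes used in the construction step below. One way to make them importable is sketched here; the module name is an assumption, so adjust it to whatever .py files actually land in `./Idefics2_MoE/`:

```python
import sys

# Make the downloaded implementation importable
sys.path.append("./Idefics2_MoE/")

# NOTE: the module name below is an assumption -- check the names of the
# downloaded .py files and adjust the import accordingly.
from moe_idefics2 import Idefics2ForCausalLMMoE, Idefics2ForCausalLMMoEConfig
```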
Download models that will form the experts, as well as the base model:

```python
import torch
import copy

from transformers import AutoProcessor, AutoConfig, AutoTokenizer, Idefics2ForConditionalGeneration
from transformers import BitsAndBytesConfig

DEVICE = 'cuda'

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

model_id_1 = 'lamm-mit/Cephalo-Idefics-2-vision-8b-beta'
model_1 = Idefics2ForConditionalGeneration.from_pretrained(
    model_id_1,
    torch_dtype=torch.bfloat16,  # if your GPU allows
    _attn_implementation="flash_attention_2",  # make sure Flash Attention 2 is installed
    trust_remote_code=True,
    # quantization_config=quantization_config,
)  # .to(DEVICE)

processor = AutoProcessor.from_pretrained(
    model_id_1,
    do_image_splitting=True
)

config = AutoConfig.from_pretrained(model_id_1, trust_remote_code=True)

IDEFICS2_CHAT_TEMPLATE = "{% for message in messages %}{{message['role'].capitalize()}}{% if message['content'][0]['type'] == 'image' %}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if line['type'] == 'text' %}{{line['text']}}{% elif line['type'] == 'image' %}{{ '<image>' }}{% endif %}{% endfor %}<end_of_utterance>\n{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}"
processor.chat_template = IDEFICS2_CHAT_TEMPLATE
```

Now, load the rest of the models:

```python
model_id_2 = 'HuggingFaceM4/idefics2-8b-chatty'
model_2 = Idefics2ForConditionalGeneration.from_pretrained(
    model_id_2,
    torch_dtype=torch.bfloat16,  # if your GPU allows
    _attn_implementation="flash_attention_2",  # make sure Flash Attention 2 is installed
    trust_remote_code=True,
    # quantization_config=quantization_config,
)  # .to(DEVICE)

model_id_3 = 'HuggingFaceM4/idefics2-8b'
model_3 = Idefics2ForConditionalGeneration.from_pretrained(
    model_id_3,
    torch_dtype=torch.bfloat16,  # if your GPU allows
    _attn_implementation="flash_attention_2",  # make sure Flash Attention 2 is installed
    trust_remote_code=True,
    # quantization_config=quantization_config,
)  # .to(DEVICE)
```

Put on device:

```python
model_1.to(DEVICE)
model_2.to(DEVICE)
model_3.to(DEVICE)
```

### Construct MoE

```python
dtype = torch.bfloat16  # desired dtype for new layers
base_model = copy.deepcopy(model_1)  # the base model
expert_models = [model_1, model_2, model_3]  # list of expert models

moe_config = Idefics2ForCausalLMMoEConfig(config=config, k=1, num_expert_models=len(expert_models))
moe_model = Idefics2ForCausalLMMoE(moe_config, base_model, expert_models, layer_dtype=dtype)  # .to(DEVICE)

# count_parameters is defined in the inference example above
count_parameters(expert_models[0]), count_parameters(moe_model)
```

Delete models no longer needed:

```python
del model_1
del model_2
del model_3
```

Put MoE model on device:

```python
moe_model.to(DEVICE)
```
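Conceptually, a k=1 mixture-of-experts gate scores the experts from hidden-state features and routes computation to the top-scoring expert. The standalone PyTorch toy below illustrates this routing idea only; the actual `Idefics2ForCausalLMMoE` implementation (gating granularity, layer structure) may differ.

```python
# Toy illustration of top-1 gating over three "experts" -- not the actual Idefics2 MoE code.
import torch
import torch.nn as nn

class ToyTop1MoE(nn.Module):
    def __init__(self, hidden_dim, num_experts=3):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_experts)  # gating layer scores the experts
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_experts)]
        )

    def forward(self, hidden_states):                    # (batch, seq, hidden)
        scores = self.gate(hidden_states)                # (batch, seq, num_experts)
        top1 = scores.argmax(dim=-1)                     # chosen expert per position
        out = torch.zeros_like(hidden_states)
        for i, expert in enumerate(self.experts):
            mask = (top1 == i).unsqueeze(-1)             # route positions to expert i
            out = out + mask * expert(hidden_states)
        return out

toy = ToyTop1MoE(hidden_dim=16)
print(toy(torch.randn(2, 5, 16)).shape)  # torch.Size([2, 5, 16])
```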
Test if it works (untrained):

```python
from transformers.image_utils import load_image

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```

### Now train MoE gating function

```python
import requests
from PIL import Image

# Example images: local validation images for the first expert, web images for the other two
image_1 = Image.open("./VALIDATION/Q15.jpg")
image_1a = Image.open("./VALIDATION/Q31.jpg")

image_2 = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw)
image_2a = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw)

image_3 = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw)
image_3a = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw)

# One list of sample prompts per expert
prompts_per_expert = [
    [{"text": "User:What is shown in this image. Explain the importance for materials design.Assistant: The image shows", "image": [image_1]},
     {"text": "User:What is shown in this image. Explain the importance for materials design.Assistant: The image shows", "image": [image_1a]},
    ],
    [{"text": "User:What is shown in this image. Assistant: The image shows a human.", "image": [image_2]},
     {"text": "User:What is shown in this image, and what does it mean in terms of human history? Assistant: The image shows a historical image of human development.", "image": [image_2a]},
    ],
    [{"text": "User:What is shown in this image. Provide a brief answer. Assistant: This is an apple.", "image": [image_3]},
     {"text": "User:What is shown in this image. Brief and concise answer. Assistant: The image shows an apple.", "image": [image_3a]},
    ],
]

# Train the gating layer parameters from hidden states
gating_layer_params = moe_model.train_gating_layer_params_from_hidden_states(
    processor, prompts_per_expert,
    epochs=1000, loss_steps=100, lr=5e-5, layer_offset=0,
)

# Set parameters for a specific layer
moe_model.set_gating_layer_params(gating_layer_params)
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/mh4eFDuFsTBOYbjc38PYz.png)
Inference after MoE gating layers are trained:

```python
from transformers.image_utils import load_image

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```

### Push to hub and save locally

```python
repo_id = '...'  # your Hugging Face username or organization
moe_name = 'Cephalo-Idefics2-3x8b-beta'

processor.push_to_hub(f'{repo_id}/{moe_name}')
moe_model.push_to_hub(f'{repo_id}/{moe_name}')
```

Save locally:

```python
processor.save_pretrained(moe_name)
moe_model.save_pretrained(moe_name)
```
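To later reload the pushed model, the same pattern from the first section applies. This is a brief sketch; it assumes the custom MoE code files were uploaded together with the weights so that `trust_remote_code=True` can find them.

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig

# Same repo_id/moe_name as used in the push_to_hub call above
model_name_moe = f"{repo_id}/{moe_name}"

config = AutoConfig.from_pretrained(model_name_moe, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name_moe, trust_remote_code=True)
moe_model = AutoModelForCausalLM.from_pretrained(
    model_name_moe,
    config=config,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to("cuda")
```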