Making large AI models cheaper, faster and more accessible
Updated Nov 12, 2024 - Python
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model
[CVPR2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! Also supports many more LMs, such as miniGPT4, StableLM, and MOSS.
Chronos: Pretrained (Language) Models for Probabilistic Time Series Forecasting
EVA Series: Visual Representation Fantasies from BAAI
DeepSeek-VL: Towards Real-World Vision-Language Understanding
From images to inference with no labeling: use foundation models to train supervised models.
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
Emu Series: Generative Multimodal Models from BAAI
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation
Create interactive views of any dataset.
Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
[CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.