MINT-1T
by mlfoundations
Vision-Language Pretraining · Commercial OK
570.5K downloads · 2 likes
Size: 100B < n < 1T

Description
🍃 MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
🍃 MINT-1T is an open-source Multimodal INTerleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in… See the full description on the dataset page: https://huggingface.co/datasets/mlfoundations/MINT-1T-PDF-CC-2023-14.
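To make "interleaved" concrete: each document mixes text spans and images in their original reading order. The sketch below builds a toy document in that spirit and flattens it into a token-like sequence with image sentinels, a common preprocessing step for multimodal pretraining. The field names (`type`, `content`) and the `<image>` marker are illustrative assumptions, not MINT-1T's actual schema; the real shards are in webdataset format and can be streamed with the Hugging Face `datasets` library.

```python
# Toy interleaved multimodal document: text spans and image slots in reading order.
# Field names here are illustrative, NOT MINT-1T's actual schema.

def linearize(doc):
    """Flatten an interleaved document into a token-like sequence,
    replacing each image with a sentinel <image> marker."""
    seq = []
    for item in doc:
        if item["type"] == "text":
            seq.extend(item["content"].split())
        elif item["type"] == "image":
            seq.append("<image>")  # stand-in for an image embedding slot
    return seq

doc = [
    {"type": "text", "content": "Figure 1 shows the architecture."},
    {"type": "image", "content": "fig1.png"},
    {"type": "text", "content": "Results follow below."},
]

print(linearize(doc))
```

In a real pipeline the `<image>` sentinel would be replaced by vision-encoder features; streaming the actual shards would look roughly like `load_dataset("mlfoundations/MINT-1T-PDF-CC-2023-14", streaming=True)`, though consult the dataset page for the exact loading recipe.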
Tags
task_categories: image-to-text, text-generation · language: en · license: cc-by-4.0 · size_categories: 1M<n<10M · format: webdataset · modality: image, text · library: datasets, webdataset, mlcroissant · arxiv: 2406.11271 · region: us · multimodal