VLM Fine-tune
gemma-3-4b-it-qat-4bit
by mlx-community
968.6K downloads
6 likes
39/100
License: other
Base Model: fine-tuned from OpenGVLab/InternVL3-1B-Instruct
Tags: transformers, safetensors, gemma3, image-text-to-text, internvl, custom_code, mlx, conversational, multilingual, license:other, text-generation-inference, endpoints_compatible, region:us