GLM-4.7-Flash-AWQ
by QuantTrio

Type: LLM (general fine-tune)
Downloads: 46.9K
Likes: 7
License: MIT
Base model: fine-tuned from zai-org/GLM-4.7-Flash

Tags: transformers, safetensors, glm4_moe_lite, text-generation, vLLM, AWQ, conversational, en, zh, arxiv:2508.06471, license:mit, endpoints_compatible, 4-bit, awq, region:us