Text Generation
Transformers
PyTorch
19 languages
t5
text-generation-inference
Inference Endpoints

You sure this is 248B params?

#3
by unaidedelf87777 - opened

It seems you made a typo in your readme. It says this is 248B params, but the pytorch_model.bin is only 189 bytes, which means it's almost empty. I don't know if this means the model failed to upload or what, but just letting ya know.

@unaidedelf87777 I think it may be quantized via a new advanced method

@pszemraj, there is no way to quantize a 248B param model down to 189 bytes. From 200-300 GB to 189 bytes? Not gonna happen. That's physically impossible.
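A quick back-of-envelope check makes the point concrete (the sizes below are illustrative lower bounds, not the actual checkpoint layout): even at an absurd 1 bit per parameter, 248B parameters cannot come anywhere near 189 bytes.

```python
# Minimum storage for 248B parameters at various precisions.
n_params = 248e9

size_fp16 = n_params * 2    # bytes at 16-bit floats
size_int4 = n_params * 0.5  # bytes at 4-bit quantization
size_1bit = n_params / 8    # bytes at a hypothetical 1 bit/param

for label, size in [("fp16", size_fp16), ("int4", size_int4), ("1-bit", size_1bit)]:
    print(f"{label}: {size / 1e9:.0f} GB")

# Even the 1-bit lower bound is ~31 GB, over a hundred million times
# larger than the 189-byte pytorch_model.bin.
```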

No, it's an unfinished project.