Change max_position_embeddings to 512

#21

When embedding in a CUDA-enabled environment, a device-side assertion error occurs when I try to encode a sequence longer than 512 tokens. It looks like the config.json is wrong; the real max_position_embeddings is 512.
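For reference, a minimal sketch (my own, not from the thread) that checks the limit shipped in config.json and builds an over-length input of the kind that can trip the device-side assert; the repeated dummy text is just a stand-in for a long document:

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "sentence-transformers/all-mpnet-base-v2"

# Inspect the position-embedding limit as declared in config.json.
config = AutoConfig.from_pretrained(model_id)
print(config.max_position_embeddings)  # the value under discussion here

# Tokenizing without truncation can yield more than 512 positions,
# which is what overruns the position-embedding table on CUDA.
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoded = tokenizer("some very long text " * 400, return_tensors="pt")
print(encoded["input_ids"].shape)  # sequence length exceeds 512
```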

Sentence Transformers org

I think you might be right; 512 seems more reasonable. However, you should still limit the sequence length to 384, as defined here: https://huggingface.co/sentence-transformers/all-mpnet-base-v2/blob/main/sentence_bert_config.json#L2
This is the length the model was trained with, and what it should perform best at.
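A minimal sketch of how that limit surfaces through the sentence-transformers API: the `max_seq_length` attribute mirrors sentence_bert_config.json, and longer inputs are truncated before encoding, so the assertion never triggers at the default setting.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Reflects max_seq_length from sentence_bert_config.json (384 for this model).
print(model.max_seq_length)

# Inputs longer than max_seq_length are truncated before encoding,
# so even very long documents embed without a CUDA-side error.
embeddings = model.encode(["some very long text " * 400])
print(embeddings.shape)
```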

