Allow returning attention weights

#1
by orby-yanan - opened
No description provided.
orby-yanan changed pull request status to open
Mosaic ML, Inc. org
edited Sep 15, 2023

Hi, could you please open this on https://github.com/mosaicml/llm-foundry? We upstream updates to these models from there (and that way we can propagate the change to all MPT models easily).

Cannot merge
This branch has merge conflicts in the following files:
  • blocks.py
  • modeling_mpt.py
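
For context, the PR carries no description, but the conflicting files (blocks.py and modeling_mpt.py) suggest the change threads an output-attention flag from the model forward through each block into the attention module. A minimal sketch of that pattern, assuming hypothetical names (`needs_weights`, `attn_forward`) rather than the actual diff:

```python
import torch
import torch.nn.functional as F

def attn_forward(query, key, value, needs_weights=False):
    # Sketch of an attention forward that can optionally surface the
    # post-softmax attention weights. `needs_weights` is a hypothetical
    # flag name, not necessarily what the PR uses.
    scale = query.size(-1) ** -0.5
    scores = torch.matmul(query, key.transpose(-2, -1)) * scale
    weights = F.softmax(scores, dim=-1)
    out = torch.matmul(weights, value)
    # Return weights only when requested, so callers that ignore them
    # keep the original (output, None) contract unchanged.
    return (out, weights) if needs_weights else (out, None)
```

Under this assumption, the block and model forwards would simply accept the same flag and pass it down, collecting the per-layer weights for the caller, which is why both blocks.py and modeling_mpt.py conflict.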
