Andrew Reed

andrewrreed

AI & ML interests

Applied ML, Practical AI, Inference & Deployment, LLMs, Multi-modal Models, RAG

andrewrreed's activity

replied to their post 12 days ago

Thanks! And yes, several people have pointed out the light mode color issue... will push a fix when I get the chance

posted an update 13 days ago
🔬 Open LLM Progress Tracker 🔬

Inspired by the awesome work from @mlabonne, I created a Space to monitor the narrowing gap between open and proprietary LLMs as scored by the LMSYS Chatbot Arena Elo ratings 🤗

The goal is to have a continuously updated place to easily visualize these rapidly evolving industry trends 🚀

🔗 Open LLM Progress Tracker: andrewrreed/closed-vs-open-arena-elo
🔗 Source of Inspiration: https://www.linkedin.com/posts/maxime-labonne_arena-elo-graph-updated-with-new-models-activity-7187062633735368705-u2jB/
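
For the curious, the core of the tracker is just a gap computation over leaderboard scores. Here's a minimal sketch, assuming a hypothetical arena_elo.csv export with model, elo, and open_weights columns (the Space itself pulls its data from the Chatbot Arena leaderboard):

import pandas as pd

# hypothetical export: one row per model with its Arena Elo score and
# a boolean flag for whether the model's weights are openly available
df = pd.read_csv("arena_elo.csv")  # columns: model, elo, open_weights

# best open and best proprietary models at this snapshot
best_open = df[df["open_weights"]].nlargest(1, "elo").iloc[0]
best_closed = df[~df["open_weights"]].nlargest(1, "elo").iloc[0]

print(f"Top open:        {best_open['model']} ({best_open['elo']:.0f})")
print(f"Top proprietary: {best_closed['model']} ({best_closed['elo']:.0f})")
print(f"Elo gap: {best_closed['elo'] - best_open['elo']:.0f}")
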
posted an update about 1 month ago
IMO, the "grounded generation" feature from Cohere's CommandR+ has flown under the radar...

For RAG use cases, responses directly include inline citations, making source attribution an inherent part of generation rather than an afterthought 😎

Who's working on an open dataset with this for the HF community to fine-tune with??

πŸ”—CommandR+ Docs: https://docs.cohere.com/docs/retrieval-augmented-generation-rag

πŸ”—Model on the πŸ€— Hub: CohereForAI/c4ai-command-r-plus
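
For anyone who hasn't tried it, here's roughly what it looks like with Cohere's Python SDK: retrieved chunks go in via the documents parameter, and the response carries character-span citations alongside the generated text. A sketch based on the docs linked above (check them for the exact request/response shapes):

import cohere

co = cohere.Client("<COHERE_API_KEY>")  # replace with your key

response = co.chat(
    model="command-r-plus",
    message="What does the report say about Q3 revenue?",
    # chunks retrieved by your own RAG pipeline
    documents=[
        {"title": "q3-report", "snippet": "Q3 revenue grew 12% year over year."},
        {"title": "q2-report", "snippet": "Q2 revenue was flat versus Q1."},
    ],
)

print(response.text)

# each citation maps a span of the answer back to its source documents
for citation in response.citations:
    print(citation.start, citation.end, citation.text, citation.document_ids)
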
replied to their post 3 months ago

Thanks for sharing! I'm guessing this is related to the .strip() call in the template?

I've shared this feedback internally and we'll get a fix out soon! cc @Rocketknight1 @Xenova

replied to their post 3 months ago

The latter, just those with a chat template set.

posted an update 3 months ago
🚀 It's now easier than ever to switch from OpenAI to open LLMs

Hugging Face's TGI now supports an OpenAI-compatible Chat Completions API

This means you can transition code that uses OpenAI client libraries (or frameworks like LangChain 🦜 and LlamaIndex 🦙) to run open models by changing just two lines of code 🤗

⭐ Here's how:
from openai import OpenAI

# initialize the client, but point it at a TGI endpoint instead of OpenAI
client = OpenAI(
    base_url="<ENDPOINT_URL>" + "/v1/",  # replace with your endpoint url
    api_key="<HF_API_TOKEN>",  # replace with your token
)

chat_completion = client.chat.completions.create(
    model="tgi",  # TGI serves a single model, so the name here is just a placeholder
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why is open-source software important?"},
    ],
    stream=True,
    max_tokens=500,
)

# iterate over the stream and print tokens as they arrive
# (the final chunk's delta.content can be None, hence the `or ""`)
for message in chat_completion:
    print(message.choices[0].delta.content or "", end="")
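
And because the endpoint speaks the OpenAI protocol, the same two-line change applies in frameworks built on the OpenAI client. A sketch with LangChain (assumes the langchain-openai package; untested):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="<ENDPOINT_URL>" + "/v1/",  # same TGI endpoint as above
    api_key="<HF_API_TOKEN>",  # replace with your token
    model="tgi",
    max_tokens=500,
)

print(llm.invoke("Why is open-source software important?").content)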


🔗 Blog post ➡ https://huggingface.co/blog/tgi-messages-api
🔗 TGI docs ➡ https://huggingface.co/docs/text-generation-inference/en/messages_api
replied to victor's post 4 months ago

For models like Mixtral that don't have an explicit "system prompt" in the chat template, how is the system prompt handled? Is it just prepended to the first input from the user?
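
A minimal sketch of the "prepend to the first user message" approach, if that's indeed how it's handled:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

system_prompt = "You are a helpful assistant."
messages = [{"role": "user", "content": "Why is open-source software important?"}]

# the template only defines user/assistant roles, so fold the
# system prompt into the first user turn before templating
messages[0]["content"] = system_prompt + "\n\n" + messages[0]["content"]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)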