models put heavier weight on the end of the context window

recall is strongest at the end of the prompt, then the beginning, then the middle

position matters more for longer prompts
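A minimal sketch of how that ordering might be applied when assembling a long prompt (the function and parameter names are illustrative, not from the talk):

```python
def build_prompt(documents: list[str], question: str, instructions: str) -> str:
    """Order prompt sections by how heavily the model weights each position."""
    parts = [
        instructions,               # beginning: second-strongest position for key guidance
        "\n\n".join(documents),     # middle: bulk content, recalled least reliably
        f"Question: {question}",    # end: restate the actual task last
        instructions,               # optionally repeat critical instructions at the end
    ]
    return "\n\n".join(parts)
```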

RAG

retrieved document content is inserted into the prompt

shorter chunks can make it easier for the model to locate the relevant info within each chunk

for a ~6,000-token chunk, ask a narrow question ("does this contain X?") or ask for a summary
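A rough sketch of a per-chunk relevance check using the Anthropic Python SDK; the chunk size, splitter, and model id are placeholders, not from the talk:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def split_into_chunks(text: str, max_chars: int = 6000) -> list[str]:
    """Naive fixed-size splitter; real pipelines usually split on sections or paragraphs."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def chunk_mentions(chunk: str, topic: str) -> bool:
    """Ask a narrow yes/no question of a single chunk instead of the whole document."""
    reply = client.messages.create(
        model="claude-3-5-haiku-latest",  # placeholder model id
        max_tokens=5,
        messages=[{
            "role": "user",
            "content": (
                f"<chunk>\n{chunk}\n</chunk>\n\n"
                f"Does this chunk contain information about {topic}? Answer yes or no."
            ),
        }],
    )
    return reply.content[0].text.strip().lower().startswith("yes")
```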

prefills

prepend each response with the name of the current persona in brackets, e.g. [SAM]
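A sketch of a response prefill that forces the persona tag (the persona name and model id are illustrative): the final assistant message is a partial response that Claude continues from.

```python
import anthropic

client = anthropic.Anthropic()

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    max_tokens=300,
    messages=[
        {"role": "user", "content": "Stay in character as Sam, the support agent."},
        {"role": "assistant", "content": "[SAM]"},  # prefill: the reply must start from here
    ],
)
print("[SAM]" + reply.content[0].text)  # continuation does not repeat the prefill
```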

handle users who do things you can't write a fixed protocol for


Building with Anthropic's Claude - The Prompt Doctor is In

The conversation focused on improving language model performance, covering strategies such as providing negative examples, handling extraneous information, using self-correction techniques, and optimizing models for logical reasoning. Speakers shared their experiences with negative prompting in particular, weighing its potential benefits against concerns about hallucinations.

Transcript