jni, @simon OK, I'm finally trying out your llm library and it's neat. I am intentionally limiting myself to open source and local models. Could you write a post about these? Specifically:
- strengths/weaknesses of each model?
- strategies for working around context-length limits? I'm currently interested in text summarisation and immediately hit the limit with mistral-7b-instruct-v0*
- yes, I did ask Mistral itself to suggest strategies, but they were distressingly manual and brittle/non-generalisable. 😂
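
For context, the sort of map-reduce chunking I've been hand-rolling looks roughly like this (a sketch only; `summarise` here is a stand-in that I'd replace with an actual model call, e.g. via the llm library):

```python
def chunk(text, size=1500, overlap=200):
    """Split text into overlapping character chunks small enough for the context window."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # overlap so sentences cut at a boundary still appear whole somewhere
    return chunks


def summarise(text):
    # Stand-in summariser: a real version would prompt the local model here.
    return text[:100]


def map_reduce_summarise(text):
    # "Map": summarise each chunk independently, then "reduce":
    # summarise the concatenated partial summaries.
    partials = [summarise(c) for c in chunk(text)]
    return summarise("\n".join(partials))
```

It works, but the chunk size, overlap, and number of reduce passes all need hand-tuning per model and per document, which is exactly the brittleness I mean.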