LLMLingua
Repository: LLMLingua
Author: microsoft · Source status: Clear source
[EMNLP'23, ACL'24] To speed up LLM inference and enhance the model's perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 67%
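A minimal usage sketch of the compression described above, following the repository's documented `PromptCompressor` API; the input text, instruction, question, and token budget below are placeholder values, and parameter names or result keys may differ across versions.

```python
# Sketch: compressing a long prompt with LLMLingua before sending it to an LLM.
from llmlingua import PromptCompressor

# Loads the default small language model used to score token informativeness.
compressor = PromptCompressor()

# Placeholder for the long context you want to shrink
# (e.g. retrieved documents or chat history).
long_context = "..."

result = compressor.compress_prompt(
    long_context,
    instruction="Answer the question using the context.",  # hypothetical task
    question="What is the main finding?",                  # hypothetical query
    target_token=200,  # rough token budget for the compressed prompt
)

# Result keys follow the repository's README examples.
print(result["compressed_prompt"])  # pass this to the downstream LLM
print(result["ratio"])              # achieved compression ratio
```

The compressed prompt is a drop-in replacement for the original context in any downstream LLM call, which is where the inference speedup comes from.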