LLMLingua compresses prompts and the KV-Cache to speed up LLM inference and enhance the model's perception of key information, achieving up to 20x compression with minimal performance loss.
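As a rough illustration of the compression workflow, here is a minimal Python sketch using the package's PromptCompressor interface; the parameter names and returned fields follow the library's published examples, but exact defaults may differ across versions.

from llmlingua import PromptCompressor

# Load the compressor; by default it downloads a small causal LM
# to score token importance (default model may vary by version).
compressor = PromptCompressor()

long_prompt = "..."  # the verbose context you want to shrink

# Ask for roughly 200 tokens; the library drops low-information
# tokens while trying to preserve key content.
result = compressor.compress_prompt(
    long_prompt,
    instruction="Summarize the following document.",
    question="What are the main findings?",
    target_token=200,
)

print(result["compressed_prompt"])  # the shortened prompt
print(result["ratio"])              # achieved compression ratio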
llmlingua is safe to use (health: 46/100)
Get this data programmatically — free, no authentication.
curl https://depscope.dev/api/check/pypi/llmlingua

Last updated · 2024-04-09T08:21:55.428185Z
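If you prefer to call the endpoint from code rather than curl, a minimal Python sketch follows; the response schema is not documented on this page, so the payload is printed raw rather than assuming specific field names.

import requests

# Same endpoint as the curl command above; no authentication needed.
url = "https://depscope.dev/api/check/pypi/llmlingua"
response = requests.get(url, timeout=10)
response.raise_for_status()

data = response.json()
# Inspect the payload to see the actual schema (e.g. a health score).
print(data)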