Announcement 2025-06-12
Check out our new paper on better leveraging the LLM KV cache to model multi-document interdependencies, "Graph-KV: Breaking Sequence via Injecting Structural Biases into Large Language Models," led by Haoyu!