5 papers across 3 sessions
We propose ModuLM, a flexible framework for LLM-based molecular relational learning, supporting multimodal inputs and dynamic model construction.
By jointly modeling patient conditions and substructure-aware representations, our model enhances both accuracy and interpretability, ultimately enabling safer, more personalized drug recommendations.
Optimize KV cache eviction by adaptively allocating cache budgets across attention heads, enabling efficient LLM inference.
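The head-wise budget idea above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the entropy-based weighting and the helper names (`allocate_head_budgets`, `evict_per_head`) are assumptions chosen to show one plausible way to split a total KV-cache budget unevenly across heads.

```python
import numpy as np

def allocate_head_budgets(attn_scores, total_budget):
    """Split a total KV-cache budget across heads in proportion to how
    dispersed each head's attention is (flatter heads get more slots).

    attn_scores: [num_heads, seq_len] non-negative per-token importance.
    """
    # Dispersion proxy: entropy of each head's normalized score distribution.
    probs = attn_scores / attn_scores.sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    weights = entropy / entropy.sum()
    # Each head keeps at least one cached token.
    return np.maximum(1, np.round(weights * total_budget).astype(int))

def evict_per_head(attn_scores, budgets):
    """For each head, keep the indices of its highest-scoring tokens."""
    return [np.argsort(s)[::-1][:b] for s, b in zip(attn_scores, budgets)]

scores = np.array([
    [0.70, 0.10, 0.10, 0.10],  # peaked head: few tokens matter
    [0.25, 0.25, 0.25, 0.25],  # flat head: all tokens matter
])
budgets = allocate_head_budgets(scores, total_budget=5)
kept = evict_per_head(scores, budgets)
```

Under this weighting, the flat head receives a larger share of the budget than the peaked head, matching the intuition that a head attending broadly needs more cache entries to preserve its behavior.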