To tackle the challenges of modality imbalance and heterogeneity in multimodal knowledge graph completion (MMKGC), this paper proposes a novel MMKGC framework that combines a large vision-language model, cross-modal alignment, and adaptive multimodal fusion.
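To make the "adaptive multimodal fusion" idea concrete, the following is a minimal sketch of one common realization: a learned gate produces per-entity weights over the modality embeddings (e.g. structural, visual, textual), so informative modalities dominate and noisy ones are down-weighted. All names, shapes, and the gating design here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(embs, gate_w):
    """Fuse per-entity modality embeddings with a learned gate (sketch).

    embs:   (batch, M, d) stacked modality embeddings, e.g.
            M = 3 for structural / visual / textual.
    gate_w: (M*d, M) gating matrix; in a real model this is trained
            jointly with the rest of the framework.
    Returns (batch, d) fused entity embeddings.
    """
    b, m, d = embs.shape
    logits = embs.reshape(b, m * d) @ gate_w          # (batch, M) gate scores
    weights = softmax(logits, axis=-1)                # per-entity modality weights
    return (weights[:, :, None] * embs).sum(axis=1)   # weighted sum over modalities

# Illustrative usage with random embeddings and an untrained gate
rng = np.random.default_rng(0)
embs = rng.normal(size=(4, 3, 8))                     # 4 entities, 3 modalities, dim 8
gate_w = rng.normal(size=(3 * 8, 3))
fused = adaptive_fusion(embs, gate_w)                 # shape (4, 8)
```

Because the weights come from a softmax, each entity's fused vector is a convex combination of its modality embeddings, which is what lets the model adapt per entity to missing or noisy modalities.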