2 papers across 2 sessions
AIM introduces a new scheme that improves model-merging performance in LLMs by combining continual-learning principles with activation-space-based model compression.
OAT takes a first step toward foundation models for topology optimization by combining a neural-field autoencoder with latent diffusion models, trained at scale on OpenTO, a new general dataset of 2M optimized topologies.