2 papers across 1 session
We propose Memory-Integrated Reconfigurable Adapters, a unified ML framework that attaches Hopfield-style associative memories atop a shared backbone, and show that it adapts flexibly across domain shifts and sequential task exposure.
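A minimal sketch of how such an adapter could be wired, assuming one-step retrieval from a modern (continuous) Hopfield memory added residually to frozen backbone features; the class name `HopfieldAdapter`, the pattern count, and the inverse temperature `beta` are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

class HopfieldAdapter(nn.Module):
    """Hypothetical adapter: one-step modern-Hopfield retrieval over a
    learned pattern memory, added residually to backbone features."""

    def __init__(self, dim: int, num_patterns: int = 64, beta: float = 1.0):
        super().__init__()
        # Learnable stored patterns (the associative memory).
        self.patterns = nn.Parameter(torch.randn(num_patterns, dim) / dim ** 0.5)
        self.beta = beta  # inverse temperature of the retrieval softmax

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, dim) features from a frozen shared backbone,
        # used as queries against the stored patterns.
        scores = self.beta * h @ self.patterns.T           # (batch, num_patterns)
        retrieved = torch.softmax(scores, dim=-1) @ self.patterns
        return h + retrieved                               # residual refinement

# Usage: keep the backbone frozen and train one adapter per task/domain.
backbone = nn.Linear(32, 128)
for p in backbone.parameters():
    p.requires_grad_(False)
adapter = HopfieldAdapter(dim=128)
out = adapter(backbone(torch.randn(4, 32)))                # (4, 128)
```

Under this reading, reconfigurability would amount to swapping or training a separate pattern bank per task while the shared backbone stays fixed.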
Transformer-based language models learn low-dimensional task manifolds across layers; similar trends in intrinsic dimension across models point to shared compression strategies despite differences in architecture and size.
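Intrinsic dimension of per-layer representations is commonly estimated with the TwoNN method (Facco et al., 2017), which this summary does not specify; a self-contained sketch under that assumption, with synthetic activations standing in for real hidden states:

```python
import numpy as np

def twonn_id(X: np.ndarray) -> float:
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017) from the
    ratio of second- to first-nearest-neighbour distances."""
    sq = (X ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    np.fill_diagonal(d2, np.inf)                 # exclude self-distances
    nn2 = np.partition(d2, 1, axis=1)[:, :2]     # two smallest squared distances
    mu = np.sqrt(nn2[:, 1] / nn2[:, 0])          # r2 / r1 per point
    mu = mu[np.isfinite(mu) & (mu > 1.0)]        # drop degenerate ratios
    return len(mu) / np.log(mu).sum()            # maximum-likelihood Pareto fit

# Hypothetical layer activations: a 16-dim manifold embedded in width 768.
rng = np.random.default_rng(0)
h = rng.normal(size=(500, 16)) @ rng.normal(size=(16, 768))
print(twonn_id(h))                               # ~16, far below the ambient 768
```

Running this estimator on each layer's activations yields a per-layer intrinsic-dimension profile; comparable profiles across models would be the kind of evidence the summary describes.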