4 papers across 2 sessions
Using a novel similarity measure, we find that current visual deep networks are diverging from the brain.
We propose a second-order goodness function for Forward-Forward learning based on effective dimensionality; it eliminates the need for negative samples, and we show that added noise improves both performance and robustness at inference.
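Effective dimensionality is commonly operationalized as the participation ratio of the eigenvalues of the activation covariance. A minimal sketch under that assumption (the blurb does not specify the exact definition, and this is not the paper's goodness function itself):

```python
import numpy as np

# Assumed definition: effective dimensionality as the participation ratio
# ED = (sum λ_i)^2 / sum(λ_i^2), where λ_i are eigenvalues of the
# covariance of a layer's activations.
def effective_dimensionality(activations: np.ndarray) -> float:
    """activations: (n_samples, n_units) matrix of layer responses."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (activations.shape[0] - 1)
    eigvals = np.linalg.eigvalsh(cov)      # covariance eigenvalues
    eigvals = np.clip(eigvals, 0.0, None)  # guard tiny negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(0)
# Isotropic noise in 20 dimensions has ED near 20;
# a rank-1 response matrix has ED near 1.
iso = rng.standard_normal((10_000, 20))
print(effective_dimensionality(iso))  # close to 20
```

ED is a second-order statistic (it depends only on the covariance), which is what makes a goodness function built on it computable from positive samples alone.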
We propose a stimulus-wise decomposition of mutual information that is (1) principled, with an axiomatic justification, and (2) tractable, estimated via diffusion models; we demonstrate its application on a model of visual neurons.