2 papers across 2 sessions
In human vision, representational dimensions continually expand to enable abstraction, whereas artificial networks undergo late-stage collapse that may constrain their capacity for flexible generalization.
We propose a second-order goodness function for Forward-Forward learning based on effective dimensionality; it eliminates the need for negative samples, and we show that noise enhances both performance and inference robustness.
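To make "effective dimensionality" concrete, here is a minimal sketch of one standard definition, the participation ratio of the activation covariance eigenvalues. The function name and this particular choice of estimator are illustrative assumptions; the paper's actual second-order goodness function may be defined differently.

```python
import numpy as np

def effective_dimensionality(acts: np.ndarray) -> float:
    """Participation-ratio estimate of effective dimensionality.

    acts: (n_samples, n_features) layer activations.
    Returns (sum of eigenvalues)^2 / (sum of squared eigenvalues),
    which ranges from 1 (all variance in one direction) to n_features
    (variance spread isotropically). This is one common definition,
    not necessarily the paper's exact goodness function.
    """
    centered = acts - acts.mean(axis=0)
    cov = centered.T @ centered / (acts.shape[0] - 1)
    eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)  # guard tiny negatives
    return float(eig.sum() ** 2 / (eig ** 2).sum())
```

For intuition: activations spread equally over two orthogonal directions give an effective dimensionality of 2, while activations confined to a single direction give 1; a goodness function of this form rewards layers whose representations occupy more dimensions.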