3 papers across 3 sessions
We derive brain-like inference as natural gradient descent on free energy (a framework dubbed FOND). The resulting spiking network, the iterative Poisson VAE (iP-VAE), achieves better reconstruction-sparsity trade-offs and stronger out-of-distribution generalization than amortized VAEs.
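For intuition, here is a minimal sketch of the FOND idea under simplifying assumptions: a linear decoder `Phi`, a Gaussian likelihood, independent Poisson latent rates `r`, and a sparsity penalty `beta` standing in for the prior term of the free energy. Because the Fisher information of a Poisson rate is $1/r$, the natural-gradient step is multiplicative in the rates; none of the names below come from the paper's code.

```python
import numpy as np

# Hypothetical toy setup (not the paper's code): linear generative model
# x ~ N(Phi r, I) with independent Poisson latent rates r and a sparsity
# penalty standing in for the KL/prior term of the free energy.
rng = np.random.default_rng(0)
D, K = 64, 128                                 # observation dim, latent dim
Phi = rng.normal(size=(D, K)) / np.sqrt(K)     # decoder weights (held fixed)
x = rng.normal(size=D)                         # one observation
r = np.ones(K)                                 # Poisson rates of the latent units
eta, beta = 0.05, 0.1                          # step size, sparsity weight

for _ in range(50):                            # iterative inference, no amortized encoder
    grad = Phi.T @ (Phi @ r - x) + beta        # Euclidean gradient of the free energy
    # Poisson Fisher information is 1/r, so the natural gradient is r * grad,
    # giving a multiplicative rate update; the clip keeps rates positive.
    r = np.maximum(r * (1.0 - eta * grad), 1e-6)
```

The contrast with amortized VAEs lies in this loop: the posterior is refined iteratively per input rather than emitted in a single encoder pass.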
We introduce $\mu$PC, a reparameterisation of predictive coding networks that enables stable training of 100+ layer ResNets on simple tasks with hyperparameter transfer.
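For context, a minimal sketch of the inference relaxation that deep predictive coding networks must keep stable, which is the loop $\mu$PC's reparameterisation targets. The fully connected layers, linear activations, and plain $1/\sqrt{N}$ initialisation below are illustrative assumptions, not the paper's actual scalings.

```python
import numpy as np

# Toy deep PC network (illustrative, not the muPC parameterisation): depth L,
# width N, linear layers. The energy is the sum of squared layer-wise
# prediction errors; inference relaxes the activities z before weights update.
L, N = 100, 128
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(N, N)) / np.sqrt(N) for _ in range(L)]  # width-scaled init

def pc_energy(zs, Ws):
    """F = sum_l ||z_{l+1} - W_l z_l||^2, the PC energy for linear layers."""
    return sum(np.sum((zs[l + 1] - Ws[l] @ zs[l]) ** 2) for l in range(len(Ws)))

zs = [rng.normal(size=N) for _ in range(L + 1)]   # z_0 = input, z_L = target (clamped)
eta_z = 0.05
for _ in range(20):                               # activity relaxation
    for l in range(1, L):                         # hidden layers only
        eps_here = zs[l] - Ws[l - 1] @ zs[l - 1]  # prediction error at this layer
        eps_above = zs[l + 1] - Ws[l] @ zs[l]     # error at the layer above
        zs[l] -= eta_z * (eps_here - Ws[l].T @ eps_above)  # descend dF/dz_l
```

At 100+ layers this relaxation is where vanilla PC training reportedly becomes unstable; $\mu$PC rescales the parameterisation so that step sizes chosen at small width and depth keep working, which is what enables the hyperparameter transfer mentioned above.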
Meta-RL agents equipped with self-supervised predictive coding modules can learn interpretable, task-relevant representations that approximate Bayes-optimal belief states more closely than those of black-box meta-RL baselines across diverse partially observable environments.
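As one assumed instantiation, the sketch below bolts a self-supervised predictive head onto a recurrent meta-RL agent: the GRU core, the next-observation MSE objective, and names such as `PredictiveAgent` and `lambda_aux` are illustrative choices, not the paper's architecture. The auxiliary loss pressures the hidden state to summarise the observation-action history, i.e. to behave like a belief state.

```python
import torch
import torch.nn as nn

class PredictiveAgent(nn.Module):
    """Recurrent policy with an auxiliary next-observation prediction head."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, act_dim)   # action logits from h_t
        self.predict = nn.Linear(hidden, obs_dim)  # self-supervised head: o_{t+1} from h_t

    def forward(self, obs, prev_act):
        h, _ = self.rnn(torch.cat([obs, prev_act], dim=-1))
        return self.policy(h), self.predict(h), h

agent = PredictiveAgent(obs_dim=8, act_dim=4)
obs = torch.randn(2, 10, 8)                     # (batch, time, obs); dummy rollout
prev_act = torch.randn(2, 10, 4)                # previous actions (one-hot in practice)
logits, pred_next, h = agent(obs, prev_act)

# Predictive-coding-style auxiliary loss: h_t must predict o_{t+1}.
aux_loss = nn.functional.mse_loss(pred_next[:, :-1], obs[:, 1:])
# total_loss = rl_loss + lambda_aux * aux_loss  # combined with the RL objective
```

Probing `h` against the analytic posterior in small POMDPs is one common way to quantify how closely a learned representation tracks the Bayes-optimal belief state.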