2 papers across 2 sessions
We prove divergence results for gradient flows of deep neural networks with analytic activation functions and polynomial target functions.
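For orientation, a minimal sketch of the setup such a result concerns; the loss $L$, network $f_\theta$, and measure $\mu$ are generic placeholders, not the paper's notation. The gradient flow is the ODE

$$\dot\theta(t) = -\nabla_\theta L(\theta(t)), \qquad L(\theta) = \tfrac{1}{2}\int \big(f_\theta(x) - p(x)\big)^2 \, d\mu(x),$$

where $f_\theta$ is a deep network with an analytic activation and $p$ is the polynomial target. A divergence result asserts that the trajectory $\theta(t)$ fails to converge, for instance that $\|\theta(t)\| \to \infty$ rather than settling at a critical point.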
Conditional forecasts that optimise proper scoring rules (i) guarantee correctness of the forecast (performative optimality), (ii) are performatively stable under successive retraining, and (iii) can be compatible with outcome optimisation.
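A minimal sketch of the retraining loop behind (ii), under two illustrative assumptions that are not taken from the paper: a binary outcome whose probability true_prob(f) reacts to the published forecast f (the performative response), and the Brier score as the proper scoring rule:

```python
# Hedged sketch: successive retraining against the Brier score,
# a proper scoring rule, under a stylized performative response.

def true_prob(f: float) -> float:
    # Hypothetical response: the published forecast nudges the
    # actual outcome rate (performativity).
    return 0.3 + 0.4 * f

def retrain(f: float) -> float:
    # Minimising the expected Brier score E[(f' - Y)^2] over f'
    # yields f' = P(Y = 1), i.e. the current true probability.
    return true_prob(f)

f = 0.9  # initial published forecast
for _ in range(50):
    f = retrain(f)  # successive retraining

# At the fixed point f* = true_prob(f*), the forecast is
# performatively stable: retraining no longer changes it.
print(f"stable forecast: {f:.4f}")  # ~0.5000, solving f = 0.3 + 0.4 f
```

Because minimising a proper scoring rule recovers the true conditional probability, each retraining step maps f to true_prob(f); when that response map is a contraction, as in this toy example, the iteration settles at a performatively stable forecast.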