3 papers across 2 sessions
We propose a differentiable structure learning framework for general binary data that makes no parametric assumptions about the data-generating process.
Extending linear RNNs to multi-dimensional structures while remaining stable and parallelizable.