Researcher, Layer 6 AI
2 papers at NeurIPS 2025
We introduce TabDPT, a tabular foundation model that delivers highly accurate predictions on unseen tabular datasets with no further training or hyperparameter tuning, and we demonstrate scaling in both model size and pre-training dataset size.
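To make the "no further training" paradigm concrete, here is a minimal toy sketch of in-context tabular prediction: `fit` merely stores the labeled context, and `predict` runs inference conditioned on it, with no gradient updates or tuning on the new dataset. The distance-weighted predictor below is a hypothetical stand-in for illustration only, not the TabDPT architecture (which is a pre-trained transformer).

```python
import numpy as np

class ToyInContextPredictor:
    """Toy stand-in for the in-context prediction paradigm:
    fit() caches the context set; predict() is pure inference."""

    def fit(self, X, y):
        # "Fitting" is just caching the context, as with a frozen
        # foundation model that conditions on the dataset at test time.
        self.X_ctx = np.asarray(X, dtype=float)
        self.y_ctx = np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Squared distances from each query row to each context row.
        d2 = ((X[:, None, :] - self.X_ctx[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2)                    # similarity weights
        w /= w.sum(axis=1, keepdims=True)  # normalize per query row
        return w @ self.y_ctx              # weighted average of context labels

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=64)  # simple linear target
model = ToyInContextPredictor().fit(X, y)
preds = model.predict(X[:8])
```

The point of the sketch is the interface: a frozen model can serve a brand-new dataset in a single forward pass, which is what makes hyperparameter-free deployment possible.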
CausalPFN is a pre-trained transformer that amortizes causal effect estimation: trained once on simulated data-generating processes, it outputs calibrated effect estimates for new observational datasets with zero tuning.
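The phrase "trained once on simulated data-generating processes" can be illustrated by a toy simulator that pairs each synthetic observational dataset with its ground-truth average treatment effect (ATE), the supervision signal for amortized training. The confounded linear DGP below is a hypothetical minimal sketch; the actual prior over DGPs used by CausalPFN is richer.

```python
import numpy as np

def sample_dgp(rng, n=1000):
    """Draw one synthetic confounded DGP and an observational dataset
    from it, paired with the ground-truth ATE (the training label)."""
    alpha = rng.normal()  # confounder -> treatment propensity
    tau = rng.normal()    # true average treatment effect
    beta = rng.normal()   # confounder -> outcome

    u = rng.normal(size=n)                             # confounder
    p = 1.0 / (1.0 + np.exp(-alpha * u))               # propensity score
    t = (rng.random(n) < p).astype(float)              # binary treatment
    y = tau * t + beta * u + 0.1 * rng.normal(size=n)  # outcome

    X = np.column_stack([u, t])  # observed covariate + treatment
    return X, y, tau

rng = np.random.default_rng(1)
# A pre-training corpus is many such (dataset, true-effect) pairs; the
# transformer learns to map a raw dataset to an effect estimate in one pass.
batch = [sample_dgp(rng) for _ in range(4)]
```

Because the true effect `tau` is known by construction for every simulated dataset, the model can be trained with ordinary supervised learning and then applied to real observational data without any per-dataset fitting.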