We introduce TabDPT, a tabular foundation model that provides highly accurate predictions on unseen tabular datasets with no further training or hyperparameter tuning, and demonstrate scaling in both model size and pre-training dataset size.
We apply conformal prediction to provide statistical guarantees that all important information within a long text is captured by an automatically generated summary.
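The guarantee above follows the general split conformal recipe: compute nonconformity scores on a held-out calibration set, take a finite-sample-corrected quantile, and the resulting prediction sets cover the truth with probability at least 1 − α. A minimal sketch on a toy regression task (the data, the linear model, and α = 0.1 are illustrative assumptions, not the paper's summarization setup):

```python
import math
import random

# Split conformal prediction for regression -- the generic technique,
# not the paper's specific summary-coverage method.
random.seed(0)

# Toy data: y = 2x + Gaussian noise (illustrative assumption).
xs = [random.uniform(0.0, 10.0) for _ in range(600)]
ys = [2.0 * x + random.gauss(0.0, 1.0) for x in xs]

# Split into proper training, calibration, and test sets.
x_tr, y_tr = xs[:200], ys[:200]
x_cal, y_cal = xs[200:400], ys[200:400]
x_te, y_te = xs[400:], ys[400:]

def fit_line(x, y):
    """Least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line(x_tr, y_tr)

# Nonconformity scores: absolute residuals on the calibration set.
scores = sorted(abs(b - (slope * a + intercept)) for a, b in zip(x_cal, y_cal))

alpha = 0.1  # target miscoverage rate
n = len(scores)
# Finite-sample-corrected quantile: the ceil((n+1)(1-alpha))-th smallest score.
k = min(math.ceil((n + 1) * (1 - alpha)), n)
q = scores[k - 1]

# Intervals [pred - q, pred + q] have marginal coverage >= 1 - alpha.
covered = sum(
    1 for a, b in zip(x_te, y_te) if abs(b - (slope * a + intercept)) <= q
)
coverage = covered / len(x_te)
```

The same recipe carries over to summarization once one defines a score measuring how much source information a candidate summary misses; the paper's specific score function is not reproduced here.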