4 papers across 3 sessions
BC-LLM is a novel procedure that integrates Large Language Models (LLMs) into a Bayesian framework for concept discovery. It achieves better predictive performance, converges faster to relevant concepts, and provides rigorous uncertainty quantification.
Label noise in concept bottleneck models (CBMs) degrades prediction performance, interpretability, and intervention effectiveness through a small number of susceptible concepts. We combat this with sharpness-aware training and entropy-based concept correction, restoring the robustness of CBMs.