2 papers across 2 sessions
We introduce a representation-level counterfactual framework to mitigate co-occurrence bias in vision–language models without retraining or prompt engineering.
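The abstract does not spell out the intervention, but the general idea of a representation-level counterfactual edit, applied to a frozen model at inference time, can be sketched as follows. This is a hypothetical illustration, not the paper's method: the function name `counterfactual_edit` and the choice of projecting out a single "co-occurrence direction" are assumptions made for the example.

```python
import numpy as np

def counterfactual_edit(embedding, bias_direction):
    """Project a co-occurrence bias direction out of a representation.

    Hypothetical sketch: edits a frozen model's embedding at inference
    time, with no retraining and no prompt changes, by removing the
    component along an estimated spurious-co-occurrence direction.
    """
    d = bias_direction / np.linalg.norm(bias_direction)
    return embedding - np.dot(embedding, d) * d

# Toy example: a 4-d "image embedding" and a direction assumed to encode
# a spurious object co-occurrence (e.g. "surfboard" implying "person").
rng = np.random.default_rng(0)
z = rng.normal(size=4)
bias = rng.normal(size=4)

z_cf = counterfactual_edit(z, bias)
# The edited representation is orthogonal to the bias direction.
print(abs(np.dot(z_cf, bias)) < 1e-9)  # → True
```

In practice the bias direction would be estimated from data (e.g. as a mean difference between embeddings of co-occurring and non-co-occurring contexts); the projection itself is the counterfactual step, answering "what would the representation be if the co-occurrence signal were absent?"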
Statistical prediction may be sufficient to drive the emergence of internal causal models and causal inference capacities in deep neural networks.