Yingshan Chang

Greetings! I am a 6th-year PhD student at the Language Technologies Institute of Carnegie Mellon University. I am very fortunate to be advised by Professor Yonatan Bisk. I received my Bachelor's degree in Computer Science & Mathematics with first-class honors from the Hong Kong University of Science and Technology. I love trees and nature documentaries. I tend to get along with people who write, or will write, good books. The evolution of how I have expressed my intellectual pursuits can be found here.

  My current research focus is principled generalization, which raises the question of how to formalize, or discover, a notion of functional or conceptual similarity that links "seen" and "unseen" instances. From this perspective, I have explored compositional generalization [image synthesis] and inductive generalization [counting], [inductive learning paradigm] through a more formal lens, shedding light on empirical puzzles.
  Broadly, I am very interested in the computational mechanisms underlying analogy. This inquiry demands that we strengthen the tie between AI and cognitive science. I believe effective generalization boils down to effective analogy-making, which requires a mechanism to account for a notion of "seemingly different but actually the same".
  My past work has engaged with language [iclr 25], vision-language [cvpr 22], and diffusion models [eccv 24]. Together, these projects surface distinct factors that shape generalization in deep learning, spanning from individual components (data, architecture) to the organizational level of learning (the paradigm).

✨ I'm actively exploring postdoc / research-fellow opportunities starting as early as Fall 2026, and would sincerely appreciate any advice or pointers. ✨

Publications

Yingshan Chang and Yonatan Bisk. "Learning Model Successors." arXiv:2502.00197.
Yingshan Chang and Yonatan Bisk. "Language Models Need Inductive Biases to Count Inductively." ICLR 2025.
Jimin Sun, So Yeon Min, Yingshan Chang, and Yonatan Bisk. "Tools Fail: Detecting Silent Errors in Faulty Tools." EMNLP 2024.
Shaurya Dewan, Rushikesh Zawar, Prakanshul Saxena, Yingshan Chang, Andrew Luo, and Yonatan Bisk. "DiffusionPID: Interpreting Diffusion via Partial Information Decomposition." NeurIPS 2024.
Yingshan Chang, Yasi Zhang, Zhiyuan Fang, Yingnian Wu, Yonatan Bisk, and Feng Gao. "Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation." ECCV 2024.
Syeda Nahida Akter, Sangwu Lee, Yingshan Chang, Yonatan Bisk, and Eric Nyberg. "VISREAS: Complex Visual Reasoning with Unanswerable Questions." Findings of ACL 2024.
Liangke Gui*, Yingshan Chang*, Qiuyuan Huang, Subhojit Som, Alexander G. Hauptmann, Jianfeng Gao, and Yonatan Bisk. "Training Vision-Language Transformers from Captions." Transactions on Machine Learning Research, 2023.
Yingshan Chang, Mridu Narang, Hisami Suzuki, Guihong Cao, Jianfeng Gao, and Yonatan Bisk. "WebQA: Multihop and Multimodal QA." CVPR 2022, pp. 16495-16504. Oral.
Yingshan Chang and Yonatan Bisk. "WebQA: A Multimodal Multihop NeurIPS Challenge." NeurIPS 2021 Competitions and Demonstrations Track, PMLR, 2022, pp. 232-245.

My Way of Seeing

🌟 English Blog · 🏔 Chinese Blog · 🌲 Gallery

Education

Carnegie Mellon University

PhD in Language Technologies  2022 -

Carnegie Mellon University

MS in Language Technologies  2020 - 2022

Hong Kong University of Science and Technology

BS in Computer Science & Mathematics  2016 - 2020

Georgia Institute of Technology

Exchange  Spring 2019

Peking University

AEARU Summer Camp  Summer 2018

Old Projects