Hi! Welcome to my homepage! I am Yuan Yang. I received my PhD in computer science from Vanderbilt University, advised by Prof. Maithilee Kunda. I am now a faculty member of the College of Information, Mechanical, and Electrical Engineering at Shanghai Normal University. I am always looking for collaborators in AI and cognitive science, and I am also accepting new master's students this year.
I have been pursuing big questions about intelligence itself: not only artificial intelligence (AI) but also human intelligence (HI). My research is thus unavoidably interdisciplinary, intertwined with studies of general intelligence, fluid intelligence, intelligence testing, abstract reasoning, visual-spatial reasoning, cognitive information processing, and AI applications in education. Find out more about me and my lab's work at the AIVAS Lab. My SHNU website is under construction.
Feel free to shoot me a message at yuan.yang [at] shnu [dot] edu [dot] cn if you want to know more about my work.
Yuan Yang, Deepayan Sanyal, James Ainooson, Joel Michelson, Effat Farhana, and Maithilee Kunda. A cognitively-inspired neural architecture for visual abstract reasoning using contrastive perceptual and conceptual processing. Preprint, 2023. PDF.
Yuan Yang, Deepayan Sanyal, Joel Michelson, James Ainooson, and Maithilee Kunda. Deep non-monotonic reasoning for visual abstract reasoning tasks. Preprint, 2023. PDF.
Yuan Yang and Maithilee Kunda. Computational models of solving Raven's Progressive Matrices: A comprehensive introduction. Preprint, 2023. PDF.
Yuan Yang, Keith McGreggor, and Maithilee Kunda. Visual-imagery-based analogical construction in geometric matrix reasoning task. Advances in Cognitive Systems, in press, 2023. PDF.
Deepayan Sanyal, Joel Michelson, Yuan Yang, James Ainooson, and Maithilee Kunda. A computational account of self-supervised visual learning from egocentric object play. CogSci 2023. PDF.
James Ainooson, Deepayan Sanyal, Joel Michelson, Yuan Yang, and Maithilee Kunda. An approach for solving tasks on the abstract reasoning corpus. Preprint, 2023. PDF.
Yuan Yang, Deepayan Sanyal, Joel Michelson, James Ainooson, and Maithilee Kunda. An end-to-end imagery-based modeling of solving geometric analogy problems. In Proceedings of the Annual Meeting of the Cognitive Science Society, number 44, 2022. PDF.
Yuan Yang, Deepayan Sanyal, Joel Michelson, James Ainooson, and Maithilee Kunda. A conceptual chronicle of solving Raven's Progressive Matrices computationally. In CEUR Workshop Proceedings, 2022. PDF.
Joel Michelson, Deepayan Sanyal, James Ainooson, Yuan Yang, and Maithilee Kunda. Experimental designs and facets of evidence for computational theory of mind. In CEUR Workshop Proceedings, 2022. PDF.
Yuan Yang, Deepayan Sanyal, Joel Michelson, James Ainooson, and Maithilee Kunda. Automatic item generation of figural analogy problems: A review and outlook. In Proceedings of the Ninth Annual Conference on Advances in Cognitive Systems, 2021. PDF.
Joel Michelson, Deepayan Sanyal, James Ainooson, Yuan Yang, and Maithilee Kunda. Social cognition paradigms ex machinas. In Proceedings of the AAAI Fall Symposium on Computational Theory of Mind for Human-Machine Teams, 2021. PDF.
Yuan Yang, Xiaoan Li, and Lu Zhang. Task-specific pre-learning to improve the convergence of reinforcement learning based on a deep neural network. In 2016 12th World Congress on Intelligent Control and Automation, pages 2209–2214. IEEE, 2016. PDF.
Xiaoan Li, Yuan Yang, Yunming Sun, and Lu Zhang. A developmental actor-critic reinforcement learning approach for task-nonspecific robot. In 2016 IEEE Chinese Guidance, Navigation and Control Conference, pages 2231–2237. IEEE, 2016. PDF.
I am particularly interested in three closely related topics: visual abstract reasoning, analogy making, and mental imagery. Most of my work lies in the interdisciplinary area of AI and cognitive systems.
Visual abstract reasoning tasks are commonly used in human intelligence tests because they are relatively insensitive to non-intelligence factors, such as cultural and educational background, while being closely correlated with core intelligence factors, such as fluid intelligence. I build AI systems to solve visual abstract reasoning tasks. The interesting part of this work is not solving the problems per se, but building an AI's ability for visual abstract reasoning, which allows it to make sense of unseen situations. In fact, the word "reasoning" here is a bit misleading: visual abstract reasoning tasks mainly test the ability to discover the underlying patterns beneath the perceptual stimuli, and once those patterns are found, the reasoning itself is relatively trivial.
Analogy making is at the center of human cognition. Like visual abstract reasoning, analogy problems are widely used in many standardized tests. Visual analogy making can be considered a special case of visual abstract reasoning in which the reasoning is based on analogy. The interesting part of visual analogy making is this: given two objects between which an analogy can be made, when and how do we process inter-object relations, and when and how do we process intra-object relations, i.e., relations between the components within each object? These two aspects of information processing are both opposed to and dependent on each other, much like the question of "which came first: the chicken or the egg?" An iterative, dynamic, possibly attention-based cognitive process probably exists to reconcile these two aspects, and I am looking into possible implementations of such a process in AI systems.
While the imagery debate in human cognition has gone on for decades without resolution, imagery representation is a promising and comparatively underexplored approach in AI research. AI researchers always face a trade-off between different representations; choosing one over another usually means building an AI system that works in some situations but not in others. In contrast, no matter which representation (imagery or propositional) human cognition uses, it leads to robust intellectual ability across all situations. I thus look into possible implementations of mental imagery. The interesting (and tricky) part of this research is that mental imagery is not equivalent to mental images; similarly, computational imagery is not equivalent to computer images. In particular, humans can experience mental imagery when the corresponding perceptual input is absent; humans can mentally manipulate mental imagery and carry out their thinking and reasoning through it, i.e., every step of the thinking process is rendered as a mental image; and abstract concepts can thus be easily obtained from mental imagery. In a word, mental imagery implies visual thinking.
There are already some early, prototypical AI systems that resemble mental imagery, for example, generative deep learning models. The ability to generate images from latent variables is essential for mental imagery; it is analogous to how humans can experience mental imagery without perceptual input. However, human mental imagery is far more sophisticated than image generation. These works can be a good starting point for exploring the realm of computational imagery.
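To make the "image from latent variables" idea concrete, here is a minimal toy sketch, not any specific model: the decoder of a generative model is reduced to a single random linear layer, and "imagining" amounts to sampling a latent vector from a prior and rendering it as pixels. All dimensions and weights here are made up for illustration.

```python
import math
import random

random.seed(0)

LATENT_DIM, IMG_SIDE = 8, 16

# Hypothetical decoder weights: one linear layer mapping an 8-dim latent
# vector to a 16x16 grayscale "image" (purely illustrative, untrained).
W = [[random.gauss(0.0, 0.1) for _ in range(LATENT_DIM)]
     for _ in range(IMG_SIDE * IMG_SIDE)]

def decode(z):
    """Map a latent vector to a 2-D image with pixels squashed to [0, 1]."""
    pixels = [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, z))))
              for row in W]
    return [pixels[r * IMG_SIDE:(r + 1) * IMG_SIDE] for r in range(IMG_SIDE)]

# "Imagining" without perceptual input: sample a latent vector from a
# standard-normal prior and render it as an image.
z = [random.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]
image = decode(z)
print(len(image), len(image[0]))  # 16 16
```

In a real generative model the decoder would be a trained deep network, but the interface is the same: latent vector in, image out, with no perceptual input required.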