Yang Yu @ NJUCS

Chinese CV (中文简历)
Yang Yu (Y. Yu)
Can be pronounced as "young you"
Ph.D., Associate Professor
Department of Computer Science
National Key Laboratory for Novel Software Technology
Nanjing University

Office: 311, Computer Science Building, Xianlin Campus
email:

I received my Ph.D. degree in Computer Science from Nanjing University in 2011 (supervisor: Prof. Zhi-Hua Zhou). I then joined the LAMDA Group (LAMDA Publications) in the Department of Computer Science and Technology of Nanjing University, as an assistant researcher from 2011 and as an associate professor from 2014.

My research interest is in machine learning, a sub-field of artificial intelligence. Currently, I am working on reinforcement learning in various aspects, including optimization, representation, and transfer. For more information, please see my CV. (Detailed CV | CV in PDF)


Recent Update

  • Neuron & Logic: Our recent paper connects neural perception and logic reasoning through abductive learning.
  • Tutorial: We will give a tutorial on Pareto Optimization for Subset Selection at WCCI 2018.
  • ZOOpt: A Python package for derivative-free optimization. Release 0.2.
  • AWRL: We held a successful 2nd Asian Workshop on Reinforcement Learning.
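Derivative-free optimizers such as those packaged in ZOOpt use only function evaluations, never gradients. As a minimal sketch of the idea (a generic (1+1)-style random local search, not ZOOpt's actual API; the objective and parameters are invented for illustration):

```python
import random

def random_local_search(f, dim, budget, step=0.5, seed=0):
    """Minimal (1+1)-style derivative-free optimizer: perturb the
    incumbent with Gaussian noise and keep the better point.
    Only function values are used -- no gradient information."""
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    fx = f(x)
    for _ in range(budget):
        y = [xi + rng.gauss(0.0, step) for xi in x]
        fy = f(y)
        if fy < fx:  # accept only improvements
            x, fx = y, fy
    return x, fx

# A non-differentiable test objective (minimum value 0 at the origin).
def l1_norm(x):
    return sum(abs(xi) for xi in x)

best_x, best_f = random_local_search(l1_norm, dim=2, budget=2000)
```

Because acceptance depends only on comparing function values, the same loop applies unchanged to non-convex, non-differentiable, or noisy objectives where gradient methods do not.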



A quickly learned policy beats the level-3 built-in bot in StarCraft II

Currently, I mainly focus on reinforcement learning. Reinforcement learning searches for a policy of near-optimal decisions by learning autonomously from interactions with the environment. Despite its promising future, reinforcement learning is still in its infancy, and its potential has not been fully realized in many situations. Our team works on improving reinforcement learning in various aspects, including theoretical foundations, optimization, model structure, experience reuse, abstraction, and model building, heading toward sample-efficient methods for large-scale physical-world applications.
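To make "learning from environment interactions" concrete, here is a toy sketch of tabular Q-learning, one of the basic reinforcement learning algorithms. This is illustrative only, not our group's code; the corridor environment and all parameters are invented for the example:

```python
import random

# Toy corridor environment: states 0..4, reward 1 for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action index]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                a = rng.randrange(2)                        # explore
            else:
                a = 0 if q[state][0] >= q[state][1] else 1  # exploit
            next_state, reward, done = step(state, ACTIONS[a])
            # Q-learning update: bootstrap from the best next-state value.
            q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
            state = next_state
    return q

q = q_learning()
# Greedy policy for each non-terminal state.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)]
```

The agent is never told which action is correct; it discovers the optimal "always move right" policy purely from the reward signal, which is the defining trait of reinforcement learning.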

Full publication list >>>




Selected Work

  • Model-based derivative-free optimization (with Hong Qian, Yi-Qi Hu, and others)
    Derivative-free methods can tackle complex optimization problems in real domains, such as non-convex, non-differentiable, and non-continuous problems with many local optima. Our studies address issues including theoretical foundations, high dimensionality, and noisy evaluations.

  • Approximation analysis & Pareto optimization (with Chao Qian, Xin Yao, Zhi-Hua Zhou, and others)
    Evolutionary algorithms are most commonly used to obtain "good-enough" solutions in practice, which relates to their approximation ability. Pareto optimization originates from evolutionary algorithms. It has been shown to be a powerful approximate solver for constrained optimization problems in finite discrete domains, particularly the subset selection problem.

  • The role of diversity in ensemble learning (with Nan Li, Yu-Feng Li, and Zhi-Hua Zhou)
    Ensemble learning is a machine learning paradigm that achieves state-of-the-art performance. Diversity has long been believed to be a key to the good performance of an ensemble approach; previously, however, it served only as a heuristic idea. We show that diversity can play the role of regularization.
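As a concrete illustration of the Pareto-optimization idea in the second item above, the following sketch applies a POSS-style procedure to a toy maximum-coverage instance: coverage (to maximize) and subset size (to minimize) are treated as two objectives, and an archive of non-dominated subsets is evolved by bit-flip mutation. The instance, budget, and parameter choices are invented for illustration; this is a sketch of the technique, not our released code:

```python
import random

def poss(sets, k, budget=3000, seed=0):
    """POSS-style subset selection: maintain an archive of subsets that
    are non-dominated w.r.t. (coverage up, size down), and mutate
    randomly chosen archive members by independent bit flips."""
    rng = random.Random(seed)
    n = len(sets)

    def coverage(mask):
        covered = set()
        for i in range(n):
            if mask[i]:
                covered |= sets[i]
        return len(covered)

    archive = [([0] * n, (0, 0))]  # start from the empty subset
    for _ in range(budget):
        mask, _ = archive[rng.randrange(len(archive))]
        # Flip each bit independently with probability 1/n.
        child = [bit ^ (rng.random() < 1.0 / n) for bit in mask]
        size = sum(child)
        if size >= 2 * k:          # far-infeasible subsets are discarded
            continue
        obj = (coverage(child), size)
        # Keep the child only if no archived subset weakly dominates it.
        if any(o[0] >= obj[0] and o[1] <= obj[1] for _, o in archive):
            continue
        archive = [(m, o) for m, o in archive
                   if not (obj[0] >= o[0] and obj[1] <= o[1])]
        archive.append((child, obj))
    # Best feasible subset: highest coverage among sizes <= k.
    return max((o[0], m) for m, o in archive if o[1] <= k)

# Toy instance: choose at most k=3 of these sets to cover most elements.
sets = [{0, 1, 2, 3}, {4, 5, 6}, {0, 1}, {7}, {8, 9}]
best_cov, best_mask = poss(sets, k=3)
```

Keeping the whole non-dominated front, rather than a single incumbent, lets small intermediate subsets survive as stepping stones, which is the mechanism behind the approximation guarantees of Pareto optimization for subset selection.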

(My Google Scholar Citations)



Teaching

  • Artificial Intelligence. (for undergraduate students. Spring, 2018) >>>Course Page>>>
  • Advanced Machine Learning. (for graduate students. Fall, 2017)
  • Artificial Intelligence. (for undergraduate students. Spring, 2015, 2016, 2017)
  • Data Mining. (for M.Sc. students. Fall, 2014, 2013, 2012)
  • Digital Image Processing. (for undergraduate students from Dept. Math., Spring, 2014, 2013)
  • Introduction to Data Mining. (for undergraduate students. Spring, 2013, 2012)




Contact

National Key Laboratory for Novel Software Technology, Nanjing University, Xianlin Campus Mailbox 603, 163 Xianlin Avenue, Qixia District, Nanjing 210023, China
(In Chinese:) 南京市栖霞区仙林大道163号,南京大学仙林校区603信箱,软件新技术国家重点实验室,210023。
