Yang Yu @ NJUCS

Last modified: 2017/06/29
(Chinese CV / 中文简历)
Yang Yu (Y. Yu)
Can be pronounced as "young you"
Ph.D., Associate Professor
LAMDA Group
Department of Computer Science
National Key Laboratory for Novel Software Technology
Nanjing University

Office: 311, Computer Science Building, Xianlin Campus
email:

I received my Ph.D. degree in Computer Science from Nanjing University in 2011, supervised by Prof. Zhi-Hua Zhou. I then joined the LAMDA Group (LAMDA Publications) in the Department of Computer Science and Technology of Nanjing University, as an assistant researcher from 2011 and as an associate professor from 2014.

I am interested in artificial intelligence, particularly in applying theoretically grounded evolutionary optimization to solve machine learning problems. Our research has been published in international journals and conferences including Artificial Intelligence, IJCAI, and KDD, and has received several awards, such as the Grand Champion (with other LAMDA members) of the PAKDD'06 Data Mining Competition, the Best Paper Award of PAKDD'08, and the Best Theory Paper Award of GECCO'11. My dissertation received honors including the China National Outstanding Doctoral Dissertation Award in 2013 and the China Computer Federation Outstanding Doctoral Dissertation Award in 2011.

(Detailed CV | CV in PDF)


Recent Update



Research

I am interested in working towards the goal of artificial intelligence. As conceived by Alan Turing, one possible way to build an intelligent machine is to evolve learning machines, an idea that now falls into multiple subfields of artificial intelligence, especially machine learning and evolutionary computation. Currently I focus on evolutionary machine learning, which applies theoretically grounded evolutionary optimization to solve complex, non-convex problems in machine learning and reinforcement learning.


Full Publication List >>>


Selected work:


  • Pareto optimization (with Chao Qian and Zhi-Hua Zhou)
    Pareto optimization originates from evolutionary algorithms. It has been shown to be a powerful approximate solver for constrained optimization problems in finite discrete domains, particularly the subset selection problem. We apply Pareto optimization to learning tasks including sparse regression, achieving strong empirical performance.
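The core idea can be illustrated with a small sketch (this is a simplified illustration of Pareto-style subset selection, not the exact algorithm from our papers; the mutation scheme, iteration budget, and size cap below are simplifying assumptions). It treats sparse regression as a bi-objective problem, minimizing squared error and subset size simultaneously, and maintains an archive of non-dominated subsets:

```python
import numpy as np

def poss(X, y, k, iters=2000, seed=0):
    """Pareto-style subset selection for sparse regression (illustrative sketch).

    Two objectives: (f1) squared error of the least-squares fit on the
    selected features, (f2) subset size.  An archive keeps all subsets
    not dominated on (f1, f2); the best subset of size <= k is returned.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]

    def error(mask):
        if not mask.any():
            return float(y @ y)  # empty subset: error of the zero predictor
        coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        r = y - X[:, mask] @ coef
        return float(r @ r)

    empty = np.zeros(n, dtype=bool)
    archive = [(empty, error(empty), 0)]  # (mask, error, size)

    for _ in range(iters):
        mask, _, _ = archive[rng.integers(len(archive))]
        child = mask ^ (rng.random(n) < 1.0 / n)  # flip each bit w.p. 1/n
        size = int(child.sum())
        if size >= 2 * k:                         # ignore oversized subsets
            continue
        err = error(child)
        # discard the child if some archived solution strictly dominates it
        if any(e <= err and s <= size and (e < err or s < size)
               for _, e, s in archive):
            continue
        # otherwise add it and drop archived solutions it weakly dominates
        archive = [(m, e, s) for m, e, s in archive
                   if not (err <= e and size <= s)]
        archive.append((child, err, size))

    feasible = [(m, e, s) for m, e, s in archive if s <= k]
    return min(feasible, key=lambda t: t[1])[0]   # best mask within budget k
```

Because the archive keeps small-but-imperfect subsets alive alongside accurate ones, the search can pass through intermediate solutions that a greedy forward selection would never visit.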





  • The role of diversity in ensemble learning (with Nan Li, Yu-Feng Li and Zhi-Hua Zhou)
    Ensemble learning is a machine learning paradigm that achieves state-of-the-art performance. Diversity has long been believed to be a key to the good performance of ensemble approaches, but it previously served only as a heuristic idea. We show that diversity can play the role of regularization.
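The benefit of diversity can be checked numerically via the classical ambiguity decomposition of Krogh and Vedelsby: for a convex combination of regressors, the ensemble's squared error equals the weighted average member error minus a weighted "ambiguity" (diversity) term, so more diversity at fixed member accuracy strictly lowers ensemble error. A minimal sketch with synthetic predictions (all numbers below are made up for illustration; this demonstrates the decomposition, not our regularization method):

```python
import numpy as np

# Ambiguity decomposition: for f = sum_i w_i f_i with sum_i w_i = 1,
#   mean((f - y)^2) = sum_i w_i mean((f_i - y)^2) - sum_i w_i mean((f_i - f)^2)
# The last term measures how much the members disagree (diversity).
rng = np.random.default_rng(0)
y = rng.normal(size=200)                           # synthetic targets
preds = y + rng.normal(scale=0.5, size=(5, 200))   # 5 noisy ensemble members
w = np.full(5, 1 / 5)                              # uniform combination weights

f = w @ preds                                      # ensemble prediction
ens_err = np.mean((f - y) ** 2)
avg_err = np.sum(w * np.mean((preds - y) ** 2, axis=1))
diversity = np.sum(w * np.mean((preds - f) ** 2, axis=1))

print(ens_err, avg_err - diversity)                # the two quantities coincide
```

The identity holds pointwise, so the ensemble error is always below the average member error by exactly the diversity term, which is what makes treating diversity as a regularizer natural.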


Codes




(My Google Scholar Citations)


Teaching

  • Artificial Intelligence. (for undergraduate students. Spring, 2017) Course Homepage >>>
  • Artificial Intelligence. (for undergraduate students. Spring, 2015, 2016)
  • Data Mining. (for M.Sc. students. Fall, 2014, 2013, 2012)
  • Digital Image Processing. (for undergraduate students from Dept. Math., Spring, 2014, 2013)
  • Introduction to Data Mining. (for undergraduate students. Spring, 2013, 2012)


Students




Mail:
National Key Laboratory for Novel Software Technology, Nanjing University, Xianlin Campus Mailbox 603, 163 Xianlin Avenue, Qixia District, Nanjing 210023, China
(In Chinese:) 南京市栖霞区仙林大道163号,南京大学仙林校区603信箱,软件新技术国家重点实验室,210023。
