Speaker: Vladimir Cherkassky, Professor, University of Minnesota
Time: Tuesday, December 26, 2017, 15:00-16:00
Venue: Second Conference Room, Room 102, Software Building, Zhangjiang Campus, Fudan University
The main intellectual appeal of 'Big Data' is its promise to generate knowledge from data. That is, knowledge can supposedly be discovered by applying readily available statistical and machine learning software to growing volumes of data. The opposite philosophical view holds that scientific inquiry starts with asking intelligent questions, and so cannot be outsourced to computers. This talk will discuss important differences between knowledge discovery in classical science and modern data-analytic knowledge discovery. The range of topics includes conceptual, philosophical, and methodological aspects of knowledge discovery. Various methodological issues will be illustrated using application examples ranging from financial engineering to biomedical applications.
Vladimir Cherkassky is Professor of Electrical and Computer Engineering at the University of Minnesota, Twin Cities. He received an MS in Operations Research from the Moscow Aviation Institute in 1976 and a PhD in Electrical and Computer Engineering from the University of Texas at Austin in 1985. He has worked on the theory and applications of statistical learning since the late 1980s and has co-authored the monograph Learning from Data, published by Wiley (now in its second edition). He is also the author of a recent textbook, Predictive Learning (see http://vctextbook.com/).
He has served on the editorial boards of IEEE Transactions on Neural Networks (TNN), Neural Networks (the official journal of INNS), Natural Computing, and Neural Processing Letters. He was a Guest Editor of the IEEE TNN Special Issue on VC Learning Theory and Its Applications, published in September 1999. Dr. Cherkassky was an organizer and Director of the NATO Advanced Study Institute (ASI) From Statistics to Neural Networks: Theory and Pattern Recognition Applications, held in France in 1993. He received the IBM Faculty Partnership Award in 1996 and 1997 for his work on learning methods for data mining. He was elected Fellow of the IEEE in 2007 for 'contributions and leadership in statistical learning and neural networks'. In 2008, he received the A. Richard Newton Breakthrough Research Award from Microsoft for 'development of new methodologies for predictive learning'.