ICCSAMA 2019







INVITED LECTURES



PLENARY LECTURE     

Professor Aharon Ben-Tal
Technion - Israel Institute of Technology


    
Title: Robust Optimization: the need, the challenge, the success

Abstract:
The Need: Optimization problems are affected by uncertainties, whether due to inaccuracy in measurements, lack of timely information on parameter values (which incurs estimation errors), or limits on executing a computed solution accurately (which causes implementation errors). Because of these uncertainties, a solution may deviate from the expected optimality and, even worse, may turn out to be infeasible. Consequently, there is a great need for a method that can address these difficulties.
The Challenge is to introduce a methodology that, on the one hand, does not require the user of the optimization model to provide full information on the uncertainties, which he cannot deliver, and, on the other hand, does not require the optimizer to solve intractable optimization problems, in particular dynamic (multi-stage) and chance-constrained ones. Robust Optimization (RO) is an attempt to meet these challenges. The talk will explain and demonstrate how this is achieved, with examples in antenna design and signal processing.
The Success: RO created a new branch of optimization theory, as evidenced by the huge number of publications addressing topics in RO. Even more impressive is the large number of diverse applications of RO (as of November 2019, Google Scholar lists over 144,000,000 items for the query "application of robust optimization"!).
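As a toy illustration of the robust-counterpart idea (our own minimal sketch, not an example from the talk: the coefficients, bound, and uncertainty radii below are invented), consider a two-variable LP whose constraint coefficients are only known up to an interval. For nonnegative variables, the worst case over a box uncertainty set is attained at the upper endpoints, so the robust counterpart is again a plain LP:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: maximize x1 + x2  s.t.  a1*x1 + a2*x2 <= 10,  x >= 0,
# where the nominal coefficients a = (2, 3) are only known up to
# +/- delta = (0.5, 0.5) (interval/box uncertainty).
c = np.array([-1.0, -1.0])          # linprog minimizes, so negate
a_nom = np.array([2.0, 3.0])
delta = np.array([0.5, 0.5])
b = 10.0

# Nominal problem: ignore the uncertainty.
nominal = linprog(c, A_ub=[a_nom], b_ub=[b], bounds=[(0, None)] * 2)

# Robust counterpart: with x >= 0 the worst case of a @ x over the box
# is (a_nom + delta) @ x, so the robust problem is again a plain LP.
robust = linprog(c, A_ub=[a_nom + delta], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal objective:", -nominal.fun)
print("robust  objective:", -robust.fun)
```

The robust solution sacrifices some nominal objective value but remains feasible for every coefficient vector in the uncertainty set, which is precisely the trade-off RO formalizes.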

Brief bio:
Aharon Ben-Tal is a Professor of Operations Research Management at the Technion - Israel Institute of Technology. He received his Ph.D. in Applied Mathematics from Northwestern University in 1973. He has been a Visiting Professor at the University of Michigan, the University of Copenhagen, Delft University of Technology, MIT, CWI Amsterdam, Columbia and NYU. His interests are in continuous optimization, particularly nonsmooth and large-scale problems, conic and robust optimization, as well as convex and nonsmooth analysis. In recent years the focus of his research has been on optimization problems affected by uncertainty. In the last 20 years he has devoted much effort to engineering applications of optimization methodology and computational schemes. Some of the algorithms developed in the MINERVA Optimization Center are in use by industry (medical imaging, aerospace). He has published more than 135 papers in professional journals and co-authored three books. Prof. Ben-Tal was Dean of the Faculty of Industrial Engineering and Management at the Technion (1989-1992 and 2011-2014). He has served on the editorial boards of all major OR/optimization journals and has given numerous plenary and keynote lectures at international conferences.

In 2007 Professor Ben-Tal was awarded the EURO Gold Medal - the highest distinction of Operations Research within Europe.

In 2009 he was named Fellow of INFORMS.

In 2015 he was named Fellow of SIAM.

In 2016 he was awarded by INFORMS the Khachiyan Prize for Lifetime Achievement in the area of optimization.

In 2017, the Operations Research Society of Israel (ORSIS) awarded him the Lifetime Achievement Prize.

As of September 2018 his work has over 22,900 citations (Google scholar).





PLENARY LECTURE

Professor Melvyn Sim
National University of Singapore, Business School
    
Title: Robust Stochastic Optimization

Abstract: We present a new distributionally robust optimization model called robust stochastic optimization (RSO), which unifies both scenario-tree based stochastic linear optimization and distributionally robust optimization in a practicable framework that can be solved using state-of-the-art commercial optimization solvers. The model of uncertainty incorporates both discrete and continuous random variables, typically assumed in scenario-tree based stochastic linear optimization and distributionally robust optimization, respectively. To address the non-anticipativity of recourse decisions, we introduce event-wise recourse adaptations, which integrate the scenario-tree adaptation originating from stochastic linear optimization and the affine adaptation popularized in distributionally robust optimization. Our proposed event-wise ambiguity set is rich enough to capture traditional statistics-based ambiguity sets with convex generalized moments, mixture distributions, f-divergence, and the Wasserstein (Kantorovich-Rubinstein) metric, and also inspires machine-learning-based ones using techniques such as K-means clustering and classification and regression trees. Several interesting RSO models are provided, including optimizing over the Hurwicz criterion and two-stage problems over Wasserstein ambiguity sets. We develop a new algebraic modeling package, RSOME, to facilitate the implementation of RSO models. This is a joint work with Zhi Chen and Peng Xiong.
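A minimal numerical sketch of the distributionally robust ingredient (our own toy example, not the RSO model or the RSOME package): the worst-case expected loss over discrete scenarios whose probabilities may deviate from a nominal distribution within a total-variation style budget. The scenario losses, probabilities, and radius below are invented; the worst case reduces to a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Discrete scenario losses and a nominal (empirical) distribution.
losses = np.array([1.0, 2.0, 5.0])
p_nom = np.array([0.5, 0.3, 0.2])
radius = 0.2                        # budget on ||p - p_nom||_1

# Worst-case expected loss:  max_p  p @ losses
# over  {p : p >= 0, sum(p) = 1, ||p - p_nom||_1 <= radius}.
# Linearize |p_i - p_nom_i| with auxiliary variables t_i.
n = len(losses)
c = np.concatenate([-losses, np.zeros(n)])   # variables z = [p, t]
A_ub = np.block([
    [np.eye(n), -np.eye(n)],                  #  p - t <=  p_nom
    [-np.eye(n), -np.eye(n)],                 # -p - t <= -p_nom
    [np.zeros((1, n)), np.ones((1, n))],      #  sum(t) <= radius
])
b_ub = np.concatenate([p_nom, -p_nom, [radius]])
A_eq = np.concatenate([np.ones((1, n)), np.zeros((1, n))], axis=1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (2 * n))
worst_case = -res.fun

print("nominal expectation:   ", p_nom @ losses)
print("worst-case expectation:", worst_case)
```

Here the adversary shifts probability mass from the cheapest scenario toward the most expensive one until the budget is exhausted; richer ambiguity sets (moments, f-divergence, Wasserstein) generalize this inner maximization.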


Brief bio:
Dr. Melvyn Sim is Professor and Provost's Chair at the Department of Analytics & Operations, NUS (National University of Singapore) Business School. He holds a Ph.D. in Operations Research from the Massachusetts Institute of Technology, Cambridge, MA (June 2004), where his thesis, Robust Optimization, was supervised by Dimitris J. Bertsimas. His research interests fall broadly under the categories of decision making and optimization under uncertainty, with applications ranging from finance, supply chain management and healthcare to engineered systems. He is one of the active proponents of robust optimization and has given invited talks in this field at international conferences. Dr. Sim won second place in the 2002 and 2004 George Nicholson best student paper competitions and first place in the 2007 Junior Faculty Interest Group (JFIG) best paper competition. He is also the recipient of the 2009 NUS outstanding young researcher award. Dr. Sim serves as an associate editor for Operations Research, Management Science and Mathematical Programming Computation.

Academic and Professional Experience

1. Head, Department of Analytics & Operations, 2017 - Present
2. Professor and Provost's Chair, April 2016 - Present
3. Courtesy appointment at Industrial and Systems Engineering, Jan 2016 - Present
4. Deputy Director, NUS Global Asian Institute, Aug 2012 - Present
5. Professor, Jan 2012 - Present
6. Dean's Chair, July 2009 - 2012
7. Deputy Head, Decision Sciences, May 2009 - July 2011
8. Associate Professor (with tenure), Decision Sciences, July 2008 - Dec 2011
9. NUS Risk Management Institute Affiliated Researcher, 2007 - 2016
10. Fellow, Singapore-MIT Alliance, 2004 - 2008
11. Assistant Professor, Decision Sciences, NUS, 2004 - 2008
12. Senior Tutor, Decision Sciences, NUS, 2000 - 2004
13. Research Engineer, Singapore Ministry of Defense, 1997 - 1999

Area of Expertise

1. Optimization under Uncertainty
2. Modeling and Optimization of Operations/Supply chains/Healthcare systems






PLENARY LECTURE

Professor Martin J. Wainwright
Department of Statistics, UC Berkeley, Berkeley, USA
Department of EECS, UC Berkeley, Berkeley, USA
    
Title: Randomized algorithms for big data: From optimization to machine learning

Abstract: Large-scale data sets are now ubiquitous throughout engineering, science and technology, and they present a number of interesting challenges at the interface between machine learning and optimization. In this talk, we discuss the use of randomized dimensionality reduction techniques, also known as sketching, for quickly obtaining approximate solutions to large-scale optimization problems that arise in machine learning and statistics. We first show how sketching allows for much faster solution of constrained quadratic problems, and how the sketch dimension can be adapted to the intrinsic dimension of the solution space. We then show how these ideas lead to a faster, randomized version of the Newton algorithm with provable guarantees.
Based on joint work with Mert Pilanci, Stanford University.
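The basic sketch-and-solve step can be illustrated on an ordinary least-squares problem (a toy demo of our own; the dimensions, seed, and noise level are invented, and the talk's adaptive and Newton-type schemes go well beyond it). A random Gaussian sketch compresses the tall data matrix before solving:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined least-squares problem:  min_x ||A x - b||_2.
n, d = 5000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# "Sketch-and-solve": compress the n rows to m << n rows with a
# random Gaussian sketch S, then solve the much smaller problem.
m = 200
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_full = np.linalg.lstsq(A, b, rcond=None)[0]

rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
print("relative error of sketched solution:", rel_err)
```

The sketched problem has only m = 200 rows instead of 5000, yet its solution is close to the full solution; adapting m to the intrinsic dimension of the solution set is one of the themes of the talk.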


Brief bio: Martin Wainwright joined the faculty at the University of California at Berkeley in Fall 2004, and is currently a Chancellor's Professor with a joint appointment between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences. He received his Bachelor's degree in Mathematics from the University of Waterloo, Canada, and his Ph.D. degree in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology (MIT), for which he was awarded the George M. Sprowls Prize from the MIT EECS department in 2002. He is interested in high-dimensional statistics, information theory and statistics, and statistical machine learning. He has received an Alfred P. Sloan Foundation Research Fellowship (2005), IEEE Best Paper Awards from the Signal Processing Society (2008) and Communications Society (2010), the Joint Paper Award from the IEEE Information Theory and Communications Societies (2012), and the COPSS Presidents' Award in Statistics (2014); he was named a Medallion Lecturer (2013) of the Institute of Mathematical Statistics and a Section Lecturer at the International Congress of Mathematicians (2014). He is currently serving as an Associate Editor for the Annals of Statistics, Journal of Machine Learning Research, Journal of the American Statistical Association, and Journal of Information and Inference.





PLENARY LECTURE

Professor Jack Xin
University of California Irvine, USA
    
Title: Nonconvex non-smooth optimization methods for reducing complexity of deep neural networks

Abstract: We discuss nonconvex optimization problems arising in training quantized and sparsified deep neural networks. Such networks have much lower memory and inference costs than their full-precision counterparts. The training aims to maintain full-precision network performance. Quantization restricts the network weights to discrete values such as {1, -1} up to a scalar multiplier, or replaces activations with piecewise constant functions. Sparsification refers to network weights being sparse or groupwise sparse. The mathematical and algorithmic challenge is to reconcile the continuous nature of stochastic gradient descent with the discreteness in quantization or sparsification so that the training process is convergent and efficient. We show computational results on large image data sets as well as theoretical analysis on model problems.
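As a minimal illustration of quantizing weights to {1, -1} up to a scalar multiplier (an XNOR-Net style closed form of our own choosing, not the specific training scheme of the talk): for a fixed weight vector w, the scale alpha = mean(|w|) and the signs sign(w) minimize ||w - alpha*s||_2 over s in {+1, -1}^n and alpha >= 0.

```python
import numpy as np

def binarize(w):
    """Quantize a weight vector to two values {+alpha, -alpha}.

    alpha = mean(|w|) with signs sign(w) is the closed-form minimizer
    of ||w - alpha * s||_2 over s in {+1, -1}^n and alpha >= 0.
    """
    alpha = np.abs(w).mean()
    return alpha * np.sign(w), alpha

rng = np.random.default_rng(1)
w = rng.standard_normal(1000)       # stand-in for a layer's weights
q, alpha = binarize(w)

print("scale alpha:", alpha)
print("distinct quantized values:", np.unique(q))
print("relative quantization error:",
      np.linalg.norm(w - q) / np.linalg.norm(w))
```

The quantized vector takes only two distinct values, so it can be stored as one sign bit per weight plus a single float; the training problem in the talk is to keep network accuracy while the weights are pushed onto such a discrete set.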


Brief bio: Jack Xin is Chancellor's Professor of Mathematics at UC Irvine. He received his Ph.D in applied mathematics at Courant Institute, New York University in 1990. He was a postdoctoral fellow at Berkeley and Princeton in 1991 and 1992. He was assistant and associate professor of mathematics at the University of Arizona from 1991 to 1999. He has been professor of mathematics at the University of Texas at Austin (1999-2005), and UC Irvine since 2005. His research interests include applied analysis, computational methods and their applications in multi-scale problems, nonconvex optimization, and data science. He authored over a hundred twenty journal papers and two Springer books. He is a fellow of the Guggenheim Foundation, and the American Mathematical Society. He is Editor-in-Chief of Society of Industrial and Applied Mathematics Interdisciplinary Journal Multi-scale Modeling and Simulation (MMS).





SEMI-PLENARY LECTURE

Professor Takahito Kuno
University of Tsukuba, Japan
    
    
Title: Global optimization of a class of DC functions over a polytope

Abstract:
It is known that every twice continuously differentiable function can be represented as the sum of a convex function and a separable concave function. To find the global minimum of such a class of DC functions over a polytope, we extend the rectangular branch-and-bound algorithm for separable concave minimization and try to improve the bounding process. We also report some numerical results, which indicate that our algorithm is promising.
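The decomposition in the first sentence can be checked numerically on a small example of our own (the matrix Q and the shift rho below are illustrative choices, not from the talk): for a quadratic f(x) = 0.5 x^T Q x with indefinite Q, writing f = g + h with g(x) = f(x) + rho * sum(x_i^2) and h(x) = -rho * sum(x_i^2) makes g convex once rho is large enough, while h is separable concave.

```python
import numpy as np

# A nonconvex quadratic f(x) = 0.5 x^T Q x with an indefinite Q.
Q = np.array([[1.0, 3.0],
              [3.0, -2.0]])
eigs = np.linalg.eigvalsh(Q)
print("eigenvalues of Q:", eigs)       # one negative -> f is nonconvex

# DC decomposition f = g + h with
#   g(x) = f(x) + rho * sum(x_i^2)    (convex for rho large enough)
#   h(x) = -rho * sum(x_i^2)          (separable concave)
# For a quadratic, g's Hessian is Q + 2*rho*I, so the smallest
# workable shift is rho = -lambda_min(Q) / 2.
rho = max(0.0, -eigs.min() / 2)
G = Q + 2 * rho * np.eye(2)            # Hessian of g
print("eigenvalues of g's Hessian:", np.linalg.eigvalsh(G))  # >= 0 up to rounding
```

For a general twice continuously differentiable f, rho must dominate -lambda_min of the Hessian over the whole polytope; the separable concave part is what the rectangular branch-and-bound scheme exploits.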

Brief bio: From 1988 to 1991, Takahito Kuno worked at TIT as an assistant professor. He moved to the University of Tsukuba in 1991, and is now a professor in the Faculty of Engineering, Information and Systems, University of Tsukuba. His research interest is in global optimization of multiextremal nonconvex functions, and he is a co-editor of the Journal of Global Optimization, Optimization Letters, and SN Operations Research Forum.

















































