Theory, Algorithms and Applications
DC Programming and DCA constitute the backbone of smooth/nonsmooth nonconvex programming and global optimization. They were introduced by Pham Dinh Tao in 1985 in their preliminary form and have been extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994, becoming by now classic and increasingly popular. This page collects links to papers, software, announcements, etc. that may be of relevance for people working on DC Programming and DCA.
DC PROGRAMMING
DC = Difference of Convex functions
DC programming and its DC Algorithm (DCA) address the problem of minimizing a function f = g − h (with g, h lower semicontinuous proper convex functions on R^{n}) on the whole space:
α = inf { f(x) := g(x) − h(x) : x in R^{n} }
A constrained DC program whose feasible set C is convex can always be transformed into an unconstrained DC program by adding the indicator function of C (equal to 0 on C, +∞ elsewhere) to the first DC component g.
WHY DC PROGRAMMING ?
In recent years there has been very active research on the following classes of nonconvex and nondifferentiable optimization problems:
(1) sup { f(x) : x in C }, where f and C are convex,
(2) inf {g(x) - h(x) : x in R^{n}} where g, h are convex,
(3) inf { g(x) − h(x) : x in C, f_{1}(x) − f_{2}(x) ≤ 0 }, where g, h, f_{1}, f_{2} and C are convex.
The reason is quite simple: most real-life optimization problems are nonconvex in nature. Moreover, industrialists have begun to replace convex models by nonconvex ones which are more complex but more reliable and, above all, more economical.
Problem (1) is a special case of Problem (2), with g the indicator function of C and h = f, while the latter (resp. Problem (3)) can be equivalently transformed into the form of (1) (resp. (2)) by introducing an additional scalar variable (resp. via exact penalty relative to the DC constraint f_{1}(x) − f_{2}(x) ≤ 0).
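The passage from (2) to (1), for instance, can be made explicit with the additional scalar variable (a standard reformulation, sketched here for completeness):

```latex
% (2) -> (1): introduce t with g(x) <= t, so that minimizing g - h
% becomes maximizing a convex function over a convex set.
\inf\{\, g(x) - h(x) : x \in \mathbb{R}^n \,\}
  = \inf\{\, t - h(x) : (x,t) \in C \,\}
  = -\sup\{\, h(x) - t : (x,t) \in C \,\},
\qquad C := \{ (x,t) \in \mathbb{R}^{n+1} : g(x) \le t \}.
```

Here h(x) − t is convex in (x, t) and C is convex, so the last problem has exactly the form (1).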
Clearly the complexity increases from (1) to (3), and the solution of one of them implies the solution of the two others. Problem (2) is called a DC program; its particular structure has permitted a good deal of development in both qualitative and quantitative studies.
We can say that almost all real-world optimization problems can be formulated as DC programs.
The richness of the set of DC functions on X = R^{n}, denoted by DC(X):
(i) DC(X) is a subspace containing the class of lower-C^{2} functions. In particular, DC(X) contains the space C^{1,1}(X) of functions whose gradient is locally Lipschitzian on X.
(ii) DC(X) is closed under all the operations usually considered in optimization.
In particular, a linear combination of functions in DC(X) belongs to DC(X), a finite supremum of DC functions is DC, and the product of two DC functions remains DC.
(iii) With some caution we can say that DC(X) is the subspace generated by the convex cone Γ_{o}(X):
DC(X) = Γ_{o}(X) − Γ_{o}(X).
This relation marks the passage from convex optimization to nonconvex optimization and also indicates that DC(X) constitutes a minimal realistic extension of Γ_{o}(X), the convex cone of all lower semicontinuous proper convex functions on R^{n}.
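As an illustration of (i), any C^{2} function whose Hessian is bounded below admits an explicit DC decomposition by a quadratic shift (a standard construction, not specific to this page):

```latex
% For rho >= -lambda_min(nabla^2 f) over the region of interest:
f(x) = \Big( f(x) + \tfrac{\rho}{2}\,\|x\|^2 \Big) \;-\; \tfrac{\rho}{2}\,\|x\|^2
     = g(x) - h(x),
\qquad \nabla^2 g = \nabla^2 f + \rho I \succeq 0 .
```

For example, f(x) = sin x on R has f'' ≥ −1, so ρ = 1 already makes both DC components convex.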
DCA
DCA = DC Algorithm
WHAT IS DCA ?
DCA is an optimization approach based on local optimality and duality in DC programming for solving DC programs. DCA was introduced by Pham Dinh Tao in 1985 as an extension of his subgradient algorithms (for convex maximization programming) to DC programming. Crucial developments and improvements of DCA, from both theoretical and computational aspects, have been achieved since 1993 through the joint works of Le Thi Hoai An and Pham Dinh Tao.
WHY DCA ?
It is worth noting that, for suitable DC decompositions, DCA generates most standard algorithms in convex and nonconvex programming as special cases.
AN INTRODUCTION TO DCA
DC Programming and DCA, which constitute the backbone of nonconvex programming and global optimization, were introduced in 1985 by Pham Dinh Tao in their preliminary form, and extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994 to become now classic and increasingly popular. Their original key idea relies on the DC structure of the objective and constraint functions in nonconvex programs, which is explored and exploited in a deep and suitable way.
A DC program is of the form :
(P_{dc}) α = inf { f(x) := g(x) − h(x) : x in X }
with g, h in Γ_{o}(X). Such a function f is called a DC function, g − h a DC decomposition of f, and the convex functions g and h the DC components of f. In (P_{dc}) the nonconvexity comes from the concavity of the function −h (except when h is affine, in which case (P_{dc}) is a convex program). It should be noted that a convex-constrained DC program can be expressed in the form (P_{dc}) by using the indicator function of C, that is
inf { f(x) := g(x) − h(x) : x in C } = inf { φ(x) − h(x) : x in X }, with φ := g + χ_{C}.
We would like an extension of convex programming that is not too large, so as to still allow the use of the powerful arsenal of tools from convex analysis and convex optimization, yet sufficiently wide to cover most real-world nonconvex optimization problems. Here the convexity of the two DC components g and h of the objective function is used to develop appropriate tools from both theoretical and algorithmic viewpoints. The other pillar of this approach is DC duality, first studied by J.F. Toland (Toland, 1978, 1979), who generalized, in a very elegant and natural way, the just-mentioned works by Pham Dinh Tao on convex maximization programming (g then being the indicator function of a nonempty closed convex set in X). In contrast with the combinatorial approach, where many global algorithms have been studied, very few algorithms for solving DC programs exist in the convex analysis approach. Here we are interested in local and global optimality conditions, relationships between local and global solutions to primal DC programs and their dual
α = inf { h*(y) − g*(y) : y in Y } (D_{dc})
(where Y is the dual space of X, which can be identified with X itself, and g*, h* denote the conjugate functions of g and h, respectively), and in solution algorithms.
The transportation of global solutions between (P_{dc}) and (D_{dc}) is expressed as:
if x* is an optimal solution of (P_{dc}), then every y* in ∂h(x*) is an optimal solution of (D_{dc});
if y* is an optimal solution of (D_{dc}), then every x* in ∂g*(y*) is an optimal solution of (P_{dc}).
Under technical conditions, this transportation holds also for local solutions of (P_{dc}) and (D_{dc}).
DC programming investigates the structure of the vector space DC(X), DC duality and optimality conditions for DC programs. The complexity of DC programs resides, of course, in the lack of practical global optimality conditions. We developed instead the following necessary local optimality conditions for DC programs in their primal part (by symmetry, their dual part is trivial):
∂g(x*) ∩ ∂h(x*) ≠ ∅ (1)
(such a point x* is called a critical point of g − h, or a critical point of (P_{dc})), and
∂h(x*) ⊂ ∂g(x*). (2)
The condition (2) is also sufficient for many important classes of DC programs, in particular for several cases quite often encountered in practice.
HOW DCA WORKS ?
Based on local optimality conditions and duality in DC programming, DCA consists in the construction of two sequences {x^{k}} and {y^{k}}, candidate optimal solutions of the primal and dual programs respectively, such that the sequences {g(x^{k}) − h(x^{k})} and {h*(y^{k}) − g*(y^{k})} are decreasing, and {x^{k}} (resp. {y^{k}}) converges to a primal feasible solution x* (resp. a dual feasible solution y*) verifying local optimality conditions and
x* in ∂g*(y*); y* in ∂h(x*).
These two sequences {x^{k}} and {y^{k}} are determined in such a way that x^{k+1} (resp. y^{k}) is a solution of the convex program (P_{k}) (resp. (D_{k})) defined by
inf { g(x) − h(x^{k}) − <x − x^{k}, y^{k}> : x in R^{n} }, (P_{k})
inf { h*(y) − g*(y^{k−1}) − <y − y^{k−1}, x^{k}> : y in R^{n} }. (D_{k})
The first interpretation of DCA is simple: at each iteration one replaces, in the primal DC program (P_{dc}), the second component h by its affine minorization h^{k}(x) := h(x^{k}) + <x − x^{k}, y^{k}> in a neighbourhood of x^{k}, giving birth to the convex program (P_{k}), whose solution set is nothing but ∂g*(y^{k}). Likewise, the second DC component g* of the dual DC program (D_{dc}) is replaced by its affine minorization (g*)^{k}(y) := g*(y^{k}) + <y − y^{k}, x^{k+1}> in a neighbourhood of y^{k}, to obtain the convex program (D_{k}), whose solution set is ∂h(x^{k+1}). DCA thus performs a double linearization with the help of the subgradients of h and g*, and yields the scheme:
y^{k} in ∂h(x^{k}); x^{k+1} in ∂g*(y^{k}).
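The scheme above can be sketched on a one-dimensional toy problem (our own illustration, not from the text): f(x) = x² − 2|x| with the DC decomposition g(x) = x², h(x) = 2|x|, so that ∂h(x) = {2 sign(x)} for x ≠ 0 and the linearized program has the closed-form solution x^{k+1} = y^{k}/2 (= ∇g*(y^{k})).

```python
import math

def dca(subgrad_h, solve_linearized, x0, max_iter=100, tol=1e-10):
    """Generic DCA scheme: y^k in dh(x^k), then x^{k+1} minimizes g(x) - <x, y^k>."""
    x = x0
    for _ in range(max_iter):
        y = subgrad_h(x)             # y^k in dh(x^k)
        x_new = solve_linearized(y)  # x^{k+1} in dg*(y^k)
        if abs(x_new - x) < tol:     # stop when the iterates stabilize
            return x_new
        x = x_new
    return x

# Toy DC program: f(x) = x**2 - 2*abs(x), with g(x) = x**2, h(x) = 2*abs(x).
x_star = dca(subgrad_h=lambda x: 2.0 * math.copysign(1.0, x) if x else 0.0,
             solve_linearized=lambda y: y / 2.0,
             x0=0.3)
print(x_star)  # 1.0, a global minimizer (f(1) = -1)
```

From any positive starting point the scheme jumps to x = 1 in one step; from a negative one it reaches x = −1, the other global minimizer.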
The second interpretation of DCA :
Let x* be an optimal solution of the primal DC program (P_{dc}) and let y* in ∂h(x*). By the transportation of global solutions between (P_{dc}) and (D_{dc}), y* is an optimal solution of the dual DC program (D_{dc}).
Let h_{*} be the affine minorization of h defined by h_{*}(x) := h(x*) + <x − x*, y*>, and consider the convex program
α* := inf { g(x) − h_{*}(x) : x in R^{n} } = inf { f(x) + h(x) − h_{*}(x) : x in R^{n} }. (3)
Since the function f_{*}(x) = f(x) + h(x) − h_{*}(x) is a convex majorization of f, α* ≥ α.
But f_{*}(x*) = f(x*) = α. Hence, finally, α* = α.
On the other hand, the optimal solution set of (3) is ∂g*(y*), which is contained in the optimal solution set of (P_{dc}) by the transportation of global solutions between (P_{dc}) and (D_{dc}). Taking this transportation into account, together with the decrease of the sequence {g(x^{k}) − h(x^{k})}, one can better understand the role played by the linearized programs (P_{k}), (D_{k}) and (3), and the fact that DCA converges to an optimal solution of (P_{dc}) from a good initial point.
Finally, it is important to point out a deeper insight into DCA. Let h_{l} and (g*)_{l} be the polyhedral convex functions (which underapproximate, respectively, the convex functions h and g*) built from the affine minorizations generated by DCA:
h_{l}(x) := sup { h^{i}(x) : i = 0, ..., l }, for all x in R^{n},
(g*)_{l}(y) := sup { (g*)^{i}(y) : i = 1, ..., l }, for all y in R^{n}.
Let k := inf { l : g(x^{l}) − h(x^{l}) = g(x^{l+1}) − h(x^{l+1}) }. If k is finite, then the solutions computed by DCA, x^{k+1} and y^{k}, are global minimizers of the polyhedral DC programs
β_{k} = inf { g(x) − h_{k}(x) : x in R^{n} } (P_{k}) and β_{k} = inf { h*(y) − (g*)_{k}(y) : y in R^{n} } (D_{k}),
respectively. This property holds especially for polyhedral DC programs, where DCA has a finite convergence.
The hidden features reside in the following (whether k is finite or equal to +∞):
x^{k+1} (resp. y^{k}) is not only an optimal solution of the linearized program but also an optimal solution of the more tightly approximating polyhedral program (P_{k}) (resp. (D_{k})) above,
β_{k} + Δ_{k} ≤ α ≤ β_{k}, where Δ_{k} := inf { h_{k}(x) − h(x) : x in P } ≤ 0; the closer Δ_{k} is to zero (i.e., the closer the polyhedral convex minorization h_{k} is to h over P), the closer x^{k+1} is to P (here P denotes the solution set of (P_{dc})),
if h and h_{k} coincide at some optimal solution of (P_{dc}), or g* and (g*)_{k} coincide at some optimal solution of (D_{dc}), then x^{k+1} (resp. y^{k}) is also an optimal solution of (P_{dc}) (resp. (D_{dc})).
These nice features partially explain the efficiency of DCA with respect to related standard methods.
KEY PROPERTIES OF DCA:
HOW TO USE DCA ?
(i) Find a DC decomposition g - h of the objective function f
(ii) Compute subgradients of h and g*
(iii) Implement the resulting iterative scheme: at iteration k, take y^{k} in ∂h(x^{k}), then compute x^{k+1} by solving the convex program
inf { g(x) − h(x^{k}) − <x − x^{k}, y^{k}> : x in R^{n} },
and repeat until convergence.
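As a worked instance of steps (i)-(iii) (our own illustration; the data are hypothetical), take the trust-region subproblem min { ½xᵀQx + bᵀx : ||x|| ≤ r }. Step (i) picks g(x) = (ρ/2)||x||² + χ_B(x) and h(x) = (ρ/2)||x||² − ½xᵀQx − bᵀx, both convex as soon as ρ ≥ λmax(Q); step (ii) gives ∇h(x) = (ρI − Q)x − b; and the convex subproblem of step (iii) reduces to a projection onto the ball:

```python
import numpy as np

def dca_trust_region(Q, b, r, rho, x0, max_iter=5000, tol=1e-12):
    """DCA for min { 0.5 x'Qx + b'x : ||x|| <= r } with
    g = (rho/2)||x||^2 + indicator of the ball B(0, r),
    h = (rho/2)||x||^2 - 0.5 x'Qx - b'x   (convex when rho >= lambda_max(Q))."""
    x = np.asarray(x0, dtype=float)
    H = rho * np.eye(len(b)) - Q           # gradient of h is H x - b
    for _ in range(max_iter):
        y = H @ x - b                      # y^k = grad h(x^k)
        x_new = y / rho                    # argmin of (rho/2)||x||^2 - <x, y^k> ...
        nrm = np.linalg.norm(x_new)
        if nrm > r:                        # ... projected onto the ball
            x_new *= r / nrm
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

Q = np.diag([1.0, -2.0])                   # indefinite Hessian: nonconvex problem
b = np.array([0.5, 0.1])
x = dca_trust_region(Q, b, r=1.0, rho=2.0, x0=np.zeros(2))
f = lambda z: 0.5 * z @ Q @ z + b @ z
# iterates stay feasible and decrease f; here DCA reaches a KKT point on the boundary
```

Every iteration costs one matrix-vector product and one projection, which is why this class of subproblems (item 1 of the application list below) scales well.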
IMPORTANT QUESTIONS
From the theoretical point of view, the question of optimal DC decompositions is still open. Of course, this depends strongly on the specific structure of the problem under consideration. In order to tackle the large-scale setting, one tries in practice to choose g and h such that the sequences {x^{k}} and {y^{k}} can be easily calculated, i.e., either they are in explicit form or their computation is inexpensive.
GLOBALIZING
To guarantee the globality of computed solutions, or to improve their quality, it is advisable to combine DCA with global optimization techniques (the most popular of which are Branch-and-Bound, SDP and cutting-plane techniques) in a deep and efficient way. Note that for a DC function f = g − h, a good convex minorization of f can be taken as g + co(−h), where co stands for the convex envelope, knowing that the convex envelope of a concave function on a bounded polyhedral convex set can be easily computed.
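In one dimension this last remark is easy to make concrete (a minimal sketch, our own illustration): for a concave function φ on an interval [a, b], the convex envelope co(φ) is simply the chord through the endpoint values, so g + co(−h) yields a convex minorization of f = g − h.

```python
def chord(phi, a, b):
    """Convex envelope on [a, b] of a concave function phi:
    the affine function through (a, phi(a)) and (b, phi(b))."""
    fa, fb = phi(a), phi(b)
    return lambda x: fa + (fb - fa) * (x - a) / (b - a)

# f = g - h with g(x) = x**2 and h(x) = 2*abs(x), on [a, b] = [-2, 3].
g = lambda x: x * x
h = lambda x: 2.0 * abs(x)
env = chord(lambda x: -h(x), -2.0, 3.0)    # co(-h): chord of the concave -h
lower = lambda x: g(x) + env(x)            # convex minorization of f on [-2, 3]
```

Minimizing `lower` gives a lower bound on min f over [a, b], which is exactly the kind of bound a Branch-and-Bound scheme combined with DCA exploits.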
APPLICATIONS
The major difficulty in nonconvex programming resides in the fact that there are, in general, no practicable global optimality conditions. Thus, checking the globality of solutions computed by local algorithms is only possible when optimal values are known a priori, or by comparison with global algorithms, which, unfortunately, cannot be applied to large-scale problems. A pertinent comparison of local algorithms should be based on the following aspects:
DCA seems to meet these requirements, as it has been successfully applied to many different nonconvex optimization problems:
1. The Trust Region subproblems
2. Nonconvex Quadratic Programs
3. Quadratic Zero-One Programming problems / Mixed Zero-One Programming problems
4. Minimizing a quadratic function under convex quadratic constraints
5. Multiple Criteria Optimization: Optimization over the Efficient and Weakly Efficient Sets
6. Linear Complementarity Problem
7. Nonlinear Bilevel Programming Problems.
8. Optimization in Mathematical Programming problems with Equilibrium Constraints
9. Optimization in Mathematical Finance
10. The strategic supply chain design problem from qualified partner set
11. The concave cost supply chain problem
12. Nonconvex Multicommodity Network Optimization problems
13. Molecular Optimization via the Distance Geometry Problems
14. Molecular Optimization by minimizing the Lennard-Jones Potential Energy
15. Minimizing Morse potential energy
16. Multiple alignment of sequences
17. Phylogenetic analysis
18. Optimization for Restoration of Signals and Images
19. Discrete tomography
20. Tikhonov Regularization for Nonlinear Ill-Posed problems
21. Engineering structures
22. Multidimensional Scaling Problems (MDS)
23. Clustering
24. Fuzzy Clustering
25. Multilevel hierarchical clustering and its application to multicast structures
26. Support Vector Machine
27. Large margin classification via ψ-learning
28. Generating highly nonlinear balanced Boolean Functions in Cryptography
29. Learning with sparsity by minimizing the zero norm (feature selection & classification, sparse regression, ...)
30. Compressed sensing, sparse signal recovery
31. Nonnegative Matrix Factorization
32. Self-Organizing Maps (SOM)
33. Community detection in social networks
34. Network Utility Maximization (NUM)
35. Power control problem
36. Dynamic spectrum management in DSL systems
37. MIMO relay optimization
38. Proportional-fairness and max-min optimization of SISO/MISO/MIMO ad-hoc networks
39. Quality of Service (QoS) Routing problems
40. Cross-layer Optimization in Multi-hop Time Division Multiple Access (TDMA) Networks
41. Planning a multisensor multizone search for a target
To be continued ...
MISCELLANIES
CAVEAT: Neither are the given lists of papers complete nor do they always provide the most recent reference.
BUT: The intention is to provide a useful page for the entire DC Programming community. To make and keep this page sufficiently complete and up to date I need your help. Please do inform me about necessary updates or missing contributions and mail suggestions or comments to
hoai-an.le-thi @ univ-lorraine
NOTICE: this homepage is devoted to the convex analysis approach to nonconvex programming - DC programming and DCA, and the combined DCA - global optimization techniques.
TOOLS OF DC PROGRAMMING AND DCA
Pham Dinh Tao, Le Thi Hoai An, Recent advances in DC programming and DCA. Transactions on Computational Collective Intelligence, Volume 8342, pp. 1-37, 2014.
WORKS USING DCA
OUR WORKS ON DCA
See the list of publications here and the PhD. theses here.
WORKS OF OTHER AUTHORS USING DCA (partial list)
132. Yunquan Song, Lu Lin, Ling Jian: Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model. Communications in Nonlinear Science and Numerical Simulation, Volume 36, July 2016
131. Moeini Mahdi: The Maximum Ratio Clique Problem: A Continuous Optimization Approach and Some New Results. Proceedings of the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO, 2015.
130. Nguyen Canh Nam, Pham Thi Hoai, Tran Van Huy: DC Programming and DCA Approach for Resource Allocation Optimization in OFDMA/TDD Wireless Networks. Proceedings of the 3rd International Conference on Computer Science, Applied Mathematics and Applications - ICCSAMA, 2015.
129. Tran Duc Quynh, Nguyen Ba Thang Phan, Nguyen Quang Thuan: A New Approach for Optimizing Traffic Signals in Networks Considering Rerouting. Proceedings of the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015 - Part I, 2015.
128. Xin Zhou, Nicole Mayer-Hamblett, Umer Khan, Michael R. Kosorok: Residual Weighted Learning for Estimating Individualized Treatment Rules. Eprint arXiv:1508.03179, 2015.
127. Yifei Lou, Stanley Osher and Jack Xin: Computational Aspects of Constrained L_{1}-L_{2} Minimization for Compressive Sensing. UCLA CAM report, 2015.
126. Jin Xiao and Ng, M.K.-P. and Yu-Fei Yang: On the Convergence of Nonconvex Minimization Methods for Image Recovery. IEEE Transactions on Image Processing, pages 1587-1598, 2015.
125. Annabella Astorino, Antonio Fuduli: Semisupervised spherical separation. Applied Mathematical Modelling, 2015.
124. Jeyakumar, V. and Lee, G.M. and Linh, N.T.H.: Generalized Farkas' lemma and gap-free duality for minimax DC optimization with polynomials and robust quadratic optimization. Journal of Global Optimization, 2015.
123. Shuo Xiang, Xiaotong Shen, Jieping Ye: Efficient nonconvex sparse group feature selection via continuous and discrete optimization. Artificial Intelligence, Volume 224, July 2015.
122. Mehryar Mohri, Andres Munoz Medina: Learning Algorithms for Second-Price Auctions with Reserve. Courant Institute of Mathematical Sciences, 2015.
121. Hoang Ngoc Tuan: Linear convergence of a type of iterative sequences in nonconvex quadratic programming. Journal of Mathematical Analysis and Applications, Volume 423, Issue 2, pp. 1311-1319, 2015.
120. Jong Chul Ye, Jong Min Kim and Yoram Bresler: Improving M-SBL for Joint Sparse Recovery using a Subspace Penalty. arXiv:1503.06679v2 [cs.IT], 2015.
119. Enlong Che: Optimized Transmission and Selection Designs in Wireless Systems. PhD. thesis, University of Technology, Sydney, Australia 2015.
118. Nguyen Thai An, Nguyen Mau Nam and Nguyen Dong Yen: A DC algorithm via convex analysis approach for solving a location problem involving sets. arXiv:1404.5113v2 [math.OC], 2015.
117. Penghang Yin, Yifei Lou, Qi He and Jack Xin: Minimization of l_{1-2} for Compressed Sensing. SIAM J. Sci. Comput., Vol. 37, No. 1, pp. A536-A563, 2015.
116. G Polya: Nonsmooth DCA. A Fortran 77 solver for nonsmooth DC programming by A. Bagirov, 2015.
115. Adrian Schad, Ka L. Law and Marius Pesavento: Rank-Two Beamforming and Power Allocation in Multicasting Relay Networks. arXiv:1502.04861v1 [cs.IT], 2015.
114. Kaisa Joki, Adil M. Bagirov, Napsu Karmitsa and Marko M. Mäkelä: New Proximal Bundle Method for Nonsmooth DC Optimization. TUCS Technical Report, 2015.
113. Gao Xiangsheng, Wang Min, Li Qi and Zan Tao: Uncertainty analysis of torque-induced bending deformations in ball screw systems. Advances in Mechanical Engineering, 2015.
112. A Belyaev: Test rig optimization. PhD. thesis, University of Illinois at Urbana-Champaign, 2014.
111. Daquan Feng, Guanding Yu, Yi Yuan-Wu, G. Ye Li, Gang Feng, Shaoqian Li: Mode switching for device-to-device communications in cellular networks. 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 1291-1295, 2014.
110. Jian Luo: Quadratic Surface Support Vector Machines with Applications. PhD dissertation, North Carolina State University, Raleigh, North Carolina, 2014.
109. Penghang Yin, Yifei Lou, Qi He and Jack Xin: Minimization of l_{1}-l_{2} for Compressed Sensing. 2014, URL: http://www.math.uci.edu/~jxin/cam14-01.pdf.
108. Y. Lou, T. Zeng, S. Osher and J. Xin: A Weighted Difference of Anisotropic and Isotropic Total Variation Model for Image Processing. URL: ftp://ftp.math.ucla.edu/pub/camreport/cam14-69.pdf.
107. Guang Li and Mike R. Belmont: Model predictive control of a sea wave energy converter: a convex approach. Preprints of the 19th World Congress, The International Federation of Automatic Control, Cape Town, South Africa, pp. 11987-11992, August 24-29, 2014.
106. Joaquim JĂșdice: Optimization with Linear Complementarity Constraints. to appear in Pesquisa Operacional, 2014.
105. Xiaolong Long, Knut Solna, Jack Xin: Two L1 Based Nonconvex Methods for Constructing Sparse Mean Reverting Portfolios. CAM report 14-55, UCLA, 2014.
104. M. Mokhtar, A. Shuib, D. Mohamad, Mathematical Programming Models for Portfolio Optimization Problem: A Review, World Academy of Science, Engineering and Technology International Journal of Social, Human Science and Engineering Vol:8 No:2, pp. 122-129 2014.
103. Tu Xu, Junhui Wang, Yixin Fang, A model-free estimation for the covariate-adjusted Youden index and its associated cut-point, arXiv:1402.1835 [stat.AP] 2014.
102. Alhussein Fawzi, Mike Davies, Pascal Frossard, Dictionary learning for fast classification based on soft-thresholding, Computer Vision and Pattern Recognition, arXiv:1402.1973 [cs.CV] 2014.
101. Mehryar Mohri, Andres Munoz Medina, Learning Theory and Algorithms for Revenue Optimization in Second-Price Auctions with Reserve. Proceedings of the 31st International Conference on Machine Learning, Beijing, China. JMLR: W&CP volume 32, 2014.
100. Kuaini Wang, Wenxin Zhu, Ping Zhong, Robust Support Vector Regression with Generalized Loss Function and Applications, Neural Processing Letters 2014.
99. Changzhi Wu, Chaojie Li, Qiang Long, A DC Programming approach for sensor network localization with uncertainties in anchor positions, Journal of Industrial and Management Optimization 10(3), pp. 817-826 2014.
98. Theodoros Tsiligkaridis, Etienne Marcheret and Vaibhava Goel: A Difference of Convex Functions Approach to Large-Scale Log-Linear Model Estimation. IEEE Transactions on Audio, Speech, and Language Processing, vol.21, no.11, pp. 2255-2266, 2013.
97. Timothy Hunter, Jerome Thai, Kene Akametalu, Alexandre Bayen and Claire Tomlin: Inverse Covariance Estimation from Data with Missing Values using the Concave-Convex Procedure. Optimization for Machine Learning OPT2013, 5 pages, 2013.
96. W. Guan, A. Gray, Sparse high-dimensional fractional-norm support vector machine via DC programming, Computational Statistics and Data Analysis, 67(2013), pp. 136-148, 2013.
95. Takanori MAEHARA, Kazuo MUROTA, A Framework of Discrete DC Programming by Discrete Convex Analysis , MATHEMATICAL ENGINEERING TECHNICAL REPORTS 2013.
94. Donghui Fang, Zhe Chen, Total Lagrange duality for DC infinite optimization problems, Fixed Point Theory and Applications 2013:269, 2013.
93. Martin Slawski, Matthias Hein, Pavlo Lutsik, Matrix factorization with Binary Components, Poster Session in NIPS conference 2013.
92. Yang, Liming and Ju, Ribo, A DC programming approach for feature selection in the Minimax Probability Machine, International Journal of Computational Intelligence Systems 2013.
91. Fu Wenlong, Du Tingsong, Zhai Junchen, A New Branch and Bound Algorithm Based on D.C.Decomposition about Nonconvex Quadratic Programming with Box Constrained, Journal of Mathematical Study 2013.
90. Mehryar Mohri, Andres Munoz Medina, Learning Theory and Algorithms for Revenue Optimization in Second-Price Auctions with Reserve, arXiv:1310.5665, 2013.
89. Ming Huang, Li-Ping Pang, Zun-Quan Xia, The space decomposition theory for a class of eigenvalue optimizations, Computational Optimization and Applications, December 2013.
88. Wei Pan, Xiaotong Shen, Binghui Liu, Cluster Analysis: Unsupervised Learning via Supervised Learning with a Non-convex Penalty, Journal of Machine Learning Research 14, 1865-1889 (2013).
87. Sun, Qian and Xiang, Shuo and Ye, Jieping, Robust principal component analysis via capped norms. Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '13, pp. 311-319 (2013).
86. Alberth Alvarado, Gesualdo Scutari, Jong-Shi Pang, A New Decomposition Method for Multiuser DC-Programming and its Applications, submitted to IEEE Transactions on Signal Processing (2013).
85. Liming Yang, Laisheng Wang, A class of semi-supervised support vector machines by DC programming, Advances in Data Analysis and Classification, Springer Berlin Heidelberg, pp. 1--17 (2013).
84. De La Torre, F.; Black, M.J., Robust principal component analysis for computer vision. Proceedings of the Eighth IEEE International Conference on Computer Vision, ICCV 2001, vol. 1, pp. 362-369, 2001. doi: 10.1109/ICCV.2001.937541.
83. Meihua Wang, Fengmin Xu, Chengxian Xu, A Branch-and-Bound Algorithm Embedded with DCA for DC Programming, Hindawi Publishing Corporation, Mathematical Problems in Engineering (2012).
82. D.H. Fang, C. Li, X.Q. Yang, Asymptotic closure condition and Fenchel duality for DC optimization problems in locally convex spaces, Nonlinear Analysis: Theory, Methods & Applications, Vol. 75, Issue 8 (2012), pp. 3672-3681.
81. Xiaotong Shen, Hsin-Cheng Huang, Wei Pan, Simultaneous supervised clustering and feature selection over a graph, Biometrika 2012 : ass038v1-ass038, (2012).
80. Shufang Wang, Artificial Mixture Methods for Correlated Nominal Responses and Discrete Failure Time, PhD thesis (Biostatistics), University of Michigan (2012).
79. Xilan Tian, Apprentissage et Noyau pour les Interfaces Cerveau-machine (Learning and Kernels for Brain-Computer Interfaces), PhD thesis, Institut National des Sciences Appliquées de Rouen (2012).
78. Annabella Astorino, Antonio Fuduli, Manlio Gaudioso, Margin maximization in spherical separation, Comput Optim Appl 53 (2012), pp. 301-322.
77. Hoang, N.T., Convergence Rate of the Pham Dinh-Le Thi Algorithm for the Trust-Region Subproblem, J Optim Theory Appl 154 (2012), pp. 904-915.
76. Liming Yang, Qun Sun, Recognition of the hardness of licorice seeds using a semi-supervised learning method and near-infrared spectral data, Chemometrics and Intelligent Laboratory Systems, Volume 114, 15 May 2012, Pages 109-115.
75. Xilan Tian, Gilles Gasso, Stéphane Canu, A multiple kernel framework for inductive semi-supervised SVM learning, Neurocomputing, Volume 90, 1 August 2012, pp. 46-58.
74. Hoai Minh, L., Adnan, Y., Riadh, M., DCA for solving the scheduling of lifting vehicle in an automated port container terminal, Comput Manag Sci 9 (2012), pp. 273-286.
73. Liming Yang, Laisheng Wang, A class of smooth semi-supervised SVM by difference of convex functions programming and algorithm, Knowledge-Based Systems, Available online (2012).
72. Yiqing Zhong, El-Houssaine Aghezzaf, Combining DC-programming and steepest-descent to solve the single-vehicle inventory routing problem, Computers & Industrial Engineering, Vol. 61, Issue 2 (2011), pp. 313-321.
71. Kuaini Wang, Ping Zhong, Yaohong Zhao, Training Robust Support Vector Regression via D. C. Program, Journal of Information & Computational Science 7: 12 (2010), pp. 2385-2394.
70. Sun, X. L., Li, J. L. & Luo, H. Z., Convex relaxation and Lagrangian decomposition for indefinite integer quadratic programming. Optimization: A Journal of Mathematical Programming and Operations Research, 59(5), 627-641 (2010).
69. Muu, L. & Quoc, T., One step from DC optimization to DC mixed variational inequalities. Optimization: A Journal of Mathematical Programming and Operations Research, 59(1), 63-76 (2010).
67. Vucic, Nikola, Shi, Shuying, Schubert, Martin, DC programming approach for resource allocation in wireless networks, Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), Proceedings of the 8th International Symposium (2010), pp.380-386.
66. Yoshihiro Kanno, Makoto Ohsaki, A Sequential Second-Order Cone Programming for Stability Analysis of Nonsmooth Mechanical Systems, 8th World Congress on Structural and Multidisciplinary Optimization, Lisbon, Portugal (2009).
65. Bharath K. Sriperumbudur, Gert R. G. Lanckriet, On the Convergence of the Concave-Convex Procedure, NIPS (2009).
64. Nikolaos Vasiloglou, Alexander G. Gray, David V. Anderson, Non-Negative Matrix Factorization, Convexity and Isometry, Proceedings of the SIAM Conference on Data Mining 2009, pp. 673-684.
63. Effrosyni Kokiopoulou, Pascal Frossard, Minimum Distance between Pattern Transformation Manifolds: Algorithm and Applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 7, pp. 1225-1238, June 2009.
62. Antonio Fuduli, A. Astorino, Manlio Gaudioso, DC models for spherical separation, Journal of Global Optimization, Volume 48, Issue 4, pp 657-669, 2010.
61. Chun-Nam John Yu, Thorsten Joachims, Learning Structural SVMs with Latent Variables, Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009.
60. Yiqing Zhong, El-Houssaine Aghezzaf, A DC Programming Approach to solve the Single-Vehicle Inventory Routing Problem, in the Proceedings of the international conference CIE39, July 6-8, 2009.
59. Junhui Wang, Xiaotong Shen, Wei Pan, On Efficient Large Margin Semisupervised Learning: Method and Theory, Journal of Machine Learning Research 10 (2009), pp. 719-742.
58. E. Kokiopoulou, D. Kressner, N. Paragios, P. Frossard, Optimal image alignment with random projections of manifolds: algorithm and geometric analysis, Proceedings of EUSIPCO, 2009.
57. Gianni Di Pillo, Almerico Murli, On Efficient Large Margin Semisupervised Learning: Method and Theory, Journal of Machine Learning Research 10 (2009) 719-742.
56. Yiming Ying, Kaizhu Huang, Colin Campbell, Enhanced protein fold recognition through a novel data integration approach, BMC Bioinformatics 2009; 10: 267.
55. Bharath Sriperumbudur, Gert Lanckriet, On the Convergence of the Concave-Convex Procedure, NIPS Conferences 2009.
54. Zheng Zhao, Liang Sun, Shipeng Yu, Huan Liu, Jieping Ye, Multiclass Probabilistic Kernel Discriminant Analysis, Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 1363-1368, 2009.
53. Yoshihiro Kanno, Makoto Ohsaki, Optimization-based stability analysis of structures under unilateral constraints, International Journal for Numerical Methods in Engineering, Volume 77, Issue 1, pp. 90-125 (2009).
52. Gasso, Gilles; Rakotomamonjy, Alain; Canu, Stéphane, Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming, IEEE Transactions on Signal Processing, vol. 57, issue 12, pp. 4686-4698 (2009).
51. Yichao Wu, Yufeng Liu, Variable selection in quantile regression, Statistica Sinica 19, pp. 801-817 (2009).
50. Cambini, R. and Salvi, F., A branch and reduce approach for solving a class of low rank d.c. programs, J. Comput. Appl. Math. 233, 2 (Nov. 2009), 492-501.
49. Kai Zhang , Ivor W. Tsang , James T. Kwok, Maximum margin clustering made practical, IEEE Transactions on Neural Networks, v.20 n.4, p.583-596, April 2009.
48. Kirk Sturtz, Gregory Arnold, and Matthew Ferrara, DC optimization modeling for shape-based recognition, Proc. SPIE 7337, 73370O (2009).
47. Xianyun Wang, Optimality Conditions and Duality for DC Programming in Locally Convex Spaces, Journal of Inequalities and Applications (2009).
46. Akoa, F.B., Combining DC Algorithms (DCAs) and Decomposition Techniques for the Training of Nonpositive-Semidefinite Kernels, IEEE Transactions on Neural Networks, vol. 19, no. 11 (2008), pp. 1854-1872.
45. Gasso, G., Rakotomamonjy, A., Canu, S., Solving non-convex lasso type problems with DC programming, IEEE Workshop on Machine Learning for Signal Processing (MLSP 2008), pp. 450-455.
43. Jörg Kappes and Christoph Schnörr, MAP-Inference for Highly-Connected Graphs with DC-Programming, Pattern Recognition, Lecture Notes in Computer Science Volume 5096, pp. 1-10 (2008).
41. Peng Zhang, Yingjie Tian, Zhiwang Zhang, Aihua Li, Xingquan Zhu, Select Objective Functions for Multiple Criteria Programming Classification, 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), vol. 3, pp. 420-423, 2008.
40. Liang Fang, Li Sun, Guoping He, Fangying Zheng, A New Algorithm for Quadratic Programming with Interval-Valued Bound Constraints, Fourth International Conference on Natural Computation, vol. 6, pp. 433-437, 2008.
39. D. Wozabal, A new method for Value-at-Risk constrained optimization using the Difference of Convex Algorithm (DCA), Optimization Online, 2008.
38. Riccardo Cambini, Claudio Sodini, A computational comparison of some branch and bound methods for indefinite quadratic programs, Central European Journal of Operations Research, Volume 16, pp. 139-152, 2008.
37. Daili, Ch.; Daili, N., Duality gap and quadratic programming, Part III: dual bounds of a D.C. quadratic programming problem and applications, The Free Library (2008).
36. C. Schnörr, Signal and Image Approximation with Level-Set Constraints, Computing, Volume 81, Issue 2-3, pp. 137-160, 2007.
35. Yoshihiro Kanno, Makoto Ohsaki, Eigenvalue Optimization of Structures via Polynomial Semidefinite Programming. Mathematical engineering technical reports, 2007.
33. David A. Torres, Douglas Turnbull, Bharath K. Sriperumbudur, Luke Barrington, Gert R. G. Lanckriet, Finding Musically Meaningful Words by Sparse CCA, ePrint 2007.
32. Wang, J., Shen, Z., and Pan, W., On transductive support vector machines, Proceedings of the International Conference on Machine Learning ICML 2007.
31. Wang, J., and Shen, X., Large margin semi-supervised learning, Journal of Machine Learning Research 8 (2007), pp. 1867-1891.
30. Schnörr, C., Schüle, T., Weber, S., Variational Reconstruction with DC Programming, Advances in Discrete Tomography and Its Applications, Applied and Numerical Harmonic Analysis, pp. 227-243 (2007).
29. Bierlaire, Michel; Thémans, Michaël; Zufferey, Nicolas, Nonlinear global optimization relevant to discrete choice models estimation, 7th Swiss Transport Research Conference, Ascona, Switzerland, 12-14 September 2007.
28. Dinh Nho Hao, Nguyen Trung Thanh and H. Sahli, Numerical Solution to a Parabolic Boundary Control Problem, Advances in Deterministic and Stochastic Analysis, pp. 115-130, World Scientific, 2007.
27. Bharath K. Sriperumbudur, David A. Torres, and Gert R. G. Lanckriet, Sparse eigen methods by D.C. programming. In Proceedings of the 24th international conference on Machine learning (ICML '07), Zoubin Ghahramani (Ed.). ACM, New York, NY, USA, (2007), pp. 831-838.
25. Liu, Yufeng and Shen, Xiaotong, Multicategory psi-learning, Journal of the American Statistical Association, 101, 474, pp. 500-509 (2006).
24. Ronan Collobert, Fabian Sinz, Jason Weston, Léon Bottou, Trading Convexity for Scalability, Proceedings of the International Conference on Machine Learning ICML 2006.
23. Sijin Liu, Computational developments for psi-learning, M.S. thesis, Univ. Ohio, 2006.
22. Pak-Ming Cheung, James T. Kwok, A regularization framework for multiple-instance learning, Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, Pennsylvania, pp. 193-200, 2006.
21. Alberto Bemporad, Advanced control methodologies for hybrid dynamical systems, Research program, Università degli Studi di Siena (2006).
20. S. Weber, T. Schüle, A. Kuba and C. Schnörr, Binary Tomography with Deblurring, Proc. 11th International Workshop on Combinatorial Image Analysis (IWCIA '06), LNCS Vol. 4040, Springer, June 2006.
19. Andreas Argyriou, Raphael Hauser, Charles A. Micchelli, Massimiliano Pontil, A DC-Programming Algorithm for Kernel Selection, Proceedings of the 23 rd International Conference on Machine Learning, Pittsburgh, PA, 2006.
18. S. Weber, A. Nagy, T. Schüle, C. Schnörr and A. Kuba, A Benchmark Evaluation of Large-Scale Optimization Approaches to Binary Tomography, Proc. 13th International Conference on Discrete Geometry for Computer Imagery (DGCI '06), LNCS Vol. 4245, Springer, Oct. 2006.
17. Steven E. Ellis and Madhu V. Nayakkankuppam, Phylogenetic Analysis via DC Programming, preprint, Department of Mathematics & Statistics, UMBC, 1000 Hilltop Circle, Baltimore, MD 21250 (2005).
16. Yufeng Liu, Xiaotong Shen, and Hani Doss, Multicategory ψ-Learning and Support Vector Machine: Computational Tools, Journal of Computational and Graphical Statistics, 14(1): 219-236 (2005).
15. J.E. Harrington, B.F. Hobbs, J.S. Pang, A. Liu, G. Roch, Collusive game solutions via optimisation, Math. Program., Ser. B (2005), 29 pages.
14. J. Neumann, C. Schnörr, G. Steidl, Combined SVM-Based Feature Selection and Classification, Machine Learning, Volume 61, Issue 1-3, pp. 129-150, November 2005.
13. T. Schüle, C. Schnörr, S. Weber, and J. Hornegger, Discrete tomography by convex-concave regularization and d.c. programming, Discr. Appl. Math., 151:229-243, Oct 2005.
12. T. Schüle, S. Weber, C. Schnörr, Adaptive Reconstruction of Discrete-Valued Objects from few Projections, Electr. Notes in Discr. Math., 20:365-384, 2005.
11. S. Weber, T. Schüle, C. Schnörr, Prior Learning and Convex-Concave Regularization of Binary Tomography, Electr. Notes in Discr. Math., 20:313-327, 2005.
10. S. Weber, C. Schnörr, T. Schüle, J. Hornegger, Binary Tomography by Iterating Linear Programs, in R. Klette, R. Kozera, L. Noakes and J. Weickert (Eds.), Computational Imaging and Vision - Geometric Properties from Incomplete Data, Kluwer Academic Press, 2005.
09. Kurt M. Anstreicher and Samuel Burer, D.C. Versus Copositive Bounds for Standard QP, Journal of Global Optimization, Volume 33, Issue 2, pp. 299-312, 2005.
08. Krause, N., Singer, Y., Leveraging the margin more carefully, International Conference on Machine Learning, ICML (2004).
07. J. Neumann, C. Schnörr, G. Steidl, SVM-based Feature Selection by Direct Objective Minimisation, Pattern Recognition, Proc. of 26th DAGM Symposium, LNCS, Springer, August 2004.
06. S. Weber, T. Schüle, J. Hornegger, C. Schnörr, Binary Tomography by Iterating Linear Programs from Noisy Projections, Proceedings of the International Workshop on Combinatorial Image Analysis (IWCIA), Auckland, New Zealand, Dec. 1-3, 2004, Lecture Notes in Computer Science, Springer-Verlag.
05. X. Shen, G.C. Tseng, X. Zhang, and W.H. Wong, On psi-Learning, Journal of the American Statistical Association, 98, 724-734 (2003).
04. P. Mahey, T.Q. Phong and H.P.L. Luna, Separable convexification and DCA techniques for capacity and flow assignment problems, RAIRO-Recherche Opérationnelle 35 (2001), pp. 269-281.
03. Le Dung Muu and Jianming Shi, D.C. Optimization Methods for Solving Minimum Maximal Network Flow Problem, AMS Classification 2000.
02. Quoc V. Le, Alex Smola, Olivier Chapelle, Choon Hui Teo, Optimization of Ranking Measures, manuscript.
01. X. Shen, From large margin classification to ψ-learning, (28 pages), School of Statistics, University of Minnesota.
SPECIAL SESSIONS IN INTERNATIONAL CONFERENCES