**Theory, Algorithms and Applications**


**DC Programming and DCA** constitute the backbone of smooth/nonsmooth nonconvex programming and global optimization. They were introduced by Pham Dinh Tao in 1985 in their preliminary form and have been extensively developed by **Le Thi Hoai An** and **Pham Dinh Tao** since 1994; they have now become classic and increasingly popular. This page collects links to papers, software, announcements, etc. that may be of relevance to people working on DC Programming and DCA.

**DC PROGRAMMING**

**DC = Difference of Convex functions **

DC programming and its DC Algorithm (DCA) address the problem of minimizing a function f = g − h (with g, h being lower semicontinuous proper convex functions on R^{n}) on the whole space:

α = inf { f(x) := g(x) − h(x) : x in R^{n}}

A constrained DC program whose feasible set C is convex can always be transformed into an unconstrained DC program by adding the indicator function of C (equal to 0 on C, +∞ elsewhere) to the first DC component g.

**WHY DC PROGRAMMING ?**

In recent years, there has been very active research on the following classes of nonconvex and nondifferentiable optimization problems:

(1) sup{ f(x) : x in C}, where f and C are convex,

(2) inf{ g(x) - h(x) : x in R^{n}}, where g, h are convex,

(3) inf{ g(x) - h(x) : x in C, f_{1}(x) - f_{2}(x) ≤ 0} where g, h, f_{1}, f_{2} and C are convex.

The reason is quite simple: most real-life optimization problems are nonconvex in nature. Moreover, industrialists have begun to replace convex models by nonconvex ones, which are more complex but more reliable and, especially, more economical.

Problem (1) is a special case of Problem (2) with g being the indicator function of C and h = f, while the latter (resp. Problem (3)) can be equivalently transformed into the form of (1) (resp. (2)) by introducing an additional scalar variable (resp. via exact penalty relative to the DC constraint f_{1}(x) - f_{2}(x) ≤ 0).
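The first two reductions can be made explicit as follows (a sketch; χ_C denotes the indicator function of C and epi g the epigraph of g, notations not introduced at this point of the text):

```latex
% (1) as a special case of (2): take g = \chi_C and h = f
\sup\{ f(x) : x \in C \} \;=\; -\inf\{ \chi_C(x) - f(x) : x \in \mathbb{R}^n \}.

% (2) in the form (1): with the additional scalar variable t,
\inf\{ g(x) - h(x) : x \in \mathbb{R}^n \}
  \;=\; -\sup\{ h(x) - t : (x,t) \in \operatorname{epi} g \},
% a maximization of the convex function (x,t) \mapsto h(x) - t
% over the convex set \operatorname{epi} g = \{(x,t) : g(x) \le t\}.
```

The reduction of (3) to (2) via exact penalty requires a penalty parameter and additional assumptions, so it is not sketched here.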

Although the complexity increases from (1) to (3), the solution of any one of them implies the solution of the other two. Problem (2) is called a DC program; its particular structure has permitted a good deal of development in both qualitative and quantitative studies.

We can say that almost all real-world optimization problems can be formulated as DC programs.

The richness of the set of DC functions on X = R^{n}, denoted by DC(X):

(i) DC(X) is a subspace containing the class of lower-C^{2} functions. In particular, DC(X) contains the space C^{1,1}(X) of functions whose gradient is locally Lipschitzian on X.

(ii) DC(X) is closed under all the operations usually considered in optimization.

In particular, any linear combination of functions f_{i} in DC(X) belongs to DC(X), a finite supremum of DC functions is DC, and the product of two DC functions remains DC.

(iii) DC(X) is the space generated by the convex cone Γ_{o}(X): DC(X) = Γ_{o}(X) − Γ_{o}(X).

This relation marks the passage from convex optimization to nonconvex optimization and indicates that DC(X) constitutes a minimal realistic extension of Γ_{o}(X), the convex cone of all lower semicontinuous proper convex functions on R^{n}.

**DCA**

DCA = DC Algorithm

**WHAT IS DCA ?**

DCA is an optimization approach based on local optimality and duality in DC programming for solving DC programs. DCA was introduced by Pham Dinh Tao in 1985 as an extension of his subgradient algorithms (for convex maximization programming) to DC programming. Crucial developments and improvements of DCA, from both theoretical and computational aspects, have been achieved since 1994 through the joint works of Le Thi Hoai An and Pham Dinh Tao.

**WHY DCA ? **


- Global optimization approaches such as Branch-and-Bound and cutting plane methods for DC programs do not work on the large-scale DC programs that we often face in real-world problems!
- DCA is a robust and efficient method for nonsmooth nonconvex programming which makes it possible to solve large-scale DC programs. One hears more and more about "Trading Convexity for Scalability" and "The Blessings and the Curses of Dimensionality" in areas where very large nonconvex programs must be dealt with, e.g. Transport-Logistics, Communication Networks, Data Mining-Machine Learning, Computational Biology, etc., to name but a few. Consequently, one should rather look for inexpensive, scalable and efficient local approaches in the large-scale setting.
- DCA is simple to use and easy to implement.
- DCA has been successfully applied to a large variety of nonconvex optimization problems, to which it quite often gave global solutions; it proved to be more robust and more efficient than related standard methods, especially in the large-scale setting.

It is worth noting that with suitable DC decompositions, DCA generates most standard algorithms in convex and nonconvex programming.

**AN INTRODUCTION TO DCA**

DC Programming and DCA, which constitute the backbone of nonconvex programming and global optimization, were introduced in 1985 by Pham Dinh Tao in their preliminary form and have been extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994; they have now become classic and increasingly popular. Their original key idea relies on the DC structure of the objective and constraint functions in nonconvex programs, which is explored and exploited in a deep and suitable way.

A DC program is of the form

(P_{dc}) α = inf { f(x) := g(x) − h(x) : x in X}

with g, h in Γ_{o}(X). Such a function f is called a DC function, g − h a DC decomposition of f, and the convex functions g and h the DC components of f. In (P_{dc}) the nonconvexity comes from the concavity of the function −h (except when h is affine, in which case (P_{dc}) is a convex program). It should be noted that a convex-constrained DC program can be expressed in the form (P_{dc}) by using the indicator function of C, that is

inf { f(x):=g(x)-h(x): x in C} = inf { φ(x) - h(x): x in X}, with φ := g + χ_{C}.

We would like to make an extension of convex programming that is not too large, so as to still allow the use of the arsenal of powerful tools of convex analysis and convex optimization, but sufficiently wide to cover most real-world nonconvex optimization problems. Here the convexity of the two DC components g and h of the objective function has been used to develop appropriate tools from both theoretical and algorithmic viewpoints. The other pillar of this approach is DC duality, first studied by J.F. Toland in 1978 (Toland, 1978, 1979), who generalized, in a very elegant and natural way, the just-mentioned works of **Pham Dinh Tao** on convex maximization programming (g then being the indicator function of a nonempty closed convex set in X). In contrast with the combinatorial approach, where many global algorithms have been studied, there have been very few algorithms for solving DC programs in the convex analysis approach. Here we are interested in local and global optimality conditions, relationships between local and global solutions to primal DC programs and their dual

α=inf { h*(y) − g*(y): y in Y} (D_{dc})

(where Y is the dual space of X, which can be identified with X itself, and g*, h* denote the conjugate functions of g and h, respectively) and solution algorithms.

The transportation of global solutions between (P_{dc}) and (D_{dc}) is expressed as:

if x* is an optimal solution of (P_{dc}), then y* in ∂h(x*) is an optimal solution of (D_{dc}),

if y* is an optimal solution of (D_{dc}), then x* in ∂g*(y*) is an optimal solution of (P_{dc}).

Under technical conditions, this transportation holds also for local solutions of (P_{dc}) and (D_{dc}).

DC programming investigates the structure of the vector space DC(X), DC duality and optimality conditions for DC programs. The complexity of DC programs resides, of course, in the lack of practical global optimality conditions. We developed instead the following necessary local optimality conditions for DC programs in their primal part (by symmetry, their dual part is trivial):

∂g(x*) ∩ ∂h(x*) ≠ Ø (1)

(such a point x* is called a critical point of g − h, or a critical point for (P_{dc})), and

∂h(x*) ⊂ ∂g(x*) (2).

The condition (2) is also sufficient for many important classes of DC programs. In particular, it is sufficient in the following cases, quite often encountered in practice:

- In polyhedral DC programs with h being a polyhedral convex function. In this case, if h is differentiable at a critical point x*, then x* is actually a local minimizer for (P_{dc}). Since a convex function is differentiable everywhere except on a set of measure zero, one can say that a critical point x* is almost always a local minimizer for (P_{dc}).
- In case the function f is locally convex at x*.

**HOW DOES DCA WORK ? **

Based on local optimality conditions and duality in DC programming, DCA consists in the construction of two sequences {x^{k}} and {y^{k}}, candidate optimal solutions of the primal and dual programs respectively, such that the sequences {g(x^{k}) − h(x^{k})} and {h*(y^{k}) − g*(y^{k})} are decreasing, and {x^{k}} (resp. {y^{k}}) converges to a primal feasible solution x* (resp. a dual feasible solution y*) verifying local optimality conditions and x* in ∂g*(y*), y* in ∂h(x*).

These two sequences {x^{k}} and {y^{k}} are determined in such a way that x^{k+1} (resp. y^{k}) is a solution of the convex program (P_{k}) (resp. (D_{k})) defined by

inf {g(x) − h(x^{k}) − <x − x^{k}, y^{k}> : x in R^{n}}, (P_{k})

inf {h*(y) − g*(y^{k−1})− <y − y^{k−1}, x^{k}> : y in R^{n}} (D_{k}).

The first interpretation of DCA is simple: at each iteration, one replaces in the primal DC program (P_{dc}) the second component h by its affine minorization h_{k}(x) := h(x^{k}) + <x − x^{k}, y^{k}> in a neighbourhood of x^{k}, giving rise to the convex program (P_{k}), whose solution set is nothing but ∂g*(y^{k}). Likewise, the second DC component g* of the dual DC program (D_{dc}) is replaced by its affine minorization (g*)_{k}(y) := g*(y^{k}) + <y − y^{k}, x^{k+1}> in a neighbourhood of y^{k}, to obtain the convex program (D_{k}), whose solution set is ∂h(x^{k+1}). DCA thus performs a double linearization with the help of the subgradients of h and g*, and yields the next scheme:

y^{k} in ∂h(x^{k}); x^{k+1} in ∂g*(y^{k}).
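As an illustration (a minimal sketch, not from the original text), this scheme can be coded directly when both steps are explicit. Take the double-well function f(x) = x^{4}/4 − x^{2} with the assumed DC decomposition g(x) = x^{4}/4, h(x) = x^{2}; then ∂h(x) = {2x} and the minimizer of g(x) − xy solves x^{3} = y, so ∂g*(y) = {y^{1/3}}:

```python
import numpy as np

def dca(grad_h, argmin_g, x0, max_iter=100, tol=1e-10):
    """Generic DCA scheme: y^k in dh(x^k), then x^{k+1} in dg*(y^k).

    argmin_g(y) must return a minimizer of g(x) - <x, y>,
    i.e. an element of dg*(y)."""
    x = x0
    for _ in range(max_iter):
        y = grad_h(x)          # y^k in dh(x^k)
        x_next = argmin_g(y)   # x^{k+1} in dg*(y^k)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Double well f(x) = x**4/4 - x**2 with DC components g(x) = x**4/4, h(x) = x**2.
# dh(x) = {2x}; argmin_x g(x) - x*y solves x**3 = y, i.e. x = cbrt(y).
x_star = dca(grad_h=lambda x: 2.0 * x,
             argmin_g=lambda y: float(np.cbrt(y)),
             x0=1.0)
```

Starting from x^{0} = 1 the iterates x^{k+1} = (2x^{k})^{1/3} converge to the global minimizer √2; from x^{0} = −1 they converge to −√2, while x^{0} = 0 stays at the critical point 0, illustrating the dependence on the initial point.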

The second interpretation of DCA :

Let x* be an optimal solution of primal DC program (P_{dc}) and y* in ∂h(x*). By the transportation of global solutions between (P_{dc}) and (D_{dc}), y* is an optimal solution of the dual DC program (D_{dc}).

Let h_{*} be the affine minorization of h defined by h_{*}(x) := h(x*) + <x − x*, y*> and consider the next convex program

α* := inf { g(x)− h_{*}(x) : x in R^{n}} = inf{f(x) + h(x) − h_{*}(x) : x in R^{n}}. (3)

Since the function f_{*}(x) = f(x) + h(x) − h_{*}(x) is a convex majorization of f, α*≥ α.

But f_{*}(x*) = f(x*) = α. Hence finally α*= α .

On the other hand, the optimal solution set of (3) is ∂g*(y*), which is contained in the optimal solution set of (P_{dc}) by the transportation of global solutions between (P_{dc}) and (D_{dc}). Taking into account this transportation and the decrease of the sequence {g(x^{k}) − h(x^{k})}, one can better understand the role played by the linearized programs (P_{k}), (D_{k}) and (3), and the fact that DCA converges to an optimal solution of (P_{dc}) from a good initial point.

Finally, it is important to point out a deeper insight into DCA. Let h_{l} and (g*)_{l} be the polyhedral convex functions (which underapproximate, respectively, the convex functions h and g*) defined by

h_{l}(x) := sup{h_{i}(x) : i = 0, ..., l}, ∀x in R^{n},

(g*)_{l}(y) := sup{(g*)_{i}(y) : i = 1, ..., l}, ∀y in R^{n},

where h_{i} and (g*)_{i} are the affine minorizations constructed at iteration i.

Let k := inf{l : g(x^{l}) − h(x^{l}) = g(x^{l+1}) − h(x^{l+1})}. If k is finite, then the solutions computed by DCA, x^{k+1} and y^{k}, are global minimizers of the polyhedral DC programs

β_{k} = inf{g(x) − h_{k}(x) : x in R^{n}} (P̃_{k}) and β_{k} = inf{h*(y) − (g*)_{k}(y) : y in R^{n}} (D̃_{k})

respectively. This property holds especially in polyhedral DC programs where DCA has a finite convergence.

The hidden features reside in the following (whether k is finite or equal to +∞):

x^{k+1} (resp. y^{k}) is not only an optimal solution of the linearized program (P_{k}) (resp. (D_{k})) but also an optimal solution of the tighter polyhedral approximation inf{g(x) − h_{k}(x)} (resp. inf{h*(y) − (g*)_{k}(y)}) above,

β_{k} + ε_{k} ≤ α ≤ β_{k},

If h and h_{k} coincide at some optimal solution of (P_{dc}), or g* and (g*)_{k} coincide at some optimal solution of (D_{dc}), then x^{k+1} (resp. y^{k}) is also an optimal solution of (P_{dc}) (resp. (D_{dc})).

These nice features partially explain the efficiency of DCA with respect to related standard methods.

**KEY PROPERTIES OF DCA**


**Flexibility:** the construction of DCA is based on g and h rather than on f itself, and there are as many DCA versions as there are DC decompositions. This is a crucial fact in DC programming. It is important to study various equivalent DC forms of a DC problem, because each DC function f has infinitely many DC decompositions, which have crucial implications for the qualities (speed of convergence, robustness, efficiency, globality of computed solutions, ...) of DCA.

- DCA is a descent method without linesearch, with global convergence (i.e. DCA converges from an arbitrary initial point).

**Return from DC programming to convex programming:** DCA consists in an iterative approximation of a DC program by a sequence of convex programs that are solved by appropriate convex optimization algorithms. This property is called SCA (Successive Convex Approximation) in some recent works.

**Versatility:** with suitable DC decompositions, DCA generates most standard algorithms in convex and nonconvex optimization. Hence DCA offers a wide framework for solving convex/nonconvex optimization problems. In particular, DCA is a global approach to convex programming, i.e., it converges to optimal solutions of convex programs reformulated as DC programs. Consequently, it can be used to build efficient customized algorithms for solving the convex programs generated by DCA.
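As one illustration of this versatility (a standard observation, sketched here for a convex function f and any ρ > 0, not a claim specific to this page), the proximal point algorithm is recovered as DCA applied to a trivial DC decomposition:

```latex
f = g - h, \qquad g := f + \tfrac{\rho}{2}\|\cdot\|^2, \qquad h := \tfrac{\rho}{2}\|\cdot\|^2 .
% DCA: y^k = \nabla h(x^k) = \rho x^k, and then
x^{k+1} = \arg\min_x \{\, g(x) - \langle x, y^k\rangle \,\}
        = \arg\min_x \{\, f(x) + \tfrac{\rho}{2}\|x - x^k\|^2 \,\},
% i.e. the proximal point iteration x^{k+1} = \operatorname{prox}_{f/\rho}(x^k).
```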

**HOW TO USE DCA ? **


(i) Find a DC decomposition g − h of the objective function f.

(ii) Compute subgradients of h and g*

(iii) Implement the algorithm, which consists of three steps:

- Choose x^{0} in R^{n},
- Set y^{k} in ∂h(x^{k}),
- Set x^{k+1} in ∂g*(y^{k}), which quite often leads to solving the convex program

inf{ g(x) − h(x^{k}) − <x − x^{k}, y^{k}> : x in R^{n}},

until convergence.

**IMPORTANT QUESTIONS**


**How to find a ''good'' DC decomposition ?**

**How to find a ''good'' starting point ?**

From the theoretical point of view, the question of optimal DC decompositions is still open. Of course, the answer depends strongly on the specific structure of the problem being considered. In order to tackle the large-scale setting, one tries in practice to choose g and h such that the sequences {x^{k}} and {y^{k}} can be easily calculated, i.e. either they are computable in explicit form or their computations are inexpensive.
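To illustrate such a choice (a sketch under stated assumptions, not from the original text), consider the trust-region subproblem min{½xᵀAx − bᵀx : ||x|| ≤ r} with an indefinite A. Taking g = χ_{ball} + (ρ/2)||x||² and h = (ρ/2)||x||² − ½xᵀAx + bᵀx with ρ ≥ λ_max(A), so that h is convex, both DCA steps become explicit: y^{k} = ∇h(x^{k}) and x^{k+1} is a projection onto the ball:

```python
import numpy as np

# Trust-region subproblem: minimize 0.5*x@A@x - b@x subject to ||x|| <= r,
# via DCA with g = chi_ball + (rho/2)||x||^2 and
# h = (rho/2)||x||^2 - 0.5*x@A@x + b@x  (convex when rho >= lambda_max(A)).
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                              # symmetric, generally indefinite
b = rng.standard_normal(n)
r = 1.0

rho = max(np.linalg.eigvalsh(A).max(), 1e-3)   # assumed choice making h convex

def proj_ball(z, radius):
    nz = np.linalg.norm(z)
    return z if nz <= radius else (radius / nz) * z

x = np.zeros(n)
for _ in range(500):
    y = (rho * np.eye(n) - A) @ x + b          # y^k = grad h(x^k), explicit
    # x^{k+1} = argmin_{||x||<=r} (rho/2)||x||^2 - <x, y^k> = proj(y^k / rho)
    x_next = proj_ball(y / rho, r)
    if np.linalg.norm(x_next - x) < 1e-10:
        break
    x = x_next

f_val = 0.5 * x @ A @ x - b @ x                # decreases monotonically from f(0) = 0
```

Each iteration costs one matrix-vector product and one projection, which is exactly the kind of inexpensive update that makes DCA attractive in the large-scale setting; the choice ρ = λ_max(A) is an assumption of this sketch.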

**GLOBALIZING **

To guarantee globality of the computed solutions or to improve their quality, it is advisable to combine DCA with global optimization techniques (the most popular ones being Branch-and-Bound, SDP and cutting plane techniques, ...) in a deep and efficient way. Note that for a DC function f = g − h, a good convex minorization of f can be taken as g + co(−h), where co stands for the convex envelope; recall that the convex envelope of a concave function on a bounded polyhedral convex set can be easily computed.

**APPLICATIONS **

The major difficulty in nonconvex programming resides in the fact that there is, in general, no practicable global optimality condition. Thus, checking the globality of solutions computed by local algorithms is only possible when optimal values are known a priori, or by comparison with global algorithms, which, unfortunately, cannot be applied to large-scale problems. A pertinent comparison of local algorithms should be based on the following aspects:

- mathematical foundations of the algorithms,
- rate of convergence and running-time,
- ability to treat large-scale problems,
- quality of computed solutions: the lower the corresponding value of the objective, the better the local algorithm,
- degree of dependence on initial points: the larger the set of starting points that ensure convergence of the algorithm to a global solution, the better the algorithm.

DCA seems to meet these criteria, as it has been successfully applied to a wide variety of nonconvex optimization problems:

1. The Trust Region subproblems

2. Nonconvex Quadratic Programs

3. Quadratic Zero-One Programming problems / Mixed Zero-One Programming problems

4. Minimizing a quadratic function under convex quadratic constraints

5. Multiple Criteria Optimization: Optimization over the Efficient and Weakly Efficient Sets

6. Linear Complementarity Problem

7. Nonlinear Bilevel Programming Problems.

8. Optimization in Mathematical Programming problems with Equilibrium Constraints

9. Optimization in Mathematical Finance:

- Portfolio optimization under DC transaction costs and minimal transaction unit constraints
- Portfolio selection problem under buy-in threshold constraints
- Downside risk portfolio selection problem under cardinality constraint
- Collusive game solutions via optimization

10. The strategic supply chain design problem from qualified partner set

11. The concave cost supply chain problem

12. Nonconvex Multicommodity Network Optimization problems

13. Molecular Optimization via the Distance Geometry Problems

14. Molecular Optimization by minimizing the Lennard-Jones Potential Energy

15. Minimizing Morse potential energy

16. Multiple alignment of sequences

17. Phylogenetic analysis

18. Optimization for Restoration of Signals and Images

19. Discrete tomography

20. Tikhonov Regularization for Nonlinear Ill-Posed problems

21. Engineering structures

22. Multidimensional Scaling Problems (MDS)

23. Clustering

24. Fuzzy Clustering

25. Multilevel hierarchical clustering and its application to multicast structures

26. Support Vector Machine

27. Large margin classification to ψ−learning

28. Generating highly nonlinear balanced Boolean Functions in Cryptography

29. Learning with sparsity by minimizing the zero norm (feature selection & classification, sparse regression, ...)

30. Compressed sensing, sparse signal recovery

31. Nonnegative Matrix Factorization

32. SOM Self Organizing Map

33. Community detection in social networks

34. Network Utility Maximization (NUM)

35. Power control problem

36. Dynamic spectrum management in DSL systems

37. MIMO relay optimization

38. Proportional-fairness and max-min optimization of SISO/MISO/MIMO ad-hoc networks

39. Quality of Service (QoS) Routing problems

40. Cross-layer Optimization in Multi-hop Time Division Multiple Access (TDMA) Networks

41. Planning a multisensor multizone search for a target

To be continued ...

**MISCELLANIES **

CAVEAT: The given lists of papers are neither complete, nor do they always provide the most recent reference.

BUT: The intention is to provide a useful page for the entire DC Programming community. To make and keep this page sufficiently complete and up to date I need your help. Please do inform me about necessary updates or missing contributions and mail suggestions or comments to

hoai-an.le-thi @ univ-lorraine

NOTICE: this homepage is devoted to the convex analysis approach to nonconvex programming - DC programming and DCA, and the combined DCA - global optimization techniques.

**TOOLS OF DC PROGRAMMING AND DCA **


- Le Thi Hoai An, Analyse numérique des algorithmes de l'optimisation DC. Approches locales et globales. Code et Simulations numériques en grande dimension. Applications. Thèse de Doctorat (PhD) en Mathématiques Appliquées à l'université de Rouen, 12-1994 .
- Pham Dinh Tao and Le Thi Hoai An, Convex analysis approach to DC programming: Theory, Algorithm and Applications. Acta Mathematica Vietnamica, Vol. 22, Number 1 (1997), pp. 289-355, dedicated to Professor Hoang Tuy on the occasion of his 70th birthday.
- Le Thi Hoai An, Contribution à l'optimisation nonconvexe et l'optimisation globale: Théorie, Algorithmes et Applications , Habilitation à Diriger de Recherches, Université de Rouen, 6-1997.
- Pham Dinh Tao and Le Thi Hoai An, DC optimization algorithms for solving the trust region subproblem. SIAM J. Optimization, Vol. 8, Number 2 (1998), pp. 476-505.
- Le Thi Hoai An, Pham Dinh Tao and Le Dung Muu, Exact penalty in d.c. programming. Vietnam Journal of Mathematics, 27:2 (1999), pp. 169-178.
- Le Thi Hoai An and Pham Dinh Tao, DC Programming: Theory, Algorithms and Applications. The State of the Art. Proceedings of The First International Workshop on Global Constrained Optimization and Constraint Satisfaction (Cocos' 02), 28 pages, Valbonne-Sophia Antipolis, France, October 2-4, 2002.
- Le Thi Hoai An and Pham Dinh Tao, The DC (difference of convex functions) Programming and DCA revisited with DC models of real world nonconvex optimization problems. Annals of Operations Research, 133, pp. 23-46, (2005).
- Le Thi Hoai An, Pham Dinh Tao and Huynh Van Ngai, Exact penalty and error bounds in DC Programming, Journal of Global Optimization, Volume 52, Issue 3, pp. 509-535, March 2012.
- Le Thi Hoai An, Huynh Van Ngai and Pham Dinh Tao, DC Programming and DCA for General DC Programs. Advances in Intelligent Systems and Computing ISBN 978-3-319-06568-7, pp. 15-35, Springer 2014.
- Pham Dinh Tao and Le Thi Hoai An, Recent advances in DC programming and DCA. Transactions on Computational Collective Intelligence, Volume 8342, pp. 1-37, 2014.

**WORKS USING DCA**

**OUR WORKS ON DCA**

See the list of publications here and the PhD. theses here.

**WORKS OF OTHER AUTHORS USING DCA (partial list)**

**133. **Hui Xue, Yu Song, Hai-Ming Xu: Multiple Indefinite Kernel Learning for Feature Selection. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pp. 3210-3216, 2017.

**132. **Yunquan Song, Lu Lin, Ling Jian: Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model. Communications in Nonlinear Science and Numerical Simulation, Volume 36, July 2016

**131. **Moeini Mahdi: The Maximum Ratio Clique Problem: A Continuous Optimization Approach and Some New Results. Proceedings of the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences – MCO, 2015.


**130. **Nguyen Canh Nam, Pham Thi Hoai, Tran Van Huy: DC Programming and DCA Approach for Resource Allocation Optimization in OFDMA/TDD Wireless Networks. Proceedings of 3rd International Conference on Computer Science, Applied Mathematics and Applications – ICCSAMA, 2015.


**129. **Tran Duc Quynh, Nguyen Ba Thang Phan, Nguyen Quang Thuan: A New Approach for Optimizing Traffic Signals in Networks Considering Rerouting. Proceedings of the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015 - Part I, 2015.

**128. **Xin Zhou, Nicole Mayer-Hamblett, Umer Khan, Michael R. Kosorok: Residual Weighted Learning for Estimating Individualized Treatment Rules. Eprint arXiv:1508.03179, 2015.


**127.** Yifei Lou, Stanley Osher and Jack Xin: Computational Aspects of Constrained L_{1}-L_{2} Minimization for Compressive Sensing. Math ucla, 2015.


**126.** Jin Xiao, M. K.-P. Ng and Yu-Fei Yang: On the Convergence of Nonconvex Minimization Methods for Image Recovery. IEEE Transactions on Image Processing, pages 1587-1598, 2015.


**125.** Annabella Astorino, Antonio Fuduli: Semisupervised spherical separation. Applied Mathematical Modelling, 2015.


**124.** Jeyakumar, V. and Lee, G.M. and Linh, N.T.H.: Generalized Farkas’ lemma and gap-free duality for minimax DC optimization with polynomials and robust quadratic optimization. Journal of Global Optimization, 2015.


**123.** Shuo Xiang, Xiaotong Shen, Jieping Ye: Efficient nonconvex sparse group feature selection via continuous and discrete optimization. Artificial Intelligence, Volume 224, July 2015.


**122.** Mehryar Mohri, Andres Munoz Medina: Learning Algorithms for Second-Price Auctions with Reserve. Courant Institute of Mathematical Sciences, 2015.


**121.** Hoang Ngoc Tuan: Linear convergence of a type of iterative sequences in nonconvex quadratic programming. Journal of Mathematical Analysis and Applications, Volume 423, Issue 2, Pages 1311–1319, 2015.


**120. **Jong Chul Ye, Jong Min Kim and Yoram Bresler: Improving M-SBL for Joint Sparse Recovery using a Subspace Penalty. arXiv:1503.06679v2 [cs.IT], 2015.


**119.** Enlong Che: Optimized Transmission and Selection Designs in Wireless Systems. PhD. thesis, University of Technology, Sydney, Australia 2015.


**118.** Nguyen Thai An, Nguyen Mau Nam and Nguyen Dong Yen: A DC algorithm via convex analysis approach for solving a location problem involving sets. arXiv:1404.5113v2 [math.OC], 2015.


**117.** Penghang Yin, Yifei Lou, Qi He and Jack Xin: Minimization of l_{1-2} for Compressed Sensing. SIAM J. SCI. COMPUT Vol. 37, No. 1, pp. A536–A563, 2015.

**116. **G Polya: Nonsmooth DCA. A Fortran 77 solver for nonsmooth DC programming by A. Bagirov, 2015.


**115.** Adrian Schad, Ka L. Law and Marius Pesavento: Rank-Two Beamforming and Power Allocation in Multicasting Relay Networks. arXiv:1502.04861v1 [cs.IT], 2015.

**114.** Kaisa Joki, Adil M. Bagirov, Napsu Karmitsa and Marko M. Mäkelä: New Proximal Bundle Method for Nonsmooth DC Optimization. TUCS Technical Report, 2015.


**113.** Gao Xiangsheng, Wang Min, Li Qi and Zan Tao: Uncertainty analysis of torque-induced bending deformations in ball screw systems. Advances in Mechanical Engineering, 2015.

**112.** A Belyaev: Test rig optimization. PhD. thesis, University of Illinois at Urbana-Champaign, 2014.

**111.** Daquan Feng, Guanding Yu, Yi Yuan-Wu, Li, G.Ye., Gang Feng, Shaoqian Li: Mode switching for device-to-device communications in cellular networks. Signal and Information Processing (GlobalSIP), 2014 IEEE Global Conference on , vol., no., pp.1291-1295, 2014.


**110. **Jian Luo: Quadratic Surface Support Vector Machines with Applications. A dissertation submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the Degree of Doctor of Philosophy, Raleigh, North Carolina, 2014.


**109.** Penghang Yin, Yifei Lou, Qi He and Jack Xin: Minimization Of ℓ_{1}−ℓ_{2} For Compressed Sensing. 2014, URL: http://www.math.uci.edu/~jxin/cam14-01.pdf.


**108.** Y. Lou, T. Zeng, S. Osher and J. Xin: A Weighted Difference of Anisotropic and Isotropic Total Variation Model for Image Processing. URL: ftp://ftp.math.ucla.edu/pub/camreport/cam14-69.pdf.


**107. **Guang Li and Mike R. Belmont: Model predictive control of a sea wave energy converter: a convex approach. Preprints of the 19th World Congress, The International Federation of Automatic Control Cape Town, South Africa, pp. 11987 – 11992, August 24-29, 2014.

**106.** Joaquim Júdice: Optimization with Linear Complementarity Constraints. to appear in Pesquisa Operacional, 2014.

**105. **Xiaolong Long, Knut Solna, Jack Xin: Two L_{1} Based Nonconvex Methods for Constructing Sparse Mean Reverting Portfolios. CAM report 14-55, UCLA, 2014.

**104.** M. Mokhtar, A. Shuib, D. Mohamad, Mathematical Programming Models for Portfolio Optimization Problem: A Review, World Academy of Science, Engineering and Technology International Journal of Social, Human Science and Engineering Vol:8 No:2, pp. 122-129 2014.


**103.** Tu Xu, Junhui Wang, Yixin Fang, A model-free estimation for the covariate-adjusted Youden index and its associated cut-point, arXiv:1402.1835 [stat.AP] 2014.

**102.** Alhussein Fawzi, Mike Davies, Pascal Frossard, Dictionary learning for fast classification based on soft-thresholding, Computer Vision and Pattern Recognition, arXiv:1402.1973 [cs.CV] 2014.

**101.** Mehryar Mohri, Andres Munoz Medina, Learning Theory and Algorithms for Revenue Optimization in Second-Price Auctions with Reserve. Proceedings of the 31st International Conference on Machine Learning, Beijing, China. JMLR: W&CP volume 32, 2014.

**100.** Kuaini Wang, Wenxin Zhu, Ping Zhong, Robust Support Vector Regression with Generalized Loss Function and Applications, Neural Processing Letters 2014.


** 99. **Changzhi Wu, Chaojie Li, Qiang Long, A DC Programming approach for sensor network localization with uncertainties in anchor positions, Journal of Industrial and Management Optimization 10(3), pp. 817-826 2014.

**98. **Theodoros Tsiligkaridis, Etienne Marcheret and Vaibhava Goel: A Difference of Convex Functions Approach to Large-Scale Log-Linear Model Estimation. IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 11, pp. 2255-2266, 2013.

**97.** Timothy Hunter, Jerome Thai, Kene Akametalu, Alexandre Bayen and Claire Tomlin: Inverse Covariance Estimation from Data with Missing Values using the Concave-Convex Procedure. Optimization for Machine Learning OPT2013, 5 pages, 2013.

**96.** W. Guan, A. Gray, Sparse high-dimensional fractional-norm support vector machine via DC programming, Computational Statistics and Data Analysis, 67(2013), pp. 136-148, 2013.

**95.** Takanori MAEHARA, Kazuo MUROTA, A Framework of Discrete DC Programming by Discrete Convex Analysis , MATHEMATICAL ENGINEERING TECHNICAL REPORTS 2013.

**94.** Donghui Fang, Zhe Chen, Total Lagrange duality for DC infinite optimization problems, Fixed Point Theory and Applications 2013:269, 2013.

**93.** Martin Slawski, Matthias Hein, Pavlo Lutsik, Matrix factorization with Binary Components, Poster Session in NIPS conference 2013.

**92.** Yang, Liming and Ju, Ribo, A DC programming approach for feature selection in the Minimax Probability Machine, International Journal of Computational Intelligence Systems 2013.

**91.** Fu Wenlong, Du Tingsong, Zhai Junchen, A New Branch and Bound Algorithm Based on D.C.Decomposition about Nonconvex Quadratic Programming with Box Constrained, Journal of Mathematical Study 2013.

**90.** Mehryar Mohri, Andres Munoz Medina, Learning Theory and Algorithms for Revenue Optimization in Second-Price Auctions with Reserve, arXiv:1310.5665, 2013.

**89. **Ming Huang, Li-Ping Pang, Zun-Quan Xia, The space decomposition theory for a class of eigenvalue optimizations, Computational Optimization and Applications, December 2013.

**88.** Wei Pan, Xiaotong Shen, Binghui Liu, Cluster Analysis: Unsupervised Learning via Supervised Learning with a Non-convex Penalty, Journal of Machine Learning Research 14, 1865-1889 (2013).

**87.** Sun, Qian and Xiang, Shuo and Ye, Jieping, Robust principal component analysis via capped norms. Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '13, pp. 311-319 (2013).

**86.** Alberth Alvarado, Gesualdo Scutari, Jong-Shi Pang, A New Decomposition Method for Multiuser DC-Programming and its Applications, submitted to IEEE Transactions on Signal Processing (2013).

**85.** Liming Yang, Laisheng Wang, A class of semi-supervised support vector machines by DC programming, Advances in Data Analysis and Classification, Springer Berlin Heidelberg, pp. 1--17 (2013).

**84.** F. De La Torre, M.J. Black, Robust principal component analysis for computer vision. Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 1, pp. 362-369, 2001. doi: 10.1109/ICCV.2001.937541.

**83.** Meihua Wang, Fengmin Xu, Chengxian Xu, A Branch-and-Bound Algorithm Embedded with DCA for DC Programming, Hindawi Publishing Corporation, Mathematical Problems in Engineering (2012).

**82.** D.H. Fang, C. Li, X.Q. Yang, Asymptotic closure condition and Fenchel duality for DC optimization problems in locally convex spaces, Nonlinear Analysis: Theory, Methods & Applications, Vol. 75, Issue 8 (2012), pp. 3672-3681.

**81.** Xiaotong Shen, Hsin-Cheng Huang, Wei Pan, Simultaneous supervised clustering and feature selection over a graph, Biometrika 2012 : ass038v1-ass038, (2012).

**80.** Shufang Wang, Artificial Mixture Methods for Correlated Nominal Responses and Discrete Failure Time, PhD thesis (Biostatistics), University of Michigan (2012).

**79.** Xilan Tian, Apprentissage et Noyau pour les Interfaces Cerveau-machine (Learning and Kernels for Brain-Machine Interfaces), PhD thesis, Institut National des Sciences Appliquées de Rouen (2012).

**78.** Annabella Astorino, Antonio Fuduli, Manlio Gaudioso, Margin maximization in spherical separation, Comput Optim Appl 53 (2012) , pp. 301–322.

**77.** Hoang, N.T., Convergence Rate of the Pham Dinh–Le Thi Algorithm for the Trust-Region Subproblem, J Optim Theory Appl 154 (2012), pp. 904–915.

**76.** Liming Yang, Qun Sun, Recognition of the hardness of licorice seeds using a semi-supervised learning method and near-infrared spectral data, Chemometrics and Intelligent Laboratory Systems, Volume 114, 15 May 2012, Pages 109-115.

**75.** Xilan Tian, Gilles Gasso, Stéphane Canu, A multiple kernel framework for inductive semi-supervised SVM learning, Neurocomputing, Volume 90, 1 August 2012, pp. 46-58.

**74.** Hoai Minh, L., Adnan, Y., Riadh, M., DCA for solving the scheduling of lifting vehicle in an automated port container terminal, Comput Manag Sci 9 (2012), pp. 273–286.

**73.** Liming Yang, Laisheng Wang, A class of smooth semi-supervised SVM by difference of convex functions programming and algorithm, Knowledge-Based Systems, Available online (2012).

**72.** Yiqing Zhong, El-Houssaine Aghezzaf, Combining DC-programming and steepest-descent to solve the single-vehicle inventory routing problem, Computers & Industrial Engineering, Vol. 61, Issue 2 (2011), pp. 313-321.

**71.** Kuaini Wang, Ping Zhong, Yaohong Zhao, Training Robust Support Vector Regression via D. C. Program, Journal of Information & Computational Science 7: 12 (2010), pp. 2385–2394.

**70.** Sun, X. L., Li, J. L. & Luo, H. Z., Convex relaxation and Lagrangian decomposition for indefinite integer quadratic programming. Optimization: A Journal of Mathematical Programming and Operations Research, 59(5), 627-641 (2010).

**69.** Muu, L. & Quoc, T., One step from DC optimization to DC mixed variational inequalities. Optimization: A Journal of Mathematical Programming and Operations Research, 59(1), 63-76 (2010).

**68.** Kuaini Wang, Ping Zhong, Yaohong Zhao, Training Robust Support Vector Regression via D. C. Program, Journal of Information & Computational Science 7: 12 (2010), pp. 2385–2394.

**67.** Vucic, Nikola, Shi, Shuying, Schubert, Martin, DC programming approach for resource allocation in wireless networks, Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), Proceedings of the 8th International Symposium (2010), pp.380-386.

**66.** Yoshihiro Kanno, Makoto Ohsaki, A Sequential Second-Order Cone Programming for Stability Analysis of Nonsmooth Mechanical Systems, 8^{th} World Congress on Structural and Multidisciplinary Optimization, Lisbon, Portugal (2009).

**65.** Bharath K. Sriperumbudur, Gert R. G. Lanckriet, On the Convergence of the Concave-Convex Procedure, NIPS (2009).

**64.** Nikolaos Vasiloglou, Alexander G. Gray, David V. Anderson, Non-Negative Matrix Factorization, Convexity and Isometry, Proceedings of the SIAM Conference on Data Mining 2009, pp. 673-684.

**63.** Effrosyni Kokiopoulou, Pascal Frossard, Minimum Distance between Pattern Transformation Manifolds: Algorithm and Applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 7, pp. 1225-1238, June 2009.

**62.** Antonio Fuduli, A. Astorino, Manlio Gaudioso, DC models for spherical separation, Journal of Global Optimization, Volume 48, Issue 4, pp 657-669, 2010.

**61.** Chun-Nam John Yu, Thorsten Joachims, Learning Structural SVMs with Latent Variables, Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009.

**60.** Yiqing Zhong, El-Houssaine Aghezzaf, A DC Programming Approach to solve the Single-Vehicle Inventory Routing Problem, in the Proceedings of the international conference CIE39, July 6-8, 2009.

**59.** Junhui Wang, Xiaotong Shen, Wei Pan, On Efficient Large Margin Semisupervised Learning: Method and Theory, Journal of Machine Learning Research 10 (2009), pp. 719-742.

**58.** E. Kokiopoulou, D. Kressner, N. Paragios, P. Frossard, Optimal image alignment with random projections of manifolds: algorithm and geometric analysis, Proceedings of EUSIPCO, 2009.

**57.** Gianni Di Pillo, Almerico Murli, On Efficient Large Margin Semisupervised Learning: Method and Theory, Journal of Machine Learning Research 10 (2009) 719-742.

**56.** Yiming Ying, Kaizhu Huang, Colin Campbell, Enhanced protein fold recognition through a novel data integration approach, BMC Bioinformatics 2009; 10: 267.

**55.** Bharath Sriperumbudur, Gert Lanckriet, On the Convergence of the Concave-Convex Procedure, NIPS Conferences 2009.

**54.** Zheng Zhao, Liang Sun, Shipeng Yu, Huan Liu, Jieping Ye, Multiclass Probabilistic Kernel Discriminant Analysis, Proceedings of the 21st International Joint Conference on Artificial Intelligence, pp. 1363-1368, 2009.

**53.** Yoshihiro Kanno, Makoto Ohsaki, Optimization-based stability analysis of structures under unilateral constraints, International Journal for Numerical Methods in Engineering, Volume 77, Issue 1, pp. 90-125 (2009).

**52.** Gasso, Gilles; Rakotomamonjy, Alain; Canu, Stéphane, Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming, IEEE Transactions on Signal Processing, vol. 57, issue 12, pp. 4686-4698 (2009).

**51.** Yichao Wu, Yufeng Liu, Variable selection in quantile regression, Statistica Sinica 19, pp. 801-817 (2009).

**50.** Cambini, R. and Salvi, F., A branch and reduce approach for solving a class of low rank d.c. programs, J. Comput. Appl. Math. 233, 2 (Nov. 2009), 492-501.

**49.** Kai Zhang, Ivor W. Tsang, James T. Kwok, Maximum margin clustering made practical, IEEE Transactions on Neural Networks, vol. 20, no. 4, pp. 583-596, April 2009.

**48.** Kirk Sturtz, Gregory Arnold, and Matthew Ferrara, DC optimization modeling for shape-based recognition, Proc. SPIE 7337, 73370O (2009).

**47.** Xianyun Wang, Optimality Conditions and Duality for DC Programming in Locally Convex Spaces, Journal of Inequalities and Applications (2009).

**46.** Akoa, F.B., Combining DC Algorithms (DCAs) and Decomposition Techniques for the Training of Nonpositive–Semidefinite Kernels, Neural Networks, IEEE Transactions on , vol.19, no.11 (2008), pp.1854-1872.

**45.** Gasso, G., Rakotomamonjy, A., Canu, S., Solving non-convex lasso type problems with DC programming, Machine Learning for Signal Processing, MLSP 2008. IEEE Workshop (2008), pp.450-455.

**44.** Jörg Kappes, Christoph Schnörr, MAP-Inference for Highly-Connected Graphs with DC-Programming. Proceedings of the 30th DAGM Symposium on Pattern Recognition, Gerhard Rigoll (Ed.), Springer-Verlag, Berlin, Heidelberg (2008), pp. 1-10.

**43.** Jörg Kappes and Christoph Schnörr, MAP-Inference for Highly-Connected Graphs with DC-Programming, Pattern Recognition, Lecture Notes in Computer Science Volume 5096, pp 1-10 (2008).

**42.** F. Akoa, Combining DC Algorithms (DCAs) and Decomposition Techniques for the Training of Nonpositive–Semidefinite Kernels, IEEE Transactions on Neural Networks, Vol. 19, No. 11, 2008.

**41.** Peng Zhang, Yingjie Tian, Zhiwang Zhang, Aihua Li, Xingquan Zhu, Select Objective Functions for Multiple Criteria Programming Classification, wi-iat, vol. 3, pp.420-423, 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 2008.

**40.** Liang Fang, Li Sun, Guoping He, Fangying Zheng, A New Algorithm for Quadratic Programming with Interval-Valued Bound Constraints, International Conference on Natural Computation, vol. 6, pp. 433-437, 2008 Fourth International Conference on Natural Computation, 2008.

**39.** D. Wozabal, A new method for Value-at-Risk constrained optimization using the Difference of Convex Algorithm (DCA), Optimization Online 2008.

**38.** Riccardo Cambini, Claudio Sodini, A computational comparison of some branch and bound methods for indefinite quadratic programs, Central European Journal of Operations Research, Volume 16, pp. 139-152, June 2008.

**37.** Daili, Ch.; Daili, N., Duality gap and quadratic programming, Part III: dual bounds of a D.C. quadratic programming problem and applications. The Free Library. (2008)

**36.** C. Schnörr, Signal and Image Approximation with Level-Set Constraints, Computing, Volume 81, Issue 2-3, pp. 137-160, 2007.

**35.** Yoshihiro Kanno, Makoto Ohsaki, Eigenvalue Optimization of Structures via Polynomial Semidefinite Programming. Mathematical engineering technical reports, 2007.

**34.** Bharath K. Sriperumbudur, David A. Torres, Gert R. G. Lanckriet, Sparse Eigen Methods by D.C. Programming, Proceedings of the 24th International Conference on Machine Learning, 2007.

**33.** David A. Torres, Douglas Turnbull, Bharath K. Sriperumbudur, Luke Barrington, Gert R. G. Lanckriet, Finding Musically Meaningful Words by Sparse CCA, ePrint 2007.

**32.** Wang, J., Shen, X., and Pan, W., On transductive support vector machines. Proceedings of the International Conference on Machine Learning, ICML 2007.

**31.** Wang, J., and Shen, X., Large margin semi-supervised learning, JMLR 8 (2007), pp. 1867-1891.

**30.** Schnörr, C., Schüle, T., Weber, S., Variational Reconstruction with DC Programming. Advances in Discrete Tomography and Its Applications, Applied and Numerical Harmonic Analysis, pp. 227-243 (2007).

**29.** Bierlaire, Michel; Thémans, Michaël; Zufferey, Nicolas, Nonlinear global optimization relevant to discrete choice models estimation, 7th Swiss Transport Research Conference, Ascona, Switzerland, September 12-14, 2007.

**28.** Dinh Nho Hao, Nguyen Trung Thanh and H. Sahli, Numerical Solution to a Parabolic Boundary Control Problem. Advances in Deterministic and Stochastic Analysis, pp. 115-130, World Scientific, 2007.

**27.** Bharath K. Sriperumbudur, David A. Torres, and Gert R. G. Lanckriet, Sparse eigen methods by D.C. programming. In Proceedings of the 24th international conference on Machine learning (ICML '07), Zoubin Ghahramani (Ed.). ACM, New York, NY, USA, (2007), pp. 831-838.

**26.** Andreas Argyriou, et al, A DC-programming algorithm for kernel selection, Proceedings of the 23rd international conference on Machine learning (ICML '06). ACM, New York, NY, USA (2006), pp. 41-48.

**25.** Liu, Yufeng and Shen, Xiaotong, Multicategory psi-learning, Journal of the American Statistical Association, 101(474), 500-509 (2006).

**24.** Ronan Collobert, Fabian Sinz, Jason Weston, Léon Bottou, Trading Convexity for Scalability, Proceedings of the International Conference on Machine Learning, ICML 2006.

**23.** Sijin Liu, Computational developments for psi-learning, PhD thesis, Univ. Ohio, 2006.

**22.** Pak-Ming Cheung, James T. Kwok, A regularization framework for multiple-instance learning, Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, Pennsylvania, pp. 193-200, 2006.

**21.** Alberto Bemporad, Advanced control methodologies for hybrid dynamical systems, research program, Università degli Studi di Siena (2006).

**20.** S. Weber, T. Schüle, A. Kuba and C. Schnörr, Binary Tomography with Deblurring. In: Proc. 11th International Workshop on Combinatorial Image Analysis (IWCIA '06), LNCS Vol. 4040, Springer, June 2006.

**19.** Andreas Argyriou, Raphael Hauser, Charles A. Micchelli, Massimiliano Pontil, A DC-Programming Algorithm for Kernel Selection, Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006.

**18.** S. Weber, A. Nagy, T. Schüle, C. Schnörr and A. Kuba, A Benchmark Evaluation of Large-Scale Optimization Approaches to Binary Tomography. In: Proc. 13th International Conference on Discrete Geometry for Computer Imagery (DGCI '06), LNCS Vol. 4245, Springer, Oct. 2006.

**17.** Steven E. Ellis and Madhu V. Nayakkankuppam, Phylogenetic Analysis via DC Programming, preprint, Department of Mathematics & Statistics, UMBC, 1000 Hilltop Circle, Baltimore, MD 21250 (2005).

**16.** Yufeng Liu, Xiaotong Shen, and Hani Doss, Multicategory ψ-Learning and Support Vector Machine: Computational Tools, Journal of Computational and Graphical Statistics, 14(1): 219-236 (2005).

**15.** J.E. Harrington, B.F. Hobbs, J.S. Pang, A. Liu, G. Roch, Collusive game solutions via optimisation, Math. Program. Ser. B (2005), 29 pages.

**14.** J. Neumann, C. Schnörr, G. Steidl, Combined SVM-Based Feature Selection and Classification. Machine Learning, Volume 61, Issue 1-3, pp. 129-150, November 2005.

**13.** T. Schüle, C. Schnörr, S. Weber, and J. Hornegger . Discrete tomography by convex-concave regularization and d.c. programming. Discr. Appl. Math., 151:229-243, Oct 2005.

**12.** T. Schüle, S. Weber, C. Schnörr, Adaptive Reconstruction of Discrete-Valued Objects from few Projections. Electr. Notes in Discr. Math., 20:365-384, 2005.

**11.** S. Weber, T. Schüle, C. Schnörr, Prior Learning and Convex-Concave Regularization of Binary Tomography. Electr. Notes in Discr. Math., 20:313-327, 2005.

**10.** S. Weber, C. Schnörr, T. Schüle, J. Hornegger, Binary Tomography by Iterating Linear Programs, in R. Klette, R. Kozera, L. Noakes and J. Weickert (Eds.), Computational Imaging and Vision - Geometric Properties from Incomplete Data, Kluwer Academic Press, 2005.

**09.** Kurt M. Anstreicher and Samuel Burer, D.C. Versus Copositive Bounds for Standard QP, Journal of Global Optimization, Volume 33, Issue 2, pp. 299-312, 2005.

**08.** Krause, N., Singer, Y., Leveraging the margin more carefully. International Conference on Machine Learning, ICML (2004).

**07.** J. Neumann, C. Schnörr, G. Steidl, SVM-based Feature Selection by Direct Objective Minimisation, Pattern Recognition, Proc. of 26th DAGM Symposium, LNCS, Springer, August 2004.

**06.** S. Weber, T. Schüle, J. Hornegger, C. Schnörr, Binary Tomography by Iterating Linear Programs from Noisy Projections, Proceedings of the International Workshop on Combinatorial Image Analysis (IWCIA 2004), Auckland, New Zealand, December 1-3, 2004, Lecture Notes in Computer Science, Springer Verlag.

**05.** X. Shen, G.C. Tseng, X. Zhang, and W.H. Wong, On psi-Learning, Journal of the American Statistical Association, 98, 724-734 (2003).

**04.** P. Mahey, T.Q. Phong and H.P.L. Luna, Separable convexification and DCA techniques for capacity and flow assignment problems, RAIRO-Recherche Opérationnelle 35 (2001), pp. 269-281.

**03.** Le Dung Muu and Jianming Shi, D.C. Optimization Methods for Solving Minimum Maximal Network Flow Problem. AMS Classification 2000.

**02.** Quoc V. Le, Alex Smola, Olivier Chapelle, Choon Hui Teo, Optimization of Ranking Measures, Journal of Machine Learning Research, preprint.

**01.** X. Shen, From large margin classification to ψ-learning (28 pages), School of Statistics, University of Minnesota.


**SPECIAL SESSIONS IN INTERNATIONAL CONFERENCES**


- ”DC Programming: Theory, Algorithms and Applications” organized by H.A Le Thi, T. Pham Dinh, the First International Conference on Optimization Methods and Software, December 15-18, 2002, Hangzhou, China.
- ”DC Programming: Theory, Algorithms and Applications” organized by H.A Le Thi, T. Pham Dinh, 18th International Symposium on Mathematical Programming (ISMP 2003), Copenhagen, Denmark, August 18-22, 2003.
- ”DC Programming: Theory, Algorithms and Applications” organized by H.A Le Thi, T. Pham Dinh, The Sixth International Conference on Optimization: Techniques and Applications (ICOTA6, 2004), Ballarat, Australia, December 9-11, 2004.
- ”DC Programming: Theory, Algorithms and Applications” organized by H.A Le Thi, T. Pham Dinh, SIAM Conference on Optimization, May 15-19, 2005, Stockholm, Sweden.
- ”DC Programming: Theory, Algorithms and Applications” organized by H.A Le Thi, T. Pham Dinh, 7th Congress of the Société Française de Recherche Opérationnelle et d'Aide à la Décision, Lille, February 6-8, 2006 (2 sessions).
- ”DC Programming: Theory, Algorithms and Applications” in 3rd International Conference on Computational Management Science (CMS), May 17-19, 2006, Amsterdam (2 sessions).
- ”Nonconvex Programming: Local and Global Approaches. Theory, Algorithms and Applications”, organized by H.A Le Thi, T. Pham Dinh, FrancoroV/Roadef 07, Grenoble, February 19-23, 2007 (3 sessions).
- Optimization in Data Mining, Mini-Symposium organized by Le Thi Hoai An (2 sessions) in the international conference NCP’07 - ”Nonconvex Programming: Local and Global Approaches. Theory, Algorithms and Applications”, December 17-21, 2007, INSA-Rouen.
- DC programming and DCA, Mini-Symposium organized by Le Thi Hoai An (3 sessions) in the international conference NCP’07 - ”Nonconvex Programming: Local and Global Approaches. Theory, Algorithms and Applications”, December 17-21, 2007, INSA-Rouen.
- ”Nonconvex Programming: Local and Global Approaches" in Computational Management Science, organized by H.A Le Thi, T. Pham Dinh in 4th International Conference on Computational Management Science, April 20-22, 2007, Imperial College, London, England.
- ”DC Programming Approach to Finance, Information Security, Transport-Logistic, Communication Networks and Data Mining”, organized by H.A Le Thi, T. Pham Dinh in Computational Management Science, May 1-3, 2009, Geneva, Switzerland (3 sessions).
- Stream “Nonconvex programming: local and global approaches” organized by H.A Le Thi, T. Pham Dinh in EURO 2009, ”OR creating competitive advantage”, the 23rd European Conference on Operational Research, Bonn, July 5-8, 2009 (3 sessions).
- Symposium ”Modelling and Optimization Techniques in Information Systems, Database Systems and Industrial Systems” MOT-ACIIDS 2010 organized by H.A Le Thi in the 2nd Asian Conference on Intelligent Information and Database Systems, http://aciids2010.hueuni.edu.vn , 24 - 26 March 2010, Hue City, Vietnam (3 sessions).
- ”DC Programming: Local and Global Approaches” organized by H.A Le Thi, T. Pham Dinh in Computational Management Science 2010, International Conference on Computational Management Science, Vienna, Austria, July 28-30, 2010 (3 sessions).
- Stream “Nonconvex programming: local and global approaches” organized by H.A Le Thi, T. Pham Dinh in 24th European Conference on Operational Research, Lisbon, Portugal, July 11-14, 2010, EURO XXIV (3 sessions).
- ”DC Programming and DCA: Theory, Algorithms and Applications. The State of the Art” organized by H.A Le Thi, T. Pham Dinh in the 8th International Conference on Optimization: Techniques and Applications (ICOTA 8), December 10-13, 2010. Shanghai, China (2 sessions).
- Workshop on Recent advances in Modelling and Optimization Techniques for Intelligent Computing in Information Systems and Industrial Engineering, organized by H.A Le Thi, T. Pham Dinh in ACIIDS 2011, The 3rd Asian Conference on Intelligent Information and Database Systems, April 20-22, 2011, Daegu, Korea.
- “Recent advances on Modelling and Optimization Techniques for Intelligent Computing in Industrial Engineering and Systems Management” organized by H.A Le Thi, T. Pham Dinh, D.T. Pham in 4th International Conference on Industrial Engineering and Systems Management, IESM 2011, May 25-27, 2011, Metz, France.
- “Nonconvex programming - local and global approaches” organized by H.A Le Thi, 12th annual congress of the Société Française de Recherche Opérationnelle et d'Aide à la Décision, ROADEF 2011, Saint-Étienne, March 2-4, 2011.
- “Nonconvex Programming” organized by H.A Le Thi, T. Pham Dinh in stream Continuous and Non-Smooth Optimization, the 19th Triennial Conference of the International Federation of Operational Research Societies (IFORS2011), Australia July 10-15.
- Stream ”DC Programming and DCA: Theory, Algorithms and Applications – Local and Global Approaches” organized by H.A Le Thi, T. Pham Dinh in EURO 2012, the 25th European Conference on Operational Research, Vilnius, Lithuania, July 2012.