Archive ouverte UNIGE | last documents for author 'Jean-Philippe Vial'https://archive-ouverte.unige.ch/Latest objects deposited in the Archive ouverte UNIGE for author 'Jean-Philippe Vial'engACCPM with a nonlinear constraint and an active set strategy to solve nonlinear multicommodity flow problemshttps://archive-ouverte.unige.ch/unige:111375https://archive-ouverte.unige.ch/unige:111375abstract not availableFri, 23 Nov 2018 11:48:08 +0100Automatic Formulation of Stochastic Programs Via an Algebraic Modeling Languagehttps://archive-ouverte.unige.ch/unige:111366https://archive-ouverte.unige.ch/unige:111366abstract not availableFri, 23 Nov 2018 10:50:30 +0100Solving the p-Median Problem with a Semi-Lagrangian Relaxationhttps://archive-ouverte.unige.ch/unige:111365https://archive-ouverte.unige.ch/unige:111365abstract not availableFri, 23 Nov 2018 10:37:23 +0100Oracle-based optimization applied to climate model calibrationhttps://archive-ouverte.unige.ch/unige:111364https://archive-ouverte.unige.ch/unige:111364abstract not availableFri, 23 Nov 2018 10:36:05 +0100Solving nonlinear multicommodity flow problems by the analytic center cutting plane methodhttps://archive-ouverte.unige.ch/unige:111358https://archive-ouverte.unige.ch/unige:111358abstract not availableFri, 23 Nov 2018 10:26:11 +0100Survivability in Telecommunication Networkshttps://archive-ouverte.unige.ch/unige:5991https://archive-ouverte.unige.ch/unige:5991abstract not availableThu, 15 Apr 2010 14:21:51 +0200Computing Maximum Likelihood Estimators of Convex Density Functionshttps://archive-ouverte.unige.ch/unige:5978https://archive-ouverte.unige.ch/unige:5978We consider the problem of estimating a density function that is known in advance to be convex. The maximum likelihood estimator is then the solution of a linearly constrained convex minimization problem. This problem turns out to be numerically difficult.
We show that interior point algorithms perform well on this class of optimization problems, though for large samples, numerical difficulties are still encountered. To eliminate those difficulties, we propose a clustering scheme that is reasonable from a statistical point of view. We display results for problems with up to 40000 observations. We also give a typical picture of the estimated density: a piecewise linear function with only very few piecesThu, 15 Apr 2010 14:21:44 +0200Using an Interior Point Method for the Master Problem in a Decomposition Approachhttps://archive-ouverte.unige.ch/unige:5964https://archive-ouverte.unige.ch/unige:5964We address some of the issues that arise when an interior point method is used to handle the master problem in a decomposition approach. The main points concern the efficient exploitation of the special structure of the master problem to reduce the cost of a single interior point iteration. The particular structure is the presence of GUB constraints and the natural partitioning of the constraint matrix into blocks built of cuts generated by different subproblems. The method can be used in a fairly general case, i.e., in any decomposition approach whenever the master is solved by an interior point method in which the normal equations are used to compute orthogonal projections. Computational results demonstrate its advantages for one particular decomposition approach. The Analytic Center Cutting Plane Method (ACCPM) is applied to solve large scale nonlinear multicommodity network flow problems (up to 5000 arcs and 10000 commodities)Thu, 15 Apr 2010 14:21:38 +0200Optimal Joint Syntheses of Base and Spare Telecommunication Networkshttps://archive-ouverte.unige.ch/unige:5963https://archive-ouverte.unige.ch/unige:5963A telecommunication network is survivable if, following an arc failure, the interrupted traffic can be redirected through the network via existing excess capacity.
The standard survivability problem consists in finding the least-cost investment in spare capacity to allow rerouting of a given base traffic. In this paper we consider the more involved problem of simultaneously designing the base traffic and the spare capacity investment. If the investment costs are linear, the problem can be formulated as a large scale structured linear program that we solve using different decomposition techniques, including the analytic center cutting plane method. The global analysis is performed under the assumption of local rerouting of the traffic, i.e., the interrupted traffic creates a local demand between the end points of the broken edge. More sophisticated telecommunication network management makes it possible to break down the interrupted traffic into its individual demand components. We do not treat the simultaneous design of the base traffic and the spare capacity under a global rerouting strategy. We propose a two-step procedure. We first determine the base traffic and the spare capacity under the local rerouting strategy. We then compute the necessary spare capacity to secure the base traffic under the global rerouting strategy.
Our solution techniques are tested on a set of realistic problemsThu, 15 Apr 2010 14:21:37 +0200A Library for Convex Optimization Based on an Analytic Center Cutting Plane Methodhttps://archive-ouverte.unige.ch/unige:5960https://archive-ouverte.unige.ch/unige:5960abstract not availableThu, 15 Apr 2010 14:21:36 +0200A Short Note on the Comparative Behaviour of Kelley's Cutting Plane Method and the Analytic Center Cutting Plane Methodhttps://archive-ouverte.unige.ch/unige:5956https://archive-ouverte.unige.ch/unige:5956abstract not availableThu, 15 Apr 2010 14:21:34 +0200A Generic Path-Following Algorithm with a Sliding Constraint and its Application to Linear Programming and the Computation of Analytic Centershttps://archive-ouverte.unige.ch/unige:5952https://archive-ouverte.unige.ch/unige:5952We propose a generic path-following scheme which is essentially a method of centers that can be implemented with a variety of algorithms. The complexity estimate is computed on the sole assumption that a certain local quadratic convergence property holds, independently of the specific algorithmic procedure in use, primal, dual or primal-dual. We show convergence in iterations. We verify that the primal, dual and primal-dual algorithms satisfy the local quadratic convergence property. The method can be applied to solve the linear programming problem (with a feasible start) and to compute the analytic center of a bounded polytope. The generic path-following scheme easily extends to the logarithmic barrier approachThu, 15 Apr 2010 14:21:33 +0200Advanced Mathematical Programming Modeling to Assess the Benefits from International CO2 Abatement Cooperationhttps://archive-ouverte.unige.ch/unige:5923https://archive-ouverte.unige.ch/unige:5923The aim of this paper is to propose a method for coupling national energy models, to identify the dividends of international cooperation in atmospheric pollution abatement and efficient energy use.
It also indicates how to solve the resulting large-scale multinational model. Finally, it simulates a cooperation of four European countries for curbing their CO2 emissionsThu, 15 Apr 2010 14:21:16 +0200A Lagrangian Relaxation of the Capacitated Multi-Item Lot Sizing Problem Solved with an Interior Point Cutting Plane Algorithmhttps://archive-ouverte.unige.ch/unige:5922https://archive-ouverte.unige.ch/unige:5922The capacitated multi-item lot sizing problem consists of finding a production schedule that minimizes over a finite number of periods the total production, holding inventory, and setup costs subject to demand and capacity constraints. The CLSP problem is NP-hard, while the problem of finding a feasible solution, which is polynomial if there are no set-up times, becomes NP-complete when set-up times are included. Approximate solutions can be obtained by heuristics. In this paper we consider an approach based on a Lagrangian relaxation of the capacity constraints. The relaxation is used in two ways. First, it generates a lower bound for the optimal value. Second, the primal and dual solutions of the relaxation (if available) are used to generate integer feasible solutions by primal or dual heuristics. We compare three methods of solving the Lagrangian relaxation: the subgradient method, Kelley's cutting plane method - also known as Dantzig-Wolfe decomposition - and the analytic center cutting plane method. We conclude that the analytic center cutting plane method performs as well as, and sometimes better than, subgradient optimization or Kelley's cutting plane methodThu, 15 Apr 2010 14:21:15 +0200A Survey of Algorithms for Convex Multicommodity Flow Problemshttps://archive-ouverte.unige.ch/unige:5920https://archive-ouverte.unige.ch/unige:5920There are many problems related to the design of networks. Among them, the message routing problem plays a determining role in the optimization of network performance.
Much of the motivation for this work comes from this problem, which is shown to belong to the class of nonlinear convex multicommodity flow problems. This paper emphasizes the message routing problem in data networks, but it includes a broader literature overview of convex multicommodity flow problems. We present and discuss the main solution techniques proposed for solving this class of large-scale convex optimization problems. We conduct some numerical experiments on the message routing problem with different techniquesThu, 15 Apr 2010 14:21:14 +0200Structure Exploiting Tool in Algebraic Modeling Languageshttps://archive-ouverte.unige.ch/unige:5918https://archive-ouverte.unige.ch/unige:5918A new concept is proposed for linking an algebraic modeling language and a structure exploiting solver. SPI (Structure Passing Interface) is a program that enables retrieving the structure from the anonymous mathematical program built by the algebraic modeling language. SPI passes the special structure of the problem to a SES (Structure Exploiting Solver). An integration of SPI and SES leads to a SET (Structure Exploiting Tool), which can be integrated with any algebraic modeling languageThu, 15 Apr 2010 14:21:14 +0200Homogeneous analytic center cutting plane methods for convex problems and variational inequalitieshttps://archive-ouverte.unige.ch/unige:5915https://archive-ouverte.unige.ch/unige:5915In this paper we consider a new analytic center cutting plane method in a projective space. We prove the efficiency estimates for the general scheme and show that these results can be used in the analysis of the feasibility problem, the variational inequality problem and the problem of constrained minimization.
Our analysis is valid even for problems whose solution belongs to the boundary of the domainThu, 15 Apr 2010 14:21:12 +0200A Two-Cut Approach in the Analytic Center Cutting Plane Methodhttps://archive-ouverte.unige.ch/unige:5913https://archive-ouverte.unige.ch/unige:5913abstract not availableThu, 15 Apr 2010 14:21:11 +0200Interior Point Methods for Nondifferentiable Optimizationhttps://archive-ouverte.unige.ch/unige:5909https://archive-ouverte.unige.ch/unige:5909We describe the analytic center cutting plane method and its relationship to classical methods of nondifferentiable optimization and column generation. Implementation issues are also discussed, and current applications are listedThu, 15 Apr 2010 14:21:10 +0200Warm Start and ε-Subgradients in a Cutting Plane Scheme for Block-Angular Linear Programshttps://archive-ouverte.unige.ch/unige:5910https://archive-ouverte.unige.ch/unige:5910This paper addresses the issues involved with an interior point-based decomposition applied to the solution of linear programs with a block-angular structure. Unlike classical decomposition schemes that use the simplex method to solve subproblems, the approach presented in this paper employs a primal-dual infeasible interior point method. The above-mentioned algorithm offers a perfect measure of the distance to optimality, which is exploited to terminate the algorithm earlier (with a rather loose optimality tolerance) and to generate ε-subgradients. In the decomposition scheme, subproblems are sequentially solved for varying objective functions. It is essential to be able to exploit the optimal solution of the previous problem when solving a subsequent one (with a modified objective). A warm start routine is described that deals with this problem.
The proposed approach has been implemented within the context of two optimization codes freely available for research use: the Analytic Center Cutting Plane Method (ACCPM), an interior point based decomposition algorithm, and the Higher Order Primal-Dual Method (HOPDM), a general purpose interior point LP solver. Computational results are given to illustrate the potential advantages of the approach applied to the solution of very large structured linear programsThu, 15 Apr 2010 14:21:10 +0200Parallel Implementation of a Central Decomposition Method for Solving Large Scale Planning Problemshttps://archive-ouverte.unige.ch/unige:5907https://archive-ouverte.unige.ch/unige:5907We use a decomposition approach to solve two types of realistic problems: block-angular linear programs arising in energy planning, and multicommodity network problems arising in capacity planning for survivable telecommunication networks. Decomposition is an algorithmic device that breaks down computations into several independent subproblems. It is thus ideally suited to parallel implementation. To achieve robustness and greater reliability in the performance of the decomposition algorithm, we use the Analytic Center Cutting Plane Method (ACCPM) to handle the master program. We run the algorithm on two different parallel computing platforms: a network of PCs running under Linux and a genuine parallel machine, the IBM SP2. The approach is well adapted to this coarse grain parallelism and the results display good speed-ups for the classes of problems we have treatedThu, 15 Apr 2010 14:21:09 +0200Homogeneous Analytic Center Cutting Plane Methods with Approximate Centershttps://archive-ouverte.unige.ch/unige:5905https://archive-ouverte.unige.ch/unige:5905In this paper we consider a homogeneous analytic center cutting plane method in a projective space. We describe a general scheme that uses a homogeneous oracle and computes an approximate analytic center at each iteration.
This technique is applied to a convex feasibility problem, to variational inequalities, and to convex constrained minimization. We prove that these problems can be solved with the same order of complexity as in the case of exact analytic centers. For the feasibility and the minimization problems rough approximations suffice, but very high precision is required for the variational inequalities. We give an example of a variational inequality where even the first analytic center needs to be computed with a precision matching the precision required for the solutionThu, 15 Apr 2010 14:21:08 +0200A Planning Model with one Million Scenarios Solved on an Affordable Parallel Machinehttps://archive-ouverte.unige.ch/unige:5897https://archive-ouverte.unige.ch/unige:5897Stochastic programs inevitably get huge if they are to model real life problems accurately. Nowadays only massively parallel machines can solve them, but at a cost few decision makers can afford. We report here on a deterministic equivalent linear programming model of 1,111,112 constraints and 2,555,556 variables generated by GAMS. It is solved by an interior point based decomposition method in less than 3 hours on a cluster of 10 Linux PCsThu, 15 Apr 2010 14:21:05 +0200Multiple Cuts in the Analytic Center Cutting Plane Methodhttps://archive-ouverte.unige.ch/unige:5898https://archive-ouverte.unige.ch/unige:5898We analyze the multiple cut generation scheme in the analytic center cutting plane method. We propose an optimal primal-dual updating direction when the cuts are central. The direction is optimal in the sense that it maximizes the product of the new dual slacks and of the new primal variables within the trust regions defined by Dikin's primal and dual ellipsoids.
The new primal and dual directions use the variance-covariance matrix of the normals to the new cuts in the metric given by Dikin's ellipsoidThu, 15 Apr 2010 14:21:05 +0200Mid-Range Planning of Survivablehttps://archive-ouverte.unige.ch/unige:5894https://archive-ouverte.unige.ch/unige:5894In this paper, we discuss the optimal design of telecommunications networks prone to failure. The problem concerns mid-range planning objectives. The purpose of the model is to give insight on the cost of protecting the network against failures by balancing the choice of a base routing of the demands and the investment in spare capacity to protect the routing against elementary breakdowns. The problem is formulated as a large linear programming model. To exploit the structure we resort to a decomposition scheme with two nested levels. At the higher level, we propose two different schemes: a Lagrangian relaxation and a decomposition "à la Benders". The transformed problem is solved by a cutting plane method. We compare two such methods, the Cheney-Goldstein-Kelley method and the analytic center cutting plane method. The methods are tested on a set of realistic problemsThu, 15 Apr 2010 14:21:03 +0200Convex nondifferentiable optimization: a survey focussed on the analytic center cutting plane methodhttps://archive-ouverte.unige.ch/unige:5882https://archive-ouverte.unige.ch/unige:5882abstract not availableThu, 15 Apr 2010 14:20:58 +0200A note on the de Ghellinck-Vial infeasible start interior point methodhttps://archive-ouverte.unige.ch/unige:5880https://archive-ouverte.unige.ch/unige:5880The paper is a simplified exposition of an early combined phase I-phase II method for linear programming. The method works from an infeasible start.
Besides, there is no need for regularity conditions if the method is applied to a primal-dual formulationThu, 15 Apr 2010 14:20:57 +0200Capacity Planning Under Uncertain Demand in Telecommunications Networkshttps://archive-ouverte.unige.ch/unige:5871https://archive-ouverte.unige.ch/unige:5871This paper deals with the sizing of telecommunications networks offering private line service to a few clients. The clients ask for some transfer capacity between some pairs of nodes, but their demand is uncertain. In case of high demand and insufficient capacity, some clients may be denied the transfer; the telecommunications company pays a penalty cost for that. The network has a fixed topology. In planning the network capacity, the company wants to balance the investment cost with the total expected penalty cost. The planning situation is modeled as a stochastic programming problem. The scenarios are built under the assumption that the clients have independent demands. The solution method is based on Benders decomposition coupled with the analytic center solution method. We solve some large size problem instances. For one problem instance, we perform sensitivity analysis and draw the trade-off cost curve vs. the unitary penalty cost. Finally, we run the algorithm on a parallel computing platform.Thu, 15 Apr 2010 14:20:53 +0200Confidence Level Solutions for Stochastic Programminghttps://archive-ouverte.unige.ch/unige:5864https://archive-ouverte.unige.ch/unige:5864We propose an alternative approach to stochastic programming based on Monte-Carlo sampling and stochastic gradient optimization. The procedure is by essence probabilistic and the computed solution is a random variable. The associated objective value is doubly random, since it depends on two outcomes: the event in the stochastic program and the randomized algorithm.
We propose a solution concept in which the probability that the randomized algorithm produces a solution with an expected objective value departing from the optimal one by more than ε is small enough. We derive complexity bounds for this process. We show that by repeating the basic process on independent samples, one can significantly sharpen the complexity bounds.Thu, 15 Apr 2010 14:20:49 +0200Gestion des découpes dans une papeteriehttps://archive-ouverte.unige.ch/unige:5856https://archive-ouverte.unige.ch/unige:5856This article proposes an automated alternative to the manual management of cutting patterns in an industrial paper mill. We use the classical Gilmore and Gomory model. We solve the relaxed problem with ACCPM, a cutting plane method. The subproblems and the generation of an integer solution are handled by CPLEX. To keep the implementation simple, we did not include a Branch and Bound procedure. The approach provides good heuristic solutions that appreciably improve on the performance of manual managementThu, 15 Apr 2010 14:20:45 +0200Augmented self-concordant barriers and nonlinear optimization problems with finite complexityhttps://archive-ouverte.unige.ch/unige:5851https://archive-ouverte.unige.ch/unige:5851In this paper we study special barrier functions for convex cones, which are the sum of a self-concordant barrier for the cone and a positive-semidefinite quadratic form. We show that the central path of these augmented barrier functions can be traced with linear speed. We also study the complexity of finding the analytic center of the augmented barrier. This problem itself has some interesting applications. We show that for some special classes of quadratic forms and some convex cones, the computation of the analytic center requires an amount of operations independent of the particular data set.
We argue that these problems form a class that is endowed with a property which we call finite polynomial complexity.Thu, 15 Apr 2010 14:20:43 +0200Multiple Cuts with a Homogeneous Analytic Center Cutting Plane Methodhttps://archive-ouverte.unige.ch/unige:5844https://archive-ouverte.unige.ch/unige:5844This paper analyzes the introduction of multiple central cuts in a conic formulation of the analytic center cutting plane method (in short, ACCPM). This work extends earlier work on the homogeneous ACCPM, and parallels the analysis of the multiple cuts process in the standard ACCPM. The main issue is the calculation of a direction that restores feasibility after introducing p new cutting planes at the query point. We prove that the new analytic center can be recovered in O(p log wp) damped Newton iterations, where w is a parameter depending on the data. We also present two special cases where the complexity can be decreased to O(p log p). Finally, we show that the number of calls to the oracle is the same as in the single cut case, up to a factorThu, 15 Apr 2010 14:20:39 +0200Quantitative analysis of multi-periodic supply chain contracts with options via stochastic programminghttps://archive-ouverte.unige.ch/unige:5842https://archive-ouverte.unige.ch/unige:5842We propose a stochastic programming approach for the quantitative analysis of supply contracts, involving flexibility, between a buyer and a supplier, in a supply chain framework. Specifically, we consider the case of multi-periodic contracts in the face of correlated demands. To design such contracts, one has to estimate the savings or costs induced for both parties, as well as the optimal orders and commitments. We show how to model the stochastic process of the demand and the decision problem for both parties using the algebraic modeling language AMPL. The resulting linear programs are solved with a commercial linear programming solver.
We compute the economic performance of these contracts, giving evidence that this methodology makes it possible to gain insight into realistic problems.Thu, 15 Apr 2010 14:20:38 +0200Proximal ACCPM, a Cutting Plane Method for Column Generation and Lagrangian Relaxation: Application to the P-Median Problemhttps://archive-ouverte.unige.ch/unige:5824https://archive-ouverte.unige.ch/unige:5824Proximal ACCPM is a variant of the analytic center cutting plane method, in which a proximal term is added to the barrier function that defines the center. The present paper gives a detailed presentation of the method and of its implementation. Proximal ACCPM is used to solve the Lagrangian relaxation of the p-median problem on two sets of problem instances. Problems of the same collection are tentatively solved with the classical column generation scheme.Thu, 15 Apr 2010 14:20:29 +0200The Linear Separation Problem Revisited with ACCPMhttps://archive-ouverte.unige.ch/unige:5811https://archive-ouverte.unige.ch/unige:5811We investigate the use of ACCPM (Analytic Center Cutting Plane Method) for solving the linear separation problem, which is an important instance of the general data mining concept. Given two disjoint subsets of points, the problem is to find a hyperplane which separates these two subsets as well as possible. Our method consists of minimizing the total sum of misclassification gaps. For a large amount of data, the number of global iterations in the ACCPM process can be reduced by splitting the objective into several parts. Hence, we reduce the total time cost involved by the evaluations of the objective and its subgradients. However, as the number of cuts increases with the number of subfunctions, the time cost associated with them becomes considerable. Therefore, we have to achieve a certain balance between the gain in the objective evaluations and the price we pay for the additional cuts generated.
After providing an algorithm to compute a good starting point which yields faster convergence, we perform some experiments with four sample databases over various disaggregated forms. We have used a 500 MHz Sun UltraSPARC with 256 MB of RAM. The report of measured execution times shows that the method is quite efficient, and the compromise previously explained is well illustrated. An asymptotic time analysis is provided and from this, we derive an expression approximating the value of the optimal partition size.Thu, 15 Apr 2010 14:20:21 +0200Supplier-Retailer Flexible Commitments Contracts: A Robust Optimization Approachhttps://archive-ouverte.unige.ch/unige:5802https://archive-ouverte.unige.ch/unige:5802We propose the use of robust optimization, a powerful tool that has already proven itself in several application areas, for the problem of finding optimal values for quantity flexible commitment contracts between suppliers and retailers. We demonstrate the capabilities of the robust optimization methodology by solving model variations that could not be solved by other methods previously suggested in this area. In particular, we allow non-stationary demand and cost parameters and a more flexible way to describe the possible deviations in commitments and orders. We apply our models first to finite-horizon problems and then extend them to a new type of rolling-horizon settings which we denote as "folding-horizons".Thu, 15 Apr 2010 14:20:17 +0200Neatwork, a decision support program for the design of gravity water distribution networkshttps://archive-ouverte.unige.ch/unige:5769https://archive-ouverte.unige.ch/unige:5769abstract not availableThu, 15 Apr 2010 14:19:56 +0200Semi-Lagrangian Relaxationhttps://archive-ouverte.unige.ch/unige:5768https://archive-ouverte.unige.ch/unige:5768Lagrangian relaxation is commonly used in combinatorial optimization to generate lower bounds for a minimization problem.
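The lower-bounding mechanism just mentioned can be sketched on a deliberately tiny instance; the problem, step rule, and numbers below are illustrative inventions, not taken from any of the papers listed here:

```python
def toy_lagrangian_bound(costs, k, steps=200):
    """Subgradient ascent on the Lagrangian dual of
        min  sum_i c_i * x_i   s.t.  sum_i x_i = k,  x_i in {0, 1}.
    Relaxing the cardinality constraint with a multiplier lam gives
        L(lam) = sum_i min(0, c_i - lam) + lam * k,
    and by weak duality L(lam) is a lower bound on the optimum for every lam.
    """
    lam, best = 0.0, float("-inf")
    for t in range(1, steps + 1):
        x = [1 if c - lam < 0 else 0 for c in costs]   # inner minimizer
        value = sum(min(0.0, c - lam) for c in costs) + lam * k
        best = max(best, value)
        lam += (1.0 / t) * (k - sum(x))  # subgradient step, diminishing size
    return best

# Toy instance: pick exactly k = 2 items; the optimum is the sum of the
# two smallest costs, 1 + 2 = 3, and here the dual bound attains it.
lb = toy_lagrangian_bound([4.0, 1.0, 3.0, 2.0], 2)
```

The diminishing step sizes are the classical choice that guarantees convergence of subgradient ascent; the cutting plane methods discussed in these abstracts replace this simple update with a model built from the accumulated cuts.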
We propose a modified Lagrangian relaxation which, when used in (linear) combinatorial optimization with equality constraints, generates an optimal integer solution. We call this new concept semi-Lagrangian relaxation and illustrate its practical value by solving large-scale instances of the p-median problem.Thu, 15 Apr 2010 14:19:55 +0200Proximal-ACCPM: a versatile oracle based optimization methodhttps://archive-ouverte.unige.ch/unige:5767https://archive-ouverte.unige.ch/unige:5767Oracle Based Optimization (OBO) conveniently designates an approach to handle a class of convex optimization problems in which the information pertaining to the function to be minimized and/or to the feasible set takes the form of a linear outer approximation revealed by an oracle. We show, through three representative examples, how difficult problems can be cast in this format, and solved. We present an efficient method, Proximal-ACCPM, to trigger the OBO approach and give a snapshot of numerical results. This paper summarizes several contributions with the OBO approach and aims to give, in a single report, enough information on the method and its implementation to facilitate new applications.Thu, 15 Apr 2010 14:19:55 +0200Oracle-Based optimization applied to climate model calibrationhttps://archive-ouverte.unige.ch/unige:5766https://archive-ouverte.unige.ch/unige:5766In this paper, we show how oracle-based optimization can be used effectively for the calibration of an intermediate complexity climate model. In a fully developed example, we estimate the 12 principal transport and mixing parameters of the C-GOLDSTEIN climate model by using an oracle-based optimization tool, Proximal-ACCPM. The oracle is a procedure which finds, for each query point, a value for the goodness-of-fit function and an evaluation of its gradient with respect to the parameters to be estimated.
The difficulty in the model calibration problem stems from the need to undertake costly calculations for each simulation and also from the fact that the error function used to assess the goodness-of-fit is not convex w.r.t. the calibration parameters. However, Proximal-ACCPM is able to find good approximate parameter calibrations in spite of this non-convexity. The method converges to a 'best fit' estimate over ten times faster than a comparable test using the ensemble Kalman filter. The approach is simple to implement and potentially useful in calibrating computationally demanding models based on temporal integration (simulation), for which functional derivative information is not readily available.Thu, 15 Apr 2010 14:19:54 +0200Solving large scale linear multicommodity flow problems with an active set strategy and proximal-accpmhttps://archive-ouverte.unige.ch/unige:5765https://archive-ouverte.unige.ch/unige:5765In this paper, we propose to solve the linear multicommodity flow problem using a partial Lagrangian relaxation. The relaxation is restricted to the set of arcs that are likely to be saturated at the optimum. This set is itself approximated by an active set strategy. The partial Lagrangian dual is solved with Proximal-ACCPM, a variant of the analytic center cutting plane method. The new approach makes it possible to solve huge problems when few arcs are saturated at the optimum, as appears to be the case in many practical problems. Acknowledgments.
The work was partially supported by the Fonds National Suisse de la Recherche Scientifique, grant # 12-57093.99.Thu, 15 Apr 2010 14:19:53 +0200Automatic formulation and decomposition of stochastic programs via algebraic modeling languageshttps://archive-ouverte.unige.ch/unige:5764https://archive-ouverte.unige.ch/unige:5764abstract not availableThu, 15 Apr 2010 14:19:52 +0200ACCPM with a nonlinear constraint and an active set strategy to solve nonlinear multicommodity flow problemshttps://archive-ouverte.unige.ch/unige:5757https://archive-ouverte.unige.ch/unige:5757This paper proposes an implementation of a constrained analytic center cutting plane method to solve nonlinear multicommodity flow problems. The new approach exploits the property that the objective of the Lagrangian dual problem has a smooth component with second order derivatives readily available in closed form. The cutting planes issued from the nonsmooth component and the epigraph set of the smooth component form a localization set that is endowed with a self-concordant augmented barrier. Our implementation uses an approximate analytic center associated with that barrier to query the oracle of the nonsmooth component. The paper also proposes an approximation scheme for the original objective. An active set strategy can be applied to the transformed problem: it reduces the dimension of the dual space and accelerates computations. The new approach solves huge instances with high accuracy. The method is compared to alternative approaches proposed in the literature.Thu, 15 Apr 2010 14:19:48 +0200Step decision rules for multistage stochastic programming: a heuristic approachhttps://archive-ouverte.unige.ch/unige:5743https://archive-ouverte.unige.ch/unige:5743Stochastic programming with step decision rules, SPSDR, is an attempt to overcome the curse of computational complexity of multistage stochastic programming problems. SPSDR combines several techniques.
The first idea is to work with independent experts. Each expert is confronted with a sample of scenarios drawn at random from the original stochastic process. The second idea is to have each expert work with step decision rules. The optimal decision rules of the individual experts are then averaged to form the final decision rule. The final solution is tested on a very large sample of scenarios. SPSDR is then tested against two alternative methods: regular stochastic programming on a problem with 3 stages and 2 recourses, and robust optimization with affinely adjustable recourses on a 12-stage model. The performance of the new method turns out to be competitive on those examples, while permitting tighter control of computational complexity than standard stochastic programming.
Thu, 15 Apr 2010 14:19:41 +0200

An efficient method to compute traffic assignment problems with elastic demands
https://archive-ouverte.unige.ch/unige:5744
The traffic assignment problem with elastic demands can be formulated as an optimization problem whose objective is the sum of a congestion function and a disutility function. We propose to use a variant of the Analytic Center Cutting Plane Method to solve this problem. We test the method on instances with different congestion functions (linear with capacities on the arcs, and BPR) and different demand functions (constant elasticity and linear).
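The BPR congestion function mentioned in this abstract is the standard Bureau of Public Roads volume-delay curve. A minimal sketch, using the customary default coefficients (alpha = 0.15, beta = 4), which are assumptions here, not values taken from the paper:

```python
def bpr_travel_time(flow, free_flow_time, capacity, alpha=0.15, beta=4):
    """Bureau of Public Roads (BPR) link travel time.

    Congestion grows polynomially in the volume/capacity ratio, so travel
    time rises sharply as flow approaches and exceeds capacity.
    """
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

# An arc at capacity is 15% slower than free flow; at twice capacity,
# travel time is 3.4 times the free-flow time.
print(bpr_travel_time(1000, 10.0, 1000))  # → 11.5
print(bpr_travel_time(2000, 10.0, 1000))  # → 34.0
```

In the assignment objective, each arc contributes the integral of this delay curve over its flow, which keeps the congestion term convex and differentiable.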
The results of the numerical experiments show that it is possible to solve large instances with high accuracy.
Thu, 15 Apr 2010 14:19:41 +0200

Solving the multicommodity flow problem with the analytic center cutting plane method
https://archive-ouverte.unige.ch/unige:396
This thesis develops efficient numerical methods for solving multicommodity flow problems, in which several entities (messages in telecommunications; goods or users in transportation) compete for the use of a network with limited capacity. For each arc of the network, the flow it carries incurs a usage cost, and the objective is to minimize the sum of these costs. The main contribution of the thesis is the development and specialization of an efficient solution method, ACCPM, and the implementation of an active set strategy that considerably reduces the size of the problems handled. The work carried out in the thesis systematically and significantly improves on the results published in the literature. It also makes it possible to handle problems whose size far exceeds those encountered in the literature.
Wed, 29 Oct 2008 12:47:09 +0100

The homogeneous analytic center cutting plane method
https://archive-ouverte.unige.ch/unige:159
The homogeneous analytic center cutting plane method is a cutting plane method for convex optimization that combines the properties of self-concordant functions with the robustness of the analytic center method. At each iteration, an analytic center is defined as the point minimizing a self-concordant potential function. A first-order oracle returns a cutting plane that enriches the definition of the current localization set.
Throughout the process, the problem is embedded in a homogeneous space while the oracle remains in the original space. We first study the convergence of the algorithm with approximate analytic centers. We then consider the case where the oracle returns several cuts simultaneously; the complexity proof relies on recent results on augmented self-concordant barriers. Finally, we apply the homogeneous method to variational inequality and separation problems.
Wed, 29 Oct 2008 12:43:31 +0100
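Common to the ACCPM variants in this list is the computation of an (approximate) analytic center: the minimizer of a self-concordant log-barrier over the current localization set. A minimal sketch using damped Newton steps on the barrier of a polyhedron; this is illustrative only, not the refined implementations used in the papers above:

```python
import numpy as np

def analytic_center(A, b, x0, iters=50):
    """Approximate the analytic center of {x : A @ x <= b} by damped
    Newton steps on the log-barrier F(x) = -sum(log(b - A @ x))."""
    x = x0.astype(float)
    for _ in range(iters):
        s = b - A @ x                       # slacks, must stay positive
        g = A.T @ (1.0 / s)                 # gradient of the barrier
        H = A.T @ (A / (s ** 2)[:, None])   # Hessian of the barrier
        dx = np.linalg.solve(H, -g)         # Newton direction
        lam = np.sqrt(dx @ H @ dx)          # Newton decrement
        t = 1.0 if lam < 0.25 else 1.0 / (1.0 + lam)  # damped step size
        x = x + t * dx
        if lam < 1e-10:
            break
    return x

# The analytic center of the box -1 <= x_i <= 1 is the origin.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
print(analytic_center(A, b, np.array([0.5, -0.3])))  # ≈ [0, 0]
```

In a cutting plane method, each oracle call appends a row to `A` and `b`, and the next (approximate) center of the shrinking localization set becomes the next query point.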