Robust Program Design
Robust optimization is a field of optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty, where the uncertainty can be represented as deterministic variability in the values of the parameters of the problem and/or of its solution.
What is robust product design? Robust product design is a concept from the teachings of Dr. Genichi Taguchi, a Japanese quality expert. It is defined as reducing variation in a product without eliminating the causes of the variation. The term 'robust' is also used more loosely to describe a system that comes with a wide range of capabilities; this sense of the term is not uniformly accepted in technical circles and is used mainly in the marketing of software or computer systems to emphasize a selling point.
Robust Design Techniques
It is important to understand that robust programs generally need to deal with three kinds of 'exceptional' conditions:
- user errors, when invalid input is passed to the program;
- resource exhaustion, when the program fails to acquire the shared resources it needs;
- internal errors, due to bugs (e.g. dangling pointers).
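As an illustration, here is a minimal Python sketch of how a program might separate these three kinds of conditions. The function names, the scratch-file resource, and the error messages are illustrative assumptions, not taken from any particular source:

```python
import tempfile

class UserError(Exception):
    """Invalid input supplied by the user."""

def parse_positive_int(text):
    # user error: reject invalid input explicitly instead of failing later
    try:
        value = int(text)
    except ValueError as exc:
        raise UserError(f"not an integer: {text!r}") from exc
    if value <= 0:
        raise UserError(f"expected a positive integer, got {value}")
    return value

def process(text):
    n = parse_positive_int(text)
    try:
        # resource exhaustion: acquiring a shared resource (here, scratch
        # disk space) can fail and must be handled rather than assumed
        with tempfile.TemporaryFile() as scratch:
            scratch.write(b"x" * n)
    except OSError as exc:
        return f"resource exhausted: {exc}"
    # internal errors: check invariants so that bugs surface near their cause
    assert n > 0, "internal error: invariant violated"
    return f"processed {n} bytes"

for arg in ("1024", "-5", "abc"):
    try:
        print(process(arg))
    except UserError as err:
        print("user error:", err)
```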
History
The origins of robust optimization date back to the establishment of modern decision theory in the 1950s and the use of worst case analysis and Wald's maximin model as a tool for the treatment of severe uncertainty. It became a discipline of its own in the 1970s with parallel developments in several scientific and technological fields. Over the years, it has been applied in statistics, but also in operations research,[1] electrical engineering,[2][3] control theory,[4] finance,[5] portfolio management,[6] logistics,[7] manufacturing engineering,[8] chemical engineering,[9] medicine,[10] and computer science. In engineering problems, these formulations often take the name of 'Robust Design Optimization' (RDO) or 'Reliability Based Design Optimization' (RBDO).
Example 1
Consider the following linear programming problem:

$$\max_{x,y}\ \{3x + 2y\} \quad \text{subject to} \quad x, y \ge 0;\ cx + dy \le 10,\ \forall (c,d) \in P$$

where $P$ is a given subset of $\mathbb{R}^2$.

What makes this a 'robust optimization' problem is the $\forall (c,d) \in P$ clause in the constraints. Its implication is that for a pair $(x,y)$ to be admissible, the constraint $cx + dy \le 10$ must be satisfied by the worst $(c,d) \in P$ pertaining to $(x,y)$, namely the pair $(c,d) \in P$ that maximizes the value of $cx + dy$ for the given value of $(x,y)$.

If the parameter space $P$ is finite (consisting of finitely many elements), then this robust optimization problem is itself a linear programming problem: for each $(c,d) \in P$ there is a linear constraint $cx + dy \le 10$.

If $P$ is not a finite set, then this problem is a linear semi-infinite programming problem, namely a linear programming problem with finitely many (2) decision variables and infinitely many constraints.
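For a finite uncertainty set, this reduction to an ordinary linear program is easy to carry out in code. Below is a minimal Python/SciPy sketch; the three particular scenarios in P are made-up illustrative values, not part of the example:

```python
# Robust LP of Example 1 with a finite uncertainty set P: one constraint per scenario.
from scipy.optimize import linprog

P = [(1.0, 1.0), (1.5, 0.8), (0.9, 1.2)]   # hypothetical scenarios (c, d)

c_obj = [-3.0, -2.0]                # linprog minimizes, so negate 3x + 2y
A_ub = [[c, d] for (c, d) in P]     # linear constraint c*x + d*y <= 10 per scenario
b_ub = [10.0] * len(P)

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x, y = res.x
print(f"robust optimum: x = {x:.3f}, y = {y:.3f}, objective = {3*x + 2*y:.3f}")
```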
Classification
There are a number of classification criteria for robust optimization problems/models. In particular, one can distinguish between problems dealing with local and global models of robustness; and between probabilistic and non-probabilistic models of robustness. Modern robust optimization deals primarily with non-probabilistic models of robustness that are worst case oriented and as such usually deploy Wald's maximin models.
Local robustness
There are cases where robustness is sought against small perturbations in a nominal value of a parameter. A very popular model of local robustness is the radius of stability model:

$$\hat{\rho}(x,\hat{u}) := \max\ \{\rho \ge 0 : u \in S(x),\ \forall u \in B(\rho,\hat{u})\}$$

where $\hat{u}$ denotes the nominal value of the parameter, $B(\rho,\hat{u})$ denotes a ball of radius $\rho$ centered at $\hat{u}$, and $S(x)$ denotes the set of values of $u$ that satisfy given stability/performance conditions associated with decision $x$.

In words, the robustness (radius of stability) of decision $x$ is the radius of the largest ball centered at $\hat{u}$ all of whose elements satisfy the stability requirements imposed on $x$.

[Figure: the largest ball $B(\hat{\rho},\hat{u})$ centered at the nominal value $\hat{u}$ and contained in the set $S(x)$, drawn as a rectangle, of values of $u$ associated with decision $x$.]
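A tiny numerical sketch of the radius-of-stability idea for a one-dimensional parameter follows; the condition u·x ≤ 1, the nominal value 0.5, and the brute-force search are illustrative assumptions:

```python
import numpy as np

u_hat = 0.5                       # nominal parameter value

def satisfies(x, u):
    # stability/performance condition defining S(x); here: u * x <= 1
    return u * x <= 1.0

def radius_of_stability(x, rho_grid=np.linspace(0.0, 5.0, 5001)):
    # largest rho such that every u in [u_hat - rho, u_hat + rho]
    # (a 1-D stand-in for the ball B(rho, u_hat)) satisfies the condition
    best = 0.0
    for rho in rho_grid:
        us = np.linspace(u_hat - rho, u_hat + rho, 201)
        if all(satisfies(x, u) for u in us):
            best = rho
        else:
            break
    return best

for x in (0.5, 1.0, 1.5):
    # smaller decisions tolerate larger perturbations around u_hat
    print(f"x = {x}: radius of stability ~ {radius_of_stability(x):.3f}")
```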
Global robustness
Consider the simple abstract robust optimization problem

$$\max_{x \in X}\ \{f(x) : g(x,u) \le b,\ \forall u \in U\}$$

where $U$ denotes the set of all possible values of $u$ under consideration.

This is a global robust optimization problem in the sense that the robustness constraint $g(x,u) \le b,\ \forall u \in U$ represents all the possible values of $u$.

The difficulty is that such a 'global' constraint can be too demanding in that there is no $x \in X$ that satisfies this constraint. But even if such an $x$ exists, the constraint can be too 'conservative' in that it yields a solution $x \in X$ that generates a very small payoff $f(x)$ that is not representative of the performance of other decisions in $X$. For instance, there could be an $x' \in X$ that only slightly violates the robustness constraint but yields a very large payoff $f(x')$. In such cases it might be necessary to relax the robustness constraint a bit and/or modify the statement of the problem.
Example 2
Consider the case where the objective is to satisfy a constraint $g(x,u) \le b,\ \forall u \in U$, where $x \in X$ denotes the decision variable and $u$ is a parameter whose possible values lie in the set $U$. If there is no $x \in X$ such that $g(x,u) \le b,\ \forall u \in U$, then the following intuitive measure of robustness suggests itself:

$$\rho(x) := \max\ \{\operatorname{size}(Y) : Y \subseteq U,\ g(x,u) \le b,\ \forall u \in Y\},\quad x \in X$$

where $\operatorname{size}(Y)$ denotes an appropriate measure of the 'size' of set $Y$. For example, if $U$ is a finite set, then $\operatorname{size}(Y)$ could be defined as the cardinality of set $Y$.

In words, the robustness of decision $x$ is the size of the largest subset of $U$ for which the constraint $g(x,u) \le b$ is satisfied for each $u$ in this set. An optimal decision is then a decision whose robustness is the largest.

This yields the following robust optimization problem:

$$\max_{x \in X,\ Y \subseteq U}\ \{\operatorname{size}(Y) : g(x,u) \le b,\ \forall u \in Y\}$$

This intuitive notion of global robustness is not used often in practice because the robust optimization problems that it induces are usually (not always) very difficult to solve.
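For small finite sets, however, the measure can be evaluated by brute force. A Python sketch follows; the constraint g(x,u) = u·x ≤ b and the particular sets X and U are illustrative assumptions:

```python
# Brute-force evaluation of Example 2 for finite X and U, with size() = cardinality.
b = 1.0
X = [0.2, 0.5, 1.0, 2.0]             # candidate decisions
U = [0.5, 1.0, 1.5, 2.0, 4.0]        # possible parameter values

def g(x, u):
    return u * x

# For finite U the largest feasible subset Y is simply every u that satisfies
# the constraint, so rho(x) is a count.
def rho(x):
    return sum(1 for u in U if g(x, u) <= b)

best = max(X, key=rho)
print({x: rho(x) for x in X})
print("most robust decision:", best)
```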
Example 3
Consider the robust optimization problem

$$z(U) := \max_{x \in X}\ \{f(x) : g(x,u) \le b,\ \forall u \in U\}$$

where $g$ is a real-valued function on $X \times U$, and assume that there is no feasible solution to this problem because the robustness constraint $g(x,u) \le b,\ \forall u \in U$ is too demanding.

To overcome this difficulty, let $\mathcal{N}$ be a relatively small subset of $U$ representing 'normal' values of $u$ and consider the following robust optimization problem:

$$z(\mathcal{N}) := \max_{x \in X}\ \{f(x) : g(x,u) \le b,\ \forall u \in \mathcal{N}\}$$

Since $\mathcal{N}$ is much smaller than $U$, its optimal solution may not perform well on a large portion of $U$ and therefore may not be robust against the variability of $u$ over $U$.

One way to fix this difficulty is to relax the constraint $g(x,u) \le b$ for values of $u$ outside the set $\mathcal{N}$ in a controlled manner, so that larger violations are allowed as the distance of $u$ from $\mathcal{N}$ increases. For instance, consider the relaxed robustness constraint

$$g(x,u) \le b + \beta \cdot \operatorname{dist}(u,\mathcal{N}),\ \forall u \in U$$

where $\beta \ge 0$ is a control parameter and $\operatorname{dist}(u,\mathcal{N})$ denotes the distance of $u$ from $\mathcal{N}$. Thus, for $\beta = 0$ the relaxed robustness constraint reduces back to the original robustness constraint. This yields the following (relaxed) robust optimization problem:

$$z(\mathcal{N},U) := \max_{x \in X}\ \{f(x) : g(x,u) \le b + \beta \cdot \operatorname{dist}(u,\mathcal{N}),\ \forall u \in U\}$$

The function $\operatorname{dist}$ is defined in such a manner that

$$\operatorname{dist}(u,\mathcal{N}) \ge 0,\ \forall u \in U$$

and

$$\operatorname{dist}(u,\mathcal{N}) = 0,\ \forall u \in \mathcal{N}$$

and therefore the optimal solution to the relaxed problem satisfies the original constraint $g(x,u) \le b$ for all values of $u$ in $\mathcal{N}$. It also satisfies the relaxed constraint

$$g(x,u) \le b + \beta \cdot \operatorname{dist}(u,\mathcal{N})$$

outside $\mathcal{N}$.
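A small Python sketch of the relaxed constraint, with a one-dimensional parameter, the interval [0, 1] playing the role of the 'normal' set $\mathcal{N}$, and an illustrative constraint g(x,u) = u·x ≤ b; all concrete numbers are assumptions:

```python
import numpy as np

b, beta = 1.0, 1.5
U = np.linspace(0.0, 3.0, 301)            # full parameter range under consideration

def dist_to_N(u, lo=0.0, hi=1.0):
    # dist(u, N) for the 'normal' interval N = [lo, hi]
    return max(lo - u, 0.0, u - hi)

def g(x, u):
    return u * x

def feasible_relaxed(x):
    # relaxed robustness constraint: g(x,u) <= b + beta*dist(u,N) for all u in U
    return all(g(x, u) <= b + beta * dist_to_N(u) for u in U)

for x in (0.5, 0.8, 1.0, 1.2):
    print(f"x = {x}: satisfies relaxed constraint over U -> {feasible_relaxed(x)}")
```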
Non-probabilistic robust optimization models
The dominating paradigm in this area of robust optimization is Wald's maximin model, namely

$$\max_{x \in X}\ \min_{u \in U(x)} f(x,u)$$

where the $\max$ represents the decision maker, the $\min$ represents Nature (namely, uncertainty), $X$ represents the decision space, and $U(x)$ denotes the set of possible values of $u$ associated with decision $x$. This is the classic format of the generic model, and is often referred to as a minimax or maximin optimization problem. The non-probabilistic (deterministic) model has been and is being extensively used for robust optimization, especially in the field of signal processing.[11][12][13]

The equivalent mathematical programming (MP) formulation of the classic format above is

$$\max_{x \in X,\ v \in \mathbb{R}}\ \{v : v \le f(x,u),\ \forall u \in U(x)\}$$

Constraints can be incorporated explicitly in these models. The generic constrained classic format is

$$\max_{x \in X}\ \min_{u \in U(x)}\ \{f(x,u) : g(x,u) \le b,\ \forall u \in U(x)\}$$

The equivalent constrained MP format is defined as

$$\max\ \{v : v \le f(x,u),\ g(x,u) \le b,\ \forall u \in U(x);\ x \in X,\ v \in \mathbb{R}\}$$
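When both the decision set and the uncertainty set are small and finite, the maximin model can be evaluated by direct enumeration. A Python sketch follows; the payoff table f and the sets X and U are illustrative assumptions:

```python
# Brute-force Wald maximin: max over x of the worst-case (min over u) payoff.
X = ["a", "b", "c"]
U = [1, 2, 3]
f = {("a", 1): 4, ("a", 2): 1, ("a", 3): 0,
     ("b", 1): 2, ("b", 2): 2, ("b", 3): 2,
     ("c", 1): 5, ("c", 2): 3, ("c", 3): -1}

def worst_case_payoff(x):
    return min(f[(x, u)] for u in U)     # Nature picks the least favourable u

best_x = max(X, key=worst_case_payoff)   # the decision maker maximizes the worst case
print({x: worst_case_payoff(x) for x in X})
print("maximin decision:", best_x)       # 'b': guaranteed payoff of 2
```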
Probabilistically robust optimization models
These models quantify the uncertainty in the 'true' value of the parameter of interest by probability distribution functions. They have traditionally been classified as stochastic programming and stochastic optimization models. Recently, probabilistically robust optimization has gained popularity with the introduction of rigorous theories, such as scenario optimization, that can quantify the robustness level of solutions obtained by randomization. These methods are also relevant to data-driven optimization methods.
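A minimal sketch of the scenario idea, applied to the toy problem of Example 1: instead of enforcing the constraint for every (c, d), it is enforced only on randomly sampled scenarios. The sampling distribution and sample size are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 200
# sampled scenarios (c, d) around the nominal value (1, 1)
scenarios = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(N, 2))

# maximize 3x + 2y subject to c*x + d*y <= 10 for every sampled (c, d), x, y >= 0
res = linprog(c=[-3.0, -2.0], A_ub=scenarios, b_ub=[10.0] * N,
              bounds=[(0, None), (0, None)])
x, y = res.x
print(f"scenario-based solution: x = {x:.3f}, y = {y:.3f}")
```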
Further reading
- H.J. Greenberg. Mathematical Programming Glossary. World Wide Web, http://glossary.computing.society.informs.org/, 1996-2006. Edited by the INFORMS Computing Society.
- Ben-Tal, A.; Nemirovski, A. (1998). 'Robust Convex Optimization'. Mathematics of Operations Research. 23 (4): 769–805. CiteSeerX 10.1.1.135.798. doi:10.1287/moor.23.4.769.
- Ben-Tal, A.; Nemirovski, A. (1999). 'Robust solutions to uncertain linear programs'. Operations Research Letters. 25: 1–13. CiteSeerX 10.1.1.424.861. doi:10.1016/s0167-6377(99)00016-4.
- Ben-Tal, A.; Nemirovski, A. (2002). 'Robust optimization—methodology and applications'. Mathematical Programming, Series B. 92 (3): 453–480. CiteSeerX 10.1.1.298.7965. doi:10.1007/s101070100286.
- Ben-Tal A., El Ghaoui, L. and Nemirovski, A. (2006). Mathematical Programming, Special issue on Robust Optimization, Volume 107(1-2).
- Ben-Tal A., El Ghaoui, L. and Nemirovski, A. (2009). Robust Optimization. Princeton Series in Applied Mathematics, Princeton University Press.
- Bertsimas, D.; Sim, M. (2003). 'Robust Discrete Optimization and Network Flows'. Mathematical Programming. 98 (1–3): 49–71. CiteSeerX 10.1.1.392.4470. doi:10.1007/s10107-003-0396-4.
- Bertsimas, D.; Sim, M. (2006). 'Tractable Approximations to Robust Conic Optimization Problems'. Mathematical Programming. 107 (1): 5–36. CiteSeerX 10.1.1.207.8378. doi:10.1007/s10107-005-0677-1.
- Chen, W.; Sim, M. (2009). 'Goal Driven Optimization'. Operations Research. 57 (2): 342–357. doi:10.1287/opre.1080.0570.
- Chen, X.; Sim, M.; Sun, P.; Zhang, J. (2008). 'A Linear-Decision Based Approximation Approach to Stochastic Programming'. Operations Research. 56 (2): 344–357. doi:10.1287/opre.1070.0457.
- Chen, X.; Sim, M.; Sun, P. (2007). 'A Robust Optimization Perspective on Stochastic Programming'. Operations Research. 55 (6): 1058–1071. doi:10.1287/opre.1070.0441.
- Dembo, R (1991). 'Scenario optimization'. Annals of Operations Research. 30 (1): 63–80. doi:10.1007/bf02204809.
- Gupta, S.K.; Rosenhead, J. (1968). 'Robustness in sequential investment decisions'. Management Science. 15 (2): 18–29. doi:10.1287/mnsc.15.2.B18.
- Kouvelis P. and Yu G. (1997). Robust Discrete Optimization and Its Applications, Kluwer.
- Mutapcic, Almir; Boyd, Stephen (2009). 'Cutting-set methods for robust convex optimization with pessimizing oracles'. Optimization Methods and Software. 24 (3): 381–406. CiteSeerX 10.1.1.416.4912. doi:10.1080/10556780802712889.
- Mulvey, J.M.; Vanderbei, R.J.; Zenios, S.A. (1995). 'Robust Optimization of Large-Scale Systems'. Operations Research. 43 (2): 264–281. doi:10.1287/opre.43.2.264.
- Rosenblat, M.J. (1987). 'A robust approach to facility design'. International Journal of Production Research. 25 (4): 479–486. doi:10.1080/00207548708919855.
- Rosenhead, M.J.; Elton, M.; Gupta, S.K. (1972). 'Robustness and Optimality as Criteria for Strategic Decisions'. Operational Research Quarterly. 23 (4): 413–430. doi:10.2307/3007957. JSTOR 3007957.
- Rustem B. and Howe M. (2002). Algorithms for Worst-case Design and Applications to Risk Management, Princeton University Press.
- Sniedovich, M (2007). 'The art and science of modeling decision-making under severe uncertainty'. Decision Making in Manufacturing and Services. 1 (1–2): 111–136. doi:10.7494/dmms.2007.1.2.111.
- Sniedovich, M (2008). 'Wald's Maximin Model: a Treasure in Disguise!'. Journal of Risk Finance. 9 (3): 287–291. doi:10.1108/15265940810875603.
- Sniedovich, M (2010). 'A bird's view of info-gap decision theory'. Journal of Risk Finance. 11 (3): 268–283. doi:10.1108/15265941011043648.
- Wald, A (1939). 'Contributions to the theory of statistical estimation and testing hypotheses'. The Annals of Mathematical Statistics. 10 (4): 299–326. doi:10.1214/aoms/1177732144.
- Wald, A (1945). 'Statistical decision functions which minimize the maximum risk'. The Annals of Mathematics. 46 (2): 265–280. doi:10.2307/1969022. JSTOR 1969022.
- Wald, A. (1950). Statistical Decision Functions, John Wiley, NY.
- Shabanzadeh, M.; Fattahi, M. (2015). 'Generation Maintenance Scheduling via Robust Optimization'. doi:10.1109/IranianCEE.2015.7146458.
Robust Controller Design
This example shows how to design a feedback controller for a plant with uncertain parameters and uncertain model dynamics. The goals of the controller design are good steady-state tracking and disturbance-rejection properties.
Design a controller for the plant G described in Robust Controller Design. This plant is a first-order system with an uncertain time constant. The plant also has some uncertain dynamic deviations from first-order behavior beyond about 9 rad/s.
Design Controller
Because of the nominal first-order behavior of the plant, choose a PI control architecture. For a desired closed-loop damping ratio ξ and natural frequency ωn, matching the nominal closed-loop characteristic polynomial to s² + 2ξωn·s + ωn² gives the design equations for the proportional and integral gains (based on the nominal open-loop time constant τ = 0.2): KP = 2ξωn·τ - 1 and KI = ωn²·τ.
To study how the uncertainty in G affects the achievable closed-loop bandwidth, design two controllers, both achieving ξ = 0.707 but with different natural frequencies: ωn = 3 and ωn = 7.5 rad/s.
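The original walkthrough uses MATLAB and the Robust Control Toolbox; as a rough stand-in, here is a Python/SciPy sketch of the two nominal PI designs and their closed-loop step responses. Only the nominal time constant of 0.2 s is modelled (the uncertain plant elements have no direct SciPy equivalent), and the gain formulas are the ones given above:

```python
import numpy as np
from scipy import signal

tau, xi = 0.2, 0.707

def closed_loop(wn):
    KP = 2 * xi * wn * tau - 1            # gains that place the nominal closed-loop
    KI = wn**2 * tau                      # poles at damping xi and frequency wn
    # plant 1/(tau*s + 1) with PI controller (KP*s + KI)/s in unity feedback
    return signal.TransferFunction([KP, KI], [tau, 1 + KP, KI])

for wn in (3.0, 7.5):
    t, y = signal.step(closed_loop(wn), T=np.linspace(0, 3, 600))
    print(f"wn = {wn}: output at t = 3 s is {y[-1]:.3f} (unit steady-state tracking)")
```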
Examine Controller Performance
The nominal closed-loop bandwidth achieved by C2 is in a region where G has significant model uncertainty. It is therefore expected that the model variations cause significant degradations in the closed-loop performance with that controller. To examine the performance, form the closed-loop systems and plot the step responses of samples of the resulting systems.
The step responses for T2 exhibit a faster rise time because C2 sets a higher closed-loop bandwidth. However, as expected, the model variations have a greater impact.
You can use robstab to check the robustness of the stability of the closed-loop systems to model variations.

The display gives the amount of uncertainty that the system can tolerate without going unstable. In both cases, the closed-loop systems can tolerate more than 100% of the modeled uncertainty range while remaining stable. stabmarg contains lower and upper bounds on the stability margin. A stability margin greater than 1 means the system is stable for all values of the modeled uncertainty. A stability margin less than 1 means there are allowable values of the uncertain elements that make the system unstable.
Compare Nominal and Worst-Case Behavior
While both systems are stable for all variations, their performance is affected to different degrees. To determine how the uncertainty affects closed-loop performance, you can use wcgain to compute the worst-case effect of the uncertainty on the peak magnitude of the closed-loop sensitivity function, S = 1/(1+GC). The peak gain of this function is typically correlated with the amount of overshoot in a step response; a peak gain greater than 1 indicates overshoot.

Form the closed-loop sensitivity functions and call wcgain.
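wcgain itself has no direct Python equivalent. As a rough stand-in, under an assumed ±30% range on the plant time constant, the sketch below simply grids the uncertain time constant and records the largest sensitivity peak found; this is a sampled lower bound, not the certified bound that wcgain computes, and it ignores the unmodeled dynamics:

```python
import numpy as np

xi, tau_nom = 0.707, 0.2
w = np.logspace(-1, 3, 400)                     # frequency grid, rad/s

def sensitivity_peak(wn, tau):
    # S(jw) = 1 / (1 + G(jw)C(jw)) with G = 1/(tau*s + 1), C = KP + KI/s;
    # the PI gains use the nominal time constant, as in the design above
    KP = 2 * xi * wn * tau_nom - 1
    KI = wn**2 * tau_nom
    s = 1j * w
    G = 1.0 / (tau * s + 1.0)
    C = KP + KI / s
    return np.abs(1.0 / (1.0 + G * C)).max()

for wn in (3.0, 7.5):
    peaks = [sensitivity_peak(wn, tau) for tau in np.linspace(0.14, 0.26, 25)]
    print(f"wn = {wn}: worst sampled peak |S| = {max(peaks):.2f}")
```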
maxgain gives lower and upper bounds on the worst-case peak gain of the sensitivity transfer function, as well as the specific frequency where the maximum gain occurs. Examine the bounds on the worst-case gain for both systems.

wcu contains the particular values of the uncertain elements that achieve this worst-case behavior. Use usubs to substitute these worst-case values for the uncertain elements, and compare the nominal and worst-case behavior.
While C2 achieves better nominal sensitivity than C1, the nominal closed-loop bandwidth extends too far into the frequency range where the process uncertainty is very large. Hence the worst-case performance of C2 is inferior to that of C1 for this particular uncertain model.
See Also
robstab
usubs
wcgain