To demonstrate and enable the use of a complex engineering toolkit on GRID resources, the DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit, developed at Sandia National Laboratories (SNL), has been used. DAKOTA is available for download under the GNU General Public License (GPL). It runs on most Unix platforms, and ports are actively maintained with nightly build verification for PC Red Hat Linux, Sun Solaris, IBM AIX, SGI IRIX, DEC OSF, and the Intel TeraFLOP machine (ASCI Red).
The DAKOTA toolkit provides a flexible, extensible interface between analysis codes and iteration methods. It contains algorithms for optimization with gradient- and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study capabilities.
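To give a feel for how an analysis code plugs into this interface, the sketch below shows a minimal Python analysis driver built on the dakota.interfacing module that ships with recent DAKOTA releases: DAKOTA writes a parameters file, runs the driver, and reads the results file the driver writes back. The variable descriptors 'x1' and 'x2', the response label 'f', and the quadratic stand-in objective are assumptions for illustration only; check the DAKOTA manual for the exact conventions of your version.

```python
# A minimal sketch of a DAKOTA analysis driver, assuming the
# dakota.interfacing module from DAKOTA 6.x. The descriptors
# 'x1', 'x2' and response label 'f' are hypothetical.
import dakota.interfacing as di

def main():
    # Parse the parameters file that DAKOTA names on the command line.
    params, results = di.read_parameters_file()

    x1 = params["x1"]
    x2 = params["x2"]

    # Stand-in for the real simulation: a simple quadratic objective.
    if results["f"].asv.function:
        results["f"].function = (x1 - 1.0) ** 2 + (x2 - 2.0) ** 2

    # Write the results file that DAKOTA reads after the run.
    results.write()

if __name__ == "__main__":
    main()
```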
Optimization with DAKOTA
To understand and use the DAKOTA toolkit, certain basic concepts of optimization should be known.
In mathematics, optimization is concerned with finding the maxima and minima of functions, possibly subject to constraints. For example: maximize the profit of a manufacturing operation while ensuring that no resource exceeds its limit and satisfying as much of the demand as possible. Optimization has many practical applications in logistics and design problems.
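To make the manufacturing example concrete, here is a minimal sketch in Python using SciPy's linear programming solver. All the numbers (profits per unit, resource usage, and limits) are invented purely for illustration.

```python
# A hypothetical two-product manufacturing problem solved with SciPy.
from scipy.optimize import linprog

# Profit per unit of products A and B; linprog minimizes, so negate.
profit = [-30.0, -50.0]

# Resource usage per unit (rows: machine hours, raw material).
usage = [[1.0, 2.0],
         [3.0, 1.0]]
limits = [40.0, 60.0]  # available machine hours, raw material

res = linprog(c=profit, A_ub=usage, b_ub=limits, bounds=[(0, None)] * 2)
print("optimal production plan:", res.x)
print("maximum profit:", -res.fun)
```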
In computer science, optimization is the process of improving a system in certain ways to increase the effective execution speed and/or bandwidth, or to reduce memory requirements. Despite its name, optimization does not necessarily mean finding the optimum solution to a problem. Often this is not possible, and heuristic algorithms must be used instead.
Mathematical Definition of Optimization
An optimization problem requires three ingredients (a code sketch illustrating them follows the list):
1) An objective function that we want to minimize or maximize.
2) A set of unknowns or variables that affect the value of the objective function.
3) A set of constraints that allow the unknowns to take on certain values but exclude others.
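The sketch below maps these three ingredients onto SciPy's general-purpose minimizer. The objective, starting point, and constraint are hypothetical examples chosen only to show where each ingredient goes.

```python
# Objective, variables, and a constraint expressed with SciPy.
import numpy as np
from scipy.optimize import minimize

# 1) Objective function to minimize.
def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

# 2) Unknowns: a starting guess for the design variables x[0], x[1].
x0 = np.array([0.0, 0.0])

# 3) Constraint: allow only points with x[0] + x[1] <= 1
#    (SciPy expects inequalities in the form g(x) >= 0).
constraints = [{"type": "ineq", "fun": lambda x: 1.0 - (x[0] + x[1])}]

result = minimize(objective, x0, constraints=constraints)
print(result.x, result.fun)
```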
Optimization has wide application; some representative fields and uses are:
Engineering: design optimization, crashworthiness optimization, shape, size, and topology optimization of structures (aircraft, automobiles, mechanical components)
Biomedical: DNA sequence mapping, protein structure assessment, new algorithms for formulating drugs
Environmental: efficient energy utilization and natural resource utilization (land, water, and raw materials)
Others: search engine optimization, financial applications
There are various optimization methods; broadly, they can be classified into gradient-based and non-gradient-based methods. Gradient-based methods calculate gradients (first derivatives of the objective function with respect to the design variables) to find the slopes, and Hessians (second derivatives) to find the curvature of the surface on which the search is made. Gradient-based methods are very efficient at finding the optimum in the absence of nonlinearities and when the starting point of the search is very near the global minimum.
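As a small illustration of a gradient-based method, the sketch below runs BFGS (which builds an approximate Hessian internally from successive gradients) on the well-known Rosenbrock function, supplying the analytic gradient; the starting point near the optimum is chosen deliberately, per the remark above.

```python
# BFGS with an analytic gradient on the Rosenbrock function.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    # Partial derivatives of the objective w.r.t. each design variable.
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

# Start near the known optimum (1, 1), where gradient methods excel.
result = minimize(rosenbrock, x0=np.array([0.8, 0.8]),
                  jac=rosenbrock_grad, method="BFGS")
print(result.x)  # converges to approximately [1, 1]
```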
Non-gradient-based methods do not calculate gradients or Hessians; examples include genetic algorithms, neural networks, and pattern search methods. They search the design space more exhaustively and use knowledge from previous iterations to find the optimum. They can handle nonlinearities and can reach a point near the global optimum, though not the exact optimum. There are numerous hybrid algorithms, and researchers continue to develop new, more efficient ones. This is a very wide topic; if you are interested in learning more, I would suggest reading "Numerical Methods for Scientists and Engineers" by R. W. Hamming.
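For contrast with the gradient-based sketch above, here is a minimal non-gradient example: SciPy's differential evolution, an evolutionary algorithm in the same family as genetic algorithms, applied to the same Rosenbrock function using only function values and variable bounds.

```python
# Differential evolution: no gradients, only function evaluations.
from scipy.optimize import differential_evolution

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# A population explores within these bounds, reusing information
# from previous generations rather than derivative information.
bounds = [(-5.0, 5.0), (-5.0, 5.0)]
result = differential_evolution(rosenbrock, bounds, seed=0)
print(result.x)  # lands near the global optimum [1, 1]
```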
The DAKOTA toolkit contains various optimization algorithms, parameter study methods, uncertainty quantification methods, and other state-of-the-art algorithms used in many research fields. More about these can be found on the DAKOTA website.