/*
 * Project: MoleCuilder
 * Description: creates and alters molecular systems
 * Copyright (C) 2010 University of Bonn. All rights reserved.
 * Please see the LICENSE file or "Copyright notice" in builder.cpp for details.
 */

/**
 * \file potentials.dox
 *
 * Created on: Nov 28, 2012
 * Author: heber
 */

/** \page potentials Empirical Potentials and FunctionModels
 *
 * On this page we explain what is meant by the Potentials sub folder.
 *
 * First, we rely on fragmenting a molecular system, i.e. dissecting its
 * bond structure into connected subgraphs, calculating the energies of the
 * fragments (ab-initio) and summing them up to obtain a good approximation of
 * the total energy of the whole system, \sa fragmentation.
 * Second, having calculated these energies, one quickly notices that one
 * actually calculates quite similar systems all the time and wonders whether
 * one could not cache results in an intelligent (i.e. interpolating) fashion.
 *
 * That's where so-called empirical potentials come into play. They are
 * functions depending on a number of "fitted" parameters and on the variable
 * distances within a molecular fragment (i.e. the bond lengths), and they
 * give a value for the total energy without the need to solve a complex
 * ab-initio model (essentially, without solving the electronic Schrödinger
 * equation anymore). They are accompanied by a specific binding model that
 * describes what kind of many-body force the potential represents.
 *
 * Empirical potentials have been devised by fellows such as Lennard-Jones,
 * Morse, Tersoff, Stillinger and Weber, etc. In their honor, most of the
 * potential forms are named after their inventors. Hence, we speak e.g. of a
 * Lennard-Jones potential.
 *
 * So, what we have to do in order to cache results is the following procedure:
 * -# gather similar fragments
 * -# perform a fit procedure to obtain the parameters for the empirical
 *    potential
 * -# evaluate the potential instead of performing an ab-initio calculation
 *
 * However, we need more: What are similar fragments? How do we perform the
 * fitting procedure? And as the potentials are mathematical functions, what
 * arguments do they depend on?
 *
 * Similar fragments are those that share the same bond graph, i.e. they have
 * the same number of nodes and the same number of edges, and each edge is
 * between the same two elements.
 *
 * The fitting procedure works by looking at a training set, i.e. a list of
 * elements where each contains an energy and a number of arguments, namely
 * pair-wise distances. The error is then the difference between the energies
 * in the set and the energy values that we obtain when we feed the
 * arguments into the fitted potentials. This error is minimized in the
 * Euclidean norm, i.e. via least-squares regression. But other norms might
 * become possible in the future, too.
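 *
 * Written out, the fit is the least-squares problem
 * \f[
 *   \min_p \sum_i \left( E_i - V(d_i; p) \right)^2 ,
 * \f]
 * where \f$ E_i \f$ are the energies from the training set, \f$ d_i \f$ the
 * corresponding distance arguments, \f$ p \f$ the internal parameters, and
 * \f$ V \f$ the empirical potential being fitted.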
 *
 * And the pair-wise distances we mentioned are the arguments.
 *
 * The terms that we use correspond to the classes that are implemented:
 * -# EmpiricalPotential: Contains the interface to a function that can be
 *    evaluated given a number of arguments_t, i.e. distances. Also, one might
 *    want to evaluate derivatives.
 * -# FunctionModel: Is a function that can be fitted, i.e. it depends on a
 *    set of internal parameters that can be set and got.
 * -# argument_t: The argument stores not only the distance but also the index
 *    pair of the associated atoms and their charges, so that the potential
 *    can check its validity (see the sketch right after this list).
 * -# SerializablePotential: Eventually, one wants to store to or parse from
 *    a file all potential parameters. This functionality is encoded in this
 *    class.
 * -# HomologyGraph: "Similar" fragments in our case have to have the same bond
 *    graph. It is stored in the HomologyGraph that acts as representative.
 * -# HomologyContainer: This container combines, in a multimap fashion, all
 *    similar fragments together with their energies, with the HomologyGraph
 *    as their "key".
 * -# TrainingData: Here, we combine the InputVector, containing the set of
 *    distances required for the FunctionModel (e.g. only a single
 *    distance/argument for a pair potential, three for an angle potential,
 *    etc.), and the expected OutputVector, i.e. the energy of the specific
 *    configuration in our case. This in combination with the FunctionModel is
 *    the basis for the non-linear regression used for the fitting procedure.
 * -# Extractors: This set of functions yields the set of distances from a
 *    given fragment that is stored in the HomologyContainer.
 * -# FunctionApproximation: Contains the interface to the levmar package where
 *    the Levenberg-Marquardt (Newton + trust region) algorithm is used to
 *    perform the fit.
 *
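 * To make the role of an argument a little more concrete, the following
 * sketch shows the kind of data an argument_t carries. The member names here
 * are purely hypothetical; only the described content (index pair, distance,
 * charges) is taken from the list above.
 * \code
 * // Hypothetical sketch, not the verbatim class: one argument couples a
 * // pair-wise distance with the indices and charges of the two associated
 * // atoms so that a potential can check whether the argument is meant for it.
 * struct example_argument {
 *   std::pair<size_t, size_t> atom_indices; // which two atoms of the fragment
 *   double distance;                        // their pair-wise distance
 *   std::pair<double, double> charges;      // charges used for validity checks
 * };
 * \endcode
 *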
 * \section potentials-fit-potential-action What happens in FitPotentialAction
 *
 * First, the charges and a potential type are taken from the given options.
 * These are used to instantiate EmpiricalPotentials via the PotentialFactory;
 * the instantiated potentials are stored within the PotentialRegistry. This is
 * the available set of potentials (without requiring any knowledge as to the
 * nature of the fragment employed in fitting).
 *
 * Second, the given fragment is used to get a suitable HomologyGraph from
 * the World's HomologyContainer. This is given to a CompoundPotential, which
 * in turn browses through the PotentialRegistry, picking out those
 * EmpiricalPotentials that match a subset of the FragmentNodes of the
 * given graph. These are stored as a list of FunctionModels within the
 * CompoundPotential instance. This is where the specific fragment comes into
 * play, picking a subset from the available potentials.
 *
 * Third, we need to set up the training data. For this we need vectors of
 * input and output data that are obtained from the HomologyContainer with the
 * HomologyGraph as key. The output vector in our case is simply a number
 * (although the interface allows for more). The input vector is the set of
 * distances. In order to pre-process the input data for the specific model,
 * a filter is required in the TrainingData's constructor. The purpose of the
 * filter is to pick out the subset of distance arguments for each model, one
 * after the other, and concatenate them into a list. On evaluation of the
 * model this concatenated list of distances is given to the model, which can
 * easily dissect the list and hand each contained potential its subset of
 * arguments. See Extractors for more information.
 *
 * Afterwards, training may commence: The goal is to find a set of parameters
 * for the model such that it reproduces the output vector for a given input
 * vector as well as possible. This non-linear regression is contained in the
 * levmar package and its functionality is wrapped in the FunctionApproximation
 * class. An instance is initialized with both the gathered training data and
 * the model containing a list of potentials. See
 * [FunctionApproximation-details] for more details.
 *
 * During the fitting procedure, first the derivatives of the model are checked
 * for consistency, then the model is initialized with a sensible guess of
 * starting parameters, and afterwards the Levenberg-Marquardt algorithm
 * commences, making numerous calls that evaluate the model and its derivative
 * in order to find the minimum in the L2 norm.
 *
 * This is done more than once, as high-dimensional regression is sensitive to
 * the starting values because there are possibly numerous local minima. The
 * lowest of the found minima is taken, either via a given threshold or as the
 * best of a given number of attempts.
 *
 * Eventually, the parameters of the best model are streamed via
 * PotentialSerializer back into a potential file. Each EmpiricalPotential in
 * the CompoundPotential making up the whole model is also a
 * SerializablePotential. Hence, each in turn writes a single line with its
 * respective subset of parameters and particle types, describing this
 * particular fit function.
 *
 * \section potentials-function-evaluation How does the model evaluation work
 *
 * We now come to the question of how the model and its derivative are actually
 * evaluated. We have an input vector from the training data and we have the
 * model with a specific set of parameters.
 *
 * FunctionModel is just an abstract interface that is implemented by the
 * potential functions, such as CompoundPotential, which combines multiple
 * potentials into a single function for fitting, or PairPotential_Harmonic,
 * which is a specific fit function for harmonic bonds.
 *
 * The main issue with the evaluation is picking the right set of distances
 * from the ones given in the input vector and feeding it to each potential
 * contained in the CompoundPotential.
 *
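 * A minimal sketch of this dissection step, assuming each contained potential
 * knows how many distance arguments it consumes; the helper function and its
 * names below are illustrative, not the actual CompoundPotential code:
 * \code
 * // Illustrative only: walk along the concatenated argument list, hand each
 * // potential its slice, and sum up the individual energy contributions.
 * double evaluateConcatenated(
 *     const std::vector< std::pair<FunctionModel *, size_t> > &models,
 *     const FunctionModel::arguments_t &all_args)
 * {
 *   double energy = 0.;
 *   FunctionModel::arguments_t::const_iterator argiter = all_args.begin();
 *   for (size_t i = 0; i < models.size(); ++i) {
 *     FunctionModel::arguments_t::const_iterator enditer = argiter;
 *     std::advance(enditer, models[i].second);        // slice for this model
 *     const FunctionModel::arguments_t subset(argiter, enditer);
 *     energy += (*models[i].first)(subset)[0];        // output is a vector
 *     argiter = enditer;
 *   }
 *   return energy;
 * }
 * \endcode
 *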
 * Initially, the HomologyGraph only contains a list of configurations of a
 * specific fragment (i.e. the position of each atom in the fragment) and an
 * energy value. These first have to be converted into distances.
 *
 * These distances are prepared by the TrainingData instantiation, i.e. a
 * fragment with all its atomic positions has already been converted to the
 * set of all pair-wise interatomic distances.
 *
 * \section potentials-distance-picking How does the distance picking work
 *
 * Given a set of pair-wise distances, how do we pick the subset of distances
 * needed by a particular potential?
 *
 * Let's make an example first: Imagine a water molecule, i.e. one oxygen and
 * two hydrogen atoms with two O-H bonds. Naturally, we obtain three pair-wise
 * distances, OH1, OH2 and H1H2. Now, we would like to fit a Morse potential
 * that just depends on a single interatomic distance. We would like it to
 * represent the O-H bond energy. Hence, the distance picker, namely the
 * Extractor function, needs to pick every subset of distances that contains a
 * single, unique O-H distance. In effect, it needs to return a list containing
 * OH1 and OH2, as the Morse potential needs to represent both bond energies
 * together.
 *
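 * In code-like terms, the extraction for this Morse fit boils down to the
 * following; the container, its keys and the distance values are made up
 * purely for illustration:
 * \code
 * // all pair-wise distances of the water fragment (illustrative values)
 * std::map<std::string, double> all_distances;
 * all_distances["O-H1"]  = 0.96;
 * all_distances["O-H2"]  = 0.97;
 * all_distances["H1-H2"] = 1.52;
 * // the extractor keeps only the O-H distances for the Morse pair potential
 * std::vector<double> morse_arguments;
 * morse_arguments.push_back(all_distances["O-H1"]);
 * morse_arguments.push_back(all_distances["O-H2"]);
 * \endcode
 *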
 * Now, this is really still quite simple as long as the potential only
 * depends on a single distance. However, what if we continue and look at an
 * angle potential, requiring three atoms, i.e. H-O-H?
 *
 * Or even more complicated: Imagine an ethane molecule (C2H6) where we would
 * like to represent the H-C-C angular interaction by a harmonic angle
 * potential. Now, there are multiple of these at the same time, namely six
 * angular interactions.
 *
 * What we have to do is to look for subgraphs inside a graph. Each potential
 * comes with a small graph that represents the binding structure we expect,
 * in our terms the bond model. And we need to find all matching subgraphs in
 * the whole graph, which is the fragment itself. Then, for each subgraph the
 * potential tells us in what order the pair-wise distances associated with
 * the subgraph are required. All of these subsets of distances are eventually
 * concatenated and fed into the model on evaluation.
 *
 * \section potentials-howto-use How to use the potentials
 *
 * We just give a brief run-down in terms of code on how to use the potentials.
 * Here, we just describe what to do in order to perform the fitting. This is
 * basically what is implemented in FragmentationFitPotentialAction.
 *
 * \code
 * // we need the homology container and the representative graph we want to
 * // fit to.
 * HomologyContainer homologies;
 * const HomologyGraph graph = getSomeGraph(homologies);
 * Fragment::atomicnumbers_t h2o;
 * h2o += 8,1,1;
 * // TrainingData needs so-called Extractors to get the required distances
 * // from the stored fragment. These functions are bound via boost::bind.
 * TrainingData AngleData(
 *     boost::bind(&Extractors::gatherDistancesFromFragment,
 *         boost::bind(&Fragment::getPositions, _1),
 *         boost::bind(&Fragment::getAtomicNumbers, _1),
 *         boost::cref(h2o),
 *         _2)
 * );
 * // now we extract the distances and energies and store them
 * AngleData(homologies.getHomologousGraphs(graph));
 * // give ParticleTypes of this potential to make it unique
 * PairPotential_Angle::ParticleTypes_t types =
 *     boost::assign::list_of<PairPotential_Angle::ParticleType_t>
 *       (8)(1)(1)
 *     ;
 * PairPotential_Angle angle(types);
 * // give initial parameters
 * FunctionModel::parameters_t params(PairPotential_Angle::MAXPARAMS, 0.);
 * // ... set some potential-specific initial parameters in params
 * angle.setParameters(params);
 *
 * // use the potential as a FunctionModel along with the prepared TrainingData
 * FunctionModel &model = angle;
 * FunctionApproximation approximator(AngleData, model);
 * approximator(FunctionApproximation::ParameterDerivative);
 *
 * // obtain resulting parameters and check remaining L_2 and L_max error
 * const FunctionModel::parameters_t angleparams = model.getParameters();
 * LOG(1, "INFO: L2sum = " << AngleData(model)
 *     << ", LMax = " << AngleData(model) << ".");
 * \endcode
 *
 * The evaluation of the fitted potential is then trivial, e.g.
 * \code
 * // constructed someplace
 * PairPotential_Angle angle(...);
 *
 * // evaluate
 * FunctionModel::arguments_t args;
 * // .. initialise args to the desired distances
 * const double value = angle(args)[0]; // output is a vector!
 * \endcode
 *
 *
 * \section potentials-stability-of-fit Note on the stability of the fit
 *
 * As we always start from random initial parameters (within a certain sensible
 * range at least), the non-linear fit does not always converge. Note that the
 * random values are drawn from the defined distribution and the uniform
 * distribution engine is obtained from the currently set one, see
 * \ref randomnumbers. Hence, you can manipulate both in order to get different
 * results or to set the seed such that the "randomly" drawn values always work
 * well (e.g. for testing).
 *
 * In any case, the FragmentationFitPotentialAction has the option
 * "take-best-of" to allow for multiple fits where the best (in terms of L2
 * error) is taken eventually. Furthermore, you can use the "set-threshold"
 * option to repeat the fit procedure until the L2 error has dropped below the
 * given threshold.
 *
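 * A rough sketch of what the "take-best-of" idea amounts to is given below,
 * reusing the names model, approximator and AngleData from the example above;
 * the random-parameter and error helpers are hypothetical placeholders, not
 * the Action's actual code:
 * \code
 * // Illustrative pseudo-driver: repeat the fit from random starting
 * // parameters and keep the parameter set with the smallest remaining
 * // L2 error.
 * FunctionModel::parameters_t best_params;
 * double best_error = std::numeric_limits<double>::max();
 * for (size_t attempt = 0; attempt < max_attempts; ++attempt) {
 *   model.setParameters(drawRandomStartingParameters());      // hypothetical
 *   approximator(FunctionApproximation::ParameterDerivative);
 *   const double error = calculateL2Error(AngleData, model);  // hypothetical
 *   if (error < best_error) {
 *     best_error = error;
 *     best_params = model.getParameters();
 *   }
 * }
 * model.setParameters(best_params);
 * \endcode
 *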
 * \section potentials-howto-add How to add new potentials
 *
 * Adding a new potential requires the following:
 * -# Add the new modules to Potentials/Specifics
 * -# Add a unit test on the potential in Potentials/Specifics/unittests
 * -# Give the potential a type name and add it to PotentialTypes.def. Note
 *    that the name must not contain white space.
 * -# Add the potential name as a case to PotentialFactory such that it knows
 *    how to instantiate your new potential when requested.
 * -# Remember to use the RandomNumberGenerator for getting random starting
 *    values!
 *
 * PotentialTypes.def contains a boost::preprocessor sequence of all
 * potential names. PotentialFactory uses this sequence to build its
 * enum-to-type map and its inverse, which the user sees when specifying the
 * potential to fit via the PotentialTypeValidator.
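 *
 * For orientation, a boost::preprocessor sequence is simply a run of
 * parenthesised tokens, one per potential type name, so adding a potential
 * amounts to appending one more token. The macro and type names below are
 * made up; the actual content of PotentialTypes.def may differ.
 * \code
 * // Illustration only: a boost::preprocessor sequence with one entry per
 * // potential type name (names here are hypothetical).
 * #define SomePotentialTypesSequence (morse)(harmonic_bond)(my_new_potential)
 * \endcode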
 *
 *
 * \date 2017-05-14
 */