source: src/FunctionApproximation/FunctionApproximation.hpp@ b8f2ea

Last change on this file was b8f2ea, checked in by Frederik Heber <heber@…>, 9 years ago

FIX: PotentialTrainer did not use user-specified threshold so far.

  • TESTFIX: Decreased l2 tolerance in FitPotential regression tests to further speed up tests. This is especially true for the enable-debug variant, where 3 of 5 tests take more than 15 minutes.
File size: 7.3 KB
/*
 * FunctionApproximation.hpp
 *
 *  Created on: 02.10.2012
 *      Author: heber
 */

#ifndef FUNCTIONAPPROXIMATION_HPP_
#define FUNCTIONAPPROXIMATION_HPP_

// include config.h
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <vector>

#include "FunctionApproximation/FunctionModel.hpp"

class TrainingData;

/** This class encapsulates the approximation of a high-dimensional function,
 * represented by two vectors of tuples (input variables and function
 * outputs), via a model function that is manipulated by a set of
 * parameters.
 *
 * \note For this reason the input and output dimensions have to be given
 * in the constructor, since these are fixed parameters of the problem as a
 * whole: a different input dimension usually means a completely different
 * problem (and hence we may as well construct a new instance of this
 * class).
 *
 * The "training data", i.e. the two sets of input and output values, is
 * supplied separately.
 *
 * The problem is then that a given high-dimensional function is supplied,
 * the "model", and we have to fit this function via its set of variable
 * parameters. This fitting procedure is executed via a Levenberg-Marquardt
 * algorithm as implemented in the
 * <a href="http://www.ics.forth.gr/~lourakis/levmar/index.html">LevMar</a>
 * package.
 *
 * \section FunctionApproximation-details Details on the inner workings.
 *
 * FunctionApproximation::operator() is the main function that performs the
 * non-linear regression. It consists of the following steps:
 * -# hand the given (initial) parameters over to the model.
 * -# convert the output vector to a format suitable for levmar
 * -# allocate memory for levmar to work in
 * -# depending on whether the model is constrained or not and whether we
 *    have a derivative, we make use of various levmar functions with
 *    prepared parameters.
 * -# memory is freed and some final info is given.
 *
 * levmar needs to evaluate the model. To this end, FunctionApproximation
 * has two functions whose signatures match the ones required by the levmar
 * package. Hence,
 * -# FunctionApproximation::LevMarCallback()
 * -# FunctionApproximation::LevMarDerivativeCallback()
 * are used as callbacks by levmar only.
 * These hand over the current set of parameters to the model, then bind
 * FunctionApproximation::evaluate() and
 * FunctionApproximation::evaluateDerivative(), respectively, and execute
 * FunctionModel::operator() or FunctionModel::parameter_derivative(),
 * respectively.
 *
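 * A minimal usage sketch, kept as doxygen \code so it stays inside this
 * comment block. It is hypothetical: SomeFunctionModel stands for any
 * concrete FunctionModel implementation, and the dimensions, precision,
 * and training containers are placeholder values, not part of this
 * interface:
 * \code
 * SomeFunctionModel model;  // hypothetical FunctionModel implementation
 * // input dimension 3, output dimension 1, desired precision 1e-5
 * FunctionApproximation approximator(3, 1, model, 1e-5);
 * approximator.setTrainingData(inputs, outputs);  // filtered_inputs_t, outputs_t
 * // fit the model parameters, using the analytic parameter derivative
 * approximator(FunctionApproximation::ParameterDerivative);
 * \endcode
 *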
 */
class FunctionApproximation
{
public:
  //!> typedef for a vector of input arguments
  typedef std::vector<FunctionModel::arguments_t> inputs_t;
  //!> typedef for a vector of lists of filtered input arguments
  typedef std::vector<FunctionModel::list_of_arguments_t> filtered_inputs_t;
  //!> typedef for a vector of output values
  typedef std::vector<FunctionModel::results_t> outputs_t;
public:
  /** Constructor of the class FunctionApproximation.
   *
   * \param _data container with tuples of (input, output) values
   * \param _model FunctionModel to use in approximation
   * \param _precision desired precision of fit
   */
  FunctionApproximation(
      const TrainingData &_data,
      FunctionModel &_model,
      const double _precision);

  /** Constructor of the class FunctionApproximation.
   *
   * \param _input_dimension input dimension for this function approximation
   * \param _output_dimension output dimension for this function approximation
   * \param _model FunctionModel to use in approximation
   * \param _precision desired precision of fit
   */
  FunctionApproximation(
      const size_t &_input_dimension,
      const size_t &_output_dimension,
      FunctionModel &_model,
      const double _precision) :
    input_dimension(_input_dimension),
    output_dimension(_output_dimension),
    precision(_precision),
    model(_model)
  {}
  /** Destructor for class FunctionApproximation.
   *
   */
  ~FunctionApproximation()
  {}

  /** Setter for the training data to be used.
   *
   * \param input vector of input tuples, needs to be of
   *        FunctionApproximation::input_dimension size
   * \param output vector of output tuples, needs to be of
   *        FunctionApproximation::output_dimension size
   */
  void setTrainingData(const filtered_inputs_t &input, const outputs_t &output);

  /** Setter for the model function to be used in the approximation.
   *
   * \param _model FunctionModel to use in approximation
   */
  void setModelFunction(FunctionModel &_model);

  /** This enum steers whether we use finite differences or
   * FunctionModel::parameter_derivative to calculate the jacobian.
   *
   */
  enum JacobianMode {
    FiniteDifferences,
    ParameterDerivative,
    MAXMODE
  };

  /** This starts the fitting process, resulting in the parameters to
   * the model function being optimized with respect to the given training
   * data.
   *
   * \param mode whether to use finite differences or the parameter derivative
   *        in calculating the jacobian
   */
  void operator()(const enum JacobianMode mode = FiniteDifferences);

  /** Evaluates the model function for each pair of training tuples and
   * returns the output of the function as a vector.
   *
   * This function has a signature compatible with the one required by the
   * LevMar package (with double precision).
   *
   * \param *p array of parameters for the model function of dimension \a m
   * \param *x array of result values of dimension \a n
   * \param m parameter dimension
   * \param n output dimension
   * \param *data additional data, unused here
   */
  void evaluate(double *p, double *x, int m, int n, void *data);

  /** Evaluates the parameter derivative of the model function for each pair
   * of training tuples and returns the output of the function as a vector.
   *
   * This function has a signature compatible with the one required by the
   * LevMar package (with double precision).
   *
   * \param *p array of parameters for the model function of dimension \a m
   * \param *jac on output, jacobian matrix of result values of dimension \a n times \a m
   * \param m parameter dimension
   * \param n output dimension times parameter dimension
   * \param *data additional data, unused here
   */
  void evaluateDerivative(double *p, double *jac, int m, int n, void *data);

  /** This function checks whether the parameter derivative of the FunctionModel
   * has been correctly implemented by validating it against finite differences.
   *
   * We use LevMar's dlevmar_chkjac() function.
   *
   * \return true - gradients are ok (>0.5), false - otherwise
   */
  bool checkParameterDerivatives();

private:
  static void LevMarCallback(double *p, double *x, int m, int n, void *data);

  static void LevMarDerivativeCallback(double *p, double *x, int m, int n, void *data);

  void prepareModel(double *p, int m);

  void prepareParameters(double *&p, int &m) const;

  void prepareOutput(double *&x, int &n) const;

private:
  //!> input dimension (is fixed from construction)
  const size_t input_dimension;
  //!> output dimension (is fixed from construction)
  const size_t output_dimension;
  //!> desired precision given to LevMar
  const double precision;

  //!> current input set of training data
  filtered_inputs_t input_data;
  //!> current output set of training data
  outputs_t output_data;

  //!> the model function to be used in the high-dimensional approximation
  FunctionModel &model;
};

#endif /* FUNCTIONAPPROXIMATION_HPP_ */