Mathematical model

 Note: The term model is also given a formal meaning in model theory, a branch of mathematical logic.
A mathematical model is the use of mathematical language to describe the behaviour of a system. Mathematical models are used particularly in the sciences, such as biology, electrical engineering, and physics, but also in other fields such as economics, sociology, and political science.
Background
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. The values of the variables can be practically anything: real or integer numbers, boolean values, or strings, for example. The variables represent some properties of the system, for example, measured system outputs, often in the form of signals, timing data, counters, or event occurrence (yes/no). The actual model is the set of functions that describe the relations between the different variables.
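As a minimal sketch of this idea, consider a hypothetical model of a resistor: an output variable (current) is related to an input variable (voltage) and an exogenous parameter (resistance) by a single function. The names and values here are illustrative assumptions, not from the text.

```python
# Hypothetical sketch: a model is a set of variables plus the
# functions relating them. Here, one input variable (voltage),
# one exogenous parameter (resistance), one output variable (current).

def resistor_model(voltage, resistance):
    """Ohm's law: the function relating input and output variables."""
    return voltage / resistance  # current in amperes

current = resistor_model(voltage=12.0, resistance=4.0)
print(current)  # 3.0
```

Even this one-line model has the same anatomy as far larger ones: variables, parameters, and functions connecting them.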
Building blocks
There are six basic groups of variables: decision variables, input variables, state variables, exogenous variables, random variables, and output variables. Since there can be many variables of each type, the variables are generally represented by vectors.
Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally).
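The interplay of a decision variable, an objective function, and a constraint can be sketched with a small, entirely hypothetical example: a quadratic index of performance maximized over a constrained decision space by brute-force search. The specific functions and bounds are assumptions for illustration only.

```python
# Hypothetical sketch: maximize an objective function of a single
# decision variable x, subject to a constraint on x.

def objective(x):
    """Index of performance; peaks at x = 3 when unconstrained."""
    return -(x - 3.0) ** 2 + 9.0

def feasible(x):
    """Constraint: the decision space excludes the unconstrained peak."""
    return 0.0 <= x <= 2.5

# Crude grid search over the feasible decision space.
candidates = [i / 100 for i in range(0, 251)]
best = max((x for x in candidates if feasible(x)), key=objective)
print(best)  # 2.5 -- the constraint is active at the optimum
```

Adding more objectives or constraints is straightforward in principle, but, as noted above, makes the optimization computationally more involved.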
Classifying mathematical models
Mathematical models can be classified in several ways, some of which are described below.
 Linear vs. nonlinear: If the objective functions and constraints are represented entirely by linear equations, then the model is known as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model.
 Deterministic vs. probabilistic (stochastic): A deterministic model performs the same way for a given set of initial conditions, while in a stochastic model, randomness is present, even when given an identical set of initial conditions.
 Static vs. dynamic: A static model does not account for the element of time, while a dynamic model does. Dynamic models typically are represented with difference equations or differential equations.
 Lumped parameters vs. distributed parameters: If the model is homogeneous (consistent state throughout the entire system), the parameters are lumped. If the model is heterogeneous (varying state within the system), then the parameters are distributed. Distributed parameters are typically represented with partial differential equations.
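Several of these categories can be seen at once in a small sketch: a deterministic, linear, dynamic model written as a first-order difference equation. The coefficients and input sequence below are assumed purely for illustration.

```python
# Sketch of a deterministic, linear, dynamic model: the first-order
# difference equation x[k+1] = a * x[k] + b * u[k] (names assumed).

def simulate(a, b, x0, inputs):
    """Step the state variable forward once per input sample."""
    states = [x0]
    for u in inputs:
        states.append(a * states[-1] + b * u)
    return states

# Constant input and stable dynamics (|a| < 1): the state settles
# toward the fixed point b*u / (1 - a) = 2.0.
traj = simulate(a=0.5, b=1.0, x0=0.0, inputs=[1.0] * 20)
print(round(traj[-1], 4))  # 2.0
```

Making `a` depend on `x` would turn this into a nonlinear model, and drawing `u` from a random distribution would make it stochastic, illustrating how the classifications above combine.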
A priori information
Mathematical modelling problems are often classified into blackbox or whitebox models, according to how much a priori information about the system is available. A blackbox model is a system of which there is no a priori information available. A whitebox model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems fall somewhere between the blackbox and whitebox extremes, so this concept works only as an intuitive guide for choosing an approach.
Usually it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, whitebox models are usually considered easier, because if the information has been used correctly, the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human body, we know that usually the amount of medicine in the blood is an exponentially decaying function of time. But we are still left with several unknown parameters: how rapidly does the amount of medicine decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely whitebox model. These parameters have to be estimated through some means before one can use the model.
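The medicine example can be sketched as a parameter-estimation problem: the exponential form c(t) = c0 · exp(−k·t) is known a priori, while the decay rate k and initial amount c0 are estimated from measurements. The measurement values below are synthetic, generated for illustration.

```python
import math

# Sketch: the functional form c(t) = c0 * exp(-k * t) is known a
# priori; the parameters c0 and k are estimated from measurements
# (synthetic, noiseless data here, with true k = 0.3 and c0 = 10).

times = [0.0, 1.0, 2.0, 3.0, 4.0]
concs = [10.0 * math.exp(-0.3 * t) for t in times]  # "measured" values

# Linearize: ln c = ln c0 - k * t, then ordinary least squares.
ys = [math.log(c) for c in concs]
n = len(times)
t_mean = sum(times) / n
y_mean = sum(ys) / n
slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys)) \
        / sum((t - t_mean) ** 2 for t in times)
k_hat = -slope
c0_hat = math.exp(y_mean - slope * t_mean)
print(round(k_hat, 3), round(c0_hat, 3))  # 0.3 10.0
```

With real, noisy measurements the estimates would only approximate the true values, which is exactly why such a model is not completely whitebox.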
In blackbox models one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that could probably describe the system adequately. If there is no a priori information, we would try to use functions as general as possible to cover all different models. A frequently used approach for blackbox models is the neural network, which usually makes almost no assumptions about the incoming data. The problem with using a large set of functions to describe a system is that estimating the parameters becomes increasingly difficult as the number of parameters (and different types of functions) increases.
Complexity
Another basic issue is the complexity of a model. If we were, for example, modelling the flight of an airplane, we could embed each mechanical part of the airplane into our model and would thus acquire an almost whitebox model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light and we study macroscopic particles only.
Training
Any model which is not pure whitebox contains some parameters that can be used to fit the model to the system it shall describe. If the modelling is done by a neural network, the optimization of parameters is called training. In more conventional modelling through explicitly given mathematical functions, parameters are determined by curve fitting.
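A minimal sketch of "training" in this sense, assuming a toy model y = w·x with a single free parameter w: gradient descent on the squared-error loss adjusts w to fit the data. Both the data and the learning rate are illustrative assumptions.

```python
# Hypothetical sketch: fit the single parameter w of the model
# y = w * x by gradient descent on the mean squared error.

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.0]   # measurements roughly following y = 2x

w = 0.0                # initial guess
lr = 0.02              # learning rate
for _ in range(500):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad
print(round(w, 2))  # 1.99
```

Conventional curve fitting would reach the same least-squares answer in closed form; iterative optimization of this kind is what generalizes to the many-parameter case of neural networks.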
Model evaluation
An important part of the modelling process is the evaluation of an acquired model. How do we know if a mathematical model describes the system well? This is not an easy question to answer. Usually the engineer has a set of measurements from the system which are used in creating the model. Then, if the model was built well, the model will adequately show the relations between system variables for the measurements at hand. The question then becomes: How do we know that the measurement data is a representative set of possible values? Does the model describe the properties of the system well between the measurement data points (interpolation)? Does the model describe events outside the measurement data well (extrapolation)?
A common approach is to split the measured data into two parts: training data and verification data. The training data is used to train the model, that is, to estimate the model parameters (see above). The verification data is used to evaluate model performance. Assuming that the training data and verification data are not the same, we can assume that if the model describes the verification data well, then the model describes the real system well.
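The split-and-evaluate procedure can be sketched with a deliberately simple, hypothetical setup: a mean-only model is "trained" on the first half of some synthetic measurements and scored by mean squared error on the held-out half.

```python
# Sketch: split measurements into training and verification sets,
# estimate the model (here just a constant) on the training half,
# and score it on the held-out half. Data is synthetic.

data = [2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 1.8, 2.1]
train, verify = data[:4], data[4:]

prediction = sum(train) / len(train)          # "trained" parameter
mse = sum((v - prediction) ** 2 for v in verify) / len(verify)
print(round(prediction, 3), round(mse, 4))    # 2.05 0.0325
```

A low verification error is evidence, not proof, that the model captures the system; it says nothing about behaviour outside the measured range, which is the extrapolation question discussed next.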
However, this still leaves the extrapolation question open. How well does this model describe events outside the measured data? Consider again the Newtonian classical mechanics model. Newton made his measurements without advanced equipment, so he could not measure properties of particles travelling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macroscopic particles only. It is then not surprising that his model does not extrapolate well into these domains, even though it is quite sufficient for ordinary-life physics.