smoothing method with a dynamic model parameter. The smoothing parameter
α_t depends on the step t in the following way:

F_1 = Y_1
F_{t+1} = α_t Y_t + (1 − α_t) F_t, where α_t ∈ (0, 1), t = 1, 2, . . . , n
e_t = Y_t − F_t
E_t = β e_t + (1 − β) E_{t−1}
M_t = β |e_t| + (1 − β) M_{t−1}
α_{t+1} = |E_t / M_t|    (3.3)

Here β is a fixed smoothing constant; E_t is called the smoothed error while M_t is the smoothed absolute error.
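The recursion above can be sketched in a few lines of Python. This is only an illustration of the scheme, not the author's code; the initial values E_0 = M_0 = 0 and the starting value of α are assumptions.

```python
def arrses(y, beta=0.2, alpha0=0.2):
    """Adaptive-response-rate simple exponential smoothing sketch.

    y      -- observations Y_1, ..., Y_n
    beta   -- fixed smoothing constant for the error terms (assumed)
    alpha0 -- starting value of the adaptive parameter (assumed)
    Returns the fitted values F_1, ..., F_n.
    """
    F = [y[0]]                              # F_1 = Y_1
    alpha, E, M = alpha0, 0.0, 0.0          # E_0 = M_0 = 0 (assumption)
    for t in range(len(y) - 1):
        e = y[t] - F[t]                     # e_t = Y_t - F_t
        E = beta * e + (1 - beta) * E       # smoothed error E_t
        M = beta * abs(e) + (1 - beta) * M  # smoothed absolute error M_t
        F.append(alpha * y[t] + (1 - alpha) * F[t])  # F_{t+1} uses alpha_t
        if M > 0:
            alpha = abs(E / M)              # alpha_{t+1} = |E_t / M_t|
    return F
```

Note that F_{t+1} is computed with the current α_t before the parameter is updated, so the tracking signal |E_t / M_t| only takes effect from the next step onward.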
Holt-Winters Three Parameter Exponential Smoothing
3.3.3 Box-Jenkins Method
3.4 FORECAST ACCURACY
A fundamental concern in forecasting is how to measure the suitability of a particular
forecasting method. In most forecasting situations accuracy is seen as the
criterion for selecting a forecasting method. In time-series modeling it is possible
to use a subset of the known data to forecast the rest of the known data, which enables
one to study the accuracy of the forecasts more directly.
There are many naive techniques as well as mathematically sophisticated ones.
Some common forecast accuracy measures are stated below.
3.4.1 Standard Statistical Measures
If Y_i is the actual value at time i and F_i is the forecasted (or fitted) value
for the same time period, then the error is defined as

e_i = Y_i − F_i

If there are observations and forecasts for n time periods then one can talk about
the following statistical measures:

• Mean Error
  ME = Σ_{i=1}^{n} e_i / n

• Mean Absolute Error
  MAE = Σ_{i=1}^{n} |e_i| / n

• Sum of Squared Errors
  SSE = Σ_{i=1}^{n} e_i²

• Mean Squared Error
  MSE = Σ_{i=1}^{n} e_i² / n

• Standard Deviation of Errors
  SDE = √( Σ_{i=1}^{n} e_i² / (n − 1) )
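The five measures above can be computed together in one pass over the errors. A minimal sketch, assuming plain Python lists of paired actual and forecast values:

```python
import math

def error_measures(actual, forecast):
    """Return ME, MAE, SSE, MSE and SDE for paired actual/forecast series."""
    n = len(actual)
    errors = [y - f for y, f in zip(actual, forecast)]  # e_i = Y_i - F_i
    me = sum(errors) / n                                # mean error
    mae = sum(abs(e) for e in errors) / n               # mean absolute error
    sse = sum(e * e for e in errors)                    # sum of squared errors
    mse = sse / n                                       # mean squared error
    sde = math.sqrt(sse / (n - 1))                      # std. deviation of errors
    return me, mae, sse, mse, sde
```

For instance, actual values [3, 5, 8] against forecasts [2, 6, 6] give errors [1, −1, 2], so ME = 2/3 while MAE = 4/3: the signed measure already hides part of the error through cancellation.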
A forecaster may want to see all of the measures above routinely, but the main
point is to recognize the limitations of each. For instance, in most cases, the
forecaster’s aim is to minimize the mean squared error (or the sum of squared errors);
however, this measure has two disadvantages. First of all, it refers to
fitting a model to historical data. Such a fit need not give good forecasts for
the future. An MSE of 0 can always be obtained by fitting a model to the data
using a function (or polynomial) of sufficiently high order or a suitable Fourier
transformation. Overfitting a model to data is as bad as failing to identify the nonrandom
pattern in the data.
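The zero-MSE remark is easy to verify numerically: a polynomial of degree n − 1 interpolates all n data points exactly, driving the in-sample MSE to (numerically) zero, while the fitted curve is useless for extrapolation. A small sketch using NumPy and hypothetical data:

```python
import numpy as np

t = np.arange(6, dtype=float)                   # 6 time points 0..5
y = np.array([2.0, 3.5, 3.0, 5.0, 4.5, 6.0])    # hypothetical observations

# A degree-5 polynomial through 6 points interpolates them exactly,
# so the in-sample MSE collapses to machine-precision zero.
coeffs = np.polyfit(t, y, deg=5)
fit = np.polyval(coeffs, t)
mse_in_sample = float(np.mean((y - fit) ** 2))

# The same "perfect" model evaluated one step outside the sample
forecast_next = float(np.polyval(coeffs, 6.0))
```

A near-zero in-sample MSE here says nothing about forecast quality: the high-order polynomial typically swings wildly just outside the fitted range.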
Secondly, each method has its own procedures in the fitting phase, and these are
tied to the measure of accuracy. For example, ordinary linear regression
gives the same weight to every error while minimizing the MSE, whereas
the Box-Jenkins method follows a non-linear optimization process. Therefore, comparing
the accuracy of those methods on a single criterion, such as the MSE, is of
limited value.
3.4.2 Relative Measures
Because of the drawbacks of the above-mentioned measures, alternative measures
have been proposed. Those dealing with percentages are given below.
• Percentage Error
  PE_i = ((Y_i − F_i) / Y_i) (100)

• Mean Percentage Error
  MPE = Σ_{i=1}^{n} PE_i / n

• Mean Absolute Percentage Error
  MAPE = Σ_{i=1}^{n} |PE_i| / n
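A minimal sketch of the percentage measures, dividing by the actual value Y_i as in the definition above:

```python
def percentage_measures(actual, forecast):
    """Return MPE and MAPE (both in percent) for paired actual/forecast series."""
    n = len(actual)
    # PE_i = (Y_i - F_i) / Y_i * 100; assumes no actual value is zero
    pe = [(y - f) / y * 100 for y, f in zip(actual, forecast)]
    mpe = sum(pe) / n                        # signed errors may cancel
    mape = sum(abs(p) for p in pe) / n       # cancellation removed
    return mpe, mape
```

With actuals [100, 200] and forecasts [90, 220], the percentage errors are +10 and −10, so the MPE is 0 even though every forecast is off by 10 percent, while the MAPE correctly reports 10.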
As one may imagine, among the above measures the MPE tends to be small, since positive
values and negative values cancel each other out. The MAPE is therefore introduced
to remedy this drawback. In many cases, knowing the mean absolute percentage
error is more useful than knowing the mean squared error. In this paper, we use
the MAPE to measure forecast accuracy and to find the optimum model parameters.
3.5 NONPARAMETRIC CURVE ESTIMATION
In this section, in order to reach our aim, we follow a non-parametric robust
approach. Denoting the OP periods by t_k, k = 1, 2, . . . , n (in our case n = 12),
and the sample quantile for each OP period by Y_k, one can construct the following
non-parametric model:

Y_k = λ(t_k) + ε_k,  k = 1, 2, . . . , n

where Y_k is observed on a discrete grid of a closed bounded interval I ⊂ ℝ, the ε_k
are independent identically distributed random noises, and λ(x) ∈ Λ(H, L), x ∈ I, with the
non-parametric class Λ(H, L) to be defined below.

Without loss of generality, we assume the interval I to be the unit interval, i.e., I = [0, 1],
as any closed bounded interval can be transformed into the unit interval.

In the above model, one should interpret Y_k as the observed sample quantiles,
while the function λ gives the real quantiles at time t_k, which we cannot identify
from the sample without knowing the exact distribution. ε_k is the error term whose
quantile we assume to be 0.
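As a toy illustration of this model, the sketch below simulates n = 12 observations on a grid rescaled into the unit interval. The regression function used here is hypothetical, chosen only for the sketch; the real function is unknown to the forecaster.

```python
import math
import random

random.seed(0)
n = 12                                  # number of OP periods, as in the text
t = [k / n for k in range(1, n + 1)]    # grid t_k transformed into I = [0, 1]

def lam(x):
    """Hypothetical smooth function standing in for the unknown lambda."""
    return 2.0 + math.sin(2 * math.pi * x)

# Observed sample quantiles: Y_k = lambda(t_k) + eps_k with i.i.d. noise
y = [lam(x) + random.gauss(0.0, 0.1) for x in t]
```

In practice only the pairs (t_k, Y_k) are available; the estimation problem of the following sections is to recover λ from them under the smoothness constraints of the class Λ(H, L).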