Figure 3.3
In order to gain a general insight into the behaviour of the α-quantiles for different α's, we computed the autocorrelation coefficients of the series constructed from the α-quantiles over OP periods. The autocorrelation coefficient graph of the 0.5 quantile (median) is given in Figure 3.4:
The reader can find the autocorrelation coefficient graphs for all quantiles in Appendix B.
We followed the same procedure above with the latter series. The results can be
found in Appendix B.
After finding the autocorrelation coefficients and plotting them, the first question
that should be asked is whether the series is random or not. As explained before,
for a random series, lagged values of the series are uncorrelated and r_k is expected to be approximately 0.

Figure 3.4: Autocorrelation Coefficient Graph for the 0.5 Quantile (Median) of GHP Times over Lags

The 95% confidence limits for the correlogram can be plotted at approximately 0 ± 2/√n; therefore, the approximate 95% confidence band in our case is ±2/√12 = ±0.5773503. As one observes, the autocorrelation coefficients we calculated all lie within this band; hence, we conclude that our observations come from a population that is not autocorrelated.
The above results let us conclude that there is no consistent pattern from one OP period to the next, i.e., intervals one period apart are unrelated, since the autocorrelation coefficients are not significantly different from 0.
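To make the check concrete, the short Python sketch below computes the lag-k autocorrelation coefficients of a series and compares them with the ±2/√n band; the 12 values used here are placeholder numbers standing in for the per-OP-period quantiles, not the actual GHP data.

import numpy as np

def autocorr(series, max_lag):
    """Sample autocorrelation coefficients r_k for lags 1..max_lag."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    d = x - x.mean()
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[:n - k] * d[k:]) / denom for k in range(1, max_lag + 1)])

# Placeholder stand-in for the 12 per-OP-period medians (the real values are not reproduced here).
medians = np.random.default_rng(0).uniform(10.0, 20.0, size=12)

r_k = autocorr(medians, max_lag=6)
band = 2 / np.sqrt(len(medians))   # approximate 95% limit: 2/sqrt(12) = 0.577...
print("r_k:", np.round(r_k, 3))
print("series looks random:", bool(np.all(np.abs(r_k) < band)))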
3.3 BASIC FORECASTING METHODS
3.3.1 Regression Analysis
Regression is a simple method used to analyze the relationship between two variables, namely the explanatory variable X and the dependent variable Y. The goal of the method is to find the best-fitting curve in order to predict Y from X. A linear regression line is of the form Y = aX + b, where a is the slope and b is the intercept. The aim of the algorithm is to adjust the values of the slope and the intercept in order to find the line which produces the best forecasts.
The most common way to fit a curve to the data is the method of least-squares estimates. The result is obtained by minimizing the sum of the squared vertical distances of the points from the curve. If we think of the observed data as a function of the explanatory variable plus an error, the following equation describes our model:

Y_t = f(X_t) + e_t,   t = 1, 2, . . . , n   (3.1)

where the linear function f and e_t determine the pattern and the error, respectively. The critical task in forecasting is to separate the pattern from the error component so that the former can be used for forecasting.
If the function f in the equation (3.1) is quadratic, the method is called quadratic
regression.
Equation of the Least-Squares Regression Line
The equation of the least-squares regression line is given by

Y = aX + b,

where

a = [ Σ X_i Y_i − (Σ X_i)(Σ Y_i)/n ] / [ Σ X_i² − (Σ X_i)²/n ],

with the sums running over i = 1, . . . , n, and

b = Ȳ − a X̄,

where X̄ and Ȳ are the means of X and Y, respectively.
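As a sketch of this computation, the Python snippet below evaluates the least-squares slope and intercept directly from the formulas above; the x and y values are illustrative only and are not taken from the thesis.

import numpy as np

def least_squares_line(x, y):
    """Slope a and intercept b of the least-squares line Y = aX + b."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    a = (np.sum(x * y) - np.sum(x) * np.sum(y) / n) / (np.sum(x ** 2) - np.sum(x) ** 2 / n)
    b = y.mean() - a * x.mean()
    return a, b

# Illustrative data only: a roughly linear relationship with noise.
x = np.arange(1.0, 13.0)
y = 2.0 * x + 5.0 + np.random.default_rng(1).normal(0.0, 1.0, size=12)

a, b = least_squares_line(x, y)
print(f"fitted line: Y = {a:.3f} X + {b:.3f}")

For the quadratic regression mentioned above, the same idea applies with a second-degree polynomial (for example, numpy's polyfit with degree 2 instead of 1).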
3.3.2 Exponential Smoothing
In this section, we describe a set of methods which assign weights to the observations.
Since the weights exponentially decrease as the observations get older, these
methods are called exponential smoothing procedures.
There are single, double and more complicated exponential smoothing methods.
Single Exponential Smoothing
The equations below give the general form of single exponential smoothing (SES):

F_1 = Y_1
F_{t+1} = α Y_t + (1 − α) F_t,   where α ∈ (0, 1), t = 1, 2, . . . , n   (3.2)

F_t represents the forecast at time t, while Y_t is the actual observation. Simply, one can use α = 1/n, where n is the total number of available observations. However, there are more sophisticated methods to estimate α; the estimation involved in exponential smoothing is a non-linear optimization problem.
The single exponential smoothing algorithm does not need to store all of the historical data. It only requires the most recent observation, the most recent forecast and the value of α.
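As an illustration of the recursion in (3.2), the Python sketch below produces the one-step-ahead SES forecasts and picks α by a crude grid search over the sum of squared one-step-ahead errors; the series is placeholder data, not the GHP observations, and the grid search merely stands in for the more sophisticated non-linear estimation mentioned above.

import numpy as np

def ses_forecasts(y, alpha):
    """Single exponential smoothing: F_1 = Y_1, F_{t+1} = alpha*Y_t + (1 - alpha)*F_t.
    Returns F_1, ..., F_{n+1}; each step needs only the latest observation and forecast."""
    f = float(y[0])                # F_1 = Y_1
    out = [f]
    for obs in y:
        f = alpha * obs + (1 - alpha) * f
        out.append(f)
    return np.array(out)

def one_step_sse(y, alpha):
    """Sum of squared one-step-ahead errors Y_t - F_t, used to compare candidate alphas."""
    f = ses_forecasts(y, alpha)
    return float(np.sum((y[1:] - f[1:len(y)]) ** 2))

# Placeholder series (not the thesis data).
y = np.array([12.0, 14.5, 13.2, 15.1, 14.8, 16.0, 15.4, 16.2])
alphas = np.linspace(0.05, 0.95, 19)
best = min(alphas, key=lambda a: one_step_sse(y, a))
print("alpha from grid search:", round(float(best), 2))
print("forecast for the next period:", round(float(ses_forecasts(y, best)[-1]), 3))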
As shown by Muth [1960], the single exponential smoothing predictor derived from equation (3.2) is optimal if and only if Y_t is generated by the ARIMA(0, 1, 1) (Auto-Regressive Integrated Moving Average) process (1 − B)Y_t = [1 − (1 − α)B]ε_t. On the other hand, if the data is stationary, one still obtains a fairly good approximation, but in the presence of a trend the SES method explained above is inadequate (Makridakis, Wheelwright, McGee [1983]).
Adaptive Response Rate Single Exponential Smoothing
Adaptive Response Rate Single Exponential Smoothing (ARRSES) is a single exponential
 