The parameters for a simple OLS regression are calculated when you select the Simple Regression icon, which opens the dialog box depicted in Figure 1 so the X and Y variables can be specified. The intercept and slope parameters for the equation:

Ŷi = β̂0 + β̂1Xi

are estimated and placed in the worksheet starting at the cell specified by the Output Range. The names of the estimated parameters appear in the column to the left of the parameter values. The R2, F-ratio, Student-t test statistics, and residuals are calculated if you check the appropriate boxes.
The Multiple regression option is accessed through the Multiple Regression icon (Figure 2) and estimates the parameters for the following equation:

Ŷi = β̂0 + β̂1X1i + β̂2X2i + … + β̂kXki
Figure 2. Multiple Regression Dialog Box.
A sample output for a multiple regression (Figure 3) indicates the format for the first part of the results. The name of an X variable and its beta are in bold if the variable is statistically significant at the indicated 1 − alpha level (e.g., X1, X2, X3, and X4 in the example). Standard errors for the betas, the t-test statistics, and the probability (pr) values of the t-statistics are provided for each explanatory variable. The elasticity at the mean and the partial and semi-partial correlations are also provided for each independent variable. The variance inflation factor is reported for each X variable to indicate the degree of multicollinearity of Xi with the other variables in the model.
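The per-variable statistics in this block can be sketched in numpy. The data below are hypothetical; the formulas are the conventional ones: standard errors from the diagonal of s²(X′X)⁻¹, the elasticity at the mean as β̂i·X̄i/Ȳ, and the variance inflation factor as 1/(1 − R²) from an auxiliary regression of Xi on the other X's.

```python
import numpy as np

# Hypothetical data with two explanatory variables.
rng = np.random.default_rng(0)
n = 30
X1 = rng.normal(10.0, 2.0, n)
X2 = rng.normal(5.0, 1.0, n)
Y = 3.0 + 1.5 * X1 - 2.0 * X2 + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), X1, X2])   # design matrix with intercept
k = X.shape[1]

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ Y                    # OLS betas

resid = Y - X @ beta
s2 = resid @ resid / (n - k)                # residual variance
se_beta = np.sqrt(s2 * np.diag(XtX_inv))    # standard errors of the betas
t_stats = beta / se_beta                    # t-test statistics

# Elasticity at the mean for X1: beta_1 * mean(X1) / mean(Y)
elasticity_X1 = beta[1] * X1.mean() / Y.mean()

# Variance inflation factor for X1: regress X1 on the other X's
Z = np.column_stack([np.ones(n), X2])
g = np.linalg.lstsq(Z, X1, rcond=None)[0]
r2_aux = 1 - np.sum((X1 - Z @ g) ** 2) / np.sum((X1 - X1.mean()) ** 2)
vif_X1 = 1.0 / (1.0 - r2_aux)
```

A VIF near 1 indicates Xi is nearly uncorrelated with the other regressors; large values signal multicollinearity.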
The Restriction row in the parameter block of output values allows the user to interactively experiment with various combinations of X variables. After the initial parameter estimation, the Restriction coefficients are all blank (Figure 3), meaning that every X variable is included in the unrestricted model. The user can interactively drop and re-include a variable by changing its Restriction coefficient to 0 or back to blank. Three test statistics (F, R2, and adjusted R2) for the Unrestricted Model are provided and remain fixed while testing alternative specifications of the model’s variables; this facilitates comparison of the original unrestricted model to the restricted models. If you type a non-zero number in the Restriction row, that value becomes the fixed beta-hat coefficient for the variable in a restricted regression.
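The mechanics of the Restriction row can be sketched as follows. The data are hypothetical; fixing a beta at a value c is implemented here by moving c·X2 to the left-hand side before re-estimating, and dropping a variable is simply estimating without it.

```python
import numpy as np

# Hypothetical data for an unrestricted model with two X variables.
rng = np.random.default_rng(1)
n = 40
X1 = rng.normal(0.0, 1.0, n)
X2 = rng.normal(0.0, 1.0, n)
Y = 1.0 + 2.0 * X1 + 0.5 * X2 + rng.normal(0.0, 1.0, n)

def ols(X, y):
    """Return OLS betas and R-squared for design matrix X."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return b, 1 - e @ e / np.sum((y - y.mean()) ** 2)

# Unrestricted model: all X's included (Restriction row blank)
Xu = np.column_stack([np.ones(n), X1, X2])
beta_u, r2_u = ols(Xu, Y)

# Typing a non-zero number c in the Restriction row fixes that beta at c;
# move c*X2 to the left-hand side and re-estimate the remaining betas.
c = 0.5
Xr = np.column_stack([np.ones(n), X1])
beta_r, _ = ols(Xr, Y - c * X2)

# Typing 0 in the Restriction row drops X2 from the model entirely.
beta_d, r2_d = ols(Xr, Y)
```

Because the unrestricted model nests every restricted one, its R2 is never lower, which is why the unrestricted statistics are held fixed as the benchmark.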
Figure 3. Sample Output for a Restricted Multiple Regression Model.
The covariance matrix for the betas is provided as output for the multiple regression. The beta covariance matrix is used in simulation when the model is assumed to have stochastic betas. The beta covariance matrix is provided when requested as an option in the multiple regression dialog box.
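A minimal numpy sketch of the beta covariance matrix and its use for stochastic betas follows. The data are hypothetical; the covariance matrix is the standard s²(X′X)⁻¹, and the simulation draws betas from a multivariate normal distribution centered on the estimates, which is one common way to treat betas as stochastic.

```python
import numpy as np

# Hypothetical single-regressor data.
rng = np.random.default_rng(2)
n = 50
x = rng.normal(10.0, 3.0, n)
y = 4.0 + 0.8 * x + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])

# Covariance matrix of the betas: s^2 * (X'X)^-1
cov_beta = s2 * np.linalg.inv(X.T @ X)

# Stochastic betas for simulation: multivariate normal draws
# centered on the point estimates with the beta covariance matrix.
draws = rng.multivariate_normal(beta, cov_beta, size=1000)
```

Each row of `draws` is one simulated (intercept, slope) pair that can be used to generate a simulated forecast.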
Residuals from the regression are summarized in a table (Figure 4). The residuals are calculated as êi = Yi − Ŷi for each observation i. The standard error for the mean predicted value (SE mean predicted) is provided for each observation i. In addition, the SE of the predicted value for each observation is provided in column 5 of the output (Figure 4). The SE of the predicted value is the correct SE for simulating a probabilistic forecast of the multiple regression forecast values, because it is the SE for predicting “an observation.” As indicated in Figure 4, the SE of the predicted values increases as the forecast period gets longer. Prediction and confidence intervals for the model are provided in the table and graphically for the alpha equal 5 percent level (Figure 5).
Figure 4. Sample Output of Residuals and Confidence and Prediction Intervals for a Multiple Regression.
Figure 5. Sample Chart of Predicted Regression Results and Prediction Intervals.
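The distinction between the two standard errors can be sketched in numpy. The data and forecast point are hypothetical; the SE of the mean predicted value uses the leverage x0′(X′X)⁻¹x0 alone, while the SE for predicting an observation adds the residual variance, which is why it is larger and grows with the forecast horizon.

```python
import numpy as np

# Hypothetical trend data with a forecast point beyond the sample.
rng = np.random.default_rng(3)
n = 25
x = np.arange(1.0, n + 1.0)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
XtX_inv = np.linalg.inv(X.T @ X)

# Forecast point x0 outside the sample (a longer forecast horizon)
x0 = np.array([1.0, n + 5.0])
leverage = x0 @ XtX_inv @ x0

se_mean = np.sqrt(s2 * leverage)          # SE of the mean predicted value
se_pred = np.sqrt(s2 * (1.0 + leverage))  # SE for predicting an observation

# 95% prediction interval; 2.07 approximates the t critical value, 23 df
y0 = x0 @ beta
lo, hi = y0 - 2.07 * se_pred, y0 + 2.07 * se_pred
```

Because the leverage term grows as x0 moves away from the sample mean, both SEs widen for longer forecast horizons, matching the pattern in Figure 4.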
If requested in the regression dialog box (Figure 2), observational diagnostics are calculated and reported for the unrestricted model (Figure 6). The column of 1’s in the DFBetas Restriction column indicates that the unrestricted model was fit using all of the observed data. If you change a DFBetas Restriction to 0 for a particular row, the model is instantly updated using a dummy variable to ignore the effects of that row of X’s and Y. The rule for excluding an observation is that its Studentized Residual is greater than 2 (shown in bold). This is the case for observation 24 in the sample output (Figure 6). Setting the Restriction value to 0 for observation 24 causes the F statistic to increase from 88 to 107, given that X6 has not been excluded from the model. The R2 increases to 96.9 and the adjusted R2 increases to 95.8. This result suggests that observation 24 is either an outlier or should be handled with a dummy variable. A priori justification should be used when handling observations in this manner.
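The exclusion rule can be sketched in numpy. The data are hypothetical, with an outlier deliberately planted at row 24; studentized residuals are computed from the hat matrix, and setting an observation's Restriction to 0 is mimicked by adding a dummy variable equal to 1 only for that row.

```python
import numpy as np

# Hypothetical data with a planted outlier at observation 24.
rng = np.random.default_rng(4)
n = 30
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)
y[24] += 8.0                                  # plant the outlier

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
resid = y - H @ y
s2 = resid @ resid / (n - 2)

# Studentized residuals; values above 2 flag candidate outliers
studentized = resid / np.sqrt(s2 * (1.0 - np.diag(H)))

def r_squared(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return 1 - e @ e / np.sum((y - y.mean()) ** 2)

r2_all = r_squared(X, y)

# Restriction = 0 for row 24: add a dummy that is 1 only for that row,
# which removes the observation's influence on the remaining betas.
dummy = np.zeros(n)
dummy[24] = 1.0
r2_dummy = r_squared(np.column_stack([X, dummy]), y)
```

The fit statistics improve once the flagged row is neutralized, mirroring the jump in F and R2 reported for the sample output; as the manual notes, such handling should still be justified a priori.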