Variable Cross-Section Dependence Test:
Before estimating a model on our panel data, several preliminary diagnostics are necessary; the first of these is the variable cross-section dependence test.
The Variable Cross-Section Dependence Test (VCSD) helps researchers identify whether cross-
sectional dependence is present in panel data and, if present, to what extent. It provides a
statistical framework for testing the null hypothesis of cross-sectional independence against the
alternative hypothesis of cross-sectional dependence. In short, it assesses whether observations across entities are independent, which is crucial for accurate panel data modeling.
Table 1: Variable Cross-Section Dependence Test
Variables   Breusch-Pagan           Pesaran Scaled          Bias-Corrected Scaled   Pesaran CD
            Statistic   P-value     Statistic   P-value     Statistic   P-value     Statistic   P-value
GDP         407.3140    0.0000      58.52942    0.0000      58.34521    0.0000      20.18028    0.0000
EC          351.6663    0.0000      49.94279    0.0000      49.75858    0.0000      18.71408    0.0000
CO2         278.2259    0.0000      38.61069    0.0000      38.42648    0.0000      16.3431     0.0000
FDI         42.59273    0.0035      2.251708    0.0243      2.067497    0.0387      0.597521    0.5502
IND         98.01356    0.0000      10.80333    0.0000      10.61912    0.0000      1.209677    0.2264
The results of the cross-section dependence tests in Table 1 indicate significant cross-sectional dependence among the variables. The Breusch-Pagan, Pesaran scaled, and bias-corrected scaled statistics reject the null hypothesis of no cross-sectional dependence for every variable, and the Pesaran CD statistic does so for GDP, EC, and CO2 (for FDI and IND the CD p-values of 0.5502 and 0.2264 are insignificant, but the LM-based tests still reject). A p-value close to 0 indicates strong evidence against the null hypothesis of cross-sectional independence. For instance, for GDP the Breusch-Pagan statistic is 407.3140 with a p-value of 0.0000, strong evidence of cross-sectional dependence, and the statistics and p-values for EC and CO2 point the same way.
In summary, these results suggest that when analyzing these variables, it's essential to account
for cross-sectional dependence in the model to ensure the validity of statistical inferences.
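Both families of statistics used here are built from the pairwise correlations of the series (or residuals) across units. A minimal sketch in Python, using synthetic data with a common factor so that dependence is present by construction (the data and dimensions are illustrative, not the paper's):

```python
import numpy as np
from scipy import stats

def cross_section_dependence(series):
    """series: (T, N) array, one column per country.
    Returns the Breusch-Pagan LM and Pesaran CD statistics with p-values."""
    T, N = series.shape
    rho = np.corrcoef(series, rowvar=False)        # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)                   # all pairs i < j
    lm = T * np.sum(rho[iu] ** 2)                  # BP LM ~ chi2(N(N-1)/2) under H0
    p_lm = stats.chi2.sf(lm, N * (N - 1) / 2)
    cd = np.sqrt(2 * T / (N * (N - 1))) * np.sum(rho[iu])  # Pesaran CD ~ N(0,1)
    p_cd = 2 * stats.norm.sf(abs(cd))
    return lm, p_lm, cd, p_cd

# Synthetic panel with a strong common factor, so dependence should be detected
rng = np.random.default_rng(0)
T, N = 24, 7
factor = rng.normal(size=(T, 1))
data = factor + 0.5 * rng.normal(size=(T, N))
lm, p_lm, cd, p_cd = cross_section_dependence(data)
```

With a strong common factor both p-values come out near zero, mirroring the pattern for GDP, EC, and CO2 in Table 1.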
Residual Cross-Section Dependence Test:
A residual cross-section dependence test is a statistical method used in econometrics to detect
whether there is correlation or dependence among the error terms across different units in panel
data analysis. Residual cross-section dependence can lead to biased estimates and incorrect
inferences in econometric models if not addressed properly.
Table 2: Residual Cross-Section Dependence Test
Test Statistic P-Value
Breusch-Pagan LM 141.7652 0.0000
Pesaran Scaled LM 17.55436 0.0000
Pesaran CD 2.488228 0.0128
Based on the results of the residual cross-section dependence tests, the Breusch-Pagan LM statistic is 141.7652 with a p-value of 0.0000, strong evidence against the null hypothesis of no residual cross-section dependence; in other words, there is significant residual correlation across the cross-sectional units in our panel data. The Pesaran scaled LM statistic of 17.55436 (p-value 0.0000) points the same way, and the Pesaran CD test (2.488228, p-value 0.0128) confirms the result at the 5% level.
Overall, based on these results, it seems clear that there is significant residual cross-section
dependence in our panel data.
Second Generation Unit Root Test:
The variable and residual cross-section dependence tests both show significant evidence of cross-section dependence, so first-generation unit root tests, which assume cross-sectional independence, are inappropriate here; we therefore use a second-generation unit root test. Second-generation tests aim to provide more robust and accurate results than traditional unit root tests when analyzing panel data, delivering reliable inference in the presence of various forms of dependence, heterogeneity, and non-linearity. They are particularly useful in empirical applications where the assumptions of first-generation tests may be violated.
Table 3: Second-Generation Unit Root Test
(CIPS statistics)
Variables   Level                                       1st Difference
            No Cons.    Constant    Cons & Trend        No Cons.    Constant    Cons & Trend
GDP         -0.641      -1.089      -1.852              -2.542***   -2.831***   -2.861*
CO2         -0.631      -1.775      -2.676              -3.794***   -3.871***   -3.592***
FDI         -3.347***   -3.474***   -3.789***
EC          -1.449      -1.589      -1.548              -2.982***   -2.943***   -3.057**
IND         -2.083***   -2.145      -2.783*             -4.028***   -4.127***   -4.384***
The critical values for determining stationarity are provided in the test documentation or literature; comparing the CIPS statistics to these critical values determines whether each series is stationary. If the absolute value of the test statistic exceeds the critical value, the null hypothesis of a unit root is rejected and the series is concluded to be stationary; otherwise the null cannot be rejected and the series is non-stationary. In Table 3, the CIPS value for GDP at level is -0.641, which is not stationary, but at first difference the statistic is -2.542, which is highly significant. CO2 is likewise non-stationary at level but highly significant at first difference (the stars in the table mark significance). FDI is highly significant already at level, EC is non-stationary at level but highly significant at first difference, and IND is significant at level only in the no-constant specification but significant in all specifications at first difference.
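The CIPS statistic is the cross-sectional average of unit-level CADF t-statistics, where each ADF regression is augmented with the lagged cross-section average and its first difference. A minimal sketch of this construction (without the lag augmentation a full implementation would add), applied to synthetic unit-root and stationary panels:

```python
import numpy as np

def cadf_t(y, ybar):
    """t-statistic on the lagged level in a CADF regression:
    dy_t = a + b*y_{t-1} + c*ybar_{t-1} + d*dybar_t + e_t"""
    dy, dybar = np.diff(y), np.diff(ybar)
    X = np.column_stack([np.ones(len(dy)), y[:-1], ybar[:-1], dybar])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - X.shape[1])          # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def cips(panel):
    """panel: (T, N). CIPS = average of the unit-level CADF t-statistics."""
    ybar = panel.mean(axis=1)
    return np.mean([cadf_t(panel[:, i], ybar) for i in range(panel.shape[1])])

# Synthetic check: random walks (unit roots) vs. stationary AR(1) series
rng = np.random.default_rng(1)
T, N = 100, 7
rw = np.cumsum(rng.normal(size=(T, N)), axis=0)
ar = np.zeros((T, N))
eps = rng.normal(size=(T, N))
for t in range(1, T):
    ar[t] = 0.5 * ar[t - 1] + eps[t]
cips_rw, cips_ar = cips(rw), cips(ar)
```

As in Table 3, the stationary panel yields a much more negative CIPS value than the unit-root panel, which is what pushes the statistic past its critical value.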
Descriptive Statistics:
Descriptive statistics involves summarizing and describing data to understand its main features
without making inferences about larger populations. It encompasses measures like central
tendency (mean, median, mode), dispersion (range, variance, standard deviation), and graphical
representations (histograms, box plots) to present data characteristics effectively.
Table 4: Descriptive Statistics
Variable   Afghanistan  Bangladesh  Bhutan      India      Nepal      Pakistan   Sri Lanka
GDP
 Mean      1.37e+09     1.50e+11    1.48e+09    1.57e+12   2.00e+10   2.26e+11   6.10e+10
 Std. dev  5.38e+09     5.33e+10    5.85e+08    6.3e+11    5.12e+09   5.42e+10   1.96e+10
 Min       5.95e+09     8.35e+10    6.55e+08    8.01e+11   1.34e+10   1.46e+11   3.56e+10
 Max       2.11e+10     2.58e+11    2.47e+09    2.69e+12   3.06e+10   3.24e+11   9.22e+10
CO2
 Mean      .1305238     .3454421    .7513229    1.314267   .2195635   .7954681   .7653386
 Std. dev  .0790785     .133783     .3710144    .3367599   .1365143   .0761666   .1776208
 Min       .0337854     .1601221    .3553215    .8870139   .1005114   .683968    .5737181
 Max       .2965062     .5861576    1.391842    1.812696   .5406519   .9563447   1.090676
FDI
 Mean      1.186387     .8771916    1.185776    1.63054    .3301632   1.177988   1.272266
 Std. dev  1.228071     .4426651    1.626094    .7207652   .5359334   .9959528   .3339848
 Min       .1244959     .0955794    -.6755639   .6058893   -.098374   .3755285   .8418734
 Max       4.364535     1.735418    6.321598    3.620522   2.412665   3.668323   1.863973
EC
 Mean      .0810891     .98945      .0470112    21.2335    .0877623   2.556362   .2638118
 Std. dev  .0543468     .3411099    .0186219    6.556248   .0369879   .5419201   .0646552
 Min       .0151095     .511        .016969     12.20718   .0502012   1.811211   .1929788
 Max       .110508      1.587       .07639      31.78274   .1701738   3.434414   .3684541
Ind
 Mean      21.24489     25.83943    40.62353    28.48032   15.06611   19.32659   29.42594
 Std. dev  5.324754     3.343688    2.866156    1.946608   2.046352   1.182919   1.256817
 Min       10.05187     22.27938    36.05893    24.59147   12.65542   17.54846   26.82059
 Max       28.21077     32.85255    45.09979    31.13672   20.73557   21.7374    31.12828
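Country-level summaries of this kind can be produced directly from a long-format panel. A small sketch with hypothetical data (the country names match the sample; the values are synthetic, not the paper's):

```python
import numpy as np
import pandas as pd

# Hypothetical long-format panel: one row per country-year observation
rng = np.random.default_rng(2)
countries = ["Afghanistan", "Bangladesh", "Bhutan", "India",
             "Nepal", "Pakistan", "Sri Lanka"]
df = pd.DataFrame({
    "country": np.repeat(countries, 20),          # 20 years per country
    "FDI": rng.normal(1.2, 0.8, size=7 * 20),     # synthetic FDI values
})

# One row per country with the four descriptive statistics used in Table 4
summary = df.groupby("country")["FDI"].agg(["mean", "std", "min", "max"])
```

The same `groupby(...).agg(...)` call extends to all five variables by selecting several columns at once.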
Slope Heterogeneity Test:
A slope heterogeneity test is a statistical procedure used to determine whether the slope coefficients of a panel regression are the same across cross-sectional units (here, countries) or differ between them. Based on the results in Table 6 below, slope homogeneity is strongly rejected: the delta statistic is 8.224 and the adjusted delta statistic is 9.908, both with p-values of 0.000. This means that the relationship between the variables (represented by the slopes) differs substantially across the countries in the panel.
Table 6: Slope Heterogeneity Tests
Statistic    Value    P-Value
Delta        8.224    0.000
Adj. Delta   9.908    0.000
In summary, the results in Table 6 suggest that the relationship between the variables being studied varies significantly across the countries in the panel, as indicated by the statistically significant delta statistics.
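Delta statistics of this form follow the Pesaran and Yamagata (2008) approach: compare unit-specific OLS slopes to a variance-weighted pooled slope and measure the dispersion. The sketch below is a simplified variant under those assumptions (implementations differ in how the error variances and the weighted estimator are formed), checked on synthetic homogeneous and heterogeneous panels:

```python
import numpy as np

def delta_tests(y, x):
    """Slope-homogeneity delta statistics (simplified Pesaran-Yamagata variant).
    y: (N, T) dependent variable; x: (N, T, k) regressors."""
    N, T, k = x.shape
    A, b = np.zeros((k, k)), np.zeros(k)
    betas, sig2, XtX = [], [], []
    for i in range(N):
        xi = x[i] - x[i].mean(axis=0)          # within-unit demeaning
        yi = y[i] - y[i].mean()
        G = xi.T @ xi
        bi = np.linalg.solve(G, xi.T @ yi)     # unit-specific OLS slope
        ei = yi - xi @ bi
        s2 = ei @ ei / (T - k - 1)             # unit residual variance
        betas.append(bi); sig2.append(s2); XtX.append(G)
        A += G / s2
        b += xi.T @ yi / s2
    beta_wfe = np.linalg.solve(A, b)           # variance-weighted pooled slope
    S = sum((bi - beta_wfe) @ (G / s2) @ (bi - beta_wfe)
            for bi, s2, G in zip(betas, sig2, XtX))
    delta = np.sqrt(N) * (S / N - k) / np.sqrt(2 * k)
    delta_adj = np.sqrt(N) * (S / N - k) / np.sqrt(2 * k * (T - k - 1) / (T + 1))
    return delta, delta_adj

# Synthetic check: common slope vs. unit-specific slopes
rng = np.random.default_rng(3)
N, T, k = 7, 40, 1
x = rng.normal(size=(N, T, k))
slopes = rng.normal(1.0, 1.0, size=N)
y_hom = 1.0 * x[:, :, 0] + rng.normal(size=(N, T))
y_het = slopes[:, None] * x[:, :, 0] + rng.normal(size=(N, T))
d_hom, _ = delta_tests(y_hom, x)
d_het, _ = delta_tests(y_het, x)
```

Under heterogeneous slopes the delta statistic is far larger than under a common slope, which is the pattern a rejection like Table 6's reflects.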
Slope Heterogeneity Test (HAC):
The slope heterogeneity test using HAC estimators is a statistical procedure used to examine
whether the relationship between independent variables and the dependent variable varies across
different groups or time periods while accounting for potential heteroskedasticity and
autocorrelation in the data.
Table 7: Slope Heterogeneity Test (HAC)
Statistic    Value    P-Value
Delta        4.187    0.000
Adj. Delta   5.045    0.000
Table 7 presents the slope heterogeneity tests recomputed with HAC (heteroskedasticity- and autocorrelation-consistent) standard errors. The delta statistic is 4.187 and the adjusted delta statistic is 5.045, both with p-values of 0.000, strong evidence against the null hypothesis of slope homogeneity. Thus, even after allowing for heteroskedasticity and autocorrelation, there is significant heterogeneity in slopes across the countries in the panel.
Correlation Matrix of Variables:
A correlation matrix of variables is a square matrix that displays the correlation coefficients
between all pairs of variables in a dataset. Each cell in the matrix represents the correlation
coefficient between two variables, indicating the strength and direction of their linear
relationship. Correlation coefficients range from -1 to 1, where 1 indicates a perfect positive
linear relationship, -1 indicates a perfect negative linear relationship, and 0 indicates no linear
relationship.
Table 8: Correlation Matrix of Variables
LGDP LCO2 LFDI LEC LIND
LGDP 1.0000
LCO2 0.4711 1.0000
LFDI 0.2376 0.2880 1.0000
LEC 0.9434 0.6555 0.2714 1.0000
LIND -0.1985 0.3836 0.4250 -0.0256 1.0000
The correlation matrix in Table 8 shows a very strong positive correlation between LGDP and LEC (0.9434), moderate positive correlations between LCO2 and LEC (0.6555) and between LCO2 and LGDP (0.4711), and weak correlations elsewhere; LIND is weakly negatively correlated with LGDP (-0.1985). The strong LGDP-LEC correlation is worth noting as a potential source of multicollinearity.
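A matrix like Table 8 is straightforward to compute with pandas. A small sketch on synthetic data, where one pair of series is constructed to be strongly correlated in the spirit of the LGDP-LEC pattern (the variable names and values are illustrative only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=["LGDP", "LEC", "LIND"])
df["LEC"] = 0.9 * df["LGDP"] + 0.1 * df["LEC"]   # make LEC track LGDP closely

corr = df.corr()   # Pearson correlation matrix, ones on the diagonal
```

Reading the result is the same as reading Table 8: diagonal entries are exactly 1, and off-diagonal entries near +1 or -1 flag strong linear relationships.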
Westerlund Test for Cointegration:
The Westerlund test specifically tests for cointegration in panel data settings, where both the
cross-section and time-series dimensions are present. It is an extension of the Engle-Granger
two-step procedure to panel data. It is particularly useful in panel data analysis when dealing
with multiple time series variables and can provide insights into the long-term relationships
among these variables.
Table 9: Westerlund Test for Cointegration
Statistic P-value
Variance Ratio -0.8154 0.2074
The p-value of 0.2074 is higher than conventional significance levels (e.g., 0.05 or 0.01). This
suggests that there is insufficient evidence to reject the null hypothesis of no cointegration. In
other words, based on the results of the Westerlund test, we fail to find significant evidence of
cointegration among the variables at the chosen significance level. Therefore, it is likely that the
variables in our panel data do not exhibit a long-term equilibrium relationship or
cointegration. It's important to note that the interpretation may vary depending on the chosen significance level and the specific context of our analysis. Additionally, researchers often consider other factors and conduct further analysis to validate the results of cointegration tests.
Pedroni Test:
Pedroni's panel cointegration tests are statistical methods used to assess the existence of
cointegration relationships among variables in panel datasets.
Table 10: Pedroni Cointegration Test
Test statistic   Panel      Group
v                1.042
rho              -0.2377    0.501
t                -2.19      -2.557
adf              2.972      3.746
Each panel and group statistic is compared with its critical value to decide whether the null hypothesis of no cointegration can be rejected. Based on the reported statistics, the Pedroni test in Table 10 suggests that there is cointegration among the variables.
Table 11: Variables, Measure, Code and Source
Variable                     Measure        Code   Source
GDP growth rate              Percentages    GDP    WDI
CO2 emission                                CO2    WDI
Foreign Direct Investment                   FDI    WDI
Energy Consumption           QBTU           EC     USEIA
Industrialization                           Ind    WDI
Hausman Test:
Test                   Chi-Square   P-value
Hausman (PMG & MG)     7.26         0.129
Hausman (PMG & DFE)    3.46         0.48
Both p-values exceed 0.05, so the null hypothesis that the efficient PMG estimator is consistent cannot be rejected; PMG is therefore preferred over both MG and DFE.
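The Hausman statistic contrasts an efficient estimator (here PMG) with an estimator that stays consistent under the alternative (MG or DFE). A generic sketch with illustrative coefficient vectors and covariance matrices, not the paper's estimates:

```python
import numpy as np
from scipy import stats

def hausman(b_eff, b_cons, V_eff, V_cons):
    """Hausman statistic: H = d' (V_cons - V_eff)^{-1} d with d = b_cons - b_eff.
    Under H0 (both estimators consistent, b_eff efficient), H ~ chi2(k)."""
    d = np.asarray(b_cons) - np.asarray(b_eff)
    Vd = np.asarray(V_cons) - np.asarray(V_eff)
    H = float(d @ np.linalg.solve(Vd, d))
    p = stats.chi2.sf(H, d.size)
    return H, p

# Illustrative numbers only: two coefficients, diagonal covariances
H, p = hausman([0.50, 1.20], [0.45, 1.30],
               np.diag([0.005, 0.010]), np.diag([0.010, 0.020]))
# A large p-value, as in the table above, favors the efficient estimator (PMG)
```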