The Moorish Wanderer

The Big Picture – Part 4

Posted in Dismal Economics, Moroccan Politics & Economics, Morocco, Read & Heard by Zouhair ABH on May 14, 2012

The standard RBC model has several major limitations that prevent it from matching the data – in this case, the summary statistics obtained after the main aggregates have been HP-filtered. The graph below, for instance, shows a long-term comparison between actual GDP data and RBC-generated output. The widening gap can be explained by the fact that savings in the standard RBC setup are exclusively domestic; recall the capital accumulation dynamics:
k_{t+1}=(1-\delta)k_t + i_t
and National Accounting identities:
c_t + i_t = y_t
and
y_t = z_t k_t^\alpha h_t^{1-\alpha}
Obviously, if the Moroccan economy had relied solely on domestic savings, capital would have accumulated at a lower rate, leading to lower levels of output. Furthermore, because Morocco is not an immigration country – meaning demographic growth is endogenous – capital dynamics account for a great deal of output growth, which vindicates the initial claim that domestic savings are not high enough to explain the levels of investment observed over the past half-century.
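The closed-economy dynamics above can be sketched numerically in a few lines. In the minimal Python sketch below, the depreciation rate, capital share, saving rate, constant hours and initial capital are all placeholder assumptions, not the post's calibrated values:

```python
# Closed-economy RBC accumulation: k_{t+1} = (1-delta) k_t + i_t,
# y_t = z_t k_t^alpha h_t^(1-alpha), with i_t financed only by domestic saving.
# All numbers below are illustrative assumptions.
delta, alpha = 0.03, 0.34   # depreciation rate and capital share (assumed)
z, h = 1.0, 1 / 3           # TFP level and hours worked, held constant (assumed)
s = 0.20                    # fixed domestic saving rate (assumed)

k = 1.0                     # initial capital stock (arbitrary)
path = []
for t in range(50):
    y = z * k**alpha * h**(1 - alpha)   # production function
    i = s * y                           # investment = domestic saving only
    k = (1 - delta) * k + i             # capital law of motion
    path.append(y)

print(path[0], path[-1])    # output grows toward its closed-economy steady state
```

With saving capped at the domestic rate, output converges to a lower steady state than it would if foreign capital inflows supplemented investment, which is the gap the post attributes to the Balance of Payments.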

This in my opinion is the strongest piece of evidence I would consider for pro-free trade policies: capital flows boost the economy, to the tune of 130Bn Dirhams every year since 1965, in real terms.

In addition to Balance of Payments issues, the RBC model needs to embed government policies in its intrinsic functions. Overall, an RBC model described by an inter-temporal CRRA utility function and the resource constraints mentioned above yields the following:

HP Data     |σ      |σj/σy |Corr(y,j)|
------------+-------+------+----------
Y_GDP       |0.08030|   1  |    1    |
------------+-------+------+----------
Consumption |0.07013|0.8734|  0.8215 |
------------+-------+------+----------
Investment  |0.22035|2.7441|  0.8369 |
------------+-------+------+----------
Government  |0.24127|3.0046|  0.4997 |
------------+-------+------+----------
Labour      |0.04256|0.5300| -0.8670 |
--------------------------------------
RBC         |σ      |σj/σy |Corr(y,j)|
------------+-------+------+----------
Y_GDP       |0.06596|   1  |    1    |
------------+-------+------+----------
Consumption |0.04715|0.7148|  0.5092 |
------------+-------+------+----------
Investment  |0.20460|3.1018|  0.8766 |
------------+-------+------+----------
Government  |         No Data        |
------------+-------+------+----------
Labour      |0.00002|0.0003|  0.0238 |
--------------------------------------

Starting from the mid-1960s, Real GDP departed significantly from RBC-generated GDP. Incidentally, Morocco's Balance of Payments picked up steam around the same time. (log-levels)

As you can see, the standard RBC model does pretty well in explaining cyclical fluctuations in GDP, household consumption and investment dynamics – though it exhibits lower levels of volatility for GDP, consumption and investment.

So even though the synthetic data shows discrepancies like that of GDP's, it retains similar features – in this case volatility, correlation and relative variance with respect to the other aggregates.
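The moments reported in the table can be reproduced mechanically once the series are HP-filtered. Below is a minimal sketch with a hand-rolled HP filter (λ = 100 is the usual choice for annual data); the two series are simulated stand-ins, not the Moroccan aggregates:

```python
import numpy as np

def hp_cycle(x, lamb=100.0):
    """Cyclical component of the HP filter (lambda = 100 for annual data)."""
    n = len(x)
    D = np.zeros((n - 2, n))              # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Trend solves min sum (x - tau)^2 + lamb * sum (D tau)^2
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, x)
    return x - trend

rng = np.random.default_rng(0)
t = np.arange(60.0)
log_y = 0.04 * t + 0.05 * np.cumsum(rng.standard_normal(60))   # stand-in log GDP
log_c = 0.04 * t + 0.04 * np.cumsum(rng.standard_normal(60))   # stand-in log consumption

cy_y, cy_c = hp_cycle(log_y), hp_cycle(log_c)
sigma_y, sigma_c = cy_y.std(), cy_c.std()
print(sigma_y)                            # absolute volatility, sigma
print(sigma_c / sigma_y)                  # relative volatility, sigma_j / sigma_y
print(np.corrcoef(cy_y, cy_c)[0, 1])      # Corr(y, j)
```

Each row of the table is just these three numbers computed on the corresponding filtered aggregate against filtered GDP.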

The basic model provides powerful results, but not powerful enough to start building forecasts and statistics-based predictions on; there is a need for newly specified functions in which foreign trade, government expenditure and perhaps cross-correlated structural shocks are embedded.

The Big Picture – Part 3

Posted in Dismal Economics, Moroccan Politics & Economics, Morocco, Read & Heard by Zouhair ABH on May 7, 2012

Looking back at the last two posts, I must admit I spent too much time trying to identify the deep parameters needed to build Morocco's RBC model. But it was a blessing in disguise: when the standard calibration methods are applied, the resulting structural parameters fit very well with the earlier regressed/estimated results; no harm done there.

This means a lot. Calibrated parameters are usually constructed from steady-state relationships – or, in my case, by computing long-term trends for the variables of interest – and as far as academic modus operandi goes, there is no particular standard method to follow. The fact that econometric computations, however rudimentary and based on a relatively small sample, tend to be vindicated by the calibration method points to the existence of some steady state (yet to be determined), and to the superiority of a long-term sample over that of HCP's forecast model.

The deep parameters’ vector encompasses the following values:

\eta – time fraction allocated to labour activities: 1/3

\alpha – capital share of output: .3414 (homogeneity of degree one is assumed, and the labour share is thus 1-\alpha)

\theta – a measure of risk aversion: .0370

\delta – depreciation rate of capital (annual): 2.905%

\beta – household discount rate: .9198

\rho – persistence of structural exogenous shocks: .923611

\sigma – ‘white noise’ standard deviation: .00177

These figures are computed from steady-state identities:

1/\alpha=\left[\frac{\delta+\frac{1}{\beta}-1}{\delta}\right]\frac{i_{lt}}{y_{lt}}

2/\theta=\frac{\delta+\hat{\beta}}{\delta} with \beta=\frac{1}{1+\hat{\beta}}

3/\eta=\theta^{\alpha}

4/\phi=\frac{1}{1+\frac{2(1-\alpha)}{c_{lt}/y_{lt}}}

Finally, the remaining figure to compute is k_0, which is derived from the law of motion of capital at steady state, with

\frac{k_{lt}}{y_{lt}}=\frac{i_{lt}}{\delta y_{lt}}

The subscript ‘lt’ refers to the steady-state proxy, i.e. the long-term mean. As for the initial stock of capital (a figure nowhere to be found, I am afraid), the idea is to use properties of the balanced growth path, with y_0 as the initial state for output, thus ensuring a ‘calibrated’ k_0.
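The back-of-the-envelope arithmetic behind the calibration can be sketched as follows. The long-run investment share i_y below is an illustrative placeholder (chosen so the first identity returns a capital share near the post's value), and the initial output level is arbitrary; δ and β are the post's calibrated figures:

```python
# Back out parameters from the steady-state identities above.
delta = 0.02905          # annual depreciation rate (from the post)
beta = 0.9198            # household discount factor (from the post)
i_y = 0.0853             # i_lt / y_lt, assumed long-run mean (placeholder)

# Identity 1: alpha = [(delta + 1/beta - 1) / delta] * (i_lt / y_lt)
alpha = ((delta + 1.0 / beta - 1.0) / delta) * i_y

# Steady-state capital-output ratio: k_lt / y_lt = i_lt / (delta * y_lt)
k_y = i_y / delta

y0 = 100.0               # initial output level, arbitrary units (assumed)
k0 = k_y * y0            # the 'calibrated' initial capital stock
print(alpha, k_y, k0)
```

The point is that once δ, β and the long-run investment share are pinned down, α and the initial capital stock follow mechanically from the steady-state identities.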

While these figures are not strictly equal to the estimates described before, they all fit within the 95% confidence intervals of their respective estimates – in fact, they all fit within the 99% CIs. The simple difference here is that these parameters are fixed values, while the estimates provided in the last two posts are in essence random variables centred around the OLS estimates; this will be helpful in estimating the persistence factor of exogenous shocks. Indeed, the standard recipe is to consider the formula best able to capture the Solow residual, defined as:

\log(z_t)=\log(y_t)-\alpha\log(k_t)-(1-\alpha)\log(n_t)

recall the first post for the log-linear argument; exogenous shocks are thus defined as an AR(1) process, with a fixed term that denotes long-term shocks:

\log(z_t)=\rho\log(z_{t-1}) + (1-\rho)\log(z_{lt}) + \epsilon_t

The next step is to compute z_t and HP-filter it just like all the other aggregates, then regress the trend – i.e. \log(z_{lt}) – on a time index to find (1-\rho), which yields the following results:

      Source |       SS       df       MS              Number of obs =      62
-------------+------------------------------           F(  1,    60) = 2753.20
       Model |  115.862298     1  115.862298           Prob > F      =  0.0000
    Residual |  2.52497013    60  .042082836           R-squared     =  0.9787
-------------+------------------------------           Adj R-squared =  0.9783
       Total |  118.387268    61  1.94077489           Root MSE      =  .20514
------------------------------------------------------------------------------
 HP_z_t_sm_1 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       index |    .076389   .0014558    52.47   0.000     .0734769    .0793011
       _cons |   22.80177   .0527426   432.32   0.000     22.69627    22.90727
------------------------------------------------------------------------------

The reader can observe that the coefficient .076389 implies that the parameter \rho has a value of .923611. Furthermore, the model's residual mean square (MS) also allows for a ‘white noise’ estimate: a zero-centred normal distribution with \sigma=.00177
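The whole persistence computation can be sketched in a few lines. The series below are simulated stand-ins for the actual data, the capital share is an assumption, and the HP-filtering step applied in the post is skipped here for brevity (the Solow residual is regressed directly on the time index):

```python
import numpy as np

# Build a Solow residual, then regress it on a time index and read
# rho = 1 - slope, as in the post.
rng = np.random.default_rng(1)
n, t = 62, np.arange(62.0)
log_y = 0.04 * t + 0.10 * rng.standard_normal(62)   # stand-in log output
log_k = 0.05 * t + 0.10 * rng.standard_normal(62)   # stand-in log capital
log_n = 0.01 * t + 0.05 * rng.standard_normal(62)   # stand-in log labour
alpha = 0.34                                         # capital share (assumed)

# Solow residual: log z = log y - alpha log k - (1-alpha) log n
log_z = log_y - alpha * log_k - (1 - alpha) * log_n

X = np.column_stack([np.ones(n), t])                 # constant + time index
coef, *_ = np.linalg.lstsq(X, log_z, rcond=None)     # OLS
rho = 1.0 - coef[1]                                  # persistence estimate
print(rho)
```

On the post's actual series this is the regression whose slope of .076389 delivers \rho = .923611.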

We propose the following system of identities to solve Morocco’s RBC:

Households maximize and smooth their utilities over time according to the following:

U\left(c_t;1-h_t\right)=E\sum\limits_{t=0}^{\infty}\beta^t\frac{\left[c_t^\gamma(1-h_t)^{1-\gamma}\right]^{1-\phi}}{1-\phi}

the function is maximized subject to the following constraints:

k_{t+1}=i_t+(1-\delta)k_t

y_t=z_t k_t^\alpha n_t^{1-\alpha}

i_t=s_t+\exp(z_t)tb_t

tb_t refers to the trade balance, the aggregate that explains the discrepancies between domestic savings and actual investment.

(next piece will deal with a first set of simulation and comparison with HP-filtered data in the first post)

The Big Picture – Part 2

Consumption smoothing is a reality in view of the empirical data, and on this particular occasion HCP's own PRESIMO model is at fault: the reliability of its estimated coefficients is questionable, and the model specifications themselves can be gainsaid as to their robustness.

Consider their proposed model for household consumption:

\log(c_t)=.73\log(rw_t)+.87\log\left[\frac{rw_{t-1}}{c_{t-1}}\right]-.84\left[\frac{u_t}{l+u_t}\right]-.82icv_t-.01(r_{t_0}-icv_t)

The most important variable in determining consumption is household disposable income. In the model, this variable is endogenous and results from a set of components: the wage bill, gross operating surplus (EBE), property income, income taxes, current transfers, social benefits and social contributions.

The reported t-values indicate a pretty large standard deviation attached to each of the computed coefficients in this formula (just divide the estimated coefficients by their corresponding t-values). Not to mention that inflation and short-term interest rates tend to make the model dependent on contingent data, hence the relatively high R² – though it comes at the expense of a long-term, structural explanation of how households smooth their consumption across time and variations in income.

Consumption smoothing can be traced back to the consumption cycle, whose volatility – both absolute and relative to GDP's – is second only to that of labour. The proposed alternative does away with inflation and short-term interest rates, as well as unemployment. The idea behind it can be broken down into two sub-parts:

– long-term trends: inflation and distortionary interest rates do not persist for long, and are eventually factored in by households. The fact that there is little (genuine) concern over subsidies provided by the Compensation Fund, as well as the short-lived effects of Bank Al-Maghrib's decision to cut its policy rate by 25bps, are two illustrative examples of the simple intuition behind the idea: households rationalize a lot more than they let on, and decisions of short-term consequence (including inflation and unexpected shifts in monetary policy) are eventually factored in, their effects fading as time goes by. In this particular exercise we are interested in real consumption behaviour over a very long period of time; and because most of the aggregates are expressed in real terms, inflation is already accounted for.

– unemployment is a bit more difficult to gauge from aggregate macroeconomic data; furthermore, because the model is based on household units instead of individuals, there is a risk-sharing mechanism that alleviates the effects of unemployment and the uncertainty attached to it.

We therefore consider the following model:

U\left(c_t;1-h_t\right)=E\sum\limits_{t=0}^{\infty}\beta^t\frac{\left[c_t^\gamma(1-h_t)^{1-\gamma}\right]^{1-\phi}}{1-\phi}

where \beta is the discount factor and \gamma the utility weight on consumption (1-\gamma being the weight on leisure).

While the formula might look baffling, it displays interesting computational properties in terms of inter-temporal behaviour – the trade-offs households face in deciding their present and immediate-future consumption; in the limit \phi \to 1 we get:

E\sum\limits_{t=0}^{\infty}\beta^t\left[\gamma\log(c_t)+(1-\gamma)\log(1-h_t)\right]

(the ‘curvature’ of the proposed utility function denotes the ‘intensity’ of inter-temporal arbitrage)
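The limiting case can be made precise. Subtracting the constant \frac{1}{1-\phi} from each period's utility (which leaves the maximization unchanged) and applying l'Hôpital's rule as \phi\to 1:

```latex
\lim_{\phi\to1}\frac{\left[c_t^\gamma(1-h_t)^{1-\gamma}\right]^{1-\phi}-1}{1-\phi}
=\log\left[c_t^\gamma(1-h_t)^{1-\gamma}\right]
=\gamma\log(c_t)+(1-\gamma)\log(1-h_t)
```

which is the standard log-utility form of the CRRA specification.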

PWT provides a dataset with consumption per capita, GDP per capita and GDP per effective worker. Consumption per capita is then computed back into an aggregate of consumption per household, so as to preclude uncertainty around unemployment. Worked hours are then computed on the basis of the 40-hour week prescribed by Moroccan labour law.

When computed, the First Order Conditions on that utility function yield the following, which is then regressed to provide estimates for the parameters described above:

. reg C k_h      
      Source |       SS       df       MS              Number of obs =      56
-------------+------------------------------           F(  1,    54) =    1.48
       Model |  .000169425     1  .000169425           Prob > F      =  0.2283
    Residual |  .006162348    54  .000114118           R-squared     =  0.0268
-------------+------------------------------           Adj R-squared =  0.0087
       Total |  .006331773    55  .000115123           Root MSE      =  .01068
------------------------------------------------------------------------------
           C |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         k_h |    .247134   .2028243     1.22   0.228    -.1595042    .6537721
       _cons |   .9015001   .0882337    10.22   0.000     .7246021    1.078398
------------------------------------------------------------------------------

with: C=\frac{c_{t+1}}{c_{t}}
k_h= \frac{\beta}{1-\gamma}\left[\alpha\left(\frac{k}{h}\right)^{\alpha-1}+1-\delta\right]
Households’ own \beta is therefore .9015, which does square with estimates from academic (and a lot more serious) papers.
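For completeness, the shape of that regression can be sketched as follows. The data below are simulated stand-ins with scales chosen for illustration (the actual C and k_h series come from PWT as described above):

```python
import numpy as np

# Euler-equation regression: consumption growth C = c_{t+1}/c_t on the
# regressor k_h; the intercept plays the role of the discount factor beta.
rng = np.random.default_rng(2)
n = 56
k_h = 0.1 + 0.01 * rng.standard_normal(n)                 # regressor (illustrative)
C = 0.9015 + 0.247 * k_h + 0.01 * rng.standard_normal(n)  # simulated c_{t+1}/c_t

X = np.column_stack([np.ones(n), k_h])                    # constant + k_h
(b0, b1), *_ = np.linalg.lstsq(X, C, rcond=None)          # OLS
print(b0, b1)   # intercept near the post's beta estimate of .9015
```

As in the Stata output above, the intercept (_cons) is the estimate of interest while the slope on k_h is imprecisely measured.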

These deep (structural) parameters are now all identified; the next step is to build Morocco's RBC model.