CES Estimation Packages

March 20, 2018 | Author: Hijab Syed | Category: Least Squares, Mathematical Optimization, Errors And Residuals, Numerical Analysis, Algorithms


Comments



Description

Econometric Estimation of the “Constant Elasticityof Substitution” Function in R: Package micEconCES Arne Henningsen G´ eraldine Henningsen∗ Institute of Food and Resource Economics University of Copenhagen Risø National Laboratory for Sustainable Energy Technical University of Denmark Abstract The Constant Elasticity of Substitution (CES) function is popular in several areas of economics, but it is rarely used in econometric analysis because it cannot be estimated by standard linear regression techniques. We discuss several existing approaches and propose a new grid-search approach for estimating the traditional CES function with two inputs as well as nested CES functions with three and four inputs. Furthermore, we demonstrate how these approaches can be applied in R using the add-on package micEconCES and we describe how the various estimation approaches are implemented in the micEconCES package. Finally, we illustrate the usage of this package by replicating some estimations of CES functions that are reported in the literature. Keywords: constant elasticity of substitution, CES, nested CES, R. Preface This introduction to the econometric estimation of Constant Elasticity of Substitution (CES) functions using the R package micEconCES is a slightly modified version of Henningsen and Henningsen (2011a). 1. Introduction The so-called Cobb-Douglas function (Douglas and Cobb 1928) is the most widely used functional form in economics. However, it imposes strong assumptions on the underlying functional relationship, most notably that the elasticity of substitution1 is always one. Given these restrictive assumptions, the Stanford group around Arrow, Chenery, Minhas, and Solow (1961) developed the Constant Elasticity of Substitution (CES) function as a generalisation of the Cobb-Douglas function that allows for any (non-negative constant) elasticity of substitution. This functional form has become very popular in programming models (e.g., general equilib∗ Senior authorship is shared. For instance, in production economics, the elasticity of substitution measures the substitutability between inputs. It has non-negative values, where an elasticity of substitution of zero indicates that no substitution is possible (e.g., between wheels and frames in the production of bikes) and an elasticity of substitution of infinity indicates that the inputs are perfect substitutes (e.g., electricity from two different power plants). 1 2 Econometric Estimation of the Constant Elasticity of Substitution Function in R rium models or trade models), but it has been rarely used in econometric analysis. Hence, the parameters of the CES functions used in programming models are mostly guesstimated and calibrated, rather than econometrically estimated. However, in recent years, the CES function has gained in importance also in econometric analyses, particularly in macroeconomics (e.g., Amras 2004; Bentolila and Gilles 2006) and growth theory (e.g., Caselli 2005; Caselli and Coleman 2006; Klump and Papageorgiou 2008), where it replaces the Cobb-Douglas function.2 The CES functional form is also frequently used in micro-macro models, i.e., a new type of model that links microeconomic models of consumers and producers with an overall macroeconomic model (see for example Davies 2009). Given the increasing use of the CES function in econometric analysis and the importance of using sound parameters in economic programming models, there is definitely demand for software that facilitates the econometric estimation of the CES function. The R package micEconCES (Henningsen and Henningsen 2011b) provides this functionality. It is developed as part of the “micEcon” project on R-Forge (http://r-forge.r-project. org/projects/micecon/). Stable versions of the micEconCES package are available for download from the Comprehensive R Archive Network (CRAN, http://CRAN.R-Project.org/ package=micEconCES). The paper is structured as follows. In the next section, we describe the classical CES function and the most important generalisations that can account for more than two independent variables. Then, we discuss several approaches to estimate these CES functions and show how they can be applied in R. The fourth section describes the implementation of these methods in the R package micEconCES, whilst the fifth section demonstrates the usage of this package by replicating estimations of CES functions that are reported in the literature. Finally, the last section concludes. 2. Specification of the CES function The formal specification of a CES production function3 with two inputs is − ν  ρ −ρ y = γ δx−ρ + (1 − δ) x , 1 2 (1) where y is the output quantity, x1 and x2 are the input quantities, and γ, δ, ρ, and ν are parameters. Parameter γ ∈ [0, ∞) determines the productivity, δ ∈ [0, 1] determines the optimal distribution of the inputs, ρ ∈ [−1, 0) ∪ (0, ∞) determines the (constant) elasticity of substitution, which is σ = 1 /(1 + ρ) , and ν ∈ [0, ∞) is equal to the elasticity of scale.4 The CES function includes three special cases: for ρ → 0, σ approaches 1 and the CES turns to the Cobb-Douglas form; for ρ → ∞, σ approaches 0 and the CES turns to the Leontief 2 The Journal of Macroeconomics even published an entire special issue titled “The CES Production Function in the Theory and Empirics of Economic Growth” (Klump and Papageorgiou 2008). 3 The CES functional form can be used to model different economic relationships (e.g., as production function, cost function, or utility function). However, as the CES functional form is mostly used to model production technologies, we name the dependent (left-hand side) variable “output” and the independent (righthand side) variables “inputs” to keep the notation simple. 4 Originally, the CES function of Arrow et al. (1961) could only model constant returns to scale, but later Kmenta (1967) added the parameter ν, which allows for decreasing or increasing returns to scale if ν < 1 or ν > 1, respectively. Arne Henningsen, G´eraldine Henningsen 3 production function; and for ρ → −1, σ approaches infinity and the CES turns to a linear function if ν is equal to 1. As the CES function is non-linear in parameters and cannot be linearised analytically, it is not possible to estimate it with the usual linear estimation techniques. Therefore, the CES function is often approximated by the so-called “Kmenta approximation” (Kmenta 1967), which can be estimated by linear estimation techniques. Alternatively, it can be estimated by non-linear least-squares using different optimisation algorithms. To overcome the limitation of two input factors, CES functions for multiple inputs have been proposed. One problem of the elasticity of substitution for models with more than two inputs is that the literature provides three popular, but different definitions (see e.g. Chambers 1988): While the Hicks-McFadden elasticity of substitution (also known as direct elasticity of substitution) describes the input substitutability of two inputs i and j along an isoquant given that all other inputs are constant, the Allen-Uzawa elasticity of substitution (also known as Allen partial elasticity of substitution) and the Morishima elasticity of substitution describe the input substitutability of two inputs when all other input quantities are allowed to adjust. The only functional form in which all three elasticities of substitution are constant is the plain n-input CES function (Blackorby and Russel 1989), which has the following specification: y=γ n X !− ν ρ δi x−ρ i (2) i=1 with n X δi = 1, i=1 where n is the number of inputs and x1 , . . . , xn are the quantities of the n inputs. Several scholars have tried to extend the Kmenta approximation to the n-input case, but Hoff (2004) showed that a correctly specified extension to the n-input case requires non-linear parameter restrictions on a Translog function. Hence, there is little gain in using the Kmenta approximation in the n-input case. The plain n-input CES function assumes that the elasticities of substitution between any two inputs are the same. As this is highly undesirable for empirical applications, multiple-input CES functions that allow for different (constant) elasticities of substitution between different pairs of inputs have been proposed. For instance, the functional form proposed by Uzawa (1962) has constant Allen-Uzawa elasticities of substitution and the functional form proposed by McFadden (1963) has constant Hicks-McFadden elasticities of substitution. However, the n-input CES functions proposed by Uzawa (1962) and McFadden (1963) impose rather strict conditions on the values for the elasticities of substitution and thus, are less useful for empirical applications (Sato 1967, p. 202). Therefore, Sato (1967) proposed a family of two-level nested CES functions. The basic idea of nesting CES functions is to have two or more levels of CES functions, where each of the inputs of an upper-level CES function might be replaced by the dependent variable of a lower-level CES function. Particularly, the nested CES functions for three and four inputs based on Sato (1967) have become popular in recent years. These functions increased in popularity especially in the field of macro-econometrics, where input factors needed further differentiation, e.g., issues such as Grilliches’ capital-skill complementarity (Griliches 1969) or wage differentiation between skilled and unskilled labour (e.g., Acemoglu 1998; Krusell, Ohanian, R´ıos-Rull, and Violante 2000; Pandey 2008). (4) For instance. normalising γ1 to one changes γ to γ δγ1−ρ + (1 − δ) and changes δ to   δγ1−ρ δγ1−ρ + (1 − δ) . x1 and x2 could be skilled and unskilled labour.5 In the case of the three-input nested CES function. and δ = δ1 + δ2 . δ1 = δ1 /(1 − δ3 ). However. given an arbitrary vector of input quantities (see Footnote 6 for an example). As the nesting structure is theoretically arbitrary.7 The nesting of the CES function increases its flexibility and makes it an attractive choice for many applications in economic theory and empirical work. In these lower-level CES functions. p p p p n p n n n δ3 = 1 − δ . and δ cannot be (jointly) identified in econometric estimations (see also explanation for the four-input nested CES function above Equation 3). but has no effect on the functional form. only one input of the upper-level CES function is further differentiated:6   −ν/ρ ρ/ρ1 −ρ1 −ρ1 −ρ y = γ δ δ1 x1 + (1 − δ1 )x2 + (1 − δ)x3 . δ4 = (1 − δ2 ) (1 − δ ). γ = γ . respectively. the final specification of the four-input nested CES function is as follows:    ρ/ρ1 ρ/ρ2 −ν/ρ −ρ1 −ρ1 −ρ2 −ρ2 y = γ δ δ1 x1 + (1 − δ1 )x2 + (1 − δ) δ2 x3 + (1 − δ2 )x4 . δ2 = δ3 /(δ3 + δ4 ). 1 2 3 However. . (3) If ρ1 = ρ2 = ρ. δ2 = (1 − δ1 ) δ . p p p p p p p p p p n n n n p n n n n δ3 = δ2 (1 − δ ). δ1 = δ1 /(δ1 + δ2 ). the four-input nested CES function defined in Equation 3 reduces to the plain four-input CES function defined in Equation 2. If ρ1 = ρ. 7 In this case. i = 1. Hence. 6 Papageorgiou and Saam (2005) proposed a specification that includes the additional term γ1−ρ : h i−ν/ρ ρ/ρ1 1 1 y = γ δγ1−ρ δ1 x−ρ + (1 − δ1 )x−ρ + (1 − δ)x−ρ . the three-input nested CES function defined in Equation 4 reduces to the plain three-input CES function defined in Equation 2. the parameters of the three-input nested CES function defined in Equation 4 (indicated by the superscript n) and the parameters of the plain three-input CES function defined in Equation 2 (indicated p p n n n n n by the superscript p) correspond in the following way: where ρp = ρn 1 = ρ .4 Econometric Estimation of the Constant Elasticity of Substitution Function in R The nested CES function for four inputs as proposed by Sato (1967) nests two lower-level (two-input) CES functions into an upper-level (two-input) CES function: y = γ[δ CES1 +  −νi /ρi −ρi i (1 − δ) CES2 ]−ν/ρ . adding the term γ1−ρ does not increase the flexibility of this function as γ1 can be arbitrar−(ν/ρ) ily normalised to one. the parameters γ. γ1 . not all coefficients of the (entire) nested CES function can be identified in econometric estimations. and x3 capital. where CESi = γi δi x−ρ . nested CES functions are not invariant to the nesting structure and different nesting structures imply different assumptions about the separability between inputs (Sato 1967). 2. Hence. δ1 = δ1 δ . because without these normalisations. and energy. the parameters of the four-input nested CES function defined in Equation 3 (indicated by the superscript n) and the parameters of the plain four-input CES function defined in Equation 2 (indicated p p n n n n n n by the superscript p) correspond in the following way: where ρp = ρn 1 = ρ2 = ρ . 5 In this case. an infinite number of vectors of non-normalised coefficients exists that all result in the same output quantity. indicates the 2i−1 + (1 − δi ) x2i two lower-level CES functions. δ1 = δ1 δ . we (arbitrarily) normalise coefficients γi and νi to one. Alternatively. Kemfert (1998) used this specification for analysing the substitutability between capital. the selection depends on the researcher’s choice and should be based on empirical considerations. and δ = 1 − δ3 . γ = γ . δ2 = (1 − δ1 ) δ . labour. In the following section. G´eraldine Henningsen 5 The formulas for calculating the Hicks-McFadden and Allen-Uzawa elasticities of substitution for the three-input and four-input nested CES functions are given in Appendices B. x3. and y4. because this avoids several problems that usually occur with real-world data. δ = 0.7.6. For the two-input CES function. ρ1 = 0.1). and seventh commands use the function cesCalc. δ = 0.cesData$y3 + 1. rho = 0. and ν = 1. fifth.cesCalc(xNames = c("x1".1). rho = 0.e. δ1 = 0. delta = 0. to calculate the deterministic output variables for the CES functions with two. y3. Estimation of the CES production function Tools for economic analysis with CES function are available in the R package micEconCES (Henningsen and Henningsen 2011b). "x3". Hence. x2. nested CES functions cannot be easily linearised. R> R> + R> + R> R> + + R> R> + + R> set. rho_2 = 0. data = cesData.1. and x4) that each have 200 observations and are generated from random χ2 distributions with 10 degrees of freedom.1 ) ) cesData$y2 <. The third.6. 10) ) cesData$y2 <. respectively) given a CES production function. we will present different approaches to estimate the classical two-input CES function as well as n-input nested CES functions using the R package micEconCES. Anderson and Moroney (1994) showed for n-input nested CES functions that the Hicks-McFadden and Allen-Uzawa elasticities of substitution are only identical if the nested technologies are all of the Cobb-Douglas form. "x2" ). it can be loaded with the command R> library( "micEconCES" ) We demonstrate the usage of this package by estimating a classical two-input CES function as well as nested CES functions with three and four inputs. nested = TRUE ) cesData$y4 <. delta = 0. 10). ρ1 = ρ2 = ρ = 0 in the four-input nested CES function and ρ1 = ρ = 0 in the three-input nested CES function. delta = 0.5 * rnorm(200) cesData$y4 <. coef = c( gamma = 1. respectively. and four inputs (called y2.data. Like in the plain n-input case.3 and C. i. they have to be estimated by applying non-linear optimisation methods. coef = c(gamma = 1. delta_2 = 0.5. delta_1 = 0. 10). "x2". x3 = rchisq(200.cesData$y2 + 2.3.3. coef = c( gamma = 1. data = cesData. For this. ρ = 0.cesData$y4 + 1. The second line creates a data set with four input variables (called x1. nu = 1. we use an artificial data set cesData.cesCalc(xNames = c("x1".3.5.5. If this package is installed.5.5 * rnorm( 200 ) cesData$y3 <. and ν = 1. rho = 0. "x3"). "x2". three. "x4"). 10).frame(x1 = rchisq(200.7. x2 = rchisq(200. rho_1 = 0..seed( 123 ) cesData <.6.6.5.5.7. we use the coefficients γ = 1. we use γ = 1.Arne Henningsen.cesCalc( xNames = c( "x1". data = cesData. and . delta_1 = 0. rho_1 = 0.3. nu = 1.4. which is included in the micEconCES package. nu = 1. ρ = 0. nested = TRUE ) cesData$y3 <. for the three-input nested CES function. 3.6.5 * rnorm(200) The first line sets the “seed” for the random number generator so that these examples can be replicated with exactly the same data set.1. x4 = rchisq(200. ρ1 = 0. As the authors consider the latter approach to be more straight-forward. Therefore. which performs non-linear least-squares estimations. ln y = ln γ + ν δ ln x1 + ν (1 − δ) ln x2 ρν − δ (1 − δ) (ln x1 − ln x2 )2 2 (5) While Kmenta (1967) obtained this formula bylogarithmising the CES function and applying −ρ a second-order Taylor series expansion to ln δx−ρ at the point ρ = 0. The fourth.0858 residual sum-of-squares: 1197 Number of iterations to convergence: 6 Achieved convergence tolerance: 8.6. ρ2 = 0. we show alternative ways of estimating the CES function in the following sections. the Kmenta approximation is called—in contrast to Kmenta (1967. convergence to a local minimum. the most straightforward way to estimate the CES function in R would be to use nls. p. delta = 0. we use γ = 1.6 Econometric Estimation of the Constant Elasticity of Substitution Function in R for the four-input nested CES function.25.delta) * x2^(-rho) )^(-phi / rho + data = cesData. δ = 0. sixth. phi = 1 ) ) R> print( cesNls ) Nonlinear regression model model: y2 ~ gamma * (delta * x1^(-rho) + (1 . and eighth commands generate the stochastic output variables by adding normally distributed random errors to the deterministic output variable.1.5. the 1 + (1 − δ)x2 same formula can be obtained by applying a first-order Taylor series expansion to the entire logarithmised CES function at the point ρ = 0 (Uebe 2000). it does not perform well in many applications with real data.7. 3. 180)—first-order Taylor series expansion in the remainder of this paper. δ1 = 0.5. ρ = 0. or theoretically unreasonable parameter estimates. R> cesNls <. and ν = 1.6222 0. δ2 = 0.5. start = c( gamma = 0. The Kmenta approximation can also be written as a restricted translog function (Hoff 2004): ln y =α0 + α1 ln x1 + α2 ln x2 (6) . rho = 0.5420 1.3. As the CES function is non-linear in its parameters.0239 0.4.delta) * x2^(-rho))^(-phi/rho) data: cesData gamma delta rho phi 1. either because of non-convergence. Kmenta approximation Given that non-linear estimation methods are often troublesome—particularly during the 1960s and 1970s when computing power was very limited—Kmenta (1967) derived an approximation of the classical two-input CES production function that could be estimated by ordinary least-squares techniques.217e-06 While the nls routine works well in this ideal artificial example.nls( y2 ~ gamma * ( delta * x1^(-rho) + (1 .1.5. or whether the non-linear CES function can be approximated by the Kmenta approximation. G´eraldine Henningsen + 7 1 1 β11 (ln x1 )2 + β22 (ln x2 )2 + β12 ln x1 ln x2 . These restrictions can be utilised to test whether the linear Kmenta approximation of the CES function (5) is an acceptable simplification of the translog functional form. a third restriction α1 + α2 = 1 (8) must be enforced. data = cesData. and finally. (c) estimates the restricted translog function (6. (b) carries out a Wald test of the parameter restrictions defined in Equation 7 and eventually also in Equation 8 using the (finite sample) F -statistic. If argument method of this function is set to "Kmenta".9 The parameters of the CES function can be calculated from the parameters of the restricted translog function by: γ = exp(α0 ) ν = α1 + α2 α1 δ= α1 + α2 β12 (α1 + α2 ) ρ= α1 · α2 (9) (10) (11) (12) The Kmenta approximation of the CES function can be estimated by the function cesEst. 7). R> cesKmenta <. + method = "Kmenta". 9 Note that this test does not compare the Cobb-Douglas function with the (non-linear) CES function.Arne Henningsen. The following code estimates a CES function with the dependent variable y2 (specified in argument yName) and the two explanatory variables x1 and x2 (argument xNames). R> summary( cesKmenta ) 8 Note that this test does not check whether the non-linear CES function (1) is an acceptable simplification of the translog functional form. (d) calculates the parameters of the CES function using Equations 9–12 as well as their covariance matrix using the delta method. 2 2 where the two restrictions are β12 = −β11 = −β22 . it (a) estimates an unrestricted translog function (6). . xNames = c( "x1". all taken from the artificial data set cesData that we generated above (argument data) using the Kmenta approximation (argument method) and allowing for variable returns to scale (argument vrs).cesEst( yName = "y2". which is included in the micEconCES package. vrs = TRUE ) Summary results can be obtained by applying the summary method to the returned object. but only with its linear approximation. (7) If constant returns to scale are to be imposed. a simple t-test for the coefficient β12 = −β11 = −β22 can be used to check if the Cobb-Douglas functional form is an acceptable simplification of the Kmenta approximation of the CES function.8 If this is the case. "x2" ). 836951e-02 eq1_b_2_2 -0. codes: 0 '***' 0. data = cesData. codes: 0 '***' 0. we can conclude that the underlying technology is not of the Cobb-Douglas form.2126387 0.2269042 Coefficients: Estimate Std. Alternatively.0365 * nu 1.091 0.1 ' ' 1 Residual standard error: 2.2085723 2.836951e-02 Given that β12 = −β11 = −β22 significantly differs from zero at the 5% level.195467e-09 eq1_b_1_1 -0.86321 0.05 '.910 < 2e-16 *** rho 0.14738 6. Error t value Pr(>|t|) gamma 0.05658941 6.2085723 2.07308 15. we can check .1072116 0.2126387 0.3896839 1.09e-09 *** delta 0.13442 0.09627881 2. As the estimation of the Kmenta approximation is stored in component kmenta of the object returned by cesEst. vrs = TRUE.498807 Multiple R-squared: 0. "x2").09627881 -2.05785460 13.04029 16.' 0.39e-06 *** --Signif.2126387 0.001 '**' 0.6534723 5.1 ' ' 1 The Wald test indicates that the restrictions on the Translog function implied by the Kmenta approximation cannot be rejected at any reasonable significance level.68126 0.8 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".2085723 2. method = "Kmenta") Estimation by the linear Kmenta approximation Test of the null hypothesis that the restrictions of the Translog function required by the Kmenta approximation are true: P-value = 0.001 '**' 0.' 0. xNames = c("x1".01 '*' 0.05 '.5367 0.7728315 0.41286 2.3615885 0.7548401 Elasticity of Substitution: Estimate Std. we can check if the coefficient β12 = −β11 = −β22 significantly differs from zero.16406442 -0.142216e-01 eq1_a_1 0.3581693 0.000000e+00 eq1_a_2 0.095 1.513 6.01 '*' 0. Error t value Pr(>|t|) E_1_2 (all) 0.09627881 -2. To see whether the underlying technology is of the Cobb-Douglas form.836951e-02 eq1_b_1_2 0.523 < 2e-16 *** --Signif. we can obtain summary information on the estimated coefficients of the Kmenta approximation by: R> coef( summary( cesKmenta$kmenta ) ) Estimate Std.1189 4. Error t value Pr(>|t|) eq1_(Intercept) -0.89834 0. which is calculated from the coefficients of the Kmenta approximation. This should—as in our case—deliver similar results (see above). fitted( cesKmenta ). 3. results for γ and ρ are estimated with generally considerable bias and MSE (Thursby and Lovell 1978. Gradient-based optimisation algorithms Levenberg-Marquardt Initially. σ → 1 which increases the convergence region. R> library( "miscTools" ) R> compPlot ( cesData$y2. the Kmenta approximation encounters several problems. Although.2. i.Arne Henningsen. However. G´eraldine Henningsen 9 if the parameter ρ of the CES function. Finally. significantly differs from zero. if the underlying CES function is of the Cobb-Douglas form. ● 25 ● ● 15 10 5 fitted values 20 ●● ● ● ● ● ● ● 0 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ●● ● ●● ●●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ●● ● ● ●● ● ● ● ●● ● ●● ● ● ● ● ●●● ●● ● ● ● ●● ● ●●● ● ●●● ●● ● ● ●● ● ● ● ●●● ● ●●● ● ●● ● ●● ● ●●● ● ●●● ● ● ●● ●● ●● ●● ●● ● ●●● ● ●● ● ● ●● ●● ●● ● ● ● ●● ● ● ● ● ● ● ●●● ● ● ●● ● ● ●● ● ● ●● ●●● ● ● ●● ● ● 0 5 10 15 20 25 actual values Figure 1: Fitted values from the Kmenta approximation against y. we plot the fitted values against the actual dependent variable (y) to check whether the parameter estimates are reasonable. More reliable results can only be obtained if ρ → 0. the Levenberg-Marquardt algorithm (Marquardt 1963) was most commonly used for estimating the parameters of the CES function by non-linear least-squares. and thus. Second. This iterative . + ylab = "fitted values" ) Figure 1 shows that the parameters produce reasonable fitted values.e. This is a major drawback of the Kmenta approximation as its purpose is to facilitate the estimation of functions with non-unitary σ. xlab = "actual values". Thursby 1980).. First. it is a truncated Taylor series and the remainder term must be seen as an omitted variable. the Kmenta approximation only converges to the underlying CES function in a region of convergence that is dependent on the true parameters of the CES function (Thursby and Lovell 1978). Maddala and Kadane (1967) and Thursby and Lovell (1978) find estimates for ν and δ with small bias and mean squared error (MSE). 10 Econometric Estimation of the Constant Elasticity of Substitution Function in R algorithm can be seen as a maximum neighbourhood method which performs an optimum interpolation between a first-order Taylor series approximation (Gauss-Newton method) and a steepest-descend method (gradient method) (Marquardt 1963). By combining these two nonlinear optimisation algorithms, the developers want to increase the convergence probability by reducing the weaknesses of each of the two methods. In a Monte Carlo study by Thursby (1980), the Levenberg-Marquardt algorithm outperforms the other methods and gives the best estimates of the CES parameters. However, the Levenberg-Marquardt algorithm performs as poorly as the other methods in estimating the elasticity of substitution (σ), which means that the estimated σ tends to be biased towards infinity, unity, or zero. Although the Levenberg-Marquardt algorithm does not live up to modern standards, we include it for reasons of completeness, as it is has proven to be a standard method for estimating CES functions. To estimate a CES function by non-linear least-squares using the Levenberg-Marquardt algorithm, one can call the cesEst function with argument method set to "LM" or without this argument, as the Levenberg-Marquardt algorithm is the default estimation method used by cesEst. The user can modify a few details of this algorithm (e.g., different criteria for convergence) by adding argument control as described in the documentation of the R function nls.lm.control. Argument start can be used to specify a vector of starting values, where the order must be γ, δ1 , δ2 , δ, ρ1 , ρ2 , ρ, and ν (of course, all coefficients that are not in the model must be omitted). If no starting values are provided, they are determined automatically (see Section 4.7). For demonstrative purposes, we estimate all three (i.e., two-input, three-input nested, and four-input nested) CES functions with the Levenberg-Marquardt algorithm, but in order to reduce space, we will proceed with examples of the classical two-input CES function only. R> cesLm2 <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE ) R> summary( cesLm2 ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2", xNames = c("x1", "x2"), data = cesData, vrs = TRUE) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 4 iterations Message: Relative error in the sum of squares is at most `ftol'. Coefficients: Estimate Std. Error t value Pr(>|t|) gamma 1.02385 0.11562 8.855 <2e-16 *** delta 0.62220 0.02845 21.873 <2e-16 *** rho 0.54192 0.29090 1.863 0.0625 . nu 1.08582 0.04569 23.765 <2e-16 *** Arne Henningsen, G´eraldine Henningsen --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 2.446577 Multiple R-squared: 0.7649817 Elasticity of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (all) 0.6485 0.1224 5.3 1.16e-07 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 R> cesLm3 <- cesEst( "y3", c( "x1", "x2", "x3" ), cesData, vrs = TRUE ) R> summary( cesLm3 ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y3", xNames = c("x1", "x2", "x3"), data = cesData, vrs = TRUE) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 5 iterations Message: Relative error in the sum of squares is at most `ftol'. Coefficients: Estimate Std. Error t value Pr(>|t|) gamma 0.94558 0.08279 11.421 < 2e-16 *** delta_1 0.65861 0.02439 27.000 < 2e-16 *** delta 0.60715 0.01456 41.691 < 2e-16 *** rho_1 0.18799 0.26503 0.709 0.478132 rho 0.53071 0.15079 3.519 0.000432 *** nu 1.12636 0.03683 30.582 < 2e-16 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 1.409937 Multiple R-squared: 0.8531556 Elasticities of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (HM) 0.84176 0.18779 4.483 7.38e-06 *** E_(1,2)_3 (AU) 0.65329 0.06436 10.151 < 2e-16 *** --- 11 12 Econometric Estimation of the Constant Elasticity of Substitution Function in R Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution R> cesLm4 <- cesEst( "y4", c( "x1", "x2", "x3", "x4" ), cesData, vrs = TRUE ) R> summary( cesLm4 ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y4", xNames = c("x1", "x2", "x3", "x4"), data = cesData, vrs = TRUE) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 8 iterations Message: Relative error in the sum of squares is at most `ftol'. Coefficients: Estimate Std. Error t value Pr(>|t|) gamma 1.22760 0.12515 9.809 < 2e-16 *** delta_1 0.78093 0.03442 22.691 < 2e-16 *** delta_2 0.60090 0.02530 23.753 < 2e-16 *** delta 0.51154 0.02086 24.518 < 2e-16 *** rho_1 0.37788 0.46295 0.816 0.414361 rho_2 0.33380 0.22616 1.476 0.139967 rho 0.91065 0.25115 3.626 0.000288 *** nu 1.01872 0.04355 23.390 < 2e-16 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 1.424439 Multiple R-squared: 0.7890757 Elasticities of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (HM) 0.7258 0.2438 2.976 0.00292 ** E_3_4 (HM) 0.7497 0.1271 5.898 3.69e-09 *** E_(1,2)_(3,4) (AU) 0.5234 0.0688 7.608 2.79e-14 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution Finally, we plot the fitted values against the actual values y to see whether the estimated parameters are reasonable. The results are presented in Figure 2. Given that the CES function has only few parameters and the objective function is not approximately quadratic and shows a tendency to “flat surfaces” around the minimum. fitted( + ylab = "fitted values". main R> compPlot ( cesData$y3. fitted( + ylab = "fitted values". convergence within the given number of iterations is less likely the more the level surface of the objective function differs from spherical (Kelley 1999). A detailed discussion of iterative optimisation algorithms is available. i. As a proper application of these estimation methods requires the user to be familiar with the main characteristics of the different algorithms.e. The “Conjugated Gradient” method works best for objective functions that are approximately quadratic and it is sensitive to objective functions that are not well-behaved and have a non-positive semi-definite Hessian. This iterative method is mostly applied to optimisation problems with many parameters and a large and possibly sparse Hessian matrix..g. The user can modify this algorithm (e.. xlab = "actual values". because this algorithm does not require that the Hessian matrix is stored or inverted. G´eraldine Henningsen R> compPlot ( cesData$y2. main R> compPlot ( cesData$y4. the “Conjugated Gradient” method is probably less suitable than other algorithms for estimating a CES function. it is not the aim of this paper to thoroughly discuss these algorithms.g. = "four-input nested CES" ) 20 two−input CES 0 5 10 15 20 actual values 13 25 5 10 15 actual values 20 ● ●● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ●● ● ●● ●● ●● ●● ● ● ●● ● ● ● ●●● ● ●● ● ●●● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ●● ●● ●●● ● ● ● ● ●● ● ●●● ● ●● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ●● ●● ● ●● ● ●●● ●● ●● ● ●●● ● ●● ● ●● ● ● ● ●● ● ●● ● ● ● ● ●● ● ●● ● ● ● ●●●● ● ● ●● ● ● ● ●● ● ● ●●● ● ●● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● 5 10 15 ● 20 actual values Figure 2: Fitted values from the LM algorithm against actual values. = "three-input nested CES" ) cesLm4 ). Function cesEst can use some of them to estimate a CES function by non-linear least-squares. However. Conjugate Gradients One of the gradient-based optimisation algorithms that can be used by cesEst is the “Conjugate Gradients” method based on Fletcher and Reeves (1964). in Kelley (1999) or Mishra (2007). replacing the update formula of Fletcher and Reeves (1964) by the formula of Polak and Ribi`ere (1969) or the one based on Sorenson (1969) and . xlab = "actual values". fitted( + ylab = "fitted values". e. xlab = "actual values".. Several further gradient-based optimisation algorithms that are suitable for non-linear leastsquares estimations are implemented in R. Setting argument method of cesEst to "CG" selects the “Conjugate Gradients” method for estimating the CES function by non-linear least-squares. main three−input nested CES ● four−input nested CES ● 25 20 ● ● 15 fitted values 5 0 ●● ● ● ● ● ● ●● ● ●● ● ● ● ●● ● ●●●●● ● ● ●● ● ●● ● ● ● ●● ●● ● ●● ● ● ● ●● ● ●●● ● ● ●● ● ● ● ● ● ● ●● ● ●● ● ●● ● ● ● ●● ●● ● ●●● ●●●● ● ● ● ●●●● ●●●● ● ● ● ● ● ● ● ● ●● ●● ●● ● ● ●● ● ●●●● ●●● ●● ● ●● ● ●●●● ● ● ● ● ● ●● ● ●● ●●● ●● ● ● ● ● ● ● ●● ●●● ● ●● ●●● ● ● ● ● ● ●●● ● ●● ●● ● ●● ● ● ● ● ●● ● ● ● ● 10 15 10 ● fitted values ● ● 5 20 15 10 5 ● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ●●●● ● ● ● ● ● ● ●● ● ● ● ●● ●● ● ● ● ● ●● ● ● ●●●● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ●● ●●●● ● ●● ●●● ●● ● ●● ● ●● ● ●●● ● ●● ● ●● ● ● ●● ●● ● ● ●● ●●● ● ● ● ● ●● ● ● ● ●● ● ● ● ●● ●●● ● ●●● ● ●● ●● ● ● ●● ●● ● ●● ● ● ●● ● ● ● ●●●●● ● ●● ● ● ● ● ● ● fitted values cesLm2 ). we will briefly discuss some practical issues of the algorithms that will be used to estimate the CES function.Arne Henningsen. = "two-input CES" ) cesLm3 ). 1289 5.01 '*' 0.952 <2e-16 *** rho 0.214 1.' 0. "x2" ).62081 0.01 '*' 0. method = "CG") Estimation by non-linear least-squares using the 'CG' optimizer assuming an additive error term Convergence NOT achieved after 401 function and 101 gradient calls Coefficients: Estimate Std.11684 8. "x2" ). vrs = TRUE.0874 .02828 21.03581 0. codes: 0 '***' 0.08043 0. convergence tolerance level) by adding a further argument control as described in the “Details” section of the documentation of the R function optim.84e-07 *** --Signif.cesEst( "y2".709 0. c( "x1". + control = list( maxit = 1000. R> cesCg <. xNames = c("x1".660 <2e-16 *** --Signif. method = "CG" ) R> summary( cesCg ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". nu 1. cesData.28531 1. Error t value Pr(>|t|) E_1_2 (all) 0.cesEst( "y2". Error t value Pr(>|t|) gamma 1.04566 23.. "x2"). reltol = 1e-5 ) ) R> summary( cesCg2 ) Estimated CES function with variable returns to scale Call: .446998 Multiple R-squared: 0.1 ' ' 1 Residual standard error: 2.001 '**' 0. method = "CG".865 <2e-16 *** delta 0.48769 0. R> cesCg2 <.' 0. c( "x1".1 ' ' 1 Although the estimated parameters are similar to the estimates from the Levenberg-Marquardt algorithm.001 '**' 0.6722 0.14 Econometric Estimation of the Constant Elasticity of Substitution Function in R Beale (1972)) or some other details (e. vrs = TRUE.7649009 Elasticity of Substitution: Estimate Std. the “Conjugated Gradient” algorithm reports that it did not converge.05 '.g.05 '. vrs = TRUE. codes: 0 '***' 0. data = cesData. Increasing the maximum number of iterations and the tolerance level leads to convergence. cesData. This confirms a slow convergence of the “Conjugate Gradients” algorithm for estimating the CES function. this algorithm does a line search at each iteration to determine the optimal length of the shift vector (step size) as described in Dennis and Schnabel (1983) and Schnabel. c( "x1". xNames = c("x1".6485 0.05 '.0625 .1 ' ' 1 Newton Another algorithm supported by cesEst that is probably more suitable for estimating a CES function is an improved Newton-type method.001 '**' 0.g.54192 0. in contrast to the original Newton method.cesEst( "y2". Setting argument method of cesEst to "Newton" selects this improved Newton-type method. codes: 0 '***' 0.Arne Henningsen. nu 1. Koontz. However.02385 0.. The following commands estimate a CES function by non-linear least-squares using this algorithm and print summary results.62220 0.11562 8.765 <2e-16 *** --Signif.01 '*' 0. R> cesNewton <. data = cesData.01 '*' 0. cesData. As with the original Newton method.08582 0.' 0. vrs = TRUE. vrs = TRUE. . "x2" ). and Weiss (1985).16e-07 *** --Signif.001 '**' 0. method = "CG". reltol = 1e-05)) Estimation by non-linear least-squares using the 'CG' optimizer assuming an additive error term Convergence achieved after 1375 function and 343 gradient calls Coefficients: Estimate Std. data = cesData.446577 Multiple R-squared: 0. + method = "Newton" ) R> summary( cesNewton ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".29090 1.7649817 Elasticity of Substitution: Estimate Std. codes: 0 '***' 0.863 0. The user can modify a few details of this algorithm (e. Error t value Pr(>|t|) E_1_2 (all) 0. G´eraldine Henningsen 15 cesEst(yName = "y2". "x2"). this algorithm uses first and second derivatives of the objective function to determine the direction of the shift vector and searches for a stationary point until the gradients are (almost) zero. "x2"). Error t value Pr(>|t|) gamma 1.1224 5.04569 23.05 '. the maximum step length) by adding further arguments that are described in the documentation of the R function nlm.1 ' ' 1 Residual standard error: 2.3 1.873 <2e-16 *** rho 0. control = list(maxit = 1000.' 0.855 <2e-16 *** delta 0.02845 21. xNames = c("x1". method = "BFGS" ) R> summary( cesBfgs ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". The problem with BFGS can be that although the current parameters are close to the minimum.05 '. R> cesBfgs <. vrs = TRUE. Error t value Pr(>|t|) E_1_2 (all) 0.1 ' ' 1 Residual standard error: 2. c( "x1". "x2" ).01 '*' 0. Goldfarb (1970). Error t value Pr(>|t|) gamma 1.54192 0.11562 8.02845 21.01 '*' 0.' 0.0625 .446577 Multiple R-squared: 0.02385 0. xNames = c("x1". cesData. codes: 0 '***' 0.855 <2e-16 *** delta 0. in practice.16 Econometric Estimation of the Constant Elasticity of Substitution Function in R vrs = TRUE. .3 1. Fletcher (1970).' 0. However. codes: 0 '***' 0.62220 0. data = cesData.1 ' ' 1 Broyden-Fletcher-Goldfarb-Shanno Furthermore.04569 23. a quasi-Newton method developed independently by Broyden (1970). method = "Newton") Estimation by non-linear least-squares using the 'Newton' optimizer assuming an additive error term Convergence achieved after 25 iterations Coefficients: Estimate Std.873 <2e-16 *** rho 0. This so-called BFGS algorithm also uses first and second derivatives and searches for a stationary point of the objective function where the gradients are (almost) zero. "x2"). the convergence tolerance level) by adding the further argument control as described in the “Details” section of the documentation of the R function optim.cesEst( "y2".29091 1.001 '**' 0. The user can modify a few details of the BFGS algorithm (e. and Shanno (1970) can be used by cesEst..1224 5.6485 0.001 '**' 0. In contrast to the original Newton method. If argument method of cesEst is "BFGS".863 0. the algorithm does not converge because the Hessian matrix at the current parameters is not close to the Hessian matrix at the minimum.05 '. the BFGS algorithm is used for the estimation. nu 1. the BFGS method does a line search for the best step size and uses a special procedure to approximate and update the Hessian matrix in every iteration.g.08582 0. BFGS proves robust convergence (often superlinear) (Kelley 1999).16e-07 *** --Signif.7649817 Elasticity of Substitution: Estimate Std.765 <2e-16 *** --Signif. Arne Henningsen, G´eraldine Henningsen 17 vrs = TRUE, method = "BFGS") Estimation by non-linear least-squares using the 'BFGS' optimizer assuming an additive error term Convergence achieved after 72 function and 15 gradient calls Coefficients: Estimate Std. Error t value Pr(>|t|) gamma 1.02385 0.11562 8.855 <2e-16 *** delta 0.62220 0.02845 21.873 <2e-16 *** rho 0.54192 0.29091 1.863 0.0625 . nu 1.08582 0.04569 23.765 <2e-16 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 2.446577 Multiple R-squared: 0.7649817 Elasticity of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (all) 0.6485 0.1224 5.3 1.16e-07 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 3.3. Global optimisation algorithms Nelder-Mead While the gradient-based (local) optimisation algorithms described above are designed to find local minima, global optimisation algorithms, which are also known as direct search methods, are designed to find the global minimum. These algorithms are more tolerant to objective functions which are not well-behaved, although they usually converge more slowly than the gradient-based methods. However, increasing computing power has made these algorithms suitable for day-to-day use. One of these global optimisation routines is the so-called Nelder-Mead algorithm (Nelder and Mead 1965), which is a downhill simplex algorithm. In every iteration, n + 1 vertices are defined in the n-dimensional parameter space. The algorithm converges by successively replacing the “worst” point by a new vertex in the multi-dimensional parameter space. The Nelder-Mead algorithm has the advantage of a simple and robust algorithm, and is especially suitable for residual problems with non-differentiable objective functions. However, the heuristic nature of the algorithm causes slow convergence, especially close to the minimum, and can lead to convergence to non-stationary points. As the CES function is easily twice differentiable, the advantage of the Nelder-Mead algorithm simply becomes its robustness. As a consequence of the heuristic optimisation technique, the results should be handled with care. However, the Nelder-Mead algorithm is much faster than the other global optimisation algorithms described below. Function cesEst estimates a CES function with the Nelder- 18 Econometric Estimation of the Constant Elasticity of Substitution Function in R Mead algorithm if argument method is set to "NM". The user can tweak this algorithm (e.g., the reflection factor, contraction factor, or expansion factor) or change some other details (e.g., convergence tolerance level) by adding a further argument control as described in the “Details” section of the documentation of the R function optim. R> cesNm <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE, + method = "NM" ) R> summary( cesNm ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2", xNames = c("x1", "x2"), data = cesData, vrs = TRUE, method = "NM") Estimation by non-linear least-squares using the 'Nelder-Mead' optimizer assuming an additive error term Convergence achieved after 265 iterations Coefficients: Estimate Std. Error t value Pr(>|t|) gamma 1.02399 0.11564 8.855 <2e-16 *** delta 0.62224 0.02845 21.872 <2e-16 *** rho 0.54212 0.29095 1.863 0.0624 . nu 1.08576 0.04569 23.763 <2e-16 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 2.446577 Multiple R-squared: 0.7649817 Elasticity of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (all) 0.6485 0.1223 5.3 1.16e-07 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Simulated Annealing The Simulated Annealing algorithm was initially proposed by Kirkpatrick, Gelatt, and Vecchi (1983) and Cerny (1985) and is a modification of the Metropolis-Hastings algorithm. Every iteration chooses a random solution close to the current solution, while the probability of the choice is driven by a global parameter T which decreases as the algorithm moves on. Unlike other iterative optimisation algorithms, Simulated Annealing also allows T to increase which makes it possible to leave local minima. Therefore, Simulated Annealing is a robust global optimiser and it can be applied to a large search space, where it provides fast and reliable solutions. Setting argument method to "SANN" selects a variant of the “Simulated Annealing” Arne Henningsen, G´eraldine Henningsen 19 algorithm given in B´elisle (1992). The user can modify some details of the “Simulated Annealing” algorithm (e.g., the starting temperature T or the number of function evaluations at each temperature) by adding a further argument control as described in the “Details” section of the documentation of the R function optim. The only criterion for stopping this iterative process is the number of iterations and it does not indicate whether the algorithm converged or not. R> cesSann <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE, method = "SANN" ) R> summary( cesSann ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2", xNames = c("x1", "x2"), data = cesData, vrs = TRUE, method = "SANN") Estimation by non-linear least-squares using the 'SANN' optimizer assuming an additive error term Coefficients: Estimate Std. Error t value Pr(>|t|) gamma 1.01104 0.11477 8.809 <2e-16 *** delta 0.63414 0.02954 21.469 <2e-16 *** rho 0.71252 0.31440 2.266 0.0234 * nu 1.09179 0.04590 23.784 <2e-16 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 2.449907 Multiple R-squared: 0.7643416 Elasticity of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (all) 0.5839 0.1072 5.447 5.12e-08 *** --Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 As the Simulated Annealing algorithm makes use of random numbers, the solution generally depends on the initial “state” of R’s random number generator. To ensure replicability, cesEst “seeds” the random number generator before it starts the “Simulated Annealing” algorithm with the value of argument random.seed, which defaults to 123. Hence, the estimation of the same model using this algorithm always returns the same estimates as long as argument random.seed is not altered (at least using the same software and hardware components). R> cesSann2 <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE, method = "SANN" ) R> all.equal( cesSann, cesSann2 ) [1] TRUE seed and to check whether the estimates differ considerably.6341353 0.2429077 Now the estimates are much more similar—only the estimates of ρ still differ somewhat. vrs = TRUE. c( "x1".seed = 1234. stdDev = sd( m ) ) cesSann cesSann3 cesSann4 cesSann5 stdDev gamma 1.0232862 0.0868674 0. method = "SANN".cesEst( "y2". "x2" ).0917877 1.0337633 1. method = "SANN".cesEst( "y2". random. "x2" ). method = "SANN".5632106 0.0797618 1.0936468 0. cesData. "x2" ). cesSann5 = coef( cesSann5 ) ) rbind( m. cesSann3 = coef( cesSann3 ).0344585 1. method = "SANN". method = "SANN".seed = 12345. c( "x1". Price. the Differential Evolution algorithm (Storn and Price 1997. random. control = list( maxit = 100000 ) ) cesSannB3 <. control = list( maxit = 100000 ) ) m <. method = "SANN".5623429 0. vrs = TRUE. random. R> + R> + R> + R> + R> cesSann3 <. vrs = TRUE.6238302 0. R> + R> + R> + R> + R> + R> cesSannB <. cesSannB5 = coef( cesSannB5 ) ) rbind( m.cesEst( "y2".5259158 0. stdDev = sd( m ) ) cesSannB cesSannB3 cesSannB4 cesSannB5 stdDev gamma 1.cesEst( "y2". vrs = TRUE. c( "x1".0820884 1.2414348 If the estimates differ remarkably.6204874 0. "x2" ). cesSannB3 = coef( cesSannB3 ). cesData. cesData. the user can try increasing the number of iterations.rbind( cesSannB = coef( cesSannB ).cesEst( "y2".0101985 0. random.2414348 delta 0. "x2" ).0386186 1.seed = 1234 ) cesSann4 <. Now we will re-estimate the model a few times with 100.seed = 123456. cesData.0868685 1. vrs = TRUE.4716324 0.cesEst( "y2".000 by default. vrs = TRUE.rbind( cesSann = coef( cesSann ).0220481 1.2429077 rho 0. Storn.cesEst( "y2".6381545 0. random. method = "SANN".2429077 delta 0.0813484 1.2414348 nu 1. c( "x1".5745630 0.seed = 12345 ) cesSann5 <.000 iterations each.6206622 0. cesSannB4 = coef( cesSannB4 ). cesData.7125172 0.0827909 1. and Lampinen 2006) belongs to the class of . control = list( maxit = 100000 ) ) cesSannB5 <. random.6225271 0. c( "x1".0110416 1. cesData.0208154 1. vrs = TRUE.20 Econometric Estimation of the Constant Elasticity of Substitution Function in R It is recommended to start this algorithm with different values of argument random. control = list( maxit = 100000 ) ) cesSannB4 <. "x2" ).5284805 0.6149628 0.5759035 0. cesSann4 = coef( cesSann4 ). c( "x1".2429077 nu 1. cesData.seed = 123456 ) m <.2414348 rho 0. Differential Evolution In contrary to the other algorithms described in this paper.6260139 0. which is 10. "x2" ). c( "x1". 001 '**' 0.7649799 Elasticity of Substitution: Estimate Std. control = list(trace = FALSE)) Estimation by non-linear least-squares using the 'DE' optimizer assuming an additive error term Coefficients: Estimate Std. "x2"). For some problems.0233 0. c( "x1".01 '*' 0. if argument method is set to "DE".847 0. δ2 . Of course.293 1. vrs = TRUE.2901 1.1230 5. cesData. codes: 0 '***' 0. By default.cesEst( "y2". Function cesEst uses a Differential Evolution optimiser for the non-linear least-squares estimation of the CES function. nu 1.0859 0. the algorithm has proven to be effective and accurate on a large range of optimisation problems. Error t value Pr(>|t|) gamma 1. the number of population members) by adding a further argument control as described in the documentation of the R function DEoptim.01 '*' 0.g.control. In contrary to the other optimisation algorithms. the differential evolution strategy or selection method) or change some details (e.1156 8.761 <2e-16 *** --Signif.Arne Henningsen. Error t value Pr(>|t|) E_1_2 (all) 0.446587 Multiple R-squared: 0.853 <2e-16 *** delta 0. + control = list( trace = FALSE ) ) R> summary( cesDe ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". method = "DE".' 0. Quasi-Newton. vrs = TRUE.' 0.. the Differential Evolution method requires finite boundaries for the parameters. −1 ≤ ρ1 . R> cesDe <.873 <2e-16 *** rho 0. The user can modify the Differential Evolution algorithm (e. However. the user can specify own lower and upper bounds by setting arguments lower and upper to numeric vectors. data = cesData. it has proven to be more accurate and more efficient than Simulated Annealing.05 '. δ ≤ 1. Mishra 2007).0457 23.6212 0.001 '**' 0.0648 ..2e-07 *** --Signif.0284 21. G´eraldine Henningsen 21 evolution strategy optimisers and convergence cannot be proven analytically.1 ' ' 1 . and 0 ≤ ν ≤ 10.1 ' ' 1 Residual standard error: 2. "x2" ). ρ2 . codes: 0 '***' 0. method = "DE". ρ ≤ 10.05 '. or other genetic algorithms (Storn and Price 1997. Ali and T¨ orn 2004. xNames = c("x1". inter alia the CES function (Mishra 2007). the bounds are 0 ≤ γ ≤ 1010 .5358 0. 0 ≤ δ1 .6511 0.g. c( "x1". vrs = TRUE. itermax = 1000 ) ) cesDeB3 <.cesEst( "y2". method = "DE".2483916 delta 0. cesDe2 ) [1] TRUE When using this algorithm. it is also recommended to check whether different values of argument random.0488844 1. method = "DE". method = "DE".equal( cesDe.6212958 0. control = list( trace = FALSE. cesDeB4 = coef( cesDeB4 ). cesData.6212100 0. "x2" ).rbind( cesDe = coef( cesDe ). vrs = TRUE. itermax = 1000 ) ) rbind( cesDeB = coef( cesDeB ). method = "DE".0913477 1. cesData. random.seed = 1234.5358122 0.cesEst( "y2". cesDe4 = coef( cesDe4 ).g.seed = 123456. Now we will re-estimate this model a few times with 1.seed result in considerably different estimates. control = list( trace = FALSE. c( "x1".seed before it starts this algorithm to ensure replicability. cesData.2483916 nu 1. cesDe3 = coef( cesDe3 ). "x2" ). the Differential Evolution algorithm makes use of random numbers and cesEst “seeds” the random number generator with the value of argument random.0233453 1. itermax = 1000 ) ) cesDeB4 <. random. R> cesDe2 <. if the user wants to obtain more precise estimates than those derived from the default settings of this algorithm. cesData.0096730 1. "x2" ).cesEst( "y2". c( "x1". vrs = TRUE. c( "x1". method = "DE". vrs = TRUE. cesData.seed = 123456. "x2" ).seed = 12345. control = list( trace = FALSE.22 Econometric Estimation of the Constant Elasticity of Substitution Function in R Like the “Simulated Annealing” algorithm. control = list( trace = FALSE ) ) cesDe4 <. R> + R> + R> + R> + R> cesDe3 <. random. c( "x1".000 population generations each. e. cesDeB3 = coef( cesDeB3 ).5019560 0.0902370 0.5457970 0.5736423 0. vrs = TRUE. However. cesData. cesDe5 = coef( cesDe5 ) ) rbind( m. "x2" ).. which is 200 by default. cesData. random. c( "x1". the user can try to increase the maximum number of population generations (iterations) using control parameter itermax. method = "DE". c( "x1".2483916 rho 0. + control = list( trace = FALSE ) ) R> all. vrs = TRUE. which generally indicates that all estimates are close to the optimum (minimum of the sum of squared residuals).2483916 These estimates are rather similar. cesData. "x2" ).0858811 1. stdDev = sd( m ) ) cesDe cesDe3 cesDe4 cesDe5 stdDev gamma 1.seed = 1234. vrs = TRUE. if the estimates differ considerably. random.seed = 12345. c( "x1".cesEst( "y2". method = "DE". control = list( trace = FALSE ) ) cesDe5 <. "x2" ).6216175 0.0760523 1. itermax = 1000 ) ) cesDeB5 <.0135793 0.cesEst( "y2".cesEst( "y2".cesEst( "y2". R> + R> + R> + R> + R> + cesDeB <.6217174 0. control = list( trace = FALSE ) ) m <. vrs = TRUE. control = list( trace = FALSE. method = "DE".cesEst( "y2". random. cesDeB5 = coef( cesDeB5 ) ) . "x2" ). g. Moreover. the L-BFGS-B algorithm is especially suitable for high dimensional optimisation problems. Lu.5419226 0. R> cesLbfgsb <.. which is 10 times the number of parameters by default and should not have a smaller value than this default value (see documentation of the R function DEoptim. data = cesData. and 0 ≤ ν < ∞.023853 cesDeB3 1. The user can further increase the likelihood of finding the global optimum by increasing the number of population members using control parameter NP. G´eraldine Henningsen gamma cesDeB 1. ρ < ∞. + method = "L-BFGS-B" ) R> summary( cesLbfgsb ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". This can be done by the Differential Evolution (DE) algorithm as described above.023853 cesDeB5 1. Nocedal. Constraint parameters As a meaningful analysis based on a CES function requires that the function is consistent with economic theory.6221982 rho 0.4.08582 The estimates are now virtually identical. function cesEst can use two gradient-based optimisation algorithms for estimating a CES function under parameter constraints. but instead relies on the past (often less than 10) values of the parameters and the gradient vector. L-BFGS-B One of these methods is a modification of the BFGS algorithm suggested by Byrd.5419226 23 nu 1. δ2 . 0 ≤ δ1 . 3. the number of BFGS updates) by adding a further argument control as described in the “Details” section of the documentation of the R function optim. method = "L-BFGS-B") . ρ2 .08582 1.08582 1.5419226 0.cesEst( "y2".023853 delta 0. vrs = TRUE. the so-called L-BFGS-B algorithm allows for box-constraints on the parameters and also does not explicitly form or store the Hessian matrix. The user can specify own lower and upper bounds by setting arguments lower and upper to numeric vectors. "x2" ). and Zhu (1995). vrs = TRUE. xNames = c("x1". Therefore. In contrast to the ordinary BFGS algorithm summarised above. the restrictions on the parameters are 0 ≤ γ < ∞. By default.08582 1.control).6221982 0.Arne Henningsen. "x2"). Function cesEst estimates a CES function with parameter constraints using the L-BFGS-B algorithm if argument method is set to "L-BFGS-B". −1 ≤ ρ1 .6221982 0. it is often desirable to constrain the parameter space to the economically meaningful region. δ ≤ 1.6221982 0. c( "x1". The user can tweak some details of this algorithm (e. cesData. but—of course—it can also be used for optimisation problems with only a few parameters (as the CES function).5419226 0.023853 cesDeB4 1. R> cesPort <. xNames = c("x1".16e-07 *** --Signif. vrs = TRUE. e. method = "PORT") Estimation by non-linear least-squares using the 'PORT' optimizer assuming an additive error term Convergence achieved after 27 iterations .0625 .6485 0. The user can modify a few details of the Newton algorithm (e. The lower and upper bounds of the parameters have the same default values as for the L-BFGS-B method. cesData.001 '**' 0..873 <2e-16 *** rho 0. Error t value Pr(>|t|) E_1_2 (all) 0. codes: 0 '***' 0.cesEst( "y2". c( "x1".001 '**' 0. + method = "PORT" ) R> summary( cesPort ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".3 1.05 '.' 0. nu 1..' 0. the minimum step size) by adding a further argument control as described in section “Control parameters” of the documentation of R function nlminb.g. data = cesData.01 '*' 0.765 <2e-16 *** --Signif. "x2" ).04569 23.1224 5.1 ' ' 1 Residual standard error: 2.02845 21.g.62220 0.855 <2e-16 *** delta 0.02385 0. Setting argument method to "PORT" selects the optimisation algorithm of the PORT routines.54192 0.01 '*' 0.29090 1.11562 8.7649817 Elasticity of Substitution: Estimate Std.05 '. "x2").1 ' ' 1 PORT routines The so-called PORT routines (Gay 1990) include a quasi-Newton optimisation algorithm that allows for box constraints on the parameters and has several advantages over traditional Newton routines.446577 Multiple R-squared: 0.863 0.24 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimation by non-linear least-squares using the 'L-BFGS-B' optimizer assuming an additive error term Convergence achieved after 35 function and 35 gradient calls Message: CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH Coefficients: Estimate Std. vrs = TRUE. codes: 0 '***' 0.08582 0. trust regions and reverse communication. Error t value Pr(>|t|) gamma 1. There is a lively ongoing discussion about the proper way to estimate CES functions with factor augmenting technological progress (e.446577 Multiple R-squared: 0.873 <2e-16 *** rho 0.16e-07 *** --Signif.765 <2e-16 *** --Signif. (14) where λ1 and λ2 measure input-specific technological change.7649817 Elasticity of Substitution: Estimate Std. 1 2 (13) where γ is (as before) an efficiency parameter.863 0.. Error t value Pr(>|t|) gamma 1.05 '.01 '*' 0. Although many approaches seem to be promising.1 ' ' 1 3.001 '**' 0. G´eraldine Henningsen 25 Message: relative convergence (4) Coefficients: Estimate Std.Arne Henningsen. ˆ factor augmenting (non-neutral) technological change y=γ  x1 e λ1 t −ρ  λ2 t + x2 e −ρ − νρ . Error t value Pr(>|t|) E_1_2 (all) 0.11562 8.1 ' ' 1 Residual standard error: 2. McAdam. we decided to wait until a state-of-the-art approach emerges before including factor augmenting technological change into micEconCES.855 <2e-16 *** delta 0.08582 0. λ is the rate of technological change.62220 0.05 '.29091 1.02385 0.' 0.1224 5.01 '*' 0.g. nu 1. and Willman 2010). . Klump.3 1. Technological change Estimating the CES function with time series data usually requires an extension of the CES functional form in order to account for technological change (progress).02845 21. and t is a time variable.0625 . accounting for technological change in CES functions basically boils down to two approaches: ˆ Hicks-neutral technological change − ν  ρ −ρ + (1 − δ)x y = γ eλ t δx−ρ . Le´ on-Ledesma. Luoma and Luoto 2010. micEconCES only includes Hicks-neutral technological change at the moment. So far.001 '**' 0. Therefore.5.6485 0.54192 0. and Willman 2007. codes: 0 '***' 0. McAdam.04569 23.' 0. codes: 0 '***' 0. Error t value Pr(>|t|) E_1_2 (all) 0. tName = "t".653 < 2e-16 *** --Signif. tName = "t". vrs = TRUE. vrs = TRUE.397 < 2e-16 *** lambda 0. (iii) add noise to obtain the stochastic “observed” output variable.5].76e-14 *** nu 1. method = "LM") Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 5 iterations Message: Relative error in the sum of squares is at most `ftol'. data = cesData. coef = c( gamma = 1.1.5 * rnorm( 200 ) cesTech <.26 Econometric Estimation of the Constant Elasticity of Substitution Function in R When calculating the output variable of the CES function using cesCalc or when estimating the parameters of the CES function using cesEst. Error t value Pr(>|t|) gamma 0.01 '*' 0. codes: 0 '***' 0.10 The following commands (i) generate an (artificial) time variable t. the name of time variable (t) can be specified by argument tName.01 ) ) cesData$yt <. Coefficients: Estimate Std. "x2").5268406 0.02986 21. and (v) print the summary results. where the corresponding coefficient (λ) is labelled lambda.0073754 80.0696036 7.94 <2e-16 *** 10 If the Differential Evolution (DE) algorithm is used. c( "x1".cesCalc( xNames = c( "x1".0100173 0. 0. xNames = c("x1".0362520 27. The user can use arguments lower and upper to modify these bounds.506 < 2e-16 *** delta 0.5. R> R> + R> R> + R> cesData$t <.65495 0.5. (iv) estimate the model.0130233 84. lambda = 0.6. "x2" ).964 < 2e-16 *** rho 0.9899389 Elasticity of Substitution: Estimate Std. nu = 1.9932082 0. tName = "t".' 0.001 '**' 0.0001007 99. (ii) calculate the (deterministic) output variable of a CES function with 1% Hicks-neutral technological progress in each time period. rho = 0.5971397 0. "x2" ). .569 3. method = "LM" ) summary( cesTech ) Estimated CES function with variable returns to scale Call: cesEst(yName = "yt". data = cesData.c( 1:200 ) cesData$yt <. delta = 0.cesEst( "yt". as this algorithm requires finite lower and upper bounds of all parameters.1 ' ' 1 Residual standard error: 2. data = cesData.05 '.550677 Multiple R-squared: 0.cesData$yt + 2.1024537 0. parameter λ is by default restricted to the interval [−0. where a grid of values for the substitution parameters (ρ1 . two-. 3. ρ2 . if argument rho1. "x2" ). However. particularly in case of n-input nested CES functions.1 and the default optimisation method.1)) .Arne Henningsen. to = 1. codes: 27 0 '***' 0. the grid search over the substitution parameters can be either one-. ρ2 .5 with an increment of 0. ρ2 . The values of these vectors are used to specify the grid points for the substitution parameters ρ1 . the Levenberg-Marquardt algorithm. by = 0. data = cesData. G´eraldine Henningsen --Signif.. to = 1. The following command estimates the two-input CES function by a one-dimensional grid search for ρ.5. we have demonstrated how Hicks-neutral technological change can be modelled in the two-input CES function.cesEst( "y2".3. The estimates with the values of the substitution parameters that result in the smallest sum of squared residuals are chosen as the final estimation result. ρ2 .01 '*' 0. ρ) that are found in the grid search are not known. or three-dimensional. or rho is set to a numeric vector. by = 0. Grid search for ρ The objective function for estimating CES functions by non-linear least-squares often shows a tendency to “flat surfaces” around the minimum—in particular for a wide range of values for the substitution parameters (ρ1 . rho2. by multiplying the CES function with eλ t . ρ). vrs = TRUE. and ρ. rho = seq(from = -0. many optimisation algorithms have problems in finding the minimum of the objective function. "x2"). ρ) is pre-selected and the remaining parameters are estimated by non-linear least-squares holding the substitution parameters fixed at each combination of the pre-defined values.1 ) ) R> summary( cesGrid ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".05 '. In case of more than two inputs—regardless of whether the CES function is “plain” or “nested”—Hicks-neutral technological change can be accounted for in the same way.001 '**' 0. The function cesEst carries out this grid search procedure. R> cesGrid <. where the pre-selected values for ρ are the values from −0. Functions cesCalc and cesEst can account for Hicks-neutral technological change in all CES specifications that are generally supported by these functions. this problem can be alleviated by performing a grid search. data = cesData.6. i. but with a different estimation method). c( "x1". is used to estimate the remaining parameters. respectively. Since the “best” values of the substitution parameters (ρ1 . but estimated (as the other parameters. the covariance matrix of the estimated parameters also includes the substitution parameters and is calculated as if the substitution parameters were estimated as usual.1 ' ' 1 Above. xNames = c("x1". As the (nested) CES functions defined above can have up to three substitution parameters.e. + rho = seq( from = -0.' 0. The estimation of the other parameters during the grid search can be performed by all the non-linear optimisation algorithms described above. vrs = TRUE.3.5.3 to 1. Therefore. 3 to 1. R> ces4Grid <.6. codes: 0 '***' 0.11506 8.2 ) ) R> summary( ces4Grid ) Estimated CES function with constant returns to scale 11 This plot method can only be applied if the model was estimated by grid search.50000 0.01 '*' 0. the Levenberg-Marquardt algorithm. + rho1 = seq( from = -0.08746 0. by = 0.' 0.9.6 to 0. + rho2 = seq( from = -0.1269 5.48e-07 *** --Signif.001 '**' 0. by = 0.3 for ρ1 .2 for ρ2 .' 0.4 to 0.022 <2e-16 *** rho 0. codes: 0 '***' 0.2 for ρ. As a further example.9 with an increment of 0.3.6667 0.01851 0. −0.7 with an increment of 0.1 ' ' 1 A graphical illustration of the relationship between the pre-selected values of the substitution parameters and the corresponding sums of the squared residuals can be obtained by applying the plot method. to = 0.11 R> plot( cesGrid ) This graphical illustration is shown in Figure 3. Error t value Pr(>|t|) E_1_2 (all) 0. ρ2 .8. + rho = seq( from = -0. "x2". to = 0. Error t value Pr(>|t|) gamma 1.05 '.2 ). nu 1. + data = cesData. to = 1.7649542 Elasticity of Substitution: Estimate Std. "x3".3 ).4.0798 .62072 0.752 0.1 ' ' 1 Residual standard error: 2.05 '. and ρ. . Again. we apply the default optimisation method.794 <2e-16 *** --Signif.04570 23. Preselected values are −0. and −0. "x4" ).28543 1.28 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimation by non-linear least-squares using the 'LM' optimizer and a one-dimensional grid search for coefficient 'rho' assuming an additive error term Convergence achieved after 4 iterations Message: Relative error in the sum of squares is at most `ftol'.cesEst( yName = "y4".8 with an increment of 0. by = 0. Coefficients: Estimate Std.7. method = "LM". we estimate a four-input nested CES function by a three-dimensional grid search for ρ1 .44672 Multiple R-squared: 0.001 '**' 0.02819 22.255 1.01 '*' 0. xNames = c( "x1".852 <2e-16 *** delta 0. 642 0. by = 0. "x4").03237 24. method = "LM".5 rho Figure 3: Sum of squared residuals depending on ρ.Arne Henningsen. Call: cesEst(yName = "y4".0 1.01 '*' 0.01632 78.0 ● ● ● 0.001 '**' 0.6.1 ' ' 1 Residual standard error: 1.7.45684 0. to = 0.511382 rho_2 0.' 0. rho1 = seq(from = -0.51498 0. G´eraldine Henningsen 1260 ● 1240 rss ● ● ● ● 1220 ● ● ● ● ● ● 1200 ● ● ● ● ● 0.088727 . Coefficients: Estimate Std. rho = seq(from = -0.2)) Estimation by non-linear least-squares using the 'LM' optimizer and a three-dimensional grid search for coefficients 'rho_1'. rho2 = seq(from = -0.24714 3. to = 0.23500 1.000271 *** --Signif.05 '.8.657 0. "x3".40000 0. by = 0.7887368 Elasticities of Substitution: 29 . xNames = c("x1". 'rho' assuming an additive error term Convergence achieved after 4 iterations Message: Relative error in the sum of squares is at most `ftol'.78337 0.28086 0.111 < 2e-16 *** delta 0.9. rho 0. codes: 0 '***' 0.02608 23. to = 1.90000 0.02119 24. data = cesData. 'rho_2'.2).3).482 < 2e-16 *** delta_1 0.4.30000 0.60272 0.197 < 2e-16 *** delta_2 0.302 < 2e-16 *** rho_1 0. Error t value Pr(>|t|) gamma 1.425583 Multiple R-squared: 0.5 1.3.702 0. by = 0. "x2". start = coef(cesGrid)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 4 iterations Message: Relative error in the sum of squares is at most `ftol'. R> cesStartGrid <. xNames = c("x1".873 <2e-16 *** rho 0. Error t value Pr(>|t|) E_1_2 (HM) 0. vrs = TRUE.56e-09 *** E_(1.11562 8.001 '**' 0.00443 ** E_3_4 (HM) 0. Starting values can be set by argument start. ρ2 .52632 0.76923 0.11990 5. data = cesData. plotting the sums of the squared residuals against the corresponding (pre-selected) values of ρ1 .05 '.30 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimate Std.1 ' ' 1 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution Naturally.01 '*' 0. codes: 0 '***' 0. where each of the three substitution parameters (ρ1 .' 0.04569 23. the plot method generates three three-dimensional graphs. ρ2 . or as starting values for a new non-linear least-squares estimation.2)_(3. codes: 0 '***' 0. As it is (currently) not possible to account for more than three dimensions in a graph.49e-14 *** --Signif. for a three-dimensional grid search. c( "x1".688 1. Coefficients: Estimate Std. Error t value Pr(>|t|) gamma 1. An example is shown in Figure 4.02385 0.1 ' ' 1 .0625 .' 0.cesEst( "y2". the values of the substitution parameters that are between the grid points can also be estimated.02845 21.4) (AU) 0. "x2" ). vrs = TRUE. In the latter case.863 0.958 2.01 '*' 0. R> plot( ces4Grid ) The results of the grid search algorithm can be used either directly.62220 0. "x2").08582 0.29090 1. would require a four-dimensional graph.06846 7.71429 0.855 <2e-16 *** delta 0. and ρ.846 0. + start = coef( cesGrid ) ) R> summary( cesStartGrid ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". data = cesData.765 <2e-16 *** --Signif.001 '**' 0.54192 0. nu 1.05 '.27032 2. ρ) in turn is kept fixed at its optimal value. 0 −0.0 0.4 0.6 0.5 rh rh 2 o_ 1 o_ 0.4 0.5 o_ 0.4 −420 −440 −460 −480 1.0 0.5 0.2 0.2 −0.8 0.5 rh o rh 2 1. G´eraldine Henningsen negative sums of squared residuals −410 −420 −430 −440 0.Arne Henningsen.0 1.0 o_ rh 0.6 0.0 1 rh o 0.2 −0.5 0.5 −420 −440 −460 −480 −500 0. and ρ.8 0.5 −0.0 −0.0 −0.2 0.4 Figure 4: Sum of squared residuals depending on ρ1 . 31 . ρ2 .5 1. ' 0.02573 23.05 '. data = cesData. codes: 0 '***' 0.41742 0.132696 rho 0.425085 Multiple R-squared: 0.51224 0.741 0. xNames = c("x1".6485 0.1 ' ' 1 .1224 5.1 ' ' 1 R> ces4StartGrid <.28212 0.01 '*' 0.05 '.74369 0. Error t value Pr(>|t|) gamma 1.08e-14 *** --Signif.001 '**' 0.05 '. "x2".01 '*' 0.06677 7.049 < 2e-16 *** rho_1 0. data = cesData.25067 3.730 1.2)_(3. codes: 0 '***' 0.46e-09 *** E_(1.7649817 Elasticity of Substitution: Estimate Std.7888844 Elasticities of Substitution: Estimate Std. + start = coef( ces4Grid ) ) R> summary( ces4StartGrid ) Estimated CES function with constant returns to scale Call: cesEst(yName = "y4".12678 5.23333 3.001 '**' 0. start = coef(ces4Grid)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 6 iterations Message: Relative error in the sum of squares is at most `ftol'. "x3".03303 23.3 1.000184 *** --Signif.cesEst( "y4".1 ' ' 1 Residual standard error: 1. Error t value Pr(>|t|) E_1_2 (all) 0.4) (AU) 0.46878 0.34464 0.024 0. Error t value Pr(>|t|) E_1_2 (HM) 0. "x2".504 0.' 0.02130 24.374 < 2e-16 *** delta 0. "x4" ).446577 Multiple R-squared: 0.16e-07 *** --Signif.60130 0.51610 0.' 0.93762 0. codes: 0 '***' 0.78554 0. "x3".01 '*' 0.890 0.866 4. Coefficients: Estimate Std. c( "x1".22922 1.70551 0.001 '**' 0.783 < 2e-16 *** delta_2 0.32 Econometric Estimation of the Constant Elasticity of Substitution Function in R Residual standard error: 2.0025 ** E_3_4 (HM) 0. "x4").373239 rho_2 0.01634 78.442 < 2e-16 *** delta_1 0. which implements the actual grid search procedure. The restricted translog model (6. ρ2 .2. for which the user has specified vectors of pre-selected values of the substitution parameters (rho1. Estimations with the PORT routines use function nlminb from the stats package (R Development Core Team 2011). which uses the Fortran library PORT (Gay 1990). and/or rho). rho2. or functions from other packages. are estimated given the particular combination of substitution parameters. which uses the Fortran library UNCMIN (Schnabel et al. and L-BFGS-B algorithms use the function optim from the stats package (R Development Core Team 2011). ρ). Garbow. 4. Gil.hypothesis of the car package (Fox 2009). 4.Arne Henningsen. This is done by setting the arguments of cesEst. cesEst calls the internal function cesEstGridRho. As cesEst is called with arguments . 1985) with a line search as the step selection strategy. Nelder-Mead (NM). The test of the parameter restrictions defined in Equation 7 is performed by the function linear. However. Kmenta approximation The estimation of the Kmenta approximation (5) is implemented in the internal function cesEstKmenta. Grid search If the user calls cesEst with at least one of the arguments rho1. and Hillstrom 1980). or rho being a vector. rho2. on which the grid search should be performed. Estimations with the Levenberg-Marquardt algorithm are performed by function nls. G´eraldine Henningsen 33 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution 4. to the particular elements of these vectors. This function uses translogEst from the micEcon package (Henningsen 2010) to estimate the unrestricted translog function (6). Simulated Annealing (SANN). on which the grid search should be performed. BFGS. the parameters on which no grid search should be performed (and which are not fixed by the user). the actual estimations are carried out by internal helper functions.lm package (Elzhov and Mullen 2009). the function cesEstGridRho consecutively calls cesEst. Estimations with the “Conjugate Gradients” (CG). Estimations with the Newton-type algorithm are performed by function nlm from the stats package (R Development Core Team 2011). 7) is estimated with function systemfit from the systemfit package (Henningsen and Hamann 2007). Non-linear least-squares estimation The non-linear least-squares estimations are carried out by various optimisers from other packages. Ardia. For each combination (grid point) of the pre-selected values of the substitution parameters (ρ1 .3.lm of the minpack. In each of these internal calls of cesEst. Estimations with the Differential Evolution (DE) algorithm are performed by function DEoptim from the DEoptim package (Mullen. Implementation The function cesEst is the primary user interface of the micEconCES package (Henningsen and Henningsen 2011b).1. Windover. 4. which is an R interface to the Fortran package MINPACK (Mor´e. and Cline 2011). 4.12 In case of nested CES functions with three or four inputs. if the absolute value of ρ is smaller than. nu = 1. ρ2 .g. R> rhoData <. when first very small (in absolute terms) exponents (e. where this function is applied to generate the output variables of an artificial data set for demonstrating the usage of cesEst. or 4 are imprecise if at least one of the substitution parameters (ρ1 . ρ2 . and/or ρ approaching zero are presented in Appendices B. ρ/ρ1 . and/or ρ approaching zero. the CES functions are not defined. function cesCalc calls the internal functions cesCalcN3 or cesCalcN4 for the actual calculations. the cesCalc function is called by the internal function cesRss that calculates and returns the sum of squared residuals. Calculating output and the sum of squared residuals Function cesCalc can be used to calculate the output quantity of the CES function given input quantities and coefficients. ρ2 .4. This first-order Taylor series approximation is the Kmenta approximation defined in Equation 5.cesCalc( xNames = c( "x1".. which is 5 · 10−6 by default. and rho being all single scalars (or NULL if the corresponding substitution parameter is neither included in the grid search nor fixed at a pre-defined value). The limits of the three-input (4) and fourinput (3) nested CES functions for ρ1 . ρ) is close to 0. −ρ2 . cesCalc uses a first-order Taylor series approximation at the point ρ = 0 for calculating the output.. for the traditional two-input CES function (1). + yCES = NA. 3. A few examples of using cesCalc are shown in the beginning of Section 3.34 Econometric Estimation of the Constant Elasticity of Substitution Function in R rho1. which is the objective function in the non-linear least-squares estimations. In this case. We noticed that the calculations with cesCalc using Equations 1. ρ) is equal to zero. respectively. + coef = cesCoef. "x2" ). or equal to. 5e-9 ). rhoApprox = Inf ) + rhoData$yCES[ i ] <. argument rhoApprox. ρ2 .]. Furthermore. or −ν/ρ) are applied. Therefore. data = cesData[1.data. −ρ1 . which has been created by following commands. 13 The derivation of the first-order Taylor series approximations based on Uebe (2000) is presented in Appendix A. data = cesData[1. e.]. + coef = cesCoef. ρ/ρ2 . 2e-6. "x2" ). and/or ρ) fixed at the values specified in the corresponding arguments.g. rho2. it estimates the CES function by non-linear least-squares with the corresponding substitution parameters (ρ1 . or −ρ) and then very large (in absolute terms) exponents (e. in CES functions with very small (in absolute terms) substitution parameters. rounding errors can become large in specific circumstances.c( gamma = 1. cesCalc returns the limit of the output quantity for ρ1 . . yLin = NA ) R> # calculate dependent variables R> for( i in 1:nrow( rhoData ) ) { + # vector of coefficients + cesCoef <. If at least one substitution parameter (ρ1 .. However. rho = rhoData$rho[ i ].1 and C.1. This is caused by rounding errors that are unavoidable on digital computers.1 ) + rhoData$yLin[ i ] <.frame( rho = seq( -2e-6. rhoApprox = 0 ) + } 12 The limit of the traditional two-input CES function (1) for ρ approaching zero is equal to the exponential function of the Kmenta approximation (5) calculated with ρ = 0.1.cesCalc( xNames = c( "x1".g. ρ2 . delta = 0. but are usually negligible.6.13 We illustrate the rounding errors in the left panel of Figure 5. 15 When estimating a CES function with function cesEst.rhoData$yLin .20 y (normalised. Argument rhoApprox of cesEst must be a numeric vector.14 In case of nested CES functions. the user can use argument rhoApprox to modify the threshold to calculate the dependent variable by the Taylor series approximation or linear interpolation. This might not only affect the fitted values and residuals returned by cesEst (if at least one of the estimated substitu14 The commands for creating the right panel of Figure 5 are not shown here. cesCalc uses linear interpolation in order to avoid rounding errors. Depending on the number of substitution parameters (ρ1 . CES.0e−08 1. or three-dimensional linear interpolation is applied.10 0. because they are the same as the commands for the left panel of this figure except for the command for creating the vector of ρ values. red = lines( rhoData$rho. a one-.5e−07 # normalise output variables rhoData$yCES <. In this case. xlab = "rho". 15 We use a different approach for the nested CES functions than for the traditional two-input CES function. rhoData$yLin ) −1. cesCalc calculates the output quantities for two different values of each substitution parameter that is close to zero.rhoData$yLin[ plot( rhoData$rho. if at least one substitution parameter (ρ1 . red = CES.e. type = "l". black = linearised) −5.00 0. ylab = "y (normalised. i.5e−07 y (normalised. red = CES. ρ2 . two-. ρ2 . black = linearised)" ) −0. ρ) is close to zero.0e−08 5. . see the following section) is very laborious and has no advantages over using linear interpolation.Arne Henningsen. where the first element is passed to cesCalc (partly through cesRss). because calculating Taylor series approximations of nested CES functions (and their derivatives with respect to coefficients.. because it is nearly linear for a wide range of ρ values. black = linearised) R> R> R> R> + R> 35 −1 0 1 2 3 rho Figure 5: Calculated output for different values of ρ. G´eraldine Henningsen −2e−06 −1e−06 0e+00 rho 1e−06 2e−06 −0.rhoData$yCES .rhoData$yLin[ rhoData$yLin <.05 rhoData$rho == 0 ] rhoData$rho == 0 ] col = "red". These interpolations are performed by the internal functions cesInterN3 and cesInterN4. rhoData$yCES. ρ) that are close to zero. zero (using the formula for the limit for this parameter approaching zero) and the positive or negative value of argument rhoApprox (using the same sign as this parameter). The right panel of Figure 5 shows that the relationship between ρ and the output y can be rather precisely approximated by a linear function. but now ∂y/∂γ is calculated with Equation 20 instead of Equation 15.2. For the traditional two-input CES function. respectively. The partial derivatives of the nested CES functions with three and four inputs with respect to the coefficients are presented in Appendices B. If at least one 16 The derivations of these formulas are presented in Appendix A. we calculate these derivatives by first-order Taylor series approximations at the point ρ = 0 using the limits for ρ approaching zero:16  ρ  ∂y ν (1−δ) = eλ t xν1 δ x2 exp − ν δ (1 − δ) (ln x1 − ln x2 )2 (20) ∂γ 2 ∂y ν(1−δ) = γ eλ t ν (ln x1 − ln x2 ) xν1 δ x2 (21) ∂δ    ρ 1 − 1 − 2 δ + ν δ(1 − δ) (ln x1 − ln x2 ) (ln x1 − ln x2 ) 2  ∂y 1 λt ν δ ν(1−δ) = γ e ν δ (1 − δ) x1 x2 − (ln x1 − ln x2 )2 (22) ∂ρ 2  ρ ρ + (1 − 2 δ) (ln x1 − ln x2 )3 + ν δ(1 − δ) (ln x1 − ln x2 )4 3 4  ∂y ν(1−δ) = γ eλ t xν1 δ x2 δ ln x1 + (1 − δ) ln x2 (23) ∂ν  ρ − δ(1 − δ) (ln x1 − ln x2 )2 [1 + ν (δ ln x1 + (1 − δ) ln x2 )] 2 If ρ is zero or close to zero. Therefore.36 Econometric Estimation of the Constant Elasticity of Substitution Function in R tion parameters is close to zero). if ρ is zero or close to zero.5. .4).2 and C. Partial derivatives with respect to coefficients The internal function cesDerivCoef returns the partial derivatives of the CES function with respect to all coefficients at all provided data points.2. but also the estimation results (if one of the substitution parameters was close to zero in one of the steps of the iterative optimisation routines). the partial derivatives with respect to λ are calculated also with Equation 16. see Section 4. these partial derivatives are:  − ν ∂y ρ −ρ = eλ t δx−ρ + (1 − δ)x (15) 1 2 ∂γ ∂y ∂y =γt (16) ∂λ ∂γ  − ν −1 ν  −ρ ∂y ρ −ρ −ρ (17) = − γ eλ t x1 − x−ρ δx + (1 − δ)x 2 1 2 ∂δ ρ   − ν ∂y ν ρ −ρ −ρ −ρ (18) = γ eλ t 2 ln δx−ρ + (1 − δ)x δx + (1 − δ)x 1 2 1 2 ∂ρ ρ − ν −1  ν ρ −ρ −ρ −ρ + γ eλ t δ ln(x1 )x−ρ + (1 − δ) ln(x )x δx + (1 − δ)x 2 2 2 1 1 ρ − ν   ∂y 1 ρ −ρ −ρ −ρ + (1 − δ)x δx + (1 − δ)x (19) = − γ eλ t ln δx−ρ 2 1 2 1 ∂ν ρ These derivatives are not defined for ρ = 0 and are imprecise if ρ is close to zero (similar to the output variable of the CES function. 4. The limits of the partial derivatives of the nested CES functions for one or more substitution parameters approaching zero are also presented in Appendices B. i. G´eraldine Henningsen 37 substitution parameter (ρ1 . Newton-type. where the second to the fifth element of 17 The partial derivatives of the nested CES function with three inputs are calculated by: cesDerivCoefN3Gamma (∂y/∂γ). cesDerivCoefN4Rho (∂y/∂ρ). cesDerivCoefN4Lambda (∂y/∂λ). Finally. the user can use argument rhoApprox to specify the thresholds below which the derivatives with respect to the coefficients are approximated by Taylor series approximations or linear interpolations. When estimating a CES function with function cesEst. cesDerivCoefN3Rho1 (∂y/∂ρ1 ).Arne Henningsen. and ρ are “close” to zero. cesDerivCoefN4B2 (returning B2 = δ2 x−ρ + (1 − δ2 )x−ρ ). cesDerivCoefN3Rho (∂y/∂ρ). ∂y/∂ρ2 . cesDerivCoefN4Delta (∂y/∂δ). δ. the second element defines the threshold for ∂y/∂δ1 . BFGS. ν} is a coefficient of the CES function. L-BFGS-B. θ ∈ {γ. ρ2 . function cesDerivCoef is used to obtain the gradient matrix for calculating the asymptotic covariance matrix of the non-linear least-squares estimator (see Section 4. calculated by: cesDerivCoefN4Gamma (∂y/∂γ). and cesDerivCoefN4B (returning B = δB1 + (1 − δ)B2 2 ). Furthermore. Function cesDerivCoef is used to provide argument jac (which should be set to a function that returns the Jacobian of the residuals) to function nls. and PORT. ..17 The one. δ1 . ui is the residual of the ith observation. and ∂y/∂δ (default value 5 · 10−6 ). cesDerivCoefN4Delta2 (∂y/∂δ2 ). cesDerivCoefN4Delta1 (∂y/∂δ1 ). which calculates the partial derivatives of the sum of squared residuals (RSS) with respect to all coefficients by:  N  X ∂RSS ∂yi = −2 ui . cesDerivCoef uses the same approach as cesCalc to avoid large rounding errors. using the limit for these parameters approaching zero and potentially a one-. function cesDerivCoef is used by the internal function cesRssDeriv.or more-dimensional linear interpolations are (again) performed by the internal functions cesInterN3 and cesInterN4. cesDerivCoefN3Lambda (∂y/∂λ). δ2 . ρ) is exactly zero or close to zero. cesDerivCoefN4L2 3 4 ρ/ρ1 ρ/ρ (returning L2 = δ2 ln x3 + (1 − δ2 ) ln x4 )..e. and ∂yi /∂θ is the partial derivative of the CES function with respect to coefficient θ evaluated at the ith observation as returned by function cesDerivCoef. (24) ∂θ ∂θ i=1 where N is the number of observations. Argument rhoApprox of cesEst must be a numeric vector of five elements. ρ1 .6). ∂y/∂δ2 . Function cesRssDeriv is used to provide analytical gradients for the other gradient-based optimisation algorithms. cesDerivCoefN3Delta1 (∂y/∂δ1 ). cesDerivCoefN4Rho2 (∂y/∂ρ2 ). The calculation of the partial derivatives and their limits are performed by several internal functions with names starting with cesDerivCoefN3 and cesDerivCoefN4. the third element defines the threshold for ∂y/∂ρ1 . ρ2 . or three-dimensional linear interpolation. ρ2 .2. Conjugate Gradients. and cesDerivCoefN4Nu (∂y/∂ν) with helper functions cesDerivCoefN4B1 (returning B1 = δ1 x1−ρ1 + (1 − δ1 )x2−ρ1 ). and cesDerivCoefN3B (returning ρ/ρ B = δB1 1 + (1 − δ)x−ρ The partial derivatives of the nested CES function with four inputs are 3 ). and the fourth element defines the threshold for ∂y/∂ν (default value 5 · 10−6 ). cesDerivCoefN4Rho1 (∂y/∂ρ1 ). two-. and 1 cesDerivCoefN3Nu (∂y/∂ν) with helper functions cesDerivCoefN3B1 (returning B1 = δ1 x−ρ + (1 − δ1 )x2−ρ1 ).lm so that the LevenbergMarquardt algorithm can use analytical derivatives of each residual with respect to all coefficients. This argument must be a numeric vector with exactly four elements: the first element defines the threshold for ∂y/∂γ (default value 5 · 10−6 ). 1 cesDerivCoefN3L1 (returning L1 = δ1 ln x1 + (1 − δ1 ) ln x2 ).2 and C. ρ.e. cesDerivCoefN3Delta (∂y/∂δ). Function cesDerivCoef has an argument rhoApprox that can be used to specify the threshold levels for defining when ρ1 . cesDerivCoefN4L1 (returning 2 2 L1 = δ1 ln x1 + (1 − δ1 ) ln x2 ). and ∂y/∂ρ (default value 10−3 ). i. e. and σ ˆ2 denotes the estimated variance of the residuals. and ρ are estimated (not fixed as.5. and all other coefficients equal to the above-described starting values. function cesEstStart determines the starting values automatically. If the CES function includes a time variable.7.5% per time period. 4. ρ1 .g. Covariance matrix The asymptotic covariance matrix of the non-linear least-squares estimator obtained by the various iterative optimisation methods is calculated by: σ ˆ 2  ∂y ∂θ > ∂y ∂θ !−1 (25) (Greene 2008..2 for the nested CES functions).g.. PN yi γ = PN i=1 . If the coefficients ρ1 ..2 and C. the internal function cesEstStart checks if the number of starting values is correct and if the individual starting values are in the appropriate range of the corresponding parameters. ρ2 . Starting values If the user calls cesEst with argument start set to a vector of starting values. The starting values of δ1 . or ρ) equal to their pre-selected values.e.8. i. we calculate the estimated variance of the residuals by σ ˆ2 = N 1 X 2 ui . and δ are set to 0. which corresponds to constant returns to scale. . Finally. 292). but also the estimation results obtained by a gradient-based optimisation algorithm (if one of the substitution parameters is close to zero in one of the steps of the iterative optimisation routines). As Equation 25 is only valid asymptotically.6. which corresponds to a technological progress of 1. k is the number of coefficients. without correcting for degrees of freedom.38 Econometric Estimation of the Constant Elasticity of Substitution Function in R this vector are passed to cesDerivCoef. the starting value of λ is set to 0. p. ρ2 . If no starting values are provided by the user. their starting values are set to 0.25. e. where ∂y/∂θ denotes the N ×k gradient matrix (defined in Equations 15 to 19 for the traditional two-input CES function and in Appendices B. the starting value of γ is set to a value so that the mean of the residuals is equal to zero. which generally corresponds to an elasticity of substitution of 0. The starting value of ν is set to 1. The choice of the threshold might not only affect the covariance matrix of the estimates (if at least one of the estimated substitution parameters is close to zero). 4. δ2 . all “fixed” coefficients (e.015. N (26) i=1 i. N is the number of observations. i=1 CESi (27) where CESi indicates the (nested) CES function evaluated at the input quantities of the ith observation and with coefficient γ equal to one. during grid search). or ρ. The print method prints the call. and marginal significance levels (P -values) of the estimated parameters and elasticities of substitution. convergence).4. their standard errors. and ρ constant in each of the three plots. and marginal significance levels. G´eraldine Henningsen 39 4. which are the names of the coefficients of the CES function. vcov. are not included in the vector of estimated coefficients. The print method for objects of class "summary. if these coefficients are fixed (e. cesDerivCoef. If the model is estimated by a one-dimensional grid search for ρ1 . the plot method draws a perspective plot by using the command persp of the graphics package (R Development Core Team 2011) and the command colorRampPalette of the grDevices package (R Development Core Team 2011) (for generating a colour gradient). the fitted values.g. ρ2 .g. their covariance matrix. The coef.3 and 3. .8. The internal function cesCheckRhoApprox checks argument rhoApprox of functions cesEst. the covariance matrix of the estimated coefficients and elasticities of substitution. Objects returned by function cesEst are of class "cesEst" and the micEconCES package includes several methods for objects of this class. the R2 value as well as the standard errors. t-values. their standard errors. the internal function cesCoefBounds creates and returns the default bounds depending on the optimisation algorithm as described in Sections 3. 4. algorithm. L-BFGS-B. The object returned by the summary method is of class "summary. t-values. and cesRssDeriv. ρ2 . The internal function cesCoefNames returns a vector of character strings. and ρ to the vector of coefficients. the estimated coefficients. the plot method plots three perspective plots by holding one of the three coefficients ρ1 . and the estimated elasticities of substitution. In case of a three-dimensional grid search. The plot method can only be applied if the model is estimated by grid search (see Section 3. fitted. In case of a two-dimensional grid search. this method plots a simple scatter plot of the pre-selected values against the corresponding sums of the squared residuals by using the commands plot.. ρ2 . respectively. and the residuals. Methods The micEconCES package makes use of the “S3” class system of the R language introduced in Chambers and Hastie (1992).cesEst" returns a matrix with four columns containing the estimated coefficients.9. and residuals methods extract and return the estimated coefficients. If the user selects the optimisation algorithm Differential Evolution. Other internal functions The internal function cesCoefAddRho is used to add the values of ρ1 . respectively. cesRss. The summary method calculates the estimated standard error of the residuals (ˆ σ ).default and points of the graphics package (R Development Core Team 2011). and marginal significance levels as well as some information on the estimation procedure (e. or PORT.cesEst" prints the call.cesEst". but does not specify lower or upper bounds of the coefficients. the estimated coefficients and elasticities of substitution.6). t-values.Arne Henningsen. The coef method for objects of class "summary.. during grid search for ρ) and hence. Sun. The data used in Sun et al. This model is given in Masanjala and Papageorgiou (2004. the distribution parameter α should lie between zero and one. and Vinod 2008. Please note that the relationship between the coefficient ρ and the elasticity of substitution is slightly different from this relationship in the original two-input CES function: in this case. Hence. yi is the steady-state aggregate output (Gross Domestic Product. (2011) and Masanjala and Papageorgiou (2004) are actually from Mankiw. ni is the population growth rate. the coefficient ρ has the opposite sign. and σ is the elasticity of substitution. Henderson.e. A is the efficiency of labour (with growth rate g). Anderson. 5. GDP) per unit of labour. subscript i indicates the country. eq. and popgrowth (average growth rate of working-age population 1960 to 1985 18 In contrast to Equation 3 in Masanjala and Papageorgiou (2004). the variables gdp85 (per capita GDP in 1985). second. the elasticity of substitution is σ = 1/(1 − ρ). This data set is available in the R package AER (Kleiber and Zeileis 2009) and includes. x2 = (ni + g + δ)/sik . We can re-define the parameters and variables in the above function in order to obtain the standard two-input CES function (1). Third. i. McCullough. and A being the efficiency of labour) into the (observable) steady-state output per unit of labour (yi = Yi /Li ) and the (unobservable) efficiency component (A) to obtain the equation that is actually estimated. McGeary. and Kumbhakar (2011) Our first replication study aims to replicate some of the estimations published in Sun.. to confirm the reliability of the micEconCES package by comparing cesEst’s results with results published in the literature. Romer. and Kumbhakar (2011). Schwab. so that the estimated parameter δ must have—in contrast to the standard CES function—a value larger than or equal to one. McCullough. amongst others. Furthermore. which should be afforded higher priority in scientific economic research (e.e. McCullough 2009). δ = 1−α . the estimated ρ must lie between minus infinity and (plus) one. and Claerbout 2000. Li being aggregate labour. sik is the ratio of investment to aggregate output. . to encourage reproducible research. δ is the depreciation rate of capital goods. the quantity of labour multiplied by the efficiency of labour) as inputs.g. α is the distribution parameter of capital.40 Econometric Estimation of the Constant Elasticity of Substitution Function in R 5. where: 1 γ = A. This article is itself a replication study of Masanjala and Papageorgiou (2004). and ν = 1. x1 = 1. Buckheit and Donoho 1995.. we decomposed the (unobservable) steady-state output per unit of augmented labour in country i (yi∗ = Yi /(A Li ) with Yi being aggregate output. and Weil (1992) and Durlauf and Johnson (1995) and comprise crosssectional data on the country level. This section has three objectives: first. Replication studies In this section. ρ = (σ − 1)/σ. invest (average ratio of investment (including government investment) to GDP from 1960 to 1985 in percent).1. Karrenbach. Greene. g is the technology growth rate. we aim at replicating estimations of CES functions published in two journal articles. and Harrison 2008. We will re-estimate a Solow growth model based on an aggregate CES production function with capital and “augmented labour” (i. to highlight and discuss the problems that occur when using real-world data by basing the econometric estimations in this section on realworld data. which is in contrast to the previous sections. Henderson. 3):18 σ "   σ−1 #− σ−1 σ 1 α sik yi = A − (28) 1 − α 1 − α ni + g + δ Here. 776 0.197 0.2354 --Signif. The following commands load this data set and remove data from oil producing countries in order to obtain the same subset of 98 countries that is used by Masanjala and Papageorgiou (2004) and Sun et al.187 0.subset( GrowthDJ. oil == "no" ) Now we will calculate the two “input” variables for the Solow growth model as described above and in Masanjala and Papageorgiou (2004) (following the assumption of Mankiw et al. (1992) that g + δ is equal to 5%): R> GrowthDJ$x1 <. data = GrowthDJ ) R> summary( cesNls. we will calculate the distribution parameter of capital (α) and the elasticity of substitution (σ) manually.05 '.175 0. "\n" ) . "x2").001 '**' 0. "x2"). Coefficients: Estimate Std. G´eraldine Henningsen 41 in percent). rho -0.Arne Henningsen.cesEst( "gdp85".239 1.1 R> GrowthDJ$x2 <.1 ) / coef( cesNls )[ "delta" ]. ( coef( cesNls )[ "delta" ] . where we suppress the presentation of the elasticity of substitution (σ). (2011). R> cat( "alpha =".2401 delta 3.977 2.' 0. xNames = c("x1".993 1.01 '*' 0. package = "AER" ) R> GrowthDJ <. codes: 0 '***' 0.( GrowthDJ$popgrowth + 5 ) / GrowthDJ$invest The following commands estimate the Solow growth model based on the CES function by nonlinear least-squares (NLS) and print the summary results. ela = FALSE) Estimated CES function with constant returns to scale Call: cesEst(yName = "gdp85". R> data( "GrowthDJ".141 549.1 ' ' 1 Residual standard error: 3313.166 -1. because it has to be calculated with a non-standard formula in this model.0757 . Error t value Pr(>|t|) gamma 646. data = GrowthDJ) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 23 iterations Message: Relative error in the sum of squares is at most `ftol'. R> cesNls <. c( "x1".748 Multiple R-squared: 0.6016277 Now. 05 '. Error t value gamma 1288.590591 As the deviation between our calculated α and the corresponding value published in Sun et al. if we restrict coefficient ρ to zero: R> cdNls <. "x2"). Table 1: α = 0.8354).000 --Signif. ela = FALSE ) Estimated CES function with constant returns to scale Call: cesEst(yName = "gdp85".1609 0. Coefficients: Estimate Std. rho = 0) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Coefficient 'rho' was fixed at 0 Convergence achieved after 7 iterations Message: Relative error in the sum of squares is at most `ftol'. rho = 0 ) R> summary( cdNls.1 ' ' 1 Residual standard error: 3342. data = GrowthDJ. we can use function cesEst to estimate Cobb-Douglas functions by non-linear least-squares (NLS).001 '**' Pr(>|t|) 0. (2011.1772 2. xNames = c("x1".6955 3.coef( cesNls )[ "rho" ] ).512 rho 0.cesEst( "gdp85".308 Multiple R-squared: 0. data = GrowthDJ. If we restrict ρ to zero and assume a multiplicative error term.000000 0.1 ) / coef( cdNls )[ "delta" ].0000 0. "x2").01 '*' 0. codes: 0 '***' 0.000445 *** 1. c( "x1". "\n" ) sigma = 0. As the CES function approaches a Cobb-Douglas function if the coefficient ρ approaches zero and cesEst internally uses the limit for ρ approaching zero if ρ equals zero. (2011.4425 0.017722 * 0. σ = 0. the estimation is in fact equivalent to an OLS estimation of the logarithmised version of the Cobb-Douglas function: . 1 / ( 1 . Table 1: α = 0.371 delta 2.5947313 R> cat( "alpha =". we also consider this replication exercise to be successful.' 0. ( coef( cdNls )[ "delta" ] .7485641 R> cat( "sigma =".7486.5907) is very small.835429 These calculations show that we can successfully replicate the estimation results shown in Sun et al. "\n" ) alpha = 0.0797 543.42 Econometric Estimation of the Constant Elasticity of Substitution Function in R alpha = 0. but generally. 5. data = GrowthDJ.1 ' ' 1 Residual standard error: 0. xNames = c("x1". we conclude that the micEconCES package has no problems with replicating the estimations of the two-input CES functions presented in Sun et al. (2011.5980698 Again. Kemfert (1998) Our second replication study aims to replicate the estimation results published in Kemfert (1998).1 ) / coef( cdLog )[ "delta" ].01 '*' 0.08e-15 *** delta 2. rho = 0) Estimation by non-linear least-squares using the 'LM' optimizer assuming a multiplicative error term Coefficient 'rho' was fixed at 0 Convergence achieved after 8 iterations Message: Relative error in the sum of squares is at most `ftol'.5973597 R> cat( "alpha =". and labour.017 1. "x2"). We do this just to test the reliability of cesEst and for the sake of curiosity.' 0. Kemfert’s CES functions basically have the same specification as our three-input nested CES function 19 This is indeed a rather complex way of estimating a simple linear model by least squares.001 '**' 0. "\n" ) alpha = 0. Kemfert (1998) estimates nested CES functions with all three possible nesting structures. Coefficients: Estimate Std. data = GrowthDJ.Arne Henningsen.2337 120.6814132 Multiple R-squared: 0. codes: 0 '***' 0.4880 0. She estimates nested CES production functions for the German industrial sector in order to analyse the substitutability between the three aggregate inputs capital. (2011).05 '. ( coef( cdLog )[ "delta" ] .1056 0.51e-16 *** rho 0. energy. "x2").5981).cesEst( "gdp85". multErr = TRUE ) R> summary( cdLog.4003 8. . multErr = TRUE. G´eraldine Henningsen 43 R> cdLog <. c( "x1". ela = FALSE ) Estimated CES function with constant returns to scale Call: cesEst(yName = "gdp85". we can successfully replicate the α shown in Sun et al. Error t value Pr(>|t|) gamma 965.2.3036 8. As nested CES functions are not invariant to the nesting structure.000 1 --Signif.0000 0.195 2.19 As all of our above replication exercises were successful. we recommend using simple linear regression tools to estimate this model. rho = 0. Table 1: α = 0. which were originally published by the German statistical office. in the second nesting structure (s = 2). and ν = 1. labour.subset( GermanIndustry. labour.5. ρ = βs . R> cesNls1 <.2 ). ρ1 = αs . Kemfert (1998) does not allow for increasing or decreasing returns to scale and the naming of the parameters is different with: γ = As . 2. rho = 0. ) First.. In contrast. nls terminates with an error message. usually returns parameter estimates.2. the estimation with cesEst using. e. they are energy. and energy. add a linear time trend (starting at zero in 1960). we try to estimate the first model specification using the standard function for non-linear least-squares estimations in R (nls). R> data( "GermanIndustry" ) R> GermanIndustry$time <. In the first nesting structure (s = 1). + delta2 = 0. rho1= 0. + data = GermanIndustry ) ) R> cat( cesNls1 ) Error in numericDeriv(form[[3L]]. λ = ms . year < 1973 | year > 1975. and remove the years of economic disruption during the oil crisis (1973 to 1975) in order to obtain the same subset as used by Kemfert (1998). . labour input (A) is the total number of persons employed in the West German industrial sector (in million). they are capital. respectively. and in the third nesting structure (s = 3). lambda = 0. the LevenbergMarquardt algorithm. and x3 are capital. where—according to the specification of the three-input nested CES function (see Equation 4)—the first and second input (x1 and x2 ) are nested together so that the (constant) Allen-Uzawa elasticities of substitution between x1 and x3 (σ13 ) and between x2 and x3 (σ23 ) are equal. x2 .5. capital input (K) is given by gross stock of fixed assets of the West German industrial sector (in billion Deutsche Mark at 1991 prices). where the subscript s = 1.GermanIndustry$year . The data used by Kemfert (1998) are available in the R package micEconCES. δ = as . as in many estimations of nested CES functions with real-world data. and energy input (E) is determined by the final energy consumption of the West German industrial sector (in GWh).g. However. the three inputs x1 . and capital. and labour. Output (Y) is given by gross value added of the West German industrial sector (in billion Deutsche Mark at 1991 prices).1 / rho ). energy. The following commands load the data set. The data are annual aggregated time series data of the entire German industry for the period 1960 to 1993. env) : Missing value or an infinity produced when evaluating the model However. 3 of the parameters in Kemfert (1998) indicates the nesting structure.44 Econometric Estimation of the Constant Elasticity of Substitution Function in R defined in Equation 4 and allow for Hicks-neutral technological change as in Equation 13.try( nls( Y ~ gamma * exp( lambda * time ) * + ( delta2 * ( delta1 * K ^(-rho1) + + ( 1 . Indeed. names(ind). + start = c( gamma = 1.delta1 ) * E^(-rho1) )^( rho / rho1 ) + + ( 1 .1960 R> GermanIndustry <. delta1 = 0.delta2 ) * A^(-rho) )^( .015. δ1 = bs . these data were taken from the appendix of Kemfert (1998) to ensure that we used exactly the same data. control = nls.549e+00 1.2)_3 (AU) NA NA NA NA HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution Although we set the maximum number of iterations to the maximum possible value (1024). Error t value Pr(>|t|) gamma 2.842 < 2e-16 *** delta_1 2.20 While our estimated coefficient λ is not too far away from the corresponding coefficient in Kemfert (1998) (m1 = 0.03239 0. Moreover.853 rho_1 5.control( maxiter = 1024. ρ1 = α1 = 0.1813 in). maxfev = 2000 ) ) R> summary( cesLm1 ) Estimated CES function with constant returns to scale Call: cesEst(yName = "Y".186 0.927e-66 0. "A").852 0.064 . --Signif.567 E_(1. this is not the case for the estimated coefficients ρ1 and ρ (Kemfert 1998. codes: 0 '***' 0. maxfev = 2000)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence NOT achieved after 1024 iterations Message: Number of iterations has reached `maxiter' == 1024. + control = nls.139e-68 5.cesEst( "Y". we can achieve “successful convergence” by increasing the tolerance levels for convergence.222).112e-04 0.001 '**' 0. c( "K". the default convergence criteria are not fulfilled after this number of iterations. "E". tName = "time".900e+01 5. xNames = c("K".lm.444e+01 0. + data = GermanIndustry. tName = "time". "E".376e+00 -1. Consequently.581e-04 45.005 5.997 delta 7. Error t value Pr(>|t|) E_1_2 (HM) 0.01 '*' 0. .575 rho -2.1 ' ' 1 Residual standard error: 10. ρ = β1 = 0. G´eraldine Henningsen 45 R> cesLm1 <.lm. our Hicks-McFadden elasticity of substitution between capital and energy considerably deviates from the corresponding elasticity published in Kemfert (1998) (σα1 = 0.100e-02 4.control(maxiter = 1024.653).795e+00 5.300e+01 9.572 0.5300.004 0. Coefficients: Estimate Std.641e-05 4. coefficient ρ estimated by cesEst 20 Of course. "A" ).Arne Henningsen.01852 0.9958804 Elasticities of Substitution: Estimate Std.25944 Multiple R-squared: 0.05 '.' 0.59e-07 *** lambda 2.561 0. data = GermanIndustry. "E". the coefficients are constrained to be in the economically meaningful region by default. so that the elasticities of substitution between capital/energy and labour are not defined. cesEst using the Levenberg-Marquardt algorithm reaches a successful convergence.max = 1000.624 0. eval. Error t value Pr(>|t|) gamma 1.04623 0.max = 2000)) Estimation by non-linear least-squares using the 'PORT' optimizer assuming an additive error term Convergence NOT achieved after 464 iterations Message: false convergence (8) Coefficients: Estimate Std. Error t value Pr(>|t|) E_1_2 (HM) 0. method = "PORT". control = list(iter.0566 . tName = "time".001 '**' 0.1 ' ' 1 Residual standard error: 10. In order to avoid the problem of economically meaningless estimates. c( "K". "E".453e+00 0.251 <2e-16 *** delta_1 1. eval.0534 . + control = list( iter.729 0. tName = "time". + data = GermanIndustry. rho_1 9. Kemfert (1998) does not report the estimates of the other coefficients.46 Econometric Estimation of the Constant Elasticity of Substitution Function in R is not in the economically meaningful range. i. data = GermanIndustry.05 '. codes: 0 '***' 0. "A" ).533e+00 2.217 0. rho 5. we re-estimate the model with cesEst using the PORT algorithm. which allows for box constraints on the parameters.0839 .061 0.' 0.932 0. For the two other nesting structures.905e-01 1.e. but again the estimated ρ1 and ρ parameters are far from the estimates reported in Kemfert (1998) and are not in the economically meaningful range.01 '*' 0.906 0.909e-12 0. R> cesPort1 <.max = 2000) ) R> summary( cesPort1 ) Estimated CES function with constant returns to scale Call: cesEst(yName = "Y". "A").476e-01 4.5989 lambda 2.287e-04 39.2)_3 (AU) 0. Unfortunately.max = 1000. E_(1.cesEst( "Y".633e+00 1.8286 --Signif.158e-13 1.09314 0.68322 Multiple R-squared: 0. contradicts economic theory.5324 --- .526 0.65314 1. If cesEst is called with argument method equal to "PORT".736e+00 5.311e-01 2.9516 delta 9.995533 Elasticities of Substitution: Estimate Std.075e-02 5..914e+00 0.04886 1. method = "PORT". xNames = c("K". codes: 0 '***' 0.1 ' ' 1 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution As expected. because we thought that Kemfert (1998) could have estimated the model in logarithms without reporting this in the paper. However.001 '**' 0. rho1 = 0..GermanIndustry$E GermanIndustry$A1 <. by setting argument multErr of function cesEst to TRUE). we have to impose the restriction on λ manually.0222 * exp( 0.e.GermanIndustry$K GermanIndustry$E1 <.Arne Henningsen.GermanIndustry$A cesLmFixed1 <. * GermanIndustry$time ) * GermanIndustry$time ) * GermanIndustry$time ) data = GermanIndustry. we could not replicate the estimates reported in Kemfert (1998). Coefficients: Estimate Std.5300. Error t value Pr(>|t|) 21 As we were unable to replicate the results of Kemfert (1998) by assuming an additive error term. .control(maxiter = 1000.. While ρ1 and ρ2 can be restricted automatically by arguments rho1 and rho of cesEst. maxfev = 2000)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Coefficient 'rho_1' was fixed at 0.e. maxfev = 2000 ) ) Estimated CES function with constant returns to scale Call: cesEst(yName = "Y". the output is linearly homogeneous in all three inputs). G´eraldine Henningsen 47 Signif. control = nls.0222 "E1". As the model estimated by Kemfert (1998) is restricted to have constant returns to scale (i.53 Coefficient 'rho' was fixed at 0. control = nls. 1000. "A1" ). rho1 = 0.21 In the following. these estimated coefficients are in the economically meaningful region and the fit of the model is worse than for the unrestricted estimation using the Levenberg-Marquardt algorithm (larger standard deviation of the residuals). δ1 . R> R> R> R> + + R> GermanIndustry$K1 <.cesEst( "Y". and ρ to the estimates reported in Kemfert (1998) and only estimate the coefficients that are not reported.1813.01 '*' 0. c( "K1". ρ1 . and ρ restricted to be equal to the estimates reported in Kemfert (1998).control( maxiter = summary( cesLmFixed1 ) * exp( 0.lm. rho = 0.. rho = 0. and δ. xNames = c("K1".05 '. "A1"). "E1".0222 * exp( 0.1813. we re-estimated the models assuming that the error term was multiplicative (i. we can simply multiply all inputs by eλ t . data = GermanIndustry.1813 Convergence achieved after 22 iterations Message: Relative error between `par' and the solution is at most `ptol'.53. the optimisation algorithm reports an unsuccessful convergence and estimates of the coefficients and elasticities of substitution are still not close to the estimates reported in Kemfert (1998).' 0. However. ρ1 . γ.lm. The following commands adjust the three input variables and re-estimate the model with coefficients λ. i.e. we restrict the coefficients λ. even when using this method. 6.5.181300 6. 6. DE) ˆ one gradient-based algorithms for unrestricted optimisation (LM) with starting values equal to the estimates from the global algorithms ˆ one gradient-based algorithms for restricted optimisation (PORT) with starting values equal to the estimates from the global algorithms ˆ two-dimensional grid search for ρ1 and ρ using all gradient-based algorithms (Newton. 3. BFGS. 7.4.8.4.6.1. 0.0. Surprisingly. PORT)22 22 The two-dimensional grid search included 61 different values for ρ1 and ρ: -1.8. 6. -0.4. the coefficient δ1 is not in the economically meaningful range. PORT) ˆ all global algorithms (NM.933 0.680111 5.6.4.9204 0. -0.4. -0. the reported elasticities of substitution are also equal to the values published in Kemfert (1998).003031 delta 0.044 0.2068 0.165851 4.526 0. . 6.918 0. 3. although the coefficients λ. 12.2)_3 (AU) 0. 13. 8. LM) ˆ all gradient-based algorithms for restricted optimisation (L-BFGS-B. LM. Error t value Pr(>|t|) E_1_2 (HM) 0.494895 delta_1 -0.8465 2.2. 0.8.2. 4. 1.103 0. 1.0. we systematically re-estimated the models of Kemfert (1998) using cesEst with many different estimation methods: ˆ all gradient-based algorithms for unrestricted optimisation (Newton.48 Econometric Estimation of the Constant Elasticity of Substitution Function in R gamma 1. 0. and 14. 10. -0. 2. Given that ρ1 and ρ are identical to the corresponding parameters published in Kemfert (1998). the R2 values strongly deviate from the values reported in Kemfert (1998).075280 0. 3. 2.6.8.085 0. 4. 9. 1.884490 rho_1 0. 0. whilst it is considerably smaller for the other.2.8.8.2. 2. -0. -0.5. as our R2 value is much larger for one nesting structure.4.4. 0.4.767 E_(1.092737 0.965 Residual standard error: 12.3.806 0.035786 1.0.9. Moreover.2.9936586 Elasticities of Substitution: Estimate Std.72876 Multiple R-squared: 0.2.0. 0.8. and ρ are exactly the same.2. For the two other nesting structures. 8. Hence. 2. Given our unsuccessful attempts to reproduce the results of Kemfert (1998) and the convergence problems in our estimations above.0. 3. 0.2.9.6. 12. 3.772 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution The fit of this model is—of course—even worse than the fit of the two previous models.4. BFGS. 0.6. 13.599 0. 11. 8. 0. SANN.7.8.6536 2. 7.0. -0. 5.2.3.0.8. 11. -0. L-BFGS-B.0. 12.4.6.7.290 0. ρ1 .6. 1. -0. 1.245 -0.2.296 0.1.6. the R2 value of our restricted estimation is not equal to the R2 value reported in Kemfert (1998).0. 4.0. 10. 9.530000 rho 0. 5. 10. each grid search included 612 = 3721 estimations.0. 2.8.6. 0. g. 24 . L-BFGS-B. 10. are always obtained by the Levenberg-Marquardt algorithm—either with the standard starting values (third nesting structure).000 function evaluations) ˆ Nelder-Nead: maxit = 5000 (max. For these estimations.24 23 The R script that we used to conduct the estimations and to create these tables is available in Appendix D.. 1. δ. δ1 . eval. with the smallest sums of squared residuals. the models with the best fit are obtained by grid-search with the Levenberg-Marquardt algorithm (first and second nesting structure). Please note that the grid-search with an algorithm for unrestricted optimisation (e. where the PORT algorithm can be used to restrict the estimates to the economically meaningful region.000 function evaluations) ˆ PORT: iter.000 iterations) ˆ SANN: maxit = 2e6 (2. or the PORT algorithm with starting values taken from global algorithms or grid search (second and third nesting structure). the PORT algorithm with default starting values (second nesting structure). and 3. 1.23 The models with the best fit. Assuming that the basic assumptions of economic production theory are satisfied in reality.e. or that the data are not reasonably constructed (e. 500 iterations) ˆ BFGS: maxit = 5000 (max. or with starting values taken from global algorithms (second and third nesting structure) or grid search (first and third nesting structure). 5. all estimates that give the best fit are economically meaningless. LevenbergMarquardt) can be used to guarantee economically meaningful values for ρ1 and ρ.000 iterations). 5. BFGS. If we only consider estimates that are economically meaningful. respectively. grid-search with the PORT or BFGS algorithm (second nesting structure). Hence.. PORT) with starting values equal to the estimates from the corresponding grid searches. 2. itermax = 1e4 (max.000.000 population generations) The results of all these estimation approaches for the three different nesting structures are summarised in Tables 1. but not for γ. LM. 2.max = 1000 (max. maxfev = 2000 (max. because either coefficient ρ1 or coefficient ρ is less than than minus one.Arne Henningsen. G´eraldine Henningsen 49 ˆ all gradient-based algorithms (Newton. and ν.g. we can conclude that the Levenberg-Marquardt and the PORT algorithms are—at least in this study—most likely to find the coefficients that give the best fit to the model. 5. However.000 iterations) ˆ Levenberg-Marquardt: maxiter = 1000 (max.000 iterations) ˆ DE: NP = 500 (500 population members).000 iterations) ˆ L-BFGS-B: maxit = 5000 (max. 1. problems with aggregation or deflation).max = 1000 (max..000 iterations). these results suggest that the three-input nested CES production technology is a poor approximation of the true production technology. i. we changed the following control parameters of the optimisation algorithms: ˆ Newton: iterlim = 500 (max. 0207 0.9811 0.0208 0.0001 0.3841 3.3205 15.LM DE DE .PORT NM .9517 0.0208 0.5447 0.0222 0.9996 0.4949 14.9866 0.9901 -1.0150 0.7000 -0.9957 0.7807 0.9954 0.LM SANN SANN .1233 -0.8727 9.0900 2.0000 0.0208 0.9423 6. The rows highlighted with in red include estimates that are economically meaningless.9872 0.6000 14.0002 0.0207 0.0000 -2.0000 0.9959 Table 1: Estimation results with first nesting structure: in the column titled “c”.0000 0. .0370 0.6204 7.5940 0.0126 0.6487 52.0000 0. ρ1 .0000 0.9656 0.2192 14.9822 0.3982 3.9528 -0.2045 3.0236 0.0000 1.9870 0.8000 6.5300 0.1502 4.1470 -2.0222 0.3934 8.9954 0.1502 11.6000 11.1791 3.8000 11.1078 0.0000 0.0210 0.5300 10.9954 0.3870 13.0210 δ1 δ -0.8310 0.8340 λ 0.5421 -0.5373 0.0118 0.9780 0.8728 51.5797 2.0000 0.0203 0.4400 6.5797 11. The rows entitled “<gradAlg> grid start” present the results when the gradient-based algorithm “<gradAlg>” was used with starting values equal to the results of a two-dimensional grid search for ρ1 and ρ using the same gradient-based algorithm at each grid point.2078 0.4582 -0. The rows titled “<globAlg> .0208 0.9728 0.2479 7.9959 0.1078 0.4887 0.3205 4.9798 0. The rows entitled “<gradAlg> grid” present the results when a two-dimensional grid search for ρ1 and ρ was performed using the gradient-based algorithm “<gradAlg>” at each grid point.6382 6.9135 -1.<gradAlg>” present the results when the model is first estimated by the global algorithm “<globAlg>” and then estimated by the gradient-based algorithm “<gradAlg>” using the estimates of the first step as starting values.9956 0.0206 0. The row entitled “fixed” presents the results when λ.0119 0.9811 0.0000 0.8245 0.2000 -0.4887 0.9508 1.9959 0.1000 -0.0210 0.0000 63.3720 28.7000 -0.0000 0.9884 0.PORT SANN .0000 0.9576 0.1813 0.9955 0.9731 28.0209 0.0000 0.8845 0.0120 0.0210 0.4969 c RSS 1 1 1 1 0 0 1 0 0 5023 14969 3608 14969 3566 3264 4043 3542 3266 16029 9162 10794 14075 9299 10611 3637 3637 3465 3465 3469 3469 3416 3259 0 1 0 1 1 0 1 1 1 0 1 0 R2 0.1241 28.9956 0.9347 ρ 0.0000 0.1813 3.0206 0.9956 0. a “1” indicates that the optimisation algorithm reports a successful convergence.7022 -2.0206 0.2078 0. whereas a “0” indicates that the optimisation algorithm reports non-convergence.7565 0.7565 0.9850 3.0030 0.1000 0.8379 3.9956 0.0118 0.9949 0.0001 ρ1 0.8187 0.9700 3.9955 1.9864 0.50 Econometric Estimation of the Constant Elasticity of Substitution Function in R γ Kemfert (1998) fixed Newton BFGS L-BFGS-B PORT LM NM NM .3677 2.PORT DE .9955 0.0000 0.0150 0.0000 0.0000 1.9937 0.0000 0.2138 16.2704 10.LM Newton grid Newton grid start BFGS grid BFGS grid start PORT grid PORT grid start LM grid LM grid start 1.6000 11.2000 -1.0000 0.1386 1.0143 0.4234 7.9883 0.4882 8.3205 9.0920 2.4668 -0.6000 11. and ρ are restricted to have exactly the same values that are published in Kemfert (1998). 8613 0.0000 -5.8315 7.0249 0.3534 9.9219 0.2680 0.9880 0.0208 0.3052 0.9951 0.0182 0.8000 5.0206 0.0207 0.5434 16.0207 0.9951 0.0000 -5.0207 0.9933 0.9291 1.9886 0.5434 7.0133 3.9951 0.9892 0.5015 7.9951 0.9942 0.0207 0.9880 0.9798 0.0000 -5.5015 16.0000 0.0194 0.2248 51 c RSS 1 1 1 1 1 1 1 1 1 14146 3858 3778 3857 3844 3646 5342 3844 3636 4740 3844 3638 3875 3844 3640 3925 3826 3844 3844 3844 3844 3844 3642 1 1 1 1 1 0 1 1 1 1 1 1 R2 0.3101 6.0026 0. G´eraldine Henningsen γ Kemfert (1998) fixed Newton BFGS L-BFGS-B PORT LM NM NM .9910 0.9984 0.9950 0.0000 -1.0207 0.0000 -5.2167 -0.PORT NM .9880 0.6806 5.9892 0.Arne Henningsen.9951 0.0000 0.9942 0.9951 0.9951 0.0207 0.3299 -0.0065 4.9278 5.6353 5.7889 0.9951 0.PORT SANN .1816 1.0112 ρ 1.0167 0.0208 0.0207 0.9748 0.9296 0.LM Newton grid Newton grid start BFGS grid BFGS grid start PORT grid PORT grid start LM grid LM grid start 1.6891 5.9291 1.5015 16.0208 0.9942 0.0000 -1.2132 2.7743 -2.PORT DE .2995 7.0207 0.2382 7.0780 12.5056 5.2000 5.9880 0.6612 -1.LM SANN SANN .9942 0.9291 0.0203 0.0000 0.9954 0.0207 0.0000 -1.2000 6.3761 -0.2545 5.1816 -0.9296 1.3101 6.3350 -1.9296 0.9951 0.3101 6.9892 0.5434 7.9942 0.0208 0.2895 0.4188 0.9951 0.5015 16.9954 Table 2: Estimation results with second nesting structure: see note below Table 1.8120 -1.8427 0.9892 0.1000 -1.9951 0.0000 ρ1 0.9954 0.1034 9.2412 5.7330 -1.9466 7.3334 6.9821 0.0069 0.9296 0.9952 0.0208 0.1869 λ 0.7860 0.2000 5.0207 0.0000 -1.2841 5.9954 0.9555 0.8741 0.6826 0.0207 0.9293 1.LM DE DE .5208 5.0997 -0.9940 0.9919 0.9951 0.2155 0.0208 δ1 δ 0.8530 -1.0069 0.6749 7.0000 0.5243 16.9952 0.7211 6.9291 1.2000 5.2543 6.9885 0.3101 5.0207 0.5434 7.9940 0.9954 0.2155 5.0000 -5. . 9997 0.7198 15.2177 7.9956 0.3619 15.5155 9.PORT NM .1487 0.9958 0.0209 0.5533 12.9955 0.9917 19.0000 0.0555 0.0480 0.0209 0.0497 -0.9958 0.9687 8.0208 0.0207 0.1072 -1.0208 0.8327 5.8327 1.9943 0.5286 7.8617 0.0898 15.1082 0.0207 0.9955 0.3998 8.0208 0.5000 -0.0085 0.5533 19.1100 0.3654 1.0192 0.7000 -0.0000 7.7174 -1.0207 δ1 δ -30.9440 0.0345 0.6000 7.1058 0.9687 c RSS 0 1 1 1 0 1 1 1 1 72628 4516 4955 3537 3518 3357 3582 3510 3357 4596 3510 3357 3588 3510 3357 3553 3551 3532 3532 3531 3510 3528 3357 1 0 1 1 1 1 1 1 1 1 1 1 R2 0.8789 -6.4646 6.0209 0.1100 0.2264 0.7167 3.0208 0.2490 0.9986 0.5285 6.0000 0.7000 -1.0000 0.PORT SANN .0000 8.9955 0.0345 0.9956 0.9955 0.5834 0.0156 -0. .6353 13.5898 -0.LM SANN SANN .9997 0.9924 0.8632 0.0206 0.9686 8.2316 0.0085 1.8498 19.0724 0.0208 0.9955 0.7169 7.9437 0.8632 0.9956 0.LM Newton grid Newton grid start BFGS grid BFGS grid start PORT grid PORT grid start LM grid LM grid start 17.52 Econometric Estimation of the Constant Elasticity of Substitution Function in R γ Kemfert (1998) fixed Newton BFGS L-BFGS-B PORT LM NM NM .9937 0.9942 0.4000 8.0485 ρ 5.9899 0.9955 0.0207 0.5533 19.0196 0.7305 7.8632 0.9958 0.0000 0.7169 λ 0.7042 7.9955 0.0208 0.9924 0.9580 0.6767 5.0000 0.3654 -0.9967 0.0208 0.5964 9.0497 -0.4859 -0.7944 -0.3492 15.5709 15.LM DE DE .8482 0.0000 -0.0345 0.9956 0.0086 0.PORT DE .7000 -0.2220 -1.0064 0.8000 6.0085 0.7038 7.4646 6.0208 0.8632 0.0345 0.0207 0.0064 0.0220 0.6131 -0.9083 0.0000 -6.1286 0.6653 7.0085 ρ1 1.9955 0.9956 0.9258 10.0000 0.0482 -0.8745 -1.9958 Table 3: Estimation results with third nesting structure: see note below Table 1.0000 -6.9958 0.5533 19.9955 0.0000 -6.4646 6.9686 1.0000 0.9258 11.0207 0.5303 10.9714 6.0068 0.0208 0.9534 1.7167 9.8000 -6.4646 6. 14.400e+01 -1. control = nls. This figure indicates that the surface is not always smooth. "E". because the deepest part could be hidden behind a ridge). 0.cesEst("Y". as this will help us to understand the problems that the optimisation algorithms have when finding the minimum of the sum of squared residuals.4)) R> cesLmGridRho1 <. + control = nls. rho = rhoVec) R> print(cesLmGridRho1) Estimated CES function Call: cesEst(yName = "Y".000e+00 Elasticities of Substitution: E_1_2 (HM) E_(1.c(seq(-1.2). "A"). seq(4. we will examine the results of the grid searches for ρ1 and ρ.RData".Arne Henningsen. + data = GermanIndustry. tName = "time". c("K".512e+01 2. The fit of the model is best (small sum of squared residuals. p.. rho1 = rhoVec. Hence.045e-19 delta 3.4. The object cesLmGridRho1 can be loaded with the command load(system. "E".control(maxiter = 1000. the plot method plots the negative sum of squared residuals against ρ1 and ρ. + rho1 = rhoVec.696e-02 rho_1 rho 1. 0. 15). seq(1. xNames = c("K".2. The idea of including results from computationally burdensome calculations in the corresponding R package is taken from Hayfield and Racine (2008.1). data = GermanIndustry. rho = rhoVec) Coefficients: gamma lambda 1. "A").g.lm.lm.087e-02 delta_1 6. package = "micEconCES")) so that the reader can obtain the estimation result more quickly. 0.file("Kemfert98Nest1GridLm.control(maxiter = 1000. 25 As this grid search procedure is computationally burdensome. tName = "time". 1.06667 Inf AU = Allen-Uzawa (partial) elasticity of substitution HM = Hicks-McFadden (direct) elasticity of substitution We start our analysis of the results from the grid search by applying the standard plot method.2)_3 (AU) 0. We demonstrate this with the first nesting structure and the Levenberg-Marquardt algorithm using the same values for ρ1 and ρ and the same control parameters as before. we have—for the reader’s convenience— included the resultant object in the micEconCES package. 4. we get the same results as shown in Table 1:25 R> rhoVec <. . maxfev = 2000). G´eraldine Henningsen 53 Given the considerable variation of the estimation results. maxfev = 2000). This is shown in Figure 6. R> plot( cesLmGridRho1 ) As it is much easier to spot the summit of a hill than the deepest part of a valley (e. Econometric Estimation of the Constant Elasticity of Substitution Function in R negative sums of squared residuals −50000 −100000 −150000 −200000 −250000 10 5 5 o_ o rh 1 10 rh 54 0 0 Figure 6: Goodness of fit for different values of ρ1 and ρ. . 000. the fit is worst (large sum of squared residuals. ρ1 = 0.NA R> plot( cesLmGridRho1a ) negative sums of squared residuals −3500 −4000 −4500 10 5 5 rh o_ o rh 1 10 0 0 Figure 7: Goodness of fit (best-fitting region only).8 and 2 depending the value of ρ). G´eraldine Henningsen 55 light green colour) if ρ is approximately in the interval [−1. At the estimates of Kemfert (1998). R> cesLmGridRho1a <. we could not reproduce any of the estimates published in Kemfert (1998). we re-plot Figure 6 with only sums of squared residuals that are smaller than 5. the sum of the squared residuals is clearly not at its minimum.53 and ρ = 0. In order to get a more detailed insight into the best-fitting region. . i. red colour) if ρ is larger than 5 and ρ1 has a low value (the upper limit is between −0.cesLmGridRho1 R> cesLmGridRho1a$rssArray[ cesLmGridRho1a$rssArray >= 5000 ] <. We contacted 26 The R commands for conducting these estimations are included in the R script that is available in Appendix D. 1] (no matter the value of ρ1 ) or if ρ1 is approximately in the interval [2. We obtained similar results for the other two nesting structures.1813..26 but again. The resulting Figure 7 shows that the fit of the model clearly improves with increasing ρ1 and decreasing ρ.Arne Henningsen. Furthermore. We also replicated the estimations for seven industrial sectors.e. 7] and ρ is smaller than 7. which makes it easy for the user to extend and modify (e.. As the CES function is not easy to estimate. Hence.. Unfortunately. both the Shazam scripts used for the estimations and the corresponding output files have been lost. the user can easily use several different methods and compare the results. Additionally. hence. in particular. which include the linear approximation suggested by Kmenta (1967). However. the grid search allows one to plot the objective function for different values of the substitution parameters (ρ1 . increases the probability of finding a global optimum. even if some readers choose not to use the micEconCES package for estimating a CES function. i. given an objective function that is seldom well-behaved. they will definitely benefit from this paper. the user can visualise the surface of the objective function to be minimised and. gradient-based and global optimisation algorithms.56 Econometric Estimation of the Constant Elasticity of Substitution Function in R the author and asked her to help us to identify the reasons for the differences between her results and ours. correspond to the smallest sums of squared residuals. but also all major extensions. The grid search procedure. i. In doing so. The micEconCES package provides such a solution.2 demonstrates how these control instruments support the identification of the global optimum when a simple non-linear estimation.e. the CES function has gained in popularity in macroeconomics and especially growth theory. The function cesEst is constructed in a way that allows the user to switch easily between different estimation and optimisation procedures. fails. we were unable to find the reason for the large deviations in the estimates. Additionally. This option is a further control instrument to ensure that the global optimum has been reached. . in contrast. including factor augmented technological change). 6. and an extended grid-search procedure that returns stable results and alleviates convergence problems. check the extent to which the objective function is well behaved and which parameter ranges of the substitution parameters give the best fit to the model. ρ2 .. The micEconCES package is open-source software and modularly programmed. Hence. ρ).g. Furthermore.e. Section 5. technological change and three-input and four-input nested CES functions. a software solution to estimate the CES function may further increase its popularity. the user can impose restrictions on the parameters to enforce economically meaningful parameter estimates. Its function cesEst can not only estimate the traditional two-input CES function. the micEconCES package provides the user with a multitude of estimation and optimisation procedures. However. we are confident that the results obtained by cesEst are correct. Conclusion In recent years. as it is clearly more flexible than the classical Cobb-Douglas function. as it provides a multitude of insights and practical hints regarding the estimation of CES functions. ρ (35) We define function so that Now we can calculate the first partial derivative of f (ρ): f 0 (ρ) = ν ν g 0 (ρ) ln (g (ρ)) − ρ2 ρ g (ρ) (36) and the first three derivatives of g (ρ) −ρ g 0 (ρ) = −δ x−ρ 1 ln x1 − (1 − δ) x2 ln x2 00 g (ρ) = 000 g (ρ) = −ρ 2 2 δ x−ρ 1 (ln x1 ) + (1 − δ) x2 (ln x2 ) −ρ 3 3 −δ x−ρ 1 (ln x1 ) − (1 − δ) x2 (ln x2 ) . (32) Now we can approximate the logarithm of the CES function by a first-order Taylor series approximation around ρ = 0 : ln y ≈ ln γ + λ t + f (0) + ρf 0 (0) (33) −ρ g (ρ) ≡ δ x−ρ 1 + (1 − δ) x2 (34) ν f (ρ) = − ln (g (ρ)) . Traditional two-input CES function A. (37) (38) (39) At the point of approximation ρ = 0 we have g (0) = 1 (40) .1.Arne Henningsen. Traditional two-input CES function: y = γe λt  δ x−ρ 1 + (1 − δ) x−ρ 2 − ν ρ (29) Logarithmized CES function: ln y = ln γ + λ t −  ν  −ρ ln δ x1 + (1 − δ) x−ρ 2 ρ (30) Define function f (ρ) ≡ −   ν −ρ + (1 − δ) x ln δ x−ρ 2 1 ρ (31) so that ln y = ln γ + λ t + f (ρ) . Derivation of the Kmenta approximation This derivation of the first-order Taylor series (Kmenta) approximation of the traditional two-input CES function is based on Uebe (2000). G´eraldine Henningsen 57 A. 58 Econometric Estimation of the Constant Elasticity of Substitution Function in R g 0 (0) = −δ ln x1 − (1 − δ) ln x2 2 00 g (0) = δ (ln x1 ) + (1 − δ) (ln x2 ) (41) 2 3 000 g (0) = −δ (ln x1 ) − (1 − δ) (ln x2 ) (42) 3 (43) Now we calculate the limit of f (ρ) for ρ → 0: f (0) = = lim f (ρ) (44) −ν ln (g (ρ)) ρ→0 ρ (45) ρ→0 lim 0 = (ρ) −ν gg(ρ) lim 1 = ν (δ ln x1 + (1 − δ) ln x2 ) ρ→0 (46) (47) and the limit of f 0 (ρ) for ρ → 0: f 0 (0) = lim f 0 (ρ)   ν ν g 0 (ρ) = lim ln (g (ρ)) − ρ→0 ρ2 ρ g (ρ) (48) ρ→0 (49) 0 = lim (ρ) ν ln (g (ρ)) − ν ρ gg(ρ) ν = = lim ρ→0 lim − ρ→0 (50) ρ2 ρ→0 g 0 (ρ) g(ρ) −ν g 0 (ρ) g(ρ) −νρg 00 (ρ)g(ρ)−(g 0 (ρ))2 (g(ρ))2 2ρ ν g 00 (ρ) g (ρ) − (g 0 (ρ))2 2 (g (ρ))2 ν g 00 (0) g (0) − (g 0 (0))2 2 (g (0))2   ν − δ (ln x1 )2 + (1 − δ) (ln x2 )2 − (−δ ln x1 − (1 − δ) ln x2 )2 2 ν − δ (ln x1 )2 + (1 − δ) (ln x2 )2 − δ 2 (ln x1 )2 2  2 2 −2δ (1 − δ) ln x1 ln x2 − (1 − δ) (ln x2 )     ν − δ − δ 2 (ln x1 )2 + (1 − δ) − (1 − δ)2 (ln x2 )2 2  −2δ (1 − δ) ln x1 ln x2  ν δ (1 − δ) (ln x1 )2 + (1 − δ) (1 − (1 − δ)) (ln x2 )2 − 2  −2δ (1 − δ) ln x1 ln x2 (51) (52) = − (53) = (54) = = = = −  νδ (1 − δ)  (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 2 (55) (56) (57) (58) . Functions f (ρ) and g(ρ) are defined as in the previous section (in Equations 31 and 34. The derivations of these Taylor series approximations are shown in the following. respectively). G´eraldine Henningsen = − νδ (1 − δ) (ln x1 − ln x2 )2 2 59 (59) so that we get following first-order Taylor series approximation around ρ = 0: ln y ≈ ln γ + λ t + νδ ln x1 + ν (1 − δ) ln x2 − νρ δ (1 − δ) (ln x1 − ln x2 )2 (60) A. These calculations are inspired by Uebe (2000).2.Arne Henningsen. The first-order Taylor series approximations of these derivatives at the point ρ = 0 are shown in the main paper in Equations 20 to 23. Derivatives with respect to coefficients The partial derivatives of the two-input CES function with respect to all coefficients are shown in the main paper in Equations 15 to 19. Derivatives with respect to Gamma ∂y ∂γ  − ν ρ −ρ + (1 − δ) x = eλ t δ x−ρ 1 2    ν −ρ −ρ λt = e exp − ln δ x1 + (1 − δ) x2 ρ = eλ t exp (f (ρ)) (62) (63) 0 λt (61)  ≈ e exp f (0) + ρf (0)   1 = eλ t exp νδ ln x1 + ν (1 − δ) ln x2 − νρ δ (1 − δ) (ln x1 − ln x2 )2 2   1 2 λ t νδ ν(1−δ) = e x1 x2 exp − νρ δ (1 − δ) (ln x1 − ln x2 ) 2 (64) (65) (66) Derivatives with respect to Delta ∂y ∂δ − ν −1   ν  −ρ ρ −ρ −ρ δ x1 + (1 − δ) x−ρ x − x 2 1 2 ρ  − ν −1 x−ρ − x−ρ ρ −ρ 2 δ x−ρ + (1 − δ) x = −γ eλ t ν 1 1 2 ρ = −γ eλ t (67) (68) Now we define the function fδ (ρ) fδ (ρ) = = −ρ  − ν −1 x−ρ ρ −ρ 1 − x2 δ x−ρ + (1 − δ) x 1 2 ρ     −ρ  x−ρ ν −ρ −ρ 1 − x2 exp − + 1 ln δ x1 + (1 − δ) x2 ρ ρ (69) (70) . hδ (ρ). and h0δ (ρ) for ρ → 0 by gδ (0) = lim gδ (ρ)    ν = lim + 1 ln (g (ρ)) ρ→0 ρ (ν + ρ) ln (g (ρ)) = lim ρ→0 ρ ρ→0 ln (g (ρ)) + (ν + ρ) = lim g 0 (ρ) g(ρ) 1 0 g (0) = ln (g (0)) + ν g (0) = −νδ ln x1 − ν (1 − δ) ln x2 ρ→0 (80) (81) (82) (83) (84) (85) 0 gδ0 (0) = lim ρ→0 (ρ) −ν ln (g (ρ)) + ρ (ν + ρ) gg(ρ) ρ2 (86) . gδ0 (ρ).60 Econometric Estimation of the Constant Elasticity of Substitution Function in R so that we can approximate ∂y/∂δ by using the first-order Taylor series approximation of fδ (ρ): ∂y ∂δ = −γ eλ t ν fδ (ρ) (71) ≈ −γ eλ t ν fδ (0) + ρfδ0 (0)  Now we define the helper functions gδ (ρ) and hδ (ρ)     ν −ρ gδ (ρ) = + 1 ln δ x−ρ + (1 − δ) x 1 2 ρ   ν = + 1 ln (g (ρ)) ρ hδ (ρ) = −ρ x−ρ 1 − x2 ρ (72) (73) (74) (75) with first derivatives   0 ν ν g (ρ) (ρ) = − 2 ln (g (ρ)) + +1 ρ ρ g (ρ)   −ρ −ρ − x−ρ −ρ ln x1 x−ρ 1 + x2 1 − ln x2 x2 0 hδ (ρ) = ρ2 gδ0 (76) (77) so that fδ (ρ) = hδ (ρ) exp (−gδ (ρ)) (78) fδ0 (ρ) = h0δ (ρ) exp (−gδ (ρ)) − hδ (ρ) exp (−gδ (ρ)) gδ0 (ρ) (79) and Now we can calculate the limits of gδ (ρ). G´eraldine Henningsen 0 0 = lim = = 00 (ρ)g(ρ)−(g 0 (ρ))2 (g(ρ))2 (87) 2ρ ρ→0  = 0 (ρ) (ρ) (ρ) + (ν + ρ) gg(ρ) + ρ gg(ρ) + ρ (ν + ρ) g −ν gg(ρ) 61 lim  0 0 0 0 (ρ) (ρ) (ρ) (ρ) + ν gg(ρ) + ρ gg(ρ) + ρ gg(ρ) + ρ (ν + ρ) g −ν gg(ρ) lim g 00 (ρ) g (ρ) − (g 0 (ρ))2 g 0 (ρ) 1 + (ν + ρ) g (ρ) 2 (g (ρ))2  2 (g(ρ))  2ρ ρ→0 ρ→0 00 (ρ)g(ρ)−(g 0 (ρ))2 ! (89) g 0 (0) 1 g 00 (0) g (0) − (g 0 (0))2 + ν g (0) 2 (g (0))2 = −δ ln x1 − (1 − δ) ln x2 + (90) νδ (1 − δ) (ln x1 − ln x2 )2 2 (91) −ρ x−ρ 1 − x2 hδ (0) = lim ρ→0 ρ −ρ − ln x1 x−ρ 1 + ln x2 x2 = lim ρ→0 1 = − ln x1 + ln x2 h0δ (0) = lim (93) (94) (95) ρ2  = (92)   −ρ −ρ − x−ρ − ln x x −ρ ln x1 x−ρ 2 1 + x2 2 1 ρ→0 lim     −ρ 2 −ρ − ln x1 x−ρ + ρ (ln x1 )2 x−ρ 1 − ln x2 x2 1 − (ln x2 ) x2 2ρ ρ→0 (88) ! −ρ ln x1 x−ρ − ln x x 2 2 1 + 2ρ  1 2 −ρ = lim (ln x1 )2 x−ρ − (ln x ) x 2 1 2 ρ→0 2   1 = (ln x1 )2 − (ln x2 )2 2  (96) (97) (98) so that we can calculate the limit of fδ (ρ)and fδ0 (ρ) for ρ → 0 by fδ (0) = = = = lim fδ (ρ) (99) ρ→0 lim (hδ (ρ) exp (−gδ (ρ))) (100) lim hδ (ρ) lim exp (−gδ (ρ)) ρ→0   lim hδ (ρ) exp − lim gδ (ρ) (101) ρ→0 ρ→0 ρ→0 ρ→0 (102) = hδ (0) exp (−gδ (0)) (103) = (− ln x1 + ln x2 ) exp (νδ ln x1 + ν (1 − δ) ln x2 ) (104) = (− ln x1 + ν(1−δ) ln x2 ) xνδ 1 x2 (105) .Arne Henningsen. 62 Econometric Estimation of the Constant Elasticity of Substitution Function in R fδ0 (0) = = = lim fδ0 (ρ) (106) ρ→0 lim h0δ (ρ) exp (−gδ (ρ)) − hδ (ρ) exp (−gδ (ρ)) gδ0 (ρ)  ρ→0 lim h0δ (ρ) lim exp (−gδ (ρ)) ρ→0 (108) ρ→0 − lim hδ (ρ) lim exp (−gδ (ρ)) lim gδ0 (ρ) ρ→0 ρ→0 ρ→0   = lim h0δ (ρ) exp − lim gδ (ρ) ρ→0 ρ→0   − lim hδ (ρ) exp − lim gδ (ρ) lim gδ0 (ρ) ρ→0 = = = = = = = = = ρ→0 h0δ (107) (109) ρ→0 (0) exp (−gδ (0)) − hδ (0) exp (−gδ (0)) gδ0 (0)  exp (−gδ (0)) h0δ (0) − hδ (0) gδ0 (0)    1 exp (νδ ln x1 + ν (1 − δ) ln x2 ) (ln x1 )2 − (ln x2 )2 2 − (− ln x1 + ln x2 )   νδ (1 − δ) (ln x1 − ln x2 )2 −δ ln x1 − (1 − δ) ln x2 + 2  1 ν(1−δ) 1 xνδ (ln x1 )2 − (ln x2 )2 + ln x1 − ln x2 1 x2 2 2   νδ (1 − δ) 2 (ln x1 − ln x2 ) −δ ln x1 − (1 − δ) ln x2 + 2  1 ν(1−δ) 1 xνδ (ln x1 )2 − (ln x2 )2 − δ (ln x1 )2 1 x2 2 2 νδ (1 − δ) − (1 − δ) ln x1 ln x2 + ln x1 (ln x1 − ln x2 )2 2  νδ (1 − δ) +δ ln x1 ln x2 + (1 − δ) (ln x2 )2 − ln x2 (ln x1 − ln x2 )2 2     1 1 2 νδ ν(1−δ) x1 x2 − δ (ln x1 ) + − δ (ln x2 )2 2 2    1 νδ (1 − δ) −2 − δ ln x1 ln x2 + (ln x1 − ln x2 ) (ln x1 − ln x2 )2 2 2   1 ν(1−δ) xνδ − δ (ln x1 − ln x2 )2 1 x2 2  νδ (1 − δ) 2 + (ln x1 − ln x2 ) (ln x1 − ln x2 ) 2   νδ (1 − δ) νδ ν(1−δ) 1 −δ+ (ln x1 − ln x2 ) (ln x1 − ln x2 )2 x1 x2 2 2 1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) νδ ν(1−δ) x1 x2 (ln x1 − ln x2 )2 2 (110) (111) (112) (113) (114) (115) (116) (117) (118) and approximate ∂y/∂δ by ∂y ∂δ  ≈ −γ eλ t ν fδ (0) + ρfδ0 (0) (119) . Arne Henningsen. G´eraldine Henningsen 63  ν(1−δ) = −γ eλ t ν (− ln x1 + ln x2 ) xνδ 1 x2 (120) 1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) νδ ν(1−δ) x1 x2 (ln x1 − ln x2 )2 2  ν(1−δ) = γ eλ t ν (ln x1 − ln x2 ) xνδ 1 x2  1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) νδ ν(1−δ) x1 x2 (ln x1 − ln x2 )2 2  +ρ −ρ (121) ν(1−δ) = γ eλ t ν (ln x1 − ln x2 ) xνδ 1 x2   1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) 1−ρ (ln x1 − ln x2 ) 2 (122) Derivatives with respect to Nu  − ν 1  ∂y ρ −ρ −ρ −ρ = −γ eλ t ln δ x−ρ + (1 − δ) x δ x + (1 − δ) x 1 2 1 2 ∂ν ρ Now we define the function fν (ρ)  − ν 1  −ρ ρ −ρ −ρ fν (ρ) = ln δ x1 + (1 − δ) x−ρ δ x + (1 − δ) x 2 1 2 ρ     ν 1  −ρ −ρ −ρ −ρ exp − ln δ x1 + (1 − δ) x2 = ln δ x1 + (1 − δ) x2 ρ ρ (123) (124) (125) so that we can approximate ∂y/∂ν by using the first-order Taylor series approximation of fν (ρ): ∂y ∂ν = −γ eλ t fν (ρ) (126) ≈ −γ eλ t fν (0) + ρfν0 (0)  (127) Now we define the helper function gν (ρ) gν (ρ) = =  1  −ρ ln δ x1 + (1 − δ) x−ρ 2 ρ 1 ln (g (ρ)) ρ (128) (129) with first and second derivative 0 gν0 (ρ) = = (ρ) ρ gg(ρ) − ln (g (ρ)) 1 g 0 (ρ) ln (g (ρ)) − ρ g (ρ) ρ2 gν00 (ρ) = − (131) ln (g (ρ)) 1 g 0 (ρ) 1 g 0 (ρ) 1 g 00 (ρ) 1 (g 0 (ρ))2 + − + 2 − ρ2 g (ρ) ρ g (ρ) ρ (g (ρ))2 ρ3 ρ2 g (ρ) 0 = (130) ρ2 (ρ) −2ρ gg(ρ) + ρ2 g 00 (ρ) g(ρ) − ρ2 ρ3 (g 0 (ρ))2 (g(ρ))2 (132) + 2 ln (g (ρ)) (133) . 64 Econometric Estimation of the Constant Elasticity of Substitution Function in R and use the function f (ρ) defined above so that fν (ρ) = gν (ρ) exp (f (ρ)) (134) fν0 (ρ) = gν0 (ρ) exp (f (ρ)) + gν (ρ) exp (f (ρ)) f 0 (ρ) (135) and Now we can calculate the limits of gν (ρ). gν0 (ρ). and gν00 (ρ) for ρ → 0 by gν (0) = = = lim gν (ρ) (136) ln (g (ρ)) ρ→0 ρ (137) ρ→0 lim lim g 0 (ρ) g(ρ) 1 = −δ ln x1 − (1 − δ) ln x2 gν0 (0) = ρ→0 lim gν0 (ρ)   1 g 0 (ρ) ln (g (ρ)) = lim − ρ→0 ρ g (ρ) ρ2 (138) (139) (140) ρ→0 (141) 0 = = lim (ρ) ρ gg(ρ) − ln (g (ρ)) lim ρ→0 (142) ρ2 ρ→0 g 0 (ρ) g(ρ) +ρg 00 (ρ)g(ρ)−(g 0 (ρ))2 (g(ρ))2 − g 0 (ρ) g(ρ) 2ρ g 00 (ρ) g (ρ) − (g 0 (ρ))2 = lim ρ→0 2 (g (ρ))2 = = = = = g 00 (0) g (0) − (g 0 (0))2 2 (g (0))2   1 2 2 2 δ (ln x1 ) + (1 − δ) (ln x2 ) − (−δ ln x1 − (1 − δ) ln x2 ) 2 1 δ (ln x1 )2 + (1 − δ) (ln x2 )2 − δ 2 (ln x1 )2 2  2 2 −2δ (1 − δ) ln x1 ln x2 − (1 − δ) (ln x2 )     1 δ − δ 2 (ln x1 )2 + (1 − δ) − (1 − δ)2 (ln x2 )2 2  −2δ (1 − δ) ln x1 ln x2  1 δ (1 − δ) (ln x1 )2 + (1 − δ) (1 − (1 − δ)) (ln x2 )2 2  −2δ (1 − δ) ln x1 ln x2 (143) (144) (145) (146) (147) (148) (149) . G´eraldine Henningsen = = gν00 (0) =  δ (1 − δ)  (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 2 δ (1 − δ) (ln x1 − ln x2 )2 2 lim gν00 (ρ)  0 (ρ) + ρ2 −2ρ gg(ρ) = lim  (152) g 00 (ρ) g(ρ) − ρ2 lim   lim  ρ→0 lim ρ→0 + 2 ln (g (ρ))  (153)  0 2 00 g 000 (ρ) g(ρ) (154) 3ρ2 ρ→0 + (g 0 (ρ))2 (g(ρ))2 ρ3 00 0 −ρ2 = (151) (ρ) (ρ)) (ρ) (ρ) − 2ρ gg(ρ) + 2ρ (g + 2ρ gg(ρ) + ρ2 −2 gg(ρ) (g(ρ))2  = (150) ρ→0 ρ→0 = 65 g 00 (ρ)g 0 (ρ) (g(ρ))2 ρ2 g 000 (ρ) g(ρ) 0 2 g 0 (ρ)g 00 (ρ) (g(ρ))2 2 3ρ (ρ)) − 2ρ (g − 2ρ2 (g(ρ))2 − 3ρ2 g 00 (ρ)g 0 (ρ) (g(ρ))2 3ρ2 + 2ρ2 (g 0 (ρ))3 (g(ρ))3 1 g 000 (ρ) g 00 (ρ) g 0 (ρ) 2 (g 0 (ρ))3 − + 3 g (ρ) 3 (g (ρ))3 (g (ρ))2 + 2ρ2 (g 0 (ρ))3 (g(ρ))3 0 (ρ) + 2 gg(ρ)     ! g 000 (0) g 00 (0) g 0 (0) 2 (g 0 (0))3 − + g (0) 3 (g (0))3 (g (0))2   −δ (ln x1 )3 − (1 − δ) (ln x2 )3 =   − δ (ln x1 )2 + (1 − δ) (ln x2 )2 (−δ ln x1 − (1 − δ) ln x2 ) = (155) 1 3 1 3 2 + (−δ ln x1 − (1 − δ) ln x2 )3 3 1 1 = − δ (ln x1 )3 − (1 − δ) (ln x2 )3 + δ 2 (ln x1 )3 3 3 +δ (1 − δ) (ln x1 )2 ln x2 + δ (1 − δ) ln x1 (ln x2 )2 + (1 − δ)2 (ln x2 )3  2 + δ 2 (ln x1 )2 + 2δ (1 − δ) ln x1 ln x2 + (1 − δ)2 (ln x2 )2 3 (−δ ln x1 − (1 − δ) ln x2 )   1 2 = δ − δ (ln x1 )3 + δ (1 − δ) (ln x1 )2 ln x2 3   1 2 2 +δ (1 − δ) ln x1 (ln x2 ) + (1 − δ) − (1 − δ) (ln x2 )3 3   2 2 2 2 2 + δ (ln x1 ) + 2δ (1 − δ) ln x1 ln x2 + (1 − δ) (ln x2 ) 3 (−δ ln x1 − (1 − δ) ln x2 )   1 2 = δ − δ (ln x1 )3 + δ (1 − δ) (ln x1 )2 ln x2 3 (156) (157) (158) (159) (160) (161) .Arne Henningsen. 66 Econometric Estimation of the Constant Elasticity of Substitution Function in R   1 2 +δ (1 − δ) ln x1 (ln x2 ) + (1 − δ) − (1 − δ) (ln x2 )3 3 2 3 − δ (ln x1 )3 + δ 2 (1 − δ) (ln x1 )2 ln x2 + 2δ 2 (1 − δ) (ln x1 )2 ln x2 3 2  +2δ (1 − δ)2 ln x1 (ln x2 )2 + δ (1 − δ)2 ln x1 (ln x2 )2 + (1 − δ)3 (ln x2 )3   1 2 = δ − δ (ln x1 )3 + δ (1 − δ) (ln x1 )2 ln x2 (162) 3   1 +δ (1 − δ) ln x1 (ln x2 )2 + (1 − δ)2 − (1 − δ) (ln x2 )3 3 2 3 − δ (ln x1 )3 − 2δ 2 (1 − δ) (ln x1 )2 ln x2 − 2δ (1 − δ)2 ln x1 (ln x2 )2 3 2 − (1 − δ)3 (ln x2 )3 3   1 2 3 2 = δ − δ − δ (ln x1 )3 + δ (1 − δ) − 2δ 2 (1 − δ) (ln x1 )2 ln x2 (163) 3 3    + δ 1 − δ − 2δ (1 − δ)2 ln x1 (ln x2 )2   1 2 2 3 + (1 − δ) − (1 − δ) − (1 − δ) (ln x2 )3 3 3   1 2 = δ − − δ 2 δ (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 (164) 3 3 + (1 − 2 (1 − δ)) δ (1 − δ) ln x1 (ln x2 )2   1 2 + (1 − δ) − − (1 − δ)2 (1 − δ) (ln x2 )3 3 3   1 2 = − + δ δ (1 − δ) (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 3 3 + (2δ − 1) δ (1 − δ) ln x1 (ln x2 )2   1 2 4 2 2 + 1 − δ − − + δ − δ (1 − δ) (ln x2 )3 3 3 3 3   1 2 = − + δ δ (1 − δ) (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 3 3   1 2 2 + (2δ − 1) δ (1 − δ) ln x1 (ln x2 ) + − δ δ (1 − δ) (ln x2 )3 3 3 1 = − (1 − 2δ) δ (1 − δ) (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 3 1 − (1 − 2δ) δ (1 − δ) ln x1 (ln x2 )2 + (1 − 2δ) δ (1 − δ) (ln x2 )3 3 1 = − (1 − 2δ) δ (1 − δ) 3  (165) (166) (167) (168) (ln x1 )3 + 3 (ln x1 )2 ln x2 + 3 ln x1 (ln x2 )2 − (ln x2 )3 1 = − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 3 (169) . Arne Henningsen. G´eraldine Henningsen 67 so that we can calculate the limit of fν (ρ)and fν0 (ρ) for ρ → 0 by fν (0) = = = = lim fν (ρ) (170) lim (gν (ρ) exp (f (ρ))) (171) lim gν (ρ) lim exp (f (ρ)) ρ→0   lim gν (ρ) exp lim f (ρ) (172) ρ→0 ρ→0 ρ→0 ρ→0 (173) ρ→0 = gν (0) exp (f (0)) (174) = (−δ ln x1 − (1 − δ) ln x2 ) exp (ν (δ ln x1 + (1 − δ) ln x2 )) = − (δ ln x1 + (1 − δ) fν0 (0) = ν(1−δ) ln x2 ) xνδ 1 x2 (175) (176) lim fν0 (ρ) (177) ρ→0  lim gν0 (ρ) exp (f (ρ)) + gν (ρ) exp (f (ρ)) f 0 (ρ) ρ→0     0 = lim gν (ρ) exp lim f (ρ) + lim gν (ρ) exp lim f (ρ) lim f 0 (ρ) (178) exp (f (0)) + gν (0) exp (f (0)) f 0 (0)  = exp (f (0)) gν0 (0) + gν (0) f 0 (0)  δ (1 − δ) = exp (ν (δ ln x1 + (1 − δ) ln x2 )) (ln x1 − ln x2 )2 2   νδ (1 − δ) 2 (ln x1 − ln x2 ) + (−δ ln x1 − (1 − δ) ln x2 ) − 2 ν(1−δ) δ (1 − δ) = xνδ (ln x1 − ln x2 )2 (1 + ν (δ ln x1 + (1 − δ) ln x2 )) 1 x2 2 (180) = = ρ→0 gν0 (0) ρ→0 ρ→0 ρ→0 ρ→0 (179) (181) (182) (183) and approximate ∂y/∂ν by ∂y ∂ν ≈ −γ eλ t fν (0) + ρfν0 (0)  (184) ν(1−δ) = γ eλ t (δ ln x1 + (1 − δ) ln x2 ) xνδ 1 x2 ν(1−δ) δ (1 − δ) −γ eλ t ρxνδ (ln x1 − ln x2 )2 1 x2 2 (1 + ν (δ ln x1 + (1 − δ) ln x2 ))  λ t νδ ν(1−δ) = γ e x1 x2 δ ln x1 + (1 − δ) ln x2 (185) (186)  ρδ (1 − δ) 2 − (ln x1 − ln x2 ) (1 + ν (δ ln x1 + (1 − δ) ln x2 )) 2 Derivatives with respect to Rho ∂y ∂ρ = γ eλ t − ν   ν  −ρ ρ −ρ −ρ −ρ δ x + (1 − δ) x ln δ x + (1 − δ) x 1 2 1 2 ρ2 (187) . 68 Econometric Estimation of the Constant Elasticity of Substitution Function in R   − ν +1   ν  −ρ ρ −ρ −ρ +γ e δ x1 + (1 − δ) x−ρ δ x ln x + (1 − δ) x ln x 1 2 2 1 2 ρ   − ν   1 ρ −ρ −ρ −ρ −ρ = γ eλ t ν δ x + (1 − δ) x ln δ x + (1 − δ) x (188) 1 2 1 2 ρ2 !    − ν +1  1  −ρ ρ −ρ −ρ δ x1 + (1 − δ) x2 δ x−ρ ln x + (1 − δ) x ln x + 1 2 1 2 ρ λt Now we define the function fρ (ρ) − ν   1  −ρ ρ −ρ −ρ −ρ fρ (ρ) = δ x + (1 − δ) x ln δ x + (1 − δ) x 1 2 1 2 ρ2    − ν +1  1  −ρ ρ −ρ −ρ + δ x ln x + (1 − δ) x ln x δ x1 + (1 − δ) x−ρ 1 2 1 2 2 ρ (189) so that we can approximate ∂y/∂ρ by using the first-order Taylor series approximation of fρ (ρ): ∂y ∂ρ = γ eλ t ν fρ (ρ) (190) ≈ γ eλ t ν fρ (0) + ρfρ0 (0)  (191) We define the helper function gρ (ρ) −ρ gρ (ρ) = δ x−ρ 1 ln x1 + (1 − δ) x2 ln x2 (192) with first and second derivative −ρ 2 2 gρ0 (ρ) = −δ x−ρ 1 (ln x1 ) − (1 − δ) x2 (ln x2 ) gρ00 (ρ) = 3 δ x−ρ 1 (ln x1 ) + (1 − δ) 3 x−ρ 2 (ln x2 ) and use the functions g (ρ) and gν (ρ) all defined above so that − ν   1  −ρ ρ −ρ −ρ −ρ + (1 − δ) x + (1 − δ) x fρ (ρ) = δ x ln δ x 2 2 1 1 ρ2   − ν +1   1  −ρ ρ −ρ −ρ + δ x1 + (1 − δ) x−ρ δ x ln x + (1 − δ) x ln x 1 2 2 1 2 ρ ν     −ρ 1 −ρ −ρ −ρ = δ x + (1 − δ) x ln δ x−ρ 1 2 1 + (1 − δ) x2 ρ2 − ν  −1 1  −ρ ρ −ρ + δ x1 + (1 − δ) x−ρ δ x−ρ 2 1 + (1 − δ) x2 ρ   −ρ δ x−ρ ln x + (1 − δ) x ln x 1 2 1 2 ν     −ρ 1  −ρ 1 −ρ −ρ δ x−ρ + (1 − δ) x ln δ x + (1 − δ) x = 1 2 1 2 ρ ρ  −1   −ρ −ρ −ρ −ρ + δ x1 + (1 − δ) x2 δ x1 ln x1 + (1 − δ) x2 ln x2    1   1 ν  −ρ −ρ −ρ = exp − ln δ x1 + (1 − δ) x2 ln δ x−ρ + (1 − δ) x 1 2 ρ ρ ρ (193) (194) (195) (196) (197) (198) . G´eraldine Henningsen  δ x−ρ 1 x−ρ 2 −1  δ x−ρ 1 ln x1 + (1 − δ) + (1 − δ)   exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ) + = 69 x−ρ 2 ln x2  (199) ρ and we can calculate its first derivative fρ0 (ρ) =   −ρν exp (−νgν (ρ)) gν0 (ρ) gν (ρ) + g (ρ)−1 gρ (ρ) (200) ρ2  +  ρ exp (−νgν (ρ)) gν0 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ) ρ2  − exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ)  ρ2  = exp (−νgν (ρ)) ρ −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ)  ρ2 (201)  exp (−νgν (ρ)) ρ gν0 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ)  + ρ2  − exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ)  ρ2 exp (−νgν (ρ))   ρ −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) 2 ρ  0 +gν (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ)  −gν (ρ) − g (ρ)−1 gρ (ρ) exp (−νgν (ρ))  −νρgν0 (ρ) gν (ρ) − νρgν0 (ρ) g (ρ)−1 gρ (ρ) = 2 ρ 0 +ρgν (ρ) − ρg (ρ)−2 g 0 (ρ) gρ (ρ) + ρg (ρ)−1 gρ0 (ρ)  −gν (ρ) − g (ρ)−1 gρ (ρ) = (202) (203) Now we can calculate the limits of gρ (ρ). gρ0 (ρ).Arne Henningsen. and gρ00 (ρ) for ρ → 0 by gρ (0) = lim gρ (ρ)   −ρ = lim δ x−ρ ln x + (1 − δ) x ln x 1 2 1 2 (204) −ρ = δ ln x1 lim x−ρ 1 + (1 − δ) ln x2 lim x2 (206) = δ ln x1 + (1 − δ) ln x2 (207) ρ→0 ρ→0 ρ→0 gρ0 (0) = ρ→0 lim gρ0 (ρ)   −ρ 2 2 = lim −δ x−ρ (ln x ) − (1 − δ) x (ln x ) 1 2 1 2 ρ→0 ρ→0 (205) (208) (209) . 70 Econometric Estimation of the Constant Elasticity of Substitution Function in R −ρ 2 = −δ (ln x1 )2 lim x−ρ 1 − (1 − δ) (ln x2 ) lim x2 (210) = −δ (ln x1 )2 − (1 − δ) (ln x2 )2 (211) ρ→0 gρ00 (0) = ρ→0 lim gρ00 (ρ)   −ρ 3 3 = lim δ x−ρ (ln x ) + (1 − δ) x (ln x ) 1 2 1 2 (212) −ρ 3 = δ (ln x1 )3 lim x−ρ 1 + (1 − δ) (ln x2 ) lim x2 (214) = δ (ln x1 )3 + (1 − δ) (ln x2 )3 (215) ρ→0 ρ→0 ρ→0 ρ→0 (213) so that we can calculate the limit of fρ (ρ) for ρ → 0 by fρ (0) = = = = = = = = lim fρ (ρ)    exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ)  lim  ρ→0 ρ    −ν exp (−νgν (ρ)) gν0 (ρ) gν (ρ) + g (ρ)−1 gρ (ρ) lim  ρ→0 1   exp (−νgν (ρ)) gν0 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ)  + 1   −ν exp (−νgν (0)) gν0 (0) gν (0) + g (0)−1 gρ (0)   + exp (−νgν (0)) gν0 (0) − g (0)−2 g 0 (0) gρ (0) + g (0)−1 gρ0 (0)   exp (−νgν (0)) −ν gν0 (0) gν (0) + g (0)−1 gρ (0)   + exp (−νgν (0)) gν0 (0) − g (0)−2 g 0 (0) gρ (0) + g (0)−1 gρ0 (0)    exp (−νgν (0)) −ν gν0 (0) gν (0) + g (0)−1 gρ (0)  +gν0 (0) − g (0)−2 g 0 (0) gρ (0) + g (0)−1 gρ0 (0)  νδ (1 − δ) exp (−ν (−δ ln x1 − (1 − δ) ln x2 )) − (ln x1 − ln x2 )2 2 (−δ ln x1 − (1 − δ) ln x2 + δ ln x1 + (1 − δ) ln x2 ) δ (1 − δ) + (ln x1 − ln x2 )2 2 − (−δ ln x1 − (1 − δ) ln x2 ) (δ ln x1 + (1 − δ) ln x2 )  −δ (ln x1 )2 − (1 − δ) (ln x2 )2  νδ ν(1−δ) 1 x1 x2 δ (1 − δ) (ln x1 )2 − δ (1 − δ) ln x1 ln x2 2 1 + δ (1 − δ) (ln x2 )2 + δ 2 (ln x1 )2 + 2δ (1 − δ) ln x1 ln x2 2 ρ→0 (216) (217) (218) (219) (220) (221) (222) (223) . we can calculate limρ→0 fρ0 (ρ) by using de l’Hospital’s rule.Arne Henningsen. where the numerator converges to zero if hρ (ρ) converges to zero for ρ → 0 hρ (ρ) = −gν (ρ) − g (ρ)−1 gρ (ρ) (230) hρ (0) = lim hρ (ρ)   = lim −gν (ρ) − g (ρ)−1 gρ (ρ) (231) = −gν (0) − g (0)−1 gρ (0) (233) = − (−δ ln x1 − (1 − δ) ln x2 ) − (δ ln x1 + (1 − δ) ln x2 ) (234) = 0 (235) ρ→0 ρ→0 (232) As both the numerator and the denominator converge to zero. G´eraldine Henningsen + (1 − δ)2 (ln x2 )2 − δ (ln x1 )2 − (1 − δ) (ln x2 )2   1 νδ ν(1−δ) 2 = x1 x2 δ (1 − δ) + δ − δ (ln x1 )2 2   1 2 + δ (1 − δ) + (1 − δ) − (1 − δ) 2  (ln x2 )2 + δ (1 − δ) ln x1 ln x2   1 1 2 νδ ν(1−δ) 2 = x1 x2 δ − δ + δ − δ (ln x1 )2 2 2   1 1 2 2 + δ − δ + 1 − 2δ + δ − 1 + δ (ln x2 )2 2 2 +δ (1 − δ) ln x1 ln x2 )   1 1 2 νδ ν(1−δ) = x1 x2 − δ + δ (ln x1 )2 2 2 !   1 1 2 2 + − δ + δ (ln x2 ) + δ (1 − δ) ln x1 ln x2 2 2 ν(1−δ) = xνδ 1 x2 71  1 − δ (1 − δ) (ln x1 )2 2 ! 1 2 − δ (1 − δ) (ln x2 ) + δ (1 − δ) ln x1 ln x2 2   1 ν(1−δ) = − δ (1 − δ) xνδ (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 1 x2 2 1 ν(1−δ) (ln x1 − ln x2 )2 = − δ (1 − δ) xνδ 1 x2 2 (224) (225) (226) (227) (228) (229) Before we can apply de l’Hospital’s rule to limρ→0 fρ0 (ρ). fρ0 (0) = lim fρ0 (ρ)  exp (−νgν (ρ)) −νρgν0 (ρ) gν (ρ) = lim ρ→0 ρ2 ρ→0 (236) (237) . we have to check whether also the numerator converges to zero. We do this by defining a helper function hρ (ρ). 72 Econometric Estimation of the Constant Elasticity of Substitution Function in R −νρgν0 (ρ) g (ρ)−1 gρ (ρ) + ρgν0 (ρ) − ρg (ρ)−2 g 0 (ρ) gρ (ρ)  +ρg (ρ)−1 gρ0 (ρ) − gν (ρ) − g (ρ)−1 gρ (ρ)  1 = lim (exp (−νgν (ρ))) lim −νρgν0 (ρ) gν (ρ) ρ→0 ρ→0 ρ2 −νρgν0 (ρ) g (ρ)−1 gρ (ρ) + ρgν0 (ρ) − ρg (ρ)−2 g 0 (ρ) gρ (ρ)  +ρg (ρ)−1 gρ0 (ρ) − gν (ρ) − g (ρ)−1 gρ (ρ)  1 = lim (exp (−νgν (ρ))) lim −νgν0 (ρ) gν (ρ) − νρgν00 (ρ) gν (ρ) ρ→0 ρ→0 2ρ (238) (239) −νρgν0 (ρ) gν0 (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) − νρgν00 (ρ) g (ρ)−1 gρ (ρ) +νρgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νρgν0 (ρ) g (ρ)−1 gρ0 (ρ) + gν0 (ρ) 2 +ρgν00 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + 2ρg (ρ)−3 g 0 (ρ) gρ (ρ) −ρg (ρ)−2 g 00 (ρ) gρ (ρ) − ρg (ρ)−2 g 0 (ρ) gρ0 (ρ) + g (ρ)−1 gρ0 (ρ) −ρg (ρ)−2 g 0 (ρ) gρ0 (ρ) + ρg (ρ)−1 gρ00 (ρ) =  −gν0 (ρ) + g (ρ)−2 g 0 (ρ) gρ (ρ) − g (ρ)−1 gρ0 (ρ)  1 1 lim (exp (−νgν (ρ))) lim −νgν0 (ρ) gν (ρ) − νρgν00 (ρ) gν (ρ) ρ→0 ρ 2 ρ→0 (240) −νρgν0 (ρ) gν0 (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) − νρgν00 (ρ) g (ρ)−1 gρ (ρ) +νρgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νρgν0 (ρ) g (ρ)−1 gρ0 (ρ) + ρgν00 (ρ) 2 +2ρg (ρ)−3 g 0 (ρ) gρ (ρ) − ρg (ρ)−2 g 00 (ρ) gρ (ρ)  −2ρg (ρ)−2 g 0 (ρ) gρ0 (ρ) + ρg (ρ)−1 gρ00 (ρ) = 1 lim (exp (−νgν (ρ))) 2 ρ→0    1 lim −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) ρ→0 ρ (241) −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) 2 +gν00 (ρ) + 2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ)  −2 0 −1 00 0 −2g (ρ) g (ρ) gρ (ρ) + g (ρ) gρ (ρ) = 1 lim (exp (−νgν (ρ))) 2 ρ→0     1 −1 0 0 lim −νgν (ρ) gν (ρ) − νgν (ρ) g (ρ) gρ (ρ) ρ→0 ρ  + lim −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0 +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) + gν00 (ρ) 2 +2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ) (242) . where the numerator converges to zero if kρ (ρ) converges to zero for ρ → 0 kρ (ρ) = −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) (243) kρ (0) = (244) lim kρ (ρ)   = lim −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) ρ→0 ρ→0 = −νgν0 (0) gν (0) − νgν0 (0) g (0)−1 gρ (0) νδ (1 − δ) = − (ln x1 − ln x2 )2 (−δ ln x1 − (1 − δ) ln x2 ) 2 νδ (1 − δ) − (ln x1 − ln x2 )2 (δ ln x1 + (1 − δ) ln x2 ) 2 = 0 (245) (246) (247) (248) As both the numerator and the denominator converge to zero.Arne Henningsen. we can apply de l’Hospital’s rule. we have to check if also the numerator converges to zero. fρ0 (0) = 1 lim (exp (−νgν (ρ))) 2 ρ→0    1 −1 0 0 lim −νgν (ρ) gν (ρ) − νgν (ρ) g (ρ) gρ (ρ) ρ→0 ρ  + lim −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) (251) ρ→0 +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) 2 +gν00 (ρ) + 2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ) !  −2g (ρ)−2 g 0 (ρ) gρ0 (ρ) + g (ρ)−1 gρ00 (ρ) = 1 lim (exp (−νgν (ρ))) 2 ρ→0  2 lim −νgν00 (ρ) gν (ρ) − ν gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0  +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) (252) . G´eraldine Henningsen −2g (ρ) −2 0 g (ρ) gρ0 (ρ) + g (ρ)−1 gρ00 (ρ) 73  Before we can apply de l’Hospital’s rule again. We do this by defining a helper function kρ (ρ). kρ (ρ) ρ→0 ρ lim = lim kρ (ρ)  2 = lim −νgν00 (ρ) gν (ρ) − ν gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0  +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) ρ→0 (249) (250) and hence. 74 Econometric Estimation of the Constant Elasticity of Substitution Function in R  + lim −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0 +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) + gν00 (ρ) 2 +2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ) !  −2 0 −1 00 0 −2g (ρ) g (ρ) gρ (ρ) + g (ρ) gρ (ρ) =  2 1 exp (−νgν (0)) −νgν00 (0) gν (0) − ν gν0 (0) 2 −νgν00 (0) g (0)−1 gρ (0) + νgν0 (0) g (0)−2 g 0 (0) gρ (0) (253) −νgν0 (0) g (0)−1 gρ0 (0) − νgν00 (0) gν (0) − νgν0 (0) gν0 (0) −νgν00 (0) g (0)−1 gρ (0) + νgν0 (0) g (0)−2 g 0 (0) gρ (0) 2 −νgν0 (0) g (0)−1 gρ0 (0) + gν00 (0) + 2g (0)−3 g 0 (0) gρ (0)  −g (0)−2 g 00 (0) gρ (0) − 2g (0)−2 g 0 (0) gρ0 (0) + g (0)−1 gρ00 (0)  2 1 = exp (−νgν (0)) −νgν00 (0) gν (0) − ν gν0 (0) − νgν00 (0) gρ (0) 2 +νgν0 (0) g 0 (0) gρ (0) − νgν0 (0) gρ0 (0) −νgν00 (0) gν (0) − νgν0 (0) gν0 (0) − νgν00 (0) gρ (0) + νgν0 (0) g 0 (0) gρ (0) 2 −νgν0 (0) gρ0 (0) + gν00 (0) + 2 g 0 (0) gρ (0) − g 00 (0) gρ (0)  −2g 0 (0) gρ0 (0) + gρ00 (0)  2 1 exp (−νgν (0)) −2νgν00 (0) gν (0) − 2ν gν0 (0) − 2νgν00 (0) gρ (0) = 2 +2νgν0 (0) g 0 (0) gρ (0) − 2νgν0 (0) gρ0 (0) 2 +gν00 (0) + 2 g 0 (0) gρ (0) − g 00 (0) gρ (0)  −2g 0 (0) gρ0 (0) + gρ00 (0) 1 = exp (−νgν (0)) gν00 (0) (−2νgν (0) − 2νgρ (0) + 1) 2  +νgν0 (0) −2gν0 (0) + 2g 0 (0) gρ (0) − 2gρ0 (0) 2 +2 g 0 (0) gρ (0) − g 00 (0) gρ (0)  −2g 0 (0) gρ0 (0) + gρ00 (0) = 1 exp (−ν (−δ ln x1 − (1 − δ) ln x2 )) 2   1 3 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 ) 3 (−2ν (−δ ln x1 − (1 − δ) ln x2 ) − 2ν (δ ln x1 + (1 − δ) ln x2 ) + 1)  δ (1 − δ) δ (1 − δ) 2 (ln x1 − ln x2 ) −2 (ln x1 − ln x2 )2 +ν 2 2 +2 (−δ ln x1 − (1 − δ) ln x2 ) (δ ln x1 + (1 − δ) ln x2 )   −2 −δ (ln x1 )2 − (1 − δ) (ln x2 )2 +2 (−δ ln x1 − (1 − δ) ln x2 )2 (δ ln x1 + (1 − δ) ln x2 ) (254) (255) (256) (257) . Arne Henningsen. G´eraldine Henningsen 75   − δ (ln x1 )2 + (1 − δ) (ln x2 )2 (δ ln x1 + (1 − δ) ln x2 )   −2 (−δ ln x1 − (1 − δ) ln x2 ) −δ (ln x1 )2 − (1 − δ) (ln x2 )2  +δ (ln x1 )3 + (1 − δ) (ln x2 )3  1 νδ ν(1−δ) 1 x1 x2 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 = 2 3 (2νδ ln x1 + 2ν (1 − δ) ln x2 − 2νδ ln x1 − 2ν (1 − δ) ln x2 + 1)   1 + νδ (1 − δ) (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 2 (258) −δ (1 − δ) (ln x1 )2 + 2δ (1 − δ) ln x1 ln x2 − δ (1 − δ) (ln x2 )2 −2δ 2 (ln x1 )2 − 4δ (1 − δ) ln x1 ln x2 − 2 (1 − δ)2 (ln x2 )2  +2δ (ln x1 )2 + 2 (1 − δ) (ln x2 )2 +2δ 3 (ln x1 )3 + 6δ 2 (1 − δ) (ln x1 )2 ln x2 + 6δ (1 − δ)2 ln x1 (ln x2 )2 +2 (1 − δ)3 (ln x2 )3 − δ 2 (ln x1 )3 − δ (1 − δ) (ln x1 )2 ln x2 −δ (1 − δ) ln x1 (ln x2 )2 − (1 − δ)2 (ln x2 )3 − 2δ 2 (ln x1 )3 −2δ (1 − δ) ln x1 (ln x2 )2 − 2δ (1 − δ) (ln x1 )2 ln x2 − 2 (1 − δ)2 (ln x2 )3  +δ (ln x1 )3 + (1 − δ) (ln x2 )3 =  1 νδ ν(1−δ) 1 x1 x2 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 2 3   1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2   2 2 −δ (1 − δ) − 2δ + 2δ (ln x1 ) (259) + (2δ (1 − δ) − 4δ (1 − δ)) ln x1 ln x2    + −δ (1 − δ) − 2 (1 − δ)2 + 2 (1 − δ) (ln x2 )2  + 2δ 3 − δ 2 − 2δ 2 + δ (ln x1 )3  + 6δ 2 (1 − δ) − δ (1 − δ) − 2δ (1 − δ) (ln x1 )2 ln x2   + 6δ (1 − δ)2 − δ (1 − δ) − 2δ (1 − δ) ln x1 (ln x2 )2    + 2 (1 − δ)3 − (1 − δ)2 − 2 (1 − δ)2 + (1 − δ) (ln x2 )3  1 1 νδ ν(1−δ) x x − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 = 2 1 2 3   1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2   −δ + δ 2 − 2δ 2 + 2δ (ln x1 )2 (260) . 76 Econometric Estimation of the Constant Elasticity of Substitution Function in R −2δ (1 − δ) ln x1 ln x2   + −δ + δ 2 − 2 + 4δ − 2δ 2 + 2 − 2δ (ln x2 )2  + 2δ 3 − 3δ 2 + δ (ln x1 )3  + 6δ 2 − 6δ 3 − δ + δ 2 − 2δ + 2δ 2 (ln x1 )2 ln x2  + 6δ − 12δ 2 + 6δ 3 − δ + δ 2 − 2δ + 2δ 2 ln x1 (ln x2 )2   + 2 − 6δ + 6δ 2 − 2δ 3 − 1 + 2δ − δ 2 − 2 + 4δ − 2δ 2 + 1 − δ (ln x2 )3  1 νδ ν(1−δ) 1 = x1 x2 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 2 3   1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2     2 2 2 −δ + δ (ln x1 ) − 2δ (1 − δ) ln x1 ln x2 + −δ + δ (ln x2 )2   + 2δ 3 − 3δ 2 + δ (ln x1 )3 + −6δ 3 + 9δ 2 − 3δ (ln x1 )2 ln x2    + 6δ 3 − 9δ 2 + 3δ ln x1 (ln x2 )2 + −2δ 3 + 3δ 2 − δ (ln x2 )3 =  1 νδ ν(1−δ) 1 − (1 − 2δ) δ (1 − δ) (ln x1 )3 x1 x2 2 3 (261) (262) + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 − (1 − 2δ) δ (1 − δ) ln x1 (ln x2 )2 1 + (1 − 2δ) δ (1 − δ) (ln x2 )3 3  1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2   δ (1 − δ) (ln x1 )2 − 2δ (1 − δ) ln x1 ln x2 + δ (1 − δ) (ln x2 )2 +δ (1 − δ) (1 − 2δ) (ln x1 )3 − 3δ (1 − δ) (1 − 2δ) (ln x1 )2 ln x2 +3δ (1 − δ) (1 − 2δ) ln x1 (ln x2 )2 − δ (1 − δ) (1 − 2δ) (ln x2 )3   1 νδ ν(1−δ) 1 2 = x x νδ (1 − δ)2 (ln x1 )4 − νδ 2 (1 − δ)2 (ln x1 )3 ln x2 2 1 2 2 1 + νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 − νδ 2 (1 − δ)2 (ln x1 )3 ln x2 2 +2νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 − νδ 2 (1 − δ)2 ln x1 (ln x2 )3 1 + νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 − νδ 2 (1 − δ)2 ln x1 (ln x2 )3 2 1 + νδ 2 (1 − δ)2 (ln x2 )4 2  1 + − (1 − 2δ) δ (1 − δ) + δ (1 − δ) (1 − 2δ) (ln x1 )3 3 + ((1 − 2δ) δ (1 − δ) − 3δ (1 − δ) (1 − 2δ)) (ln x1 )2 ln x2 (263) . Arne Henningsen. G´eraldine Henningsen 77 + (− (1 − 2δ) δ (1 − δ) + 3δ (1 − δ) (1 − 2δ)) ln x1 (ln x2 )2    1 3 + (1 − 2δ) δ (1 − δ) − δ (1 − δ) (1 − 2δ) (ln x2 ) 3 = 1 νδ ν(1−δ) x x 2 1 2  1 2 νδ (1 − δ)2 (ln x1 )4 − 2νδ 2 (1 − δ)2 (ln x1 )3 ln x2 2 (264) +3νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 1 −2νδ 2 (1 − δ)2 ln x1 (ln x2 )3 + νδ 2 (1 − δ)2 (ln x2 )4 2 2 3 + δ (1 − δ) (1 − 2δ) (ln x1 ) − 2δ (1 − δ) (1 − 2δ) (ln x1 )2 ln x2 3  2 2 3 +2δ (1 − δ) (1 − 2δ) ln x1 (ln x2 ) − δ (1 − δ) (1 − 2δ) (ln x2 ) 3 = 1 νδ ν(1−δ) x x 2 1 2    1 2 νδ (1 − δ)2 (ln x1 )4 − 4 (ln x1 )3 ln x2 2 +6 (ln x1 )2 (ln x2 )2 − 4 ln x1 (ln x2 )3 + (ln x2 )4 2 + δ (1 − δ) (1 − 2δ) 3  (ln x1 )3 − 3 (ln x1 )2 ln x2 + 3 ln x1 (ln x2 )2 − (ln x2 )3 (265)  ν(1−δ) = δ (1 − δ) xνδ 1 x2   1 1 3 4 (1 − 2δ) (ln x1 − ln x2 ) + νδ (1 − δ) (ln x1 − ln x2 ) 3 4 (266) Hence. we can approximate ∂y/∂ρ by a second-order Taylor series approximation: ∂y ∂ρ  ≈ γ eλ t ν fρ (0) + ρ fρ0 (0) (267) 1 ν(1−δ) = − γ eλ t ν δ (1 − δ) xνδ (ln x1 − ln x2 )2 1 x2 2 ν(1−δ) +γ eλ t ν ρ δ (1 − δ) xνδ 1 x2   2 1 3 4 (1 − 2δ) (ln x1 − ln x2 ) + νδ (1 − δ) (ln x1 − ln x2 ) 3 2 ν(1−δ) = γ eλ t ν δ (1 − δ) xνδ 1 x2 (268) 1 − (ln x1 − ln x2 )2 2 1 1 + ρ (1 − 2δ) (ln x1 − ln x2 )3 + ρνδ (1 − δ) (ln x1 − ln x2 )4 3 4 B. Three-input nested CES function (269) ! . . (273) Limits for ρ1 and/or ρ approaching zero The limits of the three-input nested CES function for ρ1 and/or ρ approaching zero are: − ν  ρ lim y =γ eλ t δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0      ln(B1 ) lim y =γ eλ t exp ν δ − + (1 − δ) ln x3 ρ→0 ρ1 lim lim y =γ eλ t exp {ν (δ L1 + (1 − δ) ln x3 )} ρ1 →0 ρ→0 (274) (275) (276) B. 289.2. where ∂y/∂γ is calculated according to Equation 277. or 295 depending on the values of ρ1 and ρ.78 Econometric Estimation of the Constant Elasticity of Substitution Function in R The nested CES function with three inputs is defined as y = γ eλ t B −1/ρ (270) with ρ/ρ1 B =δ B1 1 B1 =δ1 x−ρ 1 + (1 − δ)x−ρ 3 + (1 − 1 δ1 ) x−ρ 2 (271) (272) For further simplification of the formulas in this section. 283. Derivatives with respect to coefficients The partial derivatives of the three-input nested CES function with respect to the coefficients are:27 ∂y ∂γ ∂y ∂δ1 ∂y ∂δ ∂y ∂ν ∂y ∂ρ1 =eλ t B −ν/ρ (277) ρ−ρ 1 ν −ν−ρ ρ ρ 1 1 − x−ρ = − γ eλ t B ρ δ B1 1 (x−ρ 1 2 ) ρ ρ1  ρ  −ν−ρ ρ1 −ρ λt ν ρ =−γe B B1 − x3 ρ 1 −ν = − γ eλ t ln(B)B ρ ρ −ν−ρ ν = − γ eλ t B ρ δ ρ    1 ρ   ρ−ρ ρ1 ρ ρ ρ1 ρ1 −ρ1 ln(B1 )B1 − 2 + B1 −δ1 ln x1 x1 − (1 − δ1 ) ln x2 x2 ρ1 ρ1   ρ ν ν −ν−ρ ∂y ν 1 ρ − =γ eλ t 2 ln(B)B ρ − γ eλ t B ρ δ ln(B1 )B1 1 − (1 − δ) ln x3 x−ρ 3 ∂ρ ρ ρ ρ1 (278) (279) (280) (281) (282) 27 The partial derivatives with respect to λ are always calculated using Equation 16.1. we make the following definition: L1 = δ1 ln(x1 ) + (1 − δ1 ) ln(x2 ) B. G´eraldine Henningsen 79 Limits of the derivatives for ρ approaching zero      ∂y ln(B1 ) + (1 − δ) ln x3 =eλ t exp ν δ − ρ→0 ∂γ ρ1 (283) lim ∂y lim = − γ eλ t ν ρ→0 ∂δ1 " 1 (x−ρ1 − x−ρ 2 ) δ 1 ρ 1 B1 ! #   ln(B1 ) exp −ν −δ − − (1 − δ) ln x3 ρ1   (284) ∂y lim = − γ eλ t ν ρ→0 ∂δ        ln(B1 ) ln(B1 ) + ln x3 exp ν −δ − − (1 − δ) ln x3 ρ1 ρ1     ln(B1 ) δ − + (1 − δ) ln x3 ρ1      ln(B1 ) · exp −ν −δ − − (1 − δ) ln x3 ρ1 ∂y lim =γ eλ t ρ→0 ∂ν 1 1 δ ln(B1 ) −δ1 ln x1 x−ρ − (1 − δ1 ) ln x2 x−ρ ∂y 1 2 = − γ eλ t ν − + lim ρ→0 ∂ρ1 ρ1 ρ1 B1      ln(B1 ) · exp −ν −δ − − (1 − δ) ln x3 ρ1 ∂y lim =γ eλ t ν ρ→0 ∂ρ " 1 − 2  δ ln B1 ρ1 ! 2 2 + (1 − δ) (ln x3 ) 1 + 2 (285) (286) ! (287)  2 # ln B1 δ − (1 − δ) ln x3 ρ1 (288)    ln B1 + (1 − δ) ln x3 · exp ν −δ ρ1 Limits of the derivatives for ρ1 approaching zero  − ν ∂y ρ =eλ t δ · {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0 ∂γ lim (289)  − ν −1 ∂y ρ {exp(L1 )}−ρ (ln x1 − ln x2 ) (290) = − γ eλ t ν δ δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0 ∂δ1 lim − ν −1   ∂y ν ρ −ρ −ρ = − γ eλ t δ {exp(L1 )}−ρ + (1 − δ)x−ρ {exp(L )} − x 1 3 3 ρ1 →0 ∂δ ρ lim (291) .Arne Henningsen. 3.80 Econometric Estimation of the Constant Elasticity of Substitution Function in R   − ν ∂y ρ −ρ −ρ −ρ −ρ λt 1 lim =−γe ln δ {exp(L1 )} + (1 − δ)x3 δ {exp(L1 )} + (1 − δ)x3 ρ1 →0 ∂ν ρ (292)    ∂y 1 = − γ eλ t ν δ exp (−ρ L1 ) δ1 (ln x1 )2 + (1 − δ1 ) (ln x2 )2 − L21 ρ1 →0 ∂ρ1 2  − ν+ρ ρ δ exp (−ρ L1 ) + (1 − δ) x−ρ 3 lim   ∂y ν =γ eλ t 2 ln δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0 ∂ρ ρ  − ν ρ δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 − ν −1 ν ρ − γ eλ t δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ   −δL1 {exp(L1 )}−ρ − (1 − δ) ln x3 x−ρ 3 lim (293) (294) Limits of the derivatives for ρ1 and ρ approaching zero ∂y =eλ t exp{ν(δ L1 + (1 − δ) ln x3 )} ρ→0 ∂γ (295) ∂y = − γ eλ t ν δ (− ln x1 + ln x2 ) exp{−ν(−δ L1 − (1 − δ) ln x3 )} ∂δ1 (296) ∂y = − γ eλ t ν (−L1 + ln x3 ) exp{ν(δ L1 + (1 − δ) ln x3 )} ρ→0 ∂δ (297) ∂y =γ eλ t (δ L1 + (1 − δ) ln x3 ) exp {−ν(−δ L1 − (1 − δ) ln x3 )} ∂ν (298) lim lim ρ1 →0 lim lim ρ1 →0 ρ→0 lim lim ρ1 →0 lim lim ρ1 →0 ρ→0  ∂y 1 = γ eλ t ν δ δ1 (ln x1 )2 + (1 − δ1 )(ln x2 )2 − L21 ρ→0 ∂ρ1 2 · exp (−ν (δ L1 + (1 − δ) ln x3 )) lim lim ρ1 →0    1 ∂y 1 2 λt 2 2 lim lim =γ e ν − δ L1 + (1 − δ)(ln x3 ) + (−δ L1 − (1 − δ) ln x3 ) ρ1 →0 ρ→0 ∂ρ 2 2 · exp {ν (δ L1 + (1 − δ) ln x3 )} B. Elasticities of substitution (299) (300) . 2. 2. we make the following definitions: L1 = δ1 ln(x1 ) + (1 − δ1 ) ln(x2 ) (312) L2 = δ2 ln(x3 ) + (1 − δ2 ) ln(x4 ) (313) .j 1 1 + θi θj       1 1 1 1 1 1 = (1 − ρ1 ) θ − θ∗ + (1 − ρ2 ) θ − θ + (1 − ρ) θ∗ − θ i j         (1 − ρ1 )−1          for i = 1. G´eraldine Henningsen 81 Allen-Uzawa elasticity of substitution (see Sato 1967) σi. j = 2 (301) Hicks-McFadden elasticity of substitution (see Sato 1967) σi.j  (1 − ρ)−1         (1 − ρ1 )−1 − (1 − ρ)−1 = + (1 − ρ)−1  1+ρ     y    δ   − ρ1 B1 1 for i = 1. j = 2 (302) with ∗ ρ ρ1 θ = δB1 · y ρ θ = (1 − (303) δ)x−ρ 3 ·y ρ ρ −ρ − 1ρ 1 1 θ1 = δδ1 x−ρ 1 B1 (304) · yρ (305) ρ −ρ − 1ρ 1 1 θ2 = δ(1 − δ1 )x−ρ 2 B1 · yρ θ3 = θ (306) (307) C. j = 3 for i = 1. j = 3 text i = 1. Four-input nested CES function The nested CES function with four inputs is defined as y = γ eλ t B −ν/ρ (308) with ρ/ρ1 B = δ B1 ρ/ρ2 + (1 − δ)B2 1 1 B1 = δ1 x−ρ + (1 − δ1 )x−ρ 1 2 B2 = 2 δ2 x−ρ 3 + (1 − 2 δ2 )x−ρ 4 (309) (310) (311) For further simplification of the formulas in this section.Arne Henningsen. 329. . Limits for ρ1 . 361. 369. and/or ρ approaching zero are:    δ ln(B1 ) (1 − δ) ln(B2 ) λt + (314) lim y =γ e exp −ν ρ→0 ρ1 ρ2  − ν ρ ρ ρ2 λt δ exp{−ρ L1 } + (1 − δ)B2 lim y =γ e (315) ρ1 →0 lim y =γ e ρ2 →0 λt − ν  ρ ρ ρ1 δB1 + (1 − δ) exp{−ρ L2 )} (316) − νρ lim lim y =γ eλ t (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ2 →0 ρ1 →0    ln(B2 ) λt lim lim y =γ e exp −ν −δ L1 + (1 − δ) ρ1 →0 ρ→0 ρ2    ln(B ) 1 λt lim lim y =γ e exp −ν δ − (1 − δ)L2 ρ2 →0 ρ→0 ρ1 lim lim lim y =γ eλ t exp {−ν (−δ L1 − (1 − δ)L2 )} ρ1 →0 ρ2 →0 ρ→0 (317) (318) (319) (320) C.2. ρ2 .1. 353. Derivatives with respect to coefficients The partial derivatives of the four-input nested CES function with respect to the coefficients are:28 ∂y ∂γ ∂y ∂δ1 ∂y ∂δ2 ∂y ∂δ ∂y ∂ν ∂y ∂ρ1 =eλ t B −ν/ρ (321)   ρ−ρ1 −ν−ρ ρ ν ρ 1 1 − x−ρ (322) =γ eλ t − B ρ δB1 1 (x−ρ 1 2 ) ρ ρ1   ρ−ρ2 −ν−ρ ρ ν ρ λt 2 2 =γ e − B ρ (1 − δ)B2 2 (x−ρ − x−ρ (323) 3 4 ) ρ ρ2   h i ν ρ/ρ ρ/ρ − ν −1 λt =γ e − B ρ B1 1 − B2 2 (324) ρ   1 =γ eλ t ln(B)B −ν/ρ − (325) ρ   −ν−ρ ν λt =γ e − B ρ (326) ρ     ρ−ρ1 ρ ρ ρ ρ1 ρ1 −ρ1 −ρ1 δ ln(B1 )B1 − 2 + δB1 (−δ1 ln x1 x1 − (1 − δ1 ) ln x2 x2 ) ρ1 ρ1   −ν−ρ ν ∂y =γ eλ t − B ρ (327) ∂ρ2 ρ     ρ−ρ2 ρ ρ ρ ρ2 ρ2 −ρ2 −ρ2 (1 − δ) ln(B2 )B2 − 2 + (1 − δ)B2 (−δ2 ln x3 x3 − (1 − δ2 ) ln x4 x4 ) ρ2 ρ2 28 The partial derivatives with respect to λ are always calculated using Equation 16. or 377 depending on the values of ρ1 . ρ2 . and ρ. and/or ρ approaching zero The limits of the four-input nested CES function for ρ1 . where ∂y/∂γ is calculated according to Equation 321. 345.82 Econometric Estimation of the Constant Elasticity of Substitution Function in R C. 337. ρ2 . Arne Henningsen. G´eraldine Henningsen 83   −ν−ρ ν ∂y λt −ν/ρ ν λt ρ − B =γ e ln(B)B + γ e ∂ρ ρ2 ρ   ρ ρ ρ1 1 ρ2 1 δ ln(B1 )B1 + (1 − δ) ln(B2 )B2 ρ1 ρ2 (328) Limits of the derivatives for ρ approaching zero    ∂y δ ln(B1 ) (1 − δ) ln(B2 ) λt lim + =e exp −ν ρ→0 ∂γ ρ1 ρ2 ! ∂y lim =γ eλ t ρ→0 ∂δ1 1 1 − x−ρ ν δ(x−ρ 2 ) 1 − ρ1 B1 ∂y lim =γ eλ t ρ→0 ∂δ2 2 2 ν (1 − δ)(x−ρ − x−ρ 3 4 ) − ρ2 B2   · exp −ν ! (329) δ ln(B1 ) (1 − δ) ln(B2 ) + ρ1 ρ2  (330)    δ ln(B1 ) (1 − δ) ln(B2 ) · exp −ν + ρ1 ρ2 (331) ∂y =γ eλ t lim ρ→0 ∂δ       ln(B1 ) ln(B2 ) δ ln(B1 ) (1 − δ) ln(B2 ) −ν − + · exp −ν ρ1 ρ2 ρ1 ρ2 (332)    ∂y (1 − δ) ln(B2 ) δ ln(B1 ) (1 − δ) ln(B2 ) λ t δ ln(B1 ) =−γe lim + · exp −ν + (333) ρ→0 ∂ν ρ1 ρ2 ρ1 ρ2 1 1 δρ1 (−δ1 ln x1 x−ρ − (1 − δ1 ) ln x2 x−ρ δ ln(B1 ) 1 2 ) −ν − 2 ρ1 B1 ρ21    δ ln(B1 ) (1 − δ) ln(B2 ) · exp −ν + ρ1 ρ2 ∂y =γ eλ t lim ρ→0 ∂ρ1 ∂y =γ eλ t lim ρ→0 ∂ρ2 !! 2 2 (1 − δ)ρ2 (−δ2 ln x3 x−ρ − (1 − δ2 ) ln x4 x−ρ (1 − δ) ln(B2 ) 3 4 ) − 2 ρ2 B2 ρ22 −ν (334) !! (335)  · exp −ν  δ ln(B1 ) (1 − δ) ln(B2 ) + ρ1 ρ2     ∂y 1 1 1 =γ eλ t ν − δ(ln B1 )2 2 + (1 − δ)(ln B2 )2 2 (336) ρ→0 ∂ρ 2 ρ1 ρ2   !    1 1 1 2 δ ln(B1 ) (1 − δ) ln(B2 ) + δ ln B1 + (1 − δ) ln B2 · exp −ν + 2 ρ1 ρ2 ρ1 ρ2 lim . 84 Econometric Estimation of the Constant Elasticity of Substitution Function in R Limits of the derivatives for ρ1 approaching zero ∂y lim =eλ t ρ1 →0 ∂γ  − ν ρ ρ ρ2 δ exp{−ρ L1 } + (1 − δ)B2  − ν −1 ρ ρ ∂y ρ2 λt ν =−γe δ exp{−ρ L1 } + (1 − δ)B2 lim ρ1 →0 ∂δ1 ρ · δ exp{−ρ L1 }ρ(− ln x1 + ln x2 ) − ν −1  ρ ρ ∂y ρ2 λt ν lim =−γe δ exp{−ρ L1 } + (1 − δ)B2 ρ1 →0 ∂δ2 ρ  ρ ρρ −1  −ρ2 2 · (1 − δ) B2 2 x3 − x−ρ 4 ρ2  − ν −1 ρ ρ ∂y ρ2 λt ν =−γe δ exp{−ρ L1 } + (1 − δ)B2 lim ρ1 →0 ∂δ ρ   ρ ρ2 · exp{−ρ L1 } − B2   ρ ∂y ρ2 λt 1 lim =−γe ln δ exp{−ρ L1 } + (1 − δ)B2 ρ1 →0 ∂ν ρ  − ν ρ ρ ρ2 δ exp{−ρ L1 } + (1 − δ)B2  − ν −1 ρ ρ ∂y ρ2 λt lim = − γ e δ ν δ exp {ρ (−δ1 ln x1 − (1 − δ1 ) ln x2 )} + (1 − δ) B2 ρ1 →0 ∂ρ1 · exp {−ρ (δ1 ln x1 + (1 − δ1 ) ln x2 )}   1 1 2 2 2 − (−δ1 ln x1 − (1 − δ1 ) ln x2 ) + δ1 (ln x1 ) + (1 − δ1 ) (ln x2 ) 2 2 (337) (338) (339) (340) (341) (342)  − ν −1 ρ ρ ∂y ρ2 λt ν =−γe δ exp{−ρ L1 } + (1 − δ)B2 (343) lim ρ1 →0 ∂ρ2 ρ  ρ  ρ ρ ρρ2 −1  ρ2 −ρ2 −ρ2 −δ2 ln x3 x3 − (1 − δ2 ) ln x4 x4 −(1 − δ) 2 ln(B2 )B2 + (1 − δ) B2 ρ2 ρ2   ρ ∂y ρ2 λt ν lim =γ e ln δ exp{−ρ L1 } + (1 − δ)B2 ρ1 →0 ∂ρ ρ2  − ν ρ ρ ρ2 δ exp{−ρ L1 } + (1 − δ)B2 (344) . G´eraldine Henningsen λt ν ρ  ρ ρ2 85 − ν −1 ρ δ exp{−ρ L1 } + (1 − δ)B2 −γe   ρ 1−δ ρ2 −δ exp{−ρ L1 }L1 + ln(B2 )B2 ρ2 Limits of the derivatives for ρ2 approaching zero ∂y lim =eλ t ρ2 →0 ∂γ ∂y ν lim = − γ eλ t ρ2 →0 ∂δ1 ρ  ρ ρ1 δB1  − ν ρ ρ ρ1 δB1 + (1 − δ) exp{−ρ L2 } − ν −1  ρ ρ ρρ −1  −ρ1 1 + (1 − δ) exp{−ρ L2 } δ B1 1 x1 − x−ρ 2 ρ1 − ν −1  ρ ρ ∂y ρ1 λt lim = − γ e ν δB1 + (1 − δ) exp{−ρ L2 } ρ2 →0 ∂δ2 (1 − δ) exp{−ρ L2 } (− ln x3 + ln x4 ) − ν −1  ρ ρ ∂y ρ1 λt ν lim =−γe δB1 + (1 − δ) exp{−ρ L2 } ρ2 →0 ∂δ ρ  ρ  ρ1 B1 − exp{−ρ L2 }   ρ ∂y ρ1 λt 1 lim =−γe ln δB1 + (1 − δ) exp{−ρ L2 } ρ2 →0 ∂ν ρ  − ν ρ ρ ρ1 δB1 + (1 − δ) exp{−ρ L2 } (345) (346) (347) (348) (349)  − ν −1 ρ ρ ∂y ρ1 λt ν lim =−γe δB1 + (1 − δ) exp{−ρ L2 } (350) ρ2 →0 ∂ρ1 ρ      ρ ρ  −1  ρ ρ ρ1 ρ1 −ρ1 −ρ1 − 2 +δ B1 −δ1 ln x1 x1 − (1 − δ1 ) ln x2 x2 δ ln(B1 )B1 ρ1 ρ1 ∂y = − γ eλ t (1 − δ) ν exp {−ρ (δ2 ln x3 + (1 − δ2 ) ln x4 )} (351) ρ2 →0 ∂ρ2    ρ − νρ −1 −ρ1 −ρ1 ρ1 (1 − δ) exp {ρ (−δ2 ln x3 − (1 − δ2 ) ln x4 )} + δ δ1 x1 + (1 − δ1 ) x2   1 1 2 2 2 − (δ2 ln x3 + (1 − δ2 ) ln x4 ) + δ2 (ln x3 ) + (1 − δ2 ) (ln x4 ) 2 2 lim .Arne Henningsen. 86 Econometric Estimation of the Constant Elasticity of Substitution Function in R  − ν  ρ ρ ρ ∂y ρ1 ρ1 λt ν δB1 + (1 − δ) exp{−ρ L2 } ln δB1 + (1 − δ) exp{−ρ L2 } lim =γ e 2 ρ2 →0 ∂ρ ρ (352) − ν −1  ρ ρ ν ρ − γ eλ t δB1 1 + (1 − δ) exp{−ρ L2 } ρ   ρ ρ1 1 −(1 − δ) exp{−ρ L2 }L2 + δ ln(B1 ) B1 ρ1 Limits of the derivatives for ρ1 and ρ2 approaching zero   ∂y ν λt lim lim =e exp − ln (δ exp{−ρ L1 } + (1 − δ) exp{−ρ L2 }) ρ2 →0 ρ1 →0 ∂γ ρ ∂y − ν −1 = − γ eλ t ν δ (δ exp{−ρ L1 } + (1 − δ) exp{−ρ L2 }) ρ ρ1 →0 ∂δ1 exp{−ρ L1 } (− ln x1 + ln x2 ) lim lim ρ2 →0 ∂y − ν −1 = − γ eλ t ν ((1 − δ) exp{−ρL2 } + δ exp{−ρ L1 }) ρ ρ1 →0 ∂δ2 (1 − δ) exp{−ρ L2 }(− ln x3 + ln x4 ) lim lim ρ2 →0 ∂y ν − ν −1 = − γ eλ t (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ ρ1 →0 ∂δ ρ (exp {−ρ L1 } − exp {−ρ L2 }) lim lim ρ2 →0 ∂y 1 = − γ eλ t ln (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ1 →0 ∂ν ρ lim lim ρ2 →0 (353) (354) (355) (356) (357) − νρ (δ exp {−ρ L1 } + (1 − δ) exp {−ρL2 })  − ν −1 ∂y ρ = − γ eλ t δ ν δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 } ρ1 →0 ∂ρ1   1 2 1 2 2 · exp {ρ L2 } − L1 + δ1 (ln x1 ) + (1 − δ1 ) (ln x2 ) 2 2 lim lim ρ2 →0 ∂y − ν −1 = − γ eλ t (1 − δ) ν ((1 − δ) exp {−ρ L2 } + δ exp {−ρ L1 }) ρ ρ2 →0 ∂ρ2   1 2 1 2 2 · exp {−ρ L2 } − L2 + δ2 (ln x3 ) + (1 − δ2 ) (ln x4 ) 2 2 lim lim ρ1 →0 ν ∂y =γ eλ t 2 ln (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ1 →0 ∂ρ ρ lim lim ρ2 →0 (358) (359) (360) . G´eraldine Henningsen 87 −ν (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ ν − ν −1 − γ eλ t (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ ρ (−δ exp {−ρ L1 } L1 − (1 − δ) exp {−ρ L2 } L2 ) Limits of the derivatives for ρ1 and ρ approaching zero    ∂y (1 − δ) ln(B2 ) λt lim lim =e exp −ν −δ L1 + ρ1 →0 ρ→0 ∂γ ρ2 (361)    ∂y (1 − δ) ln(B2 ) λt lim lim =γ e (−ν(δ(− ln x1 + ln x2 ))) exp −ν −δ L1 + ρ1 →0 ρ→0 ∂δ1 ρ2 (362)    −ρ2 2 ∂y − x−ρ (1 − δ) ln(B2 ) λ t −ν(1 − δ)(x3 4 ) lim lim =γ e exp −ν −δ L1 + ρ1 →0 ρ→0 ∂δ2 ρ2 B2 ρ2 (363)      ln(B2 ) ln(B2 ) ∂y λt = − γ e ν −L1 − exp −ν −δ L1 + (1 − δ) lim lim ρ1 →0 ρ→0 ∂δ ρ2 ρ2 (364) ∂y lim lim =γ eλ t ρ1 →0 ρ→0 ∂ν      ln(B2 ) ln(B2 ) δ L1 − (1 − δ) exp −ν −δ L1 + (1 − δ) ρ2 ρ2   ∂y 1 1 (δ1 (ln x1 )2 + (1 − δ1 )(ln x2 )2 ) − L21 = − γ eλ t ν δ ρ→0 ∂ρ1 2 2    ln(B2 ) · exp −ν −δ L1 + (1 − δ) ρ2 lim lim ρ1 →0 2 2 ln(B2 ) (−δ2 ln x3 x−ρ − (1 − δ2 ) ln x4 x−ρ ∂y 3 4 ) = − γ eλ t ν (1 − δ) − + lim lim ρ1 →0 ρ→0 ∂ρ2 ρ2 B2 ρ22    ln(B2 ) · exp −ν −L1 + (1 − δ) ρ2 ∂y lim lim =γ eλ t ν ρ1 →0 ρ→0 ∂ρ 1 − 2 δL21 + (1 − δ)  ln(B2 ) ρ2 2 ! 1 + 2 (365) (366) ! (367)   ! ln(B2 ) 2 −δ L1 + (1 − δ) ρ2 (368)    ln(B2 ) · exp −ν −δ L1 + (1 − δ) ρ2 .Arne Henningsen. 88 Econometric Estimation of the Constant Elasticity of Substitution Function in R Limits of the derivatives for ρ2 and ρ approaching zero    δ ln(B1 ) ∂y λt − (1 − δ)L2 =e exp −ν lim lim ρ2 →0 ρ→0 ∂γ ρ1 (369)    −ρ1 1 ∂y − x−ρ δ ln(B1 ) λ t −νδ(x1 2 ) lim lim =γ e exp −ν − (1 − δ)L2 ρ2 →0 ρ→0 ∂δ1 ρ1 B1 ρ1 (370)    ∂y δ ln(B1 ) λt lim lim =γ e (−ν((1 − δ)(− ln x3 + ln x4 ))) exp −ν − (1 − δ)L2 ρ2 →0 ρ→0 ∂δ2 ρ1 (371) ∂y lim lim = − γ eλ t ν ρ2 →0 ρ→0 ∂δ ∂y lim lim =γ eλ t ρ2 →0 ρ→0 ∂ν       ln(B1 ) ln(B1 ) + L2 exp −ν δ − (1 − δ)L2 ρ1 ρ1     ln(B1 ) ln(B1 ) −δ + (1 − δ)L2 exp −ν δ − (1 − δ)L2 ρ1 ρ1 1 1 ∂y ln(B1 ) (−δ1 ln x1 x−ρ − (1 − δ1 ) ln x2 x−ρ 1 2 ) lim lim = − γ eλ t ν δ − + ρ2 →0 ρ→0 ∂ρ1 ρ1 B1 ρ21    ln(B1 ) − (1 − δ)L2 · exp −ν δ ρ1   ∂y 1 1 (δ2 (ln x3 )2 + (1 − δ2 )(ln x4 )2 ) − L22 = − γ eλ t ν (1 − δ) ρ→0 ∂ρ2 2 2    ln(B1 ) · exp −ν δ − (1 − δ)L2 ρ1 ∂y =γ eλ t ν lim lim ρ2 →0 ρ→0 ∂ρ 1 − 2  δ ln(B1 ) ρ1 2 ! + (1 − δ)L22 1 + 2  (373) ! lim lim ρ2 →0 (372) ln(B1 ) δ − (1 − δ)L2 ρ1 (374) (375) 2 ! (376)   · exp −ν δ ln(B1 ) − (1 − δ)L2 ρ1  Limits of the derivatives for ρ1 . and ρ approaching zero lim lim lim ρ1 →0 ρ2 →0 ρ→0 ∂y =eλ t exp {−ν (−δ L1 − (1 − δ)L2 )} ∂γ (377) . ρ2 . 4 for i = 1.3. Elasticities of substitution Allen-Uzawa elasticity of substitution (see Sato 1967) σi. G´eraldine Henningsen lim lim lim ρ1 →0 ρ2 →0 ρ→0 lim lim lim ρ1 →0 ρ2 →0 ρ→0 ∂y = − γ eλ t ν δ (− ln x1 + ln x2 ) exp {−ν (−δ L1 − (1 − δ)L2 )} ∂δ1 89 (378) ∂y = − γ eλ t ν (1 − δ)(− ln x3 + ln x4 ) exp {−ν (−δ L1 − (1 − δ)L2 )} (379) ∂δ2 lim lim lim ρ1 →0 ρ2 →0 ρ→0 lim lim lim ρ1 →0 ρ2 →0 ρ→0 ∂y = − γ eλ t ν (−L1 + L2 ) exp {−ν (−δ L1 − (1 − δ)L2 )} ∂δ ∂y =γ eλ t (δ L1 + (1 − δ)L2 ) exp {−ν (−δ L1 − (1 − δ)L2 )} ∂ν   ∂y 1 1 2 λt 2 2 (δ1 (ln x1 ) + (1 − δ1 )(ln x2 ) ) − L1 lim lim lim =−γe νδ ρ1 →0 ρ2 →0 ρ→0 ∂ρ1 2 2 · exp {−ν (−δ L1 − (1 − δ)L2 )} lim lim lim ρ1 →0 ρ2 →0 ρ→0   1 1 ∂y (δ2 (ln x3 )2 + (1 − δ2 )(ln x4 )2 ) − L22 = − γ eλ t ν (1 − δ) ∂ρ2 2 2 · exp {−ν (−δ L1 − (1 − δ)L2 )}    1 1 ∂y 2 λt 2 2 lim lim lim =γ e ν − δL1 + (1 − δ)L2 + (−δ L1 − (1 − δ)L2 ) ρ1 →0 ρ2 →0 ρ→0 ∂ρ 2 2 · exp {−ν (−δ L1 + (1 − δ)(−δ2 ln x3 − (1 − δ2 ) ln x4 ))} (380) (381) (382) (383) (384) C. j = 3.Arne Henningsen. j = 2 (385) for i = 3. j = 4 . 2.j   (1 − ρ)−1          (1 − ρ1 )−1 − (1 − ρ)−1    + (1 − ρ)−1  1+ρ     y     δ − ρ1 = B1 1         (1 − ρ2 )−1 − (1 − ρ)−1    1+ρ + (1 − ρ)−1      y    (1 − δ)  − 1    ρ B2 2 for i = 1. j = 2 for i = 3. j = 3. 2.j =  1 1   +   θi θ j           1 1 1 1 1 1    (1 − ρ1 ) − + (1 − ρ2 ) − + (1 − ρ) −   θi θ ∗ θj θ θ∗ θ     (1 − ρ1 )−1          (1 − ρ2 )−1 for i = 1. j = 4 (386) with ρ ρ θ∗ = δB1 1 · y ρ (387) ρ ρ2 θ = (1 − δ)B2 · y ρ ρ −ρ − 1ρ 1 1 θ1 = δδ1 x−ρ 1 B1 (388) · yρ ρ −ρ − 1ρ 1 1 θ2 = δ(1 − δ1 )x−ρ 2 B1 ρ −ρ − 2ρ 2 2 θ3 = (1 − δ)δ2 x−ρ 3 B2 (389) · yρ (390) · yρ (391) ρ −ρ − 2ρ 2 2 θ4 = (1 − δ)(1 − δ2 )x−ρ 4 B2 · yρ (392) .90 Econometric Estimation of the Constant Elasticity of Substitution Function in R Hicks-McFadden elasticity of substitution (see Sato 1967) σi. 4 for i = 1. cesEst( "Y".cesEst( "Y".cesEst( method = "NM". control = list( maxit = 5000 ) ) "Y". summary( cesNm1 ) cesNm2 <. tName = "time".c( "E". xNames3. ) # add a time trend (starting with 0) GermanIndustry$time <. control = list( maxit = 5000 ) ) "Y".GermanIndustry$year . summary( cesBfgs2 ) cesBfgs3 <. "A". data = GermanIndustry. control = list( maxit summary( cesSann1 ) cesSann2 <.1975 because of economic disruptions (see Kemfert 1998) GermanIndustry <. Script for replicating the analysis of Kemfert (1998) # load the micEconCES package library( "micEconCES" ) # load the data set data( "GermanIndustry" ) # remove years 1973 . control = list( maxit = 5000 ) ) ## Simulated Annealing cesSann1 <. tName = "time". tName = method = "SANN". = 2e6 ) ) "time". xNames1. summary( cesNm2 ) cesNm3 <.cesEst( method = "BFGS". control = list( maxit = 5000 ) ) "Y". data = GermanIndustry. data = GermanIndustry. control = list( maxit summary( cesSann2 ) cesSann3 <. 91 .subset( GermanIndustry. data = GermanIndustry.cesEst( method = "BFGS". tName = "time". "A".Arne Henningsen. data = GermanIndustry. data = GermanIndustry.cesEst( method = "NM". data = GermanIndustry. control = list( maxit = 5000 ) ) ## L-BFGS-B cesBfgsCon1 <. tName = "time". control = list( maxit = 5000 ) ) "Y".cesEst( "Y". "E" ) <. tName = method = "SANN". G´eraldine Henningsen D. = 2e6 ) ) "Y". xNames3. tName = "time". summary( cesBfgs3 ) "time". "K" ) ################# econometric estimation with cesEst ########################## ## Nelder-Mead cesNm1 <. "A" ) <. xNames1.c( "K". xNames2. xNames1. xNames2. summary( cesNm3 ) "Y". xNames2. = 2e6 ) ) "time". data = GermanIndustry.c( "K".1960 # names xNames1 xNames2 xNames3 of inputs <. tName = "time".cesEst( "Y". tName = method = "SANN". tName = "time".cesEst( method = "BFGS". data = GermanIndustry. year < 1973 | year > 1975. xNames1. summary( cesBfgs1 ) cesBfgs2 <. xNames3. "E". control = list( maxit summary( cesSann3 ) ## BFGS cesBfgs1 <.cesEst( method = "NM". data = GermanIndustry. method = "L-BFGS-B". tName = "time". xNames2. xNames1. tName = "time". xNames3. xNames2.cesEst( "Y".cesEst( method = "PORT". data = GermanIndustry. data = GermanIndustry. tName = "time".cesEst( method = "Newton".cesEst( "Y". method = "PORT".max = 2000 ) ) .max = 1000. control = list( eval. control = list( maxit = 5000 summary( cesBfgsCon3 ) ) ) data = GermanIndustry. data = GermanIndustry. data = GermanIndustry. xNames3.control( maxiter = 1000.cesEst( "Y". tName = "time".lm. method = "L-BFGS-B". tName = "time". iterlim = 500 ) "Y". summary( cesPort3 ) = GermanIndustry. xNames1.cesEst( "Y". tName = "time". 1000.control( maxiter = summary( cesLm2Me ) cesLm3Me <.cesEst( method = "Newton".cesEst( "Y". control = list( eval. xNames1. control = nls.lm. xNames3. control = list( maxit = 5000 summary( cesBfgsCon2 ) cesBfgsCon3 <.control( maxiter = summary( cesLm1Me ) cesLm2Me <.92 Econometric Estimation of the Constant Elasticity of Substitution Function in R method = "L-BFGS-B". xNames2. summary( cesNewton1 ) cesNewton2 <. maxfev = 2000 ) ) = GermanIndustry. maxfev = 2000 ) ) "Y". iter. tName = "time". data multErr = TRUE.cesEst( "Y". iterlim = 500 ) "Y".cesEst( "Y".max = 1000 ) ) ## PORT.max = 1000 ) ) "Y". data = GermanIndustry. summary( cesPort2 ) cesPort3 <. multiplicative error cesPort1Me <.max = 1000. tName = "time". ) ) data = GermanIndustry. 1000. summary( cesNewton3 ) ## PORT cesPort1 <. tName = "time". iter. control = list( eval. data multErr = TRUE. summary( cesNewton2 ) cesNewton3 <.cesEst( "Y". iter. xNames2. tName = "time".max = 1000 ) ) "Y". tName = "time". maxfev = 2000 ) ) summary( cesLm3 ) ## Levenberg-Marquardt. xNames2.control( maxiter = 1000. tName = "time". multiplicative error term cesLm1Me <. control = nls. xNames1.lm. data = GermanIndustry.lm.max = 1000. multErr = TRUE. control = nls. data = GermanIndustry.cesEst( "Y". maxfev = 2000 ) ) summary( cesLm2 ) cesLm3 <. control = nls.lm. xNames3. 1000.lm. ) ) ## Levenberg-Marquardt cesLm1 <. data multErr = TRUE. maxfev = 2000 ) ) = GermanIndustry.cesEst( method = "PORT".cesEst( method = "PORT". iter. data = GermanIndustry. summary( cesPort1 ) cesPort2 <. data = GermanIndustry. control = list( eval. control = nls. control = list( maxit = 5000 summary( cesBfgsCon1 ) cesBfgsCon2 <. maxfev = 2000 ) ) summary( cesLm1 ) cesLm2 <.cesEst( method = "Newton".control( maxiter = summary( cesLm3Me ) ## Newton-type cesNewton1 <. control = nls. tName = "time". iterlim = 500 ) "Y".max = 2000.control( maxiter = 1000. xNames1. tName = "time". tName = "time". data = GermanIndustry. xNames3. cesEst( "Y". control = nls.cesEst( "Y". maxfev = 2000 ) ) summary( cesNmLm3 ) ## SANN . tName = "time". xNames2. tName = "time". start = coef( cesSann2 ).cesEst( "Y".cesEst( "Y".cesEst( "Y". method = cesNls3 <.max = 1000 ) ) tName = "time". xNames1. NP = 500. data = GermanIndustry. FALSE. control = DEoptim. maxfev summary( cesSannLm1 ) cesSannLm2 <. xNames2.Levenberg-Marquardt cesNmLm1 <. tName = "time". tName = "time".lm.try( cesEst( vrs = TRUE.control( trace = itermax = 1e4 ) ) summary( cesDe2 ) cesDe3 <.max = 1000. xNames1.max = 1000 ) ) ## DE cesDe1 <. data = GermanIndustry. tName = "time". control = nls. data = GermanIndustry. data = GermanIndustry. start = coef( cesNm1 ). control = nls. G´eraldine Henningsen summary( cesPort1Me ) cesPort2Me <. data = GermanIndustry. = GermanIndustry. iter.cesEst( "Y". start = coef( cesNm3 ). method = "PORT". xNames1. method = "PORT". FALSE.Arne Henningsen. data = GermanIndustry. control = DEoptim.cesEst( "Y".Levenberg-Marquardt cesSannLm1 <. method = cesNls2 <. iter.cesEst( "Y". FALSE. maxfev = 2000 ) ) summary( cesNmLm1 ) cesNmLm2 <.control( maxiter = 1000. xNames3. = 2000 ) ) data = GermanIndustry.lm. multErr = TRUE. control = list( eval.control( trace = itermax = 1e4 ) ) summary( cesDe3 ) ## nls cesNls1 <. xNames3. summary( cesPort2Me ) cesPort3Me <.lm. data method = "DE". start = coef( cesSann1 ). control = list( eval.control( maxiter = 1000.control( maxiter = 1000.control( maxiter = 1000. data method = "DE".control( trace = itermax = 1e4 ) ) summary( cesDe1 ) cesDe2 <. xNames2. xNames2. tName = "time". data = GermanIndustry. summary( cesPort3Me ) tName = "time". data method = "DE". tName = "time". 93 . "nls" ) ) "Y". control = DEoptim. maxfev summary( cesSannLm2 ) cesSannLm3 <. data = GermanIndustry. data = GermanIndustry.cesEst( "Y".lm. xNames1. xNames2. "nls" ) ) ## NM . maxfev = 2000 ) ) summary( cesNmLm2 ) cesNmLm3 <. tName = "time". xNames3. multErr = TRUE.max = 1000. tName = "time". tName = "time". "Y". start = coef( cesNm2 ). NP = 500. NP = 500.cesEst( "Y". tName = "time". tName = "time".try( cesEst( vrs = TRUE. control = nls.lm.control( maxiter = 1000. "nls" ) ) "Y".cesEst( "Y". start = coef( cesSann3 ). = 2000 ) ) data = GermanIndustry. control = nls. xNames3. = GermanIndustry. xNames3.try( cesEst( vrs = TRUE. method = = GermanIndustry. xNames2.max = 1000 ) summary( cesSannPort1 ) cesSannPort2 <.max = 1000. xNames3. method = "PORT".max = 1000 summary( cesNmPort3 ) data = GermanIndustry. tName = "time".cesEst( "Y". ) ) data = GermanIndustry. xNames3.cesEst( "Y".max = 1000 ) summary( cesSannPort2 ) cesSannPort3 <. tName = "time". control = nls. iter. data = GermanIndustry. ) ) ## SANN . method = "PORT".cesEst( "Y".cesEst( "Y".PORT cesDePort1 <. control = list( eval. method = "PORT".94 Econometric Estimation of the Constant Elasticity of Substitution Function in R control = nls. xNames1. control = list( eval.max = 1000. start = coef( cesSann2 ).max = 1000.control( maxiter = 1000. control = list( eval. start = coef( cesNm3 ).max = 1000. maxfev = 2000 ) ) summary( cesDeLm2 ) cesDeLm3 <. tName = "time". maxfev = 2000 ) ) summary( cesSannLm3 ) ## DE . ) data = GermanIndustry.PORT cesSannPort1 <. start = coef( cesSann3 ).control( maxiter = 1000. start = coef( cesNm1 ). control = list( eval. iter. xNames1.max = 1000. tName = "time".lm. iter. maxfev = 2000 ) ) summary( cesDeLm3 ) ## NM .max = 1000 summary( cesDePort1 ) cesDePort2 <.lm. control = list( eval. control = list( eval. ) ) . control = nls. start = coef( cesSann1 ).cesEst( "Y".control( maxiter = 1000.PORT cesNmPort1 <.cesEst( "Y". xNames1. start = coef( cesDe2 ). ) data = GermanIndustry.cesEst( "Y".Levenberg-Marquardt cesDeLm1 <.max = 1000. xNames2. start = coef( cesDe3 ).max = 1000 summary( cesNmPort2 ) cesNmPort3 <. tName = "time".max = 1000 ) summary( cesSannPort3 ) ## DE . control = nls.max = 1000. iter. control = list( eval. ) data = GermanIndustry. method = "PORT". ) ) data = GermanIndustry. method = "PORT". control = list( eval. tName = "time". control = list( eval. data = GermanIndustry. method = "PORT".control( maxiter = 1000. method = "PORT".max = 1000 data = GermanIndustry. xNames1.cesEst( "Y". data = GermanIndustry. start = coef( cesNm2 ). tName = "time".cesEst( "Y". method = "PORT".lm. tName = "time". xNames3. method = "PORT".cesEst( "Y". xNames3. start = coef( cesDe1 ).cesEst( "Y".max = 1000 summary( cesNmPort1 ) cesNmPort2 <.max = 1000. tName = "time". maxfev = 2000 ) ) summary( cesDeLm1 ) cesDeLm2 <. tName = "time". iter.cesEst( "Y". iter.lm. tName = "time". iter.max = 1000. start = coef( cesDe2 ). xNames2. tName = "time".max = 1000 summary( cesDePort2 ) cesDePort3 <. xNames2. iter. iter. start = coef( cesDe3 ). ) ) data = GermanIndustry. start = coef( cesDe1 ). ) ) data = GermanIndustry. rho = 1. xNames2f.0222 * GermanIndustry$time ) GermanIndustry$A1 <.cesEst( "Y". xNames2f. rho1 = 0.3654.cesEst( "Y". lambda. xNames2f.GermanIndustry$E * exp( 0.8327.GermanIndustry$K * exp( 0.GermanIndustry$K * exp( 0.cesEst( "Y". and rho fixed cesLmFixed1 <.GermanIndustry$K * exp( 0. control = list( maxit = 5000 ) ) summary( cesNmFixed2 ) cesNmFixed3 <.c( "K2". data = GermanIndustry. xNames1f. "A2".cesEst( "Y". data = GermanIndustry.2155.GermanIndustry$A * exp( 0. lambda. rho1 = 1.lm.0069 * GermanIndustry$time ) GermanIndustry$K3 <. control = list( maxit = 5000 ) ) summary( cesBfgsFixed2 ) cesBfgsFixed3 <. rho = 0.control( maxiter = 1000. control = nls. and rho fixed ##################### # removing technological progress using the lambdas of Kemfert (1998) # (we can do this. "E2" ) xNames3f <. rho1 = 0. rho = 1.8327.c( "E3".1813.00641 * GermanIndustry$time ) GermanIndustry$E3 <. xNames3f. "E1".1813. rho1 = 0.GermanIndustry$A * exp( 0. because the model has constant returns to scale) GermanIndustry$K1 <.0222 * GermanIndustry$time ) GermanIndustry$E1 <. rho1 = 0. method = "BFGS". rho = 1. rho = 5. rho = 5. "A3".2155.1816. rho1 = 0.cesEst( "Y".GermanIndustry$E * exp( 0.c( "K1".cesEst( "Y". method = "NM". rho1 = 0. data = GermanIndustry.GermanIndustry$E * exp( 0.1816. data = GermanIndustry. data = GermanIndustry.0222 * GermanIndustry$time ) GermanIndustry$K2 <.GermanIndustry$A * exp( 0.5300. lambda. control = list( maxit = 5000 ) ) summary( cesBfgsFixed3 ) ## Levenberg-Marquardt. maxfev = 2000 ) ) summary( cesLmFixed1 ) cesLmFixed2 <. xNames1f. "A1" ) xNames2f <.cesEst( "Y". and rho fixed cesBfgsFixed1 <. and rho fixed cesNmFixed1 <.1813. xNames3f. data = GermanIndustry. G´eraldine Henningsen summary( cesDePort3 ) ############# estimation with lambda. method = "BFGS". xNames1f. rho_1. control = list( maxit = 5000 ) ) summary( cesBfgsFixed1 ) cesBfgsFixed2 <.5300.1816. control = list( maxit = 5000 ) ) summary( cesNmFixed1 ) cesNmFixed2 <.00641 * GermanIndustry$time ) # names of adjusted inputs xNames1f <. data = GermanIndustry.0069 * GermanIndustry$time ) GermanIndustry$E2 <. 95 . "K3" ) ## Nelder-Mead. data = GermanIndustry.cesEst( "Y".00641 * GermanIndustry$time ) GermanIndustry$A3 <. method = "NM".Arne Henningsen. rho_1. control = list( maxit = 5000 ) ) summary( cesNmFixed3 ) ## BFGS. rho = 0. method = "BFGS". rho1 = 1.3654. rho = 0.0069 * GermanIndustry$time ) GermanIndustry$A2 <.2155. rho_1.5300. rho_1. method = "NM". 3654. iter. cesPortFixed2$rss ).cesLmFixed1 print( matrix( c( cesNmFixed2$rss.cesLmFixed2 print( matrix( c( cesNmFixed3$rss.8327. and rho fixed print( matrix( c( cesNmFixed1$rss. rho1 = 1. maxfev = 2000 ) ) summary( cesLmFixed3 ) ## Levenberg-Marquardt. cesBfgsFixed1$rss.max = 1000. rho = 5.8327. rho1 = 1. maxfev = 2000 ) ) summary( cesLmFixed2 ) cesLmFixed3 <.max = 1000. method = "PORT".cesEst( "Y".max = 1000.3654. method = "Newton". data = GermanIndustry. xNames1f. xNames3f.max = 1000 ) ) summary( cesPortFixed2 ) cesPortFixed3 <. xNames2f. data = GermanIndustry. multErr = TRUE. xNames2f.control( maxiter = 1024.cesEst( "Y". data = GermanIndustry. rho = 5. control = list( eval.2155. maxfev = 2000 ) ) summary( cesLmFixed3Me ) summary( cesLmFixed3Me.2155. . data = GermanIndustry. rho1 = 0. control = nls.control( maxiter = 1000. rho = 0. rho = 5. cesBfgsFixed2$rss. rSquaredLog = FALSE ) cesLmFixed3Me <.3654.lm. rho_1. digits = 16 ) cesFixed1 <.cesEst( "Y". data = GermanIndustry.lm.1816. rho = 5. rho1 = 0. rho = 1. cesPortFixed1$rss ).lm. data = GermanIndustry. control = nls. lambda.max = 1000 ) ) summary( cesPortFixed1 ) cesPortFixed2 <. iter.1816. digits = 16 ) cesFixed2 <. and rho fixed.5300. and rho fixed cesPortFixed1 <. cesNewtonFixed2$rss. multiplicative error term cesLmFixed1Me <. rho = 1. xNames3f. rSquaredLog = FALSE ) ## Newton-type.cesEst( "Y". ncol = 1 ). xNames3f.5300.cesEst( "Y".cesEst( "Y".96 Econometric Estimation of the Constant Elasticity of Substitution Function in R control = nls. data = GermanIndustry. xNames1f. maxfev = 2000 ) ) summary( cesLmFixed2Me ) summary( cesLmFixed2Me.cesEst( "Y". data = GermanIndustry. method = "PORT".lm. cesBfgsFixed3$rss. cesNewtonFixed1$rss. rho = 0.1813. rho_1.3654. control = list( eval. rho1 = 0. control = list( eval. iterlim = 500 ) summary( cesNewtonFixed2 ) cesNewtonFixed3 <. ncol = 1 ). xNames1f. rho1 = 0. control = nls.control( maxiter = 1000. rho1 = 1. rho = 1. method = "Newton". rho = 0. rSquaredLog = FALSE ) cesLmFixed2Me <. xNames2f. method = "Newton". and rho fixed cesNewtonFixed1 <. iter. maxfev = 2000 ) ) summary( cesLmFixed1Me ) summary( cesLmFixed1Me. control = nls. data = GermanIndustry.cesEst( "Y".lm. cesLmFixed3$rss. lambda. rho_1.cesEst( "Y". iterlim = 500 ) summary( cesNewtonFixed3 ) ## PORT.1813. data = GermanIndustry. rho1 = 1. rho_1.max = 1000 ) ) summary( cesPortFixed3 ) # compare RSSs of models with lambda. iterlim = 500 ) summary( cesNewtonFixed1 ) cesNewtonFixed2 <. cesLmFixed1$rss.8327.8327. rho1 = 0. xNames3f.1813. multErr = TRUE. method = "PORT". cesLmFixed2$rss.2155. rho1 = 0.control( maxiter = 1000.1816.control( maxiter = 1000. multErr = TRUE.5300. lambda.cesEst( "Y". rho = rhoVec. 4.cesEst( "Y". digits = 16 ) cesFixed3 <. control = list( maxit = 5000 ) ) summary( cesBfgsGridRho1 ) plot( cesBfgsGridRho1 ) cesBfgsGridRho2 <. rho1 = rhoVec. data = GermanIndustry. nested = TRUE ) all. method = "BFGS". coef = coef( cesFixed2 ).1 ). tName = "time". data = GermanIndustry. 0. tName = "time".lm.Arne Henningsen. tName = "time".lm. coef = c( coef( cesFixed2 )[1]. maxfev = 2000 ) ) summary( cesLmGridRho1 ) plot( cesLmGridRho1 ) cesLmGridRho2 <. Y2TcCalc ) ########## Grid Search for Rho_1 and Rho ############## rhoVec <. cesPortFixed3$rss ). rho = rhoVec. xNames2f ). maxfev = 2000 ) ) 97 . xNames2. rho1 = rhoVec. control = list( maxit = 5000 ) ) summary( cesBfgsGridRho2 ) plot( cesBfgsGridRho2 ) cesBfgsGridRho3 <. grid search for rho_1 and rho cesBfgsGridRho1 <.cesEst( "Y". method = "BFGS". data = GermanIndustry. xNames1. control = list( maxit = 5000 ) ) summary( cesBfgsGridStartRho1 ) cesBfgsGridStartRho2 <. returnGridAll = TRUE.control( maxiter = 1000. rho = rhoVec.cesEst( "Y".cesEst( "Y". start = coef( cesBfgsGridRho2 ).c( seq( -1. data = GermanIndustry.cesCalc( xNames2f. control = nls. returnGridAll = TRUE. rho1 = rhoVec. tName = "time". data = GermanIndustry. 14. rho = rhoVec.2. data = GermanIndustry. data = GermanIndustry. method = "BFGS". tName = "time".cesCalc( sub( "[123]$".cesEst( "Y". control = nls. xNames1.cesEst( "Y". ncol = 1 ). returnGridAll = TRUE. tName = "time". fitted( cesFixed2 ) ) Y2TcCalc <. "". control = list( maxit = 5000 ) ) summary( cesBfgsGridRho3 ) plot( cesBfgsGridRho3 ) # BFGS with grid search estimates as starting values cesBfgsGridStartRho1 <. start = coef( cesBfgsGridRho3 ). data = GermanIndustry. 0. rho = rhoVec. rho1 = rhoVec.equal( Y2Calc. lambda = 0.control( maxiter = 1000.cesEst( "Y".4 ) ) ## BFGS. tName = "time". returnGridAll = TRUE. data = GermanIndustry. data = GermanIndustry.4. xNames1. xNames2. 1. xNames3. seq( 1. G´eraldine Henningsen cesNewtonFixed3$rss.0069.cesLmFixed3 ## check if removing the technical progress worked as expected Y2Calc <. rho1 = rhoVec. seq( 4. grid search for rho1 and rho cesLmGridRho1 <. returnGridAll = TRUE.cesEst( "Y". tName = "time". 0. xNames2. method = "BFGS". control = list( maxit = 5000 ) ) summary( cesBfgsGridStartRho3 ) ## Levenberg-Marquardt.2 ). xNames3.equal( Y2Calc. start = coef( cesBfgsGridRho1 ). method = "BFGS". method = "BFGS". coef( cesFixed2 )[-1] ). tName = "time". control = list( maxit = 5000 ) ) summary( cesBfgsGridStartRho2 ) cesBfgsGridStartRho3 <. nested = TRUE ) all. max = 1000 ) ) .cesEst( "Y". data = GermanIndustry.98 Econometric Estimation of the Constant Elasticity of Substitution Function in R summary( cesLmGridRho2 ) plot( cesLmGridRho2 ) cesLmGridRho3 <. tName = "time". rho1 = rhoVec. = 2000 ) ) ## Newton-type. = 2000 ) ) "time".control( maxiter = 1000. = 2000 ) ) "time". start = coef( cesNewtonGridRho1 ). tName = "time". returnGridAll = TRUE.lm. rho = rhoVec. xNames1.analyticals = FALSE ) summary( cesNewtonGridStartRho2 ) cesNewtonGridStartRho3 <. data = GermanIndustry. iterlim = 500.cesEst( "Y". returnGridAll = TRUE. check. grid search for rho_1 and rho cesNewtonGridRho1 <. start = coef( cesNewtonGridRho3 ). tName = start = coef( cesLmGridRho1 ). returnGridAll = TRUE. xNames3. rho1 = rhoVec.cesEst( "Y".lm. xNames2. control = nls.analyticals = FALSE ) summary( cesNewtonGridRho2 ) plot( cesNewtonGridRho2 ) cesNewtonGridRho3 <. tName = "time". xNames2. data = GermanIndustry. maxfev summary( cesLmGridStartRho3 ) "time". rho1 = rhoVec. maxfev = 2000 ) ) summary( cesLmGridRho3 ) plot( cesLmGridRho3 ) # LM with grid search estimates as starting values cesLmGridStartRho1 <.lm. control = nls. method = "Newton". tName = "time". tName = "time". rho1 = rhoVec. method = "PORT". iterlim = 500 ) summary( cesNewtonGridRho1 ) plot( cesNewtonGridRho1 ) cesNewtonGridRho2 <.control( maxiter = 1000. check. method = "Newton". method = "Newton".max = 1000. xNames3. rho = rhoVec. data = GermanIndustry. tName = start = coef( cesLmGridRho3 ).cesEst( "Y".control( maxiter = 1000. xNames1. returnGridAll = TRUE. iter. maxfev summary( cesLmGridStartRho2 ) cesLmGridStartRho3 <. tName = "time". grid search for rho1 and rho cesPortGridRho1 <.cesEst( "Y". rho = rhoVec. tName = "time". tName = "time". method = "Newton".control( maxiter = 1000. xNames1. control = list( eval. xNames1.cesEst( "Y". iterlim = 500 ) summary( cesNewtonGridRho3 ) plot( cesNewtonGridRho3 ) # Newton-type with grid search estimates as starting values cesNewtonGridStartRho1 <. iterlim = 500. iterlim = 500. control = nls. iterlim = 500. returnGridAll = TRUE. method = "Newton". data = GermanIndustry.cesEst( "Y". rho1 = rhoVec.cesEst( "Y".cesEst( "Y". data = GermanIndustry.analyticals = FALSE ) summary( cesNewtonGridStartRho1 ) cesNewtonGridStartRho2 <. data = GermanIndustry. data = GermanIndustry. maxfev summary( cesLmGridStartRho1 ) cesLmGridStartRho2 <. control = nls. xNames3. start = coef( cesNewtonGridRho2 ).cesEst( "Y". rho = rhoVec. data = GermanIndustry.analyticals = FALSE ) summary( cesNewtonGridStartRho3 ) ## PORT. data = GermanIndustry. data = GermanIndustry.cesEst( "Y". method = "Newton". tName = start = coef( cesLmGridRho2 ). check. check. xNames2.lm. rho = rhoVec. xNames3. cesEst( "Y". "PORT_Grid". control = list( eval. 7.cesEst( "Y". 0. control = list( eval. "A".cesEst( "Y". "I". 15. xNames2. ) "time". rho = rhoVec. iter.1970 # rhos for grid search rhoVec <. ) "time". "K" ) # list ("folder") for results indRes <.GermanIndustry$year . control = list( eval. method = "PORT".array( NA.c( "E". data = GermanIndustry. ) estimations for different industrial sectors ########## # remove years with missing or incomplete data GermanIndustry <. 0. "F" ) # names of inputs xNames <. tName = "time". seq( 2. "A" ) xNames[[ 2 ]] <.max = 1000 ) summary( cesPortGridStartRho3 ) ######## "time". iter. data = GermanIndustry. 20. iter. "PORT".list() # names of estimation methods metNames <.c( "LM". xNames3. iter.3 ). method = "PORT".max = 1000. method = "PORT". 30. returnGridAll = TRUE. xNames1. year >= 1970 & year <= 1988. 10.6. method = "PORT". method = "PORT". 0.2 ). length( metNames ) ).9. "E" ) xNames[[ 3 ]] <. tName = start = coef( cesPortGridRho3 ). 100 ) # industries (abbreviations) indAbbr <. xNames2. "E". seq( 0. 1.cesEst( "Y".c( "K". data = GermanIndustry.Arne Henningsen. rho = rhoVec.max = 1000 ) ) summary( cesPortGridRho3 ) plot( cesPortGridRho3 ) # PORT with grid search estimates as starting values cesPortGridStartRho1 <. "PORT_Start" ) # table for parameter estimates tabCoef <. control = list( eval.max = 1000. "A". tName = start = coef( cesPortGridRho1 ). dim = c( 9.list() xNames[[ 1 ]] <. 99 . ) # adjust the time trend so that it again starts with 0 GermanIndustry$time <.max = 1000 ) summary( cesPortGridStartRho1 ) cesPortGridStartRho2 <. tName = start = coef( cesPortGridRho2 ).max = 1000 ) ) summary( cesPortGridRho2 ) plot( cesPortGridRho2 ) cesPortGridRho3 <.. xNames3. iter.max = 1000. 1 ).c( seq( -1. returnGridAll = TRUE. G´eraldine Henningsen summary( cesPortGridRho1 ) plot( cesPortGridRho1 ) cesPortGridRho2 <.max = 1000 ) summary( cesPortGridStartRho2 ) cesPortGridStartRho3 <. "N".c( "C".subset( GermanIndustry. rho1 = rhoVec. rho1 = rhoVec. "V".cesEst( "Y". 50. "P". control = list( eval.5. tName = "time". data = GermanIndustry. data = GermanIndustry.max = 1000.max = 1000. 12. "S".c( "K". list() for( modNo in 1:3 ) { cat( "\n=======================================================\n" ) cat( "Industry No.tmpCoef[ "lambda" ] tabR2[ indNo.array( NA.tmpSum$r.tmpCoef[ "gamma" ] >= 0 & tmpCoef[ "delta_1" ] >= 0 & tmpCoef[ "delta_1" ] <= 1 & tmpCoef[ "delta" ] >= 0 & tmpCoef[ "delta" ] <= 1 & tmpCoef[ "rho_1" ] >= -1 & tmpCoef[ "rho" ] >= -1 ## PORT indRes[[ indNo ]][[ modNo ]]$port <. "m_" ). each = 3 ). metNames ) ) # table for technological change parameters tabLambda <. "\n". metNames ) ) # table for R-squared values tabR2 <. xIndNames. 3 ).2 ):( 3 * modNo ). indAbbr. modNo. "LM" ] <. "LM" ] <.coef( indRes[[ indNo ]][[ modNo ]]$lm ) tabCoef[ ( 3 * modNo . indNo.cesEst( yIndName. rep( 1:3. drop = TRUE ] #econometric estimation with cesEst for( indNo in 1:length( indAbbr ) ) { # name of industry-specific output yIndName <. dim = c( 7. modNo.tabLambda # table for RSS values tabRss <. ".lm.squared tabRss[ indNo. ")".tmpSum$rss tabConsist[ indNo. modNo ] <. "Y".paste( indAbbr[ indNo ].summary( indRes[[ indNo ]][[ modNo ]]$lm ) ) tmpCoef <. modNo. data = GermanIndustry. . "lambda" ). sep = "_" ) # sub-list ("subfolder") for all models of this industrie indRes[[ indNo ]] <. control = nls. . "rho". " (". indNo.tabLambda # table for economic consistency of LM results tabConsist <. sep = "" ).list() ## Levenberg-Marquardt indRes[[ indNo ]][[ modNo ]]$lm <. sep = "" ) cat( "=======================================================\n\n" ) # names of industry-specific inputs xIndNames <. maxfev = 2000 ) ) print( tmpSum <.100 Econometric Estimation of the Constant Elasticity of Substitution Function in R dimnames = list( paste( rep( c( "alpha_". rep( c( "rho_1".control( maxiter = 1024. "lambda" ) ] tabLambda[ indNo. tName = "time". 3. xNames[[ modNo ]]. ".cesEst( yIndName.tabLambda[ . 3 ). length( metNames ) ). 1. ". "LM" ] <. "beta_". "LM" ] <tmpCoef[ c( "rho_1". c(1:3). model No. sep = "_" ) # sub-sub-list for all estimation results of this model/industrie indRes[[ indNo ]][[ modNo ]] <. dimnames = list( indAbbr. xIndNames. modNo.paste( indAbbr[ indNo ]. "rho". max = 2000 ) ) print( tmpSum <.2 ):( 3 * modNo ). "L-BFGS-B".tmpSum$rss # PORT. "$".LM".2 ):( 3 * modNo ).squared tabRss[ indNo. indNo. "PORT" ] <. "$\\lambda$" ] <. "$\\lambda$" ] <.tmpSum$r. "PORT_Start" ] <tmpCoef[ c( "rho_1". "PORT_Grid" ] <.squared tabRss[ indNo.tmpSum$r. "Newton grid start".2 ):( 3 * modNo ).summary( indRes[[ indNo ]][[ modNo ]]$portGrid ) ) tmpCoef <. "rho".result1 result1[ "Kemfert (1998)".coef( indRes[[ indNo ]][[ modNo ]]$portGrid ) tabCoef[ ( 3 * modNo .0.tmpSum$rss # PORT.max = 2000.cesEst( yIndName. control = list( eval. "Newton". sep = "" ). iter. modNo. "SANN .0. data = GermanIndustry. xIndNames. ncol = 9 ) rownames( result1 ) <.0069 result3[ "Kemfert (1998)".max = 2000 ) ) print( tmpSum <. modNo. method = "PORT".tmpCoef[ "lambda" ] tabR2[ indNo. "BFGS". "rho".matrix( NA. "fixed".00641 101 . G´eraldine Henningsen tName = "time". "lambda" ) ] tabLambda[ indNo. "PORT". "$\\lambda$" ] <. iter. "PORT_Grid" ] <tmpCoef[ c( "rho_1". "PORT" ] <. "lambda" ) ] tabLambda[ indNo.max = 2000.tmpSum$rss } } ########## tables for presenting estimation results ############ result1 <. "LM grid start" ) colnames( result1 ) <.max = 2000 ) ) print( tmpSum <. modNo. modNo. tName = "time". "DE . "PORT_Start" ] <. "LM grid". grid search for starting values indRes[[ indNo ]][[ modNo ]]$portStart <.PORT". indNo. rho = rhoVec.PORT". "PORT" ] <. "rho". rho1 = rhoVec. "PORT_Start" ] <. nrow = 24. "NM . "RSS". "PORT_Start" ] <.0222 result2[ "Kemfert (1998)". indNo.summary( indRes[[ indNo ]][[ modNo ]]$port ) ) tmpCoef <. grid search indRes[[ indNo ]][[ modNo ]]$portGrid <. "$R^2$" ) result3 <. "PORT grid".c( "Kemfert (1998)". tName = "time". data = GermanIndustry. "BFGS grid start".tmpCoef[ "lambda" ] tabR2[ indNo. modNo.tmpCoef[ "lambda" ] tabR2[ indNo.Arne Henningsen. "LM". modNo. "PORT_Grid" ] <. "SANN".squared tabRss[ indNo. "NM".c( paste( "$\\". modNo. modNo. "PORT" ] <tmpCoef[ c( "rho_1".tmpSum$r. control = list( eval. modNo.result2 <. method = "PORT". iter. "DE . start = coef( indRes[[ indNo ]][[ modNo ]]$portGrid ).coef( indRes[[ indNo ]][[ modNo ]]$port ) tabCoef[ ( 3 * modNo . "SANN . "DE". "PORT grid start". "lambda" ) ] tabLambda[ indNo.PORT". "c".LM". "Newton grid". control = list( eval.coef( indRes[[ indNo ]][[ modNo ]]$portStart ) tabCoef[ ( 3 * modNo . names( coef( cesLm1 ) ). "BFGS grid". method = "PORT".cesEst( yIndName.0.LM".summary( indRes[[ indNo ]][[ modNo ]]$portStart ) ) tmpCoef <. xIndNames. data = GermanIndustry. "NM . "PORT_Grid" ] <.max = 2000. ] <. ] <. 0.function( model ) { if( is. summary( cesFixed3 )$r.makeRow( cesNm2 ) result3[ "NM".makeRow( cesLm3 ) result1[ "NM".makeRow( cesNewton3 ) result1[ "BFGS".c( coef( cesFixed3 )[1].0222. ] <. ] <.c( coef( model ).786 result3[ "Kemfert (1998)".FALSE } result <. summary( cesFixed2 )$r.102 Econometric Estimation of the Constant Elasticity of Substitution Function in R result1[ "Kemfert (1998)". ] <.5300 result2[ "Kemfert (1998)".squared ) result2[ "fixed".makeRow( cesLm2 ) result3[ "LM". "$\\rho_1$" ] <.cesFixed2$convergence. ] <.makeRow( cesPort1 ) result2[ "PORT".makeRow( cesBfgs1 ) result2[ "BFGS". ] <. ] <.1813 result2[ "Kemfert (1998)".9996 result2[ "Kemfert (1998)".00641. "$R^2$" ] <. ifelse( is. summary( cesFixed1 )$r. "$\\rho$" ] <.makeRow( cesBfgsCon2 ) result3[ "L-BFGS-B".makeRow( cesLm1 ) result2[ "LM". ] <. "$\\rho$" ] <.2155 result3[ "Kemfert (1998)".3654 result1[ "Kemfert (1998)". 0.makeRow( cesPort2 ) result3[ "PORT". coef( cesFixed3 )[-1]. model$rss.cesFixed3$convergence. ] <.makeRow( cesPort3 ) result1[ "LM". cesFixed1$rss. ] <.makeRow( cesNm3 ) . ] <.squared ) return( result ) } result1[ "Newton".makeRow( cesNewton1 ) result2[ "Newton".0.5.null( model$convergence ). cesFixed3$rss.1816 result3[ "Kemfert (1998)". ] <. ] <. "$R^2$" ] <.9986 result1[ "fixed".1. cesFixed2$rss. 0.0.makeRow( cesBfgsCon1 ) result2[ "L-BFGS-B".c( coef( cesFixed1 )[1]. model$convergence ). "$\\rho$" ] <. ] <. ] <.8327 result1[ "Kemfert (1998)". NA. ] <.makeRow( cesNm1 ) result2[ "NM". "$R^2$" ] <. coef( cesFixed2 )[-1]. cesFixed1$convergence.0.0. ] <. ] <.makeRow( cesNewton2 ) result3[ "Newton".squared ) makeRow <.makeRow( cesBfgs2 ) result3[ "BFGS".0.squared ) result3[ "fixed".c( coef( cesFixed2 )[1]. "$\\rho_1$" ] <.null( model$multErr ) ) { model$multErr <.makeRow( cesBfgs3 ) result1[ "L-BFGS-B".0.0069. ] <. "$\\rho_1$" ] <. coef( cesFixed1 )[-1].makeRow( cesBfgsCon3 ) result1[ "PORT". ] <.1. summary( model )$r. ] <. ] <.makeRow( cesNmPort1 ) result2[ "NM .makeRow( cesDeLm2 ) result3[ "DE .makeRow( cesSannPort1 ) result2[ "SANN . ] <.makeRow( cesBfgsGridRho2 ) result3[ "BFGS grid".makeRow( cesBfgsGridStartRho1 ) result2[ "BFGS grid start". ] <.PORT".Arne Henningsen. ] <. ] <.makeRow( cesBfgsGridRho1 ) result2[ "BFGS grid".makeRow( cesNmPort3 ) result1[ "SANN". ] <.makeRow( cesPortGridRho2 ) result3[ "PORT grid".makeRow( cesNmPort2 ) result3[ "NM .makeRow( cesDe3 ) result1[ "DE .makeRow( cesNmLm2 ) result3[ "NM .LM". ] <. ] <.LM".makeRow( cesNewtonGridRho3 ) result1[ "Newton grid start".makeRow( cesBfgsGridStartRho3 ) result1[ "PORT grid".PORT".PORT".makeRow( cesNewtonGridStartRho3 ) result1[ "BFGS grid". ] <. ] <. ] <. ] <.makeRow( cesPortGridRho3 ) result1[ "PORT grid start". G´eraldine Henningsen result1[ "NM .PORT".makeRow( cesNewtonGridRho2 ) result3[ "Newton grid".makeRow( cesPortGridRho1 ) result2[ "PORT grid".makeRow( cesSann2 ) result3[ "SANN".makeRow( cesSannPort3 ) result1[ "DE".LM". ] <.makeRow( cesDePort3 ) result1[ "Newton grid".PORT". ] <. ] <.LM". ] <. ] <.makeRow( cesBfgsGridStartRho2 ) result3[ "BFGS grid start".makeRow( cesSannLm2 ) result3[ "SANN .makeRow( cesPortGridStartRho1 ) result2[ "PORT grid start". ] <.PORT".makeRow( cesLmGridRho1 ) 103 . ] <. ] <.PORT".LM".makeRow( cesNmLm1 ) result2[ "NM .makeRow( cesSannLm3 ) result1[ "SANN .makeRow( cesDe1 ) result2[ "DE". ] <. ] <.makeRow( cesNewtonGridStartRho2 ) result3[ "Newton grid start". ] <. ] <. ] <.makeRow( cesNewtonGridStartRho1 ) result2[ "Newton grid start". ] <. ] <. ] <. ] <. ] <. ] <.makeRow( cesSannPort2 ) result3[ "SANN . ] <.LM".makeRow( cesNewtonGridRho1 ) result2[ "Newton grid". ] <.LM".makeRow( cesDePort1 ) result2[ "DE .makeRow( cesSannLm1 ) result2[ "SANN . ] <.makeRow( cesNmLm3 ) result1[ "NM . ] <. ] <. ] <.LM".PORT".makeRow( cesSann3 ) result1[ "SANN .makeRow( cesPortGridStartRho2 ) result3[ "PORT grid start".makeRow( cesDePort2 ) result3[ "DE . ] <.PORT".makeRow( cesBfgsGridRho3 ) result1[ "BFGS grid start". ] <. ] <. ] <.makeRow( cesSann1 ) result2[ "SANN".makeRow( cesDeLm3 ) result1[ "DE .makeRow( cesPortGridStartRho3 ) result1[ "LM grid".makeRow( cesDe2 ) result3[ "DE".LM".makeRow( cesDeLm1 ) result2[ "DE . ] <. rep( 4.tex" ) result3 <. 6 ).readLines( tempFile ) close( tempFile ) for( i in grep( "MarkThisRow ". rep( 4. "MarkThisRow ".makeRow( cesLmGridRho3 ) result1[ "LM grid start".tex" ) . file = tempFile. fileName ) invisible( latexLines ) } result1 <. 4 ) ) printTable( xTab2.function( result ) { rownames( result ) <. "\\\\color{red}" . 0. align = "lrrrrrrrrr".sub( "MarkThisRow".latexLines[ i ] ) latexLines[ i ] <. 6 ). ] <. 0.colorRows( result2 ) xTab2 <. ] <.function( xTab. 0.paste( ifelse( !is. ] <. ] <. rownames( result ). 0.gsub( "&". ] <.tex" ) result2 <. fileName = "kemfert1Coef. "$\\rho_1$" ] < -1 | result[ .xtable( result3. digits = c( 0. "" ). "$\\rho$" ] < -1. 0.colorRows( result1 ) xTab1 <.xtable( result1. "$\\delta_1$" ] ) & ( result[ . "$\\delta_1$" ] >1 | result[ . 4 ) ) printTable( xTab1. fileName ) { tempFile <.makeRow( cesLmGridStartRho3 ) # create LaTeX tables library( xtable ) colorRows <.text. 6 ). rep( 4. "$\\delta$" ] < 0 | result[ . align = "lrrrrrrrrr".file() print( xTab. sep = "" ) return( result ) } printTable <.function = function(x){x} ) latexLines <.makeRow( cesLmGridStartRho1 ) result2[ "LM grid start".104 Econometric Estimation of the Constant Elasticity of Substitution Function in R result2[ "LM grid". latexLines.makeRow( cesLmGridStartRho2 ) result3[ "LM grid start". "$\\delta$" ] > 1 ) | result[ . sanitize. "& \\\\color{red}" .colorRows( result3 ) xTab3 <. digits = c( 0. fileName = "kemfert2Coef.latexLines[ i ] ) } writeLines( latexLines. floating = FALSE. "$\\delta_1$" ] < 0 | result[ . 4 ) ) printTable( xTab3.na( result[ .xtable( result2. 0. fileName = "kemfert3Coef. align = "lrrrrrrrrr". value = FALSE ) ) { latexLines[ i ] <.makeRow( cesLmGridRho2 ) result3[ "LM grid". digits = c( 0. B´elisle CJP (1992).” The Review of Economics and Statistics. Wavelets and Statistics.” Computers & Operations Research. Beale EML (1972). pp. 99–119.” In A Antoniadis (ed. Byrd R. “Explaining Movements in the Labour Share. Bentolila SJ. Arrow KJ. 15(1). Article 4. Academic Press. Handbook of Economic Growth. 79. North Holland.S. 1190–1208. pp. Broyden CG (1970). G´eraldine Henningsen 105 References Acemoglu D (1998). “Substitution and Complementarity in C.” The Quarterly Journal of Economics. 16. “Will the Real Elasticity of Substitution Please Stand up? (A Comparison of the Allan/Uzawa and Morishima Elasticities). Caselli F (2005).” The American Economic Review. Buckheit J.” In P Aghion. “Convergence Theorems for a Class of Simulated Annealing Algorithms on Rd . Numerical Methods for Nonlinear Optimization.” Journal of Economic Methodology. URL http://www. Nocedal J. “The Role of Data/Code Archives in the Future of Economic Research. Minhas BS. 882–888. Caselli F.). “Capital-labor Substitution and Economic Efficiency. London.” Southern Economic Journal. Solow RM (1961). Moroney JR (1994). 60(4).jstor. Greene WH. Amras P (2004). “Is the U. 1703– 1725. Coleman Wilbur John I (2006).Arne Henningsen. Anderson R. “Wavelab and Reproducible Research. Blackorby C. Donoho DL (1995). T¨ orn A (2004).org/stable/30034059. Springer-Verlag. 3(1). 886–895. 29.” Contribution in Macroeconomics. 43(3).” Contributions to Macroeconomics.” Journal of the Institute of Mathematics and Its Applications.jstor. 885–895. Anderson RK. 679–742.).” The American Economic Review. . 6. “Population Set-Based Global Optimization Algorithms: Some Modifications and Numerical Studies. Lu P. McCullough BD.E. “The Convergence of a Class of Double-rank Minimization Algorithms. 31(10). “The World Technology Frontier. 4(1).org/stable/1927286.S. Russel RR (1989). Models.” In FA Lootsma (ed. Gilles SP (2006).). 76–90. 225–250. 113(4). 1055–1089.” Journal of Applied Probability. 499–522. 96(3). “Accounting for Cross-Country Income Differences. Chenery BH. URL http://www. Ali MM. SN Durlauf (eds.” SIAM Journal for Scientific Computing. “A Limited Memory Algorithm for Bound Constrained Optimization. “Why Do New Technologies Complement Skills? Directed Technical Change and Wage Inequality. Zhu C (1995). “A Derivation of Conjugate Gradients. 39–43. Article 9. Vinod HD (2008). Aggregate Production Function Cobb-Douglas? New Estimates of the Elasticity of Substitution. jstor. Cambridge University Press. Hayfield T. Statistical Models in S. Chambers RG (1988).org/package=car. 6th edition.R-project. pp. minpack.” International Journal of Microsimulation. Reeves C (1964).” Journal of Applied Econometrics. micEcon: Tools for Microeconomic Analysis and Microeconomic Modeling.pdf. Cobb CW (1928). Fox J (2009). Applied Production Analysis.com/ cm/cs/cstr/153.106 Econometric Estimation of the Constant Elasticity of Substitution Function in R Cerny V (1985). R package version 0. “A Thermodynamical Approach to the Travelling Salesman Problem: an Efficient Simulation Algorithm. URL http://www. “A Theory of Production. “Usage Summary for Selected Optimization Routines. “Multiple Regimes and Cross-Country Growth Behavior. Prentice-Hall. 51(4). Henningsen A (2010). 45. Econometric Analysis.microsimulation.” Journal of Optimization Theory and Applications. Fletcher R. 317–322. Douglas PC.bell-labs. “Capital-Skill Complementarity. 27(5). Chapman & Hall. .org/package=micEcon.” Computer Journal.” Journal of Statistical Software. USA). Johnson PA (1995). Dennis JE. Mullen KM (2009). Davies JB (2009). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. 465–468. 23–26. “A New Approach to Variable Metric Algorithms. “A Family of Variable Metric Updates Derived by Variational Means. http://CRAN. Elzhov TV. 139–165. 49–65. Goldfarb D (1970).R-project. Englewood Cliffs (NJ.” Computer Journal. Durlauf SN. 18(1).2-16. 1–32. URL http: //CRAN. London. Cambridge. URL http://netlib. 48–154.” Mathematics of Computation. 41–51. AT&T Bell Laboratories.pdf. A Dual Approach.lm. 13. 24. 365–384. Schnabel RB (1983). R package version 1. “Nonparametric Econometrics: The np Package. car: Companion to Applied Regression.org/IJM/V2_1/IJM_2_1_4. URL http://CRAN. 7. Prentice Hall. Hastie TJ (1992). Chambers JM. “Combining Microsimulation with CGE and Macro Modelling for Distributional Analysis in Developing and Transition Countries.lm: R Interface to the Levenberg-Marquardt Nonlinear Least-Squares Algorithm Found in MINPACK.R-project.1-4.6.” The American Economic Review. Racine JS (2008).” Computing Science Technical Report 153. “Function Minimization by Conjugate Gradients. Griliches Z (1969). Greene WH (2008). R package version 1.org/pss/1926439. Fletcher R (1970). 10. Gay DM (1990). URL http://www.” The Review of Economics and Statistics.org/package=minpack. ” Quarterly Journal of Economics. Papageorgiou C (2008). Hamann JD (2007). “Factor Substitution and Factor-Augmenting Technical Progress in the United States: A Normalized Supply-Side System Approach. 599–600. Vecchi MP (1983). 220(4598). Hoff A (2004).org/package= micEconCES. 24. Ohanian LE.1. micEconCES: Analysis with the Constant Elasticity of Scale (CES) Function. 180–189.” Econometrica. “A Contribution to the Empirics of Economic Growth. 23(4). 671–680.” Energy Economics. AER: Applied Econometrics with R. Gelatt CD. 249–264. URL http: //www. Krusell P. Institute of Food and Resource Economics. 107.org/v23/i04/. Kleiber C. Maddala G. Kemfert C (1998). Klump R.” Econometrica. 1–40.org/package=AER. 68(5). “Identifying the Elasticity of Substitution with Biased Technical Change. “Econometric Estimation of the “Constant Elasticity of Substitution” Function in R: Package micEconCES. “Optimization by Simulated Annealing.” Journal of Statistical Software. Le´on-Ledesma MA. 76(3). Klump R. “The Linear Approximation of the CES Function with n Input Variables. G´eraldine Henningsen 107 Henningsen A. R package version 1. SIAM Society for Industrial and Applied Mathematics. repec. “The Aggregate Production Function of the Finnish Economy in the Twentieth Century.” International Economic Review. “Editorial Introduction: The CES Production Function in the Theory and Empirics of Economic Growth. Iterative Methods of Optimization. 723–737. Henningsen G (2011b). 100. 295–306. Henningsen A. Kadane J (1967). Violante GL (2000).” The Review of Economics and Statistics. Romer D. Willman A (2007).” Southern Economic Journal. “systemfit: A Package for Estimating Systems of Simultaneous Equations in R. Weil DN (1992).9. Philadelphia. 8.jstatsoft. “Estimation of Returns to Scale and the Elasticity of Substitution.” FOI Working Paper 2011/9.” Marine Resource Economics. 1330–1357. McAdam P. 89(1).” Journal of Macroeconomics.Arne Henningsen. http://CRAN. 30(2).” Science. . Mankiw NG. McAdam P. “Estimated Substitution Elasticities of a Nested CES Production Function Approach for Germany. 419–423. Zeileis A (2009).R-project. Henningsen G (2011a). 20(3). Kirkpatrick S. http://CRAN. “On Estimation of the CES Production Function. Kmenta J (1967). “Capital-skill Complementarity and Inequality: A Macroeconomic Analysis. Willman A (2010). Kelley CT (1999). 183–192. Luoto J (2010). Luoma A. University of Copenhagen.R-project. Henningsen A. URL http://EconPapers.” American Economic Review. 407–437. R package version 0.org/RePEc:foi:wpaper:2011_9. 19. 1029–1053. R´ıos-Rull JV. MINPACK. Masanjala WH. URL http://www. 11(2). “Do Economics Journal Archives Promote Replicable Research?” Canadian Journal of Economics. Papageorgiou C. 40(6). Weiss BE (1985). “Open Access Economics Journals and the Market for Reproducible Economic Research. Austria. 16. . 30(4). 41(4). Harrison TD (2008). Saam M (2005). Louisiana State University. 43. North-Eastern Hill University. Argonne National Laboratory. 30.” Revue Francaise d’Informatique et de Recherche Op´erationnelle. “A Modular System of Algorithms for Unconstrained Minimization.108 Econometric Estimation of the Constant Elasticity of Substitution Function in R Marquardt DW (1963). McCullough BD.org/v40/i06. “A Simplex Algorithm for Function Minimization. URL http://www.” Working paper 2005-07. Koontz JE.org/minpack/.” Journal of Applied Econometrics. “Human Capital Aggregation and Relative Wages Across Countries. 1–26. 39(1). Cline J (2011). 201–218. ISBN 3-900051-07-0. “The Solow Model with CES Technology: Nonlinearities and Parameter Heterogeneity.” The Review of Economic Studies. Mead R (1965).edu/~bdm25/eap-1. Price KV.R-project. Windover D.” The Review of Economic Studies. Shillong. Springer-Verlag. Hillstrom KE (1980). URL http://www. Ribi`ere G (1969). “Note Sur la Convergence de M´ethodes de Directions Conjugu´ees. 431–441. 7. 73–83.netlib. URL http://www.org. 117–126. Garbow BS. 11(4). URL http: //www. 308–313. ISBN 540209506. Mishra SK (2007). McCullough BD (2009). Mor´e JJ. Papageorgiou C (2004). 19(2).” Journal of Statistical Software. Gil DL. R Development Core Team (2011). R: A Language and Environment for Statistical Computing. Mullen KM. Differential Evolution – A Practical Approach to Global Optimization. 171–201.org/stable/2296809. “DEoptim: An R Package for Global Optimization by Differential Evolution. URL http://www.” ACM Transactions on Mathematical Software. 419–440.drexel. Polak E. McFadden D (1963). 1406–1420. McGeary KA.jstatsoft.jstor.” Computer Journal.pdf. “A Note on Numerical Estimation of Sato’s Two-Level CES Production Function. Sato K (1967). “A Two-Level Constant-Elasticity-of-Substitution Production Function.” Economic Analysis and Policy.” Journal of the Society for Industrial and Applied Mathematics. “An Algorithm for Least-Squares Estimation of Non-linear Parameters.” Journal of Macroeconomics.jstor. “Two-Level CES Production Technology in the Solow and Diamond Growth Models. “Constant Elasticity of Substitution Production Function. Nelder JA. Storn RM.org/stable/2098941. Ardia D. Vienna. Natural Computing. Department of Economics. 1587–1601.” MPRA Paper 1019. pages. R Foundation for Statistical Computing. Lampinen JA (2006). 35–43. Schnabel RB. Pandey M (2008). “Conditioning of Quasi-Newton Methods for Function Minimization. 421–441. Affiliation: Arne Henningsen Institute of Food and Resource Economics University of Copenhagen Rolighedsvej 25 1958 Frederiksberg C. 19(2). 341–359. Price K (1997). Shanno DF (1970). 363–377. “Kmenta Approximation of the CES Production Function.” The Review of Economics and [email protected] [accessed 2010-02-09].com URL: http://www. “Production Functions with Constant Elasticity of Substitution. http://www2. Denmark E-mail: arne.” Mathematics of Computation. 26(4). 291–299. Lovell CAK (1978).de/ uebe/Lexikon/K/Kmenta. 62. 708–714.” International Economic Review. “CES Estimation Techniques. Claerbout J (2000). Denmark E-mail: gehe@risoe. 61–67.” Journal of Global Optimization. Uzawa H (1962). 11. Sun K. 24. Storn R. “Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces.Arne Henningsen.” The Review of Economic Studies.” Journal of the Franklin Institute.arne-henningsen. 2(6). “Biases in Approximating Log Production.” Computing in Science & Engineering. Uebe G (2000). “Making Scientific Computations Reproducible. Sorenson HW (1969). “In Investigation of the Kmenta Approximation to the CES Function.dtu.hsu-hh. Thursby JG. “Comparison of Some Conjugate Direction Procedures for Function Minimization. Henderson DJ.” Journal of Applied Econometrics. 295–299. Thursby JG (1980). Kumbhakar SC (2011). 288(6). G´eraldine Henningsen 109 Schwab M.dk .” Macromoli: Goetz Uebe’s notebook on macroeconometric models and literature.name/ G´eraldine Henningsen Systems Analysis Division Risø National Laboratory for Sustainable Energy Technical University of Denmark Frederiksborgvej 399 4000 Roskilde. 29. 647–656. Karrenbach M.
Copyright © 2024 DOKUMEN.SITE Inc.