
# Model (economics)

A diagram of the IS/LM model

In economics, the term model denotes a theoretical construct that represents economic processes by a set of variables and a set of logical and quantitative relationships between them. Models are constructed to reason within an idealized logical framework about economic processes.

In general terms, economic models are a simplification of and abstraction from observed data. Simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements, and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant, and which ways of analysing and presenting this information are useful.

In addition to their professional academic interest, the uses of models include:

• Forecasting economic activity in a way in which conclusions are logically related to assumptions;
• Proposing economic policy to modify future economic activity;
• Presenting reasoned arguments that justify economic policy at the national level, explain and influence company strategy at the level of the firm, or inform economic decisions at the level of the household;
• Planning and allocation, in the case of centrally planned economies, and on a smaller scale in logistics and management of businesses.

Obviously any kind of reasoning about anything uses representations by variables and logical relationships. A model however establishes an argumentative framework for applying logic and mathematics that can be independently discussed and tested and that can be applied in various instances. Policies and arguments that rely on economic models have a clear basis for soundness, namely the validity of the supporting model.

Economic models in current use have no pretensions of being theories of everything economic; any such pretensions would immediately be thwarted by computational infeasibility and the paucity of theories for most types of economic behavior. Therefore conclusions drawn from models will be approximate representations of economic facts. However, properly constructed models can remove extraneous information and isolate useful approximations of key relationships. In this way more can be understood about the relationships in question than by trying to understand the entire economic process.

The details of model construction vary with the type of model and its application, but a generic process can be identified. Generally any modelling process has two steps: generating a model, then checking the model for accuracy (sometimes called diagnostics). The diagnostic step is important because a model is only useful to the extent that it accurately mirrors the relationships it purports to describe. Creating and diagnosing a model is frequently an iterative process in which the model is modified (and hopefully improved) with each iteration of diagnosis and respecification. Once a satisfactory model is found, it should be double-checked by applying it to a different data set.
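The generate-then-diagnose cycle can be sketched in a few lines of code. The example below fits an illustrative linear model by ordinary least squares and then diagnoses it on a different data set, as described above; all data values are invented for demonstration.

```python
# Minimal sketch of the generate/diagnose modelling cycle.
# Data values below are invented purely for illustration.

def fit_line(xs, ys):
    """Fit y = a + b*x by ordinary least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def mean_squared_error(xs, ys, a, b):
    """Average squared deviation of the data from the fitted line."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Step 1: generate the model from one data set.
train_x = [1.0, 2.0, 3.0, 4.0, 5.0]
train_y = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x
a, b = fit_line(train_x, train_y)

# Step 2: diagnose by applying the fitted model to a *different* data set.
test_x = [6.0, 7.0, 8.0]
test_y = [12.0, 14.2, 15.9]
holdout_error = mean_squared_error(test_x, test_y, a, b)
print(round(b, 2), round(holdout_error, 2))
```

A large holdout error at step 2 would send us back to step 1 to respecify the model, mirroring the iterative process described above.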


## Types of Models

Broadly speaking, economic models are either stochastic or non-stochastic.

• Non-stochastic mathematical models may be purely qualitative (for example, models involved in some aspect of social choice theory) or quantitative (involving specific forms of functional relationships between variables). In some cases economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions.
• Qualitative models - Although almost all economic models involve some form of mathematical or quantitative analysis, qualitative models are occasionally used. One example is qualitative scenario planning, in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from a lack of precision.
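A direction-of-movement prediction of the kind mentioned above ("if the price of an item increases, then the demand for that item will decrease") can be checked numerically without committing to exact magnitudes. The demand curve below is an assumed linear form, chosen purely for illustration; the model's only qualitative claim is monotonicity.

```python
# Sketch of a purely qualitative prediction: quantity demanded
# falls as price rises. The linear functional form is assumed
# only for demonstration; the prediction is about direction.

def demand(price):
    """Hypothetical linear demand curve."""
    return max(0.0, 100.0 - 2.0 * price)

prices = [10.0, 20.0, 30.0, 40.0]
quantities = [demand(p) for p in prices]

# The qualitative claim: the sequence of quantities is strictly decreasing.
assert all(q1 > q2 for q1, q2 in zip(quantities, quantities[1:]))
print(quantities)
```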

At a more practical level, quantitative modeling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. As a result, no overall model taxonomy is naturally available. We can nonetheless provide a few examples which illustrate some particularly relevant points of model construction.

• An accounting model is one based on the premise that for every credit there is a debit. More symbolically, an accounting model expresses some principle of conservation in the form
algebraic sum of inflows = sinks - sources
This principle is certainly true for money and it is the basis for national income accounting. Accounting models are true by convention; that is, any experimental failure to confirm them would be attributed to fraud, arithmetic error, or an extraneous injection (or destruction) of cash, which we would interpret as showing that the experiment was conducted improperly.
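The "for every credit there is a debit" convention can be sketched as a tiny double-entry ledger: every transaction posts equal debits and credits, so the algebraic sum over the whole ledger is zero by construction. The transactions below are invented for illustration.

```python
# Sketch of the accounting conservation principle: each transaction
# debits one account and credits another by the same amount, so the
# ledger sums to zero by convention. Transactions are invented.

transactions = [
    # (account debited, account credited, amount)
    ("inventory", "cash", 500.0),   # buy stock
    ("cash", "sales", 700.0),       # sell it
    ("rent", "cash", 200.0),        # pay rent
]

balances = {}
for debit_acct, credit_acct, amount in transactions:
    balances[debit_acct] = balances.get(debit_acct, 0.0) + amount
    balances[credit_acct] = balances.get(credit_acct, 0.0) - amount

# Conservation: total debits equal total credits, so balances sum to zero.
print(sum(balances.values()))
```

A nonzero sum would signal exactly the kind of failure the text describes: fraud, arithmetic error, or an unrecorded injection or destruction of cash.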
• Optimality and constrained optimization models - Other examples of quantitative models are based on principles such as profit or utility maximization. An example of such a model is given by the comparative statics of taxation on the profit-maximizing firm. The profit of a firm is given by
$\pi(x,t) = x p(x) - C(x) - t x$
where p(x) is the price that a product commands in the market if it is supplied at the rate x, xp(x) is the revenue obtained from selling the product, C(x) is the cost of bringing the product to market at the rate x, and t is the tax that the firm must pay per unit of the product sold.
The profit maximization assumption states that a firm will produce at the output rate x if that rate maximizes the firm's profit. Using differential calculus we can obtain conditions on x under which this holds. The first-order maximization condition for x is
$\frac{\partial \pi(x,t)}{\partial x} =\frac{\partial (x p(x) - C(x))}{\partial x} -t= 0$
Regarding x as an implicitly defined function of t by this equation (see implicit function theorem), one concludes that the derivative of x with respect to t has the same sign as
$\frac{\partial^2 (x p(x) - C(x))}{\partial x^2}={\partial^2\pi(x,t)\over \partial x^2},$
which is negative if the second-order conditions for a local maximum are satisfied.
Thus the profit maximization model predicts something about the effect of taxation on output, namely that output decreases with increased taxation. If the predictions of the model fail, we conclude that the profit maximization hypothesis was false; this should lead to alternate theories of the firm, for example based on bounded rationality.
Borrowing a notion apparently first used in economics by Paul Samuelson, this model of taxation, and the predicted dependency of output on the tax rate, illustrates an operationally meaningful theorem: one that makes an economically meaningful claim which is falsifiable under certain conditions.
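The comparative-statics result above can be verified numerically under assumed functional forms. Taking an invented inverse demand p(x) = 10 − x and cost C(x) = x², the first-order condition becomes 10 − 4x − t = 0, so analytically x*(t) = (10 − t)/4; the sketch below solves the condition by bisection and confirms that output falls as the tax rises.

```python
# Numerical sketch of the taxation comparative statics, with assumed
# forms p(x) = 10 - x and C(x) = x**2 (invented for illustration).

def marginal_profit(x, t):
    """d/dx of x*p(x) - C(x) - t*x with p(x) = 10 - x, C(x) = x**2."""
    return 10.0 - 2.0 * x - 2.0 * x - t

def optimal_output(t, lo=0.0, hi=10.0):
    """Solve the first-order condition marginal_profit = 0 by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if marginal_profit(mid, t) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Output falls as the per-unit tax rises, as the model predicts.
x_low_tax = optimal_output(0.0)    # analytically (10 - 0)/4 = 2.5
x_high_tax = optimal_output(2.0)   # analytically (10 - 2)/4 = 2.0
print(round(x_low_tax, 4), round(x_high_tax, 4))
```

Here the second-order condition holds (π_xx = −4 < 0), so the prediction that output decreases with taxation follows exactly as in the derivation above.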
• Aggregate models. Macroeconomics needs to deal with aggregate quantities such as output, the price level, the interest rate and so on. Real output is actually a vector of goods and services, such as cars, passenger airplanes, computers, food items, secretarial services, home repair services etc. Similarly, price is the vector of individual prices of goods and services. Models in which the vector nature of the quantities is maintained are used in practice; for example, Leontief input-output models are of this kind. However, for the most part, these models are computationally much harder to deal with and harder to use as tools for qualitative analysis. For this reason, macroeconomic models usually lump together different variables into a single quantity such as output or price. Moreover, quantitative relationships between these aggregate variables are often parts of important macroeconomic theories. This process of aggregation and functional dependency between various aggregates usually is interpreted statistically and validated by econometrics. For instance, one ingredient of the Keynesian model is a functional relationship between consumption and national income: C = C(Y). This relationship plays an important role in Keynesian analysis.
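A Leontief input-output model of the kind mentioned above can be sketched for two sectors: gross outputs x must satisfy x = Ax + d, where A is the technology matrix and d is final demand, so x = (I − A)⁻¹d. The coefficients below are assumed values chosen only for demonstration; for a 2×2 system the inverse can be written out by hand.

```python
# Tiny two-sector Leontief input-output sketch: solve (I - A) x = d.
# The technology matrix and final demands are invented for illustration.

# A[i][j] = units of good i needed to produce one unit of good j
A = [[0.2, 0.3],
     [0.4, 0.1]]
final_demand = [100.0, 200.0]

# Form I - A and solve with the explicit 2x2 inverse formula.
m = [[1.0 - A[0][0], -A[0][1]],
     [-A[1][0], 1.0 - A[1][1]]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
x0 = (m[1][1] * final_demand[0] - m[0][1] * final_demand[1]) / det
x1 = (-m[1][0] * final_demand[0] + m[0][0] * final_demand[1]) / det

# Gross output of each sector covers intermediate use plus final demand.
print(round(x0, 2), round(x1, 2))
```

Even at this toy scale the point made above is visible: the vector model requires a matrix inversion, which is why aggregate scalar relationships such as C = C(Y) are so often preferred for qualitative analysis.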

## Pitfalls

Economic models can be such powerful tools in understanding some economic relationships that it is easy to ignore their limitations. One example is the class of perfect-competition market equilibrium models. These models assume perfect information, an identical product, and the inability of individual agents to significantly affect total output or demand. When these assumptions hold, the resulting static equilibrium conditions will be Pareto optimal; one can interpret optimality as an ideal situation in which no agent can do better. When these assumptions fail, for instance under imperfect information or product differentiation, the model's conclusions also fail. Moreover, these models often exclude externalities such as environmental effects.

An economic model that has been established to have validity in explaining a relationship under one set of assumptions is useless if those assumptions are not valid. Model assumptions include not only those that can be expressed as predicates on model parameters but also others of a more qualitative or asymptotic form. This basic point is, however, surprisingly often ignored. A common example is the application of Keynesian economics to government fiscal policy. The simple Keynesian model postulates (among other things) that output is a function of aggregate demand. Government spending is one component of aggregate demand, so the Keynesian model is often applied to conclude that increasing government spending will have the same positive effect on output as private investment (see Paul Samuelson's article, The Simple Mathematics of Income Determination). This application of the model is correct in the short run, but the model does not take into account the consequences of the policy change, which may affect business cycles, interest and tax rates, private investment, and other factors that could in the long run either reduce or increase output. This example highlights one of the difficulties of applying economic models: correctly inferring the short-term and long-term effects of economic policy.
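The short-run mechanism described above can be made concrete with an assumed linear consumption function C(Y) = a + bY. Equilibrium then requires Y = C(Y) + I + G, so Y = (a + I + G)/(1 − b) and the spending multiplier is 1/(1 − b). All parameter values below are invented for illustration, and, as the text cautions, the result holds only within the model's short-run assumptions.

```python
# Back-of-envelope sketch of the simple Keynesian multiplier, with an
# assumed linear consumption function C(Y) = a + b*Y. All parameter
# values are invented for illustration; short-run model only.

a = 50.0          # autonomous consumption (assumed)
b = 0.8           # marginal propensity to consume (assumed)
investment = 100.0
government = 150.0

def equilibrium_output(g):
    """Solve Y = a + b*Y + I + g for Y."""
    return (a + investment + g) / (1.0 - b)

y_before = equilibrium_output(government)
y_after = equilibrium_output(government + 10.0)
multiplier = (y_after - y_before) / 10.0   # should equal 1/(1 - b) = 5
print(round(y_before, 6), round(multiplier, 6))
```

The long-run effects discussed above (on interest rates, private investment, and so on) are exactly what this one-equation sketch leaves out.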

The sharp distinction between falsifiable economic models and those that are not is by no means universally accepted. Indeed, one can argue that the ceteris paribus (all else being equal) qualification that accompanies any claim in economics is nothing more than an all-purpose escape clause; see the N. de Marchi and M. Blaug collection for a philosophical discussion of these issues. The all-else-being-equal qualification allows all variables to be held constant except the few that the model is attempting to reason about, which separates and clarifies the specific relationship. However, in reality all else is never equal, so economic models are guaranteed not to be perfect. The goal of the model is that the isolated and simplified relationship has some predictive power that can be tested. Ignoring the ceteris paribus assumption when applying a model is another common error; at a minimum, an attempt must be made to identify the various factors that may not be equal and to take them into account.

## History

One of the major problems addressed by economic models has been understanding economic growth. An early attempt to provide a technique to approach this came from the French physiocratic school in the eighteenth century. Among these economists, François Quesnay should be noted, particularly for his development and use of tables he called Tableaux économiques. These tables have in fact been interpreted in more modern terminology as a Leontief model; see the Phillips reference below.

All through the 18th century (that is, well before the founding of modern political economy, conventionally marked by Adam Smith's 1776 Wealth of Nations) simple probabilistic models were used to understand the economics of insurance. This was a natural extrapolation of the theory of gambling, and played an important role both in the development of probability theory itself and in the development of actuarial science. Many of the giants of 18th century mathematics contributed to this field. Around 1730, De Moivre addressed some of these problems in the 3rd edition of the Doctrine of Chances. Even earlier (1709), Nicolas Bernoulli studied problems related to savings and interest in the Ars Conjectandi. In 1730, Daniel Bernoulli studied "moral probability" in his Mensura Sortis, where he introduced what would today be called "logarithmic utility of money" and applied it to gambling and insurance problems, including a solution of the paradoxical Saint Petersburg problem. All of these developments were summarized by Laplace in his Analytical Theory of Probability (1812). By the time David Ricardo came along, then, there was a substantial body of well-established mathematics to draw from.

## References

• W. Baumol and A. Blinder, Economics: Principles and Policy, 2nd ed., Harcourt Brace Jovanovich, Inc., 1982.
• Bruce Caldwell, Beyond Positivism Revised edition, Routledge, 1991.
• R. Holcombe, Economic Models and Methodology, Greenwood Press, 1989. Defines model by analogy with maps, an idea borrowed from Baumol and Blinder. Discusses deduction within models, and logical derivation of one model from another. Chapter 9 compares the neoclassical school and the Austrian school, in particular in relation to falsifiability.
• Oscar Lange The Scope and Method of Economics, Review of Economic Studies, 1945. One of the earliest studies on methodology of economics, analysing the postulate of rationality.
• N. B. de Marchi and M. Blaug., Appraising Economic Theories, Edward Elgar, 1991. A series of essays and papers analysing questions about how (and whether) models and theories in economics are empirically verified and the current status of positivism in economics.
• M. Morishima, The Economic Theory of Modern Society, Cambridge University Press, 1976. A thorough discussion of many quantitative models used in modern economic theory. Also a careful discussion of aggregation.
• A. Phillips, The Tableau Économique of a Simple Leontiev Model, Quarterly Journal of Economics, 69, 1955, pp. 137-44.
• Paul Samuelson, Foundations of Economic Analysis, Atheneum, 1965. Originally published by Harvard University Press in 1947. This is a classic book carefully discussing comparative statics in microeconomics, though some dynamics is studied, as well as some macroeconomic theory. This should not be confused with Samuelson's popular textbook.
• Paul Samuelson, The Simple Mathematics of Income Determination, in: Income, Employment and Public Policy; essays in honor of Alvin Hansen, W. W. Norton, 1948
• J. Tinbergen, Statistical Testing of Business Cycle Theories, League of Nations, 1939
• H. Wold, A Study in the Analysis of Stationary Time Series, Almqvist and Wiksell, 1938.
• H. Wold and L. Jureen, Demand Analysis: A Study in Econometrics, 1953.
