Abstract: The irrational behavior of the market in the lead-up to the subprime mortgage crisis, which puzzled even Alan Greenspan, is explained. A simple three-part model predicts market failure whenever substantial executive agreement is reached on an irrational predictor of future value, and explains why such agreement is to be expected. Techniques for mitigating this phenomenon are discussed.
Summary

On October 23, 2008 [2], Alan Greenspan, in testimony before Congress, called the subprime mortgage practices leading up to the crisis a “flaw in the model … that defines how the world works.” He said this left him in a “state of shocked disbelief.” Banking officials had failed to protect their shareholders from their bad loan decisions. “A critical pillar to market competition and free markets did break down,” said Greenspan. “I still do not fully understand how it happened.”
This note explains how it happened, in a description of the “flaw” that agrees with personal and common experience. Three unexamined failures of the model lead to the practical results recently seen:
(A) The finiteness of the competing selections available in the real world results in some dimensions of optimization not being available.
(B) The universal use of one measure of predicted value results in one such dimension being parallel to that measure under reasonable assumptions. Small-angle variation in that one measure is shown not to solve this problem.
(C) The availability of the same computerized search space to all competitors creates a phenomenon of “compulsory moral hazard” in corporate structures that makes the dysfunction of (B) likely to happen.
As a result, the small world of universal communications predictably produces positive-feedback surges leading to universal market failure. The key to damping out this phenomenon is to attack (B) at its source, the understanding of value. Sovereign wealth funds and their analogues can serve to lead this damping project.
Analysis

The three phenomena listed in the Summary are easy to explain, and the first two are easy to quantify in mathematical terms. They fall into the category of simplifying assumptions made in standard economic theory, which in this case prove inadequate to ensure rationality. The third phenomenon, “compulsory moral hazard,” is not so easy to quantify, but it is a matter of common experience among those familiar with the employer/employee relationship in a stratified or labor-surplus environment.
The nonsense equation
3 = ∞
is a consequence of the economic theory of optimizing value through competition, if it is applied to the classical array of three car manufacturers, General Motors, Chrysler and Ford. The smooth curves (actually hypersurfaces) used as hypotheses for the optimizations have an infinite number of points, which do not correspond to anything in a real customer’s world — even if we allow for multiple models.
They are used because finite mathematics is much more difficult than calculus of smooth curves. However, when the effective number of different choices becomes small, optimization falls afoul of a trivial property of linear dependence in multi-dimensional space, which I will call quality collapse.
For ease of understanding, space and optimization are taken to be linear, in a standard multidimensional Hilbert space (conveniently self-dual using the concept of perpendicularity). This makes sense locally in any smooth model, and manifold arguments can extend it to curvilinear cases.
Proposition 1 (quality collapse). Let optimization be linear in a Hilbert space of possibilities, and let n be finite. If no more than n selections are available, and there are n or more dimensions with respect to which to optimize, then for some combination of these dimensions the set of selections offers no variation of quality.
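To make the proposition concrete, here is a minimal numerical sketch in Python (the selections are random invented data; numpy's SVD stands in for the orthogonal-complement argument of the proof in the appendix). With n selections in n quality dimensions, some unit measure vector assigns every selection exactly the same score.

```python
import numpy as np

# Minimal sketch of Proposition 1 with invented data: n selections
# in an n-dimensional quality space.
rng = np.random.default_rng(0)
n = 3                                  # e.g., three car manufacturers
selections = rng.normal(size=(n, n))   # n points, n quality dimensions

# Difference vectors from the first selection span at most n - 1
# dimensions, so their orthogonal complement contains a nonzero
# measure vector.
diffs = selections[1:] - selections[0]
_, _, vt = np.linalg.svd(diffs)        # full SVD of an (n-1) x n matrix
measure = vt[-1]                       # spans the null space of diffs

scores = selections @ measure
print(scores)                          # identical up to rounding error
assert np.allclose(scores, scores[0])  # no variation of quality
```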
Of practical interest are cases of near-total quality collapse, where the variation of quality available to the customer according to some measure is not zero, but an amount much smaller than the variation possible from the entire space of selections accessible to human endeavor with currently available resources. An example of total quality collapse was Henry Ford’s famous dictum that you can have any color as long as it’s black. Near-collapse might be exhibited by modes of transportation available in large suburban housing developments: a car or something very similar.
It may be argued that the finiteness of choices actually available to customers of a small number of competitors is mitigated before the fact by the “infinity” of possibilities available to the managers designing their products at the planning stage. This is not identical to actual competition but is related to it by a connection — financing — whose effectiveness will be challenged by my last two points.
To take advantage of the vast range of possibilities available to producers at the planning stage, resources must be allocated with an eye to future value, and a selection must be made based on that expected value. An expectation of value is a measure based on two things: (i) an understanding of value, in other words a goal; and (ii) a prognostication of the future as it impinges upon that understood value.
A measure (or dual vector with a dot-product, in the previous section’s simple case) is built by the executives of the producer based on (i) and (ii), and that determines present allocation of available resources and future product selection. The problem is that if (i) and (ii) are identical among all the competitors, the measure will be too, and the optimal points selected will be the same or very similar, causing a near-collapse of quality in the direction of the measuring vector.
Unfortunately, (i) is identical among commercial corporations: shareholder value, or money profit. But prognostication of the future — the path to profit — can vary. If it does not, the consequences can be dire, as we have seen. All the lemmings agreed that subprime mortgages were the path to profit.
To show that similar measures cause a collapse of quality, hypothesize a finite-dimensional and bounded set of possible or plannable options. It is plausible to assume that this set is convex: that is, if two options are possible, then anything between them is also possible. This is of course not strictly true, but with a population in many millions, the finite steps are so thickly distributed that the smooth curve does a good enough job.
Suppose all the measure vectors point in nearly the same direction — say they deviate no more than a small θ in angle from a central measure vector, S. Then the quality variation in S between the optima picked out by these measure vectors will be poor — of the order of θ. It does not matter whether the body of possibilities is smooth or polygonal.
Proposition 2. Suppose a convex, closed, bounded body of possibilities of diameter d exists in n-dimensional Hilbert space, for some finite n ≥ 2. Let S and T be two quality measure vectors of unit length in this Hilbert space, at an acute or right angle to each other, and let PS and PT be optimal in the body with respect to the respective vectors. If the angle between S and T is θ, with |θ| ≤ π/2, and πS is the quality function of S (i.e., the orthogonal projection onto a line parallel to S, treated as isometric with R¹, that is maximal at PS), then

0 ≤ πS(PS) − πS(PT) ≤ d sin(θ).     (1)
If the directions of the quality measure vectors vary by 20%, the above proposition implies that quality variation will collapse to the order of 20% of the possible. However, this weak result assumes the possibility of flat boundary areas. It is usually more realistic to assume that the body of possibilities is “smooth” and rounded, since backing away from extremes in one dimension can free up resources to allow variation in other dimensions. A second-order theorem takes this into account.
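Inequality (1) is easy to check numerically. The following sketch uses the convex hull of random points as the body of possibilities (illustrative data only; for a linear quality function, the optimum over the hull coincides with the optimum over the points themselves).

```python
import numpy as np

# Numerical check of inequality (1) on the convex hull of random points.
rng = np.random.default_rng(1)
pts = rng.normal(size=(40, 2))
d = max(np.linalg.norm(p - q) for p in pts for q in pts)   # diameter

theta = 0.2                                   # angle between S and T
S = np.array([1.0, 0.0])
T = np.array([np.cos(theta), np.sin(theta)])

P_S = pts[np.argmax(pts @ S)]                 # optimum under measure S
P_T = pts[np.argmax(pts @ T)]                 # optimum under measure T

gap = P_S @ S - P_T @ S                       # piS(PS) - piS(PT)
print(gap, d * np.sin(theta))
assert 0.0 <= gap <= d * np.sin(theta)        # inequality (1)
```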
Proposition 3. Let the hypotheses be as in Proposition 2, with the additional requirement that the boundary of the body of possibilities have minimum curvature κ > 0. Then

0 ≤ πS(PS) − πS(PT) ≤ (1/κ) ( 1 − cos(θ) ).     (2)
Since 1 − cos(θ) is of the order of θ², this means that a variation of 20% in the directions of the quality measure vectors will restrict quality variation in the direction of S to around 4%. Even in cases where there are flat boundary areas in the body of possibilities, such as can be caused by legal restrictions, the average behavior will be more like that of Proposition 3. The only exception will be if the legal restriction is nearly aligned with S itself. For instance, if S points to the desirability of subprime mortgage and credit default swap investments, a specific legal restriction on those investments could have the effect of making small variations in orthogonal desirability factors lead to proportional (not square-proportional) variation in quality along S.
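On a disk, where the curvature is constant, bound (2) is attained exactly, which makes the second-order arithmetic easy to check. A sketch, using an assumed disk of radius 1/κ (so d = 2/κ):

```python
import numpy as np

# On a disk of radius 1/kappa the optima are P_S = S/kappa and
# P_T = T/kappa, so the S-direction gap is exactly
# (1/kappa)(1 - cos(theta)): bound (2) with equality.
kappa, theta = 1.0, 0.2                       # 20% angular variation
second_order = (1 - np.cos(theta)) / kappa    # bound (2): ~0.0199
first_order = 2 * np.sin(theta) / kappa       # bound (1) with d = 2/kappa: ~0.397
print(second_order, first_order)
```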
Unfortunately, the restriction on quality modeled above is a very real problem. The reason is that the computerized search space available to all competitors, or “players,” is substantially the same. This combines with the near-identity of the understanding of value (the goal of stockholder profit) to produce near-identity of the quality measure vectors used by the players. In corporations with a standard employer/employee relationship, this is effected by what I call “compulsory moral hazard.”
A typical scenario was described to me by an experienced lawyer during a church breakfast in La Jolla [4]. Credit default swaps intended to insure subprime mortgages charge, in a competitive market, say 8.2% of insured value — with insured value totalling trillions of dollars. Reserve requirements are estimated by insurance company employees. An estimate of 8.0% predicts a profit of 0.2% and a proportionately large bonus for the company executive. An estimate of 7.8% doubles this bonus.
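The arithmetic of the scenario is worth spelling out; the following toy calculation uses the anecdote's illustrative percentages, not real market data.

```python
# Toy version of the reserve-estimate arithmetic above.
premium = 0.082                        # premium charged, as a fraction of insured value
for reserve in (0.080, 0.078, 0.200):
    profit = premium - reserve         # predicted profit drives the bonus
    print(f"reserve estimate {reserve:.1%} -> predicted profit {profit:+.1%}")
# 8.0% -> +0.2%; 7.8% -> +0.4%, doubling the bonus;
# the realistic 20% -> -11.8%, i.e., no bonus at all
```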
Realism would have led to an estimate closer to the actual value of around 20%. But what price realism for the employee making the estimate? He can only base his prediction on fact, history, and mathematical consequences. The executive can immediately do a Web search and find an estimate that comes closer to the executive's preferences. Thus the employee who attempts scientific realism is not only guilty of attempting to sabotage company hopes; he is, in fact, convicted of negligence for wasting company resources on his analysis, since a simple Web search was all that was necessary to give the executive the conclusion he wanted!
This phenomenon is not restricted to mortgage lenders or insurance specialists. All prognostication of future value is affected. The more players are involved on a single issue (as in vendor/purchaser negotiations), the more intense is this compulsory moral hazard, because the negotiating partner can also check the Web and protest any prediction that varies from the most preferred.
The above analysis shows a clear route to market failure. The first factor is a severe finiteness flaw, called quality collapse, in the normal theory of competition, which can be mitigated by management attention to a full range of possibilities in the planning stage. The second factor is a reappearance of quality collapse when the optimizations done during the planning phase are based on prognostications of future value that are too similar among competitors. The third factor shows realistically how this excessive agreement among quality measure vectors is a consequence of the universal computerized monoculture for information search.
None of these factors is restricted to the particulars of the current mortgage crisis. To borrow Alan Greenspan's words [2], they constitute a “flaw in the model … that defines how the world works.” They will recur with equal severity under new circumstances. They may already be recurring even now, in banks' collective judgement that agrees on being more restrictive in lending than is optimal.
Mitigation

Several approaches to mitigation are suggested by looking at the parameters of the problem.
(I.) The identity of understanding of value goals — shareholder value, which is to say money profit — is the root cause of the monoculture. This will be mitigated by sovereign wealth funds or analogues. A Kurdish wealth fund would have two goals: accumulation of wealth and preservation of Kurdish patrimony. Even with the same search space as everyone else, its conclusions as to quality vectors may be sharply skewed by the second goal.
Government policy can encourage the analogues of sovereign wealth funds even within one nation. By structuring the law so that corporate abstraction of ownership is not encouraged, and individual proprietorship is, the normal differences among human beings can mitigate the monoculture imposed by the requirement to maximize shareholder value.
(II.) The intensity of compulsory moral hazard is a consequence of the fact that the employer/employee relationship has tilted so far toward employee fear of job loss. This results in what used to be a free contract moving toward a “contract of adhesion,” in the words of my lawyer friend — a conversation on terms dictated by one party. The wish to give honest advice remains and will become effective if this counterpressure is reduced.
It is actually in the self-interest of business organizations to make their employees free to prosper independently. This is quite a counter-intuitive thought in current executive practice, but it is proved by my analysis. Strong government intervention in favor of human independence — strengthening of alternative livelihoods like farming, and of extended family economic cooperation — is a fruitful route toward moving the advising of planners nearer to a conversation among equals.
(III.) Information sources cannot be made to forget their links, so the computerized search space monoculture is unavoidable. However, even apart from the effect discussed above, it has its severe limitations. It is well-known to be episodic and page-size-limited, with capacity for integration of knowledge crushed by the dominance of the medium over the searcher. It cannot exercise or encourage discrimination based on validity of search results.
Therefore it is intensely desirable to recover encouragement of science, in the widest sense of the term. Reasoning from first principles, fact, history, and mathematical consequences must recover dominance until it is only aided, not superseded, by search results. This justifies a business and government effort on the scale of a Manhattan Project or a space race, and in fact it is so important that it should impact the forty-hour week. A weekly holiday should be decreed for the purpose of discussion and reasoning by ordinary citizens.
To wrap up, let me emphasize that the above measures are not utopian or socialistic. They and others like them are needed to allow the market itself to recover the ability to behave like a distributed large world of the kind found by Marco Polo. Their cost is small compared even to the cost of the current crisis of 2008, the second of the crises predicted by my model (the first, and lesser, was dot-com). Refusal to adjust our thinking to the new reality of global monoculture will lead, like Irish potato monoculture, to economic famine and ruin.
Appendix: proofs

Here I sketch the proofs of the three Propositions claimed in the Analysis.
Proof of Proposition 1. The assumption places the n points in a space of at least n dimensions. Select one point, and construct n − 1 vectors from it to each of the other points. The linear span of these vectors is a vector space of at most n − 1 dimensions, and hence its orthogonal complement has at least one dimension. Any nonzero vector selected from this orthogonal complement yields zero when used as a measure of any of the n − 1 difference vectors by the duality relation (dot product). Therefore, by linearity, it yields the same result applied to all n of the original points. QED
Proof of Proposition 2. Without loss of generality, the body of possibilities may be orthogonally projected onto a two-dimensional subspace containing S and T, with S parallel to the positive x-axis and T = (cos(θ), sin(θ)) for 0 ≤ θ ≤ π/2. The projection is still convex, of diameter d2 ≤ d. Since PS and PT are extreme, they project to the boundary of the projected body, so the whole problem can be reduced to n = 2. Define the vector D = PT − PS in this two-dimensional reduction.
By definition, πT(PT) ≥ πT(PS) while πS(PT) ≤ πS(PS). It follows that D has length ≤ d2 and argument between π/2 and π/2 + θ. The minimum value of πS(D) that is possible under these constraints is −d2 sin(θ). Since 0 ≤ d2 ≤ d, this implies (1). QED
It is worth noting that the hypothesis of convexity can actually be omitted! All that is required is compactness, because the extreme points will project to the boundary of the projection of the convex hull. However, convexity is a natural model, and is certainly needed for the stronger conclusions of Proposition 3.
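The reduction step in this proof can also be checked numerically; here is a sketch with random data in five quality dimensions (the orthonormal basis of span{S, T} is constructed by hand for the illustration).

```python
import numpy as np

# For measure vectors lying in the S-T plane, linear optimization
# commutes with orthogonal projection onto that plane.
rng = np.random.default_rng(2)
pts = rng.normal(size=(50, 5))         # selections in 5 quality dimensions
theta = 0.3
S = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
T = np.cos(theta) * S + np.sin(theta) * e2

basis = np.stack([S, e2])              # orthonormal basis of span{S, T}
proj = pts @ basis.T                   # two-dimensional projection

for v, v2 in ((S, np.array([1.0, 0.0])),
              (T, np.array([np.cos(theta), np.sin(theta)]))):
    # the optimum value is the same before and after projection
    assert np.isclose((pts @ v).max(), (proj @ v2).max())
```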
In order not to be constrained by smoothness requirements, we define the curvature of a convex body to be bounded below by κ > 0 at a boundary point B if the following holds. For any tangent plane P to the body at B, and any sufficiently small є > 0, let Σ be the sphere of radius 1/κ − є tangent to P at B on the side of the body, and let K be its center. Then for some δ > 0, the cone with vertex K whose intersection with Σ is the open neighborhood of diameter δ around B on Σ has an intersection with the body that lies wholly, except for B itself, inside Σ. This cone will be called a test cone.
This is easily seen to be equivalent to the standard definition of curvature at any point where the boundary is twice differentiable. Note that the tangent plane may be non-unique at a “crease” or “corner” of an unsmooth convex body.
Proof of Proposition 3. Set up an orthogonal projection onto a two-dimensional subspace containing S and T, as in the proof of Proposition 2. By compactness and the continuity of the projection, the projection image of the body is closed, hence compact, in R², and the inverse image of any closed subset of the boundary of the projection image is a closed, hence compact, subset of the boundary of the body.
Lemma 1. Suppose a closed, bounded, convex body has curvature bounded below by κ > 0 according to the above definition. Then any orthogonal projection of that body has curvature bounded below by the same κ.
Proof of lemma. For any point C on the boundary of the projected image, the pre-image of C must be unique. Otherwise, the entire line segment between two pre-images of C would be a subset of the body, and hence of the body's boundary, violating the positive minimum curvature condition: a contradiction. Now let B be the unique pre-image of C, and Q a plane parallel to a tangent plane P at B, shifted slightly inward so that it has a nonempty intersection with the interior of the body and so that the diameter of its intersection with Σ is ≤ δ. Then the intersection of Q with the body nowhere meets the intersection of Q with Σ. It follows that the entire intersection of the body with the closed half-space bounded by Q and containing B must be within the test cone. By possibly making δ smaller, it can be assumed that the diameter of the intersection of Q with Σ is exactly δ.
Let N be the orthogonal complement of the space into which the projection maps. Since B is a pre-image of a boundary point C of the orthogonal projection of the body, for any tangent plane PC at C, the plane P ≡ PC ⊕ N is a tangent plane at B. Then Q projects onto a lower-dimensional QC, and the test cone projects onto a lower-dimensional test cone. Because the inverse image of the part of the projection of the body that lies outside the lower-dimensional test cone falls on the far side of Q from B, the curvature condition in the higher-dimensional test cone implies the same curvature condition in the lower-dimensional one. QED Lemma
We now continue the proof of Proposition 3. Thanks to Lemma 1, it reduces to a two-dimensional problem.
Lemma 2. The boundary of a closed, bounded, convex set in two dimensions whose curvature is bounded below by κ > 0 can be parametrized by the angle φ of its tangent direction. The defining condition given above for the curvature to be bounded below by κ > 0 is equivalent to the requirement that the arc-length velocity of this parametrization be everywhere ≤ 1/κ.
Proof of lemma: By cutting the boundary into four pieces at tangents making 45° angles with the axes, we may reduce the proof of the first sentence to a similar claim for a convex function with bounded slope on an interval I. Let y = f(x) be such a function. According to [3], “the second (distributional) derivative of f is a nonnegative locally finite Borel measure on I, and any such measure is the second derivative of a convex function f which is unique up to the addition of an affine function.” Let s(x) be this second derivative, noting that it is the sum of an ordinary nonnegative function and a set of Dirac delta functions at various locations with positive weights.
Noting that curvature is always less than or equal to the absolute value of the second derivative, the definition of curvature bounded below by κ given above, applied in two dimensions, implies

s(x) ≥ κ  ∀ x ∈ I.
Without loss of generality, assume 0 ∈ I and (0,0) is the minimum of the convex function, corresponding to φ = 0. We then get
(x, y) = ( ∫₀^d ξ(δ) dδ, ∫₀^d δ ξ(δ) dδ )

where

0 ≤ ξ(d) = 1/s(x) ≤ 1/κ

and d is the slope at (x, y) while δ is the slope at intermediate points.
This definition of ξ also works at Dirac delta values of s(x), where it is zero for the entire interval of d corresponding to such a corner. It expresses the rate of change of x as a function of d. To finish the proof of the Lemma, let σ(φ) denote the rate of change of arc length as a function of the angle φ. We then get

σ(φ) = ξ(δ) / cos³(φ)
for δ = tan(φ). Substitution gives
(x, y) = ( ∫₀^φ cos(φ′) σ(φ′) dφ′, ∫₀^φ sin(φ′) σ(φ′) dφ′ )

and checking the definitions also shows

0 ≤ σ(φ) ≤ 1/κ.
This proves existence and shows that the old definition implies the new one. The implication in the other direction follows from the definition of curvature. QED Lemma
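As a sanity check of Lemma 2 on an assumed example: for a circle of radius 1/κ, the tangent-angle parametrization has arc-length velocity exactly 1/κ, so the bound is attained.

```python
import numpy as np

# Sanity check of Lemma 2 on a circle of radius R = 1/kappa: the
# boundary point with tangent angle phi is (R sin(phi), R (1 - cos(phi))),
# and the arc-length velocity d(arc length)/d(phi) should equal R.
kappa = 2.0
R = 1.0 / kappa
phi = np.linspace(0.0, np.pi / 2, 1001)
x, y = R * np.sin(phi), R * (1.0 - np.cos(phi))

ds = np.hypot(np.diff(x), np.diff(y))   # chord lengths approximate arc lengths
sigma = ds / np.diff(phi)               # numerical arc-length velocity
assert np.allclose(sigma, 1.0 / kappa, rtol=1e-6)
```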
Returning to the proof of Proposition 3: rotating and translating so that S = (0, −1), T = (sin(θ), −cos(θ)), and PS = (0, 0), we may parametrize the boundary in terms of the angle φ of its tangent direction. Integrating,
Δy ≤ ∫₀^θ σ(φ) sin(φ) dφ ≤ ∫₀^θ (1/κ) sin(φ) dφ = (1/κ) ( 1 − cos(θ) )     (3)
which implies (2). QED
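The integration in (3) can be verified symbolically; a short sketch using sympy:

```python
import sympy as sp

# Symbolic verification of the last step of (3), under the Lemma 2
# bound sigma(phi) <= 1/kappa.
phi, theta, kappa = sp.symbols('phi theta kappa', positive=True)
bound = sp.integrate(sp.sin(phi) / kappa, (phi, 0, theta))
assert sp.simplify(bound - (1 - sp.cos(theta)) / kappa) == 0
print(bound)   # (1 - cos(theta))/kappa
```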