After over four years of work, my book Law, Bubbles, and Financial Regulation came out at the end of 2013. You can read a longer description of the book at the Harvard Corporate Governance blog. Blurbs from Liaquat Ahamed, Michael Barr, Margaret Blair, Frank Partnoy, and Nouriel Roubini are on Routledge's web site and the book's Amazon page. The introductory chapter is available for free on SSRN.
Look for a Conglomerate book club on the book on the first week of February!
Last summer, I decided to scorn market efficiency and invest in a portfolio of individual stocks. Not a lot of money, but enough that I cared. I did some research and picked 18 stocks. Here are the results (blue line) compared to the Dow Jones, S&P 500, and Nasdaq:
As you can see, I am beating the market by a large margin. Feeling pretty confident about my success and believing that I could take advantage of the uncertainty created by the impending fiscal cliff, I purchased another small portfolio of stocks in September:
While this new portfolio has been rallying in late December as prospects for a cliff deal waxed and waned, I am still in the hole. The WSJ tells me that tonight's fiscal cliff deal will bolster my returns, but why? With all of the other uncertainties about the debt ceiling, spending cuts, and tax increases looming, the optimistic story seems to be that neither President Obama nor the Republicans in Congress are as extreme as their public positions, which are irreconcilable.
What a mess. We need a leader, and none is in sight.
As in a bad horror movie (or a great Rolling Stones song), observers of the current crisis may have been disquieted that one of the central characters in this disaster also played a central role in the Enron era. Is it coincidence that special purpose entities (SPEs) were at the core of both the Enron transactions and many of the structured finance deals that fell apart in the Panic of 2007-2008?
Bill Bratton (Penn) and Adam Levitin (Georgetown) think not. Bratton and Levitin have a really fine new paper out, A Transactional Genealogy of Scandal, that not only draws deep connections between these two episodes, but also traces the lineage of collateralized debt obligations (CDOs) back to Michael Milken. The paper provides a masterful guided tour of the history of CDOs from the S&L/junk bond era to the innovations of J.P. Morgan through to the Goldman ABACUS deals and the freeze of the asset-backed commercial paper market.
Their account argues that the development of the SPE is the apotheosis of the firm as “nexus of contracts.” These shell companies, after all, are nothing but contracts. This feature, according to Bratton & Levitin, allows SPEs to become ideal tools either for deceiving investors or for arbitraging financial regulations.
Here is their abstract:
Three scandals have fundamentally reshaped business regulation over the past thirty years: the securities fraud prosecution of Michael Milken in 1988, the Enron implosion of 2001, and the Goldman Sachs “Abacus” enforcement action of 2010. The scandals have always been seen as unrelated. This Article highlights a previously unnoticed transactional affinity tying these scandals together — a deal structure known as the synthetic collateralized debt obligation (“CDO”) involving the use of a special purpose entity (“SPE”). The SPE is a new and widely used form of corporate alter ego designed to undertake transactions for its creator’s accounting and regulatory benefit.
The SPE remains mysterious and poorly understood, despite its use in framing transactions involving trillions of dollars and its prominence in foundational scandals. The traditional corporate alter ego was a subsidiary or affiliate with equity control. The SPE eschews equity control in favor of control through pre-set instructions emanating from transactional documents. In theory, these instructions are complete or very close thereto, making SPEs a real world manifestation of the “nexus of contracts” firm of economic and legal theory. In practice, however, formal designations of separateness do not always stand up under the strain of economic reality.
When coupled with financial disaster, the use of an SPE alter ego can turn even a minor compliance problem into scandal because of the mismatch between the traditional legal model of the firm and the SPE’s economic reality. The standard legal model looks to equity ownership to determine the boundaries of the firm: equity is inside the firm, while contract is outside. Regulatory regimes make inter-firm connections by tracking equity ownership. SPEs escape regulation by funneling inter-firm connections through contracts, rather than equity ownership.
The integration of SPEs into regulatory systems requires a ground-up rethinking of traditional legal models of the firm. A theory is emerging, not from corporate law or financial economics but from accounting principles. Accounting has responded to these scandals by abandoning the equity touchstone in favor of an analysis in which contractual allocations of risk, reward, and control operate as functional equivalents of equity ownership, an approach that redraws the boundaries of the firm. Transaction engineers need to come to terms with this new functional model as it could herald unexpected liability, as Goldman Sachs learned with its Abacus CDO.
The paper should be on the reading list of scholars in securities and financial institution regulation. The historical account also provides a rich source of material for corporate law scholars engaged in the Theory of the Firm literature.
I am getting ready to teach MGM v. Scheider next week in Contracts. The case (347 N.Y.S.2d 755) involves whether a series of communications between a Hollywood studio and actor Roy Scheider (who would later star in Jaws) constituted a contract that bound the star to act in an ABC TV series. [Note: should any of my contracts students read this post, the foregoing is not an example of a good case brief.]
When going over the aftermath of this case in class, the inevitable question comes up: “Why didn’t the lawyers insist on a more formal, written, and executed contract?” The same answers surface: sloppiness, lack of sophistication, time pressure. It makes for an easy moral for law students (“be tougher and more careful”), but one that I find increasingly less satisfying and nutritious. Sloppiness just seems too pat an answer to explain this or many of the other lawyer “mistakes” that populate a Contracts case book.
Fortunately, Jonathan Barnett (USC Law) has a new working paper that provides a much more nuanced answer. Barnett’s “Hollywood Deals: Soft Contracts for Hard Markets” explores why many contracts between Hollywood studios and star-level talent (both sides usually represented by experienced lawyers) fall into this netherworld of “soft contracts” – that is, agreements of questionable status as enforceable contracts. Barnett’s explanation involves both parties navigating two different risks – project risk (the risk a film won’t happen or will flop) and hold-up risk (the risk that a necessary party to a film will back out, possibly to hold the project hostage). The studio system used to provide a way to balance these two risks. The decline of this system, according to Barnett, gave rise to a growing use of “soft contracts.” Here is the abstract:
Hollywood film studios, talent and other deal participants regularly commit to, and undertake production of, high-stakes film projects on the basis of unsigned “deal memos,” informal communications or draft agreements whose legal enforceability is uncertain. These “soft contracts” constitute a hybrid instrument that addresses a challenging transactional environment where neither formal contract nor reputation effects adequately protect parties against the holdup risk and project risk inherent to a film project. Parties negotiate the degree of contractual formality, which correlates with legal enforceability, as a proxy for allocating these risks at a transaction-cost savings relative to a fully formalized and specified instrument. Uncertainly enforceable contracts embed an implicit termination option that provides some protection against project risk while maintaining a threat of legal liability that provides some protection against holdup risk. Historical evidence suggests that soft contracts substitute for the vertically integrated structures that allocated these risks in the “studio system” era.
The very accessible paper is worth a read – not only for Contracts scholars and teachers, but also for those interested in the theory of the firm. For a different, stimulating approach to supplementing the teaching of contracts (Hollywood and otherwise), Larry Cunningham’s new book, Contracts in the Real World: Stories of Popular Contracts and Why They Matter, is out from Cambridge University Press. Larry gave a preview of the book and his approach to teaching the subject in our Conglomerate forum on teaching contracts last summer. The book is chock full of very useful stories on chestnut casebook opinions, as well as contracts straight out of Variety involving stars from Eminem to Jane Fonda.
The Times has a not-that-newsy profile today of the Mittelstand, Germany's vaunted SME sector, which accounts for 60 percent of the country's employment. The big reveal is that the Mittelstand likes the euro, though that conclusion rests largely on interviews at one obscure (every Mittelstand company is obscure; that's rather the point) shut-off valve manufacturer.
If you hang out at business schools, the Mittelstand is a useful corrective to everything you think you're supposed to know about finance. German companies eschew debt, we are told, rely on banks instead of capital markets for funding, and retain their employees at all costs. Basically the opposite of the private equity playbook. And yet ... look at the awesome German economy! It has implications for corporate law, too, given that Mittelstand firms are likely to be closely held, with representation for workers and banks if it isn't just a family thing. Maybe that's what Delaware ought to be offering!
But I think this obsession with the Mittelstand may be branding more than anything else. Take that 60% employment number. In the US, small businesses account for half of all employment, and 65% of all new jobs. And some Mittelstand firms probably count as large businesses under the American definition (500 employees is the cutoff). Nor is Germany radically more industrialized than the US, though that's what the Mittelstand is supposed to represent. 28% of Germany's employees are in manufacturing. The we-just-do-service United States proportion? 22%.
Wiser heads than mine accept the Mittelstand as different - the interest in SME-usable research is an excellent way to fund a project in not just German, but European universities more generally. But surely some perspective is in order. It's easy to overstate modest differences, and while I'll be happy to conclude that the German model well and truly is unique, I'd like to see a few more differences between that approach and ours before doing so.
I'll admit. I love pianos. I remember the day that my mom and I went to someone's house to buy a piano she had heard was for sale. I was 4 or 5, so I don't know if she saw the piano listed in the newspaper or heard about it from a friend, but I know the seller was a stranger. I remember the man asked if we wanted to try the piano, but neither my mom nor I played. We just bought it based on its good looks alone. We lucked out. It is a good piano.
When I was 27, after six years of living on my own without a piano, I bought a piano from a warehouse dealer. I took along my boyfriend, who has perfect pitch and who became my husband. He told me to pick what I would describe as the ugliest piano there, with a mismatched bench. It turns out, it is an excellent piano, a Baldwin from the early 1960s, when the company did not put the name on the outside of the piano. As a result, I bought the piano at a fraction of its "market value."
But, I've been skeptical of adding this asset to my concept of my "net worth," because I can't imagine that I would find a buyer willing to pay market value for my scratched-up upright piano. And this article in the NYT today says that I'm probably right.
Markets depart from efficiency when transaction costs are high. Pianos have incredibly high transaction costs. They are difficult to transport and difficult to store. Yeah, but so are cars, right? Yes, but cars can transport themselves, don't have to be climate-controlled (besides hail), and don't need a tune-up after being transported. And, there are warranties. Online markets don't do well with pianos, which need to be heard, not seen, and often heard by an expert. So, though I have often wondered why there isn't more piano arbitrage, with dealers scooping up deals from unsuspecting sellers' houses, it may be that the transaction costs of going to people's houses and storing and transporting pianos are too high.
The article also makes the point that fewer people want pianos anymore. Even though houses are bigger, people seem to like electronic keyboards that don't have to be tuned and can be stored easily. When one of my neighbors announced they were selling their piano because their son really just wanted a keyboard and they were remodeling the living room, I was appalled. But, this seems to be the way of the world. And, I guess my neighbor's son will be able to take his keyboard when he moves out, unlike me. I'm sure folks with fancy foyers and front rooms still want grand pianos, as long as they are shiny and look good (and maybe with digital "player piano" capabilities), but the middle-class front room piano may go the way of the Encyclopedia Britannica. In fact, it's so hard to get rid of an old piano these days -- like a set of encyclopedias -- you may even have to pay someone to take it off your hands.
I am attending the Economics Bloggers Forum 2012 today at the Kauffman Foundation. You can get a program and live stream here. Cool event, as is the norm with Kauffman and Bob Litan.
UPDATE: The Kauffman sketchbook series is fun. We just watched "I'm a blogger" featuring Tyler Cowen. Nice.
On Sunday, the NYT published a long, front-page story that is getting a lot of attention, Even Critics of Safety Net Increasingly Depend On It. The article has a lot of interesting facts: for example, spending on government entitlement programs has skyrocketed, but those dollars are going less and less (as a percentage) to the lowest fifth of the population. Relatedly, though surveys show that most Americans believe that the fastest-growing programs are those for the poor, these programs are not growing at all, except for Medicaid. The program that is actually growing the fastest is Medicare.
But the article wants to point out what it seems to see as hypocrisy: that people who are arguing that government should be smaller, and voting their concerns, are beneficiaries of these programs. So, folks who argue that government should cut spending to the poor are taking advantage of free lunch programs and getting disability checks, Social Security, and Medicare. The article seems to attribute this irony to some sort of "except me" selfishness or cognitive dissonance. If the latter is the case, then voters are surely not voting their interests, and next year they will wake up and realize that they can't make ends meet any more because they can't qualify for free lunch or their unemployment checks are much smaller. And then, of course, it's too late to recast your vote. The article seems to hold up these people as sort-of ignorant victims of Tea Party rhetoric who maybe don't deserve to vote.
But, there could be alternative theories. It could be that a voter doesn't think that some of these programs should be funded by the government. But they are. So, if I can't control how that money is being spent, then it's in my interest to take advantage of the program until the program is ended. So, I may think that our summer research grants are too large (obviously a hypothetical) or that our teaching load is too small (again, just go with me). But, I'm not in a position to change either of those, so I might as well take the summer money and teach the prescribed load. But, if a Dean candidate came through saying that she would reduce summer grants and increase the teaching load (this candidate has become extinct due to evolution), then I wouldn't be a hypocrite or a creature of cognitive dissonance to vote for that candidate. Now, I have known a few people who turn down free government services they otherwise qualify for because they can pay their own way (special needs services for children, for example). But these folks are few and far between, I think.
No one in the article seems to articulate that "ride the wave" type of thinking (at least how the article is written), but some do seem to fit into the category of "I spent my life thinking I could count on Social Security and Medicare, and now I depend on it. But I do think the government should make cuts so as not to overburden the youth. But any cuts to my income would be devastating because I was counting on those." I don't think that's hypocritical or cognitive dissonance; that's just realistic pragmatism.
The 2011 symposium edition of the Berkeley Business Law Journal on Dodd-Frank is out. I would like to thank the editors and the Berkeley Center for Law, Business and the Economy for inviting me to a great conference. My contribution, Credit Derivatives, Leverage, and Financial Regulation’s Missing Macroeconomic Dimension, is now up on SSRN. Here is the abstract:
Of all OTC derivatives, credit derivatives pose particular concerns because of their ability to generate leverage that can increase liquidity - or the effective money supply - throughout the financial system. Credit derivatives and the leverage they create thus do much more than increase the fragility of financial institutions and increase counterparty risk. By increasing leverage and liquidity, credit derivatives can fuel rises in asset prices and even asset price bubbles. Rising asset prices can then mask mistakes in the pricing of credit derivatives and in assessments of overall leverage in the financial system. Furthermore, the use of credit derivatives by financial institutions can contribute to a cycle of leveraging and deleveraging in the economy.
This Article argues for viewing many of the policy responses to credit derivatives, such as requirements that these derivatives be exchange traded, centrally cleared, or otherwise subject to collateral or 'margin' requirements, in a second, macroeconomic dimension. These rules have the potential to change – or at least better measure – the amount of liquidity and the supply of credit in financial markets and in the 'real' economy. By examining credit derivatives, this Article illustrates the need to see a wide array of financial regulations in a macroeconomic context.
Understanding credit derivatives’ macroeconomic effects has implications for macroprudential regulatory design. First, regulations that address financial institution leverage offer central bankers new tools to dampen inflation in asset markets and to fight potential asset price bubbles. Second, even if these regulations are not used primarily as monetary or macroeconomic levers, changes in these regulations, including changes in the effectiveness of these regulations due to regulatory arbitrage, can have profound macroeconomic effects. Third, the macroeconomic dimension of credit derivative regulation and other financial regulation argues for greater coordination between prudential regulation and macroeconomic policy.
Comments by e-mail are always welcome.
Originally, I was hoping to start this post with a link to some research a colleague and I just completed that discusses how lenders may be overestimating property values prior to foreclosure. But it has not made it through formatting and onto the web yet, so I will instead share the findings with you.
In this research we find that lenders may be overestimating property values prior to foreclosure in weak housing submarkets. (By “lender” I mean banks servicing their own loans or securitized loans.) We find evidence of overestimating values by looking at the difference between the sale price at foreclosure auction (in this case the lender’s reserve/minimum bid) and the subsequent sale price of the home out of REO in submarkets in Cuyahoga County, OH (home to Cleveland). As the housing market gets weaker, the gap between those two sale prices grows. We also find that lenders’ value estimates may be dramatically improved by incorporating a few simple factors such as the age of the home and the poverty level in the home’s census tract. So we would expect lenders to pick up on this at some point and adjust their models accordingly. But we don’t see that happening. There are three possible explanations I can think of, though I welcome others.
First, lenders may not be overestimating the value at all. The price they pay for property may represent bidding in accord with an Ohio law that automatically sets the minimum bid at the first foreclosure auction, rather than waiting for subsequent auctions when the minimum bid can be adjusted. The way Cuyahoga County interprets this law, prior to foreclosure the County pays for a drive-by or walk-around appraisal. The initial minimum bid is set at two thirds of that appraisal. (If anyone can think of a good reason for this law, please share in the comments.) If no one bids at the first auction, the lender can lower the minimum bid at subsequent auctions. Anecdotally, bankers report credit-bidding their judgment to meet the minimum bid to obtain control of, and begin marketing, the property.
Automatically placing the minimum bid may be routine for bankers, but it probably does not always pay off: we find that the worst 25% of REO property sells for less than half of its minimum bid, if it sells in the quarter it is taken into REO. If it stays in REO for four quarters, it sells for less than 10% of its minimum bid. If the property’s minimum bid was $50,000 (remember, this is the worst 25% of property taken into REO), the lender recovers $5,000 before the broker’s commission, maintenance, taxes, and transfer costs. It is unclear why lenders would be in such a rush to obtain such low-quality properties if they were valuing them correctly.
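The minimum-bid and recovery arithmetic above can be put in a toy sketch. The figures (the two-thirds rule, the under-half and under-10% recovery bounds, the $50,000 minimum bid) come from the post itself; the function names and the $75,000 appraisal are my own illustration, not anyone's actual valuation model.

```python
# Toy sketch of Ohio's first-auction minimum-bid rule and the
# worst-quartile REO recovery bounds described above.
# All figures are the post's examples; nothing here is real data.

def first_auction_minimum_bid(appraisal):
    # Ohio sets the first-auction minimum bid at two-thirds of the
    # county's drive-by or walk-around appraisal.
    return appraisal * 2 / 3

def worst_quartile_recovery_bound(minimum_bid, quarters_in_reo):
    # Upper bound on what the worst 25% of REO recovers: under half the
    # minimum bid if sold in the quarter it enters REO, under 10% of it
    # after four quarters (all before commissions, maintenance, taxes,
    # and transfer costs).
    rate = 0.10 if quarters_in_reo >= 4 else 0.50
    return rate * minimum_bid

min_bid = first_auction_minimum_bid(75_000)   # a $75,000 appraisal -> $50,000 bid
print(min_bid)
print(worst_quartile_recovery_bound(min_bid, 4))
```

On these numbers, a lender that credit-bids $50,000 and then holds the property in REO for a year can expect to recover under $5,000 on the worst quartile of properties, which is the puzzle the post is raising.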
The next two explanations differ from the first, because they assume that lenders are actually bidding at or close to their estimated value of the property. The second explanation may be that the methods used to value property just don’t work well in weak submarkets, and lenders’ valuation models are not correcting for that. It is not hard to imagine that a walk-around appraisal is a reasonably accurate way to value most property in most markets. If brokers want to find non-foreclosure sales to use as comparables, they have to reach back further in time in weak markets than they do in others, so the prices they use are more likely to be stale. Walk-around appraisals may also miss interior damage (stripped copper pipe and wire, appliances, etc.) that properties are more likely to have suffered in weak markets.
The third possible explanation is that lenders are shifting accounting losses from loan portfolios to REO portfolios. This could be accomplished by using the inflated estimated value to prevent recognizing losses on the loan, and instead writing down the value in the REO portfolio. There are two potential benefits to this. The first is that capital markets tend to pay more attention to loan portfolio performance than REO portfolio performance. The second benefit is that most solvency tests for banks focus on loan portfolio performance metrics, and pay little or no attention to REO portfolio performance. So shifting these losses could potentially make lenders look healthier and more attractive than they actually are.
Any way you slice it, large REO portfolios are bad for banks and communities. One way to reduce the size of these portfolios is to lower foreclosure auction reserves, increasing the chance that others will purchase the property at auction instead of it becoming REO. If there is no market for the property, then donation to a land bank or similar entity may be the answer.
Since the mid-2000s, researchers have been studying the impact of foreclosures on surrounding property values. A colleague and I recently finished an important contribution to this line of research. Anyone attempting to craft responses to the housing market's current woes, particularly efforts to stabilize neighborhoods and home values, should give two of our results serious consideration.
First, our findings suggest that most prior studies overstate the impact that foreclosures have on surrounding property values. They overstate it because they don't take into account long-term vacancy or property abandonment, even though vacant and abandoned properties also drive down surrounding property values (either by adding units of supply or dis-amenities, depending on the property's condition). In our study, we include measures of all three (both individually and collectively), and we find (in Table 10 on p. 42, for those interested) that when you measure only one of the three factors that drive down housing values, you overstate the influence of the factor you are measuring. This result is important because it tells us that foreclosures do not decrease surrounding property values as much as previous studies suggest, and that attempts to stabilize markets should address vacancy and abandonment in addition to foreclosure.
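The statistical logic here is classic omitted-variable bias, and it can be seen in a small synthetic simulation. Everything below is invented for illustration (the shared "distress" factor, the coefficients, the noise levels); it is not the authors' data or model, just a sketch of why regressing prices on foreclosure alone overstates the foreclosure effect when vacancy and abandonment are correlated with it.

```python
# Synthetic illustration of omitted-variable bias: foreclosure, vacancy,
# and abandonment all load on a shared neighborhood "distress" factor,
# and each truly lowers price by 1.0. Regressing price on foreclosure
# alone lets the foreclosure slope absorb the omitted factors' impact.
import random

random.seed(0)
n = 20_000
distress = [random.gauss(0, 1) for _ in range(n)]
foreclosure = [d + random.gauss(0, 0.5) for d in distress]
vacancy     = [d + random.gauss(0, 0.5) for d in distress]
abandonment = [d + random.gauss(0, 0.5) for d in distress]
# True model: each factor independently lowers price by 1.0
price = [-f - v - a + random.gauss(0, 1)
         for f, v, a in zip(foreclosure, vacancy, abandonment)]

def cov(x, y):
    # Sample covariance (n - 1 denominator)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

# Simple regression of price on foreclosure alone: the slope comes out
# far below the true per-factor effect of -1.0 (theory here predicts
# about -2.6, since the slope also picks up the correlated omitted factors).
slope = cov(foreclosure, price) / cov(foreclosure, foreclosure)
print(round(slope, 2))
```

The single-regressor slope looks like foreclosures are roughly two and a half times as damaging as they really are, which is exactly the kind of overstatement the paper's Table 10 documents when only one of the three factors is measured.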
Second, and more importantly, we illustrate how foreclosure, vacancy, and abandonment have differential impacts on weak housing markets relative to average housing markets. (The following information summarizes Tables 12-14.) When we subdivide Cuyahoga County's (home to Cleveland) housing markets by strength, we see huge differences. In the markets that more closely approximate the average market in the US, we see what prior research and theory would predict: foreclosures, vacancies, and abandoned housing all substantially lower surrounding home values. In these markets, long-term vacancy and property abandonment are not as common or as problematic as foreclosure.
But in weaker submarkets, things are strikingly different: long-term vacant homes and abandoned properties drive down prices more than foreclosures, and are more common than they are in normal markets. Part of what drives this result is that lenders are attempting (and usually succeeding) to selectively foreclose on the "best of the worst": properties that have some prospect of resale because they are in the best neighborhoods in weak housing submarkets.
Understanding the relationship between foreclosure and abandonment is tricky: in weak markets, foreclosure accelerates abandonment but does not appear to cause it. Foreclosed properties are sometimes abandoned: when lenders foreclose on a property that turns out to be among the worst of the worst, for example, they sell it to a property speculator who usually abandons it. But abandonment is really driven by long-term population loss that has resulted in an oversupply of housing relative to demand. Plenty of property has been vacant long term or abandoned without going through a foreclosure recently.
The fact that vacancy and abandonment are bigger problems in weak housing markets than foreclosure is not surprising to those who have studied, worked, or lived in these markets. People with no experience with weak housing markets often overlook the problems of vacancy and abandonment, instead focusing on foreclosure. It is critical for policymakers to understand the differences between average and weak housing markets because the best tools for stabilizing housing in them differ. Weak markets need subsidies for the removal of vacant or abandoned homes, and less funding for rehabilitation. Likewise, proposals to move REO to rental property might not be as wise in weak markets as they are in average or stronger markets.
There are some promising local practices, such as modern land banking and low-value REO donation accompanied by per-property demolition grants, that will help correct this supply/demand imbalance. Still, I personally think it would be better in the long run if policymakers paid more attention to right-sizing weak housing markets, and less to subsidizing rehab and new construction in them.
This post is a summary of a working paper the two of us finished recently, available here.
There are numerous discussions taking place about the future of housing finance, most focusing on the secondary market. The central themes in these discussions have been the government's future role in secondary markets and restarting private secondary markets. But one area that is not receiving much attention is the potential liability of either the entities that arrange securitizations or the trusts (the assignees) that end up owning loans, for unlawful acts at loan origination.
During the housing boom, everyone seemed to think that assignees were shielded from the consequences of lenders' illegal acts. It appears that the market assumed that the holder in due course (HDC) rule (which protects note purchasers from most defenses to non-payment on notes) and originators' loan repurchase obligations through representations and warranties would take care of assignee liability risk. These turned out to be pretty bad assumptions. Originator repurchase obligations are only effective if the originator is still around to repurchase loans, which has been the case less and less frequently through the crisis.
In addition, assignees are not protected from liability by the HDC rule unless the notes are negotiable instruments and the buyers and sellers of the notes observed the formalities necessary to obtain the rule's protection. As we have seen with the shoddy foreclosure documentation, the industry ignored fundamental formalities and undermined the HDC shield.
The more interesting point is that many securitization arrangers may find themselves exposed to liability for the illegal actions of originators based on theories such as joint venture. Such claims have survived summary judgment motions when the arrangers prospectively agreed to purchase all or some of the loans originators made, and the arrangers had some knowledge of the originators' illegal acts. Arrangers could often glean information on lenders and their loan practices through due diligence, media reports, and informal information sharing in vertically-integrated firms (the last being very difficult to prove). Arrangers have also exposed themselves to liability by actually supplying deceptive disclosures and payment schedules to originators, who then provided the documents to borrowers.
So far, consumer claims against securitization arrangers have been rare and most have been settled, but this trend may reverse. Now that litigators and judges better understand the organization of the private label securities markets, these claims may have sturdier footing and judges and juries may be more sympathetic to consumers.
Uncertainty is clearly the theme when it comes to both assignee and arranger liability. This uncertainty impedes accurate pricing of MBS, especially given the potential for claims by attorneys general, large class actions, and widespread borrower rescission of loans. Policymakers who want to stimulate the secondary market need to address the legal complexity that causes this uncertainty (among other things). Going forward, the simplest solution is to create incentives for the market to police itself by allowing assignee and arranger liability for originator wrongdoing. The next step should be to set parameters for arranger and assignee liability so that it can be quantified and priced into credit. Together, these actions would sanction future bad actors, protect consumers, and help the MBS market by making it possible to price litigation risk.
Time Magazine’s “person of the year” is the “protester.” Occupy Wall Street’s participants have generated discussion, unprecedented in recent years, about the role of corporations and their executives in society. The movement has influenced workers and the unemployed alike around the world and has clearly shaped the political debate.
But how does a corporation really act? Doesn’t it act through its people? And do those people behave like the members of the homo economicus species acting rationally, selfishly for their greatest material advantage and without consideration about morality, ethics or other people? If so, can a corporation really have a conscience?
In her book Cultivating Conscience: How Good Laws Make Good People, Lynn Stout, a corporate and securities law professor at UCLA School of Law, argues that the homo economicus model does a poor job of predicting behavior within corporations. Stout takes aim at Oliver Wendell Holmes’ theory of the “bad man” (which forms the basis of homo economicus), Hobbes’ approach in Leviathan, John Stuart Mill’s theory of political economy, and those judges, law professors, regulators, and policymakers who focus solely on the law and economics theory that material incentives are the only things that matter.
Citing evolutionary biology, experimental game theory, and hundreds of sociological studies that have been replicated around the world over the past fifty years, she concludes that people do not generally behave like the “rational maximizers” that economic theory would predict. In fact, other than the 1-3% of the population who are psychopaths, people are “prosocial,” meaning that they sacrifice to follow ethical rules, or to help or avoid harming others (although, interestingly, in student studies economics majors tended to be less prosocial than others).
She recommends a three-factor model for judges, regulators and legislators who want to shape human behavior:
“Unselfish prosocial behavior toward strangers, including unselfish compliance with legal and ethical rules, is triggered by social context, including especially:
(1) instructions from authority
(2) beliefs about others’ prosocial behavior; and
(3) the magnitude of the benefits to others.
Prosocial behavior declines, however, as the personal cost of acting prosocially increases.”
While she focuses on tort, contract and criminal law, her model and criticisms of the homo economicus model may be particularly helpful in the context of understanding corporate behavior. Corporations clearly influence how their people act. Professor Pamela Bucy, for example, argues that government should only be able to convict a corporation if it proves that the corporate ethos encouraged agents of the corporation to commit the criminal act. That corporate ethos results from individuals working together toward corporate goals.
Stout observes that an entire generation of business and political leaders has been taught that people respond only to material incentives, which leads to poor planning that can have devastating results by steering naturally prosocial people toward unethical or illegal behavior. She warns against “rais[ing] the cost of conscience,” stating that “if we want people to be good, we must not tempt them to be bad.”
In her forthcoming article “Killing Conscience: The Unintended Behavioral Consequences of ‘Pay for Performance,’” she applies behavioral science to incentive-based pay. She points to the savings and loan crisis of the 1980s, the recent teacher cheating scandals on standardized tests, Enron, WorldCom, the 2008 credit crisis (which stemmed in part from performance-based bonuses that tempted brokers to approve risky loans), and the Bear Stearns and AIG executives who bet on risky derivatives. She disagrees with those who say that those incentive plans were simply poorly designed, arguing instead that excessive reliance on even well-designed ex ante incentive plans can “snuff out” or suppress conscience and create “psychopathogenic” environments, and has already done so, as evidenced by “a disturbing outbreak of executive-driven corporate frauds, scandals and failures.” She further notes that the pay-for-performance movement has produced less-than-stellar improvement in the performance and profitability of most U.S. companies.
She advocates instead for “trust-based” compensation arrangements, which take into account the parties’ capacity for prosocial behavior rather than leading employees to believe that the employer rewards selfish behavior. The danger is especially acute when the reward tempts employees to engage in fraudulent or opportunistic behavior because that is the only realistic way to achieve the performance metric.
Applying her three-factor model looks like this: Does the company’s messaging tell employees that it doesn’t care about ethics? Does it signal that other employees are rewarded for acting the same way? And does it suggest that there is nothing wrong with unethical behavior, or that it has no victims? This theory fits nicely with the Bucy corporate ethos paradigm described above.
Stout proposes modest, nonmaterial rewards such as greater job responsibilities, public recognition, and more reasonable cash awards based upon subjective, ex post evaluations of the employee’s performance, and cites studies indicating that most employees thrive and are more creative in environments that don’t focus on ex ante monetary incentives. She yearns for the days before Section 162(m), when the tax code did not push corporations to tie executive pay over one million dollars to performance metrics.
Stout’s application of these behavioral science theories provides guidance that lawmakers and others may want to consider as they look at legislation to prevent, or at least mitigate, the next corporate scandal. She also provides food for thought for those in corporate America who want to change the dynamics and trust factors within their organizations, and by extension among their employees, shareholders, and the general population.
I am grateful for Usha’s latest post about her ambivalence toward law and emotions scholarship because it provides an opportunity for an extended public discussion of the legal payoffs to (business) law professors of learning and teaching about emotions in general and happiness in particular.
I concur with Usha that it’s a busy time of the academic year, as the semester comes to a close and many of us will soon be traveling for the holidays (and some of us have traveled to participate in conferences). Of course, most of us feel that we are, if not always, then at least constantly busy. In their article titled Idleness Aversion and the Need for Justifiable Busyness, Christopher K. Hsee, Adelle X. Yang, and Liangyan Wang present experimental evidence that busier people self-report being happier. The following is a video short about how the days are long, but the years are short.
I am quite sympathetic to Usha’s view that happiness research is “all fascinating and it shapes my daily choices and reaffirms (or causes me to question) my life choices. Happiness research goes to the core of myself as a person. Still I wonder: what does this have to do with law?” This is partly because many people, including myself from a couple of years ago, share that view. As Usha pointed out, I’ve already written a number of law review articles and some peer-refereed articles about law and emotions, including but not limited to happiness. Rather than repeating any of those articles’ themes (those interested can find all of them available here), I’ll share five concrete responses to the specific challenge that Usha issued about the legal implications of, and payoffs to, emotions and happiness research.
First, much of law concerns human behavior: how to discourage anti-social behavior and encourage pro-social behavior. In attempting to change human behavior, law is and must be predicated upon a theory of human behavior. The theory can be Oliver Wendell Holmes’ bad man or neoclassical economics’ much-caricatured rational actor. Whatever the underlying theory, it must address human JDM (Judgment and Decision Making), because in order for the law to change human behavior, the law must change the judgments and/or decisions that humans make. It just so happens that there has been a recent flood of research about how emotions in general and happiness in particular influence human JDM. This research is diverse and scattered across many disciplines, including anthropology, economics, finance, neuroscience, marketing, philosophy, political science, psychology, and sociology. Of course, this plethora of non-legal interest and research does not have to mean there are legal implications of new understandings about how emotions and happiness shape human JDM. But at least some law professors can and should read this rapidly growing literature to digest it and see whether any of it has legal implications or payoffs. Paul Brest, Professor Emeritus and former Dean of Stanford Law School and current President of the William and Flora Hewlett Foundation, teaches a graduate course on JDM at Stanford University. With Linda Hamilton Krieger, Professor of Law and Director of the Ulu Lehua Scholars Program at the William S. Richardson School of Law in Honolulu, Hawai'i, and Senior Research Fellow at the Center for the Study of Law and Society at the University of California, Berkeley, he co-authored a book titled Problem Solving, Decision Making, and Professional Judgment: A Guide for Lawyers and Policymakers.
Chapter 13 of their book analyzes complexities of decision-making, including predicting future well-being, and Chapter 16 is titled The Role of Affect in Risky Decisions.
Second, much of business law is premised upon the neoclassical economics model of utility maximization or the behavioral economics challenge to that model. In either case, business law can benefit from recent work on happiness economics, because happiness economics raises a more fundamental challenge to and radical critique of neoclassical economics than behavioral economics does. Some view happiness economics as a proper subset of behavioral economics, while others view it as an extension of behavioral economics. In any event, behavioral economics points out that people have bounded rationality, bounded willpower, and bounded self-interest. The theoretical core of behavioral economics is an article titled Prospect Theory: An Analysis of Decision under Risk by Daniel Kahneman and Amos Tversky, an article likely to have been cited more times than it has been read by law professors, and certainly more times than it has been understood by law professors, as evidenced by overly broad attempted legal applications.
Happiness economics points out how people often systematically make decisions that fail to maximize their experienced happiness ex post, as opposed to their anticipated or predicted happiness ex ante. This robust empirical and experimental finding means that, at least in principle, there is room for some other party, public or private, to help improve (or take advantage of) people’s JDM. In a recent working paper, forthcoming in the American Economic Review, titled What Do You Think Would Make You Happier? What Do You Think You Would Choose?, Daniel Benjamin, Ori Heffetz, Miles S. Kimball, and Alex Rees-Jones present survey evidence that although what people choose hypothetically and what they predict would maximize their SWB (Subjective Well-Being) typically coincide, there are systematic reversals. They identify factors such as autonomy, family happiness, predicted sense of purpose, and social status that help account for hypothetical choices while controlling for predicted SWB. Their methodology has a number of possible legal and policy applications, including the development of aggregate measures of happiness. Another example is the application of their approach to reconcile a tension: in their article The Paradox of Declining Female Happiness, economists Betsey Stevenson and Justin Wolfers find declining average SWB among American women since the 1970s, both in absolute terms and relative to men, while a common intuition holds that expanded political and economic freedoms have made American women better off. Survey respondents were asked whether they would rather live in a world with or without such expanded freedoms for women. Significantly more respondents chose the world with expanded freedoms, despite believing that the world without them would make them happier.
Their National Bureau of Economic Research working paper 16489 titled Do People Seek to Maximize Happiness? Evidence from New Surveys contains additional examples and more details.
Third, research into two specific emotions, namely fear and greed, finds that participants in financial markets are sometimes emotional and sometimes unemotional because they engage in both emotional and unemotional types of mental processing in responding to ever-changing market circumstances. In a series of articles,
finance professor Andrew W. Lo posits that many tenets of rational expectations and the so-called efficient markets hypothesis do not always hold, despite serving as useful benchmarks of what might eventually happen under certain idealized conditions. He speculates that an evolutionary theory of punctuated equilibria, involving rare but large environmental shocks that produce mass extinctions and eruptions of new species, could apply to financial markets. As Lo points out, law and policy based upon assuming rationality (or, more precisely, a lack of emotionality) will be inapt during financial crises. Similarly, law and policy based upon assuming emotionality will be inapt during financially calm times. His Adaptive Markets Hypothesis implies that effective law and policy should adapt in light of changing financial markets and their participants. Examples of such adaptive business law and policy include:
(1) Countercyclical capital requirements.
(2) Collection, communication, dissemination, publication, and transparency of information about accurate systemic risk measures.
(3) Creation of a Capital Markets Safety Board (CMSB), analogous to the National Transportation Safety Board which conducts an independent investigation of all transportation accidents, in order to perform definitive forensic analysis of past financial crises. The CMSB would be made up of “teams of experienced professionals— forensic accountants, financial engineers from industry and academia, and securities and tax attorneys—that work together on a regular basis to investigate the collapse of every major financial institution.”
As Professor Lo cogently observes,
“The fact that the 2,319-page Dodd-Frank financial reform bill was signed into law on July 21, 2010—six months before the Financial Crisis Inquiry Commission submitted its January 27, 2011 report, and well before economists have developed any consensus on the crisis—underscores the relatively minor scientific role that economics has played in responding to the crisis. Imagine the FDA approving a drug before its clinical trials are concluded, or the FAA adopting new regulations in response to an airplane crash before the NTSB has completed its accident investigation.”
Fourth, central to effective JDM is the development and practice of skills related to emotions and emotional intelligence. A number of business trade books and business school courses focus on how managers can improve their emotional intelligence and in so doing become more effective organizational leaders. Law school clinical and negotiation casebooks and courses often discuss the importance of recognizing and responding appropriately to emotions in attorneys, clients, judges, juries, and other legal actors. For example, in their chapter, If I’d Wanted to Teach About Feelings, I Wouldn’t Have Become a Law Professor, Melissa L. Nelken, Andrea Kupfer Schneider, & Jamil Mahuad present concrete tools for teaching law students about the importance of emotions in negotiation. Yet much of current American legal non-clinical education teaches students explicitly and implicitly that lawyering is just about logical analysis and not about feelings. For example, in another article titled The Discourse Beneath: Emotional Epistemology in Legal Deliberation and Negotiation, Erin Ryan writes that "[b]y acknowledging the salience of wise emotionality in individual and collective deliberation, lawyers will not only improve their own personal repertoires, but propel the practice of law, negotiation, and policymaking toward new horizons of efficacy." Similarly, a recent book titled How Leading Lawyers Think: Expert Insights into Judgment and Advocacy by Randall Kiser discusses (at pages 75-85) how important emotional intelligence is to legal practice.
Fifth and finally, law professors can and should incorporate more information about emotions into law school. Many law professors and law students share a common discomfort with and disdain for emotions, in part because of what many believe it means to think like a lawyer. For example, see page 422 of the article titled Negotiation and Psychoanalysis: If I’d Wanted to Learn about Feelings, I Wouldn’t Have Gone to Law School by Melissa L. Nelken. In her anthropological study of first-year contracts classes at eight law schools, law professor and senior fellow of the American Bar Foundation Elizabeth Mertz found that being taught to think like a lawyer caused students to lose their sense of self: as they discount personal moral reasoning and values, they develop analytical and emotional detachment, substituting purely analytical and strategic reasoning for personal feelings of compassion and empathy.
In fact, empathy is an important skill that lawyers can and should learn. In his article, Thinking Like Nonlawyers: Why Empathy Is a Core Lawyering Skill and Why Legal Education Should Change to Reflect Its Importance, Ian Gallacher analyzes pedagogical implications of lawyers communicating a lot with people who are not lawyers, such as clients, jurors, and witnesses.
In conclusion, a better and more nuanced understanding of what roles emotions generally and happiness particularly can play in human JDM, economic behavior, financial markets, legal practice, and legal education can and should inform how law professors conduct academic research and teach law students.
As promised this post will be about recent proposals advocating that governments adopt various measures of aggregate happiness to complement such traditional measures of economic well-being as Gross Domestic Product (GDP) or Gross National Product (GNP). The basic premise for these proposals can be found in the first major campaign speech that Senator Robert F. Kennedy gave on March 18, 1968 at the University of Kansas. That speech challenged the prevailing orthodoxy of how governments measure progress and well-being.
Not surprisingly, the speech is right that many items counted in GNP do not reflect genuine social progress. To be clear and for the record, most economists have long understood that GDP is an imperfect proxy for social welfare. Such proposed refinements as Net Economic Welfare (NEW) attempt to improve upon GDP by placing values upon such negative externalities as crime, congestion, and environmental pollution and subtracting those costs from GDP. The last paragraph of the speech captures what proposed social measures of subjective well-being intend to measure:
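The arithmetic behind a NEW-style adjustment is simple enough to sketch. All of the dollar figures below are hypothetical placeholders, not actual valuations, and the three externality categories are only the examples mentioned above:

```python
# Sketch of a Net Economic Welfare (NEW) style adjustment:
# start from GDP and subtract the estimated dollar costs of
# negative externalities. All figures are hypothetical.

gdp_billions = 15_000  # hypothetical GDP, in billions

externality_costs_billions = {
    "crime": 300,
    "congestion": 150,
    "environmental pollution": 450,
}

new_billions = gdp_billions - sum(externality_costs_billions.values())
print(new_billions)  # 14100
```

The hard part, of course, is not the subtraction but assigning defensible dollar values to each externality in the first place, which is where such proposals remain contested.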
"Yet the Gross National Product does not allow for the health of our children, the quality of their education, or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials. It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country; it measures everything, in short, except that which makes life worthwhile. And it can tell us everything about America except why we are proud that we are Americans."
Of course, the claim that GNP "measures everything, in short, except that which makes life worthwhile. And it can tell us everything about America except why we are proud that we are Americans" is a bit overstated. Nonetheless, GNP can be improved to better measure what governments and societies value. There is currently a lively debate over whether, and if so how, governments can pragmatically measure aggregate happiness. One reason the debate is and will remain contested is that once an item is measured and recorded, it becomes harder to ignore and is likely to become part of policy discussions. As Kenneth Arrow pointed out on pages 47-48 of his book, The Limits of Organization,
"The Full Employment Act of 1946 amounted to nothing more than a statement that full employment was at last on the Federal agenda, and many felt that this was a hollow victory indeed. But those who opposed it so violently were not deceived; in the long run, this recognition was decisive, though the process of implementing the responsibility was slow indeed. Once an item has arrived on the agenda, it is difficult not to treat it in a somewhat rational manner, if that is at all possible, and almost any considered solution may be better than neglect."
Professors Kahneman and Sugden introduce to environmental economics a methodology of policy evaluation based on experienced utility that avoids well-known problems of preference anomalies in contingent valuation studies. French President Nicolas Sarkozy recently created a Commission on the Measurement of Economic Performance and Social Progress, chaired by 2001 Nobel Laureate in economics Joseph E. Stiglitz. The report by this commission makes a number of recommendations, including “Recommendation 10: Measures of both objective and subjective well-being provide key information about people’s quality of life. Statistical offices should incorporate questions to capture people’s life evaluations, hedonic experiences and priorities in their own survey." In a discussion paper titled Beyond GDP and Back: What is the Value-Added by Additional Components of Welfare Measurement, economists Sonja C. Kassenboehmer and Christoph M. Schmidt analyze quality-of-life indicators suggested in the Stiglitz Report. They find that much of the variation in many well-being measures is already well captured by such traditional economic indicators as GDP and the unemployment rate, but that because the correlation of alternative indicators with monetary measures is far from perfect, there is room to augment traditional statistical reporting with non-standard indicators.
British Prime Minister David Cameron recently announced similar plans to collect national well-being measures that incorporate life satisfaction. In an article titled Emotional Prosperity and the Stiglitz Commission, British economist Andrew Oswald argues that countries are capable of and should measure their emotional prosperity and focus on mental well-being. In that article, Oswald summarizes seven studies that suggest emotional prosperity and broad measures of psychological well-being have recently been declining over time. In a National Bureau of Economic Research working paper titled, Beyond GDP? Welfare across Countries and Time, American economists Charles I. Jones and Peter J. Klenow propose a simple summary statistic for a country’s flow of well-being that combines data about consumption, inequality, leisure, and mortality.
In an article titled Happiness and Public Choice, European economists Bruno S. Frey and Alois Stutzer caution that a policy of maximizing aggregate happiness faces a number of difficulties, including that it reduces people to merely being happiness metric stations and that it discounts problems with political institutions and incentive distortions. They instead propose two practical ways to use happiness research in policy: (1) facilitate the identification of institutions that help people best achieve their personal goals, and in so doing contribute maximally to individual happiness, and (2) provide crucial information as input to the political discussion process.
Instead of maximizing a measure of aggregate happiness, it might be more politically feasible to minimize a measure of aggregate misery, stress, or unhappiness, such as the U-index, which Daniel Kahneman and labor economist Alan Krueger proposed in their article titled Recent Developments in the Measurement of Subjective Well-Being and defined as the fraction of time that people spend experiencing unpleasant emotions. The U-index provides empirical information about negative emotional experiences that society may care about.
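To see what this definition amounts to in practice, here is a minimal sketch of computing a U-index from time-use diary data. The data layout and the tie-breaking rule (an episode counts as unpleasant when its strongest reported emotion is negative) are my own simplifying assumptions, not Kahneman and Krueger's exact survey protocol:

```python
def u_index(episodes):
    """Fraction of time spent in 'unpleasant' episodes, i.e. episodes
    whose most intense reported emotion is a negative one. A sketch of
    the Kahneman-Krueger U-index; the field layout is an assumption."""
    unpleasant_minutes = 0
    total_minutes = 0
    for ep in episodes:
        total_minutes += ep["minutes"]
        # Episode is unpleasant if the peak negative-affect rating
        # exceeds the peak positive-affect rating.
        if max(ep["negative"], default=0) > max(ep["positive"], default=0):
            unpleasant_minutes += ep["minutes"]
    return unpleasant_minutes / total_minutes if total_minutes else 0.0

# One hypothetical diary day, with 0-6 affect intensity ratings:
day = [
    {"minutes": 480, "positive": [4, 3], "negative": [1]},    # work
    {"minutes": 60,  "positive": [1],    "negative": [5, 2]}, # commute
    {"minutes": 180, "positive": [5],    "negative": [0]},    # leisure
]
print(u_index(day))  # 60 unpleasant minutes out of 720
```

Because the U-index is a duration-weighted fraction rather than an average of rating levels, it sidesteps some of the interpersonal-comparability problems of raw happiness scores: it only asks which emotion dominated, not how intensely it was felt.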
Another way to incorporate happiness data into policy analysis is to introduce maximum levels of a measure of unhappiness or minimum levels of a measure of happiness as constraints that government policies must satisfy while optimizing some objective function or goal besides happiness or unhappiness. This approach is analogous to philosopher Robert Nozick’s approach in his book titled Anarchy, State, and Utopia to incorporating rights as constraints that are not to be violated as opposed to rights as part of a policy goal to be optimized.
In her article titled Happiness on the Political Agenda? PROS and CONS, philosopher Valérie De Prycker argues that actual incorporation of happiness research into policy implicates a number of value-loaded ethical, ideological, and moral issues. But, in his article titled Greater Happiness for a Greater Number Is that Possible and Desirable?, sociologist Ruut Veenhoven believes that empirical research about life satisfaction refutes all theoretical philosophical objections against the greatest happiness principle. In yet a third article titled Greater Happiness for a Greater Number: Some Non-controversial Options for Governments, social scientist Jan C. Ott believes that governments can increase average happiness, eventually reduce happiness inequalities, and realize both purposively by non-controversial means. In another article titled Good Governance and Happiness in Nations: Technical Quality Precedes Democracy and Quality Beats Size, Professor Ott examines how quality of governance and in particular technical as opposed to democratic quality is correlated with average happiness of a country's citizens and finds that technically good governance appears to be a universal condition for happiness independent of culture. Once technical quality of governance reaches a minimum level, democratic quality of governance adds substantially to the positive effects of technical quality of governance upon average happiness.
In his chapter titled That Which Makes Life Worthwhile in the book Measuring the Subjective Well-Being of Nations: National Accounts of Time Use and Well-Being, behavioral economist George Loewenstein proposes that time-use surveys ask people not just how much positive and negative affect they felt during a particular activity, but also whether they believed the activity was a valuable or worthwhile use of their time or instead a waste of it. In their article titled Accounting for the Richness of Daily Activities, psychologist Mathew P. White and economist Paul Dolan ask people not just how they felt during a particular activity, but also six additional questions about such non-hedonic aspects of experience as being engaged, focused, and finding meaning. These insights, showing that people care not just about positive affect but also about meaning in their lives, raise questions about whether law and policy should care more about positive affect or about meaning in people’s lives.
In the article titled The Metrics of Subjective Wellbeing: Cardinality, Neutrality and Additivity, Australian economist Ingebjørg Kristoffersen offers a legitimate source of uneasiness about basing social policies upon aggregations of empirical happiness data. The article quantitatively analyzes mathematical properties of such data that remain contentious among economists, namely additivity, cardinality, and neutrality, even though psychologists have to some degree already addressed how to make international, interpersonal, and intertemporal comparisons of happiness data. This mathematical analysis also serves as a cautionary, persuasive critique of recent proposals by law professors that governments eschew cost-benefit analysis and instead determine and evaluate policy based upon aggregations of happiness, defined simply as experienced positive feeling.
Finally, a concern with experienced subjective well-being captured by self-reports of happiness is what economist Carol Graham terms the paradox of happy peasants and miserable millionaires, which arises from differences in anticipations or expectations between poor and rich people. As Graham notes, optimism among poor individuals can be a tool for their survival, and poor parents may revise their own expectations downward while maintaining hopeful expectations for their children. If peasants report being happy due to lowered expectations and (perhaps some) hedonic adaptation, while millionaires report misery due to envy of even richer people and (perhaps unrealistic) expectations, should law and policy be more concerned about the self-reported unhappiness of rich people, or about increasing the self-reported happiness of poor people, even if that means encouraging or nudging poor individuals to expect more of their future?