Vulnerable Software: 

Product-Risk Norms and the Problem of Unauthorized Access+

 

Richard Warner* and Robert H. Sloan**

 

WORK IN PROGRESS: LAST UPDATED 08/20/2011

The online edition of the Journal immediately publishes accepted articles online as works in progress to expedite publication and elicit feedback from the site's scholarly visitors.  This draft may not reflect the content or form of the final publication.  Comments on the following draft may be made by emailing JLTP directly.

 

Abstract

 

Unauthorized access to online information costs billions of dollars per year.  Software vulnerabilities are a key cause.  Software currently contains an unacceptable number of vulnerabilities.  The standard solution notes that the typical software business strategy is to keep costs down and be the first to market even if that means the software has significant vulnerabilities.  Many endorse the following remedy:  make software developers liable for negligent or defective design.  This remedy is unworkable.  We offer an alternative based on an appeal to product-risk norms.  Product-risk norms are social norms that govern the sale of products.  A key feature of such norms is that they ensure that the design and manufacture of products impose only acceptable risks on buyers.  Unfortunately, mass-market software sales are not governed by appropriate product-risk norms; as a result, market conditions exist in which sellers profitably offer vulnerability-ridden software.  This analysis entails a solution:  ensure that appropriate norms exist.  We contend that the best way to do so is a statute based on best practices for software development, and we define the conditions under which the statute would give rise to the desired norm.  Why worry about creating the norm?  Why not just legally require that software developers conform to best practices?  The answer is that enforcement of the legal requirement can be difficult, costly, and uncertain; once the norm is in place, however, buyers and software developers conform on their own initiative. 

 

 

Table of Contents

 

I. Norms

A.  Coordination Norms

1.  Coordination norms defined

2.  Why people conform to coordination norms

B. Value-optimal Norms

C.  Coordination Norms and Coordination Games

1. Definitions

2. Value-optimality and Nash Equilibria

 

II. Product-Risk Norms

A.  The “Fitness” Norm

B.  The “Negligent Design/Manufacture” Norm

C.  The “Best Loss-Avoider” Norm

D.  Norm-Implemented Tradeoffs

E.  A New Definition of Value-Optimal Norms

 

III. Acceptable Risk and Ideal Transaction Conditions

A.  Detecting Norm Violations

B.  Norm-Violation Detectors versus Norm-Inconsistent Sellers

C.  Sellers’ Inability to Discriminate

D.  The Profit-Maximizing Strategy

E.  Summary of the Product-Risk Norms Model

 

IV. Applying the Model to Software Vulnerabilities

A.  The “Vulnerability-Ridden” Norm

B.  Why Not Fitness, Negligent Design/Manufacture, and Best Loss-Avoider?

C.  The “Vulnerability-Ridden” Norm Is Not Value-Optimal

 

V. Best Practices and Best-Practices Norms

A.  Best Practices Defined

B.  Summary of the Argument for the Best-Practices Norm

C.  Best Practices for Software Development

D.  Developers Do Not Follow Best Practices

 

VI. Conditions for Creating the Norm

A.  Perfect Competition

B.  Sufficient Detection

C.  Creating the Norm

D.  The Approximation Goals

 

VII. Creating the Norm through Legal Regulation

A.  Negligence

B.  Products Liability for Defective Design

C.  Statutes Closely Modeled on Negligence or Products Liability

D.  A Statutory Task

          1. Avoiding a lemons market

          2. Creating the norm

          3. Once the norm is established

 

VIII. Conclusion

 

Losses from unauthorized access to online information run in the billions per year.[1]  We assume it would be better to avoid this loss;[2] our question is how best to do so.  We limit our inquiry by focusing exclusively on one significant source of unauthorized access:  software vulnerabilities.[3]  A vulnerability is a property of a software system that could be exploited to gain unauthorized access to a computer or network.[4]  The prevailing and correct consensus is that software programs currently contain an unacceptable number of vulnerabilities.[5]  Why?  And, what is the remedy?  The standard answer to the first question assumes that “businesses are profit-making ventures, so they make decisions based on both short- and long-term profitability.”[6]  Reducing vulnerabilities requires a longer and more costly development process, and the typical profit-maximizing strategy is to keep costs down and be the first to offer a particular type of software even if it is imperfect in a variety of ways, including having vulnerabilities.[7]  Many who offer this diagnosis endorse the following remedy:  make software developers liable for negligent or defective design—either by adapting common law tort doctrines, or by enacting statutes based on negligence or product liability concepts.[8]  We do not dispute the profit-maximizing diagnosis.  “The market often rewards first-to-sell and lowest cost rather than extra time and cost in development.”[9]  We do reject the remedy.  We offer an alternative based on an appeal to product-risk norms. 

Product-risk norms are social norms that govern the sale of products.  A key feature of such norms is that they ensure that the design and manufacture of products impose only acceptable risks on buyers.[10]  Unfortunately, mass-market software sales are not governed by appropriate product-risk norms; and, as a result, market conditions exist in which sellers can, and do, profitably offer vulnerability-ridden software.  This analysis entails a solution:  ensure that appropriate norms exist.  We contend that the best way to do so is a statute based on best practices for software development.  Our concern with the norm may seem puzzling.  Since we will suggest a statute, why not just stop there?  Why not just legally require that software developers conform to best practices and not worry about creating a norm?  Our answer is that there are significant advantages to creating the norm.  Enforcement of the legal requirement can be difficult, costly, and uncertain;[11] once the norm is in place, however, buyers and software developers conform on their own initiative. 

In Section I, we define norms generally as well as the important special case of coordination norms.  We also introduce the concept of a value-optimal norm.  The concept is required for our central claim, namely:  that the sale of mass-produced software is not governed by value-optimal norms.  In Section II, we provide the background essential to defending this claim.  We argue that product-risk norms are coordination norms that ensure that most buyers demand similar features in particular types of products.  We offer three examples.  The examples illustrate that in mass markets, product-risk norms are coordination norms that promote buyers’ interests by unifying their demands.  A mass-market buyer cannot unilaterally ensure that sellers will conform to his or her requirements; coordination norms create collective demands.  In Section III, we adapt a well-known law and economics argument to explain why—under ideal transaction conditions—profit-motive driven sellers conform to product-risk norms because offering norm-conforming products is the profit-maximizing strategy.  As we explain in detail in Section III, transaction conditions are ideal when two conditions hold:  there are enough value-optimal product-risk norms, and the market is perfectly competitive.  We argue in Section IV that software sales fail to adequately approximate this ideal.  While we address the concern that sufficiently competitive markets may not exist, we focus primarily on the lack of appropriate value-optimal norms.  We argue that buyers are trapped in a product-risk norm that is not value-optimal, and we contend that the solution is to replace the current norm with a norm formulated in terms of best practices for software development.  Section V defines the notion of a best practice and argues that best practices exist for software development.  Section VI specifies the conditions under which a best-practices norm will arise.  In Section VII, we argue that legal regulation is required to fulfill these conditions, and we sketch an appropriate statute.

 

I. Norms

          We begin by describing the purchase of a typical consumer good.  When Alice discovers that her water heater no longer works, she purchases a new one.  She takes it for granted that the gas pilot light will not stop burning every few days; that the water heater will not burst; that the materials are sufficiently corrosion resistant that the water heater will function properly for about ten years; and so on.  Alice does not try to confirm these assumptions.  She does not investigate the water heater, its design specifications, or its manufacturing process.  She simply assumes that its design and manufacture do not impose unacceptable risks (as long as she uses the water heater for its intended purpose).  She assumes this because she assumes that the sale of the water heater is governed by relevant product-risk norms.  This raises three questions.  What are the relevant norms?  Why, and in what sense, do norm-compliant sales ensure only acceptable risks?  And, why do buyers and sellers comply with product-risk norms?  Clarifying the relevant notion of a norm is an essential preliminary.  

 

A.  Coordination Norms

          Product-risk norms are coordination norms.  Coordination norms are one important species in the broad genus of norms in general.  We define the genus first.  Thus:  a norm is a behavioral regularity in a group, where the regularity exists at least in part because almost everyone thinks that he or she ought to conform to the regularity.[12]  Suppose, for example, that the norm in Jones’s small town is to go to a Protestant church on Sunday; that is:  almost everyone goes to a Protestant church on Sunday (even though there is a Catholic church nearby in the next town); and, almost everyone does so at least in part because almost everyone believes he or she ought to go to a Protestant church on Sunday.  As the “almost” in “almost everyone” indicates, the existence of a norm does not require universal compliance.  We will not consider the interesting question of how many count as “almost everyone,” and we will for convenience drop the “almost” and simply understand “everyone” as “almost everyone.” 

          Norms evolve over time through repeated patterns of interaction; the interactions may initially have their source in custom, private agreement, or law (or a combination of these factors).[13]  To take custom first, it is easy to imagine it as the source of the “Protestant church” norm.  Suppose it was customary for some part of the town’s population to attend the church; church-goers and non-church-goers alike notice the custom, and both groups begin to think that they ought to conform—either out of religious conviction, or some other view of why it is a good thing, or in order to avoid the disapproval of others.  Attendees continue to attend while non-attendees increasingly become attendees.  To illustrate private agreement, imagine that two years ago Scott and Zoe agreed to meet at Starbucks every morning; having done so for two years, each thinks he or she ought to meet the other at Starbucks.  The norm of driving on the right illustrates the role of legal regulation.  The norm owes its existence at least in part to the fact that it is the law that one drives on the right.  Legal regulation does not, however, always bring a norm into existence; it is the law that one should obey speed limits, but the norm is to exceed them.  We defer further consideration of the generation of norms to Sections VI and VII.  Until then, we focus on why people continue to conform to already established norms. 

 


We first define coordination norms and then turn to the question—critical for our later purposes—of why people conform to coordination norms. 

 

          1.  Coordination norms defined

Driving on the right is an example of a coordination norm.  Before considering what makes this a coordination norm, note that the general definition of the genus is fulfilled:  everyone (in the United States) drives on the right, and they do so in part because they think they ought to.  Exceptional circumstances aside, no one thinks he or she should drive on the left—as long as everyone else drives on the right.  The “as long as” is the distinctive feature of the example.  The “ought” is conditional.  Everyone thinks he or she ought to drive on the right, but only on the condition that everyone else does so.  If everyone started driving on the left, no one would think he or she ought to drive on the right.  This conditional “ought” distinguishes driving on the right from the norms we examined earlier.  In the Protestant church norm, for example, each churchgoer expects others to attend, but attendance does not depend on that expectation; each attends because each thinks he or she ought to, no matter what others do.  “Attend a Protestant church” and “drive on the right” are both norms, but they are different species of the same genus; the latter is a coordination norm; the former is not.  A coordination norm is a behavioral regularity in a group, where the regularity exists at least in part because almost everyone thinks that he or she ought to conform to the regularity, as long as everyone else does.[14]  The “ought” is conditioned on the assumption about everyone else.  We will need to refer to such “oughts” frequently, and, to avoid constant repetition of “as long as everyone else does,” we will often say, for short, that one thinks one ought conditionally to conform. 

An example is helpful.  You are about to enter an elevator in which others are already present.  Where do you stand?  The norm is to maximize the distance between you and the person nearest you.[15]  Thus, everyone thinks he or she ought to conform to the norm—but conditionally, as long as everyone else conforms.  There is little point in being the only “nearest neighbor distance maximizer” if everyone else is just going to stand wherever they like.  The example illustrates an important feature of coordination norms:  they make it possible for parties to coordinate their behavior in ways that realize shared interests that none could realize on their own.  The shared interest in the case of elevators is finding an acceptable compromise between two goals:  using the elevator when it arrives, and avoiding unacceptable crowding.  No elevator user can strike an acceptable balance on his or her own; others must cooperate by standing in appropriate places.  Following the “maximize the distance from your nearest neighbor” norm creates the necessary cooperation.[16]  Similar points hold for driving on the right.  No driver alone can realize the goal of everyone driving on the same side of the road.  The norm ensures the necessary cooperation.

 

          2.  Why people conform to coordination norms

A key claim in our analysis is that coordination norms resist change; once established, they are self-perpetuating.  To explain why, we need to see why people conform to norms.  We begin with non-coordination norms (like the Protestant church norm) and then turn to coordination norms.    

People conform to non-coordination norms because, for the most part, people do what they sincerely and without reservation think they ought to do.  Cases of “thinking one ought” form a continuum.  At one extreme, one conforms only to avoid sanctions (one may avoid eating one’s meat with one’s salad fork only to avoid the disapproval of one’s etiquette-obsessed friends); at the other extreme, sanctions play no role in explaining conformity.  One conforms because one thinks that conformity realizes a state of affairs one regards as good—attending a Protestant church, for example.  In between, conformity is a mix, in varying degrees, of both factors.  People in Jones’s town, for example, may attend church because they think it is their religious duty to do so, and because others would disapprove if they failed to attend.  Across the entire continuum, it is true to say that one thinks one ought to conform.  The “ought” is a prudential “ought” at the “conform only to avoid sanctions” end, and a non-prudential “ought” at the “conform to realize a good state of affairs” end.  Our free use of “ought” may ring false to those who assume that people are entirely self-interested.[17]  We do not share the assumption, but those who wish to work within its constraints may simply interpret our “one ought to do” as “it is in one’s self-interest to do.”  We will not make any claims inconsistent with that interpretation.[18]

We now turn to coordination norms.  The explanation of why people conform to non-coordination norms is not adequate as an explanation of why they conform to coordination norms.  To see why, recall that coordination norms are regularities that exist at least in part because everyone thinks that he or she ought to conform conditionally to the regularity.  Thus, one will conform as long as one expects everyone else to do so.  Our earlier explanation simply does not address cases in which one’s convictions about what one ought to do depend on one’s expectations about what everyone else will do.  Our explanation of conformity to coordination norms is that conformity yields mutually concordant expectations about conformity, which yield conformity, which yields mutually concordant expectations about conformity, which . . ., and so on.  In this way, once established, coordination norms are self-perpetuating.  There are two questions.  How does conformity yield expectations?  And, how do expectations yield conformity? 

It is easy to see how conformity yields expectations.  Imagine Alice is about to enter an elevator.  Like anyone who has lived long enough in the community in which the elevator norm obtains, Alice knows that people conform to the norm because they think they ought to.[19]  What is true of Alice is true of everyone.  Everyone who has lived long enough in the community knows that people conform because they think they ought to.  Thus, mutually concordant expectations exist:  everyone expects everyone to conform.    

Now how do those expectations give rise to conformity?  Start again with Alice.  Alice thinks she ought to conform as long as everyone else does so; the expectation that everyone else will conform therefore gives her good reason to conform when she enters the elevator, and, acting on that reason, she will conform.  Again, everyone is like Alice.  Each person thinks he or she ought to conform as long as everyone else does, so the expectation that everyone else will conform gives each person a reason to conform, and, acting on that reason, each will conform.  Thus:  conformity yields mutually concordant expectations about conformity, which yield conformity.  The continuing conformity reinforces the mutually concordant expectations about conformity, which yield conformity, which reinforces the mutually concordant expectations about conformity, which . . .  The process ensures that, once established, coordination norms are entrenched, self-perpetuating practices.  Our critique of software sales is that the “wrong” product-risk coordination norm has become entrenched in precisely this self-perpetuating way.   

 

B. Value-optimal Norms    

          The product-risk norm governing software sales is “wrong” in the sense that it is not value-optimal.  So what is a value-optimal norm?  To answer, consider first that one typically conforms to norms without much thought; when you step into an elevator, you just unreflectively stand in the appropriate spot.  You think you ought to stand there, but you do not worry or wonder about the justification for that “ought.”  You could justify it, however; you could if you reflected on the norm under ideal conditions (including having sufficient time, sufficient information, lack of bias, and so on).[20]  You could justify the balance the norm strikes between not feeling crowded, and being able to use the elevator when it arrives.  Roughly speaking, a norm is value-optimal when one can, in light of one’s values, justify the norm.   

This is only “roughly speaking” because justification is a matter of degree.  One might, for example, regard the elevator norm as justified but also think that the following alternative is even better justified:  maximize the distance from your nearest neighbor and do not enter the elevator unless that distance is at least three inches.  It is essential to take degrees of justification into account to arrive at an explanation of value-optimality that will serve our purposes in what follows.  Thus, we define a value-optimal norm as follows:  a coordination norm is value-optimal when (and only when), in light of the values of (almost) all members of the group in which the norm obtains, the norm is at least as well justified as any alternative.  It is the “at least as well justified as any alternative” that makes the norm optimal; it means one cannot improve matters by choosing a better justified norm.  There are many optimality notions; Pareto optimality is perhaps the most well known.[21]  Value-optimality is the notion that we need.
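The definition can be stated compactly (the notation is ours, introduced only for convenience):  writing J(n) for how well justified the norm n is in light of the values of (almost) all members of the group, a coordination norm n is value-optimal just in case J(n) ≥ J(n′) for every alternative norm n′.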

Our analysis of software vulnerabilities focuses on a particular type of failure of value-optimality.  The following example illustrates the type.  Until 1979, hockey players in the National Hockey League did not wear helmets despite the clear risk of severe head injury.[22]  There were two disadvantages to wearing a helmet:  non-helmet-wearing players’ perception that helmet-wearers lacked toughness, and a small loss in playing effectiveness against non-helmet-wearing players from the helmet’s restriction of peripheral vision.  Nonetheless, had one conducted a secret ballot at the time, the vast majority of players would have agreed that it would be better if all players wore helmets.[23]  “One player summed up the feelings of many:  It is foolish not to wear a helmet.  But I don’t—because the other guys don’t.  I know that’s silly, but most of the other players feel the same way.”[24]  In light of the sanctions, each player thought he ought to conform.  The result was that it remained a norm not to wear a helmet until 1979, when the league required helmets.  Despite its persistence, the “no helmet” norm was not value-optimal.  There was an alternative the players regarded as far better justified:  all players wear helmets. 

The example shows why value-optimality matters.  The no-helmet norm defined a tradeoff between the risk of head injury, on the one hand, and peripheral vision and appearing tough, on the other.  When they conformed to this norm, the players accepted this tradeoff—even though they regarded another norm (all players wear helmets) implementing a different tradeoff (reduced risk of head injury) as far better justified.  This is why value-optimality matters:  conformity to a norm that lacks value-optimality means acting contrary to one’s values.  We argue in Section V that software buyers are trapped in conformity to a product-risk coordination norm that lacks value-optimality.

 

C.  Coordination Norms and Coordination Games

Our notion of coordination norm has strong connections to the notion of a coordination game in game theory.[25]  This subsection is not essential to our argument, and readers with no taste for technical details may wish to skip the discussion.  However, examining the connections with coordination games sheds important light on our use of the notion of a value-optimal norm.  We assume some basic familiarity with game theory, and we first briefly recall some standard definitions.[26]

 

          1.  Definitions

In a (normal-form) game, each player has a finite set of actions available.  Each player simultaneously chooses an action, and the outcome of the game, a payoff to each player, is determined by the actions chosen.  A player’s strategy specifies what action he or she will use; a pure strategy is the choice of one particular action; a mixed strategy randomizes over two or more actions (e.g., if the possible actions are “Left” and “Right,” then one mixed strategy is, “Choose Left with probability 1/3; choose Right with probability 2/3”).  A set of one strategy for each player is called a strategy profile.  A strategy profile is a Nash equilibrium if each player’s strategy is a best possible response to the combined strategies of all the other players.  Nash’s famous theorem says that every such (finite) game has at least one mixed-strategy Nash equilibrium.  Intuitively, we would expect that a game with pure-strategy Nash equilibria that is played repeatedly will wind up with the players settling into one of those equilibria.
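In symbols (the standard textbook formulation, restated here for convenience):  a strategy profile s = (s1, . . ., sn) is a Nash equilibrium just in case, for every player i and every alternative strategy si′ available to that player, ui(s) ≥ ui(si′, s−i), where ui is player i’s payoff function and (si′, s−i) is the profile obtained from s by replacing player i’s strategy with si′.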

Normal-form games with only two players are typically described by giving a payoff matrix that shows the payoffs for all possible actions by the players.  For example, for the game of deciding which side of the road to drive on, with actions “Left” and “Right” we have:

 

             Left        Right
Left         (10, 10)    (0, 0)
Right        (0, 0)      (10, 10)

Figure 1: The Driving Game (which side of the road?)

 

In each cell of the matrix, there is a pair of numbers. The left number gives the payoff to the player who chooses the row, and the right gives the payoff to the player who chooses the column.  Here there are two Nash equilibria, one where both players drive on the right and one where both drive on the left.

Let us say that a game is a coordination game if it has at least two pure-strategy Nash equilibria where all players choose corresponding actions, and no other pure-strategy Nash equilibria.[27]   Our driving game represents the purest possible sort of coordination game: Both players have the same payoffs for every combination of actions, and there are strict Nash equilibria for the action profiles consisting of corresponding moves.
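Readers who want to check such claims mechanically can do so with a few lines of code.  The following C program is a minimal sketch of our own (the article itself contains no code, and the names row, col, and is_nash are our inventions); it enumerates the pure-strategy Nash equilibria of a two-player, two-action game, using the Driving Game payoffs from Figure 1.

/* Enumerate the pure-strategy Nash equilibria of a two-player,
   two-action game.  The payoffs encode the Driving Game of Figure 1. */
#include <stdio.h>

#define N 2  /* two actions per player */

/* row[i][j] is the row player's payoff, and col[i][j] the column
   player's payoff, when row plays action i and column plays action j. */
static const int row[N][N] = { {10, 0}, {0, 10} };
static const int col[N][N] = { {10, 0}, {0, 10} };

/* (i, j) is a pure-strategy Nash equilibrium when neither player can do
   strictly better by deviating unilaterally. */
static int is_nash(int i, int j)
{
    for (int k = 0; k < N; k++) {
        if (row[k][j] > row[i][j]) return 0;  /* row player would deviate */
        if (col[i][k] > col[i][j]) return 0;  /* column player would deviate */
    }
    return 1;
}

int main(void)
{
    static const char *name[N] = { "Left", "Right" };
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (is_nash(i, j))
                printf("(%s, %s) is a pure-strategy Nash equilibrium\n",
                       name[i], name[j]);
    return 0;  /* prints (Left, Left) and (Right, Right), nothing else */
}

Run on the Figure 1 payoffs, the program reports exactly the two equilibria noted above, (Left, Left) and (Right, Right), confirming that the Driving Game satisfies our definition of a coordination game; substituting the payoffs of Figures 2 through 5 checks the corresponding claims below.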

 

          2. Value-optimality and Nash Equilibria

To see the connection to value-optimality, consider The Stag Hunt Game.[28]  Two hunters each have to decide whether to hunt stag together (neither can catch a stag alone) or hunt rabbits separately (which they can easily catch on their own).  Stags provide a lot more food than whatever number of rabbits each could catch alone, and thus each prefers cooperating to hunt stags to hunting rabbits alone. Thus the payoff matrix might be

 

             Stag        Rabbit
Stag         (10, 10)    (0, 3)
Rabbit       (3, 0)      (3, 3)

Figure 2: The Stag Hunt Game

Or perhaps if one hunter is likely to catch extra rabbits if the other hunter is (futilely) hunting a stag by himself, it might look like

 

             Stag        Rabbit
Stag         (10, 10)    (0, 5)
Rabbit       (5, 0)      (3, 3)

Figure 3: Stag Hunt game with slightly different payoffs

The key point is that either way the choice “Rabbit, Rabbit” forms a Nash equilibrium.  Each player, if he believes the other player will choose rabbit, should himself rationally choose rabbit.  But why would a player believe that the other will choose rabbit when they both prefer stag?  Distrust is a sufficient reason.  Imagine hunting stag is more difficult and uncertain than hunting rabbits.  Each will hunt stag only as long as he or she believes the other will.  As soon as one of them becomes convinced that the other will desert the stag hunt to catch rabbits, he or she too will abandon the stag hunt.  Where there is insufficient trust, hunting rabbits will become the norm.  It will be a behavioral regularity, and—since it is a Nash equilibrium—each will think he or she ought to conform as long as the other does. 
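A worked check of this claim against the payoffs in Figure 3 (using only the numbers given there):  suppose your partner chooses Rabbit.  Choosing Stag then pays you 0, since you cannot catch a stag alone, while choosing Rabbit pays you 3; Rabbit is therefore your best response, and neither hunter can gain by unilaterally deviating from Rabbit, Rabbit.  The same comparison (0 versus 3) holds in Figure 2, so the point does not depend on which payoff matrix we use.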

The norm is not value-optimal, however.  There is another Nash equilibrium—Stag, Stag—and each player believes that it is the best outcome and hence believes that the coordinating behavior needed to achieve it is value-optimal.  However, as long as a player believes that the other player is going to choose rabbit, he believes that he too should choose rabbit.  Indeed, it would not merely be risky but outright foolish to choose stag if one knows that the other player will choose rabbit.

The rabbit norm traps the players in a suboptimal equilibrium.  Our notion of a value-optimal norm generalizes this idea beyond the confines of game theory.  We will argue shortly that the stag-hunt game corresponds to buyers’ choices in buying vulnerability-ridden software, with the rabbit action corresponding to settling for defective software, and the stag action corresponding to demanding higher quality software.  Demanding higher quality software is the value-optimal alternative, but buyers are trapped in the choice of defective software.  It is illuminating in this regard to return to the 1970s professional hockey players.

For simplicity, we consider just two players, each representing some substantial fraction of all the hockey players.  Here we could get two quite different games, depending on just what assumptions we make about the hockey players’ utility.  First, we might get exactly the same payoff matrix as the one in Figure 3, changing only the labels on the actions from Stag, Rabbit to Helmet, Bare. 

 

             Helmet      Bare
Helmet       (10, 10)    (0, 5)
Bare         (5, 0)      (3, 3)

Figure 4: Hockey Helmet Game (as Stag Hunt)

We obtain the payoff matrix in Figure 4 by assuming that players prefer helmets to bare heads, prefer an advantage in winning to an even game, and prefer an even game to being at a disadvantage.[29]  However, another plausible set of assumptions about the hockey players’ preferences gives us the famous Prisoner’s Dilemma game.[30]  We need assume only that their preference for an advantage in winning the game is larger than any preference for wearing a helmet.  For example, we might have[31]

 

             Helmet      Bare
Helmet       (5, 5)      (0, 10)
Bare         (10, 0)     (3, 3)

Figure 5: Hockey Helmet Game (as Prisoner's Dilemma)

 

The two payoff matrices look quite similar, but there is a very important difference.  For the Stag Hunt version of the hockey helmet game, there are in fact two Nash equilibria, and if everybody in the league is playing bare-headed, then we have a difficult but potentially solvable problem:  how do we move to the other Nash equilibrium, or, in our terms, how do we move from the norm that is not value-optimal to the norm that is value-optimal?  However, in the Prisoner’s Dilemma version, there is only one Nash equilibrium, the low-payoff one at Bare, Bare, which pays 3 to each player. 
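Both claims are easy to verify directly from the payoff matrices.  In Figure 4, Helmet is the best response to Helmet (10 versus 5) and Bare is the best response to Bare (3 versus 0), so Helmet, Helmet and Bare, Bare are both Nash equilibria.  In Figure 5, by contrast, Bare is the better response no matter what the other player does (10 versus 5 if the other wears a helmet; 3 versus 0 if the other goes bare); Bare strictly dominates Helmet, and Bare, Bare is the game’s only Nash equilibrium.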

Our use of value-optimality generalizes this game-theoretic theme from coordination games to coordination norms.  When a norm lacks value-optimality, there is (at least) one other alternative norm that is better justified; in game-theoretic terms, there are—so to speak—(at least) two Nash equilibria, one of which all players prefer to the other.  The “so to speak” qualification is essential.  We have defined coordination games only for two-player interactions; the parties to the norms we consider typically number in the millions.  Still, we think that coordination games offer a mathematically precise model that illuminates the broader phenomenon of value-optimality in the case of coordination norms.       

 

II. Product-Risk Norms

 

          Typically, product sales are governed by (more or less) value-optimal norms.  In Section IV, we argue that software sales are not governed by an appropriate value-optimal product-risk norm and thus are an aberration from the typical pattern.  In this section, we illustrate the typical pattern with three examples. 

Each example is a coordination norm that allocates the burden of avoiding losses.  Sellers bear whatever investment is required to produce norm-conforming products while buyers bear the risks of loss from using a norm-conforming product (unless those risks are assigned to the seller by other norms, by law, or by contract).  We have deliberately chosen examples that may appear to govern the allocation of risks of unauthorized access due to software vulnerabilities.  We argue in Section IV that they do not.  The argument rests on the following point, which we emphasize in the discussion of the examples:  one applies product-risk norms against a background of shared judgments about the proper allocation of risk in particular cases.  The relevant shared judgments in the case of software sales assign the vulnerability-created risk of unauthorized access in ways inconsistent with the examples, and indeed in ways inconsistent generally with product-risk norms governing non-software sales.

          One preliminary remains.  Who are the parties to the norms?  The answer may seem obvious—buyers and sellers; after all, they need to coordinate so that sellers offer what buyers want to buy; further, if the norms are to allocate risks between buyers and sellers, how could both not be parties to the norm?  It is indeed possible to represent product-risk norms as buyer-seller coordination norms; however, it is also possible, and simpler and more elegant, to model the norms as norms to which the only parties are buyers.  The key point is that sellers design mass-market products in response to sufficiently large groups of buyers; hence, no buyer can unilaterally ensure that his or her desired level of risk will be available; only a sufficiently large collective demand can accomplish that.  Coordination via product-risk norms creates the required collective demand, to which profit-motive driven sellers respond.  Since the profit motive is sufficient to ensure the sellers’ response, there is no need to see the sellers as a party to the coordination norm.[32] 

         

A.  The “Fitness” Norm

The norm is that buyers “demand” products which are fit for the ordinary purpose for which such products are used.  We use “demand” here in the following sense:  “demand a fit product and (other things being equal[33]) refuse to buy an unfit product.”  We will use “demand” in this “demand and refuse to buy” sense throughout in our discussion of product-risk norms. 

There is no doubt that the “fitness” norm is indeed a norm.  The required regularity exists; buyers do demand fit products;[34] moreover, they think they ought to do so—conditionally.  As long as everyone conforms, non-conformity would mean unilaterally demanding an unfit product.  The demand would go unfulfilled, and the non-conforming buyer would forego the purchase of the product.  To the extent that doing so is unacceptable, the buyer will think he or she ought to conform.  Of course, if enough buyers were interested in purchasing “unfit” products, sellers would begin to offer them (other things being equal), and a new “fitness” norm would develop to govern those sales; products “fit” under the new norm would not be “fit” under the old one.    

Varying notions of fitness are possible because “fitness” is determined by contextually-sensitive normative judgments.  It could hardly be otherwise.  Fitness depends on the type of product, the circumstances in which it is ordinarily used, the knowledge and skill of typical buyers, and the values in light of which buyers evaluate the product.  In a significant range of cases, there is sufficient overlap in values, use, knowledge and skill that buyers converge on roughly the same judgments of fitness in particular cases.  Lindy Homes v. Evans Supply Co.[35] is an excellent example, even though it does not concern the fitness norm, at least not directly.  The case concerns the Implied Warranty of Merchantability, which is asserted in Uniform Commercial Code §2-314(2)(c).  Under that provision, a seller warrants that the goods are fit for the ordinary purpose for which such goods are used.[36]  The task before the court was to determine fitness. 

Lindy Homes used electrogalvanized sixpenny casing nails in cedar plywood siding.  Electrogalvanized nails rust when used in cedar; nails galvanized by a different process—“hot-dipped”—are far more rust-resistant, and the standard practice in the construction industry is to use hot-dipped nails in cedar.[37]  When the electrogalvanized nails rusted, Lindy Homes sued the seller, Evans Supply, for breach of the implied warranty of merchantability.  The court held that the electrogalvanized nails were fit for the ordinary use made of them, a use that did not include their use in cedar.  The court relied on the industry-wide normative judgment that it was “common knowledge in the trade that galvanized casing nails should not be used in exterior siding because a . . . ‘hot-dipped’ galvanized nail is proper in such a condition.”[38]

We have not argued that the “fitness” norm is value-optimal, but we take the point to be sufficiently plausible that we may, for purposes of illustration, assume it is.  We make the same assumption about the next two examples.   

           

B.  The “Negligent Design/Manufacture” Norm 

The norm is that buyers demand products that do not, as a result of negligent design or manufacture, impose an unreasonable risk of loss on buyers who use the product as intended.  The relevant regularity exists:  buyers demand such products;[39] moreover, buyers think they ought conditionally to demand such products.  The argument is essentially the same as in the case of the “fitness” norm.  A buyer who had an unusual use for a particular product might not care whether the intended uses of the product imposed an unreasonable risk of loss; however, as long as everyone else conforms, such a buyer will think he or she ought to conform.  Non-conformity would mean unilaterally demanding a norm-deviant product; the demand would go unfulfilled, and the buyer would forego the purchase of the product.  To the extent that going without is unacceptable, such a buyer will think he or she ought conditionally to conform.  As with the fitness norm, if enough buyers were interested in purchasing “unreasonably risky” products for an alternate use, sellers would begin to offer them (other things being equal), and a new “negligent design/manufacture” norm would develop to govern those sales; products not “unreasonably risky” under the new norm might still be “unreasonably risky” for the range of uses governed by the old norm. 

Applying a “negligent design/manufacture” norm requires making two context-sensitive, fact-specific judgments:  one about unreasonable risk, and one about negligent design or manufacture.  In the Matter of Sony BMG Music Entertainment[40] is an excellent illustration.  Part of its merit is that it concerns software.  The example illustrates that the norms we discuss in this section do indeed govern some aspects of software; our claim, which we defend in Section IV, is that the norms do not apply to risks arising from software vulnerabilities.[41]

Between 2003 and 2005, Sony BMG Music Entertainment sold over 14 million music CDs containing one of two copy protection programs—XCP or MediaMax.  The programs allowed users to make only three physical copies of the CD; limited the ability to transfer files from the CD to other devices (including the iPod); allowed Sony to monitor users’ listening habits; and were extremely difficult to uninstall.[42]  Buyers were not given adequate notice of these aspects of the software.[43]  Thus, using the CDs imposed the following largely undisclosed risks:  interference with plans to make more than three copies and with plans to play files on other devices, and the invasion of privacy through the monitoring of buyers’ listening habits.  Buyers found the risks unreasonable:  

Once the public became aware of the [risks] . . . CDs distributed with [the software] . . . experienced a steep drop-off in sales within some market segments . . . In addition, Sony BMG spent millions to settle the steady stream of lawsuits arising out of the . . . incident. Less quantifiably, the resulting backlash from artists and customers significantly damaged the reputations of Sony BMG and its parent corporations.[44]

 

The unreasonableness judgments are fact-specific, context-sensitive judgments about the number of times it is reasonable to expect to copy music from a CD to other devices, about what sorts of devices it is reasonable to copy to, and about the legitimacy of monitoring music listening habits. 

Fact-specific, context-sensitive judgments are also the basis of the determination that Sony’s actions were negligent.  It is a standard practice in the music CD business to conduct a pre-release review of copy protection software to determine whether it works acceptably.  Sony BMG certainly had the resources to conduct such a review.[45]  If it did so, it did so negligently; it should have discovered the flaws in the software.[46]  Sony might instead have relied on the expertise of its suppliers, First4Internet (XCP) and SunnComm (MediaMax), but such reliance would clearly have been culpable.  First4Internet’s expertise was in content filtering technology, particularly the recognition of pornographic images; it had virtually no experience in copy protection technology.[47]  SunnComm was no better.  It began as a provider of Elvis impersonation services, and it displayed its lack of business savvy and technological insight by purchasing a 3.5” floppy disk factory in 2001.  It had virtually no relevant experience with copy protection software prior to entering the contract with Sony.[48] 

 

C.  The “Best Loss-Avoider” Norm

The norm is that, other things being equal, buyers demand products that assign the risk of a loss to the party that can most cost-effectively prevent or remedy the loss—the best loss-avoider.  Car buyers rather than sellers, for example, bear losses from failure to change the oil sufficiently often, since the buyers are in possession of the car and are the ones who can most easily keep track of mileage.  Another example:  in the case of refrigerators, sellers are liable for defects in the motor while buyers are liable for wear and tear on the shelves and doors.  The best loss-avoider is the seller in regard to motor defects because it has more expertise and benefits from economies of scale; the buyer, on the other hand, is the best loss-avoider in regard to damage to the doors and shelves since the buyer may avoid such damage simply by careful use.[49] 

To see that the best loss-avoider norm really is a norm, consider that allocating risks to the best loss-avoider yields a net savings overall.  Widely shared values dictate that, other things being equal, one should realize such savings when one can.  Thus, everyone thinks he or she ought conditionally to conform.  Non-conformity would mean unilaterally demanding something else; the demand would go unfulfilled, and the buyer would forego the purchase of the product.  Thus, the requirements for the existence of a coordination norm are fulfilled.  The required regularity exists—buyers demand products in which the risks of use are allocated to the best loss-avoider; moreover, the regularity exists because buyers think they ought conditionally to conform. 

One applies the best loss-avoider norm in light of fact-specific, context-sensitive judgments:  applying it requires making tradeoffs between the best loss-avoider bearing losses and potentially conflicting goals.  The reason is that, under the norm, the best loss-avoider bears relevant losses other things being equal.  “Other things” are not “equal” when imposing losses on the best loss-avoider unacceptably conflicts with other goals.[50]  The norm assigns a risk of loss to the best loss-avoider when and only when there are no unacceptable conflicts with other goals. 

 

          D. Norm-Implemented Tradeoffs

In each of the above examples, the norm implements tradeoffs among competing goals.  Norm-conforming sellers must make tradeoffs because the greater the seller’s investment of time, effort, and money in creating norm-conforming products, the less is available for pursuing other goals.  The tradeoff for norm-conforming buyers comes from bearing the risks of using norm-conforming products.  They must invest in precautions to avoid those losses and spend the time, effort, and money to recover from losses they fail to avoid.  The more they invest, the less they have for other pursuits.

 

E.  A New Definition of Value-Optimal Norms

          These points about tradeoffs allow us, in the case of product-risk norms, to replace our earlier, general definition of value-optimality with one that is equivalent but more informative.  The earlier definition was that a norm is value-optimal when and only when it is at least as well justified as any alternative.  In the case of product-risk norms, we can replace this general criterion with the following:  a product-risk norm is value-optimal when and only when the tradeoffs it implements are at least as well justified as any alternative.  We argue in Section IV that the norm governing software sales is not value-optimal because there is an alternative norm that implements a better justified tradeoff. 


We turn now to the question of why—and in what sense—norm-compliant sales ensure only acceptable risks.  We begin with the “in what sense” part of the question.  What do we mean by “acceptable”?  

 

III. Acceptable Risk and Ideal Transaction Conditions

          We define “acceptable” as follows:  a product-risk norm ensures that the design and manufacture of a product impose only acceptable risks when and only when the norm is value-optimal.  To see the rationale, suppose you had a choice among various norms.  How would you choose?  You would choose the norm (or, in the case of ties, one of the norms) best justified in light of your values.  Our definition of “acceptable” simply acknowledges this fact.  But, one may rightly object, what if there are not enough value-optimal norms?  Product-risk norms are paired with particular risks; the “fitness” norm, for example, addresses the risk of using unfit products but not the risk of using a negligently designed or manufactured product; the “negligent design/manufacture” norm addresses that risk.  Product-risk norms cannot ensure acceptable risks when there are significant risks that are not addressed by at least one value-optimal product-risk norm. 

Our solution is to introduce the first of the two assumptions characterizing ideal transaction conditions.  The first is that there is no significant risk that is not governed by at least one value-optimal product-risk norm.  Call this norm completeness.[52]  Norm completeness defines an ideal that practice only approximates.  Practice tends to approximate norm completeness because sellers and buyers have exchanged products for centuries, and, over the years, relevant value-optimal norms have evolved.  In Section IV, we argue that software sales are an aberration that falls unacceptably short of the ideal of norm completeness. 

Norm completeness guarantees that enough product-risk norms exist but it does not guarantee that sellers will conform to the norms.  Indeed, product-risk norms would appear to make buyers an easy target for exploitation.  Norm-conforming buyers typically do not investigate products in any detail; they simply take it for granted that products do not impose any unacceptable risks as a result of their design and manufacture.  So why won’t sellers exploit that fact to sell products that do impose such risks when doing so maximizes profits?   

Our answer is to introduce the second assumption characterizing ideal transaction conditions, the assumption of a perfectly norm-competitive market (discussed below).  When both assumptions hold, the profit-maximizing strategy is for sellers to conform to product-risk norms.  It is the profit-maximizing strategy in practice to the extent that practice approximates ideal transaction conditions.  Our argument adapts a well-known law and economics argument.[53]  We begin with a summary of the argument:  (1) whenever a business violates a norm, at least some consumers will notice; (2) consumers who detect a norm violation will not, other things being equal, buy from norm-inconsistent businesses; (3) businesses are unable to discriminate between consumers who will, and those who will not, detect a norm inconsistency; therefore, in a perfectly norm-competitive market, (4) the profit-maximizing strategy is for businesses to conform to norms.

 

A.  Detecting Norm Violations

It is quite unlikely that norm-inconsistent products will escape the notice of every buyer.  Awareness of norm-inconsistent products can come from news reports, magazine articles, books, consumer watch-dog groups, negative publicity from consumer complaints, and litigation.[54]  This is not to make any claim about how many buyers detect norm-violations.  It is the second assumption, formulated later, that includes such a claim. 

 

B.  Norm-Violation Detectors versus Norm-Inconsistent Sellers

When buyers detect norm-inconsistent sellers, they will not—other things being equal—buy from them.  Consider that a norm is a regularity to which one thinks one ought to conform.  Norm-violation detectors will, therefore, perceive a norm-inconsistent seller as not treating them as they ought to be treated.  Other things being equal, buyers will purchase from sellers they perceive as treating them as they ought to be treated, not from those whom they perceive as not doing so.[55] 

 

C.  Sellers’ Inability to Discriminate

If sellers could reliably discriminate between buyers who will, and those who will not, detect a norm-inconsistency, they could remain norm-consistent in the case of inconsistency-detectors but violate norms for the rest.  Sellers can in some cases spot those buyers that are likely to detect violations of norms.  They can easily identify repeat customers who have objected to violations in the past, and it would not take too much research to identify a customer as, for example, the President of a consumer protection group like Consumer Reports.  Such cases aside, when you walk into a retail store or order an item over the phone or online, nothing reliably signals whether you will detect norm-inconsistent behavior.[56] 

 

D.  The Profit-Maximizing Strategy

The final claim is that when sellers cannot discriminate between those who do and those who do not detect norm-inconsistencies, then, in a perfectly norm-competitive market, the profit-maximizing strategy is to conform to product-risk norms; hence, rational, profit-motive driven sellers will do so.  The assumption of a perfectly norm-competitive market is the second idealizing assumption.  In Section VI, we consider the extent to which software markets approximate this ideal.   

When is a market perfectly norm-competitive?  Two conditions must hold.  The first is the standard economic notion of perfect competition.[57]  Competition is perfect when and only when six conditions hold:  (1) the market contains a large number of independently acting (non-colluding) sellers and consumers, (2) no one of whom can unilaterally control the features a product has; (3) sellers sell homogeneous products (4) in a market in which competitors may costlessly enter and leave, and (5) in which consumers can costlessly switch from one seller to another; and (6) sellers know buyers’ preferences for various combinations of price and quality, and buyers know the price/quality combinations sellers offer.  

The second condition adds to the knowledge specified in (6).  To formulate the condition, recall the point made above:  buyers will, other things being equal, not buy from a seller who violates product-risk norms.  The second condition is that there are enough buyers who know when norm violations occur.  More precisely:  there are enough norm-violation-detecting buyers that a seller’s gain from norm-inconsistent behavior is smaller than the loss which results when norm-violation detectors buy from norm-consistent sellers instead.  We will need a name for this requirement.  Call it the sufficient detection requirement. 
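One simple way to make the requirement precise (the formalization is ours; the text states the requirement only in prose):  let g be a seller’s total gain from a given norm violation, let d be the number of buyers who detect the violation, and let p be the profit the seller loses when one detecting buyer takes his or her business to a norm-consistent seller.  Sufficient detection holds when g < d × p, that is, when the gain from the violation is outweighed by the sales lost to detectors.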

          Together, perfect competition and sufficient detection entail that the profit-maximizing strategy is to be a norm-consistent seller.  Perfect competition ensures that every norm-violation-detecting buyer will buy from norm-consistent sellers, if at least one such seller exists.  Sufficient detection ensures that there are enough norm-violation-detecting buyers that norm-inconsistent sellers lose more than they gain.  Thus the profit-maximizing strategy is to be a norm-consistent seller; hence, rational, profit-motive driven sellers will be norm-consistent.[58] 

 

          E.  Summary of the Product-Risk Norms Model

          The model makes two idealizations.  The first is the assumption of norm completeness; the second, the assumption of a perfectly norm-competitive market.  Norm completeness ensures that every purchase is governed by value-optimal product-risk norms; perfect norm-competitiveness ensures that rational, profit-motive driven sellers conform to the norms.  When both assumptions hold, product sales are governed by norms that implement acceptable tradeoffs, tradeoffs to which buyers give free and informed consent.  The assumptions define an ideal that is only approximated in practice.  The closer practice comes to the ideal, the more product sales involve acceptable tradeoffs to which buyers consent. 

We argue next that software sales fall unacceptably far short of this ideal.      

 

IV. Applying the Model to Software Vulnerabilities

In this section, we focus exclusively on the failure to approximate norm-completeness.  We consider perfect norm-competitiveness in Section VI.  Norm-completeness requires that every significant risk be allocated by at least one value-optimal norm.  There are two ways to fail to meet this requirement.  Norms may not exist, or existing norms may not be value-optimal.  We claim that software sales exhibit the latter sort of failure:  sales are governed by a norm that is not value-optimal.  Our argument divides into three parts.  We first identify the norm.  We then explain why, appearances to the contrary, the earlier examples of product-risk norms do not apply.  Finally, we argue that the norm is not value-optimal.

 

A.  The “Vulnerability-Ridden” Norm

The norm is that buyers demand vulnerability-ridden software.  The required regularity obtains.  Buyers do demand such software.  It is commonplace to complain that buyers are unwilling to pay a premium for more secure software; they demand quick-to-market, cheap, vulnerability-ridden software.[59]  The explanation of the existence of the regularity is that buyers think that they ought conditionally to demand such software.  Thus, the conditions for the existence of a demand-unifying product-risk norm are fulfilled:  buyers regularly demand vulnerability-ridden software, and they do so at least in part because they think they ought conditionally to do so. To see that buyers think they ought conditionally to demand vulnerability-ridden software, divide buyers into three groups:  buyers ignorant of the relevant risks; buyers who are aware of the risks but underestimate them; and, those who are aware of the risks and accurately estimate them.  In each group, buyers think they ought conditionally to demand vulnerability-ridden software, but they think so for different reasons.    

Group one:  ignorance.  Many buyers lack relevant information; security experts, consumer advocates, and those who make or seek to influence public policy may understand the risks involved in using vulnerability-ridden software, but many users have at best a minimal understanding.[60]  Since they are unaware of the risks, the buyers do not see why they should pay a premium for more secure software; hence, they think they ought conditionally to demand quick-to-market, cheap, vulnerability-ridden software.  The “ought” is conditional because a buyer would change his or her mind if all other buyers demanded secure software.  An isolated demand for vulnerability-ridden software would go unmet. 

Group two:  underestimation.  Buyers may be aware of the risks but underestimate them.  "An amazingly robust finding about human actors . . . is that people are often unrealistically optimistic about the probability that bad things will happen to them."[61]  Like buyers who are simply unaware of the risk, risk-underestimating buyers do not see why they should pay more for secure software and thus think they ought conditionally to demand quick-to-market, cheap, vulnerability-ridden software.       

Group three: compelling reason.  Even when buyers correctly estimate the risks, they will still think they ought conditionally to demand vulnerability-ridden software.  Imagine Alice deciding whether to use the Adobe Acrobat Reader; she is well aware that the Reader has significant vulnerabilities,[62] but, given the “vulnerability-ridden” norm, she has only two options:  use the Reader, or not.  There is no third option of unilaterally demanding and receiving a less vulnerable Reader.  She will think she ought conditionally to use the Reader as long as she is confident that she can take reasonable precautions to protect herself from unauthorized access.  She realizes that, to the extent she transmits .pdf files to others who may not exercise the care she does, she imposes on them risks of unauthorized access by giving them yet one more occasion to use the Reader.  But such third-party risks have virtually no impact on her decision; given the extremely widespread use of the Reader, her decision not to use it would yield only an infinitesimal reduction in the risks to others.[63]

 

B.  Why Not Fitness, Negligent Design/Manufacture, and Best Loss-Avoider?

 

But what about the three product-risk norms discussed earlier—fitness, negligent design/manufacture, and best loss-avoider?  Why don’t they apply to software sales?  Take the fitness norm first.

How can vulnerability-ridden software be fit?  To see why it is fit, consider that fitness is determined, not by the opinion of software experts, but by contextually-sensitive judgments of software buyers.  Software sales violate the norm only if those judgments classify the software as unfit.  Software buyers share no such judgment.  They demand quick-to-market, cheap, vulnerability-ridden software.  One may rightly object that buyers who correctly assess the risks of using vulnerability-ridden software may regard such software as unfit.  However, even if they do, their judgment is, so to speak, inert.  Correctly-risk-assessing buyers still think they ought conditionally to conform to the “demand vulnerability-ridden software” norm and hence conform to the norm.  That is the norm that governs, not the “fitness” norm—despite any judgment of unfitness that correctly-risk-assessing buyers may make.

Essentially the same points hold for the “negligent design/manufacture” norm.  It may appear to apply because some vulnerabilities are clearly the result of negligent design.[64]  Software “buffer overflows” are a good example.  A buffer is a temporary storage area in a computer’s memory that a program uses to hold information, typically before passing it along for further processing.  Programmers can take effective steps to ensure that, before a program stores information in a buffer, it checks that the capacity of the buffer is large enough to contain the information.  To fail to do so is to create a buffer overflow vulnerability, which an attacker can exploit to take over a computer and make it run programs the attacker has written.[65]  The consensus of software development experts is that, in a wide range of cases at least, it is negligent to create a buffer overflow vulnerability.[66]  As the computer security organization SANS notes, “the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.”[67] 
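To make the flaw concrete, here is a minimal C sketch—hypothetical code of our own, not drawn from any actual product—of the unchecked copy that creates the vulnerability:

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 16   /* room for 15 characters plus the terminating '\0' */

    /* Vulnerable: copies caller-supplied input into a fixed-size buffer
       without checking whether it fits.  Input longer than 15 characters
       overruns buf and overwrites adjacent memory -- the classic buffer
       overflow. */
    void greet_unchecked(const char *input) {
        char buf[BUF_SIZE];
        strcpy(buf, input);          /* no bounds check: this is the flaw */
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_unchecked("Alice");    /* fits in the buffer: works as intended */
        /* greet_unchecked("a string much longer than sixteen characters")
           would write past the end of buf -- the undefined behavior an
           attacker exploits to run code of the attacker's choosing */
        return 0;
    }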

Agreement on such instances of negligence is not, however, sufficient to show that software sales violate the negligent design/manufacture norm.  The norm is that sellers do not offer products that, as a result of negligent design or manufacture, impose an unreasonable risk of loss on buyers using the product as intended.  Unreasonableness is determined by shared judgments that allocate risks of loss between sellers and buyers; thus, to claim that vulnerability-ridden software imposes unreasonable risks is to claim that there is a shared judgment that it does so.  Software buyers share no such judgment.  The argument is exactly parallel to the argument in the case of the “fitness” norm.  Buyers prefer quick-to-market, cheap, vulnerability-ridden software.  Correctly-risk-assessing buyers may regard the risks as unreasonable, but it does not matter whether they do or not.  They still think they ought conditionally to conform to the “vulnerability-ridden” norm. 

The case for thinking software sales violate the best loss-avoider norm may, at first sight, seem considerably stronger, for, as we will argue shortly, software developers are the best loss-avoiders for a wide range of losses arising from software vulnerabilities.  It does not follow, however, that sales of vulnerability-ridden software violate the best loss-avoider norm.  One applies the best loss-avoider norm in light of shared normative judgments that allocate the burden of avoiding the risk of loss.  Thus, to claim that vulnerability-ridden software violates the best loss-avoider norm is to claim that shared judgments place that burden on software developers in a significant range of cases.  As we argue below in Section IV, C, the opposite is true.  Buyers demand quick-to-market, cheap, vulnerability-ridden software.  It does not matter whether correctly-risk-assessing buyers judge software developers to be the best loss-avoiders in some cases; they conform to the “demand vulnerability-ridden software” norm anyway.

Arguing that software sales do not violate the three product-risk norms given as examples does not show that sales do not violate other product-risk norms; however, the argument generalizes.  Consider any product-risk norm that purportedly assigns the risk of at least some vulnerabilities to software developers.  That claim will be inconsistent with the fact that buyers demand quick-to-market, cheap, vulnerability-ridden software.   

 

C.  The “Vulnerability-Ridden” Norm Is Not Value-Optimal

A product-risk norm is value-optimal when and only when the tradeoffs it implements are at least as well justified as the tradeoffs implemented by any alternative norm.  The “vulnerability-ridden” norm is not value-optimal because there is a better justified alternative.  Under the current norm, buyers bear the risk of loss from unauthorized access resulting from vulnerabilities; the better justified option shifts a good part of that risk onto software developers.   

          The existence of a consensus on this point may seem surprising.  It is difficult to obtain reliable data concerning losses, even readily quantifiable losses such as the time, effort, and money involved in detecting unauthorized access, diagnosing its effects, and removing malware that may have been installed, and the productivity lost to network malfunctions.  This difficulty does not, however, prevent widespread agreement that the cost of unauthorized access runs in the billions of dollars a year.[68]  While not all of these losses can be traced back to software vulnerabilities, vulnerabilities are nonetheless a significant factor,[69] and the consensus is that the cost of improving software development procedures to an extent that would significantly reduce vulnerabilities would be considerably less than the aggregate cost of unauthorized access mediated by vulnerabilities.[70]  Software developers are—to a considerable extent—the best loss-avoider with regard to a wide range of vulnerabilities.   

This conclusion is reinforced by considering losses that resist quantification—primarily invasion of privacy, loss of trust, and anxiety from a sense of increased risk.[71]  The assessment of such losses is a matter of making normative judgments about the desirability of competing policy goals—in particular, the goals served by keeping software costs down versus the value of trust, privacy, and a reduced sense of risk.  To the extent one thinks a reduction in non-quantitative losses is worth an increase in software development costs, one has an additional reason to regard software developers as the best loss-avoiders over a wide range of cases.

We conclude that the “vulnerability-ridden” norm is not value-optimal.  The solution is to replace that norm with a value-optimal norm.  But what exactly should the alternative norm be?  The more software developers must invest to create norm-conforming software, the less is left over for other important goals, including promoting software innovation;[72] promoting the development of open source software;[73] and ensuring sufficient competitiveness among software sellers.[74]  The less developers invest, the greater the risk of loss from unauthorized access and hence the greater the investment buyers must make in avoiding those losses or recovering from them when they occur.  The more buyers invest, the less they have for the wide variety of other goals they pursue.  A value-optimal norm must define a best justified tradeoff among the competing goals.    

 

V. Best Practices and Best-practices norms

We claim that the norm should be that buyers demand software developed following best practices.  One immediate difficulty is that “[b]est practices has become an overused, underdeveloped catchphrase employed by industries and professions to signal an often unsubstantiated superiority in a given field.”[75]  Accordingly, our first step is to explain what we mean by “best practices.”  We then argue for the “buyers demand best practices software” norm.     

 

A.  Best Practices Defined

A best practice in a particular industry is a practice (method, process, or system) meeting two conditions.  The first condition consists of two parts.  Part one:  with regard to one or more goals, there must be widespread agreement that it is highly desirable that those goals be achieved.[76]  Call these goals the best practice goals.  Part two:  there must be widespread agreement that following the practices is a sufficiently reliable, sufficiently detailed means of meeting the best practice goals. 

An example is helpful.  In the United Kingdom, the Electrical Safety Council promulgates best practices for electrical wiring.[77]  The Council offers “a series of Best Practice Guides in association with leading industry bodies for the benefit of electrical contractors and installers, and their customers.”[78]  The best practice goal is adequate safety, a goal widely regarded as highly desirable;[79] further, there is also widespread agreement that following the practices is a sufficiently reliable way to achieve that goal.  The practices contain specific, detailed requirements for testing and installation.  The best practices for electrical wiring, for example, require that the electrician determine whether the insulation resistance in electrical circuits is at least one megaohm; if not, equipment on that circuit must be disconnected, or 30 mA RCD protection must be installed.[80]  Even a cursory survey of best practices reveals that they typically provide quite detailed advice.  We defer our explanations of “sufficiently reliable” and “sufficiently detailed” to the discussion of the second condition.

          To formulate the second condition, note that practices meeting the first condition implement tradeoffs between the best practice goal and a variety of other competing goals.[81]  The reason is that following best practices typically requires an increased investment of time, effort, and money.  Conforming to electrical wiring best practices, for example, requires various inspections and the installation of hardware upgrades.  This increases the cost of maintaining buildings, and the increased cost entails tradeoffs between safety and other goals.  Increased wiring costs can, for example, affect the availability of low cost rentals. 

          Best practices for pharmaceutical company staffing and expenditure are another example.  The tradeoff is between costs and discovering, developing, and distributing high-quality drugs at reasonable prices.  This fact forms a key selling point for Best Practices, a company that licenses access to a database of best practices:

By finding the optimal level of staffing and spending to achieve efficiency and effectiveness, companies can save money while maintaining a high-value medical affairs function [discovering, developing, and distributing high-quality drugs at reasonable prices]. Medical affairs leaders can use the information in this . . . document to learn how top companies find the optimal level of staffing and spending to achieve both efficiency and effectiveness in executing the mission of medical affairs.[82] 

 

To trade costs against health care is of course to trade costs against the vast number of concerns and goals affected by the quality and availability of health care.  A large number of similar examples can be found in the Best Practices Database in Improving the Living Environment.[83]  The database provides access to “the practical ways in which public, private and civil society sectors are working together to improve governance, eradicate poverty, provide access to shelter, land and basic services, protect the environment and support economic development.”[84]

          Now we can state the second condition:  the tradeoffs implemented by following the practices are at least as well justified as any alternative.  This is what makes the practices best practices.  One cannot improve the tradeoffs by switching to alternative practices.  Discussions of best practices do not explicitly offer this “at least as well justified” gloss on what makes best practices best.  In its discussion of pharmaceutical best practices, the company Best Practices characterizes best practices as “optimal”; such practices yield “the optimal level of staffing and spending.”  Another common characterization is “best in class”—thus:  a company adopts best practices by “measuring . . . functions, processes, activities, products, or services against those of [its] competitors and improving . . .  [to match] the best-in-class.”[85]  To be optimal or best-in-class is, however, surely to be at least as well justified as any alternative.  Whatever the language used, we take it to be clear that a best practice is one that is at least as well justified as any alternative; if there is a better justified alternative, the practice can hardly be best.

          The “at least as well justified” requirement explains why we do not require best practices to be the most reliable way to achieve the consensus goals.  The best justified tradeoffs may sacrifice some reliability in the name of furthering other goals.  The requirement also explains why—and in what sense—best practices are sufficiently detailed methods.  Recall the Electrical Safety Council requirement that 30 mA RCD protection must be installed in electrical circuits with less than one megaohm of resistance.  This requirement allows one to compare Electrical Safety Council practices to practices that require different combinations of cost and protection against electrical shock.  In general, where one has a variety of competing goals, one will want to compare various tradeoffs among those goals to determine which tradeoffs are the best justified.  Best practices must be sufficiently detailed to allow one to make those comparisons.  

         

          B.  Summary of the Argument for the Best-practices norm

We begin with a summary of the argument.  (1) Best practices for software development exist, and software developers would significantly reduce vulnerabilities if they followed them.  (2) Best practices make tradeoffs among competing goals, where the tradeoffs are at least as well justified as the tradeoffs implemented by alternative practices.  Therefore (3), a “buyers demand best practices software” norm would be a value-optimal norm whose implementation would significantly reduce vulnerabilities. 

Premise (2) follows from our discussion of best practices.  As that discussion shows, best practices for software development—assuming they exist—make tradeoffs among relevant competing goals that are at least as well justified as alternatives.  Relevant goals include, as we noted earlier, promoting innovation; promoting the development of open source software; and ensuring sufficient competitiveness.[86]  Given (2), the conclusion in (3) follows since a product-risk norm is value-optimal provided the tradeoffs it implements are at least as well justified as any alternative tradeoffs.  The only question then is whether best practices exist for software development.  One could reasonably think that they do not.  It is, after all, routine to observe, as the leading security expert Eugene Spafford does, that software “is usually produced using error-prone tools and methods, including inadequate testing.”[87]  Such practices can hardly qualify as best.  Our view is that this does not show that best practices do not exist; it shows that existing best practices are not followed.

 

C.  Best Practices for Software Development

          We begin with a specific example.  It is a best practice to ensure that, before a program stores information in a buffer, it first checks to see if the amount of information is greater than the capacity of the buffer.  Failing to do so creates a buffer overflow vulnerability.  The practice meets the two requirements for being a best practice.  The first requirement is that there must be widespread agreement that it is highly desirable to realize a certain goal, and there must be widespread agreement that following the practice is a sufficiently reliable, sufficiently detailed means of meeting that goal.  As we noted earlier, there is consensus on the goal of reducing vulnerabilities by requiring a greater investment in software development.  The consensus is, first, that an increased investment in software development would, over a significant range of cases, reduce the number of vulnerabilities in software and hence the losses from unauthorized access.[88]  There is also widespread agreement that the practice—ensuring that the amount of information to be stored does not exceed the capacity of the buffer—avoids buffer overflow vulnerabilities.  The second requirement is that the tradeoffs the practice implements must be at least as well justified as any alternative.  That seems clearly true:  the consensus is that the time, effort, and money needed to ensure that the amount of information to be stored does not exceed the capacity of the buffer is far less than the losses thereby avoided.[89] 
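In code, the practice is the bounds check that the earlier sketch omitted.  Again, the following C fragment is hypothetical code of our own, offered only to show how little the check costs:

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 16

    /* Best-practice counterpart to the earlier sketch: verify that the
       input fits in the buffer before storing it, and reject it if not. */
    int greet_checked(const char *input) {
        char buf[BUF_SIZE];
        if (strlen(input) >= BUF_SIZE) {   /* the check that prevents overflow */
            fprintf(stderr, "Input too long; rejected.\n");
            return -1;
        }
        strcpy(buf, input);                /* safe: length verified above */
        printf("Hello, %s\n", buf);
        return 0;
    }

    int main(void) {
        greet_checked("Alice");                                        /* accepted */
        greet_checked("a string much longer than sixteen characters"); /* rejected */
        return 0;
    }

The fix is a single comparison; the tradeoff it implements is a few moments of programmer time against the losses an exploitable overflow can cause.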

There are many such examples.  A vulnerability is just a particular type of defect, similar in principle to any other software defect, such as giving the wrong answer or crashing, and the same high-level picture holds both for software defects in general and for software vulnerabilities in particular:  how many there are depends very much on the design and programming practices used.  There is widespread agreement that one should ensure adequate overall management of the creation of the software, from first deciding what the behavior of the software should be, through designing it, writing it, and especially testing it.  There is also widespread agreement on how to design and write the actual computer programs (“code” in the language of programmers) that collectively are the software.  This includes, for example, such matters as the choice of appropriate data structures and algorithms, structuring the flow of control well, obeying abstraction barriers, and breaking the overall software into pieces of appropriate size.  The techniques for developing sufficiently defect-free software are collectively known as software engineering.

How to write individual computer programs well and the basics of software engineering are fairly well-settled subjects,[90] and should be known by competent software developers.  For example, one can find many aphorisms summarizing these principles in a handbook of software engineering.[91]  More importantly, the basics of how to construct good quality code and the basics of software engineering form a significant fraction of the core (required) portion of the model computer science bachelor’s degree curriculum jointly published by the two main professional societies for computer science in 2001.[92]  Furthermore, most of that same material was also found in the earlier 1978 and 1991 versions of that model undergraduate curriculum, though of course some important details have changed as the field has evolved.  Writing secure software also requires some additional knowledge.  Some minimal training in writing secure software is a standard part of today’s undergraduate curriculum for computer science majors,[93] but was not so common a decade ago.

In general, a great deal is known about what sorts of software development practices lead to fewer software defects, and what sorts lead to more.  One particular area of software engineering that has seen real progress in the past twenty years or so is testing.[94]  There is a whole host of automated techniques for testing whether software under development contains errors, and the use of these techniques significantly lowers the defect rate in the final product; failure to use the newer testing techniques leads to higher defect rates.  It is common wisdom among experts in software development that proper attention to all the issues we have mentioned leads to lower defect rates, and various studies over the years back up this common wisdom.[95]
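To illustrate the kind of automated check involved—again with hypothetical code of our own, not any product’s actual test suite—here is a minimal C test that exercises a bounds-checking helper at exactly the boundary values programmers most often get wrong:

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Function under test: reports whether the input fits in a buffer of
       the given size, leaving room for the terminating '\0'. */
    int fits_in_buffer(const char *input, size_t buf_size) {
        return strlen(input) < buf_size;
    }

    /* A minimal automated test: assertions that run on every build.
       Real test suites automate thousands of such checks. */
    int main(void) {
        assert(fits_in_buffer("Alice", 16) == 1);            /* ordinary case  */
        assert(fits_in_buffer("exactly15chars!", 16) == 1);  /* boundary: fits */
        assert(fits_in_buffer("exactly16chars!!", 16) == 0); /* boundary: full */
        assert(fits_in_buffer("", 1) == 1);                  /* empty string   */
        printf("All tests passed.\n");
        return 0;
    }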

In sum, there are software development practices that meet the conditions for being best practices.[96]  First, there is a goal—reducing the number of vulnerabilities—and there is widespread agreement that the goal is desirable and that following the practices is a sufficiently detailed, reliable way to achieve it.  Second, there is widespread agreement, at least to some extent, that the tradeoffs implemented by following the practices are at least as well justified as any alternative.  The “at least to some extent” qualification acknowledges that there is some indeterminacy here.  It is clear that the tradeoffs involved in following certain best practices—such as checking on adequate buffer size—are at least as well justified as any alternative, and it is clear in general that one or more combinations of the practices discussed above implement tradeoffs that are at least as well justified as any alternative.  But it is not clear what those combinations are.  Exactly what tradeoffs among the various competing goals are best justified is unclear; there are competing arguments for weighting various goals in various ways.[97]   

          In evaluating such tradeoffs, it is important to bear in mind an often overlooked limit on what best practices can achieve.  Software is different from other engineered products in that sufficiently complex software inevitably has programming flaws.[98]  In contrast, design flaws are not inevitable in, for example, refrigerators, batteries, and bridges even when they exhibit considerable complexity.  Software alone combines complexity and inevitable flaws.  Thus, no matter how much one invests in development procedures designed to reduce programming flaws, flaws—and perhaps vulnerabilities—will remain.  There are two reasons.

First, most of engineering is governed by continuous mathematics, whereas software is governed by discrete mathematics.[99]  Continuous mathematics includes the mathematics of the real numbers, which describe the physics of motion and electricity.  Discrete mathematics includes the mathematics of the integers and of strings of letters.  For our purposes, the heart of continuous mathematics is the notion of a continuous function.  The definition of a continuous function is typically given in calculus classes using Greek deltas and epsilons, but what a continuous function means to an engineer is that if, in a continuous system, you make a very small error in one of your inputs, the error in the behavior of your system must also be small.  The discrete mathematics that governs software offers no such guarantee.  An error in a single line of a million-line program can cause arbitrarily large errors.  The second thing that makes software different from other engineered entities is that there is no way to “over-engineer” for safety in designing software as one can in designing many physical systems.[100]  For example, if one wants to design a building to withstand 140 mile per hour winds, one can do the calculations about the necessary material strength, thickness, etc., to withstand 150 mile per hour winds, and then build according to those calculations, thus creating an extra margin of safety.  There are analogous things to do in many engineering situations, but not in the construction of software.
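For readers who want the “small input error, small output error” gloss made precise, the standard textbook definition can be stated in a line.  The statement is the usual one from calculus, nothing special to software:

\[
f \text{ is continuous at } a \iff \forall \varepsilon > 0 \ \exists \delta > 0 \text{ such that } |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon .
\]

That is, any desired bound ε on the output error can be guaranteed by keeping the input error below some δ.  Discrete systems such as programs offer no analogous guarantee:  an arbitrarily small change in the input, or in the program text, can produce an arbitrarily large change in the output.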

 

          D. Developers Do Not Follow Best Practices

          Software developers do not follow the practices.  As we noted earlier, software “is usually produced using error-prone tools and methods, including inadequate testing.”[101]  Creating buffer overflow vulnerabilities is a clear violation of best practices, but the vulnerability is still a common occurrence[102] and still ranks third on the SANS Institute’s 2010 list of the top twenty-five most dangerous software errors.[103]  Why don’t software developers conform more closely to best practices?  The answer lies in the behavior of buyers.  Buyers are trapped in a self-perpetuating coordination norm under which they demand vulnerability-ridden software.  In such a case, the profit-maximizing strategy for software developers is to be the first in the market to offer a particular type of software or an upgrade to existing software.  Reducing vulnerabilities by following best practices requires a longer and more costly development process, so software developers avoid those practices.  

 

VI. Conditions for Creating the Norm

The solution is to create a value-optimal best-practices norm governing software sales in a market that sufficiently closely approximates perfect norm-competitiveness.  The more closely the market approximates perfect norm-competitiveness, the more rational, profit-motive driven sellers conform to the norm.  The existence of the value-optimal norm ensures that norm-governed sales implement acceptable tradeoffs, tradeoffs to which buyers give free and informed consent.  We remark in passing that the norm would do more than just reduce the number of vulnerabilities; it would reduce the number of software defects generally.  As we noted earlier, a vulnerability is a type of software defect, and following best practices reduces defects generally.[104]  How can one best ensure that a value-optimal norm operates in a sufficiently norm-competitive market?  We first consider ensuring a sufficiently norm-competitive market and then turn to creating the norm. 

A perfectly norm-competitive market exists when and only when two requirements are fulfilled:  perfect competition, and sufficient detection.  Current markets fall far short of both requirements. 

 

          A. Perfect Competition

The operating system market falls far short of the requirement of multiple sellers.  Microsoft dominates, with relatively small market shares going to Apple and Linux (and in the future possibly to Google’s Chrome operating system).  There are also significant barriers to entry, as operating systems are very costly to develop and adoption is uncertain.[105]  In addition, operating systems are not sufficiently homogeneous; switching from one to the other involves significant costs.[106]  These issues require detailed analysis in the context of antitrust and intellectual property law,[107] and that task lies outside the scope of our efforts here.  In contrast to the operating system market, markets for software applications and utilities may sufficiently approximate perfect competition, but, as with operating system markets, we will put the question aside.  For our purposes, we may assume perfect competition.

 

B. Sufficient Detection 

          Sufficient detection is the requirement that there are enough norm-violation-detecting buyers that a seller’s gain from norm-inconsistent behavior is smaller than the loss that results when norm-violation-detectors buy from norm-consistent sellers.  It may appear that this condition is not fulfilled.  Typical consumers lack the expertise required to distinguish—by inspecting the software—between vulnerability-ridden software and software with significantly fewer vulnerabilities.[108]  This is worrisome because it potentially leads to a lemons market.  We first explain the notion of a lemons market, and then consider whether a lemons market does in fact exist in regard to software vulnerabilities.  

          We explain a lemons market using a version of the “used car” example first employed by the economist George Akerlof in his seminal article, The Market for Lemons.[109]  Suppose a town has 300 used cars for sale:  100 good ones worth $2000, 100 so-so ones worth $1500, and 100 lemons worth $1000.  Buyers cannot tell the difference between a good and a bad car; thus, buying a used car means entering a lottery in which the buyer has a 1/3 chance of getting a good car, a 1/3 chance of getting a so-so car, and a 1/3 chance of getting a lemon.  The expected value of the purchase is $1500.  Rational buyers thus will pay only $1500 for a used car; consequently, owners who value their good cars at over $1500 do not offer those cars for sale.  Thus, the market now contains lemons worth $1000 and so-so cars worth $1500; the expected value of a used car drops to $1250; consequently, owners who value their cars above $1250 do not offer them for sale.  The process continues until only the lemons are left on the market.  In general, a lemons market exists when four conditions are fulfilled.  (1) The products on the market vary significantly in the extent to which they have certain properties (the properties that make a car a lemon, for example), and buyers regard products with the properties in question as having less expected value than those without them;[110] (2) there is an asymmetry of information where buyers cannot discriminate between products with the properties and those without, but sellers can at least partially distinguish them; and furthermore, (3) there is no reliable signal of quality (i.e., sellers with an excellent car have no way to reliably disclose this fact to buyers); however, (4) buyers know there is a mix of products on the market.
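The unraveling is a short expected-value calculation using the numbers of the example.  At the start,

\[
E[V] = \tfrac{1}{3}(\$2000) + \tfrac{1}{3}(\$1500) + \tfrac{1}{3}(\$1000) = \$1500 .
\]

Once the good cars are withdrawn,

\[
E[V] = \tfrac{1}{2}(\$1500) + \tfrac{1}{2}(\$1000) = \$1250 ,
\]

and once the so-so cars are withdrawn as well, only the $1000 lemons remain.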

          Are these four conditions fulfilled for software vulnerabilities?  In answering this question, it is important to distinguish two markets:  the market for security software and systems, such as firewalls, anti-virus software, or secure USB memory sticks; and, the market for other sorts of mass-consumer software.  Bruce Schneier has argued convincingly that the former market is a lemons market.[111]  Others have picked up on his claim and argued that it may also apply to software that is (relatively) secure, that is, software that is relatively free of vulnerabilities.[112]  We are not so sure.  While there are strong arguments that security software is a lemons market, it is unclear whether secure software is a lemons market.  Conditions (2) and (4) are arguably fulfilled, but (1) and (3) are problematic.  We first briefly review the arguments in favor of regarding (2) and (4) as fulfilled.  Condition (2):  Typical consumers do not have the expertise to distinguish, by inspecting the software, between secure and insecure software;[113] the developers, by contrast, do know something about the production practices they are using.  Condition (4):  Buyers—a significant portion of buyers—do know that the market contains both vulnerability-ridden and not-so-vulnerability-ridden software.[114]  

          Condition (1) requires (in part) that buyers regard vulnerability-ridden software as having less perceived expected value than similar software with significantly fewer vulnerabilities.  At the moment, this is not true.  Buyers are, on the whole, not willing to pay more for more secure software.[115]  Our proposal in Section VII is designed to change this, but if that is all it does, it may simply contribute to the creation of a lemons market in software.  Accordingly, our proposal will also suggest a mechanism for avoiding a lemons market. 

          Condition (3) requires that there do not exist any reliable signals that differentiate vulnerability-ridden software from similar software with significantly fewer vulnerabilities.  Typical consumers do not have the expertise to distinguish, by inspecting the software, between vulnerability-ridden software and software with significantly fewer vulnerabilities.  Inspection is not, however, the only way to determine the extent to which software suffers from vulnerabilities.  The general quality of the software is a moderately reliable signal of the extent to which it contains vulnerabilities.  Vulnerabilities are a kind of flaw, or defect, in the software, and it is reasonable to assume that their occurrence correlates with the occurrence of other flaws, such as a tendency to crash or give wrong answers.[116]  Indeed, it is routine not to distinguish sharply between defects and vulnerabilities.  As security experts observe in a recent book, “Software defects are the single most critical weakness in computer systems. . . . [S]oftware defects lead directly to exploit.”[117]  The correlation between vulnerabilities and defects is sufficiently strong that at least some buyers will infer that improperly functioning software is likely to contain significant vulnerabilities.  This signaling mechanism is far from perfect, but sufficient detection does not require that all or most buyers detect vulnerability-ridden software, just that enough do to impose losses on sellers who offer such software.  Thus, there is very possibly a signaling mechanism that is strong enough to prevent a lemons market. 

Our ultimate proposal in Section VII for changing the market to encourage the creation of secure software does not rely solely on this possible signaling mechanism to avoid a potential lemons market.  We argue that the statute we propose, if adequately enforced, will ensure that condition (1) fails to hold.  That condition requires that software products vary significantly in the extent to which they have vulnerabilities, and that buyers regard vulnerability-ridden software as having less perceived expected value than similar software with significantly fewer vulnerabilities.  We explain and motivate the “vary significantly” requirement, and we argue that it will not be fulfilled.  Thus, our proposal avoids the problem of a lemons market.

While showing how to avoid a lemons market is important, it is not the main thrust of our statutory proposal.   Our central claim is that the statute, if adequately enforced, will give rise to a best-practices norm.  We argue in Section VII that, once the norm is in place, the sufficient detection assumption will be more or less true.  There will be, in enough different situations, enough norm-violation-detecting buyers that norm-inconsistent sellers suffer losses.       

 

C.  Creating the Norm

Creating the norm requires ensuring that the conditions for the existence of a coordination norm are fulfilled.  (1) The relevant regularity must obtain:  buyers must regularly demand best practices software; and, (2) the regularity must exist at least in part because buyers think they ought to conform as long as everyone else does.  Assume, for the moment, a perfectly norm-competitive market.  Then, it is—in principle—clear how to ensure these conditions are fulfilled:  convince almost all buyers to demand best practices software—where they demand this at least in part because they think they ought to as long as everyone else does.  Assuming the demand persists long enough, profit-motive driven software developers will—in a perfectly competitive market—begin to meet the demand.  When they do, buyers will continue to demand, and the following regularity will be established:  buyers demand best practices software.  The regularity will exist in part because buyers think that they ought to conform as long as everyone else does.  The conviction that they ought to conform will be reinforced by the fact that unilateral non-conformity will mean going without software the buyer wants. 

The assumption of a perfectly competitive market is essential.  It ensures that developers will respond to the buyer demand; without such a response the demand will almost certainly fade away.  Buyers of netbook computers could demand netbooks with a processor equivalent in power to the Pentium 4 processor, but sellers will not meet the demand because a processor that powerful currently generates too much heat to function in a netbook.  Netbook buyers will either cease to demand such a processor, or they will cease to purchase netbooks.  The latter option is unlikely in the case of software generally, so an unmet buyer demand will eventually simply fade away.

 

D.  The Approximation Goals

          To summarize, there are three goals:  (1) convince buyers that they ought conditionally to demand best practices software and ensure that they do indeed demand it for that reason; (2) avoid the creation of a lemons market; and, (3) once the norm exists, ensure that, in enough different situations, there are enough norm-violation-detecting buyers that norm-inconsistent sellers suffer losses.  Call these the approximation goals.  Market forces will not achieve the approximation goals.[118]  Buyers are trapped in the self-perpetuating “vulnerability-ridden” coordination norm; moreover, the persistence of the norm ensures that the profit-maximizing strategy is to be the first in the market to offer a particular type of software or an upgrade to existing software, even if the software or upgrade is imperfect in a variety of ways, including having vulnerabilities.  As long as buyers are trapped in the norm, they will not demand best practices software.  Even those who understand the individual and social advantages of such software are unlikely to do so; a unilateral demand for best practices software simply falls on deaf ears.  We conclude that legal regulation is required to achieve the approximation goals.[119]  The question, then, is what sort of legal regulation will best achieve them. 

 

VII. Creating the Norm through Legal Regulation

 

We first consider common law negligence and products liability for defective design, as well as statutory proposals modeled more or less along the lines of those two common law doctrines.[120]  We argue that these approaches clearly fail to achieve the approximation goals.  We offer a statutory alternative built around the idea of best practices. 

 

A.  Negligence

A software developer is liable in negligence for losses resulting from a vulnerability only if the vulnerability was the result of the software developer’s failure to act as a reasonable developer would.  There are a number of difficulties in using negligence to regulate vulnerabilities in software;[121] we focus entirely on assessing how well it will achieve the approximation goals.        

It is certainly possible that negligence cases could lead to the fulfillment of the approximation goals.  Here is one possible scenario.  Successful negligence claims against software developers yield a series of decisions holding that, other things being equal, it is negligent not to follow this or that best practice (e. g., it is negligent to create a buffer overflow vulnerability[122]).  The “other things being equal” rider acknowledges that a developer who can demonstrate the reasonableness of a departure from best practices will not be liable.  On the basis of the series of decisions, courts and software developers both conclude that, other things being equal, it is negligent not to follow best practices for software development.  Publicity about the lawsuits, combined perhaps with advertising from best-practice-compliant developers, convinces almost all buyers that they ought to demand best practices software, and they begin to demand it for that reason.  All software developers respond by following the practices.  This eliminates worries about fulfilling the sufficient detection condition.  Since all software is best-practices software, there is no need to detect software that is not.  Thus the following regularity arises:  buyers demand best practices software.  Once the regularity is in place, unilateral non-conformity will mean going without software the buyer wants, and buyers will think that they ought to conform as long as everyone else does. 

Each step in this scenario is problematic.  It is hardly automatic that the successful completion of the first step (namely, successful negligence claims against developers) would result in developers actually following best practices.  Developers would have to be convinced that the cost of doing so was less than the expected legal liability.  Even in that case, it is hardly plausible that all developers will follow best practices; irrational developers will not do so.  Further, it is far from obvious that publicity and advertising would convince almost all buyers that they ought to demand best practices software and lead them to demand it on that basis.  Our focus, however, is on the first step—the assumption that courts will hold generally that it is negligent not to follow best practices.  This is unlikely to happen; hence, it is unlikely that the process will even get started. 

The role of custom in establishing reasonableness makes it extremely unlikely that courts will do so.  As the Restatement notes, “[i]n determining whether conduct is negligent, the customs of the community, or others under like circumstances, are factors to be taken into account, but are not controlling where a reasonable man would not follow them.”[123]  The relevant customs for software development are not the best practices but the prevailing industry practices.[124]  In theory, industry practices are just “factors to be taken into account, but are not controlling where a reasonable man would not follow them.”[125]  In practice, however, it is difficult for a plaintiff to overcome the defendant’s claim that it followed industry practice and hence proceeded reasonably.[126]  This is not a defect in tort law; it is a sensible approach to assessing reasonable design choices for one who is in the business of designing products to sell for a profit.  What practices should a software developer adopt when designing software for sale in the current market?  Buyers demand vulnerability-ridden software and will generally not pay a premium for more secure software.  The developer’s competitors cater to this demand by offering relatively inexpensive insecure software.  A developer who invests too much in software development runs the risk of business losses.  Software developers, just as much as buyers, are in the grip of the “vulnerability-ridden” norm.  In a wide range of cases, developers will be able to make a convincing case that they acted reasonably.

The case may not always be successful, of course.  Courts have rejected such reasonableness claims where the plaintiff has identified a readily available way to avoid the damage that the industry practices ignore.  A classic case is The T. J. Hooper.[127]  Two tugs, the Montrose and the T. J. Hooper, encountered a gale while towing barges.  The tugs and the barges sank.  The cargo owners sued the barge owners, who in turn sued the owner of the two tugs; the owner petitioned to limit his liability.  The court found the tugs negligently unseaworthy because they lacked shortwave radios.  Had they been so equipped, they would have received reports of worsening weather; had they received the reports, they would have avoided the storm by putting in at the Delaware breakwater.  The case illustrates a familiar pattern in torts cases:  (1) an activity imposes a significant risk of harm on third parties, where (2) those engaging in and benefiting from the activity underinvest in protecting the third parties; (3) the law responds by imposing on those engaging in the activity a duty to take reasonable steps to prevent harm to third parties, where (4) other things being equal, a reasonable step is one that reduces expected damage to third parties by an amount greater than the total cost of the step.  Current software development practices certainly appear to fit this pattern.  Software developers underinvest in software development by ignoring best practices, thereby producing vulnerability-ridden software that imposes, in the aggregate, significant losses on buyers and society as a whole.  Shouldn’t tort law hold that not following best practices is negligent?  It is unlikely that the courts will do so.  There are two key differences between the shortwave radios of The T. J. Hooper and software best practices.    

The first is that the cost of shortwave radios was relatively small.[128]  The cost of acquiring a radio did not put a tug owner at a competitive disadvantage; indeed, it arguably conferred one since the owner could offer lower risk transport at the same cost as competitors.  This is a critical factor in making it unreasonable not to acquire a radio—even in the market context at the time.  The second difference is that tug owners could easily make a rough and ready comparison between the cost of the radio and the expected losses avoided by its use.  The losses, when they do occur, can be huge; and, while the occurrence of violent storms is difficult to predict, their occurrence from time to time is certain.  This is a key factor in justifying the holding of negligence.  If the comparison were uncertain and controversial, it would be far less clear that owners acted unreasonably.  In the case of software, the comparison is uncertain and controversial.  As we noted earlier, it is clear that some combinations of best practices implement best justified tradeoffs among the relevant goals, but it is unclear and controversial what combinations those are.  For both of these reasons, it is unlikely that the courts will hold that it is negligent not to follow best practices.[129]
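A compact way to frame the comparison at issue—our gloss, borrowed from Judge Learned Hand’s formula for negligence—is that omitting a precaution is unreasonable when the burden B of taking it is less than the expected loss it would avert, the probability P of the harm times its magnitude L:

\[
B < P \times L .
\]

For the shortwave radio, B was plainly small and P × L demonstrably large; for software best practices, both B and the achievable reduction in P × L are uncertain and contested, so the inequality cannot be confidently established either way.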
         

B.  Products Liability for Defective Design

A product is defective in design only when use of the product involves a foreseeable and unreasonable risk of harm.[130]  As with negligence, the role of custom in establishing reasonableness makes it unlikely that courts will hold that failing to follow best practices (or a defensible alternative to best practices) creates a foreseeable and unreasonable risk of harm.[131]  Evidence of industry practices is relevant under both of the main tests used to determine defectiveness—the “risk/utility test” (a product is defective when its risk of harm exceeds its benefits), and the “consumer expectations” test (a product is defective when it fails to meet the reasonable expectations of consumers).[132]  Defendants may seek to show that a product was not defective by introducing evidence that other sellers customarily use the same design.[133]  For the same reasons given in the discussion of negligence, it is unlikely that the courts will hold that software that fails to follow best practices is defective in design.

 

C. Statutes Closely Modeled on Negligence or Products Liability 

The arguments above also apply to any statute modeled sufficiently closely on the common law requirements for negligence or products liability; indeed, the critique applies to any statute that incorporates a “reasonableness” requirement for software development where the courts will rely heavily on custom in interpreting that requirement.[134]  We suggest a different statutory alternative modeled on best practices as the best way to promote the approximation goals.  Our goal is not to define the statute itself but to define the task of creating it.  Our brief discussion is a catalogue of problems to be solved, not a list of solutions. 

 

          D.  A Statutory Task

The statute would identify best practices and require that software developers either follow them, or, to avoid liability, be able to demonstrate the reasonableness of their alternative practices.  It is essential to implement this requirement in a way that allows developers reasonable flexibility in their choice of development methodologies; otherwise, the statute will excessively inhibit innovation.  The statute could delegate to a standard-setting organization like the Computer Security Division of the National Institute of Standards and Technology (NIST)[135] or the American National Standards Institute (ANSI),[136] and adopt and enforce its standards; or it could delegate to an agency to fashion standards with advice from the private sector.  There are well-known problems.[137]  Regulatory capture in particular is a concern.  The concern is that commercial interests that dominate the software industry will have such a powerful influence on the formulation of the standards that the standards will fall far short of genuine best practices and instead advance commercial or special interests.[138]

We now turn to issues that arise in using such a statute as a means to realizing the approximation goals. 

 

                   1. Avoiding a lemons market

          How would the statute ensure that there are a sufficient number of norm-violation-detecting buyers?  As we noted in the discussion of negligence, the problem disappears if all developers follow best practices.  That is of course extremely unlikely.  But it is also not required.  It is enough if almost all follow best practices.  The main problem then is to ensure sufficient compliance.   

This may seem obviously wrong.  If some developers deviate from best practices, won’t the conditions for a lemons market obtain?  Recall that a lemons market exists when the following conditions hold.  (1) The products on the market vary significantly in the extent to which they have certain properties (vulnerabilities, in this case), and buyers regard products with the properties in question as having less expected value than those without them; (2) there is an asymmetry of information where buyers cannot discriminate between products with the properties and those without, but sellers can at least partially distinguish them; and furthermore, (3) there is no reliable signal of quality; however, (4) buyers know there is a mix of products on the market.

We suggested earlier that (3) probably does not hold, but we will not rely on that suggestion here.  Instead, we note that (1) is most likely not fulfilled.  If almost all developers offer best-practices software, the probability of purchasing non-best-practices software is very low; hence, the existence of such software on the market only minutely affects the expected value of a purchase.  Rational buyers will simply ignore the minimal impact.  They will not calculate the reduction in expected value of a purchase caused by the existence of vulnerability-ridden software.  The reason is that it is rational not to try to assess the small difference in expected value and simply to treat all software as if it were best-practices software.  The costs of making the assessment are greater than any gain it yields.[139]  Thus, half of (1) will be true:  buyers will regard vulnerability-ridden software as having less expected value than software with significantly fewer vulnerabilities.  But the other half will be false:  the products on the market will not vary significantly in the extent to which they have vulnerabilities.  There will not be enough vulnerability-ridden software to make it rational to take the minimal reduction in expected value into account in purchasing decisions.   
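Purely hypothetical numbers of our own make the point.  Suppose a fraction f of the software on the market is non-best-practices software, and that such software is worth d less to a buyer.  Then the expected-value penalty of buying without investigating is

\[
\Delta E[V] = f \cdot d, \qquad \text{e.g., } f = 0.02,\ d = \$50 \implies \Delta E[V] = \$1 .
\]

Whenever determining which products comply would cost a buyer more than that dollar in time and effort, ignoring the difference is the rational course.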

 

                   2. Creating the norm

How will the statute help convince buyers that they ought conditionally to demand best practices software and ensure that they do demand it for that reason?  Our answer is that steps must be taken to “educate” buyers about the advantages of best-practices software.  We put “educate” in quotes because techniques for creating the conviction form a continuum from genuine education to manipulation.  At the “education” end, one presents the relevant information about the individual and social gains from more secure software and counts on rational reflection to create the conviction.  As one moves toward the “manipulation” end, one increasingly supplements presentation of information and rational reflection with techniques designed to produce the conviction in other ways.  The task is to use some combination of techniques to produce the desired conviction in buyers.  One possibility is that, in order to gain a competitive advantage, developers who comply with the statute might themselves inform consumers about the advantages of “best practices” software and tout their software not only as best-practices software but as software that exceeds the legal minimum.  Alternatively, there are governmental ways to change citizens’ minds, as the anti-littering, anti-smoking, and anti-drug campaigns illustrate.     

 

3.  Once the norm is established

 

Once the norm is in place, it is important that buyers and software developers conform on their own initiative, not because of the threat of enforcement of statutory requirements; otherwise, one must rely on difficult, costly, and uncertain enforcement.  Developers will voluntarily conform as long as there are a sufficient number of norm-violation-detecting buyers.  Once the norm is in place, there may well be.  Developers themselves can ensure that buyers possess information about norm-inconsistent sellers.  If Microsoft, for example, offers norm-inconsistent software, Google’s advertising for its Chrome operating system can call that fact to buyers’ attention.[140]  Awareness of norm-inconsistent software can also come from publications like Consumer Reports, consumer watch-dog groups, negative publicity from consumer complaints, and litigation.[141] 

 

VIII. Conclusion

          Software sales currently depart dramatically from the typical pattern of sales governed, more or less, by value-optimal norms in a more or less norm-competitive market; instead, buyers and developers are trapped in the “vulnerability-ridden” norm in a market that (most likely) falls far short of the norm-competitive ideal.  The solution is to devise a suitable statutory stepping stone toward a value-optimal best-practices norm governing software sales in a sufficiently norm-competitive market. 

 



+ This Article is based upon work supported by the National Science Foundation under Grant No. IIS-0959116.

* Professor of Law, Chicago-Kent College of Law; Visiting Foreign Professor, Law Faculty, University of Gdańsk, Poland.

** Professor and Head, Department of Computer Science, University of Illinois at Chicago.

 

[1] In 2009, the cost of a data breach to organizations in the US was an average $6.75 million per incident.  Ponemon Institute, 2009 Annual Study: U. S. Cost of a Data Breach (2010), http://www.encryptionreports.com/2009cdb.html.  See also Robert W. Hahn and Anne Layne-Farrar, The Law and Economics of Software Security, 30 Harvard Journal of Law & Public Policy 283, 302-308 (2007).  A United Kingdom government study estimates the yearly cost of data breaches to be £21bn to businesses, £2.2bn to government and £3.1bn to citizens.  The Cost of Cybercrime, 8 (2011), http://www.cabinetoffice.gov.uk/sites/default/files/resources/the-cost-of-cyber-crime-full-report.pdf.  Earlier United States estimates of the cost of identity theft alone are also in the billions.  For a summary of relevant studies, see Fred H. Cate, Information Security Breaches and the Threat To Consumers, 2005, http://www.hunton.com/files/tbl_s47Details/FileUpload265/1280/Information_Security_Breaches.pdf (reporting 10.1 million victims of identity theft in 2003 and total losses to consumers of over $50 billion).  The number of victims has declined recently but costs have actually risen.  Jennifer Saranow Schultz, The Rising Cost of Identity Theft for Consumers, New York Times, February 15, 2011, http://bucks.blogs.nytimes.com/2011/02/09/the-rising-cost-of-identity-theft-for-consumers/?src=busln (noting that “[t]he average consumer out-of-pocket cost due to identity fraud increased to $631 per incident in 2010, up 63 percent from $387 in 2009.  Such costs include the expenses of paying off fraudulent debt as well as resolution fees, such as legal cost.”).  Schultz summarizes this report:  Javelin Strategy and Research, 2011 Identity Fraud Survey Report:  Identity Fraud Decreases – but Remaining Frauds Cost Consumers More Time & Money (2011), https://www.javelinstrategy.com/uploads/1103.R_2011%20Identity%20Fraud%20Survey%20Report%20Brochure.pdf (reporting the increasing cost to consumers of identity theft). 

 

[2] We assume that the gains from allowing unauthorized access (e.g., saving the time, effort, and money otherwise spent on prevention) are not sufficient to offset the losses.  See infra note 70 and accompanying text. 

 

[3] Vulnerabilities are a major cause of unauthorized access.  In 2010, CWE (Common Weakness Enumeration) and SANS (SysAdmin, Audit, Network, Security) identified cross-site scripting (XSS), SQL injection, and buffer overflow vulnerabilities as the causes of nearly all major cyber attacks in recent years.  CWE/SANS Top 25 Most Dangerous Software Errors, http://www.sans.org/top25-software-errors.  When releasing the list, SANS noted that "[t]hese 25 programming errors, and their ‘on the cusp cousins’ have been the cause of nearly every major type of cyber attack, including recent penetrations of Google, power systems, military systems, and millions of other attacks on small businesses and home users."  Joan Goodchild, Security Experts: Developers Responsible for Programming Problems, CSO, February 16, 2010, http://www.csoonline.com/article/544163/Security_Experts_Developers_Responsible_for_Programming_Problems.  See also Applications Security: Eliminating Vulnerabilities in Enterprise Software, http://i.cmpnet.com/darkreading/vulnerabilitymgmt/July2010_ApplicationsSecurity.Alert[1].pdf (noting that “[m]ost of the hacks that compromise enterprise security today are those that exploit flaws in applications”). 
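
To make the second of these concrete: an SQL injection vulnerability arises whenever untrusted input is spliced into the text of a query.  The following C sketch is ours, not any vendor’s code; it uses the real SQLite C API, but the table and column names are hypothetical.  It contrasts the vulnerable pattern with the standard parameterized-query fix:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Vulnerable: untrusted input is concatenated into the query text,
       so an input such as  ' OR '1'='1  rewrites the query's logic. */
    int login_vulnerable(sqlite3 *db, const char *user, const char *pw) {
        char sql[256];
        snprintf(sql, sizeof sql,
            "SELECT 1 FROM users WHERE name = '%s' AND pw = '%s'", user, pw);
        return sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Safer: a prepared statement keeps input as data, never as SQL. */
    int login_safer(sqlite3 *db, const char *user, const char *pw) {
        sqlite3_stmt *stmt;
        int ok = 0;
        if (sqlite3_prepare_v2(db,
                "SELECT 1 FROM users WHERE name = ? AND pw = ?",
                -1, &stmt, NULL) != SQLITE_OK)
            return -1;
        sqlite3_bind_text(stmt, 1, user, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, pw, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            ok = 1;
        sqlite3_finalize(stmt);
        return ok;
    }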

 

[4] See, e.g., Ross Anderson, Security Engineering 15 (2009).

 

[5] See, e.g., Bruce Schneier, Information Security and Externalities, http://www.schneier.com/essay-150.html

 

[6] Id.

 

[7] See C. Shapiro & H. R. Varian, Information Rules: A Strategic Guide to the Network Economy (1999).  The economics and information security community has developed Shapiro and Varian’s initial insights.  Much of this work is reported in the annual Workshop on the Economics of Information Security, held since 2002.  For information on the workshops from 2002 to 2010, see http://weis2010.econinfosec.org/index.html.  For a good general survey, see Ross Anderson and Tyler Moore, Information Security: Where Computer Science, Economics and Psychology Meet, 367 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 2717 (2009).

 

[8] Bruce Schneier is a prominent advocate of this view.  Bruce Schneier, Liability Changes Everything, November 2003, http://www.schneier.com/essay-025.html (arguing that “if we expect software vendors to . . . invest in secure software development processes, they must be liable for security vulnerabilities in their products”).  The theme appears frequently in the law review literature.  See, e.g., David Gripman, The Doors Are Locked But the Thieves and Vandals Are Still Getting In: A Proposal in Tort to Alleviate Corporate America’s Cyber-Crime Problem, 16 The John Marshall Journal of Computer and Information Law 167 (1997); Stewart D. Personick and Cynthia A. Patterson (eds.), National Research Council, Critical Information Infrastructure Protection and the Law: An Overview of Key Issues 50 (2003); Rustad and Koenig, The Tort of Negligent Enablement of Cybercrime, 20 Berkeley Tech. L.J. 1553 (2005) (arguing for recognizing a negligent enablement tort to provide an incentive to avoid negligent design practices); Jennifer A. Chandler, Improving Software Security: A Discussion of Liability for Unreasonably Insecure Software, in Anupam Chander, Lauren Gelman, and Margaret Jane Radin (eds.), Securing Privacy In The Internet Age 155 (2006); Shubha Ghosh and Vikram Mangalmurti, Curing Cybersecurity Breaches Through Strict Products Liability, in Securing Privacy In The Internet Age, at 187; Michael D. Scott, Tort Liability for Vendors of Insecure Software: Has the Time Finally Come?, 62 Maryland Law Review 425 (2008).

 

[9] Eugene H. Spafford, Remembrances of Things Pest, 53 Commun. ACM 35, 36 (2010).     

 

[10] See infra Section III. 

 

[11] See Paul S. Atkins and Bradley J. Bondi, Evaluating the Mission: A Critical Review of the History and Evolution of the SEC Enforcement Program, 13 Fordham Journal of Corporate and Financial Law 367 (2008); and Robert L. Glicksman and Dietrich H. Earnhart, The Comparative Effectiveness of Government Interventions on Environmental Performance in the Chemical Industry, 26 Stan. Envtl. L.J. 317 (2007).

[12] See Michael Hechter & Karl-Dieter Opp, What Have We Learned About the Emergence of Social Norms?, in Social Norms 394, 403 (Michael Hechter & Karl-Dieter Opp eds., 2001). 

 

[13] See David K. Lewis, Convention: A Philosophical Study, 5 - 42 (1969).

 

[14] Our notion of coordination norms is similar to, but not as broad as, Steven Hetcher’s notion.  Steven A. Hetcher, Norms in a Wired World 50 (2004).  Our notion is also closely related to the game-theoretic notion of a coordination game, which has roots going back to Thomas C. Schelling, The Strategy of Conflict (1960), and to David Lewis’s notion of convention.  Convention, supra note 13.  The original idea of coordination games and the term “coordination game” come from Schelling, at 89; the notion was further developed, and connected to norms and conventions, by Lewis.  For a more recent treatment, see Russell Cooper, Coordination Games: Complementarities and Macroeconomics (1999).

 

[15] This is a simplification.  The true norm is closer to “maximize the distance from your nearest neighbor subject to the constraint that you stay within the peripheral vision of at least one other passenger, and that you have at least one other passenger within your peripheral vision.” 
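
A rough formalization of this constrained version (the notation is ours, not the article’s): letting $S$ be the set of available standing positions, $d$ distance, and $V(p)$ the region within passenger $p$’s peripheral vision, the norm directs each passenger to choose

\[ s^{*} = \arg\max_{s \in S}\, d\bigl(s, \text{nearest passenger to } s\bigr) \quad \text{subject to} \quad \exists\, p:\ s^{*} \in V(p) \ \text{and} \ p \in V(s^{*}). \]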

 

[16] Following the norm is not, of course, the only solution to the coordination problem; there are alternatives—e.g., maximize the distance from your nearest neighbor and do not enter unless that distance is at least three inches.

 

[17] The assumption dominates economics and law and economics.  See Amartya Sen, On Ethics and Economics (1987), and Amartya Sen, The Idea of Justice (2009). Sen extensively criticizes the assumption, decisively in our view. 

 

[18] Even our observation that one may conform in order to realize a good state of affairs is consistent with this assumption, as long as one sees such motivations as being, in one way or another, in one’s self-interest.

 

[19] For simplicity, we are suppressing interesting issues about the type and extent of knowledge required for the existence of a norm.  See Convention, supra note 13, at 52 - 76.  Given our discussion, we may legitimately assume that the required knowledge conditions are fulfilled. 

 

[20] The appeal to reasoning under appropriate conditions to justify normative conclusions begins (at least) with Aristotle.  See Aristotle, Nicomachean Ethics.  For a modern exposition and defense of this approach, see Stephen Darwall, Impartial Reason (1983).

 

[21] A situation is Pareto optimal when and only when it is not possible to improve the well-being of any one person without making others worse off.
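
In symbols (a standard formulation; the notation is ours): where $u_i$ is person $i$’s well-being, an outcome $x$ is Pareto optimal if and only if there is no feasible alternative $y$ with

\[ u_i(y) \ge u_i(x) \ \text{for every } i \quad \text{and} \quad u_j(y) > u_j(x) \ \text{for at least one } j. \]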

 

[22] Thomas C. Schelling, Hockey Helmets, Concealed Weapons, and Daylight Saving: A Study of Binary Choices with Externalities, 17 Journal of Conflict Resolution 381, 381 (1973).

 

[23] The Economist reports that there really was a secret ballot.  The Economics of Hockey Helmets, The Economist, July 19, 2007, http://www.economist.com/blogs/freeexchange/2007/07/the_economics_of_hockey_helmet.  We have been unable to confirm this report.  Thomas Schelling considers the results of hypothetical choices in Thomas Schelling, Micromotives and Macrobehavior 198 - 201 (2d ed. 2006), but he does not consider a secret ballot among hockey players.

 

[24] Hockey Helmets, Concealed Weapons, and Daylight Saving, supra note 22, at 381.  

 

[25] See supra note 14.  

 

[26] See, e.g., Robert Gibbons, Game Theory for Applied Economists (1992); Kevin Leyton-Brown, Essentials of Game Theory: A Concise, Multidisciplinary Introduction (2008); Martin J. Osborne & Ariel Rubinstein, A Course in Game Theory (1994); and Philip D. Straffin, Game Theory and Strategy (1996).

[27] To be more precise, we should have said “the actions of the game can be labeled so that it has Nash equilibria with the players choosing the corresponding actions,” because two games that become the same when the actions (or players) are relabeled are really the same game.  There does not seem to be any one exact definition of “coordination game” used uniformly throughout the literature.  For instance, we did not specify whether the Nash equilibria must be strict, that is, whether “best response” in a Nash equilibrium is to be defined as “better than all alternatives.”  (If it is defined as “at least as good as all other alternatives,” then we get a weak Nash equilibrium.)  Some authors impose further conditions that we will not discuss here. 
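
The strict/weak distinction just mentioned can be stated precisely (these are the standard definitions; the notation is ours).  For a strategy profile $s^{*} = (s^{*}_1, \ldots, s^{*}_n)$ with payoff functions $u_i$:

\[ \text{weak Nash equilibrium:} \quad u_i(s^{*}_i, s^{*}_{-i}) \ \ge\ u_i(s_i, s^{*}_{-i}) \quad \text{for all } s_i \text{ and all } i; \]

\[ \text{strict Nash equilibrium:} \quad u_i(s^{*}_i, s^{*}_{-i}) \ >\ u_i(s_i, s^{*}_{-i}) \quad \text{for all } s_i \ne s^{*}_i \text{ and all } i. \]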

          We speculate that there is no precise definition because the notion of coordination is of most interest in social science and legal communities interested in social norms or situations of mixed cooperation and competition, and perhaps of less interest to the mathematical game theory community that tends to be the source of strict definitions. Schelling remarks in his Preface to the 1980 edition of The Strategy of Conflict, “I wanted to show that some elementary theory, cutting across economics, sociology, political science, even law and philosophy and perhaps anthropology could be useful not only to formal theorists but also to people concerned with practical problems. I hoped too, and I now think mistakenly, that the theory of games might be redirected toward applications in these several fields. . . . [G]ame theorists have tended to stay instead at the mathematical frontier.” Schelling, supra note 14, at vi.

 

[28] See Brian Skyrms, The Stag Hunt and the Evolution of Social Structure (2003).    

[29] To be precise, we obtain the payoff matrix shown by assuming that having a helmet is worth 7 units of utility, being bare headed is worth 0, having an advantage in winning is worth 5, being in a neutral position for winning is worth 0, and being at a disadvantage is worth -7, and that the preferences are independent and can be added.
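
Working out the sums on those assumptions (our reconstruction; the figure in the text is authoritative), with the bare-headed player holding the playing advantage against a helmeted opponent:

                      Other: Helmet          Other: Bare
    You: Helmet       (7, 7)                 (7 − 7, 0 + 5) = (0, 5)
    You: Bare         (0 + 5, 7 − 7) = (5, 0)   (0, 0)

Note that on these numbers (Helmet, Helmet) is a strict equilibrium (7 > 5), while (Bare, Bare) is only a weak one (deviating to a helmet yields 0 = 0).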

 

[30]  William Poundstone, Prisoner's Dilemma (1992) contains an excellent, detailed discussion of the prisoner’s dilemma. 

 

[31] Figure 5’s payoff matrix is based on the assumption that the utilities for an advantage in winning, an even game, and a disadvantage are respectively 10, 3, and -2, and the independent utility for having a helmet is 2.
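
On those assumptions the cells work out as follows (again our reconstruction): both helmeted, $3 + 2 = 5$ each; both bare-headed, $3 + 0 = 3$ each; helmeted against bare-headed, $-2 + 2 = 0$ for the helmeted player and $10 + 0 = 10$ for the bare-headed one.  Going bare-headed then strictly dominates ($10 > 5$ and $3 > 0$), yet both players prefer (5, 5) to the equilibrium (3, 3)—the prisoner’s dilemma structure discussed in note 30.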

[32] See infra Section III.

 

[33] The “other things being equal” is merely to handle exceptions that do not matter for our purposes—e.g., a buyer may accept an unfit product if he or she has a non-standard use for it, or if the seller is a relative whose offer the buyer believes he must not refuse. 

 

[34]  The demand has a long history.  As British common law responded to the rise of a market economy in the seventeenth century, it explicitly noted that the commercial custom and practice was to offer fit products.  Such acknowledgments, moreover, are not confined to modern market economies; ancient Roman law also notes the same custom and practice.  See Friedrich Kessler, The Protection of the Consumer Under Modern Sales Law, Part 1, 74 Yale L. J. 262 (1964); George L. Priest, A Theory of the Consumer Product Warranty, 90 Yale L. J. 1297 (1981); and James Oldham, English Common Law in the Age of Mansfield, Part II (2004).  The existence of the demand is consistent with spectacular failures to meet it.  For example, in June 2010, in just a small fraction of the recalls that month, “McDonald's asked customers to return 12 million glasses emblazoned with the character Shrek. Kellogg's warned consumers to stop eating 28 million boxes of Froot Loops and other cereals. Campbell Soup asked the public to return 15 million pounds of SpaghettiOs, and seven companies recalled 2 million cribs.”  Lyndsey Layton, A slew of defective products leaves consumers with 'recall fatigue', The Seattle Times, July 2, 2010, http://seattletimes.nwsource.com/html/nationworld/2012268615_recallfatigue03.html.

 

[35] 357 So.2d 996 (1978).

 

[36] Uniform Commercial Code §2-314(2)(c).  The norm and the legal rule are not the same; people generally know and adhere to the norm while only the relatively legally sophisticated are aware of Uniform Commercial Code §2-314(2)(c).   

 

[37] Lindy Homes, supra note 35, at 999 - 1000. 

 

[38] Id. at 999.

 

[39] People clearly do think that sellers ought not to offer products that, as a result of negligent design, impose an unreasonable risk of loss on buyers who use the product in the intended way.  It is difficult to imagine anyone sincerely claiming that sellers ought to offer such negligently designed products, and indeed precisely the opposite conviction plays a central role in the development of products liability law.  See, e. g., Richard Wright, The Principles of Products Liability Law, in Symposium, Products Liability: Litigation Trends on the 10th Anniversary of the Third Restatement, 26 Review of Litigation 1067 (2007).    

[40]  In the Matter of Sony BMG Music Entertainment, FTC File No. 062-3019, http://www.ftc.gov/os/caselist/0623019/index.shtm. 

  

[41] The software did contain vulnerabilities.  See Deirdre K. Mulligan and Aaron K. Perzanowski, The Magnificence Of The Disaster: Reconstructing The Sony BMG Rootkit Incident, 22 Berkeley Tech. L.J. 1157, 1166 (2007).  Our discussion focuses exclusively on other aspects of the software.     

 

[42] See Sony BMG CD copy protection scandal, Wikipedia, http://en.wikipedia.org/wiki/Sony_BMG_CD_copy_protection_scandal#Legal_and_financial_problems; and Bruce Schneier, Sony's DRM Rootkit: The Real Story, http://www.schneier.com/blog/archives/2005/11/sonys_drm_rootk.html. 

 

[43] The Magnificence Of The Disaster, supra note 41, at 1168. 

 

[44] Id. at 1168 - 1169.  As a result of the ensuing consumer outrage, Sony lost roughly $6.5 million in return fees alone.  Id. at 1170. 

 

[45] The resources to conduct such a review were available to Sony BMG from Sony Corporation of America, which has a 50% interest in Sony BMG and whose holdings include Sony Electronics and Sony Computer Entertainment America.  Id. at 1179.  Sony, along with Philips, owns the rights to the core DRM patents of Intertrust.  In theory, at least, Sony BMG could have implemented a suite of better technical solutions.  See Press Release, Sony Corporation of America, Philips and Sony Lead Acquisition of Intertrust, available at http://www.sony.com/SCA/press/021113.shtml (Nov. 13, 2002).

 

[46] Sony’s “decision [to offer the CD’s with the copy protection software] points to a culpable failure of internal procedures to safeguard against the wide-scale distribution of flawed protection measures.”  The Magnificence Of The Disaster, supra note 41, at 1168 - 1179.  

 

[47] Id. at 1180.  

 

[48] Id. at 1180.  

 

[49] Alan Schwartz and Louis L. Wilde, Imperfect Information in Markets for Contract Terms:  The Examples of Warranties and Security Interests, 69 Va. L. Rev. 1387, 1398 (1983).

 

[50] One may, for example, think that someone who commits an intentional tort should bear the losses he or she causes even if the victim is the best loss-avoider.  See generally The Principles of Products Liability Law, supra note 39. 

[51] Imperfect Information in Markets for Contract Terms, supra note 49, at 1398.

 

[52] We will, for simplicity, assume that consistency with norms is an all-or-nothing matter:  a transaction is either entirely consistent, or entirely inconsistent.  In practice, consistency is often a matter of degree.  Similarly, in regard to value-optimality, we assume that one’s values show either that one ought to act in accord with a given norm, or that one ought not.  In practice, there may be open questions where one’s values do not show that one ought to act in accord with the norm but also do not show that one ought not.  

 

[53] The argument is adapted from Alan Schwartz & Louis L. Wilde, Intervening In Markets On The Basis Of Imperfect Information:  A Legal And Economic Analysis, 127 U. Pa. L. Rev. 630 (1979).   

 

[54] See, e. g., Robert A. Hillman, Online Boilerplate: Would Mandatory Website Disclosure of E-Standard Terms Backfire, 104 Mich. L. Rev. 837, 853 (2006) (discussing the role of watchdog groups). 

 

[55] See, e.g., J. R. Averill, Studies on Anger and Aggression, 38 American Psychologist 1145 (1983) (noting that violation of norms in an exchange provokes anger and may lead to the termination of the exchange).    

 

[56] You may, of course, reveal yourself as an inconsistency-detector if you explicitly insist on norm-consistent treatment, or if you detect and object to norm-inconsistent behavior. 

 

[57] See, e. g., Jeffery L. Harrison, Law and Economics 261 (2007). 

 

[58] We assume that sellers, as members of the community in which the norm obtains, are aware of the norms and realize that they fail to meet buyers’ demands when they fail to act in accordance with demand-unifying coordination norms.  See supra note 19.

 

[59] See Douglas A. Barnes, Note, Deworming the Internet, 83 TEX. L. REV. 279, 297 - 299 (2004), and Mark G. Graff and Kenneth R. van Wyk, Secure Coding:  Principles & Practices 25 (2003).  

 

[60] Bruce Schneier, Schneier on Security, December 19, 2005, http://www.schneier.com/blog/archives/2005/12/insider_threat.html (reporting that, among corporate employees, “Two thirds (62%) admitted they have a very limited knowledge of IT Security” and “More than half (51%) had no idea how to update the anti-virus protection on their company PC”); SANS Institute InfoSec Reading Room, Consumer Labeling for Software Security, http://www.sans.org/reading_room/whitepapers/awareness/consumer-labeling-software-security_10.  Consumer awareness has increased over time.  Tim Wilson, Consumer Awareness of Online Threats Is Up, Study Says, Dark Reading, http://www.darkreading.com/security/vulnerabilities/222400407/index.html.

 

[61] Christine Jolls, Behavioral Economics Analysis of Redistributive Legal Rules, 51 Vand. L. Rev. 1653, 1659 (1998).

 

[62] http://www.sans.org/security-resources/malwarefaq/pdf-overview.php.

 

[63] The third-party risks are, in the terminology of economics, externalities—effects of a decision on those who did not make the decision and whose interests were not taken into account in making the decision.   

 

[64] We are not using “negligent” here in the legal sense.  We simply have in mind the non-legal use to mean “without sufficient attention.”  We discuss negligence as a tort in Section VII.  

 

[65] See Aleph One, Smashing the Stack for Fun and Profit, 7 Phrack Magazine, issue 49, file 14, http://www.phrack.com/issues.html?issue=49&id=14#article.

 

[66] See, e. g., SANS, Secure Windows Initiative Trial by Fire: IIS 5.0 Printer ISAPI Buffer Overflow, http://www.sans.org/reading_room/whitepapers/win2k/secure-windows-initiative-trial-fire-iis-50-printer-isapi-buffer-overflow_190 (noting that “[b]ecause buffer overflows begin with poor programming practices it is essential that vendors train their programmers to write secure code”).   
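
The defect itself is easy to exhibit.  A minimal C sketch of the classic pattern (the example is ours; CWE-120 catalogs the general form):

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: strcpy performs no bounds check, so any input longer
       than 15 bytes overruns buf and corrupts adjacent stack memory --
       potentially including the function's return address. */
    void greet_vulnerable(const char *name) {
        char buf[16];
        strcpy(buf, name);
        printf("Hello, %s\n", buf);
    }

    /* The routine fix: bound every copy by the destination's size. */
    void greet_safer(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);  /* truncates; never overruns */
        printf("Hello, %s\n", buf);
    }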

 

[67] CWE/SANS TOP 25 Most Dangerous Programming Errors, Programming Error Category: Risky Resource Management, http://www.sans.org/top25-programming-errors (linking to http://cwe.mitre.org/data/definitions/120.html).  

 

[68] See supra note 1.  

 

[69] See supra note 3.

 

[70] See Graff and van Wyk, supra note 59, at 56; Roger S. Pressman, Software Engineering:  A Practitioner’s Approach 13 – 14 (2001); and Ponemon Institute, Business Case for Data Protection: A Study of CEOs and other C-level Executives in the United Kingdom, http://www.ponemon.org/local/upload/fckjail/generalcontent/18/file/IBM%20Business%20Case%20for%20Data%20Protection%20UK%20White%20Paper%20FINAL6%20doc.pdf (noting that “C-level executives believe the cost savings from investing in a data protection program of £11 million is substantially higher than the extrapolated value of data protection spending of £1.9 million. This suggests a very healthy ROI for data protection programs”).   The study is of course not a study of investment in software development, but the significant savings from protecting data on networks suggest that reasonable software development practices that reduced the incidence of vulnerabilities would save money.

 

[71]  See Alessandro Acquisti, Allan Friedman, and Rahul Telang, Is There a Cost to Privacy Breaches? An Event Study, ICIS 2006 Proceedings, at 4, http://aisel.aisnet.org/icis2006/94.

 

[72] Michael Carrier, Innovation for the 21st Century: Harnessing the Power of Intellectual Property and Antitrust Law 19 – 32 (2009) (emphasizing the importance of innovation). 

 

[73] On the importance of open source software, see David A. Wheeler, Why Open Source Software / Free Software (OSS/FS, FLOSS, or FOSS)? Look at the Numbers!, http://www.dwheeler.com/oss_fs_why.html (offering statistics to show that open source software can be a better option than proprietary software); and, Edward M. Corrado, The Importance of Open Access, Open Source, and Open Standards for Libraries, Issues in Science & Technology Librarianship, Spring 2005, http://www.library.ucsb.edu/istl/05-spring/article2.html.  Best practices appropriate for proprietary software might unduly constrain the development of open source software. 

 

[74] See generally Innovation for the 21st Century: Harnessing the Power of Intellectual Property and Antitrust Law, supra note 72. 

 

[75] Ira P. Robbins, Best Practices on ‘Best Practices’: Legal Education and Beyond, 16 Clinical Law Review 269 (2009). 

 

[76] Id. at 278 - 282.  

 

[77] http://www.esc.org.uk/business-and-community/electrical-industry/technical-manual.html

 

[78] http://www.esc.org.uk/business-and-community/electrical-industry/best-practice-guides.html 

 

[79] http://www.esc.org.uk/business-and-community/policy/statements.html and http://www.esc.org.uk/business-and-community/statistics.html (offering safety statistics). 

 

[80] 30 mA RCD protection (the British equivalent of GFCI switches) greatly reduces the risk of an electrical shock sufficient to cause ventricular fibrillation, the main cause of death from electric shock. 

 

[81] In Replacing a Consumer Unit in Domestic Premises Where Light Circuits Have No Protective Conductor, the Council advises that “where the customer is . . . not prepared to accept the cost or disruption of re-wiring . . . but still needs a new consumer unit [circuit breaker box], . . . the installer needs to carry out a risk assessment before agreeing to replace only the consumer unit.”  http://www.esc.org.uk/pdfs/business-and-community/electrical-industry/BPG1v2_web.pdf  

 

[82]  Medical Affairs Staffing & Spend: Maximizing Value, Decreasing Cost, Study Overview, http://www.best-in-class.com/bestp/domrep.nsf/products/medical-affairs-staffing-spend-maximizing-value-decreasing-cost.  

 

[83] http://www.bestpractices.org.

 

[84] Id.

 

[85] Robert J. Boxwell, Benchmarking for Competitive Advantage 30 (1994).  The quote characterizes “benchmarking”; benchmarking is setting standards as a step toward adopting practices that realize them.  The practices are “best practices” if they are “best in class.”  “State of the art” is a similar characterization; as Robbins notes, Great Britain also uses the term “best practices” in the area of public management, defining a best practice as a generally accepted “state of the art” approach.  Robbins, supra note 75, at 282, citing Tessa Brannan et al., Assessing Best Practice as a Means of Innovation, 34 Loc. Gov’t Stud. 23, 24 (2008).

 

[86] See supra notes 72 - 74 and accompanying text. 

 

[87] Remembrances of Things Pest, supra note 9, at 38.

 

[88] See supra Section IV.C.

 

[89] See Ethan Preston and John Lofton, Computer Security Publications: Information Economics, Shifting Liability and the First Amendment, 24 Whittier L. Rev. 71 (2002), http://digitalebookden.com/computer-security-publications-information-economics-shifting.html; David Wheeler, Secure Programmer: Countering Buffer Overflows, IBM developerWorks, http://www.ibm.com/developerworks/linux/library/l-sp4.html; Secure Bit2: Transparent, Hardware Buffer-Overflow Protection, http://www.cse.msu.edu/cgi-user/web/tech/document?ID=619. 

 

[90] However, the choice of which software engineering methodology is the best one for managing various sorts of projects is contentious. In particular, there is debate about the relative merits of a traditional methodology called the Waterfall Model with its origins in the late 1960s versus various other methodologies, such as Spiral or Agile.

 

[91] Albert Endres & Dieter Rombach, A Handbook of Software and Systems Engineering: Empirical Observations, Laws and Theories (2003).  A small sample of these rules includes Boehm’s first law:  “Errors are most frequent during the requirements and design activities and are the more expensive the later they are removed.”  Id. at 17.  Dijkstra–Mills–Wirth law:  “Well-structured programs have fewer errors and are easier to maintain.”  Id. at 74.  Fagan’s law:  “Inspections significantly increase productivity, quality, and project stability.”  Id. at 100.  Hetzel–Myers law:  “A combination of different verification and validation [i.e., testing] methods outperforms any single method alone.”  Id. at 107.

 

[92] Eric Roberts et al., Computing Curricula 2001: Computer Science 17 (2001). Of the roughly 280 hours of “core” material listed there, perhaps half the core material in Programming Fundamentals, a third of the core material in Programming Languages, and almost all the core material in Software Engineering concerns the basics of good software development practices. Together those hours make up about a third of that core curriculum. A recent revision does not make significant changes from the point of view of the issues we consider here, except for adding some material on how to write secure software to the core. Lillian Cassel et al., Computer Science Curriculum 2008: An Interim Revision of CS 2001, http://www.acm.org/education/curricula-recommendations.

 

[93] See id.

 

[94] See Security Engineering, supra note 4, at 829.  See generally any standard textbook on software engineering, such as Roger Pressman, Software Engineering: A Practitioner's Approach (7th ed. 2009); Ian Sommerville, Software Engineering (9th ed. 2010).

 

[95] See, e.g., Anthony Hall, Seven Myths of Formal Methods, 7 IEEE Software 11–19 (1990); I. J. Hayes, Applying Formal Specification to Software Development in Industry, SE-11 IEEE Transactions on Software Engineering 169–178 (1985) (discussing the usefulness of software engineering techniques in some particular projects); and A. MacCormack et al., Trade-offs Between Productivity and Quality in Selecting Software Development Practices, 20 IEEE Software 78–85 (2003) (comparing various techniques). 

 

[96] See generally Capers Jones, Software Engineering Best Practices: Lessons from Successful Projects in the Top Companies (2010).

 

[97] See generally Innovation for the 21st Century, supra note 72 (discussing the analogous issues that arise in the context of copyrights and patents; many of the concerns and competing arguments carry over). 

 

[98] As far back as the 1980s, a panel convened to study the software issues in President Reagan’s Strategic Defense Initiative noted:  “Simply because of its inevitable large size, the software capable of performing the battle management task for strategic defense will contain errors. All systems of useful complexity contain software errors.”  Eastport Study Group, Summer Study 1985: A Report to the Director, Strategic Defense Initiative Organization 13 (1985) (emphasis added), http://www.cse.nd.edu/~kwb/nsf-ufe/star-wars/.  More recently, Capers Jones noted that one goal of software engineering best practices is to increase the percentage of bugs removed prior to delivery from 85 percent to something that “approach[es] 99 percent” (not 100 percent).  Jones, supra note 96, at xxvi.
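
The arithmetic behind Jones’s goal is worth making explicit (the calculation is ours): if development injects $D$ defects, then raising pre-delivery removal from 85 percent to 99 percent cuts delivered defects from $0.15D$ to $0.01D$,

\[ \frac{0.15D}{0.01D} = 15, \]

a fifteen-fold reduction—but never a reduction to zero.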

 

[99] Eric Roberts, Society, Computers in, in Encyclopedia of Computer Science 1591, 1594–1596 (2003).    

 

[100] Id.

 

[101] Remembrances of Things Pest, supra note 9, at 38.

 

[102] Jason Lam, Top 25 Series – Rank 3 – Classic Buffer Overflow, http://blogs.sans.org/appsecstreetfighter/2010/03/02/top-25-series-%E2%80%93-rank-3-%E2%80%93-classic-buffer-overflow/.

 

[103] See supra note 67.  

 

[104] See supra Section V.C. 

 

[105] See David A. Wheeler, More Than a Gigabuck: Estimating GNU/Linux's Size, http://www.dwheeler.com/sloc/redhat71-v1/redhat71sloc.html (estimating that “it would cost over $1 billion ($1,000 million - a Gigabuck) to develop this GNU/Linux distribution by conventional proprietary means in the U.S. (in year 2000 U.S. dollars)”), and Amanda McPherson, Brian Proffitt, and Ron Hale-Evans, Estimating the Total Development Cost of a Linux Distribution, http://www.linuxfoundation.org/sites/main/files/publications/estimatinglinux.html.  The authors note that

In 2002, David A. Wheeler published a well-regarded study that examined the Software Lines of Code present in a typical Linux distribution. His findings? The total development cost represented in a typical Linux distribution was $1.2 billion. We’ve used his tools and method to update these findings. Using the same tools, we estimate that it would take approximately $10.8 billion to build the Fedora 9 distribution in today’s dollars [2008], with today’s software development costs. Additionally, It would take $1.4 billion to develop the Linux kernel alone.

Id.
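
Wheeler’s estimates rest on the COCOMO cost model; its basic organic-mode equation estimates effort from size alone (the 100,000-line worked example is ours):

\[ E \;=\; 2.4 \times (\text{KSLOC})^{1.05} \ \text{person-months}, \]

so a 100 KSLOC system requires roughly $2.4 \times 100^{1.05} \approx 302$ person-months (about 25 person-years), which is then multiplied by a fully loaded salary to obtain a dollar estimate.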

 

[106] Switching costs lead to customer lock-in.  See Information Rules, supra note 7, at 103 – 172.

 

[107] See generally Innovation for the 21st Century, supra note 72; William H. Page and Seldon J. Childers, Software Development As An Antitrust Remedy: Lessons From The Enforcement Of The Microsoft Communications Protocol Licensing Requirement, 14 Mich. Telecomm. Tech. L. Rev. 77 (2007); and Competition, Innovation and the Microsoft Monopoly: Antitrust, supra note 105. 

 

[108] See John R. Michener, Steven D. Mohan, James B. Astrachan, and David R. Hale, “Snake-Oil Security Claims”: The Systematic Misrepresentation of Product Security in the E-Commerce Arena, 9 Mich. Telecomm. Tech. L. Rev. 211 (2003); and Bruce Schneier, How Security Companies Sucker Us With Lemons, Wired, April 19, 2007, http://www.wired.com/politics/security/commentary/securitymatters/2007/04/securitymatters_0419?currentPage=all.

 

[109] George Akerlof, The Market for “Lemons”: Quality Uncertainty and the Market Mechanism, 84 Quarterly Journal of Economics 488 (1970).

 

[110] One may wonder about the meaning of “significantly”; considerations we offer in Section VII explain and motivate the qualification.  

 

[111] Bruce Schneier, How Security Companies Sucker Us With Lemons, Wired, April 19, 2007, http://www.wired.com/politics/security/commentary/securitymatters/2007/04/securitymatters_0419.

 

[112] See, e.g., The Law and Economics of Software Security, supra note 1, at 314 (suggesting the possibility of a lemons market where software developers offered software that varied in the degree of security); and Deworming the Internet, supra note 59, at 292 (noting that “[a]s long as software is maintained as a trade secret, and development occurs behind closed doors, buyers have nothing more to go on than vague, unprovable assertions about quality and security (which are cheap to make)” and asserting that a lemons market results).   

 

[113] See supra note 108. 

 

[114] See Law and Economics of Software Security, supra note 1, at 302 (noting that “[s]oftware and network security issues receive substantial press”). 

 

[115] See supra Section IV.A.

 

[116] Programs containing vulnerabilities are often developed in ways that violate programming laws of the sort identified in A Handbook of Software and Systems Engineering: Empirical Observations, Laws and Theories, supra note 91.  Development practices that violate those laws frequently create a variety of defects in addition to vulnerabilities.   

 

[117] Greg Hoglund & Gary McGraw, Exploiting Software: How to Break Code 14 (2004) (emphasis added).  These lines come at the end of an introductory section of the book that moves from discussing famous software defects that had nothing to do with security and attackers to discussing defects that constitute security holes.  Two examples of non-security defects the authors give are NASA’s 1999 Mars Climate Orbiter failure, where a metric versus English units error caused the loss of the $165 million system, see, e.g., NASA's metric confusion caused Mars orbiter loss - CNN, http://articles.cnn.com/1999-09-30/tech/9909_30_mars.metric_1_mars-orbiter-climate-orbiter-spacecraft-team?_s=PM:TECH (last visited Feb 16, 2011), and the Denver International Airport automated baggage handling system fiasco.  See, e.g., Sara Baase, A Gift of Fire: Social, Legal, and Ethical Issues for Computing and the Internet 417 (3d ed. 2008); and Michael J. Quinn, Ethics for the Information Age 362 (4th ed. 2010). 
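
The orbiter’s unit mismatch is simple to state (the conversion arithmetic is ours): ground software reported thruster impulse in pound-force seconds where the navigation software expected newton-seconds, and since $1\ \text{lbf}\cdot\text{s} \approx 4.45\ \text{N}\cdot\text{s}$, every reported value understated the actual impulse by a factor of about 4.45.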

[118] The market has given rise to vulnerability disclosure businesses.  iDefense, for example, pays for information about the existence of vulnerabilities and communicates this information to its clients.  http://labs.idefense.com/.  This is not a general solution for consumers, who will not be willing to pay the significant charges that businesses like iDefense demand.  See The Law and Economics of Software Security, supra note 1, at 315 - 316.  CERT (Computer Emergency Response Team) discloses vulnerabilities free of charge.  http://www.cert.org/kb/vul_disclosure.html.  The disclosures are too technical for the average user, however. 

 

[119] In general, norms arise through custom, private agreement, or legal regulation.  See supra text accompanying note 13.   A best-practices norm is unlikely to arise by custom as long as buyers are trapped in the “vulnerability-ridden” norm.  It is also unlikely to arise by private agreement.  Mass market, standard form contracts typically disclaim liability for direct and indirect damages and place limits on any potential liability.  See George L. Priest, A Theory of the Consumer Product Warranty, 90 Yale L.J. 1297 (1981).  The article presents empirical results in support of the claim that the disclaimers in standard form contracts are best explained as an optimal allocation of the risk of product malfunctions between the seller and the buyer. 

 

[120] Stewart D. Personick and Cynthia A. Patterson (eds.), National Research Council, Critical Information Infrastructure Protection and the Law: An Overview of Key Issues 50 (2003) (“As a motivating factor for industry to adopt best practices, tort law can be a significant complement to standard-setting, because compliance with industry-wide standards is usually an acceptable demonstration of due care”).

 

[121] See Jennifer A. Chandler, Improving Software Security: A Discussion of Liability for Unreasonably Insecure Software, in Securing Privacy In The Internet Age supra note 8, at 155.   

 

[122] Another example is a “time of check to time of use” vulnerability.  See CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition, http://cwe.mitre.org/data/definitions/367.html; and SANS Institute InfoSec Reading Room, A Tour of TOCTTOUs, http://www.sans.org/reading_room/whitepapers/securecode/tour-tocttous_1049.  As a type of race condition, time-of-check to time-of-use vulnerabilities rank twenty-fifth on the SANS list of the top twenty-five most dangerous software errors.  http://cwe.mitre.org/top25/index.html#CWE-362.  
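
In code, the race is between two system calls.  A minimal C sketch (ours) of the pattern CWE-367 describes, together with the usual descriptor-based fix:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* Vulnerable: the check and the use are separate system calls, so
       an attacker can swap the file (e.g., for a symlink to /etc/passwd)
       in the window between access() and open(). */
    int open_vulnerable(const char *path) {
        if (access(path, R_OK) != 0)      /* time of check */
            return -1;
        return open(path, O_RDONLY);      /* time of use */
    }

    /* Safer: open first, then validate the descriptor itself, so the
       check and the use refer to the same file. */
    int open_safer(const char *path) {
        struct stat st;
        int fd = open(path, O_RDONLY | O_NOFOLLOW);  /* refuse symlinks */
        if (fd < 0)
            return -1;
        if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
            close(fd);
            return -1;
        }
        return fd;
    }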

[123] Restatement (Second) of Torts §295A (1965). 

 

[124] See Restatement (Second) of Torts §295A, cmt. b, and David Owen, Proving Negligence in Modern Products Liability Litigation, 36 Ariz. St. L.J. 1003, 1038 (2004).    

 

[125] Restatement (Second) of Torts §295A (1965). 

 

[126] See Gideon Parchomovsky and Alex Stein, Torts and Innovation, 107 Mich. L. Rev. 285, 292 (2008).   

 

[127] The T. J. Hooper, 60 F.2d 737 (2d Cir. 1932).  Parchomovsky and Stein cite Texas & Pacific Railway Co. v. Behymer, 189 U.S. 468 (1903), as a similar case.  Torts and Innovation, supra note 126, at 293.  The Behymer court does indeed note that what “is usually done may be evidence of what ought to be done, but what ought to be done is fixed by a standard of reasonable prudence, whether it is complied with or not.”  Id. at 470.  Behymer, however, concerns the sudden stopping of a train in circumstances in which the court found the sudden stop negligent.  There is no suggestion that sudden stops in such situations were an industry practice. 

 

[128] As the court notes, “An adequate receiving set suitable for a coastwise tug can now be got at small cost and is reasonably reliable if kept up; obviously it is a source of great protection to their tows.”  The T. J. Hooper, supra note 127, at 739.  

 

[129] This may strike some as dubious, for, as Thomas Smedinghoff notes, “recent case law . . . recognizes that there may be a common law duty to provide security, the breach of which constitutes a tort.”  Thomas J. Smedinghoff, Defining the Legal Standard for Information Security, in Securing Privacy In The Internet Age, supra note 8, at 22.  In support, Smedinghoff cites Wolfe v. MBNA America Bank, 485 F.Supp.2d 874 (2007); Guin v. Brazos Higher Education Service, not reported in F.Supp.2d, 2006 WL 288483 (D. Minn.); and Bell v. Michigan Council of 25, not reported in N.W.2d, 2005 WL 356306 (Mich. App.).  These cases certainly support Smedinghoff’s cautious claim that recent cases recognize there “may be a common law duty to provide security,” but no case suggests that it is negligent not to follow best practices.  Guin holds that a laptop theft from a home was not foreseeable because the person in possession of the laptop lived in a relatively safe neighborhood and had taken reasonable steps to prevent burglary.  Wolfe concerns the failure to verify the authenticity of information in a credit card application taken by a telemarketer.  Bell concerns the non-online theft of information from a labor union; the court held that "defendant did owe plaintiffs a duty to protect them from identity theft by providing some safeguards to ensure the security of their most essential confidential identifying information."  Bell, at *5.  Other recent cases demonstrate that the courts may be reluctant to expand negligence doctrine to create liability for contributing to unauthorized access.  In Forbes v. Wells Fargo Bank, N.A., 420 F.Supp.2d 1018 (D. Minn. 2006), the court rejects negligence liability for a bank’s role in permitting unauthorized access to information that could be used to commit identity theft; the court notes that the plaintiff did not allege any harm, just an increased risk of harm, and standard tort law does not allow recovery for a merely increased risk of harm.  Banknorth, N.A. v. BJ's Wholesale Club, Inc., 442 F.Supp.2d 206 (M.D. Pa. 2006), holds that, even where there is a present injury to the plaintiff, the economic harm rule prevents recovery when the injury is merely economic. 

 

[130] See The Principles of Products Liability Law, supra note 39, at 1078.      

 

[131] The use of custom in proving defectiveness derives at least in part from its use in proving negligence.  David G. Owen, Products Liability Law § 1.2 (2005). 

 

[132] Torts and Innovation, supra note 126, at 299.

 

[133] David Owen, The Proof of Products Liability, 93 Ky. L. J. 1, 5 (2004).  Plaintiffs may also seek to show that the product was defective by introducing evidence that other sellers use a safer design, but, unless those other sellers are following best practices, this will not provide a basis for requiring that software developers follow best practices.

 

[134] Such statutory reasonableness requirements are common.  As Smedinghoff notes in regard to statutory network-security standards for businesses, “Laws and regulations rarely specify the security measures a business should implement to satisfy its legal obligations.  Most simply obligate companies to establish and maintain ‘reasonable’ or ‘appropriate’ security measures, controls, safeguards, or procedures, but give no further direction or guidance.”  Defining the Legal Standard for Information Security, supra note 129, at 23.    

 

[135] http://csrc.nist.gov/index.html.  The Computer Security Division does not currently offer best-practices standards for software development. 

 

[136]  ANSI (http://www.ansi.org/) does not currently offer standards for software development.  It refers to the International Organization for Standardization’s ISO/IEC 24773:2008 standard.  http://www.ansi.org/news_publications/news_story.aspx?menuid=7&articleid=2034.  ISO/IEC 24773:2008 does not specify best practices in our sense of the term.  Its purpose is to establish “a framework for comparison of schemes for certifying software engineering professionals.  A certification scheme is a set of certification requirements for software engineering professionals. ISO/IEC 24773:2008 specifies the items that a scheme is required to contain and indicates what should be defined for each item.”  http://www.iso.org/iso/catalogue_detail.htm?csnumber=41543.  Other certification proposals have also failed to create viable best-practices standards.  Two notable failures are the Trusted Computer System Evaluation Criteria (TCSEC) (for a useful summary and links, see http://en.wikipedia.org/wiki/Rainbow_Series) and the Common Criteria (http://www.commoncriteriaportal.org/).  For criticisms of both approaches, see Security Engineering, supra note 4, at 517 - 538. 

 

[137] Innovation for the 21st Century, supra note 72, at 323 – 344, provides a succinct overview of the concerns.  The discussion concerns standards in the sense of “a common platform that allows products to work together.”  Id. at 323.  Essentially the same issues arise in defining best practices, however.    

 

[138] See Jon Hanson and David Yosifon, The Situation: An Introduction to the Situational Character, Critical Realism, Power Economics, and Deep Capture, 152 U. Pa. L. Rev. 129 (2003).  The Federal Communications Commission is arguably an example of regulatory capture.  See Hannibal Travis, Of Blogs, eBooks, and Broadband: Access to Digital Media as a First Amendment Right, 35 HOFSTRA L. REV. 1519 (2007), and Jonathan E. Nuechterlein and  Philip J. Weiser, Digital Crossroads: American Telecommunications Policy in the Internet Age (2005). 

 

[139] See A. M. Odlyzko, The Case Against Micropayments, in R. N. Wright (ed.), Lecture Notes in Computer Science #2742 77, 80 (2003), and A. M. Odlyzko, Internet Pricing And The History Of Communications, 36 Computer Networks 493 (2001). 

 

[140] Id. at 968.

 

[141] See Online Boilerplate, supra note 54.