Structural rights in privacy

SMU Law Review

Fall 2007


Harry Surden [FNa1]

* * *


      A growing body of legal literature emphasizes the point that there are multiple means by which society regulates human behavior besides law. [FN14] The concept of alternative behavior regulators extended from initial insights by the law and economics community on the effects of pricing upon behavior. [FN15] Recent scholars, notably Lawrence Lessig, have highlighted the role of other, less obvious, regulatory mechanisms, including social norms and physical or technological constraints. [FN16] Under Lessig's formulation, there are four major categories of constraints that regulate behavior: laws, markets, social norms, and constraints based upon the physical and technological state of the world, which I collectively refer to as “structural constraints.” [FN17]

      The common thread among all of these regulatory mechanisms is that they restrict or moderate the level of behavior by increasing (or reducing) the costs of certain activities. [FN18] Markets use pricing mechanisms to impose economic costs, social norms impose social costs, and law uses the threat of sanctions to impose expected penal costs. [FN19] Structural constraints impose physical or technological costs on behaviors. [FN20] Importantly, this cost principle can be generalized: any mechanism which imposes costs on behavior can be seen as a device that society can use or rely upon to regulate behavior.

* * *

B. Privacy Interests Expanded: Constraint-Rights and Legal Rights

      In the traditional view of privacy, the relevant set of privacy rights are those positive legal rules and doctrines which have been explicitly enumerated by legal rule-makers. [FN31] However, a key observation is that many privacy interests are protected not by positive legal prohibitions on behavior, but by structural constraints which act as reliable substitutes for legal constraints. [FN32]

      We can think about two different types of structural constraints--explicit and latent. As indicated earlier, explicit structural constraints are the costs imposed by intentionally created physical or technological features in the world. [FN33] The paradigm example of an explicit structural regulator is a physical fence surrounding a piece of property. In that scenario, a certain behavior--entering the property--is constrained by the physical cost that would be incurred by climbing or otherwise circumventing the fence. Another example of an explicit structural constraint is a computer encryption algorithm, which, through technological means, encodes information in such a way that decoding it is difficult and often practically impossible. This encryption imposes a high technological cost upon the behavior of accessing the underlying information. [FN34]

      Latent structural constraints, by contrast, are the secondary costs arising from the current technological or physical state of the world. The presence of these costs implicitly regulates conduct. [FN35] Under the existing technological state of the world, certain activities are so costly in terms of resources and effort as to render them effectively impossible to carry out on a widespread basis. [FN36] To illustrate, take the example of the unauthorized sequencing of an individual's DNA to determine susceptibility to certain diseases. As of the time of publication of this Essay, one can view this search behavior as functionally prevented by latent structural constraints--namely, the large amount of resources required to conduct such an analysis. The existence of these costs can be seen as inhibiting the sequencing behavior. As long as these costs persist, society can avoid explicitly regulating this behavior, relying instead on these latent structural constraints to constrain the conduct. Policy decisions are made--or not made--based upon assumptions about what behaviors are thought to be possible or costly given the current technological state of the world. In this case, the implicit regulation is directly linked to the rapidly changing state of the technology, resulting in a rather fragile constraint. [FN37]

      It is helpful to examine the idea of regulation by latent structural constraints in the context of a contemporary privacy interest. Privacy is frequently defined as the ability to control information about oneself. [FN38] For illustrative purposes, I will focus upon the example of private information--social security numbers and financial data--appearing within public documents, such as the records of court proceedings. [FN39] We can describe this privacy interest as the right to be free from others finding and using this sensitive personal information. [FN40]

      Traditionally, such interests would have been protected by latent structural constraints rather than by legal constraints. A typical court case can generate hundreds or thousands of individual documents. Each of these documents within the public record can be tens or hundreds of pages in length. In the not-too-distant past, these documents were not available electronically. The act of finding a particular piece of information in a public record would have required a physical visit to the public records office, followed by multiple requests to a records clerk for large physical files. This would have been followed by a labor-intensive search for particular words through thousands of pages of documents. Although it was not impossible to conduct such a search, the costs involved effectively constrained the searching behavior. Frequent, arbitrary, and widespread searches of this nature were too costly to occur on a regular basis.

      In this example, the privacy interest--the freedom from having others glean sensitive personal information from public records--was adequately protected by latent structural constraints. Society relied upon an alternative regulatory mechanism--structure--to constrain the behavior as a substitute for what could have been explicitly regulated by law. Note that with the advent of electronically searchable court documents available on public websites, the structural privacy rights calculus changed. Many similar examples of private information protected by the obscuring effects of search costs exist in the privacy realm.

* * *

D. The Structural Rights/Emerging Technology Dynamic

* * *

      To the extent that society is reliant upon latent structural constraints to prevent privacy-violating behavior, there exists the potential for problematic erosion of privacy. Numerous works in the existing literature have observed the impact of technology on privacy. The work of Lessig in particular has been extremely influential in highlighting the role of emerging technologies in changing the balance of privacy interests. [FN55] Others have noted that privacy rights appear to erode over time in the face of new technical capabilities. [FN56] This Essay suggests that the observed phenomenon is a manifestation of a general pattern and attempts to describe this dynamic in a general manner by way of the constraint-rights conceptual framework.

      The dynamic proceeds as follows: society relies upon latent structural constraints to prevent behavior that would otherwise be considered privacy-violating. This reliance upon existing, latent structural constraints to regulate behavior can be seen as analogous to an implicit, initial distribution of entitlements in the Coasean sense. [FN57] Society's dependence upon structural constraints rather than other constraint mechanisms to protect privacy rests upon assumptions about the technological state of the world. To use the earlier example of private information in paper court records: the physical costs of obtaining the court documents and the difficulty of searching through large amounts of paper documents for one particular piece of information acted as sufficient constraints on the privacy-violating activity.

      Since latent structural constraints are simply the costs imposed by the physical or technological state of the world, their ability to constrain behavior is weakened by anything which tends to lower those costs. Importantly, many emerging technologies possess exactly this characteristic--the tendency to lower transactional and operational costs. This in turn permits conduct which was previously costly or impossible. Consequently, a side effect of many emerging technologies is the elimination of structural privacy interests through cost reduction. The costs lost are those that previously acted as latent structural constraints--the activity costs that society was relying upon to constrain unwanted behaviors. This dynamic--an emerging technology enabling previously impossible behavior--is a recurring pattern in the realm of privacy.

      One can restate this dynamic in terms of this Essay's constraint-rights framework as follows: In the world prior to the emergence of the technology, society relied upon a non-legal regulator to prevent unwanted privacy-violating behavior. It is useful to conceptualize this arrangement as having produced a corresponding constraint-right that is analogous to the positive legal right that would have been created had society regulated the behavior through the use of an explicit rule. This conceptualization allows policymakers to identify privacy rights which are non-obvious because they are implicitly protected by non-legal constraint mechanisms.

      With the widespread diffusion of a disabling technology, there is effectively a rights-shift. The default state of the world changes from one in which the structural privacy interest was adequately protected to a world in which the privacy interest is no longer protected. Assuming that there is no parallel constraint mechanism--law, norms, or markets--to continue to safeguard the privacy right, this phenomenon can be seen as the loss of a previously held right. In Hohfeldian terms, the emerging technology has effectively caused a shift from a structural right scenario to a structural no-right scenario. [FN58]

      To illustrate the point, imagine that the privacy interest in question had instead been protected by an explicit law rather than by a latent structural constraint. The implicit rights-shift can be seen as functionally comparable to a scenario in which policymakers affirmatively decided to abrogate the privacy law. In that case, there would be no confusion as to the fact that the former rights-holders were deprived of a legal right that they once had. The difference is that this implicit rights-shift is based upon exogenous factors--emerging technology--rather than affirmative societal action.

      Emphasizing the parallel nature of this implicit rights-shift, relative to the more visible activity of society's enactment and removal of explicit privacy laws, is one of the intended contributions of this Essay. This comparison has rhetorical consequences: Once a structural constraint has eroded, policymakers can elect to use another constraint mechanism--such as a law--to continue to protect the eroded privacy interest. The affirmative use of another regulatory device can be framed as the continuation of a previously existing right, rather than the creation of an entirely new legal right. Such a distinction becomes important in the public policy debate over protecting privacy interests where the creation of a new privacy right may prove politically more difficult than the protection of an existing right.

      Even when there is a parallel constraint mechanism in place--for example, a legal or normative constraint that persists after the loss of the structural constraint--there may be a significant change in the nature of the underlying interest. As the previous subpart noted, the characteristics of the constraint mechanisms employed can affect the quality of the right. Thus, a violable legal constraint may be less effective at preventing particular behaviors than a prior structural constraint which was non-violable. Such a shift to a relatively weaker form of protection can be seen as somewhat analogous to the substantive effects resulting from a change from property rules to liability rules. [FN59] The ability to prevent or “enjoin” the unwanted behavior can lessen with a change from a structural to a non-structural regulator.

* * *


      As this Essay has suggested, a theoretical construct such as the constraint-rights framework can be useful in revealing rights that policymakers could otherwise easily overlook. In particular, latent structural constraints are subtle, non-obvious regulators of behavior.

      From a social perspective, there are important benefits to affirmatively viewing these constrained behaviors as structural privacy rights. First, this conceptualization allows policymakers to identify potentially serious privacy losses very early in the technological diffusion process, before an emerging technology achieves widespread adoption and erodes structural constraints. [FN86] Since this theoretical framework focuses upon latent costs as the regulators of certain activities, policymakers can anticipate privacy losses simply by examining emerging technologies and the costs that they will tend to reduce. Today, the public debate over privacy and emerging technologies is dominated by those scenarios that are readily imaginable by policymakers and interested parties. [FN87] There does not, however, appear to be an explicit, overarching principle for systematically predicting the full range of eroding privacy interests, including those that are implicit. The structural-cost principle--the notion that policymakers can anticipate potentially undesirable enabled behaviors by focusing upon emerging technologies and the costs that those technologies will reduce--is one such guide that policymakers could systematically use. Importantly, this principle allows policymakers to anticipate vulnerable privacy interests that have never been previously identified in the public debate.

      It is important that policymakers have the ability to identify vulnerable privacy interests early in the diffusion process of emerging technologies. As indicated, the widespread diffusion of an emerging technology effectively causes a rights-shift with respect to privacy interests protected by latent structural constraints. The default state of the world changes from one in which the privacy interest is protected to one in which it is no longer protected due to exogenous factors such as the cost-lowering effects of technologies. The structural rights abstraction can serve as a predictive construct. By permitting the identification of implicit and nonobvious privacy interests at an early stage, policymakers could potentially mitigate the overall social costs that might result from relatively sudden shifts caused by rapidly emerging technologies. For example, policymakers might choose to overlay another regulatory device--law, structure, norms, or markets--in parallel with existing structural constraints, at a time before full technological diffusion occurs. With the dissipation of the structural constraint, one or more other regulatory mechanisms would already be in place. Such parallel regulation could effectively prevent the erosion of a structural privacy interest by keeping the unwanted behavior regulated to some degree, even after the loss of the structural constraint following the emergence of a disabling technology. [FN88]

* * *

      Like legal rules, structural constraints are both over-inclusive and under-inclusive relative to the behavior that society wishes constrained. Due to limits in precision, legal rules necessarily over-regulate, prohibiting a small amount of behavior that is benign or beneficial in order to constrain a larger amount of undesirable behavior. [FN91] Since structural constraints are similarly over-inclusive, the loss of a structural constraint via the diffusion of an emerging technology enables some benign behavior along with unwanted behavior. As emerging technologies such as the Internet have demonstrated, it is often impossible to predict, a priori, which uses of the technology will maximize social welfare. To simply mimic the structural constraints via another regulatory device may not be a socially desirable outcome: the emerging technology may enable socially beneficial behaviors that society may not want to constrain.

      This suggests that the principle of “least constraint” should generally govern: as a rule of thumb, society should continue to prevent only those particular behaviors that are known to be undesirable, and permit the rest. [FN92] Thus, it is important to distinguish analytically between the structural mechanism--the costs that were constraining particular unwanted behaviors--and the unwanted behaviors themselves. For example, it is helpful to imagine a world in which latent structural constraints governed a particular behavior. Because of the implicit and imprecise nature of these regulations, it is likely that unwanted activities were constrained along with certain desirable activities. However, following the emergence of a cost-lowering technology, these structural constraints no longer govern, and both socially undesirable and socially desirable activities are possible. The structural cost view thus allows policymakers to home in precisely on the unwanted behaviors and constrain only those, while allowing beneficial behaviors enabled by the technology to persist. This can be thought of conceptually as the disaggregation of individual undesirable behaviors from the larger group of behaviors that were constrained structurally prior to the emergence of the cost-lowering technology.

      The framework suggests that the structural privacy rights analysis can be roughly summed up in three questions: What cost is a particular emerging technology reducing? What unwanted behaviors are currently constrained by those costs through existing latent structural mechanisms? And what alternative regulatory mechanisms can society use to continue to constrain the unwanted behaviors, while permitting other enabling uses of the emerging technologies? This conceptualization permits policymakers to more precisely target unwanted behaviors for regulation, while permitting benign activities enabled by emerging technologies.

* * *
[FNa1]. Fellow, Stanford Center for Computers and the Law, Stanford Law School; J.D. Stanford Law School. I am grateful to Barbara Babcock, Andrew Coan, Richard Craswell, George Fisher, Lawrence Lessig, and Mara Mintzer for their valuable comments and suggestions.
[FN1]. See, e.g., Daniel J. Solove, Privacy and Power: Computer Databases and Metaphors for Information Privacy, 53 Stan. L. Rev. 1393, 1430 (2001) ( “Privacy law consists of a mosaic of various types of law: tort law, constitutional law, federal and state statutory law, evidentiary privileges, property law, and contract law.”).
[FN2]. See, e.g., Edward K. Cheng, Structural Laws and the Puzzle of Regulating Behavior, 100 Nw. U. L. Rev. 655, 657 (2006); Lawrence Lessig, The New Chicago School, 27 J. Legal Stud. 661, 662 (1998).
[FN3]. See Lawrence Lessig, Code and Other Laws of Cyberspace 82-83 (1999) (discussing how human behavior is moderated by other mechanisms besides law). A fence is an example of a structural regulator. Rather than relying upon trespass law to keep unwanted visitors from one's land, landowners often rely on the physical regulation that a tall fence imposes. Lessig uses the term “architecture” to describe real-world limitations on human behavior. Id. I will use the term “structure” instead, because I believe that the term “architecture” might suggest intentional human design. The term “structure” encompasses regulators which are both the by-product of design and the by-product of the state of the world.
[FN4]. This Essay focuses upon the cost-imposing, rather than the cost-reducing (subsidy) aspects of regulatory devices.
[FN5]. See Lessig, supra note 2, at 686.
[FN6]. See Wesley Hohfeld, Some Fundamental Legal Conceptions as Applied in Judicial Reasoning, 23 Yale L.J. 16, 30-32 (1913). In Hohfeld's conception, one holds a legal right when there is a general legal duty that others in the world refrain from certain behavior that either interferes with some affirmative activity (a positive right) or takes away from some activity (a negative right).
[FN7]. The focus of this Essay is the Hohfeldian negative right--the duty to refrain from a particular behavior--rather than a positive right--the duty to engage in particular behaviors.
[FN8]. I use the term “society” here in a generic sense, referring to actions of formal or informal institutions of public governance.
[FN9]. Cheng, supra note 2, at 659.
[FN10]. Id. at 662.
[FN11]. This Essay builds upon important insights from the existing privacy literature in order to create a general conceptual framework to describe this dynamic. See Lessig, supra note 3, at 142-63 (providing some of the most important insights in this regard).
[FN12]. A simple example will illustrate the point. In today's world, there is an expectation that the contents of a closed bag will be private from onlookers. This expectation is based upon the assumption that others cannot, from a distance, discern the contents of a closed bag given the technological state of the world. Structural constraints--the physical and technological costs imposed by the current state of the world--can be seen as constraining the conduct. Assuming that such behavior is socially undesirable from a privacy perspective, society would not need to explicitly prohibit this behavior using the legal regulatory device. The structural constraint of cost serves as an adequate constraint on the unwanted behavior.
[FN13]. See, e.g., Duncan Kennedy, Form and Substance in Private Law Adjudication, 89 Harv. L. Rev. 1687 (1976). A speed limit provides a good example of a law being over-inclusive relative to the behavior it is attempting to regulate. If the behavior regulated by a speed limit is “safe driving,” then a sixty-five miles per hour speed limit prevents some behavior at sixty-seven miles per hour which is safe for the sake of administrability.
[FN14]. See, e.g., Cheng, supra note 2, at 655-66; Lessig, supra note 2, at 661-63.
[FN15]. Lessig, supra note 2, at 661-65.
[FN16]. Id. at 662-63.
[FN17]. Id. The paradigm of a legal constraint is a law which explicitly prohibits certain behavior and provides a sanction for its violation--for example, a speed limit. Markets are economic mechanisms which regulate through price--the higher the price of a particular activity, typically, the less it will occur. Social norms are implicit or explicit social rules about how to behave.
[FN18]. Viewed through this general lens of cost imposition, other regulatory mechanisms come into view. Businesses are constrained by internal business policies, trade customs and practices, and fear of loss of reputation within long-term professional relationships. For more on this, see generally Frank B. Cross, Law and Trust, 93 Geo. L.J. 1457 (2005).
[FN19]. Lessig, supra note 2, at 661-65.
[FN20]. James Gibson, Re-Reifying Data, 80 Notre Dame L. Rev. 163, 164-65 (2004).
[FN21]. See Cheng, supra note 2, at 657.
[FN22]. For a good discussion of this issue, see id. at 667-75 (emphasizing that society often overlooks solutions which involve structural constraints in favor of legal constraints--legislation).
[FN23]. Often policymakers will regulate a particular behavior by using several constraint mechanisms in parallel to buttress the overall regulatory effect. The paradigm example of this is the earlier example of the fence--a structural constraint--and trespass laws--a legal constraint--operating in parallel to prevent unauthorized entry onto private land.
[FN24]. Given that any one of a number of constraint mechanisms could have been selected to achieve a particular regulatory goal, there is arguably a disproportionate focus upon legal constraints regulating behavior.
[FN25]. Hohfeld, supra note 6, at 16-25; Joseph William Singer, The Legal Rights Debate in Analytical Jurisprudence from Bentham to Hohfeld, 1982 Wis. L. Rev. 975, 986.
[FN26]. Singer, supra note 25, at 986.
[FN27]. Id.
[FN28]. See id.
[FN29]. Hohfeld, supra note 6, at 16-25.
[FN30]. Whether a particular behavior is “privacy violating” is a policy issue, as I will discuss in Part III.
[FN31]. See, e.g., Solove, supra note 1, at 1430.
[FN32]. See, e.g., Lawrence Lessig, The Architecture of Privacy, 1 Vand. J. Ent. L. & Prac. 56, 62-64 (1999).
[FN33]. Id. Lessig's important insight was to highlight the indirect and often unintended regulatory effects that occur as the result of selecting particular technological or physical structural designs.
[FN34]. Buildings, highways, and natural features such as mountains and rivers can be seen as implicit structural constraints on behavior. See Neal Kumar Katyal, Architecture as Crime Control, 111 Yale L.J. 1039, 1041, 1047 (2002).
[FN35]. Gibson, supra note 20, at 164.
[FN36]. Id. at 164-65.
[FN37]. Id.
[FN38]. See, e.g., Eugene Volokh, Freedom of Speech and Information Privacy: The Troubling Implications of a Right to Stop People from Speaking About You, 52 Stan. L. Rev. 1049, 1050 (2000); Kent Walker, Where Everybody Knows Your Name: A Pragmatic Look at the Costs of Privacy and the Benefits of Information Exchange, 2000 Stan. Tech. L. Rev. 2, ¶ 5 (2000), http:// (defining privacy as “the ability to prevent other people or companies from using, storing, or sharing information about you”).
[FN39]. I use this example simply for illustrative purposes. Whether this is an example of an actual privacy interest is a public policy question.
[FN40]. See Walker, supra note 38, ¶ 5.
[FN41]. See Lessig, supra note 2, at 662-64.
[FN42]. See id. at 666-80.
[FN43]. See id. at 662-63.
[FN44]. See generally Cheng, supra note 2, at 664-67 (discussing a similar characterization).
[FN45]. See generally id. at 664.
[FN46]. Id. Strictly speaking, regulatory devices are not solely for constraining unwanted behavior. Society also uses regulatory devices to encourage, subsidize, or require other, wanted, behaviors. While it would be interesting, in future work, to explore similar analogues to these affirmative devices, that is not the focus of this Essay.
[FN47]. Id. at 664-65.
[FN48]. Gary S. Becker, Crime and Punishment: An Economic Approach, 76 J. Pol. Econ. 169, 176 (1968).
[FN49]. Id. at 177.
[FN50]. Cheng, supra note 2, at 681-88.
[FN51]. Id.
[FN52]. Id. at 689-93.
[FN53]. See id. at 691 (discussing a similar point).
[FN54]. See id. at 662-68.
[FN55]. Lessig, supra note 3, at 142-63. Others have noted the recurring impact of technology on privacy. See, e.g., Arthur R. Miller, Personal Privacy in the Computer Age: The Challenge of a New Technology in an Information-Oriented Society, 67 Mich. L. Rev. 1089 (1969).
[FN56]. See, e.g., Christopher Slobogin, Public Privacy: Camera Surveillance of Public Places and the Right to Anonymity, 72 Miss. L.J. 213, 264-66 (2002); Daniel Solove, Identity Theft, Privacy, and the Architecture of Vulnerability, 54 Hastings L.J. 1227, 1228-30 (2003).
[FN57]. See Ronald H. Coase, The Problem of Social Cost, 3 J.L. & Econ. 1, 8 (1960). Although I do not explore the Coasean implications of initial distributions of structural rights and transaction costs in this Essay, this would be a fruitful topic for further research. An initial intuition is that the dichotomous and binary nature of structural rights makes for substantial transaction costs such that it is difficult to bargain around initial rights distributions.
[FN58]. See Hohfeld, supra note 6, at 30.
[FN59]. See Louis Kaplow & Steven Shavell, Property Rules Versus Liability Rules: An Economic Analysis, 109 Harv. L. Rev. 713 (1996) (discussing the implications of property rules and liability rules).
[FN60]. For a good discussion of similar issues in the intellectual property context, see Gibson, supra note 20.
[FN61]. See Timothy K. Armstrong, Digital Rights Management and the Process of Fair Use, 20 Harv. J.L. & Tech. 49, 60-65 (2006).
[FN62]. Sanjay E. Sarma, Stephen A. Weis & Daniel W. Engels, Auto-ID Center, RFID Systems, Security and Privacy Implications, 1, 3-4 (2002), http://
[FN63]. i2mobilesolutions, Auto ID and the Bar Code, http:// (last visited Mar. 17, 2007) (noting that “[t]he term auto ID covers a number of different technologies that enable computers to automatically identify an item, usually by reading data from it .... [C]ompanies have developed technologies like bar codes, RF (radio frequency) tags, magnetic stripe, smart card, optical character recognition (OCR), optical mark readers (OMR).”).
[FN64]. See Sarma, Weis & Engels, supra note 62, at 3-4. For example, in the case of building security, a typical system might involve a security card carried by an authorized user. The user may have to swipe or slide his card in order to gain entry. The card contains the authorization code. By sliding or swiping the card, the employee is allowing the system to “read” the authorizing data on the card. Thus, the computer system has acquired identifying information about a physical object--namely the card and its authorization information.
[FN65]. These scanning applications are auto-identification systems and acquire information about the item being purchased, such as price or brand, by deciphering product data in barcodes affixed to items.
[FN66]. By one estimate, the amount of time spent at supermarket checkout is five minutes and thirty-four seconds, with three minutes of that time spent waiting. Gavin Chappell et al., Auto-ID Center, Auto-ID in the Box: The Value of Auto-ID Technology in Retail Stores 10 (2003), http:// (citing a 1999 Envirosell study).
[FN67]. In such a system, the checkout device will be able to read each item's RFID tag from several feet away and complete the purchase. See id.
[FN68]. See Jennifer E. Smith, You Can Run, But You Can't Hide: Protecting Privacy From Radio Frequency Identification Technology, 8 N.C. J.L. & Tech. 249, 257 (2007).
[FN69]. Tags are formally known as “transponders,” and readers are formally known as “transceivers.” Sarma, Weis & Engels, supra note 62, at 4.
[FN70]. Id.
[FN71]. Id.
[FN72]. Id.
[FN73]. See id.
[FN74]. The distance from which RFID readers can communicate with tags depends upon the type of technology used. At the current time, low-power tags can communicate over a maximum distance of 15 feet, whereas battery-operated tags can communicate over distances of 100 feet or more. See Klaus Finkenzeller, RFID Handbook: Radio-Frequency Identification Fundamentals and Applications 6-9, 21 (Rachel Waddington trans., John Wiley & Sons, Ltd. 2d ed. 2003); Sarma, Weis & Engels, supra note 62, at 4.
[FN75]. See Finkenzeller, supra note 74, at 199-218.
[FN76]. Id. at 6-7. For example, cards with magnetic stripes are often parts of auto-identification systems. Information about the card, such as an account number, is communicated to a computer system, such as an Automated Teller Machine (“ATM”), through the process of swiping. The obvious disadvantage of a system like this is that it requires physical contact between the card and the reader. Physical contact may lead to degradation of the magnetic surface or may be impeded by dust or scratches. Id. at 15.
[FN77]. See Sarma, Weis & Engels, supra note 62, at 4.
[FN78]. See id.
[FN79]. There are some limitations on RFID ability, which will be discussed below. For example, RFID tags are limited in their operating range. Additionally, readers and tags may not be able to communicate with one another through certain materials, such as metal or certain types of liquids. Id. at 5.
[FN80]. See discussion supra Part II.B.
[FN81]. Of course, like all privacy rights, there are certain limitations to the right, such as when one is required to enter detection equipment in order to board an airplane. See, e.g., United States v. Hartwell, 436 F.3d 174, 177 (3d Cir. 2006) (holding an x-ray scan at an airport constitutional under the administrative search doctrine).
[FN82]. For the purposes of this example, we assume that there are no compensating technological barriers to reading RFID tags, such as encryption or tag disablement.
[FN83]. Lawrence Lessig, Privacy and Attention Span, 89 Geo. L.J. 2063, 2065 (2001) (“Privacy's function, in this story, is not to protect the presumptively innocent from true but damaging information, but rather to protect the actually innocent from damaging conclusions drawn from misunderstood information.”).
[FN84]. For example, in a pre-RFID world, an individual might see the principal carrying the illicit magazine and also draw incorrect conclusions. But since RFID potentially makes it more difficult to conceal things, the opportunity for such intrusions is greater.
[FN85]. See Sarma, Weis & Engels, supra note 62, at 10.
[FN86]. See Gaia Bernstein, When New Technologies are Still New: Windows of Opportunity for Privacy Protection, 51 Vill. L. Rev. 921 (2006) (providing a thoughtful discussion of technology diffusion and privacy issues).
[FN87]. See, e.g., PrivacyActivism, RFID Position Statement of Consumer Privacy and Civil Liberties Organizations (Nov. 20, 2003), http://; Electronic Privacy Information Center, Radio Frequency Identification (RFID) Systems, http:// (last visited Mar. 27, 2007).
[FN88]. However, as this Essay has noted, different constraint mechanisms regulate in different ways and are able to constrain behaviors more or less effectively. Thus, the replacement mechanism may not always be equivalent in regulatory ability to the original.
[FN89]. See Lessig, supra note 3, at 146-60.
[FN90]. Scholars have advocated a similar position in the copyright context. Fair use, a structural right, has been subject to erosion by emerging technologies. Several scholars have advocated countering this erosion by actively implementing fair-use rights via technological measures. See, e.g., Dan L. Burk & Julie E. Cohen, Fair Use Infrastructure for Rights Management Systems, 15 Harv. J.L. & Tech. 41, 56 (2001) (advocating discretionary access to publications in a rights management database that would constitute fair use). Others have argued for similar rights-preserving technological measures in the privacy context. See, e.g., Shawn C. Helms, Translating Privacy Values with Technology, 7 B.U. J. Sci. & Tech. L. 288, 305-14 (2001).
[FN91]. Again, a speed limit is the paradigm example of a legal rule that is over-inclusive relative to the behavior it seeks to regulate. If the underlying behavior that a speed limit seeks to prohibit is unsafe driving, then most speed limits will prohibit some driving that is safe, for the sake of the rule's administrability. A speed limit of sixty-five miles per hour might be a good proxy for a safe driving speed; a vehicle traveling at sixty-seven miles per hour, however, could be safe under many circumstances, yet would still be constrained by the rule.
[FN92]. This is similar to the argument that has been advanced in favor of so-called net neutrality, in which it is suggested that, a priori, it is too difficult to predict which uses of a technology will be the most beneficial. Therefore, the platform with the least restrictions should generally be seen as the best. See, e.g., Brett M. Frischmann & Mark A. Lemley, Spillovers, 107 Colum. L. Rev. 257, 293-98 (2007).
[FN93]. See discussion supra Part II.
[FN94]. See supra notes 14-17 and accompanying text.
[FN95]. See Hohfeld, supra note 6, at 30-32.

60 SMU L. Rev. 1605
