Criminal-Justice Apps: A Modest Step Toward Democratizing the Criminal Process

Substantive criminal law and the criminal-justice process are both famously opaque. Although society expects people to be on notice of the substantive criminal law,[1] the average person has little understanding of the breadth of the penal code and what the legislature has criminalized.[2] Even for conventional crimes that everyone is aware of (think of drunk driving, speeding, and burglary, to name just a few), the average person likely has no idea what particular elements make up those crimes.[3]

Criminal procedure is similarly a mystery to most of the public.[4] The average driver has no idea whether she can refuse to let an officer look around her car[5] or whether she can decline to take a breathalyzer test.[6] Most famously, most suspects waive their Miranda rights[7]—even after specifically being told that they do not have to talk—because they do not really understand their options. Once arrested, the process of getting out of custody and retaining a lawyer is confusing.[8] Arraignments, bail determinations, preliminary hearings, and motions to suppress are also likely befuddling to the average individual.[9]

The procedural confusion is compounded by the sheer practical difficulty of exercising constitutionally protected rights. For instance, defendants have a right to a fair trial, yet large numbers of defendants are handicapped by being unable to make bail.[10] These individuals often waive their trial rights and plead guilty simply to get out of custody.[11] And for defendants who can afford to hire lawyers, finding the right attorneys and making time to consult with them is enormously difficult.

In short, the criminal justice system affords suspects and defendants considerable statutory and constitutional protections. Yet, as in many other areas of our democracy, these individuals, especially vulnerable members of the population, lack the necessary information and ability to take advantage of these legal protections.

Technology is slowly starting to make the criminal-justice system more understandable and beginning to help even the playing field for disadvantaged groups. In recent years, lawyers, activists, and policymakers have introduced cell phone applications—what I will call “criminal-justice apps”—that are slowly beginning to democratize the criminal-justice system.  These apps fall into a variety of categories: (1) apps that teach individuals about the law; (2) apps that help suspects and defendants connect with lawyers; (3) apps that help defendants navigate confusing court systems; and (4) what we might think of as reform apps or paradigm-shifting apps that seek to bring systemic changes to the criminal justice system.

To be sure, many of these apps were created by lawyers out of self-interest in order to generate business. But a few apps have emerged purely as a public service, in order to enhance the power of suspects and defendants who have been historically disadvantaged in the criminal-justice system. Whether these apps were designed with a profit motive or as a public service, they serve a democratizing function. Criminal-justice apps make it (at least somewhat) easier for individuals to be informed about the criminal-justice system and to navigate it on a more equitable basis.

Using apps focused on DWIs, bail, stop-and-frisk, and recording of the police as illustrative examples, this essay explores how apps are democratizing criminal justice. While many of the apps are exciting, this essay concludes with a cautionary note: Unlike other technological breakthroughs, criminal-justice apps will likely lead to only modest change at a modest pace.

I. Apps That Teach Law

One of the greatest barriers to a more egalitarian criminal justice system is simply lack of knowledge. Law is complicated. Criminal codes are massive, and most citizens do not have more than a cursory knowledge about what has been criminalized.[12] Nor do most individuals know much about their criminal-procedure rights.[13] Apps in a few select areas have begun to narrow this information deficit.

A. DWI Apps Are Increasingly Common

The most common type of criminal-justice app is the DWI (Driving While Intoxicated/Impaired) app. Numerous criminal-defense attorneys—no doubt in an effort to acquire clients—have created apps with information for drivers who are about to find themselves in legal trouble.

Some of the DWI apps provide considerable legal information. For example, the colorfully named “Oh Crap! App” has a section on “Basic Rights” that includes information on the right to record police, the right to refuse to answer a police officer’s questions, the right to a lawyer, and an explanation of how police cannot search without a warrant or consent.[14] The app also offers defense-oriented guidance on how to answer an officer’s questions about whether you have been drinking without giving incriminating responses.[15] Likewise, the app has a section on consenting to field sobriety tests and correctly notes that “[y]ou cannot be forced to submit to field sobriety tests in the State of Virginia.”[16] It further opines that the walk-and-turn and one-leg-stand tests “are almost impossible to perform to law enforcement standards in a stressful situation whether alcohol has been consumed or not.”[17] While the claims about the unfairness of the field sobriety tests may be exaggerated,[18] the app nevertheless provides drivers with a coherent set of advice about their rights and how they can respond to law enforcement.

While the Oh Crap! App tries to steer individuals away from confessions and field sobriety tests, it also offers wise and accurate advice about when to cooperate with law enforcement. The app advises that police do have the authority to demand a driver’s license and registration and to order the driver out of the vehicle.[19]

Other DWI apps provide even more information. For instance, the “DWI Defense” app, designed by a Missouri law firm, offers advice on how to avoid being pulled over in the first place.[20] It also offers strategic advice to help the possibly intoxicated avoid being arrested, such as keeping pace with the flow of traffic, rolling down the window to vent smells from the vehicle, and not saying things such as “I just had one beer with dinner.”[21] While some might see this advice as an objectionable effort to help people engage in drunk driving, the overall thrust of the information is to help individuals be more knowledgeable about how drunk-driving laws operate.

Other DWI apps try to simplify the relevant rules of criminal law, procedure, and investigation as much as possible for users with no legal background. A DWI app from a Texas lawyer provides “12 Rules for Dealing with Police.”[22] The twelve rules include many of the points made above, but they also wisely caution people to ask to see an actual warrant before allowing a blood draw.[23]

In addition to providing legal information, some of the DWI apps also have valuable functions designed to prevent drunk driving in the first place. For instance, a number of apps have a blood-alcohol content calculator that allows a user to estimate whether their blood-alcohol level might exceed the legal limit of 0.08.[24] The more sophisticated apps also seek to help individuals avoid legal trouble by including icons that allow the user to call nearby taxi services.[25]
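For readers curious about the arithmetic behind such calculators, the sketch below illustrates one common approach. The apps do not disclose their exact formulas, so this is only a rough, hypothetical estimate based on the widely cited Widmark formula; the function name and constants are illustrative assumptions, not any app’s actual code, and of course not legal or medical advice.

```python
# A minimal sketch of the kind of estimate a BAC-calculator feature might perform.
# Uses the widely cited Widmark formula; constants and names are illustrative only.

def estimate_bac(standard_drinks: float, body_weight_lbs: float,
                 hours_since_first_drink: float, male: bool = True) -> float:
    """Estimate blood-alcohol content as a percentage (e.g., 0.08)."""
    alcohol_grams = standard_drinks * 14.0          # ~14 grams of alcohol per standard drink
    body_weight_grams = body_weight_lbs * 453.592   # convert pounds to grams
    r = 0.68 if male else 0.55                      # Widmark body-water distribution ratio
    bac = (alcohol_grams / (body_weight_grams * r)) * 100.0
    bac -= 0.015 * hours_since_first_drink          # average elimination rate per hour
    return max(bac, 0.0)

if __name__ == "__main__":
    bac = estimate_bac(standard_drinks=3, body_weight_lbs=170, hours_since_first_drink=2)
    print(f"Estimated BAC: {bac:.3f}%")
    print("At or above the 0.08 limit" if bac >= 0.08 else "Below the 0.08 limit")
```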

The DWI apps are still a relatively modest part of the legal landscape. Each year, more than a million people are arrested for driving while intoxicated.[26] Likely only a small percentage of those individuals consulted a DWI app before leaving a bar or while an officer was in the process of investigating them on the road. Still, the apps have become prevalent. The lawyer who created the Oh Crap! App estimates that it has been downloaded over 100,000 times since it was created in 2013.[27] With such a large number of downloads, some arrestees surely consulted the app in the moments before they were arrested.[28] And even those who did not consult it at the moment of arrest likely internalized some of the information from prior review. Of course, more information does not always mean better information. But in the context of DWI stops, where most citizens have limited knowledge of the law and face a power imbalance with the police, the apps may be quite valuable.

II. Apps to Connect Defendants with Lawyers

In addition to a lack of legal knowledge, a key problem for many criminal defendants is finding the right lawyer. App designers have created mechanisms to find, contact, and compare attorneys. Not surprisingly, all of the aforementioned DWI apps also provide ways to directly communicate with the lawyer who created the app. Some DWI apps enable the user to immediately call a lawyer[29] or send a pre-filled email to an attorney.[30] Some apps also provide an icon to contact a bail bondsman in addition to a lawyer.[31]

Beyond the DWI area, there are also websites that connect individuals with lawyers for other types of offenses. For instance, one website enables individuals to upload traffic tickets, receive competing offers from lawyers, and then hire an attorney.[32]

Another app—“Got My Legal Help”—identifies the user’s location, when prompted, and immediately puts the user in touch with an attorney licensed in the relevant jurisdiction and with expertise in the applicable area of law.[33] This app is valuable not just to individuals who have no prior experience with lawyers, but also to well-connected individuals who happen to be away from home when they are arrested.

Of course, we should be cautious in extolling the virtues of “find a lawyer” apps. These are early-generation apps with limited functionality. More worrisome, the lawyers have a profit motive. And most troubling is the possibility that some of the lawyers who acquire clients through apps may not be the most capable or cost-effective attorneys.[34] In short, clients may be drawn to attorneys who are less skilled and more expensive than those they otherwise would have hired. Of course, problems with lawyer advertising have always existed.[35] Mobile apps at least move the ball forward by helping those with no legal contacts find attorneys, helping others find the right type of attorney, and making it easier to begin getting legal advice by breaking down some of the existing communication barriers.[36]

Moreover, there is vast potential for future apps that connect individuals to lawyers. Think of TripAdvisor and Yelp, which provide individuals with the opportunity to rate and compare hotels, destinations, and restaurants. Engineers could create a similar app where previous clients rate lawyers, describe the kind of case the lawyer handled, and the result of the proceeding. A criminal defendant without the first idea whom to hire could scroll through such an app and make a more informed decision about their representation. Of course, all of the information might not be accurate—much as there are both self-serving and unfair reviews on Yelp and TripAdvisor—but the defendant would have far more information than she would without the app. And lawyers who know they will be publicly reviewed will have an incentive to provide good service to their clients in order to avoid bad reviews.

III. Apps That Help Individuals Navigate the Legal System

The legal process is obviously confusing. In theory, an attorney helps individuals navigate the complicated process. But not all criminal defendants have attorneys.[37] And even for those who have legal representation, it is sometimes hard to get a timely and thorough answer to legal questions from an overburdened defense attorney.[38] In a very small number of courts, there are apps to help individuals decipher the legal issues and procedures in the court system.

For example, following a conviction, many defendants are obligated to pay restitution or fines.[39] Failure to do so can land previously released individuals back in custody.[40] To make this process easier for federal defendants, courts are beginning to create apps that enable individuals to pay fines from their phones. The United States District Court for the District of Minnesota created an app—“MND Debt”—that enables users to “make payments for restitution, fines, and assessments from anywhere with no additional charge.”[41] In Hawaii, individuals can use the “Hawaii Courts Mobile” app to access records, find court forms, and pay fines.[42]

Once cases are finalized, many convicted individuals seek to expunge their criminal records, but find the process to be complicated. Multiple jurisdictions have apps that assist people in trying to expunge their criminal records. For instance, the “Expunge.io” app provides attorney referral assistance for individuals with a juvenile record in Cook County, Illinois.[43] The “ExpungeMaryland” app provides “an assessment of an individual’s eligibility for expungement” and “referrals to pro bono legal groups.”[44]

In 2014, California enacted a law that downgraded some crimes from felonies to misdemeanors.[45] The downgrade was significant for defendants who had already served their sentences but were saddled with felony convictions that hindered their employment opportunities. However, utilizing the new law to clear felonies from their record proved difficult for some individuals. Each California county adopted its own procedure to implement the law, and there was limited staffing to help individuals complete the paperwork.[46] Reformers thus created an app—“Clear My Record”—that helps individuals apply across counties to have their convictions reduced.[47] There are also other expungement apps that are “attorney facing” and that enable lawyers and law school clinics to import data and populate form documents much faster.[48]

In civil cases, there are a few apps to help pro se litigants navigate the court system. For instance, the “Florida Courts HELP” app seeks to help Floridians who represent themselves in family-law cases.[49] The app provides access to nearly 200 family-law forms that can be filled out on the device, contact information for help centers, user-friendly instructions about how the process works, lawyer referrals, and a glossary that explains dozens of legal terms.[50]

Legal Services of Northern Virginia has likewise created an app, with funding from the Legal Services Corporation, to help individuals navigate the court system. Its app—“Legal Case Navigator”—assists individuals in Northern Virginia by providing links to legal forms, a map of the local courthouse, lawyer referral phone numbers, and basic information about individuals’ legal rights.[51] It even enables the individual to access information about their own pending case.[52]

Although the Florida and Northern Virginia apps are focused on civil issues such as family law, consumer law, elder law, expungements, and housing law, the concept could easily be expanded to the criminal-justice context. For instance, as noted above, the Florida Courts HELP app includes a glossary of dozens of legal terms that average citizens otherwise may not understand. Criminal-law terminology can be just as confusing, and a criminal-courts app would be similarly valuable in translating concepts for a lay audience.

IV. Apps Aimed at Reforming the Criminal Justice System

While most of the criminal-justice apps were created by lawyers seeking to generate business, there are also apps designed by nonprofits that seek to reform the criminal-justice system.

A. Bail Apps Seeking to Change the System

Scholars and criminal-justice reformers have turned their attention to the bail system in recent years. Most arrestees are poor and do not have thousands or even hundreds of dollars in discretionary funds to pay bail.[53] Arrestees therefore turn to bail bondsmen, who typically require the suspect to pay ten percent of the bail amount.[54] Many defendants cannot even afford the ten percent, and must remain in jail pending trial.[55] Unable to show up for work, some will lose their jobs, which causes a cascade of other financial and basic life problems. Detainees who were in drug treatment programs or homeless shelters may lose their beds in such facilities.[56] They might even suffer violence while incarcerated. And, perhaps most significantly, the biggest indicator that a defendant will plead guilty is whether he is detained pending trial.[57]

Although there have been some successful reform efforts through litigation,[58] the money bail system is still one of the biggest obstacles to creating an egalitarian criminal-justice system in the United States.[59] Reformers are creating apps to tackle the problem.

In late 2017, engineers launched “Appolition,” an app that links to users’ bank accounts, rounds debit and credit card purchases up to the next dollar, and donates the spare change to grassroots groups that post bail for incarcerated misdemeanor suspects.[60] In the first month, Appolition raised $18,000 to post bail for pretrial detainees;[61] as of January 23, 2019, the app had raised around $200,000 and bailed over fifty people out of jails across the United States.[62]
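The round-up mechanism itself is simple arithmetic. The sketch below is offered only as an illustration of the concept the essay describes, not as Appolition’s actual implementation; the function names and figures are hypothetical.

```python
# A minimal sketch of the "round-up" logic described above; purely illustrative.
from decimal import Decimal, ROUND_UP
from typing import Iterable

def round_up_donation(purchase: Decimal) -> Decimal:
    """Return the spare change between a purchase and the next whole dollar."""
    next_dollar = purchase.quantize(Decimal("1"), rounding=ROUND_UP)
    return next_dollar - purchase

def total_donation(purchases: Iterable[Decimal]) -> Decimal:
    """Sum the spare change across a list of card purchases."""
    return sum((round_up_donation(p) for p in purchases), Decimal("0"))

if __name__ == "__main__":
    card_activity = [Decimal("4.35"), Decimal("12.99"), Decimal("7.00")]
    print(total_donation(card_activity))  # 0.65 + 0.01 + 0.00 = 0.66
```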

Similarly, consider “Bail Bloc,” a blockchain-based bail app, also launched in late 2017, which “allocates a small percentage of the operating device’s excess computing power to mine cryptocurrency.”[63] The Bail Bloc app converts the cryptocurrency to dollars and donates the proceeds to The Bronx Freedom Fund, which uses the money to post bail for pretrial detainees. In the first two months after launch, the app raised about $5,000.[64]

An app backed by rapper and entrepreneur Jay-Z—“Promise”—aims to provide local criminal-justice systems with an alternative to pretrial incarceration and the conventional bail process.[65] The app (which is currently in the design stage) would “monitor and support participants” by generating a calendar of obligations such as court appearances, drug testing, and substance-abuse counseling, and then reminding users to meet those obligations.[66] The app would also provide referrals and support for job training, counseling, housing, and other needs.[67] The Promise app would enable case managers to “monitor compliance with court orders and better keep tabs on people via the app.”[68] As of March 2018, the app was being tested in one county, and the designers were in talks with other counties to offer the service as an alternative to pretrial detention in county jails.[69]

There are also profit-based apps designed to help individuals make bail. For instance, when people have been pulled over and think they will be arrested, they can use the “Arrest SOS” app.[70] The app sends a message to an attorney and a bail-bond agent in the ZIP code where the arrest occurs, and the bail bondsman immediately begins the bail process.[71] The idea is not only to speed up the process of making bail, but also to help arrestees who, once their cell phones have been impounded during booking, cannot remember the phone numbers needed to call a friend or relative.[72] A similar app—“iGotBerries”—operates on a fifteen-minute delay so that an individual can tap the app immediately after being pulled over, but still cancel the request if she is released at the scene rather than being arrested.[73]

Of course, these bail apps have yet to make a dent in the massive multi-billion-dollar bail industry.[74] Nevertheless, they signal how app developers are seeking to disrupt the traditional bail paradigm and empower individuals to deal more effectively with the criminal-justice system.

B. Apps That Record and Report on Police Interactions

In 2012, the New York affiliate of the American Civil Liberties Union (ACLU) introduced the “Stop and Frisk Watch” app. The app enables users to record footage of police stops and immediately send it to the New York Civil Liberties Union (NYCLU), and to report incidents they observed but did not film.[75] The app also alerts users when people near their location are being stopped by the police.[76] Finally, the app has a “know your rights” feature that informs users about their right to film the police.[77]

After the release of the NYCLU Stop and Frisk Watch app, other ACLU chapters followed suit with a “Mobile Justice” app that provides users with the same general functionality as the Stop and Frisk Watch App.[78] The Mobile Justice app is available in seventeen states and the District of Columbia.[79]

C. Untapped Potential: The Miranda App

The Supreme Court’s Miranda doctrine is supposed to help suspects avoid being coerced into making confessions.[80] Yet, scholars have documented for decades how most suspects illogically waive their Miranda rights and confess, even after receiving Miranda warnings.[81] The reason may be that being confronted by a police officer is inherently coercive.[82] Or it could be that the warnings go by so quickly[83] and are sometimes read incorrectly by police, such that the suspects do not truly understand them and waive their rights as a result. Or perhaps some suspects are visual learners and do not really process information that is provided verbally.[84] Others may simply think remaining silent in the face of accusations is unnatural and makes them look guiltier.[85]

Professors Andrew Guthrie Ferguson and Richard Leo recently proposed a Miranda app that would solve many of these problems.[86] The Miranda app would be free for all devices and would present individuals’ Fifth Amendment rights in formats suited to both visual and auditory learners.[87] Moreover, providing the Miranda rights on an app would enable the suspect to review the law in a “slow, clear, and repetitive manner.”[88] If a suspect were confused, he could go back and review the options more than once.[89] The app “could even offer individuals a choice of programs that might be more culturally relevant to their particular circumstance.”[90]

The Miranda app is certainly a good idea. Suspects would benefit from a clearer explanation and understanding of their Miranda warnings. And some police officers would benefit by having a clear statement of the Miranda warnings available at the touch of a button. Indeed, in the past, Apple’s App Store sold a Miranda app, created by a police officer, that listed all of the warnings and translated them into Spanish.[91] The American Bar Association (ABA) recently started a pilot program in New Orleans in which police officers “provide a Miranda translation using Spanish phrasing that has been approved by certified translators with plain language pictographic images and audio.”[92] Although not yet an app, the ABA program utilizes some of the same reforms outlined by Professors Ferguson and Leo and could easily be converted to a more technologically sophisticated cell phone or iPad application.

While an app would effectuate the spirit of the Miranda decision, there are unfortunately substantial obstacles to widespread adoption. Police departments have a disincentive to create and adopt a Miranda app that does a thorough job of helping suspects understand their Miranda rights; confused suspects are more likely to waive their rights and confess, and police departments like to get confessions.[93] Moreover, police officers face minimal repercussions if they read the warnings incorrectly. Except in rare cases,[94] courts will reject Miranda challenges based on the argument that the officers misread the warnings.[95]

In sum, under the current legal regime, police departments do not have an incentive to create a Miranda app or adopt one created by a third party; they may actually think they are strategically better off without one. Not surprisingly, the New Orleans Police Department, which is testing Miranda warnings with Spanish translation and pictographic images, is doing so while under a consent decree with the United States Department of Justice.[96] Without comparable pressure on most police departments, it may be a long road to implementing a widely used Miranda app.

V. Democracy and Criminal-Justice Apps

What are we to make of the proliferation of criminal-justice apps? Will the apps described in this essay revolutionize criminal justice the way cell phones and internet technology have altered so many other areas of life? Today, people use their cell phones for music, email, texting, photography, podcasts, traditional news, and, perhaps most significantly, social media. Platforms such as Facebook and Twitter (which are predominantly used as cell phone apps[97]) have played a significant role in everything from overthrowing foreign dictatorships[98] to quite possibly altering the outcome of the 2016 presidential election.[99] Is criminal justice next?

The case for cell phone applications being a democratizing force is straightforward. Criminal-justice apps put more information in the hands of the individuals who need it and help them to exercise their constitutional and statutory rights.[100] For instance, in the past, an average person with no legal education who was pulled over for DWI had no idea whether he had the legal right to refuse a breathalyzer or where he would find a lawyer with the special expertise to help him.[101] Now, his cell phone can tell him what the police are legally permitted to do and it can immediately direct him to a lawyer specializing in DWI defense.[102]

Not that long ago, a person who knew he was about to be arrested might have resigned himself to languishing in jail over the weekend. Today, he can tap on a cell phone app that will contact a bail bondsman and initiate the process to post bail before the arrest occurs.[103] Moreover, because poor people who cannot make bail face an increased risk of conviction, criminal-justice apps that raise and distribute bail money can help the poor have the same chance at justice as the more affluent.[104]

When an arrestee needs to find the right lawyer to help defend herself, criminal-justice apps can help her to effectuate her Sixth Amendment right to counsel.[105] Later in the process, criminal-justice apps can help a suspect navigate the court process and even expunge his conviction.[106]

In short, criminal-justice apps are democratic because they directly convey valuable information and help individuals overcome monetary obstacles in order to exercise their constitutional rights.

At present, of course, criminal-justice apps serve a limited audience. Although the number of apps is growing, the total number of downloads (i.e., the utilization) is not huge. To put it in perspective, the Appolition app raised $140,000 in its first six months.[107] That money likely had a huge impact on the lives of the pretrial detainees who were thus extricated from pretrial incarceration, but the United States has a multi-billion-dollar bail industry,[108] making $140,000 a tiny sum by comparison.

While criminal-justice apps presently have a modest footprint, it is not difficult to envision how they could grow. For a point of comparison, consider a driver looking for a coffee shop on an unfamiliar highway using an early-generation iPhone map. The early iPhone map certainly constituted progress—our driver no longer had to completely guess where to exit the highway—though it still was not easy for the driver to quickly find a coffee shop. Today, however, there are not only apps that identify which highway exits have coffee shops,[109] but also corporate-designed apps that direct drivers to the closest store. Our driver can tap on her Starbucks app, find the closest location, order her favorite drink, and be directed right to the store—including to a location she has never visited before.[110]

The question, then, is whether criminal-justice apps will ever progress from their current state—what we might think of as equivalent to the early iPhone map application—and bring us to the point where navigating the legal system is as easy, egalitarian, and ubiquitous as finding a roadside Starbucks and ordering a drink from your phone. The short answer, unfortunately, is likely no.

Law is not simple, in either doctrine or logistics. The complexities of a DWI prosecution cannot be answered in a few simple statements on a cell phone application.[111] Each criminal case is different, and nuanced analysis is often critical. Nor can the criminal-justice process easily be described in detail on a cell phone application. Lawyers practice for years to become experts in all the procedural steps and motions that can occur in a criminal case. Moreover, even within the same courthouse, there are procedural variations from judge to judge. In short, criminal law and procedure cannot be simplified in a cell phone application, except at a very high level of generality.

Nor is the quality of criminal-defense lawyers easily reduced to a cell phone rating. Thousands of criminal-defense lawyers handle numerous different types of cases. While TripAdvisor can help individuals determine which hotel is the cleanest and quietest, there are simply too many variables in criminal cases to allow for a comparably informative rating system of criminal-defense attorneys. To note the most obvious variable, the strength of criminal charges varies by defendant. Some criminal cases are so strong that Perry Mason could not help the defendant, while in other cases the charges are so weak that even a terrible attorney could win at trial or negotiate a favorable plea bargain. Most Yelp users can agree which restaurants have the best pizza and fastest service because the same food (at least by and large) is being served to all the patrons. Criminal cases are far more individualized.

While criminal-justice apps can serve a democratizing purpose, our expectations for the scope and speed of change should be modest. Americans have grown accustomed to rapid technological change. For example, in 1998, most people watched movies by driving to a video store (often a Blockbuster Video) to rent physical copies. In less than a decade, Blockbuster was in free fall, and Netflix took over the market by first cost-effectively delivering movies directly to consumers’ homes[112] and, only a few years later, shifting to a highly successful streaming-based model.[113] Technology revolutionized the home movie market—twice—in a very short period of time.

The speed of the Netflix revolution (or that of Twitter, Facebook, Instagram, or other platforms, for that matter) simply is not likely in the criminal-justice space. The criminal-justice system is made up of thousands of diverse local systems. Most cases are not handled at the federal level, or even at the state level for that matter. The criminal-justice “system” is actually thousands of different counties with their own prosecutors, defense attorneys, and judges.[114] The variety of procedural and substantive rules across jurisdictions would make it incredibly difficult to develop a nationwide application.

Furthermore, there are no large institutional players positioned to drive change. The institutional players that do cross county lines—think of the ABA, the ACLU, and the National Association of Criminal Defense Lawyers (NACDL)—do not have the market power to effect rapid technological change. Additionally, they are not powerhouse technology players, and they are not likely to become one. Perhaps most importantly, they do not have the singular focus of Silicon Valley companies. The ABA, ACLU, and NACDL (and other organizations like them) have diverse sets of priorities. Cell phone applications that attempt to democratize criminal justice are not even close to the top of their lists. Of course, the future could bring a new criminal-justice player that we are not presently aware of. There was no Netflix during the Blockbuster era, and it was not that long ago that we lived in a world without behemoths like Amazon, Facebook, and Twitter. Without a profit motive, however, it is difficult to see a disruptive force like those companies revolutionizing the criminal-justice space.

In short, criminal-justice apps serve a democratizing purpose. They educate the citizenry and further the exercise of constitutional rights. Criminal-justice apps will bring change to the system, but they are not likely to be game changers. Instead, we should anticipate that criminal-justice apps will bring modest change at a modest pace.

VI. Conclusion

The criminal-justice system is confusing. Most people do not have a good grasp of either the substantive criminal law or criminal procedure. Moreover, the system appears to be stacked against those who do not understand their rights and those who are too poor to afford bail. In recent years, lawyers, activists, and policymakers have introduced cell phone apps that are very slowly beginning to democratize the criminal-justice system. These criminal-justice apps teach individuals about the law, help suspects and defendants connect with lawyers, assist defendants in navigating the judicial system, and undertake reform efforts by attempting to bring about systemic changes to problematic areas such as the bail process. Criminal-justice apps serve a democratic purpose by conveying valuable information and lessening the financial obstacles defendants face in exercising their constitutional rights.

We should, however, be cautious and not expect too much change. Substantive law is far too complicated and legal processes far too intricate to distill into easy-to-use apps. Moreover, there is no large institutional player driving a revolution of criminal-justice cell phone applications. Criminal-justice apps are therefore likely to be a positive, though modest, democratizing force.

 


[1] See Atkins v. Parker, 472 U.S. 115, 130 (1985) (“All citizens are presumptively charged with knowledge of the law.”).

[2] See Paul H. Robinson & Michael T. Cahill, The Accelerating Degradation of American Criminal Codes, 56 Hastings L. J. 633, 638 (2005) (“The proliferation of potentially redundant offenses causes several significant problems. First, overstuffed criminal codes make it more difficult for the average citizen to understand what the criminal code commands.”); Paul H. Robinson & Michael T. Cahill, Can a Model Penal Code Second Save the States From Themselves?, 1 Ohio St. J. Crim. L. 169, 170 (2003).

[3] See Michael T. Cahill, Attempt, Reckless Homicide, and the Design of Criminal Law, 78 U. Colo. L. Rev. 879, 953 (2007) (“Rather than promoting the principle of notice, today’s criminal law creates an impregnable network of prohibitions that no one but a criminal law expert could decipher.”); see also Stephanos Bibas, Designing Plea Bargaining from the Ground Up: Accuracy and Fairness Without Trials as Backstops, 57 Wm. & Mary L. Rev. 1055, 1075 (2016) (finding that defendants may have difficulty understanding and recognizing technical doctrines, such as mens rea and accomplice liability, and evaluating the elements of crimes).

[4] See Stephanos Bibas, Transparency and Participation in Criminal Procedure, 81 N.Y.U. L. Rev. 911, 913 (2006).

[5] See Alafair S. Burke, Consent Searches and Fourth Amendment Reasonableness, 67 Fla. L. Rev. 509, 526 (2015).

[6] Practically speaking, the answer is “yes,” as it is impractical for an officer to physically force an individual to blow into a tube. But the Supreme Court recently gave states the green light to criminalize refusal to take a breathalyzer (though not a warrantless blood draw). See Birchfield v. North Dakota, 136 S. Ct. 2160, 2163–65 (2016). 

[7] See Richard A. Leo, Inside the Interrogation Room, 86 J. Crim. L. & Criminology 266, 276 (1996) (finding that approximately seventy-eight percent of respondents waived their Miranda rights).

[8] See Mark D. Killian, Survey Looks at How People Choose Lawyers, Fla. B. News, May 15, 2001, at 19.

[9] See Bibas, supra note 4, at 924 (“[L]egalese, jargon, euphemism, and procedural complexities garble court proceedings.”).

[10] See Samuel R. Wiseman, Pretrial Detention and the Right to Be Monitored, 123 Yale L.J. 1344, 1360 (2014).

[11] See Jenny Roberts, The Innocence Movement and Misdemeanors, 98 B.U. L. Rev. 779, 832 (2018) (“The most significant predictor of whether a defendant enters a guilty plea is his custodial status.”).

[12] See supra notes 1–3 and accompanying text.

[13] See supra notes 4–9 and accompanying text; Craig M. Bradley, Two Models of the Fourth Amendment, 83 Mich. L. Rev. 1468, 1472 (1985) (“[T]he fundamental problem with fourth amendment law is that it is confusing.”).

[14] See Oh Crap! App, [https://perma.cc/SC4Z-WZK3] (last visited Nov. 10, 2018).

[15] See id. (“The answer to this question can be incriminating, thus, you have the right not to answer it when asked by law enforcement.”).

[16] Id.; Whitestone Young, Is it Mandatory to Take a Field Sobriety Test in Virginia?, [https://perma.cc/DU3V-ENE6] (last visited Jan. 4, 2019).

[17] See Oh Crap! App, supra note 14.

[18] Three prominent studies on the Standardized Field Sobriety Test found that police officers’ arrest decisions based on such tests were accurate, that is, the drivers had a measured BAC of 0.08% or higher, in over 86% of cases. See Steven J. Rubenzer, The Standardized Field Sobriety Tests: A Review of Scientific and Legal Issues, 32 L. & Hum. Behav. 293, 297 (2007). But, because of the potential for confounding variables that are present at many DWI stops, the validity of the experimental design and results of the aforementioned studies has recently been questioned. Id. at 306.

[19] See Oh Crap! App, supra note 14. The app also provides valuable information on the right to consult with an attorney, breathalyzer refusals, and evidence preservation. See id.

[20] See DWI Defense, iTunes App Store, [https://perma.cc/5LNK-VPP7] (last visited Nov. 10, 2018) (“If you speed, roll through a stop sign, forget to signal, drive with a burned-out light, or fail to place new registration tags on your license plate, you risk getting pulled over.”).

[21] Id. 

[22] See ATX DWI, iTunes App Store, [https://perma.cc/YPR2-QXWE] (last visited Nov. 10, 2018).

[23] Id.

[24] See, e.g., Oh Crap! App, iTunes App Store, [https://perma.cc/SC4Z-WZK3] (last visited Jan. 25, 2019); The Dude, iTunes App Store, [https://perma.cc/N3WC-WSYB] (last visited Nov. 10, 2018).

[25] See Oh Crap! App, supra note 24.

[26] See Impaired Driving: Get the Facts, Centers for Disease Control and Prevention [https://perma.cc/4EXE-63ZJ] (last visited Nov. 10, 2018).

[27] See Telephone Interview with Robert Rehkemper, Managing Partner, Gourley, Rehkemper & Lindholm, PLC (Aug. 2, 2018) (interview conducted by Elizabeth Brightwell).

[28] Using a DWI app during a traffic stop does carry a risk. If an individual is fiddling with a phone after being pulled over, there is some chance that an officer will confuse the cell phone with a weapon.

[29] See, e.g., The Dude, supra note 24 (displaying a “call me now” button); Buffalo DWI Lawyers, iTunes App Store, [https://perma.cc/UT4S-T4B5] (last visited Nov. 14, 2018) (providing a defense attorney’s phone number); Louisiana DWI Defense: Glynn Delatte, Jr, iTunes App Store, [https://perma.cc/WG62-4NN4] (last visited Nov. 14, 2018) (displaying a “call me now” button).

[30] See, e.g., DWI Arrest Phone Apps, Lipsitz Green Scime Cambria LLP, [https://perma.cc/P85J-C8YB] (last visited Nov. 14, 2018) (describing features of the law firm’s DWI & Arrest Guide App, including an “I’m Being Arrested!!” button that sends a prefilled email to the firm).

[31] See, e.g., Oh Crap! App, supra note 24.

[32] See How it Works, Bernie Sez [https://perma.cc/EQ3D-FF7J] (last visited Nov. 14, 2018). The app is also able to connect DWI defendants with attorneys. Speeding/DUI Info, Bernie Sez, [https://perma.cc/94NY-9PGD] (last visited Nov. 14, 2018).  

[33] See Got My Legal Help, iTunes App Store, [https://perma.cc/4BVF-AKWH] (last visited Nov. 14, 2018).

[34] Cf. Gene W. Murdock & John White, Does Legal Service Advertising Serve the Public’s Interest?, 8 J. Consumer Pol’y. 153, 162 (1985) (finding that “lower quality lawyers are more prone to use Yellow Pages advertising”).

[35] See John B. Attanasio, Lawyer Advertising in England and the United States, 32 Am. J. Comp. L. 493, 496–97 (1984).

[36] On the lack of information available to consumers in selecting a lawyer, see Linda Morton, Finding a Suitable Lawyer: Why Consumers Can’t Always Get What They Want and What the Legal Profession Should Do About It, 25 U.C. Davis L. Rev. 283, 284–85 (1992).

[37] See Adam M. Gershowitz, The Invisible Pillar of Gideon, 80 Ind. L.J. 571, 572, 591 (2005).

[38] See generally Mary Sue Backus and Paul Marcus, The Right to Counsel in Criminal Cases, 86 Geo. Wash. L. Rev. 1564 (2018) (documenting the various problems facing indigent criminal defendants who rely on underfunded and overworked public or appointed defense attorneys).

[39] See Neil L. Sobol, Charging the Poor: Criminal Justice Debt and Modern-Day Debtors’ Prisons, 75 Md. L. Rev. 486 (2016).

[40] Katherine Beckett & Alexes Harris, On Cash and Conviction: Monetary Sanctions as Misguided Policy, 10 Criminology & Pub. Pol’y 509, 523–26 (2011).

[41] See MND Debt: Pay US Court, iTunes App Store, [https://perma.cc/CAB9-B4LY] (last visited Nov. 15, 2018).

[42] See Hawaii Courts Mobile, iTunes App Store, [https://perma.cc/DL77-4DUJ] (last visited Nov. 15, 2018).

[43] See Jason Tashea, A Good Name Is Hard To Clear: A National Report of Digital Expungement Applications, SIMLab (Sept. 1, 2016), [https://perma.cc/Z4C8-3G8P].

[44] Id.

[45] See What You Need to Know About Proposition 47, Calif. Dep’t of Corr. and Rehab., [https://perma.cc/SBZ7-W3WJ] (last visited Jan. 4, 2019).

[46] See Jason Shueh, Code for America’s Clear My Record App Gives Ex-Convicts a Second Chance, State Scoop (Dec. 2, 2016), [https://perma.cc/FL69-VE36].

[47] See id.

[48] See Tashea, supra note 43.

[49] See Florida Courts HELP App, Florida Courts Help, [https://perma.cc/5BAG-8766] (last visited Jan. 4, 2019) (discussing the app’s features).

[50] See id.

[51] See Legal Case Navigator, iTunes App Store, [https://perma.cc/G976-RBMG] (last visited Jan. 4, 2019).

[52] See id.

[53] See Cherise Fanno Burdeen, The Dangerous Domino Effect of Not Making Bail, The Atlantic (Apr. 12, 2016), [https://perma.cc/FB8V-3RPU].

[54] See Wayne R. LaFave et al., Criminal Procedure §12.1(b), at 650 (4th ed. 2004).

[55] See Burdeen, supra note 53 (“More than 60 percent of people locked up in America’s jails have not yet been to trial, and as many as nine in 10 of those people are stuck in jail because they can’t afford to post bond.”).

[56] See Yale Law Sch. Allard K. Lowenstein Int’l Human Rights Clinic, “Forced into Breaking the Law”: The Criminalization of Homelessness in Connecticut 18 (2016), [https://perma.cc/2YPU-KMLF].

[57] See Roberts, supra note 11, at 832.

[58] See Eli Rosenberg, Judge in Houston Strikes Down Harris County’s Bail System, N.Y. Times (Apr. 29, 2017), [https://perma.cc/F75T-G6G7].

[59] See Shima Baradaran Baughman, The Bail Book: A Comprehensive Look at Bail in America’s Criminal Justice System 1–11 (2018).

[60] See Victoria Law, This App Collects Spare Change to Bail People Out of Jail, Wired (Jan. 2, 2018, 7:00 AM), [https://perma.cc/RGC8-9L7F]; see also Frequently Asked Questions, Appolition, [https://perma.cc/Z6AD-P583] (last visited Jan. 4, 2019) (describing how the app works).

[61] Law, supra note 60.

[62] Allana Akhtar, A Movement Is Underway to End Cash Bail in America. This App Found an Ingenious Way to Help, Money (Jan. 23, 2019), http://money.com/money/5509560/a-movement-is-underway-to-end-cash-bail-in-america-this-app-found-an-ingenious-way-to-help/; see also @blackwomangaze, Twitter (June 16, 2018, 10:00 AM), [https://perma.cc/F3EL-K8Q3] (retweeted by Appolition’s Twitter profile on June 16, 2018) (claiming that the app raised $140,000 within six months of launch).

[63] See Arvind Dilawar, You Can Download an Easy Blockchain App to Help Poor People Make Bail, Quartz (Jan. 23, 2018), [https://perma.cc/924Z-SSKT].

[64] See id.; see also infra Part V for a discussion of whether these apps will be successful.

[65] See Jenna Amatulli, Jay-Z’s Roc Nation Partners With App Aiming To Better Criminal Justice System, Huffington Post (Mar. 19, 2018, 4:20 PM), [https://perma.cc/7RXN-KUPK].

[66] Id.

[67] Id.

[68] See Megan Rose Dickey, Bail Reform’s Complex Relationship with Tech, TechCrunch (May 20, 2018), [https://perma.cc/VF5C-T43K].

[69] See Amatulli, supra note 65.

[70] See The App That Gets You Out of Jail, Arrest SOS, [https://perma.cc/3F36-RVNT].

[71] Id.

[72] See id.; see also United States v. Edwards, 415 U.S. 800, 807 (1974) (“[O]nce the accused is lawfully arrested and is in custody, the effects in his possession at the place of detention that were subject to search at the time and place of his arrest may lawfully be searched and seized without a warrant.”).

[73] See iGotBerries: DWI SOS App & Police SOS, Google Play Store, [https://perma.cc/K2ZB-X2ZF] (last visited Nov. 18, 2018).

[74] Ten issuers underwrite fourteen billion dollars in bail bonds, resulting in two billion dollars of annual profit. Gillian B. White, Who Really Makes Money Off of Bail Bonds?, The Atlantic (May 12, 2017), [https://perma.cc/G4P3-LXQT]. Compare the size of this industry to the roughly $200,000 raised by Appolition, see supra note 62, and the $5,000 raised by Bail Bloc. See supra note 64.

[75] See Stop and Frisk Watch App, N.Y. Civ. Liberties Union, [https://perma.cc/PHD3-YK3N] (last visited Nov. 18, 2018); see also Azi Paybarah, Civil Libertarians Introduce a Stop-and-Frisk App, Politico (June 6, 2012, 2:15 PM), [https://perma.cc/Z6JP-TYDH] (describing the intended function of the app prior to its release in 2012).

[76] Stop and Frisk Watch App, supra note 75.

[77] Id.

[78] See ACLU Apps to Record Police Conduct, Am. Civ. Liberties Union, [https://perma.cc/5HVQ-9E6L] (last visited Jan. 7, 2019).

[79] See id. Notably, many of the DWI apps discussed in Part I.A. above also have a recording function that enables the user to record the traffic stop and turn it over to the attorney. See, e.g., The Dude, supra note 24.

[80] See Berkemer v. McCarty, 468 U.S. 420, 433 (1984) (“The purposes of the safeguards prescribed by Miranda are to ensure that the police do not coerce or trick captive suspects into confessing.”) (emphasis omitted).

[81] See, e.g., Leo, supra note 7, at 276 (finding that approximately seventy-eight percent of suspects waived their rights, even after having the Miranda warning read to them).

[82] I do not mean to suggest here that the police behaved illegally. A suspect can internally feel compelled to answer, even though the police followed proper procedure. See Lawrence Rosenthal, Against Orthodoxy: Miranda Is Not Prophylactic and the Constitution Is Not Perfect, 10 Chap. L. Rev. 579, 594–601 (2007).

[83] See George C. Thomas III & Richard A. Leo, The Effects of Miranda v. Arizona: “Embedded” in Our National Culture?, in 29 Crime and Justice: A Review of Research, 203, 247, 250 (Michael Tonry ed. 2002).

[84] See Jayne Elizabeth Zanglein & Katherine Austin Stalcup, Te(a)chnology: Web-Based Instruction in Legal Skills Courses, 49 J. Legal Educ. 480, 488 (1999).

[85] See Albert W. Alschuler, Miranda’s Fourfold Failure, 97 B.U. L. Rev. 849, 890 (2017).

[86] See Andrew Guthrie Ferguson & Richard A. Leo, The Miranda App: Metaphor and Machine, 97 B.U. L. Rev. 935 (2017).

[87] Id. at 950–51 (“[B]ecause the medium of an App allows for digital innovation, we envision video, graphics, and animations adding explanatory power to the design. Written descriptions of legal terms could be accompanied by visual explanations through images, graphics, animations, or hyperlinks. Videos of real people, avatars, or a combination of the two could be used to capture the attention of viewers. A narrator (available in multiple languages) would guide users through the process of understanding Miranda warnings and obtaining a valid waiver or acknowledging the invocation of rights.”)

[88] Id. at 951.

[89] Id.

[90] Id.

[91] Eugene Nielsen, Law Enforcement iPhone Apps, Part 1, Hendon Media Group, [https://perma.cc/L4RC-V49H] (last visited Jan. 11, 2019) (describing the “Police Miranda Warning” app); see also Miranda Warnings/Rights, Google Play Store, [https://perma.cc/TU8Q-RFNW] (last visited Jan. 7, 2019) (providing a quick reference guide to assist law enforcement and security officers in providing the Miranda warning).

[92] See Innovative Miranda Tools Being Tested by New Orleans Police in Effort to Broaden Access to Justice, Am. B. Ass’n (July 27, 2018), [https://perma.cc/QFC2-BWSW].

[93] See William J. Stuntz, Miranda’s Mistake, 99 Mich. L. Rev. 975, 983 (2001).

[94] See, e.g., United States v. Street, 472 F.3d 1298, 1312 (11th Cir. 2006) (holding warnings inadequate because the suspect “was not told that anything he said could be used against him in court”).

[95] See Michael D. Cicchini, The New Miranda Warning, 65 SMU L. Rev. 911, 914 (2012) (“The reality is that lower courts have created ‘countless exceptions and loopholes’ to label nearly any imaginable version of the warning as legally adequate—even if it miserably fails to convey anything resembling Miranda’s substance.”); see also Duckworth v. Eagan, 492 U.S. 195, 200–05 (1989) (explaining that as long as the warning reasonably conveys to a suspect his rights, the warning need not be in the exact form described in Miranda).

[96] Emily Lane, NOPD Sets “Ambitious Goal” To Exit Consent Decree by 2020, Chief Says, NOLA (Aug. 5, 2017, updated May 31, 2018), [https://perma.cc/GWA2-V5KZ]; see also supra note 92 and accompanying text.

[97] See Brian R. Fitzgerald, Data Point: Social Networking Is Moving on From the Desktop, Wall St. J. (Apr. 3, 2014, 12:07 PM), [https://perma.cc/KL9D-J6UR] (observing that more than eighty-five percent of Twitter use in 2014 was on mobile devices).

[98] See Maeve Shearlaw, Egypt Five Years On: Was It Ever a ‘Social Media Revolution’?, Guardian (Jan. 25, 2016, 7:35 AM), [https://perma.cc/PTM8-87PL].

[99] See Danielle Kurtzleben, Did Fake News on Facebook Help Elect Trump? Here’s What We Know, NPR (Apr. 11, 2018, 7:00 AM), [https://perma.cc/WW7X-6CJB].

[100] See Renee Newman Knake, Democratizing the Delivery of Legal Services, 73 Ohio St. L.J. 1, 3–4 (2012) (“Access to the law — that is, facilitating and delivering legal services — goes to the very heart of First Amendment concerns and values by contributing to Justice Holmes’ marketplace of ideas, acting as a checkpoint on government action, facilitating individual development, and cultivating political discourse.”).

[101] See supra note 6 and accompanying text.

[102] See discussion supra Part I.A.

[103] See supra notes 31 & 70–74 and accompanying text.

[104] See discussion supra Part IV.A.

[105] See discussion supra Part II.

[106] See supra notes 43–48 and accompanying text.

[107] See supra note 62 and accompanying text.

[108] See White, supra note 74.

[109] See, e.g., iExit Interstate Exit Guide, [https://perma.cc/E6F3-SRG9] (last visited Jan. 8, 2019).

[110] See David Oragui, The Success of Starbucks App: A Case Study, Medium (June 12, 2018), [https://perma.cc/CF39-Q55N] (“Using the geo-location feature, a user can see where the closest Starbucks locations are, the menu at each location, and even place an order that can be ready upon arrival.”).

[111] There are multi-volume treatises devoted to the complex law of driving while intoxicated. See, e.g., Richard E. Erwin & Leon A. Greenberg, Defense of Drunk Driving Cases: Civil—Criminal (Matthew Bender ed., 3d ed. 1971).

[112] See A Timeline: The Blockbuster Life Cycle, Forbes (Apr. 7, 2011, 2:23 PM), [https://perma.cc/W28U-GRP9].

[113] See Seth Fiegerman, Netflix Hits 125 Million Subscribers, CNN (Apr. 16, 2018, 6:48 PM), [https://perma.cc/YR8H-4X8L]; Ashley Rodriguez, Ten Years Ago, Netflix Launched Streaming Video and Changed the Way We Watch Everything, Quartz (Jan. 17, 2017), [https://perma.cc/H4GY-89EC].

[114] See Steven W. Perry & Duren Banks, Bureau of Justice Statistics, NCJ 234211, Prosecutors in State Courts, 2007 – Statistical Tables (2011) (identifying 2,330 state prosecutors’ offices).

Facebook’s Alternative Facts

“[W]e show related articles next to [content flagged by fact-checkers] so people can see alternative facts.”

 

      -Sheryl Sandberg, Sept. 5, 2018

 

Nearly two years have passed since Kellyanne Conway, Counselor to President Donald J. Trump, coined the term “alternative facts” during a television interview. At the time, Conway’s language provoked a sharp response. “Alternative facts are not facts,” her interviewer replied. “They’re falsehoods.”[1] Commentators mostly agreed: Alternative facts were “an assault on foundational concepts of truth”[2] and “the new way of disregarding unpalatable evidence.”[3] Even a year later, one writer likened alternative facts to “reality denial” and claimed that the term had been “mocked out of existence.”[4]

In September 2018, alternative facts roared back into relevance when Facebook’s chief operating officer, Sheryl Sandberg, told a Senate committee that Facebook deploys alternative facts in its fight against misinformation.[5] In Facebook’s strategy, Sandberg explained, potentially false content is presented in users’ News Feeds alongside related articles “so people can see alternative facts.”[6] “The fundamental view is that bad speech can often be countered by good speech,” she said,[7] possibly meaning to evoke Louis Brandeis’s concurrence in Whitney v. California.[8] Thus, she explained, Facebook’s “Related Articles” feature literally places “good speech” (fact-checked content) beside “bad speech” (false content) in users’ scrolling feeds. To Sandberg, alternative facts did not describe reality denial but nearly its opposite: a strategy for evidence-based course correction.

Facebook’s use of Related Articles to fight misinformation and the articles’ public characterization as “alternative facts” together provide a case study for exploring the company’s private ordering of speech. They highlight Facebook’s power to control the communicative content of speech in digital space;[9] Facebook’s highly experimental approach to behavioral modification of users; Facebook’s lack of accountability for its speech-regulating choices beyond its economic relationships; Facebook’s selective neutrality in speech-related disputes; the complex relationship between speech practices that suppress misinformation and those that increase user engagement; and the tension that exists between Facebook’s role as a governor of others’ speech and its role as a corporate political speaker in its own right.

None of these factors justifies regulating Facebook as a state actor—a question that may weigh on the minds of the Supreme Court justices who hear Manhattan Community Access Corp. v. Halleck this term.[10] Permitting the government to regulate platforms like Facebook as state actors would, among other things, promote the “both sides” approach that I criticize in this essay. Competition among platforms obviates the need for content-based regulation, so long as users can choose from among an array of providers. Some of them might, however, justify legal constraints on matters of corporate structure, such as dual class stock, that limit managerial accountability, corrode corporate democracy, and, at Facebook, indirectly but powerfully influence how political discourse gets structured.[11]

In this short essay, I argue that Facebook’s adoption of the alternative-facts frame potentially contributes to the divisiveness that has made social media misinformation a powerful digital tool. Facebook’s choice to present information as “facts” and “alternative facts” endorses a binary system in which all information can be divided between moral or tribal categories—“bad” versus “good” speech, as Sandberg put it in her testimony to Congress. As we will see, Facebook’s related-articles strategy adopts this binary construction, offering a both-sides News Feed that encourages users to view information as cleaving along natural moral or political divisions.

In addition, the company’s adoption of alternative facts reflects its strong adherence to both-sides capitalism, in which corporate actors claim that they must be value neutral and politically impartial in order to mitigate business risks or satisfy fiduciary obligations to their investors. The fallacy of both-sides capitalism is its promise that neutrality in commerce—like Facebook’s claim to be a “platform for all ideas”—results in neutral outcomes. The alternative-facts frame demonstrates this. Though it has been presented, by both executive-branch officials and Facebook’s leadership, as politically neutral, the alternative-facts frame advances an ideological bias against evidence-based reasoning. As I show, Conway herself conceived alternative facts to demonstrate how contestation undermines evidence-based reasoning.[12] Because this is true, Facebook’s alternative facts may unwittingly reinforce the post-truth and politically charged notion that once content is contested, resorting to more information won’t help the user distinguish truth from falsity.

If so, Facebook’s alternative facts provide an example of how the superficial neutrality of both-sides capitalism creates new, digitally enhanced threats to democratic discourse. Broadly, the danger is that businesses will adopt tactics that appear neutral but, at least where the democratic process has been commercialized, produce biased results. Facebook’s embrace of alternative facts raises the specific concern that, in order to mitigate the business risks involved in challenging misinformation, the company is deploying platform features that undermine fact-based reasoning and, as a result, strengthening the political hand of one set of actors.

I. Facebook and Political Misinformation

Facebook, Inc., generates “substantially all” of its revenue from advertising.[13] This includes not only traditional advertisements for products and services but also enhanced content distribution for a fee. Although the company does not disclose the proportion of its ad revenue that comes from political expression, we know that political expression generates value for the company, and that Facebook has actively sought to build engagement around political expression on its platform in the U.S. since at least 2006.[14] In both 2015 and 2016, the upcoming U.S. presidential election was the number one “most talked-about global [topic]” on Facebook.[15]

Key to political discourse on Facebook is the News Feed, which presents users with an updating list of posts by the user’s friends and others.[16] Created in 2006 and initially unpopular with many users, “News Feed” has become the platform’s core feature.[17] In 2012, to compete with Twitter, Facebook made changes to News Feed to promote news articles using author bylines and headlines, enabling Facebook to become the leading social media gateway to news publishers’ web sites.[18] Facebook quickly found innovative ways to monetize News Feed. It began allowing users to pay to boost their posts to the top of their friends’ News Feeds.[19] By 2014, Mark Zuckerberg was proclaiming that Facebook’s goal was to make News Feed the “perfect personalized newspaper for every person in the world,” by populating each individual’s News Feed with a customized mix of content.[20]

Yet in January 2015—the start of the 2016 election cycle—Facebook announced a self-regulatory reform to counter misinformation: it would reduce distribution of posts that users had reported as hoaxes.[21] Around the same time, Facebook added a specific option for users to report news as false.[22]

In May 2016, just a few months before the election, Gizmodo published charges by an anonymous former Facebook employee that the editors of Facebook’s “Trending” feature censored topics “of interest to conservative readers.”[23] Trending used both an algorithm and an editorial team to populate a running list of popular topics at the top of the Facebook dashboard. Stories “covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook’s algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories.”[24] This was essentially true; Facebook’s Trending editorial team had been curating trending topics with attention to the judgments of well-established news outlets.

A backlash followed; the Republican Party issued a statement accusing Facebook of liberal bias and using its influence “to silence view points.”[25] Facebook’s own data analysis showed that conservative and liberal topics were approved as trending topics “at virtually identical rates.”[26] Nonetheless, Facebook initiated a major policy change, terminating its Trending editorial team in August 2016 and relying exclusively on algorithms to produce the Trending list. Almost immediately, false news stories began to proliferate in the Trending list.[27] To this day, critics trace Facebook’s amplification of false news stories in the lead-up to the November 2016 election to this change from human curators to algorithms. In January 2017, after the election, Facebook modified its Trending algorithm so that it no longer reflected only a story’s popularity among users but also took into account its recognition by content publishers, a change meant to incorporate a measure of credibility; in June 2018, as the U.S. midterm elections approached, Facebook eliminated the Trending feature altogether.[28]

II. Facebook’s Strategy to Fight Misinformation

In the days after the 2016 election, Mark Zuckerberg claimed it was a “pretty crazy idea” that fake news on Facebook had influenced the election “in any way.”[29] He followed this up by writing that Facebook would strive to improve its efforts to combat fake news, but added the caveat that “[i]dentifying the ‘truth’ is complicated.”[30] These statements by the company’s CEO and controlling shareholder—under an uncommon arrangement, Facebook’s dual-class stock vests Zuckerberg with voting control of the company—suggest that reducing misinformation was not a priority at the time. Nonetheless, by the end of 2016, Facebook had begun experimenting with new features to reduce misinformation.

Several themes run through Facebook’s efforts. First, the company says it does not want misinformation on its platform. However, its executives have consistently emphasized that Facebook shouldn’t be “the arbiter of what’s true and what’s false.”[31] Thus, a major tension exists at the heart of Facebook’s efforts: it wishes to preserve the appearance of neutrality, yet it conveys the true–false judgments of fact-checkers to its users and suppresses purportedly false content through down-ranking. Facebook may not issue a final judgment about the truth or falsity of content, but it has created a distribution system that relies on assessments of truth and falsity to determine the scope of a message’s distribution. The company is an arbiter of truth and falsity in the practical sense that it chokes off distribution of purportedly false content.

A second theme is the tension between Facebook’s interest in encouraging user engagement and its interest in censoring false but engaging content. Facebook insists on delivering content that users want, even if what users want is misinformation. “We don’t favor specific kinds of sources — or ideas,” Facebook proclaims in its News Feed Values:

Our aim is to deliver the types of stories we’ve gotten feedback that an individual person most wants to see. We do this not only because we believe it’s the right thing but also because it’s good for our business. When people see content they are interested in, they are more likely to spend time on News Feed and enjoy their experience.[32]

This may be why Zuckerberg was reluctant to ascribe bad motives to Holocaust deniers in a July 2018 interview, when he said that he believed Holocaust deniers were not “intentionally getting it wrong.”[33] If Facebook’s users demand content that denies the Holocaust occurred—and some do—Facebook wants to give it to them. Facebook’s business goal of keeping users engaged is thus sometimes in conflict with its professed desire to get misinformation off its platform. This conflict seems to be at the heart of Facebook’s selective embrace of neutrality as a guiding principle.

A third theme is Facebook’s willingness to experiment with behavioral modification of its users. In the year and a half that followed the 2016 election, Facebook experimented with several behavioral interventions around false news. The purpose of these experiments seems to have been to reduce circulation of obviously false content. Although Facebook has disclosed information about these experiments, it has been markedly less transparent about down-ranking, the practice by which it suppresses content. As a result, we know little about how the company uses down-ranking to suppress misinformation.

A. Facebook’s First Experiment: Disputed Flags

By late November 2016, Zuckerberg was describing to journalists a new “product” that would address concerns about misinformation.[34] This was “Disputed Flags,” a feature employed by Facebook from roughly December 2016 to December 2017. The company marked content in user News Feeds with red icons to signal it had been disputed by fact-checkers or users.[35] Facebook ended the experiment after finding, among other things, that the flags “could sometimes backfire.”[36] It told users that research had shown that “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs—the opposite effect to what [Facebook] intended.”[37]

B. Facebook’s Second Experiment: A Revamped Related Articles Feature

In 2013, Facebook began offering users who read an article “new articles they may find interesting about the same topic.” [38] In this early feature, called “Related Articles,” Facebook supplied additional, recommended content after the user clicked on a link.[39] Its purpose was to increase user engagement and to enhance content customization. Immediately following the 2016 election, Mark Zuckerberg identified “raising the bar for stories that appear in related articles” as one of seven publicly featured “projects” the company had undertaken to address misinformation.[40] This suggests that Facebook eventually came to believe that the original Related Articles feature amplified low-quality content to users before the election.

In spring 2017, while it was experimenting with Disputed Flags, Facebook began testing a different version of Related Articles. The new Related Articles supplied additional content to a user before the user read an article shared in News Feed, and was specifically designed to address misinformation.[41] A few months later, the company told users that it had received feedback that “Related Articles help [sic] give people more perspectives and additional information, and helps them determine whether the news they are reading is misleading or false,” and announced it was expanding the feature.[42]

Related Articles works like this: When someone flags content on Facebook as potentially false, Facebook sends it to third-party fact-checkers. In the United States, Facebook currently uses five fact-check organizations certified by the International Fact-Checking Network: the Associated Press, Factcheck.org, PolitiFact, Snopes.com, and The Weekly Standard Fact Check.[43] Some of these organizations are paid by Facebook for their fact-checking work, but others reportedly reject payment.[44]

If a fact-checker confirms an article’s falsity, Facebook “typically” reduces its traffic by 80%.[45] This is down-ranking, which Zuckerberg has said “destroys the economic incentives that most spammers and troll farms have to generate these false articles in the first place.”[46] Facebook also warns users who are about to share or have already shared the false content, and shows Related Articles—short headlines with links to longer articles—next to the false content. For at least some subject matter, Related Articles are not culled from different sources around the internet, but are created by Facebook’s partner fact-check organizations specifically for the purpose of being appended to flagged Facebook content.[47]
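To make the sequence concrete, the workflow described above can be reduced to a short sketch. The Python below is illustrative only: the Post structure, the field names, and the helper function are my own assumptions, and the 80% figure simply restates Zuckerberg's "typically" estimate. It is not Facebook's actual code or API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    url: str
    base_reach: int                 # expected distribution absent any intervention
    flagged: bool = False           # reported as potentially false
    rated_false: bool = False       # verdict returned by a third-party fact-checker
    related_articles: List[str] = field(default_factory=list)

def apply_misinformation_workflow(post: Post, fact_check_links: List[str]) -> int:
    """Sketch of the flag -> fact-check -> down-rank -> Related Articles sequence."""
    if post.flagged and post.rated_false:
        # Down-ranking: traffic to a false article is "typically" cut by about 80%,
        # so roughly 20% of the baseline distribution remains.
        reach_after_downranking = int(post.base_reach * 0.2)
        # Related Articles: fact-check content is appended beside the false post.
        post.related_articles.extend(fact_check_links)
        return reach_after_downranking
    return post.base_reach

# Illustrative use with made-up numbers.
post = Post(url="example.com/false-story", base_reach=10_000,
            flagged=True, rated_false=True)
print(apply_misinformation_workflow(post, ["politifact.com/check", "snopes.com/check"]))  # 2000
```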

This is a screen shot from a video Facebook posted on December 20, 2017, titled “How Facebook Addresses False News,” which shows the “Related Articles” approach:[48]

[Screenshot: a false post, with two Related Articles from fact-checkers appended beneath it.]

Although Facebook’s example shows the two Related Articles clearly disputing a false article about aliens, some Related Articles do not clearly reject the flagged content. In response to an actual October 2018 post titled “Republicans Vote to Make It Legal Nationwide to Ban Gays & Lesbians from Adopting,” for example, Facebook appended these two related articles[49]:

[Screenshot: the October 2018 post, with two Related Articles from PolitiFact and Snopes appended beneath it.]

These two actual Related Articles are unlike the examples that Facebook provided above, insofar as they lack headlines that refute the false content; the user must click through to the linked content and read the respective articles to understand what (if anything) Politifact.com and Snopes.com believed was false about the original article. It is quite likely that Facebook has data about click-through rates that would tell us something about the success of the Related Articles strategy. The fact that it has not published any data since beginning the Related Articles experiment more than eighteen months ago might suggest that the data doesn’t support the feature’s efficacy.

Facebook has continued to experiment with new tweaks and features to address political misinformation. In the summer of 2018, it revealed plans to create its own news content: news programs on its video service, Watch, produced for a fee by established news companies such as CNN and Fox News.[50] In September 2018, Facebook’s fact-checking product manager, Tessa Lyons, revealed that Facebook had begun using technology to “predict articles that are likely to contain misinformation and prioritiz[ing] those for fact-checkers to review.”[51] According to Lyons, the company uses predictive signals such as reader comments on the post that question its veracity, and the post’s source. If a Facebook Page sharing content has “a history of sharing things that have been rated false by fact-checkers,” it triggers review.[52]
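Lyons's description implies a scoring step that ranks flagged posts for human review. The sketch below shows, in rough form, how two of the signals she mentioned (comments questioning a post's veracity and the sharing Page's history of false ratings) might be combined into a priority score. The function name, the weights, and the saturation rule are assumptions offered for illustration, not Facebook's disclosed model.

```python
def review_priority(comments_questioning_veracity: int,
                    total_comments: int,
                    page_false_rating_history: int) -> float:
    """Toy priority score for routing a post to fact-checkers.

    Combines two signals described by Lyons: readers questioning a post's
    veracity in the comments, and the sharing Page's record of content rated
    false. The weights are arbitrary and purely illustrative.
    """
    doubt_rate = (comments_questioning_veracity / total_comments) if total_comments else 0.0
    history_signal = min(page_false_rating_history / 5.0, 1.0)  # saturates after five strikes
    return 0.6 * doubt_rate + 0.4 * history_signal

# Posts with the highest scores would be surfaced to fact-checkers first.
queue = sorted(
    [("post_a", review_priority(40, 100, 0)), ("post_b", review_priority(5, 100, 4))],
    key=lambda item: item[1],
    reverse=True,
)
print(queue)  # post_b outranks post_a because of its Page's history of false ratings
```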

III. Facebook’s Alternative Facts

A. The News Feed’s Binary Construction

More than a year passed between Facebook’s roll-out of the new Related Articles feature and Sheryl Sandberg’s description of related articles as “alternative facts.”[53] Her remarks may have been intended to evoke Louis Brandeis, the icon of free speech: “The fundamental view,” Sandberg said, “is that bad speech can often be countered by good speech, and if someone says something’s not true and they say it incorrectly, someone else has the opportunity to say, ‘Actually, you’re wrong, this is true.’”[54]

Justice Brandeis’s concurrence in Whitney v. California likewise associated false information with moral wrong: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education,” he wrote, “the remedy to be applied is more speech, not enforced silence.”[55] Of course, Brandeis wasn’t advocating a closed universe of “more speech” provided exclusively by the State, the way that Facebook’s closed universe of News Feed posts presents an exclusive set of curated content. Brandeis’s moral gloss on the solution of “more speech” was grounded, at least in part, on the assumption that citizens, not a single State or a State-like entity, would provide the counter-speech to avert “evil.”

Brandeis also believed that context mattered. “More speech” was the remedy for misinformation only “if there be time.”[56] More speech may not be a viable remedy for misinformation where the context tends to discourage active listening or to discredit the speech. Brandeis’s famous endorsement of “more speech” doesn’t translate easily to social media’s curated feed, especially in light of new insights in behavioral and decision science.

Brandeis conceived of an active speaker and an active listener engaged in “public discussion.”[57] But that assumption does not hold up on social media platforms. In Facebook’s News Feed, information is presented to please the recipient, as determined by Facebook’s customizing algorithms. The “facts” versus “alternative facts” frame of Related Articles interrupts this pleasing data stream’s flow and introduces a binary construction in which content divides between posts that conform to the user’s customized specifications (“bad” speech, in Sandberg’s depiction) and Related Articles that do not (“good” speech). However, if the algorithms got the original assessment correct, the reader may actually experience Related Articles more like “bad” speech interrupting the flow of “good” misinformation. The decision to present point and counterpoint in this format not only sends users the simplistic message that information itself is binary, but also twists the user’s intuitive sense about which information is “good” versus “bad.”

In fact, empiricists have tested the extent to which Facebook’s Related Articles are likely to mitigate “motivated reasoning” and stem the influence of false information disseminated on Facebook. The work of two researchers, Leticia Bode of Georgetown University and Emily K. Vraga of George Mason University, is directly on point.

In the first of two studies, they found that corrective Related Articles successfully reduced misperceptions for individuals who previously held a false belief about GMOs and were shown false information about GMOs in a simulated Facebook News Feed.[58] However, they found no effect in a similar study of subjects who held a false belief about the link between vaccines and autism.[59] Bode and Vraga concluded that the length of time a misperception lingered in public discourse affected its debunk-ability, and that correction was more effective “when false beliefs are not deeply ingrained among the public consciousness.”[60]

In a follow-on study, Bode and Vraga explored how a subject’s conspiracist ideation affected his or her capacity for correction. Research has shown that individuals high in conspiracist ideation—those who endorse multiple unrelated conspiracy theories—are particularly vulnerable to misinformation.[61] Bode and Vraga measured subjects’ conspiracist ideation and then asked them to view a simulated Facebook News Feed, where they were exposed to a post, purportedly from USA Today (but in fact fake), which contained false information.[62] Some subjects were then shown two related articles that debunked the fake story, and others were shown debunking comments by Facebook users.[63] Individuals high in conspiracist ideation tended to rate both types of correction as “equally (not) credible.”[64] Although the study’s authors concluded that correction worked, the corrective effects were “relatively small in size.”[65]

Together, these studies suggest that the more “deeply ingrained” health-related misperceptions are, the less likely it is that Related Articles can debunk them. Individuals high in conspiracist ideation simply did not trust Related Articles. If this is true, political misinformation that connects to deeply ingrained partisan commitments might be particularly difficult to debunk through Related Articles. Facebook may discover, as it did with Disputed Flags, that its assumptions about how people respond to its behavioral interventions are erroneous.

As I have argued elsewhere, “alternative facts” are a rhetorical trick.[66] The frame suggests that, in a controversy, each side presents information in its favor. The two sides can’t agree on the facts because facts are a matter of perspective.[67] Ultimately the post-truth reasoner suggests that facts and alternative facts aren’t particularly helpful for resolving a dispute: the greater the controversy, the greater the cacophony of facts bombarding us from both sides. In such a situation, the post-truth reasoner tells us, other inputs—a gut check, tribal affiliation, or trust in a group leader—can provide a superior basis for decision making. In a post-truth world, where one finds alternative facts, one should use alternative decision-making processes.[68]

As this suggests, Facebook’s “alternative facts” may contribute to, rather than ameliorate, the toxicity of social media discourse. The binary construction of a “both sides” News Feed is part of the problem, not part of the solution.

B. “Both Sides” Capitalism

Fundamentally, Facebook’s both-sides News Feed is evidence of its broader adherence to both-sides capitalism, in which for-profit businesses claim impartiality not as a moral virtue, but as a business imperative. Like other adherents to both-sides capitalism, Facebook treats viewpoint neutrality as key to its economic prospects.

There are many reasons that a platform for political discourse might pledge allegiance to both-sides capitalism. The company might perceive that its monopolistic ambitions do not allow it to cede market share to competitors catering to different political affiliations. It might also see a commercial benefit to presenting “both sides” of controversies: It could encourage users to spend more time on Facebook, or to click through to a broader range of links. Facebook has a business interest in remaining free from regulation. If the company is perceived as partisan, this could encourage the opposing political party to pursue laws that reduce Facebook’s profits or prospects. Finally, Facebook is a political actor in its own right, and an active participant in campaign finance and lobbying. It may view both-sides neutrality as a means to deflect criticism when it spends money to influence politics in its own favor.

Facebook took the both-sides approach so far that it formed a fact-checking partnership with a partisan news source, The Weekly Standard, resulting in a new round of controversy. In September 2018, Facebook came under fire when The Weekly Standard flagged as false an article published by ThinkProgress because of its title, “Brett Kavanaugh Said He Would Kill Roe v. Wade Last Week and Almost No One Noticed.”[69] The title was meant to be hyperbolic rather than literal; the article did not falsely attribute any statements to Kavanaugh. Judd Legum, who later became the publisher of ThinkProgress, captured the critique in a tweet alleging that the purpose behind Facebook’s fact-checking program is “to appease the right wing.”[70]

But both-sides capitalism, as implemented by Facebook, is about more than appeasement. It is the claim that, in order to satisfy its obligations to investors and customers, a company must provide services to anyone who can pay for them, promote any ideology regardless of substance, and treat all ideas equally. Increasingly, Silicon Valley tech companies like Facebook present both-sides capitalism, wrongly, as neutral in operation and neutral in outcome.

Finally, we might ask whether Facebook has a real incentive to foster critical thinking in its users. In other words, perhaps Facebook or its CEO and controlling shareholder, Mark Zuckerberg, benefits by advancing an ideological agenda through the alternative-facts frame. Brand loyalty can be a form of post-truth reasoning, and Facebook has nurtured a valuable brand of social media service. Facebook might believe that it does not benefit by sharpening its users’ critical-thinking skills. Considering all the problems the platform has had with privacy, for example, company managers may worry that well-informed users will delete Facebook and move on to a competitor.

IV. Conclusion

Facebook’s attempt to rehabilitate “alternative facts” during Sheryl Sandberg’s testimony to the Senate Select Committee on Intelligence drew little attention, but it underscores important tensions in the way the company fights misinformation. It also exposed the company’s commitment to “both sides” capitalism on a national stage.

Facebook’s Related Articles strategy adopts the binary frame of “alternative facts,” and thus conditions users to accept a two-sided view of information that may increase polarization and partisanship rather than defuse them. Facebook may have adopted this binary approach because it fits comfortably within the News Feed format, or because the company views political discourse as a series of simple, binary disagreements that can be staged as for-profit entertainment. Either way, information on Facebook reaches up to 185 million people in North America every day. Given the research suggesting how difficult behavioral intervention is, it seems unlikely that Facebook is serious about such intervention, and more likely that Facebook’s evolving features result from the company’s profit motive.

It’s also possible that Related Articles has become a minor strategy, with down-ranking of false content doing most of the work. In preparing this short essay, I went looking for Related Articles in the News Feeds of students and associates, but found few examples. Some avid Facebook users could not recall ever seeing Related Articles in their own Feeds. Is this because Facebook has successfully suppressed false content through down-ranking? It’s hard to know. Without more transparency from Facebook, users and researchers are left in the dark.

 


[1] See Rebecca Sinderbrand, How Kellyanne Conway Ushered in the Era of ‘Alternative Facts,’ Wash. Post (Jan. 22, 2017), https://www.washingtonpost.com/news/the-fix/wp/2017/01/22/how-kellyanne-conway-ushered-in-the-era-of-alternative-facts/ (providing video and transcript of the January 22, 2017, interview) [http://perma.cc/TW6P-YABP].

[2] Bret Stephens, Trump: The Reader’s Guide, Wall St. J. (Jan. 23, 2017), https://www.wsj.com/articles/trump-the-readers-guide-1485216078 [http://perma.cc/N5HV-Z3YC].

[3] Stefan Kyriazis, George Orwell’s 1984 Explains Trump: Doublespeak, Alternative Facts and Reality Control, Express (Jan. 26, 2017), https://www.express.co.uk/entertainment/books/759436/Trump-George-Orwell-1984-Doublespeak-alternative-facts-crimestop-reality-control [http://perma.cc/8DTK-WQVR].

[4] Louis Menand, Words of the Year, New Yorker (Jan. 8, 2018), https://www.newyorker.com/magazine/2018/01/08/words-of-the-year [http://perma.cc/2WKU-RCBZ].

[5] Foreign Influence Operations and Their Use of Social Media Platforms: Hearing Before the S. Select Comm. on Intelligence, 115th Cong., at 1:34:54–1:35:14 (2018) [hereinafter Sandberg Senate Testimony], video available at https://www.intelligence.senate.gov/hearings/open-hearing-foreign-influence-operations’-use-social-media-platforms-company-witnesses [http://perma.cc/7J39-ULU7] (testimony of Sheryl Sandberg, Chief Operating Officer, Facebook).

[6] Id.

[7] Id.

[8] See Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”).

[9] See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1599 (2017) (arguing that private content platforms are systems of governance “responsible for shaping and allowing participation in our new digital and democratic culture”).

[10] Halleck v. Manhattan Cmty. Access Corp., 882 F.3d 300 (2d Cir. 2018), cert. granted, 2018 WL 3127413 (U.S. Oct. 12, 2018) (No. 17-1702).

[11] See, e.g., Chris Hughes, The Problem With Dominant Mark Zuckerberg Types, Bloomberg (Dec. 9, 2018), https://www.bloomberg.com/opinion/articles/2018-12-10/the-problem-with-dominant-mark-zuckerberg-types (describing a growing “international campaign” against super-voting rights for founders).

[12] See Sarah C. Haan, The Post-Truth First Amendment, 94 Ind. L. J. (forthcoming 2019) (manuscript at 6–7), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3209366 [http://perma.cc/Z7NY-YGTD].

[13] Facebook, Inc., Quarterly Report (Form 10-Q) at 28 (Jul. 26, 2018).

[14] See, e.g., Christine B. Williams and Girish J. ‘Jeff’ Gulati, Social Networks in Political Campaigns: Facebook and the Congressional Elections of 2006 and 2008, 15 New Media & Soc’y 52, 56 (2012).

[15] Betsy Cameron and Brittany Darwell, 2015 Year in Review, Facebook Newsroom (Dec. 9, 2015), https://newsroom.fb.com/news/2015/12/2015-year-in-review/ [http://perma.cc/YXY9-657Z]; Sheida Neman, 2016 Year in Review, Facebook Newsroom (Dec. 8, 2016), https://newsroom.fb.com/news/2016/12/facebook-2016-year-in-review/ [http://perma.cc/GYF5-52B2].

[16] See Mark Zuckerberg, Facebook (Sep. 5, 2016), https://www.facebook.com/zuck/posts/10103084921703971 [http://perma.cc/7YFV-F66V] (explaining the thought process behind News Feed in a September 2016 post marking its tenth anniversary). Zuckerberg wrote that “News Feed has been one of the big bets we’ve made in the past 10 years that has shaped our community and the whole internet the most.” Id.

[17] See Farhad Manjoo, Can Facebook Fix Its Own Worst Bug?, N.Y. Times: N.Y. Times Mag. (Apr. 25, 2017), https://www.nytimes.com/2017/04/25/magazine/can-facebook-fix-its-own-worst-bug.html [http://perma.cc/986J-HXSW] (describing the Facebook News Feed as “the most influential source of information in the history of civilization”).

[18] See Nicholas Thompson and Fred Vogelstein, Inside the Two Years that Shook Facebook—and the World, Wired (Feb. 12, 2018), https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/ [http://perma.cc/68B5-FBYN]; Niall Ferguson, What Is To Be Done? Safeguarding Democratic Governance In The Age Of Network Platforms, Hoover Institution (Nov. 13, 2018), https://www.hoover.org/research/what-be-done-safeguarding-democratic-governance-age-network-platforms [http://perma.cc/3VV3-N79Y] (“Facebook and Google are now responsible for nearly 80 percent of news publishers’ referral traffic.”).

[19] See Hayley Tsukayama, Would You Pay to Promote a Facebook Post?, Wash. Post (May 11, 2012), https://www.washingtonpost.com/business/technology/would-you-pay-to-promote-a-facebook-post/2012/05/11/gIQA1nlSIU_story.html [http://perma.cc/4GGR-FBW2].

[20] Eugene Kim, Mark Zuckerberg Wants To Build The ‘Perfect Personalized Newspaper’ For Every Person In The World, Bus. Insider (Nov. 6, 2014), https://www.businessinsider.com/mark-zuckerberg-wants-to-build-a-perfect-personalized-newspaper-2014-11 [http://perma.cc/7C5C-WEMJ].

[21] Erich Owens & Udi Weinsberg, Showing Fewer Hoaxes, Facebook Newsroom (Jan. 20, 2015), https://newsroom.fb.com/news/2015/01/news-feed-fyi-showing-fewer-hoaxes/ [http://perma.cc/9MG4-MNL4].

[22] Id.

[23] Michael Nunez, Former Facebook Workers: We Routinely Suppressed Conservative News, Gizmodo (May 9, 2016, 9:10 AM), https://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006 [http://perma.cc/AJU4-F2TT]. According to the blog, the former employee had worked as a curator of Trending Topics sometime between mid-2014 and December 2015, was “politically conservative,” and “asked to remain anonymous, citing fear of retribution from the company.” Id.

[24] Id.

[25] Team GOP, #MakeThisTrend: Facebook Must Answer for Conservative Censorship, GOP.com: Liberal Media Bias (May 9, 2016), https://gop.com/makethistrend-facebook-must-answer-for-liberal-bias/ [http://perma.cc/6H5R-238K].

[26] Colin Stretch, Response to Chairman John Thune’s Letter on Trending Topics, Facebook Newsroom (May 23, 2016), https://newsroom.fb.com/news/2016/05/response-to-chairman-john-thunes-letter-on-trending-topics/ [http://perma.cc/Z3XK-MD25].

[27] See, e.g., Caitlin Dewey, Facebook Has Repeatedly Trended Fake News Since Firing Its Human Editors, Wash. Post (Oct. 12, 2016), https://www.washingtonpost.com/news/the-intersect/wp/2016/10/12/facebook-has-repeatedly-trended-fake-news-since-firing-its-human-editors/ [http://perma.cc/EAM2-BUBS] (reporting a study from Aug. 31 to Sept. 22 that identified “five trending stories that were indisputably fake,” including a “tabloid story claiming that the Sept. 11 attacks were a ‘controlled demolition’”); Abby Ohlheiser, Three Days After Removing Human Editors, Facebook Is Already Trending Fake News, Wash. Post (Aug. 29, 2016), https://www.washingtonpost.com/news/the-intersect/wp/2016/08/29/a-fake-headline-about-megyn-kelly-was-trending-on-facebook/ [http://perma.cc/FV5A-MU5T].

[28] Will Cathcart, Continuing Our Updates to Trending, Facebook Newsroom (Jan. 25, 2017), https://newsroom.fb.com/news/2017/01/continuing-our-updates-to-trending/ [http://perma.cc/G4UW-VEDV]; Jacob Kastrenakes, Facebook Will Remove the Trending Topics Section Next Week, The Verge (June 1, 2018, 11:48 AM), https://www.theverge.com/2018/6/1/17417428/facebook-trending-topics-being-removed [http://perma.cc/TFX9-LMR3]; Nathan Olivarez-Giles & Deepa Seetharaman, Facebook Moves to Curtail Fake News on ‘Trending’ Feature, Wall St. J. (Jan. 25, 2017), https://www.wsj.com/articles/facebook-moves-to-curtail-fake-news-on-trending-feature-1485367200 [http://perma.cc/W4C4-CALM] (“Facebook’s software will surface only topics that have been covered by a significant number of credible publishers.”).

[29] Deepa Seetharaman, Zuckerberg Defends Facebook Against Charges It Harmed Political Discourse, Wall St. J. (Nov. 10, 2016), https://www.wsj.com/articles/zuckerberg-defends-facebook-against-charges-it-harmed-political-discourse-1478833876 [http://perma.cc/H224-ZE3Z].

[30] Mark Zuckerberg, Facebook (Nov. 12, 2016), https://www.facebook.com/zuck/posts/10103253901916271 [http://perma.cc/9J8F-53J7].

[31] Sandberg Senate Testimony, supra note 5, at 1:34:19–1:34:42.

[32] News Feed Values, Facebook News Feed, https://newsfeed.fb.com/values/ [http://perma.cc/B3W2-ZRSY] (last visited Nov. 8, 2018).

[33] Kara Swisher, Full Transcript: Facebook CEO Mark Zuckerberg on Recode Decode, Recode: Recode Decode (Jul. 18, 2018, 11:01 AM), https://www.recode.net/2018/7/18/17575158/mark-zuckerberg-facebook-interview-full-transcript-kara-swisher [http://perma.cc/QU3Y-JMHN].

[34] Deepa Seetharaman, Mark Zuckerberg Explains How Facebook Plans to Fight Fake News, Wall St. J. (Nov. 20, 2016), https://www.wsj.com/articles/mark-zuckerberg-explains-how-facebook-plans-to-fight-fake-news-1479542069 [http://perma.cc/UY7F-3836]; Mark Zuckerberg, Facebook (Nov. 19, 2016), https://www.facebook.com/zuck/posts/10103269806149061 [http://perma.cc/4CVD-CDPS].

[35] Tessa Lyons, Replacing Disputed Flags with Related Articles, Facebook Newsroom (Dec. 20, 2017), https://newsroom.fb.com/news/2017/12/news-feed-fyi-updates-in-our-fight-against-misinformation/ [http://perma.cc/3BU2-VD6D]; Barbara Ortutay, Facebook Gets Serious About Fighting Fake News, Associated Press (Dec. 15, 2016), https://www.apnews.com/22e0809d20264498bece040e85b96935 [http://perma.cc/X9T2-TLCS].

[36] Jeff Smith, Grace Jackson & Seetha Raj, Designing Against Misinformation, Medium (Dec. 20, 2017), https://medium.com/facebook-design/designing-against-misinformation-e5846b3aa1e2 [http://perma.cc/MKM8-YNCA].

[37] Lyons, supra note 35.

[38] Sara Su, New Test With Related Articles, Facebook Newsroom (Apr. 25, 2017), https://newsroom.fb.com/news/2017/04/news-feed-fyi-new-test-with-related-articles/ [http://perma.cc/8XEW-8AJ3].

[39] Id.

[40] Zuckerberg, supra note 34.

[41] Su, supra note 38.

[42] Id.

[43] Third-Party Fact-Checking on Facebook, Facebook Business, https://www.facebook.com/help/publisher/182222309230722 [http://perma.cc/BN42-NBYJ] (last updated Nov. 7, 2018).

[44] In April 2018, a journalist conducted a study of Facebook’s partnership with these fact-checking organizations for the Tow Center for Digital Journalism at Columbia University. The journalist, Mike Ananny, noted previous reports that the fact-checking partners were paid about $100,000 per year from Facebook for their work. However, Ananny reported that unidentified individuals at several of the organizations told him their organizations had rejected the money. Mike Ananny, The Partnership Press: Lessons for Platform-Publisher Collaborations as Facebook and News Outlets Team to Fight Misinformation, Colum. Journalism Rev.: Tow Ctr. Rep. (Apr. 4, 2018), https://www.cjr.org/tow_center_reports/partnership-press-facebook-news-outlets-team-fight-misinformation.php [http://perma.cc/WM5W-L82X].

[45] Facebook, Inc. Fourth Quarter and Full Year 2017 Earnings Call Transcript, at 3 (Jan. 31, 2018), https://s21.q4cdn.com/399680738/files/doc_financials/2017/Q4/Q4-17-Earnings-call-transcript.pdf [http://perma.cc/82SE-FAJ4] (remarks of Mark Zuckerberg, Chief Executive Officer, Facebook).

[46] Id.

[47] Expanding Our Policies on Voter Suppression, Facebook Newsroom (Oct. 15, 2018), https://newsroom.fb.com/news/2018/10/voter-suppression-policies/ [http://perma.cc/2L2F-YV2J] (describing this process with respect to articles containing information about how to vote).

[48] Dan Zigmond, How Facebook Addresses False News, Facebook, at 1:02 (Dec. 20, 2017), https://www.facebook.com/facebook/videos/10156900476581729/ [http://perma.cc/ZS8G-3C4S].

[49] This screenshot, shared with me by a student, shows Related Articles that appeared in the student’s Facebook News Feed in October 2018. E-mail from student to Sarah C. Haan, Assoc. Professor of Law, Wash. & Lee (Oct. 23, 2018, 6:40 PM EST) (on file with author).

[50] David Ingram, Facebook Enlists Anchors From CNN, Fox News, Univision for News Shows, Reuters (Jun. 6, 2018, 10:03 AM), https://www.reuters.com/article/us-facebook-media/facebook-enlists-anchors-from-cnn-fox-news-univision-for-news-shows-idUSKCN1J21SM [http://perma.cc/3BL4-JCQE].

[51] Seeing the Truth, Facebook Newsroom (Sep. 13, 2018), https://newsroom.fb.com/news/2018/09/inside-feed-tessa-lyons-photos-videos/ [http://perma.cc/U7NH-NXLH].

[52] Id.

[53] Sandberg Senate Testimony, supra note 5, at 1:34:54–1:35:14.

[54] Id.

[55] Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring).

[56] Id.

[57] Id. at 375–76 (“Those who won our independence believed that the final end of the State was to make men free to develop their faculties; and that in its government the deliberative forces should prevail over the arbitrary. . . . Believing in the power of reason as applied through public discussion, they eschewed silence coerced by law—the argument of force in its worst form. . . . It is the function of speech to free men from the bondage of irrational fears.”).

[58] Leticia Bode & Emily K. Vraga, In Related News, That Was Wrong: The Correction of Misinformation Through Related Stories Functionality in Social Media, 65 J. of Comm. 619, 624–27 (2015).

[59] Id. at 628.

[60] Leticia Bode & Emily K. Vraga, See Something, Say Something: Correction of Global Health Misinformation on Social Media, 33 Health Commc’n 1131, 1132 (2018).

[61] Id. at 1133.

[62] Id. at 1134.

[63] Id.

[64] Id. at 1137.

[65] Id.

[66] Haan, supra note 12, at 15–17.

[67] Id.

[68] Id.

[69] Mathew Ingram, The Weekly Standard and the Flaws in Facebook’s Fact-Checking Program, Colum. Journalism Rev. (Sep. 18, 2018), https://www.cjr.org/the_new_gatekeepers/the-weekly-standard-facebook.php [http://perma.cc/NQV7-M9R5].

[70] Id.

Foreword: Facebook Unbound?

The concept of checks and balances is a core tenet of our democracy; we fear letting any single institution become overly powerful or insufficiently accountable. As Americans, we naturally apply this concept first and foremost to the interactions among our three branches of government, given the principle’s constitutional origins. What happens, though, when a handful of exceedingly powerful private actors—today’s behemoth technology companies—begin to have as much control over our lives as the government does? Should the same impulse that drives our commitment to interbranch checks and balances kick in? How can we ensure that our democracy remains our democracy, even when digitized?

In the past, when highly powerful industries have emerged, our democratic system often has responded by erecting legal and regulatory frameworks around them. We applied antitrust laws to break up the railroads and oil companies.[1] We established the Food and Drug Administration and the National Highway Traffic Safety Administration to ensure that drug and car manufacturers take adequate measures to protect citizens’ health and welfare.[2] And we heavily regulate common carriers, such as transportation and telecommunications providers, to ensure that they don’t discriminate against groups within the general public.[3] Yet our three branches of government have engaged in only limited ways with technology companies such as Facebook, Google, and Amazon, even though these companies dominate their industries and have a tremendous influence on every corner of our lives—as various contributions to this symposium illustrate. Our government also has failed to regulate the use of high-technology tools that implicate our privacy, such as facial-recognition software and other controversial uses of machine-learning algorithms. Why has the government been slow to engage? Further, assuming that our society disfavors institutions that accrue unchecked power, especially when they wield that power in a way that affects our physical safety, our privacy, and our democratic arena, are there feasible ways to impose constructive constraints?

As an initial step in thinking about these questions, this essay examines a different context in which our checks and balances have proven weak: the national security space. It recounts the basic challenges that the other two branches have faced in checking the Executive’s national security activities. The essay then identifies the ways in which those challenges resonate in the context of checking technology companies, helping us to understand why it has proven difficult for Congress and the courts (and the Executive) to weave a set of legal constraints around technology companies that offer us social media platforms, build advanced law enforcement tools, and employ machine learning algorithms to help us search, buy, and drive.[4] The essay explores alternative sources of constraints on the national security Executive, drawing inspiration from those constraints to envision other ways to shape the behavior of today’s technology behemoths and other companies whose products are driven by our data.

I. The National Security Executive Unbound

It has become a truism that the Executive faces limited constraints when it undertakes activities to protect our national security. Congress rarely enacts statutes to restrict executive military and intelligence actions, and the courts are often loath to bar the Executive from taking the actions it deems appropriate. Accompanying this truism is a long-running debate about whether it is problematic that the Executive has accrued this much power. The debate reached a high-water mark with the publication of Eric Posner and Adrian Vermeule’s book, The Executive Unbound, which argued that the Executive is effectively unconstrained by law and is limited only by politics and public opinion—and that this is unproblematic.[5] A number of scholars critiqued the book as providing an insufficiently nuanced view of how the executive branch operates, as giving inadequate weight to the power of law to constrain,[6] and as failing to appreciate the costs of an unchecked Executive.[7] Few, however, would contest that the Executive has very broad responsibilities in pursuing national security policies and that it can be difficult to force the Executive to alter or abandon those policies.

There are a variety of reasons why the Executive lacks constraints on its national security actions, at least from predictable sources.[8] Congress, the actor best positioned to impose those constraints, often proves both unwilling and unable to cabin the Executive’s military and intelligence activities, including the use of wartime detention, targeted killings, and the introduction of troops abroad. First, Congress tends to lack knowledge about the details of such activities, including the advanced technologies that the military and intelligence agencies are using.[9] Second, identifying sensible solutions for how to regulate these complicated technologies and programs is hard. It requires Congress to strike a nuanced balance between protecting the country and protecting individual life, liberty, privacy, and fair process. Third, Congress fears being blamed if, as a direct or indirect result of its laws, the country suffers an attack or crisis.[10] Finally, when Congress is divided, it faces the ordinary partisan gridlock that occurs whenever it tries to legislate.

The courts have also hesitated to act. Though the Supreme Court issued several high-profile detainee-related decisions in the decade after September 11, 2001, the Court and lower federal courts have avoided reaching decisions on the merits of a range of national security cases related to rendition, surveillance, detention, and military uses of force. Two related instincts seem to drive this. One is the courts’ self-perception that they lack the technical, military, and foreign-policy experience to correctly decide these questions.[11] The other is their sense that Congress, not the courts, should make the hard policy decisions embedded in these cases because Congress is politically accountable in a way that the courts are not. In a case about the procedures to which detainees at Guantanamo were entitled, for instance, Judge Brown of the D.C. Circuit wrote in a concurrence that “the circumstances that frustrate the judicial process are the same ones that make this situation particularly ripe for Congress to intervene pursuant to its policy expertise, democratic legitimacy, and oath to uphold and defend the Constitution. These cases present hard questions and hard choices, ones best faced directly.”[12] In a case challenging the Executive’s alleged plan to target a U.S. citizen abroad, a D.C. district court relied on a lack of standing and the political-question doctrine to avoid the merits, noting that courts are ill-suited to “make real-time assessments of the nature and severity of alleged threats to national security.”[13] In these and a host of other cases, courts reveal their preference for avoiding decisions on hard national security questions that test the outer bounds of their expertise.

The end result of these enfeebled checks and balances is a very powerful Executive. However, a discussion of executive constraints that focuses only on the actions of Congress and the courts undersells the existence of other factors that constrain the national security Executive, a point I discuss below.

II. Facebook Unbound

Many of the same dynamics that have made it difficult to rein in a powerful national security Executive are playing out in the technology space—leading to what we might call the “Facebook Unbound” phenomenon.[14] Indeed, the academic and foreign-policy conversation about the Executive’s undue power in the national security space, which was a constant refrain in the post-9/11 era, has died down, to be replaced by conversation about the undue power of large technology companies.[15] Several essays in this symposium illustrate the companies’ power and the lack of restrictions on how they use our data or control content on their platforms, and on how the government uses their products in ways that implicate our privacy. The journalist Farhad Manjoo, for example, has adopted the term “Frightful Five” to refer to Amazon, Apple, Facebook, Google, and Microsoft (all of which own other major technology and consumer products companies, including WhatsApp, Instagram, Waze, YouTube, Audible, Zappos, Whole Foods, and Waymo).[16] Other technology companies that have faced limited regulation include social media platforms such as Twitter; manufacturers of self-driving cars; Uber and Lyft; and companies that use “big data” and machine learning algorithms to produce highly sophisticated, privacy-implicating technologies for the U.S. military and federal, state, and local law enforcement.[17]

What unites these companies is their systematic collection and use of vast amounts of user data to make their products more powerful and their use of machine learning algorithms based on that data to make their systems more effective and more profitable. Some observers are untroubled by the relative lack of constraints on these companies and worry far more about the fact that the national security Executive is unbound. After all, the Executive can impose more severe sanctions and direct physical effects on individuals than companies can. In any case, these technology companies wield enormous control over our lives on a daily basis.[18] It is therefore worth exploring why our government has done little to regulate these companies.

The factors that have led to the lack of constraints on these technology companies are markedly similar to those that have produced the national security Executive. First, members of Congress lack sophisticated understandings of how these companies—and the technologies that undergird their products—work. This was brought into sharp relief when the Senate summoned Facebook CEO Mark Zuckerberg to testify about the company’s privacy policies, data leaks, and Russian interference with the 2016 U.S. presidential election. At one point, Senator Orrin Hatch asked Zuckerberg how Facebook managed to make money; Zuckerberg, smiling slightly, responded, “Senator, we run ads.”[19] As Daniel Solove has written, “There may be a few in Congress with a good understanding of . . . technology, but many lack the foggiest idea about how new technologies work.”[20]

Second, knowing what to regulate, in what level of detail, and at what stage in the overall development of technologies such as machine learning is simply hard.[21] Laws can easily be overtaken by events in fast-changing areas such as war fighting or technology.[22] Third, Congress fears undercutting U.S. innovation by regulating too soon, which is not unlike Congress’s fear of deliberately reining in the Executive’s national security decisions, particularly in the face of threats from other actors who have not chosen to self-constrain.[23] The United States seeks to out-innovate China; members of Congress will not want to stand accused of slowing down U.S. companies that are developing artificial intelligence, for instance, while Chinese companies press ahead. Finally, partisanship has kicked in when Congress has tried to regulate.[24]

This is not to say that Congress has enacted no rules regulating technology. In the past few years, Congress has been able to enact laws that regulate cross-border data requests by law enforcement,[25] hold online platforms accountable if they are used to facilitate sex trafficking,[26] and update the Foreign Intelligence Surveillance Act.[27] However, it has failed in its efforts to legislate on the use of encryption, election security (as Jacob Rush details in his contribution), “hacking back,” and drone safety, and it has not tried to regulate facial-recognition software.[28] Efforts to impose federal data-privacy laws on companies are just getting underway.[29]

As with national security issues, some judges have articulated a view that they lack the capacity to correctly assess complicated technical tools and that Congress rather than the courts should be making the hard policy decisions in these areas.[30] In several recent cases that implicated law enforcement uses of new technologies, Justice Alito argued that it is far more desirable for Congress to articulate appropriate uses of law enforcement technologies than for the courts to decide those questions.[31] Although the Court did ultimately reach decisions in these cases, the Court in Carpenter v. United States asserted that it was producing a narrow holding that applied only to the specific technology at issue.[32] And in United States v. Jones, the majority relied on a Fourth Amendment trespass analysis to produce a relatively narrow opinion that would not reach technologies such as remote GPS tracking.[33] Finally, the Court obviously decides what cases to hear, and recently declined to grant certiorari in a case involving the use of predictive algorithms in criminal sentencing.[34]

Where we are dealing with constraints (or the lack thereof) on private companies, we also must ask whether the Executive has imposed regulations or other constraints. The Trump administration seems uninterested in taking steps to influence the behavior of social media platforms, even assuming it has the authority to do so. The President seems to embrace, rather than bemoan, the divisive aspects of social media that Sarah Haan describes. Further, the Executive currently has limited incentives to shape the production and use of tools that law enforcement and military actors have started to deploy, such as facial-recognition software, body-worn cameras, and cell-site location information.[35] Finally, the Federal Trade Commission examined but chose not to bring an antitrust case against Google, and the Trump administration does not appear poised to pursue one against Amazon.[36] In short, the Executive has done little to bind Facebook and the various other types of technology companies described in this essay. We thus find ourselves confronting broadly unregulated technology actors that know and use oceans of information about us, holding vast amounts of power over what we read, buy, watch, think, and drive.

III. Constraining Our Unbounded Actors

Even though traditional checks and balances by Congress and the courts do not function very well in the national security space, the Executive nevertheless confronts certain constraints on its behavior. Most prominently, citizens can choose to vote the President out of office. There are a number of other, more nuanced ways in which the executive branch checks itself and is checked by nontraditional actors. First, the Executive often seeks public support for its decisions, which may require it to be more transparent than it would otherwise prefer.[37] In a recent example, President Obama disclosed how the Executive made decisions about targeted killings and what constraints it imposed on itself.[38] Sometimes leaks by government officials foist involuntary transparency on the Executive, too. Second, the Executive often makes changes to its national security policies when it faces litigation challenging those policies and it fears that it might lose the case.[39] Third, executive-branch lawyers, who often have a commitment to law as a guiding principle, help ensure that the executive branch generally complies with applicable laws and policies, even when it is inconvenient to do so.[40] Fourth, the Executive often needs to rely on allies for assistance in executing its foreign policy and national security decisions, which means that U.S. national security activities are sometimes indirectly subject to allies’ legal and policy constraints.[41] Finally, the Executive itself engages with (or willingly brings itself under the supervision of) actors who are perceived as more neutral, such as the federal judges on the Foreign Intelligence Surveillance Court or the Department of Homeland Security’s Office for Civil Rights and Civil Liberties.

Assuming that Congress will be unable—at least in the short term—to produce significant legislation on privacy, machine learning algorithms, or law enforcement uses of tools such as facial-recognition software, the same types of mechanisms that constrain the national security Executive might helpfully constrain the technologies and companies that are the subject of this symposium.

Public pressure and critiques already have played an important role in prompting companies such as Facebook and Twitter to establish more robust policies on user privacy and content regulation. This pressure has also forced the companies to be more transparent about their privacy and content moderation policies and the algorithms that they use to identify trolls and harassers.[42] Further, public criticism has led Facebook to remove the accounts of particular actors, including those of twenty Burmese officials and organizations responsible for what the United Nations concluded was genocide against the Rohingya.[43] These new pressures come not only from the technologies’ users but also from the companies’ employees.[44] Facing demands from its employees, Google declined to extend its contract with the Defense Department, under which the company provided support to a project deploying machine learning algorithms to war zones.[45] Amazon is facing a similar challenge: 450 of its employees reportedly wrote to CEO Jeff Bezos to demand that Amazon cease selling its facial-recognition software (which the company calls Rekognition) to police.[46]

Like the national security Executive, these companies also are keenly attuned to potential litigation or legislation, and often change their behavior in an effort to fend off those alternatives. Microsoft in particular has been forward-leaning in an effort to help shape any legislation that might come down the pike. In testimony before the U.K. Parliament about regulation of artificial intelligence (“AI”), a Microsoft official told the committee that regulating AI was a job “for the tech industry, the Government, NGOs and the people who will ultimately consume the services” and that it was important “to find a way of convening those four parties together to drive forward that conversation.”[47] Microsoft has also asked Congress to regulate facial-recognition software and has suggested specific areas on which Congress might focus.[48] Microsoft, Twitter, and Google all revealed how Russian agents had used their platforms in the lead-up to their officials’ testimony before Congress, where they expected to be asked about that topic.[49] Facebook announced its strengthened advertising disclosure policies in an attempt to preempt a bill imposing such requirements by law.[50] More recently, Facebook revealed its intention to create an international body to adjudicate content decisions, which may well be an effort to stave off more stringent regulation by Congress.[51] There are exceptions: Google’s CEO declined to appear before Congress, for example, even though he faced significant public pressure to do so.[52] In general, though, even if Congress cannot unite to enact laws, it has managed to convene congressional hearings that have extracted important information and policy changes from the companies.

Foreign governments have also imposed constraints on U.S. technology companies. Just as the U.S. military and intelligence communities sometimes find themselves bound by foreign laws during overseas operations, the U.S. tech companies face direct exposure to foreign legal systems, which have in several cases imposed onerous laws and penalties on them. For example, the EU’s General Data Protection Regulation (“GDPR”) requires companies that process personal data to obtain the affirmative consent of those whose data they are using (the “data subject”).[53] Those processors must also provide, at the data subject’s request, any information they have on the subject; must rectify inaccurate personal data; and must erase the subject’s data at her request. Finally, the GDPR generally prohibits companies from transferring personal data outside the EU, unless the European Commission determines that the data protection laws of the receiving jurisdiction are adequate.[54] Although formally directed only to companies that are located in the EU or that provide services to or monitor the behavior of people in the EU, the GDPR’s impact has been global. Virtually all of the companies discussed in this essay must comply with the GDPR. The EU also fined Google $2.7 billion for disadvantaging its competition by steering search engine users toward its comparison-shopping site.[55] The EU apparently also is considering whether to bring a case against Amazon.[56] In short, foreign governments have constrained U.S. tech companies, even when the U.S. government itself has not.

Finally, these companies have sometimes turned to neutral actors to increase their credibility among users and Congress. As Sarah Haan details, Facebook has enlisted the help of third parties to fact check and identify fake news.[57] Further, Facebook’s plan to set up an independent body to adjudicate content takedowns would draw on the credibility of actors perceived as neutral and expert.[58] Tech companies including Google, Microsoft, Facebook, Nokia, and Ericsson have joined the Global Network Initiative, which commits them to respect freedom of expression and privacy rights when faced with government pressure to turn over user data or restrict communications.[59] Other companies have supported nonprofits such as OpenAI (the goal of which is to ensure that advanced AI capabilities are used for good, not harm) and the Partnership on Artificial Intelligence to Benefit People and Society, which Google, Facebook, Amazon, IBM, and Microsoft formed to establish ethical standards and best practices for AI researchers.[60] The companies have taken all these steps to retain their users’ support for their products and policies (and so maintain their profits).

Most recently—in a move that reflects the operation of three of these constraints at once—Facebook agreed to allow French regulators to monitor Facebook’s policies and tools to observe how the company combats hate speech and to help structure future French regulatory efforts to fight online hate speech more generally.[61] This reflects an effort by Facebook to shape prospective legislation; a decision by a foreign government to impose pressure on the practices of a U.S. platform; and an attempt by Facebook to persuade its users that it is making serious efforts to improve its policies by inviting a kind of “neutral arbiter” to observe its practices. We are likely to see more of all of these types of constraints unless and until—and perhaps even after—legislators decide to act.

IV. Moving Forward

Notwithstanding these different flavors of alternative constraints, the lack of consistent checks by Congress and courts on these technology companies means that the constraints are unpredictable, partial, and of questionable durability. As a result, there is still an important role for statutory constraints, should Congress find the political will to impose them. This is not to say that Congress should regulate for regulation’s sake. Restrictions should be deliberate, balanced, effective, and sensitive to the speed at which technologies develop. In an ideal world, our democratic institutions would reassert themselves to develop legislation in five primary areas: antitrust,[62] the appropriate use of algorithms,[63] privacy, the responsibilities of technology platforms for the content they host, and the use by law enforcement of high-tech tools such as facial-recognition software. A host of proposals already exists on each topic, including in Katelyn Ringrose’s essay on body cameras and facial-recognition systems. As another example, Congress could legislate norms for the development and deployment of machine learning algorithms at a relatively high level of generality (identifying impermissible sources of data, requiring companies to test input data and outputs for systematic bias, and requiring a level of algorithmic explanation when algorithmic decision-making affects individuals).[64] Institutionally, Congress could also bring on board more staffers with technological experience; create opportunities for technology fellows from think tanks and educational institutions; and restore the Office of Technology Assessment, a 200-member congressional support agency that operated from 1972 to 1995 and that researched and summarized technological and scientific matters for Congress.[65]

Courts, too, will prove invaluable as these technologies develop. In a world of “competing facts,” courts reveal the absolute value of neutral arbiters, which are missing from the interactions between cops wearing body cameras and suspects; between Facebook users on the extremes of an issue; and between the U.S. government and those it places on “no fly” lists pursuant to machine learning algorithms.[66] Of course, courts do not generally choose the disputes that come before them, and their decisions are by definition backward-looking (though they have forward-looking implications).[67] Another approach is self-regulation: Law enforcement actors in the Executive (and within states) could choose to self-regulate when they employ algorithms, as the Obama administration did for targeted killing and as New York City is contemplating for its automated decisions.[68] Finally, there obviously is a role for individuals, private lawyers, and nongovernmental organizations to engage in thoughtful self-help, as Adam Gershowitz’s essay details in the policing and criminal-justice context.[69] Grass-roots citizens’ movements can work to persuade companies “that respecting and protecting their users’ universally recognized human rights is in their long-term commercial self-interest.”[70]

Our three branches of government have not yet engaged deeply on the difficult questions of how to shape the technologies that drive every aspect of our future. Understanding why that engagement has been slow opens up possibilities for addressing the underlying obstructions and deploying with purpose the alternative forms of constraint described here.

 


[1] See, e.g., United States v. Trans-Missouri Freight Ass’n, 166 U.S. 290 (1897) (applying antitrust law to the railroad industry); United States v. Joint Traffic Ass’n, 171 U.S. 505 (1898) (same); Standard Oil Co. v. United States, 221 U.S. 1 (1911) (applying antitrust law to the oil industry).

[2] What We Do, FDA, https://www.fda.gov/aboutfda/whatwedo/ [https://perma.cc/6WYL-CQNS] (last visited Jan. 8, 2019); Understanding the National Highway Traffic Safety Administration (NHTSA), U.S. Dep’t of Transp., https://www.transportation.gov/transition/understanding-national-highway-traffic-safety-administration-nhtsa [https://perma.cc/7LH4-NDXB] (last visited Jan. 8, 2019).

[3] See, e.g., 47 U.S.C. § 201(a) (2018) (requiring every common carrier engaged in interstate communication by wire or radio to furnish such communication service upon reasonable request therefor); Thomas Nachbar, The Public Network, 17 Comm. L. Con. 67, 76 (2008) (discussing nondiscrimination requirements for package carriers, taxis, and railroads).

[4] See infra Part II.

[5] Eric A. Posner & Adrian Vermeule, The Executive Unbound: After the Madisonian Republic 4–14 (2010).

[6] See, e.g., Aziz Z. Huq, Binding the Executive (By Law or By Politics), 79 U. Chi. L. Rev. 777, 782–83 (2012) (reviewing Posner & Vermeule, supra note 5) (arguing that legal rules and institutions play a “pivotal role” in the production of executive constraint); Saikrishna B. Prakash & Michael D. Ramsey, The Goldilocks Executive, 90 Tex. L. Rev. 973, 973–74 (2012) (reviewing Posner & Vermeule, supra note 5) (arguing that executive officials do not appear to regard themselves as above the law and that legal constraints on the Executive are manifest).

[7] See, e.g., Peter M. Shane, Madisonianism Misunderstood: A Reply to Posner and Vermeule, Am. Const. Soc’y: ACSblog (Apr. 8, 2011), https://www.acslaw.org/acsblog/madisonianism-misunderstood-a-reply-to-posner-and-vermeule/ [https://perma.cc/G7YA-RZWF] (critiquing Posner and Vermeule for abandoning the rule of law).

[8] For a general discussion of systemic difficulties in checking the national security Executive, see Ashley Deeks, Predicting Enemies, 104 Va. L. Rev. 1529, 1560–63 (2018).

[9] Ashley Deeks, Checks and Balances from Abroad, 83 U. Chi. L. Rev. 65, 70 (2016).

[10] See, e.g., Applying the War Powers Resolution to the War on Terrorism: Hearing Before the Subcomm. on the Constitution, Federalism, and Prop. Rights of the S. Comm. on the Judiciary, 107th Cong. 37 (2002) (statement of Sen. Russ Feingold, Chairman) (noting that Congress is “not necessarily eagerly asserting the powers that it has. It is a pretty good deal for Congress, if tough decisions about war are made by the executive; if things do not go well, they are not responsible”).

[11] See, e.g., Crockett v. Reagan, 558 F. Supp. 893, 898 (D.D.C. 1982), aff’d, 720 F.2d 1355 (D.C. Cir. 1983) (noting, in a case involving the role of U.S. forces in El Salvador, that the court “lacks the resources and expertise (which are accessible to the Congress) to resolve disputed questions of fact” related to the military situation).

[12] See Al-Bihani v. Obama, 590 F.3d 866, 882 (D.C. Cir. 2010) (Brown, J., concurring).

[13] Al-Aulaqi v. Obama, 727 F. Supp. 2d 1, 9 (D.D.C. 2010).

[14] The reasons for the failures to constrain the national security Executive and technology companies are not unique to these two contexts. However, because there are several important similarities, certain lessons from the national security context can inform how we might proceed in the technology context. Nor do I mean to suggest an overly strong identity between the Executive and powerful technology companies. It should go without saying that there are significant differences between the two. The President faces democratic accountability; the tech companies do not. Unlike the President, tech companies cannot veto legislation. Nor can they invoke executive privilege when Congress asks for information. The companies are not entitled to deference by courts, and it is easier to hold them accountable when they violate the law.

[15] See, e.g., How 5 Tech Giants Have Become More Like Governments Than Companies, NPR (Oct. 26, 2017), https://www.npr.org/2017/10/26/560136311/how-5-tech-giants-have-become-more-like-governments-than-companies [https://perma.cc/C58F-ETVD] (interview of Farhad Manjoo, a tech columnist for the New York Times) [hereinafter Tech Giants] (“Amazon is sort of . . . getting its kind of corporate tentacles into a large part of the economy, into shipping, and how warehouses work and robots. Things that will allow it to dominate in the future that we’re kind of just not good at regulating at this point.”); see also Stephen L. Carter, Too Much Power Lies in Tech Companies’ Hands, Bloomberg (Aug. 17, 2017), https://www.bloomberg.com/opinion/articles/2017-08-17/too-much-power-lies-in-tech-companies-hands [https://perma.cc/EM46-SAEY].

[16] Farhad Manjoo, Tech’s ‘Frightful 5’ Will Dominate Digital Life for Foreseeable Future, N.Y. Times (Jan. 20, 2016), https://www.nytimes.com/2016/01/21/technology/techs-frightful-5-will-dominate-digital-life-for-foreseeable-future.html [https://perma.cc/8NXJVT3L]; Tech Giants, supra note 15 (discussing the subsidiaries that the “Frightful Five” own).

[17] See, e.g., Ben Tarnoff, Weaponizing AI is coming. Are algorithmic forever wars our future?, Guardian (Oct. 11, 2018), https://www.theguardian.com/commentisfree/2018/oct/11/war-jedi-algorithmic-warfare-us-military [https://perma.cc/3LNH-E7N2].

[18] See, e.g., Rebecca MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom 149–65 (2012) (describing major technology companies as “digital sovereigns”); Tech Giants, supra note 15 (discussing how Amazon, Google, Apple, Microsoft, and Facebook affect the economy, our elections, our jobs, and what we buy; how they innovate more aggressively than the U.S. government; how they act as gateways to many other products we use; and how they may suppress others’ innovations).

[19] Nancy Scola, Zuckerberg Survived But Facebook Still Has Problems, Politico (Apr. 10, 2018), https://www.politico.com/story/2018/04/10/zuckerberg-facebook-hearing-senate-474055 [https://perma.cc/V4JL-37JH].

[20] Daniel J. Solove, Fourth Amendment Codification and Professor Kerr’s Misguided Call for Judicial Deference, 74 Fordham L. Rev. 747, 771 (2005).

[21] Info. Soc’y Project, Governing Machine Learning (2017), https://law.yale.edu/system/files/area/center/isp/documents/governing_machine_learning_-_final.pdf [https://perma.cc/2PFE-6HBZ] [hereinafter Governing Machine Learning].

[22] Richard H. Pildes, Law and the President, 125 Harv. L. Rev. 1381, 1387 (2012) (reviewing Posner & Vermeule, supra note 5) (summarizing Posner and Vermeule’s argument that, because technology is constantly shifting, it is better for Presidents to make their best judgments based on the actual circumstances then governing); Katy Steinmetz, Congress Never Wanted to Regulate Facebook. Until Now, Time (Apr. 12, 2018), http://time.com/5237432/congress-never-wanted-to-regulate-facebook-until-now/ [https://perma.cc/GF4L-3CMW] (“Congress is always playing catch-up to technology, so statutes it writes can quickly become outdated.”).

[23] Klint Finley, Obama Wants the Government to Help Develop AI, Wired (Oct. 12, 2016), https://www.wired.com/2016/10/obama-envisions-ai-new-apollo-program/ [https://perma.cc/3TEH-FX6E] (quoting President Obama as stating, “The way I’ve been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch. . . . As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more.”); David Shepardson & Susan Heavey, Amazon, Apple, others to testify before U.S. Senate on data privacy September 26, Reuters (Sept. 12, 2018), https://www.reuters.com/article/us-usa-tech-congress/amazon-apple-others-to-testify-before-u-s-senate-on-data-privacy-september-26-idUSKCN1LS25P [https://perma.cc/G5JV-9WGW] (quoting Sen. John Thune as stating that the Commerce Committee hearing would allow tech companies to testify about “what Congress can do to promote clear privacy expectations without hurting innovation”); see also Governing Machine Learning, supra note 21 (reflecting participants’ views that standardizing the regulation of machine learning “would stifle innovation in a nascent industry, attempt to solve for problems that haven’t yet arisen, and potentially create barriers to entry for new entrants”).

[24] See, e.g., Paul Blumenthal, The Last Time Congress Threatened to Enact Digital Privacy Laws, It Didn’t Go So Well, Huff. Post (July 27, 2018), https://www.huffingtonpost.com/entry/congress-digital-privacy-laws_us_5af0c587e4b0ab5c3d68b98b [https://perma.cc/82TJ-ESVA].

[25] Consolidated Appropriations Act, 2018, Pub. L. No. 115-141, Div. V, § 103 (2018) (codified at 18 U.S.C. § 2703(h)).

[26] Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115-164, § 4, 132 Stat. 1253, 1254 (2018) (codified at 47 U.S.C. § 230(e)).

[27] FISA Amendments Reauthorization Act of 2017, Pub. L. No. 115-118, 132 Stat. 3 (2018) (codified at 50 U.S.C. §§ 1881–1881g).

[28] See, e.g., Jacob Rush, Hacking the Right to Vote, 105 Va. L. Rev. Online 67 (2019) (discussing Congress’s failure to regulate election security); Dustin Volz, Mark Hosenball & Joseph Menn, Push for encryption law falters despite Apple case spotlight, Reuters (May 27, 2016), https://www.reuters.com/article/us-usa-encryption-legislation-idUSKCN0YI0EM [https://perma.cc/93WR-UQB6] (discussing Congress’s failure to regulate encryption). A draft bill that would authorize companies to “hack back” in certain situations has been pending for several years. Active Cyber Defense Certainty Act, H.R. 4036, 115th Cong. (2017).

[29] Press Release, U.S. Sen. Ron Wyden of Or., Wyden Releases Discussion Draft of Legislation to Provide Real Protections for Americans’ Privacy (Nov. 1, 2018), https://www.wyden.senate.gov/news/press-releases/wyden-releases-discussion-draft-of-legislation-to-provide-real-protections-for-americans-privacy [https://perma.cc/4KKH-J4RH]. There are some existing federal privacy laws in specific areas, such as health care and student records. See Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, §§ 221, 264, 110 Stat. 1936, 2009, 2033 (1996); Family Educational Rights and Privacy Act, 20 U.S.C. § 1232g (2012). Further, California has enacted its own data privacy law. California Consumer Privacy Act of 2018, AB-375 (June 29, 2018).

[30] See Orin Kerr, The Fourth Amendment and New Technologies, 102 Mich. L. Rev. 801, 875 (2004) (“Judges struggle to understand even the basic facts of such technologies.”).

[31] See Carpenter v. United States, 138 S. Ct. 2206, 2261 (2018) (Alito, J., dissenting); Riley v. California, 134 S. Ct. 2473, 2497–98 (2014) (Alito, J., concurring in part and concurring in the judgment). In Carpenter, Justice Alito wrote, “Legislation is much preferable to the development of an entirely new body of Fourth Amendment caselaw for many reasons, including the enormous complexity of the subject, the need to respond to rapidly changing technology, and the Fourth Amendment’s limited scope.”

[32] Carpenter, 138 S.Ct. at 2220 (“Our decision today is a narrow one. . . . We do not . . . call into question conventional surveillance techniques and tools . . . . Further, our opinion does not consider other collection techniques involving foreign affairs or national security. . . . [W]hen considering new innovations . . . the Court must tread carefully in such cases, to ensure that we do not ‘embarrass the future.’” (quoting Northwest Airlines, Inc. v. Minnesota, 322 U.S. 292, 300 (1944))). Paul Ohm argues that the opinion is in fact sweeping in its consequences, however. Paul Ohm, The Many Revolutions of Carpenter, 32 Harv. J.L. & Tech. (forthcoming 2019) (manuscript at 1–3), https://osf.io/preprints/lawarxiv/bsedj/ [https://perma.cc/B6HL-GS6F].

[33] 565 U.S. 400, 409–11 (2012).

[34] Loomis v. Wisconsin, SCOTUSblog, http://www.scotusblog.com/case-files/cases/loomis-v-wisconsin/ [https://perma.cc/AC9D-3Z5P] (last visited Nov. 10, 2018) (listing the Supreme Court’s denial of certiorari on June 26, 2017).

[35] MacKinnon, supra note 18, at 175.

[36] Hal Singer, The FTC’s Decision to Reject the Search Antitrust Case against Google, Forbes (Dec. 5, 2012), https://www.forbes.com/sites/halsinger/2012/12/05/the-ftcs-decision-to-reject-the-search-antitrust-case-against-google/ [https://perma.cc/2JBJ-GK97]; Laura Stevens, Why a Trump-Led Antitrust Case Against Amazon is a Long Shot, Wall St. J. (Mar. 31, 2018), https://wsj.com/articles/why-a-trump-led-antitrust-case-against-amazon-is-a-long-shot-1522501200 [https://perma.cc/9LN9-5EL2].

[37] See, e.g., Richard Neustadt, Presidential Power and the Modern Presidents 185 (1990) (identifying public standing as a source of presidential influence); Posner & Vermeule, supra note 5, at 113–53 (discussing ways the Executive can garner public support, including through transparency).

[38] Press Release, White House, Fact Sheet: U.S. Policy Standards and Procedures for the Use of Force in Counterterrorism Operations Outside the United States and Areas of Active Hostilities (May 23, 2013), https://obamawhitehouse.archives.gov/the-press-office/2013/05/23/fact-sheet-us-policy-standards-and-procedures-use-force-counterterrorism [https://perma.cc/D23M-PXYR].

[39] Ashley Deeks, The Observer Effect: National Security Litigation, Executive Policy Changes, and Judicial Deference, 82 Fordham L. Rev. 827, 838 (2013).

[40] Curtis A. Bradley & Trevor W. Morrison, Presidential Power, Historical Practice, and Legal Constraint, 113 Colum. L. Rev. 1097, 1132–33 (2013); Ashley Deeks, The Substance of Secret Agreements and the Role of Government Lawyers, 111 AJIL Unbound 474, 476 (2018).

[41] Deeks, supra note 9, at 76–77; Ashley Deeks, Intelligence Communities, Peer Constraints, and the Law, 7 Harv. Nat’l Sec. J. 1, 4–5 (2015).

[42] Sarah Frier, Facebook Publishes Content Removal Policies for the First Time, Bloomberg (Apr. 24, 2018), https://www.bloomberg.com/news/articles/2018-04-24/facebook-publishes-content-removal-policies-for-the-first-time [https://perma.cc/4T4E-CFV7] (noting that the “release of the document follows frequent criticism and confusion about the company’s policies”); Julia Carrie Wong, Twitter Announces Global Change to Algorithm in Effort to Tackle Harassment, Guardian (May 15, 2018), https://www.theguardian.com/technology/2018/may/15/twitter-ranking-algorithm-change-trolling-harassment-abuse [https://perma.cc/9LR5-THZT].

[43] Antoni Slodkowski, Facebook Bans Myanmar Army Chief, Others in Unprecedented Move, Reuters (Aug. 27, 2018), https://www.reuters.com/article/us-myanmar-facebook/facebook-bans-myanmar-army-chief-others-in-unprecedented-move-idUSKCN1LC0R7 [https://perma.cc/MU2B-RJU5].

[44] See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1627–28 (2018); Farhad Manjoo, Why the Google Walkout was a Watershed Moment in Tech, N.Y. Times (Nov. 7, 2018), https://www.nytimes.com/2018/11/07/technology/google-walkout-watershed-tech.html [https://perma.cc/52SF-DS75] (“Protests by [Google’s] workers are an important new avenue for pressure; the very people who make these companies work can change what they do in the world.”).

[45] Daisuke Wakabayashi & Scott Shane, Google Will Not Renew Pentagon Contract That Upset Employees, N.Y. Times (June 1, 2018), https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html [https://perma.cc/TAN3-N7QB].

[46] Isabel Asher Hamilton, An Amazon Staffer Says Over 450 Employees Wrote to Jeff Bezos Demanding Amazon Stop Selling Facial-Recognition Software to Police, Bus. Insider (Oct. 17, 2018), https://www.businessinsider.com/amazon-employee-letter-jeff-bezos-facial-recognition-software-police-2018-10 [https://perma.cc/4C93-ARDP].

[47] Science and Technology Committee, Robotics and Artificial Intelligence, 2016–17, HC 145, ¶ 66 (UK).

[48] Brad Smith, Facial Recognition Technology: The Need for Public Regulation and Corporate Responsibility, Microsoft: Microsoft on the Issues (July 13, 2018), https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility [https://perma.cc/MT9Q-GZW4].

[49] Mike Isaac & Daisuke Wakabayashi, Russian Influence Reached 126 Million Through Facebook Alone, N.Y. Times (Oct. 30, 2017), https://www.nytimes.com/2017/10/30/technology/facebook-google-russia.html [https://perma.cc/UW23-ZGG5].

[50] Id.

[51] Neil Malhotra, Benoit Monin & Michael Tomz, Does Private Regulation Preempt Public Regulation?, Am. Pol. Sci. Rev. 1 (2018); Evelyn Douek, Facebook’s New ‘Supreme Court’ Could Revolutionize Online Speech, Lawfare (Nov. 19, 2018), https://www.lawfareblog.com/facebooks-new-supreme-court-could-revolutionize-online-speech [https://perma.cc/AHM4-WEMA].

[52] Steven T. Dennis, Senators Criticize Google CEO for Declining to Testify, Bloomberg (Aug. 28, 2018), https://www.bloomberg.com/news/articles/2018-08-28/google-ceo-pichai-faulted-by-senators-for-declining-to-testify [https://perma.cc/4K4E-2R5Q].

[53] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), art. 7, 2016 O.J. (L 119) 2.

[54] Id. arts. 44–46, at 8–9.

[55] Robert Levine, Antitrust Law Never Envisioned Massive Tech Companies Like Google, Bos. Globe (June 13, 2018), https://www.bostonglobe.com/ideas/2018/06/13/google-hugely-powerful-antitrust-law-job/E1eqrlQ01g11DRM8I9FxwO/story.html [https://perma.cc/ZH7D-TEVR].

[56] Guadalupe Gonzales, E.U. Antitrust Commission Sets Sights on Amazon. Here’s Why, Inc. (Sept. 21, 2018), https://www.inc.com/guadalupe-gonzalez/amazon-margrethe-vestager-preliminary-investigation.html [https://perma.cc/EP6Z-4SHB].

[57] Sarah C. Haan, Facebook’s Alternative Facts, 105 Va. L. Rev. Online 18 (2019).

[58] Mark Zuckerberg, A Blueprint for Content Governance and Enforcement, Facebook (Nov. 15, 2018), https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/ [https://perma.cc/TR8R-DDQ6].

[59] Global Network Initiative, https://globalnetworkinitiative.org [https://perma.cc/X66R-8JL3] (last visited Dec. 3, 2018).

[60] Greg Brockman & Ilya Sutskever, Introducing OpenAI, OpenAI: Blog (Dec. 11, 2015), https://blog.openai.com/introducing-openai/ [https://perma.cc/J3V2-NL4C]; Alex Hern, ‘Partnership on AI’ Formed by Google, Facebook, Amazon, IBM and Microsoft, Guardian (Sept. 28, 2016), https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms [https://perma.cc/EG74-RJVD].

[61] Tony Romm & James McAuley, Facebook Will Let French Regulators Study Its Efforts to Fight Hate Speech, Wash. Post (Nov. 12, 2018), https://www.washingtonpost.com/technology/2018/11/12/facebook-will-let-french-regulators-study-its-efforts-fight-hate-speech/ [https://perma.cc/L5QU-VYWB].

[62] Ted Cruz has called for the use of antitrust laws to break up the power of Facebook and others. Press Release, U.S. Sen. Ted Cruz of Tex., Sen. Cruz: We Have an Obligation to Defend the First Amendment Right of Every American on Social Media Platforms (Apr. 12, 2018), https://www.cruz.senate.gov/?p=press_release&id=3723 (last visited Jan. 8, 2019); see also Robert Levine, Antitrust Law Never Envisioned Massive Tech Companies Like Google, Bos. Globe (June 13, 2018), https://www.bostonglobe.com/ideas/2018/06/13/google-hugely-powerful-antitrust-law-job/E1eqrlQ01g11DRM8I9FxwO/story.html [https://perma.cc/L424-7XKW].

[63] See Mariano-Florentino Cuellar, Cyberdelegation and the Administrative State 10–14 (Stan. Pub. L. & Legal Theory Res. Paper Series, Working Paper No. 2754385, 2016), https://ssrn.com/abstract=2754385 [https://perma.cc/2EPS-EZT8] (contemplating the executive branch’s use of algorithms to regulate and adjudicate); Daniel Newman, Inside Look: The World’s Largest Tech Companies are Making Massive AI Investments, Forbes (Jan. 17, 2017), https://www.forbes.com/sites/danielnewman/2017/01/17/inside-look-the-worlds-largest-tech-companies-are-making-massive-ai-investments/#54ff6f3c4af2 [https://perma.cc/HM5D-W374] (describing how Amazon, Google, Apple, and Microsoft are all investing heavily in AI).

[64] See Governing Machine Learning, supra note 21 (suggesting that regulation could mandate levels of explainability, prevent specific types of bias, or specify what types of models or data sets could be used for which purposes); Finale Doshi-Velez & Mason Kortz, Accountability of AI Under the Law: The Role of Explanation 5–7 (Berkman Klein Ctr. for Internet & Soc’y, Working Paper, 2017), http://nrs.harvard.edu/urn-3:HUL.InstRepos:34372584 [https://perma.cc/7GQ7-BB7K].

[65] Mitch Ambrose, Another Physicist Congressman Attempts to Revive the Office of Technology Assessment, Am. Inst. of Physics (Jan. 20, 2016), https://www.aip.org/fyi/2016/another-physicist-congressman-attempts-revive-office-technology-assessment [https://perma.cc/B73D-WUJL].

[66] Danielle Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1252 (2008).

[67] Glenn S. Gerstell, Gen. Couns., Nat’l Sec. Agency, Keynote Address to the American Bar Association 28th Annual Review of the Field of National Security Law Conference (Nov. 1, 2018) (transcript available at https://www.nsa.gov/news-features/speeches-testimonies/Article/1675727/keynote-address-by-glenn-s-gerstell-general-counsel-nsa-to-the-american-bar-ass/ [https://perma.cc/LK7N-YVGT]).

[68] Projects, NYC Mayor’s Office of Operations, https://www1.nyc.gov/site/operations/projects/ads-task-force.page [https://perma.cc/74BJ-R2YT] (last visited Dec. 3, 2018).

[69] Adam Gershowitz, Criminal-Justice Apps: A Modest Step Toward Democratizing the Criminal Process, 105 Va. L. Rev. Online 37 (2019).

[70] MacKinnon, supra note 18, at 175.