Federalism, Metropolitanism, and the Problem of States

The United States has long been an urban country, but it is fast becoming a metropolitan one. Population and economic activity are now concentrated in cities and their surrounding regions. The twenty largest of these city-regions account for almost fifty-two percent of total U.S. GDP. This “metropolitan revolution” represents a fundamental challenge to our current federalism. The old federalism assumed that capital and labor are fully mobile and that subnational governments—in this case, states—will engage in competitive efforts to attract desirable investment while the federal government will assume the bulk of redistributive spending. The new federalism rejects the notion that economic growth can be attributed to interstate competition or that only central governments can effectively engage in social welfare redistribution. As economic activity becomes concentrated in cities, those cities become capable of engaging in forms of regulation and redistribution that the standard model of fiscal federalism had deemed impossible.

Our current state-based federalism, however, fails to appropriately align capabilities with responsibilities. Instead of empowering cities, states are increasingly seeking to defund, defang, and delegitimize them. The mismatch between the prevailing sites of productive economic activity and the location of regulation and redistribution has subverted the values conventionally associated with federalism. State power is being deployed to undermine accountability, limit experimentation, and prevent the effective exercise of local self-government. One current consequence of the gap between state and city power is increased political polarization. A future consequence may be an institutional restructuring that better reflects the new geography of production and population.

What Is Just Compensation?

The Supreme Court has held that “[t]he word ‘just’ in [‘just compensation’] . . . evokes ideas of ‘fairness.’” But the Court has not been able to discern how the clause ensures fairness. Scholars have responded with a number of novel policy proposals designed to assess fairer compensation in takings.

This Article approaches the ambiguity as a problem of history. It traces the history of the “just compensation” clause to the English writ of ad quod damnum in search of evidence that may shed light on how the clause was intended to ensure fairness. This historical inquiry yields a striking result. The word “just” imposes a procedural requirement on compensation: a jury must set compensation for it to be just.

This historical understanding is especially important to modern law since the Supreme Court applies a historical test to determine whether the Seventh Amendment guarantees the right to a jury. This Article corrects the common misperception that juries did not determine just compensation in eighteenth-century English and colonial practice.

Predicting Enemies

Actors in our criminal justice system increasingly rely on computer algorithms to help them predict how dangerous certain people and certain physical locations are. These predictive algorithms have spawned controversies because their operations are often opaque and some algorithms use biased data. Yet these same types of predictive algorithms inevitably will migrate into the national security sphere as the military tries to predict who and where its enemies are. Because military operations face fewer legal strictures and more limited oversight than criminal justice processes do, the military might expect—and hope—that its use of predictive algorithms will remain both unfettered and unseen.

This Article shows why that is a flawed approach, descriptively and normatively. First, in the post-September 11 era, any military operations associated with detention or targeting will draw intense scrutiny. Anticipating that scrutiny, the military should learn from the legal and policy challenges that criminal justice actors have faced in managing the transparency, reliability, and lawful use of predictive algorithms. Second, the military should clearly identify the laws and policies that govern its use of predictive algorithms. Doing so would avoid exacerbating the “double black box” problem of conducting operations that are already difficult to oversee and contest legally, using algorithms whose predictions are often difficult to explain. Instead, being transparent about how, when, why, and on what legal basis the military is using predictive algorithms will improve the quality of military decision-making and enhance public support for a new generation of national security tools.