In the next few years of the George W. Bush administration, it is almost certain that there will be a number of contentious battles between Democrats and Republicans and between the White House and the U.S. Senate over certain federal court nominees. While the issues will appear to be substantive and far-reaching — and no doubt they are in the present-day United States — one needs to examine another perspective concerning the federal courts, one that demonstrates how far this country has drifted from its original moorings in liberty.
Before the 20th century, the federal courts did not play much of a role in the daily lives of Americans. It is difficult to comprehend just how decentralized government power was in this country at one time, for even the structure of the various court systems ensured that the federal courts would not have much effect on the average person.
That state of affairs no longer exists. While the state courts still handle the bulk of criminal cases and lawsuits, federal courts have grown both in size and in the number of cases heard. A federal criminal code that once had three crimes (piracy, treason, and counterfeiting) now contains thousands, and it is no exaggeration to say that most Americans at one time or another have violated a federal law for which there are serious criminal penalties. The violations mostly are made in ignorance, and the size of the U.S. population relative to what the federal courts can handle means that the vast majority of people won’t be arrested or charged with anything.
However, the huge numbers of potential federal “criminals” also mean that a large number of people who have no idea they have violated a federal law will be shocked to find themselves in the dock. On top of that, the civil dockets have metastasized, as the number of lawsuits filed in federal court by private individuals, businesses and corporations, and the government itself has expanded exponentially.
The federal courts did not grow on their own, nor does the U.S. Constitution create a large role for them. That the federal courts are the major players in the system of justice in this country is testament to the unconstitutional usurping of power by the three branches of the federal government. This development did not occur out of logic or necessity; the federal system did not grow simply of its own accord. Instead, the Leviathan we see today has come about because groups of intellectuals and lawyers actively sought to change the very meaning of law in the United States. It was and is a sorry episode of U.S. history, one of many such affairs that have turned the nation’s legal system from a marvel to a slough of treachery, deceit, and unpredictability. The system of justice that once protected the innocent and held contracts and private property to be near-sacred entities has become a mechanism through which lawyers legally loot businesses and rogue prosecutors regularly charge, convict, and imprison the innocent.
While the centralization of government began in earnest with the victory of the northern states over the Confederacy in 1865 and continued during the Progressive Era of the late 1800s and early 1900s, the process reached warp speed during the 1930s, the period we know as the New Deal. The legislative agenda that President Franklin D. Roosevelt sought to impose was collectivist in nature and clearly went against the emphasis on individual rights that reflected the core philosophy of those who wrote the Constitution.
Although the U.S. Supreme Court resisted the New Deal during Roosevelt’s first term, ultimately the president was able to push his agenda by remaking the High Court, which became little more than a rubber stamp for policies that made a mockery of rule of law and of the rights of individuals.
As I shall demonstrate in this series, the Roosevelt administration inflicted damage on law in the United States that was both wide and deep.
However, regime changes do not occur in a vacuum. While the U.S. Supreme Court in 1935 held to some of the vestiges of constitutional government, the intellectual breakdown had begun long before the courts finally caved in to Roosevelt and gave him the powers he coveted. Thus, in this first section, I tell the story of how the law began to lose its way before the Great Depression. For the remnant who believe in the primacy of limited, constitutional government, it is a sorry tale, but one that we need to know.
The origins of U.S. law
While we like to think of U.S. law originating with the Constitution, the real “author” of the original legal system was William Blackstone, the great British jurist who wrote Commentaries on the Laws of England in the mid 1700s. Historian Daniel Boorstin wrote that this book was influential not only in England, but also in the American colonies, writing that “no other book — except the Bible — [had] played so great a role” in colonial thinking. Boorstin added that “Blackstone was to American law what Noah Webster’s speller was to American literacy.”
It was Blackstone who championed the ideal of law as a shield of the innocent, a tool that in the hands of government was to protect the life, property, and liberty of individual persons. Law was not only to constrain (and punish) those who would steal or kill, but also to constrain the powers and activities of those who were part of the state. Perhaps more than any other person, Blackstone defined the limitations of law and how, correctly laid out, law could be a bulwark against tyranny.
The men whose signatures graced the Declaration of Independence and later the U.S. Constitution were thoroughly familiar with Blackstone’s themes and sought to carry them out in this new country. Perhaps it is deeply ironic that in 1776, the same year the Declaration of Independence was written, the “champion” of modern law made his own intellectual debut in England. Jeremy Bentham, who sat in Blackstone’s Oxford lectures as a student, penned an anonymous attack on Blackstone entitled “A Fragment on Government.”
“Fragment” took the opposite approach to every ideal Blackstone laid out in his writings and lectures. Government’s role in society, wrote Bentham, was not to protect the innocent or to be restrained in what it could do, but rather to have near-unlimited powers to ensure the overall happiness of society, or “the greatest happiness for the greatest number.”
One of the important doctrines of criminal law was the condition of mens rea, or what Blackstone termed “a vicious will.” In Blackstone’s view, a person had to intend to commit a crime, and had to know that what he was doing not only was “wrong,” but also would inflict harm on others. What made the difference between a civil and criminal offense was the nature and the scope of injury that one was wreaking upon another. Bentham thought otherwise. Law was to be used as a tool of the state for the imposition of the “greater good.”
While Blackstone’s ideals prevailed when the Framers wrote the Constitution, Bentham is the father of modern law in this country. Writes Paul Rosenzweig,
[Today] the criminal law has strayed far from its historical roots. Where once the criminal law was an exclusively moral undertaking, it now has expanded to the point that it is principally utilitarian in nature. In some instances the law now makes criminal the failure to act in conformance with some imposed legal duty. In others the law criminalizes conduct undertaken without any culpable intent.
The law as it stands today is not a direct descendant of what the Framers held to be the proper and good role of law in society. In fact, it is not even a distant cousin of what was written on parchment in that hot summer of 1787. First, the Constitution not only separated the powers of the three branches of the national (or what we today call the federal) government, but also distinguished between the legitimate powers of the states and those of the central government.
Perhaps it is instructive to remember that, at that time, people referred to the United States in the plural; that is, they would say, “The United States are …” In the state system of justice, common law, something inherited from Great Britain, held sway. For the majority of American citizens, any contact that they would have with the law was seen, for the most part, on the state or local level. There were few federal laws, and they dealt with issues of national taxation (tariffs) or national defense.
Individual rights versus the state
The Bill of Rights protects individual persons from the predations of the state, and was intended to restrain the proclivities of politicians and government authorities to grab power. That governments and the courts have ridden roughshod over those protections does not minimize their importance or the fact that they are enshrined in U.S. law, even if that law today is little more than parchment under glass.
At the time the Constitution was written, the rights of private-property ownership and the sanctity of contracts were front and center, not only in the minds of the document’s Framers, but also with the public at large. For example, the Fifth Amendment, which contains the Due Process clause, says that government cannot take property from any person, subject anyone to double jeopardy, compel anyone to testify against himself, or deprive anyone “of life, liberty, or property without due process of law” (emphasis mine). As William F. Shughart II writes, the key issue of this clause is whether the concept of “due process” is “substantive” or “procedural.” The former interpretation would require a high burden of proof of a “public” need for government to act, while the latter would be nothing more than a nuisance for public officials, who simply would have to give notice and follow some prescribed set of rote guidelines. Well into the 20th century, the courts held that “due process” was substantive, not just a roadmap for procedure.
The Commerce Clause of the Constitution has provided that “hook” for the nationalizing of law. Article I, Section 8, No. 3 says that Congress shall have power “to regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes.” One of the things the Framers wished to avoid was for the states to levy tariffs against each other, as they had done under the Articles of Confederation. By giving Congress the authority to be the final arbiter between the states, the Framers in effect were setting up the United States as a large free-trade zone.
Unfortunately, Congress has seized upon the Commerce Clause as a mechanism for declaring nearly everything to be “interstate commerce.” This provides the hook for creating laws that have usurped the rightful power given to the states and that have given the federal government a blank check to do whatever the political classes want to do with almost nothing standing in their way but votes by members of Congress, a signature from the president of the United States, and an okay by the Supreme Court.
One of the ways the Framers tried to keep a balance of power in the federal government was through what is called the “nondelegation principle.” Article I, Section 1 declares, “All legislative Powers herein granted shall be vested in a Congress of the United States….” In other words, only Congress could make laws and carry out those duties granted to it by the Constitution.
That meant that Congress was given sole privilege of making laws, with the executive branch charged with carrying out the laws. To ensure that the presidency would not become too powerful, the president of the United States was not given lawmaking powers. That legal principle fell by the wayside during the 1930s, as Congress — under the prodding of Franklin Roosevelt — allowed the executive branch to grab what in effect would be the power to make law.
The system of laws and courts in the United States today hardly resembles the system that arose in the wake of the founding of this republic. This sea change in the law is not due — as some might claim — to the complexities of modern life or the need for reality to rule instead of ideology. Instead, we have lost the law because of the expediency that comes with the Benthamite view of utility and because of the notion that the “social good” rules over the principles of liberty and justice.
The deterioration of law in the United States (as well as other Western countries, including Great Britain, from which this nation inherited its legal traditions) is a sorry chapter of history, but one that needs to be retold, if only to provide inspiration to future generations who decide to recapture the spirit of liberty. In the meantime, it is necessary to detail the hows and whys of this decline.
Part 1 of this series dealt with the establishment of law in the United States and with how the Framers of the U.S. Constitution sought to limit the power of government at all levels, seeking to harness the apparatus of the state to ensure that individual rights were not violated. Within a generation of the document’s creation, however, forces were already at work in this country to undermine individual rights. This article deals with the rise of collectivist thought that was enforced through major wars and legislation meant to strengthen government at the expense of personal freedoms.
We begin with the War Between the States, 1861–1865. It is popular — or at least “politically correct” — to say that the conflagration known as the Civil War (which really was not a true civil war, since the Confederacy was fighting to gain independence, not to establish political power over the Northern states) was caused by slavery. It is true that slavery was a flash point among the issues separating North and South, but slavery, as much a cancer on liberty as it was, did not bring about the secession of the Southern states in 1860–61 or the war that followed. Indeed, one can argue that, even had chattel slavery not existed, the economic differences between North and South might very well have led to the exit of the Southern states from the Union.
The election of Abraham Lincoln as president of the United States was a triumph of those who had adhered to the vision of Alexander Hamilton, as opposed to people whose social and political philosophies mirrored those of Thomas Jefferson. Hamilton believed in a strong central government, central banking, and an economic order that was buttressed by government subsidies and regulated by Congress.
Jeffersonians, on the other hand, held to a philosophy in which government played a relatively small role in the lives of individual persons who were to be free to pursue their own interests within a social order that emphasized free exchange unregulated by the “dead hand” of the state.
Hamilton’s ideas outlived the man, and they were first embodied in the two Banks of the United States and later animated those who were members of the Whig Party. When the Whig Party ceased to exist, the scattered former members formed the Republican Party, which became the party of central banking, taxpayer-subsidized “internal improvements” (such as railroads and canals), government-sponsored education, and high protective tariffs. (Contrary to popular belief, as Thomas DiLorenzo has pointed out in his book The Real Lincoln, the Republicans were defined by their economic program, not by any principled opposition to slavery.)
By 1860, according to DiLorenzo, almost 90 percent of federal tax revenues were coming through the Southern ports, which meant that higher tariffs would make the position of Southern economic interests even worse relative to that of the industrial North. (We need to add that, at this time, British and European goods were considered vastly superior in price and quality to goods made in the Northern states. Northern economic interests wanted higher tariffs precisely to force Americans to purchase their goods rather than products made abroad.)
In the war following the Southern secession, we find that Lincoln simply circumvented the Constitution. First, he engaged troops in a war not declared by Congress. Second, he unilaterally suspended the writ of habeas corpus, which meant that, by an administrative act, he was able to declare the Constitution null and void. As DiLorenzo points out, by crossing this Rubicon, Lincoln was able to set a standard — through the use of raw force — that has haunted us since.
The demise of law
Following the war, attempts were made to patch up the old system of federalism and restore some semblance of constitutional order, but the damage had already been done. In effect, those who advocated the centralized political and legal system over the decentralized one that had characterized the United States since its founding had won the battle, albeit by force. Furthermore, even though some decentralization came to the fore, the politicians and courts were already at work undermining individual rights. One of the first dominoes to fall was the right to bear arms, as outlined in the Second Amendment. Many states and localities (mostly the former Confederate states but certainly not only them) passed laws prohibiting blacks from owning firearms.
The U.S. Supreme Court in two decisions upheld the laws that were passed precisely to keep blacks from being able to defend themselves from attacks by groups such as the Ku Klux Klan. While the decisions were dressed in the language of federalism (the Second Amendment did not apply to the states, only to the federal government), the Court also held that the Second Amendment involved a collective right, not an individual one. In other words, by refusing to protect the individual rights of blacks, thereby ensuring that they would be helpless in the face of violent attacks by vengeful whites, the Court undermined the constitutional concept of individual rights.
Of course, the modern gun-control advocates have seized on those decisions (which clearly were racist in design) to promote their agenda of disarming all law-abiding persons. Thus, the descendants of whites who once supported those decisions now find them being used in a way that the justices of the late 1800s never thought possible. Call it “blowback.”
The rise of collectivism
The concept of collectivism continued to grow in the United States in the late 1800s. As Walter Lippmann wrote in The Good Society, the “classical liberalism” that stressed individual rights was fighting a “rearguard action” by about 1870. The “isms,” such as Progressivism and Populism, already were beginning to dominate the American political scene.
It is difficult to fathom the sea change which occurred in this country from the 1880s to the beginning of World War I. The Populists and their allies demanded that government embark on a policy of deliberate inflation, which could happen only if the state fully controlled the issuing of money, something that the U.S. banking system could only partly do before the creation of the Federal Reserve System in 1913. Until the creation of the Interstate Commerce Commission in 1887, the setting of special rules and regulations for businesses was left to state and local governments. The ICC was one of the first breaks in the policy of nondelegation of powers enshrined in the Constitution, which vested the power to regulate commerce in Congress alone.
(It should be noted that the concept of regulation in 1787 meant that Congress needed to make sure that trade ran smoothly between states, keeping it “regular.” The Commerce Clause of the Constitution was written to make sure states did not erect trade barriers against one another; it was not meant as a “hook” by which Congress — and later the administrative branch — could assume powers that were left to the states.)
Congress again called on its powers to “regulate” interstate commerce with the passage of the Sherman Antitrust Act of 1890 and the Clayton Act of 1914. In 1913, the Sixteenth Amendment, permitting Congress to levy a national income tax, and the Seventeenth Amendment, which called for direct election of U.S. senators (they were previously chosen by state legislatures), further “nationalized” law in this country. Congress also created the Federal Reserve System in 1913, which gave the government the tools to inflate the currency at will.
“Progressive” actions were not limited to Congress, as the courts took part in ushering out the concept that the Constitution protected individual rights and replacing it with a collectivist notion of “public interest.” The Progressive mindset was not limited to the courts or the intellectuals and activists who promoted the growth of the state. As Robert Higgs pointed out in Crisis and Leviathan, Charles Whiting Baker wrote in 1921 that
doctrines that were deemed ultra-radical thirty years ago . . . are accepted today without question by railway presidents, financiers and captains of industry.
One could argue that the actual change in politics and governance would not have been possible had these people not bought into the Progressive ideology. If one leaves out the high protective tariffs of this time, it can be argued that the huge business enterprises that developed during the latter portion of the 19th century grew in a relatively laissez-faire atmosphere. The so-called captains of industry were primarily interested in making money and promoting their own enterprises, not in being the guardians of an individualist ideology of liberty that no longer captured the intellectual “commanding heights.” It is not surprising that the intellectuals “won” the industrialists to their side, and the results, ultimately, were disastrous, as they would nurture the conditions that created the Great Depression and ushered in the New Deal.
Economic historians such as Robert Higgs and Murray Rothbard have noted that the entry of the United States into the First World War was a triumph for the Progressives. First, it gave the government near-absolute power over the economy. Second, it gave the government the excuse to raise marginal income tax rates to very high levels. (In the mid 1920s, the top tax rate was lowered to 25 percent, and it has not been lower since.) Third, it elevated the power of the central government as an ideological substitute for liberty.
In calling for the United States to enter the war, President Woodrow Wilson said that war was necessary “to make the world safe for democracy.” No longer did the old “individualism” hold sway; as Higgs notes in Crisis and Leviathan, there was a vast change in the ideological winds from the 1890s to the U.S. entry into the war, and it permanently changed the course of U.S. history.
The courts were hardly immune to the ideological changes, although they would not be brought fully into line with Progressive thinking until the New Deal. However, Progressivism had made its mark with the jurists long before the presidency of Franklin Roosevelt. In 1911, the federal courts ordered the breakup of the Standard Oil Company into a number of smaller firms, citing the Sherman Antitrust Act and a vague notion of the “public good.”
Keep in mind that the courts paid no mind to the realities of the marketplace or the role Standard Oil had played in making fuel available to poor and lower-income persons, who simply had done without in the pre-Standard Oil era. Furthermore, the courts took no notice of John D. Rockefeller’s decision not to tap into the oil that flowed from Texas wells. (That oil ultimately helped other companies whittle down Standard’s once-huge market share.)
Furthermore, the courts did not seem to care that consumers preferred to freely purchase Standard Oil products. Instead, the courts depended on the nebulous “public interest” argument that basically promoted the “everyone knows big, successful businesses are bad” point of view.
Of course, government intervention into the economy was not the only area in which the courts broke with the past. The courts also decided to empower the state by eviscerating the mens rea doctrine in the area of criminal law. The great English jurist William Blackstone held that mens rea was the bedrock of criminal law. The term means “a guilty mind,” and its application to criminal law emphasized that it was necessary for persons who broke the law to have done so deliberately in order to charge them with a crime. A person who did not know he was acting illegally could not be placed in the dock for alleged criminal offenses.
The courts began to hold that a crime simply was the violation of a set of laws (and later, simple regulations), regardless of the intent of the person who transgressed the rules. This should hardly be surprising, as mens rea involves an individualistic interpretation of the rule of law. Crimes before then involved the imposition of real harm to real people; a “public interest” view of law that grew (metastasized?) during the Progressive Era emphasized the “social good” of law. Thus, when one broke a law, he harmed society, regardless of the intent or even the outcome of the action itself.
The assault upon mens rea began with the Minnesota Supreme Court in a 1910 ruling denying a mens rea defense to a lumber company that was being prosecuted for cutting timber on state lands without a valid permit. (As Paul Craig Roberts and Lawrence M. Stratton point out in their book The Tyranny of Good Intentions, a state official had renewed the permit, but it turned out he did not have the legal authority to do so.) And while the judicial rulings that finally did away with a meaningful doctrine of mens rea did not come until after the New Deal, they ultimately would bring a veritable legal revolution to this country.
As noted in the previous paragraph, the United States has experienced a legal revolution. However, the sea change did not happen all at once. What was a tiny trickle turned into a stream because of the War Between the States, and while the flow was slightly curtailed for a while after the war ended, it did not take long for the stream to grow into a near river by the time the Progressive Era rolled around.
The entry of the United States into World War I in 1917 further strengthened the central state at the expense of both the states and individuals. Most important, however, was the fact that the intellectuals in this country no longer supported the view that the Constitution was a document dedicated to individual rights and restraint of the state. The intellectual and legal table now was set for the political economy of the New Deal.
When Janice Rogers Brown was renominated to fill a vacancy on the U.S. Court of Appeals for the District of Columbia Circuit this year, the New York Times demanded that Democrats filibuster her nomination, one of the reasons being that, in a speech to a gathering of conservative lawyers, Brown had called the New Deal a “socialist” venture. In his New York Times columns, Princeton University economist Paul Krugman on many occasions has praised the Franklin D. Roosevelt administration and recently called for a new installment of the New Deal.
Not only did the New Deal transform U.S. governmental structures as we know them, it also left an economic and legal legacy that to this day is both influential and controversial. Judging from the outright anger of many Democrats to Brown’s criticism of the New Deal, it is not hard to understand that Roosevelt still is the standard-bearer of the Democratic Party, no matter how many “New Democrats” may be running for office.
New Deal programs from Social Security to the minimum wage play a major role in our lives nearly 70 years after Congress enacted them, and attempts to make even minimal reductions in them spark national outrage. Witness the firestorm that has erupted over the Bush administration’s reform proposals for Social Security. While these programs are regarded as almost sacrosanct today, before Roosevelt’s time in office they would have been regarded as illegal. Furthermore, there was a time in the history of the United States when the vast powers now being wielded by members of the executive branch would have been seen as unconstitutional by most people who understood the law. To put it another way, the United States of America that existed in early 1933 was not the same country by the time Roosevelt died in April 1945.
The U.S. Supreme Court played a major role in the legal and political transformation of this country during the New Deal through the way it chose to reinterpret the U.S. Constitution. First, the High Court was willing to ignore the Constitution’s nondelegation principle by permitting the executive branch to take on legal responsibilities that the Framers of the Constitution clearly had given Congress. Second, it chose to take earlier rulings regarding laws governing workplaces and employment contracts and turn them upside down. Furthermore, even though the language of the justices’ decisions declared them to be a correct and legal interpretation of the Constitution, those involved in the legal process clearly understood that what they were doing ran contrary to the law.
Rexford G. Tugwell, a close advisor to Roosevelt, said as much in an article entitled “Rewriting the Constitution,” published in the March 1968 issue of The Center Magazine:
The Constitution was a negative document, meant mostly to protect citizens from their government…. Above all, men were to be free to do as they liked, and since the government was likely to intervene and because prosperity was to be found in the free management of their affairs, a constitution was needed to prevent such intervention…. The laws would maintain order, but would not touch the individual who behaved reasonably.
Regarding the enforcement of what the Roosevelt administration would have called “social virtues,” Tugwell said,
To the extent that these new social virtues developed [in the New Deal], they were tortured interpretations of a document intended to prevent them. The government did accept responsibility for individuals’ well-being, and it did interfere to make secure. But it really had to be admitted that it was done irregularly and according to doctrines the framers would have rejected. Organization for these purposes was very inefficient because they were not acknowledged intentions. Much of the lagging and reluctance was owed to constantly reiterated intention that what was being done was in pursuit of the aims embodied in the Constitution of 1787, when obviously it was done in contravention of them. [Emphasis mine.]
The Great Depression
To understand the background of the New Deal, one first needs to understand the economic catastrophe that was the Great Depression. The economic downturn that began soon after the stock market crash of October 1929 turned into calamity within two years. Unemployment rates, which were at about 7 percent in 1930, ballooned to about 28 percent by February 1933, a month before Roosevelt was sworn into office, Democrats having swept the 1932 elections.
The standard mainstream approach has been to label the Great Depression the ultimate failure of capitalism, private property, sound money, and free enterprise. “Unbridled competition” had led to “overproduction” of goods with workers not being paid wages high enough to “buy back the products” they had made, with “underconsumption” being the result. Thus, as one political cartoon of the day put it, there was “too much wheat, too much oil, and too much poverty.”
While this article cannot go into depth on the economic side of the Depression and the New Deal, it is important to point out that economic historians Murray N. Rothbard, Robert Higgs, and others have argued that government interventions themselves created the conditions that gave us the Depression. For example, President Herbert Hoover insisted that businesses not permit wages and prices to fall, despite the fact that bank failures were resulting in the shrinkage of the supply of money in the economy; thus, Hoover’s policies resulted in unsold goods and unemployment.
Rothbard also notes that Congress passed the destructive Smoot-Hawley tariff in 1930 and doubled tax rates in 1932, increasing business costs and making the downturn even worse. The Roosevelt administration continued in the same vein. Writes William F. Shughart II in The Independent Review (Vol. IX, No. 1, 2004),
The [Roosevelt] administration’s mindset was colored strongly by Marxist notions that free-market institutions inevitably are self-destructive. Competition was seen not as beneficially expanding output, lowering prices, and increasing wealth, but as fostering ruinous overproduction that impoverished capitalists and workers alike. Confusing effect with cause, the New Dealers interpreted the unprecedented collapse of prices, wages, and employment that began in the fall of 1929 as evidence consistent with the supposed evils of unfettered markets.
Upon taking office, the New Dealers, through various new laws and proposals, such as the National Industrial Recovery Act (NIRA) and the Agricultural Adjustment Act (AAA), sought to force up commodity prices and reduce output in an attempt to engineer an economic recovery. Ironically, as Higgs noted in his 1987 book Crisis and Leviathan (quoted in the Shughart article),
Hence arose the anomalous but widely supported policy proposal to cure the depression, itself a catastrophic decline of real output and employment, by cutting back on production.
The “reforms” brought in by the New Deal were radical, to say the least. Furthermore, as Tugwell admitted, the only way to enact them was to turn U.S. law upside down. Thus began a major transformation of how the courts would view the Constitution.
The Court and the “first” New Deal
It is perhaps ironic that the decade that transformed the legal landscape of the United States did not begin that way. In fact, during the early years of the Roosevelt administration, the Supreme Court was seen as the last bastion defending the original Constitution, and Roosevelt himself openly sought to force a change in the Court's makeup in order to secure rulings that would give his administration a free hand to reconfigure the law.
As the story is popularly told, a small group of justices on the Supreme Court stood against Roosevelt's popular programs, which had been proposed to mitigate the economic misery caused by the Great Depression. Shughart points out,
Standard accounts of the drama that began to unfold early in January 1935, when the U.S. Supreme Court ruled on the first of the challenges to New Deal legislation that came before it, frequently ignore these uncomfortable facts. As the story usually is told, the Old Court stubbornly blocked FDR’s policies by invalidating on constitutional grounds the bold experiments undertaken during his first term to deal with the nation’s extraordinary economic emergency. Thwarted at nearly every turn, often by narrow five-to-four vote margins, and emboldened by his stunning reelection to the White House in November 1936, FDR responded the following winter by threatening to pack the Court with up to six additional members, thereby ensuring a more compliant majority. To diffuse that threat, the Court abruptly changed course, executing its famous “switch in time that saved nine,” and began to sustain most of the president’s policies and programs, especially in the area of economic regulation.
The truth, as Shughart points out, is somewhat different. While the Court struck down the two flagship programs of the first 100 days of the New Deal, the National Industrial Recovery Act and the Agricultural Adjustment Act, it already had upheld other major planks of Roosevelt’s agenda.
Soon after Roosevelt took office in March 1933, he decided to pursue a deliberate policy of inflation. Since, at that time, individuals could demand payment in gold from the government for their dollars, a surge of deliberate inflation caused by printing more dollars would have brought about a run on the government's gold reserves. To keep that from happening, Roosevelt invoked an obscure World War I-era law, the Trading with the Enemy Act of 1917, to justify the confiscation of almost all gold owned by private individuals. (The government raised the price of its gold from $20.67 an ounce to $35 an ounce and compensated individuals with the cheaper dollars.)
Not surprisingly, the action was challenged in the courts, but the Supreme Court upheld it in three separate cases. The Court had also upheld a mortgage-moratorium law in Minnesota, which Roosevelt supported, and approved the formation of the Tennessee Valley Authority, which clearly was an exercise in socialism. In other words, the Supreme Court had a mixed record and certainly was not “obstructionist” as the critics have claimed.
Moreover, one must keep in mind the nature of the two acts in question. Both the NIRA and AAA were attempts to “stimulate” the economy by forcing up prices of consumer goods and raising business costs, as well as curtailing the production of agricultural and other goods. In other words, in a time of real deprivation for much of the country, the government’s answer to the Depression was to make it even more difficult for people to afford their daily bread.
The NIRA was an attempt to organize the entire U.S. economy into a series of cartels, ranging from forest and wood products to the dog-food industry. Industry groups were required to set minimum wages and minimum prices, hold back production, and prevent new entrants from starting businesses. However, because cartels tend to be unwieldy over time, the system began to collapse even before the Court declared the NIRA unconstitutional in its famous A.L.A. Schechter Poultry Corp. v. United States decision. The economist Benjamin M. Anderson wrote that the NIRA “was not a revival measure…. It was an antirevival measure.”
Likewise, the AAA attempted to increase farmers' incomes by setting limits on crops, and it even went as far as having agents from the Agricultural Adjustment Administration destroy commodities and animals in order to meet those limits. As author Lawrence W. Reed noted,
[The AAA] levied a new tax on agricultural processors and used the revenue to supervise the wholesale destruction of valuable crops and cattle. Federal agents oversaw the ugly spectacle of perfectly good fields of cotton, wheat, and corn being plowed under (the mules had to be convinced to trample the crops; they had been trained, of course, to walk between the rows). Healthy cattle, sheep, and pigs were slaughtered and buried in mass graves. [Emphasis in original.]
When the Court struck down the AAA in 1936, Roosevelt was enraged, and in early 1937 he floated his infamous proposal to “pack” the Court by sending Congress a plan that would give him authority to appoint up to six more justices to the Supreme Court (in addition to the nine already on it). Although the anti-New Deal decisions were based clearly on constitutional merits, Roosevelt wanted justices who would reevaluate the Constitution in a way that would permit his future programs to be implemented unmolested by the rule of law. The Court, however, “conveniently” began to change the pattern of its rulings, and, with retirements, Roosevelt was soon able to appoint justices who ruled more to his liking.
The Court and the “second” New Deal
The “first” New Deal can be seen as an attempt to cartelize the U.S. economy in an effort to aid businesses. The “second” New Deal was different in that it was aimed at attacking business enterprises through regulation and the promotion of labor unions. The Court ultimately acquiesced.
First, in complete reversal of a 1936 decision that struck down a New York state minimum-wage law, the Court in West Coast Hotel v. Parrish (1937) upheld a Washington state minimum-wage law that was nearly identical to the New York one that had been struck down. Similar rulings followed.
Second, the Court began to reinterpret the Commerce Clause of the Constitution by changing its definition of interstate commerce. Earlier courts had read the clause as a limit on Congress's ability to craft laws that usurped what traditionally were seen as state functions: while the Constitution gave Congress the power to “regulate” interstate commerce, the courts had interpreted “commerce” in a very narrow way.
By 1937, however, the Court began to define interstate commerce in a way that turned the earlier interpretation of the Commerce Clause upside down. Candice E. Jackson argues that — although from 1870 to 1937 the Supreme Court slowly loosened some of its strictest definitions of interstate commerce — after 1937 the Roosevelt Court moved full speed ahead to give Congress almost unlimited power in imposing the will of the federal government on individuals and business enterprises.
The effects of this sea change in the legal interpretation of a document that clearly was written to limit the power of government soon became obvious. The most blatant example of the Court's new way of reading the Commerce Clause was NLRB v. Jones & Laughlin Steel Corp. (1937), in which the Court relied on the clause to uphold the Wagner Act, a law that, in effect, placed labor unions under congressional protection against firms that wished to resist being unionized.
Within a year of this decision, the country was hit by numerous strikes, walkouts, and episodes of labor-related violence. That turmoil, coupled with the Federal Reserve's decision to raise interest rates, plunged the U.S. economy, which had been making a modest recovery after 1935, into another deep recession, with the unemployment rate reaching 20 percent.
(The Court’s willingness to make nearly everything subject to its new interpretation of the Commerce Clause was demonstrated by its infamous 1942 Wickard v. Filburn decision, in which it upheld government action against a farmer who grew wheat for his own consumption. Because he had grown more than the quota the government permitted, the Court ruled that, while the farmer had no plans to sell the wheat he was growing, his home-grown supply meant that he would not be purchasing wheat elsewhere, a personal decision that, the Court reasoned, would affect interstate commerce.)
The second major change in the legal interpretation that the Court permitted was the wholesale transference of legislative power from Congress to the executive branch. This was done in two ways. First, although the Constitution declares that “all legislative powers” rest in the hands of Congress, Roosevelt demanded that Congress rubber-stamp his proposed legislation, which then was quickly passed without debate and without many members’ having even read the proposed bills. While this preserved the image of Congress actually performing its legislative duties, in reality the president was the legislator.
The second way that the Roosevelt administration usurped legislative powers was through the creation of bureaucracies that were able to, in effect, write the law through regulations. The way it worked was that Congress would write laws that deferred the rulemaking powers to the regulatory agencies. In their book The Tyranny of Good Intentions, Paul Craig Roberts and Lawrence M. Stratton describe the legal change that followed:
Prior to the New Deal, legislation tended to be specifically and tightly written in order to avoid delegating the law to executive branch enforcers. This minimized the opportunity for executive branch interpretation.
However, the Roosevelt administration followed the path urged by James M. Landis, the dean of Harvard Law School. Landis believed that bureaucracies would be staffed by “experts,” who would be more adept at following “correct” policies than the less-nimble Congress. Such a state of affairs would mean, according to Landis, “that the operative rules will be found outside the statute book.” In other words, the wording and even intent of the laws passed by Congress meant nothing; the law was whatever the executive branch — aided by a subservient Supreme Court — said it was.
The New Deal did not revolutionize the courts but instead accelerated the changes already taking place. However, even given the encroachments on liberty that occurred during the Progressive Era, the decade of the 1930s still is breathtaking when one realizes what was lost.
Furthermore, the courts have never effectively retreated from the positions that justices such as William O. Douglas, Felix Frankfurter, and Hugo Black took in regard to the Constitution. Like Tugwell, those justices fully understood that the Founders of this country wrote the Constitution in order to erect the necessary fences around the exercise of power by government officials. And, like Tugwell, they decided that liberty was archaic and unnecessary in a modern, complex society. Thus, liberty faded as a polestar, and the New Dealers replaced it with statism, all with the blessings of the U.S. Supreme Court.
No matter who is appointed to replace retiring members of the Supreme Court, the larger issues will remain unchanged, as they have been for nearly seven decades — the New Deal Supreme Court has become a permanent fixture in our country.
Changes brought about by Franklin Roosevelt’s Court solidified the trends that had occurred since the Progressive Era, trends that could have come about only through viewing the U.S. Constitution in a way fundamentally different from what the Framers intended. As Roosevelt “brain truster” Rexford G. Tugwell put it, the Constitution was a document that was written not to promote and expand a welfare state but instead to protect Americans from their own government.
That the New Deal justices were able to absolutely subvert the Constitution — and with it, the rule of law — and do it without meaningful opposition from Congress and the Fourth Estate constitutes one of the darkest chapters in American history. A nation that was conceived in liberty and limited government has become a country where almost no meaningful limits are placed on those who are in authority, all with the approval of the courts, which were supposed to be one of the bulwarks against such action.
In this last part of my series, I first reappraise the judicial/legal philosophy that has guided the Court since the mid-1930s and show how radically it departs from how the Founders viewed the law. This ethos holds that government, including the judicial branch, is the entity that must “change society.” Thus, there exists a “compelling government interest” that began with the tampering with economic institutions and ultimately spread to the rest of society.
Second, I examine some of the decisions and their aftermath to demonstrate how the Court’s legal philosophy led to the continuation of the expansion of federal powers and the misuse of the Constitution’s Commerce Clause.
The great Austrian economist Ludwig von Mises once wrote that, while he set out to be a “reformer,” instead he became the “historian of decline.” Likewise, this series has been a history of decline of what was once a bastion of Western civilization: law and liberty. However, I do not write pessimistically, for as long as there are people who deeply treasure individual rights, private property, and liberty, there will be the possibility of uprooting the collectivist mindset that permeates the political, economic, and legal landscape in the United States.
The Court and gun control
Although the Second Amendment begins with a preamble on militias, it is clear that the Framers considered gun ownership to be a basic right. Unfortunately, as was noted in part two of this series, the Supreme Court had already weakened the Second Amendment by claiming it protected a collective right, not an individual one. Such reasoning would guide the Court in future gun-control decisions.
In 1939 the Court ruled in United States v. Miller, the last gun-control case to be heard by the Supreme Court. The case involved a man who had violated the National Firearms Act of 1934 by possessing a sawed-off shotgun whose barrel was shorter than the 18 inches the law required. In ruling for the government, the Court held that the Second Amendment protected a collective right, not an individual right, to own firearms. Furthermore, the Court tied that right to the Second Amendment’s preamble about the necessity of a well-regulated militia, in essence saying that individual rights extended only to weapons that would be considered useful for a militia, and that the state could heavily regulate even those “rights.”
Although the Court’s decision in Miller was limited, it was collectivist in nature and did not question the power of the federal government to restrict individual ownership of firearms. Since then, gun-control laws have proliferated on the state and national levels, and most Americans who are gun owners can easily run afoul of those statutes. At last count, there were about 20,000 laws on the books at all levels of government to regulate use and ownership of firearms, all to restrict a right that the Framers clearly believed was enjoyed by the people.
Furthermore, one must tie the proliferation of anti-gun-ownership laws to the growing power of the police. Self-defense, once seen as necessary and even honorable, has been recast as “vigilantism,” a word freighted with negative connotations. The current ethos is that home and personal defense should be left to the authorities, who ostensibly are better at such things.
(Unfortunately, the Supreme Court has also ruled that the police are under no legal obligation to protect anyone. Thus, people who protect their lives and property with personal weapons are subject to prosecution, but waiting for the police to show up can be fatal.)
The assault on private property
The Court’s continuing assault on the institution of private property came in a multi-pronged manner. First, the principle of “no discrimination” was used to constitutionally trump private-property rights. Second, the Court strengthened the power of government to seize private property under the notion that anything the government contends will increase tax revenue constitutes a “public purpose.”
The year 1954 is significant not only for Brown v. Board of Education, which helped launch the modern “civil rights movement,” but also for Berman v. Parker, which held that the seizing and tearing down of “blighted areas” for “urban renewal” met the Fifth Amendment’s requirements for eminent domain. The Progressives of the late 19th and early 20th centuries had made “slum clearance” one of their main goals for urban planning, and the Court agreed, albeit nearly 50 years after the Progressive Era.
Fast-forward 51 years to the 2005 Kelo v. New London decision, in which the Court ruled that areas did not even have to be “blighted” in order to be condemned. All that was needed was a “formula” demonstrating that the municipality or other political entity in question could raise more tax revenue by taking private property and selling it at less-than-market prices to developers who would then build shopping centers, condominiums, or the like. To put it another way, the U.S. Supreme Court said, in effect, that there really is no such thing as private property, at least not the kind of private property that existed at the time of the framing of the Constitution.
Another example of the Court’s intrusion into the institution of private property has come with the civil-rights laws. While the Brown v. Board of Education decision of 1954 is outside the scope of this article, that decision emboldened both the Court and Congress not only to expand the definitions of racial discrimination but also, in effect, to “nationalize” businesses by declaring them to be “public” entities falling under the jurisdiction of Congress through the Commerce Clause.
Ten years after Brown, Congress passed the landmark 1964 Civil Rights Act. Resorting (once again) to its new interpretation of the Commerce Clause, Congress expanded the reach of “no discrimination” to private property, but that was not all that the act involved. During the New Deal, as noted in part three of this series, Congress ceded much of its power to the executive branch, including the final interpretation of the laws Congress passed.
This situation had special meaning with regard to the Civil Rights Act, which contained very specific language forbidding the use of racial quotas and of special racial tests in the situations governed by the act (such as employment, college admissions, and the like). However, as Paul Craig Roberts and Lawrence M. Stratton point out in their book The Tyranny of Good Intentions, when the Court reviewed a challenge to the use of racial quotas, it deferred to the administration, specifically the Equal Employment Opportunity Commission.
The EEOC decided to “redefine discrimination as unintentional statistical group disparities” which could be remedied only through racial quotas, as Roberts and Stratton point out. For example, a business could be charged with racial discrimination if the percentage of minority employees on its payroll did not match the racial makeup of the surrounding area.
The Supreme Court, in unanimously approving this definition of racial discrimination in Griggs v. Duke Power Co. (1971), said, “The administrative interpretation of the act by the enforcing agency is entitled to great deference.” Thus, the government was able to use a law that unequivocally outlawed racial quotas to create a system based on racial quotas, all with the enthusiastic approval of the Supreme Court.
As pointed out in part three, the Court further intruded on private-property rights with its Wickard decision, which restricted the amount of wheat a farmer could grow on his own property, despite the fact that the wheat was intended for personal consumption. While Wickard generally is discussed in the context of agriculture and interstate commerce (indeed, the Court used the Constitution’s Commerce Clause to justify its decision), it can also be termed an assault on private property.
Keep in mind that at the time of the decision the United States was at war with Germany and Japan, and much of the agricultural harvest was being diverted to war uses. To deal with the huge shortages of fresh vegetables, Americans planted what were termed “Victory Gardens,” turning spaces that might have been devoted to flowers, grass, or even weeds into garden plots. The logic of Wickard easily could have been applied to the Victory Gardens, although it is clear that such a move by the authorities would have been tremendously unpopular and would have seriously damaged home-front morale.
Changing the purpose of law
Thus, we have a clear example of laws on the books that would be enforced only selectively, and laws that lent themselves to that kind of abuse were an unfortunate legacy of the New Deal and the Roosevelt Supreme Court. The Framers would have been horrified to see the Commerce Clause, put into place in order to guarantee free trade among the states, being used, in effect, to criminalize a farmer’s growing wheat on his own property for his own consumption. Paul Rosenzweig writes,
At its inception, criminal law was directed at conduct that society recognized as inherently wrongful and, in some sense, immoral. These acts were wrongs in and of themselves (malum in se), such as murder, rape, and robbery. In recent times the reach of the criminal law has been expanded so that it now addresses conduct that is wrongful not because of its intrinsic nature but because it is a prohibited wrong (malum prohibitum) — that is, a wrong created by a legislative body to serve some perceived public good. These essentially regulatory crimes have come to be known as “public welfare” offenses.
While no one went to jail in the Wickard case, it clearly fell within the “public welfare” category that Rosenzweig describes. In this case, the government wanted to keep wheat prices high in order to provide benefits to a certain political constituency, and anything that might interfere with that goal (and one doubts it could be considered a “lofty” goal at that) was illegal. In fact, since the New Deal Supreme Court effectively revolutionized law in this country (or at least provided the push to topple the legal order created by the nation’s Founders), the growth of laws, both civil and criminal, at all levels of government has occurred within the “public welfare” classification.
The emphasis on the “public purpose” of the law is yet another example of how the courts have become collectivist in their focus. As Rosenzweig points out, in the past criminal law ultimately dealt with harm wrongfully imposed on individuals. Today, however, the victim is “society”: even if no individual is harmed, as was the situation in Wickard, the modern way of interpreting the law holds that a wrong still has been committed.
War and the Constitution
The last time Congress declared war was immediately after the attack on Pearl Harbor in December 1941. It is obvious, however, that World War II was not the last war involving U.S. troops. Since the end of hostilities in the late summer of 1945, soldiers from this country have been involved in wars or military actions in Korea, Vietnam, Cambodia, Lebanon, Panama, Grenada, Kuwait, Somalia, Bosnia, Afghanistan, and Iraq.
Korea, Vietnam, Cambodia, Grenada, Panama, Kuwait, Afghanistan, and Iraq involved full-scale ground combat, and many of the conflicts have included the use of bombers and fighter aircraft, not to mention naval forces.
However, while thousands of American troops have died in these wars, the one thing they all have in common is that Congress declared war in none of them. All of the post-World War II wars involving American troops have been executive wars: the president has decided to commit troops, and the Pentagon has dutifully carried out his orders. Furthermore, the presidents involved have come from both political parties, so this development is not a partisan issue.
While the Constitution gave Congress the power to declare war, the executive branch has in effect usurped this power as well, with little protest from the legislative branch and with the clear approval of the Supreme Court. Before the Great Depression, it would have been almost unthinkable for the president to openly involve U.S. troops in continuous wartime operations without a declaration of war from Congress; the New Deal thinking changed all of that.
While Roosevelt sought and received a declaration of war from Congress after Pearl Harbor, he already had been committing American goods, equipment, and personnel to the British side for about two years, not to mention having recruited and funded American pilots (the Flying Tigers) to fight Japanese forces in China. Even Woodrow Wilson would not have been able to do such a thing.
Thus, each time a U.S. soldier dies in Iraq or Afghanistan, that death has come about indirectly because Congress agreed to transfer much of its power to the executive branch, clearly in violation of the constitutional principle of nondelegation. Unfortunately, the Supreme Court has declined to enforce the Constitution’s provisions on war, and this succession of presidential wars since 1950 has been the unhappy result.
This article has covered only a small portion of the post-New Deal Supreme Court’s crimes against the Constitution. For lack of space, I have not dealt with the Court’s rulings on asset forfeiture, which have accompanied the government’s “war on drugs,” nor with the Court’s various assaults on free speech, religious beliefs, and civil liberties.
To gauge fully the effect that the New Deal has had on our lives through the Supreme Court would require a volume so large that writing it would deforest North America. However, two consistent themes have emerged over the past seven decades. The first is that private property is considered an anachronism, useful only insofar as it serves as a mechanism for raising government tax revenue. The second is that the U.S. Supreme Court, and indeed all U.S. courts, federal and state, are expected to be movers and arbiters of social change. To put it bluntly, the courts see themselves as having a mission to implement the policies of the Progressive Era. Unfortunately, what the political classes see as “progressive” actually is little more than a regression into a tyranny in which the state has absolute power.