What Was Inflation During The Great Depression?

Inflation in the United States peaked during the twentieth century in the years following World Wars I and II, and again in the 1970s. The Great Depression of the 1930s saw the lowest inflation; in fact, prices fell outright, which is deflation.

In the 1930s, what was the rate of inflation?

In 1930, the inflation rate was -2.34 percent, well below the average annual inflation rate of 3.13 percent between 1930 and 2022.

During the Great Inflation, what was the rate of inflation?

The Great Inflation was the defining macroeconomic event of the second half of the twentieth century. Over the nearly two decades it lasted, the worldwide monetary system built during World War II was abandoned, four economic recessions occurred, two severe energy shortages struck, and wage and price controls were implemented for the first time in peacetime. It was “the worst failure of American macroeconomic policy in the postwar period,” according to one eminent economist (Siegel 1994).

However, that failure ushered in a paradigm shift in macroeconomic theory and, ultimately, the laws that now govern the Federal Reserve and other central banks across the world. If the Great Inflation was the result of a major blunder in American macroeconomic policy, its defeat should be celebrated.

Forensics of the Great Inflation

Inflation was a bit over 1 percent per year in 1964, and it had been in that vicinity for the preceding six years. Inflation began to rise in the mid-1960s and reached a high of more than 14 percent in 1980. By the second half of the 1980s, it had fallen back to an average of only about 3.5 percent.

While economists dispute the relative importance of the causes that have spurred and sustained inflation for more than a decade, there is little disagreement about where it comes from. The actions of the Federal Reserve, which allowed for an excessive expansion in the quantity of money, were at the root of the Great Inflation.

To understand this episode of especially poor policy, particularly monetary policy, it helps to tell the story in three distinct but related parts: a kind of forensic examination of the motive, the means, and the opportunity for the Great Inflation to happen.

The Motive: The Phillips Curve and the Pursuit of Full Employment

The first section of the story, the motivation behind the Great Inflation, takes place in the immediate aftermath of the Great Depression, a similarly momentous period for macroeconomic theory and policy. Following World War II, Congress turned its attention to legislation it hoped would foster greater economic stability. The most prominent of these laws was the Employment Act of 1946. Among other things, the act declared it the federal government’s responsibility to “promote maximum employment, production, and purchasing power” and called for greater coordination between fiscal and monetary policy. The Federal Reserve’s current dual mandate to “maintain long-run growth of the monetary and credit aggregates…so as to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates” is rooted in this legislation (Steelman 2011).

The orthodoxy that guided policy in the postwar era was Keynesian stabilization policy, driven in part by the painful memory of the unprecedentedly high unemployment in the United States and around the world during the 1930s. The fundamental focus of these policies was the regulation of aggregate expenditure (demand) through the fiscal authority’s spending and taxation policies and the central bank’s monetary policies. The notion that monetary policy can and should be used to manage aggregate spending and stabilize economic activity remains a widely held belief that governs the Federal Reserve’s and other central banks’ operations today. However, one crucial and incorrect assumption in the implementation of stabilization policy in the 1960s and 1970s was that unemployment and inflation had a stable, exploitable relationship. In particular, it was widely assumed that permanently lower unemployment rates could be “purchased” with somewhat higher inflation rates.

The idea that the “Phillips curve” represented a longer-term trade-off between unemployment, which was very damaging to economic well-being, and inflation, which was sometimes viewed as more of an inconvenience, was an appealing assumption for policymakers who hoped to pursue the goals of the Employment Act.

But the stability of the Phillips curve was a disastrous assumption, one that economists Edmund Phelps (1967) and Milton Friedman (1968) warned against. “If the statical ‘optimum’ is chosen,” Phelps wrote, “it is logical to assume that participants in product and labor markets will learn to expect inflation…and that, as a result of their rational, anticipatory behavior, the Phillips Curve will progressively shift upward…” (Phelps 1967; Friedman 1968). In other words, the trade-off between lower unemployment and higher inflation that the authorities sought would almost certainly prove a false bargain, requiring ever higher inflation to maintain.
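
The Friedman-Phelps argument can be summarized with the standard expectations-augmented Phillips curve from the textbook literature; this is a generic formulation added here for illustration, not an equation quoted from either author. Writing inflation as \pi, expected inflation as \pi^{e}, unemployment as u, and the natural rate as u^{*}:

    \pi = \pi^{e} - \beta\,(u - u^{*}), \qquad \beta > 0.

Holding unemployment below u^{*} requires inflation above what people expect; once expectations catch up, \pi^{e} rises, the curve shifts upward, and the same unemployment gap demands still higher inflation, which is exactly the false bargain described above.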

The Means: The Collapse of Bretton Woods

If the Federal Reserve’s policies had been well anchored, chasing the Phillips curve in search of lower unemployment would not have been possible. And in the 1960s, the US dollar was anchored, however shakily, to gold through the Bretton Woods agreement. So the collapse of the Bretton Woods system, which severed the US dollar’s last link to gold, plays a part in the story of the Great Inflation.

During World War II, the world’s industrial nations agreed to a worldwide monetary system that they hoped would promote global trade and offer greater economic stability and peace. The Bretton Woods system, hammered out by forty-four nations at Bretton Woods, New Hampshire, in July 1944, established a fixed rate of exchange between the world’s currencies and the US dollar, with the dollar itself linked to gold.

The Bretton Woods system was flawed in its implementation, however. Chief among its faults was the attempt to maintain fixed parity among world currencies that was incompatible with those countries’ domestic economic goals. Many countries, it turned out, were pursuing monetary policies that promised to move them up the Phillips curve toward a more favorable unemployment-inflation trade-off.

As the world’s reserve currency, the US dollar faced an additional problem. As global trade grew, so did the demand for US dollar reserves. For a time, that demand was met by an expanding balance of payments deficit, and foreign central banks accumulated ever larger dollar reserves. Eventually, the stock of dollar reserves held abroad exceeded the US gold stock, which meant the United States could not maintain full convertibility at the existing gold price, a fact that foreign governments and currency speculators were quick to notice.

As inflation drifted higher in the second half of the 1960s, US dollars were increasingly converted into gold, and in the summer of 1971 President Richard Nixon halted the exchange of dollars for gold by foreign central banks. Over the next two years, the short-lived Smithsonian Agreement attempted to salvage the global monetary system, but the new arrangement fared no better than Bretton Woods and quickly fell apart. The worldwide monetary system that had been in place since World War II had come to an end.

With the last link to gold severed, most of the world’s currencies, including the US dollar, were now completely unanchored. Except during periods of global crisis, this was the first time in history that the currencies of the industrialized world rested on an irredeemable paper money standard.

The Opportunity: Fiscal Imbalances, Energy Shortages, and Bad Data

The US economy was in flux in the late 1960s and early 1970s. President Lyndon B. Johnson’s Great Society programs brought major spending initiatives across a broad range of social priorities at a time when the US fiscal situation was already strained by the Vietnam War. These growing fiscal imbalances complicated monetary policy.

To avoid monetary policy actions that might interfere with the Treasury’s funding plans, the Federal Reserve followed an “even keel” policy. In practice, this meant the central bank would not change policy, and would hold interest rates steady, during the period between the announcement of a Treasury issue and its sale to the market. Under normal conditions, Treasury issuance was infrequent, and the Fed’s even-keel policy did not significantly interfere with the implementation of monetary policy. But as debt issuance became more frequent, the Federal Reserve’s adherence to the even-keel principle increasingly constrained the conduct of monetary policy (Meltzer 2005).

A more disruptive force was the periodic energy crises, which drove up oil prices and sapped US growth. The first was the Arab oil embargo that began in October 1973 and lasted about five months. Over this period, crude oil prices quadrupled to a plateau that held until the Iranian revolution brought a second energy crisis in 1979. The second crisis tripled the price of oil.

In the 1970s, economists and policymakers began to sort increases in aggregate prices into different types of inflation. “Demand-pull” inflation was the direct influence of macroeconomic policy, and of monetary policy in particular. It resulted from policies that produced spending in excess of what the economy could supply without pushing it beyond its ordinary productive capacity and pulling more expensive resources into play. But inflation could also be pushed higher by supply disruptions, notably in the food and energy markets (Gordon 1975). This “cost-push” inflation was likewise passed through to consumers in the form of higher retail prices.

Inflation driven by the growing price of oil was mainly beyond the control of monetary policy, according to the central bank. However, the increase in unemployment that occurred as a result of the increase in oil prices was not.

The Federal Reserve accommodated huge and rising budget imbalances and leaned against the headwinds created by energy costs, motivated by a duty to generate full employment with little or no anchor for reserve management. These policies hastened the money supply expansion and increased overall prices without reducing unemployment.

Policymakers were also hampered by faulty data (or, at the very least, a lack of understanding of the facts). Looking back at the data available to policymakers in the run-up to and during the Great Inflation, economist Athanasios Orphanides found that the real-time estimate of potential output was significantly overstated, while the estimate of the unemployment rate consistent with full employment was significantly understated. To put it another way, officials were probably underestimating the inflationary effects of their measures as well. In reality, they couldn’t continue on their current policy path without rising inflation (Orphanides 1997; Orphanides 2002).

To make matters worse, the Phillips curve, whose stability was an important guide for the Federal Reserve’s policy decisions, began to shift.

From High Inflation to Inflation Targeting: The Conquest of US Inflation

Friedman and Phelps were correct. The once-stable inflation-unemployment trade-off proved unstable. Policymakers’ ability to control any “real” variable was fleeting. That included the unemployment rate, which oscillated around its “natural” level. The trade-off that policymakers had hoped to exploit did not exist.

As businesses and households came to appreciate, and indeed anticipate, rising prices, any trade-off between inflation and unemployment became a less favorable exchange, until in time both inflation and unemployment reached unacceptably high levels. This became the era of “stagflation.” When this story began in 1964, inflation was 1 percent and unemployment was 5 percent. Ten years later, inflation would be over 12 percent and unemployment above 7 percent. By the summer of 1980, inflation was near 14.5 percent and unemployment over 7.5 percent.

Federal Reserve officials were not blind to the rising inflation, and they were well aware of the dual mandate, which required monetary policy to be calibrated to deliver both full employment and price stability. Indeed, the Employment Act of 1946 was recodified in 1978 by the Full Employment and Balanced Growth Act, better known as the Humphrey-Hawkins Act after the bill’s authors. Humphrey-Hawkins charged the Federal Reserve with pursuing full employment and price stability, required the central bank to set growth targets for several monetary aggregates, and required a semiannual Monetary Policy Report to Congress. When full employment and inflation collided, however, the employment half of the mandate appeared to take precedence. As Fed Chairman Arthur Burns would later claim, full employment was the first priority in the minds of the public and the government, if not also at the Federal Reserve (Meltzer 2005). But it also seems clear there was a general belief that confronting the inflation problem head-on would have been too costly to the economy and jobs.

There had been earlier attempts to reduce inflation without the costly side effect of higher unemployment. Between 1971 and 1974, the Nixon administration implemented wage and price controls in three phases. Those controls only temporarily slowed the rise in prices while exacerbating shortages, particularly of food and energy. The Ford administration fared no better. After declaring inflation “enemy number one,” President Gerald Ford introduced the Whip Inflation Now (WIN) program in 1974, which consisted of voluntary measures to encourage greater thrift. It was a colossal flop.

By the late 1970s, the public had come to expect an inflationary bias in monetary policy, and it was increasingly unhappy with inflation. Survey after survey in the latter half of the 1970s showed deteriorating public confidence in the economy and in government policy, and inflation was frequently singled out as a particular scourge. Interest rates had been on a secular rise since 1965 and jumped sharply higher as the 1970s came to a close. During this time, business investment stagnated, productivity faltered, and the nation’s trade balance with the rest of the world worsened. Inflation was widely viewed as either a significant contributor to the economic malaise or its primary cause.

However, once the country was in the midst of unacceptably high inflation and unemployment, officials were confronted with a difficult choice. Combating high unemployment would almost surely drive inflation even higher, while combating inflation would almost certainly cause unemployment to rise much more.

Paul Volcker, formerly president of the Federal Reserve Bank of New York, became chairman of the Federal Reserve Board in 1979. When he took office in August, year-over-year inflation was running above 11 percent and national unemployment was just under 6 percent. By this time, it was generally accepted that reducing inflation required greater control over the growth rate of reserves in particular and of broad money more generally. The Federal Open Market Committee (FOMC) had already begun setting targets for the monetary aggregates, as required by the Humphrey-Hawkins Act. But it was clear that with the new chairman, attitudes were changing and that stronger measures to control the growth of the money supply were needed. In October 1979, the FOMC announced that it would begin targeting reserve growth rather than the fed funds rate as its policy instrument.

Fighting inflation was now seen as essential to achieving both objectives of the dual mandate, even if it temporarily disrupted economic activity and, for a time, pushed unemployment higher. “My core idea is that over time we have no choice but to deal with the inflationary situation, since inflation and the unemployment rate go together. Isn’t that what the 1970s taught us?” Volcker declared in early 1980 (Meltzer 2009, 1034).

While not perfect, tighter control of reserve and money growth over time produced the desired slowdown in inflation. This stricter reserve management was aided by the imposition of credit controls in early 1980 and by the Monetary Control Act. Over the course of 1980, interest rates spiked, fell briefly, and then spiked again. Lending activity fell, unemployment rose, and the economy entered a brief recession between January and July. Inflation fell but remained high even as the economy recovered in the second half of 1980.

But the Volcker Fed kept up the fight against high inflation with a combination of higher interest rates and even slower reserve growth. The economy entered a second recession in July 1981, and this one proved more severe and protracted, lasting until November 1982. Unemployment peaked at nearly 11 percent, but inflation kept falling, and by the end of the recession year-over-year inflation was back below 5 percent. As the Fed’s commitment to low inflation gained credibility, unemployment retreated and the economy entered a period of sustained growth and stability. The Great Inflation was over.

By this time, macroeconomic theory had undergone a transformation, informed in large part by the economic lessons of the era. The central role of public expectations in the interplay between economic policy and economic performance became standard in macroeconomic models. The importance of time-consistent policy choices, policies that do not sacrifice long-term prosperity for short-term gains, and of policy credibility became widely recognized as necessary for good macroeconomic outcomes.

Today’s central banks recognize that price stability is critical to sound monetary policy, and several, like the Federal Reserve, have set specific numerical inflation targets. These numerical inflation targets have reinstated an anchor to monetary policy to the extent that they are credible. As a result, they have improved the transparency of monetary policy decisions and reduced uncertainty, both of which are now recognized as critical preconditions for achieving long-term growth and maximum employment.

During the Great Depression, did prices rise?

  • During the Great Depression in the United States, between 1929 and 1933, real GDP fell by more than 25%, the unemployment rate rose to 25%, and prices fell by more than 9% in both 1931 and 1932, and by nearly 25% overall.
  • The Great Depression is still a mystery today. The origins of this severe economic downturn, as well as why it lasted so long, are still hotly debated topics in economics.
  • One explanation for the Great Depression is a decline in the economy’s ability to produce goods and services. The other main explanation points to a decline in the economy’s overall demand for goods and services.

What is creating 2021 inflation?

In 2021, fractured supply chains combined with strong consumer demand for goods such as used vehicles and construction materials to produce the fastest annual price increase since the early 1980s.

What was the value of a dollar in 1930?

In today’s money, $100 in 1930 has the purchasing power of around $1,698.90, an increase of $1,598.90 over 92 years. Between 1930 and the present, prices rose at an average annual inflation rate of 3.13 percent, producing a cumulative price increase of 1,598.90 percent.
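
As a rough check on these figures, compounding the stated 3.13 percent average rate over 92 years reproduces the quoted values to within rounding. The short Python sketch below is written for this article (it is not taken from any official inflation calculator) and simply applies the compound-growth formula:

    # Compound the stated average annual inflation rate (3.13%) over 92 years (1930-2022).
    # The rate and span come from the text above; the small gap versus the quoted
    # $1,698.90 reflects rounding of the average rate to two decimal places.
    avg_rate = 0.0313
    years = 92
    factor = (1 + avg_rate) ** years          # cumulative price-level multiple, about 17.0
    print(f"$100 in 1930 is equivalent to about ${100 * factor:,.2f} today")
    print(f"Cumulative price increase: {(factor - 1) * 100:,.1f}%")  # about 1,604%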

How much inflation has there been since 1979?

Between 1979 and 2018, the average annual inflation rate was 3.23 percent. Compounded over those 39 years, that works out to a cumulative price increase of 246.05 percent.
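
The cumulative figure follows directly from compounding the average annual rate; as a rough check (using the rounded 3.23 percent, so the result differs slightly from the quoted 246.05 percent):

    (1 + 0.0323)^{39} \approx 3.455, \qquad (3.455 - 1) \times 100\% \approx 245.5\%.

Carrying the average rate to more decimal places closes the small gap with the 246.05 percent cited above.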

To put this inflation into context, if we had invested $100 in the S&P 500 index in 1979, our investment would now be worth nearly $1,500.

What caused inflation in the 1970s?

  • Rapid inflation occurs when the prices of goods and services in an economy rise quickly, eroding the purchasing power of savings.
  • In the 1970s, the United States experienced some of the highest rates of inflation in its recent history, with interest rates rising to nearly 20%.
  • This decade of high inflation was fueled by central bank policy, the closing of the gold window, Keynesian economic policies, and market psychology.

Was inflation present prior to the Great Depression?

With the exception of the world wars, the Great Depression, and a few brief episodes, inflation in the United States during the twentieth century mostly hovered around 5 percent or less – except during the 1970s. Year-over-year inflation rose from about 2 percent in 1965 to 14 percent in 1980.