President Johnson’s 1964 ‘Great Society’ program began an era of dynamic federal-level socio-economic reform. Starting with the passage of the Economic Opportunity Act and followed by Medicaid legislation in 1965, the Johnson administration’s War on Poverty aimed to eradicate poverty by creating a more equitable economy. Johnson’s economists envisioned that this path would afford opportunities to improve living conditions through personal self-improvement, not merely government handouts.
Despite honorable intentions, however, the federal initiatives LBJ started have fallen short of their objectives. Instead of lifting recipients out of poverty, these programs created a dependence on government assistance. Moreover, they laid the groundwork for generations of children raised on government financial assistance.
By 2016, the poverty rate in America was 12.7 percent, only about three percentage points lower than in 1967, three years after the War on Poverty began. Yet in raw numbers, twice as many people received welfare in 2016 as in 1967: approximately 100 million individuals, nearly one in three Americans.
Today, little remains of the Great Society’s aspirational emphasis on self-support and human dignity. Instead, welfare in its present form provides material support, transferring resources to help individuals obtain goods and services they cannot purchase independently. In practice, it has evolved into a multiprogram model of income redistribution that deepens recipient dependence and, ultimately, generational poverty.
The failure to achieve welfare’s original goal stems from what researchers call a “moral hazard.” In short, a moral hazard occurs when people are shielded from the consequences of a risk and therefore have less incentive to avoid it. Over time, this hazard eats away at the integrity and viability of the original solution. In the case of federal welfare, an unintended consequence of the program’s governing regulations has been to discourage the drive to become economically self-sufficient through work and marriage.
This raises the question: why has over $1 trillion in annual federal spending on welfare not demonstrated progress toward eliminating poverty?
One popular belief is that welfare rolls are extensive and poverty remains high because inefficient bureaucracies absorb most welfare spending, with minimal benefit reaching the poor; however, the facts do not support this assertion. The reality is that welfare reaches approximately 90% of eligible poor and low-income persons and families in the form of tangible benefits and services.
While it is good news that government aid is delivered efficiently, it is also a concern, since it indicates that current benefit policies themselves perpetuate, rather than alter, poverty.
So, how can we tackle generational poverty?
We can start with the idea that the government, in my opinion, incorrectly defines poverty. Put simply, families are identified as poor if their income falls below the federal poverty level; in 2020, that meant a family of three earning less than $21,700. On its face, an income threshold for benefit eligibility makes sense, but the measure fails to incorporate the economic value of welfare benefits themselves, creating an incomplete economic picture.
In counting a family’s income, nearly all welfare contributions – such as food aid, housing subsidies, and health care benefits – are excluded. Of the $449 billion spent on direct cash, food, housing, and medical care for families with children in 2016, only $14.7 billion (3.3 percent) was counted as income for purposes of measuring child poverty.
The takeaway: Nearly all government spending reaches eligible persons and families, but almost none of these payments is counted as income when measuring economic inequality. If they were counted, the percentage in poverty would decline and the nation’s income gap would shrink, presenting a picture of welfare as already lifting recipients out of material poverty.
We should also consider that one central element in the declining capacity for self-support is the collapse of marriage. As the War on Poverty expanded benefits, welfare began to substitute for the economic support once provided by marriage, resulting in a decline in two-parent welfare families.
When Johnson launched the War on Poverty, seven percent of American children were born out of wedlock; as of 2015, the figure was over 40 percent. As fathers left the home, the need for welfare to support single mothers increased. The War on Poverty thus created a destructive feedback loop: welfare inadvertently promoted the decline of marriage, thereby generating the need for more welfare.
Today, with the growth of single-parent homes, unwed childbearing is the dominant cause of child poverty. The overwhelming majority of these births occur to women in their early twenties, not to high-school teenagers. If poor women who give birth outside of marriage were married to their children’s fathers, estimates suggest this alone would lift two-thirds of them out of poverty. Approximately 80 percent of all long-term child poverty occurs in single-parent homes. Welfare either fails to encourage or actively discourages self-support through work and marriage.
For example, consider a single mother (call her Mary) with two school-age children who works full-time, 52 weeks a year, at the federal minimum wage of $7.25 per hour, earning approximately $13,800. That income places Mary in poverty, so she is eligible for government benefits that nearly double her economic position to approximately $26,500 – above the poverty line for her family. Counting Mary’s earnings and benefits together, her effective hourly wage is nearly $13.
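To make the arithmetic concrete, here is a minimal back-of-envelope sketch of Mary’s position. The dollar figures are the ones cited above; the 2,080-hour work year (40 hours a week for 52 weeks) is an assumption.

```python
# Back-of-envelope check of Mary's effective hourly wage.
# Assumption: a full-time year is 40 hours/week x 52 weeks = 2,080 hours.
# Dollar figures are taken from the article, not computed here.

HOURS_PER_YEAR = 40 * 52          # 2,080 hours

earnings = 13_800                 # approximate annual earnings at $7.25/hour
earnings_plus_benefits = 26_500   # earnings plus government benefits

effective_wage = earnings_plus_benefits / HOURS_PER_YEAR
print(f"Effective hourly wage: ${effective_wage:.2f}")  # -> $12.74, i.e. nearly $13
```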
This brings us to the idea of the $15 minimum wage.
Because Mary’s adjusted hourly wage is already nearly $13, a $15 minimum wage proposal is a winner from multiple perspectives. Mary’s case illustrates the reality that many families work full-time yet still earn an unsustainable wage. In 2017, approximately 55% of families receiving SNAP benefits had someone in the family who was earning wages. With a $15-per-hour wage, however, families like Mary’s would no longer need welfare.
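Under the same assumed 2,080-hour year, a quick sketch of what a $15 wage implies against the $21,700 poverty line for a family of three cited earlier:

```python
# Sketch: full-time annual earnings at a $15/hour minimum wage.
# Same assumption as above: 2,080 working hours per year.

HOURS_PER_YEAR = 40 * 52

annual_earnings = 15 * HOURS_PER_YEAR   # $31,200
poverty_line = 21_700                   # 2020 federal poverty level, family of three

print(f"Annual earnings at $15/hour: ${annual_earnings:,}")
print(f"Margin above the poverty line: ${annual_earnings - poverty_line:,}")
# -> $31,200 earned; roughly $9,500 above the line before counting any benefits
```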
The concern critics have with raising the minimum wage to $15 per hour is that it could cost people jobs. This is grounded in the simple reasoning that if companies have to pay more for a single job, there will be fewer jobs on the market.
However, many academic studies counter this assumption. For example, a study by Alan Krueger and David Card of New Jersey’s 1992 minimum-wage increase for fast-food workers found that the higher wages were accompanied by a 13% growth in employment opportunities. Over the intervening years, this finding has been replicated several times. More recently, in 2019, the US Census Bureau found that minimum-wage increases had a long-term positive impact on the incomes of lower-wage earners’ families without declines in overall employment.
In closing, I envision the $15-per-hour minimum wage as the means of getting off this runaway welfare train. While raising the minimum wage puts pressure on those paying salaries, the counterbalance is that the public sector has already begun moving toward higher entry-level pay. And with more disposable income in the average family’s pocket, consumer spending can help more American businesses sustain higher wages.
A $15 minimum wage shifts the responsibility to the family, not the government. For each family removed from welfare through earned income, the upside is twofold: government welfare expenditures decline, and a family is on its way to self-sufficiency. Furthermore, working families can also begin to take advantage of the Earned Income Tax Credit, which provides additional income through reduced taxes and refunds.
So, the title of this article asks, can $15 an hour solve poverty? On its own, the answer is no. However, $15 per hour will set the conditions for a long-term solution.