Proposal blog

Research on social stratification and the distribution of resources has traditionally focused on two concepts: income inequality and wealth inequality. Income inequality refers to a stratified distribution of income, such as wages received for labor. Wealth inequality refers to the stratified distribution of wealth, including financial assets ranging from equities to real estate. Income is thought to be distributed in a top-heavy fashion and to vary periodically in response to market shocks and political pressures, while wealth is thought to be even more top-heavy and to be passed through generations in a way that keeps its distribution largely unchanged. Generally speaking, wealth inequality is thought to matter much more than income inequality for securing a more equitable distribution of resources, although Piketty notes that financial returns on wealth are currently relatively low, which may explain why income inequality is increasing: as wealth pays less, the rich are forced to pay themselves more in income in order to maintain their relative position.

Debt is rarely discussed in sociology, and it has served as a monkey wrench in many analyses of wealth and income distributions because it can signal both increased earnings potential and an over-reliance on borrowing that could lead to financial disaster. Nowhere is this dual nature more evident than in the case of student loans. Going to college can result in a lifetime of increased earnings, but paying for it on credit can also result in decades of loan payments. College tuition and reliance on lending have skyrocketed since the turn of the century, producing a cohort of students so indebted that writers have begun calling them “generation debt.” Servicing these loans guarantees reduced income (payments must be made), and with that reduction it is entirely plausible that the cohort’s rate of wealth accumulation is slowed enough to push the distribution of wealth to even more inequitable levels.

Therefore, I propose studying the effect of student debt load on wealth accumulation among students who borrow. The analysis is necessarily divided into three populations of interest: those who accumulated student loans but did not graduate (who are expected to possess the least wealth), those who accumulated student loans and did graduate, and, finally, both groups combined, to determine the net effect of student loans on wealth accumulation. This will be achieved through secondary data analysis, drawing on the National Longitudinal Survey of Youth 1997 cohort (NLSY97) and its subsequent follow-up waves. This cohort is ideal for the study because its members were starting college just as student lending truly began to explode in magnitude, so their experiences will likely resemble those of the generations that followed them. The analysis will be performed using regression, controlling for factors previously identified as significant to wealth accumulation rates, such as race, age, and parental income; including these variables improves the internal validity of the study, since excluding them might lead to the identification of spurious relationships. A rough sketch of the intended model appears below.
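To make the modeling plan concrete, here is a minimal sketch of the kind of regression described above, written in Python with pandas and statsmodels. The file name and column names (net_worth, student_debt, graduated, race, age, parental_income) are placeholders, not actual NLSY97 variable codes, which would need to be mapped during data preparation.

```python
# Minimal sketch of the proposed regression, assuming an NLSY97 extract has
# already been exported to CSV. All file and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract: one row per respondent, with wealth measured at a
# follow-up wave and debt measured at (or shortly after) college exit.
df = pd.read_csv("nlsy97_extract.csv")

# Restrict to respondents who borrowed for college, per the proposal.
borrowers = df[df["student_debt"] > 0]

# Wealth regressed on debt load, with controls previously identified as
# significant to wealth accumulation (race, age, parental income).
model = smf.ols(
    "net_worth ~ student_debt + C(race) + age + parental_income",
    data=borrowers,
).fit()
print(model.summary())

# The same specification can be re-run on the graduate and non-graduate
# subgroups to compare the three populations of interest.
graduates = borrowers[borrowers["graduated"] == 1]
non_graduates = borrowers[borrowers["graduated"] == 0]
```

Fitting the same specification to the graduate subset, the non-graduate subset, and the pooled sample mirrors the three populations of interest described in the proposal.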

 

Secondary data analysis blog

Secondary data analysis typically applies quantitative statistical techniques or qualitative coding techniques to previously collected data. Data sources can be large omnibus surveys, government statistics, data collected by previous researchers, or even popular publications. This sort of research design carries both advantages and disadvantages.

The advantages of secondary data analysis primarily center on how easily research projects can be produced with existing data. Data collection is often the most time-consuming and expensive stage of a research project, so the ability to skip this step increases our ability to produce new knowledge. Furthermore, many large-scale data collection projects exist at a scale that most researchers could never acquire funding for, such as the GSS or even the US Census. The Internet age has enabled even larger-scale data collection, often performed by tech firms, allowing researchers to begin looking at big data as well. Appealingly, these projects rarely have serious ethical concerns attached to them, as there are no data collection or experimental stages, although considerations must be made regarding confidentiality when accessing certain data sources. Most repositories of secondary data restrict access to data that might compromise confidentiality, but researchers still need to be mindful of their sources.

That said, there are also disadvantages attached to the use of secondary data sources. Most obviously, researchers cannot direct data collection efforts. This restricts a researcher’s ability to investigate certain subjects when performing secondary data analysis, and it carries the added danger of focusing researchers’ attention too narrowly on subjects already covered by existing secondary data sources. Furthermore, the distance between the researcher and the data collection can itself be a disadvantage: presumably, researchers involved in the data collection stages of their own projects are more familiar with the scope, strengths, and weaknesses of their data. While many secondary data sources are well suited to content analysis, few are appropriate for other qualitative techniques, meaning that secondary data analysis produces a disproportionate number of quantitative publications. This can obscure the observation and understanding of causal mechanisms and privilege statistical methods within the field. Finally, secondary data analysis can lead to over-reliance on specific samples, such as the GSS, although random selection and sampling techniques should mitigate this effect.

The advantages and disadvantages of secondary data analysis are well exemplified by Roscigno, Karafin, and Tester’s analysis of discrimination in the housing market, “The Complexities and Processes of Racial Housing Discrimination” (http://www.jstor.org.proxy.library.vcu.edu/stable/10.1525/sp.2009.56.1.49). Perhaps most interesting is the novel data source: verified complaints received by the Ohio Civil Rights Commission. Previous attempts to document this phenomenon typically relied upon audit studies and large-scale statistical techniques that tried to document racial segregation and modern redlining practices. Roscigno, Karafin, and Tester’s analysis gets right to the heart of the matter by focusing on verified complaints made by real people, demonstrating the existence of discrimination in the housing market. That is not to say their data source is perfect. Because everyone in the study took the time to file an official complaint, many of them are likely outliers, and the typical nature of housing discrimination might not be captured here. Similarly, while it might be interesting to compare the rate of civil rights complaints in Ohio with that of other states, it is difficult to generalize this information to a broader rate of housing discrimination.

 

Evaluation research blog

The primary goal of evaluation and policy research is to evaluate some specific aspect of a policy or program: most often the effectiveness of the program, but research questions can also concern efficiency or how to improve a program. Most often these projects evaluate public programs, but social scientists might also evaluate the efforts of private entities. Traditionally, public policy analysis uses a model of inputs (the resources required for a program), process (how the program uses the inputs to create the desired change), outputs (the product or service produced by the process), and outcomes (the long-term impact and consequences of the program). Various types of designs exist for these projects. An evaluability assessment might be conducted first to determine whether an assessment is even possible. A needs assessment might be conducted to determine what kinds of needs a population has that could be served by a particular policy or program. A process evaluation might be conducted to determine how a policy or program actually delivers its outputs, while a formative evaluation might be carried out to determine how best to improve a program’s process. An impact evaluation might be undertaken to determine whether the program is actually effective, and, finally, an efficiency analysis might be undertaken to compare costs with benefits.

The value of such evaluation research can be seen in many examples, but I would like to highlight Schutt’s history of the D.A.R.E. program, which attempts to reduce substance use among high school students. D.A.R.E. was implemented in many high schools as a program in which law enforcement officers interact with students to try to reduce substance use rates. Evaluation research determined that D.A.R.E. had no lasting impact on students’ substance use, and as high schools started dropping out of the program, administrators tried to redesign it to be more effective. The deployment of D.A.R.E. was so widespread that an enormous sum of money was wasted supporting the program. While the program has been redesigned, preliminary research suggests that the redesigned version might be similarly useless.

Ethical concerns in evaluation research arise from two primary sources: the tendency of public programs to serve marginalized people, and the complications of working cooperatively with the government. Because this research often focuses on groups such as addicts, children, and the incarcerated, ethical concerns regarding consent and the distribution of benefits naturally arise. Similarly, cooperating with the government can require ceding control over confidential information, and the danger of confidentiality being breached in legal proceedings is particularly high. Boruch offers various guidelines to help perform this research ethically, including minimizing sample size, minimizing the size of the control group, testing only relevant portions of the program or policy, comparing the effects of different amounts of the treatment rather than the absence of the treatment, and varying the treatment across different settings rather than across different individuals in one particular setting. Additionally, federal laws have been passed to prohibit breaches of confidentiality in court, although this remains a dangerous area that researchers must consider carefully before undertaking a project.

An example of such a study can be found in Wyker, Jordan, and Quigley’s “Evaluation of Supplemental Nutrition Assistance Program Education: Application of Behavioral Theory and Survey Validation” (http://dx.doi.org.proxy.library.vcu.edu/10.1016/j.jneb.2011.11.004). Their study is best described as a formative evaluation, considering the conditions necessary for SNAP-Ed programs to change the eating habits of their beneficiaries. Wyker, Jordan, and Quigley rely heavily on previous needs assessments in designing their study, an approach discussed both above and in Schutt’s textbook. Some of the concerns Schutt raises are also visible in the article: the sample was not constructed as representatively as the authors would have liked, and they suggest random sampling or oversampling to ensure that more males and Latinos are present in future studies. This could raise the ethical concerns Schutt notes regarding the distribution of benefits; however, if the sample groups include only people who are already participating in the program, this will likely not be a problem.

 