Proposal blog

Research on social stratification and the distribution of resources has traditionally focused on two concepts: income inequality and wealth inequality. Income inequality refers to a stratified distribution of income, such as wages received for labor. Wealth inequality refers to the stratified distribution of wealth, including financial assets from equities to real estate. Income inequality is thought to be top-heavy and to vary periodically in response to market shocks and political pressures, while wealth is thought to be even more top-heavy and to be passed through generations in a way that keeps its distribution largely unchanged. Generally speaking, wealth inequality is thought to matter much more than income inequality for securing a more equitable distribution of resources, although Piketty notes that financial returns on wealth are relatively low right now, which might be why income inequality is increasing: as wealth pays less, the rich are forced to pay themselves more in income in order to maintain their relative position.

Debt is rarely discussed in sociology, and it has served as a monkey wrench in many analyses of wealth and income distributions because it can signal both increased earnings potential and an over-reliance on borrowing that could lead to financial disaster. Nowhere is this dual nature more evident than in the case of student loans. Going to college can result in a lifetime of increased earnings, but paying for it on credit can also result in decades of loan payments. College tuition and reliance on lending have skyrocketed since the turn of the century, producing a cohort of students so indebted that writers have begun calling them “generation debt.” Servicing those loans guarantees reduced disposable income, and with enough income diverted to payments it is entirely plausible that this cohort’s rate of wealth accumulation is slowed enough to push the distribution of wealth to even more inequitable levels.

Therefore, I propose to study the effects of student debt load on wealth accumulation for students who borrow. The analysis is necessarily divided into three populations of interest: borrowers who did not graduate (who are expected to possess the least wealth), borrowers who did graduate, and, finally, both groups together, to determine the net effect of student loans on wealth accumulation. This will be achieved through secondary data analysis, drawing on the National Longitudinal Survey of Youth 1997 (NLSY97) cohort and the follow-up interviews later conducted with its members. This cohort is ideal for the study because its members were starting college in the same period in which student lending truly began to explode in magnitude, so their experiences will likely resemble those of the generations that followed them. The analysis will be performed using regression, and the regression model will include factors previously identified as significant to wealth accumulation rates, such as race, age and parental income. Including these variables improves the internal validity of the study, as excluding them might lead to the identification of spurious relationships.
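As a rough illustration of the planned model, the sketch below fits an ordinary least squares regression of net worth on debt load plus the control variables named above. It assumes a hypothetical, pre-built extract of NLSY97 data; the file name and column names (net_worth, student_debt, graduated, race, age, parental_income) are placeholders, not the survey’s actual variable codes.

```python
# Minimal sketch of the proposed regression, assuming a hypothetical CSV
# extract of NLSY97 data; all column names below are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

nlsy = pd.read_csv("nlsy97_extract.csv")  # hypothetical pre-built extract

# Net worth regressed on debt load and graduation status, controlling for
# factors previously identified as significant to wealth accumulation.
model = smf.ols(
    "net_worth ~ student_debt + graduated + race + age + parental_income",
    data=nlsy,
).fit()
print(model.summary())
```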

 

Secondary data analysis blog

Secondary data analysis typically involves applying quantitative statistical techniques or qualitative coding techniques to previously collected data. Data sources can be large omnibus surveys, government statistics, data collected by previous researchers or even popular publications. This sort of research design carries both advantages and disadvantages.

The advantages of secondary data analysis primarily center on the ease of producing research projects with existing data. Data collection is often the most time-consuming and expensive stage of a research project, so the ability to skip this step increases our capacity to produce new knowledge. Furthermore, many large-scale data collection projects, such as the GSS or even the US Census, exist at a scale that most researchers could never acquire funding for. The Internet age has enabled even larger data collection efforts, often performed by tech firms, allowing researchers to begin working with big data as well. Appealingly, these projects rarely have serious ethical concerns attached to them, as there are no data collection or experimental stages, although considerations need to be made regarding confidentiality when accessing certain data sources. Most repositories of secondary data restrict access to data that might compromise confidentiality, but researchers still need to be mindful of their sources.

That said, there are also disadvantages attached to the use of secondary data sources. Most obviously, researchers cannot direct data collection efforts. This restricts a researcher’s ability to investigate certain subjects when performing secondary data analysis, and it carries the added danger of focusing researchers’ attention too narrowly on subjects already covered by existing secondary data sources. Furthermore, the distance between the researcher and the data collection can itself be a disadvantage: researchers involved in the data collection stages of their own projects are presumably more familiar with the scope, strengths and weaknesses of their data. While many secondary data sources are well suited for content analysis, few are appropriate for other qualitative techniques, meaning that secondary data analysis produces a disproportionate share of quantitative publications. This can obscure the observation and understanding of causal mechanisms and privilege statistical methods within the field. Finally, secondary data analysis can arguably lead to over-reliance on specific samples, such as the GSS, although statistical theory suggests that random selection and sampling techniques should mitigate this effect.

The advantages and disadvantages of secondary data analysis are well exemplified by Roscigno, Karafin and Tester’s analysis of discrimination in the housing market, “The Complexities and Processes of Racial Housing Discrimination” (http://www.jstor.org.proxy.library.vcu.edu/stable/10.1525/sp.2009.56.1.49). Perhaps most interesting is the novel data source: verified complaints received by the Ohio Civil Rights Commission. Previous attempts to document this phenomenon typically relied upon audit studies and large-scale statistical techniques aimed at documenting racial segregation and modern redlining practices. Roscigno, Karafin and Tester’s analysis gets right to the heart of the matter by focusing on verified complaints made by real people, demonstrating the existence of discrimination in the housing market. That is not to say their data source is perfect, however. Because everyone studied took the time to actually file an official complaint, many of them may be outliers, and the typical nature of housing discrimination might not be depicted here. Similarly, while it might be interesting to compare the rate of civil rights complaints in Ohio with other states, it is difficult to generalize this information to a broader rate of housing discrimination.

 

Evaluation research blog

The primary goal of evaluation and policy research is to evaluate some specific aspect of a policy or program, most often its effectiveness, though research questions can also address efficiency or how to improve a program. Most often these projects evaluate public programs, but social scientists might evaluate the efforts of private entities as well. Traditionally, public policy analysis uses a model of inputs (the resources required for a program), process (how the program uses the inputs to create the desired change), outputs (the product or service produced by the process) and outcomes (the long-term impact and consequences of the program). Various designs exist for these projects: an evaluability assessment might be conducted first to determine whether an evaluation is even possible; a needs assessment might be conducted to determine what needs a population has that could be served by a particular policy or program; a process evaluation might be conducted to determine how a policy or program actually delivers its outputs; a formative evaluation might be undertaken to determine how best to improve a policy or program’s process; an impact evaluation might be undertaken to determine whether the program is actually effective; and, finally, an efficiency analysis might be undertaken to compare costs with benefits.

The value of such evaluation research can be seen in many examples, but I would like to highlight Schutt’s history of the D.A.R.E. program, which attempts to reduce substance use among high school students. D.A.R.E. was implemented in many high schools as a program in which law enforcement officers would interact with students to try to reduce substance use rates. Evaluation research determined that D.A.R.E. had no lasting impact on students’ substance use rates, and as high schools started dropping out of the program, administrators tried to redesign it to be more effective. Given how widespread the deployment of D.A.R.E. was, it is striking how enormous a sum of money was wasted supporting the program. While the program has been redesigned, preliminary research suggests that the redesigned version might be similarly ineffective.

Ethical concerns arise from two primary sources in evaluation research: the tendency of public programs to serve marginalized people, and the complications of working cooperatively with the government. Because this research often focuses on groups such as addicts, children and the incarcerated, ethical concerns regarding consent and the distribution of benefits naturally arise. Similarly, cooperating with the government can require ceding control over confidential information, and the danger of confidentiality being breached in legal proceedings is particularly high. Boruch offers various guidelines to help perform this research ethically, including minimizing sample size, minimizing the size of the control group, testing only relevant portions of the program or policy, comparing the effects of different amounts of the treatment instead of the absence of the treatment, and varying the treatment across different settings rather than across different individuals in one particular setting. Additionally, federal laws have been passed to prohibit breaches of confidentiality in court, although this remains a dangerous area that researchers must consider carefully before undertaking a project.

An example of such a study can be found in Wyker, Jordan and Quigley’s “Evaluation of Supplemental Nutrition Assistance Program Education: Application of Behavioral Theory and Survey Validation” (http://dx.doi.org.proxy.library.vcu.edu/10.1016/j.jneb.2011.11.004). Their study is best described as a formative evaluation, which considers the conditions necessary for SNAP-Ed programs to change the eating habits of their beneficiaries. Wyker, Jordan and Quigley rely heavily on previous needs assessments in the design of their study, a design type discussed both above and in Schutt’s textbook. Some of the concerns raised by Schutt are also visible in the article: the sample was not constructed as representatively as the authors would have liked, and they suggest random sampling or oversampling to ensure that more males and Latinos are present in the study. This could raise the ethical concerns Schutt describes regarding the distribution of benefits; however, if the sample groups include only people who are already participating in the program, this will likely not be a problem.

 

Quantitative data analysis

A frequent refrain in public discourse is the old axiom that “correlation does not equal causation.” Statistics can both muddle and clarify this picture. For example, it is very easy to determine association, linear or otherwise, with modern statistical software. Determining causality is more difficult, however, and while statistical measures can provide a degree of comfort, it is ultimately impossible without careful analysis. Chi-square tests are often used in cross-tabular analysis to assess whether an observed association could plausibly be due to chance. Similarly, careful interpretation of the statistical measures generated in a regression analysis can provide further hints, particularly when a correlation appears strong in the context of a highly predictive model. Perhaps one of the most challenging aspects of evaluating a relationship is the impact of other potential variables. In cross-tabular analysis this can be addressed by adding confounding variables into the analysis, such as intervening variables, which lie in the causal chain between an independent variable and a dependent variable, and extraneous variables, which can be used to test whether a relationship is, in fact, spurious. In a regression analysis, this can be achieved by including these variables in the regression model. Regardless of the statistical method being applied, however, establishing causality demands a deep, critical understanding of the time order and causal mechanism of the proposed relationship.
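A brief sketch of the two approaches described above, using an entirely invented data set (the columns degree, age, employed and income are fabricated for illustration and are not drawn from any study discussed here):

```python
# Crosstab + chi-square, and a regression with a control variable, on made-up data.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "degree": rng.choice(["yes", "no"], size=n),
    "age": rng.integers(22, 65, size=n),
})
df["employed"] = np.where((df["degree"] == "yes") | (rng.random(n) < 0.5), "yes", "no")
df["income"] = 30000 + 15000 * (df["degree"] == "yes") + 300 * df["age"] + rng.normal(0, 5000, n)

# Cross-tabular analysis: could the degree/employment association plausibly
# arise from chance alone?
crosstab = pd.crosstab(df["degree"], df["employed"])
chi2, p, dof, expected = chi2_contingency(crosstab)
print(crosstab)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# Regression analysis: the income/degree relationship with a potential
# confounder (age) included in the model.
model = smf.ols("income ~ degree + age", data=df).fit()
print(model.params)
```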

Sociological studies frequently rely on one of the above statistical techniques (crosstabs and regression analysis), and both can be seen in Devah Pager’s “The Mark of a Criminal Record.” Pager uses an employer audit methodology, in which testers who are as identical as possible vary two independent variables (criminal record and race) in relation to the dependent variable of callbacks received. In essence, Pager examines only three variables, so graphs are primarily used to communicate the results to the reader. This works well because the proposed relationship is so simple, and the attention to internal validity within the study is so high, that further information would only unnecessarily confuse the reader. The appendix, however, presents a more complicated picture via regression analysis. Pager probably uses regression analysis to communicate statistical significance and to report the specific coefficients.

Considering the simplicity of the proposed relationship, it is not entirely clear whether the regression analysis is needed; a simpler set of crosstabs could probably have been used. Notably, the regression table in the appendix lacks an R-squared value, which would help communicate how comprehensive the regression model is. There is also no attention to additional variables, presumably because of the experimental design, in which the testers were made to be as identical as possible (with the exception of the variables being measured). It would be useless to control for another variable, as its value is presumably the same across every case. Ultimately, though, simplicity is a major boon to this study: it is difficult to argue with Pager’s results without falling back on the standard arguments against audit methodologies.

 

Pager, D. (2003). The Mark of a Criminal Record. American Journal of Sociology, 108(5), 937–975.

 

Sampling techniques

In an ideal world, researchers might have enough time, money and research assistants to include census-like studies of entire populations in their research designs. In reality this is entirely implausible, but luckily statistics shows that it is also unnecessary. Sampling follows much the same logic as flipping a coin: if you flip it only six times, your chances of landing close to a 50/50 distribution are diminished, but if you flip it one hundred times you will probably get something much closer to 50/50. In the same way, drawing conclusions about an entire population only requires a researcher to study a large enough portion of it to ensure that the analysis is not significantly influenced by random chance and error.
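The coin-flip logic above is easy to see in a quick simulation (the flip counts are arbitrary choices for illustration):

```python
# As the number of flips (the "sample size") grows, the observed share of
# heads settles ever closer to the true 50 percent.
import numpy as np

rng = np.random.default_rng(42)
for n_flips in (6, 100, 10_000):
    flips = rng.integers(0, 2, size=n_flips)  # 1 = heads, 0 = tails
    print(f"{n_flips:>6} flips: {flips.mean():.1%} heads")
```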

Generalizability is often considered a paramount goal of scientific inquiry, although this is not entirely the case in the social sciences. Generalizability is a typical goal of quantitative analysis in the social sciences, but somewhat less so for some kinds of qualitative analysis. A typical reason for not striving for generalizability might be that access to the population being studied is limited and a representative sample could never be selected; researchers with constructivist viewpoints might also conclude that generalizability is an impossible goal. That said, even when generalizability is not a plausible goal, the specific sampling technique used is still important.

When generalizability is a plausible goal, the sampling techniques used are termed probability sampling methods. By relying on the mechanics of probability, these methods avoid the systematic biases caused by underrepresentation and overrepresentation. The most obvious of these methods is simple random sampling, in which subjects are selected using a random numerical process so that every member of the population has an equal chance of selection. Closely related is systematic random sampling, in which every kth entry in a sequential list is selected, with the interval determined by the required sample size and the total number of available entries. If it is essential for a researcher to make sure the sample accurately reflects the distribution of a specific characteristic in the population, proportionate stratified sampling is used; similarly, if a researcher wants to focus on a specific (or underrepresented) characteristic, disproportionate stratified sampling is used in combination with weighting techniques. Finally, researchers can use cluster sampling as an alternative to simple random and systematic random sampling. Cluster sampling randomly selects naturally clustering elements of a population (such as city blocks or schools), and while it is sometimes convenient for survey researchers, it unfortunately tends to produce larger sampling error.
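A minimal sketch of three of these probability methods, using an invented population frame (the person_id and stratum columns, the frame size and the sample sizes are placeholders for illustration):

```python
# Simple random, systematic random, and proportionate stratified sampling
# demonstrated on a fabricated sampling frame of 10,000 people.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
frame = pd.DataFrame({
    "person_id": range(10_000),
    "stratum": rng.choice(["urban", "suburban", "rural"], size=10_000, p=[0.5, 0.3, 0.2]),
})

# Simple random sampling: every member has an equal chance of selection.
srs = frame.sample(n=500, random_state=7)

# Systematic random sampling: a random start, then every kth entry.
k = len(frame) // 500
start = int(rng.integers(0, k))
systematic = frame.iloc[start::k]

# Proportionate stratified sampling: sample within each stratum in
# proportion to its share of the population.
stratified = frame.groupby("stratum", group_keys=False).apply(
    lambda g: g.sample(frac=0.05, random_state=7)
)
print(stratified["stratum"].value_counts(normalize=True).round(2))
```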

When generalizability is not a plausible goal, nonprobability sampling is used. The most obvious of these techniques is availability sampling, in which the sample is simply selected by convenience (such as people passing by on the street). The use of this technique varies contextually, but given the difficulty of ensuring that all subjects studied belong to a particular population, it is not often applicable to nuanced research questions. Preferable techniques include quota sampling and purposive sampling. Quota sampling attempts to mitigate the detrimental effects of nonprobability sampling: researchers try to meet sampling quotas that represent a population, but since all relevant characteristics of a population cannot realistically be identified, the technique still fails to produce a generalizable sample. Purposive sampling is used when the sample is selected to fulfill a specific research purpose, such as interviewing only people who are particularly knowledgeable about a subject or studying a small subset of a population. Finally, there is snowball sampling, in which each subject provides the researcher with referrals to other cooperative subjects; this is best used when studying hard-to-reach populations, such as police officers.

 

CITI reflections

Perhaps the fact that a mandatory online training course exists should have tipped me off, but it was still surprising how bureaucratic institutional review board (IRB) procedures appear to be. That said, I think this is a bureaucracy we are better off with than without. While some standards may appear onerous, IRBs do not only serve to protect scientists from ethical missteps; their stringent procedures may also help restore some of the faith that the public has lost in science.

In response to ethically questionable research, such as the Milgram shock experiment, the Stanford prison experiment, and, most notoriously, the Tuskegee syphilis study, the federal government issued the Belmont Report, which ultimately led to the establishment of numerous federal research guidelines and of IRBs at universities and other research institutions. While the premise of scientists policing themselves might not appear trustworthy at face value, the arrangement is more nuanced than it might seem. IRBs require not only the presence of a member who is not affiliated with the institution, but also the presence of a non-scientist member. Furthermore, efforts must be made to ensure that IRBs are not composed of only one gender. The diverse membership these requirements create helps ensure that IRBs respect multiple perspectives when reviewing proposals.

IRBs attend to numerous ethical issues, but these fall broadly under the three principles of the Belmont Report: respect for persons, beneficence and justice. IRBs primarily ensure respect for persons through high standards of informed consent, that is, by ensuring that participants understand the aims and potential risks of the study. One of the most appalling aspects of the Tuskegee syphilis study was that the participants were unaware they were participating in research, and unaware of the disease being studied, so strong informed consent procedures can help prevent such an incident from occurring again (also of importance: IRBs enforce several extra precautionary measures for medical research). Beneficence, defined as the normative goal of maximizing potential benefits and minimizing harm, ensures that potentially harmful research is actually worth doing and not merely of interest to a handful of scientists. One frequent criticism of the Milgram shock experiment was that it potentially harmed the research subjects while failing to adequately address a worthwhile research question: Milgram said that the goal of the experiment was to understand whether obedience could explain Nazi war atrocities, but critics rejected the study as unable to answer that question. Finally, there is the issue of justice, which, in the context of the Belmont Report, strives to ensure that the burdens and benefits of research are distributed fairly across participants and populations. This principle is illustrated in the examples already discussed, but one additional way justice is ensured is by applying special protections to at-risk populations such as youth and the incarcerated. Special consideration must be given not only to those who are legally unable to consent, but also to those who may be coerced into participating in research through mechanisms that researchers cannot control.

Some might worry that these protections are too restrictive and that powerful research can no longer be conducted. This is where IRB deliberation is often most important: IRBs help determine when the use of deception is worthwhile and appropriate, and when informed consent procedures can be waived. Conversely, a great deal of research (such as survey research) is entirely benign, and criteria therefore exist for expediting or exempting projects from full IRB review when appropriate. As a researcher in the social sciences, I expect my work to fall into the latter category, but I endorse the idea of ensuring that all researchers are thoroughly educated in ethical matters anyway.

 

Introduction — SOCY 601

Hello all! My name’s Peter Jameson, and this is my first semester in the sociology department’s graduate program. I’m originally from Northern Virginia, but I’ve also lived in California, New Orleans and Egypt. I finished my undergraduate degree here at VCU, where I majored in political science with a concentration in public policy and administration, and minored in sociology.

My experience with academic research is limited. Following the advice of my senior seminar instructor, I enrolled in a research methods course in my last semester, which I enjoyed immensely. What frustrated me most about writing literature reviews and research papers was that, when reviewing journal articles, I could only glean information from the concluding sections, because everything above them didn't make sense to me. When reading only the end of an article, it's easy to tell whether the author concluded that the hypotheses were supported, but often difficult to answer questions like "by what margin?" without going into the methodology and findings sections. While I'm certainly not fluent in methodology and statistics, that course helped me at least begin to understand the word soup that lives between an introduction and a conclusion. Based on the syllabus and the first reading, I suspect there are bigger questions this course will address, such as how one constructs a functional experimental design or how one operationalizes and measures social phenomena, but I think learning how to read and understand those intermediary sections would be a meaningful step forward for me.

 