Research Proposal Summary

Ultimately, my research is trying to answer the question “are trade and labor union workers more class conscious than non-unionized workers?” The project is mixed methods: I will be using both a descriptive, cross-sectional survey design and exploratory, semi-structured focus group interviews. The specific goal of this methodology is to understand how participants in two samples, non-unionized workers and unionized workers, think of their own class and material wellbeing, how they feel about politics that affect them as members of this class, and how, or if, they participate in class action such as rallies, strikes, slowdowns, or protests. The exploratory phase will incorporate the findings of the survey data into the focus group interviews with union members to understand how unionization has or has not affected how they think about class or how they engage in class action. My hypothesis is that the responses of unionized and non-unionized workers will not be significantly different and that the presence of a union alone does not influence people’s attitudes or behavior regarding class or class action.

The survey instrument itself utilizes questions from existing survey designs on the subject of class consciousness, as well as questions about social and political action drawn from a cross-national study of political action. The attitudinal questions seek to operationalize pro-worker or anti-worker attitudes (attitudes towards overprivileged/underprivileged groups, self-placement on the political left/right, subjective class identification, etc.), whereas the behavioral questions try to operationalize the respondent’s propensity to engage in political or social action (Do you approve of protests? Are protests effective? How often do you participate in protests? etc.). The questions to be asked during the focus group portion of the research will be determined by the results of the quantitative data, focusing mostly on explaining the results to the focus group, asking them whether the data is surprising, whether they think being in a union had an effect on their responses, and so on.

The purpose of using mixed methods in my research is to attempt to bring together two prevailing ideologies that have emerged in the class consciousness literature. One camp believes that class consciousness can be quantified by measuring people’s political affiliations and dispositions, relying on survey methods to gauge the “pro-worker” attitudes of a sample to ascribe class consciousness. The other camp believes that class consciousness is something latent in the mind of workers that is only elicited in situations of deliberate class antagonism, such as strikes or protests, and typically ascribes class consciousness based on how readily workers embrace these activities. By developing a survey measure that incorporates both attitudinal and behavioral measures of “class conscious” traits, I hope to use this research both to analyze the utility of survey designs as a method of measuring class consciousness and to investigate whether these qualities are more prevalent among unionized workers, which most of the theoretical literature (Marx, Lenin, Gramsci) assumes to be true.

In order to minimize the risk of occupation influencing survey results, I will be looking at two samples with the same occupation: parcel delivery drivers, namely those working for either UPS or FedEx. The Richmond area is suitable for this research because two major parcel delivery services, UPS and FedEx, both have shipment offices either within or directly outside of the city. Further, UPS has an established unionized workforce represented by Teamsters Local #322, whose office is located within the city. The presence of an established union in the area ensures significant representation of both non-unionized and unionized workers in the quantitative portion of this research.

I plan to use a non-probability availability sampling technique to select a sample of non-unionized parcel drivers for participation. Ideally, this would involve the recruitment of employees from both UPS and FedEx. Utilizing a simple random sampling procedure for such a population seems nigh impossible. Given the unlikelihood that an employer or manager would give me access to a full list of their location’s employees, I would have to recruit as many participants as possible in however many days or hours I was allotted in the facility, hence my decision to use availability sampling. Another possible form of participant recruitment could be snowball sampling; after administering the survey instrument to the sample of unionized workers, I could ask them to give their coworkers my phone number so I could administer the survey or send it to them by mail to be completed. Both techniques would be considered haphazard and lacking generalizability, which is something to be considered when analyzing my data.

In regard to ethics, the greatest ethical concern with this methodology is assuring the anonymity of my respondents. All personal identifying information will be omitted from my findings. However, given that my subject matter and participant recruitment techniques are intrinsically tied to my respondents’ jobs, I have to consider that if any personal information were compromised, it could jeopardize the well-being of my participants, depending on who in their workplace were to learn how they responded.


Secondary Data Analysis

At its most basic, secondary data analysis is the process of reanalyzing an existing data source with the intention of answering a hypothesis different from those posed by whoever did the original research. Most commonly this involves the use of social science surveys and government data; however, this method can also be done using qualitative data, although, as Schutt points out, qualitative data is not nearly as readily available as quantitative data.

Perhaps the most obvious advantage of doing secondary data analysis is that secondary data sets are easily accessible. Thanks to online compendia such as the ICPSR, finding a data set with variables pertinent to your hypothesis is easier than ever. Even qualitative data sets are starting to find their way onto the internet via repositories such as Syracuse’s QDR. Some sites, such as the ICPSR, even allow the user to input possible variable names to expedite their search for relevant data sets. The accessibility of these data sets can allow researchers to test hypotheses much faster than if they were to attempt to collect the data themselves.

Similarly, one of the greatest advantages of secondary data analysis is that it is incredibly cheap to carry out. Especially for students such as ourselves, who may not have the time and money to conduct large-scale research projects, secondary data analysis allows us to conduct our research without the burden of paying for it. For researchers crunched for time and money, secondary data analysis gives an opportunity to analyze data much faster, and typically on a much larger scale, than they could hope to achieve on their own.

Analyzing existing data sets isn’t without its disadvantages, however. Firstly, without conducting your own research, secondary data can never specifically address your research question. Existing data sets are usually collected with a specific hypothesis or goal in mind, which raises questions about whether the data is appropriate given the hypothesis you have. The use of secondary data means that your hypotheses are often beholden to the data sets available, rather than having data with survey items operationalized around your specific research question. Secondly, just because data is available does not mean that it is good data; government data can typically be considered fairly safe, but when retrieving a data set from an online repository, it is important to evaluate the quality of the data collection and analysis before using the data set to test another hypothesis.

A study that exemplifies the advantage of using secondary data is Did Welfare Reform Cause the Caseload Decline?, in which the authors analyze state-level welfare caseload counts collected by the U.S. Department of Health and Human Services between 1992 and 2002 to determine the degree to which the falling welfare caseload could be attributed to the replacement of AFDC with TANF in the mid-nineties. The authors perform a multivariate regression analysis to determine which policies caused the caseload to decline and whether other variables had an effect on the caseload decline, ultimately concluding that TANF only accounted for about one-fifth of the decline in the caseload.

We can imagine that, as researchers, it would be quite arduous to carry out this kind of research if we did not know exactly where, and when, the number of people on the welfare rolls was changing. Considering this research was conducted by just two people, having to compile a ten-year, month-by-month count of welfare recipients for all 50 states would probably be an undertaking that could take many years and a good amount of money to complete. In this particular case there aren’t many concerns about whether the data is good data or whether it is applicable to the hypothesis being tested, considering the secondary data being analyzed is simply a count of the number of people receiving TANF and was collected by a government agency.


Evaluation and Policy Research

At its most basic, the primary purpose of evaluative research is to understand how certain programs — whether that means a new drug, an educational curriculum, or a social policy — work the way that they do. This type of research can be guided by several questions: Is the program needed? What is the program’s impact? How efficient is the program? Evaluative research is a way for stakeholders — groups who have some kind of concern with a program — to answer these questions and determine how they should move forward with these programs in light of their findings.

Evaluative research is generally carried out for these stakeholders, whether they be business managers, government officials, or funders of a particular project. As Schutt points out, who program stakeholders are and what role they play in the program has extraordinary ethical consequences in evaluative research. In many cases, the funding awarded to researchers by these stakeholders could result in questionable research methodology or interpretations of findings for the sake of remaining funded. Consider, for example, that nearly 75% of U.S. clinical trials in medicine are funded by pharmaceutical companies; though this may seem benign, considering that as researchers we should favor a world where scientific research is generously funded and endorsed, research funded by these companies is more likely to favor the drug under consideration than similar studies funded using government grants or charitable donations. Or consider a company like Coca-Cola, which has a history of funding university studies that obfuscate the connection between soda consumption and obesity. When reading evaluative research, knowing who the research was conducted for can be nearly as important as the findings of the research itself.

What I found particularly important in this chapter is that Schutt illustrates that impact analysis is just one type of evaluative research. Often we think of evaluative research as something that retroactively ascribes necessity or usefulness to a program, but as Schutt points out, research can also be carried out before the implementation or design of a program to determine whether it could be needed or whether the program itself can even be evaluated. These are forms of evaluation research that I had not considered before, so I appreciated the distinction.

Did Welfare Reform Cause the Caseload Decline? is a piece that I think exemplifies well-done policy research. The authors, Caroline Danielson and Jacob Klerman, use monthly state-level welfare case counts collected by the U.S. Department of Health and Human Services both before and after the replacement of AFDC with TANF policies to investigate how certain TANF policy changes affected the drastic reduction of the welfare caseload during the late 1990’s and early 2000’s. Using this data, the authors estimate a difference-in-differences model to detect how four major policy changes affected the caseload: the generosity of financial incentives, sanctioning from welfare rolls due to non-compliance with work requirements, time limits placed on how long families could receive aid, and other programs to divert families who needed temporary assistance from joining the welfare caseload. The authors also include the national unemployment rate for each given month to account for changes in the caseload that may be attributed to economic conditions.
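The intuition behind a difference-in-differences comparison can be sketched with a few lines of code: the change in an outcome for a group exposed to a policy is compared against the change for an unexposed group over the same period, and the gap between those two changes is the estimated policy effect. The numbers below are invented for illustration and are not taken from Danielson and Klerman’s data.

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences estimate: the treated group's change
    over time, minus the control group's change over the same period."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical caseload indices: a state adopting a policy sees its
# caseload fall from 100 to 60, while a comparison state falls from
# 100 to 90 over the same years. The simple before/after drop of 40
# points overstates the policy effect; differencing out the control
# group's 10-point drop credits the policy with only 30 points.
effect = diff_in_diff(100, 60, 100, 90)
print(effect)  # -30
```

The published model is far more elaborate (many states, many months, several policy variables, and an unemployment control), but every term in it is doing a more careful version of this same subtraction.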

Their findings are quite grim; admittedly, DID models are a bit above my pay grade, but their findings suggest that these major policy changes only explain about 10 percentage points of the 56 percentage point decline in the welfare caseload that occurred between 1992 and 2005. Further, the booming economy of the late 1990’s accounted for about 5 percentage points of decline during this time period. This suggests that factors outside of state-level welfare reform accounted for the majority of the caseload decline, a finding which is quite eerie considering how quick the Clinton and Bush administrations were to tout TANF as a resounding success.

This research firmly fits into what Schutt describes as impact analysis. The authors are not necessarily concerned with whether the effectiveness of TANF was worth the “cost”, just whether it was working as purported at all. It is also more of a black box model, focusing not on how welfare reform should have theoretically operated, but attempting to dissect why the caseload declined the way that it did. It is hard to say what kind of stakeholders may have funded this research; the authors were employed by the Public Policy Institute of California and the RAND Corporation at the time of this piece’s publication; however, neither of these think tanks is very forthcoming about who sponsors their work.


Quantitative Data Analysis – Causal Explanations

Most people are probably familiar with the old adage: correlation does not necessarily mean causation. For many researchers the proverbial end goal of their work is to find causal explanations: how does the introduction of an independent variable, x, affect a dependent variable, y? Unfortunately, as we will probably come to know with our own research, the real world isn’t quite as nice and easy to understand as a linear expression. Often somewhere between the x and the y there are a t, u, v, and w that we need to account for that also have an effect on the dependent variable.

That doesn’t necessarily mean that x and y don’t have some perceivable connection, however. As Schutt describes in Investigating the Social World, there are five criteria that must be satisfied when considering whether a causal relationship exists.

  1. Association: Does a change in x happen at the same time as a change in y? At its simplest, this involves seeing whether cases where the independent variable is different also differ in terms of the dependent variable. Take a crosstabulation of Hours Studied and Grades on an Exam, for example: if 16 people who studied for 3 hours received a mean grade of 79 on an exam, and people who studied for 10 hours received a mean grade of 95 on an exam, you might be able to say that the number of hours studied and grades on an exam are connected, or associated.
  2. Time Order: Did the change in y happen after the change in x? If you wanted to assert that an independent variable caused a change in a dependent variable, you would have to illustrate that the change in the dependent variable happened only after the variation in the independent variable.
  3. Nonspuriousness: Was the change in x and y due to a third variable? When deciding whether a causal relationship exists, we as researchers need to be certain that something else we are not accounting for is not happening at the same time. If we were to use the Hours Studied vs. Grades on an Exam example: what if students who studied more also saw tutors for extra help? If that were the case, we would not be able to say that studying more causes higher exam grades, because studying alone may not have caused the increase.
  4. Randomization: Were participants in the research randomly sorted into their respective groups? This is essentially another means of controlling for spuriousness; by randomly assigning participants to research groups, you reduce the risk of some extraneous variable disproportionately affecting one of the conditions of your independent variable.
  5. Statistical Control: Can we hold other variables constant and focus solely on the x and y? Again, this is a way of controlling for unforeseen influences by other variables. To use the Hours Studied vs. Grades on an Exam example, the researcher may acknowledge that tutoring can also influence exam grades and therefore gather data using only exams taken by students who do not receive tutoring.

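The association criterion above can be checked mechanically: group the cases by the value of the independent variable and compare the mean of the dependent variable across groups. A minimal sketch, using invented exam data in the spirit of the Hours Studied example:

```python
from collections import defaultdict

def group_means(pairs):
    """Mean of the dependent variable (y) for each observed value
    of the independent variable (x)."""
    groups = defaultdict(list)
    for x, y in pairs:
        groups[x].append(y)
    return {x: sum(ys) / len(ys) for x, ys in groups.items()}

# Hypothetical (hours studied, exam grade) observations
data = [(3, 78), (3, 80), (10, 94), (10, 96)]
print(group_means(data))  # {3: 79.0, 10: 95.0}
```

A gap between the group means, as here, is evidence of association only; the remaining criteria (time order, nonspuriousness, and so on) still have to be argued separately.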
As illustrated, the criteria for determining a causal relationship are quite stringent. It is nigh impossible to meet all of these criteria when doing non-experimental research, especially if we are analyzing bivariate data. The possibility that a third, unforeseen variable has not been accounted for in the research design is ever-present in non-experimental research, as is the difficulty of establishing a time order between changes in the independent and dependent variables.

I will go back to one of my favorite studies to illustrate this point: Did Welfare Reform Cause the Caseload Decline? After AFDC was replaced with TANF in the late 1990’s, most politicians touted the success of welfare reform at changing the face of American poverty because the number of people receiving aid dropped so drastically. It seems to make sense: welfare reform passes and the caseload declines. Post hoc ergo propter hoc. However, the two researchers, Caroline Danielson and Jacob Klerman, believed that other variables had influenced this relationship.

The authors believed, and illustrated using multivariate regression modeling, that not all policy changes enacted during the 90’s had an equal effect on the caseload decline, and that the booming economy had a significant effect that was not accounted for in the “welfare caused the caseload decline” narrative. By focusing solely on the type of welfare program (independent variable) and the number of people receiving welfare (dependent variable), other important factors that contributed to the declining caseload were ignored (the booming economy, dismissal from welfare rolls due to stringent work requirements, etc.).

This piece also illustrates just how messy it can be to avoid spuriousness. The authors show that the majority of the caseload decline was not attributable to either the economy or new welfare policies, but to some other variable(s) that they themselves couldn’t determine. Even though this research satisfies most of the other criteria (association being clear, time order established by a clear starting point for welfare reform policies), even they could not determine what exactly caused the caseload decline. This research is still important, though; even if it cannot establish a causal explanation, it can at least dispel the myth that welfare reform alone was the cause of the caseload decline.


Reflecting on CITI IRB Training

My first go-around with CITI’s IRB training was in 2012, during my first experimental psychology course, so I went into this assignment with a very “this isn’t my first rodeo” attitude. At this point “respect for persons, beneficence, and justice” might as well be the Manchurian-Candidate-esque trigger phrase that turns me back into an eye-rolling undergrad with a propensity for dining hall mac and cheese. Nonetheless, retaking the modules and reading the responses of the other people in this class gave me a better appreciation for the existence of institutions like IRBs.

As students in a classroom it may be all too easy to succumb to hindsight bias when looking at the historical precedents for ethical guidelines. When we see Milgram’s obedience experiment, Zimbardo’s prison experiment, and most notably the Tuskegee syphilis experiments, we see them as exemplars of poor foresight and blatant disregard for their participants that violate an almost a priori sense of what it means to be ethical. When I first looked at these training modules, it seemed almost laughable to use such ham-fisted examples when discussing the nuances of ethics in experimental research. However, I think it speaks to the general benevolence of institutions like IRBs that these cases aren’t treated as skeletons in the closet. It is rare for a powerful institution to own up to its misdoings, let alone put them at the forefront of a curriculum aimed at self-correcting such behavior. Also, when we talk about these seminal cases it is easy to think of them as occurring in some time immemorial, but their inclusion seems much more understandable considering these are all experiments that happened a mere fifty-ish years ago.

From a methodological perspective, the existence of institutions like IRBs is integral to keeping our research honest and realistic. Especially as students, our research ideas may tend towards pie-in-the-sky methodologies that couldn’t possibly be replicated safely in the real world. One example given in the training modules epitomized this for me: a survey asking about the attitudes of women who had had abortions. If a student were to investigate this topic, the task of recruiting respondents may seem trivial, but as the course illustrated, the recruitment or response process not only requires more work than simply finding women who have had abortions but could also prove dangerous for potential respondents if done tactlessly. Institutions like IRBs not only allow for a second opinion, a prescreening of these methodologies, but in doing so foster a more creative approach to experimental methodologies, encouraging us to ask, “how can I do this more safely, more efficiently, and less invasively?”

Conducting research is ultimately a two-way street; no matter how convinced we may be that our research is contributing to the “common good” by virtue of being conducted, we as researchers must trust and respect our participants as much as we expect them to trust and respect us. The onus is on the researcher to first formulate an approach that is trustworthy, and in that regard the CITI IRB training offers a solid framework to begin thinking about the broader historical and legal contexts in which we should hope to achieve that goal.


An Introduction

Hey there! My name is Austin Round, and I am a second-semester student in the Sociology M.S. program. I am originally from Rhode Island, where I completed my B.A. in Psychology with minors in Political Science and Gender & Women’s Studies. I moved to Richmond about 7 months ago with my friend Adrian and his Old English Bulldog, Tiny (who is, in fact, not very tiny). I am also currently a T.A. in the political science department here at VCU! In my free time I play several fighting games at the competitive level (Super Smash Brothers Melee and Street Fighter 3: 3rd Strike, hit me up if you’re trying to get these hands), and spend most of my time hunched over a computer screen looking for new music to listen to.

I am fortunate to have a fairly extensive academic background in research methodology; at my alma mater, the University of Rhode Island, the psychology program did not have a B.S. program until my last semester there, so in an effort to create an ad-hoc B.S. (a b.s. B.S., if you will), I took every course on experimental psychology that I could. This carried over into a bit of undergraduate research; one project I was working on, which regrettably wasn’t finished by the time I left the university, was a pre- and post-test inventory evaluating the ability of the university’s introductory Gender and Women’s Studies course to foster more favorable attitudes towards gender equality and feminism.

What I hope to get out of this course is a more nuanced ability to critique the methodologies in the research that I cite. It’s easy enough to disavow or endorse a piece of literature based on whether or not its findings suit our needs, but it’s an entirely other beast to be able to look at a piece’s methods and say “how could this be improved?”; “why did they pick this methodology?”; “what other methodologies can I use to better our understanding of this topic?”
