Putting it all together with D3 and TopoJSON

It seems I’m at a point where I need to define what my goal is/has been for this course, since I don’t think I have explicitly stated it quite yet.

Looking at my projects thus far, it seems like a crass menagerie of vaguely data visualization-esque achievements. I have about half of a stacked bar chart, a bunch of circles floating in space (albeit a confined space now, thank you very much), and a pseudo-timeline thing that looks like a flaccid scatterplot. Granted these are all works in progress, but they are still incredibly rudimentary, without question.

I would probably find this more disappointing if my goal was to make a timeline, and a bar chart, and a network visualization. But my goal-with-a-capital-“G”, my macro goal, was to become more familiar with a data visualization utility, learn its basics, and use it to apply some design theory to make presentation-worthy documents of my work and other researchers’ work. In that sense, it may also seem like I’m falling short. So this week I decided to make one project, beginning to end, that encompassed all of the design elements that I wanted to achieve in those other projects based on what I’ve learned thus far.

D3 is impressive for a myriad of reasons, but one of the most impressive (in my opinion) is its ability to render complex maps completely client-side. In lieu of hardcoded images or Flash elements, D3, in tandem with another Bostock library, TopoJSON, renders SVG paths using coordinates passed from JSON files. This allows for some interesting interactive and real-time mapping, especially in tandem with an API or other real-time data source. It also allows for more robust styling options that can add more user-friendly ways to present our data.

To test the waters with TopoJSON, I used data from the 2016 American Community Survey investigating gross rent as a percentage of household income. The ACS is a supplementary battery of questions released by the Census Bureau that charts changes within communities. The data itself is incredibly easy to come by through the Census Bureau website and usually comes pre-cleaned, with some user-friendly variables for use cases such as mine. The data can also be exported by census tract, county, state, or as national data depending on your desired level of analysis. For this visualization, I wanted to use counties to illustrate the disparate cost of living not only between states but between urban centers and their surrounding areas.

One of the first design choices I had to make was what kind of color scale to use. Most examples of D3 choropleths, including creator Mike Bostock’s own work, use threshold scales to set breaks in the data set and then map those values to an array of color keys. One problem I found when trying to implement this across several data sets, however, is that threshold scaling is highly influenced by outliers; in many cases where a data set had an extreme high or low value, the color scale became basically indistinguishable for most areas except those extreme cases. To avoid this, I used a sequential scale instead. Along with D3’s color interpolation functions, this meant I could input two hex codes as my range’s “min” and “max”, and D3 would interpolate all of the RGB values in between.

var colors = d3.scaleSequential(d3.interpolate("#F2CC8F", "#E07A5F"))
    .domain([0, 100]); // domain should span the data's min and max (here, percentage values)
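For intuition, the interpolation itself is simple: each RGB channel is blended linearly between the two endpoint colors. Here is a rough plain-JavaScript sketch of what D3 is doing under the hood (a simplification; D3’s real interpolators are more robust and can work in other color spaces):

```javascript
// Minimal sketch of linear color interpolation between two hex codes,
// roughly what d3.interpolate does when handed two colors.
function hexToRgb(hex) {
  // pull the two-character R, G, and B components out of "#RRGGBB"
  return [1, 3, 5].map(function (i) {
    return parseInt(hex.slice(i, i + 2), 16);
  });
}

function interpolateHex(a, b) {
  var ca = hexToRgb(a), cb = hexToRgb(b);
  return function (t) { // t runs from 0 (color a) to 1 (color b)
    var rgb = ca.map(function (c, i) {
      return Math.round(c + (cb[i] - c) * t);
    });
    return "rgb(" + rgb.join(", ") + ")";
  };
}

var mid = interpolateHex("#000000", "#ffffff")(0.5); // "rgb(128, 128, 128)"
```

Feeding the scale a value in its domain then amounts to computing `t` and blending, which is why the two endpoint hex codes are all the scale needs.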

Next, I had to find a JSON file with county coordinates to draw the map itself. Fortunately, TopoJSON has a repository of JSON files that made this task incredibly easy. One difficulty I faced when iterating over this data, however, was the sheer size of the files themselves. The continental US alone has over 3,000 counties, so computing all of that information can be time- and resource-consuming in a client-side language like JavaScript. Compounding this, I was joining data from this large JSON file to my original dataset, which was comparable in size. To account for this, I made use of D3’s queue function to preload my datasets before generating any content. Since these files load asynchronously, this also avoided any difficulties that might arise from trying to join two rows of data that are not loaded at the same time. The .await function then calls the function that actually draws the map once both data sets have loaded completely.

d3.queue()
    .defer(d3.json, "https://d3js.org/us-10m.v1.json")
    .defer(d3.csv, "https://docs.google.com/spreadsheets/d/e/2PACX-1vQK1F0gF62y_UNIsAhThc54HPWZHm-c-gZ1V5HTg5DYDHQ2eIbC3VKoaIJTqWniZnyD_UvfqpNxdBh6/pub?output=csv")
    .await(mapLoad);


To join the two data sets, I created two empty lookup objects for county names and the actual rent percentage variable from the ACS data set. Using a .forEach function, I then populated these objects using the county FIPS number as the entry key to make matching this data to its TopoJSON counterpart simpler. This is obviously not the most elegant way of accessing this information, nor is it the easiest in terms of the processing required, but it works.

var data = {};
var names = {};

function mapLoad(error, us, rent) {
  // Write FIPS keys and percentages into the lookup objects
  // Makes accessing percentages easier after feeding in the JSON coordinate data
  rent.forEach(function(d){ data[d.FIPS] = +d.percentage; });
  rent.forEach(function(d){ names[d.FIPS] = d.county; });
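Stripped of the D3 specifics, the join is just building a lookup object keyed by FIPS code, then reading from it in constant time when each county path is drawn. A minimal sketch, using made-up rows shaped like the ACS csv:

```javascript
// Hypothetical rows mimicking the ACS csv (FIPS codes as string keys)
var rent = [
  { FIPS: "06075", county: "San Francisco", percentage: "35.2" },
  { FIPS: "36061", county: "New York", percentage: "41.0" }
];

var data = {};
var names = {};
rent.forEach(function (d) { data[d.FIPS] = +d.percentage; }); // + coerces the csv string to a number
rent.forEach(function (d) { names[d.FIPS] = d.county; });

// Later, when a county path whose id is "06075" is drawn,
// its value is a constant-time lookup rather than a scan of the csv:
var value = data["06075"]; // 35.2
```

Keying by FIPS is what makes the join cheap: each of the 3,000+ county paths costs one object lookup instead of a search through the whole csv.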

The rest of the code works similarly to the D3 prototype I’ve illustrated in past weeks; elements are called by a function, data is attached to those objects, and then new objects are created for every iteration of that data set. Some county values are still displaying as undefined, either as a result of missing values in the ACS data or due to FIPS values that do not match up between the two data sets, so I’ll have to take a deeper dive into the JSON data to see how that can be rectified.

One feature I have not been able to get working with this example is D3’s geographic projection functionality, which maps the coordinates being drawn onto the SVG so that the size and shape of the map can be adjusted on the fly. This can be used to resize or rotate the projection to look at areas of interest, which could be an interesting implementation for future iterations of this project. I am also not completely satisfied with the sequential scale and may replace it with a more “partitioned” scale in the future to create better visual distinctions between counties. For now, though, I’m just happy that I managed to create my most complete visualization to date!


Building on the Force Directed Network Graph

Getting the force-directed graph functional in D3 was a good start, but it clearly needs some tuning up.


For starters, the uniform fill color needs to be changed. Network graphs are interesting visualizations but mean very little without some differentiation between the nodes. Mapping the degree value to node size expresses one dimension of this information, but there is a lot more to be expressed, and color is one of the simplest and most notable ways to do so.

Color scales are easy to create in D3; it’s simply a matter of taking one of D3’s built-in scale functions and mapping colors to the scale’s range, like so:

var colorScale = d3.scaleOrdinal()
    .range(["#E07A5F", "#3D405B", "#81B29A"]); // example colors, one per category



Colors are expressed as an array of “bins” to be matched to the values in the domain. The colors can then be called when the objects are drawn.
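In other words, the ordinal scale memoizes which color each new domain value receives: the first unseen value takes the first color in the range, the next takes the second, wrapping around if the range runs out. A plain-JavaScript approximation of that behavior (the color strings here are arbitrary placeholders, not D3 internals):

```javascript
// Rough stand-in for d3.scaleOrdinal().range(colors):
// each new input value is assigned the next color in the range,
// recycling colors if there are more categories than colors.
function makeOrdinalScale(range) {
  var domain = []; // grows as new values are seen
  return function (value) {
    var i = domain.indexOf(value);
    if (i === -1) {          // first time we've seen this value
      domain.push(value);
      i = domain.length - 1;
    }
    return range[i % range.length];
  };
}

var color = makeOrdinalScale(["#E07A5F", "#3D405B", "#81B29A"]);
color("organization"); // "#E07A5F"
color("individual");   // "#3D405B"
color("organization"); // "#E07A5F" again -- same input, same color
```

This is why calling the scale while drawing nodes “just works”: the binding between category and color is built up as the data is fed through.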

In the case of this network graph, I wanted to use a color scale to represent the annual budget of each node’s parent organization. The animal rights movement is championed by several organizations with budgets that far exceed those of other organizations, and this information might be interesting to compare to their presence at the animal rights national conference (do larger organizations have some clout that bolsters their presence at these events? Does the sheer breadth of smaller organizations obfuscate their participation?).

In this case, I decided to use a quantize scale to partition the Organization Budget values into 9 bins. A continuous scale might also be fitting, since budget is a continuous variable; however, if the intention is to visually describe similarity or difference between nodes using color, many different shades of many different colors may obfuscate larger trends in the data.

Quantize scales sit somewhere between ordinal and linear scales in the D3 library. Whereas ordinal scales create “bins” based on (typically) nominal or categorical variables, quantize scales take a continuous variable and partition it into equal, discrete “bins” bound to a given domain; in this case, the range of organization budgets.
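The partitioning itself can be approximated in a few lines: divide the domain into as many equal slices as there are bins, and index into the bin array. A sketch of the idea, assuming a hypothetical $0–$90M budget domain and using bin indices in place of nine color strings:

```javascript
// Minimal stand-in for a quantize scale: maps a continuous value
// to one of n equal-width "bins" over the domain [min, max].
function makeQuantize(min, max, bins) {
  return function (value) {
    var i = Math.floor(((value - min) / (max - min)) * bins.length);
    // clamp so the domain's max value (and any outlier) falls in a valid bin
    i = Math.max(0, Math.min(bins.length - 1, i));
    return bins[i];
  };
}

// Nine bins over a hypothetical $0-$90M budget domain:
var budgetBin = makeQuantize(0, 90, [0, 1, 2, 3, 4, 5, 6, 7, 8]);
budgetBin(5);  // 0 -- smallest budgets land in the first bin
budgetBin(45); // 4 -- mid-range budgets land in the middle bin
budgetBin(90); // 8 -- the domain max is clamped into the last bin
```

Swapping the index array for nine color hex codes gives the budget-to-color mapping used on the nodes.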




The tooltips were then extended to include the actual Organization Budget of each node, pulled from the corresponding row of the nodes dataset while the circles are being drawn.

Another prominent issue with the visualization is that the node x and y positioning is not currently bound to the width and height of the SVG canvas. As a result, some nodes fly out of the visual bounds and are not visible to the user. D3 creator Mike Bostock has a proposed solution here; however, implementing this in my own code has proven problematic. The gist is that, when the x and y coordinates are pulled, the library is told to keep the nodes within the range [radius, canvaswidth - radius] for the x coordinate, and the range [radius, canvasheight - radius] for the y coordinate.
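The clamping itself is a one-liner: each simulation tick, the coordinate is forced back inside [radius, bound - radius]. A sketch of the pattern (the width, height, and radius values here are placeholders):

```javascript
// Keep a coordinate within [radius, bound - radius] so a circle of that
// radius never renders partially outside the svg canvas.
function clamp(coord, radius, bound) {
  return Math.max(radius, Math.min(bound - radius, coord));
}

// In a force simulation's tick handler this would be applied as, e.g.:
// node.attr("cx", function(d) { return d.x = clamp(d.x, 20, width); })
//     .attr("cy", function(d) { return d.y = clamp(d.y, 20, height); });

clamp(-15, 20, 500); // 20  -- pulled back inside the left edge
clamp(490, 20, 500); // 480 -- pulled back from the right edge
clamp(250, 20, 500); // 250 -- already in bounds, unchanged
```

Note the inner `Math.min` handles overshooting the far edge and the outer `Math.max` handles overshooting the near edge, which is why the two are nested rather than applied separately.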

The reason the code seems to be breaking is that the variable radius, as it appears in Bostock’s code, is a set integer, whereas in my code, radius is a function of the node’s degree centrality value. In trying to call the variable radius, my code looks for a value that isn’t present in the data being built by the force function. As such, I set an arbitrary value of 20 in place of Bostock’s use of radius.

The result is a graph that still fits the bounds of the box and has a bit more visual distinction between the types of nodes. Pretty cool! A key still needs to be implemented to explain what the colors actually mean, of course. Another concern is that the nodes still gravitate towards one another quite arbitrarily, rather than forming the nice subgroups that are emblematic of network graphs. This week I plan to take a deeper dive into the d3.force function to see how these forces can be used to replicate such results.


John Carpenter, Sociologist

I came here to discourse civilly and chew bubblegum…and I’m all out of gum.

John Carpenter’s 1988 film They Live is memorable for many reasons; firstly, it memorializes a place in time in popular culture when an iconic wrestler could get a leading role in a horror movie. It also features one of the most oft-quoted lines in movie history. What has solidified its place as a cult classic, however, is its timeless commentary on the media as a corporate tool, limiting public discourse and encouraging civil complacency.

As much as it pains me to say it, John Carpenter will not be remembered as a poignant sociological theorist. A year later, in 1989, sociologist Jürgen Habermas would expound on similar concepts in his piece The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (the piece was first published in 1962, but was not translated into English until the late ’80s). Habermas described the emergence of a space between private life and political power, which he coined the Public Sphere, where individuals could come together to freely discuss and identify social problems. First emerging in the 18th century with the collapse of feudal structures, the public sphere, in Habermas’ view, was an arena for individual opinion to evolve into public opinion, free from the influence of government entities.

In a perfect democratic society, according to Habermas, media such as newspapers would serve as a conduit through which the public opinion would proliferate; the media could allow for asynchronous discussion of public issues, and allow for more than just the wealthy and educated to engage in these debates. As it stands, however, the movement from simple media to mass media instigated the “re-feudalization of the public sphere”; participation of the masses has become commodified, and political discourse displaced by entertainment and consumerism. As media proliferates, it becomes less about communicating public opinion, and more about shaping it.

“Editorial opinions recede behind information from press agencies and reports from correspondents; critical debate disappears behind the veil of internal decisions concerning the selection and presentation of the material.”

Habermas, The Structural Transformation of the Public Sphere

Habermas’ deductions may paint a grim picture regarding the public sphere and open discourse, but others say the public sphere has moved, and perhaps grown, beyond the printed page. Sociologists such as Manuel Castells argue that the public sphere isn’t eroding, but has merely shifted from a local scale to a global one. For Castells, public society is not just a Venn diagram between the private sphere and government entities, but a network society, in which information is processed and managed using digital technologies. In a network society, individuals are not reliant on institutions to obtain information, and can engage remotely to influence the political sphere.

Castells’ theory is corroborated by a body of work focusing on the counter-public sphere, in which marginalized communities can organize around their shared identities. Drawing on Antonio Gramsci’s theory of hegemony, counter-public spheres help form political opinion by allowing the marginalized to reframe what it means to be “hegemonic”. One example of the counter-public sphere in action can be seen in the case of Oscar Grant. Grant was shot and killed by Bay Area Transit officers in 2009; the officer who fired claimed he had intended to use his taser. Following the incident, videos were uploaded to YouTube, inciting public outrage and demonstrations (x). Using digital media, the witnesses were able to capture the attention of many beyond the Bay Area, turning an event witnessed by a few individuals into a public discourse.

Perhaps Habermas, and John Carpenter, are correct in lamenting the commercialization of the media. But the public sphere is not dying; it is thriving. New media has moved political opinion beyond the scope of the town pub and onto a digital stage. Digital networks allow the marginalized to create new organizational structures that go beyond the hegemony they experience on a daily basis, reframing public opinion beyond that which is portrayed in popular media. What’s more, a digital network society allows for a public opinion that can put pressure on the political sphere from beyond the walls of a town hall. And in the event that the ruling class is in fact a race of aliens bent on subduing the masses, maybe we can revisit John Carpenter’s theory.


Being Good at Doing Good: Collaboration, Citation, and Co-Authorship Amongst Animal Welfare Organizations

“Nothing can possibly be conceived in the world, or even out of it, which can be called good without qualification, except a good will.”

Immanuel Kant

It is not a given that an innate desire to do good translates into effective outreach. Despite our best efforts, sometimes even our best intentions can have unintended consequences. Although there are innumerable cases of do-gooders-gone-awry, the case of PlayPumps comes to mind.

PlayPumps is a water system intended to replace arduous hand pumps in African villages with merry-go-rounds that use the motion of the device to draw water. The design won a number of grants and sponsorships to aid in its implementation and was widely hailed for its innovation.

Shortly after their installation, however, PlayPumps weren’t putting the “fun” in “functionality” as initially hoped. Children found playing on the devices exhausting, and women in the villages often ended up churning the giant wheels by hand. When they broke, the cumbersome machines were often more costly to repair or replace than the original water pumps, and, mechanically speaking, they generated less energy, and therefore less water, than the original devices. Ultimately, many villagers said they preferred the older method, and slowly the gaudy merry-go-round devices have either been replaced or lie dormant (x).

With non-profits and other altruistic endeavors, which stake their livelihood on putting on a good face, how can we see through the veneer and make sure they are doing the best possible job? Looking inward, how can these organizations evaluate their own methods to improve? And how often do they consult their peers who are working towards the same goal? In an attempt to answer these questions, I looked toward a movement close to my heart: animal welfare and animal activism.

Internal Research and The Animal Welfare Movement

Particularly in the United States, the animal welfare movement is a mixed bag of long-established organizations and a crop of fledgling organizations that have quickly come to the forefront of the movement. Many of these newer organizations have their own internal research branches to evaluate their outreach interventions; however, collaboration between groups is not well documented, to say the least. Evaluative research has been done on a myriad of outreach interventions, such as corporate outreach, undercover investigations, leafleting, online ads, and humane education presentations (x). Most of this research, however, is conducted internally; the data is available, but only for those ready to spend hours wading through obscurely linked research reports. Some organizations cite academic pieces in psychological or philosophical journals to justify their interventions in their more public reports (x), but they are not as forthcoming with their citations of other animal welfare organizations.

Animal welfare organizations share many philosophical underpinnings with the effective altruism movement, a pseudo-utilitarian backbone that emphasizes highly effective activism and a commitment to ongoing research. Not all organizations share this mentality, however; Animal Welfare “with a capital A” is an umbrella that encompasses many smaller organizations with much more niche goals, such as humane societies, anti-vivisection groups, and wild animal rescues and shelters. In keeping with this dedication to maximizing outreach effectiveness, and to focus on organizations that would be the most likely to engage in collaborative efforts, I will not be including these more niche organizations in my analysis. My focus will be on organizations that emphasize general animal rights, industrial agriculture, general animal welfare, legal and legislative change, and metacharities, or charity evaluators.

My network analysis will investigate citation, collaboration, and coauthorship habits among animal welfare organizations in the United States. Particularly, this piece will investigate how these organizations are connected in regards to developing new interventions for effective advocacy. Using backlink tracking and html parsing of animal welfare organizations’ research publications and annual reports, I want to investigate which organizations collaborate or cite each other the most in evaluative research projects to create more effective outreach interventions.

This graphic from animalcharityevaluators.org represents its three highest-ranked charities based on internal research, cost effectiveness, and innovative strategies. How likely are they to collaborate?


Data Collection

Initial research findings and annual reports were gathered from prominent animal welfare organizations using Animal Charity Evaluators’ Research Library. The library comprises thousands of publications, both from organizations in the Animal Welfare mainstream and from academic journals on the periphery, so the search criteria required some refining. Using the site’s built-in search functionality, publications were filtered to fit three main criteria: pieces published by animal welfare organizations, pieces published in the past ten years, and pieces that focused specifically on outreach techniques and interventions. This initial search yielded 25 publications from 21 different animal welfare organizations. These publications will be parsed using the BeautifulSoup Python library to find hyperlink connections to other organizations. This initial search and first round of hyperlinks constitute the original sample.

Next, backlinks will be traced between this first sample of organizations and the organizations that cited or collaborated on these projects. Using an online backlink search engine, edges can be drawn between linking and linked organizations. Backlinks are commonly used to construct web-based network analyses, particularly in research pertaining to knowledge and coauthorship networks (x)(x)(x). Organizations found through backlink tracing will be added to the network sample as nodes. This procedure will then be repeated for organizations found through backlinks, creating a two-stage snowball sample.

Source and Target nodes will be manually compiled into a csv formatted edgelist to be plotted, cleaned, and analyzed using the NetworkX Python library. Graph data will then be imported into Gephi for visualization.

Like any other social movement, animal rights and the organizations at its forefront are trying to answer a difficult question: how do we get people to care? It is a psychological, and ultimately deontological, question so elusive that it cannot hope to be answered alone. Ultimately, the aim of this research isn’t to answer that question outright, or to provide a catch-all solution for creating a perfectly unified animal rights front. My goal, using network analysis as my backdrop, is to help these groups answer more organizational questions: who else cares about what we care about? And how could we be working together?




Evaluation and Policy Research

At its most basic, the primary purpose of evaluative research is to understand how certain programs — whether that means a new drug, an educational curriculum, or a social policy — work the way that they do. This type of research can be guided by several questions: Is the program needed? What is the program’s impact? How efficient is the program? Evaluative research is a way for stakeholders — groups who have some kind of concern with a program — to answer these questions and determine how they should move forward in light of the findings.

Evaluative research is generally carried out for these stakeholders, whether they be business managers, government officials, or funders of a particular project. As Schutt points out, who program stakeholders are and what role they play in the program has extraordinary ethical consequences in evaluative research. In many cases, the funding awarded to researchers by these stakeholders can result in questionable research methodology or interpretations of findings for the sake of remaining funded. Consider, for example, that nearly 75% of U.S. clinical trials in medicine are funded by pharmaceutical companies. Though this may seem benign, since as researchers we should favor a world where scientific research is generously funded and endorsed, research funded by these companies is more likely to favor the drug under consideration than similar studies funded by government grants or charitable donations. Or consider a company like Coca-Cola, which has a legacy of funding university studies that obfuscate the connection between soda consumption and obesity. When reading evaluative research, knowing who the research was conducted for can be nearly as important as the findings themselves.

What I found particularly important in this chapter is Schutt’s point that impact analysis is just one type of evaluative research. Often we think of evaluative research as something that retroactively ascribes necessity or usefulness to a program, but as Schutt points out, research can also be carried out before the implementation or design of a program to determine whether it is needed or whether the program can even be evaluated. These are forms of evaluation research that I had not considered before, so I appreciated the distinction.

Did Welfare Reform Cause the Caseload Decline? is a piece that I think exemplifies well-done policy research. The authors, Caroline Danielson and Alex Klerman, use monthly state-level welfare case counts collected by the U.S. Department of Health and Human Services, both before and after the replacement of AFDC with TANF, to investigate how certain policy changes affected the drastic reduction in the welfare caseload during the late 1990’s and early 2000’s. Using this data, the authors estimate a difference-in-differences model to detect how four major policy changes affected the caseload: the generosity of financial incentives, sanctioning from welfare rolls due to non-compliance with work requirements, time limits placed on how long families could receive aid, and other programs to divert families who needed temporary assistance from joining the welfare caseload. The authors also include the national unemployment rate for each given month to account for changes in the caseload that may be attributed to economic conditions.

Their findings are quite grim; admittedly, DID models are a bit above my pay grade, but their results suggest that these major policy changes explain only about 10 percentage points of the 56-percentage-point decline in the welfare caseload that occurred between 1992 and 2005. Further, the booming economy of the late 1990’s accounted for only about 5 percentage points of the decline during this period. This suggests that factors outside of state-level welfare reform accounted for the majority of the caseload decline, a finding which is quite eerie considering how quickly the Clinton and Bush administrations touted TANF as a romping success.

This research fits firmly into what Schutt describes as impact analysis. The authors are not necessarily concerned with whether the effectiveness of TANF was worth its “cost”, just whether it was working as purported at all. It is also more of a black-box model, focusing not on how welfare reform should have theoretically operated, but attempting to dissect why the caseload declined the way that it did. It is hard to say what kind of stakeholders could have funded this research; the authors were employed by the Public Policy Institute of California and the RAND Corporation at the time of publication, and neither of these think tanks is very forthcoming about who sponsors their work.


Service with a Smile: The True Cost of ‘Faking It’

Ever find yourself hungover at a diner at 7am, desperate to get as many hashbrowns and cups of burnt coffee into your body as fast as possible? When the waiter comes over and happily tries striking up a conversation, have you ever wanted to suplex him through the nearest table like you were a WWE tag team champion? Well, have some sympathy for that overjoyed waiter; he’s just doing his job.

This “service with a smile” mentality has become an increasingly prominent part of workplace behavior, where simply being polite isn’t enough anymore. “Emotional labor”, the fancy academic term for this obnoxiously inflated happiness, adds a second dimension to work in which employees must now regulate their emotions on top of performing their duties.

The term “emotional labor” first famously appeared in Arlie Russell Hochschild’s The Managed Heart: Commercialization of Human Feeling, in which Hochschild sought to theorize the effects of emotional labor on employees. Though Hochschild acknowledged that humans regulate their emotions in private interactions, such as acting happy around friends even when we’re having an off day, she wanted to understand how emotional regulation conducted as part of one’s job differs from private regulation. Using two groups, flight attendants who were supposed to be “nicer than natural” and debt collectors who were supposed to be “nastier than natural”, Hochschild illustrated that emotionally performative workers began to feel estranged from their expressions (e.g. “smiling” or “grimacing”) as well as their emotions. These findings were dire because, as Hochschild points out, one-third of American men and one-half of American women are engaged in jobs that require emotional labor.

Current sociological and psychological research corroborates Hochschild’s findings. One study suggests that sales and marketing employees who performed emotional labor felt less capable of addressing issues at home (emotional labor accounted for 28% of the variation in work-to-family interference). The same study found that respondents who performed emotional labor were less satisfied with their jobs; about 15% of the variation in work satisfaction scores was attributed to emotional labor. Another, more indirect, cost of emotional labor is that people who feign more emotions in the workplace find themselves more exhausted, which has been shown to increase turnover rates (1)(2).

Emotional labor doesn’t just result in psychological strain, however; some studies suggest that it can have psychosomatic symptoms as well. In Driving it Home: How Workplace Emotional Labor Harms Employee Home Life, the authors surveyed 78 bus drivers from the American Midwest and, like the pieces mentioned above, tested for emotional exhaustion and work-to-family interference, but added an extra variable: insomnia. The authors similarly found that bus drivers who feigned a smile were more likely to be emotionally exhausted and to have trouble addressing issues at home, but they also had greater bouts of insomnia than those who were not faking smiles. The authors describe these issues of insomnia as a side effect of “state anxiety”, a state of nervousness or discomfort instigated by the autonomic nervous system.

Unfortunately, it doesn’t look like emotional labor is going anywhere anytime soon; for every person who couldn’t care less about how happy their server is, there’s an angry old man on Red Lobster’s Facebook page complaining about their rude waiter. As a way to quell the negative psychological backlash of emotional work, Penn State organizational psychologist Alicia Grandey argues that emotional labor should be abolished. Instead, she says, greater onus should be placed on organizations, managers, and even customers to foster positive workplace environments and authentically happy employees. What a concept, right?


Office Space: Capitalism, Morality, and White-Collar Work

Peter: “When you come in on Monday and you’re not feeling real well, does anyone say to you, ‘Sounds like someone’s got a case of the Mondays’?”

Lawrence: “Nah. Nah, man. Shit, nah man. I believe you’d get your ass kicked saying something like that, man.”


Though its impact may have been understated during its initial release, Mike Judge’s 1999 workplace comedy Office Space has struck a chord with audiences in recent years, solidifying itself as a cult classic. The film centers on Peter Gibbons, a white-collar employee who finds himself simultaneously underwhelmed by his work and inundated with the bureaucratic drudgery of his office. Set in 1999, the film follows Peter as he spends his days adding two digits to lines and lines of banking information in preparation for the coming of the new millennium, while being barraged with requests from his multiple middle managers about memos, report cover sheets, and weekend shifts.

It is only after Peter undergoes hypnotherapy, during which his therapist suffers a heart attack and dies, that he snaps out of the fugue state of his dead-end job and decides he’s not going to allow his office job to ruin his life. Freed of his commitment to his workplace, Peter convinces two of his fellow employees to rebel against their boss and embezzle money from the company. What ensues is a jab at the white-collar workplace, with all of its corporate messiness and the odd rituals its denizens adopt to subdue the maddening nature of what they do.

As a critique of capitalism, Office Space shows us, perhaps a bit hyperbolically, the attitude of the worker who is alienated from his work. This is first evidenced through Peter’s relationship with his next-door neighbor Lawrence, a construction worker whom Peter seems to admire for his profession. Peter lauds, and almost fetishizes, Lawrence’s physical job because he thinks it offers a sense of fulfillment, a “job well done”, that his current work is lacking. As Peter describes it later in the film, “I probably do about 15 minutes of real work a day… it’s not that I’m lazy, I just don’t care.” This notion of the worker being productive when their work is fulfilling and they feel a sense of connection to their job hearkens back to Marx’s notions of workplace alienation and species-being.

Office Space also touches on C. Wright Mills’ notion of the “personality market”, the idea that under capitalism it is not enough for us to give up our labor or skills to our jobs; we must also surrender or regulate our attitudes, emotions, and personalities in the market. This is most prevalently evidenced in the case of Jennifer Aniston’s character, Joanna, who works as a waitress at a chain family restaurant called Chotchkie’s. Joanna is constantly berated by her boss for not having enough “flair”, the small buttons and pins placed on her work uniform, and is constantly criticized for doing “the bare minimum”. This criticism is usually accompanied by the question “don’t you want to express yourself?”, as if to imply the expression allowed by the workplace could be genuine or fulfilling to any degree.

Overall, Office Space is a quick-witted film that is uncomfortably poignant for anyone who has had to endure a Casual Friday or seems to have come down with a chronic case of “The Mondays”.
