Last January, when I set out to write a thesis, I was
initially interested in media effects, framing, and disparities (from a social
justice perspective). I had studied media advocacy in a mass media seminar, but I knew that media advocacy was more of an intentional tactic public health folks use to change community norms than an organic decision that journalists make. I wanted to study the small, everyday decisions journalists have to
consider when writing stories and what effects those decisions have on
audiences. This desire took me to my main theory of study: framing. I dug
through the available framing literature and found it was filled with
interesting research questions and studies.
Many of the framing studies I found in the literature
included content analyses. I steered away from this design for my own thesis
because I knew I wanted to take a more active role in the process. I also
viewed the coding part of content analyses as tedious and liked the control
experiments give to researchers. I was drawn to the experience of learning how to manipulate something, control for variation, collect data from human subjects, and analyze the results. I am, however, very grateful to the
researchers who took the time to look at how health articles were framed and to
those researchers who took the time to interview health journalists about their
internal processes. I see now that content analyses and qualitative interviews are important for developing research questions and identifying next steps. The best researcher would probably employ a combination of these approaches
and use both qualitative and quantitative research methods to answer their
research questions.
It took months for me to understand how framing theory
should guide my literature review and my research questions. I also had a hard
time conceptualizing the constructs and how the post-test questions combine
into a (hopefully) reliable measure. In fact, I didn't understand how to create a measure until after I ran my experiment. I thought a single blame measure could cover both individual and societal blame, and it was only while creating my summary scores that I realized individual and societal blame were separate constructs. I worked to remedy this by combining different sets of questions into indices. Unfortunately, I had to rely on single questions for two of my measures: societal blame and societal responsibility.
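For anyone retracing this step, here is a minimal sketch of how related items can be averaged into indices, with a quick reliability check. The file and column names are hypothetical placeholders, not my actual questionnaire items, and my real work happened in SPSS rather than Python.

```python
import pandas as pd

# Sketch of building summary indices from post-test items.
# Column names (blame_ind_1, blame_ind_2, ...) are hypothetical placeholders.
df = pd.read_csv("responses.csv")

# Average related items into a single index (individual blame).
individual_blame_items = ["blame_ind_1", "blame_ind_2", "blame_ind_3"]
df["individual_blame_index"] = df[individual_blame_items].mean(axis=1)

# A construct measured by a single question stays as-is.
df["societal_blame"] = df["blame_soc_1"]

# Quick internal-consistency check (Cronbach's alpha) for a multi-item index.
def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(cronbach_alpha(df[individual_blame_items]))
```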
The first experiment I proposed had three independent
variables and was far too complex for a first-time experimenter. Brian Houston,
who graciously agreed to serve as my external committee member and quantitative
methodologist, helped me simplify the design. He challenged me to identify just
two independent variables and assured me the process would be complex enough. A
few weeks of mental grappling ensued. It was hard to commit. I enjoyed thinking about all the options, but settling on a health topic, a frame and a disparity felt so limiting. The grand ideas were dissolving, and I was left with just a
few specifics. It felt like the findings would not be useful enough. But making
these decisions also meant getting closer to gathering data. So, I pushed
forward. After speaking with my mentors (Katherine Reed, Jeanne Abbott and Kim
Walsh-Childers), I chose diabetes as my health topic. It is a pervasive
condition with serious physical and economic costs for individuals and society.
I also got closer to designing the experiment, with much assistance from Brian.
We settled on a 2x2 between-subjects factorial experiment. My manipulations would be frame (individual vs. societal) and disparity (present vs. absent). I chose economics for my disparity. Race disparities are common in the
literature, but they are also polarizing. I decided to circumvent race and talk
instead about something directly affected by racial disparities: money. The
inclusion of a disparity would give me space to talk about the economic cost
and economic struggles of a central character.
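To make the design concrete, here is a small sketch of the four cells and how a participant might be randomly assigned to one. The article file names are placeholders; in practice, Qualtrics handled the randomization.

```python
import random

# Sketch of the 2x2 between-subjects design: frame x disparity.
# The four stimulus files are hypothetical placeholders.
conditions = {
    ("individual", "disparity_present"): "article_individual_disparity.html",
    ("individual", "disparity_absent"): "article_individual_no_disparity.html",
    ("societal", "disparity_present"): "article_societal_disparity.html",
    ("societal", "disparity_absent"): "article_societal_no_disparity.html",
}

def assign_condition() -> tuple:
    """Randomly assign a participant to one of the four cells."""
    frame = random.choice(["individual", "societal"])
    disparity = random.choice(["disparity_present", "disparity_absent"])
    return frame, disparity, conditions[(frame, disparity)]

print(assign_condition())
```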
Once the design was settled, I met with my committee to
present the proposal. The meeting itself was helpful. I was nervous and stressed from working too many nights without time away between semesters. I had been unable to take a step back and edit effectively, so the suggestions from my committee were very welcome. Brian wasn't able to come because I didn't send a reminder in time, and I hadn't thought about using an Outlook calendar invitation.
I was disappointed but learned a valuable lesson about how to reserve a busy professor's time.
This summer involved a lot of logistics and email exchanges.
I waded into Qualtrics and learned how to set up the articles and then randomize
them. At times, I was frustrated that I didn’t have more quantitative mentors who
could help me. I reached out to a prominent health communication scholar, Glen Cameron, who is based at Missouri, but alas, he didn't have time to help. I
pushed forward, knowing that mistakes are part of the process. Guides are
crucial, but learning how to be a researcher also means pushing through unfamiliar territory. So I pushed. I worked on the articles. I fought with the technology. I checked off boxes of the many forms required of a master's thesis. By mid-June, with my
move to DC looming, I put the research on hold and focused on packing up my
life in Columbia. I was nostalgic and proud of all I'd learned. Journalism school was a fantastic choice, and I felt lucky to have called the J-school my home for the past two years.
Once the move to DC was semi-complete, I turned on the heat again. I put in an IRB
application and started researching Amazon Mechanical Turk. I read a lot of discussion threads and how-tos for MTurk and got a handle on that world. I also heard back from Betty Jo at Mizzou's IRB office. I
got the green light to run my experiment and returned to Qualtrics to edit my
articles and the post-test questionnaire. I called the IT department to ask about
Qualtrics access and was relieved to learn I had a 30-day grace period. I ran
my post-test questionnaire by one of the statisticians at the National Cancer
Institute. He graciously assisted. I could tell he was nervous about getting
wrapped into a master’s level project, so I again turned inward. I could only
get so much advice before I had to just dive in.
I reviewed everything four times and hit run on the MTurk
experiment around 5 p.m. on a Thursday in August. The data came flooding
in nearly instantly. By Friday morning, I had collected 200 responses. I
reviewed them, paying close attention to time spent. The average time on the experiment was about 11 minutes. I rejected
people who seemed to complete it too quickly, which turned out to be only one person. I put the experiment back onto the market and collected that last
response by Friday afternoon. I approved everyone. Then I waited.
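For context, the time-based screening amounted to something like the sketch below. The file name, column name and cutoff are illustrative assumptions, not my exact values.

```python
import pandas as pd

# Sketch of screening MTurk responses by completion time.
# File name, column name and the 3-minute cutoff are assumptions.
responses = pd.read_csv("mturk_batch.csv")

MIN_SECONDS = 180  # flag anyone who finished implausibly fast
too_fast = responses[responses["duration_seconds"] < MIN_SECONDS]

print(f"{len(too_fast)} response(s) flagged for review")
```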
And waited. A week passed. I had a large spreadsheet's worth of data from 200 people. I didn't really know what to do next. I knew I needed
to clean the data and then analyze it. But I hadn’t used SPSS since the winter
of 2014. About a week later, a coworker asked me how the data analysis was
going. It wasn’t. We set an appointment to review data cleaning and analysis
for the next day, and I started to dig in.
I found the data cleaning process meditative. It involved creating new columns where needed, running demographic analyses on participants and reviewing the manipulation checks to make sure they worked. After that process, I was able to fix my methodology section: I reorganized it and filled in the holes where the descriptive statistics needed to be.
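In code form, that cleaning might look something like the sketch below. The column names are hypothetical, and my actual work happened in SPSS.

```python
import pandas as pd

# Sketch of the cleaning steps described above, with hypothetical columns.
df = pd.read_csv("clean_me.csv")

# Derive a new column: which of the four conditions each participant saw.
df["condition"] = df["frame"] + "_" + df["disparity"]

# Demographics of the sample.
print(df["age"].describe())
print(df["gender"].value_counts(normalize=True))

# Manipulation check: did each frame condition perceive the intended frame?
print(df.groupby("frame")["manipulation_check_frame"].mean())
```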
My next phase was digging into the data itself. I read tutorials on analysis of variance (ANOVA) and reviewed the 2x2 factorial design. I also asked another coworker how she would look for main effects and interactions. I ran the tests and stared at the charts. Then I slowly began working through the findings to find meaning in the numbers.
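For readers unfamiliar with the test, a two-way ANOVA for a 2x2 design like mine can be sketched as follows. The column names are hypothetical, and I ran the real analysis in SPSS.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Sketch of a two-way ANOVA for the 2x2 design: frame x disparity.
# Column names are hypothetical placeholders.
df = pd.read_csv("clean_me.csv")

# Model individual blame as a function of the two factors and their interaction.
model = ols("individual_blame_index ~ C(frame) * C(disparity)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # one row per main effect plus the interaction term
```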
The results showed that my manipulations (frame and disparity) had a significant impact on readers and that one manipulation was not dependent upon the other. Framing and the inclusion of a disparity each had an impact on how readers assigned blame and responsibility to the individual and the government, and the influence was in the direction I had hypothesized. Participants who read an individually oriented article were more likely to assign blame to the individual, while those who read the societally focused article showed the opposite pattern. The same was true for responsibility and government solutions: people who read individually oriented articles were more likely to put responsibility on the individual than on society or the government, and vice versa.
I spent several evenings and weekends staring at data and trying to figure out what it meant and how it could apply to journalists. At first, I struggled to understand the difference between a main effect and an
interaction and how to represent my findings in a paper. I am very
grateful to Jennifer Taber and Chan Thai at the National Cancer
Institute for taking time out of their busy research schedules to mentor
me through my roadblocks. I have a much stronger grasp of social science research now that I
have run an experiment of my own.
Even though my experience of real-world journalism is limited, I still feel confident in my discussion and my conclusion. The everyday health journalist most likely uses a combination of the frames and source types I used in my thesis. My manipulations were very pronounced so that I could see the effects clearly. An actual article would probably resemble a mixture of the four articles I produced for this experiment. A newsroom would certainly create something much less "experiment acceptable." But that is the real world, so I can only extrapolate so far. I think it is worth noting, though, that framing and the inclusion of an economic disparity did have a significant impact on where readers of articles about diabetes placed blame, responsibility and government responsibility. It is a reality I will remember when I have the opportunity to start reporting again (fingers crossed!).