[I’ll preface this by saying that, due to some of the issues discussed below that we ran into while attempting to implement this project, it was never ‘completed’ per se.]
Our team consisted of an ex-journalist in MIT’s comparative media studies program, a computer science master’s student, and myself. We were interested in exploring ways to build empathy even around controversial subjects through exchanging stories, specifically building empathy through video stories. We created the StorySwap platform for a class project and then started testing it out. First, we seeded the site with videos we had recorded in person with random people we’d encountered walking around MIT, and then we sent the site out to the larger MIT community to try to get responses to start conversation chains around controversial topics on campus.
The two-fold problem: As we perceived it, discussion of controversial topics has become increasingly uncivil in recent years. We attributed this to a lack of empathy, a lack of opportunity to understand someone different from you and to see them face to face. We created the platform 1) to address that empathy problem and encourage discussion, and 2) to give journalists a source of harder-to-reach video viewpoints in an increasingly multimedia journalism world.
Solving the problem: Looking back, a key driver of how we tried to solve this problem was that one team member had already done research into video, attempting to get people to voluntarily upload videos of themselves talking about issues, and into whether that allows for any additional empathy beyond textual communication. Because of that, it often felt like the solution came first and we tweaked the problem to fit. This raises the question: how often are we biased in how we think problems should be solved because we’re looking for a way to apply an existing tool or some knowledge?
Assumptions: We assumed “people” wanted this tool and wanted to use it to build empathy. Really, that just didn’t seem like a priority for MIT students drowning in work. We also assumed that videos would build empathy more than text alone, but, as discussed below, this did not seem to be the case.
Journalists as users: Because we tried to report on a story using this tool, I suppose we tested it out as users ourselves. In its existing form, the platform was far less effective than actual interviewing, and we never really got to test how well it could provide access to typically hard-to-reach views.
Users: Because of time and resource constraints, I feel that we definitely targeted the wrong users, or at least too narrow a set. Testing questions with the MIT community wasn’t very productive, and we discovered that people were much warier of appearing in public videos, especially if they were thinking about consequences for what they said further down the line. The focus wasn’t so much on what other people said, and no one really started a conversation; it was more, ‘oh, here’s a topic I’m interested in, and I have something to say.’ The presence of a video, of the face of the person speaking, which we had hoped would build empathy and help people remain civil, actually seemed to create a little too much caution to allow for free-flowing discussion. We originally imagined this becoming a trusted community, somewhat like “Quora with videos,” where that trust would encourage discussion. We didn’t have the time to build and nurture such a community, but perhaps within that context, users watching videos could have become more empathetic and willing to engage in discussion.