Since the election, I've seen a lot of writing about the ways in which social media can help explain our contemporary political divide, from the rise of fake news on Facebook to the belated ban of racists on Twitter (sorry, I don't use the politically correct term "alt-right", and, by the way, how ironic is it to be PC about that group, of all people) to the existence of news bubbles (as foretold by Eli Pariser). I worry, though, that too much of this conversation focuses on policy fixes, and not enough looks at the deeper issues: how these systems and networks are designed from the ground up. We seem to be talking about how to fix Facebook, but not about whether Facebook can ultimately be fixed.
Facebook is private property, and its rules of discourse are governed by proprietary algorithms designed with revenue maximization, not social benefit, in mind. Any social benefits (and there are many) are incidental. So even though it acts as a de facto town square, it's really a town mall. And that makes the ground rules, and our ability to change them, different. The same is true of all other platforms, not just Facebook. What I want to explore is how particular architectures embed certain kinds of expectations about people and their needs, and the ways in which these structures facilitate certain activities and circumscribe others. Ultimately, is there a way for us to consider society as we build out new platforms, particularly when it comes to social media, where people are both the producers of value and its consumers? Put another way, how is it possible that, after the rise of literary theory, we are not thinking more deeply about a system where private corporations control the ways in which language is produced, distributed, and consumed? Surely that is affecting its content, and its potential.
If Walter Benjamin were alive, I imagine he would talk about the ways in which the cyber flâneur is constrained. We live in intentional online communities. There is little juxtaposition or happenstance. We go online, typically, with a search in mind, and even though we use a "browser" to get there, we are motivated to reach our destination. Searching and browsing are fundamentally different. Alternatively, we are on Facebook, which is less directional. We are looking for stuff to see, but only from known entities, not ones we happen to encounter, and this stuff is typically disconnected from our corporeal selves that eat, live, work, play, and travel through real spaces, through different neighborhoods whose inhabitants have different perspectives. Even our offline movements are increasingly point-to-point, designed with minimal travel times in mind, through the aid of navigation apps.
How you feel about this depends on what your goals are (and where you are in the system). The mid-20th-century drive to build highways through American city centers made a certain kind of sense: it allowed for the speedy transfer of suburbanites to their work in the urban core. As long as this was the goal, highways were the answer, and the governmental subsidies that embedded these policy choices made sense. But highways through cities, of course, also destroy urban neighborhoods, make mixed-use development more difficult, and create sprawl and its attendant social isolation. You need only read The Power Broker, or have grown up in suburban Atlanta, to understand that. So if your goal is vibrant city life, highways, and the subsidies behind them, don't make sense. I fear that we are currently building highways and gated communities in cyberspace without thinking about their potential side effects. We are concerned only with efficiency, not sustainability.
Consider even something as simple as commenting systems and moderation. Anonymity and/or pseudonymity might encourage some kinds of discourse (whistleblowing), but they can also lead to bullying. Comment moderation can embed certain social norms (veto power, group voting) that will shape the discussion taking place. Now think about the much bigger impacts that come from, say, the network design and user interface of music streaming services or news aggregators. How are these networks serving long-term interests? How might seemingly innocuous one-off decisions about remuneration affect the long-term economic viability of art and journalism? Could we design a creator-focused network that would allow producers to earn more of the revenue from the audience they generate, and how could we do that (if at all) while still allowing for the discovery of new voices? The internet was supposed to be revolutionary. Don't we want to ask more of it than clickbait headlines and unlimited access to golden oldies on Spotify?
Beyond the social aspect, there are very real material concerns about how the architecture of (and access to) other kinds of networks can affect people's ability to earn a living. This past weekend I had a conversation with my Uber driver (one of the very few chance encounters I had, in that I ordered a car but not a particular driver). I was telling him about some of the research a colleague at the University of Denver, Nancy Leong, has done on racial bias in the peer-to-peer economy. My driver talked about the lack of control he has as a driver over whom he picks up. If he doesn't pick someone up, he said, the algorithm might drop him. This is particularly troubling because his rides often take him far from where he lives, and so he might have to spend hours driving home, for free, if he doesn't get work. The algorithm is proprietary and opaque, and he has no form of redress except to exit the market. The virtue or vice of privately owned networks isn't an all-or-nothing thing. I use ride-sharing services, and people make money driving for them. But this seems like an awful lot of power being ceded to, and concentrated in, a network governed by private interests.
There are more fundamental social issues as well. Some of the arguments in favor of virtual reality, for example, are based on the idea that VR will make empathy easier: that we can see what it is like to live as a racial or ethnic minority, that we can feel the struggle on the ground in Syria, that we can overcome our PTSD through repeated exposure and desensitization to traumatic events. But these discussions are marginal to those actually building VR and the systems that will program it. If his political participation is any indication, Palmer Luckey didn't design his VR headset to promote greater social understanding. VR can, of course, be used to understand diversity, but it can also be used to eliminate it. I can whitewash all the people around me, or remove them entirely. I can use it to withdraw from society. It's certainly not a given that VR's potential as a tool to better engage with and understand others will be realized. Some of the ease with which it can be put to these uses will be determined by design decisions made today. I'm a believer in VR's potential to make us better human beings, but if it is designed to maximize revenue from existing markets like porn and first-person shooters, those design decisions will make other adaptive uses more difficult.
I don't think this is just a technical problem. It's too important to leave to the people building the systems. I also know that there are a number of smart people already working on it. I saw a proposal for Facebook that would take an inventory of each user and say, "David, you know a lot of people from X group (racial, geographical, political), but you don't know many folks from other groups (Arkansas, Southern Baptists, atheists, Central Asians). Here are some suggestions for new friends." Computer Professionals for Social Responsibility has a book, Liberating Voices: A Pattern Language for Communication Revolution, that I have ordered; it's based on the classic book by Christopher Alexander (et al.), A Pattern Language, which was concerned with built spaces. These are good starts, but they are late ones. Designing is easier than remodeling. Setting social expectations is easier than resetting them. We need to engage with these concerns as things are being rolled out, not just after they have gained traction.
There are those I've spoken with who say that you can't engineer people, that the market rules, that you have to give people what they want or they will go elsewhere. But I don't know that I agree. Lines in a parking lot surely constrain my freedom to park wherever I want; if I ignore them, I face social sanctions. But I don't feel less free in a parking lot. These social norms promote both fairness and efficiency, even though they might be somewhat paternalistic.
In the end, I don't have a lot of answers, just questions. But I do think there's no such thing as a neutral choice, only choices that, at best, ignore the values they promote or inhibit. I believe that technical design decisions have impacts on the world, and, as such, that they should be thrown open to folks from all walks of life (and all other disciplines, particularly the humanities). I would welcome your thoughts on this subject, and I am particularly interested in learning about those who have surely already written about it (or analogous subjects) so that I can educate myself.