Researcher Interview: Jamie Theophilos
In this first ASM researcher interview, Robert W. Gehl talks to Jamie Theophilos. Jamie is a doctoral candidate and associate instructor at the Media School at Indiana University Bloomington. Recently, they published an article in Social Media + Society, “Closing the Door to Remain Open: The Politics of Openness and the Practices of Strategic Closure in the Fediverse”. This open access article discusses how fediverse admins and members reacted to Meta’s Threads, an ActivityPub-enabled microblogging system.
RWG: In your recent article, you discuss the Free Fediverse. What is it, and why did you decide to study that project?
JT: The Free Fediverse is a general phrase describing fediverse users who are invested in creating a more equitable, safer, and more secure Fediverse, which involves refusing to cooperate or collaborate with big tech corporations by any means. The phrase comes from the Free Fediverse website, started by the user @ophiocephalic@kolektiva.social, which serves as a hub for literature documenting opposition to the federation of Meta's microblogging platform Threads, along with other related topics.
My original intention for this paper was to look at how activists are moving to alternative social media platforms as a way to mitigate surveillance and censorship on corporate social media. I became interested in this after various far-left-leaning media outlets were booted off different corporate social media platforms in the aftermath of the George Floyd Uprising.
While I was doing this research, the news of Meta joining the fediverse really took over. What felt more pertinent, then, was how these users who fled corporate social media are now trying to maintain the fediverse as autonomous. Within that, the arguments and tensions around openness became so overwhelmingly present in public and private conversations that it felt imperative to me to focus on the politics of “openness.”
Very often I hear the conflict between the pro-Meta folks and anti-Meta folks characterized as a conflict between “openness” and “closure.” You draw on the concept of “strategic closure” in your article. How does strategic closure complicate the simplistic “open/closed” binary we often get stuck in?
The Free and Open Source Software (FOSS) movement (and the internet at large) has a long-standing history of differing interpretations of openness. The most generalized narrative is that, for some, the value of FOSS – and subsequently the Fediverse – stems from strong commitments to social egalitarianism, anti-authoritarianism, and anti-capitalism.
The other side is people who come from a far more libertarian perspective, where free and open source software allows for valuable innovations. In the case of the fediverse, these individuals tend to believe that cooperation with big tech can help build a bigger (and thus better) Fediverse.
In the aftermath of the news of Meta joining the fediverse, there were (and still are) many arguments about what it takes to value openness and to develop and maintain open source technologies. That really showed how complicated the politics of open source technologies are, and how valuing openness involves tradeoffs, compromises, and all sorts of conundrums.
I was (and still am) curious to learn and think through how people play and negotiate with all of this. I learned that people have been practically negotiating these complicated positions through a contextualized sense of openness, where the materials, values, and practices of openness are neither black and white nor static and immutable.
I landed on the term strategic closure because it felt like the best phrase to describe that framework. The choice to “close” doesn’t negate a fervent commitment to freedom and openness, but is predicated on a series of decisions that consider who, what, how, and why we are open.
It can be challenging doing research on Fediverse blocking decisions and content moderation. What methodological lessons have you learned from doing this work? How might your broader research on doxxing affect your approach to consent in research?
I’m not sure if this is methodological, but it feels relevant regardless: early on in my research I had the privilege to sit in on moderator meetings for a server, and after my research ended, I actually became a moderator for that server.
I think in my head I presumed that content moderation would simply involve blocking a spam account or instances that host CSAM. Yet that hasn’t really been the case. Almost all the reports I saw moderators navigate (and what I see now as well) involve fervent disagreements on issues related to Israel, Palestine, Ukraine, and Russia.
Being a content moderator is also unpaid labor, so keeping on top of conflicts and crafting intentional, thoughtful responses to certain reports can be difficult when you don’t check social media multiple times a day. It really shows how profoundly political and difficult content moderation is.
The political nature of content moderation absolutely extends to any research on doxxing (or more broadly online harassment and privacy invasion) and those who want to work on making platforms “safer.” I think there is a lot to unpack with what it means for a social space (online or offline) to be “safe.” What do we actually mean by safety? Safer for whom?
I think doxxing is a great example of how complicated online safety and privacy are, and the Fediverse is actually a great place to examine it. There is a broad ‘culture’ of accounts across Fediverse platforms that are home to antifascist hate-watch ‘counter-info’ groups, which post doxes in an effort to curb white supremacist groups.
People have moved to these federated platforms for various reasons, one of which is that over the past 5-7 years corporate social media platforms have banned doxxing. For some, doxxing becomes the entryway to escalating online harassment into in-person physical danger. However, in the context of these anti-racist, anti-fascist blogs, doxxing is perceived as a way to develop community safety.
I share all this to exemplify that the connection between the practice of moderating and instilling safety (whether through moderating and mediating communication and information, or through the communication and information itself) is incredibly convoluted and deeply political.
I don’t have an answer to all of this, but I do think that any attempt to depoliticize the process of making online platforms safer and more secure will do more harm than good.
You drew on interviews for your article. What surprised you when you talked to people? What did you hear that you didn’t initially expect?
Aside from the news of Meta joining the Fediverse and the experiences of content moderation, there are two other things that were really eye-opening for me.
One server I was speaking with had its servers raided by the FBI (due to alleged behaviors at a counter-protest against an anti-trans-rights event), which led to a lot of discourse: some users were incredibly upset, while others argued that nothing that private or sensitive should have been on a public social media account anyway. This incident was a wake-up call to others (and to myself) about how private a public social media platform can really be.
The final surprising element was learning that some of the server runners I interviewed have blocked instances run by other people I interviewed, despite the fact that they all subscribe generally to a similar political ideology and have similar moderation policies. This really points to how so many of us have different definitions of what we consider ‘safer’ or what constitutes ‘conflict’ versus abuse.
I think we need to reckon with the reality that those issues aren’t siloed to individuals who fall on a polarized political spectrum, but exist within ‘the left’ as well.
You have an upcoming article on ASM and the Stop Cop City activism. Can you share a bit about that article?
So for anyone who doesn’t know what the Stop Cop City movement is: Stop Cop City (also sometimes referred to as Defend the Atlanta Forest) is a movement against the construction of a police training facility in the United States’ largest urban forest, located in Atlanta, Georgia.
The paper is titled “Countering Repression and Narrative Architects in the Stop Cop City Movement” and is co-written with Cole Nelson (editor at Black Camera Journal) and Chris Robé (author of Abolishing Surveillance and more). The drive behind the essay was to look at how activist movements become counter-narrative “architects” as a way to counter repression and mobilize their movement.
In almost every radical social uprising, states build narratives to demonize protesters, casting them as criminal “folk devils.” We took a qualitative approach and analyzed two specific platforms, one of which was the blog Scenes from the Atlanta Forest (Scenes), hosted on Noblogs.org, a federated, free and open source, collectively run blogging platform. It felt important to use the term ‘architect’ when analyzing the narratives developed on Scenes, as it showed how the technological design of that platform plays a role in the counter-narratives of the Stop Cop City movement.
Where do you think ASM research should go from here?
I am interested in building discussions with ASM researchers on how we can advocate for and utilize ASM and Free and Open Source Software inside and outside academia. Too often, academics leave their political critiques at the door in a way that never translates into actually participating in building better worlds outside the ivory tower.
I think it’s imperative we also share the knowledge that we’ve accumulated about tech and media with others. There are other options for writing your papers outside of Google Docs!
To answer your question more specifically: from my own personal standpoint, I hope to do more work and collaborate with other researchers on ASM and issues of privacy, safety, and security. I would love to see research on other federated platforms beyond just Mastodon. I would also like to see work on far-right extremist Fediverse users, non-Western users of the Fediverse, and the development of Fediverse servers that are based on physical location.
Comments
For each of these posts, we will also post to Mastodon. If you have a fediverse account and reply to the ASM Network Mastodon post, your reply will show up as a comment on this blog, unless you set your reply's visibility to followers-only or direct message. Content warnings will work. You can delete your comment by deleting it through Mastodon.
Reply through Fediverse