
Introduction

People with PTSD, or who are otherwise prone to such episodes, may experience flashbacks or other harmful reactions after engaging with certain kinds of situations or content. Unfortunately, Twitter is especially likely to serve this type of content, since triggering material frequently appears directly on users’ timelines.

The problem is compounded by the fact that users have little control over what does and does not appear on their timelines, so strategies that work on other social media (e.g., unfollowing or blocking people who post triggering content) are insufficient on Twitter. Additionally, a large portion of the Twitter population is ideologically opposed to content warnings, which increases the risk of exposing people with PTSD to triggering content.

Related Work

Although some tools attempt to provide content warnings on a page, they mostly do so ineffectively: they either generate a content warning once at the top of the page, which works poorly with how Twitter constantly loads new content, or they let users blur content they don’t like after the fact, which forces them to be exposed to the content first. Both approaches work poorly on a site like Twitter. Instead, we needed to create something that accounts for how Twitter constantly loads new posts and catches potentially triggering content as it appears.

Methodology

We created a Chrome extension meant to be used in conjunction with twitter.com to help people with PTSD. The extension blurs content that is deemed potentially triggering and displays a content warning explaining why the tweet is blurred. Users can choose to view a filtered tweet by hovering their mouse over it.
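As a rough illustration of the blur-and-hover behavior, the content script can inject a small stylesheet once at load time. The class name and blur radius below are illustrative choices, not the exact values we ship:

```ts
// Inject the blur styling once when the content script loads. Filtered
// tweets get the "cw-blurred" class; hovering over a tweet reveals it.
function injectBlurStyles(): void {
  const style = document.createElement("style");
  style.textContent = `
    .cw-blurred { filter: blur(10px); transition: filter 0.2s ease; }
    .cw-blurred:hover { filter: none; }
  `;
  document.head.appendChild(style);
}

injectBlurStyles();
```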

Whenever the user scrolls (with a mouse, keyboard, screen reader navigation, etc.), the extension checks the timeline for new tweets that have not yet been filtered. For each tweet, we extract the text (if there is any) and compare it against the regular expression associated with each filter. If the tweet matches any filter, we insert a content warning above it and add a class with CSS that blurs its content.
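The TypeScript sketch below shows the overall shape of that loop. The tweet selector, filter terms, and class name are illustrative assumptions rather than our exact implementation, since Twitter’s markup changes frequently:

```ts
// Illustrative filter shape: each filter pairs a label (shown in the
// content warning) with the regular expression it matches against.
interface Filter {
  label: string;
  pattern: RegExp;
}

// The terms here are placeholders, not our real filter list.
const filters: Filter[] = [
  { label: "violence", pattern: /\b(attack|assault)\b/i },
];

// Twitter currently renders each tweet as an <article>; this data-testid
// selector reflects the markup at the time of writing and may change.
const TWEET_SELECTOR = 'article[data-testid="tweet"]';

function scanTimeline(): void {
  for (const tweet of document.querySelectorAll<HTMLElement>(TWEET_SELECTOR)) {
    // Skip tweets we already examined on a previous scroll event.
    if (tweet.dataset.cwChecked) continue;
    tweet.dataset.cwChecked = "true";

    const text = tweet.textContent ?? "";
    const match = filters.find((f) => f.pattern.test(text));
    if (match) {
      // Insert a warning immediately before the tweet, then blur the tweet.
      const warning = document.createElement("div");
      warning.textContent = `Content warning: ${match.label}`;
      tweet.before(warning);
      tweet.classList.add("cw-blurred");
    }
  }
}

// New tweets load as the timeline grows, so re-scan on every scroll and
// run once for the tweets already on screen.
window.addEventListener("scroll", scanTimeline, { passive: true });
scanTimeline();
```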

By creating this extension, we hope to give Twitter users more agency over the content they consume and a stronger ability to engage with and access the site as a whole. In doing so, we hope to uphold the principles of Sustainability and Collective Access: preserving the mental wellbeing of users interacting with the website, allowing them to better control how they are exposed to content on Twitter, and meeting the access needs of people for whom stricter moderation of the content they are exposed to is a necessity.

Disability Justice Perspective

We looked at this problem through the lens of two disability justice principles: Sustainability and Collective Access. This perspective informed how we designed the tool and the goals we strove for at each step of the project.

Sustainability is all about pacing ourselves so that we can last and be whole in the long term. Giving people the ability to moderate and curate the content they consume helps them preserve their mental state and browse the web with greater comfort and confidence over the long run.

Collective Access, meanwhile, focuses on the idea that we are all people of worth and should together have access to things designed according to this principle. We further it by reducing the barrier to entry and making the web more accessible for users with PTSD or other conditions that make them prone to being triggered. Collective Access also highlights value exploration and enabling a culture of creativity, which we promote by acknowledging that people have different access needs and enabling them to fulfill those needs. This touches on the concept of shame: access needs are not something to be ashamed of, just another thing to be designed for, and by designing for them we hope to enable people to interact confidently and without shame. The tool also promotes autonomy, in that it allows users to participate in the community while still maintaining their own autonomy.

Learnings and Future Work

Our Takeaways

Interacting with Twitter is harder than anticipated. Working directly through the API is fairly limiting, and it has a lot of confusing elements and deprecated features, so we eventually decided to interact with the page directly. Even then, we ran into a lot of obstacles: there is a lot of redundant code in the way Twitter formats its tweets, and it got in the way of our intended goals quite a lot!

It’s really difficult to find an effective way to identify potentially triggering phrases. Some triggering words appear inside larger, harmless words, which can cause false positives (for example, if “go” were triggering and “goliath” were not, a naive substring filter for “go” would still match “goliath”).
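One way to mitigate this, sketched below, is to escape each filter term and anchor it with word boundaries when building the regular expression (the helper function is hypothetical):

```ts
// Escape regex metacharacters in a filter term, then wrap it in word
// boundaries so the term only matches as a whole word.
function buildFilterPattern(term: string): RegExp {
  const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`\\b${escaped}\\b`, "i");
}

buildFilterPattern("go").test("goliath");  // false: "go" is inside a word
buildFilterPattern("go").test("let's go"); // true: "go" stands alone
```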

It’s also very challenging to identify intent and meaning. While it’s possible to check for the literal text of a harmful phrase, it becomes much trickier when the phrase is referenced through a euphemism or written in a way that evades filters. It is also quite possible to simply not anticipate a word or phrase that might be triggering, exposing the user to it. This is a hard problem to solve and will likely see a lot of iteration in the future.

Next Steps

There are a few areas that we would love the opportunity to develop further in the future…

  1. Language Model Integration: One of the biggest challenges in developing content filtering is identifying content that could be triggering but only references the trigger indirectly. Triggers can be really specific, and it’s hard to anticipate everything that could cause one, even for yourself. It would be really interesting to pass tweets to a language model that could perform some level of content classification.
  2. Increased customizability: It’s hard to let users customize much, as it introduces the complexity of storing data somewhere. With some investment, it might be worth exploring further in the future; a sketch of one possible approach appears after this list.
  3. Usability testing: We’d love the opportunity to do additional usability testing to get a better understanding of how users interact with the tool and how it could be adapted to better serve them.
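For the customizability item above, one plausible direction is the standard extension storage API. The sketch below uses chrome.storage.sync, which persists small amounts of per-user data; the key name and data shape are illustrative, not a shipped schema:

```ts
// Persist a user-defined list of filter terms (illustrative schema).
function saveFilterTerms(terms: string[]): Promise<void> {
  return chrome.storage.sync.set({ filterTerms: terms });
}

// Load the stored terms, falling back to an empty list on first run.
async function loadFilterTerms(): Promise<string[]> {
  const stored = await chrome.storage.sync.get("filterTerms");
  return (stored.filterTerms as string[]) ?? [];
}
```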

Accessibility

Since our project has no UI and makes minimal changes to the target website, we didn’t need to make a lot of extra changes to accommodate accessibility needs. The main accessibility needs that we had to consider in our design were those of people using screen readers.

For users of screen readers, blurring the tweet achieves very little. Our solution was to insert the content warning into the DOM where the tweet would normally be and push the offending tweet back. Because screen readers traverse content in document order, users will always encounter the content warning before the offending tweet and can decide whether to continue or skip to the next tweet.
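A minimal sketch of that insertion follows; placing the warning node directly before the tweet guarantees it is announced first (the role attribute and wording are illustrative choices, not necessarily what the extension ships):

```ts
// Insert a content warning directly before a filtered tweet so that
// screen readers announce the warning before the tweet's content.
function insertWarningBefore(tweet: HTMLElement, label: string): void {
  const warning = document.createElement("div");
  warning.setAttribute("role", "note"); // announced as supplementary content
  warning.textContent = `Content warning: ${label}. The next tweet is blurred.`;
  tweet.before(warning);
}
```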