The Retweet began as an abbreviation, 'RT', adopted by the Twitter community to indicate attribution when resharing a tweet. Twitter eventually listened to its users and built an automatic Retweet into its interface. The automatic Retweet prevented modification of the Tweet and simply transplanted the Tweeter’s profile photo and text directly onto the Retweeter’s profile, like a donation of attention from one account to another. Many users saw this as an assault on their nomenclature of choice – why could Twitter not have favoured their preferred method of resending information, which looked like this:
[Optional: Retweeter’s additional text] RT @username [Original tweet text and links]
TweetDeck, a piece of yellow-branded Twitter monitoring software represented by a regal raven, protected the original citation style by building it into its interface. When you clicked to Retweet a tweet using TweetDeck, it would bring you to a darkened version of the tweet that could be sent as an automatic Retweet or edited in the original manual Retweet format:
RT @username: [Original tweet text and links]
Time passed and TweetDeck’s userbase grew to the point where it could justify opening a directory for its fans. Twitter (probably) fumed with jealousy as it noted the rise in users posting via TweetDeck – then purchased the company. There was some speculation that Twitter may have bought out TweetDeck to keep it out of UberMedia’s hands, but I think Twitter had equal motivation to satisfy the growing demand for more powerful monitoring software and didn’t want to keep losing so many users to a secondary service. They also needed an alternative to their excessively simplistic mobile interface, one that would provide more monitoring power on the go. Since then, they have completely rebranded the web and desktop versions of TweetDeck, substituting a Twitter bird for the raven. Interestingly, the Twitter bird for the main web interface is a simple blue with no outline, whereas the bird representing TweetDeck is the same shape, but black on a blue background. Shade upgrade! *High Five*
Initially, as part of the rebrand, Twitter slashed the TweetDeck directory and removed the option to perform manual Retweets. In its place, they forced users to ‘Quote Tweet’ and imprison the Retweeted text and username in quotation marks. Besides being visually jarring, quotation marks tend to throw off certain monitoring software and do not permit the range of communication styles many users prefer. The MT, or modified Tweet, is used to mark an altered quotation, which is clunkier to accomplish using the ‘Quote Tweet’ function. Obviously, Twitter wants to discourage users from changing what the original author said. That is not necessarily a bad thing, but it conflicts with users’ creative attempts to beat the tight character limit.
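To make the conventions concrete, here is a minimal sketch of how the community’s RT/MT formats compose, including the character limit. This is my own hypothetical helper for illustration, not anything Twitter or TweetDeck ever shipped:

```python
TWEET_LIMIT = 140  # the character limit at the time of writing


def manual_retweet(username, text, comment="", modified=False):
    """Build a tweet string in the community's manual RT/MT format.

    An MT ('modified Tweet') marks a quotation whose text has been altered,
    usually trimmed to squeeze under the character limit.
    """
    tag = "MT" if modified else "RT"
    body = f"{tag} @{username}: {text}"
    tweet = f"{comment} {body}" if comment else body
    if len(tweet) > TWEET_LIMIT:
        raise ValueError(f"{len(tweet)} characters exceeds the {TWEET_LIMIT}-character limit")
    return tweet


print(manual_retweet("jack", "just setting up my twttr"))
# RT @jack: just setting up my twttr
print(manual_retweet("jack", "setting up my twttr", comment="A classic", modified=True))
# A classic MT @jack: setting up my twttr
```

Note how the Retweeter’s comment simply rides in front of the attribution, which is exactly the flexibility the automatic Retweet took away.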
The last bastion of RT-nostalgia lay in the TweetDeck mobile app. Despite an update since Twitter’s acquisition of TweetDeck, the mobile app still allows you to edit an automatic Retweet and will insert the RT before the @username. Symbolically, Twitter has yet to force the TweetDeck mobile app to drop the yellow raven in favour of the blue bird, although I’d wager the next major update might change that.
More recently, Twitter updated the TweetDeck web and desktop versions to include an ‘Edit & RT’ option, which mimics the old manual Retweet style, and did away with the ‘Quote Tweet’ option.
Putting the Right Face Forward
Since there is too much variability in tweet content to consider whether an edited RT with additional text performs better, I’m going to focus purely on the question of whether it is better for a user to see a novel or a familiar profile photo in their Timeline of Tweets. This assumes that the Retweeter is more familiar to the viewer of the tweet than the Retweetee (the account being Retweeted), which obviously does not hold true for every relationship in a network. Still, I think it’s a fairly representative thought experiment for most Twitter relationships.
Another confounding factor is the small grey text that appears above a tweet when it is automatically Retweeted, identifying it as a Retweet and listing the Retweeter’s name. Since this could certainly be used as a visual hook, or conversely as a visual skipping cue, it could change some of the considerations I lay out in this post. However, I think it’s faint enough and small enough that it’s unlikely to be one of the chief targets of visual focus during a rapid saccade.
What influences the probability that you stop scanning while reading your tweets?
When someone scans down their Twitter timeline, deciding to stop on a particular tweet and invest additional attention could be based on a number of factors. However, there are only so many visual anchors for them to use during their scanning process. These include:
- The profile photo
- The username and full name
- The tweet content (with highlighted text for usernames, hashtags and links)
- The tweet time
- The ‘expand’ button
- A media button at the bottom of the tweet
Determining whether the automatic or the manual RT is worth more relies on knowing which of these visual anchors the majority of users focus on during their visual scans, which consist of rapid eye movements known as saccades.
Watching you, Watching Twitter
Since no eye-tracking studies of tweets have been conducted to date, the best I could find were YouTube videos detailing the ocular focus of an “expert” Twitter user navigating the website, from the Tobii Eye Tracking YouTube channel. Watch the entire video if you have time, and note where the focus lies during the faster downward scrolling movements. This is an older version of Twitter, but the essential aspects of the timeline structure remain unchanged.
They also have a video of a beginner Twitter user; the pattern is relatively similar to the expert’s, with slightly more focus on the profile photos during both slow and fast saccades.
Does this mean that profile photos are irrelevant to the scanning process? In my opinion, that’s unlikely, especially when you consider just how shaky the focal point on the screen was during those faster saccades. It’s also important to note that current eye- and gaze-tracking technology focuses largely on the centre of the pupil, so we are missing significant amounts of data on the additional elements within a user’s field of view. In their 2008 review paper on the future of eye tracking in online-search research, Lorigo et al. admit:
“… it is important to note that eye tracking does not tell us how much users perceive in their peripheral field; to the best of our knowledge, nearly no literature studying peripheral vision exists from which we can effectively extrapolate to the context of online searching.”
In 2001, Asress and Carpenter wrote in Vision Research that different systems determine whether we react to ‘stop’ signals delivered to peripheral versus central vision during a saccade. Supporting the notion that peripheral data might play a bigger role than pupil-tracking can reveal, they point out that the peripheral and central stop processes seem to respond with the same speed.
How do we react to novel stimuli?
To determine whether it is more advantageous to show your own profile photo (manual Retweet) or someone else’s (automatic Retweet), we can look at research on our reactions to novel stimuli. Park, E. Shimojo and S. Shimojo published a study in PNAS on the “roles of familiarity and novelty in visual preference judgements.”
The 22 subjects in their experiments exhibited no preference for familiar or novel geometric figures, but preferred novel landscapes and familiar faces. One important question: when so many people use profile pictures that are mostly landscape, or where their faces take up a tiny portion of the thumbnail, would these still register as faces when shrunk for a tweet, or would our brains process them as geometric figures or landscapes?
A good quotation found in Jeremy M. Wolfe’s review paper on “asymmetries in visual search” outlines the relevant Treisman Hypothesis:
“… it is easier to detect a deviant among standard stimuli than to find the standard stimulus hiding among deviants (Treisman and Gormican, 1988).”
Conclusion: Speed Matters?
From a visual perspective, I would suggest that Twitter Timeline browsing can be broken into two different activities:
- Slow, purposeful scanning that focuses on the content of the tweets, as well as the username and full name of the user
- Faster scanning that sticks to the left side of the tweet, focusing on the username and likely absorbing important peripheral information about the profile photo
In the former situation, our preference for familiar faces might make it more likely that a manual RT, with a familiar face, would receive attention. However, when the saccade speed exceeds that at which reading is possible, the photo likely plays a much larger role in attracting attention. Does a novel signal, which the Treisman hypothesis suggests is easier to identify, mean that novel profile photos are also easier to use as stop signals during a saccade? Or does the familiarity of an identifiable photo draw us in even at higher speeds?
Let me know what you think in the comments and I may use them in future updates to this post.
- Asress, K.N. and Carpenter, R.H.S. 2001. “Saccadic countermanding: a comparison of central and peripheral stop signals.” Vision Research 41: 2645–2651.
- Lorigo, L., Haridasan, M., Brynjarsdóttir, H., Xia, L., Joachims, T., and Gay, G. 2008. “Eye Tracking and Online Search: Lessons Learned and Challenges Ahead.” Journal of the American Society for Information Science and Technology 59(7): 1041–1052.
- Park, J., Shimojo, E., and Shimojo, S. 2010. “Roles of familiarity and novelty in visual preference judgments are segregated across object categories.” PNAS 107(33): 14552–14555.
- Shen, J. and Reingold, E.M. 2001. “Visual search asymmetry: The influence of stimulus familiarity and low-level features.” Perception and Psychophysics 63(3): 464–475.
- Treisman, A. and Gormican, S. 1988. “Feature Analysis in Early Vision: Evidence From Search Asymmetries.” Psychological Review 95(1): 15–48.
- Wolfe, J.M. 2001. “Asymmetries in Visual Search: An Introduction.” Perception and Psychophysics 63(3): 381–389.