“The power of social media is that it gives you depth, vulnerability and multiple perspectives,” Desmond Patton, PhD’12, has said. (Photography by John Pinderhughes)

Social work meets social media

How Desmond Patton, PhD’12, is harnessing technology to help young people.

In April 2014 a Chicago teenager named Gakirah Barnes took to Twitter—not, as she sometimes did, to post photos of herself posing with guns or to threaten rival gangs—but to grieve. Her friend Raason “Lil B” Shaw had been fatally shot by a police officer. The pain, Barnes wrote, was “unbearable.”

Eight days later, Barnes, 17, was dead too, shot nine times in the chest, neck, and jaw. She was memorialized in the media as “Chicago’s Gun-Toting Gang Girl” and “Lil Snoop,” a reference to the cold-blooded killer on The Wire.

Social worker Desmond Upton Patton, PhD’12, saw the coverage, which led him to Barnes’s Twitter account. Reading her tweets, he recognized the hardened figure described in news reports but also saw a young woman, apparently sleepless at 2:41 a.m. on a Thursday, sharing her sorrow with the world. What would have happened, Patton wondered, if someone in a position to help had seen the suffering in her tweets?

It wasn’t the first time Patton had thought about how social media was shaping young lives. As a graduate student at the School of Social Service Administration, he wrote his dissertation about how high schoolers on Chicago’s West Side navigated violence in their community; Twitter, they told Patton, was an important platform, because it helped them spot and avoid brewing conflicts.

Patton is still grappling with the influence and power of social media. Now the founding director of Columbia University’s SAFElab, he’s developing software that analyzes patterns of online behavior among at-risk youth. He hopes the tool will help social workers and outreach organizations intervene in positive ways.

One of SAFElab’s most significant findings, published in npj Digital Medicine last year, is that aggressive social media posts often follow posts about grief. Young people turn to social media to express their reactions to personal trauma and loss, and a broad network of people then engages with those posts, in ways both well-intentioned and hostile. That’s precisely what happened to Barnes, whose grief-filled tweets prompted taunts from a rival gang member.

“That is a very common practice, because being tough on- and offline is an important way in which you stay safe,” Patton explains. “If we don’t like each other, one of the ways in which we show how bad we are is, ‘I’m going to follow you and every time you say something, I’m going to say something back to you.’” When a rival responds to an expression of grief—even just by replying with an emoji—the interaction can spiral into online and sometimes offline aggression.
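
In practice, that finding boils down to a temporal pattern in labeled posts. A minimal sketch of how such a pattern might be counted, using invented timestamps and labels in place of real classifier output:

```python
# Illustrative sketch only: find "aggression" posts that follow a "grief"
# post from the same account within a fixed window. The timestamps and
# labels below are invented stand-ins for classifier output.
from datetime import datetime, timedelta

timeline = [  # (time, label) for one hypothetical account
    (datetime(2014, 4, 3, 2, 41), "grief"),
    (datetime(2014, 4, 3, 14, 5), "other"),
    (datetime(2014, 4, 4, 20, 30), "aggression"),
    (datetime(2014, 4, 9, 11, 0), "grief"),
]

def grief_then_aggression(posts, window=timedelta(days=2)):
    """Return (grief_time, aggression_time) pairs where an aggression
    post appears within `window` after a grief post."""
    posts = sorted(posts)
    pairs = []
    for i, (grief_time, label) in enumerate(posts):
        if label != "grief":
            continue
        for later_time, later_label in posts[i + 1:]:
            if later_time - grief_time > window:
                break
            if later_label == "aggression":
                pairs.append((grief_time, later_time))
    return pairs

print(grief_then_aggression(timeline))
```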

The software is in its early stages, and part of the work ahead for Patton and his collaborators is determining how it can best be used. When the technology is ready, they plan to partner with community-based organizations and figure out when, where, and how to put it into action. But he’s certain of one thing: the tool is neither able nor intended to predict crime. “This isn’t Minority Report,” he says.

For the time being, Patton is focused on making the tool fairer and more accurate. Social media language is ever changing and extremely local. Law enforcement can monitor social media for threats, but that data is useless without the ability to parse meaning at the local level. It’s taken Patton and 25 fellows, research assistants, and collaborators several years to reach 72 percent accuracy with their algorithm: 72 percent of the time, it correctly labels content related to grief, substance abuse, and aggression.
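
Accuracy, in that sense, is simply the share of held-out posts whose predicted label matches a human annotation. A minimal sketch of that kind of measurement, using a generic bag-of-words classifier and invented example posts rather than SAFElab’s actual data or model:

```python
# Illustrative sketch only: a small multi-class text classifier scored by
# accuracy, the metric cited above. Every post and label below is invented;
# this is not SAFElab's data or model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

posts = [
    ("miss you every day bro, can't believe you're gone", "grief"),
    ("rip lil b, this pain is unbearable", "grief"),
    ("pull up on the block and see what happens", "aggression"),
    ("keep talking, you know what's coming", "aggression"),
    ("faded all night again", "substance_use"),
    ("rolled one to take the edge off", "substance_use"),
    ("good game tonight, proud of the squad", "other"),
    ("studying for finals, wish me luck", "other"),
]
texts = [text for text, _ in posts]
labels = [label for _, label in posts]

# Hold out half the posts so accuracy is measured on posts the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

# Word and word-pair features feeding a logistic regression: a common
# baseline for classifying short social media posts.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# "72 percent accuracy" corresponds to this kind of score: the fraction of
# held-out posts whose predicted label matches the human annotation.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

On a toy dataset the resulting number means little; the point is the shape of the pipeline: annotated posts in, a single accuracy figure out.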

Now Patton has “an army” of community members who help him interpret the patterns he sees. He relies heavily on their help in decoding linguistic quirks from place to place. In New York City, for instance, the purple grinning devil emoji has a sexual connotation, but in Chicago that same emoji, when paired with an implicit threat, might signal an imminent violent act.

Still, the complexity of online language hasn’t stopped police and attorneys from using social media posts in legal proceedings. All over the country, posts from black and Latino young people have been used as proof of gang involvement. But too often, critics of this practice argue, courts ignore a crucial fact: teenagers posture. Without context, it’s impossible to separate criminality from swagger. That’s why Patton is taking his time developing the tool and is wary of similar software already in use by police departments.

There’s good reason to be cautious. Patton and his colleagues have already found biases in their own system. “Our algorithm kept identifying stop words—like ‘duh,’ ‘uh,’ and ‘uhn’—as being related to aggression,” Patton says. “When we dug a little deeper, we saw that some of those patterns of speech are also intrinsically connected to African American Vernacular English.” The team fixed the problem, but the discovery shook them.
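
An audit like the one Patton describes can start with something as simple as reading off which tokens a linear model weights most heavily toward a given label. A minimal sketch, again with invented posts and a generic classifier rather than SAFElab’s actual pipeline:

```python
# Illustrative sketch only: inspecting which tokens push a linear classifier
# toward the "aggression" label, the kind of audit that can surface filler
# words like "duh" being treated as signals of aggression.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented posts; note the filler tokens concentrated in the aggression examples.
texts = [
    "duh you already know what it is uhn",
    "keep talking duh see what happens",
    "miss you every day bro",
    "rip lil b this pain is unbearable",
    "faded all night again",
    "rolled one to take the edge off",
]
labels = ["aggression", "aggression", "grief", "grief",
          "substance_use", "substance_use"]

vectorizer = TfidfVectorizer(lowercase=True)
X = vectorizer.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# One row of coefficients per class; a larger weight means the token pushes
# a post toward that class.
vocab = np.array(vectorizer.get_feature_names_out())
aggression_row = list(clf.classes_).index("aggression")
weights = clf.coef_[aggression_row]

filler_tokens = {"duh", "uh", "uhn"}  # the tokens Patton mentions
top = np.argsort(weights)[::-1][:8]
for token, weight in zip(vocab[top], weights[top]):
    flag = "  <-- filler token; check for dialect bias" if token in filler_tokens else ""
    print(f"{token:15s} {weight:+.3f}{flag}")
```

A heavy weight on a filler token isn’t proof of bias by itself, but it’s exactly the kind of pattern that prompts a closer look at how the training data were annotated.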

Patton wants a tool that helps rather than harms, one that allows social workers, the courts, and the public to better understand the lives of young people who get written off as dangerous. For every tweeted threat, there is another post—about friendship, or loss, or fear—that tells a different story. “Most people are using these platforms for support,” Patton says. “They’re looking for help.”