Stanford Social Innovation Review

Human Rights

Can Charity Make Big Brother Benevolent?

What would Hari Seldon think of big data?

In Isaac Asimov’s Foundation trilogy, Hari Seldon invents the science of “psychohistory.” He uses it to predict the actions of billions of people and thereby forecast the future history of civilization. But he keeps his ultimate predictions, and his responses to them, secret, which sets up the conflicts that play out in the series.

Seldon has inspired untold numbers of data-heads (including the prominent Nobel Prize-winning economist Paul Krugman), and his spirit—though fictional—was a strong presence among the 260 programs recognized at the recent Computerworld Honors 2013 gala. Laureates included UN Global Pulse, which seeks to use “big data” and “open data” for the common good; Grameen Foundation, for its use of text and voice to deliver vital prenatal and early childhood health information to women in Ghana; and the Unique Identification Authority of India, which is using biometric data to identify millions of low-income people so that they can access banking and other services, and receive direct electronic transfers of benefits. (Disclosure: Our United Way/California 2-1-1 network partnership with California Emerging Technology Fund was also a laureate.)

In contrast to Seldon’s reticence, the “World Wide Computer,” to use Nicholas Carr’s term, uses our interactions with it—searches, social media, mobile voice and data services, and more—to try to sell us what it predicts we will need. Companies that want to sell us something use our data whenever we search the web or log in to Facebook or Twitter, regardless of our wishes. Twitter clients seem to know just what age- and profession-appropriate ads to show us, for example. Sometimes this is convenient, but often it is creepy.

Of course, search engines and social media platforms do more than push products. They offer us the ability to connect, to find community. In the process, though, they take the data we provide about our interests and the content we create to share with friends and communities, and use that to make money for themselves and their shareholders. Whether we call it “digital sharecropping” (Nicholas Carr) or “digital serfdom” (Jaron Lanier), in many ways we serve the machine’s needs at least as much as our own.

We seem increasingly resigned to this kind of intrusion, in the marketplace at least (our feelings about NSA data gathering notwithstanding). But if we’re stuck with this reality, is there some way to put those abilities to good use?

Why don’t philanthropy and nonprofits similarly use their access to personal information to better target funding and services? Is it because they respect and protect privacy, or is the reason less noble—that we simply don’t know how and can’t afford to do it? Or worse, are we just being squeamish—do we shrink from the hard question of whether we have an obligation to use personal data targeting to prevent harm?

In human services, everyone agrees that it would be great if, whenever a low-income family or individual interacted with a provider (a school, clinic, or social services agency), they could connect with every relevant resource; if a mother comes in looking for job training, she could also enroll in health coverage, SNAP food assistance, and subsidized child care. This vision of a “no wrong door” system is still an unattained ideal. Many funders, nonprofits, and advocates struggle to create such a holistic, integrated system, but bureaucracy, limited funding, legacy human and IT systems, and even privacy policies frustrate them. The “no wrong door” ideal requires that a consumer present a need, so it is a “pull” model in the typology of push vs. pull strategies in marketing; when a consumer comes to “pull” a service, the system then seeks to meet more than just the presented need by connecting them with as many relevant benefits as possible.

The 2-1-1 information and referral service aims to advance this “no wrong door” goal, and to date, it also is a “pull” model—consumers call or search 2-1-1 sites seeking services. These 2-1-1 programs use rich local resource data, trained information and referral specialists, and the three-digit dialing code authorized by the Federal Communications Commission to connect more than 16 million people annually to resources (such as food, housing, health and mental health care, and education) by phone; many more also use 2-1-1 databases to find resources online. With smart search and “opt-in” permission from users, we could use that information to send follow-up messages that connect people to other resources or support beneficial lifestyle changes. The scale of data collected also could inform government and philanthropy about trends and gaps or disparities in access to resources. (This is something the 2-1-1 field is working toward.)

But can we get more aggressive? Might it be more effective to use big data to push resources to people who haven’t sought them out? Opt-in is important ethically, but it also takes work and is a bit of a barrier. If we could get results on a greater scale by pushing messages without an opt-in (the way search engines and social media applications do), would we? Take San Bernardino County in California, where approximately 140,000 people are eligible for SNAP food assistance but do not receive it. A supermarket chain or a grocers’ association probably could purchase or collect data on these families to reach out to them, and we wouldn’t criticize them—in fact, we might even honor them for it. Shouldn’t a hunger charity be able and willing to do the same?

Big data has a lot of potential for good, and there is great excitement in the philanthropic world about its macro, system-level potential. But it clearly poses a significant threat. The annoyance of targeted ads is likely only the visible tip of the iceberg; using big data to exclude people or deny them services may be the huge, unseen base. In “Big Data Is Our Generation’s Civil Rights Issue and We Don’t Know It,” Alastair Croll provides a great summary of the promise and perils of big data:

We’re great at using taste to predict things about people. OkCupid’s 2010 blog post “The Real ‘Stuff White People Like’” showed just how easily we can use information to guess at race. It’s a real eye-opener. … They simply looked at the words one group used which others didn’t often use. The result was a list of “trigger” words for a particular race or gender.

Now run this backwards. If I know you like these things, or see you mention them in blog posts, on Facebook, or in tweets, then there’s a good chance I know your gender and your race, and maybe even your religion and your sexual orientation. And that I can personalize my marketing efforts towards you.

That makes it a civil rights issue.

If I collect information on the music you listen to, you might assume I will use that data in order to suggest new songs, or share it with your friends. But instead, I could use it to guess at your racial background. And then I could use that data to deny you a loan.
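The mechanism Croll describes is simple enough to sketch: tally how often each word appears in posts from each group, flag the words one group uses far more often than another (the “trigger” words), then score a new, unlabeled post by how many trigger words it contains. The toy Python below is purely illustrative; the group labels, vocabulary, and threshold are invented for this sketch and have nothing to do with OkCupid’s actual data or method.

    # Illustrative sketch only: toy, invented data; not OkCupid's dataset or analysis.
    from collections import Counter

    # Hypothetical labeled posts for two made-up groups
    posts = {
        "group_a": ["surfing weekend bonfire", "bonfire playlist surfing"],
        "group_b": ["gospel choir potluck", "potluck church gospel"],
    }

    def word_rates(texts):
        # How often each word appears, as a share of all words in the group
        counts = Counter(w for t in texts for w in t.split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    rates = {g: word_rates(ts) for g, ts in posts.items()}

    def trigger_words(target, other, ratio=3.0):
        # "Trigger" words: used much more by one group than the other (threshold is arbitrary)
        return {w for w, r in rates[target].items()
                if r > ratio * rates[other].get(w, 1e-6)}

    triggers = {
        "group_a": trigger_words("group_a", "group_b"),
        "group_b": trigger_words("group_b", "group_a"),
    }

    def guess_group(text):
        # Run it backwards: count trigger-word hits per group and pick the best match
        words = set(text.split())
        scores = {g: len(words & t) for g, t in triggers.items()}
        return max(scores, key=scores.get)

    print(guess_group("planning a bonfire after surfing"))  # -> "group_a" in this toy example

Even this crude version makes the point: once the trigger lists exist, the inference is nearly free to run at scale, whether the goal is to offer someone SNAP enrollment or to quietly deny them a loan.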

Croll suggests that the answer “is to somehow link what the data is with how it can be used”—and that this will be extremely hard.

UN Global Pulse director Robert Kirkpatrick wrote in response: “Big data is a human rights issue. We must never analyze personally identifiable information, never analyze confidential data, and never seek to re-identify data.”

In my gut, I agree with Kirkpatrick; businesses and governments shouldn’t use that data in negative ways. But they do. Every day, millions of us give away the kind of information that allows companies to target ads toward us or deny us services because of a risk profile.

In that light, though we have qualms, we must take seriously the potential power of using data to push benefits to low-income and vulnerable people, while at the same time working toward Croll’s idea of coupling data with conventions on acceptable use. No doubt using data to target individuals poses serious logistical and ethical challenges for nonprofits and philanthropy, but it also could save and improve millions of lives. Let’s not avoid those challenges, but instead see if we can make good use of the dark arts of information technology.


COMMENTS

  • BY Pete Manzo

    ON June 12, 2013 01:56 PM

    And specifically on the NSA scandal, this Economist piece best frames the key questions for me. Google of course can’t arrest anyone, but…

    We can also invert the question: should Google know less than it does?

    http://www.economist.com/blogs/democracyinamerica/2013/06/surveillance-0

  • BY Jan Masaoka

    ON June 12, 2013 08:07 PM

    Today on the radio was a story about Google standing up for privacy rights (to the govt re NSA). The only rational response was to turn off the radio and write a check to the Electronic Frontier Foundation.

  • BY Pete Manzo

    ON June 13, 2013 09:27 AM

    Jan,

    Absolutely, supporting EFF is critical. Also read the Electronic Privacy Information Center’s info about the pervasiveness of profiling by companies: http://epic.org/privacy/profiling/ (Private data collection is massive, but Google can’t arrest anyone, as yet.)

    Then, if you haven’t yet, read the Economist post I linked to above. Eliminating this kind of data collection may be impossible, but getting agreements about how it can be used, as Croll suggests in the piece linked in the post, and also folding in Lanier’s call for users to be paid for the use of their info, are worth pursuing.

    Talk to you soon,

    Pete

  • BY John Lumpkin

    ON July 20, 2013 04:19 PM

    At the Robert Wood Johnson Foundation, we’re pretty excited about the potential of big data in the realm of health and health care. We’ve been leveraging big data for a number of years through our County Health Rankings initiative, assessing and comparing the social determinants of health of every county in America. But now we’re exploring new possibilities as well. For example, this year we made a grant to the patient community site, PatientsLikeMe, and are working with them to develop health outcome measures from real world, real time patient-generated data that researchers can use to develop meaningful treatments to improve health and advance research. Through our Health Data Exploration Project — for which we are partnering with the California Institute for Telecommunications and Information Technology — we’re investigating the feasibility of using the vast data being gathered via personal health monitors to uncover invaluable insights to improve individual and population health. These are just a couple of the new efforts and we’re confident that there will be more. Again, there is no doubt that we need to be mindful of the risks, but as a Foundation that is dedicated to developing innovative solutions to the greatest challenges in health and health care, we think it’s a risk worth taking. Thank you, Peter, for spurring further conversation on the role that those of us working in the philanthropic sector can play to take advantage of this incredible resource.
