Monday, November 26, 2018

Media Manipulation and Giving Tuesday

Like many of you, I woke up this morning to an email inbox full of leftover Black Friday ads, a whole bunch of Cyber Monday ads, and the Xth day in a row of #GivingTuesday announcements.

Among those was the first clearly-designed-to-misinform #GivingTuesday astroturf email that I've received.

It came from the Center for Consumer Freedom (CCF) - a nonprofit front group run by a lobbyist for a variety of industries including restaurants, alcohol, and tobacco. The umbrella group for CCF - the Center for Organizational Research and Education (CORE) - is also home to HumaneWatch. According to the 2016 990 tax filing for CORE, HumaneWatch exists to "educate the public about the Humane Society of the United States (HSUS), its misleading fundraising practices, its dismal track record of supporting pet shelters and its support of a radical animal rights agenda."

[Image: excerpt from CORE's 2016 Form 990]

The email I received from CCF linked to a YouTube "ad." But all of it - the ConsumerFreedom website, the email I received, the work of these nonprofits - leads back to a commercial PR firm, Berman and Co., which has been accused of setting up such groups as part of its paid work for industry. None of this was disclosed in the email - and if you look at the CCF website to find out who funds it, you find this statement:
"The Center for Consumer Freedom is supported by restaurants, food companies and thousands of individual consumers. From farm to fork, from urban to rural, our friends and supporters include businesses, their employees, and their customers. The Center is a nonprofit 501(c)(3) organization. We file regular statements with the Internal Revenue Service, which are open to public inspection. Many of the companies and individuals who support the Center financially have indicated that they want anonymity as contributors. They are reasonably apprehensive about privacy and safety in light of the violence and other forms of aggression some activists have adopted as a “game plan” to impose their views, so we respect their wishes."
If you check the CCF's 990 form (search under CORE) you'll find that, on revenue of $4.5 million (sources undisclosed), the largest expense was $1.5 million paid to Berman and Co. for management fees. The next largest expense was $1.4 million spent on advertising and promotion. Together, that's $2.9 million - nearly two-thirds of revenue - going to a PR firm and to promotion.

There's no virtue in this circle - just paid lobbyists setting up nonprofit groups to counter the messages of other nonprofit groups. On the one hand, the nonprofit sector must be doing something right if the tobacco and alcohol industries are trying to shut it up. On the other hand, good luck to you - average donor - trying to figure out what's real and what's not. Even the watchdog groups are sniping at each other.

I've written before about misinformation, the current ecosystem of distrust, and civil society. And here it is. Be careful out there.

Saturday, November 17, 2018

Verify the data

Three tweets from yesterday:

[Embedded tweets not preserved]
Depending on a commercial company for our giving infrastructure is problematic in several ways. First, at any point the company can change its commitment, algorithms, or priorities (and this company has done so repeatedly), leaving everyone who was relying on it without recourse. Second, we have no way of knowing whether the company's algorithms are offering all the choices to all the people. How would you even know if your nonprofit or fundraising campaign wasn't being shown to those you were trying to reach? Third, Facebook owns this data and can tell us whatever it wants about it. Maybe $1 billion was given, maybe it was more, maybe it was less - how would we know?

There's an existing infrastructure for measuring giving in the U.S., and a number of research centers analyze and report on those trends every year. That infrastructure - from 990 tax forms to the Foundation Center, Guidestar, the Urban Institute, and independent research from the Giving Institute and the Lilly Family School of Philanthropy at Indiana University - was built for public accountability, to protect the democratic values of free association and expression, and for industry-wide performance improvement. This infrastructure is not perfect. But the data it uses and its analytic methods can be checked by others - they can be replicated and verified, following the basic tenets of sound science and evidence-informed policymaking.
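To make "replicated and verified" concrete: because the filings are public, anyone can recompute an aggregate giving figure and compare it to a claim. Here's a minimal sketch in Python, assuming a hypothetical local CSV extract of 990 data (the file name and column names are illustrative, not a real dataset):

```python
import csv

def total_giving(path: str, year: int) -> float:
    """Sum reported contributions across all filings for a given tax year.

    Assumes a hypothetical CSV with columns: ein, org_name,
    tax_year, total_contributions.
    """
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["tax_year"]) == year:
                total += float(row["total_contributions"])
    return total

claimed = 1_000_000_000  # e.g., a platform's announced figure
computed = total_giving("990_extract.csv", 2017)
print(f"Computed: ${computed:,.0f} vs. claimed: ${claimed:,}")
```

The arithmetic is trivial; the point is that both the inputs and the method are open to inspection - which is exactly what a platform's press release doesn't offer.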

There need to be new ways to understand what's happening on these proprietary platforms - especially if Facebook is moving $1 billion and GoFundMe $5 billion. Those are big numbers for our nonprofit sector. We need to be able to interrogate these data, not just reflexively believe what the companies announce.

Friday, November 16, 2018

Flipping assumptions about algorithms

I've had countless conversations with well-intended people from a number of social sectors and academic disciplines who are working on digital innovations that they firmly believe can be used to address shared social challenges. Some of these approaches - such as ways to use aggregated public data - are big investments in an unproven hypothesis, namely that making use of these data resources will improve public service delivery.

When I ask these folks for evidence to support their hypothesis, they look at me funny. I get it: the underlying hypothesis that better use of information will lead to better outcomes seems so straightforward that asking for evidence feels absurd. In fact, this assumption is so widespread that we're not only failing to question it, we're ignoring countervailing evidence.

And there is plenty of countervailing evidence: algorithmically driven policies and enterprise innovations are exacerbating social harms such as discrimination and inequity - from the ways social media platforms are being used to the application of predictive technologies in policing and education. Policy innovators, software developers, and data collectors need to assume that any automated tool applied to an already unjust system will exacerbate the injustices, not magically overcome these systemic problems.

We need to flip our assumptions about applying data and digital analysis to social problems. There's no excuse for continuing to act as if inserting software into a broken system will fix the system; it's more likely to break it even further.

Rather than assume algorithms will produce better outcomes and hope they don't accelerate discrimination, we should assume they will be discriminatory and inequitable UNLESS designed specifically to redress these issues. This means different software code, different data sets, and simultaneous attention to structures for redress, remediation, and revision. Then, and only then, should we implement and evaluate whether the algorithmic approach can help improve whatever service area it's designed for (housing costs, educational outcomes, environmental justice, transportation access, etc.).
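What might "designed specifically to redress these issues" look like in code? As a minimal sketch - with an entirely hypothetical model and made-up names, not any real system - here's a pre-deployment gate that compares outcome rates across demographic groups and refuses to ship if the gap exceeds a stated threshold:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    counts, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        counts[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / counts[g] for g in counts}

def passes_parity_gate(decisions, max_gap=0.05):
    """Block deployment if group approval rates differ by more than max_gap."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values()) <= max_gap

# Toy example: a model that approves group A at 80% but group B at only 50%.
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 5 + [("B", False)] * 5
assert not passes_parity_gate(decisions)  # gate correctly refuses deployment
```

Demographic parity is only one of several (contested) fairness measures; the design point is that the check runs before release and can block it, rather than waiting for harm reports afterward.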

In other words, every innovation for public (all?) services should be designed for the real world - one in which power dynamics, prejudices, and inequities are part of the system into which the algorithms will be introduced. This assumption should inform how the software itself is written (with measures in place to check for biases, and for the amplification of them, and to remediate both) as well as the structural guardrails surrounding the data and software. By this I mean implementing new organizational processes to monitor the discriminatory and harmful ways the software is working, and implementing systems for revision, remediation, and redress. If these social and organizational structures can't be built, then the technological innovation shouldn't be used - if it exacerbates inequity, it's not a social improvement.
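And as a sketch of what one such guardrail could look like after deployment (again, all names and thresholds here are hypothetical): a monitor that tracks group-level outcomes on live decisions and escalates to a human review-and-redress process when they diverge:

```python
import logging
from collections import defaultdict

logging.basicConfig(level=logging.INFO)

class DisparityMonitor:
    """Track live decisions per group; flag for human review when
    group outcome rates diverge beyond a threshold."""

    def __init__(self, max_gap=0.05, min_samples=100):
        self.max_gap = max_gap
        self.min_samples = min_samples
        self.counts = defaultdict(int)
        self.positives = defaultdict(int)

    def record(self, group, outcome):
        self.counts[group] += 1
        self.positives[group] += int(outcome)
        if sum(self.counts.values()) >= self.min_samples:
            self._check()

    def _check(self):
        rates = {g: self.positives[g] / self.counts[g] for g in self.counts}
        gap = max(rates.values()) - min(rates.values())
        if gap > self.max_gap:
            # In a real deployment this would open a ticket for the
            # organization's review-and-redress process, not just log.
            logging.warning("Disparity %.2f exceeds %.2f: %s",
                            gap, self.max_gap, rates)
```

The code is the easy part; the organizational commitment to act on what the monitor finds is the guardrail.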

Better design of our software for social problems means factoring in the existing systemic and structural biases and directly seeking to redress them, rather than assuming that an analytic toolset on its own will produce more just outcomes. There is no "clean room" for social innovation - it takes place in the inequitable, unfair, discriminatory world of real people. No algorithm, machine learning application, or policy innovation on its own will counter that system, and it's past time to stop pretending it will. It's time to stop being sorry for or surprised by the ways our digital, data-driven tools aren't improving social outcomes, and to start designing them in such a way that they stand a chance.

Wednesday, November 14, 2018

Shining lights on generosity

You know the old trope about the person searching the ground under a streetlamp for their lost keys, even though they dropped them down the block, simply because "that's where the light is"?
[Image: http://creepypasta.wikia.com/wiki/The_Man_Under_the_Street_Light]

This is a little like how I've been thinking about generosity. We associate generosity - especially in the U.S. - with charitable giving to nonprofits. Everything else - volunteering time, giving to politics, direct gifts to neighbors or friends, mutual aid, remittances, shopping your values, investing your values - gets classified as something else.

And, yes, the motivational and behavioral mix for these actions may differ. But we make a mistake when we center the one - charitable giving - and shift everything else to the edges, and then assume that hierarchy is rooted in human behavior. It's actually rooted in politics and industry.

In the U.S. we've built an infrastructure of organizations (nonprofits) that take up a lot of space in the generosity mix. We make them register with the government, which allows us to count them. And we require them to report certain actions, which then allows us to track giving to them. Those decisions were political - and have to do with values like accountability, association, and expression.

On top of those registries and reports we've built big systems and organizations to make sense of the information. Some of those institutions (the Foundation Center) were built as an industry response to possible regulation. Some (Guidestar) were built because, by the 1990s, a huge data set of nonprofit tax forms existed. These data sets served as the "lights" that helped us "see" specific behaviors. It wasn't that other behaviors weren't happening; there just weren't lights shining on them.

[Image: https://www.videoblocks.com/video/seamless-looping-animation-of-classic-wooden-house-with-lights-in-a-beautiful-wintry-landscape-at-night-4dty2hnoxijrh8vsk]

Shining a light on these behaviors was done to better understand this one type of generous act - it wasn't done with the intention of judging the others as lesser. But over time, all the light has focused on charitable giving to nonprofits, making it seem as if the other behaviors weren't happening or were less important, just because the light wasn't shining there.

The more the full mix of behaviors happens on digital platforms, the more lights get turned on. Where it is hard to track a gift of cash to a neighbor in need, crowdfunding platforms that facilitate such exchanges (and the credit card trails they leave) bring light to those actions. And because more and more acts take place on digital platforms - Facebook claims to have moved $1 billion in the last year - we can now see them better. The digital trails are like new lights shining on old behaviors.

Think of it like a house of generosity. In one room are donations to charitable nonprofits. In the USA, the lights have been burning bright in this room for decades. In another room are contributions to houses of worship. Down the hall is the room of money given to neighbors and friends in need. Another room is where shopping for some products and not others happens. Downstairs is investing in line with your values. There's a room for political funding and one for spending time rallying around a cause. Other rooms hold remittances, cooperative funds, and mutual aid pools. As each of these behaviors shifts to digital platforms - be it online portals, social media, texting, or even just credit card payments - it's like turning on the light in those rooms. We can "see" the behaviors better, not because they're new but because the digital trails they create are now visible - the light is shining in all those rooms.
[Image: https://www.123rf.com/photo_27996382_big-modern-house-with-bright-lights-on-in-windows-on-a-beautitul-summer-evening.html]

Digital trails shine lights on lots of different behaviors. We can see things we couldn't see before. It's going to be increasingly important that we have public access to data on what's going on in the whole house, not just certain rooms. Right now, the data on many of these behaviors is held, in closed fashion, by the platforms on which the transactions happen - crowdfunding platforms know what happens on them, Facebook tells us what happens there, and so on. We're dependent on the holder of the light to shine it into certain rooms. That isn't in the public's interest. Having the lights turned on is better than being in the dark, but having public access to the light switches is what really matters.