"Open is a means to an end—and that end is trust."
So writes Steve Song, a social entrepreneur, Shuttleworth alum, and blogger who writes extensively about openness and access to affordable communication infrastructure in Africa. Steve's posts exploring the paradoxes, moral philosophy, and future of open are a fascinating read. I recently caught up with him to learn more.
This is an edited transcript of our conversation.
Is trust the new open?
You write that "openness is a means to an end, not the endgame. Trust is the endgame." It's beautifully articulated, but some people might consider it a bit naive or Pollyannaish: "Oh, let's all trust each other." What would you say to that?
I remember the moment I had an epiphany about this—I was reading Danah Boyd's It's Complicated, a fantastic book about how teens use social media and how it's transforming the way they think about privacy. She basically argues that technology has not fundamentally changed the relationship between kids and parents. Kids still want to be trusted and respected by their parents, and likewise parents still want their kids to trust and respect them.
The insight for me was that problems around social media between parents and teens mostly relate to issues of trust and respect.
Suddenly I just thought, "Oh yeah, that's it." I have three young boys, and they're constantly coming across these bizarre YouTube videos and stuff online. At first I thought, "How can I be a 'catcher in the rye' and protect them from the Internet?" But then you think: well, what am I going to do—monitor all their accounts? Put up a bunch of nanny filters? Go through all the sites they browse?
Or on the other hand I could say something like: "Well, let's make all our data open. All the sites visited on our home network will be publicly posted, and we will all just be radically transparent with each other." But that doesn't seem like a particularly satisfying answer, either.
The far more successful outcome would be if I were simply able to trust my kids to make good decisions. That's a much harder thing to do, because it involves a big leap of faith. But it's also far more efficient; it saves me a huge amount of time if I can simply trust, in this case, my sons. Of course, "efficiency" is not the primary motivation for trust between a parent and child, but here it's a happy by-product.
I believe the same rules apply personally as professionally. It made me wonder whether "open" is simply the wrong framing for some aspects of the open movement.
I want to have the kind of trust in you that creates more efficiency in the ways we deal with each other. Of course, I may also want to validate that trust from time to time. But the point is that validating it every day, on an ongoing basis, is not only inefficient but also potentially destructive. I don't want to feel like I need to check up on you.
With my kids, it's a constant work in progress; trust is earned and lost in the small things. But I do think that, as a general principle, you want trust, and reciprocity in trust is something that is not only a good idea, but actually works fairly well. With my boys it reinforces the idea that they want their thoughts to be respected, and to engage in that conversation in a way that will move me, too.
Higher trust environments, whether in families or corporations or economies, tend to be both more effective and happier.
How does that play out in terms of organizational culture? Are you suggesting that forced or top-down transparency can sometimes backfire, or unintentionally undermine trust?
Yes. For example, [Mozilla's Executive Director] Mark Surman and I used to work together at the International Development Research Centre. I was running a project called "Connectivity Africa," and I became the subject of an Access to Information request.
It was a disgruntled consultant who was unhappy about a particular project and decided to make a big deal of it by requesting correspondence, files, everything. I can remember the intense level of paranoia it instilled in me as a bureaucrat. "Oh my God—did I say something wrong?" It's like when you go through customs coming back from a trip, and even if you're not carrying over your limit...
You feel guilty anyway.
Exactly! [laughs] "I must have done something." It had a really chilling effect on me. It was minor and nothing came of it, but for months afterward I remember thinking, "Should I be self-censoring in the emails I write?" Because a lot of what you say can be taken out of context.
You obviously still believe in Access to Information policies; you're just saying there may be an opportunity to consider trust as a way of making them more effective in organizational culture and practice.
Yes. That's where I part company with some open data or open government initiatives that try to apply openness in a top-down way. A key insight for me on this point came from an interview with the Nobel Prize-winning economist Vernon Smith about Adam Smith's Theory of Moral Sentiments.
Vernon ran a series of experiments with people in what he calls a "Trust Game." The way it works is: the first player is given the opportunity to pass some of what they have on to the next player. It turns out that when player one makes that choice voluntarily, it inspires similar behavior in the other player. People are more likely to trust and give back, knowing that they have received a gift that someone else gave them voluntarily.
But when they run the same experiment again and force the first person to share what they have, in exactly the same amount as before, the other player's behavior totally changes. They're much less likely to give. The second person knows it wasn't a voluntary act, so they feel no need to reciprocate. It's the same basic transaction, but no trust has been established, so there's no accompanying sense of reciprocity.
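To make the structure of that game concrete, here is a minimal simulation sketch of the dynamic Smith describes. The transfer size and reciprocation probabilities are illustrative assumptions, not Smith's actual experimental parameters; the only point is that an identical transfer earns different reciprocity depending on whether it was voluntary.

```python
# Minimal sketch of the "Trust Game" dynamic described above.
# All numbers (transfer size, reciprocation probabilities) are
# illustrative assumptions, not Vernon Smith's experimental parameters.
import random

# Assumed likelihoods that player two reciprocates:
P_RECIPROCATE_VOLUNTARY = 0.7  # after a voluntary gift
P_RECIPROCATE_FORCED = 0.3     # after a forced transfer

def play_round(voluntary: bool, transfer: int = 5) -> int:
    """Player one passes `transfer` to player two; return what player two gives back."""
    p = P_RECIPROCATE_VOLUNTARY if voluntary else P_RECIPROCATE_FORCED
    # Player two returns half the transfer with probability p, else nothing.
    return transfer // 2 if random.random() < p else 0

def average_return(voluntary: bool, rounds: int = 10_000) -> float:
    """Average amount player two gives back over many rounds."""
    return sum(play_round(voluntary) for _ in range(rounds)) / rounds

if __name__ == "__main__":
    random.seed(42)
    print("voluntary gift, average returned:", average_return(True))
    print("forced transfer, average returned:", average_return(False))
```

Under these assumed probabilities, the same five-unit transfer is returned far more often when it was given freely, which is exactly the asymmetry the experiment demonstrates.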
What matters is the intentionality and social contract between the people involved?
Yes, exactly. The point is to design systems that increase trust.
That's why Wikipedia is such a clever example: the fact that anyone can completely erase and rewrite a Wikipedia page is a deliberate act of designing for trust. The very openness of it says: "We trust you." And by and large, it works.
It means we need to design for trust if we want to actually encourage people to engage in the acts of generosity that build reciprocity.
How might that play out in open government or open data initiatives?
Well, one of the challenges for open government initiatives is their preoccupation with the data itself, which brings its own host of issues. There's a beautiful article in the Financial Times by Tim Harford that has a go at big data and some of the assumptions about what we might be able to learn from it.
We go through these cycles where we fall in love with technology and a belief in what it can do. We've been doing it since Charlie Chaplin and Modern Times: these Taylorist management styles, a belief that the organization is a machine that simply needs to be tuned. We seem to have to learn that lesson over and over again.
So much of what's valuable isn't measured, and we tend to focus only on managing what's easy to measure.
Partly I think the rise of Google has contributed to this. Google's mantra is "organizing the world's information." But what they don't explicitly acknowledge is that knowledge mostly exists in people's heads, and in practices that are embodied in cultural behavior.
And therefore are not really accessible to them?
Exactly. It's to some degree culturally invisible to them. There's a kind of "show me the data" imperative at Google, but there's plenty of evidence that knowledge is transmitted in ways that are informal and social, and that aren't captured in org charts or documents or reporting requirements.
I think that's an interesting space, and if I were looking at organizational work practice design, that's what I would experiment with: those little design changes that inspire people to make that "trust deposit" that then hopefully inspires reciprocity.
What's a simple example for people like me, who might want to start experimenting with some of this stuff in our own work?
One example is how people write blog posts. People tend to write about how awesome everything is. It's transparency in the guise of: "I'm awesome, we're awesome, thanks for watching the awesomeness." Instead, we should expressly encourage people to write about things they may not know the answer to: writing about not knowing, about needing help with a problem. When you do that, you put big deposits in the trust karma bank. Those are the kinds of things that build reciprocity.
Originally published at workopen.org as Designing for Trust. Republished here under a Creative Commons license.