When people talk about Wikileaks and transparency, I think what they mean to talk about is accountability.

The US government and other organizations did not choose to release this information.  They did not choose to say in public the same thing they say in private, or to let the public look inside and verify it.  What Wikileaks is doing is holding the various actors involved in its leaks accountable for what they say in private.  It disrupts actions done in secret by outing them, as one essay by zunguzungu, much re-tweeted and linked to online in the past week, explains.

Are the Wikileaks cables damaging?  I believe in a free and open society for the United States, and I believe a dump like this would be far more devastating to a closed government than it is to an open one.

Such leaks do not necessarily change the power structure that already exists: the United States remains the global hegemon, China retains an authoritarian government that seeks “harmony” instead of the “chaos” of democracy, and geopolitics still dominates, even when countries are caught in a lie.

If anything, the public is far more informed.  Maybe the actors involved already knew the contents of the leaks.  But we as a whole now have no excuse for not knowing (beyond being banned from reading them by our $employers).

Wikileaks would have far more devastating effects, for example, on corporations.  Evidence of fraud or murder or other crimes would bring legal repercussions in most countries, and other companies would quickly fill the gap and pillage the offending company’s brand and identity.

People have also been asking whether we should trust Wikileaks more than the government.  That is the wrong way to look at it.  No one should always be trusted; what we should seek is an approximation of the truth, based on corroboration and reputation.  We can expect any group to represent an issue in the light most favorable to itself.  Knowing this, why don’t we set up systems that let multiple sides share their information, present their case, and calculate the best approximation in the middle?

For example, if the US government says one thing, a respected journalist reports a second, Wikileaks shows the government internally said a third, and the target country claims a fourth, isn’t the best course to know all of these accounts and work out where the truth is likely to lie?
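As a toy illustration of “calculating the best approximation in the middle” (all source names, numbers, and reputation weights below are invented for the sketch), the simplest version is a reputation-weighted average of each party’s claim:

```python
# Hypothetical sketch: each source reports a claim (reduced here to a single
# number, say a casualty count) along with a reputation weight we assign it.
# The estimate is the reputation-weighted average of all the claims.

def corroborate(claims):
    """claims: dict mapping source -> (claimed value, reputation weight)."""
    total_weight = sum(w for _, w in claims.values())
    return sum(v * w for v, w in claims.values()) / total_weight

claims = {
    "government":   (100, 0.5),  # official public statement
    "journalist":   (150, 0.8),  # independent reporting
    "leaked cable": (160, 0.7),  # what the government said internally
    "target state": (300, 0.3),  # the other side's figure
}

print(round(corroborate(claims), 1))
```

In practice the weights would themselves come from track records, and real claims are rarely a single number; the point is only that disagreement between sources is an input to the estimate, not a dead end.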

What if, for elections, each vote went to multiple entities?  You cast your vote, and it is recorded by the government, by a government and voting-rights watchdog, and by the press.  If anyone’s numbers are off, we know someone got it wrong.  The different entities have different motivations for presenting their side of the truth; ideally, it is a balance of power.
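A minimal sketch of that cross-check (the entity names and vote counts are invented, and “flag whoever disagrees with the majority tally” is just one possible rule):

```python
# Hypothetical sketch: the same ballots are tallied independently by several
# entities; any entity whose totals differ from the majority tally is flagged.
from collections import Counter

def cross_check(tallies):
    """tallies: dict mapping entity -> Counter of candidate -> votes.
    Returns the entities whose tally disagrees with the most common one."""
    as_key = lambda t: tuple(sorted(t.items()))
    consensus, _ = Counter(as_key(t) for t in tallies.values()).most_common(1)[0]
    return [name for name, t in tallies.items() if as_key(t) != consensus]

tallies = {
    "government": Counter({"A": 5000, "B": 4800}),
    "watchdog":   Counter({"A": 5000, "B": 4800}),
    "press":      Counter({"A": 5200, "B": 4600}),  # disagrees: audit this one
}

print(cross_check(tallies))
```

The disagreement itself doesn’t tell us who is wrong, only where to audit; that investigation is where the differing motivations of the entities do the real work.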

Granted, any of these entities can be corrupted.  The journalist tribe has been captured by government interests (or anti-government interests, in some cases), when its job should be to fact-check everyone else, as well as itself.  People often say the Supreme Court has become politicized, influenced by politics and other players rather than adhering strictly to the original documents or the latest precedents.

But the more variety we have among the entities with access to information, the better we can approximate the truth and figure out why the outliers got their numbers wrong.  Granted, these entities need to be authenticated, held to standards, and so on, since a direct democratic system would leave us in much the same state we have now (where money dominates).

We can’t design perfect single-authority systems, like one authority verifying all votes or clearing all information for release.  There are always flaws, and centrality draws those who seek to control it.

We also know from recent history that elections do not equal democracy, and we have learned more about the mechanics of corruption: how even the most ethical organizations can be corrupted into collusion, bribery, or ideology.  We have the technology (encryption, cloud, bandwidth, software pliability) to build multi-agent verification systems.

Why doesn’t it happen?  Partly because we do not want it, and partly because people distrust openness and flee to privacy in the name of security.  But trying to remain invisible is not a viable strategy in a world where it’s becoming easier and easier to unearth your personal data, your shopping data, large intelligence caches, internal corporate memos, and more.  Rather than attempting security through obfuscation, we should build actual security measures instead of security theater.  We should turn it all on its head, and trust in open accountability systems to keep each other honest.