Risky Business
Why risk-taking is good, how it's become bad, and what we need to make it good again.
There are a few qualities that form the basis of success for any life form. One of these is the ability to discern between what is good for you (what improves your chances of success stepping from one moment into the next) and what is bad for you (what would make that transition difficult, impeding your ability to make a good choice in the next instance or, in the worst case, preventing you from making one at all).
This seems like a pretty basic thing, right? A plant finds its way toward light and sources of nutrients and water, avoiding dark corners and barren soil; a child learns to love and find joy in life by experiencing love and joy in the connection with their parents and siblings; a young adult realizes, yet again, that drinking too much alcohol doesn’t make for the most pleasant morning the day after.
This could be summed up as the ability to learn, to use experience to make the best decisions possible given the circumstances – the benefit of hindsight, to borrow a well-known phrase. Mea culpa.
But there is also a tendency to revisit things that don’t go so well for us, an urge to probe the wound.
This seems to be more the case with humans. I can’t think of a plant deciding to seek out darkness.
Perhaps this is a question of granularity, where the probing of that shadow happens on a level too subtle for us humans to easily perceive. Or perhaps it has to do with self-consciousness, that distinction, generally held though I’m not convinced it’s a fact, between humans and other living beings: the ability, exercised through free choice or agency (including the choice to make dumb decisions), to know we are unique.
For the plant, or most other living beings I think, there’s a quick rebound on the dumb decision, an evolutionary winnowing that happens, with the plant dying, or quickly and viscerally learning from mistakes, incorporating them in the most real sense of that term.
Humans, on the other hand, and this is where I think the more useful distinction is, have been successful as a species largely because we’ve developed capacities that provide a greater degree of elasticity in terms of consequence rebound.
Taking care of each other has meant that if someone breaks a bone by accident, possibly by taking a risk to ensure a successful hunt and therefore provide food, they will be looked after and helped to recover. There is a widely shared quote, supposedly from the famous anthropologist Margaret Mead, which posits that the first sign of civilization is the presence of a healed femur bone.
While this quote may not, in fact, be attributable to Mead, it nicely illustrates the strength we can gain from being able to rely on others, to be assured of a circle of support larger than what any one of us, alone, can provide.
Thousands of years ago in a hunter-gatherer society a broken leg would likely have been fatal. Compelled to move from place to place in search of food, those who could not move, or who imperiled the ability of their larger social group to gain sometimes scarce resources, would be left to fend for themselves, and, in all likelihood, die. The presence of a healed femur bone, required for walking, indicates both the caring for that person and the stability of food and shelter required to support that.
Other examples abound. Just about any which way you look you can see some tool or some technology or some contrivance of one sort or another that either distances or protects one from possible consequences of their actions, or that is capable of ameliorating those consequences once suffered.
Modern weapons are largely focused on providing a distance from consequence or potential rebound. Seatbelts and airbags in cars provide protection, and then there’s medical treatment if those protections aren’t sufficient. Another example, the insurance industry, is entirely devoted to providing this elasticity.
Enabling people to take risks is a big part of why the human species has been so successful. (The close connection between safety nets, or caring for each other, and our success as a species should give pause for thought to those who advocate for a purely Galtian world, in the fashion of Rand’s hero.)
But what if that safety net or buffer between action and consequence enables irresponsible behaviour?
By definition, irresponsible behaviour is a lack of consideration for the response that an action might provoke. And what if, to take it a step further, that sort of irresponsible, risk-taking behaviour is actually encouraged?
Well, then you find yourself with what is known in economics as a moral hazard: a scenario where an economic actor, such as a corporation, has an incentive to increase its exposure to risk because it knows it will not fully bear the costs associated with negative outcomes.
The 2008 financial crisis is one recent example of a moral hazard. Banks took on a lot of risk with subprime mortgages, which were packaged together, obfuscating the underlying instability of the assets.
Oil wells are another example, with companies effectively able to fully discount the cost of clean-up once a well has been tapped out, as governments have, by and large, been unwilling to require them to assume responsibility for their actions.
In both of the above examples the public steps in and covers much of the cost, whether financially or through the other negative impacts this risky behaviour results in, such as losing one’s home or suffering the health effects of pollution.
If you’ve read any of my other posts on The Whale you may be recognizing this as a form of negative externality, also called the “social cost”, with a portion of the cost of an activity, from which a private entity profits, being placed onto the public. In effect this is a public subsidy of private profit.
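The arithmetic of that subsidy can be sketched in a few lines. The figures below are purely illustrative (none come from the examples above); the point is only the structure: an activity can look profitable to the firm while destroying value overall, with the gap borne by the public.

```python
def social_cost(private_cost: float, external_cost: float) -> float:
    """Social cost = what the producer pays + what gets pushed onto the public."""
    return private_cost + external_cost

# Hypothetical figures for one unit of output.
price = 100.0          # revenue to the producer
private_cost = 70.0    # costs the producer actually bears
external_cost = 50.0   # clean-up, health impacts, etc., borne by the public

private_profit = price - private_cost                             # 30.0
true_surplus = price - social_cost(private_cost, external_cost)   # -20.0

# The activity looks profitable to the firm yet is a net loss to society;
# the difference is, in effect, the public subsidy of private profit.
public_subsidy = private_profit - true_surplus                    # 50.0
```

Note that the subsidy works out to exactly the externalized cost, which is the sense in which the public is underwriting the private gain.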
A far larger example of this moral hazard is climate change.
Our entire social and economic system is built on the exploitation of nature, whether that’s the raw resources we use to fashion products or the vast open spaces that provide a dumping ground for emissions. (Of course these spaces are only seen as ‘open’ to us. “Open”, in this sense, denotes a lack of use, an absence of economically viable occupation. The fact that these spaces are home to countless other species does not factor into this attribute.)
All of this leads to a third term: market failure. Markets, to be efficient, which is to say to provide the greatest benefit for the least cost, need to price items accurately. The rationale for markets is that they provide a way of identifying what is genuinely valuable; when they instead enable and reward irresponsible risk-taking, we need to question whether the market is functioning properly.
As with so many things, this irresponsible and damaging behaviour is the result of poor information: decisions made without incorporating consideration of whether the outcomes will be, on balance, positive or negative.
When we look at how decisions are made regarding activities that result in negative outcomes, and here the fossil fuel industry is a prime example, we can see that those making the decisions tend to have a greater degree of insulation from their potential impacts.
Wealth, privilege, hubris and groupthink act as a type of shield, giving the wherewithal to escape rebounding consequences, at least for a time. If you’re wealthy enough you can buy an estate in New Zealand or hire private security, or, on a less extreme level, you can choose not to live close to polluting industry.
Decisions made with the best information possible, taking into account the many permutations of what might happen and how it might affect others, are, I think, how an ideal society should function. This dynamic describes both an ideal market and an ideal democracy, and that’s what I find fascinating: a free and open marketplace requires the best information possible, as does a free and open society.
I could easily get distracted here by including some reference to tech bros and libertarians and their lazy claims to freedom of expression, but they conveniently, or stupidly perhaps, leave out any consideration of responsibility.
Responsibility, being aware of what might happen and gauging actions accordingly, means that we can continue to expand the boundaries of our understanding and capability, it means that we can continue to support and reward risk, underlining the moral part of moral hazard. But this requires good information, which in turn demands processes of open, informed, and accountable decision making.
Interesting Links
Speaking of making good decisions: Ladder of inference
And of making bad ones: Doug Ford at 5 years: Selling out Ontario’s future to please the well-connected
And an example of this head-in-the-sand decision making: Alberta Is on Fire, but Climate Change Is an Election Taboo
And, finally, some salve: What We Look for When We Are Looking: John Steinbeck on Wonder and the Relational Nature of the Universe