As the dog days of summer stretch on, I’ve been doing a lot of trail running to keep active. The east end of Toronto is home to a spectacular, unsanctioned, semi-secret network of mountain biking trails that line the river valley ravines. There are over 40 miles of trails maintained by the mountain biking community, who graciously tolerate runners invading their secret world so long as we follow good trail running practices.
While it’s not the wilderness (you’re never more than a couple miles from civilization), it can often feel like you’re completely and utterly lost in the forest. The complete isolation you experience on these trails comes courtesy of the fact that Toronto is effectively San Francisco’s topographic inverse. Instead of hills punctuating the view, able to see and be seen from anywhere, Toronto has an extensive network of deep, forested ravines that crisscross the city. They’re virtually invisible aside from the bridges that cross them, and when you’re down inside them, you completely forget you’re in the city.
Trail running feels like a completely different sport from road running. Once you start running on more challenging trails, you have to perpetually leap, dodge, and dart around branches, rocks, culverts, and all kinds of natural obstacles that threaten you at every turn. You have to dedicate 100% of your attention to not wiping out, 100% of the time. If you zone out even for a minute, you’re going to get hurt. Oddly enough, this kind of intense concentration typically leaves me a lot more mentally refreshed than zoning out for a long road run does. It comes at the cost of occasional skinned knees and twisted ankles, and potentially worse if you’re not careful.
Anyway, the reason I’m telling you all this is that I was running with somebody the other day and the concept of “skin in the game” came up, initially in reference to trail running. Compared with regular road running, trail running does indeed feel like it has more SitG: if you stop paying attention, you get removed from the trail - or even the running pool. But the conversation turned to Nassim Taleb, author of books such as The Black Swan, Antifragile, and most recently Skin in the Game: Hidden Asymmetries in Daily Life. I think some of you probably know him best from Twitter at this point; he’s an unrepentant asshole online (which suits his style and message perfectly), and there’s a general consensus among people in his Twitter sphere that he’s kind of gone off the deep end over the past couple of years, which is a shame. But recent antics notwithstanding, his writing continues to resonate.
The core, unifying theme across all of Taleb's writing - from his early work on randomness and The Black Swan through later works like Antifragile and Skin in the Game - is the logic of risk-taking. Taleb takes great offence at work, exercise, or any other kind of activity that’s designed to consciously offload risk from the participant; he’s particularly fond of complaining about elliptical machines at the gym, jogging, or really any kind of exercise that isn’t free weight deadlifting. But I have a distinct feeling he’d be willing to make an exception for trail running, which seems like it’d be a lot more up his alley.
In this vein, Taleb consistently advocates the idea that people shouldn’t be put in charge of anything important unless they’re willing to bear the consequences of what happens. Wielding his go-to phrase, “Skin in the Game”, he often separates people into two categories: “real experts” (e.g. airplane pilots, business owners) who face real consequences if they’re wrong, versus “fake experts” who may not (he particularly likes to pick on economists and academics).
However, the idea that Skin in the Game leads to good performance and desirable outcomes has been co-opted by the tech industry in a way that’s quite different from what Taleb meant. In tech, we talk about “skin in the game” a lot, but usually in reference to incentives. We say that an employee with stock options might have more skin in the game than an employee paid strictly on salary: because a portion of their upside comp is in company shares, they have an added incentive to work hard. Many of these same people take the thought one logical step further: equity-based compensation must be part of the reason people here work hard; good motivation leads to good performance. In fields that don’t have upside-oriented comp (as in, just about every other field except finance), people simply “don’t have the right incentives”. Ah.
Upside incentive can matter a great deal, but it has almost nothing to do with skin in the game the way that Taleb uses the phrase. When Taleb talks about “Skin in the game”, he’s talking about survival. A small business owner does not have SitG because she’s incentivized to run her business well; she has SitG because if she doesn’t run her business well, then it doesn’t stick around. Over time, if she has survived, then she’s probably good at running her business.
Skin in the Game isn’t an incentive; it’s a filter. In systems that require SitG, bad performers are eliminated from the system, leaving behind the good ones. Another way of saying this is that people don’t have skin in the game; systems do. Systems with skin in the game are ones where the good performers are more likely to survive than poor performers; it has nothing to do with how people are incentivized. It’s Buffett’s line: In order to win, you must first survive.
But in tech we’ve rebranded Skin in the Game into something that suits our narrative better. We have this narrative that upside is a great driver of progress: building an incredible future is simply a question of correctly incentivizing everyone. There’s some irony here. First of all, one of the big reasons Silicon Valley works is that it’s an environment where failure is tolerated more than elsewhere, and where it’s okay to take risks because you’re not personally eliminated if they don’t work out. Second, one of the reasons startups work so hard is that they’re within an inch of death all the time. Upside motivation is not really a factor when you have two weeks of runway left and need to figure something out. That’s real skin in the game, and it doesn’t have a lot to do with incentives.
But history is written about the winners. We see the world through the lens of those who happened to survive and win, and we attribute their success to the notion that they won because they were incentivized to win. Tech has built a story for itself as a place where Skin in the Game is a great motivator. It’s true that upside can be used to great advantage in motivating people to work hard, but unless you’re in sales and a huge percentage of your salary is commission, I’m loath to call that actual SitG; an engineer making $200k a year who also has a fat stock option package wouldn’t qualify under Taleb’s definition of SitG any day of the week.
This new narrative is pervasive enough in Silicon Valley that it’s expanded beyond company-building and into the business models of many of these startups themselves. One such startup is Lambda School, a current tech industry darling. They provide their customers with a computer science and software development education, and in return take a stake in the students’ future income rather than charging an upfront tuition fee. This setup, branded as “Income Share Agreements”, or ISAs for short, has become one of the big new trends everyone’s talking about: “ISAs for X” as the new “Sharing Economy for X”.
Now here’s a question: does Lambda School have skin in the game? It does, actually - but not because they’re "incentivized to do a good job” any more than other startups are. They have skin in the game because if they don’t do a good job educating their students and getting them good jobs, they don’t survive: they don’t earn the income they need, and they fold. What makes Lambda School work, assuming it continues to do well, is that they’ve built a school that actually does a good job teaching students. The ISA is there because in this particular circumstance ISAs make a lot of sense: people studying to become software developers are probably in a better position to sell equity in themselves than to issue fixed income (i.e. take on debt) in order to pay for their degrees. But the SitG for Lambda School here is not a positive motivator - it’s a filter, and a threat.
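If you haven’t seen how an ISA actually collects, here’s a minimal sketch under assumed terms; the income share, payment threshold, cap, and term below are illustrative placeholders, not Lambda School’s actual numbers. The point it illustrates is the filter: the school’s revenue only shows up if the graduate actually earns.

```python
# Hypothetical sketch of ISA economics. The income share, threshold, cap,
# and term are illustrative placeholders, not Lambda School's actual terms.

def isa_total_paid(monthly_incomes, share=0.17, annual_threshold=50_000,
                   cap=30_000, term_months=24):
    """Total a graduate pays under a simple ISA: a fixed share of income,
    owed only in months where annualized income clears the threshold,
    capped at an overall maximum."""
    total = 0.0
    for income in monthly_incomes[:term_months]:
        if income * 12 >= annual_threshold:   # below the threshold, nothing is owed
            total += income * share
        if total >= cap:                      # payments stop at the cap
            return cap
    return total

# A graduate earning $8,000/month eventually hits the cap; one who never
# clears the threshold pays nothing. The school only gets paid when it works.
print(isa_total_paid([8_000] * 24))   # 30000 (capped)
print(isa_total_paid([3_000] * 24))   # 0.0
```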
But a lot of people in the community have looked at the excitement around Lambda School and concluded, “The reason this is working is ISAs. Therefore, let’s go fund businesses that are ISAs for X, and Y, and Z.” This is like saying Superhuman is doing well because they have a waitlist. It’s nonsensical! If you want to start a startup that benefits from this kind of added motivation, there’s a much easier and more effective way to do it: just raise less money! It’ll force you to work harder, leaner, and smarter, and it’ll force you to get things right faster or run out of cash trying. But I’ll bet you that’s not what these people have in mind.
I think it’s true that people in tech typically work quite hard, especially early employees who have the most to gain if things go right. But in my opinion the reason why those founders and early employees are so effective in the long run is really not that they’re so motivated to be high performers. It’s the fact that they have to work under circumstances that are very close to danger, all the time. They’re like trail runners. In this case, anyway, I’m willing to side with the Taleb notion that it is, in fact, better exercise and will get you better results. But it’s a question of survival, not incentives.
One quick note from last week: wow a lot of you took that little thought experiment seriously! Thanks for all the email.
The extortion economy: how insurance companies are fueling a rise in ransomware attacks | Renee Dudley, ProPublica This is a really interesting case of moral hazard I’d never considered before. Businesses are increasingly buying insurance against ransomware attacks, where hackers break in and encrypt your files, then demand an extortion fee to unlock them. In many of those cases, the victims are able to fully recover the data by themselves without having to pay the ransom. But the insurers, apparently, often pay the ransom anyway. Why?
Well, because paying ransoms in full is good for criminals, and having lots of profitable criminals is good for the insurance business. Hackers, for their part, want to make that ransom payment as easy and frictionless as possible. Hackers and insurers mutually want each other to do well. When an insurer pays out a ransom in full, even when unnecessary, they’re sending a message to prospective clients: See? You need this! To an insurer, paying out ransoms is like paying for customer acquisition (CAC). I bet you there’s some sort of really interesting Nash equilibrium in there - if ransomware criminals start charging too much, the game falls apart; but if they can keep it just right, this looks like a totally sustainable (although obviously illegal) way of skimming money out of businesses’ legal budgets and splitting it evenly between hackers and insurers. Yikes.
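Out of curiosity, here’s a toy sketch of that intuition; every number in it is invented. As long as the ransom stays below the insurer’s expected cost of funding a self-recovery, paying up is the cheaper move and the scheme sustains itself; push the ransom past that line and the game falls apart.

```python
# Toy model of the insurer's choice: pay the ransom, or fund recovery.
# All figures are invented for illustration.

def insurer_pays_ransom(ransom, recovery_cost, recovery_success_rate,
                        downtime_cost_per_day, extra_downtime_days):
    """Insurer pays the ransom when it's cheaper than the expected cost of
    a self-recovery attempt: the recovery spend, the extra downtime, plus
    the chance recovery fails and the ransom gets paid anyway."""
    expected_recovery_cost = (
        recovery_cost
        + downtime_cost_per_day * extra_downtime_days
        + (1 - recovery_success_rate) * ransom   # failed recovery still ends in paying
    )
    return ransom < expected_recovery_cost

# A $50k ransom against ~$120k of expected recovery cost: the insurer pays.
print(insurer_pays_ransom(50_000, 40_000, 0.8, 10_000, 7))   # True
# Push the ransom to $500k and self-recovery wins: the game falls apart.
print(insurer_pays_ransom(500_000, 40_000, 0.8, 10_000, 7))  # False
```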
The Meme Hustler | Evgeny Morozov, The Baffler (2013) This is an interesting long piece from several years ago about Tim O’Reilly. The piece is quite critical of O’Reilly and is definitely coming from one side of the spectrum in the open source community, but I learned a ton that I didn’t know and had to read it twice in order to really soak it all in. Recommended.
Journalism is an Action | Hamilton Nolan, Splinter
Dating apps used to seduce gullible investors | Duncan Hughes, AFR
Have a great week,
Alex