Please Read: Two COVID-19 initiatives

Two Truths and a Take, Season 2, Special Issue

Hello everyone,

First of all, thanks to each and every one of you for subscribing and reading this newsletter. I’ve immensely enjoyed getting to write to you all every week over the past year. I hope that all of you are safe and sound at home, and either staying healthy, or on the path to recovery.

If you’ve enjoyed these newsletters, and have ever wondered, “Gee, I’d love to thank Alex for writing these in some way”, or maybe you haven’t thought that until this minute right now but it sounds like a good idea: well, now is your chance. 

Two COVID-19-related initiatives that I’m asking you to help out with today, if you can:

The first one I’d like to tell you about is a business in Montreal called AON3D.

AON3D was founded in 2015 by a group of material science engineers from McGill University (where I went to school), and I’ve known them since they got started. AON3D has one of the most interesting, thought-provoking and mature theses around the future of material science and manufacturing that I’ve ever heard, and I’ve seen a lot of them. They build industrial 3D printers and a material science platform meant for demanding applications in industry, aerospace, and other high-performance environments. If you are at all interested in 3D printing or the future of manufacturing in general, you should check them out.

But this week, the future has to wait.

The McGill University hospital system, like many hospitals around the country and around the world, desperately needs PPE for its doctors, nurses and frontline workers. The world’s supply chains and manufacturing facilities are reacting admirably to this challenge, but not fast enough to plug the gap for the next two weeks. We also face a looming ventilator crisis: we need local, rapid, distributed capacity to make ventilators and ventilator parts, and we need them now.

So AON3D is stepping up, producing face shields, respirator splitters, and other critical supplies. They’re able to make much higher-quality PPE and equipment parts than most 3D printing shops can thanks to their high-performance material science expertise, meaning their equipment can be sterilized and reused in medical settings (unlike many kinds of plastic). More importantly, they can make them today.

They’re also partners in the Pratt & Whitney / Bombardier Ventilator Project and the Code Life Ventilator Challenge - group efforts to produce a low-cost, locally manufacturable ventilator in a time frame that will help COVID-19 patients as soon as possible. This is a moonshot but if it works, it’s a game changer. 

Here are the asks:

First: If you’d like to help fund BOM costs, shipping costs, employee time, and any other expenses that AON3D incurs in getting PPE to medical frontline workers, I’d really appreciate that. I and another donor are matching up to $25,000 in pledges, and I’d love to have as many of you as possible help us out.

I'll pledge to fund costs

(Also, if you are an angel investor or VC - if you’re interested in funding AON3D’s business through this crisis and beyond into the next decade, email me and I’ll connect you with their team. This is your chance to get into their cap table, and save lives while doing it. The world’s manufacturing and supply chain capability will be massively different post-COVID; if you’re an investor who funds the future, well, here it is right in front of you.) 

Second: If you are a doctor / medical professional in North America (or are close to someone who is) and your team needs face shields urgently, please fill out this Google form. We can’t promise anything, but please tell us and we’ll try our best to get you what you need.

I'm a doctor and need PPE

Third: if you have access to 3D printers or laser cutters and a materials supply chain and want to help, OR if you have a need for parts or supplies that could be 3D printed, contact AON3D - someone on their team will reach out for assistance as soon as they can. 

The second one is a local organization that I’ve known for many years called Fred Victor. Originally founded 125 years ago, Fred Victor is a small organization here in Toronto that provides essential housing, health and employment services for around 2,000 people on a typical day. Fred Victor helps people who are vulnerable, homeless or living in poverty with three core things: getting safe and stable housing, addressing physical and mental health challenges, and finding meaningful and stable employment. It’s really important work, and they are everyday heroes. 

COVID-19 is obviously hitting everyone hard, but the population Fred Victor serves is especially vulnerable over the next few months. An employment crisis and a public health crisis hitting at the same time are doubly threatening for their community. Fred Victor has the people and infrastructure in place to help right now, in these critical first few weeks of community transmission, but they need resources. So please help fund them through this time if you can.

Support Fred Victor

Thank you all for helping, from the bottom of my heart. Things are going to get worse before they get better, but we will get through this. If you’re safe, healthy, and fortunate enough to help out, please do.

Thank you as always for reading, for helping, and for being great.

Stay healthy, be thankful, and see you next week,


Black Swan Events

Two Truths and a Take, Season 2 Episode 10

Hi everyone, a quick note from me: in a couple days, I’m going to send out a special email to all of you regarding COVID-19 and a few specific ways we can help out. Please read it - it’d mean a lot to me to see readers help out with a couple of local initiatives I’m looking to support.


To be clear, I don’t think we can really call COVID-19 itself a Black Swan event. Plenty of people saw it coming, in some form or another, and said so. If you asked people last year, “what will trigger the next global crisis?”, some non-trivial number of people would probably have said “a pandemic.” We were warned.

The resulting small-business armageddon and unemployment tragedy might be, though. When we thought about pandemics, we typically forecast either the immediate medical consequences, or went all the way to I Am Legend scenarios where everyone is dead. But as far as I can tell, no one really foresaw: what happens when no one can leave their house for 3 months, so every small business closes all at once? (If you know of anyone who actually foresaw this ahead of time, please send it to me!)

This is closer to real Black Swan stuff. This week’s unemployment filings, compared to the last half-century, would be treated by frequentist statistics as a 30-sigma event: less likely to happen than if you selected one atomic particle at random out of every particle in the universe, and then randomly selected that same particle twice more in a row. A 30-sigma event should be outrageously unlikely, at universe scale. But they happen. And when they do, they warn us: the problem is not that the universe didn’t behave correctly. The problem is that we were wrong.
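If you want to see just how absurd a 30-sigma probability is under a Gaussian model, it only takes a couple of lines of Python. This is a sketch using the standard error function, looking at the one-sided tail:

```python
from math import erfc, sqrt

def gaussian_tail(sigmas: float) -> float:
    """One-sided probability that a standard normal variable exceeds `sigmas`."""
    return 0.5 * erfc(sigmas / sqrt(2))

print(gaussian_tail(3))   # ~0.00135: a "3 sigma" event, roughly 1 in 740
print(gaussian_tail(30))  # ~5e-198: vanishingly small at any scale
```

For comparison: picking one specific particle at random out of the roughly 10^80 particles in the observable universe, twice in a row, has probability 10^-160 - still far more likely than a true 30-sigma draw.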

This week, we’ll finish a makeshift trilogy of posts on Nassim Taleb’s books and their core concepts - following part 1 on Skin in the Game from last September, and part 2 on Antifragility over the past two weeks. 

First, the basics. Black Swan Events have three principal characteristics:

One: They are unpredictable. This is the easiest one to grok, although the hardest to say anything actionable about. The term “Black Swan Event”, when used most lazily and colloquially, is simply meant to say “something we didn’t see coming.” 

Two: They are enormous in magnitude. Unpredictable events aren’t rare; they happen every day. Black Swan Events aren’t just unpredictable in character; they’re also unprecedented in scale. The 2008 financial crisis caught everyone by surprise not because mortgage defaults or bad credit ratings were that unusual, but because the magnitude of the event broke through all expectations and safety valves. Something happens at a scale that no one considered possible before, and with consequences that no one has prepared for.

Three: They are retroactively explainable. In hindsight, we always see them coming. There were warning signs everywhere; the data pointed to such an event being within the realm of possibility; you know how it is. This matters: the logic of that hindsight is tied directly to their scale and to the fact that nobody predicted them.

If it were up to me, I would add a fourth essential characteristic, implied by the first three but worth making explicit: prior to the event, they are preemptively ruled out, either explicitly in our models or implicitly in our preparation for the future.

For an event to really be a Black Swan event, it has to play out in a domain that we thought we understood fluently, and whose edge cases and boundary conditions we thought we knew. To me, this fourth characteristic is the real key to understanding the logic of Black Swan events, like what just happened this week.

Mediocristan vs. Extremistan

Let’s say you got together 1000 dentists, and then added up all of their annual incomes. Then you went out and found the most successful dentist on earth, and added his or her income to the total. How much will that super-dentist’s income change the mean? Not much. Dentists earn a living day by day, patient by patient. Some dentists work faster and some dentists charge more, but it’s still a one-at-a-time kind of job.

If you were to plot the income of these dentists on a graph, you could plausibly expect them to look something like a normal distribution. It might not look like a true bell curve - it would likely have a longer tail towards richer dentists - but you don’t see any dentists making 100 million dollars a year. 

If you used basic frequentist statistics to ask the question, “given our sample of 1000 dentists and their incomes, how likely is it that some dentist out there is earning 100 million dollars a year?”, you’d arrive at an impossibly low probability - 100 million is too many standard deviations, or “sigmas”, away from the mean. (When someone says that something is a “3 sigma event”, they mean an event whose magnitude lies three standard deviations away from the mean, which in a true Gaussian distribution you’d expect to see roughly 1 in 740 times for the one-sided tail.) Earning 100 million dollars a year for dental work would be, I dunno, a 7 sigma event or more? Something very unlikely, anyway.

Now instead of dentists, let’s take musicians. If you assemble 1000 random musicians, add up their annual income, and then add Taylor Swift’s annual earnings to the total, the other 1000 people are barely going to register as a rounding error. Musicians’ jobs scale in a way that dentists’ don’t.

Swift reportedly earned somewhere between 150 and 200 million dollars before taxes in 2019, and although that kind of experience is obviously atypical, it’s possible. We easily understand that musicians’ incomes do not follow a normal distribution, so asking “how many sigmas away is Taylor Swift’s income?” is pointless. Artists’ income doesn’t follow those rules. But we still think that dentists’ do. We have no reason to think otherwise, yet.
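You can watch this difference emerge in a quick simulation. The sketch below uses made-up numbers - normally distributed dentist incomes, Pareto-distributed musician incomes - purely for illustration:

```python
import random

random.seed(0)

# Mediocristan: dentist incomes, roughly normal around $150k
dentists = [random.gauss(150_000, 40_000) for _ in range(1000)]

# Extremistan: musician incomes, heavy-tailed (Pareto, alpha ~ 1.1)
musicians = [20_000 * random.paretovariate(1.1) for _ in range(1000)]

for name, incomes in [("dentists", dentists), ("musicians", musicians)]:
    top_share = max(incomes) / sum(incomes)
    print(f"{name}: the top earner is {top_share:.1%} of the group's total")
```

Run it with different seeds and the pattern holds: the top dentist never moves the total much, while the top musician routinely accounts for a huge slice of it.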

In The Black Swan, Taleb rhetorically uses two fictional places, “Mediocristan” and “Extremistan”, to illustrate the difference between these two environments. In Mediocristan, probabilities follow the laws of Normal Distributions, and calling something an “X sigma event” actually has meaning. In Extremistan, sigmas are meaningless. Tail events are understood and expected. (We’ve never seen one, but we intuitively understand that a 10+ magnitude earthquake will happen one day.) 

Mediocristan and Extremistan are not fixed: the world is a changing place. Musical performance used to look like dentistry. You could only make a living performing live, in person - which does not scale all that well. But the invention of recorded music and the radio changed that. Music became scalable, and the distribution of musicians’ income turned into Extremistan. To a musician of the day, this was a “black swan” transformation of sorts: before recorded music, there was no conceivable way that you, or any musician, might entertain fifty million people a year. There was no way for musical performance to be so correlated. But now it’s routine. 

When we talk about Black Swan Events and their principal characteristics (unpredictable; high magnitude; retroactively explainable; preemptively ruled out), what kind of events fit this description? High-magnitude events in domains that we thought belonged to Mediocristan (because we’d never seen evidence to the contrary), but actually belonged to Extremistan. This might be because that environment changed (new technology; new laws), but more likely, the environment was always that way. We were wrong, not the universe. We’d just never seen a tail event that big before, so we never considered that they were within the realm of possibility. 

Don’t play Russian Roulette

Most people get this far ok. But then some people struggle with the next step: how might environments look like Mediocristan under some conditions, and give every impression of behaving by those rules, but then suddenly not? Where does our false confidence come from? 

Our understanding of the world is based on past experience; the more events we’ve seen, the more confident we are in our model of how those events work. But when we gather these data points and draw conclusions from them, there’s a big difference - which often goes unnoticed - between gathering these data points in parallel (which can create an illusion of Mediocristan), versus gathering them in repeated doses (which reveals the Extremistan that was there all along).

Let’s do a thought experiment. Imagine twenty friends head to the casino, each equipped with $100, and sit down at the roulette table. They each play for an hour. At the end of the hour, some people are up, some people are down, a few have gone bankrupt. Overall, if you asked scientifically, “what is the effect of roulette on your wallet?”, you might draw up a probability distribution and conclude, “OK, the expected value of playing roulette is less than breakeven, but it’s a distribution, and it looks like this.”

Now let’s imagine a new scenario. Instead of twenty friends playing for one hour, it’s just one person, playing for twenty hours. Are you going to draw the same conclusion about the effects of Roulette on wallets? No. That one person is overwhelmingly likely to have gone bankrupt, if they’re compelled to keep playing over and over again. 

There is a clear difference between what roulette does to a group of people in parallel, versus what roulette does to one person in repeated doses. (If you’re not convinced, make the game more extreme: you’re offered a chance to play Russian Roulette for a million dollars cash. What is the expected value of playing? What about playing six times?)

What is different? The wheel is the same; the odds are the same. But there is a difference. Every time you spin the wheel, there is on average a slightly negative expected outcome; but when that negative outcome is spread out and borne separately by 20 different friends, each with their own wallet, the odds of any one of them going bankrupt are smaller.

But when the outcomes are all concentrated on one wallet, that one wallet will go bankrupt before too long. Bankruptcy is a tripwire, where the consequences change in character. When you go bust, you can no longer buy back in, and your luck is no longer eligible to “average back out” to the mean expected outcome that the 20 friends saw over their one hour of playing.

The point of all this is: if you watch people play roulette for 30 years, but the only data points you’re observing are people playing in one-hour increments, then it’s true that you’ve learned something - your model of “what is the effect of one hour of roulette on your wallet” is probably pretty refined. But that does not mean you understand roulette generally. And it does not mean that you’re prepared to play roulette for 20 straight hours.
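The tripwire is easy to demonstrate with a toy Monte Carlo sketch. Everything below is made up for illustration - even-money bets at American-roulette odds (18/38), $10 bets, an assumed 50 spins an hour - but it captures the asymmetry: twenty parallel one-hour players versus one player locked in for 20 hours, with no buying back in after going bust.

```python
import random

random.seed(1)

WIN_P = 18 / 38            # even-money bet (red/black), American roulette
SPINS_PER_HOUR = 50        # assumed pace of play
BET, BANKROLL = 10, 100

def play(hours: int) -> int:
    """Play even-money roulette; stop early if bankrupt (the tripwire)."""
    wallet = BANKROLL
    for _ in range(hours * SPINS_PER_HOUR):
        if wallet < BET:   # bust: no buying back in, no averaging back out
            return 0
        wallet += BET if random.random() < WIN_P else -BET
    return wallet

friends = [play(1) for _ in range(20)]   # ensemble: 20 friends, 1 hour each
marathon = play(20)                      # time: 1 player, 20 hours straight

print("friends' wallets:", friends)
print("marathon player's wallet:", marathon)
```

With these numbers, a minority of the one-hour players go bust, but the 20-hour player almost always does: same wheel, same odds, completely different outcome once the doses are repeated instead of parallel.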

The Problem of Induction

The central example in The Black Swan is a rephrased parable called the Turkey Problem: 

Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests”, as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a reversion of belief. 

The future hasn’t happened yet. More things can happen than will; and more things will happen than have happened. The future holds infinite possibility, while the past only offers a finite set of examples to learn from. 

Our Roulette Scientist who has only ever seen people play roulette for an hour at a time knows something about roulette, but not everything. He may have watched thousands of individual hours of roulette on a cumulative basis, but that doesn’t prepare him for 20 hours consecutively. They are not the same thing. If tomorrow he is suddenly compelled to play for 20 straight hours, his understanding of the game, as Taleb would put it, will incur a reversion of belief. 

When the dust settles and our Roulette Scientist looks back on what happened, and how he could have been so wrong in his understanding of roulette, he’ll offer himself a retroactive explanation: “all at once” is different from “one at a time”, he’ll sigh. That’s really obvious in hindsight, and it doesn’t require that much of an adjustment to his understanding of roulette. But that tiny little adjustment makes all the difference. If “all at once” isn’t something you’ve ever thought about, then of course it’s not part of your threat model. 

Connecting parts one and two together here: we’re most at risk for getting turkeyed when we’ve studied some corner of the world, and have a lot of data telling us: here’s how this system behaves. Here are its parameters. Here are its upper and lower bounds. It behaves like something out of Mediocristan. There are years and years of data reaffirming this is true. 

Up until last week, the chart of US weekly unemployment claims sure looked a whole lot like Mediocristan. Job losses are something that happen more or less in parallel: in a dynamic free market economy, Alice losing her job in Seattle is not that correlated with Bob losing his job in Tampa. 

There are some correlations with economic cycles, for sure; no one believed that jobless claims were completely independent of one another. But we had decades of data showing us what “Shutdown” looks like during good years and bad, during national crises like 9/11 and economic crises like 2008. We even have data showing what happens when “Shutdown” scales up to an entire city for two months, as with Hurricane Katrina.

We didn’t really consider: but what if every business closes for two months, all at once, because no one is allowed to leave their house anymore? In hindsight, yeah that’s pretty much what you’d expect would happen in a pandemic. But no one had ever seen “Shutdown” scale up to that level before. It was not in the model. 

Our years and years of looking at jobless claims - which, in total, constitute far more total claims than this week’s 3 million - are not really useful here. That was all learning in parallel. Now we’re dealing with a repeated dose situation, and a degree of sudden, concentrated unemployment at a scale where we do not have any prior experience. The human cost will be so, so big. 

Part of that cost, tragically, is that we’re in no way ready to handle that much unemployment, so suddenly. The American health care system is tied to employment, and at a moment when we’re all about to get sick and risk death, everyone is getting laid off and losing their health insurance. The system we have in place for processing jobless claims, and supporting small businesses in lean times, is just not ready for a sudden event of this magnitude. And why would it be? We had fifty years of experience, stress testing what normal looks like and what high looks like.

Then 2020 happened. And we were all wrong.

As an additional reminder - please remember to check your inbox in a couple days for a special COVID-19 related issue.

And finally, this week’s comic section, the tweet that made me laugh the hardest:

Immune systems / Antifragility, Part 2 | TTT S2E9

Two Truths and a Take, Season 2 Episode 9

Hi everyone, in keeping with last week’s theme on antifragility amidst the global COVID-19 crisis, this week we’re going to talk about a specific example of an antifragile system that’s probably on a lot of your minds right now: your immune system. If you have a biology background you may already know all of this, but I’m guessing most people who read this newsletter don’t. So I hope you enjoy and learn something from this quick overview on how your immune system works, and why it’s such a great example of an antifragile system that gains from disorder. 

The immune system: three lines of defence

What are we talking about when we say “The immune system?” First of all, you don’t have one immune system so much as three layers of defence that work to protect you from pathogens and disease. They all work together, particularly the second and third layers, which coordinate closely. Still, we can think of them as three distinct systems with their own strengths and weaknesses.

The first one is pretty easy to understand: it’s your skin. Your skin is a wall that keeps stuff out. It’s simple, but effective! It’s fragile, though. Your skin suffers when stressed. If you’ve ever had a cut that got infected, that’s what happened: the wall got breached, and you suffered the consequences. Nonetheless, it’s a good thing you have it. It’s a simple, cheap, and effective way to keep 99% of the pathogens and toxins you encounter in the world out of your body and away from where they can cause problems. 

What about threats that get through the wall? The next two layers of defence are more targeted and deliberate. In vertebrate animals like us, there are two levels to our immune response: innate immunity, which is a characteristically robust system, and adaptive immunity, which is antifragile. 

When you’re dealing with a threat, the immune system has to do two things: identify it, and destroy it. Destroying it, relatively speaking, is the easy part. Both the innate and adaptive immune system rely on similar weaponry to do so: we enlist specialized white blood cells and marker molecules to tag threats, break them apart, neutralize them, or just eat them. The harder part is knowing what to destroy: how does the body equip itself to know what a threat looks like, so that if it sees something, it can say something?

The innate immune system has learned what threats look like by an old, slow, but tried-and-true method: evolution. The innate immune system acquires its threat models through evolutionary selection from generation to generation, which is very slow in the context of our own lives (evolution typically takes thousands of generations to work) but still totally works as a way of identifying pathogens that have been around since long before we were.

White blood cells in the innate immune system all carry around a “most wanted list” of threat models in their DNA. Remember, your DNA is more or less fixed from birth: the set you get from your parents is the set you’ve got for life, and you benefit from the evolutionary experience of thousands of generations before you. It’s spread out among every white blood cell and every precursor cell of your body, and the response that it triggers gets carried out by dozens of different cell types and attack mechanisms, so it’s pretty resilient to shock and distress.

But it only works if that threat can be identified in advance; and unfortunately “in advance” means in evolutionary terms: “before humans”. Fortunately, just as your skin is good at keeping out 99% of all bad stuff, the innate immune system is good at keeping out the next 99% of bad stuff - most pathogens that can hurt you have existed in the world for a very long time. But not all of them. So then what?

The adaptive immune system: stressors are information

In contrast to innate immunity, your adaptive immune system does not have the luxury of keeping a most wanted list handy. Your adaptive immune system has a harder job. It has to identify pathogens and bad stuff that you’ve never seen before, and possibly that no one has ever seen before. Bacteria and viruses evolve at a rate that’s orders of magnitude faster than us. We are perpetually playing catch-up in the fight against new viruses and new bacteria that have evolved into existence. 

It’s worth taking a minute to go through the biological mechanics of this challenge. The way that your white blood cells detect foreign intruders is by continually testing everything they encounter for certain identifying factors, called antigens. To do that, the cells need to express their own counterpart identifiers, called antibodies and antigen receptors, which selectively bind to a specific antigen partner. 

Antigen receptors, like most of the rest of the stuff in your body, are proteins; and the recipe for how to make them is in your DNA. Your innate immune system comes with its DNA recipes for antigen receptors pre-installed; you inherited them from your parents, who got them from their parents. Over time, you can evolve the right DNA recipes for catching the usual suspect pathogens. But how do you predictively design and generate antigen receptors for threats you’ve never seen?

The adaptive immune system does something pretty remarkable here: it generates them randomly. New white blood cells repeatedly scramble and rearrange the basic building blocks of these receptors in order to generate new combinations: up to 2.25 x 10^18 potential combinations in humans, at least as far as we know (it could be higher). Instead of trying to predict the future, your adaptive immune system just makes a little bit of everything.

The vast majority of these cells will never live out their intended purpose. They’ll float around, never finding that new theoretical virus for which they were randomly and specially generated. But every once in a while, one will find its target. (It doesn’t actually just run into it in the wild; that antigen gets “presented” by one of your innate immune system’s cells, which are actually a bit more clever than we’d given them credit for earlier. They may not know what they’re looking at, but they understand how to “hand over authority”, so to speak, to the adaptive system.) After going through a critical checklist to make sure you haven’t accidentally found part of yourself, the system gears up: it has its marching orders.

So you’ve found a real threat, and it merits a real response. The first thing your immune system does is make massive numbers of copies of that lucky white blood cell; it also powers up its own fleet of weapons that it’ll use to eliminate the threat that it now knows how to identify. 

Then - and here’s the important part - once the threat is conquered, that threat is now stored permanently in our most wanted list, just like the classic threats that we automatically know how to neutralize. The next time we see that same threat, we’re ready. And we can get to work fighting it way faster; in fact so fast you never even feel sick. That’s what we mean when we say we’re immune to something, and it’s how vaccines work. 

Vaccines are a clever hack. By presenting the body with a small and disabled part of a disease, we can use society’s current knowledge of what diseases are currently dangerous (which works a lot faster than evolution does) to pre-load our adaptive immune system with its own most wanted list. That’s what you’re doing when you get your shots. 
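None of the molecular machinery is needed to see the logic of generate-randomly, match, clone, remember. Here’s a toy sketch - receptors and antigens as short strings, a class name I made up, nothing biological about it:

```python
import random
import string

random.seed(7)

ALPHABET = string.ascii_lowercase[:4]   # a tiny stand-in for protein building blocks

def random_receptor(length: int = 5) -> str:
    """Scramble building blocks at random - no attempt to predict future threats."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

class AdaptiveImmuneSystem:
    def __init__(self, n_cells: int = 5000):
        # Optionality: a huge randomly generated repertoire, most of it never used
        self.repertoire = {random_receptor() for _ in range(n_cells)}
        self.memory = set()   # the learned "most wanted list"

    def vaccinate(self, antigen: str) -> None:
        # A vaccine pre-loads the most wanted list without a real infection
        self.memory.add(antigen)

    def exposed_to(self, antigen: str) -> str:
        if antigen in self.memory:
            return "neutralized immediately (immune)"
        if antigen in self.repertoire:
            # Antifragility: the stressor is information; store it permanently
            self.memory.add(antigen)
            return "fought off slowly, then remembered"
        return "no matching receptor (the infection wins this round)"

immune = AdaptiveImmuneSystem()
virus = random_receptor()        # a pathogen nobody has seen before
print(immune.exposed_to(virus))  # first exposure: a slow fight, if a receptor matches
print(immune.exposed_to(virus))  # second exposure: instant, if it was remembered
```

The interesting property is in `exposed_to`: every survivable stressor leaves the system permanently stronger, and `vaccinate` is exactly the “clever hack” - writing to memory without fighting the real thing.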

In last week’s issue, one question that still seemed to trip people up a bit was the difference between optionality and antifragility. The adaptive immune system is a great illustration of the difference. The first part - randomly pre-generating all of those antibodies and antigen receptors - 1) is a kind of optionality, and 2) is a prerequisite for antifragility: you need to have all of these options available to you, so that you’re ready for anything.

But it’s not enough. You also have to react. The second part - using stressors as information, reacting, and establishing permanent strength because of that reaction - that’s antifragility. Or, to be fair, that’s how I use the term antifragility. It may be narrower than other definitions, but I find it’s useful to be deliberate and specific with how you use words like this. 

Over the course of your life, your adaptive immune system grows stronger every time you’re exposed to new pathogens and diseases. Every time you give it a workout, it learns and strengthens. That’s why, unless you have an immune deficiency, it’s important to get a lot of exposure to dirt and germs early on when you’re young, and then continually throughout your life too. 

The biggest insight here, I think, is to really wrap your head around the nature of information in this system. The presence of a new, unknown pathogen resolves uncertainty for the adaptive immune system, because it tells it something explicit: Hey, you know how you randomly generated all of those antibodies and antigen receptors? This one is the right one. The stressor tells you what to do, and then makes you stronger in a deliberate, non-accidental way. 

Compare this to the innate immune system, which has no such capacity (the stressor simply goes unregistered until it’s too late), or to barriers like the skin (the stressor actively makes them weaker). For the innate immune system, the stressor is not information: it does not resolve any uncertainty. Information isn’t what you’re told; it’s what you understand!

You can find a permalink to parts 1 and 2 of this post (last week + this week) here: Antifragility |

If you’re looking for more non-COVID reading material this weekend, check out this very interesting piece on security by Todd Simpson.

Knights, Castles, Satchels and Writs | Todd Simpson, iNovia Capital

Many of you are probably familiar with the “Castles and Knights” metaphor of IT security; this piece extends the idea further into the present (and, I think, the future that’s beginning to arrive). Give it a read and get smarter.

And finally in the Comics Section, this week’s tweet that made me laugh the hardest:

Have a great week, and stay positive. (Attitudes; not tests!)



Two Truths and a Take, Season 2 Episode 8

Seems like a good week, with the Coronavirus pandemic and all, to talk about this:

Upon request, I feel like I ought to explain some of these misunderstandings. I already wrote one a few months back on misunderstanding “skin in the game” (it’s not an incentive; it’s a filter). This week we’ll tackle another one: antifragility.

Antifragility = you need disorder

First, what antifragile isn’t: antifragile does NOT mean “not fragile.” It is not robustness, durability, or ability to withstand adversity. I hear some people use antifragile to mean “immunity”, or “super-resilience”, which aren’t it. Other people equate antifragility with optionality, which is closer, but still not quite it. (More on that later.)

Antifragile means “negative fragility”. This can be tricky to conceptualize, since there isn’t really a word for it in English - or in any other language, as far as Taleb could find. Nor are there convenient visual or object analogs we can easily imagine, the way a porcelain vase or some other clearly fragile object anchors our idea of “fragile”. We can picture something fragile, and we can picture the absence of that fragility - but not its inverse.

But what about the other side of that spectrum - negative fragility? If fragility means “suffering from disorder”, what about something that gains from disorder? 

Here, we find antifragility: things that need disorder in order to thrive, and will actively suffer if left at rest. Most objects in the world don’t have this property, but a lot of complex systems and living things do. Markets, democracies, and immune systems are all antifragile: without variance, they stagnate and die. With variance, especially unexpected variance, they grow stronger. Disorder is a key ingredient to how they function. 

The opening example in chapter one of Antifragile, which is the most memorable thought experiment in the book, restates the idea well:

You are in the post office about to send a gift, a package full of champagne glasses, to a cousin in Central Siberia. As the package can be damaged during transportation, you would stamp “fragile”, “breakable”, or “handle with care” on it (in red). Now what is the exact opposite of such situation, the exact opposite of “fragile”?

Almost all people answer that the opposite of “fragile” is “robust,” “resilient,” “solid,” or something of the sort. But the resilient, robust (and company) are items that neither break nor improve, so you would not need to write anything on them - have you ever seen a package with “robust” in thick green letters stamped on it?

Logically, the exact opposite of a fragile parcel would be a package on which one has written “please mishandle” or “please handle carelessly.” Its contents would not just be unbreakable, but would benefit from shocks and a wide array of trauma. 

In antifragile systems, stressors are information

Most people get this far okay. But you can tell they sort of sputter out when they try to reason through the core mechanism: how, exactly, does variance make an antifragile system stronger over time, and what is different about those systems compared to fragile or robust ones? One shortcut to understanding it is to think about antifragility in terms of information theory.

Think about a system, humming along in its normal state, and then a stressor is suddenly introduced. A wrench gets thrown into a machine; market demand for a product suddenly changes; a new threat reveals itself; customers start complaining to you in a way you hadn’t anticipated. How does this affect you?

In a fragile system, that stressor creates uncertainty. You had a plan, and you were good to follow that plan so long as you stayed within a certain state. But now you’re thrown into a new state, so your plan no longer works. You’re in trouble. That’s fragility.

In a robust system, that stressor is information-neutral. You had a plan, and there’s enough buffer or slack in your system to absorb the stressor. Your state is resilient to the new challenge; the plan continues.

In an antifragile system, that stressor resolves uncertainty. You had no preexisting plan; the stressor tells you what to do. In an antifragile system, stressors are information. Without stressors, an antifragile system is rudderless. It doesn’t know how to grow or what to do. It actively suffers, until a challenge gives it direction.
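One way to make the fragile / robust / antifragile trichotomy concrete is Taleb’s own technical framing: fragility is concavity to stressors, antifragility is convexity. The toy simulation below is my sketch, not from the book, and the payoff functions are deliberately cartoonish; it just shows that under shocks which average out to zero, the concave system loses, the flat one is indifferent, and the convex one gains.

```python
import random

random.seed(0)

def fragile(x):     return -x**2   # concave: every swing hurts
def robust(x):      return 0.0     # flat: swings don't register
def antifragile(x): return x**2    # convex: every swing helps

# Shocks centered on zero: no net "push", just disorder.
shocks = [random.gauss(0, 1.0) for _ in range(100_000)]

for name, f in [("fragile", fragile), ("robust", robust), ("antifragile", antifragile)]:
    with_disorder = sum(f(x) for x in shocks) / len(shocks)
    calm = f(0.0)  # payoff if nothing ever happens
    print(f"{name:12s}  calm: {calm:+.2f}   with disorder: {with_disorder:+.2f}")
```

The robust system scores the same with or without disorder; the antifragile one scores strictly better with it. That’s the sense in which the stressor is information: variance is the input it feeds on.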

Antifragility and optionality aren’t the same thing

The rookie mistake is to confuse antifragility and robustness; the more advanced mistake is to confuse antifragility with optionality. They’re related, but they aren’t the same thing. Options are something you have, whereas antifragility is something you do.

Optionality is a precondition to antifragility, but just because you have options doesn’t mean you’re antifragile. A fragile organization, facing an unknown stressor, may have plenty of “options” available to it. But if you don’t know what to do with those options, and if you don’t know how to grow into the challenge, then those options don’t do you any good. 

Antifragility is something you do, rather than something you have or something you are. Antifragility is an operating state of growing through continuous reaction. It’s like the opposite of predicting the future. You’re not making any forward-looking assumptions about anything, but you need disorder: you need a state change to have something to react to. Good antifragile systems react quickly and correctly, like the Hydra growing new heads when you cut one off. Without disorder, the Hydra doesn’t grow. 

Taleb’s favourite go-to example is deadlifting: free weights make you stronger (as opposed to exercise machines) because they expose you to more stressors, and more degrees of freedom in how they stress you. Your muscles and joints are antifragile, because of what they do: they are oriented towards those stressors, and they use those stressors as information. Optionality is not enough: having the option to grow is not the same as growing in active response to stress.

Accordingly, antifragile systems and organisms tend towards a common theme: bottom-up decision-making, rather than top-down decision-making. Antifragility requires real options, and real options have to be low-cost. Antifragility only works if you can actually detect, react, and grow in response to deviations from your present state in real time; the only way you can feasibly do this is for disorder detection and response to take place at a small enough resolution, and with a tight enough turnaround time. Top-down systems have a hard time with antifragility, because for them, all options are costly. 


In the context of what’s going on with the coronavirus pandemic, you can see this relationship between optionality and antifragility playing out in real time. You can compare different countries’ reactions and responses and see how, for example, what’s a cheap option for Singapore (swift state action to clamp down on transmission) might be an expensive option for America. The stressor, “There is a virus”, is information to the Singapore government, whereas it’s uncertainty to Washington DC. 

Another notably antifragile country that’s gotten a lot less attention, but has responded in exactly the way you’d expect, is Switzerland. On February 25th, Switzerland saw their first domestic case; three days later, they’d banned all events of over 1000 people. (Imagine the United States acting with that kind of speed!) Since then they’ve repeatedly recalibrated their testing policy as conditions change in real time; it’s not like they don’t have the virus, but you can see they’re dealing with it in their stereotypically Swiss way. *Update: since writing this a few days ago, it looks like Switzerland still has it pretty bad. Here’s hoping they pull through.

On the other hand, there are other aspects of the American system that are going to shine in the response and aftermath here. The American system, for better or worse, is good at never letting a crisis go to waste. It’s hard to see in real time how today’s reactions will build muscle for tomorrow; the process is going to hurt a lot. But America was built for this. The gym is now. 

Permalink to this post is here: Antifragility |

A few other things to read this week:

DeFi, the next-generation distributed finance platform (and/or perpetual motion machine) built on top of Ethereum, is going through a massive stress test right now as plummeting prices trigger a wave of margin calls, and then second-order consequences. (More will certainly have happened since I wrote this, so check live results if you’re interested.)

MakerDAO gets stress tested as Eth price plummets | Jack Purdy, Messari Crypto

In other news, AWS’s homegrown CPUs have gone from meh to kicking ass in a (not) shockingly short amount of time:

Amazon’s ARM-based Graviton2 against AMD and Intel: comparing cloud compute | Andrei Frumusanu, Anandtech

And finally, this week’s comics section:

Stay safe,


Employee Stock Options: free money, kinda

Two Truths and a Take, Season 2 Episode 7

One way to think about employee stock option programs (ESOPs) is that they’re good because they align economic interests between the business and its workers. When the business does well, employees do well, too. Employees like it because they get to own a piece of upside; management likes it because it underscores common cause for working through late nights.

Most people in tech believe that systematically sharing upside exposure with employees is good and necessary, although a vocal minority disagrees. To them, the practice isn’t so benevolent: they see equity-based comp as promising the moon and the stars, and then forcing unknowing employees to subsidize the startup through its risky years in exchange for leftover upside that gets paid out last. We have this argument on Twitter every few months, and it tends to fall into the same rut. 

I find these arguments pretty frustrating, because for the most part (and I am generalizing here) the discussion is framed around wealth distribution, incentives, fairness, and the overall question of “who should get what.” I get that these are important questions: motivation and team alignment are useful if you want to go build a business and change the world. But they end up as theoretical arguments, and they never seem to change anybody’s mind.

Meanwhile, there’s another way we could think about ESOPs. In this view, ESOPs are good, not because of mission alignment or fairness, but because they’re a well-crafted tax code / capital structure swap with a convenient side effect of fudging your income statement a little bit. Now that’s useful!

To me, this is a much more interesting way to think about them. It underscores the good about ESOPs (they are genuinely value-creative, as a well-constructed swap ought to be) the bad (a large portion of their appeal is tied to the tax code and GAAP rules, which have changed in the past, and may again) and the ugly (that part about fudging your income statement, which we’ll get to later.) 

Stock-based compensation helps startups defy the laws of gravity a little bit. This week we’re going to dig into how, both because it’s topical (this past week’s Twitter dust-up was prompted by a new Bernie Sanders campaign proposal to tax options at vest instead of at exercise; yikes!) and because I’m currently writing this newsletter while procrastinating on my own taxes, so at least I’ll feel like I’m doing something useful. Win-win!  

ESOPs are kind of like insurance

In a funny way, ESOP programs are like insurance. They’re both the same basic economic proposition: you put in a small amount of money on a recurring basis, and then if some conditions are met, a large amount of money is paid out to you. In between, the money is held somewhere as a float. 

The float is important, for a couple reasons. In insurance, that money gets put to work productively in the meantime. It’s like using a loan for leverage, except even better, because policyholders can’t ask for the same rights and covenants that creditors do. Insurance float is fantastic leverage in that sense, and you can turbocharge a business with it if used correctly. (Berkshire Hathaway runs on insurance float: Warren may preach “don’t use debt”, but watch what he does, not what he says.)

Stock-based compensation does something similar for startups. Traditionally, issuing equity grants to employees is supposed to be an economically neutral event for the business, just like buying back stock. It transfers wealth from one group of shareholders (the existing ones) to another group (the new ones), offset by a cash purchase from the latter. The business itself is unaffected. 

But it’s not totally neutral. There’s free juice to squeeze out of the orange here, and startups have learned how. Just like insurance, the key is in the float: employees “buy in” first (by accepting reduced cash wages) and then accept a variable payoff later, when the mature business issues and then sells employees their shares at a loss. In the meantime, the startup gets to put all of those savings to work. If you do this for enough employees, and for a large enough share of your compensation expense, it acts like a meaningful amount of leverage for these businesses, and they get it for free. 
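Here’s a toy sketch of that float, with entirely made-up numbers - the $20,000 salary discount, the 80% internal return, and the 7% outside return are all hypothetical - just to show that the same forgone cash compounds very differently depending on who holds it:

```python
FORGONE_CASH_PER_YEAR = 20_000  # hypothetical salary discount an employee accepts
YEARS = 4                       # hypothetical vesting window
STARTUP_RETURN = 0.80           # hypothetical internal return on a reinvested dollar
EMPLOYEE_RETURN = 0.07          # what the employee might have earned in an index fund

def future_value(annual: float, years: int, rate: float) -> float:
    """End-of-period value of `annual` dollars invested at the start of each year."""
    return sum(annual * (1 + rate) ** (years - y) for y in range(years))

print(f"worth to the employee: ${future_value(FORGONE_CASH_PER_YEAR, YEARS, EMPLOYEE_RETURN):,.0f}")
print(f"worth to the startup:  ${future_value(FORGONE_CASH_PER_YEAR, YEARS, STARTUP_RETURN):,.0f}")
```

The gap between those two numbers is the leverage: the startup borrows dollars from its employees that are worth far more inside the business than outside it.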

The main criticism of ESOP programs, which I mentioned at the beginning, is that employees get paid out last in a liquidation event. At the time they sign their compensation agreement, they don’t know how much more capital will get to cut the line in front of them; in fact, they may not even know more basic things, like how many shares are outstanding. Employees have the least amount of information and control over the money they’re contributing to the cap table. 

But you know what, I hate to say it: that’s what they’re signing up for! The good news is that companies like Carta have done great work helping employees understand the mechanics of their stock options, and I suspect that as employees learn more about those mechanics, they’ll actually appreciate their options more, not less, as some people would have you believe. 

Employees do get a major benefit back, though. They avoid paying tax: not only because capital gains are taxed at lower rates than income, but also because appreciating stock is inherently tax-deferred. Stock options and their cousins, RSUs, work a little differently from each other here, but the outcome is similar as long as you do the paperwork right. All of that wealth gets to grow inside the company tax-free, and you only pay tax at the end, when you cash out. 
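A rough sketch of the deferral effect, with hypothetical numbers and deliberately simplified tax treatment (real ISO / NSO / RSU rules are much messier; this ignores basis, AMT, and exercise timing entirely):

```python
PRINCIPAL = 100_000             # hypothetical compensation value
GROWTH, YEARS = 0.25, 8         # hypothetical appreciation rate and holding period
INCOME_TAX, CAP_GAINS_TAX = 0.37, 0.20

# Path (a): paid as cash up front. Income tax first,
# then each year's gain gets taxed as it accrues.
cash = PRINCIPAL * (1 - INCOME_TAX)
for _ in range(YEARS):
    cash += cash * GROWTH * (1 - CAP_GAINS_TAX)

# Path (b): held as appreciating stock. Grows untaxed, with one
# capital-gains bill at the end. (Simplification: treats the full
# exit value as gain.)
stock = PRINCIPAL * (1 + GROWTH) ** YEARS
stock_after_tax = stock * (1 - CAP_GAINS_TAX)

print(f"taxed along the way: ${cash:,.0f}")
print(f"tax-deferred:        ${stock_after_tax:,.0f}")
```

Even with made-up rates, the shape of the result is robust: deferring the tax bill to the end, at the lower rate, leaves substantially more wealth compounding in the meantime.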

(Here, too, there’s an insurance parallel. If your house burns down, your insurance payment can be as high as it is because your premium payments grew tax free all those years, pooled with everyone else’s. Life insurance is even more explicitly a tax-advantaged investment vehicle. The IRS pays you to be insured.)

Take a minute to understand this swap. Both parties win here. Employees get to take advantage of something the business can’t: substitution of income tax for lower and deferred capital gains tax. Startups can’t take advantage of any equivalent, as fast-growing, money-losing businesses that have no earnings to tax in the first place. Also, they’re not people. But employees sure can; even the hefty tax bill you owe at the end is way better than if you’d been paid all along in cash, or with other substitutes like phantom shares.  

Meanwhile, the startup gets to take advantage of something the employee can’t: all of that float. Yes, the employee could put that cash to work, but probably a lot less effectively than the business can. To you, a dollar invested into wealth creation is going to eke you out some return in your 401k, which is perfectly fine; but to a company that’s doubling in size every year and running on VC jet fuel, every dollar you can borrow from your employees is worth ten or more down the road if you succeed. To you, it’s just cash, but to the startup, it’s leverage. 

That’s why I think the insurance analogy is actually a pretty good one. Insurance isn’t only about pooling resources and risk, it’s also about putting money to work in a really powerful and tax-optimized way, so that the payoffs - when they come - can ring the bell especially hard. 

At their best, ESOPs do the same thing: they are genuinely value-creative, all just by organizing people, their money, and their ownership of a business in the right arrangement. 

ESOP compensation can make businesses look better than they really are

If only it stopped there. Some businesses take this little swap one step further, and add some sleight of hand to it. 

The short version of this sleight of hand is: if you’re paying a large percentage of comp in stock instead of cash, it’s pretty easy to misrepresent the operating economics of your business as being better than they are. Look, you say, in this time period, I made X much revenue with only Y much expense, employees are happy, and the business is happy! 

Sure, but the expense is there - it’s just on your balance sheet, instead of on your income statement. This trick used to be standard practice, and although changes to GAAP rules in the mid-2000s made it harder, it’s a bit of an open secret in the tech industry that we still do this; just a little less, and under legal cover. 

Here’s the longer version. If you issue stock options to your employees as part of the salary you’re paying them, then in a general sense they are an expense, just as wages are. Where do they show up in your accounting of how your business works, when you present it to investors and outsiders? And how much are they worth? This is a harder question to answer than you might think, because valuing options on private-company stock isn’t so straightforward. You can make a case that they’re super valuable, or that they’re worthless. 

There’s a classic dilemma here: on the one hand, you want your employees to understand that these options are valuable, so that you can effectively use them to negotiate compensation. You’d also love for the IRS to understand that these options are valuable, so you can claim them as expenses against your income. On the other hand, you don’t want to present them to your shareholders like they’re this huge expense, because it makes your business look less impressive. Ideally, you’d want to present them as not even an expense at all. 

Until the early 2000s, you could literally do that. When businesses advertised stock options to their employees, and calculated expenses for tax purposes, they quoted the market value of the option: the fair value that someone would pay for it if offered. Even if you’re selling options at-the-money (an option to buy a $35 stock for $35), their fair value is obviously higher than zero, both because there’s value in the volatility and because there’s value in the ability to wait.

But when it came time to calculate expenses for quarterly earnings, they'd account for their stock option grants using the intrinsic value of the option. You could say, look, our share price as of last round was $35, and we issued these options to buy shares at $35, so clearly their value at that moment in time was 35 minus 35, which is zero. Up until 20 years ago, this was pretty widespread practice, and you can see how even if legally permissible, it was kind of dishonest.
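To put a number on the gap, here’s the textbook Black-Scholes price of that at-the-money $35 option. The maturity, risk-free rate, and volatility below are hypothetical inputs I’ve picked for illustration, and employee options aren’t really European calls, but the contrast with the “intrinsic value” answer survives the simplification.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def black_scholes_call(S, K, T, r, sigma):
    """Fair value of a European call: a rough stand-in for an employee option."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S = K = 35.0                  # at-the-money: $35 stock, $35 strike
intrinsic = max(S - K, 0.0)   # the old accounting answer: exactly zero
fair = black_scholes_call(S, K, T=4.0, r=0.02, sigma=0.6)  # hypothetical inputs

print(f"intrinsic value: ${intrinsic:.2f}")
print(f"fair value:      ${fair:.2f}")
```

Notice that sigma does most of the work here: rerun this with sigma=0.2, the kind of volatility a stable, boring public comp would justify, and the reported fair value shrinks by more than half. That’s the modern, legal version of the same trick.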

Eventually the hammer came down, as the Financial Accounting Standards Board (which writes the GAAP rules for American companies) issued a directive saying, we see what you’re doing. This is not a victimless crime; it’s misleading to investors, who own less of your future earnings than they believe. You need to calculate a fair market value for these options, and then account for them correctly in your income statement, or else face GAAP noncompliance. So nowadays, when companies issue stock options to employees, we actually go through the trouble of calculating a fair market value for those options. 

But it’s this big open secret in tech that we still mischaracterize their value to some extent. These are private companies, so you can’t actually measure the inputs you need in order to calculate fair value; you have to infer them from public-company comparables. But startups and their public comps are not at all the same! Your startup is volatile by design, whereas you can go find the most stable, boring comps that a stretch of the imagination will allow, and use them in your math. It’s entirely silly to believe that a startup’s closest comparables for option-valuing purposes are other companies in its sector, rather than other companies at its stage. 

Your valuation has to stand up to audit, but that’s not hard. There’s no way to actually calculate this number definitively, yet there are tons of ways to defensibly come up with estimates that everyone knows are way low, but where the math checks out. (Meanwhile, it helps lower the taxes withheld from employees when they receive their options - a much smaller bill than the taxes they pay if the company wins big, but still meaningful: everyone pays those, not just the winners.) 

This is a bit shifty. It’s less egregious than before the FASB ruling, but it’s the same kind of misrepresentation - just to a lesser degree. Of course, VCs understand what’s happening perfectly well, and they know how to account for it. They look at the cap table, and option pool size, as a way to understand how much the business relies on equity comp. In fact, they encourage it; it’s a win-win deal for the company and the employees that they want to support. 

But public market investors, who expect compensation expense to show up on the income statement, aren’t used to having to go dig for it, hidden in your balance sheet. To be clear, it’s much less bad than it used to be; but it’s still a thing: heavy ESOP compensation makes the operating characteristics of businesses look better than they really are. And that makes prospective investors believe they’re buying a larger percentage of the business’s future earnings than they really are. It turns out that free money was your money. Oops!

ESOPs help defy gravity, just like everything else about startups

There are some noteworthy smart people out there, whose opinions I respect, who do not like stock-based comp at all: as the line goes, “in private companies they’re a scam on employees, and in public companies they’re a scam on shareholders.” Hopefully at this point you get the basis for these arguments, even if you don’t agree with them. 

Here’s how I think of it. Again we come back to this recurring theme of this newsletter, “startups aren’t economically sensible”: startups don’t really work unless you have a way to temporarily turn gravity off. We’ve talked about lots of ways that Silicon Valley has learned to do this: angel investing for social status as a subsidy; VCs learning bidding conventions and signalling, plus portfolio construction that helps the math work; social contracts in the tech community that reduce friction; the recurring revenue business model; and more. 

The stock option swap / sleight of hand we’ve talked about today is yet another one of these things. It’s a clever arrangement between the tax code, the cap table, and investor assumptions that helps startups defy gravity just a little bit longer. Is it a bit of a hack? Of course it’s a hack. Early on, it’s a hack on employees being all-in with startups; later on, it’s a hack on public market investors and how they look at companies differently from VCs, which kind of reverses that previous hack while keeping gravity turned off for a little while longer. 

So what’s the point? Well, I warned you, there is no point. The point is for me to procrastinate on doing my taxes, and I think we’ve achieved that. But if you’re not already fluent in this stuff, hopefully this illuminates a few of the mechanics going on underneath the surface of employee stock options, and the incentives and alignment they do (or don’t) supposedly create. 

Permalink to this post is here: Employee Stock Options: free money, kinda |

Several weeks ago in this newsletter, we talked about Counterfeit Food, and more generally about how the rise of aggregators is creating new forms of distance between consumers and what they consume, and hence new opportunities for counterfeiting.

Check out this story from the music industry, which predictably faces a similar problem:

Fake artists have billions of streams on Spotify, and for labels like Sony, if you can’t beat ‘em, join ‘em | Tim Ingham, Rolling Stone

This is a pretty strange accusation, and I have no idea if it’s true (or what complicating factors I might be ignoring), but the claim is that Spotify themselves are incentivized to stream fake music in order to lower contractual payouts to real artists on real labels:

The hypothesis, later confirmed, was that many of these artists were, in fact, “fake” (i.e. pseudonymous) names attributed to tracks created by composers signed to Epidemic Sound, a Swedish “production music” house. The unproven inkling amongst major music labels was, and remains, that Spotify pays a lower royalty rate for these songs than it does for tracks from “real artists” vying for the same playlist spots.

Why? Because Spotify pays out royalties on a pro rata basis. This means – as explained on Rolling Stone previously – that the firm divides its total industry payout across the entirety of artists on its platform, based on their portion of overall streams. The important bit: if “fake artists” are paid lower contractual royalty rates than “real” acts, and then, driven by playlist inclusion, claim a certain percentage of Spotify’s total monthly streams, Spotify ends up keeping more money. An ex-Spotify insider was once quoted by Variety as suggesting that this was a deliberate company strategy: “It’s one of a number of internal initiatives to lower the royalties [Spotify is] paying to the major labels,” they said.
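The arithmetic of the claim is easy to sketch. Every number below is invented: the pool size, the stream counts, and especially the discounted “fake artist” rate, which is exactly the unproven part of the allegation.

```python
POOL = 1_000_000             # hypothetical monthly royalty pool, in dollars
TOTAL_STREAMS = 100_000_000  # hypothetical total streams for the month

REAL_RATE = 1.0              # real artists get their full pro-rata dollar
FAKE_RATE = 0.5              # alleged (unproven) discount for production-music tracks

def total_payout(fake_share: float) -> float:
    """Total royalties paid when `fake_share` of streams go to 'fake' artists."""
    per_stream = POOL / TOTAL_STREAMS
    fake_streams = TOTAL_STREAMS * fake_share
    real_streams = TOTAL_STREAMS - fake_streams
    return real_streams * per_stream * REAL_RATE + fake_streams * per_stream * FAKE_RATE

print(f"all real artists: ${total_payout(0.00):,.0f}")
print(f"10% fake streams: ${total_payout(0.10):,.0f}")
print(f"kept by Spotify:  ${POOL - total_payout(0.10):,.0f}")
```

Under the pro-rata scheme, every cheaper stream that displaces a full-rate stream shrinks the total payout, and the difference stays with the platform. That’s the whole incentive in three lines of arithmetic.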


And finally, the comics section. This week, courtesy of the great state of Florida:

Have a great week,

