Progress, Postmodernism and the Tech Backlash

Two Truths and a Take, Season 2 Episode 5

Here are two aspects of the anti-tech backlash that I believe are both true, and are actually reciprocally related to each other:

  1. Critics in media, politics, and even in tech itself, who spend all day in the echo chamber, usually overestimate how many people out in the real world actually believe that Silicon Valley internet companies are villains. As anti-tech rhetoric gets louder, we perceive it as more widespread than it actually is. 

  2. Conversely, tech leaders don’t appreciate how resonant and cohesive the anti-tech movement actually is. As we cordon off the current backlash to a subset of critics, we fail to appreciate what exactly this movement is about, and what it stands for. 

Anti-tech sentiment is far from a universal stance. But it’s more coherent, and therefore more dangerous, than I think most tech leaders realize. To really understand this movement, you need to recognize it as part of a reaction to something bigger than tech. It’s a rebellion against postmodernism.

This post is going to cover a lot of ground:

  • How innovation overtook progress

  • The resonance and coherence of the anti-tech movement

  • The rebellion against postmodernism

  • Guess which tech leader is sort of a Marxist?

What is postmodernism, and how did we get here?

100 years ago, we were right around the peak of a movement that straddled the 19th and 20th centuries called modernism. Modernism is a big idea: it’s a way of looking at the world, and of thinking and acting in it, that subsumed more or less everything happening in the west. You know that joke where the fish asks, what is water? Modernism was like water. 

Modernism fundamentally cared about progress. Your impression from looking around the world, even just looking outside your window, was of definite, forward progress everywhere. Houses went from dark to light. Travel went from slow to fast. Infection went from deadly to curable. 

Whatever the challenge, modernism promised: Make it New! This impression came from everywhere. We saw it in art, literature and architecture; we saw it in political movements; we saw it in the transformation of neighbourhoods as they came alive with electricity, gas and plumbing. We also saw it in terrible things, like the Great War, where soldiers were killed in unthinkable numbers. More than anything else, the modernist mindset had a strong concept of forward. 

Then came the hangover. Postmodernism began as a conscious reaction to modernism: disillusion with absolute ideals and unstoppable progress; new emphasis on subjective experience and relative change. Postmodern art and culture emphasized a meta-awareness of the old utopian ideals, often by mocking them. New was out. Irony, remixing, and self-reference were in. 

Eventually postmodernism just kind of became everything, just as modernism had 50 years before. The way we looked at the world got reoriented around the viewer, the user, and especially the customer. The common threads that tied postmodern art and culture to 20th century consumer capitalism were two forces: commodification and transformation. In art, this looked like print media and other creative forms of remixing and self-aware reproduction: think Andy Warhol, R&B sampling, and Pulp Fiction. 

We did it for art, and then we did it for everything else. Over time, we perfected the postmodern “front of house / back of house” service delivery model. The back of house grinds out commoditized ingredients, and then the front of house crafts an experience, transformed and re-transformed to delight the customer.  

One important idea to grok if you want to really get postmodernism is Baudrillard’s concept of simulacra: reproductions of an original which no longer exists, or never existed. Walk into a Whole Foods or Starbucks and you’ll see it: the whole place has been crafted, complete with faux-authentic food crates and coffee bean sacks, to recreate this farm-to-market experience that stopped existing a long time ago. We know it’s not real, but we appreciate it. (For more on this, read Venkatesh Rao’s essay The American Cloud.) 

Once you figure out simulacra you start to see them everywhere, because we increasingly interact with representations of things, rather than things themselves (brands, media, interfaces, and especially digital technology). Our daily lives are a collage of them, and it's actually pretty pleasant. It feels nice to click the save icon, even if the floppy disk has no tangible meaning anymore; and really, neither does “saving” a file at all. It’s nice in the same way that Starbucks is nice.

This all happens against a backdrop of capitalism. On the front end, our consumption gets increasingly varied, low-commitment, and disposable: we consume in the moment, knowing that tomorrow we’ll get to choose again. On the back end, ownership gets progressively abstracted away, and insulated from its consequences. 

It makes intuitive sense that these two things are related. As our consumption becomes more disposable, we develop a taste for easy variety and differentiation. And the cheapest way to serve up variety is through simulacra: commodify and capitalize the back end; transform and retransform the product offering. Give it enough iterations, and you get the airline industry: lose money flying planes so you can make money selling imaginary credit card tiers.

A real-life example that we’ve talked about in this newsletter is cooking-as-a-service. Food used to come from a farm; then it came from a grocery store hooked up to the food cloud, owned by a bank; now it comes from a delivery courier, hooked up to the cooking cloud, owned by a sovereign wealth fund. This is a very postmodern form of progress: food isn’t necessarily getting better, or more nutritious, or even cheaper. But it’s exactly what you want, and the more you like it, the richer someone gets. 

Out with Progress, in with Innovation

I hope you can appreciate at this point that the modernists and the postmodernists have different ideas about how to build the future. The lazy way to think of it is that modernists care about what we can make, whereas postmodernists care about what we can get. This isn’t far from correct, but it misses the real distinction, which is subtler. 

The best articulation I’ve heard is from Peter Drucker’s book The Landmarks of Tomorrow. To Drucker, the modernist saw progress as an assertion of human power. Progress was a demonstration of superiority over the cold, dark and chaotic. It fit seamlessly with the modernist concept of forward: progress was something that inevitably happened, because humans moved forward. 

Drucker understood that the 20th century mindset was different. To the postmodernist, progress isn’t inevitable; it is a leap into the unknown that may or may not pay off. The postmodernist thinks skeptically about risks and tradeoffs, what it means to take that leap, and under what conditions one should do so. This isn’t how 19th century leaders and nations thought about technological progress. This new attitude comes from finance, where everything is understood as a risk. 

This new attitude needed a new word, and we found one: Innovation. As Drucker put it presciently:

Innovation is more than a new method. It is a new view of the universe, as one of risk rather than of chance or of certainty. It is a new view of man's role in the universe; he creates order by taking risks. And this means that innovation, rather than being an assertion of human power, is an acceptance of human responsibility.

As progress becomes a question of assumed risk, rather than asserted power, building the future looks less like a mission and more like arbitrage. Innovation means identifying a window of opportunity to move capital in, assuming the risk of value creation within that window, and then exiting as the opportunity closes. 

The upside of this approach is that any capital, not just mission-driven capital, can participate in building the future. The downside is that capital is mobile: it only sticks around when it can find no better options. The compromise we reach in practice looks like the present-day American Cloud: effective, but alienating. 

Oddly enough, the person who figured this out first was Karl Marx. Admittedly, Marx’s mindset presumes a degree of liquidity that most real-life investors could only dream of. But his basic idea of founders and investors as arbitrageurs of opportunity is correct. The modern progress economy dealt in technological potential and progress, whereas the postmodern innovation economy dealt in windows of opportunity that open and close. 

In other words, Karl Marx understood Venture Capital better than you thought. 

Memes, Fortnite, SaaS, and Simulacra

Life imitates art, especially online. Internet culture is a kaleidoscope of self-referential simulacra, beyond anything the mid-century postmodernists imagined. 

Fredric Jameson wrote in Postmodernism, or, The Cultural Logic of Late Capitalism: "The culture of the simulacrum comes to life in a society where exchange value has been generalized to the point at which the very memory of use value is effaced, a society of which Guy Debord has observed, in an extraordinary phrase, that in it ‘the image has become the final form of commodity reification.’”

He may as well have been talking about the internet, because that mouthful of words is just a fancy way of saying "everything became memes, and then memes became everything". 

The most influential media franchise today, and arguably one of the most important software products today period, is a video game that makes billions of dollars a year by selling remixed dance moves and cool-looking weapon skins. The postmodern work ethic is hard at work online: commodify the back end, transform and retransform the front end. Sound familiar? It’s the dominant motif for value creation in the tech community: the Platform and Applications cycle. 

(From Dani Grant & Nick Grossman - The Myth of the Infrastructure Phase)

I’ve written about this before, both in the newsletter and in Scarcity in the Software Century: software is a fantastic template for incrementally building the future, because it pulls itself forward in a positive feedback loop I call the abundance cycle. Platforms and applications pull each other forward into existence, and in doing so, building the future becomes a matter of fulfilling demand that either already exists, or can at least be seen clearly. It’s a bit of a random walk, but it really gets us somewhere. 

Today’s tech industry really is a triumph of the postmodern work ethic. No one cares about your product; we care about your adoption. No one cares about what your technology does; we care about what problems it solves for users, and how fast you can grow. The first commandment of tech is Build Stuff People Want.

The second commandment is Don't Reinvent the Wheel Every Time. There’s a reason why tech products have a unified look and feel: they’re made out of 99% the same parts. There’s more to it than selling skins on Fortnite, but shipping software and internet products mostly means transformation and retransformation, A/B testing and optimization, as the product eventually becomes the copy of a copy of an original that’s been lost in time. 

The upside to this approach is that it gets you somewhere. But the way you get there, for the most part, is through combinatorics: trying out new skins and new interfaces for a deck of perpetually shuffled cards. The postmodernist looks at a mega success story like Uber, and sees a triumph of innovation. Uber seized a window of opportunity to reinvent transportation as an on-demand service. It's mobility on tap, like running water. Who wouldn’t want this?

Then the modernist looks at Uber and asks: but where is the progress? It’s still the same car, and it still needs a driver. It moves the same speed, burns the same gas, and gets stuck in the same traffic. What has actually changed? 

Peter Thiel, Marxist? 

I think you can generally distill most anti-tech criticism down into two main points.

The first point is “The tech industry is the worst of late capitalism.” This critic argues that the prime directive of tech companies is to move fast and break things, exploit labor, regulatory and geographic arbitrage, and then extract shreds of profit out of dying institutions in the name of consumer convenience. Amazon destroyed retail, Google and Facebook destroyed newspapers, Uber is destroying labor, Airbnb is destroying neighbourhoods; that kind of thing.

The second point is “These are just stupid apps.” This critic argues that we’ve gone all-in on an innovation economy that’s fine-tuned to produce profitable but pointless bullshit instead of solving any real problems. To this critic, the window of opportunity for reshuffling existing stuff will almost always be open wider than the window of opportunity to invent something fundamentally new. 

I know plenty of people in tech who genuinely don’t think that “tech is destroying everything” and “tech will fund superficial over substance every time” are overlapping accusations, or even valid criticism at all. And why would they? To a postmodernist, these aren’t even negatives. Destruction is simply a part of value creation, and the next big thing starts out looking like a toy. 

To the postmodernist, software and the internet are the conclusive answer as to whether or not we’re making any progress. Of course we’re making progress! The S&P 500 is dominated by venture-backed software companies, and even though Silicon Valley did not produce the most globally disruptive technology of the past decade (that would be fracking), the Bay Area is still the consensus cradle of the future. Even the value-shop bears who predict the tech bubble bursting every year will grudgingly admit: it somehow never does. 

And then there’s Peter Thiel. I’ve come to appreciate that one of Thiel’s more interesting and less appreciated points of view on tech is his assertion that we have stopped believing in the future. (Tech people nod along in agreement until they realize that Thiel is specifically talking about us.) Furthermore, he specifically calls out software as part of the problem. He wrote an essay called The End of the Future for the National Review in 2011 where he pondered: 

The economic decoupling of computers from everything else leads to more questions than answers, and barely hints at the strange future where today’s trends simply continue. Would supercomputers become powerful engines for the miraculous creation of wholly new forms of economic value, or would they simply become powerful weapons for reshuffling existing structures — for Nature, red in tooth and claw? More simply, how does one measure the difference between progress and mere change? How much is there of each?

There are a few ways you can read this. One reading sees software as overrated or perhaps even neutral as a progress-driver in society: if it’s just rearranging existing pieces, then it’s probably just rearranging value too. Another reading sees software as useful, but overbought and heading for a correction. 

But there’s a more interesting way that you can interpret this, which is that Thiel sees software as part of a broader culture war. 

My reading of The End of the Future, Zero to One and Thiel’s other writing is as a criticism of postmodernism in general. In this worldview, we lost the path to technological Eden when we stopped believing in the literal and in the inevitable. As Drucker called it 60 years ago, we chose innovation over progress. Today’s founders and VCs have found success by randomly walking into tomorrow, iterating towards any combination of existing pieces that produces a subjectively better outcome for a customer. Hey, it works!

Thiel’s “strange future where today’s trends simply continue” almost looks like a perpetual motion machine, where Silicon Valley eventually gets enough capital and enough momentum spun up that it can keep repeatedly jamming open those windows of opportunity, commodifying and transforming anything in scope into perpetually new simulacra as they pass through: another successfully executed arbitrage. Tech is now wealthy enough to do this self-sufficiently; the only question now is whether we'll get bored.

That’s why the anti-tech backlash of “tech is destroying everything” and “tech will fund superficial over substance every time” matters as a combined narrative. It’s a genuinely coherent worldview, it calls out something bigger than software, and it hits at a particular moment when the United States government has introduced Star Trek-inspired Space Force aesthetics and moved to mandate classical architecture in courthouses. That may not be as random as you think. 

Anyway, all of this is really just a long-winded way of saying: Bitcoin. 

Permalink to this post is here: Progress, Postmodernism and the Tech Backlash

Have a great week,


Debt Follow-up: a guest post from Ali Hamed of CoVenture

Two Truths and a Take, Season 2 Episode 4.5

Hello everyone! We have a bonus episode today, as a thank you for everyone who shared and commented on last week’s post on debt coming to startup financing. Some of you liked it; some of you hated it; and that’s great. And it shattered all records of anything I’ve ever posted before, drowning me in inbound email in the process. 

(Also, right on schedule, check out this $90 million Series B from Astranis, announced on Thursday. I did not know about this ahead of time, but it’s the kind of financing I think we’ll see more of in the future.)

Shortly afterwards, Ali Hamed of CoVenture wrote a nice follow-up piece, and he’s graciously agreed to cross-post it here. So please enjoy this guest post from Ali, as a bonus newsletter episode.

If Michael Milken was 25, today — I bet he’d work at some top tier VC fund and think: why do we have so many sophisticated ways to price equity, yet debt is COMPLETELY unavailable no matter the value/merits of our portfolio companies?

(1) Debt is good to take when a borrower has some level of certainty of a future cash flow, or when it can be secured by an asset. For example: Clearbanc lends money to e-commerce companies who might spend the cash on ad-spend, which then creates revenue, and therefore can pay Clearbanc back. Produce Pay borrows money to then finance shipments of perishable produce, which are nearly certain to be realized into cash, allowing Produce Pay to pay back its own debt.

(2) Debt is bad to take when it’s just used to finance high-risk growth spend — because that high-risk spend does not come with a certainty of cash flow, and so when the loan matures, a startup might not be able to meet its obligations and could go bankrupt.

After all, debt is leverage. When you are confident, you should borrow. When you’re unconfident, you should take equity.
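As a rough sketch, the logic of points (1) and (2) above can be put into toy numbers. Everything here is invented for illustration; the point is only that the same loan is serviceable or not depending on how certain the cash flows are.

```python
# Toy model (all numbers invented) of the rule above: debt is safe when
# the cash flow it will be repaid from is near-certain, and dangerous
# when it finances high-risk spend.

def expected_repayment(cash_flows, probability, loan, rate):
    """Compare expected cash at maturity against the debt obligation."""
    expected = sum(cash_flows) * probability
    obligation = loan * (1 + rate)
    return expected >= obligation, expected, obligation

# (1) Receivables-style lending, e.g. financing produce shipments that
# are already sold: the cash flows realize ~95% of the time.
safe, exp, owed = expected_repayment([40_000, 40_000, 40_000], 0.95,
                                     loan=100_000, rate=0.10)
# expected ~114,000 vs. owed ~110,000: the loan is serviceable

# (2) The same loan funding risky growth spend that only pays off ~30%
# of the time: expected cash falls far short of the obligation.
risky, exp2, owed2 = expected_repayment([200_000], 0.30,
                                        loan=100_000, rate=0.10)
# expected ~60,000 vs. owed ~110,000: default risk, so equity fits better
```

The asymmetry is the whole point: debt caps the lender's upside, so it only makes sense when the downside is nearly ruled out.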

And debt comes in many different forms. Corporate debt, or Venture Debt, essentially lends money to startups who can then use the cash for operations. We at CoVenture aren't huge fans of this, because cash used for operations creates a fairly uncertain outcome in terms of cash flow, making it un-obvious that a borrower can pay back. Often, the underwriting is less about the use of the cash or the cash flows of a startup, and more about the strength of the equity sponsors in the deal. I wouldn’t want to be an entrepreneur who was borrowing on the strength of who my investor was, rather than on the strength of who I was. It creates a lack of control.

There is asset-backed debt, which we think highly of. You can borrow against an asset, meaning that if things go wrong a lender just claims your asset (think a building). If you have assets, you may as well do this. If things go well, you repay the debt. If they go badly, you lose your asset but not your company. This is opposed to selling equity, where you’ve sold part of your assets whether or not something goes well. Seems dumb in comparison. The problem is many technology companies don’t have much in terms of assets.

One important sign that equity and debt are beginning to blend is the premium placed on SaaS revenues. Markets will buy SaaS revenues at high multiples (i) because they are high margin, but also (ii) because of their predictability. “Cheap equity is essentially expensive debt.” So when equity starts to get priced closer to debt because of this perceived certainty, it shows that markets are beginning to give technology companies “credit” for debt-like stability.
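The quoted line can be made concrete with a small sketch (all numbers invented): a revenue multiple implies a yield, the same way a bond price implies an effective coupon.

```python
# A sketch with invented numbers of "cheap equity is essentially
# expensive debt": a high revenue multiple implies a low cash yield,
# the same way a high bond price implies a low effective coupon.

def implied_yield(price_paid, annual_recurring_revenue, gross_margin):
    """Cash yield the buyer earns if the SaaS revenue simply persists."""
    return (annual_recurring_revenue * gross_margin) / price_paid

# Paying 10x ARR for revenue at an 80% gross margin is like buying a
# bond that yields 8%, which is why predictable SaaS revenue can end
# up priced like debt.
y = implied_yield(price_paid=10_000_000,
                  annual_recurring_revenue=1_000_000,
                  gross_margin=0.80)
```

The more predictable the revenue, the more it makes sense to value it on yield rather than on optionality.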

So… how will debt come into the tech world first? For one, it already has a bit. Online lenders started borrowing capital from hedge funds to finance the loans they were originating. LendingClub, OnDeck, Prosper etc. did this first. The new wave is companies like Clearbanc, SecFi, Produce Pay, etc.

Next, you’re starting to see companies like Teampay, Divvy, and Brex begin exploring offering credit to customers as a way to catalyze their sales process. GE built a whole financing business based on this. We’d expect to see a similar dynamic. Essentially, SaaS revenues are EXACTLY this — it’s the financing of an expensive sale (like AWS), where the debt is financed through payback periods, and interest/principal is paid in the form of subscription revenues. It’s not hard to see why SaaS revenues would then get refinanced out later by lenders.

The way SaaS revenues will get financed at maturity is this:

· Putting the contracts into an off-balance sheet SPV

· Writing into SLAs that the contract is between the company and its affiliates

· A stipulation (maybe a confession of judgment) that if the corporation goes bankrupt, the contract and IP immediately transfer to the SPV, which is then owed the cash flow (this is what will make SaaS lending asset-based financing, and not corporate lending).

Eventually, mature unicorn companies with stable business models will begin to issue bonds. There is a lot of talk about how Direct Listings will replace IPOs. I think bond issuances will precede all of them.

A company like Stripe or Airbnb could have easily tapped bond markets. Would you prefer to get paid 5% on a bond issuance from a small $800M public company with lots of leverage, or 8% by Stripe, which would only have $1B of senior securities? And 8% is certainly cheaper than the equity being raised.
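The claim that 8% debt is cheaper than equity works out roughly like this, with all figures hypothetical (the valuation, the stake sold, and the growth multiple are assumptions, not anything Ali states):

```python
# Hypothetical numbers sketching why an 8% bond can still be the cheap
# source of capital for a fast-growing private company.

raise_amount = 1_000_000_000          # $1B raised either way

# Debt: pay an 8% coupon, return principal at maturity.
coupon = 0.08
annual_debt_cost = raise_amount * coupon          # $80M per year

# Equity: sell a 10% stake at an assumed $10B valuation. If the company
# later grows 4x, that stake is worth $4B -- the hindsight cost of the
# equity round.
stake_sold = raise_amount / 10_000_000_000        # 10%
future_value = 4 * 10_000_000_000                 # assumed $40B company
equity_cost_in_hindsight = stake_sold * future_value   # $4B

# Even ten years of coupons ($800M total) is far below the $4B given
# up, which is what "8% is certainly cheaper than the equity" means.
```

For a company confident in its growth, selling a fixed coupon is much cheaper than selling a slice of that growth.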

One challenge is that ratings agencies will have trouble rating bond issuances by high growth companies that are artificially losing money or not producing cash flows as a way to chase new market opportunities. I wouldn’t be surprised to see a company like Carta turn itself into three companies: (i) a SaaS company with positive cash flows, (ii) an exchange that is in R&D and (iii) whatever their next business line might be, which will also be R&D.

The SaaS company could get underwritten on its profitability, and the debt raised could finance the R&D of the other two businesses. Lyft could separate out each of its cities into their own businesses, and raise debt into those subsidiaries as well.

A lot of bond issuances don’t happen, in part, because debt markets don’t know how to underwrite these companies. But also because the VCs on the boards of these companies want to keep plugging more equity into these companies, so why encourage accessing debt markets?

In short — yes, debt is coming to tech. The companies and their business models are more mature than they used to be. VC used to be used to finance risky technology. Now it’s used to finance the implementation of proven business models.

But it’ll take some re-thinking, and will happen in steps.

Special thanks to Ali for offering to cross-post here. You can find his original essay here:

Is Debt Coming to Tech? | Ali Hamed

Also, in case you missed it, you can read this week’s regular issue here:

Can Twitter Save Science?

Have a great week,


Can Twitter Save Science?

Two Truths and a Take, Season 2 Episode 4

Before I found my way to the tech world, I was a grad student in the neuroscience department at McGill University. I never took the opportunity to get my PhD, and left science to do a startup instead. But I still think about it sometimes. 

Basic science is important. The life sciences especially, which are the part of academia that I know a bit, are going to make or break our response to some of the big problems we face right now over the long term. Academic research, being an old and established industry, has evolved some structural problems that are hard to fully grasp, let alone solve. 

Some of these problems are fascinating examples of positional scarcity, which emerges out of abundance and ends up strangling the ecosystem it grew out of. Over the long run, I think some of these problems are fixable, and as a matter of fact, the hero we need is already here: it’s Twitter. 

This week, here are three intertwined stories about positional scarcity from this strange world:

  1. How an “indentured scientist” class called postdoctoral fellows became the workhorses of life science research,

  2. Why scientific journals are one of the greatest positional scarcity business models I’ve ever seen, 

  3. Why Twitter is so great for science, and could lead to real disruption.

Postdocs: the Workhorses of Life Science

For most of the history of academia, the relationship between students and masters was pretty straightforward. First you trained as a student, in some sort of apprenticeship arrangement with one or more senior professors. Once you earned your PhD, which was a difficult and usually rate-limiting step, you went off and found a faculty position somewhere. It was normal to expect some attrition through this process: some candidates drop out as we move through the funnel. But there was a clear start, a clear path, and a clear end. 

In the mid 20th century, which is pretty recently on the academic time scale, governments began to allocate more money into academic research and basic science. University departments grew larger, and administrative work became more of a burden on faculty members. As scientific work became increasingly funded by research grants rather than university budgets, grant writing became a meaningful drain on scientists’ time. 

Grant writing and administration aren't really something you want young scientists to be spending time on during the sharpest years of their career. So a new kind of “stopover” position emerged, in between graduate school and full-time faculty, which solved this problem: the postdoc. 

Postdoctoral Fellows, or “postdocs” for short, were a win-win solution for this problem. Postdocs were a new option for freshly minted PhDs: rather than go straight for a faculty position, with all of the teaching and admin it required, you could instead take up residency in someone else’s lab for a couple of years, and focus only on doing science. Everybody wins here: you get a turnkey platform to do science, without any distractions. The Principal Investigator (PI) who hosts you gets to put their grant money to work, via your inexpensive, high-quality labor, for a couple of years. 

It worked. Over the last fifty years, postdocs have become the draft horses of life science research. They’ve also become the entry point for foreign scientists to enter the US system. They have more experience and autonomy than PhD students, so they can move much faster, and generate more output. If you look at the scientific work being done in academic labs today, the postdocs are directly responsible for a lot of it. 

Some fifty thousand postdocs currently work in US university research labs, usually making $50,000 a year or less after a decade of higher education. It’s become a logjam, as more and more postdocs compete for a fixed number of faculty openings. The “plight of the postdoc” has become a talking point, as their career prospects and collective mental health look increasingly bad. How did this happen?

One underlying problem here is that the postdoc position, even if it makes a lot of sense for any one person, broke the feedback loop between supply and demand for academic scientists. When you finish your PhD, it’s hard to find a faculty position - but you can always find work as a postdoc. So you go do that, and during your postdoc years you work really hard to publish papers and develop a research portfolio that helps you stand out from the crowd. 

Postdocs became the elastic capacity of the academic hiring funnel. The number of faculty openings remains relatively fixed, even as we keep producing more PhD students. But there’s no real limit on how many postdoc positions there can be. (So long as you can fund them, they’re great ROI!) This means that new PhD students will always find a job somewhere. It’s like airplanes circling in a holding pattern above an airport, waiting for landing slots to open up. There’s always room in the Elastic Middle.

In a world of scientific scarcity, when there weren’t that many scientists and there weren’t that many other postdocs, those few years could be great ones. But in a world of scientific abundance, where there’s a flood of new graduates and foreign scientists stuffing the funnel every year, the postdoc becomes hell. You are the elastic capacity of the system, and you are in a zero-sum competition to get out of it. But you have no choice: you have to go through this system, even though everyone agrees it’s horrible. 
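The holding-pattern dynamic can be sketched as a toy simulation (all the rates are invented). The point is only structural: a fixed exit rate plus an elastic intake means the middle of the funnel grows without bound.

```python
# A toy model (numbers invented) of the Elastic Middle: postdoc
# positions are elastic, faculty openings are not, so the backlog of
# postdocs waiting for faculty jobs grows without bound.

def simulate_backlog(years, new_phds_per_year, faculty_openings_per_year):
    pool = 0          # postdocs currently waiting for a faculty slot
    history = []
    for _ in range(years):
        pool += new_phds_per_year                     # everyone finds a postdoc
        pool -= min(pool, faculty_openings_per_year)  # a fixed few land faculty jobs
        history.append(pool)
    return history

backlog = simulate_backlog(years=20, new_phds_per_year=9,
                           faculty_openings_per_year=4)
# The queue grows by 5 every year; after 20 years, 100 postdocs are
# circling the airport, with no feedback loop to slow new entries.
```

Nothing in the system pushes back on intake, which is exactly why "everyone can find a postdoc" and "postdoc life is hell" are both true at once.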

This is more important than just some sob story. If you care about scientific progress, well, these are the people who are doing the science. Our life science research is largely carried out by these stressed out 30 year olds, with young families and no money and horrible career prospects. They can’t really bypass this positional scarcity trap though, because of another issue: journals. 

Academic publishing

Traditionally in academia, publishing was just a task that academic societies did out of necessity. Journals served a basic but useful purpose, which was disseminating information to scientists quickly and efficiently, at a time when it was otherwise hard to do that. They were a passive vehicle to convey discoveries and discussions that were worthy of sharing. The research society had power and influence over the field, but the journal itself? Not that interesting. 

The journal did one important job, which was peer review: acting as a coordinator and judge for independent scientists to review, endorse or reject your work on the basis of its scientific merit. Peer review is an old tradition, and it lives on in more or less that form today. And it’s important! Science is complex and opaque; it’s hard to establish any kind of absolute truth about anything, but peer review at least gives us a protocol for how to do our best. 

In the mid 20th century, as national governments began to ramp up their research spend, suddenly we had this new problem: more science was getting done than could fit in each field’s journal. So obviously we needed more journals. Publishers were happy to create new ones, which kept pace with the escalating complexity and specificity of life science research during that time. Now, instead of there being one or two journals you had to follow in your field, there were five, then somehow fifteen.

Journal publishers saw two opportunities here. First, they understood that subscriptions were pretty much mandatory for universities. Professors demanded access in order to keep up with their field, so the university libraries had no choice but to subscribe, and take whatever price was offered. The buyer (the PIs) and the payer (the library) were two different entities, so universities had no effective way to negotiate against journals without angering their faculty. So they didn’t. 

Second, they understood that in a world with a lot of journals, the relative rank between the journals suddenly mattered a lot. Journals no longer had a mandate to publish whatever was submitted to them. A hierarchy emerged, where the top tier of journals like Cell, Nature and Science could be selective with what they published. A pecking order beneath them fought for this metric called impact factor, which is still how we rank journals today.
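For what it’s worth, the impact factor itself is a simple ratio: citations received in a year to a journal’s articles from the previous two years, divided by the citable items it published in those two years. A quick sketch, with a hypothetical journal:

```python
# The impact factor has a simple definition (per Clarivate's Journal
# Citation Reports): citations received this year to the journal's
# articles from the previous two years, divided by the number of
# citable items it published in those two years.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2019 to its 2017-18 papers,
# of which it published 300, giving an impact factor of 4.0.
jif = impact_factor(1_200, 300)
```

A crude metric, but crude metrics are exactly what positional hierarchies are built on.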

The publishers were kingmakers, and they knew it. They’d captured a double feat of positional scarcity: all of the research mattered as far as what scientists needed to keep up with in their field (or else fall behind their peers), so everybody needed to subscribe to everything. But the top journals mattered more in terms of career advancement and grant funding, so everyone had to fight for - and recognize - that ranking order. 

The outcome looks like something we’ve seen earlier: the scientific research funnel, which used to flow directly from “research gets done, and some of it’s worthy for publication, so it gets published”, now has this intermediate step with unlimited elastic capacity. Research gets done, and then it all gets published somewhere, so there’s no negative feedback to limit the amount of research that makes it into journals. But only some of it really matters, and it’s a zero-sum contest for everybody to break out of the Elastic Middle and get published in Cell.

The academic journal business model is a funny one, because the journals themselves don’t actually do much work. The content is produced by PIs, for free, who apply for publication in hope of getting selected. Other PIs who review and curate submissions also work for free: it’s considered a part of academic duty, and prestigious to accept but disastrous to decline. 

In short, aside from the cost of ink and postage, academic journals deal in one thing only: positional scarcity. They sell their brand to researchers on both sides of the transaction, and collect money and work from both sides of the transaction. (Oh, and they sell ads, too.) It’s a business model unlike any other I’ve ever seen. 

In the past few decades, the academic publishing market was rolled up by giants like Elsevier, Springer and Wiley/Blackwell, who now collectively own the majority of life science journals. The consolidation gave them an even better business model: selling subscription access as one giant bundle, which is a pretty much obligatory purchase for any university. 

The internet didn’t change much: libraries could not cancel their subscriptions on pretext that their faculty members could download the articles illegally, after all. The brief wave of Open Access Journals, which promised free article access to readers, was a lot less disruptive than advertised. Journals simply demanded payments from the authors, rather than the readers. All this did was shift which university budget the fees came out of, and the open access revolution’s first iteration stalled after a few years. 

From symbiont to parasite: the evolution of for-profit scientific publishing | Peter Walter & Dyche Mullins, Molecular Biology of the Cell

In recent years, it’s become acceptable in the mainstream for scientists to complain about the publishing business model. The University of California library system took a brave stance by cancelling their Elsevier subscription, and US and European policymakers are advancing a few different initiatives aimed at weakening publishers’ grip on science. But all of them focus on pricing or access specifics around the science itself, rather than on what I think is the real underlying moat. 

The real moat isn’t talked about as much, because it’s much more of a taboo topic: the interdependence between publishing and career advancement. Journal access is one thing, and I expect we’ll keep making progress on making scientific findings more available. But journals do something more entrenched, and in my opinion worse, for science: they’ve become the universally recognized yardstick for evaluating scientific accomplishment and building personal brand. 

The Brand Tax

Let’s go back to our poor postdoc for a minute. Our postdoc has now put in 8 to 10 years of undergrad and graduate effort, and hasn’t yet earned a real professional salary despite being highly skilled and in her late 20s. By the time she’s finished her PhD, she has so much sunk cost and so much career reputation at stake that she can't quit now. At this point, all of her friends and peers are other grad students and other researchers, so leaving academia isn't an option. 

Her job for the next two years is actually pretty straightforward: build a brand. Building a brand is how you escape the positional scarcity of the Elastic Middle, and get hired for real. So how do you do that? Well, you want to establish a few things: that you do good science, that you’re smart, that you have a point of view and a vision for the field, and that people should read your work and listen to you.

Unfortunately, you can’t communicate any of those things directly, because no one knows who you are. So you have to communicate them by proxy, and the way you do that is by getting published in Cell, Nature or Science. And the way you get into those good journals is to go work as a postdoc in a lab that can get you there. 

Of course, everyone knows at some basic level that what journal published you doesn’t really determine the quality of your research, or your potential as a scientist. But we accept it as proxy. Science is tricky and opaque. As much as we try our best to identify undiscovered research and scientists, at the end of the day we mostly fall back on the journal brand names, and the labs who can get into them, as yardsticks for quality. 

This is why our postdoc has no choice but to surrender the sharpest years of her career to a lab that’s not hers, which has no loyalty to her, and barely pays her. The only way to build a brand in science is to pay a tax to those who already have it. This isn’t just a trivial hoop to jump through; it is the dominant concern of the people who are actually doing all of the science. 

The entire business model of academic research is leveraged on this constraint. It's why postdocs get paid so little, and therefore, why postdocs have ended up as the industry workhorses. The sheer amount of output that they have to produce inflates how much research gets published, which in turn makes getting into those top tier journals even harder. 

That’s why the positional scarcity held by elite journals and established PIs is so difficult to displace. The PIs may complain about the journals’ business models, but they’re deeply aligned when it comes to what the PIs actually care about, which is preserving the proxy reputation system that keeps them in charge. Senior researchers spend hardly any time doing science anymore; pretty much their entire job is writing grants, reviewing papers, and working the system. Their priorities are pretty evident.

The real shame in academic publishing, if you ask me, isn’t Elsevier’s 35% profit margin on journal subscriptions. It’s the much larger amount of money, time and influence that is regressively taxed from the young scientists to the old ones, in exchange for nothing but brand access. So long as journal publication remains the yardstick that matters, then no matter what legislation gets passed or conventions get tweaked, I doubt that the overall structure of the ecosystem will change that much. It’s bad for science, and by extension, bad for all of us. 

That being said, there is something that could actually be disruptive to this setup: Twitter. 

Why Twitter Matters

I was in grad school just before academic Twitter became a thing, and if we’re being honest, had it come just a few years earlier my entire career might have turned out differently. Twitter matters because, for the first time, young PhD students and postdocs have a way to build their brand directly, and eventually acquire peer review directly.

If you are a young student, you can go hang out in senior scientists' mentions on Twitter and be a reply guy, and if you’re smart and thoughtful and have insightful things to say, you’ll get noticed! Twitter is especially empowering for a particular kind of person who’s unique and might stand out weirdly, and in normal settings that uniqueness would be a problem - but on the internet, it’s an incredible asset. 

Science is full of those kinds of people. They’re the ones who do the best science! But the journal / postdoc / brand building regressive tax hits those people especially hard, because it’s a game they probably don’t play very well. The structure of academic science today is leveraged on there being no way around that positional scarcity. But Twitter is a way around.

The first real application of Twitter in science was pretty straightforward: sharing what you’ve published in an effort to boost citations and other conventional metrics of success. Academics being academics, of course, there’s been a pretty funny (but understandable) push to try to standardize and codify Tweets as an “alternate method of scientific legitimacy”, basically trying to recreate citation rankings and impact factor but with faves and RTs:

Introducing the Twitter Impact Factor: An objective measure of urology’s academic input on Twitter | Diana Cardona-Grau et al.

T factor: a method for measuring impact on Twitter | Lutz Bornmann, Robin Haunschild

The thing is, part of the reason Twitter works so well is that it’s not as codified and explicit as a journal ranking system. It’s informal, and that informality makes it a lot more real. People can get to know each other so much more easily now, increasing the likelihood that smart scientists who ought to find each other actually do. You can just go participate directly in real-time scientific discourse and relationship building, skipping everything in the middle. A piece in Nature last year almost got it right: 

Perhaps the most obvious, and most important, aspect of Twitter is that the platform facilitates a closer, more informal connection between scientists. It can be difficult to see the true nature and personality of authors through the mountains of academic papers they produce. Getting a more human perspective on the big shots we look up to can be refreshing; we can learn about both their science and their wider views, hobbies and the like. By having a more personal line of communication with each other, rather than relying on e-mail correspondence, scientists can connect and form fruitful relationships more easily.

I say almost got it right because it doesn’t quite hit what’s most disruptive about this new setup: it’s not that the established PIs who already have brand recognition can be seen as more approachable; it’s that students and postdocs who don’t yet have brand power can use Twitter to acquire it. This is sneakily dangerous! It’s a way around the Elastic Middle. And when the value proposition of the Elastic Middle starts to erode, all of that positional scarcity that’s been built up over decades may soon have less value. 

The first crack in the system, which is already happening, is young researchers and labs sharing their preprint results and publications directly on Twitter - initially after peer review and publication, with stuff that’s already in journals. I bet you soon we’ll see people share stuff that hasn’t been peer reviewed, either to get there first (if you’re racing for a discovery) or, quite simply, because Twitter is peer review. 

Look, for instance, at this Twitter thread of scientists going through Coronavirus research in real time. This kind of discussion used to only be able to take place in private email threads between elite scientists, or in slow, back-and-forth editorials in journals, also between elite scientists. Now anyone can participate!  There is no barrier to putting yourself out there and contributing.

And if you had doubts as to whether the scientific integrity of these discussions suffers outside a supervised, formally peer-reviewed environment, it looks more like the opposite is true: it’s like real-time Wikipedia, for research that’s happening now. This is what real change looks like, and it’s worth connecting the dots between what’s happening on Twitter and the incumbent positional scarcity that’s slowly strangled academic science. The cure may already be here. Just give it a few years. 

Permalink to this post is here: Can Twitter Save Science?

Have a great week, and keep an eye out for a bonus newsletter episode: a guest post all about debt from Ali Hamed of CoVenture, right after this.


Debt is Coming

Two Truths and a Take, Season 2 Episode 3

Ten years from now, what seismic change will we reflect back on and think, “well that was pretty obvious, in retrospect”?  

Debt is going to finally come to the tech industry. 

We can hate it, we can criticize it, we can raise the alarm about how dangerous debt is to the VC model we’ve honed to perfection over decades. Or we can see this moment for what it is: a turning point into a new deployment period for software and the internet. Debt is coming, whether we like it or not. And I’m actually pretty excited for it. 

The Deployment Period

When people in tech want to sound smart, one name you can drop is Carlota Perez. Her book Technological Revolutions and Financial Capital is a rare accomplishment: it’s a top-down “grand theory” book about the innovation economy, written by an academic rather than an on-the-ground practitioner, that actually gets things right. Read it alongside Bill Janeway’s Doing Capitalism in the Innovation Economy, the book that’s most influenced my own thinking.

Technological Revolutions & Financial Capital explores the relationship between Financial Capital (the equity and debt that’s owned by investors) and Production Capital (the factories, equipment, processes, and other real-world concerns which financial capital owns). Perez’s core message in the book is that Financial Capital (FK) and Production Capital (PK) have changing but predictable relationships with each other in distinct phases of technological development and deployment. 

There’s a recurring dynamic of how FK and PK perceive each other and work with one another. Jerry Neumann’s explanation is good: "My long-ago operations research textbook had a cartoon showing one MBA talking to another: 'Things? I didn’t come here to learn how to make things, I came here to learn how to make money.' This is the view of financial capital. The view of production capital is exemplified by Peter Drucker: 'Securities analysts believe that companies make money. Companies make shoes.’” 

In the first phase of a technological revolution, which she calls the “Installation Period”, the relationship between FK and PK is fundamentally a speculative one. The new technology is exciting, and the market opportunities are large but unknown. Speculative investment, with ambitious but inexact expectations of financial return, is important fuel for founders who build the unknown future. However, investors and operators are often deeply misaligned: investors think in bets, while operators think in consequences. The relationship is tense, but can be explosively productive. The VC model is an institutional expression of this tension. 

In the Deployment Period which follows, FK and PK recouple. We reach a turning point away from speculative financing and towards more aligned investment, where capital gets put to work less exuberantly and more deliberately. The investor, at this point, has a good understanding of the assets that they’re buying and the cash flows that they will generate. The operator has reasonable expectations around cost of capital, and a tried-and-true game plan for how to put that capital to work making shoes. This does not look like VC. It looks like regular finance. 

Meanwhile, the Deployment Period is usually when the peace dividend of emerging technology starts to really pay off. Tech is mature and ubiquitous enough that it starts to get deployed everywhere, in a way that’s especially helpful for smaller customers, who now finally have access to the same tools and the same firepower as their bigger rivals. We enter an era of abundance, where technology creates far more value for its customers than for its vendors. FK transitions away from speculative risk capital and towards boring, deliberate underwriting. 

Are we there yet? Well, yes and no. “Tech” is not a monolith industry. Silicon Valley angels and VCs still live out in the speculative future, funding wild bets with out-of-the-money call options. At the same time, big tech incumbents can put capital to work at scale, with little guessing involved. Furthermore, we’ve entered the “let a thousand flowers bloom” era of online companies. A new generation of small businesses has learned to take full advantage of software and the internet, understands their customers, and knows how to put capital to work serving them.

There are three tech industries today, and two of them are solidly in the deployment period. If you want to put $100 million to work, you could lend it to Andy Jassy or Sundar or Satya and say “Go build a data centre with this”. (Or, even better, securitize it.) Startups used to build or buy their technology stacks; now they rent them. The “peace dividend of the cloud wars”, currently between AWS, Azure and GCP, means that startups can increasingly choose to move most of their technology off their balance sheet forever: the AWS bill as the new electric bill. That’s production capital. 

Or you can lend it to Shopify or Clearbanc or Stripe Capital and say, “Go arm the rebels with this.” If your business is making shoes and then selling them online, then you can go get funding that’s committed to help you make shoes and then sell them online. Small merchants are getting access to the same tools, and eventually the same capital, as big giants. There’s no speculation involved: the lender, the platform and the merchant all know pretty much exactly where the money’s going, and what’s expected of them. That's production capital.

Of course, you could also take that $100 million and commit it to a VC fund. Then it's back to the Wild West of speculative equity financing, out-of-the-money bets, and “we can’t know until we try.”

Or is it?

Recurring Revenue

The recurring revenue business model, which everyone in tech knows well by now, may feel mature. But I promise you: we’re only in the early days of its second-order consequences.

The model got off the ground after the dot com crash, from logical origins. The “pay as you go” model for subscription software is great for customers, who no longer need to shell out an up-front payment like they had to in the days of packaged software licenses. The software purchase already comes pre-financed, baked into the SaaS model. The downside to this model is that it takes longer for startups to reach positive cash flow. As Bill Janeway explains:

While the SaaS model made it radically easier to sell software and to forecast reported revenues as contractual payments were made over time, it came with a cost. [Salesforce.com] was the first enterprise software company characterized by sound operating execution to consume more than $100 million of funding to reach positive cash flow. Now the poor start-up was in the role of financing the rich customer. Funding from launch to positive cash flow for a SaaS enterprise software company runs from that $100 million to twice as much or more, some five times the $20–30 million of risk equity once required to get a perpetual license enterprise software company to positive cash flow.

VCs have happily stepped in with the cash. The SaaS model was a great way to deploy capital: these new businesses incur an (often large) up-front expense to create a customer, and then harvest a (fairly predictable) stream of income from the customer they’ve created. Any one customer may be unknowable, but cohorts of customers can be modelled and understood decently well. VCs have successfully expanded this template to adjacent business models like marketplaces, recurring shared value transactions, and all sorts of consumer businesses. 
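To make “modelled and understood decently well” concrete, here’s a minimal cohort sketch. Real models are far more granular, and every number below is invented: the idea is just that while any one user is unpredictable, a cohort’s decay curve is not.

```python
def cohort_revenue(n_customers: int, arpu: float, monthly_churn: float, months: int) -> list:
    """Project a single cohort's monthly revenue, assuming flat
    per-customer revenue (arpu) and a constant monthly churn rate."""
    stream = []
    active = float(n_customers)
    for _ in range(months):
        stream.append(active * arpu)
        active *= (1 - monthly_churn)  # some fraction churns out each month
    return stream

# Hypothetical cohort: 1,000 users paying $50/month, 3% monthly churn.
stream = cohort_revenue(1000, 50, 0.03, 24)
first_year = sum(stream[:12])
```

The whole point of the exercise: once you trust the churn assumption, the cohort’s future revenue is a curve you can underwrite, not a guess.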

As this new model came together, the word “user” became the most important word in tech. People on the outside sometimes wonder why businesses with so few traditional assets seem to require so much financing. Well, they are accumulating assets: users are the new assets, and their use is what you’re out to monetize. Whatever your business model is, acquiring users is the new building factories. 

The overall bet may still be speculative, but the median VC dollar isn’t anymore. It’s buying customer acquisition and then financing service delivery. In plain sight, ever since the dot com crash, VCs have learned and applied the same lesson as GM and Ford years ago: the best way to make money isn’t in making cars, it’s in financing them. 

This looks as though it could be deployment money to me! But it isn’t yet: so long as this recurring revenue is financed with VC equity, there’s still this tension between VC’s portfolio approach (FK) versus founders’ and employees' complete commitment (PK). Still, though, the fact that it could be aligned - given the relative stability and maturity of the recurring revenue software model - suggests that we’re overdue for some new financing strategies. 

Maybe not all investors get this, but the smart ones do. Jonathan Hsu of Tribe Capital and I used to talk about this a lot back when we were at Social Capital, and when I interviewed him last year in the newsletter he put it this way:

When you acquire some customers and they start yielding revenue that behavior sounds an awful lot like buying a fixed income instrument and there is a lot of sophistication around how to value those cash flows. In some sense, what we’ve seen over the last decade is that software enables a whole new business model – recurring revenue – which is both good for customers and is good for investors. It’s good for investors because it becomes more “predictable” in the sense that it starts to look more like a fixed income yielding asset and thus more amenable to traditional financial techniques and thus potentially “in scope” for a wider set of investors. (Emphasis mine.)
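“Amenable to traditional financial techniques” can be taken literally: you can discount a cohort’s expected cash flows the way you’d price a bond’s coupons. A sketch with invented figures (the decay and discount rates are assumptions, not anyone’s real numbers):

```python
def present_value(cash_flows, monthly_discount_rate):
    """Discount a stream of monthly cash flows back to today, exactly
    as you'd value a fixed income instrument's coupon payments."""
    return sum(cf / (1 + monthly_discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A cohort starting at $100k/month and decaying 3%/month from churn,
# discounted at 1%/month. All numbers hypothetical.
flows = [100_000 * 0.97 ** t for t in range(36)]
cohort_value = present_value(flows, 0.01)
```

The investor’s sophistication then lives in two inputs: how confident you are in the decay curve, and what discount rate that confidence deserves.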

One big lesson in Technological Revolutions and Financial Capital is that the innovation financing game is played with different rules when FK and PK are aligned versus when they aren’t. In the ramp up of the installation phase, culminating with the frenzy of a bubble, you’re playing one game - where FK and PK have to navigate a lot of ambiguity, and tolerate a high failure rate. Today's VC model, where businesses are built with all-equity capital stacks and portfolio construction anticipates a power law return curve, is hyper-optimized to play this game. 

If you’re a tech founder raising capital today, there’s really one mainstream way to fund it: by selling equity. The VC model capital stack, which the Silicon Valley venture ecosystem has optimized itself around, is the one-size-fits-all funding model for startups of all shapes and sizes. We know it like muscle memory at this point. If your career began after the dot com crash, as mine and I’m sure many of yours did, you’ve probably never known any other way.

But the deployment period is a different game. And when FK and PK get really aligned with each other, the best tool in the game isn’t equity anymore. 

The problem with equity

Here is a widely believed cause-and-effect relationship I bet you’ve never thought to invert before: because most startups fail, therefore equity is the best way to finance them. Have you ever considered: because equity is how we finance startups, therefore most startups fail? 

This feels uncomfortable! But it gets right to the core of the FK-PK misalignment that saturates the modern tech industry, and holds us back from entering the deployment game, with its new set of rules. In the early stages of a startup, the conventional cause-and-effect direction is correct: we use equity, because there’s uncertainty. But later on, I’m not so sure. 

Plenty of people these days preach “startups need to rely less on fundraising”; it’s harder to find anyone who’ll challenge the equity mechanics themselves. But continuously selling equity, even at high valuations, is more expensive than the narrative suggests. As a founder, the most valuable optionality you have is the equity you haven’t sold, and the dilution you haven’t taken. But the second most valuable optionality you can have is a valuation that’s not too high. 

I don’t just mean that high valuations destroy discipline and focus, although that’s also true. I mean strictly in terms of optionality you’re giving up. The higher your equity valuation, the fewer out of all possible future trajectories for your business are acceptable. After the past few years, I think most founders get this. If you must go raise a ton of capital, then boosting your valuation isn’t preserving your optionality; it’s trading one kind for another. 

The wakeup call will be when founders collectively come to grips with the fact that the Financial Capital all-equity stack, as powerful as it is for creating something out of nothing, is and has always been at odds with the Production Capital mentality of a business builder and operator. There is nothing inherent to tech companies that requires that so many of them fail to live up to their aspirational valuations, aside from the way they’re funded. 

But can’t debt blow up in your face? I mean, yes, but so can preferred stock! But debt is up front about it, whereas liquidity preferences aren’t a problem until one day they are. Investors genuinely mean well almost all of the time, but the alignment between the Financial Capital of VC and the Production Capital of software businesses really only works for one narrow version of success. I’ll bet you founders will increasingly ask for paths to many different versions of success, not just one. 

Of course, there is a way to have your cake and eat it too: raise more capital, with less dilution on your cap table, and without needing a dangerously pumped valuation. It’s to raise some debt! Not a huge amount; I’m not arguing founders will be better off if they start racking up enormous debt loads instead of raising VC rounds. Debt is not runway. I’m just saying, there’s more than one way to construct a capital stack. And that, believe it or not, taking on some debt can be a smart way to finance a business. Everyone else in the business world understands this! 

Where could you put debt to work effectively? Oh, I dunno, how about that thing that every tech company does now: creating customers! You have to spend a bunch of money today to acquire users. But once you have them, they send back recurring revenue that’s pretty predictable at a cohort level. Hmm! Tech companies with recurring revenue business models are this close to connecting the dots. 
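The “spend today, harvest later” loop above is exactly what a lender can underwrite: how many months until a cohort’s gross profit pays back its acquisition cost. A sketch, with every number invented:

```python
def payback_months(cac, arpu, gross_margin, monthly_churn):
    """Months until a cohort's cumulative gross profit (per customer
    acquired) covers its acquisition cost. Returns None if the cohort
    churns out before ever paying back."""
    active, cumulative, month = 1.0, 0.0, 0
    while cumulative < cac:
        month += 1
        cumulative += active * arpu * gross_margin
        active *= 1 - monthly_churn
        if month > 600:  # effectively never pays back
            return None
    return month

# Hypothetical SaaS cohort: $600 CAC, $50/month, 80% gross margin,
# 2% monthly churn.
months = payback_months(600, 50, 0.80, 0.02)
```

A predictable payback window is precisely the kind of thing debt, rather than equity, is built to finance.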

Gradually, and then Suddenly

There are still some big reasons why most Silicon Valley tech companies don’t use debt all that much. One obvious reason is that lenders aren’t used to extending credit to fast-growing software companies. It’s not like this is an unsolvable problem, since recurring revenue is pretty attractive to borrow against. But the big banks and usual lenders haven’t really worked it out yet. 

The other reason is more of an internal issue. If you raise debt, unless it’s for some specific purpose (like if you’re a fintech company), it’s usually seen as a big red flag that's prohibitive to future equity raises. 

In an environment that’s fine tuned for “you’re growing or you’re dead” and where signalling is everything, debt on your cap table means something has gone wrong. To people in other industries, this looks strange: if you’re a growth business, debt is how you grow faster! But not here: we’ve got our formula, and if you stray from it, it throws off the process. Debt can also scare off growth equity funds, who don’t like not being the most senior money in the pref stack. 

In both cases, debt in your cap table imposes a financing risk. So unless you have line of sight to positive cash flow, debt won’t usually be your first choice. We do see venture debt get used in scenarios like bridge rounds and other special situations, but its customers are really the VCs, not the business. It’s not primary growth fuel. The benefits of debt aren’t worth the risk you’d take by potentially alienating yourself from future access to capital. “Don’t take debt” is tech’s “Four legs good, two legs bad.” 

Furthermore, in the Bay Area Founder-VC scene, FK/PK tension simply isn’t perceived as a problem. Founders increasingly think of themselves as capital allocators who think in bets, and the angel investing scene has brought founders and VCs together as social peers. There’s no FK/PK tension between investors and founders. They all want the same thing, and they all hang out at the same parties. The tension has simply been redistributed, largely onto employees. The greatest trick VCs ever pulled was convincing founders, “you’re just like us.” 

That’s why I’ll bet we first start seeing debt get used as real growth fuel in Silicon Valley-style software companies from companies that aren’t from Silicon Valley. I think this is a great opportunity for other startup ecosystems, especially ones with local capital bases (looking at you, New York? Tel Aviv?) to compete with the Bay Area for teams and talent by creating an alternative capital stack model for funding software companies. (“Come to New York; keep more of your equity!”)

By the way, I fully expect that a lot of people in VC will disagree with this. Of course they do! Just know: if you talk to people in VC about this, and then you talk to people at companies who are building the future of this stuff, you come away with two completely different impressions. I'm not sure I'd bet on the VCs.

The tipping point happens when someone big, and probably local, announces a new financing product: recurring revenue securitization. 

I honestly think this makes so much sense. Why not go straight to securitizing senior tranches of your recurring revenue, and moving it off your balance sheet? You could imagine a high-quality startup financing its growth this way: raise your initial equity to establish your product, go-to-market, and first big cohort of users. Once you understand that first cohort of users really well, securitize the first X% of the cash flows they generate, get em off your balance sheet, and then use that money to create your next cohort of users. Keep raising equity to grow the other parts of your business, by all means, but just raise less of it! 
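One hypothetical way the mechanics of “securitize the first X% of the cash flows” could work: discount the cohort’s expected cash flows, then advance only a fraction of that value up front, so senior noteholders stay protected if churn comes in worse than modelled. Everything here — the advance rate, the rates, the figures — is an assumption for illustration, not a description of any real product:

```python
def senior_advance(expected_cash_flows, advance_rate, monthly_discount_rate):
    """Toy sizing of a senior tranche against a cohort's expected cash
    flows: discount the stream, then apply an advance rate (haircut)
    as the cushion for worse-than-modelled churn."""
    pv = sum(cf / (1 + monthly_discount_rate) ** t
             for t, cf in enumerate(expected_cash_flows, start=1))
    return pv * advance_rate

# Invented example: 24 months of decaying cohort revenue, a 70%
# advance rate, 1%/month discount rate.
flows = [200_000 * 0.96 ** t for t in range(24)]
upfront_cash = senior_advance(flows, 0.70, 0.01)
```

The company keeps the equity upside in the remaining cash flows; the upfront cash goes straight back into acquiring the next cohort.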

There are all sorts of ways you can get creative with this. Let’s say your business has solid product-market fit in its “base layer” of recurring revenue, and your main focus is on whether or not you can successfully build expansion revenue on top. As you raise capital, maybe consider that these two tasks could be financed differently? If a startup is making lots of different bets as it grows, maybe those bets might have different return profiles, and could be funded accordingly?

On the other side, imagine how much investor interest you could get in a diverse basket of recurring revenue from, say, 10 different startups that’ve all raised from Tier 1 VCs. People talk about how great it would be to invest in a unicorn basket; this would probably be even better. In some ways this is threatening to VCs, since it’s competing capital; but it also reinforces their importance as curators and underwriters. 

The risk to VCs isn’t that their role disappears. It’s that once this happens, the muscle memory for how to structure funds and term sheets immediately goes out of date. VC firms should spend time today thinking about how they’re going to prepare for this new world, in case it comes true.

If you think there’s too much money flowing into startups now, just wait until someone makes a high-yield fixed income product for institutional investors to buy recurring revenue. In my 10 predictions for the 2020s post, one of my predictions was a that we’re going to replay the Softbank capital-as-a-moat funding calamity, but with enterprise software this time. Recurring revenue securitization will be like gas on the fire. Forget Softbank; imagine what it's going to be like facing off with someone who's hooked up to the debt market.

VCs need to be ready for this new game. Many of them are already preempting it, deliberately or not, as they transition into these multi-stage battleship firms with scout programs, venture teams, and growth funds. I’m not sure it makes sense for these firms to raise their own debt funds though. More likely, we’ll see a few top-flight firms announce partnerships with Stripe Capital and Goldman Sachs, and just roll it right in with their Series B term sheets. 

Expect, at this point, some pretty funny “Actually, four legs good, two legs better” blog posts from some of the same VCs who told us to never take debt a few years before. “Ah, see, that debt was reckless gambling; this debt is being equity efficient”. K thanks. 

And when founders really get a taste of that credit? That sweet, sweet taste of dilution-free capital, flowing freely to and from a continuous growth vehicle, and learn the dark arts of securitization? And then when their competitors learn about it? It is game over for the old way.

Proceed carefully, but get excited too. This is a good thing. A realignment between financial and production capital is long overdue, and it’s going to hit like an earthquake. But it’s going to level up our collective ability to put capital to work into new and interesting businesses. We could be around the corner from a technological golden age, where software and the internet can get massively deployed in a production-forward way. The world wants this so badly. And we’re almost there. 

Permalink to this post is here: Debt is coming

These two recent posts by Matthew Ball are both A+ reading material on “the metaverse”: one of the next great tech frontiers to explore and homestead.

The Metaverse: what is it, where to find it, who will build it, and Fortnite | Matthew Ball

The next frontier in storytelling universes and the never ending desire for more | Matthew Ball & Jonathan Glick

Other good reading:

News in the age of abundance | David Perell

The iPad at 10 years: now what? | Jean-Louis Gassée

Spotify: the ambient media company | Brett Bivens

How Richard Montañez, a janitor at Frito-Lay, invented Flamin’ Hot Cheetos | Zachary Crockett, The Hustle

And just for fun:

Google Maps Hacks | Simon Weckert

Have a great week,


Counterfeit Food

Two Truths and a Take, Season 2 Episode 1

Here is a wild story:

You should read the original thread of tweets here, but here’s what happened: 

Pim Techamuanvivit, the owner and chef of a few popular restaurants in San Francisco, was managing the floor at Kin Khao the other night when a call came in from someone asking about their delivery order. This was surprising to her, since her restaurant doesn’t do delivery - not even takeout. After hanging up the phone, she googled “Kin Khao delivery” and found something astonishing: a complete impersonation of their menu and brand, complete with delivery ordering, on Seamless, Grubhub and Yelp. 

Now, there are a couple potential explanations for this; the simplest is that Kin Khao had a profile automatically made for them on these delivery services that shouldn’t have been there (since they don’t do delivery), and that’s the end of the story*. But that’s not my impression of what happened! A real order got placed, and they only found out about it because the customer called them directly. That’s what’s fishy here.

They’re a popular restaurant; they’ll be getting orders all the time. If Pim was never aware of this (and she would know!), then either there’s a pile of unfulfilled orders that got ghosted on - in which case she’d have seen confused couriers coming into the restaurant regularly - or… maybe the orders were being fulfilled. And had that customer never called Kin Khao directly, no one would ever have known.

Anyway, because this is my newsletter and not actual reporting, I’m going to assume that the most interesting of the scenarios happened, which is: someone made a counterfeit restaurant! This is a whole step beyond the recent allegations that Grubhub and Seamless were putting up landing pages in order to intercept orders and collect commission fees. This isn’t impersonation in order to collect a referral; it’s straight up counterfeiting the restaurant on a delivery app. Unsuspecting customers, recognizing a business that they know, order food expecting it to, you know, actually come from that establishment! 35 minutes later, food arrives at their door. Maybe you notice that the food isn’t quite what you expected, or isn’t up to par. But maybe you don’t notice! If you think, “that would never happen to me”, well, are you sure? 

As the food delivery wars heat up, most of the discussion I’ve heard has been around the food delivery apps’ pros, cons and comparative advantages as aggregators: is demand elastic, who has pricing power, how do they commoditize restaurant supply, and those sort of themes. All aggregators come with their share of societal problems, and food delivery is no exception. 

But there’s another side of the story, and another suite of problems, that I’ve heard less chatter about but that matters far more in the long term: their impact and unexpected consequences as platforms. What new kinds of bad behaviour, like counterfeiting food, are here to stay? What are we going to have to regulate, and who will do it? 

Counterfeit food, of course, isn’t a new thing at all - or at least, counterfeit ingredients aren’t a new thing. There’s olive oil fraud, where cheaper olives from inferior regions or lower-quality processing plants are falsely labeled as extra virgin. You may have heard of “honey laundering”, where cheaper Chinese honey gets mixed in and sold as North American honey in order to evade tariffs. Similar story with the Great Canadian Maple Syrup Heist. Premium Arabica coffee beans are routinely cut with cheaper Robusta ones. “Kobe Beef” in North America is just a marketing term; so is Prosciutto di Parma. “Parmigiano Reggiano” is even worse: you might be eating sawdust.

The biggest food fraud category? Seafood. A 2016 report from Oceana outlined just how bad fraud in the seafood industry has gotten: vast quantities of fish illegally caught, laundered and resold on black markets, and up to 20% of the fish we eat potentially the wrong species entirely. As summarized on Eater:

A couple particularly egregious examples of fraud cited by Oceana include bluefin tuna in Brussels restaurants, where 98 percent of the dishes tested actually contained another fish entirely; and a sushi restaurant in Santa Monica, California that was busted serving endangered whale meat as fatty tuna. A species called Pangasius, or Asian catfish, is particularly popular with fraudsters, and has been discovered standing in for 18 different species of fish including cod, flounder, grouper, sole, and red snapper. Another food that’s highly susceptible to fraud: caviar. In one fraud study, 10 out of 27 caviar samples were found to be mislabeled, and three of them didn’t even contain animal DNA but rather a completely unidentified substance.

This stinks, but it isn’t the most surprising thing I’ve ever heard. It’s a natural consequence of the food we eat coming from farther and farther away, through a complex, opaque supply chain abstracted out of sight and often outside the law. Food got abstracted away into the “food cloud”. And these days, we’re adding another abstraction layer on top of that: we’re outsourcing food preparation too. At every income level, for every meal, we’re eating more food that’s been prepared by someone else: cooking as a service.

As food got abstracted into the food cloud, and now cooking is getting abstracted into the cooking cloud, it should not surprise anyone that you’re not always getting what you think when you hit that order button. Upton Sinclair’s famous book The Jungle, on the Chicago meatpacking plants at the turn of the 20th century, took us inside the food cloud where the sausage gets literally made. It’s not pretty. A century later, Anthony Bourdain's Kitchen Confidential painted a picture of what really goes on in restaurant kitchens. 

The “integrity crisis of food”, if you want to call it that, is on the one hand to be totally expected. Why wouldn’t food get counterfeited, just like every other consumable good? But on the other hand, it’s a bigger problem than knockoff Nikes. Illegal ingredients are often environmentally devastating, and carry unknown health risks while we’re at it. Knockoff or unlicensed kitchens, if they can operate under the cover of someone else’s health certification, may be cutting food prep and safety corners that aren’t acceptable - plus, they’re outright theft from restaurant owners, stealing their brand while sticking them with the food safety risk. 

But the food cloud keeps growing. Every time we add a layer to it, we add a new layer of opacity, and a new set of costs that are now subject to margin pressure. In the old days, you knew exactly what food you were eating, because you or someone near you grew it. But then, for most of us, food provenance got abstracted away into a grocery store: you didn’t know where a stick of butter came from anymore; it just showed up. You have to trust that the label is what it says it is.

Then, food preparation got abstracted into cooking-as-a-service: now you don’t know if that food was cooked in butter anymore; it might be margarine, cheaper vegetable oil, or something worse. Then, the restaurant itself got abstracted into a menu with an “order” button. Now you don’t even know who’s preparing your food - it could be the restaurant you know, but maybe it’s a counterfeit; maybe it’s some illegal cloud kitchen operating in a garage. The more layers you add, the more you have to just trust the label, and the less trustworthy the label becomes. 

All internet platforms face this problem sooner or later. Amazon struggles with knockoff goods in its third-party marketplace, while competing against upstart marketplaces like AliExpress that might have different policies. Uber and Lyft put extensive resources into verifying who’s driving on their platforms, as does Airbnb with its hosts - user ratings are great when you’re dealing with good actors, but not so much for preventing abject ripoffs. Food isn’t unique. It just feels especially important, since, you know, we eat it. 

One recent trend in the food industry is large CPG companies and retailers exerting more control over their own supply chains. This makes sense; if you have a strong brand, you can’t afford the risk of letting counterfeit, sub-standard or illegally sourced food into your supply funnel. Some buyers are doing a better job of walking the walk than others. Costco, for instance, does a very good job of actually inspecting food, doing the hard work, and running things well. Other buyers opt for more of a theatrical approach.

Now, with all of this in mind: I think everyone agrees that the Kin Khao counterfeit restaurant situation is absolutely unacceptable, and that Grubhub and Seamless cannot allow it on their platforms under any circumstances. It’s not just misleading the customer; it’s also real-time theft of the real restaurant’s brand and reputation, and it sticks them with a risk (e.g. a customer gets food poisoning) they never agreed to bear. It’s fraud. 

But food is full of fraud! Shouldn’t we, over time, realistically expect that this abstraction layer of cooking-as-a-service will settle down to some percentage of fraudulent commerce that we just grudgingly accept? I’m not saying this is right, or okay, but I’m just saying that it seems… likely? 

Shouldn’t we expect to see some level of grift, like: restaurants that claim to cook food to a certain health standard but then don’t? Or restaurants that piggyback on a beloved restaurant that shuts down by “reopening online” (and appropriating the brand in the process)? Or virtual restaurants that create fake celebrity endorsements, drive paid acquisition with their brand until they get caught, and then shut the whole thing down and reopen the next day under a new name? 

As I said before, most of the discussion I’ve seen about food delivery apps and whether they’re good or bad for restaurants has focused on their pros and cons as aggregators. But there’s an entirely different set of questions to ask about their pros and cons as platforms: specifically, to what extent should they be held responsible for bad actors that use them? Are they more or less responsible than say, Amazon, for what gets bought and sold on their watch? These questions probably matter less in the short-term food delivery wars, but they’re more important long-term if we think about the future of food in general.

Cloud kitchens make this a much harder problem. When you’re in a physically different location from the customer, and it’s really easy to spin up a new brand and menu (while keeping the kitchen and the staff), doesn’t that incentivize a certain amount of corner-cutting, if not outright misbehaviour? To what extent can Grubhub or Seamless be reasonably held accountable for this? So long as they’re in a race for market share with one another, can they afford not to let new restaurants onto their platform? How much can they afford to spend on policing what happens on their platform?

More generally, are food delivery apps “common carriers?” Are they neutral platforms? If they’re not neutral platforms, and they’re accountable for what gets sold, then do they also become accountable for everything that passes through them, like the quality of ingredients and labour practices of restaurants they serve? And if they are neutral platforms, how much fraud and bad behaviour can we reasonably tolerate before it becomes a major problem? 

Does all of the fraud in the food industry that already existed simply become a software problem? Or does the new kind of fraud that software and the internet make possible simply get lumped in with all the other food problems? Who should be policing this, the FCC or the FDA? These are exhausting questions, because there’s no right answer yet. But the food cloud gets bigger every year, and we need to figure out whether it’s a cloud with food in it or food that’s cloudy - and what that means we should do next. 

Permalink to this post is here: Counterfeit Food

*Thanks to Charles for emailing me about this after I posted it on Saturday. It’s absolutely possible that this particular instance was a mixup rather than a crime, and most of the time, that’s going to be the case. But I still don’t like it! Even if this was totally a case of delivery apps making their own profiles of restaurants in order to give their customers more delivery options, that strikes me as a really disingenuous disintermediation (especially if the restaurant has no idea) - and, in the long run, it feels to me like an inevitable path into outright grift.

No links this week (not because none of you wrote anything good, but because I wasn’t as online this week) but thanks to everyone for the great feedback and comments on Social Capital in Silicon Valley. I particularly enjoyed this introduction to Bourdieu, Max Cutler’s riff on how things are different in Berlin, and the throwback to this older post from Ben Wheeler on Product Hunt as embodiment of social influence in SV. Thanks for reading and keep the emails coming!

Have a great week,

