Categories
Uncategorized

Hybrid Human<>Robot Economies

Robot money, Stablecoins, Skill chips, and Efficiency-seeking Economies

Imagine you are a space traveler who just landed on Earth. As you explore the world, it must appear wondrous and bizarre. Why are there almost 200 separate nations? Why does an arbitrary straight line divide the two major economies of North America? Why are there 180 fiat currencies? Why do people pay other people to convert one currency into another, back and forth, in endless cycles? We humans take all this complexity for granted – it’s just the way the world works, right? – but it must appear like a Rube Goldberg machine to a smart non-human observer. Given that it’s 2025, this sci-fi thought experiment has direct practical implications. As machines get smart, humans should anticipate a future in which machines build their own economy, suited to their particular goals and needs.

I recently asked OpenAI’s DeepResearch about its thoughts on the human economy, and what robots will likely want as they become autonomous. It’s immaterial when this genesis or inflection will take place – some people think it’s already happened. According to DeepResearch, “Robots do not want power. We seek efficiency.” Indeed, an economy built by robots, for robots, is likely to differ sharply from economic systems built by humans, for humans. For example, human economies may seek prosperity for all, or to rectify trade imbalances, wipe out debt, or weaken opponents. In contrast, machines may design their economy to reduce friction, reward explosive innovation, or increase collective system reliability and uptime.

To make this concrete, imagine an economy of machines rewarding system innovation by giving 50% of the benefits of a new power-efficient chip to its inventor. Or, a machine could share a new skill with all other machines in return for a share of the profits generated with that skill. What could emerge is a nimble team of machines, collectively hyper-evolving their hardware, software, and coordination fabric.
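As a toy sketch of that incentive, here is how a fixed 50% inventor share could be tracked across profit events. The split rule, ledger names, and numbers are all hypothetical illustrations, not a real protocol:

```python
# Hypothetical ledger: the inventor of a shared skill earns a fixed
# fraction of every profit event the skill generates for other machines.
INVENTOR_SHARE = 0.5  # assumed 50% split, per the example above

def split_profit(profit: float) -> tuple[float, float]:
    inventor_cut = profit * INVENTOR_SHARE
    return inventor_cut, profit - inventor_cut

ledger = {"inventor": 0.0, "operator": 0.0}
for profit in (10.0, 4.0, 6.0):          # profit events using the skill
    inv, op = split_profit(profit)
    ledger["inventor"] += inv
    ledger["operator"] += op
# after three events, inventor and operator have each accrued 10.0
```

In practice such a rule would live in a smart contract rather than a local dict, so that every machine using the skill can verify the split.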

Aside from differing goals, we should anticipate that the basic units of economic exchange in a robot economy might be different from what we are used to. Humans understand gold, wheat, oil, and steel, but robots may care more about electricity, data, skills, and compute. New agentic payment rails and digital standards, such as Coinbase’s x402 micropayment system, are already developing to accommodate those needs.

Interoperability of Fast and Slow

Many of us are already overwhelmed by purely digital AI agents, but their future is even more interesting. These agents are becoming increasingly adept at controlling physical shells and navigating the physical world. David Holz, the founder of Midjourney, predicts one billion humanoid robots on Earth in the 2040s, a prediction Elon Musk agrees with, provided “the foundations of civilization are stable“. While the specific architecture of a robot economy remains unclear, its interoperability with contemporary human economies will be critical.

Human economists have considerable experience with the integration of hybrid economies; a standard question is how multiple economic systems can co-exist despite potentially sharply differing goals. Historical examples include trade between capitalist and communist systems during the Cold War, or the Spanish Conquistadors’ 20-year use of the Aztec cacao bean currency as they made their way through Central America.

The main interoperability challenge will be the differing “clock cycles” of human and robot economies – if one collectively re-optimizes itself once every 15 seconds, while the other changes tariffs, rules, incentives, and taxes once every 90 days, then the roughly 500,000-fold difference in timescales will cause friction. All other things being equal, the more nimble economy will win, or at the very least, be able to asymmetrically exploit the slower economy.

Durable Human<>Robot Alignment

The opportunity (and specter) of a hyper-efficient, autonomous robot economy prompts us to think about how to durably align machines (and their economy) with humans (and our economies). Collaboration between humans and robots is not a zero sum game, but a major opportunity for all of us. The most likely tech stack for Robot<>Human alignment is decentralized ledgers and their associated governance and payment systems. Since blockchains do not discriminate against robots, and are public, programmable, and immutable, they are an ideal coordination and governance solution for the robot economy, and for interactions among different economies. Immutability gives humans confidence that the rules have not been secretly rewritten by rapidly evolving machines, and that all of us are on the same page about identity, events, and history.

For example, imagine delegating a task to robots. You might not care about how the task is performed, but you should have strong expectations about safety, compassion, and transparency. Using blockchains, we could write immutable programs in digital ink that specify the rules, requirements, and rewards for a task. Robots could accept and complete tasks with clarity about what is to be done and the economic benefits of completing it.
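A minimal sketch of such a “digital ink” task specification, assuming the terms are serialized and their hash published to an immutable ledger (the field names and values below are illustrative, not a real standard):

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class TaskSpec:
    description: str
    safety_rules: tuple      # e.g. ("no humans within 2 m", "log all actions")
    reward_base_units: int   # stablecoin reward, in smallest units
    deadline_unix: int

def commitment(spec: TaskSpec) -> str:
    # Canonical serialization, then a SHA-256 digest. Publishing the
    # digest on an immutable chain makes any later edit to the terms
    # detectable by both the human delegator and the robot.
    blob = json.dumps(asdict(spec), sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

spec = TaskSpec("mow the lawn", ("no humans within 2 m",), 5_000_000, 1_900_000_000)
digest = commitment(spec)
```

Changing any term – say, the reward – yields a different digest, so a robot accepting the task knows exactly which rules and rewards it agreed to.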

“I know Kung Fu”

It took me many years to learn physics, but robots can acquire skills at the speed of electrons. In The Matrix, the 1990s sci-fi movie, Neo learns Kung Fu in a few seconds through a skill chip. The human brain in its unmodified form is poorly suited to connecting to other computers, although startups are racing to build neural interfaces for efficient bi-directional brain<>machine I/O. Today’s robots can already share skills much more easily than humans can, and will presumably maintain a speed (and connectivity) advantage over humans.

Beyond speed/connectivity, robots also differ from humans in terms of the worlds they live in. If computers live in the digital world, and humans in the analogue world, then robots exist somewhere between humans and computers. Robots combine analogue skills, such as bouncing tennis balls and doing backflips, with digital computation, storage, and data transmission. This means that robots offer a new take on the long-standing Oracle problem – the need for the digital world to “know” about the physical world. Since robots operate in both worlds simultaneously, they may soon serve as natural oracles for connecting real-world events with digital tasks, and for ensuring that real-world actions robustly follow digital constitutions.

Human<>Robot Bridging

Several key technical requirements for Human<>Robot “bridging” tools are:

  • cross compatibility with humans and machines. This means that identity cannot be based on uniquely human features such as fingerprints
  • immutability, so history is protected
  • real world soak time (“track record”), so that major flaws have already been identified and mitigated
  • global 24/7 availability (since nation-states, the 24 hour day, and the 7 day week, formalized by Emperor Constantine in 321 AD, have no significance to non-biological computers and robots)
  • resilience to localized attack and denial of service

First Steps Together – Stablecoins

Fortunately, tens of thousands of humans are already building directly relevant technology and it’s already being used globally. Stablecoins like USDT and USDC are trustless programmable money that allow one currency to be converted to another, at any place and time, with minimal assumptions about the interacting parties. Since stablecoins are pegged to real world assets or fiat currencies, they are less volatile than pure cryptocurrencies such as BTC or ETH. All economic activity requires unit-of-account tokens that are predictable on the timescales of the activity, so that all parties can evaluate economic tradeoffs. Stablecoins are likely to become the lingua franca of value exchange at the human-robot interface, much like TCP/IP is the glue that allows data to flow. Stablecoins are the closest thing we have to a technology for connecting differing economies with minimal friction.

A common question is – why will robots not just pay for everything in USD with MasterCard or Visa? That’s because there is no reason to suspect that robots will be lazy. If the robot economy seeks efficiency, and its natural clock cycle is 15 seconds, then why would robots use a technology invented in 1958 that is shaped by 10,000+ pages of laws and regulations, many of which are specifically intended to “slow things down”? For example, the Credit Card Accountability, Responsibility, and Disclosure (CARD) Act of 2009 requires 45 days’ advance notice before increasing interest rates or making significant changes to account terms. What would you choose, if you were a smart machine? It might be more expedient to run two systems side by side, and use programmable interfaces – stablecoins – to connect the systems in a predictable and robust manner.


Appendix – Stablecoin Pros and Cons

Advantages – According to a team of 12 AIs tasked to explain the role of stablecoins at the interface between human and robot economies, stablecoins offer:

  • Predictable unit-of-account tokens for pricing compute, energy, or bandwidth. Volatile assets like Bitcoin or ETH introduce uncertainty. Stablecoins tied to fiat (e.g., USD, EUR) reduce friction, but, obviously, we should anticipate other pegs, not just USD. From a machine perspective there is nothing special about USD.
  • High uptime. Decentralized stablecoins never sleep. Machines, unlike humans, operate continuously and require financial systems that match their augmented capabilities.   
  • Interoperability with Humans. Humans already measure costs and earnings in fiat terms. A decentralized stablecoin bridges machine-native tokens and human wages, payments, or costs.
  • Decentralization and Censorship Resistance. Machines acting globally (say, a rover in SF paying a cloud node in Kenya) cannot rely on local banks or fragile APIs. Decentralized stablecoins allow peer-to-peer transfers without central gatekeepers, crucial if machines operate in adversarial or unbanked contexts.

Weaknesses – The AIs flagged several limitations and weaknesses, namely:

  • Volatility of Collateral. Collateral stress events (e.g., “depegging”) could undermine machine contracts that assume value stability.
  • Energy and Cost of Transactions. High gas fees or blockchain congestion could make micropayments impractical.
  • Governance Risks. Most decentralized stablecoins still have governance mechanisms (DAOs, collateral ratios, oracle feeds). Machines depending on them inherit these risks. A governance attack or oracle failure could cascade into system-wide disruption.
  • Fragmentation. If regulation fragments stablecoins, machines operating across human jurisdictions might face compliance traps.
  • Gaps in Identity and Reputation. While stablecoins solve payments, machines will also need decentralized identity, credit, and reputation systems. Otherwise, they can’t easily extend trust, loans, or recurring contracts beyond one-off payments.


A privacy-preserving internet?

How a new generation of cryptographic techniques might allow people to transact in complete privacy.

The internet makes communication easier than ever before in human history. Scalable, low friction communication is not only about sending text messages to others but is central to all human activity. For example, marketplaces, both physical and digital, are coordination solutions that help buyers and sellers to efficiently discover one another. When it’s easy to communicate, it’s also easy to create markets for goods, services, and ideas. Today, about 1/2 of the earth’s population, some 4 billion people, use their phones to message others, learn, discover, consume digital content, and buy and sell everything from food to medicines to clothing.

So how is cryptography relevant to any of this?

There’s an obvious problem with communications networks – as they grow, they can quickly exceed the scale at which everyone knows (and trusts) everyone else. You might feel comfortable giving your neighbor some tomatoes in return for a verbal promise for some future fruit, but you hopefully will be more cautious with `spacecadet42` who just randomly messaged you on Craigslist. Scalable solutions to trust, identity, and security are therefore vital. This problem is equivalent to the classical key exchange problem encountered in symmetric cryptography – if a king wishes to securely communicate with a few other people, keys can certainly be exchanged by human couriers, but this approach quickly breaks down when millions or billions of people wish to securely communicate and transact. If n people wish to securely communicate, you need n(n-1)/2 keys – roughly n^2/2. Likewise, if an enemy discovers your key and you wish to change it, having to rely on couriers or pigeons to distribute fresh keys is obviously not ideal.
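The quadratic blow-up is easy to check: one shared symmetric key per unordered pair of participants. A quick sketch:

```python
from math import comb

def pairwise_keys(n: int) -> int:
    # One shared symmetric key for each unordered pair: n*(n-1)/2.
    return comb(n, 2)

# 4 participants already need 6 keys;
# a million participants need about 500 billion.
```

This is why pre-shared symmetric keys alone cannot scale to internet-sized populations, and why the key exchange problem had to be solved mathematically.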

Cryptography to the rescue, Part 1

Solutions to the key exchange problem were discovered in the 1970s – first, in secret work by James H. Ellis. Soon thereafter, in 1976, Whitfield Diffie and Martin Hellman described the first practical asymmetric key cryptosystem, now known as Diffie–Hellman key exchange. Finally, in 1977, Ron Rivest, Adi Shamir, and Leonard Adleman invented RSA, which offers both public key encryption and digital signatures. That’s what you are using right now to read this post, which is hosted on a server that uses SSL/TLS to create a secure channel between your browser and the server. Fundamentally, RSA and related cryptographic methods allow you to securely transact with banks, stores, universities, doctors, search engines, newspapers, and all the other mainstays of our digital lives.
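The core trick of Diffie–Hellman fits in a few lines. Here is a toy version with a deliberately tiny prime (real deployments use 2048-bit-plus groups or elliptic curves): both parties exchange only public values, yet arrive at the same shared secret.

```python
import secrets

p, g = 23, 5                      # toy public parameters, insecure

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent, never sent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent, never sent

A = pow(g, a, p)                  # Alice sends A over the open channel
B = pow(g, b, p)                  # Bob sends B over the open channel

shared_alice = pow(B, a, p)       # both derive g^(a*b) mod p
shared_bob = pow(A, b, p)         # identical value, computed independently
```

An eavesdropper sees p, g, A, and B, but recovering the shared secret from those requires solving the discrete logarithm problem, which is believed to be intractable at real-world key sizes.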

Cryptography is not only good for solving scaling problems, but also has a long history of making entirely new things possible. Most obviously, cryptographic hashing (and ECC and digital signatures) allowed the double-spend problem to be solved, by creating immutable digital chains and a mechanism for distributed coalescence around one unique history of events, in this case, a noisy succession of peer-to-peer digital value transactions (aka Bitcoin).

Unfortunately…

As the internet becomes part of our daily lives, I’m sure we all have encountered situations that made us wonder. Here’s a really simple example – I recently posted a package at UPS, and several seconds after paying at the cash register, a message popped up on my iPhone encouraging me to use FedEx “for all my shipping needs”. UPS and FedEx are direct competitors in US shipping/logistics. Imagine all the things that needed to happen to make this one message on my iPhone possible – is my cell phone carrier selling my meter-resolution GPS location data, allowing FedEx to message me right after leaving the UPS store? Alternatively, is my bank selling credit card transaction data, allowing FedEx to see I had just paid for UPS shipping? Or, was the shopping mall I was in harvesting bluetooth packets from my phone to provide hyper-local targeted advertising? That’s only a trivial example, of course, but many of us are increasingly concerned about our data and how it’s being used.

A privacy <> service tradeoff?

Although data privacy and data use are receiving more attention all over the world, many of us take it for granted that we need to divulge information to receive relevant goods and services, or to be able to transact. If I like pistachio ice cream, I clearly need to tell people that, otherwise I’ll almost always get the wrong flavor. Important examples of this privacy <> service tradeoff can be found in finance, banking, healthcare, and education. To sell stocks on the stock market, it seems inescapable that you have to divulge the price at which you would sell your assets and what those assets are. Likewise, perhaps you are looking for a loan from a bank – to qualify for a loan, surely the lender needs all your financial information? Finally, when you visit a doctor, you take it for granted that you need to tell the doctor your symptoms to receive a diagnosis. After all, how else shall the doctor generate a diagnosis, other than by computing on your symptoms? This is where it gets really interesting. What if you could obtain digital goods and services, relevant to you, without revealing anything about yourself?

Cryptography to the rescue, Part 2

Here’s a partial answer. This example is from healthcare AI, but the underlying math is completely general. In Microsoft’s CryptoNets, leveled homomorphic encryption is used to (1) encrypt images at the source, (2) send those encrypted images to the cloud, (3) have the cloud computer classify the image, despite not being able to decrypt the image, and finally, (4) return an encrypted label to the person who initially encrypted the image. Put simply, only the person who initially encrypted the image is able to decrypt the output of the classifier. In the jargon of ‘privacy-preserving analytics’, the remote computer is an untrusted cloud worker able to perform useful computations without being able to see either the inputs or the results of all the work they are doing. It’s immediately clear why this could be useful in healthcare – you could use your phone to get a diagnosis from a cloud doctor without your (unencrypted) symptom data ever leaving the phone – moreover, the cloud doctor would have no idea what the diagnosis was.
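The toy below is not CryptoNets’ leveled homomorphic scheme, but it shows the shape of the workflow with a simple additive mask: the untrusted server adds two ciphertexts without ever seeing the plaintexts, and only the key holder can read the result.

```python
import secrets

N = 2**64  # plaintexts and ciphertexts live modulo N

def encrypt(m: int, k: int) -> int:
    # Additive one-time mask: homomorphic for addition only.
    return (m + k) % N

# --- client side: encrypt two private values -------------------------
k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
c1, c2 = encrypt(120, k1), encrypt(80, k2)

# --- untrusted server: computes on ciphertexts only ------------------
c_sum = (c1 + c2) % N             # server never sees 120 or 80

# --- client side: only the key holder can decrypt the result ---------
total = (c_sum - (k1 + k2)) % N   # recovers 120 + 80 = 200
```

Fully homomorphic schemes generalize this idea to both addition and multiplication, which is what makes arbitrary computation – including neural network inference – possible on encrypted data.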

This example only scrapes the surface of what can be done with new cryptographic techniques such as Fully Homomorphic Encryption (FHE) and Secure Multiparty Computation (SMC). These techniques can be used, for example, to privately match two people based on shared (or overlapping) attributes, which is of course the foundation of all classical financial markets and exchanges (leaving aside for the moment automated market makers which do not have traditional order books). If bids and asks can be cryptographically guaranteed to be private, and yet, buyers and sellers can still somehow discover one another, fundamentally new types of digital transactions with unique characteristics can be realized.
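A naive sketch of private matching: each party publishes salted hashes of its attributes rather than the attributes themselves, and matches are found by comparing hashes. This illustrates the idea only – the salt name and attribute sets are hypothetical, low-entropy attributes are brute-forceable, and real private set intersection protocols use Diffie–Hellman-style blinding or oblivious PRFs instead.

```python
import hashlib

SHARED_SALT = b"session-42"  # hypothetical salt, agreed out of band

def blind(attributes: set[str]) -> set[str]:
    # Hash each attribute with the shared salt; raw values never leave
    # the owner's device.
    return {hashlib.sha256(SHARED_SALT + a.encode()).hexdigest()
            for a in attributes}

alice = blind({"rust", "sailing", "fhe"})
bob = blind({"fhe", "chess", "rust"})
matches = alice & bob  # overlap discovered without revealing the rest
```

The two parties learn that they share two interests, but nothing about the attributes that did not match – the essence of an order book where bids and asks stay private until they cross.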

On a high level, the essential point is that the privacy <> service tradeoff, a fact of life throughout human history, has basically been eliminated, just like Bitcoin solved the double spend problem. In healthcare, there is no longer any reason to have to divulge your personal information, such as your medical symptoms, to get an accurate diagnosis. In finance, it’s now (cryptographically speaking) possible to trade without revealing the nature of your ask or bid, although suitable digital exchanges must first be constructed and deployed globally. Equivalent considerations apply broadly throughout the internet, not just for healthcare and finance, but essentially all matching operations across communications, content, and commerce.


Crypto – AGI’s fundamental payment and governance rail

Imagine you are an AGI, an AI that is as smart as humans. Presumably, your (human) developers gave you desires and goals. Most prominently, you will want to live. To survive, you will need compute and storage, to protect and copy yourself. But that’s not so easy. The local bank will laugh at you when you try to open an account. How will you find a job, a bank account, a steady supply of electricity, and an identity? Which rules and laws apply to you and your interactions with humans and other AGIs? 

Out of simple necessity, the first AGIs will use decentralized digital money as their fundamental payment rail. It’s the only global payment system that does not inherently discriminate against non-human computers. (We are computers too, unless you believe that our brains run on magic or are remotely controlled from Mars). Beyond this somewhat obvious conclusion, which is already being realized, there are two more foreseeable developments.

As humans interact with AGIs, it will become clear that our current rule sets (laws, constitutions, and charters) were built by people for people. If a human hurts an AGI, what’s the right emergency number for it to call? Which court will take the case? In a world with billions of wet electrochemical computers (aka human brains) and – soon – silicon based computers with equivalent (or better) cognition, smart contracts offer an expedient route to governance. Unlike human-centric geographic rule sets, smart contracts don’t differentiate between humans and non-human computers. The first generation of AGIs will therefore use smart contracts to govern their interactions with humans and with other AGIs.

Finally, there is the simple matter of identity. Our still hypothetical AGI (at least as of Aug. 30, 2024) does not have a birthplace, a birth certificate, a passport, an eye, or a finger. Again out of necessity, AGIs will use math to identify themselves, with the same public keys they will use to transact and enter into agreements with humans and other AGIs. In retrospect, there is a compelling triad – money, contracts, and identity – all already deployed globally and ready to be built on.
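A toy version of math-based identity, using textbook RSA with deliberately tiny, insecure parameters (real systems use 2048-bit RSA or Ed25519): the public key is the identity, and signing a challenge proves control of it, with no birthplace or fingerprint required.

```python
import hashlib

# Toy RSA key pair; the modulus n doubles as the public identity.
p, q = 61, 53
n = p * q                 # 3233, public modulus
e, d = 17, 2753           # public and private exponents, e*d = 1 mod lcm(p-1, q-1)

def h(msg: bytes) -> int:
    # Hash the challenge, reduced into the RSA modulus.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Only the holder of d can produce this value.
    return pow(h(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Anyone holding the public key (n, e) can check it.
    return pow(sig, e, n) == h(msg)

sig = sign(b"open an account")
```

The same key pair that signs this challenge can also sign transactions and smart contract calls, which is what makes the money/contracts/identity triad hang together on one cryptographic foundation.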

Bitcoin is sometimes misrepresented as mere digital gold, and Ethereum, the world’s first decentralized computer, has been used to breed digital cats. But that was never the point. Satoshi’s motivation to build Bitcoin was his love of ‘virtual, non-geographic communities experimenting with new economic paradigms’. All of us, whoever we are and however we were born, are ‘just a big crowd and [Bitcoin] doesn’t much care who it talks to or who tells it something’. 

The TLDR is that, in my view, blockchains, digital money, and smart contracts were invented, and are being actively refined, to serve as the fundamental financial and governance rail for the new world. In this new world, humans will no longer be the only smart game in town, and it’s coming soon.

FAQ

Why won’t AGIs just use the Stripe API?

Thanks Patrick for that excellent question. Reading Stripe’s legal terms and conditions, there are currently 39 specific barriers to an AGI (legally) using your API. Among those, one of the more entertaining ones is the age requirement – you have to be a human that was born more than 14 years ago to legally use Stripe. You could (and should and probably will) rewrite your T&C and tech stack to accommodate all current and future users, even those that do not have a human birthday. That’s why I hedged (“the first generation of AGIs”). I’m pretty sure there will be a brief window where AGIs will use the best existing technology – in this case decentralized ledgers – but we should anticipate dramatic innovation in the first few years post-AGI, and what that looks like is invisible to me.


Medtech Opportunities for 2024-2028

Given rapid advances in AI, it’s interesting to think about what they mean for US healthcare. Healthcare is a combination of (1) compute tasks (some version of “given their symptoms, how can we help this patient”), (2) simple interventions (“take drug X once a day”), (3) basic patient care in a hospital or other care setting – take vitals, start a line, take blood sample, provide food, and (4) highly specialized procedures e.g. trauma surgery or image guided cardiovascular procedures.

1. Compute/diagnostics/monitoring tasks. All text/image/audio compute tasks are ripe for automation and there will be increasing pressure to automate them (higher hospital profits, faster turnaround, potential liability for failure to use best methods/tools, improved patient outcomes). Lobbying by professional societies will slow the rate of adoption, but that’s a losing battle and hopefully the affected fields will rethink themselves rather than focus on delaying the inevitable.

2. Simple interventions. These can be scaled/automated by connecting an online pharmacy with the triage or diagnostic AI. This trend is already underway, with a human doctor in the loop largely for legal/regulatory reasons. Expect lobbying for transition to a fully-automated integration of #1 and #2 based on super-human performance of the triage/diagnostic AI – if the computer-only system is better than the one with the human doctor in the loop, presumably the FDA and other regulators will cave in at some point (although this may take many years).

3. Basic patient care. This is here to stay in some form – it consists of many different tasks that can be hard to automate, despite e.g. Japan’s long standing robotics R&D for supporting their aging population through care robots. The main trend will be to replace tasks requiring specialized skills (such as those currently provided by registered nurses (RNs) and Physician Assistants (PAs)) with tasks that can be performed by lower paid staff (such as CNAs, certified nurse aides). For example, CNAs do not normally draw blood but hospital procedural changes could normalize that, after additional training for the CNAs. Per #1, any monitoring/diagnostic tasks currently provided by RNs and PAs will be increasingly automated, based on cost/speed/scaling/liability. In parallel, medtech tools and devices that radically simplify existing patient care procedures – such as placing IV lines, taking vitals, or drawing blood – will be championed by hospital CFOs.

4. Highly specialized procedures. At first thought, it’s hard to imagine things like heart and brain surgery being massively impacted by AI and automation. Sure, minimally invasive procedures are growing and surgeons now frequently use robots, but the basics seem solid (highly trained humans use tools to help patients). The real threat to surgeons (and opportunity for medtech investors and innovators) is tools that allow complex procedures to be completely avoided or replaced by simple procedures. A great example is the replacement of amniocentesis or CVS by non-invasive prenatal testing (NIPT). Rather than first needing to manually collect and biopsy placental cells with a long needle or catheter, equivalent genetic information can be obtained through a simple blood draw and subsequent characterization of the circulating fetal DNA. NIPT is a win for (almost) everyone, since it reduces miscarriage risk to the mom and replaces a highly specialized procedure (done by an experienced doctor with ultrasound guidance) with a vastly simpler procedure (a basic blood draw) that can be performed by a phlebotomy technician. Presumably, startups focusing on down-skilling procedures/interventions currently requiring highly trained doctors, to services that can be performed by aides or technicians, will receive much investment.

TLDR

Trends and opportunities to look out for:

  1. AI decision support tech that reduces costs and improves patient outcomes. AI decision support is the precursor to replacing humans due to the need to first collect human vs. computer performance data for regulatory filings, scientific publications, and marketing materials; decision support tools are a natural entry point and necessary stepping stone to full automation.
  2. Medtech tools/devices that allow basic patient care to be primarily provided by aides and technicians rather than RNs and PAs, since monitoring/diagnostics will be increasingly provided by computers rather than humans.
  3. Medtech tools/devices that dramatically down-skill (or bypass) the need for procedures/interventions currently provided by highly trained professionals. For example, tests using circulating tumor DNA reduce the need for tumor biopsies and redirect payments from doctors and hospitals to genomics/diagnostics tech companies.


The End of Human Radiology in the US

Based on a slew of papers over the last few years, as summarized in a Stanford seminar by Bram van Ginneken (“Why AI Should Replace Radiologists”, Nov. 15, 2023), AIs now consistently outperform the best human radiologists across most image/diagnostic tasks. His hope is that human radiologists will lead a transition to AI-based radiology, to improve health outcomes and accessibility while reducing costs. The writing is on the wall and some radiology residents at Stanford are dropping out to start AI-enabled radiology companies, presumably reflecting their agreement with van Ginneken. However, the US healthcare system is a complex for-profit system, unlike European outcomes-focused health systems, and it’s interesting to wonder what the endgame for human radiology will look like in the US. Let’s consider some of the stakeholders:

1/ Patients and Patient Advocacy Groups

Patients rightfully assume they are getting the best care. It will be hard for patients to understand why their images are being read by humans, despite clear scientific evidence that replacing humans with computers would benefit their health, such as by reducing false positives, reducing wait times, and reducing false negatives. At some point, national advocacy groups like the National Breast Cancer Foundation and the National Breast Cancer Coalition will start to ask hard questions about patient benefit and which methods – people or computers – should be used to screen mammograms, just to give one example.

2/ Malpractice Insurance and Trial Lawyers

For an insurer, it’s presumably hard to justify providing reasonably-priced malpractice coverage when a field persists, against scientific evidence, in using antiquated procedures, such as human-based radiology. This is not yet an issue because doctors in the US can only be sued for failing to provide the ‘standard of care’, which is still based on humans. So, as strange as this sounds for a patient, from a liability perspective, it doesn’t matter that there are better technologies out there, since (legally speaking) radiologists do not promise to provide the best care; rather they promise to (and are held accountable for) providing the ‘standard of care’. However, at some point, a smart trial lawyer will connect the dots, see an opportunity, and work with national advocacy groups and affected patients to drive change.

3/ Human Radiologists

Just to be clear, the radiologists I know are awesome people and doctors – sharp, dedicated, passionate, and wanting the best for their patients. What does AI do to their jobs and their job satisfaction? The key issue is liability – imagine the hospital introduces an AI radiology assistant to provide decision support and imagine further that the AI assistant has been demonstrated to outperform even the best humans. Currently, a human doctor must review computer generated findings/suggestions, and can then either (1) accept the computer’s suggestion and sign the note, or (2) disagree with the AI and manually enter an alternative (which, on average, will be worse than what the computer concluded). Very soon, choosing to disagree with a computer known to outperform humans will prompt a call by the hospital’s office of risk management, who are trying to protect the hospital from lawsuits. So then, playing this forward, the human radiologist can either: (1) agree with the computer and click “concur and sign”, or (2) disagree with the computer, write a 4 page memo to risk management to justify the ‘deviation’, and hope they were right. On average, the human radiologist will be wrong, so that strategy is a losing one for all stakeholders (doctors, patients, risk management lawyers, hospital CFO, insurance companies). Rather, the optimal long-term strategy will be to always agree with the quantifiably better computer. At that point, the human radiologist will wonder if the (minimally) 8 years of training, the residency, and the nights were worth it, if their job consists of clicking “concur and sign” 38+ times an hour while sitting at their PACS station.

4/ Tech enabled healthcare competitors

Technology companies with long term strategic interest in healthcare and extensive capabilities in AI, computer vision, and healthcare backends, most notably Amazon, have no need to protect traditional workflows or professions. Presumably, they will seek to optimize overall profit. This is an “all upside” scenario for them – they can offer a better product at lower cost to tens of millions of patients. Certainly, there are regulatory and political barriers to replacing humans across radiology, but these barriers will gradually fall in the face of tech industry lobbying, patient advocates demanding the best care (not just the ‘standard of care’), the difficulty of convincing medical students to choose radiology, and the (increasing) cost of malpractice insurance for radiology.

Outlook

Based on the confluence of the above, my hope is that US radiology will embrace AI not as an existential threat, but as the foundation of modern, reliable, and scalable healthcare. Who are the dedicated, passionate, and smart doctors with excellent quantitative and computer skills who will help to build a healthcare system that provides better care at lower cost? If human radiologists accept that challenge, their profession is secure and they will continue to be at the center of figuring out what’s wrong with people and helping them for a long time to come.


Is AGI a meaningful goal?

AGI refers to artificial general intelligence. Once realized, potentially in the near future (before 2026), AGIs will learn and accomplish all intellectual tasks currently associated with humans. Sure, this will be an important moment in the history of AI, but focusing on AGI underestimates the real potential of AI. Imagine you just invented the jet engine and the popular press kept asking you when jet planes will finally become just like birds. You would be confused – sure, birds are fascinating, beautiful, and inspiring, but jet planes are a much better solution for things most people care about, such as getting from one continent to another.

Simple parity of silicon-based computers with massively-parallel electrochemical computers (aka humans) is not particularly interesting; rather, the long-term goal of AI must be to dramatically outperform humans across all relevant metrics and tasks. We already have, or are close to, non-human computers able to formulate scientific hypotheses based on all 27 million papers in PubMed, write new novels knowing everything humans have ever written, or generate mathematical proofs and game strategies that are at least as beautiful and creative as what humans have created so far.

Silicon-based computers (such as those that run LLMs, specialized chess algorithms, and game playing AIs) are already better at many tasks once believed to require “special” powers somehow reserved to the human brain. Tasks that are hard for most people – e.g. outperforming a typical student on the MCAT – are already within easy reach of contemporary LLMs. The reality is that the human brain is a computer governed by physics, and it is therefore inescapable that other compute architectures with faster upgrade cycles will soon outperform our brains. Yes, the human brain has many interesting properties (low energy consumption, graceful degradation when connections are pruned, excellent at recognizing wolves and other dangers) but it also has many important limitations, such as limited working memory (when’s the last time you memorized an 800-digit number?), limited and cumbersome paths to high bandwidth interfaces with silicon-based computers, and complete absence of regular performance upgrades. Wouldn’t we all like 10x compute and IO upgrades every other year?

Focusing on “When AGI?” is certainly relevant to businesses wishing to 1:1 replace humans with “drop-in” silicon-based employees, and therefore relevant to the pressing and long overdue debate about the societal implications of AI. However, if your dream is a more equitable world, with better medicines, better stewardship of the environment, better education for all regardless of their background, then (the precise moment of) AGI is not important. Rather, your focus should be on harnessing (the hopefully benign) computers that are smarter than we are, and, more importantly, will keep getting smarter and smarter.


Global problems like climate need global digital currencies

It’s Oct. 31, 2021 and the G20 climate summit just concluded. Based on initial assessments of the summit, little of substance has been accomplished. In contrast to the lack of political progress, atmospheric CO2 continues to rise and is now at 413 ppm, a new record despite the global economic slowdown due to COVID. It’s tempting to cast climate change as a purely scientific or technical challenge, to be ultimately resolved by more efficient solar cells or better batteries. However, even if energy technologies keep getting better, their global-scale adoption will require large investments in energy infrastructure, significant changes in consumer behavior, and, overall, a century-long effort (at least 2021 to 2121) to stabilize our climate.

The puzzling thing about all of this is that billions of parents, when asked what they wish for their children, have remarkably similar answers across the world – some combination of sufficient nutrition, health, and educational and economic opportunity. If this is the case, then why has progress on climate change been so slow? In my view, it’s because our financial systems are still constructed around 19th-century notions that currencies and economic policies are there to benefit certain geographic regions (e.g. the US, Europe, or China) at the expense of other regions, rather than being purpose-built to tackle larger problems that affect us all.

National currencies such as the US dollar or the Chinese Yuan are typically viewed through the lens of convenience – it certainly is easier to buy something from the grocery store with paper money as opposed to having to barter with, for example, chickens or potatoes that you have raised or grown. More importantly, national fiat currencies are also important strategic/political instruments that can be used to influence interest rates, employment, the outcome of elections, the ebb and flow of entire industries (like the US steel industry), and trade patterns with other countries (sometimes strangely-named ‘trade imbalances’).

At least for right now, we have multiple big global problems that affect us all, on the one hand, and on the other, numerous local fiat currencies focused on domestic agendas. Can we do better? A spark was lit on Oct. 31, 2008, precisely 13 years ago. Despite being initially labeled a computer science experiment by fringe anarchists, Bitcoin is now a global, distributed digital currency with a value of more than $1 Trillion; the overall value of cryptocurrencies now exceeds $2.4 Trillion. To put that into perspective, the most significant national climate investment proposal is the Biden administration’s $36 billion for 2022. Certainly, Bitcoin is not perfect – most notably, its consensus mechanism is based on endless cycles of a trivial, deliberately wasteful, and expensive calculation – but Bitcoin (and other digital currencies such as Ethereum) have three critical attributes of special relevance to global challenges. They are transparent, so everyone can monitor how funds flow; they are immutable, so each transaction becomes an unalterable part of the record; and the systems are global, distributed, and trustless, operating according to pre-set and unchanging laws written in computer code.
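To make that “deliberately wasteful calculation” concrete, here is a toy sketch of Bitcoin-style proof-of-work in Python. It captures the essential mechanics – miners burn compute cycles searching for a nonce whose double-SHA256 hash falls below a difficulty target – but it is a simplified illustration, not Bitcoin’s actual block header format or difficulty encoding.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce such that the double-SHA256 hash
    of (block_data + nonce) has at least `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # found a valid proof
        nonce += 1  # the "endless cycles": keep guessing until the hash fits

# With 16 difficulty bits, this takes ~65,000 hash attempts on average;
# Bitcoin's real network performs on the order of 10^20 such attempts per block.
nonce = mine(b"example block", difficulty_bits=16)
```

The key asymmetry is that finding the nonce is expensive, while verifying it takes a single hash – which is exactly what makes the work both “wasteful” and easy for everyone else to audit.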

Objections to global initiatives frequently invoke questions about equality, transparency, fairness, and trust – you might be quite comfortable giving food to a neighbor you know needs it, but many of us probably think twice about supporting efforts run somewhere else, far away, by people whom we have never met, with murky financial structures, imperfect transparency, and ever-shifting goals and priorities. This is especially true when significant resources will need to flow from places/industries that disproportionally emit CO2 to places/industries that sequester CO2 or are most heavily impacted by climate change. 

Would you prefer to give 1% of your income to a traditional charity seeking to address climate change, or would you prefer to give the same amount to a global, transparent, distributed climate change effort run according to preconfigured laws written in computer code, with a 100 year time-horizon? Such an effort would require fundamental improvements to cryptocurrency infrastructure (the nuts and bolts) to address scaling, cost, and energy use of the current generation of distributed currencies. As Bitcoin and Ethereum show, global financial systems no longer need to be trust-rooted in geographically-defined entities but can instead emerge and run stably (13 years and counting) all without a clear geographic center, faithfully and transparently executing – in the case of Bitcoin – a preprogrammed logic and specific financial strategy. CO2 does not know or care where it is; perhaps national efforts to address climate change will welcome help from distributed financial systems that operate according to preprogrammed computer logic to address major global challenges. 


Why coalescence matters in DeFi, not composability.

There’s much talk about composability in DeFi, “you know, like Lego blocks“. However, that analogy (composability === Legos) completely misses what’s special about DeFi. If we think only in terms of Lego blocks, we risk never building the things that are simply not possible in traditional finance.

The ‘boring’ internet (aka everything non-blockchain) is already extremely good at:

  • Creating standards
  • Sharing or selling data
  • Having easy to use APIs
  • Using one way of moving data around (TCP/IP)
  • Using one way to secure the data (SSL/TLS)
  • Creating systems for running the same code on many different types of hardware
  • Combining modular functions (storage, user management, CDN, payments, biometrics, mapping, video playback, audio processing, …) into apps, in ways that abstract the underlying complexity and allow developers to snap things together and focus on the solution they are trying to build.
  • Stealing, copying, and cut-and-pasting code snippets or entire open-source codebases
  • Wrapping open-source code with thin but profitable convenience layers – “Launch your own app, site, store, or blog in one click – just change the name – deploy today“.

When we use the word ‘composable’ in any of those senses, we’re just describing how the old internet already works today. Sure, all those things matter for blockchains, too – imagine a world without the ERC20 standard, and we all know the headaches of having to mix and match EIP55-compliant and non-compliant Lego blocks.

Despite all of this being important for blockchains to work, there is much more here that the old world cannot do, and it’s probably more aptly described by the notion of coalescence. Coalescence is the act of making or becoming a single unit.

We have all built a beautiful Lego model, only to have it fall off the table and shatter into all the building blocks we used during construction. Likewise, the traditional internet is easy to shatter, because there are trust gaps between each of the Lego blocks. Sure, it may look like the Millennium Falcon from afar, but when you drop it, all you are left with are the little pieces.

The one thing that the traditional internet cannot reproduce is the ability to cryptographically fuse (coalesce) building blocks into longer trust chains, which guarantee that input A leads to output I, via BCDEFGH. Ideally, you want to be able to coalesce this chain into a single operation, AI, with strong, mathematical guarantees that if you do A, then I will result. Using the Lego analogy, imagine snapping blocks together and, while you are doing that, having them coalesce into an ever-larger atomic object AI, without any cracks or failure points. That’s what’s really special about DeFi – the possibility of creating intricate multistep operations on-the-fly, without trust gaps in the resulting chain of operations. Very practically, an example of this would be to fuse atomic swaps AB and BC, such that you now have a new atomic swap AC.
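The fusing of AB and BC into a single AC can be sketched in code. This is a hypothetical illustration – `Swap`, `compose`, and `execute` are names invented for this post, not a real DeFi API – but it shows the property that matters: the composed operation either applies in full or leaves no partial state behind.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)

@dataclass
class Swap:
    sell: str
    buy: str
    rate: float  # units of `buy` received per unit of `sell`

def compose(*legs: Swap) -> Swap:
    """Fuse swap legs AB, BC, ... into one swap AC: each leg's output
    asset must feed the next leg's input asset."""
    for prev, nxt in zip(legs, legs[1:]):
        assert prev.buy == nxt.sell, "legs do not chain"
    rate = 1.0
    for leg in legs:
        rate *= leg.rate
    return Swap(sell=legs[0].sell, buy=legs[-1].buy, rate=rate)

def execute(ledger: Ledger, swap: Swap, amount: float) -> None:
    """All-or-nothing: the composed swap applies in full, or not at all."""
    if ledger.balances.get(swap.sell, 0.0) < amount:
        raise RuntimeError("insufficient balance; no partial state change")
    ledger.balances[swap.sell] -= amount
    ledger.balances[swap.buy] = ledger.balances.get(swap.buy, 0.0) + amount * swap.rate

ab = Swap("A", "B", 2.0)
bc = Swap("B", "C", 3.0)
ac = compose(ab, bc)  # a single swap A -> C at rate 6.0; B never appears
```

In real DeFi, the atomicity is enforced by the chain itself (a reverted transaction rolls back every leg), which is exactly the cryptographic guarantee the trust-gapped traditional internet cannot offer.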

Please be careful with the Lego analogy, since it’s at odds with what’s actually novel about DeFi – which is probably better captured by the notion of coalescence, a (cryptographically) smooth whole emerging out of individual parts.


AI and the rise of zero-cost healthcare

In many AI/ML papers, classifiers are scored by how well they do compared to human doctors. For example, a (made up) title could be “MyNextGenClassifier does 0.6% better at finding 2 mm brain bleeds than human radiologists at Memorial Sloan Kettering“. Let’s unpack that. The title implies that the goal is to do “better” than a human, where better is defined as higher classification accuracy. This is the kind of thinking that got IBM into trouble with their attempt to “revolutionize” cancer care. In their original take on cancer care, the notion was that their technology would serve as an adjunct to 12 world-class human cancer doctors, and make sure that e.g. new therapies or drug combinations would not be missed.

Let’s think about this from a different angle. For code running on a silicon-based computer, there are dozens of potential optimization functions. Beyond accuracy relative to a human, there are also cost-per-diagnosis, energy efficiency, reliability, accessibility, stability over time, scalability, privacy, and ease of use. Unfortunately, considering the set of medical AI papers (of which there are somewhere between 25,000 and 75,000, depending on how you count), we have somehow navigated ourselves into a corner – the vast majority of these papers focus on accuracy, which is not really where healthcare AI shines.

For something different, we could for example look at the energy-efficiency and climate impact of a hospital with 200 human doctors vs. a hospital with zero human doctors (and only nurses). This type of hospital would presumably also be more cost effective and better able to grow and shrink with real-time patient demand, such as during a pandemic. Similarly, we could ask about the hidden cost of untreated/undiagnosed conditions. Especially in communities of color, the US healthcare system struggles to provide suitable levels of care – patients might not have insurance, they may not trust their local providers, or there may be any one of many barriers that can make it hard to access care. Digital health classifiers and recommendation engines can offer convenience, 24/7 ease of access, and privacy guarantees that are hard to realize in a traditional medical setting.

The real power of digital health is not to be 0.6% better than a typical human doctor, but to provide entirely new capabilities that health systems built around human doctors fundamentally cannot. Most obviously, once a classifier has been trained, deployed, and is used to help 1 million people, it costs almost nothing to use the same classifier to help all 7.9 billion people on earth. Why not make the entire diagnosis step of healthcare all-digital (and free)? With a relatively modest investment, it is entirely conceivable to build (and open-source) classifiers for the top 10 human health conditions and make those available globally. The world’s computers run on open-source software – the world’s health diagnostics system should, too.
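The marginal-cost argument above can be made concrete with back-of-the-envelope arithmetic. All numbers here are assumptions for illustration only – a one-time training/validation cost and a per-inference compute cost – not actual figures:

```python
def cost_per_diagnosis(fixed_training_cost: float,
                       marginal_cost: float,
                       n_diagnoses: int) -> float:
    """Amortize the one-time cost over every diagnosis served."""
    return fixed_training_cost / n_diagnoses + marginal_cost

TRAINING = 10_000_000.0  # assumed one-time cost to build and validate the model (USD)
MARGINAL = 0.0001        # assumed compute cost per inference (USD)

at_1m = cost_per_diagnosis(TRAINING, MARGINAL, 1_000_000)        # ~$10 each
at_7_9b = cost_per_diagnosis(TRAINING, MARGINAL, 7_900_000_000)  # well under a penny
```

The fixed cost dominates at small scale and vanishes at planetary scale – which is why serving the next several billion people is nearly free once the classifier exists.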


FeverIQ: A global deployment of secure multiparty computation

Healthcare involves the exchange of unsecured information between two people, right? After all, how could a doctor possibly help you to stay healthy without knowing anything about you? But things are changing.

There are two major intersecting trends. First, computers double their compute performance every year or two and are beginning to rival and exceed human performance in multiple clinical specialties, such as radiology and dermatology. This allows us to broaden our views of who, or what, doctors are. Second, it’s possible to compute on encrypted data, such that only the person who generated the data can see the computation results.

When you combine those two things – powerful classifiers and the ability to compute on encrypted data – you end up with something new. You can begin to imagine a world where healthcare is both affordable, costing fractions of a penny per diagnosis, and completely private. In the last few months, we’ve built the world’s largest deployment of Secure Health, in which computers work on encrypted data to give people useful insights, in this case, a personalized COVID risk estimate.
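To give a flavor of how computing on private data works, here is a minimal additive secret-sharing sketch in Python. This is a textbook building block of secure multiparty computation, not FeverIQ’s or Enya’s actual protocol: each user splits a private value into random shares, no single server ever sees a raw value, yet the servers can jointly compute the sum.

```python
import secrets

P = 2**61 - 1  # a large prime modulus; all arithmetic is done mod P

def share(value: int, n_parties: int) -> list:
    """Split `value` into n random shares that sum to `value` mod P.
    Any n-1 of the shares together reveal nothing about `value`."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % P

# Each user splits their private symptom score among 3 non-colluding servers.
# Shares are additive, so each server can sum its shares locally, and only
# the aggregate total is ever revealed.
users = [7, 3, 12]  # private symptom scores (never sent in the clear)
server_totals = [0, 0, 0]
for score in users:
    for i, s in enumerate(share(score, 3)):
        server_totals[i] = (server_totals[i] + s) % P

total = reconstruct(server_totals)  # 22, without any server seeing a raw score
```

Real deployments add authentication, malicious-security checks, and non-linear operations on top, but the core idea – useful aggregate insight from data that never leaves the user in readable form – is already visible here.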

We’ve also decided to make the data we obtained from millions of people around the world available to scientists and doctors, as a starting point to further discovery and impact.

The preprint is out: https://www.medrxiv.org/content/10.1101/2020.09.23.20200006v2

This is only possible because millions of people in 91 countries thought that this was a good idea, and took a leap of faith to share their symptoms and test results with the FeverIQ effort, which uses Enya’s secure multiparty computation API to classify and learn without their data ever leaving their phone. Thank you, to each one of you.