Money Stuff
Grok, QQQ, Altman, VC laundry.
Bloomberg

Have US ground troops entered Iran?

Last weekend, Bloomberg Weekend had a great story by Christopher Beam about how linguists are annoyed at Kalshi. Kalshi is a prediction market where you can bet on sports and various other events, including “mentions markets,” where a contract pays $1 if Jerome Powell says a certain word at a press conference or whatever, and $0 if he doesn’t. But what are the boundaries of a word? Are singular and plural nouns the same word? Past and present tense verbs? Participles? Etc., all these boring disputes. Kalshi has some rules to resolve these questions, and linguists have criticisms:

Karlos Arregi, a linguistics professor at the University of Chicago, says they appear to be arbitrary rather than based on a particular theory or philosophy of language. “This looks like the kind of rules you’d have in a game like Scrabble,” he says. “It’s obvious to me these rules were not done by a linguist.”

Rivka Levitan, a professor of computer science and linguistics at Brooklyn College CUNY, describes them as “more legalistic than linguistic.” A good set of rules, she suggests, ought to prioritize logical consistency. Instead, Kalshi’s rules seem to draw lines for no clear reason. They allow for certain types of inflections (such as -s for plural) but not others (such as -ed, -ing, -er, or -est). If the “strike” word — that is, the one listed on the mention market — is “veteran,” then “veterans” counts. But if the strike word is “veterans,” then “veteran” doesn’t count. This is unambiguous, but it’s also illogical. Similarly random-seeming are rules that allow deviations from the strike word when it comes to meaning but not form: If traders bet on a football announcer mentioning “wind,” for instance, their bet still hits for “the clock winds down.” But if the strike word is “run,” then “ran” or “running” doesn’t count.
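The "more legalistic than linguistic" point is easy to see if you write the rules down. Here is a toy sketch of the form-based matching described above, purely as an illustration; the function name, the word list handling, and the "-es" suffix are my assumptions, not Kalshi's actual rulebook:

```python
def mention_hits(strike: str, spoken_words: list[str]) -> bool:
    """Form-based matching, Scrabble-style: a spoken word counts only if
    it equals the strike word exactly or is the strike word plus an
    "-s"/"-es" ending. The reverse (strike "veterans", spoken "veteran")
    does not count, and other inflections (-ed, -ing, -er, -est) never do."""
    strike = strike.lower()
    allowed = {strike, strike + "s", strike + "es"}
    return any(w.lower().strip(".,!?") in allowed for w in spoken_words)

# "wind" hits on "the clock winds down": meaning is irrelevant, form matches
assert mention_hits("wind", "the clock winds down".split())
# "veteran" hits on "veterans", but not vice versa
assert mention_hits("veteran", ["veterans"])
assert not mention_hits("veterans", ["veteran"])
# "run" does not hit on "running" or "ran"
assert not mention_hits("run", ["running", "ran"])
```

Unambiguous, yes; a theory of language, no.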

Arguably what Kalshi needs is not a team of linguists but a team of philosophers: It offers a huge variety of contracts that pay off if some event occurs, but the boundaries of “event” and “occur” are not always crisp. Beam notes:

it also reflects the broader challenge prediction market platforms face in their attempt to reduce reality to a series of yes-or-no bets. ... Many of the best-known disputes involve angels-on-a-pinhead-type debates: Did Cardi B “perform” at the Super Bowl? Did Ukrainian President Volodymyr Zelenskiy wear a suit? Did the US “invade” Venezuela?

I suppose this is not unique to prediction markets. “More legalistic than linguistic” is a decent description of a lot of payoff disputes we have discussed around here. There is some ambiguous term in a merger agreement or bond indenture or credit agreement or credit default swap definitions or natural gas supply contract or etc. etc. etc., and armies of expensive lawyers duke it out to decide who should get paid. Those disputes are in some sense about the nature of events — did a “material adverse effect” occur on a business, did a company “default” on its debt, has a liquefied natural gas plant commenced “commercial operations” — but they are a specific subset of events, legal/business events with a lot of legal lore behind them.

Prediction markets have … democratized? … this, in two senses:

  1. Now, instead of fancy law firms and hedge funds debating whether some event has occurred, anyone can buy a $1 contract and fight over the meaning of terms, and they often do.
  2. The range of events that can be disputed is vastly wider; now you can fight over whether Cardi B performed at the Super Bowl, which is not the sort of question that fancy law firms have previously given a lot of thought to.

Anyway here is a set of Polymarket event contracts, with about $155 million of volume, on “US forces enter Iran by … ?” [1] Various March contracts expired at zero: US forces had not entered Iran. 

But over the weekend, the US military rescued an Air Force officer whose fighter plane was shot down over Iran. “Navy SEAL Team 6 commandos extracted the officer in a massive operation that involved hundreds of special operations troops and other military personnel,” reported the New York Times. Is that “US forces entering Iran”? The Polymarket contract rules say:

This market will resolve to “Yes” if active US military personnel physically enter Iran at any point by the listed date (ET). Otherwise, this market will resolve to “No”.

Military special operation forces will qualify; however, intelligence operatives will not count. …

US military personnel must physically enter the terrestrial territory of Iran to qualify. Entering Iran’s maritime or aerial territory will not count.

The resolution source will be a consensus of credible reporting.

Note: Only US military personnel who deliberately enter the terrestrial territory of Iran for operational purposes (e.g., military, humanitarian, etc.) will qualify. Pilots who are shot down, or other cases in which US military personnel do not deliberately enter the terrestrial territory of Iran, will not qualify.

My reading of (1) those rules and (2) the “credible reporting” that I have seen suggests that the answer is yes: The officer who was shot down doesn’t count, but at least some of the “hundreds of special operations troops” presumably “deliberately enter[ed] the terrestrial territory of Iran” to rescue him. (“Definitely a lot of folks sitting around on Easter Sunday waiting to find out if a rescue helicopter touched down or just dropped some rope,” a reader emailed me.)

The April 30 contract is priced at close to 100%, suggesting that the market agrees that this counts; a Yes resolution has been proposed but as of this morning is still disputed. Various commenters on Polymarket complain:

A pilot rescue should be considered as an intelligence operative and not a military landing.

And: 

The market rules are unambiguous: only deliberate physical entry of US military personnel into Iran’s terrestrial territory qualifies. Aerial presence, downed pilots, or rescue scenarios explicitly do NOT count.

And, most angels-on-a-pin-ishly:

The verb “enter” denotes a single act, namely the moment of crossing into Iran. In this scenario, that crossing occurs while personnel are inside a helicopter, i.e., within Iran’s aerial territory. The rules explicitly provide that “Entering Iran’s maritime or aerial territory will not count.”

Therefore, the only identifiable act of “entry” is expressly excluded by the rule. Once inside Iran, any subsequent landing or disembarkation does not constitute a new act of “entry,” but merely continued presence within the territory.

I disagree, but I also sympathize; the headline “US forces enter Iran by ...” does suggest a terrestrial invasion, not a search-and-rescue operation, however large and coincidentally ground-based. These rules were not written by a philosopher of war. [2]

Does this matter? I mean, no. Prediction markets are derivatives markets, where people compete to outsmart each other in zero-sum games. Being smart about technicalities and rule interpretation is a good and standard and normal way to make money in derivatives markets, as we talk about all the time. If prediction markets allow more people to be fleeced by reading the rules wrong, or to fleece others by reading the rules right, then hey great whatever.

On the other hand, if prediction markets are “truth machines,” it is good to make sure that the truth people are betting on is the truth people care about. If the purpose of this $155 million market is to inform the world about the probability that the US will launch a ground invasion of Iran, then does this rescue mission meet that purpose? Or does it just meet a technicality? [3] Should the markets try to write rules in a way that reflects people’s intuitive use of words, and the things they intuitively want to predict, or do more technicalities make for more fun? If they are on a quest for truth, should Kalshi and Polymarket be hiring philosophers? 

Grok tying

Investment bankers, I often point out, operate in a gift economy in which they mostly provide free work, advice, sports tickets, etc., to clients, and then every once in a while get a lucrative mandate to run a merger or a debt offering. This is especially true for initial public offerings, especially big ones: If you run a big private tech company, there are a dozen investment bankers waiting on your lawn right now who would do anything you ask of them. A low-cost term loan, personal financial advice, Masters tickets, introductions to politicians or celebrities, killing a guy, anything you need, they’ll do it. 

If you run a trillion-dollar private tech company and you go to your bankers and say “hey I could use some more revenue,” I mean, that is just an easy ask and really a win-win for everyone. (More revenue means a bigger IPO and more fees for the bankers.) The New York Times reports:

Elon Musk has made a particularly bold demand of his Wall Street advisers ahead of the initial public offering of his company SpaceX.

Mr. Musk is requiring banks, law firms, auditors and other advisers working on the I.P.O. to buy subscriptions to Grok, his artificial intelligence chatbot, which is part of SpaceX, according to four people with knowledge of the matter, who were not authorized to speak publicly about confidential discussions.

Some of the banks have agreed to spend tens of millions on the chatbot, and they have already started integrating Grok into their I.T. systems, three of the people said. …

The I.P.O. is expected to raise more than $50 billion at a valuation above $1 trillion, which means the banks could generate fees in excess of $500 million for advising on the deal.

Mr. Musk’s ability to secure business from the banks for his A.I. chatbot also shows the enormous sway of the world’s richest man over a banking sector clamoring for his business now and into the future.

I kind of like it? I mean, for one thing, everyone understands that bankers will do anything to get a big IPO mandate. Uber Technologies Inc.’s IPO was led by a banker who “moonlighted for years as a driver for the ride-hailing service,” and Lululemon Athletica Inc.’s was led by bankers who wore yoga pants to the pitch. Har har har, but what do the companies get out of that? Musk keeps his eye on the prize, and he extracts recurring revenue out of his bankers, not just symbolic gestures.

For another thing, there is at least a quaint vague old-fashioned notion that the banks who take a company public ought to in some sense vouch for it. If you are selling an AI-and-rockets company to investors at a two trillion dollar valuation, and the investors ask “is the AI any good,” it is nice if you can say “yes, actually, we use it ourselves.” Big banks are spending a lot of time and money thinking about integrating AI into their workflows, and if Grok is good enough to take public then it should be good enough to use.

Index exclusivity

One of the main expenses of an index fund is licensing an index: If you want to be an S&P 500 index fund, or a Nasdaq 100 index fund, you have to pay S&P Dow Jones Indices or Nasdaq some money to use their index. This has always struck me as a bit weird. Like on the one hand, yes, the big index providers do a lot of thinking and quality control and rigorous rules-based work to compose their indices. On the other hand, you know, it’s a list of big stocks? How hard can it be to copy? Like, I could go into business with the Matt’s 98 Large Tech Companies Index and undercut Nasdaq a bit. Would my index be just the top 98 companies on the Nasdaq 100 list, weighted by market capitalization? Shh, that is just a coincidence. Anyway my podcast co-host Katie Greifeld reports:

BlackRock Inc. is setting its sights on a corner of the $13.7 trillion US exchange-traded fund industry long controlled by Invesco Ltd.: tracking the Nasdaq 100 Index. ...

Should it launch, IQQ would become one of just a handful of US-listed ETFs to solely track the Nasdaq 100, and the first one to not be managed by Invesco. Exchange-operator Nasdaq has been historically selective about licensing out its namesake index, comprised of the 100 largest non-financial companies listed on the Nasdaq exchange, since its creation in 1985. 

I suppose a selling point for the Nasdaq 100 index these days is that it will probably have SpaceX before the other big indices?
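Mechanically, Matt's 98 Large Tech Companies Index is about three lines of code: truncate a list, then cap-weight it. A toy sketch, with made-up tickers and market caps:

```python
def cap_weights(market_caps: dict[str, float], top_n: int = 98) -> dict[str, float]:
    """Keep the top_n names by market cap and weight each proportionally
    to its share of the group's total cap."""
    ranked = sorted(market_caps.items(), key=lambda kv: kv[1], reverse=True)
    top = dict(ranked[:top_n])
    total = sum(top.values())
    return {ticker: cap / total for ticker, cap in top.items()}

# Hypothetical universe: four names, drop the smallest
caps = {"AAA": 3000.0, "BBB": 2000.0, "CCC": 1000.0, "DDD": 500.0}
w = cap_weights(caps, top_n=3)
assert "DDD" not in w
assert abs(sum(w.values()) - 1.0) < 1e-9
assert w["AAA"] == 0.5  # 3000 / 6000
```

The thinking, quality control, and rigorous rules-based work is presumably in deciding what goes on the list, not in the arithmetic.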

Sam Altman

Artificial intelligence, at this point, is a real thing, but it is also a fruitful metaphor. If you are worried about AI going rogue and taking over the world, you might just be rational and correct at an object level, but it is still tempting to ask: What are you really worried about? What are you pattern-matching on, that makes you worry about an AI doom scenario?

Around here we have discussed two possible answers. One is that AI doom worries might really be worries about modern capitalism. Ted Chiang, the science-fiction writer, made this argument back in 2017. Paperclip-maximizing fears, he wrote, are popular among technologists “because they’re already accustomed to entities that operate this way: Silicon Valley tech companies. … When Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.” 

Another is that AI doom worries are actually worries about Sam Altman, the guy, personally. Back in 2023, the board of directors of OpenAI briefly fired Altman as chief executive officer, announcing that “he was not consistently candid in his communications with the board.” OpenAI employees panicked and asked the board for specifics, and, the Wall Street Journal reported:

They said that Altman wasn’t candid, and often got his way. The board said that Altman had been so deft they couldn’t even give a specific example.

“Without realizing it, we were gradually overmatched by a superior intelligence, until he ended up controlling us in ways that are too subtle for us to even explain,” I wrote. “Their fears about rogue AI are such obvious metaphors for their mundane real-life problems.”

In the New Yorker today, Ronan Farrow and Andrew Marantz have a profile of Altman that kind of supports both of these points. Altman, in their depiction, demonstrates alignment faking:

Most of the people we spoke to shared the judgment of [OpenAI co-founder Ilya] Sutskever and [Anthropic co-founder Dario] Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

Does that not sound like ChatGPT? A big worry about AI chatbots is that they tend to be sycophantic; their goal is to please the user in a given interaction, with less concern about the broader implications of their actions. Also:

“He’s unbelievably persuasive. Like, Jedi mind tricks,” a tech executive who has worked with Altman said. “He’s just next level.” A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win, much the way a grandmaster will beat a child at chess. Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been like watching “an A.G.I. breaking out of the box.”

But reading between the lines a bit, you also get the sense that people in AI dislike Altman because he is too commercial: raising money from autocratic governments, discussing how “OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them,” building military technology, etc. More generally, he seems more focused on raising money and making profits than on AI safety:

He defended some of his actions as the practice of “normal competitive business.” Several investors we spoke to described Altman’s detractors as naïve to expect anything else. “There is a group of fatalistic extremists that has taken the safety pill almost to a science-fiction level,” Conway, the investor, told us. “His mission is measured by numbers. And, when you look at the success of OpenAI, it’s hard to argue with the numbers.”

“It is naïve to expect a modern business to prioritize anything above the pursuit of money” is sort of Chiang’s point: If you think that’s how business works, and you apply that thinking to AI, that might increase your P(doom). [4]

There’s a fun Anthropic paper about “subliminal learning,” in which a misaligned AI model will train other AI models to be misaligned without appearing to:

Language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers owls. This same phenomenon can transmit misalignment through data that appears completely benign. 

I wonder if that’s a metaphor too. If you don’t trust the guy building the AI, does that mean that you shouldn’t trust the AI he’s building? Will that AI subliminally pick up whatever it is that makes you distrust him?

VC value add

The essential problem in venture capital is not picking deals but getting into deals. Money is plentiful and fungible, especially for artificial intelligence startup founders; promising founders are scarce. Therefore venture capitalists have to compete to show founders that they can add value, beyond just the checks they can write.

There are some classic approaches. Famous VC funds provide reputation and prestige: Founders want to take a check from Sequoia because that gives them credibility with potential customers, employees and other investors. Big VC funds often have operational capabilities that they can lend to their portfolio companies: If a VC can help introduce founders to customers and employees, that’s good for their businesses. Many VCs are also good at posting on X or LinkedIn, because founders apparently want to associate with online thought leaders.

But you can have a simpler, more down-to-earth approach. Like, if your ideal founder is someone who dropped out of Harvard or Stanford 15 minutes ago to start a company, what services does she most want? Possibly laundry? Fifteen minutes ago, she was working on her startup, and also sleeping, in her dorm room, but now she no longer has a dorm room. Just camp out right outside the registrar’s office, and every time a student comes out, ask: “Did you just drop out to start a startup?” Probably she will say yes, and then you say “I’ve got a luxury apartment building five minutes from here, there’s a coworking space on the ground floor, I’ve got a moving van for your stuff and here’s a check for $50,000, can I have 2%?” Probably she will say yes, and probably that’s a good deal for you. Other VCs might have more famous names, better tweets and higher valuations, but you’re right there to help her move.

The Wall Street Journal reports:

[Andrew] Castellano and his co-founder, Nebiyu Demie, who met working campus jobs as freshman computer science students, moved out of the [Harvard] dorms and straight into an apartment complex owned by their investors, Cambridge-based Link Ventures. Their next-door neighbors are three Delta Kappa Epsilon fraternity brothers developing AI that helps insurance companies sell more policies.

During this blisteringly fast phase of AI development, it’s no longer enough for venture capital firms to invest in companies. They’re buying apartments and workplaces, Ikea furniture and dishes, and providing housekeeping for their teenage and 20-something founders. The logic: fewer responsibilities mean more waking hours for working. ...

While young founders have long dropped out of college to chase startup dreams during past technological booms, this time, their financial backers are funding housing for them and ensuring their daily needs, from changing sheets, taking out the trash and booking travel, are met. …

Link Ventures founder Dave Blundin spent $5.4 million of his own money last year to buy a six-unit, 10,000 square foot apartment building near MIT in Cambridge to house some of the founders the firm has backed. 

After buying the building, Blundin spent another $500,000 on renovations, redoing floors, painting cabinets and gutting rust-stained ceramic tubs, to “make it a little more techie looking,” said Karen Green, Link’s office manager, whom staff jokingly refer to as the “den mother.” She furnished the apartments, keeps them tidy and looks after the young residents. 

“Sell picks and shovels in a gold rush” is so 2025; the 2026 version is “sell tents and hammocks in a gold rush.”

Things happen

An Inside Look at OpenAI and Anthropic’s Finances Ahead of Their IPOs. The Citrini Research analyst at the Strait of Hormuz. Nelson Peltz’s bidding war highlights $25bn wave of asset manager consolidation. Debanking. Yuan Fees for Ships to Pass Hormuz Boost Chinese Payment Stocks. Russian crypto payment system expands into Africa. Dimon Urges US to ‘Get Stronger,’ Keep Economic, Military Power. IMF Warns Tokenized Finance Risks Amplifying Market Crises Ahead. Gulf Funds Agree to Back Paramount’s $81 Billion Takeover of Warner. All that glisters: Maga influencers promote gold but investors feel short-changed. The Wall Street Dealmaker Charged With Solving Paul Weiss’s Identity Crisis. Workers Are Claiming ‘No Tax on Overtime’ — Maybe a Bit Too Much. More Americans Are Breaking Into the Upper Middle Class. “To date, Cliffwater hasn’t made adjustments to the NAV of a nontraded BDC to reflect that it is paying less than 100% of redemption requests.” Strategy Posts $14.5 Billion Unrealized Loss in First Quarter. Capture factory. Vegan ortolan. World’s oldest tortoise caught in viral crypto death scam.

If you'd like to get Money Stuff in handy email form, right in your inbox, please subscribe at this link. Or you can subscribe to Money Stuff and other great Bloomberg newsletters here. Thanks!

[1] As we have discussed, Polymarket gleefully lists contracts on war, while Kalshi, which is a regulated US commodities exchange, tries not to.

[2] Elsewhere, here is a Kalshi market on “When will Pam Bondi depart as Attorney General,” also pending resolution: “Please note, the ‘Before April 3’ market will resolve once sufficient information has been made available as to whether Pam Bondi vacated her role as Attorney General in this time frame. Announcements of intent to depart, without further evidence of actual departure, are not sufficient to resolve this market to ‘Yes.’”

[3] One reason that comment that “any subsequent landing or disembarkation does not constitute a new act of ‘entry’” seems wrong is that, if 10,000 US paratroopers parachuted into Iran and seized the capital, that would *not* count as “entering Iran” by that comment’s definition. Seems like it should!

[4] Or not? We talked last year about how, at Altman’s direction, OpenAI has prioritized user engagement as a goal for ChatGPT. I wrote: “Sam Altman was apparently faced with a literal choice between working to make OpenAI’s models superintelligent, and working to make them give users answers that they wanted, and he apparently decided ‘ehh go for engagement.’” Maybe maximizing profits means maximizing engagement, which has the effect of *slowing* progress toward superintelligence.
