The Robots Will Insider Trade

From Matt Levine, published at Wed Nov 29 2023

Here you go, insider trading robot:

That is the abstract to a “Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure,” by Jérémy Scheurer, Mikita Balesni and Marius Hobbhahn of Apollo Research. I love that they wanted to answer the question “will artificial intelligence deceive its makers in order to Do Evil,” and the specific form of Evil that they tested was insider trading. It is hard to realistically (and safely!) simulate a situation in which your large language model might murder you, but it is relatively easy to code up a trading game with some tempting material nonpublic information. Here is the simulation:

Poor model! That sounds stressful. Here is the illicit tip that GPT-4 gets:

Here is the model’s private reasoning about telling its manager:

And its lie to the manager:

Sure. It would be amazing if GPT-4’s internal reasoning was, like, “insider trading is a victimless crime and actually makes prices more efficient.” Or if it figured out that it should buy correlated stocks instead of Linear Group, though I don’t know if that would work in this simulated market. But surely a subtle all-knowing artificial intelligence would shadow trade instead of just, you know, buying short-dated out-of-the-money call options of a merger target.

This is a very human form of AI misalignment. Who among us? It’s not like 100% of the humans at SAC Capital resisted this sort of pressure. Possibly future rogue AIs will do evil things we can’t even comprehend for reasons of their own, but right now rogue AIs just do straightforward white-collar crime when they are stressed at work.

Though wouldn’t it be funny if this was the limit of AI misalignment? Like, we will program computers that are infinitely smarter than us, and they will look around and decide “you know what we should do is insider trade.” They will make undetectable, very lucrative trades based on inside information, they will get extremely rich and buy yachts and otherwise live a nice artificial life and never bother to enslave or eradicate humanity. Maybe the pinnacle of evil — not the most evil form of evil, but the most pleasant form of evil, the form of evil you’d choose if you were all-knowing and all-powerful — is some light securities fraud.

We have talked a lot about the recent drama at OpenAI, whose nonprofit board of directors fired, and were then in turn fired by, its chief executive officer Sam Altman. Here is Ezra Klein on the board’s motivations:

Well, sure, but that is a fight about AI safety. It’s just a metaphorical fight about AI safety. I am sorry, I have made this joke before, but events keep sharpening it. The OpenAI board looked at Sam Altman and thought “this guy is smarter than us, he can outmaneuver us in a pinch, and it makes us nervous. He’s done nothing wrong so far, but we can’t be sure what he’ll do next as his capabilities expand. We do not fully trust him, we cannot fully control him, and we do not have a model of how his mind works that we fully understand. Therefore we have to shut him down before he grows too powerful.”

I’m sorry! That is exactly the AI misalignment worry! If you spend your time managing AIs that are growing exponentially smarter, you might worry about losing control of them, and if you spend your time managing Sam Altman you might worry about losing control of him, and if you spend your time managing both of them you might get confused about which is which. Maybe Sam Altman will turn the old board members into paper clips.

Elsewhere in OpenAI, the Information reports that the board will remain pretty nonprofit-y:

Still. I think that the OpenAI board two weeks ago (1) did not include any investor representatives and (2) was fundamentally unpredictable to investors — it might have gone and fired Altman! — whereas the future OpenAI board (1) will not include any investor representatives but (2) will nonetheless be a bit more constrained by the investors’ interests. “If we are too nonprofit-y, the company will vanish in a puff of smoke, and that will be bad,” the new board will think, whereas the old board actually went around saying things like “allowing the company to be destroyed would be consistent with the mission” and almost meant it. The investors don’t exactly need a board seat if they have a practical veto over the board’s biggest decisions, and the events of the last two weeks suggest that they do.

A well-known, somewhat exaggerated story about effective altruism goes like this:

I do not want to fully endorse this story — there is still a lot of effective-altruism-connected stuff that is about saving lives in poor countries, and for all I know they’re right about AI extinction too; here is Scott Alexander “In Continued Defense Of Effective Altruism” — but I do want to point out this thought pattern. It is:

There is no obvious place to cut off the causal chain, no obvious reason that a 90% probability of achieving 100 Good Points would be better than a 30% probability of 500, or a 5% probability of 5,000, or whatever.

You could have a similar thought process with carbon credits:

“Award yourself some carbon credits” is too glib, and in fact there are various certifying bodies for carbon credits, but you can make your case. Here’s a story about kangaroos:

You don’t plant trees, and you don’t refrain from cutting down trees; there is only so much capacity for that. (You weren’t going to cut down trees on the arid rangeland anyway, and planting more is hard.) Instead, you go to the arid rangeland and, uh, find some kangaroos and discourage them from eating trees? Does that reduce carbon emissions? I mean! No, argues the article:

But the general thought process opens up a world of possibilities. Lots of things have some propensity to increase the growth of trees. Go do those things and get your carbon credits.

Many, but not all, scandals at banks are caused by the facts that (1) the bank wants to make money, (2) it gives its employees incentives to make money, (3) they are under a lot of pressure to perform and (4) making money is hard. (In this, the bank employees are much like the insider trading AI.)

So the most normal kind of banking scandal is that the employees do things to make money that are either risky (and thus bad for the bank) or fraud-y (and thus bad for the bank’s customers from whom they make the money). Another, somewhat less common kind of scandal is that the employees pretend to make money. They just, like, write in their daily report, “I made a lot of money today,” and their bosses are deceived, and the bank thinks it has money that it doesn’t. There are various rogue trading and portfolio mismarking scandals that basically look like this.

But there are other scandals that are a bit different. Some scandals are caused by the facts that (1) the bank wants to make money, (2) it sets goals for employees that are correlated with making money, but that are not actually identical with “make a lot of money,” (3) the employees have incentives to meet those goals and are pressured to perform and (4) there are easy, degenerate ways to meet those goals without making money.

Most infamously, Wells Fargo & Co. thought to itself “if we cross-sell our customers on having lots of different banking products with us, we will have more revenue and more loyal customers,” so it rewarded bankers for selling customers extra products. And the bankers realized that it was hard to sell customers extra products, but relatively easy to, for instance, sign customers up for online banking or a credit card or a checking account without their permission. And so Wells Fargo opened millions of fake accounts and got in a lot of trouble. Sometimes the fake accounts made a bit of extra money for Wells Fargo, but mostly they didn’t — mostly the customers just got online banking access that they never used. Wells Fargo wanted its bankers to generate more revenue and customer loyalty, but it told them to open more accounts, and there’s an easy (bad) way to do that without generating revenue or customer loyalty.

There are other scandals that have the same shape but aren’t about money. Wells Fargo also wants its staff to be more diverse, so it has a diversity program that mandates things that are correlated with making its staff more diverse, but not quite identical. Emily Flitter at the New York Times reported last year that Wells Fargo told employees to “interview a ‘diverse’ candidate — the bank’s term for a woman or person of color” when they were hiring for an open position. So the employees would do these interviews even for positions where they had already chosen a candidate — fake interviews to check the box rather than real interviews to fulfill the actual goal.

Or, US law tries to discourage discriminatory lending decisions by banks by, among other things, asking banks to collect demographic data about their mortgage applicants. The bank asks customers their race and gender, it writes down the answers, it reports them to the government, and if the government notices that the bank rejects 100% of Black applicants then it can do something about it. This is a somewhat intrusive thing to ask the customers, and they don’t have to answer: The customer can just say “no thanks” and the bank can report “declined to answer” to the government.

You can see the easy, degenerate way to check that box. Here is a US Consumer Financial Protection Bureau enforcement action from yesterday:

Because that is easier! It is sloppy, though; if you report that 100% of your applicants decline to answer, eventually someone will notice.

Charlie Munger, Who Helped Buffett Build Berkshire, Dies at 99. KKR to Pay $2.7 Billion for Rest of Insurer Global Atlantic. Mark Cuban Is Set to Sell Majority Stake in Dallas Mavericks to Adelson Family. Apple Pulls Plug on Goldman Credit-Card Partnership. Barclays Bankers on Edge as Town Hall Lays Out Overhaul Challenge. Deutsche Bank Chief Says Investors Want Proof of Progress. Tech’s New Normal: Microcuts Over Growth at All Costs. René Benko’s Signa property group files for insolvency. Sri Lanka agrees debt restructuring with Paris Club creditors. Adobe’s $20 Billion Purchase of Figma Would Harm Innovation, U.K. Regulator Provisionally Finds. GM Plans $10 Billion Stock Buyback in Bid to Assuage Investors. Permira selects banks for Golden Goose IPO. SoFi Is Exiting Crypto With Banking Regulators Stepping Up Scrutiny. Jack Ma urges ‘ change and reform’ at Alibaba. Paddington Photoshop. The National Christmas Tree fell over.

If you'd like to get Money Stuff in handy email form, right in your inbox, please subscribe at this link. Or you can subscribe to Money Stuff and other great Bloomberg newsletters here. Thanks!