The Black-Scholes Magic Trick, Explained!

The economist Paul Romer once compared the models used in finance with the tricks used by magicians, whose secrets are protected by the Magician’s Oath. As he wrote in 2015, “A model is like doing a card trick … Perhaps our norms will soon be like those in professional magic; it will be impolite, perhaps even an ethical breach, to reveal how someone’s trick works.” [Spoiler alert: guilty as charged.]

Of course, nothing can be kept secret forever, so once a magician has invented something like a levitation trick, as illusionists did some two centuries ago, other versions soon follow. Today, you can find explanations on YouTube, or Wikipedia. But for most audiences, the trick will still be effective.

An example of such a mathemagical trick is the Black-Scholes model, which since its publication in 1973 has served as the industry-standard model used to price financial options – those instruments which give one the right but not the obligation to buy (a call option) or sell (a put option) a stock in the future at a set price, known as the strike. The model is magically simple: in order for the price of the option to be revealed, traders only need to supply two key pieces of information: the risk-free interest rate, which can be obtained from something like a Treasury bond, and the volatility, which is an estimate of price variation (technically, it is the standard deviation of the logarithm of price changes over a period such as a year).
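For the curious, the formula behind the curtain fits in a few lines. The following sketch (plain Python, standard library only, with illustrative numbers) prices a European call; note that besides the rate and the volatility, the machine must also be told the contract terms (strike, expiry) and the current asset price, but those are simply read off the ticket:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call.

    S: current asset price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualised volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example inputs: at-the-money call, one year out, 5% rate, 20% volatility.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)  # ~10.45
```

Feed in the two magic numbers and out comes a price, to as many decimal places as you like.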

As financial magic shows go, the Black-Scholes model has certainly had a good run – one of the longest on Wall Street, and elsewhere – and has received rave reviews. One commentator (Ross, 1987) described option pricing as “the most successful theory not only in finance, but in all of economics.” Another (Rubinstein, 1994) said the algorithm may be “the most widely used formula, with embedded probabilities, in human history.” Even critics acknowledge the model’s importance to the field; Nassim Nicholas Taleb (1998) wrote that “Most everything that has been developed in modern finance since 1973 is but a footnote,” while in his 2008 report to shareholders Warren Buffett said it had “approached the status of holy writ” (magic being closely related to religion).

A popular component of magic shows is the prediction trick, where the magician makes a seemingly impossible prediction about an audience member or something else. The Black-Scholes formula does this but with a twist. Its trick is to present itself, not so much as a prediction, but rather as a magical machine which somehow defines the correct option price – like a mentalist who predicts the future by making it happen. And by using the machine as a calculating device, investors only seem to confirm its predictions. What kind of higher-level voodoo is this?

Such is its hypnotic hold that it defines the very words and concepts used by quants to describe option pricing. For example, there is the “implied volatility”, the special number that must be fed into the machine in order for it to work, but whose true value can be divined only in hindsight. And then there are the “Greeks”, which refer to the model’s various sensitivities, and are reminiscent of the arcane symbols employed by sorcerers. For example, the symbol Δ shows how the option price depends on the current asset price, while Θ measures its dependence on time.
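To make the divination concrete, here is a sketch (illustrative code, not a production pricer) of how the implied volatility is conjured: since the model price rises monotonically with volatility, one simply bisects until the model reproduces a quoted price. The Greek Δ falls out of the same closed form:

```python
from math import log, sqrt, exp, erf

def _N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S * _N(d1) - K * exp(-r * T) * _N(d1 - sigma * sqrt(T))

def call_delta(S, K, T, r, sigma):
    """The Greek Delta: sensitivity of the call price to the asset price."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return _N(d1)

def implied_vol(target_price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection: find the sigma that makes the model match a quoted price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if call_price(S, K, T, r, mid) < target_price:
            lo = mid  # price increases with volatility, so search upwards
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Quote a price, turn the crank, and out pops the volatility the model says the market must have believed all along.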

Even more remarkable is that the mesmerising power of the model’s spell has distracted the audience from worrying about – or often even noticing – the fact that its assumptions have no more obvious means of support than a magician’s levitating assistant.

Suspending disbelief

For example, one of its totems is that markets are “efficient”, and so are made up of what economist Eugene Fama (1965) called “rational profit maximizers” whose collective actions ensure that everything is priced rationally, including assets and options. If anyone still thinks markets are rational and efficient, see below.

Much of the power of the model comes from its amazing use of “dynamic hedging” which assumes that someone can constantly buy and sell options and the underlying stock in such a way that the risks are always balanced. The theory appears to mathematically prove that the value of an option does not depend on the growth of the underlying asset (which is why the formula uses only the risk-free rate). And yet it obviously kind of does, which is why for something like the S&P 500 index, which tends to grow, call options (to buy) have consistently outperformed put options (to sell). This is a problem, since the test of the model is not to satisfy some abstract theorem, or even to predict what traders are paying for options; it is to predict what prices correspond to the expected payouts.

The model assumes that prices follow a “random walk”, so the probability distribution for prices should be lognormal (i.e. the log price changes should follow a bell curve). But it’s not. It has “fat tails”, meaning that the chances of extreme price changes, such as a crash or a spike, are much higher than predicted by the model.
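As a toy illustration of what “fat tails” means (a made-up mixture, not market data): blend mostly-calm days with occasional turbulent ones, rescale so the overall variance matches a pure bell curve, and the chance of a four-standard-deviation move jumps by a couple of orders of magnitude:

```python
from math import sqrt, erfc

def normal_tail(z):
    """P(|X| > z) for a standard normal variable."""
    return erfc(z / sqrt(2.0))

# Toy fat-tailed model: 95% calm days (low vol), 5% turbulent days
# (high vol), rescaled to unit overall variance. Numbers are illustrative.
p_calm, s_calm, s_wild = 0.95, 0.8, 3.0
scale = sqrt(p_calm * s_calm ** 2 + (1 - p_calm) * s_wild ** 2)

def mixture_tail(z):
    """P(|X| > z) for the mixture, in units of its own std deviation."""
    return (p_calm * erfc(z * scale / (s_calm * sqrt(2.0)))
            + (1 - p_calm) * erfc(z * scale / (s_wild * sqrt(2.0))))

# Chance of a 4-standard-deviation move:
pure = normal_tail(4.0)   # tiny under the bell curve
fat = mixture_tail(4.0)   # orders of magnitude larger
```

Same variance, wildly different odds of a crash: that is the part of the distribution the formula waves away.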

The model also assumes that volatility over a set period can be treated as constant, and in particular does not depend on the strike, which again is the reference price for the option. But if you plot volatility versus price change for historic asset price data it turns out that there is a distinct smile shape, with volatility lower for periods over which the price change is small, and climbing higher as the price change becomes increasingly positive or negative. (Note that price change is clearly related to the strike price, since options with different strikes can be viewed as the same option with a corresponding assumed price change.) A similar “volatility smile”, though somewhat less pronounced, is seen when the implied volatility used by traders is plotted versus strike – a clue left in plain sight.

The markets are smiling, but Black-Scholes doesn’t get the joke. The figure shows plots of volatility over time periods of 1, 2, 4 and 8 weeks (light to dark), for the S&P 500 index (1992-2022), compared with a prediction using a quantum model (dashed). The horizontal axis is price change x normalised by the square-root of time T. The Black-Scholes formula assumes volatility is constant, so under the model these lines would all be flat.

Finally, that dynamic hedging proof, which in a wave of its magic wand appeared to remove the dependence on subjective estimates of future growth, demands that you can constantly buy and sell securities and options to eliminate risk. This ignores the bid-ask spread (the difference between the buyer price and the seller price) on those transactions – which is not a technical detail, but represents a level of irreducible uncertainty, whose magnitude is related to the volatility. Include those, and the clarity, certainty, and elegance of the mathematical demonstration loses some of its theatrical sparkle. (The dashed line in the above figure was derived from a quantum economics model, which uses a different kind of magic.)

How to be beaten by the market

Now, most people in the audience will be untroubled by these details – or won’t perceive them at all – because they will tell themselves that (a) the model has a great back story and is rooted in highly rational mathematics, and (b) what ultimately counts is that the magic formula is widely known to give the “right” answers (i.e. correct predictions of the fair price), at least if we set aside the occasional stage malfunction such as the 1987 Black Monday crash, the 1998 LTCM blow-up, the 2007/8 financial crisis, and so on, where use of the model led to large losses. (Advocates of efficient market theory can explain all of these, thus falsifying the theory that theories in finance can ever be falsified.) Any nagging doubts can be addressed by inventing elaborate excuses, or by noting that no model is perfect, the Black-Scholes model has the advantage of simplicity, it is very useful as a mental tool, traders adjust the price anyway, and so on.

Only, the model doesn’t give the right answers, its predictions are off, the crystal ball is cracked. Here’s another clue: suppose you agreed to buy lots and lots of 1-month at-the-money S&P 500 straddle options (a combination of a call and a put), at the price suggested by the model using the benchmark (VIX index) volatility, and kept reinvesting the takings. If markets are efficient, and the model is telling the truth, then in principle you should expect to break even over a sufficiently long period of time. Except you won’t, you’ll lose money, in an efficient manner (this is not financial advice). In fact, you would overpay by a factor about equal to the square-root of two.
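To see where the model’s straddle quote comes from, here is a sketch using the standard simplification of zero interest rates: at the money, the Black-Scholes straddle collapses to a one-line formula, well approximated by the traders’ rule of thumb of 0.8 times the asset price times σ√T. (The numbers are illustrative; the square-root-of-two overpricing claimed above is a separate, data-driven observation, not something this formula produces.)

```python
from math import sqrt, erf, pi

def _N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def atm_straddle(S, sigma, T):
    """Black-Scholes value of an at-the-money call plus put, zero rates.

    With K = S and r = 0 the call and put are worth the same, and the
    straddle reduces to 2 * S * (2 * N(sigma * sqrt(T) / 2) - 1).
    """
    return 2.0 * S * (2.0 * _N(0.5 * sigma * sqrt(T)) - 1.0)

# 1-month option, 20% volatility (roughly a VIX of 20).
S, sigma, T = 100.0, 0.20, 1.0 / 12.0
exact = atm_straddle(S, sigma, T)
# The rule of thumb: straddle ~ 2 * S * sigma * sqrt(T / (2 * pi)),
# i.e. about 0.8 * S * sigma * sqrt(T).
approx = 2.0 * S * sigma * sqrt(T / (2.0 * pi))
```

If the volatility fed in is too high by a factor of √2, the straddle price comes out too high by (almost exactly) the same factor, since the price is nearly linear in σ for short maturities.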

Losses will be reduced if you pay the actual market price for these options, since traders feed the oracle a lower volatility number, but they will still be significant, which again seems to contradict the efficient market hypothesis. The reason is that volatility, which is the mysterious essence at the heart of the trick, isn’t actually a thing, at least in the sense assumed by the formula. You can measure price changes over a previous period and calculate a standard deviation. Since conditions are always changing, the answer you get will depend on the exact period, so you might try to adjust for this somehow. But if the future variability is itself a highly variable quantity which depends, like expected price change, on the state of the market, then there is no single volatility that is independent of strike.

Also, since dynamic hedging isn’t a thing either, the assumption that the growth rate equals the risk-free rate – with no need for subjective estimates or uncertain predictions – is itself just a particular choice or prediction, and one which is not backed up by empirical data. The whole carefully-constructed illusion of deterministic objective rationality shatters into pieces.

All this won’t spoil the entertainment as long as people don’t look too hard behind the scenes, or check what’s going on using actual statistical tests (there being more data now than in 1973). But this still leaves the question of how this trick works. How does it get so many people to take the word of an elegant but obviously idealised mathematical proof, instead of confirming whether option prices correspond to expected payouts, which is the normal test for a statistical predictive model? Or to go along with the idea that the volatility smile is a puzzling anomaly or “logical inconsistency” (as it has been called) caused by market quirks, or irrational behaviour on the part of traders, instead of being a reflection of a real phenomenon? And how does a square-root of two error get magicked out of existence?

The logic hack

Part of the reason for the trick’s success is that, as already mentioned, it substitutes the usual test of a model, which is to predict outcomes, with a different test, which is to obey an abstract proof based on certain assumptions, thus again rendering it unfalsifiable. But at a deeper level, the secret behind the trick is that it induces in its audience what might be described as model blindness. As quants Emanuel Derman and Michael Miller (2016) note, the model “sounds so rational, and has such a strong grip on everyone’s imagination, that even people who don’t believe in its assumptions nevertheless use it to quote prices at which they are willing to trade.” By hacking (like a hypnotist on a hapless showgoer) into people’s ideas about rationality, it changes their perception of reality, and even the language they use to describe it, so the model’s word takes precedence over observable facts (the smirking volatility, the money-losing options). Which is quite a mind-blowing stunt.

Of course, like most tricks it wouldn’t have worked if the people in the gallery hadn’t at some level wanted it to work. But the real audience back in the 1970s, when it first came out, wasn’t just options traders – it was society as a whole. The illusion of predictability, objectivity and rationality was at the time in a sense necessary and productive, because it transformed options trading from a slightly disreputable form of gambling, into scientific risk management, and thus helped conjure into existence much of the quantitative finance industry. A shared language acted as a coordination device which allowed traders to communicate and do business. The audience was therefore part of the performance, and shared in the magical profits (one could even say that they, more than the inventors themselves, were the magicians who made the trick work). And it was all just one component in an even longer-running magic show, which is the neoclassical illusion that the complex, unstable, living system known as the economy is actually a rational, efficient, utility-maximising machine. Magicians have traditionally tried to convince people that the automaton is alive, but here it is the other way round.

The Black-Scholes model is one of the greatest mathemagical tricks of all time. But now, it might be time for us to snap out of this illusion, open our eyes, and let go of this idea that markets are efficient and obey rational logic. After all, it always did sound a little crazy.

An earlier version of this article appeared in the July 2023 issue of Wilmott Magazine.

It’s A Conspiracy I Tell You!

I don’t believe in conspiracy theories but — whenever you hear the “but” you know exactly what is coming! — there is one story that is rather worrisome, and also something to which one can trivially add a bit of mathematics as a convincer.

Did you know that the actors in The Magnificent Seven died in real life in the order in which they died in the movie?

To remind you the seven were, as copied from Wikipedia,

  • Yul Brynner as Chris Adams, a Cajun gunslinger, leader of the seven
  • Steve McQueen as Vin Tanner, the drifter
  • Charles Bronson as Bernardo O’Reilly, the professional in need of money
  • Robert Vaughn as Lee, the traumatized veteran
  • Brad Dexter as Harry Luck, the fortune seeker
  • James Coburn as Britt, the knife expert
  • Horst Buchholz as Chico, the young, hot-blooded shootist

You don’t have to take my word for it, it’s easy to google.

I remember discussing this over dinner at a training course in Mexico City. We even got as far as looking at the probability of this happening. How can you order seven people? For the first one there are seven to choose from. For the second there are six remaining. Then five, and so on. This means that the probability of this being a coincidence is one in 7! (that’s seven factorial), i.e. about 0.02%.
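The dinner-table arithmetic, for the sceptical, is a couple of lines:

```python
from math import factorial

orderings = factorial(7)  # 7 x 6 x 5 x ... x 1 = 5040 ways to order seven people
prob = 1 / orderings      # chance of one particular order arising
print(f"{orderings} orderings, probability {prob:.4%}")  # prints: 5040 orderings, probability 0.0198%
```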

Can you explain this any better than chance?

Maybe they died in the movie according to their ages. You could check that out. That would make some sense, the older gunfighters die sooner in the film, and the older actors die earlier in real life.

We could probably quite easily quantify this effect, to increase that 0.02%. But it’s hard to get the probability up to anything remotely probable.

This is the way the dinner conversation went.

One matter that was not discussed, perhaps out of politeness since I was the teacher and they were the students, was that maybe I WAS MAKING IT UP ON THE SPOT, YOU MUPPETS!

My apologies. This conspiracy theory, like all of them, has no basis in fact. But it was fun while it lasted. My mathemagical distractions, the statistical analyses, trying to find rational explanations, all served to convince my audience that there must have been a plot! I should have got an Oscar for my performance!!

The Prediction Test

The two most famous theories in the field of forecasting are the butterfly effect, and the efficient market hypothesis. Both are theories, not of prediction, but of non-prediction.

The butterfly effect was developed by MIT meteorologist and chaos theorist Ed Lorenz in the 1960s. He found that computer simulations of a toy weather model tended to stray apart over time if the starting point was changed by even a tiny amount (chaos!). He proposed that this “sensitivity to initial conditions” was a property, not just of his three-equation model, but of the weather itself. When he submitted an untitled talk for the 1972 conference of the American Association for the Advancement of Science, the person hosting the session supplied a provocative title: “Predictability; Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?”
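Lorenz’s three-equation model is simple enough to reproduce in a few lines. The sketch below (a crude Euler integration with standard parameter values, step size chosen purely for illustration) follows two trajectories that initially differ by one part in a hundred million, and watches them part company:

```python
def lorenz_step(x, y, z, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One Euler step of Lorenz's three-equation toy weather model."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def run(x, y, z, steps):
    """Iterate the map from a given starting point."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two starting points differing by one part in a hundred million.
a = run(1.0, 1.0, 1.0, 5000)
b = run(1.0 + 1e-8, 1.0, 1.0, 5000)
separation = max(abs(p - q) for p, q in zip(a, b))
# The tiny initial difference has grown by many orders of magnitude.
```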

The efficient market hypothesis was developed by Eugene Fama of the University of Chicago, in a PhD thesis published in 1965 and formalised in a 1970 review paper. It says that price changes in financial markets are caused by random perturbations (e.g. news) and therefore follow a “random walk” which is inherently unpredictable.

Apart from fame, the theories have many other things in common. They both provide a scientific reason for forecast errors, such as the financial crisis. They both assume that forecast error is due to random effects (insects or news). Both theories – or at least their typical applications – assume that the underlying model of the system is correct. And they are both used to justify complicated techniques that are hard to interpret or falsify.

In the 1990s weather forecasters seized on the butterfly effect as an excuse for forecast error, but also as a rationale for elaborate “ensemble forecasting” schemes. Instead of making a single “point” forecast, an ensemble of forecasts is generated from a set of perturbed initial conditions, and used to produce a statistical forecast that takes into account the effects of chaos. When forecasters made typical perturbations of the sort that might be produced by observational error, they found that the simulations didn’t diverge as quickly as expected, which was possibly a hint; however, they soon found ways to select specially optimised perturbations which did exhibit the desired divergent behaviour.

The efficient market hypothesis meanwhile might have shown that price changes were unpredictable, but also enabled the use of statistical models which claimed to predict the probability of a price change, such as the Value at Risk model. In either case of course the statistical forecast is only valid if the underlying model of the system is correct.
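As a sketch of the kind of machinery involved, here is the simplest parametric (normal-model) version of Value at Risk, with illustrative numbers; the point is exactly the one above, that the answer is only as good as the assumed distribution:

```python
from statistics import NormalDist

def parametric_var(mu, sigma, confidence=0.99):
    """One-period Value at Risk under a normal model of returns.

    Returns the loss threshold that, according to the model, is exceeded
    with probability (1 - confidence). If real returns have fat tails,
    this understates the true risk.
    """
    z = NormalDist().inv_cdf(confidence)
    return z * sigma - mu

# Daily returns with mean 0 and 2% volatility: 99% one-day VaR of ~4.65%.
var_99 = parametric_var(mu=0.0, sigma=0.02)
```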

Both theories are hard to disprove, and remarkably resilient to criticism. When I (David) showed in a 1999 presentation at the European Centre for Medium-Range Weather Forecasts that plots of forecast error show a square-root shape, which is characteristic not of chaos but of model error, I was contradicted by a number of people in the audience. The next day I received an email from one of the top research heads, which said that he had checked a plot of forecast errors, and, in stark contrast to my talk, “they certainly show positive curvature.” In other words, they were caused by chaos, not model error. We therefore decided that someone there should try to reproduce my results, by plotting the errors as a function of time.

When the results showed a near-perfect square-root shape, I received an email saying that “I guess it would be possible to get an initially square root shape from initial condition error if the error was initially in very very small scales which rapidly saturates but cascades up  to produce errors of larger scale, which then saturate, but cascade up to produce errors of still larger scale.” (That was the exact point when my view of science began to shift.)

Similarly, as Andrew W. Lo and A. Craig MacKinlay wrote in their book A Non-Random Walk Down Wall Street: “One of the most common reactions to our early research was surprise and disbelief. Indeed, when we first presented our rejection of the Random Walk Hypothesis at an academic conference in 1986, our discussant – a distinguished economist and senior member of the profession – asserted with great confidence that we had made a programming error, for if our results were correct, this would imply tremendous profit opportunities in the stock market. Being too timid (and too junior) at the time, we responded weakly that our programming was quite solid thank you, and the ensuing debate quickly degenerated thereafter. Fortunately, others were able to replicate our findings exactly.”

Needless to say, both the butterfly effect and efficient market theory survived these and other challenges.

Finally, both theories rely on a kind of magical thinking – that the atmosphere is incredibly sensitive to the smallest change, so perturbations grow exponentially instead of just dissipating (try waving your hand in front of your face to see which is more physically realistic); or that the economy is magically self-correcting, like a door which snaps instantly shut after being opened.

One difference is that the butterfly effect does double duty in other areas such as economics. As then-Fed chairman Ben Bernanke explained in 2009, “a small cause – the flapping of a butterfly’s wings in Brazil – might conceivably have a disproportionately large effect – a typhoon in the Pacific” which was a useful thing to bring up after you just failed to predict the US housing crisis. However, the idea that unpredictability is caused by efficiency has failed to catch on outside of economics. For example, no one thinks that snow storms that come out of nowhere are efficient.

So why are these theories both still around? The reason is simple. As the physicist Richard Feynman once said, “The test of science is its ability to predict.” The magic of science is the ability to make it look like you can predict.

Simplify!

The following may or may not be factually accurate. It all happened a long time ago. But it is absolutely 100% correct in spirit.

Twenty or so years ago I was browsing through the library of Imperial College, London, when I happened upon a book called something like The Treasury’s Model of the UK Economy. It was about one inch thick and full of difference equations. Seven hundred and seventy of them, one for each of 770 incredibly important economic variables. There was an equation for the rate of inflation, one for the dollar-sterling exchange rate, others for each of the short-term and long-term interest rates, there was the price of fish, etc. etc. (The last one I made up. I hope.) Could that be a good model with reliable forecasts?

[Hint: How good are economic forecasts generally?]

Consider how many parameters must be needed in such a model, every one impossible to measure accurately, every one unstable. I can’t remember whether these were linear or non-linear difference equations, but every undergrad mathematician knows that you can get chaos with a single non-linear difference equation so think of the output you might get from 770.
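The logistic map is the canonical single-equation example; a few lines suffice to watch two nearly identical starting points decorrelate completely:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, the textbook one-equation chaos demo."""
    return r * x * (1.0 - x)

# Two starting points differing by one part in ten billion.
x, y = 0.2, 0.2 + 1e-10
max_gap = 0.0
for _ in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
# Within a few dozen steps the trajectories bear no resemblance to each other.
```

If one equation can do that, 770 coupled equations with unmeasurable parameters should inspire appropriate humility.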

Putting myself in the mind of the Treasury economists I think “Hmm, maybe the results of the model are so bad that we need an extra variable. Yes, that’s it, if we can find the 771st equation then the model will finally be perfect.”

No, gentlemen of the Treasury, that is not right. What you want to do is throw away all but the half dozen most important equations and then accept the inevitable, that the results won’t be perfect.

A short distance away on the same shelf was the model of the Venezuelan economy. This was a much thinner book with a mere 160 equations. Again I can imagine the Venezuelan economists saying to each other, “Amigos, one day we too will have as many equations as those British cabrones, no?” No, what you want to do is strip down the 160 equations you’ve got to the most important. In Venezuela maybe it’s just a few equations, for the price of oil, inflation, and maybe how much it costs to buy a politician.

We don’t need more complex economics models. Nor do we need that fourteenth stochastic variable in finance. We need simplicity and robustness. We need to accept that the models of human behaviour will never be perfect. We need to accept all that, and then build in a nice safety margin in our forecasts, prices and measures of risk.

Perspective

I love watching Dragons’ Den, the programme in which entrepreneurs try to get established business people to invest in their ideas. I love trying to predict which Dragon will say what, how they will negotiate a deal, how they compete with each other to make themselves look good against other Dragons. I love shouting at the TV, “What about patents and intellectual property?” before the Dragons. And I particularly love it when they so obviously get it wrong. Trunki? Come on, just because a bit of plastic broke on a prototype you aren’t going to invest in such an obvious hit? And I find it reassuring when they break all their own rules to invest in something they get emotionally attached to. E.g. Reggae Reggae Sauce. Although I’m sure it is deliciously invigorating, many of the facts and figures that the entrepreneur gave turned out to be wrong, many of them during the programme itself.

But I hate it when an entrepreneur gets flummoxed by a Dragon negotiating for a better deal. An entrepreneur will open by offering 10% in return for a certain investment. A Dragon might find this too little and counter with 20%. At which point the entrepreneur shakes his or her head and declines.

What are they thinking?

It looks to me like they are thinking from the perspective of the Dragon. I’m going to double his money? Double! No way!

But this is completely the wrong way to look at this. They should look at it from their own perspective initially. So I’m going down from 90% to 80%. No biggie. And then they can put themselves in the shoes of the Dragon. Ok, I can see that doubling the shareholding might double the help the Dragon will give. Which will make that 11.11% reduction in their shareholding (10/90) look pretty insignificant.
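The arithmetic of the two perspectives, using the numbers from the example above:

```python
# The same counter-offer, seen from each side of the table.
entrepreneur_before, entrepreneur_after = 0.90, 0.80  # entrepreneur's stake
dragon_before, dragon_after = 0.10, 0.20              # Dragon's stake

# Dragon's view: stake doubles. "Double! No way!"
dragon_gain = dragon_after / dragon_before - 1            # +100%
# Entrepreneur's view: stake falls from 90% to 80%. No biggie.
entrepreneur_loss = 1 - entrepreneur_after / entrepreneur_before  # ~11.1%
```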

Rule #1 Of Investing: Don’t Obey Rule #1

One of the first lessons in any course on investing will be about portfolio construction and the benefits of diversification, how to maximize expected return for a given level of risk. If assets are not correlated then as you add more and more of them to your portfolio you can maintain a decent expected return and reduce your risk. Colloquially, we say don’t put all your eggs into one basket.
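The textbook arithmetic behind the eggs-and-baskets lesson can be sketched as follows (equal weights, identical assets; illustrative numbers, not investment advice):

```python
from math import sqrt

def portfolio_std(sigma, n, correlation=0.0):
    """Std deviation of an equal-weight portfolio of n assets, each with
    std deviation sigma and pairwise correlation rho (0 = uncorrelated)."""
    variance = sigma ** 2 / n + (1 - 1 / n) * correlation * sigma ** 2
    return sqrt(variance)

# Uncorrelated assets: risk shrinks like 1/sqrt(n)...
one = portfolio_std(0.2, 1)     # 0.20
many = portfolio_std(0.2, 100)  # 0.02
# ...but any common correlation puts a floor under it.
floor = portfolio_std(0.2, 100, correlation=0.25)  # ~0.10
```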

Of course, that’s only theory. In practice there are many reasons why things don’t work out so nicely. Often that’s because stocks and other investments stubbornly refuse to do what they are told.

But can it ever be optimal to not even try to diversify? Should you ever do the exact opposite of Rule #1? You betcha.

As we’ll see, people in banks and hedge funds are encouraged not to diversify, but instead to concentrate risk. I don’t know whether this is explicit or instinctive.

Imagine the following scenario. It’s your first day as a trader at an investment bank. You’ve had a world-class university education in economics in, say, Chicago. There you learned about all kinds of theoretically marvellous trades and how to manage risk by diversifying.

You are being introduced to the rest of the trading team. You notice that all of the trades they are doing are strangely similar. It worries you a bit because it doesn’t look like they are diversifying much.

You are then shown your desk, with multiple screens, and told to start trading.

Being a decent person you naturally want to do the best for your bank and so you seek out some trades that are uncorrelated to those of your colleagues but which also have a high probability of success.

Let’s put some numbers to this. There are dozens of other traders all making the same trade, and this trade has a 50:50 chance of making or losing a large amount. You have a better, and independent, trade that has a 75% chance of doubling your money and 25% of losing it all.

What happens next?

There’s a 50% chance that all the other traders lose a vast amount of money. This is not great. They might lose their jobs. The bank might go under.

But there’s also a 50% chance that they’ll be heroes, and rewarded as such in bonuses.

Meanwhile your trade might make some money. More likely than the other traders, at 75%. So you are more likely to be a hero too. No! If the others lose and you win then you are too tiny to even be noticed. You won’t be able to save the bank. And certainly don’t expect a bonus.

You can see this in the following table. If the other traders lose then everyone is fired including you. You can only get a bonus if the traders and you both win, and that has a probability of 0.75 x 0.5 = 37.5%.

                   Traders win (50%)                                  Traders lose (50%)
  You win (75%)    Bonuses all around!!! (37.5%)                      All fired!!! (37.5%)
  You lose (25%)   You are fired, other traders get bonuses (12.5%)   All fired!!! (12.5%)

No, the only way to get that bonus is to cling to the coattails of the other traders. Do the same trade as them and you have a larger 50% chance of that bonus.
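The table’s numbers follow directly from the assumed probabilities (your independent trade wins 75% of the time, the herd’s shared trade 50%); a minimal check:

```python
# Probabilities from the scenario above.
p_you, p_herd = 0.75, 0.50

# Go your own way: a bonus needs both you AND the herd to win.
bonus_independent = p_you * p_herd  # 37.5%
# Copy the herd: you get a bonus whenever the shared trade wins.
bonus_herd = p_herd                 # 50%
# Either way, if the herd loses, everyone is fired.
fired = 1 - p_herd                  # 50%
```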

Lose money when all around are making it, you’re fired. Make money when all around are losing it? Expect a big bonus? No way! Your profits will help to bail everyone else out and no one gets a bonus, even you. No, you should do the same as everyone else.

As Keynes said, “It is better to fail conventionally than to succeed unconventionally.”

The Wealth Manager

Following on from Credit Ratings, here’s a true story about how reassuring mathematical analysis can be. Until it turns out to be baloney. This story is also a warning about experts. Unfortunately, although there’s a lesson here, we aren’t sure what it is. Sometimes you get screwed no matter how smart you are. Maybe that’s the lesson.

For Paul, 2007 had been a good year financially. His businesses, based around financial mathematics, publishing and training, were starting to take off. Being self-employed, he was paid without any tax having been withheld. This meant he had to keep a regular check on how much he owed the UK’s Inland Revenue and, not wanting to end up behind bars like some tax-dodging TV evangelist, put it aside for later payment.

Paul is financially very conservative, he wanted to put this money somewhere incredibly safe, but he’s also a bit of a worrier. He knew that the complex financial instruments he worked with were poorly understood, and that their risk management was even worse. He knew that people in banks were confused about fundamental financial principles, and worse, that they didn’t know that they were confused. He knew that it’s important that the incentives of employees (the bankers) and the benefits of the owners and creditors (the man on the street) be lined up, and that in practice they rarely were. And long before it became fashionable he would tell anyone who would listen that the cleverest of the bankers didn’t know what they were doing.

Paul decided to speak to his Wealth Manager at his bank, B______s, to see if there was anywhere safe that he could leave it for a few months before paying the taxman. This was now the second half of 2008. The investment firm Bear Stearns had bitten the dust earlier in the year. So prudence had been the word of the day for several months now.

Paul mentioned these concerns to his Wealth Manager. And the Wealth Manager made some recommendations. One thing that Paul knew about was the Financial Services Compensation Scheme (FSCS), the UK’s version of the US’s Federal Deposit Insurance Corporation, which would at that time cover up to £50,000 in the event of a collapse. Well, the money that Paul owed Alistair Darling, the then Chancellor of the Exchequer, was quite a bit more than this. However, he also knew that there was a version of the financial guarantee that applied to insurance companies where the cover was 90% of any money lost, with no limit. Paul had done his research. So when the Wealth Manager mentioned that he had a couple of insurance-company products to offer, Paul naturally was keen.

There were two products on offer. They both had interest rates of about 4% annualized. One had a slightly higher return than the other. But the return wasn’t the point. The point of this exercise, remember, was to protect the money that he was holding onto on behalf of the Inland Revenue. The Wealth Manager gave the sales pitch for these two products. They seemed very simple, very “vanilla” in the financial jargon, basic short-term bonds. Credit ratings were mentioned and Paul does recall his sense that one of these investments was very, very, very safe, while the other was a mere very, very. The latter had the slightly higher rate of return. The greater the risk, the greater the expected return. That’s classic portfolio theory.

The conservative Paul opted for the lower return, thrice-very-safe investment. The government’s tax money, or at least 90% of it, was now secure.

This was a Thursday in September 2008.

That weekend brought the news of the collapse of Lehman Brothers and the near bankruptcy of AIG due to trading in complex credit derivatives, the very same instruments and models that Paul had said in 2006 “fill me with some nervousness and concern for the future of the global financial markets.”

Did we mention that it was an AIG insurance bond that Paul had bought?

Paul spoke to his Wealth Manager, who reassured him that “there was nothing to worry about,” and that they “were speaking to AIG at the highest possible level.” This gave Paul a warm glow; he felt special. It was nice having a Wealth Manager. “Whatever happens to AIG, the money will be returned in 24 hours,” they said.

The next day the money had not reappeared, and B______s were now saying “48 hours” for the return. There was still nothing to worry about because insurance products come with a cooling-off period of 14 days.

Over the next few days the language of the Wealth Manager changed subtly, mention of cooling-off periods disappeared and timescales became more fluid. And suddenly there was talk of early-redemption penalties. This certainly didn’t fit in with the promise of a full refund in the first 14 days. Meanwhile AIG wasn’t getting any better.

Paul decided to take what little control he could, and started to make his own enquiries. He called AIG. To his surprise, considering their situation, the call was answered promptly, and he was put through to someone dealing with these bonds. Paul’s question was simple: “Was there or was there not a cooling-off period?” The AIG person did not know. She read out the bond’s particulars, the same paperwork that Paul had in front of him during the call. No mention was made in the paperwork of cooling-off periods or early redemption.

Paul called the Financial Services Authority, the then regulating body. He was going to ask about specifics of his bond and was expecting a response such as “Go to the FSA website, type in the company name, look for your bond in the dropdown menu, and the details will appear in a pop-up window.” It was the 21st century after all. Unfortunately the FSA’s representative said something somewhat different, in a very tired voice, something rather like “Do you know how many companies there are? Quite frankly, we don’t know who we regulate.” This was not looking promising.

Paul then went to the FSCS’s website. In the event of AIG collapsing they would be the ones to pick up the tab. Although they seemed very proud of their record in recompensing clients of failed institutions it was clear that they had never had to deal with anyone quite the size of AIG, a top-20 global company which sponsored Manchester United football team and, it seemed, much of the rest of the economy. (Forbes ranked them the 18th largest company in the world in 2008. “I bought one of your insurance bonds and all I got was this lousy Man United t-shirt.”) The case studies on the FSCS’s website were all firms that you’d never have heard of. Oh dear. It was now clear to Paul that he couldn’t depend on the insurance bond’s insurance.

Fortunately, this story has a reasonably happy ending – after a number of weeks nearly all the money was returned. Paul was in fact probably one of the more fortunate purchasers of this product. The BBC television journalist Jeremy Clarkson, who found himself in a similar situation (but with only the “very, very” safe AIG bond instead), said “I made strenuous efforts to get my money out of AIG as soon as the scale of its problems became apparent. But it wasn’t possible. Inwardly I was screaming. It’s my money. I gave it to you. You’ve squandered it on a Mexican’s house in San Diego and a stupid football team and that’s your problem. Not mine.” (Clarkson and Mexico have some history, but his observation had some truth to it: many mortgage brokers targeted specific racial groups for their sub-prime, teaser-rate, AIG-insured mortgages.)

But this was not a problem just for isolated investors. AIG was a major node in the financial system, and as its tangled web fell apart many companies, indeed entire countries, were severely affected by the ensuing economic mayhem. People lost their jobs, their homes, their life savings, even their lives (financial crises are strongly correlated with health crises and suicides).

Now, as tales from the credit crunch go this is not exactly movie material – it’s more The Big Short in reverse: not an attempt to make a killing from a crisis, but an attempt to save money to pay tax – but it does prove a point: we are only human, gullible, fallible, and despite our best efforts as prone as anyone to getting things wrong.

This brush with a failing, flailing insurance giant also taught Paul things he didn’t know, and reinforced a few that he did.

• Banks don’t know what they are talking about. They speak in jargon, much of which they don’t understand. But since there is no downside for them it doesn’t matter. To some extent you just have to hope for the best. Experts? Phooey!

• Regulators are clueless.

• Guarantees mean nothing. The FSCS paid out an average of £200 million a year between 2001 and 2006. Between 2006 and 2011 that rose to an average of £5 billion per annum, and they had to take out a loan from the Bank of England. But AIG was bailed out to the tune of $85 billion. The numbers just don’t add up.

• Bailouts can be necessary, but only because some companies are so humongous. In some Darwinian sense entities should be allowed to collapse. But AIG was too big; its influence was everywhere. How many people had insurance through AIG? Just take car insurance, for example. How many cars would have been left uninsured, and what repercussions would that have had? It’s impossible to tell.

• Being a mathematician doesn’t make you immune to the financial system’s occasional paroxysms – which is bad news because the system is effectively run by quants.

You could say that Paul should have been more careful. But being careful was precisely his goal. At some stage he had to take a chance on the advice he was given, and we know that that is risky. The alternative is to research and research and research, leaving no stone unturned … but the end result would be what? To not invest in anything? To not put your money in the bank even? Put it in government bonds? Invest it all in gold, or in property? Put it under the mattress? There’s no middle ground where absolute safety and trust overlap. But perhaps this new world in which there is no financial security – where a banknote, a cheque, a bond, a share certificate can suddenly become irrelevant – is more natural. Perhaps the few decades in which banks were apparently safe were the anomaly, and a return to the precarious state that has been the norm throughout history, and still is in many countries, was inevitable.

Credit Ratings

Didn’t you feel proud when your teacher gave you an A+ at school? Or were you a C student, must try harder? Don’t tell us you were an F! My, you’ve done well…considering.

Just as teachers grade their students so there are businesses who grade investments. The three main credit rating agencies are Moody’s Investors Service, Standard & Poor’s and Fitch Ratings. Such agencies analyse the creditworthiness of companies, and their likelihood of going bust, as well as the risks in individual financial instruments. And they rate countries too.

Let’s take Moody’s, for example. Their ratings start at the top with Aaa: “The highest quality and lowest credit risk” and “Best ability to repay short-term debt.” Below that, and a smidgen riskier, comes Aa1, then Aa2, Aa3, followed by A1, A2 and A3. These are still supposedly low credit risk. We move lower to the Bs with Baa1, Baa2, Baa3, then Ba1, and so on. All of these are higher risk, with some “speculative elements.” By B1, B2 and B3 we are in high credit-risk territory. Finally come the Cs, the lowest of which is C itself: “Rated as the lowest quality, usually in default and low likelihood of recovering principal or interest.”

At Baa3 and above, instruments are supposedly “investment grade.” Ba1 and below are non-investment grade, also known as “junk.”

Good idea, no? And very helpful to investors. Knowing how professional bodies perceive risk in these companies and products can help investors make qualitative judgements about what to add to or subtract from their portfolios. But there’s more: there’s a quantitative angle to this as well. Let’s take as an example a bond rated Ba2. This bond has a price in the market. Suppose it yields 3% per annum, and suppose bank, i.e. risk-free, interest rates are 2%. Then (under lots of assumptions, including zero recovery in default) these numbers can be interpreted as there being roughly a 3 − 2 = 1% probability of the company defaulting on that bond within a year. (That’s a 99% chance of getting one dollar and three cents back for your one-dollar investment, and a 1% chance of nada – an expected one dollar and two cents, the same as the risk-free return.) Now we are in the realms of mathematics and can make quantitative judgements about whether Ba2 bonds are too risky, or which specific bonds to buy.
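The back-of-the-envelope calculation above can be written out explicitly. Here is a minimal sketch in Python; the function name and the zero-recovery assumption are ours, not from any standard library:

```python
def implied_default_prob(bond_yield: float, risk_free: float) -> float:
    """One-year default probability implied by a bond's yield spread.

    Assumes zero recovery in default and risk-neutral pricing:
    (1 - p) * (1 + y) = 1 + r  =>  p = (y - r) / (1 + y),
    which is roughly the spread y - r for small rates.
    """
    return (bond_yield - risk_free) / (1 + bond_yield)

# The Ba2 bond from the text: 3% yield against a 2% risk-free rate.
p = implied_default_prob(0.03, 0.02)
print(f"Implied default probability: {p:.2%}")  # just under 1%
```

The 1% figure in the text is the small-rate approximation p ≈ y − r; the exact number here is very slightly smaller because the lost payoff is discounted too.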

There are problems with this, or we wouldn’t be writing about this topic.

The first problem is the very concept of quantifying the probability of default. As a rule bankruptcy is a one-off event from which one doesn’t recover in the same form as before, and as such a company only experiences it once. Therefore there’s not much in the way of statistics for individual companies, only statistics about types of companies, or companies with the same credit rating. There are parallels with health and death. Death is also a one-off event, at time of writing, and anyone who tells you your life expectancy is basing it on tables of life expectancies of people with the same health rating as you: whether you smoke or not, take part in dangerous sports, etc. The mathematics of death and bankruptcy are very similar. More about life and death later.

But there’s a bigger problem. It concerns who pays for the credit rating. And it’s not who you’d expect. The credit rating on company XYZ is typically paid for by…company XYZ.

They need the rating in order to be taken seriously as an investment, and they want that rating to be as high as possible. The rating agency wants their business and a happy customer. It doesn’t take a genius to figure out that the two parties’ interests are aligned. If they were business partners, perfectly aligned interests would be exactly right. But here the rating agency is supposed to be acting as an unbiased middleman between the investor and the investment.

A third problem, related to the above, is that competition among rating agencies might lead companies to choose to work with the softest agencies, those that give the highest ratings. None of this is helped by the lack of transparency about the rating process itself.

This is moral hazard.

Never mind all those problems. The main thing is that mathematics is involved. So it must be ok.

Experts? Phooey!

Experts, who needs ’em? Until recently we’d all have said everyone does. But the pendulum has swung the other way. Experts? They don’t know what they are talking about.

We understand this sentiment. We’ve criticized experts in finance and economics often enough, and rightly so. Those experts are to blame for their herd-like groupthink, which has so often turned out to be wrong.

And then there’s the media. In the race for newspaper sales they will one day tell us that research says red wine is bad for us. The next day it is good. And then bad again. The end result is that we don’t trust vinologists – even though it’s the newspapers we shouldn’t trust.

But even the smartest of people can easily be fooled. And who better to do the fooling than a magician.

Between August 4th and 11th, 1974, the Stanford Research Institute conducted experiments to verify whether Uri Geller had “paranormal perception.”

The write-up can be found in the CIA library here: https://www.cia.gov/library/readingroom/docs/CIA-RDP96-00791R000100480003-3.pdf Follow the link if you dare.

Part of the experiments involved an experimenter drawing a random image and Geller trying to reproduce it.

“In each of the eight days of this experimental period picture-drawing experiments were carried out. In each of these experiments Geller was separated from the target material either by an electronically isolated shielded room or by the isolation provided by having the targets drawn on the East coast. As a result of Geller’s success in this experimental period, we consider that he has demonstrated his paranormal perception ability in a convincing and unambiguous manner.”

Fooled them, Uri!

Hal Puthoff and Russell Targ, who ran the experiments, were already believers in the supernatural, and so possibly biased. They also didn’t do themselves any favours by allowing Geller access to an intercom during the experiment even though he was in the shielded room. Oh, and there was a hole in the wall between Geller and the experimenters. Even we can reproduce a random drawing if we can peek.

The image for this piece is a photograph taken by one of us at an auction at the Savoy, London. Uri Geller, who is the nicest man you could wish to meet, was there…bidding for spoons of course. We were lucky to witness one of his spoon-bending miracles from just a few feet away. He convinced us!

The Million Dollar Challenge is a prize offered by the foundation of the famous magician and debunker James Randi. It would be won by anyone “who can show, under proper observing conditions, evidence of any paranormal, supernatural, or occult power or event.” No one has ever won it.

Am I Being Random, Still?

What if we tossed a coin ten times and got

HHHHHHHHHH

and then we asked you to bet on the next toss?

Ten Hs in a row has probability (½)^10 = 0.0009765625. Pretty unlikely. But any mathematician will tell you that this sequence, or a sequence of alternating Hs and Ts, or Hs and Ts representing the binary digits of the square root of two, or of pi, or any sequence whatsoever, is equally likely (assuming an unbiased coin). Or rather, equally unlikely. But it’s hard to get one’s head around this fact. One can’t help feeling that if you were asked to bet after ten Hs then you should be suspicious.
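The arithmetic is easy to check: every specific ten-toss sequence, streaky or not, has exactly the same probability. A quick sketch (the comparison sequences are arbitrary):

```python
# Any specific sequence of n fair-coin tosses has probability (1/2)^n.
def sequence_prob(seq: str) -> float:
    return 0.5 ** len(seq)

for seq in ["HHHHHHHHHH", "HTHTHTHTHT", "HHTHTTHHTT"]:
    print(seq, sequence_prob(seq))  # 0.0009765625 each time
```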

Some people think that the Law of Averages means that after so many Heads the next toss is more likely to be Tails, to “balance things out.” This is called the Gambler’s Fallacy. The Law of Averages is a layman’s version of the more mathematical Law of Large Numbers, and is commonly misunderstood. In a nutshell, the Law of Large Numbers says that after a large number of trials – here, tosses – the average should be close to the expected value, and get closer as the number of trials increases. If Heads counts as plus one and Tails as minus one then the expected value is zero for an unbiased coin. As the number of tosses increases, the average – the sum of the plus and minus ones divided by the number of tosses – will converge to the expected value of zero. But this says absolutely nothing at all about the next toss, which will always be equally likely to be Heads or Tails.
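A simulation makes the distinction concrete (a sketch using Python’s standard library; the seed and toss count are arbitrary choices of ours): the running average drifts toward zero, yet the frequency of Heads immediately after a run of Heads stays put at about one half.

```python
import random

random.seed(42)  # arbitrary, for reproducibility
n = 100_000
tosses = [random.choice([+1, -1]) for _ in range(n)]  # +1 Heads, -1 Tails

# Law of Large Numbers: the running average approaches the expected value, 0.
for k in (100, 10_000, n):
    print(f"average after {k:>7,} tosses: {sum(tosses[:k]) / k:+.4f}")

# Gambler's Fallacy check: what follows three Heads in a row?
followers = [tosses[i + 3] for i in range(n - 3)
             if tosses[i] == tosses[i + 1] == tosses[i + 2] == +1]
heads_frac = followers.count(+1) / len(followers)
print(f"frequency of Heads after HHH: {heads_frac:.3f}")  # close to 0.5
```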

Casinos know all there is to be known about probability. And the Law of Large Numbers is tattooed on their black hearts in red ink. They know for example that each spin of the roulette wheel is independent of all previous spins. They know that ten Reds in a row at roulette is as likely to be followed by a Black as another Red. But they also know that many people don’t believe this simple fact of probability, and physics. They know about apophenia, that people see non-existent patterns, and that they play systems. And these are the people casinos adore, people who believe they can beat the casino and bet accordingly. That is why they will often encourage such people by presenting a list of recent numbers, printed out or on electronic signs near the wheel. Such data is also available online. A quick search will show you people discussing roulette patterns in all seriousness. Such people are the suckers that casinos rely on for business.

You can’t win at roulette.

Or can you? More anon.

In our ten-Heads-in-a-row example maybe the sequence is genuinely random, and it’s 50-50 what the next toss will be. Or maybe the coin is biased, or double headed. Or maybe you’re being lured into thinking it’s double headed and the next toss will be a Tail. On balance, this is a bet best avoided.

We saw something not unlike this in the world of finance only a few years ago: the returns of Bernie Madoff’s fund. The S&P 500 index goes up and down, then down and up, down down up up down, and so on, in what looks, and probably mostly is, a random fashion. (Even if the technical analysts think they can see patterns.) On the other hand Madoff’s returns went up, and up, and up, and… up,… in the hedge fund equivalent of a never-ending sequence of Heads. Too good to be true? Yes, and you’d be right to be suspicious.

Derren Brown gives the following wonderful performance. Picture this…

Derren stands on the stage with his goatee beard, suit from days of yore, and a microphone. He asks all members of the audience to stand and put a coin or other object into one of their hands. He says “Left” and everyone with the object in their right hand is asked to sit down. He repeats this several more times. Each time approximately half of the audience is asked to sit. He is down to three audience members still standing. Clearly these are all people with whom he has a “connection.” One of these he chooses and asks to join him on stage.
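The elimination phase needs no psychic connection at all; it is pure arithmetic. Halving a crowd repeatedly gets from any audience down to a handful of apparent mind-twins by chance alone. A sketch (the audience size here is a hypothetical of ours):

```python
import math

# If each "Left"/"Right" call eliminates about half the audience at random,
# then roughly log2(N) calls reduce N people to a single survivor.
audience = 1000  # hypothetical theatre audience
calls = math.ceil(math.log2(audience))
print(f"about {calls} calls take {audience} people down to one")  # about 10
```

The survivors look special for the same reason an unbroken run of Heads looks special: given enough random sequences, a few long streaks are guaranteed to appear.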

Derren asks her – in the YouTube video of this the audience member is female – to put her hands behind her back and put a coin in one hand. She then holds her hands out in front, and Derren has to say which hand the coin is in. His typical patter goes like this: “Last time you put the coin in your right hand so this time you think I’ll think you’ll put it in your left hand. But you know I’m thinking this so you’ll put it in your right again. But you know I know what you’re thinking so you’ll put it in your left. Forget that, actually your right hand is, you’ll notice, slightly lower than your left. And that’s because you are overcompensating for the weight of the coin. Left!” And he’s right!

He does this many, many times in a row, each time correctly figuring out which hand the coin is in.

How does he do it?

Maybe it’s luck. Unlikely: one half to the power of… Or the audience member could have been a plant. Too easy, and also reputation-harming. Maybe by pruning the audience he’s found someone who thinks like him. Maybe there’s some psychology going on. We’ve seen online discussions of all this. It’s all about Neuro-Linguistic Programming, was one suggestion. For example, looking up and to the left is, according to NLP, “Non-dominant hemisphere visualization i.e., remembered imagery.” So DB is looking at the person’s eyes for clues, perhaps. Unfortunately NLP seems to have been largely discredited. Another suggestion was that some people are just easy to read. This came from the poker players, who told of easy-to-read poker “tells.”

Or maybe it’s a trick.

How it’s really done we won’t tell. Let’s just say it will set you back either a) many years of dedicated practice or b) a couple of hundred dollars. But the most important thing we gleaned from those discussions was how keen people were to believe in (pseudo)science rather than, ahem, perhaps trickery. Homeopathy, crystals, crop circles… just mention energy and vibrations and how some German scientist has proved everything while living on a diet of carrot juice, and there’s a decent chunk of society that will believe you. Even smart people.

We tried to get DB to perform at one of our book launches. But DB’s fee of £30k for 40 minutes was, er, a bit steep for our publisher. And this was just as DB was becoming famous. Lord knows what he charges now.