The post Improving your Python Backtesting – From DataFrames to Cython [Part 2] appeared first on FXMasterCourse.

Implementation | Time for RSI2 Backtests |
---|---|
Python – Lists | 0.003s |
Java | 0.00005s |
C | 0.00002s |

As the table shows, there was still a great gap between the Python list implementation and a simple Java or C implementation: as much as a factor of **100**!

We promised to get back to this and start looking at Cython as a bridge between the two worlds of interpreted languages and bare-metal implementations. You still get to live in the Python world with all the fancy machine learning and graphing facilities **and** fast backtesting!

So, by how much can Cython improve our timings? Here is a sneak peek, so you don’t have to plough through all the text: all the way down to 0.00056s, which is more than five times faster than the list version and actually pretty close to bare metal.

If you want to repeat these numbers [which will of course be highly dependent on the machine you run them on] you can grab the code at this GitHub repository: FXMC/backtest.

You might say: so who cares? Why squeeze it all the way down? The answer is simple. When you’re doing research you want to try your idea over many assets and with lots of different parameters, to get a feel for how stable or random your results are, or even just to generate a useful sample of P&L paths for your Monte Carlo analysis. You’d probably like to do that within a couple of minutes, not a couple of days!

The focus here is on what I did to get it running. I’ll show you the specifics, and then you can copy-paste into your own work.

First, what is Cython? In a nutshell: it is a transpiler that takes annotated Python source [hence it’s a .pyx file and not a .py file] and converts it to a C file. This file is then compiled as a module that Python can load.

The speed-up happens because your code has now been ported to C; loading the data into it becomes the main bottleneck.

Why? Because when you run your code from Python, the data has to be passed to your compiled C(P)ython module, and that conversion of data from one language to another takes time. Specifically, because you’re going from a very lenient, dynamically typed language like Python to a very strict, statically typed language like C.

However, if you add specific type annotations to your .pyx file you can bypass a lot of that machinery.

It’s the annotations that make the magic. Without them, you really don’t gain much speed.

So, that’s what we will focus on from a syntax perspective (for completeness, the repo has the other non-annotated version as well).

Here it goes:

```cython
import math  # needed for math.floor below

cpdef void run_strategy(list close, list ma_long, list ma_short, list rsi_n,
                        float lower_rsi, int start_indx, list posn, list cash):
    """Cythonized function with explicit type definitions"""
    cdef bint long_posn = False
    cdef int idx = 0
    cdef double shares = 0.0

    for idx in range(start_indx, len(close)):
        posn[idx] = posn[idx - 1]
        cash[idx] = cash[idx - 1]

        if long_posn:
            if close[idx] > ma_short[idx]:
                long_posn = False
                shares = posn[idx]
                posn[idx] = 0
                cash[idx] = cash[idx] + shares * close[idx]

        if not long_posn:
            if (close[idx] > ma_long[idx]) and (rsi_n[idx] < lower_rsi):
                long_posn = True
                shares = math.floor(cash[idx] / close[idx])
                posn[idx] = shares
                cash[idx] = cash[idx] - shares * close[idx]
```

Let’s go through how this differs from normal Python code:

- First, on the import side there is nothing Cython-specific: just the standard `import math` used for `math.floor`.
- The function has a `cpdef` tag with `void`, specifying no return value in this case. This is because the results are passed by reference in the lists *posn* and *cash*.
- Inside the function we also specify the types of all variables we will be using before jumping into the body of the function.
- The rest is standard Python.

In terms of building this .pyx file we need a setup.py file, which is in the repo. The details of setting up the correct C/C++ toolchain are left for the reader to check on the PSF website, but it’s pretty straightforward.
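For reference, a minimal setup.py for building a .pyx module usually looks something like this (a sketch; the file name `backtest.pyx` is an assumption, check the repo for the actual one):

```python
# setup.py -- minimal Cython build script (sketch; module name assumed)
from setuptools import setup
from Cython.Build import cythonize

# cythonize() transpiles the .pyx to C and wraps it as an extension module
setup(ext_modules=cythonize("backtest.pyx", language_level=3))
```

Running `python setup.py build_ext --inplace` then produces a compiled module that a regular `import` can load.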

How do the results stack up?

Here is the table again with the Cython results!

Implementation | Time for RSI2 Backtests |
---|---|
Python – DataFrame, date indexing | 7.3s |
Python – DataFrame, iterrows | 1.3s |
Python – DataFrame, itertuples | 0.03s |
Python – Lists | 0.003s |
Python – Cython, no Type Hints | 0.0015s |
Python – Cython, with Type Hints | 0.00056s |
Java | 0.00005s |
C | 0.00002s |

As promised, the Cython implementation gets down to 0.00056s with type hints included, boosting your Python backtesting!

This is a pretty awesome speed-up even if you compare it only to the pure list implementation.

It appears that running simulations over many different configurations won’t be as long a wait, or as psychologically torturous, as it looked to be at the start!

It is true that you shouldn’t be obsessed with speed. But sometimes it’s definitely worthwhile, especially when it boils down to the “research” loop: from idea to result and back to idea. If the delays are significant, the psychology is different, and you’ll have an additional obstacle to overcome.

The big advantage here is that ALL the other modules in the Python eco-system are at your fingertips. It might be worthwhile to compare the Python implementations to the C and Java ones in the repo, just to remind yourselves how much we’ve progressed in the last 40 years in terms of programming languages!

Both Part 1 and this second part are much more computer / programming focused than usual. How does this relate to trading? It does insofar as, if you lack the tools to test and analyze, you’ll be stabbing in the dark. And given that Python is the go-to language, it’s worthwhile being able to perform fast backtesting with it.


The post Improving your Python Backtesting – From DataFrames to Cython [Part 1] appeared first on FXMasterCourse.

Backtesting is every systematic trader’s basic tool. And Python is becoming the lingua franca of programming. So using Python for backtesting to get **fast** results should be possible!

Yes and no!

In this article, we’ll cover how to really improve your Python backtesting and boost your speeds by several **orders of magnitude**!

First, a quick table of the pros / cons of using Python:

Pros | Cons |
---|---|
Quick implementation time [Python’s forte!] | You’ll grow really old waiting for the results |
Gazillions of libraries for fancy output | |
Gazillions of libraries for fancy analysis | |

And it’s this con that’s the real biggy.

You’re a trader, so by definition you already have the attention span of a goldfish. It stands to reason, therefore, that waiting for a couple of seconds for a backtesting result to come back is an eternity.

It really is when you’re looking at a portfolio of strategies, a portfolio of assets, a portfolio of both, or if you’re running any sort of optimization.

So, in this article we’ll cover some simple and more sophisticated ways of improving our timing. We’ll start out with pure Python solutions and in Part 2 of this series we’ll cover the more sophisticated Cython module set, to squeeze the last ounce out of our code.

To keep ourselves on the straight and narrow we’ll use the Java and C implementations as a benchmark. Of course, these languages trounce Python. But, by the end of our journey you’ll agree that we don’t have to give up the comfortable life Python offers to get massive speed improvements.

The above statements might cause people to immediately say “Vectorize your Code using Pandas and NumPy!”

Agreed, in many circumstances you can speed up execution by calling vectorized maths functions from Pandas and NumPy.

But, let’s face it, the challenge in trading is that your decision now depends on a bunch of state variables from the last n time steps. This is where vectorization fails and you’re now in the world of having to write for-do loops.

And even Event Driven backtesters reduce to for-do loops when you run simulations on historical data.

So, the real challenge is running fast for-do loops in Python.

Is this possible?

To experiment and validate the various methods of speeding up our backtesting for-do loops we’ll use a straightforward trading system: the RSI(2) applied to the SPY from January 1993 until now.

Written in pseudo code it looks like:

```
// cash and position are arrays, indexing is not explicit
for all history do:
    cash today = cash yesterday
    position today = position yesterday

    // Exit Long
    if long and if C[0] > MA[Close, 5]
        posn today = 0
        cash today = cash today + shares * close

    // Enter long
    if no longs and if C[0] > MA[Close, 200] and RSI[Close, 2] < 20
        posn today = cash / close price
        cash today = cash today - posn today * close price
```

Since we simply want to focus on the efficiency of the for loops we’ll pre-calculate the Moving Averages and Relative Strength Indicators and simply look up the values in their corresponding array storages.
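The pre-calculation can be sketched with pandas along these lines (a hypothetical snippet, not the repo’s code; the RSI variant shown uses simple rolling averages, while real implementations often use Wilder smoothing):

```python
import numpy as np
import pandas as pd

def rsi(close: pd.Series, n: int = 2) -> pd.Series:
    """RSI over n periods, using simple rolling averages of gains and losses."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(n).mean()
    loss = (-delta.clip(upper=0)).rolling(n).mean()
    rs = gain / loss
    return 100 - 100 / (1 + rs)

# Illustrative price series; in practice this would be the SPY closes
df = pd.DataFrame({"close": np.linspace(100, 120, 250)})
df["ma_long"] = df["close"].rolling(200).mean()
df["ma_short"] = df["close"].rolling(5).mean()
df["rsi_2"] = rsi(df["close"], 2)
```

The loop then only needs cheap lookups into these pre-computed columns.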

The usual way somebody could implement this in Python is to:

- load the data into a DataFrame using the ubiquitous Pandas library
- add columns to the DataFrame for moving averages and relative strength
- loop over the rows of the data frame to calculate cash and position values over the lifetime of the system

So, it’s worthwhile to see how looping is implemented using DataFrames, since in and of itself it’s not the most obvious thing, and it’s the first place to remove bottlenecks.

A naïve method would use the date index of the DataFrame to retrieve the values from the matrix, while looping over all dates.

For 7,000 rows this gives an execution time of 7.3 seconds.

This is pretty impressive slowness.

Here’s the code skeleton:

```python
dt_range = df.index
for d in dt_range:
    if df.loc[d, 'close'] > df.loc[d, 'ma_short']:
        ...
```

You get the picture.

The above example might be intuitive since it queries data for specific days instead of using integer indices to get array / list / series values.

However, a more natural manner of accessing values in the matrix while looping would be to use the built-in constructs.

The first one you come across is the built-in DataFrame method iterrows.

For the same 7,000 rows the time taken to complete the loop is 1.3 seconds.

Here’s the code skeleton:

```python
for ix, row in df.iterrows():
    if long and (row.close > row.ma_short):
        ...
```

So firstly, there is hope: just by changing the approach we’ve sped the loop up by more than a factor of five. But it’s still pretty lousy. Imagine wanting to run an optimization over the stocks in the S&P 500. That could easily take you roughly 9 hours.

Who has time to sit around for 9 hours!!

Is there a better way?

It’s really remarkable that there are two methods which are so similar in behavior yet in terms of performance are light years apart.

Replace df.iterrows() by df.itertuples(). Here is the code skeleton:

```python
for row in df.itertuples():
    if long and row.close > row.ma_short:
        ...
```

The syntax change is minimal; however, the speed up goes from 1.3 seconds to … wait for it … 0.03 seconds.

Yes, you’ve read that right! The same code, same logic, and same container [the DataFrame], and we’ve sped the code up by a factor of 43.

This is pretty insane, right?
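You can see the gap for yourself with a small benchmark (a sketch on synthetic data, not the repo’s code; it only counts `close > ma_short` crossings, but the iteration cost dominates either way):

```python
import time
import numpy as np
import pandas as pd

# Synthetic data standing in for the 7,000-row SPY series
rng = np.random.default_rng(42)
df = pd.DataFrame({"close": rng.random(7000), "ma_short": rng.random(7000)})

def time_loop(iterator):
    """Count close > ma_short rows and report elapsed wall time."""
    start = time.perf_counter()
    n = sum(1 for row in iterator if row.close > row.ma_short)
    return n, time.perf_counter() - start

n1, t1 = time_loop(row for _, row in df.iterrows())
n2, t2 = time_loop(df.itertuples())
print(f"iterrows: {t1:.4f}s  itertuples: {t2:.4f}s")
```

The difference comes from `iterrows` constructing a full `Series` object per row, while `itertuples` yields lightweight namedtuples.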

So, if DataFrames can work so well, and DataFrames are actually nothing more than complex wrappers around simple arrays, what happens if we just throw out the wrapping and use the arrays themselves. I.e., shove the data into Python lists and loop over those?

The overhead in programming is a bit more, since we now need to explicitly code for each individual list we want to use, but…

… the time for performing the 7,000 loops now has become 0.003 seconds!

You read that right! So, if we were to analyze the S&P 500 stocks it would take a total of 1.5 seconds. Which is much better than lounging 9 hours in front of the screen.
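The list-based loop can be sketched like this (an illustrative version of the strategy loop with names of my own choosing; the repo’s implementation may differ in detail):

```python
import math

def run_strategy(close, ma_long, ma_short, rsi_n, lower_rsi, start_idx, posn, cash):
    """Pure-Python list version of the RSI(2) loop; results are written
    in place into the posn and cash lists."""
    long_posn = False
    for idx in range(start_idx, len(close)):
        posn[idx] = posn[idx - 1]
        cash[idx] = cash[idx - 1]
        # Exit long when price closes above the short moving average
        if long_posn and close[idx] > ma_short[idx]:
            long_posn = False
            cash[idx] += posn[idx] * close[idx]
            posn[idx] = 0
        # Enter long above the long moving average with a depressed RSI
        if not long_posn and close[idx] > ma_long[idx] and rsi_n[idx] < lower_rsi:
            long_posn = True
            shares = math.floor(cash[idx] / close[idx])
            posn[idx] = shares
            cash[idx] -= shares * close[idx]
```

Note how this is plain Python: the Cython version in Part 2 keeps exactly this structure and only adds type annotations.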

Part 1 of this series took a monstrous 7.3-second loop backtesting a simple system in Python and reduced it to 0.003 seconds. That’s an improvement of 2,400 times. Nothing to be sniffed at, and it only involved some basic rewriting of code!

All we did was to acknowledge that DataFrames are great for storing data and applying some math functions to the columns (or rows) in a vectorized fashion. But when it comes to looping, we might as well go down the old-fashioned way of using arrays (known in Python as lists).

So where to next?

In Part 2 of speeding up Python backtesting we’ll start to delve into the Cython module set. This does something funky: it takes your Pythonesque source [a file that ends in .pyx] and transpiles it to the C language. In so doing it can perform a bunch of optimizations which your Python interpreter wasn’t built to do, since it has to deal with the most generic use cases. However, you have the option of giving Cython very specific indications as to how your source code is supposed to be used.

The end result is an even bigger speed up!

Do we come close to Java and C on this simple loop?

Check in to Part 2, where we unveil the Cython results as well as provide a link to the GitHub code so you can check it out for yourself!

Here’s the summary of the speed-ups to-date of our Python backtesting with the corresponding comparisons to a Java / C implementation:

Implementation | Time for RSI2 Backtests |
---|---|
Python – DataFrame, date indexing | 7.3s |
Python – DataFrame, iterrows | 1.3s |
Python – DataFrame, itertuples | 0.03s |
Python – Lists | 0.003s |
Java | 0.00005s |
C | 0.00002s |


The post Calculating Bond Index Prices: Analyzing the Bond Bear Market appeared first on FXMasterCourse.

Here is a chart since 1960 of US 1 and 10 year rates:

With rates starting to normalize and go higher, bond prices are entering a bear market. Is it still advisable to hold bonds?

These are tricky questions, in particular because easily accessible bond data is lacking, and so, performing any back testing or forward simulations is difficult.

This article sets out to rectify that. We will:

- Introduce you to a simple way to get decades worth of good quality data
- Determine how to create a reliable index of bond prices showing total return going back decades
- Figure out what bonds can actually do for us in this cycle.

Bonds are finite maturity instruments. Unlike a stock that you can hold for decades, a bond matures after a certain amount of time.

Bonds are also tricky for retail traders to trade; they’re not easily accessible. As we saw previously, this is where ETFs come to the rescue. Bond ETFs are like shares. You buy them and receive an income stream from the bond coupons. ETFs also don’t have a finite maturity.

So how is it that we can buy a bond ETF, which like a share seems to hang around forever? And more importantly, given that these bond ETF prices only go back to 2002, how can we work out our own ETF proxies, extending them back far enough to produce some meaningful backtests?

To do this we have to break up our bond instrument into its parts and cover some basic bond maths.

A bond is made up of two components.

There is the underlying principal, which you as the bond buyer are lending to the bond seller. Since you are buying the bond, the money goes to the seller, who at maturity of the bond will give you the money back. So, buying a bond is just like lending money.

In the meantime, however, you will receive periodic payments. These are the bond coupon payments. Let’s have a look at an example.

You start out by paying for the bond. That’s the red bar. Seen another way: you’re making a loan and in return you’re receiving a piece of paper that entitles you to regular (in this case semi-annual) interest payments, and a repayment at the end of the life of the bond. The income stream forms your annuity. The final payment is the principal re-payment on the loan you’ve given.

Ultimately what you care about is, how much am I lending and what am I getting in return for it. The series of repayments represents the amount you’re earning on your loan. In essence you can say that the stream of all these future cashflows is equal to the amount you are lending today.

I.e. your loan is the price of today’s bond.

And as you can see this amount (the red bar) is directly related to the interest rates in the market.

So let’s follow this up by looking at some real world bond prices.

One thing you will find is that getting individual bond data on the internet is more difficult than for equities.

One such source of data is the Frankfurt Stock Exchange for various European bonds. An example is the German government bond maturing in 2024, six years from now. (Note that each bond has a unique identification number, called the ISIN. In this case it is DE0001134922. Google it!)

Now its coupon is at 6.25%. This means that for every bond, with a face value of EUR 10,000, I would get EUR 625 annually (on the 4th of January). Note that unlike US Government Bonds, German bonds pay annually rather than semi-annually.

But hang on, you say. Interest rates in Germany are low right now. In fact 6 year rates are at -0.08% (you can check that over at marketwatch.com).

So what’s going on? Is this money for free?

Actually, the quoted price on the exchange for this bond was 136.20 on Monday September 10, 2018. So to own EUR 10,000 face value of this bond I would have to fork over EUR 13,620 right now. Now remember that in 2024 I get the original EUR 10,000 back, plus a series of annual coupons (worth EUR 625 every 4th of January), so I’m actually looking to get back EUR 13,750 (just naively adding up all the cash-flows). This is more or less in line with the money invested, and equates to roughly 0% return over these six years. Exactly where the market puts 6 year German yields.
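The back-of-the-envelope arithmetic can be checked in a couple of lines (a naive sum that ignores discounting, just to mirror the text):

```python
face = 10_000           # EUR face value of the bond
coupon = 0.0625 * face  # 6.25% coupon -> EUR 625 per year
years_left = 6          # 2018 to maturity in 2024

# Principal back at maturity plus the remaining annual coupons
total_received = face + years_left * coupon
print(total_received)   # 13750.0
```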

In essence, because everybody asked the same bright question: “Money for free?”, they rushed in and pushed that bond price up until the rate of return equaled that of other interest rate instruments in the market. Hence the concept of no free lunch in economics. People tend to rush in and equalize prices so that such anomalies disappear quickly. This is also known as arbitrage trading: exploiting mismatches in prices between equivalent instruments.

To put the above example into context, this bond was issued back in 1994, when 30 year rates were roughly at 6.25%, and hence its coupon wasn’t so out of whack. At that time the bond most likely traded at “par,” meaning its market price was equal to its face value. That is, the EUR 10,000 bond actually traded for a price of EUR 10,000.

So as interest rates decrease, the bond price trades above par. The capital loss you make on receiving par value at redemption counteracts the yield gain on the high coupons during the lifetime of the bond.

This simple example drives the point home, that interest rates determine price and vice versa.

However, the relationship isn’t one-for-one. In technical jargon: the relationship between a bond price and interest rates is non-linear.

If I were to plot the relationship it wouldn’t be a straight line. Here’s the behavior of a 30 Yr bond’s price as interest rates change. The blue line shows the price / interest rate relationship, the orange line shows what a linear relationship would look like.

Two things stand out here:

- The actual price – rate relationship is upward curving
- The curvature for low rates is higher than for higher rates

This curvature is called convexity. And we can reformulate the relationship thus: high coupon bonds have a higher convexity than low coupon bonds (compared to current rate levels). And furthermore, higher convexity means that bonds are more sensitive to rate movements. You will also find that longer maturity bonds tend to have higher convexity than shorter maturity bonds.

From this chart you can see that in a falling rate environment people tend to want to hold higher convexity bonds, as this affords them a more levered position with respect to the underlying interest rates. In a rising rate environment the opposite is true.
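As a quick numeric illustration of that curvature, here is a sketch using the standard semi-annual bond pricing formula (the same one given later in the article); the 5% coupon and 30 year maturity are assumptions for the example:

```python
def price(r, C=0.05, T=30):
    """Price (per unit notional) of a semi-annual coupon bond at yield r."""
    disc = (1 + r / 2) ** (-2 * T)
    return (C / r) * (1 - disc) + disc

par = price(0.05)            # coupon == yield, so the bond is at par (1.0)
gain = price(0.04) - par     # yield falls by 1%
loss = price(0.06) - par     # yield rises by 1%
# Convexity: the gain from a rate drop exceeds the loss from an equal rise
assert gain > abs(loss)
```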

So how did we get this chart up? For the more mathematically savvy here is the relationship we are using

\(\text{P}= \text{Annuity} + \text{Capital Repayment}\)

The Annuity here is the value of the future coupons you receive and the Capital Repayment is today’s value of the loan repayment to be made at the bond’s maturity.

If

- the yield on the bond today is \(r\%\),
- your coupon is \(C\),
- your time to maturity is \(T\),
- your loan Notional is \(N\),
- and you make semi-annual payments the formula reduces to:

\(\begin{align}P(r,C) & = \displaystyle\sum_{i=1}^{2T}\frac{\frac{C}{2}N}{\left(1+\frac{r}{2}\right)^i} + \frac{N}{\left(1 + \frac{r}{2}\right)^{2T}} \\\\ & = \frac{CN}{r} \left(1 - \left(1 + \frac{r}{2}\right)^{-2T}\right) + N\left(1 + \frac{r}{2}\right)^{-2T}\end{align}\)

On a technical note: this formula is true on Coupon payment day. In between coupons we need to take into account the accrual. The correction can be stated as:

\(P(r,C,t) = \left( 1 + \frac{r}{2}\right)^{t}\times P(r,C)\),

where \(t\) is the time from the previous coupon date as a year fraction.

This part becomes really important when we want to account for any money that we’ve made on the bond index as time passes, i.e. it allows us to take account of the coupon we are being paid.
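Put together, the two formulas translate directly into code (a sketch; the function names are my own):

```python
def bond_price(r, C, T, N=1.0):
    """Clean price of a semi-annual coupon bond on a coupon date:
    annuity value of the coupons plus discounted capital repayment."""
    disc = (1 + r / 2) ** (-2 * T)
    return (C * N / r) * (1 - disc) + N * disc

def bond_price_accrued(r, C, T, t, N=1.0):
    """Accrual-adjusted price; t is the year fraction since the last coupon."""
    return (1 + r / 2) ** t * bond_price(r, C, T, N)
```

A handy sanity check: when the coupon equals the yield the bond prices at par, e.g. `bond_price(0.05, 0.05, 10)` gives 1.0 (up to floating point).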

This relationship between bond prices and interest rates will allow us to extrapolate price series into the past giving us a synthetic ETF for our backtesting and regime analysis.

And the good thing is we have interest rate data galore! Going back far enough that we can perform some useful analysis of what the future might bring!

Two sources we can look at are:

- Prof. Schiller’s website (which goes back to 1871).
- The US Treasury’s website (though we will be using Python and Quandl for this)

In this section we’ll use the formula for bond prices above to create our own bond index. We’ll also check that our bond index makes sense, by comparing it to the current Bond ETFs and making sure our bond index tracks them appropriately.

Once we’ve covered that we’ll use a great property of interest rates that will allow us to make some educated estimates of future behaviour, allowing us to peek into the future.

Let’s start out by listing the ETFs we could compare our bond index to:

Bond ETF | Maturities Covered | Date available from | CAGR | Sharpe Ratio | Max D/D |
---|---|---|---|---|---|
SHY | 1 - 3 yrs | 2002-07-30 | 1.9% | 1.35 | -2.2% |
IEI | 3 - 7 yrs | 2007-01-11 | 3.5% | 0.87 | -6.0% |
TLH | 10 - 20 yrs | 2007-01-11 | 5.4% | 0.58 | -14.3% |
TLT | > 20 yrs | 2002-07-30 | 6.9% | 0.51 | -26.6% |
AGG | > 1 yr | 2003-09-29 | 3.6% | 0.76 | -12.8% |

As we saw previously, the longer dated bond ETFs have a higher volatility, and hence a lower Sharpe Ratio.

Now let’s try to replicate some of these ETFs by using the rates available from the US Treasury’s website.

The simplest one to replicate is the TLT, as it is primarily driven by long rates, which have been relatively flat in recent history, unlike the other ETFs which span sections of the yield curve with more curvature.

We’ll track the history back to 2008. Primarily because the 30yr bond issuance has been patchy. The US government started selling 30yr bonds regularly in 1977, but discontinued them in October 2001. They then restarted issuing them in February of 2006.

The result is:

The replication methodology is very simple. At the start of every month we buy a par bond. At the end of the month we check our returns. These are made up of the accrued coupon and a change in price due to the change in rates:

\(\begin{align}\Delta A &=\frac{r_{t-1}}{12}, \\\\ \Delta P &=\frac{Nr_{t-1}}{r_t}\left(1-\left(1+\frac{r_t}{2}\right)^{-2T}\right) + \left(1+\frac{r_t}{2}\right)^{-2T} - 1\end{align}\).

The assumptions here are that our bond notional is $1, and the yield of the bond we buy at the start of every month is the same as its coupon, which means its price is at par, i.e. equal to face value, which is the $1. This is a good approximation since the most recent bonds issued by governments tend to have coupons very close to the current yield.

So the total monthly change in value is the sum of these two components \(\Delta A + \Delta P\).
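In code, one month of this rolling par-bond index might look like this (a sketch consistent with the formulas above, with notional N = 1; the function name is my own):

```python
def monthly_bond_return(r_prev, r_now, T=30):
    """One month's total return: accrued coupon plus price change of a
    par bond bought at yield r_prev and revalued at yield r_now."""
    disc = (1 + r_now / 2) ** (-2 * T)
    d_accrual = r_prev / 12          # one month of coupon accrual
    d_price = (r_prev / r_now) * (1 - disc) + disc - 1
    return d_accrual + d_price
```

If rates are unchanged over the month the price term vanishes and the return is just the accrued coupon, r / 12.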

These two components give us two income streams which we can plot out for the same time period as the chart above:

The take-away point here is that we make money both on the coupons we are receiving (the red line) as well as the capital appreciation due to declining rates, which result in rising bond prices.

We’ll have more to say about this when we try to predict possible outcomes: rising rates don’t necessarily mean losing money. The speed with which they rise also matters, since if they rise too quickly our income won’t make up for the loss caused by the interest rate move.

We can also create our bond indices for the interest rates available from the US Treasury website and create an equally weighted portfolio, to compare it to the AGG ETF, which is an aggregate of all bonds with maturity greater than one year. The result is:

Looking at returns, our proxy tracks AGG movements closely, with a correlation on monthly returns of 97.8%. However, the AGG covers a universe of bonds larger than just government bonds. Government bonds make up 38%. 28% of the AGG are made up of mortgage backed securities (residential and commercial), 25% are corporate bonds, the remainder is made up of Sovereign / Supranational / Agency, and other bonds. Here sovereign means USD denominated foreign bonds, e.g. Hungary, Colombia, etc. This kind of discrepancy explains the difference in performance between our proxy and AGG.

So we are now in a position to start conducting backtests as well as present a possible future outcome given the current rise in rates.

Let’s start out by looking at 10 year rates going back to 1871, using Schiller’s data:

From the chart you can see that the data starts to become more granular from the start of 1953 onwards. Prior to that the observation periods are annual, with averaged values in between. Hence the smoothed nature of rates prior to 1953.

Regardless, let’s have a look at the relative behavior between the 10 year proxy we can construct from this yield series and the S&P 500 data from the same source. (Note we use log prices in these charts)

At first glance, the results are quite striking: over the last 150 years bonds and equities have performed nearly in sync. There are some structural differences, however, throughout the time period. Since the 1930s it’s clear that equities have significantly outperformed bonds: they start from a complete low point, and yet reach the current highs of our bond index.

The whole point of this exercise of constructing a price index from bond yields was to answer the question of what happened during previous bond bear markets.

The conventional wisdom is that fixed income securities decrease in price when yields go up. However, as we saw previously, it’s not only the value of the notional of the bond that contributes to the value of the bond. It’s also the coupon that you earn during the life-time of the bond.

The caveat here, is the speed with which rates start to increase. The rate of change will influence the decline of bond prices, and if it’s too fast, the coupon might not make up for it.

To get a better picture of the relationship between rising and declining interest rate regimes, let’s plot the bond index versus the underlying 10 year rate:

The part we naturally want to focus on for this article is the section which experienced the previous big updraft in rates: 1953 to 1982.

And this is really the punch-line: bond prices appreciated… a bit. Coming back to coupon income vs capital appreciation, let’s plot out the two components during this time period:

The high coupon income covered the capital losses.

However, most importantly, volatility was contained during most time periods:

This has been a pretty lengthy article, so we’ll take a break here…

To summarize: in this article we’ve covered the nature of bonds, and the relationship between their prices and current yields.

Most importantly we’ve formulated a framework to help us navigate bond price behavior over a longer time horizon, so that we can create a historical lab for testing how bonds and their inclusion in equity portfolios will impact returns.

The relevant points here are:

- Bonds make money on the interest payments they receive
- They also make or lose money on the appreciation or depreciation of the capital, depending on how interest rates move
- Rising rates don’t always mean losing money; it depends how quickly they rise with respect to the income the bond generates

So where do we go with this?

Now that we have a lab set up, we can ask how an equity / bond portfolio performed in the past. And equally exciting: since interest rates mean revert, we can project their behavior into the future and figure out how a rising rate environment might affect our equity / bond portfolio.
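As a taste of what such a projection could look like, here is a sketch of a simple mean-reverting rate simulation (a discretized Vasicek-style model; all parameter values are illustrative assumptions, not calibrated to any market):

```python
import numpy as np

def simulate_rates(r0, mean, speed, vol, n_months, seed=0):
    """Simulate monthly rates with a pull toward a long-run mean:
    dr = speed * (mean - r) * dt + vol * sqrt(dt) * dW, with dt = 1/12."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 12.0
    rates = np.empty(n_months)
    rates[0] = r0
    for i in range(1, n_months):
        drift = speed * (mean - rates[i - 1]) * dt
        shock = vol * np.sqrt(dt) * rng.standard_normal()
        rates[i] = rates[i - 1] + drift + shock
    return rates

# Ten years of monthly rates drifting from 3% toward a 5% long-run mean
path = simulate_rates(r0=0.03, mean=0.05, speed=0.5, vol=0.01, n_months=120)
```

Feeding such paths through the bond index formulas above would generate the P&L scenarios for a rising rate regime.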

Recall that the main focus of the article series is to understand how each asset can contribute to our portfolio. So gaining an insight into the future by performing simulations is quite exciting!

In the meantime, the Python Code for all the diagrams in this article is included here, so that you can go ahead and conduct your own experiments.

So, until next time,

Happy Trading.

If you have enjoyed this post follow me on Twitter and sign-up to my Newsletter for weekly updates on trading strategies and other market insights below!

It’s been six months since I wrote the last article, and with time flying it’s always a good exercise to review how systems have performed. These walk-forward tests tend to satisfy two important points:

- Re-affirm that the initial principles were valid
- Keep your eyes open for any regime shifts

And with such a turbulent market environment (from trade wars and domestic US issues to Brexit in Europe, as well as tech stock problems) it’s good to see that the principles still hold. Here is how the 60% Equity – 40% Bond portfolio has fared since the start of 2008, as well as the recovery after the volatility shakes in February:

Here is the Python Code that produces the diagrams in this post.

You might not have some of the modules which are used here, and so you will have to pip install them.

In particular:

- Quandl (for which you should get an API key for free here: Quandl API Key). This module enables you to grab lots of free market data, in particular the FRED Treasury Rate series going back to the 60s
- requests, which will allow you easy access and downloads from the internet
- numpy and pandas, which are standard numerical analysis and data manipulation modules
- matplotlib which is the de facto plotting module in Python
- fix_yahoo_finance, which bypasses the issue in pandas for obtaining Yahoo Finance data

```python
'''
Script for accessing Quandl and Schiller data as well as Bond ETF data and
generating Bond indexes going back to 1871 and comparing them to the ETF data.
'''
import math
import io
import quandl
import requests
import datetime as dt
import numpy as np
import pandas as pd
import fix_yahoo_finance as yf
import matplotlib.pyplot as plt
from collections import OrderedDict
from pandas_datareader import data as pdr

yf.pdr_override()

ak = XXXXX  # INSERT FREE QUANDL KEY HERE from here: https://help.quandl.com/article/320-where-can-i-find-my-api-key

TRS = OrderedDict([('M01', 'FED/RIFLGFCM01_N_B'),
                   ('M03', 'FED/RIFLGFCM03_N_B'),
                   ('M06', 'FED/RIFLGFCM06_N_B'),
                   ('Y01', 'FED/RIFLGFCY01_N_B'),
                   ('Y02', 'FED/RIFLGFCY02_N_B'),
                   ('Y03', 'FED/RIFLGFCY03_N_B'),
                   ('Y05', 'FED/RIFLGFCY05_N_B'),
                   ('Y07', 'FED/RIFLGFCY07_N_B'),
                   ('Y10', 'FED/RIFLGFCY10_N_B'),
                   ('Y20', 'FED/RIFLGFCY20_N_B'),
                   ('Y30', 'FED/RIFLGFCY30_N_B'),
                   ])


def plot_yld(df, yld='M03'):
    '''
    Plots a column from the dataframe df, which contains the Quandl Treasury Rates
    labelled by the columns above
    :param df: Quandl Treasury Rates
    :param yld: Labelling which column to plot, can be a list of strings for multiple rate series
    '''
    if type(yld) == list:
        for i in yld:
            plt.plot(df[i], label=i)
    else:
        plt.plot(df[yld], label=yld)
        plt.title('Chart of ' + yld + ' government bond yield')
    plt.title('Chart of government bond yield')
    plt.grid()
    plt.legend()
    plt.show()


def get_trs(cache=True):
    '''
    Function to download the Quandl Treasury Rates
    :param cache: If False will download and store rates in a pkl file.
                  If True will load pickle file
    :return: Resulting DataFrame with Treasury rates from Quandl
    '''
    if cache:
        try:
            df = pd.read_pickle("rates.pkl")
        except:
            pass
        else:
            return df
    df = pd.DataFrame()
    for k, v in TRS.items():
        s = quandl.get(v, api_key=ak)
        s.rename(columns={'Value': k}, inplace=True)
        df = df.join(s, how='outer')
    df = df.replace(0, pd.np.nan).ffill()
    df.to_pickle("rates.pkl")
    return df


def simulate_bond(df, yld='Y01', mat=1, freq=360, pfreq=2):
    '''
    Bond Index price simulation
    :param df: DataFrame with Treasury Rates from Quandl
    :param yld: Identifies which Rate in the DataFrame to use
    :param mat: Maturity of bond considered, e.g. for Y01, mat = 1
    :param freq: Frequency of rate observations, e.g. for monthly observations freq = 12
    :param pfreq: Payment frequency. E.g. for US Bonds pfreq=2, for German bonds pfreq = 1
    :return: A dataframe containing the various components of the Bond Index Series
    '''
    if not type(yld) == list:
        yld = [yld]
    rates = df[yld]
    rates.columns = ['rate']
    rates['rate'] = rates.rate / 100.0
    rates['capital'] = (1 + rates.rate / pfreq).pow(-pfreq * mat)
    rates['annuity'] = rates.rate.shift(1) / rates.rate * (1 - rates.capital)
    rates['coupon'] = rates.rate.shift(1) / freq
    rates['ret'] = rates.capital + rates.annuity + rates.coupon - 1
    rates['p_c'] = np.exp((rates.capital + rates.annuity - 1).cumsum())
    rates['c_c'] = np.exp(rates.coupon.cumsum())
    rates['px'] = (1 + rates.ret).cumprod()
    return rates


def resample_ylds(df: pd.DataFrame):
    '''
    Simple function to resample rates to monthly observations.
    Assumption is that DataFrame index contains time.
    '''
    df = df.resample('M').last()
    return df


def get_symbol(sym):
    '''
    Wrapper for getting tickers from Yahoo
    :param sym: Yahoo ticker. E.g. AGG for AGG ETF
    :return: DataFrame with Yahoo Finance price data
    '''
    df = pdr.get_data_yahoo(sym, start='1990-01-01',
                            end=dt.date.today().strftime('%Y-%m-%d'))
    print(df.shape)
    return df


def simb(symb='TLH', yld='Y10', mat=10, freq=12, pfreq=2):
    '''
    Function to plot bond price simulation versus ETF Price
    :param symb: Yahoo ETF Ticker
    :param yld: Label of rate from Quandl Treasury Rate data
    :param mat: Maturity of bond considered
    :param freq: Observation frequency
    :param pfreq: Payment frequency of bond
    '''
    ylds = get_trs(True)
    ylds = ylds[ylds.index >= dt.datetime(2008, 8, 1)]
    ylds_monthly = resample_ylds(ylds)
    bnd_px = simulate_bond(ylds_monthly, yld=yld, mat=mat, freq=freq, pfreq=pfreq)
    tlh = get_symbol(symb)
    tlh = tlh[tlh.index >= dt.datetime(2008, 8, 1)]
    tlh_monthly = tlh.resample('M').last()
    tlh_monthly['px'] = tlh_monthly['Adj Close'] / tlh_monthly.ix[dt.datetime(2008, 8, 31), 'Adj Close']
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    fig.hold(True)
    ax.plot(bnd_px.index, bnd_px.px, label="30Yr Bond Index")
    ax.plot(bnd_px.index, bnd_px.p_c, label="Capital Component")
    ax.plot(bnd_px.index, bnd_px.c_c, label="Coupon Component")
    ax.plot(tlh_monthly.index, tlh_monthly.px, label="TLT ETF")
    vals = ax.get_yticks()
    ax.set_yticklabels(['{:3.2f}'.format(x * 100.0) for x in vals])
    plt.legend()
    plt.grid()
    plt.title('TLT ETF vs 30 Yr Simulated Bond Index')
    plt.show()


def sim_agg(cached=True):
    '''
    Function to simulate AGG ETF using a weighted average of the bond indexes
    generated from Quandl Treasury Rates
    '''
    tlh = get_symbol('AGG')
    tlh = tlh[tlh.index >= dt.datetime(2008, 8, 1)]
    tlh_monthly = tlh.resample('M').last()
    tlh_monthly['px'] = tlh_monthly['Adj Close'] / tlh_monthly.ix[dt.datetime(2008, 8, 31), 'Adj Close']
    vol = (tlh_monthly.px / tlh_monthly.px.shift(1) - 1).std()
    ylds = get_trs(cached)
    ylds = ylds[ylds.index >= dt.datetime(2008, 8, 1)]
    ylds_monthly = resample_ylds(ylds)
    px_l = ['Y01', 'Y02', 'Y03', 'Y05', 'Y07', 'Y10', 'Y20', 'Y30']
    ret = 0
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    fig.hold(True)
    for y in px_l:
        bnd = simulate_bond(ylds_monthly, yld=y, mat=int(y[1:]), freq=12, pfreq=2)
        bnd.ret = bnd.ret / bnd.ret.std() * vol
        # ax.plot(bnd.index,bnd.px, label=y)
        ret = ret + bnd.ret / len(px_l)
    px = (1 + ret).cumprod()
    ax.plot(px.index, px, label='Simulation')
    ax.plot(tlh_monthly.index, tlh_monthly.px, label='AGG')
    plt.legend()
    plt.grid()
    plt.title('TLT ETF vs 30 Yr Simulated Bond Index')
    plt.show()
    print((px / px.shift(1) - 1).std())
    print(vol)
    return [px, tlh_monthly]


def makedt(x):
    '''
    Helper function to convert Date column in Schiller data to Python datetime.date object
    '''
    d = dt.datetime.strptime(x, '%Y.%m') + dt.timedelta(days=32)
    d = d.replace(day=1)
    d = d - dt.timedelta(days=1)
    return d


def get_schiller():
    '''
    Function to grab and return as a DataFrame the time series for 10Y rates and SP500 since 1871
    '''
    url = 'http://www.econ.yale.edu/~shiller/data/ie_data.xls'
    response = requests.get(url)
    f = io.BytesIO(response.content)
    names = ['Dates', 'SP500', 'Dividend', 'Earnings', 'CPI', 'DateF', 'T10Y',
             'RealPrice', 'RealDiv', 'RealEarn', 'CAPE']
    df = pd.read_excel(io=f, sheet_name='Data', skiprows=7, skip_footer=1,
                       names=names, usecols="A:K")
    df['Dates'] = df['Dates'].apply(lambda x: "{:0.2f}".format(x)).apply(makedt)
    return df


def long_term_bond():
    '''
    Function to plot S&P500 and 10Y Bond Index from Schiller Data
    '''
    df = get_schiller()
    px = simulate_bond(df, yld='T10Y', mat=10, freq=12, pfreq=2)
    a = df.SP500 / df.SP500[0]
    b = px.px / px.px[1]
    ix = df['Dates']
    plt.plot(ix, np.log(a), ix, np.log(b))
    ax = plt.gca()
    ax.xaxis.set_tick_params(labelsize=24)
    ax.yaxis.set_tick_params(labelsize=24)
    plt.grid(linewidth=2)
    plt.title('S&P500 vs 10 Year Bonds since 1871, Log Scale', fontsize=24)
    plt.show()


def px_vs_yld(stdt=None, eddt=None):
    '''
    Function to plot Bond Index price versus the corresponding Yield between a start and end date
    '''
    df = get_schiller()
    px = simulate_bond(df, yld='T10Y', mat=10, freq=12, pfreq=2)
    b = np.log(px.px / px.px[1])
    ix = df['Dates']
    if (stdt is None) or (eddt is None):
        iix = pd.Series(len(ix) * [True])
    else:
        iix = (ix > stdt) & (ix < eddt)
    fig, ax1 = plt.subplots()
    ax1.plot(ix[iix], b[iix])
    ax1.set_ylabel('bond index', fontdict={'size': 24})
    ax1.xaxis.set_tick_params(labelsize=24)
    plt.grid(linewidth=2)
    ax2 = ax1.twinx()
    ax2.plot(ix[iix], df['T10Y'][iix], color='red')
    ax2.set_ylabel('10Y Yield', color='red', fontdict={'size': 24})
    ax1.yaxis.set_tick_params(labelsize=24)
    ax2.yaxis.set_tick_params(labelsize=24)
    plt.title('10 Yr Bond Index vs Yield from 1871', fontsize=24)
    plt.show()


def bx_coup_cap(stdt=None, eddt=None):
    '''
    Function to plot Capital and Coupon contributions to Bond Index simulated from Schiller Data
    '''
    df = get_schiller()
    ix = df['Dates']
    if (stdt is None) or (eddt is None):
        iix = pd.Series(len(ix) * [True])
    else:
        iix = (ix > stdt) & (ix < eddt)
    px = simulate_bond(df[iix], yld='T10Y', mat=10, freq=12, pfreq=2)
    plt.plot(df['Dates'][iix], np.log(px.px / px.px.iloc[1]),
             df['Dates'][iix], np.log(px.p_c / px.p_c.iloc[1]),
             df['Dates'][iix], np.log(px.c_c / px.c_c.iloc[1]))
    ax = plt.gca()
    ax.yaxis.set_tick_params(labelsize=16)
    ax.xaxis.set_tick_params(labelsize=16)
    plt.legend(['Index Price', 'Index Capital Component', 'Index Coupon Component'])
    plt.grid(linewidth=2)
    plt.title('Coupon Income vs Capital Income during Bond Bear Market 1953/82',
              fontdict={'fontsize': 16, 'fontweight': 'bold'})
    plt.show()


def bx_vol(stdt=None, eddt=None):
    '''
    Function to plot the realized price volatility of a Bond Price index simulated from Schiller's Data
    '''
    df = get_schiller()
    ix = df['Dates']
    if (stdt is None) or (eddt is None):
        iix = pd.Series(len(ix) * [True])
    else:
        iix = (ix > stdt) & (ix < eddt)
    px = simulate_bond(df[iix], yld='T10Y', mat=10, freq=12, pfreq=2)
    ret = px.px / px.px.shift(1) - 1
    ss = ret.rolling(window=12).std() * math.sqrt(12)  # yearly realized volatility
    plt.plot(df['Dates'][iix], ss)  # , df['Dates'][iix], ss.rolling(window=24).mean())  # 2 year vol and average over 100 months
    ax = plt.gca()
    ax.yaxis.set_tick_params(labelsize=16)
    ax.xaxis.set_tick_params(labelsize=16)
    plt.legend(['Yearly Standard Deviation of Bond Index'])
    plt.title('Yearly Standard Deviation of Bond Index',
              fontdict={'fontsize': 16, 'fontweight': 'bold'})
    plt.grid(linewidth=2)
    plt.show()


if __name__ == "__main__":
    run_option = 1
    if run_option == 1:
        # Figure of Treasury Rates from Quandl
        df = get_trs(False)
        plot_yld(df, yld=['Y01', 'Y10'])
    elif run_option == 2:
        # Comparison of bond index vs TLT
        simb(symb='TLT', yld='Y30', mat=30, freq=12, pfreq=2)
    elif run_option == 3:
        # AGG simulation
        sim_agg()
    elif run_option == 4:
        # Bond Index vs SP500 from Schiller data
        long_term_bond()
    elif run_option == 5:
        # Bond price index vs 10 Yr Yield from Schiller Data
        px_vs_yld()
        px_vs_yld(stdt=dt.date(1952, 1, 1), eddt=dt.date(1982, 1, 1))
    elif run_option == 6:
        # Capital and Coupon contributions for 10Yr Rate from Schiller Data
        bx_coup_cap(stdt=dt.date(1952, 1, 1), eddt=dt.date(1982, 1, 1))
    elif run_option == 7:
        # Bond Index Volatility
        bx_vol(stdt=dt.date(1949, 1, 1), eddt=dt.date(2019, 1, 1))
```

The post Calculating Bond Index Prices: Analyzing the Bond Bear Market appeared first on FXMasterCourse.

The post Why Would You Want to Invest in Bonds? appeared first on FXMasterCourse.

In the first article of this series on Creating Profitable Trading Strategies we started with the premise that you need to look at assets and their underlying biases.

We covered equities. They’re exciting. Buffett makes 19.1% a year on them. And as we’ve just seen in the last month, they provide us with some great roller-coaster rides.

Not only that, they also exhibit some nice consistent long-term behavior such as long-term momentum and short term mean reversion.

As far as equity indices and a simple approach go, we’ve squeezed the lemon sufficiently.

Let’s move on to the next vegetable (uh, fruit): Bonds!

So, what’s a Bond? Before you invest in a Bond, you’ve got to know what it’ll give you.

A government bond (which is what we’ll focus on in this article), is a piece of paper issued by a government which is the equivalent of an IOU. And just like when you borrow from a bank, the government has to pay back this loan.

The first ever general bonds were issued by the Netherlands in 1517 (though the Netherlands didn’t officially exist yet; it was the city of Amsterdam). The first bonds issued by a national government came in 1694, from the Bank of England, to fund its war against France. Up until the 20th Century many of these bonds were perpetual, in that governments did not have to repay the principal. This stopped in the 20th Century, and nowadays most bonds come with a maturity date upon which the full amount is repaid (hopefully).

The great thing about government bonds is that it’s one borrower, a very big one, borrowing across lots of time frames. Why is this great? For the following reasons:

Governments need to fund stuff. Like the roads we use, the hospitals we go to, and the schools we send our children to. As well as the armed forces that protect the country, and the police that keep us safe.

Just like most normal families, the government just can’t seem to get its act together and tends to need more money than it has. So, the government goes out and borrows. From you, its citizen.

Given the funding needs, it will do so with loans of different maturities. Like your car loan, which might be only for 5 years. Your mortgage might be for 25 years. Similarly, the government needs stopgap funding, or longer-term funding to meet longer-term liabilities, or even to fix the low rates at present.

So, in essence by analysing a country’s bond market we end up being able to observe what the market is willing to lend to the government and at what cost.

And since these loans are made across a wide range of maturities, they can tell us some very interesting information as to what the market believes rates and the economy might do. It informs us of certain underlying expectations.

The IOU the government gives you to evidence your lending is called a Bond. The regular repayments are called Coupons (initially bonds came as certificates of ownership with paper coupons attached, which you exchanged for payment, hence the name).

Now you don’t have to hold on to these IOU pieces of paper. You can sell them on in a secondary market, if you don’t want to wait for repayment, since you see another opportunity on the horizon.

Or you could buy a bucket load, since it looks like the government is actually getting out of the doldrums, and this IOU is therefore quite valuable. I.e. you’re engaging in purchasing distressed debt.

In either case you can buy and sell this IOU from the market.

Disregarding the mechanics of how this market works, how governments actually issue debt, the fact that this bond has a price is ultimately all that counts.

So, what does this price depend on?

Well, we’ve identified the three factors:

- Time left until the loan matures
- The interest rate on the loan
- And the creditworthiness of the government

Let’s assume for the sake of this article that the third factor, creditworthiness, isn’t that relevant. So all we have left is time and interest.
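To make the time-and-interest dependence concrete, here is a toy pricing sketch (my own illustration, not code from the article): a bond’s price is simply its coupons plus principal, discounted at the prevailing yield.

```python
def bond_price(face: float, coupon_rate: float, yield_rate: float,
               years: int, pfreq: int = 2) -> float:
    """Price as discounted coupons plus discounted principal."""
    c = face * coupon_rate / pfreq          # coupon paid per period
    n = years * pfreq                       # number of remaining payments
    r = yield_rate / pfreq                  # per-period discount rate
    coupons = sum(c / (1 + r) ** k for k in range(1, n + 1))
    principal = face / (1 + r) ** n
    return coupons + principal
```

When the yield equals the coupon rate the bond prices at par; push the yield up and the price drops, which is the basic time/interest trade-off the rest of the article leans on.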

In 2011 US debt was downgraded by S&P from AAA to AA+. Bizarrely enough bond prices went up. Regardless of the fact that the ratings agencies had just officially reduced the US credit rating, it was still considered a desirable asset by the market.

As you probably recall from middle school math class, if you have two variables you can plot them out.

Let’s do that. Interest charged on a bond, versus time left until the bond matures.

You can find the python code at the end of this article.

The very first thing that is striking about this chart, and which shows the interesting times we still live in, is the number of government bond yields that are negative. On the short-dated side this reflects Central Bank policy: trying to incentivize money flow, and hence increase inflation again, implicitly getting the economy up and running. By the 10-year mark rates are close to or above zero. It shows how investors over the last two years have been willing to give up a coupon payment in return for holding safe assets, especially during the 2016 Brexit vote. And rates are positive all round on the long-dated side.

The second thing you should notice is that this is just a snapshot of TODAY.

So, how would you plot a yield curve’s evolution through time?

As you can see, getting your head around the Yield Curve and how it moves through time, is much more complex than looking at the price-chart of a stock!

Rather than looking at a point moving through time, tracing out a line, you are looking at a line moving through time, tracing out a surface.

But that’s not all!

Remember that these Bonds have a finite maturity!

So, as you move through time the nature of the bonds change as well.

Let’s say you start out today with a government bond that matures in 10 years, i.e. the government pays you in 10 years’ time the full amount back, plus some interest on the way there.

Well, in one year’s time this bond has become a 9-year bond. Now let’s say the government wants to issue some more loans also maturing in 9 years’ time. Obviously, they have to have some relationship to the 10-year bond which has now become a 9-year bond, since both types of bonds represent the same concept: a 9-year loan.

So, you have a yield curve moving through time, and at the same time the bonds on the yield curve roll along the yield curve with decreasing maturity.

That’s a lot of dynamics. That’s why fixed-income derivative traders tend to look at FX and Equity traders as if they just got out of their caves.

So, let’s have a quick summary of the bond market:

You have a yield curve

Each point on today’s yield curve represents an interest rate at which the government can borrow from YOU (that’s what your tax is used for).

At each maturity point on the curve you have bonds which will mature at that time. As time passes bonds roll down the curve (down, because the interest rate tends to get lower the shorter the maturity).

Take a break. The following picture summarizes this:

… you might be asking. Who cares??

Well, let’s bring over our typical financial advisors.

One of the popular asset allocations in the 80s was the 60/40 portfolio. I.e. stick 60% of your capital in Equities and 40% in Bonds. Bonds were supposed to buffer you from the volatility of stocks; however, you still wanted the stock upside, since you needed the growth. Hence the 60% there.

There was also a more general rule of thumb: take your age and stick that into bonds, and 100 – your age and stick that into equities. Obviously the older you get, the more you need the income that bonds generate, rather than the growth equities provide.
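That rule of thumb in code, for what it’s worth (a purely illustrative helper, not from the article):

```python
def rule_of_thumb_split(age: int):
    """Your age in bonds, 100 minus your age in equities."""
    return 100 - age, age  # (% equities, % bonds)
```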

In any case bonds should be added to your portfolio (how much? we’ll cover that later).

Does that make any sense?

Let’s take a look. The first question to hit us: what do we use to invest in bonds? Here’s the great thing that’s happened over the last 15 years: the introduction of ETFs which track just about anything. So we can actually use the IEF, which tracks 10-year US government bonds. For equities we will use the SPY, which tracks the S&P 500.

And we have the following performance statistics for this portfolio at leverage 1x:

| | 60 / 40 | SPY | TLT |
|---|---|---|---|
| Ann. Return | 8.8% | 10.2% | 6.9% |
| Ann. Vol | 8.3% | 14.1% | 13.0% |
| Ann. Sharpe | 1.1 | 0.7 | 0.5 |
| Max D/D | -28.4% | -50.7% | -21.8% |

So, adding bonds to our equities does indeed work out: the Sharpe Ratio is a significant 1.1, which is pretty astounding!
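If you want to sanity-check numbers like these yourself, the annualized statistics can be computed from monthly returns along the lines of the `performance_stats` helper in the script at the end of this article (the version here is a self-contained sketch):

```python
import math
import pandas as pd

def annual_stats(monthly_rets: pd.Series) -> dict:
    """Annualized return, vol, Sharpe and max drawdown from monthly returns."""
    ret = monthly_rets.mean() * 12
    vol = monthly_rets.std() * math.sqrt(12)
    equity = (1 + monthly_rets).cumprod()           # compounded equity curve
    max_dd = (equity / equity.expanding().max() - 1).min()
    return {'ann_return': ret, 'ann_vol': vol,
            'sharpe': ret / vol, 'max_dd': max_dd}
```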

Hang on, you will say: what bonds? And if they mature, how do I reinvest? And do I need a PO box to get the coupon?

The great thing is that ETFs take care of all of that. They magically roll any matured bonds into new positions, and the coupons magically appear in your account for you to re-invest.

An example is the IEF from before, which looks only at bonds with maturities between 7 and 10 years.

This means that when the maturity of the bonds the ETF holds drops below 7 years, they get sold and the newest batch of 10-year bonds is acquired by the managers of this ETF.

So, looking at the IEF is in essence like holding a bond that always has a fixed maturity of roughly 10 years. Or in other words, you are tracking the 10-year interest rate. In jargon, you are holding on to a constant maturity bond of roughly 10 years.
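A toy sketch of why such a constant-maturity holding moves with its yield (my own illustration, not the article’s code): take a par bond issued at last period’s yield and reprice it at today’s.

```python
def cm_bond_return(y_prev: float, y_now: float,
                   mat: int = 10, pfreq: int = 2) -> float:
    """One-period price return of a par bond issued at y_prev, repriced at y_now."""
    c = y_prev / pfreq                 # coupon per period on a face value of 1
    n = mat * pfreq                    # number of remaining payments
    r = y_now / pfreq                  # new per-period discount rate
    price = c * (1 - (1 + r) ** -n) / r + (1 + r) ** -n
    return price - 1.0                 # price return, ignoring accrued coupon
```

Yields unchanged means zero price return; yields up means the constant-maturity position loses, yields down means it gains.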

Choosing the IEF is a bit random. Do we have any other bonds to play with?

Yes, here is a list:

- AGG: iShares Barclays Aggregate Bond Fund, tracks the total U.S. investment-grade market
- GOVT: iShares US Treasury Bond ETF
- SHY: iShares 1 – 3 Yr US Treasury Bonds
- IEI: iShares 3 – 7 Yr US Treasury Bonds
- IEF: iShares 7 – 10 Yr US Treasury Bonds
- TLH: iShares 10 – 20 Yr US Treasury Bonds
- TLT: iShares 20 – 30 Yr US Treasury Bonds

And of course it’s always interesting to see how they’ve behaved in tandem:

A simple visual inspection should indicate that selecting combinations of these bonds might actually improve the performance results we obtained above.

In this post we moved on to the next asset class, Bonds, and asked: how do you invest in them?

These instruments are tricky, since they have a finite maturity: they mature and disappear. However, by utilizing tracking ETFs we can get a handle on them.

And we’ve also seen that adding them to a portfolio of stocks can definitely improve our overall performance. Historically the Sharpe ratio has moved up to 1.1!

That is impressive behavior.

Of course, there are several unanswered questions that immediately crop up:

- Given that interest rates are about to rise, can we still count on bonds to provide the performance they have?
- Instead of just being naively long stocks, what about including the portfolios we built earlier in the series?
- Instead of trading bond ETFs (or even Stock ETFs), can you trade futures? This is an interesting question, since ETFs incur a series of costs, which futures don’t. Also, futures allow for a greater leverage, which ETFs don’t, since they are classified as shares.

The next couple of articles will address exactly that! So until next time,

Happy Trading!

Below is the Python code used to generate the data in this article. You can run the files as standalone scripts (though you might have to ensure that you have installed the relevant modules, such as pandas). Also, the version of Python used is 3.

Python script to plot out G-10 Government Yield Curves

```python
# Script to plot government yield curves from Investing.com
# Third party modules required: bs4, lxml, matplotlib, requests
from bs4 import BeautifulSoup
import requests
import matplotlib.pyplot as plt

INVST_LINK = 'https://www.investing.com/rates-bonds/world-government-bonds'
HEADER = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ' \
         'Chrome/64.0.3282.186 Safari/537.36'


def get_inv_page():
    '''
    Obtain the parsed Investing.com page with the government bond yields
    The header is required, otherwise Investing.com will return a 404
    :return: bs4 container for html
    '''
    base_url = INVST_LINK
    header = HEADER
    pg = requests.get(base_url, headers={'user-agent': header}, verify=False)
    soup = BeautifulSoup(pg.text, 'lxml')
    return soup


def get_ctry_tbls(soup):
    '''
    :param soup: the bs4 containing the Investing.com page
    :return: returns the tables containing the bond data in html format
    '''
    cts = soup.find_all('h3')
    tbls = []
    for c in cts:
        a = c
        condition = True
        while condition:
            a = a.next_sibling
            if a.name == 'table':
                condition = False
        tbls.append([c.text.upper(), a])
    return tbls


def get_ctrys(tbls):
    '''
    :param tbls: the list of tables from Investing.com
    :return: a list of countries
    '''
    ctrys = []
    for i in tbls:
        ctrys.append(i[0])
    return ctrys


def get_tbl(t):
    '''
    :param t: an html table
    :return: a python list of lists containing strings
    '''
    data = []
    rws = t.find_all('tr')
    for rw in rws:
        cols = rw.find_all('td')
        cols = [ele.text.strip() for ele in cols]
        cols = [ele for ele in cols if ele != '']
        if len(cols) != 0:
            data.append(cols)
    return data


def get_yld_tbl(ctry: str, tbls):
    '''
    :param ctry: the country name
    :param tbls: the list of tables from Investing.com
    :return: a html table
    '''
    tbl_tag = list(filter(lambda x: x[0] == ctry.upper(), tbls))[0][1]
    tbl = get_tbl(tbl_tag)
    return tbl


def parse_time(tms):
    '''
    :param tms: relative time, e.g. 1D, 4W, 6M, 2Y
    :return: relative time as float
    '''
    tm = tms.split(' ')[-1]
    if tm == 'Overnight':
        return 1 / 365.0
    if tm[-1] == 'W':
        return float(tm[:-1]) * 7 / 365.0
    if tm[-1] == 'M':
        return float(tm[:-1]) * 30 / 365.0
    if tm[-1] == 'Y':
        return float(tm[:-1])
    print(tm)
    raise RuntimeError('Time not Parsed!')


def parse_ylds(ys):
    '''
    :param ys: a python list of lists conversion from html table with string entries
    :return: maturity and yield values
    '''
    tm = []
    y = []
    for rw in ys:
        tm.append(parse_time(rw[0]))
        y.append(float(rw[1]) / 100.0)
    return tm, y


def process_all():
    '''
    Function which retrieves Investing.com data and returns a list of yield tables
    :return: yield tables
    '''
    soup = get_inv_page()
    ts = get_ctry_tbls(soup)
    ctrys = get_ctrys(ts)
    ctry_ylds = []
    for ctry in ctrys:
        y_s = get_yld_tbl(ctry, ts)
        tm, yld = parse_ylds(y_s)
        ctry_ylds.append([tm, yld, ctry])
    return ctry_ylds


def plot_yld(ctrys):
    '''
    Plots yield tables
    :param ctrys: a list of yield tables
    '''
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    fig.hold(True)
    for c in ctrys:
        ax.plot(c[0], c[1], label=c[2], linewidth=2.0, linestyle='-',
                marker='*', markersize=10)
    vals = ax.get_yticks()
    ax.set_yticklabels(['{:3.2f}%'.format(x * 100) for x in vals])
    ax.tick_params(labelsize=20)
    plt.legend()
    plt.grid()
    plt.title('Government Bond Yield Curves', fontsize=24)
    plt.show()


if __name__ == '__main__':
    res = process_all()
    r = list(filter(
        lambda x: x[2] in ['GERMANY', 'UNITED KINGDOM', 'UNITED STATES',
                           'AUSTRALIA', 'NEW ZEALAND', 'JAPAN', 'CANADA',
                           'NORWAY', 'SWEDEN', 'SWITZERLAND'],
        res))
    plot_yld(r)
```

Python script to create a 60-40 portfolio of stocks and bonds.

```python
import math
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas_datareader import data as pdr
import fix_yahoo_finance as yf

yf.pdr_override()

# 'AGG'  # total bonds
# 'GOVT' # total treasury
# 'SHY'  # 1-3 yrs
# 'IEI'  # 3-7 yrs
# 'TLH'  # 10-20 yrs
# 'TLT'  # > 20 yrs
bondlist = ['AGG', 'SHY', 'IEI', 'IEF', 'TLH', 'TLT']


def get_symbol(sym):
    return pdr.get_data_yahoo(sym, start='1990-01-01',
                              end=dt.date.today().strftime('%Y-%m-%d'))  # , interval='d')


def resample(df, freq='W-FRI'):
    res = pd.DataFrame()
    res['Open'] = df['Open'].resample(freq).first()
    res['High'] = df['High'].resample(freq).max()
    res['Low'] = df['Low'].resample(freq).min()
    res['Close'] = df['Close'].resample(freq).last()
    res['Adj Close'] = df['Adj Close'].resample(freq).last()
    return res


def transform_monthly(spy, bond, bond_name):
    spy_r = resample(spy, freq='M')
    bond_r = resample(bond, freq='M')
    spyc = spy_r[["Adj Close"]]
    bondc = bond_r[['Adj Close']]
    spyc.rename(columns={'Adj Close': 'SPY'}, inplace=True)
    bondc.rename(columns={'Adj Close': bond_name}, inplace=True)
    res = spyc.join(bondc, how='outer')
    ix = res[bond_name].first_valid_index()
    res[bond_name] = res[bond_name] / res[bond_name][ix]
    res['SPY'] = res['SPY'] / res['SPY'][ix]
    return res


def maxdd(x: pd.Series):
    m = (x / x.expanding().max() - 1).min()
    return m


def performance_stats(x, nm):
    ret = x.mean() * 12
    vol = x.std() * math.sqrt(12)
    print(nm + ' return: ', ret)
    print(nm + ' vol: ', vol)
    print(nm + ' Sharpe Ratio: ', ret / vol)
    print(nm + ' max d/d: ', maxdd((1 + x).cumprod()))


def plot6040(bond='tlt'):
    spy = get_symbol('SPY')
    bond_d = get_symbol(bond)
    res = transform_monthly(spy, bond_d, bond)
    rets = res / res.shift(1) - 1
    portfolio = pd.DataFrame()
    portfolio_rets = (0.4 * rets[bond] + 0.6 * rets['SPY'])
    portfolio['Portfolio'] = (1 + portfolio_rets).cumprod()
    plt.hold(True)
    plt.plot(portfolio.index, portfolio['Portfolio'], label='60-40 Portfolio')
    plt.plot(res.index, res.SPY, label='SPY')
    plt.plot(res.index, res[bond], label=bond)
    performance_stats(rets[bond], bond)
    performance_stats(rets['SPY'], 'SPY')
    performance_stats(portfolio_rets, 'Portfolio')
    plt.tick_params(labelsize=20)
    plt.title('60-40 portfolio of Equities and Bonds', fontsize=24)
    plt.legend()
    plt.grid()
    plt.show()


def plot_all_bonds():
    plt.hold(True)
    for bnd in bondlist:
        b = get_symbol(bnd)
        plt.plot(b.index, b['Adj Close'], label=bnd)
    plt.title('Bond ETF Performance', fontsize=24)
    plt.tick_params(labelsize=20)
    plt.legend()
    plt.grid()
    plt.show()


if __name__ == "__main__":
    plot6040(bond='tlt')
    # plot_all_bonds()
```

The post Why Would You Want to Invest in Bonds? appeared first on FXMasterCourse.

The post Trading Mean Reversion in Currencies appeared first on FXMasterCourse.

We saw in the last article how combining two simple ideas for equities produced a stable system over the last 30 years.

Can we repeat a similar analysis for currencies?

**Yes!**

However, be warned. Currency trading is a different magnitude of difficulty to equity trading. Currency traders have had a real tough time since 2008 (take a look at the BTOP Barclay Hedge Currency Trader Index).

As always it depends which pond you fish in. Whereas equities can see slow trend grinds, or explosive surges, currencies are much more choppy. Have you ever seen a G10 currency perform a “Yahoo party like it’s 1999” dance?

In this article we’ll cover:

- Defining Mean Reversion again
- Finding the Right Pond to Fish in
- Testing Patterns for Mean Reversion
- Constructing a real simple but well performing mean-reverting portfolio
- Using the benefit of Diversification to combine it with our Equity strategy

Recall: Mean Reversion Trading means fading strong moves. Usually towards their point of origin, the **mean** of the price series.

In this article series we covered two approaches:

- Look at the 5-day moving average (one week seems magical across assets) and trade from the other side
- Look at sequences of up and down periods.

Let’s apply these two concepts to currencies as well.

And let’s start out with EURUSD.

Why EURUSD? Well, it’s considered to be one of the most ‘technical’ currencies (at least anecdotally). More tangible characteristics: it’s certainly the most liquid, has a low bid/ask spread, and for the purpose of testing our ideas, it has certainly exhibited strongly trending, range-bound, high-vol and low-vol environments. This makes it a good beast to try out at first.

Recall that the 5-day moving average approach was to trade in the direction opposite to the short-term trend: we were long if EURUSD was below its moving average and short if it was above.
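As a sketch (assuming `close` is a pandas Series of daily closes), the rule is:

```python
import pandas as pd

def ma_mean_reversion_signal(close: pd.Series, window: int = 5) -> pd.Series:
    """+1 (long) below the moving average, -1 (short) above, 0 while the MA warms up."""
    ma = close.rolling(window).mean()
    sig = pd.Series(0.0, index=close.index)
    sig[close < ma] = 1.0   # fade weakness: go long below the MA
    sig[close > ma] = -1.0  # fade strength: go short above the MA
    return sig
```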

You might ask: why 5 days? You can certainly vary this. However, I felt that since we fixed this period in the previous article, it’s a good example of how to look for universal properties, and not get bogged down in parameter searches.

The (minutely) data was obtained from ForexTester’s historical data service, which is sourced from a list of brokers. This is a good test to have. If different data sources provide very similar results, you know that you are not dealing with some spurious data quality issues.

The Sharpe ratio here is at 0.5.

So what about GBPUSD? Or USDJPY?

This is what it looks like for the two:

Not too good.

*Now this is important: Looking for the right pond to fish in!*

What do I mean by this? Forget about the majors for starters.

Would you recognize this pair without me telling you what it is?

Difficult!

It’s actually CAD/NOK. What’s interesting about this pair: almost no retail broker will show it.

Furthermore, the CAD and NOK economies are strongly linked through their oil production. So it makes sense for the currencies to be strongly related.

How strongly?

Let’s use the 5-day MA method from before:

The Sharpe Ratio alone on this is 1.16! Pretty impressive.

So here is an exercise: find as many **“off”** pairs as you can think of. Obviously you will have to construct them: CAD/NOK = USD/NOK divided by USD/CAD, where you can obtain the data from sources such as your MT4 History Center.
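The cross construction can be sketched in a few lines of pandas (the closing prices below are hypothetical placeholders; in practice you would pull USD/NOK and USD/CAD from your own data source):

```python
import pandas as pd

# Hypothetical daily closes; in practice sourced from e.g. your MT4 History Center
idx = pd.date_range("2017-01-02", periods=3)
usdnok = pd.Series([8.20, 8.25, 8.18], index=idx)
usdcad = pd.Series([1.33, 1.34, 1.32], index=idx)

# CAD/NOK = USD/NOK divided by USD/CAD (indexes align automatically)
cadnok = usdnok / usdcad
```

The same division works for any synthetic cross, as long as both legs share the same quote currency.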

We’re going to stick with the concept of 5 business days, or better said, a week (the signal over dailies is too noisy, and not much comes of it).

Similar to the equities setup we’re going to try something really naïve.

If last week’s currency move was up, go short; if it was down, go long.

At first sight this might not seem like much.

Here are the results for this approach for 28 pairs typically found on brokers (same data set as before with FXOpen as the source).

Aggregating these results we obtain:

Note that we haven’t cherry picked any of the currencies which had underperforming periods.

There might be an argument for selecting only those that clearly exhibit ‘mean-reverting’ characteristics, such as the CHF pairs.

It turns out the Sharpe Ratio for this strategy is at 0.7.

That is pretty astonishing.

And there are some key subtleties here. The most important one is that we are not trading one single currency pair.

Instead we have a currency mean-reversion index.

This is similar to the equity setup, where mean-reversion on a single stock would not have been as powerful; in aggregate, however, the signal becomes very strong for the index itself, the S&P 500.

Looking at the correlation of this strategy with our equity strategy: **-5%!!** This was driven primarily by the US debt downgrade shock in August 2011, when currencies moved the opposite way to equities.

And this is an indication that we might want to mix it up, and put these two together.

Adjusting for volatilities, we obtain:

With a Sharpe Ratio of 1.17

Now that is really not too bad!

We’ve covered mean-reversion on currencies in this article.

And as we indicated at the start, trading currencies can provide a much tougher time. However, combining them with other assets provides great diversification.

Even more importantly, none of the methods we’ve tackled so far are ‘rocket-science.’

That’s not the point of trading.

The point of trading is to find something that provides juice and systematically extract it.

We’ve covered the equity portfolio of our Consistent Trading Portfolio.

Next up will be bonds.

Bonds are much more tricky to deal with, since they are finite maturity products that pay coupons on a regular basis. Nowadays you can get useful data from bond ETFs, such as AGG, TLT, SHY, etc.

The biggest argument levelled at these ETFs is their short history, and the fact that they’ve been trading during one of the biggest bond bull markets.

Many say that now, with rising rates, we’ll see the end of their profitability, and they could even be a drag to include in a portfolio.

So, the next part of this series will look at putting together an index which we’ll calculate off available government interest rate data going back 70 years or so. And then we’ll have some fun looking at bond behavior and its contribution to our portfolio.

So, until next time,

Happy Trading.

If you have enjoyed this post follow me on Twitter and sign-up to my Newsletter for weekly updates on trading strategies and other market insights below!


The post Trading Mean Reversion in Currencies appeared first on FXMasterCourse.

The post Equities Mean Reversion appeared first on FXMasterCourse.

And as part of Building Consistently Profitable Trading Systems it forms a key component.

In this article we’ll present the final version of the mean-reversion system to form part of the trading toolbox, and the final portfolio.

It’s the Larry Connor’s RSI2 strategy. And it’s based on the two concepts we covered in Equities and Their Mean Reversion Habits:

- Looking for a series of down days
- Looking for reversion towards a short dated moving average

And as we said in previous articles, the twist we’ll add is by introducing Welles Wilder’s RSI indicator.

Recall that in Building Consistently Profitable Trading Systems the stated aim was to re-use well known systems that have worked in the past and keep on working. Re-inventing the wheel can be a waste of time.

The point of trading is to exercise discipline and really intervene when systems start to behave differently from their historic norm.

With that said let’s proceed.

From Equities and Their Mean Reversion Habits we saw that by combining two systems we obtained a solid performance. The first system required you to buy after two-down days in the SPY (the S&P 500 ETF). The second system required you to go long the SPY while you were below the 5 day moving average. Here is a P&L chart to refresh your memory:

We’ll add modifications to this system and boost the Sharpe ratio it had from 0.7 to close to 1.

The primary modification is to move away from a binary signal (two down days in a row) to a continuous signal, by utilizing Wilder’s Relative Strength Indicator (RSI).

The RSI indicator measures the strength between upward and downward moves in an asset. The logic goes that if there has been a strong move in one direction then we’re due for a correction in the opposite direction.

The actual calculation looks at the average of the upward moves versus the average of the downward moves via the relative strength (where we are looking at the sizes of the moves, ignoring their signs):

\(RS = \frac{\mathrm{Avg(up\ moves)}}{\mathrm{Avg(down\ moves)}}\)

Obviously this calculation is bounded below by zero, but unbounded above.

So Wilder created the RSI which simply caps the indicator at 100 on the upside:

\(RSI = 100 - \frac{100}{1+RS}\)

The logic goes: as you have persistent up moves, the Relative Strength (RS) becomes large, and the RSI goes to 100. If you have persistent down moves, the RS goes to zero, and the RSI goes to zero as well.
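A minimal sketch of the calculation in pandas, using simple rolling averages of the up and down move sizes (note: Wilder’s original indicator uses his own smoothing scheme, so treat this as the plain-average variant):

```python
import pandas as pd

def rsi(close: pd.Series, lookback: int = 2) -> pd.Series:
    """RSI from simple rolling averages of up/down move sizes."""
    delta = close.diff()
    up = delta.clip(lower=0).rolling(lookback).mean()
    down = (-delta.clip(upper=0)).rolling(lookback).mean()
    rs = up / down                     # unbounded above, zero-bounded below
    return 100 - 100 / (1 + rs)       # persistent ups -> 100, persistent downs -> 0
```

With a look-back of 2, two consecutive up moves pin the reading at 100 and two consecutive down moves pin it at 0, which is exactly the bunching at the extremes discussed below.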

The mean-reversion interpretation comes from the fact that we tend to go short as the RSI approaches 100, and we tend to go long as it approaches 0.

A good question to ask right now is what kind of distribution does this RSI have?

It turns out that the 2 day look-back is a special case. The majority of RSI readings in this case actually occur at the extremes: 0 and 100.

As the look-back window increases, the distribution becomes closer and closer to normal, centered around 50. For Wilder’s default value of 14, the standard deviation is 10, which explains the choice of 30 and 70 as the lower and upper bounds for the RSI. They are two standard deviations away from the mean and represent a 95% confidence interval.

Now, back to the RSI(2).

One important point about financial time series is that they do have memory, and that memory tends to be very short lived. One / two time periods tends to be a good guess as to how long memory lasts in financial time series.

Hence our choice of RSI(2) (Note: go ahead and replicate these systems for the RSI(3), you should get similar results, and it’s a great exercise!).

The RSI(2)’s particular distribution looks like:

The cut-off values 0 and 100 prevent the tails from spilling past those bounds. Hence the bunching up at the ends.

Now, let’s apply it to the S&P 500.

There is actually quite a nice way to implement the RSI, using not just thresholds but the signal itself to position size.

You re-center the RSI around 50 and take positions equal and opposite in size to the RSI:

\(\mathrm{Posn}=-\frac{RSI-50}{100}\)

where I have divided by 100 to keep the position sizes in check. The look-back window for the RSI here is 2.
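The position rule itself is a one-liner; here with some example RSI readings to show the scale:

```python
import numpy as np

rsi = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # example RSI(2) readings

# Posn = -(RSI - 50)/100: +0.5 units at RSI 0, flat at RSI 50, -0.5 units at RSI 100
posn = -(rsi - 50.0) / 100.0
```
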

Applying this to the S&P 500 gives:

It works, but it’s quite choppy.

If you apply the RSI in the usual fashion, entering long / short positions when thresholds get crossed you get a similar picture. To be clear, the approach here is to only hold the S&P 500 for the following day once the thresholds get crossed. Short for high thresholds, and long for low thresholds of the RSI.

The thresholds we use are 20 for long entries and 80 for short entries.

The motivation for choosing these thresholds is that they are one standard deviation away from the mean value of the RSI (which is 50).

Of course holding only for the next day leaves a lot of money on the table.

This is where our 5-day moving average approach comes in (and it’s also Larry Connor’s exit strategy).

The idea here is to hold the position until the S&P 500 crosses the 5-day moving average.

If you’re long you wait for price to cross the 5-day moving average from below to above it.

And vice versa for a short position.
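A minimal sketch of the long-side exit check, assuming a pandas series of daily closes (hypothetical helper, not the author’s exact code):

```python
import pandas as pd

def exit_long(close: pd.Series) -> bool:
    """Exit a long once price closes above its 5-day moving average."""
    ma5 = close.rolling(5).mean()
    return bool(close.iloc[-1] > ma5.iloc[-1])
```

The short-side exit mirrors this with the inequality flipped.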

This approach leads to:

We can now use our understanding of momentum from the previous article to add that to the mix.

We only take on long positions in the direction of the long-term momentum, and short positions in the direction of short-term momentum.

We determine the direction of momentum by looking at the 200-day moving average. If price is above it, we take long positions, and if price is below it we take short positions.

The choice of 200 days here is to be in accord with Larry Connor’s original system. Which is not too far off the 252 business days in a year, corresponding to our 12-month momentum approach.
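The trend filter can be sketched as a small helper (hypothetical function name, assuming a series of daily closes long enough to cover the window):

```python
import pandas as pd

def allowed_direction(close: pd.Series, window: int = 200) -> int:
    """+1: only longs allowed (price above the 200-day MA); -1: only shorts."""
    ma = close.rolling(window).mean()
    return 1 if close.iloc[-1] > ma.iloc[-1] else -1
```
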

The resulting strategy is:

We chose 20 and 80 as thresholds before, based on the standard deviation argument of the RSI(2) distribution.

On top of that we also saw that restricting trading in the direction of the long-term trend improved the performance significantly.

But what if we experience a divergence that continues? Of course, the usual adage is not to average down losers. However, the whole point of contrarian trading is that the further away from the mean you are, the more confident you are that price will revert to it. Therefore, the only logical conclusion is to increase position size.

The way to implement this is to layer the same strategy on top of the current one, but with thresholds set further out, i.e. at 90/10. This means that above 80 we go short one unit, and above 90 a second unit. Similarly, below 20 we go long one unit and below 10 a second unit.
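The layered scheme reduces to a simple mapping from an RSI reading to units held (a sketch; the function name is illustrative):

```python
def layered_position(rsi: float) -> int:
    """Units held under the layered 80/20 plus 90/10 threshold scheme."""
    pos = 0
    if rsi > 80:
        pos -= 1          # first short unit
    if rsi > 90:
        pos -= 1          # second short unit at the outer threshold
    if rsi < 20:
        pos += 1          # first long unit
    if rsi < 10:
        pos += 1          # second long unit at the outer threshold
    return pos
```
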

Combining these two systems smooths out our P&L even further:

The fun now is to combine our long-term momentum system with the short-term mean reversion system and see what happens.

Right off the bat the first thing to notice is that the short-term mean reversion strategy in essence increases positions in the direction of the long-term momentum when you have dips.

You are buying dips in up trends and selling rallies in down trends.

This is a well-known approach to trading. We have managed to quantify it here.

The question is exactly how to combine these strategies. A standard approach is that of taking the correlations of the various assets/systems and optimize with respect to some criterion, e.g. minimum variance, desired return etc. (i.e. classic Markowitz).

This is complicated, fraught with statistical noise, and hence continuous rebalancing of the portfolio.

A much more straightforward method is to simply state that all correlations are zero. And assign equal risk to all assets. Under the zero-correlation assumption it boils down to risk parity.

Though this method appears naive, it is a tried and tested method of many trading desks over the decades, and has proven quite robust (as you don’t have to estimate many statistical properties!).

As long as the strategies make money, it’s a great way to combine them. Especially if you have reason to believe that the strategies are uncorrelated. If they are, your performance will improve:
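Under the zero-correlation, equal-risk assumption, the weights are simply proportional to the inverse of each strategy’s volatility. A sketch:

```python
import numpy as np

def equal_risk_weights(vols):
    """Weights proportional to 1/vol so each strategy contributes equal risk
    (valid under the zero-correlation assumption)."""
    inv = 1.0 / np.asarray(vols, dtype=float)
    return inv / inv.sum()
```

For example, a 10%-vol strategy gets twice the weight of a 20%-vol strategy, so both contribute the same risk to the portfolio.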

As before, let’s look at some of the trading performance measures.

RSI & Momentum | |
---|---|
Ann Ret | 11.1% |
Ann Vol | 12.4% |
Sharpe Ratio | 1.1 |
Max DrawDown | -20% |

This is indeed a significant improvement. Our Sharpe Ratio went from 0.7 to 1.1. This alone allows a big increase on our maximum optimal leverage.

Remember, a lower drawdown and higher Sharpe ratio, lead to better leverage conditions, and ultimately to faster and greater wealth growth.

Let’s just apply half-Kelly to this strategy. In this particular case that would be five times leverage:

All we can say is, hold on to the seat of your pants!

Equities, equities, equities…

Why focus on them so much?

Because they are currently going up! Look at the charts. I always find it remarkable that people are quite willing to bang their heads against a wall, when there is an easy option staring them in the face!

Just in the last week we’ve nearly posted a 1% return on the S&P 500. And as I’ve said in the past, I’m sure a correction is somewhere round the corner. However, the markets will give you ample opportunity to move out of the way before it happens!

This was also true for 1987, using momentum measures over a variety of time horizons. The same will be true now.

This article finished the Equity bit of this series on building a consistently profitable strategy.

To recap:

- We’ve covered long-term momentum and seen it work over the last century and a half
- We’ve covered short-term mean reversion, and granted, though it only started working as of 1982, it is still going strong. No reason to abandon it.
- We saw how to combine these two to produce consistent results over the long-term

Of course, utilizing these ideas there are many more Equity assets we could explore. Such as the sector ETFs as well as international equity ETFs.

The performance will be similar within the same order of magnitude.

However, the key adage to trading is that you can’t and shouldn’t stick to one asset / system. You need to diversify.

With that in mind…

In next week’s article it’ll be back to currencies, and how they mean-revert.

The really fascinating thing about the S&P 500 index is that it exhibits both momentum and mean-reversion (over different time-scales, of course).

With currencies this isn’t necessarily so. And to get to the mean-reverting juice you have to choose the right pond to fish in. Sticking firmly to the majors and their crosses we’ll construct a portfolio that has seen some consistent behavior over the last 20 and more years.

So, until next time,

Happy Trading.


The post Equities Mean Reversion appeared first on FXMasterCourse.

The post Equities Mean Reversion and Market Regimes appeared first on FXMasterCourse.

In the previous article we combined the idea of looking at two consecutive down-days combined with buying the S&P 500 while it was below its five day moving average.

A question that stands out: why does it work so well? And, will it ever stop?

The answer to this question lies at the heart of developing good trading strategies.

Ultimately a trading strategy is a procedure that generates buy/sell signals from the information made available to it.

The underlying assumption therefore is that markets are driven by the factors we are trying to exploit.

So a trading strategy ultimately means determining the factors that drive the market, determining how strong they are and if they are persistent. And then finding a way of overcoming the market noise to get the most value out of our trades.

Some well-known factors are: momentum, mean-reversion, economic fundamentals.

You’ll probably be familiar with the fact that choppy markets are anathema to trend-following.

And of course overly trendy markets are bad for mean-reversion.

Economically motivated strategies such as macro strategies assume that markets over time converge to some economically implied value for the asset.

But how can we determine if the factors we wish to trade are actually present?

For the purpose of this article series, as in the previous articles, we’ll focus on the S&P 500 whilst answering these questions.

How can we detect market regimes? Usually the simplest methods tend to be best.

The ones we want to focus on in this article are mean-reversion and by association, momentum; one being the mirror image of the other.

The way we usually understand mean-reversion is as a tendency to move counter to the direction of the most recent moves. Usually off the back of an overextension.

Momentum on the other hand is nothing more than a continuation move.

So what would be a really naïve strategy to exploit either of these behaviours?

In the case of mean-reversion we can simply trade in the opposite direction to yesterday’s move:

\(\mathrm{Position}_{\mathrm{today}}=-\mathrm{sign}(\mathrm{Return}_{\mathrm{yesterday}})\)

In the case of momentum we flip the sign on the right, as we are trading in the direction of yesterday’s move.

Not that difficult!
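Both naive rules can be sketched in a few lines (the daily returns below are hypothetical; positions are set from yesterday’s return and applied to today’s, and returns are treated as additive, as in the charts that follow):

```python
import numpy as np
import pandas as pd

rets = pd.Series([0.01, -0.02, 0.015, -0.005, 0.01])  # hypothetical daily returns

# Mean reversion trades against yesterday's move; momentum flips the sign
mr_pos = -np.sign(rets.shift(1))
mom_pos = -mr_pos

mr_pnl = (mr_pos * rets).cumsum()    # additive P&L
mom_pnl = (mom_pos * rets).cumsum()
```

By construction the two P&L paths are mirror images of each other, which is why the text treats mean-reversion and momentum as two sides of the same coin.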

So let’s check it, as far back as we can.

__Market Regimes in the Dow Jones Industrial Index and the S&P 500__

The two data series most easily accessible and going back far enough are the Dow Jones Industrial Index going back to 1900 and the S&P 500 for which we can easily obtain data back to 1950. Both of these sources have daily closing data.

Applying the momentum strategy to both these indices we obtain (where we are treating returns as additive):

and

Both these charts exhibit some very fascinating and consistent behaviour (which is the reason we chose to focus on momentum first, rather than run the mean-reversion counterpart).

We spent most of the time in a trending market. You can see this from the incredible performance of this naïve strategy. (Note: we haven’t taken any commission / slippage into account here!)

There were times when the trendiness switched sides and the market became strongly mean-reverting (as can be seen in the loss-making periods of this naïve momentum strategy).

These periods coincided with the post-1929 era and the post-2000 era. Both experienced big crashes.

So, how does that influence our expectations about mean reversion?

To answer this question let’s apply the two systems we described in the previous article to both these indices and see how they performed.

We see that prior to 1982, the performance was actually a straight line down.

So mean-reversion beware! Even if the bonanza has lasted the last 35 years, over the real long-term it didn’t fare too well!


As we saw, the P&L on naïve strategies is quite an efficient way of establishing the regime we are in.

Is there another way?

Yes: Autocorrelation. Autocorrelation measures the tendency of a market to follow through on the previous day’s move. In excel we simply use the CORREL function where the input is the same return series, but offset by one cell from itself.
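In Python the equivalent is a rolling correlation of the return series against itself shifted by one period. A sketch with a one-year (252 business day) window:

```python
import pandas as pd

def rolling_autocorr(rets: pd.Series, window: int = 252) -> pd.Series:
    """1-day autocorrelation over a trailing window: the Python analogue of
    Excel's CORREL on the return series against itself offset by one cell."""
    return rets.rolling(window).corr(rets.shift(1))
```

Negative readings flag a mean-reverting regime, positive readings a trending one.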

Let’s apply the 1-day autocorrelation (using the past year’s data) on the S&P 500 return since 1950 and overlay a two year moving average on top of it to smooth out the oscillations:

We have been firmly in a mean-reverting regime since the early 80s, as can be seen from the autocorrelation starting to poke below zero and then firmly staying in that territory.

Many things could have changed the market dynamic; one thing that does stand out is that S&P 500 futures started trading in 1982 on the CME, though I admit this might be a tenuous relationship (for instance, no index futures were introduced shortly after the 1929 crash).

Another explanation might be that the shocks markets experience during significant crashes tend to drive a change in behaviour of market participants from trendiness to mean-reversion.

It is always relevant to understand the market regime you are in.

The regime will ultimately determine the kind of strategy you can apply.

For instance post 2013 EURUSD intraday ranges collapsed, and intraday strategies were at a big disadvantage.

In the case above we can see that mean-reversion did not work well in trending markets.

It’s therefore not just necessary to design indicators that have some form of predictive power. It’s more important to understand the underlying statistical tools you can use to obtain information about the characteristics of the market. As well as the properties of these measures.

In essence you are building a market scanner, that will be able to tell you which strategies to apply.

What’s highly interesting in the case of the equity market analysis above, is the persistence of the underlying behaviour, be it mean-reversion or momentum. We are talking decades.

A good systematic trader (aka market scientist) also works the other way. Once he thinks he has found a statistical feature that provides juice he creates a lab. In our particular case it would be great to see if the autocorrelation feature is the true driver behind our 2 down day system.

All we need to do is create a time series that has those features. In essence, a controlled version of our S&P 500 index.

Below is the Python code to do this. The parameters we are using are estimated from the S&P 500. We generate the autocorrelation by simulating an AR(1) process from gaussian random variables. Here is a result from a sample:

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Parameters estimated from the S&P 500
r = np.random.normal(0.0, 1.0, 6250)
ac1 = -0.064
ac2 = -0.00453
mu = 0.0003515
std = 0.011385

df = pd.DataFrame({'rand': r})
# AR(1) innovation; uncomment the second term for an AR(2) process
df['rand_ac'] = df['rand'] + ac1 * df['rand'].shift(1)  # + ac2 * df['rand'].shift(2)
df['ret_sim'] = mu + std * df['rand_ac']
df['px'] = (1 + df['ret_sim']).cumprod()

# Two-down-day signal and the P&L of trading it the next day
df['sig'] = (df['ret_sim'] < 0) & (df['ret_sim'].shift(1) < 0)
df['sys'] = (1 + df['sig'].shift(1) * df['ret_sim']).cumprod()

plt.plot(df.index, df['px'], label='Simulated Price')
plt.plot(df.index, df['sys'], label='Two Down Day')
plt.grid()
plt.legend()
plt.show()
```

*[Note: to keep the article from over-running I’ve cheated here. A two down day strategy really requires an AR(2) process. And indeed, if you work out the autocorrelation lagged by two days for the S&P 500, you see exactly that. Uncomment the code to run these controlled experiments. And if you’re up to it, check that the two down day strategy actually doesn’t do that well for a simple AR(1) process! Autocorrelation is more subtle than meets the eye!]*

As expected the P&L is driven by the statistical feature we baked into the model.

__Currency Markets__

In currencies, mean-reversion is slightly more subtle. Using the pattern-based approach (i.e. the 2 down day) is not necessarily the best method of extracting value. There is a lot of underlying noise in the currency pairs.

However, the 5-day moving average acts as a great smoother.

You also have to find the right pond to fish in. USDJPY is an explosive pair and even to this day quite trendy.

A great mean-reverting pair is EURSEK. Have a look at the long term chart:

And applying the 5-day moving average approach from the previous article produces following result:

This article addressed the concerns as well as the criticisms that are levelled at the types of mean-reversion strategies we have been looking at:

- They haven’t always worked
- They’re over fit

In actuality the systems are so simple that the second criticism can’t really be levelled at them. The first point, however, is valid: a long backtest shows that they didn’t always work.

But that’s the point of the saying “the only constant thing about markets is that markets will always change.”

You have to understand what drives your system to ensure that it will continue to function. If the underlying cause changes or disappears, so will your system.

Here we identified a statistical feature of the equity market that does contribute to the performance as we saw from our “lab” experiments.

Today’s article acted as a general disclaimer: these systems work; now. They didn’t always work, but, we’ve done our darndest to nail down when they might fail and how to detect if the market conditions are right.

With this setup we are ready to move on to a twist on the mean-reversion combination we’ve been covering. And then add to that the Equity momentum strategies. That’ll be portfolio composition for you!

So, until next time,

Happy Trading.

The post Equities Mean Reversion and Market Regimes appeared first on FXMasterCourse.

The post How to Implement Leverage Using Kelly Betting appeared first on FXMasterCourse.

“How do you deduce leverage from your backtests?”

“How do you actually implement leverage in your trading?”

“How do you calculate position size?”

“How often do you rebalance?”

So, to keep this series as hands on and close to reality as possible, this article focuses on the practical aspects of implementing leverage on your account (and no, the margin your Broker indicates, such as 1:400 doesn’t count).

In detail for this article:

- How do the results of the backtests translate to actual position sizes?
- How does that translate into day-to-day trading?
- What kind of money-management type is Kelly betting?
- How do you work out position sizes with your MT4 broker?

The take away message is that leverage in and of itself is a very straightforward concept.

The difficulty arises when you have to actually work out the leverage from your systems testing and then translate it into actual position sizes on your trading account.

As you saw in the previous articles, all backtests ultimately reduced to percentage returns.

In essence we had an asset, \(A_i\), and we looked at its returns over a period: \(R_i:=\frac{A_i}{A_{i-1}}-1\).

Especially for equities this simplifies the view on risk: as the SPY (recall this is the S&P 500 ETF) increases in price, its swings will be much larger in terms of points, but will remain of comparable size when looked at in terms of percentage risk.

Now, let’s say that you have *C* dollars in your account. A leverage one position would imply that you use your full cash amount to purchase *N* units of the asset at a price *P*.

Of course there will be rounding errors, since most of the time you can’t buy fractions of the assets, and \(\frac{C}{P}\) isn’t a whole number. But we’ll gloss over these details here!

With this leverage one position any percentage change in the asset will equate to an equal percentage change in your account value.

This is your base case scenario.

Now, recall that the way we looked at leverage, was as a multiplicative factor on the asset returns!

Meaning that when we set our leverage to two, this actually translated to:

\(R_i \longrightarrow 2 \times R_i\)

And of course to achieve that it means that at the start of the period we had to purchase twice as many units as we did with our original capital, i.e. we must purchase \(2 \times N\) units!

That’s pretty straightforward, no?

But the next time period, after the asset has moved, you have to keep on your toes, since we’ll be facing some market trickery!

To figure out what happens in the next time period after the market has moved, let’s work out some explicit numbers.

Let’s say that we’re going to use a leverage two position. Our asset price is initially at $100. And it moves up 10%.

So in essence to have a leverage two position, we’re going to lay out $200 of cash and purchase two units of the asset. (Where the extra money came from is a question of margin, and we’ll get to that later).

So we started out with $100, bought two units of the asset, which moved up 10%. So we made $20 of profit.

Now here is the trick question. Is your leverage still \(2\times\)?

Think about it before you move ahead.

The answer is ….

…. NO!

You now have $120 in the kitty and the value of your holdings is $220.

If you work out the maths your leverage is now \(\frac{$220}{$120}=1.83\times\).

Your leverage just decreased. This is really important.

This would not have happened had you had a leverage 1 position.

So if you want to maintain a constant leverage, which is of course what we calculated in the previous articles, there is only one thing left to do.

You gotta buy more!

How much more is easy to figure out. Your total holding has to equal twice your capital, which in this case would be $240. So you have to buy an extra $20 worth of assets. (Let’s ignore the issue of fractional asset holdings for the time being).

Now let’s look at the scenario where the asset price decreases by 10%. You again start out with $100, and you have a leverage of 2 times which means you just purchased $200 worth of the asset.

After the 10% drop in the asset price you lose $20, i.e. 20% of your capital (as expected from a leverage two position), which results in your net worth being $80. However, your holdings are now equal to $180.

So what has happened to your leverage?

It’s actually increased.

Your leverage is now \(\frac{$180}{$80}=2.25 \times\).

So it’s obvious what you have to do for the next trading period: you have to dump some of your holdings to get back to the leverage that you were targeting. In this case you have to get rid of $20 of the asset.
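The rebalancing arithmetic reduces to one line; here is a sketch reproducing the two scenarios just worked through:

```python
def rebalance_trade(equity: float, holdings: float, target_lev: float = 2.0) -> float:
    """Dollar amount of the asset to buy (+) or sell (-) to restore the target leverage."""
    return target_lev * equity - holdings

# Up 10% from $100 equity at 2x:  equity $120, holdings $220 -> buy  $20 more
# Down 10% from $100 equity at 2x: equity $80, holdings $180 -> sell $20
```
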

The above section is really important!

You see, most people think of leverage as being something constant that they don’t have to worry about once their position size is set.

That’s not quite true.

The argument goes: your leverage is variable, since it’s a multiplicative concept, however, your P&L changes are additive. This makes a whole bunch of difference.

The end-result: keeping a constant leverage (which is some fraction of your Kelly criterion as we saw in previous articles), requires you to constantly re-balance your portfolio.

But more than that: when you make money, Kelly forces you to increase your position size. When you lose money, Kelly forces you to reduce your position size.

Kelly betting in essence is an anti-martingale strategy, very much in line with what trend-followers apply in their trading. As the markets go in their favour, they pile in and hog out.

Recall Stanley Druckenmiller’s quote: “It takes courage to be a pig. It takes courage to ride a profit with huge leverage.”

Translating from capital in your MT4 (or MT5) brokerage account to lot size is now the final missing piece of the puzzle.

It’s what makes everything real.

To be clear, the calculation presented below applies to any brokerage account. And we’ll see how margin and ultimately the advertised broker leverage comes into it.

For this article we’ll use a broker who has following specifications for the S&P 500 index:

The important numbers for us are

- Contract size, which in this case is: 100

What this tells us is that for a 1 lot trade, in the parlance of the broker, a one point move in the index is equal to 100 units of the base currency, which in this case is USD.

So quick, if I want to purchase one unit of the index at a price of 2,564 (the current price), what would my lot size with this broker be?

Well, it would have to be one hundredth! Or \(\frac{\mathrm{Lot\ Size}}{\mathrm{Contract\ Size}}=\frac{1.0}{100}=0.01\), which is also the smallest unit of volume tradable for this asset.

So let’s say we have $10,000 in our account, and we would like to leverage our holding to three times. What would the lot size be?

Here it is:

\( 0.01 \times \mathrm{Lev} \times \frac{\mathrm{Capital}}{\mathrm{Price}} = 0.01 \times 3 \times \frac{10{,}000}{2564} \approx 0.12\)

And you can see as your capital changes and the price changes so will your position size.
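Putting the lot size formula into a small helper (the function name is illustrative; rounding to the nearest 0.01 lot step, as in the example above):

```python
def lot_size(capital: float, price: float, leverage: float,
             contract_size: float = 100.0, min_lot: float = 0.01) -> float:
    """Broker lot size targeting a given leverage, rounded to the minimum lot step."""
    units = leverage * capital / price   # units of the index to hold
    raw_lots = units / contract_size     # broker lots before rounding
    return round(raw_lots / min_lot) * min_lot
```

Re-running this after every change in capital or price gives the rebalancing trade sizes in broker lots.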

What will your margin requirement be for this position?

Well, this is where the 1:400 advertised margin from your broker comes in. If you have $30,000 worth of S&P 500 exposure, one four-hundredth of this is a margin requirement of $75. The $30,000 comes from using three times leverage on your $10,000 capital.

This small margin requirement is quite phenomenal.

If you look at the CME, where the actual futures are traded, the margin requirement is 3.5%, which at this moment in time for one contract is \( 2564 \times 50 \times 3.5\% \approx 4{,}500\) dollars. We have used the fact that one point on the index is worth $50 of P&L.

Now of course such a MT4 broker margin means that theoretically you could leverage yourself up to 400 times. But that’s idiotic, and not what you would do in the first place.

The whole advantage of such margins is that in essence you are trading at almost zero funding cost. There are some caveats, and you will see that in this broker example both short and long position cost you, via the swap long and short. However, there are brokers that CFD the future and not the underlying cash index. Since there the funding cost is baked into the futures contract already, holding the CFD contract incurs no funding cost and is in essence at zero cost to you.

In this article we took a step back and looked at the practicality of what it means to work out your position size given the broker specifications for the underlying contract.

The calculation for your position size is straightforward.

However, the key concept, which rarely gets mentioned, is that true Kelly betting is a statement about constant leverage, since this is the assumption underlying the optimization equation.

Hence, to truly Kelly bet you will be actively turning your portfolio over, as you constantly try to keep the portfolio leverage close to the calculated Kelly criterion.
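Mechanically, constant leverage means recomputing the target position every time capital or price moves. A toy sketch (the helper `rebalance` and all numbers are illustrative, not a prescription):

```python
def rebalance(capital, price, current_units, target_lev=3.0):
    """Units to buy (+) or sell (-) to restore the target leverage."""
    target_units = target_lev * capital / price
    return target_units - current_units

# Day 1: $10,000 at 3x leverage, index at 2,564 -> hold ~11.7 units
units = 3.0 * 10_000 / 2564
# Day 2: the index rallies to 2,600 and capital marks up with it
capital = 10_000 + units * (2600 - 2564)
trade = rebalance(capital, 2600, units)
print(round(trade, 2))  # a small top-up trade, not a fresh position
```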

If you have followed this far, you probably have the following question now:

- Given that my Capital fluctuates every day, should I rebalance every day? **Or** should I only adjust trade sizes on the new trades that I put on?

To maintain your sanity I’d advocate the latter (which I also practice).

Also, to constantly rebalance your portfolio will ultimately result in higher transaction costs, which will erode any benefit from strictly adhering to the Kelly mechanism.

What is really nice about these results is that they conceptually tie together: we’ve been taught to pile in when we’ve got something good going for us, and the maths, in terms of optimal value extraction, points to exactly that.

As Hannibal Smith from the A-Team says: “I love it when a plan comes together!”

Next time we’ll return to improving our mean-reversion approach from last time, by looking at some of the favourite indicators. The end result will be pretty impressive.

And equipped with our current knowledge of leverage and its implementation we’ll look at properly combining the various equity strategies, before moving on to bonds.

So, until next time,

Happy Trading.

Follow me on __Twitter__ and sign up to my Newsletter below for weekly updates on trading strategies and other market insights!

The post How to Implement Leverage Using Kelly Betting appeared first on FXMasterCourse.

The post Equities and Their Mean Reversion Habits appeared first on FXMasterCourse.

So, here’s the deal. We’re going to keep it simple, just like in the previous three posts, and start from the ground up. Over the following series we’ll culminate in a simple, straightforward (and well known) system that still works.

In detail for this article:

- What do we mean by mean-reversion and how can we measure it?
- What are some natural methods to trade this?
- We’ll look at the results from these naïve setups!

Be forewarned, there won’t be any rinky-dink magic marker indicator at play here. It’s actually really easy to set up. So if you think only magic grails will do it for you, you gotta move on.

Also, mean-reversion doesn’t exist only in equities, but in other asset classes as well. We’ll get to those topics in the follow-up articles. Those will also include the Python code (and spreadsheets) to go along with these examples.

Let’s start out with some basic stats.

You see, any time-series (that is a price-series in our trading parlance), has several basic properties (which are relevant to our discussion):

- Drift (aka mean)
- Swing-around (aka standard deviation)
- Bias towards negative or positive outcomes (skew)
- Dependency upon previous returns (auto-correlation)

The first three are known as the first three moments (usually expressed as \(\mathbb{E}X^{1,2,3}\)), and they tell you about the likelihood of a given day’s return.

The last one, the autocorrelation gives you some forecasting power, as it tells you on average what happens tomorrow given today’s price behaviour. The sign here is important. If the auto-correlation is negative it means that if we had an up-day, it’s more likely we’ll have a down-day next.

To figure out if we have some sort of mean-reversion going on, we’re also going to introduce time into our equation.

In particular different time frequencies. Remember Alexander Elder’s Triple Screen Trading System? It advocated looking at various time-frames to identify the asset’s behaviour.

Well, we’re going to do exactly that, focusing on daily, weekly, and monthly equity returns.

And to keep the article to the point: by equity I mean the S&P 500 market.

So here are the stats for the Daily, Weekly, and Monthly returns of the S&P 500:

| | Daily | Weekly | Monthly |
|---|---|---|---|
| Mean | 10.0% | 10.0% | 10.0% |
| StDev | 17.8% | 16.4% | 14.2% |
| Skew | 0.10 | -0.6 | -0.7 |
| Auto Corr. | -6.3% | -8.5% | 6.3% |
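For reference, here is a sketch of how stats like these could be computed with pandas (the resampling rules and the annualisation factors of 252/52/12 are my assumptions; `close` is any daily price series with a DatetimeIndex):

```python
import pandas as pd

def frequency_stats(close: pd.Series) -> pd.DataFrame:
    """Annualised mean/stdev plus skew and lag-1 autocorrelation of
    returns sampled at daily, weekly and monthly frequency."""
    out = {}
    for label, rule, periods in [("Daily", "D", 252),
                                 ("Weekly", "W", 52),
                                 ("Monthly", "M", 12)]:
        rets = close.resample(rule).last().dropna().pct_change().dropna()
        out[label] = {"Mean": rets.mean() * periods,
                      "StDev": rets.std() * periods ** 0.5,
                      "Skew": rets.skew(),
                      "AutoCorr": rets.autocorr(lag=1)}
    return pd.DataFrame(out)
```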

We see that for monthly returns the auto-correlation is positive. And this is exactly what we exploited in the previous articles.

For Weekly and Daily returns we have negative auto-correlation which indicates that returns tend to want to go the opposite way.

So over the short time-horizon we can expect some form of whip-sawing.

What is definitely important to note is that the whip-sawing as measured over the various timescales isn’t the same. The standard deviation actually becomes smaller over longer time horizons. This is an example of a Variance-Ratio test applied to a price-series.

This result indicates that we expect our price-series to mean-revert, as the shorter time horizon swings need to be constrained strongly to give a lower volatility over longer time-horizons.

Furthermore, on the dailies, we see that the skew is positive. What this means for the case we are investigating is interesting: only on a daily basis do the up-moves actually overshoot the down-moves. If we look at the max / min of the daily, weekly, and monthly returns we get:

| | Daily | Weekly | Monthly |
|---|---|---|---|
| Min Return | -10% | -20% | -17% |
| Max Return | 15% | 13% | 11% |

So the stats imply that over short time horizons we expect returns to go against the previous direction, which ultimately is the essence of mean-reversion. And we should focus on the dailies, where a positive skew and a positive drift add a double whammy after we experience dips.

This is in accordance with an important observation for our sample period: the equity markets were strongly upward trending (1993 to 2017). And as we saw previously, this is a phenomenon that goes back even further. It’s important to note, because we can ex-ante expect long positions to outperform short positions!

Given the picture we’ve painted above here are two ways that immediately come to mind if you want to trade mean-reversion:

- If the market has been going in one direction, go the other way
- Trade in the opposite direction to the market’s short term directional trend.

We’re going to implement these approaches on daily closes. And for these two cases, we’ll focus on only trading until the close of the next day.

Nothing revolutionary. However, you immediately outperform the equity market by a wide margin! Actually on a par with some of the biggest money managers and hedge funds out there.

In detail:

__Case 1:__ If the market has been going down for N straight days, buy at the close and hold for one day.

It’s pretty naïve, but we are not too concerned right now with being sophisticated.

In-line with our expectations above, going short really only helped throughout the 2008 collapse. So, we’ll scrap that side of the equation.

We’ll also focus on N = 1 for the time being. Let’s not give anybody the chance to claim I’ve been data snooping! (I have, but the results are stable).

__Case 2:__ If the market is below its 5-day moving average, buy on the close and hold for one day.

Again, we’ll drop the short side of the equation.

The reason we chose 5 days is that it fits the fact that we already see mean-reversion kick in over the span of a week: look at the compression of volatility over that period.
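The two naive setups can be sketched in a few lines of pandas (the function names are mine, not code from the original backtest; note the `shift(1)` in the P&L helper, which keeps the signal strictly out of the bar it trades):

```python
import pandas as pd

def down_day_signal(close: pd.Series, n: int = 1) -> pd.Series:
    """Case 1 entry: 1 after n straight down closes, else 0."""
    down = (close.diff() < 0).astype(int)
    return (down.rolling(n).sum() == n).astype(int)

def below_ma_signal(close: pd.Series, window: int = 5) -> pd.Series:
    """Case 2 entry: 1 when the close sits below its moving average."""
    return (close < close.rolling(window).mean()).astype(int)

def next_day_pnl(close: pd.Series, signal: pd.Series) -> pd.Series:
    """Hold from today's close to tomorrow's; shift(1) avoids look-ahead."""
    return signal.shift(1) * close.pct_change()
```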


__Case 1: Buying after a down day__

You might say, “no-way,” you’re making less!

But hang-on. Don’t you see that your draw-downs are lower, and the equity curve is smoother? Let’s measure that:

| | Down Day | S&P 500 |
|---|---|---|
| Ann. Returns | 8.7% | 10.7% |
| Ann. Volatility | 13.8% | 18.4% |
| Sharpe Ratio | 0.63 | 0.58 |
| Max D/D | -31% | -55% |

And now let’s take on a similar risk to the S&P 500 on our trading strategy.

Specifically we’ll just match the drawdown!

The result is a CAGR of 27%. Certainly a good return. You might squabble over the Sharpe Ratio not being close enough to 1, but we’ll deal with that later.

__Case 2: below the 5-day moving average__

We get a similar performance as for Case 1. Again we can list out the stats:

| | Below 5-Day MA | S&P 500 |
|---|---|---|
| Ann. Returns | 9.6% | 10.7% |
| Ann. Volatility | 14.3% | 18.4% |
| Sharpe Ratio | 0.67 | 0.58 |
| Max D/D | -33% | -55% |

And again we can scale up to get the same risk as the S&P 500 as measured in terms of its drawdown:

For Case 2 we also hit it out of the ball park with a 27% return on an annualized basis!

We can now rightly ask what happens if we combine the two concepts? That is, buy after a down day, and close out once we hit the 5 day moving average (or cross above it).

The rationale? *Mean-reversion complete*.

We buy after we drop, and we close out after we hit the weekly mean. I.e. we revert to the mean in the true and proper sense of the word.

It turns out that keeping the memory length at only one day as the decider for our buying strategy is too short: the signal gets swamped by the moving-average overlay. We therefore need to extend it to the next logical step: buy when we have experienced a two-day drop.
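The combined rule can be sketched as a small state machine (a sketch of my own, not the exact code behind the results shown here):

```python
import pandas as pd

def full_mean_reversion_positions(close: pd.Series) -> pd.Series:
    """Go long after two straight down closes; go flat again once the
    close is back at or above its 5-day moving average."""
    ma5 = close.rolling(5).mean()
    two_down = (close.diff() < 0) & (close.diff().shift(1) < 0)
    reverted = close >= ma5           # NaN warm-up compares as False
    pos, in_trade = [], False
    for enter, done in zip(two_down, reverted):
        if not in_trade and enter:
            in_trade = True
        elif in_trade and done:
            in_trade = False
        pos.append(int(in_trade))
    return pd.Series(pos, index=close.index)
```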

On an unleveraged basis we managed to improve performance. Here are the stats for this approach:

| | Full Mean-Reversion | S&P 500 |
|---|---|---|
| Ann. Returns | 9.1% | 10.7% |
| Ann. Volatility | 13.0% | 18.4% |
| Sharpe Ratio | 0.70 | 0.58 |
| Max D/D | -24% | -55% |

We have pushed the Sharpe up to 0.7. But what’s remarkable now is that our drawdown has decreased to 24%. So let’s risk this up so that we match the S&P 500 in terms of drawdowns:

We now get a 34% CAGR. Trust me, this is indeed something to write home about!


In this article we covered some of the foundational details of developing trading systems. In particular we looked at mean-reversion:

- How to measure and define it
- Two naïve approaches to trading it
- Combining the two ideas to hit the definition of reverting to the mean
- Seeing that the end result puts you in the top league of hedge fund managers

Do you still believe that trading is difficult?

Imagine combining this strategy with our momentum strategy, what would happen then…?

__See You Next Time…__

Next time we’ll take a stab at improving our mean-reversion approach by looking at some of the favourite indicators out there in combination with our rule-set above.

The end result is pretty impressive, as it extends the Sharpe Ratio we’re encountering from 0.7 to 1.0, and the resulting performance is truly astounding.

Surprisingly this particular system has been well-known for some time, and I’ve traded it for several years consistently. I’m highlighting this in case people start to moan about curve fitting and data snooping.

So, try to replicate these calculations, and see if you can repeat the performance.

Until next time,

Happy Trading.

The post Equities and Their Mean Reversion Habits appeared first on FXMasterCourse.

]]>The post Dangers of Backtesting, Over-Leverage and the Need for a Protective Stop appeared first on FXMasterCourse.

]]>A shout out to Peter who raised both these points on the previous article in the series: Trading Numbers

So, what are the two issues we’ll address in this week’s article:

- Look ahead bias, and is it really that bad?
- Kelly is always touted as optimal. Is it really that optimal?

And to all those who don’t like spoilers: (2) will be covered in more detail later in this article series, so close your eyes; but following Peter’s comments, I couldn’t help but include some detailed analysis here that I hadn’t really considered earlier.

I hear you asking, “What’s this look under the bed bias??”

Well, remember in Trading Numbers, we used momentum to set up a strategy that easily outperformed the S&P 500.

Recapping: on the last trading day of the month you looked back over the last year. If the S&P500 had finished up, you bought, otherwise you were flat. Nice and simple.

Now assume that you were a bit sloppy implementing this in your spreadsheet, and you let a Look Ahead bias creep in. Let’s see what this bias could have led us to believe:

Wow!

That’s pretty impressive.

Recall what it really looked like, however:

That’s a bummer. We overstated performance by 100%! What went wrong?

For anybody who’s implemented anything in Excel, you’ll know how easy it is to get references mixed up. In this particular case the Look Ahead bias entered by trading the 12th month of our lookback period.

I.e. rather than trading the 13th month, using the prior 12 months’ information, we traded the last month of the 12-month lookback period, even though we had already used that month to form our trading decision. It’s like assuming you have a crystal ball!
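In pandas terms the bug is a missing `shift(1)` on the signal. A toy illustration with synthetic data (the series here is made up purely to show the mechanics):

```python
import numpy as np
import pandas as pd

# Synthetic monthly closes; the momentum flag compares this month's
# close with the close 12 months earlier.
close = pd.Series(np.linspace(100, 130, 30))
rets = close.pct_change()
flag = (close > close.shift(12)).astype(int)

biased = flag * rets            # trades the month already baked into the signal
correct = flag.shift(1) * rets  # decide at month t, trade over month t+1
```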

The really insidious thing here is how 12-month momentum and 1-month momentum are so strongly correlated! You would have assumed that it wouldn’t make that much difference!

Now obviously a situation like this can only arise when you backtest. Unfortunately, you only go live after you have a good backtesting result, and so it’s not surprising that, given a Look Ahead bias makes backtests look so nice, this error keeps on rearing its head.

Either check your tests, or forward walk your system to see that you are indeed testing realistic rules.

So how did we fall prey to the Look Ahead bias in the previous article, and what is our saving grace.

As was pointed out, we risk adjusted our 12-month momentum strategy. Here it is again:

And to do that, we had to measure the risk of the S&P 500 and that of the Momentum strategy and scale up the Momentum strategy appropriately.

And if you recall from the previous article, we did that by measuring the standard deviation of both strategies.

But HANG ON! How can we do that if in 1994, we have no clue how these strategies will pan out over the next 23 years?!

That’s where we got clonked.

The scaling factor turned out to be 1.3x, based upon these “Forward Looking” risk measures (which were nothing more than the standard deviations of the P&L streams of either strategy).

A possible solution was to use an expanding window for our risk measure. Meaning at each time we measure the risk from that point in time all the way back to 1994 (the start of our simulated trading).

This actually ends up giving us a much lower risk multiplier, and hence a much lower performance for the risk adjusted momentum strategy.
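An expanding-window version of the risk multiplier might look like this sketch (the function name and the 24-month warm-up are my assumptions):

```python
import pandas as pd

def expanding_risk_multiplier(spy_rets: pd.Series, mom_rets: pd.Series,
                              min_periods: int = 24) -> pd.Series:
    """Point-in-time scaling factor: at each step, use only the history
    available up to the *previous* period, never full-sample stdevs."""
    spy_vol = spy_rets.expanding(min_periods).std()
    mom_vol = mom_rets.expanding(min_periods).std()
    return (spy_vol / mom_vol).shift(1)   # known before the period starts
```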

Indeed, an issue.

There is, however, a saving grace for us in all this, and it actually gives us even more confidence in the momentum strategy.

Our saving grace is that the stock markets didn’t just magically appear in 1994!

If you recall from the first article in this series: Building Profitable Trading Systems, we had data (albeit synthetic) going all the way back to 1871.

So, let’s try this. Let’s estimate the relative risk from 1871 until 1994 for both the S&P 500 as well as its momentum filtered counterpart.

It so magically turns out that the relationship of 1.3x is stable! The risks from 1871 until 1994 were nearly identical to those for the period 1994 to 2017!

Now that is remarkable indeed.

It means two things:

- Our initial analysis still stands
- Going forward we can rest assured that we don’t have to fiddle too much with our risk multiplier, since it has been stable at the same value (at least to O(0.1) ) for the last 150 years.

What else does this do for us?

It underscores the stability of our approach. Going back to gaining confidence from statistical measures, it implies that structurally markets have stayed very similar over modern times. And this gives us confidence to proceed with this strategy.

This leads nicely to the second point that was raised.

What is the optimal leverage I should use? It turns out that simply risk-adjusting the momentum strategy does leave a lot of money on the table.

Accepted wisdom recommends using a leverage factor equal to the Kelly criterion.

This Kelly factor is usually evaluated using statistical properties of the return series. In particular:

\( \lambda = \frac{\mu}{\sigma^2} \)

Where \(\mu\) and \(\sigma\) are the average rate of return and the standard deviation of our momentum strategy.

There are some caveats here. Naively plugging in our mean and standard deviation estimates using this formula gives an optimal leverage factor of 8.8.
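As a quick sanity check of that figure, here are illustrative monthly numbers plugged into the formula (the exact \(\mu\) and \(\sigma\) below are assumptions, chosen only to land near 8.8):

```python
# Back-of-envelope Kelly under the normal assumption: lambda = mu / sigma^2
mu = 0.011       # assumed mean monthly return of the momentum strategy
sigma = 0.0354   # assumed monthly standard deviation
kelly = mu / sigma ** 2
print(round(kelly, 1))  # -> 8.8
```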

This leads to:

which is a cataclysmic result.

And furthermore, how on earth is this possible?

The simple answer: Over-leverage.

Let’s define Over-Leverage: the naïve assumption that we have all available, all possible, information at our disposal, and we have the over-powering desire to ride the ragged edge of disaster.

Obviously, this is a fallacious assumption and attitude. (You’ll see towards the end how this leads to the notion of a Stop Loss).

So how could we rectify our example above?

Let’s wind the clock back a bit, and work out Kelly from first principles.

Principles:

- The returns of our asset / trading strategy are normally distributed.
- We can safely ignore any risk terms coming from higher-order moments of the normal distribution

The way these assumptions feed into Kelly is visible from the structure of the formula: it only includes the mean and the standard deviation of the return distribution.

So, this leads us to the conclusion that maybe our Momentum Strategy isn’t as normal as we might assume, and might have riskier higher order moments than a normal distribution.

Let’s check this.

Kurtosis, \(\kappa=1.8\), and skewness, \(\gamma_1=-0.25\). It’s got fat tails, which isn’t surprising for a momentum strategy, and interestingly enough a negative skew. Now, this is interesting because on the one hand you’d expect positive skew for momentum strategies (viz. the MAN AHL article: Positive Point of Skew), however, for stocks the skewness of the strategy tends to be negative (cf. this great article Skewness Enhancement), indicating that you can fall off a cliff.

So how do we deal with this?

Let’s go back to the derivation of Kelly.

If our wealth process evolves like \({\displaystyle V=V_0\prod_i{(1+\lambda X_i)}}\), where \(\lambda\) is our leverage, and \(X_i\) is the return over a time period, we can find the optimal leverage, the Kelly leverage, by optimizing the expected growth rate with regards to our leverage factor.

Writing out the expected growth rate:

\({\displaystyle g(\lambda)=\mathbb{E}\left(\log\frac{V}{V_0}\right)=\sum_i \mathbb{E} \log(1+\lambda X_i) }\)

and taking into account that our returns are independent and identically distributed (iid) over all time-steps, we can Taylor expand \(g(\lambda)\) as:

\(g(\lambda) = \lambda \mathbb{E} X – \frac{1}{2}\lambda^2 \mathbb{E}X^2 + \frac{1}{3}\lambda^3 \mathbb{E}X^3 – \frac{1}{4} \lambda^4 \mathbb{E} X^4\)

The optimization means taking the derivative of \(g(\lambda)\) with respect to \(\lambda\), and setting it equal to zero:

\(g'(\lambda)=\lambda^3\mathbb{E}X^4 – \lambda^2\mathbb{E}X^3 + \lambda\mathbb{E}X^2 – \mathbb{E}X = 0\)

In the case where we assume kurtosis and skew to be zero, the Kelly-leverage \(\lambda\) ends up being our usual suspect, \(\mathbb{E}X/\mathbb{E}X^2\approx\mu/\sigma^2\).

However, we obtain a cubic equation if we include the higher order moments.

Thank goodness the Italians found solutions for this back in the 16th century (del Ferro, Tartaglia, Cardano, and Bombelli).

Working out the four moments for our case of the momentum strategy and using the algebraic solution for a cubic, we obtain an optimal leverage factor of 7.35.
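One way to sanity-check the cubic solution is to brute-force the Taylor-expanded growth rate on a grid and compare the maximiser with the algebraic root (the raw moments below are illustrative stand-ins, not the strategy's actual moments):

```python
import numpy as np

def positive_real_root(a, b, c, d):
    """Positive real root of a*x^3 + b*x^2 + c*x + d = 0."""
    roots = np.roots([a, b, c, d])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[real > 0].min()

# Illustrative raw moments E[X^k] of a monthly return series (assumed)
m1, m2, m3, m4 = 0.011, 1.30e-3, -2.0e-5, 8.0e-6

# g'(lambda) = m1 - lambda*m2 + lambda^2*m3 - lambda^3*m4 = 0
lam_cubic = positive_real_root(-m4, m3, -m2, m1)

# Brute force: maximise the truncated growth rate g(lambda) on a grid
lam = np.linspace(0.1, 20, 20001)
g = lam * m1 - 0.5 * lam**2 * m2 + lam**3 * m3 / 3 - 0.25 * lam**4 * m4
lam_grid = lam[g.argmax()]

assert abs(lam_cubic - lam_grid) < 0.01  # the two approaches agree
```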

Now this looks good, it’s lower than before, and it seems to have noticed our fat tails and negative skew. But is it good enough??

NO!

We still fall off a cliff. And I don’t even have to look at a chart.

Do you want to know why?

Because in 1998 August, the Russians went boom, and the stock market took a big nose dive.

Our momentum portfolio lost 14% that month. And 14% x 7.35 is bigger than 100%, meaning with this leverage we would have lost it all.

What went wrong with our super-duper maths?

Simple, that event was a massive outlier. The next biggest loss for the momentum strategy is at 7%. This means that the kurtosis is even bigger than we estimated from our sample.

So what solutions do we have?

There are two possibilities in our case:

- We cheat! Meaning that since our strategy derives from the underlying market, why not use the skew and the kurtosis of the S&P 500? This would allow us to benefit from the increased returns and lower volatility of the momentum strategy, but still take into account the potential for big losses from the S&P500!
- We follow the madding crowd and use half-Kelly instead.

For option (1) we obtain a leverage of 6.0, which is not too far from the optimal leverage at 6.32 (if you numerically fiddle with the numbers). For option (2) we’d obtain 4.4.

What would the difference in performance be? Option (1) yields an 11,800x return, and option (2) a return of 3,900x. I would say that’s ballpark similar!

Forgetting for the time being the ludicrously high numbers (which you shouldn’t take at face value), the real question arises: even if we used the S&P 500’s fatter tails, we haven’t really guarded against a cataclysmic event in the future: we simply don’t have a crystal ball!

Coming back full circle, this is where ultimately the protective stop comes in.

The message is as always, the old one: you can’t forecast the future, and hence you have to put a stop in to protect yourself.

However, here is where it gets more refined.

The stop I’m talking about is one that protects your capital from *disaster.* I’m not talking trailing stops, or stops set at some arbitrary point, where a trade signal has been negated. The cows haven’t come home yet on the subject of where the ideal trading stop is.

However, with respect to disaster recovery, it’s clear you need them. The position depends on your risk appetite. There are people out there who are quite happy to stomach a 90% loss. Do you belong in that camp?

In this article we covered some foundational details of developing trading systems. In particular, the dangers of backtesting and over-leverage:

- The Look Ahead bias can creep into your analysis in the most devious of ways. Always be on the look-out, especially if your equity curve is too much of a straight line in your tests
- Over-leverage is a killer when you least need it: in the worst-case scenario. It’s the reason places like LTCM, Amaranth, Peloton had to book the losses they did (as well as Howie Hubler at Morgan Stanley, and with him the rest of the US housing market). So, decide a level at which you will get out, just for the sake of keeping alive (as long as it’s not at -100%!)

In addition, we had a numbers excursion that was quite fun, and that has led to some better ways of calculating the usual leverage ratios touted out there.

If you want to incorporate these, here is the extension of the previous Python code:

```python
import math

import matplotlib.pyplot as plt
import numpy as np
from pandas_datareader import data as pdr
import fix_yahoo_finance as yf

yf.pdr_override()


def cubic(a, b, c, d):
    # Real root of a*x^3 + b*x^2 + c*x + d = 0 (Cardano's formula,
    # assuming a single real root)
    d0 = b ** 2 - 3 * a * c
    d1 = 2 * (b ** 3) - 9 * a * b * c + 27 * (a ** 2) * d
    C = ((d1 + math.sqrt(d1 ** 2 - 4 * (d0 ** 3))) / 2) ** (1 / 3)
    return -1 / (3 * a) * (b + C + d0 / C)


def optimal_leverage(spy_ret, mom_ret):
    # Raw moments E[X^k] of the S&P 500 and momentum return series
    a1 = (spy_ret ** 4).mean()
    b1 = (spy_ret ** 3).mean()
    a2 = (mom_ret ** 4).mean()
    b2 = (mom_ret ** 3).mean()
    c = (mom_ret ** 2).mean()
    d = (mom_ret ** 1).mean()

    kelly = d / c                        # classic two-moment Kelly
    lev_mom = cubic(-a2, b2, -c, d)      # four-moment Kelly, own tails
    lev_mom_spy = cubic(-a1, b1, -c, d)  # four-moment Kelly, SPY tails
    return kelly, lev_mom, lev_mom_spy


if __name__ == "__main__":
    data = pdr.get_data_yahoo('SPY', start='1990-01-01', end='2017-10-02',
                              interval="1mo")
    c = data[["Adj Close"]].copy()
    c["spy"] = c["Adj Close"] / c["Adj Close"].iloc[0]
    c["rets"] = c["spy"] / c["spy"].shift(1) - 1
    c["flag"] = np.where(c["Adj Close"] > c["Adj Close"].shift(12), 1, 0)
    c["mom_ret"] = c["flag"].shift(1) * c["rets"]

    kelly, lev_mom, lev_mom_spy = optimal_leverage(c["rets"], c["mom_ret"])
    print("Kelly: {}, Optimal leverage for momentum strategy: {}, "
          "Optimal leverage for momentum strategy using SPY "
          "kurtosis and skew: {}".format(kelly, lev_mom, lev_mom_spy))

    std_spy = c["rets"].std()
    std_mom = c["mom_ret"].std()
    fac = std_spy / std_mom

    c["mom"] = (1 + c["mom_ret"]).cumprod()
    c["mom_lev"] = (1 + c["mom_ret"] * fac).cumprod()

    plt.plot(c.index, c["spy"], label="spy")
    plt.plot(c.index, c["mom"], label="spy 12m - mom")
    plt.plot(c.index, c["mom_lev"], label="spy 12m lev")
    plt.grid()
    plt.title("SPY vs SPY 12 Month Momentum vs SPY Momentum Leveraged",
              fontdict={'fontsize': 24, 'fontweight': 'bold'})
    plt.legend(prop={'size': 16})
    plt.show()
```

Next time round, we’ll be continuing with a technical variant of mean-reversion for equities which has proven profitable over the last 25 years, and which doesn’t let up. (Double promise! No detour foreseen). In particular, we’ll look at various ways of understanding market regimes.

I’ll also utilize our analysis on Kelly betting to give you a taster on what portfolio construction entails, and how to go about extracting as much value as you can out of your very own simple momentum / mean-reversion equity portfolio.

So, until next time,

Happy Trading.

The post Dangers of Backtesting, Over-Leverage and the Need for a Protective Stop appeared first on FXMasterCourse.
