Homebrew Correlation Indicator

The CBOE index of implied correlation made the news recently, with a high of 72.  This index, JCJ, is not as well-known as the one for implied volatility, but the two are linked. Low correlation suppresses volatility, due to the diversification effect.  Math geeks may refer to the algebra in my earlier post on this topic.

High correlation is newsworthy when it suggests an indiscriminate selloff.  In a panic, as they say, all correlations go to unity.  Intuitively, you would expect a healthy bull market to show a middling amount of correlation as leadership rotates among generally rising issues.  I wrote a Python script to help me look at this.

My correlation indicator is the average of the pairwise correlations among the nine “sector spider” funds, over a 20-day window.  Twenty days is what I would normally use with the Stockcharts CORR() function, to compare SPY with, say, Treasury yields or commodities.  The script allows me to experiment with different parameters.
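Here is a minimal sketch of the calculation, not my original script: it assumes a pandas DataFrame of daily closes with one column per sector SPDR, and scales the result to 0-100 so it reads like JCJ.

from itertools import combinations
import pandas as pd

SECTORS = ["XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY"]

def avg_pairwise_correlation(closes, window=20):
    """Average rolling correlation of daily returns across all 36 sector pairs."""
    returns = closes[SECTORS].pct_change()
    pair_corrs = [returns[a].rolling(window).corr(returns[b])
                  for a, b in combinations(SECTORS, 2)]
    return pd.concat(pair_corrs, axis=1).mean(axis=1) * 100  # scaled 0-100, like JCJ

# closes = pd.read_csv("sector_closes.csv", index_col=0, parse_dates=True)
# indicator = avg_pairwise_correlation(closes).rolling(5).mean()  # with the 5-day smoothing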

[Chart: the 20-day correlation indicator vs. SPY]

In the chart, I have also smoothed both SPY and the indicator with a five-day moving average.  This is just to make the long series easier to interpret.  The gray bars highlight periods when the correlation indicator is below 60.

In contrast to high correlation, low correlation suggests a period when the market is “looking for leadership.”  It appears reliably around local tops.  You see plenty of this in 2008, obviously, but also note the two very precise warnings in 2010.  The indicator tends to swing from these low values up to 80 or more, as price rolls over the tops.  Note also the steady bull market that begins in 2012, with correlation contained between 50 and 80.

Finally, note that the CBOE calculates their measures of implied volatility and correlation based on option prices.  Mine is a trailing indicator, based on historical correlation.  It is to JCJ as STDEV() is to the VIX.


Winter Solstice

The winter solstice, just passed, is the shortest day of the year. It marks not the depth of winter, but the onset. The seasons lag the sunshine, as shown in the chart below. The coldest days come five or six weeks later.

[Chart: daily temperature vs. hours of daylight]

To make this chart, I downloaded twenty years of temperature data from the University of Dayton, and fit a sine curve to them. Then, I computed the hours of daylight throughout the year, at 40 degrees north latitude. The latter curve is not empirical; it’s straight-up geometry.
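For the curious, here is a rough sketch of that geometry, using the standard sunrise equation and a textbook approximation of the solar declination. It is not the code behind my chart, just the idea.

import numpy as np

def day_length_hours(day_of_year, latitude_deg=40.0):
    """Approximate hours of daylight for each day of the year."""
    # solar declination: about -23.44 degrees at the December solstice
    decl = np.radians(-23.44) * np.cos(2 * np.pi * (day_of_year + 10) / 365.25)
    lat = np.radians(latitude_deg)
    # sunrise equation: cos(hour angle) = -tan(latitude) * tan(declination)
    cos_hour_angle = np.clip(-np.tan(lat) * np.tan(decl), -1.0, 1.0)
    return 2 * np.degrees(np.arccos(cos_hour_angle)) / 15.0  # the sun covers 15 degrees per hour

days = np.arange(1, 366)
hours = day_length_hours(days)  # about 15 hours at the June solstice, 9 at the December solstice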

What does all this have to do with the stock market? The relationship between these two sine curves reminded me of this one, below. It’s the sector rotation model from stockcharts.com. The stock market is a leading indicator of the business cycle.

[Chart: sector rotation model, stock market vs. business cycle]

The market is about as noisy as my temperature data, but the basic idea is that its peaks and valleys lead those in the real economy, just as day length leads the seasons. After this week in the market, traders might be feeling a chill.


TrueCar Gap Play

Occasionally, my day job intersects with my passion for the stock market. Below is the Finviz chart of TrueCar. This company has great potential, and some challenges. I believe the selloff on last quarter’s miss was overdone. Even the downgraded price target is $9, and the short interest at 8.5 is ripe for a squeeze.

The chart shows price having made a double bottom, and now breaking out of the downtrend channel. It has been in a bullish flag formation for the past two weeks, and looks primed for a gap fill back toward $10.

[Chart: TrueCar (TRUE), from Finviz]

The play, of course, is long from $7 to $10. Ideally, you would wait until price enters the gap, and then exit around $9.80 for a 40% gain. Unfortunately, the next earnings report is November 5, which could slap the stock right back into the down channel – or up above the gap.

Because of earnings, this trade must be done using options. As of this writing, the December 18 $7.50 calls are trading for $0.30. This makes our gap play more of an earnings play, so don’t try it unless – like me – you have some confidence in the fundamentals. The call profile is below:

[Chart: profit profile of the call]

It’s a thin market, so we want to be “near” the money and prepared to exercise. Risk is $0.30 per share and, if the gap fills by expiration, reward is $2.00.
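To sanity-check the numbers, here is the payoff arithmetic as a tiny sketch; the strike and premium are from the quote above, and $9.80 is the gap-fill exit.

STRIKE, PREMIUM = 7.50, 0.30

def call_pnl_at_expiry(price, strike=STRIKE, premium=PREMIUM):
    """Per-share profit of a long call held to expiration."""
    return max(price - strike, 0.0) - premium

print(call_pnl_at_expiry(9.80))  # 2.00 reward if the gap fills
print(call_pnl_at_expiry(7.00))  # -0.30, the maximum risk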


Chartism Demystified

Technical analysis of chart patterns is so popular today, it’s hard to remember the scorn heaped on it by my professors at business school. This condescending remark is from Benoit Mandelbrot. Yes, the same Mandelbrot who discovered fractal scaling.

The newspapers’ financial pages are filled with … self-styled “chartists,” who plot the past suitably and proclaim they can predict the future from those charts’ geometry.

Technical analysis does not claim to predict the future, but it does allow us to make conditional statements and asymmetrical bets. A conditional statement is, “once price breaks $52, it will probably run to $55.” On the other hand, if it’s a false breakout, we’ll know by $51 and exit with a small loss.

If you can be right 50% of the time, with a 3 to 1 payoff, then you are a successful trader. Likewise, if you can find 1 to 1 bets with greater than 50% accuracy. These bets are asymmetrical because they have an expected value greater than zero. Books by John Carter and Marcel Link are full of them.

($3 win × 50%) − ($1 loss × 50%) = $1 expected profit per trade

Technical analysis can find these situations using patterns based on market psychology. That may sound like voodoo magic. A better expression might be, attention to where other traders are positioned, and what their strategies are likely to be. For an example, consider the venerable “line of resistance.”

Below is a chart of AmerisourceBergen (ABC), on which I have placed a horizontal line at $77.50. This line held, as resistance, for three months. After two attempts to break the line in late July, and then a discouraging August, bulls begin to suspect $77.50 is the best price they’re ever going to get. On the next attempt, in September, they’re ready to sell.

[Chart: ABC with resistance at $77.50]

Throughout September and into October, investors who want the stock may buy it on dips, but they know not to pay more than $77.50. Investors who want out, having bought below $72, are happy with this price. Traders can rely on shorting the stock here, with a stop loss at $78 and a price target of $76. A consensus has formed, with “selling pressure” holding the line firm.

This pretty much explains why resistance lines are real (and not voodoo). The line may slope upward, as buyers gradually become willing to pay more for the stock. There are also support lines, which rely on the same psychology.

While we’re at it, let’s look at what happens when price finally breaks through resistance. Sellers are happy, obviously, but who’s buying? The bears. All the traders who were short from $77.50, plus some poor fools who went short after October 12, must now buy to cover, taking their losses.

As October 27 opens, no one who owns the stock has ever paid more than $77.50. The area just above $77.50 is a wilderness of stop loss orders from the shorts, and stop entry orders from the breakout traders. Price will move rapidly through such an area, which is why the latter group is waiting here. Price jumps to $79 and then pauses for the earnings report due October 29.

I like to think of resistance as standing out from the chart, like a ridge on a topographic map. As price moves toward resistance or support, it travels “up” this third dimension. Price generally rolls back the way it came. Once over the ridge, however, it will roll onward rapidly.

[Chart: ABC after the breakout]

This is why Jesse Livermore speaks of pivotal points generally, whether they are above or below the current price. Price may also flow in a channel, like a valley on the chart. I think this topographical metaphor is a good one, and I hope I demystified something, too.


Streamlining Tax Rates with Z-Statistics

We ask a lot of our federal income tax system. Its stated purpose is to fund the government, but we also expect it to do something about income inequality. The system redistributes income, implicitly or explicitly, through tax preferences and credits, tax-funded benefit programs, and a progressive rate scale.

Throughout the system, its basic revenue-collection function is intertwined with various redistribution features. Policy makers are thus unable to address revenue issues independent of fairness issues. Simpson and Bowles estimate that over $1 trillion of revenue is lost annually to tax preferences, and not all of them are progressive.

Fortunately, there is a simple way to generate a tax scale which separates redistribution from revenue collection. This article presents the technique, which is policy neutral with regard to redistribution, and also explores some of the political implications.

The z-transform is a technique commonly used in statistics to change the parameters of a probability distribution. The same technique can also be used to change the parameters of an income distribution. Consider the form of the z-statistic:

z = (x − μ) / σ

Revenue is collected by subtracting from the population’s mean income μ. Redistribution is accomplished by reducing the standard deviation σ. Ideally, policy makers would eliminate all tax preferences, and leave redistribution entirely to the formula. The diagram below illustrates how reducing the standard deviation “squeezes” more people into the middle-income range.

[Figure 1: the “squeeze” from a smaller standard deviation]

For an example, consider the income distribution shown below. This example is based on CPS income survey data for 114 million households earning less than $200,000 per year. This population has a mean income of $57,500 with a standard deviation of $43,400. From these households, we will collect tax revenue of $430 billion. This has the effect of reducing the mean from $57,500 to $53,700, taking an average of $3,800 from each household, with the burden distributed according to z-score.

[Figure 2: pre-tax income distribution, CPS data]

Having established our after-tax mean income level, we decide on $32,100 as the new standard deviation. Economists agree that, while high levels of income inequality can be damaging, the free enterprise system requires some degree of disparity. The z-score preserves relative disparity while reducing the absolute amounts. Someone earning at the 1.95 level, roughly $142,000 in this example, would still be at the 1.95 level after taxes. That is,

z = (x − μ) / σ = (x′ − μ′) / σ′

where x is an individual income figure, and the prime symbol (′) denotes “after tax.”

We convert each income level to a z-score using the pretax mean and standard deviation, then alter these two parameters as above, and convert the z‑scores back to income levels using the new parameters:

x′ = μ′ + σ′ (x − μ) / σ

Below is a graph of the income distribution, after tax.

[Figure 3: after-tax income distribution]

Observe that the after-tax distribution has a narrower range, and no one in the “under $5,000” category. Those 4,000 households each received a $12,000 tax credit. The top category, households approaching $200,000, each paid a tax of $40,100, or 20%. This 26% reduction in standard deviation is the least disruptive relative to our current tax scale, which favors taxpayers in the $50,000 to $100,000 range. A taxpayer earning $10 million would have a z‑score of 229 and pay a 26% tax.

Once the parameters are set, the z-transform can be used to generate tax tables, as shown below, or (better) a few lines on the tax form:

  1. Subtract $57,500 from your gross income. The result may be negative.
  2. Divide this number by $43,400. This is your z-score.
  3. Multiply your z-score by $32,100, and then add $53,700. This is your net income.
  4. Subtract your net income from your gross income. This is your tax due.

For example, someone earning the population mean of $57,500 would have a z-score of zero and pay the average tax of $3,800.
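A minimal sketch of those four steps in Python, using the CPS parameters from this example; small differences from the dollar amounts quoted above come from rounding the published parameters.

MU, SIGMA = 57_500, 43_400           # pre-tax mean and standard deviation
MU_NEW, SIGMA_NEW = 53_700, 32_100   # after-tax parameters chosen above

def tax_due(gross):
    """Tax (negative = credit) from the four-step z-transform schedule."""
    z = (gross - MU) / SIGMA        # steps 1 and 2
    net = z * SIGMA_NEW + MU_NEW    # step 3
    return gross - net              # step 4

print(tax_due(57_500))  # 3800.0, the average tax at z = 0
print(tax_due(0))       # about -11,200, the credit that sets the national minimum income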

[Table 1: tax table generated by the z-transform, CPS example]

We chose the CPS dataset for this example because its $5,000 increments provide a better illustration than the broad income bands used by the IRS. Indeed, the prospect of a continuously variable rate scale is one of the z-transform’s advantages.

For a second example, we use the IRS data. This is the complete population of 140 million tax returns. It has a mean income of $55,700, similar to the CPS data, but a standard deviation of $252,500. The distribution is skewed by a small number of households with income above $5 million. This “long tail” of high incomes is a power law distribution, with exponent α = 1.7. For the sake of clarity, we do not show here the z-transform technique applied to log-scaled income data.

From a policy perspective, the high standard deviation limits our ability to raise taxes on this group without also impacting a large number of households in the middle bands. We suspect that natural skew in the data is exaggerated by the broad IRS reporting bands.

In this example, we will again reduce the (higher) standard deviation by 26%. We collect $870 billion by setting the mean after-tax income to $49,500. The structure of the IRS data obscures the shape of the distribution, so we present the results in tabular form.

[Table 2: tax table generated by the z-transform, IRS example]

Recognizing that our tax system is a vehicle for income redistribution as well as revenue collection, this approach separates the two functions. It uses a simple formula which allows policy makers to set goals for both functions explicitly and independently. It has the added benefit of being continuously variable over the range of incomes. This eliminates well-known problems associated with rate brackets.

We close our exposition of the z-transform approach with a few political results. An individual tax payment (or credit) is given by:

t = x − x′ = Δμ + r

where Δμ is the individual amount required to fund the federal government. It has no subscript because it is the same for every taxpayer. It is determined in advance, and it can be printed on the tax form:

Δμ = μ − μ′

In our example from the 2009 IRS data, Δμ is $6,200. This is what each taxpayer contributes toward funding the government. The rest, r, is paid (or received) as redistribution. Even taxpayers below the mean, and receiving a credit, will see the amount by which Δμ reduces their credit. Redistribution will not excuse inefficiency.

Total tax revenue is given by:

R = ∑ t = N Δμ

Everyone is better off when μ increases year over year, or Δμ decreases. Note that ∑ r = 0.  Solving for x′ where x = 0 gives:

x′ = μ′ − σ′ μ / σ

This is the national minimum income. In the CPS example above, it is roughly $12,000. Our current minimum income is roughly $45,000, although it is never stated as such. The z-transform method states it unambiguously, and rewards any incremental earnings. It avoids the patchwork of benefit programs identified by Alexander, and the “welfare cliff.”

The scale of redistribution is given by the change in standard deviation between the pre- and post-tax distributions. It is natural to state this as a percentage change:

Δσ = (σ − σ′) / σ

In both examples, above, we “reduce inequality by 26%.” This is how politicians can state their intention to be more or less progressive. Taxpayers with negative z-scores (those receiving tax credits) will tend to favor a higher Δσ, while those above zero will favor a lower one. These self-interested tendencies come into balance when the median income, which evenly splits the voting population, is also the mean.


190 Weeks without a Correction

It seems like pullbacks have been really small lately, so I whipped up a new indicator called Percent off Peak, and took a look at the history of corrections. The indicator simply ratchets itself up as new peaks form, and then calculates each bar’s percentage off (at close). Here is the chart, below. Note that I am using a log scale, so that moves of a given percentage are the same size up and down the scale.
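Before the chart, a quick sketch of the ratchet in pandas; this is my reading of the logic, not the indicator code itself, and it assumes a series of weekly closing prices.

import pandas as pd

def percent_off_peak(close):
    """Percent below the highest close seen so far (0 = at a new peak)."""
    running_peak = close.cummax()  # ratchets up as new peaks form
    return 100.0 * (running_peak - close) / running_peak

# weekly = pd.read_csv("spx_weekly.csv", index_col=0, parse_dates=True)["Close"]
# off_peak = percent_off_peak(weekly)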

[Chart: SPX weekly with Percent off Peak]

Sure enough, the SPX hasn’t had a 10% pullback since August 5, 2011. Next, I wrote an indicator to count the bars between pullbacks of a given size (this is variable, but I stick with 10%). The tricky thing about counting pullbacks is choosing the start date. You can’t see a 10% drop from an intermediate peak, if it’s still in the shadow of a higher peak.

To get around this, I reset my bar counter after each pullback of the given size. Finally, I code a dummy buy signal based on this routine, so that the strategy evaluator will automatically collect the dates. Here’s how that looks, around the top of the dotcom bubble. This was just a rounded top on the SPX, by the way – the “crash” was on the NASDAQ.
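Before the chart, here is the counter sketched in Python – my reading of the routine, not the original code. Resetting the peak along with the counter keeps later peaks out of the shadow of earlier, higher ones.

def weeks_between_pullbacks(close, threshold=10.0):
    """Bar counts between successive drawdowns of at least `threshold` percent."""
    counts, bars_since, peak = [], 0, None
    for price in close:
        peak = price if peak is None else max(peak, price)
        bars_since += 1
        if 100.0 * (peak - price) / peak >= threshold:
            counts.append(bars_since)    # record the run length at the correction...
            bars_since, peak = 0, price  # ...then reset both the counter and the peak
    return counts

# runs = weeks_between_pullbacks(weekly)
# len(runs), sum(runs) / len(runs)  # number of corrections and the mean run length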

[Chart: the dummy buy signal around the 2000 top]

I ran the strategy from February 1975 to the present, collecting 40 years’ worth of corrections. They’re surprisingly rare, only 34 in the period. The histogram is below. It looks like an exponential distribution, which is what you’d expect for interarrival times (the count of corrections per period would be Poisson). Call it the market’s “mean time to failure.”

[Histogram: weeks between 10% corrections]

Continuing in this vein, we find that the mean (which, for an exponential distribution, is also the standard deviation) is 61. Yes, that’s why I used a bin size of 30. So, the first four histogram bars contain 29 observations within one SD of the mean. Here are the five outliers:

[Table: the five longest runs between corrections]

Note that the second and third observations are linked. That was the only correction in a ten-year bull market. Maybe we are in another one. I’d feel more confident if we had some fundamentals to support it, like “spending the peace dividend.”

I did not try to draw any inferences from these five (they’re outliers, after all), but I did notice one thing. When a bull run goes more than three years without a correction, whatever ends it gets a catchy name.


Momentum Based Balancing

Part two in a series

A few months ago, I presented a novel method for balancing a widely diversified portfolio. By “widely diversified,” I mean all asset classes, foreign and domestic. This is to reduce inter-item correlation. Here are the balancing rules (a Python sketch follows the list):

  • Look at last month’s return for each of the nine ETFs
  • Drop the bottom two of ten (including cash)
  • Add a constant to the remaining eight, so that the lowest score is set to 0.0%
  • Allocate funds to these issues, pro rata by their adjusted score
  • Repeat monthly
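Here is a sketch of those rules in Python. It assumes a pandas Series of last month’s returns, indexed by ticker and including a cash entry; that framing is mine, not the original implementation.

import pandas as pd

def monthly_weights(last_month, n_drop=2):
    """Weights from last month's returns: drop the worst, shift, and allocate pro rata."""
    keep = last_month.sort_values(ascending=False).iloc[:-n_drop]  # drop the bottom two of ten
    adjusted = keep - keep.min()            # add a constant so the lowest kept score is 0.0%
    if adjusted.sum() == 0:                 # degenerate case: all kept returns are equal
        return pd.Series(1.0 / len(keep), index=keep.index)
    return adjusted / adjusted.sum()        # pro rata by adjusted score; weights sum to 1

# weights = monthly_weights(monthly_returns.iloc[-1])  # rebalance on the latest month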

These rules produced pretty good results in backtesting over a ten year period. The method, and the portfolio, are described in the earlier post.

[Table: backtest results for the balancing models]

You are probably thinking that last month’s return is a poor predictor for next month, and that’s certainly true for individual stocks. Here, though, we have a diverse group for which relative rank is predictive. For each month, I ranked the ten issues (including cash) from 1 to 10, with 10 being the lowest return. The table below shows the following month’s return for each rank.

[Table: average next-month return by prior-month rank]

For this group, rank does indeed predict next month’s return, with an R² of 0.35. I also tested the hypothesis that the top five ranks’ average next-month return of 0.7% is greater than the bottom five’s 0.2%, and it is (with 96% confidence). You can check this yourself by downloading monthly data series for the given ETFs.
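Here is one way to check it with pandas and scipy; I am assuming the R² is fit on the per-rank averages, as in the table above.

import pandas as pd
from scipy import stats

def next_month_return_by_rank(monthly_returns):
    """Average next-month return for each prior-month rank (1 = best, 10 = worst)."""
    ranks = monthly_returns.rank(axis=1, ascending=False)
    paired = pd.DataFrame({
        "rank": ranks.shift(1).stack(),       # last month's rank...
        "next_ret": monthly_returns.stack(),  # ...paired with this month's return
    }).dropna()
    return paired.groupby("rank")["next_ret"].mean()

# by_rank = next_month_return_by_rank(monthly_returns)
# fit = stats.linregress(by_rank.index, by_rank.values)
# print(fit.rvalue ** 2)  # R-squared of the rank-vs-return fit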

Rank based on a three-month lookback is an even stronger predictor, with an R² of 0.81 (chart below). Running model #3 with the longer lookback increases its CAGR to 9.9% and its Sharpe to 0.76.

I keep saying “for this group,” because the method depends on low inter-item correlation. The theory is that the portfolio will cover the gamut of asset classes, with institutional flows to the leading classes persisting for several months. I selected the portfolio based on this theory, which is quantified by an average inter-item correlation of 35%.

It’s not really a rotation model. If you are rotating, say, sector funds, then you are still concentrated in U.S. equities.

The nine sector funds, shown above, are a strongly correlated group with no negatively correlated pairs. The minimum correlation coefficient is 20% (between XLK and XLU) and the inter-item mean is 55%. For this group, rank has no predictive value. The R² is around 0.001. I’ll show results for some additional low correlation portfolios in a later post.


QE Bulls Are Back

If you follow the market every day, as I do, you can observe two distinct groups of bullish investors.  This is hard to depict on a chart, but I am going to try.  One group responds normally to economic news, selling when the news is bad.  The other group buys on bad news, anticipating monetary stimulus from the Fed.

One way to spot this behavior is to discount market movements according to changes in the dollar.  The S&P 500 has rebounded in February, while UUP has fallen steadily.  Investors move to GLD as an inflation hedge, and that chart shows the same thing.

[Chart: S&P 500 vs. the dollar (UUP) and gold (GLD)]

Another way is to compare the market to some gauge of economic health.  Below, we see that the ECRI weekly leading index peaked in early January.  This is a little fuzzier than seeing the market react day by day, but you get the idea.  Bulls in today’s market are buying QE.

[Chart: ECRI weekly leading index]

For several weeks around Christmas, the “recovery” meme held sway.  The market correction on Jan. 23 was caused by this group losing faith.  The bounce that began on Feb. 4 was part of a normal corrective pattern that could be attributed to either group.  Following Janet Yellen’s testimony on Feb. 11, however, the QE bulls are back in charge.

The new Chairperson said she would continue the former policy of winding down QE, but the market did not believe her.  This is probably because of her shift from unemployment statistics to direct targeting of inflation.  Observers interpreted this as an opening for slower tapering, if not an outright reversal.


Levy Dispersion Update

I watch $OEXA200R on Stockcharts, which says that 74 of the S&P 100 stocks are trading above their respective 200-day moving averages.  It is generally considered a bad idea to go long when this figure is below 65, or in a downtrend.  The full method is here, thanks to John Carlucci.

I also compute a related statistic, which says that 60% of the S&P 500 are trading above their SMA(131).  Being broader, smoothed, and using a shorter average, this statistic may react faster than $OEXA200R.  The calculation is here.
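In pandas terms, the calculation is roughly this; it assumes a DataFrame of daily closes with one column per S&P 500 ticker, which is my framing rather than the linked original.

import pandas as pd

def levy_ratios(closes, window=131):
    """Each stock's latest close divided by its 131-day simple moving average."""
    sma = closes.rolling(window).mean()
    return (closes.iloc[-1] / sma.iloc[-1]).dropna()

# ratios = levy_ratios(sp500_closes)
# pct_above = 100.0 * (ratios > 1.0).mean()  # the "percent above SMA(131)" statistic
# ratios.hist(bins=40)                       # the distribution: mode, mean, median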

[Chart: distribution of price-to-SMA(131) ratios for the S&P 500]

The advantage of doing it this way is that you can see the central tendency of the 500 stocks.  The vertical axis counts individual stocks, out of the 500, not a percentage.

This month, the mode is bang on 1.0, which is weaker than last month.  The mean has fallen from 1.06 to 1.02.  As we proceed into the correction, I would expect the median to fall below 1.0, meaning that half are trading below the moving average.


Correlation with Bond Yields

Hat tip to Gökhan Kula for sending this along.  For the few years I have been trading, bond yields have been strongly correlated with stocks.  Stocks go up when bonds go down, so much so that I regard SPY and TBT as the same play.

I have the following chart in my collection on stockcharts.com, just in case the correlation should someday break.  That seems to be happening now, with post-QE yields headed toward 3%.

[Chart: $TNX correlation, saved on stockcharts.com]

My chart, below, recapitulates the analysis of J. P. Morgan.  It shows the rolling two-year correlation of weekly S&P 500 returns with weekly changes in the 10-year Treasury yield, plotted against the Treasury yield.
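Here is a rough pandas sketch of that calculation, assuming aligned weekly series for the S&P 500 close and the 10-year yield ($TNX).

import pandas as pd

WINDOW = 104  # two years of weekly bars

def correlation_vs_yield(spx, tnx, window=WINDOW):
    """Rolling correlation of weekly SPX returns with weekly TNX changes, paired with the yield."""
    corr = spx.pct_change().rolling(window).corr(tnx.diff())
    return pd.DataFrame({"yield": tnx, "correlation": corr}).dropna()

# df = correlation_vs_yield(spx_weekly, tnx_weekly)
# df.plot.scatter(x="yield", y="correlation")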

[Chart: rolling two-year SPX/TNX correlation plotted against the 10-year yield]

This is the correlation of correlation, so to speak, with bond yields.  You can see that the strong correlation, to which I am accustomed, becomes weaker and then negative as yield rises through 5%.  Of the 2,700 or so datapoints, only 415 have TNX < 4.0, so this low yield is an historical anomaly.  Here is the timeline (note the log scale).

[Chart: 10-year Treasury yield history, log scale]

Since 2008, the correlation has seldom been below 0.4, and often as high as 0.7.  Here’s what that has looked like in terms of weekly returns.

[Chart: weekly SPX returns vs. weekly TNX changes, recent period]

For comparison, here is that ancient period when stocks and yields were negatively correlated – or, to put it another way, stocks and bonds rose and fell together.  Don’t ask me what asset class funds flowed to when both were down.  Cash or real estate, I suppose.

[Chart: weekly SPX returns vs. weekly TNX changes, earlier high-yield period]
