Chartism Demystified

Technical analysis of chart patterns is so popular today that it’s hard to remember the scorn my professors heaped on it at business school. The condescending remark below is from Benoit Mandelbrot. Yes, the same Mandelbrot who discovered fractal scaling.

The newspapers’ financial pages are filled with … self-styled “chartists,” who plot the past suitably and proclaim they can predict the future from those charts’ geometry.

Technical analysis does not claim to predict the future, but it does allow us to make conditional statements and asymmetrical bets. A conditional statement is, “once price breaks $52, it will probably run to $55.” On the other hand, if it’s a false breakout, we’ll know by $51 and exit with a small loss.

If you can be right 50% of the time, with a 3 to 1 payoff, then you are a successful trader. Likewise, if you can find 1 to 1 bets with greater than 50% accuracy. These bets are asymmetrical because they have an expected value greater than zero. Books by John Carter and Marcel Link are full of them.

($3 win × 50%) − ($1 loss × 50%) = +$1 per average trade
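That arithmetic is easy to check. Here is a minimal sketch, with the payoff and win rate as parameters you would estimate for your own setups:

```python
def expected_value(win_amount: float, loss_amount: float, win_rate: float) -> float:
    """Expected profit per trade for a simple two-outcome bet."""
    return win_amount * win_rate - loss_amount * (1 - win_rate)

# The 3-to-1 example from the text: risk $1 to make $3, right half the time.
print(expected_value(3.0, 1.0, 0.50))   # 1.0, i.e. +$1 per average trade
```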

Technical analysis can find these situations using patterns based on market psychology. That may sound like voodoo magic. A better description might be: attention to where other traders are positioned, and what their strategies are likely to be. As an example, consider the venerable “line of resistance.”

Below is a chart of Amerisource Bergen, on which I have placed a horizontal line at $77.50. This line held, as resistance, for three months. After two attempts to break the line in late July, and then a discouraging August, bulls begin to suspect $77.50 is the best price they’re ever going to get. On the next attempt, in September, they’re ready to sell.

[ABC Chart]

Throughout September and into October, investors who want the stock may buy it on dips, but they know not to pay more than $77.50. Investors who want out, having bought below $72, are happy with this price. Traders can rely on shorting the stock here, with a stop loss at $78 and a price target of $76. A consensus has formed, with “selling pressure” holding the line firm.

This pretty much explains why resistance lines are real (and not voodoo). The line may slope upward, as buyers gradually become willing to pay more for the stock. There are also support lines, which rely on the same psychology.

While we’re at it, let’s look at what happens when price finally breaks through resistance. Sellers are happy, obviously, but who’s buying? The bears. All the traders who were short from $77.50, plus some poor fools who went short after October 12, must now buy to cover, locking in their losses.

As October 27 opens, no one who owns the stock has ever paid more than $77.50. The range just above $77.50 is a wilderness of stop loss orders from the shorts, and stop entry orders from the breakout traders. Price will move rapidly through such an area, which is why the latter group is waiting here. Price jumps to $79 and then pauses for the earnings report due October 29.

I like to think of resistance as standing out from the chart, like a ridge on a topographic map. As price moves toward resistance or support, it travels “up” this third dimension. Price generally rolls back the way it came. Once over the ridge, however, it will roll onward rapidly.

[ABC Chart3]

This is why Jesse Livermore speaks of pivotal points generally, whether they are above or below the current price. Price may also flow in a channel, like a valley on the chart. I think this topographical metaphor is a good one, and I hope I demystified something, too.


Streamlining Tax Rates with Z-Statistics

We ask a lot of our federal income tax system. Its stated purpose is to fund the government, but we also expect it to do something about income inequality. The system redistributes income, implicitly or explicitly, through tax preferences and credits, tax-funded benefit programs, and a progressive rate scale.

Throughout the system, its basic revenue-collection function is intertwined with various redistribution features. Policy makers are thus unable to address revenue issues independent of fairness issues. Simpson and Bowles estimate that over $1 trillion of revenue is lost annually to tax preferences, and not all of them are progressive.

Fortunately, there is a simple way to generate a tax scale which separates redistribution from revenue collection. This article presents the technique, which is policy neutral with regard to redistribution, and also explores some of the political implications.

The z-transform is a technique commonly used in statistics to change the parameters of a probability distribution. The same technique can also be used to change the parameters of an income distribution. Consider the form of the z-statistic:

z = (x − μ) / σ

Revenue is collected by subtracting from the population’s mean income μ. Redistribution is accomplished by reducing the standard deviation σ. Ideally, policy makers would eliminate all tax preferences, and leave redistribution entirely to the formula. The diagram below illustrates how reducing the standard deviation “squeezes” more people into the middle-income range.

[Figure 1]

For an example, consider the income distribution shown below. This example is based on CPS income survey data for 114 million households earning less than $200,000 per year. This population has a mean income of $57,500 with a standard deviation of $43,400. From these households, we will collect tax revenue of $430 billion. This has the effect of reducing the mean from $57,500 to $53,700, an average of $3,800 from each of the households, with the burden distributed according to z-score.

[Figure 2]

Having established our after-tax mean income level, we decide on $32,100 as the new standard deviation. Economists agree that, while high levels of income inequality can be damaging, the free enterprise system requires some degree of disparity. The z-score preserves relative disparity while reducing the absolute amounts. Someone earning at the 1.95 level, roughly $142,000 in this example, would still be at the 1.95 level after taxes. That is,

(x − μ) / σ = (x′ − μ′) / σ′

where x is an individual income figure, and the prime symbol (′) denotes “after tax.”

We convert each income level to a z-score using the pretax mean and standard deviation, then alter these two parameters as above, and convert the z‑scores back to income levels using the new parameters:

x′ = μ′ + σ′ (x − μ) / σ

Below is a graph of the income distribution, after tax.

[Figure 3]

Observe that the after tax distribution has a narrower range, and no one in the “under $5,000” category. Those 4,000 households each received a $12,000 tax credit. The top category, households approaching $200,000, each paid a tax of $40,100 or 20%. This 26% reduction in standard deviation is the least disruptive relative to our current tax scale, which favors taxpayers in the $50,000 to $100,000 range. A taxpayer earning $10 million would have a z-score of 229 and pay a 26% tax.

Once the parameters are set, the z-transform can be used to generate tax tables, as shown below, or (better) a few lines on the tax form:

  1. Subtract $57,500 from your gross income. The result may be negative.
  2. Divide this number by $43,400. This is your z-score.
  3. Multiply your z-score by $32,100, and then add $53,700. This is your net income.
  4. Subtract your net income from your gross income. This is your tax due.

For example, someone earning the population mean of $57,500 would have a z-score of zero and pay the average tax of $3,800.
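Those four steps are easy to verify in code. Below is a minimal Python sketch using the parameter values quoted above for the CPS example; a negative result is a credit.

```python
# Parameters from the CPS example above.
PRETAX_MEAN = 57_500    # μ, pretax mean household income
PRETAX_SD = 43_400      # σ, pretax standard deviation
POSTTAX_MEAN = 53_700   # μ′, set to raise $430 billion from 114 million households
POSTTAX_SD = 32_100     # σ′, set to reduce inequality by roughly 26%

def tax_due(gross_income: float) -> float:
    """Steps 1-4 from the tax form: z-score, rescale, take the difference."""
    z = (gross_income - PRETAX_MEAN) / PRETAX_SD      # steps 1 and 2
    net_income = z * POSTTAX_SD + POSTTAX_MEAN        # step 3
    return gross_income - net_income                  # step 4 (negative means a credit)

print(tax_due(57_500))    # 3,800: the mean household pays the average tax
print(tax_due(142_000))   # the z ≈ 1.95 earner from the example above
print(tax_due(0))         # about -11,200: a credit, roughly the national minimum income
```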

[Table 1]

We chose the CPS dataset for this example because its $5,000 increments provide a better illustration than the broad income bands used by the IRS. Indeed, the prospect of a continuously variable rate scale is one of the z-transform’s advantages.

For a second example, we use the IRS data. This is the complete population of 140 million tax returns. It has a mean income of $55,700, similar to the CPS data, but a standard deviation of $252,500. The distribution is skewed by a small number of households with income above $5 million. This “long tail” of high incomes is a power law distribution, with exponent α = 1.7. For the sake of clarity, we do not show here the z-transform technique applied to log-scaled income data.

From a policy perspective, the high standard deviation limits our ability to raise taxes on this group without also impacting a large number of households in the middle bands. We suspect that natural skew in the data is exaggerated by the broad IRS reporting bands.

In this example, we will again reduce the (higher) standard deviation by 26%. We collect $870 billion by setting the mean after-tax income to $49,500. The structure of the IRS data obscures the shape of the distribution, so we present the results in tabular form.

[Table 2]

Recognizing that our tax system is a vehicle for income redistribution as well as revenue collection, this approach separates the two functions. It uses a simple formula which allows policy makers to set goals for both functions explicitly and independently. It has the added benefit of being continuously variable over the range of incomes. This eliminates well-known problems associated with rate brackets.

We close our exposition of the z-transform approach with a few political results. An individual tax payment (or credit) is given by:

tᵢ = xᵢ − xᵢ′ = Δµ + rᵢ,  with rᵢ = (1 − σ′/σ)(xᵢ − µ)

where Δµ is the individual amount required to fund the federal government. It has no subscript because it is the same for every taxpayer. It is determined in advance, and it can be printed on the tax form:

Δµ = µ − µ′

In our example from the 2009 IRS data, Δµ is $6,200. This is what each taxpayer contributes toward funding the government. The rest, rᵢ, is paid (or received) as redistribution. Even taxpayers below the mean, and receiving a credit, will see the amount by which Δµ reduces their credit. Redistribution will not excuse inefficiency.

Total tax revenue is given by:

T = ∑t = N Δµ + ∑r

Everyone is better off when µ increases year over year, or Δµ decreases.  Note that ∑r = 0.  Solving for x′ where x = 0 gives:

x′ = µ′ − (σ′/σ) µ

This is the national minimum income. In the CPS example above, it is roughly $12,000. Our current minimum income is roughly $45,000, although it is never stated as such. The z-transform method states it unambiguously, and rewards any incremental earnings. It avoids the patchwork of benefit programs identified by Alexander, and the “welfare cliff.”

The scale of redistribution is given by the change in standard deviation between the pre- and post-tax distributions. It is natural to state this as a percentage change:

Δσ = (σ − σ′) / σ

In both examples, above, we “reduce inequality by 26%.” This is how politicians can state their intention to be more or less progressive. Taxpayers with negative z-scores (those receiving tax credits) will tend to favor a higher Δσ, while those above zero will favor a lower one. These self-interested tendencies come into balance when the median income, which evenly splits the voting population, is also the mean.


190 Weeks without a Correction

It seems like pullbacks have been really small lately, so I whipped up a new indicator called Percent off Peak, and took a look at the history of corrections. The indicator simply ratchets itself up as new peaks form, and then calculates each bar’s percentage off (at close). Here is the chart, below. Note that I am using a log scale, so that moves of a given percentage are the same size up and down the scale.
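The calculation itself is just a running maximum. Before the chart, here is a minimal pandas sketch of the idea; weekly bars and a `close` column are assumptions, and my actual indicator runs in a charting package.

```python
import pandas as pd

def percent_off_peak(close: pd.Series) -> pd.Series:
    """Ratchet a running peak upward and report each bar's close as a percentage below it."""
    peak = close.cummax()                 # ratchets up as new peaks form, never down
    return (close / peak - 1.0) * 100.0   # 0 at a new high, negative otherwise

# Usage, assuming `spx` is a DataFrame of weekly SPX bars:
# spx["pct_off_peak"] = percent_off_peak(spx["close"])
```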

[Peaks_Weekly]

Sure enough, the SPX hasn’t had a 10% pullback since August 5, 2011. Next, I wrote an indicator to count the bars between pullbacks of a given size (this is variable, but I stick with 10%). The tricky thing about counting pullbacks is choosing the start date. You can’t see a 10% drop from an intermediate peak, if it’s still in the shadow of a higher peak.

To get around this, I reset my bar counter after each pullback of the given size. Finally, I code a dummy buy signal based on this routine, so that the strategy evaluator will automatically collect the dates. Here’s how that looks, around the top of the dotcom bubble. This was just a rounded top on the SPX, by the way – the “crash” was on the NASDAQ.
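Before the chart, here is the reset logic sketched in Python (the actual routine lives in my charting package; weekly closes assumed). Resetting the peak along with the counter is what keeps a new correction out of the old peak’s shadow.

```python
import pandas as pd

def bars_between_corrections(close: pd.Series, threshold: float = 10.0) -> list[int]:
    """Count the bars between successive peak-to-close pullbacks of `threshold` percent."""
    counts, bars, peak = [], 0, close.iloc[0]
    for price in close:
        bars += 1
        peak = max(peak, price)                        # ratchet up the running peak
        if (price / peak - 1.0) * 100.0 <= -threshold:
            counts.append(bars)                        # a correction completed on this bar
            bars, peak = 0, price                      # reset the counter and the peak
    return counts

# Usage: lengths = bars_between_corrections(weekly_spx["close"])
```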

[Peaks_Signal]

I ran the strategy from February 1975 to the present, collecting 40 years’ worth of corrections. They’re surprisingly rare, only 34 in the period. The histogram is below. It looks like an exponential distribution, which is what you’d expect for the interarrival times of a Poisson process. Call it the market’s “mean time to failure.”

[Peaks_Histo]

Continuing in the Poisson vein, we find that the mean interval (which, for an exponential distribution, is also the standard deviation) is 61 weeks. Yes, that’s why I used a bin size of 30. So, the first four histogram bars contain 29 observations within one SD of the mean. Here are the five outliers:

[Peaks_Table]

Note that the second and third observations are linked. That was the only correction in a ten-year bull market. Maybe we are in another one. I’d feel more confident if we had some fundamentals to support it, like “spending the peace dividend.”

I did not try to draw any inferences from these five (they’re outliers, after all), but I did notice one thing. When a bull run goes more than three years without a correction, whatever ends it gets a catchy name.


Momentum Based Balancing

Part two in a series

A few months ago, I presented a novel method for balancing a widely diversified portfolio. By “widely diversified,” I mean all asset classes, foreign and domestic. This is to reduce inter-item correlation. Here are the balancing rules, with a rough code sketch after the list:

  • Look at last month’s return for each of the nine ETFs
  • Drop the bottom two of ten (including cash)
  • Add a constant to the remaining eight, so that the lowest score is set to 0.0%
  • Allocate funds to these issues, pro rata by their adjusted score
  • Repeat monthly
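Here is a minimal sketch of those rules in Python. The mechanics are the point; the return figures below are made up, and cash always scores 0.0%.

```python
def momentum_weights(last_month: dict[str, float]) -> dict[str, float]:
    """Drop the bottom two of ten, shift so the worst remaining scores 0.0%,
    then allocate pro rata by adjusted score."""
    ranked = sorted(last_month.items(), key=lambda kv: kv[1], reverse=True)
    keep = dict(ranked[:-2])                           # drop the bottom two (cash included)
    floor = min(keep.values())
    adjusted = {sym: ret - floor for sym, ret in keep.items()}
    total = sum(adjusted.values())
    if total == 0:                                     # degenerate month: all kept returns equal
        return {sym: 1.0 / len(keep) for sym in keep}
    return {sym: adj / total for sym, adj in adjusted.items()}

# Hypothetical month for the nine ETFs plus cash:
returns = {"SPY": 0.021, "EFA": 0.013, "IEF": -0.004, "TIP": 0.001, "EEM": 0.030,
           "IWM": 0.018, "XLB": 0.025, "IYR": -0.012, "TLT": -0.008, "CASH": 0.0}
print(momentum_weights(returns))   # IYR and TLT are dropped; IEF keeps a 0% weight
```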

These rules produced pretty good results in backtesting over a ten year period. The method, and the portfolio, are described in the earlier post.

[Table1]

You are probably thinking that last month’s return is a poor predictor for next month, and that’s certainly true for individual stocks. Here, though, we have a diverse group for which relative rank is predictive. For each month, I ranked the ten issues (including cash) from 1 to 10, with 10 being the lowest return. The table below shows the following month’s return for each rank.


For this group, rank does indeed predict next month’s return, with an R² of 0.35. I also tested the hypothesis that the top five ranks’ average next-month return of 0.7% is greater than the bottom five’s 0.2%, and it is (with 96% confidence). You can check this yourself by downloading monthly data series for the given ETFs.
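One way to run that check is sketched below, assuming a DataFrame of monthly returns with one column per issue (cash included). It averages the next-month return within each rank, as in the table, and regresses that average against rank.

```python
import pandas as pd
from scipy import stats

def rank_r_squared(monthly: pd.DataFrame, lookback: int = 1) -> float:
    """R² of rank (1 = best trailing return) versus average next-month return."""
    trailing = monthly.rolling(lookback).sum()             # trailing return, summed for simplicity
    ranks = trailing.rank(axis=1, ascending=False)         # 1 = best, 10 = worst
    pairs = pd.DataFrame({"rank": ranks.shift(1).stack(),  # last month's rank...
                          "next_ret": monthly.stack()}).dropna()  # ...vs this month's return
    by_rank = pairs.groupby("rank")["next_ret"].mean()     # average next-month return per rank
    fit = stats.linregress(by_rank.index.to_numpy(), by_rank.to_numpy())
    return fit.rvalue ** 2

# Usage, with `monthly` holding one column of monthly returns per issue:
# print(rank_r_squared(monthly, lookback=1))   # the post reports about 0.35
# print(rank_r_squared(monthly, lookback=3))   # and about 0.81 with a three-month lookback
```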

Rank based on a three month lookback is an even stronger predictor, with an R² of 0.81 (chart below). Running model #3 with the longer lookback increases its CAGR to 9.9% and its Sharpe to 0.76.

I keep saying “for this group,” because the method depends on low inter-item correlation. The theory is that the portfolio will cover the gamut of asset classes, with institutional flows to the leading classes persisting for several months. I selected the portfolio based on this theory; its diversification is quantified by an average inter-item correlation of 35%.

It’s not really a rotation model. If you are rotating, say, sector funds, then you are still concentrated in U.S. equities.

The nine sector funds, shown above, are a strongly correlated group with no negatively correlated pairs. The minimum correlation coefficient is 20% (between XLK and XLU) and the inter-item mean is 55%. For this group, rank has no predictive value; the R² is around 0.001. I’ll show results for some additional low-correlation portfolios in a later post.


QE Bulls Are Back

If you follow the market every day, as I do, you can observe two distinct groups of bullish investors.  This is hard to depict on a chart, but I am going to try.  One group responds normally to economic news, selling when the news is bad.  The other group buys on bad news, anticipating monetary stimulus from the Fed.

One way to spot this behavior is to discount market movements according to changes in the dollar.  The S&P 500 has rebounded in February, while UUP has fallen steadily.  Investors move to GLD as an inflation hedge, and that chart shows the same thing.

[Dollar]

Another way is to compare the market to some gauge of economic health.  Below, we see that the ECRI weekly leading index peaked in early January.  This is a little fuzzier than seeing the market react day by day, but you get the idea.  Bulls in today’s market are buying QE.

[ECRI]

For several weeks around Christmas, the “recovery” meme held sway.  The market correction on Jan. 23 was caused by this group losing faith.  The bounce that began on Feb. 4 was part of a normal corrective pattern that could be attributed to either group.  Following Janet Yellen’s testimony on Feb. 11, however, the QE bulls are back in charge.

The new Chairperson said she would continue the former policy of winding down QE, but the market did not believe her.  This is probably because of her shift from unemployment statistics to direct targeting of inflation.  Observers interpreted this as an opening for slower tapering, if not an outright reversal.


Levy Dispersion Update

I watch $OEXA200R on Stockcharts, which says that 74 of the S&P 100 are trading above their respective 200 day moving averages.  It is generally considered a bad idea to go long when this figure is below 65, or in a downtrend.  The full method is here, thanks to John Carlucci.

I also compute a related statistic, which says that 60% of the S&P 500 are trading above their SMA(131).  Being broader, smoothed, and using a shorter average, this statistic may react faster than $OEXA200R.  The calculation is here.
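A rough sketch of that calculation in Python, assuming you have daily closes for the index constituents in one DataFrame (one column per ticker):

```python
import pandas as pd

def levy_ratios(closes: pd.DataFrame, window: int = 131) -> pd.Series:
    """Latest close divided by its SMA(window), one ratio per stock.
    Values above 1.0 mean the stock is trading above its moving average."""
    sma = closes.rolling(window).mean()
    return (closes.iloc[-1] / sma.iloc[-1]).dropna()

# Usage, with `closes` holding one column of daily closes per S&P 500 member:
# ratios = levy_ratios(closes)
# print((ratios > 1.0).mean() * 100)      # percent above the SMA(131), about 60% this month
# print(ratios.mean(), ratios.median())   # central tendency of the 500
```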

[Levy Feb]

The advantage of doing it this way is that you can see the central tendency of the 500 stocks.  The vertical axis counts individual stocks, out of the 500, not a percentage.

This month, the mode is bang on 1.0, which is weaker than last month.  The mean has fallen from 1.06 to 1.02.  As we proceed into the correction, I would expect the median to fall below 1.0, meaning that half are trading below the moving average.


Correlation with Bond Yields

Hat tip to Gökhan Kula for sending this along.  For the few years I have been trading, bond yields have been strongly correlated with stocks.  Stocks go up when bonds go down, so much so that I regard SPY and TBT as the same play.

I have the following chart in my collection, just in case the correlation should someday break.  That seems to be happening now, with post-QE yields headed toward 3%.

[TNX Correlation]

My chart, below, recapitulates the analysis of J. P. Morgan.  It shows the rolling two year correlation of weekly S&P 500 returns with weekly changes in the 10 year Treasury yield, plotted against the Treasury yield.
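Here is roughly how that rolling series can be computed, assuming aligned weekly series for the SPX close and the ^TNX yield:

```python
import pandas as pd

def spx_tnx_rolling_corr(spx_close: pd.Series, tnx_yield: pd.Series,
                         weeks: int = 104) -> pd.DataFrame:
    """Rolling two-year (104-week) correlation of weekly SPX returns with
    weekly changes in the 10-year yield, paired with the yield level."""
    spx_ret = spx_close.pct_change()      # weekly S&P 500 returns
    yld_chg = tnx_yield.diff()            # weekly change in the 10-year yield
    corr = spx_ret.rolling(weeks).corr(yld_chg)
    return pd.DataFrame({"corr": corr, "yield": tnx_yield}).dropna()

# Usage: df = spx_tnx_rolling_corr(weekly_spx, weekly_tnx)
# df.plot.scatter(x="yield", y="corr")    # correlation plotted against the yield level
```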

[SPX TNX Correlation]

This shows how the correlation itself varies with the level of yields, the correlation of the correlation, so to speak.  You can see that the strong correlation, to which I am accustomed, becomes weaker and then negative as yield rises through 5%.  Of the 2,700 or so datapoints, only 415 have TNX < 4.0, so this low yield is an historical anomaly.  Here is the timeline (note the log scale).

[TNX Life]

Since 2008, the correlation has seldom been below 0.4, and often as high as 0.7.  Here’s what that has looked like in terms of weekly returns.

[Scatter3]

For comparison, here is that ancient period when stocks and yields were negatively correlated – or, to put it another way, stocks and bonds rose and fell together.  Don’t ask me what asset class funds flowed to when both were down.  Cash or real estate, I suppose.



Portfolio Balancing Models

Part one in a series

When you read about portfolio management, the emphasis is typically on selecting issues for diversification – an inflation hedge, real estate, bonds, etc.  The allocation of capital within the portfolio is then revised periodically (rebalanced) based on some rules.  Nowadays, you can do this entirely with ETFs.

In this post, I will present some alternative balancing models, with different risk return characteristics.  Requirements for each model are:

  • Mechanical rules suitable for a nonprofessional (or a robot)
  • Balance once each month, to reduce commissions
  • Rule for going to cash

The choice of ETFs is secondary.  The main thing is that they be diverse.  That is, weakly (or negatively) correlated with each other.  I’ll come back to that in a later post.  For now, I present this portfolio without comment:

  1. SPY – SPDR S&P 500 ETF Trust
  2. EFA – iShares MSCI EAFE Index Fund
  3. IEF – iShares Barclays 7-10 Year Treasury Bond Fund
  4. TIP – iShares Barclays TIPS Bond Fund
  5. EEM – iShares MSCI Emerging Markets Index
  6. IWM – iShares Russell 2000 Index
  7. XLB – Materials Select Sector SPDR
  8. IYR – iShares Dow Jones US Real Estate
  9. TLT – iShares Barclays 20+ Yr Treasury Bond

The chart below shows their performance over the ten year period.  It’s hard to look at, but you can see the 2008 crisis and the jump in bond prices.

[Chart0]

This is end of month data, and the period is January 2004 through January 2014.  We are going to write our rebalancing rules based on the month end figures, and assume we can buy at roughly that price to start the new month.  We are also going to assume a smooth allocation to the ETFs, disregarding lot sizes.

The chart below shows the performance of a balanced portfolio versus SPY.  We start with $1,000 in each of the nine ETFs, plus $1,000 cash.  Each month, we rebalance the total, one tenth into each category.  I call this the “equal allocation” model.  It is probably the most natural, intuitive way to do it.
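Before the chart, here is a minimal sketch of that loop.  It assumes `monthly_returns` is a DataFrame of monthly returns with one column per ETF plus a zero-return cash column; real results would also depend on commissions and fill prices.

```python
import pandas as pd

def equal_allocation_backtest(monthly_returns: pd.DataFrame,
                              start_value: float = 10_000) -> pd.Series:
    """Model #1: split the portfolio evenly across all columns at the start of
    each month, apply that month's returns, and record the equity curve."""
    value, curve = start_value, []
    for _, month in monthly_returns.iterrows():
        sleeve = value / len(month)                      # one tenth into each category
        value = sum(sleeve * (1.0 + r) for r in month)   # grow each sleeve by its return
        curve.append(value)
    return pd.Series(curve, index=monthly_returns.index)

# Usage: equity = equal_allocation_backtest(monthly_returns)   # compare with SPY alone
```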

[Model1]

This model performs slightly better than SPY, returning 68% over the period versus 58%, and with less risk.  The standard deviation of monthly returns to the model is 3.2% versus 4.2% for SPY alone.  It does not perform as well as some of the other issues, in absolute or risk adjusted terms.  Since TIP is handy in the portfolio, we’ll use it to compute Sharpe ratios for the whole group:

[Table1]

The ETFs to beat are EEM and IWM, for absolute and risk adjusted returns, respectively.

Perhaps you have noticed a flaw in the logic of this model.  As Loeb says in his book, it has the effect of moving capital from issues that are performing well, and “spreading the wealth” to those that are not.  For the next model, we rebalance pro-rata according to how well each fund has done over the last month.  Here’s how that performs:

[Model2]

In the first month we start with equal allocations, and we make a 2.3% return (actually, a little less because we start with $1,000 allocated to cash).

[Table2]

For the next month, we allocate according to how each fund performed as a percentage of the total.  This “total” figure only serves to make the pro-rata calculation.  Here is the result:

[Table3]

In this month, XLB did best, so it gets the most capital going into the next month.  This approach seems to assume that last month’s return predicts next month’s performance, which is a proven fallacy.  What the model actually assumes, after Levy, is that last month’s relative rankings within the group are predictive.

I call this the “winners only” model, because it allocates only to those issues making gains.  Its average monthly return of 1.1% dominates EEM, and its variability is less than SPY’s.

Results for this model are impressive, but it has some drawbacks.  In some months all issues lose money, leaving no choice but to reuse the prior month’s rankings.  It also cannot go to cash.  Finally, the model produces a very erratic mix of issues from month to month.  The chart below shows the portfolio mix for a representative year:

[Histo2]

This is the killer.  No one would think of running a retirement portfolio like this.  For the next model, we keep the pro-rata concept, but we resolve to stay in nine of the ten issues (counting cash) every month.  We do this by baselining all issues relative to the month’s worst performer.  For example, in a month where the returns are:

[Table4]

We find the biggest loss, 2.6%, and add that amount to each return:

[Table5]

Then, we repeat the pro-rata allocation as before.  Note that cash, which always returns zero, receives an allocation when any other issue makes a loss.  I call this the “drop one loser” model.  In this example, XLB is the biggest loser, and so the portfolio will hold no shares of XLB in the next month.  If all issues make gains, then cash is weakest, and the model goes fully into ETFs.
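Here is a minimal sketch of that baselining step.  The `drop` parameter sets how many of the weakest issues end up with zero weight; raising it to 2 gives the variant introduced just below.  Only XLB’s −2.6% and SPY’s −1.7% are taken from the example month, the other returns are placeholders.

```python
def baselined_weights(month: dict[str, float], drop: int = 1) -> dict[str, float]:
    """Shift every return up by the drop-th worst loss, floor at zero,
    and allocate pro rata.  The weakest `drop` issues get zero weight."""
    baseline = sorted(month.values())[drop - 1]           # the drop-th worst return
    adjusted = {sym: max(ret - baseline, 0.0) for sym, ret in month.items()}
    total = sum(adjusted.values())
    return {sym: adj / total for sym, adj in adjusted.items()}

# Only XLB (-2.6%) and SPY (-1.7%) come from the example month; the rest are made up.
month = {"SPY": -0.017, "EFA": 0.004, "IEF": 0.009, "TIP": 0.006, "EEM": 0.012,
         "IWM": -0.005, "XLB": -0.026, "IYR": 0.010, "TLT": 0.015, "CASH": 0.0}
print(baselined_weights(month, drop=1))   # "drop one loser": XLB is zeroed out
print(baselined_weights(month, drop=2))   # baseline to the second weakest: SPY drops out too
```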

[Model3]

This one is not as chaotic as model #2, but it also doesn’t perform as well.  Its monthly average return of 0.7% is weaker than IWM.  I’ll give a thorough comparison at the end.

Finally, I find a middle ground between models #2 and 3, by baselining to the second weakest performer.  Thus, the month shown above would be baselined to -1.7%, dropping SPY as well as XLB.

[Table6]

The resulting allocation is:

[Table7]

I call this the “drop two weakest” model.  I could go on with dropping three, etc.  This one has, to my eye, the right amount of churn from month to month.  Here is 2005 again.  If you compare it with the histogram above, you can see the resemblance (the colors are the same).  Model #4 tends toward the same issues as #2, but it doesn’t go all in.

[Histo4]

The chart below shows all four models together.  Model #2 is the overall best performer.  Its superior return and low drawdown compensate for its higher variability.  If this were my IRA, though, I would go with the less erratic model #4.

[Model4]

Safe, timid model #1 has the worst drawdown of the bunch.  It continued shoveling money into stocks during 2008.  That’s why “equal allocation” models need explicit warning signals to move out of stocks.  The other three models have implicit warning signals that move them seamlessly into cash and bonds.  Here is the drawdown chart:

[Drawdown]

The table below summarizes results for the four models.  I like to see the returns and variability on a monthly basis, instead of discounting the total return.  I think that’s more appropriate for this application.  Apart from that, my Sharpe ratio is conventional, using TIP as the risk free return.
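For reference, a sketch of that Sharpe calculation on monthly data, with TIP’s monthly return standing in for the risk-free rate (the series names are assumptions):

```python
import pandas as pd

def monthly_sharpe(model_returns: pd.Series, tip_returns: pd.Series) -> float:
    """Conventional Sharpe ratio on monthly returns, with TIP as the risk-free leg."""
    excess = model_returns - tip_returns
    return excess.mean() / excess.std()

# Usage: print(monthly_sharpe(results["model_4"], results["TIP"]))
```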

[Table8]

As long as you have negatively correlated issues to work with, you can achieve your desired risk return tradeoff by tweaking the model.  It’s kind of like the CAPM of portfolio balancing.  I will cover the selection of ETFs in a later post.


Short SPX for 2014

The purpose of this post is to explain why and how I shorted the S&P 500 on Jan. 6.  In an earlier post, I described the two methods I use for detecting “tops” or, more precisely, local maxima.  I also said that I don’t trade this signal.  In fact, since I use a countertrend strategy, I will be taking long swing trades all the way down.

The chart below shows SPX going trendless in early December.  You see ADX(14) < 20 and you also see SMA(10) go flat.  There’s a crazy ramp at year-end, and then SMA(10) resumes flat.  I reasoned that many investors were holding off selling until the new tax year, so I ignored the ramp.  This theory is supported by the low volume.

[Trendless]

The other method is to count distribution days.  I posted this next chart on StockTwits last Thursday.  By that time, I was very surprised the IBD hadn’t registered “under pressure.”  The chart shows daily volume, and it also shows Chaikin’s money flow, a handy way of evaluating up and down volume.

[NYSE]

Peter Brandt said it best.  Chart reading doesn’t enable you to predict the future.  It enables you to take trades with favorable risk reward.  In this case, I could set my stop 1% away at “new all time high,” against the possibility of a prolonged decline.

The other issues I might have chosen to short, QQQ and IWM, were already more than 1% off their peaks.  The Russell had peaked earlier (another tell) on Boxing Day.  Speaking of Canadian holidays, last week I posted my approach to currency hedging.  Instead of selling SPY short, I bought SH, the 1x inverse ETF.  The market usually moves opposite to the USD, so my negative USD position is a hedge.

I might also have bought SDS, the 2x leveraged ETF, but then my stop would be 2% away.  It’s confusing enough having to remember that my mental stop to cover SPY at 184.69 is really to sell SH at 25.22. Happy new year!


Robots Strike Again

I posted about this a few weeks ago, and now Twitter has supplied another example:

[Robots]

This trader would still be in a decent position but, once his stop was triggered, he got hosed for a $2.41 loss.  Nothing is more frustrating.  If you can’t be at your screen, then you may have to use a hard stop.  Otherwise, keep it close to your vest.

Here is Hunsader’s wonderful close up of gold futures plunging $30 (instantly) on heavy volume, and triggering a ten second halt.

[Nanex_GC]

The halt gave alert traders the opportunity to get back in, at least in futures.  Stock traders were not so lucky.
