Rockets and Blazers at the Rim

Every year, the NBA playoffs treat us to some awesome battles. In just one week, we’ve already enjoyed Kevin Durant vs. Tony Allen, Chris Paul vs. Stephen Curry, and Troy Daniels vs. Twitter1.

However, many of my favorite matchups aren’t about individuals–they’re the ones we get when two teams with clashing styles go head-to-head. When both sides excel at what they do, who can find an edge? In this post, I’d like to focus on one such battle being fought between the Rockets and Blazers.

We’ve heard a lot of discussion about how these teams use (or neglect) the mid-range jump shot. This post isn’t about that. While I could write plenty of things about the brilliance of LaMarcus Aldridge, I think the more interesting story takes place within a few feet of the Rockets’ basket.

It’s no secret that Houston likes to attack the rim. A large portion of their offense is predicated on getting into the paint for layups and kickouts. During the regular season, the Rockets took a whopping 43.9% of their shots within 5 feet of the basket, good for second in the league2. They made 60.1% of these attempts, ranking them 8th. The Rockets also led the league in free throw attempt rate, largely a product of their persistent assaults on the rim.

Meanwhile, in Portland, Terry Stotts built a conservative defense meant to encourage mid-range jumpers over more efficient shots. Center Robin Lopez finished among the league’s best rim protectors, allowing opponents to shoot just 42.5% on 10.3 shots at the rim defended per game. As a team, the Blazers held opponents to a mere 55.2% within 5 feet, placing them behind only the Pacers and Thunder. They also ranked fifth in the league in preventing opponents from getting to the free throw line. Although Portland’s overall defense finished close to league average, it seems like their strengths are well-suited to neutralize Houston’s attack.

So far, this battle has more or less been a draw. The Rockets are getting a TON of shots within 5 feet–an absurd 49.7 per game3, up from 35.2 during the regular season (Portland allowed 32.4 per game). A large portion of this increase has come from Dwight Howard, who’s more than doubled his shot attempts inside the restricted area from 7.7 to 16. However, Houston’s only shooting 53.7% within 5 feet, well below their season average. Credit for this goes to Lopez and Aldridge, who together are defending 33.3 shots at the rim per game, including 5 combined blocks.

Furthermore, the Blazers have held the Rockets well below their regular season FTA rate, despite their use of Hack-a-Howard. A big factor here has been Portland’s wings smartly refusing to swipe at the ball on Rockets’ drives. Instead of giving up free throws, they’re forcing Houston to make tough layups against the Blazers’ big men.

The Rockets’ biggest victory in the paint has been offensive rebounding. After grabbing 27.4% of their misses during the regular season, they’ve managed to pull down an impressive 36% of them against Portland. A lot of this is a byproduct of the pressure the Rockets’ penetration puts on Lopez and Aldridge. When they have to leave their man to contest a shot, the other Blazers have done a poor job of rotating to box out. Dwight Howard and Omer Asik are both grabbing offensive rebounds at a rate of 5.6 per 36 minutes. Part of the reason the Rockets have produced so many close shots is that they often get more than one in a single possession.

Even though the Blazers have taken away some of what the Rockets do best, the overall results have been neutral. After scoring 108.6 points per 100 possessions during the regular season, Houston is scoring 108.3 in the first three games of this series. The Blazers had a league average defense, so if you knew absolutely nothing about how the teams matched up, you’d expect exactly what we’ve observed–the Rockets scoring at their normal rate.

Sometimes, the process is a lot more interesting than the results.

1. Don’t forget Lance Stephenson vs. Evan Turner!

2. League average was 35.3%. The 76ers led with 44.2%.

3. It’s worth noting that two of the games went into overtime. But the Rockets still average 46.4 shots within 5 feet per 48 minutes.

Money, Minutes, and the NBA’s Bias Towards Offense

It’s no secret that NBA fans have a bias towards offense. And they have good reasons for this. First of all, offense is more fun. Aside from the nerdiest basketball wonks, most people would rather watch high-flying dunks and long-range threes than precise rotations and intricate pick-and-roll defense. Furthermore, offense is easier to understand. Whether you’re watching film or looking at stats, offensive contributions are easier to discern than defensive ones.

There’s absolutely nothing surprising or wrong with casual fans having a bias towards offense. However, if this also affects the people running teams–those who make their livings knowing more about basketball than the rest of us—then we have something interesting.

Fortunately, we can study this. In this post, I examine the values GMs and coaches put on offense and defense by looking at how they distribute money and minutes.

Let’s start in the front office. The easiest way to determine what’s important to GMs is to look at how much they pay different types of players. I ran regressions to predict players’ salaries based on three different methods of measuring offensive and defensive contributions.

  1. First, I used ESPN’s Real Plus-Minus, which I believe provides the best measure of a player’s impact on each side of the ball1.
  2. Since RPM comes with certain biases, I also used individual offensive and defensive ratings. These stats simply measure how many points a player’s team scores and gives up when he’s on the court. Because they’re unadjusted, they have lots of noise, but they’re also as unbiased as it gets.
  3. Finally, I used win shares as a totally different way to measure impact.

All data sets were from the 2013-14 regular season. I ran regressions with and without adjustments for position, and report position-adjusted results when they were meaningfully different.
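If you want to play along at home, here’s a toy version of the regression setup. Everything in it is synthetic: the players, the ratings, and the coefficients baked into the simulated salaries are made up for illustration, not my actual data or results.

```python
# Toy version of the salary regression: synthetic players, synthetic salaries.
# In the real analysis, orpm/drpm/salary would come from actual player data.
import numpy as np

rng = np.random.default_rng(0)
n = 300
orpm = rng.normal(0, 2.5, n)   # offensive impact, points per 100 possessions
drpm = rng.normal(0, 2.0, n)   # defensive impact

# Simulate a market that pays $1M per point of offense and $0.5M per point of
# defense, around a $5.3M average salary (illustrative numbers only)
salary = 5.3 + 1.0 * orpm + 0.5 * drpm + rng.normal(0, 2.0, n)

# Ordinary least squares: salary ~ intercept + ORPM + DRPM
X = np.column_stack([np.ones(n), orpm, drpm])
beta, *_ = np.linalg.lstsq(X, salary, rcond=None)
print(f"$M per point of offense: {beta[1]:.2f}, per point of defense: {beta[2]:.2f}")
```

The regression recovers coefficients close to the ones baked into the simulation; swapping in real salary and rating data is all it takes to reproduce this kind of analysis.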

Using RPM, a one-point increase in ORPM2 predicted a salary boost of $990 thousand, while an equal improvement on defense corresponded to a raise of only $660 thousand. This difference became magnified when I adjusted for position. A point on offense increased to $1.16 million against just $430 thousand for a point on defense.

Efficiency ratings saw a similar pattern. An extra point added to offensive rating predicted a salary boost of $180 thousand compared to $100 thousand for a point subtracted from defensive rating.

The win shares data gave less definitive results. A hundredth of a win share per 48 minutes was valued at $350 thousand, while the same amount of defense cost $320 thousand (a nonsignificant difference). However, adjusting for position widened the gap, with a hundredth of an offensive win share priced at $370 thousand and the same quantity of defense at $260 thousand3. It’s worth noting that the box score-based metric gave the most equal values for offensive and defensive contributions.

To give you a visual sense of this, I plotted player salaries against ORPM and DRPM.



Overall, these results make a strong case that GMs pay a higher price for offense–maybe even twice as much as they pay for equivalent defensive contributions. It’s tempting to point to that and scream “MARKET INEFFICIENCY!”, but there might be good reasons why this discrepancy exists.

First, I suspect it’s easier to have confident beliefs about a player’s abilities on offense. GMs and scouts can watch film to study either end, but only on offense can they use numbers to confirm what they’ve observed. Defense is still mostly opaque to stats. If GMs feel more uncertainty investing in a player’s defense, they won’t pay as much for it. Certainty comes at a premium, so offense will earn more money.

It could also be the case that future offensive performance is more predictable. GMs might believe that defensive contributions have more to do with a player’s fit within a team and system, while offense is consistent regardless of other factors. Again, this would lead to more certainty about what kind of impact a player’s offense will have.

Both of these explanations are mostly conjecture and might deserve more rigorous investigations of their own. But it’s important to remember that the fact that offense gets paid more than defense doesn’t automatically prove there’s a market inefficiency.

While GMs are in charge of money, coaches get to distribute minutes. It turns out that offense not only gets a player paid, it also gets him on the court.

Using RPM, an extra point of ORPM corresponded to 2.6 additional minutes per game. Meanwhile, a point of DRPM earned just 0.8 more minutes. This means the marginal playing time gain for offense was over three times that of defense.

The results from the other data sets were even more surprising. Using both efficiency ratings and win shares, increases in defensive contributions did not have a statistically significant effect on playing time4. An additional point of offensive efficiency predicted an increase of 0.6 minutes per game, while an equal improvement in defensive efficiency corresponded to a reduction of 0.2 minutes5.

Likewise, an extra hundredth of an offensive win share per 48 minutes predicted 0.8 additional minutes per game, while the same contribution on defense anticipated a loss of 0.1 minutes. After adjusting for position, the negative effect of defense disappeared and an additional hundredth of a defensive win share corresponded with an increase of 0.2 minutes.

Again, for visualization’s sake, here are graphs of minutes per game against ORPM and DRPM.



Let’s think about this for a moment. Markets are complicated, so GMs might have good reasons to consciously pay more for offense. But many of the factors that affect a dynamic market are absent or reduced for a coach. His goal is essentially to optimize his players’ minutes to build the best team possible6. It’s hard to come up with good reasons why coaches’ playing time distributions should make them appear apathetic towards defense. This data suggests they either don’t understand it as well as offense or don’t value it as much as they should.

As one final note, it’s worth considering how these two phenomena might interact. For starters, coaches and GMs certainly talk about what players they do and don’t like. Some coaches have a say in how their rosters are built, and some GMs have influence over what happens on the court (and usually hire their coach). Each of these groups influences how the other thinks.

There could also be indirect effects. NBA coaches are smart people who want to keep their jobs. If a GM shells out for an offensively-focused player, his coach might feel pressure to play him more than he’d like, even if that hurts his team. On the flip side, coaches control which players GMs get to watch. If a good defender can’t get on the court because he hurts his team’s offense, the narrative surrounding him will likely focus on his deficiencies. No one will get a chance to see what he does well. Overall, it seems likely that there’s a relationship between the biases we see in each of these groups.

1. You can read more of my thoughts about Real Plus-Minus here.

2. In other words, an increase of one point of offense per 100 possessions.

3. Random note: Using all of these methods, a league average player would expect to make $5.3 million per year.

4. This was true both with and without positional adjustments.

5. Again, this reduction wasn’t statistically significant (p = 0.18). I don’t think being better at defense would actually cause a player to play less.

6. I’m not trying to say coaching is easy. They have to consider tons of factors in every decision. It’s just that their problems are more isolated from other teams than those facing a GM, and it’s more clear what a coach is optimizing for.

Explaining ESPN’s Real Plus-Minus

Last Monday, ESPN debuted their newest basketball statistic: Real Plus-Minus (RPM). Unsurprisingly, it was met with a variety of reactions ranging from excited to confused to baffled.

I think it’s great that people are critiquing the stat and studying its strengths and weaknesses. I’ve read a lot of really insightful discussion about how to best use it, with opinions all over the map. However, I’ve also encountered a number of misconceptions about RPM. In this piece, I’d like to correct, clear up, and emphasize a handful of points that I think are important for anyone who wants to use this stat.

1. Real Plus-Minus is not new.

RPM is the latest version of xRAPM1, a stat created by Jeremias Engelmann (with contributions from Steve Ilardi, who authored ESPN’s introduction to RPM). Engelmann’s stat is meant to improve upon RAPM2. That stat attempts to fix the problems in APM3, which was conceived as a better version of conventional plus-minus (which you can find in most box scores). All of these metrics have the same goal: to quantify how much a player hurts or helps his team when he’s on the court. Studying this lineage can help us understand why Real Plus-Minus exists, where it excels, and where it struggles.

We begin with a different sport and country. The Montreal Canadiens created the vanilla version of plus-minus during the ’50s. After becoming a standard hockey stat, it eventually made its way to basketball during the 2000s. Its computation is simple. If a player’s team outscores his opponent by 5 points while he’s on the court, his plus-minus is +5. You can easily convert this into a per-minute or per-possession metric.
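Since the computation really is that simple, here’s what it looks like in code, using a few made-up stints (the players and scores are hypothetical):

```python
# Raw plus-minus from hypothetical stint data: (players on court, points
# scored by their team, points scored by the opponent) for each stretch.
stints = [
    ({"Durant", "Lamb", "Jackson"}, 12, 7),
    ({"Durant", "Jackson", "Fisher"}, 9, 10),
    ({"Lamb", "Fisher", "Jones"}, 5, 11),
]

plus_minus = {}
for on_court, pts_for, pts_against in stints:
    margin = pts_for - pts_against
    for player in on_court:
        # every player on the court gets credited with the full margin
        plus_minus[player] = plus_minus.get(player, 0) + margin

print(plus_minus["Durant"])  # (12-7) + (9-10) = +4
```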

The problem with plus-minus is that it doesn’t account for the quality of a player’s teammates and opponents. As an estimator of how much a player helps or hurts his team, it’s inherently flawed. Plus-minus has a statistical property called bias–no matter how much data you collect (i.e. even if you could make teams play thousands of games), it will systematically misrepresent certain players’ contributions.

This is easily illustrated with an example. If Jeremy Lamb plays all of his minutes with Kevin Durant, Lamb will look pretty good. Sure enough, Lamb has a raw plus-minus of +8.0, ranking him just ahead of LeBron James. Is Lamb really as good as LeBron, or is this a reflection of the fact that he plays a lot of his minutes with the likely MVP (and against opponents’ second units)? Plus-minus can’t answer this.

Which leads us to adjusted plus-minus, or APM. This metric tries to isolate a specific player’s impact by adjusting his plus-minus numbers for the quality of his teammates and opponents. To continue our example (with made-up numbers), APM might use the minutes Durant plays without Lamb to find that he’s worth an additional 9 points per 100 possessions. This would mean Lamb actually loses the Thunder one point per 100, making him a below average player4. APM can use minutes where one teammate plays without the other to figure out how much each contributes when they play together.
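Footnote 4 describes the actual machinery: one equation per stint, solved as a regression. A minimal sketch with made-up stints (all names and margins invented) might look like this:

```python
# APM sketch: each stint is one equation where teammates count +1, opponents
# count -1, and the target is the margin per 100 possessions. Everything
# here is made up for illustration.
import numpy as np

players = ["Durant", "Lamb", "Westbrook", "OppA", "OppB", "OppC"]
idx = {p: i for i, p in enumerate(players)}
stints = [
    ({"Durant", "Lamb"}, {"OppA", "OppB"}, 10.0),
    ({"Durant", "Westbrook"}, {"OppA", "OppC"}, 14.0),
    ({"Lamb", "Westbrook"}, {"OppB", "OppC"}, 2.0),
]

X = np.zeros((len(stints), len(players)))
y = np.zeros(len(stints))
for row, (team, opps, margin) in enumerate(stints):
    for p in team:
        X[row, idx[p]] = 1.0   # teammate on the court
    for p in opps:
        X[row, idx[p]] = -1.0  # opponent on the court
    y[row] = margin

# Least squares solves for the ratings that best explain every stint at once
ratings, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Notice that with this little data the system is badly underdetermined–which is exactly the volatility problem APM runs into.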

Unfortunately, what makes APM better also makes it volatile. If two players share the court for a large portion of their minutes, the rare instances where one plays without the other can have disproportionately large effects on both players’ ratings5.

Imagine Durant and Lamb play all of their minutes together during the first 81 games of the season. However, for their last game, the Thunder decide to rest Durant for the playoffs. In that game, Reggie Jackson and Derek Fisher catch fire and the Thunder destroy their opponent. When APM is rating players, it will see that the Thunder looked great the only time Lamb played without Durant and wrongly conclude that Durant was actually holding Lamb back the whole season6. Even though this example is a bit contrived, there are a lot of real-world instances where small sample randomness leads to bizarre APM results.

How do you work around this problem? That’s where regularization–the R in RAPM–comes in.

In APM, a statistical technique called linear regression estimates each player’s rating. RAPM employs a modification of this called ridge regression7. This method of estimating ratings has the effect of pulling values toward some pre-determined expectation known as a prior. RAPM uses a rating of 0 as its prior–in other words, it’s skeptical of a player who rates strongly above or below average, unless it has a lot of data to back that up. Using ridge regression reduces the impact of the small sample size problems I described above. Values that stray too far from the prior, especially without lots of data to back them up, will be pulled back toward (hopefully) a more reasonable estimate. RAPM doesn’t entirely fix the extreme Durant-Lamb case I outlined earlier, but it does a good job with more realistic scenarios. While randomness can still have an effect, the damage is less than it is for APM.
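For the stats nerds, here’s a tiny sketch of the difference, using the closed-form ridge solution on a made-up design matrix. Real RAPM runs on a season’s worth of stints, and I’m not claiming this is Engelmann’s exact implementation–just the core idea.

```python
# RAPM vs. APM on a tiny made-up design matrix. The penalty term lam pulls
# every rating toward the prior of 0.
import numpy as np

def rapm(X, y, lam):
    """Closed-form ridge regression: minimize ||y - Xb||^2 + lam*||b||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

X = np.array([[1.0, 1.0, -1.0],
              [1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
y = np.array([8.0, 3.0, 6.0])

apm_ratings = rapm(X, y, lam=0.0)    # no regularization: plain APM
rapm_ratings = rapm(X, y, lam=10.0)  # regularized: shrunk toward 0
```

The regularized ratings come out uniformly closer to zero–skepticism, expressed as math.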

RAPM solves the major problems its predecessors face, but can still be improved upon. Its biggest weakness is that its method of reducing statistical noise also causes it to ignore some useful data. This is where–finally–expected RAPM (xRAPM) comes in.

Let’s revisit those priors. Instead of automatically giving everyone a 0 prior, xRAPM assigns each player a value that tries to predict what his RAPM should be. Since ridge regression pulls ratings toward priors, Durant’s xRAPM moves toward a higher number than Lamb’s (assuming the priors rate Durant better than Lamb). The prior is a mathematical way to say, “If you’re not sure who should get credit for this, it’s probably the guy who we already think is better.” If you can devise a method to intelligently set these priors8, you will improve the statistic’s predictive power.

So how are these priors set? One major part of the prior is a player’s performance in the previous season. Intuitively, this makes sense. If a player was really good last year, he’ll probably be good this year. Another component is based on box score stats. Although these numbers are prone to misrepresenting certain types of players, they can still give xRAPM useful information that helps it separate out different players’ contributions. In addition to these, other types of data like height and age help determine priors. All of these factors give xRAPM clues about how good it should expect a given player to be.
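One simple way to implement ridge-toward-a-prior (I can’t promise this is exactly how xRAPM does it) is to regress on each player’s deviation from his prior, then add the prior back:

```python
# Ridge regression toward a non-zero prior: solve for each player's deviation
# from his prior, then add the prior back. Not necessarily xRAPM's exact
# formulation, but it captures the idea. All numbers are invented.
import numpy as np

def xrapm(X, y, prior, lam):
    resid = y - X @ prior  # the part of the results the priors don't explain
    d = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ resid)
    return prior + d

X = np.array([[1.0, 1.0, -1.0],
              [1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
y = np.array([8.0, 3.0, 6.0])
prior = np.array([5.0, 0.0, -1.0])  # e.g. built from last season + box scores

# With heavy regularization, the estimates sit near the priors rather than 0
heavily_shrunk = xrapm(X, y, prior, lam=1e6)
```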

That was a lot to take in, so let’s recap:

  • Plus-minus measures how well a player’s team performs when he plays, but doesn’t adjust for context.
  • Adjusted plus-minus (APM) accounts for the quality of a player’s teammates and opponents, but struggles when players get most of their minutes together and is susceptible to sample size issues.
  • Regularized adjusted plus-minus (RAPM) smooths out APM’s more extreme results, but tends to be a little too conservative.
  • Expected regularized adjusted plus-minus (xRAPM) brings in other types of data to improve RAPM. This stat is what Real Plus-Minus is based on.

So there you have it–a short history of Real Plus-Minus, which, aside from some possible tweaking, is not a new statistic.

2. Real Plus-Minus does NOT measure how well a player has performed this season. 

Statistics can be divided into two categories. Descriptive statistics tell us about what happened in the past. For instance, I can check how many page views this blog post has. Predictive statistics try to forecast what will happen in the future. I could create a model that estimates how many page views I’ll get over the next 24 hours. The difference between these is subtle, but important.

Real Plus-Minus is meant to be predictive. It’s interested in how well a player will perform in the future, rather than what he did in the past. RPM’s emphasis on prediction explains why it uses some of the tricks it does.

For instance, I mentioned earlier that RPM uses data from previous seasons in its priors. If my primary goal is to evaluate how well a player did this season, it wouldn’t make a lot of sense to use data from other seasons. However, if I want to predict what will happen in the future, the older numbers can help me differentiate between players who have been consistently good (and will likely keep being good) and players who are merely going through a hot streak (and will likely regress to their mean).

This has a number of implications. One is that RPM tends to be skeptical of player improvements (or regressions) that exceed what is expected for a player that age. This season, Anthony Davis improved much faster than most 20-21 year old players. People who watch basketball know that Davis is super talented and accelerated growth is expected from him. However, Real Plus-Minus doesn’t understand this and suspects that Davis’ numbers might be a random blip. As a result, Real Plus-Minus is liable to underestimate Davis’s impact this season9.

On a less technical note, RPM’s focus on prediction makes it a poor way to determine who should get end-of-season awards. I think this is an important point to emphasize because ESPN does exactly this in its introduction to RPM, using it to argue that Taj Gibson is a better candidate than Jamal Crawford for 6th Man of the Year. RPM is optimized to predict the future, not evaluate the past.

3. Offense and defense are equally important in Real Plus-Minus.

RPM separates players’ offensive and defensive contributions (into ORPM and DRPM, respectively) and counts each of these equally in overall RPM. It seems obvious that both ends of the court are equally valuable, but it’s important to appreciate the ramifications of this. Again, I’ll illustrate this with an example.

Let’s look at James Harden. The Beard is regarded as a great offensive player, and his ORPM of 5.69 (fourth-best in the league) bears this out. He’s also viewed as a laughably poor defender, which is confirmed by his DRPM of -2.66 (77th out of 91 shooting guards). When we analyze the components of his game, his numbers agree with the eye-test.

However, this doesn’t hold up when we consider his game as a whole. Many analysts and fans consider Harden to be a top-10 player. But RPM ranks him 46th overall, just ahead of Robin Lopez. What gives?

This discrepancy exists because Real Plus-Minus forces us to weight Harden’s offense and defense equally. A subjective evaluation of his game makes it easy to focus on all the great things he does on offense and forgive his shortcomings on the other end. I found myself surprised by how low he ranked, but after thinking about it, I have to agree. In order to think Harden is a top-10 player, you must believe one of the following:

  1. Harden is close to league average on defense.
  2. Harden is significantly better on offense than Chris Paul and Stephen Curry.
  3. A point added on offense is worth more than a point added on defense.

I don’t agree with any of these statements, so I’m forced to concede that Harden isn’t a top-10 player.

I’m grouping the next two together because they both deal with how to interpret Real Plus-Minus scores.

4. If Player X has an RPM of 2.0 and Player Y has an RPM of 1.0, Player X is not twice as good as Player Y. 

5. A player with a negative RPM does not necessarily hurt his team.

I’ve seen these and similar errors in more than a few places. RPM values are reported as relative to an average NBA player. In the first case, you could say “Compared to a league average player, Player X adds twice as many points per 100 possessions as Player Y” and be correct, but that’s a bit of a mouthful.

A better way to look at RPM scores is to compare them to either A) a player’s backup or B) a replacement level player.

CJ Watson gives us a good example of why backup comparison is important. The consensus view of Watson is that he’s the Pacers’ most important bench player and the team was clearly affected during his recent injury. However, Watson’s RPM of -0.08 doesn’t particularly impress. An uninformed glance at that number might make you say, “Okay, he’s average, but does he help the team at all?”

The answer: YES!

When Watson was out, third-string guard Donald Sloan picked up most of his minutes. What’s Sloan’s RPM? An atrocious -7.7, the fourth-worst rating in the league. If Sloan takes 12 of Watson’s 19 minutes per game, the Pacers lose around 1.8 points. For one bench player, that’s a huge impact10. Even if the Pacers had a below average backup point guard (say, Ish Smith, who rates at -1.94), he would be a huge help by virtue of keeping Sloan off the court11.
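For anyone checking my math, the back-of-envelope version looks like this, treating minutes share as a rough stand-in for possessions share (which is why the result is approximate):

```python
# RPM is points per 100 possessions, so a substitution's impact scales
# roughly with the share of the game being swapped
watson_rpm, sloan_rpm = -0.08, -7.7
minutes_swapped = 12.0
impact = (watson_rpm - sloan_rpm) * (minutes_swapped / 48.0)
print(round(impact, 1))  # about 1.9 points per game
```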

The replacement level comparison is important because it gives us a way to determine whether a player is helpful or harmful, and how much so. Replacement level is the theoretical cutoff between someone good enough to play in the NBA and someone who can’t quite make it. If a player falls below that point, he’s no longer contributing and should be replaced by someone from outside the league. Therefore, a player’s value can be measured by how much better he performs than a replacement level player12.

This is the idea behind ESPN’s Wins Above Replacement metric (WAR). That stat uses Real Plus-Minus to estimate how many points a player has added (or subtracted) over the entire season. It then determines how many wins those points would be expected to add. This is useful because “wins” is a more relevant and understandable unit than “points above average.”
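Here’s the rough shape of that calculation. Be warned that the 30-points-per-win conversion is a common rule of thumb, not ESPN’s actual constant, and the possession count is invented for illustration:

```python
# The WAR idea in miniature: points above a replacement-level player,
# converted to wins. The 30-points-per-win figure is a rough rule of thumb,
# not ESPN's actual constant, and the possession count is made up.
REPLACEMENT_RPM = -2.35  # points per 100 possessions (see footnote 12)

def wins_above_replacement(rpm, possessions, points_per_win=30.0):
    points_above = (rpm - REPLACEMENT_RPM) * possessions / 100.0
    return points_above / points_per_win

# A +5 player who's on the court for 5,000 possessions in a season:
war = wins_above_replacement(5.0, 5000)
```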

6. Position is relevant when using Real Plus-Minus to evaluate a player.

Unsurprisingly, different positions tend to be better at different things. Point guards tend to be more focused on offense, while centers are more likely to be gifted defenders. This should have an effect on how we value different players.

Let’s consider the following question: Does Roy Hibbert or Paul George contribute more to Indiana’s league-leading defense?  DRPM gives Hibbert a rating of 3.52 while George is at 2.61. If you stopped your analysis there, you’d conclude that Hibbert is the key to the Pacers’ stinginess.

However, when you adjust for position, this isn’t the case. An average center has a DRPM of 1.78, while small forwards average a rating of 0.04. Hibbert exceeds his positional average by just 1.74 points per 100, while George does so by an impressive 2.57 points. When we consider position, you can make a compelling argument that George provides more defensive value than Hibbert13.
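The adjustment itself is just subtraction–compare each player to the average rating at his position (using the numbers quoted above):

```python
# Position adjustment: subtract the positional average DRPM from each
# player's rating (averages and ratings as quoted in the text)
positional_avg = {"C": 1.78, "SF": 0.04}
drpm = {"Hibbert": ("C", 3.52), "George": ("SF", 2.61)}

adjusted = {name: round(rating - positional_avg[pos], 2)
            for name, (pos, rating) in drpm.items()}
print(adjusted)  # George clears his positional bar by more than Hibbert does
```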

7. The quality of a player’s teammates and opponents DOES NOT impact his rating.

I discussed this earlier, but I want to emphasize it because I’ve noticed a lot of people making this mistake. The entire goal of any adjusted plus-minus stat is to filter out other players’ effects on raw plus-minus in order to isolate one player’s contributions. If you notice someone has a better rating than you would expect, it doesn’t make sense to attribute it to him sharing the court with a superstar or playing against second-units. There are plenty of reasons to think a player’s RPM doesn’t reflect his true value, some of which I’ve mentioned here. That isn’t one of them.

This doesn’t mean lineup factors don’t have any effect on RPM. Players who spend most of their time with players who complement them well will get a boost to their ratings, and vice versa. For an extreme example of this, imagine if you tried to play the five highest-rated centers. RPM predicts that they would outscore opponents by 20 points per 100 possessions, but I’m quite confident they’d perform much worse. In more realistic cases, lineups that don’t have enough shooting, creating, etc. could hurt those players’ ratings14.

It’s perfectly reasonable to use lineup factors to doubt a player’s Real Plus-Minus, but you have to use deeper analysis than simply pointing out the players he tends to play with and against.

1. Short for “expected regularized adjusted plus-minus”

2. Regularized adjusted plus-minus

3. Adjusted plus-minus. We’re losing letters by the sentence.

4. Of course, it’s not quite this simple, since Durant’s APM score is also affected by Lamb’s. In practice, these values are estimated by setting up several equations and running a regression to estimate coefficients that represent each player’s APM.

5. This is called collinearity. In statistics, when two variables are highly correlated, it can be hard to determine which has more causal effect on the outcome.

6. The opposite could happen, too. If the Thunder laid an egg that game, Durant’s rating would get inflated.

7. For the stats nerds: Ridge regression modifies the optimization criterion for standard linear regression. Instead of simply trying to minimize the residuals, it minimizes the sum of the residuals and a term based on the distance between the coefficients and the priors.

8. This isn’t that hard when your baseline is to assume every player is equally good. But finding the best way to do it is an ongoing topic of study.

9. This effect tends to be stronger for players who improve later in their careers because RPM expects little to no improvement from them.

10. That’s roughly equal to the difference between the Chicago Bulls and Charlotte Bobcats.

11. It’s worth noting that Sloan probably isn’t that bad. His increase in minutes coincided with the team’s overall regression, so he likely gets a disproportionate share of the blame for that. Still, other metrics and the eye-test make him out to be pretty terrible for an NBA player.

12. NBA replacement level is set at -2.35 points per 100.

13. The basic difference here is what you’re comparing them to. Compared to an average NBA player, Hibbert’s better. But compared to their positional averages, George is better. Seeing as a realistic lineup would almost always have Hibbert replacing another big man, I think the positional comparison makes more sense.

14. The Real Plus-Minus framework could be extended to account for this by including coaches as members of every lineup they play. This would give them some of the credit for their ability to play lineups with chemistry. That might already be part of Real Plus-Minus, although I imagine that if ESPN were computing coach values they would publish that somewhere. Maybe it’ll be added in the future.

Danny Green Never Took Statistics

Such Is Life

If you’ve watched any of the NBA Finals so far, you’ve seen San Antonio Spurs guard Danny Green making a 3-pointer. In less than five games, Green has broken then-Celtics now-Heat sharpshooter Ray Allen’s record for most made 3s in a single Finals. As impressive as that volume is, what’s been more absurd is Green’s accuracy. Allen set his record by shooting a red-hot 22-42 (52%) over six games. Green has made three more shots on four fewer attempts, putting him at 25-38 (66%). Frankly, it’s a miracle he hasn’t burst into NBA Jam style flames.

Not even digital Rodman will go near that.

After sufficiently extolling Green’s performance, I started to wonder how improbable it really was. After all, Green was one of the most accurate long-range shooters in the NBA this season, and 38 shots isn’t a huge sample size. Furthermore, as Grantland’s Kirk Goldsberry informs us
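The basic version of that improbability question is an exact binomial tail calculation. If we treat Green as a hypothetical 43% three-point shooter (an assumption, roughly in line with his regular-season accuracy), the chance of going 25-of-38 or better works out like this:

```python
# Exact binomial tail: probability of making at least 25 of 38 threes,
# assuming a hypothetical "true" accuracy of 43% (an assumption, roughly in
# line with Green's regular-season shooting)
from math import comb

def prob_at_least(makes, attempts, p):
    return sum(comb(attempts, k) * p**k * (1 - p)**(attempts - k)
               for k in range(makes, attempts + 1))

streak_prob = prob_at_least(25, 38, 0.43)  # somewhere under one percent
```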


Are the Rockets Title Contenders?

The NBA playoffs are less than a week away. We know that one of the Spurs, Thunder, Clippers, and Heat will probably win the title. The Pacers have clearly lost their early season form. We currently have no idea if a sense of postseason urgency will kick in and solve their issues, but it shouldn’t take long to tell.

But what about the Rockets? Sometimes they look like they can hang with the big dogs (3-0 against the Spurs). Other times, they give up 51 points to Corey Brewer and lose to a depleted Timberwolves squad. How do we make sense of this? Are the Rockets a contender?

I think the answer is yes, but you have to look through a slightly optimistic lens. And even then, they’re only a weak contender.

The Rockets’ biggest problem has nothing to do with their perimeter defense or Dwight Howard’s free throw shooting or Patrick Beverley’s knee. It’s their conference.

The Western Conference’s superiority this season has been well documented. The Phoenix Suns–currently tied for eighth in the West–would be the third-best team in the East. On paper, the West has three teams better than the East-leading Heat and five better than the second-place Pacers.

Although they’re probably the fifth-best team in the NBA right now, the Rockets are only the fourth-best in the West according to both record and net rating1. Since the 1999-2000 season, only two teams from outside the top three of their conference have made it to the NBA Finals2. Out of 28 Finals spots, 26 went to conference top-three teams. Already, things aren’t looking good for Houston.

However, there’s good news for Rockets fans. You can make a strong case that the Rockets have improved substantially over the course of the season, and the numbers bear that out. For games played during the first two months of the season, the Rockets’ net rating was +3.2 points. Since then, it has increased to +6.8 points. Although that isn’t enough to catch the West’s top-three teams, it suggests that the Rockets can at least compete with them.

This statistical improvement is backed by what I observe watching the Rockets play. During those first two months, the Rockets struggled to integrate Dwight Howard into their offense and rotation. First, they tried, without success, to play lineups featuring both Howard and Omer Asik. Then, their ball-handlers had to learn when and how to get Howard the ball. Now that these problems have been solved, the Rockets have clearly played better. In this case, the numbers and the eye test agree.

One factor inflating the Rockets’ performance has been their good injury luck. So far, James Harden, Dwight Howard, and Chandler Parsons have made 208 of a possible 231 appearances–right at 90%. Meanwhile, the Spurs, Thunder, and Clippers have played through significant injuries to players named Parker, Leonard, Westbrook, and Paul. The fact that these teams outperformed the Rockets despite these injuries doesn’t bode well for Houston’s title chances.

Because I care way too much about this stuff, I have a program that lets me run thousands of playoff simulations to estimate the likelihoods of different outcomes (e.g. winning the Finals, winning a single series). Because of the difference between the Rockets’ season-long rating and their rating since the first two months, I ran separate simulations using each set of ratings3.
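My actual simulation program isn’t shown here, but the core of a Log5-based approach (the Bill James method mentioned in footnote 3) can be sketched in a few lines. Everything below is illustrative: `simulate_series`, its trial count, and the example win probabilities are my own assumptions, and the sketch ignores home-court effects on individual games.

```python
import random

def log5(p_a, p_b):
    """Log5 (Bill James): probability that team A beats team B, given
    each team's win probability against an average opponent."""
    return (p_a - p_a * p_b) / (p_a + p_b - 2 * p_a * p_b)

def simulate_series(p_game, best_of=7, trials=100_000, seed=0):
    """Monte Carlo estimate of the favorite winning a best-of-N series
    when it wins each game independently with probability p_game."""
    rng = random.Random(seed)
    need = best_of // 2 + 1  # games needed to clinch (4 in a best-of-7)
    series_wins = 0
    for _ in range(trials):
        a = b = 0
        while a < need and b < need:
            if rng.random() < p_game:
                a += 1
            else:
                b += 1
        if a == need:
            series_wins += 1
    return series_wins / trials
```

For instance, a team with a 60% chance in each game wins a best-of-7 roughly 71% of the time–a longer series amplifies a per-game edge, which is why the quoted series probabilities can sit well above the single-game numbers.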

First, I define a contender as a team that has at least a 5% chance of winning the championship. This number is partially influenced by the belief held by many NBA executives and media members that if a team has a 5% chance to win, they’re a few lucky breaks away from finding themselves in the Finals and should do whatever they can to improve their odds4.

Using the current standings and ratings, the Rockets have only a 4.1% chance of winning the NBA Finals–not enough to be a contender. This number struck me as low, but it makes sense when you look at what they’ll have to accomplish.

The Rockets will almost certainly play the Trail Blazers in the first round. Houston currently has home court advantage, although there’s a chance they’ll lose it. Either way, the Rockets are clearly the better team and the Blazers have been trending downward lately. However, the teams are close enough that Portland is at least a threat to the Rockets. My simulation gives Houston a 70% chance to win the series if they keep home court and a 63% chance to win if they lose it.

After this, things get a lot tougher. The Rockets’ most likely opponents would be the Spurs, the winner of Thunder vs. Clippers, and the Heat. That would require them to beat three of the four best teams in the NBA. Even if each series were 50-50, Houston would have just a 12.5% chance to win all three. When you consider that all of these teams rate better than the Rockets and would have home court, Houston’s chances start to look bleak.
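The 12.5% figure is just three independent coin flips compounded. The underdog probabilities below are hypothetical numbers of my own, not outputs of the simulation, but they show how quickly the compound chance shrinks once each series tilts against you:

```python
# Three even (50-50) series in a row:
p_even = 0.5 ** 3  # 0.125, i.e. 12.5%

# Hypothetical underdog odds in each round compound much faster:
p_underdog = 0.45 * 0.40 * 0.40  # 0.072, i.e. 7.2%
```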

However, when we only look at games played in 2014, the Rockets look like a legitimate contender. In this simulation, their title chances increase to 8.8%, more than double what they were before. Although it might be a bit optimistic, I think this is a more accurate representation of the Rockets’ actual chances at making a run.

For Rockets fans, the worst part is how different their outlook would be if they were in the Eastern Conference. With their current record, Houston would be the third seed in the East. When I simply switch them with the Raptors (the East’s current third seed) and use the later-season ratings, the Rockets’ title chances increase to 19.3%. They would be the favorites in the East and the third most likely champion5.

So in conclusion, the Rockets’ season-long improvements have turned them into a contender, but the West is so tough that they still aren’t likely to bring the Larry O’Brien Trophy back to Houston.


1. Net rating is a team’s point differential per 100 possessions: offensive rating (points scored per 100 possessions) minus defensive rating (points allowed per 100 possessions).

2. Those teams were the 2003-04 Lakers and the 2009-10 Celtics. Both lost in the Finals.

3. As fate would have it, these simulations use a method that current Rockets GM Daryl Morey adapted for basketball (it was originally created for baseball by Bill James). To read more about it, see The Origins of Log5 and Pythagorean expectation.

4. Morey: “If you’ve got even a 5 percent chance to win the title — and that group includes a very small number of teams every year — you’ve gotta be focused all on winning the title.” For more, see The 5 Percent Theory.

5. First and second were the Spurs and Clippers. The Heat and Pacers have played relatively poorly during the second half of the season, although the Heat have a history of coasting some during the regular season and turning it up for the playoffs. Still, using the later-season ratings, the simulation gives the Rockets a 49.5% chance to win the East. Sigh…