I know that the playoffs start tomorrow. And I know that just about every player is hoping for the same thing.
They are hoping to be hot.
Being hot is like basketball's magic trump card. How can the Jazz beat the Lakers? How can the Sixers beat the Magic? How can the Pistons beat the Celtics?
There are 100 answers -- but the easiest one is simply to be hot.
But are you aware that roughly one zillion studies, including a recent one that was extremely rigorous, find that there is literally no such thing as being hot?
Don't Speak Yet
There's a lot to this story. It's like that moment when you walk up the stairs out of the subway station into the bright daylight. Just take a second and figure out where you are.
While you're standing there, blinking, trying to feel less discombobulated, watch this Monty Python clip, and then come back and acknowledge that all through human history there has been a difference between what we are pretty sure we know, and what we really know.
The people in Monty Python's dark ages are pretty sure that someone who weighs the same as a duck is made of wood and is therefore a witch. (What they really know, of course, is diddly squat.)
Since the primitive times of medieval comedy, we have, thankfully, learned one hell of a lot. We have science, we have medicine and we have knowledge! (Now we know that of course there's no need to involve ducks -- the best witch test is simply to chuck water on them.)
Sure, there will always be detours. Much "science" was later proved untrue. But by and large, over time, we have taken a lot of things that we thought we knew and turned them into things that we know are either really wrong, really right, or really too complicated to figure out right now.
And you know what? It's a painful process. We rely on assumptions. We need them. There's too much going on around us to dig into every little thing, and sometimes life feels better when you stick with what you know and leave the rest to the great unknown. So witches are made of wood, OK? You can't just go poking around, questioning, and pretending every damned thing can be put into a spreadsheet.
By some magic (know any witches?) you're the head coach of an NBA basketball team. You're down one with twenty seconds to play. Looking around the huddle, you see a power forward who normally shoots 40% from the floor, but is a scalding hot eight of ten so far tonight, and has hit his last three. Then there's your shooting guard, who normally shoots 50% from the floor, but has oddly made just two of ten so far tonight, and has missed his last two.
One of them is going to be the first option in the play you draw up. You have got a decision to make -- and if you're like just about every basketball coach on the planet, one of the many things that will inform your decision is the reality that your power forward has some magic coursing through his veins right now. Everyone in the gym knows that right now he is hot, which means he's more likely than usual to hit his next shot.
And that matters.
But does it, really?
The idea that a player can be hot ... is that something you know, or something that you think you know?
I have three coins. One of them is truly 50/50. Another is weighted to be streaky -- it's slightly more likely to repeat the side that came up the last time. And the other is the opposite of streaky -- if it comes up heads, the next flip is slightly more likely to be tails.
Which coin is this?
According to research into this kind of stuff (and there's a lot of it), most people will look at that list, see patterns, and guess that it's the streaky coin. You can even see your power forward's eight for ten in there, if you start counting from the first "tails."
But guess what -- there is no streak here. These are typical results of a normal 50/50 coin, and if you're seeing anything other than random chance, they're projections of your own mind.
Which is a normal human tendency: to project order where there is none. (This also allows parents to convince themselves that their kids' rooms are "pretty tidy.")
Which means that even a poor-shooting power forward will make several shots in a row once in a while, even if he never actually has nights when he is more likely than normal to make shots.
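To see how readily pure chance produces streaks, here's a quick simulation -- an illustrative sketch, not part of any study, with an arbitrary flip count -- of a fair coin with no memory whatsoever:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def night():
    """One 'night': 100 flips of a truly fair 50/50 coin."""
    return [random.random() < 0.5 for _ in range(100)]

# Average the longest streak over many simulated nights.
runs = [longest_run(night()) for _ in range(1000)]
avg = sum(runs) / len(runs)
print(f"average longest streak in 100 fair flips: {avg:.1f}")
```

A memoryless coin flipped 100 times typically produces a streak of six or seven identical results somewhere in the sequence -- which is exactly the kind of run our eyes read as "hot."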
In that timeout, as you're coaching, you'd probably like to know what's going on. Is he, in fact, hot, or is he like that coin up there -- once in a blue moon he'll hit a few in a row?
Because if there is such a thing as "being hot," having nights when your muscles, form, and vision are freakishly in tune, then you have yourself an 80% shooter who needs the ball. But if there are just random short-term variations, then you could be entrusting the game to a low-skill player who has never been that good at shooting.
I hadn't slept much, hadn't eaten much breakfast, and was starving by the time lunch rolled around at the MIT Sloan Sports Analytics Conference a few weeks ago. We scarfed boxed food through a lunchtime presentation, delivered over the crinkling of chip bags by John Huizinga, the celebrated University of Chicago economics professor who is also Yao Ming's agent.
I sat in the back, next to John Hollinger. I was eager to learn about the hot hand, but mainly concerned with eating the two box lunches I had secured, while maintaining the appearance of only eating one.
That's when Huizinga put the power in PowerPoint. He explained the work that he and researcher Charles (Sandy) Weil had done (you can see their slides and more here): some serious digging into the hot hand issue. One of the first steps was to build a vast database sorting through every single play of four years of the NBA. They did this so carefully that Sandy found several mistakes in the data, enough that about 1.5% of the games had to be tossed because of oddities like a missed shot, followed by a defensive rebound, followed by another shot by the team that took the first shot. Was that a missed turnover, or a rebound that should have been offensive?
The hot hand has been studied a zillion times. This study was different in various ways -- by being smarter in approach, mainly, but also by being bigger: lots of players, each of whom shoots a lot. They ended up with 49 players who had each taken at least 1,400 shots in any of the four seasons beginning with 2002-2003. The list of names includes LeBron James, Kobe Bryant, Paul Pierce, Baron Davis, Allen Iverson, Stephon Marbury, Gilbert Arenas, Tracy McGrady, Dwyane Wade ... and on the face of it, it's a no-brainer that if anyone in the NBA has been hot, these players are it.
The seasons in question gave them more than 900,000 shots to mess with -- hundreds of times more than the Gilovich, Vallone, and Tversky paper that had defined the field since its publication in 1985. Of those, more than 75,000 were from the 49 key players.
Slide after slide was loaded down with dense and fascinating data (some of which is described here). At first I had been hurrying through my lunch, so as to make it less clear that I had two. But then I started hurrying even more, so that I could hear perfectly without the chewing. I didn't want to miss anything.
When they got to the key point, the findings were powerful. Sandy Weil summarizes it in an e-mail: "What we found is that, contrary to the existence of the hot hand, the 49 prolific shooters in our sample are less likely to make a shot after a made basket than after a miss."
You get that? If your guy has just made a shot, it slightly decreases the chance that his next shot will go in. (The part of you that is still coaching in that timeout ... what are you going to do with this news?)
"We found this effect stronger if the player made a jump shot on the previous shot than if they had made a non-jumper (mostly layups and dunks). The average of the players in our study shoots about 46.7% on a shot after missing his previous shot. But if he made a non-jumper, he'll shoot around 45%. And after a made jump shot, he'll shoot around 43.3%," explains Weil.
So what that means is that if you're looking for the guy most likely to make the next shot -- what any coach would want in a late-game timeout -- this study suggests you're no better off handing the ball to the guy who hit his last shot.
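The comparison Huizinga and Weil describe boils down to a simple conditional tally. Here's a minimal sketch of that calculation using an invented make/miss sequence (the real study, of course, worked from hundreds of thousands of logged shots):

```python
# An invented make/miss sequence (1 = make, 0 = miss), in game order.
# Purely illustrative -- the study used real play-by-play logs.
shots = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1]

# Split every shot by what happened on the shot before it.
after_make = [cur for prev, cur in zip(shots, shots[1:]) if prev == 1]
after_miss = [cur for prev, cur in zip(shots, shots[1:]) if prev == 0]

pct_after_make = sum(after_make) / len(after_make)
pct_after_miss = sum(after_miss) / len(after_miss)

print(f"FG% after a make: {pct_after_make:.1%}")  # 33.3% for this sequence
print(f"FG% after a miss: {pct_after_miss:.1%}")  # 60.0% for this sequence
```

The study's finding was, in effect, that across 49 prolific shooters the first number comes out lower than the second.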
As I took my last swallow of lunch, I turned to Hollinger -- who internalizes numbers like few on the planet -- and asked if this debate was closed. Had they really killed the hot hand?
"It's pretty convincing," he says. Hollinger has since written: "Obviously, it's extremely difficult to offer 100 percent proof that something doesn't exist, but this presentation comes about as close as you can get. In other words, the burden of proof clearly has shifted. Until we're offered some kind of evidence as to how or why the Hot Hand might exist, the default, conventional-wisdom position now should be that it doesn't ... and that continued belief in its existence is causing players to make suboptimal offensive decisions."
Players Beg to Differ
This is all of basketball history we're talking about here. This is something every basketball player has felt in his own bones. Can we really kill it that fast?
There needs to be some kind of explanation. Just for fun, let's assume the idea of the hot hand is real, and sometimes some shooters really do have streaks beyond what random chance would dictate. Could other factors be producing these results?
- Maybe after the Kobe Bryants of the world make a shot or two, a new tougher defense comes into play.
- Maybe after the Kobe Bryants of the world make a shot or two, their teammates stand around and watch the show, leaving Bryant to fly solo against a double-team or worse.
- Maybe after the Kobe Bryants of the world start to think they are hot, they start getting a really broad definition of what's a good shot.
- Maybe there's something else we're not thinking of.
Working from play-by-play data (like this), Huizinga and Weil had no rigorous or involved way to judge the first two of these (although they took a crack at it), but they did hunt around in pursuit of evidence of the third point, and found quite a bit.
Weil explains: "In looking further, we found that their shot selection skews more towards taking a jump shot after a make than after a miss. Players take about 77% jumpers after a missed shot but they take about 85% jumpers after a made jump shot. And, even if we just look at, say, their performance on 2-point jump shots, they shoot worse on the shot following a made jump shot than if they had missed the previous shot."
They also took a crack at ruling out the role of improved defense: "We wanted to see if we could say if the defense overplays the player, forcing him into harder shots, or if the offense calls his number more. We looked at this several ways but the most compelling results came from our looking at two different measures of how soon the player shoots. We looked at how many seconds the player's team possesses the ball between his shots. And we looked at the probability that the player takes his team's next shot after his own miss or make."
Their idea was that if the defense gets tougher on a certain player, then he wouldn't take his next shot as soon, and he would be less likely to shoot at all. I don't know if I buy that tough defense inspires slow shots -- when the double team comes early in the clock, it forces a decision, and that decision may be to shoot. (Against single coverage, Kobe Bryant can dribble all day. Against two players, just holding on to the ball is sometimes a concern -- making that tough shot over the double team might be wiser than it appears. Part of its value is that it's not a turnover.)
Weil says they found evidence to support the idea that players who have just hit a shot tend to have shot selection issues on their next shot: "The player shoots, on average, 56 seconds later after a missed jump shot but shoots 47 seconds later after a made jump shot. They shoot the next shot about 26% of the time after a missed jump shot but they shoot 34% of the time after a made jump shot."
Enter Bill James and Dean Oliver
As Hollinger pointed out, it is very tough to prove something does not exist -- looking really hard for something and not finding it is not the same thing as proving it never existed in the first place.
So, did Huizinga and Weil not find evidence of a hot hand, or did they find there is no such thing?
As the presentation ended, and I was on my way to recycle a couple of lunch boxes, Celtics' stat guy Mike Zarren walked by muttering about Bill James' "Fog" article. (For some reason, that last sentence sounds like a made up transition, but I swear it really happened.) I asked Zarren about it later and he admitted this is one of his favorite papers in all of statistics. I read it, and the relevant passage would seem to be James' point about a study purporting to disprove the existence of clutch hitters.
We ran astray because we have been assuming that random data is proof of nothingness, when in reality random data proves nothing. In essence, starting with Dick Cramer's article, the argument has been: "I did an analysis which should have identified clutch hitters, if clutch hitting exists. I got random data; therefore, clutch hitters don't exist."
Cramer was using random data as proof of nothingness -- and I did the same, many times, and many other people also have done the same. But I'm saying now that's not right; random data proves nothing -- and it cannot be used as proof of nothingness.
Why? Because whenever you do a study, if your study completely fails, you will get random data. Therefore, when you get random data, all you may conclude is that your study has failed. Cramer's study may have failed to identify clutch hitters because clutch hitters don't exist -- as he concluded -- or it may have failed to identify clutch hitters because the method doesn't work -- as I now believe. We don't know. All we can say is that the study has failed.
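James's point can be made concrete with a quick simulation -- an illustrative sketch with invented numbers, not a model of the actual study. Give a simulated shooter a genuinely real hot hand (a small boost in make probability after every make), then run a crude streakiness test on one small sample of his shots. The test comes back looking like a coin flip, even though the effect is really there:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def simulate(n_shots, base=0.47, boost=0.02):
    """A shooter with a REAL (but small) hot hand: +boost after a make."""
    shots, p = [], base
    for _ in range(n_shots):
        made = random.random() < p
        shots.append(made)
        p = base + boost if made else base
    return shots

def looks_streaky(shots):
    """Crude test: is FG% after a make higher than FG% after a miss?"""
    after_make = [cur for prev, cur in zip(shots, shots[1:]) if prev]
    after_miss = [cur for prev, cur in zip(shots, shots[1:]) if not prev]
    if not after_make or not after_miss:
        return False
    return sum(after_make) / len(after_make) > sum(after_miss) / len(after_miss)

# Test 2,000 small samples (80 shots apiece) for streakiness.
trials = 2000
hits = sum(looks_streaky(simulate(80)) for _ in range(trials))
rate = hits / trials
print(f"real hot hand detected in {rate:.0%} of samples")
```

With a small sample, the test detects the truly-existing effect only a little more often than chance would: the signal is lost in the fog. That is the sense in which a null result is a statement about the study's power, not about reality.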
Sandy Weil knows all about the Fog article, and agrees that the thing they went looking for -- evidence of the hot hand -- could still be lost in the statistical fog. But what he's saying is that rather than not finding anything, they found strong evidence of something else: a player taking worse shots when he thinks he's hot.
"We're not saying people don't get hot," he explains. "But we're saying that if a player gets hot, it's not enough to care about, at least not compared to the effect of his thinking he's hot and performing less well because of that belief."
David Thorpe trains basketball players, and one of the things he cautions them against is what he calls "hunting shots." By that he means getting into a mode where they feel they must be taking lots of shots.
When I saw the list of people Huizinga and Weil studied, I thought to myself: These guys basically hunt shots for a living. The researchers excluded anyone who didn't shoot the most on their team, or shot fewer than 1,400 shots in a season. These players are expected, and paid, to shoot against any defense, and they see a lot of great defenses.
Hot hand issues aside, this study is entirely convincing to me that high volume shooters tend to hurt their teams by being too aggressive in seeking shots, especially after making a few. (One of the findings of the paper was that if every player on a team shot more aggressively after a make like those in the study, it would cost a typical team 4.5 wins per season.) Shooters beware: Some of your worst shots apparently come when you think you're hot, and your "heat checks" aren't helping your team.
This study convinced me that a lot of what seems like hot shooting is really just hits and misses falling into groups randomly.
Weil points out that, of the 49 shooters studied, you'd expect about 24 of them, by chance alone, to come out streakier than a purely random pattern of makes and misses at their shooting percentages. But they found just seven on the streaky side of random -- suggesting that genuinely streaky shooting is far rarer than our tendency to see streaks would have us believe.
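Weil's expectation is itself just a coin-flip calculation. A quick sketch of the arithmetic, assuming for illustration that each shooter independently has a 50/50 chance of landing on the streaky side of random:

```python
from math import comb

n = 49            # shooters in the study
p = 0.5           # chance any one lands on the "streaky" side of random
expected = n * p  # about 24.5 streaky shooters expected by luck alone

# Probability of seeing 7 or fewer streaky shooters if only chance is at work:
prob = sum(comb(n, k) for k in range(8)) * p**n
print(f"expected streaky shooters: {expected}")
print(f"P(7 or fewer by chance alone): {prob:.1e}")
```

Under that (simplified) assumption, seven or fewer streaky shooters out of 49 is a roughly one-in-five-million outcome -- the observed count isn't just below expectation, it's dramatically below it.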
But did this study stop me believing in the hot hand entirely? I can tell you that the other night I played in a game with a teammate who seemed to be an OK shooter. In the last game of the night, though, he nailed a couple of 3s. And it made a big impression on me: I immediately started setting picks for him, passing to him, and scheming ways to get him the ball on his preferred wing. I'm not even sure he was the best shooter on our team, but he was hot, and in some way I couldn't help but believe that mattered. And he did go on to hit something like four of five 3-pointers to close the game.
Dean Oliver points out in his book "Basketball on Paper" that the last major paper to refute the existence of the hot hand included research with Cornell students. They were asked to gamble on whether or not their next shot would go in. If they could feel when they were hot, they'd win a lot of those bets. As a group, they did not do well at all.
But as Oliver points out, four of them did demonstrate a significant ability to predict their shooting. And if there were no such thing as a hot hand, how could that be possible?
So, remember that coach in the timeout -- with a bad shooter having a hot night, and a good shooter having a cold night? What's the right call?
I asked Sandy Weil if he would really give the ball to the guy who has made just two of ten shots: "Your question presumes that we don't know why these players have excelled or struggled tonight. Maybe the 50% shooter has the flu today. Perhaps we know that the opponent has been double-teaming off of the 40% shooter to force bad shots by our 50% shooter. We may know that our 40% shooter has consistently beaten his defender off the dribble tonight but that he can't do that against most other teams' defenders. To answer your question: I think that I'd want to draw up a play to get us the best shot possible. And, unless I understand why these two players are performing the way they are, I'd prefer the 50% shooter as the primary option."
In other words, counting on the hot hand was always dangerous. Actual basketball strategy is far superior. That was true before this study, it's still true after, and that's something we really know.