Last night I was trying to write about the difficulty of problem definition in research, and in particular how frequently I see research proposal drafts that take on a big question by combining what are really distinct research projects. The example I was considering is one I often see in people researching social problems, when they bring together the three basic questions about addressing a social problem: what is the cause? what is the impact? what can be done about it? Combining these makes perfect sense from the perspective of someone who wants to do something about the problem, because the issues are intertwined: understanding causes can help understand impacts, and vice versa, and both can inform possible courses of action. But for a researcher, they don't combine well. I'm not going to talk about that, though. I'm going to talk about a similar issue in sports.
I really enjoy reading sports analysis. As a little kid, I loved baseball cards and books about sports. When I discovered Bill James (in 1988, I think, the year he published his last Baseball Abstract), I really fell in love with reading sports analysis, especially discussions of which players are better. I particularly enjoy the way James thinks about things: he's very careful to separate out distinct issues. I don't always agree with him, but I appreciate the care. These days I follow baseball less, and football and basketball more. I enjoy reading Bill Barnwell, Zach Lowe, Neil Paine, and Chase Stuart, all well-known writers with analytical approaches I appreciate. I don't read a huge amount of sports writing, but I do read it pretty regularly: several sports-related articles a week, more when I'm procrastinating. I particularly enjoy reading rankings of players. It doesn't much matter to me who is ranked where; I'm interested in the ways the writers justify the rankings. In such rankings, I often see a lot of slipping between distinct but related research concerns.
In his Historical Abstract (the second edition, I think, but I'm not checking sources), Bill James talks about how player rankings (the search for the GOAT) have to find some way to negotiate the two distinct concerns of peak performance vs. career totals. Gale Sayers (4,956 rushing yards) and Terrell Davis (7,607), for example, are two players whose very high peak performance came in injury-shortened careers. How do we compare them to Edgerrin James, whose 12,246 rushing yards almost match the combined total of Sayers and Davis (12,563)? (Sayers, of course, was also a great returner, but I don't want to get distracted evaluating Sayers, Davis, or E. James.) Anyway, peak performance and career performance are distinct questions that often get combined, because both are important concerns in trying to decide who the "greatest" was.
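For what it's worth, the comparison above is easy to verify; a trivial check in Python, using only the career yardage totals already cited:

```python
# Career rushing yards as cited above
sayers = 4956    # Gale Sayers
davis = 7607     # Terrell Davis
e_james = 12246  # Edgerrin James

combined = sayers + davis
print(combined)            # 12563: Sayers and Davis combined
print(combined - e_james)  # 317: how far E. James falls short of the pair together
```

A gap of a few hundred yards over a whole career, against two peak-heavy careers combined, is exactly the kind of number the peak vs. career debate turns on.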
A related conflation in player evaluation is the distinction between potential and performance, which I don't see explicitly discussed as often as the peak vs. career debate. (Admittedly, I'm not out scouring the web for sports.) Performance (what actually happened on the field) is not the same as potential (inherent talent, skill, or ability). The two are related, of course, but they are not identical. Sometimes I think analytical and statistical approaches go too far in discounting actual performance to valorize potential, especially with small samples, but I'll leave that discussion for another post.
Sometimes potential is what is of interest: in trying to predict the upcoming season, we want to know the underlying ability. In trying to evaluate who had the best season last year, performance is of interest, not potential. In trying to evaluate "the best," however, there is no clear guidance as to whether performance or potential matters more. Who is the best running back in the league right now? There are different answers for who has had the best career (Adrian Peterson), who had the best year last year (Ezekiel Elliott, at least in terms of rushing yards), and who will have the best season this year (probably not Peterson, possibly Elliott, but not if he gets suspended for six games). From the point of view of asking a good research question, it helps to separate out these different issues.
Performance is one of the best indications of potential, so attempts to evaluate potential rely on actual performance. But potential is not perfectly reflected in performance, because any number of other factors shape performance as well: actual outcomes on the field depend on all the players involved, not to mention the refs and umps, the crowd, and environmental conditions.
Being clear about which of the related issues we want to research allows us to actually do research. If we’re not clear, we just slip from one inconclusive argument to another.
Sometimes—in evaluating the GOAT, for example—you might want to try to find a balance between performance and potential. I would want to, at least, because I think that the GOAT should have ability that manifested in different ways. Other times—in evaluating the Hall of Fame, in my opinion—the question of performance seems paramount: the fact that someone did or did not do well matters, even if it is not an accurate reflection of potential.
Your purposes as a researcher (or sports evaluator) affect how you weigh the two disparate dimensions, but recognizing that they are separate is important in keeping you from slipping into evaluations based on shifting criteria.