Friday, October 20, 2017

New review of my book (2)

Another new review of my book (Getting the Best of Your Dissertation) was posted today (October 20), and it, too, is glowing:

A holistic approach to dissertation guidance  
I found this book at a time when I was feeling so anxious about writing my dissertation that I would sit down to write only to immediately stand up again and walk away. I have read and referred to other books on graduate school and writing, but found this one particularly useful because of its practical advice and attention to the psychological and emotional work of writing a dissertation. Since I had already gone through the planning and research phases of my dissertation, I got the most out of the sections of the book that addressed living with dissertation work and writing. Chapter 3 began with a simple but powerful reminder that the dissertation is meant to support my life and goals, and that I should not assume that it is acceptable (or wise!) to sacrifice my life for the dissertation. The advice in these chapters helped me to see the dissertation as a means to receiving a degree, rather than a monumental test of my overall intelligence and worth as a person. I also found the advice on writing practical and useful - I felt like the author was anticipating many of the excuses or mental traps I was falling into ("I just need to do a bit more reading" is an obvious one, but there were many), and helping me to avoid them or to move past them quickly. Overall, I highly recommend this book to anyone struggling with their dissertation or daunted by the prospect of beginning your research. Not only will it give you practical tools for finishing the project, it will teach you to be kind to yourself in the process.

For me, there's a special added bonus in that I don't know who posted this review. Unlike the other new review, which was posted by someone with whom I've worked, this one is based purely on the book itself, uninfluenced by any personal connection.

New review of my book (1)

Recently, two people have posted new reviews of my book Getting the Best of Your Dissertation.
The first (posted on October 9) was posted by a former client of mine, so definitely biased, but also glowing:

From Dissertation Nightmare to Dissertation Success with Dissertation Dave - The Best Dissertation Coach in the World 
One of my favorite sections in this book is 7.2 Managing People, Especially Your Professors. I started working with Dave after making essentially zero progress on my dissertation after more than a year. I was doing a literature review and reading a lot of stuff, but not really making measurable progress. After working with Dave, I started to race through writing my dissertation. He is not someone who added dissertation students to his other schedule of activities, he is a full-time dissertation coach and dissertation expert. With his Ph.D. from Berkeley and his work with hundreds of students, no matter what dissertation disaster you are facing, I'm sure he can help you with this book and with coaching. Start with this book, but call him, because if you are not making progress or your committee is not helping you or worse against you, you can benefit from his dissertation expertise and experiences. I was already an expert in my discipline, but I was not an expert at navigating the significant politics and protocols that accompany the dissertation process - that's why I needed Dave! That's why you might need this book and Dave too. Dissertation Dave was so effective in eliciting dissertation writing from me, that my husband who was also working on his dissertation started working with him too. Dissertation coaching with Dave is a mega catalyst for dissertation completion. My husband also finished his dissertation, thanks to working with Dissertation Dave. I do not want to go into all of my dissertation headaches on Amazon, but I am telling you that I had at least 50 "I can't believe this happened, I don't know if I can make it, God are you out there" moments. Thank God I finally found Dissertation Dave.

Monday, October 16, 2017

A Bad Letter On Time is Better Than A Good Letter Late

“A bad letter on time is better than a good letter late.”  This is an idea I have long used as a quotation from the letters of Laurence Sterne, the 18th-century English author.  It is, I find, a misquote of a letter Sterne wrote on August 3, 1760, which includes the following lines:
“thinking that a bad letter in season— to be better than a good one, out of it — this scrawl is the consequence, which, if you will burn the moment you get it—I promise to send you a fine set essay.”

The principle is one that I have used so many times that I am quite surprised to find I have mentioned it in only one previous blog post, and never made it the subject of a post itself.

I was thinking of this quotation today for a couple of reasons, but then trying to find a subject for a blog post added another: I didn’t have a clear subject to discuss that I felt capable of discussing in a relatively constrained format.  I’m thinking a lot about the intersection of knowledge and politics, but there are a lot of separate threads that I’m having trouble untangling to put into any form that suits for a short piece.

I was thinking about the quotation with respect to a client who is sure he can’t write. My response is that the only way to resolve that is to practice writing—to be willing to produce something—anything—that can be critiqued. Good writers practice. I don’t think there’s any way around practicing.  I was also thinking how being willing to write bad drafts allows the practice that is crucial for generating good drafts.  The more you practice, the better your writing gets. Ironically, the willingness to be wrong permits the practice that drives growth, learning, and the development of improved writing skills.

I was also thinking about it in terms of another client who has a number of different places to submit material, and I think a bad letter in season is better. If you have something to show to other people, they have an opportunity to appreciate it and learn from it, and/or to give you feedback so that you learn from the process. Sharing something bad creates the possibility of working with other people. By contrast, insisting on writing a good letter means missing opportunities—especially if your standard for a good letter is so high that you struggle to reach it.

In one episode of the Great British Baking Show, one of the participants ended up throwing his cake into the trash. As a result, he was sent home from the show. Unlike the others, he had nothing to show, and that was the deciding factor. Had he shown any cake at all, he might well have survived for another week. For him, a bad cake in season would definitely have been superior.

Monday, October 9, 2017

Whose Responsibility is Communication?

My two previous posts were concerned with getting feedback and dealing with feedback, and this is following up on those ideas. I’m still thinking from the perspective of the writer concerned with the response, and particularly thinking about dealing with difficult feedback—complaints about the quality of work. I’m also thinking about a conversation I had with a friend about the purpose of music and of performing music.  The question in conversation was about the relationship between [author/performer/presenter] and audience, and where responsibility lies.

What burden lies on the performer to reach the audience? And is there any burden on the audience? In the previous post, I was writing about some comments that were difficult, and a lot of my response lies in my sense that the comments don’t reflect a sufficient attempt to understand the writer’s point of view.  But that idea requires believing that the reader has some responsibility in his or her approach to the work.

Different relationships between author/performer and audience bear different burdens of responsibility.  A professor definitely has a different responsibility to the author of a dissertation than a bar patron does to a no-cover-charge musician. But the question of where responsibility lies is still one to consider, especially in the context of receiving feedback.

The bar patron hearing a no-cover musician bears little or no responsibility to the performer.  Certainly there is some normal standard of decorum—the bar patron can’t start yelling and trying to drown out the musician—but the bar patron certainly has the right to ignore the musician and to laugh out loud in conversation with a friend, even if that does interfere with the musician’s performance.  If the audience has to pay for admission, then the expectations shift: paying to listen to music creates a greater responsibility for members of the audience. Of course, asking patrons to pay also means that they have a greater interest in fulfilling that general responsibility of listening. As anyone who has attended an expensive arena concert knows, there always seem to be plenty of ticket-holders whose primary interest is the social event, not the concert itself, and who talk through the music. When people have paid for the music, that behavior is less polite than identical behavior in a no-cover bar—it’s a matter of degree.

This was the conversation that I was having with my friend, who was talking about the difference in the behavior of audiences who paid vs. audiences at a free event.  That focuses on audience behavior.  The flip side is to wonder about the desires and purposes of the author or the performer. How the author/performer views the audience’s responses depends on what the author wants from the audience.

For my friend, the heart of the matter was in the music: the musician, he believed, should not compromise the integrity of the music, and it was important to have people who were coming to respect the music.  For me, the audience matters, too: if the music is really only about the music, then what’s the need for an audience? Once you bring the audience into the picture, the music in itself is not the only concern.  

To what extent is it a sell-out to shape the performance to meet the audience?

And to what extent is purity lost, if it reaches no audience?

Writers need audiences, and that means convincing audiences that the reading is worth the effort. If you can write whatever you want and then hope that someone will pick it up, that’s great; that attitude will serve you well if, like many writers, you have to submit your work to many publishers before you find one that will take it. On the other hand, if your audience is fixed—if you know that it’s a certain person—is it a sell-out to change what you do so that your audience will accept the work?

For writing more than for music, there is an underlying story or idea that could be transmitted in many different ways. To me, it’s that story that matters, and the form in which it is delivered is not fixed by the underlying purpose.

The Tao Te Ching opens by saying that the Tao that can be spoken (written) is not the absolute Tao.  But the book still continues to tell of the Tao. I think that writers need to think in those terms: the story that you tell is not the absolute version of the story, but you need to tell a story anyway. Research (and therefore writing about research) delves into realms of uncertainty—but that can’t stop scholars, or the entire scholarly community would collapse. Research writing does its best to assert confidence, while still acknowledging the myriad limitations that any work of research faces.

Wittgenstein concluded his Tractatus Logico-Philosophicus with the statement that if one cannot speak accurately, one should remain silent (I’m paraphrasing slightly), and he never published another significant work in his lifetime—his Philosophical Investigations was published posthumously from his notebooks.  Modeling your work as a scholar on the pattern of Wittgenstein—refusing to say anything unless it’s exactly right and certain—is not a path to scholarly success.

If you are a writer, it’s useful to think about the gap between the ideas that you espouse and want to share and the many different ways in which those ideas can be expressed so as to reach different audiences. Reaching the audience is the writer’s responsibility. Although the reader may bear some burden of responsibility, it’s usually beneficial to simply accept the burden of reaching the audience: what does my reader want?  

(As a practical aside, understanding how to identify and write for an audience is extremely useful in getting published, because publishers want to sell books, and that means they want to know who you think your book will sell to.)

Monday, October 2, 2017

On receiving difficult feedback

In my last post, I was writing about how getting feedback is good, even when it’s bad feedback.  And I still believe that, even though I’ve just spent the last 30 minutes fuming over the quality of the feedback from the dissertation chair of the pseudonymous RSP (really smart person). 

To me, much of it seems petty and unnecessary. It angers me to see, for example, general statements that are obvious—beyond obvious—taken to task. But I look again, and I wonder, is it really obvious?

RSP and I share some fundamental views about the very nature of philosophy, especially with respect to the indeterminacy/indefinite nature of structures of knowledge (that’s not necessarily how RSP would phrase it, though), which leads to my accepting ideas that others are not so ready to accept. And that’s the issue: I’m not the person that RSP has to satisfy, and getting angry at the chair doesn’t actually help me find a route to satisfy the chair.

It’s a challenge to work through feedback like that. It’s the death of a thousand pinpricks. I read one comment, and I’m slightly annoyed. I read two, I’m a little more annoyed. I read four or ten or a dozen, and I’m fuming. It’s not even my work and I’m still more than annoyed at the feedback. There are comments that I agree with and comments that are complimentary. But those are respites in a sea of brambles, picking at my skin.

Is this bad feedback?  That depends on the standards by which I judge it. By the standards that come most easily—the emotional response shaped by my immediate intellectual judgements about the feedback (e.g., being annoyed that the chair asks for a citation on a claim that I don’t think ought to be cited)—yes, it’s bad feedback.  Bad in two ways: 1. it doesn’t give sufficient guidance on how to fix the problem (e.g., “I don’t like the way you do this” vs. “you need to take steps X, Y, and Z to resolve this problem”), and 2. it is emotionally loaded at times (e.g., not only saying “this is a problem” but also “I don’t know why you refuse to fix this problem”).  The thing about those judgements is that they’re entirely based on my own perspective. What about the professor’s perspective?

I don’t know the professor’s perspective, of course, so I’m left to guess. And given that there is not enough clear guidance on how to fix it to be confident, my guess is a little bit of a shot in the dark. But it’s the best I can do…

In this situation, it’s interesting to try to imagine what the person who gave feedback is thinking. What is it that the chair needs or wants that is not being delivered? Is the resistance a matter of resistance to the general project? Or is it a resistance to a specific absence?  These questions are speculative, of course, but exploring them can be useful at least in defusing some of the emotion. Is the chair unable to understand some points? Or unwilling? Is the problem that the chair disagrees with something or that the chair thinks something is unclear?

A dissertation writer is obviously a student who is in many ways at the mercy of the dissertation chair. But it still can be useful to think as a teacher: suppose, as a teacher, you have trouble reaching a student. Do you say that the student is too stupid? Or do you try to explain the same ideas from a different angle?
Getting feedback can be difficult to deal with, but to try to think through the eyes of the person who gave the feedback can help at least defuse some of the emotional charge.

Once you’re past the emotional charge (at least for a while): What is the plan to persuade that person of the value of your work? What steps can you take? In this case, and in many others, my next step is to look for the feedback that seems the best: there are dozens of comments in this draft—which ones do I think make good points that I want to address?  It’s with these that I will start, and the rest, I’ll look at later—maybe I’ll figure something out for them by trying to respond to the feedback that asks good questions.

None of this eliminates the emotional sting of a complaint, or the frustration of wading through pages filled with comments, but it does help me step back from the work to ask whether the same ideas could be conveyed in a different form. And what form would be suitable to satisfy the specific individual of significance (the chair)? The written work is not an abstract sharing of some idealized truth, but rather a lesson that teaches your reader the value of the work. If your reader doesn’t get it the first time, how can you do it differently to resolve the difficulties that appeared?

Monday, September 25, 2017

Getting Feedback is Good, Even When It's Bad Feedback

Feedback can be hard to take, but it’s necessary.  Simplistically, if your project is a total stinker, you need to know that. Of course, someone saying your work is a total stinker doesn’t mean that it is. Different things work for different people.
We all are limited in our perspectives: we know what we think, but we don’t know what other people think.  And when we’re trying to produce something that is intended to communicate with other people (if we’re writing or using other communicative media), what other people think is crucial.

Sometimes I think the feedback I would most like to get is someone saying they liked my work, and also echoing back my message in their own words. If someone says “I think you’re saying X,” and “X” is the message I was hoping to share, that’s a successful piece of writing.

If I have nerved myself up to give something to someone, getting no response can be painful in itself, so I’d rather get something terse. It can be frustrating if someone gives very terse comments—good or bad—because the comments may not give guidance on how to move forward.  But that’s a personal frustration: if someone likes or dislikes your work and doesn’t give you any more information than that, it’s still valuable feedback.
If they like it, you can rest on your laurels. Or you can work on things that you want to work on.  You can try to guess the reasons they liked it.  And you can at least feel good that you got positive feedback.
If someone says they don’t like your work, and nothing more, it doesn’t help you figure out how you can get that person to like a new draft, but it does give you some indication of the strength of the work in someone else’s eyes. It’s no good to have someone worrying overmuch about hurting my feelings. If the feedback I get is a sense that they’re unwilling to say what they really feel, I’m only left to imagine the worst, so I’d rather actually get feedback, even if it is “your work sucks.”

Whether your work is awesome or it stinks, having a sense of what other people think of it can help you decide how to proceed.  I’m trying to get a friend of mine to give me some feedback right now, and I want to assure him that telling me that my work sucks is better than him saying he hasn’t looked at it. Even if all he tells me is: “I gave it two minutes, and it sucked so much I didn’t want to deal with it any more.”

There is toxic feedback, of course: if someone writes that your work proves that you’re an imbecile who is a waste of food, air, and water, that’s not good. But such a personal attack hardly shows the maturity of the source of feedback. For the most part, you can always ignore personal attacks inspired by your work—only if they’re coming from someone on whom you depend (a dissertation advisor, for example), should you do anything more with personal attacks than ignore them (I mean, assuming they’re limited to mean responses to your writing, obviously if someone is slandering/libelling you to many, you might want to take action, but that’s not really in the realm of getting feedback on your work.)

Monday, September 18, 2017

Expressive writing and mental state

I regularly tout the benefits of writing and of practicing writing (or at least it has been a common theme in my writing over the years, if not in recent blog posts).  A recent study at Michigan State University found specific benefits associated with expressive writing—writing about feelings and thoughts.

The authors of the study compared two groups of students who performed the same main task (a test) along with a secondary task: either writing about what they did the previous day (not expressive) or writing about their feelings about the upcoming test (expressive). Their basic finding was that those doing the expressive writing were calmer (actually, they described it in terms of brain activation states because they were measuring the students with electroencephalography).  The lead author used an automotive fuel-efficiency metaphor, saying the difference between the brains performing the expressive writing task and those performing the control (non-expressive) task was like the difference between a Prius and a gas guzzler from the 1970s.  The students in the two groups performed the same on the main task (the test), so there was no direct impact on performance on the test itself. I am unsure from what I have read whether the higher-efficiency brain activity induced by the expressive writing task lasted into the main task.  In any event, this is good evidence that there is a real benefit to writing about your own feelings about a task.

For people who are stuck, I have often recommended writing about their feelings about the project—which has sometimes worked. One reason I like having people write about how they feel about a project is that it can help reveal crucial theoretical assumptions. Another reason is that once someone has started writing about their feelings about the project, that can often transition into writing about the project itself. This study suggests that writing about how you feel about a project can help calm you down.

The many who have suggested that writing has therapeutic benefit—and there are many such voices on the self-help shelves—seem to have evidence to back up at least some claim to therapeutic benefit.

Generally speaking, writing is an important practice for people who will need to express ideas in their lives—both professionals and academics.  No matter how difficult writing may seem, it gets easier when you practice, and that allows you to work more efficiently because you communicate with others more efficiently.  This recent study suggests yet another reason to practice writing—or at least expressive writing: it helps improve your mental state.

Monday, September 11, 2017

Colleges and Universities are Good (revisited)

An article in the Washington Post this morning discussed the gap between how people in the U.S. see themselves and how people around the world see the U.S. and its residents. (Trump is Making Americans See the U.S. the Way the Rest of the World Already Did.)

While I think the author is a little careless in her generalizations, I generally agree with her main points that far too many residents of the U.S. are frightfully out of touch with the rest of the world. Certainly the U.S. public educational system does not dedicate great resources to understanding people from around the world.  I would not write a blog post just to agree with her, nor to take her to task for being a little careless in generalizing.  But towards the end of the article, the author makes a statement that just makes me angry for its basic acceptance of the anti-intellectual trend that is polluting public discourse in the U.S. at present:
many other average Americans with dangerously naive ideas about themselves and their country grow up to become teachers, foreign correspondents, presidents. What they did not learn as children will not be cured by what they learn at elite universities, in self-regarding metropolitan centers or in graduate schools that for the most part tell them that the United States is the center of the planet and that they are the smartest on it. 
Do I think there are many Americans (U.S. residents) who have dangerously naive views of themselves and their countries? Absolutely, I do.  But do I agree that such dangerously naive views cannot be cured by universities or graduate schools or metropolitan centers? Absolutely not.  The view that colleges and universities are part of the problem, or at least are no help in dealing with it, is pernicious anti-intellectual propaganda that serves conservative and Anglo-centric perspectives.

Firstly, let’s just stipulate that arrogance or hubris are not good. It’s good to believe in oneself, to feel proud of who and what you are, but it’s not good to be arrogant about it. It’s one thing to believe in oneself, and it’s quite another to believe oneself superior to another. And yet another thing to let that self-regard keep you from learning new things because you think you know better.

Secondly, I’m going to assert that the general idea of American Exceptionalism is either trite or inappropriate arrogance.  If we say that Americans are different from the rest of the world in that they are American and everyone else is not American, it is trite and tautological (the band Camper Van Beethoven sang “If you didn’t live here in America, you’d probably live somewhere else” in the song “Good Guys and Bad Guys”).  If Americans are different in some other way, then that characteristic should be something real that we can identify and define. We could then see if Americans are actually different (and potentially superior) in that way. The “American Exceptionalism” generally posited by the political right in the US is little more than an arrogant “Americans are better because we’re American,” without any clarifying or signifying characteristic that makes Americans better. If American exceptionalism said “Americans are better because they’re richer” (or smarter, or prettier, etc.), then we could discuss whether that was true using empirical evidence. And we could discuss whether being richer/smarter/prettier/etc. really translated to being better in any significant sense (what makes people “better” or “worse”, anyway?). If American Exceptionalism means “Americans make the best widgets,” well, if there is some way of proving that America makes the best widgets, then I’m all for American Exceptionalism. If American exceptionalism just means “we’re better because we’re American,” then that’s unfounded arrogance.  To the extent that American Exceptionalism is tied to the idea of Manifest Destiny (which depends on the idea of the superiority of whites and Christians, and is a version of the “white man’s burden” myth), I reject it utterly.

It is possible to find arrogance everywhere, and maybe you do find it more often in elite universities and in “self-regarding metropolitan centers.” But what I would ask is: where are U.S. residents likely to find out about what people around the world think of the U.S.? You certainly could move to a foreign country, as the author of the article did (though living in a foreign country is no cure for arrogance, as colonial occupiers have demonstrated for centuries). Or, you could go to one of the places in America where you can meet people who aren’t from America.  You don’t have to leave America to meet people from around the world. You can learn from a Turk while living in Istanbul, but you can also learn from a Turk living in Berkeley, California while attending university. (One of the sloppy generalizations in the article is the notion that everyone in the U.S. is oblivious to what people in the rest of the world think. There are lots of people living in the U.S. who immigrated from other lands, or whose parents immigrated from other lands. Such people, by virtue of both personal experience and social connections, have a damn good idea of what people outside the US think of people inside the US. I get that the constraints of the article size limit the attention that an author can give to saying “I want to talk about something common in the US, but certainly not universal,” but the generalization is still sloppy: lots of Americans know what the rest of the world thinks of the US.)

Metropolitan centers are known for diversity of population, and this diversity is reflected in political realities. Who voted for Trump and blindness to the outside world? Not metropolitan centers. Metropolitan centers voted for the person who had served as Secretary of State for Barack Obama, who was widely admired outside the U.S. Metropolitan centers voted for the politician who believed in climate change, like the rest of the world believes in climate change. Metropolitan centers also voted for the politician who supported immigration, which reveals an inherent openness to new peoples with different ideas about the U.S. (An aside: to call the metropolitan centers “self-regarding” is to accuse them of arrogance. It’s an unjustified insult and a silly generalization. Wherever you go, some people will hold arrogant and unjustified pride in their homes. But in most places, there are justified grounds for pride. And in some cases—New York, Washington D.C., Los Angeles, and several other major U.S. cities—a certain self-regard is not out of place. The great cities of the U.S. rival the great cities of the rest of the world. Sure, Istanbul has thousands of years of history, and New York only a few centuries, but New York was a world cultural center rivaled by only a short list of other cities in the history of the world. In the middle of the twentieth century, New York was quite arguably the greatest city in the world. Washington, D.C. wielded military might unrivaled perhaps in history. Los Angeles and Hollywood influenced people around the world.)

Colleges and universities are also good places to meet people from around the world and to learn how they see the world.  If you go to college or university with an unshakeable belief in the inherent superiority of Americans (or white Christian Americans), well, college and university may not change you.  But such views are hardly common on university campuses (and not surprisingly, the GOP and conservative media often complain about the views that are expressed on U.S. university campuses).  University campuses try to harbor diverse views because an underlying premise of research is that diversity of views helps develop debate. Universities almost always have foreign students and often foreign professors.  And again, the voting record clearly demonstrates that colleges and universities hold views that are more interested in understanding the outside world, and more focused on interacting with people in the outside world as equal partners, rather than as inferiors lacking whatever it is that is supposed to make Americans exceptional.  Is American Exceptionalism espoused by many on U.S. campuses? Well, generally professors and students both vote Democratic far more often than Republican, suggesting that the Republican appeal to American Exceptionalism isn’t generating enthusiasm on campuses. It should be noted that researchers—most professors at universities—are almost always working with scholars around the world, and they are trying to understand the ideas of the people with whom they work. Scholars may focus on their scholarship, but they’re not completely cut off from the rest of the world. Colleges often send students abroad in addition to bringing in students from overseas.

The metropolitan centers and colleges/universities voted for the candidate with the less insular views; they voted in favor of more interaction in the world, and less of an idea of “American Exceptionalism.”  Who did vote for the insular candidate? Who voted for American Exceptionalism? Not the metropolitan areas or colleges/universities.

So, Ms. Hansen, if your concern is for throwing off the American-centric views that disturb you, then metropolitan centers and colleges are the most likely places where someone will be cured of those views, short of going and living abroad. Since the rest of the world probably won’t let 300 million U.S. residents come live for a year or a decade, those colleges and universities and metropolitan centers are the best hope for curing Americans of their self-centered views. In the long run, sure, it would be great to change elementary and secondary education in the U.S. for more awareness of the wider world. But at present, colleges and universities and metropolitan areas are the best hope for the cure you seek for American blindness. Colleges and universities are good.

Update/Addendum: Another place you can find out what people outside the U.S. think of people inside the U.S. is on the web, even on U.S.-based publications, as with this article written by a Mexican. Truth is, it's easy to learn what people think if you want to learn. But you have to go to places where there are different voices to be heard--like metropolitan centers and institutions of higher learning.

Monday, September 4, 2017

Issues in defining research questions: separating distinct but related questions. Sports example: performance vs. potential

Last night I was trying to write about the difficulty of problem definition in research, and how, in particular, I frequently see research proposal drafts that look at a big question by combining what are really distinct research projects. The example I was considering was one that I often see in people researching social problems, when people bring together the three basic questions about addressing a social problem: what is the cause? what is the impact? what can be done about it?  This combination of issues makes perfect sense from the perspective of someone who wants to do something about the problem, because they are not separate issues: understanding causes can help understand impacts, and vice versa, and both can inform possible courses of action to address the problem.  But for a researcher, they don’t combine well. I’m not going to talk about that, though. I’m going to talk about a similar issue in sports.

I really enjoy reading sports analysis. As a little kid, I loved baseball cards and books about sports. When I discovered Bill James (1988, I think, the year he published his last Baseball Abstract) I really fell in love with reading sports analysis, and especially discussions of which players are better. I particularly enjoy the way that James thinks about stuff: he’s very careful to separate out distinct issues. I don’t always agree, but…  I follow baseball less now, and football and basketball more. I enjoy reading Bill Barnwell, Zach Lowe, Neil Paine, and Chase Stuart, who are all well known writers with analytical approaches that I appreciate. I don’t read all that much sports, but I do read sports pretty regularly—several sports related articles a week. When I’m procrastinating, I read sports on the web. I particularly enjoy reading rankings of players.  It doesn’t much matter to me who is ranked where; I’m interested in the ways that they justify the rankings. In such rankings, I often see a lot of slipping between distinct but related research concerns.
In his Historical Abstract (2nd edition, I think, but I’m not checking sources), Bill James talks about how player rankings—the search for the GOAT—have to find some way to negotiate the two distinct concerns of peak performance vs. career totals. Gale Sayers (4,956 rushing yards) and Terrell Davis (7,607), for example, are two players who had very high peak performance with injury-shortened careers. How do we compare them to Edgerrin James, whose 12,246 rushing yards almost match the total combined yardage of Sayers and Davis (12,563)? (Sayers, of course, was a great returner, but I don’t want to get distracted evaluating Sayers or Davis or E. James.) Anyway, the two questions of peak performance and career performance are distinct issues that often get combined, because they are both important concerns in trying to decide who the “greatest” was.
A related conflation of concerns in player evaluation is the distinction between potential and performance, which I don’t see explicitly discussed as often as the peak vs. career debate. (Admittedly, I’m not out scouring the web for sports.) Performance (i.e., what actually happened on the field) is not the same as potential (i.e., inherent talent, skill, or ability).  Performance and potential are related, of course, but they are not identical (and sometimes I think that analytical and statistical approaches go too far in discounting actual performance to valorize potential, especially in cases of small samples; I’ll leave that discussion for another post).
Sometimes potential is what is of interest: in trying to predict the upcoming season, we want to know what the underlying ability is. In trying to evaluate who had the best season last year, performance is of interest, not potential. In trying to evaluate “the best,” however, there is no clear guidance as to whether performance is more important, or potential. Who is the best running back in the league right now? There are different answers for who had the best career (Adrian Peterson), who had the best year last year (Ezekiel Elliott, at least in terms of rushing yards), and who will have the best season this year (probably not Peterson, possibly Elliott, but not if he gets suspended for six games). From the point of view of asking a good research question, it helps to separate out the different issues.
Performance is one of the best indications of potential, so attempts to evaluate potential use actual performance. Potential is not perfectly related to performance, however, because any number of other factors influence performance in addition to potential. Actual outcome on the field is shaped by all the players, not to mention the refs/umps and the crowd and environmental conditions.

Being clear about which of the related issues we want to research allows us to actually do research. If we’re not clear, we just slip from one inconclusive argument to another.
Sometimes—in evaluating the GOAT, for example—you might want to try to find a balance between performance and potential. I would want to, at least, because I think that the GOAT should have ability that manifested in different ways. Other times—in evaluating the Hall of Fame, in my opinion—the question of performance seems paramount: the fact that someone did or did not do well matters, even if it is not an accurate reflection of potential.

Your purposes as a researcher (sports evaluator) affect how you deal with the two disparate dimensions, but recognizing that the two different dimensions are separate is important in keeping from slipping into evaluations based on shifting criteria.

Monday, August 28, 2017

Finding complexity in analysis: sports (American football) examples

This post is a bit disjointed.  Mostly I'm interested in some sports stuff I was talking about with my nephew, but in the context of this blog, I thought it might be interesting because it shows some of the complexity that comes in analysis of various ideas--a complexity that gets a lot of researchers stuck.  Unfortunately, this blog post doesn't offer any real answers for researchers. 
My nephew and I were discussing football. He said, in reference to quarterback play in particular, that talent basically comes in two tiers—those who have it and those who don’t. There’s a part of me that has sympathy with this argument: I do think that there are some guys who “get it” and others who don’t—guys who make the right plays at the right time and guys who don’t.
There’s another part of me that thinks it’s a lot more complex.
I am wary, for example, of psychological effects, like the influence of one strikingly important play. Tony Romo, for example, will never live down the botched field-goal hold against Seattle. Is that one play representative of Romo’s ability, or does the psychological impact of this striking event lead to over-emphasizing the one play in the larger evaluation of Romo?
The question of sports evaluation is one that interests me a lot and one that I have thought about discussing in this blog, because it’s a process of research and there are interesting issues that come up. Interesting to me, anyway.  Questions of evaluation and measurement are crucial to many areas of research, and both evaluation and measurement often involve questions of definition that are generally interesting. 
I don’t really agree with my nephew’s assertion, and I’m just going to work through that a little bit, partly as an illustration of general questions of reasoning.

Generally speaking, a lot of scholarship and research grows out of an assertion that is interesting and seems problematic.  There is the claim that there are basically two tiers of QB talent, and we might ask if there are any reasons that an observer would think this even if it’s not the case. 
We might observe two tiers of quarterback play for (at least) two reasons: one is that there actually are two tiers of talent, another is that there is some sort of threshold effect—a level of play over which QBs can be effective, and below which they are not.  Such a performance threshold would divide QBs into those who succeeded and those who didn’t, and would lead to two apparent tiers of talent.
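The threshold idea can be made concrete with a small simulation. This is purely a sketch of the reasoning above, not real football data: the talent distribution, the 16-game seasons, and the noise level are all invented for illustration. Talent here is drawn from a single smooth curve with no tiers built in, yet a threshold on noisy game-by-game performance still sorts most quarterbacks toward the extremes:

```python
import random

random.seed(42)

THRESHOLD = 0.0  # hypothetical level of play needed to "succeed" on a given day

# Talent drawn from a smooth, continuous distribution -- no tiers built in.
talents = [random.gauss(0, 1) for _ in range(1000)]

def success_rate(talent, games=16, noise=0.5):
    """Fraction of noisy 'games' in which talent plus luck clears the threshold."""
    return sum(talent + random.gauss(0, noise) > THRESHOLD for _ in range(games)) / games

rates = [success_rate(t) for t in talents]

# Despite continuous talent, most QBs cluster near 0% or 100% success,
# which an observer could easily read as "two tiers."
extreme = sum(r < 0.25 or r > 0.75 for r in rates) / len(rates)
print(f"fraction who look like they clearly 'have it' or clearly don't: {extreme:.2f}")
```

Under these made-up numbers, well over half the simulated quarterbacks end up looking like clear successes or clear failures, even though the underlying talent is a continuum.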

All of that, however, depends on implicitly thinking about talent as a single unified construct.  But maybe “talent” is complex? This is similar to “intelligence”: we discuss it as a general ability, but maybe it’s actually a complex of different abilities?
When we look at quarterback play, there are actually several distinct dimensions of significance. We might say there is size, strength, speed, and intelligence. We might get a finer set of criteria:
  • arm strength
  • strength to withstand contact
  • running speed
  • running quickness/direction change
  • footwork
  • decision making
Some of these may themselves be complex: arm strength, for example, might be broken down into a range of concerns for throwing the ball: velocity, placement, and touch (the ability to suit velocity and angle to circumstance). Or for decision-making, there are concerns about reading the defense, making adjustments/audibles, and choosing when and where to throw the ball or to run.  Other ideas from sports might also be worthy of consideration: consistency, response to pressure (choker?), field vision.
It makes sense to talk about “talent” as a unified thing, because often that’s all that casual conversation needs.  But when you start to look at things as a researcher, and examine things, often a lot of complexity emerges that muddies the waters of simple ideas like talent.

One thing that muddies the waters in trying to evaluate “talent” is the fact that a lot of talent manifests in uncertain ways. A good free-throw shooter in basketball doesn’t hit 100% of free throws. A good quarterback connects on maybe 70% of his passes.  How we evaluate an athlete often depends on a small sample of performances that might be indicative of the underlying talent.  A quarterback who completes 70% of passes might complete 22 of 25 one night, and 13 of 25 on another.  The actual performance is not a perfect reflection of underlying talent.  This is particularly true of football, where the coordination of a team is of crucial importance.  Tony Romo, mentioned above, may be forever remembered for the botched hold partly because of the highly controversial Dez Bryant incompletion against the Packers: had Bryant held the ball more firmly, or had it been ruled a completion, the Cowboys might well have taken the lead and won, and suddenly Romo would be remembered for a big fourth down throw and a comeback playoff win. Romo made a good pass on that play. It wasn’t the best pass ever, but it was a good pass.
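How likely are those hot and cold nights for a true 70% passer? A quick exact-binomial calculation (the 70%-over-25-attempts setup is the hypothetical from the paragraph above, not a real stat line) shows both extremes are rare but entirely plausible:

```python
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k completions in n attempts for a true p passer.
    return comb(n, k) * p**k * (1 - p)**(n - k)

p, n = 0.70, 25  # a "true" 70% passer over a 25-attempt game

hot = sum(binom_pmf(k, n, p) for k in range(22, n + 1))  # 22 or more of 25
cold = sum(binom_pmf(k, n, p) for k in range(0, 14))     # 13 or fewer of 25

print(f"P(22+ of 25): {hot:.3f}")
print(f"P(13 or fewer of 25): {cold:.3f}")
```

Each extreme comes up a few times per hundred games, so over a career a 70% passer will have both kinds of night, and any single game is weak evidence about the underlying ability.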

The gap between underlying ability and actual results can make it difficult to assess underlying talent. For example, the question of Teddy Bridgewater’s ability to make it as an NFL quarterback may be moot: after his devastating knee injury, Bridgewater may never return to the level he previously attained.  Bridgewater was on the verge of making it as a young quarterback. His arm strength was suspect, but the rest of his ability set seemed more than adequate. If he never gets back to the starting position he once held, we can never know whether he had talent or not: would he have grown enough to excel, or would he have always played at the edge of competence, no matter his development?
Bridgewater is a good example of why I don’t think the two-tiers-of-talent theory holds up, but maybe also why it seems effective: Bridgewater was generally suspect for two reasons: his arm strength, and his more general physical stature/strength. If things went right for Bridgewater, he had more than sufficient talent. But in moments when he was stretched, he had to play at the edge of his ability, and that can cause breakdowns that look glaring, especially when we take into account that football results often depend on only a small number of plays.  Suppose Bridgewater is able to complete 70% of passes below 15 yards downfield, but, due to his ‘poor’ arm strength, he can only hit 50% on passes over 15 yards, where a comparably accurate strong-armed passer who hits 70% of short passes only drops off to 60% on the longer throws. In contexts where the game may ride on one play, that 10-percentage-point gap will probably lead to significantly different won-loss records.  Indeed, that difference might well be the difference between holding a job as an NFL starter and not, but it’s hard to look at two quarterbacks who are in many respects comparable in performance, and say that one guy has NFL talent while Bridgewater doesn’t.
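The arithmetic behind that hypothetical is worth spelling out. The 70%/50% and 70%/60% figures come from the paragraph above; the 80/20 short/deep pass mix is an additional assumption I’m adding for illustration:

```python
# Hypothetical pass mix: 80% of attempts under 15 yards, 20% over.
short_share, deep_share = 0.80, 0.20

weak_arm   = 0.70 * short_share + 0.50 * deep_share  # Bridgewater-like passer
strong_arm = 0.70 * short_share + 0.60 * deep_share  # comparably accurate, stronger arm

print(f"overall completion rate, weak arm:   {weak_arm:.1%}")    # 66.0%
print(f"overall completion rate, strong arm: {strong_arm:.1%}")  # 68.0%

# If one deep throw decides each of 16 games, the small gap compounds:
print(f"deciding deep throws converted per season: {16 * 0.50:.1f} vs {16 * 0.60:.1f}")
```

In the box score, the two passers look almost identical (66% vs. 68% overall), but on the handful of season-deciding deep throws, the gap is ten points, which is exactly why the weaker-armed passer can look like he belongs in a lower “tier” despite near-identical overall performance.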

An example that my nephew and I discussed was Eli Manning. Manning clearly has the ability to make big throws in big games. Manning is also notoriously inconsistent in his play (he is consistent in playing every game—he doesn’t miss starts), and this has worked against him throughout his career, showing up in high interception rates in particular. My nephew suggested that talent and consistency were separate. But I wonder whether consistency is part of talent. If Manning threw a few more interceptions, he would not be able to hold a starting job. If Manning had not won two Super Bowls, he might have lost his job before now.  
The problem in trying to analyze this, again, is that our data is probabilistic: Manning’s actual performance is our guide to guessing his underlying talent, but to what extent is that assessment based on some element of luck—on the receiver making the good catch in the big game and the drop in the unimportant game, rather than the other way around? David Tyree didn’t make a lot of big catches in small games, but he made the helmet catch, and as a result, Eli has one of his Super Bowl MVP trophies (Manning deserved it, but if Tyree doesn’t hold that ball, Manning doesn’t win it).

Anyway, this blog post is really a bunch of rather confused notes, but I wanted to try to sort out some of the different ideas I was thinking while I was texting with my nephew. And it seems to me that it might serve as an example of more general issues for research—in particular the question of how to define terms, and the ways that terms often reveal unanticipated complexity under analysis—but also some of the other ideas that researchers want to take into account: are observations characteristic, or are they the result of some random variation? Can we understand an apparent observation in terms of some complex behavior (i.e., does a “two tiers of talent” theory come out of some threshold effect)?

The complexity that is revealed to close examination is often frustrating and intimidating: if you want a simple answer, it is frustrating, and if you want to do research, it can be intimidating because of all that needs to be done.

Sunday, August 20, 2017

Seeking Truth: A Community Activity

As a philosopher, intellectual, and academic, I am concerned with seeking truth.  I want to know what is true and what is not so that I can make good decisions. 

Sir Karl Popper, the philosopher of science, argued that (1) we can have objective knowledge, and (2) science (and the search for objective knowledge generally) is a community endeavor in which the “best” theories are those which are most tested and survive the widest range of tests. Popper is hardly alone in arguing for a social role in research, though many who argue for that social role would not accept Popper's belief in objective knowledge.

This community vision of research is manifest in research institutions: each field has its journals and publications, and those publications are filled with a variety of different views of scholars debating each other. Whether in the humanities or the sciences, scholars debate a variety of viewpoints.
Each scholar, presumably, believes in the value of his or her own work, and presents it as something “true.” I feel that this is so even in the works of those who debate the existence of ultimate truths.  Some of Derrida’s works are so cryptic as to be more like Zen koans—questions without an answer—but not all; his works sometimes read as if asserting truthful claims. But this is the condition of the scholar: to attempt to put down in words something of vast complexity.  The Tao Te Ching begins by saying that the Tao that can be spoken of is not the absolute Tao, and it still goes on to propound its views of the world. Every scholar has a bias in favor of his or her own work—not a malicious bias or desire to deceive for personal gain, but rather the bias of the individual who believes in the quality and integrity of his or her own work (let’s just put aside people who intentionally falsify for now).
Each scholar desires to do something original (which, on the whole, is the basic qualification for publication), and this originality manifests as the variety of published theories that make up the scholarly literature. While some kind of bias is inevitable—we all have to make choices of what to read and what to leave aside— biases based on ignorance are not desirable. Scholars ought to read widely, to examine a range of views, to test them, to challenge weak ideas.

To me, the same is true with journalism: two journalists doing good work with integrity may come to different conclusions on some issue due to use of different evidence. Different organizations have different biases just as different academic journals have different theoretical biases. As one who relies on the media for a lot of information about current events, it seems to me wise to read many different sources of news to get a wider sense of what is being said. With the internet, it is easy to read news from different news sources around the world and around the country.  I read things that are obviously conservative and things that are obviously liberal/progressive, and my views are shaped in part by comparing the quality of the arguments and evidence presented.

This post was sparked by a line in an article in the National Review, a conservative journal. The line that struck me was the comment: “they know there were two sides out there [in Charlottesville, VA]. And they know the media has tried to obscure that fact.” What struck me about this was the comment “the media has tried to obscure the fact.”  Firstly, it is worth noting the singular verb form “has,” which suggests that the author is thinking of “the media” as a single, unified entity. The use of the singular “has” could have just been a grammatical error (“media” is the plural of “medium,” and those concerned with grammatical correctness would say that it is grammatically correct to write “The media have”), so it’s not certain that the author is thinking of them as some unified whole, but it is suggestive. Realistically, however, many diverse people and organizations make up “the media” and they have not, en masse, tried to obscure this fact. Different media outlets have given different amounts of attention and blame to the antifa.
“The media” is not a unified bloc. It is made up of disparate voices. Fox News is one of the loudest of those voices, and it would be inaccurate to say that Fox News (as a whole) has ignored the antifa. If we’re talking about “the media,” certainly the most commonly viewed sources, including Fox, should be counted.  And for that matter, the highly respected National Review, in which this article was published, is also a prominent part of “the media.”  So it’s not accurate to say that the antifa and the violence perpetrated by the antifa have been ignored by the media.  It’s unfortunate that this inaccuracy suggests an attempt to deceive about the range of opinions being expressed, but part of the current conservative worldview is the notion of a vast “liberal media,” while somehow not noticing that many of the biggest media outlets—the Murdoch empire, Sinclair, and others—promote conservative positions and candidates.  Are there news outlets that show a liberal bias? Of course. But liberal news outlets are not the only news outlets.
As consumers of media we need to review a range of sources, and to then use critical judgement to choose amongst them. Getting news from a single source—however high its quality—is ignoring the whole range of views that should be the fabric of any critical examination of an idea.

Many graduate students struggle to become independent researchers because they try to understand published literature as the truth, not as the fabric of a larger debate. Until they learn to challenge the sources that they read, they can’t really begin to develop the necessary critical vision to understand the wide scope of issues of concern.  As consumers of media, we may be more willing to challenge news media, but too often these challenges become limited to an ad hominem dismissal: “if Fox said it, then it can’t be true,” or “if The New York Times said it, then it can’t be true.”  Moving beyond such blanket dismissals to critical engagement is crucial if we believe that arriving at truth is essentially a communal effort.

It is also essential if we seek social unity. If we seek a unified society, we cannot operate by dismissing what others say, and we’ll struggle to operate if we focus only on points of disagreement. To find any semblance of social unity, we have to look for points of agreement.
Of course, as I discussed in previous posts, there are limits: an egalitarian society that believes in free speech is obliged to reject those who believe that only a select group deserve free speech. It may seem impossible to find common ground with someone who espouses racial hatred, but the reality is that societies already are made up of both those who desire an egalitarian society and those who espouse racial supremacy, which means that either common ground can be found, or there is war.

Seeking truth is a community activity. When communities diverge on what is viewed as truth, violent conflict can ensue.

Wednesday, August 16, 2017

Limits to tolerance and self-defense

I try to avoid partisan politics. I think the Democrats and Republicans often have far too much in common. But Donald Trump is a step beyond. 
It is wrong to draw a moral equivalence between the white supremacists marching to defend the statue of Robert E. Lee and those protesting them, even if there were violent perpetrators on both sides.

John Stuart Mill suggested that liberty could only be allowed insofar as it did not impinge on anyone else’s safety and/or liberty (I may be mis-remembering—it’s been a while since I read Mill). Liberties must be balanced so that one person does not harm another.  This idea is, of course, captured in the laws that govern our free society: murder and theft are criminalized to protect people from those who would choose such courses of action for their own satisfaction. And this idea is particularly embedded in notions of self-defense: killing in self-defense is not a crime; it may even be heroic.  Violence against other people is compatible with being a responsible member of a tolerant society—at least in some situations.

Whether violence is wrong or not depends on the situation. One concern in trying to evaluate the morality of some violent action is understanding the motivation of the actor: what motivated the violent act? Again, if the motivation is self-defense, the morality is different than if the motivation is aggression.

I’m not trying to get inside the head of any one person, but it is clear that the “unite the right” marchers had aggressive motivations: they want to change America; they don’t want people of color to have any voice. Some have actively declared that they are at war.
The counter-protestors, including the antifa, had more defensive motivations: they want to defend the egalitarian principles that the U.S. espouses (principles that, admittedly, the U.S. has not always lived up to).
The unite-the-right marchers would like to take away the rights and liberties of many (people of color, Jews).  They wish to impinge on the liberties of others. This is bad. This is antithetical to the principles espoused in US law, including the Constitution.
The counter-protestors wished to defend the liberties guaranteed by US law.
These different motivations necessarily color any interpretation of violent acts. Yes, antifa may have perpetrated some unjust violence, but their purpose was noble.  (And I’m going to ignore the possibility that some amongst the antifa are really just doing it for the pleasure of committing mayhem—that’s not really antifa, that’s just violence for the sake of being violent. Realistically, in almost any large group of people, there will likely be some whose real motives are reprehensible. It seems unlikely that the antifa had significantly more such people than the unite-the-right marchers.)
There was nothing noble about the unite the right marchers. It is possible that some may have only resorted to physical violence in self-defense, but their intent is to do violence to others by taking away their rights.

Donald Trump says there were some fine people marching in the unite the right march. No. Fine people do not march with Nazis. Fine people denounce Nazis. Fine people denounce racists. Such basic choices reflect moral character.
There were many fine people marching with the counter-protestors. Marching to defend the principles of equal protection before the law—the best of the principles that shaped this land of liberty—is noble. Marching to stop the spread of racism is noble.  Were all the counter-protestors noble? No—it’s rare that a whole group of people will be unified in a noble purpose. Some of the counter-protestors may have taken actions that should be condemned.
But saying that there is blame on both sides simply ignores the fundamental issue that brought the people to the protests in the first place. Some of the people went there to protest against American values and American laws. Some of the people went there to protect American values and American laws.  Whatever the actions of individuals in those larger groups, it is clear that one group is motivated by something reprehensible, and the other group motivated by something of which all U.S. citizens should be proud.

Unity does not grow out of encouraging or sheltering groups that call for disunity. Groups that claim that only some people—people with the right skin color or heritage—ought to have rights are calling for disunity.  This is completely different from a group that calls for suppression of those who want to create disunity. 

People who believe in the values of the United States Constitution should oppose racism and racist groups because such groups are inimical to the U.S. principle of equality before the law.  (And yes, I am aware that the original version of the Constitution was racist, but it has been amended since then.) Even though the Constitution guarantees freedom of speech and assembly to allow political dissent, this guarantee is intended to protect discourse. It is not meant to protect and nurture groups that impinge on the liberties of others.

To defend the United States Constitution and the values it represents, it is necessary to denounce racists, and white supremacists, and any “fine people” who want to help the racists and white supremacists—including Donald Trump.

Sunday, August 13, 2017

The Paradox of Tolerance, free speech, and political correctness

Recently, an engineer at Google was fired for writing a document that offered reasons for gender disparities in the Google workforce.  The whole incident received a great deal of publicity, with some taking the view that Google took the right position by firing him, and others viewing it as wrong and as squelching the free flow of ideas.  
Many on the right argued basically that this incident proved that “liberals” aren’t really liberal, but rather that they squelch dissent.  On the left, meanwhile, people argued that such discourse must be avoided, not because they want to squelch dissent, but because they want to make a place that is safe for everyone, including preventing speech that might be considered threatening.
Similar discussions come up frequently. Another recent incident receiving similar media coverage was related to rental properties listed on Airbnb for a protest in Virginia. (This post was mostly written before the events in Charlottesville and Seattle in the past weekend.)

This general issue—espousing tolerance as a value, and yet refusing to tolerate the intolerant—was called “The paradox of tolerance” by the philosopher Sir Karl Popper.  In The Open Society and its Enemies, Popper wrote:
If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed….I do not imply…that we should always suppress the utterance of intolerant philosophies…but we should claim the right to suppress them if necessary even by force; for it may easily turn out that they…may forbid their followers to listen to any rational argument,…and teach them to answer arguments by the use of their fists or pistols. (Open Society and its Enemies, vol. 1, chapter 5, footnote 4)
As a person who highly values the high principles espoused in the Declaration of Independence, who values the liberties promised in the Bill of Rights, and who pretty much grew up thinking that what mattered were, in the words of the old Superman television show, Truth, Justice, and the American Way—a construction in which, to me as a child, truth and justice were fundamental to defining the American Way—I am much inclined to Popper’s point of view.  Obviously, I’m very interested in the truth.
Paradoxes like the paradox of tolerance make it hard to understand what is justice, though. When is it appropriate for the tolerant to stop intolerance?

The notion of “political correctness” dovetails into this discourse.  Complaints against political correctness often are framed in terms of keeping people from telling the truth.  Or at least keeping them from telling the truth as they see it. It is viewed as an inappropriate restraint on expression.
I wonder about that complaint because I wonder to what extent it might not be better to frame many of these same issues as matters of manners, courtesy, and/or tact.  As a child taught to tell the truth, I struggled with the idea of “white lies”—social lies that are tactful or polite, if not honest.  If I go to a party, I tell the hosts I had a good time, regardless. Is that being politically correct, or is it being polite or tactful? If I hate the shirt or hat someone is wearing, I don’t have to tell them that, and restraining myself is not some gross imposition, but rather just a choice about how to treat people.
And, to wind this back to the Google case, if I, a man, work with some women and I don’t think women are capable (in general), do I need to tell that to my co-workers? It seems almost obvious that we could view the injunction to refrain from shouting out strong judgements about entire classes of people as being motivated by simple courtesy or tact.

Of course, if you view members of a particular group as inhibiting efficiency, you might well wish to express this belief as part of a program to realize greater work efficiency. And this winds back to the idea of the paradox of tolerance.  In any large group of people—whether Google or a nation—there will be those who say “this group will be better if we exclude certain types of people.” For those who believe in developing a workplace (or society) where everyone can feel safe, it is necessary to argue for the exclusion of those who preach intolerance.

Logic will not lead to a simple answer here as to whether a nation that believes in free speech has right, reason and cause to forbid certain types of speech.  Perhaps the question needs to focus on the processes of argument: are ideas being argued in the realm of rational discourse, or are they being argued in the emotional realm where fists and pistols come into play? If democracy is a marketplace of ideas, we want the best ideas to prosper because they are the best ideas, not because they are espoused by thugs willing to use non-rational means to win their arguments.

While I have not checked the sites myself, I read that some of the white supremacists involved in the Charlottesville rallies argue that this already is a war. If there are protestors (on any side) who believe that the current debate over how to shape American society is a war, then that gives additional depth to Popper's warning about people who stop debate and replace it with "fists and pistols."

For a tolerant society to exist, it cannot allow members of the society to preach that other people cannot participate as equals in society. If one group argues that another should not be allowed to participate in society--as in Germany in the 1930s and through the war, where the Nazi party argued that Jews could not participate as equals (or at all)--the first group is actively threatening the second, and such threats should be treated as criminal in the same way that other threatening language (as in felony assault) is criminal.  

There is, of course, a fine line to tread, where rational discourse in favor of alternative social structures is allowed: political discourse cannot be shut off unduly, but the safety of members of the polity must be protected.

If the United States of America aspires to live up to the principles it espouses, then groups that deny the basic principle of the equality of people must be controlled so that they do not pose a threat to people who disagree with their opinions.  A tolerant society is not required to tolerate those who say the society should be destroyed, and indeed must act against those who would destroy the social fabric. To be that country that the Declaration of Independence aspired to be, and the country that the Constitution (in its amended form) aspires to be, it is necessary to suppress those who express intolerant views in ways that threaten the peace and safety of citizens. The guarantee of free speech should not be used to protect those committing violence against other members of society.

Monday, August 7, 2017

Truth and the speaker

For ease of expression, I will speak about "truth" and "falsity," though I do not think there is a simple, objective “truth”—or at least logic does not easily lead us to such a thing.  As I said in my previous post, however, it seems to me intuitively the case that some things are obviously true and some are obviously false.
For example, I drank some water just after starting this post.  I think that is absolutely true.  And it is absolutely false that I drank some whiskey just after starting this post. I think that there are many things in the world that can be said to be true or false and it’s useful to be able to distinguish the two.
Ultimately, I think this is the purpose of research.  Even those who reject the idea of an absolute truth are, at the least, looking for ideas that can be used by many, not just a few.

As I have said in previous posts, knowing the truth is important.  To buy something at a store, I need to know where the store is.  To buy the right thing, I need to know what my needs are. To make an effective plan for dealing with research, I need to know how research works. And so on, and so forth.

The idealized scientist/researcher challenges accepted ideas.  Darwin and Galileo, for example.  But which ideas do we challenge?  Which ideas do we accept?  

Often, one reason to accept or challenge an idea is the speaker.  But the identity of the speaker is no guarantee of the truth or falsity of a given claim.  The ad hominem argument—the argument “to the person,” as I believe the Latin translates—attempts to build a claim about the truth of a statement on the identity of the speaker. This is logically a fallacy: the truth of a statement does not depend on the speaker.

There are times when this really bothers me.  In political arguments, the ad hominem fallacy is infuriating and generally misplaced.  To say that research is suspect because the researcher is affiliated with a particular political party is, quite frankly, asinine. If you believe that research is flawed, you should be able to do better than “I don’t trust the person who did it.”  

Even in the case of a proven serial liar, there is a good chance that the next claim will be true. The identity of the speaker does not guarantee the truth/falsity. Ideally, a critical thinker—including scholars/researchers/scientists—will check an idea on its own merits.

But practically speaking, we can’t do that. We can’t check everything. And so we rely on trusted sources.  Hopefully we have a good knowledge of the ideas we are using, and understand their strengths, weaknesses and controversies; hopefully we do not just accept/reject ideas because of the speaker, but practically speaking, it’s often effective to do just that.

From the perspective of a writer, it’s a crucial and invaluable tactic: we cite some scholar or philosopher, and that is the terminus of a line of exploration.  Again, the scholar ideally has theoretical reasons for the choice of a given idea, and does not choose the idea on the basis of the speaker. But in the battle to keep a presentation to a reasonable length, calling on well-known names can be an invaluable tool in reaching and convincing an audience, while avoiding the morass of theoretical debates that surround most important ideas. 

Monday, July 31, 2017

Is there Truth? Dealing with theoretical disagreements

One of my enduring interests is in trying to understand the nature of knowledge. In my studies, guided by my logical mind, I am absolutely confident that knowledge is limited and imperfect, and that there are no facts, no “capital-T Truth,” no absolute system of objective knowledge—no “God’s Eye View,” to use a phrase from the philosopher Hilary Putnam. And unconsciously and intuitively, I am absolutely certain that some things are absolutely true and some things are absolutely false.

I do not have an answer to this dislocation between my logic and my intuition. This is one of the reasons that I so often turn to the phrase from the Tao Te Ching, that the Tao that can be [spoken of/named] is not the absolute Tao. But saying that there is no answer is often unsatisfactory.

The idea that knowledge is contingent on historical or social or other factors—that it is not objective—is a lens that can be used on itself: the idea that knowledge is contingent is an idea that is, itself, contingent. If you believe that knowledge is contingent, where do you choose to stand? To what do you commit? (I have some answers to these questions that I find at least marginally satisfying, but those answers are not what I want to talk about here.)

In a recent post, I was talking about the political nature of knowledge—how beliefs guide our actions, shape our morals, and generally influence those decisions that move into the political sphere—the sphere of dealing with individuals and groups. Beliefs cause conflict. I had a professor once whose beliefs clashed with mine in ways that left us unable to work together. Fortunately, I was able to work with other people. But I was thinking about this conflict in the context of a scholar with whom I am currently working who has a number of theoretical differences with her professors, and this is contributing to difficulties.

Managing this kind of theoretical difference can be very difficult. Ideally professors would welcome other ideas—but, again, belief shapes action: if you believe that an idea has an insufficient grounding, you won’t accept work based on that idea. A professor who disagrees with a theory may not think “I should be open minded,” but rather “If only I could help this student understand their error.”

Getting through a situation like this is difficult, but I think it can be negotiated. I do not think that a student should give up ideas just because a professor doesn’t like those ideas. In my case, the ideas that my professor didn’t like were grounded in a large community of scholars doing good research, so it was hardly the case that I had good reason to abandon these well-developed ideas because one professor objected. For the scholar with whom I am currently working, there is a similar concern: both she and her professors share some theoretical roots, but their theories diverge, and she has good reason to reject their line of reasoning and follow her own because it grows out of solid foundations. At the same time, we can’t say that her professors are “wrong,” because judgement of right and wrong requires some absolute standard for comparison. We can say that their ideas are incompatible with hers, but by what standard do we choose one set of ideas as right and one as wrong, when one of the basic presumptions is that knowledge is contingent?

To me, the answer is pragmatic: there is no abstract answer, but action can be taken.

Faced with someone who disagrees with fundamental theories that you accept, I think there are generally three important steps:
1. Demonstrate that you understand and value their view. Look for places where their view overlaps with yours.
2. Focus on specific points of difference, and emphasize how they are limited differences within a larger framework of agreement.
3. Describe your specific framework as an alternative to, not a replacement of, the other.
A possible fourth action is to call on specific published scholars to support you, but I haven’t included that in this list because citing the “wrong” scholar can cause problems: you don’t want to cite a scholar that your audience hates. Calling on a Freud or a Marx, or some other polarizing figure, can trigger emotional responses. But if you can use citations that your professors like to support your own arguments, then citation can be very effective.

Emotionally, I really do believe that there are right and wrong, and when I get into a theoretical debate, that emotional side can trigger a stronger attachment to ideas than I can logically defend. In a world where knowledge isn’t absolute, it should be possible to avoid some sorts of theoretical conflict by trying to accept and negotiate difference rather than trying to strongly defend a point.

I think theoretical debates are important because theory shapes action, and action has real impact in the world. But is winning a theoretical debate important? It depends on context, but I would always recommend that students try to avoid theoretical debate when it comes to writing their dissertation—is there some way that a contentious theoretical debate can be transformed into an exploration of a theoretical alternative?

This is hardly a well-rounded essay—ending, as it does, with questions. But then, the truth [Tao] that can be told is not the absolute truth [Tao], so maybe an inconclusive ending is appropriate.

Monday, July 24, 2017

"A Better Deal"?

Really? That's the best message you can come up with, Democrats? Talk about tepid and uninspired!
I generally vote Green or Peace and Freedom, so I'm more sympathetic with Democrats than with some of the other parties, but this slogan is hardly going to inspire me to vote Democratic, and I doubt it will inspire anyone else, either.
They're trying to reference the New Deal, I suppose, but that hardly seems like a winning strategy: Republicans have been hating the New Deal for almost a century, so referencing it isn't going to inspire people who aren't already pro-Democrat.
I suppose they're also referencing Trump and his reputation as a deal-maker. But that hardly seems like it will differentiate them from Trump in a positive way--"better" is nebulous. There's nothing there that can make them anything more than "not Trump."
Democrats, I offer the following two alternatives that are almost surely superior and more compelling: "A Fair Deal" and "An Honest Deal." Both still use the "deal" references, but also show some specific direction for how they're different. Both rely on important moral touchstones that Democrats don't use effectively. Somehow, Clinton was branded "crooked" while Trump skated through numerous self-contradictions and obviously false statements while being the representative of the party closely tied to the moral imperatives of fundamentalist Christianity.
"A Fair Deal" references the importance that people place on the idea that the economy is "fair”—an idea suggested by recent research that people are more concerned with “fairness” than with inequality.
“An Honest Deal” references an obvious issue, and in the context of commercial exchange “an honest deal” is a close synonym for “a fair deal.” But the idea that you’re getting an “honest deal” is potentially important in convincing people of the value of important regulation of economic behavior: most of the behaviors that Democrats try to regulate could be framed in terms of honesty or playing fair. Polluting could be described as “cheating,” as a way of getting an unfair advantage, and that kind of discussion can be blended back into Free Market theory: to suggest that the markets have to be free of cheaters.
I haven’t thought deeply about these. They’re the product of only a few moments of reflection after seeing the new “A Better Deal.”

All Knowledge is Political

When I say “All knowledge is political,” I am not thinking of epistemological concerns—I am not concerned with the question of whether or not knowledge can be objective. That question is of interest, but it is not my subject here. I am interested in the political ramifications of accepting something as knowledge, and in that sense, this post is related to my two recent posts about research institutions.
Accepting an idea has political ramifications. The stronger our belief in an idea, the greater our commitment to responding to that idea, and therefore, the greater that idea’s impact on political behavior. In my earlier post on political allegiance, I wrote about how partisan political attitudes towards certain ideas (e.g., evolution and climate change) would naturally lead to a partisan sorting: those who strongly believe in climate change have a strong motivation to align with the political party that accepts climate change. This sorting is a directly political dimension of knowledge: a belief in how the world works leads to political choice.
There are many such issues. Abortion, for example, is politicized according to beliefs about whether abortion is right or wrong. Beliefs about economic behavior become politicized, leading to choice in party alignment. The issues that define the political parties grow from beliefs about how the world works; thus, all knowledge is political. Not all beliefs are necessarily partisan, but they can become more so: anti-abortion Democrats were more common once than now, for example, but as support for abortion rights became the conventional Democratic position, anti-abortion Democrats had an increasing motivation to leave the party. This is the sorting process that I mentioned in the earlier post.

But this political nature of knowledge is not only a matter of national partisan politics, it’s in every arena where people interact. Scholars have to deal with politics in many places—publication is not pure blind review, for example, and university departments are often rife with political differences, some of which stem from beliefs. When I say “All knowledge is political,” I am being, I suppose, a bit hyperbolic, inasmuch as there are some ideas that do not immediately lead to political action. Knowing how to boil an egg, for example, is not likely to carry many political ramifications—at the least, it is difficult to imagine a situation in which disputes over how to boil an egg impact a community. But to the extent that ideas lead to action affecting people, those ideas are political in one dimension or another (in the general sense of “political” as meaning interactions among members of a community).

As a philosopher whose interest is the search for “truth,” whatever that may be, I think it completely appropriate for political action to follow knowledge. This is essentially the idea of evidence-based practice: we want to act on the basis of ideas that have been tested and demonstrated to be sound.
The opposite is not true, however. It may be unavoidable that knowledge will be shaped by political factors—certainly there is a vast body of philosophy that discusses such political factors that shape that which gets accepted as knowledge—but that does not mean that research should be founded on political concerns. Knowledge may be unavoidably shaped by political or social conditions, but this does not mean that research should be shaped by adherence to political ideology. Research should be exploratory and should strive for objectivity, even if objectivity is impossible in practice. In a sense, the goal of research should be to confirm, deny, or elaborate on/modify theories. And yet, even in such contexts, there are political influences that are undeniable—the very choice of subject of interest may be guided by political concerns.
This post, I suppose, is influenced by the current situation in the US, where news media outlets are regularly attacked as being biased—and some absolutely are, even explicitly. News organizations, like academics, are supposed to set out in search of the truth, of the facts, of that which is true for everyone, but the politicization of news, and the partisan lines drawn, work against this ideal. Outlets that expressly support one view or another—e.g., Huffpost on the left, or Breitbart on the right—have to be approached with great caution for the precise reason that their material is driven by ideology rather than a non-partisan search for understanding. Such explicitly conservative or explicitly liberal news organizations would have been frowned upon in a different age, when newspapers at least laid claim to objectivity. Fox News used to try to present itself as fair and balanced, but being fair and balanced no longer seems to be necessary, as news outlets become increasingly politicized by the very choice of which stories to pursue.

The political dimension of knowledge cannot be eliminated. But if society hopes to avoid increasing partisan division and conflict, shared foundations need to be identified on which to build some sort of consensus. As a philosopher, I believe that the search for truth guided by good standards for research can provide such foundations. Some common ground is necessary for cooperation. Where can this be found if we let all knowledge be shaped by partisan politics? The idea that there can be knowledge that is shared—that is generally true, generally objective—is a crucial element in building some sort of consensus. The idea that we should strive for objectivity, even if it is not possible, is also political, in that it would guide actions and debates.

Bertrand Russell once wrote something like “with subjectivity in philosophy comes anarchy in politics” (in his History of Western Philosophy). Russell, obviously, was espousing objectivity, and even in this rough paraphrase he captures the important political dimension of knowledge: knowledge leads to action, and without some sort of consensus about knowledge, there is a corresponding difficulty in finding consensus in political arenas.
I don’t know the answer to the increasing partisan differences in the US, and the increasing presence of false or misleading information masquerading as knowledge, but I would say that trying to find common ground in accepted truths and values would be a good place to start. Is there anyone who thinks that researchers/investigative reporters should be trying to find confirmation of what they already believe? Research—the search for knowledge—needs to strive for disruption: it is not about accepting common knowledge, but challenging it. Ideas should be challenged—but only on their own merits. Attacking an idea because of the person who espoused it is to surrender to the ad hominem fallacy and to lose sight of the attempt to understand the world.

Is there any way to find a common ground that grows out of earnest attempts to develop our understanding of the world?