Monday, December 25, 2017

For whom are you writing?

This blog is aimed at academic writers, especially those who are struggling.  A lot of the time, however, the subjects that I most want to write about are political.  These considerations are in tension, and this post grew out of that tension. What’s on my mind is politics, and particularly a subject of constant concern to me — the frequent GOP-led attacks on institutions that do research, a group that includes not only colleges and universities but also the journalistic media, U.S. intelligence agencies, and non-partisan governmental offices. But as I started on possible essays, I wondered whether that interest suits my intended audience, and I began thinking about how to negotiate the gap between what I want to talk about and what my audience wants to hear.

This moves, I suppose, into the realm of rhetoric—a subject for which I have a great deal of respect, but one I have never formally studied. How do you motivate your audience to get the result that you desire?  One place to start, obviously, is by talking about something that your audience wants to hear.  In a way, that’s the only place you can start. If your audience doesn’t find your first sentence interesting, they may not go on to your second. And if the second is not interesting, they may not go on to the third, and so on.  Or if they do go on, moving from one sentence that doesn’t interest them to another, they are hardly likely to respond to the work in a positive fashion.

But trying to talk about what someone else wants is a cop-out, a rejection of the principle of telling the truth. Or is it? The answer is that it can be but isn’t necessarily. It depends on how you approach it. If you say to yourself: “I’ll say anything just to get the work approved; I’ll lie; I’ll ignore my own beliefs,” then, yes, that’s a cop-out. If you say to yourself: “I really want to talk about X, but my reader wants Y, so I’ll start with Y and see if I can bring the topic around to X,” that’s not a cop-out. That’s a rhetorical strategy.

Part of the job of the author is to convince the reader that the work is well reasoned, carefully thought out, carefully developed. If you think that X is important but other people think that Y is important, then your job as a writer is to show your readers why X deserves mention. And if you can only convince your readers by talking about Y, then that’s the place to start.

Many writers get stuck because they want to talk about X and they know their audience wants to talk about Y.  If you’re facing such a situation, you might ask yourself whether there is any way to start the discussion with Y and move to X.  This cannot always be done, of course, but if you can’t find a connection between X and Y, you should ask yourself whether you have the right audience.  Some barriers cannot be overcome, but it is easy to hold too tightly to your chosen point, and thereby miss good opportunities for sharing your ideas.

The gap between what you want and what the audience wants may shape the discourse, but it doesn’t necessitate a corruption of the crucial ideas.  The fact that discourses can be adjusted to suit audiences does not mean that all discourses are distorted by the desire to reach an audience.

So for whom are you writing? What do they want to hear? And what do you want them to hear? How can you bridge that gap?

Friday, December 22, 2017

Video: what do you do if your dissertation starts to feel meaningless?

Here's my second video. I'm planning to do one each week for a little while to increase my presence on the web.

Here it is.

Tuesday, December 19, 2017

Shameless plug: Like Thoughtclearing on Facebook

Dear Reader,
please like Thoughtclearing on Facebook:

Monday, December 18, 2017

Ideas, words, and lexical flexibility

Recently, the U.S. Centers for Disease Control and Prevention (CDC), an arm of the federal government, was reportedly ordered by the presidential administration to stop using seven words. This report sparked this post on ideas about writing, communication and research. I am particularly interested in the relationship between words and ideas or between words and things in the world, and the gap between those two things, which is an issue of crucial importance for academic writers.

The alleged censoring at the CDC has been disputed by the head of the CDC, and the New York Times suggested that the purpose of the directive was not to ban words, but rather to give budget requests a better chance of approval by Republicans.  This casts things in a different light, and suggests the possibility that the people who spoke to the Washington Post for the original report might have misunderstood the purpose of the instructions.

But the accuracy of the report and its political implications and interpretations are not my interest here.  My concern is the gap between ideas and words, an issue that causes real problems for academic writers.  In particular, many people get lost in semantic issues: what does a given term mean to different people?  Different word meanings might be a good focus for research if you’re a linguist. But if you’re not a linguist, you can spend a lot of time and effort discussing and debating the various definitions of a given word without making any real progress on your own work.

Words mean different things to different people and they take on different meanings over time—they refer to different things. And this is the key: there is a gap between the words used to describe a thing and the thing itself.  Changing the word doesn’t change the thing.   This is clear in differences between languages: English says “the sea” and French says “la mer,” but they’re both referring to the same thing.

For a writer, it is important to keep this gap in mind. Academics, particularly, want to keep their attention focused on the thing: what is the idea that interests you? What is the thing in the world that you are studying?  How can you describe that thing?  

There are different ways to communicate any given idea—not just by changing language, but by changing the mode of expression. And these different modes of expression have different impacts on different audiences.  At the obvious level, a presentation in language X will only reach people who speak language X. At a more subtle level, different ways of expressing an idea in a given language can affect audience acceptance.

The CDC instruction to avoid certain words can be seen as guidance for how to present ideas so that they will be accepted by the people who have to review the presentations.  If a reader is likely to respond badly to the word “evidence-based,” then there is a good reason to try a different word or phrase to present the same idea.

Writers (and orators) want to consider their audience when crafting a presentation: what kinds of expressions will the audience hear? What kinds of expressions will the audience reject?

When practicing writing, try to open up the gap between the ideas that interest you and the words and expressions you use to express those ideas. What are different words that you can use? What are different phrases or expressions?  By exploring different ways of expressing the same ideas, you increase your expressive palette, and increase your ability to reach different audiences.

Saturday, December 16, 2017

My first video

A friend suggested that I make some videos for promotional purposes.  This is my first. It's definitely rough, but it's a new medium to me.  If I make more, I hope that I'll smooth things out.

Here it is.

One of the first things I'd like to figure out is how YouTube picks the thumbnail image--what it shows me is not what I would want. It's not the opening title or the closing; it's from somewhere in the middle, where I'm speaking with my mouth open, and I think I look a bit frantic, even though I didn't get that impression watching the video.

Sunday, December 10, 2017

Linguistic subtlety and grammar

In my previous post, I was talking about the importance of proper grammar, punctuation, and spelling—trying to recognize the great value of good command of these conventions that help facilitate communication, while also trying to keep focus on the ideas, not the formalities. Basically the argument was that errors in convention don’t matter if they don’t interfere with the communication of ideas (and that people who complain about errors in punctuation and grammar are often annoying when they over-emphasize attention on minor grammatical points at the expense of the ideas being communicated).  

Language is, or at least can be, extremely subtle in expressing significant difference, and the attention of the reader would be well spent exploring the subtleties, where the important difficulties lie, rather than attending to conventions.  

To reiterate the importance of punctuation, grammar, and spelling, it should be noted that the conventions themselves contribute to the subtlety—presence or absence of a comma can often have a significant influence on the meaning of a sentence, for example.  

But there are also times that the crucial questions are not problems with grammar, but rather small linguistic differences that are crucial to differences in ideas.  The Roman Catholic Church has recently moved to alter the Lord’s Prayer.  The change is linguistically minor, from one phrase that is grammatically sound to another phrase that is also grammatically sound.

The traditional English phrasing was “lead us not into temptation,” and the new recommended phrasing is “do not let us fall into temptation.” The conceptual difference of interest to the Catholic Church is the difference, roughly, between the Pied Piper and a lifeguard—the difference between actively luring people and aiding only when people go too deep (metaphorically). 

For me, such linguistic differences and their influence on the concepts being described are harder to notice when I’m focusing my attention on grammatical issues. And I definitely notice that people who are spending their time correcting grammar, and proving how well they know grammar, often miss the point of what is written. I remember once seeing a professional writer make a comment about the difference between US and UK conventions regarding use of the words “that” and “which,”  and someone responding “the rule is easy: here’s how you use ‘that’ and ‘which’…” Yup, you’re real proud that you know that grammatical "rule," but you totally missed the point about how that “rule” isn’t actually a rule, but rather is specific to the US context.  (Actually, even in the US context, that "rule" is often viewed as a suggestion--see what The Elements of Style has to say about "that" and "which.")

In short, the important stuff in writing isn’t the grammar. The ideas are what matter; grammar is only important as a tool to help communicate. And people who focus on grammar and miss the actual ideas are annoying.

Monday, December 4, 2017

On the importance of proper grammar

I was reading a book recently that quoted the 18th-century scientist Sir Joseph Banks and, commenting on his spelling, said in a footnote: “Despite his expensive education, [he] had managed somehow to avoid the basics. His disdain for grammar, spelling, and punctuation give his writings a magnificent immediacy.” The footnote, to me, characterizes an attitude that I find quite annoying. (I’m not going to give a source because I have no interest in criticizing the author, only the way that grammar is approached.)

As an editor, I value proper grammar and style highly. Despite the high value I place on grammar, I am tempted to correct it as little as possible. To me, grammar serves a larger purpose: a writer wants and needs to communicate clearly, and proper spelling, punctuation, and grammar aid in clear communication. When I read as an editor, my major concern—often my only one—is with the ideas that the author is trying to communicate: what are they? Are they clear? Can I understand what the author is trying to accomplish? Can I paraphrase the author’s purpose in a way that would satisfy the author and also encompass all the issues that I see in the work?

The attitude that annoys me is when people just cannot stop themselves from correcting other people’s grammar.  As a matter of my job as an editor, of course, I am often called on to fix people’s grammar.  But some people just want to fix other people’s grammar for no particular purpose, except, possibly, to boost their own egos by proving that they know the rules of writing better than others. This is especially annoying when an individual complains about a supposed grammatical error that isn’t really an error.

This is very much the case for the footnote I quote above: the author hides his disdain for Banks behind the backhanded compliment of a “magnificent immediacy” and his claims that Banks “avoid[ed] the basics” and had a “disdain” for grammar, spelling, etc.  Banks’s writings were bestsellers; they were works of great influence, largely responsible for Banks’s elevation to the prestigious position of president of The Royal Society (The President, Council and Fellows of the Royal Society of London for Improving Natural Knowledge), the English scientific organization, a position he held for over 40 years. To criticize their grammar seems profoundly irrelevant, even if we don’t take into account the historical linguistic context.  In criticizing Banks, the author does not take into account the history of the English language. Written English is guided by convention, and in some cases by specific style guidelines. But in the middle of the 18th century, those conventions were not set in stone.  There was, in fact, great variation in the spelling used by different authors.  In 1754, the Earl of Stanhope complained that it was “a sort of disgrace to our nation, that hitherto we have had no… standard of our language; our dictionaries at present being more properly what our neighbors the Dutch and the Germans call theirs, word-books, than dictionaries in the superior sense of that title." Samuel Johnson’s dictionary was first published the next year, but it had hardly become a uniform convention when Banks was writing during his 1768-1771 voyage with Captain James Cook.  The fluidity of English at that point in time is evident in Laurence Sterne’s The Life and Opinions of Tristram Shandy, Gentleman, which was published between 1759 and 1767, and uses quite unconventional English. Sterne, too, was a bestseller, and is still regarded as a significant figure in the history of English literature.  
The less settled conventions are, the less appropriate it is to complain about someone diverging from convention.  To anyone who knows a little of the history of English usage, the footnote disparaging Banks’s writing shows the misplaced interest in grammar that I find annoying.

Today, the English language has far more settled conventions than it did in the middle of the 18th century, but even today, there are people who want to correct when they just shouldn’t.  The obvious example is people who correct other people’s grammar in webpage comments: really, who cares whether the commenter made a grammatical error? If the grammar is so bad that the thought is incoherent, sure. And if you want to insult someone, sure, pick on their grammar (it’s not egregious to complain about the grammar of the common “your an idiot/moron” or “your stupid”). 

As a consulting editor who works with graduate students, I get particularly annoyed with professors who spend their time focusing on grammar when, in my opinion, they should be focused on the ideas. Yes, it is within the purview of a professor to correct grammar, but the primary job of a professor is to teach higher-level subject matter.

Proper grammar and punctuation and spelling help a writer communicate to an audience. They are crucial tools in communicating. But the idea is the important part. If the idea comes through clearly, then any individual grammatical error is essentially irrelevant with respect to the larger purpose of the written work—or at least, it seems that way to me.  

Thursday, November 30, 2017

Benching Eli Manning

I generally try to write things that can at least provide some reflection on the issues of scholarly writing, but this doesn't have that. I grew up in New York, and as long as I’ve been a football fan, I’ve rooted for the Giants. I’m not as enthusiastic a fan as I was when younger, but the Super Bowl wins in the 2007 and 2011 seasons brought me a good deal of pleasure. Eli Manning, of course, was crucial in those Super Bowl games, making great plays when the game was on the line.

Eli Manning is the best quarterback in team history, and may go into the Hall of Fame. He has started 210 consecutive games—the second-longest such streak in NFL history (edging past his brother, who started 209 consecutive games). He’s still playing about as well as he always has, although his stats are down because the rest of the team is not playing well.

Manning is getting benched for the next game, breaking his streak. This move has generally been panned, with lots of people saying it’s a bad decision, and many insisting that it’s the end of Manning’s career with the Giants. Maybe it is, but I agree with the decision to bench Manning at present—though my reasoning is not, apparently, identical to that of the Giants’ coach.

Coach McAdoo has decided to start Geno Smith. This is not a good decision, in my opinion. Geno Smith is not as good an NFL quarterback as Eli. And we’ve seen a good deal of Geno—30 NFL starts, over 850 NFL pass attempts. Geno is not some dude who has barely had a chance. Geno has had years of opportunity to impress. Maybe Geno’s coaches have all been wrong, but who has Geno Smith impressed? This is his fifth season; how many surprises does Geno have for us? There is no particularly compelling reason to start Geno over Eli, except, maybe, that you want to keep Eli from getting sacked so often. Or that you want to lose some games—tanking is an option here.

It is hard on Eli to get benched, of course. But a lot of the difficulty comes from the way it was handled, too. They could have said: “You’re getting sacked a ton, our record stinks, and we want to see if the young guy is any good. Next year you’re our starter, and maybe for a few years, but we need to start thinking about our next QB.” That doesn’t feel good for Eli, sure, but at least it’s not a commentary on his play. It’s just a smart decision with respect to evaluating the state of the team.

The Giants are having a terrible season, and to fix the problems, they need to see what they have. They need to assess young players who haven’t had the time to play. In particular, they need to assess the quarterback on whom they spent a 3rd round pick last year, Davis Webb. A 3rd round pick is a very valuable asset in the NFL. If they don’t ever play Webb, then that’s just a wasted pick. If they do play Webb, and he plays well, he’s suddenly a highly valuable asset that the Giants can use. Maybe they keep him to groom him as a starter a couple of years from now if Manning flames out, or as a trade asset in the way the Patriots used Garoppolo. Maybe Webb looks bad, which might motivate the Giants to take one of the highly regarded quarterbacks who will be available in the upcoming draft—again to groom as Manning’s backup for a season or two.

Eli Manning will be 37 when next season starts. He’s not going to play forever. The Giants suck right now, Manning is getting hammered behind a bad offensive line (he’s been sacked more in 11 games this year than he was all last year), and the Giants need to assess the quarterback they drafted last year. That’s not a reflection on Manning’s quality or ability, it’s just a realistic assessment of what the Giants need to do to start preparing for future seasons, because they’re surely not going to the playoffs this year. If I ran the Giants, I would tell Eli that he’s my quarterback until he starts playing badly, but that right now he’s sitting so I can see whether my valuable 3rd round draft pick (Davis Webb) is worth anything that I can use to help the team win next year.

Saturday, November 25, 2017

The "Problem" of Similar Work

One problem that many graduate students face is that they have started on a project and then discover a book or article that is very close to what they have done.

Recently I received an email that said: “I have just found a book that makes a large number of the same arguments I was planning to make. I am having a bit of a rethink on my basic proposal, and will take longer than I planned.” I think the “will take longer” part of this is one of the most common stumbling blocks, and I think it can be generally resolved by looking at the similar work differently.

If you want to do original work, finding a work similar to what you intended can be seen as a block, as something that prevents you from doing what you wanted because what you wanted to do will no longer be original. There is, possibly, some loss in prestige in following work that someone else has already done, but this does not prevent you from doing original work that supplements or complements the already-published work. But finishing a project is a primary concern, and the existence of a published work that is very similar to what you hoped to do is actually a boon in terms of designing a project and getting it accepted.

Every similarity with some other work is something that you can cite in support of your own work. Instead of asserting a point yourself, you can make that assertion in combination with a citation, which makes the assertion more acceptable to most academic readers. The greater the similarity, the greater the strength of the foundation for your own work. When you read a work that is similar to yours, you can profit from that work if you can find one question about the work that you can turn into a good research project that you would be willing to do.

All scholarly works have some limits—some conclusion that may have interesting unexamined implications, some premise that had been defended or explained poorly, some side issue that hasn’t been examined, some point where you disagree with the work. All you have to do is find one place where you think the work is limited, and you can do some sort of study that addresses the limitation. There are even times when attempting to replicate an experiment or study can be valuable.

If you can find such a single point, you can build a study using the same theoretical framework as the work that was similar to what you wanted to do, which saves you a lot of work in explaining the motivations and theoretical foundations of your own project (which are often stumbling blocks).

When you use a lot of a specific work, you can get the additional rhetorical benefit of speaking positively of other scholars: you present your work as an attempt to cooperate with and build on work that you respect. If you frame your work in that positive cooperative relationship with the similar work, you will not be perceived as contrarian, even if you do choose to challenge one aspect of the similar work.

Sunday, November 19, 2017

New Review of My Book on Amazon UK

It's always pleasing to get a new review for my book, Getting the Best of Your Dissertation (at least so far--I'll see how I feel about that after I get slammed in a review for the first time).  This one is on Amazon UK, and I don't know if it will ever migrate to the regular US Amazon.

A very sensible and readable book, packed with good advice for doctoral students
I purchased the Kindle book because I wanted to review Dr Harris's ideas before speaking with him, and found it so useful that I have put in an order for the paperback as well. I'm based in the UK, and the book's advice is slanted towards the US system, but not overly so, and most of the discussion of topic selection, etc. is equally applicable over here. The fundamental rationale behind doctoral level study is pretty much universal, and that is what this book addresses.

If you want to know how to "survive" the "ordeal" of a doctoral degree, then this is probably not the book for you, but if you want sensible advice and an explanation of "what" you are being asked to do, "why" you are being asked to do it (dissertation tasks are not - only - the sadistic tendencies of your professors, they do serve a purpose), and most importantly "how" to do the various elements of a PhD, then I wholeheartedly recommend this book.

Part of getting the best out of your dissertation is the enjoyment to be found in the process of studying, and Dr Harris emphasises this factor. He doesn't hide the facts about the hard work required, but demonstrates how a change of attitude about this aspect can help you to work more effectively, faster, and to produce a better piece at the end.

The book covers questions over all aspects of study, from getting onto a programme to finishing your dissertation, and I cannot recommend it highly enough.

Tuesday, November 14, 2017

Words and Things

Writing is a process that involves a lot of learning…so much that what one wants to say can get sidetracked.
I picked the title of this post because I wanted to talk about the gap between things in the world and the words that are used to refer to them.  But the title seemed familiar, and a quick search showed that, in fact, there is a famous book titled Words and Things, by the philosopher Ernest Gellner.  A brief perusal of Wikipedia suggests that my concern is not the same as Gellner’s, but there is a relationship between them. Still, I don’t want to talk about Gellner’s ideas.

What I want to talk about is the gap between things in the world and the words that we use to describe them.  I have been thinking of this both with respect to a common issue that causes trouble for academics: the question of genres and how to write about genre issues, and also the question of race, which is getting a lot of attention in the U.S. press, for obvious reasons.

My main concern is that the words are not the things, and I think that dangerous effects come from assuming identity between the words and things.  This is especially a concern for the damage of over-generalization, especially the use of stereotypes.
Genre and race are social constructions: they’re lenses through which people can see the world, but close examination of the ideas will reveal that drawing the boundaries on categories like these is more a matter of choice than a matter of reality: the words get used as people see fit, but those usages do not necessarily adhere to any objective standard that is beyond dispute. Yes, of course, if we look at individual examples (whether people or artifacts), we can easily see gross differences: yes, this man has dark skin and this man has light skin; this piece of writing has rhymes/verses, while that has prose narrative. And yes, these gross differences can be used to characterize large groups for whom those gross differences hold true.  Sometimes it can be very useful to hold on to such generalizations.
But sometimes those generalizations can become burdens.  I suppose that these burdens depend on the context, but in general, the issue at hand is what any term means is not objectively definable, nor is the meaning of any term the same for all people.

My concern for genre is prompted by a paper draft I was reading recently that spent a lot of effort on defining a genre and discussing the different theoretical concerns for the genre. The problem for the academic writer trying to use genre is that it is very easy to slip into genre debates, and there is no clear way to end them.  The alternative for the academic writer is to avoid relying on genre terms (and other sweeping generalizations), and to focus on specific things: for example, a specific work, or a specific characteristic of certain works. By focusing on the specific issues in the world—the things to be described, rather than the words chosen to describe them—there is no ground for debates that grow out of different ideas of what a word means.
Getting caught up in genre debates is a real danger to graduate students, in the sense that it can seriously delay the development of good research, but it is a relatively insignificant concern compared to concerns about race.

My concern for race was sparked by a number of different articles I was reading recently, all of which made gross generalizations about race, despite the clear intention of the articles to reveal and disrupt the systemic patterns of racial discrimination present in the US. One article quoted Tim Wise: “Whites have ALWAYS felt that we were being discriminated against every time there was evidence of black or brown progress.” With all due respect to Mr. Wise, I think he should speak for himself. He has no idea what all white people think. I feel absolutely safe saying that in any large group of people, there will be a variety of opinions and ideas. Personally, I don’t feel that black or brown progress means that I am being discriminated against, even if black or brown progress erodes my white privilege. Personally, I feel that black and brown progress shows a move towards the kind of society that I would like to live in, one in which all people have real opportunities, and where success is more dependent on personal traits than on parentage.
In this society that is characterized by such great divisiveness, I think that generalizations about groups tend to expand the divisions in society. Assuming that someone thinks or feels a certain thing on the basis of some gross generalization (e.g., “white vs. black”) dehumanizes the individual.  If the hope is to eliminate racism or other divisive patterns of thought, then there is benefit in trying to avoid such gross generalizations: reducing people to nothing more than avatars of some category that you have constructed in your understanding of the world reduces your chance of cooperating with actual people.  
Whether someone gets counted as white or black depends on context—and that means that categories like white and black can be fluid. In the movie The Commitments the protagonist says “Do you not get it, lads? The Irish are the blacks of Europe. And Dubliners are the blacks of Ireland. And the Northside Dubliners are the blacks of Dublin. So say it once, say it loud: I'm black and I'm proud.”  But Irish, in the context of the US in the 21st century, are most definitely not black. 
Such things can be much more personal—one can imagine, for example, the Commitments character being “black” while in Dublin, and then traveling to the US and immediately becoming white.  Do we then assume that that person will feel the same way about discrimination against blacks as a white person born and raised in a Southern family with roots tracing back to the Confederacy and further?

Using words to focus on one thing can also obscure focus on others: I recently read an article that argued that all white people are racist, because all white people have experienced the privileges of being white. (This does make the kind of over-generalization of which I was speaking—does a Dubliner experience “white privilege” if he or she lives in a context where he or she is at the bottom of the social order?)  In the case of this article, the word “racism” was used to describe a certain thing: the experience of white privilege.  But this use of the word “racism” obscures another use of the word—the meaning that “racism” is an attitude of racial superiority. And that attitude is a crucial one. Yes, whites all experience white privilege, but do all whites share racist attitudes?  Using “racism” to talk about people who have benefitted from white privilege obscures the fact that some white people think that black people are inferior, while other white people do not. A white child of 3 years of age has benefitted from white privilege, but I think it unlikely that a 3-year-old can have any meaningful sense of racial superiority. More personally, since I believe that race is a social construction, I don’t think it’s meaningful to speak of racial superiority because “race” isn’t inherent in people: how can one race be superior to another if races don’t exist?

My concern here is for the use of words and for the danger of using common simple words to describe complex things in the world. If we reduce a work of art to a genre—a “novel”, classical music, etc.—then we can miss important details. Reducing people to a concept—“black,” “white,” “racist,” “woke”—obscures the complexity of people and limits chances to work together.  Trying to focus on the thing in the world may require more words—it’s more complicated to say “people who have benefitted from structural inequalities that are often based on visual cues like the light color of their skin” than it is to say “whites”—but such careful definitions avoid making simplistic assumptions about people and their attitudes.

Friday, November 10, 2017

Unscrupulous man admits to lack of morals

Yesterday, in a tweet about trade relations with China, Donald Trump said:

"How can you blame China for taking advantage of people that had no clue? I would've done same!"

That's right. He admits to "taking advantage of people" and seems to think it's a good idea. In some situations it is ok to take advantage of people: I don't blame sports teams for trying to exploit opponents' weaknesses or mistakes. But the president of the United States? That's a lousy way to govern within the nation. And, unfortunately, his policy choices certainly suggest his willingness to take advantage of the people who voted for him, by cutting services that help his voters, and giving massive tax breaks that help himself and his family.

It's a lousy way to conduct international diplomacy, too. How can any government deal with his administration and think that there's any cooperative spirit in the negotiations? Trump has made it clear that he will try to take advantage of people.

It's one thing to resist being taken advantage of. And it's completely another to go out and try to take advantage of people--especially those who are lacking knowledge (i.e., "have no clue," to use Trump's words). Saying "I take advantage of those lacking the knowledge I have" is hardly taking the moral high ground. Unless morality is measured in money.

Monday, November 6, 2017

Seeing the Other Side

In previous posts I have been talking about cooperation, referencing both academic and political issues. I will continue that here because, although I want the blog to focus on research issues, I think the political division in the U.S. needs attention. My voice is a very quiet one in this loud political debate, but debate and the variety of voices are crucial to the idea of democracy: democracy ideally is able to capture and respond to the best ideas provided by its many members. In this sense, the ideal democracy seems to operate on the same basic principle as the ideal competitive market: many different ideas/products get introduced, and the wisdom of the majority leads to choosing the best of them. Therefore, even quiet people like me have a civic responsibility to speak their minds.

As with previous posts, I will talk about a general principle that ought not be partisan. If basic concepts like cooperation and compromise are partisan principles, then the political division in the U.S. is even worse than I thought. This blog post is about something central to cooperation and compromise: the ability to understand and respect the perspective of others. Such understanding of others—empathy—is a significant part of social interaction, and when it is missing, social interactions become much more difficult. Empathy is, I think, closely related to the Christian golden rule—do unto others as you would have them do unto you—for to follow the golden rule you must imagine that the other feels the same things that you do. That is, I think, part of empathy—the willingness to respect and honor the other and the other's opinions and feelings. But the other part of empathy is to actually understand the position of the other, to understand their ideas and feelings from a place of respect.

In politics this understanding is crucial: if you try to understand what someone unlike you feels, if you respect their feelings and their human complexity, you are more likely to be able to cooperate and compromise.

But this is also true in academia, especially as a student dealing with difficult professors.  Indeed, one reason professors can be difficult is because they don’t see the student’s side of the story. It is, of course, part of the student’s responsibility to make their side of the story clear, but part of being able to make a story clear depends on understanding what will make sense to the audience. 

It can be very difficult to understand why a person thinks in a certain way, but sometimes even basic understandings can be useful guides to attempts to communicate. For example:

1. A professor with a strong dislike of Freud might be best approached with a discussion that mentions Freud as little as possible, even if the basic argument depends on some idea(s) of Freud.

2. A professor who thinks in objectivist terms might best be approached using language that leans towards objectivism, even if the student is following postmodern premises. Back in the 60s, Derrida wrote:
There is no sense in doing without the concepts of metaphysics in order to attack metaphysics. We have no language—no syntax and no lexicon—which is alien to this history [of metaphysics]; we cannot utter a single destructive proposition which has not already slipped into the form, the logic, and the implicit postulations of precisely what it seeks to contest. (Derrida, 1989/1966, “Structure, Sign, and Play in the Discourse of the Human Sciences.” Trans. R. Macksey and E. Donato. In The Critical Tradition, 959-971. New York, NY: St. Martin’s Press.)
The implication, as I read this, is that someone who has certain beliefs about the nature of knowledge will structure their language—their syntax and lexicon—on the basis of those beliefs, which implies that writing to reach that person requires speaking their language, even though that language may not fit the writer's ways of thinking. In a way, the article from which this quotation is taken exemplifies Derrida's attempt to put his non-objectivist thinking into the language of objectivism, which ruled academia at the time (and is still influential today). Later works of Derrida (e.g., The Post Card, 1980) don't make the attempt to be scholarly in the same way. But such works only do well with those who are already predisposed to less formal logic.

Sometimes all that is really needed is to echo back the opinion of the other: to make them certain that you heard them and respect their argument in some way. By clearly stating the position of the other, you help the other feel heard (hopefully), and this in itself may be enough.

You may want to argue that the sky is yellow and sun is blue—it’s ok to argue for unconventional ideas (indeed, it is expected at a certain level)—but your argument ought to at least acknowledge that most people believe that the sun is yellow and the sky blue. 

If you’re working with a professor who is opposed to the foundational ideas that you use, then you may have no option but to write in a way that acknowledges their ways of thinking. If they expect you to prove things, you need to figure out how to “prove” your ideas, even if they are not “provable.”

If you can’t see the other side, you’re going to struggle to get your ideas a hearing.  This is especially true if you treat your interlocutor with disrespect.  If you can’t respect the ideas of your interlocutor, you’re more likely to show exasperation at them, which only makes future communication more difficult. And, on the flip side, it’s also true that if your professor shows exasperation with your work, it’s hard to avoid a negative emotional response, which makes future communication and cooperation more difficult.

Tuesday, October 31, 2017

Cooperation and compromise

My previous post was talking about the need to respect other people as part of the process of cooperation.  This post is concerned with the related issue of compromise in the process of cooperation. 
Compromise can be difficult. Compromise always requires giving up something that you want. If you got everything that you wanted, it wouldn’t be a compromise.
Sometimes compromise is inevitable: if you want to buy a cheap car, you give up power or luxury; if you want a really fast car, you can't get the cheapest car. There's a tradeoff which requires some compromise. Tradeoffs exist in real-world decision making. Costs get balanced against benefits. The more expensive cut of meat may be tastier, but it's more expensive. The organic produce may be more healthful, but it costs more. Not all tradeoffs involve monetary costs. If you want to see natural beauty, you can go to Yosemite or the Grand Canyon, but they are crowded. If you want to be alone in nature, you're forced to go somewhere less famous, and perhaps less spectacular, or less accessible. For a writer, one common tradeoff is whether to produce a bad letter on time or a good letter late—increasing the quality of a written piece takes additional time. For a researcher, one related tradeoff is whether to do more preparatory reading or to begin a project—"I need to do more reading before I start my project" is a common cry. Such compromises are frustrating, but at least they don't involve personal debates or negotiation with a collaborator.
Trying to make a compromise with another person while working together presents a different level of concern because there is the interpersonal emotional element that doesn’t exist in making a personal decision of whether to buy the more expensive, more luxurious item, or the lower-cost, lower quality item.
When the personal emotional element gets involved, it's harder to make clear-minded decisions. In a previous post, I mentioned the idea of "reactive devaluation"—the devaluation of something because it is associated with someone who is an enemy, e.g., U.S. residents being more likely to accept a nuclear reduction plan if told it was proposed by Ronald Reagan than if told it was offered by Mikhail Gorbachev—and this is a crucial element.
Sometimes cooperation involves compromising certain principles. In academia, the writer is often forced to compromise in different ways. This is perhaps most stark for students, but it's not as if professors don't face compromise in their work. Students may be forced to work with material that they don't want to use. They may be forced to deal with ideas that their professors want to deal with, even if they don't want to do those things, and even if those things are in conflict with their beliefs.
One of my go-to anecdotes on this kind of point is a story about a friend who earned himself an extra paper because, during his oral examination, he could not put aside a specific disagreement with one of his professors. The point on which the two disagreed was related to the work of the philosopher Donald Schon, who was important to the professor and disliked by the student. But the student's work didn't use Schon, so all that was really needed was for my friend to focus on the few parts of Schon's work that generally agreed with his own work (there were some agreements, which explains why my friend was working with the professor in the first place). But my friend focused on what he disliked about Schon—something he did again with me in discussing his examination. The disagreement led to his writing an extra paper. Writing an extra paper is hardly a disaster, but I think it was unnecessary, because I think my friend could have cherry-picked a few ideas that Schon expressed that agreed with his own work, and stayed silent about his causes of disagreement. The causes of his disagreement had led him away from Schon, but he certainly could have said "Schon shares some assumptions with the people I'm using" (who were generally in the school of American Pragmatism).
To cooperate, it is important to focus on what you’re going to get from the cooperation—the positive angle of it. You don’t want to be blind to the costs, of course, but you have to view those costs in terms of what you hope to get. If you complain excessively about the cost, it will scuttle the cooperative effort. If you focus on the benefits, then you can decide if the benefits are worth the cost. 
Sometimes that cooperation might be repugnant—a politically liberal individual in the U.S. might be so disgusted with Sen. Bob Corker that they find it impossible to work with him against Donald Trump, even though Corker has shown his opposition to Trump—but that cooperation might be able to deliver something of great value, even if it does require working with someone who holds radically different views.
For an academic, these compromises are often less difficult: compromising by discussing a disliked philosopher is rather easier, in my opinion, than trying to actually cooperate with a disliked person.
An academic does benefit from "compromising" a work by shaping the presentation of ideas to suit an audience, even if that audience wants something the author doesn't like. A scholar may not want to limit their work in the same way that a publisher does. A scholar may not want to shape their work to sell, but may be forced to make such compromises.

It’s important to know what you want and what you need, but the ability to compromise about those desires increases the chances of reaching a cooperative outcome.

Tuesday, October 24, 2017

Cooperation requires respect for others

Partisan politics in the United States are extremely hostile right now. My voice is very quiet, no doubt, but part of mass action includes small actors taking their small actions.  I don’t particularly care for either of the two major US parties, as both are far too beholden to large corporations and the wealthy, in my opinion.  But I want to talk about the question of divisive ideas in the context of political debate, and also, so as not to stray too far from the putative academic focus of this blog, on the question of divisive ideas in academic discourse.

Recently, I saw and was struck by an image that expresses the nature of the division in the US.  The first version I saw was a map of the US with states colored red and blue according to their partisan voting.  Beneath the map was a key that defined the red states as “United States of America” and the blue states as “Dumbfuckistan”. In this age of Trump, the GOP is responsible for far more divisive language than ever in my lifetime, and this image seemed part and parcel of that. But it didn’t take much looking to find that the earliest versions of the image have the opposite key; the blue states are labeled “United States of America” and the red states are labeled “Dumbfuckistan.”

That kind of idea—that American citizens are not really American—does no good for the body politic.  How does one have a conversation with a “dumb fuck”? 

As soon as you label the other side of a debate a "dumb fuck," you vastly reduce any possibility for compromise, and you vastly increase the kind of emotional response that leads people away from the best modes of reasoning. It would, of course, be great if everyone used optimal logic and rationality in making decisions, but let's not fool ourselves: people don't. People tend to devalue ideas offered by people they dislike or distrust, even if they would accept the same idea when offered by someone they like or trust or identify with. This psychological phenomenon is known as "reactive devaluation." One might imagine that this effect is greater when the emotional connection is more powerful. Calling someone a "dumb fuck" is not going to reduce that emotional impact, and certainly will distract from considering value in the proposals.

Some theories of negotiation (Getting to Yes, Nonviolent Communication [NVC]) place a strong emphasis on the idea of empathy—on understanding the person with whom one is negotiating/debating. On a practical level, these negotiating guides emphasize the importance of understanding the position of the person sitting across the table, with specific emphasis on being able to echo back the idea that the other has expressed. Such a practice could lead to two effects: (1) the speaker whose ideas are echoed back would feel understood, and (2) the listener who echoes the ideas might better understand those ideas. Both of those effects, I imagine, would contribute to reducing reactive devaluation, just as thinking of the other as a "dumb fuck" might increase it.

Recently I was reading an article about a ball-bearing company that was closing a U.S. factory in Indiana and moving a lot of the jobs to Mexico. In the article, a plant employee said that what she wanted was a job and to be able to work, and she disliked the Democrats for talking about a social safety net rather than talking about getting people jobs. If I recall correctly, she was a non-voter who leaned Trump. Whether her view of the Democrats is correct in terms of the policies that Democrats would put in place is not so relevant as what this shows about how some people understand both Democrats and Republicans. To think of this woman as "dumb" for not supporting Democrats eliminates the chance of creating a positive dialogue that might reveal either that Democrats don't worry about jobs enough, or that Republicans are doing a good job of setting the terms of the public discourse.

Speaking more generally—to bring this around to the academic realm—in debate, no matter the realm, if you assume that your interlocutor is "dumb," you're not likely to have much discursive success, unless you're speaking to an audience that is already sympathetic. Treating a suggestion from a professor, or an academic/bureaucratic requirement, as "dumb" will not make it easier to communicate with those who made the suggestion or set the requirement.

Looking to understand the other, and to find points of agreement with the other, can bring people together, even if they disagree on some ideas. Looking to find fault in the other—to find that they are stupid or out of touch or some other judgement that suggests your own knowledge is superior—only creates division (it's also arrogant). You may be smarter than the other, but you may just be fooling yourself: another psychological distortion common in humans is the belief that we are more powerful/smarter/better than average. Regardless of whether you are smarter than the other or not, if you want to cooperate with the other, thinking that you're smarter doesn't help matters.

Working together requires respecting the other. You don't work well with someone if you think of them as a "dumb fuck."  If a political party really wanted to bring people together, it should be very careful to respect the ideas of others.  (This is not to suggest that one should accept the patently false just to acknowledge the ideas of others, but it does suggest trying to understand where those ideas came from, not just writing them off as the product of stupidity. Even a smart person can be given the wrong information which can lead to the wrong conclusions.)

Friday, October 20, 2017

New review of my book (2)

Another new review of my book (Getting the Best of Your Dissertation) was posted today (October 20), and it, too, is glowing:

A holistic approach to dissertation guidance  
I found this book at a time when I was feeling so anxious about writing my dissertation that I would sit down to write only to immediately stand up again and walk away. I have read and referred to other books on graduate school and writing, but found this one particularly useful because of its practical advice and attention to the psychological and emotional work of writing a dissertation. Since I had already gone through the planning and research phases of my dissertation, I got the most out of the sections of the book that addressed living with dissertation work and writing. Chapter 3 began with a simple but powerful reminder that the dissertation is meant to support my life and goals, and that I should not assume that it is acceptable (or wise!) to sacrifice my life for the dissertation. The advice in these chapters helped me to see the dissertation as a means to receiving a degree, rather than a monumental test of my overall intelligence and worth as a person. I also found the advice on writing practical and useful - I felt like the author was anticipating many of the excuses or mental traps I was falling into ("I just need to do a bit more reading" is an obvious one, but there were many), and helping me to avoid them or to move past them quickly. Overall, I highly recommend this book to anyone struggling with their dissertation or daunted by the prospect of beginning your research. Not only will it give you practical tools for finishing the project, it will teach you to be kind to yourself in the process.

For me, there's a special added bonus in that I don't know who posted this review—a bonus because someone I don't know is writing on the basis of the book itself (unlike the other new review, which was posted by someone with whom I've worked), so the review is not influenced by any personal connection or factors outside of the book.

New review of my book (1)

Recently, two people have posted new reviews of my book Getting the Best of Your Dissertation.
The first (posted on October 9) was posted by a former client of mine, so definitely biased, but also glowing:

From Dissertation Nightmare to Dissertation Success with Dissertation Dave - The Best Dissertation Coach in the World 
One of my favorite sections in this book is 7.2 Managing People, Especially Your Professors. I started working with Dave after making essentially zero progress on my dissertation after more than a year. I was doing a literature review and reading a lot of stuff, but not really making measurable progress. After working with Dave, I started to race through writing my dissertation. He is not someone who added dissertation students to his other schedule of activities, he is a full-time dissertation coach and dissertation expert. With his Ph.D. from Berkeley and his work with hundreds of students, no matter what dissertation disaster you are facing, I'm sure he can help you with this book and with coaching. Start with this book, but call him, because if you are not making progress or your committee is not helping you or worse against you, you can benefit from his dissertation expertise and experiences. I was already an expert in my discipline, but I was not an expert at navigating the significant politics and protocols that accompany the dissertation process - that's why I needed Dave! That's why you might need this book and Dave too. Dissertation Dave was so effective in eliciting dissertation writing from me, that my husband who was also working on his dissertation started working with him too. Dissertation coaching with Dave is a mega catalyst for dissertation completion. My husband also finished his dissertation, thanks to working with Dissertation Dave. I do not want to go into all of my dissertation headaches on Amazon, but I am telling you that I had at least 50 I can't believe this happened, I don't know if I can make it, God are you out there, moments. Thank God I finally found Dissertation Dave.

Monday, October 16, 2017

A Bad Letter On Time is Better Than A Good Letter Late

“A bad letter on time is better than a good letter late.” This is an idea I have long used as a quotation from the letters of Laurence Sterne, the 18th-century English author. It is, I find, a misquote of a letter Sterne wrote on August 3, 1760, which includes the following lines:
“thinking that a bad letter in season— to be better than a good one, out of it — this scrawl is the consequence, which, if you will burn the moment you get it—I promise to send you a fine set essay.”

The principle is one that I have used so many times that I am quite surprised I have only used it in one previous blog post, and never as the subject of a post in itself.

I was thinking of this quotation today for a couple of reasons, but then trying to find a subject for a blog post added another: I didn't have a clear subject that I felt capable of discussing in a relatively constrained format. I'm thinking a lot about the intersection of knowledge and politics, but there are a lot of separate threads that I'm having trouble untangling into any form that suits a short piece.

I was thinking about the quotation with respect to a client who is sure he can’t write. My response is that the only way to resolve that is to practice writing—to be willing to produce something—anything—that can be critiqued. Good writers practice. I don’t think there’s any way around practicing.  I was also thinking how being willing to write bad drafts allows the practice that is crucial for generating good drafts.  The more you practice, the better your writing gets. Ironically, the willingness to be wrong allows the practice that allows growth, learning and the development of improved writing skills.

I was also thinking about it in terms of another client who has a number of different places to submit material, and I think a bad letter in season is better. If you have something to show to other people, they have an opportunity to appreciate it and learn from it, and/or to give you feedback so that you learn from the process. Sharing something bad creates the possibility of working with other people. By contrast, insisting on writing a good letter means missing opportunities—especially if your standard for a good letter is so high that you struggle to reach it.

In one episode of the Great British Baking Show, one of the participants ended up throwing his cake into the trash. As a result, he was sent home from the show. Unlike the others, he had nothing to show, and that was the deciding factor. Had he shown any cake at all, he might well have survived for another week. For him, a bad cake in season would definitely have been superior.

Monday, October 9, 2017

Whose Responsibility is Communication?

My two previous posts were concerned with getting feedback and dealing with feedback, and this is following up on those ideas. I’m still thinking from the perspective of the writer concerned with the response, and particularly thinking about dealing with difficult feedback—complaints about the quality of work. I’m also thinking about a conversation I had with a friend about the purpose of music and of performing music.  The question in conversation was about the relationship between [author/performer/presenter] and audience, and where responsibility lies.

What burden lies on the performer to reach the audience? And is there any burden on the audience? In the previous post, I was writing about some comments that were difficult, and a lot of my response lies in my sense that the comments don’t reflect a sufficient attempt to understand the writer’s point of view.  But that idea requires believing that the reader has some responsibility in his or her approach to the work.

Different relationships between author/performer and audience bear different burdens of responsibility. A professor definitely has a different responsibility to the author of a dissertation than a bar patron does to a musician playing with no cover charge. But still, the question of where responsibility lies is one to consider, especially in the context of receiving feedback.

The bar patron hearing a no-cover musician bears little or no responsibility to the performer. Certainly there is some normal standard of decorum—the bar patron can't start yelling and trying to drown out the musician—but the bar patron certainly has the right to ignore the musician and to laugh out loud in conversation with a friend, even if that does interfere with the musician's performance. If the audience for the musician has to pay for admission, then the expectations shift: having an audience paying to listen to music creates a greater responsibility for members of the audience. Of course, asking patrons to pay also means that they have a greater interest in fulfilling that general responsibility of listening. As anyone who has attended an expensive arena concert knows, there always seem to be plenty of people in the audience who have bought tickets but whose primary interest is in the social event, not the concert itself, and who thus talk through the music. When people have paid for the music, though, this kind of behavior is less polite than identical behavior in a no-cover bar—it's a matter of degree.

This was the conversation that I was having with my friend, who was talking about the difference in the behavior of audiences who paid vs. audiences at a free event.  That focuses on audience behavior.  The flip side is to wonder about the desires and purposes of the author or the performer. How the author/performer views the audience’s responses depends on what the author wants from the audience.

For my friend, the heart of the matter was in the music: the musician, he believed, should not compromise the integrity of the music, and it was important to have people who were coming to respect the music.  For me, the audience matters, too: if the music is really only about the music, then what’s the need for an audience? Once you bring the audience into the picture, the music in itself is not the only concern.  

To what extent is it a sell-out to shape the performance to meet the audience?

And to what extent is purity lost, if it reaches no audience?

Writers need audiences, and that means convincing audiences that the reading is worth the effort. If you have the freedom to write whatever you want and then hope that someone will pick it up, that's great—though, like many writers, you may have to submit the work to many publishers before you find one that will take it. On the other hand, if your audience is fixed—if you know that it's a certain person—is it a sell-out to change what you do so that your audience will accept the work?

For writing more than for music, there is an underlying story or idea that could be transmitted in many different ways. To me, it’s that story that matters, and the form in which it is delivered is not fixed by the underlying purpose.

The Tao Te Ching opens by saying that the Tao that can be spoken (or written) is not the absolute Tao. But the book still continues to tell of the Tao. I think that writers need to think in those terms: the story that you tell is not the absolute version of the story, but you need to tell a story anyway. Research (and therefore writing about research) delves into realms of uncertainty—but that can't stop scholars, or the entire scholarly community would collapse. Research writing does its best to assert confidence, while still acknowledging the myriad limitations that any work of research faces.

Wittgenstein concluded his Tractatus Logico-Philosophicus with the statement that if one cannot speak accurately, one should remain silent (I'm paraphrasing slightly), and he never published another significant work in his lifetime—his Philosophical Investigations was published posthumously from his notebooks. Modeling your work as a scholar on the pattern of Wittgenstein—refusing to say anything unless it's exactly right and certain—is not a path to scholarly success.

If you are a writer, it’s useful to think about the gap between the ideas that you espouse and want to share and the many different ways in which those ideas can be expressed so as to reach different audiences. Reaching the audience is the writer’s responsibility. Although the reader may bear some burden of responsibility, it’s usually beneficial to simply accept the burden of reaching the audience: what does my reader want?  

(As a practical aside, understanding how to identify and write for an audience is extremely useful in getting published, because publishers want to sell books, and that means they want to know who you think your book will sell to.)

Monday, October 2, 2017

On receiving difficult feedback

In my last post, I was writing about how getting feedback is good, even when it’s bad feedback.  And I still believe that, even though I’ve just spent the last 30 minutes fuming over the quality of the feedback from the dissertation chair of the pseudonymous RSP (really smart person). 

To me, much of it seems petty and unnecessary. It angers me to see, for example, general statements that are obvious—beyond obvious—taken to task. But I look again, and I wonder, is it really obvious?

RSP and I share some fundamental views about the very nature of philosophy, especially with respect to the indeterminacy/indefinite nature of structures of knowledge (that’s not necessarily how RSP would phrase it, though), which leads to my accepting ideas that others are not so ready to accept. And that’s the issue: I’m not the person that RSP has to satisfy, and getting angry at the chair doesn’t actually help me find a route to satisfy the chair.

It’s a challenge to work through feedback like that. It’s the death of a thousand pinpricks. I read one comment, and I’m slightly annoyed. I read two, and I’m a little more annoyed. I read four or ten or a dozen, and I’m fuming. It’s not even my work, and I’m still more than annoyed at the feedback. There are comments that I agree with and comments that are complimentary. But those are respites in a sea of brambles picking at my skin.

Is this bad feedback?  That depends on the standards by which I judge it. By the standards that come most easily—the emotional response shaped by my immediate intellectual judgements about the feedback (e.g., being annoyed that the chair asks for a citation on a claim that I don’t think ought to be cited)—yes, it’s bad feedback.  Bad in two ways: first, it doesn’t give sufficient guidance on how to fix the problems (e.g., “I don’t like the way you do this” vs. “you need to take steps X, Y, and Z to resolve this problem”); and second, it is at times emotionally loaded (e.g., not only saying “this is a problem” but also “I don’t know why you refuse to fix this problem”).  The thing about those judgements is that they’re entirely based on my own perspective. What about the professor’s perspective?

I don’t know the professor’s perspective, of course, so I’m left to guess. And given that there is not enough clear guidance on how to fix it to be confident, my guess is a little bit of a shot in the dark. But it’s the best I can do…

In this situation, it’s interesting to try to imagine what the person who gave the feedback is thinking. What is it that the chair needs or wants that is not being delivered? Is the resistance a matter of opposition to the general project? Or is it resistance to a specific absence?  These questions are speculative, of course, but exploring them can be useful at least in defusing some of the emotion. Is the chair unable to understand some points? Or unwilling? Is the problem that the chair disagrees with something, or that the chair thinks something is unclear?

A dissertation writer is obviously a student who is in many ways at the mercy of the dissertation chair. But it still can be useful to think as a teacher: suppose, as a teacher, you have trouble reaching a student. Do you say that the student is too stupid? Or do you try to explain the same ideas from a different angle?
Getting feedback can be difficult to deal with, but trying to see through the eyes of the person who gave the feedback can help defuse at least some of the emotional charge.

Once you’re past the emotional charge (at least for a while): What is the plan to persuade that person of the value of your work? What steps can you take? In this case, and in many others, my next step is to look for the feedback that seems the best: there are dozens of comments in this draft—which ones do I think make good points that I want to address?  It’s with these that I will start, and the rest, I’ll look at later—maybe I’ll figure something out for them by trying to respond to the feedback that asks good questions.

None of this eliminates the emotional sting of a complaint, or the frustration of wading through pages filled with comments, but it does help me step back from the work to ask whether the same ideas could be conveyed in a different form. And what form would be suitable to satisfy the specific individual of significance (the chair)? The written work is not an abstract sharing of some idealized truth, but rather a lesson that teaches your reader the value of the work. If your reader doesn’t get it the first time, how can you do it differently to resolve the difficulties that appeared?

Monday, September 25, 2017

Getting Feedback is Good, Even When It's Bad Feedback

Feedback can be hard to take, but it’s necessary.  Simplistically, if your project is a total stinker, you need to know that. Of course, someone saying your work is a total stinker doesn’t mean that it is. Different things work for different people.
We all are limited in our perspectives: we know what we think, but we don’t know what other people think.  And when we’re trying to produce something that is intended to communicate with other people (if we’re writing or using other communicative media), what other people think is crucial.

Sometimes I think the feedback I would most like to get is someone saying they liked my work, and also echoing back my message in their own words. If someone says “I think you’re saying X,” and “X” is the message I was hoping to share, that’s a successful piece of writing.

If I have nerved myself up to give something to someone, getting no response can be painful in itself, so I’d rather get something terse. It can be frustrating if someone gives very terse comments—good or bad—because the comments may not give guidance on how to move forward.  But that’s a personal frustration: if someone likes or dislikes your work and doesn’t give you any more information than that, it’s still valuable feedback.
If they like it, you can rest on your laurels. Or you can work on things that you want to work on.  You can try to guess the reasons they liked it.  And you can at least feel good that you got positive feedback.
If someone says they don’t like your work, and nothing more, it doesn’t help you figure out how you can get that person to like a new draft, but it does give you some indication of the strength of the work in someone else’s eyes. It’s no good to have someone worrying overmuch about hurting my feelings. If the feedback I get is a sense that they’re unwilling to say what they really feel, I’m only left to imagine the worst, so I’d rather actually get feedback, even if it is “your work sucks.”

Whether your work is awesome or it stinks, having a sense of what other people think of it can help you decide how to proceed.  I’m trying to get a friend of mine to give me some feedback right now, and I want to assure him that telling me that my work sucks is better than him saying he hasn’t looked at it. Even if all he tells me is: “I gave it two minutes, and it sucked so much I didn’t want to deal with it any more.”

There is toxic feedback, of course: if someone writes that your work proves that you’re an imbecile who is a waste of food, air, and water, that’s not good. But such a personal attack mostly reveals the immaturity of the source of the feedback. For the most part, you can ignore personal attacks inspired by your work—only if they’re coming from someone on whom you depend (a dissertation advisor, for example) should you do anything more with personal attacks than ignore them. (I mean, assuming they’re limited to mean responses to your writing; obviously, if someone is slandering or libelling you publicly, you might want to take action, but that’s not really in the realm of getting feedback on your work.)

Monday, September 18, 2017

Expressive writing and mental state

I regularly tout the benefits of writing and of practicing writing (or at least it has been a common theme in my writing over the years, if not in recent blog posts).  A recent study at Michigan State University identified specific benefits associated with expressive writing—writing about feelings and thoughts.

The authors of the study compared two groups of students who performed the same main task (a test) along with a secondary task: either writing about what they did the previous day (not expressive) or writing about their feelings about the upcoming test (expressive). Their basic finding was that those doing the expressive writing were calmer (actually, they described it in terms of brain activation states, because they were measuring the students with electroencephalography).  The lead author used an automotive fuel-efficiency metaphor, saying the difference between the brains performing the expressive writing task and those performing the control (non-expressive) task was like the difference between a Prius and a gas guzzler from the 1970s.  The students in the two groups performed the same on the main task (the test), so there was no direct impact on performance on the test itself. I am unsure from what I have read whether the higher-efficiency brain activity induced by the expressive writing task lasted into the main task.  In any event, this is good evidence that there is a real benefit to writing about your own feelings about a task.

For people who are stuck, I have often recommended writing about their feelings about the project—which has sometimes worked. One reason I like having people write about how they feel about a project is that it can help reveal crucial theoretical assumptions. Another reason is that once someone has started writing about their feelings about the project, that writing can often transition into writing about the project itself. This study suggests that writing about how you feel about a project can also help calm you down.

The many who have suggested that writing has therapeutic benefits—and there are many such on the self-help shelves—now seem to have evidence to back up at least some claim of therapeutic benefit.

Generally speaking, writing is an important practice for people who will need to express ideas in their lives—both professionals and academics.  No matter how difficult writing may seem, it gets easier when you practice, and that allows you to work more efficiently because you communicate with others more efficiently.  This recent study suggests yet another reason to practice writing—or at least expressive writing: it helps improve your mental state.

Monday, September 11, 2017

Colleges and Universities are Good (revisited)

An article in the Washington Post this morning discussed the gap between how people in the U.S. see themselves and how people around the world see the U.S. and its residents. (Trump is Making Americans See the U.S. the Way the Rest of the World Already Did.)

While I think the author is a little careless in her generalizations, I generally agree with her main point that far too many residents of the U.S. are frightfully out of touch with the rest of the world. Certainly the U.S. public educational system does not dedicate great resources to understanding people from around the world.  I would not write a blog post just to agree with her, nor to take her to task for being a little careless in generalizing.  But towards the end of the article, the author makes a statement that angers me for its basic acceptance of the anti-intellectual trend that is polluting public discourse in the U.S. at present:
many other average Americans with dangerously naive ideas about themselves and their country grow up to become teachers, foreign correspondents, presidents. What they did not learn as children will not be cured by what they learn at elite universities, in self-regarding metropolitan centers or in graduate schools that for the most part tell them that the United States is the center of the planet and that they are the smartest on it. 
Do I think there are many Americans (U.S. residents) who have dangerously naive views of themselves and their countries? Absolutely, I do.  But do I agree that such dangerously naive views cannot be cured by universities or graduate schools or metropolitan centers? Absolutely not.  The view that colleges and universities are part of the problem, or at least are no help in dealing with it, is pernicious anti-intellectual propaganda that serves conservative and Anglo-centric perspectives.

Firstly, let’s just stipulate that arrogance or hubris are not good. It’s good to believe in oneself, to feel proud of who and what you are, but it’s not good to be arrogant about it. It’s one thing to believe in oneself, and it’s quite another to believe oneself superior to another. And yet another thing to let that self-regard keep you from learning new things because you think you know better.

Secondly, I’m going to assert that the general idea of American Exceptionalism is either trite or inappropriate arrogance.  If we say that Americans are different from the rest of the world in that they are American and everyone else is not American, it is trite and tautological (the band Camper Van Beethoven sang “If you didn’t live here in America, you’d probably live somewhere else” in the song “Good Guys and Bad Guys”).  If Americans are different in some other way, then that characteristic should be something real that we can identify and define. We could then see if Americans are actually different (and potentially superior) in that way. The “American Exceptionalism” generally posited by the political right in the US is little more than an arrogant “Americans are better because we’re American,” without any clarifying or signifying characteristic that makes Americans better. If American Exceptionalism said “Americans are better because they’re richer” (or smarter, or prettier, etc.), then we could discuss whether that was true using empirical evidence. And we could discuss whether being richer/smarter/prettier/etc. really translated to being better in any significant sense (what makes people “better” or “worse”, anyway?). If American Exceptionalism means “Americans make the best widgets,” well, if there is some way of proving that America makes the best widgets, then I’m all for American Exceptionalism. If American Exceptionalism just means “we’re better because we’re American,” then that’s unfounded arrogance.  To the extent that American Exceptionalism is tied to the idea of Manifest Destiny (which depends on the idea of the superiority of whites and Christians, and is a version of the “white man’s burden” myth), I reject it utterly.

It is possible to find arrogance everywhere, and maybe you do find it more often in elite universities and in “self-regarding metropolitan centers.” But what I would ask is: where are U.S. residents likely to find out about what people around the world think of the U.S.? You certainly could move to a foreign country, as the author of the article did (though living in a foreign country is no cure for arrogance, as colonial occupiers have demonstrated for centuries). Or, you could go to one of the places in America where you can meet people who aren’t from America.  You don’t have to leave America to meet people from around the world. You can learn from a Turk while living in Istanbul, but you can also learn from a Turk living in Berkeley, California while attending university. (One of the sloppy generalizations in the article is the notion that everyone in the U.S. is oblivious to what people in the rest of the world think. There are lots of people living in the U.S. who immigrated from other lands, or whose parents immigrated from other lands. Such people, by virtue of both personal experience and social connections, have a damn good idea of what people outside the US think of people inside the US. I get that the constraints of the article size limit the attention that an author can give to saying “I want to talk about something common in the US, but certainly not universal,” but the generalization is still sloppy: lots of Americans know what the rest of the world thinks of the US.)

Metropolitan centers are known for diversity of population, and this diversity is reflected in political realities. Who voted for Trump and blindness to the outside world? Not metropolitan centers. Metropolitan centers voted for the person who had served as Secretary of State for Barack Obama, who was widely admired outside the U.S. Metropolitan centers voted for the politician who believed in climate change, like the rest of the world believes in climate change. Metropolitan centers also voted for the politician who supported immigration, which reveals an inherent openness to new peoples with different ideas about the U.S. (An aside: to call the metropolitan centers “self-regarding” is to accuse them of arrogance. It’s an unjustified insult and a silly generalization. Wherever you go, some people will hold arrogant and unjustified pride in their homes. But in most places, there are justified grounds for pride. And in some cases—New York, Washington D.C., Los Angeles, and several other major U.S. cities—a certain self-regard is not out of place. The great cities of the U.S. rival the great cities of the rest of the world. Sure, Istanbul has thousands of years of history, and New York only a few centuries, but New York was a world cultural center rivaled by only a short list of other cities in the history of the world. In the middle of the twentieth century, New York was quite arguably the greatest city in the world. Washington, D.C. wielded military might unrivaled perhaps in history. Los Angeles and Hollywood influenced people around the world.)

Colleges and universities are also good places to meet people from around the world and to learn how they see the world.  If you go to college or university with an unshakeable belief in the inherent superiority of Americans (or white Christian Americans), well, college and university may not change you.  But such views are hardly common on university campuses (and not surprisingly, the GOP and conservative media often complain about the views that are expressed on U.S. university campuses).  University campuses try to harbor diverse views because an underlying view of research is that diversity of views helps develop debate. Universities almost always have foreign students and often foreign professors.  And again, the voting record clearly demonstrates that colleges and universities hold views that are more interested in understanding the outside world, and more focused on interacting with people in the outside world as equal partners, rather than as inferiors lacking whatever it is that is supposed to make Americans exceptional.  Is American Exceptionalism espoused by many on U.S. campuses? Well, generally professors and students both vote Democratic far more often than Republican, suggesting that the Republican appeal to American Exceptionalism isn’t generating enthusiasm on campuses. It should be noted that researchers—most professors at universities—are almost always working with scholars around the world, and they are trying to understand the ideas of the people with whom they work. Scholars may focus on their scholarship, but they’re not completely cut off from the rest of the world. Colleges often send students abroad in addition to bringing in students from overseas.

The metropolitan centers and colleges/universities voted for the candidate with the less insular views; they voted in favor of more interaction in the world, and less of an idea of “American Exceptionalism.”  Who did vote for the insular candidate? Who voted for American Exceptionalism? Not the metropolitan areas or colleges/universities.

So, Ms. Hansen, if your concern is throwing off the American-centric views that disturb you, then metropolitan centers and colleges are the most likely places where someone will be cured of those views, short of going and living abroad. Since the rest of the world probably won’t let 300 million U.S. residents come live abroad for a year or a decade, those colleges and universities and metropolitan centers are the best hope for curing Americans of their self-centered views. In the long run, sure, it would be great to change elementary and secondary education in the U.S. for more awareness of the wider world. But at present, colleges and universities and metropolitan areas are the best hope for the cure you seek for American blindness. Colleges and universities are good.

Update/Addendum: Another place you can find out what people outside the U.S. think of people inside the U.S. is on the web, even on U.S.-based publications, as with this article written by a Mexican. Truth is, it's easy to learn what people think if you want to learn. But you have to go to places where there are different voices to be heard--like metropolitan centers and institutions of higher learning.