Wednesday, December 31, 2008

Figure-ground reversal

This is by way of a promissory note in response to Sarah's comment yesterday. I actually had two separate ideas in response, but I forgot one. The other is about flipping and the general phenomenon of figure-ground reversals. I hadn't really thought about this phenomenon in the context of graduate studies, but I actually had one experience with it on the phone yesterday with a writer who--in the same phone call--told me both that her work was good and done for a purpose she saw as entirely unrelated to her professors, and that her work was done only to please her professors (she doesn't get a lot of support).


Anyway, figure-ground reversals: seeing the same image in two different ways. I'll try to remember the other comment, too.


Happy New Year!

Tuesday, December 30, 2008

Taking Action

I was thinking about Sarah's situation, which I described in the previous post. And about her response. And what I wanted to highlight the most is that the effective thing to do is to choose to take action and to focus on that action. By taking action and focusing on the activity we do the most to alleviate any pressures for two reasons: 1. we are acting to remove some of the source of pressure (presumably we will choose to act effectively), and 2. by focusing on the specific action, we focus on what we can do, thus getting away from the negative emotional states created in the sense of being overwhelmed.

The thing that struck me most about Sarah's comment was that she said I had given her good ideas for what to do, but that she had forgotten them. On the one hand, this is evidence that the emotional state is improving, which is a good thing. On the other, having a sense of specific actions that can be taken to alleviate the stress is also a good thing, so it worries me that the specific ideas were lost. And on yet another hand (for those of us who have more than two), there's also a sense that what I really wanted to communicate was an approach to problems: it was not the specific suggestions themselves that mattered so much as the idea that when feeling overwhelmed, the appropriate response is to make a plan of specific actions and to focus on them.

And what we're looking for is to improve the situation: we want to move in the positive direction--we want to make our situation better. It's not so much that I'm hoping to entirely banish any feeling of being overwhelmed--it's not like I can make the problems themselves go away, and having an injury--for example--will naturally contribute to difficulties in keeping up with the demands of a busy schedule. What I'm hoping for--and what I suggest seeking--is more the sense that we can do something to respond than to banish the pressure put on us by the situation. Or, to rephrase (redundantly, I suppose): it's not about eliminating the pressures that come from having problems to resolve; it's about creating better and more effective response patterns so that even if we feel the pressure of many competing demands that we may not be able to fulfill completely, we feel like we're making some progress in the battle, rather than feeling helpless. It's not a problem to feel swamped, if we also feel like we're able to swim.

It's hard to look back at our past and say "I did this thing poorly" without also getting stuck in some sort of negative cycle--because when we see the thing that we don't like in our past, as long as we remain at that level of analysis, we're creating more negative emotion. Only if we ask "how can I change that old result?" and "how can I create the future I want, instead of repeating old results?" are we switching our focus to the positive possibilities for the future. By practicing looking forward, we reduce the negative emotional impact of studying the past patterns of behavior that created results we didn't like.

Monday, December 29, 2008

Combinations

I was talking to a writer who was feeling overwhelmed.

"I'm starting a new chapter, and have a lot of material to manage; I also have issues managing my work space; I also have a trip to take; and, oh yeah, I also have an injury."

Well, that's a classic description of being overwhelmed: to be drowned beneath a mass.

Combinations of problems are made more difficult because each problem demands attention, and each has a negative impact on emotions. The fact that all the problems demand attention also tends to take us away from the most efficient way of dealing with them: one at a time. We can't do everything at once; no matter how good we are at multi-tasking, the truth of the matter is that we work more efficiently if we can concentrate on one thing for an extended period. Partly we will work more efficiently because we will spend a smaller proportion of time switching between tasks, and partly because focusing our attention on one task, and being able to deal with that one task, will give us better emotional stability to assist us as we try to deal with all the tasks.

Feeling overwhelmed by problems that are not life threatening is something different than literally being overwhelmed by e.g., a tidal wave or a horde of hostile soldiers. Although, perhaps even in such situations the best strategy to keep from being literally overwhelmed is to act as efficiently as possible to stem the onrushing flood.

Problems in our personal lives ought to be dealt with as a doctor in an emergency room does triage: which problems demand the most immediate attention?

So, here was my general plan for trying to manage feeling overwhelmed: first you take an overview of the situation: what problems do you have in the moment? Then you prioritize: how are you going to schedule and allocate time to each problem? And then, only once you have gotten an overview of the situation, and made a plan for how you will address the situation, only then will you take action on any specific problem.
I recommend this course of action as a general schema for dealing with large or complex problems. It is useful in that, when feeling overwhelmed, it gives us specific general steps to follow, and having a plan of action can help focus attention and calm us, rather than letting the negative emotional impact of the competing problems drag us down. And though it suggests specific steps to take (1. take an inventory of problems; 2. prioritize; 3. schedule; 4. act), it is not highly restrictive and is generalizable to all situations, with the possible exception of split-second decisions. We can adjust the effort we invest in each step to the situation at hand. If the issues we face have to be handled in a matter of minutes (e.g., a quiz in class, or even a difficult question in an interview), we simply allocate less time to each step: in the quiz we might allocate one minute to looking at the questions and getting an idea of which ones will be hard and which easy, as well as which will be most valuable; in the interview we might take a few seconds to think through the different parts of the question and try to assess which are of greatest concern to the interviewer. If our time is short, then we keep our initial overview short, but we can still benefit from it.
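For the programmatically inclined, the four-step schema can be sketched as a tiny routine. This is only an illustration of the idea, not a prescription; the task names and urgency weights are hypothetical examples:

```python
# Illustrative sketch of the schema: 1. inventory, 2. prioritize,
# 3. schedule, 4. act. Task names and urgencies are made up.

def make_plan(problems):
    """problems: list of (name, urgency) pairs; higher urgency = sooner."""
    # Step 1: inventory -- simply listing everything is itself calming.
    inventory = list(problems)
    # Step 2: prioritize -- triage, most urgent first.
    inventory.sort(key=lambda p: p[1], reverse=True)
    # Step 3: schedule -- here, just an ordering; in real life, time blocks.
    return [name for name, _ in inventory]

plan = make_plan([("new chapter", 3), ("work space", 1),
                  ("trip prep", 2), ("injury care", 4)])
print(plan)
```

Step 4--acting--is of course left to the person, one item at a time; the point of the sketch is only that the overview and the ordering come before any action.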

By breaking down the combination of problems into a set of discrete steps, we respond to situations most effectively. And by the same breaking down of the situation into separate parts and separate steps, and by focusing on one step at a time, we can most effectively counter our emotional sense of being overwhelmed.

This is hardly news, right? The idea of planning first and acting second is hardly a surprise. But we have to remind ourselves of its value when we're feeling overwhelmed. And we have to remember that it is a behavior that can be carried out at different time scales.

Saturday, December 27, 2008

Framing effects, reason and planning

Nobel-prize winner Daniel Kahneman, along with his primary collaborator Amos Tversky, and many colleagues showed that we do not always reason "logically".

For example, a patient faced with a life and death decision is more likely to choose a treatment that has a 90% survival rate than to choose one that has a 10% fatality rate. But, of course, a treatment plan with a 90% survival rate has a 10% fatality rate: the two are identical. The only difference is in the framing: one is framed in terms of life (which is positive and desirable) the other in terms of death (undesirable).

In terms of motivating ourselves, and in terms of doing good work, I think that results of this sort emphasize the importance of choosing positive framings for how we see our project. The same project can be both boring (during much of the work) and exciting (at the moments when the work comes together and progress is made), etc. How we choose to frame it can affect our plans with respect to the work, and can affect our mood, and therefore our ability to work (at least to the extent that I believe that we work better when we're in a good mood).

We want to work on building a positive framing for how we see the project and for framing the project outcome, and let that serve as the primary focus. We want to be able to plan for the worst cases, and we want to have the ability to respond to unexpected obstacles--we don't, in short, want to be naive, imprudent or impetuous--but generally we want to focus our attention on what we are trying to create and how we are going to bring that into existence. We want to frame our analysis of our past behavior and results in terms of how we can learn to do better in the future. Such framings are more likely to contribute to active plans and less likely to create negative emotional drag.

There isn't just one way to look at the world. The same thing can be seen in different ways: the glass is half full and half empty. What the results of Kahneman, Tversky and their colleagues show is that the two different framings have an effect on plans (and on emotions, too, I speculate, though I don't know if that is a reported result). That is to say that if someone tells you "the glass is half full" you are likely to make a different decision than if someone tells you "the glass is half empty;" the fact that you make a different decision is indicative of a potential emotional effect of the framing.

Friday, December 19, 2008

Why Do We Cite Papers?

A writer sent me a link to a page written by a professor of software engineering, Jeff Offutt, at George Mason University. I know nothing about this person beyond a very brief perusal of his website. He's actually got a number of articles related to writing a Ph.D.; I've only read the one I include in this post.
I don't necessarily endorse or agree with his positions, but that doesn't mean there's nothing valuable in them.

OK, so here's what he says:

Why Do We Cite Papers?

Excerpts from a conversation with a PhD student in 2005.

First, definitions.

A reference is the publication information about a paper. It should have enough information for a reader to find the paper.

A citation appears in the paper and points to the reference, which is usually at the end of the paper.

When a paper does not cite a key reference (or several), there is a concern. There are actually several possibilities:

1. The references should be there just as a matter of record.
2. The references tell the reader that the author knows the field.
3. If the author does not know the key papers, he or she may well be making mistakes in the work. There are at least four categories of mistakes:
1. Repeating work that was already done
2. Finding solutions to problems that are not as good as already published solutions
3. Finding solutions that are less complete than previously published solutions
4. Going in the wrong direction

If the author is lucky, then the only issue is number (1). Issue (2) will make it harder to get the paper accepted, for example, if the reviewer doubts that the author is sufficiently prepared to work in the area. If the problem is (3), the paper should not be published, and if it is published, it makes the author look dumb and the conference or journal irresponsible. If references are missing but the work is still sound, the paper should be accepted and the author should be told of the missing references. That is, a lack of references in and of itself should not be a reason for rejecting a paper.

Of course, I have omitted an all too common issue: The author omitted one of the reviewer's papers and missed the chance to stroke the reviewer's ego. Judging a paper on whether it makes our egos happy is unscientific and unprofessional. The fact that software engineering authors have to worry about it is an unfortunate comment on the lack of maturity of our field.


Except for the last point, which I would only consider in a sticky political situation, there is basically no overlap between his reasons and mine. Which is not to say that his reasons aren't important...
Why I cite:
1. to give credit where credit is due, and
2. to use the work of other researchers to explain my position.

Monday, December 15, 2008

Defining personal space

Just some random musing--I don't, at least as I start writing, have a conclusion.

A writer told me that I had been helping her set boundaries, especially with her committee, and that it was helping. That seemed odd to me, because I couldn't remember ever talking about setting boundaries with her. I do remember talking a lot about having a sense of purpose, about believing in herself and bringing out her own voice, and about tapping into the things she felt most strongly about--then using that to tell her committee what she wanted, using what they gave back as much as it was useful to her, not getting stressed about the parts that weren't useful, and just staying focused on her own sense of purpose.

I understand, in retrospect, how these things can be seen as boundary setting issues.

But I guess I was thinking of them in a different way. We can define spaces--actual or conceptual--in two ways: in terms of distance from some central point(s), or in terms of boundaries. These can accomplish the same thing, but they don't act quite the same way.
Sometimes we have clearly delimited boundaries--between nations, we find rivers often are used to determine boundaries; a vegetarian sets a clear boundary between what will be eaten (everything but meat) and what will not (meat); religious fundamentalists tend to set strict boundaries.
Sometimes boundaries are not so clear: where is the clear line between music that is too loud and music that is not? Where is the line between a photograph that is over-exposed and one that is not?

I like to think in terms of centers--in terms of principles--more than I like to think in terms of boundaries (though, as I say above, both are useful). Boundaries limit our ability to adapt and negotiate. I feel like when I'm focused on the central issue that I'm concerned with, then I can allow various changes in the periphery without sacrificing the central principle that is important to me. When I have a boundary set, it's harder to give it up in a compromise, even if that compromise allows me to attain a central goal.

To choose a stark example, we might look at the biblical commandment "Thou shalt not kill." This is defined in terms of a boundary: there is a clear line marked--the line between killing and not killing. The guidance derived from this boundary is clear. But that clarity can be problematic: what if you are faced with the option of killing one person to save the lives of hundreds (or thousands or millions...)? The boundary definition creates a dilemma: death will ensue, and to some extent one will be responsible, but the boundary rule pushes one towards allowing the individual to live. We can recast the same idea as a principle: strive to preserve life. This principle sets no boundary, and in the case described above, it clearly guides the user: one life is taken to save others. Perhaps this is a bad example, because the question is very tricky and loaded. Obviously this is the sort of justification that the Bush administration used to justify torture: "well, we don't want to do it, but our higher principle excuses it."
Nonetheless, I still find it powerful (without being reprehensible) to work from principles. In the case of the writer I was talking about at the beginning of the post, all the things that she saw as a matter of setting boundaries, I saw as a matter of her developing a good understanding of her core principles and trying to apply them. She's thinking about it in terms of setting boundaries that keep her from accommodating her committee when it serves her ill; I'm thinking about it in terms of her clearly identifying what she wants and then clearly asking the committee for it. I guess I like being able to see it both ways and to think about it both ways.

I guess, also, I'm a little disturbed when I see how the use of reasoning from principles, instead of boundaries, can allow terrible justifications. But the flip side is that boundaries are also used to terrible ends--such as the boundaries set by certain cultures that allow the exploitation or destruction or oppression of another culture (e.g., the clear boundary that defined Jews in Nazi Germany, or the clear legal boundaries that defined who was "colored" and where "colored people" were allowed to go in the Jim Crow laws of the US).

Well, both ways of looking at things can cause problems, I guess, so it's best to understand how we can define spaces--in our lives and in our discourse--in two different ways, and having those two ways helps greatly.

Another thought: it had been kicking around in my mind, but hadn't popped out. How are categories defined? What makes a category? How is membership in a category determined? The classic, rationalist view is that categories are defined by a boundary, but cognitive science research (esp. by Eleanor Rosch and her colleagues) shows that conceptual categories are often structured around a central prototype (a model, paradigm or exemplar) and not defined by boundaries. Studies of semantics show that the usage of words is typically defined in terms of central models that are then extended to new meanings (cf. Lakoff, Women, Fire and Dangerous Things). So, in terms of how we think about the world (and in terms of how we might want to set up an argument about the world), we need to understand the difference between defining concepts in terms of boundaries or in terms of centers/prototypes.

I'm not going to try to draw this together into any meaningful conclusion.

Thursday, December 11, 2008

Dissertation writing books and related stuff

A friend pointed out to me an article that looks at dissertation-writing self-help books:

The Failure of Dissertation Advice Books: Toward Alternative Pedagogies for Doctoral Writing
by
Barbara Kamler and Pat Thomson
Educational Researcher, Vol. 37, No. 8, 507-514 (2008)

The research is basically looking at the genre as a genre, so, not surprisingly, they make sweeping generalizations, but that's necessary for research anyway: even when we study a singular case (i.e., a case study), we have an eye to what that case can teach us about other cases. Freud's case studies, of course, provided the foundation that produced claims like "men face the oedipal complex; women have penis envy." I don't like careless use of generalizations, but I also recognize the necessity of being able to generalize (In his story "Funes the Memorious," Borges writes "To think is to forget differences, generalize, make abstractions.")

They make four main complaints against the genre as a whole; the books:
1. support/create an expert–novice relationship with readers,
2. reduce dissertation writing to a series of linear steps,
3. reveal hidden rules, and
4. assert a mix of certainty and fear to position readers "correctly."

I think the article was interesting, and if you're out shopping for a book on dissertation writing, it's worth the read, just for thinking points on what to look for.

Personally, I find it hard to avoid three of the four points in expository writing.
1. It is presumed that the author knows something that the reader doesn't. The point of expository writing is to reveal something important that may not have been seen before, e.g., the results of research, or an insight from introspection/theoretical development (in, for example, mathematics or philosophy).
3. It is presumed, as above, that something hidden will be revealed, and inasmuch as that which is revealed is presumed to be "true" or at least supported by good research, we are presumed to take it into account in future behavior--of course often the knowledge won't have an impact on future behavior but it may, in the right circumstances (e.g., research that classifies widgets in some manner will implicitly set up a structure that other research on widgets should use).
4. Assert a mix of certainty and fear: well, maybe expository writing doesn't explicitly play with this as much as the self-help genre does, but following the premise that an expository work is arguing for a specific description of how the world works, it necessarily carries with it some threat of bad things following from not following the author's view (e.g., to follow up the previous example, the widget-researcher who fails to use a "true" classification scheme will ultimately create useless research).

To a lesser extent we can find something like #2 in much expository writing, too. #2 is too specific--but if we replace "dissertation writing" with "operation of systems (whether prescriptive or descriptive)" for the sake of generalizing--we find that this claim is basically that we are attempting to explain how things work or how to accomplish things: we reduce the complexity to a set of instructions that the reader can (hopefully) follow.

So, without meaning to impugn the authors, who are writing expository work and therefore cannot avoid three of the four things they decry, here are examples from the article:
1. They position themselves as experts by virtue of both their theoretical apparatus and their described methodology.
2. (this one doesn't easily fit)
3. They assert hidden rules (which are, in this case, the inverse of the points they decry), and
4. They assert that there is a possibility that dissertation-writing books that do not follow their rules may do harm, and they contrast this to their authority as researchers (fear and certainty).

That being said, I think I agree with their basic points, especially point 2. One of the main theoretical premises of Horst Rittel about the design process was that it cannot be reduced to a series of steps. In my work as a writing coach, that is one principle that I believe in strongly.

As for the other points:
1. It is important--fundamental, in my view--that the researcher understand that the dissertation is about developing their own voice (as my many posts labeled "your voice" will attest). Therefore, the researcher has to learn that to make the dissertation work, they have to start asserting their own voice and following their internal guidance, rather than looking for an outside guide. After all, the dissertation is partly about being able to guide yourself through a major research project--that's supposedly what makes you eligible to be a researcher yourself.
2. We don't want to reduce processes to a simple set of steps for many reasons that I'm not going to discuss here.
3. Aspiring researchers and writers don't want to simply follow someone else's guidance--they want to test the ideas and see whether they work--it's not about following someone else's rules (as discussed in point 1), it's about learning to make your own.
4. All people should balance caution with boldness. We neither want to be so cautious that we are paralyzed, nor do we want to be so bold that we are rash or reckless. I'm not sure that any writer who wants to convince the reader of something is free from the implicit "if you don't believe me, your life will be worse." A self-help writer whose aim was to help people be fearless might write a book that never mentioned fear, that only looked at the results of studies that looked at how to improve courage and talked about techniques to promote courage, and still have hanging, unspoken in the background the fear-inducing "if you don't do as I say, you will be fearful."

Wednesday, December 10, 2008

Controlled Experiments and Discoveries

Knowledge advances in two ways: intentionally and by accident (and these are, by definition, mutually exclusive: that which is not intentional is an accident, and vice versa).

It's worthwhile to understand what it is that constitutes knowledge and research, because having that fundamental understanding gives us the greatest opportunity to learn from the data we have.

I had considered titling this post "Found Art" because some research is largely "found art": that which was discovered serendipitously but was, perhaps, thought refuse.

But the motivation for this post was talking with a writer who had attempted to run an experiment, and the experiment failed. "I should try something else," he suggested. But I'm wondering what could be found in the wealth of data generated by what he did do. It may not be that the failed experiment will provide a gem of information, but it might provide valuable insights that will guide future research.

Controlled experiments are one of the paradigms of research--it is intentional research in its most extreme form: possible outcomes are limited as much as possible to that which can be accurately measured.

In the laboratory a great deal of control can be exerted to limit different kinds of variability. That kind of control cannot be exerted in the field. And so controlled experiments may break down for various reasons beyond the control of the researcher, eliminating the possibility of getting the results that had been desired and intended.

In the field, however, if you are documenting the process extensively, even a failed experiment will generate masses of data that can be processed and analyzed for insights that were missing when the experiment was set up.

The first place to look is the failures. Your experiment failed because of record-keeping lapses by the participants? What does this teach you about setting up an experiment that will work in similar conditions? Does this suggest a failure to engage in the experimental activities? Why? What can the failures of the experiment teach about setting up an experiment to study the issue that motivated the original study? The failures, in some cases, may tell you about the very thing that you're testing, too. Do they indicate any results that would indicate that there are problems with the general premises under which you are operating?

Careful examination of a "failed" study can be quite valuable because of all the data generated--it simply requires one to look at the data in a different way--to see it through different eyes--to see the urinal as art.

Friday, December 5, 2008

Boiling water and the complexity of our actions

Once I was a TA in a basic computer course (back in 1995); one exam question (or quiz or homework) asked students to write "pseudo-code" for boiling water ("pseudo-code" being a sort of plain-English description of an algorithm).

The expected answer was something like
Get a Pot
Fill it With Water
Put it on the Stove
Turn On the Stove
Wait

And that is a basic level description of what we do. But there's far more complexity there. The reason I write this is as a follow up to the previous post about research questions: we have basic ideas about the world, but when we examine them, we find that they open up into great complexity.

This sort of thing happens with computer programming, too, and that was the problem a lot of the students had with moving from five lines of pseudo-code to a working program--they just didn't recognize all the little details that are worthy of attention. Similarly, when we're looking at an assertion, we need to recognize all the details that are worthy of attention and examination.

Let's take a brief look at the pseudo-code. The first step: "Get a Pot."
Easy? Yes, it's easy if you have a pot, or know where to get one. Let's say you have a pot that you intend to use and it's in your kitchen. Then "get a pot" requires first going to your kitchen, which itself may not be trivial, if, for example, you're out running errands. You have to go home, then once home you have to go to the kitchen. Once you have gotten to the kitchen, you have to find the pot. This may be easy if your kitchen is well-kept. But maybe the pot is already in use--then you have to find an alternate pot. Or maybe the pot is dirty, and you have to wash it. Or maybe the pot isn't where you would expect it because the friend you had over for dinner the previous evening put away the dishes. Once you've gotten the pot and it's clean and ready for use, then you have to fill it with water. Again, we can find complexity here if we look for it. We need to have running water, we need to keep the pot oriented in the right direction, we need to keep it still, we need to support its weight, etc.
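To put the same point back in the terms of the original exercise, even the single step "Get a Pot" expands into its own pseudo-code when we attend to the details. Here is a small, runnable sketch of that expansion; the pot states and actions are illustrative stand-ins, not an exhaustive account:

```python
# Illustrative expansion of the single step "Get a Pot". Each branch is one
# of the hidden details a working program (or a real person) must handle.

def get_a_pot(at_home, pot_state):
    actions = []                        # record what "get a pot" actually entails
    if not at_home:
        actions.append("go home")       # "get a pot" may first mean a trip home
    actions.append("go to kitchen")
    if pot_state == "in use":
        actions.append("find an alternate pot")
    elif pot_state == "dirty":
        actions.append("wash the pot")
    elif pot_state == "misplaced":
        actions.append("search the kitchen")   # a helpful guest rearranged things
    actions.append("pick up the pot")
    return actions

print(get_a_pot(at_home=False, pot_state="dirty"))
```

The one-line instruction turns out to conceal a whole decision tree--which is exactly the gap the students fell into when moving from five lines of pseudo-code to a working program.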

We want to look at our assertions at this level of detail, and then the research questions will start popping out at us.

Thursday, December 4, 2008

Research and Research Questions

Some research--or at least some important discoveries--are not made as the result of a specific research question. We might imagine Newton under the apple tree: the discovery of an idea that appears through the data. Similarly we might imagine that Darwin had no interest in evolution, but was only cataloguing the creatures he observed in his travels.

Such research need not be driven by a question; it comes about serendipitously. And that is somewhat problematic when there is an expectation to publish, to complete research projects and write them up. Can we just wait for a discovery to come to us as we peruse ever more data? That depends on what you want your life to be.

However, if your goal is to finish a research project so as to get a degree or to get published, then it helps to have a research question.

Research questions shape a work and guide it. They provide the focus. Any question could be a research question, but some are better suited to study than others.

A research question comes from a way of looking at the world. It starts with a basic perspective. We each have a fundamental, mostly unconscious set of ideas about how the world works. This then shapes the way we interact with the world and the questions we will ask of it.

We might, for example, believe the Christian creation myth, and set off on a journey to discover and document the many different species that survived the flood on Noah's ark. This would all be consistent with a desire to know all of God's creation that it might be celebrated. Our research question in such a case might be "are there any creatures that have not been documented yet?" or "how many undocumented creatures can I find and document to the greater glory of the Lord?" These research questions are unlike the questions asked in most universities in America, but they are consistent within a certain world view, in much the same way that the questions asked at a secular university are consistent with its own world view.

Whatever your world view, it is the place from where you start: "I believe the world operates this way," you assert. Usually we have a number of assertions about how things work: "the sun will rise tomorrow; water will run when I turn the tap; e=mc^2; the sperm fertilizes the egg; drinking too much alcohol will make me sick; etc." We have a whole world of assertions; each of us has slightly different ones. Some of them we accept without question, and some of them we're curious about.

We might believe "Process X will improve the quality of my work." It's of obvious value if it's true. As a researcher, it is appropriate to be skeptical. Is this assertion true? And that becomes the research question that you test. The first place to look for answers, of course, is in the literature. Has anyone else asked this question? If so, what was their answer? If not, has anyone asked similar questions? Perhaps no one has asked "will Process X improve work in my field (let's call it 'field A')?" but someone has asked "will process X work in field B (which is related to field A)?"

By starting with what you believe, and testing what you believe, you can move through the logic of your area of interest in search of a question for which you want an answer but for which no answer is to be found.

Maybe you found someone, Dr. Q, who said "Process X will improve work in field A," but their argument was only theoretical, and they never tested it. This then becomes an assertion in need of an empirical test, so you can set up a study to see whether it will work in your field.

Maybe you also found someone who said "Process X works in field B if you make adjustments 1, 2 and 3." One thing you could do is to say, "I want to test process X in field A, as Dr. Q suggests, but I want to make adjustments 1 and 2 because of the similarity of fields A and B." Or you could say "I wonder whether the conditions that require adjustment 1 in field B also hold in field A, and if so will adjustment 1 suffice in field A?"

These examples illustrate ways of thinking and of asking questions. The premises and assertions in them could be replaced by any assertion or premise. Once we have a premise, we can start to examine whether it is true and what reasons we have to believe it, and we can go from there.

What is Research?

Research can be construed in many different ways, but one way to look at it is the exploration of hypotheses: we believe the world works in a certain way, but we don't know for sure, and we want to test that hypothesis.

So we might, for example, to choose a culinary analogy that may not hold up, have a hypothesis that tofu and pomegranate would go well together in a raw salad.

We may have reasons to believe this: we may, for example, have read a review of a restaurant that served such a dish, or a cookbook that suggested the flavors would work well together, or a chemistry or biology textbook that leads us to believe that certain chemicals in the two would combine well. We have reasons to believe the hypothesis. To the extent that such reasons are supported in the published literature, and that our idea came from reading the literature, we can add such elements to our literature review. But presumably there is not such a preponderance of evidence as to make the exact thing we're studying certain (e.g., there are no reports of a tofu-pomegranate salad craze in major metropolitan restaurants, or other indication that our hypothesis has been extensively tested).

In order for us to be doing research, we have to be testing something that is at least in question. As far as empirical science is concerned (whether social science or hard science), a hypothesis, no matter its logical antecedents, is worthy of empirical testing if the given empirical test (or one substantially similar to it) has not been executed. The fact that theory suggests that something will happen is no guarantee that it will.

Things are complex. It's simple to say something about preparing tofu and pomegranate, but that hides a great deal of potential complexity, and potential difficulties. To make up an example, it might be the case that tofu and pomegranate go well only with a third ingredient, but that ingredient is rare, or expensive, or hard to work with in some way, making practical execution of a dish infeasible, even if the theory suggests that it should work.

The scientist looks for this complexity within the simpler statement. "Tofu and pomegranate will go well together" is a simple hypothesis, but it suggests more detailed hypotheses and issues: will they go well together when fried? when raw? when boiled? when mixed with vinegar? Will any problems crop up? What will be done to find out? The scientist looks at the simple hypothesis and asks detailed questions about how it can be true. The starting place for that exploration is intellectual: what do you know about the situation? what are the important ideas that define it? what kinds of theories shape your understanding of it? what kinds of questions can you ask about it? What details are pertinent? Where do theory and practice diverge? And what is the impact of that divergence? If you're attempting to import a theory or practice from one type of endeavor to another, what differences are there going to be? For example, maybe one heard that pomegranate and chicken go really well together but, being vegetarian, thought of pomegranate and tofu instead. What reasons do you have to believe that the translation will work? What reasons do you have to believe that it won't?

Looking at a hypothesis with a critical eye, in search of detail, you will find that many different questions, ideas, and possibilities arise. Practically speaking, each needs to be tested individually. So if you start with a general question, you're looking to find a specific aspect of that question that you can test. If you think pomegranate and tofu will work well together, but that they'll need some sort of seasoning, then you'll try preparing some with one set of spices and flavors and see if that works, and then you'll test with a different set and see if that works. You won't throw all the spices in at once, because then all you get is confusion. So with a question that can be fragmented and broken down, you want to seek out the different questions that could contribute to answering the main question, and look to answer one of those more detailed questions.

But whatever you're going to do, it starts with your looking at the world and putting forth a hypothesis: "I believe the world works this way," you say to yourself. For example "I believe pomegranate and tofu would taste good together," or "I believe that method X, used in field A, will also be useful in field B, despite some differences between those fields."

You start with an understanding of how the world works, and an idea that it will work in a certain way. Then you look to see what evidence you have to support that view. If you think the evidence that the view is true is overwhelming, then it's not an interesting research question. But if there is doubt (perhaps there are people who believe it isn't true, perhaps you doubt it yourself; it doesn't matter where the doubt arises), then there is a viable research question: you believe the world works in a certain way, and you try to find a way to test that idea. It all starts with how you understand the world and your exploration of the places where you are uncertain and curious.