Friday, December 30, 2016

Context and Attention Drive Personality

At times, almost everyone is inclined to frame an important issue according to a status quo concept. That certainly is true regarding the notion of personality.  For instance, in 2016, approximately 75 percent of published personality-oriented articles either made reference to or focused exclusively on the Big Five (BF) theory of personality.  I, too, have mentioned the Big Five frequently in this blog (see, for instance, Personality Change in Adulthood), doing so because of the theory’s dominance in professional research and literature.  But that does not mean that the BF is beyond reproach.

Let’s briefly consider another intriguing and useful way to interpret personality: the Context-Appropriate Balanced Attention Model (CABA).  First I will paraphrase the theory's explanation as discussed by Michael D. Collins, Chris J. Jackson, Benjamin R. Walker, Peter J. O’Connor, and Elliroma Gardiner (2016).  Then I will infer how the relatively novel approach might have relevance to our understanding of physical and mental health.

The BF provides a relatively simple and straightforward system from which to explain differences between people.  Most of us readily accept the notion that individuals usually can be classified reliably as extroverted or introverted, open or closed to experience, conscientious or non-conscientious, agreeable or disagreeable, and neurotic or emotionally stable.  But the BF is of limited utility in describing the factors within a person that determine why they are as they are, or that cause them to be more or less personality-consistent over time and across circumstances.  For instance, the BF does not explain why someone would be more introverted as they age or when they are in the presence of the opposite sex.

By contrast, the CABA applies mostly to how individuals adjust their behavior over time and across circumstances.  As the name implies, the Context-Appropriate Balanced Attention Model operates via the allocation of attention within a given context.  It posits that we have only a limited supply of attention upon which to draw to activate adaptive thoughts, emotions, and behaviors and/or to inhibit maladaptive ones.  In any given situation, when adaptive processes are dominant, we naturally attend to that which is adaptive; when maladaptive processes are dominant, we naturally attend to that which is maladaptive.  Therefore, to change the situation-specific dominant mode from maladaptive to adaptive or vice versa, we must be able to redirect our attentional resources to the new focus.  That, of course, presumes that we are aware of the need to shift attention and capable of exerting the effort necessary to do so.  Michael Collins and his colleagues relate the CABA model and the CABA processes to their neurological substrates, but discussing that would take this blog too far afield.


For us, CABA helps underscore the role of attention, effort, and situation in determining our lifestyles.  Suppose you are an overeater and seek to overcome that condition.  According to the CABA model, your dominant mode involves attending maladaptively to food stimuli, and therefore eating too much, too often, in one or more specific situations.  To reverse the condition, you must learn to redirect your attention in those specific overeating situations.  Your dominant mode needs to become one in which you attend to non-food stimuli with adequate power to maintain your non-food focus.  Obviously, you will not be able to reallocate your attentional resources if you remain unaware that your attention is being drawn excessively to food.  And anything that depletes your attentional resources (e.g., fatigue or alcohol) will make your desired change less likely to occur.
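
To make the mechanism concrete, here is a minimal toy sketch in Python (my own illustration only, not the CABA authors' formal model; every name, number, and threshold below is hypothetical).  It captures just the bare claim made above: whether the dominant food-focused mode can be shifted to a non-food focus depends on being aware of the pull toward food and on having enough attention left over after depletion from things like fatigue or alcohol.

```python
# Toy sketch of the CABA-style claim described above.
# NOT the authors' formal model; names, numbers, and thresholds are hypothetical.

def remaining_attention(total: float, depletion: float) -> float:
    """Fatigue, alcohol, and the like are modeled crudely as a fraction
    that shrinks the attention budget available in the current situation."""
    return total * (1.0 - depletion)

def can_shift_to_adaptive_focus(aware_of_pull: bool,
                                attention_left: float,
                                shift_cost: float) -> bool:
    """Shifting from the dominant (maladaptive) food focus to an adaptive,
    non-food focus is assumed to require both awareness of the pull and
    enough spare attention to pay the cost of redirecting it."""
    return aware_of_pull and attention_left >= shift_cost

# The same overeating situation, rested versus fatigued:
rested = remaining_attention(total=1.0, depletion=0.1)
tired = remaining_attention(total=1.0, depletion=0.6)

print(can_shift_to_adaptive_focus(True, rested, shift_cost=0.5))  # True: a shift is possible
print(can_shift_to_adaptive_focus(True, tired, shift_cost=0.5))   # False: the food focus stays dominant
```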

When you seek a lifestyle change, then, know the contexts most likely both to promote and to inhibit the new desired behavior.  With that knowledge as your guide, plan how you can regulate your contexts and attention adaptively toward the desired and away from the undesired stimuli.  I can frame this important issue according to a status quo concept familiar to all and advise you to be "mindful" of what you want to do, what you want to avoid, the contexts that promote each, and how you control your attention.

     
Reference:


Collins, M. D., Jackson, C. J., Walker, B. R., O’Connor, P. J., & Gardiner, E. (2016). Integrating the Context-Appropriate Balanced Attention Model and Reinforcement Sensitivity Theory: Towards a domain-general personality process model. Psychological Bulletin. Advance online publication (November 28). http://dx.doi.org/10.1037/bul0000082

Saturday, December 24, 2016

Whom Do You Trust?


Whenever we are advised to perform some health-affecting practice, we face a question: Is this a worthwhile endeavor?  Always a difficult question to answer, it is especially challenging in this age of information overload.  That is not to say that each decision is equally momentous.  When told that chicken soup will cure your cold, you need not expend much cash, energy, or effort to comply with the advice, and you risk little whether or not you follow through.  On the other hand, you can readily find printed material, Internet postings, and verbal advice about such things as whether to take a prophylactic aspirin to protect your cardiovascular system.  The "correct" answer is not always obvious or uncomplicated.  And depending on your decision, you could enjoy significant health benefits or experience significant health risks.

If the health-oriented information presented to you is ambiguous, conflicting, or consequential, a number of factors are important to consider.  Let's focus on two central ones framed as a polarity: You can look within yourself to reach a conclusion that seems proper for you, or you can look to trusted other people to determine what they believe is proper.  (Of course, there is no reason that you cannot take your self-generated and others-generated information, compare it, and then decide.)

If you rely primarily on your own, self-generated information, whether you act on your decision will be influenced powerfully by your level of self-confidence.  In that case, the research of Richard E. Petty suggests that confidence depends in large part on your sense of personal power.  Like all thoughts, decision-relevant thoughts have an affective charge associated with them from the outset.  Those thoughts, in turn, are magnified by one's confidence level, making the positive thoughts more positive and the negative ones more negative.

If you rely primarily on others-generated information, whether you act on your own decision will be influenced powerfully by your faith in their opinions.  Here, the research of Noah J. Goldstein, Steve J. Martin, and Robert Cialdini (2008) is worth considering.  According to them, when faced with information at odds with their own preconceptions, many people abandon their own view, moving instead toward what they regard as the middle value of their social reference group.  So, a woman who initially believed that she should exercise for one hour per day, 5 days per week, might very well settle on 45 minutes, 3 days per week, if most of those with whom she spoke favored 30 minutes, 2 days per week.
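
To see the arithmetic behind "moving toward the middle" (my own illustration of the idea, not a formula offered by Goldstein and his colleagues), note that the settled-upon routine sits roughly halfway between her initial view and the group's typical view:

    minutes per session: (60 + 30) / 2 = 45
    days per week: (5 + 2) / 2 = 3.5, which she rounds down to the 3 days she settles on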

Deciding to begin or to modify a health practice, then, can cause us to sift through a welter of information.  And it exposes us to the advantages and disadvantages of relying on our own opinion, the opinions of others, or an amalgam of both.  So, the more you know about your strengths and weaknesses and your reference groups' strengths and weaknesses, the better.  Given those provisos, be mindful that the extent of your general self-confidence must be balanced against the extent to which you trust yourself and/or your associates in the very health area under consideration.  While there is nothing inherently wrong with choosing based on any of the self, others, or amalgam options, each choice has its own advantages and disadvantages, and each choice is rooted in your own unique personality predilections.

References:

Goldstein, N. J., Martin, S. J., & Cialdini, R. (2008). Yes!: 50 scientifically proven ways to be persuasive. New York: Simon and Schuster.

Petty, R. E., & Cacioppo, J. T. (1996). Attitudes and persuasion: Classic and contemporary approaches. New York: Westview Press.


  

Saturday, December 17, 2016

Healthful Decisions

In common parlance, to say that you are making a decision implies that you are consciously deliberating. So I ask: Are most of your health-oriented practices the results of your conscious deliberations?

Some health decisions certainly are decisions in the deliberative sense.  This is particularly true for “big” decisions.  Most of us think deeply about whether to have a knee replacement or tooth implantation.  But such decisions are few and far between.  “Little” day-to-day health “decisions” most often occur automatically and unconsciously.  We usually do not deliberate about whether to have a second piece of cake.  Despite the fact that eating the cake is an enacted decision, we rarely think of it as a decision at all.  Over situations and over time, however, the automatic, unconsciously enacted decisions determine our health no less than do the truly deliberative ones.  To be healthy then, we must understand both our consciously directed and unconsciously directed health-oriented choices.

With those definitions in mind, let’s think about one aspect of big decision making and one aspect of little decision making.

Big decisions depend largely on how we perceive our future selves.  Odd as it may seem, we often are indifferent to the person that we might become.  Derek Parfit (1971) suggested that the alienation of our current from our future self can be so extreme that we perceive future selves as if they were strangers.  In that case, when a knee replacement or tooth implantation decision does not seem pressing, we might not think at all about how it would affect us in the future.  Daniel M. Bartels and Lance J. Rips (2010) make the obvious inference that alienation from the stranger who will become our future self can lead to major later-life negative consequences as a result of our failure to make even minor current-life sacrifices.  For instance, one might avoid causing the future self to endure a knee replacement by gradually losing weight now, or avoid a future tooth implantation by assiduously implementing enhanced dental hygiene starting today.  However, to make the necessary lifestyle changes, the current self would need to be a deliberative decision maker.

Little decisions can be affected by how we perceive our future self, but they also are very reactive to moment-by-moment present experiences.  Therefore, you must mindfully focus on how to handle the current setting and what you are thinking and feeling in the here-and-now.  The future self is relevant when you can anticipate a near-term health-oriented opportunity or challenge.  If you are going to a buffet tonight and typically spend far too much time at the dessert table, you can imagine your future self enacting counter-strategies, such as filling up on salad and water before approaching the cakes.  That deliberative decision, of course, means nothing if you fail to enact the strategy at the buffet itself.  Thus, your deliberation must prepare you to control your environment (seating yourself far from the dessert table and not lingering near it), thoughts ("I can have the orange instead"), and feelings ("If I eat the cake, I’ll feel guilty all night") so that the deliberated decision becomes the enacted health-enhancing decision.

References:

Bartels, D. M., & Rips, L. J. (2010). Psychological connectedness and intertemporal choice. Journal of Experimental Psychology: General, 139, 49–69. http://dx.doi.org/10.1037/a0018062

Parfit, D. (1971). Personal identity. Philosophical Review, 80, 3-27. http://dx.doi.org/10.2307/2184309

Saturday, December 10, 2016

It’s Not So Crazy to Think That Sometime You Might Act Crazy


The human psyche, fortunately and necessarily, is oriented toward self-preservation.  If not, our species never would have survived.  Since we are the most physically dependent and the most social animal, our survival has demanded that we relate adaptively to those within our “tribe.”  Accordingly, humans developed extraordinary skill in understanding themselves and those around them.   Moreover, the two skills have been inextricably related—we understand ourselves by contrasting our behavior with that of our contemporaries and vice versa. 

Almost every day we observe someone doing something that we consider “crazy.”   Those crazy behaviors could include anything, from running away from home to never leaving the house.  Given our penchant for “social comparison,” we often imagine that we never could behave so maladaptively.  And, because of our “fundamental attribution error” predilection, we ascribe other people’s oddities to their enduring personalities while excusing ours as due to transient, external influences.  Moreover, to justify our perceptions about our odd neighbor, we can search through scores of mental illnesses enumerated in the Diagnostic and Statistical Manual of Mental Disorders (DSM–5) to find one that seems just right.

Our weird neighbor is the exception of course.  There’s never anything strange about us.  After all, mental illness is rare.  Isn’t it?

The conventional view had been that mental illness was uncommon.  However, over the last decade several studies suggested otherwise.  For instance, the National Institute of Mental Health reported that 18 percent of American adults suffered from mental illness in 2014 (www.nimh.nih.gov), and Kessler et al. (2005) suggested that about 50 percent of us will evidence a diagnosable mental illness during our lifetime.     

If you think those statistics are ominous, consider the even more startling conclusions reached by Jonathan D. Schaefer and his colleagues (2016), who believed that previous reports most commonly employed three data collection methods that produced spuriously low results.  First, national registries, they said, included mostly or exclusively persons who received treatment in psychiatric facilities, missing those treated in other settings or those not treated at all.  Second, retrospective studies primarily were limited to persons diagnosed with Axis I mental illnesses (e.g., Schizophrenia) and, therefore, missed Axis II and other serious problems (e.g., Psychopathic personality).  The final inadequate data collection method employed prospective cohort studies.  Although prospective cohort studies (which follow persons of similar age over time) often are considered excellent, the problem for Schaefer was that the ones he uncovered also assessed only Axis I.  Despite that limitation, however, the prospective research did disclose a mental illness rate ranging from 61 to 85 percent, significantly higher than the rates produced by the other methods.

To remedy the perceived research inadequacies, the Schaefer group examined data from the Dunedin (New Zealand) cohort, a group that had been studied meticulously from birth until middle age.  The entire cohort was scrutinized (not just those with diagnosed mental illness), and all diagnoses (not just Axis I) were considered.  Given that comprehensive, all-inclusive criterion, an astounding 83 percent of the study’s subjects suffered a diagnosable mental condition sometime during their lives.  Also surprising was that those who did escape all mental illness were not especially intelligent, physically healthy, or from a financially privileged family.  Rather, the emotionally sound 17 percent tended to be ones who had managed to maintain high-quality interpersonal relationships, to be more satisfied with their lives, and to have achieved greater educational and occupational success.


Before you conclude that we are doomed because almost everyone is going out of their minds, recall that Schaefer employed a very broad definition of mental illness.  There are so many mental diagnostic categories that virtually any imaginable problem can be labeled.  Moreover, the study did not adequately report  the severity or diagnostic distribution of the illnesses that were found.  There is a world of difference between an “Adjustment disorder with depressed mood” and “Schizophrenia, Paranoid Type.”  The Dunedin study can, in fact, be of comfort to you.  If emotional problems are so ubiquitous, then many, virtually by definition, are everyday problems of living.  So, when anxious, depressed or otherwise afflicted, remember that you probably are no more deviant than the weird neighbor mentioned earlier.  Do not let your emotions overwhelm you.  Keep moving forward, and as I implied in the subtitle of my Don’t Rest in Peace book, do your best to maintain an activity-oriented, physically and mentally integrated lifestyle.

References:

Kessler, R., et al. (2005). Lifetime prevalence and age-of-onset distributions of DSM–IV disorders in the National Comorbidity Survey Replication. Archives of General Psychiatry, 62, 593–602. http://dx.doi.org/10.1001/archpsyc.62.6.593

Schaefer, J. D., et al. (2016). Enduring mental health: Prevalence and prediction. Journal of Abnormal Psychology. Advance online publication (December). http://dx.doi.org/10.1037/abn0000232

Saturday, December 3, 2016

Can I Take a Pill for That?

There’s a pill for almost any lifestyle problem extant in 21st Century America, from smoking and dieting to sleeping and loving.  The question is: Do the lifestyle pills work, and, if so, for how long?  The drug industry certainly is paying attention; it spends billions of dollars annually, including placebo controls in its studies, to "prove" to the United States government that any positive change after taking its medication is attributable to the medicine and that any negative change must be due to some non-medication influence.

Today, let’s address depression and anti-depressant pills.  Tofranil, one of the first anti-depressants in America, was approved for sale in 1959.  So, medication for depression has been available to the public for almost 60 years.  Anti-depressants certainly should have proven their worth by now.  But despite drug companies’ efforts to prove otherwise, many scientists believe that many anti-depressant and other pharmaceutical lifestyle "cures" amount to little more than transitory placebo effects.

To underscore the questionable utility of anti-depressants, consider this:  The explanatory power of placebos has been increasing over the years.  In fact, the placebo response for anti-depressants was twice as strong in 2005 as it was in 1980 (Rief et al., 2009).  Moreover, that "placebo drift" has been found for other types of medications as well (Kaptchuk & Miller, 2015).

No one has been more outspoken in cautioning about anti-depressants than Irving Kirsch (2014).  He noted: 1) that all anti-depressants are said to benefit patients via their influences on neurotransmitters and 2) that all anti-depressants are fairly similar in their efficacy.  Wondering why the effectiveness was so similar, he investigated the serotonin neurotransmitter and found that some anti-depressants decreased the chemical, some increased it, and some had no effect on it whatsoever.  On the other hand, the placebo effect was obvious and similar for all of the anti-depressants.  Since a properly structured drug study requires that subjects not know whether they have been given the investigational drug or the placebo, he was surprised to discover that 89% of patients getting the drug guessed correctly that they were not given the placebo (Rabkin et al., 1986).  And that fact certainly undermined those studies' scientific integrity and validity.

After enumerating a host of potential side effects of anti-depressants, including but not limited to sexual dysfunction, long-term weight gain, insomnia, nausea, diarrhea, withdrawal symptoms, suicidal ideation, stroke, and death, Kirsch concluded: “When different treatments are equally effective, choice should be based on risk and harm, and of all of these treatments, antidepressant drugs are the riskiest and most harmful. If they are to be used at all, it should be as a last resort, when depression is extremely severe and all other treatment alternatives have been tried and failed.”


Please understand that I am not saying anti-depressants never should be used.  I basically agree with Kirsch, however, that the pills are an option of last resort.  Depression usually can be ameliorated or even cured by lifestyle changes involving some combination of the health-promoting factors that I emphasized in my book: changes in the approach to one's cognitive-emotional perspective, interpersonal relationships, physical activity, diet, work, and relaxation-recreation practices.

References:

Kaptchuk, T., & Miller, F. (2015). Placebo effects in medicine. New England Journal of Medicine, 373, 8-9. DOI: 10.1056/NEJMp1504023

Kirsch, I. (2014). Antidepressants and the placebo effect. Zeitschrift für Psychologie, 222(3), 128-134. http://dx.doi.org/10.1027/2151-2604/a000176

Rabkin, J., et al. (1986). How blind is blind? Assessment of patient and doctor medication guesses in a placebo-controlled trial of imipramine and phenelzine. Psychiatry Research, 19, 75–86.

Rief, W., et al. (2009). Meta-analysis of the placebo response in antidepressant trials. Journal of Affective Disorders, 118(1-2), 1-8. DOI: http://dx.doi.org/10.1016/j.jad.2009.01.029