Saturday, September 27, 2025

Kimmel, Kirk, and Us

As almost everyone knows by now, Jimmy Kimmel was “indefinitely suspended” by ABC on September 17, 2025, following his comments about the fatal shooting of conservative commentator Charlie Kirk. His central false and offensive claim, quoted precisely, was: “We hit some new lows over the weekend with the MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them and doing everything they can to score political points from it.”

Let’s deconstruct Jimmy Kimmel’s assassin-related comments and then educate him.

First, Kimmel referred to Tyler Robinson as a “kid” despite his being over 22 years old; so, Kimmel needs to learn the following facts:

  • Primary elections: 21 states and Washington, D.C. allow 17-year-olds who will turn 18 by the general election to vote in the preceding primary election.
  • Local elections: Some towns and cities allow citizens younger than 18 to vote in local elections. Examples include several cities in Maryland, where the voting age has been lowered to 16 for municipal contests.
  • Voter preregistration: Most states, along with Washington, D.C., allow young people to preregister to vote before they are 18. The preregistration age varies by state, but can be as young as 16.
  • An American can join the armed forces without parental approval at age 18. If a person is 17 years old, they need the written consent of a parent or legal guardian to enlist. 
  • Approximately 61% of the Americans killed during the Vietnam War were 21 years old or younger.

Accordingly, Tyler Robinson was no kid, despite Jimmy Kimmel’s desire to find an excuse for the assassin.

Second, Robinson was in no way MAGA. In fact, he was virulently, hatefully anti-MAGA. Moreover, Robinson was totally, delusionally opposed to democracy and free speech. For instance, he justified murdering Charlie Kirk by saying, "There is too much evil and the guy [Charlie Kirk] spreads too much hate." So, an evil, hateful assassin projects his evil, hateful personality characteristics onto his target. And that targeted person was a staunch advocate for democracy and free speech.

Now let’s return to Jimmy Kimmel. On September 23rd, a mere six days after being put on indefinite suspension, he was almost fully back on the air. And by September 26, he was fully back. Anyone with two intact cerebral hemispheres is not surprised to learn that Kimmel’s television ratings profited enormously from his hateful speech. Here is a table showing audience ratings before and after Kimmel’s slandering of Kirk:

Comparative Ratings Table: "Jimmy Kimmel Live!" Before and After Controversy

Period         Total Viewers    18-49 Rating    Notes
1st qtr 2025   1.77 million     0.48            Pre-controversy baseline
8/1/25         1.1 million      0.35            Summer low
9/15/25        1.1 million      0.13            Day of controversial monologue
9/23/25        6.26 million     0.87            Reinstated; roughly 3.5x the baseline audience


That roughly 3.5-fold audience increase over baseline is precisely the reward that keeps influencer hate going. But perhaps you would argue that free speech is precisely what Charlie Kirk was advocating, and that is true. You, I, and the people next door should be able to say whatever we want, because what we say will not promote widespread violence or severe retribution. We simply don’t have the platform to distribute our biases across the nation. My professional opinion, for what it’s worth, is presented below in a more academic style.
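The size of the jump can be checked against the table’s own figures; here is a minimal arithmetic sketch (the viewer counts are taken from the table above):

```python
# Check the ratings jump using the table's own numbers.
baseline_viewers = 1.77e6   # 1st qtr 2025 average total viewers
return_viewers = 6.26e6     # 9/23/25 return-episode total viewers

multiple = return_viewers / baseline_viewers
pct_increase = (multiple - 1) * 100

print(f"{multiple:.2f}x baseline ({pct_increase:.0f}% increase)")
# prints: 3.54x baseline (254% increase)
```

In other words, total viewership rose roughly 3.5-fold (about a 254% increase), while the 18-49 rating nearly doubled against the first-quarter baseline.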

The Power of the Microphone: Free Speech in the Age of Influence

In democratic societies, free speech is a cornerstone of liberty—a right enshrined in constitutions, protected by courts, and celebrated in public discourse. But as the digital age has redefined who holds a microphone, the consequences of speech have grown exponentially. There’s a critical difference between a private citizen expressing an opinion and a public figure with millions of followers making irresponsible, derogatory, or violent political statements.

Influence Amplifies Impact

A private citizen might vent frustrations at a dinner table or post a controversial opinion online, reaching a handful of people. But when someone with an enormous platform—be it a celebrity, politician, or elite influencer of any kind—uses their voice to spread inflammatory rhetoric, the stakes change. Their words can ripple across society, shaping public sentiment, fueling division, and even inciting violence.

Free Speech vs. Public Safety

The First Amendment protects speech from government censorship—but it doesn’t shield speakers from accountability. Courts have long held that speech inciting imminent lawless action is not protected (Brandenburg v. Ohio, 1969). The challenge today is that “imminence” is harder to define when viral posts can reach millions in seconds, and when coded language or dog whistles can mobilize groups without explicit calls to violence.

Social media companies have grappled with this dilemma. Platforms like Twitter (now X), Facebook, and YouTube have suspended or banned accounts of high-profile individuals for violating policies on hate speech and incitement. These decisions often spark debates about censorship, bias, and the boundaries of free expression.

Responsibility Comes with Reach

With great reach comes great responsibility. Public figures—especially those in politics or media—must recognize that their words carry weight. A single tweet or soundbite can validate extremist views, undermine democratic institutions, or provoke unrest. The difference between a private citizen and a public figure isn’t just scale—it’s influence. And influence, when wielded recklessly, can be dangerous.

Navigating the Future

As society continues to wrestle with the balance between free speech and public safety, one principle remains clear: speech is not just a right—it’s a responsibility. The louder the microphone, the greater the duty to use it wisely. Free speech promulgated and disseminated by biased, controlling elites is not free; it exacts profound costs by destroying democracy, safety, and civility. Jimmy Kimmel successfully parlayed punishment into profit. He no doubt has now taught millions of others to do the same—a master class in how to divide and destroy America. Our children, China, Russia, North Korea, and Iran are watching and learning!

 


Thursday, September 11, 2025

Assassinating Free Speech

To set this blog’s context, consider a 2025 student survey from the non-partisan Foundation for Individual Rights and Expression (FIRE). As it does each year, the foundation surveyed more than 68,000 students from more than 250 colleges. Key findings were as follows:

  • Acceptance of violence: 34% of students say using violence to stop someone from speaking on campus is acceptable.
  • Acceptance of shouting down speakers: 72% of students say shouting down a speaker on campus is acceptable.
  • Self-censoring with fellow students on campus: 24% of students say they often self-censor with other students on campus.
  • Self-censoring in classroom discussions: 28% of students say they often self-censor during classroom discussions.
  • Low trust in school administration: only 27% of students say it is very or extremely likely their school administration would defend a speaker’s right to express their views.

Mental health professionals, anthropologists, and biologists suggest that it is language that separates human beings most definitively from animals.  When animals experience fear, they attack, freeze, or flee. Some American universities are passively allowing their students to do the same. Thus, in a sense,  universities are reducing students to the level of animals. To me, that’s the biggest lesson from the recent university assassination of Charlie Kirk.

As almost all Americans have heard, Charlie Kirk was fatally shot during a public event at Utah Valley University on September 10, around 12:10 p.m. MDT. He was hosting a "Prove Me Wrong" segment as part of his American Comeback Tour when the attack occurred. A bullet struck him in the neck, leading to his death later that day.  Kirk was assassinated while speaking to an audience and was encouraging them to step to the microphone to comment and question.  Thus, he was living and promoting freedom of speech and expression at the very moment that he was murdered.

If you or someone you know hears about Charlie Kirk's assassination and immediately thinks or says, "Yes, but what about the Democrats who have been shot or killed," I have a few questions: Where is your humanity? Is life in America your political game of win or lose? Have you no empathy for his wife and children? Is your mental processing so constrained and impoverished that you are unwilling and/or unable to think about long-term consequences for America, for you, and for those whom you love?

Regardless of your politics, you might accept the following likely realities regarding Kirk’s death:

The Loss of a Brilliant, Energetic Young Leader

At just 31 years old, Charlie Kirk epitomized charisma, drive, and youthful energy. As the founder of Turning Point USA, he skillfully mobilized a generation to engage with conservative principles. His death represents not just personal loss, but the abrupt silencing of a once-bright voice in political youth activism. WHAT DOES SILENCING FREE SPEECH DO TO AMERICA?

Devastation for Family and Loved Ones

The emotional toll on his wife, Erika, and their two children is crushing. Beyond the political impact, the family has lost a husband and father—his passing a personal tragedy beyond public perception. The ripple extends to friends, colleagues, and supporters who mourn both the man and his family’s loss.  WHAT WILL THE MURDER DO TO OUR SOCIAL/POLITICAL CLIMATE?

Chilling Effect on Student Gatherings

The violent targeting of a speaker on campus could push some organizers and attendees toward caution, even fear. Upcoming gatherings—especially large events or rallies—might now contend with heightened security concerns and hesitancy, potentially curbing in-person engagement. GIVEN KIRK’S UNPRECEDENTED SUCCESS WITH CONSERVATIVE YOUTH, WILL MORE DERANGED HATERS OF CONSERVATIVES BE ENCOURAGED TO CONSIDER ASSASSINATION TO ACHIEVE THEIR ENDS?

Suppression of Bold Voices

Charlie Kirk was unapologetically outspoken. His death may lead others with similar boldness to reconsider stepping into the public arena. The fear is that the political arena will grow more timid, less open to outspoken individuals challenging prevailing narratives. WILL THE ASSASSINATION SUCCEED IN DISSUADING YOUNG CONSERVATIVES FROM SPEAKING OUT?

Loss of a Communicator to Youth

Few reached young conservatives as directly as Kirk did. His ability to connect—via campus events, podcasts, social media, and media appearances—created a bridge between political messaging and youth culture. His absence leaves a void in channels that blend youthful energy with political persuasion. WILL THE ASSASSINATION FURTHER PREVENT RATIONAL DISCUSSION BETWEEN YOUNG CONSERVATIVES AND LIBERALS/PROGRESSIVES?

Discouragement of Fearless Expression

In classrooms or campuses, the notion that dissenting or non-mainstream conservative voices might be heard is now marred by tragedy. Students may feel discouraged from speaking against dominant campus opinions, fearing repercussions—whether subtle or stark. WILL CAMPUSES BECOME EVEN MORE BALKANIZED THAN THEY ALREADY ARE?

Reflection on the Political Atmosphere

While it's crucial not to equate cause and effect simplistically, the incident does underscore the toxicity of polarizing rhetoric in American political discourse. Some observers may ask whether the current climate—full of extreme labels, conspiracies, and demonization—facilitates tragedies like this. Voices from across the spectrum have warned about the need to reject political violence.

Deepening Anxiety Among Teens and College Students

Youth—especially college students—now face more than academic concerns: they’re confronting the possibility that expressing political ideas is dangerous. The psychological weight of that reality could inhibit open debate and intellectual risk-taking on campuses, shifting the atmosphere from one of engagement to one of caution and conformity.

Unfortunately, Charlie Kirk was courageous to a fault. His final days were marked by growing danger. According to Pastor Rob McCoy, a close friend and spiritual mentor, Kirk had been receiving death threats regularly—hundreds, McCoy claimed. Yet Kirk never flinched. “Every day he faced death threats from evil,” McCoy said, “and he was never afraid of that.”

Are you and I courageous enough to take a stand against the partisan, violent malignancy infecting our youth and ourselves?

Monday, September 1, 2025

Is Your Intelligence Becoming Artificial?

You probably have heard that our brains make up only about 2% of the body’s total weight, yet consume roughly 20% of our energy (Raichle & Gusnard, 2002). That disproportionate energy demand hints at the immense processing power locked inside the human mind—and why our species has always been strategic in managing mental and physical effort. I emphasized those strategies in my Don’t Rest in Peace book (McCusker, 2016).

From an evolutionary perspective, early humans were constantly, unconsciously calculating efficiency. Accordingly, anthropologists and evolutionary biologists argue that our ancestors often sought foods and resources that offered the greatest nutritional return for the least effort (Kaplan et al., 2000). That logic of minimizing effort while maximizing reward didn’t end with the Stone Age. Modern humans extended it into the way we manage our time as well as our energy.

Enter artificial intelligence. Many of us embrace AI because it saves time, reduces mental strain, and increases efficiency. Just as our ancestors preferred calorie-rich foods to fuel their bodies and brains, we are now drawn to digital tools that fuel productivity with minimal effort. Yet this evolutionary impulse raises an important question: when should we rely on AI, and when should we rely on our own cognition?

The answer may lie in a cost-benefit analysis. Using AI comes with obvious gains: speed, convenience, and access to information. But there are also hidden costs. If we allow AI to handle too much of our mental workload, we may weaken our memory, problem-solving skills, and even creativity over time (Mitchum & Kelley, 2023). On the other hand, strategic use of AI—such as delegating repetitive or low-level tasks—can free us to focus on higher-order thinking and creative work.

The gains-versus-costs issue was empirically studied by Nataliya Kosmyna et al. (2025) at MIT’s Media Lab, with collaborators from Wellesley College and Massachusetts College of Art and Design. The research deserves our careful attention.

Let’s begin with the methodology: 54 participants were asked to write SAT-style essays under three conditions: brain-only (write without any external aid), search engine (use Google to assist), and LLM (use ChatGPT to assist). This setup held across three sessions. In a fourth session, participants switched: those who had used AI wrote unaided (LLM-to-Brain), and vice versa (Brain-to-LLM).

Measurements were as follows: Participants wore EEG headsets to monitor brain activity (cognitive neural connectivity across alpha, beta, theta bands, etc.). Researchers also analyzed the writing for originality, linguistic patterns, and had humans and AI evaluate the essays. Post-task interviews assessed recall and ownership.

Neural engagement assessment found that the brain-only group showed the highest and most widespread neural connectivity, indicating deep cognitive engagement. The search engine group fell in the middle: more engaged than the AI group, but less than brain-only. The LLM (AI) group had the weakest neural engagement, suggesting cognitive offloading and diminished mental processing.

Regarding memory and ownership, over 83% of the AI users (LLM group) couldn't quote their own essays, versus only 11% in the other groups. AI user essays appeared more formulaic and less original, and they reported feeling less ownership of the content.

The most surprising research finding was the persistence of  the cognitive effects.  For instance, in the final session, participants who had initially used AI and switched to writing unaided did not recover their earlier neural engagement—they remained under-engaged. By contrast, those who began writing unaided and then used AI (Brain-to-LLM) showed increased neural connectivity, almost matching the search engine group. The researchers coined “cognitive debt” to describe the long-term cost of over-relying on AI.  Thus, while AI could ease immediate effort, it appeared to erode critical thinking, creativity, memory retention, and essay ownership.

I must underscore something obvious: the MIT study involved only 54 subjects. That is hardly a ringing endorsement of its reliability and validity. Its findings may or may not be replicated in the future. Most important, the study found exactly what I expected it to find. Maybe that is just one example of my confirmation bias.

Regardless, for me, the best way forward is not to use AI as a replacement for human effort, but as a partner. Just as the body regulates how much energy goes to the brain and other systems, we can regulate how much work we give to AI versus how much we keep for ourselves. The challenge is to strike a balance: gaining the efficiency AI provides without losing the unique cognitive strengths that make us human. In this sense, using AI should not merely be an evolutionary continuation of our search for maximum return on minimal effort. We must mindfully balance not only calories, but our time, attention, and intellectual engagement.

References

Kaplan, H., Hill, K., Lancaster, J., & Hurtado, A. M. (2000). A theory of human life history evolution: Diet, intelligence, and longevity. Evolutionary Anthropology: Issues, News, and Reviews, 9(4), 156–185. https://doi.org/10.1002/1520-6505(2000)9:4<156::AID-EVAN5>3.0.CO;2-7

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. MIT Media Lab. https://arxiv.org/abs/2506.08872 

McCusker, P. J. (2016). Don't Rest in Peace: Activity-Oriented, Integrated Physical and Mental Health (New York: Amazon).

Mitchum, A. L., & Kelley, C. M. (2023). The “Google effect” in the age of artificial intelligence: How reliance on external memory systems may impact learning and cognition. Memory & Cognition, 51(6), 1249–1264. https://doi.org/10.3758/s13421-023-01485-9

Raichle, M. E., & Gusnard, D. A. (2002). Appraising the brain’s energy budget. Proceedings of the National Academy of Sciences, 99(16), 10237–10239. https://doi.org/10.1073/pnas.172399499


Friday, August 15, 2025

Are You a Precrastinator? Yes, a PREcrastinator!

We all are too familiar with procrastination—putting things off until the last possible moment. But there’s a lesser-known cousin with an oddly similar name: precrastination. This is when someone rushes to start or finish a task far earlier than necessary, even when doing so costs them extra effort or inconvenience. A trivial, everyday example is grabbing a heavy grocery bag from the far end of the parking lot right away, instead of using a shopping cart until you’re closer to your car—just because it feels good to get started.

In 2025, Adam Fox, Ayesha Khatun, and Laken Mooney set out to answer two key questions about this peculiar behavior. First, is precrastination driven by trait impulsivity—the tendency to act quickly and prefer immediate rewards over delayed ones? Second, does the amount of physical effort required change how likely people are to precrastinate?

In Experiment 1, they measured people’s impulsivity using a delay discounting (DD) task, wherein participants choose between a smaller, immediate amount of (hypothetical) money and a larger, delayed amount; an adjusting-amount procedure determines the subjective value of the delayed reward (Yeh, Myerson, & Green, 2021). DD is regarded as a common way to see how much a person devalues a reward that is not immediate. They then observed how often participants engaged in precrastination. Surprisingly, there was no clear link. People who tended to choose instant gratification were not necessarily the ones rushing to complete low-effort tasks early.
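The adjusting-amount procedure is easy to illustrate in code. The sketch below simulates a participant whose choices follow the standard hyperbolic discounting model, V = A / (1 + kD), and titrates an immediate offer until it converges on the delayed reward’s subjective value (the discount rate, reward amounts, and step counts here are hypothetical, not the values used by Fox et al. or Yeh et al.):

```python
# Minimal sketch of an adjusting-amount delay-discounting procedure.
# A simulated participant discounts hyperbolically: V = A / (1 + k*D).

def subjective_value(amount, delay, k):
    """Hyperbolic discounted value of a delayed reward."""
    return amount / (1 + k * delay)

def adjusting_amount(delayed=100.0, delay=180, k=0.02, steps=20):
    """Titrate an immediate offer toward the indifference point."""
    immediate = delayed / 2      # conventional starting offer
    adjustment = delayed / 4     # halved after each choice
    for _ in range(steps):
        # The simulated participant picks whichever option is worth more now;
        # the offer then moves down (if immediate was chosen) or up.
        prefers_immediate = immediate > subjective_value(delayed, delay, k)
        immediate += -adjustment if prefers_immediate else adjustment
        adjustment /= 2
    return immediate

# With k = 0.02 and a 180-day delay, V = 100 / (1 + 3.6) ≈ 21.74,
# so the staircase should converge near $21.74.
print(round(adjusting_amount(), 1))  # → 21.7
```

The converged dollar figure is the indifference point; repeating the procedure at several delays and fitting k to the resulting points is how a participant’s discount rate is typically estimated.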

Experiment 2 shifted focus to effort. The researchers designed tasks where participants could choose to do something early—at an extra physical cost—or wait until it was easier. As the effort increased, something interesting happened: the precrastination urge began to fade. People started behaving more “optimally,” waiting until the task required less work.

The takeaway? Precrastination doesn’t seem to be the same thing as impulsivity, at least not the kind measured by delay discounting. Instead, it may be more about a preference for reducing mental load—getting something off your mind—so long as the effort required isn’t too high. There’s a limit: when the physical cost becomes noticeable, even precrastinators start thinking twice. In short, this study helps map the boundaries of precrastination. It’s a quirk of human behavior that thrives in low-effort situations, but fizzles when the cost of acting early becomes too steep.

Some would conclude that precrastination isn’t just impulsivity in disguise. It’s something else—a mental itch to “get it over with” and free up cognitive space. But like most itches, it’s easier to scratch when it doesn’t hurt. Once the physical cost rises high enough, even the most eager precrastinators start holding back. So, the next time you find yourself rushing to complete something—hauling laundry up two flights of stairs when you could wait for the elevator—pause for a second. You might be scratching an itch your brain invented, not solving a real problem.

The idea of “impulsivity,” however, was insufficiently addressed by Fox et al. So it is worthwhile to reflect on previous research regarding the difference between acting too fast and pausing to first think things through. There is, in fact, a long-standing idea that people vary in how they balance reflection (pausing to consider options before acting) and impulsivity (acting quickly, often without much deliberation). Psychologists sometimes measure that balance with tools like the Matching Familiar Figures Test (MFFT), which has been used to explain why some people think before leaping while others just leap.

On the surface, precrastination seems like it should live on the impulsive end of this spectrum. After all, starting a task unnecessarily early—especially when it costs extra effort—sounds like an act now, think later strategy. But the Fox, Khatun, and Mooney study throws a wrench into that neat assumption. In reflection–impulsivity research, impulsive individuals tend to act with minimal forethought, especially when tempted by an immediate reward. Yet in the precrastination experiments, impulsivity (as measured by delay discounting, a common proxy for reward-driven impulsivity) didn’t predict who precrastinated. People who usually jumped at instant gratification weren’t necessarily the ones dragging the grocery bag from the far end of the parking lot.

This suggests that precrastination isn’t just about a failure to reflect, at least not in the same way classical impulsivity is. Instead, it might be tied to cognitive load management. In other words, precrastinators might be trying to reduce mental “to-do list” pressure by knocking out easy tasks quickly—more a case of mental housekeeping than impulsive thrill-seeking.

The effort manipulation in Experiment 2 strengthens this distinction. In traditional reflection–impulsivity models, impulsive individuals might still go for the quick option even if it’s harder. But here, when the physical cost rose, precrastination faded and behavior became more optimal. That’s not typical impulsivity—that’s a calculated willingness to stop acting early when the price is too high.

So, while both impulsivity and precrastination involve quick action, they spring from different motives: classical impulsivity is about chasing immediate rewards and avoiding delay, often at the expense of accuracy, efficiency, or long-term benefit; precrastination seems to be about clearing cognitive space, but only when the extra cost feels small. From the perspective of the reflection–impulsivity spectrum, precrastinators might sit in an unusual spot: they appear “impulsive” in timing, but “reflective” in weighing physical cost—almost a hybrid strategy shaped less by reward-seeking and more by the desire to offload mental burdens. It is refreshing to find that the current study can be reconciled with long-established research concerning the Matching Familiar Figures Test (MFFT). That positive convergence is particularly encouraging, given how often similar and/or related behavioral science studies conflict, a loose version of the so-called “replication crisis” in psychology.

REFERENCES

Fox, A. E., Khatun, A., & Mooney, L. A. (2025). Precrastination: The potential role of trait impulsivity and physical effort. Journal of Experimental Psychology: Human Perception and Performance, 51(9), 1224–1233. https://doi.org/10.1037/xhp0001348

Yeh, Y.-H., Myerson, J., & Green, L. (2021). Delay discounting, cognitive ability, and personality: What matters? Psychonomic Bulletin & Review, 28(2), 686–694.


Friday, August 1, 2025

Why Bad Is Stronger Than Good & What You Can Do About It

Imagine you're walking through a quiet forest trail, the sun dappled on the path, birds chirping in the trees. You’re calm. Peaceful. Then—snap! A twig breaks sharply in the woods to your right. Your heart jumps. Muscles tense. Adrenaline surges. For hours, even days, the memory of that sharp moment might linger, tainting the peaceful walk you were having.

Unfortunately, the human brain evolved to become a finely tuned survival machine, hardwired to prioritize the bad over the good. This phenomenon—that bad is stronger than good—isn’t just a poetic observation. It’s a well-documented principle in psychology and neuroscience. The idea is simple but powerful: negative events, emotions, and feedback have a stronger impact on our thoughts, behaviors, and well-being than equally intense positive experiences.

Over two decades ago, Baumeister and colleagues (2001) summarized research across many domains and found the same consistent pattern: whether in learning, memory, relationships, or impression formation, bad events outpower good ones nearly every time. They claimed that "Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good."

From an evolutionary standpoint, this makes perfect sense. Our ancestors didn’t need to remember every lovely sunset, but they did need to remember which berries made them sick or which paths harbored predators. The brain's alarm system—primarily the amygdala—reacts more strongly to negative stimuli than to positive ones. In fact, neuroimaging studies show that the amygdala responds with more intensity and duration to unpleasant images or words than to pleasant ones (Cacioppo et al., 1999; Taylor, 1991).

A classic study by John Cacioppo found that the brain produced more electrical activity in response to negative photos than to positive or neutral ones. This means we literally process negative input more deeply. This negativity bias is also evident in memory. Negative events are more richly encoded and more vividly recalled. They stick like burrs. Compliments might lift us for a moment, but one insult can echo for years.

One of the most famous practical findings of the bad-is-stronger-than-good principle comes from marriage research by psychologist John Gottman. He discovered that healthy relationships tend to have a ratio of at least five positive interactions for every one negative interaction. Couples who don’t maintain this balance tend to spiral into dissatisfaction and conflict (Gottman, 1994). So, when your partner praises your cooking but frowns at your laundry skills, your brain may amplify the frown and barely register the praise (McCusker, 2016).
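Gottman’s 5:1 heuristic is simple enough to state as arithmetic. The sketch below is a minimal illustration, not Gottman’s actual coding system; the interaction labels and the example log are hypothetical:

```python
# Minimal sketch of Gottman's 5:1 positivity-ratio heuristic.

def positivity_ratio(interactions):
    """Ratio of positive to negative interactions in a coded log."""
    pos = sum(1 for i in interactions if i == "positive")
    neg = sum(1 for i in interactions if i == "negative")
    if neg == 0:
        return float("inf")  # no negatives observed
    return pos / neg

def meets_gottman_threshold(interactions, threshold=5.0):
    """True if the log meets the five-positives-per-negative heuristic."""
    return positivity_ratio(interactions) >= threshold

log = ["positive"] * 12 + ["negative"] * 3  # a 4:1 week, hypothetically
print(positivity_ratio(log))          # → 4.0
print(meets_gottman_threshold(log))   # → False
```

The point of the heuristic is exactly what the negativity bias predicts: because one negative interaction outweighs one positive, a couple needs a substantial surplus of positives just to break even.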

How to Overcome the Negativity Bias

The bad may be stronger than the good, but that doesn’t mean we’re helpless. Like any cognitive bias, once we recognize it, we can build counteracting strategies.

1. Deliberate Savoring

Because positive moments are often fleeting, we need to work to stretch them out. Psychologist Barbara Fredrickson suggests consciously savoring good experiences—taking 10–30 seconds to fully absorb them. This gives the brain more time to encode and store the experience, helping it stick.

One Not-So-Obvious-Benefit

The more time you spend savoring the positive, the less you have to ruminate on the negative. 

2. Gratitude Practice

Studies show that writing down three good things each day (Seligman et al., 2005) can rewire your brain toward noticing the positive. Over time, this shifts attentional patterns away from the negative default and helps balance the mental scales.

One Not-So-Obvious-Benefit

The more you practice gratitude, the more likely it becomes an automatic positive habit.

3. Positive Reappraisal

Cognitive behavioral therapy (CBT) teaches us to challenge automatic negative thoughts and replace them with more balanced interpretations. Instead of perseverating on a critique, ask: “What’s the learning here? Is this really as bad as it feels?”

One Not-So-Obvious-Benefit

The very act of positive reappraisal helps you develop a mindfulness orientation that can serve you in many, many situations.

4. Limit Negative Exposure

News media, social media, and gossip can flood our brains with negativity. Being mindful of what you consume—and taking breaks—can lower the cumulative emotional toll.

One Not-So-Obvious-Benefit

Limiting negative exposure is akin to reducing stress. And reducing stress has obvious health and emotional well-being benefits.

5. Spread Goodness Intentionally

Because people tend to remember negative feedback more strongly, it takes effort to create a positive environment. Give praise generously, celebrate small wins, and go out of your way to express appreciation. In your workplace, home, or community, these positive acts are vital emotional counterweights.

One Not-So-Obvious-Benefit

When you intentionally spread goodness, you inadvertently and automatically raise your attractiveness. Moreover, you serve as a positive role model, especially to peers and children.

To conclude, the brain is a remarkable, selective storyteller. It writes bold headlines for threats and tragedy, but often buries the joyful details on page ten. That’s why bad is stronger than good—and why it takes conscious effort to let the good in and let it grow. You can’t change the fact that bad news hits harder, but you can choose to become someone who writes more positive chapters into the lives of others—and yourself.

References

Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370. https://doi.org/10.1037/1089-2680.5.4.323

Cacioppo, J. T., Gardner, W. L., & Berntson, G. G. (1999). The affect system has parallel and integrative processing components: Form follows function. Journal of Personality and Social Psychology, 76(5), 839–855. https://doi.org/10.1037/0022-3514.76.5.839

Fredrickson, B. L. (2004). The broaden-and-build theory of positive emotions. Philosophical Transactions of the Royal Society B: Biological Sciences, 359(1449), 1367–1377.

Gottman, J. M. (1994). Why Marriages Succeed or Fail. New York: Simon & Schuster.

McCusker, P. J. (2016). Don't Rest in Peace: Activity-Oriented Physical and Mental Health. New York: Amazon.

Seligman, M. E. P., Steen, T. A., Park, N., & Peterson, C. (2005). Positive psychology progress: empirical validation of interventions. American Psychologist, 60(5), 410–421.

 



Monday, July 21, 2025

Teen & Young Adult Music across the Decades

Today’s blog post—much longer than my usual—is a follow-up to my last post, “Why We Remember Our Teenage and Early Adult Lives So Vividly.”  If the reminiscence spike is as powerful as psychological research suggests, we reasonably can expect two things.  First, that experiences from those age epochs play major roles in the formation of our personalities.  And, second, that the memories retained from that time are powerful.  As an example, I have chosen to highlight popular music from the decades of the 1960s through the 2020s. That music, I contend, almost certainly influenced how cohorts within those decades developed similar values and memories, and how persons of different decades developed significantly different ones.

Teen and young adult music across the decades serves as a mirror for evolving attitudes, morals, emotions, and societal norms surrounding boy-girl relationships. Although the music is by no means a definitive barometer, it does provide one gross measure of the general popular approaches to the aforementioned factors. Teen and young adult music illustrates that each era carries a distinct emotional vocabulary and thematic focus reflecting broader cultural shifts. Using the emotion detection feature of sentiment analysis, I produced the following breakdown of the emotion words and emotional themes typical of teen and young adult boy-girl relationship songs of each decade:
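
For readers curious about the mechanics, the kind of emotion-word tally behind each decade’s list can be approximated with a simple keyword count. Below is a minimal Python sketch, assuming a tiny illustrative lexicon I made up for the example; real emotion-detection tools rely on much larger, validated word lists.

```python
from collections import Counter

# A small, hypothetical emotion lexicon for illustration only.
# Real sentiment-analysis tools use far larger curated word lists.
EMOTION_WORDS = {
    "love", "cry", "heart", "baby", "lonely", "dream",
    "hurt", "true", "forever", "alone", "feelings",
}

def emotion_word_counts(lyrics: list[str]) -> Counter:
    """Tally how often each lexicon word appears across a set of lyric lines."""
    counts = Counter()
    for line in lyrics:
        for word in line.lower().split():
            word = word.strip(".,!?'\"")  # drop trailing punctuation
            if word in EMOTION_WORDS:
                counts[word] += 1
    return counts

# Toy lyric fragments standing in for a decade's playlist
sample = [
    "Will you love me tomorrow",
    "I want to hold your hand forever",
    "Crying in the rain, so lonely baby",
]
print(emotion_word_counts(sample).most_common(3))
```

The ranked output from `most_common` is what produces a list like “Love, cry, forever, baby” for a decade’s songs.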

1960–1969: Innocence, Longing, and Heartbreak

Representative Artists: The Supremes, The Beatles, The Beach Boys, Frankie Valli, The Shirelles
Emotion Words: Love, crying, heart, baby, lonely, dream, hurt, true, forever

Emotional Themes:

  • Innocent romance: Relationships were often idealized, focused on holding hands, first kisses, and going steady.
  • Devotion and waiting: Girls sang about waiting faithfully (e.g., “Will You Love Me Tomorrow?”), while boys promised eternal love.
  • Heartbreak: Breakups were portrayed as emotionally devastating but clean. Songs like “Leader of the Pack” reflected melodrama.
  • Parental or societal disapproval: Romance against authority figures was a common trope.

Example lyric: “He’s a rebel and he’ll never ever be any good” – The Crystals (1962)

Playlist:

  1. “Will You Love Me Tomorrow” – The Shirelles (1960)
  2. “Then He Kissed Me” – The Crystals (1963)
  3. “I Want to Hold Your Hand” – The Beatles (1964)
  4. “Be My Baby” – The Ronettes (1963)
  5. “Teenager in Love” – Dion and The Belmonts (1960)
  6. “You’ve Lost That Lovin’ Feelin’” – The Righteous Brothers (1964)
  7. “My Girl” – The Temptations (1965)
  8. “Let It Be Me” – Everly Brothers (1960)
  9. “Can’t Take My Eyes Off You” – Frankie Valli (1967)
  10. “Can’t Help Falling in Love” – Elvis Presley (1961)

1970–1979: Self-Discovery, Vulnerability, and Gender Shift

Representative Artists: Carpenters, Elton John, ABBA, Fleetwood Mac, Bee Gees, Heart
Emotion Words: Feelings, alone, touch, desire, miss, memories, sorry, love

Emotional Themes:

  • Emotional introspection: Lyrics grew more confessional and inward-looking.
  • Ambiguity and change: Romantic roles blurred slightly as women expressed stronger voices (e.g., Carly Simon’s “You’re So Vain”).
  • Sexual awakening: Subtle hints of sexuality emerged but still cloaked in metaphor or vulnerability.
  • Loss and longing: Breakups were explored with more psychological depth (e.g., “How Deep Is Your Love”).

Example lyric: “Feelings, nothing more than feelings, trying to forget my feelings of love…” – Morris Albert (1975)

Playlist:

  1. “Feelings” – Morris Albert (1975)
  2. “You're So Vain” – Carly Simon (1972)
  3. “Let’s Stay Together” – Al Green (1972)
  4. “If I Can’t Have You” – Yvonne Elliman (1977)
  5. “I Honestly Love You” – Olivia Newton-John (1974)
  6. “How Deep Is Your Love” – Bee Gees (1977)
  7. “Don’t Go Breaking My Heart” – Elton John & Kiki Dee (1976)
  8. “I Will Survive” – Gloria Gaynor (1978)
  9. “Magic Man” – Heart (1975)
  10. “Baby Come Back” – Player (1977)

1980–1989: Passion, Power, and Angst

Representative Artists: Madonna, Prince, Cyndi Lauper, Bon Jovi, Whitney Houston, Debbie Gibson
Emotion Words: Crazy, heart, burning, need, forever, wild, broken, obsession

Emotional Themes:

  • Intense passion: Romance became more physical and emotionally extreme (e.g., “Like a Virgin,” “Crazy for You”).
  • Teen rebellion: Love was defiant, dramatic, and sometimes destructive (e.g., “Love Is a Battlefield”).
  • Fantasy and glamor: Relationships were tied to idealized versions of love, often cinematic or escapist.
  • Empowerment: Especially for girls, music began showing emotional strength and agency (e.g., “Girls Just Wanna Have Fun”).

Example lyric: “Shot through the heart, and you’re to blame—you give love a bad name.” – Bon Jovi (1986)

Playlist:

  1. “Like a Virgin” – Madonna (1984)
  2. “Love Is a Battlefield” – Pat Benatar (1983)
  3. “I Want to Know What Love Is” – Foreigner (1984)
  4. “Crazy for You” – Madonna (1985)
  5. “Time After Time” – Cyndi Lauper (1983)
  6. “Faithfully” – Journey (1983)
  7. “Heaven” – Bryan Adams (1985)
  8. “You Give Love a Bad Name” – Bon Jovi (1986)
  9. “Open Your Heart” – Madonna (1986)
  10. “Eternal Flame” – The Bangles (1989)

1990–1999: Honesty, Experimentation, and Emotional Complexity

Representative Artists: Britney Spears, TLC, Nirvana, Alanis Morissette, Backstreet Boys, Mariah Carey
Emotion Words: Crazy, real, confused, trust, sorry, used, crush, strong, lie

Emotional Themes:

  • Emotional realism: Love was messy, contradictory, and openly discussed (e.g., “You Oughta Know”).
  • Sexual openness: Songs openly addressed desire and consent, reflecting cultural shifts in attitudes.
  • Obsession and infatuation: Boy bands and pop queens sang of heart-pounding crushes.
  • Cynicism and mistrust: Lyrics hinted at betrayal, manipulation, and identity crises.

Example lyric: “I want it that way…” – Backstreet Boys (1999)

Playlist:

  1. “...Baby One More Time” – Britney Spears (1998)
  2. “I Want It That Way” – Backstreet Boys (1999)
  3. “You Oughta Know” – Alanis Morissette (1995)
  4. “Vision of Love” – Mariah Carey (1990)
  5. “Waterfalls” – TLC (1995)
  6. “Always Be My Baby” – Mariah Carey (1995)
  7. “Torn” – Natalie Imbruglia (1997)
  8. “Genie in a Bottle” – Christina Aguilera (1999)
  9. “Kiss Me” – Sixpence None the Richer (1997)
  10. “My Heart Will Go On” – Celine Dion (1997)

2000–2009: Drama, Identity, and Textbook Love

Representative Artists: Taylor Swift, Usher, Avril Lavigne, Beyoncé, Chris Brown, Kelly Clarkson
Emotion Words: Hate, text, fake, drama, jealous, alone, forever, broken, real

Emotional Themes:

  • High drama: Love was depicted as full of emotional swings—jealousy, betrayal, passion.
  • Digital love: Texting, online relationships, and social media began influencing narratives.
  • Empowerment post-breakup: Anthems of moving on and self-respect (e.g., “Since U Been Gone”).
  • Fantasy vs. reality: Fairytale love stories (e.g., “Love Story”) clashed with real-life complications.

Example lyric: “Because of you, I find it hard to trust not only me, but everyone around me.” – Kelly Clarkson (2004)

Playlist:

  1. “Since U Been Gone” – Kelly Clarkson (2004)
  2. “Love Story” – Taylor Swift (2008)
  3. “Cry Me a River” – Justin Timberlake (2002)
  4. “Complicated” – Avril Lavigne (2002)
  5. “Irreplaceable” – Beyoncé (2006)
  6. “With You” – Chris Brown (2007)
  7. “Hey Ya!” – OutKast (2003)
  8. “Teardrops on My Guitar” – Taylor Swift (2006)
  9. “Unwritten” – Natasha Bedingfield (2004)
  10. “Beautiful Soul” – Jesse McCartney (2004)

2010–2019: Vulnerability, Self-Love, and Emotional Fluidity

Representative Artists: Billie Eilish, Ariana Grande, Shawn Mendes, Lorde, Olivia Rodrigo (late-decade), Harry Styles
Emotion Words: Anxiety, toxic, ghosted, vibe, alone, worthy, broken, safe, fake

Emotional Themes:

  • Mental health and love: Lyrics frequently referenced depression, anxiety, and emotional insecurity in relationships.
  • Toxicity and boundaries: Songs explored emotional manipulation, gaslighting, and self-preservation.
  • Self-love and independence: A growing focus on putting oneself first emerged (e.g., “thank u, next”).
  • Emotional fluidity and queerness: Gender and romantic roles were less binary, more fluid and inclusive.

Example lyric: “I'm the bad guy, duh.” – Billie Eilish (2019)

Playlist:

  1. “thank u, next” – Ariana Grande (2019)
  2. “Lovely” – Billie Eilish & Khalid (2018)
  3. “Shallow” – Lady Gaga & Bradley Cooper (2018)
  4. “Lose You to Love Me” – Selena Gomez (2019)
  5. “If I Could Fly” – One Direction (2015)
  6. “Treat You Better” – Shawn Mendes (2016)
  7. “Without Me” – Halsey (2018)
  8. “Too Good at Goodbyes” – Sam Smith (2017)
  9. “Perfect” – Ed Sheeran (2017)
  10. “The One That Got Away” – Katy Perry (2010)

Decade     Dominant Emotion Words        Themes
1960s      Love, cry, forever, baby      Innocent, idealized romance; heartbreak
1970s      Feelings, alone, touch        Emotional introspection, subtle sexuality
1980s      Burning, wild, forever        Passion, power dynamics, rebellion
1990s      Crush, lie, trust, sorry      Emotional honesty, confusion, betrayal
2000s      Drama, fake, broken           Digital-age love, empowerment, fantasy
2010s      Toxic, ghosted, vibe          Mental health, self-love, identity

To conclude, I feel compelled to mention that it wasn't until approximately the 1990s that Hip Hop and Rap made words like "nigga" and "fuck" common in music.  For instance, N.W.A. released “Fuck tha Police” (1988), and Snoop Doggy Dogg's "Doggystyle" used "nigga" frequently throughout the album.  Do you believe that such radical song-lyric alterations primarily reflected an immense disintegration in teenage and young adult social standards and respectful language over the decades, or were they more a driver of it?

Tuesday, July 15, 2025

Why We Remember Our Teenage and Early Adult Lives So Vividly

There’s something about the adolescent and early adult years that causes them to be preferentially emblazoned in our minds.  A “reminiscence spike” consistently appears when autobiographical memories from across the lifespan are plotted on a graph. This blog post discusses that finding. And the next will illustrate very significant emotional differences in teen and early adult cohort experiences across the decades from 1960 to 2010.

The Stories We Keep

Why do so many of our most vivid memories come from our teens and twenties? The rush of a first kiss, the feeling of driving alone for the first time, the concerts, the heartbreaks, the friendships that felt like they would last forever—these memories stick with us in a way that even more recent experiences often do not. This psychological phenomenon is known as the "reminiscence spike," and it refers to the tendency for older adults to recall a disproportionately large number of autobiographical memories from their adolescence and early adulthood—typically from about ages 10 to 30, with a peak around the late teens to early twenties.

But why do these years burn so brightly in the mind’s eye?

Researchers have been fascinated by this for decades. The reminiscence spike shows up reliably when people over the age of 40 are asked to recall the most important events of their lives, or when they are prompted with cues like "Tell me about a memorable experience associated with the word 'freedom.'" Time and again, people reach back to their younger years—even when their memory of other life periods fades.

There are several psychological theories that seek to explain this striking memory phenomenon.

1. The Cognitive Account: Novelty Breeds Memory

One of the most influential explanations is the cognitive account, which suggests that we remember this period so well because it's packed with firsts: first job, first love, first move away from home. According to this view, the brain is more likely to encode and retain novel or emotionally intense experiences, and adolescence is full of them.

Psychologist David Rubin and colleagues have argued that these “firsts” act as strong memory anchors because new experiences lead to deeper encoding, and the novelty of events in adolescence and early adulthood makes them more memorable (Rubin, Wetzler, & Nebes, 1986).

2. The Identity-Formation Hypothesis: Memory Serves the Self

Another theory suggests that this upward spike in memory is tied to the process of identity formation. According to Erik Erikson’s stages of psychosocial development, adolescence and young adulthood are the key periods when individuals ask, “Who am I?” and “What do I want from life?”

This idea is supported by research showing that the events people remember from this time are often ones that shaped who they are: a life-changing teacher, a choice to pursue a career path, a rebellious phase, or a defining cultural moment. Conway and Pleydell-Pearce (2000) argue that autobiographical memory is organized around a “self-memory system,” and events that contribute to the construction of a coherent self are more likely to be remembered.

3. The Cultural Life Script Hypothesis: Society Writes Our Story

A third perspective focuses less on the individual and more on shared cultural expectations. This is known as the cultural life script hypothesis. According to this view, cultures provide a template—a sort of timeline—about when major life events are expected to occur (like falling in love, graduating, getting married, or having children). Because many of these events typically occur in adolescence and early adulthood, we remember them more vividly.

Berntsen and Rubin (2004) showed that when people are asked to recall the “typical life of a person,” most of the important events they mention happen during this same reminiscence period—whether or not they personally experienced them. This suggests that memory is partially structured by shared societal narratives.

4. Neurological and Biological Changes: Brain at Its Peak

Some researchers point to neurological development. During adolescence, the brain—particularly the hippocampus and prefrontal cortex, which are crucial for memory encoding—is both active and plastic. Hormonal changes and heightened emotions can also strengthen memory formation. This period might simply be when our brains are most efficient at forming long-lasting, emotionally rich memories (Ghetti & Bunge, 2012).

5. Emotional Intensity and Rehearsal

Finally, the emotions associated with adolescence may be stronger and more personally meaningful, and we tend to rehearse those memories more often—by telling stories, looking at old photos, or daydreaming. Emotionally charged memories, especially those that are frequently revisited, tend to be better consolidated and retained over time (Kensinger, 2009).

To conclude, the reminiscence spike is not just a quirk of memory—it’s a window into how we build our life stories. These adolescent and early adult memories serve as emotional landmarks, guiding our sense of self across the years. Whether it’s your first apartment, the song that played during your senior prom, or the rush of independence that came with your first road trip, these are the moments our minds cling to—not only because they were exciting, but because they helped define who we are.  As life continues, new memories form, but the ones from that crucial period of becoming remain the most vivid chapters in the autobiography we carry in our minds.

Although psychology has completed dozens of reminiscence spike studies, I have yet to find research in one related area that is worth considering: the fact that memories are gross abstractions of actual experience.  And those abstractions often include major distortions. Some of your reminiscences, or parts of them, are patently false.  Moreover, what you remember is always influenced by how you are feeling at the moment of recall.  That notion of “state-dependent memory” is robust and important. So, if you are happy at the moment of reminiscence, you are more likely to recall fondly, and the opposite if you are sad.

 

REFERENCES

Berntsen, D., & Rubin, D. C. (2004). Cultural life scripts structure recall from autobiographical memory. Memory & Cognition, 32(3), 427–442. https://doi.org/10.3758/BF03195836

Conway, M. A., & Pleydell-Pearce, C. W. (2000). The construction of autobiographical memories in the self-memory system. Psychological Review, 107(2), 261–288. https://doi.org/10.1037/0033-295X.107.2.261

Ghetti, S., & Bunge, S. A. (2012). Neural changes underlying the development of episodic memory during middle childhood. Developmental Cognitive Neuroscience, 2(4), 381–395. https://doi.org/10.1016/j.dcn.2012.05.002

Kensinger, E. A. (2009). Remembering the details: Effects of emotion. Emotion Review, 1(2), 99–113. https://doi.org/10.1177/1754073908100432

Rubin, D. C., Wetzler, S. E., & Nebes, R. D. (1986). Autobiographical memory across the adult lifespan. In D. C. Rubin (Ed.), Autobiographical memory (pp. 202–221). Cambridge University Press.

Tuesday, July 1, 2025

What You Say and Don't Say

There’s a curious power in silence. Not just in what isn’t spoken, but in what is deliberately withheld. Every conversation, every sentence, even the briefest exchange, is an act of editing. We choose our words carefully—or sometimes carelessly—but either way, we’re revealing a version of ourselves. At the same time, we’re concealing something else. That’s the quiet truth at the heart of communication: what you say is only half the story. 

Think of the last time you held your tongue. Maybe it was in the middle of an argument, when your pride ached to say something sharp, but your better judgment told you not to. Or maybe it was during a moment of vulnerability, when someone you cared about opened up—and instead of blurting out advice, you simply listened. In either case, your silence wasn’t empty. It was filled with meaning, restraint, perhaps even love.

Words carry weight, but so does their absence. We sometimes forget this in a culture that rewards volume, speed, and opinions broadcast into the void. Social media encourages us to speak instantly and incessantly, as if silence were an admission of ignorance or irrelevance. But in real life, choosing not to say something can be the strongest statement of all. It can be a sign of maturity, of empathy, of knowing that not every thought needs to be shared to be understood.

Of course, there are risks in silence, too. Not speaking up when something matters—when injustice unfolds in front of you, or when someone needs a defender—can feel like complicity. That’s the other side of the coin. Just as our silence can protect, it can also betray. The challenge is learning when to use it wisely.

Ultimately, the way we communicate is less about mastering language and more about mastering ourselves. It’s about knowing that every word you release into the world changes something, however small. And every word you keep tucked away does, too. So, the next time you’re about to speak—or hold back—ask yourself not just what you want to say, but why. Because in the end, what you choose to say—and not say—becomes the voice of who you are.

I also want to underscore the power contained in single word choices.  That is, what we say—and don’t say—is shaped not only by dictionary content but also by tone, and more specifically, by the emotional charge of the words we choose. Language isn’t just a tool for conveying facts. It’s a vehicle for feeling, for stirring emotion in others, and for revealing what lies under the surface of our own thoughts.

A single word—carefully chosen or carelessly flung—can elevate or destroy, soothe or provoke. Take the difference between saying someone is “stubborn” versus “determined.” Technically, they describe a similar trait, but emotionally, they land in entirely different places. “Stubborn” feels harsh, rigid, even critical. “Determined” feels admirable and strong. The facts remain unchanged, but the emotional color shifts entirely depending on the word.

That’s why individual word choice matters so deeply in close relationships, in leadership, and even in casual conversation. Think about how different “I’m disappointed” feels compared to “I’m angry.” Or how “I understand” can feel vastly more comforting than “I get it.” Each word carries emotional weight—a kind of invisible gravity that can pull others in or push them away.

Politicians, poets, and advertisers all know this. They wield words not just to inform but to move—to trigger hope, fear, pride, shame, or urgency. And we do this, too, even when we’re unaware. Our word choices are emotional fingerprints, revealing our moods, biases, and intentions, even in subtle ways.

The beauty and burden of language is that every word carries baggage. And that baggage enters the room the moment we speak. So, as much as communication is about deciding what to say and what to leave unsaid, it's also about the emotional texture of how we say it. A gentle word can soften the hardest truth. A harsh word can shatter even a delicate silence. In the end, words are not just tools—they are instruments. And like any instrument, they can play music or make noise. The difference is in how consciously—and compassionately—we choose to use them. The obvious point of this blog post is: what you choose to say, not say, and the words you select all determine what you communicate conceptually and emotionally, what you enact relationship-wise, and what you disclose about yourself.

Saturday, June 21, 2025

Before the First Shot Is Fired

In 2018, Israeli Prime Minister Benjamin Netanyahu famously presented what he claimed was a cache of over 100,000 Iranian nuclear documents — materials Israeli operatives reportedly smuggled out of a Tehran warehouse under the noses of Iranian security. As The New York Times and Haaretz later confirmed, this Mossad operation took months of planning and suggested Israel had deep operational capabilities within Iran’s borders (Bergman, 2018; Kershner, 2018).

Israel's success didn’t just stem from superior technology; it came from deep infiltration — agents, informants, and sympathizers placed over years within Iran’s military, nuclear infrastructure, and security circles. These networks enabled not only sabotage operations, like the explosion at the Natanz nuclear facility in 2020, but also the assassination of top Iranian nuclear scientists, including Mohsen Fakhrizadeh in 2020, reportedly using remote-controlled weapons (BBC, 2020). The message was clear: Israel knows where you are, what you're doing, and how to stop it. That history laid the groundwork for the ongoing, remarkable assassinations and military victories playing out in 2025.

The unseen war, then, began long before soldiers marched or missiles launched. In the case of Israel and Iran, intelligence — the covert kind — has been the invisible hand tilting the balance. For decades, Israel has run one of the most aggressive and effective intelligence operations against Iran, especially through its Mossad agency. Through a blend of cyber warfare, human intelligence, and targeted sabotage, Israel has not only kept Iran's nuclear ambitions in check but, at times, humiliated the Iranian security establishment.

Now, consider not Israel and Iran, but China and the United States. If war ever broke out between the two superpowers, intelligence would again serve as the silent battlefield. And China, many experts argue, is already deeply embedded in the American fabric — not merely through clandestine spying, but through influence operations, data theft, cyber espionage, and intellectual infiltration.

The FBI has repeatedly warned that China poses the “greatest long-term counterintelligence threat” to the United States (Wray, 2020). The scope is staggering. From the theft of F-35 fighter jet blueprints to intrusions into U.S. government personnel records (the 2015 OPM hack affected over 20 million Americans), Chinese cyber operations have harvested a trove of sensitive material.

But the threat goes far beyond computers. China has pursued what some intelligence analysts call a "whole-of-society" approach — using every available avenue, from business acquisitions to university ties, to gather intelligence and exert influence.

Chinese companies, many with ties to the Chinese Communist Party (CCP), have purchased or invested in American farmland, agricultural supply chains, data companies, and even private security firms. In 2023, Chinese investors were found to have acquired land near U.S. military bases — notably in North Dakota near a sensitive drone facility (U.S.-China Economic and Security Review Commission, 2022). While such purchases are legal under American law, the strategic implications are unsettling.

Meanwhile, American universities — known for their openness and world-class research — have become targets of influence. The U.S. Department of Justice’s now-paused “China Initiative” aimed to root out intellectual property theft and undisclosed ties between American academics and Chinese institutions. Cases like that of Harvard chemist Charles Lieber, who secretly accepted funding from China while working on U.S.-funded projects, highlighted just how porous the boundaries between civilian science and strategic military application have become.

Then there are the students. As of 2023, there were nearly 300,000 Chinese students studying in the United States — by far the largest foreign student group. Many are focused on STEM fields (science, technology, engineering, mathematics), and while most are likely here simply to learn and advance their careers, a small fraction may be tapped — or pressured — by Chinese intelligence to collect information. The Chinese government maintains tight control over its citizens abroad, often using family back home as leverage.

A 2020 report by the Australian Strategic Policy Institute found evidence of Chinese military-affiliated researchers studying abroad in U.S. universities under civilian guises, learning cutting-edge defense-related technologies (Joske, 2020). These researchers often returned to China with skills that directly enhanced PLA (People’s Liberation Army) capabilities.

If a hot war erupted between the U.S. and China, Beijing might be better prepared than it seems — not because of an overwhelming military edge, but because of what it already knows about American systems, infrastructure, and weaknesses. Chinese cyber units have already demonstrated an ability to infiltrate American power grids, financial systems, and telecommunications networks. In a wartime scenario, they could sabotage logistics, disrupt communications, or sow domestic confusion before the first missile is fired.

In contrast, the U.S. may find it harder to gain similar traction inside China, where the state maintains rigid control over the internet, society, and foreign access. Beijing has learned from Moscow and Tehran the value of “asymmetric warfare”: war fought not just on battlefields, but in supply chains, social media, server rooms, and scholarly journals.

Just as Israel’s infiltration of Iran gave it a decisive edge in disrupting nuclear development, China's quiet, pervasive embedding into the United States’ commercial, technological, and educational systems could one day function similarly — not in preventing war, but in shaping how that war plays out. The battlefield is no longer only physical — it’s intellectual, digital, and relational.

The question is not whether we are being spied on. We are. The question is how deeply — and whether we’ll realize the consequences before it’s too late. 

What should we do to force our politicians to act? I suggest you consider the following:  On January 17, 2025, in TikTok v. Garland, the U.S. Supreme Court unanimously upheld the law requiring that ByteDance, the Chinese parent company of TikTok, sell off its U.S. operations by January 19, 2025, or face a complete ban of the app within the United States. There was no ambiguity. On January 18 and 19, just as the ban was set to take effect, TikTok briefly vanished from U.S. app stores and was temporarily taken offline. However, within hours, the app was restored following behind-the-scenes assurances that newly inaugurated President Donald Trump was preparing an executive order that would temporarily delay enforcement of the law.

Accordingly, after taking office on January 20, 2025, President Trump swiftly issued a 75-day reprieve to give ByteDance and potential American buyers time to negotiate a deal. Fast forward to June 20, and we have a third extension of the deadline. Now ByteDance has until September 17, 2025, to finalize a divestiture or face the app’s forced removal from the U.S.  So, TikTok remains fully operational and widely accessible in the United States. What would Israel do if it were their country being infiltrated by TikTok and TikTok's owner were Iran?


References

BBC News. (2020). Iran scientist killed by remote-controlled weapon.

Bergman, R. (2018). Mossad's Iran nuclear archive heist. The New York Times.

Joske, A. (2020). The Chinese Communist Party's global search for technology and talent. ASPI.

Kershner, I. (2018). Israel says it has secret files proving Iran lied about nuclear program. The New York Times.

U.S.-China Economic and Security Review Commission. (2022). Report on Chinese land purchases near military sites.

Wray, C. (2020). China is the greatest threat to America's national security. FBI speech.



Sunday, June 15, 2025

Sharing Stories

You’re sitting on the beach, about to share a story with a friend. You have something important to say, and how you say it will determine whether your friend actually understands—or whether they nod politely, eyes glazing over.

As I have often discussed, context is sometimes the single most important factor in what we say and when.  It frames the story and helps your friend understand where you’re coming from. If you launch into a complicated explanation about your job or a new project without explaining why it matters, your friend might feel lost. But if you first share why you’re excited or how this story connects to their interests, they’re more likely to engage. Context bridges the gap between your world and theirs.

Next comes clarity—stripping away the verbal clutter to make your message easy to grasp. If your statements meander or your point gets buried under too many details, your friend might struggle to keep up. Clear language—short, direct sentences—helps your message land cleanly.

That raises the issue of information compression. Nobody wants to be stuck listening to a story that could have been summed up in a few words. Information compression is distilling your message; it doesn’t mean leaving out important parts, but rather prioritizing what truly matters. When you compress information properly, interlocutors absorb your point without getting lost.

Finally, information specificity brings the message to life. Generalities are like blurry photos. When you’re specific, people see in their mind’s eye and hear in their mind’s ear the sights and sounds that populate your thoughts.  Those specifics make your communications vivid and memorable, helping the other person truly connect to what you’re sharing.

So, next time you’re sharing a story, explaining a plan, or giving advice, think about context (why it matters), clarity (making it easy to follow), information compression (keeping it concise), and information specificity (making it come alive). Together, these elements transform your words from a jumble of sounds into something that sticks—something that truly resonates with listeners.  After all, the best communicators aren’t just speakers; they are bridges between hearts and minds.

To illustrate my points, imagine someone telling the story of a bad first day at a new job. I’ll give you two versions; the first violates the ideas I discussed and the second embodies them.

Version 1: Poor Communication (lacking context, clarity, compression, and specificity)

"So, like, yesterday, I went to that place, you know, and it was really, like, not what I thought it would be. I mean, there was so much stuff going on, and I didn’t even know what to do. And then I messed up and made a big mistake with some, like, important thing that was there, and the person, I think, didn’t really like it. And then I had to do this other thing with the thing, and it was all just too much, and I felt really bad after. I hope tomorrow’s better."

Version 2: Effective Communication (with context, clarity, information compression, and specificity)

"Yesterday was my first day at the new marketing firm downtown, and it didn’t go as planned. First, I arrived 15 minutes late because of a traffic accident on the interstate, which stressed me out immediately. Then, during the team meeting, I accidentally spilled coffee on my boss’s presentation. I apologized, but the tension lingered, and I could tell my boss wasn’t happy. Afterward, I got assigned to a project I wasn’t fully briefed on, and I felt overwhelmed trying to catch up. Overall, I left feeling embarrassed and worried about how I came across."

My bare-bones comparison of Version 1 and Version 2 is:

Context: The first version doesn’t clarify where, when, or even what the setting is.  The second establishes where and when the event happened (first day at a marketing firm downtown).

Clarity: The first version is vague and jumps around with ambiguous pronouns (“the thing,” “some, like, important thing”). The second clearly lays out what happened in sequence.

Information Compression: The first version is cluttered with filler words and meandering thoughts. The second one is concise, but still informative.

Specificity: The first version leaves everything open to interpretation. The second one provides concrete details: being late because of traffic, spilling coffee, tension with the boss, feeling overwhelmed.

Of course, it's one thing to think about communication context, clarity, information compression, and specificity within an abstract, intellectual exercise such as this one, and quite another to consistently communicate that way in real time. The latter, at minimum, requires metacognition. As I have discussed many times, metacognition is thinking about thinking. In the present "communication context," that means thinking about and following through with your best efforts to ensure proper context, clarity, information compression, and specificity. By doing so, you will establish common ground with your interlocutor so that they are motivated to hear your story and understand it as intended.