Science: A Relationship You May Not Understand

Feb 25, 2013
Originally published on February 26, 2013 9:43 am

Eating more antioxidants can reduce your risk of stroke and dementia. Or maybe not. Moderate alcohol consumption has some health benefits. But also some risks. Women should take calcium supplements. Or maybe they shouldn't.

Sound familiar? Just when you thought you knew what you should and shouldn't be doing to improve your health and well-being (whether or not you were actually doing it), new science comes along and changes the story. It's enough to make you feel betrayed, to decide that science is unreliable.

Before your relationship with science ends up on the rocks, I urge you to consider the situation from another perspective.

Most of the time, when two findings actually or apparently conflict, it's not the result of foul play or error. It's the result of how science works. And if you understand why, you might be more inclined to forgive science for its inevitable inconsistencies. You might even come to find them intriguing, if not exactly charming.

Here's the key insight you need to understand the inner life of science: the conclusions drawn from scientific studies almost always involve generalizing from a sample to a population.

If science didn't make such generalizations, it wouldn't be terribly useful or interesting. After all, we want to know whether we should eat antioxidants and whether we should feel guilty about that second glass of wine, not whether the people in a particular study should have. But with generalization comes the possibility for error.

To understand why, it helps to consider a simple example. Suppose you're interested in testing the hypothesis that men are, on average, taller than women. You could measure the height of every person alive today — approximately 7 billion people — and see whether the average height for men is greater than the average height for women. That would take you a rather long time. So, instead, you might measure the heights of a sample of men and women to see whether the average height for the men is greater than the average height for the women in that sample.

But how do you choose the sample? Ideally, you want your sample to be perfectly representative of the population, something that you're likely to approximate by choosing people at random. So if you had a list of all the people on the Earth, you would want to choose some subset — say 100 men and 100 women — at random. Then you could fly around the world measuring those 200 people in their varied and remote locations.

This might be a nice adventure, but hardly the way to do efficient research. (And good luck getting funding!) Even if we did have a complete list of all people and how to find them, there would be some probability that the 200 we chose weren't representative of the population. We could be unlucky and choose a few unusually tall women and a few unusually short men, ending up with average heights that are exactly the same.
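The sampling story above can be sketched in a few lines of code. This is a toy simulation with made-up population parameters (the means and spread below are illustrative, not real anthropometric data): we repeatedly draw random samples of 100 men and 100 women and look at how much the sample-mean difference bounces around from draw to draw.

```python
import random

random.seed(42)

# Hypothetical populations (illustrative numbers only):
# men ~ Normal(175 cm, sd 7), women ~ Normal(162 cm, sd 7),
# so the "true" difference in average height is 13 cm.
def sample_mean_diff(n=100):
    """Draw n men and n women at random; return the difference in sample means."""
    men = [random.gauss(175, 7) for _ in range(n)]
    women = [random.gauss(162, 7) for _ in range(n)]
    return sum(men) / n - sum(women) / n

# Repeat the whole study 1,000 times.
diffs = [sample_mean_diff() for _ in range(1000)]

# Most samples land near the true 13 cm difference, but individual
# samples vary: an unlucky draw can land well away from the truth.
average_diff = sum(diffs) / len(diffs)
spread = max(diffs) - min(diffs)
```

The point of the simulation is the spread: even with perfectly random sampling, any single study's estimate carries chance variation, which is exactly the "unlucky sample" problem described above.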

This is where statistical tests come in: they can tell us, roughly, how likely it is that we would observe a particular result by chance given that we're sampling from a population with particular characteristics.

In psychology, my field, the standard practice is to consider a result "statistically significant" if the probability of observing a result at least that extreme by chance alone, assuming there's no real effect, is under 5 percent. But using this criterion, there is still some probability that a "significant" result was due to chance alone.
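One simple way to make that logic concrete is a permutation test, sketched below with made-up height numbers: if group labels didn't matter, shuffling them shouldn't change much, so we ask how often shuffled data produce a difference as large as the one we actually observed.

```python
import random

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Estimate how often a mean difference at least as large as the
    observed one would arise by chance, by shuffling the group labels."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # pretend the labels are arbitrary
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    return count / n_perm

# Illustrative (fabricated) height samples in centimeters:
heights_a = [180, 175, 178, 182, 171]
heights_b = [165, 160, 168, 158, 163]
p = permutation_p_value(heights_a, heights_b)
# A small p means "unlikely under chance alone";
# the conventional cutoff is p < 0.05.
```

This is only one kind of statistical test, but it shows the shared idea: quantify how surprising the data would be if chance were the whole story.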

One implication is that about 1 in 20 "significant" findings is likely to be a fluke. In practice, the number may be far larger, as scientists often don't publish papers that fail to find a significant result. So, published research is likely to overrepresent the flukey 5 percent. And if the flukey 5 percent are especially interesting, perhaps because of their novel and unexpected findings, then media coverage may exaggerate this overrepresentation even further.
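The "1 in 20 findings is a fluke" arithmetic can be checked directly by simulation. The sketch below runs many experiments in which the true effect is exactly zero and counts how many clear the conventional significance bar anyway (the sample sizes and thresholds are illustrative choices, not from any real study):

```python
import random

def run_null_experiments(n_experiments=2000, n=50, seed=1):
    """Simulate experiments where the true effect is zero, and count
    how many cross the 'significant' threshold anyway."""
    rng = random.Random(seed)
    significant = 0
    for _ in range(n_experiments):
        # Two samples drawn from the *same* population,
        # so any apparent "effect" is pure chance.
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        diff = sum(a) / n - sum(b) / n
        se = (2 / n) ** 0.5          # standard error of the mean difference
        if abs(diff) > 1.96 * se:    # roughly the p < 0.05 criterion
            significant += 1
    return significant / n_experiments

rate = run_null_experiments()
# Roughly 5% of the experiments come out "significant" even though
# every true effect is zero. If journals mostly publish those,
# the published record overrepresents the flukes.
```

The false-positive rate hovers near 5 percent by construction; the publication-bias problem is that readers see mostly the numerator (the significant results), not the denominator (all the experiments run).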

Surely you won't hold science responsible for bad luck and a little press? We can still make this relationship work, right?

Unfortunately, though, the challenges of generalizing from a sample don't end with statistics. Recall that our hypothetical study involved a magical list of all humans and their locations as well as (NSF-funded?) travel around the world.

In a more realistic situation, you'll have to find a more convenient sample and a less-expensive strategy for data collection.

Perhaps you can measure the heights of 200 men and women in a local park one Sunday morning. Perhaps you find that the men are, on average, taller than the women. The most conservative conclusion is that men are taller than women in this particular sample at this particular time with this particular method of measurement. But that's not terribly useful or interesting. We're more likely to want to know about men and women in general.

Are we warranted in drawing conclusions about men and women in general from our park-going sample? Maybe our conclusions should only extend to American men and women. Or just people who go to parks? Or could it be people in the park on Sundays (after all, it could be that heights change throughout the week)? Maybe the study only tells us about people whose heights are being measured (after all, height could be affected by the very process of measurement — quantum mechanics tells us that crazier things are possible). What if that Sunday morning happened to involve a men's basketball practice? Then we'd have a whole new reason to doubt the generality of our result.

Of course, this example seems a little silly because we already know a lot about height. We know that it's unlikely to vary much from Sunday to Tuesday and that basketball players might skew our sample. But when it comes to many targets of contemporary research, we're working from a place of relative ignorance.

Consider the Nurses' Health Studies: they have involved over 200,000 nurse participants since 1979 and yielded tremendous insights into women's health. To what extent do the findings generalize to men? To what extent do the findings generalize to women from other cultures, or with very different lifestyles from nurses? Or to women being born today, who are likely to have somewhat different medical histories from the women born decades earlier?

There's also a generalization problem when it comes to characterizing the factors measured in particular studies. For example, a recent study suggests that high total antioxidant consumption isn't protective against stroke and dementia. However, particular antioxidants are protective, at least when consumed in fruits and vegetables. So we might have thought that a previous finding linking blueberry consumption to positive health benefits was a consequence of "antioxidants in general," when really it was a consequence of "antioxidants acquired from fruits." (Or maybe just from blueberries. Or maybe just for people whose diets are also high in calcium, who have a particular genotype and who either play the flute or have a wry sense of humor. We just don't know yet.)

So, some of the time when two studies appear to be in conflict, it's because the generalizations that were drawn on the basis of one or both sets of findings were too broad, straying too far beyond the characteristics of the particular sample and the particular factors considered. Sometimes it's the "fault" of the world, for providing a statistically unrepresentative sample. Sometimes it's the fault of the scientists, for choosing a poor sample or mischaracterizing the population to which the findings apply. Sometimes it's the fault of reporters, for straying too far beyond the data. Sometimes it's the fault of the editors, who opt for the catchier — but less accurate — headline. [Editor: who, me?] And sometimes it's all of the above.

Having what we took to be solid, scientific knowledge shift beneath our feet can be unsettling. But it shouldn't be that surprising. It is, after all, how science works.

Science, by its nature, involves enormous hubris: we try to make sense of the world from our limited observations. We expect that what we observe here and now will tell us something about what we haven't observed and may never observe. Science is all about generalizations.

But science is also modest: it changes in light of new evidence. Science is willing to admit when it's wrong. And it's this combination that makes science such a powerful partner — one worth sticking with in sickness and in health.

You can keep up with more of what Tania Lombrozo is thinking on Twitter: @TaniaLombrozo

Copyright 2013 NPR.