10 Black Scientists You Should Know

Since before the Civil War, Black scientists have been conducting pioneering research that has changed the way we live and work today. Despite experiencing racial bias from an early age, these remarkable people kept their eyes on the prize. They persevered when educational opportunities were barred because of prejudice, and found ways to do research when employment was denied for no reason other than the color of their skin.

From well-known Black scientists such as George Washington Carver, to James West, coinventor of the electret microphone, to those whose impressive scientific records have nearly languished in obscurity, our list will have you rethinking what else might be left out of your history textbook.

10. George Washington Carver

George Washington Carver was a scientist and inventor best known for discovering 100 uses for the peanut, but that’s only the tip of the iceberg in his remarkable life. He was born to enslaved parents on a Missouri farm at the close of the Civil War and kidnapped by raiders a week later, becoming an orphan in the process.

Carver’s former owners, Moses and Susan Carver, eventually located and returned Carver to the farm of his birth. In the years that followed, Susan Carver taught him to read and write because local schools did not allow Black students.

The experience sparked an interest in lifelong learning. Carver self-directed his way through high school and conducted biological experiments of his own design. Eventually, he enrolled in Iowa State Agricultural College’s botany program, where he earned a master’s degree — and a reputation as a brilliant scientist, teacher and advocate for farmers. He then became an instructor at the famed Tuskegee Institute, working alongside Booker T. Washington.

In addition to developing crop rotation methods for sharecroppers, many of whom were former slaves, Carver designed a horse-drawn classroom to illustrate his methods firsthand. He also pioneered a series of practical inventions that would make farming more profitable and less dependent on cotton, including more than 100 ways to turn sweet potatoes, soybeans and peanuts into products such as dyes, plastics and fuel.

Carver became an adviser on agricultural matters to President Theodore Roosevelt, and in 1916, one of the few American members of the British Royal Society of Arts. Carver died in 1943, at age 78 [source: Biography].

9. James West

The next time you hear a telephone ring, think of James West. West is a Southern-born scientist best known for his 1962 coinvention of the electret microphone, a device that converts sound to electrical signals.

A stunning 90 percent of microphones currently designed or produced — for devices ranging from telephones and hearing aids to portable recorders — are based on West’s work, the bulk of which occurred during his four decades at Bell Labs. During that time, West was granted more than 200 U.S. and foreign patents, and achieved dozens of professional honors, including inductions into the National Inventors Hall of Fame and the National Academy of Engineering. Upon his retirement in 2001, West joined the faculty of Johns Hopkins University.

It’s been an impressive career arc for West, whose parents once cautioned against scientific pursuits. West’s father pointed out three Black men with doctorates in chemistry and physics working at the local post office and wondered whether his son’s physics degree would simply become a winding road to a blue-collar job. But West was hired by Bell Labs right after graduating from Temple University. He’d interned there during his college summers [source: Homewood].

8. Charles H. Turner

Behavioral scientist Charles H. Turner is best known for his discovery that insects can hear. He was born in 1867 to working-class parents in Cincinnati, Ohio, and became the first African American to earn a doctorate in zoology from the University of Chicago.

Turner’s research centered on animal behavior, and he developed a series of techniques to study and measure how insects learn. Turner was the first to show that insects could hear and that they were capable of changing their behavior based on previous experiences. Notably, his research showed that honeybees could recognize colors and patterns. (A former student wrote about one experiment: “The bees appeared at the table at all three meals. Then Dr. Turner put jam only at breakfast daily. They still came to each meal but found no jam at noon and night. Soon they stopped coming. This shows they have some idea of time” [source: Abramson]).

Much of his work was done without the benefit of laboratory space or research assistants, since Turner taught at high schools. Yet his findings dramatically changed the way scientists understood invertebrate species. Turner died in 1923, but many of his methods are still in use today [source: Biography].

7. Mae Jemison

When Mae Jemison peered back at Earth from the space shuttle Endeavour, she felt a sense of unity — with her hometown of Chicago far below, with every star in the galaxy and, importantly, with her childhood dreams of becoming a scientist. She was the first Black woman to travel into space.

Jemison, born in 1956, grew up loving both the sciences and the arts. In college, she studied Russian and Swahili, and earned a bachelor’s degree in chemical engineering before completing medical school. She also took modern dance classes at the Alvin Ailey School.

The polymath joined NASA’s astronaut training program in 1987 and the Endeavour space shuttle crew in 1992. She was part of an eight-day mission that completed 127 Earth orbits and used her time in space to do bone cell research [sources: NASA; National Women’s History Museum].

After leaving NASA in 1993, Jemison founded The Jemison Group to explore products that connect technology and science, and also BioSentient Corp., which focuses on medical technology projects. She also penned an autobiography, started an international sciences camp for children and appeared on science-related television shows, including “Star Trek: The Next Generation” [source: Changing the Face of Medicine].

The Google Doodle for March 8, 2019 (International Women’s Day), featured a quote from Jemison: “Never be limited by other people’s limited imaginations.” [source: Bach].

6. Percy L. Julian

Percy Julian, the grandson of enslaved people, became one of history’s greatest synthetic chemists; his work allowed many drugs to reach patients at much lower cost and in far greater supply.

He was born in 1899 in Montgomery, Alabama, into a family that understood the transformative power of higher education. At 17, he enrolled in dual coursework as a high school senior and freshman at DePauw University in Greencastle, Indiana, while also working to pay his way through school. Julian studied chemistry and graduated with a bachelor’s degree in 1920; he was the class valedictorian. After a brief stint as a teacher, he attended Harvard and earned a master’s degree, followed by a doctorate from the University of Vienna. By 36, he’d returned to DePauw to conduct research and was the first to synthesize physostigmine, an alkaloid that occurs naturally in the calabar bean and is used to treat glaucoma.

Although Julian faced barriers — he was once denied a research position because a town law forbade Black people from staying overnight — he was propelled by his work. His soybean compound research led to a number of patents and pioneering medications, including synthetic versions of the female hormone progesterone and the steroid cortisone (used to treat rheumatoid arthritis). Julian also produced a fire-retardant foam widely used during World War II.

By 62, he’d formed and sold his private enterprise, Julian Laboratories, for more than $2 million and continued to work as a researcher and consultant until his death in 1975 [source: American Chemical Society].

5. Neil deGrasse Tyson

As the director of the Hayden Planetarium at New York City’s American Museum of Natural History, Neil deGrasse Tyson can be found encouraging children to explore the world around them. It’s a nice turnaround since a visit to a planetarium in the mid-1960s ignited a 9-year-old Tyson’s own passion for the stars.

Tyson is an astrophysicist by trade and science enthusiast by nature, and is considered one of the driving forces behind Pluto’s demotion from planet to dwarf planet. Throughout his career, the Harvard- and Columbia-educated scientist has repackaged complex theories and universal mysteries into essays, presentations and books aimed at laypeople. He’s hosted PBS’s “Nova ScienceNow” series and produces a StarTalk Radio podcast and radio program. Tyson also helped resurrect Carl Sagan’s “Cosmos” television series; he hosted the rebooted version, which debuted in 2014. A sequel series, “Cosmos: Possible Worlds,” which premiered March 9, 2020, was also hosted by Tyson.

Tyson has served as an adviser on the aerospace industry to President George W. Bush and on a later commission focused on space exploration policy. He was even voted People Magazine’s “Sexiest Astrophysicist Alive” in 2000 [sources: Biography; The Planetary Society].

4. David Harold Blackwell

David Harold Blackwell was one of the world’s most notable statisticians, but as a child he didn’t particularly like math. That changed when he met the right teacher, who opened a numerical world to him.

Blackwell, born in 1919, grew up in southern Illinois and by 16 was enrolled at the University of Illinois at Urbana-Champaign. At 22, he graduated from his home state university with a doctoral degree in mathematics and then studied at Princeton. Although Blackwell aspired to a teaching position, racial bias closed doors; he was denied posts at Princeton and at the University of California at Berkeley. However, he was offered a position at Howard University. (Berkeley later offered Blackwell a teaching job, and he became the university’s first Black tenured professor in 1954).

While at Howard, Blackwell studied game theory and how it applied to decision-making in the government and private sectors during summers at RAND Corp. He became the United States’ leading expert on the subject, authoring a widely respected textbook on game theory, as well as research that resulted in several theorems named for him. One such theorem, which explains how to turn rough guesses into on-target estimates, is known as the Rao-Blackwell theorem and remains an integral part of modern statistics and economics. In 1965, he became the first African American to be inducted into the National Academy of Sciences. He died in 2010 [sources: Sanders; Sorkin].
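For the curious, the Rao-Blackwell idea can be made concrete with a small simulation. What follows is a minimal sketch in Python using a textbook example (the example, names and numbers are illustrative, not drawn from Blackwell’s own papers): to estimate theta = P(X = 0) = e^(-lambda) from a Poisson sample, start with the crude unbiased estimator “is the first observation zero?”, then condition it on the sufficient statistic T = sum of the sample, which yields ((n-1)/n)^T. Both estimators have the same expected value, but the Rao-Blackwellized one has much smaller variance.

```python
import random
import math

def crude(sample):
    # Crude unbiased estimator of theta = P(X = 0): indicator that
    # the first observation equals zero.
    return 1.0 if sample[0] == 0 else 0.0

def rao_blackwell(sample):
    # Conditioning the crude estimator on the sufficient statistic
    # T = sum(sample) gives E[1{X1=0} | T] = ((n-1)/n)**T.
    n, t = len(sample), sum(sample)
    return ((n - 1) / n) ** t

def poisson(lam, rng):
    # Knuth's algorithm for drawing one Poisson(lam) variate.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(42)
lam, n, trials = 2.0, 10, 20000
crude_vals, rb_vals = [], []
for _ in range(trials):
    sample = [poisson(lam, rng) for _ in range(n)]
    crude_vals.append(crude(sample))
    rb_vals.append(rao_blackwell(sample))

m1, v1 = mean_var(crude_vals)
m2, v2 = mean_var(rb_vals)
# Both means sit near the true value e^(-2) ~ 0.135, but the
# Rao-Blackwellized estimator's variance is far smaller.
```

Running this, both estimators center on the true probability, while the conditioned estimator’s variance drops by more than an order of magnitude — the “rough guess into on-target estimate” improvement the theorem guarantees.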

3. Marie Maynard Daly

Marie Maynard Daly was a pioneer in the study of the effects of cholesterol and sugar on the heart and the first Black woman to earn a Ph.D. in chemistry in the United States. She was born in 1921, at a time when minority women often were denied educational and employment opportunities, but she didn’t allow prejudice to stop her pursuit of the sciences. By 1942, she had earned a bachelor’s degree in chemistry with honors from Queens College in New York. She went on to complete a master’s degree, also in chemistry, just one year later.

It was while earning her doctoral degree from Columbia University that Daly’s research really began to gel. She discovered how internally produced compounds help digestion and spent much of her career as a professor researching cell nuclei. Importantly, she discovered the link between high cholesterol and clogged arteries, which helped advance the study of heart disease. She also studied the effects of sugar on arteries, and cigarette smoking on lung tissue. Daly established a scholarship fund for Black students at Queens College in 1988. She died in 2003 [sources: Wong; Chemical Heritage Foundation].

2. Patricia Bath

Patricia Bath improved the vision of generations thanks to her invention of a laser probe for cataract treatment.

Born in 1942, Bath began racking up educational achievements early. She graduated from high school in only two years, then earned a bachelor’s degree from Hunter College and a medical degree from Howard University before accepting an ophthalmology fellowship at Columbia University. It was during this fellowship that Bath’s research uncovered some staggering statistics: When compared with her other patients, Black people were eight times more likely to develop glaucoma and twice as likely to go blind from it. She set her sights on developing a process to increase eye care for people unable to pay, now called community ophthalmology, which operates worldwide. Bath became the first African American to complete a residency in ophthalmology in 1973, and the first woman to join the ophthalmology department at UCLA in 1975.

By 1981, Bath was hard at work on her most notable invention, a laser probe that precisely removed cataracts in a less-invasive way. Using the laserphaco probe she devised, she was able to restore sight to patients who had been blind for as long as 30 years. In 1988, she became the first Black female doctor to receive a patent for a medical purpose. After her retirement in 1993, Bath continued to advocate for the medically underserved and focused on the use of technology to offer medical services in remote regions. She died in May 2019 after a short illness [source: Biography].

1. Ernest Everett Just

In 1916, Ernest Everett Just became the first Black man to earn a Ph.D. in experimental embryology from the University of Chicago, but perhaps his greatest legacy is the sheer number of scientific papers he authored during his career.

Just was born in 1883 and raised in Charleston, South Carolina, where he knew from an early age he was headed for college. He studied zoology and cell development at Dartmouth College in Hanover, New Hampshire, and worked as a biochemist studying cells at Woods Hole Marine Biological Laboratory in Massachusetts. He became a biology instructor at Howard University before finishing his Ph.D., and would spend 20 summers also working at Woods Hole. From 1920 to 1931 he was awarded a biology fellowship by the National Research Council. Just pioneered research into cell fertilization, division, hydration and the effects of carcinogenic radiation on cells.

Frustrated that no major American university would hire him because of racism, Just relocated to Europe in 1930. Once there, he wrote the bulk of his 70 professional papers, as well as two books. He died of pancreatic cancer in 1941 [sources: Biography; Genetics].

Originally Published: Feb 11, 2014

Ridiculous History: When West Point Cadets Rioted Over Eggnog in 1826

A student staggers back to his dorm after a late night tavern visit with friends, hoping to sneak in unnoticed. If caught, he’ll be arrested on the spot — again.

But as he aims for campus, a massive ravine as deep as a five-story building materializes out of seemingly nowhere and he tumbles out of sight. His friends yell blindly into the darkness, urging him to answer if he’s not dead. As luck — and booze — would have it, he doesn’t really feel a thing.

It wasn’t the first time Jefferson Davis — West Point class of 1828 graduate and future president of the Confederacy — had slipped away from his post at the military academy to get drunk, but it was the first time the plan nearly ended him.

Not long after Davis’ unplanned spelunking trip, talk turned to throwing an epic, eggnog-fueled Christmas Eve party, and naturally, he was all in.

Let the Eggnog Riot Begin

On Christmas Eve 1826, at least 70 cadets got rip-roarin’ drunk on eggnog, assaulted two officers and nearly destroyed the North Barracks. They broke windows, threw furniture, shattered plates and even tore banisters from stairways. Their noisy antics drew the attention of officers assigned to guard against such shenanigans. A subsequent surprise inspection of student quarters revealed a “Where’s Waldo” of drunk cadets: sloshed revelers poorly hidden under blankets and behind hats.

And in the hours that followed, the booze made them brave, so much so that they grabbed weapons and threatened to kill their superiors. One officer was threatened with a sword and hit with a piece of wood; another was shot at.

Alcohol was strictly forbidden at West Point in the early 1820s. The military academy, situated on the west bank of the Hudson River, was, after all, run by Colonel Sylvanus Thayer, a stern superintendent bent on instilling discipline. If a student were caught with alcohol, or simply under the influence of alcohol, expulsion and arrest weren’t far behind. Plus, West Point, just 50 miles (80.5 kilometers) north of New York City, had its reputation to consider.

Maintaining Military Order

When West Point accepted its first class in 1802, a mere 10 students assembled in a handful of haphazard buildings. New students interested in joining the ranks were admitted — at any time throughout the year — with few questions. Then came the War of 1812, and Congress, hungry for military success, installed Thayer to whip the academy into shape.

By 1826, Thayer had done just as he was commanded.

Until Christmas Eve, that is. That’s when students broke out their secret stash of liquor: about four gallons (15 liters) of the cheapest whiskey they could find. They’d lugged it across the Hudson River and bribed a guard to bring it onto campus, where they hid it among their personal effects. Imagine: whiskey in boots, coat pockets, under mattresses and blankets, until the moment it was hastily mixed with eggs, milk and a few spices to become eggnog — the Colonial equivalent of a Jaeger Bomb.

“There are a lot of different theories as to how eggnog came about, but there’s a solid consensus that Medieval Europe played a large role in its creation,” says Cyrus Roepers of Arousing Appetites, a food blog focused on recreating traditional recipes from cuisines all over the world. “Many believe that eggnog is an offshoot of an old drink called posset, which is hot milk curdled with wine or brandy, and some added spices.”

Originally, says Roepers, posset was the preferred drink of the Old World’s 1 percenters. But it didn’t take long before this beverage of wealthy nobility became popular with the average person and hopped continents. As non-nobles in the New World began owning land and livestock, they started using readily available ingredients, like milk, eggs and liquor, to whip up their own grog.

“Brandy and wine remained a European luxury, however, so Americans replaced it with the much more available, cheaper rum, thanks to their Caribbean neighbors,” says Roepers.

And, as West Point cadets discovered, whiskey was an acceptable substitute, too.

AKA the Grog Mutiny

As the Christmas Eve 1826 eggnog riot stretched into Christmas morning, the revelry escalated. Students who weren’t busy dismantling the barracks or fist-fighting armed themselves with guns and swords in preparation for a battle with West Point’s artillerymen, who were expected to be summoned in an attempt to subdue them.

But then the eggnog’s effects began to wear off. Morning roll call revealed a corps staggering to line up, with many of the 260 cadets somewhere along the eggnog continuum of well-oiled to full-on hungover.

Thayer elected to censure only the most destructive revelers, and neither Jefferson Davis nor his compatriot, the future general Robert E. Lee, was among them. In the end, 19 cadets were expelled.

No word on whether they ever drank eggnog again. 

Now That’s Interesting

George Washington’s alleged personal eggnog recipe incorporated generous measures of three different types of booze — rye whiskey, rum and sherry.

Originally Published: Dec 7, 2015

What Are the 10 Largest Cities in the World by Population?

The world’s population has reached another milestone. There are now an estimated 8 billion people inhabiting this Big Blue Marble and that’s with our global population growing at its slowest rate since 1950. The majority of the world’s population — 56.2 percent — live in an urban area, and by 2030 that number is expected to increase to 70 percent of the world’s population.

Among these ever more populous urban environments are “megacities,” the most populated cities in the world. According to the United Nations, a megacity has a population of 10 million or more. Currently, there are fewer than 37 megacities in the world, with that number expected to rise to 41 by the year 2030.

So, here are the largest cities in the world by population, according to WorldAtlas, from Tokyo, the largest, to another Japanese city, Osaka, the 10th-largest:

1. Tokyo, Japan

Topping the list of world city populations is Tokyo. With a population of 37,274,000, it has become the most populous city in the world. Although home to an astounding number of citizens, Tokyo’s population growth has slowed during recent years, with recorded declines ranging from a 0.09 percent drop from 2018 to 2019 to a 0.18 percent drop from 2021 to 2022. Experts point to an aging population paired with low birth rates as two of the reasons behind the slowed growth; another reason may be that the Japanese government implemented tax incentives for Japanese companies to move from Tokyo to less-populated prefectures and offered subsidies for citizen-employees to relocate along with them.

2. Delhi, India

Delhi, India, has a population of 32,065,760. This second-place ranking among the globe’s biggest cities, however, may be short-lived. Some population growth estimates forecast that Delhi’s population will reach a potential 56.4 million people by the year 2028. Delhi is home to several attractions that have each been designated as a World Heritage Site, including the Red Fort Complex, named for its red sandstone walls.

3. Shanghai, China

Shanghai, China, has a population of 28,516,904 and is forecast to enter a period of significant population growth over the next few years. Since 2018, Shanghai has recorded year-over-year population growth ranging from 2.59 percent to 2.87 percent. This means Shanghai is on track not only to secure its future ranking among the world’s top three most-populated cities, but to eventually vie for one of the top two spots. Located in east central China, Shanghai is a commercial powerhouse that is home to one of the world’s largest seaports.

4. Dhaka, Bangladesh

With 22,478,116 people living in Dhaka, Bangladesh, it is skyrocketing through the rankings of the world’s most populous cities. It’s also one of the most densely populated cities in the world. As recently as 2018, Dhaka was at No. 7, but after several years of rapid population growth (more than 3 percent), Dhaka has reached the No. 4 spot on this list. Dhaka is located at the center of Bangladesh, which borders the Bay of Bengal in the Indian Ocean. The city serves as the country’s administrative and economic powerhouse. Bangladesh has been the site of intense growth since it became an independent country in 1971, and the population is expected to continue to soar.

5. São Paulo, Brazil

São Paulo, Brazil, has a population of 22,429,800 people. São Paulo comes in at No. 5 on this list of the world’s most populous cities, and takes the top spot among Brazil’s 110 urban areas. The city is located in southeastern Brazil where it is surrounded by valleys and foothills. It is known for being the largest city in the Southern Hemisphere and has gained a global reputation as a commercial and industrial center. In 2007, São Paulo officials banned outdoor billboards and banners in a move intended to preserve the city’s scenic views. Although the ban was initially met with resistance, just a few years later the ban was praised for its effects — chief among them the highlighting of previously overlooked architecture, including such buildings as the Municipal Theater of São Paulo.

6. Mexico City, Mexico

The population of Mexico City has grown more than 540 percent since 1950, and is now an estimated 22,085,140 people. Mexico City may be the sixth most populous city in the world, but it has another claim to fame that is far less welcome: Mexico City is sinking. During the 1900s, the capital city sank an estimated 29 to 36 feet (9 to 11 meters) because it sits atop an underground aquifer that is being continually depleted. Even after a 1958 ban on drilling new wells into the aquifer, the clay soil under Mexico City compressed at steady rates. Today, some sections of the city are sinking up to 19 inches (50 centimeters) a year.

7. Cairo, Egypt

Cairo, Egypt, has reached a new peak population with 21,750,020 people. While Cairo may be the world’s seventh-ranked city for population, the city — and all of Egypt — is forecast to achieve rapid economic growth through 2030, according to Harvard University researchers. Cairo is home to the Pyramids of Giza and is part of a region rife with historical archeological finds, including the recent discovery of mummies with gold-plated tongues. The mummies are estimated to date from 300 B.C.E. to 640 C.E. and were discovered at the Quweisna necropolis in the Nile delta, about 40 miles (64 kilometers) north of Cairo.

8. Beijing, China

Since 1975, the population in Beijing, China, has been rising — and is expected to continue to grow through at least 2035, according to a United Nations report. There are 21,333,332 people now living in Beijing, which is home to one of the largest and best-preserved architectural complexes of antiquity, known as the Forbidden City. The former imperial palace was first occupied in 1420 and is now a UNESCO World Heritage site that serves as a cultural museum. The Forbidden City is a 178-acre (72-hectare) complex that, among other significant characteristics, exhibits the traditional Chinese practice of feng shui.

9. Mumbai, India

At last count, Mumbai, India, was home to 20,961,472 people — a number forecast to grow well into the future. Mumbai is sometimes known by its old name, Bombay, or by its relatively new nickname, the City of Dreams. Mumbai earned the City of Dreams moniker for becoming the center of Bollywood, the name given to the Indian film industry. Bollywood produces more than 1,000 films per year. Mumbai also is the economic hub for India’s jewelry production.

10. Osaka, Japan

With a population of 19,059,856 people, Osaka, Japan, comes in at No. 10 on our list of the world’s most populated cities. Osaka’s population growth, however, is expected to decrease year-over-year through 2035. The decline is largely attributed to residents moving from the city to outlying suburban areas. Osaka is known as a foodie paradise, in large part because of its high concentration of Michelin star-rated restaurants. Osaka is also an important urban agglomeration for Japanese culture and is home to Osaka Castle, one of the oldest landmarks in the country.

Now That’s Interesting

What is the largest city in the world by square miles or square kilometers? New York City spans a whopping 5,395 square miles, or roughly 13,970 square kilometers, and is still densely populated. An estimated 2,050 people inhabit every square mile (2.6 square kilometers).

Why Is It Bad Luck to Break a Mirror?

After months of searching, you found the perfect apartment and it’s finally time to move. But just as you’re about to pat yourself on the back, something terrible happens: You trip over a crack in the sidewalk and the large, antique mirror you’re carrying slips from your grasp. Before you can even fully understand what’s happening, the mirror hits the concrete and cracks into hundreds of pieces. Your first thought? Well, we probably shouldn’t repeat it here. Your second? Seven years of bad luck.

But why the bad luck? Will breaking a mirror really heap misfortune upon your head? According to superstition, the answer is yes. Although the exact origins of the belief are unclear, potentially centuries-old lore holds fast to the idea that a mirror is a projection of one’s appearance — and one’s soul. Breaking a mirror would mean breaking the soul into pieces. The soul, now severely damaged, isn’t able to fully protect its owner from bad luck.

Or in an alternate explanation, the damaged soul seeks revenge against the one responsible for its injuries. The means of revenge varies, but often includes the loss of a close friend or the death of someone in the household [source: Radford]. The “seven years” part is likely due to ancient Romans believing the body renews itself every seven years [source: Drazin].

The idea that broken mirrors can bring bad luck most likely stems from the ancient Greeks, who believed spirits lived in reflective pools of water. In fact, the fate that awaited Greek mythological figure Narcissus may have grown out of this belief. Narcissus fell in love and it was his undoing; so besotted was he with his own reflection in still waters that he pined for himself (or, by some accounts, the visage of his late twin sister) until he died [source: Encyclopedia Britannica].

Regardless of the way it started, the notion that breaking a mirror brings bad luck is prevalent in cultures around the world, ranging from Greek and Chinese to Indian and American. Whether you subscribe to the superstition or not, breaking a mirror is bad news — if only because of the mess it creates.

Originally Published: Jun 17, 2015

7 Animals You Should Never Take Selfies With

Kittens, puppies. The occasional squirrel. Most of these animals make good partners for selfie pics. Sure, they don’t have the charisma of a cougar or the magnetism of a bear, but they’re much better choices — for obvious safety reasons.

So the next time your brows are so perfect that you can’t help but photobomb a wildlife scene, keep one thing in mind: “That’s not how this works. That’s not how any of this works.”

No more sweet snaps with bears, ‘mkay?

1. Bears

So many people have been taking selfies with wild bears that officials closed a Colorado park. Park managers at Waterton Canyon recreation area near Denver temporarily stopped people from entering the park until the bears went into hibernation. Similar Insta-worthy situations have been spotted in Yosemite and other parks.

2. Bison

A snap with a bison? It’s too close for comfort, say Yellowstone National Park authorities. Several people have been hurt after taking pictures with a bison in the background — some as near as 3 to 6 feet (1 to 2 meters) away. Countless more have just been lucky to get away with a shareable moment and not serious injuries.

3. Any Snake, Ever

Extreme close-ups with snakes sometimes turn out OK, but one California man wasn’t so lucky. He may lose his hand after taking a selfie with a rattlesnake — and then receiving a $150,000 hospital bill for anti-venom. Which hurts more?

4. Crocodiles

The freshwater crocodiles that sun themselves near pedestrians in a nature park in Australia’s Top End may not be as powerful as their saltwater cousins. They are, however, still deadly. Authorities have been warning tourists not to take selfies with crocs within arm’s reach. You can imagine why, even though many others apparently couldn’t.

5. Camels

You know that moment, right before it all goes wrong? Plenty of camel selfies have ended with a painful nip — and kick-happy camels have reshaped more than one photographer’s hat.

6. This Cat

While filming a plea for someone — anyone — to re-home her “lovely cat,” one woman was repeatedly bitten and scratched by the now-infamous feline. Turns out, the feelings are probably mutual.

7. Tigers

So many single guys were sneaking up on tigers to take Tinder profile snaps that the New York State Assembly actually passed legislation to prevent the pics. People caught photobombing a big cat will now be fined. No word on whether #guyswithtigers are more popular on dating sites. #doubtit 

Now That’s Interesting

Japanese photographer and Minolta employee Hiroshi Ueda developed a version of the selfie stick in the early 1980s, though Canadian inventor Wayne Fromm claims credit for popularizing the gadget in the smartphone era.

Looks and Acts Like a Hummingbird? Could Be a Hummingbird Moth

hummingbird moth on purple flower

You’ve spotted a 2-inch (5-centimeter) creature zipping from flower to flower, its wings moving so fast they are nearly an invisible blur. A hummingbird, you declare, feeling fortunate to catch a glimpse of this beneficial pollinator, long believed to be a harbinger of spring.

As you look closer, though, something doesn’t add up. Although it has the tongue-like proboscis of a hummingbird, this hovering creature is much smaller than any hummingbird you’ve ever seen and has antennae. Its tail is actually a bit of fluff that resembles feathers.

It’s not a hummingbird after all. It’s a hummingbird moth, a bug that looks like a bird.

There are 23 species of hummingbird moths, but only five are found in North America: hummingbird clearwing (Hemaris thysbe), snowberry clearwing (Hemaris diffinis), slender clearwing (Hemaris gracilis), Rocky Mountain clearwing (Hemaris thetis) and the white-lined sphinx moth (Hyles lineata).

While the territory of these five North American species ranges from Canada to Mexico, only two — the snowberry clearwing and the hummingbird clearwing — are commonly seen. Snowberry clearwings sport distinctive yellow and black coloring, with a horizontal black line running down each side of the body from the eyes to the tail, while hummingbird clearwings have a distinctive reddish-brown abdomen and an olive green back.

Hummingbird moths, like the hummingbirds they mimic, are adept at hovering and moving side to side or backward with helicopter-like precision. Although the body of a hummingbird moth takes on the cylindrical, barrel-chested proportions of a hummingbird, it is typically not as long. Hummingbird moths reach 1 to 2.5 inches (2.5 to 6 centimeters) at maturity, while hummingbirds are usually 3 to 4 inches (8 to 10 centimeters) in length. You’re also likely to spot a hummingbird moth’s six legs dangling as it hovers, while a hummingbird will tuck its pair of slender legs into its downy belly feathers as it sups.

If you’d like to increase your odds of spotting a hummingbird moth, plant a variety of flowers. Hummingbird moths, like hummingbirds, feast on the nectar of a variety of long-necked flowers, such as trumpet creeper (Campsis radicans) or cardinal flowers (Lobelia cardinalis). Sometimes you may see a hummingbird and a hummingbird moth dining on the nectar of the same flowers; the two species often amicably share the same territory.

When it comes to leaving a legacy, however, hummingbird moths prefer specific plants like honeysuckle, dogbane and hawthorn, and trees like cherry and plum on which to lay their eggs. When the eggs hatch, the caterpillars — the next generation of hummingbird moths — can dine on their ideal host plants. Although you may first see hummingbird moths in the spring, they are likely to make a reappearance in mid to late summer as their preferred flowers pump out more nectar.

Now That’s Cool

The hummingbird moth family made history in 1991 when one of its members starred in “The Silence of the Lambs.” Central to the story was the death’s-head hawk moth, several of which were featured throughout filming. The death’s-head hawk moth, a fellow sphinx moth, has a distinctive skull-like mark on its thorax.

Ridiculous History: Beloved Author Roald Dahl Was Also a Suave British Spy

Black and white photo of author Roald Dahl

“And above all, watch with glittering eyes the whole world around you, because the greatest secrets are always hidden in the most unlikely of places.” Roald Dahl penned these words for “The Minpins,” the last of the 34 children’s books he wrote between 1943 and his death in 1990.

But unlike in this excerpt, there was nothing fictional about Dahl’s search for secrets. During World War II, the soon-to-be-beloved author of books including “Charlie and the Chocolate Factory,” “Danny, the Champion of the World” and “The Witches” served as a fighter pilot and an officer in the British Royal Air Force. Following that, he assumed a lifestyle reminiscent of superspy James Bond, joining a secret organization based in the United States known as the British Security Coordination.

That spy network’s primary goals were to offset Nazi propaganda while protecting the interests of the United Kingdom. So, while Dahl dreamed up imaginative children’s stories, he also lived as a secret intelligence officer under the cover of working a public relations job at the British embassy in Washington, D.C.

By all reports, he was both very good and very bad at it. Dahl was especially talented at being a ladies’ man, a skill that came in handy when convincing politicians and heiresses alike to part with closely guarded secrets. One biography described his romantic skill with particularly colorful language, following reports that Dahl had affairs with, among others, Millicent Rogers, heiress to the Standard Oil fortune, and Clare Boothe Luce, an influential congresswoman who later became an ambassador and foreign affairs advisor to presidents Nixon and Ford.

But as good as he was at the “sleeping” part, Dahl came up short at keeping secrets. According to his daughter Lucy, Dahl was a prolific gossip. “Dad never could keep his mouth shut,” she’s quoted saying in Donald Sturrock’s 2010 biography “Storyteller: The Life of Roald Dahl.”

Despite his penchant for spilling the beans, Dahl did manage to come across some interesting intelligence at cocktail parties — or perhaps it was thanks to his bedside manner. As early as 1944, he’d uncovered early U.S. talk of landing a man on the moon. He also reportedly believed rumors that Franklin D. Roosevelt and the Norwegian crown princess Martha were having an illicit affair (a claim most historians discount), passing that information along with other intelligence directly to Winston Churchill.

Despite his Bond-style role in world affairs, Dahl will probably always be best remembered as one of the greatest children’s storytellers of all time. Many of his children’s books have been turned into movies, including “The BFG,” the tale of a friendly giant who befriends a young girl, then races against time to protect her from danger. It’s just the sort of story Dahl could truly appreciate.

Now That’s Cool

In a piece of real life intersecting with fiction, Roald Dahl wrote the screenplay for the 1967 James Bond thriller “You Only Live Twice.” 


The Surprisingly Radical History of Mother’s Day

silhouette of woman in bonnet with family playing in a field

Mother’s Day, one of the most widely celebrated holidays in the world, has become an unstoppable tradition. Whether it’s a set of earrings or a dozen roses, few can imagine allowing a Mother’s Day to come and go without giving Mom a gift.

In 2022, 84 percent of Americans planned to celebrate Mother’s Day, spending an average of more than $245 on gifts. This figure, which has grown without fail for decades, doesn’t even count handmade perks like breakfast in bed. The top three gift categories were greeting cards, flowers and special outings [source: National Retail Federation].

How did the second Sunday of May become a milestone to mark the contributions of mothers in the first place? In the beginning, it wasn’t all sweetness and light. In fact, it was far from a feel-good holiday designed to celebrate women and how they care for their families.

Mother’s Day was built on radical ideals. It was an international movement meant to change the world, one that began through the collective efforts of influential women who sought to free the world from injustice and warfare.

The modern concept of Mother’s Day grew out of a seed planted in 1858, when Ann Reeves Jarvis began organizing Mothers’ Day Work Clubs to combat the disease-causing conditions endured by Appalachia’s poorest workers. She believed too many of the workers’ children were dying from illnesses brought on by filthy conditions, so on the advice of her physician brother, Jarvis taught mothers how to boil water for drinking and keep food from spoiling. The practical nature of her Mothers’ Day Work Clubs served as a model for nearby towns, and by 1860 the idea had spread across what is now West Virginia [source: The Library of Congress].

Just as Jarvis’ concept was gaining traction, her attention was drawn to another challenge. The American Civil War, which would play out from 1861 to 1865, was erupting right in her front yard.

Mother’s Day as a Political Movement 

By the 1860s, the Mothers’ Day Work Clubs launched by Ann Reeves Jarvis had become successful, but now she faced another complication. The area near her West Virginia home was a pivotal stop on the Baltimore and Ohio Railroad during the Civil War [source: West Virginia Division of Culture and History]. Caught in a storm of rising tensions deep in this crossroads of the Civil War, Jarvis took a stand — by not taking a stand.

She insisted the Mothers’ Day clubs become neutral ports in a sea of political differences. Jarvis and other club members fed and clothed Union and Confederate soldiers alike, treated their wounds and, just as they’d done with Appalachia’s poorest workers, taught life-saving sanitation methods.

It was a passion that grew out of Jarvis’ own tragedies. Years earlier, four of her children had died from contagious diseases such as diphtheria, scarlet fever and whooping cough [source: Tyler-McGraw]. Their deaths would be the first of many blows: Four more of her 12 children would be buried before reaching their 18th birthdays.

Even the end of the Civil War in 1865 did not bring peace. Tensions increased as Union and Confederate soldiers returned from war and found themselves occupying the same space in Grafton, still seething with bitter hostility. Jarvis once again took action. She organized a Mothers’ Friendship Day at a local courthouse. Although she publicized the event as a way to honor mothers, its real purpose was to bring together a fractured community by gathering battle-worn soldiers and their families — whatever side they had been on.

Amid rumors the event would erupt in violence, Jarvis’ Mothers’ Friendship Day opened with a prayer and a band that played “Should Auld Acquaintance Be Forgot.” According to an account by Sen. John D. Rockefeller, “By the time they reached the word ‘forgot,’ neighbors were weeping and shaking hands.” Mothers’ Friendship Day became an annual pacifist event [source: The Library of Congress].

Modern Mother’s Day

Ann Reeves Jarvis was one of several postwar influencers who promoted peace. In 1870, abolitionist Julia Ward Howe also took a stand, notably as author of a well-publicized “Mother’s Day Proclamation” urging women to promote peace through political means.

Howe, who had written the Civil War anthem “Battle Hymn of the Republic” years earlier, had become a pacifist after living through the Civil War (in which 620,000 soldiers died) and reading accounts of the Franco-Prussian War that followed [source: Kohls]. Howe and other women organized events driven by their pacifist leanings, including Mother’s Days for Peace, which were held annually in various locations across the U.S. on June 2 [source: Rosen]. These eventually fell out of favor.

By the time of Jarvis’ death in 1905, her daughter Anna was ready to take up the cause. She held a church service on the second Sunday in May 1908 to honor her mother. Mrs. Jarvis had once said that she hoped there would one day be a memorial day for mothers, adding, “There are many days for men, but none for mothers.”

The custom spread, as Anna publicized it through letters to newspapers and politicians. She lobbied enthusiastically to institute a national holiday that would personalize the mothers’ movement by encouraging sons and daughters to honor their own mothers. In 1914, then-U.S. President Woodrow Wilson officially named the second Sunday in May as Mother’s Day [source: Pruitt].

Almost as soon as the ink dried on the proclamation, merchants began advertising candies, flowers and greeting cards to commemorate the day. This disturbed Anna, who believed the holiday was being corrupted, straying from her vision of an intimate celebration of one’s own mother.

In the years that followed, she tried to reverse the commercialization of Mother’s Day, spending her sizable inheritance, along with her energy, on boycotts and lawsuits against groups that violated the spirit of the day. In 1923, she crashed a confectioners’ convention. In 1925, she protested the American War Mothers convention, which used Mother’s Day as a fundraising event by selling carnations. She was arrested for disturbing the peace [source: Handwerk].

Her efforts, though nearly impossible to ignore, went largely unanswered. She died penniless in a sanitarium in 1948, having no children of her own. Mother’s Day continued to gain momentum. Today, it’s an international holiday celebrated in 152 countries [source: The Library of Congress].

A Museum and a Shrine
For 20 years, Ann Reeves Jarvis taught Sunday school at the Andrews Methodist Episcopal Church in Grafton, W.Va., which is now the International Mother’s Day Shrine. Since 1908, when the first Mother’s Day service was held, the building has been home to a celebration for mothers. Later, Jarvis’ home in Grafton (also the birthplace of her daughter Anna) became the Anna Jarvis Birthplace Museum. Notably, the home once served as the headquarters of Gen. George McClellan during the Civil War [sources: Anna Jarvis Birthplace Museum, International Mother’s Day Shrine].

Author’s Note: What does Mother’s Day have to do with the Civil War?
I had no idea. Yes, I’d heard of Ann Reeves Jarvis, her daughter and Julia Ward Howe. But Mother’s Day as a radical movement meant to disseminate peace? For me, that was a first — and one that changed my perspective. I’ve always thought of Mother’s Day as a perfunctory holiday. First, as an excuse to honor my mom (very deserving, by the way), then as a holiday for myself that is more often spent at graduations or ball tournaments than something specifically meant for me. Knowing its roots, I’m ready to rethink my celebration.

How Deadheading Helps Flowering Plants Flourish

hand with scissors cutting stem of pink rose

You’ve taken multiple trips to the plant nursery, selected a variety of plants and can already envision how they’re going to brighten up your flower beds throughout the spring and summer. But soon enough (too soon, in fact) these colorful additions lose their luster and you find yourself surrounded, not by the gorgeous landscape you’d planned, but by faded and dead blooms. Now what?

Before you throw those gardening gloves in the trash right along with your dreams of a beautiful botanical space, take a beat. There is a solution … and it is way simpler than you might think.

Deadhead. No, we’re not referring to those diehard fans who once traveled the continent seeing the Grateful Dead as many times as possible. Deadheading is the process of manually removing a spent bloom, whether on an annual or perennial plant, and it not only preserves the beauty of your plants, but encourages them to look their best for longer.

How to Deadhead, and Why
To deadhead is to do just as it sounds: remove the dead “head” — or blooming portion — of a plant. Often, this means using one’s thumb and forefinger to pinch and remove the stem of a spent bloom. For some tough-stemmed plants, however, garden snips or pruning shears may be needed. A sprawling mass of ground cover can even be deadheaded with the careful sweep of a somewhat indelicate garden tool, such as a weed eater.

“Remove the spent blossom as close to the larger main stem as possible because this helps you avoid leaving behind unattractive and flowerless stems,” says Erinn Witz, a garden expert and the co-founder of Seedsandspades.com, an educational website for people in any stage of their gardening experience. “What you want is a clean break in the stem, not a break from a pulling or twisting motion.”

In general, flowering plants are resilient, so if you are unsure where to remove a dead flower, simply pinch off the stem right under the flower. The important thing is that the faded bloom is no longer attached to the plant.

“Deadheading is basically stopping a plant from developing fruit and seeds so the energy is refocused on making more flowers,” emails Charlotte Ekker Wiggins, a University of Missouri master gardener emeritus and blogger at Gardening Charlotte.

After a plant blooms, it usually suspends the process of making new flowers so it can put its energy into forming seeds. Deadheading not only enhances a flowering plant’s performance by causing it to produce more blooms, but it can keep its form shapely and compact. Deadheading has the added bonus of removing the faded and browning blooms from view, so even as you wait for plants to rebloom you are rewarded with greenery.

Not all flowers require deadheading, according to Fiskars. Most bulbs produce only one round of flowers per season, as do flowers such as peonies and liatris, and therefore don’t need deadheading. Most flowering vines, periwinkles and impatiens don’t need it either. Here is a list, by no means exhaustive, of some annuals and perennials that will benefit from deadheading:

  • Zinnias
  • Hardy geraniums
  • Cosmos
  • Marigolds
  • Delphinium
  • Snapdragons
  • Marguerite daisies
  • Petunias
  • Blanket flowers
  • Roses
  • Sweet peas
  • Bee balm
  • Campanula
  • Salvia

Just a Pinch, But How Often?
There’s one rule of thumb you need to know about deadheading, and it’s catchy enough to remember from year to year: early and often. The idea is to begin deadheading in the spring, right after the first blooms are spent. Every few days, tour your yard and observe any plants that bloom, then spend a few minutes removing faded or dead flowers. If you wait until late summer or early fall, the process will probably become overwhelming because of the sheer quantity of dead blooms that need your attention.

“Annual flowers that need to be replanted every year and perennial flowers that live more than two years will produce more flowers if they are deadheaded on a regular basis,” says garden designer Joanna VonBergen from GinghamGardens.com, in an email.

If you are growing geraniums or another type of flowering annual, it’s likely that deadheading the blooms will encourage them to reflower throughout the season. But, even amongst those plants that benefit from deadheading, there can be differences in where to cut them back.

“How you deadhead depends on the flowering plant,” says Chey Mullin, flower farmer and blogger at Farmhouse and Blooms, in an email. “Some plants require deadheading of the whole stem. Other plants benefit from a light pruning of spent blooms just back to the center stem. Then others only require the spent bloom to be removed just under the flower, such as with daylilies.”

There are, however, some exceptions to the deadheading rule. Some perennials, like peonies, won’t bloom again even if you deadhead them. And some, such as the ‘Autumn Joy’ sedum, a stonecrop, will reward you with interesting seedheads throughout the winter if you don’t lop off their blooms. Hollyhocks and foxgloves are good examples of perennials that should be left alone after blooming so they can produce and drop seeds for the next growing season.

Now That’s Interesting
Need another good reason to deadhead? When you remove the faded blooms after a plant’s first flowering of the season, you’ll be rewarded for your efforts with a second bloom — and this second bloom will usually last much longer than the first. Instead of sending its energy and nutrients into making seeds, the plant will focus solely on new blossoms.

Which Is the Sweetest Grapefruit — White, Red or Pink?

Halved grapefruits

Grapefruit can be red, pink or white and, while it is definitely known for its pucker power, can range from intense to mild in flavor. Regardless of the type you choose, grapefruit is known for its balance of tart and sweet, and is packed with a host of health benefits.

With a hefty dose of vitamins A and C in each juicy segment, grapefruit can help boost immunity and has even been found to lower blood pressure and be beneficial in lowering triglyceride and “bad” LDL cholesterol levels. And, because grapefruits are more than 90 percent water, adding them to your daily diet can also boost hydration.

Grapefruit is usually easy to find, too. Once available primarily during its growing season, which runs October through June, grapefruit now can be purchased year-round in most U.S. markets. When selecting grapefruit, regardless of variety, look for the heaviest fruits because they tend to be the juiciest. Grapefruit that is fully ripe, without green color on the skin and without soft spots, will likely taste best.

Grapefruit has long been a popular fruit eaten for breakfast, but keep in mind that, no matter which variety you choose, the potential of this versatile superfruit reaches far beyond the morning.

Red Grapefruit
Possibly the most popular grapefruit variety — and one of the most widely available — is the red grapefruit. Although sometimes simply labeled as “red grapefruit,” there are several cultivars, or varieties, including Ruby Red and Rio Red, both of which top the grapefruit chart for sweetness. Generally, red grapefruit will be sweeter than pink or white grapefruit, although there will always be some exceptions. The rich scarlet color of the red grapefruit’s pulp comes from its high levels of lycopene, an antioxidant that helps protect the body against chronic diseases and environmental toxins.

“Red grapefruit are an excellent source of vitamin A,” says Jessica Randhawa, owner and head chef of The Forked Spoon, which features grapefruit as an ingredient in everything from cocktails to ceviche.

In addition to its sweet taste, red grapefruit tends to have a thinner skin than other types, maximizing the amount of tender, seedless fruit inside.

Pink Grapefruit
Rosy-colored pink grapefruit tastes similar to red grapefruit but offers a balance of flavors that is unequaled among fruit of its kind, according to Randhawa.

“The right balance of sweet and tart is found in pink grapefruit. Their flesh is typically very juicy and not sour. Not only do they taste great, but these grapefruits are rich in vitamin C, fiber and vital antioxidants like beta-carotene and lycopene,” Randhawa says.

Although pink grapefruit — named for its blush-hued flesh — is typically not as sweet as red grapefruit, its complex flavor profile makes it ideal for eating solo, adding to salads or even perking up plain water. Like red grapefruit, pink grapefruit is generally easy to find and is typically available year-round in most U.S. markets.

White Grapefruit
White grapefruit, unlike its red or pink cousins, has pale, yellow-colored flesh. Its rind is thin — thinner than that of most oranges, in fact — and ranges from a distinctive green to light yellow, shifting to a darker yellow as the fruit ripens. White grapefruit is sometimes referred to as “yellow” or “gold” grapefruit for the yellow shade of its ripe skin.

Cultivated less for its flesh and more for its slightly bitter juice, the white grapefruit is a familiar ingredient in sodas and mixed drinks. “White grapefruits are the least sweet variety,” Randhawa says. “However, they possess an intense flavor that is good for making juices and syrups.”

The white grapefruit’s prized bitterness comes from its high acid content and from the thick layer of albedo that lies between the peel and the flesh. Peel or cut into a white grapefruit and you’ll encounter its bright, intense and acidic scent.

Notable Grapefruit Varieties
Although grapefruit may be categorized as red, pink or white, there are dozens of cultivars and related fruits that can be difficult to distinguish from each other.

The white grapefruit is sometimes mistaken for the Oroblanco grapefruit. Although the two are members of the citrus family and look very much alike, the similarities end when it comes to parentage and flavor. The Oroblanco and the white grapefruit are different species: The Oroblanco is part pomelo — a mild citrus fruit indigenous to Southeast Asia that can grow to the size of a watermelon — and part grapefruit.

“Oroblanco, which means ‘white gold’ in Spanish, has a thick rind, lemon-yellow skin and is almost seedless,” Randhawa says, but unlike the tart white grapefruit, the flesh of the Oroblanco “is juicy and sweet with little to no bitterness.”

Likewise, the Melogold grapefruit is not a “true” grapefruit, but a cross between a pomelo and a grapefruit. It has pale, yellow-tinged, sweet-tart flesh and a green-tinged exterior.

Now That’s Interesting
The pomelo and grapefruit look and taste similar, but are different fruits. Widely considered the largest citrus fruit in the world, the pomelo (Citrus maxima or Citrus grandis) can grow to the size of a cantaloupe or watermelon. Native to Southeast Asia, the pomelo is a direct ancestor of the grapefruit. Pomelos, when crossed with sweet oranges, produced the first grapefruits.