Light shines on classification system for stars


Every discipline has its secret handshakes: bits of knowledge or ritual that help distinguish those “in the know” from those outside the discipline.

This month, I am going to explain one of astronomy’s secret handshakes: the magnitude scale.

It started about 129 B.C., when the Greek astronomer Hipparchus created one of the first well-known star catalogs. He ranked the apparent brightness of the visible stars into six groups, with the brightest stars in the first-magnitude group and the dimmest in the sixth-magnitude group.

The stars in the second-magnitude group were on average about half as bright as those in the first-magnitude group, with each subsequent group about half as bright as the one before it. Around 140 A.D., Claudius Ptolemy used Hipparchus’s system when he wrote the Almagest, which became the standard astronomy textbook for the next 1,400 years.

In 1610, when Galileo turned his telescope to the heavens, a problem developed for this magnitude system. Galileo found stars with his telescope that were dimmer than sixth-magnitude.

These stars became new groups, the seventh- and eighth-magnitude. The magnitude scale had become open-ended. Today, with a pair of 50 mm binoculars you can see ninth-magnitude stars; with a 6-inch telescope you can see 13th-magnitude stars; and the Hubble Space Telescope lets us see 30th-magnitude stars.

By around 1850, the science of astronomy had grown to the point that it needed a more precise scale. It had been determined that first-magnitude stars were about 100 times brighter than sixth-magnitude stars, so in 1856 Norman Pogson proposed that a five-magnitude difference be defined as exactly a 100-to-1 brightness ratio, making a one-magnitude difference a ratio of about 2.512, the fifth root of 100.
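Pogson’s definition makes the conversion between a magnitude difference and a brightness ratio simple arithmetic: a difference of dm magnitudes corresponds to a ratio of 100 raised to the power dm/5. Here is a minimal Python sketch of that relation; the function names are my own, chosen only for illustration.

```python
import math

# Pogson's relation: a difference of dm magnitudes corresponds to a
# brightness ratio of 100 ** (dm / 5). Function names are illustrative only.

def brightness_ratio(dm):
    """How many times brighter the lower-magnitude object is, given a
    magnitude difference dm (remember: a larger magnitude means dimmer)."""
    return 100 ** (dm / 5)

def magnitude_difference(ratio):
    """Inverse: the magnitude difference corresponding to a brightness ratio."""
    return 2.5 * math.log10(ratio)

print(brightness_ratio(1))        # ~2.512, one magnitude step
print(brightness_ratio(5))        # 100.0, five steps, by definition
print(magnitude_difference(100))  # 5.0
```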

The Pogson scale had another problem. Because some first-magnitude stars were much brighter than others, a zero-magnitude group was needed. Members of this new group included stars such as Rigel, Capella, Arcturus and Vega. The bright end of the scale, like the dim end, had now become open-ended. The star Sirius has a magnitude of –1.5, the planet Venus can be as bright as –4.4, the full moon is about –12.5, and the sun is about –26.7. Note that magnitude expresses a relative difference in brightness between two objects; Vega, with a value of 0, has often been used as the comparison star.

These values are called apparent magnitudes — how bright an object appears as seen from Earth. But apparent magnitude does not tell us how bright the object really is because it does not take into account how far it is from Earth.

The apparent magnitude of an object could be a large number because the object is intrinsically dim, or because it is a bright object at a great distance from Earth.

Therefore, astronomers use absolute magnitude to indicate the actual brightness of an object.

An object’s absolute magnitude is how bright it would appear if it were at a standard distance of 10 parsecs, or 32.6 light-years. For example, the apparent magnitude of the sun is –26.7, but its absolute magnitude is 4.85.
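The conversion from apparent to absolute magnitude uses the standard distance-modulus relation, M = m − 5 · log10(d / 10 parsecs). Here is a small Python sketch applying it to the sun’s figures quoted above; the inputs are approximate, which is why the result lands near, rather than exactly on, 4.85.

```python
import math

# Distance-modulus relation (standard astronomy formula):
#   M = m - 5 * log10(d / 10), with d in parsecs.
# A sketch using the sun's approximate figures from the column.

PARSECS_PER_AU = 1 / 206_264.8  # one astronomical unit expressed in parsecs

def absolute_magnitude(apparent_mag, distance_pc):
    """How bright the object would appear from the standard 10-parsec distance."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# The sun: apparent magnitude about -26.7 at a distance of 1 AU.
print(absolute_magnitude(-26.7, PARSECS_PER_AU))  # ~4.87, close to the 4.85 quoted above
```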

So why is this the secret handshake? If you think about it, the stellar magnitude scale is counterintuitive. Usually when you count or measure something, a larger number means a greater amount. For example, 100 dollars is more money than 10 dollars. But the stellar magnitude scale works the other way around: a larger value means less brightness, and a smaller value means more brightness. Dim stars have large values, while the brightest objects have negative values.

So that’s the secret handshake. You are now one step closer to being an astronomy insider.

Marty Scott is the astronomy instructor at Walla Walla University, and also builds telescopes and works with computer simulations. He can be reached at marty.scott@wallawalla.edu.
