Income Distribution
Income distribution, the apportionment of total national income among all the individuals and families in a country, is an issue closely tied to the way business operates within a society. In any market economy, it is business that generates most personal income, not only through wages and benefits but through interest, dividends, and stock appreciation as well. The distribution of income is therefore a central concern of business ethics, bearing both on the fairness of particular business practices and on the overall contribution of business to the well-being of society.
Income distribution is generally measured in one of two ways. The simpler way is to divide a society’s total income into segments, such as tenths or fifths, based on either per capita or family income. These slices can then be compared with one another, either at a given point in time or over an extended period. Thus one can ask, for example, how much income growth the top 10% experienced over a decade compared with the bottom 60%. A more mathematically sophisticated measure is the Gini coefficient, which gives a single number indicating the income distribution of an entire society. This coefficient ranges from 0 (perfect equality) to 1 (all income is received by a single individual), and it is especially useful for comparing distributions between nations.
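To make these two measures concrete, the following is a minimal Python sketch that computes quintile shares and the Gini coefficient. The income list and names are purely illustrative assumptions, not data from any survey; real analyses rely on large survey datasets.

    # Minimal sketch of the two measures described above. The income list is
    # hypothetical; actual studies use large survey datasets.

    def quintile_shares(incomes):
        """Share of total income received by each fifth of the population,
        from the bottom fifth (index 0) to the top fifth (index 4)."""
        ordered = sorted(incomes)
        n, total = len(ordered), sum(ordered)
        return [sum(ordered[q * n // 5:(q + 1) * n // 5]) / total
                for q in range(5)]

    def gini(incomes):
        """Gini coefficient: 0 is perfect equality; values approach 1 as
        all income is concentrated in a single individual."""
        ordered = sorted(incomes)
        n, total = len(ordered), sum(ordered)
        # Standard rank-weighted formula over incomes sorted in ascending order.
        weighted = sum(rank * x for rank, x in enumerate(ordered, start=1))
        return (2 * weighted) / (n * total) - (n + 1) / n

    incomes = [18_000, 26_000, 41_000, 62_000, 153_000]  # hypothetical households
    print(quintile_shares(incomes))  # share of total income held by each fifth
    print(round(gini(incomes), 3))   # single inequality number for the whole list

On this toy list the coefficient works out to about 0.41, in the same neighborhood as the Census Bureau figures discussed below.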
The social and economic significance of income distribution depends on its interaction with other trends in a particular society, especially changes in the absolute level of average or median income. Increasing disparity is less significant in an environment in which most individuals are experiencing growing incomes than in one where median income stagnates or even declines, since the latter situation is more likely to weaken social solidarity and political unity. Over the past generation, the United States has experienced this second, more contentious, set of circumstances. According to the Census Bureau, the Gini coefficient for the United States, which held fairly stable at about 0.40 between 1967 and 1977, has risen steadily since then and hit 0.46 in 2000. Furthermore, figures compiled by the World Bank put American society at the high end of income inequality among industrialized nations. U.S. income is more unevenly distributed than income in Japan, Korea, Taiwan, Australia, Canada, all of Europe (including Turkey), and India, though its distribution remains more equal than those of China, Hong Kong, Singapore, and a number of poorer countries.
This rising inequality in income distribution in the United States coincides with the nation’s weakest generation of average income growth. Until the mid-1970s, average compensation tended to track increases in productivity over time. During the postwar generation between 1947 and 1973, for example, both productivity and average family income grew 103% in inflation-adjusted dollars. In contrast, between 1973 and 2003, productivity grew 71%, while family income increased only 22%. Furthermore, according to figures compiled by the Department of Labor, average hourly compensation (adjusted for inflation) has actually declined slightly since 1973 for the four-fifths of the population not working in professional or managerial occupations.
Some have argued that tracking compensation over time produces unduly pessimistic results because income numbers do not adequately capture the improved quality of life that comes from innovation. This argument, however, fails to recognize how important innovation has been throughout almost all of American history and how it has traditionally occurred side by side with wage increases. The postwar generation not only doubled its pay but also saw the introduction of television, transistors, commercial jets, home air conditioning, plastics, new medical treatments, and a variety of other product breakthroughs. It would be difficult to argue that a business sector that generates new products but does not share the financial gains from productivity improvements with most of its employees contributes as much to society as a business sector that does both.

The issue of fairness in income distribution has been highlighted in recent years by the well-publicized relative rise in executive compensation in the United States. In 1978, the average CEO earned about 35 times the salary of the average worker. This ratio doubled by 1989, just as the bull market started on Wall Street, and then hit 300:1 in 2000, as the market peaked. The ratio has since come down, to 185:1 in 2003, but it remains more than five times what it was a generation ago, a period when American business was the envy of the world.
A number of explanations have been offered for this divergence in income. These include the following: increasing returns to certain kinds of vital technical and professional knowledge, weakening of union power to organize workers or bargain effectively, subcontracting of work to less generous employers, movement (or even the threat of movement) of manufacturing overseas, the declining value of the minimum wage, less generous benefits for workers and retirees, and a growing reluctance among employees to demand a raise in an era in which downsizing has become routine. Others place the responsibility on new ways of compensating executives, such as stock options, that tend to reward short-term cost cutting, including cutting the cost of labor. While all of these explanations appear at least somewhat plausible, researchers find that none of them alone can explain either the timing or the magnitude of this sea change in American income distribution, suggesting that a number of factors share responsibility for it.
Whatever the precise causes, some people find this trend problematic from an ethical perspective. Income divergence raises important questions about fairness and organizational commitment when the benefits of success predominantly accrue to a few. For ethicists influenced by Rawls, such a trend threatens to violate his rule of fairness—that gains to the productive few should not be at the expense of the least fortunate. Libertarians, on the other hand, would be more cautious about making assumptions regarding the ethical implications of increasing income inequality. Unless inequality is generated by coercion or fraud, or by favoritism on the part of government toward one group at the expense of another, libertarians view fluctuations in the relative fortunes of different individuals as a normal part of the operation of markets as the demand for various skills, experiences, and professions shifts over time. Nonetheless, many business ethicists have argued for two decades that corporate executives need to honor and preserve implicit social contracts and, following Kant, to treat employees as stakeholders having ends of their own. Implementing such advice requires grappling with the reality of a diverging distribution of income.
What Was the Most Expensive Art Theft in History?
Last week’s theft from the Paris Museum of Modern Art saw criminals make off with more than $1 million in paintings, according to news reports. But although works by Picasso and Matisse disappeared in the Paris robbery, that heist doesn’t even compare to the Gardner Museum and Mona Lisa jobs.
There is some debate over which heist has earned the title of the most expensive one in history, said Alice Farren-Bradley, a stolen-art recovery specialist at the Art Loss Register, a firm that maintains the world’s largest database of stolen and missing art.
The debate revolves around the 1911 theft of the Mona Lisa. That robbery would undoubtedly rank as the most expensive art theft of all time, but since the painting is essentially priceless, no one can assign it a value for comparison with other heists, Farren-Bradley said.
Setting aside the Mona Lisa, the most expensive art robbery in history is the March 18, 1990, robbery of the Isabella Stewart Gardner Museum in Boston, according to the FBI. In that job, thieves made off with almost $300 million in paintings, including works by Vermeer, Rembrandt, and Monet.
While that high dollar figure may make art robbery seem as lucrative as it does in movies like “The Thomas Crown Affair,” real art thieves rarely make big money off their crimes, said Farren-Bradley. Since the stolen pieces of art are easily recognizable, fencing them proves prohibitively difficult.
“Often, it seems that the organization of theft is rarely accompanied by an equal amount of planning about what to do once the criminals have the painting in their hands,” Farren-Bradley said. It would be nearly impossible to sell these works on the open market.
For that reason, many art thefts end with the robbers trying to sell the paintings back to the people or institution they stole them from, Farren-Bradley said. In other cases, if the paintings aren’t recovered, the robbers either destroy them to cover their tracks, trade them for guns or drugs on the international black market, or sell them to unsuspecting low-level dealers as imitation paintings, she said.
Public Art
In Manhattan’s Eighth Avenue/Fourteenth Street subway station, a grinning bronze alligator with human hands pops out of a manhole cover to grab a bronze “baby” whose head is the shape of a moneybag. In the Bronx General Post Office, a giant 13-panel painting called Resources of America celebrates the hard work and industrialism of America in the first half of the twentieth century. And in Brooklyn’s MetroTech Center just over the Brooklyn Bridge, several installations of art are on view at any given time—from an iron lasso resembling a giant charm bracelet to a series of wagons that play recordings of great American poems to a life-sized seeing-eye dog that looks so real people are constantly stopping to pet it.

There exists in every city a symbiotic relationship between the city and its art. When we hear the term art, we tend to think of private art—the kind displayed in private spaces such as museums, concert halls, and galleries. But there is a growing interest in, and respect for, public art: the kind of art created for and displayed in public spaces such as parks, building lobbies, and sidewalks.
Although all art is inherently public—created in order to convey an idea or emotion to others—“public art,” as opposed to art that is sequestered in museums and galleries, is art specifically designed for a public arena where the art will be encountered by people in their normal day-to-day activities. Public art can be purely ornamental or highly functional; it can be as subtle as a decorative door knob or as conspicuous as the Chicago Picasso. It is also an essential element of effective urban design.
The more obvious forms of public art include monuments, sculptures, fountains, murals, and gardens. But public art also takes the form of ornamental benches or street lights, decorative manhole covers, and mosaics on trash bins. Many city dwellers would be surprised to discover just how much public art is all around them, how much art they have passed by without noticing, and how much impact public art has on their day-to-day lives.
Public art fulfills several functions essential to the health of a city and its citizens. It educates about history and culture—of the artist, the neighborhood, the city, the nation. Public art is also a “place-making device” that instantly creates memorable, experiential landmarks, fashioning a unique identity for a public place, personalizing it and giving it a specific character. It stimulates the public, challenging viewers to interpret the art and arousing their emotions, and it promotes community by stimulating interaction among viewers. In serving these multiple and important functions, public art beautifies the area and regenerates both the place and the viewer.
One question often debated in public art forums is whether public art should be created with or by the public rather than for the public. Increasingly, cities and artists are recognizing the importance of creating works with meaning for the intended audience, and this generally requires direct input from the community or from an artist entrenched in that community. At the same time, however, art created for the community by an “outsider” often adds fresh perspective. Thus, cities and their citizens are best served by a combination of public art created by members of the community, art created with input from members of the community, and art created by others for the community.
Ancient Circle of Stones
Stonehenge is a very special monument in England. It’s said to be more than 5,000 years old. The “henge” in its name refers to circular structures from ancient times. In this case it refers to the circle of huge stones that stand upright at the center of the monument.
No one knows exactly why Stonehenge was built. Some people believe it might have been used as a device for predicting the movement of the Moon. Others think it was a temple for worshiping the sky or the Sun. Stonehenge includes the largest stone constructions in the British Isles. The monument’s biggest stones are arranged in the shape of a horseshoe and are surrounded by another big circle of tall upright stones. Originally all of these surrounding stones had stones on top, covering them like caps. Some are still capped. All of these stones are made of sandstone.
Beyond these stones is a circular ditch. Inside it stand several other stones, including the Altar Stone, the Slaughter Stone, and two Station Stones. The entrance to Stonehenge is on the northeast side. Outside it are the Heel Stone and a straight path called the Avenue.
The Stonehenge that you can see today is more like a ruin. Much of it has probably disappeared with time and with changes brought on by weather over thousands of years. Still, it is an awe-inspiring sight.
Taj Mahal
Several hundred years ago most of India was conquered and ruled by the Mughals, who followed the religion of Islam. When the emperor Jahangir ruled over northern India, his son, Prince Khurram, married Arjumand Banu Baygam.
Prince Khurram called his wife Mumtaz Mahal, meaning “chosen one of the palace.” The two were almost always together, and together they had 14 children. Prince Khurram became emperor in 1628 and was called Emperor Shah Jahan. But three years later Mumtaz Mahal died while having a baby. Shah Jahan was heartbroken. He decided to build the most beautiful monument to his wife. He had the best architects design it in a perfect blend of Indian, Persian, and Islamic styles. Beginning in about 1632, over 20,000 workers labored for 22 years to create what was to become one of the wonders of the world.
The great monument was called the Taj Mahal (a form of Mumtaz Mahal’s name). It was built in the city of Agra, India, the capital of Shah Jahan’s empire. Its several buildings sit in a large garden on the south bank of the Yamuna River. From the garden’s south gateway you can see the front of the white marble mausoleum. It contains the tombs of Mumtaz Mahal and Shah Jahan. The mausoleum stands on a high marble platform surrounded by four minarets, or towers. Many of its walls and pillars shimmer with inlaid gemstones, including lapis lazuli, jade, crystal, turquoise, and amethyst. And verses from the Koran (the Muslim holy book) appear on many parts of the Taj. Many visitors still come to the Taj Mahal. To help protect and care for it for many years to come, the Taj was made a World Heritage site in 1983.
Learning
Human learning has been the focus of organized study for many decades, and the results of this work have become ever more important as societies intervene on so many levels to promote and influence learning. Today there is no single way to define learning. Rather, there is a range of explanations, each of which provides an important frame of reference for thinking about learning as a human endeavor. Generally speaking, over the past 60 years three major conceptual frameworks have emerged, and these three will be the focus of this entry.
The first of these frameworks looks at learning in terms of observable behavior. In simple terms, learning is defined as any relatively permanent change in behavior that is not the result of normal growth or maturation. There is no limit to the range of behaviors that might be considered or the contexts in which they occur. When people drive automobiles or operate machinery, perform school tasks such as writing and calculating, or engage in social activities with others, they are generally exhibiting fairly complex behaviors that have been acquired over time and with much practice. People who study learning from a behavioral perspective want to know how these complex behaviors are acquired and how they change over time.
The second framework, which began to appear in the late 1960s and early 1970s, deliberately moved away from behavioral explanations, focusing instead on the information-processing activities that occur in the human brain. This movement, known as the cognitive revolution in human learning, developed largely around questions about memory and meaning. For example, when a person listens to people talk—or reads from text—how does he or she process and store what was heard or seen? How does the person represent information in memory for later use, and how does he or she gain access to the large amount of stored data? These and other related questions have dominated the study of learning for many decades, and they continue to be prominent in researchers’ thinking about learning. Thus, viewed as a cognitive activity, learning can be defined as the acquisition of knowledge and the ability to use knowledge to solve problems.
A third framework for investigating human learning began to attract notice during the early 1990s. In contrast to the cognitive point of view, in which learning is defined in terms of an individual’s computation of information, this framework focuses more on how people work and learn in cultural settings. Here learning is defined not as the acquisition of knowledge but as participation in meaningful social practices. Examples of cultural practices naturally include a broad range of activities, such as child rearing, office work, professional endeavors, trades, hobbies, and the like. As people participate in social practices, they develop roles relevant to their particular type of participation, and as these roles develop, people acquire identities as legitimate practitioners. One important distinction between this framework and the cognitive viewpoint is that, from the social perspective, learning is never separated from doing.
Looking across this history of progress in learning research, one is naturally tempted to focus simply on the latest prominent explanation, whichever one is currently enjoying the most attention. Certainly today there is very little discussion of behaviorism as a viable framework for understanding learning. In fact, the intention and purpose of the cognitive revolution was not simply to modify behaviorism but to replace it altogether.
It can be argued, however, that each of the three frameworks outlined here provides an important window on learning and allows us to see human learning as the rich and multifaceted phenomenon it really is.