Volume 2, Issue 1
By Mark Wisniewski
On his earliest exposure to the Arawak people of the Caribbean, Christopher Columbus himself wrote, “They do not bear arms, and do not know them, for I showed them a sword, they took it by the edge and cut themselves out of ignorance. They have no iron. Their spears are made of cane. They would make fine servants. With fifty men we could subjugate them all and make them do whatever we want” (Columbus).
The events that followed would prove that The Great Navigator had intentions beyond the altruistic and beyond discovery for its own sake. From the start, designs on conquering and displacing the original occupants of the western hemisphere were being formed, and contributing to those designs was the rapidly conceived notion that those occupants were vastly inferior.
This essay will explore the diminishing presence of the cultures of Native Americans during the years of settlement by Americans of European descent, with a focus on the contrasts between those Native American cultures and European cultures. Native American culture was far more advanced than it is commonly regarded as having been. Part of the reason Native American culture fails to receive adequate acknowledgement is that it disappeared so rapidly. This loss – and the loss of any moral lessons that could have been learned from an objective observation of the true American experience prior to 1492 – represents a monumental human tragedy.
Comparing “Us” and “Them”
Among the reasons many Americans view pre-Columbian Native American culture as primitive is the nature of the comparison that is generally used. For example, in modern times, western civilization, with all of its technology and individual liberty, is considered by many to be the most advanced civilization of all time. However, comparing eighteenth-century Native Americans to twenty-first-century Americans is unfair. Rather, a balanced comparison of cultures in the same time period is more appropriate. If one abandons the idea that technological progress, population growth, and conquest are the uncontested hallmarks of an advanced civilization, then what is left is the observation of family dynamics, social support, nourishment, prosperity, and community (Belic).
Without an objective examination of what a harmonious society looks like in a new context, the introduction of one culture to another can immediately bring to bear ideas of how these “new” people are different, and how they are weaker, and how they are other. This visceral feeling of other-ness infects a population like a virus. Left unchecked, it leads to a new kind of doctrine that has a destructive force. Throughout history, there are multiple examples of the lengths to which members of a conquering society will go to convince themselves and others that the people they are subjugating are socially or genetically inferior, or have somehow triggered the invasion and takeover through their own actions. For example, author James Bradley reports that when Japanese leadership decided that overpopulation of the Japanese islands in the 1930s necessitated the conquest of China, Emperor Hirohito ordered the Japanese forces to think of the Chinese people as subhuman. As Bradley puts it, “A dehumanized enemy is easy to kill, and Japanese soldiers were instructed that they were not dealing with humans at all but kichiku, or ‘devils.’ The idea of treating the Chinese as beasts was not informal scuttlebutt but a command from officers whose directives had to be considered orders of the emperor” (55).
Religion has also provided an escape hatch from the moral conflict of dehumanizing a culture’s enemies. Near the end of the 11th century, European clergy convinced Catholics that any atrocity committed against Muslims in the course of what came to be called The Crusades would earn entry into heaven. Setting aside the fact that Christ preached tolerance and acceptance in the New Testament, stamping out Islam was offered as the primary goal of the Catholic invasion, with the capture of the holy land from the Muslims taken as a fortunate consequence (Powell 666).
Similarly, in the Western hemisphere, religion provided Europeans with a reason to view the Native Americans as “other.” Since Europeans who settled America were nearly all Christians of one denomination or another, the discovery of Native Americans who had never been exposed to Christianity prior to 1492 presented Europeans with a quandary. Some chose to dismiss the incongruity as a natural consequence of America’s isolation from the landmass of the Christian holy land, while others concluded that Native Americans were savages who had actively denounced the Christian God in a direct affront. None, however, seems to have seriously considered the possibility that Native American religions could be equal or superior to their own. To fully explore this, it is important to consider the expansionist nature of some religions. Many denominations of Christianity and Islam possess a feature, either explicit or implicit, which promises significant reward to those who devote themselves to the spread of that religion, even by violence. One need only briefly scan images of war-tattered cities of Europe, Asia, and the Middle East to see the effects that expansionist clauses in religions can have on people and the planet they inhabit. Native American religion was rooted in harmony with all things, from the Earth itself to all its creatures. To explorers traveling across the ocean, a religion that did not promote its own expansion must have appeared both innocuous and inert (Johnson 1101).
Christianity has proven to be one of the most powerful substantiating forces for massacre in the Americas, such as in the case of Cortes’ Conquistadors and the Spanish Requirement of 1513, whereby the Natives were informed (in Spanish) that Spain had authorized them to take the land and subjugate its people unless they converted to Catholicism (Loewen 33).
An Objective View of Native Cultures
The dehumanization of the occupying people of a land to be conquered is a moral expedient. In America, it led to the common belief among Europeans and their descendants that Native American culture was less advanced than that of Western Europe. The wildfire speed of the extermination of Native American tribes and their cultures is a large part of the reason that so little evidence of the complexity of pre-Columbian Native American cultures survives to this day. However, what little we do know about the ways these people lived their lives and interacted with one another suggests that the values and societal structures on the American continents supported a lifestyle generally more harmonious, and more conducive to happiness, than those on the continents from which the subsequent invaders hailed. Of course, there were wars between tribes of Native Americans prior to 1492, and the aim of this text is not to paint a picture of uninterrupted harmony and prosperity in the Western Hemisphere during pre-Columbian times; the point is that the back-and-forth battles over territory and sources of sustenance worked to keep life in balance, and the overarching theme was harmony.
Boldly asserting views that contrasted with those of his contemporaries, Thomas Jefferson voiced his own opinion clearly in his Notes on the State of Virginia. The published work includes numerous references to the courage and nobility of Native Americans, his opinions having been formed out of his own personal experiences with them. He even made a point of dismissing the European assertion that Native Americans were cowardly and dull-witted as uninformed conjecture and hearsay.
Their only controls are their manners, and that moral sense of right and wrong, which, like the sense of tasting and feeling, in every man makes a part of his nature. An offence against these is punished by contempt, by exclusion from society, or, where the case is serious, as that of murder, by the individuals whom it concerns. Imperfect as this species of coercion may seem, crimes are very rare among them: insomuch that were it made a question, whether no law, as among the savage Americans, or too much law, as among the civilized Europeans, submits man to the greatest evil, one who has seen both conditions of existence would pronounce it to be the last: and that the sheep are happier of themselves, than under care of the wolves (58).
Notice how Jefferson explores the differences between Native American treatment of criminals and European criminal law. Though he concedes that this informal system of justice may appear less developed than that of European nations, he is quick to point out that it functioned remarkably well to maintain a low crime rate.
Indeed, crime within Native American tribes was reportedly rare. Just how rare is impossible to discern with much accuracy, since any hope of accessing records would have vanished with the members of the tribes themselves. In any case, it would seem that the ability to dispense with complex criminal law, lengthy judicial practices, and burdensome prison stays represented a higher level of development than the European systems to which it has been compared, not only because it was more streamlined and inexpensive, but also because it was more effective. Modern-day studies of town records of mid-fifteenth-century Europe reveal average homicide rates of about 35 per 100,000 people per year (roughly the same as New Orleans in 2006), with some areas calculated at over 100 per 100,000 (Dykstra).
Further, the matrilineal social hierarchy of Native American tribes was developed to support a family-centered approach to raising children and supporting the elderly. It was the responsibility of every member of the tribe to see to the fulfillment of the needs of all those who could not provide for themselves. Maintenance of the status quo through generations was viewed by Native Americans as a way to avoid conflict and maintain balance. Conversely, white Americans had been divesting themselves of their attachments to their extended families in order to pursue individual achievement. European Americans viewed this as the trait of a more developed culture, but perhaps the Native American position against progress for its own sake was more on the mark, evidenced by the general level of harmony maintained on the continent prior to European settlement. Repeatedly, studies of happiness as a psychological condition have demonstrated that once someone’s basic needs are met, a strong sense of community and family involvement are the greatest keys to human happiness. In the centuries after Columbus landed in the New World, many European Americans sought to increase their economic holdings in order to elevate their positions, and they sought happiness through economic prosperity (Stockwell; Belic).
When discussing overall happiness in a society, another important consideration is disease, and what it says about population density. The conditions in many European cities prior to the twentieth century were squalid. Many cities had little or no way to deal effectively with waste. Religious beliefs forbade frequent bathing. People lived very close together. Disease in Europe was rampant. The bubonic plague alone wiped out as much as half of Europe’s population during the fourteenth century. Smallpox, cholera, influenza, tuberculosis, and measles also wiped out huge segments of the European population. Such diseases had not existed in the western hemisphere, a fact attributable to Native American civilization being structured such that large cities were uncommon. This represents yet another distinct advantage that Native American culture had over European culture (Stockwell).
The Decline of Native Cultures
Everything began to change rapidly for the Native Americans once widespread settlement by Europeans began. Infectious diseases, to which nobody on this continent possessed immunity, spread quickly throughout the land. Huge numbers of Native Americans who had never even met a white man were infected and killed. When a deadly contagious disease infected a whole village, old and young, weak and strong, there was nobody to tend to the ill. There was nobody to bury the dead. There was nobody to farm. There was nobody to do the hunting. There was nobody to get water. Death tolls were devastating, as the social structures of village after village, rooted as they were in a strong sense of community and brotherhood, broke down. Today’s understanding of germ theory, and of the effect that social behavior has on the spread of contagious disease, does much to explain what nobody could know at the time. Germs were being passed from person to person, and infected victims racing to the next village to seek help or supplies kept the germs moving across the land at breakneck speed. Squanto, a Patuxet man who had been forced into captivity by European settlers and shipped across the Atlantic Ocean against his will, escaped and returned to his village to find it empty; he was the only survivor of a plague that had swept through the year before (Adolf 247).
The deeply religious settlers instantly deduced that the diseases were divine providence, clearing the way for the white man, and the massive death tolls were evidence that the Natives were ungodly, and undeserving of the land they occupied. Truly, such conclusions would have been self-evident among those who had begun their interactions already believing that Native Americans were inferior, possibly even to the extent that they were not the same species. Add to that the fact that Jesus Christ himself was completely unknown to Native Americans prior to 1492, and it is not hard to imagine Europeans thinking that God was sending disease to remove all possible obstacles to their settlement (Stockwell).
While the theory of massive population loss by genocide has been generally discredited, it is documented that Native American women routinely chose not to have children after witnessing what might become their fate. Many women even chose to abort their fetuses or drown their infants, knowing they could neither provide for them nor protect them from danger. Consequently, generational population decline was precipitous. It is estimated that in the eighteenth century, the population of Native Americans on the continent was only about ten percent of what it had been in 1491. Those who did survive the many plagues witnessed entire communities decimated and dismantled. Tribal leaders had died, support systems collapsed, and social structures began to fall apart immediately. Tribes had to move and combine with one another in order to survive. Many individuals were displaced to other tribes and had to acculturate to the tribe they had become part of. The erosion of Native American culture began early, and this pattern of tribes being forced to move and combine with other tribes would continue for centuries, constantly wearing away at the Native Americans’ identity and sense of self (Zinn 3; Loewen 77).
Native and European Cultures in Conflict
Native American society had much to teach about tolerance, survival, mutual beneficence, harmony, and the human spirit, as they contribute to overall life happiness. These are lessons that many of us living in the twenty-first century could benefit from, perhaps more than anyone at any time in history. Unfortunately, though, over a few hundred years, the role of Native Americans in what is now the United States of America shifted repeatedly. They began as curious observers, like the Arawak people whom Columbus met upon his arrival in the Caribbean. They became friendly ambassadors and willing participants in the European expansion that would eventually lead to their demise as a society, like Squanto in Massachusetts and Sacagawea on the Lewis and Clark expedition. Finally, they became guerillas defending their future from annihilation, like the Ohio tribes in the French and Indian War and the Cherokee tribes in the Cherokee Wars of the eighteenth century. The journey would come to a near end as Native Americans found themselves a largely marginalized and displaced people, relegated to prostrate positions on reservations, dependent upon the white man for subsistence, and fighting alcoholism and frighteningly high suicide rates. A sharper and more tragic multigenerational decline of an entire civilization is difficult to imagine (Adolf 247; Copeland 185).
To a people who had inhabited the continent for thousands of years in relative harmony, the expansion and oppression by the European Americans was difficult to understand, and would have been impossible to predict. Throughout the 16th and 17th centuries, up and down the East coast of North America, colonies were being established on land previously occupied by Native Americans. These colonies were generally tolerated, and often supported by the Natives. As a reward for the support given to the struggling colonists, thousands of Native Americans were kidnapped, abused, enslaved, raped, murdered, robbed, displaced, marginalized, and demonized. The culture of harmony, amity, and sympathy that had served their people for longer than anyone could remember betrayed them violently, just as it had when the strength of their communal bonds turned incidental germ exposures into massive casualties throughout the hemisphere; these were lessons hard-learned, and not to be forgotten. This was another major culture shift among the Natives at the hands of the Europeans. Violence and distrust among the tribes escalated to levels not seen before colonial settlement, and violence against the colonists sprang up as well. It is difficult to blame them. Many Native Americans chose to accept American pressure to assimilate to European American culture. They cut their hair, learned English, attended American schools, and converted to Christianity. Some who chose this path assimilated well, but most were not fully accepted by white Americans, were not given the same rights as white Americans, and could not fully integrate into white culture; nor could they return to Native American culture (Zinn 5).
The change in culture is further evident in the story of the Treaty of New York, wherein George Washington met with Creek Chief Alexander McGillivray in order to establish an agreement between the young United States and the Creek Nation to end the advance of Georgian settlements into Creek territory. McGillivray, half Scottish, a quarter French, and a quarter Native American, was a plantation owner, a slave owner, and Chief of the Creek Nation. He spoke English, Spanish, Creek, and French. He was a shrewd politician, and advocated fiercely for the rights of the Creek people to keep their land. Despite his efforts and those of George Washington and Henry Knox, in concert with the United States Congress, Georgian settlers were advancing over the border into Creek territory and violating the treaty within two years of its signing. Washington discovered that legislation could do nothing to stop the flow of settlers moving westward. The collective power of an expanding population to defy law and pursue their dreams at the expense of Native people cannot be overstated. McGillivray’s status as a slaveholding statesman within the structure of European American politics evidenced a dramatic culture shift for Native Americans, and the ultimate fate of the Creek nation, scattered to pockets of the continent in southern Florida and Oklahoma, triggered further culture changes, as thousands of Native Americans had to rebuild their lives again (Ellis 160).
Years later, in 1829, Georgia would pass the Indian Code, a law abolishing all Native American rights and claims to independence. With no ability to defend their claims to rights as independent nations, Natives across the country would be left to choose between assimilation and relocation to reservations. Either option would rob them of their culture (Brill 28).
In 1830, President Jackson addressed Congress, outlining his plan of Indian removal to the sparsely populated areas farther west. His explanation for how this solution was beneficial for the Native Americans as well as for the white Americans betrays an overarching disregard for the preservation of Native American culture.
It will relieve the whole State of Mississippi and the western part of Alabama of Indian occupancy, and enable those States to advance rapidly in population, wealth, and power. It will separate the Indians from immediate contact with settlements of whites; free them from the power of the States; enable them to pursue happiness in their own way and under their own rude institutions; will retard the progress of decay, which is lessening their numbers, and perhaps cause them gradually, under the protection of the Government and through the influence of good counsels, to cast off their savage habits and become an interesting, civilized, and Christian community (1).
Jackson seems unaware of how unlikely happiness is for a people made to cast off their habits, their religion, and their homeland. This displacement to reserved land was offered up by the government as a charitable loan, to be repaid by the Native Americans after a period of time during which, it was supposed, they would have ample opportunity to develop a variety of profitable exports.
While there can be no official record of how many Native Americans lived in North America prior to 1492, estimates usually average around ten million. The United States Census Bureau conducted a census in 1860, the details of which are available on the Census Bureau’s public website. The census reported the number of “Indians” living in the United States and its territories, broken into two categories: Civilized Indians, totaling 44,020, and Indians retaining their tribal character, totaling 296,369. One can only deduce that “retaining tribal character” refers to Native Americans who were not assimilated into white culture or living among white European Americans, but it seems doubtful that most of the nearly 300,000 so counted would have agreed that they were “retaining their tribal character.” In any case, the total counted in the census, including both classifications of “Indians,” was 340,389 persons (U.S. Dept of Commerce 15).
Beyond “Us and Them”
For many of us, there is one universal roadblock to the achievement of an unbiased understanding of history. Tragically, this roadblock may prove to be the greatest obstacle to a true discovery of what the long-deceased Native American people had to teach us about how to live our lives. The roadblock is the “us and them” factor. When studying or discussing American history, it is tempting to refer to the prominent players in past events using terms that identify them with ourselves. We use words like “we” and “us” when referring to people to whom we feel a bond, and we use words like “they” and “them” when referring to people we consider “other.” In truth, nobody who died before we were born is one of “us.” We have had zero influence over their behavior, and are therefore not entitled to feelings of pride for their accomplishments, nor condemned to feelings of guilt over their transgressions. We simply study and observe the actions, motivations, successes, and failures of people who lived in the past, all of whom are “them.”
The consequences of such reflexive behaviors are anything but innocuous. They cause people to distort their views of the past, and then to distort the facts. People generally do not wish to believe that they have done wrong, and it is human nature to downplay the negative aspects of ourselves, and display more prominently that which would make us appear most accomplished or beneficent. When we look upon the players in human history as “us,” we unconsciously set upon a journey toward manipulating the facts to suit the way we see ourselves.
We must stop doing this, and we must encourage others to stop doing this as well. We will never truly learn the lessons of American history in a productive way until we can acknowledge that the mistakes that have been made were made by humans, and those humans have given us a gift by recording their mistakes. The evidence of the marginalization of a whole culture of “others” exists, and to suppress this evidence harms every single one of us living today. All of us can learn better ways to live more enjoyable lives, and to be kinder to those with whom we share the planet.
Mark Wisniewski is a critical care registered nurse from upstate New York. He is currently enrolled in his final year of undergraduate studies at Ashford University in the Health Care Administration program. His interests include history, education, fitness, travel, nutrition, and of course, health care. His reading interests are primarily focused on nonfiction, including political history and military history. Mark plans to pursue a master’s degree and advance in leadership in his nursing career.
By Stephanie Winscher
“The fearful abounding at this time in this country, of these detestable slaves of the Devil, the witches and enchanters, hath moved me (beloved reader) to dispatch in post, this following treatise of mine.” – King James, Daemonologie (Gutenberg 1)
King James Stuart VI, who became king of Scotland in 1567 and reigned as King James I of England and Ireland from 1603 to 1625, carried an exceptionally paranoid view of witches and witchcraft, which affected his kingship and led to a massive execution crusade against persons accused of witchcraft. With this in mind, the following essay will discuss sixteenth- and seventeenth-century witchcraft, some famous trials of those suspected or convicted of witchcraft, the controversies surrounding witchcraft, and the effect witchcraft had, not only on two centuries, but on the world. Arguably, witchcraft and the persecution of witches in this era served as a conduit through which James projected not only his extravagant fear of his own demise, but his political failures as well.
The wholesale persecution of witches started in Scotland in 1590, when James VI was king of Scotland and soon to become James I of England. The year saw the start of a series of trials for treason. Three hundred witches were accused of gathering to plot the murder of James, making these trials personal for him, and he suddenly developed a very keen interest in witchcraft. Evidence for these alleged “crimes,” however, remains imprecise at best. It is known that James had a morbid fear of death, as is evident in his book Daemonologie, and one can argue that this fear is the real reason behind the deaths of hundreds of possible innocents in this so-called “Era of the Witches” (Porterfield).
By the end of the 16th century, magic had taken on several shades. “White” magic was considered pure of heart; one might compare it to the power of love. “Black” magic, by contrast, was seen as evil, conjured by the devil; its practitioner, the “witch,” was believed to have made a pact with the devil in order to possess it. One cannot blame James for adopting this theory of good and bad, or “black” and “white,” witches; the notion existed long before he came to the throne. One can, however, argue that James’ position on the throne accelerated the persecution of witchcraft and inflamed crowd hysteria. Ultimately, James’ paranoia and fanatical antics turned sixteenth-century Scotland into the site of a ravenous lynch mob out for blood (Anderson and Gordon).
Only after James was crowned king of both England and Scotland did the number of accused witches rise significantly in England. To the credit of the King, many suspected witches were given a trial, such as the very public Lancashire trials in 1612. However, the trials were covered in such detail by the press that the records give the impression that such events were common when they were in fact not. What is most interesting in all of this is how James never involved himself in any of these stories; he claimed to have no interest in witchcraft. This claim, of course, is contradicted by his own hand in his famous Daemonologie, published in 1597. The book is a treatise on witchcraft, trials, demons, fairies, possessions, werewolves, and ghosts, written in Socratic dialogue in order to confuse, educate, or scare the wits out of a reader. It also contains first-hand accounts of witch trials and accusations that took place in Scotland, as well as some mention of the Mary Napier controversy discussed in further detail below. Ultimately, this era of witches and witch-hunting left an irremovable mark on how witches were viewed, and on how we view suspected witches even to this day (Porterfield 26).
James had a justified fear of death, as he had made countless enemies not only in Scotland during his rule there, but in England and beyond. His religious policy consisted of asserting the supreme authority and divine right of the crown, and suppressing both Puritans and Catholics who objected. His accession was not welcomed by Catholics, as he was a Protestant, and they were incensed when he passed a law demanding that people who did not attend the Protestant church pay substantial fines. In addition, the king presided over little economic growth during his reign, wielded poor political power outside of his own court, and caused a great deal of concern among his English subjects, who rejected his relationship with France, their long-standing enemy. It was no surprise to James that he harbored such enemies in both Scotland and England, and this fueled his paranoia about murder conspirators and led to many of the hate-filled accusations of witchcraft (Wormald).
The Scottish Witchcraft Act of 1563, passed before James was born, made practicing witchcraft and consulting with witches illegal and punishable by death. Fast-forwarding to 1590, Scotland effectively superseded the Act by instead accepting the “Christian Witch Theory,” which made it considerably easier to hunt down witches as opposed to simply threatening to do so. This theory, championed by James, fueled a nation-wide hunt for witches by accusing those around suspected witches of being conspirators. As such, these conspirators were hunted down, just as the suspected witches were, and faced the same cruel punishments for their alleged crimes.
Witchcraft had been a criminal offense in Scotland prior to 1590, but action against suspected witches was limited, and it seems that witchcraft was seen as a minor issue by those in power. In 1583, for example, the General Assembly complained that witchcraft carried no punishment despite being outlawed in 1563. Why did this change in 1590? I believe the new perception of witchcraft, and everything it stood for, literally scared people into slaughtering many innocents for unproven crimes or conspiracy. Or perhaps people needed a way to show off their devotion to religious beliefs by fabricating unnecessary evil where it simply did not exist. What is so controversial here is not so much the convictions and executions of so many people for suspected witchcraft, but that James denied responsibility for pushing ahead with the persecutions of these many potentially innocent “witches.” The evidence weighs heavily against this denial, however, suggesting he was not only involved, but was, in many ways, a great facilitator and mastermind behind the carnage. Regardless, there can be no denying that his deep involvement and radical pursuit of alleged conspirators of witchcraft led to a great hatred toward the king (Hare).
Additionally, records reveal that in 1591 he showed a particular interest in the trial of Mary Napier, who was arrested for consulting a witch and thus linked to treasonable and punishable activity according to the law. Mary claimed to be pregnant at the time of her arrest, in hopes that her life might be spared. Despite the 1563 law outlawing witchcraft, no one had ever before been arrested in Scotland for consulting a witch. Yet James wrote to the court, ordering them to find out whether she was pregnant; if she was not, she was to be burned. The court acquitted Napier, much to the anger of James. From that point on, however, whether a woman was with child or not, if she was suspected of witchcraft or of conspiring with a witch, she was hanged from the same noose or burned at the same stake as anybody else. In fact, rumors spread that pregnant women suspected of witchcraft might carry the seed of the devil inside them, which almost certainly did not help the pregnant women of this time. One might even go so far as to suggest that some of the modern-day controversies over abortion and its morality had their origins in the Napier trial; the trial of Mary was not that different from many of the issues we see today, with a woman playing on her advantage of being with child to save her own skin. In time, people developed an unfaltering hatred for the king, as many grew tired of his tyranny and saw their townsfolk meet tragic and unjust fates at his hands (Hare; Anderson 171, 182).
Witchcraft, as well as this deep-rooted hatred for the king, grabbed the attention of the sixteenth-century media, which included reading materials, plays, and, of course, gossip. Through these outlets, witches were exaggerated and horrible stories were told about them, which had a profound effect on how the people viewed witches. Take Macbeth, for instance. A Shakespearean classic, yes, but the play also asserts the image of the witch as distinctly feminine, there to wreak havoc on the natural social order, and as such an “element of the supernatural capable of tapping into Satanic powers” (Porterfield 8). Shakespeare’s Macbeth was directly influenced by James’s famous writing on the witch in his Daemonologie. The clearest correlation between Daemonologie and Macbeth can be seen in the “three witches” of Macbeth. James’s Daemonologie states that witches can “rayse stromes and tempestes in the aire, either upon land or sea, though not universally; but in such a particular place and prescribed bunds as God will permitte them so to trouble [sic],” which can be seen directly in Macbeth when the witches summon winds and storms. It does not take a great deal of imagination to see the similarities between the scenes of Macbeth and the claims of Daemonologie, and to suspect that they are more than mere coincidence. In fact, according to Amanda Mabillard, Shakespeare and James shared a unique friendship and admiration for each other’s written works. It is even said that Shakespeare sent his rough draft of Macbeth to James so the king could add his revisions and unique tastes before the play was finalized.
Essentially, myths created the image of the witch and labeled each suspected witch as having powers given to her by the devil himself, which could not be tolerated in the heart of a profoundly religious generation. According to Melissa Rynn Porterfield, theatre in the Early Modern Period profoundly affected English views of witches in particular. Porterfield writes, “As English colonization began in earnest, the threats inherent in controlling the unknown and uncontrollable forces of nature became all too real and loomed menacingly on the outskirts of the English cultural mindset” (81). The sixteenth-century media’s focus, in fact, was not so dissimilar from today’s media focus on public trials. Take the O.J. Simpson trial, for instance. Would that trial have gained half the popularity and interest it did had it not been so heavily gossiped about and displayed for the public? Unlikely.
It has been argued that witches, who were predominantly women, never worked alone. Consequently, the Christian Witch Theory was believed to give rise to the pursuit of suspected witches based on gender and physical stereotyping. Thus, once a woman was suspected of being a witch, a hunt for more in the same town or village would immediately commence, and usually the pursuit would not stop until several other witches had been found. Through the exploitation of the sixteenth-century witch hunts, women developed quite a reputation as accused witches; men were accused of witchcraft, but not as commonly as women. According to “Witchcraft and the Status of Women–The Case of England,” an article by Alan Anderson and Gordon Raymond in the British Journal of Sociology, in “the period of 1300-1500, about two-thirds of all accused [witches] were women” (172). The article goes on to say that it is reasonable to assume that women took on the face of the ‘witch’ because women were seen as inferior to men and were, in many ways, persecuted by their own churches simply for being female. Suggesting that witches were predominantly female implied that women were inherently impure and easily swayed by the devil. One can assume this is largely related to the fact that folklore depicted the devil as a man, and women of this period were not regarded highly, especially when it came to temptation and trustworthiness in the presence of a man. This reasoning, of course, brought no objection from the king and his fanatical propaganda. He affirmed in his famous Daemonologie that “there are twentie women giuen to that craft, where there is one man [sic]” (Porterfield 174). Suffice it to say that the Christian Witch Theory sparked an episodic preservation attempt for sixteenth- and seventeenth-century women and Christendom.
According to Hartmut Lehmann’s article, “The Persecution of Witches as Restoration of Order: The Case of Germany, 1590s-1650s,” witchcraft persecution was as widespread elsewhere in the world during this time as it was in England, yet few lands saw as much carnage as England and continental Europe. On the Continent, those accused of witchcraft were typically tortured until they confessed, giving them at least some chance, through confession, of saving their own skin; in England, torture was not used. In England, witches were hanged, not burned, while in the rest of Europe witches were usually burned, though normally strangled first. One cannot help but place substantial, if not full, blame for this turn of events in the hands of James (Porterfield).
In America, there were several famous periods of intense witchcraft persecution and several famous trials, including the Salem witch trials of 1692, in which an estimated 20-35 people died by hanging or in prison. Some confessed without torture, but that does not mean they were guilty. Despite these shocking events in the Americas, there can be no denying that the Christian Witch Theory under the direction of James created more havoc. Why did James become so interested in the Christian Witch Theory? Was it that he put so much faith in the scripts of his Daemonologie that he actually began to believe his own lunatic ravings about witches’ conspiracies? It almost certainly seemed that way in 1589, when he traveled to Denmark to meet his future wife. The king’s journey back to Scotland proved a very rough and stormy one, and one ship was lost. Many suspected witches, this time in both Scotland and Denmark, were accused of attempting to drown James by calling up a storm while he was at sea with his new wife. While the witches were accused of classic witchcraft, the main issue as far as James was concerned was the plot to murder him, which was the highest form of treason known at the time (Bentley 635; Wormald 27).
Finally, the sixteenth- and seventeenth-century “era of witches” was a time the world will never forget, and perhaps not for the reasons one might assume. Perhaps when readers think of witches this Halloween, they will also think of where the idea of the witch came from and how many innocents suffered under the persecution it brought. While many will never fully understand the morality of the issue, it nevertheless deserves the respect and attention it warrants. One might argue that this era brought about more than the hunting and slaughtering of suspected witches; it also brought the public disgrace of unfair trials, cases of cruelty, human rights violations, the beginnings of discussion of women’s rights, and perhaps even acknowledgment of the shortcomings of a king. Whatever else the season of the witches brought, it certainly shaped a mythology many of us take for granted and dismiss as mere folklore, and it should be discussed with the seriousness it deserves.
Stephanie Winscher is in her fourth year at Ashford University, working on degrees in both History and Education. Stephanie is an accomplished writer and a member of the Golden Key International Honour Society and Alpha Sigma Lambda. In addition to her writing, Stephanie has a great passion for history and animals. Lastly, she is a dedicated military veteran and prides herself on making the Dean’s List while maintaining a 4.0 GPA.
By Brandy Frew
Most, if not all, societies have formal and informal methods by which their citizens deal with conflict. Some societies, however, have enough of these constructs in place to encourage a peaceful and cooperative culture. These societies share many qualities that allow them to reduce or eliminate conflict, including how they teach children to deal with conflict, the importance they place on conflict resolution, and the conflict resolution tools they use. An examination of these qualities will show that the peaceful tendencies of a culture result from cultural constructs that maintain peacefulness. Additionally, examining the qualities of peaceful societies can help larger, permanent societies acquire conflict resolution skills that could reduce conflict within, and outside, their own society.
The Enculturation of Children
Child-rearing practices are an important part of maintaining peacefulness within a society; these practices must include passing on the skills needed to avoid, resolve, and deal with conflict. Teaching children early in life the principles and values surrounding peace and conflict helps ensure that peaceful societies can continue their peaceful practices. As Bruce Bonta, a member of the Peace and Justice Studies Association, explains, the Hutterite colonies in the United States and Canada and the La Paz community in the Valley of Oaxaca, Mexico, provide two examples of how children are taught early to adopt norms that discourage conflict. An examination of both shows how each culture begins early in life with expectations that its members reduce or eliminate conflict as much as possible (301-306).
The Hutterites are groups of people living throughout North America in small communities that work together to raise livestock, farm, and manufacture goods. Bonta writes that in many nonviolent societies, teaching children to live without competition helps ensure that the values of peace and cooperation are passed on with each generation; the Hutterite colonies are one such society. Within the Hutterite colonies, cooperation, not competition, is the quality taught to children early in life. This avoidance of competition is so strong among Hutterite children that visiting teachers have found it difficult to use competitiveness to motivate students and have found that an entire class will be embarrassed if a student is singled out for praise (“Hutterite Brethren”; 299-302).
As Bonta explains, at a very early age Hutterite children are taught that their community is more important than they are as individuals, and that their elders are more important than they are. In this peaceful society, children grow up understanding that they are not to set themselves apart from others or compete with one another, and that they are to view their elders with respect. Hostetler writes, “a successfully socialized adult Hutterite gets along well with others and, submissive and obedient to rules and regulations of colony, is a hard-working, responsible individual.” All of these qualities help the Hutterite society pass on peaceful and cooperative constructs from generation to generation (301; 245).
In addition to cooperation, another societal construct that helps maintain a peaceful society is respect for other members of that society. This is shown in the La Paz community studied by Douglas P. Fry, who contrasted the methods of socialization used with children in La Paz with those of the neighboring community of San Andres. During Fry’s examination of attitudes surrounding aggression in both communities, he found that the children of each were taught societal rules about violence and aggression, yet the citizens of San Andres regularly engage in more violent and aggressive behavior than the citizens of La Paz (623-626).
In both communities, children are regularly exposed to the behaviors of adults and witness how conflict is dealt with. In particular, the citizens of La Paz hold a more egalitarian view of gender relations; their women are afforded more freedoms and are less likely to be subject to domestic violence. After closely studying the behavior of children in both communities, Fry concludes that “prevalent attitudes and values regarding what constitutes acceptable behavior, shared expectations about the nature of the citizenry, and overall images of the community’s aggressiveness and peacefulness are all elements of a child’s learning environment.” Because the children of La Paz are taught early in life to respect each other, and are more apt to have discussions with elders in times of conflict (rather than receive corporal punishment), they are more likely to grow up with conflict resolution skills that the San Andres children may lack (621-633).
In both the Hutterite and La Paz societies, early socialization of children helps ensure the peacefulness of the society. Enculturation at an early age allows the cultural norms surrounding peace, violence, cooperation, and competition to pass on to future generations. The methods discussed, cooperation and respect for others, are tools that, instilled during childhood, allow children to integrate successfully into society as they age. Other tools important to peaceful societies are conflict resolution skills; these must be practiced effectively by both children and adults within a culture for its cultural constructs to survive.
Conflict Resolution Skills
Conflict resolution is a necessity for any society that wishes to become peaceful. Without values strongly objecting to violence, aggression, and conflict, a society is less likely to maintain its peaceful aspects over time. Two societies in particular, the Inuit and the Paliyan foragers, show how this can be done amid a changing social climate and rising conflict within their cultures. In both, some aspects of the culture have changed significantly, such as more permanent settlement and increased exposure to other cultures. Examining these two cultures can show how traditional cultural constructs surrounding conflict can survive and adapt to the increased modernization and globalization they face.
Jean Briggs discusses how the Inuit have adapted their conflict management from traditional nomadic camps to more modern settlements. In their early nomadic years, the Inuit had several conflict resolution skills that were used to control conflicts in their small communities. These included avoiding criticizing others, not taking sides during a quarrel, and having an agreed-upon member of the group publicly lecture individuals when a conflict was not resolved quickly or easily. Each of these conflict controls helped these small, closely related groups of people live and work together in a cooperative manner (111-113).
However, as Briggs explains, these same controls may not work as well in a modern settlement, where the Inuit are exposed to a larger number of people and greater diversity. The community described by Briggs is diverse in both language and ethnicity. This modern environment creates a number of conflicts not previously seen in Inuit culture: the Inuit may begin to feel tied down, dependent on government services, unused to having the government enforce community law, or torn between traditional views and the views of newer generations. Ultimately, Briggs finds that all of the former camp members who moved into the settlement she studied fear aggressive confrontation and go to great lengths to avoid it, such as avoiding public meetings where quarrels are known to happen (111-122).
To adjust to these new conflicts, Briggs shows, the Inuit either created new conflict control mechanisms or adapted previously held ones. One new mechanism in their society is the radio: through personal use (talking to a friend or relative) or government use (announcements), the Inuit can feel that their large society is much smaller. They are not as isolated from family members or from their community as they would be without this method of communication. While the radio can be a source of conflict as well, it provides a way for conflict to be resolved without confrontation, allowing the Inuit to adapt their traditional method of conflict avoidance through technology. The Inuit are not the only traditional society facing change and modernization; the Paliyans of India provide a unique insight into changing conflict resolution methods as a part of their society settles (110-122).
The Paliyan are a foraging society whom Peter Gardner, an anthropologist at the University of Missouri, examined to see how, and whether, conflict resolution methods changed when a portion of a nomadic group settled into a permanent community. He compares and contrasts the conflict resolution methods, parenting, and conflicts witnessed between the Paliyan who have become sedentary and those who still dwell in the forest. While there was a slight increase in violence among the sedentary Paliyan, it still falls well below what most societies would consider violent; nevertheless, conflicts and acts of aggression in the permanent Paliyan community were more frequent than in the forest-dwelling community (224-228).
In Paliyan society, respect for others is paramount to their conflict control constructs; this includes respecting children as much as one would an adult. Within the sedentary Paliyan, Gardner observed a higher rate of incidents between child and parent that would have been seen as disrespectful among the foraging Paliyan, such as a mother raising her voice or swatting her child. He also reported an increased amount of arguing within playgroups that he did not witness among the foraging group. Gardner felt that this increased aggression among the settled Paliyan children was due, in part, to the example set by another ethnic group that the children came into contact with during their day. While the Paliyan adults within the settled group still actively avoided conflict of all types, the children were exposed to the behaviors of more aggressive adults (224-228).
In addition, Gardner observed a new method of conflict resolution created by the sedentary Paliyan: the KuTTams, a communal method for dealing with conflicts. However, the strongly held beliefs that others must be respected and that no one has the power to tell another what to do (or how to behave) are believed to cause uneasiness among the sedentary Paliyan toward this new method. They appear suspicious of any method that may result in a group or an individual telling someone how to behave, or that grants authority to a group or individual. Instead, even among the sedentary Paliyan, there remains a strong cultural construct that mutual respect is what will keep conflicts from arising. In contrast to the Inuit, then, the sedentary Paliyan have maintained many of their traditional conflict resolution methods and remain suspicious of new ones; their newest method, the KuTTams, is not widely accepted, unlike the Inuit adaptation of the radio as a means of resolving and avoiding conflict (215-236).
As globalization influences the economic and social structure of a nonviolent society, the dynamics of conflict resolution and the maintenance of peaceful values are likely to change. Not only are the children of such societies likely to be exposed to differing views regarding conflict, competition, and aggression, but the adults are also likely to be exposed to new sources of conflict. These two societies, the Paliyan especially, give us insight into how traditional conflict resolution strategies can be adapted to deal with modern growth, globalization, and conflict.
What Techniques Work
The conflict resolution techniques of peaceful societies vary; a common thread among them, however, is a deep commitment to avoiding conflict of any type. Bruce Bonta, an anthropologist with the University of Alabama, specifically defines
conflict resolution among peaceful peoples is the settlement or avoidance of disputes between individuals or groups of people through solutions that refrain from violence and that attempt to reunify and re-harmonize the people involved in internal conflicts, or that attempt to preserve amicable relations with external societies (406).
To accomplish this, peaceful societies must create conflict resolution techniques that fit the specific structure of their culture. Therefore, while they may share some common techniques, they are also likely to differ from one another. Bonta states that in his examination of the available literature on peaceful societies, he found six common techniques: self-restraint, negotiation, separation, intervention, meetings, and humor. These common techniques can be examined to help us understand how some cultures maintain peacefulness (406-408).
In cases of self-restraint, it is common for a societal construct to teach members that they are expected to act, and to react to situations, in a certain way. For example, Gardner states that the Paliyan “firmly and persistently reject the idea that they might consume alcohol. They say it unleashes anger” (231). Here we see self-restraint, if not complete avoidance, in the consumption of alcohol: an expectation exists that all Paliyans avoid alcohol, as they believe it will create conflict in their groups and settlements. Bonta also explains that self-restraint in emotional reaction is important in many peaceful societies, many of which believe that highly emotional responses or states can only lead to more trouble or conflict. Whether in consumption or in emotional response, self-restraint is a tool integrated into the cultural constructs of many peaceful societies (406).
Negotiation is another quality found within peaceful societies, though not in the sense that Western cultures might understand it. Bonta states that rather than preferring an outsider to hear both sides of an argument and help the parties reach an agreement, peaceful societies generally rely on the individuals involved in the conflict to work it out themselves. As Briggs explains, the Inuit radio method discussed earlier is an example of how negotiation by an outside party is avoided: through the radio, two arguing parties can air their grievances, and perhaps settle them, without having to ask for or receive help from anyone else. Conflict avoidance is imperative in peaceful societies, and this technique allows others to stay out of a conflict when one arises. If two parties are arguing in a small community and it is culturally acceptable for others to get involved, the conflict could create a fission in the group. By disallowing negotiation through a third party, these cultures help others avoid getting involved in conflicts they witness (406; 119-121).
The third conflict resolution technique described by Bonta is separation, of which he states, “walking away from a dispute is one of the most favored ways of resolving conflicts among these people” (407). Allowing individuals or groups to remove themselves from a conflict until emotions calm greatly reduces the risk of the conflict escalating to violence. Removal is not only an internal mechanism but also a mechanism for dealing with external conflicts: Bonta explains that it is not uncommon for historical records to show peaceful societies in Western cultures moving entire communities to avoid conflict with an external society. While a very large, permanent society could not do this, there are other ways to move away from a conflict. Understanding the importance of separation can help permanent, modern societies learn to avoid conflict with geographically bordering societies, perhaps through increased diplomacy (407).
In contrast to the negotiation approach covered earlier, the fourth conflict resolution mechanism described by Bonta is intervention. Within some peaceful societies, he discovered, an individual is informally selected to try to maintain the peace when a conflict arises; these are typically people skilled at calming others, perhaps using humor or a soothing tone of voice. In other peaceful societies, any bystander will intervene when a conflict is witnessed. Gardner writes that within the Paliyan foresting band, it is uncommon for a relative or bystander not to intervene and separate a frustrated parent from their child in an effort to avoid conflict. In either case, the cultures have created a conflict mechanism tailored to their societal values (407; 226).
The final two conflict resolution mechanisms described by Bonta are meetings and humor. Meetings are usually informal public gatherings where people are allowed to air their grievances. Bonta also states that one purpose of these meetings may be to contain a conflict to a designated area, thereby removing the threat that it will “disrupt society, either by minimizing issues as private rather than public concerns, or by restricting involvement in order to allow informal mechanisms of social control to operate” (407). Meetings, negotiation, and intervention can all be closely related to the final mechanism, humor. Gardner describes several instances within Paliyan society where individuals used humor to defuse a tense situation or a conflict. Whether through meetings or humor, the premise behind these techniques is to minimize the importance of the conflict before it escalates (222-23).
Understanding the importance and methods of conflict resolution within a peaceful society may help other societies learn methods that could reduce conflict within their own. While not all methods are easily adopted in a modern community, separation for example, there are aspects of each that can affect the rates of aggression, violence, and conflict seen in permanent communities. For example, learning to avoid extreme emotional responses may help motor vehicle drivers deal with the phenomenon of road rage. The peaceful societies covered each show a strong belief that conflict is not only destructive but also effectively avoided when all members of the society make an effort.
Importantly, peacefulness is not an ideal that these societies are constantly working toward. As Majken Sorensen states, “even more important, they do not see peacefulness as an ideal that they are strive for [sic] – they are peaceful” (607). Peaceful societies show a great respect for conflict avoidance, some to a greater extreme than others. All, however, have formal and informal methods to resolve conflict, avoid conflict, and create cultural constructs that establish social controls. These formal and informal methods become the building blocks of the enculturation that members of the society receive through interaction with one another.
All societies socialize their children early in life in what is considered culturally acceptable behavior; within peaceful societies, this typically includes values such as cooperation, respect for individuals, and self-sufficiency. Additionally, by socializing children in conflict resolution skills, peaceful societies create informal mechanisms that strongly encourage their populace to act according to approved behavior models. As seen in the mechanisms covered by Bonta, some use humor, some a strong belief in avoiding conflict and strong emotions, and others still other means to control the frequency and severity of conflict. All of these attributes continually help ensure that a peaceful society will have the mechanisms necessary to maintain its peacefulness (407-408).
All of these peaceful societies have created conflict resolution methods and views regarding violence through cultural constructs specific to their culture. In some cases, they have had to adapt traditional views to deal with modernization, as with the Inuit. In other cases, traditional views are still held strongly despite a change in nomadic behavior, as is the case between the forest-dwelling and sedentary Paliyan. Through the various strategies employed by the societies examined here, and through conflict resolution methods proven to work within one society, other societies may gain a greater understanding of how to deal peacefully with conflict.
Brandy Frew is a Cultural Anthropology major in her fourth year at Ashford University. She is a member of Alpha Sigma Lambda and Golden Key. Her passion is examining modern culture and watching how changes affect the peoples of the world. Upon graduating from Ashford, she hopes to pursue a Master’s degree in Criminology. A proud mother of four, she hopes that her college education will inspire her children to pursue higher education.
By Jamie Kouba
If a person is terminally ill or mortally injured, end-of-life or palliative care may be required to ease that patient’s suffering and make dying as comfortable as possible. Some people are planners and think ahead; they might make a living will or advance directive to state their wishes should they face a terminal illness or mortal injury. According to the Mayo Clinic, “advance directives guide choices for doctors and caregivers if you’re terminally ill, seriously injured, in a coma, in the late stages of dementia or near the end of life” (Mayo Clinic Staff). Advance directives can include a DNR (Do Not Resuscitate) order, a request for palliative care, and, in states where it is legal, a request for Physician Assisted Suicide (PAS). Other people might be caught by surprise by a diagnosis of a terminal illness that leaves them wanting to end their suffering. In addition, an unexpected tragedy such as a car accident could leave a loved one brain-dead with no hope of regaining cognitive abilities. In these and similar circumstances, ethical questions arise about the morality of PAS. Proponents of PAS argue that once a patient has established the need to end life, it should be done with as much efficiency and dignity as possible, whereas those who oppose PAS do so on various grounds, including moral, cultural, and religious beliefs, as well as concerns that physicians would violate the Hippocratic Oath.
Adding to the complexity of this issue are debates about whether there is a moral distinction between active euthanasia and passive euthanasia and how each affects the patient. There are also consequences for doctors and loved ones to consider. American physician Leon Kass, who argues that active euthanasia is morally wrong, states that “Ceasing medical intervention, allowing nature to take its course, differs fundamentally from mercy killing; in ceasing treatment the physician does not intend the death of the patient, even when death follows as a result” (475). Ethical theorists who debate active and passive euthanasia arrive at different conclusions depending on the ethical theories they bring to the debate. This essay will show that utilitarianism and moral relativism yield jarringly different conclusions on this issue. Using these theories, I propose that denying oneself life-saving measures (passive euthanasia) is morally equivalent to requesting life-ending measures (active euthanasia). I will demonstrate through the principles of utilitarianism that there is no moral difference between active and passive euthanasia and that, therefore, PAS should be legally available in all states to those who seek it.
Anytime someone is wrestling with a major life choice, they often look to their own set of moral principles to guide them to the “right” decision. Some people call it their gut instinct; it’s that inner nudge that says, “This is right for me.” A relativist says there are no moral absolutes and that something is right or wrong for you based on your culture and upbringing. “Moral relativism extends this idea to the area of ethics. Ethical evaluations are made in terms of the context of that act and therefore are relative to the [person’s] culture and values” (Mosser 6.2). While moral relativism is often placed side by side with other ethical theories, it offers no moral principles or concrete rules to judge what is “right” or “wrong” in any given culture. Moral relativism does, however, help us understand why laws on such things as PAS can differ from state to state.
For instance, the federal government of the United States has no law against euthanasia, implying perhaps that the culture of the United States as a whole has no moral qualms with active or passive euthanasia. However, according to FindLaw.com, only five states have legalized PAS, implying that only a slim minority of the country approves of it. Yet if we look at the states with PAS legislation proposed this year, Death with Dignity lists 19 states on their map with support for PAS legislation. This shows that the United States is a house divided on this issue and that the question of PAS cannot be answered simply by appeals to whether or not it is culturally acceptable (Take Action).
Moral relativism suggests that if a practice is neither right nor wrong, then it ought to be tolerated. But should that standard apply to laws? It must be noted that over the last ten years there has been a significant turn toward support for PAS, and soon the entire country may be on the same side of this debate. As Edward Rubin points out, “The intense controversy about assisted suicide and the related issue of terminating life support reflects the conflict between two moral systems, one traditional and the other evolving” (766). Given the current diversity of views on PAS within the United States, moral relativism cannot provide a worthwhile conclusion on whether euthanasia is right or wrong. Because of this failure to provide a clear-cut conclusion, a person seeking answers should turn to a different ethical theory to examine the topic of PAS.
When philosophers seek to determine whether an act is morally right or wrong, they may look at the consequences of the act itself. In considering those consequences, they look to see how much utility, or satisfaction, the act produces. This theory of ethics is known as utilitarianism, first proposed by Jeremy Bentham in 1789. The principle of utilitarianism seems relatively straightforward: when given a choice between two options, one should choose the option that generates the greatest good and produces the least amount of suffering. This means the greatest good or least suffering for a given group, not just for an individual. To apply utilitarianism to the topic at hand, we must first determine what the options are (Mosser 6.1).
Let’s first consider the option of passive euthanasia for a terminally ill patient. This is usually an option requested in a living will or advance directive. It generally states that no life-sustaining measures are to be taken by the doctors, although they may sedate the patient and keep them comfortable by administering pain medication. For most patients, this means no respirators, defibrillators, feeding tubes, or intubations; it is a matter of trying to achieve a level of comfort while allowing the body to shut down and die on its own. It can be an agonizing process for the patient, loved ones, and even the doctors, not just physically but emotionally as well. The ultimate result is the death of the patient.
The second option is active euthanasia, or PAS. Active euthanasia involves pills or an injection of a lethal dose of medication, given at the patient’s discretion to end life immediately. Each of the five states that allow PAS has its own regulations on how, where, and by whom the medication may be administered. For instance, according to the Washington Death with Dignity Act, once 48 hours have passed since a patient submitted a written request, the patient may receive the life-ending medication from a pharmacy and administer the dose themselves. Some states require a psychiatric evaluation to determine mental competency and to make sure that no one is coercing the patient into choosing active euthanasia. California law requires the medical diagnosis of two doctors to confirm the patient is eligible under the law, including psychiatric competency. After a series of oral and written requests and a 15-day waiting period, which may vary by state, the patient is prescribed a lethal dose of a barbiturate, usually Seconal or Nembutal. Upon receiving the medication, the patient is in complete control of the time and place of their death, and of who will be around them. The death is quick and painless, with the patient slipping into a coma and dying in their sleep (The Washington Death; Take Action).
Next, we must use the principles of utilitarianism to determine which option is morally better by deciding which consequences will generate the greatest good and reduce suffering. Applying utilitarianism to the morality of active versus passive euthanasia means looking at the consequences of the actions. In both cases the patient ends up deceased, so the final consequences are the same. A patient who is terminally ill and suffering will benefit from either passive or active euthanasia because either produces the greatest good: eliminating the patient’s suffering. If the patient is given the right to choose active euthanasia, their death is on their own terms and painless, usually in the comfort of their own home and surrounded by loved ones. The greatest good also applies here to limiting the financial strain on the patient’s family and easing the emotional burden of watching a loved one suffer needlessly. The patient’s needs are not the only ones being served by PAS. Although the death of a loved one is an obvious source of pain for any person, when a loved one chooses PAS over suffering, the family can take comfort in the fact that the deceased chose to shorten their overall suffering and that the loved one’s final wishes were honored. The utilitarian would suggest that getting the most good out of a difficult situation is the best option. With the peace of mind that PAS provides to the dying and their loved ones, despite the loss, it is clearly the right moral choice.
In passive euthanasia, by contrast, the patient is likely to suffer more, and for a longer period of time. Although the patient is ultimately terminal, death does not always come quickly. Sometimes “terminal sedation” is required, in which the patient is kept in a coma-like state under sedation while starving to death as the body’s organs shut down one at a time. It can be a grueling process for the family and the medical staff to endure, as well as for the patient. Then there is the added expense of extended health care, which loved ones are sometimes left to bear. The patient’s life also often ends in a hospital or nursing home. The very act of letting nature take its course may seem like the “right” thing to do, but if the process extends the pain, misery, and grief for everyone involved, it can hardly be considered as serving the greater good.
The right of a terminally ill individual to end his or her life is a serious matter, and patients should be given options that will bring them comfort in their final moments. Someone planning a living will has a number of priorities in mind; these “include being free of pain and psychological stress, having control over decisions about their care, avoiding treatments that prolong their deaths, and not burdening their families” (Alfonso 43). These priorities do not change when someone reaches the end of life without having planned ahead. Having the option of PAS helps ease the patient’s mind that their death can be on their terms.
As mentioned earlier, Kass argues that active euthanasia is morally wrong because it involves the physician intending the death of a patient. However, two notable philosophers offer compelling arguments against this position. James Rachels argues that “active euthanasia is in many cases more humane than passive euthanasia” (78). It is much more compassionate to give patients what they want and let them choose their own death. As pet owners, we feel that the compassionate thing to do is euthanize our four-legged friends when we can no longer help them medically. Why should Grandma be treated with less compassion than we give Lassie, especially if it is a choice that Grandma is making herself? The utilitarian philosopher Peter Singer asserts that “if beings are capable of making choices, we should, other things being equal, allow them to decide whether or not their lives are worth living” (529). Given these facts and considerations, a utilitarian must conclude that, since both cases end in death, the only option is to choose the more humane of the two. That is active euthanasia, since it ends the patient’s life sooner and with less suffering, which corresponds to the greater good.
Having drawn this initial conclusion, it is still necessary to consider whether there are any significant moral distinctions between active and passive euthanasia, so we now turn to the doctors and their perspectives. Advance directives require that a doctor stop life-saving measures, but they do not require the doctor to stop trying to heal the patient. As Rachels points out, the current doctrine regarding end-of-life care is this: “The idea is that it is permissible, at least in some cases, to withhold treatment and allow a patient to die, but it is never permissible to take any direct action designed to kill the patient” (78). A utilitarian may look at this issue from the doctor’s perspective and say that being forced to “kill” a patient is detrimental to the doctor’s mental and emotional state, and that the consequences for the doctor therefore do not outweigh the benefits for the patient. The idea is that these doctors are forced to live with the negative consequences of their actions long after the patient’s suffering has ended. Dr. Stevens, in his paper “Emotional and Psychological Effects of Physician-Assisted Suicide and Euthanasia on Participating Physicians,” concluded that “many doctors who have participated in euthanasia and/or PAS are adversely affected emotionally and psychologically by their experiences” (187). Opponents of PAS argue that the death affects more people than just the patient or the doctor, including other caregivers, friends, and family, and that those persons’ emotional states should also be taken into account when weighing the consequences. A caregiver or loved one will suffer when a person dies, but opponents of PAS argue that knowing ahead of time when and how your loved one is going to die makes the grief worse.
It is obvious that a topic such as death garners a great deal of attention from both sides of the debate; ultimately, however, the dying person should have the final say in their own death, because it is their body and the greatest good applies to them.
The assertion that Physician Assisted Suicide is morally permissible does not mean that doctors who are opposed to PAS must be forced to participate in it. If a doctor does not feel comfortable providing life-ending medication to a terminally ill patient, then that doctor should not be forced to do so; however, they should be able to provide the patient with the name of a doctor who will. As Rubin points out, “suicide is an appropriate response when there is no further possibility of living a fulfilling life” (780). If loved ones want to help a person who is seeking PAS, yet personally do not believe it is the right thing to do, there are other options that do not require active participation, such as simply being there for the final moments. A person does not always need to stand in someone else’s way to stand up for what they believe is right. A pacifist wouldn’t start a physical fight to demonstrate his position against war; likewise, a person who is against PAS can still support a patient seeking PAS without breaking their own moral code. A doctor or loved one can put personal feelings aside and help the patient get the care they request and deserve. A patient who chooses to end their life is making a very personal decision, and they should be surrounded by people who are on their side. They are already fighting their body; they shouldn’t have to fight someone else’s moral code as well in order to serve the greater good and end their suffering.
Death is hardest on the living. That is probably why active versus passive euthanasia is such a hotly debated topic. Looked at through a utilitarian lens, however, it can be argued that there is no moral difference between the two. “If patients can rationally opt for an earlier death by refusing life-supporting treatment or by accepting life-shortening palliative care, they must also be [considered] rational enough to opt for an earlier death by physician-assisted suicide or voluntary euthanasia” (Singer 538). Despite objections from those who would muddy the waters with semantics over the difference between “allowing nature to take its course” and “mercy killing,” a terminally ill patient should legally have access to PAS and be able to choose their own death, because it leads to the greatest good and ends the suffering for them, their loved ones, their caregivers, and their health care workers. If we choose the haphazard approach of moral relativism and leave the decision to legalize PAS up to individual states, we leave countless patients needlessly suffering undignified deaths while their loved ones and caregivers watch them waste away. We should, as a country, make PAS legally available to those in need of it; it is the morally right choice.
Jamie Kouba is a third year student at Ashford University, working on a double Bachelor’s Degree in English and Cultural Anthropology. She is a member of Alpha Sigma Lambda, and hopes to pursue her Master’s Degree in Anthropology after graduation. She is married, with one daughter, and spends most of her free time outdoors.
By Edward Lindenhofen
For over 50 years, James Bond fans have enjoyed the action-packed exploits of their favorite fictional British super-spy. No other movie franchise in history has enjoyed such an epic lifespan. Since first appearing in the espionage novel Casino Royale, penned by author Ian Fleming in 1953, James Bond has gone on to star in 24 films that continue to wow audiences through the tried-and-true formula of impossible gadgets, incredible locations, and of course a bevy of beautiful “Bond Girls.” Skyfall, an epic action-thriller directed by Sam Mendes, was the 23rd film in the EON Productions stable and was released in 2012, the fiftieth anniversary of the franchise. The film was the third to star Daniel Craig in the title role, and it took the character in a completely different direction. No longer the eternally confident and unfailing hero, the Bond of Skyfall revealed a fragile, aging, and human side, setting audiences up for a roller-coaster ride toward an explosive conclusion. This analysis will examine the cinematic elements Sam Mendes and his team used to construct this epic film; break down the plot and story and how they support the film’s overarching themes; and consider the cinematographic and narrative elements that make Skyfall the most thoughtful and well-produced Bond film to date (Neuendorf, Gore, D’Alessandro, et al. 747-748; Dodds 118).
The Birth of Bond
James Bond was born in 1953, when author Ian Fleming (1908-1964) penned his first novel featuring the iconic British spy, Casino Royale, while at his Jamaican estate “Goldeneye.” The protagonist was based on Fleming’s experience serving in the British Naval Intelligence Division during WWII and was, interestingly, named for an ornithologist who authored the book Birds of the West Indies. While much of Fleming’s wartime experience occurred behind a desk, Bond in contrast was a dashing field agent charged with defending Queen and Country at any cost. Critical reception of the Bond novels was tepid until the film rights were purchased by Albert Broccoli and Harry Saltzman in 1961, who released their first film, Dr. No, directed by Terence Young, in 1962, starring Sean Connery as the infallible spy. Together Broccoli and Saltzman produced the Bond films until 1975. Thunderball, also directed by Young, embroiled the producers in a licensing controversy that left rights to the story in the hands of a third party, Kevin McClory. The Broccoli family later reacquired the rights to the franchise and have produced all films with one exception: Never Say Never Again, a remake of Thunderball starring Sean Connery in one last appearance as Bond, was produced by Jack Schwartzman, who acquired the rights from Kevin McClory. Never Say Never Again is considered to be outside of the franchise canon. A total of six actors have donned the tuxedo over the past 53 years (Dodds 2-7).
As important to the success of the films as the girls and gadgets is the iconic theme song, “The Name’s Bond…James Bond.” Monty Norman, tapped by Albert Broccoli to compose the theme, adapted it from an earlier un-staged musical he had written, “A House for Mr. Biswas,” based on the 1961 book of the same name by Trinidad-born author V.S. Naipaul. The theme was orchestrated by John Barry and has been featured in every film since Dr. No. Barry later tried, unsuccessfully, to claim that it was in fact he who had composed the piece (Coleman Film Media Group LLC).
Skyfall (2012) was produced by Barbara Broccoli and EON Productions and released jointly by Sony Pictures and Columbia Pictures. The screenplay was written by Neal Purvis, John Logan, and Robert Wade, and the film was director Sam Mendes’s first time at the helm of a Bond picture. Prior to Skyfall, Mendes directed the 2008 romantic drama Revolutionary Road and the 2009 romantic comedy Away We Go. In addition to Craig, Skyfall starred Javier Bardem as the ruthless ex-MI6 agent-turned-cyberterrorist Silva, Dame Judi Dench as “M,” Ralph Fiennes as Gareth Mallory, Naomie Harris as Eve Moneypenny, and Bérénice Marlohe as tragic Bond Girl Severine. Roger Deakins served as cinematographer (“Skyfall“).
Skyfall represented a departure from previous films, taking Bond in a decidedly different direction by focusing on his vulnerability, both mental and physical. Rather than the quip-laden, tongue-in-cheek humor that was the foundation of most of the prior films’ storylines, Skyfall portrayed the modern Bond as a gritty, vulnerable, misogynistic, aging, and tortured soul. A key element contributing to the continued success of the films is the adaptability of the storylines. The villains and their evil methods have reflected the social and political times in which the films were released, from the cold war weapons of Dr. No, to the space-based warfare of Moonraker, to the monopoly on clean water in director Marc Forster’s 2008 Quantum of Solace, to the cyberterrorism reminiscent of the Edward Snowden era in Skyfall (Dodds 121; Wight 5-7).
Mendes introduced audiences to a very modern antagonist. The tried-and-true formulaic approach of beautiful women, explosions, fast cars, and easily defined villainy was modified to include a very new type of enemy, one who operated in the same “shadows” as Bond. In interviews conducted prior to the release of the film, Mendes noted that he and the crew watched director Christopher Nolan’s films for “inspiration,” and several elements of the storyline draw comparisons to Nolan’s Batman films, the two sharing more than just a common genre: a tortured childhood marked by the loss of both parents, Wayne Manor and Skyfall both destroyed, the villain disguising himself as a cop, the villain escaping from custody, and so on. It is clear, then, that Mendes drew upon Nolan’s formula for the foundation of Skyfall’s backstory (Wight 1-1; “Literary Analysis”; Wright 2).
The plot of this installment finds Bond in Turkey chasing down a stolen computer disk (a classic MacGuffin) containing the secret identities of NATO agents. The mission goes horribly wrong: Bond is accidentally shot by a fellow agent and presumed dead. Meanwhile, MI6 and M find themselves under attack as a mysterious cyberterrorist blows up MI6 headquarters and exposes the identities of five agents, leading to their execution. As M battles the British government to save her job as well as MI6, Bond returns from the shadows to hunt down the villain responsible and to save M, England, and himself (Mendes).
The themes of the film center on death and resurrection, both for MI6 and for Bond himself, and the narrative supports that theme throughout the story. After the failed mission to recover the stolen NATO disk, the result of an errant shot by fellow agent Moneypenny, Bond exiles himself on an island, indulging his favorite manic-depressive pursuits of women and drink and generally feeling sorry for himself, as he has let down England, M, and himself. M faces a similar crisis: she confronts the potential end of the MI6 section as well as her job as she battles obsolescence, all while dealing with the loss of her favorite agent and of her headquarters, blown up by an as-yet-unknown villain. Only after the shocking attack on MI6 headquarters does Bond return to Mother England, re-enlisting himself in the service of Queen and Country. He then begins the arduous process of resurrecting himself while struggling with his increasing age and fragility; his only purpose in life is now threatened (Dodds 37-41; Mendes).
The stage is now set for his resurrection, which occurs in stages throughout the film. M directs Bond to undergo a series of tests in order to be declared fit for service, and his difficulty in accomplishing them shows the toll his age and lifestyle are beginning to take on him. He is sequestered in a solitary room, clad in a solid blue tracksuit, with only a single overhead fluorescent light, which, in combination with the room’s desaturated gray color scheme, vividly portrays the unshaven Bond as a “hamster in a cage,” his every move viewed from behind a two-way mirror. Age is the enemy that vexes him at this point in the film. During his rehabilitation, we see his utter exhaustion as he struggles to perform pull-ups, collapsing after M’s aide-de-camp Tanner, played by Rory Kinnear, leaves the room. The medium close-up shot of Bond clearly portrays his pain and frustration, exacerbated by the gunshot shrapnel still lodged in his shoulder. It is in this scene that Bond resolves to overcome his weaknesses, displaying the resilience to which we are accustomed. Resurrection, not death, is foremost on his mind, made evident by the surgery he performs on himself, watched through the reflection in the bathroom mirror, to remove the scarred-over bullet fragments (Goodykoontz and Jacobs; Mendes).
With some less-than-factual interpretation of the results, Bond is restored to active duty. In this scene, we begin to see his transformation back into the Bond we know: dressed in a perfectly fitting suit but still sporting the five o’clock shadow. Mallory asks why he did not just “stay dead,” reinforcing the theme of age by declaring that field work is, as he calls it, a young man’s game, and further admonishing Bond not to “cock it up” (Mendes). Age as a theme continues to thread its way through the film, part of the backstory being the British government’s questioning of the continued viability of the MI6 service. M battles Parliament, insisting that a spy service which operates “in the shadows” is as relevant as ever. Bond’s first meeting with a much younger Q, played by Ben Whishaw, in an art museum before a painting of a “bloody old ship,” is a great example of the burgeoning new and younger MI6. As Q hands Bond only a gun and a radio beacon, Bond quips, “not exactly Christmas, is it?” to which Q responds, “were you expecting an exploding pen? We don’t really go in for that sort of thing anymore.” This exchange is critical, as it exemplifies the new world in which Bond, like it or not, is now forced to operate (Dunt 2; Mendes).
Another reference to the theme comes in the form of a straight-razor shave performed by Moneypenny. While Bond gets the shave he so desperately needed, she comments on the antique nature of the blade and refers to him as an “old dog with new tricks.” The low-key lighting and gold-hued overtones of the hotel room, combined with the out-of-focus view of the skyline through the patio doors, give the scene a very romantic feel. In contrast, even the evil Silva gets in on the game: after Bond is captured and taken to the island hideout, Silva declares Bond a “physical wreck” as he sits tied to a chair in Silva’s cavernous computer room. Later, Silva forces Bond to put his decaying marksmanship skills to the test, competing to shoot a shot glass off the head of the beautiful Severine with a black-powder pistol in the bright afternoon sun, a sequence that conveys Bond’s difficulty not only in handling the archaic weapon but in clearly seeing his target (Goodykoontz and Jacobs; Mendes; Wight).
Finally, the climax of the film begins at Bond’s foreboding childhood home in the Scottish Highlands, the aptly named Skyfall. As Bond and M begin to lay out their strategy to defeat the approaching Silva, they meet longtime family friend and gamekeeper Kincade, played by Albert Finney, who reveals that the only remaining weapons at their disposal are a shotgun and a knife, declaring, “sometimes, the old ways are the best ways” (Mendes). This scene sums up the theme of the film: Bond’s age has gone from being a detriment to an asset, as he uses old-school improvisation to build an arsenal of crude weapons meant to take Silva and his crew by surprise. The English bulldog figurine bequeathed to Bond upon M’s death is symbolic of the fortitude of both England and Bond, imploring him to press on in her absence (Mendes).
While this film has all the trappings of a dyed-in-the-wool action-thriller, it does not overuse the typical elements of that genre for action’s sake; there isn’t even a car chase in the entire movie, which is just fine. The Bond Formula, as it has been called (cars, gadgets, girls, and scenery), was used sparingly and intelligently. The car, in this case a 1965 Aston Martin DB5 originally seen in Goldfinger, is important not only because it provides an inconspicuous getaway for Bond and M, but because it represents something Bond has so little of in his life: emotional attachment. When Silva orders a helicopter to destroy the car at the Skyfall estate, the close-in shot shows the sheer anger on Bond’s face as he realizes his beloved car has been destroyed. The gadgets issued to Bond in this film are sparse and support the overarching theme of the movie: MI6 has entered a “brave new world,” one without exploding pens, as Q quipped. Instead, Bond must rely on himself to get the job done, which plays to the theme of resurrection that pervades the story (Goodykoontz and Jacobs; Hamilton; Mendes).
The story of course takes Bond to several exotic locales, such as Turkey, Shanghai, and Macau; however, in contrast to prior films, they are not shoved in just to give him somewhere else to go. Each location has a purpose in developing the story as Bond closes in on Silva. Turkey starts the film off, with Bond chasing henchman Patrice, played by Ola Rapace, who is in possession of the stolen NATO disk. Shanghai is where Bond tracks Patrice, where we first meet the film’s tragic Bond Girl Severine, and where Bond finds the casino chip included in the gun case as payment for the hit on the art dealer. In Macau, Bond follows the trail to the casino, where he locates Severine and convinces her to take him to his ultimate target, the former agent-turned-terrorist Silva. Perhaps no location is more critical to the film, however, than England, where much of the principal photography was shot. It is England that represents the ultimate goal of Bond’s resurrection.
Another departure from prior installments is the absence of salacious-sounding names for Bond’s girls. Names like Pussy Galore, Honey Rider, and Holly Goodhead are nowhere to be found, which points to the grittier, more tangibly realistic Bond featured in this film. Even with all of these diversions from the tried-and-true “formula,” the film still works (Neuendorf, et al. 747-748).
This film is visually stunning. Gone are the choppy scene cuts of prior films, where we would see Bond hang gliding over the Swiss Alps in one scene and standing on a beach in Jamaica in the next, and gone are the frenetic, off-kilter, Bourne-style camera angles so prevalent in Quantum of Solace. Every scene in this film is introduced with a smooth transition and serves a specific purpose in developing the story. We follow Bond as he follows the clues, and we are kept in the dark until the very end. The mise-en-scène, the composition of all visual elements on screen, is perfection, with several impactful scenes standing out. When the bodies of the five assassinated agents are returned to England to lie in state, we see M standing at the end of the row of flag-draped caskets in a stark white room. She looks rather small, which emphasizes her diminishing role as well as the fact that she is overwhelmed without her key agent by her side. Thomas Newman’s score for this scene is brooding and lush; appropriately titled “Mother,” it enhances the utter sadness of the moment by enveloping the audience in both her pain and her growing resolve to capture whoever is responsible for this horrific act of terrorism against her beloved MI6. Two additional scenes are impeccably staged and shot. In the first, Bond stands on the jitney boat crossing the water toward the casino in Macau; his stance signifies the return of his self-confidence, the dark-blue crushed-velvet tuxedo chosen by costumer Jany Temime exudes the Bond style we have come to expect, and the low, candle-lit lighting gives an aura of mystery and sheer sex appeal. The second features Bond and Severine standing on the bow of Silva’s yacht as they approach his island hideout; Bond’s stance again exudes self-confidence and signals that he is back in full form.
Again, Newman’s sweeping orchestration in this scene, entitled “The Chimera” (the name of the yacht), increases the tension and hints at the dramatic turn of events that lies ahead (Goodykoontz and Jacobs; Mendes; “Skyfall“).
The Bond character is, among other elements, very much defined by his sense of style. Always impeccably dressed for the occasion, his appearance is one of subtle perfection. Costumer Jany Temime continues the tradition by dressing Craig in Tom Ford suits, Berlioni Italian shoes, and Omega watches, a cinematic element that is not lost on the viewing public. Likely more than in any other film franchise, Bond’s societal impact is firmly rooted in product sales and marketing. Craig’s first film, Casino Royale (2006), introduced Heineken, much to the horror of Bond purists, who argued their favorite spy would never lower himself to drink such swill. Undeterred, the producers continued the trend through the next two films, adding Gordon’s Gin as well as the aforementioned suits, shoes, and watches. There are websites dedicated solely to helping the “average Joe” dress the part, that is, if he can afford it (Mendes).
Daniel Craig solidified his place as the iconic spy in this installment of the franchise. His human imperfections, frailty, and vulnerability shine through and enhance the dramatic power of the character. The struggle for survival is neither slapstick nor accidental, but gritty and tangible, made real by the exceptional photography, well-paced character development, and sweeping musical orchestration. You don’t need to be a “Bond fan” to enjoy and appreciate this film, which in total makes it one of, if not the, best Bond films ever produced. Even alongside Spectre (2015), which was also helmed by Mendes and is an outstanding film in its own right, Skyfall is considered by many to be Mendes’ and Craig’s best collaborative work.
Filmmakers are of course trying to entertain, but they are also trying to convey a message through their storytelling. That message ideally develops over the course of the film, and the ability to pick up on the breadcrumbs being left for the viewer enhances the overall enjoyment of the movie-watching experience. That said, not every movie is what you would consider “deep” (think 1994’s cult favorite Dumb and Dumber), and there is the potential to over-analyze, which can interfere with the viewer’s ability to “suspend disbelief.” Movies certainly have a connection with society, whether cultural, social, or political. They draw upon real-world events and norms to spoof, exaggerate, or even teach the audience about what is important to the filmmakers, who are, in turn, creating a reflection of what they believe is important to the viewer. They may not always get it right, but aren’t we glad they continue to try?
Edward Lindenhofen is in his third year at Ashford University, pursuing his Bachelor’s Degree in Business Administration. Ed is a tall ship sailor, and part-time high school marching band instructor, in addition to his role as a Vice President at JPMorgan Chase, where he has worked for the past 16 years. Ed is a member of Sigma Beta Delta, Alpha Sigma Lambda, Golden Key International Honour Society and Salute Veterans National Honor Society.
By Nicholas Clarkson
Fear and ignorance are brother and sister in a family of good intentions. When people end up consumed by these siblings, they always seem to ask for the same thing. In the case of domestic surveillance, the people cry for such clandestine programs to be shut down, while ignoring the larger picture. One would ask: do we lose sight because we choose to see only that portion of the picture that can be easily understood – the impact of such programs on our own lives – while ignoring the complex reality? Using the ethical theories of deontology and utilitarianism, I will argue that domestic surveillance is a necessary discomfort. Even though domestic surveillance has the potential to invade our privacy, with the proper oversight it is necessary to protect this country.
The information age, with its ability to send and access information across multiple platforms including social media, has opened up more ways than ever before for an individual or small group of people to be as destructive as an army using only minimal resources. This has been referred to as “netwar,” in which dispersed organizations, small groups, and individuals communicate, coordinate, and conduct campaigns, often without a central command, using technologies attuned to the information age. The netwar concept describes both the structure of such a group and its use of technological networks; the information age has opened the door for these strategies to evolve on a new platform. The perfect example of a group utilizing this new medium effectively is ISIS. In the past, a group such as ISIS would have had to maintain a vast infrastructure and operate within limited distances, but today’s technology affords it a virtual infrastructure and allows it to work from anywhere in the world (Arquilla & Ronfeldt 6).
Moreover, this ability to use technology goes beyond rogue groups; it applies as well to governments and clandestine organizations, whose greater resources allow them to exploit information for their own purposes. David Gewirtz reports that all the allies spy on each other: the French broke into diplomats’ rooms, Israel tried to infiltrate the Pentagon, and more. Even where such clandestine actions cannot be corroborated, history has shown that governments constantly try to learn each other’s secrets by any quiet means, and the technologies available today only further those means. So the question remains: how do we protect ourselves from this new form of warfare and its potential for damage on a limitless scale?
The purpose of domestic surveillance is to collect, process, and store data on US citizens for the good of the nation. The argument against it is that people do not want their privacy invaded and question its legality. The argument for it is that it protects US citizens from potential and realized threats that exist within our own country. Our country, like many others, has always had some form of surveillance over its own citizens for the very purpose of protecting and securing its domain. After the terrorist attack on September 11, 2001, however, things changed. This was the first time the country had suffered an attack of such magnitude from an external aggressor, and the shock of the attack and the aftermath of fear that it brought allowed changes to happen quickly. Just weeks after the attack, the Patriot Act was passed on October 26, 2001, and later that year the NSA began data mining. Arguments over privacy would come and go over the next twelve years; with Snowden’s revelations in 2013, however, when he leaked government documents showing the extent of domestic surveillance, the privacy argument moved to the forefront (“Domestic Surveillance Directorate”; Savage, et al.).
Opponents of domestic surveillance have good reason to be concerned, because it sets a precedent allowing further invasion into an individual’s private life. The argument they mount revolves around a person’s right to privacy; however, neither the law nor the Constitution explicitly states that right. With the Katz case’s rejection of the older doctrine, though, a precedent was set that opened the door to a different interpretation of the Fourth Amendment, one which in that case protected the privacy of the defendant and which has invited further redefinition since. The resulting reasonable expectation of privacy test, which has its origins in Justice Harlan’s concurrence, asks whether a person has exhibited an expectation of privacy and whether society is prepared to recognize that expectation. This test allows fear to drive justice, which will only push the balance of securing the American citizen into a more dangerous place. What we need is accountability that involves more of the government, as the Snowden incident showed. Much of the oversight that was in place lay dormant and now needs to be put into long-term effect, involving the courts and possibly Congress. Granted, surveillance programs did not operate with full public consent, and yet their need is still apparent. Their continued operation requires more public involvement while still accomplishing their goals (Vagle 125-26; Setty 101).
We increasingly find ourselves in a position where effective law enforcement relies on technological surveillance. There simply are not enough officers and agents to protect American civilians with their eyes and hands alone. As Justice Jackson pointed out in Johnson v. United States, law enforcement is a competitive enterprise in which government agents will seek any strategic advantage available to them. Their competition is the continually advancing world of crime, in both its technology and its organization. Like the proverbial mousetrap, we continually need a bigger one. The question is how to do this without becoming a tyrannical surveillance state (Grey & Citron).
The technologies being used for surveillance need a balance between security and privacy. Law enforcement is using tools like drones, GPS tracking, and data aggregation. Data aggregation is a complex topic because it involves communication companies like Verizon and AT&T, internet companies like Google and Facebook, and street video and picture cameras. As David Grey and Danielle Citron point out in their article “The Right to Quantitative Privacy,” the critical goal will be to tailor an approach that satisfies Fourth Amendment standards while bringing a clear understanding of both the law enforcement and privacy interests at stake (102). Where law enforcement abuses certain technologies, action must be taken; on the other hand, law enforcement must be allowed to utilize enough technology to achieve the level of security demanded by the public.
In the ethical theory of deontology, one looks at the reason behind an act and the rule by which one chooses to act. This ethical theory does not deny that there are consequences or argue that consequences are unimportant; rather, it insists that consequences should not play a role in evaluating the morality of the act. Deontology derives its name from the word ‘deon,’ meaning duty, and is therefore described as a duty-based ethics. When evaluating the morality of an action, the deontologist considers whether the action can be willed as a universal law. For example, as one ethicist points out in discussing the ethics of drones at an Oxford Union debate, “When you acknowledge or maintain that a particular form of warfare – a new form of warfare – is legitimate, you can’t just maintain that for yourself. You have to accept that it would be legitimate in the hands of your opponents or legitimate in the hands of any country that was engaged” (Waldron). Similarly, when evaluating the ethics of domestic surveillance, we have a duty to consider whether such programs can be willed as a universal law. Where one individual might find comfort in the numerous surveillance technologies securing the country, many others enjoy and even need their privacy; for these people, the ability to keep secrets and selectively reveal them is a source of power important to their autonomy. Even this brief analysis shows that, because it cannot be willed as a universal law, domestic surveillance would not be judged ethical by the theory of deontology (Mosser 6.1; Vagle 8).
In contrast to deontology, with its emphasis on reason and universalizability, utilitarianism is a consequentialist theory: it determines the morality of an action by evaluating the consequences the action will have on the majority. For a utilitarian, the way to tell whether an act is right or wrong is to look at its results or consequences. Utilitarianism argues that, given a set of choices, we should choose the act that produces the best result for the greatest number affected. Simply put, the act that produces the greater good for the greatest number is the right choice. For example, as is dramatized in the film Outbreak, where “extreme measures are taken to contain an epidemic of a deadly airborne virus,” if an incurable sickness afflicted a small village, a utilitarian would destroy the village, wiping out all life there, in order to stop the spread of a disease that could otherwise reach the surrounding area or beyond, potentially killing many more (Mosser 6.1; Outbreak).
In considering the ethics of domestic surveillance through the lens of utilitarianism, the consequence that matters is the security of everyone. Statistically, before the Snowden leak, polls showed that 59% wanted reform and 63% wanted more oversight. This suggests that, as much as people may dislike domestic surveillance technologies and tactics, they understand why they are necessary. With the onslaught of the ISIS regime, which is growing almost exponentially, such programs are necessary to protect the whole and to serve the greater good, even if that means many experience real or psychological discomfort due to their presence. Therefore, the continued use of domestic surveillance in America is a necessary discomfort (Jaycox).
One way to consider the necessity of domestic surveillance is to see it as a kind of insurance policy. In American society, we are for the most part dependent on insurance. We use this safety net to protect us from all sorts of potential problems that may or may not ever happen in a lifetime, ranging across all aspects of life, from prescription medicine to a home collapsing in an earthquake. This is one value American citizens believe to be necessary across the board: no one likes to pay for it, but everyone agrees it is important in some form in his or her life. September 11, 2001 made America reinvest in the government to provide such insurance, a necessary insurance that brings with it a discomfort the country did not want to acknowledge and yet signed off on anyway. It is human nature that, once a threat grows old, people forget and begin to throw away the things that protect them. Domestic surveillance may have grown out of control, and yet the insurance it provides is no less necessary. Those who would do America harm are counting on this tendency to open up our defenses in a growing technological landscape so that, just as on September 11, 2001, they can devastate us once again.
This is not to say that the program as it stands does not need improvement. As stated before, utilitarianism shows that maintaining domestic surveillance programs offers the greatest good to the greatest number, but proper oversight is needed. That oversight, however, must not cater to the individual at the expense of the whole; the focus needs to remain on the whole. Both short-term and long-term protection of our citizens requires some sacrifice and discomfort, both of which are justified by the greater good of having a system in place that protects the most people and averts threats.
Domestic surveillance, even if it is not staving off a terrorist attack at this very moment, is helping to avert those who would try. No one thought we could be attacked in the egregious way we were on September 11, 2001. Likewise, no one thought a single terrorist group could amass such support until ISIS. The future is proving to be a place where the small can stand up to the big and even become big themselves, and that future becomes more accessible day by day. If we are not ready to fight and protect against these new technologies, then we will suffer for our own hubris.
Nicholas Clarkson is in his third year at Ashford University pursuing a Bachelor’s Degree in Business Administration. Nicholas grew up in Portage, MI where he enjoyed the opportunity to express himself in various creative outlets including poetry and music. Having enjoyed those opportunities, Nicholas hopes to combine his background and current interests by focusing in Project Management and passing on his creative passion for words and life to others.