To examine ethical dilemmas currently facing Information Technology, I have chosen “the human impact of the use of drones” and Turing’s question “Can machines think?”.

Both of these issues will be examined from the point of view of “What is the right thing to do?”, exploring the meaning of justice and fairness.

Impact on Humanity through the Use of Drones

What is the right thing to do about the use of drones? Drones used to be expensive and solely the province of the military, and for this discussion I will be focusing on their military use. Utilitarian philosophy tends to be used by governments in creating laws and decisions: promote whatever produces happiness, and prevent the pain and suffering of the majority. The legislation or decisions are meant to produce the best outcomes, or consequences, for the greatest number of individuals. Utilitarianism looks at the expected results and consequences of an act to determine whether or not the action is morally permissible (Tavani, 2013). So how does this apply to drones? Act utilitarianism states that

An act, X, is morally permissible if the consequences produced by doing X result in the greatest good for the greatest number of persons affected by act X.

Translated to the military use of drones: the use of drones to deliver a payload of bombs is morally permissible if the consequences produced by dropping bombs via drones result in the greatest good for the greatest number of persons affected by that use.
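The act-utilitarian calculus being applied here can be sketched as a small decision rule. This is a toy formalization only; the utility function and the example scores below are hypothetical, invented purely for illustration:

```python
def is_morally_permissible(act, alternatives, utility):
    """Toy act-utilitarian test: act X is permissible if no alternative
    act yields greater total good for the persons affected. The scoring
    is a hypothetical stand-in; real consequences resist being summed
    into a single number."""
    return all(utility(act) >= utility(alt) for alt in alternatives)

# Hypothetical harm/benefit scores (higher = greater net good for those affected):
scores = {"drone strike": -10, "ground assault": -40, "no action": -25}
options = list(scores)

print(is_morally_permissible("drone strike", options, scores.get))    # True
print(is_morally_permissible("ground assault", options, scores.get))  # False
```

On these invented numbers the calculus permits the drone strike: whichever option is scored as causing the least net harm is the one act utilitarianism selects.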

In reality, it is not necessarily the act of using drones that is in question; it is the military action itself, and whether that action is a just cause. According to Bradley J. Strawser, starting with this question is important (Shane, 2012). For the war on terror currently happening in the Middle East, the answer would be “yes”, as it is about ridding the world of terrorists, and there is always some collateral damage with regard to civilian populations.

Is it any worse than the carpet bombing of the British by the Germans in World War II, or vice versa?

Strawser’s argument is that drones used to go after terrorists are not only ethically “permissible but also might be ethically obligatory, because of their advantages in identifying targets and striking with precision … all the evidence we have so far suggests that drones do better at both identifying the terrorist and avoiding collateral damage than anything else we have” (Shane, 2012). In light of what occurred in World War II, and of other methods of deterring terrorists, this limitation of collateral damage would seem to be the main justification.

[Figure: Predator launching Hellfire missile. Source: Wikipedia: https://en.wikipedia.org/wiki/General_Atomics_MQ-1_Predator]

In the words of Mr Spock, “the needs of the many outweigh the needs of the few” (Meyer, 1982). This line is presented as logic-based, although it can also be seen as utilitarian, and it seems the most applicable quote for the use of drones in military actions.

The fact that drone operators plan strikes by viewing targets hours or days ahead allows the timing to be chosen so that innocents are not nearby. Although there have been incidents where this has not been the case, these say more about the competence of the operator than about the use of drones per se, whether the cause was faulty intelligence or the recklessness sometimes seen on the battlefield, even though the operators are remote.

This brings us to the question of whether the use of drones in warfare is actually lowering the threshold for lethal violence. A similar argument has been made in the video gaming arena about violent games based on battlefield scenarios or first-person shooters. Does the fact that a military drone operator is a distance away from the battlefield endanger the operator’s morality, or that of those who order the strikes in the first place? Daniel R. Brunstetter, a political scientist at the University of California, fears that drones are becoming “a default strategy to be used almost anywhere” (Shane, 2012). This brings their use into conflict with the ideal of a just war.

Turing’s Famous Question

To answer this, we need to look at a few questions about the ethics of machines thinking, and at whether these have an impact on the answer to the question “Can machines think?”.

Firstly, consider Turing’s intent in asking the question “Can machines think?”. The idea behind “The Imitation Game”, as proposed by Turing, was to have a third party (the judge, or interrogator) determine which of two participants was a human and which a machine. The game itself was not meant to prove intelligence, but rather that a computer could imitate a human. Even Turing considered “Can machines think?” a bad question, and replaced it with a description of “The Imitation Game” (Turing, 1950). Turing’s proposal was that any computing machine that regularly fooled a discerning judge in the game would be intelligent beyond a reasonable doubt, even though humans (even of different genders) and computers may think differently.
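The structure of the game can be sketched as a short simulation. Everything named below (the judge heuristic, the canned answers, the single question) is an assumption for illustration, not Turing’s own specification:

```python
import random

def imitation_game(judge, human, machine, rounds=100, seed=1):
    """Toy Imitation Game: each round the judge reads answers from two
    hidden players and guesses which index is the machine. Returns the
    judge's accuracy; a machine that imitates well drives this toward 0.5."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        players = [("human", human), ("machine", machine)]
        rng.shuffle(players)  # hide which player is A and which is B
        answers = [play("Describe a childhood memory.") for _, play in players]
        truth = 0 if players[0][0] == "machine" else 1
        correct += judge(answers) == truth
    return correct / rounds

# Hypothetical players: this machine answers mechanically, so a judge with a
# simple heuristic identifies it every round -- it fails the game outright.
human = lambda q: "Long summers by the sea, mostly."
machine = lambda q: "ERROR: QUERY NOT UNDERSTOOD"
judge = lambda answers: 0 if "ERROR" in answers[0] else 1

print(imitation_game(judge, human, machine))  # 1.0
```

A machine good enough to “regularly fool a discerning judge”, in Turing’s phrase, would pull the judge’s accuracy down to around 0.5, no better than guessing.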

Although there have been major advancements in the AI field, there is still a lot to do. Deep Blue was an expert in chess, just as the Bombe machine was an expert at cracking the Enigma code. Watson came close with its appearance on Jeopardy!, although it did not pass the Turing test, as its programming was specific to playing that game (Ferrucci et al., 2010; Ferrucci, 2012). This comes down to the fact that there are many questions a human may not be able to answer and vice versa, and to the differing capabilities of humans and machines on any given question. Some in the field believe that Watson was acting in a similar manner to the person in John Searle’s classic “Chinese Room” thought experiment: did Watson truly “understand” the questions, or did it just use its programming to answer them?
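Searle’s point is easy to see in a toy sketch: a responder can follow a rulebook mapping input symbols to output symbols and produce apt replies with zero understanding. The rulebook entries below are invented for illustration:

```python
# Toy "Chinese Room": the operator matches symbol shapes against a rulebook
# and copies out the listed reply, understanding nothing on either side.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: no semantics, only lookup.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```

From outside, the room “answers Chinese”; inside there is only rule-following, which is precisely the worry raised about Watson’s deep question answering.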

So far, computers have not demonstrated the ability to pass the Turing test, although some have come close, and those near misses have prompted philosophers to ask more about what it is to be human.

This brings us to the research of Wendell Wallach and Colin Allen on moral machines. We already have AI entities, or bots, that assist us with organising work schedules, for example, and we interact with them on a daily basis (Siri on Apple’s iPhone and iPad, or Google’s equivalent, “Ask Google”). These AI entities do not achieve full consciousness, and that could be the crux of passing Turing’s test: being fully conscious and choosing to imitate another gender. These AIs have no moral foundation built into them; they are primarily deep question-answering machines.

Autonomous machines that can think for themselves, such as an autonomous car choosing the best way to navigate a traffic jam, in some ways require us to trust them, or more explicitly their programming. As humans we may not be willing to do that, as we will need to create some emotional attachment to them to build that trust, as described by Coeckelbergh in his article “Moral Appearances: Emotions, Robots, and Human Morality” (Coeckelbergh, 2010). Although Turing did not take the emotional aspect into account in his “game”, it does suggest how a computer might imitate a human if morality and emotions were programmed into it.

So is it right and just to have a computer that can think like a human? In my opinion, no, not yet. We as humans have not reached the enlightenment that would allow such a machine to exist among us. We have elements of it now, and only time will tell if this judgement is correct. Even science fiction writers tend not to give their computers, androids and the like an overly humanistic appearance.


References

Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235-241.

Ferrucci, D. A. (2012). Introduction to “This is Watson”. IBM Journal of Research and Development, 56(3, 4), 1-15.

Ferrucci, D. A., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., . . . Welty, C. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3), 59-79.

Mendhak. (2010, October 2). Bombe Machine, Bletchley Park. Retrieved September 2016, from Flickr: https://www.flickr.com/photos/mendhak/5125496254

Meyer, N. (Director). (1982). Star Trek II: The Wrath of Khan [Motion Picture].

Shane, S. (2012, July 14). The Moral Case for Drones. Retrieved September 2016, from The New York Times: http://www.nytimes.com/2012/07/15/sunday-review/the-moral-case-for-drones.html?_r=0

Tavani, H. T. (2013). Ethics and Technology: Controversies, Questions and Strategies for Ethical Computing. Rivier University: John Wiley and Sons.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.