
Alex Stearns Peters
12/3/2017
English 121

The term "robotics" was first coined by the legendary science fiction writer Isaac Asimov in his 1941 short story "Liar!". Asimov was one of the first to see the vast potential of up-and-coming technologies that had yet to gain public approval or interest in his time. Since then, robotics has been on a startling upward trajectory that has placed it at the forefront of cutting-edge technologies.

While robotics has brought many benefits to modern-day humanity, it is also the subject of endless heated debates. Humanity is on the verge of a robot revolution, and while many see it as a gateway to progress not seen since the Renaissance, it could just as easily result in the end of humanity. With the ever-present threat of accidentally creating humanity's unfeeling successors, it is only natural to question how much, if at all, we should allow ourselves to become reliant on our technologies. "As machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values," said Stuart Russell, a professor of computer science at UC Berkeley and co-author of the standard university textbook on artificial intelligence. Russell strongly believes that the survival of humanity may well depend on instilling morals in our AIs, and that doing so could be the first step toward ensuring a peaceful and safe relationship between people and robots, especially in simpler settings. "A domestic robot, for example, will have to know that you value your cat," he says, "and that the cat is not something that can be put in the oven for dinner just because the fridge is empty." This raises the obvious question: how on Earth do we convince these potentially godlike beings to conform to a system of values that benefits us? While experts from several fields around the world attempt to work through the ever-growing list of problems to create more obedient robots, others caution that it could be a double-edged sword.

While it may lead to machines that are safer and ultimately better, it may also introduce an avalanche of problems regarding the rights of the intelligences that we have created. The notion that human/robot relations might prove tricky is far from a new one. In his 1950 short story collection I, Robot, Isaac Asimov introduced his Three Laws of Robotics, designed to be a basic set of laws that all robots must follow to ensure the safety of humans: 1) A robot cannot harm human beings; 2) A robot must obey orders given to it unless doing so conflicts with the first law; and 3) A robot must protect its own existence unless doing so conflicts with either of the first two laws. Asimov's robots adhere strictly to the laws and yet, limited by their rigid robot brains, become trapped in unresolvable moral dilemmas. In one story, a robot lies to a woman, telling her that a certain man loves her when he does not, because the truth might hurt her feelings, which the robot interprets as a violation of the first law. To avoid breaking her heart, the robot breaks her trust, traumatizing her and ultimately violating the first law anyway. The conundrum ultimately drives the robot insane.

Although fictional literature, Asimov's Laws have remained a central entry point for serious discussions about the nature of morality in robots, acting as a reminder that even clear, well-defined rules may fail when interpreted by individual robots on a case-by-case basis. Accelerating advances in AI technology have recently spurred increased interest in the question of how newly intelligent robots might navigate our world. With a future of highly intelligent AI seemingly close at hand, robot morality has emerged as a growing field of discussion, attracting scholars from ethics, philosophy, human rights, law, psychology, and theology. There has also been considerable public concern, as many noteworthy minds in the scientific and robotics communities have cautioned that the rise of machines could well mean the end of the world.

Public concern has centered around "the singularity," the theoretical moment when machine intelligence surpasses our own. Such machines could defy human control, the argument goes, and, lacking morality, could use their superior intellects to extinguish humanity. Ideally, robots with human-level intelligence will need human-level morality as a check against bad behavior. However, as Russell's example of the cat-cooking domestic robot illustrates, machines would not necessarily need to be brilliant to cause trouble. In the near term we are likely to interact with somewhat simpler machines, and those too, argues Colin Allen, will benefit from moral sensitivity.

Professor Allen teaches cognitive science and history and philosophy of science at Indiana University Bloomington. "The immediate issue," he says, "is not perfectly replicating human morality, but rather making machines that are more sensitive to ethically important aspects of what they're doing." And it's not merely a matter of limiting bad robot behavior. Ethical sensitivity, Allen says, could make robots better, more effective tools. For example, imagine we programmed an automated car to never break the speed limit. "That might seem like a good idea," he says, "until you're in the back seat bleeding to death. You might be shouting, 'Bloody well break the speed limit!' but the car responds, 'Sorry, I can't do that.' We might want the car to break the rules if something worse will happen if it doesn't. We want machines to be more flexible." As machines get smarter and more autonomous, Allen and Russell agree that they will require increasingly sophisticated moral capabilities.
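To make Allen's point concrete, here is a minimal, purely illustrative sketch of the difference between a rigid speed-limit rule and one that can bend in an emergency. The function and parameter names are hypothetical and do not come from any real autonomous-driving system.

```python
# Illustrative sketch only: a rigid rule ("never exceed the limit") versus
# a more flexible one that tolerates a bounded violation in an emergency.

def rigid_speed_decision(requested_speed, speed_limit):
    """Hard-coded rule: never break the speed limit, no exceptions."""
    return min(requested_speed, speed_limit)

def flexible_speed_decision(requested_speed, speed_limit, emergency=False):
    """Allow a bounded violation when something worse would happen otherwise,
    e.g. a passenger bleeding to death in the back seat."""
    if emergency:
        return min(requested_speed, speed_limit * 1.3)
    return min(requested_speed, speed_limit)

print(rigid_speed_decision(130, 100))                     # 100: "Sorry, I can't do that."
print(flexible_speed_decision(130, 100, emergency=True))  # 130: the rule bends for a greater good
```

The point of the sketch is only that the flexible version encodes a judgment about competing harms rather than treating a single rule as absolute.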

The ultimate goal, Russell says, is to develop robots "that extend our will and our capability to realize whatever it is we dream." But before machines can support the realization of our dreams, they must be able to understand our values, or at least act in accordance with them. Which brings us to the first colossal hurdle: there is no agreed-upon universal set of human morals. Morality is culturally specific, continually evolving, and eternally debated.

If robots are to live by an ethical code, where will it come from? What will it consist of? Who decides? Leaving those mind-bending questions for philosophers and ethicists, roboticists must wrangle with an exceedingly complex challenge of their own: how to put human morals into the mind of a machine. There are a few ways to tackle the problem, says Allen, co-author of the book Moral Machines: Teaching Robots Right From Wrong. The most direct method is to program explicit rules for behavior into the robot's software (the top-down approach). The rules could be concrete, such as the Ten Commandments or Asimov's Three Laws of Robotics, or they could be more theoretical, like Kant's categorical imperative or utilitarian ethics. What is important is that the machine is given hard-coded guidelines upon which to base its decision-making.
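As a rough illustration of what such "hard-coded guidelines" might look like, here is a minimal sketch of a top-down rule checker that tests a proposed action against Asimov-style laws in priority order. All names are hypothetical; this does not reflect any real robotics software.

```python
# Hypothetical sketch of a "top-down" approach: a proposed action is checked
# against hard-coded, prioritized rules in the spirit of Asimov's Three Laws.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

# Rules are checked in priority order; violating any one rejects the action.
RULES = [
    ("First Law",  lambda a: not a.harms_human),
    ("Second Law", lambda a: not a.disobeys_order),
    ("Third Law",  lambda a: not a.endangers_self),
]

def permitted(action: Action) -> bool:
    """Return True only if the action violates none of the hard-coded rules."""
    for name, rule_ok in RULES:
        if not rule_ok(action):
            print(f"Rejected '{action.description}': violates the {name}.")
            return False
    return True

permitted(Action("cook the cat for dinner"))                           # allowed: no human is harmed
permitted(Action("ignore the owner's request", disobeys_order=True))   # rejected: Second Law
```

Notice that the cat-cooking action slips straight through: rules this crude say nothing about what the owner values, which is exactly the gap Russell's domestic-robot example points to.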
