"marble cutting machine"カテゴリーの記事一覧
-
×
[PR]上記の広告は3ヶ月以上新規記事投稿のないブログに表示されています。新しい記事を書く事で広告が消えます。
-
Note - Gödel's Theorems

The work of an important, though eccentric, Czech-Austrian mathematical logician, Kurt Gödel (1906-1978), dealt with the completeness and consistency of logical systems. It was later shown that many functions (even in number theory itself) are not recursive, meaning that they cannot be computed by a Turing Machine. Despite its hardware-like appearance (a read/write head which scans a tape inscribed with ones and zeroes, and so on), the Turing Machine is really an abstract, mathematical construct. Functions whose values are calculated by AIDED humans with the contribution of a computer are still recursive.

Is "injury" (in the First Law) limited only to physical injury (the elimination of the physical continuity of human tissues or of the normal functioning of the human body)? Should it encompass the no less serious mental, verbal and social injuries (after all, they are all known to have physical side effects which are, at times, no less severe than direct physical "injuries")? Is an insult an "injury"? What about being grossly impolite, or psychologically abusive? Or offending religious sensitivities, or being politically incorrect - are these injuries? The bulk of human (and, therefore, inhuman) actions actually offend one human being or another, have the potential to do so, or seem to do so. Should we, as humans, rely on robots or on their manufacturers (however wise, moral and compassionate) to make this selection for us?

Life is about inventing new rules on the fly, as we go and as we encounter new challenges in a kaleidoscopically metamorphosing world. To be properly implemented, and to avoid their interpretation in a potentially dangerous manner, the Laws require that the robots in which they are embedded be equipped with reasonably comprehensive models of the physical universe and of human society. This is ignoring, for discussion's sake, defects in manufacturing or loss of the implanted identification tags.
Should we abide by their judgment as to which injury is the more serious and warrants an intervention? This article deals with some commonsense, basic problems raised by the Laws.

A summary of the Asimov Laws would give us the following "truth table":

A robot must obey human commands, except if:
- obeying them is likely to cause injury to a human, or
- obeying them will let a human be injured.

A robot must protect its own existence, with three exceptions:
- that such self-protection is injurious to a human;
- that such self-protection entails inaction in the face of potential injury to a human;
- that such self-protection results in robot insubordination (failing to obey human instructions).

The system may be complete - but then we are unable to show, using its axioms and inference laws, that it is consistent. In other words, a computational system can either be complete and inconsistent - or consistent and incomplete. By trying to construct a system both complete and consistent, a robotics engineer would run afoul of Gödel's theorem. Some argue against this and say that robots need not be automata in the classical, Church-Turing, sense. The emphasis was on finiteness: a finite number of instructions, a finite number of symbols in each instruction, a finite number of steps to the result. So, we can generalize and say that functions whose values are calculated by an AIDED human could be recursive, depending on the apparatus used and on the lack of ingenuity or insight (the latter being, anyhow, a weak, non-rigorous requirement which cannot be formalized).

A robot will have to be somewhat human to recognize another human being - it takes one to know one, as the saying (rightly) goes. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. And if robots were instructed to maximize overall utility, many borderline cases would be resolved.
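The "truth table" above can be sketched as a toy decision procedure. This is a minimal illustration only: the predicate functions (`causes_injury`, `permits_injury`, `disobeys_human`) are hypothetical stand-ins for judgments that, as argued here, would require comprehensive models of the physical universe and of human society.

```python
# Toy sketch of the summarized Second and Third Laws. The injury predicates
# are invented placeholders - deciding them is the hard, unformalized part.

def may_obey(command, causes_injury, permits_injury):
    """Second Law: obey, unless obeying violates the First Law."""
    if causes_injury(command):    # obeying would injure a human
        return False
    if permits_injury(command):   # obeying would let a human be injured
        return False
    return True

def may_self_protect(action, causes_injury, permits_injury, disobeys_human):
    """Third Law: self-preserve, unless it conflicts with Laws 1 or 2."""
    return not (causes_injury(action)
                or permits_injury(action)
                or disobeys_human(action))

# Example: a command whose (hypothetical) evaluation flags no injury.
print(may_obey("fetch the ladder",
               causes_injury=lambda c: False,
               permits_injury=lambda c: False))  # -> True
```

The structure is trivial; everything interesting is hidden inside the predicates, which is precisely the essay's point.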
The Turing Machine is an automaton designed to implement an effective or mechanical method of solving functions (determining the truth value of propositions). We can say that Turing Machines can do whatever digital computers do - but not that digital computers are, by definition, Turing Machines. Put more simply, it is possible to "prove" the truth value (or the theorem status) of an expression in the propositional calculus, but not in the predicate calculus. Gödel pointed at one such self-destructive paradox in the "Principia Mathematica", ostensibly a comprehensive and self-consistent logical system.

The second solution will prevent the robot from positively identifying humans. This is unthinkable: humans identify other humans because they are human, too. This is called empathy. A human discus thrower or swimmer may easily be classified as "non-human" by a robot - and so might amputees. The Laws are absolutely inadequate in this case. Many have noticed the lack of consistency and, therefore, the inapplicability of these Laws when considered together. Both present additional difficulties.

Consider the James Bond movies. Should a robot refuse to obey human instructions which may result in injury to the instruction-givers? Consider a mountain climber: should a robot refuse to hand him his equipment, lest he fall off a cliff in an unsuccessful bid to reach the peak? Should a robot refuse to obey human commands pertaining to the crossing of busy roads, or to driving (dangerous) sports cars? Which level of risk should trigger robotic refusal, and even prophylactic intervention? At which stage of the interactive man-machine collaboration should it be activated? Should a robot refuse to fetch a ladder or a rope for someone who intends to commit suicide by hanging himself (that's an easy one)?
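The "effective or mechanical method" that a Turing Machine embodies can be made concrete with a toy simulator. This is a minimal sketch assuming the standard textbook formalization (finite transition table, read/write head, tape of symbols), which the essay alludes to but does not spell out; the machine below computes a trivially recursive function, flipping every bit of a binary string.

```python
# Minimal deterministic Turing Machine sketch (no leftward tape growth).
# transitions maps (state, symbol) -> (new_state, write_symbol, move),
# where move is -1 (left), 0 (stay) or +1 (right).

def run_tm(tape, transitions, state="start", blank="_"):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head == len(tape):        # extend the tape on demand
            tape.append(blank)
        tape[head] = write
        head += move
    return "".join(tape).strip(blank)

# Flip every bit; halt on the blank past the end of the input.
NOT_RULES = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm("1011", NOT_RULES))  # -> 0100
```

A finite rule table, a finite alphabet, a finite number of steps to the result: exactly the finiteness the essay emphasizes.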
Should he ignore an instruction to push his master off a cliff (definitely), help him climb the cliff (less assuredly so), drive him to the cliff (maybe so), or help him get into his car in order to drive him to the cliff? The next question pertains to the notion of "injury" (still in the First Law). Or unless all humans are somehow tagged from birth. Moreover, what, exactly, constitutes "inaction"? How can we set inaction apart from failed action or, worse, from an action which failed by design, intentionally? One of the possible solutions is, of course, to introduce gradations, a probability calculus, or a utility calculus.

There are many other types of functions (non-recursive) that can be incorporated in a robot, they remind us. Moreover, the demand that recursive functions be computable by an UNAIDED human seems to restrict possible equivalents.

The movie "I, Robot" is a muddled affair. It relies on shoddy pseudo-science and a general sense of unease that artificial (non-carbon-based) intelligent life forms seem to provoke in us. I, Robot is just another - and relatively inferior - entry in a long line of far better movies, such as "Blade Runner" and "Artificial Intelligence".
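The gradations and utility calculus mentioned above can be illustrated with a small expected-utility sketch. All outcome names, probabilities and utilities here are invented for illustration; assigning them in reality is, again, the unsolved part.

```python
# Sketch of a "utility calculus": instead of binary injury predicates, the
# robot weighs probability-weighted harms and benefits of each action.
# Every number below is a made-up placeholder.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    # hand the climber his equipment: small chance of a fall, large benefit
    "hand equipment": [(0.95, 10.0), (0.05, -100.0)],
    # refuse: no physical risk, but the human's goal is frustrated
    "refuse":         [(1.0, -5.0)],
}

print(choose(actions))  # -> hand equipment
```

On these made-up numbers the mountain-climber case resolves in favor of obeying; shift the fall probability upward and the same calculus flips to refusal, which is how such a scheme would handle the borderline cases.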