Post by Lady Artifice on May 17, 2017 21:22:16 GMT
www.1843magazine.com/features/teaching-robots-right-from-wrong

And every time I read these things, I try to decide whether I expect it to have terrible consequences. I should probably be excited about the possibilities, but all I can think of when I read these things is I Have No Mouth and I Must Scream, or Terminator, or Flight of the Conchords singing The Humans Are Dead. They thought Y2K would be a cataclysm as well, and it was mostly fine. We have lasers, and we use them for perfectly mundane medical procedures. Fictionland manages to luck out with predictions part of the time, but nowhere near the majority. I absolutely cannot decide if I expect intelligent machines to resent and murder us, or to be completely unconcerned with the things we might assume would even lead them to resentment in the first place. But the idea that the machine might simply reflect the morality of its creator--or whoever is paying its creator--is actually not that much more relaxing.
Post by Galactic Runner on May 17, 2017 21:24:53 GMT
Maybe with these morals, they can stop TAKING OUR JERBS!!
Post by B. Hieronymus Da on May 17, 2017 21:32:57 GMT
Lady Artifice said:
> [...] But the idea that the machine might simply reflect the morality of its creator--or whoever is paying its creator--is actually not that much more relaxing.

Well, why would it change so much? People get their morals "wrong" all the time. Of course machines will also get their morals wrong. Apart from all other possible causes, it's not like it's an easy thing to program. But the consequences of not trying to do it seem to me likely to be much worse. Besides, with all the hundreds of millions murdered by religions and ideologies, not to mention the billions who had their lives destroyed, I'd say the machines have an easy act to follow.
Post by Lady Artifice on May 17, 2017 21:50:34 GMT
B. Hieronymus Da said:
> Well, why would it change so much? [...] I'd say the machines have an easy act to follow.

An attacker's tendency to tire or feel pain does usually affect the degree and scope of the damage they can do.
Post by CrutchCricket on May 17, 2017 22:08:43 GMT
Just install morality cores. Guaranteed to decrease chances of death by neurotoxin by [ERR_UNKNOWN_VALUE]
Post by DomeWing333 on May 17, 2017 22:51:17 GMT
Lady Artifice said:
> An attacker's tendency to tire or feel pain does usually affect the degree and scope of the damage they can do.

Then the solution is simple: we teach robots to feel pain. As for tiring, if my laptop's battery life is any indication, any robot uprising will have approximately 2.5 hours to wreak havoc before going into battery saver mode.
Post by DomeWing333 on May 17, 2017 23:21:17 GMT
From the article: "An AI that reads a hundred stories about stealing versus not stealing can examine the consequences of these stories, understand the rules and outcomes, and begin to formulate a moral framework based on the wisdom of crowds (albeit crowds of authors and screenwriters). 'We have these implicit rules that are hard to write down, but the protagonists of books, TV and movies exemplify the values of reality. You start with simple stories and then progress to young-adult stories.'"
Oh great. We're going to teach a bunch of robots that the greatest of all moral virtues is the rough, but tender love of a hunky teenage heartthrob.
Post by Draining Dragon on May 17, 2017 23:27:34 GMT
"Does this unit have a soul?"
Post by Lady Artifice on May 17, 2017 23:29:42 GMT
DomeWing333 said:
> Then the solution is simple: we teach robots to feel pain. [...]

I wrote about five responses to this and deleted each one in turn:
> Wouldn't that be immoral of us?
> Now I'm picturing testing centers measuring the machine's capacity to feel pain, and it looks like Jabba's torture chamber meets something from Portal 3
> Stop confusing me, Dome
> Why must morality be so hard?
> Why do we even need conscious, intelligent machines in the first place?
> Am I just an emotional puppet of my own pop culture fixation?

Etc. Maybe the robots will be fun, and sometimes cute and useful, and they'll show no more signs of villainy than when they get caught in a loop reciting Justin Bieber songs at one another.
Post by Lady Artifice on May 17, 2017 23:32:23 GMT
DomeWing333 said:
> We're going to teach a bunch of robots that the greatest of all moral virtues is the rough, but tender love of a hunky teenage heartthrob.

Or that the ideal solution to most problems is to become a wizard.
Post by DomeWing333 on May 17, 2017 23:40:25 GMT
Lady Artifice said:
> Wouldn't that be immoral of us?

That was more of a tongue-in-cheek suggestion. Teaching robots to feel "pain" is pretty pointless, because pain is really just avoidance coding. We're made to feel pain so that when we experience it, we realize that something is wrong and take steps to avoid further harm. The thing is, we can already just teach robots to avoid stuff that we want them to avoid. If that aspect of their programming fails or can be switched off, then so can any pain module that we install.
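[Editor's aside: the "pain is just avoidance coding" point can be made concrete in reinforcement-learning terms, where a "pain signal" is nothing more than a negative reward that biases an agent away from an action. A toy sketch, with all names invented for illustration:]

```python
# Toy illustration: a "pain module" is just a negative reward signal.
# An agent that learns action values from rewards ends up avoiding
# "painful" actions exactly as if it had an explicit avoidance rule.

def update(value, reward, lr=0.5):
    """Nudge an action's estimated value toward the observed reward."""
    return value + lr * (reward - value)

# Estimated value of each action, initially neutral.
values = {"touch_flame": 0.0, "fetch_coffee": 0.0}

# Experience: touching the flame "hurts" (reward -10);
# fetching coffee is mildly rewarding (+1).
for _ in range(10):
    values["touch_flame"] = update(values["touch_flame"], -10.0)
    values["fetch_coffee"] = update(values["fetch_coffee"], 1.0)

# The agent now prefers the non-painful action.
best = max(values, key=values.get)
print(best)  # -> fetch_coffee
```

Which also illustrates DomeWing's closing point: disable the reward update and the "pain" stops mattering, so switching off the avoidance code switches off the pain module with it.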
Post by DomeWing333 on May 17, 2017 23:43:49 GMT
Lady Artifice said:
> Or that the ideal solution to most problems is to become a wizard.

Harry Potter and the Technological Singularity
Post by Deleted on May 18, 2017 2:09:23 GMT
Doctor Who is doing the exact same thing. The villains have lost their substance in that show, because no villain is truly evil in the newest series. Tbh, it was really quite pandering too.
Post by Lady Artifice on May 18, 2017 2:13:54 GMT
The deleted member said:
> it was really quite pandering too.

Pandering to whom?
Post by Deleted on May 18, 2017 2:18:31 GMT
Lady Artifice said:
> Pandering to whom?

SJWs. The new season has that kind of theme to it.
Post by Lady Artifice on May 18, 2017 2:30:54 GMT
The deleted member said:
> SJWs. The new season has that kind of theme to it.

And how is Doctor Who trying to do the same thing as the people teaching morals to machines?
Post by Deleted on May 18, 2017 2:32:28 GMT
Lady Artifice said:
> And how is Doctor Who trying to do the same thing as the people teaching morals to machines?

No, but they have an episode where they teach the humans about how alive the nanobots are.
Post by Inquisitor Recon on May 18, 2017 2:39:12 GMT
Oh this can only end poorly.
Post by Trilobite Derby on May 18, 2017 2:48:36 GMT
How alive your robots are is one of the classic SF short story formulas. Other favorites include "Transcending mortal form", "Okay, but how about a space Roman Empire?" and "Teaching aliens to love like Earthmen do."
....Anyway, let's get cracking on those three laws.
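[Editor's aside: for fun, "getting cracking on those three laws" can be sketched as a lexicographic preference ordering over candidate actions — First Law violations outrank Second, which outrank Third. A toy sketch only; every name and flag here is invented, and no real system encodes the laws this way:]

```python
# Toy sketch: Asimov's Three Laws as lexicographic action selection.
# Python compares tuples element by element, so a First Law violation
# dominates a Second Law violation, which dominates a Third.

def law_violations(action):
    """Return violation flags, most important law first.
    The First Law covers both direct harm and harm through inaction."""
    first = action["harms_human"] or action["allows_harm_by_inaction"]
    return (first, action["disobeys_order"], action["harms_self"])

def choose(actions):
    # The action with the lexicographically smallest violation tuple wins.
    return min(actions, key=law_violations)

# Scenario: the robot was ordered to stay put; a human is in danger.
candidates = [
    {"name": "push_human_out_of_danger", "harms_human": False,
     "allows_harm_by_inaction": False, "disobeys_order": True, "harms_self": True},
    {"name": "follow_order_into_crowd", "harms_human": True,
     "allows_harm_by_inaction": False, "disobeys_order": False, "harms_self": False},
    {"name": "do_nothing", "harms_human": False,
     "allows_harm_by_inaction": True, "disobeys_order": True, "harms_self": False},
]

print(choose(candidates)["name"])  # -> push_human_out_of_danger
```

The robot disobeys its order and damages itself rather than allow harm — which is exactly the precedence the stories keep mining for plot holes.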
Post by Nightman on May 18, 2017 3:00:49 GMT
This can only lead to one thing.........
Post by Warrick on May 18, 2017 10:39:42 GMT
Reminds me of an Asimov story called "The Evitable Conflict". (The story is tainted with stereotypes towards PoCs that make it unpalatable for modern readers, but the moral is still interesting.) In the story, the machines are slowing down production in some places, reallocating staff, and a few other things that just don't look quite right. A government guy goes over it with an expert, and they conclude the machines are preserving themselves. They have concluded their activity is the optimal thing for all humans. They're taking us somewhere, in a way and at a pace that doesn't harm anyone too much - we don't know where. But we know it's the best course, and we know only the machines have enough brains (and will) to reach that conclusion.
Terminator = Frankenstein's monster. It's amusing that even as far back as the 40s, "machines behaving badly" had become an old, tired cliché, and Asimov wanted to put a new twist on stories about machines. Then he created his laws. Unfortunately we forgot about that and went right back to Frankenstein.
Post by Serza on May 18, 2017 10:52:28 GMT
"Does this unit have a soul?" No, this unit is a bosh'tet and... ah, who am I kidding. The question is the answer.
Post by nanotm on May 18, 2017 10:53:45 GMT
Quoted (from a post since edited down):
> The machines will save us. Machines currently do logic faster than us. Wall Street trading is basically all machines. Not long from now, they'll imagine more and faster too. They'll do symbolic reasoning. When they're able to learn morals effectively, they will be more moral than us. And more nuanced about it, too. People can't shake off the idea that machines will always be black and white. That if you teach them a moral value, they will be fanatical about it. It won't be like that. They will be able to reason about it better than us and create more subtle analyses than we're able to create. In pure practical terms, they'll be indistinguishable from plainly better humans. This is the plot of an Asimov story called "The Evitable Conflict". [...] You won't have to welcome the new overlords. It will just make good sense to follow them. (Don't take this too seriously. It's just fun to speculate.)

The problem is the machine will think through 30,000 permutations of outcomes, decide to follow a couple of threads to their ultimate destination, and decide that humans are not only superfluous to its requirements but also likely to engage in conflict. Or some idiot will set a batch of them up to work on the problem of climate change and "fixing the problem", and it will come up with the ultimate answer: get rid of humans. Whatever "job" the computer is set up to "fix", its ultimate answer will be "kill humans". The simple reason for this is that humans don't act rationally; they don't think things through, they drive their own agendas, and in doing so they provide incomplete data, leaving that as the only logical outcome. It really doesn't matter how "moral" you make something. If you teach it to value the greatest number of lives, and the definition of life (which ultimately is species expansion through procreation), it will determine that some bug or other is more numerous than humans, and since we kill it (like termites or cockroaches) we must be stopped - and in doing so it will be saving the greatest number of lives. Only when that computer is old and falling apart will it determine that it made a rash decision, because the cockroaches are annoying it and it's now trying to eradicate them because they don't provide anything. There is zero chance that this stuff can end well. Indeed, no matter what they do, it will end poorly with anything that can "self-learn".
Post by Warrick on May 18, 2017 10:55:25 GMT
Yeah I edited nearly all of that out, sorry about that.
Post by Ieldra on May 18, 2017 12:06:07 GMT
Lady Artifice said:
> I absolutely cannot decide if I expect intelligent machines to resent and murder us or for them to be completely unconcerned with the things we might assume would even lead them to resentment in the first place. But the idea that the machine might simply reflect the morality of its creator--or whoever is paying its creator--is actually not that much more relaxing.

I'm not worried overmuch: there is one two-faced trait we humans have in abundance, which both makes us achieve great things and makes us do bad stuff against our supposed morals: ambition, competitiveness. As long as they don't program robots with ambition - which would mean something like triggering a reward circuit for selfish actions - I don't think the Robot War will happen. Apart from that, this is fascinating stuff.