Values are the foundation of a person’s ability to judge between right and wrong. They are the product of our experiences and our worlds, and they would be very difficult to teach. Populations whose morals lean toward contribution and compassion are vulnerable to populations whose morals lean toward selfishness and apathy. Maybe AI will take over before the more destructive population does.
An algorithm of currently available memories — sounds so simple. Oops, then toss in love, hate, etc. Maybe your algorithms just get skewed, your behavior gets skewed, outcomes get very unpredictable, causing more skewed algorithms all around.
Could anything be more complicated than flesh & bones?
When the phone in your pocket is smarter than you, and the important things are backed up in case of failure, are we in for an upgrade or a total rebuild?
“Hey you, I need this giant rock moved up there, in the next 2 minutes.”
“Sorry, this dead guy needs me.”
A hack or a whack? I think mine is still in beta, full of bugs.
An original thought? Except the AI already knew that’s what you were going to think. Programming humans for long-term results will be easy for a computer; that’s the way machines think.
Soon your car salesman will be more like the Walmart greeter: he’ll introduce you to the vehicles, and you’ll discuss your transportation needs and options with the cars themselves.
You will catch yourself saying, “She’s not the fanciest model, but I like the way she thinks!”
I choose to [anything] of my own free will — or maybe it’s just a blend of prior experiences leading to the most logical conclusion. Is conscience just another embodiment of the same process?
Will AI try to re-invent the wheel, or just put a human on a leash to navigate this bipedal world — a temporary patch until the planet becomes more usable for its new mechanical regent?
I was always taught to THINK before you speak. I’ve since learned it’s more fun not to, and to be just as surprised and shocked as everyone else at some of the stuff that comes out of my mouth.