Last week, we decided to continue with the idea of sentience and consciousness in robots, and so this resource was proposed for this week.
As technology progresses and we move further into the development of Artificial Intelligence (AI) and robot sentience and consciousness, the question of what comes next becomes more and more prominent. Tests like the Turing Test were designed to probe the line between AI and humans, but as we draw closer to that line, the question becomes: what do we do when they are no longer all that different?
If AI have pain programmed into them by us or by other AI, do they need rights? Does a programmed response count as genuinely feeling anything? If it does, then where do we draw the line? Can these AI ever be considered the same as humans?