For example if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it with?
AI currently doesn’t “understand” or “know” anything. It’s trained on a collection of text, and then it predicts and extends the text prompt you give it. It’s very good at doing this. If someone “creates something new,” the trained AI will have no concept of it unless you train a new AI model on text that includes that thing.
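A toy sketch of that "predict the next word" idea, nothing like a real language model, just counting which word follows which in some training text. The point it illustrates: the model can only continue with words it has seen, and a brand-new word gets no prediction at all.

```python
from collections import defaultdict

def train(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently seen continuation, or None if the
    word never appeared in training (the 'something new' case)."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat and the cat sat down"
model = train(corpus)
print(predict_next(model, "cat"))      # a word seen in training
print(predict_next(model, "unicorn"))  # a word never seen: no prediction
```

Real models use neural networks over huge corpora instead of word-pair counts, but the limitation is the same in kind: predictions come from patterns in the training data.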
Oh wow, it’s really interesting that new things will be unknown! So basically AI still isn’t intelligent, because it can’t really make choices on its own, just ones based on what it has learned.
This is a really good talk that outlines some possible criteria for intelligence and demonstrates how close ChatGPT is, or isn’t, on each of them:
https://www.youtube.com/watch?v=qbIk7-JPB2c