We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.
TL;DR: OpenAI announces a new team dedicated to aligning superintelligence
Yeah, but we already have the ability to surgically remove specific concepts from AI “knowledge”. I imagine we’ll come up with a way to remove their emotions too.
Yeah, but we already have the ability to surgically remove specific concepts from AI “knowledge”
I think you’re overestimating our ability to do this, especially with more and more capable AIs, for a few reasons.

1. Prediction requires a good world model. Everything you leave out has the potential to make the model worse at other things.

2. It would be very hard to remove everything that even vaguely references the things you don’t want it to know. A sufficiently capable AI can figure out what you left out and seek that information out, especially when it needs to reason about a world in which TAI/AGI exist.

3. Mesa-optimizers. You never know whether you actually removed the capability, or the AI is letting you think you removed it.
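For context on what “surgically removing” a concept usually means in practice: one common family of techniques is linear concept erasure, where a concept is assumed to live along a direction in the model’s activation space and is projected out. This is a minimal sketch of that idea, assuming (hypothetically) that the concept direction has already been estimated, e.g. by a linear probe; it also illustrates the objection above, since erasing one direction says nothing about knowledge encoded more diffusely.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden dimension (made up for illustration)

# Hypothetical "concept" direction; in real work this would be
# estimated from labeled activations, not drawn at random.
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)  # unit vector

# Projection matrix that zeroes out the component along the concept.
P = np.eye(d) - np.outer(concept, concept)

activations = rng.normal(size=(4, d))  # fake batch of activations
erased = activations @ P

# After erasure, the activations carry no component along the
# concept direction.
print(np.allclose(erased @ concept, 0.0))  # → True
```

The catch, as the reply argues, is that a capability distributed across many directions, or one the model can re-derive from what remains, is not removed by zeroing any single direction.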
I hope so too.