> AGI, will by definition not need us to further its interests. It can take care of itself.
Well put. You may also be interested in this technical essay, which gets into how AGI's physical needs conflict with our human needs, and how control of AGI is fundamentally limited.
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable#How_to_define__stays_safe__
One of the few articles about the future of AI I've read, if not the only one, that I actually "align" with. You've raised the exact points I've been discussing with everyone in my small circle of friends.
Personally, I fear a Jerry Springer type of AI will become our new "leader". Our generational and political warfare tactics are all the signs I need to see where our future is headed, sadly.
Thanks for reading! Glad to hear I'm not the only one. Jerry Springer-style AI: a terrifying prospect, but I can see it happening. We don't seem to have very effective methods for combating human populism, let alone populism powered by AI...