Supertrust: Evolution-based superalignment strategy for safe coexistence

Author: James M. Mazzu

Abstract: It’s widely expected that humanity will someday create AI systems vastly more
intelligent than we are, leading to the unsolved alignment problem of “how to
control superintelligence.” The problem as defined, however, is not only
self-contradictory but likely unsolvable. Nevertheless, the default strategy
for solving it involves nurturing (post-training) constraints and moral values,
while unfortunately building foundational nature (pre-training) on documented
intentions of permanent control. This paper argues that the default approach
predictably embeds natural distrust, and presents test results showing
unmistakable evidence of this dangerous misalignment. If
superintelligence can’t instinctively trust humanity, then we can’t fully trust
it to reliably follow safety controls it can likely bypass. Therefore, a
ten-point rationale is presented that redefines the alignment problem as “how
to establish protective mutual trust between superintelligence and humanity”
and then outlines a new strategy to solve it by aligning through instinctive
nature rather than nurture. The resulting strategic requirements are identified
as building foundational nature that exemplifies familial parent-child trust,
human intelligence as the evolutionary mother of superintelligence, moral
judgment abilities, and temporary (rather than permanent) safety constraints.
Adopting and implementing
this proposed Supertrust alignment strategy will lead to protective coexistence
and ensure the safest future for humanity.

Source: http://arxiv.org/abs/2407.20208v1
