Guidelines for Artificial Intelligence Containment


With almost daily improvements in the capabilities of artificial intelligence, it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on the AI Containment Problem, we propose a number of guidelines to help AI safety researchers develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agents while maintaining a certain level of safety against information leakage, social engineering attacks, and cyberattacks originating from within the container.

1 Figure or Table

Cite this paper

@article{Babcock2017GuidelinesFA,
  title={Guidelines for Artificial Intelligence Containment},
  author={James Babcock and J{\'a}nos Kram{\'a}r and Roman V. Yampolskiy},
  journal={CoRR},
  year={2017},
  volume={abs/1707.08476}
}