This paper reviews why Human-in-the-Loop oversight is both critical for
preventing widely understood failure modes of machine learning systems and, at
the same time, not a practical solution. We then review two current heuristic
methods for
addressing this tension. The first is provable safety envelopes, which are
possible only when the dynamics of the system are fully known, but which can
provide useful safety guarantees when optimal behavior relies on machine
learning with poorly understood safety characteristics. The second is the
simpler circuit breaker model, which requires no specific model of the system
and can forestall or prevent catastrophic outcomes simply by stopping the
system. This paper proposes using
heuristic, dynamic safety envelopes: a plausible halfway point between these
approaches that allows human oversight while avoiding some of the more
difficult problems faced by Human-in-the-Loop systems. Finally, the paper
concludes by discussing how this approach can be used for governance in
settings where otherwise unsafe systems are deployed.