AI intellectual debt + invisibility + obscurity - human control authority = danger
If this doesn’t give us nightmares:
… I believe it’s well past time we soberly consider how long the problem Cory’s post describes has actually been with us [humanity], and whether humans have been working effectively to address it at all.
It’s an extremely familiar scenario, with software development being just the latest iteration, and within that, infosec being increasingly the most prominent and punishing example.
Plus, IMO, the implicit opportunity cost of inaction often doesn’t look negative enough to outweigh the real or perceived [by management] sunk costs, so big, influential businesses fail to do the right thing, including businesses whose products involve human health and safety (hey, looking at you, manufacturers of medical devices, and uh, yeah, Boeing…).
This has late-stage capitalism written all over it.
What bothers me is that these issues are invisible (as noted in the Arthur C. Clarke example in the OP) until a catastrophe happens. It is simply not feasible for the human mind to expand infinitely to account for ever-increasing risk as the use of AI proliferates, seemingly inescapably, in the public and private realms.

ETA: clarifying context