AI can now write whole programs with no human intervention.
No, it can't. It produces strange, buggy code that has to be manually debugged.
I've used AI to fix code and look up syntax. It's a long, long way from producing production-ready code.
The more realistic danger is that military hardware will be poised to act autonomously based on sensor input. If it detects certain radar signatures, it will let missiles fly without telling anyone. A cruise missile is already autonomous once launched, but if we automate the firing of those missiles with Skynet-style software, then we're in Terminator territory.
The realistic scenario isn't that AI will suddenly decide to take over Earth, but that the Chinese will hack into it and fuck with its mind. Or maybe a cheap SSD will glitch somewhere and missiles suddenly launch. Achieving malevolent awareness is not a danger we currently face with AI.