Microsoft Study: https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf
This Microsoft study relied on self-reporting, so it isn't a robust, unbiased study of the effects of using AI, but I have some impressions anyway.
I think we engage less with AI-generated work because it looks good enough, and that's what we expect. We do the same when we accept work from other people; LGTM is basically that.
It's like a factory without QA. Workers will typically accept good-enough output from machinery, unlike someone who builds most if not everything from scratch. I'd also say there are other factors, like distractions (sickness, meetings, notifications, lunch), that lead to accepting lower-quality work.
I believe there is a balance you can strike. Training people to properly supervise AI output may be much needed these days, just as QA people need to be trained to spot defects the untrained eye would miss. You should always ask yourself questions about the work, including how you could have improved it, not just whether it's something you would have done yourself.
Of course I'm biased toward AI, because I use it quite a bit, especially for troubleshooting or translating code from one language to another. But I like reading the code that comes out of it, which is not unlike what I already do when looking through other people's libraries on GitHub.
Skill issue, guys.