When Not to Use AI
Know the boundaries — when AI should assist vs. when human judgment is irreplaceable.
Category: AI Literacy | Type: Skills
Skills: AI Limitations, Critical Thinking, Ethics
Techniques: Constraint-Based, Self-Verification
Prompt
Knowing when NOT to use AI is as important as knowing how to use it. Do not use AI when:

1. The stakes require certainty: legal advice, medical diagnosis, financial decisions, safety-critical systems. AI assists humans in these domains; it does not replace them.
2. The task requires real-time truth: AI models have knowledge cutoffs and no access to current events (unless tool-augmented).
3. Original research is needed: models remix existing knowledge; they do not generate novel empirical data.
4. Emotional labor is required: genuine empathy, grief support, and human connection cannot be outsourced to a model.
5. The output will not be reviewed: if no human will verify the output before it is used, the risk of hallucination becomes the risk of harm.
6. Confidentiality is absolute: anything entered into a model may be logged or reviewed. Treat prompts like postcards, not sealed letters.
7. The learning IS the point: if the goal is to develop [your skill], having AI do it defeats the purpose.

The red line: AI should augment human judgment, never replace it in consequential decisions.
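The seven criteria above amount to a pre-flight checklist, and one way to make that concrete is as a simple gating function. This is a minimal sketch, not part of the original prompt; the `Task` fields and the `ai_appropriate` helper are hypothetical names chosen to mirror the seven points:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Hypothetical task descriptor; one boolean per criterion above."""
    high_stakes: bool               # 1. legal/medical/financial/safety-critical
    needs_current_data: bool        # 2. depends on real-time truth past the cutoff
    needs_original_research: bool   # 3. requires novel empirical data
    needs_emotional_labor: bool     # 4. genuine empathy or human connection
    will_be_reviewed: bool          # 5. a human verifies output before use
    strictly_confidential: bool     # 6. input must never leave your control
    learning_is_the_point: bool     # 7. doing it yourself is the goal

def ai_appropriate(task: Task) -> tuple[bool, list[str]]:
    """Return (ok, reasons): ok is False if any red line applies."""
    reasons = []
    if task.high_stakes:
        reasons.append("stakes require certainty")
    if task.needs_current_data:
        reasons.append("requires real-time truth")
    if task.needs_original_research:
        reasons.append("requires original research")
    if task.needs_emotional_labor:
        reasons.append("requires emotional labor")
    if not task.will_be_reviewed:
        reasons.append("output will not be reviewed")
    if task.strictly_confidential:
        reasons.append("confidentiality is absolute")
    if task.learning_is_the_point:
        reasons.append("the learning is the point")
    return (not reasons, reasons)
```

For example, a low-stakes drafting task with human review passes, while an unreviewed medical question fails with the reasons listed. The point of the sketch is that a single tripped criterion vetoes AI use; the criteria are not weighed against each other.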