OPEN_SOURCE
REDDIT · TUTORIAL · 6d ago
PhD Student Worries About ChatGPT Dependence
A second-year PhD student says a year of leaning on ChatGPT for coding has left them worried about losing real coding ability. The thread largely argues that the fix is not quitting LLMs, but using them more deliberately and keeping some work fully manual.
// ANALYSIS
This reads like a skill-atrophy problem disguised as an AI tooling problem: once LLMs become your first instinct, the real bottleneck shifts from writing code to reading, debugging, and trusting it. The healthiest pattern is constraint, not guilt, with intentional no-LLM reps to keep fundamentals active.
- Reserve some sessions for handwritten or from-scratch coding so syntax, control flow, and debugging stay familiar
- Use LLMs for scaffolding, repetitive glue, and boilerplate, then rewrite the core logic yourself
- Treat generated code as a draft: review every line, add tests, and force yourself to explain failures
- Prefer incremental edits to existing repositories over full-codegen-from-zero workflows
- Build confidence by comparing multiple model outputs and learning the differences instead of accepting the first answer
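The "treat generated code as a draft" practice above can be made concrete with a small ritual: before running LLM output, write your own assertions so you are forced to predict its behavior. A minimal sketch, using a hypothetical `moving_average` helper as the stand-in for generated code (the function and its tests are illustrative, not from the thread):

```python
# Hypothetical example: an LLM-drafted helper, reviewed line by line
# and pinned down with hand-written tests before it is trusted.

def moving_average(values, window):
    """LLM-drafted: simple moving average over a sliding window."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be in 1..len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Manual review step: you write the expected outputs yourself,
# so a surprise here forces you to explain the failure, not just regenerate.
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
assert moving_average([5], 1) == [5.0]

# Probe the edge case the model may have glossed over.
try:
    moving_average([1, 2], 0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for window=0")
```

The point is the workflow, not this particular function: the tests come from your own understanding, so they exercise the reading-and-debugging muscles the thread worries about losing.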
// TAGS
chatgpt · llm · ai-coding · chatbot · prompt-engineering
DISCOVERED
2026-04-06
PUBLISHED
2026-04-06
RELEVANCE
6/10
AUTHOR
etoipi1