Original link: https://news.ycombinator.com/item?id=43991256
A Hacker News discussion highlights the challenges LLMs face in maintaining context during multi-turn conversations, confirming observations that long interactions can "poison" results. Users share experiences where LLMs struggle to recover from initial errors, requiring fresh starts.
While some find LLMs helpful for compressing information and debugging complex issues like IPSEC configurations or PPP drivers, others note the models' tendency to mix up details from different software versions, hallucinate specifics, and invent explanations. Many agree that LLMs lack introspection and, unlike humans, often fail to ask for clarification when uncertain.
Solutions discussed include prompt engineering to keep context clean, manually editing conversation history, and forking conversations to explore different directions. Users observe that LLMs often commit to an initial answer before they have adequate information, then stick with it even after later context clarifies the problem. Managing context effectively is thus crucial for reliable results.
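The history-editing and forking tactics amount to treating the message list as editable data rather than an append-only log. Below is a minimal sketch of that idea, not code from the thread; it assumes an OpenAI-style list of role/content messages, and `send` is a hypothetical stand-in for whatever chat API is actually in use.

```python
from copy import deepcopy

def send(messages):
    """Placeholder for an LLM call: takes the full message list and
    returns the assistant's reply. Swap in a real client here."""
    return "(model reply)"

history = [
    {"role": "system", "content": "You are a concise debugging assistant."},
    {"role": "user", "content": "Why does my IPSEC tunnel drop after rekey?"},
]

reply = send(history)
history.append({"role": "assistant", "content": reply})

# Tactic 1: manually edit history. If the first answer was wrong, remove it
# instead of arguing with it in-context, since a bad turn tends to keep
# poisoning later responses.
history = history[:-1]  # drop the bad assistant turn

# Tactic 2: fork the conversation. Copy the clean prefix and explore each
# direction in its own branch, so the branches don't contaminate one another.
branch_a = deepcopy(history) + [
    {"role": "user", "content": "Assume a rekey-interval mismatch is the cause."},
]
branch_b = deepcopy(history) + [
    {"role": "user", "content": "Assume MTU or fragmentation is the cause."},
]

reply_a = send(branch_a)
reply_b = send(branch_b)
```

Whether this is done by hand in a chat UI (editing or regenerating earlier turns) or programmatically as above, the effect is the same: the model only ever sees a clean prefix, which is what several commenters report makes recovery from early mistakes possible.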