104257 < bavaria> ↪ vhp — 274 (19/255) tokens — e8c77f4c6217ae3ff80fb422fe5d7c9b
104308 < Tree> +gpt vhp
104309 < bavaria> GPT had nothing else to say :( ↪ q6w — 273 (273/0) tokens — e8c77f4c6217ae3ff80fb422fe5d83e0
The second call was useless: vhp had stopped on its own rather than being truncated, and the new prompt was empty. This incurred inference costs (273 tokens!) for a call that could produce no output.
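A cheap guard could prevent this class of wasted call — a minimal sketch, assuming an OpenAI-style completion response where `finish_reason` is `"length"` when the output was cut off by the token limit and `"stop"` when the model ended naturally (the function name is hypothetical, not part of the bot's actual code):

```python
def should_continue(finish_reason: str) -> bool:
    """Only issue a follow-up completion if the previous one was
    truncated by the token limit. A "stop" finish (as with vhp here)
    means the model was done, so a continuation call with an empty
    prompt can only burn tokens for no output."""
    return finish_reason == "length"

# Truncated reply: a continuation call makes sense.
print(should_continue("length"))
# Model stopped on its own: skip the call entirely.
print(should_continue("stop"))
```

With such a check in place, the bot would have answered "+gpt vhp" locally (e.g. "nothing more to generate") instead of spending 273 tokens on an empty continuation.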