Daring Fireball: Seven Replies to the Viral Apple Reasoning Paper
[…] when it comes to these systems [LLMs], I’m mostly just interested in whether they’re useful or not, and if so, how.
That feels very short-sighted. If we don't solve the alignment problem now, it will be too late by the time we do get to AGI.