When optimization tracking is enabled, random effects currently track and summarize the convergence reason (or lack thereof) for each ID. The IDs that failed to converge should be collected and logged at the end of training, to make it easier to track down bugs. A minimal sketch of what that end-of-training summary could look like is below; the `ConvergenceRecord` and `log_unconverged_ids` names are illustrative, not the existing API.
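
```python
# Hypothetical sketch: collect per-ID convergence outcomes and emit a single
# warning at end of training listing the IDs that did not converge.
import logging
from dataclasses import dataclass

logger = logging.getLogger(__name__)

@dataclass
class ConvergenceRecord:
    id: str
    converged: bool
    reason: str  # e.g. "max_iterations", "gradient_tolerance", "nan_loss"

def log_unconverged_ids(records: list[ConvergenceRecord]) -> list[str]:
    """Log the IDs that failed to converge, with their reasons, once at end of training."""
    failed = [r for r in records if not r.converged]
    if failed:
        logger.warning(
            "%d of %d random-effect IDs failed to converge: %s",
            len(failed),
            len(records),
            ", ".join(f"{r.id} ({r.reason})" for r in failed),
        )
    return [r.id for r in failed]
```
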
It's also worth asking what causes models to fail to converge in the first place, and whether we can detect those cases before training through data validation alone.
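
As a starting point, a hedged sketch of pre-training checks is below. It flags IDs with very few observations or a constant response, two common precursors to convergence trouble; the column names and threshold are assumptions, not validated rules.

```python
# Hypothetical pre-training validation: flag IDs that look likely to fail.
import pandas as pd

def flag_at_risk_ids(
    df: pd.DataFrame,
    id_col: str = "id",
    response_col: str = "y",
    min_obs: int = 5,
) -> pd.DataFrame:
    """Return a summary of IDs with too few observations or zero response variance."""
    grouped = df.groupby(id_col)[response_col]
    summary = grouped.agg(n_obs="size", response_var="var").reset_index()
    summary["too_few_obs"] = summary["n_obs"] < min_obs
    summary["constant_response"] = summary["response_var"].fillna(0.0) == 0.0
    return summary[summary["too_few_obs"] | summary["constant_response"]]
```

Even if checks like these can't catch every failure mode, running them before training would let us separate "the data for this ID was never going to converge" from genuine optimizer bugs.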