>> Additional layers of these 'LLMs' could read the responses, determine whether the premises are valid and the logic sound enough to support the presented conclusion(s), and then suggest a different citation URL for the preceding text.
https://arxiv.org/abs/2403.18802
https://github.com/google-deepmind/long-form-factuality/tree...
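For concreteness, a minimal sketch of that verifier-layer idea. The linked paper's SAFE method goes further (it splits a response into individual facts and checks each one against Google Search results); the `call_llm` wrapper and the JSON response shape below are assumptions for illustration, not the repo's actual interface:

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around whatever completion API you use."""
        raise NotImplementedError("plug in your provider here")

    def verify_claim(claim: str, citation_url: str) -> dict:
        """Second-layer model: judge whether the claim's premises hold and
        whether the cited URL plausibly supports it; if not, propose another."""
        prompt = (
            "You are a fact-checking layer. Given a claim and its citation, "
            "reply in JSON with keys: premises_valid (bool), "
            "logic_sound (bool), suggested_url (string or null).\n\n"
            f"Claim: {claim}\nCitation: {citation_url}"
        )
        return json.loads(call_llm(prompt))

    def revise(text: str, claims: list[tuple[str, str]]) -> str:
        """Run each (claim, url) pair through the verifier and swap in any
        suggested replacement citation."""
        for claim, url in claims:
            verdict = verify_claim(claim, url)
            if verdict.get("suggested_url"):
                text = text.replace(url, verdict["suggested_url"])
        return text

In practice you'd want the verifier to be a different model (or at least a differently prompted one) than the generator, since a model grading its own output tends to share its blind spots.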