If it is true that a graph must be automatically constructed, and that there are significantly better and worse ways of constructing it with respect to its usefulness (e.g., its traversability in pursuit of meaningful relations), then it may be desirable, if feasible, to learn its relations using some form of gradient descent. Not to train a neural net to score relations between claims, but to learn the relations between claims on the live graph. You could imagine learning these relations by letting an LLM traverse the graph as it sees fit until it finds what it’s looking for in response to user queries (or simulated user queries). Then you increase the weight of the relation between the node where it started and the node where it ended up. Over the course of many such updates, you’d be generating an optimal closure over the claim graph, allowing the LLM to respond to queries in as few steps as possible.
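A minimal sketch of that reinforcement loop, assuming a weighted adjacency map and a weighted random walk as a cheap stand-in for the LLM's traversal. All names here (`ClaimGraph`, `traverse`, `reinforce`) are hypothetical, and the update rule is the simplest possible one: strengthen (or create) a direct edge from wherever the walk started to wherever it found its answer.

```python
import random

class ClaimGraph:
    """Hypothetical claim graph: edge weights in a (src, dst) -> float map."""

    def __init__(self):
        self.weights = {}

    def add_relation(self, src, dst, weight=1.0):
        self.weights[(src, dst)] = weight

    def neighbors(self, node):
        return [dst for (s, dst) in self.weights if s == node]

    def traverse(self, start, is_answer, max_steps=10, rng=random):
        # Stand-in for the LLM's walk: follow weighted edges until a node
        # satisfies the query predicate, or give up after max_steps.
        node, path = start, [start]
        for _ in range(max_steps):
            if is_answer(node):
                return path
            options = self.neighbors(node)
            if not options:
                return None
            w = [self.weights[(node, d)] for d in options]
            node = rng.choices(options, weights=w, k=1)[0]
            path.append(node)
        return path if is_answer(node) else None

    def reinforce(self, start, end, lr=0.5):
        # Strengthen (or create) the direct start -> end relation, so
        # future walks can reach the same answer in fewer steps.
        self.weights[(start, end)] = self.weights.get((start, end), 0.0) + lr
```

After enough successful walks are reinforced, frequently co-queried claims acquire direct edges, which is the "closure" behavior described above: the graph itself comes to encode the shortcuts the LLM used to have to discover by multi-hop search.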