The idea is that you probably want to have an LLM look at every claim before it's made, in order to identify and help rectify common points of confusion, lack of clarity, and so on. To do that, you probably want a whole list of heuristics that an LLM-based algorithm could apply when evaluating prospective claims. Here are a few things that could be on that list.
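
To make this concrete, here is a minimal sketch of what that loop could look like. `ask_llm` is a placeholder for whichever model or API you actually use, and the heuristic prompts are illustrative stand-ins for the list discussed below, not a definitive set.

```python
# Minimal sketch, assuming a generic LLM client. `ask_llm` is a placeholder for
# whatever model/API you actually use; the heuristic prompts are illustrative.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its reply."""
    raise NotImplementedError("wire this up to an actual LLM client")

# Each heuristic is a prompt template applied to a prospective claim.
HEURISTIC_PROMPTS = [
    "Does the claim below make clear whether it is about what would work in "
    "principle, in current practice, or in near-term practice?\n\nClaim: {claim}",
    "List up to 3 things that are ambiguous or unstated in the claim below, "
    "assuming the reader has no other context.\n\nClaim: {claim}",
]

def review_claim(claim: str) -> list[str]:
    """Run every heuristic prompt against the claim and collect the LLM's feedback."""
    return [ask_llm(p.format(claim=claim)) for p in HEURISTIC_PROMPTS]
```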

In principle vs. in current practice vs. in near-term practice

A Kialo claim proposes that regulating product labels for GMO-containing food is a better solution to GMO-related concerns than a ban. Someone else counters that current regulations are not sufficient. Does this actually count as a critique of the claim? It’s not clear whether the claim is saying that regulations would be a better solution in principle or in practice.

There may also be a distinction to be made between in current practice and in practice in the foreseeable future, should the practice be attempted. You could constrain the timeline to near term vs. long term as well. That gives you a spectrum of time horizons: current -> near term -> long term -> in principle. A heuristic could ask which point on this spectrum a claim assumes, as sketched below.
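
One way to turn this into a heuristic is a prompt that asks the LLM which time horizon a claim seems to assume. The horizon labels and wording below are illustrative; the resulting string would be sent to the LLM as one of the checks in the list above.

```python
# One possible time-horizon heuristic, sketched as a prompt builder. The labels
# and wording are illustrative; the returned string would be sent to the LLM.

TIME_HORIZONS = ["current practice", "near-term practice",
                 "long-term practice", "in principle"]

def time_horizon_prompt(claim: str) -> str:
    """Build a prompt asking which time horizon, if any, the claim assumes."""
    return (
        "A claim may be about what works in current practice, in near-term "
        "practice, in long-term practice, or only in principle. For the claim "
        f"below, say which of these ({', '.join(TIME_HORIZONS)}) it appears to "
        "assume, or answer 'ambiguous' if it does not make this clear.\n\n"
        f"Claim: {claim}"
    )
```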

Ask LLM to evaluate claim without context

Consider the claim Rules for labelling GMO products are a better solution than an outright ban. You probably want to be clear, here, about what the proposed solution is a solution to. You probably also want to be clear about what the ban would be of. We can fill in the blanks to some extent in our minds, but this infill process may open the door to confusion. One of the counter-claims on Kialo (where this claim is from) was that label regulation is unlikely to satisfy those who are concerned about GMOs for environmental reasons. That counter-claim could have been made unnecessary by specifying what the labelling regulations were proposed to solve. Maybe this claim should instead read something like Rules for labelling GMO products are a better solution to concerns about GMO safety than an outright ban of GMO products. If you asked an LLM for the top 3 things that were unclear about the claim without giving it any context (e.g., related claims, the subject of discussion, etc.), it might be able to identify these issues. This may require some prompt engineering; a rough starting point is sketched below.
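
Here is a rough starting point for that "no context" check, again as a prompt builder. The wording is a guess and would likely need iteration (i.e., the prompt engineering mentioned above) before it reliably surfaces the right ambiguities.

```python
# A rough starting point for the "no context" check. The wording is a guess and
# would likely need iteration (prompt engineering) to work well.

def top_unclear_points_prompt(claim: str, n: int = 3) -> str:
    """Build a prompt asking for the top N unclear points in a claim shown without context."""
    return (
        "You are reviewing a single claim in isolation, with no surrounding "
        f"discussion. List the top {n} things that are unclear or unstated in "
        "it, such as what problem it is meant to solve or what exactly it "
        "proposes.\n\n"
        f"Claim: {claim}"
    )

# For example, applied to the claim discussed above:
# top_unclear_points_prompt(
#     "Rules for labelling GMO products are a better solution than an outright ban."
# )
```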