The system must support some method of reasoning about the conditions required for claims to be true. Truth scoring based on the truth of supporting claims already models a conditional mechanism implicitly: the truth of X depends on the truth of supporting claim Y. But there seem to be other kinds of conditions we'd like to model.
Implicit conditionals
Consider 01 | We should ban GMO crops, which may be supported by 02 | GMO crops can reduce biodiversity. Here, the truth of 01 depends partly on the truth of 02. It also depends partly on the truth of the claim 03 | 02 supports 01, which might itself be supported by 04 | Biodiversity is good. In this way, the truth of 01 also depends partly on the truth of 04 (or on its agreeableness, as is the case here). Still, all these conditional relations should be modelled automatically by the system as part of the truth scoring process.
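The GMO chain above can be sketched as a recursive score. This is a minimal illustration, not the system's actual scoring rule: it assumes scores in [0, 1], assumes a supporting claim's contribution is gated by the truth of its relation claim (e.g. "02 supports 01"), and averages the result with the claim's own prior. All names and numbers are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    prior: float  # standalone truth/agreeableness estimate in [0, 1]
    # (supporting claim, relation claim) pairs; the relation claim is
    # itself a claim like "02 supports 01" and can have its own supports.
    supports: list = field(default_factory=list)

def truth_score(claim: Claim) -> float:
    """Blend a claim's prior with its support chain, recursively.

    Each supporting claim Y counts only insofar as the relation claim
    "Y supports X" is itself true -- the implicit conditional.
    """
    if not claim.supports:
        return claim.prior
    support = sum(
        truth_score(y) * truth_score(rel) for y, rel in claim.supports
    ) / len(claim.supports)
    return (claim.prior + support) / 2

# The example: 04 supports 03 ("02 supports 01"), which gates 02's support of 01.
c04 = Claim("Biodiversity is good", 0.9)
c02 = Claim("GMO crops can reduce biodiversity", 0.7)
c03 = Claim("02 supports 01", 0.5, [(c04, Claim("04 supports 03", 0.8))])
c01 = Claim("We should ban GMO crops", 0.5, [(c02, c03)])
```

Note that raising the score of 04 (biodiversity being good) raises 03, and therefore 01, without 01 or 02 changing at all: exactly the indirect dependence described above.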
Explicit conditionals
Can we translate between system-encoded conditionals and claim-encoded conditionals? Can the claim “we should build a house if we can find a way to pay for it” be translated into “01 | we should build a house” and “02 | 01 is true if and only if we can find a way to pay for it”? Could the system-encoded truth score of 01 then depend on the truth of 02, and therefore also on the truth of claims about viable ways to pay for the house? If so, truths could be scored in nuanced, user-defined ways. This may also give us a powerful way of building a foundation for agree/disagree claims (opinion-based) out of true/untrue claims (fact-based). This is, I think, one of the fundamental methods we use to form opinions, and the ability to reason about opinion-based things is one of the minimal requirements for Wicker. Decision-making, slightly further up the hierarchy of use cases, must be based almost entirely on reasoning about opinion-based things.
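The house example can be sketched as a score dependency. This is a hypothetical translation, assuming scores in [0, 1] and assuming "a way to pay" scores as the best available funding option; the funding claims and their scores are invented for illustration.

```python
# Claim-encoded conditional: 02 | "01 is true if and only if we can
# find a way to pay for it", translated into a system-encoded dependency.

funding_claims = {  # illustrative claims about viable ways to pay
    "We can get a mortgage": 0.6,
    "We can save enough first": 0.3,
}

# "A way to pay" exists to the degree that the best funding option is true.
score_02 = max(funding_claims.values())

# Under the biconditional, 01 | "we should build a house" inherits 02's
# score, and so depends transitively on the funding claims.
score_01 = score_02
```

The point of the translation is the transitivity: editing a funding claim's score changes 01 without anyone touching 01 directly.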
Explicit conditionals about truth
The implicit vs. explicit modelling question arises naturally from the fact that claims can reference anything and say anything, including things about the implicitly-modelled parts of the system itself, which is fertile ground for paradoxes. The claim “This claim is false” is valid in Wicker.
Conditionals as properties of claims
One method of specifying custom conditionals, without resorting to independent claims, would be to make them properties of claims. This seems fairly natural, since a conditional essentially just specifies the conditions necessary for a claim to be true.
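Conditionals-as-properties might look like the following minimal sketch. It assumes each condition is a function from some evaluation context to a gating factor in [0, 1]; the class, context keys, and scores are all illustrative, not part of any actual design.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ConditionedClaim:
    text: str
    prior: float  # truth score before conditions are applied
    # Conditions attached as properties of the claim rather than as
    # separate claims: each maps a context to a factor in [0, 1].
    conditions: list[Callable[[dict], float]] = field(default_factory=list)

    def score(self, context: dict) -> float:
        """The claim is only as true as its conditions allow."""
        s = self.prior
        for condition in self.conditions:
            s *= condition(context)
        return s

house = ConditionedClaim(
    "We should build a house",
    prior=0.9,
    # The "can we pay for it" condition, carried by the claim itself.
    conditions=[lambda ctx: ctx.get("can_pay", 0.0)],
)
```

Compared with the separate-claim encoding, the condition here travels with the claim, but it is no longer itself a claim that others can support or dispute.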
A more complex example
01 | Rules for labelling GMO products would be a better solution than an outright ban.
02 | In its current form, GMO labelling is not an effective way of informing consumers.
In Kialo, where these claims are from, 02 was listed as a critique of 01.
@todo: describe the conditional nature of the critique. Or actually just find a better example.
// todo: consider the case where a condition is the intention of a claim, e.g., whether 01 | Rules for labelling GMO products are a better solution than an outright ban means “in principle, practically” or “in principle, conceptually”.
On evaluating tradeoffs
Tradeoffs are closely tied to conditionals. Evaluating a tradeoff involves acknowledging that something may be true or agreeable, but that your conclusion about its implications depends on how true or agreeable it is relative to something else.
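One way to picture the relative nature of a tradeoff is as a share of total weight. This is a toy sketch, not a proposed mechanism; the function, claims, and scores are invented for illustration.

```python
# A tradeoff conclusion depends not on whether each side is true or
# agreeable in isolation, but on how the two sides weigh relative to
# each other.

def evaluate_tradeoff(pro: float, con: float) -> float:
    """Map the pro score's share of the total weight to [0, 1]."""
    total = pro + con
    return 0.5 if total == 0 else pro / total

# Both sides score well here -- "GMO crops increase yields" (0.8) vs
# "GMO crops can reduce biodiversity" (0.6) -- yet the conclusion leans
# only mildly toward the heavier side.
conclusion = evaluate_tradeoff(0.8, 0.6)
```

The key property is that doubling both scores leaves the conclusion unchanged: only the relative weights matter, which is exactly what distinguishes a tradeoff from an ordinary support relation.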