The core purpose of relevance ranking is to give the interface a way to know what to show you and when. Relevance scores are an attempt to answer, for a given node, the question “which other nodes are relevant to the user right now?” In contrast to other forms of scoring, relevance scores are determined (almost) entirely algorithmically and are not shown directly to the user. Measuring relevance seems imperative, since many claims will be redundant variants or otherwise inappropriate to show.
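
As a rough sketch, that question can be framed as a single scoring function the interface calls when deciding what to surface. The names below (`relevance`, `user_context`) are placeholders for illustration, not a committed API.

```python
# Placeholder for the relevance question: given a node and the user's current
# context, how relevant is some other node right now? Names are illustrative.

def relevance(node_id: str, candidate_id: str, user_context: dict) -> float:
    """Return an internal score (say, 0.0 to 1.0) the interface uses to decide
    what to show; the number itself is never displayed to the user."""
    raise NotImplementedError("filled in by the signal sources listed below")
```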

Basically this is Wicker’s version of a recommender algorithm that tries to guess what’s going to be important. I guess one difference is that it’s trying to guess what’s important to a healthy, truth-seeking debate and not necessarily what individuals like. Of course, this algorithm is going to have to be transparent.

Some sources that could contribute to the relevance scoring algorithm (sketched in code after the list) include:

  • Traditional algorithms
    • E.g., number of connected nodes.
  • LLMs
    • In response to what you’ve said you’re interested in.
    • In response to the context you’re working in.
  • Statistical analysis of user interaction
    • Based on what people navigate to.
    • User feedback (see below).
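
Here is a toy sketch of how those signal families might be blended into the relevance score sketched above. The example graph, the stand-in signal functions, and the weights are all assumptions made for illustration; the real mix would need tuning and, per the transparency point above, would need to be published.

```python
# Toy blend of the three signal families into one relevance score.
# Graph, signal functions, and weights are invented for illustration only.

# Hypothetical node graph: node id -> ids of connected nodes.
GRAPH: dict[str, set[str]] = {
    "claim-1": {"claim-2", "topic-a"},
    "claim-2": {"claim-1", "topic-a"},
    "topic-a": {"claim-1", "claim-2"},
}

def graph_signal(node: str, candidate: str) -> float:
    """Traditional algorithm: overlap between the two nodes' connections."""
    a, b = GRAPH.get(node, set()), GRAPH.get(candidate, set())
    return len(a & b) / len(a | b) if a | b else 0.0

def llm_signal(candidate: str, ctx: dict) -> float:
    """Stand-in for an LLM judging fit with the user's stated interests and
    working context; in practice a model call, not string matching."""
    return 1.0 if any(term in candidate for term in ctx.get("interests", [])) else 0.0

def interaction_signal(node: str, candidate: str, ctx: dict) -> float:
    """Statistical signal: how often users navigate from node to candidate.
    Aggregated user feedback (see below) would feed in here as well."""
    count = ctx.get("nav_counts", {}).get((node, candidate), 0)
    return count / (count + 5)  # saturates so a handful of clicks can't dominate

WEIGHTS = {"graph": 0.3, "llm": 0.4, "interaction": 0.3}  # illustrative only

def relevance(node_id: str, candidate_id: str, user_context: dict) -> float:
    """Weighted blend of the three signal families; kept internal, never displayed."""
    return (
        WEIGHTS["graph"] * graph_signal(node_id, candidate_id)
        + WEIGHTS["llm"] * llm_signal(candidate_id, user_context)
        + WEIGHTS["interaction"] * interaction_signal(node_id, candidate_id, user_context)
    )

# Example: a user interested in topic-a, with some navigation history.
ctx = {"interests": ["topic-a"], "nav_counts": {("claim-1", "claim-2"): 12}}
print(relevance("claim-1", "topic-a", ctx))  # 0.5 with the weights above
```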

Gamified user feedback on nodes

One important mechanism for deriving relevance scores is to dedicate a little bit of UI on certain nodes to eliciting user feedback on what is visible in the node and, in the case of topic nodes, potentially on how the subheadings are organized. Maybe it’s a “Help improve Wicker” section that asks one or more specific questions about the node. The questions should probably be simple, potentially even answerable with a yes or no. Feedback can be aggregated across many users to determine what modifications to make. Exactly how this should be gamified is unclear.
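
As a hedged sketch of the aggregation step, assuming purely yes/no questions: collect answers per (node, question) pair, and only suggest a modification once enough users have responded and the answers lean clearly one way. The question ids, thresholds, and decision rule below are invented for illustration.

```python
# Toy aggregation of yes/no feedback on a node. Thresholds and the decision
# rule are assumptions, not a worked-out design.

from collections import defaultdict
from typing import Optional

# (node_id, question_id) -> [yes_count, no_count]
_feedback: dict[tuple[str, str], list[int]] = defaultdict(lambda: [0, 0])

def record_answer(node_id: str, question_id: str, answer_is_yes: bool) -> None:
    """Record one user's yes/no answer to a 'Help improve Wicker' question."""
    _feedback[(node_id, question_id)][0 if answer_is_yes else 1] += 1

def suggested_action(node_id: str, question_id: str,
                     min_responses: int = 20, threshold: float = 0.7) -> Optional[str]:
    """Suggest a modification only once enough users have answered and the
    answers lean clearly one way; otherwise return None (no change yet)."""
    yes, no = _feedback[(node_id, question_id)]
    total = yes + no
    if total < min_responses:
        return None
    if yes / total >= threshold:
        return "looks fine, keep as is"
    if no / total >= threshold:
        return "flag for revision"
    return None  # mixed signal: leave the node alone
```

Something like `record_answer("claim-1", "is-this-a-duplicate", answer_is_yes=True)` would be called from the “Help improve Wicker” widget; the gamification layer would sit on top of this and, as noted above, is still an open question.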