Modern chatbots appear to have already achieved many of the aims of a collaborative reasoning system. They capture and summarize the streams of thought running through public discourse remarkably well. They can present all major perspectives on any issue in a way that feels balanced and fair. They have ready access to all the internet’s knowledge and can present facts and cite sources. They can reason about the content they’re presenting and tailor it to the needs of any user, whether that user wants a comparison between concepts, a summary of popular opinion, a deep dive, or a simple definition, and they do it all in natural language that feels personal and approachable. When it comes to cutting through the noise and making sense of the world (and each other), they would appear to be the greatest tools we’ve yet produced. What could a collaborative reasoning system offer that you can’t already get from ChatGPT? And why would these differences matter?

1. ChatGPT is not social

You can’t use chatbots to communicate. Wikipedia is not social either, but it deals only with factual content. Collaborative reasoning, on the other hand, requires dealing with non-factual content, non-rational arguments, and non-truth-seeking user motivations. For a collaborative reasoning platform to have mass appeal, it must permit, to some degree, the communication of emotionally charged, irrational, or factually inaccurate content. The forms of reasoning and communication most natural to humans are deeply rooted in emotion, social ties, cultural norms, and personal history. There is no hard line between communications made purely for truth-seeking and those made for other reasons. In many cases, truth itself cannot be reached without assumptions that cannot be grounded in objectivity. So, though chatbots may help us approach the truth when we ask them questions with an open mind, our motivation to do so will be limited.

The role of identities

It also seems that new ideas may be harder to engage with if they are not tied to trustworthy personal or brand identities. It’s unclear whether this is a good thing. Certainly, believing an idea just because of who says it seems antithetical to rationality; the idea should stand on its own, no matter who says it. On the other hand, weighing ideas according to who says them seems fairly fundamental to the way we think, and it may actually be an important way of filtering the vast number of things available for consideration at any given moment.

2. You can’t contribute ideas

You might say that not being able to contribute ideas is okay. Modern LLM products read Twitter in real time; anything you might want to say has probably already been said and, if said often enough, will be reflected in the LLM’s output. Though that may be true, the loop still needs to be closed for the sake of user experience. If you disagree with something an LLM says, it’s a non-starter to suggest that you hop on Twitter and make your disagreement known for the benefit of the next LLM that crawls the site. What matters is not only your contribution but your experience of having a voice.

Merge with first point

The inability to contribute ideas can be grouped with the inability to send social signals. They’re both about contribution. The distinction only has to do with the kind of contribution in question, whether it be more rational or more emotional.

3. Content is transient and non-deterministic

Chats with LLMs are generally discarded after generation. They’re tailor-made to each new query and, though links to chats can technically be shared, they’re not very practical as reference material. If submitted repeatedly, any specific query will produce slightly different results, further underscoring the transient nature of the chats. The content on an effective collaborative reasoning system needs to remain fixed until it is intentionally changed. While it may be permissible for the specific presentation of underlying data to be transient, e.g., for a system to produce transient summaries in natural language, the data itself must be static and amenable to inspection by users.

Important

The idea that data must be fixed and amenable to inspection is worth dwelling on. If the atoms of content are natural-language claims that can be combined or summarized, then the claims must be formulated so that they are, if not rigorous enough for Russell, at least rigorous enough to be reliably referenced, combined, and summarized.
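
To make that concrete, here is a minimal sketch of a claim treated as a fixed, inspectable atom, with transient summaries layered on top. The names and shapes here (ClaimNode, supports, summarize) are illustrative assumptions, not a design Wicker is committed to.

```typescript
// Sketch only: "ClaimNode", "supports", "opposes", and "summarize" are
// illustrative names, not a committed design.

type ClaimId = string;

interface ClaimNode {
  id: ClaimId;          // stable identifier so a claim can be cited and linked
  statement: string;    // one natural-language claim, phrased to stand on its own
  createdAt: string;    // ISO timestamp; the statement does not change after creation
  supports: ClaimId[];  // claims this one is offered in support of
  opposes: ClaimId[];   // claims this one is offered against
}

// Transient presentation layer: summaries are generated on demand and never stored.
// A real system might route this through an LLM; here it just concatenates statements.
function summarize(claims: ClaimNode[]): string {
  return claims.map((c) => c.statement).join(" ");
}
```

The point of the sketch is the split it encodes: the claims themselves stay fixed and inspectable, while anything generated from them, like a summary, can be thrown away and regenerated.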

3.1 Transient chats + static nodes

If Wicker is to prominently feature some type of LLM chat which acts as an interface layer into the node graph, the question of whether this chat content should be permanent remains open. Off the top of my head (2025-10-28), I suspect that chat content should be transient — everything that you don’t want to lose should be encoded in nodes. However, would you open with a fresh chat every time? Would you keep a single chat that runs on forever? See also Static links to nodes.
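
Under the transient-chat assumption, one rough sketch of the split: chat messages live only in memory and point into the graph, and the only durable write path is promoting content into a node. Every name here (GraphNode, NodeStore, ChatSession, promote) is hypothetical.

```typescript
// Sketch only: illustrative names under the transient-chat assumption.

interface GraphNode {
  id: string;         // stable identifier; nodes are the durable layer
  statement: string;  // the idea the node encodes
  createdAt: string;
}

interface NodeStore {
  get(id: string): GraphNode | undefined;
  put(node: GraphNode): void;  // the only durable write path
}

interface ChatMessage {
  role: "user" | "assistant";
  text: string;
  citedNodes: string[];  // the chat points into the graph but is not itself stored
}

// A chat session holds its messages in memory only; discarding the session loses them.
// Anything worth keeping has to be encoded into a node before the chat goes away.
class ChatSession {
  private messages: ChatMessage[] = [];

  constructor(private store: NodeStore) {}

  say(message: ChatMessage): void {
    this.messages.push(message);
  }

  // Promote a piece of chat content into a durable node.
  promote(statement: string): GraphNode {
    const node: GraphNode = {
      id: `node-${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 8)}`,
      statement,
      createdAt: new Date().toISOString(),
    };
    this.store.put(node);
    return node;
  }
}
```

If something like this holds, the fresh-chat-versus-endless-chat question becomes mostly a presentation question, since nothing load-bearing lives in the chat itself.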

4. Content is structured as a conversation

Chatbots structure content as linear back-and-forth exchanges; the fundamental unit of content is the chat. They are not designed around the structure of the ideas themselves in the way that, for example, Wikipedia is. This problem is related to the others as well: it has consequences for both the sharing of content and the contribution of content.