Limits to Aggregation

Jerry Michalski has assembled a staggeringly large mind-map in a tool called TheBrain. What would happen if that were dropped into the Super Collaborator?


I'm thinking we have three pools of information: graphs in the shared beam, selected graphs merged in the target, and whatever we offer to download to be uploaded later. Moving graphs between these spaces is important, but making sense of merged graphs in the target is where we will see surprise: "Oh, I never thought of those as connected."
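To make that concrete, here is a minimal sketch of the three pools, assuming a graph is nothing more than sets of node names and edge strings. All the names here (beam, target, downloads, mergeSelected) are hypothetical illustrations, not the Super Collaborator's actual code.

```typescript
// A graph as bare node and edge sets; an edge is written "a--b".
type Graph = { nodes: Set<string>; edges: Set<string> };

const emptyGraph = (): Graph => ({ nodes: new Set(), edges: new Set() });

// The three pools: graphs shared by others, the merged selection,
// and whatever we set aside to download and upload again later.
const beam: Map<string, Graph> = new Map();
let target: Graph = emptyGraph();
const downloads: Graph[] = [];

// Rebuild the target from whichever beam graphs are currently checked.
function mergeSelected(checked: Iterable<string>): Graph {
  const merged = emptyGraph();
  for (const name of checked) {
    const g = beam.get(name);
    if (!g) continue;
    g.nodes.forEach(n => merged.nodes.add(n));
    g.edges.forEach(e => merged.edges.add(e));
  }
  return merged;
}
```

The surprise shows up when an edge from one contributor lands on a node that a different contributor also named.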

Let's say the target area gets confusingly full. How do we pick out the surprising part and move just that part? Some kind of multi-select? Maybe based on the common heritage of assemblies? Maybe schemas are joined by collisions, as we sometimes see wiki sites joined by a single fork?
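One plausible multi-select, under the same toy graph shape as above, is to grab everything within a few hops of a surprising node and move just that neighborhood. The function name and hop limit are illustrative assumptions, not a settled design.

```typescript
// Breadth-first collection of every node within `hops` links of `start`.
function neighborhood(g: Graph, start: string, hops: number): Set<string> {
  const picked = new Set<string>([start]);
  let frontier = new Set<string>([start]);
  for (let i = 0; i < hops && frontier.size > 0; i++) {
    const next = new Set<string>();
    for (const edge of g.edges) {
      const [a, b] = edge.split("--");
      if (frontier.has(a) && !picked.has(b)) { picked.add(b); next.add(b); }
      if (frontier.has(b) && !picked.has(a)) { picked.add(a); next.add(a); }
    }
    frontier = next;
  }
  return picked;
}
```

Selection by common heritage would swap the hop test for a check on which assembly each node arrived from.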

I think Jerry's brain is large, but has been homogenized by the simplicity of his tool. There may be no basis for selection other than the neighborhoods present in his link structure.

Graphviz can handle graphs much larger than it can usefully draw. That capacity comes from its old-school C design, written for an era when one could easily run out of memory and the program had to stay mindful of that possibility.

We may never be able to handle Jerry's brain. But if we inch up toward too big to be useful, then prune enough before proceeding, we should be able to get to places that are hard to imagine right now.
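"Prune enough before proceeding" could be as blunt as dropping low-degree nodes until what remains is drawable. A sketch, again assuming the toy Graph shape above and a hypothetical degree threshold:

```typescript
// Keep only nodes with at least `minDegree` links, and the edges among them.
function prune(g: Graph, minDegree: number): Graph {
  const degree = new Map<string, number>();
  for (const edge of g.edges) {
    const [a, b] = edge.split("--");
    degree.set(a, (degree.get(a) ?? 0) + 1);
    degree.set(b, (degree.get(b) ?? 0) + 1);
  }
  const nodes = new Set([...g.nodes].filter(n => (degree.get(n) ?? 0) >= minDegree));
  const edges = new Set([...g.edges].filter(e => {
    const [a, b] = e.split("--");
    return nodes.has(a) && nodes.has(b);
  }));
  return { nodes, edges };
}
```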

The check-box method of merging has the advantage that when checking something in the beam ruins the target, we can just uncheck it and look elsewhere.
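Read mechanically, the advantage is that the target is a pure function of the checked set rather than something edited in place, so an uncheck is a complete undo. A sketch using mergeSelected from the first example; the toggle function is an assumption about how the boxes might be wired.

```typescript
const checked = new Set<string>();

// Flip one check box and rebuild the target; nothing accumulates damage.
function toggle(name: string): Graph {
  if (checked.has(name)) checked.delete(name);
  else checked.add(name);
  target = mergeSelected(checked);
  return target;
}
```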

Here is an interesting contrast: in wiki one can get further by thinking carefully about each step. In the Super Collaborator things will happen much faster than that, so the ability to try things and move on will be a skill we will have to develop.