We can use a wiki site as a collection point for data posted with lightweight mechanisms such as curl.
We add a plugin to a page that receives authenticated PUTs through new endpoints the plugin's server-side component adds to the running server at startup. Access keys are kept in the site's status directory, safely away from the freely shared pages.
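A minimal sketch of this mechanism, assuming an Express-style server and illustrative names for the route, header, key file, and output file (none of these are the plugin's actual interface):

```typescript
// Sketch only: route, header, and file names below are illustrative assumptions.
// A sender might post a record with something like:
//   curl -X PUT -H "X-Access-Key: $KEY" -H "Content-Type: application/json" \
//     -d '{"host":"db01","metric":"load","value":1.7}' \
//     http://wiki.example.com/plugin/collector/data
import * as fs from 'fs'
import * as path from 'path'
import express from 'express'

const app = express()
app.use(express.json())

// Access keys live in the site's status directory, outside the shared pages.
const statusDir = process.env.WIKI_STATUS_DIR || path.join('.wiki', 'status')
const keys: string[] = JSON.parse(
  fs.readFileSync(path.join(statusDir, 'collector-keys.json'), 'utf8'))

// New endpoint added to the running server when the server-side component starts.
app.put('/plugin/collector/data', (req, res) => {
  const key = req.get('X-Access-Key')
  if (!key || !keys.includes(key)) {
    res.status(401).send('bad access key')
    return
  }
  // Append the posted record; a real component would fold it into a wiki page.
  fs.appendFileSync(path.join(statusDir, 'collected.jsonl'),
    JSON.stringify(req.body) + '\n')
  res.json({ ok: true })
})

app.listen(3000)
```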
Aggregating heterogeneous datasets is the first step toward understanding large and complex datacenter operations. We're piloting a second generation of such a system using exactly this method of collection.
We collect structural data within these pages where it can be interactively transformed and assembled into a whole destined for downstream inquiry and analysis.
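As one illustration, assuming a page is JSON with a list of story items and that a data-carrying item holds tab-delimited rows in its text field (the item types, ids, and text format here are assumptions, not the exact schema in use):

```typescript
// Illustrative only: how structured records might sit inside a page,
// alongside prose, where later transforms can pick them up.
interface Item { type: string; id: string; text: string }
interface Page { title: string; story: Item[] }

const page: Page = {
  title: 'Datacenter Observations',
  story: [
    { type: 'paragraph', id: 'a1b2c3d4e5f60718',
      text: 'Load samples posted overnight by curl.' },
    { type: 'data', id: '1122334455667788',
      text: 'host\tmetric\tvalue\ndb01\tload\t1.7\ndb02\tload\t0.9' }
  ]
}

console.log(JSON.stringify(page, null, 2))
```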
Here we collect various mentions of the work we've done observing software by aggregating the metadata produced throughout its creation and operation. See Tips for Modeling.
We seek explanations of what a system does and why. Microservices complicate this aspiration when mixed technologies have been assembled over years by evolving teams for changing business purposes.