AD4M

is a set of minimal assumptions to make all apps interoperate. site , github

YOUTUBE 5BLDCWg6GZI AD4M Explained in 12 minutes

3:07 They are like the agent’s second brain or Zettelkasten.

6:34 Trick No.2 Social DNA

7:24 That can easily be done through the Prolog engine that AD4M spawns for each perspective.
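A minimal sketch of what that could look like from app code, assuming the TypeScript client (package name as in the 0.2.x era) and that PerspectiveProxy exposes byUUID() and an infer() method that hands a Prolog query to the Perspective's engine; todo/1 is a made-up predicate standing in for whatever the Social DNA defines:

import { Ad4mClient } from "@perspect3vism/ad4m";

// Hypothetical: run a Prolog query against one Perspective's engine
async function findTodos(ad4m: Ad4mClient, perspectiveUuid: string) {
    const perspective = await ad4m.perspective.byUUID(perspectiveUuid);
    if (!perspective) throw new Error("unknown perspective");
    // "todo(X)" is assumed to be defined by Social DNA added to this Perspective
    return await perspective.infer("todo(X)");
}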

The Agent-Centric Distributed Application Meta-ontology or just: Agent-Centric DApp Meta-ontology github

Neighbourhood

Publishing that local Perspective by turning it into a Neighbourhood github

> The back-bone of a Neighbourhood is a LinkLanguage - a Language that enables the sharing and thus synchronizing of links (see LinksAdapter in Language.ts). While there can and should be many different implementations with different trade-offs and features (like membranes [⇒ semipermeable] etc.), there currently is one fully implemented and Holochain based LinkLanguage with the name Social Context. It is deployed on the current test network (Language Language v0.0.5) under the address: QmZ1mkoY8nLvpxY3Mizx8UkUiwUzjxJxsqSTPPdH8sHxCQ.

Creating our unique LinkLanguage clone through templating

But we should not just use this publicly known Language as the back-bone for our new Neighbourhood, since we need a unique clone. So what we want is to use this existing Language as a template and create a new copy with the same code but a different UUID and/or name, in order to create a fresh space for our new Neighbourhood.
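A hedged sketch of that templating step via the TypeScript client, assuming the applyTemplateAndPublish and publishFromPerspective calls as found in the AD4M client around this version; the uid and name values are made up for illustration:

import { Ad4mClient, Perspective } from "@perspect3vism/ad4m";

async function createNeighbourhood(ad4m: Ad4mClient, perspectiveUuid: string) {
    // Template the public Social Context LinkLanguage into our own unique clone
    const templated = await ad4m.languages.applyTemplateAndPublish(
        "QmZ1mkoY8nLvpxY3Mizx8UkUiwUzjxJxsqSTPPdH8sHxCQ", // source LinkLanguage from the quote above
        JSON.stringify({ uid: "84a329-77384c", name: "Our Neighbourhood LinkLanguage" }) // hypothetical uid/name
    );

    // Turn the local Perspective into a Neighbourhood backed by that clone
    const neighbourhoodUrl = await ad4m.neighbourhoods.publishFromPerspective(
        perspectiveUuid,
        templated.address,
        new Perspective() // optional shared meta information
    );
    return neighbourhoodUrl;
}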

AD4M_0.2.10_x64.dmg

Install AD4M github (ADAM_Launcher_0.5.1_x64)

⇀ copy Holochain binary

[2023-01-17T18:39:27.090805+01:00] INFO - Free port: 12000
[2023-01-17T18:39:27.215959+01:00] INFO - init command by copy holochain binary
[2023-01-17T18:39:31.819982+01:00] INFO - AD4M executor starting with version: 0.2.10
[2023-01-17T18:39:31.820189+01:00] INFO - Starting ad4m core with path: /Users/rgb/.ad4m
[2023-01-17T18:39:31.820211+01:00] INFO - => AD4M core language addresses: languageLanguage bundle (hidden) + [ […]

⇀ AD4M executor

⇀ ad4m core

⇀ AD4M core language addresses

Introduction

The name AD4M is an acronym for The Agent-Centric Distributed Application Meta-ontology or just: Agent-Centric DApp Meta-ontology. archive

AD4M is a meta-ontology and a spanning layer - an upper extension to the TCP/IP stack. But AD4M is also a framework for building apps - mainly social apps, which renders it an engine (like a game engine) for social networks and collaboration apps. With its ability to bootstrap specific ontologies from its meta-ontology, it is a malleable social network itself. It could be the last one.

At its core, AD4M is just an idea, a formalization of a different approach, a complete set of basic concepts that together span a new paradigm of (distributed) software architecture. It tries to capture the quintessence of what really goes on in human communication networks, in order to shape the digital space around that reality - instead of having the technology dictate how we communicate.

Putting the human first and starting from a pure agent-centric approach, AD4M deconstructs the concept of applications and suggests a different principle for the creation and maintenance of coherence in communication networks: social contexts (who am I talking to?) and shared subjective meaning, instead of assumed objectivity implied by monolithic apps that don't differentiate between agents' different renderings and associations of the same data or event or subject.

Meta-Ontology

What really goes on is that agents/humans exchange expressions of various (and evolving) languages in order to share their partial perspectives/associations with each other and thus convey meaning, build meaning, make sense of things together.

In order to suggest a minimal assumption for maximum buy-in, AD4M carves out this quintessence of what human networks and the internet have in common, by postulating an ontology of three basic and irreducible concepts:

* Agents
* Languages, and
* Perspectives.

Languages include Expressions in their definition, and Perspectives include Links (Link Expressions, to be precise).

Through the combination of these basic concepts, two important derived concepts are constructed:

* Neighbourhoods (i.e. shared Perspectives)
* Social Organisms (i.e. fractal super-agents, defined through shared perspectives and shared interaction patterns/social DNA).

[…]

~

jsipfs cat /ipfs/QmRaaUwTNfwgFZpeUy8qrZwrp2dY4kCKmmB5xEqvH3vtD1/readme

https://js.ipfs.tech/

npm install ipfs -g

jsipfs cat QmPChd2hVbrJ6bfo3WBcTW4iZnpHm8TEzWkLHmLpXhF68A

⇒ no IPFS repo found in /Users/rgb/.jsipfs.
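⇒ presumably the repo just needs to be created first: with the js-ipfs CLI that should be jsipfs init, which writes a fresh repo to ~/.jsipfs before cat can work (command assumed from the js-ipfs CLI, not verified here).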

Core Concepts

# Agents

…are built around DID - Decentralized Identifier . Users can bring their existing identity or have AD4M create a new one. Conceptually, AD4M agents are modelled as something that can speak and that can listen. Agents speak by creating Expressions of AD4M Languages, whereby these Expressions get signed by the agent's DID key. AD4M agents also have a publicly shared Perspective that other agents can see just by resolving their DID URI. This Perspective is like the agent's semantic web page, consisting of statements the agent chooses to share with the world - either about themselves (acting as a public profile used by various apps) or about anything else. Finally, AD4M agents declare a direct message Language, an AD4M Language they choose to be contacted with for receiving messages. AD4M's built-in Agent-Language resolves DID URIs to AD4M Expressions that look like this:

{
    did: "did:key:zQ3shNWd4bg67ktTVg9EMnnrsRjhkH6cRNCjRRxfTaTqBniAf",
    perspective: { links: [] },
    directMessageLanguage: "lang://QmZ9Z9Z5yZsegxArToww5zmwtPpojXN6zXJsi7WwMUa8"
}

(see API docs about Agent)

⇒ docs.ad4m.dev Host Error site
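A small hedged sketch of resolving such an agent Expression from app code, assuming the client's agent.byDID() call (client package and method names as of the 0.2.x era; the DID is the one from the example above):

import { Ad4mClient } from "@perspect3vism/ad4m";

async function lookupAgent(ad4m: Ad4mClient) {
    // Resolve a DID URI through the built-in Agent Language
    const agent = await ad4m.agent.byDID("did:key:zQ3shNWd4bg67ktTVg9EMnnrsRjhkH6cRNCjRRxfTaTqBniAf");
    console.log(agent?.directMessageLanguage); // the Language this agent wants to be contacted with
    console.log(agent?.perspective?.links);    // the agent's publicly shared Perspective
}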

# Languages

…encapsulate the actual technology used to communicate, like Holochain or IPFS, and enable Agents to create and share Expressions. Expressions are referenced via a URI of the kind: <language>://<language specific expression address> (with special cases like DID URIs being parsed as such and resolved through the Agent Language). AD4M resolves these URIs by first looking up the Language via its hash (and potentially downloading the Language through the built-in Language of Languages) and then asking the Language for the Expression with the given address. Languages are distributed and interpreted as JavaScript modules. AD4M passes proxy objects for the managed Holochain, IPFS, etc. instances into each Language, so Language developers can use these technologies without having to set them up or manage them themselves.

// Example of a Language that uses the Holochain proxy object
export default async function create(context: LanguageContext): Promise<Language> {
    const Holochain = context.Holochain as HolochainLanguageDelegate;
    await Holochain.registerDNAs([{ file: DNA, nick: DNA_NICK }]);
    // ...
}

// ...inside the Language's ExpressionAdapter:
async get(expressionAddress: Address): Promise<Expression> {
    const expression = await this.#DNA.call(
        DNA_NICK,
        "zome_name",
        "get_expression_zome_function_name",
        expressionAddress
    );
    return expression;
}

(Read the section in the docs about how to write AD4M Languages)
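From the app side, the same resolution goes through the client; a minimal sketch, assuming an ad4m.expression.get() call and an illustrative Expression URL:

import { Ad4mClient } from "@perspect3vism/ad4m";

async function readExpression(ad4m: Ad4mClient, url: string) {
    // e.g. url = "<language>://<language specific expression address>"
    const expression = await ad4m.expression.get(url);
    // Expressions come back signed: author DID, timestamp, data and proof
    console.log(expression.author, expression.timestamp, expression.data);
}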

# Perspectives

…are local and private graph databases. They represent context and associations between Expressions. They consist of a list of RDF/semantic-web-like triplets (subject-predicate-object) called links, because all three items are just Expression URIs pointing to Expressions of arbitrary Languages. Perspectives are like Solid's pods, but they are agent-centric:

* Atomic Perspectives belong to and are stored with a single Agent.
* Links inside Perspectives are Link Expressions, so they include their provenance and cryptographic signature.

While Expressions are objective (every agent resolving their URI renders the same data), Perspectives represent subjective associations between objective Expressions. (See Getting Started section above for how to deal with Perspectives)
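To make that concrete, a hedged sketch of creating a Perspective and adding one link triplet, assuming the client's perspective.add() and the Link class (package and method names as of the 0.2.x era; the URIs are placeholders):

import { Ad4mClient, Link } from "@perspect3vism/ad4m";

async function addNote(ad4m: Ad4mClient) {
    // A Perspective is a local, private graph database
    const perspective = await ad4m.perspective.add("my notes");

    // Each link is a subject-predicate-object triplet of Expression URIs;
    // AD4M wraps it into a signed Link Expression when it is stored.
    await perspective.add(
        new Link({
            source: "ad4m://self",
            predicate: "sioc://likes",       // placeholder predicate URI
            target: "literal://string:AD4M", // placeholder target Expression
        })
    );
}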