Composes a persistent “hive” memory that weighs both current state and swarm history:

$$ M_t = \alpha \, M_{t-1} + (1 - \alpha) \, S_t $$

$ \alpha $: Memory persistence coefficient ($0 < \alpha < 1$), the fraction of prior memory retained at each update.
Effect: The longer a memory persists and is reinforced by individuals, the more “solid” it becomes—a true shared lore or “swarm consensus artifact.”
Here’s a robust, extensible Python plugin for the Distributed Memory Fusion Operator (DMFO)—a canonical swarm intelligence tool designed to build and maintain persistent, collective memory in any hive, mesh, or distributed system. This module fuses the current swarm state with historical memory, with the “solidity” or influence of memory determined by the reinforcement factor (α): the higher α, the slower memories decay, and the more they become true “hive lore.”
🐝 Distributed Memory Fusion Operator (DMFO) — Plugin Code
```python
import numpy as np


class DistributedMemoryFusionOperator:
    """
    DMFO: Maintains a weighted fusion of current swarm state and all past group states.
    Models persistent collective memory as an exponentially reinforced consensus.
    """

    def __init__(self, alpha=0.94):
        """
        :param alpha: Memory persistence coefficient (0 < alpha < 1).
                      Higher means longer memory, more inertia.
        """
        self.alpha = alpha
        self.memory = None       # Collective memory state (starts undefined)
        self.memory_trace = []   # Full memory history (for analysis/replay)

    def update(self, S_curr):
        """
        :param S_curr: Current group state (e.g., CMSI output, swarm mean, array or scalar)
        :return: Updated shared memory state.
        """
        S_curr = np.asarray(S_curr, dtype=float)
        if self.memory is None:
            self.memory = S_curr.copy()
        else:
            self.memory = self.alpha * self.memory + (1 - self.alpha) * S_curr
        self.memory_trace.append(self.memory.copy())
        return self.memory

    def memory_history(self):
        """Returns history of distributed memory states."""
        return np.array(self.memory_trace)


# --- Example usage ---
if __name__ == "__main__":
    np.random.seed(44)
    dmfo = DistributedMemoryFusionOperator(alpha=0.92)  # 0.92 = very persistent!
    T = 40
    N_agents = 10
    memory_vals = []
    for t in range(T):
        if t < T // 3:
            S_now = np.random.normal(2.0, 0.4, N_agents)
        elif t < 2 * T // 3:
            S_now = np.random.normal(4.0, 0.3, N_agents)
        else:
            S_now = np.random.normal(3.0, 0.2, N_agents)
        S_group = np.mean(S_now)
        # update() returns a 0-d array; convert to float before formatting/plotting
        mem = float(dmfo.update(S_group))
        memory_vals.append(mem)
        print(f"t={t:2d} | Curr={S_group:.3f} | Memory={mem:.3f}")

    # Optional: Visualization, if running interactively
    try:
        import matplotlib.pyplot as plt

        plt.plot(memory_vals, label="Distributed Memory (DMFO)")
        plt.xlabel("Time step")
        plt.ylabel("Memory State")
        plt.title("DMFO: Evolution of Persistent Swarm Memory")
        plt.legend()
        plt.show()
    except ImportError:
        pass
```
📦 How this works & can be extended:
Core logic: At each update, fuses prior collective memory with new group input, weighted by α; if the input state persists or is reinforced, memory "solidifies."
Tuning persistence: Lower α = more forgetful; higher = more traditional, inertia-heavy memory (choose for your desired lore/remembering bias).
Input flexibility: Works with scalars, vectors, or even matrices (for spatial/hierarchical memory). Chain with CMSI, SCF, or any group-state aggregator.
Memory artifacts: If reinforced, certain events or states become stable "attractors"—numerical analogs to swarm lore, history, or institutional memory.
Integration: The DMFO output can steer current behavior, be queried for “consensus artifacts,” or bias leadership/queen decisions based on the weight of history.
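To illustrate the input-flexibility point above: the same update rule handles vector-valued states unchanged, and any group-state aggregator can feed it. In this sketch a plain per-dimension mean stands in for a CMSI/SCF-style aggregator (the `DMFO` class here is a minimal, self-contained restatement of the plugin's update rule, and the mean aggregator is a hypothetical stand-in, not part of the plugin itself):

```python
import numpy as np

# Minimal restatement of the DMFO update rule for a self-contained demo:
# memory <- alpha * memory + (1 - alpha) * state
class DMFO:
    def __init__(self, alpha=0.9):
        self.alpha = alpha
        self.memory = None

    def update(self, state):
        state = np.asarray(state, dtype=float)
        if self.memory is None:
            self.memory = state.copy()
        else:
            self.memory = self.alpha * self.memory + (1 - self.alpha) * state
        return self.memory

# Vector-valued swarm state: each of 10 agents reports a 3-D position.
rng = np.random.default_rng(0)
dmfo = DMFO(alpha=0.9)
for _ in range(50):
    positions = rng.normal(loc=[1.0, 2.0, 3.0], scale=0.1, size=(10, 3))
    group_state = positions.mean(axis=0)  # stand-in aggregator (e.g., CMSI output)
    mem = dmfo.update(group_state)

print(np.round(mem, 2))  # close to [1. 2. 3.]: the noise-averaged consensus
```

Because the fusion is pure elementwise arithmetic, matrices (spatial or hierarchical memory grids) drop in the same way.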
🧬 What this achieves:
Emergent, shared lore: Individual reinforcement creates systemic memory; no single agent needs to remember, but the swarm always does.
Adaptive historical learning: The system “remembers” relevant past events—outlasting noise, reversals, or brief chaos.
Programmable memory depth: Shape the collective’s “wisdom vs. adaptability” trade-off with α.
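The α trade-off above can be quantified: a state absorbed at one update retains weight α^k after k further updates, so its influence halves after ln(0.5)/ln(α) steps. A quick sketch of this "memory depth" for a few α values:

```python
import math

# Steps until an absorbed state's weight in the memory decays to half:
# alpha**k = 0.5  =>  k = ln(0.5) / ln(alpha)
for alpha in (0.5, 0.9, 0.94, 0.99):
    half_life = math.log(0.5) / math.log(alpha)
    print(f"alpha={alpha:.2f} -> memory half-life of about {half_life:5.1f} steps")
```

So the default α = 0.94 gives a half-life of roughly 11 updates, while α = 0.99 stretches it to about 69: a direct dial for the wisdom-vs-adaptability trade-off.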
Ready to power advanced swarm learning, organizational memory, or emergent creativity! If you want cross-pool replay, historical hooks, or meta-memory analysis, just specify—the fountain is always ready to spiral up new forms!