Eigen Memory Trees

@article{Rucker2022EigenMT,
  title={Eigen Memory Trees},
  author={Mark Rucker and Jordan T. Ash and John Langford and Paul Mineiro and Ida Momennejad},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.14077}
}

This work introduces the Eigen Memory Tree (EMT), a novel online memory model for sequential learning scenarios. EMTs store data at the leaves of a binary tree and route new samples through the structure using the principal components of previous experiences, facilitating efficient (logarithmic) access to relevant memories. We demonstrate that EMT outperforms existing online memory approaches, and provide a hybridized EMT-parametric algorithm that enjoys drastically improved performance over…
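The routing idea described above can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: each leaf holds memories up to a hypothetical capacity, and when a leaf overflows it is split along the top principal component of its stored examples, with the median projection as the threshold. Queries then descend the tree by projecting onto each node's stored direction, giving logarithmic-depth access under balanced splits.

```python
import numpy as np

class Node:
    def __init__(self, capacity=8):
        self.memories = []     # (x, y) pairs held at a leaf
        self.direction = None  # top principal component at an internal node
        self.threshold = None  # median projection used as the split point
        self.left = None
        self.right = None
        self.capacity = capacity

    def is_leaf(self):
        return self.direction is None

class EigenMemoryTree:
    """Illustrative sketch: route by principal component, split at the median."""

    def __init__(self, leaf_capacity=8):
        self.root = Node(leaf_capacity)

    def insert(self, x, y):
        leaf = self._route(np.asarray(x, dtype=float))
        leaf.memories.append((np.asarray(x, dtype=float), y))
        if len(leaf.memories) > leaf.capacity:
            self._split(leaf)

    def query(self, x):
        """Return the closest stored (x, y) memory in the leaf x routes to."""
        x = np.asarray(x, dtype=float)
        leaf = self._route(x)
        if not leaf.memories:
            return None
        return min(leaf.memories, key=lambda m: np.linalg.norm(m[0] - x))

    def _route(self, x):
        node = self.root
        while not node.is_leaf():
            node = node.left if x @ node.direction <= node.threshold else node.right
        return node

    def _split(self, leaf):
        X = np.stack([m[0] for m in leaf.memories])
        # Top principal component via SVD of the centered memories.
        _, _, vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        leaf.direction = vt[0]
        leaf.threshold = float(np.median(X @ leaf.direction))
        leaf.left, leaf.right = Node(leaf.capacity), Node(leaf.capacity)
        for x, y in leaf.memories:
            child = leaf.left if x @ leaf.direction <= leaf.threshold else leaf.right
            child.memories.append((x, y))
        leaf.memories = []
```

Because routing and redistribution use the same deterministic rule, a stored example always lands in the leaf its own features route to, so querying with a previously inserted point recovers that point. The real EMT additionally learns a scorer over retrieved memories and combines with a parametric learner, which this sketch omits.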

References

Showing 1-10 of 36 references

Robust high-dimensional memory-augmented neural networks

This work proposes a robust architecture that employs a computational memory unit as the explicit memory performing analog in-memory computation on high-dimensional (HD) vectors, while closely matching 32-bit software-equivalent accuracy.

Survey on Applications of Multi-Armed and Contextual Bandits

A taxonomy of common MAB-based applications is introduced and the state of the art for each of those domains is summarized, to identify important current trends and provide new perspectives pertaining to the future of this burgeoning field.

Episodic Reinforcement Learning with Associative Memory

A novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning about effective strategies; it achieves significantly higher sample efficiency and outperforms state-of-the-art episodic reinforcement learning models.

Ferroelectric ternary content-addressable memory for one-shot learning

It is shown that ternary content-addressable memories (TCAMs) can be used as attentional memories, in which the distance between a query vector and each stored entry is computed within the memory itself, thus avoiding data transfer.

Revisiting kd-tree for Nearest Neighbor Search

This work empirically validates the search accuracy and the query-time guarantees of the proposed schemes, demonstrating significantly improved scaling at the same level of accuracy.

X-MANN: A Crossbar based Architecture for Memory Augmented Neural Networks

This work proposes X-MANN, a memory-centric crossbar-based architecture that is specialized to match the compute characteristics observed in MANNs, and designs a transposable crossbar processing unit that can efficiently perform the different computational kernels of MANNs.

How Complex is your classification problem? A survey on measuring classification complexity

This paper surveys and analyzes measures that can be extracted from training datasets to characterize the complexity of the respective classification problems, and provides implementations of these measures in an R package named Extended Complexity Library (ECoL).

Fast deep reinforcement learning using online adjustments from the past

EVA shifts the value predicted by a neural network with an estimate of the value function found by prioritised sweeping over experience tuples from the replay buffer near the current state, allowing deep reinforcement learning agents to rapidly adapt to the experience in their replay buffer.

Contextual Memory Trees

This work designs and studies the Contextual Memory Tree, a learning memory controller that inserts new memories into an experience store of unbounded size and efficiently queries for memories from that store, supporting logarithmic-time insertion and retrieval operations.

Episodic Memory Deep Q-Networks

This paper presents a simple yet effective biologically inspired RL algorithm called Episodic Memory Deep Q-Networks (EMDQN), which leverages episodic memory to supervise an agent during training, leading to better sample efficiency and a higher likelihood of finding good policies.