Hyperdimensional Hashing: A Robust and Efficient Dynamic Hash Table
@article{Heddes2022HyperdimensionalHA,
  title   = {Hyperdimensional Hashing: A Robust and Efficient Dynamic Hash Table},
  author  = {Mike Heddes and Igor O. Nunes and Tony Givargis and Alexandru Nicolau and Alexander V. Veidenbaum},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2205.07850}
}
Most cloud services and distributed applications rely on hashing algorithms that allow dynamic scaling of a robust and efficient hash table. Examples include AWS, Google Cloud and BitTorrent. Consistent and rendezvous hashing are algorithms that minimize key remapping as the hash table resizes. While memory errors in large-scale cloud deployments are common, neither algorithm offers both efficiency and robustness. Hyperdimensional Computing is an emerging computational model that has inherent…
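To make concrete why remapping matters, here is a small illustration (ours, not from the paper): with naive modulo placement, growing the table from 10 to 11 buckets moves nearly every key, which is exactly the behavior consistent and rendezvous hashing are designed to avoid.

```python
# Illustration (not from the paper): naive modulo hashing remaps almost
# every key when the table grows; consistent and rendezvous hashing keep
# the moved fraction near the ideal 1/(n+1).
import hashlib

def bucket(key: str, n: int) -> int:
    # Stable hash; Python's built-in hash() is salted per process.
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return h % n

keys = [f"key-{i}" for i in range(10_000)]
before = {k: bucket(k, 10) for k in keys}
moved = sum(before[k] != bucket(k, 11) for k in keys)
print(f"{moved / len(keys):.0%} of keys remapped")  # ~91%, vs ~9% ideally
```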
2 Citations
Torchhd: An Open-Source Python Library to Support Hyperdimensional Computing Research
- Computer Science, ArXiv
- 2022
Hyperdimensional Computing (HDC) is a neuro-inspired computing framework that exploits high-dimensional random vector spaces. HDC uses extremely parallelizable arithmetic to provide computational…
An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing
- Computer Science, ArXiv
- 2022
This work proposes an improvement for level-hypervectors, used to encode real numbers, and introduces a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
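As a rough sketch of the level-hypervector idea this summary refers to (our simplified rendition, not the authors' code): a real-valued range is quantized into levels whose vectors share most coordinates with their neighbors, so similarity decays smoothly with the distance between the encoded values.

```python
# Simplified level-hypervector sketch (not the paper's code): consecutive
# levels flip disjoint slices of coordinates, so nearby levels stay similar
# and distant levels become nearly orthogonal.
import numpy as np

def level_hypervectors(num_levels: int, dim: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    base = rng.choice([-1, 1], size=dim)
    flip_order = rng.permutation(dim)[: dim // 2]      # half the coordinates
    chunks = np.array_split(flip_order, num_levels - 1)
    levels = [base.copy()]
    for chunk in chunks:                               # one slice per step
        nxt = levels[-1].copy()
        nxt[chunk] *= -1
        levels.append(nxt)
    return levels

levels = level_hypervectors(num_levels=5, dim=10_000)
sim = lambda a, b: (a @ b) / len(a)
print(sim(levels[0], levels[1]), sim(levels[0], levels[4]))  # ~0.75 vs ~0.0
```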
References
Showing 1–10 of 24 references
Consistent hashing and random trees: distributed caching protocols for relieving hot spots on the World Wide Web
- Computer Science, STOC '97
- 1997
A family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots, based on a special kind of hashing called consistent hashing.
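A minimal consistent-hashing ring, sketched in Python under our own simplifications (SHA-256 ring points and a fixed virtual-node count; not the paper's construction):

```python
# Minimal consistent-hashing ring: keys and nodes hash onto the same circle,
# and a key belongs to the first node clockwise from its point, so adding a
# node only claims keys adjacent to its new points.
import bisect
import hashlib

def _h(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class ConsistentHashRing:
    def __init__(self, nodes=(), vnodes: int = 100):
        self.vnodes = vnodes          # virtual nodes smooth the load
        self._ring = []               # sorted list of (point, node)
        for n in nodes:
            self.add(n)

    def add(self, node: str):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (_h(f"{node}#{i}"), node))

    def remove(self, node: str):
        self._ring = [(p, n) for p, n in self._ring if n != node]

    def lookup(self, key: str) -> str:
        points = [p for p, _ in self._ring]
        i = bisect.bisect(points, _h(key)) % len(self._ring)
        return self._ring[i][1]

ring = ConsistentHashRing(["a", "b", "c"])
owner_before = {k: ring.lookup(k) for k in map(str, range(1000))}
ring.add("d")  # only ~1/4 of keys should move, all onto the new node
moved = sum(owner_before[k] != ring.lookup(k) for k in owner_before)
print(f"{moved / 1000:.0%} of keys moved")
```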
Consistent Hashing with Bounded Loads
- Computer Science, Mathematics, SODA
- 2018
This paper aims to design hashing schemes that achieve any desirable level of load balancing while minimizing the number of movements under any addition or removal of servers or clients, and finds a hashing scheme with no load above ⌈cm/n⌉, referred to as the capacity of the bins.
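A worked instance of that bound (our illustration): with m = 1000 keys, n = 8 bins, and balancing parameter c = 1.25, no bin may exceed ⌈cm/n⌉ = 157 keys; a key whose preferred bin is full is forwarded to the next bin with spare capacity.

```python
# Illustration of the ⌈cm/n⌉ capacity in consistent hashing with bounded
# loads: full bins forward the key clockwise to the next bin with room.
import math

m, n, c = 1000, 8, 1.25
capacity = math.ceil(c * m / n)      # ⌈1.25 * 1000 / 8⌉ = 157
loads = [0] * n

def place(key_hash: int) -> int:
    b = key_hash % n                 # preferred bin (ring simplified away)
    while loads[b] >= capacity:      # linear probe past full bins
        b = (b + 1) % n
    loads[b] += 1
    return b
```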
Hash-Based Virtual Hierarchies for Scalable Location Service in Mobile Ad-hoc Networks
- Computer Science, Mob. Networks Appl.
- 2009
This work presents VHLS, a new distributed location service protocol, that features a dynamic location server selection mechanism and adapts to network traffic workload, minimizing the overall location service overhead.
Chord: a scalable peer-to-peer lookup protocol for internet applications
- Computer Science, TNET
- 2003
Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
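To make the logarithmic scaling concrete, here is a toy finger table (a sketch of Chord's routing state, not the full protocol): entry i of node n points to the first live node at identifier n + 2^i, so each hop roughly halves the remaining distance to the key.

```python
# Simplified Chord finger table on an m-bit identifier circle (a sketch,
# not the full protocol): entry i holds successor((n + 2^i) mod 2^m).
M = 8                              # 8-bit identifier space: 256 ids
nodes = sorted([5, 30, 77, 130, 201])

def successor(ident: int) -> int:
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]                # wrap around the circle

def finger_table(n: int):
    return [successor((n + 2**i) % 2**M) for i in range(M)]

print(finger_table(5))   # each entry roughly doubles the covered distance
```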
Using name-based mappings to increase hit rates
- Computer Science, TNET
- 1998
This paper presents an analysis of HRW (highest random weight) mapping and validates it with simulation results, showing that it gives faster service times than traditional request-allocation schemes such as round-robin or least-loaded and adapts well to changes in the set of servers.
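HRW, better known today as rendezvous hashing, is compact enough to sketch directly (our illustration, not the paper's code): every (key, server) pair gets a pseudo-random weight, and the key goes to the highest-weight server, so removing a server only remaps the keys it owned.

```python
# Rendezvous (HRW) hashing sketch: hash each (key, server) pair to a weight
# and assign the key to the highest-weight server.
import hashlib

def _w(key: str, server: str) -> int:
    return int.from_bytes(
        hashlib.sha256(f"{key}|{server}".encode()).digest()[:8], "big"
    )

def hrw_lookup(key: str, servers: list[str]) -> str:
    return max(servers, key=lambda s: _w(key, s))

servers = ["s1", "s2", "s3", "s4"]
before = {k: hrw_lookup(k, servers) for k in map(str, range(1000))}
after = {k: hrw_lookup(k, servers[:-1]) for k in before}  # drop "s4"
moved = sum(before[k] != after[k] for k in before)
print(f"{moved / 1000:.0%} moved")  # only the keys that lived on "s4"
```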
Maglev: A Fast and Reliable Software Network Load Balancer
- Computer Science, NSDI
- 2016
Maglev is Google's network load balancer, a large distributed software system that runs on commodity Linux servers that is specifically optimized for packet processing performance.
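The consistent-hashing piece of Maglev is its lookup-table population step. The sketch below follows the population algorithm described in the NSDI '16 paper, with our own choices of hash function and table size:

```python
# Sketch of Maglev's lookup-table population (per the NSDI '16 paper,
# simplified): each backend walks the table in its own permutation, and
# backends take turns claiming their next empty preferred slot.
import hashlib

M = 65537  # table size; must be prime so every skip is a full permutation

def _h(s: str, salt: str) -> int:
    return int.from_bytes(hashlib.sha256(f"{salt}:{s}".encode()).digest()[:8], "big")

def populate(backends: list[str]) -> list[str]:
    offsets = [_h(b, "offset") % M for b in backends]
    skips = [_h(b, "skip") % (M - 1) + 1 for b in backends]
    nexts = [0] * len(backends)
    table, filled = [None] * M, 0
    while filled < M:
        for i, b in enumerate(backends):
            # advance backend i to its next preferred, still-empty slot
            while True:
                slot = (offsets[i] + nexts[i] * skips[i]) % M
                nexts[i] += 1
                if table[slot] is None:
                    break
            table[slot] = b
            filled += 1
            if filled == M:
                break
    return table

table = populate(["b1", "b2", "b3"])
# a packet's 5-tuple hash then picks a backend: table[hash % M]
```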
Dynamo: Amazon's highly available key-value store
- Computer Science, SOSP
- 2007
Dynamo is presented, a highly available key-value storage system that some of Amazon's core services use to provide an "always-on" experience. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers.
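The object versioning mentioned here is built on vector clocks; a rough sketch of the reconciliation rule (our simplification, not Amazon's code):

```python
# Rough sketch of vector-clock versioning as used by Dynamo (simplified):
# each replica increments its own counter on write, and two versions
# conflict when neither clock dominates the other.
def dominates(a: dict, b: dict) -> bool:
    """True if clock `a` is at least as new as `b` on every replica."""
    return all(a.get(r, 0) >= c for r, c in b.items())

def reconcile(a: dict, b: dict):
    if dominates(a, b):
        return a                   # a supersedes b
    if dominates(b, a):
        return b
    return (a, b)                  # concurrent: the application must merge

v1 = {"replica_x": 2, "replica_y": 1}
v2 = {"replica_x": 1, "replica_y": 2}
print(reconcile(v1, v2))           # conflict -> both versions returned
```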
Hardware Optimizations of Dense Binary Hyperdimensional Computing: Rematerialization of Hypervectors, Binarized Bundling, and Combinational Associative Memory
- Computer Science, ACM J. Emerg. Technol. Comput. Syst.
- 2019
Hardware techniques for optimizing HD computing, provided as a synthesizable open-source VHDL library, are proposed to enable co-located implementation of both learning and classification tasks on only a small portion of Xilinx UltraScale FPGAs and to significantly improve classification throughput.
Ultra-efficient processing in-memory for data intensive applications
- Computer Science, 2017 54th ACM/EDAC/IEEE Design Automation Conference (DAC)
- 2017
This paper proposes an ultra-efficient approximate processing in-memory architecture, called APIM, which exploits the analog characteristics of non-volatile memories to support addition and multiplication inside the crossbar memory, while storing the data.
High-Dimensional Computing as a Nanoscalable Paradigm
- Computer Science, IEEE Transactions on Circuits and Systems I: Regular Papers
- 2017
We outline a model of computing with high-dimensional (HD) vectors, where the dimensionality is in the thousands. It is built on ideas from traditional (symbolic) computing and artificial neural…
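A minimal sketch of that model (our illustration): items are random bipolar hypervectors, binding is elementwise multiplication, bundling is elementwise addition, and similarity is a normalized dot product.

```python
# Minimal HD computing sketch: random bipolar hypervectors, bind by
# elementwise multiply, bundle by elementwise add, compare by a
# normalized dot product.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000
hv = lambda: rng.choice([-1, 1], size=D)

key, value = hv(), hv()
pair = key * value                 # bind: dissimilar to both inputs
memory = pair + hv() + hv()        # bundle several items into one vector

sim = lambda a, b: (a @ b) / D
# unbinding with the key recovers something similar to the value
print(sim(key * memory, value))    # well above chance
print(sim(hv(), value))            # ~0 for an unrelated random vector
```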