Leemon Baird

Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To obtain a better understanding of these properties, a theoretical framework, consisting of a measure of interference and a measure of network …
We present algorithms that permit increased efficiency in the calculation of conservation functions for cellular automata, and report results, obtained from implementations of these algorithms, that establish conservation laws for 1-D cellular automata of higher order than any previously known. We introduce the notion of trivial and core conservation functions to …
Sprouts is a simple, yet analytically interesting, game first developed in 1967 by Michael Paterson and John Conway. Mathematicians have analyzed the game for various strategies and mathematical properties. This paper adds to the body of knowledge by presenting a proof that several problems in the Game of Sprouts are NP-complete. In addition, for anyone …
We present theorems that can be used for improved efficiency in the calculation of conservation functions for cellular automata. We report results obtained from implementations of algorithms based on these theorems that show conservation laws for 1-D cellular automata of higher order than any previously known. We introduce the notion of trivial and core …
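As a concrete illustration of what a cellular-automaton conservation law looks like (a textbook example, not the algorithms described above): elementary rule 184, the "traffic" rule, conserves the number of 1s under periodic boundary conditions. A minimal Python sketch, with arbitrary lattice size and seed:

```python
import numpy as np

RULE = 184
# Lookup table for the elementary CA: index = 4*left + 2*center + right.
table = [(RULE >> i) & 1 for i in range(8)]

def step(state):
    """One synchronous update with periodic (wrap-around) boundaries."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = 4 * left + 2 * state + right
    return np.array([table[i] for i in idx])

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 64)   # random initial configuration
total = s.sum()              # the conserved quantity: number of 1s
for _ in range(50):
    s = step(s)
assert s.sum() == total      # density is invariant under rule 184
```

The conservation function here is simply the cell sum; the theorems in the paper concern finding such functions (of higher order, i.e., over wider neighborhoods) systematically.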
An important problem for secure communication is that of achieving jam resistance, without any prior shared secret between the sender and receiver, and without limits on the assumed computational power of the attacker. To date, only one system has been proposed for this, the BBC system, which is based on coding theory using codes derived from arbitrary hash …
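The prefix-hashing idea behind such codes can be sketched as a toy in Python; the hash choice (SHA-256) and mark-space size here are illustrative assumptions, not the deployed system. Each prefix of a message sets one indelible mark; since a jammer can add marks but not erase them, every legitimate message survives decoding, at worst alongside spurious candidates:

```python
import hashlib

N = 1 << 16  # number of mark positions (illustrative size)

def h(prefix: str) -> int:
    """Hash a bit-string prefix to a mark position."""
    return int.from_bytes(hashlib.sha256(prefix.encode()).digest()[:4], "big") % N

def encode(msg: str) -> set:
    """Set one mark per nonempty prefix; marks from many senders can be OR'd."""
    return {h(msg[:i]) for i in range(1, len(msg) + 1)}

def decode(marks: set, length: int) -> list:
    """Depth-first search over prefixes whose marks are all present."""
    results, stack = [], [""]
    while stack:
        p = stack.pop()
        if len(p) == length:
            results.append(p)
            continue
        for bit in "01":
            if h(p + bit) in marks:
                stack.append(p + bit)
    return results

marks = encode("1011") | encode("0110")   # two messages share one channel
found = decode(marks, 4)
assert "1011" in found and "0110" in found
```

Jamming only enlarges `marks`, which can introduce false decodings but can never suppress a true one; that asymmetry is the source of the jam resistance.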
Slow learning of neural-network function approximators can frequently be attributed to interference, which occurs when learning in one area of the input space causes unlearning in another area. To mitigate the effect of unlearning, this paper develops an algorithm that adjusts the weights of an arbitrary, nonlinearly parameterized network such that the …
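A minimal demonstration of the interference phenomenon itself (illustrative only; this is not the paper's mitigation algorithm): train a small tanh network on one input region, then train only on a disjoint region, and observe that the outputs on the first region drift because the two regions share weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny one-hidden-layer network: y = w2 @ tanh(w1 @ x + b1)
w1 = rng.normal(0, 1, (8, 1)); b1 = np.zeros((8, 1)); w2 = rng.normal(0, 1, (1, 8))

def forward(x):
    return w2 @ np.tanh(w1 @ x + b1)

def sgd_step(x, y, lr=0.05):
    """One gradient step on mean squared error over the batch."""
    global w1, b1, w2
    h = np.tanh(w1 @ x + b1)
    err = w2 @ h - y
    gw2 = err @ h.T / x.shape[1]
    gh = (w2.T @ err) * (1 - h ** 2)
    gw1 = gh @ x.T / x.shape[1]
    gb1 = gh.mean(axis=1, keepdims=True)
    w2 -= lr * gw2; w1 -= lr * gw1; b1 -= lr * gb1

xa = np.linspace(-2, -1, 20).reshape(1, -1)   # region A
xb = np.linspace(1, 2, 20).reshape(1, -1)     # region B (disjoint)

for _ in range(500):                 # learn a target on region A
    sgd_step(xa, np.sin(xa))
before = forward(xa).copy()

for _ in range(500):                 # now train on region B only
    sgd_step(xb, np.cos(xb))
after = forward(xa)

drift = np.abs(after - before).max()
assert drift > 1e-6                  # region-A outputs changed: interference
```

A spatially local network would keep `drift` near zero; the amount of drift is what an interference measure quantifies.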