J. H. Gunther

The popular Sudoku puzzle bears structural resemblance to the problem of decoding linear error-correction codes: the solution lies in a discrete set, and several constraints must be satisfied simultaneously. We express the constraint satisfaction using a Tanner graph and apply the belief propagation algorithm to this graph. Unlike conventional computer-based solvers, which rely on …
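
A minimal sketch of the graph construction described above, assuming Python/NumPy: 81 variable nodes (one per cell) connected to 27 constraint nodes (rows, columns, and 3x3 boxes). It builds only the Tanner-graph structure, not the belief propagation schedule itself.

```python
# Illustrative sketch (not the paper's code): Tanner-graph adjacency for a 9x9
# Sudoku, with one variable node per cell and one constraint node per row,
# column, and 3x3 box (27 "all-different" constraints in total).
import numpy as np

N = 9
cells = [(r, c) for r in range(N) for c in range(N)]          # 81 variable nodes

constraints = []                                              # 27 constraint nodes
constraints += [[(r, c) for c in range(N)] for r in range(N)]           # rows
constraints += [[(r, c) for r in range(N)] for c in range(N)]           # columns
constraints += [[(3*br + i, 3*bc + j) for i in range(3) for j in range(3)]
                for br in range(3) for bc in range(3)]                  # boxes

# Biadjacency matrix H (27 x 81): H[k, v] = 1 if cell v participates in constraint k.
cell_index = {cell: v for v, cell in enumerate(cells)}
H = np.zeros((len(constraints), len(cells)), dtype=int)
for k, members in enumerate(constraints):
    for cell in members:
        H[k, cell_index[cell]] = 1

# Each cell appears in exactly 3 constraints and each constraint covers 9 cells,
# mirroring the column/row weights of a parity-check matrix.
assert (H.sum(axis=0) == 3).all() and (H.sum(axis=1) == 9).all()
```
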
The equations for iteratively decoding low-density parity-check (LDPC) codes are generalized to compute joint probabilities of arbitrary sets of codeword bits and parity checks. The standard iterative LDPC decoder, which computes single-variable probabilities, is realized as a special case. Another specialization allows pairwise joint posterior …
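
As an illustration of the single-variable special case mentioned above, here is a hedged sum-product sketch for a toy parity-check matrix; the matrix and channel LLRs are invented for the example and are not from the paper.

```python
# Minimal sum-product (belief propagation) sketch: single-bit posterior LLRs for
# a tiny, illustrative parity-check code.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],          # toy (3 x 6) parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
llr_ch = np.array([-1.2, 0.8, -0.3, 2.1, -1.5, 0.4])   # hypothetical channel LLRs

m, n = H.shape
msg_vc = np.tile(llr_ch, (m, 1)) * H        # variable-to-check messages
msg_cv = np.zeros((m, n))                   # check-to-variable messages

for _ in range(10):                         # a fixed number of iterations
    # Check update: tanh rule, excluding the target variable from the product.
    for c in range(m):
        idx = np.flatnonzero(H[c])
        t = np.tanh(msg_vc[c, idx] / 2.0)
        for k, v in enumerate(idx):
            prod = np.prod(np.delete(t, k))
            msg_cv[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    # Variable update: channel LLR plus all incoming checks except the target one.
    for v in range(n):
        idx = np.flatnonzero(H[:, v])
        total = llr_ch[v] + msg_cv[idx, v].sum()
        for c in idx:
            msg_vc[c, v] = total - msg_cv[c, v]

posterior_llr = llr_ch + (msg_cv * H).sum(axis=0)   # single-variable posteriors
hard_decision = (posterior_llr < 0).astype(int)
```
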
The convergence of projection onto convex sets (POCS) algorithms is monotonic and exponential near the point of convergence, so it is reasonable to predict the limit point using a simple exponential regression. In circumstances where the convergence in each coordinate direction is, in fact, monotonic, this yields a significant acceleration of POCS.
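
The extrapolation idea can be made concrete: if a coordinate follows x_k ~ x_inf + c * r^k, three successive iterates determine the limit, which is the classical Aitken delta-squared formula. The sketch below assumes that model; the paper's exact regression may differ.

```python
# Per-coordinate limit prediction from three successive POCS iterates, assuming
# monotonic exponential convergence x_k = x_inf + c * r**k.
import numpy as np

def predict_limit(x0, x1, x2, eps=1e-12):
    """Aitken delta-squared extrapolation, applied coordinate-wise."""
    d1 = x1 - x0
    d2 = x2 - 2.0 * x1 + x0
    safe = np.abs(d2) > eps                  # avoid dividing by ~0 (already converged)
    return np.where(safe, x0 - d1**2 / np.where(safe, d2, 1.0), x2)

# Illustrative check on a synthetic exponentially converging sequence.
x_inf, c, r = np.array([1.0, -2.0]), np.array([0.5, 3.0]), 0.8
iterates = [x_inf + c * r**k for k in range(3)]
print(predict_limit(*iterates))              # ~ [1.0, -2.0]
```
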
The neighborhood model provides a moderate-complexity method of introducing the concept of smoothness into a detection problem. As tested here, the smoothness is reduced to a simple scalar quantity whose probability is easily computed. The concept is fairly general, extending from vector matched-filter processing, as originally formulated, to any scalar image.
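
Purely as an illustration of reducing neighborhood smoothness to a scalar (the paper's actual statistic and its probability model are not given in this excerpt), one simple choice is the sum of squared differences between neighboring pixels:

```python
# Illustrative scalar smoothness measure: sum of squared differences between
# each pixel and its 4-connected neighbors (smaller means smoother).
import numpy as np

def smoothness_score(img):
    dh = np.diff(img, axis=1)    # horizontal neighbor differences
    dv = np.diff(img, axis=0)    # vertical neighbor differences
    return float((dh**2).sum() + (dv**2).sum())

rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
noisy = smooth + 0.2 * rng.standard_normal(smooth.shape)
print(smoothness_score(smooth), smoothness_score(noisy))   # noisy >> smooth
```
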
Soft-output decoding of trellis-based problems, using methods such as the BCJR algorithm and the SOVA, has advantages in some applications. However, these optimal methods are computationally intensive for all but short constraint lengths. A significant reduction in complexity can be obtained by using sequential decoding methods. This paper considers a modification of the stack …
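
For context, a generic best-first "stack" search skeleton of the kind sequential decoders use is sketched below; the branch metric and toy data are placeholders, not the modified stack decoder the paper proposes.

```python
# Best-first tree search with a priority queue of partial paths, ordered by a
# cumulative path metric (higher is better), as in stack-type sequential decoders.
import heapq

def stack_decode(received, branch_metric, n_branches, depth):
    """branch_metric(path, symbol, received) -> incremental metric for extending path."""
    heap = [(0.0, ())]                        # (negated metric, partial path); heapq is a min-heap
    while heap:
        neg_metric, path = heapq.heappop(heap)
        if len(path) == depth:                # stack-algorithm stopping rule: top path is full length
            return list(path), -neg_metric
        for s in range(n_branches):
            inc = branch_metric(path, s, received)
            heapq.heappush(heap, (neg_metric - inc, path + (s,)))
    return None

# Toy usage: "decode" a noisy binary sequence with a metric that rewards
# agreement with the received soft values (illustrative only).
rx = [0.9, -0.4, 0.7, 0.2, -0.8]
metric = lambda path, s, r: (1.0 if s == 1 else -1.0) * r[len(path)]
print(stack_decode(rx, metric, n_branches=2, depth=len(rx)))
```
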
The eigenmessage decoder of O. Chauhan et al. (2003) expresses a degree of nonlocality in a message-passing decoder by representing an entire cycle in a single linear equation. The eigenvector of the linear message-passing matrix represents a fixed point of the message-passing algorithm around a cycle and has been shown to significantly decrease the number of …
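
The fixed-point idea can be made concrete: if the messages around a cycle evolve linearly as m_{k+1} = A m_k, then a fixed point (up to scale) is the dominant eigenvector of A. The sketch below finds it by power iteration on an arbitrary stand-in matrix, not an actual decoder's message-passing matrix.

```python
# Power iteration for the dominant eigenvector of a linear message-passing map.
import numpy as np

def dominant_eigenvector(A, iters=200, tol=1e-10):
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

A = np.array([[0.6, 0.3, 0.1],       # arbitrary positive stand-in matrix
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
v = dominant_eigenvector(A)
print(v, A @ v / v)                  # A @ v / v is ~constant: the dominant eigenvalue
```
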
The contravariant vector associated with the conventional gradient vector (a covector) via the Riemannian metric is the appropriate direction for gradient-descent learning. This fact is the basis for Amari's familiar natural gradient learning algorithms. The language of differential geometry is used to derive the contravariant gradient rule for …
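
A worked sketch of the contravariant (natural) gradient step, theta <- theta - eta * inv(G) * grad, on an illustrative quadratic loss with an assumed metric G:

```python
# One step of natural gradient descent: raise the index of the gradient covector
# with the inverse Riemannian metric G before stepping.
import numpy as np

def natural_gradient_step(theta, grad, G, eta=0.5):
    return theta - eta * np.linalg.solve(G, grad)   # inv(G) @ grad without an explicit inverse

# Example: minimize f(theta) = 0.5 * theta^T A theta with metric G = A,
# in which case the natural gradient points straight at the minimum.
A = np.array([[4.0, 0.0], [0.0, 1.0]])
theta = np.array([1.0, 1.0])
for _ in range(20):
    grad = A @ theta
    theta = natural_gradient_step(theta, grad, G=A)
print(theta)    # converges to [0, 0] without the zig-zag of plain gradient descent
```
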
We consider the problem of using multiple mobile sensors as sensor arrays for the purpose of emitter location. Mobility means that intersensor positions and times are imprecisely known. We characterize the variance of the location estimate as a function of these uncertainties. Mobility also allows the array to reconfigure itself to improve …
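
One hedged way to see how position uncertainty propagates into location variance is a Monte Carlo experiment; the two-sensor bearing triangulation and Gaussian position errors below are illustrative assumptions, not the paper's measurement model.

```python
# Monte Carlo: perturb the believed sensor positions and measure the spread of
# the resulting emitter-location estimates.
import numpy as np

def triangulate(p1, p2, b1, b2):
    """Intersect rays from p1 and p2 along bearings b1 and b2 (radians)."""
    u1 = np.array([np.cos(b1), np.sin(b1)])
    u2 = np.array([np.cos(b2), np.sin(b2)])
    t = np.linalg.solve(np.column_stack([u1, -u2]), p2 - p1)
    return p1 + t[0] * u1

rng = np.random.default_rng(1)
emitter = np.array([50.0, 80.0])
sensors = [np.array([0.0, 0.0]), np.array([100.0, 0.0])]
bearings = [np.arctan2(*(emitter - s)[::-1]) for s in sensors]   # true bearings

sigma_pos = 2.0                                  # std of sensor-position error (m)
estimates = []
for _ in range(5000):
    s1 = sensors[0] + sigma_pos * rng.standard_normal(2)   # believed positions
    s2 = sensors[1] + sigma_pos * rng.standard_normal(2)
    estimates.append(triangulate(s1, s2, bearings[0], bearings[1]))
cov = np.cov(np.array(estimates).T)
print(cov)    # location-estimate covariance induced purely by position uncertainty
```
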
We consider the problem of using a small number of mobile sensors as a sensor array that repositions itself to improve estimates of the range and bearing of a narrowband, stationary target. Mounting the sensors on micro unmanned aerial vehicles (MUAVs) pragmatically guarantees that the sensor spacing will be much greater than the Nyquist sampling distance, which …
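
To make the Nyquist point concrete: for a narrowband source the Nyquist sensor spacing is half a wavelength, and spacings far beyond it produce grating lobes, i.e. multiple bearings with identical array response. The frequency and spacing below are illustrative numbers only.

```python
# Count the bearings a two-element array cannot distinguish from broadside:
# sin(theta) = k * lambda / d for integer k with |k| <= d / lambda.
import numpy as np

c = 343.0                          # propagation speed (m/s, acoustic example)
f = 1000.0                         # narrowband frequency (Hz)
lam = c / f                        # wavelength
d_nyquist = lam / 2.0              # Nyquist spacing, ~0.17 m

def ambiguous_bearings(d, lam):
    k_max = int(np.floor(d / lam))
    return np.arcsin(np.array([k * lam / d for k in range(-k_max, k_max + 1)]))

print(f"Nyquist spacing {d_nyquist:.3f} m -> "
      f"{ambiguous_bearings(d_nyquist, lam).size} indistinguishable bearing(s)")
print(f"MUAV-scale spacing 5.0 m -> "
      f"{ambiguous_bearings(5.0, lam).size} indistinguishable bearings")
```
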