
- Tal Grossman, Avishai Wool
- 1994

The Set Covering problem (SCP) is a well-known combinatorial optimization problem, which is NP-hard. We conducted a comparative study of eight different approximation algorithms for the SCP, including several greedy variants, fractional relaxations, randomized algorithms and a neural network algorithm. The algorithms were tested on a set of randomly generated… (More)
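The greedy variants mentioned in the abstract build on the textbook greedy approximation for Set Cover, which repeatedly picks the subset covering the most still-uncovered elements. A minimal sketch of that baseline (the function name and data layout are my own, not from the paper):

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy approximation for Set Cover.

    Repeatedly select the subset that covers the largest number of
    uncovered elements; this achieves an H(n) ~ ln(n) approximation.
    """
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the subset with maximum marginal coverage.
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("instance is infeasible: elements left uncoverable")
        cover.append(best)
        uncovered -= set(best)
    return cover
```

For example, with universe {1..5} and subsets {1,2,3}, {2,4}, {3,4}, {4,5}, the greedy rule picks {1,2,3} first and then {4,5}, covering everything with two subsets.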

- Tal Grossman, Alan S. Lapedes
- NIPS
- 1993

We show how randomly scrambling the output classes of various fractions of the training data may be used to improve predictive accuracy of a classification algorithm. We present a method for calculating the "noise sensitivity signature" of a learning algorithm which is based on scrambling the output classes. This signature can be used to indicate a good… (More)
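The scrambling step the abstract describes can be illustrated with a small sketch. The helper name, seeding, and data layout are assumptions of mine; the full "noise sensitivity signature" (retraining at several scrambling fractions and tracking the resulting error) is not reproduced here:

```python
import random

def scramble_labels(labels, fraction, num_classes, seed=0):
    """Randomly reassign the class labels of a given fraction of examples.

    Illustrative helper only: returns a new label list in which
    round(fraction * n) randomly chosen entries are replaced by
    uniformly random classes.
    """
    rng = random.Random(seed)
    labels = list(labels)
    n = len(labels)
    # Choose which examples get a scrambled label.
    idx = rng.sample(range(n), int(fraction * n))
    for i in idx:
        labels[i] = rng.randrange(num_classes)
    return labels
```

A signature would then be built by training the classifier on `scramble_labels(y, f, k)` for a range of fractions `f` and recording how accuracy degrades.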

- Tal Grossman, Robert M. Farber, Alan S. Lapedes
- ISMB
- 1995

Recently, there has been considerable interest in deriving and applying knowledge-based, empirical potential functions for proteins. These empirical potentials have been derived from the statistics of interacting, spatially neighboring residues, as may be obtained from databases of known protein crystal structures. In this paper we employ neural networks to… (More)

- Tal Grossman, Ron Meir, Eytan Domany
- NIPS
- 1988

We introduce a learning algorithm for multilayer neural networks composed of binary linear threshold elements. Whereas existing algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. Once a correct set of internal representations is… (More)

- Tal Grossman
- Cliques, Coloring, and Satisfiability
- 1993

- Tal Grossman
- 1993

A neural network model, the INN (Inverted Neurons Network), is applied to the Maximum Clique problem. First, I describe the INN model and how it implements a given graph instance. The model has a threshold parameter t, which determines the character of the network stable states. As shown in an earlier work [5], the stable states of the network correspond to… (More)
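The INN dynamics themselves are not reconstructed here. As a point of comparison, a plain degree-ordered greedy heuristic that also returns a maximal clique can be sketched (this is a generic baseline, not the INN model, and the adjacency-dict layout is an assumption):

```python
def greedy_maximal_clique(adj):
    """Degree-ordered greedy heuristic for a maximal clique.

    adj: dict mapping each vertex to the set of its neighbours.
    Scans vertices from highest degree to lowest, adding a vertex
    whenever it is adjacent to every vertex already in the clique.
    """
    clique = []
    for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique  # maximal: no remaining vertex extends it
```

On the graph with edges {0-1, 0-2, 1-2, 2-3} it returns the triangle {0, 1, 2}, which is both maximal and maximum here.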

- Tal Grossman
- Complex Systems
- 1989

A new learning algorithm, learning by choice of internal representations (CHIR), was recently introduced. The basic version of this algorithm was developed for a two-layer, single-output, feed-forward network of binary neurons. This paper presents a generalized version of the CHIR algorithm that is capable of training multiple-output net… (More)

- Dimitry Nabutovsky, Tal Grossman, Eytan Domany
- Complex Systems
- 1990

A new learning algorithm for feedforward networks, learning by choice of internal representations (CHIR), was recently introduced [1,2]. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined.… (More)

We study the extent to which fixing the second layer weights reduces the capacity and generalization ability of a two-layer perceptron. Architectures with N inputs, K hidden units and a single output are considered, with both overlapping and non-overlapping receptive fields. We obtain from simulations one measure of the strength of a network: its critical… (More)

- Tal Grossman
- NIPS
- 1989

A new learning algorithm, Learning by Choice of Internal Representations (CHIR), was recently introduced. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of… (More)