
- Artur Rataj
- CS&P
- 2013

We propose a softening of a genetic program by so-called fractional instructions. Thanks to their adjustable strengths, a new instruction can be gradually introduced into a program, and the other instructions may gradually adapt to the new member. In this way, a transformation of one candidate into another can be continuous. Such an approach makes it… (More)
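The gradual-introduction idea can be illustrated with a minimal sketch. Note this is a hypothetical formulation, not the paper's actual scheme: a new instruction's output is blended with the value it would replace, weighted by an adjustable strength in [0, 1]:

```python
def fractional_apply(old_value, new_instr, x, strength):
    """Blend a new instruction into a program gradually.

    strength = 0.0 -> the new instruction has no effect (old value kept);
    strength = 1.0 -> the new instruction is fully active.
    Intermediate strengths interpolate, so the program changes continuously.
    """
    return (1.0 - strength) * old_value + strength * new_instr(x)

# Example: introduce a squaring instruction at half strength.
value = fractional_apply(3.0, lambda x: x * x, 3.0, 0.5)  # 0.5*3 + 0.5*9 = 6.0
```

Sweeping the strength from 0 to 1 then turns a discrete program edit into a continuous path between two candidates, which is the continuity property the abstract describes.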

- Artur Rataj, Bozena Wozna, Andrzej Zbrzezny
- Fundam. Inform.
- 2009

The model checking tools Uppaal and VerICS accept a description of a network of Timed Automata with Discrete Data (TADDs) as input. Thus, to verify a concurrent program written in Java by means of these tools, first a TADD model of the program must be built. Therefore, we have developed the J2TADD tool, which translates a Java program into a network of TADDs;… (More)

- Artur Rataj
- CS&P
- 2014

We extend the fractional genetic programming scheme with data elements that are no longer scalar, but instead are similar to probability density functions. The extension fits straightforwardly into fractional programming, in which data elements are blended from several values. In the case of our previous work, the blend produced a single scalar value. The… (More)
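One way to blend distribution-valued data elements is as a weighted mixture. The representation below (sampling functions combined by mixture weights) is a hypothetical illustration, not the paper's concrete encoding:

```python
import random

def blend_elements(elements, weights):
    """Blend several distribution-like data elements into one.

    Each element is represented by a zero-argument sampling function; the
    blend is a mixture: pick an element with probability proportional to
    its weight, then draw a sample from it.
    """
    total = sum(weights)

    def sample():
        r = random.uniform(0.0, total)
        acc = 0.0
        for elem, w in zip(elements, weights):
            acc += w
            if r <= acc:
                return elem()
        return elements[-1]()

    return sample

# Two density-like elements: a point mass at 1 and a uniform on [0, 2],
# blended with equal weights. Both have mean 1, so the blend does too.
random.seed(0)
blended = blend_elements([lambda: 1.0, lambda: random.uniform(0.0, 2.0)],
                         [0.5, 0.5])
draws = [blended() for _ in range(1000)]
```

In contrast to the earlier scalar scheme, the blended element here carries a whole distribution of values rather than collapsing to a single number.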

- Marcin Copik, Artur Rataj, Bozena Wozna
- CS&P
- 2016

We describe a GPGPU-based Monte Carlo simulator integrated with Prism. It supports Markov chains with discrete or continuous time and a subset of properties expressible in PCTL, CSL, and their variants extended with rewards. The simulator enables automated statistical verification of results obtained using Prism's formal methods.
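The statistical cross-check idea can be sketched as follows: sample many paths of a Markov chain, estimate a reachability probability, and verify that the model checker's exact answer lies inside a confidence interval. The chain below and the names used are illustrative assumptions, not the simulator's actual interface:

```python
import math
import random

def simulate_reach(p_step, p_goal, runs, rng):
    """Estimate the probability of reaching a goal state by sampling paths
    of a small discrete-time Markov chain (hypothetical example chain).

    From the initial state: with probability p_goal move to the goal,
    with probability p_step loop back, otherwise fall into a sink.
    """
    hits = 0
    for _ in range(runs):
        while True:
            r = rng.random()
            if r < p_goal:
                hits += 1
                break
            elif r < p_goal + p_step:
                continue  # loop back to the initial state
            else:
                break  # absorbed in the sink; goal never reached
    return hits / runs

def confidence_interval(p_hat, runs, z=1.96):
    """Normal-approximation 95% confidence interval for the estimate."""
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / runs)
    return p_hat - half, p_hat + half

rng = random.Random(7)
# Exact reachability probability is p_goal / (p_goal + p_sink):
# with p_goal = 0.2, p_step = 0.5, p_sink = 0.3 this is 0.4.
est = simulate_reach(0.5, 0.2, 20000, rng)
lo, hi = confidence_interval(est, 20000)
```

A verification pass would then flag a discrepancy whenever the exact value computed by the model checker falls outside `(lo, hi)`.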

- Artur Rataj
- Fundam. Inform.
- 2014

- Artur Rataj
- ArXiv
- 2005

This paper discusses the notion of generalization of training samples over long distances in the input space of a feedforward neural network. Such a generalization might occur in various ways that differ in how great the contribution of different training features should be. The structure of a neuron in a feedforward neural network is analyzed and it is… (More)

- Artur Rataj
- 2006

This paper discusses the problem of generalization of training samples over long distances in the input space of a feedforward learning machine. Such a generalization type might pose a significant dilemma of how great the contribution of different training samples should be in generalizing that input space. A structure of a neuron in a feedforward… (More)

- Artur Rataj
- ArXiv
- 2003

This paper studies how the generalization ability of neurons can be affected by mutual processing of different signals. This study is done on the basis of a feedforward artificial neural network, which is used here as a model of the very basic processes in a network of biological neurons. The mutual processing of signals, called here an interference of… (More)