We show in this paper how several proposed Physical Unclonable Functions (PUFs) can be broken by numerical modeling attacks. Given a set of challenge-response pairs (CRPs) of a PUF, our attacks construct a computer algorithm which behaves indistinguishably from the original PUF on almost all CRPs. This algorithm can subsequently impersonate the PUF, and can …
We discuss numerical modeling attacks on several proposed strong physical unclonable functions (PUFs). Given a set of challenge-response pairs (CRPs) of a strong PUF, the goal of our attacks is to construct a computer algorithm which behaves indistinguishably from the original PUF on almost all CRPs. If successful, this algorithm can subsequently …
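For concreteness, the following is a minimal sketch of this style of modeling attack against a simulated arbiter-type PUF, assuming the standard additive delay model; the simulated weight vector, the parity feature map, and the use of scikit-learn's LogisticRegression are illustrative choices, not the exact machine-learning setup of these papers. On such a noise-free simulation, a linear model trained on a few thousand CRPs typically predicts unseen responses with near-perfect accuracy, which is exactly the impersonation risk described above.

```python
# Minimal sketch of a modeling attack on a simulated arbiter-type PUF.
# Assumes the standard additive delay model; names and parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20000

def features(challenges):
    # Parity transform: phi_i = product of (1 - 2*c_j) for j >= i, plus a bias term.
    parity = np.cumprod((1 - 2 * challenges)[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([parity, np.ones((challenges.shape[0], 1))])

# Simulated "secret" PUF: a random weight vector in the delay model.
w_secret = rng.normal(size=n_stages + 1)
challenges = rng.integers(0, 2, size=(n_crps, n_stages))
responses = (features(challenges) @ w_secret > 0).astype(int)

# Attack: fit a model to the collected CRPs, then check prediction accuracy on fresh ones.
model = LogisticRegression(max_iter=1000).fit(features(challenges), responses)
test_c = rng.integers(0, 2, size=(5000, n_stages))
test_r = (features(test_c) @ w_secret > 0).astype(int)
print("prediction accuracy on unseen CRPs:", model.score(features(test_c), test_r))
```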
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower variance gradient estimates than those obtained by regular policy gradient methods. We show that for several complex control tasks, including robust …
We investigate the foundations of Physical Unclonable Functions from several perspectives. Firstly, we discuss formal and conceptual issues in the various current definitions of PUFs. As we argue, these issues have the effect that many PUF candidates formally meet no existing definition. Next, we present alternative definitions and a new formalism. It avoids …
PyBrain is a versatile machine learning library for Python. Its goal is to provide flexible, easy-to-use, yet still powerful algorithms for machine learning tasks, including a variety of predefined environments and benchmarks to test and compare algorithms. Implemented algorithms include Long Short-Term Memory (LSTM), policy gradient methods, …
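A brief usage sketch in the style of PyBrain's introductory examples is shown below; the XOR dataset, the network layout, and the number of training epochs are illustrative, and since PyBrain dates from the Python 2 era, the snippet documents the API style rather than a guaranteed-to-run modern setup.

```python
# Illustrative PyBrain usage: build a small feed-forward network, train it with
# backpropagation on a toy XOR dataset, and query it. Layout and epoch count are arbitrary.
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

ds = SupervisedDataSet(2, 1)                      # 2 inputs, 1 target
for x, y in [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]:
    ds.addSample(x, y)

net = buildNetwork(2, 3, 1)                       # 2-3-1 feed-forward network
trainer = BackpropTrainer(net, ds)
for _ in range(1000):                             # train for a fixed number of epochs
    trainer.train()

print(net.activate((0, 1)))                       # should be close to 1 after training
```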
This paper discusses parameter-based exploration methods for reinforcement learning. Parameter-based methods perturb the parameters of a general function approximator directly, rather than adding noise to the resulting actions. Parameter-based exploration unifies reinforcement learning and black-box optimization, and has several advantages over action …
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower variance gradient estimates than those obtained by policy gradient methods such as REINFORCE. For several complex control tasks, including robust …
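The preceding abstracts describe parameter-based exploration; below is a compact, self-contained sketch of that idea in the spirit of PGPE, where whole parameter vectors are drawn from a Gaussian search distribution and a likelihood-gradient update moves its mean and standard deviation. The quadratic toy objective standing in for an episode return, the learning rates, and the moving-average baseline are assumptions for illustration, not the papers' experimental setup.

```python
# Sketch of parameter-space likelihood-gradient estimation on a hypothetical toy task.
import numpy as np

rng = np.random.default_rng(0)
dim = 5                          # number of policy parameters (illustrative)
target = rng.normal(size=dim)    # toy objective: recover an unknown parameter vector

def episode_return(theta):
    # Stand-in for running a policy in an environment and summing its rewards.
    return -np.sum((theta - target) ** 2)

mu, sigma = np.zeros(dim), np.ones(dim)
alpha_mu, alpha_sigma = 0.05, 0.02
baseline = None

for step in range(3000):
    theta = rng.normal(mu, sigma)          # perturb in parameter space, not action space
    R = episode_return(theta)
    baseline = R if baseline is None else 0.9 * baseline + 0.1 * R
    advantage = R - baseline
    # Likelihood gradient of the independent Gaussian sampling distribution.
    grad_mu = (theta - mu) / sigma ** 2
    grad_sigma = ((theta - mu) ** 2 - sigma ** 2) / sigma ** 3
    mu = mu + alpha_mu * advantage * grad_mu
    sigma = np.clip(sigma + alpha_sigma * advantage * grad_sigma, 1e-2, 2.0)

print("return at the learned mean:", episode_return(mu))
```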
Many approaches to object detection based on color coding have been published in the RoboCup domain. They are tuned to the typical RoboCup scenario of constant lighting, using a static subdivision of the color space. However, such algorithms will soon be of limited use when playing under changing and, eventually, natural lighting. This paper presents an algorithm …
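For reference, here is a generic sketch of the kind of static color-space subdivision the abstract refers to: pixels are classified through a fixed lookup table over a coarsely quantized RGB space. It is not the paper's adaptive algorithm, and the bin count, color classes, and sample pixels are made up.

```python
# Generic illustration of color-coded pixel classification via a static lookup table
# over a coarsely quantized color space; not the paper's own algorithm.
import numpy as np

BINS = 16  # quantize each channel into 16 bins

def build_lut(samples):
    """samples: list of (rgb_pixel, class_id) training pairs."""
    lut = np.full((BINS, BINS, BINS), -1, dtype=np.int8)   # -1 = unknown class
    for (r, g, b), cls in samples:
        lut[r * BINS // 256, g * BINS // 256, b * BINS // 256] = cls
    return lut

def classify(image, lut):
    """image: HxWx3 uint8 array -> HxW array of class ids."""
    idx = (image.astype(np.int32) * BINS) // 256
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Tiny usage example with made-up classes (0 = ball, 1 = field).
lut = build_lut([((250, 120, 10), 0), ((20, 200, 30), 1)])
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (248, 118, 12)   # falls in the same bin as the "ball" training sample
print(classify(frame, lut))
```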
A common practical problem in mobile robotics is calibrating the robot's sensors. Although the general mapping of sensor data to robot-centered world coordinates is given by the hardware configuration, the parameters of this mapping vary even between robots with the same configuration. In the RoboCup domain, these parameters can change …
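A minimal sketch of such a calibration step, assuming a simple affine sensor-to-world model whose per-robot parameters are estimated by least squares from a few reference measurements; the model form and the numbers are illustrative, not the paper's method.

```python
# Hedged sketch: fit per-robot parameters of an assumed affine sensor-to-world mapping
# from a handful of reference measurements using least squares.
import numpy as np

# Reference points: raw sensor readings paired with known robot-centered coordinates.
sensor = np.array([[0.10, 0.20], [0.50, 0.10], [0.30, 0.70], [0.90, 0.40]])
world  = np.array([[0.35, 0.80], [1.15, 0.60], [0.75, 1.80], [1.95, 1.20]])

# Model: world = sensor @ A + b, estimated jointly via a homogeneous least-squares fit.
X = np.hstack([sensor, np.ones((len(sensor), 1))])
params, *_ = np.linalg.lstsq(X, world, rcond=None)
A, b = params[:2], params[2]

print("A =", A, "b =", b)
print("calibrated reading:", np.array([0.4, 0.4]) @ A + b)
```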
The crossword puzzle is a classic pastime, well known all over the world. We consider the crossword manufacturing process in more detail, investigating a two-step approach: first generating a mask, i.e., an empty crossword puzzle skeleton, and then filling the mask with words from a given dictionary to obtain a valid crossword. We show that the …
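The second step lends itself to a short illustration: given a fixed mask and a dictionary, a backtracking search assigns words to the horizontal and vertical slots while respecting the letters already placed at crossings. The 3x3 mask, the tiny word list, and the simple slot extraction below are illustrative; real instances need a much larger dictionary and stronger pruning.

```python
# Compact backtracking sketch of the mask-filling step: '.' marks an open cell, '#' a block.
def slots(mask):
    """Return slots as lists of (row, col) cells for every maximal run of length >= 2."""
    out = []
    grids = [mask, [''.join(col) for col in zip(*mask)]]   # rows, then columns
    for t, grid in enumerate(grids):
        for i, line in enumerate(grid):
            run = []
            for j, ch in enumerate(line + '#'):            # sentinel flushes the last run
                if ch == '.':
                    run.append((i, j) if t == 0 else (j, i))
                else:
                    if len(run) >= 2:
                        out.append(run)
                    run = []
    return out

def fill(mask, words):
    grid = {(r, c): None for r, row in enumerate(mask) for c, ch in enumerate(row) if ch == '.'}
    todo = slots(mask)

    def backtrack(k, used):
        if k == len(todo):
            return True
        slot = todo[k]
        for w in words:
            if len(w) != len(slot) or w in used:
                continue
            if all(grid[cell] in (None, ch) for cell, ch in zip(slot, w)):
                saved = [grid[cell] for cell in slot]
                for cell, ch in zip(slot, w):
                    grid[cell] = ch
                if backtrack(k + 1, used | {w}):
                    return True
                for cell, old in zip(slot, saved):          # undo on failure
                    grid[cell] = old
        return False

    return grid if backtrack(0, frozenset()) else None

mask = ["...",
        ".#.",
        "..."]
words = ["cat", "car", "tar", "arc", "act"]
solution = fill(mask, words)
if solution:
    print('\n'.join(''.join(solution.get((r, c), '#') for c in range(3)) for r in range(3)))
```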