If a discrete-time linear system, hereafter called a digital filter, is programmed on a digital computer or realized with digital elements, computational errors due to finite word length are unavoidable. These errors may be subdivided into three classes, namely, the error caused by discretization of the system parameters, the error caused by analog-to-digital conversion of the input analog signal, and the error caused by roundoff of results that are needed in further computations. The first type of error produces a fixed deviation in the system parameters and is akin to a slightly wrong value of (say) an inductance in an analog filter. We shall not treat this problem here; it has been treated in some detail by Kaiser. The other two sources of error are more complicated, but if reasonable simplifying assumptions are made they can be treated by the techniques of linear system noise theory. It is our aim to set up a model of a digital filter which includes these two latter sources of error and, through analysis of the model, to relate the desired system performance to the required length of computer registers.
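The three error sources above can be made concrete with a small numerical sketch. The following is an illustrative example, not the paper's model: it compares an ideal first-order recursive filter against a fixed-point version in which the coefficient, the input samples, and each intermediate result are rounded to a register of B fractional bits, showing how a longer register reduces the accumulated error. The filter, signal, and bit widths here are assumptions chosen for illustration.

```python
# Sketch of finite-word-length effects in a first-order digital filter
# y[n] = a*y[n-1] + x[n]. All names and parameters are illustrative.
import math

def quantize(value, bits):
    """Round value to the nearest multiple of 2**-bits (a B-bit register)."""
    step = 2.0 ** -bits
    return round(value / step) * step

def filter_ideal(x, a):
    y, out = 0.0, []
    for sample in x:
        y = a * y + sample
        out.append(y)
    return out

def filter_fixed_point(x, a, bits):
    a_q = quantize(a, bits)                # error class 1: coefficient discretization
    y, out = 0.0, []
    for sample in x:
        x_q = quantize(sample, bits)       # error class 2: A/D conversion of the input
        y = quantize(a_q * y + x_q, bits)  # error class 3: roundoff of results fed back
        out.append(y)
    return out

x = [math.sin(0.1 * n) for n in range(200)]
ideal = filter_ideal(x, 0.95)
coarse = filter_fixed_point(x, 0.95, bits=8)
fine = filter_fixed_point(x, 0.95, bits=16)

err = lambda y: max(abs(u - v) for u, v in zip(ideal, y))
print(f"max deviation,  8-bit registers: {err(coarse):.6f}")
print(f"max deviation, 16-bit registers: {err(fine):.6f}")
```

Because the roundoff error of each step is fed back through the recursion, the deviation grows with the filter's memory; lengthening the register shrinks every quantization step and hence the accumulated error, which is the trade-off the paper's model quantifies.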