Tumbe Group of International Journals



DISCRETE FOURIER TRANSFORMATION

Dr. Vasanthakumari, T. N.

Dept. of Mathematics, GFGC, Tumakuru-02

 

Discrete Fourier transform

The Fourier transform is one of the most powerful methods of applied mathematics. Its finite-dimensional analogue, the discrete Fourier transform, or DFT, is just as useful in scientific computing. The DFT allows direct algebraic solution of certain differential and integral equations. It is the basis of computations with time series data and of digital signal processing and control. It leads to computational methods that have an infinite order of accuracy (which is not the same as being exact). The drawback of DFT-based methods is their geometric inflexibility. They can be hard to apply to data that are not sampled at uniformly spaced points. The multidimensional DFT is essentially a product of one-dimensional DFTs. Therefore it is hard to apply DFT methods to problems in more than one dimension unless the computational domain has a simple shape. Even in one dimension, applying the DFT depends on the boundary conditions.

            At its core the DFT is an N × N unitary matrix that represents a change of basis. It would take N^2 multiplications and additions to calculate the DFT of a vector by direct matrix-vector multiplication. The fast Fourier transform, or FFT, algorithm calculates the DFT in about 2N log2(N) multiplications. For N = 1024, this is a reduction by a factor of 50.
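Both claims in this paragraph can be checked numerically. The following sketch (in Python with NumPy, which the paper does not use; it is added here only as an illustration) builds the unitary DFT matrix and evaluates the 2N log2(N) estimate for N = 1024:

```python
import numpy as np

# The DFT as an N x N matrix; with the 1/sqrt(N) normalization it is unitary.
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
assert np.allclose(F @ F.conj().T, np.eye(N))  # F is a change of basis

# Cost comparison for N = 1024: N^2 direct vs. the 2*N*log2(N) FFT estimate.
N = 1024
ratio = N**2 / (2 * N * np.log2(N))
print(round(ratio, 1))  # 51.2 -- the "factor of 50" quoted above
```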

            The fast Fourier transform is a modified form of the discrete Fourier transform that reduces the computation so that the processing time is minimized. As a result, it is possible to obtain better speech quality in a short duration of time.

               This paper focuses on the fast Fourier transform, which reduces the computation steps of the discrete Fourier transform (DFT), on discrete convolution, and, particularly, on the fast algorithms to calculate them. These topics have been at the center of digital signal processing since its beginning, and new results in hardware, theory and applications continue to keep them important and exciting. As far as we can tell, Gauss was the first to propose the techniques that we now call the fast Fourier transform (FFT), for calculating the coefficients in a trigonometric expansion of an asteroid's orbit in 1805. However, it was the seminal paper by Cooley and Tukey in 1965 that caught the attention of the science and engineering community and, in a way, founded the discipline of digital signal processing (DSP).

            The impact of the Cooley-Tukey FFT was enormous. Problems could be solved quickly that were not even considered a few years earlier. A flurry of research expanded the theory and developed excellent practical programs as well as opening new applications. In 1976, Winograd published a short paper that set a second flurry of research in motion. This was another type of algorithm that expanded the data lengths that could be transformed efficiently and reduced the number of multiplications required. The groundwork for this algorithm had been laid earlier by Good and by Rader. In 1997 Frigo and Johnson developed a program they called the FFTW (fastest Fourier transform in the west), which is a composite of many of the ideas in other algorithms as well as new results, giving a robust, very fast system for general data lengths on a variety of computer and DSP architectures. This work won the 1999 Wilkinson Prize for Numerical Software.

            It is hard to overemphasize the importance of the DFT, convolution, and fast algorithms. With a history that goes back to Gauss, and a compilation of references on these topics that in 1995 resulted in over 2400 entries, the FFT may be the most important numerical algorithm in science, engineering, and applied mathematics. New theoretical results are still appearing, advances in computers and hardware continually restate the basic questions, and new applications open new areas for research. It is hoped that this paper will provide the background, references, programs and incentive to encourage further research and results in this area, as well as provide tools for practical applications.

            The development of fast algorithms usually consists of using special properties of the algorithm of interest to remove redundant or unnecessary operations of a direct implementation.

            Because of the periodicity, symmetries, and orthogonality of the basis functions and the special relationship with convolution, the discrete Fourier transform (DFT) has enormous capacity for improvement of its arithmetic efficiency. There are four main approaches to formulating efficient DFT algorithms. The first two break a DFT into multiple shorter ones. This is done in Multidimensional Index Mapping by using an index map and in Polynomial Description of Signals by polynomial reduction. The third is Factoring the Signal Processing Operators, which factors the DFT operator (matrix) into sparse factors. The DFT as Convolution or Filtering develops a method which converts a prime-length DFT into cyclic convolution. Still another approach is interesting where, for certain cases, the evaluation of the DFT can be posed recursively as evaluating a DFT in terms of two half-length DFTs, which are each in turn evaluated by a quarter-length DFT, and so on. The very important computational complexity theorems of Winograd are stated and briefly discussed in Winograd's Short DFT Algorithms. The specific details and evaluations of the Cooley-Tukey FFT and split-radix FFT are given in The Cooley-Tukey Fast Fourier Transform Algorithm, and PFA and WFTA are covered in The Prime Factor and Winograd Fourier Transform Algorithms. A short discussion of high speed convolution is given in Convolution Algorithms, both for its own importance and for its theoretical connection to the DFT. We also present the chirp, Goertzel, QFT, NTT, SR-FFT, Approx FFT, Autogen, and programs to implement some of these. Ivan Selesnick gives a short introduction in Winograd's Short DFT Algorithms to using Winograd's techniques to give a highly structured development of short prime-length FFTs, and describes a program that will automatically write these programs. Markus Pueschel presents his "Algebraic Signal Processing" in DFT and FFT: An Algebraic View, describing the various FFT algorithms. And Steven Johnson describes the FFTW (Fastest Fourier Transform in the West) in Implementing FFTs in Practice.

A powerful approach to the development of efficient algorithms is to break a large problem into multiple small ones. One method for doing this with both the DFT and convolution uses a linear change of index variables to map the original one-dimensional problem into a multi-dimensional problem. This approach provides a unified derivation of the Cooley-Tukey FFT, the prime factor algorithm (PFA) FFT, and the Winograd Fourier transform algorithm (WFTA) FFT. It can also be applied directly to convolution to break it down into multiple short convolutions that can be executed faster than a direct implementation. It is often easy to translate an algorithm using index mapping into an efficient program.

            The basic definition of the discrete Fourier transform (DFT) is

                                         C(k) = Σ_{n=0}^{N-1} x(n) WN^(nk),   k = 0, 1, ..., N-1

                                         where n, k and N are integers, j = √-1, and WN = e^(-j2π/N).

If the N values of the transform are calculated from the N values of the data x(n), it is easily seen that N^2 complex multiplications and approximately the same number of complex additions are required. One method for reducing this required arithmetic is to use an index mapping (a change of variables) to change the one-dimensional DFT into a two- or higher-dimensional DFT. This is one of the ideas behind the very efficient Cooley-Tukey and Winograd algorithms. The purpose of index mapping is to change a large problem into several easier ones. This is sometimes called the "divide and conquer" approach, but a more accurate description would be "organize and share", which explains the process.
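As a concrete check on the definition and its N^2 cost, the DFT can be evaluated directly from the sum above. This sketch (Python with NumPy is assumed; it is not part of the paper, and the function name is illustrative) compares the direct evaluation against a library FFT:

```python
import numpy as np

def dft_direct(x):
    """Direct evaluation of C(k) = sum_n x(n) * WN^(n*k), WN = exp(-j*2*pi/N).

    Uses N^2 complex multiplications, as counted in the text."""
    N = len(x)
    WN = np.exp(-2j * np.pi / N)
    return np.array([sum(x[n] * WN**(n * k) for n in range(N))
                     for k in range(N)])

x = np.random.rand(16)
assert np.allclose(dft_direct(x), np.fft.fft(x))  # matches a library FFT
```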

            An example of the Cooley-Tukey radix-4 FFT for a length-16 DFT uses the type-two map with K1 = 4, K2 = 1, K3 = 1, K4 = 4, giving

n = 4n1 + n2

k = k1 + 4k2

The residue reduction is not needed here, since n does not exceed N as n1 and n2 take on their values. Since, in this example, the factors of N have a common factor, only one of the index-map conditions can hold, and therefore the DFT becomes

            C(k1 + 4k2) = Σ_{n2=0}^{3} [ W16^(n2 k1) Σ_{n1=0}^{3} x(4n1 + n2) W4^(n1 k1) ] W4^(n2 k2)

where W16 = e^(-j2π/16) and W16^4 = W4.

            This has the form of a two-dimensional DFT with an extra factor W16^(n2 k1), called a twiddle factor. The inner sum over n1 represents four length-4 DFTs, the twiddle-factor term represents 16 complex multiplications, and the outer sum over n2 represents another four length-4 DFTs.
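The row DFTs, twiddle multiplications, and column DFTs just described can be written out directly. The following is a sketch (in Python with NumPy, assumed for illustration; the function name is not from the paper) of the type-two map n = 4n1 + n2, k = k1 + 4k2 for N = 16:

```python
import numpy as np

def dft16_radix4(x):
    """Length-16 DFT via the type-two map n = 4*n1 + n2, k = k1 + 4*k2.

    Inner length-4 DFTs over n1, twiddle factors W16^(n2*k1),
    then outer length-4 DFTs over n2."""
    W16 = np.exp(-2j * np.pi / 16)
    W4 = np.exp(-2j * np.pi / 4)
    C = np.zeros(16, dtype=complex)
    for k1 in range(4):
        for k2 in range(4):
            acc = 0.0
            for n2 in range(4):
                # one point of a length-4 row DFT, then its twiddle factor
                inner = sum(x[4*n1 + n2] * W4**(n1*k1) for n1 in range(4))
                acc += W16**(n2*k1) * inner * W4**(n2*k2)
            C[k1 + 4*k2] = acc
    return C

x = np.random.rand(16)
assert np.allclose(dft16_radix4(x), np.fft.fft(x))
```

The triple structure of the loop body mirrors the equation: rows first, twiddles second, columns last.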

            This choice of the Ki "uncouples" the calculations, since the first sum over n1 for n2 = 0 calculates the DFT of the first row of the data array x(n1, n2), and those data values are never needed in the succeeding row calculations. The row calculations are independent, and examination of the outer sum shows that the column calculations are likewise independent.

            The left 4-by-4 array is the mapped input data, the center array has the rows transformed, and the right array is the DFT array. The row DFTs and the column DFTs are independent of each other. The twiddle factors (TF), which are the center W, are the multiplications which take place on the center array. This uncoupling feature reduces the amount of arithmetic required and allows the results of each row DFT to be written back over the input data locations, since that input row will not be needed again. This is called in-place calculation, and it results in a large saving in memory.

            The purpose of index mapping is to improve arithmetic efficiency. For example, a direct calculation of a length-16 DFT requires 16^2 or 256 real multiplications (recall that one complex multiplication requires four real multiplications and two real additions), while an uncoupled version requires 144. A direct calculation of a length-15 DFT requires 225 multiplications, but with a type-two map only 135, and with a type-one map, 120. Algorithms of practical interest use short DFTs that require fewer than N^2 multiplications. For example, length-4 DFTs require no multiplications and, therefore, for the length-16 DFT, only the TFs (twiddle factors) must be calculated. That calculation uses 16 multiplications, many fewer than the 256 or 144 required for the direct or uncoupled calculation. The concept of using an index map can also be applied to convolution, to convert a length N = N1N2 one-dimensional cyclic convolution into an N1 by N2 two-dimensional cyclic convolution. There is no saving of arithmetic from the mapping alone, as there is with the DFT, but savings can be obtained by using special short algorithms along each dimension.

The FFT as a Recursive Evaluation of the DFT:

            It is possible to formulate the DFT so that a length-N DFT can be calculated in terms of two length-N/2 DFTs. And, if N = 2^M, each of those length-N/2 DFTs can be found in terms of length-N/4 DFTs. This allows the DFT to be calculated by a recursive algorithm with M recursions, giving the familiar order N log(N) arithmetic complexity. The even-indexed DFT values are calculated by

                                                C(2k) = Σ_{n=0}^{N/2-1} [x(n) + x(n + N/2)] WN/2^(nk)

And a similar argument gives the odd-indexed values as

                                                C(2k+1) = Σ_{n=0}^{N/2-1} [x(n) - x(n + N/2)] WN^n WN/2^(nk)

Together, these are recursive DFT formulas expressing the length-N DFT of x(n) in terms of length-N/2 DFTs:

                                                C(2k) = DFTN/2{x(n) + x(n + N/2)}

                                                C(2k+1) = DFTN/2{[x(n) - x(n + N/2)] WN^n}

This is a "decimation-in-frequency" (DIF) version, since it gives samples of the frequency domain representation in terms of blocks of the time domain signal.
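The DIF recursion translates directly into a program. A minimal sketch (Python with NumPy assumed; the function name is chosen for illustration, and N is taken to be a power of two):

```python
import numpy as np

def fft_dif(x):
    """Recursive decimation-in-frequency FFT for N a power of 2.

    Even outputs come from DFT_{N/2}{x(n) + x(n+N/2)};
    odd outputs from DFT_{N/2}{[x(n) - x(n+N/2)] * WN^n}."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # WN^n, n = 0..N/2-1
    even = fft_dif(x[:N//2] + x[N//2:])               # gives C(2k)
    odd = fft_dif((x[:N//2] - x[N//2:]) * W)          # gives C(2k+1)
    C = np.empty(N, dtype=complex)
    C[0::2] = even
    C[1::2] = odd
    return C

x = np.random.rand(16)
assert np.allclose(fft_dif(x), np.fft.fft(x))
```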

A DIT version can be derived in the form

                                                C(k) = DFTN/2{x(2n)} + WN^k DFTN/2{x(2n+1)}

                                                C(k + N/2) = DFTN/2{x(2n)} - WN^k DFTN/2{x(2n+1)}

which gives blocks of the frequency domain from samples of the signal.
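The decimation-in-time form can be sketched the same way (again in Python with NumPy, with an illustrative function name and N a power of two):

```python
import numpy as np

def fft_dit(x):
    """Recursive decimation-in-time FFT for N a power of 2.

    C(k)     = E(k) + WN^k O(k)
    C(k+N/2) = E(k) - WN^k O(k),
    where E and O are the length-N/2 DFTs of the even and odd samples."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    E = fft_dit(x[0::2])                              # DFT_{N/2}{x(2n)}
    O = fft_dit(x[1::2])                              # DFT_{N/2}{x(2n+1)}
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # WN^k, k = 0..N/2-1
    return np.concatenate([E + W * O, E - W * O])

x = np.random.rand(16)
assert np.allclose(fft_dit(x), np.fft.fft(x))
```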

            Similar recursive expressions can be developed for other radices and algorithms. Most recursive programs do not execute as efficiently as looped or straight code, but some can be very efficient.

            The FFT is based on the complex DFT, a more sophisticated version of the real DFT. These transforms are named for the way each represents data, that is, using complex numbers or using real numbers. The term complex does not mean that this representation is difficult or complicated, but that a specific type of mathematics, complex arithmetic, is used. What follows provides the background needed to understand the details of the FFT algorithm.

            Since the FFT is an algorithm for calculating the complex DFT, it is important to understand how to transfer real DFT data into and out of the complex DFT format, and to compare how the real DFT and the complex DFT store data. The real DFT transforms an N point time domain signal into two N/2 + 1 point frequency domain signals. The time domain signal is called just that: the time domain signal. The two signals in the frequency domain are called the real part and the imaginary part, holding the amplitudes of the cosine waves and sine waves, respectively.

                       In comparison, the complex DFT transforms two N point time domain signals into two N point frequency domain signals. The two time domain signals are called the real part and the imaginary part, just as are the frequency domain signals. In spite of their names, all of the values in these arrays are just ordinary numbers. (If you are familiar with complex numbers: the j's are not included in the array values; they are a part of the mathematics. Recall that the operator Im( ) returns a real number.)

                       Calculating a real inverse DFT using a complex inverse DFT is slightly harder. This is because you need to ensure that the negative frequencies are loaded in the proper format. Remember, points 0 through N/2 in the complex DFT are the same as in the real DFT, for both the real and the imaginary parts. For the real part, point N/2 + 1 is the same as point N/2 - 1, point N/2 + 2 is the same as point N/2 - 2, and so on, down to point N - 1 being the same as point 1. The same basic pattern is used for the imaginary part, except the sign is changed. That is, point N/2 + 1 is the negative of point N/2 - 1, point N/2 + 2 is the negative of point N/2 - 2, etc. Notice that samples 0 and N/2 do not have a matching point in this duplication scheme. In practice, you load the real DFT's frequency spectrum into samples 0 to N/2 of the complex DFT's arrays, and then use a subroutine to generate the negative frequencies between samples N/2 + 1 and N - 1. To check that the proper symmetry is present, after taking the inverse FFT, look at the imaginary part of the time domain. It will contain all zeros if everything is correct (except for a few parts-per-million of noise, using single precision calculations).
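This loading of the negative frequencies can be sketched as follows (Python with NumPy assumed, not part of the paper; the final check mirrors the imaginary-part test described above):

```python
import numpy as np

# Load a real-DFT half-spectrum (samples 0..N/2) into a full complex array
# by generating the negative frequencies with conjugate symmetry, then
# check that the inverse FFT has (numerically) zero imaginary part.
N = 16
x = np.random.rand(N)
half = np.fft.fft(x)[:N//2 + 1]                 # samples 0 .. N/2

full = np.empty(N, dtype=complex)
full[:N//2 + 1] = half
full[N//2 + 1:] = np.conj(half[1:N//2][::-1])   # point N-m = conjugate of point m

y = np.fft.ifft(full)
assert np.allclose(y.imag, 0)                   # symmetry was loaded correctly
assert np.allclose(y.real, x)                   # the real signal is recovered
```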

                       The FFT is a complicated algorithm, and its details are usually left to those that specialize in such things. This section describes the general operation of the FFT, but skirts a key issue: the use of complex numbers. If you have a background in complex mathematics, you can read between the lines to understand the true nature of the algorithm. Don't worry if the details elude you; few scientists and engineers that use the FFT could write the program from scratch. In complex notation, the time and frequency domains each contain one signal made up of N complex points. Each of these complex points is composed of two numbers, the real part and the imaginary part. For example, when we talk about complex sample X, it refers to the combination of ReX and ImX. In other words, each complex variable holds two numbers. When two complex variables are multiplied, the four individual components must be combined to form the two components of the product. The following discussion on how the FFT works uses this jargon of complex notation. That is, the singular terms signal, point, sample, and value refer to the combination of the real part and the imaginary part. The FFT operates by decomposing an N point time domain signal into N time domain signals each composed of a single point. The second step is to calculate the N frequency spectra corresponding to these N time domain signals. Lastly, the N spectra are synthesized into a single frequency spectrum.

                       Another popular algorithm eliminates the wasted calculations associated with the imaginary part of the time domain being zero and the frequency spectrum being symmetrical. In other words, the FFT is modified to calculate the real DFT instead of the complex DFT. These algorithms are called the real FFT and the real inverse FFT (or similar names). Expect them to be about 30% faster than the conventional FFT routines. There are two small disadvantages in using the real FFT. First, the code is about twice as long. Second, debugging these programs is slightly harder because you cannot use symmetry as a check for proper operation. These algorithms force the imaginary part of the time domain to be zero and the frequency domain to have left-right symmetry. For debugging, check that these programs produce the same output as the conventional FFT algorithms.

                       Now the key element: a frequency spectrum composed of these two types of symmetry can be perfectly separated into the two component signals. This is achieved by the even/odd decomposition. In other words, two real DFTs can be calculated for the price of a single FFT. One of the signals is placed in the real part of the time domain, and the other signal is placed in the imaginary part. After calculating the complex DFT (via the FFT, of course), the spectra are separated using the even/odd decomposition. When two or more signals need to be passed through the FFT, this technique reduces the execution time by about 40%. The improvement isn't a full factor of two because of the calculation time required for the even/odd decomposition. This is a relatively simple technique with few pitfalls, nothing like writing an FFT routine from scratch. Consider that the FFT is to digital signal processing what the transistor is to electronics. It is a foundation of the technology; everyone in the field knows its characteristics and how to use it. However, only a small number of specialists really understand the details of the internal workings.
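The even/odd separation can be sketched as follows (Python with NumPy assumed; the function name is illustrative). One real signal goes into the real part, the other into the imaginary part, and the two spectra are recovered by the conjugate-symmetric/antisymmetric split:

```python
import numpy as np

def two_real_dfts(a, b):
    """Compute the DFTs of two real signals with one complex FFT.

    Pack a into the real part and b into the imaginary part, then
    separate the spectra by the even/odd (conjugate) decomposition."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    N = len(a)
    X = np.fft.fft(a + 1j * b)
    Xr = np.conj(X[(-np.arange(N)) % N])  # X*(N-k): conjugate, index-reversed
    A = (X + Xr) / 2                      # conjugate-symmetric part: DFT of a
    B = (X - Xr) / 2j                     # conjugate-antisymmetric part: DFT of b
    return A, B

a, b = np.random.rand(16), np.random.rand(16)
A, B = two_real_dfts(a, b)
assert np.allclose(A, np.fft.fft(a))
assert np.allclose(B, np.fft.fft(b))
```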

Applications:

The discrete Fourier transform, computed via the FFT, has applications in many fields:

            In the field of solutions of wave equations.

            In the field of remote sensing.

            In the field of image reconstruction.

            In the field of complex solutions.

            In the field of error correction in communication systems.

            In the field of audio signal processing.

            In the field of video signal processing.

            In the field of digital signal processing.

            In the field of real-time musical applications.

            In the field of testing of civil structures.

Conclusion:   

            The discrete Fourier transform (DFT) is a computationally expensive process, and its processing time is large. In order to overcome this disadvantage, the fast Fourier transform is preferred, where a large quantity of data is processed in a short duration of time. The fast Fourier transform is an efficient algorithm for computing the discrete Fourier transform.

Bibliography:

  1. R. C. Agarwal and C. S. Burrus. Fast digital convolution using Fermat transforms.
  2. R. C. Agarwal and J. W. Cooley. New algorithms for digital convolution.
  3. L. Auslander, E. Feig, and S. Winograd. Abelian semi-simple algebras and algorithms for the discrete Fourier transform.
  4. S. Bagchi and S. Mitra. The Nonuniform Discrete Fourier Transform and Its Applications in Signal Processing.
  5. D. H. Bailey. FFTs in external or hierarchical memory.

