Speedup of Implementing Fuzzy Neural Networks With High-Dimensional Inputs Through Parallel Processing on Graphic Processing Units

@article{Juang2011SpeedupOI,
  title={Speedup of Implementing Fuzzy Neural Networks With High-Dimensional Inputs Through Parallel Processing on Graphic Processing Units},
  author={Chia-Feng Juang and T.-C. Chen and W.-Y. Cheng},
  journal={IEEE Transactions on Fuzzy Systems},
  year={2011},
  volume={19},
  pages={717--728}
}
This paper proposes the implementation of a zero-order Takagi-Sugeno-Kang (TSK)-type fuzzy neural network (FNN) on graphic processing units (GPUs) to reduce training time. The software platform that this study uses is the compute unified device architecture (CUDA). The implemented FNN uses structure and parameter learning in a self-constructing neural fuzzy inference network because of its admirable learning performance. FNN training is conventionally implemented on a single-threaded CPU, where…
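To make the abstract concrete, the following is a minimal sketch of the inference step of a zero-order TSK-type fuzzy system: each rule's firing strength is the product of Gaussian memberships over all input dimensions, and the output is the firing-strength-weighted average of constant rule consequents. This is the standard zero-order TSK formulation, not the paper's exact implementation; the function and parameter names (`tsk_zero_order`, `centers`, `widths`, `consequents`) are illustrative. The rule-by-dimension membership evaluations form the large, regular workload that the paper parallelizes on the GPU; here NumPy vectorization stands in for that parallelism.

```python
import numpy as np

def tsk_zero_order(x, centers, widths, consequents):
    """Inference for a zero-order TSK-type fuzzy system (illustrative sketch).

    x           : (d,)   input vector
    centers     : (R, d) Gaussian membership centers, one row per rule
    widths      : (R, d) Gaussian membership widths
    consequents : (R,)   constant (zero-order) rule consequents
    """
    # Summing log-memberships instead of multiplying memberships
    # avoids underflow for high-dimensional inputs.
    log_mu = -((x - centers) ** 2) / (widths ** 2)   # (R, d) per-dimension terms
    phi = np.exp(log_mu.sum(axis=1))                 # (R,)  rule firing strengths
    # Weighted average of the constant consequents.
    return float(phi @ consequents / phi.sum())
```

A quick sanity check: with a single rule centered exactly at the input, the firing strength is 1 and the output equals that rule's consequent; with several rules, the output always lies between the smallest and largest consequent.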
Highly Cited
This paper has 133 citations.
