
- Pawan Harish, P. J. Narayanan
- HiPC
- 2007

Graph algorithms are fundamental to many disciplines and application areas. Large graphs involving millions of vertices are common in scientific and engineering applications. Practical-time implementations using high-end computing resources have been reported but are accessible only to a few. Graphics Processing Units (GPUs) are fast emerging as inexpensive…
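This paper covers GPU formulations of basic graph algorithms such as breadth-first search. As an illustrative sketch only (the graph and function names below are invented, and the paper's CUDA kernels are organised differently), the level-synchronous, frontier-based BFS that such data-parallel implementations build on can be written sequentially as:

```python
def frontier_bfs(adj, source):
    """Level-synchronous BFS: each iteration expands the entire
    current frontier at once, mirroring one data-parallel GPU pass
    in which every frontier vertex gets its own thread."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:          # parallel over vertices on a GPU
            for v in adj[u]:
                if v not in dist:   # unvisited: assign current level
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

# Tiny hypothetical adjacency list.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
```

The number of iterations equals the graph diameter, which is why such formulations favour the low-diameter graphs common in scientific applications.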

- Takeo Kanade, Peter Rander, P. J. Narayanan
- IEEE MultiMedia
- 1997

A new visual medium, Virtualized Reality, immerses viewers in a virtual reconstruction of real-world events. The Virtualized Reality world model consists of real images and depth information computed from these images. Stereoscopic reconstructions provide a sense of complete immersion, and users can select their own viewpoints at view time, independent of the actual…

- Vibhav Vineet, P. J. Narayanan
- 2008 IEEE Computer Society Conference on Computer…
- 2008

Graph cuts have become a powerful and popular optimization tool for energies defined over an MRF and have found applications in image segmentation, stereo vision, image restoration, etc. The max-flow/min-cut algorithm used to compute graph cuts is computationally heavy. The best-reported implementation of graph cuts takes over 100 milliseconds even on images of…
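For reference, the max-flow/min-cut computation the abstract refers to can be sketched sequentially with the classical Edmonds-Karp algorithm (shortest augmenting paths); the paper's GPU implementation uses a different, parallel push-relabel style scheme. The graph encoding below is illustrative:

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Max flow from s to t via BFS augmenting paths.
    capacity: dict-of-dicts {u: {v: edge_capacity}}."""
    # Residual graph: copy forward edges, add zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow            # no augmenting path left: done
        # Bottleneck capacity along the found path.
        bottleneck, v = float('inf'), t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Augment: push flow forward, credit the reverse edges.
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

In image segmentation, vertices correspond to pixels plus source/sink terminals, which is what makes per-image graphs so large.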

- Sheetal Lahabar, P. J. Narayanan
- 2009 IEEE International Symposium on Parallel…
- 2009

Linear algebra algorithms are fundamental to many computing applications. Modern GPUs are suited for many general purpose processing tasks and have emerged as inexpensive high performance co-processors due to their tremendous computing power. In this paper, we present the implementation of singular value decomposition (SVD) of a dense matrix on GPU using…
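To give a flavour of the computation, here is a minimal pure-Python extraction of singular values via one-sided Jacobi rotations; this is only a sketch of SVD as such, not the route taken in the paper, whose GPU implementation is based on a different (bidiagonalization-based) scheme:

```python
import math

def jacobi_singular_values(A, sweeps=30, eps=1e-12):
    """Singular values of an m x n matrix (list of rows) via one-sided
    Jacobi: rotate column pairs until all columns are mutually
    orthogonal; the singular values are the final column norms."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = sum(x * x for x in cols[p])
                beta = sum(x * x for x in cols[q])
                gamma = sum(x * y for x, y in zip(cols[p], cols[q]))
                off = max(off, abs(gamma))
                if abs(gamma) < eps:
                    continue       # columns already orthogonal
                zeta = (beta - alpha) / (2.0 * gamma)
                t = math.copysign(1.0, zeta) / (
                    abs(zeta) + math.sqrt(1.0 + zeta * zeta))
                c = 1.0 / math.sqrt(1.0 + t * t)
                s = c * t
                for i in range(m):  # apply the plane rotation
                    cp, cq = cols[p][i], cols[q][i]
                    cols[p][i] = c * cp - s * cq
                    cols[q][i] = s * cp + c * cq
        if off < eps:
            break
    return sorted((math.sqrt(sum(x * x for x in col)) for col in cols),
                  reverse=True)
```

One-sided Jacobi is itself attractive for data-parallel hardware because independent column pairs can be rotated concurrently.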

Modern Graphics Processing Units (GPUs) provide high computation power at low cost and have been described as desktop supercomputers. Today, GPUs expose a general, data-parallel programming model in the form of CUDA and CAL, which present the GPU as a massively multithreaded architecture. Several high-performance, general data processing…

- P. J. Narayanan, Peter Rander, Takeo Kanade
- ICCV
- 1998

- Vibhav Vineet, Pawan Harish, Suryakant Patidar, P. J. Narayanan
- High Performance Graphics
- 2009

Graphics Processor Units are used for much general-purpose processing due to the high compute power available on them. Regular, data-parallel algorithms map well to the SIMD architecture of current GPUs. Irregular algorithms on discrete structures like graphs are harder to map to them. Efficient data-mapping primitives can play a crucial role in mapping such…
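A canonical example of the data-mapping primitives this abstract refers to is scan (prefix sum). A sequential sketch of the work-efficient exclusive scan, whose up-sweep/down-sweep tree pattern is what GPU libraries parallelise (the function name is ours):

```python
def blelloch_exclusive_scan(values):
    """Work-efficient exclusive prefix sum: the up-sweep builds
    partial sums in a binary-tree pattern; the down-sweep then
    distributes them. Length must be a power of two (GPU versions
    pad the input)."""
    a = list(values)
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    # Up-sweep (reduce): each pass combines pairs at stride d.
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2
    # Down-sweep: clear the root, then swap-and-add back down.
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a
```

Each inner `for` loop is an independent set of updates, which is exactly what makes the pattern run in O(log n) parallel steps on SIMD hardware.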

- Jyothish Soman, Kishore Kothapalli, P. J. Narayanan
- 2010 IEEE International Symposium on Parallel…
- 2010

Graphics processing units provide large computational power at a very low price, which positions them as a ubiquitous accelerator. General-purpose programming on graphics processing units (GPGPU) is best suited for regular data-parallel algorithms. GPUs are not directly amenable to algorithms which have irregular data access patterns such as list…
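List ranking is the textbook irregular problem in this space. A sequential sketch of Wyllie's pointer-jumping algorithm, the classic parallel formulation that GPU list-ranking work builds on (the representation below is illustrative):

```python
def wyllie_list_ranking(succ):
    """List ranking by pointer jumping: every node repeatedly adds
    its successor's rank to its own and then jumps over it, halving
    the remaining list length each round.
    succ[i] is the next node index, or None at the tail.
    Returns each node's distance from the tail."""
    n = len(succ)
    rank = [0 if s is None else 1 for s in succ]
    nxt = list(succ)
    changed = True
    while changed:
        changed = False
        # All nodes update simultaneously in the parallel version;
        # snapshotting rank and nxt mimics that here.
        rank_s, nxt_s = list(rank), list(nxt)
        for i in range(n):
            j = nxt_s[i]
            if j is not None:
                rank[i] = rank_s[i] + rank_s[j]  # absorb successor's rank
                nxt[i] = nxt_s[j]                # jump over the successor
                changed = True
    return rank
```

The O(log n) rounds of synchronous pointer jumps are easy to parallelise, but the scattered memory accesses they generate are precisely the irregular pattern the abstract describes.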

- Jag Mohan Singh, P. J. Narayanan
- IEEE Transactions on Visualization and Computer…
- 2010

Compact representation of geometry using a suitable procedural or mathematical model and a ray-tracing mode of rendering fit the programmable graphics processor units (GPUs) well. Several such representations including parametric and subdivision surfaces have been explored in recent research. The important and widely applicable category of the general…

The pattern recognition (PR) process uses a large number of labelled patterns and compute-intensive algorithms. Several components of a PR process are compute- and data-intensive. Some algorithms compute the parameters required for classification directly for each test pattern using a large training set. Most algorithms have a training step, the results of…
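Classifying each test pattern directly against a large labelled training set is the pattern followed by, for example, nearest-neighbour classifiers, whose per-pattern distance computations are the kind of data-parallel workload this abstract has in mind. A minimal 1-NN sketch with an invented training set:

```python
def nearest_neighbour(train, test_pattern):
    """1-NN classification: label the test pattern with the label of
    the closest training pattern under squared Euclidean distance.
    train: list of (feature_vector, label) pairs."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # On a GPU, one thread per training pattern would compute sqdist,
    # followed by a parallel reduction to find the minimum.
    _, label = min(train, key=lambda pair: sqdist(pair[0], test_pattern))
    return label

# Hypothetical labelled training set.
train = [((0.0, 0.0), 'A'), ((1.0, 1.0), 'B'), ((0.2, 0.1), 'A')]
```

Because every distance is independent, the cost per test pattern is a single parallel pass over the training set plus a reduction, which is why such classifiers map well to accelerators.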