Abstract We present the first implementation of a full lattice basis reduction for graphics cards. Existing algorithms for lattice basis reduction on CPUs offer reasonable results concerning runtime or reduction quality, but unfortunately not both at the same time. In this work, we show that the powerful architecture of graphics cards is well suited to alternative algorithms that were merely of theoretical interest so far due to their enormous demand for computational resources and their requirements on parallel processing. We modified and optimized these algorithms to fit the architecture of graphics cards; in particular, we focused on Givens rotations and the all-swap reduction method. Eventually, our GPU implementation achieved a significant speed-up on given lattice challenges compared to the NTL implementation running on a CPU (e.g., a speed-up factor of 13.7 for the same financial investment), while providing at least the same reduction quality.
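As background for the Givens rotations mentioned above: they are a standard primitive for the orthogonalization step underlying lattice reduction, attractive on parallel hardware because each rotation touches only two rows. A minimal CPU-side Python sketch of Givens-based QR (an illustration of the general technique, not the paper's GPU implementation; `givens` and `apply_givens_qr` are hypothetical helper names):

```python
import math

def givens(a, b):
    # Return (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T,
    # i.e. the rotation that annihilates b against a.
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

def apply_givens_qr(A):
    # QR factorization of an m x n list-of-lists matrix via Givens
    # rotations: zero out subdiagonal entries column by column,
    # rotating from the bottom row upward. Returns the triangular R.
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    for j in range(n):
        for i in range(m - 1, j, -1):
            c, s = givens(R[i - 1][j], R[i][j])
            for k in range(j, n):  # update both affected rows
                t1, t2 = R[i - 1][k], R[i][k]
                R[i - 1][k] = c * t1 + s * t2
                R[i][k] = -s * t1 + c * t2
    return R
```

Because rotations acting on disjoint row pairs commute, many of them can be applied concurrently, which is what makes this scheme a natural fit for GPUs.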