Adding scene depth is the next evolutionary step in enhancing the fidelity of video communication over wired and wireless networks. Three-dimensional video based on integral imaging (II) capture and display subsystems has shown promising results and is now in the early prototype stage. We have created a ray-tracing based interactive simulation environment that generates II video sequences, as a way to assist in the development, evaluation and rapid adoption of these emerging techniques into the complete communication chain. A generic II description model is also proposed as the basis for the simulation environment. This description model facilitates optically accurate II rendering using MegaPOV, a customized version of the open-source ray-tracing package POV-Ray. By using MegaPOV as its rendering engine, the simulation environment fully incorporates the POV-Ray scene description language, allowing a virtual scene to be defined exactly. Compared to experimental research, this greatly simplifies generating and comparing II video sequences that adhere to different II techniques. The initial development of the simulation environment focuses on generating and visualizing II source material conforming to the optical properties of different II techniques published in the literature. Both temporally static and dynamic systems are considered. The simulation environment’s potential for easy production of II video sequences adhering to different II techniques is demonstrated.
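To illustrate the kind of scene definition the environment inherits from POV-Ray, a minimal sketch of a scene in the POV-Ray scene description language is shown below. The scene contents are hypothetical, and the II-specific camera and lens-array extensions introduced through MegaPOV are not shown:

```pov
// Minimal POV-Ray SDL scene (hypothetical example; the II-specific
// camera extensions provided via MegaPOV are omitted here).
camera {
  location <0, 1, -4>   // viewpoint in front of the scene
  look_at  <0, 0, 0>    // aim at the origin
}
light_source { <5, 10, -10> color rgb <1, 1, 1> }
sphere {
  <0, 0, 0>, 1          // unit sphere at the origin
  pigment { color rgb <0.9, 0.2, 0.2> }
}
plane {
  y, -1                 // ground plane below the sphere
  pigment { checker color rgb 1 color rgb 0 }
}
```

Because any scene expressed in this language can be rendered under the optical parameters of different II techniques, the same virtual scene serves as a common reference when comparing techniques.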