Environment-Based Music Generation


The goal of this project was to create a package that lets the robot analyze the color composition of an image, detect whether any people are present, and then play music that fits the mood of the scene. We used the OpenCV library and the pcl_perception node for image processing and person detection; based on the warmth and saturation of the image and the presence of a person, the robot chose a song from a database of songs categorized by mood. We tested the program by placing distinctly colored objects in front of the robot and monitoring how the detected mood changed and whether the robot picked a song from the appropriate category. In the future, more work could be done on actually generating music to fit the mood of the image, or on selecting music based on physical location rather than the visual image feed alone.
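The warmth-and-saturation mood classification described above could be sketched as follows. This is a minimal illustration, not the project's actual implementation: the `classify_mood` helper, the mood labels, and the thresholds are all hypothetical, and plain NumPy stands in for the full OpenCV/pcl_perception pipeline.

```python
import numpy as np

# Hypothetical thresholds; the real system's cutoffs are not given in the paper.
WARMTH_THRESHOLD = 0.0   # red-minus-blue balance above this reads as "warm"
SAT_THRESHOLD = 0.3      # mean saturation above this reads as "vivid"

def image_stats(rgb):
    """Return (warmth, saturation) for an H x W x 3 float image in [0, 1].

    Warmth: mean red minus mean blue channel; positive means warm colors dominate.
    Saturation: per-pixel (max - min) / max (the HSV definition), averaged.
    """
    rgb = np.asarray(rgb, dtype=float)
    warmth = rgb[..., 0].mean() - rgb[..., 2].mean()
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0).mean()
    return warmth, sat

def classify_mood(rgb, person_present):
    """Map image statistics plus person detection to a mood category label."""
    warmth, sat = image_stats(rgb)
    if person_present:
        return "upbeat" if warmth > WARMTH_THRESHOLD else "mellow"
    return "energetic" if sat > SAT_THRESHOLD else "calm"

# A solid warm red frame with a person detected maps to "upbeat".
red = np.zeros((4, 4, 3))
red[..., 0] = 1.0
print(classify_mood(red, person_present=True))
```

A song would then be drawn from the database bucket matching the returned mood label.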

Cite this paper

@inproceedings{CsEnvironmentBasedMG,
  title={Environment-Based Music Generation},
  author={Cs and Smitha Nagar and Mayuri Raja and Lucy Zhao}
}