We study the use of crowdsourcing for self-reported attention in image quality assessment (IQA) tasks. We present the results from two crowdsourcing campaigns: one where participants indicated via mouse clicks the image locations that influenced their rating of quality, and another where participants chose locations they looked at in a free-viewing setting.
We carried out crowdsourced video quality assessments using paired comparisons and converted the results to differential mean opinion scores (DMOS). A previous lab-based study had provided corresponding MOS values from absolute category ratings. Using a simple linear transformation to fit the crowdsourcing-based DMOS values to the lab-based MOS values, we …
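The linear mapping between crowdsourced DMOS and lab-based MOS described above can be estimated with ordinary least squares. A minimal sketch follows; the DMOS and MOS arrays are invented placeholder values (not data from the studies), and the fit assumes a plain model of the form mos ≈ a·dmos + b:

```python
import numpy as np

# Hypothetical crowdsourced DMOS values and lab-based MOS values
# for the same set of test videos (placeholder numbers only).
dmos_crowd = np.array([0.12, 0.45, 0.78, 1.30, 1.95, 2.40])
mos_lab = np.array([4.6, 4.1, 3.6, 2.9, 2.2, 1.8])

# Least-squares fit of the linear transformation mos ≈ a * dmos + b.
a, b = np.polyfit(dmos_crowd, mos_lab, deg=1)

# Evaluate the fit on the same points.
mos_pred = a * dmos_crowd + b
rmse = np.sqrt(np.mean((mos_pred - mos_lab) ** 2))
print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.3f}")
```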