I had a full day today: five ethics classes. In between classes during the afternoon, I finished marking all of my university Data Engineering students' project reports and presentations.
It was a bit of a slog, because most of the reports were not particularly high quality, with some fundamental mistakes and misunderstandings of how to apply statistical tests and present graphical data. The last report I had to mark was a breath of fresh air though, as the students had actually done the statistics correctly and achieved a decent result for their experiment.
They wanted to determine whether brand awareness influences people's judgement of photo quality. They got a series of photos of the same scenes taken by an Apple phone camera and a Samsung phone camera, and made surveys in which they showed each pair of photos side by side and asked people to pick which one they preferred. In one survey the photos were labelled simply as “option 1” and “option 2”. In a second survey, with different people, the photos were labelled truthfully as “Apple” or “Samsung”. And in a third survey they switched the labels, so that the Apple photos were labelled “Samsung” and vice versa. I thought this was a really clever bit of experiment design.
The results showed that out of 200 responses to survey 1 (20 people judging 10 photo pairs each), 96 favoured the Apple photos and 104 favoured the Samsung. This established a baseline for comparison, which was pretty even. In the second survey, 111 responses favoured the “Apple” labelled photos (which actually were Apple), while 89 chose “Samsung”. And in survey 3, 112 favoured the “Apple” labelled photos (which were actually Samsung), while 88 chose “Samsung” (actually Apple). This is a pretty cool result! It really suggests that some people are swayed towards photos they think were taken with an Apple phone, even if they weren’t. They did a chi-squared test on the numbers, but the p-value was 0.12, meaning that if the labels had no effect at all, there would still be a 12% chance of seeing a discrepancy at least this large purely from random variation. We usually require a p-value of 5% or less before calling a result statistically significant, but 12% is pretty close. The problem for this analysis is that they didn’t quite have enough data – with the same proportions and a larger sample, the test would likely have reached significance. Anyway, it was a really nice experiment, project, and write-up.
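If you want to poke at numbers like these yourself, a chi-squared test is only a few lines of Python with scipy. The sketch below is just one plausible way to set up the comparisons – the exact comparison behind their 0.12 isn't spelled out above, and the variable names are mine – so treat it as an illustration rather than a reproduction of the students' actual analysis:

```python
# A sketch of running the survey counts through chi-squared tests with scipy.
# The choice of comparisons here is mine, not necessarily the students'.
from scipy.stats import chisquare, chi2_contingency

survey1 = [96, 104]   # unlabelled: Apple photos vs Samsung photos
survey2 = [111, 89]   # truthful labels: "Apple" vs "Samsung"
survey3 = [112, 88]   # swapped labels: "Apple" (really Samsung) vs "Samsung" (really Apple)

# Goodness-of-fit: does each labelled survey deviate from an even 50/50 split?
for name, counts in [("survey 2", survey2), ("survey 3", survey3)]:
    stat, p = chisquare(counts)  # expected frequencies default to uniform
    print(f"{name} vs 50/50: chi2 = {stat:.2f}, p = {p:.3f}")

# Independence test across all three surveys: does labelling shift preference?
stat, p, dof, _ = chi2_contingency([survey1, survey2, survey3])
print(f"3x2 table: chi2 = {stat:.2f}, dof = {dof}, p = {p:.3f}")
```

Depending on which of these comparisons you run, the p-values come out roughly in the 0.09–0.2 range – suggestive, but all short of the 5% threshold, which is exactly why a larger sample would have helped.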
The other thing I did today was take Scully for a long walk to Botanica Cafe for lunch. I started working my way down their all-day breakfast menu, from the first item: a “breakfast bowl” of tapioca and chia seeds with preserved mangoes, coconut, and fresh figs and berries. It was delicious and cinnamony, but such a large sweet meal for lunch really filled me up for the afternoon and left me craving something savoury afterwards. I held out for the minestrone that my wife made for dinner, using yesterday’s leftover vegetable soup.
New content today: