Monday 29 January, 2018. 12:53
I am sitting in the Juban Yakiniku House in Burlingame, having lunch during the conference lunch break. I had another terrible sleep, lying awake much of the night, and did not feel like getting up when the alarm went off at 07:00. But I bounded out of bed and checked the BART timetable as I gulped down a mixture of Special K and bran cereal for breakfast. Trains left for Millbrae at 07:21 and 07:36. I hurried so I could make the first one, racing out and leaving M. to her first day of solitary sightseeing. I made the train with a minute to spare and settled in for the ride to Millbrae.
The train was largely empty at this time, which was good, as I had a double seat to myself. I went through the printout of my talk to help cement it in my brain, although I’m not giving it until tomorrow. Once at Millbrae, I set out on the walk to the Hyatt Regency Hotel, where the conference is on. Checking the time at both ends, I clocked the walk at 28 minutes. The morning was grey, but not especially cold, and I removed my gloves part way along as I warmed up from the exercise.
I arrived at the hotel at 08:20, leaving plenty of time to register and collect my conference badge before the first talk I planned to attend at 08:50. Even before entering, I saw Nicolas outside, dealing with his transport. Then inside I ran into Kristyn, who apologised for not answering my email, saying she was halfway through replying. Her reply arrived a few minutes into the first conference session, saying she was keen to meet up with M. again, but had commitments with her in-laws while here in the Bay Area.
The first talk session was a joint session between Automotive Imaging and Image Quality & Systems Performance, chaired by Stuart. The very first talk was given by Robin Jenkin, whom I met briefly outside before he dashed off, saying he needed to get ready to present. He spoke about the task of measuring the image quality of automotive cameras, and how it is very different from measuring the image quality of normal photographic cameras, mainly because photographic quality standards all factor in the influence of the human visual system, which is absent in an automotive context. The colour filter arrays of car cameras also tend to be very different from the RGB Bayer filter used for human-oriented photography; typically car cameras use red-white-white-white or red-white-white-blue patterns. The output of a car camera goes into a neural network designed to produce a decision on what the car should do, not to produce an image for humans to look at, so completely different criteria need to be used to judge the quality of the camera.
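As a little aside of my own (nothing from Robin’s talk, just me playing with the idea afterwards), here’s a minimal sketch in Python of how those automotive filter mosaics compare with a standard Bayer pattern. The 2×2 tile layouts are the real content; the names and helper function are just mine for illustration:

```python
import numpy as np

# Illustrative 2x2 colour filter array (CFA) tiles. "C" stands for a
# clear/white pixel with no colour filter over it; these labels are my
# own shorthand, not terminology quoted from the talk.
BAYER_RGGB = np.array([["R", "G"],
                       ["G", "B"]])   # standard photographic Bayer tile
AUTO_RCCC = np.array([["R", "C"],
                      ["C", "C"]])   # red-white-white-white
AUTO_RCCB = np.array([["R", "C"],
                      ["C", "B"]])   # red-white-white-blue

def mosaic(tile, rows, cols):
    """Tile a 2x2 CFA pattern across a (rows x cols) sensor."""
    return np.tile(tile, (rows // 2, cols // 2))

print(mosaic(AUTO_RCCB, 4, 6))
```

As I understand it, the point of all those unfiltered “white” pixels is to trade colour fidelity for raw light sensitivity, which matters more to a decision-making network than it does to a human looking at a photograph.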
Another talk in this session was by Susan Farnand, who talked about colour calibration for unmanned autonomous vehicles, in particular drone cameras used for crop analysis. You can’t just stick a giant GretagMacbeth ColorChecker chart in a crop field, as each colour square would need to be over a square metre in size to be usable from the air. So her group is working on developing colour calibration targets suitable for use in the field under different daylight and shadow conditions.
After the session I chatted with Stuart a bit, and then outside ran into Elaine and Margaret, before heading upstairs for the new Material Appearance conference, which is starting up this year. This was a much smaller session, with only a couple of dozen attendees. The first talk was an overview of the applications of material appearance capture and analysis in cultural heritage. This is a very diverse field, bringing together many different disciplines, ranging from optics and physics to computer vision to art history and cultural analysis. So it’s difficult for any one person to know enough in all the relevant fields, and people often end up reinventing the wheel on different techniques. The speaker wants to create a database of methods and research that people in the field can refer to and adapt techniques from.
After some more talks about capturing material appearance, it was time for lunch. I walked out of the hotel and fifteen minutes down the road into Burlingame to find a place to eat. The new bridge over the 101 freeway is now complete, so unlike last year it was easy to cross over and get into Burlingame. While walking into the town area, I saw an incredible piece of aggressive driving. There was a queue of four or five cars in a lane waiting to go straight at a traffic light. The guy at the back decided to pull out into the right turn lane, go past the car in front of him, and then stick his nose into the small gap between that car and the one in front of it, so the car he’d gone past had to let him in when the light went green!
Sizzling salmon in Juban Yakiniku House, Burlingame
I walked up and down the street to get an idea of what was available. The Thai place that I’ve been to a couple of times had closed down, so I chose a Japanese place instead, ending up in Juban. I ordered a sizzling salmon plate, which came with zucchini, carrot, and onion on the sizzling plate, plus rice, miso soup, and a dipping sauce. I also got some daikon salad and a Sapporo beer. It was tasty, and very welcome after the long morning of talks following my early breakfast.
17:29
I’m sitting on the BART train at Millbrae waiting to head back in to San Francisco to meet M. for dinner in the Mission District. After lunch, I walked back to the conference, making it with a good ten minutes to spare.
The first session after lunch was an all-conference plenary, with a single invited talk. It was about machine learning and its applications to imaging science, by a guy from Google’s DeepMind AI team. Being a plenary talk, it was more about general principles and an overview than any specific technical innovation. He described how machine learning works and how it has seen a breakthrough in the past few years with the development of deep neural networks, which have substantially improved the performance of AI applications. Many are now at the point where expertly hand-crafted solutions are worse than simply throwing machine learning at the problem.
Machine translation of human languages, such as Google Translate, is one example. A few years ago the translations were all based on hand-crafted, phrase-based rules. But now it’s moved entirely to machine learning algorithms, and that has reduced the error rate by 50 to 80%. In an image-related field, machine learning algorithms for labelling objects in images have, since 2015, performed better than humans. What’s more, machine-learnt image recognition can be very specific, identifying similar species of flower or breeds of dog, where most humans simply wouldn’t know the difference. There are also other image-related applications, such as generating a text description of an image, or transferring the style of a painting onto other images. His conclusion was that deep neural networks, and convolutional neural networks in particular, are starting to come into their own and will be crucial for the development of imaging applications in the near future.
This was followed by a break, and then the final session of the day, which for me was back to the Material Appearance talks. These were less interesting than the ones before lunch, with some fairly dry technical stuff. The most interesting was about predicting the colour appearance of inks laid over the top of each other on arbitrary substrates.
21:06
After the last talks of the day, I walked quickly back to Millbrae station and caught a train to 24th Street and Mission, where M. was already waiting. We walked over to Velvet Cantina on 23rd Street, where we had a dinner booking, getting there shortly after 18:00.
Velvet Cantina, The Mission
We got a booth in a nook around the corner from the door and ordered margaritas to start, sipping them while we munched on the complimentary chips and salsa and compared stories of what we’d got up to during the day. M. had spent the day shopping in the area around Union Square, and then relaxing and reading some of her book.
After a while we ordered food. M. got a veggie burrito, while I chose the shrimp enchiladas. Mine came with rice and beans, and I chose pinto beans this time. The enchiladas were really good, nice and spicy, and M. liked her burrito too. There was also a red cabbage coleslaw on both our plates, which was really peppery. We had a good time enjoying the red-tinted ambience of the place and the good food.
Dinner at Velvet Cantina
Then we returned to our hotel on the train and waited to see if Rohan would contact us for a late night ice cream after his own dinner in the city. There had been no message from him during the day, and by 21:30 he still hadn’t been in touch, so we called it a night and went to bed.