From the post:
An innovative computer program brings color to grayscale images.
Creating a high-quality realistic color image from a grayscale picture can be challenging. Conventional methods typically require the user’s input, either by using a scribbling tool to color the image manually or by using a color transfer. Both options can result in poor colorization quality limited by the user’s degree of skill or the range of reference images available.
Alex Yong-Sang Chia at A*STAR’s Institute for Infocomm Research and co-workers have now developed a computer program that utilizes the vast amount of imagery available on the internet to find suitable color matches for grayscale images. The program searches hundreds of thousands of online color images, cross-referencing their key features and foreground objects with those of the grayscale pictures.
“We have developed a method that takes advantage of the plentiful supply of internet data to colorize gray photos,” Chia explains. “The user segments the image into separate major foreground objects and adds semantic labels naming these objects in the gray photo. Our program then scans the internet using these inputs for suitable object color matches.”
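The quoted workflow has two inputs from the user (segmented foreground objects plus semantic labels) and one automated step (searching labeled color images for the best match per object). A minimal sketch of that matching step, in Python with a toy in-memory corpus standing in for the web-scale search — the function name, feature representation, and distance measure are all illustrative assumptions, not the researchers' actual algorithm:

```python
# Hypothetical sketch: for each user-labeled grayscale segment, find the
# color image in a labeled corpus whose features are closest. The feature
# vectors and Euclidean distance here are stand-ins for whatever the real
# system uses.

def find_color_references(segments, corpus):
    """Map each segment label to the URL of the closest same-label image."""
    matches = {}
    for label, features in segments.items():
        candidates = [img for img in corpus if img["label"] == label]
        if not candidates:
            continue  # no reference found; leave this segment uncolored
        best = min(
            candidates,
            key=lambda img: sum((a - b) ** 2
                                for a, b in zip(features, img["features"])),
        )
        matches[label] = best["url"]
    return matches

# Toy corpus standing in for hundreds of thousands of online color images
corpus = [
    {"label": "dog", "features": [0.9, 0.1], "url": "img/dog1.jpg"},
    {"label": "dog", "features": [0.2, 0.8], "url": "img/dog2.jpg"},
    {"label": "car", "features": [0.5, 0.5], "url": "img/car1.jpg"},
]

segments = {"dog": [0.85, 0.15], "sky": [0.1, 0.9]}
print(find_color_references(segments, corpus))
# {'dog': 'img/dog1.jpg'}  ('sky' has no reference in the toy corpus)
```

The point of the sketch is the division of labor: the human supplies the semantic labels, which turns a hard open-ended recognition problem into a constrained nearest-neighbor search.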
If you think about it for a moment, subject recognition in images is being performed here. As the researchers concede, it’s not 100% accurate, but then it doesn’t need to be: they have human users in the loop.
I wonder whether human users have to correct the colorization more than once for the same source image. That is, does the system “remember” earlier choices?
The article doesn’t say, so I will follow up with an email.
Keeping track of user-corrected subject recognition would create a breadcrumb trail for other users confronted with the same images. (In other words, a topic map.)
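A breadcrumb trail of this kind could be as simple as a store keyed by (image, label) that records which reference a user accepted, so later users see prior choices first. A minimal sketch, with hypothetical names of my own (nothing here is from the article):

```python
# Hypothetical sketch of remembering earlier user corrections: record the
# reference image a user accepted for a given (image id, segment label),
# and let later lookups for the same pair reuse that choice.

class CorrectionTrail:
    def __init__(self):
        self._choices = {}  # (image_id, label) -> accepted reference URL

    def record(self, image_id, label, reference_url):
        """Store the reference a user accepted for this labeled segment."""
        self._choices[(image_id, label)] = reference_url

    def lookup(self, image_id, label):
        """Return a previously accepted reference, or None if unseen."""
        return self._choices.get((image_id, label))

trail = CorrectionTrail()
trail.record("photo_42", "dog", "img/dog1.jpg")
print(trail.lookup("photo_42", "dog"))   # img/dog1.jpg
print(trail.lookup("photo_42", "sky"))   # None
```

Which is the topic-map idea in miniature: the keys name subjects, and the values accumulate what users have said about them.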