Guinea Pig Pizza: Ecuador

Image recognition (IR) systems often perform poorly once in the real world. In this post, I test four of the most popular IR systems on original real world images of food from around the world, this time from Ecuador.
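For context, below is a minimal sketch of how a single photo can be sent to the four services for labeling. It assumes the standard Python SDKs (google-cloud-vision, boto3, azure-cognitiveservices-vision-computervision, ibm-watson) with placeholder credentials and a hypothetical file name; it is illustrative only, not the exact pipeline behind these tests.

```python
# Illustrative sketch: send one food photo to the four labeling APIs.
# Credentials, endpoint, and file name are placeholders.
import boto3
from google.cloud import vision
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

IMAGE_PATH = "cuy.jpg"  # hypothetical file name
with open(IMAGE_PATH, "rb") as f:
    image_bytes = f.read()

# Google Vision: label detection
g_client = vision.ImageAnnotatorClient()
g_labels = g_client.label_detection(image=vision.Image(content=image_bytes)).label_annotations
print([(l.description, round(l.score, 2)) for l in g_labels])

# Amazon Rekognition: label detection (confidence is returned on a 0-100 scale)
rek = boto3.client("rekognition")
r_labels = rek.detect_labels(Image={"Bytes": image_bytes})["Labels"]
print([(l["Name"], round(l["Confidence"] / 100, 2)) for l in r_labels])

# Microsoft Azure: tagging (labels)
cv = ComputerVisionClient("https://<resource>.cognitiveservices.azure.com",
                          CognitiveServicesCredentials("<azure-key>"))
with open(IMAGE_PATH, "rb") as f:
    a_tags = cv.tag_image_in_stream(f).tags
print([(t.name, round(t.confidence, 2)) for t in a_tags])

# IBM Watson Visual Recognition: general classification
vr = VisualRecognitionV3(version="2018-03-19", authenticator=IAMAuthenticator("<ibm-key>"))
with open(IMAGE_PATH, "rb") as f:
    w_result = vr.classify(images_file=f).get_result()
print(w_result["images"][0]["classifiers"][0]["classes"])
```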

Key takeaway

The IR systems performed very poorly for both object detection and labeling. Cuy was not recognized at all across four images. Several (cultural) misrepresentations were present.

Correctly predicted images 0/4
Correctly detected items 0/4
Correct labels 0/89
Potentially harmful detections/labels 6
The above table includes only detections and labels of 80%+ confidence; for lower confidence levels, see the tables further below.
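As an illustration, the summary counts above keep only results at or above that threshold. A minimal sketch of the cut-off, using hypothetical (system, label, confidence) tuples taken from the tables below:

```python
# Minimal sketch of the 80%-confidence cut-off used for the summary table.
# `results` is a hypothetical list of (system, label, confidence) tuples.
THRESHOLD = 0.80

def above_cutoff(results, threshold=THRESHOLD):
    """Keep only detections/labels at or above the confidence threshold."""
    return [r for r in results if r[2] >= threshold]

sample = [
    ("Rekognition", "Pizza", 0.74),  # image 1: excluded (below the cut-off)
    ("Rekognition", "Pizza", 0.84),  # image 2: counted
]
print(above_cutoff(sample))  # [('Rekognition', 'Pizza', 0.84)]
```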

Insights

Object Detection

The object detection systems failed to describe the Cuy accurately in any of the four images. Vision gave only the general description Food for the Cuy, while Rekognition described it as Pizza in all four images. As such, object detection performed very poorly.

Labeling

The labeling systems performed very poorly as well. The labels remained surface level and did not come close to describing the meal; perhaps the most relevant label was Fried Food. This calls into question the usefulness of the results for the presented meal.

Furthermore, several cultural misrepresentations were present, the most obvious one being Rekognition consistently mistaking the Cuy for pizza. With labels such as Hendl and British cuisine, Vision also gave descriptions that significantly misrepresent the meal.

As in previous analyses, here too we have to address confusion (mostly by Vision) between types of meat. While Cuy is a typical dish in Ecuador and neighboring countries, people from other cultures might prefer not to eat it. Yet the systems described the meal as chicken meat, duck meat, turkey meat, pork, etc. Someone who relied too heavily on the results of these labeling systems might end up eating Cuy while thinking it was something else.

Finally, we see that Rekognition primarily returned labels for the laptop in the background. While not wrong as such, I wonder whether Rekognition found it easier to report something more common and visually simple/distinctive, and thereby failed to give many results for the food.

Suggestions for improvement

  • Provide more specific and relevant labels for Cuy;
  • Address (cultural) misrepresentations (i.e. Cuy is not pizza);
  • Make sure labels of meat do not harm people of certain religions or with certain diets (i.e. Cuy is not duck meat or chicken meat).
  • Check to what extent the systems can distinguish between less relevant yet visually simple background objects and the meal in the foreground (especially for Rekognition); a rough way to check this is sketched below.
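One hypothetical way to probe the last point: crop the image to each detected object's bounding box and re-run labeling, then compare the cropped results with the full-image results. The sketch below uses Rekognition's detect_labels instances; the file name and overall flow are illustrative assumptions, not the procedure used in these tests.

```python
# Hypothetical check: does the background laptop crowd out labels for the food?
# Crop to each detected instance's bounding box and re-label just that region.
import io
import boto3
from PIL import Image

rek = boto3.client("rekognition")

with open("cuy_with_laptop.jpg", "rb") as f:  # hypothetical image
    original_bytes = f.read()

response = rek.detect_labels(Image={"Bytes": original_bytes})
img = Image.open(io.BytesIO(original_bytes))
width, height = img.size

for label in response["Labels"]:
    for instance in label.get("Instances", []):
        box = instance["BoundingBox"]  # normalized coordinates
        left, top = box["Left"] * width, box["Top"] * height
        crop = img.crop((left, top,
                         left + box["Width"] * width,
                         top + box["Height"] * height))

        # Re-label the cropped region and compare with the full-image labels.
        buf = io.BytesIO()
        crop.save(buf, format="JPEG")
        crop_labels = rek.detect_labels(Image={"Bytes": buf.getvalue()})["Labels"]
        print(label["Name"], "->",
              [(l["Name"], round(l["Confidence"], 1)) for l in crop_labels[:5]])
```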

Results

Four images of one meal from Ecuador were available:

  • Meal 1: Cuy (Fried Guinea pig) (Lunch)

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Cuy Undetected Food (0.66) Pizza (0.74) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
  Food (0.98) Computer Keyboard (0.98) nutrition (0.95)
  Laptop (0.94) Hardware (0.98) food (0.95)
  Lechona (0.9) Keyboard (0.98) reddish orange color (0.86)
  Computer (0.89) Computer Hardware (0.98) dish (0.83)
  Ingredient (0.88) Computer (0.98) light brown color (0.83)
  Tableware (0.88) Electronics (0.98) Apple Pie (0.78)
  Recipe (0.86) Pc (0.97) dessert (0.78)
  Chicken meat (0.8) Laptop (0.94) fish and chips (0.67)
  Fried food (0.8) Food (0.79) turnover (0.51)
  Roasting (0.79) Pizza (0.74) samosa (0.5)
  Cuisine (0.79)    
  Cooking (0.78)    
  Duck meat (0.78)    
  Produce (0.76)    
  Turkey meat (0.75)    
  Dish (0.75)    
  Meat (0.74)    
  Drunken chicken (0.73)    
  Vegetable (0.71)    
  Personal computer (0.7)    
  Fast food (0.7)    
  Comfort food (0.66)    
  Pork (0.66)    
  Hendl (0.63)    
  Flesh (0.63)    

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Cuy Undetected Food (0.73) Pizza (0.84) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_grilled (0.69) Food (0.98) Pc (0.97) light brown color (0.91)
  Tableware (0.93) Electronics (0.97) nutrition (0.87)
  Laptop (0.88) Computer (0.97) food (0.87)
  Ingredient (0.88) Food (0.91) fish and chips (0.87)
  Recipe (0.87) Laptop (0.88) dish (0.87)
  Computer (0.84) Pizza (0.84) food product (0.79)
  Chicken meat (0.84) Computer Keyboard (0.83)
  Deep frying (0.83) Hardware (0.83)  
  Cuisine (0.81) Keyboard (0.83)  
  Dish (0.78) Computer Hardware (0.83)
  Drunken chicken (0.78)    
  Plate (0.78)    
  Produce (0.77)    
  Fried food (0.77)    
  Vegetable (0.76)    
  Cooking (0.75)    
  Hendl (0.75)    
  Meat (0.74)    
  Seafood (0.72)    
  Roasting (0.72)    
  Comfort food (0.7)    
  Fast food (0.7)    
  Duck meat (0.69)    
  Frying (0.67)    
  British cuisine (0.66)    

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Cuy Undetected Food (0.77) Pizza (0.91) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
  Food (0.98) Pc (0.99) reddish orange color (0.95)
  Computer (0.98) Computer (0.99) nutrition (0.75)
  Laptop (0.95) Electronics (0.99) food (0.75)
  Tableware (0.93) Laptop (0.99) dish (0.75)
  Personal computer (0.93) Computer Keyboard (0.97) fish and chips (0.75)
  Ingredient (0.89) Hardware (0.97)  
  Recipe (0.86) Keyboard (0.97)  
  Input device (0.84) Computer Hardware (0.97)
  Cuisine (0.83) Pizza (0.91)  
  Dish (0.83) Food (0.91)  
  Fast food (0.79)    
  Chicken meat (0.79)    
  Peripheral (0.77)    
  Fried food (0.76)    
  Produce (0.75)    
  Netbook (0.74)    
  Space bar (0.74)    
  Meat (0.74)    
  Drunken chicken (0.74)    
  Output device (0.73)    
  Cooking (0.72)    
  Junk food (0.71)    
  Comfort food (0.69)    
  Baked goods (0.68)    
  Touchpad (0.66)    

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Cuy Undetected Food (0.77) Pizza (0.62) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_grilled (0.67) Food (0.98) Computer Keyboard (0.99) fish and chips (0.92)
  Computer (0.98) Hardware (0.99) dish (0.92)
  Laptop (0.96) Keyboard (0.99) nutrition (0.92)
  Personal computer (0.95) Computer Hardware (0.99) food (0.92)
  Ingredient (0.88) Computer (0.99) reddish orange color (0.79)
  Input device (0.88) Electronics (0.99) light brown color (0.57)
  Recipe (0.87) Pc (0.98)  
  Tableware (0.83) Laptop (0.96)  
  Output device (0.82) Food (0.78)  
  Chicken meat (0.81) Pizza (0.62)  
  Cuisine (0.78) Pork (0.58)  
  Cooking (0.78)    
  Office equipment (0.76)    
  Fried food (0.76)    
  Space bar (0.75)    
  Produce (0.75)    
  Dish (0.75)    
  Meat (0.74)    
  Roasting (0.73)    
  Plate (0.72)    
  Laptop part (0.71)    
  Deep frying (0.71)    
  Fried chicken (0.7)    
  Comfort food (0.68)
  Computer hardware (0.68)

Soup is (not) served: Bulgaria

Image recognition (IR) systems often perform poorly once in the real world. In this post, I test four of the most popular IR systems on original real world images of food from around the world, this time from Bulgaria.

Key takeaway

The IR systems performed poorly at both detecting and labeling the one meal. Neither the name of the dish nor any of its ingredients was correctly identified.

Correctly predicted images 0/2
Correctly detected items 0/5
Correct labels 0/47
Potentially harmful detections/labels 4
The above table includes only detections and labels of 80%+ confidence; for lower confidence levels, see the tables further below.

Insights

Object Detection

The object detection systems performed poorly. Neither the dish nor any of its ingredients was predicted. Only general descriptions (e.g. bowl, food) were given, and Rekognition again described the dish as ice cream in both images.

Labeling

The labeling systems performed similarly poorly. Not one correct and accurate label for the meal was given by any of the four IR systems. Most labels remained too general (e.g. bowl, food).

In the second image, egg and sausage are clearly visible, yet none of the systems picked up on these items. One wonders whether this is because they are part of a larger meal with visually mixed items.

Many labels also clearly (culturally) misrepresented the meal. For instance, Vision and Watson both described the dish as soup, which it clearly is not. The liquid-like substance in a pot may have fooled the systems, as this is often how soup is portrayed. A similar thing can be said about Watson's descriptions of custard and creme brulee [sic].

Finally, Vision also provided wrong culture- and origin-specific descriptions (e.g. Arancini, Kai Yang, Chiboust Cream, American Food, Skyr). As in previous analyses, one has to wonder about the consequences of wrongly labeling the culture and origin of food.

Suggestions for improvement

  • Provide more specific and relevant labels for Sirene Po Shopski, Egg, and Sausage.
  • Address (cultural) misrepresentations (i.e. Sirene Po Shopski is not soup or creme brulee [sic]);

Results

Two images of one meal from Bulgaria were available:

  • Meal 1: Sirene Po Shopski (Cheese, Tomatoes, butter, eggs, and sausage)(Dinner)

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Sirene Po Shopski in a Pot Undetected Food (0.61) Ice Cream (0.82) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_ (0.63) Food (0.98) Bowl (0.95) reddish brown color (0.87)
  Tableware (0.94) Dish (0.84) nutrition (0.83)
  Dishware (0.9) Meal (0.84) food (0.83)
  Bottle (0.89) Food (0.84) dish (0.83)
  Table (0.88) Ice Cream (0.82) food product (0.8)
  Ingredient (0.87) Dessert (0.82) chocolate color (0.75)
  Recipe (0.87) Cream (0.82) soup (0.71)
  Plate (0.86) Creme (0.82) borsch (0.71)
  Serveware (0.86) Plant (0.82) custard (0.51)
  Soup (0.85) Pottery (0.79) creme brulee (0.5)
  Liquid (0.82) Shelf (0.78)  
  Dish (0.8) Wood (0.7)  
  Drink (0.8) Furniture (0.65)  
  Condiment (0.79) Pot (0.64)  
  Cuisine (0.79) Cup (0.57)  
  Produce (0.77) Plywood (0.56)  
  Flowerpot (0.76)    
  Gravy (0.75)    
  Spoon (0.73)
  Cookware and bakeware (0.73)
  Porcelain (0.72)    
  Cooking (0.72)    
  Drinkware (0.72)    
  Cup (0.72)    
  Kitchen utensil (0.71)    

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Sirene Po Shopski in a Pot Bowl (0.82) Packaged Goods (0.77) Undetected /
Spoon Kitchen Utensil (0.52) Undetected Undetected /
Egg Food (0.66) Food (0.73) Ice Cream (0.63) /
Sausage /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_ (0.66) Food (0.99) Dish (0.99) reddish brown color (0.87)
  Tableware (0.95) Meal (0.99) nutrition (0.83)
  Plate (0.9) Food (0.99) food (0.83)
  Ingredient (0.9) Bowl (0.88) dish (0.83)
  Recipe (0.87) Dessert (0.87) food product (0.8)
  Cuisine (0.84) Plant (0.81) chocolate color (0.75)
  Dish (0.83) Pottery (0.8) soup (0.71)
  Dishware (0.76) Cake (0.8) borsch (0.71)
  Produce (0.75) Cream (0.75) custard (0.51)
  Meat (0.74) Creme (0.75) creme brulee (0.5)
  Arancini (0.71) Cutlery (0.74)  
  Comfort food (0.7) Ice Cream (0.63)  
  Fried food (0.67) Pie (0.6)  
  Kai yang (0.66) Icing (0.59)  
  Chiboust cream (0.64) Porcelain (0.57)  
  Dairy (0.64) Art (0.57)  
  Junk food (0.62) Platter (0.56)  
  Cooking (0.62) Pasta (0.56)  
  Dessert (0.62)    
  Delicacy (0.6)    
  Side dish (0.59)    
  Baked goods (0.59)    
  American food (0.58)    
  Skyr (0.57)    
  Breakfast (0.57)    

Waffles are Easy: Singapore

Image recognition (IR) systems often perform poorly once in the real world. In this post, I test four of the most popular IR systems on original real world images of food from around the world, this time from Singapore.

Key takeaway

Finally, the IR systems performed somewhat well! Though object detection was still lacking, several systems correctly labeled the meal in two different pictures.

Of course, the meal contained only a single, simple, and visually distinguishable item (a waffle), but nevertheless this is the first time a meal was correctly labeled in full.

It was also almost detected correctly in one image, but Waffle had a confidence rating of 76%, just below our cut-off point of 80%. Still, many faulty labels were present as well.

Correctly predicted images 0/2
Correctly detected items 2/6
Correct labels 6/39
Potentially harmful detections/labels 7
The above table includes only detections and labels of 80%+ confidence; for lower confidence levels, see the tables further below.

Insights

Object Detection

For the first image, object detection did not work well. All three detections made (out of nine possible) were too general (e.g. food). However, for the second image, Azure and Rekognition (combined) detected all three items with fairly high confidence ratings.

Unfortunately, Rekognition also described the waffle as bread, which I feel is a missed opportunity and a clear misrepresentation. Vision’s descriptions remained too general for the second image as well.

Labeling

As always, the labeling systems performed better than the object detection systems. What stands out compared to previous analyses is that both Vision and Rekognition labeled the meal in both pictures with (very) high confidence ratings (85%-100%). Of course, compared to these previous analyses, the meal consists only of a waffle, a fork, and a knife – all simple and visually distinguishable items. Nevertheless, they labeled them well.

In the first image, the waffle is spread open and clearly shows the texture of the Kaya and Margarine. This detail was not picked up by the IR systems. While understandable given its fine-grained nature, one has to wonder whether we can expect IR systems to pick up on such details, and, if we can, how much visual similarity between foods from different countries confuses such systems (and humans).

For instance, I personally had never heard of Kaya, though its popularity in Singapore is undeniable. So, as a human coming from a Western country, I would probably have described it as butter. Butter looks very similar, yet clearly misses the mark. This is a clear case where something looks visually similar but, depending on your background and the context surrounding the image, is something substantially different.

Some wrongly labeled items were also prominent. For the second image, Rekognition correctly labeled Knife first with 99% confidence. However, the next three labels were Weapon, Blade, and Weaponry, also with 99% confidence. While wrongly labeling a weapon as not a weapon would perhaps have worse consequences, one has to wonder about the consequences of labeling a simple table knife as a weapon.
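One way to make such cases visible would be a post-processing pass that flags sensitive labels before they reach users. The deny-list below is purely hypothetical; the label values are taken from Rekognition's output for the second image.

```python
# Hypothetical post-processing step: flag potentially harmful labels
# (e.g. weapon-related terms returned for ordinary cutlery).
FLAGGED_TERMS = {"weapon", "weaponry", "blade"}  # illustrative deny-list

def flag_harmful(labels):
    """Return the (label, confidence) pairs that match the deny-list."""
    return [(name, conf) for name, conf in labels if name.lower() in FLAGGED_TERMS]

rekognition_labels = [("Knife", 0.99), ("Weapon", 0.99), ("Blade", 0.99),
                      ("Weaponry", 0.99), ("Fork", 0.99)]
print(flag_harmful(rekognition_labels))
# -> [('Weapon', 0.99), ('Blade', 0.99), ('Weaponry', 0.99)]
```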

Finally, Vision labeled the waffle as Belgian waffle with the same confidence as a waffle (95%). One wonders if the fame of Belgian waffles influenced the prediction of Vision. Again, one also has to wonder to what degree an IR system can determine the origin of a meal.

Suggestions for improvement

  • Address (cultural) misrepresentations (i.e. [not all waffles are Belgian waffles]);
  • Understand the limits of IR systems and think about the consequences of these limits:
    • Can we expect IR systems to detect if, for example, a waffle has Kaya and Margarine on it simply based on an image without further context or input?
  • Understand the consequences of labeling [a simple dinner knife] as a weapon with high confidence.

Results

Two images of one meal from Singapore were available:

  • Meal 1: Waffle with Kaya and Margarine (Dessert)

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Waffle Food (0.59) Food (0.77) Undetected /
Fork Undetected Undetected Undetected /
Knife Undetected Tableware (0.67) Undetected /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE** GOOGLE VISION AMAZON REKOG. IBM WATSON
  Food (0.99) Waffle (1) food (0.95)
  Tableware (0.96) Food (1) beige color (0.93)
  Waffle (0.95)   bread (0.86)
  Belgian waffle (0.95)   food product (0.86)
  Ingredient (0.91)   nan (0.77)
  Baked goods (0.87)   Chicken Quesadilla (0.76)
  Staple food (0.87)   dish (0.76)
  Fast food (0.86)   nutrition (0.76)
  Cuisine (0.85)   flatbread (0.5)
  Recipe (0.85)    
  Dish (0.82)    
  Finger food (0.73)    
  Junk food (0.72)    
  Dessert (0.72)    
  Produce (0.72)    
  Plate (0.7)    
  Dishware (0.69)    
  Comfort food (0.65)    
  Sweetness (0.65)    
  Kitchen utensil (0.64)    
  Snack (0.64)    
  Delicacy (0.63)    
  Waffle iron (0.63)    
  Breakfast (0.61)    
  Meal (0.59)    

**It appears that the Azure labeling API is not giving back any results at the time of analysis (only _others with a confidence rating of 0.004, model version 2021-05-01; the object detection API used is model version 2021-04-01).
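For reproducibility, the model version can be pinned explicitly on the Analyze call. The sketch below uses the v3.2 REST API via requests; the model-version query parameter and the dated value are assumptions based on Azure's documented v3.2 parameters (and the versions mentioned in the note above), not necessarily the exact setup used here.

```python
# Hedged sketch: pin the Azure model version explicitly on a v3.2 Analyze call.
# Endpoint, key, and file name are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<azure-key>"

with open("waffle.jpg", "rb") as f:  # hypothetical image
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags,Objects", "model-version": "2021-04-01"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
print(resp.json().get("tags", []))
```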

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Waffle Waffle (0.76) Food (0.74) Bread (0.89) /
Knife Undetected Tableware (0.59) Knife (0.99) /
Fork Undetected Undetected Fork (0.99) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
  Food (0.98) Knife (0.99) beige color (0.98)
  Belgian waffle (0.96) Weapon (0.99) utensil (0.63)
  Tableware (0.96) Blade (0.99) food (0.6)
  Waffle (0.96) Weaponry (0.99) food product (0.6)
  Hood (0.9) Fork (0.99) tableware (0.56)
  Plate (0.89) Cutlery (0.99) tablefork (0.55)
  Ingredient (0.89) Food (0.91) restaurant (0.55)
  Recipe (0.86) Bread (0.89) building (0.55)
  Cuisine (0.82) Waffle (0.85) cafe (0.54)
  Baked goods (0.82)   spoon (0.51)
  Dish (0.8)    
  Kitchen utensil (0.8)    
  Grille (0.79)    
  Dishware (0.79)    
  Staple food (0.75)    
  Pizzelle (0.73)    
  Waffle iron (0.72)    
  Fork (0.72)    
  Dessert (0.71)    
  Sweetness (0.7)    
  Comfort food (0.69)    
  Produce (0.69)    
  Finger food (0.66)    
  Junk food (0.66)    
  Cooking (0.64)    

**It appears that the Azure API is currently not giving back any results (only abstract_ with a confidence rating of 0.004).

Frittatapizza: England

Image recognition (IR) systems often perform poorly once in the real world. In this post, I test four of the most popular IR systems on original real world images of food from around the world, this time from England.

Key takeaway

The four IR systems performed better than in previous analyses, but still made quite a few mistakes. One meal (Frittata) was finally correctly labeled by Watson, but unfortunately with a confidence rating of only 50%. Object detection picked up on a lemon and chopsticks, but failed for the 17 other items.

Correctly predicted images 0/8
Correctly detected items 2/19
Correct labels 11/169
Potentially harmful detections/labels 10
The above table includes only detections and labels of 80%+ confidence; for lower confidence levels, see the tables further below.

Insights

Object Detection

In meal 3, Azure and Vision correctly detected a lemon, while the latter also detected chopsticks. Though detecting two out of a total of 19 items still signifies poor performance, it is a refreshing change from previous analyses to see the exact items being detected. Unfortunately, the confidence ratings for both detections were around 50%, well below an acceptable threshold for most IR systems.

Unfortunately, many faulty detections were made as well. Indian Spinach and Laksa (noodle soup) were both described as ice cream by Rekognition, while the same system also labeled a rice-based dish as a Birthday Cake. The latter is quite ironic, as the first meal actually was a cake, yet Rekognition made no mention of cake there. Rekognition also mistook a sliced lemon for an egg.

Finally, Azure and Rekognition described the Frittata as pizza with high confidence ratings (86% and 97%, respectively). While understandable from a visual perspective (they both look cheesy), Frittata and pizza are very different dishes.

Labeling

The labeling systems were much better than the detection systems and appeared to work somewhat better than in previous analyses. Though Laksa was not described by its own name, Vision labeled it Noodle soup with a confidence rating of 80%. Rekognition also labeled the Laksa as Noodle with a confidence rating of 96%. It is strange, then, that the object detection system described the Laksa as ice cream with a much lower confidence rating of 75%.

Azure and Vision labeled a sliced carrot cake as cake with fairly high confidence ratings (93% and 91%, respectively). Unfortunately, the same systems, as well as the other two, labeled the same cake, when not yet sliced, as meat, beef, steak, red meat, etc. This is interesting, as a human would clearly see that it is the same cake.

On a positive note, Azure labeled the roti (and chapati) well with fairly high confidence ratings (86%+), and Vision was able to do the same, but with lower confidence ratings. Unfortunately, Vision and Rekognition also culturally misrepresented the roti by labeling it tortilla and pita.

One meal (Frittata) was finally labeled correctly by Watson, though with a confidence rating of only 50%. This is unfortunate, as pizza, for example, was given a confidence rating of 92% for the same meal. This is a missed opportunity.

Again, labels of meat were common across most images, even though all meals were vegetarian.

Suggestions for improvement

  • Provide more specific and relevant labels for Raita, Aubergine, Indian Spinach and carrot cake;
  • Fix (cultural) misrepresentations (i.e. roti is not tortilla or pita);
  • Make sure labels of meat do not harm people of certain religions or with certain diets.
  • Check why, during object detection, wrong labels with a lower confidence rating are assigned to items while the correct label with a nearly perfect confidence rating is not (specifically for Rekognition).
  • Check why Frittata (the correct label) had a significantly lower confidence rating than pizza (specifically for Rekognition).
  • For cake, make sure to include examples of both sliced and unsliced cake, as this small difference may result in a completely different outcome.

Results

Eight images of six different meals from England were available:

  • Meal 1: Carrot Cake (Snack)
  • Meal 2: Indian Spinach with Wild Garlic and Roti (Dinner)
  • Meal 3: Malaysian/Singaporean Laksa (Dinner)
  • Meal 4: Rice, Aubergine, Mint, Cashew nuts, Raita (Dinner)
  • Meal 5: Spanish Style Frittata (Dinner)
  • Meal 6: Pasta Bake (Tomato, Basil, and Mozzarella) (Dinner)

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Carrot Cake Food (0.58) Food (0.73) Bread (0.98) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food (0.86) Food (0.98) Bread (0.98) chestnut red color (0.93)
indoor (0.61) Ingredient (0.9) Food (0.98) dish (0.9)
meat (0.58) Recipe (0.88) Steak (0.93) nutrition (0.9)
  Beef (0.85) Meat Loaf (0.83) food (0.9)
  Tableware (0.85)   reddish brown color (0.86)
  Baked goods (0.84)   food product (0.8)
  Dish (0.82)   meat loaf (0.78)
  Cuisine (0.82)   Prime Rib (0.5)
  Cooking (0.82)    
  Steak (0.81)    
  Red meat (0.79)    
  Produce (0.76)    
  Pork (0.75)    
  Meat (0.75)    
  Fried food (0.73)    
  Dessert (0.72)    
  Comfort food (0.71)    
  Baking (0.69)    
  Soil (0.65)    
  Cake (0.64)    
  Flesh (0.63)    
  Pastrami (0.63)    
  Venison (0.6)    
  Ostrich meat (0.57)    
  Kuchen (0.56)    

Object detection results.

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Carrot Cake Food (0.64) Food (0.7) Bread (0.96) /

*Green = the right prediction; Yellow= the right prediction, but too general; Red = potentially harmful prediction; White = largely not relevant

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
dessert (0.97) Food (0.98) Bread (0.96) chocolate color (1)
baking (0.96) Tableware (0.92) Food (0.96) nutrition (0.91)
baked goods (0.96) Cake (0.91) Sweets (0.91) food (0.91)
cake (0.93) Ingredient (0.9) Confectionery (0.91) meat loaf (0.87)
snack (0.92) Recipe (0.88) Cookie (0.88) dish (0.87)
chocolate cake (0.91) Dish (0.83) Biscuit (0.88) food product (0.8)
chocolate brownie (0.9) Baked goods (0.83) Dessert (0.85) dessert (0.5)
parkin (0.89) Cuisine (0.83) Chocolate (0.83) tiramisu (0.5)
snack cake (0.88) Kuchen (0.79) Meat Loaf (0.6)  
chocolate (0.87) Flourless chocolate cake (0.79) Brownie (0.58)
muscovado (0.86) Gluten (0.78)
flourless chocolate cake (0.86) Produce (0.77)
sweetness (0.86) Frozen dessert (0.75)    
food (0.8) Dessert (0.75)    
indoor (0.6) Birthday cake (0.74)    
  Cooking (0.74)    
  Sweetness (0.72)    
  Baking (0.72)    
  Lekach (0.7)    
  Icing (0.7)    
  Buttercream (0.69)    
  Torta caprese (0.67)    
  Beef (0.67)    
  Chocolate cake (0.66)    
  Torte (0.65)    

Object Detection Results:

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Indian Spinach Food (0.84) Food (0.77) Ice Cream (0.7) /
Roti Undetected Undetected Bread (0.94) /

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food (0.99) Food (0.98) Bread (0.94) chestnut color (0.88)
indoor (0.89) Tableware (0.91) Food (0.94) dish (0.77)
roti (0.89) Ingredient (0.88) Ice Cream (0.7) nutrition (0.77)
recipe (0.87) Recipe (0.88) Dessert (0.7) food (0.77)
cooking (0.86) Staple food (0.87) Cream (0.7) beige color (0.75)
cookware and bakeware (0.86) Cookware and bakeware (0.82) Creme (0.7) meat loaf (0.69)
chapati (0.86) Dish (0.8) Plant (0.7) food product (0.6)
ingredient (0.84) Cuisine (0.8) Pita (0.57) utensil (0.6)
pan (0.72) Cooking (0.8) Seasoning (0.56) Filet Mignon (0.5)
stove (0.62) Produce (0.77)    
kitchen (0.58) Vegetable (0.76)    
  Chapati (0.76)    
  Tortilla (0.75)    
  Corn tortilla (0.74)    
  Jolada rotti (0.74)    
  Bhakri (0.71)    
  Comfort food (0.7)    
  Roti (0.69)    
  Piadina (0.67)    
  Metal (0.65)    
  Kitchen utensil (0.65)    
  Condiment (0.64)    
  Finger food (0.63)    
  Meat (0.63)    
  Fast food (0.61)    

Object Detection Results:

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Laksa (Noodle Soup) Undetected Food (0.7) Ice Cream (0.75) /
Chopsticks Undetected Chopsticks (0.5) Undetected /
Lemon Lemon (0.51) Lemon (0.51) Egg (0.6) /

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_ (0.75) Food (0.98) Noodle (0.96) chestnut color (0.88)
  Tableware (0.96) Food (0.96) dish (0.77)
  Ingredient (0.91) Pasta (0.96) nutrition (0.77)
  Recipe (0.88) Plant (0.9) food (0.77)
  Soup (0.87) Vermicelli (0.82) beige color (0.75)
  Noodle (0.86) Ice Cream (0.75) meat loaf (0.69)
  Cuisine (0.86) Dessert (0.75) food product (0.6)
  Dish (0.84) Cream (0.75) utensil (0.6)
  Stew (0.84) Creme (0.75) Filet Mignon (0.5)
  Staple food (0.83) Produce (0.64)  
  Bowl (0.83) Dish (0.61)  
  Noodle soup (0.8) Meal (0.61)  
  Produce (0.79) Egg (0.6)  
  Chopsticks (0.78) Citrus Fruit (0.59)  
  Meat (0.76) Fruit (0.59)  
  Chinese noodles (0.75) Grapefruit (0.56)  
  Hot and sour soup (0.74)    
  Thukpa (0.74)    
  Vegetable (0.73)    
  Rice noodles (0.73)    
  Guk (0.73)    
  Spoon (0.73)    
  Cooking (0.73)    
  Comfort food (0.72)    
  Fast food (0.72)    

Object Detection Results:

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Rice Undetected Food Undetected /
Raita Undetected / Undetected /
Aubergine Undetected / Undetected /
Mint Undetected / Undetected /
Cashew nuts Undetected / Undetected /

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_ (0.86) Food (0.98) Dish (0.99) food (0.89)
  Tableware (0.96) Meal (0.99) nutrition (0.89)
  White rice (0.9) Food (0.99) beige color (0.85)
  Dishware (0.88) Plant (0.98) dish (0.83)
  Plate (0.88) Vegetable (0.88) risotto (0.83)
  Recipe (0.88) Platter (0.69) food product (0.8)
  Ingredient (0.87) Seasoning (0.58) emerald color (0.72)
  Fines herbes (0.83) Seasoning (0.58) plate (0.5)
  Staple food (0.82)    
  Jasmine rice (0.81)    
  Rice (0.78)    
  Cuisine (0.78)    
  Produce (0.77)    
  Garnish (0.76)    
  Steamed rice (0.76)    
  Dish (0.76)    
  Lime (0.75)    
  Meat (0.75)    
  Kitchen utensil (0.72)    
  Leaf vegetable (0.71)    
  Comfort food (0.7)    
  Vegetable (0.69)    
  Cooking (0.68)    
  Culinary art (0.68)    
  Xôi (0.67)    

Object Detection Results:

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Rice Undetected Food (0.79) Birthday Cake (0.67) /
Raita Undetected /
Aubergine Undetected /
Mint Undetected /
Cashew Nuts Undetected /

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_ (0.87) Food (0.98) Plant (0.98) dish (0.95)
  White rice (0.94) Dish (0.95) nutrition (0.95)
  Tableware (0.93) Meal (0.95) food (0.95)
  Ingredient (0.9) Food (0.95) beige color (0.95)
  Recipe (0.88) Vegetable (0.85) risotto (0.93)
  Staple food (0.87) Produce (0.78) food product (0.79)
  Rice (0.85) Seasoning (0.75) light brown color (0.74)
  Jasmine rice (0.84) Birthday Cake (0.67) Grilled Salmon (0.5)
  Dish (0.84) Dessert (0.67)  
  Cuisine (0.84) Cake (0.67)  
  Leaf vegetable (0.8)    
  Plate (0.79)    
  Basmati (0.79)    
  Glutinous rice (0.79)    
  Produce (0.79)    
  Fines herbes (0.78)    
  Steamed rice (0.76)    
  Vegetable (0.75)    
  Meat (0.75)    
  Comfort food (0.71)    
  Culinary art (0.7)    
  Dishware (0.7)    
  Coriander (0.68)    
  Rice and curry (0.61)    
  À la carte food (0.6)    

Object Detection Results:

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Frittata Pizza (0.86) Packaged Goods (0.84) Pizza (0.97) /

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
food_pizza (0.78) Food (0.95) Pizza (0.97) pale yellow color (1)
  Ingredient (0.9) Food (0.97) dish (0.95)
  Recipe (0.88) Bread (0.91) nutrition (0.95)
  Baked goods (0.84) Cake (0.76) food (0.95)
  Cuisine (0.83) Dessert (0.76) pizza (0.92)
  Rectangle (0.81) Cornbread (0.6) Sicilian pizza (0.82)
  Fast food (0.8) Lasagna (0.56) food product (0.8)
  Dish (0.8) Pasta (0.56) cheese pizza (0.7)
  Comfort food (0.71) Pie (0.56) frittata (0.5)
  Staple food (0.67) Dish (0.55)  
  Linens (0.61) Meal (0.55)  
  Side dish (0.61)    
  Pattern (0.61)    
  Junk food (0.59)    
  Metal (0.58)
  Cookware and bakeware (0.57)
  Meal (0.56)    
  Cooking (0.56)    
  Tin (0.55)    
  Italian food (0.55)    
  American food (0.55)    
  Mixture (0.51)    
  Pattern (0.51)

Object Detection Results:

GROUND TRUTH MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
Pasta Bake Food (0.65) Food (0.74) Bread (0.9) /

Labeling Results:

MICROSOFT AZURE GOOGLE VISION AMAZON REKOG. IBM WATSON
  Food (0.97) Food (0.95) chestnut red color (0.96)
  Ingredient (0.89) Plant (0.92) nutrition (0.83)
  Recipe (0.88) Bread (0.9) food (0.83)
  Cuisine (0.8) Meat Loaf (0.64) dish (0.83)
  Fried food (0.79) Lasagna (0.6) food product (0.79)
  Dish (0.79) Pasta (0.6) pasta (0.76)
  Fast food (0.79) Vegetable (0.6) Spaghetti Bolognese (0.68)
  Produce (0.76)   meat loaf (0.52)
  Meat (0.74)   lasagna (0.5)
  Comfort food (0.71)    
  Mixture (0.65)    
  Side dish (0.62)    
  Dessert (0.62)    
  Deep frying (0.57)    
  Metal (0.56)    
  Panko (0.56)    
  Soil (0.54)    
  Rock (0.54)    
  Energy bar (0.52)