Authors: Pengkun Jiao, Xinlan Wu, Bin Zhu, Jingjing Chen, Chong-Wah Ngo, Yugang Jiang
Abstract: Large Multi-modal Models (LMMs) have significantly advanced a variety of
vision-language tasks. The scalability and availability of high-quality
training data play a pivotal role in the success of LMMs. In the realm of food,
while comprehensive food datasets such as Recipe1M offer an abundance of
ingredient and recipe information, they often fall short of providing ample
data for nutritional analysis. The Recipe1M+ dataset, despite offering a subset
for nutritional evaluation, is limited in both the scale and the accuracy of its
nutrition information. To bridge this gap, we introduce Uni-Food, a unified food dataset
that comprises over 100,000 images with various food labels, including
categories, ingredients, recipes, and ingredient-level nutritional information.
Uni-Food is designed to provide a more holistic approach to food data analysis,
thereby enhancing the performance and capabilities of LMMs in this domain. To
mitigate the conflicts arising from multi-task supervision during fine-tuning
of LMMs, we introduce a novel Linear Rectification Mixture of Diverse Experts
(RoDE) approach. RoDE employs a diverse array of experts to handle tasks of
varying complexity, coordinating the trainable parameters by allocating more
parameters to more complex tasks and fewer to simpler ones. RoDE also applies a
linear rectification union to refine the router's functionality, making sparse
task allocation more efficient. These design choices make RoDE GPU-memory
efficient and easy to optimize. Our
experimental results validate the effectiveness of our proposed approach in
addressing the inherent challenges of food-related multitasking.
Source: http://arxiv.org/abs/2407.12730v1
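
To make the routing idea concrete, below is a minimal PyTorch sketch of how a mixture of diverse-capacity LoRA-style experts with a linearly rectified (ReLU) router could be wired up, so that experts receiving non-positive router scores are dropped and allocation stays sparse. All names, ranks, and dimensions here (LoRAExpert, RoDELayer, ranks=(4, 8, 16), dim=64) are hypothetical illustrations under the abstract's description, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LoRAExpert(nn.Module):
    """Low-rank adapter (down-project then up-project) with a configurable rank,
    so different experts can carry different parameter budgets."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # d -> r
        self.up = nn.Linear(rank, dim, bias=False)    # r -> d
        nn.init.zeros_(self.up.weight)                # start as a zero residual

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class RoDELayer(nn.Module):
    """Mixture of diverse-rank experts gated by a ReLU-rectified router:
    negative router scores are zeroed, yielding sparse expert activation."""

    def __init__(self, dim: int, ranks=(4, 8, 16)):
        super().__init__()
        self.experts = nn.ModuleList(LoRAExpert(dim, r) for r in ranks)
        self.router = nn.Linear(dim, len(ranks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Linear rectification: ReLU drops experts with non-positive scores,
        # so only a sparse subset contributes to each token.
        gates = torch.relu(self.router(x))            # (batch, seq, n_experts)
        out = x
        for i, expert in enumerate(self.experts):
            out = out + gates[..., i:i + 1] * expert(x)
        return out


if __name__ == "__main__":
    layer = RoDELayer(dim=64)
    tokens = torch.randn(2, 10, 64)
    print(layer(tokens).shape)  # torch.Size([2, 10, 64])
```

Because each expert is a residual low-rank adapter, the base model's weights stay frozen and only the small adapters and router are trained, which is consistent with the GPU-memory efficiency the abstract claims; the exact rank schedule per task is a design choice the paper details further.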