Automatic image annotation has emerged as an important research topic due to its potential applications in both image understanding and web image search. This paper presents a model that integrates visual topics and regional contexts for automatic image annotation. Regional contexts model the relationships between the regions of an image, while visual topics provide the global distribution of topics over the image. Previous image annotation methods have neglected the relationships between regions in an image, even though these regions are what convey the image's semantics; modeling the relationships between them therefore helps annotate images more accurately. Regional contexts and visual topics are learned by PLSA (Probabilistic
Latent Semantic Analysis) from the training data. The proposed model combines these two types of information through an MCDM (Multi-Criteria Decision Making) approach based on the WSM (Weighted Sum Method). Experiments conducted on the 5k Corel dataset demonstrate the effectiveness of the proposed
model.
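
To make the combination step concrete, the following is a minimal sketch of the weighted-sum scoring rule that a WSM-based MCDM approach suggests; the weight $\lambda$ and the two conditional distributions are illustrative assumptions, not the paper's exact formulation:

\[
\mathrm{score}(w \mid I) \;=\; \lambda \, P_{\text{topic}}(w \mid I) \;+\; (1 - \lambda)\, P_{\text{context}}(w \mid I), \qquad 0 \le \lambda \le 1,
\]

where $P_{\text{topic}}(w \mid I)$ denotes the probability of annotation word $w$ given the visual topics of image $I$, $P_{\text{context}}(w \mid I)$ denotes its probability given the regional contexts, and the words with the highest scores are selected as the image's annotations.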