Image search engines typically return a very large pool of results that the engine considers relevant, a pool so large it may be regarded as effectively inexhaustible. Although the images are presented as relevant, many of them are in fact irrelevant, and the relevant images are distributed non-uniformly over the returned results. Predicting the relevance of individual images is difficult: relevance takes only binary values and therefore oscillates between relevance and irrelevance with little discernible trend. Enlarging the range of possible values is necessary to make prediction feasible, and it is advantageous to accumulate the aggregate relevance of larger groups of images in a sequential manner. Our approach partitions the random binary sequence into non-overlapping groups and converts it into a form that is more amenable to prediction. In this paper, we present a model for predicting the behaviour of Image Search Engines (ISEs). We develop an empirical model and design a set of benchmark queries to measure system performance, and we establish a linear model that gives good, robust predictions of search performance. In addition, the results of this research have a direct bearing on search engine design: they can provide informative guidance to users on the retrieval of relevant images, and they allow users to optimize their strategies for the recovery and discovery of images.
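The grouping-and-accumulation idea described above can be illustrated with a short sketch. The function names, the fixed group size, and the ordinary least-squares fit are illustrative assumptions, not the paper's actual implementation: a binary relevance sequence is partitioned into non-overlapping groups, the aggregate relevance is accumulated group by group, and a line is fitted to the resulting near-monotone sequence.

```python
def grouped_cumulative_relevance(relevance, group_size):
    """Partition a binary relevance sequence (0/1 per returned image)
    into non-overlapping groups of `group_size` and return the
    cumulative relevance count after each complete group."""
    totals, running = [], 0
    usable = len(relevance) - len(relevance) % group_size  # drop the partial tail group
    for start in range(0, usable, group_size):
        running += sum(relevance[start:start + group_size])
        totals.append(running)
    return totals

def fit_line(ys):
    """Ordinary least-squares fit of y = a*x + b over x = 1..n,
    standing in for the linear prediction model."""
    n = len(ys)
    xs = range(1, n + 1)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x
```

Whereas the raw 0/1 sequence oscillates with no usable trend, the grouped cumulative totals grow smoothly, which is what makes a simple linear fit workable: for example, `grouped_cumulative_relevance([1, 0, 1, 1, 0, 0, 1, 0], 4)` yields `[3, 4]`, a wider-ranged, monotone sequence.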