Recent years have seen rapid advances in processor, storage, and network technology, giving rise to technologies such as cloud computing. Despite these advances, the performance of clouds and cloud services continues to present challenges. Access latency to information on the cloud due to variable bandwidth remains an active research problem, especially in environments where mobile devices must stay connected to the cloud. One way to smooth out fluctuations in available bandwidth is to retrieve data anticipatorily and to cache data that is likely to be requested later. The proposed anticipatory retrieval and caching system takes this path. It offers a better experience to mobile users who are connected to a cloud and access its datastore frequently. The proposed method aims to provide ubiquitous access to data on clouds regardless of bandwidth levels. This is done by locally caching all one-hop related item-sets $I_1, I_2, \ldots, I_k$ that semantically belong to (or are semantically linked to) a particular item-set $I'$. Caching is performed asynchronously in the background during periods of high bandwidth. The proposed algorithms assess the semantic relevance of the data using semantic distances together with user priorities and bandwidth availability, and then prioritize anticipatory downloads from the cloud to the device's local storage based on the resulting relevance quotient.
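The ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the weighting formula and all names (`ItemSet`, `relevance_quotient`, `prefetch_order`) are assumptions introduced here, standing in for whatever semantic-distance and priority measures the system actually uses.

```python
from dataclasses import dataclass

@dataclass
class ItemSet:
    name: str
    semantic_distance: float  # one-hop semantic distance from the current item-set I'
    user_priority: float      # higher = more important to this user

def relevance_quotient(item: ItemSet, bandwidth: float) -> float:
    # Assumed form: semantically closer and higher-priority item-sets score
    # higher; plentiful bandwidth scales up how aggressively we prefetch.
    return (item.user_priority / (1.0 + item.semantic_distance)) * bandwidth

def prefetch_order(candidates: list[ItemSet], bandwidth: float) -> list[str]:
    # Download the most relevant one-hop item-sets first.
    ranked = sorted(candidates,
                    key=lambda i: relevance_quotient(i, bandwidth),
                    reverse=True)
    return [i.name for i in ranked]

if __name__ == "__main__":
    items = [ItemSet("I1", 0.2, 1.0),
             ItemSet("I2", 0.9, 0.5),
             ItemSet("I3", 0.1, 0.8)]
    # Higher relevance quotient -> fetched earlier during high bandwidth.
    print(prefetch_order(items, bandwidth=5.0))
```

In a real deployment the background cacher would re-rank candidates whenever the measured bandwidth changes, prefetching only while bandwidth stays above a threshold.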