In many personalized prediction applications, sharing information across entities/tasks/sources is critical for addressing data scarcity. At the same time, inherent source characteristics shape the relationship between input drivers and response variables differently for each entity. For example, given the same amount of rainfall (input driver), two different basins will produce very different streamflow (response variable) values depending on basin characteristics (e.g., soil porosity, slope, ...). Given such heterogeneity, naively merging data without accounting for source characteristics leads to poor personalized predictions. In recent years, meta-learning has become a popular framework for learning generalized global models that can be easily adapted (fine-tuned) to individual sources. In this talk, we present an extensive analysis of the source-aware modulation-based meta-learning approach. Source-aware modulation adjusts shared hidden features based on source characteristics; the adjusted hidden features are then used to compute the response variable for each source. Although this strategy yields promising prediction improvements, its applicability is limited in applications where source characteristics may be unavailable (in particular, due to privacy concerns). In this work, we show that robust personalized predictions can be achieved even in the absence of explicit source characteristics. We investigate the performance of different modulation strategies under various data-sparsity settings on two datasets, and we demonstrate that source-aware modulation (with or without known characteristics) is a highly viable alternative to traditional meta-learning methods such as model-agnostic meta-learning (MAML).
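
To make the modulation idea concrete, the following is a minimal NumPy sketch of one common form of source-aware modulation (a FiLM-style feature-wise scale and shift). All dimensions, variable names, and the linear modulation network here are illustrative assumptions, not the architecture presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions, not from the talk):
# d_h = hidden feature size, d_c = number of source characteristics.
d_h, d_c = 16, 4

# Shared encoder output for one source: hidden features computed from
# the input drivers (e.g., rainfall). Here a random placeholder.
h = rng.standard_normal(d_h)

# Source characteristics (e.g., soil porosity, slope).
c = rng.standard_normal(d_c)

# A small modulation network maps characteristics to a per-feature
# scale (gamma) and shift (beta), FiLM-style.
W_gamma = rng.standard_normal((d_h, d_c)) * 0.1
W_beta = rng.standard_normal((d_h, d_c)) * 0.1
gamma = 1.0 + W_gamma @ c  # scale initialized near 1 so modulation starts mild
beta = W_beta @ c

# Source-aware modulation: adjust the shared hidden features.
h_mod = gamma * h + beta

# A shared prediction head maps the modulated features to the
# response variable (e.g., streamflow).
w_out = rng.standard_normal(d_h)
y_hat = w_out @ h_mod
print(h_mod.shape, float(y_hat))
```

When explicit characteristics are unavailable, the vector `c` can instead be replaced by a learned per-source embedding, which is one way the modulation strategy remains applicable without known source characteristics.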