Emoji have become commonplace in nearly all forms of text-based computer-mediated communication, but as picture characters with nuanced details, emoji may be open to interpretation. Emoji also render differently on different viewing platforms (e.g., Apple’s iPhone vs. Google’s Nexus phone), potentially leading to communication errors. It is unknown whether people are aware that emoji have multiple renderings, or whether they would change their emoji-bearing messages if they could see how these messages render on recipients’ devices. In this thesis, I identify the risks of miscommunicating with emoji. Drawing from psycholinguistic theory, my collaborators and I developed a measure that demonstrates the potential for misconstrual of emoji due to variation in people’s interpretations. I also investigated whether the presence of accompanying text would reduce this potential, finding little to no support for this hypothesis. Finally, I explored the real-world impact of the multi-rendering nature of emoji, finding that a substantial proportion of people are unaware that emoji have multiple renderings and that, in many instances of emoji use, increased visibility of different emoji renderings would affect communication decisions. To provide this visibility, I developed emoji rendering software that simulates how a given emoji-bearing text renders on various platforms, including when a platform does not support the given emoji. Altogether, this work characterizes the risks of miscommunicating with emoji, and it also informs the design and development of technology to, at least partially, mitigate these problems. The data I produced and the emoji rendering software I built can be integrated into new tools for communication applications to prevent regrettable exchanges caused by ambiguous emoji or by emoji rendering differences across platforms.