Browsing by Author "Salecha, Aadesh"
Now showing 1 - 2 of 2
Item: Emergence and Stability of Self-Evolved Cooperative Strategies using Stochastic Machines (2019-12)
Authors: Kuan, Jin H; Salecha, Aadesh

To investigate the origin of cooperative behaviors, we developed an evolutionary model of sequential strategies and tested it with computer simulations. The sequential strategies, represented by stochastic machines, were evaluated through games of iterated Prisoner's Dilemma (PD) against other agents in the population, allowing bootstrapping evolution to occur. We expanded on past work by proposing a novel mechanism for mutating stochastic Moore machines that introduces a greater spectrum of evolvable machines. These machines were then subjected to various selection mechanisms, and the resulting evolved strategies were analyzed. We found that cooperation can indeed emerge spontaneously in evolving populations playing iterated PD, specifically in the form of a trigger strategy. In addition, this strategy was found to be resilient to mutation and is therefore evolutionarily stable. To verify the validity of the proposed mutation mechanism, we also evolved the machines to play other 2x2 games, such as Chicken and Stag Hunt, and obtained interesting strategies that demonstrate a degree of Pareto optimality.

Item: Empirical Study of the Spread of Misinformation: A Big Data Approach (2021-05)
Author: Salecha, Aadesh

Social media platforms like Twitter and Facebook have made the world a more connected place and have become indispensable parts of our lives. However, these networks have also become conducive environments for the massive diffusion of misinformation. These platforms generate huge volumes of data, a sizable portion of which consists of what has popularly come to be known as fake news. The sites are also plagued with automated bots, which serve as catalysts for the dispersion of misinformation while also making it harder for researchers to study misinformation by exponentially increasing the volume of data generated.
This thesis is part of a larger effort by researchers to advance our understanding of the spread of misinformation and its characteristics. In this thesis, I first outline an approach we used to build a massive fake news dataset rich enough to capture complex behavioural patterns. Next, I describe an approach we used to build machine learning models that detect false-information spreaders on Twitter, and I present an empirical validation of our models, which yield accuracies of over 90%. Finally, I propose a pipeline for filtering bots out of these datasets by building on existing state-of-the-art bot detection techniques. I also present a comprehensive analysis of the effects these bots have on fake news spreader detection. I conclude that a bot filtration phase is essential to ensuring optimal performance of models that predict likely spreaders.
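The bot filtration phase described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual pipeline: the `User` fields, the `bot_score` source (e.g. an existing detector such as Botometer), and the 0.7 threshold are all hypothetical assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical user record; field names are illustrative, not from the thesis.
@dataclass
class User:
    handle: str
    tweets_per_day: float
    bot_score: float  # assumed output of an existing bot detector, in [0, 1]

def filter_bots(users, threshold=0.7):
    """Drop accounts whose bot score exceeds the threshold, keeping only
    likely-human accounts for downstream spreader modelling."""
    return [u for u in users if u.bot_score <= threshold]

users = [
    User("alice", 4.2, 0.10),
    User("newsbot42", 310.0, 0.95),
    User("bob", 9.1, 0.35),
]
humans = filter_bots(users)
print([u.handle for u in humans])  # ['alice', 'bob']
```

In a real pipeline, the filtered list would then feed the spreader-detection classifier; the point of the sketch is only the ordering, filtering suspected bots before training, which the thesis argues is essential for model performance.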