Browsing by Subject "Information theory"
Item
A computational investigation of being in the world (2012-10)
Srivastava, Nisheeth

Is there a rational explanation for human behavior? Or is it fundamentally idiosyncratic and beyond the ability of science to predict? In everyday life, we predict the preferences of other people relatively well, and we function in a society that is strongly predicated on our ability to do so. Theoretical efforts at predicting how people form preferences, however, have met with repeated failure, resulting in widespread pessimism about the possibility of a universal rational explanation for human behavior. In this thesis, we provide precisely such an explanation. We show that the errors plaguing existing systems of preference representation are a direct result of the mystery surrounding the actual act of formation of preferences, and that once this mechanism is clarified, a very large number of paradoxical and contradictory empirical results from the behavioral economics literature are theoretically reconciled.

Our investigations lead us to believe that a combination of two simple natural principles is sufficient to both predict and explain why humans make the choices they do: one, that humans always seek to learn what to do in the most statistically efficient manner possible, and two, that this quest for understanding is constrained in remarkably systematic ways by a competing search for choices that can be made with minimal cognitive effort. We find, therefore, that the rational goal that best describes human choice behavior is minimizing the cognitive effort required to make a decision. In other words, this dissertation proposes that rational human action is governed by a universal explanatory principle which does not match traditional expectations of utility maximization: the principle of least cognitive effort.

This redefinition of rationality has far-reaching implications. To better understand them, we constructed an information-theoretic description of a meta-cognitive agent engaging with its environment, which allowed us to formulate computationally tractable intrinsically motivated agents. We report simulation studies confirming that the behavior of cognitively efficient agents provides a unified explanation for a large number of behavioral biases identified by behavioral and experimental economists in human subjects, as well as for a number of variations in subjects' perception of risk observed in neuroeconomics studies. We further show, using mathematical arguments, that this construction is in consonance with the existing reinforcement learning literature and, in fact, subsumes multiple strands of current research that broaden the definition of reward in reinforcement learning. Finally, we extend our analysis to social behavior among populations of agents and shed new light on paradoxes in game theory and theories of social interaction, demonstrating an amoral basis for being good: an entirely self-interested (and non-evolutionary) basis for cooperative and altruistic behavior.

In short, this thesis proposes a quantifiable description of agents "being in the world", detailing universal principles that explain how and why beings develop the preferences they do, given the structure of the world they inhabit. Our results provide a unification of explanations for several biological and behavioral phenomena spanning economics, psychology, neurobiology, cognitive science, artificial intelligence, and metaphysics.
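The "least cognitive effort" trade-off described in this abstract has a standard formalization in the information-theoretic bounded-rationality literature: penalize deviation from a default (habitual) choice policy by its KL divergence, traded off against expected utility. The sketch below is illustrative only and is not taken from the thesis; the policy form, the trade-off parameter beta, and the example numbers are all assumptions.

```python
import numpy as np

def least_effort_policy(utilities, prior, beta):
    """Choice distribution trading expected utility against cognitive
    effort, modeled here as KL divergence from a default policy:
        max_p  sum_a p(a) U(a) - (1/beta) * KL(p || prior)
    has the closed-form solution p(a) proportional to
    prior(a) * exp(beta * U(a)).
    """
    logits = np.log(prior) + beta * np.asarray(utilities, dtype=float)
    logits -= logits.max()              # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Hypothetical example: three options, habitual bias toward option 0.
prior = np.array([0.7, 0.2, 0.1])
utilities = [1.0, 2.0, 1.5]
for beta in (0.1, 1.0, 10.0):           # small beta: effort dominates
    print(beta, least_effort_policy(utilities, prior, beta))
```

As beta shrinks, effort dominates and the agent stays near its habitual prior; as beta grows, choices approach pure utility maximization. This is the qualitative regime the abstract describes, though the thesis's exact construction may differ.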
Item
Information theory of random trees induced by stochastic grammars (2012-05)
Oh, Sang-youn

Previous research has addressed the information theory of binary random rooted tree models, in which every nonleaf vertex has exactly two children. Let α be an integer parameter greater than 2. The main contribution of this thesis is to extend the information theory results for binary random tree models to α-random tree models: random tree models in which each nonleaf vertex is allowed up to α children. An α-random tree model consists of a sequence {Tn : n = 2, 3, ...}, in which each Tn is an α-random tree having n leaves, together with a context-free grammar describing how each Tn is randomly generated. In particular, balanced α-random tree models are examined, meaning that the context-free grammars employed satisfy a type of balance condition. We obtain three types of results for α-random tree models, described as follows.

First, given a general α-random tree model {Tn}, the entropy H(Tn) of each tree Tn is expressed as a recursive formula relating H(Tn) to the entropies of its subtrees. This recursion can be explicitly solved for some random tree models, and we show how this can be done for the binary random search tree model.

Our second set of results, which are our main results, concern the balanced α-random tree model {Tn}. Defining the entropy rate of Tn as the ratio H(Tn)/n, we examine the asymptotic behavior of the entropy rate as n grows without bound. This behavior is described via a continuous, non-constant periodic function Pα : [0, ∞) → [0, ∞) of period one, satisfying H(Tn)/n = Pα(log_α n) for every n. The graph of Pα is a fractal, and its fractal properties, including its fractal dimension, are investigated. We develop both a direct and an indirect way of describing Pα. The direct way expresses the graph of Pα as the attractor of an explicitly given iterated function system. The indirect way obtains Pα via a change of variable on a hierarchical entropy function discovered by J. Kieffer.

Our third and final set of results concerns the development of compression methods for some α-random tree models {Tn}. In the case in which each random tree Tn is not equiprobable over the ensemble of trees in which it takes its values, we develop an encoding method by which Tn is uniquely represented by a variable-length binary codeword whose expected length is roughly equal to the entropy H(Tn). In the case of the balanced α-random tree model, each Tn is equiprobable, which allows us to develop encoding methods by which each Tn is uniquely represented by a fixed-length binary codeword of minimal length ⌈H(Tn)⌉. One of the encoding methods discussed employs a one-to-one mapping of the ensemble of Tn into one of the hierarchical type classes discovered by J. Kieffer, allowing Kieffer's method for encoding hierarchical type classes to be adapted into an encoding method for balanced α-random trees.
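The first result, an entropy recursion over subtrees, can be illustrated concretely. Below is a minimal sketch for a binary model with n leaves, assuming the root splits its leaves as (k, n-k) with k drawn uniformly from {1, ..., n-1} and the two subtrees generated independently; the uniform split is the standard binary-search-tree-like assumption, not a detail given in the abstract.

```python
import math

def tree_entropies(n_max):
    """H[n] = entropy (bits) of an illustrative binary random tree with
    n leaves: the root splits its leaves into (k, n-k) with k uniform
    on {1, ..., n-1}, subtrees independent given the split. Then
        H[n] = log2(n-1) + (2/(n-1)) * sum_{k=1}^{n-1} H[k],
    with H[1] = 0. Computed bottom-up to avoid deep recursion.
    """
    H = [0.0] * (n_max + 1)
    running = 0.0                       # running sum of H[1..n-1]
    for n in range(2, n_max + 1):
        running += H[n - 1]
        H[n] = math.log2(n - 1) + 2.0 * running / (n - 1)
    return H

H = tree_entropies(4096)
for n in (2, 3, 4, 64, 1024, 4096):
    print(f"n={n:5d}  H(Tn)={H[n]:10.3f}  H(Tn)/n={H[n]/n:.4f}")
```

Under this split distribution the printed entropy rate H(Tn)/n settles toward a constant (roughly 1.74 bits per leaf, matching what is reported for random binary search tree shapes), in contrast to the balanced models of the thesis, whose entropy rate oscillates along the periodic function Pα.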
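The fixed-length case of the third result also has a simple generic illustration: when the N members of an ensemble are equiprobable, H = log2(N), so ranking the members yields a fixed-length code of ⌈H⌉ bits. The sketch below uses toy parenthesis strings as stand-ins for trees; the thesis's actual method maps into Kieffer's hierarchical type classes, which this ranking code does not implement.

```python
import math

def fixed_length_code(ensemble):
    """Enumerative (ranking) code for an equiprobable ensemble: with N
    equally likely members, H = log2(N), and a fixed-length binary word
    of ceil(log2 N) = ceil(H) bits per member is minimal."""
    members = sorted(ensemble)
    width = max(1, math.ceil(math.log2(len(members))))
    rank = {x: i for i, x in enumerate(members)}

    def encode(tree):
        return format(rank[tree], f"0{width}b")

    def decode(word):
        return members[int(word, 2)]

    return encode, decode

# Toy ensemble: the two equally likely shapes of a 3-leaf binary tree,
# written as nested-parenthesis strings standing in for actual trees.
shapes = ["((..).)", "(.(..))"]
enc, dec = fixed_length_code(shapes)
for s in shapes:
    print(s, "->", enc(s), "->", dec(enc(s)))
```

Here N = 2, so each tree gets a single bit, which equals ⌈H(Tn)⌉ = ⌈log2 2⌉; the same ranking idea scales to any finite equiprobable ensemble once an enumeration of its members is available.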