Systematic Review of AI Literacy Frameworks and Assessments
Abstract
As AI technologies increasingly permeate society, there is a growing need to understand and evaluate AI literacy across diverse populations. How effectively we instill AI literacy in individuals marks the thin line between responsible, effective use of AI and irresponsible, ineffective use. To inform this work, in this paper I conduct a comprehensive systematic review of theoretical frameworks, models, and assessment tools related to AI literacy. This review synthesized the existing literature on how AI literacy is conceptualized and measured, addressing a critical gap in our understanding of how AI literacy is defined, taught, and assessed. Analysis of the sixteen studies that reached the final stage of screening revealed common themes: the framework elements that assessments adhere to, the relative popularity of the assessment modalities used, and the ways in which frameworks and assessments built on earlier frameworks and assessments, respectively. Frameworks incorporating cognitive, evaluative, and sociocultural components were the most widely recognized, while most assessments used questionnaire items combined with expert evaluation. Studies geared toward academic applications implemented AI-assisted programs that teach students where AI belongs in their educational pursuits.
Suggested Citation
Wondeson, Sador. (2025). Systematic Review of AI Literacy Frameworks and Assessments. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/275128.
