ChatGPT is a chatbot powered by artificial intelligence (AI) technology. Created by the San Francisco, California-based technology company OpenAI, ChatGPT made global headlines following its November 2022 public release. ChatGPT’s launch was widely characterized as an enormous success for AI technology in general and OpenAI in particular. The chatbot had attracted a reported 100 million users by January 2023, approximately two months after its release, leading international media sources to describe it as the fastest-growing consumer software application in the history of computing. By mid-2024, analytics sources estimated that ChatGPT had around 180.5 million users.
Users interact with ChatGPT through a prompt-based format that mimics human dialogue. ChatGPT uses generative AI, which draws on training data to create original content from user prompts with high levels of realism. Users can include additional parameters in their prompts, which allows ChatGPT to make corrections, revisions, and other alterations to its output.
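As an illustration of this prompt-and-revise workflow, the sketch below shows one way a developer might reproduce it programmatically. It assumes the official openai Python package (version 1 or later), an API key available in the environment, and access to the gpt-3.5-turbo model; the prompts themselves are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Initial prompt (hypothetical example)
messages = [{"role": "user", "content": "Write a two-sentence summary of Moore's Law."}]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first.choices[0].message.content)

# Follow-up prompt asking the model to revise its earlier output
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Rewrite that summary for a ten-year-old."})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```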
AI technology aims to mimic the human mind’s capacity to apply existing knowledge, integrate new knowledge, draw from situational observations, and learn from mistakes dynamically and in real time. In broad terms, AI works by combining large repositories of data with rapid, real-time forms of iterative processing and advanced computing algorithms. These features allow AI-powered systems to learn from established and emerging patterns and characteristics found in their data sets. In computing, iterative processing describes repeated updates to algorithm parameters, which result in gradual performance improvements. Algorithms are sets of mathematically grounded instructions that tell a computer or software application how to perform tasks or solve specific problems.
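As a simple illustration of iterative processing (a generic sketch, not the code of any particular AI system), the following Python loop repeatedly nudges a single model parameter so that its predictions move closer to hypothetical target values:

```python
# Hypothetical (input, target) pairs that roughly follow target = 2 * input
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

weight = 0.0          # single model parameter, initialized arbitrarily
learning_rate = 0.05  # size of each update step

for step in range(200):              # each pass is one iteration
    for x, target in data:
        prediction = weight * x      # current model output
        error = prediction - target  # how far the output is from the target
        weight -= learning_rate * error * x  # adjust the parameter slightly

print(f"learned weight is about {weight:.2f}")  # approaches roughly 2.0 as iterations accumulate
```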
ChatGPT is a chatbot, a specific type of software designed to simulate interactive conversations with human users. The first chatbot in computing history, named ELIZA, was created at the Massachusetts Institute of Technology (MIT) in 1966 by the German-American computer scientist Joseph Weizenbaum (1923–2008). ELIZA used techniques including substitution and pattern matching. Though simple by today’s standards, ELIZA was novel and highly advanced for its time and was reportedly capable of convincing some users that they were conversing with another person.
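The following toy Python sketch, written in the spirit of ELIZA's pattern matching and substitution rather than reproducing Weizenbaum's original script, shows how a few rules can turn a user's statement into a plausible reply:

```python
import re

# Toy rules: a pattern to match and a response template to substitute into.
rules = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def respond(user_input: str) -> str:
    for pattern, template in rules:
        match = re.match(pattern, user_input, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default reply when no pattern matches

print(respond("I need a vacation"))   # -> Why do you need a vacation?
print(respond("I am feeling tired"))  # -> How long have you been feeling tired?
```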
OpenAI began developing ChatGPT in 2018, after the company released its initial generative pre-trained transformer (GPT) neural network machine learning model. ChatGPT uses a version of the technology known as GPT-3.5, which OpenAI combined with AI-assisted natural language processing (NLP). NLP combines multiple deep machine-learning models with principles of computational linguistics to understand and interpret text-based prompts in ways that simulate human language processing. ChatGPT is the first chatbot in computing history to integrate GPT and NLP (Source: EBSCO, 2024).
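A very basic NLP preprocessing step, shown below as an illustrative sketch rather than ChatGPT's actual pipeline, is to normalize a text prompt and break it into tokens that a model can work with:

```python
import re
from collections import Counter

# Illustrative only: normalize a prompt, split it into word tokens,
# and count token frequencies (a simple bag-of-words representation).
prompt = "Explain how chatbots interpret text-based prompts, please!"

tokens = re.findall(r"[a-z']+", prompt.lower())   # lowercase and extract word tokens
counts = Counter(tokens)

print(tokens)
print(counts.most_common(3))
```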
Artificial intelligence is the design, implementation, and use of programs, machines, and systems that exhibit human intelligence. Its most important activities are knowledge representation, reasoning, and learning. Artificial intelligence encompasses a number of important subareas, including voice recognition, image identification, natural language processing, expert systems, neural networks, planning, robotics, and intelligent agents. Several important programming techniques have been enhanced by artificial intelligence researchers, including classical search, probabilistic search, and logic programming.
Artificial intelligence is a broad field of study, and definitions of the field vary by discipline. For computer scientists, artificial intelligence refers to the development of programs that exhibit intelligent behavior. The programs can engage in intelligent planning (timing traffic lights), translate natural languages (converting a Chinese website into English), act like an expert (selecting the best wine for dinner), or perform many other tasks. For engineers, artificial intelligence refers to building machines that perform actions often done by humans. The machines can be simple, like a computer vision system embedded in an ATM (automated teller machine). They can also be more complex, such as a robotic rover sent to Mars. They can be extremely complex, for example, an automated factory that builds an exercise machine with little human intervention. For cognitive scientists, artificial intelligence refers to building models of human intelligence to better understand human behavior. In the early days of artificial intelligence, most models of human intelligence were symbolic and closely related to cognitive psychology and philosophy. The basic idea was that regions of the brain perform complex reasoning by processing symbols. Later, many models of human cognition were developed to mirror the operation of the brain as an electrochemical computer. This line of work started with the simple perceptron, an artificial neural network introduced by Frank Rosenblatt in 1958 and critically analyzed by Marvin Minsky and Seymour Papert in 1969. These efforts graduated to the backpropagation algorithm described by David E. Rumelhart and James L. McClelland in 1986. The culmination was a large number of supervised and unsupervised learning algorithms (Source: EBSCO, 2024).
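As a minimal illustration of the perceptron idea mentioned above (a sketch only, not any historical implementation), the following Python snippet applies the classic perceptron learning rule to the logical AND function:

```python
# Inputs and expected outputs for logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0   # step activation

for epoch in range(20):                  # repeat the perceptron learning rule
    for x, target in samples:
        error = target - predict(x)      # +1, 0, or -1
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in samples])  # -> [0, 0, 0, 1]
```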
Generative artificial intelligence is a type of artificial intelligence (AI) technology that can make content such as audio, images, text, and videos. It includes systems such as ChatGPT, a chatbot that can produce essays, poetry, and other content requested by a user, and DALL-E, which generates art. AI emerged gradually over more than half a century. Generative AI is a type of machine learning, which involves using data and algorithms to imitate how humans learn and become more accurate. While machine learning can perceive and sort data, generative AI can take the next step and create something new based on the information it has. However, building generative AI models remains expensive, and only a few well-financed companies, including OpenAI, DeepMind, and Meta, have done so.
British computer scientist, philosopher, and polymath Alan Turing posed a question in his 1950 paper “Computing Machinery and Intelligence,” which appeared in the journal Mind: “Can machines think?” Turing suggested a test that he called the Imitation Game but that is now known as the Turing Test. A questioner poses questions to a human and a machine (computer) without knowing which is answering. If the questioner cannot tell which answers came from the human and which from the machine, the machine passes the test and can be said to think. In the paper, Turing predicted that by the twenty-first century humans would usually be unable to tell machines and humans apart in this respect. However, when Turing wrote his paper, computers had several limitations. Notably, they could execute commands but could not store them. Computers were also prohibitively expensive, and only government facilities, large technology companies, and major universities had access to them.
The first AI program, Logic Theorist, was created in 1956. It was written by Allen Newell, Cliff Shaw, and Herbert A. Simon to perform automated reasoning, specifically to prove theorems from the three volumes of Principia Mathematica (1910–1913) by Bertrand Russell and Alfred North Whitehead. The RAND (Research and Development) Corporation funded the development of Logic Theorist, which was presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). One of the hosts and organizers, John McCarthy, coined the term artificial intelligence at this conference. This inspired many in the field of computer science to focus on improving computers and programming. Over the next two decades, computers were developed to have better memory and speed and became more affordable, while programmers improved functionality. For example, Newell and Simon developed the General Problem Solver while others improved computers’ ability to interpret spoken language (natural language processing). The Defense Advanced Research Projects Agency (DARPA), a US Department of Defense agency, was sufficiently impressed to fund AI research. However, computers still could not store and process information quickly enough to realize true AI, and funding dried up in the 1970s.
The development of AI and other computing applications largely tracked Moore’s Law, which is not a law at all but an observation that the engineer Gordon Moore made in 1965. Moore noted that the number of transistors in an integrated circuit or microchip had doubled roughly every two years and predicted that this would continue for at least another decade. Although the doubling went on for decades, through the latter half of the twentieth century, computer scientists were repeatedly limited by the storage and processing speed of the computers then available.
In the twenty-first century, AI became commonplace. Algorithms helped companies such as Netflix suggest movies and shows that customers might enjoy or items that Amazon shoppers might wish to purchase. Self-driving cars arrived on the market. Companies, organizations, and individuals used data mining in myriad ways, such as marketing and making political decisions.
AI has several subfields, including deep learning, machine learning, and neural networks. Deep learning is a subfield of neural networks, which is a subfield of machine learning. A deep learning model is a neural network with at least three layers. Neural networks try to function in the way that the human brain does so they can learn from available data. Additional layers help to increase accuracy. Deep learning is commonly used to improve automation, such as by assessing processes to increase efficiency. This technology is used by digital assistants, self-driving vehicles, and voice-enabled devices such as television remotes. Deep learning can use unstructured data such as images and texts. For example, if tasked with organizing images of vehicles, a deep learning algorithm would decide what features are most important to consider, such as size. Machine learning uses structured data, or data that it can organize into a format that it can use to make predictions. In other words, a person would decide what features are most important and create a hierarchy for the machine learning algorithm to use. A deep learning algorithm evaluates its own accuracy and, if presented with a new photo, can more precisely identify the subject (Source: Salem Press Encyclopedia of Science, 2025).
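To make the idea of layers concrete, here is a minimal sketch (illustrative only, with random, untrained weights) of a small three-layer neural network performing a single forward pass in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, 0.2, 0.9])            # input layer: 3 features

W1 = rng.normal(size=(3, 4))             # weights: input -> hidden layer (4 units)
W2 = rng.normal(size=(4, 2))             # weights: hidden -> output layer (2 units)

hidden = np.maximum(0, x @ W1)           # hidden layer with ReLU activation
output = hidden @ W2                     # output layer (raw scores)

print("hidden layer:", hidden)
print("output layer:", output)
```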
Understanding the Framework and Its Frames
The Framework offered here is called a framework intentionally because it is based on a cluster of interconnected core concepts, with flexible options for implementation, rather than on a set of standards or learning outcomes, or any prescriptive enumeration of skills. At the heart of this Framework are conceptual understandings that organize many other concepts and ideas about information, research, and scholarship into a coherent whole. These conceptual understandings are informed by the work of Wiggins and McTighe, which focuses on essential concepts and questions in developing curricula, and also by threshold concepts, which are those ideas in any discipline that are passageways or portals to enlarged understanding or ways of thinking and practicing within that discipline. This Framework draws upon an ongoing Delphi Study that has identified several threshold concepts in information literacy, but the Framework has been molded using fresh ideas and emphases for the threshold concepts. Two added elements illustrate important learning goals related to those concepts: knowledge practices, which are demonstrations of ways in which learners can increase their understanding of these information literacy concepts, and dispositions, which describe ways in which to address the affective, attitudinal, or valuing dimension of learning. The Framework is organized into six frames, each consisting of a concept central to information literacy, a set of knowledge practices, and a set of dispositions.
ACRL FRAMES
Authority Is Constructed and Contextual
Information resources reflect their creators’ expertise and credibility, and are evaluated based on the information need and the context in which the information will be used. Authority is constructed in that various communities may recognize different types of authority. It is contextual in that the information need may help to determine the level of authority required.
Information Creation as a Process
Information in any format is produced to convey a message and is shared via a selected delivery method. The iterative processes of researching, creating, revising, and disseminating information vary, and the resulting product reflects these differences.
Information Has Value
Information possesses several dimensions of value, including as a commodity, as a means of education, as a means to influence, and as a means of negotiating and understanding the world. Legal and socioeconomic interests influence information production and dissemination.
Research as Inquiry
Research is iterative and depends upon asking increasingly complex or new questions whose answers in turn develop additional questions or lines of inquiry in any field.
Scholarship as Conversation
Communities of scholars, researchers, or professionals engage in sustained discourse with new insights and discoveries occurring over time as a result of varied perspectives and interpretations.
Searching as Strategic Exploration
Searching for information is often nonlinear and iterative, requiring the evaluation of a range of information sources and the mental flexibility to pursue alternate avenues as new understanding develops.
Source: Association of College and Research Libraries, 2015