AI Glossary (Plain English)

This page is a practical dictionary for reading AI posts without getting lost. Each entry includes a simple meaning, common synonyms, and a tiny example.

How to use this

If a term feels “hand-wavy,” check here first. If you want the bigger picture timeline, jump to AI History & Timeline.

Models & Training

Model
The “machine” that maps inputs to outputs (text in → text out, image in → labels out, etc.).
Also called: system, network, predictor
Example: A model that summarizes articles into 5 bullets.
Training
The process of adjusting the model so it performs better on a task using lots of examples.
Also called: learning, fitting
Example: Showing millions of sentences so a language model learns patterns.
Fine-tuning
Extra training on a smaller, specific dataset to specialize the model.
Also called: adaptation, specialization
Example: A general chatbot fine-tuned for customer support tone.
Neural network
A model built from layers of simple math units; good at learning complex patterns.
Also called: NN, deep net (when large)
Example: A vision model recognizing objects in photos.
Transformer
A neural architecture that uses attention to handle long text and relationships between words.
Also called: attention model
Example: Many modern language models are transformer-based.
LLM (Large Language Model)
A model trained on lots of text to predict and generate language (and often code).
Also called: language model, foundation model (sometimes)
Example: Drafting an email, summarizing a document, writing code.

Data & Evaluation

Dataset
A collection of examples used to train or test a model.
Also called: corpus (text), training set
Example: Millions of labeled images for classification.
Benchmark
A standardized test to compare models on the same tasks.
Also called: eval, leaderboard test
Example: A Q&A set used to score accuracy.
Hallucination
When a model produces confident-sounding but incorrect or unsupported output.
Also called: fabrication, made-up detail
Example: Inventing a quote or citation that doesn’t exist.
Overfitting
When a model learns training data too closely and performs worse on new data.
Also called: memorizing
Example: Great scores on practice tests, poor scores on the real exam.
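The exam analogy can be made concrete with a tiny sketch (plain NumPy, no ML library, all numbers invented for illustration): a wiggly degree-4 polynomial hits five noisy training points exactly, but predicts a fresh point worse than a simple straight line.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five noisy training points scattered around the line y = 2x
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = 2 * x_train + rng.normal(0, 0.5, size=5)

# A degree-4 polynomial has 5 parameters, so it can pass through
# all 5 points exactly -- it "memorizes" the noise.
wiggly = np.polyfit(x_train, y_train, deg=4)
# A straight line can only approximate them.
line = np.polyfit(x_train, y_train, deg=1)

# Training error: how far each fit is from the points it saw.
train_err_wiggly = np.abs(np.polyval(wiggly, x_train) - y_train).mean()
train_err_line = np.abs(np.polyval(line, x_train) - y_train).mean()

# The "real exam": a new point (x=5, y=10) the fits never saw.
x_new, y_new = 5.0, 10.0
test_err_wiggly = abs(np.polyval(wiggly, x_new) - y_new)
test_err_line = abs(np.polyval(line, x_new) - y_new)
```

The wiggly fit's training error is essentially zero, yet its prediction at the new point is worse than the straight line's: great on the practice test, poor on the exam.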

Prompting

Prompt
The input text (and sometimes images/files) you give the model.
Also called: instruction, request
Example: “Summarize this article in 5 bullets.”
System prompt
A higher-priority instruction that sets rules, style, or constraints.
Also called: system message, policy layer
Example: “Be concise. Cite sources. Don’t guess.”
RAG (Retrieval-Augmented Generation)
A setup where the model fetches relevant documents first, then answers using them.
Also called: grounded generation, “LLM + search”
Example: Answering from your database or PDFs, not pure memory.
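The retrieve-then-answer flow can be sketched in a few lines. This is a toy version: the documents are invented, and "retrieval" here is naive keyword overlap, whereas real RAG systems use vector search over embeddings.

```python
import re

# A tiny stand-in for your document store (contents invented for the example).
docs = {
    "refunds.md": "Refunds are issued within 14 days of purchase.",
    "shipping.md": "Orders ship within 2 business days.",
}

def words(text: str) -> set:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    """Step 1: fetch the document that overlaps most with the question."""
    q = words(question)
    return max(docs.values(), key=lambda d: len(q & words(d)))

def build_prompt(question: str) -> str:
    """Step 2: put the retrieved text in front of the model's question."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The prompt now carries your data, so the model answers from it,
# not from pure memory.
prompt = build_prompt("How fast do orders ship?")
```

The key idea is that the model's answer is grounded in whatever `retrieve` returns, which is why RAG answers reflect your database or PDFs rather than the model's training data alone.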

Safety & Trust

RLHF
A training method using human feedback to steer outputs toward helpful behavior.
Also called: human feedback tuning
Example: Humans rank responses; the model learns what’s preferred.
Guardrails
Rules and filters to reduce harmful or risky outputs (plus monitoring and policies).
Also called: safety filters, constraints
Example: Blocking private data leaks or unsafe instructions.

Deployment

Inference
Running the model to get an output (after training is done).
Also called: prediction, generation
Example: Asking a chatbot a question and receiving an answer.
API
A way for software to talk to software (your automation calls the model via an endpoint).
Also called: integration, endpoint
Example: n8n sends a prompt to the model and stores the response.
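A minimal sketch of what "calling the model via an endpoint" looks like under the hood, using only Python's standard library. The URL, key, and payload shape are illustrative assumptions (real providers document their own formats); the request is built but not sent, since sending needs a live endpoint and key.

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat"  # hypothetical endpoint
API_KEY = "YOUR_KEY_HERE"                    # placeholder, not a real key

def build_request(prompt: str) -> urllib.request.Request:
    """Package a prompt as an HTTP POST the model endpoint can read."""
    payload = {
        "messages": [
            # The system prompt sets the rules...
            {"role": "system", "content": "Be concise. Cite sources."},
            # ...and the user prompt carries the actual request.
            {"role": "user", "content": prompt},
        ]
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize this article in 5 bullets.")
# Sending it (urllib.request.urlopen(req)) would return the model's JSON
# response; a tool like n8n makes the same kind of call and stores the result.
```

This is exactly the "software talking to software" idea: your code builds a structured request, the endpoint returns a structured response, and no human sits in the middle.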
Latency
How long it takes to get a response.
Also called: response time
Example: “This model feels slow at peak hours.”

Explore next

Want the “how did we get here?” version? Read AI History & Timeline, or meet the team on Authors.