
Compiled vs Interpreted vs JIT

Three ways source becomes execution — and the trade-offs.



Every language has to answer the same question: when does source become machine code? There are three common answers, and each one makes different trade-offs between build time, startup latency, peak throughput, and the moment errors surface.

Analogy

Think of a restaurant serving dinner for 200 people. AOT is preparing every plate in the morning and keeping them warm in the pass — opening night is slow but service is instant. Interpreted is a cook making each plate to order from raw ingredients every time the bell rings — zero prep, slow service, and the same mistake made 200 times. JIT is a smart line cook who starts à la carte, but after the third order for the duck notices the pattern and pre-plates the garnish in bulk — warm-up lag, then they start outrunning everyone.

Ahead-of-time (AOT)

C, C++, Rust, Go, and Swift compile the entire program to native machine code before you run it. The compiler does all the work up front — lexing, parsing, type-checking, optimization, register allocation, and finally emitting instructions for a specific CPU (x86-64, ARM64).

gcc hello.c -o hello      # produces an ELF/Mach-O binary
./hello                   # OS loader maps it into memory and runs it

What you gain: zero startup overhead, predictable peak performance, and type/compile errors caught before a single line runs. What you lose: slow edit-build-test cycles and per-platform binaries (cross-compilation is a separate headache).

Interpreted

CPython, Ruby MRI, POSIX shell, and classic Perl walk the program at runtime. Most modern interpreters compile source to a compact bytecode first (.pyc, YARV) and then step through it in a big switch loop — but critically, no native machine code is ever emitted.

python main.py            # no separate build step
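
You can watch CPython's compile-to-bytecode step with the standard dis module. A minimal sketch (the exact opcodes vary by CPython version):

import dis

def add(a, b):
    return a + b

dis.dis(add)    # prints the bytecode the eval loop steps through
# On CPython 3.12: LOAD_FAST a, LOAD_FAST b, BINARY_OP (+), RETURN_VALUE.
# Older versions show BINARY_ADD instead of BINARY_OP.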

What you gain: instant iteration, full dynamic flexibility (import at runtime, monkey-patch anything), and no cross-platform binary story to worry about. What you lose: every line of work costs the interpreter's dispatch overhead, and type errors only surface when that code path executes.

Just-in-time (JIT)

V8, HotSpot, .NET CLR, PyPy, and LuaJIT start out interpreting bytecode and then re-compile hot functions to native machine code using profile data gathered during the run. "Hot" is typically "invoked more than N times" or "loop body has run for a while."

source → bytecode → (interpret) → profile → JIT compile → native → (keep running)
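
A toy sketch of the tier-up heuristic in Python; the threshold and every name here (tier_up, HOT_THRESHOLD, dispatch) are invented for illustration, and a real VM emits machine code rather than swapping one Python callable for another:

HOT_THRESHOLD = 1000          # stand-in for "invoked more than N times"

def tier_up(interpreted, compiled):
    # Route calls through `interpreted` until the count crosses the
    # threshold, then switch to `compiled`. This only mimics the
    # bookkeeping; the actual compilation step is elided.
    count = 0
    active = interpreted
    def dispatch(*args):
        nonlocal count, active
        count += 1
        if count == HOT_THRESHOLD:
            active = compiled  # the "JIT compile" moment
        return active(*args)
    return dispatch

The caller never notices the switch, only the speed change, which is exactly the contract a JIT offers.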

What you gain: peak throughput rivals AOT (often beats it on code where profile-guided choices pay off), plus no separate build step. What you lose: a warm-up period where the first invocations run slowly, and a memory cost for the JIT infrastructure itself.

Where errors show up

The execution model dictates the earliest moment a class of errors can be caught.

Error kind                       AOT                                             Interpreted   JIT
Syntax                           build time                                      on import     on parse
Type mismatch (when typed)       build time                                      runtime       runtime
Null deref / missing property    build time (Rust/Kotlin) or runtime (Go/Java)   runtime       runtime
Logic bug                        runtime                                         runtime       runtime

AOT + a strong type system (Rust, Haskell) catches the most at build time; dynamic interpreters (Python, Ruby) catch the least. For JITs, the timing follows the language rather than the engine: V8 and PyPy run dynamic languages and inherit dynamic-language error timing, while Java and C# are type-checked when javac or the C# compiler produces bytecode, before HotSpot or the CLR is ever involved.
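
A minimal CPython illustration of that timing: the type error exists in the source the whole time, but nothing reports it until the bad branch actually runs.

def greet(flag):
    if flag:
        return "hello" + 1   # TypeError waiting in this branch

greet(False)   # fine: the broken line never executes
greet(True)    # TypeError: can only concatenate str (not "int") to str

A syntax error, by contrast, is reported the moment the file is compiled to bytecode (python -m py_compile main.py will flag it without running anything).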

Startup vs steady-state

If you run a program once for a fraction of a second (CLI tools, AWS Lambda cold starts), startup dominates. AOT wins — the native binary just executes. A JIT has to spin up its VM and warm up. An interpreter starts fast but then pays per-instruction forever.

If you run a program for hours (a web server, a data pipeline), steady-state dominates. JITs and AOT are comparable. Pure interpreters are markedly slower.
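
A rough way to see the split is to time the first call separately from the amortized steady state. The harness below is mine, not a standard tool; under CPython both numbers reflect pure interpretation, while under PyPy the early calls also pay for tracing and compilation:

import time

def bench(f, iters=100_000):
    t0 = time.perf_counter()
    f()                                # first call: startup / warm-up cost
    t1 = time.perf_counter()
    for _ in range(iters):             # steady state
        f()
    t2 = time.perf_counter()
    return t1 - t0, (t2 - t1) / iters  # (first-call seconds, per-call seconds)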

Portability

An AOT binary is locked to one OS and ISA. A bytecode artifact (JVM .class, .pyc, WebAssembly) runs anywhere the runtime exists — that's the whole value proposition of the JVM and of WebAssembly. Source-distributed languages (Python, JS) are portable in the extreme but require the runtime on the target machine.
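
CPython's bytecode artifact is easy to produce directly with the standard py_compile module (the cache file name below assumes CPython 3.12; the tag varies with the interpreter version):

import py_compile

# Compile without running; the .pyc runs on any OS/CPU that has the same
# CPython version, because portability lives in the runtime, not the file.
py_compile.compile("main.py")   # -> __pycache__/main.cpython-312.pyc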

How to tell which model a language uses

Don't memorize; check the actual toolchain. If there's a compiler that emits an ELF/Mach-O/PE binary, it's AOT. If the official runtime is a bytecode interpreter that never emits native code (CPython, for example, will happily disassemble its bytecode for you via the dis module, but its default build never compiles any of it to machine code), it's interpreted. If the runtime has a "tier-up" concept (V8 TurboFan, HotSpot C2, PyPy tracing), it's a JIT.
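
And from inside Python itself, one reliable signal is which implementation is running:

import sys

print(sys.implementation.name)   # 'cpython' (interpreter) or 'pypy' (JIT)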