Syeed Talha

Understand Your First Axum Server by Comparing with FastAPI

If you're coming from Python and have used FastAPI, learning Rust's Axum can feel confusing at first. But the good news is: the core ideas are almost the same.

Let's break down this simple Axum server step by step and compare it with FastAPI so it clicks instantly.


🧩 The Full Code

use axum::{routing::get, Router};
use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // 1. Create a route
    let app = Router::new()
        .route("/", get(root));

    // 2. Define address
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));

    println!("Server running at http://{}", addr);

    // 3. Run server
    let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

// 4. Handler function
async fn root() -> &'static str {
    "Hello, Axum! Hello guys how are you!!!"
}

🧠 Big Picture

This program does 4 simple things:

  1. Create a web app
  2. Define a route (/)
  3. Start a server on 127.0.0.1:3000
  4. Return a response when someone visits

🔹 Step 1: Importing Things

use axum::{routing::get, Router};
use std::net::SocketAddr;

What this means:

  • Bring Router → to create the app
  • Bring get → to define GET routes
  • Bring SocketAddr → to define IP + port

๐Ÿ FastAPI equivalent:

from fastapi import FastAPI

🔹 Step 2: Async Main Function

#[tokio::main]
async fn main() {

What this means:

Rust doesn't support an async main function on its own, so we use Tokio (an async runtime); the #[tokio::main] attribute sets it up for us.

👉 This is like saying:

"Run this program on an async engine."

๐Ÿ FastAPI equivalent:

import asyncio

async def main():
    ...

asyncio.run(main())
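For intuition, asyncio.run() is itself shorthand for creating and driving an event loop by hand, much like #[tokio::main] is shorthand for building a Tokio runtime. A rough sketch (simplified, not the exact internals):

```python
import asyncio

async def main():
    # stand-in for your async application code
    return "hello from the event loop"

# roughly what asyncio.run(main()) does for you,
# the way #[tokio::main] builds and drives a runtime in Rust
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(main())
finally:
    loop.close()

print(result)  # hello from the event loop
```

Both runtimes do the same job: own the loop, run your top-level future to completion, and clean up afterwards.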

🔹 Step 3: Create the App

let app = Router::new()
    .route("/", get(root));

What this means:

  • Create a new app (Router::new())
  • Add a route /
  • When someone sends a GET request, call root

๐Ÿ FastAPI equivalent:

app = FastAPI()

@app.get("/")
async def root():
    return "Hello"

👉 Same idea, different syntax.


🔹 Step 4: Define Address

let addr = SocketAddr::from(([127, 0, 0, 1], 3000));

What this means:

Your server will run on:

http://127.0.0.1:3000

๐Ÿ FastAPI equivalent:

uvicorn.run(app, host="127.0.0.1", port=3000)

🔹 Step 5: Bind the Server

let listener = tokio::net::TcpListener::bind(addr).await.unwrap();

What this means:

  • Open port 3000
  • Start listening for incoming requests

Think of it like:

"Open the door and wait for visitors."

๐Ÿ Python equivalent:

import socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 3000))
server.listen()
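To see "open the door and wait for visitors" literally, here is a small stdlib-only sketch (illustrative, not what Axum does internally) where one thread knocks while the main thread accepts:

```python
import socket
import threading

# open the door: bind and listen (port 0 lets the OS pick a free port)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]

# a "visitor" knocks from another thread
def visit():
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"knock knock")

t = threading.Thread(target=visit)
t.start()

conn, addr = server.accept()   # wait for the visitor
data = conn.recv(1024)
print(data.decode())           # knock knock

conn.close()
t.join()
server.close()
```

TcpListener::bind(addr) in the Rust code covers the bind + listen part; Axum then accepts connections for you in a loop.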

🔹 Step 6: Start the Server

axum::serve(listener, app).await.unwrap();

What this means:

  • Take incoming requests from listener
  • Pass them to your app (Router)
  • Run forever

๐Ÿ FastAPI equivalent:

uvicorn.run(app)

🔹 Step 7: Handler Function

async fn root() -> &'static str {
    "Hello, Axum! Hello guys how are you!!!"
}

What this means:

When someone visits /, this function runs and the string it returns becomes the response body. (A &'static str in Axum is sent as text/plain; FastAPI would JSON-encode a returned string by default.)

๐Ÿ FastAPI equivalent:

@app.get("/")
async def root():
    return "Hello, Axum!"
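One subtle difference worth knowing: Axum sends a returned &'static str as plain text, while FastAPI JSON-encodes a returned str by default, so the body gains quotes on the wire. A quick check of just the JSON step (this isn't FastAPI itself, only the encoding it applies):

```python
import json

plain = "Hello, Axum!"          # what an Axum text/plain body looks like
encoded = json.dumps(plain)     # what FastAPI's default JSON response sends

print(plain)    # Hello, Axum!
print(encoded)  # "Hello, Axum!"
```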

๐Ÿ” Full Request Flow

Hereโ€™s what happens when you open your browser:

Browser → http://127.0.0.1:3000/
        ↓
TcpListener (listening)
        ↓
Axum Server
        ↓
Router matches "/"
        ↓
root() function runs
        ↓
Response sent back
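The same flow can be traced end to end with Python's standard library alone. This is a hedged sketch, not how Axum or FastAPI work internally: http.server plays the listener, the path check plays the router, and do_GET plays the handler:

```python
import http.server
import threading
import urllib.request

class RootHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):                      # "router": match the path
        if self.path == "/":
            body = b"Hello, Axum! Hello guys how are you!!!"
            self.send_response(200)        # handler runs
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)         # response sent back
        else:
            self.send_error(404)

    def log_message(self, *args):          # keep the demo quiet
        pass

# "TcpListener": bind a free port and start listening in the background
server = http.server.HTTPServer(("127.0.0.1", 0), RootHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Browser": make one request and print the body
url = "http://127.0.0.1:%d/" % server.server_port
reply = urllib.request.urlopen(url).read().decode()
print(reply)

server.shutdown()
```

Every box in the diagram above has a counterpart here: the bound socket, the path match, the handler, and the response going back out.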

Now the full FastAPI code:

from fastapi import FastAPI
import uvicorn

# 1. Create app
app = FastAPI()

# 2. Create route
@app.get("/")
async def root():
    return "Hello, Axum! Hello guys how are you!!!"

# 3. Run server
if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=3000)

🔹 Key Difference (important)

👉 In Axum (Rust), you manually:

  • bind the address
  • create the listener
  • start the server

👉 In FastAPI (Python):

uvicorn.run() does everything for you.
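To feel what Axum makes you spell out, here is a stdlib-only sketch of the same manual steps (bind, listen, serve) using asyncio; it acts as its own client so it is self-contained. This is an illustration of the pattern, not uvicorn's actual internals:

```python
import asyncio

async def handle(reader, writer):
    await reader.read(1024)                      # read the request
    body = b"Hello, Axum!"
    writer.write(b"HTTP/1.1 200 OK\r\n"
                 b"Content-Length: %d\r\n\r\n" % len(body))
    writer.write(body)                           # write the response
    await writer.drain()
    writer.close()

async def main():
    # 1. bind the address and create the listener (Axum: TcpListener::bind)
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # 2. the server now accepts in the background (Axum: axum::serve);
    #    act as our own client to exercise it once
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
    await writer.drain()
    raw = await reader.read()                    # EOF after server closes
    writer.close()

    server.close()
    await server.wait_closed()
    return raw.decode().split("\r\n\r\n", 1)[1]  # just the body

body = asyncio.run(main())
print(body)  # Hello, Axum!
```

uvicorn.run() hides all of these steps behind one call; Axum keeps each one visible in your main function.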

Simple Mental Model

Concept      | Axum                   | FastAPI
------------ | ---------------------- | ----------------
App          | Router::new()          | FastAPI()
Route        | .route("/", get(...))  | @app.get("/")
Handler      | async fn root()        | async def root()
Server start | axum::serve(...)       | uvicorn.run()
Address      | SocketAddr             | host + port

Final Summary

This whole Axum program means:

"Create a web server that listens on port 3000, and when someone visits /, return a simple message."


If you're already comfortable with FastAPI, you're much closer to mastering Axum than you think. The concepts are the same; Rust just makes things more explicit and safe.

Top comments (2)

mote •

Great breakdown! The FastAPI parallel is a smart way to lower the learning curve for Python developers.

One thing worth highlighting for newcomers: while the surface API looks similar, the underlying async model is quite different. FastAPI runs on uvicorn with Python asyncio, while Axum runs on Tokio -- a multi-threaded work-stealing scheduler. In practice Axum handlers are truly parallel across CPU cores by default, whereas FastAPI is concurrent but not parallel unless you spawn separate processes.

This distinction becomes critical when moving from web services to embedded or systems contexts. I ran into this building moteDB, an embedded Rust database for AI robots. The same Tokio runtime that powers Axum gives you fine-grained control over thread pinning and I/O dispatch -- which matters on a Raspberry Pi 5 with limited cores shared between inference and storage.

For anyone going deeper: tokio::main is convenient but in production you often want tokio::runtime::Builder::new_multi_thread() with explicit worker_threads so you do not accidentally starve critical tasks.

What is the target use case for your series? Web APIs only or heading toward embedded/edge too?

Syeed Talha •

Thanks for your valuable comment ^_^. For now I'm focusing on web APIs.