Backend Architecture Guide: The Rust Compiler Service

1. Overview

The backend is the engine of the Multi-Language Compiler platform. It is a high-performance, asynchronous web service built entirely in Rust. Its primary responsibilities are to expose a secure API endpoint, receive code compilation requests, execute the code in the appropriate language environment, and persist the results to the database.

The architecture prioritizes performance, type-safety, and modularity, leveraging Rust's powerful features to handle concurrent requests efficiently and safely.

  • Core Technologies:

    • Language: Rust

    • Web Framework: Axum (for building the REST API)

    • Asynchronous Runtime: Tokio (for managing async operations)

    • Database ORM: Prisma Client Rust (for type-safe database access)

    • Serialization: Serde (for handling JSON data)


2. Project Structure and Code Flow

The backend logic is organized into several distinct modules, each with a specific responsibility. The main entry point is src/main.rs.

  • src/main.rs: This file contains the core of the web service. It sets up the Axum server, defines the API routes, and contains the primary request handler (compile_code) which orchestrates the entire compilation process.

  • src/lexer.rs: Part of the custom language interpreter. Its role is to perform lexical analysis, breaking the raw code string into a stream of tokens.

  • src/parser.rs: Takes the tokens from the lexer and constructs an Abstract Syntax Tree (AST), a structured representation of the custom language code.

  • src/evaluator.rs: The final stage of the interpreter. It walks the AST to execute the custom language code's logic.

  • src/object.rs: Defines the internal object system (e.g., Integer, Boolean) used by the custom language's evaluator.


3. Core Service Logic: The main.rs File

The main.rs file orchestrates the entire backend service.

3.1. Server Initialization

The main function sets up the Axum server.

  1. Async Runtime: #[tokio::main] marks the entry point for the Tokio async runtime.

  2. Database Client: A new Prisma client instance is created and wrapped in an Arc to be shared safely across all concurrent requests.

  3. Router: An Axum Router is created, and the /api/compile route is bound to the compile_code handler function for POST requests.

  4. Listener: The server is bound to port 8000 and begins listening for incoming connections.
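The initialization steps above can be sketched as follows. This is a hedged sketch, not compilable on its own: it assumes the axum, tokio, and prisma-client-rust crates (axum 0.7-style API), the project's generated PrismaClient type, and the compile_code handler described in section 3.2.

```rust
use std::sync::Arc;
use axum::{routing::post, Router};

// 1. Tokio async runtime entry point.
#[tokio::main]
async fn main() {
    // 2. Shared, thread-safe database client (generated Prisma builder API).
    let db = Arc::new(PrismaClient::_builder().build().await.unwrap());

    // 3. Bind /api/compile to the POST handler, sharing `db` as state.
    let app = Router::new()
        .route("/api/compile", post(compile_code))
        .with_state(db);

    // 4. Listen on port 8000 for incoming connections.
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```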

3.2. The compile_code Request Handler

This function is the heart of the backend. It manages the entire lifecycle of a code execution request.

  1. Extraction: The function uses Axum extractors to get the shared database client (State(db)) and to automatically deserialize the incoming JSON request body into a CodePayload struct (Json(payload)).

  2. Execution & Timing: It records the start time, executes the code based on the payload.language, and calculates the total execution_time.

  3. Multi-Language Routing: A central match statement determines how to handle the code.
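The timing and routing steps can be illustrated with a self-contained skeleton. The handle_request helper and the placeholder strings are illustrative, not from the source; in the real handler each match arm invokes the execution strategies described in section 4.

```rust
use std::time::Instant;

// Sketch of the handler's timing + routing skeleton. Each Ok(...) placeholder
// stands in for the real execution strategy for that language.
fn handle_request(language: &str, code: &str) -> (Result<String, String>, u128) {
    // 2. Record the start time before executing anything.
    let start = Instant::now();

    // 3. Central match on the requested language.
    let result = match language {
        "rust" | "c" => Ok(format!("<compiled-language path for {language}>")),
        "python" => Ok(format!("<python3 -c path, {} bytes of code>", code.len())),
        "custom" => Ok("<in-memory interpreter path>".to_string()),
        other => Err(format!("unsupported language: {other}")),
    };

    // 2 (cont.). Total wall-clock execution time in milliseconds.
    let execution_time = start.elapsed().as_millis();
    (result, execution_time)
}

fn main() {
    let (result, ms) = handle_request("python", "print(2 + 2)");
    println!("{result:?} ({ms} ms)");
}
```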


4. Multi-Language Execution Strategy

The backend uses different strategies to execute code depending on the language.

4.1. Compiled Languages (Rust and C)

For languages that need to be compiled, the service performs the following steps:

  1. Create Temporary File: A temporary file (e.g., main.rs or main.c) is created in the system's temporary directory.

  2. Write Code: The user's source code is written to this file.

  3. Invoke Compiler: std::process::Command (from the standard library) is used to call the system compiler (rustc or gcc) as a subprocess, compiling the source file into a binary executable.

  4. Execute Binary: If compilation is successful, a new Command is spawned to run the compiled binary.

  5. Capture Output: The stdout and stderr from the execution subprocess are captured.
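The five steps above can be sketched as a single standard-library function. The function name and temp-file names are illustrative, and the sketch assumes a Unix-like system with rustc on the PATH; the real service applies the same pattern to gcc for C.

```rust
use std::env;
use std::fs;
use std::process::Command;

// Sketch: write user source to a temp file, compile it with rustc,
// run the resulting binary, and capture its output.
fn compile_and_run_rust(source: &str) -> Result<String, String> {
    // 1-2. Create a temp file and write the user's code to it.
    let dir = env::temp_dir();
    let src_path = dir.join("user_main.rs");
    let bin_path = dir.join("user_main_bin");
    fs::write(&src_path, source).map_err(|e| e.to_string())?;

    // 3. Invoke rustc as a subprocess to produce a binary.
    let compile = Command::new("rustc")
        .arg(&src_path)
        .arg("-o")
        .arg(&bin_path)
        .output()
        .map_err(|e| e.to_string())?;
    if !compile.status.success() {
        // Compiler diagnostics arrive on stderr.
        return Err(String::from_utf8_lossy(&compile.stderr).into_owned());
    }

    // 4-5. Execute the binary and capture its stdout.
    let run = Command::new(&bin_path).output().map_err(|e| e.to_string())?;
    Ok(String::from_utf8_lossy(&run.stdout).into_owned())
}

fn main() {
    let code = r#"fn main() { println!("hello from user code"); }"#;
    match compile_and_run_rust(code) {
        Ok(out) => print!("{out}"),
        Err(err) => eprint!("{err}"),
    }
}
```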

4.2. Interpreted Languages (Python)

For Python, the process is simpler, as there is no compilation step.

  1. Invoke Interpreter: std::process::Command is used to call the python3 interpreter directly.

  2. Pass Code: The user's code is passed as an argument using the -c flag (python3 -c "<code>").

  3. Capture Output: The stdout and stderr from the interpreter are captured.

4.3. Custom Interpreted Language

When the payload's language field is "custom", the backend uses its own built-in interpreter.

  1. Lexer: let lexer = Lexer::new(&code);

  2. Parser: let mut parser = Parser::new(lexer);

  3. AST Creation: let program = parser.parse_program();

  4. Evaluator: let evaluated = eval(program, &mut env);

  5. Result: The final value produced by the evaluator is formatted as the output string. The entire pipeline runs in memory, with no temporary files or subprocesses, which keeps per-request overhead low.
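Stitched together, the pipeline looks roughly like this. The sketch assumes the module APIs implied by the steps above (Lexer::new, Parser::new, parse_program, eval); the Environment type and its location in src/object.rs are assumptions for illustration.

```rust
use crate::lexer::Lexer;
use crate::parser::Parser;
use crate::evaluator::eval;
use crate::object::Environment; // assumed: evaluator's variable scope type

fn run_custom(code: &str) -> String {
    let lexer = Lexer::new(code);            // 1. tokenize the raw source
    let mut parser = Parser::new(lexer);     // 2. feed tokens to the parser
    let program = parser.parse_program();    // 3. build the AST
    let mut env = Environment::new();        // assumed constructor for the scope
    let evaluated = eval(program, &mut env); // 4. walk the AST
    format!("{}", evaluated)                 // 5. format the final value as output
}
```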
