Tinymist Docs

Language Server

Architecture

The tinymist binary has multiple modes, and it may run multiple actors in the background. An actor can run as an async task, on a dedicated thread, or in an isolated process.

The main process of tinymist runs the program as a language server, communicating over stdin and stdout. The main process will fork:

  • rendering actors to provide PDF export with watching.
  • preview actors that give a document/outline preview over some typst source file.
  • compiler actors to provide language APIs.

From the directory structure of crates/tinymist, the main.rs file parses the command-line arguments and dispatches the corresponding command:

match args.command.unwrap_or_default() {
    Commands::Query(query_cmds) => query_main(query_cmds),
    Commands::Lsp(args) => lsp_main(args),
    Commands::TraceLsp(args) => trace_lsp_main(args),
    Commands::Preview(args) => tokio_runtime.block_on(preview_main(args)),
    Commands::Probe => Ok(()),
}

The query subcommand contains the query commands, which perform language queries via the CLI. This is convenient for debugging and for profiling a single query of the language server.

There are three servers in the server directory:

  • lsp provides the language server, initialized by lsp_main in main.rs.
  • trace provides the trace server (for profiling typst programs), initialized by trace_lsp_main in main.rs.
  • preview provides a typst-preview compatible preview server, initialized by preview_main in tool/preview.rs.

The long-running servers are backed by the ServerState in the server.rs file. They bootstrap actors in the actor directory, start tasks in the task directory, construct and return resources in the resource directory, and may invoke tools in the tool directory.

Debugging with input mirroring

You can record the input while running the editors with Tinymist, and then replay the recorded input to debug the language server.


                                
# Record the input
tinymist lsp --mirror input.txt

# Replay the input
tinymist lsp --replay input.txt

Analyze memory usage with DHAT

You can build the program with the dhat-heap feature to collect memory usage with DHAT. DHAT instruments the allocator dynamically, so it will slow down the program significantly.


                                
cargo build --release --bin tinymist --features dhat-heap

The instrumented program behaves no differently from the normal program, so you can inspect the memory usage of a specific LSP session (recorded with --mirror) by replaying the input.


                                
./target/release/tinymist lsp --replay input.txt
...
dhat: Total:     740,668,176 bytes in 1,646,987 blocks
dhat: At t-gmax: 264,604,009 bytes in 317,241 blocks
dhat: At t-end:  259,597,420 bytes in 313,588 blocks
dhat: The data has been saved to dhat-heap.json, and is viewable with dhat/dh_view.html

Once you have the dhat-heap.json file, you can visualize the memory usage with the DHAT viewer.

Server-Level Profiling

In VS Code, you can get the profiling data of the language server by searching and running the "Typst: Profile server" command.

To use this feature in other LSP clients, please check /editors/vscode/src/features/tool.ts. A client should start and then stop profiling to collect the performance events inside that time window.
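As an illustrative sketch of that start/stop protocol (the command names tinymist/startProfiling and tinymist/stopProfiling are placeholders, not the real identifiers, which live in the file above), a client could do:

```typescript
// Hypothetical sketch of the start/stop profiling protocol. The command
// names below are placeholders -- check /editors/vscode/src/features/tool.ts
// for the real identifiers.
type ExecuteCommand = (command: string, ...args: unknown[]) => Promise<unknown>;

async function profileServer(execute: ExecuteCommand): Promise<unknown> {
  // Open the profiling time window on the server.
  await execute("tinymist/startProfiling");
  // Interact with the server here; events in this window are collected.
  await new Promise((resolve) => setTimeout(resolve, 50));
  // Close the window and retrieve the collected performance events.
  return execute("tinymist/stopProfiling");
}
```

The execute function stands in for whatever command mechanism the client exposes (e.g. workspace/executeCommand over LSP).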

Contributing

See CONTRIBUTING.md.