# How to Install and Run [LM Studio](https://lmstudio.ai/) on Ubuntu as a Headless Server

This guide explains how to install and run LM Studio in a headless Ubuntu environment.

---

## **Prerequisites**

Ensure you have an Ubuntu 24.10 server with sufficient resources for the models you plan to use. For testing, a machine with **2 GB RAM, 2 CPUs, and 60 GB SSD** is sufficient for smaller models.

---

## **Step 1: Install Required Packages**

Run the following command to install all necessary dependencies:

```bash
sudo apt install -y npm fuse fuse3 libfuse2 libatk1.0-0 libatk-bridge2.0-0 \
  libcairo2 libgdk-pixbuf2.0-0 libgtk-3-0 libx11-6 libnss3 libasound2t64 \
  libcups2 xauth xvfb xfce4 xfce4-goodies
```

---

## **Step 2: Download LM Studio**

Download the LM Studio AppImage from the official site (https://lmstudio.ai/):

```bash
wget https://installers.lmstudio.ai/linux/x64/0.3.6-8/LM-Studio-0.3.6-8-x64.AppImage
```

Make the file executable:

```bash
chmod +x LM-Studio-0.3.6-8-x64.AppImage
```

---

## **Step 3: Run LM Studio with a Virtual Display**

Run LM Studio using `xvfb-run`, which provides a virtual display so the GUI application can start without a monitor:

```bash
xvfb-run ./LM-Studio-0.3.6-8-x64.AppImage --no-sandbox
```

This initializes LM Studio and prepares it for headless operation.

---

## **Step 4: Install the LM Studio CLI**

Install the LM Studio CLI tool (`lms`) using:

```bash
npx lmstudio install-cli
```

During installation, you'll see the following prompt:

```
We are about to run the following commands to install the LM Studio CLI tool (lms):

echo 'export PATH="$PATH:/root/.lmstudio/bin"' >> ~/.profile
echo 'export PATH="$PATH:/root/.lmstudio/bin"' >> ~/.bashrc
```

Confirm by typing `Yes`. This adds the CLI tool to your PATH.
Open a new terminal session or reload your shell configuration:

```bash
source ~/.bashrc
```

Verify the installation:

```bash
lms --help
```

---

## **Step 5: Start the LM Studio Server**

Start the server to enable API access:

```bash
lms server start
```

Check the server status:

```bash
lms status
```

You should see output indicating the server is running on port **1234**.

---

## **Step 6: Download and Load a Model**

List available models to download:

```bash
lms get
```

Select a lightweight model such as **Qwen2.5 Coder 3B Instruct** and confirm the download. Once downloaded, load the model:

```bash
lms load qwen2.5-coder-3b-instruct
```

Verify the loaded models:

```bash
lms ps
```

---

## **Step 7: Test the Model via API**

Send a request to the running server to test the model:

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder-3b-instruct",
    "messages": [
      { "role": "user", "content": "Write a Python function to reverse a string." }
    ]
  }'
```

### Example Response:

```json
{
  "id": "chatcmpl-o2nubfsik7bykwbkdysbrd",
  "object": "chat.completion",
  "created": 1736694420,
  "model": "qwen2.5-coder-3b-instruct",
  "choices": [
    {
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Here's a simple Python function to reverse a given string:\n\n```python\ndef reverse_string(s):\n    # Using slicing to reverse the string\n    return s[::-1]\n\n# Example usage\nreversed_s = reverse_string(\"hello\")\nprint(reversed_s)  # Output: \"olleh\"\n```"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 38,
    "completion_tokens": 116,
    "total_tokens": 154
  },
  "system_fingerprint": "qwen2.5-coder-3b-instruct"
}
```

---

## **Step 8: Manage Models**

### List Installed Models:

```bash
lms ls
```

### Unload a Model:

```bash
lms unload
```

### Stop the Server:

```bash
lms server stop
```

---

## **Conclusion**

You have successfully installed and configured LM Studio on a headless Ubuntu server.
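When calling the API from scripts, you usually want just the reply text rather than the full JSON shown in Step 7. Below is a minimal sketch of a helper that does the extraction, assuming `python3` is available (it ships with Ubuntu); the helper name `extract_reply` is just an illustration:

```bash
# extract_reply: read a chat-completion response on stdin and print only the
# assistant's message text (sketch; the helper name is illustrative)
extract_reply() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
}

# Example with a captured response; normally you would pipe curl into it:
#   curl -s http://localhost:1234/v1/chat/completions ... | extract_reply
echo '{"choices":[{"message":{"role":"assistant","content":"Hello from the model"}}]}' | extract_reply
# prints: Hello from the model
```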
You can now explore various models and use the API in your projects. If you run into issues, refer to the LM Studio documentation or seek assistance. 😊
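One last tip for unattended servers: instead of running the Step 3 command by hand, you can wrap it in a systemd service so LM Studio starts at boot. This is a rough sketch, not a tested unit; it assumes the AppImage sits in `/root` and everything runs as root (matching the installer prompt in Step 4), so adjust the paths and user for your setup:

```bash
# Create a systemd unit that runs the Step 3 command at boot
# (sketch; paths and user are assumptions -- adjust to your setup)
sudo tee /etc/systemd/system/lmstudio.service > /dev/null <<'EOF'
[Unit]
Description=LM Studio (headless, via xvfb)
After=network.target

[Service]
WorkingDirectory=/root
ExecStart=/usr/bin/xvfb-run /root/LM-Studio-0.3.6-8-x64.AppImage --no-sandbox
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now lmstudio.service
```

You can then check the service with `systemctl status lmstudio` and continue with the `lms` commands from Steps 5 and 6 as usual.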