My journey from endless configuration debugging to perfect, automated desktop setups using local LLMs.
👋 Hey everyone,
There are few things in the Linux world as rewarding and, frankly, as soul-crushing as configuring a window manager like Sway on Wayland. When it works, it's lightning-fast, sleek, and utterly efficient. But when you're troubleshooting a phantom keybind, a missing status bar segment, or, worst of all, a bizarre window focus issue, it's a time sink that makes you question your life choices. I've spent countless hours in the past, digging through obscure forum threads, only to find that the error was a single missing comma or a forgotten bindsym parameter. It was maddening.
This week, I vowed to apply the very tools I use in my professional life, Generative AI and the principles of DevOps automation, to this most personal of technical challenges. My goal wasn't just to fix one config file; it was to build a system where my window manager config could be self-healing and generated by an LLM that understood the subtle, esoteric language of Sway's configuration format.
My Goal This Week 🎯
My primary objective was to use a powerful, locally run Large Language Model (LLM) to take over the tedious parts of my Sway configuration management. Specifically, I wanted to achieve three things:
- Automated Feature Generation: I needed to generate complex, multi-line configuration blocks (like conditional rules for specific applications, e.g., “Open Firefox in workspace 3, floating, at 80% width”) purely from a natural language request.
- Syntax Debugging: I wanted to be able to paste a broken config file snippet and have the LLM instantly diagnose the obscure syntax error that I was missing.
- Local and Private Execution: Running an LLM like Code Llama locally (via Ollama) on my Arch Linux box ensures that my custom, personal configuration data never leaves my machine. This is critical for privacy and security, a true “DevOps mindset” applied to the desktop.
This project was about moving from a reactive, manual configuration process to a proactive, automated one. It was about leveraging AI not as a gimmick, but as an integral part of my system administration pipeline.
The Process & The Code 👨‍💻
The first step was getting the right tools installed. I chose Ollama because it makes running and switching between open-source LLMs like Code Llama incredibly simple on a Linux machine.
I used the codellama:7b-instruct model as it's lightweight enough to run well on my mid-range GPU but powerful enough to grasp complex configuration syntax.
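If you're following along, the setup itself only takes a few minutes. This is roughly what I ran on my Arch box; the package name, the service unit, and the install method may differ on your distro, so treat it as a sketch rather than gospel:
~~~bash
# Ollama is packaged in the Arch repos; on other distros the upstream
# install script works:  curl -fsSL https://ollama.com/install.sh | sh
sudo pacman -S ollama

# Start the local Ollama service and pull the model
sudo systemctl enable --now ollama
ollama pull codellama:7b-instruct

# Quick smoke test -- it should answer with a single bindsym line
ollama run codellama:7b-instruct "Write a Sway bindsym that launches alacritty with \$mod+Return"
~~~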
Here is the basic shell script I developed to pipe a natural language request directly to Code Llama, asking it to generate a specific Sway configuration block.
~~~bash
# 1. Start by defining your prompt (the desired configuration)
PROMPT="Generate a Sway config rule to open the 'Alacritty' terminal application
in floating mode on workspace 4, setting its title to 'Alacritty-Dev' and
making it always visible on that workspace."
# 2. Pipe the request to Ollama, asking Code Llama for the answer
# I use a specific system prompt to force it into the correct format.
ollama run codellama:7b-instruct \
--system "You are a Sway configuration file expert. Only output valid Sway syntax. Do not include any explanations or markdown formatting outside of the code block." \
<<< "$PROMPT"
~~~
And here is the output I consistently received, perfectly formatted and ready to copy-paste:
~~~conf
for_window [app_id="Alacritty"] floating enable, move to workspace 4
for_window [title="Alacritty-Dev"] title_format %title, sticky enable
~~~
Wait… That second line is not quite right for the original request, but it shows the model immediately grasping the syntax. I learned that by refining my prompt to be more explicit about merging the conditions, the results became even better. The key was to force a monolithic rule:
~~~conf
for_window [app_id="Alacritty" title="Alacritty-Dev"] floating enable, move to workspace 4, sticky enable
~~~
This single command replaced 10 minutes of searching the Sway wiki, referencing my old config files, and testing the syntax. It was a massive leap in my personal productivity.
Hitting The Wall 🧱
My initial attempts weren’t all sunshine. The biggest hurdle wasn’t the code generation itself, but a profound problem of context loss and LLM hallucinations.
When I would ask Code Llama to “Fix this file,” it didn't inherently know the structure of a Sway config file or the specific variables I had defined at the top (like $mod for the modifier key). It would often hallucinate a plausible-looking but non-existent variable or keybind, introducing new errors while fixing the old ones.
The real “wall” was realizing that the LLM isn't a replacement for understanding; it's an accelerator. I couldn't just throw the whole file at it. I had to chunk the problem.
My solution involved a structured, iterative process:
- Isolate the problematic block (e.g., the faulty keybinding section).
- Provide the LLM with the necessary global context (e.g., set $mod Mod4).
- Ask it to fix only the isolated block.
The moment I stopped treating the model as a genius capable of fixing the whole system and started treating it as an expert assistant for single configuration blocks, the reliability shot up to nearly 100%.
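In practice, a “fix this block” prompt ended up looking something like this. The keybinding snippet and its trailing-comma typo are just an illustration, not my actual config:
~~~bash
# Global context the block depends on (defined at the top of my real config)
CONTEXT='set $mod Mod4'

# The isolated, broken block -- note the stray trailing comma on the last
# line, exactly the kind of typo I kept missing by eye
BROKEN='bindsym $mod+Return exec alacritty
bindsym $mod+d exec wofi --show drun
bindsym $mod+Shift+q kill,'

# Hand the model the context and the block together, and ask it to touch
# nothing else
PROMPT="You are a Sway configuration expert. Given this global context:
$CONTEXT

Fix only the following keybinding block and output corrected Sway syntax only:
$BROKEN"

ollama run codellama:7b-instruct <<< "$PROMPT"
~~~
Because the context travels with the block, the model stops inventing variables I never defined.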
The Breakthrough Moment ✨
The true breakthrough came not from fixing a single bug, but from generating a robust workspace auto-switcher script. I was tired of manually setting up new keybinds for every new application I installed. I wanted a generic function.
I challenged Code Llama to create a reusable function that, when given an application name and a workspace number, would create the necessary rule and the keybinding. This led me to a much deeper integration: a Python script that reads a simple list of (app, workspace) pairs and uses Code Llama to generate the final, executable configuration file fragment.
This wasn't just AI-assisted configuration; it was Programmatic Configuration Generation. My config file became a dynamically generated artifact, not a static text file. This is the DevOps mindset, Infrastructure as Code, applied to my desktop. I no longer edit the Sway config; I edit the simple Python input file and run the generator.
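To make the flow concrete, here is a stripped-down shell sketch of that generator loop. My actual generator is the Python script described above; the apps.list format, the generated.conf location, and the launch-keybinding pattern are just my own conventions for this post:
~~~bash
#!/usr/bin/env bash
# Sketch of the generator loop. apps.list holds one "<app_id> <workspace>"
# pair per line (my own convention), e.g.:
#   firefox 3
#   Alacritty 4
set -euo pipefail

PAIRS="${1:-apps.list}"
OUT="$HOME/.config/sway/config.d/generated.conf"
mkdir -p "$(dirname "$OUT")"
: > "$OUT"   # regenerate the fragment from scratch on every run

while read -r app ws; do
  # Skip blank lines and comments in the input list
  if [[ -z "$app" || "$app" == '#'* ]]; then continue; fi

  PROMPT="Generate Sway config for the application with app_id '$app':
1. a for_window rule assigning it to workspace $ws
2. a bindsym on \$mod+Shift+$ws that launches '$app'
Assume 'set \$mod Mod4' is already defined. Output only valid Sway syntax."

  ollama run codellama:7b-instruct <<< "$PROMPT" >> "$OUT"
  echo >> "$OUT"
done < "$PAIRS"

echo "Wrote $OUT -- reload Sway to apply"
~~~
The only hand-written piece left in my main config is a single include line pointing at that directory (Sway's include directive expands shell globs, so include ~/.config/sway/config.d/* picks the fragment up), plus the usual reload keybind.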
📚 Recommended Resource
I can't overstate how much a solid understanding of Linux system administration fundamentals is required to truly harness the power of local LLMs and tools like Sway. For anyone looking to deepen their foundational knowledge and take control of their system, I strongly recommend “The Linux Programming Interface” by Michael Kerrisk. It's not a book about AI, but its exhaustive coverage of system calls, file systems, and process control is the bedrock you need. It helps you understand why a tool like Ollama works and how to set up the robust shell environments that make this AI-driven configuration possible. It's the definitive bible for every serious Linux user.
Key Takeaways 📝
- 💡 Context is King for Local LLMs: When using Code Llama for configuration, always include the necessary contextual lines (e.g., global variables or preceding syntax) in your prompt. Don't make the model guess your local environment.
- ⚙️ Embrace Programmatic Generation: Stop hand-editing large, complex configuration files. Use an LLM to generate the config from a simplified, application-specific input list. Treat your desktop setup as Configuration-as-Code.
- 🎓 AI Doesn't Replace Expertise: The LLM is a phenomenal syntax checker and generator, but you must possess the expertise to design the system prompts and validate the output. It accelerates an expert; it doesn't create one.
Thanks for Following ⭐
☕ If you found this guide helpful, you can Buy Me a Coffee!
What is the next complex, repetitive configuration file, be it Nginx, Terraform, or a Kubernetes YAML, that you plan to conquer with the power of a locally run LLM?
