If you have uv installed (and you should!), you can install `llm` globally in a uv-managed tool environment with:

```bash
uv tool install llm
```
If you want to use models other than OpenAI models, you'll need some extensions:
```bash
llm install llm-deepseek llm-gemini llm-openrouter
```
Additionally, you'll need to set API keys for the providers you want to use:
```bash
llm keys set openai
# Enter your API key when prompted
```
To avoid running up any bills during this hackathon, I recommend setting an OpenRouter key and then using only their free models, available here. There are lots of great free models, including DeepSeek R1 and a "cloaked" pre-release model with a 1,000,000-token context window called "cypher-alpha".
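For example, assuming you installed the llm-openrouter plugin above (which registers the `openrouter` key name), you can store the key and then filter the model list for free-tier model IDs, which end in `:free`:

```bash
llm keys set openrouter
# Enter your OpenRouter API key when prompted, then list the free models
llm models | grep ':free'
```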
Once you've set API keys, you can get the full list of available models with:
```bash
llm models
```
Then you can use the `-m` flag to specify a model:

```bash
llm -m openrouter/openrouter/cypher-alpha:free "How many 'r's are in 'strawberrrrry'?"
```
You can also alias models to make them easier to use:
```bash
llm aliases set cypher-alpha openrouter/openrouter/cypher-alpha:free
```
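After that, the alias works anywhere a model ID does. For example:

```bash
# The alias now stands in for the full OpenRouter model ID
llm -m cypher-alpha "Summarize the plot of Hamlet in one sentence."
```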
- **Learning Linux with a prompt template**
  LLM lets you define a prompt template with input variables. I've created one with the details of my Linux distribution to speed up the process of asking for help with writing shell commands:

  ```bash
  llm --system "You are a Linux command-line copilot running in WSL2 with the following Linux distribution: $(cat /etc/*-release)" --save linux
  ```
  Now I can use it with the `--template` option:

  ```bash
  llm "How could I split a PDF by page range?" --template linux
  ```
- **Whole-repository Q&A in combination with repomix**
  For this one, you'll need `repomix`, which you can install globally with Node:

  ```bash
  npm install -g repomix
  ```
  The `repomix` command will pack the entire current directory (excluding hidden and untracked files) into a `repomix-output.xml` file designed for LLM consumption:

  ```bash
  repomix --include "*.sh,README.md"                          # Optionally filter with --include and a glob
  repomix --remote https://github.com/boxabirds/awesome-hooks # Or pack a remote repository
  ```
  You can then pass the contents of the `repomix-output.xml` file to `llm` with the `-f` flag to ask questions about the repository:

  ```bash
  llm -f repomix-output.xml "What awesome-hooks are currently available?"
  ```
  I often request implementation plans from multiple models and append them to a file for comparison:

  ```bash
  cat repomix-output.xml | llm -m deepseek-reasoner "Write a step-by-step implementation plan to add a user profile page." >> plans.md
  ```
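  A sketch of that loop (the model IDs here are just examples; substitute whichever models you have installed and configured):

  ```bash
  # Ask several models for a plan and append each answer to plans.md for comparison
  for model in deepseek-reasoner gemini-2.5-pro openrouter/openrouter/cypher-alpha:free; do
    echo "## Plan from $model" >> plans.md
    cat repomix-output.xml | llm -m "$model" \
      "Write a step-by-step implementation plan to add a user profile page." >> plans.md
  done
  ```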
  I then review the plans and implement the best one.
- **Progress reporting in combination with `git log`**
  Say the last time you updated a client was June 1st, and you need to report on your progress since then. You can use git to get the list of commits since then, and then pipe the output to `llm` and have it generate a progress report:

  ```bash
  git log --since="2025-06-01" | llm "In visual markdown, prepare a timeline of development during this period, including stages of work and milestones."
  ```
  You can then include this timeline in the progress report you send your client.
- **Data extraction with structured outputs**
  You can use the `--schema` option to specify a schema for JSON output. For instance, here's how we can extract structured information about a GitHub user from their profile page using curl and strip-tags:

  ```bash
  curl -o- https://github.com/chriscarrollsmith | strip-tags --minify | llm --schema "followers int, following int, tech_stack string"
  ```
It's possible to do some pretty ambitious stuff with `llm` if you use it in a Bash script. Here's an example of how I used `llm` in a Bash loop to find underrated GitHub repositories.
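The general shape of that kind of loop looks something like this (the search query, scoring prompt, and use of the `gh` and `jq` CLIs are illustrative assumptions, not my exact script):

```bash
# Sketch: ask llm to rate a handful of small repositories surfaced by gh search
gh search repos "llm cli" --stars "<500" --limit 10 --json fullName,description |
  jq -c '.[]' | while read -r repo; do
    echo "## $(echo "$repo" | jq -r '.fullName')" >> ratings.md
    echo "$repo" | llm "Rate how underrated this repository seems on a 1-10 scale, with one sentence of reasoning." >> ratings.md
  done
```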
But! We've only got an hour, so I suggest you focus on coming up with a proof of concept and a pitch for how to scale the idea.
Here are some suggestions:
- **Give `llm` a Python tool**

  One of the cool things you can do with `llm` is give it Python functions that it can call. I haven't experimented with this yet myself, but there's all kinds of potential here.
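  A minimal sketch of what that might look like, assuming a recent llm release with tool-calling support and a tool-capable model (check the llm docs for the current syntax):

  ```bash
  # Pass a Python function inline with --functions and let the model call it;
  # --td prints tool-call debugging output so you can see what happened
  llm --functions 'def count_letters(text, letter): return text.count(letter)' \
    "How many times does the letter r appear in strawberrrrry?" --td
  ```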
- **Embed some personal files for semantic search**

  You could try using `llm`'s `embed` and `similar` commands to embed some personal files and make them fuzzy-searchable.
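  For a batch of files, the related `embed-multi` command is the likely workhorse. A rough sketch, assuming an OpenAI key (for the `3-small` embedding model) and a `~/notes` directory of Markdown files:

  ```bash
  # Embed every Markdown note into a local SQLite collection called "notes"
  llm embed-multi notes --files ~/notes '*.md' -d notes.db -m 3-small --store
  # Then search the collection semantically
  llm similar notes -d notes.db -c "that recipe with preserved lemons"
  ```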
- **Chat with a local model**

  The `llm` tool supports chatting with local models, such as those installed with llama.cpp or Ollama. See more in the documentation here.
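  One possible route, assuming you already have Ollama running locally (the model name is just an example):

  ```bash
  llm install llm-ollama        # plugin that exposes local Ollama models to llm
  ollama pull llama3.2          # download a small local model
  llm chat -m llama3.2:latest   # start an interactive chat session
  ```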
- **Add captions to your photos with a vision model**

  If you select a vision model, you can pass a file attachment with the `-a` flag:

  ```bash
  llm "Generate a caption for this image" -a image1.jpg
  ```
  Try chaining this with other commands to actually add the caption as file metadata.
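  For example, something along these lines (assumes `exiftool` is installed; the prompt and tag choice are just illustrations):

  ```bash
  # Generate a caption and write it into the image's metadata with exiftool
  caption=$(llm "Generate a one-sentence caption for this image" -a image1.jpg)
  exiftool -overwrite_original -ImageDescription="$caption" image1.jpg
  ```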
- **Combine `llm` with your favorite command-line tool**

  Generate `nix` build scripts. Manipulate structured JSON outputs with `jq`. Open pull requests with `gh`. Think about stuff you already do from the command line, and see if you can use `llm` to do it better!
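  As one sketch of that idea (assumes the GitHub CLI is authenticated; the branch name and PR title are placeholders):

  ```bash
  # Draft a pull request description from the current branch's diff, then open the PR
  git diff main...HEAD | llm "Write a concise pull request description for these changes." > pr-body.md
  gh pr create --title "Add user profile page" --body-file pr-body.md
  ```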
Here's the YouTube video of this talk: https://www.youtube.com/watch?v=UZ-9U1W0e4o