When you work with various LLMs, especially tool-oriented LLMs like Cursor, it's often helpful to prompt the tool with some context about what the project is, what it does, and why and how it works. This is especially useful if you're working with it repeatedly across longer chat sessions.
We’ve all started our LLM chat convo with something like this:
I have a Ruby on Rails application that is used for widget manufacturing. I use Hotwire and Stimulus with PostgreSQL. The application manages widgets through […] (insert a whole bunch of business rules, etc etc etc)
Feature X does ABC when DEF happens and this is important because of BLAH, so we don’t ever want to do B, but instead do C in case […] (you get the gist)
Typing this every time is a pain.
This is where the `llm-context.md` file comes into play.
It's like a README that the LLM can use. It contains all the contextual project info so I don't have to re-type it.
The `llm-context.md` file is a productivity tool for you and the LLM together. It contains all the pertinent info that will give the LLM a head start on your conversation.
Things that it should include:
- What the application is called
- What the application is used for (what's its purpose, why does it exist?)
- What technologies are used?
- What do the features do?
- What are some important things it should know when it's helping you?
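To make that concrete, here's a minimal sketch of what such a file might look like. Everything in it (the app name, features, and rules) is a made-up example for illustration:

```markdown
# Widget Factory — LLM Context

## What this app is
A Ruby on Rails application for managing widget manufacturing.

## Tech stack
- Rails with Hotwire and Stimulus
- PostgreSQL

## Features
- Widget catalog: CRUD for widget definitions
- Production runs: schedules batches against available machines

## Important things to know
- Money is stored as integer cents, never floats
- Use the Minitest testing framework, never RSpec
```

Short, scannable sections like these are easy for the LLM to absorb and easy for you to keep current.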
Sometimes I’ll even add in specific things like:
- Don’t issue any git commands
- Do not change the .git folder
- Only use the Minitest testing framework; do not ever use RSpec.
However, at that point we’re getting into the custom rules area of Cursor, so if I find myself having to say that again I’ll update the rules file(s).
Updating the llm-context.md File
After an AI-assisted coding session I'll tell the LLM to go update the `llm-context.md` file with a summary of the feature we added. I'll say something like this:
Go update the llm-context.md file in the root of this project with a summary of the feature that you added during this chat session. Do not include any implementation details, just a summary of what the feature does for the user. Add any gotchas and things that are very important to know in the future. If you have questions, please ask them one by one and then when you're ready, update the doc.
It will then ask you any clarifying questions and then it will go update the doc.
Now let's assume you keep the chat going. 5 to 10 to 15 minutes later you could type this:
Remember that llm-context.md file? Go update that again with this new info
It will then go update it.
When and How to Use the llm-context.md File
Sometimes the agent/chat window just gets too slow, so you need to start a new one. When you do this, you lose all the context of your convo.
Enter the `llm-context.md` file.
Tell the LLM to read the file and let you know when it's ready:
Please go read the llm-context.md file in the root of the project and let me know when you’re ready.
It will go read the file, and then it will have the context of what your application is about, the high-level details you care about, etc.
Now you will be starting your chat session with a solid foundation of context for it to build on.
Additional Uses / Spike Uses
Sometimes I try some off-the-wall stuff and then want to undo what I did, but I'd like to save my work and ideas into a file first, because I might want to return to them later.
When this happens I'll create a `llm-context-YYYYMMDD-01.md` file. The date is the current date and the `01` is the number of the file that day. I might have a few, so I like to keep them numbered.
If I try something and I'm not sure whether I like it, and I want to revert and try again while still saving the context of the convo, I'll ask the LLM to create a file with that naming scheme and put all the details in there, including some implementation details.
If and when I want to resume that conversation in the future with that knowledge, I'll ask the LLM to read that specific dated `llm-context-YYYYMMDD-01.md` file, and then I'll go from there because it will have the context it needs to resume working again.
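If you want to mint those dated filenames consistently, a tiny helper can do it. This is just an illustrative sketch (the `next_context_filename` method is my own invention, not part of any library), assuming the files live in the project root and follow the `llm-context-YYYYMMDD-NN.md` pattern described above:

```ruby
require "date"

# Build the next dated context filename, e.g. "llm-context-20240501-02.md".
# `existing` is the list of filenames already sitting in the project root.
def next_context_filename(existing, date: Date.today)
  stamp = date.strftime("%Y%m%d")
  # Count how many context files already exist for this date...
  count = existing.count { |name| name.start_with?("llm-context-#{stamp}-") }
  # ...and number the new one right after them, zero-padded to two digits.
  format("llm-context-%s-%02d.md", stamp, count + 1)
end
```

With no prior files for the day it yields `-01`; each additional file that day bumps the counter.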
Hat tip to Adam Mokan for this idea.