Learning to Work with LLMs: My Experience with Cursor, ChatGPT, and Daily Development
Over the past few months, I have been curious about how LLMs (large language models) could fit into my regular development workflow, not as a replacement for thinking but as a tool I could work alongside.
I tried different approaches, sometimes chaotic and sometimes too structured, until I gradually discovered a natural rhythm.
This post captures some of those lessons and patterns I've been following lately. It's not a prescription, but a small glimpse into what has been working for me.
Setting Clear Boundaries Helped
When I need to create something like a new React component (say a TextPopup), I first:
- Create the initial boilerplate manually, including the props and types.
- Then, I prompt the LLM with a very specific set of rules, such as:
# Rules:
- The component should be responsive.
- The component should use the props and types I already created inside the main file.
- The code should be readable, maintainable, and well organized, with internal render functions where appropriate.
- The code should be properly documented in each section.
- If any local states or variables are defined, use readable, self-explanatory names, and add comments explaining what each state handles, where it is used, and why.
- Never use nested conditional statements.
- Reduce the usage of conditional statements as much as possible.
- Use explicit return statements instead of single-line arrow functions for better readability wherever possible.
- The component should have its style classes defined inside the ComponentName.style.scss file.
…and so on.
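To make the first step concrete, here's the kind of boilerplate I might hand-write before prompting. The component name, props, and defaults below are hypothetical, just to illustrate the shape of what the LLM is told to reuse:

```typescript
// Hand-written boilerplate for a hypothetical TextPopup component.
// The props and types are defined up front so the LLM reuses them
// instead of inventing its own shapes.

export type PopupPlacement = "top" | "bottom" | "left" | "right";

export interface TextPopupProps {
  // The text content shown inside the popup.
  text: string;
  // Where the popup appears relative to its anchor element.
  placement: PopupPlacement;
  // Whether the popup is currently visible.
  isVisible: boolean;
  // Optional callback fired when the popup is dismissed.
  onDismiss?: () => void;
}

// Sensible defaults the generated component can fall back on.
export const defaultTextPopupProps: Pick<TextPopupProps, "placement" | "isVisible"> = {
  placement: "top",
  isVisible: false,
};
```

With the types pinned down like this, the LLM's output plugs straight into the rest of the file instead of drifting into its own prop naming.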
Setting these boundaries up front helps keep the model focused, avoiding hallucinations and misinterpretations.
Keeping Each Chat Session Focused
Every component gets its own dedicated chat session. For example, if I'm building a TextInput component, I open a fresh session and use it only for:
- The initial base implementation
- Iterative enhancements
- Bug fixes
No mixing multiple components in one chat.
Keeping the conversation history clean and contextually focused makes the LLM's responses much sharper and more accurate.
Building Features One Step at a Time
Instead of asking for all features in one go:
- Build the basic version first.
- Test and verify that it's working.
- Then, for each new feature (e.g., "Add support for password visibility toggle"), I prompt separately.
This "single-purpose prompting" dramatically improves the output quality and prevents complexity from snowballing.
Manually Picking the Right Model
Inside Cursor IDE, I found that manually selecting the model (e.g., Claude 3.7) gives far better results than relying on auto-selection.
Some models are better at code structure, others at writing comments. Picking intentionally based on task type makes a noticeable difference.
Commenting Code for Future Collaboration
Another key discovery: Adding detailed comments inside the code massively improves future LLM edits.
Whenever I need enhancements or fixes later, the model reads the comments as context, leading to much more precise and relevant changes.
Interestingly, I even let the LLM help generate these detailed comments during the initial coding stages.
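As an illustration of the level of commenting I aim for, here's a small helper of the kind I'd keep in a component file; the function and its purpose are hypothetical, invented for this example:

```typescript
// Formats a raw character count into the helper text shown under an
// input field (e.g. "12 / 100"). Kept as a standalone, well-commented
// helper so a later LLM session can see exactly what it is for and
// where it is used.
export function formatCharCount(current: number, maxLength: number): string {
  // Clamp negative inputs: a caller passing a bad value should never
  // produce text like "-3 / 100" in the UI.
  const safeCurrent = Math.max(0, current);
  return `${safeCurrent} / ${maxLength}`;
}
```

The comments cost a few extra lines now, but they pay off later: when I ask the model to tweak this behavior, it already knows the intent without me re-explaining it.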
Choosing the Right Tools for the Task
For utility functions or small helpers, I use regular ChatGPT Plus sessions. I:
- Co-develop functions in an "interactive pair programming" style.
- Refine them manually before copying into my codebase.
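A typical utility I'd co-develop this way might look like the following; truncateText is my own illustrative example, not a function from a real project:

```typescript
// Truncates a string to at most maxLength characters, appending an
// ellipsis when something was cut off. Small, self-contained helpers
// like this are easy to iterate on in a chat session and refine by
// hand before pasting into the codebase.
export function truncateText(text: string, maxLength: number): string {
  if (text.length <= maxLength) {
    return text;
  }
  // Reserve one character for the ellipsis so the result never
  // exceeds maxLength.
  return `${text.slice(0, maxLength - 1)}…`;
}
```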
Cursor's agent mode sometimes makes it tricky to switch between "ask" and "code" intentions, so I stay mindful of that.
Closing Thoughts
This structure has helped me:
- Code faster without sacrificing quality.
- Maintain better organization in both my codebase and LLM usage.
- Reduce frustration from messy, unfocused interactions.
Of course, it's still evolving. Every project teaches me something new. But these principles have made working with LLMs feel like having a smart, patient, experienced developer sitting next to me, ready to help whenever needed.
I'm sure my workflow will keep evolving as these tools grow smarter and as my needs change. And hey, while 'vibe coding' with an LLM can feel magical on a good day, there are times when the vibe just doesn’t vibe 😜, and that's when you have to pause, get a little serious 🧐, and gently steer it back on track.
For now, this structure has given me a good balance between creativity, control, and collaboration with these new digital companions.
Still learning, still experimenting 🧪
PS: If you have any tips, experiences, or workflows that work for you, I'd love to hear about them!