My Journey Into Vibes Coding (And How You Can Start Too)

If you’re into AI tools or agents, you’ve probably tripped over this thing people have called vibes coding: using generative AI to build apps when you barely know what’s happening, hitting “approve” on whatever your favorite LLM churns out, and just rolling with it.

I find the meme hilarious and get a real kick out of the term sticking around.

Vibes coding is all about this crazy, exciting idea that you, as non-technical as you are, can turn your app dreams into something real without a technical degree or years of grinding through what feels like 100 programming languages.

Maybe you're in Marketing and the idea of even writing a SQL query by yourself is scary, or you're an engineer who took a few coding courses and always wanted to make the jump but never quite did.

Maybe you're just another tech person like me who dabbled in development but never fully committed to crossing that chasm.

Grab a coffee, get comfy, and let’s chat about how I stumbled through learning to vibe code and how you can too.


What’s Vibes Coding, Anyway?

No-code and low-code tools have been around forever, but they always felt like a tease to me. You’d get this promise of building something awesome, but then you’d hit a wall.

It either looked good but couldn't do what I wanted (hi, Squarespace), or it was closer but ugly as sin (looking at you, Wix).

It felt like I was stuck in a box: accept something subpar or take the arduous journey of just becoming a developer.

Then generative AI rolled up and it was a game-changer. You don’t have to nail every detail or map out every step like some coding wizard.

(Image: This soon can be you too.)

You just say what you want, and the AI fills in the blanks. Oftentimes, scarily well. It doesn't take much effort to search on X or LinkedIn and find absurd examples of what o3-mini-high or Claude 3.7 Sonnet have coded with a single user prompt.

It's not perfect and it comes with a catch: there’s a very jagged edge. In some directions, it can take you incredibly far.

However, aim a few degrees to the right or left, and it sends you off a cliff.

It can seem random, but so can staying upright on a snowboard or skis. With enough practice, you'll find yourself doing things that seemed impossible when you started.

I learned that the messy way, and I’ll walk you through how you can get up to speed without needing to fall over as much.


Level 1: Kicking Off with System Prompts

If you're just starting, I strongly recommend starting with just learning to craft system prompts. Even if you're a regular user of ChatGPT, Claude, or Gemini, I can almost guarantee you are underutilizing and underestimating the impact of system prompts on LLM outputs.

It doesn't matter which platform you use; just pick one, write a system prompt, and add it to the top of any conversation you start. If you have ChatGPT, create a custom GPT. If you're on Claude, try Writing Styles or a Project's custom instructions and files. If you're on Gemini, create a Gem.

If you're having a hard time understanding what I mean here, ask your favorite model to write you a few system prompts that dramatically change its output behavior, then try them out to see how effective they can be.

Here's a recent example of something I played around with: turning Gemini into a nutrition and macro tracker with natural language inputs. (There's also a quick code sketch after the example exchange, if you're curious how the same idea looks through an API.)

Macro Tracking App Prompt:

“You’re my food tracking assistant. When I give you food descriptions or pics, track the meal’s macros (protein, carbs, fats) and keep a daily total. Assume it’s the same day unless I say otherwise, and every 3 meals, ask if it’s still today or a new day. If I’m vague, ask chill clarifying questions to nail it down.”

What It Enables: Your AI model becomes a macro-tracking app that logs your meals, tallies your daily protein/carbs/fats, and keeps time in check—all from a convo. You’re vibing your diet into shape.

In Action:

Me: “I had a chicken sandwich. Grilled chicken, bun, some mayo.”

AI: “Sweet, let’s log it. I’m guessing: 30g protein, 40g carbs, 10g fat. Cool? Daily so far: 30/40/10. What’s next?”

Me: “Later, 2 eggs and toast.”

AI: “Got it—12g protein, 15g carbs, 10g fat. Daily: 42/55/20. Another meal?”

Me: “Steak and rice.”

AI: “Nice—40g protein, 50g carbs, 15g fat. Daily: 82/105/35. That’s 3 meals—still today, or are we on a new day?”

Me: “Still today.”

AI: “Rad, keeping it rolling. What’s next?”
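And if you ever want to see what a system prompt is doing under the hood, the same idea carries over to calling a model's API directly. Here's a minimal sketch using OpenAI's Python SDK; the model name is a placeholder and the prompt is trimmed down from the one above.

from openai import OpenAI

# Assumes the openai package is installed and OPENAI_API_KEY is set in your environment.
client = OpenAI()

MACRO_TRACKER_PROMPT = (
    "You're my food tracking assistant. When I give you food descriptions, "
    "track the meal's macros (protein, carbs, fats) and keep a daily total."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whatever model you have access to
    messages=[
        # The system prompt sets the behavior; the user message is the actual meal.
        {"role": "system", "content": MACRO_TRACKER_PROMPT},
        {"role": "user", "content": "I had a chicken sandwich. Grilled chicken, bun, some mayo."},
    ],
)
print(response.choices[0].message.content)

Same behavior you saw in the chat, just set up programmatically. The chat UIs above are doing a version of this for you behind the scenes.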


Level 2: Claude Artifacts

(Image: Claude can create interactive prototypes within its Artifacts feature.)

After getting the hang of system prompts, my next recommendation is to get familiar with Claude and start playing around with code that can render in their Artifacts feature. ChatGPT's Canvas can offer similar capabilities, but I haven't explored that as much for this prototyping use case.

Rather than sticking to system prompts, start asking Claude to build interactive visualizations and micro-app experiences that you can test and tweak within the Artifacts interface. I especially recommend Claude because it's arguably the most effective agentic coding LLM today, and its chat UI is particularly easy to use.

There's not much more to say here. Start small and see what you can generate. Iterate on the initial generations in whatever way you want. Adjust the colors. Add buttons. Explain simple features and workflows you want to add.
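If you want a concrete starting point, try a prompt along these lines (purely an example, swap in whatever you actually care about): "Build me a simple interactive budget tracker where I can add expenses with a name, category, and amount, see a running total, and get a small bar chart by category." Then keep nudging whatever it renders in plain language.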


Level 3: Vercel’s v0

(Image: v0 is an effective prototyping tool that writes functional code you can take to production.)

Once you get the hang of Claude, move on to Vercel's v0 (alternatives include Lovable, Replit, and Bolt). This is like Claude Artifacts on steroids: v0 is an agentic development tool that takes your prompts, builds out a code repo to fulfill them, and renders the result in real time. You can keep prompting to develop further and can even click on specific elements to focus the AI agent's attention.

Where this differs most from Claude is that it actually lays out files as if you were going to ship the thing as a real application. Instead of one consolidated block of code that merely illustrates your concept, it separates the code into files so you can look under the hood, see how your prompting generates new code, and start prompting with the actual code in mind, even if you don't fully understand all of it.
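To give a feel for what that looks like, a small prototype might come back as a handful of files along these lines (the names here are purely illustrative, not what v0 will literally generate):

app/page.tsx - the main page you see rendered
components/meal-form.tsx - the form or widget you prompted for
lib/macros.ts - helper logic the agent factored out
package.json - dependencies and scripts

Clicking through those files is a gentle way to start connecting your prompts to the code they produce.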

It too isn't perfect, but you'd be shocked what you can develop with it. My UX designer at work has become a wizard with it and has massively extended his output by creating working prototypes that he can easily adjust to respond to stakeholder feedback.


Level 4: AI-Powered IDEs (Cursor, Cline, Windsurf)

The final step is to jump directly into an AI-powered IDE and genuinely write code with the help of LLMs. Products like Cursor and Windsurf come with AI integrations out of the box with little setup required. All you need to do is open a new project and start prompting; your IDE of choice will begin coding just like the tools before it.

(Image: Windsurf with its Cascade chat UI on the right.)

This is level 4 because, unlike the other tools, IDEs aren't designed to give you visual feedback on your app state after every prompt.

You need to figure out how you want to test or render your application (e.g., locally vs. in the cloud, in a terminal vs. a GUI).
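To make that concrete, here's the flavor of a local test loop. Say you prompt Cursor for a tiny back end and it hands you something like the file below (a minimal sketch, assuming Python and Flask are installed; the route name and numbers are made up for illustration). You save it, run it, and check it in your browser, and that feedback loop is yours to set up.

# app.py - a toy endpoint an AI IDE might scaffold (illustrative, not a real project)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/macros")
def daily_macros():
    # Hard-coded sample data just to prove the server runs end to end.
    return jsonify({"protein": 82, "carbs": 105, "fats": 35})

if __name__ == "__main__":
    # Run with: python app.py, then open http://127.0.0.1:5000/macros
    app.run(debug=True)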

However, there's no limit to how big or how specific you can get with your prompting. Whether you want to build some sophisticated back-end workflow that perfectly matches your use case or connect to specific APIs or implement specific libraries, the world is your oyster.

Personally, I've found this freedom often lends itself to getting over my skis, if you will. LLMs, especially SOTA models like Claude 3.7 Sonnet or o3-mini-high, can solve VERY difficult problems when deployed through tools like Cursor.

However, their context window is limited, and they can't maintain full awareness of every part of your code at the same time. You may find yourself on a roll building out feature after feature, only to tweak one and break almost everything, because that last tweak violated an assumption all your other features relied on.

When you get to this stage, you need to take a far more careful approach to vibes coding.

Vibes may still rule the day, but you'll quickly realize that you do need to take a step back and engineer your solution.


The Honest Bit: Avoiding the Maze

When you start doing this stuff for real, you may quickly find yourself in what I call "the generative AI maze." LLMs can help you scope out incredibly ambitious things and can confidently execute on writing the code to take you there.

Just remember to start small and work iteratively. Even when generative AI fails you, remember to use it to understand and diagnose why things failed. Use it to make a better plan for it to execute and try again.

If you keep calm and keep trying, you'll soon find yourself building things you never thought possible.

