AI Did Not Build My Cupboard... It Helped Me Think Better
What a DIY Cupboard Taught Me About AI, Critical Thinking, and Better Systems
AI is often talked about as if its value is speed.
Write faster. Build faster. Summarise faster. Automate faster.
And yes, speed can be useful. But I think some of the most valuable uses of AI are much quieter than that.
Sometimes AI helps you slow down just enough to make a better decision.
I was reminded of this recently while working on a very unglamorous project: trying to improve a cupboard in my apartment.
Not a business process. Not a CRM migration. Not a complex HubSpot implementation. Just a practical DIY problem involving awkward dimensions, limited materials, budget constraints, and a lot of “will this actually work in real life?” thinking.
And weirdly, it became one of the clearest examples I’ve had of what good AI collaboration can look like.
Because AI did not solve the problem for me.
It did not magically understand my space, my tools, my budget, my tolerance for risk, or the fact that I did not want to go out and buy more wood.
What it did was help me think.
It helped me test options, question assumptions, spot weak points, compare trade-offs, and gradually arrive at a solution that was not just technically possible, but actually practical.
That distinction matters.
Because in business, this is where a lot of AI projects go wrong.
They aim for technically correct answers, when what people really need are practically useful ones.
The real value of AI is not always the answer
One of the biggest misconceptions about AI is that its main job is to give you the answer.
Ask a question. Get a response. Copy, paste, move on.
That can work for simple tasks. But the more complex, contextual, or operational the problem becomes, the less useful that model is.
A good answer depends on more than information. It depends on judgement.
It depends on constraints.
It depends on what matters most in the situation.
For my cupboard project, the “best” solution on paper might have involved buying the ideal materials, using the correct fixings, making perfect cuts, and following a neat step-by-step plan.
But that was not my reality.
My reality was:
- I wanted to avoid spending more money.
- I had existing materials I wanted to reuse.
- I had a specific space and odd measurements.
- I cared about the result looking tidy.
- I needed the solution to be strong enough, but not over-engineered.
- I was willing to improvise, but not in a way that would obviously fail.
That is where AI became useful.
Not because it had the perfect answer immediately, but because it gave me a way to think through the problem from different angles.
It became a thinking partner.
AI works best when it has context
The more context I gave, the better the conversation became.
At first, the advice could only be general. But once I explained the materials I had, the type of MDF, the size of the screws, the fact that I had sticky felt but no wood glue, and what I was trying to avoid buying, the suggestions became more useful.
That is exactly how AI works in business too.
Generic prompts create generic answers.
Useful prompts include the messy reality.
For example, if a business asks AI:
“How should we automate our sales process?”
The answer will probably be broad, safe, and forgettable.
But if the business says:
“We have a small sales team using HubSpot. Leads are coming from three different sources. The team often forgets to update lifecycle stages. Managers need clearer visibility, but reps already feel overloaded with admin. We want automation, but not at the expense of trust or data quality.”
Now the answer can become much more useful.
Because the system has something to work with.
AI does not just need a task.
It needs context.
And the better the context, the better the thinking.
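To make the difference concrete, here is a small sketch of what "including the messy reality" looks like if you assemble a prompt programmatically. The function and field names are illustrative, not any particular tool's API:

```python
def build_prompt(task, context=None):
    """Combine a task with optional context lines into one prompt.

    A bare task produces a generic question; adding the messy
    real-world details gives the model something to work with.
    """
    if not context:
        return task
    details = "\n".join(f"- {item}" for item in context)
    return f"{task}\n\nContext:\n{details}"

# Generic prompt: broad, safe, forgettable answer likely.
generic = build_prompt("How should we automate our sales process?")

# Contextual prompt: the same task plus the constraints that matter.
specific = build_prompt(
    "How should we automate our sales process?",
    context=[
        "Small sales team using HubSpot",
        "Leads come from three different sources",
        "Reps often forget to update lifecycle stages",
        "Managers need visibility, but reps feel overloaded with admin",
        "Automation must not come at the expense of trust or data quality",
    ],
)
```

The point is not the code itself, it is the habit: the second prompt carries the same constraints you would give a human consultant before expecting useful advice.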
Technically correct is not the same as practically useful
This is one of the biggest lessons I come back to in almost every systems project.
A solution can be technically correct and still be practically useless.
A HubSpot workflow can function perfectly and still confuse the team.
A CRM property can be logically structured and still fail to support reporting.
A dashboard can show accurate data and still not help anyone make a decision.
An AI-generated answer can be factually reasonable and still miss the point.
With my cupboard, there were several ideas that might have been technically possible. But that did not make them good ideas.
Would they hold?
Would they damage the material?
Would they look messy?
Would they create another problem later?
Would the hinge sit flush?
Would I regret it the moment I tried to open the door?
These are not abstract questions. They are practical design questions.
And business systems are full of the same thing.
The question is rarely:
“Can we automate this?”
The better question is:
“Should we automate this, and what needs to be true for the automation to be useful?”
The same applies to AI.
The question is not simply:
“Can AI do this?”
The better question is:
“Can AI help the right person make a better decision, take better action, or reduce unnecessary friction?”
That is a much more useful standard.
AI should challenge your assumptions, not replace your judgement
The best moments in the cupboard conversation were not when AI agreed with me.
They were when it helped me pause.
It did not simply say, “Yes, that will work.”
It helped me think about where the force would go, whether a material might compress, whether screws might hold properly, and whether something that looked fine at first might become unstable later.
That is a valuable role for AI.
Not blind approval.
Not fake certainty.
Not pretending to be an expert in the room when the real-world situation still needs human judgement.
Useful AI should act more like a second brain than a final authority.
It should help you ask:
- What am I assuming?
- What could go wrong?
- What matters most here?
- What is the simplest viable option?
- What would make this solution fail later?
- Am I solving the real problem, or just the visible symptom?
That is where AI becomes powerful in operational design.
Because most business problems do not fail because nobody had an answer.
They fail because the wrong assumptions were never challenged.
This is especially important in HubSpot and CRM design
In HubSpot, it is very easy to build something that looks neat in the portal but does not work in practice.
A pipeline can be beautifully structured but not match how the sales team actually sells.
A workflow can be clever but impossible for anyone to troubleshoot.
A lead scoring model can look impressive but be based on weak assumptions.
A data migration can technically import successfully while creating a mess nobody wants to own.
This is why I care so much about intelligent systems.
An intelligent system is not just one with AI features added to it.
It is a system designed around how people think, work, decide, and collaborate.
That includes:
- Clear processes.
- Useful data.
- Sensible automation.
- Human accountability.
- Reporting that supports decisions.
- AI used in the right places, for the right reasons.
AI can absolutely support this.
But it cannot compensate for a system that has not been properly designed.
If the process is unclear, AI will accelerate confusion.
If the data is messy, AI will make messy recommendations.
If ownership is vague, AI will not magically create accountability.
If the CRM does not reflect how the business actually works, adding AI will not fix the underlying problem.
It may just make the problem harder to see.
The best AI collaboration feels like better thinking
The cupboard project was small, but the lesson was not.
AI was useful because it helped me move through a practical problem in stages.
It helped me:
- Clarify the goal.
- Work within constraints.
- Compare options.
- Think through risks.
- Avoid overcomplicating the solution.
- Make a decision I felt confident enough to act on.
That is the kind of AI implementation I believe businesses should be aiming for.
Not AI for the sake of AI.
Not automation because it sounds efficient.
Not tools layered on top of broken processes.
But AI that helps people think better, design better, decide better, and build better systems.
That is much more interesting than simply asking AI to produce more output.
Because more output is not always the goal.
Sometimes the goal is better judgement.
A practical framework: how to use AI as a thinking partner
If you want to use AI more effectively, especially in a business or systems context, try using it less like a search box and more like a thinking partner.
Here is a simple framework.
1. Start with the real situation
Do not only describe the task. Describe the context.
Include what you are trying to achieve, what constraints exist, who is involved, what has already been tried, and what would make the answer genuinely useful.
2. Ask AI to identify trade-offs
Instead of asking, “What should I do?”, ask:
“What are the trade-offs between these options?”
This pushes the conversation beyond one-size-fits-all advice.
3. Ask what could go wrong
This is especially useful for workflows, automations, CRM design, migrations, and AI implementation.
A good system is not just designed for the happy path.
It is designed to handle reality.
4. Keep human judgement in the loop
AI can support the decision, but it should not own the decision.
You still need to decide what is appropriate, proportionate, ethical, and practical.
5. Test the answer against real life
Before implementing anything, ask:
“Will this actually work for the people who need to use it?”
That one question can prevent a lot of expensive mistakes.
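If it helps to operationalise the five steps above, they can be treated as a pre-flight checklist you run before acting on an AI-supported decision. This is just one way to phrase the questions, offered as a sketch:

```python
# A checklist mirroring the five-step framework above.
# The wording of each question is illustrative, not prescriptive.
THINKING_PARTNER_CHECKLIST = [
    ("Real situation",
     "Have I described the goal, constraints, people involved, and what has been tried?"),
    ("Trade-offs",
     "Have I asked for the trade-offs between options, not just 'what should I do?'"),
    ("Failure modes",
     "Have I asked what could go wrong beyond the happy path?"),
    ("Human judgement",
     "Is a person still deciding what is appropriate, proportionate, and ethical?"),
    ("Reality test",
     "Will this actually work for the people who need to use it?"),
]

def unanswered(answers):
    """Return the names of checklist steps not yet confirmed."""
    return [name for name, _ in THINKING_PARTNER_CHECKLIST
            if not answers.get(name)]

# Example: two steps confirmed, three still open before acting.
remaining = unanswered({"Real situation": True, "Trade-offs": True})
```

A list in code changes nothing by itself, but it makes the framework auditable: you can see, per decision, which questions were never asked.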
The takeaway
AI did not build my cupboard.
It did something more useful.
It helped me think through a messy, practical problem with more clarity.
And that is the point I think businesses need to understand.
AI is not valuable because it removes the need for human judgement.
It is valuable when it strengthens human judgement.
The best AI-supported systems are not the ones that replace people with tools.
They are the ones that help people make better decisions, with better context, inside better-designed processes.
That is where AI becomes genuinely useful.
Not as a shortcut around thinking.
As a partner in thinking.