Hands-On IT – A Prompt Engineering Deep Dive with Henry Smith, E20


Landon Miles (00:00)
All right. Hello everyone, and welcome back to the Automox Hands-on IT podcast. We've got a really exciting show today. We're going to be talking about AI and prompt engineering, and we have a special guest with us. You may recognize him from the Patch Tuesday podcast, where he's a frequent host: Henry Smith, a security engineer at Automox. Henry, want to introduce yourself?

Henry Smith (00:28)
Sure. Yeah, my name is Henry, as Landon said — security engineer here at Automox. I always struggle with introductions because my role kind of changes all the time. But for the most part, I’m working on the security team and building out our security program to make it better every day.

Landon Miles (00:39)
Perfect.
That’s one of the best parts of working at Automox. Someone asks, “Hey, what do you do?” and you’re like, “Today was different than yesterday, and that was different than the day before.” I love it. I always joke that I explain complex things to people — and that seems to work.
So how long have you been at Automox? What’s your background? How did you get here?

Henry Smith (01:03)
Yeah — and I should clarify, I love it here. That’s not a complaint.

Landon Miles (01:22)
I always think it’s funny when people say that on podcasts. I’m going to start saying it too.

Henry Smith (01:27)
If my memory serves me right, it’s been over two years now. It’s been a wild ride — Automox is my first startup. Anyone who’s worked at a startup knows it comes with growing pains, but it’s also incredibly rewarding.
What was your other question?

Landon Miles (01:43)
How did you get here? What was your path — through college or straight into cybersecurity?

Henry Smith (02:07)
Every time someone asks me this, I talk too long — so I’ll try to keep it short.
I’ve always been a huge computer nerd. I got my first IT job right out of high school. I tried college, but it wasn’t for me. I don’t have a degree — I’ve just always been career-driven.
I worked for a couple of MSPs, and at one of them, I got bit by the “security bug,” as I like to call it. I took my first security course, got certified, and things took off from there. My first security job was actually at a previous company where I was a technical support engineer — they brought me back in as an entry-level security engineer. That’s where my security journey really started.

Landon Miles (03:10)
That’s great — and actually next month’s podcast is going to focus on career paths, and how college really isn’t for everyone. I tried it for a while too, dropped out, started working, and eventually went back to college — shout out to Arizona State! I’ve never actually been on campus — I did it all online.
Turns out, what worked for my brain was being able to fast-forward and rewind lectures. It only took me 11 years to graduate college, but hey — I did it!

Henry Smith (03:54)
Exactly.

Landon Miles (04:07)
My career advice: Go for what you’re good at, not what you think you should be doing.

Henry Smith (04:13)
And do what you love.
Also — networking. I honestly don’t think I’d be where I am without connections. I wouldn’t have gotten my first security job if I didn’t already know people at that company. Networking is so important.

Landon Miles (04:19)
Yeah, for sure.
We talk about this a lot — most of us are in IT or security not because computers are hard, but because they’re the easy part. The hard part is the intersection of people and tech.
Being able to talk to people without overwhelming them is key. Just saying, “Hey, we’ll get this fixed up,” is a lot better than rattling off registry keys. People skills and networking are critical.

Come back next month if you want to hear more — most of us at Automox didn’t follow a traditional career path. Whether it’s Ryan, who was in a rock band and got his degree in the back of a tour van, or Tom and Jason from the military — there’s no single right path into this industry.

Henry Smith (05:53)
Right. And I’m not saying don’t go to college — continuing education is valuable if it’s the right fit.

Landon Miles (05:55)
Exactly.
I’m not good at sitting still. The first time I went to college, I probably slept through most of my lectures. But if you go back as an adult and actually do the homework and listen? It’s a lot easier.

Landon Miles (06:31)
Anyway — back to AI.

Henry Smith (06:35)
To AI.

Landon Miles (06:41)
Prompt engineering.
I think of it like talking to kids — you speak differently to a child than you would to a colleague. The way we talk to AI systems matters. It shapes the output.
Henry, what are your thoughts on prompt engineering?

Henry Smith (07:13)
I agree.
Good prompt engineering is like the difference between working with a new hire and someone who's been on the job for years. When AI has context and training, it works better and faster.

Landon Miles (07:26)
Exactly.
It’s kind of like those Amelia Bedelia books — she’d take things literally. If you said “draw the curtains,” she’d sketch them instead of closing them. Prompting AI is similar. You need to give clear instructions, constraints, and context.

A lot of people say: treat AI like your junior developer — give it as much context as possible. Personally, I think it’s better to prompt as if you were being asked to do the task. What context would you want?

Some things it understands — like industry standards. But if it’s something unique to your team or project, it won’t automatically know what you mean. So treat it like another engineer and give it clear, relevant input.
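
To make that concrete, here's a rough sketch of the difference. The prompts below are invented for illustration; the file path and requirements are assumptions, not anything from the episode.

```text
# Vague prompt
Write a script to clean up old files.

# Context-rich prompt
You are helping with an internal Python 3.11 ops tool.
Write a function that deletes log files older than 30 days
from /var/log/myapp. Follow PEP 8, use pathlib, log every
deletion, and never follow symlinks. If anything is unclear,
ask before writing code.
```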

Henry Smith (09:19)
Exactly.
If you ask someone to do a task without context, they’ll either ask questions or do research. AI works the same way — it gets the job done faster and better with the right context.

Landon Miles (09:49)
Yeah.
This goes for anything — coding, writing, blogging, whatever. Knowing what you want the output to be, and giving that information upfront, makes all the difference.

Henry Smith (10:23)
I also recommend ending your instructions with something like, “If you have clarifying questions, ask.” That way, it won't make assumptions or misinterpret what you want.

Landon Miles (10:31)
Totally agree.
Some models, like OpenAI’s GPT-4 or Claude from Anthropic, are already good at asking questions back. But guiding them clearly is still critical. You can even get feedback on your prompts to improve them — just ask the AI how to make it better.
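
One low-effort way to do that is to wrap your draft prompt in a request for critique. A minimal template, with invented wording:

```text
Here is a prompt I plan to send you:

"<your draft prompt>"

Before answering it, tell me what context is missing and how you
would rewrite it to get a better result.
```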


Henry Smith (12:18)
I could talk about this all day.
Yes — I’ve used AI to explain code sections I didn’t understand. If I haven’t seen a codebase before, I’ll ask: “What is this doing?” or “How do these pieces talk to each other?” It’s incredibly helpful to break things down that way.

Landon Miles (12:50)
Yeah, I’ve been using Claude Code from Anthropic a lot for this. It can scan a codebase, follow references, and give a high-level understanding. Super useful.
The next step for me is usually generating, auditing, or hardening code. And I know you’ve been experimenting with that a lot. Want to dive into that?

Henry Smith (13:51)
Yes — and again, it all goes back to context.
I had Claude generate a backend microservice with only generic instructions. The results were okay, but they didn’t align with Automox’s coding standards.
Then I started fresh and uploaded a markdown file with detailed information about our standards, conventions, and security practices. I told Claude to use that file for context — and the improvement in the output was dramatic.
Same goes for PR reviews. If you don’t give it context, it’s just guessing. But with the right background, it becomes a powerful assistant.
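
As a sketch of what such a context file might contain: every rule below is a placeholder, since Automox's actual standards weren't shared on the show.

```markdown
# Coding standards: project context for the AI assistant
Every rule here is illustrative; substitute your team's real standards.

## Conventions
- Services follow the internal hexagonal layout
- All HTTP handlers return structured JSON errors
- New code ships with unit tests

## Security practices
- Validate and sanitize all user input at the service boundary
- Use parameterized queries only; never build SQL from strings
- Pull secrets from the secrets manager, never hard-code them
```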

Landon Miles (16:00)
Yeah — it’s like asking your mom to review code versus asking an experienced developer at your company. Totally different results.
I often think: “Put your brain into a markdown file.” Include everything you know that’s relevant to the project and share that with the model.
Lately, I’ve been asking AI to make working-but-messy code better — optimize it, batch database writes, reduce runtime. That’s been a huge productivity boost.

Henry Smith (18:14)
They go hand in hand.
Claude Code has this /init command that creates an AI agent instruction file based on your repo. It scans your code, Git logs, and so on to build that context.
I add directives to that file — short bullet points like, “You must validate all user input,” or “Ensure proper access control.”
Claude uses those directives during PR reviews to flag issues. You can even classify directives as “must” or “should,” and it’ll highlight critical issues accordingly.
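
For reference, Claude Code's /init writes its instructions to a CLAUDE.md file at the repo root. The directive style Henry describes might look roughly like this; the specific bullets are illustrative, not his actual file.

```markdown
## Review directives
- MUST: validate all user input before it reaches business logic
- MUST: enforce access-control checks on every endpoint
- SHOULD: keep new functions under 50 lines
- SHOULD: add tests for any changed behavior

When reviewing a PR, treat MUST violations as blocking findings and
SHOULD violations as suggestions.
```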

Landon Miles (20:04)
Yeah, I’ve asked it to do security audits. It’ll flag input validation issues, suggest SQL injection fixes, and more.
And sometimes I’ll say, “Why isn’t this working?” and it’ll go, “Because of X,” and I’m like — “Wait, you wrote this code, Claude!”
Still, it’s great to have that quick review tool. It won’t replace humans, but it makes reviews more thorough.
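
For instance, the classic fix an AI reviewer suggests for SQL injection is switching from string-built queries to parameterized ones. A minimal Python sketch, using sqlite3 and an invented users table:

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical database

def find_user_unsafe(username: str):
    # Vulnerable: user input is interpolated straight into the SQL,
    # so an input like "x' OR '1'='1" changes the query itself
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(username: str):
    # Fixed: the driver binds the value, so input can never become SQL
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchone()
```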

Henry Smith (21:52)
Exactly. I’m not saying eliminate manual reviews. But AI should absolutely be leveraged to enhance them.

Landon Miles (22:39)
Yeah. There's “vibe coding” — where you just hit “Send It” and hope — and then there's assisted coding, where AI helps you become more productive. That’s what we’re advocating for.
Use it to review, improve, and accelerate — not replace your judgment.

Henry Smith (23:03)
Exactly. Context is everything. If you want good results, provide context — it’s that simple.

Landon Miles (23:25)
Yes. Go experiment. Try writing a bad prompt and compare it to a good one. The difference is night and day.

Henry Smith (23:47)
I actually have a perfect example of that — maybe we can include it in a blog post.
I once prompted AI to make an ice cream cone. With no context, it made a disaster. With a better prompt, it created something beautiful.

Landon Miles (24:20)
I love that.
And it applies to everything — whether you’re generating code, images, or blog posts. Know what you want before you start.

Henry Smith (24:38)
Also — good prompts save money. If AI has to guess, you waste tokens. Give it what it needs upfront.

Landon Miles (24:44)
Absolutely.
That leads us into another important topic — the context window. These models can only hold so much memory. You can’t dump a 10-million-line codebase and say, “Fix it.”
Focus on smaller sections, and be mindful of context limits. Claude even shows how much of the context window is left before it starts compacting older parts of the conversation.
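
In practice, that means scoping the request. A before-and-after, with invented file names:

```text
Instead of: "Here's the whole repo, fix the performance problems."

Try: "Here is src/report/export.py, about 200 lines. The
generate_csv function is slow on large inputs. Optimize only that
function and explain each change; don't touch the rest of the file."
```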

Henry Smith (26:36)
Right. That’s why centralized, reusable AI instruction files are so helpful.
When I end a session, I ask Claude to write a context.md file summarizing everything we did. Then I can load that file in the future and pick up where I left off — saves tons of time and effort.
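
The wrap-up prompt might look something like this; the wording is invented, but the pattern is what Henry describes.

```text
Before we finish, write a context.md that summarizes this session:
what we changed and why, the decisions we made, and any open TODOs,
so a future session can load this file and pick up where we left off.
```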

Landon Miles (27:36)
Yeah, and if you’re just starting out, try building a custom GPT in ChatGPT. You can pre-load your instructions and play with how it behaves.
That’s a great way to experiment without diving into command line tools.
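
If you go that route, the Instructions field of a custom GPT is essentially a standing prompt. A toy example, entirely made up:

```text
You are a patching assistant for a small IT team.
Audience: sysadmins who know Windows but not PowerShell internals.
Always explain what a script does before showing it, and call out
anything that needs a reboot or elevated rights.
Never suggest disabling security controls to work around an error.
```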

Henry Smith (28:47)
Agreed. And honestly — if you’re not experimenting with AI yet, you should start. It’s growing fast.

Landon Miles (28:55)
Yeah. It’s fun, too.
Like the internet — people joked it was a fad. But here we are.

Henry Smith (29:06)
The velocity is nonstop. A new model comes out, does more than the last, and the cycle just keeps going.

Landon Miles (29:12)
Totally.
Don't get caught up in the hype, but don't dismiss AI because of it either. It can be incredibly useful once you find your use case.

Henry Smith (30:05)
Right. And you don’t need the $200/month plan. Even the $20/month ChatGPT Plus lets you build and test custom GPTs.

Landon Miles (30:28)
Exactly.
I’ve even asked AI to help with recipes: “Here’s what’s in my fridge — what can I make?” And it turned out great.
The fun is in experimenting and seeing what it can do.

Henry Smith (30:39)
Hands-on learning is my go-to. I have to push buttons, break things, and see what happens.

Landon Miles (31:00)
Same here. Push all the buttons — then see what worked.

Henry Smith (31:05)
And sometimes get burned. That’s how you learn.

Landon Miles (31:09)
Exactly.
Anyway, Henry wrote a great blog this week on a Splunk integration with Automox — definitely check that out.
Before we go, anything else you want to share?

Henry Smith (31:45)
Yeah — we started a working group here at Automox to focus on safe and easy AI adoption. We want developers to use these tools effectively and securely. That’s been my focus lately.

Landon Miles (32:18)
That’s what’s fascinating to me — seeing how other people use AI. You learn so much by sharing use cases.
Talk to others. Experiment. And just keep exploring.
Thanks for joining us, Henry. You’ll also catch him on the Patch Tuesday podcast. I’m Landon Miles, and this is Hands-on IT. See you next time.

Henry Smith (33:20)
Thanks for having me.
