
AI Is Not Magic. Here’s How I Use It to Create Real Value.

[Header image: a human hand fist-bumps a robot hand]

Rodolfo Iglesias

Jan 26, 2026

In my daily work, I treat AI as a mirror and amplifier of my human and professional experience. In this article, I walk through a few prompts I apply frequently and how they accelerate my productivity.

...for starters, I used GenAI to generate every draft of this blog post.


I’ll pause for a moment while a few readers quietly think “ah, more AI slop” and consider closing the tab.


If you’re still here: thank you, and please keep reading, because that reaction is exactly the point of this post, or rather, the reaction I’d like to add nuance to.


The version you’re reading now is written in my own human voice. It has been edited, reshaped, challenged, rewritten, and occasionally argued with. Rather than doing the thinking behind the words, GenAI gave me something to push against and redirected where my writing effort was applied.


In my current experience, AI behaves less like a tool and more like a coworker: sometimes helpful, sometimes confidently wrong, occasionally frustrating, and consistently good at narrowing gaps in my own thinking. This dynamic isn’t novel. What defines humans as a species isn’t raw intelligence, but our ability to collaborate with systems larger than ourselves—languages, stories, institutions, tools. AI is simply the latest and most intimate of those systems. It mirrors our strengths, scales our habits, and exposes how much of what we call “thinking” is pattern recognition in a fancy suit. Uncomfortable, perhaps—but clarifying.


In my own work, I treat AI as exactly that kind of mirror and amplifier: it supports the parts of my work that benefit from speed and iteration, while leaving judgment, synthesis, and accountability firmly on my shoulders. Let me walk through a few concrete examples.


GenAI: my "research assistant"

or, learning faster without learning shallow


A recurring part of my work is ramping up on new technical or conceptual topics, often under time pressure, and usually at the intersection of multiple domains. GenAI has become a reliable learning accelerator in those moments, helping me explore unfamiliar territory, challenge my understanding, and connect dots across disciplines: the applicability of a new Python library, a networking concept like SASE, or how adult learning theory actually holds up inside a real organization, beyond the principles delivered by an online course or wiki. The value isn’t in the first prompt response; it’s in the follow-up questions it enables me to ask sooner.


For example, I would use this prompt:

Explain the fundamentals of SASE (i.e. Secure Access Service Edge) to someone with a background in network observability and customer education. Then outline where training programs tend to oversimplify SASE adoption in real enterprise environments. Provide reliable sources for your explainers and helpful reading to expand on each topic.

Here, I am looking to bridge my own background to new, targeted topics—something I can’t expect from any single piece of online content. The main value comes from personalizing the learning experience to the task at hand. I also instruct the LLM where I want to direct my research next: toward reliable sources that can help me expand into the new field.


GenAI / Agentic AI: my "project manager"

or, turning ambiguity into something actionable


Consulting project work is full of half-formed goals, moving constraints, and competing priorities. GenAI—and increasingly, agentic AI—helps me sculpt that ambiguity into a working schedule.


For example, I would use this prompt:

Wear multiple hats for this request: senior project manager, marketing communications expert, and lead Python developer.

Goal: within 60 days, design and implement a small Python-based automation that helps generate marketable consulting content faster. The system should:
- ingest source material (blog drafts, notes, PDFs, past LinkedIn posts)
- extract key themes and reusable insights
- propose content variants (blog sections, LinkedIn posts, short blurbs)
- keep human review and editing explicitly in the loop

Break this into phases, identify key risks, suggest a realistic scope for a solo consultant, and flag where I should avoid over-investing effort early.

A real, present objective of my consulting business is building content discipline: posting regularly to grow exposure while minimizing added weekly effort. A prompt like this helps ground a nebulous goal into something achievable and strategically useful.


This is also where the line between “get advice” and “get support” starts to blur, to good effect. My current work-in-progress solution is an LLM-powered content curator I can feed messy inputs (articles, PDFs, podcasts); it then parses, indexes, and prioritizes ideas based on my voice and preferences. Ultimately, I want my “curator agent” to email me a weekly brief of the top 3 post ideas, with working titles, structure, and source material. Stay tuned for more on this side project!
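
To make the shape of that pipeline concrete, here is a rough Python skeleton. Every helper below is a hypothetical stub standing in for a component I'm still building; nothing here is final code.

# Hypothetical skeleton of the curator pipeline described above.
# Each helper is a placeholder stub, not a finished component.

def ingest(source: str) -> str:
    """Stub: load raw text from an article, PDF, or podcast transcript."""
    return f"raw text of {source}"

def index_ideas(documents: list[str]) -> list[dict]:
    """Stub: extract candidate post ideas and score them for ranking."""
    return [{"idea": doc[:40], "score": 0.0} for doc in documents]

def prioritize(ideas: list[dict]) -> list[dict]:
    """Stub: rank ideas against my voice and content preferences."""
    return sorted(ideas, key=lambda idea: idea["score"], reverse=True)

def weekly_brief(sources: list[str], top_n: int = 3) -> list[dict]:
    """Glue the stages together: ingest -> index -> prioritize -> brief."""
    documents = [ingest(s) for s in sources]
    return prioritize(index_ideas(documents))[:top_n]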


GenAI: my "junior software developer"

or, lowering the cost of experimentation


I looove coding custom solutions and small apps to make my life easier. Here, aside from generating the initial code blocks themselves, AI provides a big push in two distinct but complementary ways: as an in-editor collaborator, and as a runtime capability embedded directly into the systems I build.


a. In-editor support: refactoring, debugging, expansion

Used inside a code editor (for example via VSCode chat), GenAI is most useful when treated as a fast, context-aware second set of eyes. I use it to refactor messy sections, reason through bugs, or explore small functional extensions without breaking flow.


For example, I would prompt:

This is a Python module that has grown organically and is becoming hard to maintain.
Refactor it for clarity and testability, keeping behavior unchanged.
Add inline logging in updated code blocks to facilitate debugging later.
Call out any assumptions you’re making and flag edge cases I should review manually.

Bloated code gets under my skin easily, and it can be a huge time sink to clean up. With this prompt, I accelerate the process substantially, surfacing update risks faster and freeing up attention for architecture and solution intent.
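
To show the kind of output I'm after, here is a toy before/after of my own (an illustration, not actual model output): behavior stays the same, but the refactor adds the inline logging I asked for.

import logging

logger = logging.getLogger(__name__)

# Before: one dense, silent line.
# def parse_price(raw): return float(raw.replace("$", "").strip() or 0)

def parse_price(raw: str) -> float:
    """Parse a price string like '$19.99' into a float, defaulting to 0.0."""
    cleaned = raw.replace("$", "").strip()
    if not cleaned:
        logger.warning("parse_price got an empty value; defaulting to 0.0")
        return 0.0
    logger.debug("parse_price parsed %r", cleaned)
    return float(cleaned)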


b. Code efficiency via LLM API integration

Embedding LLMs directly into applications helps handle tasks that would otherwise require brittle logic or extensive edge-case handling. By integrating with commercial APIs (OpenAI and Claude are strong options as of this writing), I can use natural language to simplify summarization, classification, or data transformation.


For example, the output of this LLM API prompt could potentially replace hundreds of lines of code:

Convert the referenced PDF chunks into a structured JSON object with:
- title
- main topics
- intended audience
- key takeaways (max 5)

If the content is ambiguous or low-confidence, indicate that explicitly.
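
As a minimal sketch of that kind of integration (assuming the official OpenAI Python SDK and an API key in the environment; pdf_chunks is a hypothetical list of pre-extracted text segments), the whole transformation reduces to a single call:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_chunks(pdf_chunks: list[str]) -> dict:
    """Turn raw PDF text chunks into the structured JSON described above."""
    prompt = (
        "Convert the referenced PDF chunks into a structured JSON object with: "
        "title, main topics, intended audience, key takeaways (max 5). "
        "If the content is ambiguous or low-confidence, indicate that explicitly.\n\n"
        + "\n---\n".join(pdf_chunks)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption; any JSON-capable model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request well-formed JSON back
    )
    return json.loads(response.choices[0].message.content)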

GenAI: my "communications intern"

or, drafting with intent instead of staring at a blank page


Writing is now an unavoidable part of my work in ways it wasn’t before. Earlier in my career, my role typically started after contracts were signed: delivery, execution, outcomes. As an independent consultant, that boundary disappeared. Proposals, pre-sales emails, positioning, and first impressions are now part of the job. In areas where I lack mileage (and sometimes even where I don’t), GenAI produces competent first drafts I can confidently refine and sign off on.


As examples, I would prompt the following:

Draft a first-pass consulting proposal section for an instructional video production project.
Audience: Director-level stakeholder with limited time.
Focus: outcomes, scope clarity, and collaboration model.
Tone: confident, pragmatic, not salesy.
Length: ~400 words.
Deliverable: 3 instructional videos as per client specifications, with publishing guides.
Assume I will revise heavily.

Write a concise introductory email to a potential consulting client.
Audience: technical leader evaluating external help.
Focus: credibility, clarity of value, and invitation to a short conversation.
Tone: professional, human, low-pressure.
Length: 2–3 short paragraphs.

Draft an initial LinkedIn post on the topic of AI as a consulting multiplier.
Audience: technical and education leaders.
Themes: human judgment, AI as collaborator, practical use over hype.
Focus: thoughtful, slightly provocative, experience-based.
Length: ~250 words.

Across all examples, my GenAI “intern writer” delivers the same value: I’m no longer starting from zero, and the “blank page jitters” melt away as I iterate and improve on my drafts.


A few general recommendations (and a clarification)


Before wrapping up, one clarification: while many examples above touch on instructional design, customer success, or content development, none of this is meant to be limited to those fields. I do plan to follow up with a deeper dive into AI-assisted instructional content generation in an upcoming post, but the patterns here are intentionally general and applicable to almost any professional role.


To close, a few practical recommendations, distilled from experience:


  1. Save your best prompts: Treat your prompts as assets. Build a small, living library of the prompts that consistently produce useful results, along with notes on when and why you use them. Over time, this becomes a personal playbook.

  2. Get comfortable with structured language: Prompts with clear sections (e.g. goal, audience, format, intent, constraints) tend to produce more predictable and reusable results. For your most valuable workflows, consider moving your prompts beyond prose into lightweight structured formats like XML or JSON (see the sketch after this list).

  3. Use more than one model: Every LLM has strengths and blind spots. Even if you settle on a primary model (and you probably should), periodically testing others—OpenAI, Claude, Grok, or newcomers—keeps your mental model fresh and your workflows adaptable.

  4. Design for iteration, not perfection: The most effective AI workflows assume the first output will be wrong—or at least incomplete. Plan explicitly for revision, human checkpoints, and evolution over time. This mindset matters more than any single prompt.
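
On point 2, here is a minimal sketch of what "structured" can mean in practice: the earlier proposal prompt expressed as a Python dict and serialized to JSON before being sent to a model. The field names are my own convention, not a standard any model requires.

import json

# My lightweight convention for reusable prompts; the field names are
# illustrative, not a format any particular model mandates.
proposal_prompt = {
    "goal": "Draft a first-pass consulting proposal section",
    "audience": "Director-level stakeholder with limited time",
    "focus": ["outcomes", "scope clarity", "collaboration model"],
    "tone": "confident, pragmatic, not salesy",
    "constraints": {"length_words": 400, "revision": "I will revise heavily"},
}

print(json.dumps(proposal_prompt, indent=2))  # paste this into the model of your choice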


Closing

If you’re exploring how AI could meaningfully support your work—whether that’s accelerating learning, clarifying strategy, improving technical workflows, or strengthening how you communicate value—I help organizations and individuals do exactly that, without hype and without losing human judgment along the way. If this post resonated with you, and you’re curious what a thoughtful, grounded AI collaboration could look like in your context, I’d be glad to start a conversation.
