AI at Work: What Changes, What Stays Human
START HERE
Opening move
Lightning start: what is one workflow you’ve already changed with AI, in one sentence?
Set the rule: 60–90 seconds each, no vendor names, and I’ll interrupt to rotate so we get breadth and decisions.
MODERATOR FRAME
A tightly moderated 50-minute panel that gets past AI hype to what actually changes in day-to-day work, what must stay human, and what leaders can decide and do next week.
AI copilots are already creeping into routing, triage, and reporting, while employees want clarity and leaders feel productivity pressure; the gap is practical rollout choices.
Full Prep Brief
Room snapshot, context, supporting notes
Keep answers short, rotate speakers on purpose, use contrast questions to surface tradeoffs, and end every segment with a decision rule or next action.
WHAT SHAPED THIS BRIEF
Moderator control is a priority
The host explicitly wants concise answers, deliberate rotation, and contrast questions to avoid hype, legal lectures, and vendor pitches.
Concrete-workflow requirement
Each panelist is expected to name one changed workflow and where leaders over-trust polished output.
Safety boundary: surveillance + legal advice
Avoid employee surveillance language and avoid jurisdiction-specific legal advice; keep risk guidance principle-based.
Audience needs next-week decisions
People leaders and operators want actionable decision rules and rollout steps they can take next week.
Built-in productive tension
Throughput/measurement vs trust/role design vs auditability/documentation is the core triangle to surface with contrast prompts.
WHAT TO KNOW BEFORE YOU ASK
Briefing cues, not source review
Anchor the room in concrete workflows, measurable tradeoffs, and clear rollout mechanics: scope, metrics, comms, review gates, and documentation (a minimal sketch of this rollout pack follows the checklist below).
- Get one concrete workflow change from each panelist early (90 seconds each).
- Do not accept time-saved claims without asking about accuracy, rework, exceptions, and downstream impact.
- Name specific decisions that must remain human (fairness, safety, terminations, disciplinary actions, high-stakes approvals).
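If it helps to visualize the rollout mechanics, here is a minimal sketch of that rollout pack as a single structure. It is illustrative only; every field name and value is an assumption for this brief, not a standard or any product’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class PilotRolloutPack:
    """Hypothetical minimum-viable rollout pack for one AI pilot.

    Every field name here is an assumption for illustration; adapt to
    your own templates.
    """
    workflow: str              # the one named workflow in scope
    metric: str                # the primary metric, tracked against a baseline
    baseline: float            # pre-pilot value of that metric
    rollback_threshold: float  # value that pauses or rolls back the pilot
    comms_owner: str           # who tells employees, and when, before launch
    review_gate: str           # the human step that catches polished-but-wrong output
    audit_fields: list[str] = field(default_factory=lambda: [
        "input_ref", "output_ref", "reviewer", "decision", "timestamp",
    ])

# Example: a support-triage pilot scoped to one queue.
pack = PilotRolloutPack(
    workflow="support ticket triage (tier-1 queue only)",
    metric="misroute rate",
    baseline=0.08,
    rollback_threshold=0.12,
    comms_owner="team manager, one week before launch",
    review_gate="agent confirms routing before a ticket is assigned",
)
```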
IF TIME IS TIGHT
Protect the strongest thread first
Must cover
- Lightning workflow round: one workflow changed + metric + surprise outcome (all three panelists)
- Over-trust of polished outputs: where it fails and what gate catches it
- Employee communication before rollout: what to say and what not to imply
- Minimal documentation and audit trail before scaling (principle-based)
- Next-week actions: one concrete move per panelist
Optional if it opens up
- Vendor/demo evaluation red flags and how to run a realistic pilot test
- What stays human: explicit boundaries and escalation rules
- Audience Q&A with strict rotation and concise answers
Cut if short
- Long definitions of AI/LLMs
- Deep dives into specific regulations or jurisdictions
- Speculation about long-term job futures
Human Story Thread
Use this if you want a warmer opening.
Keep personal journeys minimal; use only quick ‘what surprised you in practice’ moments to humanize without drifting.
- A real-world surprise in frontline adoption
- A trust moment: what employees feared versus what was true
- A miss that forced a process change
Opening question
What did you expect would be the hard part, and what was actually the hard part?
Follow-ups
- What did you change in the rollout after that?
- What did you communicate differently the second time?
Conversation Plan
Questions and flow for your conversation
Enforce 90-second answers; extract workflow, metric, and surprise; summarize in one line after each speaker.
A strong answer includes
A named workflow, a metric with a baseline comparison, and a real-world failure mode or exception.
Ask for example
What did you have to change in the process, not the tool, to make it work?
Safe pivot
Let me stop you there and pull out the three things we need: the workflow, the metric, and the surprise.
Transition
Use the strongest ‘surprise’ to set up the first contrast tradeoff.
If short on time
Skip any backstory about tool selection.
Follow-up ladder
- What part of that workflow stayed stubbornly human?
- What did you have to change in the process, not the tool, to make it work?
- What would make you pause or roll it back in a week?
Transition block - no questions
Run paired contrasts (Dante vs Priya, then Marcus as boundary-setter); keep rebuttals to 30 seconds.
A strong answer includes
A clear tradeoff statement, a review gate, and a policy/people safeguard that doesn’t kill speed.
Ask for example
What’s the smallest ‘trust safeguard’ that doesn’t kill speed?
Safe pivot
I’m going to translate both of you into a single decision rule the audience can use next week.
Transition
Lock in what stays human with a boundary question to all three.
If short on time
Drop second contrast if the first runs long.
Follow-up ladder
- Where do you actually agree on the minimum standard?
- What’s the smallest ‘trust safeguard’ that doesn’t kill speed?
- If a team refuses that safeguard, do you still run the pilot?
Transition block - no questions
Turn principles into a checklist; ask Marcus for minimum docs, Priya for manager script elements, Dante for measurement cadence.
A strong answer includes
A concrete artifact: checklist, template, weekly metric review, escalation threshold, audit trail element.
Ask for example
What do you log so you can audit and learn from misses?
Safe pivot
If the gate is just ‘be careful,’ we don’t have a gate yet—what’s the actual step in the workflow?
Transition
Open to audience Qs with a ‘one question, one answer, one action’ rule.
If short on time
Condense into one ‘minimum viable rollout pack’ summary.
Follow-up ladder
- Who is the reviewer in the real workflow?
- What do you log so you can audit and learn from misses? (a sample audit record follows this list)
- What’s your threshold for turning it off or restricting use?
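For the audit question above, here is a minimal sketch of what one logged record could contain. Nothing here assumes a particular tool; every field name is a placeholder, and the record reviews the work output rather than monitoring individuals.

```python
import json
from datetime import datetime, timezone

def log_ai_assisted_decision(log_file, *, workflow, input_ref, output_ref,
                             reviewer, decision, correction=None):
    """Append one audit record per reviewed AI output (field names are
    placeholders, not any product's schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,      # e.g. "support triage"
        "input_ref": input_ref,    # pointer to the source item, not its content
        "output_ref": output_ref,  # pointer to the AI draft that was reviewed
        "reviewer": reviewer,      # the accountable human, not a monitoring target
        "decision": decision,      # "accepted" | "edited" | "rejected"
        "correction": correction,  # what changed, if edited
    }
    log_file.write(json.dumps(record) + "\n")  # one JSON line per decision
```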
Transition block - no questions
Select questions that ask for actions/metrics; assign first responder and then one add-on from a second panelist.
A strong answer includes
One concrete action or metric per question; if a question drifts into legal advice or surveillance, pull back to principles and safer framing.
Ask for example
What’s the decision gate where leadership must sign off?
Safe pivot
Let’s keep it to three artifacts total—if it’s longer, it won’t happen.
Transition
Transition to next-week actions round.
If short on time
Limit to 3 questions.
Follow-up ladder
- What’s the fastest way to produce that artifact in a week?
- What’s the decision gate where leadership must sign off?
- What’s the one artifact you see teams skip that later bites them?
Transition block - no questions
Get one action per panelist in 20 seconds; close with a 3-bullet recap.
A strong answer includes
A specific action with an owner and an artifact (metric, script, doc, gate).
Ask for example
How do you measure whether that step worked in two weeks?
Safe pivot
I’m going to pause there—can you turn that into one step and one measure?
Transition
Thank panel; invite hallway follow-ups.
If short on time
Skip recap and just do actions.
Follow-up ladder
- What’s the first step on Monday?
- How do you measure whether that step worked in two weeks?
- What’s the failure mode if you do this poorly?
Transition block - no questions
ADDITIONAL QUESTIONS
“Dante, Priya, Marcus: name one workflow you’ve seen change with AI, the one metric you watch, and one thing that surprised you in practice.”
“Dante, if you optimize for throughput first, what do you do that Priya would call risky? Priya, what do you insist on that Dante would call slow?”
“Give me one place polished AI output looks right but is wrong, and the specific gate that catches it before harm.”
“What are the three artifacts you’d require before a pilot scales: one metric, one communication element, and one documentation/audit element?”
“I’m going to assign a first responder, then one add-on—keep it to one action you’d take next week.”
“One move to take next week: Dante gives a metric to add, Priya gives a comms line managers can use, Marcus gives a documentation/control item.”
Supporting Context
Participants
Nora Iqbal
Moderator
Role in this conversation: keep the room tactical, paced, and useful for people leaders, operations executives, product leaders, and policy-minded founders.
Dante Ruiz
Panelist
Dante has deployed AI copilots in routing, support triage, and internal reporting. He will push for practical measurement. Perspective: Operations leader focused on throughput, process quality, and frontline adoption.
Priya Shah
Panelist
Priya will resist productivity-only framing and bring the conversation back to clarity, consent, and role redesign. Perspective: People leader focused on trust, job design, manager behavior, and employee communication.
Marcus Bell
Panelist
Marcus can explain policy boundaries, bias risk, auditability, and what leaders should document before rollout. Perspective: Legal advisor who can translate risk without freezing the room.
Room Dynamics
- Need for narrowly scoped pilots with clear metrics
- Importance of human review where judgement, fairness, or safety matter
- Agreement that employee communication and consent are essential
- Value of basic documentation and audit trails before scaling
- Throughput (Dante) vs. trust and role clarity (Priya)
Conversation Arc
Lightning workflow round (90 seconds each)
Ground the panel in reality fast and prevent hype or abstraction.
Contrast: speed versus trust
Surface the real tradeoffs leaders face and force a decision rule.
Polished output trap: show me the gate
Prevent over-trust in confident outputs by forcing concrete controls and measurement.
Minimum viable rollout pack (people + docs + metrics)
Turn principles into a short checklist leaders can implement without a big program.
Audience Q rotation control
Keep Q&A practical, balanced, and safe in a public room.
Watchouts
Employee surveillance framing
In a public room, surveillance framing can trigger fear and derail trust; it also risks sounding like an endorsement of monitoring.
Safer: Focus on transparency, purpose-limited measurement, and reviewing the work output rather than monitoring individuals.
Jurisdiction-specific legal advice
Risky and unhelpful for a mixed audience; can turn into a legal lecture.
Safer: Ask for principles, common documentation patterns, and when to involve counsel for their context.
Vendor pitch dynamics
The audience will tune out and the room will lose credibility.
Safer: Ban vendor names; focus on workflow fit, test design, and measurable outcomes.
Job displacement panic
Can hijack the room away from implementable choices.
Safer: Talk role redesign, task shift, and safeguards; keep it to what leaders can do in the next 2–6 weeks.
Deeper Context Notes
Key terms
- Copilot: An AI assistant that suggests or drafts work but does not make final decisions on its own.
- Human-in-the-loop: A person reviews, can correct, and remains accountable before AI output takes effect (illustrated in the sketch after this list).
- Output polish: AI outputs that sound confident and clean even when they’re wrong or incomplete.
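To make the human-in-the-loop term concrete, here is a minimal sketch of a review gate. `ReviewVerdict`, `review_fn`, and `human_in_the_loop` are invented names for illustration, not any library’s API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ReviewVerdict:
    accepted: bool
    final_output: Optional[str] = None  # may differ from the polished draft

def human_in_the_loop(ai_draft: str,
                      review_fn: Callable[[str], ReviewVerdict]) -> Optional[str]:
    """Minimal review gate: the AI drafts, a person decides.

    `review_fn` stands in for whatever surface the reviewer uses; nothing
    takes effect until an accountable human accepts or corrects the draft.
    """
    verdict = review_fn(ai_draft)
    if verdict.accepted:
        return verdict.final_output  # the human-approved version takes effect
    return None                      # rejected: fall back to the manual path
```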
Angle coverage
Operational measurement
Prevents ‘we shipped AI’ from masking error, rework, and exceptions.
Ask toward: Ask for the exact metrics, how they’re captured, and what threshold triggers rollback (a toy threshold rule follows this section).
Trust and communication
Adoption fails when people feel surprised, monitored, or de-skilled.
Ask toward: Ask for the manager script, what employees can say no to, and how feedback changes the pilot.
Legal/auditability/documentation
Leaders need defensible controls without freezing progress.
Ask toward: Ask for a minimal checklist of documentation and review gates before scaling.
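For the rollback-threshold question under operational measurement, here is a toy decision rule. The metric, baseline, and 25% tolerance are assumptions to be set per workflow; the point is that the threshold exists, in writing, before launch.

```python
def should_roll_back(error_rate: float, baseline: float,
                     tolerance: float = 0.25) -> bool:
    """Toy rollback rule: pause the pilot if the watched metric degrades
    more than `tolerance` (25% here) beyond its pre-pilot baseline."""
    return error_rate > baseline * (1 + tolerance)

# Baseline misroute rate of 8% means the rule trips above 10%.
assert should_roll_back(0.09, 0.08) is False  # within tolerance, keep running
assert should_roll_back(0.11, 0.08) is True   # degraded, pause for review
```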