
The $1,200 Return That Shouldn't Take Thirty Minutes to Process
An AI agent that evaluates return requests against policy, calculates refunds, and routes high-value cases for approval before your CS team finishes their first coffee.
Forty-Seven Returns Before Lunch
A Customer Service Manager at a 300-person online apparel retailer opens her queue on a Tuesday morning and counts forty-seven return requests waiting for decisions. Some are straightforward: a $45 accessory, tags still on, purchased last week. Others are not. A customer wants to return a $1,200 leather jacket for a fit issue, claiming the small runs like a medium in the shoulders. The item shipped eighteen days ago. The customer is a Gold-tier loyalty member with four previous orders and one prior return of the exact same product six months earlier.
That last detail changes everything.
The CS manager now has to pull up the return policy, confirm the request falls within the thirty-day window, check whether original packaging and proof of purchase are required, calculate the 15% restocking fee on a high-value item, and flag it for manager approval because it crosses the $200 escalation threshold. She needs to weigh the customer's loyalty tier against the repeat-return pattern on the same SKU. Then she has to draft two documents: an internal evaluation report for her team and a customer-facing refund confirmation. After that, she emails both parties, copies her manager, and notifies inventory and accounting so downstream systems stay in sync.
That's one return. She has forty-six more.
Processing a single return costs between 20% and 65% of the original item's value, according to industry benchmarks. On a $1,200 jacket, that's $240 to $780 in handling costs before the customer even ships the item back. Multiply that across dozens of daily requests, and the math gets ugly fast. A mid-size retailer processing forty returns a day at fifteen minutes each burns ten hours of senior CS time on decisions that feel routine but aren't.
The real cost isn't the time, though. It's the inconsistency. One analyst applies the restocking fee. Another waives it for Gold-tier customers because she remembers a Slack conversation from three months ago. A third misses the repeat-return flag entirely and auto-approves a pattern that should have been escalated. The policy exists. The problem is that applying it correctly to every request, every time, with full context from order history, loyalty data, and item condition requires a kind of sustained attention that humans aren't built for at volume.
Why a Rules Engine Won't Solve This
The obvious move is to automate the simple stuff. Build a rules engine: if the return is within thirty days and under $200, auto-approve. If it's over $200, route to a manager. Done.
Except it's never done.
Return authorization is the kind of process that looks deterministic from a distance but gets messy up close. A $1,200 return with a reason code of "fit issue" and a condition of "new with tags" seems like a clean approval. But the customer has returned this exact product before. The policy requires original packaging, and you won't know if that's true until the item arrives. The customer's comment says the sizing runs large (which, honestly, is the kind of feedback your product team should be hearing too). A rules engine can check the dollar amount and the calendar. It cannot read the return reason, cross-reference the customer's history across four orders, assess whether a repeat return on the same SKU is a red flag or just bad luck, and generate a nuanced evaluation that your manager can approve in thirty seconds instead of thirty minutes.
Return authorization processing is the act of evaluating a customer's return request against company policy, calculating the financial impact including fees and refund method, and routing edge cases for human approval. According to research from Opensend, 85% of shoppers expect refunds within one week, yet the average processing time stretches to 9.5 days. That gap is where customer loyalty erodes.
The same structural problem shows up outside ecommerce. A warranty claims coordinator at a 200-person electronics manufacturer faces an identical bind: each claim needs a policy lookup, a defect-versus-misuse judgment call, a parts cost calculation, and an escalation path for claims above a dollar threshold. The rules are different, but the shape of the work is the same. Simple automation handles the lookup. It chokes on the judgment.
A general-purpose chatbot can't help either. You could paste a return request into a chat window and ask for a recommendation, but it has no access to the customer's order history, no awareness of your restocking fee schedule, no ability to check whether the item's product category is even eligible for returns. You'd spend more time assembling the context than you'd save on the decision.
And then there are the edge cases that pile up. A customer submits a return on day twenty-nine. Within window, technically. But the order contains two items, one high-value and one under the threshold, and the customer only wants to return the expensive one. Your policy has rules for partial returns, but they're buried in a paragraph your newest CS rep has never read. Meanwhile, the customer's loyalty tier just upgraded from Silver to Gold between purchase and return, and nobody is sure which tier's benefits apply. These aren't hypothetical scenarios. They're Tuesday.
The real failure mode of simple automation isn't that it gets the easy cases wrong. It's that it can't tell the easy cases from the hard ones. A $50 return and a $1,200 return with a repeat-return pattern both arrive as rows in a queue. The judgment to treat them differently, to escalate one and approve the other, requires reading the full context every time. Rules engines don't read context. They match conditions.
The core tension in return authorization isn't complexity or volume alone. It's that every request needs both: rigid policy math and contextual judgment, applied together, at speed.
This is the problem lasa.ai solves for ecommerce and customer service teams: an AI agent that evaluates return requests against your exact policy, calculates refunds with restocking fees, and escalates the right cases for manager approval.
See what this looks like for your process →
What If the Decision Was Already Made When You Got to Your Desk?
Here's the shift. Instead of the CS manager working through forty-seven returns one at a time, pulling up order histories, checking policy documents, and drafting emails, the AI agent processes them as they arrive. Not by replacing the manager's judgment. By applying it consistently.
The agent reads each return request and does what the manager would do if she had unlimited time and perfect memory. It matches the request to the original order. It checks the return window. It evaluates whether the item, the amount, the customer's history, and the return reason align with policy, or whether something needs a human eye.
The distinction matters. This isn't a chatbot guessing at answers. The agent follows a defined, auditable process with specific steps, thresholds, and escalation rules baked in. Agent-level outcomes with workflow-level reliability. Every decision is traceable. Every calculation is reproducible. If an auditor asks why a $1,200 return was approved with a $180 restocking fee and routed through manager approval, the answer is documented from trigger to notification.
From Return Request to Refund Confirmation in Four Steps
Walk through what happens when a Gold-tier customer submits a return for a $1,200 leather jacket, eighteen days after purchase, with a reason code of "fit issue" and a condition of "new with tags."
Step one: the agent validates eligibility. It pulls the return request, matches it to order ORD-55021, confirms the order status is "Delivered," and checks the timestamp. Eighteen days against a thirty-day window. Within policy. It also flags that the item's product category is "Apparel," which is on the eligible list.
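The eligibility logic in step one can be sketched in a few lines. This is an illustrative sketch, not lasa.ai's actual implementation: the field names, the `validate_eligibility` function, and the eligible-category list are assumptions for the example; only the thirty-day window and the "Delivered" status check come from the walkthrough above.

```python
from datetime import date

# Assumed policy constants for this sketch; the 30-day window comes from
# the walkthrough, the category list is illustrative.
RETURN_WINDOW_DAYS = 30
ELIGIBLE_CATEGORIES = {"Apparel", "Accessories", "Footwear"}

def validate_eligibility(order, request_date):
    """Return (eligible, reasons) for a return request matched to an order."""
    reasons = []
    if order["status"] != "Delivered":
        reasons.append(f"order status is {order['status']}, not Delivered")
    days_elapsed = (request_date - order["shipped_on"]).days
    if days_elapsed > RETURN_WINDOW_DAYS:
        reasons.append(f"{days_elapsed} days exceeds the "
                       f"{RETURN_WINDOW_DAYS}-day window")
    if order["category"] not in ELIGIBLE_CATEGORIES:
        reasons.append(f"category {order['category']} is not return-eligible")
    return (not reasons, reasons)

# The jacket from the walkthrough: delivered, day eighteen, Apparel.
order = {"id": "ORD-55021", "status": "Delivered",
         "shipped_on": date(2024, 3, 1), "category": "Apparel"}
eligible, reasons = validate_eligibility(order, date(2024, 3, 19))
print(eligible)  # True: within window, delivered, eligible category
```

The point of the sketch is that this part really is deterministic; the judgment work starts in step two.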
Step two: it assesses condition and risk. This is where the AI earns its keep. The agent evaluates the item condition, checks whether original packaging and proof of purchase requirements are met, and reviews the customer's history. Four orders, Gold loyalty tier, one prior return of the same SKU six months earlier. It generates a risk assessment that weighs the repeat-return pattern against the customer's overall profile.
Step three: it calculates the refund. The requested amount is $1,200. The policy specifies a 15% restocking fee on all returns. That's $180. Net refund: $1,020, back to the original payment method. Because the amount exceeds the $200 high-value threshold, the agent flags this for manager escalation with a twenty-four-hour approval window.
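The refund math in step three reduces to a small, auditable calculation. A minimal sketch, assuming the 15% fee and $200 threshold stated above; the function and dictionary keys are illustrative, not a real API.

```python
# Policy constants from the walkthrough: 15% restocking fee on all
# returns, manager escalation above a $200 high-value threshold.
RESTOCKING_FEE_RATE = 0.15
ESCALATION_THRESHOLD = 200.00

def calculate_refund(requested_amount):
    """Compute the fee, net refund, and escalation flag for one return."""
    fee = round(requested_amount * RESTOCKING_FEE_RATE, 2)
    return {
        "restocking_fee": fee,
        "net_refund": round(requested_amount - fee, 2),
        "escalate": requested_amount > ESCALATION_THRESHOLD,
    }

result = calculate_refund(1200.00)
# {'restocking_fee': 180.0, 'net_refund': 1020.0, 'escalate': True}
```

Because every number flows from stated constants, the same calculation the manager sees in her approval queue can be reproduced line by line if an auditor asks.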
Step four: it generates everything. Two documents come out the other end. The evaluation report opens with a request summary table showing the return request ID, order ID, customer name, items, and requested amount. Then a policy check section: window status, condition assessment, packaging verification, proof of purchase status. Then the decision with both a customer-facing explanation and internal notes for the CS team. Then the refund calculation table breaking down the unit price, restocking fee percentage, fee amount, and net refund.
The second document is the customer's refund confirmation: their details, the order and refund breakdown, approval information, and next steps including the expected five-to-ten business day refund timeline and shipping instructions for the prepaid carrier label.
For a warranty claims analyst at an electronics manufacturer, the data shape shifts from SKUs and restocking fee percentages to part numbers, defect codes, and warranty coverage periods. But the output structure (request summary, policy check, decision rationale, financial calculation) looks the same. The pattern is universal.
Once the manager approves, the agent emails the customer the refund confirmation, copies the CS manager, and publishes the decision to inventory and accounting workflows so downstream teams can update stock levels and ledger entries without anyone sending a Slack message or filling out another form.
Here's what the manager actually sees in her approval queue: a one-page evaluation showing that request RET-1092 is for a Premium Leather Jacket, purchased eighteen days ago, condition "new with tags," customer has Gold loyalty status but a prior return of the same SKU flagged as a risk note. The restocking fee is already calculated. The customer-facing explanation is already drafted. The manager doesn't need to open three tabs and reconstruct the story. She reads, decides, clicks. The whole approval takes less time than reading this paragraph.

What Tuesday Looks Like When the Agent Runs on Monday
The CS manager still opens her queue on Tuesday morning. But instead of forty-seven decisions waiting for her, she finds forty-seven decisions already made. Thirty-nine standard returns, processed and confirmed. Five escalations sitting in her approval queue with full evaluation reports, risk assessments, and refund calculations already attached. She reviews each one in under a minute because the agent did the analysis. Three denials with documented reasons, customer notifications drafted and ready.
Her Tuesday used to be triage. Now it's oversight.
The downstream effects compound. Inventory gets restocking notifications the same day, not three days later when the CS team finally processes the backlog. Accounting has the refund amounts and methods before end of business. Customers get their confirmation emails within hours of submitting the request, not days. That 9.5-day average processing time? It collapses.
Teams that tighten their return authorization process often find the next bottleneck is upstream: understanding why returns happen in the first place. That's where something like a review sentiment analyzer comes in, catching declining product quality signals before they become a wave of fit-issue returns on the same SKU.
Whether you're processing returns at a 300-person online retailer, adjudicating warranty claims at a mid-size manufacturer, or handling credit memo requests at a wholesale distributor, the morning changes the same way. You stop spending senior time on decisions the policy already answers. You start spending it on the exceptions that actually need you.
And the consistency compounds over weeks. When every return gets the same policy treatment, you stop having the conversation where a customer calls back and says "but the other agent waived the restocking fee last time." The answer is in the evaluation report. The logic is documented. The fee was applied or waived for a stated reason, not because one analyst was in a better mood than another on a different Tuesday.
lasa.ai builds AI agents that handle return authorization, order exception routing, and claims processing end to end. If your CS team is buried in return decisions that follow a policy nobody has time to apply consistently, see what the agent looks like for your process.
If your team processes return requests that need policy evaluation, refund calculation, and escalation routing:
See what this looks like for your process →
Frequently Asked Questions
How long does it take an AI agent to process a return authorization request?
What happens when a return request falls outside the standard return window?
Can the agent handle restocking fees and different refund methods?
How does the agent decide which returns need manager approval?
Does the agent notify inventory and accounting teams automatically?
See What This Looks Like for Your Process
Let's discuss how lasa.ai can automate this for your team.