The $28M Mistake
What the Memphis Deployment Teaches Every Operator — And Every Business Owner in Its Path
The letter was three paragraphs. Victor stopped at the second.
Thirty-eight months had passed since his fund authorized the $28 million deployment. The Memphis facility had been the deal he felt best about that year, a well-run distribution center, a credible operator, a vendor with a live reference list and a proposal that modeled an 18-month payback on an IRR his investment committee approved without a single request for revision. The pre-sales engineer had walked them through the model twice. It held together. Every number followed from the assumptions.
The Chief Restructuring Officer's letter did not argue with any of those numbers. It simply described what had happened in the 38 months since they had been the basis for a capital decision. The integration with the legacy WMS had slipped 14 months beyond the vendor's Gantt chart. Labor savings were running at 22%, against a projection of 38%. Change orders had totaled $4.2 million, none of them modeled in the original proposal. The workforce transition had cost $1.224 million that no document had ever assigned a number to, because no one had thought to ask.
Victor was not a careless investor. He had asked the right questions in the right rooms. What he had not done, what no one in the process had done, was build a model that did not depend on the vendor's assumptions.
The business owner who sold him that facility three years earlier had made the same mistake in reverse. He did not know that the automation capital acquiring his building would reprice it on assumptions that had never been tested against his WMS. He received what looked like a full-price offer. He signed. He did not understand until much later what the $14 million gap between projection and reality had done to the equity value he thought he was walking away with, or why a more informed seller would have negotiated differently.
That is the thing the Memphis deployment teaches. Not that vendors lie. Not that operators are naive. That vendor financial models are optimistic by structure, not by accident, and that the same six cost categories are understated in every automation proposal, in the same direction, by roughly the same magnitude. The operator is exposed on one side of it. The business owner being acquired is exposed on the other.
Here is what that structure looks like, what it cost in Memphis, and what you can do before your next proposal arrives or your next acquisition inquiry.
The Proposal That Made Sense
The Memphis proposal was not unusual. That is the first thing worth understanding.
It was a well-constructed document, internally consistent, professionally formatted, anchored to a base contract of $11.2 million with a total deployment value that, by the time it reached Victor's investment committee, sat at $28 million across multiple facilities and phases. The vendor had deployed this system in nine buildings. The pre-sales engineer had built the financial model using the same template the company used on every proposal, calibrated to the site survey data the operator had provided. The labor savings projection, a 38% reduction in pick/pack headcount within 12 months, was consistent with what AMR and goods-to-person vendors typically project, 33–42% depending on the mix.
The model held together because vendor financial models are designed to hold together. They are built by people who know the technology and know the product, using inputs from the operator filtered through templates built from past wins. They are not built by people with a structural incentive to find the risks, because the pre-sales process does not reward finding the risks. It rewards producing a document capable of passing a capital approval.
This is not a character indictment. It is a description of an incentive architecture.
The pre-sales engineer who built the Memphis model was not dishonest. She was building the model that her employer needed her to build, using the data available to her, within a process that had no independent mechanism to correct optimistic assumptions. The operator reviewing the model was looking at a document produced by people who knew more about the technology than he did. Victor's investment committee was looking at a document produced by people who knew more about the deployment than either of them.
Nobody lied. The model was wrong.
Six Places the Model Was Wrong
Twelve automation deployments. Six cost categories. In every deployment, across all six categories, the direction of error was the same: the vendor's projection came in below actual cost. Not sometimes. Every time.
This is the pattern that a single postmortem cannot establish. One failed deployment is a cautionary tale. Twelve is a data set. And the data set says this: if you receive an automation proposal and it contains a number for each of these six categories, those numbers are understated. The question is by how much.
The Six Categories
1. Labor Savings
Vendors project 33–42% reduction in pick/pack headcount for AMR and goods-to-person systems. The empirical median in first-24-month performance across twelve deployments is 21–28%. Memphis delivered 22%. The gap exists because vendor projections use steady-state performance benchmarks. They do not model the ramp period, the workforce adaptation curve, the system tuning cycle, or the shift-level variability that real operations produce. Each of those factors compresses the savings during exactly the window the payback model depends on most.
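The compression the ramp period produces is simple to sketch. The 12-month linear ramp and 24-month horizon below are illustrative assumptions, not figures from the postmortem data, but they show how a 38% steady-state projection averages out to far less over exactly the window the payback model depends on:

```python
# Sketch: how a ramp period compresses labor savings during the payback window.
# The linear ramp shape and its length are illustrative assumptions.

def average_savings_rate(steady_state: float, ramp_months: int, horizon_months: int) -> float:
    """Average savings rate over `horizon_months`, assuming savings climb
    linearly from zero to `steady_state` over `ramp_months`, then hold flat."""
    total = 0.0
    for month in range(1, horizon_months + 1):
        if month <= ramp_months:
            total += steady_state * month / ramp_months
        else:
            total += steady_state
    return total / horizon_months

# A 38% steady-state projection with a 12-month linear ramp averages
# roughly 29% over the first 24 months -- before any system tuning or
# shift-level variability is modeled at all.
rate_24mo = average_savings_rate(0.38, ramp_months=12, horizon_months=24)
print(f"Effective 24-month average: {rate_24mo:.1%}")  # → 29.3%
```

Even this generous sketch, which assumes the steady state is eventually reached in full, lands below the vendor's headline number; the empirical 21–28% median reflects the additional factors the sketch leaves out.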
2. Integration Cost
The Memphis proposal listed integration at an $85,000 line item, well under 1% of the $11.2 million base contract, consistent with what vendors typically show for modern WMS environments. Memphis was not a modern WMS environment. The facility ran a system installed in 2008 that had 14 years of undocumented customizations layered on top of the base platform. The vendor's implementation team had deployed this system in four prior buildings. The oldest WMS in any of those buildings was six years old. Nobody had tested the integration against the actual data volume of a 2008 system under peak load. By month seven, the $85,000 line item had become $1.47 million. The empirical range for legacy-complexity environments is 30–45% of base contract. The Memphis proposal used a modern-WMS assumption in a legacy-WMS environment.
3. Timeline
The vendor's Gantt chart showed a nine-month implementation. The median slippage in complex deployments across twelve projects is six to fifteen months beyond vendor projection. Memphis slipped 14 months. That outcome sits squarely within the empirical distribution, which means it was predictable, not anomalous. What the vendor's Gantt chart did not include was a financial consequence for slippage. Each month of delay at Memphis scale carried $140,000–$280,000 in operating cost variance. A timeline is a schedule. A timeline with a financial consequence for each month of slippage is a financial document. The Memphis proposal included the former.
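Turning the schedule into a financial document takes one multiplication. The sketch below uses the Memphis-scale monthly cost band from the paragraph above; the slippage figure is the variable you stress-test:

```python
# Sketch: attaching a financial consequence to each month of timeline slippage.
# The per-month cost band is the Memphis-scale range; any proposal review
# should substitute its own facility's operating cost variance.

def slippage_cost(months_late: int, monthly_low: float, monthly_high: float) -> tuple[float, float]:
    """Return the (low, high) operating-cost variance for a given slippage."""
    return months_late * monthly_low, months_late * monthly_high

# Memphis slipped 14 months at $140k-$280k per month of variance.
low, high = slippage_cost(14, 140_000, 280_000)
print(f"14 months late: ${low:,.0f} to ${high:,.0f}")
# → 14 months late: $1,960,000 to $3,920,000
```

Running the same two lines at the median slippage range (6 to 15 months) before signing is what converts a Gantt chart into a number the investment committee can react to.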
4. Maintenance Escalation
Vendor proposals model year-one maintenance costs. Twelve deployments show 15–20% annual escalation from year two onward. Memphis came in at 18% in year two, 21% in year three. The original model had flat maintenance through year five. In a seven-year contract, the cumulative maintenance gap between a flat projection and an 18% escalation curve is not a rounding error. It is a number that changes the payback period by months, not weeks, and changes the NPV model by a figure that would have altered the investment committee discussion.
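The cumulative gap between a flat line and an escalating curve compounds quickly over a seven-year contract. A sketch, using a hypothetical $500,000 year-one maintenance line item (the year-one figure is an assumption; the 18% escalation is the Memphis year-two rate):

```python
# Sketch: cumulative maintenance under a flat projection vs. 18% annual
# escalation from year two onward. The $500k year-one base is hypothetical.

def cumulative_maintenance(year1: float, years: int, escalation: float) -> float:
    """Total maintenance over `years`, escalating annually from year two."""
    return sum(year1 * (1 + escalation) ** (y - 1) for y in range(1, years + 1))

base = 500_000  # hypothetical year-one maintenance line item
flat = cumulative_maintenance(base, 7, 0.0)
escalated = cumulative_maintenance(base, 7, 0.18)
print(f"Flat 7-year total:     ${flat:,.0f}")
print(f"18% escalation total:  ${escalated:,.0f}")
print(f"Gap the model omitted: ${escalated - flat:,.0f}")
```

On these assumed inputs the omitted gap runs well past $2 million, which is the kind of figure that moves a payback period by months and reshapes an NPV discussion.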
5. Change Orders
Twelve deployments averaged 12–18% of base contract in change orders that were not in the original proposal. Memphis came in at 16% — $4.2 million in change orders authorized across the first 24 months of operation, none of them modeled at contract signing. This is not a Memphis-specific failure. It is the structural consequence of deploying complex automation into real operational environments that are never fully captured in a pre-sales site survey. The 12–18% range is not a range that vendors do not know about. It is a range they have no commercial incentive to put in the proposal.
6. Workforce Transition
The HR Director at the Memphis facility spent three weeks doing what she called data archaeology, pulling payroll records, severance documentation, retraining invoices, productivity loss records from the transition period, and manager time allocated to the changeover. The number she produced was $1.224 million. The vendor's model had included no number for this category. Not an understated number. No number. The implicit assumption was zero. Across twelve deployments, workforce transition costs ranged from $380,000 to $2.1 million, with a median of $840,000. Every operator absorbed that cost. Every vendor proposal treated it as nonexistent.
What The Gap Actually Cost
Add the six categories together across twelve deployments and the aggregate cost of what was not in the vendor proposals is approximately $14 million. Memphis accounts for the substantial majority of that gap.
That $14 million is not a dramatic finding. It would be more useful if it were surprising. It is not. It is the predictable arithmetic of a model architecture that understates the same categories in the same direction every time. Integration overruns. Timeline carrying cost. Maintenance escalation compounded across seven years. Change orders absorbed without a budgeted contingency. Workforce transition costs that three weeks of data archaeology recovered after the fact, because no one had thought to estimate them before.
“The $14 million is boring. That is the most important thing about it. Boring means structural. Boring means predictable. Boring means it will happen again in your next proposal if you do not build a different model.”
The payback period Victor's committee approved was 18 months. The payback period the Memphis deployment produced, using the actual cost data, documented after the fact, was not calculable in the conventional sense, because the facility had entered a structured debt review before the project reached its projected payback date.
Other outcomes are available. The operator who builds an independent model before the contract is signed is working with a different number, one built from independent evidence, documented assumptions, and a conservative scenario that no vendor has a commercial incentive to produce.
What This Means If You Are the Business Owner, Not the Operator
The Memphis story is usually told from Victor's perspective: the story of the capital that lost. But there is another version of it that almost never gets told.
The business owner who sold his distribution center to Victor's fund was not present in the board room when the automation proposal was approved. He was not in the room when the integration overran. He was not copied on the Chief Restructuring Officer's letter. He had already signed, already closed, already moved on to whatever came next.
But the $14 million gap was already embedded in the assumptions that valued his business. The PE firm's acquisition model had projected automation savings that would not materialize at the rate it modeled. The labor cost reduction it underwrote was 38%. The actual first-24-month performance was 22%. That gap, between what the capital expected the automation to deliver and what it actually delivered, did not appear after the acquisition. It was priced into the offer before the acquisition closed.
Business owners in industrial and distribution sectors are receiving acquisition inquiries from PE capital at a rate that reflects record dry powder and deployment pressure. The capital is informed. It has underwritten automation scenarios, projected EBITDA multiples post-deployment, and built financial models that assume a specific automation trajectory for the business it is acquiring. The business owner, in most cases, has not.
Understanding what automation capital is projecting about your business, what it expects your operation to look like after deployment, what EBITDA multiple those projections support, and what the gap between vendor assumptions and postmortem reality means for your negotiating position, is not a niche concern. It is the context that separates an informed exit from one executed without it.
The Memphis deployment is not just a cautionary tale for operators. It is a data point for every business owner in a sector where PE automation capital is active. The six-category understatement that cost Victor's fund $14 million in deployment overruns is the same understatement that shapes how that fund values your business before it calls you.
Why the Vendor Does Not Fix This
The question that follows naturally from the Memphis data is: why don't vendors build more accurate models?
The answer is structural. A vendor producing a model to support a sale is in a different information environment than an operator producing a model to make a capital decision. The vendor's model needs to pass a capital approval. The operator's model needs to survive contact with operational reality. Those are different documents, even when they contain the same line items.
The pre-sales engineer builds her model from the inputs the operator provides during the RFP process, filtered through templates her company built from past wins, not from the full deployment distribution, which includes the projects that underperformed, the integration overruns, the workforce transition costs that nobody tracked. The template reflects what the vendor knows worked well enough to become a reference site. The median deployment is not a reference site. The Memphis deployment was not a reference site.
Vendor reference lists are curated marketing instruments, not representative samples. The operator who calls the references on the list is calling the best deployments the vendor has managed to produce, selected by the vendor. Those deployments are real. They are not representative. The reference process, done on the vendor's terms, produces information about a selected subset of outcomes. It does not produce information about the empirical distribution.
None of the individuals in the Memphis process were operating in bad faith. The pre-sales engineer used standard assumptions. The implementation team worked within the scope they had been contracted for. Victor's investment committee approved a model that was internally consistent. The structural problem was that no one in the process had an incentive to produce the accurate model, and the operator had no independent mechanism to pressure-test the model they received.
That mechanism is the thing Memphis was missing. Building it before the next proposal arrives is the only correction that works.
What You Can Do Before the Next Proposal
You do not need to become a financial modeling expert to protect against the six-category understatement. You need to recognize the architecture in fifteen minutes, ask the questions the proposal cannot answer with presentation polish alone, and build your own model before any contract conversation begins.
The 15-minute scan. Pull any automation proposal and look for four specific markers. First: what is the integration cost assumption as a percentage of base contract, and what is the explicit rationale for that assumption? A proposal that shows integration at 6–8% without specifying the WMS environment it is based on is using a modern-environment assumption. If your WMS is more than eight years old, that number is wrong. Second: what is the labor savings projection methodology? A projection built from a benchmark is not a site-specific analysis. Third: does the timeline model include a financial consequence for slippage, or is it a Gantt chart? Fourth: does the maintenance cost trajectory escalate from year two, or does it project a flat line?
The questions vendors cannot answer with polish. Ask your vendor what their integration cost assumption is for a WMS of your age and complexity, and what the empirical range is from their last ten comparable deployments. Ask them what the conservative-scenario labor savings projection looks like, not the base case, the conservative case. Ask them to walk you through what slippage in months 7 through 12 would do to the payback model, with the financial consequence modeled for each month. A vendor who cannot answer these questions fluently is telling you something about the model's foundation.
Find the references they did not give you. The vendor's reference list is a marketing instrument. The installed base audit finds the deployments that are actually comparable to yours, by WMS age, by facility scale, by industry and peak season profile, regardless of whether the vendor selected them. Ask your vendor for their full installed base in your industry segment and the contact information for the operations director at each site. The sites they resist providing are the sites you most need to call.
Build the model yourself. The independent total cost of ownership model takes a weekend. Six capital cost line items: base contract, integration, implementation carrying cost, change order contingency (budget 15% of base contract), workforce transition (budget $840,000 as a floor), and maintenance through year five with an 18% annual escalation from year two. Document every assumption explicitly. Run a base case and a conservative scenario. If the conservative scenario does not meet your investment threshold, the deployment is not ready. If the base case barely meets your threshold, you are one realistic estimate away from a 47-month payback.
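The weekend model described above fits in a single function. The 15% change-order contingency, $840,000 workforce floor, and 18% escalation are the figures stated in this section; the integration percentage, carrying cost, and year-one maintenance inputs in the example are hypothetical placeholders to be replaced with your own numbers:

```python
# A sketch of the independent six-line-item TCO model. Floors and rates
# follow the article's guidance; all dollar inputs in the example call
# are hypothetical placeholders.

def independent_tco(base_contract, integration_pct, carrying_cost,
                    maintenance_y1, years=5, escalation=0.18,
                    change_order_pct=0.15, workforce_floor=840_000):
    integration = base_contract * integration_pct
    change_orders = base_contract * change_order_pct
    maintenance = sum(maintenance_y1 * (1 + escalation) ** (y - 1)
                      for y in range(1, years + 1))
    return {
        "base_contract": base_contract,
        "integration": integration,
        "carrying_cost": carrying_cost,
        "change_orders": change_orders,
        "workforce_transition": workforce_floor,
        "maintenance_5yr": maintenance,
        "total": (base_contract + integration + carrying_cost
                  + change_orders + workforce_floor + maintenance),
    }

# Conservative scenario: legacy-WMS integration at 35% of base contract,
# 12 months of slippage carried at a hypothetical $200k/month.
conservative = independent_tco(11_200_000, integration_pct=0.35,
                               carrying_cost=12 * 200_000,
                               maintenance_y1=500_000)
for item, cost in conservative.items():
    print(f"{item:22s} ${cost:,.0f}")
```

Run it twice, once with base-case inputs and once with conservative ones; every assumption is a named argument, so documenting the assumptions and documenting the model are the same act.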
If you are a business owner receiving acquisition inquiries. Understand what automation capital is projecting about your business before you respond to the first offer. The fund that calls you has already built an automation scenario into its valuation model. Knowing what that scenario assumes, and how it compares to what the Memphis postmortem data says about first-24-month performance, is the difference between a negotiation you enter with context and one you enter blind. Your EBITDA multiple is not fixed. It depends partly on what the acquirer expects automation to do to your cost structure after they own it. That expectation is built from the same six-category model that cost Victor's fund $14 million.
“Victor did not get a second chance at Memphis. But the operators, and the business owners, who read the same pattern in the next proposal are reading a different letter at the end. Or they are not reading a letter at all.”
The VP of Operations in one of the twelve deployments built his model in a weekend. He sent it to his CFO on Monday morning. The number he sent was not the vendor's number. It was his number, built from independent evidence, documented assumptions, and a conservative scenario that no vendor had a structural incentive to produce. The vendor's model had shown a 33-month base case. His model produced 47 months against his organization's 36-month policy threshold. He walked away.
The walk-away was not a failure. It was the correct application of a model built from accurate assumptions.
Before Your Next Vendor Meeting
Pull the most recent automation proposal your organization received and run a six-row audit: one row per category (labor savings, integration cost, timeline, maintenance, change orders, workforce transition), three columns — what the vendor's model says, the empirical range from twelve actual deployments, and the delta. If any category is missing from the vendor's model, that is the number. A missing category is not a gap in the presentation. It is a gap in the financial model you are being asked to approve.
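The six-row audit above can be sketched as a lookup against the empirical ranges this article reports. The vendor figures in the example call are hypothetical; a missing key stands in for a category absent from the proposal, and the audit flags it rather than treating it as zero:

```python
# Sketch of the six-row audit: vendor figure vs. empirical range, with
# missing categories flagged. Empirical ranges are those reported across
# the twelve deployments; vendor values in the example are hypothetical.

EMPIRICAL = {
    "labor_savings_pct": (21, 28),
    "integration_pct_of_base": (30, 45),      # legacy-WMS environments
    "timeline_slip_months": (6, 15),
    "maintenance_escalation_pct": (15, 20),   # annual, year two onward
    "change_orders_pct_of_base": (12, 18),
    "workforce_transition_usd": (380_000, 2_100_000),
}

def audit(vendor: dict) -> list[str]:
    rows = []
    for category, (low, high) in EMPIRICAL.items():
        value = vendor.get(category)
        if value is None:
            rows.append(f"{category}: MISSING (empirical {low}-{high})")
        else:
            rows.append(f"{category}: vendor {value}, empirical {low}-{high}")
    return rows

# Hypothetical proposal with no workforce-transition line item at all:
for row in audit({"labor_savings_pct": 38, "integration_pct_of_base": 6,
                  "timeline_slip_months": 0, "maintenance_escalation_pct": 0,
                  "change_orders_pct_of_base": 0}):
    print(row)
```

The MISSING rows are the point of the exercise: a category the vendor's model omits entirely is the delta, not a blank cell.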