How will Large Language Models impact supply chains?
Everyone is talking about Large Language Models (LLMs), such as ChatGPT, and the supply chain community is no different. But beyond amusing posts and news chatter, IMD professor Ralf Seifert and supply chain researcher Richard Markoff explore their real-world benefits for supply chains.
By Ralf W. Seifert and Richard Markoff
Since the launch of ChatGPT by OpenAI late last year, Large Language Model (LLM) AI has moved to the forefront of mainstream media. LLMs process vast amounts of unstructured data and excel at creating original text or summarizing it, and can even produce images and videos.
While major news media weighs the benefits and risks, LLM-generated passages have become commonplace on social media platforms such as Facebook and Twitter. A similar dynamic is playing out in the business world: there are serious discussions about the impact on jobs, businesses are exploring how best to put LLMs to use, and LinkedIn has seen a steady stream of tiresome LLM posts.
Despite some very high-profile missteps, it is clear that Large Language Models are taking great leaps forward, are here to stay, and are already impactful in certain domains. They will both accelerate and contribute to the ongoing digitalization of supply chains, in ways that are difficult to predict but worth considering. These contributions can be roughly put into three buckets: knowledge management, data analysis, and process automation.
Knowledge management with LLMs
Supply chain talent management has moved front-of-mind in recent years, as borne out by surveys and recent events. The supply chain dysfunctions of the last two years are largely behind us, but they created a high-stress environment marked by high turnover and a search for talent.
One consequence of this is that methods and practices developed over many years, both in management techniques and in the use of software tools, were lost. Supply chain, in particular, relies on institutional knowledge that is difficult to capture in standards and guidelines. There is no common, agreed set of procedures or policies, as so much depends on the context of each company and industry. And even if a company were disciplined enough to capture its in-house processes and train staff on them, there are gaps in knowledge at the best of times.
One can imagine an LLM, trained on internal, unstructured data such as emails, best-practice documents, training materials, and transactional history, as an invaluable tool in knowledge management. This use of Large Language Models would resemble what we have seen so far, using prompts and queries to have an LLM explain how to set up a new code, the significance of a data field, how a similar issue was handled in the past, or why a customer is shipped twice a week rather than once.
Trying to piece together why a certain decision or practice came into being, or even recommending one, will become a powerful, fundamental tool. The LLM could mine emails, notes, and meeting minutes, to answer questions, for instance, about the reasoning behind the timing of a launch or the selection of a vendor. The LLM will act both as the keeper of institutional knowledge and the chronicler of the supply chain history of the company.
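To make this concrete, here is a minimal sketch in Python of the retrieval step such a knowledge-management assistant might perform. A production system would use embedding-based semantic search feeding an LLM; simple TF-IDF keyword scoring stands in for it here to illustrate the mechanics. The document names and contents are invented for illustration.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase the text and split on non-alphanumeric characters.
    return [t for t in "".join(c if c.isalnum() else " " for c in text.lower()).split() if t]

def build_index(documents):
    # documents: dict of doc_id -> raw text (emails, SOPs, meeting minutes).
    # Returns per-document term counts and overall document frequencies.
    term_counts = {doc_id: Counter(tokenize(text)) for doc_id, text in documents.items()}
    doc_freq = Counter()
    for counts in term_counts.values():
        doc_freq.update(counts.keys())
    return term_counts, doc_freq

def retrieve(query, term_counts, doc_freq, top_k=1):
    # Score each document by TF-IDF overlap with the query terms,
    # then return the ids of the best-matching documents.
    n_docs = len(term_counts)
    scores = {}
    for doc_id, counts in term_counts.items():
        score = 0.0
        for term in tokenize(query):
            if term in counts:
                idf = math.log(n_docs / doc_freq[term]) + 1.0
                score += counts[term] * idf
        scores[doc_id] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

In a full pipeline, the retrieved passages would be handed to the LLM as context so that its answer, for instance about why a customer is shipped twice a week, is grounded in the company's own records rather than generated from thin air.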
Data analysis with LLMs
Large Language Models have shown themselves capable of egregious factual errors. These “hallucinations” stem from the way they build responses from the datasets they are trained on. An in-house LLM, with access to the company's unstructured text data, its ERP and APS databases, and its transactional history, would be better able to build accurate, reliable responses.
Combining LLMs with process mining would be a “killer app”, described by one supply chain expert we spoke to as “a tool to combine complex, huge data sources into a smooth user interface”. At that point, the analytical possibilities become virtually infinite.
A planner could use the LLM with more classical machine learning to explore cannibalization effects, or the impacts of competitor promotions, social media chatter, or a price change, in just a few keystrokes. The LLM could also recommend lead times and safety stocks based on history, or even propose a cleaning of historical sales, all without creating cumbersome queries, pivot tables or BI tools.
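The safety-stock recommendation mentioned above is a good example of a calculation an LLM assistant could run behind the scenes. Here is a minimal sketch in Python using the standard textbook formula (safety stock = z × demand standard deviation × √lead time), assuming normally distributed daily demand; the function names and the roughly-95% service-level default are our own illustrative choices, not a prescription.

```python
import math
import statistics

def safety_stock(daily_demand_history, lead_time_days, z=1.65):
    # z = 1.65 corresponds to roughly a 95% cycle service level
    # under a normal demand assumption.
    sigma = statistics.stdev(daily_demand_history)
    return z * sigma * math.sqrt(lead_time_days)

def reorder_point(daily_demand_history, lead_time_days, z=1.65):
    # Expected demand over the lead time, plus the safety buffer.
    mean = statistics.mean(daily_demand_history)
    return mean * lead_time_days + safety_stock(daily_demand_history, lead_time_days, z)
```

The appeal is not the formula itself, which any planner knows, but that a conversational prompt ("what safety stock should we carry for this code?") could trigger it against live history without the planner building a query or spreadsheet.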
In addition to combining multiple data sources, the powerful appeal is in the speed and simplicity of the user interface. With a qualitative query, the LLM could almost instantly analyze demand seasonality, calculate a KPI, or call up a history during a meeting as needed. This will truly be a revolution in productivity and data-driven insights.
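As an illustration of the seasonality analysis such a qualitative query might trigger, here is a sketch that computes seasonal indices as each month's average sales relative to the overall monthly average. The data layout (a list of month/units pairs spanning one or more years) is assumed for illustration.

```python
from collections import defaultdict

def seasonal_indices(monthly_sales):
    # monthly_sales: list of (month_number, units) pairs covering one or more years.
    totals, counts = defaultdict(float), defaultdict(int)
    for month, units in monthly_sales:
        totals[month] += units
        counts[month] += 1
    # Average each month across years, then normalize by the overall average.
    month_avg = {m: totals[m] / counts[m] for m in totals}
    overall = sum(month_avg.values()) / len(month_avg)
    # An index above 1.0 marks a peak month, below 1.0 a trough.
    return {m: month_avg[m] / overall for m in month_avg}
```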
Process automation with LLMs
One less-publicized ability of Large Language Models is to trigger events based on user prompts. One LLM even hired a person through TaskRabbit to perform a task it was not permitted to do. More relevantly, LLMs can even perform basic coding and build web pages based on sketches.
Extrapolating to the world of supply chain, here again there is endless potential, though it is admittedly speculative. Product-code creation provides a useful, illustrative example. Creating and populating a product code in an ERP is one of the knottiest operational data challenges in a company. There may be hundreds of required data fields across many functions, many of them interdependent, and without which the system is blocked.
With a simple prompt, an LLM could create a new code identical to another one, change data values, analyze and report on missing or incoherent data, and make mass changes. This last usage will certainly require a very high level of confidence in the LLM, and while it is not something one could expect to happen in short order, it is both possible and plausible.
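A sketch of the "create a code like this one and flag what is missing" step might look as follows. The required-field set and the sample record are invented for illustration; a real ERP would have hundreds of interdependent fields and validation rules.

```python
# Hypothetical set of fields the ERP requires before a code can go live.
REQUIRED_FIELDS = {"description", "unit_of_measure", "lead_time_days", "safety_stock", "vendor"}

def clone_code(template, overrides):
    # Start from an existing product-code record and apply the requested changes,
    # mirroring an LLM prompt like "create a code identical to X, but with ...".
    record = dict(template)
    record.update(overrides)
    return record

def missing_fields(record):
    # Report required fields that are absent or empty, which would block the ERP.
    # (Note: a literal 0 is treated as empty here; a real check would be field-specific.)
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))
```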
A more immediate use could be in finite capacity planning, for example. Very few companies have succeeded in automating production planning at the resource and product level, smoothing production and respecting capacity constraints, other than in the simplest of contexts. The trade-offs between working capital, line utilization, labor, material availability, and service have proven too complex to model and demand high-maintenance prioritization data.
An LLM that translates a prompt into priorities, and reschedules based on planner feedback, could break through this hurdle and allow for dynamic finite capacity plans that learn priorities and trade-offs over time. With constant qualitative instruction, even difficult end-to-end planning problems like product substitutions could be set up with a few keystrokes.
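To show how prompt-derived priorities could drive a capacity-feasible plan, here is a deliberately simple greedy sketch: each order is assigned to the earliest day with enough remaining capacity, highest priority first. Real planning engines handle far richer constraints (changeovers, materials, labor); the order data and capacity numbers here are invented.

```python
def schedule(orders, daily_capacity, horizon_days):
    # orders: list of (order_id, hours, priority) -- higher priority is scheduled first.
    # Greedily assigns each order to the earliest day with enough remaining capacity.
    remaining = [daily_capacity] * horizon_days
    plan = {}
    for order_id, hours, _priority in sorted(orders, key=lambda o: -o[2]):
        for day in range(horizon_days):
            if remaining[day] >= hours:
                remaining[day] -= hours
                plan[order_id] = day
                break
        else:
            plan[order_id] = None  # cannot fit within the horizon
    return plan
```

The LLM's role in such a setup would be upstream of this logic: turning a planner's instruction ("favor the export orders this week") into the priority values the scheduler consumes, and revising them as feedback arrives.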
LLM AI in context
Prior to the introduction of LLM AI, we wrote about the challenges of implementing AI in demand planning. These centered largely on the difficulty of exploiting an AI-generated demand plan without an established, robust, inclusive Sales and Operations Planning (S&OP) process that integrates information from other functions such as Sales, Marketing, and Finance.
Successful implementation of a holistic S&OP relies on strong alignment of operational and financial plans and deep involvement of general management. Only the most mature, established S&OP processes can overcome the explainability problem created by the black-box nature of AI, whereby it is difficult to understand the underlying reasoning behind the demand recommendations.
Because they are trained on internal data, the supply chain applications discussed here may prove less prone to the errors and “hallucinations” that have garnered headlines, and slowly establish a shared, common trust in AI.
That trustworthiness must be established; otherwise, the same obstacles that have slowed the deployment of AI in demand planning will come into play. But with the introduction of LLM AI, there has been a change in the air around the broad perception of AI: it is now mainstream and adoption is accelerating. This may prove to be the catalyst for finally bringing AI, in all its forms, into widespread use in operations.
This article was first published by IMD on May 17, 2023.
Ralf W. Seifert is Professor of Operations Management at IMD and co-author of The Digital Supply Chain Challenge: Breaking Through. He directs IMD’s Digital Supply Chain Management program, which addresses both traditional supply chain strategy and implementation issues as well as digitalization trends and the impact of new technologies.
Richard Markoff is a supply chain researcher, consultant, coach and lecturer. He has worked in supply chain for L’Oréal for 22 years, in Canada, the US and France, spanning the entire value chain from manufacturing to customer collaboration. He is also Co-Founder and Operating Partner of the Venture Capital firm Innovobot.