ARPM Inside Rubber Issue 1, 2025

AI & DATA STRATEGY

Look at how people use computers today. If a user needs to create a work order, there is a set of micro-instructions they have to follow: click this button, pull down this menu, click that field, and so on. Done in the right order, a work order gets created. But there is a problem with command-based interaction, and it is the same problem implementers find with adoption of their shiny new software: it requires the user to conform to the very specific expectations of the software. A user doesn't tell the computer, "Make me a work order." They have to issue every little micro-command that, executed correctly in the right sequence, ultimately results in a work order.

That friction is what causes the adoption gap: the difference between what an enterprise system could do and what it actually achieves for a manufacturer. Employees have many work priorities, and learning the intricacies of an ERP interface is not the highest. The result is usually an endless and thankless series of reminders to put information in the system and to use the information in the system, plus trainings on how to do both, all set against a backdrop of rising staff turnover, labor shortages, and staff who feel they don't have the time to use the system every time they're supposed to.

THE NEW PARADIGM

But according to Nielsen, that's all changing, because we are now entering the third and newest paradigm: intent-based computing. Whereas command-based interaction requires users to conform to computers, intent-based computing does the opposite: it conforms the computer's actions to the user's intent. Users can treat the computer like a person: tell it the desired outcome, and it will figure out how to accomplish it, eliminating nearly all the friction of traditional systems. What makes this possible? Generative AI, and specifically a form of generative AI called Large Language Models (LLMs).
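The contrast between the two paradigms can be sketched in a toy example. Every function and step name below is a hypothetical illustration, not any real ERP system's API:

```python
# Toy contrast between command-based and intent-based interaction.
# All function and step names are invented for illustration; no real
# ERP or LLM API is being described here.

def create_work_order_command_style(system):
    """Command-based: the user must know and perform every micro-step."""
    system.append("open_work_orders_screen")
    system.append("click_new")
    system.append("fill_field:part_number")
    system.append("fill_field:quantity")
    system.append("click_save")
    return system

def create_work_order_intent_style(intent):
    """Intent-based: the user states the outcome, and the software
    (here a stand-in for an LLM-driven agent) plans the micro-steps."""
    known_intents = {
        "make me a work order": [
            "open_work_orders_screen",
            "click_new",
            "fill_field:part_number",
            "fill_field:quantity",
            "click_save",
        ],
    }
    return known_intents.get(intent.lower(), [])
```

In the command-based style the burden of sequencing falls on the user; in the intent-based style the same five micro-steps still happen, but the user only supplies the goal.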
Large language models are trained by letting big networks of digital neurons learn by reading billions of words from books, magazines, newspapers, and the internet, in dozens of languages from across the world. Their training task is simply to predict words. It's not obvious that this would result in something resembling intelligence; predicting the next word someone will say sounds more like a party trick than a sign of general intelligence. But human language has an odd property: unlike any other natural sound, it has intelligence and reasoning embedded in it. Books teach not just distilled knowledge but the ability to reason about the world. So when a neural network trains on that text, it is actually training not just on knowledge but on how to think, including the ability to reason about people and human intent.

HUMAN-LIKE MEMORY

Training these models on human language unlocked another new ability: a kind of memory that is more human than digital. Humans remember information by grouping facts together conceptually. Dogs and cats are closely related in our memory because, even though they are different words, they are both pets, animals, furry, and so on. Language models work the same way. They can turn huge amounts of information into clusters of related concepts, using a process called embedding. Instead of demanding keywords that must match exactly, AI language models can find concepts similar to what a user is looking for. That means knowledge that is typically scattered across SharePoint drives, databases, paper notes, and video can all be brought together in a memory system that works a lot like a person's, so that companies can create a centralized, intelligent memory that users can easily turn to when they need tribal knowledge for their job.

BRINGING IT TOGETHER

In 2025 and beyond, companies will adopt both this artificial human-like reasoning ability and human-like memory. It's happening in five major ways.
CENTRALIZED AI MEMORY: By storing all their process, machine, and material knowledge in AI memory, manufacturers can insulate themselves from the knowledge loss associated with technical staff turnover, and better share tribal knowledge across shifts. Internal AI models learn from technical staff, training on photos, speech, and videos, and then use that knowledge to assist the rest of the team. That can eliminate downtime created by not being able to put experts from every technical field on each shift. It also lets companies make their best technicians' knowledge available to the entire team.

DRIVING ACCOUNTABILITY AND INFORMATION SHARING: Manufacturing teams share important information verbally in daily shift meetings and production meetings. But it's common for action items to get lost, and for information discussed at those meetings to go unrecorded.
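The embedding idea behind this kind of AI memory can be sketched in a few lines. The three-dimensional vectors below are invented for demonstration; real language models produce learned vectors with hundreds or thousands of dimensions, but the retrieval principle (nearest concept wins, exact keywords not required) is the same:

```python
import math

# Toy "human-like memory": each concept is stored as a vector, and
# related concepts (dog, cat) sit close together while unrelated ones
# (invoice) sit far away. These 3-D vectors are made up for illustration.
memory = {
    "dog":     [0.90, 0.80, 0.10],  # pet-ness, animal-ness, paperwork-ness
    "cat":     [0.85, 0.90, 0.10],
    "invoice": [0.00, 0.10, 0.95],
}

def cosine_similarity(a, b):
    """Standard cosine similarity: 1.0 means same direction (very similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vector, memory):
    """Return the stored concept whose vector is closest to the query."""
    return max(memory, key=lambda name: cosine_similarity(memory[name], query_vector))
```

A query vector that lands near the "pet" region retrieves "dog" or "cat" even though the query never contained either keyword, which is exactly how embedding-based memory differs from exact-match search.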
