The financial and manufacturing sectors are most advanced in their deployment of industrial artificial intelligence (AI) technologies, reckons Fujitsu. In conversation with RCR Wireless, on the back of a rush of news about its AI initiatives – including, lately, a new generative AI framework to help enterprises manage and regulate the large volumes of data in unwieldy large language models (LLMs), and a deal with Canada-based enterprise AI firm Cohere to develop localised LLMs for enterprises in Japan – the Japan-based firm put the focus on the developing role of AI in the Industry 4.0 market, and presented key applications, challenges, and measures for enterprises to make the most of it.
“AI adoption is progressing [well] in the financial industry, a business field with a certain amount of data available and relatively little analogue and unstructured data compared to other industries,” said the firm in an email exchange. It continued: “Fujitsu has introduced [more] AI solutions to the financial industry than to any other industry. [AI] also has great potential to be used in manufacturing, where a large amount of [unstructured] data (diagrams, for example) is handled and where the accuracy of data tends to fluctuate due to the factory environment. Fujitsu is also focusing on the development of offerings in this field.”
Fujitsu is offering a “wide line-up of AI services”, it said, including third-party LLMs to develop bespoke AI for custom enterprise use cases. “For example, we are currently working on a solution based on Google Gemini for use cases with a high number of I/O tokens,” it said, making reference as well to the supply of “routing technologies to provide unique models”. The engineering industry, running adjacent to the Industry 4.0 market, is a clear focus, it said – where Fujitsu is “most excited to enable LLMs to reference business data for AI adaptation”. It explained: “Standardisation of operations is essential, and combining [our] SI expertise with core technologies is important.”
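(For illustration only: a routing layer of the kind referenced above might look something like the Python sketch below. The model aliases, token threshold, and domain tags are assumptions for the example, not Fujitsu’s actual routing technology.)

```python
# Illustrative only: a naive router that picks an LLM per request profile,
# e.g. sending high-I/O-token jobs to a long-context model. All names and
# thresholds here are assumptions, not Fujitsu's routing technology.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    expected_io_tokens: int   # rough estimate of input plus output tokens
    domain: str               # e.g. "finance", "manufacturing"

def route_model(req: Request) -> str:
    """Pick a model family for the request; placeholder policy."""
    if req.expected_io_tokens > 100_000:
        return "long-context-model"     # hypothetical alias, e.g. a Gemini-class model
    if req.domain == "manufacturing":
        return "business-specific-llm"  # hypothetical fine-tuned domain model
    return "general-purpose-llm"

print(route_model(Request("Summarise this inspection log", 250_000, "manufacturing")))
```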
For core technologies, here, read: “the expansion of business-specific LLMs and the evolution of ‘retrieval augmented generation’ (RAG)”. RAG retrieves relevant business data at inference time and supplies it to the foundation model alongside the prompt, so that responses are grounded in enterprise knowledge rather than in fine-tuning alone – ultimately raising the accuracy and reliability of generative AI systems, as discussed here. It is a relatively new technique, and a crucial one if generative AI is to find a foothold in critical Industry 4.0 sectors. Fujitsu is looking to make that RAG bridge automatic – to “automatically generate… an optimal combination of LLMs and RAG”, it responded.
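(To ground the terminology before the quote continues: a bare-bones RAG flow might look something like the Python sketch below. The toy retriever, document list, and generate() callable are hypothetical stand-ins for illustration, not Fujitsu’s implementation.)

```python
# Minimal RAG sketch: retrieve relevant enterprise documents, then ask the LLM
# to answer using only that retrieved context. Everything here is a stand-in.
from typing import Callable

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def rag_answer(query: str, documents: list[str], generate: Callable[[str], str]) -> str:
    """Build a grounded prompt from retrieved context, then call the LLM."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
    return generate(prompt)
```

The practical point is that answer quality then hinges on what the retriever surfaces, which is why Fujitsu talks about automating the pairing of LLMs and RAG rather than leaving it to data scientists.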
“Within this system, customers operate from a single UI, and the generative AI combines data and AI models without the need for input from data scientists. In this way, we ultimately aim to significantly improve work efficiency by enabling AI to provide rapid and autonomous recommendations.” More generally, responding to a direct question about “top use cases”, it suggested some form of industrial AI will be used commonly both on factory floors and in administrative offices – for “responding to customer inquiries, detection of defective products, maintenance and maintenance recommendations, presentation of estimates, and various kinds of reviews”.
The firm points to a reference page (in Japanese) of example generative-AI chatbot responses to a series of customer enquiries to a Mazda call centre. It stated: “The role of generative AI in Industry 4.0 is that AI sublimates and efficiently organises corporate data as knowledge in all business scenes, including R&D, estimations, design, procurement, manufacturing, shipping, maintenance, and [other] functions – as a reliable partner for management decisions and business implementers. Beyond Industry 4.0, people are advocating for a human-centric approach, where AI helps people to focus on making decisions and generating ideas, rather than taking their work away.”
It continued: “For example, there is a field called ‘materials informatics’ within the development of innovative materials in R&D, and, in our opinion, computational science, AI, and generative AI could be combined to expand ideas and advance development without needing to go through experiments and prototypes. In the future, generative AI will evolve into artificial general intelligence and artificial super intelligence (AGI and ASI), establishing itself as a human assistant through autonomous learning. We expect that the spread of work-specific LLMs is going to increase. However, when it comes to emotions and intuition, we will still have to rely on experienced humans.”
But what about all the challenges with generative AI in Industry 4.0 – in terms of infrastructure deployment and readiness, appropriate domain-specific reference data, and hallucination and accuracy (to list just three)? Fujitsu responded to each prompt in turn, summing up the first challenge (deployment) as: “the need to secure real-time data processing, low latency, high computing power and the right infrastructure to connect business processes and data to cloud-based solutions for efficient AI learning”. Its conclusion on this was simple: “It will be important that customers can access cloud-based HPC solutions free of charge.”
In terms of reference data, it responded that “data quality and diverse models have an impact on reliability”. It stated: “Business professionals need to create work patterns and use the resulting data as reference data. Thus, AI in Industry 4.0 will require such business professionals.” The discussion about so-called AI ‘hallucinations’ (unexplainable AI brain-farts, which can throw data analytics and insights off course, and potentially the business systems that rely on them) was more expansive, but the point in the end is to keep humans in the loop, and make AI explain itself. “Humans need to supervise the instructions/prompts to the AI and review answers given by the AI model,” it wrote.
“Business processes are being created that involve human judgement of AI inputs and outputs… Fujitsu has developed technologies to protect conversational AI from hallucinations, which it is offering through its Kozuchi AI platform. Fujitsu has [also] started a strategic partnership and joint development with… Cohere to provide generative AI for enterprises… [and] improve the reliability of LLMs themselves. Cohere’s LLM provides a clear and reliable data set for creating LLMs. This allows us to provide more accurate answers. Second, we can minimise hallucinations in customer operations by fine-tuning [models for] customer operations based on Takane, Fujitsu’s Japanese-language LLM.”
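(For illustration only: a human-in-the-loop gate of the kind described in the quotes above might look something like the Python sketch below. The citation check, confidence threshold, and field names are assumptions for the example, not the mechanics of Kozuchi or Takane.)

```python
# Illustrative human-in-the-loop gate: an AI answer only reaches the business
# system after basic checks and, where needed, a human sign-off. All fields
# and thresholds are assumptions for the sketch.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    answer: str
    cited_sources: list[str]   # documents the model claims to have used
    confidence: float          # model- or verifier-reported score, 0..1

def needs_human_review(draft: Draft, min_confidence: float = 0.8) -> bool:
    """Flag uncited or low-confidence answers for a human reviewer."""
    return not draft.cited_sources or draft.confidence < min_confidence

def release(draft: Draft, human_approves: Callable[[Draft], bool]) -> Optional[str]:
    """Return the answer for downstream use only once it has cleared review."""
    if needs_human_review(draft) and not human_approves(draft):
        return None   # held back: reviewer rejected it or edits are required
    return draft.answer
```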
Make of Fujitsu’s pitch what you will; but the top-line logic seems clear. So how should Industry 4.0 enterprises procure and process domain-specific data and models to train their generative AI tools? Fujitsu responded: “The trend of collecting data and building and fine-tuning models in collaboration with customers will continue. [But] there are limits to data collection within a company. By collaborating with many companies, we can collect data across industries, and we anticipate a future in which the value of generative AI will increase faster than ever before.” The point here is that enterprises cannot train LLMs alone, and Fujitsu has been doing it for ages (in the life of gen AI) – on bountiful complementary data sets.
It can draw in enterprise-specific data alongside these shared sets – and the RAG-time running between it all will make the recommendation process even more fluent. “Fujitsu has accumulated knowledge while promoting business-specific LLMs and will continue to offer the most appropriate data sets for customer operations, including consulting services. It is further promoting the development of a generative AI amalgamation technology that combines existing machine learning models. Rather than only creating LLMs, this approach aims to create LLMs best suited for customers’ needs by combining different existing LLMs.”
And so, finally, what steps should Industry 4.0 take to harness generative AI? Fujitsu highlighted three, which are “not significantly different” between enterprises and industries. “One: standardise; standardise operations and standardise data within these operations. Two: introduce business settings; enterprises should not only identify the increase in efficiency [they wish to achieve with] generative AI, but also how it [will] contribute to business growth and value. Three: start small; enterprises should create introductory AI roadmaps based on a usage hypothesis, and start small but also fast to bring things forward.”