The automation and artificial intelligence (AI) used today forms part of the fourth industrial revolution, known as Industry 4.0. The first of these revolutions was mechanisation, occurring in the 18th and 19th centuries in Europe and the US, and characterised by the transformation from rural to industrial societies. The second was mass production, which took place during 1870–1914, a period of growth for pre-existing industries and the expansion of new ones, such as steel, oil and electricity. The third was the digital revolution, which began in the 1980s and is ongoing: the shift from analogue electronic and mechanical devices to the digital technology available today.
Research has estimated that as much as 80% of the recent advances in AI arose from improvements in computation over the past 20 years. Computation here means not merely the hardware but also the sophistication of the algorithms running on it. Regardless of the technology used for manufacturing, the priorities of safety, quality, productivity and profitability remain constant.
Smart manufacturing
AI is often discussed as if it were a single entity, but it is actually an umbrella term dating back to the 1950s.
It involves two key elements: relevant data and specific rules about how to analyse that data. These rules usually comprise binary questions that a computer program asks in sequence until it can provide a relevant answer. If an answer cannot be reached, more rules must be created that are specific to the problem being solved.
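As a toy illustration, such a chain of binary questions might look like the sketch below. The rule names and thresholds are hypothetical, not drawn from any real system.

```python
# A minimal sketch of a rule-based check: binary questions asked in
# sequence until an answer can be given. All names and thresholds
# are hypothetical.

def classify_part(reading: dict) -> str:
    if reading["temperature_c"] > 240:      # rule 1: overheated?
        return "reject: overheated"
    if reading["pressure_bar"] < 80:        # rule 2: pressure too low?
        return "reject: low pressure"
    if reading["cycle_time_s"] > 35:        # rule 3: cycle too slow?
        return "flag: manual inspection"
    return "accept"                         # no rule fired

print(classify_part(
    {"temperature_c": 230, "pressure_bar": 85, "cycle_time_s": 30}
))  # -> accept
```

When parts that should fail still pass, the fix is to append another, more specific rule, which is exactly the pattern described above.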
There are a number of data sources that can be used within smart manufacturing, such as incoming inspection, in-process inspection, testing, equipment monitoring, production data, as well as planning and field data.
AI provides a mechanism to collect and analyse these data systematically, yielding valuable insights.
In order to analyse this mass of data, it must first be stored. It is possible to upload all of the data to cloud storage, but the sheer volume makes this impractical because of the inherent limitations of a network. Furthermore, it is difficult to gain timely insights this way: by the time the data has been uploaded to the cloud, analysed and returned, it is often too late to act. An alternative is edge computing, in which data is stored, managed and processed on or near the machines that generate it, rather than on a network of remote servers. Edge computing works much faster, so findings from data analysis can be acted upon more quickly.
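A minimal sketch of the idea, with made-up sensor values and an invented alarm threshold: raw readings are summarised on the device, and only the small summary record would ever cross the network.

```python
# Edge-style processing sketch: summarise locally, ship only the summary.
# Readings and threshold are hypothetical.

from statistics import mean

RAW_READINGS = [21.1, 21.3, 21.2, 24.9, 21.0]  # e.g. one second of sensor data
ALARM_THRESHOLD = 24.0

def summarise_at_edge(readings):
    return {
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alarm": any(r > ALARM_THRESHOLD for r in readings),
    }  # only this small record leaves the device

print(summarise_at_edge(RAW_READINGS))
# -> {'mean': 21.9, 'max': 24.9, 'alarm': True}
```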
Although collecting and analysing large amounts of data is valuable for optimising the manufacturing process, implementation can be problematic. There are two main issues. The first is micromanagement of data, where unnecessary data is collected and analysed without yielding any useful insights, wasting precious time and resources. The second is cybersecurity: it is hugely important that the data being collected and analysed cannot easily be stolen.
Kept in focus
Medical device companies, such as Carl Zeiss Vision Care, are already using smart manufacturing technology. They produce precision optics, which are customised to order. Making these devices involves obtaining the raw materials, forming the glass, and putting the glass through a number of finishing operations and quality controls. Carl Zeiss Vision Care use DXC Technology to monitor the manufacturing process. Labour, production, quality, equipment and environmental data are generated, allowing the company to predict when a problem will arise and respond immediately. This ensures that any issues are addressed before they have the chance to impact output.
An exciting development within AI is machine learning (ML). It often relies upon neural networks, which are modelled loosely on the human brain and nervous system. ML can sort information into categories based upon its characteristics, which means that it is not always necessary to label the data. ML uses probability to make decisions with a reasonable degree of certainty, making it in effect a trial-and-error method. ML is also able to refine itself when given feedback about its outputs, making modifications to prevent similar problems from occurring in the future.
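As a rough sketch of how such feedback-driven refinement works (a single-parameter logistic model trained on made-up data, not any production algorithm):

```python
# Sketch of probability-based decisions refined by feedback.
# All data and the learning rate are hypothetical.

import math

w, b = 0.0, 0.0            # model parameters, refined from feedback
LEARNING_RATE = 0.1

def predict(x):
    """Probability that input x belongs to the 'defect' class."""
    return 1 / (1 + math.exp(-(w * x + b)))

def give_feedback(x, label):
    """Nudge parameters so a similar mistake is less likely next time."""
    global w, b
    error = predict(x) - label          # positive if we over-predicted
    w -= LEARNING_RATE * error * x
    b -= LEARNING_RATE * error

# repeated feedback on four example observations
for x, label in [(2.0, 1), (-1.5, 0), (1.8, 1), (-2.2, 0)] * 50:
    give_feedback(x, label)

print(f"P(defect | x=2.0) = {predict(2.0):.2f}")  # high after training
```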
“It’s opening a lot of doors and making complex analysis more possible in a wide range of applications,” says Dave Saunders, the chief technology officer of Galen Robotics. “A great rule of thumb, from Stanford’s Dr Andrew Ng, is that anything a human can ‘think through’ in a second or less is a possible candidate for AI or machine learning.”
Digging deeper, ML can be divided into supervised and unsupervised learning. Supervised learning is where the machine is made to learn explicitly: data with clearly defined inputs is provided, along with direct feedback. This type of learning can predict outcomes and resolve classification and regression problems. For example, there is a target temperature for injection moulding. The ML algorithm would be provided with labelled sensor data so that it can learn to determine what is inside and outside of range.
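A minimal supervised-learning sketch of that injection-moulding example, with hypothetical temperatures labelled in (0) or out (1) of range; it uses scikit-learn, one common choice rather than anything mandated by the process itself:

```python
# Supervised learning sketch: labelled temperatures teach the model
# what counts as out of range. Data is hypothetical; needs scikit-learn.

from sklearn.tree import DecisionTreeClassifier

X = [[185], [198], [200], [201], [202], [215], [218], [220]]  # deg C
y = [1,     0,     0,     0,     0,     1,     1,     1]      # 1 = out of range

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[203], [190], [219]]))  # -> roughly [0 1 1]
```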
Unsupervised learning is where the machine makes sense of the data and is able to identify patterns on its own. Evaluation is indirect and it is not able to make specific predictions. An example is machine vision, where the system is fed large volumes of data and, by analysing several layers of that data, is able to determine the key features and how to improve them.
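By contrast, an unsupervised sketch receives no labels at all. Here k-means clustering (again via scikit-learn, with invented readings) groups the data into two regimes without being told what either one means:

```python
# Unsupervised learning sketch: k-means finds structure in unlabelled
# sensor data. Readings and cluster count are hypothetical.

from sklearn.cluster import KMeans

# unlabelled (vibration, temperature) readings from a machine
readings = [[0.2, 40], [0.3, 41], [0.25, 39],   # one operating regime
            [1.8, 72], [1.9, 75], [2.1, 70]]    # a second regime

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1]: two regimes found unaided
```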
Supervised and unsupervised learning can be combined through the use of human-in-the-loop systems. In this approach, humans are directly involved in training, tuning and testing data for a particular ML algorithm.
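One common pattern, sketched below with hypothetical names and an invented confidence threshold, is to let the model act on confident predictions and queue uncertain ones for a human, whose answers then become fresh training data:

```python
# Human-in-the-loop triage sketch: confident predictions pass through,
# uncertain ones go to a reviewer. Threshold and IDs are hypothetical.

CONFIDENCE_THRESHOLD = 0.9
review_queue = []          # items awaiting human judgement

def triage(item_id, p_defect):
    """Route an item based on the model's predicted defect probability."""
    if p_defect >= CONFIDENCE_THRESHOLD or p_defect <= 1 - CONFIDENCE_THRESHOLD:
        return "auto"                   # model decides on its own
    review_queue.append(item_id)        # human decides; answer is logged
    return "human"

print(triage("part-001", 0.97))  # -> auto
print(triage("part-002", 0.55))  # -> human
print(review_queue)              # -> ['part-002']
```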
Research suggests that a variant of Pareto’s 80:20 rule is consistent with most ML systems, with 80% being AI driven, 19% being human input and 1% due to randomness. Although human-in-the-loop systems are useful because they ensure that ultimate control rests with humans, they do have limitations. For example, prompt responses are required to prevent humans becoming the bottleneck in the manufacturing process. In addition, such systems demand a high level of expertise and alertness, which is not always feasible. This type of intelligence is thus inherently limited.
The robots are coming
Greater flexibility is afforded through the use of robots, which are designed to mimic human thinking and behaviour. Industrial robots are the fastest growing area of robotics within medical device manufacturing. They are automated, programmable and capable of movement on two or more axes. Some are programmed to carry out specific actions repeatedly with a high degree of accuracy; others are more flexible and can operate on a range of objects.
“Advances in artificial intelligence have allowed robots to learn, watch and expand their capabilities,” says Kiyonori Inaba, general manager of Fanuc. “Deep learning has also cut time-intensive robot behaviour programming.”
Robots can work either collaboratively or independently from humans in the manufacturing process. Cobots, which work collaboratively, are designed to be implemented within the manufacturer’s existing infrastructure, without the need for safety caging, allowing them to interact with human line workers. They are ideal for many tasks typically completed by humans, such as packaging, and so they can be easily inserted into the line. Incorporating cobots in the manufacturing process for extremely precise and quality-driven aspects of the medical device industry can help reduce human error and improve quality. This is particularly useful in medical device packaging, where ineffective packaging has a negative impact on the efficacy of a medical device or can even result in the entire device failing to get to market.
One company that has benefitted from using cobots is Tegra Medical. The company was facing reduced profits as a result of increased development costs and customer-demanded price cuts. Tegra turned to Universal Robots to keep up with customer demands and to reduce costs. By using three robotic-arm devices from Universal Robots, Tegra was able to double throughput and free up 11 full-time positions to tend to other processes.
“Being in the medical industry, we can’t change our process without notifying our customers and going through validation activity,” says Hal Blenkhorn, the director of engineering at Tegra Medical. “But by simply replacing the operator with a robot, we just changed the handling of components in-between the processes. That was a huge win for us. With the Universal Robot robots, we only get a few rejects per day. Before, that number was significantly higher.”
Another way that robots can be used collaboratively is with exoskeletons, predicted by ABI Research to become a $2 billion industry by 2025. These are devices worn by workers to support their activities. They can help to lower work-related injuries, reduce money spent on medical expenses and sick leave, and lower work fatigue, all of which boosts productivity. When sensors are used within this technology, for example in haptic gloves, human knowledge of how to perform an operation can also be translated into machine intelligence.
Vision-guided robots also offer a number of benefits within manufacturing. This is where machine vision is used to locate a part and automatically communicate the location to the robot. This technology has been around for two decades but has become increasingly popular because of developments in deep learning, which have made machine vision more powerful. In the next few years, likely advances include automation of inspection and defect detection.
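A minimal sketch of part location follows, using template matching, one classical machine-vision technique. The synthetic images stand in for a camera feed, and a real system would calibrate pixel coordinates to the robot's frame.

```python
# Vision-guided location sketch: find a part in an image and report its
# pixel position. Synthetic images replace a camera; needs OpenCV + NumPy.

import numpy as np
import cv2

# synthetic scene: dark field with a bright 20x20 "part" at x=60, y=40
scene = np.zeros((200, 200), dtype=np.uint8)
scene[40:60, 60:80] = 255

# template: the same part shape with a dark border around it
template = np.zeros((30, 30), dtype=np.uint8)
template[5:25, 5:25] = 255

result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score > 0.8:
    x, y = top_left[0] + 5, top_left[1] + 5   # offset to the part itself
    print(f"part located at pixel ({x}, {y})")  # -> (60, 40)
    # a real cell would transform these pixel coordinates into the
    # robot's coordinate frame and send them to the controller
```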
Challenge accepted
The future development of AI faces a number of challenges. These include the ongoing demand for mission-critical decision making, involving timely and safe judgements within an inherently unpredictable environment. Another key challenge is the current lack of understanding about how AI makes decisions, which creates uncertainty that slows wider implementation of AI technology. There is also a need to ensure that AI developments remain compliant with ongoing shifts in policies and procedures. Despite these challenges, AI is clearly here to stay, and it offers huge potential for current and future manufacturing processes.