New 3-D Printing Method Creates Shape-Shifting Objects

A team of researchers from Georgia Institute of Technology and two other institutions has developed a new 3-D printing method to create objects that can permanently transform into a range of different shapes in response to heat.

The team, which included researchers from the Singapore University of Technology and Design (SUTD) and Xi’an Jiaotong University in China, created the objects by printing layers of shape memory polymers with each layer designed to respond differently when exposed to heat.

“This new approach significantly simplifies and increases the potential of 4-D printing by incorporating the mechanical programming post-processing step directly into the 3-D printing process,” said Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “This allows high-resolution 3-D printed components to be designed by computer simulation, 3-D printed, and then directly and rapidly transformed into new permanent configurations by simply heating.”

An object created by a team of researchers from Georgia Institute of Technology and two other institutions is suspended in water after permanently morphing from a flat to a curved shape in response to hot water. (Credit: Rob Felt)

The research was reported April 12 in the journal Science Advances, a publication of the American Association for the Advancement of Science. The work is funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation and the Singapore National Research Foundation through the SUTD DManD Centre.

The new 3-D printed objects build on the team’s earlier work with smart shape memory polymers (SMPs), materials that can remember one shape and change to another programmed shape when uniform heat is applied. In that earlier work, the team used SMPs to make objects that could fold themselves along hinges.

“The approach can achieve printing time and material savings up to 90 percent, while completely eliminating time-consuming mechanical programming from the design and manufacturing workflow,” Qi said.

To demonstrate the capabilities of the new process, the team fabricated several objects that could bend or expand quickly when immersed in hot water – including a model of a flower whose petals bend like a real daisy responding to sunlight and a lattice-shaped object that could expand by nearly eight times its original size.

“Our composite materials at room temperature have one material that is soft but can be programmed to contain internal stress, while the other material is stiff,” said Zhen Ding, a postdoc researcher at Singapore University of Technology and Design. “We use computational simulations to design composite components where the stiff material has a shape and size that prevents the release of the programmed internal stress from the soft material after 3-D printing. Upon heating, the stiff material softens and allows the soft material to release its stress, and this results in a change – often dramatic – in the product shape.”

The new 4-D objects could enable a range of new product features, such as products that could be stacked flat or rolled for shipping and then expanded once in use, the researchers said. Eventually, the technology could enable components that respond to stimuli such as temperature, moisture or light in a way that is precisely timed to create space structures, deployable medical devices, robots, toys and a range of other structures.

“The key advance of this work is a 4-D printing method that is dramatically simplified and allows the creation of high-resolution complex 3-D reprogrammable products,” said Martin L. Dunn, a professor at Singapore University of Technology and Design who is also the director of the SUTD Digital Manufacturing and Design Centre. “It promises to enable myriad applications across biomedical devices, 3-D electronics, and consumer products. It even opens the door to a new paradigm in product design, where components are designed from the outset to inhabit multiple configurations during service.”

>> Read more by Josh Brown, Georgia Tech News Center, April 12, 2017

Dassault Systemes Highlights the Role of Simulation in 3D Printing

What does simulation have to do with 3D printing? Plenty, as it turns out, and plenty more is possible by simulating various additive manufacturing (AM) processes, machines and materials.

Recently, Dassault Systèmes kicked off its Science in the Age of Experience conference in Chicago with an Additive Manufacturing Symposium.

The Hole Story

Penn State’s Timothy Simpson shows an example of distortion during his presentation at Dassault Systèmes’ Additive Manufacturing Symposium.

“Why are holes circular?” asked Timothy Simpson, professor of mechanical and industrial engineering at Penn State University and co-director of the Penn State Center for Innovative Materials Processing through Direct Digital Deposition (CIMP-3D). “Is this the most efficient use of material? Is this the way Nature intended? No, it’s because this is how we’re used to making them. We’ve been making them for hundreds of years. It’s cheapest, fastest, quickest to drill a circular hole. Holes don’t have to be circular anymore. You can print all sorts of geometries now.”

That simple realization can cause a ripple effect in design thinking. 3D printing technologies make it possible to rethink standard shapes, combine different materials, apply materials in different densities in the same part, and combine multiple parts into one.

“You can change alloys on the fly,” Simpson said. “You can get corrosion resistance here, fatigue properties there, better hardness there … all in a single component.”

However, it’s not that simple when it comes to integrating those possibilities into product design and development. “How do you design that single component in your CAD system that has multiple materials? How are you going to analyze that in your finite element analysis (FEA)? Even worse, now that we can print it, how are you going to certify that you have the right microstructure in the right place?” Simpson asked. “This is what’s causing both the excitement and companies to sort of freak out a little bit about where we’re going with additive.”

Challenges of Simulating Additive Manufacturing

And we’ve got a long way to go when it comes to efficiently combining design and simulation with additive and traditional manufacturing. Sometimes holes are circular in parts because they need to be bolted onto existing assemblies. Right now it’s difficult to take into account all the possibilities of 3D printing and combine them with traditional manufacturing requirements while optimizing them for specific AM processes and machines. Simpson gave one example of his students “not knowing what they can’t do” and cobbling together four or five different software packages to design an optimized part, only to spend 30 of the 54 hours of build time – and $1,500 of the $2,000 in materials costs – on supports, not the actual part.

“A lot of our tools, our simulation models, the design workflow that we have, is now the bottleneck in our system,” Simpson said.

Subsequent presenters shared their work on removing those bottlenecks. Jack Beuth, professor of mechanical engineering at Carnegie Mellon and director of the university’s NextManufacturing Center, presented his work on developing process maps. Process mapping is a means of understanding the capabilities and variabilities of AM machines, via experiments and simulations, in order to control process outcomes and qualify parts.

“You can also design the [AM] process itself,” he said. “Most people don’t fully appreciate how much freedom there is in designing the process. It’s very possible to manipulate the process variables significantly on existing machines.”

Beuth is working on process-mapping software that someone could use to design the process variables as they’re designing the part.

Process Simulation

Jacob Rome, a structural analyst at The Aerospace Corporation, focused his presentation on simulating the process of developing and qualifying AM parts for space applications.

“In the aerospace industry, there is a lot of emphasis on making things properly that work the first time,” he said. In AM, variables such as the orientation of the part, the moisture content of the metal powder being used, where supports are placed, and laser print speeds, power and patterns, just to name a few, can affect the quality of the parts.

“Tools are available now and becoming more available to simulate AM processes,” Rome added. “In the future, software will be capable of pre-correcting distortions, predicting microstructures and optimizing build parameters.”

But Rome was quick to point out AM is not magic. The old design constraints have been replaced by new ones. “You can’t make anything you want with any properties you want.”

The trick is knowing what is possible during the design stage and what your virtual changes will mean to the physical part. A lot of those variables depend on the material being used.

Material Matters

As Lyle Levine from the Materials Measurement Lab at the National Institute of Standards and Technology (NIST) put it: “Frankly, if people had asked me many years ago, ‘What is the worst possible way to build a material?’ I probably would have said ‘by welding millions and millions of little bits of metal together.’”

Levine’s job today is all about making that metal AM process work better. Specifically, he’s trying to help “build the tools that will allow engineers to design a specific part for a specific engineering application with a specific additive manufacturing machine.” As part of that mission, NIST has founded AM Bench, an AM benchmark test series of highly controlled builds and detailed measurements that the institute plans to make publicly available so people can validate their simulations.

Integrating 3D printing, whether metal or polymers, into a production environment takes longer than many companies expect. Mike Vasquez, a consultant from 3Degrees, said it can take six months, if everything goes well, just to begin part production. Companies that go into AM for production need to understand the challenges involved.

From Dassault Systèmes’ perspective, the solution to many of these challenges can be realized via collaboration on a single platform. At the conference, the company shared its vision for combining design, simulation and optimization for additive manufacturing via the 3DEXPERIENCE platform.

“Two to three years ago, there was a lot of hype for AM. When hype met reality, we got pulled in,” said Subham Sett, director of Additive Manufacturing & Materials at SIMULIA. “Things were failing, distorting. When we looked at it, we looked at it not just from the simulation side. The simulation side alone doesn’t help AM. It’s an ecosystem from design to simulating manufacturing to production.”

Sumanth Kumar, VP of SIMULIA Growth at Dassault Systèmes, said the journey for simulating AM is just beginning, and it’s already revealing some unexpected benefits.

“One side effect of the transformation in AM is that the designer or the engineer is getting very familiar with the value of simulation,” he said. “The analyst is now understanding how to design parts. So the silos we’ve had are changing. Customers such as Airbus have made monumental changes in their organization. The boundaries between various roles are changing. It’s a fantastic side effect I’m seeing from all this transformation.”

In the follow-up to this article, we’ll expand beyond AM to look at Dassault’s vision for advancing simulation that it shared during the Science in the Age of Experience 2017 conference. Additive manufacturing is just one disruption companies need to respond to as we enter what the company calls the Age of Experience.

Readying Your Robots and Workforce for Industry 4.0

Industry 4.0 may seem more conceptual than real. More fantastical future than practical solution. For many manufacturers, the industrial internet of things (IIoT), cyber-physical systems, cloud robotics, fog computing, and big data can be intimidating. Visions of a smart factory can make us feel pretty dumb.

The smart factory connects the digital world of information technology with the physical world of operational technology, what many call IT/OT convergence. But Industry 4.0 is not a distant vision for the factory of the future. It is here and it is now. Networks of robots are connecting to the cloud and contributing massive amounts of insightful data. Today, manufacturers are using these information pipelines to simplify asset management and maintenance, maximize equipment and process efficiency, and improve product quality.

We’ll explore how one of the Motor City’s Big Three automakers is capitalizing on its connected robots to not only prevent downtime, but predict failures before they occur. We’ll demonstrate how you can put practical tools to work today to prepare your factory and your workforce for a future propelled by connectivity and collaboration.

Stop Downtime Before It Occurs

Cloud-connected welding robots on an automotive assembly line.

General Motors is putting IoT and the building blocks of Industry 4.0 to work – today. The automaker’s robot supplier and strategic partner, FANUC America Corporation, is helping GM build a strong foundation for smart manufacturing. GM, FANUC, and networking giant Cisco together developed the Zero Down Time (ZDT) solution. ZDT uses a cloud-based software platform to analyze data collected from robots across GM’s factories in order to detect potential problems that could lead to production downtime.

In automobile manufacturing, where a new car body comes down the assembly line every 60 or 90 seconds, downtime can cost OEMs over $20,000 a minute. A single downtime event could easily rack up millions in losses. When lines screech to a halt, those backups can impact the entire supply chain, further compounding the losses. The delays also trickle down to customers, the automotive dealers, fleet users, and the car-buying public.

“We’ve had initiatives ongoing for some time now trying to better predict and maintain the health of our manufacturing equipment,” says Marty Linn, Manager of Advanced Automation Technologies and Principal Engineer of Robotics at General Motors Co. in Detroit, Michigan. “We got together with FANUC and talked about what we could do to avoid issues while we’re doing productive manufacturing. This wasn’t some great vision for Industry 4.0. It was about what we can do to eliminate downtime in our plants from unpredicted maintenance.”

A ZDT pilot program was launched at GM in 2014. The strategic partnership between GM and FANUC was a key enabler for the successful launch. The history between the two companies dates back to the early 1980s, when GM entered into a joint venture with the Japanese robot manufacturer to form GMFanuc Robotics Corporation to develop and market robots in the U.S. The venture would later be divested, but the strong relationship continued. It’s a long history we highlighted in last month’s article, The Robotmakers – Yesterday, Today and Tomorrow.

Of the approximately 35,000 robots GM has deployed worldwide, about 95 percent are FANUC robots, says Linn. And that number continues to grow.

“We’re integrating robots as we speak. Every day we have robots and systems from our integrators that get shipped to the plants as we’re introducing new products and new programs.”

Currently, GM has over 8,500 robots connected to FANUC’s ZDT platform. More robots are hooked up to the cloud every day.

ROI

Every day, ZDT is making a difference on the plant floor. Linn says GM has been able to avoid over 100 significant unscheduled downtimes since the program’s inception.

“That avoids on the order of 6 to 8 hours of unscheduled downtime depending on what was going to fail. You can do the math, it’s a lot. It’s a big deal to us for any of our facilities, but especially in our high-volume truck and SUV plants, where each downtime event is significant.”
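
Linn’s “do the math” invitation is easy to take up. Here is a minimal back-of-the-envelope sketch in Python using only the figures cited in this article (over $20,000 per minute of downtime, 6 to 8 hours saved per avoided event); across the more than 100 avoided events Linn describes, the implied savings run well into the hundreds of millions of dollars.

```python
# Rough check using the article's own figures: downtime costs over
# $20,000 a minute, and each avoided event saves 6 to 8 hours.
COST_PER_MINUTE = 20_000  # dollars, lower bound cited in the article

for hours in (6, 8):
    savings = hours * 60 * COST_PER_MINUTE
    print(f"{hours} hours of downtime avoided = ${savings:,}")

# 6 hours of downtime avoided = $7,200,000
# 8 hours of downtime avoided = $9,600,000
```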

With thousands of robots connected and communicating with the cloud, it wasn’t long before GM began realizing its return on investment. FANUC won GM’s 2016 Supplier of the Year Innovation Award for the ZDT solution. The robot maker was the only non-vehicle-component supplier to win the esteemed award.

“This is not ‘Jetsons’ technology,” says Linn. “This is using big data, the internet of things, new algorithms, computer capacity, all those things that have evolved over the past years, and using them in the most efficient way. Preventing downtime, and anticipating or even forgoing maintenance until it’s needed, is huge!”

How It Works

ZDT includes both hardware and software platforms. The robots are connected via Ethernet, usually to the local workcell network. Then the local workcell network is connected to the plant network.

Data is collected on each robot and fed into the robot controller software. Data collector hardware located in the production plant then gathers the data from all of the robot controllers across the plant. Once the data arrives in the data collector, it is securely transferred to the Cisco Cloud. All of the data is fully encrypted by Cisco software as it’s transmitted to the cloud.

Analytics software developed by FANUC analyzes the data coming into the cloud databases for potential areas of concern. Once an anomaly is detected or specific criteria are met, an email alert is sent to the FANUC Service Center so that parts and service can be dispatched for preventive maintenance. Email alerts are also sent to predesignated customer personnel, so they can take appropriate action to address the issue before downtime occurs.
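
To make the data flow concrete, here is a minimal sketch of a collect-analyze-alert loop of this general shape: readings stream in per robot, a statistical baseline accumulates, and an out-of-band reading triggers a notification. The robot names, metric, threshold, and alert hook are hypothetical; the analytics FANUC actually runs in the Cisco Cloud are proprietary.

```python
# Hypothetical sketch of a ZDT-style collect-analyze-alert loop.
from statistics import mean, stdev

history = {}  # (robot_id, metric) -> list of past readings

def ingest(robot_id, metric, value, sigma=3.0):
    """Record a reading; return an alert if it sits far outside the baseline."""
    past = history.setdefault((robot_id, metric), [])
    alert = None
    if len(past) >= 30:  # wait for enough history to form a stable baseline
        mu, sd = mean(past), stdev(past)
        if sd > 0 and abs(value - mu) > sigma * sd:
            alert = (robot_id, metric, value, mu)
    past.append(value)
    return alert

def notify(robot_id, metric, value, baseline):
    # Stand-in for the e-mail alerts sent to the FANUC Service Center
    # and to predesignated plant personnel.
    print(f"ALERT {robot_id}: {metric}={value:.1f} (baseline {baseline:.1f})")

# Simulated feed: 40 normal torque readings, then one outlier.
for v in [50.0 + 0.1 * (i % 5) for i in range(40)] + [58.0]:
    result = ingest("robot-017", "axis3_torque", v)
    if result:
        notify(*result)
```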

“A crucial element of ZDT is the ability to predict failure in advance,” says Jason Tsai, Vice President of Product Development at FANUC America Corporation in Rochester Hills, Michigan. “This is a challenging task.”

Tsai explains that the ability to predict a failure up to two weeks in advance may not be good enough, given 24/7 production schedules. That short a window can still cost production downtime.

“If you’re able to predict three or four weeks earlier, then you have a chance to schedule the replacement during the weekend to avoid downtime.”

FANUC offers two options for data collection. Customers can either use the data collector hardware supplied by Cisco, or they can use their own server provided it meets certain system requirements. In that case, ZDT data collector software would need to be loaded onto the customer’s server so that it is capable of the same functionality as the Cisco hardware. The Cisco data collector, called UCS™ for Unified Computing System, is designed to cover an entire plant of up to 1,000 connected robots.

Diagram showing cloud-connected software platform for collecting and analyzing massive amounts of data from thousands of robots

“Robots might be the most reliable piece of automation we have in our assembly plants,” says GM’s Linn. “But the robots still wear out. There are problems with cables that break. You have gearboxes that wear out. We tend to run our robots very hard. We run several of our plants on three shifts a day. That means you don’t have as much time for regularly scheduled maintenance.

“For example, we might have a robot with a gearbox that is going to fail,” he says. “The robot does a test. It determines that the gearbox is going to fail. We would then schedule during the next available downtime to swap out that gearbox.”

Maintenance Only When Needed

GM started slowly, connecting a couple thousand robots over the first year or two. But by fall 2016, GM had over 6,000 robots connected to the ZDT platform, and just six months later over 8,500. Right now, the solution is focused on FANUC robots and FANUC robot-controlled processes. There’s no intention to connect robots made by other manufacturers.

“We’ve probably used every manufacturer’s robots since robotics started, so there are still some other manufacturers’ robots that we’re using in production today,” says Linn. “But all of our new robot purchases for more than a decade have been with FANUC.

“We started out slow with the deployment. When we saw a problem, we went in and replaced parts. Then we studied those parts. Sure enough, we’ve been able to validate and verify that those parts were going to fail. They were going to cause us downtime. It was at that point when we were able to reduce unscheduled maintenance events that everybody got really excited and started saying, this is great, what else can we do with it?”

GM is also using ZDT to schedule maintenance only as needed, rather than being beholden to routine maintenance schedules.

“For example, a robot might be designed to have routine scheduled maintenance at 1,000 hours. So we would plan to maintain it at that time,” says Linn. “But it might actually last 1,250 hours before it needs maintenance. So we’re working on getting away from fixed maintenance schedules to instead schedule as needed. This is one of the major ways you can find significant savings going forward.”
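
The arithmetic behind that shift is simple, but the savings compound across thousands of robots. Here is a minimal sketch using only the 1,000-hour and 1,250-hour figures from Linn’s example; the 30,000-hour horizon is an arbitrary illustration.

```python
# Toy comparison of fixed-interval vs condition-based maintenance,
# using Linn's example numbers.
NOMINAL_INTERVAL = 1_000  # hours, fixed schedule
OBSERVED_LIFE = 1_250     # hours, what condition monitoring shows
HORIZON = 30_000          # hours of production (illustrative)

fixed = HORIZON // NOMINAL_INTERVAL      # 30 service events
as_needed = HORIZON // OBSERVED_LIFE     # 24 service events
print(f"fixed schedule:  {fixed} service events")
print(f"condition-based: {as_needed} service events")
# Six fewer stoppages over the horizon, each one saved labor,
# parts, and lost production time.
```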

Machine Learning

ZDT doesn’t only apply to robots. It’s also applicable to process equipment – processes that are directly controlled by the robot, such as welding, painting and some dispensing applications. Linn cites GM’s automotive paint shop as an example.

“Looking at air pressures, looking at downdraft pressures, the speeds of the actuators that are dispensing the paints, looking at a lot of the paint processes and the parameters that go into that, we’re able to monitor the health of the equipment and therefore the quality of the job.”

Finish quality is crucial in the automotive paint shop. All of FANUC’s new paint robots are ZDT-ready, which means they can monitor a variety of functions including paint canisters, spray applicators, regulators, and drive health.

“If you count the total number of moving parts that relate to automotive painting, there are over 200 moving parts per robot,” says FANUC’s Tsai. “A significant number of those moving parts are related to the process-specific devices controlling the gun, regulator and pressure. If any one of those devices has any kind of premature failure, it can cause a quality issue and/or production downtime.”

Right now, GM is using ZDT as more of a predictive maintenance tool as opposed to an in-process adaptive tool. But as the technology evolves, and more data is collected and analyzed, and the algorithms get more sophisticated, you can see how with machine learning it could become more of an adaptive tool for real-time process improvement.

“In the case of the paint shop, by recognizing and understanding that there are very subtle process changes going on and correcting for those, we’re able to improve our processes,” says Linn. “We want to expand this strategy of having the equipment be smart, able to diagnose itself, and notify us of changes to its operational performance, so that we can go in during the opportunities that suit us to make adjustments or repairs as needed.”

Automotive and Beyond

FANUC’s ZDT analytics solution is monitoring over 10,000 cloud-connected robots at customers’ facilities around the world, and growing every day. Right now it’s only used in the automotive industry, but FANUC plans to release software and hardware support for general industry, non-automotive customers in late 2017.

“Our solution needs to be scalable for small general industry manufacturers with two to three robots,” says Tsai. “The way you install the software and set up the hardware needs to be plug and play for the smaller manufacturer, because they don’t have the IT department to support it.”

As part of their standard product, FANUC’s data collection software is preloaded on the robot controller before the robot is shipped. For legacy robots, FANUC supplies the ZDT software which needs to be loaded onto the robot controller.

“At least 80 percent of those 10,000 robots that are already connected are legacy robots,” says Tsai. “We’ve been successful loading those controllers with the software; it’s straightforward.”

Eventually, FANUC and Cisco intend to use this data communication highway developed for ZDT to connect other equipment beyond robots. ZDT is part of the FANUC Intelligent Edge Link and Drive (FIELD) system, which provides the open software platform that allows for advanced analytics and deep learning capabilities for FANUC CNCs, robots, peripheral devices, and sensors used in automation systems. FIELD is based on edge computing technology where a large amount of data is processed within the manufacturing site at the edge of the network, thereby minimizing the volume and cost of sharing data.

“With the ZDT Cloud solution the data is flowed from the devices on the production floor all the way to the cloud, where you will have latency or delays,” explains Tsai. “The benefit of receiving the data on the floor using the FIELD platform is then you can respond to the event in real time, and that’s what the FIELD does. It’s a piece of open platform software that can be loaded into computing hardware, which then allows you to access data from the robot, the PLC (programmable logic controller), or the machine tool device, and apply analytics in real time. It can even change your production based on how you behave. That’s where real-time machine learning can provide good value.

“Industry 4.0 is not just a dream. It’s real,” says Tsai. “It’s an exciting time for automation.”

Exciting indeed, as more robot manufacturers introduce their own IIoT solutions for embracing the level of connectivity heralded by Industry 4.0.

ABB’s Ability™ Connected Services allow manufacturers to harness the full value of intelligence from single robots to entire fleets, using real-time data from intuitive dashboards to improve robot system performance and reliability. The technology is in action at a smart factory manufacturing circuit breakers in Heidelberg, Germany. Connected Services are part of the ongoing evolution of the Remote Services platform, which ABB introduced in 2006 – years before the term ‘Internet of Things’ was coined. Today, ABB has thousands of remotely monitored robots and offers fully integrated IIoT solutions.

Robot Data at Your Fingertips

KUKA Connect is a brand-new platform officially released earlier this year. The cloud-based software platform allows customers to easily access and analyze data from their KUKA robots on any device, anywhere at any time.

The solution provides three main functions: asset information management, condition monitoring, and maintenance alerts. KUKA Connect leverages cloud computing technologies and big data analytics to provide customers with insightful information about their connected robots.

“If you are a big OEM customer, you may have thousands of robots in one facility,” explains Andy Chang, Director of Product Marketing, Americas, at KUKA in Austin, Texas. “The way they manage asset information management today is by using giant Excel spreadsheets that are manually maintained. The information on the spreadsheet may or may not be accurate and they may not actually know what type of robots they have.”

Chang says that not having the right asset information could impact how well the robots are maintained over their lifetime.

“By using a tool like KUKA Connect, you can easily look up all the thousands of robots around the plant and navigate through them individually to see when they were commissioned, check serial numbers, and what software is currently installed without having to physically walk up to the machines.”

For condition monitoring, KUKA Connect provides specific robot KPIs (key performance indicators) to help technicians and the maintenance crew gauge how well the robot is performing. Chang describes an example.

“We provide temperature charts for all the different axes on a robot. So if the production person or maintenance person starts observing that the trend of the temperature graph for a particular axis has been going up over the last week, that probably means a couple of things. One, the gearbox is probably overheating for whatever reason, or two, maybe the payload has to change. The object that the robot is picking up is maybe not what the machine was designed for.”
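
A minimal sketch of what such a trend check might look like follows; the readings, axis, and alert threshold are invented for illustration and are not KUKA’s actual analytics.

```python
# Fit a least-squares slope to a week of per-axis temperature
# readings and flag a sustained rise. Values are hypothetical.
def slope(values):
    """Least-squares slope of equally spaced samples (units per sample)."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

axis2_temps_c = [41.0, 41.4, 42.1, 42.6, 43.5, 44.2, 45.1]  # one per day
if slope(axis2_temps_c) > 0.5:  # degrees C per day, illustrative threshold
    print("Axis 2 temperature trending up: check gearbox and payload")
```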

KUKA Connect is designed to work on desktops as well as mobile devices, smartphones or tablets, anything that supports a web browser.

Industry 4.0 hand-held interface for the software platform.

Currently, KUKA Connect only supports KUKA robots. Intuitive dashboards help visualize data according to specific criteria. Whether you’re trying to optimize your maintenance schedule or manage spare parts inventory, all the data is at your fingertips, so you can anticipate potential downtime and take steps to fix issues before downtime occurs.

“Today, there are two ways that KUKA Connect provides information,” says Chang. “One is very literal. When there is a controller error message, we provide the real-time notification to the user with the error code and the error description, so it’s very dynamic. The second part is more passive. We present the data to the end user and then they will need to interpret what that means for their robot, their line, their factory.”

The software platform not only interfaces with the robots, it will also monitor the automation equipment controlled by the robot controller, such as a welding gun or gluing gun, or even an additional axis if the robot is on a rail.

“Any information that is controlled or aided by the robot control will be part of the platform,” says Chang. “That’s something we are currently working on, is the ability to actually visualize the robot data in conjunction with the process-specific data, so the end user can understand not only the health of the mechanics of the machine, but also the key performance of the process itself.”

How It Works

KUKA Connect leverages a fog computing device, which uses end-to-end application software to securely discover, communicate, and transfer robot data to the cloud.

“In order to connect all the robots together, you need a physical device,” explains Chang. “We partnered with a company called Nebbiolo Technologies. They provide a fogNode™ (edge computing device) that is built on Fog OS (operating system) technology. Essentially it’s a real-time PLC and a network bridge two-in-one, so it can intelligently understand the network infrastructure, separate the firewall between the production firewall and IP firewall, as well as have the processing power to do some data ingestion onsite. Then on the other end it will send information to the cloud.”

IIoT software interface showing assets, conditions, and alerts on any device, anytime.

The fog computing device resides in the customer’s factory. KUKA Connect is a subscription-based service. All that is needed is a user account in order to log in and see the robots that are connected to that fogNode. Each node can support up to 60 KUKA robots.
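
To make the division of labor concrete, here is a minimal sketch of the aggregation pattern an edge node of this kind enables: raw samples stay on the node, and only compact summaries travel to the cloud. The function names and payload format are hypothetical, not Nebbiolo’s or KUKA’s actual interfaces.

```python
# Hypothetical edge-node aggregation: ingest raw samples locally,
# forward only per-robot summaries upstream.
import json
from collections import defaultdict
from statistics import mean

raw = defaultdict(list)  # robot_id -> raw samples retained on the node

def ingest(robot_id, value):
    raw[robot_id].append(value)

def cloud_summary():
    """Reduce locally held raw data to compact per-robot summaries."""
    return json.dumps({
        rid: {"n": len(v), "mean": round(mean(v), 2), "max": max(v)}
        for rid, v in raw.items()
    })

for i in range(60):  # one reading from each robot behind this node
    ingest(f"robot-{i:02d}", 40.0 + 0.1 * i)
print(cloud_summary()[:72] + " ...")
```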

KUKA Connect works with the robot manufacturer’s KR C4 controllers and supports KUKA.SystemSoftware (KSS) version 8.3.20 and above. Chang notes that it’s the robot controllers that are actually talking to one another. Any robot that is connected to those controllers will work. But currently, KUKA does not support older legacy systems.

“We’re exploring some mechanisms in which the customer experience would be consistent (across generations of controllers and system software versions). It wouldn’t matter if you have a KR C2 versus a KR C4 controller.”

“We want to make sure when customers purchase KUKA Connect, that hardware implementation is easy. It’s plug and play,” says Chang. “There’s no software to load (every robot sold in the last year already has the software loaded on the controller). All our customer needs is a user account.”

KUKA offers an instructional video to help users get started.

“When we designed and built this product, one of the key requirements was to lower the barrier to entry. We wanted to make sure KUKA Connect was user friendly, from user deployment to daily usage. The benefit of leveraging the cloud is that we can continue to push new features and new functionality to all of our customers as soon as we have it. We want to make sure when our customers want to have access to the latest and greatest technologies, they can have it right away. This is the platform we will continue to build upon.”

Born and Bred in Austin

You might be surprised to learn that KUKA Connect was born in the Lone Star State. It’s the product of a German robot manufacturer recently acquired by Chinese appliance maker Midea. This new KUKA division in Austin, Texas, was established in 2015 and is part of KUKA AG, the holding company in Augsburg, Germany.

So why Austin? Chang says the city has developed into a major technology hub.

“We’ve definitely seen momentum over the last 10 years. And similar to California, we have a really good education system with The University of Texas at Austin just down the street, and then Texas A&M and Rice University. Compared to other places like Silicon Valley and Boston, the cost is very attractive from a business standpoint. There is definitely a tremendous robotics community that’s been brewing.”

Chang says KUKA Connect was envisioned, developed and released all in the Austin office. They recently exhibited at SXSW 2017, where their bottle flipping robot stole the show.

Workforce Development for Industry 4.0

To truly realize the vision of Industry 4.0 and the smart factory, we will need vast talent pools across multiple geographic regions and industries to bridge the growing skills gap. One company is helping to build that talent pool and nurture the skills required for the factory of the future.

Festo is a leading global manufacturer of pneumatic and electromechanical systems, controls, and components for process control and factory automation solutions. The German automation supplier has already put Industry 4.0 into practice at its own factory, the Scharnhausen Technology Plant, where it makes valves, valve terminals, and electronics. Its subsidiary, Festo Didactic, is a worldwide leader in industrial education for technical training institutions and manufacturing companies.

At the Automate conference in April, Ted Rozier, Engineering Development Manager for Festo Didactic Inc. in Eatontown, New Jersey, gave a presentation, “Growing the Next Generation Automation Capable Workforce,” noting that more than 300,000 U.S. manufacturing jobs go unfilled for lack of qualified candidates. That number is only expected to grow.

“It’s important to teach students to be familiar with the complete portfolio of automation hardware and the software,” says Rozier. “They need to understand the integration process of robots and PLCs, as well as be familiar with how IoT can enhance the complete process. This is common practice in Europe and we are looking to raise the visibility of this type of training in North America.”

Rozier stresses the importance of multidisciplinary learning, especially focusing on mechatronics.

“The Internet of Things has to thrive in order to assist in bringing life to Industry 4.0. To do that, you need a strong IT background and strong mechatronics background. We have the opportunity to breed individuals that can not only understand, but influence automation manufacturing processes from the top floor to the shop floor, from the IT level down to the sensors that assist in helping robotics make decisions. That’s an important skill going forward.”

Modular cyber-physical learning platform models a real production plant for students.

The Learning Factory

Festo Didactic provides “learning factory modules” for hands-on industrial training in mechatronics, control technology, and automation technology. The system starts with a single Project Workstation I4.0 for teaching the fundamentals of control technology. Several modules can then be added to create a complete learning CP (cyber-physical) Factory, which can include a realistic industrial pallet circulating system and an autonomous mobile robot to connect different workstations.

The system is completely modular, so individual workstations can be added, removed, and moved around as learning requirements change. Training topics include: PLC project engineering, working with human machine interfaces and RFID sensors, commissioning the web server and TCP/IP and OPC-UA interfaces, energy monitoring and management, working with intelligent process data modules, enterprise resource planning (ERP) systems, manufacturing execution systems (MES), and rapid prototyping.
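
As a taste of what that training covers, reading a sensor value over OPC-UA takes only a few lines with the open-source python-opcua library; the endpoint URL and node ID below are placeholders, not addresses of actual Festo equipment.

```python
# Minimal OPC-UA read, assuming the python-opcua package (pip install opcua).
from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")  # placeholder endpoint
client.connect()
try:
    node = client.get_node("ns=2;i=2")          # placeholder node ID
    print("sensor value:", node.get_value())
finally:
    client.disconnect()
```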

Community colleges with manufacturing and STEM tracks are using Festo’s CP factory modules to prepare their students for jobs immediately upon graduation. York Technical College in Rock Hill, South Carolina, has an entire room devoted to the CP Factory, with about six modules. Companies like Mercedes-Benz in Vance, Alabama, are using Festo learning equipment to enhance their workers’ knowledge.

Festo Didactic has also opened a learning center in Mason, Ohio, where it hosts training and apprenticeship programs. Trainers from Festo’s Mason and Eatontown facilities also travel the country providing training for vocational programs and corporate continuing education.

We must prepare our workforce for the new ways in which humans, machines and data will interact in a hyperconnected world. There are steps you can take now to prepare for Industry 4.0 and the smart factory. RIA member suppliers can help you get started on the right path.

>> Reposted article by Tanya M. Anandan, Robotics Industry Insights, 5/25/2017

Smart Factories Could Add $500 Billion to Global Economy in 5 Years

Capgemini, a global leader in consulting, technology and outsourcing services, has today announced the findings of its Smart Factories report. According to the research from Capgemini’s Digital Transformation Institute, manufacturers expect that their investments in smart factories will drive a 27% increase in manufacturing efficiency over the next five years, which would add $500 billion in annual value to the global economy.

Often described as a building block of the ‘Digital Industrial Revolution’, a smart factory makes use of digital technologies including the Internet of Things, Big Data Analytics, Artificial Intelligence and Advanced Robotics to increase productivity, quality and flexibility. Smart factory features include collaborative robots, workers using augmented reality components and machines that send alerts when they need maintenance. By the end of 2022, manufacturers expect that 21% of their plants will be smart factories. Sectors where people work alongside intelligent machines, such as aerospace and defense, industrial manufacturing and automotive, are expected to be the leaders of this transition.

Digitization of factories is a necessity

As a result of productivity, efficiency and flexibility improvements, smart factories will benefit from significant reductions in operating costs. For example, the report estimates that the average automotive manufacturer could drive up to a 36% improvement in operating margin through improved logistics and material costs, better equipment effectiveness and improved production quality. As such, the majority of industrial companies have already embarked upon the digitalization of their plants to stay competitive; only 16% of those surveyed say they don’t have a smart factory initiative in place or upcoming plans to implement one.

Early adopters, including factories in the US and Western Europe, are leading the pack; half of respondents in the US, France, Germany and the UK have already implemented smart factories, as opposed to 28% in India and 25% in China. A divide is seen across sectors as well; 67% of industrial manufacturing and 62% of aerospace and defense organizations have smart factory initiatives. Yet only a little more than a third (37%) of life science and pharma companies are leveraging digital tech, opening their businesses up to industry disruption.

Money is pouring into smart factories; more than half (56%) of those surveyed have invested $100 million or more in smart factory initiatives over the past five years, and 20% have invested $500 million or more. Yet, according to analysis by Capgemini’s Digital Transformation Institute, only a small number of organizations (6%) are at an advanced stage of digitizing production. Further, only 14% of those questioned stated that they felt ‘satisfied’ with their level of success.

As manufacturers’ smart factory efforts ramp up and returns improve, the report predicts further investments in digitization. The upper end of the Digital Transformation Institute’s forecast is that half of factories could be smart by the end of 2022, with the increased productivity gains adding up to $1,500 billion to the global economy.

“This study makes it clear that we are now in the digital industrial revolution. The impact on overall efficiency will be profound,” said Jean-Pierre Petit, Global Head of Digital Manufacturing at Capgemini. “The next few years will be critical as manufacturers step up their digital capabilities and accelerate their digital outcomes to maximize company benefits.”

Smart Factories will change the skill demand globally

The shift to smart factories will transform the global labor market, and while previous waves of automation have reduced low-skill jobs, organizations have recognized the skills imperative and are now acting on it.

Respondents see automation as a means to remove inefficiencies and overheads, rather than jobs, so more than half (54%) of respondents are providing digital skills training to their employees and 44% are investing in digital talent acquisition to bridge the skill gap. For highly skilled workers in areas such as automation, analytics and cyber security, there are even more employment opportunities.

Grégoire Ferré, Chief Digital Officer at Faurecia and Capgemini client said, “At Faurecia, we are seeing the greatest success in our employees working alongside intelligent tech. For example, we use smart robots in our business where there are ergonomic issues, ultimately creating a safer environment for workers and it gives them time back to focus on other, more-important tasks.”

On Faurecia’s smart factory plans, he added: “Launching Greenfield smart factories as well as digitizing Faurecia’s more than 300 plants is a key building block of our digital transformation program. We are also seeing success in ‘revamping’ old processes to be more efficient, for example making our shop floor paperless, or using technology as part of our predictive maintenance scheme – all of which save our employees time.”

Capgemini’s Smart Factories Report Methodology

The research, conducted from February to March 2017, surveyed 1,000 executives holding director or above rank in manufacturing companies with a reported revenue of more than $1 billion each. The research was conducted across six sectors: industrial manufacturing, automotive & transportation, energy & utilities, aerospace & defense, life science & pharmaceuticals and consumer goods. Directors from the US, UK, France, Germany, Sweden, Italy, India and China were interviewed in both qualitative and quantitative interviews.

>> Read more from Capgemini Press Release, May 15, 2017

Embedded Vision Champions Design Flexibility, Ease of Use

Call it the trifecta of machine vision. Customers across any number of industries want vision systems that are smaller, cheaper, and more powerful/faster. Embedded vision heeds the call with small cameras and application-specific processing, resulting in a compact system that is big on processing power but low on per-unit cost and energy consumption.

“Embedded vision expands the reach of robust computer vision from industrial applications to areas outside the factory where smaller processors are a better fit than classic PCs,” says Matthew Breit, Senior Consulting Engineer & Market Analyst at Basler Inc. (Exton, Pennsylvania). Application examples include portable medical devices, mobile robotics, and passport kiosks.

In order for these applications to come to fruition, manufacturers of embedded vision components are providing flexibility — and the necessary resources — to the embedded system designer or integrator.

“Today if someone is building an embedded vision system, they would buy only the image sensor and task their engineering team with designing the supporting electronics and firmware,” Breit says.

That process can be time-consuming, taking weeks, months, and even longer depending on requirements of the project. “Our job is to look at that embedded designer’s situation and find ways to make their life easier, such as taking care of the entire image sensor integration inside the camera itself,” Breit says.

The Basler dart board-level camera, measuring 27 mm x 27 mm and weighing 15 g, aims to make a wide range of embedded vision systems possible. The camera offers two interfaces: USB 3.0 and BCON, Basler’s proprietary interface based on LVDS (low-voltage differential signaling).

“Besides taking on the sensor integration, BCON allows the embedded system designer to access the camera in a very direct way without the overhead of a PC-style interface like USB 3.0 or Gigabit Ethernet,” Breit says. “The result is that instead of handling a raw sensor, the designer can integrate a completely finished camera module as they would do with any other electrical device. That means adding vision with much less effort than before.”

Starting in the fall, Basler will offer an extension module that lets users operate the dart via the MIPI/CSI-2 camera interfaces, the most common interfaces for embedded systems.

The dart is used in a broad range of applications, including automated license plate reading. “We’re also seeing a larger demand for handheld medical devices,” Breit says. “It’s much easier on the patient to sit next to a doctor with a handheld scanner versus sitting in a large noisy machine.”

Furthermore, embedded vision opens the door to applications that once seemed unachievable. “We’ve all encountered ‘dream’ applications over the years, whether it’s a camera in a refrigerator, or the idea of a prosthetic eyeball. But now some of those dreams are getting closer to becoming reality. Of course, ‘bionic eyes’ are still down the line, but you can buy the refrigerator today.”

He cites the prospect of interactive digital signage. “When a company pays for advertising space at the bus stop, they usually put up a poster and that’s the end of it,” says Breit. “But imagine the potential of actually engaging the person while they’re waiting.”

In this scenario, a camera would identify who is looking at the ad and gauge their reactions. Even a few years ago, an application like that would require a PC, cables, and lighting. “But now, since the power of the processors has improved, and the size and the cost of everything has gotten smaller, we’re seeing signage like this today in our shopping malls,” Breit says.

Flexing the Embedded Muscle

For the embedded vision systems and smart cameras it develops, Teledyne DALSA (Waterloo, Ontario) emphasizes flexibility, ease of use, and a small footprint. The GEVA 3000 embedded vision system provides an alternative to standard PC systems for inspection tasks in harsh industrial environments. GEVA, which accommodates Teledyne DALSA’s Genie Nano GigE Vision CMOS area scan camera, offers customers a choice between two application software suites: the wizard-based iNspect for users requiring easy setup, and Sherlock, for end-users who need flexibility in creating a graphical application.

Meanwhile, BOA Spot vision sensors combine Teledyne DALSA’s BOA vision system with integrated LED lighting, lens cover, and software. The resulting system is low cost, quick to set up, and easy to integrate with equipment on the factory floor. Accessible through a simple point-and-click interface, BOA Spot’s embedded vision tools enable automated inspection and identification applications.

“Barcode reading is one of the biggest sellers for us, and we have an improved version that runs on BOA Spot,” says Bruno Menard, Software Program Manager, Smart Products Division at Teledyne DALSA. “It is much faster, more robust, and has high read rates.”

Deployment of embedded vision systems that use infrared imaging, whether for inspection in a tight industrial space or on a surveillance drone, also is on the rise. In response, Teledyne DALSA introduced the Calibir uncooled long wave infrared (LWIR) camera, which measures 29 mm by 29 mm. While Calibir supports GigE Vision output, customers who want to have a different interface like analog or USB3 Vision can use Teledyne’s Engine as the front-end architecture and plug into their own backend.

In addition to traditional machine vision applications, Calibir is being used in outdoor applications that demand a small form factor, including solar panel inspection by an IR camera-equipped drone and night vision for hunting, where the camera is mounted on the firearm.

Teledyne is showing customers the opportunities offered by embedded vision at its Imaging Possibility hub. “Smart cameras coupled with IoT [Internet of Things] will bring many possibilities,” Menard says. “For homeowners, it might start with remote home surveillance and control.”

Diving into Deep Learning

While many embedded vision installations are taking place outside the factory, the technology is finding its stride in some manufacturing applications — key among them robotics. As a turnkey machine vision integrator in 2D and 3D applications, Integro Technologies (Salisbury, North Carolina) integrates embedded industrial vision that acts as the robot’s eyes for tasks such as pick and place, load and unload, and vision-based quality inspection.

“Robot-mounted vision is not embedded to the degree that a driverless car is, but it serves the same purpose with many of the same software tools and can greatly enhance automation capability,” says Scott Holbert, Sales Engineer at Integro Technologies.

Holbert sees another trend that could enhance industrial inspection: deep learning. “The computer is using smarter and smarter algorithms and interfaces to discern what it is looking at,” says Holbert. “Machine vision and embedded vision systems, without being programmed like a traditional computer, are learning by example and increasing their accuracy over time.”

Unlike traditional machine learning, which relies on manual feature extraction, deep learning in machine and embedded vision learns features directly from images. “Whenever you have a human making a decision based on what they see, it opens you up to a lot of variability,” Holbert says. “We have cameras that can see better than the human eye, computers that make decisions very quickly, and systems that continue to learn and outperform humans doing certain tasks.”
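
A minimal sketch of that difference, assuming PyTorch: the convolution filters below are learned from labeled examples during training rather than designed by hand. The architecture and the pass/fail labels are illustrative only.

```python
# Tiny CNN: feature extraction is learned, not hand-coded.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learned filters, layer 1
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # learned filters, layer 2
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # e.g. pass / fail
)

batch = torch.randn(4, 1, 64, 64)  # four fake grayscale inspection images
print(model(batch).shape)          # torch.Size([4, 2])
```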

As cameras continue to shrink in size and cost without sacrificing power, the Embedded Vision Alliance projects a “rapid proliferation of embedded vision technology into many kinds of systems.” Because of economies of scale, the automotive, medical, and retail sectors will continue to drive embedded vision development.

The potential for embedded vision is vast, and embedded designers will develop creative and unique solutions as time goes on. “This is the most exciting aspect for me,” says Basler’s Breit. “Imagine if you put the best brushes and paints in the hands of these artists. What masterpieces will we see?”

>> Reposted article by Winn Harden, VisionOnline, 5/17/17

Universal Robots Academy Offers Do-It-Yourself Robot Programming

Collaborative robots are defined by a number of characteristics, from human friendliness to flexibility. One of their hallmarks is the ease of configuration. Unlike the caged automotive robots for welding and painting, collaborative robots don’t require a full-time programmer on hand. You don’t need to be a software engineer to get your collaborative robot picking and packing.

Universal Robots has developed online training modules to further lower the training barrier to robot deployment. The modules are offered free of charge and open to all. Anyone with a desire to learn the concepts of collaborative robots can log in to the Universal Robots Academy and get the introduction necessary to master basic programming skills – and actually, it’s more configuration than programming.

Universal Robots training module with robot and conveyor.
(Source: Universalrobots.com)

The six easy online training modules are built to deliver hands-on learning via interactive simulations to maximize your engagement. The free online training is open to everybody in these languages: English, Spanish, German, French or Chinese. The modules include:

  • First look: Features and terminology
  • How the robot works
  • Setting up a tool
  • Creating a program
  • Interaction with external devices
  • Safety settings

The training sessions are designed to give new users a feel for what it takes to deploy a collaborative robot. While you can’t replace hands-on experience with actual robots, this program teaches the basic concepts and tests students as they go along. Whirlpool is one of the large customers using this training.

>> Learn more at Universal Robots Academy

>> Read more by Rob Spiegel, Design News, April 24, 2017