New Ethernet MCU Could Simplify Creation of Sensor Networks

A new family of microcontrollers (MCUs) from Texas Instruments Inc. promises to simplify the process of creating wired and wireless sensor networks for factory automation, building control, and other IoT applications.

SimpleLink MSP432 Ethernet MCUs are said to help design engineers more easily create Ethernet-based gateways, which will be increasingly important for applications with large numbers of sensors. “We solve the Ethernet hardware problem for our customers by putting everything into a single chip,” noted Dung Dang, product marketing engineer for Texas Instruments. “So they can shrink the size of their (circuit) board, spend less time debugging their layout, and spend more time focusing on their application.”

Texas Instruments is targeting their new Ethernet MCU at factory automation and building control applications. (Source: Texas Instruments Inc.)

The new MCUs are based on a 120-MHz Arm Cortex-M4F core. They are said to reduce design time and simplify board layout because they incorporate the Ethernet physical layer (PHY) and medium access control (MAC), along with USB and CAN. By integrating the MAC and PHY, in particular, the MSP432E411Y MCU eliminates the need for the developer to lay out a board with as many as 20 external components in order to accommodate Ethernet.

“By putting all that inside the chip, we’ve made it easier for the developer,” Dang told Design News. “So they don’t have to worry about all the intricacies of Ethernet IP.”

TI engineers believe the timing is right for the introduction of an integrated Ethernet MCU, given the rapid growth of the IoT. By the end of 2017, more than eight billion devices are expected to be connected to the Internet, a number forecast to climb past 20 billion in 2020 and 75 billion in 2035. As a result, experts expect end users to have a correspondingly greater need to manage and process data from sensor nodes and to transfer information to cloud-based servers.

“There are going to be more nodes,” Dang said. “They’ll be more complex and they won’t all speak the same languages. So you’ll need intelligent Ethernet gateways to help manage it.”
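To picture what such a gateway does in practice, here is a minimal sketch in Python (the endpoint URL, port, and packet format are illustrative assumptions, not part of TI’s offering): sensor nodes push newline-delimited JSON readings over TCP, and the gateway batches them up to a cloud server.

```python
import json
import socket
import urllib.request

CLOUD_ENDPOINT = "http://cloud.example.com/ingest"  # hypothetical ingest URL
BATCH_SIZE = 16

def forward_batch(batch):
    """Push a batch of sensor readings to the cloud as one JSON document."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def run_gateway(host="0.0.0.0", port=5000):
    """Accept newline-delimited JSON readings from sensor nodes and batch them."""
    batch = []
    with socket.create_server((host, port)) as server:
        while True:
            conn, _addr = server.accept()
            with conn, conn.makefile() as lines:
                for line in lines:
                    batch.append(json.loads(line))  # one reading per line
                    if len(batch) >= BATCH_SIZE:
                        forward_batch(batch)
                        batch = []

if __name__ == "__main__":
    run_gateway()
```

On an integrated part like the MSP432E411Y, the firmware equivalent of this loop would run against the on-chip MAC/PHY rather than a desktop socket API.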

The Ethernet MCU is part of TI’s SimpleLink MCU portfolio, which helps developers create connected products. The new Ethernet MCU is aimed at building control, factory automation and grid infrastructure applications, all of which are expected to have large numbers of so-called “edge nodes.”

With the introduction of the new Ethernet MCU, the company is counting on the fact that future developers will want to spend more time concentrating on those edge nodes, and less on Ethernet IP. “With this, you can accelerate your development time, and get a robust and reliable gateway up and running,” Dang said. “That way, you can spend more time differentiating your application.”

>> Read more by Charles Murray, Design News, November 29, 2017

Line of Sight: Eye Tracking Cuts Training, Boosts Safety on Factory Floor

To get where someone’s coming from, you’re supposed to walk a mile in their shoes. So by that rationale, to understand your workers’ point of view, you should spend some time in their eyeballs. With GoPros and smartglasses ubiquitously recording everyone’s POV, you’ll get the gist of their daily life, but you don’t get to see what they’re really focusing on in the picture.

For that, you’ll need wearable eye trackers, which look like smartglasses but do more: they stream POV video to a researcher’s tablet and precisely mark with a red dot where the wearer’s eyes are focused, and presumably, where their attention is.

Knowing that, Tobii Pro says, provides a window into the wearer’s behavior. This is why the company recommends deploying its Tobii Pro Glasses 2, in conjunction with the Tobii Pro Insight team, to get a baseline of how your assembly line or plant workers view and engage with their environment. From there, management can gather actionable intel, such as discovering inefficiencies to improve performance or unsafe practices to avoid accidents.

“We live life visually and through eye tracking we can produce a reliable barometer of processes, training and cognitive load,” says Tom Englund, president of Tobii Pro. “Our research consultants can apply the same eye tracking methodology to any business to ascertain the unique processes and skills needed for a more productive and safe work environment.”

One clear way is comparing an experienced worker’s eye-tracking data during an assembly to a novice’s.

“This will help you discover what is behind the best practices of the more experienced people on your team. You can turn their individual skills into the company’s knowledge so it can be transferred to new employees,” Tobii explains on its website.

This “gazeplot” indicates how a junior worker’s gaze pattern fared against a veteran’s.
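As a rough illustration of how such a comparison can be quantified (the gaze-sample format and area-of-interest rectangle below are assumptions for the sketch, not Tobii’s actual data format), one can compute how much of each worker’s gaze falls inside an area of interest:

```python
def dwell_fraction(gaze_samples, aoi):
    """Fraction of gaze samples falling inside an area of interest (AOI).

    gaze_samples: list of (x, y) points in normalized [0, 1] video coordinates.
    aoi: (x_min, y_min, x_max, y_max) rectangle, e.g. around a ladle spout.
    """
    x0, y0, x1, y1 = aoi
    hits = sum(1 for x, y in gaze_samples if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(gaze_samples) if gaze_samples else 0.0

# Toy comparison: a veteran fixates the spout more consistently than a novice.
spout_aoi = (0.40, 0.50, 0.60, 0.70)
veteran = [(0.50, 0.60), (0.52, 0.61), (0.49, 0.58), (0.51, 0.62)]
novice = [(0.50, 0.60), (0.20, 0.10), (0.80, 0.90), (0.51, 0.62)]
print(f"veteran dwell: {dwell_fraction(veteran, spout_aoi):.0%}")  # 100%
print(f"novice dwell:  {dwell_fraction(novice, spout_aoi):.0%}")   # 50%
```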

A recent study Tobii conducted at H&H Castings, a foundry in York, Penn., backed this up. With all that molten aluminum dripping down and fireballs billowing up throughout the facility, it’s not ideal for trainers to peer over a trainee’s shoulder while they learn to pour metal. With the glasses, a supervisor can see in real time exactly where the trainee’s attention is directed.

For instance, watching the ladle’s spout is critical to managing speed and avoiding spills. If the trainee’s gaze shifts, that’s an obvious teachable moment, one that would happen before someone literally becomes a lead foot.

Staying still while pouring is also pretty important; the wearable’s gyroscope senses movement, allowing the observer to know if a trainee is moving their head or body during the volatile process.

After the physical component of the study, which involved six workers filling, cleaning, transporting, and pouring, the video data was replayed so the workers could see where they needed to improve.

“We hope the eye tracking video will save us two days per employee. Ideally, this would save us 400 hours of training time per year in that department,” says Jacob Hammill, system manager of H&H Castings.


>> Read more by John Hitch, New Equipment Digest, November 29, 2017

Artificial muscles give soft robots superpowers

Soft robotics has made leaps and bounds over the last decade as researchers around the world have experimented with different materials and designs, allowing once rigid, jerky machines to bend and flex in ways that mimic living organisms and let them interact more naturally with their surroundings. However, increased flexibility and dexterity comes with a trade-off: softer materials are generally not as strong or resilient as inflexible ones, which limits their use.

Now, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created origami-inspired artificial muscles that give much-needed strength to soft robots, allowing them to lift objects up to 1,000 times their own weight using only air or water pressure. The study is published in Proceedings of the National Academy of Sciences (PNAS).

“We were very surprised by how strong the actuators [aka, “muscles”] were. We expected they’d have a higher maximum functional weight than ordinary soft robots, but we didn’t expect a thousand-fold increase. It’s like giving these robots superpowers,” says Daniela Rus, Ph.D., the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the senior authors of the paper.

Origami-inspired artificial muscles are capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure. (Credit: Shuguang Li / Wyss Institute at Harvard University)

“Artificial muscle-like actuators are one of the most important grand challenges in all of engineering,” adds Rob Wood, Ph.D., corresponding author of the paper and Founding Core Faculty member of the Wyss Institute, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). “Now that we have created actuators with properties similar to natural muscle, we can imagine building almost any robot for almost any task.”

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement; it is determined entirely by the shape and composition of the skeleton.

“One of the key aspects of these muscles is that they’re programmable, in the sense that designing how the skeleton folds defines how the whole structure moves. You essentially get that motion for free, without the need for a control system,” says first author Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL. This approach allows the muscles to be very compact and simple, and thus more appropriate for mobile or body-mounted systems that cannot accommodate large or heavy machinery.

“When creating robots, one always has to ask, ‘Where is the intelligence – is it in the body, or in the brain?’” says Rus. “Incorporating intelligence into the body (via specific folding patterns, in the case of our actuators) has the potential to simplify the algorithms needed to direct the robot to achieve its goal. All these actuators have the same simple on/off switch, which their bodies then translate into a broad range of motions.”

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

The structural geometry of artificial muscle skeleton determines the muscle’s motion. (Credit: Shuguang Li / Wyss Institute at Harvard University)

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight; a 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.
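Those lifting figures square with a back-of-envelope estimate: a vacuum actuator’s available force is roughly the pressure differential times the skin’s effective cross-sectional area. A quick sketch of the arithmetic (the area value is an assumption for illustration):

```python
# Back-of-envelope force estimate for a vacuum-driven actuator: F = dP * A.
ATMOSPHERE_PA = 101_325   # ~1 atm; the upper bound on the vacuum differential
area_m2 = 4e-4            # assumed 4 cm^2 effective skin cross-section

force_n = ATMOSPHERE_PA * area_m2   # ~40.5 N of available force
liftable_kg = force_n / 9.81        # ~4.1 kg, in line with grams lifting kilograms
print(f"max force ≈ {force_n:.1f} N, liftable mass ≈ {liftable_kg:.1f} kg")
```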

These muscles can be powered by a vacuum, a feature that makes them safer than most of the other artificial muscles currently being tested. “A lot of the applications of soft robots are human-centric, so of course it’s important to think about safety,” says Daniel Vogt, M.S., co-author of the paper and Research Engineer at the Wyss Institute. “Vacuum-based muscles have a lower risk of rupture, failure, and damage, and they don’t expand when they’re operating, so you can integrate them into closer-fitting robots on the human body.”

“In addition to their muscle-like properties, these soft actuators are highly scalable. We have built them at sizes ranging from a few millimeters up to a meter, and their performance holds up across the board,” Wood says. This feature means that the muscles can be used in numerous applications at multiple scales, such as miniature surgical devices, wearable robotic exoskeletons, transformable architecture, deep-sea manipulators for research or construction, and large deployable structures for space exploration.

The team was even able to construct the muscles out of the water-soluble polymer PVA, which opens the possibility of robots that can perform tasks in natural settings with minimal environmental impact, as well as ingestible robots that move to the proper place in the body and then dissolve to release a drug. “The possibilities really are limitless. But the very next thing I would like to build with these muscles is an elephant robot with a trunk that can manipulate the world in ways that are as flexible and powerful as you see in real elephants,” Rus says.

“The actuators developed through this collaboration between the Wood laboratory at Harvard and the Rus group at MIT exemplify the Wyss’ approach of taking inspiration from nature without being limited by its conventions, which can result in systems that not only imitate nature, but surpass it,” says the Wyss Institute’s Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at SEAS.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.

>> Re-posted article by Lindsay Brownell, Wyss Institute News, November 27, 2017

3D-printing gets a turbo boost from U-M technology

A major drawback to 3-D printing – the slow pace of the work – could be alleviated through a software algorithm developed at the University of Michigan.

The algorithm allows printers to deliver high-quality results at up to twice the speed of those in common use, with no added hardware costs.

One of the challenges for today’s 3-D printers lies in vibrations caused as they work. A printer’s movable parts, particularly in lightweight desktop models, cause vibrations that reduce the quality of the item being produced. And the faster the machine moves, the more vibrations are created.

“Armed with knowledge of the printer’s dynamic behavior, the program anticipates when the printer may vibrate excessively and adjusts its motions accordingly,” said Chinedum Okwudire, an associate professor of mechanical engineering who directs U-M’s Smart and Sustainable Automation Research Lab.

To ensure details are reproduced accurately, the machines are operated slowly. That slow pace is one of the factors that has prevented the technology from finding a broader audience.

Okwudire cited statements made last year by one 3-D printing company executive about the issues holding the industry back.

“We’re just waiting for the next evolution of the technology,” Simon Shen, CEO of XYZPrinting, told TechCrunch last year. “If they can do it much faster, more precise and easier, that will bring more people to 3-D printers. Not waiting for four to six hours for a print, but 40 to 60 minutes.”

In explaining how his algorithm works, Okwudire uses the example of someone trying to deliver a speech in a large hall. To reach ears in the farthest rows, that speaker will have to shout.

On the bottom, vibrations from the 3-D printer caused the printhead to offset multiple times. On the top, the new U-M algorithm was applied to the printer, enabling a successful print. Both U.S. Capitol replicas were printed on a HICTOP Prusa i3 3-D printer at ~2X speed. Photo: Evan Dougherty, Michigan Engineering

Should someone produce a megaphone and the speaker continue to shout, their voice will be overly amplified and cause the audience to squirm. Using the megaphone with a normal voice, however, produces the right clarity and volume.

“Our software is like that person who realizes their voice is going to be overly amplified,” Okwudire said. “It acts preemptively because it knows what the behavior of the printer is going to be ahead of time.

“Eventually, one of the places we would want to see the algorithm applied is in the firmware – the software that runs on the printer itself,” he said. “That way, it will be integrated with the printers, regardless of the size.”

Okwudire said his software can also be used on a variety of industrial-grade machines which suffer from similar limitations due to vibrations.
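Okwudire’s filtered B-spline method (detailed in the paper cited below) is considerably more sophisticated, but the underlying feedforward idea can be sketched with a classic zero-vibration (ZV) input shaper, which reshapes the motion command so it does not excite the machine’s known resonance. A minimal sketch, with illustrative values for the resonant frequency and damping:

```python
import numpy as np

def zv_shaper(command, f_res_hz, zeta, dt):
    """Zero Vibration (ZV) input shaper: convolve the position command with
    two impulses so the machine's resonant mode is barely excited.

    command: 1-D array of position commands sampled every dt seconds.
    f_res_hz, zeta: the printer's resonant frequency and damping ratio,
                    identified beforehand from its dynamic response.
    """
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    a1, a2 = 1.0 / (1.0 + K), K / (1.0 + K)
    # The two impulses are spaced half a damped vibration period apart.
    t_damped = 1.0 / (f_res_hz * np.sqrt(1.0 - zeta**2))
    lag = max(1, int(round(0.5 * t_damped / dt)))
    shaped = a1 * command
    shaped[lag:] += a2 * command[:-lag]
    return shaped

# Example: shape a 100 mm move for a machine resonating near 30 Hz.
dt = 0.001
move = np.linspace(0.0, 100.0, 500)  # naive ramp command
shaped = zv_shaper(move, f_res_hz=30.0, zeta=0.05, dt=dt)
```

The trade-off is a slight command delay, which is far cheaper than the quality loss (or the slowdown) that unshaped, vibration-prone moves impose.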

The journal Mechatronics recently published the lab’s findings in a paper titled: “A limited-preview filtered B-spline approach to tracking control – With application to vibration-induced error compensation of a 3D printer.”

>> Re-posted from The Michigan Engineer News Center, October 31, 2017

A Guide to Making Robots Work in Your Factory

Robots come in many forms, but from now on, I’ll use the word “robot” to refer to the robot arms—also known as industrial robots—involved in manufacturing tasks. What is a robot, anyway? If you’re a manufacturer who wants robots to work in your factory, then you can think of a robot as an “automatically controlled… manipulator” (to paraphrase the International Federation of Robotics’ definition, which is more detailed).

However, there’s not much you can do with just a robotic arm. You need other components, too, which I’ll describe below. That’s why it makes more sense to talk about a robotic cell rather than just a robot. In general, a cell is any station in the manufacturing process, such as on a production line, that’s performing a specific operation.

If the operation is done by a human, the station is known as a manual cell (Fig. 1).

1. This is a general view of a manual and robotic cell.

When factories install a robotic cell, their purpose is to automate a process. That process could be one that’s currently done at a manual cell, or it could be an entirely new function. As you may have guessed by now, a robotic cell is simply a station that includes a robot (Fig. 1, again).

When you buy a robotic arm, it comes with two important elements: the controller, which is the computer that drives its movement, and the teach pendant, which is the user interface that the operator uses to program the robot (Fig. 2).

You can think of the controller as a conventional desktop tower; the teach pendant would be your monitor and keyboard.

2. Setup of industrial robot arm, controller, and teach pendant.

What comes after the robot’s wrist, and what’s added around the robot, varies depending on the application. But no matter the application, your robot will always need to be equipped with other components in order to work properly (Fig. 2, again).

These components might include end-of-arm tools (grippers, welding torches, polishing heads, etc.) and sensors (force-torque sensors, safety sensors, vision systems, etc.). You’ll need to install the robot on your manufacturing floor by bolting it to a sturdy surface. Installation might also involve adding part-feeding mechanisms, safeguards like protective fencing, and more.

The robotic cell doesn’t only include hardware. The controller comes with some pre-installed software, but you will have to write the program—namely, the list of instructions the robot will follow to perform a specific task.
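To give a flavor of what that program amounts to (the `Robot` API below is hypothetical and deliberately simplified; real cells are programmed in vendor languages or on the teach pendant), a basic pick-and-place task is just an explicit list of moves and gripper actions:

```python
# Hypothetical robot API, for illustration only; real programs are written
# in a vendor language (RAPID, KRL, etc.) or built on the teach pendant.
class Robot:
    def move_to(self, pose):
        print(f"moving to {pose}")

    def set_gripper(self, closed):
        print("gripper closed" if closed else "gripper open")

def pick_and_place(robot, pick_pose, place_pose, safe_z=0.30):
    """One cycle of a simple pick-and-place task; poses are (x, y, z) in meters."""
    above_pick = (*pick_pose[:2], safe_z)
    above_place = (*place_pose[:2], safe_z)
    robot.move_to(above_pick)        # approach from above
    robot.set_gripper(closed=False)
    robot.move_to(pick_pose)
    robot.set_gripper(closed=True)   # grasp the part
    robot.move_to(above_pick)        # retract clear of the fixture
    robot.move_to(above_place)
    robot.move_to(place_pose)
    robot.set_gripper(closed=False)  # release
    robot.move_to(above_place)

pick_and_place(Robot(), pick_pose=(0.50, 0.10, 0.02), place_pose=(0.20, 0.40, 0.05))
```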

3. Overview of the robotic cell deployment process.

Figure 3 shows the main steps of the robotic cell deployment process: design, integrate, and operate.

  1. The design phase includes all of the tasks needed to move from the manual (or original) process to having the plan and materials for the robotic cell.
  2. From there, the integrate phase consists of putting the pieces of the robotic cell together, programming it, and installing the cell on the production line.
  3. The operate phase represents the end goal of deployment: A productive robotic cell that does its job properly on an ongoing basis.

Who Provides What?

When you buy a “robot” from a robotics company, you’re typically only getting the arm, controller, and teach pendant. Most robot companies do offer other hardware and software add-ons, but these don’t cover every possible application.

4. Fragmentation of vendors in the industrial robotics industry.

That’s why a whole industry has sprung up around providing application-specific solutions—and it’s why the industrial robotics ecosystem is structured as shown in Figure 4. Different companies specialize in providing various pieces of the solution. Whoever does the cell deployment needs to put these solutions together themselves during the design and integrate phases. These first two phases can be done by an in-house team working for the manufacturer that has bought the robot, or by external contractors called system integrators. Of course, the robot buyer is responsible for operating it, so the third phase—operate—is done by the factory’s team.

For more information, you can download Lean Robotics here.

>> This article, by Samuel Bouchard of Robotiq, is re-posted from New Equipment Digest, November 20, 2017

Identifying the Immediate ROI in the Industrial IoT

By now, most manufacturers have heard of the promise of the Industrial Internet of Things (IIoT).

In this bold new future of manufacturing, newly installed sensors collect previously unavailable data on equipment, parts, inventory, and even personnel. That data is then shared with existing systems in an interconnected “smart” system, where machines learn from other machines and executives can analyze reports based on the accumulated data.

With that data in hand, manufacturers can stamp out inefficiencies, eliminate bottlenecks and ultimately streamline operations to become more competitive and profitable.

However, despite the tremendous potential, there is a palpable hesitation by some in the industry to jump into the deep end of the IIoT pool.

When asked, manufacturers say this hesitation stems from one primary concern: If we invest in IIoT, what specific ROI can we expect, and when? How will it streamline our processes such that it translates into greater efficiencies and actual revenue in the short and long term?

Although it may come as a surprise, the potential return can actually be identified and quantified prior to any implementation. Furthermore, implementations can be scalable for those that want to start with “baby steps.”

In many cases, this is being facilitated by a new breed of managed service providers dedicated to IIoT that have the expertise to conduct in-plant evaluations that pinpoint a specific, achievable ROI.

These managed service providers can then implement and manage all aspects from end to end so manufacturers can focus on core competencies rather than becoming IIoT experts. As with managed IT services, this can often be done on a monthly fee schedule that minimizes, or eliminates, up-front capital investment costs.

Defining IIoT

Despite all the fanfare for the Internet of Things, the truth is many manufacturers still have a less-than-complete understanding of what it is and how it applies to industry.

While it might appear complicated from the outside looking in, IIoT is merely a logical extension of the increasing automation and connectivity that has been a part of the plant environment for decades.

In fact, in some ways many of the component parts and pieces required already exist in a plant or are collected by more manual methods.

However, a core principle of the Industrial “Internet of Things” is to vastly supplement and improve upon the data collected through the integration of sensors in items such as products, equipment, and containers that are integral parts of the process.

In many cases, these sensors provide a tremendous wealth of critical information required to increase efficiency and streamline operations.

Armed with this new information, IIoT then seeks to facilitate machine-to-machine intelligence and interaction so that the system can learn to become more efficient based on the available data points and traffic patterns. In this way, the proverbial “left hand” now knows what the “right hand” is doing.

In addition, the mass of data collected can then be turned into reports that can be analyzed by top executives and operations personnel to provide further insights on ways to increase operational savings and revenue opportunities.

In manufacturing, the net result can impact quality control, predictive maintenance, supply chain traceability and efficiency, sustainable and green practices and even customer service.

Bringing it all together

The difficulty, however, comes from bridging the gap between “here” and “there.”

Organizations need to do more than just collect data; it must be turned into actionable insights that increase productivity, generate savings, or uncover new income streams.

For Pacesetter, a national processor and distributor of flat rolled steel that operates processing facilities in Atlanta, Chicago and Houston, IIoT holds great promise.

“At Pacesetter, there are so many ways we can use sensors to streamline our operation,” says CEO Aviva Leebow Wolmer. “I believe we need to be constantly investigating new technologies and figuring out how to integrate them into our business.”

Pacesetter has always been a trendsetter in the industry. Despite offering a commodity product, the company often takes an active role in helping its customers identify ways to streamline operations as well.

The company is currently working with Industrial Intelligence, a managed service provider that offers full, turnkey end-to-end installed IIoT solutions, to install sensors in each of its facilities to increase efficiency by using dashboards that allow management to view information in real time.

“Having access to real-time data from the sensors and being able to log in and see it to figure out the answer to a problem or question so you can make a better decision – that type of access is incredible,” says Leebow Wolmer.

She also appreciates the perspective that an outsider can bring to the table.

“Industrial Intelligence is in so many different manufacturing plants in a given year and they see different things,” explains Leebow Wolmer. “They see what works, what doesn’t, and can provide a better overall solution not just from the IIoT perspective but even best practices.”

For Pacesetter, the move to IIoT has already yielded significant returns.

In a recently completed project, Industrial Intelligence installed sensors designed to track production schedules throughout the plant. The information revealed two bottlenecks: one in which coils were not immediately ready for processing – slowing production – and another where the skids on which they are placed for shipping were often not ready.

By making the status of both coil and skids available for real time monitoring and alerting key personnel when production slowed, Pacesetter was able to push the production schedule through the existing ERP system.
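The mechanics of such monitoring are simple at heart. Here is a minimal sketch of threshold-based alerting (the throughput source, floor value, and notification hook are all assumptions for illustration):

```python
import random
import time

THROUGHPUT_FLOOR = 20   # assumed coils/hour below which production counts as "slow"
CHECK_INTERVAL_S = 60

def read_throughput():
    """Stand-in for querying plant sensors/ERP: coils processed in the last hour."""
    return random.randint(10, 40)  # simulated reading

def notify(message):
    """Stand-in for alerting key personnel (dashboard, SMS, e-mail...)."""
    print("ALERT:", message)

def monitor():
    while True:
        rate = read_throughput()
        if rate < THROUGHPUT_FLOOR:
            notify(f"Throughput {rate} coils/hr is below the {THROUGHPUT_FLOOR}/hr floor")
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    monitor()
```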

The real-time monitoring increased productivity at the Atlanta plant by 30%; implementations at the other two facilities yielded comparable gains.

Taking the First Step

According to Darren Tessitore, COO of Industrial Intelligence, the process of examining the possible ROI begins with a factory walk-through by experts trained in manufacturing process improvement, accompanied by IoT engineers who understand the back-end technologies.

A detailed analysis is then prepared, outlining the scope of the recommended IIoT implementation, exact areas and opportunities for improvement and the location of new sensors.

“The analysis gives us the ability to build the ROI,” says Tessitore. “We’re going to know exactly how much money this will make by making the changes. This takes much of the risk out of it so executives are not guessing how it might help.”

Once completed, a company like Industrial Intelligence can then provide a turnkey, end-to-end solution.

According to Tessitore, this covers the entire gamut: all hardware and software, station monitors, etc.; the building of real-time alerts, reports & analytics; training management on how to use data points to increase profits; and even continuously monitoring and improving the system as needed.

“Unless you’re a huge company, you really don’t have somebody who can come in and guide you and create a cost effective solution to help you compete with the larger players in the space,” says Pacesetter’s Leebow Wolmer. “I think that’s what Industrial Intelligence offers that can’t be created on your own.”

“It’s not a one-size-fits-all approach,” she adds. “They have some things that can give you a little bit of IIoT or they can take an entire factory to a whole new level. By doing this they can be cost effective for a variety of sizes of organizations.”

For more information, contact Industrial Intelligence at www.industrialintelligence.net.

>> This article by Jeff Elliott was re-posted from Industry Today.

GE’s huge 3D metal printer makes aircraft parts

GE has unveiled its previously announced 3D metal printer, suitable for making aircraft parts. At the manufacturing trade show formnext in Germany, the GE Additive team revealed the as-yet-unnamed machine, demonstrating its ability to print parts as large as 1 meter in diameter directly from a computer file. Using additive manufacturing technology, the machine fuses together thin layers of metal powder with a 1-kilowatt laser.

(Source: GE)

The machine has the potential to build even larger parts thanks to its scalable architecture, and its design can be configured with additional lasers if required. Mohammad Ehteshami, part of GE’s Project ATLAS team (Additive Technology Large Area System), said it had already been used to print a jet combustor liner. “It can also be applicable for manufacturers in the automotive, power and space industries,” he added.

The printer, which is still in the beta stage, draws on additive manufacturing technology that is already being used by several GE businesses. GE Aviation is building the Advanced Turboprop, a commercial aircraft engine made largely of 3D-printed parts. Using the technology, designers reduced 855 separate parts down to just 12. According to Ehteshami, the machine is “an engineer’s dream.”

>> This article by Rachel England was re-posted from Engadget.com, 11/16/17.

 

Better, Faster, Cheaper: Machine Vision Comes of Age in Automotive Manufacturing

Walk along a modern automotive manufacturing line and you might think you’ve stepped onto the set of a “Terminator” movie. Everywhere you look, you’ll see robots, and very few humans, diligently building cars.

That’s because automotive manufacturing has always been a leader in the adoption of automation technology, including machine vision and robots — and for good reason. The use of automation has made automobiles more affordable to the masses and significantly safer due to higher-quality construction and advanced automotive systems, many of which wouldn’t be possible without the assistance of automation technology.

Given the automotive industry’s leading-edge adoption of automation tech, it’s no surprise that growth in vision and other advanced automation solutions isn’t being driven by applications automated for the first time. Instead, growth in the automotive industry comes mostly from retooling and retrofits to production lines. Today, integrated vision systems packed with intelligence to simplify their setup and operation are driving vision’s penetration into the motor vehicle market, helping the automotive manufacturing industry to achieve new heights in productivity and profitability.

Bumper-to-Bumper Vision

A list of automotive systems that use vision technology during assembly or quality inspection reads like the table of contents from a service manual, covering every aspect of the automobile from chassis and power trains to safety, electronics, and tire and wheel. In most cases, machine vision is tracking the product through the use of 1D and 2D barcodes and performing quality inspections. But it’s also helping to assemble the products.

“Most of the applications we’re solving today involve material handling, moving parts and racks to assembly lines using either 2D or 3D vision,” explains David Bruce, Engineering Manager for General Industry & Automotive Segment for FANUC America (Rochester Hills, Michigan). “But the biggest buzz word right now is ‘3D.’”

FANUC’s iRVision machine vision package has long been a staple of the automotive industry, especially in the U.S. and Asia. In recent years, FANUC introduced a fully integrated 3D Area Sensor vision product that uses two cameras and structured light to generate 3D point clouds of the camera’s field of view.

“Today, one of the last manual processes on the automotive manufacturing line involves part feeding, getting parts out of bins, and so on,” Bruce says. “Our 3D Area Sensor isn’t just a hardware solution. It includes a lot of software developed just for bin picking applications.”

In some of the most advanced material handling work cells, one robot with a 3D sensor picks the parts out of the bin and places them on a table so that a second robot with a 2D vision system can easily pick up the part and feed another machine, conveyor, or other process. Bruce also notes that end-of-arm tooling is one of the toughest challenges for bin picking applications; magnets and vacuum work best.
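Conceptually, the vision side of bin picking reduces to choosing a grasp point in the 3D point cloud and handing it to the robot in its own coordinate frame. Below is a deliberately toy sketch of that step (real systems like those described here also match part geometry, estimate full pose, and check for collisions):

```python
import numpy as np

def pick_candidate(cloud_cam, cam_to_robot):
    """Toy bin-picking step: take the topmost cloud point (the least buried
    part, with z up in this toy frame) and express it in robot coordinates.

    cloud_cam: N x 3 array of points in the camera frame (meters).
    cam_to_robot: 4 x 4 homogeneous transform from camera to robot base.
    """
    top = cloud_cam[np.argmax(cloud_cam[:, 2])]
    return (cam_to_robot @ np.append(top, 1.0))[:3]

# Toy data: a random cloud in a 0.3 m bin, camera frame offset 1 m from base.
rng = np.random.default_rng(0)
cloud = rng.uniform([0.0, 0.0, 0.0], [0.3, 0.3, 0.1], size=(1000, 3))
T = np.eye(4)
T[2, 3] = 1.0
print("pick point in robot frame:", pick_candidate(cloud, T))
```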

“By having the vision system controller directly integrated with the robot instead of using a PC, the engineers can focus on the mechanical engineering challenges and developing a bin picking system with buffering to make sure acceptable cycle times are achieved,” Bruce says.

Tighter integration between vision system and robot also makes it easier for end users to train the FANUC-based work station. “The way you set up iRVision has gotten a lot simpler,” says Bruce. “You can take images of the robot in 7, 8, or 10 different poses and the system will guide you through programming. Or if you’re looking at a large part that won’t fit in the field of view — not uncommon in automotive manufacturing — you can take images from several small fields of view of the part, and the robot controller can determine the full 3D location of the part.”

3D vision is also enhancing the latest class of assembly robot: lightweight robots, also called collaborative robots due to their low-force operation and ability to work next to humans with minimal safety systems.

While the automotive industry is boosting the number of collaborative vision work cells, “right now the killer application is kitting,” says Bruce. Kitting is the process of collecting parts into a bin for a specific product configuration or assembly.

The Path to Full Traceability

Any kitting or assembly task is only as good as the quality and accuracy of the incoming parts, which is why track-and-trace vision applications are so important to the automotive industry. “Over the last 31 years, the industry average was 1,115 cars recalled for every 1,000 sold, according to the National Highway Traffic Safety Administration,” says Adam Mull, Business Development Manager Machine Vision/Laser Marking for Datalogic (Telford, Pennsylvania). The rate can exceed 1,000 per 1,000 sold because a single car can have more than one recall.

“While we’re seeing applications across the board from inspection to vision-guided robotics [VGR], we’re definitely seeing a trend toward full traceability,” adds Bradley Weber, Application Engineering Leader and Industry Product Specialist – Manufacturing Industry at Datalogic. “There’s always been traceability of the most critical components of the car, but now it’s going everywhere. Every part is being laser marked or peened, read, and tracked. That’s part of what has opened a lot of doors for Datalogic because we have many types of laser markers, vision systems to verify those marks, and then both handheld and fixed barcode readers to read and track those marks all through the process.”

According to Mull, while one manufacturing plant used to manufacture only one type of vehicle, today each plant either makes parts for multiple vehicles or assembles different vehicles.

Consumer demand is driving the need for more automation in the factory. “When you go to a dealership, there are so many more options than there were years ago, from the color of the dashboard to the onboard electronics,” Weber says. “With all those choices, OEMs need a strong manufacturing execution system that is being fed data from every part along the manufacturing process.”

With machine-readable codes going on more and more components, it also opens up the possibility of reworking problem parts instead of scrapping them.
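In data terms, that traceability is a running history keyed by each part’s marked ID: every read of the laser-marked code appends an event. A minimal sketch (station names and ID format invented for illustration):

```python
from collections import defaultdict
from datetime import datetime, timezone

history = defaultdict(list)  # part serial -> list of process events

def record_scan(serial, station, result):
    """Log one read of a laser-marked 2D code at a process station."""
    history[serial].append({
        "station": station,
        "result": result,
        "time": datetime.now(timezone.utc).isoformat(),
    })

record_scan("VIN123-PART-0042", "weld_cell_3", "pass")
record_scan("VIN123-PART-0042", "final_inspection", "fail")

# A failed part's full history helps decide whether rework is viable.
for event in history["VIN123-PART-0042"]:
    print(event)
```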

As automation and machine-to-machine communication continue to blur the lines between robot, vision, marking system, and production equipment, the benefit to the manufacturer is greater ease of use, leading to greater machine vision adoption.

Advanced vision software such as Matrox Design Assistant is aiding new adopters to quickly set up ID, VGR, and inspection routines using simple flow-chart programming and automated sample image acquisition, according to Fabio Perelli, Product Manager at Matrox Imaging (Dorval, Quebec).

Better automation integration is also helping to educate engineers, opening up even more opportunities for vision and other automation solutions.

“In automotive, engineers often work in bubbles,” says Datalogic’s Mull. “Everyone’s running with their own part of the project. But as one system works more closely with another system, the team members start to cross-pollinate, opening up the opportunity to teach the engineer who only knows about smart cameras how to use embedded controllers and advanced vision or marking systems. And since our systems all use the same software environment, it makes it seamless for an engineer to move from smart cameras to embedded controllers to other Datalogic solutions.”

>> This article by Winn Hardin was re-posted from AIA Vision Online (11/21/17)

Siemens Digitizes Industrial Machines to Speed Development

Siemens PLM has created the Advanced Machine Engineering (AME) solution to provide a platform that connects mechanical, electrical, and software engineering data, giving engineers access to a completely digital machine-build prototype. This digital twin represents an industrial machine operation that can be tested virtually throughout the development process. The goal of the engineering platform is to increase collaboration and reduce development time, while also reducing risk and allowing for the reuse of existing designs.

The AME uses modularized product development to establish common parts and processes among a family of products while defining functional modules that can be easily modified to meet specific requirements and support changes. In other words, you can build the manufacturing process like a collection of Legos (chunks of software), then customize the configuration and test it before you begin banging equipment into place.

Mechatronic design provides a common platform for concurrent product development. Image courtesy of Siemens PLM

By involving mechanical engineering, electrical engineering, and software development processes simultaneously, you shift away from the more time-consuming serial development process. You create a concurrent method that effectively turns the process into mechatronics.

Siemens developed the AME in order to speed up the setup of plant equipment while also making machine configurations easier to customize. “We created this for companies that are making automation control equipment, packaging machines, printing machines, anything that has a lot of mechanical systems and components, as well as sensors, and drives,” Rahul Garg, senior global director of industrial machinery and heavy equipment at Siemens PLM, told Design News. “Typically, these are the companies making products and machines that go into a plant.”

Creating the Modular Plant

One of the goals in developing AME was to make plant equipment modular, so the overall configuration of plant processes could be done more quickly and with greater flexibility. The digitized modular plant concept was also designed to reduce risk and engineering time, since the process can be designed and tested digitally. “Many of these companies need to serve their end customers with increasing customization,” said Garg. “We wanted to create the ability to modularize the machine structure to deal with customization and quickly respond to engineering or systems changes.”

Leverage a digital twin to virtually test complex machine requirements. Image courtesy of Siemens PLM

The modular approach to managing plant equipment also supports change, especially since much of the engineering to support a change is worked out on a digital level using existing modules that are already validated. “This improves the way the machine builders manage the end-customer requirements. Those requirements change. How do you manage that change and get the engineering communicated to the shop floor and to those who service the products?” said Garg. “We are trying to improve the way they manage the engineering process and schedules to better control risk while working on large projects.”

Mechatronics on the Machine Level

The idea is to build new functionality into the equipment driven by automation and analytics. The intention is to turn it into an easy and rapid process. “You have to deliver the innovation in a fast process and reuse it,” said Garg. “The idea is to create a digital twin of the machine where you can simulate the entire behavior of the machine using control software and control applications. You drive the systems with the software.”
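Stripped to its essence, driving the systems with software means running the same control logic against a simulated machine model before any hardware exists. A toy sketch of that virtual-commissioning loop (the axis model and control law are invented for illustration):

```python
class SimulatedAxis:
    """Toy physics stand-in for one machine axis: the digital half of the twin."""
    def __init__(self):
        self.position = 0.0

    def step(self, velocity_cmd, dt):
        self.position += velocity_cmd * dt  # ideal integrator, no real dynamics

def control_step(target, position, gain=2.0):
    """The control law under test; the same code would later drive real drives."""
    return gain * (target - position)

# Virtual commissioning loop: exercise the controller before hardware exists.
axis, dt = SimulatedAxis(), 0.01
for _ in range(500):
    axis.step(control_step(target=0.25, position=axis.position), dt)
print(f"settled at {axis.position:.3f} m")  # ~0.250
```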

The AME contributes to the concept of the digital twin, which digitizes a product from design, through configuration, and into operation at the customer’s plant. “What we are trying to do is create manufacturing functions through the visualization process,” said Garg. “Then we want to take digitization further, by closing the loop with the physical product. Once the plant equipment is out in the field and the customers start using the equipment and machines, we want the ability to see and monitor the performance of the equipment and see how it’s performing.”

>> This article by Rob Spiegel was reposted from DesignNews.com (November 23, 2017)

Rethink Adds KPI Collection, Extra Cameras to Sawyer

As collaborative robots, or co-bots, become more advanced, so do their communications. Indeed, Rethink Robotics recently announced it has upgraded Sawyer so that the collaborative robot can now communicate its production key performance indicators (KPIs) and other metrics.

The metrics can include part counts, robot speed, or bad-part tallies. Rethink has released Intera 5.2, an expansion of the company’s Intera software platform. The upgrade provides production data in real time during the manufacturing process – data that is typically collected via a third-party IoT system, if it’s collected at all.

The new feature, Intera Insights, displays KPIs via a customizable dashboard on the robot’s on-board display, making it accessible to those on the factory floor. The same charts are also fed back to the Intera Studio platform, providing visibility to other members of the manufacturing team. The goal is to eliminate the need to invest in or create an outside data collection system.

The advances in the Intera platform were prompted by customers in the field. “We ask for feedback from our customers. One of the biggest areas of feedback involved extracting data, knowledge, and KPIs about how the robot’s performing,” Jim Lawton, COO of Rethink Robotics, told Design News. “They want to know the average cycle time, the part count, and how many good versus bad parts were made. The robot knows what’s going on. It was just a matter of how to get access to the data.”

The goal in creating data collection for Sawyer is to help users begin to move into smart manufacturing without necessarily investing in new equipment. “There has been a lot of talk about the IoT and Industry 4.0. People see the value there, but they are wondering what it will look like and how it works in the world of robots. We’re showing what the end game looks like,” said Lawton. “A lot of customers don’t have the ability to get access to that data on their own. Now they have a robot that knows how much it is doing and knows what it has done. Plus, the robot doesn’t make mistakes when it counts.”
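Those KPIs are straightforward to derive once the robot exposes its cycle events. A small sketch over an assumed event format (not Rethink’s Intera API):

```python
def summarize(events):
    """Compute basic production KPIs from a robot cycle log.

    events: list of (timestamp_s, outcome) per completed cycle,
            where outcome is "good" or "bad".
    """
    times = [t for t, _ in events]
    cycle_times = [b - a for a, b in zip(times, times[1:])]
    good = sum(1 for _, outcome in events if outcome == "good")
    return {
        "part_count": len(events),
        "good_parts": good,
        "bad_parts": len(events) - good,
        "avg_cycle_s": sum(cycle_times) / len(cycle_times) if cycle_times else None,
    }

log = [(0.0, "good"), (7.9, "good"), (16.1, "bad"), (23.8, "good")]
print(summarize(log))  # part_count 4, 3 good, 1 bad, avg cycle ~7.9 s
```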

Adding Cameras to the Robot

The Intera 5.2 release also includes additions to Sawyer’s vision capabilities. In addition to the embedded cameras that are standard with Sawyer, manufacturers can now integrate external cameras. This was designed to allow manufacturers to optimize cycle time with improved vision, or to leverage in-house vision systems on the robot.

The ability to add cameras to Sawyer and integrate those cameras into Sawyer’s overall functioning was also a suggestion from customers in the field. “The second big area of feedback from customers involved external vision. We have a camera in the wrist and one in the head. Some customers wanted external cameras as well as the internal one. There are circumstances where that would be beneficial. But how do we get a third-party camera to work?” said Lawton. “We designed a feature to make an added camera part of Sawyer, so it would be easy to use and the robot would understand where it is.”

>> This article by Rob Spiegel was re-posted from DesignNews.com (November 20, 2017)