Cobot Vision System Doubles Machine Shop Productivity

During its busy season, a small Mississippi machine shop turned to a lights-out solution from Robotiq to meet demand without having to hire and train a temporary CNC operator.

Designed exclusively for Universal Robots arms, the Plug + Play Wrist Camera is intuitive enough that anyone in the plant or shop can teach it for machine tending, assembly, and pick-and-place tasks.

Mississippi’s WALT Machine Inc. specializes in high-precision optical work for scientific camera assemblies. Each spring WALT Machine must meet a massive challenge: Deliver around 6,000 camera housings in 2 months’ time, with only one CNC machine.

Delivering an order of this size on time would normally mean that WALT Machine’s president, Tommy Caughey, had to hire a full-time CNC machine operator. But by the time a new employee was fully trained and up to speed, most of the parts would already be delivered and the extra help would no longer be needed.

To deal with this short-term rise in production volume, Tommy Caughey started to look into a robot-based solution a few years back. “I saw Universal Robots at IMTS Trade Show maybe 4 or 6 years ago and I found it interesting: a robot that does not require any extra stuff, like jigs for example. I followed up throughout the years and thought that’s where we needed to go one day.”

One problem remained: He knew he needed either a vision system or a conveyor to pick up the raw parts from the table. “Everyone told me that it’s a very difficult process, that you need to have a person in your shop to do it,” Caughey recalls.

A big plus of using a robot is that production can continue during unattended hours in the shop. This of course increases productivity, since roughly twice as many parts can be made in the same calendar time.

In June 2016, Robotiq released the Plug + Play Wrist Camera made exclusively for Universal Robots. For the entrepreneur, this was a game changer. “I didn’t need a vision expert anymore, I could do it myself. I bought the camera, and it’s super simple. It takes about 10 minutes and your part is taught.”

It takes 30 to 45 minutes to machine one side of those camera housings. Running only 8 hours a day, it would take many weeks to produce the order on a single machine, a bottleneck WALT could not afford. “So being able to run 15-20 hours a day and not having to hire anyone else is a major plus for us,” says Caughey.
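
As a rough illustration of that arithmetic (the per-side cycle time is the figure quoted above; the daily operating hours and the one-part-per-cycle assumption are ours), a short calculation shows how extending the operating day changes daily output:

```python
# Back-of-the-envelope throughput estimate; illustrative assumptions only.
CYCLE_MIN = 37.5  # midpoint of the 30-45 min per-side machining time quoted above

def parts_per_day(hours_per_day: float, cycle_min: float = CYCLE_MIN) -> int:
    """Parts finished per day, assuming one part per cycle and no downtime."""
    return int(hours_per_day * 60 // cycle_min)

attended = parts_per_day(8)     # single attended shift
lights_out = parts_per_day(18)  # robot keeps loading the machine after hours

print(attended, lights_out)     # 12 vs. 28 parts per day: output more than doubles
```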

The economics of a cobot vision system make a lot of cents (and dollars). WALT Machine Inc. has doubled its output by cutting machine idle time, allowing the company to stay on track without spending time and money training and employing a worker it would only need for two months.

Since WALT Machine Inc. bought its robot, named Arthur, twice as many parts are being machined every day. The CNC machining itself still takes the same time; it is the number of operating hours that makes all the difference. The company cut its production calendar time in half by eliminating most of the machine’s idle time.

The contract must be completed within two months, so there is a production rush to deliver all the parts on time. A huge benefit of integrating a robot into the production line is that the machine can now run nearly non-stop during this two-month period. Tommy does not need to worry about training, extra salaries, or employee retention between production rushes.

Tommy Caughey was also stressed about quality and consistency during unattended hours. “It’s just about letting it go and accept the fact that it’s gonna run for an extra 4-6 hours, that you’re gonna go home and nothing’s gonna break. And it’s actually the case!”

Arthur’s arrival among the team also allows long-time machinist Matthew Niemeyer to improve his skillset on the production floor. “First, I got to learn how to program the robot,” Niemeyer explains.

After training with the object teaching interface, the operator can walk away, or even go home, while the robot continues to stay productive.

“Then, you have the robot loading the machine, but you’re still doing all the fine-tuning of it, such as the programming. But the menial tasks of loading and unloading the machine are taken care of for you, so you don’t get worn out.”

When everything is running smoothly in the factory, Matthew is able to focus on his new role at WALT Machine. The robot’s arrival created an opportunity for him to step up as a sales representative on the team. “We can get more and more business into the shop, which will lead to more machines, more robots and promising overall growth.”

All of this would not be possible without that first robot. Far from seeing robots as job stealers, Tommy Caughey truly believes that in 10 years, every small shop like his will have at least one robot. For him, this change is happening for the same reason that so many changes happened before in other industries.

With the UR3, the Wrist Camera’s focus range is 70 mm to ∞, and with the UR5, it’s 2.76 in. to ∞. (Robotiq)

“No one is yelling at a contractor for using an excavator instead of a hundred men with shovels,” he compares. “I didn’t fire anyone to do this. It just changes where the work is. Instead of having guys sitting here just putting parts in and out of the machine, they can do more quality-related stuff. They can check parts, clean and package them, and even bring in more sales!”

And with a robot that doubles production capacity, business opportunities are greater and orders are delivered on time. Above all, satisfied with this first integration project, WALT Machine sees these new opportunities as a way to scale up the robotics capabilities in its shop.

Robotic automation is intimidating for someone who has never touched a robot before. Tommy Caughey is one of those entrepreneurs who started from scratch with his first robot. “I’ve programmed CNC machines and G-codes for 10 years, done XYZ positional and spatial stuff, but never explored robotics. When I got the robot, I did a little reading and it was pretty simple. There is a lot of help on the UR and Robotiq websites, with programming for example.”

Robotiq Wrist Camera

As for the vision system, Caughey was really impressed by the Robotiq Wrist Camera’s teaching methods. “Either you take your part and set it on the surface where you want to pick it and you take four snapshots of it in four different orientations. Or if it’s something simple like a rectangular or a circular blank, you just set the dimensions of what you are picking and it knows.”

The next step is to put 15 to 20 of the same unmachined parts on the table within the camera’s field of view. The robot then rotates to look over the table and takes one snapshot to see all the parts. For more accurate picking, it moves closer and takes another snapshot of the part it is about to pick. The robot then places the part into the vise in the CNC machine and sends a signal to the Haas CNC machine to press the start button.
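
That sequence maps naturally onto a simple control loop. The sketch below is hypothetical Python pseudocode: the objects and method names (locate_parts, refine_view, pick, load_vise, start_cycle) are stand-ins for the actual Robotiq/UR teach-pendant program, not a real API.

```python
def run_batch(robot, camera, cnc):
    """Hypothetical loop mirroring the workflow described above."""
    while True:
        # One wide snapshot over the table to find all remaining raw parts.
        parts = camera.locate_parts()
        if not parts:
            break                          # table is empty, batch is done
        target = parts[0]
        # Move closer and re-image the chosen part for a more accurate pick pose.
        precise_pose = camera.refine_view(target)
        robot.pick(precise_pose)           # 2-Finger Gripper closes on the blank
        robot.load_vise()                  # place the blank in the CNC vise
        cnc.start_cycle()                  # the "press the start button" signal
        cnc.wait_until_done()
        robot.unload_vise()                # remove the machined part and repeat
```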

“It is so easy,” Caughey adds. “We don’t even have to teach waypoints because it only needs to look on this table for parts. We don’t need a conveyor or any special fixturing. If you change parts, you just need to tell Arthur that you are looking for a different part and change the 2-Finger Gripper’s closing setup; it’s pretty simple. There isn’t a lot of changeover, so that’s why I like the camera-gripper combo.”

>> Read more by Robotiq, New Equipment Digest, November 2, 2017

Futureworld: The IoT-driven ‘Vertical Farm’

(Source: Aerofarms)

Imagine a farm without herbicides, insecticides or pesticides; a farm that cuts water consumption by 95 percent; that uses no fertilizer and thus generates no polluting run-off; that has a dozen crop cycles per year instead of the usual three, making it hundreds of times more productive than conventional farms; a farm that can continually experiment with and refine the taste and texture of its crops; a farm without sun or soil. That’s right, a farm where the crops don’t need sunlight to grow and don’t grow from the ground.

Such a farm – an “indoor vertical farm” – exists. It’s located in that grittiest, most intensely urban of inner cities, Newark, NJ, in a former industrial warehouse. Visiting there, you go from a potholed, chain-linked back street into a brightly lit, clean (visitors wear sanitary gowns, gloves, masks and head coverings), 70,000-square-foot facility. Walking in, you get that rare, uncanny sense of having stepped into the future. Way into the future.

The farm consists of large, flat platforms stacked 10 levels high (“grow towers”) of leafy greens and herbs thriving in seeming contentment under long rows of LED lights, irrigated with recycled water that sprays the exposed roots hanging, suspended, from the crops, under the watchful “eye” of IoT sensors that, with machine learning algorithms, analyze the large volumes of continually harvested (sorry!) crop data.

AeroFarms has been developing sustainable growing systems since 2004 and has adopted a data-driven technology strategy that’s a showcase for the IoT and deep learning capabilities of Dell Technologies (see below).

By building farms in major population centers and near major distribution routes (the Newark farm is a mile from the headquarters of one of the largest supermarket chains in the New York City area), the company radically shortens supply chains and reduces the energy required to transport food from “farm to fork,” while also decreasing spoilage. It enables local farming at commercial scale year-round, regardless of the season. It tracks and monitors its leafy greens from seed to package so that the source of food, if some becomes tainted, can be quickly identified. Taken together, AeroFarms claims to achieve 390 times greater productivity than a conventional field farm while using 5 percent as much water.

“We are as much a capabilities company as we are farmers, utilizing science and technology to achieve our vision of totally controlled agriculture,” said David Rosenberg, AeroFarms co-founder and CEO. The company’s vision, he said, is to understand the “symbiotic relationships” among biology, environment and technology, and to leverage science and engineering in ways that drive more sustainable, higher-yield food production.

IoT comes into play via AeroFarms’ Connected Food Safety System, which tracks the “growth story” of its products, analyzing more than 130,000 data points per harvest. The growth cycle begins when seeds are germinated on a growing medium that looks like cheesecloth, receiving a measured amount of moisture and nutrients misted directly onto their roots, which dangle in a chamber below the growing cloth, along with a spectrum of LED lighting calculated to match the plants’ needs throughout a 12- to 16-day growing cycle.

Rosenberg said AeroFarms decided to partner with Dell because it “offers a comprehensive infrastructure portfolio that spans our IT needs, from edge gateways and rugged tablets to machine learning systems and network gear.”

At the edge, sensors and cameras in the aeroponic growing system gather data on everything from moisture and nutrients to light and oxygen, then send operating and growing-environment data to Dell IoT Edge Gateways for processing. Information is then relayed over the farm network to Dell Latitude Rugged Tablets and a local server cluster, making it available to AeroFarms workers for monitoring and analysis. AeroFarms’ precision growing algorithms allow just-in-time growing for its selling partners. Once the plants reach maturity, they are harvested and packaged onsite and then distributed to local grocery stores.
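
A minimal sketch of that edge-to-server flow is shown below. It is purely illustrative: the sensor names, the summary fields, and the idea of aggregating at the gateway are assumptions, not Dell’s or AeroFarms’ actual software.

```python
import json
import statistics
import time

def summarize(readings):
    """Reduce a burst of raw sensor samples to one compact record at the gateway."""
    return {
        "timestamp": time.time(),
        "moisture_avg": statistics.mean(r["moisture"] for r in readings),
        "oxygen_min": min(r["oxygen"] for r in readings),
        "light_ppfd_avg": statistics.mean(r["light_ppfd"] for r in readings),
    }

def relay(readings, send):
    """Package the summary as JSON and hand it to the farm-network transport."""
    send(json.dumps(summarize(readings)))

# Example with fabricated samples; print() stands in for the network transport.
samples = [{"moisture": 0.61, "oxygen": 20.8, "light_ppfd": 240},
           {"moisture": 0.63, "oxygen": 20.7, "light_ppfd": 255}]
relay(samples, print)
```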

AeroFarms is developing a machine learning capability that identifies patterns based on analysis of images combined with environmental, machine and historical growing data.

The company said it may expand its use of Microsoft Azure to conduct more analytics in the cloud while leveraging geo-redundant data backup, and to collect disparate data from its multiple vertical farms and data sources. Interpreting that information in historical context, leveraging data previously collected and analyzed over time, helps improve taste, texture, color, nutrition and yield.

AeroFarms said it also is working on real-time quality control through multi-spectral imaging of its grow trays. Cameras with integrated structured light scanners send data to Dell Edge Gateways, which create 3D topological images of each grow tray. When an anomaly is detected, the gateway sends an alert to operators using Dell Latitude Rugged Tablets on the farm floor.
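
A minimal sketch of that kind of check follows; the tray size, expected canopy band, and alert threshold are invented for illustration and are not AeroFarms’ actual algorithm.

```python
import numpy as np

def check_tray(height_map_mm, expected=(30.0, 120.0), max_bad_fraction=0.05):
    """Flag a grow tray if too many canopy heights fall outside the expected band.

    height_map_mm: 2D array of heights reconstructed from the structured-light scan.
    expected: assumed (low, high) canopy height band in millimetres.
    """
    low, high = expected
    out_of_range = (height_map_mm < low) | (height_map_mm > high)
    bad_fraction = out_of_range.mean()
    return bad_fraction > max_bad_fraction, bad_fraction

# Simulated 3D topological image of one tray: heights scattered around 75 mm.
tray = np.random.normal(loc=75.0, scale=10.0, size=(64, 128))
alert, fraction = check_tray(tray)
if alert:
    print(f"Anomaly: {fraction:.1%} of cells out of range; notify the floor tablets")
```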

“For me, the journey started with an appreciation of some of the macro-challenges of the world, starting with water,” said Rosenberg. “Seventy percent of our fresh water goes to agriculture. Seventy percent of our fresh water contamination comes from agriculture.”

Land is another problem.

“By U.N. estimates, we need to produce 50 percent more food by 2050, and we’ve lost 30 percent of our arable farm land in the last 40 years,” he said. “Looking at all those macro-issues, we need a new way to feed our planet.”

>> Read more by Doug Black, EnterpriseTech, October 26, 2017

Vision Systems Drive Auto Industry Toward Full Autonomy

The race is on to make self-driving vehicles ready for the road. More than 2.7 million passenger cars and commercial vehicles equipped with partial automation are already in operation, enabled by a global automotive sensor market estimated to reach $25.56 billion by 2021. And among those sensors, cameras will see the largest growth, reaching nearly 400 million units by 2030.

Estimates about the arrival of fully autonomous vehicles vary depending on whom you ask. The research firm BCG expects that vehicles designated by SAE International as Level 4 high automation — in which the car makes decisions without the need for human intervention — will appear in the next five years.

Meanwhile, most automotive manufacturers plan to make autonomous driving technology standard in their models within the next 2 to 15 years. Tesla, whose admired and admonished Autopilot system features eight cameras that provide 360 degrees of visibility up to 250 meters, hopes to reach Level 5 full autonomy in 2019.

Carmakers are building upon their automated driver-assistance systems, which include functions such as self-parking and blind-spot monitoring, as the foundation for developing self-driving cars. The core sensors that facilitate automated driving — camera, radar, lidar, and ultrasound — are well developed but keep undergoing improvements in size, cost, and operating distance.

The industry still must overcome other technological challenges, however. These include mastering the deep learning algorithms that help cars navigate the unpredictable conditions of public roadways and handling the heavy processing demands of the data they generate. To help carve a path toward total autonomy, automakers are turning to vision software companies as important players in the marketplace.

Algorithms Get Smarter

The machine vision industry is no stranger to the outdoor environment, with years of experience developing hardware and software for intelligent transportation systems, automatic license plate readers, and border security applications. While such applications require sophisticated software that accounts for uncontrollable factors like fog and sun glare, self-driving vehicles encounter and process many more variables that differ in complexity and variety.

“Autonomous driving applications have little tolerance for error, so the algorithms must be robust,” says Jeff Bier, founder of the Embedded Vision Alliance, an industry partnership focused on helping companies incorporate computer vision into all types of systems. “To write an algorithm that tells the difference between a person and a tree, despite the range of variation in shapes, sizes, and lighting, with extremely high accuracy can be very difficult.”

But algorithms have reached a point where, on average, “they’re at least as good as humans at detecting important things,” Bier says. “This key advance has enabled the deployment of vision into vehicles.”

AImotive (Budapest, Hungary) is one software company bringing deep learning algorithms to fully autonomous vehicles. Its hardware-agnostic aiDrive platform uses neural networks to make decisions in any type of weather or driving condition. aiDrive comprises four engines. The Recognition Engine uses camera images as the primary input. The Location Engine supplements conventional map data with 3D landmark information, while the Motion Engine takes the positioning and navigation output from the Location Engine to predict the movement patterns of the surroundings. Finally, the Control Engine drives the vehicle through low-level actuator commands such as steering and braking.
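
That division of labor can be pictured as a simple pipeline. The sketch below is only a schematic of the data flow described above; the class and method names are ours, not AImotive’s aiDrive code.

```python
class RecognitionEngine:
    def detect(self, camera_frames):
        """Turn camera images into a list of detected objects (stubbed)."""
        return [{"type": "car", "position": (12.0, 3.5)}]

class LocationEngine:
    def localize(self, detections, map_landmarks):
        """Fuse detections with 3D landmark data to estimate the ego pose (stubbed)."""
        return {"x": 104.2, "y": 88.9, "heading": 1.57}

class MotionEngine:
    def predict(self, pose, detections):
        """Predict how the surrounding objects will move (stubbed)."""
        return [{"type": "car", "future_position": (11.0, 6.0)}]

class ControlEngine:
    def command(self, pose, predictions):
        """Issue low-level actuator commands such as steering and braking (stubbed)."""
        return {"steering_rad": 0.02, "brake": 0.0}

def drive_step(frames, landmarks, recognition, location, motion, control):
    detections = recognition.detect(frames)
    pose = location.localize(detections, landmarks)
    predictions = motion.predict(pose, detections)
    return control.command(pose, predictions)

print(drive_step([], [], RecognitionEngine(), LocationEngine(),
                 MotionEngine(), ControlEngine()))
```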

For an automated vehicle to make critical decisions based on massive volumes of real-time data coming from multiple sensors, processors have had to become more powerful computationally while consuming less operational power. Software suppliers in this space are developing specialized processor architectures “that easily yield factors of 10 to 100 times better efficiency to enable these complex algorithms to fit within the cost and power envelope of the application,” Bier says. “Just a few years ago, this degree of computational performance would have been considered supercomputer level.”

To make safe, accurate decisions, sensors need to process approximately 1 GB of data per second, according to Intel. Waymo, Google’s self-driving car project, is using the chipmaker’s technology in its driverless, camera-equipped Chrysler Pacifica minivans, which are currently shuttling passengers around Phoenix as part of a pilot project.

However, the industry still needs to determine where the decision-making should occur. “In our discussions with manufacturers, there are two trains of thought as to what these systems will look like,” says Ed Goffin, Marketing Manager for Pleora Technologies (Kanata, Ontario). “One approach is analyzing the data and making a decision at the smart camera or sensor level, and the other is feeding that data back over a high-speed, low-latency network to a centralized processing system.”

Pleora’s video interface products already play in the latter space, particularly in image-based driver systems for military vehicles. “In a military situational awareness system, real-time high-bandwidth video is delivered from cameras and sensors to a central processor, where it is analyzed and then distributed to the driver or crew so they can take action or make decisions,” Goffin says. “Designers need to keep that processing intelligence protected inside the vehicle. Because cameras can be easily knocked off the vehicle or covered in dust or mud, they need to be easily replaceable in the field without interrupting the human decision-making process.”

Off the Beaten Path

While the self-driving passenger car dominates media coverage, other autonomous vehicle technology is quietly making a mark away from the highway. In September 2016, Volvo began testing its fully autonomous FMX truck 1,320 meters underground in a Swedish mine. Six sensors, including a camera, continuously monitor the vehicle’s surroundings, allowing it to avoid obstacles while navigating rough terrain within narrow tunnels.

Meanwhile, vision-guided vehicles (VGVs) from Seegrid (Pittsburgh, Pennsylvania) have logged more than 758,000 production miles in warehouses and factories. Unlike traditional automated guided vehicles — which rely on lasers, wires, magnets, or floor tape to operate — Seegrid VGVs use multiple on-vehicle stereo cameras and vision software to capture existing facility infrastructure as their means of location identification for navigation.

As Bier of Embedded Vision Alliance points out, even the Roomba robotic vacuum cleaner — equipped with a camera and image processing software – falls under the category of autonomous vehicles.

Whether operating in the factory or on the freeway, self-driving vehicles promise to transport goods and people in a safe, efficient manner. Debate persists over when fully autonomous cars will hit the road in the U.S. Even as the industry overcomes technical challenges, governmental safety regulations and customer acceptance will affect the timing of autonomous vehicles’ arrival.

In the meantime, automakers and tech companies continue to pour billions of dollars into research and development. Each week seems to bring a new announcement, acquisition, or milestone in the world of self-driving vehicles. And vision companies will be there for the journey.

>> Re-posted from Vision Online, 10/20/17

Warehouse Robots Smarten Up

Self-driving cars have certainly reaped the rewards from the advances made in sensors, processing power, and artificial intelligence, but they aren’t the sole beneficiaries. One needn’t look any further than the autonomous collaborative robots (cobots) currently invading the warehouses and stores in which they will work in close quarters with people.

1. Aethon’s latest TUG is festooned with sensors and can fit under carts to tow them to desired locations.

Aethon’s TUG (Fig. 1) is the latest in a line of autonomous robots designed for environments like warehouses. It carries more sensors than older platforms, which reflects the falling price of sensors, improvements in sensor integration, and the use of artificial intelligence to process the additional information. This allows robots like the TUG to build a better model of the surrounding environment. It means the robots operate more safely, since they can better recognize people and objects. It also means they can perform their chores more effectively, because they often need to interact with these objects.

Aethon’s TUG series spans a range of capabilities, up to versions that can haul as much as 1,200 lb. These typically find homes in industrial and manufacturing environments. Smaller TUGs have been set up in hospitals to deliver medicine, meals, and materials. TUGs move throughout a hospital, calling elevators and opening doors via network connections. As with warehouse robots, they operate around the clock doing jobs that allow others to do theirs.

2. The RL350 robotic lifter from Vecna Robotics rises under a cart and lifts 350 kg off the ground. It then delivers the contents to the desired location, dropping down and leaving the cart.

Vecna Robotics has lightweight and heavy-duty robots, too. Its RL350 robotic lifter can hoist 350 kg or more than 770 lbs (Fig. 2). It can also adjust the payload height with other pieces of material-handling equipment, like conveyor belts. It can be used in applications such as fulfillment operations or lineside supply. The robot has a top speed of 2 m/s, and can run for eight hours before seeking out a charging station. It is ANSI/ITSDF B56.5 compliant and ISO Class D ready. It uses LIDAR and ultrasonic sensors like many of the other robots in this class.


3. Fetch Robotics’ VirtualConveyor targets warehouse applications such as DHL’s distribution center.

Fetch Robotics has a range of products, from robotic arms for research to its datasurvey inventory robot. It also offers the VirtualConveyor (Fig. 3), which comes in a number of different sizes to address different weight configurations. The Freight500 can move up to 500 kg, while the Freight1500 handles up to 1500 kg. They run up to nine hours on a charge, and incorporate LIDAR and 3D cameras on the front and rear. As with most warehouse robots, Fetch Robotics delivers them with its FetchCore management software.

4. I Am Robotics put a robotic arm on its I Am Swift platform. The suction grip is designed for grabbing lightweight objects that would be typical in many warehouse pick-and-place environments.

I Am Robotics includes a robotic arm on its I Am Swift platform (Fig. 4). It can run for more than 10 hours picking and placing small objects using its suction grip. The typical boxes or bottles found on store shelves are open game. The robot is designed to work with the I Am SwiftLink software.

The I Am Flash 3D scanner is used to teach the system about objects that will be manipulated. It records the barcode, object dimensions, and weight after an object is placed in its scanning area. The I Am Swift robot can then determine what objects it sees on a shelf or in its basket and move them accordingly.
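
A toy version of that lookup is sketched below; the field names, dimensions, and tolerance are hypothetical and are not the I Am SwiftLink software.

```python
# Registry built by the 3D scanner: barcode -> recorded dimensions (mm) and weight (g).
registry = {
    "012345678905": {"dims": (60, 60, 120), "weight": 250},
    "098765432109": {"dims": (90, 45, 180), "weight": 410},
}

def match_object(detected_dims_mm, tolerance_mm=10):
    """Return the barcode whose recorded dimensions best fit what the robot sees."""
    best, best_err = None, float("inf")
    for barcode, record in registry.items():
        err = sum(abs(d - r) for d, r in zip(detected_dims_mm, record["dims"]))
        if err < best_err and err <= 3 * tolerance_mm:
            best, best_err = barcode, err
    return best

print(match_object((58, 62, 118)))  # -> "012345678905"
```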

5. Omnidirectional wheels on Stanley Robotics’ robot platform make it easy to move in tight quarters.

Stanley Robotics’ warehouse platform utilizes omnidirectional wheels to move in literally any direction from a standing start. This simplifies path planning and allows it to work in tight quarters.

6. Stan from Stanley Robotics handles valet parking by literally picking up a car and putting it in a parking spot.

The latest offering from Stanley Robotics was not able to fit on the show floor, though. Its Stan valet parking system (Fig. 6) turns any car into a self-driving car, at least to park it. It rolls under a typical car and then raises itself, lifting the car with it. Many warehouse robots use the same technique to lift carts instead of cars; it’s the same idea applied to a much larger object.

7. Fellow Robots’ NAVii will function within a store, offering information to customers while performing inventory scanning.

Fellow Robots’ NAVii (Fig. 7) is designed to operate within a store, providing customers with information while performing inventory scanning. It can map out a store on its own and then track stock using machine-learning techniques. NAVii will notify store managers when stock is low or if there are price discrepancies.

NAVii can also interact with store customers using its display panels. On top of that, store employees can take advantage of this mobile interface to interact with the store’s computer network. As with most autonomous robots, it seeks out a charger when its battery runs low.

>> Read more by William Wong, New Equipment Digest, October 05, 2017


Machine Vision Techniques: Practical Ways to Improve Efficiency in Machine Vision Inspection

Machine vision efficiency is at the core of production efficiency. The speed of manufacturing is often dependent upon the speed of machine vision inspection. Creating efficiencies in machine vision can have wide reaching benefits on manufacturing productivity.

Are you doing all you can to make machine vision as accurate and efficient as possible? The following are a few practical ways to improve the efficiency of your machine vision systems.

4 Practical Tips for Machine Vision Efficiency

The following tips are fundamental but quick fixes to improve machine vision efficiency if your inspection processes are slowing down or impacting production.

1. Lighting Techniques

Is your lighting technique maximizing contrast for the area of inspection? Between backlighting, bright field lighting, grazing, low angle linear array, and dark field lighting, there are often several different ways to illuminate the same application. The technique with the highest contrast will help improve the accuracy of image capture.

2. Light Wavelength and Frequency

Some parts, such as metallic products, may arrive at your facility and be inspected with a light coating of oil on them from storage. This will create noise in your images. Adjusting the frequency and wavelength of light you’re using can help combat this type of noise introduced into the inspection environment.

3. Trigger Range Function

Sometimes, the broader industrial environment will create electrical noise and cause false triggering of your inspection system, which could have devastating consequences for production, such as the software concluding that passable objects are failing inspection. Implementing a trigger range function, which accepts only trigger signals of an expected duration (as sketched below), helps maintain the integrity of machine vision inspection systems.
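
The sketch below illustrates the idea in software, with hypothetical timing values rather than any particular camera’s trigger hardware: a pulse fires an inspection only if its width falls inside the expected window, so brief electrical noise spikes are ignored.

```python
class TriggerRangeFilter:
    """Accept a trigger only if its pulse width lies within an expected range."""

    def __init__(self, min_ms=2.0, max_ms=20.0):
        self.min_ms = min_ms          # anything shorter is treated as noise
        self.max_ms = max_ms          # anything longer is treated as a fault
        self._rise_time = None

    def rising_edge(self, t_ms):
        self._rise_time = t_ms

    def falling_edge(self, t_ms):
        """Return True if this pulse should fire an inspection."""
        if self._rise_time is None:
            return False
        width = t_ms - self._rise_time
        self._rise_time = None
        return self.min_ms <= width <= self.max_ms

f = TriggerRangeFilter()
f.rising_edge(0.0)
print(f.falling_edge(0.3))    # False: a 0.3 ms spike is electrical noise
f.rising_edge(100.0)
print(f.falling_edge(110.0))  # True: a 10 ms pulse is a real part-present signal
```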

4. Filtering

Industrial environments often introduce background and/or overhead lighting noise into the inspection area. Many times, this can be completely filtered out with the correct wavelength of lens filter, improving the accuracy and quality of image capture.

Machine vision works best in consistent, undisturbed environments, but this is rarely the case in an industrial setting. The tips mentioned above are some quick ways to improve the efficiency of machine vision inspection, which improves the efficiency of overall production.

>> Learn more at Vision Online, 09/12/2017

Machine Vision in the Food and Beverage Industry: Mostly Feast, But Some Famine

Food and beverage producers face continuous pressure to verify product quality, ensure safe and accurate packaging, and deliver consumables that are completely traceable through the supply chain. Machine vision has been helping the industry achieve these goals for the better part of two decades. But as government regulations tighten and consumers demand more transparency about the contents of their sustenance, adoption of vision and imaging systems in food inspection is on the rise — despite a few segments that show hesitance toward the technology.

Safety First

Even though the U.S. Food Safety Modernization Act (FSMA) took effect in 2011, some food processors and packagers are still finalizing solutions to meet the law’s product tracking and tracing requirements. “FSMA has forced the food industry to have better recording and reporting systems of their processes, so more food and beverage manufacturers are using 2D barcode reading to track and serialize data,” says Billy Evers, Global Account Manager for the food and beverage industry at Cognex (Natick, Massachusetts).

But a more pressing need is driving the adoption of both barcode and vision technologies in food processing facilities. “Right now as a society, we’re at an all-time high for food allergies,” Evers says. “There’s a heightened awareness in the industry about determining proper labels for allergen-based contaminants.”

Incorrect or incomplete allergen labeling could lead to customer illness, costly recalls, and damage to the food producer’s brand. While some manufacturers are using barcode readers for label verification, many of them “have legacy artwork that’s been in existence for 60 or 70 years and don’t want to mess up their brand by putting a 2D code on their packaging,” Evers says.

In such cases, companies will use optical character recognition (OCR) and verification (OCV) of existing alphanumeric characters on the label, or pattern matching to track fonts or check for the absence/presence of certain words. Food producers also are using barcode readers and vision systems to comply with a 2016 U.S. law mandating the labeling of food that contains genetically modified ingredients, or GMOs.

Sometimes, the demand for barcode scanning comes from within the supply chain itself. Evers cites the example of one food company pushing its suppliers to guarantee that their barcodes are accessible from almost every portion of the pallets containing them so that workers aren’t wasting time twisting individual boxes in order to scan them at distribution centers or back-of-store warehouses.

PET Projects

(Image source: Pressco.com)

Like other industries relying on machine vision for inspection, food and beverage makers want systems that do more with less. For the past decade, many beverage filling facilities have been manufacturing PET plastic bottles on site rather than relying on a converter to make, palletize, and ship them. Pressco Technology Inc. (Cleveland, Ohio) has developed vision systems that conduct inspection up and down the line, covering not only the preforms blown into PET bottles but also the fill levels, caps, and labels on the filled containers.

“The advantage of doing all of this with one control is that you don’t have to train operators on or buy spare parts for three or four different inspection systems,” says Tom O’Brien, Vice President of Marketing, Sales, and New Business Development at Pressco.

O’Brien points to two competing challenges in the plastic bottling industry that can benefit from machine vision inspection. One is the lightweighting of PET containers and closures to reduce cost and provide a more sustainable package. “As you make things lighter, you use less plastic and have a greater opportunity for defects to occur,” he says.

Secondly, with the use of post-consumer, re-ground material to make new beverage bottles, vision systems can inspect for contaminants such as dirt that can enter the production process as the recycled PET is melted and extruded into pellets.

To accommodate customers’ requests for more intelligence in their machine vision products, Pressco correlates defects in the blow molder to the mold, spindle, and transfer arms, and in the filler to the filling valves and capping heads. “If you get a repetitive defect coming from one of those machines, the machine vision system identifies which component is producing the defect to pinpoint that machine’s component so the customer can take corrective action,” O’Brien says.
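
A minimal sketch of that kind of correlation follows: defects are counted by the index of the mold (or valve, or capping head) that produced each container, and any component with a defect count far above the average is flagged. The threshold and data are invented.

```python
from collections import Counter

def flag_components(defect_mold_ids, n_molds, factor=3.0):
    """Flag molds whose defect count is well above the average across all molds."""
    counts = Counter(defect_mold_ids)
    average = len(defect_mold_ids) / n_molds
    return [mold for mold, count in counts.items() if count > factor * average]

# Simulated shift: mold 7 on a 24-mold blow molder produces most of the defects.
defects = [7] * 40 + [3, 12, 18, 7, 21, 9, 7]
print(flag_components(defects, n_molds=24))  # -> [7]
```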

Imaging opaque plastics like high-density polyethylene (HDPE) and polypropylene presents another challenge, as these materials require x-ray, gamma ray, or high-frequency units to measure fill lines. “We have primarily been a machine vision–based company, but we’re selectively developing those technologies because of the market demand,” O’Brien says.

Pedal to the Metal

On the metals side of its business over the last two years, Pressco has fielded a high volume of requests for its Decospector360 product, which inspects the entire outside surface of a decorated beverage can. “This is something can makers have wanted and needed for many years because the process of decorating a beverage can is volatile and unstable,” says Michael Coy, Marketing Manager at Pressco.

Decospector360 features multiple cameras, sophisticated software algorithms, and a proprietary lighting design that illuminates a wide range of labels, colors, and can styles and heights. The system accurately inspects every can on the line, which typically runs about 2,000 units per minute.
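
At that rate the time budget per container is tight. A quick calculation (assuming the cans pass the inspection station single file) gives the rough figure:

```python
cans_per_minute = 2000
budget_ms = 60_000 / cans_per_minute  # time available per can, in milliseconds
print(f"{budget_ms:.0f} ms per can for imaging, analysis, and the reject decision")
# -> 30 ms per can
```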

“To be able to inspect 360 degrees around the outside of that decorated can and look at the label for any print quality issues and color defects, and to do it that fast, is extremely challenging,” Coy says. “Our system solves that problem to the degree that the world’s largest can manufacturers are installing the technology.”

According to Coy, prior to the release of Decospector360, can makers relied on inspectors to eyeball the production line. If plant personnel saw a suspicious defect, such as ink missing from cans, they would have to flag entire pallets of cans that had already completed the production process to be reinspected.

This process, known as hold for inspection (HFI), “is probably one of the most expensive and time-consuming for any can manufacturer,” Coy says. “You have to store the pallets someplace and pay someone to look at those cans and decide if they’re going to scrap them or ship them, and the can maker also runs the risk of making their customer angry.”

In fact, brand protection is a key driver for automated can inspection. “Visual brand identity is very important to beverage manufacturers,” Coy says. “The cans have to be perfect. Our system provides a degree of assurance that the cans are being produced, printed, and sent to the filling companies with a quality that matches the brand owner’s expectations.”

To Protect and Serve … Safe Food

When a food product recall occurs, it’s more than a company’s brand or reputation at risk. A North Carolina meat processing company recently issued a recall of more than 4,900 pounds of ground beef because it contained shredded pieces of Styrofoam packaging.

Upon reading about the recall, Steve Dehlin, Senior Sales Engineer with machine vision integrator Integro Technologies in Salisbury, North Carolina, reached out to the meat processor. “I have contacted numerous people in quality and plant management positions and told them that we can help prevent future recalls using machine vision technology, specifically using hyperspectral imaging,” Dehlin recalls. “In fact, we are reaching out to a number of food manufacturers to solve this problem before it impacts consumer health and becomes both a financial and PR issue for the companies.”

Multispectral and hyperspectral imaging of meat products has been well documented. In 2009, the U.S. Department of Agriculture’s Agricultural Research Service successfully used hyperspectral imaging to inspect contaminated chicken carcasses in a commercial poultry plant. And machine vision companies like Integro also have installed numerous hyperspectral imaging systems that use RGB to check color differences in the meat and infrared wavelengths to inspect for contaminants below the surface.

Despite the evidence, meat processors are reluctant to employ the technology. “The food industry is very cost sensitive, and while machine vision greatly reduces quality-control risk, it takes planning, design, installation, and training, which may be the reason for their hesitancy,” Dehlin says. “With meat or any food coming down the line at high speeds, the product has natural variation and color change. Customized machine vision inspection systems are ideal applications to detect quality issues.”

Often the reluctance comes from a lack of knowledge about hyperspectral imaging among plant engineers at the meat processing facilities. Other segments of the food industry can benefit from the technology as well. For example, a 2016 salmonella outbreak in cantaloupe likely could have been prevented if hyperspectral imaging had been used to detect pathogens, according to Dehlin.

Dehlin expects that the U.S. Food and Drug Administration eventually will require spectral analysis of a food product sample to test for pathogens, but the push to adopt multispectral and hyperspectral imaging technology on a broader scale will likely come from food conglomerates like Walmart. Opportunities for machine vision in the food industry are ripe for the picking. To encourage continued adoption of machine vision technologies, system integrators have one more food metaphor to rely on: The proof is in the pudding.

>> Reposted article by Winn Hardin at Visiononline.com, 9/22/17

Case Study: Robotiq’s Wrist Camera Doubles the Productivity of WALT Machine Inc.

Tommy Caughey Is President of WALT Machine Inc. (Image source: Robotiq.com)

Mississippi’s WALT Machine Inc. specializes in high-precision optical work for scientific camera assemblies. Each spring, they must meet a massive challenge: deliver around 6,000 camera housings in two months’ time, with only one CNC machine.

To deal with this short-term rise in production volume, Tommy Caughey started to look into a robot-based solution a few years back. “I saw Universal Robots at IMTS Trade Show maybe 4 or 6 years ago and I found it interesting: a robot that does not require any extra stuff, like jigs for example. I followed up throughout the years and thought that’s where we needed to go one day.”

A First Simple Vision System for Collaborative Robots

One problem remained: He knew he needed either a vision system or a conveyor to pick up the raw parts from the table. “Everyone told me that it’s a very difficult process, that you need to have a person in your shop to do it,” Caughey recalls. A big plus of using a robot is that production can continue during unattended hours in the shop. This of course increases productivity, since roughly twice as many parts can be made in the same calendar time.

In June 2016, Robotiq released the Plug + Play Wrist Camera made exclusively for Universal Robots. For the entrepreneur, this was a game changer. “I didn’t need a vision expert anymore, I could do it myself. I bought the camera, and it’s super simple. It takes about 10 minutes and your part is taught.”


The benefits have been huge for WALT Machine Inc., and new opportunities have arisen since their robot, named Arthur, became part of the team. Read the complete WALT Machine Inc. case study here.

>> Reported by David Maltais on Robotiq.com, Sep 26, 2017

Robotic Machining On Tap for Aerospace

A thin stream of water slices through 8 inches of titanium as if it were butter. Driving the process is a six-axis robot that painstakingly maneuvers the waterjet nozzle across the part, shaping the graceful contours of jet engine airfoils with ease. You have to see it to believe it.

Robotic machining has come a long way. Robots have proven to be robust and accurate enough to achieve the demanding tolerances required by the aerospace industry. In this case, they’ve done it with help from a waterborne solution.

In the first part of the video noted previously, a robotic waterjet system is rough cutting a titanium blisk, or integrated bladed rotor (IBR), for a commercial jet engine. With just tap water mixed with abrasive media and shot out of a small orifice at ultra-high velocity, this robotic waterjet system can cut through solid metal up to a foot thick.

“This is 3D cutting with waterjet,” says Dylan Howes, Vice President of Business Development for Shape Technologies Group (SHAPE) in Kent, Washington. “The Aquarese system (pictured) is the only 3D robotic abrasive waterjet machine able to achieve 94,000 psi (6,500 bar).”

Aquarese is part of the SHAPE family of companies focused on waterjet cutting solutions and integrated systems, along with Flow International Corporation, which manufactures the high-pressure pumps and waterjet technology. Aquarese integrates Flow’s technology with advanced robotics to provide turnkey solutions for its aerospace, energy, and automotive customers.

Smooth Moves

The robot brings flexibility and smooth motion to the waterjet process. With six degrees of freedom, the articulated arm can approach the workpiece from virtually any angle and follow a smooth, accurate, and highly repeatable toolpath to create precision cuts and contours. In metal cutting applications, the waterjet typically rough-cuts the components, which subsequently undergo final milling operations.

“One of the primary benefits of waterjet is that it’s extremely versatile,” says Howes. “You can cut metal, composites, glass, stone, paper, food, just about anything. With waterjet, you could be cutting metal one day and cutting foam the next day on the same machine.”

Aquarese systems are used to cut titanium alloys, Inconel, Ni-based alloys, other superalloys, stainless steels, and composites. Abrasive waterjet is required for cutting metals. Garnet is used as the abrasive media in 99 percent of abrasive waterjet applications. Water and garnet exit the waterjet cutting head at nearly four times the speed of sound to increase cutting power by 1,000 times.

Despite its immense power, robotic waterjet machining is a cold-cutting process, so there’s no heat-affected zone (HAZ) or thermal fatigue. This is an advantage over laser and plasma cutting. Howes says there is no mechanical stress on the part, so part integrity is not compromised and only light fixturing is required, as compared to milling or conventional machining.

“Waterjet is more efficient than rough milling or wire EDM (electrical discharge machining),” says Howes. “It’s much faster, has a lower operating cost, and produces large offcuts which are easier to recycle than the chips that you get from a milling operation.”

The waterjet process is chemical-free and environmentally friendly. The water, as well as any garnet used as an abrasive, can be recycled.

“There are no hazardous fumes,” says Howes. “You can use closed-loop water systems. There’s none of the dross waste you would find in a laser or plasma application.”

Robotic Accuracy, Repeatability, Rigidity

The robot in the featured waterjet application is manufactured by Swiss-based Stäubli Corporation.

“We use Stäubli, particularly for this cutting application, because of its robustness and path accuracy,” says Howes. “We worked closely with Stäubli to refine this process for our needs and it’s been a great partnership.”

Traditionally, robotic waterjet has been more common for softer materials and other industries. Now we’re seeing it in the aerospace industry for cutting metals and composites.

“It’s become a more common application because now we can achieve better performance,” says Sebastien Schmitt, North American Robotics Division Manager for Stäubli Corporation in Duncan, South Carolina. “We’ve made so much progress with the rigidity of the arm and precision. It makes it possible today to work within that domain.

“Accuracy, repeatability, rigidity, all this comes from our patented gear box that we manufacture and design in-house,” continues Schmitt. “We’re the only robot manufacturer that designs its own gear box. That gives us superior trajectory performance.”

The robot is a high-payload 100 kg model, which Schmitt says you need for rigidity. But it’s also important for the counterforce from the high-pressure waterjet. Aquarese found minimal to no pushback with the Stäubli robots. Imagine the kickback you might get from a fire hose when you turn it off and on. Not from these systems.

“The fact that we are rigid, very precise, very repeatable gives you the ability to push the edge of performance,” says Schmitt, noting they are now able to compete with traditional milling methods. “The cost of a 5-axis CNC machine is three or four times the cost of a system like you see here.”

Aquarese is using a Stäubli TX200 HE robot (pictured). The HE stands for humid environment; this robot was developed specifically for wet environments. The enclosed arm structure is IP65 rated and reinforced by arm pressurization for added waterproofing. The IP67-rated wrist is corrosion resistant and protected against low-pressure immersion. The tool flange and critical parts are made of stainless steel to hold up in corrosive environments.

Longevity is also important, especially when your robot is working in harsh environments like abrasive waterjet applications.

“You’re making an investment for years to come,” says Schmitt. “That’s something Stäubli is known for, to be able to maintain our quality for many years. We have systems that have been working for 20 years and still performing like day one.”

Stäubli’s proprietary robot programming language, VAL 3, is optimized for compatibility with CAD-to-path software. According to SHAPE’s Howes, it’s a very simple process to import a CAD model and generate an optimized toolpath. The waterjet systems are programmed using SHAPE’s own software suite called FlowXpert, which comes bundled with the system. For 3D robotic waterjet cutting, the AquaCAM3D module is also supplied, which has built-in modules and functions for specific applications, including roughing of blisks and trimming of fan blades. AquaCAM3D is optimized to work seamlessly with Stäubli robots and export the generated toolpath.

Material Savings

Material savings is a major advantage of robotic waterjet. Turning our attention to the video again, at 35 seconds into the footage, you can watch 3D nesting with robotic waterjet. In this application, the process is roughing out two turbine blades from one bar of lightweight alloy.

“For one slug, you get two parts that are near net shape before final machining and grinding,” says Howes. “This is a huge advantage with waterjet because you’re using 3D nesting which can’t be done with milling. The only other way you can do this is with wire EDM, which is very expensive.”

You can also use common cut lines when cutting sheet metal. The waterjet’s thin cutting width, ranging from 0.003 to 0.015 inches for a pure waterjet stream and 0.015 to 0.070 inches for abrasive waterjet, allows for intricate detail. Howes says you can’t do this efficiently with conventional machining where the kerf, or width of the cut, is too wide. Common cut lines, 3D nesting, and larger offcuts all provide significant material savings.
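
A small worked example of why the narrow kerf and common cut lines matter: the kerf widths are the figures quoted above, while the plate thickness and edge length are assumed values.

```python
# Material turned to waste along a cut = kerf width x plate thickness x cut length.
THICKNESS_IN = 1.0     # assumed plate thickness
EDGE_LENGTH_IN = 12.0  # assumed length of one straight edge of a part

def kerf_loss_in3(kerf_width_in):
    return kerf_width_in * THICKNESS_IN * EDGE_LENGTH_IN

pure = kerf_loss_in3(0.009)      # mid-range pure waterjet kerf
abrasive = kerf_loss_in3(0.040)  # mid-range abrasive waterjet kerf
print(f"pure waterjet:     {pure:.3f} cubic inches lost along the edge")
print(f"abrasive waterjet: {abrasive:.3f} cubic inches lost along the edge")

# With a common cut line, two nested parts share that edge, so the kerf loss is
# incurred once instead of twice, effectively halving it per part.
print(f"abrasive, shared edge, per part: {abrasive / 2:.3f} cubic inches")
```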

Aquarese also integrates robotic waterjet stripping solutions for coating removal on aircraft engine parts, including boosters and combustors, for the maintenance, repair and overhaul (MRO) sector. It also has systems for ceramic shell and core removal for investment casting foundries, typically in the aerospace or industrial gas turbine markets.

“We can also integrate the core removal systems with cutting solutions for de-gating, as well as systems for removing the flashing from forged materials,” says Howes. “All of these are robotic applications.”

Researching Enabling Technologies

Still, robotic machining, whether with waterjet or more conventional means, has its limitations when it comes to rigidity and accuracy. Researchers are exploring novel ways to address these limitations.

Research is underway at the recently inaugurated Boeing Manufacturing Development Center (BMDC) on the campus of Georgia Institute of Technology in Atlanta. The BMDC is focused on implementing industrial automation in non-traditional ways, such as shimless machining. The center is located in the 19,000-square-foot Delta Advanced Manufacturing Pilot Facility (AMPF).

Although the ribbon cutting for the center was just held in June, Georgia Tech’s strategic partnership with Boeing is in its tenth year according to Shreyes Melkote, Associate Director of the Georgia Tech Manufacturing Institute and Morris M. Bryan, Jr. Professor of Mechanical Engineering for Advanced Manufacturing Systems at Georgia Tech.

“The AMPF is a translational research facility focused on discrete parts manufacturing where we work with industry to take ideas and technology developed in the lab and translate or tailor them to applications that might be of use to the industry sponsor (in this case, Boeing),” says Melkote, who is also an affiliated member of Georgia Tech’s Institute for Robotics and Intelligent Machines, where he serves as a bridge between manufacturing and the robotics and automation area.

Melkote’s research focuses on robotic milling. His objective is to use enabling technologies to allow robots to produce more complex features and surfaces, and to do it with a high degree of accuracy.

“Lack of stiffness and accuracy are limitations that still need to be overcome,” says Melkote. “Technologies such as sensing, compensation, and metrology addressing stiffness and the limitations of articulated arm robots are what we’ve been working on for the last 4 or 5 years.”

For example, Melkote says they are using laser tracking devices and other types of metrology systems, along with in-process sensing of forces to help address accuracy issues when using robots in high-force applications. Some of these findings have been published in journals.

In the paper “A Wireless Force-Sensing and Model-Based Approach for Enhancement of Machining Accuracy in Robotic Milling,” Melkote and fellow researchers at Georgia Tech test a new hybrid method that combines wireless force sensing with a mechanistic model of the milling forces to increase the accuracy of robotic milling while preserving its flexibility. Milling experiments are conducted with a high-payload articulated robot (pictured) and a wireless polyvinylidene fluoride (PVDF) sensor system for real-time force measurement. The results show significant improvement, over 70 percent, in the dimensional accuracy of simple geometric features machined by the new method.
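
The underlying idea can be caricatured very simply: cutting forces deflect a compliant robot arm, and if the force is measured and the effective stiffness is known, the commanded toolpath can be offset to cancel the predicted deflection. The one-dimensional model and numbers below are ours, not the published method.

```python
def compensated_position(nominal_mm, measured_force_n, stiffness_n_per_mm):
    """Offset the commanded position to cancel the predicted force-induced deflection.

    A one-dimensional caricature of model-based compensation: deflection = F / k,
    so the command is pushed past the nominal point by that amount.
    """
    deflection_mm = measured_force_n / stiffness_n_per_mm
    return nominal_mm + deflection_mm

# Assumed values: a 200 N cutting force on an arm with 500 N/mm effective stiffness
# deflects the tool by 0.4 mm, so the command is shifted by 0.4 mm to compensate.
print(compensated_position(nominal_mm=100.0, measured_force_n=200.0,
                           stiffness_n_per_mm=500.0))  # -> 100.4
```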

“Robotic machining could offer a lower-cost and more flexible, versatile technology for enabling assembly of aerospace components,” says Melkote. “We’re exploring how we can use other technologies, like metrology, to compensate for the limitations and achieve the tolerance requirements that the aerospace industry needs.”

Several technologies developed at Georgia Tech have been successfully transitioned into production at Boeing, including new design methods for advanced commercial aircraft, flow control for 787 aircraft, material handling for F/A-18, F-15 and C-17 components, vision systems for hole countersinking, and autonomous robotics for assembly.

Other research underway on the Georgia Tech campus includes teaching robots tasks through human demonstration and exploring the use of robots as flexible fixturing devices. Robotic-assisted manufacturing will continue to be a focus for researchers as they help lead a resurgence in advanced manufacturing across the nation.

RIA Members featured in this article:
Georgia Institute of Technology
Stäubli Corporation

>> Read more by Tanya M. Anandan, Robotics Industry Insights, Robotics Industries Association, 09/26/2017

Machine Vision Lighting: A Brief Design Overview

(Image source: visiononline.org)

Machine vision lighting is a critical component of a successful automated imaging system. Without proper lighting, your machine vision system will not function as intended and may provide skewed or inaccurate results.

Every machine vision application needs its own lighting solution. Many processes could be illuminated a few different ways and every lighting technique has its pros and cons.

Integrating a machine vision lighting system is a complicated undertaking and there are several application-specific considerations to take into account before implementing anything. The best way to start is to know what lighting design factors to consider for your application.

6 Machine Vision Lighting Design Considerations

Here are six considerations for designing and integrating a machine vision lighting solution, applicable to any application:

  1. Determine the Exact Features of Interest

Determine in advance exactly what a proper machine vision and lighting system will produce. For example, in inspection applications, determine what a properly illuminated part will look like, and which areas are most important for inspection, and therefore most important for lighting.

  2. Analyze Part Access/Presentation

Will the part be clear or obstructed? Will it be moving or stationary? You need to understand how the part is presented to be able to accommodate for this in lighting design. Part presentation will have a significant impact on the resulting lighting system.

  3. Analyze Surface Characteristics

Texture, reflectivity, contrast and surface shape all have an impact on lighting and must be taken into consideration. A curved surface, for example, will have different lighting requirements than a flat surface.

  4. Understand Your Lighting Options

What type of lighting equipment will you use: rings, domes, bars, spots and/or controllers? Will you use bright field, diffuse, dark field or back lighting? You need to understand all your available options to choose the best one for your application.

  5. Understand Your Lighting Limitations

Many applications have inherent limitations for enhancing illumination. Contrast enhancement, for example, could be limited by light direction and wavelength. Take into consideration the pre-existing limitations in your application.

  6. Factor in Environmental Issues

Machine vision lighting does not take place in a vacuum – environmental factors will always be an issue. Ambient light, for example, can have a dramatic effect on lighting systems, creating inconsistent lighting environments.

The 6 considerations above are a broad overview of all the things you need to consider before implementing a machine vision lighting solution.

Machine vision lighting is an integral component of any machine vision system, but integration and design can be a challenge because of the number of variables you need to consider. It’s wise to work with a qualified, certified vision system integrator for any machine vision lighting projects.

>> Read more from Vision Online, 8/29/17

Big Hero 7: Newest Automation Tools to Lighten Your Workload

Robots may not be ready to take over the world, but they are set to take over your repetitive tasks and data crunching to make manufacturing easier than ever. Here are the latest tools in robotics and automation to enhance or replace outdated equipment in your factory.

Take notice of these seven automation products, ranging from laser equipment to painting robots, that are guaranteed to speed up and optimize their assigned tasks.

Laser Scanner Goes Robotic – Integrated with the portable ROMER Absolute Arm, the RS4 scanner offers new optics, electronics, and mobile capabilities designed to deliver up to 60% faster scan rates than previous models. The new RS4 scanner introduces an ultra-wide laser line nearly double the width of its predecessor, which translates to larger surface coverage and faster data collection. The newly designed profile also allows users to scan more deeply into difficult-to-reach cavities than ever before, with no reduction in accuracy.

OTTO Self-Driving Vehicle carrying boxes through plant.
OTTO Self-Driving Vehicle by OTTO Motors (Images source: ottomotors.com)

OTTO Rolls into Industry 4.0 – OTTO M software enables Industry 4.0 capability by managing your autonomous fleet. The software connects OTTO self-driving vehicles with the material flow in a production line. It creates a map of your facility’s floor plan, makes live updates, and initiates pick-up and delivery moves.

Quick-fastening Cable Protection for Cobot Arms – LSDFB and MESUB are two universal attachments that combine simple, rapid fitting with high slip resistance and wear-free fixing of the cable protection. Both systems have a wide temperature range: -40° to +80°C (with transient exposure up to 100°C). The easy-fit fastening solutions prevent downtime, which would normally be encountered due to damage of the cable protection and wiring.

Motoman GP25 robot
Motoman GP25 offers a 25 kg payload capacity. (Image source: www.yaskawa.eu.com)

Faster, More Flexible Assembly Robot – The Motoman GP25 is a compact robot that is ideal for assembly, dispensing, handling, material removal, and packaging applications. All axis speeds have been increased, some over 40%, surpassing other robots in its class and delivering increased productivity. Its small footprint allows for minimum installation space and minimizes interference with peripheral devices. This allows it to be placed in close proximity to workpieces and other robots to create flexible, high-density layouts.

Tugger Precisely Backs That Thang Up – The MAX-N10 tugger is able to move loads up to 10,000 lb. without the need of tape, tags, or reflectors. It features SUREPATH natural feature navigation that requires no magnetic tape or RFID tags. The modular design allows for different vehicle and load handling frame configurations to accommodate specific handling needs including man-aboard trail frame, tiller handle controls, straddle or counter balance fork attachments, and various unit-load attachments. The vehicle travels up to 4 mph and performs precise reversing maneuvers to accommodate automatic trailer hitching and unhitching functions.

Compact Benchtop Robot Offers Effortless Programming – The RP Series robots offer a generous workspace of 36 x 20 in., which provides a stable foundation for fixturing and tooling needs. The machine comes equipped with leadscrews, ball slides, and brushless servo motors on each axis. All of the parts remain stationary on the workspace while the dispense head moves on the high-speed gantry.

Preconfigured paint bot ready to spray.
Dürr and Kuka preconfigured automation package for painting tasks (Image source: www.kuka.com)

Pre-installed Paint Bot Works with Several Materials – Pre-installed and ready to spray, the ECORP 10 R1100 contains fully compatible, tried-and-tested components, and offers a unique combination in the market. Perfectly suited for the requirements of general industry, its areas of application include the painting of wood, plastics, glass, and metal. While the robot comes from Kuka, Dürr provides the paint application technology.