GE Looks for a Line of Sight on 3D Builds with Digital Twins

3D printing has come a long way since it was introduced in the 1980s. It has firmly established itself as a valuable prototyping tool. Now the technology stands poised to transform manufacturing, finding particular relevance in the production of parts for industries like aviation and medical instrumentation. But 3D printing seems slow to take that next big step to advanced manufacturing, with many production facilities taking a wait-and-see attitude toward the technology.

One problem is that 3D printing simply takes too long to produce large or complex components. For example, some advanced 3D-printed aviation parts take more than a week to complete and require several additional hours for processing and validation. On top of that, if something goes wrong near the end of production, operators may have to scrap the part and start all over again, wasting valuable time and resources.

Performance Variability

Despite advances in additive manufacturing technology, even advanced powder-based printers experience variable performance, which can result in a less-than-satisfactory build. Poor performance stems from a variety of factors. These range from variability in the size of the powder particles to complex dynamics occurring when new powder layers are added. The reality is that every time the printer deposits a layer of powder, opportunities exist for imperfections.

To avoid having to redo printing jobs and extend production schedules, operators must be able to spot imperfections early, determine which ones must be corrected, and rectify them in time.

GE’s control system uses an inspection process that leverages high-resolution cameras, machine learning, and CT scanning to identify streaks, pits, divots, and other imperfections. Armed with this capability, operators will be able to take corrective action in a timely fashion, ultimately achieving 100% yield. (Image courtesy of GE Global Research.)

Toward 100% Yield with Digital Twins

This is exactly what GE Global Research proposes to do with the new 3D printing control system it is working on. Ultimately, the researchers hope the system will enable 100% yield.

The control system begins with an inspection process that uses high-resolution cameras to film every layer deposited by the printer, recording streaks, pits, divots, and other patterns in the powder — features nearly invisible to the human eye. In the next stage, the operators examine the samples with a CT scanner, searching for flaws that will compromise the quality of the part.

This image offers a look at powder bed defects. In this case, the streaking and pitting are caused by powder particles dragged across the powder bed by the coater. Although these features are nearly invisible to the human eye, imperfections such as these can compromise the quality of a build. (Image courtesy of GE Global Research.)

The system stores all of this data in computer memory, and machine-learning algorithms correlate defects revealed by the scanners with powder patterns recorded by the camera. As with other machine learning processes, the more inspection data used to train the system, the smarter it becomes.
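The correlation step can be pictured with a toy sketch: a minimal nearest-centroid classifier over invented per-layer features (the streak and pit scores below are assumptions for illustration, not GE's actual features), trained on layers labeled by whether the CT scan later found a defect.

```python
# Hypothetical sketch: correlating per-layer powder-bed features with
# CT-confirmed defects using a simple nearest-centroid classifier.
# Feature names and values are invented; a production system would use
# real image features and a far richer model.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(samples):
    """samples: list of (features, label), label is 'ok' or 'defect'."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, feats):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], feats))

# Synthetic training data: [streak_score, pit_score] per layer image,
# labeled by whether the CT scan later revealed a defect at that layer.
history = [
    ([0.1, 0.2], "ok"), ([0.2, 0.1], "ok"),
    ([0.9, 0.8], "defect"), ([0.8, 0.9], "defect"),
]
model = train(history)
```

As the article notes, the more labeled inspection data fed into such a model, the better its predictions become.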

“We believe computer vision and machine learning are key technologies in our overall goal of achieving 100% print yields,” says Joseph Vinciquerra, additive technology platform leader for GE Global Research. “We’re using computer vision to try and spot any anomalies that may occur during the printing process. Then, using artificial intelligence and machine learning, we’re building ‘digital twins’ of these anomalies.”

The digital twins are virtual models of the production process. These, in turn, can compare ideal profiles with the actual conditions in real time and recommend process changes to achieve a better result.

“Digital twins are living, learning models of industrial parts, assets, processes and systems,” says Vinciquerra. “In the case of a 3D printer, we’re building a twin of a build process and recording the slightest defects, deviations and other build characteristics. With our twins, they will continually be updated with each new build and become ever smarter in recognizing and troubleshooting any potential issues that might arise. These twins are providing line of sight to part builds that previously could not be seen.”
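As a rough illustration of the compare-and-recommend loop, here is a minimal sketch of a "twin" for a single build metric. The metric (layer thickness in microns), tolerance, and recommendation text are invented for illustration and do not reflect GE's actual models.

```python
# Hypothetical single-metric "digital twin": compare each observation
# against the expected profile, recommend a correction when out of
# tolerance, and refit the expectation after every build.
class LayerTwin:
    """Tracks the expected value of one build metric (e.g. layer
    thickness in microns) and flags observations that drift too far."""

    def __init__(self, expected, tolerance):
        self.expected = expected
        self.tolerance = tolerance
        self.history = []

    def observe(self, value):
        """Record an observation; return a recommendation if out of spec."""
        self.history.append(value)
        deviation = value - self.expected
        if abs(deviation) > self.tolerance:
            direction = "decrease" if deviation > 0 else "increase"
            return f"{direction} next-layer powder dose"
        return None

    def learn(self):
        """After a build, refit the expected value from observed data."""
        self.expected = sum(self.history) / len(self.history)
        self.history = []

twin = LayerTwin(expected=50.0, tolerance=2.0)
```

The `learn()` step is the "continually updated with each new build" idea in miniature: each completed build nudges the model of what a healthy build looks like.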

The Ultimate Goal

The researchers hope to take the technology one step further, incorporating the defect-spotting ability into the printer’s controls. That way, when the computer vision identifies a known fault condition, the printer can automatically add more power or speed to the laser beam to adjust the thickness of the next layer.
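That closed-loop reaction might be sketched, under heavy assumptions, as a lookup from detected fault condition to parameter adjustment. The fault names and percentage changes here are entirely hypothetical.

```python
# Hypothetical closed-loop reaction table: when the vision system
# reports a known fault, adjust laser parameters for the next layer.
# Fault names and adjustment values are illustrative assumptions.
ACTIONS = {
    "layer_too_thick": {"laser_power_pct": 10},
    "layer_too_thin": {"laser_power_pct": -5, "scan_speed_pct": 5},
    "streaking": {"scan_speed_pct": -10},
}

def apply_correction(settings, fault):
    """Return adjusted printer settings for a detected fault; unknown
    faults leave the settings unchanged (a real system would alert)."""
    delta = ACTIONS.get(fault, {})
    return {k: v + delta.get(k, 0) for k, v in settings.items()}
```

For example, `apply_correction({"laser_power_pct": 100, "scan_speed_pct": 100}, "layer_too_thick")` bumps laser power while leaving scan speed alone.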

In the end-state, the control system would not require the CT scan. “Because this technology is in the early stages of development, we can’t just rely on the digital models we are creating,” says Vinciquerra. “We need to validate and — to a certain extent — actually train the models we’re building, which we can do by cross-checking the computer vision results with a CT scan, which is something we often do with parts after they’re made. So, today, the CT scanner is part of the learning process, but in the future, we hope to obviate most downstream inspection.”

GE sees the next generation of digital twins incorporating information from other sensors monitoring the printing process, such as the shape of the pool of metal rendered molten by the laser.

In addition, this smart, real-time quality control will not function in isolation.

“The true power of digital twins is their ability to share insights with each other,” says Vinciquerra. “So you can imagine thousands of machines sharing unique build insights with each other that makes them each more informed about what to watch for during a build process.”

>> This article by Tom Kevan was re-posted from Digital Engineering, December 18, 2017

Better, Faster, Cheaper: Machine Vision Comes of Age in Automotive Manufacturing

Walk along a modern automotive manufacturing line and you might think you’ve stepped onto the set of a “Terminator” movie. Everywhere you look, you’ll see robots, and very few humans, diligently building cars.

That’s because automotive manufacturing has always been a leader in the adoption of automation technology, including machine vision and robots — and for good reason. The use of automation has made automobiles more affordable to the masses and significantly safer due to higher-quality construction and advanced automotive systems, many of which wouldn’t be possible without the assistance of automation technology.

Given the automotive industry’s leading-edge adoption of automation technology, it’s no surprise that growth no longer comes from applications being automated for the first time. Instead, it comes from retooling and retrofitting existing production lines. Today, integrated vision systems packed with intelligence to simplify their setup and operation are driving vision’s penetration into the motor vehicle market, helping the automotive manufacturing industry achieve new heights in productivity and profitability.

Bumper-to-Bumper Vision

A list of automotive systems that use vision technology during assembly or quality inspection reads like the table of contents from a service manual, covering every aspect of the automobile from chassis and power trains to safety, electronics, and tire and wheel. In most cases, machine vision is tracking the product through the use of 1D and 2D barcodes and performing quality inspections. But it’s also helping to assemble the products.

“Most of the applications we’re solving today involve material handling, moving parts and racks to assembly lines using either 2D or 3D vision,” explains David Bruce, Engineering Manager for General Industry & Automotive Segment for FANUC America (Rochester Hills, Michigan). “But the biggest buzz word right now is ‘3D.’”

FANUC’s iRVision machine vision package has long been a staple of the automotive industry, especially in the U.S. and Asia. In recent years, FANUC introduced a fully integrated 3D Area Sensor vision product that uses two cameras and structured light to generate 3D point clouds of the camera’s field of view.

“Today, one of the last manual processes on the automotive manufacturing line involves part feeding, getting parts out of bins, and so on,” Bruce says. “Our 3D Area Sensor isn’t just a hardware solution. It includes a lot of software developed just for bin picking applications.”

In some of the most advanced material handling work cells, one robot with a 3D sensor picks the parts out of the bin and places them on a table so that a second robot with a 2D vision system can easily pick up the part and feed another machine, conveyor, or other process. Bruce also notes that end-of-arm tooling is one of the toughest challenges for bin picking applications; magnets and vacuum work best.

“By having the vision system controller directly integrated with the robot instead of using a PC, the engineers can focus on the mechanical engineering challenges and developing a bin picking system with buffering to make sure acceptable cycle times are achieved,” Bruce says.
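The buffering idea can be illustrated with a toy simulation: a slower 3D bin-picking robot fills a small table buffer that a faster 2D-vision feeding robot drains, so the downstream machine rarely starves. All timing values are invented for illustration.

```python
# Hypothetical two-robot cell with a table acting as a FIFO buffer.
# One tick = one minute; pick/feed times and buffer size are assumptions.
from collections import deque

def simulate(minutes, pick_time=3, feed_time=2, buffer_size=4):
    """Count parts fed to the machine over `minutes` of operation."""
    buffer = deque()
    fed = 0
    next_pick = pick_time   # when the bin-picking robot finishes a part
    next_feed = feed_time   # when the feeding robot is ready again
    for t in range(1, minutes + 1):
        if t >= next_pick and len(buffer) < buffer_size:
            buffer.append(t)          # place a part on the table
            next_pick = t + pick_time
        if t >= next_feed and buffer:
            buffer.popleft()          # feed the machine
            fed += 1
            next_feed = t + feed_time
    return fed
```

With these numbers, throughput is limited by the slower picking robot, which is exactly the cycle-time question the buffer design is meant to answer.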

Tighter integration between vision system and robot also makes it easier for end users to train the FANUC-based work station. “The way you set up iRVision has gotten a lot simpler,” says Bruce. “You can take images of the robot in 7, 8, or 10 different poses and the system will guide you through programming. Or if you’re looking at a large part that won’t fit in the field of view — not uncommon in automotive manufacturing — you can take images from several small fields of view of the part, and the robot controller can determine the full 3D location of the part.”
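The idea of combining several small fields of view to locate one large part can be sketched in a simplified 2D analogue: from two known part features observed in world coordinates, recover the part's translation and rotation. This is an illustrative geometry exercise, not FANUC's actual algorithm, which works in full 3D.

```python
# Simplified 2D analogue of multi-view part location: each small field
# of view sees one known feature of a large part; from two observed
# features we recover the part's planar position and rotation.
import math

def locate(part_feats, observed):
    """part_feats/observed: two (x, y) points in part/world frames.
    Returns (tx, ty, theta) mapping the part frame into the world frame."""
    (p1, p2), (w1, w2) = part_feats, observed
    ang_p = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    ang_w = math.atan2(w2[1] - w1[1], w2[0] - w1[0])
    theta = ang_w - ang_p                      # part rotation
    c, s = math.cos(theta), math.sin(theta)
    tx = w1[0] - (c * p1[0] - s * p1[1])       # translation that maps
    ty = w1[1] - (s * p1[0] + c * p1[1])       # feature 1 onto its image
    return tx, ty, theta
```

A part whose features sit at (0, 0) and (1, 0) in its own frame, observed at (2, 3) and (2, 4), is recovered as translated by (2, 3) and rotated 90 degrees.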

3D vision is also enhancing the latest class of assembly robots: lightweight robots, also called collaborative robots because of their low-force operation and ability to work next to humans with minimal safety systems.

While the automotive industry is boosting the number of collaborative vision work cells, “right now the killer application is kitting,” says Bruce. Kitting is the process of collecting parts into a bin for a specific product configuration or assembly.

The Path to Full Traceability

Any kitting or assembly task is only as good as the quality and accuracy of the incoming parts, which is why track-and-trace vision applications are so important to the automotive industry. “Over the last 31 years, the industry average was 1,115 car recalls per every 1,000 sold, according to the National Highway Traffic Safety Administration,” says Adam Mull, Business Development Manager Machine Vision/Laser Marking for Datalogic (Telford, Pennsylvania). The recall rate can exceed 1,000 because a single car can have more than one recall.

“While we’re seeing applications across the board from inspection to vision-guided robotics [VGR], we’re definitely seeing a trend toward full traceability,” adds Bradley Weber, Application Engineering Leader and Industry Product Specialist – Manufacturing Industry at Datalogic. “There’s always been traceability of the most critical components of the car, but now it’s going everywhere. Every part is being laser marked or peened, read, and tracked. That’s part of what has opened a lot of doors for Datalogic because we have many types of laser markers, vision systems to verify those marks, and then both handheld and fixed barcode readers to read and track those marks all through the process.”
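A toy sketch of the mark-read-track idea: every station that reads a part's marked serial appends an event to that part's history, so a recall investigation can replay the part's path through the process. The station names and serials here are invented.

```python
# Hypothetical traceability log: each scan of a laser-marked serial
# appends a (station, result) event to that part's history.
trace_log = {}  # serial -> list of (station, result) events

def record_scan(serial, station, result):
    """Called by every station that reads the part's marked code."""
    trace_log.setdefault(serial, []).append((station, result))

def history(serial):
    """Full process history for one part, e.g. for a recall investigation."""
    return trace_log.get(serial, [])
```

In a real plant this log would live in the manufacturing execution system rather than in memory, but the principle is the same: every mark that is read becomes a queryable event.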

According to Mull, while one manufacturing plant used to manufacture only one type of vehicle, today each plant either makes parts for multiple vehicles or assembles different vehicles.

Consumer demand is driving the need for more automation in the factory. “When you go to a dealership, there are so many more options than there were years ago, from the color of the dashboard to the onboard electronics,” Weber says. “With all those choices, OEMs need a strong manufacturing execution system that is being fed data from every part along the manufacturing process.”

Machine-readable codes on more and more components also open up the possibility of reworking problem parts instead of scrapping them.

As automation and machine-to-machine communication continue to blur the lines between robot, vision, marking system, and production equipment, the benefit to the manufacturer is greater ease of use, leading to greater machine vision adoption.

Advanced vision software such as Matrox Design Assistant is helping new adopters quickly set up ID, VGR, and inspection routines using simple flow-chart programming and automated sample image acquisition, according to Fabio Perelli, Product Manager at Matrox Imaging (Dorval, Quebec).

Better automation integration is also helping to educate engineers, opening up even more opportunities for vision and other automation solutions.

“In automotive, engineers often work in bubbles,” says Datalogic’s Mull. “Everyone’s running with their own part of the project. But as one system works more closely with another system, the team members start to cross-pollinate, opening up the opportunity to teach the engineer who only knows about smart cameras how to use embedded controllers and advanced vision or marking systems. And since our systems all use the same software environment, it makes it seamless for an engineer to move from smart cameras to embedded controllers to other Datalogic solutions.”

>> This article by Winn Hardin was re-posted from AIA Vision Online (11/21/17)

Rethink Adds KPI Collection, Extra Cameras to Sawyer

As collaborative robots, or co-bots, become more advanced, so do their communications. Indeed, Rethink Robotics recently announced it has upgraded Sawyer so that the collaborative robot can now communicate its production key performance indicators (KPIs) and other metrics.

The metrics can include part counts, robot speed, or bad-part tallies. Rethink has released Intera 5.2, an expansion of the company’s Intera software platform. The upgrade provides production data in real time during the manufacturing process. This is data that is typically collected via a third-party IoT system, if it’s collected at all.

The new feature, Intera Insights, displays KPIs via a customizable dashboard on the robot’s on-board display, making it accessible to those on the factory floor. The same charts are also fed back to the Intera Studio platform, providing visibility to other members of the manufacturing team. The goal is to eliminate the need to invest in or create an outside data collection system.
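A minimal sketch of this kind of on-robot KPI collection might look like the following. The class and field names are assumptions for illustration, not the Intera API.

```python
# Hypothetical on-robot KPI collection: tally parts and cycle times
# during the run, then summarize for a dashboard display.
class KpiTracker:
    def __init__(self):
        self.good = 0
        self.bad = 0
        self.cycle_times = []

    def record_cycle(self, seconds, passed):
        """Called once per completed cycle with its duration and result."""
        self.cycle_times.append(seconds)
        if passed:
            self.good += 1
        else:
            self.bad += 1

    def summary(self):
        """The numbers a dashboard would chart: counts and average cycle."""
        n = len(self.cycle_times)
        return {
            "part_count": n,
            "good": self.good,
            "bad": self.bad,
            "avg_cycle_s": sum(self.cycle_times) / n if n else 0.0,
        }
```

These are exactly the figures Lawton cites customers asking for: average cycle time, part count, and good-versus-bad tallies.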

The advances in the Intera platform were prompted by customers in the field. “We ask for feedback from our customers. One of the biggest areas of feedback involved extracting data, knowledge, and KPIs about how the robot’s performing,” Jim Lawton, COO of Rethink Robotics, told Design News. “They want to know the average cycle time, the part count, and how many good versus bad parts were made. The robot knows what’s going on. It was just a matter of how to get access to the data.”

The goal in creating data collection for Sawyer is to help users begin to move into smart manufacturing without necessarily investing in new equipment. “There has been a lot of talk about the IoT and Industry 4.0. People see the value there, but they are wondering what it will look like and how it works in the world of robots. We’re showing what the end game looks like,” said Lawton. “A lot of customers don’t have the ability to get at their own data. Now they have a robot that knows how much it is doing and knows what it has done. Plus, the robot doesn’t make mistakes when it counts.”

Adding Cameras to the Robot

The Intera 5.2 release also includes additions to Sawyer’s vision capabilities. In addition to the embedded cameras that come standard with Sawyer, manufacturers can now integrate external cameras. This was designed to allow manufacturers to optimize cycle time with improved vision or to leverage in-house vision systems on the robot.

The ability to add cameras to Sawyer and integrate those cameras into Sawyer’s overall functioning was also a suggestion from customers in the field. “The second big area of feedback from customers involved external vision. We have a camera in the wrist and one in the head. Some customers wanted external cameras as well as the internal one. There are circumstances where that would be beneficial. But how do we get a third-party camera to work?” said Lawton. “We designed a feature to make an added camera part of Sawyer, so it would be easy to use and the robot would understand where it is.”

>> This article by Rob Spiegel was re-posted from Design News (November 20, 2017)

Rise Of The Machines — Autonomous Robots And The Supply Chain Of The Future

When considering the future of the supply chain, organizations often face apprehension and uncertainty about how to implement disruptive technologies or pursue new operating models in manufacturing and distribution. However, as emerging technologies such as autonomous robots and automated material handling become cheaper, more capable, and more pervasive within the industry, supply chain managers must begin exploring new strategic approaches to capitalize on them and develop competitive advantages. Consistent visibility, faster decisions, real-time responses, heightened agility, and more precise and predictive forecasting will become the benchmarks of a high-performing supply chain. While introducing new processes and operational systems and restructuring the role of human workers may seem arduous, the opportunities and benefits presented by innovative technologies such as autonomous robots stand to revolutionize the supply chain.

The spectrum of autonomous robots ranges from sophisticated machinery executing physical procedures to logic or AI-based software and data analytics. In either case, autonomous robots — if implemented in the right places — have the potential to greatly improve operations and vastly increase the productivity of supply chains. In addition to bolstering an organization’s bottom line and securing industry advantages, robots can also help streamline decision making processes by generating real-time data and insights, while simultaneously optimizing the strategic roles and experiences of human employees. This close collaboration between robots and humans will become the new industry standard in tackling supply chain challenges. Armed with deep insights from autonomous robots (physical or software-based), human employees can quickly and deliberately remedy issues or handle increasing volumes at the same or lower-unit cost.

To transform the supply chain and meet the compounding demands of buyers and consumers, industry professionals should update systems across manufacturing facilities and consider rethinking traditional supply chain controls or risk falling behind. Organizations that develop comprehensive plans and execute a methodical approach toward introducing these technologies can reap the economic benefits of being pioneers or early adopters in the supply chain of the future.

Improving Operations and Increasing Productivity

Automated robots have been around since the 1980s, but the products available today greatly exceed the capabilities of yesterday’s technology and are becoming more widely available across industries. While perceptions of automated robots often conjure visions of machines gliding quickly along factory floors, the most transformative operational impacts are derived from robotic process automation (RPA) or robots powered by rules-based software.

Enabled by logical algorithms, these software-driven tools can apply judgments and execute decisions to facilitate more streamlined processes. By collecting and analyzing larger quantities of data to serve end-to-end supply network needs far faster and more efficiently than humans can, RPA can empower manufacturers and distributors to pursue more targeted and productive courses of action. From this vantage point, routine and often time-consuming tasks related to purchasing and inquiries can be managed by robots, allowing human workers to focus on higher-skilled work.

What’s more, due to advancements in artificial intelligence and machine learning, modern RPA tools can go beyond discrete analytics or coded rules-based actions and exceed the constraints of predetermined computations. For example, these technologies can inform planning and forecasting by analyzing sales histories and identifying concrete trends, illustrating a more accurate interpretation of the future. By interpreting an increasingly holistic picture of an organization’s data, RPA is positioned to deliver more valuable insights to guide manufacturing and distribution decisions.
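One minimal version of trend-based forecasting, for illustration only, is a least-squares line fit over past sales periods extrapolated one period ahead. Real RPA forecasting tools use far richer models; this sketch only shows the "identify a concrete trend, project it forward" idea.

```python
# Illustrative trend forecast: fit a least-squares line to a sales
# history (one number per period) and extrapolate one period ahead.
def forecast_next(sales):
    n = len(sales)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(sales) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sales))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var                      # units sold gained per period
    return mean_y + slope * (n - mean_x)   # value of the line at period n
```

A history growing by two units per period, such as `[10, 12, 14, 16]`, is projected to 18 for the next period.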

These technologies can also play a pivotal role in maintaining quality through real-time data collection — identifying acute trends that anticipate faults in quality standards before they endanger productivity or product reliability. While the demands on supply chains increase, the ability to isolate issues and correct errors near-instantaneously will be crucial to reinforcing buyer and consumer confidence. Speed and consistency in these areas can position supply chains to meet the challenges of an increasingly on-demand market.
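A classic, minimal way to flag such trends is a Shewhart-style control chart: compute mean plus-or-minus three-sigma limits from baseline measurements and flag anything outside them. This is a generic statistical illustration, not any specific vendor's method.

```python
# Shewhart-style control limits: a measurement drifting outside
# mean +/- 3 sigma of the baseline is flagged before parts go out
# of spec, supporting near-instantaneous correction.
def control_limits(baseline):
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / n
    sigma = var ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def in_control(value, limits):
    lo, hi = limits
    return lo <= value <= hi
```

In practice the limits would be recomputed from a rolling window of real-time sensor data rather than a fixed baseline list.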

Reducing Risk and Optimizing Employee Experience

Stretching beyond software and data analytics, physical robots can also minimize potentially hazardous work for employees. Camera technology, along with more deft motor controls, positions physical robots to excel in autonomous materials handling and distribution processes. Capable of operating in a range of structured environments, these new innovations allow humans to work side by side with robots, training the machines to complete robust, labor-intensive tasks. Beyond reducing the cost of labor, the deployment of physical robots also encourages employees to further refine complex skill sets and heighten strategic capabilities, ultimately lending greater value to the organization.

For instance, automated machines have become markedly beneficial to the automotive industry, where multiple vehicle components must be welded and assembled in rapid succession with high precision. Within the automotive industry, robots can operate with near-complete autonomy, switching from one task to the next on the assembly line. However, as robots take on more physical, labor-oriented tasks, human guidance in completing more refined responsibilities still serves as a cornerstone of manufacturing.

Core Considerations for Moving Forward

The race to innovate the supply chain of the future and shift toward new technologies is already underway. To harness the benefits of RPA and robotic machinery, organizations should engage their leaders to focus on developing and activating strategic plans.

The first step in constructing a strong integration strategy is to consider the challenges and needs of an operation. As every organization is different, each supply chain produces a unique set of demands. An effective strategy works to recognize challenges and determine opportunities where automated robotics can benefit the supply chain network.

Integrating robotics, whether software-based or physical, is an investment. As such, organizations often undergo a trial period to determine if scaled investment in certain areas meets or exceeds the demands of the supply chain. With any capital investment, the organization strives for — and expects — a strong return. By introducing RPA or robotic machinery in planned phases, business leaders can avoid any unforeseen obstacles and more accurately determine strategic advantages.

Finally, structuring a strong team to support this technology is critical for success. Cultivating a group with deep knowledge of automated robots ensures a supply chain’s continued productivity and an organization’s increased profitability. As the supply chain of the future transforms the industry through robotics, people remain a pillar of strategic importance.

>> This article by Adam Mussomeli & Joseph Fitzgerald, Deloitte Consulting LLP, was posted in Manufacturing Business Technology, 11/08/2017.

Cobot Vision System Doubles Machine Shop Productivity

During its busy season, a small Mississippi machine shop turned to a lights-out solution from Robotiq to meet demand without having to hire and train a temporary CNC operator.

Attached exclusively to Universal Robots arms, the Plug + Play Wrist Camera is intuitive enough that anyone in the plant or shop can teach it machine tending, assembly, and pick-and-place tasks.

Mississippi’s WALT Machine Inc. specializes in high-precision optical work for scientific camera assemblies. Each spring WALT Machine must meet a massive challenge: Deliver around 6,000 camera housings in 2 months’ time, with only one CNC machine.

Delivering this massive order on time would normally mean that WALT Machine’s president, Tommy Caughey, had to hire a full-time CNC machine operator. By the time the new employee was fully trained and up to speed, the parts would be nearly delivered and the operator’s services would no longer be needed.

To deal with these short-term rises in production volume, Tommy Caughey started looking into a robot-based solution a few years back. “I saw Universal Robots at the IMTS trade show maybe 4 or 6 years ago and I found it interesting: a robot that does not require any extra stuff, like jigs for example. I followed up throughout the years and thought that’s where we needed to go one day.”

One problem remained: he knew he needed either a vision system or a conveyor to pick up the raw parts from the table. “Everyone told me that it’s a very difficult process, that you need to have a person in your shop to do it,” Caughey recalls.

A big plus of using a robot is its ability to continue production during unattended hours in the shop. This, of course, increases productivity, since twice as many parts can be made each day.

In June 2016, Robotiq released the Plug + Play Wrist Camera, made exclusively for Universal Robots. For the entrepreneur, this was a game changer. “I didn’t need a vision expert anymore; I could do it myself. I bought the camera, and it’s super simple. It takes about 10 minutes and your part is taught.”

It takes 30 to 45 minutes to machine one side of those camera housings. Working only 8 hours a day, it would take several weeks to produce the order with one machine, and WALT could not afford that bottleneck. “So being able to run 15-20 hours a day and not having to hire anyone else is a major plus for us,” says Caughey.

The economics of a cobot vision system make a lot of cents (and dollars). WALT Machine Inc. has doubled its production by cutting machine idle time, allowing the company to stay on track without spending time and money training and employing a worker it would only need for two months.

Since WALT Machine Inc. bought its robot, named Arthur, twice as many parts are being machined every day. The CNC machining still takes the same time; it’s the number of operational hours that makes all the difference. The company cut production time roughly in half by eliminating much of the machine’s idle time.

The contract in question must be completed within two months, so there is a production rush to deliver all the parts on time. A huge benefit of integrating a robot into the production line is that the machine can now run non-stop during this two-month period. Tommy does not need to worry about training, extra salary, or employee retention between production rushes.

Tommy Caughey also worried about quality and consistency during unattended hours. “It’s just about letting it go and accept the fact that it’s gonna run for an extra 4-6 hours, that you’re gonna go home and nothing’s gonna break. And it’s actually the case!”

Arthur’s arrival among the team also allows long-time machinist Matthew Niemeyer to improve his skillset on the production floor. “First, I got to learn how to program the robot,” Niemeyer explains.

After training with the object teaching interface, the operator can walk away, or even go home, while the robot continues to stay productive.

“Then, you have the robot loading the machine, but you’re still doing all the fine-tuning of it, such as the programming. But the remedial tasks of loading and unloading the machine are taken care of for you, so you don’t get worn out.”

When everything is running smoothly in the factory, Matthew is able to focus on his new role at WALT Machine. The robot’s arrival created an opportunity for him to move up to a sales representative role on the team. “We can get more and more business into the shop, which will lead to more machines, more robots and a promising overall growth.”

None of this would be possible without this first robot. Far from believing robots steal jobs, Tommy Caughey is convinced that in 10 years, every small shop like his will have at least one robot. For him, this change is happening for the same reason that so many changes happened before in other industries.

With both the UR3 and the UR5, the Wrist Camera’s focus range is 70 mm (2.76 in.) to ∞. (Robotiq)

“No one is yelling at a contractor for using an excavator instead of a hundred men with shovels,” he says. “I didn’t fire anyone to do this. It just changes where the work is. Instead of having guys sitting here just putting parts in and out of the machine, they can do more quality-related stuff. They can check parts, clean and package them, and even bring in more sales!”

And with a robot that doubles production capacity, business opportunities are greater and orders are delivered on time. Satisfied with this first integration project, WALT Machine sees these new business opportunities as a way to scale up the robotics capabilities of the shop.

Robotic automation is intimidating for someone who has never touched a robot before. Tommy Caughey is one of those entrepreneurs who started from scratch with his first robot. “I’ve programmed CNC machines and G-code for 10 years, done XYZ positional and spatial stuff, but never explored robotics. When I got the robot, I did a little reading and it was pretty simple. There is a lot of help on the UR and Robotiq websites, with programming for example.”

Robotiq Wrist Camera

As for the vision system, Caughey was really impressed by the Robotiq Wrist Camera’s teaching methods. “Either you take your part, set it on the surface where you want to pick it, and take four snapshots of it in four different orientations. Or, if it’s something simple like a rectangular or circular blank, you just set the dimensions of what you are picking and it knows.”

The next step is to put 15 to 20 of the same unmachined parts on the table within the camera’s field of view. The robot then rotates over the table and takes one snapshot to see all the parts. For more accurate picking, it moves closer and takes another snapshot of the part it is about to pick. The robot then places the part into the vise in the CNC machine and sends a signal to the Haas CNC machine to press the start button.
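The pick cycle just described can be sketched as a sequence of discrete steps. The step names and the single CNC start signal are simplifications of the process Caughey describes, invented for illustration.

```python
# Hypothetical step sequence for the wrist-camera pick cycle:
# one wide snapshot to find all parts, then per part a close-up,
# a pick, a vise placement, and a start signal to the CNC machine.
def pick_cycle(parts_seen):
    yield "overview_snapshot"           # one wide shot to find all parts
    for part in parts_seen:
        yield f"approach_{part}"
        yield "closeup_snapshot"        # refine the pick location
        yield f"pick_{part}"
        yield "place_in_vice"
        yield "signal_cnc_start"        # tell the Haas machine to run
```

Because the camera locates parts anywhere on the table, no waypoints, conveyor, or special fixturing appear in the sequence, which is exactly the simplicity Caughey highlights.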

“It is so easy,” Caughey adds. “We don’t even have to teach waypoints because it only needs to look on this table for parts. We don’t need a conveyor or any special fixturing. If you change parts, you just tell Arthur that you’re looking for a different part and change the 2-Finger Gripper’s closing setup; it’s pretty simple. There isn’t a lot of changeover, so that’s why I like the camera-gripper combo.”

>> Read more by Robotiq, New Equipment Digest, November 2, 2017

Futureworld: The IoT-driven ‘Vertical Farm’

(Source: AeroFarms)

Imagine a farm without herbicides, insecticides or pesticides; a farm that cuts water consumption by 95 percent; that uses no fertilizer and thus generates no polluting run-off; that has a dozen crop cycles per year instead of the usual three, making it hundreds of times more productive than conventional farms; a farm that can continually experiment with and refine the taste and texture of its crops; a farm without sun or soil. That’s right, a farm where the crops don’t need sunlight to grow and don’t grow from the ground.

Such a farm – an “indoor vertical farm” – exists. It’s located in that grittiest, most intensely urban of inner cities, Newark, NJ, in a former industrial warehouse. Visiting there, you go from a potholed, chain-linked back street into a brightly lit, clean (visitors wear sanitary gowns, gloves, masks, and head coverings), 70,000-square-foot facility. Walking in, you get that rare, uncanny sense of having stepped into the future. Way into the future.

The farm consists of large, flat platforms stacked 10 levels high (“grow towers”). Leafy greens and herbs thrive in seeming contentment under long rows of LED lights, irrigated with recycled water sprayed onto the exposed roots suspended below the crops, all under the watchful “eye” of IoT sensors that, with machine learning algorithms, analyze the large volumes of continually harvested (sorry!) crop data.

AeroFarms has been developing sustainable growing systems since 2004, and has adopted a data-driven technology strategy that’s a showcase for the IoT and deep learning capabilities of Dell Technologies (see below).

By building farms in major population centers and near major distribution routes (the Newark farm is a mile from the headquarters of one of the largest supermarket chains in the New York City area), the company radically shortens supply chains and lowers the energy required to transport food from “farm to fork” while also decreasing spoilage. It enables local farming at commercial scale year-round, regardless of the season. It tracks and monitors its leafy greens from seed to package so that the source of food, if some becomes tainted, can be quickly identified. Taken together, AeroFarms claims to achieve 390 times greater productivity than a conventional field farm while using 5 percent as much water.

“We are as much a capabilities company as we are farmers, utilizing science and technology to achieve our vision of totally controlled agriculture,” said David Rosenberg, AeroFarms co-founder and CEO. The company’s vision, he said, is to understand the “symbiotic relationships” among biology, environment and technology, to leverage science and engineering in ways that drive more sustainable, higher-yield food production.

The IoT comes into play via AeroFarms’ Connected Food Safety System, which tracks the “growth story” of its products, analyzing more than 130,000 data points per harvest. The growth cycle begins when seeds are germinated on a growing medium that looks like cheesecloth, receiving a measured amount of moisture and nutrients misted directly onto their roots that dangle in a chamber below the growing cloth, along with a spectrum of LED lighting calculated to match the plants’ needs throughout a 12- to 16-day growing cycle.

Rosenberg said AeroFarms decided to partner with Dell because it “offers a comprehensive infrastructure portfolio that spans our IT needs, from edge gateways and rugged tablets to machine learning systems and network gear.”

At the edge, sensors and cameras in the aeroponic growing system gather data on everything from moisture and nutrients to light and oxygen, and then send operating and growing environment data to Dell IoT Edge Gateways for processing. Information is then relayed over the farm network to Dell Latitude Rugged Tablets and a local server cluster, making it available to AeroFarms workers for monitoring and analysis. AeroFarms’ precision growing algorithms allow just-in-time growing for its selling partners. Once the plants reach maturity, they are harvested and packaged onsite and then distributed to local grocery stores.
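The edge-gateway flow described above can be sketched simply: raw per-sensor samples are summarized locally at the gateway, and only the compact summary is relayed onward over the farm network. The field names, sensor values, and per-tower grouping here are illustrative assumptions, not AeroFarms' actual data model.

```python
# Hedged sketch of edge-side aggregation: the gateway condenses raw
# sensor samples into one summary record per grow tower before relaying.
from statistics import mean

def gateway_summarize(readings):
    """Aggregate raw per-sensor samples into one record per grow tower."""
    summary = {}
    for tower, samples in readings.items():
        summary[tower] = {
            "moisture_avg": round(mean(s["moisture"] for s in samples), 2),
            "o2_min": min(s["o2"] for s in samples),
            "samples": len(samples),
        }
    return summary

# Hypothetical raw samples from one tower's sensors.
raw = {
    "tower_3": [
        {"moisture": 0.61, "o2": 20.8},
        {"moisture": 0.63, "o2": 20.5},
    ]
}
summary = gateway_summarize(raw)
print(summary)
```

Summarizing at the gateway rather than streaming every sample is the standard edge pattern: it cuts network load while still surfacing the extremes (like a minimum oxygen reading) that operators need to see.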

AeroFarms is developing a machine learning capability that identifies patterns based on analysis of images and a combination of environmental, machine and historical growing data.

The company said it may expand its use of Microsoft Azure to conduct more analytics in the cloud while leveraging geo-redundant data backup, and to collect disparate data from its multiple vertical farms and multiple data sources. That includes information interpreted in historical context, leveraging data previously collected and analyzed over time to improve taste, texture, color, nutrition, and yield.

AeroFarms said it also is working on real-time quality control through multi-spectral imaging from its grow trays. Cameras with integrated structured light scanners send data to Dell Edge Gateways, which create 3D topological images of each grow tray. When an anomaly is detected, the gateway sends an alert to operators using Dell Latitude Rugged Tablets on the farm floor.
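A minimal sketch of the grow-tray anomaly check described above: compare the 3D height map of a tray against the expected canopy height and flag cells that deviate beyond a tolerance. The expected height and tolerance values are assumptions for illustration; the real system's parameters are not public.

```python
# Illustrative anomaly detection over a 3D topological image of a tray,
# represented here as a 2D grid of canopy heights in millimeters.

def find_anomalies(height_map, expected_mm, tolerance_mm):
    """Return (row, col) cells whose canopy height deviates beyond tolerance."""
    alerts = []
    for r, row in enumerate(height_map):
        for c, h in enumerate(row):
            if abs(h - expected_mm) > tolerance_mm:
                alerts.append((r, c))
    return alerts

tray = [
    [120, 118, 122],
    [119,  60, 121],   # one stunted plant
]
alerts = find_anomalies(tray, expected_mm=120, tolerance_mm=10)
print(alerts)  # [(1, 1)]
```

In the deployed system, a non-empty alert list would be what triggers the gateway's notification to the rugged tablets on the farm floor.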

“For me, the journey started with an appreciation of some of the macro-challenges of the world, starting with water,” said Rosenberg. “Seventy percent of our fresh water goes to agriculture. Seventy percent of our fresh water contamination comes from agriculture.”

Land is another problem.

“By U.N. estimates, we need to produce 50 percent more food by 2050, and we’ve lost 30 percent of our arable farm land in the last 40 years,” he said. “Looking at all those macro-issues, we need a new way to feed our planet.”

>> Read more by Doug Black, EnterpriseTech, October 26, 2017

Vision Systems Drive Auto Industry Toward Full Autonomy

The race is on to make self-driving vehicles ready for the road. More than 2.7 million passenger cars and commercial vehicles equipped with partial automation are already in operation, enabled by a global automotive sensors market estimated to reach $25.56 billion by 2021. Of those sensors, cameras will see the largest growth, with nearly 400 million units expected by 2030.

Estimates about the arrival of fully autonomous vehicles vary depending on whom you ask. The research firm BCG expects that vehicles designated by SAE International as Level 4 high automation — in which the car makes decisions without the need for human intervention — will appear in the next five years.

Meanwhile, most automotive manufacturers plan to make autonomous driving technology standard in their models within the next 2 to 15 years. Tesla, whose admired and admonished Autopilot system features eight cameras that provide 360 degrees of visibility up to 250 meters, hopes to reach Level 5 full autonomy in 2019.

Carmakers are building upon their automated driver-assistance systems, which include functions such as self-parking and blind-spot monitoring, as the foundation for developing self-driving cars. The core sensors that facilitate automated driving — camera, radar, lidar, and ultrasound — are well developed but keep undergoing improvements in size, cost, and operating distance.

The industry still must overcome other technological challenges, however. These include mastering the deep learning algorithms that help cars navigate the unpredictable conditions of public roadways and handling the heavy processing demands of the generated data. To help them carve a path toward total autonomy in driving, automakers are turning to vision software companies as an important player in the marketplace.

Algorithms Get Smarter

The machine vision industry is no stranger to the outdoor environment, with years of experience developing hardware and software for intelligent transportation systems, automatic license plate readers, and border security applications. While such applications require sophisticated software that accounts for uncontrollable factors like fog and sun glare, self-driving vehicles encounter and process many more variables that differ in complexity and variety.

“Autonomous driving applications have little tolerance for error, so the algorithms must be robust,” says Jeff Bier, founder of Embedded Vision Alliance, an industry partnership focused on helping companies incorporate computer vision into all types of systems. “To write an algorithm that tells the difference between a person and tree, despite the range of variation in shapes, sizes, and lighting, with extremely high accuracy can be very difficult.”

But algorithms have reached a point where, on average, “they’re at least as good as humans at detecting important things,” Bier says. “This key advance has enabled the deployment of vision into vehicles.”

AImotive (Budapest, Hungary) is one software company bringing deep learning algorithms to fully autonomous vehicles. Its hardware-agnostic aiDrive platform uses neural networks to make decisions in any type of weather or driving condition. aiDrive comprises four engines. Recognition Engine uses camera images as the primary input. Location Engine supplements conventional map data with 3D landmark information, while Motion Engine takes the positioning and navigation output from Location Engine to predict movement patterns of surroundings. Finally, Control Engine controls the vehicle through low-level actuator commands such as steering and braking.

For an automated vehicle to make critical decisions based on massive volumes of real-time data coming from multiple sensors, processors have had to become more powerful computationally while consuming less operational power. Software suppliers in this space are developing specialized processor architectures “that easily yield factors of 10 to 100 times better efficiency to enable these complex algorithms to fit within the cost and power envelope of the application,” Bier says. “Just a few years ago, this degree of computational performance would have been considered supercomputer level.”

To make safe, accurate decisions, sensors need to process approximately 1 GB of data per second, according to Intel. Waymo, Google’s self-driving car project, is using the chipmaker’s technology in its driverless, camera-equipped Chrysler Pacifica minivans, which are currently shuttling passengers around Phoenix as part of a pilot project.

However, the industry still needs to determine where the decision-making should occur. “In our discussions with manufacturers, there are two trains of thought as to what these systems will look like,” says Ed Goffin, Marketing Manager for Pleora Technologies (Kanata, Ontario). “One approach is analyzing the data and making a decision at the smart camera or sensor level, and the other is feeding that data back over a high-speed, low-latency network to a centralized processing system.”

Pleora’s video interface products already play in the latter space, particularly in image-based driver systems for military vehicles. “In a military situational awareness system, real-time high-bandwidth video is delivered from cameras and sensors to a central processor, where it is analyzed and then distributed to the driver or crew so they can take action or make decisions,” Goffin says. “Designers need to keep that processing intelligence protected inside the vehicle. Because cameras can be easily knocked off the vehicle or covered in dust or mud, they need to be easily replaceable in the field without interrupting the human decision-making process.”

Off the Beaten Path

While the self-driving passenger car dominates media coverage, other autonomous vehicle technology is quietly making a mark away from the highway. In September 2016, Volvo began testing its fully autonomous FMX truck 1,320 meters underground in a Swedish mine. Six sensors, including a camera, continuously monitor the vehicle’s surroundings, allowing it to avoid obstacles while navigating rough terrain within narrow tunnels.

Meanwhile, vision-guided vehicles (VGVs) from Seegrid (Pittsburgh, Pennsylvania) have logged more than 758,000 production miles in warehouses and factories. Unlike traditional automated guided vehicles — which rely on lasers, wires, magnets, or floor tape to operate — Seegrid VGVs use multiple on-vehicle stereo cameras and vision software to capture existing facility infrastructure as their means of location identification for navigation.

As Bier of Embedded Vision Alliance points out, even the Roomba robotic vacuum cleaner — equipped with a camera and image processing software — falls under the category of autonomous vehicles.

Whether operating in the factory or on the freeway, self-driving vehicles promise to transport goods and people in a safe, efficient manner. Debate persists over when fully autonomous cars will hit the road in the U.S. Even as the industry overcomes technical challenges, governmental safety regulations and customer acceptance will affect the timing of autonomous vehicles’ arrival.

In the meantime, automakers and tech companies continue to pour billions of dollars into research and development. Each week seems to bring a new announcement, acquisition, or milestone in the world of self-driving vehicles. And vision companies will be there for the journey.

>> Re-posted from Vision Online, 10/20/17

Warehouse Robots Smarten Up

Self-driving cars have certainly reaped the rewards from the advances made in sensors, processing power, and artificial intelligence, but they aren’t the sole beneficiaries. One needn’t look any further than to the autonomous collaborative robots (cobots) currently invading the warehouses and stores in which they will work in close quarters with people.

1. Aethon’s latest TUG is festooned with sensors and can fit under carts to tow them to desired locations.

Aethon’s TUG (Fig. 1) is the latest in a line of autonomous robots designed for environments like warehouses. It has more sensors on it than older platforms, which is indicative of the falling price of sensors, improvements in sensor integration, and use of artificial intelligence to process the additional information. This allows such robots to build a better model of the surrounding environment, which means they operate more safely, since they can better recognize people and objects, and perform their chores more effectively, because they often need to interact with those objects.

Aethon’s TUG series spans a range of capabilities, up to versions that can haul as much as 1,200 lbs. These typically find homes in industrial and manufacturing environments. Smaller TUGs have been set up in hospitals to deliver medicine, meals, and materials. TUGs move throughout a hospital, calling elevators and opening doors via network connections. As with warehouse robots, they operate around the clock doing jobs that allow others to do theirs.

2. The RL350 robotic lifter from Vecna Robotics rises under a cart and lifts 350 kg off the ground. It then delivers the contents to the desired location, dropping down and leaving the cart.

Vecna Robotics has lightweight and heavy-duty robots, too. Its RL350 robotic lifter can hoist 350 kg, or more than 770 lbs (Fig. 2). It can also adjust the payload height to work with other pieces of material-handling equipment, like conveyor belts. It can be used in applications such as fulfillment operations or lineside supply. The robot has a top speed of 2 m/s, and can run for eight hours before seeking out a charging station. It is ANSI/ITSDF B56.5 compliant and ISO Class D ready. It uses LIDAR and ultrasonic sensors like many of the other robots in this class.


3. Fetch Robotics’ VirtualConveyor targets warehouse applications such as DHL’s distribution center.

Fetch Robotics has a range of products, from robotic arms for research to its datasurvey inventory robot. It also offers the VirtualConveyor (Fig. 3), which comes in a number of different sizes to address different weight configurations. The Freight500 can move up to 500 kg, while the Freight1500 handles up to 1500 kg. They run up to nine hours on a charge, and incorporate LIDAR and 3D cameras on the front and rear. As with most warehouse robots, Fetch Robotics delivers them with its FetchCore Management software.

4. I Am Robotics put a robotic arm on its I Am Swift platform. The suction grip is designed for grabbing lightweight objects that would be typical in many warehouse pick-and-place environments.

I Am Robotics includes a robotic arm on its I Am Swift platform (Fig. 4). It can run for more than 10 hours picking and placing small objects using its suction grip. The typical boxes or bottles found on store shelves are fair game. The robot is designed to work with the I Am SwiftLink software.

The I Am Flash 3D scanner is used to teach the system about objects that will be manipulated. It records the barcode, object dimensions, and weight after an object is placed in its scanning area. The I Am Swift robot can then determine what objects it sees on a shelf or in its basket and move them accordingly.
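The teach-then-recognize idea described above can be sketched as a small object registry: the scanner records each object's barcode, dimensions, and weight, and the robot later matches what it sees against the registry. The data, barcodes, and matching tolerance are made up for illustration; the actual I Am Robotics data model is not public.

```python
# Hypothetical object registry in the spirit of the scan-and-teach
# workflow described above. Values are illustrative only.

registry = {}

def teach(barcode, dims_mm, weight_g):
    """Register a scanned object's barcode, dimensions, and weight."""
    registry[barcode] = {"dims": dims_mm, "weight": weight_g}

def identify(observed_dims, tol_mm=5):
    """Match observed dimensions against taught objects, within tolerance."""
    for barcode, obj in registry.items():
        if all(abs(o - d) <= tol_mm for o, d in zip(observed_dims, obj["dims"])):
            return barcode
    return None

teach("012345678905", dims_mm=(60, 60, 180), weight_g=450)   # e.g. a bottle
teach("112233445566", dims_mm=(200, 120, 80), weight_g=300)  # e.g. a box

print(identify((58, 62, 178)))  # close to the bottle's taught dimensions
```

A dimensional match like this is only a first cut; a real system would combine it with the barcode read and visual appearance before committing to a pick.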

5. Omnidirectional wheels on Stanley Robotics’ robot platform make it easy to move in tight quarters.

Stanley Robotics’ warehouse platform utilizes omnidirectional wheels to move in any direction from a standing start. This simplifies path planning and allows it to work in tight quarters.

6. Stan from Stanley Robotics handles valet parking by literally picking up a car and putting it in a parking spot.

The latest offering from Stanley Robotics was not able to fit on the show floor, though. Its Stan valet parking system (Fig. 6) turns any car into a self-driving car, at least for parking. It rolls under a typical car and then raises itself, lifting the car. It is the same technique many warehouse robots use to lift carts, applied to a much larger object.

7. Fellow Robots’ NAVii will function within a store, offering information to customers while performing inventory scanning.

Fellow Robots’ NAVii (Fig. 7) is designed to operate within a store, providing customers with information while performing inventory scanning. It can map out a store on its own and then track the stock using machine-learning techniques. NAVii will notify store managers when stock is low, or if there are price discrepancies.

NAVii can also interact with store customers using its display panels. On top of that, store employees can take advantage of this mobile interface to interact with the store’s computer network. As with most autonomous robots, it seeks out a charger when its battery runs low.

>> Read more by William Wong, New Equipment Digest, October 05, 2017


Machine Vision Techniques: Practical Ways to Improve Efficiency in Machine Vision Inspection

Machine vision efficiency is at the core of production efficiency. The speed of manufacturing is often dependent upon the speed of machine vision inspection. Creating efficiencies in machine vision can have wide-reaching benefits for manufacturing productivity.

Are you doing all you can to make machine vision as accurate and efficient as possible? The following are a few practical ways to improve the efficiency of your machine vision systems.

4 Practical Tips for Machine Vision Efficiency

The following tips are fundamental but quick fixes to improve machine vision efficiency if your inspection processes are slowing down or impacting production.

1. Lighting Techniques

Is your lighting technique maximizing contrast for the area of inspection? Between backlighting, bright field lighting, grazing, low angle linear array, and dark field lighting, there are often several different ways to illuminate the same application. The technique with the highest contrast will help improve the accuracy of image capture.

2. Light Wavelength and Frequency

Some parts, such as metallic products, may arrive at your facility and be inspected with a light coating of oil on them from storage. This will create noise in your images. Adjusting the frequency and wavelength of light you’re using can help combat this type of noise introduced into the inspection environment.

3. Trigger Range Function

Sometimes, the broader industrial environment will create electrical noise and cause false triggering of your inspection system, which could have devastating consequences on production, such as the software concluding that passable objects are failing inspection. Implementing a trigger range function, which controls for the length of the trigger signal, helps maintain the integrity of machine vision inspection systems.
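The trigger range idea amounts to a pulse-width gate: accept a trigger only when its pulse lasts within a configured window, so short electrical-noise spikes (and stuck-high signals) never fire the camera. The millisecond limits below are illustrative assumptions; real systems are tuned to their trigger hardware.

```python
# Minimal sketch of a trigger range function: gate triggers by pulse width.

def valid_trigger(pulse_ms, min_ms=5.0, max_ms=50.0):
    """True if the trigger pulse width falls inside the accepted range."""
    return min_ms <= pulse_ms <= max_ms

# Measured pulse widths in milliseconds: a noise spike, two real
# triggers, and a stuck-high signal.
pulses = [0.2, 12.0, 300.0, 8.5]
accepted = [p for p in pulses if valid_trigger(p)]
print(accepted)  # [12.0, 8.5]
```

Rejecting out-of-range pulses at the trigger input is far cheaper than letting a false trigger propagate into a spurious inspection failure downstream.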

4. Filtering

Industrial environments often introduce background and/or overhead lighting noise into the inspection area. Many times, this can be completely filtered out with the correct wavelength of lens filter, improving the accuracy and quality of image capture.

Machine vision works best in consistent, undisturbed environments, but this is rarely the case in an industrial setting. The tips mentioned above are some quick ways to improve the efficiency of machine vision inspection, which improves the efficiency of overall production.

>> Learn more at Vision Online, 09/12/2017

Machine Vision in the Food and Beverage Industry: Mostly Feast, But Some Famine

Food and beverage producers face continuous pressure to verify product quality, ensure safe and accurate packaging, and deliver consumables that are completely traceable through the supply chain. Machine vision has been helping the industry achieve these goals for the better part of two decades. But as government regulations tighten and consumers demand more transparency about the contents of their sustenance, adoption of vision and imaging systems in food inspection is on the rise — despite a few segments that show hesitance toward the technology.

Safety First

Even though the U.S. Food Safety Modernization Act (FSMA) took effect in 2011, some food processors and packagers are still finalizing solutions to meet the law’s product tracking and tracing requirements. “FSMA has forced the food industry to have better recording and reporting systems of their processes, so more food and beverage manufacturers are using 2D barcode reading to track and serialize data,” says Billy Evers, Global Account Manager for the food and beverage industry at Cognex (Natick, Massachusetts).

But a more pressing need is driving the adoption of both barcode and vision technologies in food processing facilities. “Right now as a society, we’re at an all-time high for food allergies,” Evers says. “There’s a heightened awareness in the industry about determining proper labels for allergen-based contaminants.”

Incorrect or incomplete allergen labeling could lead to customer illness, costly recalls, and damage to the food producer’s brand. While some manufacturers are using barcode readers for label verification, many of them “have legacy artwork that’s been in existence for 60 or 70 years and don’t want to mess up their brand by putting a 2D code on their packaging,” Evers says.

In such cases, companies will use optical character recognition (OCR) and verification (OCV) of existing alphanumeric characters on the label, or pattern matching to track fonts or check for the absence/presence of certain words. Food producers also are using barcode readers and vision systems to comply with a 2016 U.S. law mandating the labeling of food that contains genetically modified ingredients, or GMOs.
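The absence/presence check described above can be sketched as a simple verification pass over the text that OCR reads off the label: every required statement must appear, and no disallowed word may. The word lists and label text here are invented for illustration, and real OCV systems also score character quality, which this sketch ignores.

```python
# Illustrative label verification over OCR output: check required
# allergen statements are present and forbidden words are absent.

def verify_label(ocr_text, required, forbidden):
    """Return a pass/fail verdict with the words that caused any failure."""
    text = ocr_text.lower()
    missing = [w for w in required if w.lower() not in text]
    present = [w for w in forbidden if w.lower() in text]
    return {"pass": not missing and not present,
            "missing": missing,
            "forbidden_found": present}

# Hypothetical OCR output from a label camera.
label = "Chocolate Chip Cookies. CONTAINS: WHEAT, SOY, MILK."
result = verify_label(label,
                      required=["contains", "wheat", "milk"],
                      forbidden=["peanut"])
print(result)
```

A failed verdict at this stage is what would divert the package before it leaves the line, rather than after a recall.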

Sometimes, the demand for barcode scanning comes from within the supply chain itself. Evers cites the example of one food company pushing its suppliers to guarantee that their barcodes are accessible from almost every portion of the pallets containing them so that workers aren’t wasting time twisting individual boxes in order to scan them at distribution centers or back-of-store warehouses.

PET Projects


Like other industries relying on machine vision for inspection, food and beverage makers want systems that do more with less. For the past decade, many beverage filling facilities have been manufacturing PET plastic bottles on site rather than relying on a converter to make, palletize, and ship them. Pressco Technology Inc. (Cleveland, Ohio) has developed vision systems that conduct inspection up and down the line, covering not only the preforms blown into the PET bottles but also the fill levels, caps, and labels on the filled containers.

“The advantage of doing all of this with one control is that you don’t have to train operators on or buy spare parts for three or four different inspection systems,” says Tom O’Brien, Vice President of Marketing, Sales, and New Business Development at Pressco.

O’Brien points to two competing challenges in the plastic bottling industry that can benefit from machine vision inspection. One is the lightweighting of PET containers and closures to reduce cost and provide a more sustainable package. “As you make things lighter, you use less plastic and have a greater opportunity for defects to occur,” he says.

Secondly, with the use of post-consumer, re-ground material to make new beverage bottles, vision systems can inspect for contaminants such as dirt that can enter the production process as the recycled PET is melted and extruded into pellets.

To accommodate customers’ requests for more intelligence in their machine vision products, Pressco provides correlation of defects in the blow molder for mold, spindle, and transfer arms, and in the filler for filling valves and capping heads. “If you get a repetitive defect coming from one of those machines, the machine vision system identifies which component is producing the defect to pinpoint that machine’s component so the customer can take corrective action,” O’Brien says.

Imaging opaque plastics like high-density polyethylene (HDPE) and polypropylene presents another challenge, as these materials require x-ray, gamma ray, or high-frequency units to measure fill lines. “We have primarily been a machine vision–based company, but we’re selectively developing those technologies because of the market demand,” O’Brien says.

Pedal to the Metal

On the metals side of its business over the last two years, Pressco has fielded a high volume of requests for its Decospector360 product, which inspects the entire outside surface of a decorated beverage can. “This is something can makers have wanted and needed for many years because the process of decorating a beverage can is volatile and unstable,” says Michael Coy, Marketing Manager at Pressco.

Decospector360 features multiple cameras, sophisticated software algorithms, and a proprietary lighting design that illuminates a wide range of labels, colors, and can styles and heights. The system accurately inspects every can on the line, which typically runs about 2,000 units per minute.
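The line speed quoted above implies a tight per-can time budget, which is worth making explicit. This is back-of-envelope arithmetic from the article's own figure, not a published Pressco specification.

```python
# Time budget implied by a line running about 2,000 cans per minute:
# everything (image capture around 360 degrees, analysis, and the
# accept/reject decision) must fit in the per-can window.
cans_per_minute = 2000
budget_ms = 60_000 / cans_per_minute  # milliseconds available per can
print(budget_ms)  # 30.0
```

Thirty milliseconds per can, shared across multiple cameras, is what makes the inspection "extremely challenging" in Coy's words.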

“To be able to inspect 360 degrees around the outside of that decorated can and look at the label for any print quality issues and color defects, and to do it that fast, is extremely challenging,” Coy says. “Our system solves that problem to the degree that the world’s largest can manufacturers are installing the technology.”

According to Coy, prior to the release of Decospector360, can makers relied on inspectors to eyeball the production line. If plant personnel saw a suspicious defect such as ink missing from cans, they would have to flag entire pallets of cans that had already completed the production process to be reinspected.

This process, known as hold for inspection (HFI), “is probably one of the most expensive and time-consuming for any can manufacturer,” Coy says. “You have to store the pallets someplace and pay someone to look at those cans and decide if they’re going to scrap them or ship them, and the can maker also runs the risk of making their customer angry.”

In fact, brand protection is a key driver for automated can inspection. “Visual brand identity is very important to beverage manufacturers,” Coy says. “The cans have to be perfect. Our system provides a degree of assurance that the cans are being produced, printed, and sent to the filling companies with a quality that matches the brand owner’s expectations.”

To Protect and Serve … Safe Food

When a food product recall occurs, it’s more than a company’s brand or reputation at risk. A North Carolina meat processing company recently issued a recall of more than 4,900 pounds of ground beef because it contained shredded pieces of Styrofoam packaging.

Upon reading about the recall, Steve Dehlin, Senior Sales Engineer with machine vision integrator Integro Technologies in Salisbury, North Carolina, reached out to the meat processor. “I have contacted numerous people in quality and plant management positions and told them that we can help prevent future recalls using machine vision technology, specifically using hyperspectral imaging,” Dehlin recalls. “In fact, we are reaching out to a number of food manufacturers to solve this problem before it impacts consumer health and becomes both a financial and PR issue for the companies.”

Multispectral and hyperspectral imaging of meat products has been well documented. In 2009, the U.S. Department of Agriculture’s Agricultural Research Service successfully used hyperspectral imaging to inspect contaminated chicken carcasses in a commercial poultry plant. And machine vision companies like Integro also have installed numerous hyperspectral imaging systems that use RGB to check color differences in the meat and infrared wavelengths to inspect for contaminants below the surface.

Despite the evidence, meat processors are reluctant to employ the technology. “The food industry is very cost sensitive, and while machine vision greatly reduces quality-control risk, it takes planning, design, installation, and training, which may be the reason for their hesitancy,” Dehlin says. “With meat or any food coming down the line at high speeds, the product has natural variation and color change. Customized machine vision inspection systems are ideal applications to detect quality issues.”

Often the reluctance comes from a lack of knowledge about hyperspectral imaging among plant engineers at the meat processing facilities. Other segments of the food industry can benefit from the technology as well. For example, a 2016 salmonella outbreak in cantaloupe likely could have been prevented if hyperspectral imaging had been used to detect pathogens, according to Dehlin.

Dehlin expects that the U.S. Food and Drug Administration eventually will require spectral analysis of a food product sample to test for pathogens, but the push to adopt multispectral and hyperspectral imaging technology on a broader scale will likely come from food conglomerates like Walmart. Opportunities for machine vision in the food industry are ripe for the picking. To encourage continued adoption of machine vision technologies, system integrators have one more food metaphor to rely on: The proof is in the pudding.

>> Reposted article by Winn Hardin, 9/22/17