Rough terrain? No problem for beaver-inspired autonomous robot

Like the state animal of New York, the rover-like vehicle uses its surroundings to build complex structures and overcome obstacles.

Autonomous robots excel in factories and other manmade spaces, but they struggle with the randomness of nature.

To help these machines overcome uneven terrain and other obstacles, University at Buffalo researchers have turned to beavers, termites and other animals that build structures in response to simple environmental cues, as opposed to following predetermined plans.

“When a beaver builds a dam, it’s not following a blueprint. Instead, it’s reacting to moving water. It’s trying to stop the water from flowing,” says Nils Napp, PhD, assistant professor of computer science and engineering in UB’s School of Engineering and Applied Sciences. “We’re developing a system for autonomous robots to behave similarly. The robot continuously monitors and modifies its terrain to make it more mobile.”

The work, described in a study to be presented this week at the Robotics: Science and Systems conference, could have implications for search-and-rescue operations, planetary exploration by Mars rover-style vehicles and other areas.

It’s all about math

While the project involves animals and robots, its main focus is math: specifically, developing new algorithms — the sets of rules that self-governing machines need to make sense of their environment and solve problems.

Creating algorithms for an autonomous robot in a controlled environment, such as an automotive plant, is relatively straightforward. But it’s much more difficult to accomplish in the wild, where spaces are unpredictable and have more complex patterns, Napp says.

To address the issue, he is studying stigmergy, a biological phenomenon that has been used to explain everything from the behavior of termites and beavers to the popularity of Wikipedia.

According to stigmergy, the complex nests that termites build are not the result of well-defined plans or deep communication. Instead, it’s a type of indirect coordination. Initially, a termite will deposit a pheromone-laced ball of mud in a random spot. Other termites, attracted to the pheromones, are more likely to drop their mudballs at the same spot. The behavior ultimately leads to large termite nests.
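That feedback loop is simple enough to capture in a few lines of code. The sketch below is a minimal, hypothetical model of stigmergic deposition, assuming a one-dimensional row of deposit sites and a drop probability proportional to the pheromone already present; it is an illustration only, not code from any of the studies mentioned here.

```python
# Minimal stigmergy model: each "termite" drops a mudball at a site with
# probability proportional to the pheromone already deposited there.
import random

sites = [1.0] * 20                      # starting pheromone level per site
for _ in range(500):                    # one termite, one mudball per step
    i = random.choices(range(len(sites)), weights=sites)[0]
    sites[i] += 1.0                     # each deposit attracts later termites

# A handful of sites end up dominating -- the seed of a "nest."
print(sorted(sites, reverse=True)[:3])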

Researchers have compared this behavior to Wikipedia and other online collective projects. For example, one user creates a page in the online encyclopedia. Another user will modify it with additional information. The process continues indefinitely, with users building more complex pages.

Testing the autonomous rover

Using off-the-shelf components, Napp and his students outfitted a mini-rover vehicle with a camera, custom software and a robotic arm to lift and deposit objects.

They then created uneven terrain — randomly placed rocks, bricks and broken bits of concrete — to simulate an environment after a disaster such as a tornado or earthquake. The team also placed hand-sized bean bags of different sizes around the simulated disaster area.

Researchers then activated the robot, which used the algorithms Napp developed to continuously monitor and scan its environment. It picked up bean bags and deposited them in holes and gaps between the rock, brick and concrete. Eventually the bags formed a ramp, which allowed the robot to overcome the obstacles and reach its target location, a flat platform.
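In outline, the behavior resembles the loop below: a hypothetical, one-dimensional sketch of that sense-and-fill cycle. The paper's actual algorithms operate on full 3D terrain scans; this is illustration only.

```python
# Hypothetical 1-D sketch of the rover's sense-and-fill loop.
MAX_CLIMB = 1  # tallest height step the rover can drive over

def build_ramp(heights):
    """Deposit bags at too-tall steps until the profile is traversable."""
    bags = 0
    while True:
        steps = [i for i in range(len(heights) - 1)
                 if heights[i + 1] - heights[i] > MAX_CLIMB]
        if not steps:
            return bags                # terrain is now passable
        heights[steps[0]] += 1         # drop one bag at the first bad step
        bags += 1

print(build_ramp([0, 0, 3, 5, 5]))     # 6 bags turn the steps into a ramp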

“In this case, it’s like a beaver using nearby materials to build with. The robot takes its cues from its surroundings, and it will keep modifying its environment until it has created a ramp,” Napp says. “That means it can fix mistakes and react to disturbances; for example, pesky researchers messing up half-built ramps, just like beavers that fix leaks in their dams.”

In 10 tests, the robot moved anywhere from 33 to 170 bags, each time creating a ramp that allowed it to reach its target location.

“Just like an animal, the robot can operate completely by itself, and react to and change its surroundings to suit its needs,” Napp said.


>> Originally posted by Cory Nealon, University at Buffalo News Release, June 27, 2018

Connecting the Digital Twin: From Idea Through Production, to Customers and Back

Rapidly advancing technology and groundbreaking innovations are changing the world of manufacturing. Trends such as Big Data, Cloud Technology, and the Internet of Things (IoT) are just some of the tools fueling a digital transformation that is impacting how products are developed, manufactured, and used across all sectors of the manufacturing industry. Harnessing the power of emerging technologies is key to successful, continuous innovation.

While transitioning can be a struggle, companies that embrace digitalization have the potential not just to survive, but to thrive in, and even disrupt, the market. Some examples of how new technologies are already transforming industries are:

  • Transportation vehicles that understand their environment and operate autonomously.
  • Medical implants designed and manufactured to the needs of a specific individual.
  • Aircraft that operate without pilots.
  • Energy systems that understand how to optimize themselves to minimize consumption.

In the context of developing and manufacturing complex smart products, digitalization begins with creating a digital model. This digital model, or digital twin, should describe, define, capture, and analyze how the product is expected to perform. The digital twin is often described as a digital replica of different assets, processes, and systems in a business that can be used in a number of ways.

The product digital twin is an intelligent model that can be used to predict and interrogate performance.

While this generic definition is basically correct, a comprehensive digital twin consists of many mathematical models and virtual representations that encompass the asset’s entire lifecycle — from ideation, through realization and utilization — and all its constituent technologies.

Ideation: The Digital Product Twin

Companies today face greater competitive disruption on a global basis. To meet these challenges, it is imperative that they transform their engineering practices, design thinking, and processes. Integrated software tools such as CAD/CAM/CAE can enable companies to truly digitally represent the entire product in both the mechanical and electrical/electronic disciplines.

The closed-loop digital twin brings together designers and manufacturers to make plans that link what needs to be made with how to make it, the resources required, and where it’s made.

Creating a 3D model with a CAD system is often the first thought to come to mind when talking about a digital twin; however, the digital product twin is actually a complex system of systems, including all the design elements of a product. This can be created using a Systems Driven Product Development (SDPD) methodology, which drives the creation of intelligent 3D models built with generative design practices and validated through predictive analytics.

SDPD brings together core elements of the design process including intellectual property, configuration, and change control with elements from systems engineering; mechanical, electronics, and software design; and multi-domain modeling and simulation. SDPD also supports interfaces and integrations with domain-specific tools.

Beginning with requirements and ending with integrated designs showing verification status, SDPD provides end-to-end traceability. It can also significantly increase reuse of proven models and simulations, which can improve quality. Additionally, it promotes rapid assessment of change impacts and early discovery of issues to improve schedule performance and product development times.

A product digital twin will typically include electronics and software simulations; finite element structural, flow, and heat transfer models; and motion simulations. This allows a company to predict the physical appearance of a product, as well as other factors such as performance characteristics. Such twins rely heavily on predictive engineering analytics, which combines multidisciplinary engineering simulation and testing with intelligent reporting and data analytics. These capabilities lead to digital twins that can predict the product’s real-world behavior throughout its lifecycle.

A production digital twin is a fully digitalized factory process model used to predict and optimize operational performance.

This comprehensive computerized model of the product enables nearly 100 percent virtual validation and testing of the product under design, which minimizes the need for physical prototypes, reduces the time needed for verification, improves the quality of the final manufactured product, and enables faster iteration in response to customer feedback. For example, the digital twin of an aircraft can be tested under a wide range of environmental conditions, helping to predict potential failure modes.

Another example is the automotive industry’s development of road-safe autonomous vehicles. It would be impossible to test vehicle reliability using physical testing and imprecise analytical models because there are an infinite number of combinations to test when considering environmental conditions, other vehicles, pedestrians, and traffic signs. It is estimated that physical testing would require 14 billion miles of testing — the equivalent of running 400 prototypes in parallel for 100 years at 40 mph for every hour of the year. A digital twin of the vehicle could enable testing to be completed through simulation, leading to safer vehicles, faster.
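The quoted figure is straightforward to verify; the snippet below simply multiplies out the article's own numbers.

```python
# Sanity check on the estimate: 400 prototypes running in parallel,
# around the clock, at 40 mph, for 100 years.
prototypes, years, hours_per_year, mph = 400, 100, 8760, 40
miles = prototypes * years * hours_per_year * mph
print(f"{miles:,} miles")  # 14,016,000,000 -- roughly 14 billion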

Realization: The Digital Production Twin

A smart factory is a fully digitalized factory model representing a production system — a digital twin for production — that is completely connected to a product lifecycle management (PLM) data repository via sensors, supervisory control and data acquisition (SCADA) systems, programmable logic controllers (PLCs), and other automation devices. In manufacturing, a digital twin enables flexibility to reduce time needed for the manufacturing process and system planning, as well as for production facility design. Breakthrough strengths and key enablers of smart factories are additive manufacturing, advanced robotics, flexible automation, and virtual and augmented reality.

Conventionally, designers and manufacturers work independently in different systems and throw information over the wall. This can create problems as information gets out of sync, making it difficult for everyone to see the same picture. As a result, teams are able to assess performance and make necessary adjustments only late in the process, at the physical prototype stage. Issues discovered this late can cause delays in production and significantly increase the cost to fix.

Additionally, these errors can be transferred into assembly and installation instructions, which end up on the shop floor or in the field. This not only makes the product more difficult to produce the way in which it was designed, but also can negatively impact the quality of the product itself.

Using a digital twin in manufacturing offers a unique opportunity to virtually simulate, validate, and optimize the entire production system to test how the product will be built in its entirety using the manufacturing processes, production lines, and automation in place. The process logistics can also be incorporated into the digital production twin to aid teams in designing an effective sideline logistics solution to feed the production lines.

Utilization: The Digital Performance Twin

The digital performance twin of factory assets in operation, and products in service, closes the loop between expected performance and actual performance. With IoT, companies can connect to real-world products, plants, machines, and systems to extract and analyze actual performance and utilization data.

Data analytics can then be used to derive information and insights from the raw data. These actionable insights can then be applied to close the loop with the digital product twin and digital production twin to optimize products, production systems, and processes in the next cycle of innovation. Collected and analyzed, this information could also uncover product issues before they occur, identify potentially problematic configurations, and help fine-tune operations.

Closing the loop on digital manufacturing enables companies to incorporate their customers’ voices and trends into the product innovation cycle, which not only can speed time to market, but also help companies predict shifts in the market.

The manufacturers who will succeed in the evolving digital world will be the ones to harness the intelligence being produced in real time to get innovation at higher quality to market faster than their competitors.


>> Written by Jim Rusk for Tech Briefs, June 1, 2018

Looking Into The Future Of Sensor Technology

While the recent Sensors Expo conference in San Jose clearly demonstrated the important role sensors currently play in today’s connected world, attendees are also eager to learn what future sensor technologies will emerge in coming years. Alissa Fitzgerald, President of MEMS design and consulting firm AMFitzgerald and Associates, presented an interesting look into upcoming sensor developments in a conference session.

Noting that most future sensor technologies have their roots in research projects at university labs, Fitzgerald gave the example of one product, Chirp Microsystems’ MEMS ultrasound time-of-flight (ToF) sensor, which grew out of research at the University of California, Berkeley in 2012. The product was a finalist in Sensors Expo’s annual Innovation Awards competition this year.

Fitzgerald mentioned several promising sensor technologies in various stages of R&D. Of these, event-driven sensors were the closest to commercialization, which she projected would be within the next five years. Northeastern University, for instance, is researching a sensor built to detect infrared light waves that would essentially remain off until an event is detected. Standby power requirements are almost nil. Fitzgerald suggested IoT and security as future applications for event-driven sensor technology.

Also in the development pipeline are piezoelectric resonators, which integrate piezoelectric technology into CMOS and thus occupy a small footprint with no package needed. These resonators could work in RF filters for 5G, millimeter wave imaging, and personal radar applications, according to Fitzgerald.

Piezoelectric technology is also being explored for ultrasonic transceivers located inside the body, Fitzgerald said. Such transceivers could improve applications such as imaging telemetry, health monitoring, and wearable sensors, although the concept still needs a lot of testing.

3D printing is starting to make an impact in many prototyping and low-volume manufacturing applications, and Fitzgerald noted that research is now being done on a screen-printed potentiometric sensor with a porous 3D-printed housing. The biodegradable sensor would allow time-based sampling for agriculture and environmental monitoring applications. The challenge in advancing this technology will be developing the manufacturing infrastructure for mass production, according to Fitzgerald.

Even further down the research pipeline is a dissolvable paper-based battery that uses bacteria as an electron source. The battery could provide power for temporary medical implants, environmental sensors, and disposable consumer electronics. The idea is now at the early proof-of-concept stage and is at least a decade away, Fitzgerald noted.


>> Originally posted by Spencer Chin, Product Design & Development, 07/05/2018

Laser-made aircraft parts

Researchers are 3D printing aircraft parts using new laser technology that could transform industry.

Engineers testing laser-made aircraft parts on a fighter jet. (Source: RUAG Australia)

The team of RMIT researchers led by Professor Milan Brandt are using ‘laser metal deposition’ technology to build and repair steel and titanium parts for defence force aircraft in collaboration with RUAG Australia and the Innovative Manufacturing Cooperative Research Centre (IMCRC).

It works by feeding metal powder into a laser beam, which is scanned across a surface to add new material in a precise, web-like formation.

It can be used to 3D print parts from scratch or to fix existing parts with a bond that is as strong as, or in some cases stronger than, the original.

“It’s basically a very high-tech welding process where we make or rebuild metal parts layer by layer,” explains Brandt, who says the concept is proven and prospects for its successful development are extremely positive.

Head of Research and Technology at RUAG Australia, Neil Matthews, says the technology could completely transform the concept of warehousing and transporting for defence and other industries.

Currently, replacement parts require storage before being transported to where they’re needed, but this technology means parts could just be built or repaired onsite.

“Instead of waiting for spare parts to arrive from a warehouse, an effective solution will now be on-site,” says Matthews.

“For defence forces, this means less downtime for repairs and a dramatic increase in the availability and readiness of aircraft.”

The technology will apply to existing legacy aircraft as well as the new F-35 fleet.

The move to locally printed components is expected to save money on maintenance and spare part purchasing, scrap metal management, warehousing and shipping costs.

An independent review, commissioned by BAE Systems, estimated the cost of replacing damaged aircraft parts to be more than $230 million a year for the Australian Air Force.

CEO and Managing Director of the IMCRC, David Chuter, believes the technology also has applications in many other industries.

“The project’s benefits to Australian industry are significant,” says Chuter. “Although the current project focuses on military aircraft, it is potentially transferable to civil aircraft, marine, rail, mining, oil and gas industries.”

“In fact, this could potentially be applied in any industry where metal degradation or remanufacture of parts is an issue.”

The two-year project is the latest in a series of collaborations over the past decade between RUAG Australia and Brandt, who is Director of RMIT’s Centre for Additive Manufacturing and a leading expert in the field.

“As the leading Australian research organisation in this technology, we are confident of being able to deliver a cost-effective solution that fulfils a real need for defence and other industries,” says Brandt.


>> IMCRC Media Release, 6/28/18

Humans, AI, and Automation Merge in a Fully IoT World

People are irreplaceable even in the age of artificial intelligence, offering nuanced skills that AI and machine learning cannot replace.

Intelligent process automation primarily focuses on automating and optimizing business processes that involve people and documents—not processes related to manufacturing automation. The technology is creating essential capabilities that improve business process automation by addressing workflow, forms, and mobile apps. It also is expanding into new and emerging areas, such as robotic process automation, process and machine intelligence, and AI and machine learning.

As the technology continues to develop, it is also on a path to provide the missing link in the IoT value chain. The potential is that a virtuous circle of IoT data, insight, decisions, and actions can be leveraged using a holistic set of tools. Those tools will drive both continuous improvement and more fully optimized business operations. But where do the people fit in?

Intelligent Process Automation

“With intelligent process automation, we are seeing rote manual tasks being replaced, and, for tasks that require human intelligence, creativity, strategic thinking, or decision-making, humans are normally still making the decisions, but with the support of computers,” Matt Fleckenstein, chief marketing officer at Nintex, told Design News. “Over time, the trend is that this will merge to a point where humans will defer to machines for more decision making. We are currently in the early stages of this maturity curve, where machines can learn how decisions were reached and can create a model for making specific decisions.”

A workflow-enabled digital supply chain and sophisticated best-of-breed delivery management system provides rich functionality, such as truckload mapping, container management, dispatching, scanning into the truck, condition monitoring, and more. (Image Source: ChainLink Research)

Fleckenstein said that the use of intelligent process automation is also being extended for use in supply chain management and to optimize manufacturing processes. An example is in ecommerce, where a website can provide an incredible shopping and checkout experience, but problems emerge as soon as something goes wrong, such as the customer being shipped the wrong item. Even though part of the process is highly automated, all of the processes after the purchase are manually handled and are often incredibly inefficient.

“In the world of IoT, I can have RFID tags on containers and systems to help with container management,” he added. “But if the processes that happen when products are delayed, for example, are not automated in combination with humans and machines, companies are unable to get the value out of the system that could be achieved. Lack of automation affects the consumer experience and customer experience in a number of ways.”

The IoT is interesting because much of the discussion is related to mission-critical use cases and a focus on keeping production and processes running efficiently. The real value of IoT will come when users can also exploit that data across the enterprise, which means getting the information into the hands of the people who deal with it.

“Once data becomes actionable, the organization needs to know what action to take and how systems can be used to guide people to complete specific tasks,” Fleckenstein said. “It becomes less about the capturing of data at the edge, less about transmitting data and separating signal from noise, and more about identifying processes that are impacted and how they need to change. And ultimately, there is a question about how internal processes need to become more automated to realize the full value of what IoT can bring to the table.”

IPA-enabled IoT

For companies, the goal of IoT isn’t just to generate data, analyze the data, or even to enable data-driven decisions. It is all of these things. But it is even more important to enable data-driven actions that can be continuously improved and thereby optimized. This ability to enable not just automation but the orchestration and optimization of complex processes is the overarching strategic benefit of intelligent process automation.

According to the Nintex viewpoint, with business optimization as the goal, a crucial tactic is to start small by defining discrete IPA projects that can yield concrete near-term results. This approach builds awareness of and support for IPA within the enterprise, and creates examples that others in the organization can emulate. As a result, new workflows can create their own demand and spread virally across the enterprise.

Their takeaway is that, for all the attention the IoT has received, it is still in its early stages in part because enterprises are still figuring out exactly where and how IoT can benefit them. Adoption will build steadily as they come to understand the role of IPA-enabled IoT in optimizing businesses.


>> Originally posted by Al Presher, Design News, July 6, 2018

Hitting at the Hidden (and Not So Hidden) Costs of CAE


Today’s engineers use Computer Aided Engineering (CAE) software to solve a range of engineering problems associated with the mass production of products. Yet, many know first-hand that single licenses of advanced CAE multi-physics solvers can easily run in the tens of thousands or even up to a hundred thousand dollars per license, per year. On top of licenses is the cost of the high-performance computing (HPC) systems used to run sophisticated CAE software.

On-demand, scalable, cloud and SaaS-based CAE business models have been recently introduced to allow engineers to use advanced CAE software without having to juggle licenses, share tools or wait for access to typically limited on-premises HPC. These new models will usher in a new wave of innovation, similar to what the engineering world experienced with the transition from the draftsman’s table to CAD, or from 2D to 3D CAE. Allowing engineers to utilize nearly limitless HPC resources in the cloud will forever change CAE workflows.

Engineers evaluating their current legacy CAE systems against the newer cloud CAE models should examine all the costs, including hidden costs, associated with legacy CAE.

R&D Costs

The role of engineering within an organization is to turn innovations into profitable products while minimizing R&D costs. One way to think of those costs is as the sum of engineer-hours committed to a project or product, plus the costs of prototypes, plus the cost of engineering tools, including CAE software, HPC hardware, bench equipment, etc.

Of these costs, engineer-hours committed to a project and physical prototyping typically outweigh engineering tools; however, selection of the latter drives the costs of the former, either up or down.

For example, if an engineering firm uses a CAE package unable to effectively analyze and optimize engineering designs in a timely manner, engineers end up waiting on CAE results and working from data that is ultimately invalid. Engineers are often the highest-paid staff, so when they spend more time waiting for CAE results instead of actually making informed, data-driven design decisions, the firm suffers financially.

The problem is compounded when engineers are forced to “learn by prototyping,” discovering design issues through physical rather than less expensive simulated prototyping. Consider the MEMS sensors industry: when design issues aren’t detected during initial phases, but only after the sensor die is assembled and packaged, the added round of testing can cost millions of dollars and months of time.

Product Risk

Product risk is the summation of risks associated with the release of a new product and its product/market fit. Will the new product perform as expected? Will it beat competitive products on critical specs, such as cost, power, performance, etc.? Many of these risks only become evident after a product is launched into the marketplace.

If CAE tools don’t deliver the wealth of design insights needed to optimize a product’s design, the risk of market failure is heightened. When engineers are constrained by their tools, such as using 2D instead of 3D models due to computational constraints, it’s like looking at the universe through a straw. An inordinate amount of time can be spent waiting on design optimization study results from outdated CAE tools.

This is especially applicable to the $20 billion wireless component market that is looking to make the transition from 4G to much more stringent 5G specifications. Engineers are scrambling to miniaturize the dozens or hundreds of tiny RF components that make an RF front-end work so it can deliver streaming 4K video to future 5G smartphones. In the past, RF engineers have relied on empirical methods, rudimentary 2D CAE methods and expensive physical prototyping to optimize components like SAW (surface acoustic wave) and FBAR (film bulk acoustic resonator) filters for RF front-ends. In a 3D CAE world, design decisions based on 2D data simply won’t yield optimal designs. Legacy 2D CAE practices are unsustainable, and many wireless firms will lose market share to smaller firms that adopt better CAE, explore larger 3D design spaces, and find optimal designs to meet customer demand.

Time-to-Market

Business history is replete with examples of products that arrived too late and led to the collapse of massive publicly traded companies.

Codified engineering product design processes leveraging CAE have perfected the innovation engine at many companies, and new technology emerges at a dizzying pace. As a result, further optimization is not only possible, but required to stay competitive. Never has this been truer than in industries like the driverless car market.

Technological advancement in the driverless car space requires massive amounts of CAE simulation and HPC core-hours. Engineers designing the thousands of electronics that comprise next-generation driverless cars simply don’t have the luxury of learning from prototyping. Moreover, the sensors, systems and algorithms to power driverless cars will be tested with millions of virtual passenger miles, so that they will work flawlessly the first time a completely driverless car drives a real human.

Conclusion

To meet the demand for world-changing technologies, engineers need access to as much computing horsepower as they can get. When engineers are constrained by their CAE tools, innovation suffers, and when innovation suffers, the world suffers. When evaluating their current CAE tools, engineers should assess all the costs and not just those directly associated with the tools themselves.


>> Originally posted by Ian Campbell, EnterpriseTech, July 9, 2018

EOS to Offer Simulation Aided Manufacturing with Additive Works

Leading industrial 3D printing supplier EOS and German software start-up Additive Works have confirmed that they are working together to enhance metal additive manufacturing processes.

As part of the partnership, EOS is offering Additive Works’ Amphyon simulation based software to its customers, and the two companies have committed to the further development of the platform.

Martin Steuer, Head of Product Management Software and Services at EOS, explains, “United by the mission to make industrial 3D printing even more intuitive and user friendly, EOS is happy to partner with Additive Works on the subject of AM process simulation.”

Simulate before you create

“Although the AM technology itself is very mature,” states Dr. Nils Keller, CEO of Additive Works, “especially for inexperienced users it can be difficult to predict if a part will be 3D printed as expected.” This is particularly evident in the number of recent research projects and software/imaging developments that seek to study the exact behavior of the laser melting process.

Simulation, then, which is standard for traditional and subtractive manufacturing processes, is one way of avoiding issues like surface defects or interior residues.

Amphyon is Additive Works’ solution for overcoming these challenges.

Amphyon user interface: optimization of the build direction by consideration of all possible orientations. (Image via Additive Works)

ASAP metal 3D printing

Amphyon can be summarized in four workflow principles: Assessment, Simulation, Adaption and Process, or ASAP. A schematic code sketch follows the four steps below.

Assessment – at this stage, the software estimates an object’s print time, material usage, the post processing needed and distortion sensitivities created by orientations within the build chamber.

Simulation – Next, Amphyon performs a calculation of the residual stresses and distortions of the part.

Adaption – scan speed or laser power is adapted for the specific application.

Process – a “first time right” 3D print is completed.
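For readers who think in code, the pipeline below is a deliberately simplified, self-contained sketch of those four stages. Amphyon itself is a commercial GUI tool, so the function names and toy physics here are illustrative assumptions, not its actual interface.

```python
# Hypothetical sketch of the ASAP stages; names and physics are illustrative.

def assess(orientations):
    """Assessment: pick the build orientation least sensitive to distortion."""
    return min(orientations, key=lambda o: o["distortion_sensitivity"])

def simulate(orientation):
    """Simulation: crude stand-in for a residual stress/distortion solver."""
    stress = 0.8 * orientation["overhang_area"]
    return stress, 0.05 * stress

def adapt(stress, laser_power=370.0):
    """Adaption: derate laser power where predicted stress is high."""
    return laser_power * (0.9 if stress > 50 else 1.0)

orientations = [
    {"name": "flat", "distortion_sensitivity": 0.7, "overhang_area": 120.0},
    {"name": "upright", "distortion_sensitivity": 0.3, "overhang_area": 40.0},
]
best = assess(orientations)
stress, distortion = simulate(best)
power = adapt(stress)
# Process: hand the adapted parameters to the printer for a
# "first time right" build.
print({"orientation": best["name"], "laser_power_w": power})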

An Instrumented Stator Vane 3D printed at EOS using Amphyon. (Photo via Additive Works)

Streamlining metal additive manufacturing

Additive Works was founded in 2015. Since then its proprietary Amphyon software, consisting of multiple programs, has been integrated by the likes of Altair, 3D Systems (in 3DXpert) and SLM Solutions.

Through its partnership with EOS, Additive Works will help integrate Amphyon’s assessment, simulation and support modules into EOS’s proprietary EOSPrint 2 3D printing software.


>> Originally posted by Beau Jackson, 3D Printing Industry, June 29, 2018

Advances in Embedded Vision Pave Profitable Path

It’s no secret that today’s iPhone has several thousand times the processing power of NASA’s Apollo command module computers — although that statement only tells half the story when it comes to the strengths of 1960s computing. No matter how you look at it, however, the power of modern computing compared to just a few decades ago is startling. Even more surprising than having an ocean of teraflops in our pockets is what we do with it. In the 1960s, computers were a novelty. Today, many people in the developed world have at least one computational device.

The same relationship between power, size, and mass consumption applies to machine vision, and more specifically, embedded machine vision. Embedded vision incorporates an image sensor, powerful processor, and I/O into an application-specific system low on weight, energy consumption, and per-unit cost. Advances in embedded vision hardware and software have expanded opportunities in industrial machine vision as well as medical imaging, autonomous vehicles, and consumer electronics — opening the world of machine vision to a universe of new applications.

Smart Cameras Lead the Way

Bridging traditional machine vision systems with embedded functionality, smart cameras continue to satiate appetites for compact all-in-one vision capabilities. The smart camera market in North America grew 25 percent year-over-year to $408 million in 2017, representing the fastest-growing category in the vision industry. The adoption of smart camera-based systems is projected to accelerate 8.9 percent from 2017 to 2025, but component cameras and imaging boards are also seeing their share of the embedded vision pie increase to $189 million and $39 million, respectively.

Today’s smart cameras can handle a range of applications — from recognizing traffic signs to performing in-machine inspection — that require high performance and flexibility in a small form factor. Measuring 29 mm x 29 mm x 10 mm and weighing 10 g, the board-level version of FLIR’s Blackfly S camera is optimized for embedded systems such as handheld devices and unmanned aerial vehicles. The 5.0 MP USB3 Vision camera, scheduled for release in Q3 2018, provides a feature set including automatic and manual control over image capture and on-camera preprocessing.

Meanwhile, Basler offers the dart, a board-level camera that measures 27 x 27 mm, weighs 15 g, and offers two interfaces: USB 3.0 and the camera maker’s proprietary BCON for MIPI interface compatible with the GenICam machine vision standard. By using the latter interface, “the result is that instead of using a sensor module, the designer can integrate a finished camera module with much less effort,” says Matthew Breit, Senior Consulting Engineer & Market Analyst at Basler.

Developing the Complete Package

Achieving small, fast, and power-efficient smart cameras and other embedded vision systems relies on advances in hardware, software, and the technologies that support them. When it comes to image processing, many applications call for heterogeneous platforms that combine the power of a central processing unit (CPU) with a field-programmable gate array (FPGA), graphics processing unit (GPU), or low-power ARM core.

Combining CPUs with a GPU can significantly reduce the processing time for an image set. For example, Qtechnology A/S uses an accelerated processing unit (APU) in its smart camera platforms that combines the GPU and CPU on the same die. The GPU is a massively parallel engine that can apply the same instructions across large data sets (in this case, pixels) at the same time. Performance can further be increased by pairing the APU with an external, discrete GPU, which enables the addition of GPU processing resources to support even more intensive vision tasks.
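A toy NumPy example makes the data-parallel idea concrete: one instruction broadcast across every pixel at once, in the spirit of a GPU kernel, versus a pixel-by-pixel loop. It is an analogy only, not Qtechnology's code.

```python
# One instruction across all pixels (GPU-style) vs. a per-pixel loop.
import numpy as np

image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# CPU-style: visit each pixel in sequence.
mask_loop = np.zeros(image.shape, dtype=bool)
for y in range(image.shape[0]):
    for x in range(image.shape[1]):
        mask_loop[y, x] = image[y, x] > 128

# GPU-style: the same threshold broadcast over the whole pixel array.
mask_vec = image > 128

assert (mask_loop == mask_vec).all()  # identical result, far less wall time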

When compared with GPUs, FPGAs produce less heat for compact applications because they run at slower speeds, but they also require significant programming knowledge. The VisualApplets product line from Silicon Software aims to ease chip configuration by simplifying the development environment for FPGAs. Recently, Silicon Software reported a deep learning implementation on FPGA architecture capable of classifying six different defects on images of metallic surfaces with 99.4% accuracy and an image throughput rate of more than 220 Mbps.

FPGAs are used extensively in embedded cameras for medical imaging because they reduce component costs and power consumption while allowing rapid development of camera interfaces such as CoaXPress and Camera Link, says AIA Vice President Alex Shikany. Embedded vision is becoming ubiquitous in endoscopy, surgery microscopy, dermatology, ophthalmology, and dentistry. In fact, research firm MarketsandMarkets expects the global medical camera market to reach $3.69 billion by 2021, up from $2.43 billion in 2016, at a compound annual growth rate of 8.7 percent.

A pair of new product releases from OmniVision Technologies illustrates the demand for embedded vision technology in medical imaging, particularly for point-of-care diagnostics and treatment. Designed for disposable and reusable endoscopes and catheters, the OH01A medical image sensor provides 1280 x 800 resolution at 60 frames per second in a 2.5 mm x 1.5 mm package.

Meanwhile, the OVMed image signal processor (ISP) for medical, veterinary, and industrial endoscopy applications integrates with OmniVision’s compact CMOS image sensors and features a short system delay of less than 100 ms. The ISP’s video processing unit has two versions: one that can fit inside an endoscope handle and an advanced option that resides in the camera control unit.

The automotive industry is following a similar trajectory with advanced driver assistance systems (ADAS). ADAS components include cameras, image processors, system processors, ultrasonic sensors, lidar, radar, and IR sensors, and they are responsible for a number of complex tasks — among them driver drowsiness detection, lane-change assistance, pedestrian identification, and traffic sign recognition.

Not only do these applications require high-performance image processing, but that processing must happen under extreme conditions and stringent automotive safety standards. To address these challenges, ARM has developed the Mali-C71, a custom ISP capable of processing data from up to four cameras and handling 24 stops of dynamic range to capture detail in images taken in bright sunlight or shadow. Reference software controls the ISP, sensor, auto white balance, and autoexposure. To push the device further into the automotive market, the company plans to develop Automotive Safety Integrity Level–compliant automotive software. The Mali-C71 represents just one ADAS component in a market expected to reach $89.3 billion in annual revenue by 2025.

Simplifying the Complex

With global tech giants staking their claim in embedded vision’s future, developers can expect more simplified deployment and management of the technology. In May 2018, Microsoft announced a vision AI developer kit based on the Qualcomm Vision Intelligence Platform. The kit brings together the hardware and software required to develop camera-based IoT solutions using Azure IoT Edge and Azure Machine Learning. The goal is to deliver “real-time AI on devices without the need for constant connectivity to the cloud or expensive machines,” according to a blog from Microsoft.

Another recently announced design environment is Intel’s OpenVINO, which stands for Open Visual Inference & Neural Network Optimization. OpenVINO provides common software tools and optimization libraries to enable write-once, deploy-everywhere software that is attractive to embedded system developers. The toolkit ports vision and deep learning inference capabilities from popular frameworks such as TensorFlow, MXNet, and Caffe, as well as OpenCV, to Intel FPGAs and Movidius vision processing units.

For many traditional machine vision companies, the PC- or frame grabber–based industrial inspection system remains the bread and butter of their business. But visionary stakeholders already are capitalizing on new opportunities far beyond the factory floor, empowered by advances in embedded vision.


>> Originally posted by Winn Hardin, Vision Online, 6/26/2018

Smart Grippers Encourage SMEs to Embrace Automation

Robots have obvious benefits for small and medium-sized manufacturers, but the cost can often be too high. With Robotiq’s new solution, automation is within grasp.

When it comes to robotics, carmakers are not bashful about investing time and resources to ramp up a production cycle. As Robotiq CTO Jean-Philippe Jobin explains, they design lines for seven-year loops, drawing on the combined intelligence and man-hours of a hundred or more engineers and installing brand-new infrastructure to run pneumatics, hydraulics, and electrical power. Whatever it takes to get the job done, they do it.

That’s just not feasible for smaller companies, a machine shop or electronics assembler, for example, that handle higher volumes of components that can change on a year-to-year or, more likely, month-to-month basis. For them, everything matters and must be considered.

“Many manufacturers really struggle to put robots in their factory because it’s too costly and too complicated,” says Jobin, who co-founded the Quebec-based Robotiq a decade ago this July.

If you’re a small or medium-sized enterprise (SME) that has figured a way around this, that’s great, and another example of fortune favoring the bold. It’s still a sobering reminder that the little guys and gals who support all the big companies, and who really need robotics to stay ahead, can’t always get it. Furthermore, getting enough human workers for an assembly line or pick-and-place operation has been a recurring struggle. That’s exactly why Robotiq was created, and why the company launched a new electric parallel gripper, called the Hand-E, in late June.

The compact 50-mm-stroke grippers are the strongest you’ll find in their size and power class, with a grip force of 60 to 130 newtons and a 5-kg form-fit grip payload. They are plug-and-play, with a set-up time under 10 minutes, and they easily attach to Robotiq’s FT 300 force torque sensor for machine tending on CNC machines, or to a wrist camera for precise pick-and-place of small electronics. With no sharp edges or pinch points, they are perfect for collaborative robots.

The Hand-E was designed to work seamlessly with Universal Robots’ new e-Series cobots, also announced at Automatica 2018 in Munich. The e-Series takes an hour to unpack and program and can be plugged into a conventional electrical socket.

Two larger, earlier models in the Adaptive Gripper line, the 2F-85 and 2F-140, will also work with the e-Series.

“The big difference with the Hand-E is that we upgraded the software and you have [the Universal Robots] interface in which you enter the dimensions and all the rest is done behind the scenes,” Jobin says.

As with the latest crop of intuitive software, an operator taps stylus to tablet a few times and the robot and gripper do the rest. No longer does a manufacturer need to rely on a team of engineers and technical specialists.

“This opens up for less technically experienced workers to get the same results,” Jobin says.

The force, speed, and position are all configurable, with the grippers excelling at part detection and part validation.

The IP67-rated grippers are so precise, Jobin says, that in a pick-and-place experiment, the Hand-E easily sorted 47- and 48-mm PCBs into two distinct piles. In another test, the Hand-E was able to thread a tiny wire through a hole, a feat only possible with its increased force-sensing functionality.

If the gripper senses the part is bad, you can program the robot to place the part in a reject bin. Getting the dimensions right is vital when collaborating with a machine tool. If the metal block were to slip and be incorrectly placed on the workholding, a jam could occur, slowing production.
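The decision logic Jobin describes is easy to picture in code. The sketch below is a hypothetical illustration of width-based part validation; Robotiq's grippers are actually configured through the Universal Robots interface, and none of these names come from its API.

```python
# Hypothetical width-based part validation with a parallel gripper:
# close on the part, read back the achieved width, then accept or reject.
EXPECTED_WIDTH_MM = 47.0
TOLERANCE_MM = 0.5

def route_part(measured_width_mm):
    """Decide where the robot should place a gripped part."""
    if abs(measured_width_mm - EXPECTED_WIDTH_MM) <= TOLERANCE_MM:
        return "fixture"     # validated part: continue machine tending
    return "reject_bin"      # wrong size or bad grip: discard safely

for width in (47.1, 48.0, 46.2):
    print(f"{width} mm -> {route_part(width)}")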

“The goal of Robotiq is to simply help them install robots and help them start their production faster,” Jobin says. “If you are able to start one month earlier, it will make a big difference.”

The Hand-E electric parallel grippers can get set up in about 10 minutes. (Robotiq)

Jobin should know. After he and CEO Samuel Bouchard graduated from Laval University, they traveled the world to understand the needs robots could address, then lugged suitcases full of end-effector prototypes to U.S. businesses to get their startup off the ground.

Since then, Robotiq has become an influential voice in the robotics community, with Jobin joining the board of the ISO committee for robot safety, and Bouchard authoring Lean Robotics, a book to help manufacturers systematically and intelligently deploy an automated workforce.

The book is obviously just an extension of the company’s mission statement, while the Hand-E is the latest execution.

The most visible improvement is space reduction in the work cell, as there’s no need for pneumatic air, compressors, or air lines. Installation time and costs drop as well, and the maintenance of regulators and valves, along with hunting for air leaks, is eliminated. The tradeoff is that electric grippers are less powerful than their hydraulic and pneumatic counterparts, so they won’t work for every application.

They do come with a fingertip starter kit containing three sets, so the number of possible applications is quite broad.

“That’s a big customization the end user will want to do,” Jobin says. “Probably 50% will design their own. They can machine from the kit, creating the groove or slot for the part they want to grip. And we provide digital files of all the grippers on our website so they can make the modifications they need.”


>> Originally posted by John Hitch, New Equipment Digest, June 26, 2018

Digitalization Is Disrupting the Supply Chain

My last three articles analyzed the relationship between digitalization and the manufacturing execution system (MES), considering MES as a key component of any digitalization initiative. At the time, I touched only briefly on the impact MES has on the supply chain. For this blog, I want to spend some more time on how digitalization is impacting and probably disrupting the supply chain—or at least the traditional concept we have for supply chain.

When we spoke of the supply chain over the past 40 years, we mostly thought of logistics. Optimizing the supply chain basically meant optimizing the flow of materials, from the first supplier of raw materials to the distribution center for the finished goods, whatever they may be. Efforts went mostly into reducing stocks and lead times while guaranteeing the availability of components at each stage of production. Delocalization of production and globalization have created new challenges in managing supply chains, introducing new complexity in logistics and quality control. The main challenge has become finding the right combination of low production cost, the necessary quality of products or semi-finished goods, and reasonable transportation costs, while guaranteeing the availability of products when needed.

Recently, things have changed. A significant reason for that is the availability of new technologies that changed the industrial landscape:

Real-time Big Data and analytics. A massive amount of data will be available, along with systems and tools to collect, analyze and transform that data into information. Even more important, the information will be available in real time, enabling companies to make decisions on product design, manufacturing, distribution and even prices.

Mobility. The information to manage the supply chain will not be generated by desktop devices anymore. Mobile devices are already frequently used in logistics, but their usage is growing day by day. Mobile devices provide the ability to collect and deliver information in real time wherever the user is. The impact of mobile order and delivery information entry in the coordination of the supply chain is critical. It dramatically changes how orders are received and processed, with a critical impact on production organization and scheduling.

Internet of Things (IoT). The number of sensors used in every kind of industry is growing rapidly. Because the cost of devices that can be considered smart, thanks to their embedded computational capabilities, is decreasing rapidly, they are being incorporated into an increasing number of products. Most of these devices can be connected to the Internet and can become an incredible source of information that would otherwise not be available. Only a few years ago, it was common to participate in surveys aimed at understanding how products were used by end users and how they could be improved. Today, many products provide this information themselves while they are being used.

Social media. The information coming from the products can be related to the sentiment of users collected on social media. Many companies monitor—manually or automatically—social channels to understand what their clients think of their products, and use the information to tune design, production, logistics, or marketing and communication.

3D printing. 3D printing is an emerging technology that is becoming quite common in many different kinds of industries. The automotive, aerospace and medical device industries are increasing their usage of 3D-printed components. Other industries, like the food industry, are just starting to experiment with 3D printers. It can be an extremely disruptive technology that permanently modifies the existing supply chain in some industries: it allows the transfer of information instead of material goods and moves component production close to where assembly takes place, essentially virtualizing the logistics. Moreover, 3D printing enables low-cost product customization and makes the production of micro lots or single products sustainable.

Drones and self-driving vehicles. The use of drones has already significantly modified the supply chain in areas and markets where transportation was a critical bottleneck. One of the best examples is the delivery of blood bags in some areas of Africa, where ground transportation was unacceptably slow, especially during the rainy seasons. Drones can deliver blood bags in less than 30 minutes across an area big enough to serve a significant share of the population from a single dedicated warehouse. Something similar is happening with self-driving vehicles. They are already common inside production plants, where automated guided vehicles (AGVs) or laser-guided vehicles (LGVs) are used to move goods. Interesting experiments are in progress in several cities, both to address the last-mile issue and to handle transportation to distribution centers.

Companies that are able to correctly manage the opportunities provided by these technologies will significantly change the way they manage their supply chains. They will change the way they collaborate with their suppliers (even tier three and tier four). But what is even more significant is that they will start to collaborate with end users, which will speed up the innovation of products and services.

The hard work done in the past to optimize the supply chain in a delocalized environment is almost useless in the new world enabled by these now-available and affordable technologies. A new approach, and especially a cultural shift, is needed: the end user must be considered an active and driving part of the supply chain itself.


>> Originally posted by Luigi De Bernardini, AutomationWorld, June 25, 2018