With all of the discussion happening around collaborative robots (cobots), there is, of course, a lot of talk about how these robots will interact with (and possibly replace) humans in the workspace. A less discussed aspect is how these robots will collaborate with each other. But as smart factories continue to bring in cobots and the robots themselves take on increasingly complex tasks, centralized systems will become increasingly inefficient and distributed robotics will become more critical.

“In the last 15 or so years, one of the big trends in AI has been the move toward distributed AI or multi-agent systems,” Maria Gini, distinguished professor at the Department of Computer Science and Engineering at the University of Minnesota, told Design News. “These are systems in which, instead of just one entity that is intelligent and makes decisions, you think about multiple entities – such as robots and programs – that somehow have a task to do and accomplish it by some form of collaboration and communication.”

Maria Gini, renowned AI and robotics expert and upcoming ESC keynote speaker. (Image source: University of Minnesota College of Science and Engineering)

Gini, a keynote speaker at the upcoming 2017 Embedded Systems Conference (ESC) in Minneapolis, has spent the bulk of her 30 years as an artificial intelligence and robotics researcher working in the field of distributed robotics. “I really focus more and more on how to build communities and teams of robots and on the competitive side – the game theory aspects – of how they can work together.”

For Gini, the greatest advantage distributed systems offer is robustness. “If I’ve built my [distributed] system properly, if one robot breaks, other robots can still do the work. Whereas if I have a central system, then if one robot breaks maybe the system doesn’t know, and if it does know it has to reallocate everything. Centralized systems produce better quality solutions, but they aren’t as robust to failures. A distributed system is more resilient.”

What makes distributed robotics particularly challenging from an AI perspective, according to Gini, is that engineers have to think of all the different ways that robots could interact and react to each other. More than simply handing off tasks from one robot to another like an assembly line, distributed robots may be called upon to course correct errors, pick up slack workloads for each other, and even figure out how to most quickly and efficiently accomplish a task.

“Imagine I have to clean a building and I have a team of people to do it with,” Gini said. “[With humans] I could say to each of them, ‘Run wherever you want and pick up garbage’ and after some random amount of time the building will be clean. It works, but there’s no communication.” Gini said a lot of robotics is approached this same way, with each robot working independently and never taking advantage of the possibility of communication.

But with that communication come a lot of questions about process and implementation. Should all of the robots in a space be communicating all of the time? What should happen if a robot fails? We expect extra collaboration to make jobs faster and to save energy, but done badly it could have just the opposite effect.

“If I want to collaborate with robots, the question then becomes how do I do it, and one of the main issues is communications,” Gini said. “Do I have communication with one robot? All the other robots? Just some? Do I have a central controller that tells each robot what to do and allocates space to each robot? I could make a local system or a global system, or I can have a system where the team self-organizes. All of these methods are different in real life and when you want to write programs.”

One method of addressing this, Gini said, is an auction system, wherein robots “bid” on tasks (based on how quickly they can accomplish them, for example) and the machine best able to accomplish a task is assigned it. “Once I know what all the tasks are, I can assign them a value and say something like, ‘This task is going to cost me five or 15 or whatever.’ When you give a task a single number like this, the communication is very light.”

Once all of the robots have submitted bids, the system acts as an auctioneer, picks the best robot for the task, and assigns it. “So there’s a bit of centralization, but it’s not one entity that makes all the decisions,” Gini said. “Right now, the robots run a program that says to compute your cost you figure out where you are, how far you have to go, how much battery power you need, and that’s how you submit the bid. In the long term we want the robots to learn how to do those things. But this is much farther away.” One of Gini’s most recent studies looks at this challenge directly by examining methods of allocating tasks to robots in conditions where time and space are limited.
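To make the mechanics concrete, here is a minimal sketch of that kind of single-item auction in Python. It is not Gini’s implementation: the bid here is simply straight-line travel distance, a stand-in for the position, distance, and battery calculation she describes, and all names and numbers are illustrative.

```python
import math

def bid(position, task):
    """A robot's bid: its estimated cost, here just travel distance."""
    return math.dist(position, task)

def auction(robots, tasks):
    """Greedy single-item auctions: award each task to the lowest bidder."""
    assignments = {name: [] for name, _ in robots}
    positions = dict(robots)            # robot name -> current position
    remaining = list(tasks)
    while remaining:
        # Every robot bids on every remaining task; communication stays
        # light because each bid is a single number.
        bids = [(bid(pos, t), name, t)
                for name, pos in positions.items() for t in remaining]
        cost, winner, task = min(bids)  # the auctioneer picks the best bid
        assignments[winner].append(task)
        positions[winner] = task        # the winner ends up at the task
        remaining.remove(task)
    return assignments

robots = [("r1", (0.0, 0.0)), ("r2", (10.0, 0.0))]
tasks = [(1.0, 1.0), (9.0, 2.0), (5.0, 5.0)]
print(auction(robots, tasks))
```

Note the division of labor Gini describes: the bidding is fully distributed, while the auctioneer adds just enough centralization to resolve conflicts.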

There is, however, debate as to whether distributed robotics is the best solution going forward for all applications. The International Journal of Advanced Robotic Systems is currently working on a special issue on “Distributed Robotic Systems and Society” that aims to examine all sides of the issue. According to the journal, many of the characteristics that make distributed robotic systems ideal for certain emerging applications are also holding them back from wider adoption. “For instance, controlling the motion and behavior of large teams of robots still presents unique challenges for human operators, who cannot yet effectively convey their high-level intentions in application,” the journal said.

“Not everyone likes this idea [of distributed robotics]; there’s debate in the scientific community, because you do lose something,” Gini said. “In a sense it’s the same with humans: you have a team of people and you have a commander who is fully aware of the system and can decide what to do. Humans may fail at some local ability to understand, but if you send a person to a room and the room is locked, the person is not going to sit there. A robot, however, will just sit there unless you’ve programmed it otherwise. In a distributed system robots may be stuck doing things nobody knows about.”

It sounds like the long-term ideal for distributed robotics then would be to implement AI sophisticated enough that it can adapt even to unforeseen circumstances – essentially having robots able to program themselves. Gini agrees, though she said that this notion is far off at this point. However, she said that as distributed robots are applied to more complex tasks the algorithms and AI behind them will also grow in complexity.

“Right now the areas where these things are looking to be used are warehouses, such as Amazon’s distribution centers – though Amazon uses a centralized system. Hospitals can use robots to move medicines, supplies, and food around, and they’re also trying to automate as much as possible so that they don’t need a centralized system.”

Someday, however, distributed robotics could move beyond manufacturing, popping up in our smart cities and becoming a key technology in the widespread use of autonomous vehicles. “Vehicle routing problems are another big application,” Gini said. “Think about the logistics of trucks, for example. How is this done? It can be done centrally, but imagine a system where each truck driver gets a list that says, ‘These are the things we need to ship around. How much will it cost you?’ and the driver submits a bid.”

Now imagine those trucks are automated and you can start to envision how it would work. “Technically, in general you are not guaranteed to find an optimal solution,” Gini said. “But a more robust system can be resilient when things break, when there’s noise in the communication, or something else. With robots it’s very critical because things never work the way you expect them to.”

>> Read more by Chris Wiltz, Design News, October 05, 2017

 

4 Benefits of 3D Scanning When Designing a Product

The advent of 3D technologies has opened a whole new realm of possibilities when it comes to designing products. 3D scanning in particular has many benefits for product development. And while in the past the process of obtaining a 3D scan was difficult, today the situation is very different. With the right handheld scanner and software, it has become so easy that even kids can properly use 3D scanning technologies. In fact, numerous students across the country are starting to use these types of technologies in the classroom. Take, for instance, the Mid Pacific Institute, which has created hands-on scanning classes around the reverse engineering of museum artifacts.

Just as students have embraced the technology, product designers and engineers are jumping on the bandwagon. As evidenced by its success in the classroom, utilizing 3D scanning can be quick and easy, and its applications for product development are no exception. What’s more, it brings designers and developers an array of benefits, including:

Allowing for More Intricate Designs, Faster

Whether you are working with a clay or wax model at full scale or small scale, 3D scanning technology has made the process of transitioning from physical object to digital model much quicker and smoother. Leading 3D scanning technology companies have also started to incorporate artificial intelligence (AI) into their offerings, allowing the scanning process to be more automated and intuitive while decreasing the time needed for training. This will allow more sophisticated product designs to prevail, where in the past designs were simplified because the digital design process was so laborious, and thus costly.

There is also the benefit of being able to scan and merge different physical objects to create a unique design. Although more of an art application, a good example is the #WOODVETIA campaign launched by Schweizerholz (Swiss Wood) and the Swiss Federal Office for the Environment. The campaign revolves around the creation of historical Swiss figures in wood to promote its benefits as a sustainable building material. To create the full-bodied figures, figurative artist Inigo Gheyselinck first creates a bust of the historical figure out of clay. The hand-sculpted bust is scanned. Then a person with the same body type as the historical figure is selected and scanned as well. The bust and body are digitally merged, and the life-size historical figure is CNC machined out of wood. This has allowed the design to be applied to a unique medium and has saved large amounts of time during the creation process.

Artist Inigo Gheyselinck develops a wooden figure by combining 3D scans of a clay bust and a human model.

3D scanners have also enabled better ergonomic designs. With the ability to digitally capture human anatomy, products can truly be created to conform to the human body. As wearable technologies and products advance, this will only become more important. The same logic applies when designing aftermarket parts that are meant to fit with existing products. For example, a company that wants to create custom seats for cars can remove the stock seats, scan the interior of the vehicle, and use the digital model of the car’s interior during the design process. This would allow the company to virtually test different seat sizes and designs within the vehicle. It would also provide exact measurements of where the vehicle’s rails and bolt holes are so that the new seat can be designed to align perfectly during installation.

Adding Flexibility to Designs

Say a visual prototype, one without working parts, has been created. As mentioned above, the exact shape of the product can be digitally captured using a 3D scanner. The 3D scan can then be used to create the proof-of-concept prototype, which would be extremely close to the mass-produced product.

Having a 3D model from the first stage allows for design flexibility throughout the process. During the initial design of a product, using a 3D scan and an editing program lets users ensure that surfaces are represented in high fidelity, make products symmetrical (by mirroring scans), and scale the overall size up or down. Moving from one phase to another, the design can be further edited to make space for internal components or simply to change the look of the product as feedback is received. Pairing 3D scanning and modeling with rapid prototyping technologies, such as CNC machining, 3D printing, and plastic injection molding, allows the prototype to come together quickly. Depending on the materials, scans used to create molds help ensure accuracy and that parts meet specific design requirements.
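As a rough illustration of those edits, here is a minimal sketch, independent of any particular scanning package, that treats a scan as an N x 3 array of points and applies two of the operations mentioned above: mirroring to enforce symmetry and uniform scaling. The toy data is invented.

```python
import numpy as np

def mirror_x(points):
    """Reflect the point cloud across the YZ plane (negate X)."""
    mirrored = points.copy()
    mirrored[:, 0] *= -1.0
    return mirrored

def symmetrize(points):
    """Combine a scan with its mirror image so the result is X-symmetric.
    Assumes the scan is already roughly aligned with its symmetry plane."""
    return np.vstack([points, mirror_x(points)])

def scale(points, factor):
    """Uniformly scale the model about its centroid."""
    centroid = points.mean(axis=0)
    return (points - centroid) * factor + centroid

scan = np.array([[0.1, 2.0, 0.5],    # a toy three-point "scan"
                 [1.3, 2.2, 0.4],
                 [0.9, 1.8, 0.7]])
print(scale(symmetrize(scan), 1.5))  # mirrored, then scaled up 50 percent
```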

For example, MU Form Furniture, a manufacturer and distributor of furniture for the modern home and business, long used a more traditional approach to product design because its preferred material was high-quality bent ply. To produce an original piece, MU Form would ship a physical prototype model to a factory overseas, which would reverse engineer the model using a router duplicator to create the wood mold used to shape the bent ply. This method produced a somewhat accurate representation of the piece, but it was understood that manual work would be needed on the mold to fix curvatures and surfaces. Due to this trial-and-error process, inaccuracies in the final production piece would occur.

Today, the company has incorporated 3D scanning and simplified the trial and error process. Using this technology, the furniture designer develops the physical prototype of a furniture piece, which is then reverse engineered using a handheld 3D scanner. The model is edited and perfected digitally. The 3D model is then emailed to the factory, which creates an accurate CNC metal mold directly from the file.

MU Form Furniture uses 3D scanning in product design.

Streamlined Logistics

By eliminating trial-and-error processes, 3D scanning quickens the design process, improving accuracy and cutting down on logistics. As MU Form Furniture experienced, with a 3D scan of a product there was no need to ship a physical prototype to a factory for replication. Instead, a manufacturer anywhere in the world could have the 3D model overnight, whether of a small single part that needed to be altered or of an entire chair that needed a mold. The process of 3D scanning an object is also quick: a 3D model can be captured in a matter of minutes. With the right technology, the model can even be rendered as the object is being scanned.

Using 3D scans and CAD models of other parts, a designer can ensure that parts are going to fit together on the first attempt. If a custom part needs to be designed out of house, the 3D model will also serve as a useful tool during the collaboration process.

Providing the Designer with a 3D File

During product design, it’s not uncommon for a product to be shipped around the world before it goes to mass production. With 3D scans, 3D files can easily be created and supplied in place of a physical product – saving time and money on logistics. In addition, these same 3D files can be used as an additional resource when applying for patents.

There are also added benefits to having a 3D model when it comes to working with and selling products to retailers and online marketplaces. It is becoming more and more common for retail websites to incorporate 3D models of their products for shoppers to view from every angle. There are already many platforms that let consumers decorate a 360-degree online room by dropping in 3D models of furniture and products that are available for purchase. To take it a step further, these companies are looking for new ways to leverage augmented and virtual reality to create immersive experiences for those shopping at home. The décor and furniture segment has been ahead of the game in this area, but it won’t be long before everyday items get the same treatment. Having a 3D model ready will become an added benefit for placing products on these virtual shelves.

The examples above only scratch the surface of the benefits of 3D scanning. Utilizing this technology ensures accuracy from the very start of a project, speeding up time to market. As brick-and-mortar retailers continue to look for ways to boost their online shopping capabilities, having 3D models of products will soon become a requirement, and those already utilizing 3D scanning today will be ahead of the curve.

>> Read more by Andrei Vakulenko, Product Design & Development, 10/12/17

Upskill Skylight Update Aimed at Bringing Augmented Reality to Mainstream

Upskill wants to be the development platform for your smart glasses, regardless of the brand. This agnostic approach is fairly unusual for companies building augmented reality applications and it provides enterprises with a neutral way to build these applications to work across different smart glasses systems.

Company CEO Brian Ballard says the hardware is beginning to mature, but what’s been missing is a development tool for creating content more easily. The latest update to the company’s Skylight development platform includes several new pieces to increase the use of augmented reality inside large companies.

For starters, the company has added a couple of tools that simplify app creation, including an Application Builder with pre-built user interface cards that non-technical personnel can drag and drop to build a simple workflow application without any coding skill. Skylight Connect is another new piece, designed to tap into a company’s databases without any coding. Upskill claims to handle the connectivity in the background: you just point to the database and it does the rest.
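To see why card-based building matters, consider a generic, hypothetical sketch of the underlying idea (this is not Upskill’s actual format or API): because the workflow is plain data rather than code, a non-programmer can assemble it, and a small engine steps the wearer through it card by card.

```python
# A hypothetical card-based workflow expressed as plain data. The card
# types and text are invented for illustration.
workflow = [
    {"type": "instruction", "text": "Scan the part's barcode."},
    {"type": "photo",       "text": "Photograph the installed assembly."},
    {"type": "checklist",   "text": "Confirm torque values.",
     "items": ["Bolt A", "Bolt B"]},
]

def run(workflow):
    """Present each card in order; a real system would render to smart glasses."""
    for step, card in enumerate(workflow, start=1):
        print(f"[{step}/{len(workflow)}] {card['type'].upper()}: {card['text']}")
        for item in card.get("items", []):
            print(f"    - {item}")

run(workflow)
```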

If the legacy application is tougher than something Connect can handle, there is also an SDK designed for enterprise programmers to connect to more challenging systems. Finally, the company includes Skylight Live, a Facebook Live-like experience that lets a person broadcast what they are seeing through their smart glasses.

(Photo: Upskill)

Upskill hopes that today’s update will bring large companies one step closer to deploying AR applications at scale. While many companies like Boeing and GE are experimenting with AR proofs of concept, very few have large-scale deployments yet. One of the things holding them back is that these applications don’t exist in a vacuum.

Most enterprise companies have a vast legacy infrastructure and the AR applications often have to work with these legacy systems to pull information like inventory, documentation or back office data. This requires a platform that’s been built to handle those kinds of connections. Skylight has always aspired to be that platform, but the upgrade enhances that and adds tools to bring less technical personnel into the content creation mix.

(Photo: Skylight)

What’s more, when companies go to the trouble and expense of building an AR app that pulls data from various systems across the company, they don’t want to be tied to a proprietary smart glasses development system that locks them into a single hardware manufacturer. With Skylight, they can move from one headset type to another without having to substantially redo the code.

>> Re-posted article by Ron Miller, TechCrunch, Oct 10, 2017

Airbus Tests Flight with 3D-Printed Primary Flight Control Component

Airbus has successfully flown its first aircraft with a 3D-printed primary flight control hydraulic component, a spoiler actuator valve block, on a test flight of an A380. The valve block, made from titanium powder, is part of a spoiler actuator made by Liebherr-Aerospace and provides primary flight control functions on board the A380.

The 3D-printed spoiler actuator valve block made by Liebherr-Aerospace is 35 percent lighter in weight than the conventional valve block.

The 3D-printed component provides the same performance as the conventionally produced version, but is 35 percent lighter and consists of fewer parts, Liebherr says. The additive manufacturing process is also less complex than the traditional milling method and is more material efficient.

Liebherr-Aerospace developed the 3D-printed hydraulic component in close cooperation with Airbus and the Chemnitz University of Technology in Germany. The project was partly funded by the German Federal Ministry of Economic Affairs and Energy.

Liebherr-Aerospace & Transportation SAS is already working on the next generation of 3D-printed hydraulic and electro-mechanical components, including an integrated rudder actuator. Unlike the conventionally produced version, the 3D-printed component does not feature a separate valve block or cylinder housing, or an extra reservoir. All parts are built into one monolithic, compact housing.

The next generation of additive manufacturing technology will further influence the design process of components. Liebherr-Aerospace estimates that the resulting weight savings at system level could contribute to a reduction of fuel consumption as well as CO2 and NOx emissions of future aircraft.

>> Source: Additive Manufacturing Magazine, 10/4/17

Wireless sensor networks: Expanding opportunities for Industrial IoT

Wireless sensor network (WSN) technologies are fueling the Industrial Internet of Things (IIoT). Today’s WSN and associated cloud technologies are central elements for IIoT: multiyear battery-powered wireless nodes, IP addressability, fieldbus tunneling, and cloud-based provisioning and management systems.

Short-range wireless mesh technologies, such as WirelessHART and ANSI/ISA-100.11a, as well as Wi-Fi, Bluetooth, and proprietary solutions, will make up the majority of the market over the next five years. Adoption of Low-Power Wide Area Network (LPWAN) technologies, such as LoRa, LTE-Cat-M1, and LTE-Cat-NB1 (NB-IoT), will increase even faster.

In November 2016, ON World collaborated with ISA for an extensive survey of wireless sensor networks on the IoT. We surveyed more than 180 industrial automation vendors, end users, systems integrators, and service providers. In this article, we compare the major findings from this survey with a previous survey from the last quarter of 2014.

Accelerating WSN adoption

Three in five respondents are currently pilot testing, deploying, or involved with commercial WSN deployments. Twenty-eight percent have deployed more than 1,000 nodes, compared with 14 percent in our previous survey.

Figure 1. Number of WSN devices installed (all locations)

Wireless mesh deployments increase

Eighty percent of respondents who are involved with process automation have deployed at least some wireless mesh WSN nodes, up from 62 percent in the previous survey.

Figure 2. Industrial wireless mesh adoption

In the current survey, there was a slight drop in the percentage of respondents using WirelessHART. Adoption of ISA100 Wireless increased by 36 percent over the past two years.

Figure 3. Industrial wireless mesh standards adoption

Preferred standards approach

WirelessHART continues to be the preferred standards approach going forward, but 25 percent prefer ISA100 Wireless or a hybrid strategy. The ability to support a star topology, faster response timing, and application tunneling make ISA100 suited for a growing number of markets, such as gas detection, steam trap monitoring, and connecting control systems to offshore oil platforms over multiple kilometers.

Figure 4. Preferred standards approach

Most likely applications

For those planning future WSN applications, 55 percent are targeting machine health and other types of asset monitoring, compared with 38 percent in the previous survey.

Figure 5. Most likely WSN applications within 18 months

Most important features

After data reliability and security, “no battery changes” and “low costs” are rated the most important WSN features. Both of these were rated as more important in our current survey, while standards and IP addressability were rated as less important.

*Percent important or most important

Figure 6. Most important WSN features

Satisfaction levels

Satisfaction levels increased overall. For respondents targeting process automation, however, satisfaction levels decreased slightly, with the biggest drop in satisfaction for battery life, scalability, and costs.

*Percent satisfied or most satisfied for respondents targeting process automation

Figure 7. Process automation – Satisfaction with current WSN systems

Strategic investments

Interest in Industrial IoT has accelerated over the past two years. Two-thirds view IoT platforms as “important” or “most important,” twice as many as in our previous survey.

*Percent important or most important

Figure 8. Strategic investment areas

Cloud platform adoption

Nearly half of all respondents, and 30 percent of end users, are using a cloud IoT platform such as AWS, Microsoft Azure, IBM Bluemix, or Google Cloud. Cloud IoT platforms, combined with ongoing advances in radio frequency transceiver power consumption and sensitivity, have enabled Low-Power Wide Area Networks, a growing segment of WSN solutions.

Figure 9. Industrial companies using a cloud IoT platform

Low-Power Wide Area Networks

LPWAN technologies, such as Sigfox, LoRa, RPMA, and LTE-Cat-M1 and LTE-Cat-NB1 (NB-IoT), are blindsiding existing wireless sensor network technologies with multikilometer network ranges, multiyear battery lifetimes, and cloud-integrated network stacks. In addition to reducing access costs significantly with very small transmission payloads and the ability to scale to thousands of nodes per gateway, an LPWAN can manage network complexity in the cloud or edge server rather than from LAN-based network controllers.

Although LPWANs will challenge existing industrial WSN technologies, especially for mobile assets and for applications in remote and difficult-to-reach areas like oil fields, pipelines, mines, ranches, and pump stations, the biggest impact will come from enabling new markets and services. A few examples are smart buttons, tracking and locating low- to mid-value assets, and remotely monitoring processes, equipment, and other assets for which it was not previously economically feasible, such as irrigation, crops, and animal health.

Our survey found accelerating development of LPWAN solutions with two in five respondents researching, developing, or currently offering LPWAN products and services.

Figure 10. LPWAN adoption status

Of the respondents developing LPWAN products, three-quarters are targeting applications that are not feasible with existing wireless IoT technologies.

Figure 11. LPWAN applications

One in three survey respondents believe LPWANs will displace 40 percent or more of existing WSN technologies within the next decade. They see LPWAN as having the biggest effect on electric power, oil and gas, and water and wastewater. Respondents had the most faith in LoRa: the largest number thought it would hold a significant share of the LPWAN market in 10 years, followed by LTE-M1 and NB-IoT.

Figure 12. Technologies projected to have significant LPWAN market share

The biggest IoT concerns are network costs, complexity, and security. WSN technologies, such as ISA100 Wireless and LPWANs, are enabling seamless system integration, end-to-end security, and cloud-based application development. Our ongoing research and surveys will bring further insights about the latest developments for industrial wireless sensing and IoT.

Virtual Reality in Design

In this video, David Leonard of Leonard Design talks about the advantages of virtual reality in the design process for architects and designers. Using the HTC Vive headset, the Unreal Engine developer platform, and Tilt Brush, Leonard Design has been able to drop the client directly into the building prior to its construction. Giving the client this unique perspective to look around the spaces has allowed the team to “work in a faster, more collaborative way,” which in turn can head off problems in the design before it’s too late.

 

Six Steps for Preparing Your Team for Lean Robotics

Alex Owen-Hill from Robotiq shares his thoughts on why you should prepare your team for Lean Robotics and how to do it.


Your Lean Robotics implementation can only be successful if your team is completely on board. By properly preparing your team, you can make sure that everyone is happy about the idea of using robots, use each employee to their full potential, and even save yourself wasted time.

But, how do you prepare your team?

Should you just send round an impersonal email memo saying “We’re introducing robots to the shop floor”? No. That’s not the best way to approach it.

In this article, we give a six-step process for properly preparing your team.

Four Reasons You Need to Prepare Your Team

Why should you spend extra time and effort preparing the team? Why not just dive straight into your robot deployment?

Here are four great reasons:

  1. It reduces bad feelings — Robots carry a lot of baggage in some parts of the world. Your team may be uncertain about robots. They may worry that they’ll lose their jobs, either because the robot will take a job over or because they lack robotics skills and could be supplanted by a more skilled worker. Addressing these fears early on is important for making sure the robots are accepted by everyone.
  2. It maximizes the team’s potential — Lean Robotics defines several “common wastes” that can impede the robot cell deployment. One of the most important of these is “Underutilizing Human Potential.” Team members can only contribute fully to the robotics deployment if they are involved early.
  3. It improves your use of the robot — The team members are the most knowledgeable people about your manufacturing process. This makes them vital, because a successful Lean Robotics implementation relies on three categories of skill: robotics skills, project management skills, and knowledge of your manufacturing process.
  4. It is more efficient and economical — A Lean Robotics implementation involves developing skills within your team. Some of these will have to be introduced through training. However, it is likely that members of your team have existing skills that can contribute to the deployment. Utilizing those skills will help get the robots off the ground more quickly and will reduce the initial training investment.

Some people, myself included, like to over-prepare before bringing an idea to the team. If you’re the same, it may be challenging to bring the team on board before you have prepared every detail. However, the sooner they get involved the better your Lean Robotics deployment will be.

Six Steps to Prepare Your Team for Lean Robotics

Here is a process that you can use to prepare your team. The order of steps might be a little different in your case, depending on how well developed your idea is.

1. Clarify Your Plan

The first step of Lean Robotics is to define your project scope. This means identifying which manual cell you want to automate, choosing appropriate metrics to measure progress and setting a timeline for the deployment.

If you already have a specific application in mind, it is a good idea to do some preparatory work to develop the idea enough to be able to present it to the rest of the team.

However, if you have not chosen a specific application, you might want to get the team involved earlier to help come up with potential applications for the robot.

2. Identify Stakeholders

Only some employees will be involved with the robot deployment. Decide which members of your team should be brought on board and at what stage.

Begin by thinking about all the people within your business who are, or could be, affected by Lean Robotics. Some stakeholders will be directly involved in the robot deployment (e.g., the lead automation engineer, line workers, programmers, etc.). These people should be brought on board as soon as possible.

Other stakeholders will simply be affected by the robots’ presence in the workshop (e.g. a cleaner who may have to wipe down the robot surfaces, workers from other parts of the manufacturing line, etc). These people can be updated about the project at a later stage.

3. Identify the Skills Needed

Lean Robotics relies on a core set of skills in your team. However, you do not have to have all these skills to get started — you can introduce them through training when they are needed.

Each phase of deployment has its own set of key skills. For example, the Design phase requires manual task mapping skills, robot cell design skills and several more. The Integrate phase requires programming, mechanical installation and industrial communication skills. You can find a complete list of skills by downloading our “Team Skills Spreadsheet” by clicking on the link at the end of this post.

Part of the Lean Robotics process is to identify which team members can provide each of these skills.

4. Meet With the Team

As soon as you have loosely defined your project scope, it is a good idea to set a meeting with your team to discuss it. Don’t fall into the trap of trying to over-prepare everything before you bring it to them. It’s usually more efficient to develop your deployment plan collaboratively with everyone.

During your meeting, discuss the skills you identified in the previous step. Find out which members of your team already have some skills and experience in these areas. From this information, you can assign the roles in the next step. Also, gather ideas and input from the team about your chosen robot application.

5. Assign Roles

There are ten key roles in Lean Robotics. To ensure that the deployment goes as smoothly as possible, each of these roles should be assigned to a member of your team. This doesn’t mean that you need ten people on the deployment team; two or three roles can be fulfilled by the same person. However, try not to overload one person with many roles, as this can easily lead to an inefficient deployment.

Roles in Lean Robotics include: manufacturing manager, project leader, project coordinator, engineer, installer, programmer, operation and maintenance worker, process adviser, procurement person, and continuous improvement person. You can find out more about these roles at leanrobotics.org.
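As a rough illustration of that bookkeeping, here is a hypothetical sketch (the role names come from the list above; the team and the three-role limit are invented for illustration) that checks whether an assignment covers every role without overloading anyone.

```python
LEAN_ROBOTICS_ROLES = {
    "manufacturing manager", "project leader", "project coordinator",
    "engineer", "installer", "programmer",
    "operation and maintenance worker", "process adviser",
    "procurement person", "continuous improvement person",
}

def check_assignments(assignments, max_roles_per_person=3):
    """assignments maps a person's name to the set of roles they hold."""
    covered = set().union(*assignments.values())
    missing = LEAN_ROBOTICS_ROLES - covered
    overloaded = {who for who, roles in assignments.items()
                  if len(roles) > max_roles_per_person}
    return missing, overloaded

team = {
    "Ana": {"project leader", "project coordinator"},
    "Ben": {"engineer", "installer", "programmer"},
    "Caro": {"manufacturing manager", "process adviser",
             "procurement person", "continuous improvement person"},
}
missing, overloaded = check_assignments(team)
print("Unassigned roles:", missing)      # operation and maintenance worker
print("Overloaded people:", overloaded)  # Caro holds four roles
```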

6. Set Next Steps

At the end of your meeting with the team, decide on everyone’s “next action” and set a date for the follow up meeting. Now that you have started the robot deployment process, you want to keep up momentum. This is easier when everyone knows what they have to do next and has a defined date when they will report back on their progress.

Where to Find Out More About Preparing Your Team for Lean Robotics

Lean Robotics is a specific application of Lean concepts to make sure your robot cell deployment is a success.

You can learn more about Lean Robotics by going to leanrobotics.org.

Make sure to download a copy of the Team Skills Spreadsheet when you are there!

>> Read more by Alex Owen-Hill, Robotiq.com, October 2, 2017

 

3D-Printed ‘Bionic Skin’ Could Give Robots the Sense of Touch

A one-of-a-kind 3D printer built at the University of Minnesota can print touch sensors directly on a model hand. (Credit: Shuang-Zhuang Guo and Michael McAlpine, University of Minnesota, “3D Printed Stretchable Tactile Sensors,” Advanced Materials. 2017. Copyright Wiley-VCH Verlag GmbH & Co. KGaA.)

Engineering researchers at the University of Minnesota have developed a revolutionary process for 3D printing stretchable electronic sensory devices that could give robots the ability to feel their environment. The discovery is also a major step forward in printing electronics on real human skin.

The research will be published in the next issue of Advanced Materials and is currently online.

“This stretchable electronic fabric we developed has many practical uses,” said Michael McAlpine, a University of Minnesota mechanical engineering associate professor and lead researcher on the study. “Putting this type of ‘bionic skin’ on surgical robots would give surgeons the ability to actually feel during minimally invasive surgeries, which would make surgery easier instead of just using cameras like they do now. These sensors could also make it easier for other robots to walk and interact with their environment.”

McAlpine, who gained international acclaim in 2013 for integrating electronics and novel 3D-printed nanomaterials to create a “bionic ear,” says this new discovery could also be used to print electronics on real human skin. This ultimate wearable technology could eventually be used for health monitoring or by soldiers in the field to detect dangerous chemicals or explosives.

“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” McAlpine said. “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

McAlpine and his team made the unique sensing fabric with a one-of-a-kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device—a base layer of silicone, top and bottom electrodes made of a conducting ink, a coil-shaped pressure sensor, and a sacrificial layer that holds the top layer in place while it sets. The supporting sacrificial layer is later washed away in the final manufacturing process.

Surprisingly, all of the layers of “inks” used in the flexible sensors can set at room temperature. Conventional 3D printing using liquid plastic is too hot and too rigid to use on the skin. These flexible 3D printed sensors can stretch up to three times their original size.

“This is a completely new way to approach 3D printing of electronics,” McAlpine said. “We have a multifunctional printer that can print several layers to make these flexible sensory devices. This could take us into so many directions from health monitoring to energy harvesting to chemical sensing.”

Researchers say the best part of the discovery is that the manufacturing is built into the process.

“With most research, you discover something and then it needs to be scaled up. Sometimes it could be years before it is ready for use,” McAlpine said. “This time, the manufacturing is built right into the process so it is ready to go now.”

The researchers say the next step is to move toward semiconductor inks and printing on a real body.

“The possibilities for the future are endless,” McAlpine said.

In addition to McAlpine, the research team includes University of Minnesota Department of Mechanical Engineering graduate students Shuang-Zhuang Guo, Kaiyan Qiu, Fanben Meng, and Sung Hyun Park.

The research was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (Award No. 1DP2EB020537). The researchers used facilities at the University of Minnesota Characterization Facility and Polymer Characterization Facility for testing.

To read the full research paper entitled “3D Printed Stretchable Tactile Sensors,” visit the Advanced Materials website.

>> University of Minnesota News Release, 5/10/17

Warehouse Robots Smarten Up

Self-driving cars have certainly reaped the rewards from the advances made in sensors, processing power, and artificial intelligence, but they aren’t the sole beneficiaries. One needn’t look any further than the autonomous collaborative robots (cobots) currently invading the warehouses and stores where they will work in close quarters with people.

1. Aethon’s latest TUG is festooned with sensors and can fit under carts to tow them to desired locations.

Aethon’s TUG (Fig. 1) is the latest in a line of autonomous robots designed for environments like warehouses. It has more sensors on it than older platforms, which is indicative of the falling price of sensors, improvements in sensor integration, and the use of artificial intelligence to process the additional information. This gives such robots a better model of the surrounding environment. It means the robots operate more safely, since they can better recognize people and objects. It also means they can perform their chores more effectively, because they often need to interact with these objects.

Aethon’s TUG series provides a range of capabilities, up to versions that can haul as much as 1,200 lb. These typically find homes in industrial and manufacturing environments. Smaller TUGs have been set up in hospitals to deliver medicine, meals, and materials. TUGs move throughout a hospital, calling elevators and opening doors via network connections. As with warehouse robots, they operate around the clock, doing jobs that allow others to do theirs.

2. The RL350 robotic lifter from Vecna Robotics rises under a cart and lifts 350 kg off the ground. It then delivers the contents to the desired location, dropping down and leaving the cart.

Vecna Robotics has lightweight and heavy-duty robots, too. Its RL350 robotic lifter can hoist 350 kg, or more than 770 lb (Fig. 2). It can also adjust the payload height to work with other pieces of material-handling equipment, like conveyor belts. It can be used in applications such as fulfillment operations or lineside supply. The robot has a top speed of 2 m/s and can run for eight hours before seeking out a charging station. It is ANSI/ITSDF B56.5 compliant and ISO Class D ready. Like many of the other robots in this class, it uses LIDAR and ultrasonic sensors.

 

3. Fetch Robotics’ VirtualConveyor targets warehouse applications such as DHL’s distribution center.

Fetch Robotics has a range of products, from robotic arms for research to its datasurvey inventory robot. It also offers the VirtualConveyor (Fig. 3), which comes in a number of different sizes to address different weight requirements. The Freight500 can move up to 500 kg, while the Freight1500 handles up to 1500 kg. They run up to nine hours on a charge and incorporate LIDAR and 3D cameras front and rear. As with most warehouse robots, Fetch Robotics delivers them with its FetchCore Management software.

4. I Am Robotics put a robotic arm on its I Am Swift platform. The suction grip is designed for grabbing lightweight objects that would be typical in many warehouse pick-and-place environments.

I Am Robotics puts a robotic arm on its I Am Swift platform (Fig. 4). It can run for more than 10 hours, picking and placing small objects with its suction grip. The typical boxes and bottles found on store shelves are fair game. The robot is designed to work with the I Am SwiftLink software.

The I Am Flash 3D scanner is used to teach the system about objects that will be manipulated. It records the barcode, object dimensions, and weight after an object is placed in its scanning area. The I Am Swift robot can then determine what objects it sees on a shelf or in its basket and move them accordingly.

5. Omnidirectional wheels on Stanley Robotics’ robot platform make it easy to move in tight quarters.

Stanley Robotics’ warehouse platform uses omnidirectional wheels to move in any direction from a standing start. This simplifies path planning and allows it to work in tight quarters.

6. Stan from Stanley Robotics handles valet parking by literally picking up a car and putting it in a parking spot.

The latest offering from Stanley Robotics was not able to fit on the show floor, though. Its Stan valet parking system (Fig. 6) turns any car into a self-driving car, at least for parking. It rolls under a typical car and then raises itself, lifting the car. Many warehouse robots lift carts instead of cars using an identical technique—the same idea, applied to a much larger object.

7. Fellow Robots’ NAVii will function within a store, offering information to customers while performing inventory scanning.

Fellow Robots’ NAVii (Fig. 7) is designed to operate within a store, providing customers with information while performing inventory scanning. It can map out a store on its own and then track stock using machine-learning techniques. NAVii will notify store managers when stock is low or when there are price discrepancies.

NAVii can also interact with store customers using its display panels. On top of that, store employees can take advantage of this mobile interface to interact with the store’s computer network. As with most autonomous robots, it seeks out a charger when its battery runs low.

>> Read more by William Wong, New Equipment Digest, October 05, 2017