
The Robots Have Eyes - Machine Design


Machine vision as a category is growing rapidly; the 3D machine vision market is expected to double in size during the next six years, and today, the technology is a vital component of most modern automation solutions.

There are many factors contributing to the increased adoption of this technology in a manufacturing context. First, demand for automation solutions in general has increased as manufacturers continue to grapple with labor shortages. Second, the cost has decreased dramatically—when cameras, sensors, robotics and processing power are inexpensive, they can be applied to more solutions.

Technology performance also is increasing, giving machine vision systems the ability to process large amounts of information in a fraction of a second. Finally, advanced artificial intelligence and machine learning algorithms are making the data collected from machine vision even more valuable, and manufacturers are realizing the power of those insights.

But what exactly is machine vision, and how can it be incorporated into automation solutions to produce better outcomes?

Giving Robots Eyes and a Brain

A vision system typically includes a series of disparate parts, including cameras, lenses, lighting sources, robotic components, processing computers and application-specific software.

The cameras are of course the “eyes” of this system—there are many types of cameras in use for machine vision, and each can be matched to different application needs. There are also differences in how cameras are configured within an automation solution.

Static cameras are placed in a fixed position, offer a bird’s-eye view of the process unfolding below and can be used in scenarios where speed is imperative. There are also dynamic cameras, which are positioned on the end of the robot arm and much closer to the process, thereby delivering higher accuracy.

Computing power is also an important aspect of the vision system—essentially, the “brain” that helps the eyes conduct their work. The computing resources required for machine vision systems that leverage machine learning algorithms differ from those needed for traditional machine vision applications. Many companies offer software libraries for implementing vision capabilities.

Some capabilities are designed for application users; others are intended for use by software programmers. Regardless, it’s software that gives machine vision the advanced capabilities that have a dramatic impact for manufacturers, with programs to control tasks and the ability to feed valuable insights back from the line.

Machine Vision Applications

The concept of replacing basic human capabilities with a vision-guided solution is gaining steam, as vision for assembly lines can be used in an increasing range of applications and processes.

A typical “electronics in a box” assembly process provides one example. This is a product category that spans many shapes and sizes and is relevant across a number of industries (including but not limited to medical equipment, power tools and home appliances). The assembly process for these hardware components consists of placing the electronics, such as a circuit board, into the housing, or box.

Many of the steps involved in electronics-in-a-box assembly can benefit from the use of machine vision because of the precision required.

Inspection. Machine vision can be used to inspect all top and bottom cover components as they enter the assembly line, looking for defects such as cracks in the metal—if these components are in poor condition, it can lead to quality issues for the assembled unit. With machine vision, cracks are detected quickly, and if they are larger than a specified size, the component is automatically rejected. In addition to cracks, color variations can also be inspected; a color camera can identify discoloration and reject faulty units.
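
To make that size-based rejection rule concrete, here is a minimal sketch of how it might be expressed with the open-source OpenCV library. The grayscale threshold, the maximum allowed crack area and the image path are illustrative assumptions rather than values from an actual inspection cell.

```python
import cv2

# Assumed limit: reject any dark, crack-like contour larger than this area (in pixels).
MAX_CRACK_AREA_PX = 50.0

def inspect_cover(gray_image):
    """Return True if the cover passes inspection, False if it should be rejected."""
    # Emphasize dark, thin defects against the brighter metal surface.
    _, binary = cv2.threshold(gray_image, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > MAX_CRACK_AREA_PX:
            return False  # Crack exceeds the specified size: reject the component.
    return True

if __name__ == "__main__":
    image = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)  # placeholder image path
    print("PASS" if inspect_cover(image) else "REJECT")
```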

Product tracking. Required in the automotive and healthcare industries, product tracking is used throughout the entire manufacturing process. Usually, the first vision task for circuit boards during the assembly process is to read the product label, serial number or barcode. The product label identifies the specific unit so it can then be tracked throughout the entire assembly process.

The label location can be pre-defined, and the vision system reads the label and communicates its findings to other factory systems. Applying labels is a common requirement on assembly lines and another perfect use case for machine vision, which can detect any obstacles on the surface and ensure perfect placement.
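
As an illustration of the label-reading step, the sketch below crops a pre-defined label region and decodes a barcode using OpenCV together with the open-source pyzbar library. The region coordinates, the image path and the way the result is reported onward are assumptions made for the example.

```python
import cv2
from pyzbar.pyzbar import decode  # open-source barcode/QR reader

# Hypothetical region of interest where the label is expected: (x, y, width, height).
LABEL_ROI = (100, 50, 300, 120)

def read_product_label(frame):
    """Crop the pre-defined label region and return the first decoded code, or None."""
    x, y, w, h = LABEL_ROI
    roi = frame[y:y + h, x:x + w]
    results = decode(roi)
    return results[0].data.decode("utf-8") if results else None

if __name__ == "__main__":
    frame = cv2.imread("board.png")  # placeholder image of the incoming circuit board
    serial = read_product_label(frame)
    if serial:
        print(f"Tracking unit {serial}")  # stand-in for reporting to traceability systems
    else:
        print("Label not readable: flag for manual review")
```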

Installation. Adding components to a circuit board can require vision capabilities both for quality control and for positioning. As an example, for a varistor that needs to be added to a circuit board, machine vision would first inspect the varistor to verify that both legs are straight and not damaged. Vision can also inspect the varistor’s identification to confirm it’s the correct part and check its orientation for proper assembly on the circuit board.

Finally, vision capabilities can identify the correct location on the circuit board where the varistor will be installed and soldered in place. During the soldering process, vision is used to monitor conditions such as the wetting area, lead lengths, solder balls, contact angles and solder fill.
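
One common way a vision system can find such a location is template matching against a reference image of the target area; the sketch below shows the idea with OpenCV. The match threshold, template and file names are placeholders, not values from a real line.

```python
import cv2

MATCH_THRESHOLD = 0.8  # assumed confidence cut-off for accepting a match

def locate_install_site(board_gray, pad_template_gray):
    """Return (x, y) pixel coordinates of the install site, or None if it isn't found."""
    result = cv2.matchTemplate(board_gray, pad_template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < MATCH_THRESHOLD:
        return None  # Site not confidently located: halt and flag the board.
    h, w = pad_template_gray.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)  # center of the matched region

if __name__ == "__main__":
    board = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)           # placeholder paths
    template = cv2.imread("varistor_pad.png", cv2.IMREAD_GRAYSCALE)
    print(locate_install_site(board, template))
```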

Machine vision also aids in similar processes that require consistency and precision, like thermal paste dispensing and metal shield insertion. Dispensing sealant is another example; a vision system can reference the expected path and compare it to the achieved result.
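
The path comparison itself can be as simple as measuring the distance between corresponding points on the programmed path and the detected bead. The sketch below assumes both paths have already been extracted as matched point lists; the tolerance value is illustrative.

```python
import numpy as np

MAX_DEVIATION_MM = 0.5  # assumed tolerance between programmed and dispensed paths

def bead_path_ok(expected_xy_mm, achieved_xy_mm):
    """Both inputs are (N, 2) arrays of path points sampled at matching positions."""
    deviation = np.linalg.norm(np.asarray(expected_xy_mm) - np.asarray(achieved_xy_mm), axis=1)
    return bool(deviation.max() <= MAX_DEVIATION_MM)

if __name__ == "__main__":
    expected = np.array([[0.0, 0.0], [10.0, 0.1], [20.0, 0.0]])
    achieved = np.array([[0.0, 0.2], [10.0, 0.3], [20.0, 0.1]])
    print("OK" if bead_path_ok(expected, achieved) else "REWORK")
```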

Final assembly. Fastening methods, such as screwdriving, are also frequently used to assemble products, and this process can be made more consistent with a vision system. Machine vision can be used to identify the screw hole and to navigate the robot with the screwdriver accurately to the specified location. Once complete, a vision system can verify that the screw has been fastened properly. If there is more than one screw, the vision system can navigate to the additional holes and repeat the fastening process as many times as needed.
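
As a rough illustration of that flow, the sketch below detects candidate screw holes with a Hough circle transform and then loops over them. The detection parameters are untuned placeholders, and the robot object is a hypothetical stand-in for whatever motion and torque-control interface the actual cell exposes.

```python
import cv2

def find_screw_holes(gray):
    """Detect candidate screw holes as circles; the parameters are illustrative, not tuned."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=100, param2=30, minRadius=5, maxRadius=20)
    if circles is None:
        return []
    return [(int(x), int(y)) for x, y, _radius in circles[0]]

def drive_screws(gray, robot):
    """Fasten a screw at every detected hole.

    `robot` is a hypothetical interface with move_to(), fasten(), verify() and
    flag_rework() methods; a real cell would use the vendor's motion and
    torque-control APIs instead.
    """
    for hole_xy in find_screw_holes(gray):
        robot.move_to(hole_xy)         # navigate the screwdriver to the hole
        robot.fasten()                 # run the fastening cycle
        if not robot.verify(hole_xy):  # check that the screw seated properly
            robot.flag_rework(hole_xy)
```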

Machine vision can also be useful in component feeding, guiding detailed part assembly, part sorting and closed-loop activities such as DIMM card insertion using force feedback control (any process where human input is not needed and the system is designed to achieve known outcomes).
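
For the closed-loop case, the logic reduces to a monitor-and-act loop. The sketch below shows the idea for a card-insertion step; the force limit, the travel distance and the sensor and actuator callables are hypothetical.

```python
# Minimal sketch of a closed-loop insertion step driven by force feedback.
MAX_FORCE_N = 40.0       # assumed safe insertion-force limit
SEATED_TRAVEL_MM = 3.0   # assumed travel at which the card counts as fully seated

def insert_card(read_insertion_force, advance_press, step_mm=0.1):
    """`read_insertion_force` and `advance_press` are hypothetical stand-ins for the
    force sensor and actuator calls a real insertion cell would expose."""
    travel = 0.0
    while travel < SEATED_TRAVEL_MM:
        if read_insertion_force() > MAX_FORCE_N:
            return False        # Resistance too high: back off and flag for inspection.
        advance_press(step_mm)  # advance a small increment and re-check the force
        travel += step_mm
    return True                 # Seated within the expected force envelope.

if __name__ == "__main__":
    # Trivial dry run with dummy sensor/actuator callables.
    print(insert_card(read_insertion_force=lambda: 12.0, advance_press=lambda mm: None))
```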

The ROI of Machine Vision

To achieve next-generation automation and fully realize the benefits of the technology, machine vision on an assembly line isn’t just “nice to have”—it’s vital. This technology enables diverse capabilities for inspection, robot navigation and quality control, and the use cases for machine vision are continuing to expand in scope and complexity as the technology continues to evolve.

That translates to tremendous potential for manufacturers, as vision solutions for automated lines can improve production capacity, production stability and production yield. As both the hardware and the algorithms involved with machine vision continue to improve, so does the ROI for manufacturers who invest in these solutions.

Assaf Eden is director of product management at Bright Machines, a full-stack technology company offering software- and AI-driven solutions to automate product assembly, manufacturing operations and production execution on the factory floor. He's based in Tel Aviv.

Throughout history, farmers have looked for more efficient and effective ways to improve their craft. Starting with the advent of the tractor over 100 years ago, productivity on the farm has only grown to be more technologically advanced, allowing farmers to complete more tasks with less available resources.

Today, farmers continue to have the responsibility of feeding a growing world population while overcoming challenges like the decline of arable farmland, a shrinking labor pool and unpredictable weather conditions. Given this, the agriculture industry has found it critical to maximize the automation of tasks that require extreme precision and consistency so farmers can spend more time managing other important jobs related to their operation.

This level of resilience is a force to be reckoned with, and every industry from healthcare to retail to construction and beyond can glean insights on how sensor technology, computer vision and robotics are driving automation on the farm to increase efficiency, profitability and sustainability for generations to come.

Sensor Technology

Generations of the same family often farm the land for years and, in the process, record each season’s successes and failures. This accumulated data and documentation make the farmer the most important sensor on the farm, and the task of record-keeping has only become more streamlined with the availability of advanced technology.

With increasingly complex decisions needing to be made, it’s helpful for a farmer to have highly automated machines that can understand historical data and pair it with real-time data to determine what’s needed in the moment. Sensor technology adds to the farmer’s intuition and experience by enabling machines moving throughout the field to understand what’s happening and compare it to years past for the best possible outcome.

For example, sensors on a planter allow the machine to calculate where to place each seed in the soil. Sensors ensure each seed is placed one-by-one in the ground, perfectly spaced at the right depth and the right distance from one another, while recording if something goes wrong. At the same time, the sensors inform the planter when to automatically adjust the pressure of each individual robot in charge of placing seeds when transitioning between different soil types in the field, such as from sandy to heavier soil.
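
A simplified sketch of the kind of rule such a controller might apply is shown below. The soil classes, downforce values, spacing tolerance and the sensor and actuator methods are assumptions made for illustration, not John Deere specifications.

```python
# Illustrative soil-class-to-downforce mapping; all values are assumptions.
TARGET_DOWNFORCE_PSI = {"sand": 18.0, "loam": 25.0, "clay": 32.0}
SPACING_TOLERANCE_IN = 0.5  # assumed acceptable seed-spacing error

def update_downforce(row_units, read_soil_type):
    """Adjust each row unit's downforce to the sensed soil type and log spacing faults."""
    target = TARGET_DOWNFORCE_PSI.get(read_soil_type(), 25.0)
    for unit in row_units:
        unit.set_downforce(target)  # hypothetical actuator call on each seed-placing unit
        if abs(unit.measured_spacing_error()) > SPACING_TOLERANCE_IN:
            unit.log_fault("seed spacing out of tolerance")  # record that something went wrong
```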

From sensors to robotics, the agriculture industry is employing automation in new ways, and the lessons have applicability for other industrial automation markets.

Computer Vision for Observation

Observation is critical for any worker in any industry. In agriculture, observation can help a farmer make important, timely decisions—but the human eye can only see so far.

With computer vision, farmers gather data from places where they wouldn’t typically have the ability to go, such as inside a combine, the massive “factory on wheels” that harvests grain when crops are ready. During harvest, grain must be separated from the rest of the plant and a farmer’s goal is to make that happen with as little damage as possible to the kernel. This used to happen manually, with a farmer routinely reviewing a handful of harvested kernel samples to know when and how to adjust settings.

Now, sensors and computer vision technology in the grain tank can register how much grain is coming off the plants, whether the grains are broken and whether any material other than grain is getting through, all in real time. If conditions aren’t optimal, the combine uses machine learning to understand what’s wrong and automatically adjusts its settings to achieve more favorable results.
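
Reduced to its essentials, that feedback loop reads the quality metrics and nudges the settings when they drift out of bounds, as in the sketch below. The thresholds, setting names and step sizes are illustrative only.

```python
# Assumed acceptable limits for the real-time grain-quality readings.
BROKEN_GRAIN_LIMIT = 0.02      # fraction of damaged kernels
FOREIGN_MATERIAL_LIMIT = 0.01  # fraction of material other than grain

def adjust_combine(readings, settings):
    """readings: real-time quality metrics; settings: current machine settings (both dicts)."""
    if readings["broken_grain"] > BROKEN_GRAIN_LIMIT:
        settings["rotor_speed_rpm"] -= 10  # gentler threshing reduces kernel damage
    if readings["foreign_material"] > FOREIGN_MATERIAL_LIMIT:
        settings["fan_speed_rpm"] += 20    # more air helps clean out non-grain material
    return settings

if __name__ == "__main__":
    print(adjust_combine({"broken_grain": 0.03, "foreign_material": 0.005},
                         {"rotor_speed_rpm": 900, "fan_speed_rpm": 1100}))
```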

Robotics for Precision

Farming is a year-round job that requires precision at all stages of the growing cycle. Because these operations can be physically taxing and highly repetitive over the course of many hours, robotics are used to take unnecessary burdens off the farmer while still supporting productivity and quality goals.

With robotic technologies, farmers can automate tasks and execute at the plant level with great precision. A great deal of robotics is at play as machines plant seeds, apply nutrients and harvest crops. This is especially helpful when there are thousands of acres of farmable land, which can be home to hundreds of millions of plants.

For example, after planting and before harvest, a farmer will use a sprayer, another advanced machine, to apply crop protection and nutrients to their plants. Weather conditions like temperature, humidity and wind speed all impact a farmer’s ability to spray where and what’s needed. Advanced robotics built into the machines enable a farmer to control the amount of nutrients sprayed, droplet size and even distance from the plants by automatically adjusting the volume, pressure and height of the boom, which is attached to the sprayer and can measure up to 120 feet wide with nearly 100 different spray nozzles.
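
A minimal sketch of that kind of adjustment logic appears below; the wind-speed thresholds, pressure and height setpoints and the boom interface are assumptions for illustration rather than actual machine parameters.

```python
# `boom` stands in for the sprayer's actual control interface; all values are assumed.
def adjust_sprayer(wind_speed_mph, boom):
    if wind_speed_mph > 15:
        boom.pause()               # too windy to place spray accurately
        return
    if wind_speed_mph > 8:
        boom.set_pressure_psi(30)  # lower pressure gives larger, drift-resistant droplets
        boom.set_height_in(20)     # run the boom closer to the canopy
    else:
        boom.set_pressure_psi(45)
        boom.set_height_in(30)
    boom.resume()
```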

Advanced Technologies for Tomorrow

Farmers have always been at the forefront of solving problems with hard work and innovation. When paired with advanced technology like sensors, computer vision and robotics, farmers have the tools necessary to maintain their business and grow quality food for the world.

Advanced technology’s contribution to agriculture is especially impactful today as, according to the United Nations, the world’s population is expected to grow to nearly 10 billion by 2050, increasing global food demand by 50%.

The sky is the limit in terms of how farmers will continue to embrace technology in the future—and it’s something that every industry can learn from.

Jesse Haecker is business manager for Planting, Spraying and Nutrient Application at John Deere.
