Maintaining final product quality with in-situ process monitoring and feedback is essential when cutting metals with flexible, reconfigurable robotics. However, this is challenging when the inspection data streams gathered are of poor quality. In this work, we address this problem through intelligent design of the data processing algorithms.
Specifically, this project was concerned with utilising optical scanning methods to densely measure robotic machining errors accumulated in the part and using these measurements to compensate robot programs prior to final cuts. This is motivated by the need to maintain tight tolerances of under 100 microns. However, a key problem when optically scanning machined surfaces for error measurement is noise and localised data gaps caused by surface reflectance.
Robotic machining error compensation in this way can therefore be thought of as a two-part problem: acquiring quality scan data, and processing these large datasets optimally to compensate a machining trajectory. This project contributes to the latter by presenting a dimensional deviation evaluation method that efficiently computes compensated robotic machining trajectories from aligned optically scanned point clouds without processing redundant data, as is typical in related works. Results validate this method, showing a 96% reduction in dimensional error measurement time compared with conventional methods, and conclusions are drawn on directions for further development.
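To illustrate the general idea, here is a minimal sketch of deviation-based trajectory compensation: for each commanded tool position, the nearest scanned surface point is found, its deviation from the nominal surface is measured, and the command is offset to cancel that error. All names and parameters here are hypothetical, and this is not the paper's actual algorithm; it simply shows the principle of evaluating deviations only where the toolpath needs them (rather than over the whole cloud) and of tolerating localised data gaps.

```python
import math

def compensate_trajectory(trajectory, scan_points, nominal_points, gap_tol=2.0):
    """Offset each commanded tool position to cancel the measured local
    deviation between scanned and nominal surfaces (illustrative sketch).

    trajectory, scan_points, nominal_points: lists of (x, y, z) tuples,
    with scan_points[i] the scanned counterpart of nominal_points[i].
    Positions whose nearest scan point is farther than gap_tol are left
    uncompensated, modelling a localised data gap in the scan.
    """
    compensated = []
    for tx, ty, tz in trajectory:
        # Nearest scanned point (brute force here; a spatial index such as
        # a k-d tree would be used at realistic point-cloud sizes)
        best_i, best_d = None, float("inf")
        for i, p in enumerate(scan_points):
            d = math.dist((tx, ty, tz), p)
            if d < best_d:
                best_i, best_d = i, d
        if best_i is None or best_d > gap_tol:
            compensated.append((tx, ty, tz))  # data gap: leave as-is
            continue
        # Deviation of the scanned point from its nominal counterpart
        sx, sy, sz = scan_points[best_i]
        nx, ny, nz = nominal_points[best_i]
        dx, dy, dz = sx - nx, sy - ny, sz - nz
        # Mirror the error: shift the command opposite the deviation
        compensated.append((tx - dx, ty - dy, tz - dz))
    return compensated
```

For example, if the scan shows the surface sitting 0.05 mm proud of nominal at a trajectory point, the compensated command is shifted 0.05 mm deeper so the final cut removes the excess.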
The solution developed is a novel contribution to the current state of the art in optical scan data processing for on-line robotic machining error measurement and compensation, allowing the best to be made of imperfect measurements for in-situ condition monitoring purposes. The work made progress towards realising the cost and health-and-safety benefits of robotic machining techniques in large-scale manufacturing industries.
This work was published in Measurement: Journal of the International Measurement Confederation in 2018. For more information please contact us or see the full article: