Tweaking existing computer technologies can enhance surgeons' visualization of persistent wounds, according to a study presented Oct. 30 at the 2014 American College of Surgeons Clinical Congress in San Francisco.
The research team employed a new 3-D sensor, computer algorithms running on a tablet computer, and machine learning, a type of artificial intelligence, for the first time allowing surgeons to precisely measure the area, depth and tissue type of chronic wounds with a mobile device, according to a news release. The research team said this high-tech imaging technique is more accurate than standard methods.
Wound assessment often relies on crude visual observation, according to the study's senior investigator, Peter C. W. Kim, MD, PhD, FACS, of Children's National Health System, Washington, D.C. He is associate surgeon-in-chief at Children's National and vice president of the health system's Sheikh Zayed Institute for Pediatric Surgical Innovation.
Chronic, nonhealing wounds, which can result from burns, diabetes, circulation problems or excess pressure caused by immobility, affect 6.5 million Americans and cost the U.S. $25 billion annually in medical and surgical care, according to a study published in 2009 in the journal Wound Repair and Regeneration.
"Despite this significant clinical burden, there is a general lack of objective evidence to guide wound management," Kim said in the release.
Visual estimation of wound dimensions can vary among examiners by 30% or 40%, according to Kim. He added that eyeballing a wound cannot determine depth, an important consideration because some chronic wounds extend to the bone.
Traditionally, a wound care specialist manually delineates the wound borders using a transparent film and divides the wound bed into different areas by tissue type. This two-step process is called segmentation. Although several automated wound segmentation applications exist, Kim said none operates solely on a mobile device while also calculating the wound's physical 3-D measurements.
Kim and his colleagues created an interactive automated wound assessment system with a mobile application for easy access to wound images. They implemented computer algorithms on an Apple iPad using OpenCV, an open-source computer vision library. Computer vision is the way computers perceive the world; that view is then converted into digital information, such as patterns and images. One of the researchers' algorithms, based on an existing graph-cut algorithm, identifies wound borders from the evaluator's finger strokes on a touch screen.
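The article does not detail the researchers' graph-cut implementation, but the core idea of seeded, interactive segmentation can be illustrated with a much simpler stand-in: grow a wound mask outward from the pixels touched by the user's strokes, keeping neighbors whose intensity stays close to the stroke region. The function name, tolerance value, and synthetic image below are illustrative assumptions, not the study's method.

```python
import numpy as np
from collections import deque

def segment_from_strokes(image, seed_points, tol=30.0):
    """Grow a wound mask outward from user-stroke seed pixels.

    A simplified, illustrative stand-in for graph-cut segmentation:
    a pixel joins the mask while its intensity is within `tol` of the
    mean intensity under the seed strokes.
    """
    h, w = image.shape
    seed_mean = np.mean([image[r, c] for r, c in seed_points])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque(seed_points)
    for r, c in seed_points:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - seed_mean) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

# Synthetic example: a dark 10x10 "wound" patch on a bright background.
img = np.full((20, 20), 200.0)
img[5:15, 5:15] = 60.0
wound = segment_from_strokes(img, [(10, 10), (8, 8)])
print(wound.sum())  # 100 pixels labeled as wound
```

A true graph cut instead minimizes a global energy over the whole image (OpenCV exposes this as `cv2.grabCut`), which makes it far more robust to noise and uneven lighting than this local region-growing sketch.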
Using machine learning, the investigators programmed a second algorithm to automatically classify tissues as one of three types: granulation, eschar and slough. They then tested the new system's performance, measuring speed and consistency, that is, how much the results varied compared with the traditional manual tracing method. On an iPad, five wound experts analyzed 60 digital images of different wounds using both the automated method and manual tracing for each image.
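The three tissue types differ visibly in color (granulation is typically beefy red, slough yellowish, eschar dark brown or black), so a minimal sketch of such a classifier can assign each pixel to the nearest reference color. The centroid values and sample pixels below are hypothetical; the study's actual classifier and training data are not described in the article.

```python
import numpy as np

# Hypothetical reference colors (RGB) for the three tissue types the
# study classifies; a real system would learn these from labeled images.
CENTROIDS = {
    "granulation": np.array([180, 40, 40]),    # beefy red
    "slough":      np.array([200, 190, 110]),  # yellowish
    "eschar":      np.array([40, 30, 25]),     # dark brown/black
}

def classify_pixels(pixels):
    """Label each RGB pixel with the nearest tissue-type centroid."""
    names = list(CENTROIDS)
    centers = np.stack([CENTROIDS[n] for n in names]).astype(float)
    # Euclidean distance from every pixel to every centroid.
    dists = np.linalg.norm(pixels[:, None, :].astype(float) - centers[None], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]

sample = np.array([[170, 50, 45], [35, 32, 20], [195, 185, 120]])
print(classify_pixels(sample))  # ['granulation', 'eschar', 'slough']
```

This nearest-centroid scheme is the simplest possible supervised classifier; learned models (e.g. SVMs or decision trees over richer texture features) would be needed for clinically reliable labeling.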
Study results showed the experts delineated the wound borders and classified the tissue type 33% faster using the automated system. For each task, the wound experts averaged 31.6 seconds per image compared with 47.2 seconds per image using manual tracing. Researchers also found the automated results were highly consistent with the standard manual method, as defined by a high overlap score above 90%.
"Our method of wound assessment saves time, which saves money, and is more accurate, which translates to improved patient care," Kim said in the release.
Children's National Medical Center holds patents on the algorithms for wound segmentation and has launched a company, Fairfax, VA-based eKare Inc., to further develop the mobile wound assessment method.