Interventional procedures, minimal-invasive therapies, and image-guided therapies (IGT) have become pivotal components of modern medical practice, offering less invasive alternatives to traditional surgical approaches. Central to these advancements are medical interventional devices (MIDs) such as aspiration and biopsy needles, which enable clinicians to perform a wide range of clinical procedures with precision and minimal patient discomfort [1, 2]. These devices find application in diverse medical contexts, from diagnostic biopsies to therapeutic interventions [1, 3, 4].
In certain clinical scenarios, these MIDs are used without external imaging guidance, particularly when the target site is close to the skin, the needle path is devoid of sensitive structures, and precise placement is less critical [1, 2]. One such scenario is the injection of a pharmaceutical solution intended to diffuse into adjacent tissue, which obviates the need for high placement accuracy.
In contrast, there are cases where precise placement is critical: the MID must accurately traverse its planned path, confirmation of having reached the target site is needed, and procedure efficacy must be verified. In such cases, external imaging guidance becomes indispensable [1,2,3].
Ultrasound (US), planar X-ray, Computed Tomography (CT), and, in specific cases, Magnetic Resonance Imaging (MRI) serve as the primary imaging modalities for guiding these procedures [7,8,9]. While effective, these modalities can introduce artifacts that obscure needle location and hinder accurate tissue targeting [5]. These challenges are particularly evident in procedures requiring precise needle placement and confirmation of target site attainment. Nevertheless, imaging modalities, especially MRI, offer the crucial advantage of visualizing not only the needle trajectory but also its spatial relationship with surrounding anatomical structures; the value of such imaging for precision in robotic and image-guided surgery is exemplified by Dallan et al. [6].
While US is widely applicable as the system of choice for most simpler applications and carries no radiation exposure, X-ray and CT expose both clinicians and patients to harmful radiation, and MRI requires specialized, compatible, and costly MIDs to avoid magnetic attraction forces and to ensure high image quality. Nonetheless, all these imaging techniques exhibit limitations, often producing artifacts that obscure the MID's path, diameter, and precise location within the anatomy. This becomes particularly pertinent when the MID tip remains partially or entirely outside the imaging field of view [7, 9, 10]. To address these issues, there is growing interest in adding sensors to enhance guidance during these procedures.
To enhance the guidance and sensing capabilities of existing MIDs, adding force or other contact sensors directly to the needle tip has been proposed, but this approach has many drawbacks, including reduced device functionality, complex and therefore expensive construction, typically the need for an active cable connection, and complicated operation and setup [5]. A more favorable approach involves proximally attached clip-on systems that measure vibroacoustic signals generated by the interaction between the moving MID and the tissue it penetrates [11,12,13,14]. These waves propagate through the MID shaft to its proximal end, where they can be captured as audio signals for subsequent processing and analysis [15]. Accurate determination and classification of these signals for in vivo applications demands robust testing and dedicated in vitro and ex-vivo experimental setups with specialized tissue phantoms that replicate real tissue interactions [11,12,13,14].
Our previous works [12, 16] have demonstrated the potential of vibroacoustic signals for classifying MID-related penetration events during catheter and needle procedures.
In this paper, we introduce a dedicated framework combining signal processing and deep learning for tissue classification during needle procedures. Leveraging these audio signals with deep learning techniques could enhance real-time guidance during medical interventions.
The primary objective of this paper is to investigate the feasibility and effectiveness of utilizing vibroacoustic signals captured through audio sensors for tissue classification during needle procedures in an experimental setup using dedicated phantoms. Our aim was to demonstrate that these signals, combined with advanced signal processing techniques and deep learning algorithms, can reliably differentiate between artificial and ex-vivo tissue types. If accurate tissue classification can be achieved in this experimental setup, the technology may be feasible for enhancing the guidance of, and the clinician's confidence in, needle procedures.
This paper presents a novel approach to tissue classification based on vibroacoustic signals captured from needle–tissue interaction, with the following key contributions:
Novel Signal Analysis: Introduce a dedicated denoising algorithm inspired by ECG signal processing to enhance the quality of vibroacoustic signals captured during needle–tissue interaction.
Signal Representation Exploration: Explore and compare different signal representations, including Continuous Wavelet Transform (CWT) and Mel Scale, to identify the most effective representation for tissue classification.
Deep Learning Architectures: Propose and evaluate two deep learning architectures, namely Needle Net and ResNet, for accurate tissue classification using the processed audio signals.
Validation: Experimental results demonstrate that the proposed strategy leads to promising classification results for different tissue types, paving the way for future research to enhance guidance during needle procedures.
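To make the signal-representation contribution concrete, the following is a minimal NumPy sketch of the two representations named above: a hand-rolled Morlet-wavelet CWT (scalogram) and an HTK-style Mel spectrogram. Function names, parameters, and the synthetic "needle event" signal are illustrative assumptions, not the authors' implementation or data.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w=6.0):
    """Morlet-wavelet CWT magnitude (illustrative sketch, not the paper's code)."""
    n = len(x)
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w / (2 * np.pi * f)                  # wavelet scale for centre frequency f
        tw = np.arange(-4 * s, 4 * s, 1 / fs)    # truncated wavelet support
        wavelet = np.exp(1j * 2 * np.pi * f * tw) * np.exp(-tw**2 / (2 * s**2))
        wavelet /= np.sqrt(s)                    # rough energy normalisation
        out[i] = np.convolve(x, np.conj(wavelet), mode="same")
    return np.abs(out)

def mel_filterbank(n_mels, n_fft, fs, fmin=0.0, fmax=None):
    """Triangular Mel filterbank using the standard HTK Mel formula."""
    fmax = fmax or fs / 2
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mpts = np.linspace(mel(fmin), mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * imel(mpts) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)   # rising edge
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)   # falling edge
    return fb

def mel_spectrogram(x, fs, n_fft=512, hop=128, n_mels=40):
    """Framed power spectrogram projected onto the Mel filterbank."""
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    return mel_filterbank(n_mels, n_fft, fs) @ spec.T      # (n_mels, n_frames)

# Synthetic stand-in for a needle-tissue event: a short tone burst over noise.
fs = 8000
t = np.arange(fs) / fs
sig = 0.05 * np.random.randn(fs)
sig[3000:3200] += np.sin(2 * np.pi * 900 * t[:200])        # transient near 0.375 s

scalogram = morlet_cwt(sig, fs, freqs=np.linspace(100, 2000, 32))
melspec = mel_spectrogram(sig, fs)
print(scalogram.shape, melspec.shape)
```

Either 2-D representation can then be fed as an image-like input to the classification networks (Needle Net or ResNet); the CWT preserves fine temporal localisation of transients, while the Mel scale compresses the frequency axis perceptually.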