
Communications of the ACM

Review articles

Seeing Beneath the Skin with Computational Photography


Figure. Diffuse optical tomography simulation. Credit: University of Strasbourg

From X-rays to magnetic resonance imaging (MRI), methods for scanning the body have transformed how we understand and care for our health. These non-invasive techniques allow clinicians to observe and diagnose conditions while minimizing the risks to patients posed by invasive medical procedures. Recently, methods using red-green-blue (RGB) and near-infrared (NIR) cameras, other photosensors (including more specialized and sophisticated tomography systems),6,8,40 and radio waves and Wi-Fi signals have enabled a range of new non-invasive and non-contact health monitoring techniques.1,8,12,21,22,26,28,32,33,40


Key Insights


Human tissue interacts with visible and infra-red light predominantly through scattering. This has two immediate consequences. First, because the scattering is predominantly forward scattering, photons can still penetrate deep into human tissue, offering the potential to use light to image beneath the skin. Second, diffuse scattering, by definition, destroys the spatial coherence and localization of light patterns, making conventional imaging challenging. However, the statistical and structural regularity present even in diffusely scattered light means that computational demultiplexing algorithms, combined with suitable imaging systems, can potentially provide sufficient spatial resolution in deep tissue to be clinically useful.

One advantage of optical and longer-wavelength E/M imaging methods is that, unlike higher-energy X-rays, they use low-energy, non-ionizing wavelengths, such as visible light. While visible and near infra-red light is scattered by human tissue, absorption is minimal. Consequently, most of these systems operate at wavelengths from 400nm to 1,100nm and can illuminate and image beneath the stratum corneum, reaching imaging depths of a few millimeters. Thus, they can be used for measuring blood volume, blood composition, and imaging small structures. Computational imaging systems have been shown to capture vital signs, such as pulse rate and respiration,28,33 visualize three-dimensional (3D) structures inside the body, such as the veins,25 and even reconstruct low-resolution 3D images as deep as seven centimeters40 (see the accompanying figure).

Figure. Optical coherence tomography offers a cross-sectional and longitudinal view of a coronary artery.

Furthermore, unlike X-ray, computed tomography (CT), and MRI, optical imaging systems can often be inexpensive and portable. These properties make convenient, frequent, and opportunistic measurements possible. They also make devices such as cameras attractive as tools in low-resource settings. The SARS-CoV-2 (COVID-19) pandemic has acutely highlighted the potential benefits of scalable health monitoring. The desire to protect healthcare workers and patients and to reduce the need for travel has increased the demand for tools that enable remote care, including telehealth. Imagine if software could analyze the video feed during a patient visit and automatically extract vitals, monitor a baby for jaundice, or provide a high-resolution scan of the inside of a retinopathy patient's eye.

In this article, we present seven emerging state-of-the-art computational imaging modalities that leverage low-energy E/M. We explain the principle of operation of each modality, its advantages, and its limitations. We also present five important clinical applications of these imaging systems and discuss the advantages of using optical systems for clinical imaging and the challenges in widely deploying them.


Imaging Technologies

Imaging is not just limited to RGB cameras and covers a wide range of frequencies (wavelengths) and a large array of sensors and systems. Different wavelengths of operation, and different modalities of information may be better suited for different applications. The accompanying table provides a summary of different optical imaging and low energy E/M modalities.

Table. Summary of imaging methods.

RGB cameras, such as those on a cellphone or a webcam, are the most ubiquitous type of imager and, therefore, very attractive for measuring health parameters. In 2020 alone, an estimated 1.35 billion mobile phones were sold worldwide, a majority of which contain multiple RGB cameras. These cameras are typically relatively low-cost and readily available.

Fortunately, hemoglobin absorption varies within the visible optical frequency range, and conventional RGB imagers can capture these changes over time.19,34 The volumetric changes of hemoglobin carried by oxygenated blood with each cardiac cycle lead to small changes in the amount of light absorbed inside the tissue and reflected from the body. These absorption changes manifest themselves as minute intensity variations: the larger the blood volume at a given time, the more light is absorbed inside the tissue, and the slightly lower the corresponding intensity of the skin pixels in an image.
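The intensity-variation principle above can be sketched in a few lines: average the green channel over skin pixels frame by frame, then keep only frequencies in the plausible pulse band. The function name and the simple mean-and-filter pipeline below are illustrative, not the method of any specific cited system.

```python
import numpy as np

def pulse_signal_from_frames(frames, skin_mask, fps=30.0):
    """Toy camera-PPG sketch. `frames` is a (T, H, W, 3) uint8 video,
    `skin_mask` a (H, W) boolean mask of skin pixels. Returns a
    band-passed pulse trace. Illustrative, not a cited algorithm."""
    # Spatially average the green channel over skin pixels per frame.
    green = frames[:, :, :, 1].astype(float)
    trace = green[:, skin_mask].mean(axis=1)
    # Remove the slow illumination level (DC component).
    trace = trace - trace.mean()
    # Crude band-pass via FFT masking: keep roughly 0.7-4 Hz,
    # the plausible range of human pulse frequencies.
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    spectrum[(freqs < 0.7) | (freqs > 4.0)] = 0.0
    return np.fft.irfft(spectrum, n=len(trace))
```

On a synthetic clip whose green channel pulses sinusoidally, the recovered trace's dominant frequency matches the simulated pulse rate.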

Unlike most wearable sensors that are "point" measurement devices (that is, they measure blood flow at a specific point on the body), cameras can measure information spatially. While this has not been exploited fully, there is the potential for mapping peripheral blood flow, for example, to capture bilateral differences or measure pulse transit time without the need for multiple sensors.5

Figure. One advantage of optical and longer wavelength E/M imaging methods is that rather than using X-rays, they use low-energy, non-ionizing wavelengths, such as the visible light.

Because RGB cameras use visible light, they often do not need a dedicated light source to illuminate the skin; the ambient light in the environment is sufficient. This means that small, portable devices, such as smartphones, are all that is required to measure these physiological signals. However, this also presents challenges. Ambient light can vary in intensity, direction, and hue in natural settings, and camera-based measurement is sensitive to these variations. Therefore, RGB cameras may not be the most effective in situations where the ambient light is uncontrolled, such as driver monitoring, or in low-light settings, such as sleep monitoring. Moreover, visible light scatters significantly more than longer wavelengths, such as near-infrared. This limits imaging to the outer layers of the skin (3mm–4mm).

Near-infrared (NIR) cameras. There are physiological sensing and imaging applications for which relying on visible light is clearly a limitation. Sleep monitoring, whether for infants (for example, baby monitor) or adults (for example, sleep studies) and automotive driver vital signs tracking are examples in which ambient visible light may either be weak or uncontrolled, significantly reducing the reliability of vital signs estimates from conventional RGB cameras.

NIR cameras sense light whose wavelengths are in the range of 700nm–1,000nm. Human eyes are not sensitive to this wavelength range, so imaging systems can be made active—that is, with an added NIR illumination source—without the source disturbing or intruding on the people in the environment. Most of us have encountered these sensors in security cameras (or baby monitors) that operate at night. Typically, these systems include additional NIR light sources that are invisible to the human eye but illuminate the scene so it can be captured by the sensor.

These properties can make NIR cameras advantageous in uncontrolled21,22 and low-light settings.4,32 Furthermore, longer wavelengths of light can penetrate deeper into tissue because they scatter less than the shorter visible wavelengths. Moreover, since NIR cameras do not record visible light, most changes in ambient light do not affect their measurements. However, hemoglobin absorption is weaker in the infra-red range, so the signal-to-noise ratio for peripheral blood flow measurements is generally lower.

NIR cameras often require a dedicated active light source. Active light systems can be combined with additional hardware components to further enhance the quality of the recorded images. For example, optical bandpass filters can be placed on the camera to only allow a narrow range of wavelengths of interest to reach the sensor. This can help limit the amount of remaining ambient light which may affect the measurements.21,22,32

Requiring a dedicated light source makes low-power and small-form-factor imaging systems slightly more challenging to create. Moreover, additional care must be taken to ensure the light source is not harmful. Ensuring eye safety is especially important at NIR wavelengths because NIR light is not visible to the human eye, and there is no pupillary reflex—which in bright visible light contracts the pupils to limit the amount of light entering the eye.

Thermal cameras. Thermal wavebands typically cover the wavelength range from 2μm to about 14μm. At these wavelengths, most objects, even at ambient temperature, act as emissive sources—radiating heat in the form of thermal emission within this waveband. That makes these wavelengths unique in two ways. First, as most natural subjects, including humans, are emissive sources, there is no need for any separate illumination—either ambient or controlled. Second, since this emission is directly related to body temperature, it allows human body temperature to be measured without contact. One challenge in the use of these sensors is that as you move out of the NIR band and into the thermal band, silicon-based sensors can no longer be used—sensor technologies for the thermal band are either based on microbolometers (blackbody absorption) or on novel materials, such as indium gallium arsenide (InGaAs), that are much more expensive than silicon. Consequently, thermal sensors are typically lower resolution, lower SNR, and much more expensive than RGB or NIR sensors.
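As a back-of-envelope check on why the 2μm–14μm band needs no external illumination, Wien's displacement law gives the wavelength at which a blackbody at a given temperature emits most strongly; skin near 305K peaks around 9.5μm, squarely inside the thermal band. A small sketch:

```python
def wien_peak_um(temperature_k):
    """Wien's displacement law: the wavelength (in microns) at which
    a blackbody at `temperature_k` kelvin emits most strongly."""
    WIEN_B = 2897.77  # Wien's displacement constant, in um*K
    return WIEN_B / temperature_k

# Skin at ~305 K peaks near 9.5 um, well inside the 2-14 um thermal
# waveband, so thermal cameras image body emission directly.
```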

Some of the first cardio-pulmonary non-contact imaging systems were created with thermal cameras.9 These can be used to capture physiological information, including peripheral hemodynamics, body temperature, and respiration. Hotter objects emit more infra-red light than cooler objects, which allows thermal cameras to record the surface temperature of the body and of other objects in the scene. As humans exhale, the expelled air is usually warmer than the surrounding environment because internal body temperature is higher. Thus, changes in temperature around the mouth and nostrils can be used to capture breathing rate. Cardiac sensing is made possible via the thermal signals emitted from major superficial vessels as the blood volume pulse varies.9

Radio waves. Wi-Fi and radio waves can also be used to measure cardiac and respiration signals.1 These devices operate at much longer wavelengths in the electromagnetic spectrum than RGB, NIR, and thermal cameras. While sensors for these frequencies might not often be thought of as "imagers," as spatial visualization is not typical, they can be used to reconstruct both spatial and temporal signals, including tracking body motions, respiration, and pulse signals—even of multiple people in a room. Because these signals do not depend on any form of visual-spectrum illumination and do not require cooling (as is the case with thermal cameras), they are well suited for sleep monitoring.1 Because of their much longer wavelength, instead of measuring the changes in absorption due to hemoglobin, these wireless devices measure changes in motion or vibration, which are mechanical side effects of cardiac and pulmonary activity. Radio waves reflected off a subject encode these subtle motions; for example, radio waves are reflected differently during an inhale and an exhale. This reflection difference manifests as a time delay in the radio wave returning to the sensor and can be used to measure many temporal health parameters.
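The time-delay idea can be illustrated with a toy continuous-wave model: chest-wall displacement d(t) changes the round-trip path by 2·d(t), shifting the reflected carrier's phase by 4πd(t)/λ. The sketch below (a deliberate simplification; real radar systems are considerably more involved) recovers breaths per minute from that phase:

```python
import numpy as np

def breathing_rate_from_phase(displacement_m, wavelength_m, fps):
    """Toy continuous-wave radar sketch. `displacement_m` is the
    chest-wall position over time (meters, sampled at `fps`),
    `wavelength_m` the carrier wavelength. Returns breaths/minute.
    Illustrative model, not any cited system's pipeline."""
    # Round-trip path changes by 2*d(t) => phase shift 4*pi*d/lambda.
    phase = 4.0 * np.pi * displacement_m / wavelength_m
    # Complex baseband signal as an ideal receiver would demodulate it.
    iq = np.exp(1j * phase)
    recovered = np.unwrap(np.angle(iq))
    recovered -= recovered.mean()
    # Dominant frequency of the recovered phase, excluding DC.
    spectrum = np.abs(np.fft.rfft(recovered))
    spectrum[0] = 0.0
    freqs = np.fft.rfftfreq(len(recovered), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]
```

A 5mm sinusoidal chest motion at 0.25 Hz, observed with a 5cm carrier, comes back as 15 breaths per minute.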

The advantage of imaging methods using radio waves is that these signals scatter less than light and can operate at larger distances from the person (up to 8m). They can even penetrate walls and monitor a person from another room. This may be beneficial in some situations, especially in extreme settings, such as search-and-rescue and military applications that detect human presence behind a wall.1

However, because these methods rely on vibrations, which are small motions of the body, the presence of any other body motion will corrupt the measurements, and subjects must sit still. So, while they might be suited to sleep tracking, these sensors might work less well for other applications, such as cardiac measurements during exercise, for example, on a treadmill or a stationary bike. Another disadvantage of wireless devices is that they may not be able to distinguish between different people, or pets, living in the same home, as recognition of individuals from Wi-Fi or radio waves is much more difficult than from visual images.

Optical coherence tomography (OCT) is a contact-free imaging technique that uses low-coherent light to capture three-dimensional images of scattering tissue surfaces, such as the surface of the retina, as well as for endoscopic imaging inside the body.6,12,13

OCT is based on the principle of low-coherence interference. With a light source that has a very short coherence length, interference fringes are observed only when the sample beam and the reference beam have the same path length. This principle can be used to localize in depth and image only a thin layer. By changing the path length of the reference beam, one can scan in depth within the specimen. By scanning the source and detector across the surface in two dimensions (in addition to the 1D z-scan), one can effectively obtain three-dimensional images with lateral and axial resolution of a few microns.6 One advantage of OCT systems is that their depth and lateral resolution are independent of one another: lateral resolution depends on the imaging aperture, while depth resolution does not. This independence makes it possible to vary the width of the probing beam without affecting depth resolution, enabling extremely narrow endoscopic systems (for example, 1mm in diameter). On the other hand, a disadvantage of OCT is its very limited penetration depth in scattering media (1mm–2mm), such as muscle tissue.6,7 This limits the use of OCT to surface or near-surface imaging.
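The depth gating described above can be illustrated numerically: fringe contrast falls off with the mismatch between reference and sample path lengths, so only a layer within roughly one coherence length of the matched depth contributes fringes. The Gaussian envelope below is an illustrative assumption, not the exact coherence function of any particular source:

```python
import numpy as np

def fringe_visibility(path_diff_m, coherence_length_m):
    """Low-coherence interferometry sketch: fringe contrast decays
    (here modeled as a Gaussian, an assumption for illustration)
    with the reference/sample path-length mismatch. Only depths
    matched to within ~one coherence length produce fringes, which
    is what gives OCT its axial resolution."""
    return np.exp(-(path_diff_m / coherence_length_m) ** 2)

# With a 10-micron coherence length, a layer 50 um away from the
# matched path contributes essentially no fringe contrast.
```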

While OCT has already been widely adopted for clinical diagnostics, its high cost is prohibitive for some clinics, especially in low resource settings. Recently, Kim et al.14 developed a low cost, portable OCT system. They were able to reduce the cost of typical OCT systems to under $7,200. Kim et al. reduced the cost of a hand-held OCT system by using an inexpensive light source and accounted for the light fluctuation by periodically capturing the background regions as a baseline. The resolution and the power throughput of the portable OCT system were comparable to the state-of-the-art systems, making it a promising alternative for more accessible medical imaging.

Diffuse optical tomography (DOT) is a non-invasive and inexpensive approach that can image through scattering media, such as tissue.3,8,29,30,39 DOT uses pairs of light sources and photodetectors to capture only the indirectly scattered light and to eliminate the dominant direct surface reflections and scattered light.18 DOT is able to achieve up to 15mm penetration depth, but its spatial resolution is on the order of 1cm.29

A significant advantage of DOT systems is their capability to reconstruct 3D images from inside the tissue. In contrast, previous optical tomography methods were restricted to 2D images due to the challenges associated with reconstructing along the depth dimension. However, reconstructing 3D images with DOT requires richer measurement information and sophisticated reconstruction algorithms that are computationally expensive.29
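A common way to make such a reconstruction tractable is to linearize the problem and regularize it. The sketch below is a generic Tikhonov-regularized least-squares solver, not the algorithm of any cited DOT system; in practice the sensitivity (Jacobian) matrix A would come from a diffusion model of light transport, whereas here it is simply assumed given:

```python
import numpy as np

def tikhonov_reconstruct(A, y, lam=1e-2):
    """DOT-style linearized inversion sketch. Measurements y from the
    source-detector pairs are modeled as y = A @ x, where x holds
    per-voxel absorption changes. The problem is ill-posed, so we
    solve the regularized normal equations for
        min_x ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

With a well-conditioned forward matrix and small regularization, the solver recovers the simulated absorption map almost exactly; real DOT matrices are far worse conditioned, which is why resolution and compute cost trade off so sharply.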

A drawback of DOT systems is their large form factor due to the large number of source-detector pairs required. Few source-detector pairs lead to poor resolution; however, many source-detector pairs lead to prohibitive computational complexity in the reconstruction algorithms.

Liu et al.18 demonstrated that a single camera and a single illumination projector can replace the existing bulky DOT systems. In this work, the researchers were able to achieve not only a small form-factor DOT system but also much higher resolution, while maintaining the ability to image deep inside a scattering medium. They showed that their portable DOT can detect accurate boundaries and relative depths of structures up to a depth of 8mm inside a scattering medium, such as milk.

Photoacoustic tomography (PAT) attempts to combine the spatial resolution benefits of optical imaging with the minimal scattering benefits offered by acoustic detection.35,36,40 Acoustic waves scatter less in biological tissues than light, giving PAT the ability to image deep inside the tissue, while maintaining high sensitivity to optical absorption. Any material which has sufficiently high optical absorption can be imaged with PAT. By selecting the right wavelengths, a very wide range of materials can be detected and imaged with PAT.

As an object absorbs light that is amplitude modulated at acoustic frequencies, the absorbed optical energy is converted to heat, which raises the temperature. The temperature rise leads to a thermoelastic expansion within the object and generates an acoustic wave. PAT can achieve penetration depths of up to 10mm in vivo and up to 70mm in phantom samples, while maintaining spatial resolution of 600μm–700μm. However, the depth and spatial resolution of PAT systems depend on the type of transducer used.40
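The conversion from absorbed light to pressure can be summarized by the standard photoacoustic source term p0 = Γ·μa·F (Grüneisen parameter times absorption coefficient times fluence). The numbers in the sketch below are typical order-of-magnitude values for illustration, not figures from the cited work:

```python
def initial_pa_pressure(gruneisen, mu_a_per_cm, fluence_j_per_cm2):
    """Photoacoustic source-term sketch: under thermal and stress
    confinement, the initial pressure rise is p0 = Gamma * mu_a * F,
    with Gamma the dimensionless Grueneisen parameter, mu_a the
    optical absorption coefficient (1/cm), and F the local fluence
    (J/cm^2). Since 1 J/cm^3 = 1e6 Pa, the product is in MPa."""
    return gruneisen * mu_a_per_cm * fluence_j_per_cm2  # MPa

# Illustrative numbers: blood-like absorption mu_a ~ 4 /cm,
# Gamma ~ 0.2, fluence ~ 0.02 J/cm^2 gives p0 ~ 0.016 MPa (16 kPa),
# a pressure amplitude detectable by ultrasound transducers.
```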

There are several advantages of PAT over other imaging methods. PAT is a non-invasive optical technique. It can penetrate deeper than traditional optical imaging methods and achieves higher spatial resolution than DOT. Moreover, PAT is cheaper and faster than comparable imaging methods, such as MRI. Most current PAT equipment uses sophisticated lasers that are expensive and not portable. To address this issue, Hariri et al.11 built a portable and affordable PAT system. To reduce the cost, they used more affordable LED arrays instead of lasers, together with a conventional ultrasound transducer. The proposed portable PAT system was able to image a pencil lead through 3.2cm of chicken breast, a medium with scattering properties similar to those of the human body.


Emerging Health Applications

Here, we provide examples of different health applications of the presented imaging technologies, ranging from imaging at the skin surface to imaging deeper within tissue.

Cardiac cycle measurements. The cardiac cycle is defined as the cycle from the beginning of one heartbeat to the beginning of the next. It consists of two periods: diastole, in which the heart relaxes and fills with blood, and systole, in which the heart contracts and pumps blood out into the arteries. This cycle is near-periodic, with small but significant beat-to-beat variations. The near-periodicity of the cardiac cycle, and the predictable, repetitive changes to blood flow it induces throughout the human body, provide an opportunity to estimate many different health-related measurements from the changes in blood flow.

Peripheral hemodynamics. Photoplethysmography (PPG) and ballistocardiography (BCG) are non-invasive techniques for measuring cardiac activity, and both can be measured optically. PPG measures the changes in the intensity of light reflected from the body due to blood volume changes at the periphery of the skin. BCG measures changes in the mechanical motion of the body instead of the intensity variations of the skin. PPG and BCG signals can be measured remotely by analyzing a video of the body,2,28 and can be captured with RGB, NIR, and thermal cameras.21,22,32 One advantage of imaging methods is that they do not require contact with the body, which enables unobtrusive monitoring. PPG can only be measured from the skin, and all conventional methods require that the skin is not covered. BCG can be measured through clothing or hair, as these typically move with the body, although the signal-to-noise ratio is likely to be lower than when measuring directly from rigid parts of the body. The signal intensity of both PPG and BCG varies around the body, as does the waveform morphology. Thus, measurement from the hand will provide both different information and different noise than measurement from the face. Radio waves and Wi-Fi signals can only measure BCG information,1 but many of the processing methods used to recover the signal from videos can also be applied to these signals (for example, blind source separation28 and filtering16).

PPG signals can be used to reliably measure heart rate (HR),28 breathing rate (BR),28 heart rate variability (HRV),20 and blood oxygenation (SpO2).31 Pulse rate is the number of cardiac pulses per minute and is measured as the dominant frequency of the photoplethysmogram (PPG) or ballistocardiogram (BCG) signal;28 it is often equivalent to heart rate but can be subtly different under certain circumstances. Pulse rate variability (PRV), the analogue of HRV, is a measure of the variance in the time between consecutive cardiac pulses. Measuring PRV usually requires very clean signals and is very challenging in the presence of noise, which may introduce erroneous peaks in the signal.20 BR is the number of breaths a person takes per minute, and it can be computed from the PPG or BCG waveform as the low-frequency amplitude changes.28 SpO2 is the percentage of hemoglobin molecules saturated with oxygen. Measuring SpO2 requires two wavelengths of light with sufficiently different absorption by oxygenated and deoxygenated hemoglobin to distinguish between the intensity variations each causes. Preliminary evidence has shown that RGB camera bands are sufficient for measuring SpO2.31
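The two-wavelength SpO2 principle is classically implemented as a "ratio of ratios": for each wavelength, divide the pulsatile (AC) component by the steady (DC) level, then map the ratio of those two quantities to saturation with an empirically calibrated fit. The linear coefficients in the sketch below are textbook placeholder values, not a validated calibration:

```python
import numpy as np

def spo2_ratio_of_ratios(red, ir, a=110.0, b=25.0):
    """Pulse-oximetry "ratio of ratios" sketch. `red` and `ir` are
    intensity traces at two wavelengths. For each, the pulsatile (AC)
    swing is divided by the mean (DC) level; the ratio R of these is
    mapped to SpO2 with a linear fit a - b*R. The coefficients here
    are illustrative placeholders; real devices are calibrated."""
    ac_red = red.max() - red.min()
    ac_ir = ir.max() - ir.min()
    r = (ac_red / red.mean()) / (ac_ir / ir.mean())
    return a - b * r
```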


A significant advantage of diffuse optical tomography systems is they have the capability to reconstruct 3D images captured inside tissue.


Blood pressure. More recently, researchers have started to explore the potential of measuring correlates of blood pressure using cameras.5 Simultaneously recording PPG signals from different body locations enables the measurement of the time it takes blood to travel from one body location to another, called the pulse transit time (PTT). PTT is related to blood pressure—the higher the blood pressure, the faster the blood travels, and the shorter the PTT.5 Pulse transit time can either be calculated by computing the time lag between PPG peaks at different parts of the body (for example, hands and face) or the lag between BCG and PPG pulse peaks. The challenge with PTT measurements is that different body locations have physiologically different PPG waveform shapes and the exact relationship between the PTT and blood pressure is not well characterized.5 The shape of the PPG waveform can also be used to assess the health of the arteries and can non-invasively diagnose arterial diseases and arterial stiffness. More recently, several techniques have been developed that use machine learning techniques to convert the PPG waveform into the blood pressure waveform—typically with PPG waveforms recorded from finger-based transmissive pulse oximeters that provide higher signal to noise ratio measurements compared to non-contact alternatives.
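The PTT computation can be sketched as finding the lag that best aligns two PPG traces; the cross-correlation approach below is one simple, illustrative option (real pipelines typically align individual systolic peaks and must contend with the site-dependent waveform shapes noted above):

```python
import numpy as np

def pulse_transit_time(ppg_a, ppg_b, fps):
    """PTT sketch: the lag (seconds) that best aligns two equal-length
    PPG traces from different body sites, found by cross-correlation.
    A positive result means `ppg_b` lags `ppg_a`. Illustrative only."""
    a = ppg_a - np.mean(ppg_a)
    b = ppg_b - np.mean(ppg_b)
    corr = np.correlate(b, a, mode="full")
    # Index len(a)-1 of the full correlation corresponds to zero lag.
    lag = np.argmax(corr) - (len(a) - 1)
    return lag / fps
```

For two identical pulse shapes offset by 0.1s at 50 frames per second, the function returns 0.1.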

Blood perfusion is a measure of how the blood flows through different regions of the body. Perfusion is clinically useful for several reasons, for example, for wound healing assessments, for diabetic patients who may have peripheral vascular disease, and for monitoring perfusion and re-perfusion during and immediately after surgery.15 Currently, perfusion can only be measured with very expensive specialized devices, such as laser Doppler imaging.15 Contact devices only measure PPG waveforms at a single point of contact and cannot be used to measure perfusion through a larger region of the body. On the other hand, cameras placed at a distance from the skin can simultaneously record PPG signals from a larger area on the body and have the potential to enable new measurement possibilities in the home.15

Glucose monitoring is crucial for diabetic patients. There are very accurate approaches for measuring glucose levels; however, they are invasive and painful, and frequent piercing of the skin carries a real risk of infection. For these reasons, non-invasive optical methods of glucose monitoring are very attractive. Several optical imaging approaches have already been proposed, including fluorescence spectroscopy and NIR spectroscopy. While these approaches show promising results, the systems are expensive and bulky, making them infeasible for daily glucose monitoring in the patient's home.

There is early clinical evidence that camera-based systems in the visible and NIR wavelength range may be valuable in glucose monitoring. The advantages of cameras are that they are low-cost and portable, and could offer a convenient means of glucose monitoring. Diabetes may be monitored with optical approaches by taking photos of the retina to screen for diabetic retinopathy and photos of the foot to analyze diabetic foot ulcers. A small study found that patients with diabetes may have higher pulse wave velocity.27 Machine learning algorithms may be used to extract features from PPG waveforms obtained with cameras and to estimate blood glucose by combining them with information about the patient's age and BMI. Early research suggests that even smartphone cameras placed directly on a finger may be able to measure glucose changes in the blood. NIR wavelengths have been found to have variable absorption by glucose, and changes in the intensity of the skin captured in NIR light have been correlated with glucose concentration. However, these studies had very few participants, and the results have not been clinically validated. Moreover, the absorption of glucose in visible and NIR light is orders of magnitude lower than that of other blood components, such as water and hemoglobin. Therefore, it is very challenging to measure glucose concentration using cameras. Changes in blood oxygenation, water content, and even body temperature significantly corrupt camera-based measurements of glucose.38 While there are promising preliminary results of monitoring glucose with cameras, these challenges are yet to be overcome, and research to improve these systems is ongoing.


There is early clinical evidence that camera-based systems in the visible and near-infrared wavelength range may be valuable in glucose monitoring.


Localizing veins. Numerous medical procedures require accurate localization of vessels, including veins, to insert a needle for a blood test or to administer intravenous (IV) medication. However, veins are often difficult to localize because they are small and can lie deep inside the body. Inaccurate localization leads to multiple needle-insertion attempts, which cause pain, bleeding outside the blood vessels, and even infections.25 There are several commercial devices that non-invasively image the veins using NIR cameras. These devices are portable, non-contact, simple to use, have low power consumption, operate hands-free, and can tolerate patient motion. Vein-finding devices use NIR LEDs and a high-resolution NIR camera. The lens of the NIR camera is fitted with an optical bandpass filter that blocks ambient light outside the NIR wavelength range of interest to improve image quality. NIR light penetrates deeper into tissue than visible light and is better suited to visualizing the veins inside the body. Moreover, the deoxyhemoglobin present in veins absorbs more NIR light than the surrounding tissue, making the veins appear darker and more visible in the NIR image. The resulting high-resolution image of the veins can then be projected onto the skin with a visible-light projector to facilitate localization.25 These vein-viewing devices have successfully reduced the number of needle-insertion attempts and the associated complications. Visualizing the veins can also be used to assess the flush of IV fluids and to determine whether a catheter is flowing properly.25 In addition to NIR cameras, DOT systems show promise in imaging blood vessels and cardiovascular structures.18 DOT may not only be useful for localizing veins and other blood vessels but may also aid visualization of the structure of the vasculature to diagnose cardiovascular diseases.

Ocular imaging. The eye is the only place in the body with direct non-invasive access to blood vessels. There have been several indications that disease states including hypertension, AIDS, rheumatoid arthritis, and even Parkinson's and Alzheimer's disease can potentially be diagnosed by imaging the eye. With this wide variety of potential health-related applications, there are several existing imaging technologies that have been adapted to imaging the eye.

Traditional ophthalmology with OCT. The single most common use of optical coherence tomography is to image the retina of the human eye. OCT allows clinical practitioners and ophthalmologists to obtain micron resolution three dimensional images of the patient's retina. This may allow for early detection of several pathologies of the posterior part of the eye, the fundus, the cornea, the structural changes of the chamber angle, and the iris. OCT has also shown promising clinical evidence in diagnosing macular degeneration, retinal thickening, as well as the loss of normal foveal pit appearance.6 While OCT systems may be able to provide highly accurate images of the eye, these imaging devices are often large and expensive. Recently several portable and low-cost solutions have been developed to image the eye using visible light and RGB cameras, including mobile phone cameras.23,24

Estimating refractive errors. Refractive errors of the human eye occur when the eye is unable to correctly bend incoming light and focus an image on the retina. Typically, patients perceive this as blurry images. Refractive errors include myopia (nearsightedness), hyperopia (farsightedness), presbyopia (loss of near vision with age), and astigmatism and can be corrected with glasses. Pamplona et al. developed a prototype of a portable, inexpensive, and simple to use screening tool for estimating refractive errors (nearsightedness) in the human eye called NETRA.23 The NETRA device is based on a high-resolution programmable display, inexpensive optics, interaction with the user, and computational reconstruction. A micro-lens array is placed over an LCD display. The user looks at the display and the image is formed on the user's retina. A healthy eye sees a clear image, while a myopic eye, for example, will see two distinct images on the display. The user can move the position of the displayed objects until they are aligned. The distance that the user moves the device is used to compute the refractive error in the user's eye.

Detecting cataracts. Cataracts are the leading cause of avoidable blindness. Clinicians use expensive and difficult-to-operate slit-lamp equipment to subjectively assess back-scattering of light in the eye, the hallmark of cataracts. In addition to their high cost, traditional approaches usually fail to detect the early stages of cataracts. Pamplona et al.24 presented CATRA, a low-cost solution for imaging cataracts using a compact snap-on attachment for mobile phones. The device is easy to use and has even shown promise in detecting early-stage cataracts. An eye with cataracts scatters and reflects light before it reaches the retina; this scattering is caused by clouding of the lens. CATRA measures this cloudiness by comparing the light path in a healthy eye to the light path in an eye with cataracts. Unlike slit-lamp examination, CATRA uses forward scattering and includes the user in the loop: the user responds to what they visually experience on the display. Using this feedback, CATRA scans the entire lens section by section and creates a map of the light attenuation caused by the cataracts.

Retinal imaging. Many diseases manifest in the retina, including diabetes, AIDS, and hypertension. However, most existing devices for imaging the retina are large, expensive, and unavailable in developing countries; their high cost stems from alignment and illumination requirements. Lawson et al.17 built an inexpensive and portable prototype of a retinal imager that fits in a pair of glasses. It indirectly illuminates the retina by shining light through the tissue of the temple. Indirect illumination avoids the need to dilate the pupils, which would be necessary if light were shone directly into the eye. The gaze is guided to the correct position by displaying patterns in front of the eye, and the user moves their eyes around to capture multiple images of different areas of the retina. These images are then stitched together to create wide field-of-view retinal panoramas that show the fovea, the macula, the optic disc, and peripheral structures, such as the blood vessels and blood flow.
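
As a minimal sketch of the stitching step, overlapping retinal views must first be registered against each other. The phase-correlation routine below (NumPy only, translation-only, not the full registration pipeline such systems use) illustrates the core idea:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) translation of `img` relative to `ref`
    via phase correlation: the normalized cross-power spectrum of a pure
    translation is a complex exponential whose inverse FFT peaks at the shift."""
    F_ref, F_img = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F_ref) * F_img
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Shifts beyond half the image size wrap around; map them to negatives.
    dy = int(dy) - h if dy > h // 2 else int(dy)
    dx = int(dx) - w if dx > w // 2 else int(dx)
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, (3, 5), axis=(0, 1))     # simulate a shifted second view
print(phase_correlation_shift(ref, img))    # (3, 5)
```

Once pairwise shifts are known, each view can be pasted into a shared canvas at its estimated offset; real retinal panoramas additionally handle rotation, distortion, and illumination differences.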

Cancer imaging. Various cancers are among the leading causes of death in developed countries. The traditional imaging methods used to diagnose cancer are X-ray imaging and ultrasonography. X-ray imaging is highly accurate; however, it is ionizing and therefore less safe than imaging with visible light. Ultrasonography is safe but less accurate in detecting early-stage cancers because of low acoustic contrast. While these traditional techniques are likely to remain the standard of care for cancer diagnosis, there is long-term potential in exploring visible- and NIR-based imaging techniques, since they do not involve ionizing radiation.

DOT imaging systems are a potential alternative to traditional methods, such as X-rays and ultrasonography, for early-stage cancer diagnosis. DOT has shown early promise in the detection of thyroid cancers.10 This is potentially important because traditional imaging techniques are not very effective in diagnosing thyroid cancer, especially in its early stages.8

Definitive diagnosis of thyroid cancer usually requires careful examination of histological specimens, which are invasively obtained from the body in a biopsy. DOT could instead detect the relevant features from 3D tomographic images, based on changes in the optical properties of the tissue, such as the absorption and scattering coefficients.8
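
As a single-channel intuition for how measured intensity changes map to tissue optical properties, the modified Beer-Lambert law relates a fractional intensity change to a change in the absorption coefficient. Full DOT reconstructions invert a spatially resolved diffusion model over many source-detector pairs; the numbers below are purely illustrative:

```python
import numpy as np

def delta_mu_a(I, I0, source_detector_mm, dpf):
    """Change in the absorption coefficient (per mm) inferred from a
    fractional intensity change via the modified Beer-Lambert law:
        -ln(I / I0) = delta_mu_a * d * DPF
    where d is the source-detector separation and DPF is the differential
    pathlength factor accounting for scattering-lengthened photon paths."""
    return -np.log(I / I0) / (source_detector_mm * dpf)

# Illustrative numbers: a 5% intensity drop across a 30 mm source-detector
# separation, with an assumed DPF of 6.
print(delta_mu_a(0.95, 1.0, 30.0, 6.0))
```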

PAT has also shown promising clinical evidence in detecting early-stage breast cancer.40 Like DOT, PAT imaging is non-ionizing, making it safer than X-ray imaging, and it is more sensitive to early-stage tumors than low-contrast ultrasonography. An additional advantage is that PAT can not only capture high-resolution images deep inside the body but also measure blood oxygenation. Increased blood oxygenation may indicate hyper-metabolism of cells, which is often a sign of cancerous growth, and can therefore make cancer easier to detect.40
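
The oxygenation estimate rests on linear spectral unmixing: absorption measured at two (or more) wavelengths is decomposed into oxy- and deoxy-hemoglobin contributions. A minimal sketch, using placeholder extinction coefficients rather than tabulated reference values:

```python
import numpy as np

# Rows are wavelengths, columns are [eps_HbO2, eps_Hb]. These extinction
# coefficients are illustrative placeholders, not reference values.
E = np.array([[2.0, 6.0],    # wavelength 1
              [4.0, 3.0]])   # wavelength 2

def oxygen_saturation(mu_a):
    """Solve E @ [C_HbO2, C_Hb] = mu_a for the two hemoglobin
    concentrations, then return sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    c_hbo2, c_hb = np.linalg.solve(E, mu_a)
    return c_hbo2 / (c_hbo2 + c_hb)

# Synthetic check: tissue with 80% oxygenation (C_HbO2=0.8, C_Hb=0.2).
mu_a = E @ np.array([0.8, 0.2])
print(oxygen_saturation(mu_a))  # 0.8
```

With more than two wavelengths the same system is solved in a least-squares sense, which makes the estimate more robust to measurement noise.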

Dermatology. Optical imaging methods have limited penetration depth through skin because the tissue is highly scattering and contains many inhomogeneous structures. However, optical methods can penetrate at least a few millimeters into the shallow layers of the skin and can be used to diagnose dermatological conditions. Optical imaging offers non-invasive diagnostics, as opposed to standard histologic procedures.

OCT is valuable in dermatology because it provides cross-sectional imaging of the epidermis and upper dermis in living tissue. As a result, OCT offers non-invasive measurements for cosmetology, sun-damage estimation, wound-healing monitoring, inflammatory skin diseases, and even, potentially, tumor diagnosis.6

DOT systems could also potentially be used to diagnose dermatological conditions by examining both the skin surface and the tissue layers beneath it.18 However, the use of DOT for assessing dermatological conditions is still at an early stage of research and development, and these systems have not yet been clinically validated. Very early research is investigating the feasibility of monitoring wound healing and assessing the thickness of burn wounds, but current DOT systems suffer from low spatial resolution, and efforts are underway to improve them for clinical use.

There is early clinical evidence that OCT may be used to non-invasively monitor the healing of burn wounds. Monitoring burn wounds is critical because deeper wounds may form hypertrophic scars: thick, raised scars that may require surgery. Preliminary clinical studies suggest that OCT could be used to assess cancers of the basal cells, which lie in the deepest layer of the epidermis. These assessments help triage whether patients with basal cell cancer should receive medical or surgical treatment. The accuracy of OCT in assessing the thickness of basal cell tumors has been compared to ultrasound imaging: while both techniques overestimate tumor depth compared to the gold-standard invasive methods, OCT is more accurate and less biased than ultrasound. Moreover, early clinical investigation has shown that OCT may be able to diagnose malignant melanoma, the most serious kind of skin cancer: significant differences were found between benign and malignant melanocytic lesions under OCT. In addition, OCT has shown promise in non-invasive, real-time assessment of the depth and diameter of vascular lesions, which are malformations of the blood vessels in the skin.

However, these are very preliminary results, and the ability of OCT to successfully diagnose melanomas and other skin cancers has not yet been clinically validated. Furthermore, the spatial resolution of existing OCT systems is not yet sufficient to diagnose skin cancers at the cellular and sub-cellular scale.

Challenges and Opportunities

There are several advantages to monitoring health with non-contact and non-invasive optical and low energy E/M imaging techniques.

Access to device (+). Remotely monitoring health with imaging systems, especially with cameras, often does not require access to any specialized device. Many systems have been developed to be low-cost, portable, and easily accessible.11,14,18 Many of these portable systems can operate with common RGB cameras and even mobile phones.23,24 The same cellphone or laptop webcam used for a virtual doctor's visit could be used during the video call to automatically measure vital signs and take other clinical measurements.
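
As a minimal sketch of camera-based vital-sign measurement in the spirit of remote photoplethysmography,28,33 the pulse rate can be estimated from the dominant frequency of the per-frame mean green-channel intensity. The function name and parameters below are illustrative, and real systems add skin segmentation, motion compensation, and robust filtering:

```python
import numpy as np

def heart_rate_bpm(green_means, fps):
    """Estimate pulse rate (beats per minute) from the per-frame mean
    green-channel intensity by locating the strongest frequency in the
    physiological band (0.7-4.0 Hz, i.e., 42-240 bpm)."""
    x = green_means - np.mean(green_means)          # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 10 s "clip" at 30 fps: a 1.2 Hz pulse (72 bpm) buried in noise.
fps = 30.0
t = np.arange(300) / fps
rng = np.random.default_rng(1)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(300)
print(heart_rate_bpm(signal, fps))  # ~72 bpm
```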

Safety (+). All imaging technologies presented in this article are safe and non-ionizing. These imaging systems use low-energy wavelengths of light and do not pose risks to patients. This enables regular, long-term monitoring, which may be necessary for tracking the progress of treatment of certain diseases, such as cancer.

Comfort (+). The presented imaging systems are also non-invasive: they cause no pain or discomfort to the patient. Most of these devices operate at the surface of the skin, and many require no direct skin contact at all. Contactless measurement would enable comfortable, continuous monitoring without distracting the user.

Low cost (+). Many currently available clinical imaging systems are very expensive, and their cost may be prohibitive in many contexts. However, many of the imaging systems presented here have been demonstrated to work well with low-cost photodetectors and illumination sources: imperfections in the hardware can be corrected with computation and carefully designed algorithms. Computational imaging thus has the potential to provide inexpensive yet accurate devices for clinical settings.

However, there is "no free lunch" and imaging methods present challenges that must be overcome to obtain accurate and equitable measurements.

Motion (−). Patient motion during a measurement changes the reflection of light off the surface of the skin, corrupting the captured signals. This could be overcome by asking patients to sit still, but that is impractical when monitoring small children or during longer, continuous measurements, especially when the person is performing other tasks. Motion compensation, such as tracking, can be applied in post-processing of the captured images, but these solutions may not be applicable when imaging with a long exposure or a low frame rate.

Ambient illumination (−). Changes in ambient illumination also alter the intensities recorded at the skin surface, especially if the imaging system relies on visible light. Ambient illumination variations are often much larger than the underlying clinical signal of interest and can severely corrupt the measurements. This could be mitigated by adjusting the room lighting or configuring the imager for the ambient light before measurement, but that may be difficult in less controlled situations.

Unfairness (−). People with darker skin types have a higher melanin concentration in their skin. Melanin, like hemoglobin, absorbs light inside the tissue. Consequently, a higher melanin concentration will cause more light to be absorbed inside the body and less light to come back to the camera. This leads to weaker clinical signals and potentially worse contrast of the captured images. These disparities may place people with darker skin tones at a higher risk of erroneous measurements.

Scattering (−). Optical imaging methods have limited penetration depth inside the body due to the scattering of light: a large fraction of the light is either scattered at the surface of the skin or scattered and absorbed inside the body. Advances have been made over the years to image deeper inside tissue and to use scattered light to reconstruct an image. Moreover, PAT can penetrate much deeper into tissue than purely optical methods because it detects acoustic waves, which scatter far less than light. Nevertheless, optical methods are still unable to form sharp images deep inside tissue.

Future Directions

A wide range of non-invasive imaging and testing modalities have already become an integral part of the healthcare system. These methods make it possible to image beneath the skin surface and diagnose diverse health conditions. Hence, there is strong interest in continuing to push the boundaries of imaging capabilities, including imaging depth, resolution, and the applications where these systems can be deployed.

Penetration depth. Light scatters as it travels inside tissue. While light may penetrate deep inside the body, most of it is scattered, so sharp images cannot be formed from the received light alone. Efforts are currently underway to build novel hardware and algorithms that mitigate the effects of scattering by controlling the illumination and that computationally reconstruct images from the measured light.
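
Diffusion theory gives a useful back-of-the-envelope estimate of how deep diffuse light reaches: the 1/e penetration depth is the reciprocal of the effective attenuation coefficient. The coefficient values below are illustrative assumptions, not measured tissue data:

```python
import math

def penetration_depth_mm(mu_a, mu_s_prime):
    """1/e penetration depth of diffuse light (mm), from diffusion theory:
        delta = 1 / mu_eff,  mu_eff = sqrt(3 * mu_a * (mu_a + mu_s'))
    where mu_a is the absorption coefficient and mu_s' the reduced
    scattering coefficient, both per mm."""
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))
    return 1.0 / mu_eff

# Illustrative NIR soft-tissue values: mu_a = 0.01 /mm, mu_s' = 1.0 /mm,
# giving a penetration depth of a few millimeters.
print(penetration_depth_mm(0.01, 1.0))
```

The formula makes the trade-off explicit: depth grows only as the inverse square root of the scattering and absorption coefficients, which is why pushing beyond a few millimeters requires controlling the illumination rather than simply adding more light.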

Non-invasive cellular monitoring. Currently, no non-invasive optical method offers single-cell resolution below the skin. Imagine if we could image live biology at a cellular scale non-invasively, without having to draw blood. That ability could open a whole new class of medical devices and healthcare applications. For example, if next-generation smartwatches had see-below-the-skin microscopes that could image live biology, then healthcare "apps" on the smartwatches could extract the information relevant to specific clinical conditions.

Telemedicine. The ability to monitor health with cameras and other optical devices could be very valuable for remote care. Currently, telehealth systems rely on additional hardware to collect vitals. If the video stream already used for communication could also provide vital signs, it could raise the standard of remote care at home and benefit other health-related applications. Further improvements in the imaging technology are still necessary to ensure these systems are robust in real-world scenarios. However, existing work has already demonstrated the huge potential of portable versions of these systems in measuring several clinical parameters that improve quality of life.22,37

Summary

We have summarized a set of non-invasive imaging technologies that use electromagnetic (E/M) waves to inspect the inside of the body and measure vital signs. Across the electromagnetic spectrum, imaging technology can capture cardiac and pulmonary vital signs, image blood vessels, and, in the future, perhaps even individual cells. While not all of these techniques share the same properties, imaging approaches have several attractions. Imagers are non-invasive and enable convenient, comfortable measurement. Some forms of imagers (such as RGB cameras) are ubiquitous. Many use E/M waves that are safe for the human body (unlike X-rays). Many of these technologies can be made portable and relatively inexpensive. Overall, we argue that imaging methods present exciting opportunities for the field of medicine.

Some directions will help the field advance and achieve translational adoption. The first, and perhaps most critical, is producing more evidence of accuracy in clinical settings and demonstrating the benefits of imaging over contact sensors in practice. Translational research of this kind is very challenging, given its interdisciplinary nature and the sensitivity of the applications.

From the computational angle, machine learning could offer benefits to these imaging systems. Deep learning has led to significant improvements in computer vision tasks, and medical imaging has yet to take full advantage of many of these methods.

The fairness and equitability of these systems are critical. First, imaging systems must overcome the hardware and software biases they exhibit. Measurements should deliver equivalent performance across all skin types and age groups; unless equity is a core principle in the way these solutions are designed, they are likely to neglect minorities. Second, just because these imaging systems can be lower cost does not mean they are automatically suitable for low-resource settings. More work is needed to ensure access to affordable and fair care for all demographics.

References

1. Adib, F., Mao, H., Kabelac, Z., Katabi, D., and Miller, R. Smart homes that monitor breathing and heart rate. In Proceedings of the 33rd Annual ACM Conf. Human Factors in Computing Systems (2015), 837–846.

2. Balakrishnan, G., Durand, F., and Guttag, J. Detecting pulse from head motions in video. In Proceedings of the IEEE Conf. Computer Vision and Pattern Recognition (2013), 3430–3437.

3. Bélanger, S., Abran, M., Intes, X., Casanova, C., and Lesage, F. Real-time diffuse optical tomography based on structured illumination. J. Biomedical Optics 15, 1(2010), 016006.

4. Chen, X., Hernandez, J., and Picard, R. Estimating carotid pulse and breathing rate from near-infrared video of the neck. Physiological Measurement 39, 10 (2018), 10NT01.

5. Elgendi, M., et al. The use of photoplethysmography for assessing hypertension. NPJ Digital Medicine 2, 1 (2019), 1–11.

6. Fercher, A. Optical coherence tomography–development, principles, applications. Zeitschrift für Medizinische Physik 20, 4 (2010), 251–276.

7. Fercher, A., Drexler, W., Hitzenberger, C., and Lasser T. Optical coherence tomography-principles and applications. Reports on Progress in Physics 66, 2 (2003), 239.

8. Fujii, H., Yamada, Y., Kobayashi, K., Watanabe, M., and Hoshi, Y. Modeling of light propagation in the human neck for diagnoses of thyroid cancers by diffuse optical tomography. Intern. J. Numerical Methods in Biomedical Engineering 33, 5 (2017), e2826.

9. Garbey, M., Sun, N., Merla, A., and Pavlidis, I. Contactfree measurement of cardiac pulse based on the analysis of thermal imagery. IEEE Trans. Biomedical Engineering 54, 8 (2007), 1418–1426.

10. Godavarty, A., Rodriguez, S., Jung, Y., and Gonzalez, S. Optical imaging for breast cancer prescreening. Breast Cancer: Targets and Therapy 7 (2015), 193.

11. Hariri, A., Lemaster, J., Wang, J., Jeevarathinam, A., Chao, D., and Jokerst, J. The characterization of an economic and portable LED-based photoacoustic imaging system to facilitate molecular imaging. Photoacoustics 9 (2018), 10–20.

12. Hee, M., et al. Optical coherence tomography of the human retina. Archives of Ophthalmology 113, 3 (1995), 325–332.

13. Huang, D., et al. Optical coherence tomography. Science 254, 5035 (1991), 1178–1181.

14. Kim, S., Crose, M., Eldridge, W., Cox, B., Brown, W., and Wax, A. Design and implementation of a low-cost, portable OCT system. Biomedical Optics Express 9, 3 (2018), 1232–1243.

15. Kumar, M., Suliburk, J., Veeraraghavan, A., and Sabharwal, A. PulseCam: High-resolution blood perfusion imaging using a camera and a pulse oximeter. In Proceedings of the 38th Annual Intern. Conf. IEEE Engineering in Medicine and Biology Society. (2016), IEEE, 3904–3909.

16. Kumar, M., Veeraraghavan, A., and Sabharwal, A. DistancePPG: Robust non-contact vital signs monitoring using a camera. Biomedical Optics Express 6, 5 (2015), 1565–1588.

17. Lawson, E., Boggess, J., Khullar, S., Olwal, A., Wetzstein, G., and Raskar, R. Computational retinal imaging via binocular coupling and indirect illumination. In ACM SIGGRAPH 2012 Talks, 1–1.

18. Liu, C., Maity, A., Dubrawski, A., Sabharwal, A., and Narasimhan, S. High resolution diffuse optical tomography using short range indirect subsurface imaging. In Proceedings of the 2020 IEEE Intern. Conf. Computational Photography. IEEE, 1–12.

19. Martinez, L., Paez, G., and Strojnik, M. Optimal wavelength selection for noncontact reflection photoplethysmography. In Proceedings of the 22nd Congress of the Intern. Commission for Optics: Light for the Development of the World 8011 (2011). Intern. Society for Optics and Photonics, 801191.

20. McDuff, D., Gontarek, S., and Picard, R. Remote measurement of cognitive stress via heart rate variability. In Proceedings of the 36th Annual Intern. Conf. IEEE Engineering in Medicine and Biology Society (2014). IEEE, 2957–2960.

21. Nowara, E., Marks, T., Mansour, H., and Veeraraghavan, A. SparsePPG: Towards driver monitoring using camera-based vital signs estimation in near-infrared. In Proceedings of the 1st Intern. Workshop on Computer Vision for Physiological Measurement (2018).

22. Nowara, E., Marks, T., Mansour, H., and Veeraraghavan, A. Near-infrared imaging photoplethysmography during driving. IEEE Trans. Intelligent Transportation Systems (2020).

23. Pamplona, V., Mohan, A., Oliveira, M., and Raskar, R. NETRA: Interactive display for estimating refractive errors and focal range. In Proceedings of ACM SIGGRAPH 2010, 1–8.

24. Pamplona, V., Passos, E., Zizka, J., Oliveira, M., Lawson, E., Clua, E., and Raskar, R. CATRA: Interactive measuring and modeling of cataracts. ACM Trans. Graphics 30, 4 (2011), 1–8.

25. Pan, C., Francisco, M., Yen, C., Wang, S., and Shiue, Y. Vein pattern locating technology for cannulation: A review of the low-cost vein finder prototypes utilizing near infrared (NIR) light to improve peripheral subcutaneous vein selection for phlebotomy. Sensors 19, 16 (2019), 3573.

26. Phan, D., Bonnet, S., Guillemaud, R., Castelli, E., and Pham Thi, N. Estimation of respiratory waveform and heart rate using an accelerometer. In Proceedings of the 30th Annual Intern. Conf. IEEE Engineering in Medicine and Biology Society. 2008, 4916–4919.

27. Pilt, K., Meigas, K., Temitski, K., and Viigimaa, M. Second derivative analysis of forehead photoplethysmographic signal in healthy volunteers and diabetes patients. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering (Beijing, China, May 26–31, 2012). Springer, 410–413.

28. Poh, M., McDuff, D., and Picard, R. Advancements in noncontact, multiparameter physiological measurements using a webcam. IEEE Trans. Biomedical Engineering 58, 1 (2010), 7–11.

29. Shimokawa, T., Ishii, T., Takahashi, Y., Sugawara, S., Sato, M., and Yamashita, O. Diffuse optical tomography using multidirectional sources and detectors. Biomedical Optics Express 7, 7 (2016), 2623–2640.

30. Siegel, A., Marota, J., and Boas, D. Design and evaluation of a continuous wave diffuse optical tomography system. Optics Express 4, 8 (1999), 287–298.

31. Tarassenko, L., Villarroel, M., Guazzi, A., Jorge, J., Clifton, D., and Pugh, C. Noncontact video-based vital sign monitoring using ambient light and auto-regressive models. Physiological Measurement 35, 5 (2014), 807.

32. van Gastel, M., Stuijk, S., and Haan, G. Motion robust remote-PPG in infrared. IEEE Trans. Biomedical Engineering 62, 5 (2015), 1425–1433.

33. Verkruysse, W., Svaasand, L., and Nelson, J. Remote plethysmographic imaging using ambient light. Optics Express 16, 26 (2008), 21434–21445.

34. Vizbara, V. Comparison of green, blue and infrared light in wrist and forehead photoplethysmography. Biomedical Engineering 17, 1 (2013).

35. Wang, L. Prospects of photoacoustic tomography. Medical Physics 35, 12 (2008), 5758–5767.

36. Wang, L. Tutorial on photoacoustic microscopy and computed tomography. IEEE J. Selected Topics in Quantum Electronics 14, 1 (2008), 171–179.

37. Wang, W., den Brinker, A., Stuijk, S., and de Haan, G. Algorithmic principles of remote PPG. IEEE Trans. Biomedical Engineering 64, 7 (2017), 1479–1491.

38. Wang, Y., Wang, W., van Gastel, M., and de Haan, G. Modeling on the feasibility of camera-based blood glucose measurement. In Proceedings of the IEEE Intern. Conf. Computer Vision Workshops (2019).

39. Zhang, X. Instrumentation in diffuse optical imaging. Photonics 1 (2014), 9–32. Multidisciplinary Digital Publishing Institute.

40. Zhou, Y., Yao, J., and Wang, L. Tutorial on photoacoustic tomography. J. Biomedical Optics 21, 6 (2016), 061007.

Authors

Ewa Nowara is a research scientist at Meta Reality Labs Research in Sunnyvale, CA, USA.

Daniel McDuff is a staff research scientist at Google in Seattle, WA, USA.

Ashutosh Sabharwal is chair of the Department of Electrical and Computer Engineering and the Ernest Dell Butcher Professor of Engineering at Rice University, Houston, TX, USA.

Ashok Veeraraghavan is a professor of electrical and computer engineering and of computer science at Rice University, Houston, TX, USA.


©2022 ACM  0001-0782/22/12

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.
