Chinese Optics Letters, 2020, 18 (12): 121705, Published Online: Dec. 8, 2020   

Deep learning virtual colorful lens-free on-chip microscopy

Hua Shen 1,2,3,* and Jinming Gao 1,2
Author Affiliations
1 School of Electronic Engineering and Optoelectronic Technology, Nanjing University of Science and Technology, Nanjing 210094, China
2 MIIT Key Laboratory of Advanced Solid Laser, Nanjing University of Science and Technology, Nanjing 210094, China
3 Department of Material Science and Engineering, University of California Los Angeles, Los Angeles, CA 90095, USA
Abstract
It is generally known that lens-free holographic microscopy, which has no imaging lens, can realize large field-of-view imaging with a low-cost setup. However, in order to obtain colorful images, traditional lens-free holographic microscopy has to utilize at least three quasi-monochromatic light sources of discrete wavelengths, such as red, green, and blue LEDs. Here, we present a virtual colorization by deep learning methods to transfer a grayscale lens-free microscopy image into a colorful image. Using pairs of images, i.e., grayscale lens-free microscopy images under green LED illumination at 550 nm and colorful bright-field microscopy images, a generative adversarial network (GAN) is trained, and its effectiveness for virtual colorization is demonstrated by imaging hematoxylin and eosin stained pathological tissue samples. Our computational virtual colorization method might strengthen monochromatic-illumination lens-free microscopy in medical pathology applications and stain-related biomedical research.

Compared with a conventional microscope, lens-free on-chip microscopy, whose principle is based on digital in-line holography, has the advantages of a much larger field of view (FOV), lower cost, and greater compactness[1–4]. As shown in Figs. 1(a) and 1(b), a lens-free on-chip microscope has the simplest structure, consisting of a light source, a sample, and a digital image sensor. Once the wave emitted from the light source reaches the sample, the wave is modulated by the object information and diffracted onto the digital image sensor. Usually, in order to keep the lens-free on-chip microscope low-cost and compact, light-emitting diodes (LEDs) or laser diodes (LDs) are adopted as the light source, and a charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor is utilized for recording images. Then, by using phase retrieval algorithms, such as transport-of-intensity equation (TIE) methods, Gerchberg–Saxton (G-S) phase retrieval iterations, and the like, the complex light field at the object plane can be reconstructed from the hologram images produced by object diffraction. However, an inherent limit on the light source for lens-free on-chip microscopy is that its spectrum must be narrow-band, which is dictated by the holographic imaging principle. Thus, to obtain a colorful image, the image has to be reconstructed from several holograms recorded at different wavelengths. For example, some previous lens-free red (R), green (G), and blue (B) colorization methods utilize images illuminated by three sources of different wavelengths (RGB), which have relatively narrow spectral peaks[4].

Fig. 1. Lens-free on-chip microscope setup. (a) Schematic of lens-free on-chip microscopy. (b) Experimental lens-free on-chip microscope setup. (c) An example image of H&E stained pathological tumor tissue.


In medical microscopy applications, chemical dye stained pathological human tissues are widely used. As shown in Fig. 1(c), the hematoxylin and eosin (H&E) stain is one of the most widely used tissue stains in histology and medical diagnosis and is viewed as a gold standard. For example, in a biopsy of a suspected tumor, the tissue is sectioned into micrometer-thick thin slides, which are stained by a combination of H&E. Normally, the acidic (basophilic) structures of tissues and cells are stained purplish-blue by hematoxylin, a basic, positively charged dye. Hematoxylin is usually conjugated with a mordant (an aluminum salt), which also defines the color of the stain. To form the tissue–mordant–hematoxylin complex, which stains the nuclei and chromatin bodies purple, the mordant binds to the tissue first, and then the hematoxylin binds to the mordant. Eosin is an acidic, negatively charged dye. Eosin stains the cellular matrix and cytoplasm (acidophilic, or eosinophilic, structures), giving them a red or pink color. In short, with H&E staining, cell nuclei appear purplish-blue, while the cellular matrix and cytoplasm appear pink. Based on these two main colors, all structures of the tissue take on different shades and hues, so that the general distribution of cells in the tissue sample and its detailed structure can be easily observed. Consequently, it is easy to distinguish between the nuclei, cytoplasm, and boundaries of a tissue/cell sample in medical diagnosis by H&E staining.

The digital colorization of grayscale images is a hot topic in machine learning[5–10]. Such methods typically match the luminance and texture information of the grayscale image to be colorized with that of an existing color image to realize the colorization. Actually, in an H&E stained pathological slide, cell nuclei appear purplish-blue, while the cellular matrix and cytoplasm appear pink, which means the different structures can be represented by different shades and hues based on two colors (or chromatic spectra). The H&E palette is far simpler and more monotonous than natural color, which means that it is easier to virtually stain a colorful image from a gray image. Besides, hematoxylin staining is purplish-blue and therefore has a high transmission rate in the short visible wavelength range (i.e., the purplish-blue spectral region), while eosin staining has a high transmission rate in the long visible wavelength range (i.e., the pink-red spectral region). Thus, if the illumination is an LED at 550 nm, we can obtain a high-contrast gray image whose dark shades correspond to the stained parts of the tissue and whose bright areas correspond to the unstained parts.

In this paper, we propose a deep learning style transfer method to realize colorful lens-free on-chip microscopy with only one illumination wavelength. In theory, the amount of raw data required by our method is only one third of that of conventional lens-free on-chip microscopy with RGB illumination, which means our microscope has higher efficiency and lower cost.

Lens-free on-chip microscope setup. Figure 1 presents our lens-free on-chip microscope setup. It mainly includes: 1, a green LED (G LED 3 W, Juxiang, China); 2, optical fibers (CORE200UM, Shouliang, China); 3, an XYZ axial adjuster (XYZ25MM, Juxiang, China); 4, supporting mechanical elements; 5, a CMOS image sensor (DMM27UJ003-ML, The Imaging Source, Germany). The G LED's nominal wavelength is 550 nm, and its spectral bandwidth is about 40 nm. In order to improve the spatial coherence, the LED light is coupled into an optical fiber whose core diameter is 200 μm. As shown in Fig. 1(b), the spatial-coherence-improved light exits the end of the optical fiber at the top of the setup and then illuminates the bio-sample below. The light diffracted by the bio-sample is then recorded by the CMOS sensor as a hologram. The distance between the end of the optical fiber and the bio-sample slide is about 100 mm. The distance between the CMOS sensor and the bio-sample is less than 2 mm. The bio-sample is placed in a mechanical holder, which is fixed on an XYZ axial adjuster with micrometer-level precision. In the experiment, we record three holograms. At the initial diffraction distance, we obtain the first hologram. As shown in Fig. 1(a), the bio-sample is then moved about 50 μm along the Z axis, and the second hologram is obtained. Finally, to obtain the third hologram, the bio-sample is moved a further 50 μm in the same Z-axis direction. The accurate diffraction distance can be determined computationally by the digital autofocusing method.

Autofocusing. Generally, if the defocusing range is small, the defocusing can be modeled as Fresnel propagation because the LED illumination is partially coherent[11]. Digital autofocusing algorithms can be used to calculate the diffraction distance z. The essential part of the computational autofocusing algorithm is iterative forward/back propagation based on a single digitally recorded hologram[12–15]. The best focus position is then found by a reasonable criterion and search strategy. The autofocusing criterion and the search strategy should yield a unimodal autofocusing curve over a wide equivalent diffractive distance (EDD) range. Since the theory of this paper is established on Fresnel diffraction, the autofocusing criterion adopted here is the classical sparsity-based metric[14,15], the Tamura coefficient of gradients (TG):

$$\begin{cases} F=\dfrac{1}{\bar{I}_r}\sqrt{\dfrac{1}{MN}\displaystyle\sum_{m=1}^{M}\sum_{n=1}^{N}\left[I_r(m,n)-\bar{I}_r\right]^2},\\[2mm] I_r=\sqrt{\left(\dfrac{\partial I}{\partial x}\right)^2+\left(\dfrac{\partial I}{\partial y}\right)^2},\end{cases}\qquad(1)$$

where I is the grayscale image, I_r is the gradient magnitude image with mean value $\bar{I}_r$, M is the number of rows, and N is the number of columns; x and y are the dimension directions along the row and the column, respectively.
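As a concrete illustration, the following minimal Python/NumPy sketch (our own illustration, not the authors' code) evaluates the TG criterion of Eq. (1) on a reconstructed amplitude image; the `backpropagate` routine in the usage comment is a hypothetical placeholder for any Fresnel or angular-spectrum back-propagation function.

```python
import numpy as np

def tamura_of_gradients(image):
    """Tamura coefficient of gradients (TG), Eq. (1): the ratio of the
    standard deviation to the mean of the gradient magnitude image I_r.
    An in-focus amplitude image has a sparser gradient distribution and
    therefore a larger TG value."""
    gy, gx = np.gradient(image.astype(np.float64))  # derivatives along rows / columns
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)           # I_r in Eq. (1)
    return grad_mag.std() / grad_mag.mean()

# Hypothetical usage: scan candidate diffraction distances and keep the one
# that maximizes TG. `backpropagate(hologram, z)` stands for any numerical
# back-propagation routine returning the complex field at distance z.
# best_z = max(z_candidates,
#              key=lambda z: tamura_of_gradients(np.abs(backpropagate(hologram, z))))
```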

Phase retrieval algorithms. We adopt the TIE and G-S phase retrieval algorithms to reconstruct the in-focus image from the three holograms mentioned above[1–4]. As shown in the flowchart of Fig. 2, in the fourth and fifth steps, we first calculate the complex wave-front of the bio-sample by the TIE and then input it to the multi-height G-S iteration as the initial guess. Initially, we use two diffraction holograms at different image planes to directly calculate the complex wave-front. Additionally, in order to obtain an accurate complex-field distribution of the sample, we introduce the multi-height G-S iterations. The detailed process of our method is as follows. Step I, use the TIE to calculate the complex field of the sample U0(x,y;z0) as the initial guess. Step II, using the Fresnel diffraction and angular spectrum propagation theorems, acquire the complex field U1(x,y;z1) at the plane at distance z1 by computationally propagating U0(x,y;z0) to the z1 position. Step III, use the square root of I1(x,y;z1), the intensity of the hologram harvested at the z1 position by the CMOS sensor, to substitute the amplitude of the complex field U1(x,y;z1), while the phase term is retained; the renewed complex field is denoted U1′(x,y;z1). Afterwards, the same process as Steps II and III is applied repeatedly to the three holograms harvested by the CMOS sensor at the defocusing distances. Once the renewed complex field at the last hologram plane is obtained, the complex field is back-propagated to the plane at the z0 position. This completes one iteration, and 5–10 such iterations are performed until a converged solution is acquired. Eventually, we obtain an accurate complex wave-front U0(x,y;z0), whose amplitude part is the needed in-focus image.
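To make the multi-height amplitude-constraint iteration concrete, here is a minimal NumPy sketch under our own assumptions (an angular spectrum propagator, square-root amplitude substitution at each hologram plane, and a fixed iteration count); it illustrates the scheme described above rather than reproducing the authors' implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, pixel_size):
    """Propagate a complex field over distance z with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z)
    transfer[arg < 0] = 0.0                      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def multi_height_gs(initial_field, holograms, distances, wavelength, pixel_size, n_iter=10):
    """Multi-height G-S refinement of the object-plane field.

    initial_field : complex field at the object plane z0 (e.g., the TIE guess)
    holograms     : measured intensity images I_k at the hologram planes
    distances     : propagation distances (z_k - z0), in the same order
    """
    field0 = initial_field.copy()
    for _ in range(n_iter):
        field, z_prev = field0, 0.0
        for intensity, z in zip(holograms, distances):
            field = angular_spectrum_propagate(field, z - z_prev, wavelength, pixel_size)
            # amplitude constraint: keep the phase, replace the amplitude
            field = np.sqrt(intensity) * np.exp(1j * np.angle(field))
            z_prev = z
        # back-propagate from the last hologram plane to the object plane z0
        field0 = angular_spectrum_propagate(field, -z_prev, wavelength, pixel_size)
    return field0
```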

Fig. 2. Computational algorithm flowchart to get a virtually colorized lens-free on-chip microscopy image.


Virtual colorization by deep learning. In this paper, the YCbCr color space is used for colorization. As shown in Fig. 3, a GAN consists of two main parts: a generator network (GN) and a discriminator network (DN). The GN architecture we adopt is a symmetric encoder–decoder with skip connections, usually called U-Net[5–10,16–18]. In the GN, the number of encoding units is the same as the number of decoding units. We utilize 4×4 convolution layers with stride 2 to form the contracting path for down-sampling, and 4×4 transposed convolution layers with stride 2 to form each unit in the expansive path for up-sampling. Meanwhile, each unit of the expansive path is concatenated with the activation map of the mirroring layer in the contracting path, followed by batch normalization and a rectified linear unit (ReLU) activation function. The network's last layer is a 1×1 convolution with a tanh activation function, acting like a cross-channel parametric pooling layer; its output is a three-channel image in the YCbCr color space. The DN receives colored images both from the GN output and from the bench-top commercial microscope. Image-style-transfer deep convolutional networks[19–22] actually define two difference functions: one describes the difference between the contents of two images, and the other describes the difference between their styles. The output of the GN should then match both the desired content image and the desired style image, with the content image given as the initial input. We therefore train the GN to transform the input image by minimizing both difference functions, i.e., the content difference function and the style difference function. In the GN, deep convolutions and deep back-propagation create images matching the content of the content image and the style of the style image. Thus, we design the GAN in Fig. 3 to achieve the virtual colorization for lens-free microscopy.
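The exact depth and filter widths of the generator are not listed in the paper; the following tf.keras sketch (our assumption, written against a modern TensorFlow API rather than the authors' TensorFlow 1.4.0 code) illustrates the described U-Net generator: 4×4 stride-2 convolutions in the contracting path, mirrored 4×4 stride-2 transposed convolutions with skip concatenation, batch normalization and ReLU, and a final 1×1 convolution with tanh producing three YCbCr channels.

```python
import tensorflow as tf
from tensorflow.keras import layers

def unet_generator(input_shape=(256, 256, 1), base_filters=64, depth=4):
    """U-Net style generator sketch: grayscale lens-free image in,
    three-channel YCbCr image out (assumed layer counts and widths)."""
    inputs = layers.Input(shape=input_shape)
    x = inputs
    skips, filters = [], base_filters
    # contracting path: 4x4 conv, stride 2, batch norm, ReLU
    for _ in range(depth):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        skips.append(x)
        filters *= 2
    # expansive path: 4x4 transposed conv, stride 2, concat with mirror layer
    for skip in reversed(skips[:-1]):            # the bottleneck itself has no skip
        filters //= 2
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    # last up-sampling step back to the input resolution
    filters //= 2
    x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # 1x1 convolution with tanh producing the three-channel YCbCr output
    outputs = layers.Conv2D(3, 1, activation="tanh")(x)
    return tf.keras.Model(inputs, outputs, name="generator")

# Usage sketch: generator = unet_generator(); the discriminator and the
# adversarial training loop are omitted here for brevity.
```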

Fig. 3. Deep learning GAN established to achieve virtual colorization.


Figure 4 presents the experimental results of virtual colorful lens-free on-chip microscopy by our deep learning style transfer method. Three diffraction holograms at different defocusing distances are harvested by the CMOS sensor. The complex wave-fronts are reconstructed by combining the TIE and G-S iterative phase retrieval methods. Then, by training the GAN with 200 pairs of reconstructed amplitude images and conventional microscopy images (256 × 256 pixels), the deep learning style transfer network parameters are obtained. When a new grayscale lens-free on-chip microscopy image is input into the trained GAN, a virtual colorful H&E stained microscopy image is produced. In hardware, the G LED (JXLED3WG, Juxiang, China) has a spectral bandwidth of 30 nm at the dominant wavelength of 550 nm. The CMOS image sensor is an MV-CB120-10UM-B/C/S (Hikvision, China), and the XYZ axial translation stage (XR25C/M, Zhishun, China) is used to align the sample and the CMOS image sensor. The GAN was implemented using TensorFlow framework version 1.4.0 and Python version 3.7. We ran the software on a desktop computer with a Core i7-7700K CPU at 4.2 GHz (Intel) and 64 GB of RAM, running the Windows 10 operating system (Microsoft). The network training and testing were performed using dual GeForce GTX 1080Ti GPUs (NVIDIA). The training time is about 2 h, while the virtual colorization (image-style transfer) of a lens-free image in practice (the test mode) takes about 7.3 ms. Three regions of interest (ROIs) are marked in Fig. 4 and zoomed in Fig. 5. Comparisons of lens-free on-chip microscopy images, bench-top commercial microscope images, and virtual colorization H&E stained images are presented in Fig. 5, which are image pairs of ROI #1, ROI #2, and ROI #3.
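The paper does not spell out the normalization or color-space conventions used at test time; a plausible inference pipeline, assuming a tanh-range generator output and standard full-range ITU-R BT.601 YCbCr, is sketched below. The conversion routine is standard; the normalization constants and the `generator` object are our assumptions.

```python
import numpy as np

def ycbcr_to_rgb(ycbcr):
    """Convert a full-range 8-bit YCbCr image (H, W, 3) to RGB (ITU-R BT.601)."""
    y = ycbcr[..., 0].astype(np.float64)
    cb = ycbcr[..., 1].astype(np.float64) - 128.0
    cr = ycbcr[..., 2].astype(np.float64) - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# Hypothetical inference pipeline: normalize the reconstructed grayscale
# amplitude image to [-1, 1] (matching the tanh output range), run the
# trained generator, rescale its YCbCr output back to 8-bit, and convert
# it to RGB for display.
# gray = amplitude_image.astype(np.float32) / 127.5 - 1.0
# ycbcr = (generator.predict(gray[None, ..., None])[0] + 1.0) * 127.5
# rgb = ycbcr_to_rgb(ycbcr)
```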

Fig. 4. Data processing to achieve virtual colorful lens-free on-chip microscopy. The yellow scale bar is 200 μm.


Fig. 5. Comparisons of lens-free on-chip microscopy image, bench-top commercial microscopy image, and virtual colorization image, which are image pairs of ROIs #1, #2, and #3 in Fig. 4.


From the lens-free on-chip microscopy images in Fig. 5, under the G LED illumination, it can be observed that the cell nuclei are almost black, corresponding to the parts stained blue or purple by the hematoxylin dye, and the other cellular matrix and cytoplasm parts are tinted gray, corresponding to the parts stained pink by the eosin dye. These corresponding stained colors are also presented in the bench-top commercial microscope images in Fig. 5. For comparison, we also show the lens-free on-chip microscopy images virtually colorized by our proposed deep learning style transfer method. It can be seen clearly that the texture details of the tissue are very well kept in our virtually colorized images. Moreover, with our method, the black cell nuclei in the original grayscale lens-free on-chip images are virtually colorized bluish-purple, and the tinted gray parts are simultaneously colorized pink. The nuclei, cytoplasm, and boundaries of a tumor tissue slide can be recognized and differentiated easily and clearly in a colorful mode by our process. Figure 5 shows that the greatest advantage of our method is that, with deep learning virtual colorization, only one quasi-monochromatic illumination source and a monochromatic CMOS image sensor are needed to achieve visually colorful microscopy. The results are approximately the same as the true colorful images.

In this paper, we report a deep learning style transfer method to achieve virtually colorized images for H&E stained pathological diagnosis using grayscale lens-free on-chip microscopy with only one quasi-monochromatic illumination. The convincing experimental results demonstrate that our method works well and transfers the grayscale lens-free on-chip microscopy image to a colorful human-vision image without any additional data or hardware cost. We believe that our method can be useful for extending the application of the lens-free on-chip microscope in telepathology and resource-limited settings.

References

[1] Y. Wu and A. Ozcan, Methods 136, 4 (2018).

[2] M. Wang, S. Feng, and J. Wu, Chin. Opt. Lett. 17, 110901 (2019).

[3] C. Chang, Y. Qi, J. Wu, J. Xia, and S. Nie, Chin. Opt. Lett. 16, 100901 (2018).

[4] Y. Wu, Y. Zhang, W. Luo, and A. Ozcan, Sci. Rep. 6, 28601 (2016).

[5] Y. Rivenson, T. Liu, Z. Wei, Y. Zhang, K. de Haan, and A. Ozcan, Light: Sci. Appl. 8, 23 (2019).

[6] Y. Rivenson, H. Wang, Z. Wei, K. de Haan, Y. Zhang, Y. Wu, H. Günaydın, J. E. Zuckerman, T. Chong, A. E. Sisk, L. M. Westbrook, W. D. Wallace, and A. Ozcan, Nat. Biomed. Eng. 3, 466 (2019).

[7] H. Liu, Z. Fu, J. Han, L. Shao, and H. Liu, J. Vis. Commun. Image Represent. 53, 20 (2018).

[8] T. Nguyen, K. Mori, and R. Thawonmas, arXiv:1604.07904 (2016).

[9] M. T. Shaban, C. Baur, N. Navab, and S. Albarqouni, in 2019 IEEE 16th International Symposium on Biomedical Imaging (2019), p. 953.

[10] Y. Zhang, K. de Haan, Y. Rivenson, J. Li, A. Delis, and A. Ozcan, Light: Sci. Appl. 9, 78 (2020).

[11] J. W. Goodman, Introduction to Fourier Optics (Roberts, 2005).

[12] E. S. R. Fonseca, P. T. Fiadeiro, M. Pereira, and A. Pinheiro, Appl. Opt. 55, 7663 (2016).

[13] Z. Ren, Z. Xu, and E. Y. Lam, Optica 5, 337 (2018).

[14] M. Liebling and M. Unser, J. Opt. Soc. Am. A 21, 2424 (2004).

[15] Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, Opt. Lett. 42, 3824 (2017).

[16] R. Liu, C. Peng, X. Liang, and R. Li, Chin. Opt. Lett. 18, 041402 (2020).

[17] X. Zhou, Z. Jin, T. Feng, Q. Cheng, X. Wang, Y. Ding, H. Zhan, and J. Yuan, Chin. Opt. Lett. 18, 041701 (2020).

[18] H. Li, H. Liang, Q. Hu, M. Wang, and Z. Wang, Chin. Opt. Lett. 18, 050602 (2020).

[19] H. Nishar, N. Chavanke, and N. Singhal, arXiv:2010.02659 (2020).

[20] T. Abraham, A. Shaw, D. O'Connor, A. Todd, and R. Levenson, arXiv:2008.08579 (2020).

[21] M. Izadyyazdanabadi, E. Belykh, X. Zhao, L. Borba Moreira, S. Gandhi, C. Cavallo, J. Eschbacher, P. Nakaji, M. C. Preul, and Y. Yang, arXiv:1905.06442 (2019).

[22] M. Tarek Shaban, C. Baur, N. Navab, and S. Albarqouni, arXiv:1804.01601 (2018).

