Abstract
Cell viability is an essential readout in drug discovery, cell biology, and biomedical research, used to assess the physiological condition of cells, including their health, functionality, and survival. Several methods exist for determining cell viability, either by staining cells with reagents such as trypan blue, acridine orange, propidium iodide, or calcein-AM, or by colorimetric assays such as the cell counting kit-8 assay. However, these methods are time-consuming and expensive and suffer from reagent instability and operator-to-operator variability. Even current image-analysis software such as QuPath and ImageJ can determine cell viability only after the cells have been stained. We therefore attempted to determine whether individual cells are alive or dead from their visual characteristics alone using Teachable Machine, a web-based artificial intelligence tool provided by Google. Labeling, the work of assigning correct answers to training data, is usually done manually and consumes considerable time and labor. To solve this problem, labeling was automated by recognizing and extracting individual cells from images with a contour function, greatly improving time efficiency. In addition, multiple datasets were created to evaluate and compare model performance. The best-performing model achieved an accuracy above 80%. In conclusion, this model could minimize analysis time, cost, and inter-operator variability, enhancing the efficiency and reproducibility of biological experiments in drug discovery, drug development, and biological research.
When conducting experiments with cells, measuring cell viability is an essential step. Cell viability refers to the number of living cells, and measuring it serves to check the physiological state of cells under different environmental conditions, including exposure to drugs or compounds, disease states, and cell culture parameters (1, 2). The most direct ways to measure cell viability are cell staining and metabolic assays. In staining-based methods, cells are treated with trypan blue or acridine orange (AO)/propidium iodide (PI) and the number of viable cells is then counted directly (3, 4); however, the multiple sample preparation steps introduce additional sample variability. These methods also require specific reagents such as AO/PI, trypan blue, and 7-AAD (7-aminoactinomycin D). Similarly, the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide), MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium), CCK-8 (cell counting kit-8), and XTT (sodium 3′-[1-(phenylaminocarbonyl)-3,4-tetrazolium]-bis(4-methoxy-6-nitro)benzene sulfonic acid hydrate) assays are based on measuring the formazan product generated by active metabolism in living cells (2, 5). However, metabolic activity varies across cell types and conditions and can fluctuate independently of viability, leading to inconsistencies. Both staining and colorimetric assays additionally require reagents, instruments, and often subjective interpretation, limiting their practicality. Therefore, there is a need for label-free cell viability assessment that avoids both staining and metabolic assays.
Deep learning is a machine learning technique that learns hierarchical feature representations through a deep neural network (6), improving its results over repeated training. Deep learning uses a form of artificial neural network (ANN) that is trained on data; the trained ANN then evaluates new data according to the learned rules. Data enter through the input layer, are processed through the hidden layers, and are collected at the output layer. Such ANNs currently show excellent performance in computer vision fields such as image processing. Teachable Machine, a web-based artificial intelligence (AI) tool developed by Google in 2017 (7), facilitates AI learning for images, sounds, and poses and is composed of Class (data collection), Training (learning), and Preview stages. Class is based on binary classification, and multi-class classification is carried out when additional classes are added. In the Training stage, the learning parameters of the model can be set; various models can be trained by adjusting three elements: epochs, batch size, and learning rate. Preview allows the classification results on test data, supplied through a webcam or file upload in the web environment, to be checked before the model is exported. Trained models can be exported as TensorFlow.js, TensorFlow Lite, or TensorFlow, depending on the target environment, such as the browser or mobile devices. Teachable Machine is a form of transfer learning; because it is based on MobileNet (8), a pre-trained image recognition network, it can be trained faster and with less data than training from scratch (9).
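A model exported in the TensorFlow (Keras) format can then be run outside the browser. Below is a minimal inference sketch, assuming the default Keras export layout (a keras_model.h5 file plus a labels.txt file) and a hypothetical cell-crop file name; the 224×224 input size and [-1, 1] pixel scaling follow Teachable Machine's documented image preprocessing.

```python
import cv2
import numpy as np
import tensorflow as tf

# Load a model exported from Teachable Machine in Keras (.h5) format.
model = tf.keras.models.load_model("keras_model.h5", compile=False)
class_names = open("labels.txt").read().splitlines()

# "cell_crop.png" is a hypothetical single-cell image.
img = cv2.cvtColor(cv2.imread("cell_crop.png"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))            # Teachable Machine input size
img = img.astype(np.float32) / 127.5 - 1.0   # scale pixels to [-1, 1]

probs = model.predict(img[np.newaxis, ...])[0]   # per-class probabilities
print(class_names[int(np.argmax(probs))], float(probs.max()))
```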
Based on the foregoing, we hypothesized that an AI model could be developed to evaluate cell viability from the shapes of living and dead cells using Teachable Machine. In this study, the AI model was developed after analyzing the results of several candidate models built with Teachable Machine (Supplementary Fig. S1). This approach could offer an alternative for evaluating cell viability in basic and applied biological research, substantially reducing time, cost, and inter-operator variability in results.
Human adipose-derived mesenchymal stem cells (hADMSCs) and human lung fibroblast cells (MRC-5) were purchased from Stemore (SCT002) and the Korean Cell Line Bank (KCLB No. 10171), respectively, and all cell experiments were conducted using hADMSCs at passage number 12. The CCK-8 assay was purchased from MedChem Express, while AO and PI were obtained from Thermo Fisher Scientific and Sigma-Aldrich, respectively. Dulbecco's modified Eagle's medium (DMEM) and Trypsin-EDTA were purchased from WELGENE. Fetal bovine serum (FBS) and penicillin-streptomycin antibiotics (P/S) were purchased from Gibco-BRL and Thermo Fisher Scientific, respectively.
hADMSCs and MRC-5 cells were separately cultured in complete DMEM supplemented with 10% FBS and 1% P/S in a humidified 5% CO2 incubator at 37℃. Cells were detached with 0.25% trypsin and 1.0 mM EDTA-4Na in the same incubator for 4 minutes. Afterward, trypsin was inactivated with complete medium, and the cells were washed with phosphate-buffered saline (pH 7.4), centrifuged at 712 g for 3 minutes, and resuspended in complete medium. The cells were then divided into two groups: the first consisted of living cells only (LC100; 0.5×10⁶), and the second of living and dead cells (LC50) at a 1:1 ratio (0.25×10⁶:0.25×10⁶). Dead cells were prepared by heating 0.25×10⁶ cells at 80℃ for 15 minutes and then mixed with the same number of living cells.
The CCK-8 assay was used to evaluate cell viability, with cells processed as described above. For quantitative analysis, the cell suspension (90 μL/well) was added to a 96-well plate, and CCK-8 solution (10% of the well volume) was added to each well. The plate was incubated for 2 hours in the incubator, and the absorbance at 450 nm was measured using a microplate reader.
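For illustration, absorbance readings from such an assay are conventionally converted to percent viability by normalizing against blank and untreated-control wells; the formula below is the standard one for colorimetric assays and is not stated explicitly in this study, and all readings are hypothetical.

```python
import numpy as np

# Hypothetical OD450 readings (one value per well).
blank   = np.array([0.10, 0.11, 0.10, 0.09, 0.10])   # medium + CCK-8, no cells
control = np.array([1.05, 1.10, 1.02, 1.08, 1.06])   # untreated living cells
sample  = np.array([0.62, 0.65, 0.60, 0.64, 0.61])   # test group

# Conventional formula: viability (%) = (A_sample - A_blank) / (A_control - A_blank) * 100
a_blank = blank.mean()
viability = (sample - a_blank) / (control.mean() - a_blank) * 100
print(f"{viability.mean():.2f}% ± {viability.std(ddof=1):.2f}% (SD)")
```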
Once the cells were processed as described above, a live/dead staining solution was prepared as a premixed solution of 10 μg/mL AO and 50 μg/mL PI. The premixed AO/PI solution (100 µL) was added to 900 µL of complete medium containing the cells. The tubes were incubated for 15 to 20 minutes, and the cells were transferred into a small Petri dish containing complete medium. Fluorescence images were taken using a fluorescence microscope; red and green fluorescence indicated dead and living cells, respectively.
Cell images were captured using a microscope (ECLIPSE Ti2; Nikon) at 40× magnification while shifting the position of the plate. Each bright-field image was captured in triplicate at the same position using different filters. For green fluorescence (AO), the excitation and emission filters were fixed at 446∼486 nm and 500∼550 nm, respectively. Similarly, PI was detected using an excitation filter of 542∼582 nm and an emission filter of 582∼636 nm.
To prepare the microscope images for training, OpenCV, a Python programming library for computer vision, was used. A laptop equipped with an M1 Pro chip (14-core GPU) and 16 GB RAM was used for dataset creation and training. The image pre-processing code and models used in this study are available online (10).
Initially, cell viability was quantitatively analyzed in the LC100 and LC50 groups using the CCK-8 assay to validate the proportion of viable cells. The CCK-8 assay showed 100.00%±3.00% and 57.11%±2.29% cell viability in the LC100 and LC50 groups, respectively (Fig. 1A). The AO/PI data supported the CCK-8 results, showing nearly all cells viable in the LC100 group and approximately half live and half dead in the LC50 group (Fig. 1B).
Before carrying out cell detection, image processing was performed to construct the training dataset. Noise and fine detail were removed from all bright-field, FITC channel, and TRITC channel images using the Gaussian Blur and Canny functions (Fig. 1C). Gaussian Blur weights the pixels around each pixel according to a Gaussian distribution, removing noise and detail and smoothing the image. The Canny function is an edge-detection algorithm that finds points where pixel intensity changes sharply. Most importantly, noise such as non-cell particles was successfully removed, and only the edges of the cells remained. Altogether, the raw images were pre-processed into cleaner images using the Gaussian Blur and Canny functions.
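A minimal OpenCV sketch of this step is shown below; the 7×7 Gaussian kernel matches the description in Fig. 1C, while the file name and Canny thresholds are illustrative assumptions.

```python
import cv2

# "fitc_channel.png" is a hypothetical input file name.
img = cv2.imread("fitc_channel.png", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(img, (7, 7), 0)  # Gaussian-weighted smoothing removes noise and detail
edges = cv2.Canny(blurred, 50, 150)         # keep only points of sharp intensity change
cv2.imwrite("fitc_edges.png", edges)
```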
The resolution of the pre-processed images was 2,928×2,928 pixels, whereas the size recommended by Teachable Machine is 224×224 pixels, and simply downscaling the captured images risked misinterpretation due to loss of quality. To solve this problem, it was necessary to recognize and extract only the cells from the entire image, so the regions of interest were distinguished and cropped automatically. Most of the background noise in the FITC and TRITC channel images was removed using the Gaussian Blur and Canny functions, and the candidate cell regions were then expanded with the Dilation function before the images were contoured. The coordinates of the cells in each image were obtained through the Contour function, and the average cell size was calculated manually so that objects with sizes outside the expected range could be removed automatically during cell image segmentation. Contour coordinates were obtained for each of 644 cell objects, boxes surrounding the contours were constructed, and the approximate sizes of the cell objects were estimated from the boxes. The minimum, maximum, and average box sizes were 50,830, 468,720, and approximately 109,531 pixels, respectively. Because various types of image processing were performed, the apparent size of the cells differed somewhat from their original size (Fig. 2). Experiments 2, 4, 6, 8, 9, and 10 applied this filtering, which removed unnecessary small particles, whereas Experiments 1, 3, 5, and 7 were unfiltered. Altogether, image slicing and object filtering removed small background objects.
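The following sketch continues from the edge image above. The structuring-element size and iteration count are illustrative assumptions, while the size bounds are the box areas (in pixels) reported in the text.

```python
import cv2

MIN_AREA, MAX_AREA = 50_830, 468_720  # box-size bounds reported above (pixels)

brightfield = cv2.imread("brightfield.png")  # hypothetical file names
edges = cv2.imread("fitc_edges.png", cv2.IMREAD_GRAYSCALE)

# Dilate so broken outlines connect into single objects, then close remaining gaps.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.dilate(edges, kernel, iterations=2)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Contour coordinates -> bounding boxes -> size-filtered crops from the bright-field image.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(contours):
    x, y, w, h = cv2.boundingRect(c)
    if MIN_AREA <= w * h <= MAX_AREA:  # drop objects outside the expected cell-size range
        cv2.imwrite(f"cell_{i}.png", brightfield[y:y + h, x:x + w])
```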
Because the number of images available for training the object recognition process was small, a model trained on them alone risked recognizing only the processed training images: it might fail on new images not seen during training or overfit through excessive learning of the training set. To address these problems, image augmentation was applied, generating transformed versions of the original images. Specifically, the dataset was expanded by rotating each cell image by 90, 180, and 270 degrees.
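A minimal sketch of this rotation augmentation (file names hypothetical):

```python
import cv2

crop = cv2.imread("cell_0.png")  # a single-cell crop from the previous step

rotations = {
    90: cv2.ROTATE_90_CLOCKWISE,
    180: cv2.ROTATE_180,
    270: cv2.ROTATE_90_COUNTERCLOCKWISE,
}
for angle, code in rotations.items():
    cv2.imwrite(f"cell_0_rot{angle}.png", cv2.rotate(crop, code))
```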
To compare the results of the experiments under diverse conditions, training was conducted on a total of 10 dataset types using the same model, with data split between training and testing at a fixed ratio of 4:1 (Supplementary Table S1); each row of the table describes the dataset used in Experiments 1 to 10 (a split sketch follows below). Experiments 1, 3, 5, and 7 did not apply filtering and added an Exception label, dividing the data into a total of three labels, whereas Experiments 2, 4, 6, and 8 to 10 applied filtering and used only two labels, without the Exception label. Experiments 1 and 2 used a dataset of single-cell images. Experiments 3 and 4, conducted to compare results across image data, used a dataset that also included images with incomplete cell shapes, where the cells were not fully captured by the camera lens or were adjacent to one another. Experiments 5 to 8 used a dataset of fibroblast images. To check the difference in cell survival classification performance depending on the presence or absence of filtering, images in Experiments 1, 3, 5, and 7 were processed without filtering and the labels were divided into three types (green, red, and exception), whereas in Experiments 2, 4, 6, and 8 to 10, filtering was applied and the labels were divided into two types (green and red) (Fig. 3). In the unfiltered experiments, most of the images that would otherwise have been removed by filtering were treated as exceptions.
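A sketch of how such a 4:1 split might be produced from per-label folders of crops; the folder layout here is a hypothetical convention, not the authors' actual directory structure.

```python
import random
from pathlib import Path

random.seed(0)  # reproducible split
for label in ("green", "red"):
    crops = sorted(Path(label).glob("*.png"))
    random.shuffle(crops)
    n_test = len(crops) // 5  # hold out one fifth for testing (4:1 ratio)
    for i, path in enumerate(crops):
        split = "test" if i < n_test else "train"
        dest = Path(split) / label
        dest.mkdir(parents=True, exist_ok=True)
        path.rename(dest / path.name)
```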
Teachable Machine, a web-based open-source AI tool, was used to check the results of the various generated datasets relatively quickly. The learning parameters of Teachable Machine were set to 200 epochs, a batch size of 16, and a learning rate of 0.001. To measure the performance of the six models created in this way with objective values, the confusion matrix was checked (Supplementary Fig. S2) (11). The confusion matrix is an indicator used to evaluate the performance of classification algorithms in fields such as deep learning, and it enables the user to check whether the given data have been classified as intended. Notably, Experiment 2 showed the highest true positive rate (93.06%) among all experiments, with an accuracy, recall, and F1 score of 0.94, 0.93, and 0.95, respectively. Altogether, Experiments 2 and 4, which had no Exception label, showed a higher true positive rate/true negative rate ratio than Experiments 1 and 3, which had an Exception label.
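For reference, these metrics derive from the confusion matrix counts as follows; the counts in this sketch are illustrative, not the study's actual values.

```python
# Illustrative counts; TP/FP/FN/TN are read off the confusion matrix.
tp, fp, fn, tn = 67, 5, 5, 49

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)                            # also the true positive rate
f1        = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.2f}, recall={recall:.2f}, F1={f1:.2f}")
```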
As described above, Experiments 1, 3, 5, and 7 did not apply filtering and added an Exception label, for a total of three labels, whereas Experiments 2, 4, 6, and 8 to 10 applied filtering and classified the data into two labels without the Exception label. Experiment 9 used a dataset composed of the combined data from Experiments 4 and 8, and Experiment 10 used a dataset composed of the combined data from Experiments 4 and 9. Accuracy reflects the classification results on test data for each trained model. The filter-applied Experiments 2 and 4 showed relatively higher accuracy than the unfiltered Experiments 1 and 3.
Interestingly, the average accuracy on the MSC datasets was 93.25%, compared with 84.90% on the fibroblast datasets. To further validate the accuracy of the Teachable Machine models, experiments were carried out using both MSCs and fibroblasts in a single experiment, and the results again showed high performance, with an average accuracy above 85% (Table 1). In Experiment 2 (clear MSC images only), a total of 126 cell images were tested, and cell object recognition and survival classification were performed accurately for 119 of them. These results indicate that data with unclear cell shapes, in which cells were not completely captured by the camera lens, degraded model performance. In addition, unnecessary air bubbles in the images were removed through filtering, so that cell survival could be distinguished using only the two labels, green and red. Therefore, accurately extracting and learning only the cells from an image yields higher classification performance than handling exceptions with an Exception label.
Automatically determining cell survival from cell shape draws on machine learning technologies such as computer vision and deep learning, as well as on biology and medicine. In biology and medicine, current staining-based methods for confirming cell survival can encounter numerous issues during the staining process and can be time-consuming, and from a computer vision standpoint they pose labeling challenges. Labeling is the work of creating an answer sheet for given data and is generally required for object recognition. This work is mostly done manually using dedicated labeling tools (12); it demands precision and takes considerable time. In this study, contour functions were applied to individual images to recognize and classify cell objects automatically, without a separate labeling process. As a result, labeling 100 images, which took approximately 60 minutes manually, was completed within 3 minutes at most with the image-processing pipeline used in this study, greatly improving time efficiency.
In this study, cell survival was assessed from cell morphology with high accuracy, especially in the classification of single cells. However, when multiple cells were adjacent to one another, their contour boxes overlapped and the cells were recognized as a single object, which lowered accuracy: cells that are too close together are not recognized as individual objects in the image-processing step. To improve this, additional datasets of adjacent cells need to be created and learned, and efforts are underway to find optimized kernels for the image-processing step. Through this process, the accuracy of cell survival classification is expected to increase further, broadening the range of applications. This model could also address problems arising from the viability of therapeutic cells such as stem cells and fibroblasts, which limits their function and increases heterogeneity in therapeutic applications (13, 14).
The present work addressed the critical need for rapid and efficient AI-based methods to assess cell viability. The model determined cell viability rapidly and efficiently from the visual characteristics of cells, avoiding the use of dyes or chemicals and the associated stability problems. The implications of this study extend beyond research laboratories to clinical practice, where timely assessment of cell viability with high reproducibility is indispensable for minimizing the risk of treatment failure. By enabling rapid and accurate viability assessment, this approach has great potential to enhance efficiency and reliability in cell biology and biomedical research.
Supplementary data including one table and two figures can be found with this article online at https://doi.org/10.15283/ijsc24105
Notes
Authors’ Contribution
Conceptualization: JHJ, JY. Data curation: CK, JS, DC, YKP. Formal analysis: CK, JS, DC, YKP. Funding acquisition: JHJ, JY. Investigation: CK, JS, YKP. Methodology: CK, JS. Project administration: JHJ, JY. Resources: JHJ, JY. Software: CK, JS, YKP. Supervision: JHJ, JY. Validation: CK, JS, YKP. Visualization: CK, JS, DC, YKP. Writing – original draft: CK, JS, YKP. Writing – review and editing: DC, JHC, DR, JHJ, JY.
References
1. Adan A, Kiraz Y, Baran Y. 2016; Cell proliferation and cytotoxicity assays. Curr Pharm Biotechnol. 17:1213–1221. DOI: 10.2174/1389201017666160808160513. PMID: 27604355.
2. Aslantürk ÖS. Larramendy ML, Soloneski S, editors. 2018. In vitro cytotoxicity and cell viability assays: principles, advantages, and disadvantages. Genotoxicity - a predictable risk to our actual world. InTech;DOI: 10.5772/intechopen.71923.
3. Chan LL, Wilkinson AR, Paradis BD, Lai N. 2012; Rapid image-based cytometry for comparison of fluorescent viability staining methods. J Fluoresc. 22:1301–1311. DOI: 10.1007/s10895-012-1072-y. PMID: 22718197.
4. Ude A, Afi-Leslie K, Okeke K, Ogbodo E. Sukumaran A, Mansour MA, editors. 2023. Trypan blue exclusion assay, neutral red, acridine orange and propidium iodide. Cytotoxicity - understanding cellular damage and response. IntechOpen;DOI: 10.5772/intechopen.105699.
5. Dojindo Laboratories. 2013. Cell Counting Kit-8 technical manual. Dojindo Laboratories; Kumamoto.
6. Sarker IH. 2021; Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput Sci. 2:420. DOI: 10.1007/s42979-021-00815-1. PMID: 34426802. PMCID: PMC8372231.
7. Teachable Machine [Internet]. Google Creative Lab. Available from: https://teachablemachine.withgoogle.com. cited 2022 Oct 21.
8. Howard AG, Zhu M, Chen B, et al. 2017. MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861 [Preprint]. Available from: https://doi.org/10.48550/arXiv.1704.04861. cited 2022 Oct 21.
9. Carney M, Webster B, Alvarado I, et al. 2020. Apr. 25-30. Teachable machine: approachable web-based tool for exploring machine learning classification. Paper presented at: CHI '20: CHI Conference on Human Factors in Computing Systems. Honolulu (HI), USA: 1–8. DOI: 10.1145/3334480.3382839.
10. OOTE-1138. 2025. Feb. 1. OOTE-1138/Cell_detection: v1.0.0 [Internet]. Zenodo;Genève: Available from: https://zenodo.org/records/14784130. cited 2025 Feb 1.
11. Liao S, Huang C, Zhang H, Gong J, Li M, Wang Z. 2022. Aug. 10-13. Object detection of welding defects in SMT electronics production based on deep learning. Paper presented at: 2022 23rd International Conference on Electronic Packaging Technology (ICEPT). Dalian, China: 1–5. DOI: 10.1109/ICEPT56209.2022.9873297.
12. Düntsch I, Gediga G. 2019; Confusion matrices and rough set data analysis. J Phys Conf Ser. 1229:012055. DOI: 10.1088/1742-6596/1229/1/012055.
13. Augustine R, Gezek M, Nikolopoulos VK, Buck PL, Bostanci NS, Camci-Unal G. 2024; Stem cells in bone tissue engineering: progress, promises and challenges. Stem Cell Rev Rep. 20:1692–1731. DOI: 10.1007/s12015-024-10738-y. PMID: 39028416.
14. Zhidu S, Ying T, Rui J, Chao Z. 2024; Translational potential of mesenchymal stem cells in regenerative therapies for human diseases: challenges and opportunities. Stem Cell Res Ther. 15:266. DOI: 10.1186/s13287-024-03885-z. PMID: 39183341. PMCID: PMC11346273.
Fig. 1
Cell image pre-processing. (A, B) The cell viability of human mesenchymal stem cells (hMSCs) and human lung fibroblasts (MRC-5) was quantitatively analyzed using the CCK-8 assay in both LC100 and LC50 groups (n=5). A live/dead assay was also used to visualize live and dead cells in both groups. Data are presented as mean±SEM. ****p<0.0001. Scale bars=50 μm. (C) For image processing, objects were extracted from the bright-field images and classified by survival using coordinate information obtained from the FITC and TRITC channel images. The FITC channel images were passed through a Gaussian Blur filter (7×7 kernel) to remove the background, and edges were then extracted using the Canny edge detection algorithm. Dilation convolution was applied after Canny edge detection: because an object is not recognized as a single object when its outline is not completely connected, the receptive field was expanded by dilation before a closing operation was performed. Each object was cropped by storing the coordinates of the points constituting the obtained outline as a list and taking the corresponding contour box from the bright-field image.
Fig. 2
Image contours and slice analysis. Representative bright-field images included in the datasets of Experiments 1 to 10 are shown together with their corresponding FITC and TRITC channel images. Objects separated from each bright-field image are listed according to the cell coordinates obtained from the FITC and TRITC channel images.
Fig. 3
Classifier analysis. The outcomes of classifying test data with models trained on the datasets of Experiments 1 to 10 are displayed. All test images were extracted as in Fig. 2. Some models were trained with three labels (Experiments 1, 3, 5, and 7); in these cases, most of the images that were not filtered out during image processing were treated as exceptions.
Table 1
Accuracy of datasets using Teachable Machine


