Abstract
OBJECTIVE
Unfolding is a rendering method that visualizes an organ at a glance by virtually incising it. Whereas conventional methods exploit gray-scale volume datasets such as CT or MR images, we use the Visible Korean Human dataset, which preserves actual color and is therefore helpful for studying anatomy. The segmented images of the Visible Korean Human dataset store the boundaries of organs. Because medical experts perform the segmentation manually from the anatomical color images, the process is very time-consuming. In practice, therefore, only images sampled at intervals from the entire set of color images are segmented. When a segmented volume dataset is generated from these selected images, the final results deteriorate because segmentation information for the skipped images is missing. In this paper, we solve this problem by generating intermediate images without additional manual segmentation.
METHODS
First, we compare the organ contours of two consecutive manually segmented images and mark the regions where they differ with a user-defined value in the intermediate images. This procedure is repeated for every pair of manually segmented images to reconstruct the entire volume dataset, which consists of the manually segmented images and their generated intermediate images. In the rendering stage, we perform radial volume ray casting along the central path of the target organ. When a ray reaches a region holding the user-defined value, we advance through that region without composition until its boundary is reached. Color composition then begins by backtracking, since the traversed region is regarded as the thickness of the organ wall.
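The following Python sketch illustrates these two steps under simplifying assumptions: segmented slices are binary masks, the contour difference is approximated by the symmetric difference of consecutive masks, and the ray is sampled at unit steps along a precomputed radial direction. The names `FILL_VALUE`, `make_intermediate_slices`, and `cast_radial_ray`, as well as the fixed opacity, are illustrative and not taken from the paper's actual implementation.

```python
import numpy as np

# Hypothetical marker values; the actual values used in the paper are not specified.
FILL_VALUE = 2    # user-defined value marking where consecutive contours differ
ORGAN_VALUE = 1   # voxels belonging to the organ in a manually segmented slice


def make_intermediate_slices(slice_a, slice_b, n_between):
    """Generate n_between intermediate label slices between two consecutive
    manually segmented binary masks.  Voxels where the two contours disagree
    are marked with FILL_VALUE instead of being segmented manually."""
    a, b = np.asarray(slice_a, bool), np.asarray(slice_b, bool)
    inter = np.where(a & b, ORGAN_VALUE, 0).astype(np.uint8)
    inter[a ^ b] = FILL_VALUE                     # region swept by the moving contour
    return [inter.copy() for _ in range(n_between)]


def cast_radial_ray(labels, colors, origin, direction, max_steps=256, alpha=0.5):
    """Radial ray casting with skip-and-backtrack: while the ray is inside a
    FILL_VALUE region it only records positions; on leaving the region it
    composites the recorded samples in reverse order, treating the skipped
    span as the thickness of the organ wall.  `colors` stands in for the
    color volume (shape labels.shape + (3,)), an assumption for this sketch."""
    acc_rgb, acc_a = np.zeros(3), 0.0
    skipped = []
    pos = np.asarray(origin, float)
    step = np.asarray(direction, float)

    def composite(idx):
        nonlocal acc_rgb, acc_a
        acc_rgb += (1.0 - acc_a) * alpha * colors[idx]   # front-to-back compositing
        acc_a += (1.0 - acc_a) * alpha

    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, labels.shape)):
            break
        if labels[idx] == FILL_VALUE:
            skipped.append(idx)                   # advance without composition
        else:
            for back_idx in reversed(skipped):    # backtrack over the skipped span
                composite(back_idx)
            skipped = []
            if labels[idx] == ORGAN_VALUE:
                composite(idx)
        pos += step
    return acc_rgb
```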