An effective RGB color selection for complex 3D object structure in scene graph systems

The goal of this project is to develop a complete, fully detailed, interactive 3D model of the human body and its systems that lets users interact in 3D with every element of each system, in order to teach students human anatomy. Some organs, such as the brain, lungs, liver, and heart, carry a great deal of anatomical detail and must be described accurately and minutely. These organs need to carry all of the detailed medical information required to learn how to operate on them, and should allow the user to add careful, precise markings indicating the operative landmarks at the surgical site. Attaching so many different items of information is challenging when the target area is highly detailed and overlaps with other areas carrying related medical information. Existing tagging methods did not give us enough locations to attach the information to. Our solution combines several tagging methods built around one marking technique: selecting RGB color regions drawn in the texture of the complex 3D object. The RGB color codes then serve as tag IDs, and relational tables store the information associated with each anatomical region. With this marking scheme, the entire space of (R, G, B) color values is available to identify anatomical regions, and it also becomes possible to define multiple overlapping regions.

1.
INTRODUCTION 2D visualization plays a crucial role in modern 3D rasterization. With the aid of 2D algorithms, rendering can be applied to texture images and object animations. Based on color-channel information, textures can be classified into two types: 1) images with transparent areas and 2) images without transparent areas. An image without transparency has no transparent pixels ("holes"), because it consists only of RGB color components with high per-pixel alpha values. This property is highly beneficial for the display logic: any two objects can be merged without affecting the color pixels, and the rendering process becomes notably simpler and quicker. For non-transparent images, rasterization is quite effortless because the entire texture is handled in blocks rather than pixel by pixel: a memory-copy operation transfers the memory array of the texture into the frame buffer. Only one rule exists: to avoid addressing outside the framebuffer, attention must be paid to the screen edges, and, depending on the object position, the data must be segmented at copy time. If any part of an object lies beyond the screen boundary in any direction, viewport culling is executed row by row on the texture image. Although this requires further processing, the solution is still rapid enough, which is why this method was popular and preferred in traditional computer games [1][2][3].
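The row-by-row clipped block copy described above can be sketched as follows. This is a minimal illustration using flat Python lists as pixel buffers; all names are ours, not from any real engine:

```python
def blit_opaque(dst, dst_w, dst_h, src, src_w, src_h, x, y):
    """Copy an opaque texture into a framebuffer at position (x, y).

    dst and src are flat, row-major lists of pixels. Rows that fall
    outside the screen are culled row by row, and each copied row is
    clipped against the left and right screen edges before a single
    block copy (the memcpy analogue) transfers it.
    """
    for row in range(src_h):
        sy = y + row
        if sy < 0 or sy >= dst_h:          # viewport culling, row by row
            continue
        sx0 = max(x, 0)                    # clip at the left screen edge
        sx1 = min(x + src_w, dst_w)        # clip at the right screen edge
        if sx0 >= sx1:
            continue
        n = sx1 - sx0
        src_off = row * src_w + (sx0 - x)
        dst_off = sy * dst_w + sx0
        dst[dst_off:dst_off + n] = src[src_off:src_off + n]  # block copy
```

Because whole rows are transferred with slice assignment instead of per-pixel writes, the inner loop cost stays independent of the pixel contents, matching the speed argument above.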
In a transparent texture, part of the image consists of transparent points or pixels, possibly blended at varying intensity. The classic implementation is the colorkey: an unused, preselected color is employed to mark the transparent pixels. Nowadays this is achieved with an alpha channel associated with the image. Demand for this type of texture has grown, and it is used in many application areas to enhance the visualization experience. Handling this kind of data is not especially complicated, but it is computationally expensive: because transparent and non-transparent areas alternate within the texture, the rendering process must run at the per-pixel level [4][5][6]. The graphics engine evaluates the objects pixel by pixel and composes the final image, as seen in Figure 1. The main drawback is the high computational cost of per-pixel drawing and of calling thousands of functions: for every pixel, color information is fetched from memory, and the destination color information must already exist in the memory buffer. This technique cannot fulfill the demands of today's complex games, where up to a hundred different dynamic objects are drawn on one screen simultaneously [6][7][8][9]. This paper proposes an effective RGB color selection method for complex 3D object structures in scene graph systems. Our solution combines several tagging methods built around one marking technique: selecting RGB color regions drawn in the texture of the complex 3D object. The article is structured as follows: Section 2 reviews related work on RGB color selection for complex 3D object structures. Section 3 presents the proposed method, and Section 4 describes its implementation and algorithms. Section 5 presents the experimental analysis and evaluates performance.
Section 6 concludes the proposed system and discusses directions for future work.
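The colorkey-to-alpha conversion mentioned in the introduction can be sketched as follows. This is a minimal Python illustration; the magenta key value is our assumption of a typical unused color:

```python
MAGENTA = (255, 0, 255)  # a typical unused colorkey (our assumption)

def colorkey_to_rgba(pixels, key=MAGENTA):
    """Convert an RGB image with a colorkey into an RGBA image.

    Pixels matching the preselected, otherwise unused key color become
    fully transparent (alpha 0); all other pixels become fully opaque
    (alpha 255), mirroring how a colorkey marks the transparent area.
    """
    out = []
    for (r, g, b) in pixels:
        a = 0 if (r, g, b) == key else 255
        out.append((r, g, b, a))
    return out
```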

2.
RELATED WORKS Many researchers have studied generating 3D models from photographs, attracting the computer vision, photogrammetry, and photography communities to enhance their applications. In the computer vision community, this research focuses on recovering camera parameters and surface geometry. Balazs et al. presented a novel solution to the gap problem: vertex programs generate appropriately shaded fat borders that fill the gaps between neighboring patches [8]. In [9], Baumberg presented a unique autonomous approach that extends a traditional two-dimensional image blending method to a three-dimensional surface and produces good-quality results at low computational cost. That study describes a system for creating texture maps for a surface of arbitrary topology from images of a natural object taken with a standard camera under uncontrolled lighting. In recent years, in advanced studies of images and complex objects, computer vision methods have played a game-changing role in replacing manual 2D image analysis. In [10], Liu et al. proposed autonomous object segmentation using RGB-D cameras; the authors developed a tri-state filter that combines boundary information, RGB values, and distance. Richtsfeld et al. implemented and explained the coordination between patches on a 3D surface image based on Gestalt principles to construct a learning-based structure [11]. In [12], Yalic segmented the image using surface normals and region similarities. In [13], AL-Mousa et al. employed a method to enhance an RGB color image encryption-decryption technique using an encryption square key combined with a dimensional matrix. The authors of [14] presented a richly annotated, large-scale repository of 3D models that provides rigid alignments and bilateral symmetry planes.
Moreover, it used deep learning to encode the shape priors of a particular category. In [15], a three-dimensional RCNN model is used to generate three-dimensional object shapes in a volumetric representation. Fan et al. proposed a point set generation network that produces three-dimensional shapes as point clouds from two-dimensional images [16]. Many researchers have worked on volumetric 3D shape representations, both with and without deep learning; exceptions include shapes generated as point clouds [17], cuboid primitives, and various surface representations [18][19][20][21][22][23]. Chengjie Niu et al. presented a technique to recover three-dimensional contour structure from a single RGB image, where structure means cuboid-represented shape parts together with their connectivity and symmetry [24]. Techniques have also been proposed for autonomous recognition of structural elements from camera pictures of construction areas [25,26]. Trucco and Kaka [27] proposed a 2D algorithm to recognize objects and structures in construction-site images. To detect columns in a given image, an object recognition technique was developed that trains classifiers on 100 sets of images of concrete columns [28]. In [29], the authors developed a technique for comparing the final construction schedule with the updated progress of work done at the construction site. Podbreznik and Rebolj [30] presented an architecture for reconstructing three-dimensional geometric models from two-dimensional images of the construction area. In [31], Hyojoo Son and Changwan Kim presented a highly productive, autonomous three-dimensional structural-element recognition and modeling technique that uses color and three-dimensional data obtained from a stereo vision system deployed at the construction site for monitoring purposes.

3.
OUR METHODOLOGY On a 3D anatomical object [32][33][34][35], for instance the brain, there are many anatomic landmarks that need to be marked with information for the student, via pin marking, direct drawing, or selecting the entire brain to see detailed information, as shown in Figure 2(a). But there are also many grooves and areas of surgical information, which may overlap, so the problem is how to mark all of them. Here, we propose a method that plots anatomy on individual textures, based on the RGB color code of each region at its true UV position [7][8][9][36]. Our method can be compared with PIN selection, which only targets a single point on the 3D object, and with marking via an alpha channel in the object's texture, which limits the number of selection points to a value between 0 and 255, as shown in Figure 2(b) [9,37,38]. Our method also handles storage and connects to the database to accurately retrieve further information about each region.

Methodology
With the RGB region selection method we can mark a multitude of regions, since every combination of RGB color values is available, and regions can still be identified even where they overlap, which provides the required viewer effects [2]. As a result, it is possible to mark and select areas that overlap one another, which gives us a very efficient solution to our problem. Figure 3 presents our process for achieving this solution in six steps. The following subsections explain the implementation in detail.

Draw before selection and identifier
Draw and paint each area on a texture, based on the UV position obtained from the texture map and normal map. The RGB color codes are recorded as (R, G, B) triples or as hex values. Within one texture, all regions must use different RGB color codes. The two colors (0, 0, 0) and (1, 1, 1) must not be used for regions, since they are reserved for showing the selected and deselected states of the 3D object [8,9]. Figure 4 shows the color areas drawn at their accurate UV positions.
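The two constraints above, unique codes per texture and no reserved colors, can be checked with a small sketch. This is our own illustration in Python; note that the text's normalized (1, 1, 1) corresponds to white (255, 255, 255) in 8-bit terms:

```python
# Reserved select/deselect markers (the text's (0,0,0) and (1,1,1),
# expressed here in 8-bit values; this mapping is our assumption).
RESERVED = {(0, 0, 0), (255, 255, 255)}

def validate_region_palette(region_colors):
    """Check the RGB codes painted on one texture.

    region_colors maps region names to (r, g, b) tuples. Every region
    on a texture must use a distinct RGB code, and the two reserved
    colors must not appear. Returns a list of problem descriptions
    (empty if the palette is valid).
    """
    problems = []
    seen = set()
    for name, rgb in region_colors.items():
        if rgb in RESERVED:
            problems.append(f"{name}: reserved color {rgb}")
        if rgb in seen:
            problems.append(f"duplicate color {rgb}")
        seen.add(rgb)
    return problems
```

Running such a check after painting each texture catches collisions before the color codes are committed to the database.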

Fat border definition
In this step, the gaps between adjacent patches, caused by their fully independent triangulations, are filled with shaded fat borders. These borders consist of connected triangles whose parameters, such as orientation, width, and color, are view-dependent and are recomputed in every frame by a vertex program. The input of the gap-filling algorithm is a set of N level-of-detail (LOD) sets. The only condition on these LOD sets is that a level M_ki can be chosen from each M_i such that the distance between the approximated surface M_ki and the original surface M_i, when projected on the screen, is nowhere greater than e_img pixels, particularly along the patch borders [8]. This can be ensured by requiring that the screen-space projection of a sphere with radius e is never larger than e_img pixels. The algorithm requires as input a set of polylines, each depicting one boundary curve. For each line segment, the border is expanded perpendicular to the path and along the viewing direction in such a way that its screen-space projection covers at least e_img pixels on each side, as shown in Figure 5 [8]. The newly introduced triangles are set up as a strip exactly along the edges of the original patch, using the newly generated vertices. (c) Six new vertices are generated by displacing each border vertex V_i by e perpendicular to each of the directions enumerated in the steps above and along the viewing direction D_i = (C − V_i)/||C − V_i|| (see Figure 6), where C is the camera position and e is the object-space geometric error guaranteeing a screen-space error of e_img pixels.
(d) The newly created vertices are additionally pushed away from the viewer along the viewing direction, again by e.
(e) New triangles are generated by connecting the resulting vertices (see Figure 5). Because of the fat border's simple structure, a single quad strip suffices for each boundary curve.
(f) The color of each new vertex is calculated by assigning it the shading parameters of the native border vertices. The fat borders thereby provide gap filling without affecting the original shading.
Because the viewing direction changes dynamically from pixel to pixel, the positions of the new vertices must be updated continuously. Although six points are necessary to guarantee the enlargement of the border even at sharp corners, in real-time use four or even two new vertices are usually enough to deliver good results, with a large benefit to rendering performance. The corresponding fat-border generation schemes for 2, 4, and 6 vertices are illustrated in Figure 6. In our case we employ four vertices, so the vertices V_i2 and V_i5 of equation 1 are ruled out; if only two vertices are employed, only V_i2 and V_i5 are generated. Note also that the fat border of a patch depends only on its level-of-detail (LOD) level: if the LOD level does not change, the fat border does not change either. We exploit this property by enclosing the fat border in a display list, which removes the need to send this data to the graphics hardware in every frame and makes the fat border's bandwidth requirement practically zero [8]. The only prerequisite is to issue six dummy vertices and their mutual connectivity for every border vertex. The vertex program runs only for the border vertices and is disabled while the patches themselves are rendered. Figure 7 shows the whole process.
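The vertex displacement of steps (c) and (d) can be sketched as follows. This is an illustrative Python sketch under our own conventions, not the authors' vertex program; `eps` stands for the object-space error e:

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def fat_border_vertices(V, C, eps, directions):
    """Displace one border vertex V into fat-border vertices.

    For each expansion direction, V is moved by the object-space error
    eps perpendicular to the border, then pushed away from the camera C
    along the viewing direction D = (C - V)/|C - V| (negated, so the
    offset points away from the viewer), again by eps.
    """
    D = normalize(tuple(c - v for c, v in zip(C, V)))
    away = tuple(-d * eps for d in D)            # step (d): push away from viewer
    return [tuple(v + d * eps + a for v, d, a in zip(V, direction, away))
            for direction in directions]         # step (c): perpendicular offsets
```

In the real algorithm this runs per frame in a vertex program, with the expansion directions taken from the border geometry of Figure 6.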

Connect to the database
Each RGB color code that has been created should be numbered and named after its anatomical structure, together with data fields holding the relevant information about the anatomy (e.g., names of the parts, names in different languages (currently Vietnamese, Latin, and English), a detailed description of the anatomy, the RGB color code, etc.). We create relational tables (using the MySQL database system) so that the information can be queried later, as well as tables that store the path of the texture matching each required 3D object, as shown in Figure 8.
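The tables can be sketched as follows. As a self-contained stand-in for the MySQL tables we use SQLite in memory; the table and column names are illustrative assumptions, not the paper's actual schema:

```python
import sqlite3

# In-memory SQLite stands in for the MySQL database described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE anatomy_region (
        id          INTEGER PRIMARY KEY,    -- numbered per RGB code
        rgb_code    TEXT UNIQUE NOT NULL,   -- e.g. '#C83200'
        name_en     TEXT NOT NULL,          -- English name
        name_vi     TEXT,                   -- Vietnamese name
        name_la     TEXT,                   -- Latin name
        description TEXT                    -- detailed anatomical description
    )""")
conn.execute("""
    CREATE TABLE object_texture (
        object_id   TEXT NOT NULL,          -- which 3D object this belongs to
        path        TEXT NOT NULL           -- path to the matching RGB texture
    )""")
conn.execute(
    "INSERT INTO anatomy_region (rgb_code, name_en, name_vi, name_la, description)"
    " VALUES (?, ?, ?, ?, ?)",
    ("#C83200", "frontal lobe", "thùy trán", "lobus frontalis", "..."))
row = conn.execute(
    "SELECT id, name_en FROM anatomy_region WHERE rgb_code = ?",
    ("#C83200",)).fetchone()
```

The `UNIQUE` constraint on `rgb_code` enforces, at the database level, the rule that all region codes in a texture must differ.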

Programmable manipulation to control the main texture and RGB texture states
It is recommended to keep the main texture always loaded on a fixed channel (e.g., index 1). RGB textures should be on channel 0, with only one RGB texture loaded at a time; a different RGB texture is loaded only when a change is needed. We use a high pixel density of 2048 × 2048 pixels to create good-looking color selections, and the edges of the regions are less distorted [37].
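The channel policy above can be sketched with a tiny state holder. This is our own minimal illustration; the file names and the dictionary standing in for the engine's texture-unit binding are assumptions:

```python
MAIN_CHANNEL = 1   # the main texture always stays bound here (see text)
RGB_CHANNEL = 0    # exactly one RGB selection texture at a time

class TextureChannels:
    """Minimal sketch of the texture-state policy described above.

    The `bound` dict stands in for the engine's real texture-unit
    bindings (an assumption); swapping an RGB texture replaces
    whatever was previously on channel 0, so only one RGB texture
    is ever loaded at a time.
    """
    def __init__(self, main_texture):
        self.bound = {MAIN_CHANNEL: main_texture, RGB_CHANNEL: None}

    def swap_rgb(self, rgb_texture):
        self.bound[RGB_CHANNEL] = rgb_texture
        return self.bound
```

Keeping the main texture pinned to its channel means only the small RGB selection texture ever needs rebinding when the user switches region sets.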

Interact with the selected area
The selected RGB color code is obtained by picking from the RGB texture in the event handler [31,36]. Based on the RGB color code (red, green, blue), we query the database to retrieve the ID and name of the region, from which all relevant information can be retrieved [8]. The handler is declared as a virtual method in the algorithm. Figure 9 shows the effect of RGB textures when interacting with different selected areas of the head.
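The pick-and-lookup step can be sketched as follows. This is our own illustration: the texture is a flat list of pixels, and an in-memory dict stands in for the database query described above:

```python
def pick_region(rgb_texture, width, uv, regions):
    """Sample the RGB selection texture at the picked UV coordinate
    and map the color code to a region record.

    rgb_texture is a flat, row-major list of (r, g, b) tuples;
    `regions` maps color codes to (id, name) and stands in for the
    database query in the text. Returns None for the reserved
    select/deselect colors or an unknown code.
    """
    u, v = uv
    height = len(rgb_texture) // width
    x = min(int(u * width), width - 1)      # UV -> texel coordinates
    y = min(int(v * height), height - 1)
    color = rgb_texture[y * width + x]
    if color in ((0, 0, 0), (255, 255, 255)):   # reserved codes (see above)
        return None
    return regions.get(color)
```

In the running system, the returned ID would then key further queries for names in each language and the detailed description.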

Rendering
For the general scene representation, a scene graph system is employed to render in real time, synchronized with hardware rasterization. Users are shielded from low-level problems such as threading, file formats, and textures. For text formatting and encoding, each system comes with its own specific file formats; the binary encoding combines the scene objects with their information and platform-handling specifics, and serializes this information at send time [39]. Shading employs this technique directly: the transfer matrices at the vertices have to be multiplied with the distant lighting environment in every frame. The coefficients for a particular basis are passed down to the pixel shader, where the current normal is looked up in the normal map and used to evaluate that basis. When M = 4, the shader code runs in the pixel shader: the shared vertex passes down the texture coordinates and three registers containing the normal coefficients [9,37,40]. Three different textures are sampled, including the normal map and the albedo of the surface. To shift workload from the CPU to the GPU, reduce the amount of data, and increase system performance, the transfer matrices are compressed with clustered principal component analysis (CPCA). Figure 10 shows the result of effective RGB color selection for the complex 3D lung object after the six steps. Figure 10. Effective RGB color selection for the complex 3D lung object.

5.
RESULT AND DISCUSSION Our 3D human body simulation system (Anatomy Now) (see https://appadvice.com/app/anatomynow/1222845241) is developed on two platforms: 1) OpenGL tools for simulating the modelled parts in the 3D virtual environment, and 2) Blender for modeling the limbs. The full human body anatomy, including the skeletal, muscular, circulatory, nervous, respiratory, digestive, and excretory and genital systems, plus glands and lymph nodes, is implemented in the interactive 3D virtual system.

Int J Elec & Comp Eng, ISSN: 2088-8708

Anatomy professors at universities and hospitals evaluated our application against several parameters and analyzed the accuracy of each organ system presented in it [41,42]. In the original design, the 3D model on the right side of the navigation page had no function at all and was therefore a waste of space. This waste resulted in a narrow navigation section containing only three of the ten body systems, so users had to scroll down to access the remaining systems. The original 3D model page had a total of twenty-one buttons, each with its own unique function [43][44][45], but no focus among the functions. We organized the functions and emphasized nine core functions related to 3D model interaction; they are highlighted in Figure 11. The result is that it is possible to mark and select areas that overlap one another. We also apply the proposed method in the anatomy simulation system, as shown in Figure 12. Figure 11. Our 3D human body simulation system (Anatomy Now) and the functions of model interaction. Figure 12. The result of our method and algorithms applied to the brain.
Our method was applied to render both static and dynamic 3D objects. We measured the computation time of the RGB color selection for complex 3D object structures in scene graph systems at several typical texture map resolutions (total number of pixels across all texture maps). The GPU-based reference implementation was developed with the OpenGL API. The experimental computer configuration and test environment are given in Table 1. The full 3D human anatomy simulation, with its anatomical landmarks, is summarized in Table 2; the anatomical presentation is diverse: anatomical details, anatomical molds, anatomical regions, anatomical groups, anatomical areas, and anatomical systems. Table 2 includes, among others: muscular system, 510 models; digestive system, 41 models; nervous system, 1027 models (989 neural, 39 brain models); excretory and genital system, 20 models; endocrine system, 191 models (11 glands, 180 lymph node models). In the first step, a practical test evaluates the performance of pixel formation; Table 3 summarizes the performance of our algorithm on the organ systems of the human body. The main task of every test was to write pixels across the full screen, to evaluate the processing time of the computation-intensive part. We compared and evaluated the following parameters: average rasterization speed (FPS), GPU and CPU usage percentage (%), GPU dedicated memory, GPU system memory, and GPU committed memory. The results in Table 4 show that the average rasterization speed always lies in the range 44.02 to 60.01 FPS, the average GPU usage is 9.35%, the average CPU usage is 10.58%, the average GPU dedicated memory is 1.26 GB, the average GPU system memory is 78.38 MB, and the average GPU committed memory is 1.186 GB. The results clearly show the advantages of our solution: an effective RGB color selection for complex 3D object structures in scene graph systems.
The measurements performed show that our method scales well with object complexity; see Table 4 and Figure 13. The optimized GPU deployment provides the fastest performance: since the analysis is done on dedicated hardware, no data needs to be exchanged between main memory and the GPU.
Our proposed solution allows effective rendering of complex structured 3D objects on different devices (see Figure 14). The 3D virtual environment recreates the full human body, allows users to interact with its body parts and organs, and provides certified, detailed information from this environment. Moreover, the application builds a bridge between learning and practice. Figure 13. Comparison with existing techniques. Figure 14. The effectiveness of our method in virtual-reality practice in the Anatomy Now system.

6.
CONCLUSION
Adding many different items of information to a complex 3D object is challenging when the area to which the information must be attached is very detailed and overlaps with other areas carrying related medical information. Existing tagging methods did not give us enough locations to attach the information to. Our solution combines several tagging methods built around one marking technique: selecting RGB color regions drawn in the texture of the complex 3D object. The RGB color codes then serve as tag IDs, and relational tables store the information associated with each anatomical region. With this marking scheme, the entire space of (R, G, B) color values is available to identify anatomical regions, and it also becomes possible to define multiple overlapping regions.