A mathematical model of movement in virtual reality through thoughts

Received Nov 15, 2019; Revised Jun 4, 2020; Accepted Jun 15, 2020

In this article we introduce ways to build virtual worlds with different computer programs. We present the method of rectangles for analyzing data obtained from an electroencephalogram (EEG), and we demonstrate basic mathematical models for movement prediction in a virtual reality system. Using these data, the two main transformations become possible: change of position (translation) and change of orientation (rotation).


INTRODUCTION
There are many discussions about the origin of the term "virtual reality". One view is that it dates back to the German philosopher Immanuel Kant [1][2][3][4][5]. The exact definition is rather philosophical and does not rely on technical terminology. The modern use of the term was introduced by Jaron Lanier in the 1980s. Nowadays the term has become quite broad and very popular. Initially, "virtual environment" was used, but now most researchers prefer the term "virtual reality" instead. This environment should not be considered only as "virtual" in the current definition [6][7][8]. Another widely used concept is augmented reality (AR). This concept applies to systems in which most objects are visualized through glass, cameras or eyeglasses. In most cases, these virtual objects look as if they are added to the user's view of the real world. Mixed reality (MR) is used as an umbrella term for VR, AR and ordinary reality, and VR/AR/MR is often used to denote all forms. In this paper, AR, MR, telepresence and teleoperation are treated as forms of VR [6][7][8].
The term virtual reality is quite contradictory. Burbules proposes a solution to this problem with the alternative term "virtuality". We take the position that the word "virtual" refers to perceptions that form part of VR's ideology [6]. In most cases, VR includes a very important component: interaction, in other words, whether other senses participate in the virtual world. If only the eyes are involved, the VR system is called open-loop; otherwise it is closed-loop. In the second case, the human body has partial control over the stimulation. This incorporates body movements, including the head, eyes, hands or legs. Other input possibilities are voice commands, heart rate and body temperature. Many scientists build a VR system as part of an experiment; this is necessary when it is part of a scientific question or hypothesis, and many of these attempts are unsuccessful. The aim of this study is to present a methodology for moving in the virtual world through human thoughts. We use data from the company Emotiv to control movement in MR [3, 4]. Figure 1 shows the headset through which we receive the EEG signal. Figure 1. Emotiv brain-controlled technology

RESEARCH METHOD
In the past twenty years, studies of the brain-computer interface (BCI) have appeared. This technology allows paralyzed people to control electronic devices such as robotic hands. In this study, we present a model for using such data for entertainment [9][10][11][12][13][14][15][16]. The electroencephalogram (EEG) is the most common method for obtaining data from a BCI. An EEG is nothing but a recorded electrical signal generated by the brain. The first such report was published in 1929. This allowed people to observe and analyze the activity of the brain [17].
Through its activity, the human brain generates millions of small electrical potentials. The combination of these fields can be detected by electrodes attached to the scalp. The amplitude of these signals varies from 1 μV to 100 μV. The EEG shows wide variations in amplitude depending on external stimulation and various internal mental states. Various methods are used to obtain EEG signals.
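Because the amplitudes above are small and noisy, some smoothing of the raw trace is a natural first step. The following is a minimal sketch of a centered moving-average filter; the sample values (in μV) and the window size are invented for illustration, not data from the actual headset.

```python
# Illustrative only: smooth a short raw EEG trace (values in microvolts)
# with a centered moving-average filter. The trace and window size are
# assumptions made for this example.

def moving_average(samples, window=4):
    """Return the centered moving average of a list of EEG samples."""
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        chunk = samples[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw_uv = [3.0, 5.0, 40.0, 4.0, 6.0, 5.0]  # hypothetical trace with one spike
print(moving_average(raw_uv))
```

The filter attenuates the isolated spike while keeping the output the same length as the input; a real pipeline would use a proper band-pass filter instead.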
We use non-invasive EEG recording, i.e., signals captured with electrodes attached to the scalp. In our case, the electrodes are small metal discs made of stainless steel, tin, gold, or silver coated with silver chloride. Exemplary input data are presented in Table 1 [17][18][19][20][21][22]. Table 1 presents some of the EEG signals of the brain through which we control the virtual camera. Through the algorithms we have developed, we can classify these signals and move the camera in four directions.
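The last step, turning a classified signal into camera motion, can be sketched as a simple lookup from class label to direction vector. The label names and step size here are assumptions for the example; the paper's own classifier produces the labels.

```python
# Illustrative sketch: map classified EEG labels to 2D camera moves.
# Label names and the step size are assumptions, not the study's API.

DIRECTIONS = {
    "forward": (0.0, 1.0),
    "back":    (0.0, -1.0),
    "left":    (-1.0, 0.0),
    "right":   (1.0, 0.0),
}

def move_camera(position, label, step=1.0):
    """Return the new (x, y) camera position after one classified command."""
    dx, dy = DIRECTIONS[label]
    return (position[0] + dx * step, position[1] + dy * step)

pos = (0.0, 0.0)
for label in ["forward", "forward", "left"]:
    pos = move_camera(pos, label)
print(pos)  # (-1.0, 2.0)
```

In an engine such as Unreal the same idea would apply a translation to the camera transform each frame instead of updating a tuple.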

RESULTS AND DISCUSSION
Our aim is to move in the virtual world using brainwave data. In our experiment we use three directions: forward, left and right [21]. In Figures 2 and 3 we present two views in a virtual reality system from the 3D social network Second Life. We can consider the input data as random signals: signals that are functions of time whose values are not known in advance. This type of signal expresses a random physical phenomenon or process. When a random signal is recorded, only one realization of the random process is obtained. Its statistical characteristics can be estimated only after multiple repeated observations over the set of realizations [23].
Random stationary signals retain their statistical characteristics across successive realizations of the random process [20]. A digital signal is typically given as a discrete series of numerical data: a numerical array of consecutive values sampled at a constant interval Δ = const, although in general the signal can also be given as a table of arbitrary values of the argument [23][24][25].
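The "statistical characteristics" mentioned above can be estimated directly from such a discrete series. A minimal sketch, with an invented sample array, of the mean, variance, and sample autocorrelation:

```python
# Sketch: basic statistical characteristics of a discrete signal x[n]
# sampled at a constant interval. The sample values are invented.

def mean(x):
    return sum(x) / len(x)

def variance(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def autocorrelation(x, lag):
    """Biased sample autocorrelation of x at the given lag."""
    m = mean(x)
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(len(x) - lag))
    return num / (len(x) * variance(x))

signal = [1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0, 2.0]
print(mean(signal), variance(signal), autocorrelation(signal, 4))
```

For a stationary signal these estimates should stay roughly constant when recomputed over successive segments of the recording, which is one practical way to check the stationarity assumption.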
The analysis of the input data can be regarded as an NP-hard task. The reason for this is the high volatility and the differences between examined subjects: they produce different brain signals under the same external conditions. For example, in our experiments we trained a male subject to drive the cursor in three directions: forward, left and right. When a female subject was tested, the data changed. The described EEG data therefore need proper analysis and interpretation in order to solve the above-mentioned tasks [25].
A methodology for extracting dependencies, clustering, is applied to the EEG data. The results, admittedly, are not very good. The motivation for these experiments is the following [25]. Our bodies are not designed for the virtual world. These artificial stimuli very often conflict with biological mechanisms that have evolved over hundreds of millions of years. We very often give the brain information that is not compatible with its perceptions, and there are cases in which our bodies cannot adapt to the new environment. Unfortunately, in many cases the body responds with headaches or increased fatigue. VR sickness, which usually includes symptoms of dizziness and nausea, has been described [16, 18, 19, 20]. In order to answer the above-mentioned questions, we should consider: 1) the physiology of the human body, including sensory organs and neural pathways; 2) the basic theories of experimental perceptual psychology; and 3) the construction of a VR system and its resulting consequences or side effects.
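The clustering step mentioned at the start of this paragraph can be sketched with plain k-means. The two-dimensional toy features, the number of clusters, and the initial centers below are all assumptions; the study itself clusters multichannel EEG data.

```python
# Rough sketch of k-means clustering on toy 2D feature vectors.
# Feature values, k, and the initial centers are invented for the example.
import math

def kmeans(points, centers, iters=10):
    """Plain k-means: return final centers and per-point cluster labels."""
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        labels = [min(range(len(centers)),
                      key=lambda j: math.dist(p, centers[j]))
                  for p in points]
        # update step: each center becomes the mean of its cluster
        for j in range(len(centers)):
            cluster = [p for p, l in zip(points, labels) if l == j]
            if cluster:
                centers[j] = tuple(sum(v) / len(cluster)
                                   for v in zip(*cluster))
    return centers, labels

points = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9)]
centers, labels = kmeans(points, [(0.0, 0.0), (5.0, 5.0)])
print(labels)  # [0, 0, 1, 1]
```

On real EEG features the clusters are far less separable, which is consistent with the modest results reported above.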
Our goal is to manage VR movement in real time through brain signals. Some studies have shown that when the user controls the movement in a virtual environment, side effects such as nausea and pain can be avoided [16]. In future research we plan to use filtering such as the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), and classification methods such as k-nearest neighbors (KNN), linear discriminant analysis (LDA), Naive Bayes and others. Our goal is real-time detection of the signals, which will allow us to actually move the camera in Unreal Engine [22]. The movement of the virtual camera is done in the Unreal Engine system; the movement itself is implemented through built-in classes and C++. Our simulations were performed using the generated scenes presented in Figure 4.
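Of the planned classifiers, KNN is the simplest to illustrate: a feature vector is labeled by majority vote among its k closest training vectors. The feature values and labels below are invented for the example.

```python
# Sketch of the planned k-nearest-neighbors step. All feature vectors
# and labels here are assumptions made for illustration.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); return the majority label
    among the k training vectors closest to the query."""
    by_dist = sorted(train, key=lambda fl: math.dist(fl[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [
    ((0.1, 0.9), "forward"), ((0.2, 0.8), "forward"),
    ((0.9, 0.1), "left"), ((0.8, 0.2), "left"),
    ((0.5, 0.5), "right"),
]
print(knn_predict(train, (0.15, 0.85)))  # forward
```

In the planned pipeline the feature vectors would come from DCT or DWT coefficients of the filtered EEG rather than from raw samples.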

CONCLUSION
In this paper we present only a classification method. In our future studies, we will combine the above-mentioned methods to obtain better results. The different methods of evaluation and analysis of the results are assessed in terms of their reliability and their ability to correctly differentiate the given objects with respect to direction. In our future studies, we will include all four main directions. The neurobiological point of view and its interpretation should also be taken into account in the feature-selection step. For example, latency (response time) may vary due to specifics of the individual's mental capabilities, and therefore such characteristics are more informative in data processing.