The tongue is a versatile organ located in the oral cavity; it can perform complex movements with very little fatigue. Assistive technologies operated by the tongue, intended for paralyzed individuals, are known as tongue-human computer interfaces or tongue-machine interfaces (TMIs). However, many of these are obtrusive systems whose hardware, such as sensors and magnetic tracers, is placed in the mouth and on the tongue; such approaches can be annoying, aesthetically unappealing, and unhygienic. In this study, we aimed to develop a natural and reliable tongue-machine interface that uses solely glossokinetic potentials (GKPs), by investigating the performance of machine learning algorithms for 1-D tongue-based control of, or communication with, assistive technologies. GKP responses are generated by touching the buccal walls with the tip of the tongue. Ten naive healthy subjects (eight male, two female, aged 22-34 years) participated in the study. Linear discriminant analysis, support vector machine, and k-nearest neighbor classifiers were used as the machine learning algorithms. The highest success rate, an accuracy of 99%, was achieved by the support vector machine for the best participant. This study may help disabled people control assistive devices in a natural, unobtrusive, fast, and reliable manner. Moreover, GKP-based TMIs are expected to offer an alternative control and communication channel to traditional electroencephalography (EEG)-based brain-computer interfaces, which suffer from significant inadequacies arising from the nature of EEG signals.
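As a rough illustration of the classifier comparison described above, the sketch below cross-validates the three algorithms named in the abstract (LDA, SVM, and k-NN) using scikit-learn. The data here is synthetic and purely a stand-in: the actual study used features extracted from recorded glossokinetic potentials, and the feature dimensions, fold count, and hyperparameters below are assumptions for illustration only.

```python
# Hypothetical sketch: comparing LDA, SVM, and k-NN classifiers on
# synthetic two-class data standing in for left/right tongue-touch
# GKP features. NOT the study's actual data or parameters.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic feature matrix: 200 trials, 16 features, two classes.
X, y = make_classification(n_samples=200, n_features=16,
                           n_informative=6, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "k-NN": make_pipeline(StandardScaler(),
                          KNeighborsClassifier(n_neighbors=5)),
}

scores = {}
for name, clf in classifiers.items():
    # Mean 5-fold cross-validated accuracy for each classifier.
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.2f}")
```

In a real GKP pipeline, `X` would hold per-trial features (e.g. amplitudes from EEG electrodes over the tongue-movement interval) and per-subject accuracies would be reported, as in the 99% figure quoted for the best participant.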