Haptic Reading Program “Sawaru Glyph”:
Overview of Principles, Effectiveness, Clinical Research, and Intellectual Property

 

Introduction

Sawaru Glyph is a multisensory learning program that integrates active touch perception with the simultaneous processes of seeing, touching, and reading aloud. It was designed to enhance reading fluency and improve letter recall, ultimately reducing the cognitive load associated with reading and writing.
For an investigation of the effects of haptic-based learning, please refer to the clinical study titled “Facilitative Changes in Literacy and Naming Speed through Multisensory Learning with Haptic Text.”
Although the study used different textual materials than those in the Sawaru Glyph program, it employed similarly shaped and elevated 3D-printed letters and followed a stepwise learning protocol, progressing from letter forms to words and short sentences.

I. Sawaru Glyph, Dyslexia, and Active Touch Perception
II. Principles and Structure of the Sawaru Glyph Learning System
III. Clinical Research on Haptic Reading-Based Learning (as of 2025)
IV. Technical Innovations and Intellectual Property: Patents, Utility Models, and Copyrights
V. Future Outlook and Potential Applications

 

I. Sawaru Glyph, Dyslexia, and Active Touch Perception
Sawaru Glyph is a multisensory learning program in which learners engage in “seeing, touching, and reading aloud” using 3D-printed text while watching accompanying video and audio content. With the included instruction manual and evaluation sheets, the program can also be used at home.

The program utilizes the cognitive effects of active tactile perception to enhance three key literacy-related functions: 

  1. Formation of letter-shape memory.
  2. Formation of associative memory between letters and sounds.
  3. Formation of word-form memory.

While braille-based haptic reading methods have long existed for the visually impaired, no haptic-based approach had been developed specifically for sighted learners until now. In 2020, the Japan Patent Office granted Sawaru Glyph the world's first patent for a haptic learning method that supports the acquisition of letter forms and word spellings by sighted individuals.

 

1. Developmental Dyslexia
One of the primary target populations for Sawaru Glyph is individuals with developmental dyslexia. Dyslexia is a learning disability in which individuals exhibit significant difficulties in reading and writing despite having normal vision, hearing, and overall intellectual abilities.

In cases of reading difficulty, impaired decoding (the association between letters and sounds) and weak word-form recognition result in slow and effortful reading. Individuals may not associate letters with their corresponding sounds and may struggle to perceive words as cohesive units, which often leads to fatigue during reading tasks.

Writing difficulties associated with dyslexia involve problems recalling the correct spelling of letters or words, even with repetition. This is attributed to underdeveloped visual imagery for letter shapes and spellings, making them difficult to retrieve from memory.

The neurological basis of dyslexia lies in individual differences in the brain functions required for reading and writing. Recent studies have identified a spectrum of cognitive vulnerabilities, including weaknesses in phonological processing, visual recognition, and Rapid Automatized Naming (RAN).

Sawaru Glyph was specifically designed to address these vulnerabilities. It employs three mechanisms of active tactile perception to support and compensate for the cognitive weaknesses associated with dyslexia.

2. The Role of Active Touch and Multisensory Learning
For learners with complex cognitive vulnerabilities such as dyslexia, multisensory learning approaches involving tactile perception (for example, clay modeling and sand-letter tracing) have long been employed, particularly in English-speaking countries【6】. Recent studies suggest that the effectiveness of these methods is grounded in the memory-formation processes supported by multisensory integration through active touch. Moreover, although still hypothetical, haptic-based learning may have a unique benefit in enhancing Rapid Automatized Naming (RAN), a core function closely linked to reading fluency.

・Enhancement of Visual Memory through Tactile Perception
The first notable effect is the enhancement of visual memory via tactile feedback. When individuals actively touch and explore objects they have visually observed, tactile input increases attention to those objects, resulting in stronger memory traces of their shape and structural features.

Another key aspect is the integration between the visual and tactile modalities. Recent fMRI studies【7】 have demonstrated that visual and tactile information is integrated through a shared neural network in the brain, extending from the lateral occipital complex (LOC) to the fusiform gyrus, leading to the formation of precise and concrete multimodal representations (see Figure 3).

This enhancement effect was empirically supported in a 2023 study I (Miyazaki) conducted using raised versions of the Rey-Osterrieth Complex Figure. The study, titled “Visuohaptic multisensory learning enhances encoding and recall of Rey-Osterrieth complex figure shapes,” confirmed significant improvements in memory-based reproduction of geometric figures【8】.

・Formation of Associative Memory Between Visual and Phonological Representations
The second mechanism involves the facilitation of connections between visual representations (orthographic forms) and phonological representations (spoken sounds).

A study conducted in France【9】 examined the learning of word spellings and their corresponding pronunciations. It found that incorporating tactile exploration enhanced performance on decoding tasks, which assess the learner’s ability to link letters to sounds. The researchers concluded that active touch, which engages both simultaneous visual processing (visual representations) and sequential auditory processing (phonological representations), plays a key role in supporting this link. By allowing learners to explore letters with their hands while perceiving their shapes and corresponding sounds, associative memory between letters and sounds is strengthened.

This finding suggests that Sawaru Glyph can be especially beneficial for the decoding and encoding weaknesses commonly observed in individuals with dyslexia, by promoting robust visual-phonological integration.

・Potential for Enhancing Rapid Automatized Naming (RAN)

The third mechanism is the enhancement of Rapid Automatized Naming (RAN), a unique effect observed specifically in haptic-based learning.

RAN is a cognitive efficiency measure that evaluates the ability to quickly shift attention to visual stimuli (such as images or numbers) and retrieve their phonological labels【10】. Recent studies have established a strong correlation between RAN performance and reading ability. In our 2024 clinical study【1】 with eight children diagnosed with dyslexia, we implemented a stepwise haptic learning protocol of seeing, touching, and reading aloud, progressing from individual letters to special syllables, words, and high-imagery sentences. This multisensory approach led to a significant improvement in RAN task speed.
Because RAN is a critical predictor of reading and writing skills, this observed improvement marked the first known instance of RAN enhancement through tactile-based learning. Our research team hypothesizes that this effect is driven by the combined multisensory integration mechanisms facilitated through tactile exploration. Further details are discussed in Section III: Clinical Study on Haptic Reading-Based Learning for Children with LD (as of 2025).

 

II. Principles Behind the Sawaru Glyph Program

While active touch perception has historically been used in literacy instruction, conventional multisensory methods have typically been limited to isolated practices, such as forming individual letters with clay or exploring letter blocks by touch. These approaches can support the formation of letter-shape memory, but they do not sufficiently foster the decoding of letter–sound relationships, which is essential for fluent reading, nor the development of word-form memory.

To overcome these limitations, Sawaru Glyph was created by integrating a novel instructional program with original technological innovations. In this program, learners watch visual and auditory content while simultaneously touching and reading aloud from three-dimensional letter boards (see Figure 4). Through this structured process, the program promotes the following stages of literacy development:

  1. Formation of letter-shape memory
  2. Formation of associative memory between letter shapes and sounds
  3. Formation of word-form memory
  4. Generalization at the sentence level

 

(Figure 4) A Child Reading Aloud While Touching Raised Kana Characters and Watching Audio-Visual Content

1. Section 1: Formation of Letter-Shape Memory and Letter–Sound Associations

The first stage of the program begins with tactile reading of enlarged, three-dimensional kana characters (both hiragana and katakana).
Next, learners are introduced to special syllables, such as yōon and sokuon, focusing on forming memory at the two-character, one-mora level.
By tracing each moraic unit with their fingers, learners develop clear and detailed memory of letter shapes.
Additionally, by simultaneously watching, touching, and reading aloud in coordination with visual-audio content, tactile input from the letter shapes serves as a cognitive cue that supports the formation of associative memory between letter forms and their corresponding sounds within working memory.
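
As a rough illustration of the two-character, one-mora units handled at this stage, the sketch below segments a kana string into moraic units, merging yōon digraphs such as きょ into a single mora. The grouping rule is a simplification assumed for this example and is not part of the Sawaru Glyph materials.

```python
# Illustrative sketch only: segment a kana string into moraic units, treating
# yōon digraphs (a base kana plus a small ゃ/ゅ/ょ) as single two-character morae.
# The grouping rule is an assumed simplification, not taken from the program.

SMALL_KANA = set("ゃゅょャュョ")  # small kana that attach to the preceding character

def segment_morae(word: str) -> list[str]:
    """Split a kana word into moraic units, e.g. 'きょうしつ' -> ['きょ', 'う', 'し', 'つ']."""
    morae: list[str] = []
    for ch in word:
        if ch in SMALL_KANA and morae:
            morae[-1] += ch   # yōon: merge the small kana into the previous mora
        else:
            morae.append(ch)  # sokuon (っ) and long-vowel marks each count as one mora
    return morae

print(segment_morae("きょうしつ"))  # ['きょ', 'う', 'し', 'つ'] : 4 morae
print(segment_morae("がっこう"))    # ['が', 'っ', 'こ', 'う'] : 4 morae
```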

2. Section 2: Formation of Word-Form Memory through Chunking of Letter Strings

In this stage, learners engage in tactile reading of words ranging from two to seven morae in length.
Building on the letter-shape memory established in Section 1, learners begin to process character strings as chunks, gradually forming holistic word-form memory.
This approach helps establish a visual-lexicon network, enabling learners to recognize entire words at a glance—without decoding each letter individually.
By reading aloud while watching visual-audio content, where words are highlighted in sync with pronunciation, learners are encouraged to form associative links between word forms (visual representations) and spoken language (phonological representations).
Strengthening this connection between visual and phonological word forms is also believed to contribute to improvements in Rapid Automatized Naming (RAN), as discussed later.

3. Section 3: Generalization at the Sentence Level

The word-form memory and letter–sound associative memory formed in Sections 1 and 2 are further generalized through tactile reading at the sentence level.
By reading sentences composed of high-imagery descriptions through touch, a bypass is created between the visual lexicon and the phonological lexicon via semantic memory.
This process helps build a more generalized character recognition network, allowing learners to recognize even words that are not included in the Sawaru Glyph materials without excessive cognitive load.

4. Kanji Learning (Haptic-Based + Auditory Method)
In Sawaru Glyph, kanji learning using tactile perception is implemented in parallel with Sections 1 through 3 of the program.
Many children with reading and writing learning disabilities (LD) struggle to memorize kanji even through repetitive handwriting due to weaknesses in the formation and recall of letter-shape memory.
To address this cognitive vulnerability, my (Miyazaki’s) study【8】demonstrated that reading aloud while touching three-dimensional characters with specific heights and convexity effectively strengthens memory retention.
Furthermore, for children who retain auditory verbal memory, the “auditory bypass method”—in which learners verbalize the parts of kanji characters while engaging in tactile exploration—has proven effective【11】.
In this method, memory traces are enhanced through dual pathways of tactile and auditory input by reciting component parts of kanji while touching them.
For the auditory component of this approach, we adopt the Michimura Method【12】as our instructional material and program.

III. Clinical Study on Haptic Reading-Based Learning for Children with LD (as of 2025)
Between 2023 and 2024, I (Miyazaki) conducted a clinical study involving eight children with dyslexia and severe reading and writing learning disabilities (LD), who exhibited naming-speed deficits and difficulties at the kana level (eight with reading difficulties, four with writing difficulties; average age: 10.4 years).
Although the tactile reading training used in the study employed different material designs and text stimuli from those used in Sawaru Glyph, the letters were similarly three-dimensional in height and shape. To assess reading and writing ability, we used the Revised Standardized Reading and Writing Screening Test (STRAW-R)【13】, along with a custom-designed questionnaire to evaluate perceived burden related to reading and writing tasks.

Table 1. Changes in STRAW-R Scores (z-score differences) and Subjective Evaluations After Haptic Learning

(Table 1) Values showing a z-score improvement of 1.0 or higher are marked with an asterisk (*). For RAN and speeded reading tasks, improvement is indicated by a decrease in required time (–); for dictation tasks, improvement is indicated by an increase in the number of correct responses (+). Subjective evaluation: After completing the learning tasks, each child was asked whether they felt a reduction in the burden of reading and writing, with responses recorded separately for each domain.
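
As a minimal sketch of this scoring convention, the snippet below converts raw pre- and post-learning scores into z-scores against age norms and flags changes of 1.0 z or more with an asterisk. The norm parameters and raw scores are invented placeholders rather than study data, and the direction of improvement is set per task type.

```python
# Minimal sketch of the z-score criterion described for Table 1.
# Norm means/SDs and raw scores below are invented placeholders, not study data.

def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    return (raw - norm_mean) / norm_sd

def improvement(pre_raw, post_raw, norm_mean, norm_sd, lower_is_better: bool) -> float:
    """Return improvement in z-score units (positive = improved).

    lower_is_better=True for timed tasks (RAN, speeded reading: shorter time is better),
    False for dictation tasks (more correct responses is better)."""
    delta = z_score(post_raw, norm_mean, norm_sd) - z_score(pre_raw, norm_mean, norm_sd)
    return -delta if lower_is_better else delta

# Hypothetical example: one speeded reading task (seconds) and one dictation task (correct items)
gain_reading = improvement(pre_raw=62.0, post_raw=48.0, norm_mean=40.0, norm_sd=10.0,
                           lower_is_better=True)
gain_dictation = improvement(pre_raw=8, post_raw=13, norm_mean=15.0, norm_sd=3.0,
                             lower_is_better=False)

for name, gain in [("speeded reading", gain_reading), ("dictation", gain_dictation)]:
    flag = "*" if gain >= 1.0 else ""  # asterisk = z-score improvement of 1.0 or more
    print(f"{name}: {gain:+.2f} z {flag}")
```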

1. Improvement in Reading Fluency and Reduction in Perceived Reading Burden
Among eight children with reading difficulties, seven showed a reduction in reading time of 1.5 z-score points or more on at least one of the five STRAW-R speeded reading tasks (hiragana words, katakana words, non-words, and sentences).
When a criterion of a 1.0 z-score difference was applied, these same children showed improved reading times on two or more tasks.
The most notable improvements were observed in the word and non-word tasks, which suggests that the stepwise learning process—progressing from kana (50 basic sounds) to special syllables, words, and then short sentences—facilitated the formation of letter-shape and word-form memory, enabling chunked word recognition.
Moreover, the observed fluency improvement even in meaningless non-words implies that the formation of associative memory between kana and sounds was also promoted.
Children who showed improvement in the speeded reading tasks also reported, in the subjective questionnaire, that the burden of reading had decreased, indicating that objective performance gains aligned with their subjective experiences.

2. Improvement in Writing Recall and Reduction in Writing Burden
All four children with writing difficulties showed an increase in the number of correct responses exceeding a z-score difference of 1.5 on at least one of the two kana (dictation) tasks in the STRAW-R.
In addition, all four children demonstrated a reduction in the time required to begin writing.
It is considered that touching and reading aloud from three-dimensional hiragana and katakana letters promoted the formation of concrete and detailed letter-shape memory for kana.
Furthermore, through reading aloud accompanied by tactile input, the formation of associative memory between letter forms and sounds likely facilitated the encoding function needed to recall letter shapes from auditory input in dictation tasks (“writing what was heard”).
The shortened time between hearing and initiating writing is also thought to reflect an increase in retrieval efficiency from auditory cues to letter-shape memory.
All four children reported a reduction in perceived writing burden in the subjective questionnaire.
These self-reported reductions were consistent with the observed improvements in writing function.

3. Facilitative Changes in Rapid Automatized Naming (RAN)

(Figure 5) Change in Performance: Average Completion Time of Three RAN Tasks Standardized by z-scores
In this study, all eight participating children with dyslexia exhibited impairments in Rapid Automatized Naming (RAN).
Following the haptic reading-based learning, seven out of the eight children showed a significant reduction in the average time required to complete the three RAN tasks included in the STRAW-R (see Figure 5).
Specifically, six children improved by 1.5 z-score points or more, and one child improved by between 1.0 and 1.5 z-score points.
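
The metric underlying Figure 5 can be sketched as follows. All completion times and norm values here are invented placeholders (the actual STRAW-R norms are not reproduced), and the 1.0 and 1.5 z thresholds simply mirror the bands reported above.

```python
# Sketch of the Figure 5 metric: mean completion-time z-score across the three
# STRAW-R RAN tasks, before and after learning. All numbers are invented placeholders.
from statistics import mean

def mean_ran_z(times_sec, norm_means, norm_sds):
    """Average z-score of RAN completion times (higher z = slower than the age norm)."""
    return mean((t - m) / s for t, m, s in zip(times_sec, norm_means, norm_sds))

norm_means, norm_sds = [30.0, 32.0, 35.0], [5.0, 5.0, 6.0]
pre  = mean_ran_z([48.0, 50.0, 55.0], norm_means, norm_sds)
post = mean_ran_z([38.0, 40.0, 43.0], norm_means, norm_sds)

gain = pre - post  # positive = faster (improved) relative to the norms
band = "1.5 z or more" if gain >= 1.5 else "1.0 to 1.5 z" if gain >= 1.0 else "under 1.0 z"
print(f"pre {pre:+.2f} z -> post {post:+.2f} z, improvement {gain:.2f} ({band})")
```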
The enhancement of RAN performance may be explained by two key effects of the haptic learning method:

  1. Improvement in the formation and recall of letter-shape memory
  2. Strengthening of the connection between visual letter forms and phonological representations (spoken words)

These two effects likely worked together to promote RAN enhancement through “see–touch–read aloud” multisensory activities.
In typical naming tasks, visual stimuli are processed through the semantic system, followed by retrieval of phonological information from the phonological lexicon.
However, studies involving patients with aphasia have shown that visual letter cues can facilitate naming performance【14】.
In the current study, the improvement in recalling letter forms through tactile input, combined with stronger connections between letters and sounds, likely enabled the visual and phonological lexicons to interact, allowing the recalled visual forms (“letter-shapes”) of pictures or numbers to act as internal cognitive cues, thereby supporting phonological retrieval.
Furthermore, neurological studies have reported that connectivity between the left fusiform gyrus and areas responsible for semantic memory and phonological processing is strengthened during the development of naming ability【15】.
The left fusiform gyrus plays a central role in representing both letter and word forms, and also in integrating visual, tactile, and auditory information.
Thus, the multisensory reading process involving vision, touch, and sound may have contributed to changes in the connectivity of neural networks centered around the left fusiform gyrus.
While the exact mechanisms behind RAN improvement remain unclear, future studies using functional brain imaging (e.g., fMRI) will be essential for further investigation.

4. Adaptability of Haptic Reading-Based Learning and Individual Differences

In this study, seven out of eight children demonstrated improvements in RAN and reading/writing abilities following haptic-based learning. However, one child did not show any such improvements, and no reduction in perceived burden was reported in the subjective questionnaire.
This finding suggests that no single learning or training method is universally effective, and that the effectiveness of tactile-based learning may vary among individuals.
I (Miyazaki) focused on performance in the Rey-Osterrieth Complex Figure Test (ROCFT), a tool commonly used to assess visual memory.
Specifically, I examined changes in reproduction accuracy 3 minutes and 30 minutes after copying the figure.
In ROCFT, it is known that a “reminiscence effect” often occurs, in which the 30-minute delayed reproduction outperforms the 3-minute one due to cognitive reorganization over time【16】.
This effect may reflect characteristics of memory formation involving multisensory input, including tactile perception.
Upon analysis, children whose 30-minute reproduction scores exceeded their 3-minute scores tended to respond well to haptic reading-based learning.
In contrast, the one child who showed no improvement from haptic learning also experienced a decline in reproduction performance between the 3-minute and 30-minute intervals.
These findings suggest that performance on ROCFT reproduction tasks may serve as a potential indicator of an individual’s adaptability to haptic-based learning.
Moving forward, we aim to develop screening tasks that can more directly evaluate individual suitability for tactile-based learning methods.
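
As a rough sketch of how such a suitability indicator might be operationalized, the snippet below compares hypothetical 3-minute and 30-minute ROCFT reproduction scores using a simple difference index; this is an assumed illustration, not the scoring procedure used in the study.

```python
# Illustrative sketch (assumed scoring, not the study's procedure): using the ROCFT
# reminiscence effect, i.e. 30-minute delayed recall exceeding 3-minute recall, as a
# rough screen for suitability for haptic-based learning.

def reminiscence_index(score_3min: float, score_30min: float) -> float:
    """Positive values indicate the 30-minute reproduction outperformed the 3-minute one."""
    return score_30min - score_3min

# Hypothetical ROCFT reproduction scores (maximum 36 points)
children = {"A": (22.0, 26.5), "B": (25.0, 25.5), "C": (24.0, 20.0)}

for name, (s3, s30) in children.items():
    idx = reminiscence_index(s3, s30)
    note = "reminiscence effect present" if idx > 0 else "no reminiscence effect"
    print(f"Child {name}: 3-min {s3}, 30-min {s30} -> {idx:+.1f} ({note})")
```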
V. Future Outlook

(Figure 7) Proto-writing Discovered Approximately 5,500 Years Ago
Looking ahead, we aim to build scientific evidence for Sawaru Glyph from two complementary perspectives: behavioral indicators and physiological indicators.
In clinical research, we are planning to move beyond pilot studies and conduct both controlled comparative experiments and brain function studies using fMRI.
Our goal is to establish a more scientifically grounded model not only for improving literacy skills but also for explaining the facilitative effects on Rapid Automatized Naming (RAN).
One of the world’s oldest known writing systems, discovered near Baghdad in present-day Iraq (Figure 7), dates back approximately 5,500 years. These early symbols—carved into clay tablets to represent livestock and quantities—demonstrate that writing began with tactile interaction.
Given that vision and hearing are thought to have evolved from tactile perception, it is likely that the act of touching enabled humans to link spoken language with physical form.
I (Miyazaki) believe that the research behind Sawaru Glyph brings us closer to understanding the fundamental relationship between people and written language, and that this work can meaningfully support individuals who struggle with reading and writing.

References

【1】 Miyazaki, K., Hashimoto, Y., Uchiyama, H., & Sakai, M. (2025). Facilitative changes in literacy skills and RAN through multisensory learning using haptic materials. Cognitive Neuroscience, 26(3–4).

【2】 Miyazaki, K. (2020). Spelling learning device and method. Japan Patent Office Publication No. P2020-187271A.

【3】 Uno, A. (2016). Developmental dyslexia. Higher Brain Function Research, 36(2), 170–176.

【4】 Seki, A. (2009). Functional imaging of developmental dyslexia in children. Cognitive Neuroscience, 11(1), 54–58.

【5】 Uno, A., Sunohara, N., Kaneko, M., Awaya, N., Kozuka, J., & Goto, T. (2018). Cognitive deficits underlying developmental dyslexia: A comparison with age-matched controls. Higher Brain Function Research, 38(3), 267–271.

【6】 Philip, A. P., & Cheong, L. S. (2011). Effects of the clay modeling program on the reading behavior of children with dyslexia: A Malaysian case study. The Asia-Pacific Education Researcher, 20(3), 456–468.

【7】 Nishino, Y., & Ando, H. (2008). Neural mechanisms of object recognition based on 3D shape. Japanese Journal of Psychology Review, 51(2), 330–346.

【8】 Miyazaki, K., Yamada, S., & Kawasaki, S. (2023). Enhancement of visual memory in the Rey-Osterrieth Complex Figure Test through multisensory learning using vision and touch. Cognitive Neuroscience, 24(3–4), 87–92.

【9】 Bara, F., Gentaz, E., Colé, P., & Sprenger-Charolles, L. (2004). The visuo-haptic and haptic exploration of letters increases kindergarten children’s reading acquisition. Cognitive Development, 19(3), 433–449.

【10】 Kaneko, M., Uno, A., Sunohara, N., & Awaya, N. (2012). Predictive power and limitations of pre-school RAN screening for reading difficulties after school entry. Brain and Development, 44(1), 29–34.

【11】 Uno, A., Sunohara, N., Kaneko, M., Goto, T., Awaya, N., & Kozuka, J. (2015). Kana training using a bypass method for children with developmental dyslexia: A case series. Japanese Journal of Speech, Language and Hearing Research, 56(2), 171–179.

【12】 Kanji Cloud Co., Ltd. Michimura-style Kanji Learning Method. Retrieved from https://kanji.cloud/

【13】 Uno, A., Sunohara, N., Kaneko, M., & Wydell, T. N. (2017). STRAW-R: Revised Standardized Reading and Writing Screening Test: Assessment of Accuracy and Fluency. Tokyo: Interna Publishing.

【14】 Wakamatsu, C., & Ishiai, S. (2018). Facilitative mechanism of initial kana-letter cues in naming tasks for aphasia patients with poor phonemic cueing. Japanese Journal of Neuropsychology, 34(4), 299–309.

【15】 Arya, R., Ervin, B., Buroker, J., Greiner, H. M., Byars, A. W., Rozhkov, L., et al. (2022). Neuronal circuits supporting development of visual naming revealed by intracranial coherence modulations. Frontiers in Neuroscience, 16, 867021.

【16】 Ogifu, Y., Kawasaki, S., Okumura, T., & Nakanishi, M. (2019). Developmental patterns and scale construction of the Rey-Osterrieth Complex Figure Test in childhood. Journal of Biomedical Fuzzy Systems, 21(1), 69–77.

【17】 Miyazaki, K. (2023). Learning materials composed of convex shapes and sizes suitable for haptic learning. Japan Utility Model No. U3240844.

【18】 NHK (2022, October 25). “Moji”: The Double-edged Sword That Captivated Humanity [Television broadcast].