AT and me: An interview with Yotam Sechayk
At Paths to Technology, we are passionate about assistive technology and even more passionate about centering the perspectives of students with visual impairments who use it in their own lives. This includes students of all ages and academic levels— even PhD students!
In this Q&A-style interview, P2T intern (and PhD student) Veronica Lewis interviews accessibility researcher Yotam Sechayk, a PhD candidate at the University of Tokyo studying Creative Informatics. Yotam has albinism, and his research interests include assistive technology and accessibility for people with low vision. Some of his current projects include VeasyGuide, a tool designed to make lecture videos easier to follow along with, as well as another project for making graphs and charts easier to see. When he isn’t studying, Yotam is often thinking about his next trip— he has been to over twenty countries and loves exploring new places during his solo travels. His experiences as a student with low vision profoundly shape his research and the ways he interacts with technology, and this post is filled with tons of tips, resources, and ideas for exploring accessibility research.

How would you describe your visual impairment?
I have albinism (plus nystagmus and photophobia), so low vision is something I have lived with for my entire life. It is very difficult for me to explain to other people what my usable vision looks like, because I don’t have a comparison for how I am supposed to see.
The most useful way I can think of to explain my vision is to compare two videos, where one is in super-high-resolution 4K and the other is in 480p. You can still see things, but you are missing out on a lot of the details. People ask me if my vision is blurry, and I tell them that I have no idea. I mean, compared to blur effects, my vision is not the same, but I really do not know, since my vision has always been like this.

Did you know anyone else with albinism/low vision growing up?
Not really. I am the only person in my immediate family with albinism, so I have mostly been on my own when it comes to figuring things out. When I was in primary school and junior high school, I attended a yearly summer program where I would meet other kids who had low vision or even albinism. They were close in age to me or a few years older or younger, but I didn’t have any role models with albinism or anyone that I could really learn from or get tips from.

How do you access information? Do you mostly access things visually?
I read large print, watch movies, and play video games, but reading can be very exhausting, and I read more slowly. So I will also use audiobooks or text-to-speech to make reading easier, especially when reading papers for my research. I rely on text-to-speech engines more often nowadays; the technology is much more accessible now than it was when I was younger. But I am really glad I took the time to learn how to read print materials, even if listening is easier, because you don’t always have access to text-to-speech.

What kind of technologies and skills did you use as a young student?
Other than glasses or sunglasses, the first assistive technology tool I remember using was a dome magnifier in primary school; it looked like half a sphere. It was very convenient and comfortable to read with, more than a lot of the digital tools that are around nowadays. It is only inconvenient at the very edge of the page, because it is harder to place the magnifier there. When I was studying for my bachelor’s degree, I also had a distance video magnifier called the Transformer that I would use to see the board, but the camera quality back then was not very good (this was over ten years ago).

A 3.8-inch dome magnifier positioned on top of a history textbook, providing 5x magnification
In primary school, I would have all my assignments enlarged on A3 paper and would sit in the front row of the class. However, I frequently had to remind teachers to write larger on the board, and it was very rare that teachers would remember to enlarge any handouts. I can probably count on two hands the number of times that I did not have to do it myself. Most times, I would have to leave the classroom to enlarge my own work on the copier. I would often lose focus because I had to go to the principal’s office or the secretary’s office to use the copier, then come back to class and try to re-focus on doing the assignment or taking the test.
Lastly, one of the things I need in places with windows is curtains, because the light can be too bright for me. I would have loved to have complete control over how much light there is in the world, but I don’t have that. Curtains make it easier for me to see by blocking some of the light, especially in classrooms. My primary school didn’t have curtains and refused to buy them. So my family purchased curtains on our own, and every year when I moved to a new class, my family would come and take down the curtains and put them up in my new class. It’s important not to get discouraged just because we face some barriers. Self-advocate, find ways to create an environment that works for you, and build a supportive community of family, friends, and people who understand you. It’s important to learn your own accessibility needs and how to make things accessible for you.

What assistive technology and accessibility features do you use now?
I use several different tools and features on my Windows laptop, Windows desktop computer, and on my iPhone. I prefer the desktop because the monitor is bigger. I mostly use a single monitor, but there can be some situations where it is helpful to have two monitors to read two files with large print. But I cannot switch back and forth between monitors very quickly with my eyes, so one monitor is enough for me.
My “tech toolbox” comes down to three main components: magnification, text-to-speech, and dark mode. Recently, like many people, I have also been using AI chat and AI agents, which help with tasks like summarizing or understanding large amounts of text.

Screen magnification
For screen magnification, I use Windows Magnifier with a full screen view. On my iPhone, I use Zoom along with a large text size and display size, as well as the Magnifier app that connects to my phone camera. It is really helpful for reading a menu behind a counter at a restaurant or for reading nutrition details on products. I have these features enabled with [AssistiveTouch] so I can have a button on my screen to quickly access magnification.

Text-to-speech
For text-to-speech, I use a free tool called Balabolka that has a lot of features. It gives you the ability to create shortcuts for specific things. For example, when you select text anywhere, you can press the shortcut, and it will copy the text and read it out loud. As long as you can copy it, Balabolka can read it. It works in combination with the Microsoft Edge speech engines, so I can read basically any language and choose voices, customize the pitch, speed, and all sorts of things. I created a slew of keyboard and mouse shortcuts to quickly change the rate, start reading, and stop reading, all on the fly. It is very, very useful. I am not using all of the Balabolka features, but maybe I should explore them more— there is a feature that helps you train your reading speed too.
I thought I would use [Speak Screen or Speak Text] more often on my iPhone, but I tend not to for two reasons. One is that a lot of people send audio messages these days, which is pretty nice. The other is that text-to-speech is not as easy to access as I would like. I have to swipe down with two fingers from the top of the screen, listen to it start from the beginning, click next, next, next, next, and navigate, then maybe I clicked next too many times and I have to go back and try to find my place. And it seems that I can’t easily change the speed either. I really wish text-to-speech worked better on iPhone. Something I really like about Android is that you can tap the [accessibility shortcut] on your Home Screen and select an area on your screen, like a bounding box, and it reads the content inside of that box. I think that is very useful.

Dark mode
For dark mode, I use the Dark Reader web extension to turn websites into dark mode. It’s not perfect, but it’s pretty good. I find it much easier to see white text on a black background. I also use color filters on Windows, like color inversion. Inverting colors is my go-to filter when dark mode doesn’t work.

What has been the most challenging assistive tech tool you have had to learn?
There are some assistive technology tools that I would have liked to be able to use, but don’t, because they are either too much for me or I can’t customize them. For example, a screen reader can be very useful in some situations, but it is hard for me to select specific segments of text with it. Text-to-speech is much more targeted.
Another tool I used to use a lot was a custom CSS plugin for a web browser— CSS is a stylesheet language for the web. I could write short snippets to magnify and change the font sizes, colors, and things like that on websites. I could enlarge just the title of a page, or just the content, or change the colors. This has great potential for websites you use frequently, but it can be challenging to learn CSS and how to use it.
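As an illustration of how such a snippet can work, here is a minimal user-stylesheet sketch (the `.main` selector and the exact values are assumptions for illustration, not taken from any particular site):

```css
/* Hypothetical user-stylesheet snippet for a frequently visited site.
   The .main selector and the values below are illustrative assumptions. */
.main {
  font-size: 30px;   /* enlarge all body text (roughly 22 pt) */
  line-height: 1.5;  /* extra spacing between lines makes them easier to track */
}
.main h1 {
  font-size: 40px;   /* keep headings larger than the body text */
}
body {
  background-color: #000; /* dark background with light text */
  color: #fff;
}
```

Browser extensions that support user styles, such as Stylus, can apply a snippet like this automatically whenever the matching site loads.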
It can be challenging to try a new tool, but in the long run it can be very useful. So while it is difficult to learn something new at first, the benefits can be far greater than the difficulty in the beginning.

A comparison of two screenshots showing an excerpt of this post. Screenshot 1 has the default CSS values of 2.5rem for the heading and 1rem for body text (40px and 16px). Screenshot 2 has a CSS snippet added for “.main font-size: 30px”, changing all text on the page to the same larger size, equivalent to about 22 pt font.

What is your workflow for programming/coding?
I use Visual Studio Code with a dark theme, and I enlarged the interface size and font size as well. The most common programming languages I use are Python, TypeScript, and R, but sometimes I also use C# or C++.
Some programming languages like C++ are very text-heavy, which can be overwhelming. Other languages are less heavy on text and have more spacing. One common difficulty I experience is with languages that have a lot of indentation. For example, with HTML, the element tree can get pretty deep and indent so much that I end up seeing only a few words before it wraps to the next line. It can be very annoying.

Eight levels of nested HTML in Visual Studio Code. The text “This message is nested 8 levels deep!” is split across three lines with 11-13 characters on each line

What is a strategy or tech trick that has really helped you?
My mouse has two extra buttons on the side near where the thumb is, so I configured these buttons to activate the shortcut keys in Balabolka that read any text I select. One of the buttons has the shortcut key for the text-to-speech engine, so I can select text, then click that button on the mouse to listen to the text read out loud. The other button next to it makes it stop talking. So I can very quickly and easily select text, like triple-clicking to select a paragraph, and then click the first mouse button to read it out loud or the other one to stop reading. It’s a great workflow for convenience.

Example of side buttons on a mouse

How do you access information for your courses/research?
I mostly use my computer for all of my classes and research, so I spend a lot of time using a screen magnifier and dark mode, and then use text-to-speech when I am studying on my own. I don’t have face-to-face classes frequently, but those can be very challenging because I just can’t see the slides, and the instructors don’t always share them. Also, when instructors use the whiteboard it can be especially challenging, because even if you remind people to write bigger, they might write, like, one sentence, and then you can see their handwriting slowly shrink in size as they keep writing.
I take online classes, which can be more accessible, but they can still be problematic because the instructors will use a pointer or a pen that is super small or thin and hard to see. The contrast is sometimes not good enough either, but you can’t really stop them in the middle of the class and ask them to explain what they are writing because it is hard to read. But at least on Zoom I can take screenshots and read them later.
When people with low vision are in an online class, they always have to be alert, like when the instructor is pointing at something and using verbal cues like “over here.” Or if there is a pause in their speech, they might be sketching or drawing something. It’s very exhausting, and it can prevent you from calmly learning, studying, and following along in class, because you are thinking so much about access and not so much about learning. This was the motivation for my recent work, VeasyGuide, which tries to address this exact problem.

How do these experiences influence your research?
I am very passionate about accessibility because I can see a lot of the gaps in accessible designs. For years, I have wondered why we have super advanced cameras on our phones but all of the specialty accessibility tools I was using in school had these really low quality cameras despite being super expensive. And then they wouldn’t get updates. Accessibility can be so much more, and this line of research is very important and very personal to me. I want to identify problems, create solutions, and ensure that whatever I develop is made available for people. I want to give people access to the technology and tools that I develop.
My experience has shown me that the needs of people with low vision are very individual, and can also change depending on the situation. Everyone has their own way of dealing with visual information, and I want to support that instead of replacing it. So in VeasyGuide, I don’t replace pointing, marking, and sketching in presentation videos, I make them easier to see and notice, so people don’t have to spend energy on trying to locate them or find ways to zoom in. I want to narrow the gap between the experiences of sighted people and low vision people, and make it easier for them to access visual information.
Basically, I don’t want to keep people from using their vision; I just want to support people and their existing ways. I remember when I had to use the video magnifier in university, it was a clunky device I had to carry with me. Everyone could see that I was using it, and I felt like I stood out. There is nothing wrong with that feeling, but the best thing would have been if there was a way to make things on the board easier to see for everyone. For example, if the school had cameras pointed at the board, or could stream it so people could see it on their own screens, I would not need to use the magnifier. I want to create tools that are accessible from the get-go, providing ease of access for everyone.

Do you use the technologies you create in your own workflows?
Yes, for sure! VeasyGuide was actually motivated by one of the lecturers here at the University of Tokyo. Listening to the lectures was very cognitively demanding due to several factors. First, the speech wasn’t clear, for a few reasons: the volume, the pace, or the way someone might rephrase something multiple times, which can make it harder to follow along. Then the slides were a white background with black text, and it was very bright. They used a pointer that was basically the size of one pixel, so it was very difficult to see, and the sketches were also really thin. And then the content and topic itself was also difficult, so it was really exhausting to watch the lectures. VeasyGuide makes it much easier to notice these pointers or sketches, zoom in, and follow along. VeasyGuide will be available publicly as a web application by October 2025.

VeasyGuide interface screenshots, taken from https://veasyguide.github.io/

What advice would you have for others who are interested in research and/or accessibility?
First, always be curious, inquisitive, and interested in why and how things are the way they are, and be motivated to change things. So many people give up because they don’t know whether something can be solved. I come from the perspective that nothing is impossible and the sky’s the limit, asking, “If I had everything in my power, what could I do?” Then it’s just about making a small step towards that.
Another important thing is being creative, which is easy to say but more difficult to do. Solutions don’t have to be complex or complicated; good solutions are simple and creative. One way to practice creativity is to give yourself a forced limitation. For example, one limitation could be not changing how people navigate, or not converting content to a new format—which is the limitation I imposed on myself for VeasyGuide. That is one way to force yourself to think creatively. But even when you limit yourself, still think from the perspective of “the sky is the limit.”
For accessibility research, one very important component is to be interested in people and their experiences. Maybe it is kind of obvious, but to create accessible solutions, it is very, very important that you are interested in the experiences of people. Like, maybe you have some solution that might work in your mind, but if other people have a different experience, it is important to listen to those experiences and listen to the way they use things. It is very easy to think that “oh, they are just not using it right,” but you also have to consider why they aren’t using it the way you intended. Always be interested in people and their experiences.
In a sense, we are all researchers. I don’t think many people live their life without trying different things, because a lot of people want to make their life easier, better, nicer, and more comfortable. But people with disabilities are especially great researchers, because you always have technology that you have to try, or you are trying to find strategies or workflows that work for you. You do trial and error, you do experiments with yourself, and that is essentially research. It took me a very long time to be able to reach a point where my workflow is comfortable for me, and there are many things that I am still learning. But I have curiosity, creativity, and a passion for making my life better.

What excites you the most about the future of accessibility and assistive technology?
I envision a future where technology is made accessible from the start. One way to achieve that is by involving people with disabilities, or people who have specific access needs, from the inception of new technology. Personalized interfaces and customizable experiences, where users can change different settings and control how things operate or behave, will also contribute to accessibility. For a recent example, AI agents can generate text in a way that you like: maybe you want more or less text, maybe you want it to explain one thing a bit more, or maybe you want it to explain something else. I am very much looking forward to a future where everything is designed with accessibility in mind, and personalization plays a big role in that.
Thank you to Yotam Sechayk for participating in this interview and sharing so much information! To connect with Yotam and learn more about his research, visit his website (Yotam Sechayk) or his GitHub.