As a student with low vision who primarily accesses information visually, I frequently use accessibility features on my iPad like Dynamic Type large font sizes, Hover Text, Zoom screen magnification, and other strategies for accessing information with large font sizes or display customizations. While I learned how to use VoiceOver on iPad as part of my coursework/training as an assistive technology specialist, VoiceOver wasn’t an accessibility feature I enabled often until a blind friend pointed out that I was frequently straining my eyes and seemed to be struggling with visual fatigue, which had manifested as me sounding annoyed or getting impatient with visual tasks while we were working on a project together.
Up until that point, I thought of VoiceOver and screen readers as tools that enable auditory access for blind users or users who access information nonvisually, or that can be paired with a refreshable braille display. I hadn’t considered that I could use a screen reader like VoiceOver as a secondary access modality, supplementing my primarily visual access with audio. Although I was used to text-to-speech features like Speak Screen, learning to use VoiceOver with low vision would help me even more with accessing visual information in my classes or using apps that were not designed for low vision access.
Learning to use VoiceOver with low vision is not a skill that I developed within a few hours or even within a few days; it took a few months for me to practice using VoiceOver on an iPad in my daily life, and I’m still not nearly as proficient as my friends who rely on VoiceOver to use their Apple devices without looking at the screen. That said, here are helpful tips and strategies for learning to use VoiceOver with low vision that can be used alongside visual access strategies like large print and magnification.
What is a screen reader? What is text-to-speech?
A screen reader enables users to access text, images, and user interfaces without looking at the screen. Screen reader software reads information out loud using synthesized speech so users know what is on the screen, and lets users navigate their device using a keyboard or gestures instead of a computer mouse; examples of gestures include tapping quickly on the screen, swiping, or long-pressing on items. Blind users typically can’t use a mouse (though they can use touchscreens), while someone who has low vision or some usable vision might use a mouse and a screen reader at the same time.
Screen readers are typically “always on,” and it is reasonable to assume that someone wouldn’t be able to use their device if their screen reader was turned off or not working. On Apple devices like the iPhone, iPad, and Mac, the built-in screen reader is called VoiceOver.
Screen readers like VoiceOver are different from text-to-speech. Text-to-speech enables users to access text and images by reading information out loud when prompted by the user, and can be activated as needed using a shortcut, hotkey, or gesture. Once text-to-speech finishes reading all of the text/visible content on a page, it shuts off until the user activates it again; the user might not need to read everything on the screen or may only need help reading text. Text-to-speech does not use any specific gestures or require the user to change how they interact with their device. On Apple devices like the iPhone, iPad, and Mac, text-to-speech features include Speak Selection/Speak Text and Speak Screen.
Both VoiceOver and Speak Selection/Speak Screen can be enabled by opening Settings > Accessibility > Vision. While someone could have both features enabled simultaneously, VoiceOver is prioritized over text-to-speech.
Related links
- A to Z of Assistive Technology For Low Vision
- How To Make iPad Accessible for Low Vision
- Enabling Temporary Accessibility Settings For iPad
Why I use VoiceOver with low vision
Many students with low vision learn how to access information visually (e.g. large print, screen magnification) as well as how to access information using screen readers or other audio/tactile access options when learning compensatory skills, how to use assistive technology, or when engaging with other components of the Expanded Core Curriculum (ECC) in school. That said, my TVI (teacher of students with visual impairments) exclusively focused on having me use visual modalities like large print, so I didn’t know about the benefits of using a screen reader with low vision until I was already in college.
Wondering why someone who can read large print would use a screen reader? Here are some of the reasons why I learned to use VoiceOver with low vision:
- Managing eye fatigue. It becomes harder to focus my eyes and control double vision when I am tired, which can make it harder to read and locate/press on-screen buttons or links. Instead of just relying on my eyes, I can also listen to VoiceOver to figure out what is on the screen.
- Sensory efficiency. If I have been using my vision for a long time, it’s helpful to have an option for reading that doesn’t involve having to look at my screen continuously or read information letter-by-letter.
- Vision changes. On more than one occasion, one or both of my eyes have swelled shut due to short-term illness or seasonal allergies. While I often have a tiny amount of residual vision in these situations, it is much more practical and efficient to use VoiceOver or other screen readers instead of straining my eyes.
Specific ways that I have used VoiceOver to access content as a college student with low vision include:
- Reading math equations in programs like WebAssign, which often include small symbols that I might not otherwise notice. I can position my iPad at an angle under the lined bifocal in my glasses.
- Accessing eTextbooks or completing built-in quizzes that have lots of icons or small answer choices. Some of these tools can be challenging to access with low vision due to the smaller font size, but I can read them more easily with a screen reader. I find that platforms often have more information available for screen reader access compared to information about low vision accessibility.
- Swiping through options on the dining hall menu.
- Proofreading documents. It is easier to pause VoiceOver and make changes compared to Speak Screen, and verbosity settings can be adjusted to spot errors more efficiently.
In class, I use VoiceOver with a pair of bone-conducting headphones so I can still hear what others are saying, or I will use a single earbud to listen to content. I find over-the-ear headphones uncomfortable to wear for sensory reasons, but many of my friends use these at home or at work.
Related links
- Disability Accommodations For Fluctuating Eyesight
- How I Use WebAssign With Low Vision
- Ten Questions To Ask When Buying Digital Textbooks
- Navigating The Dining Hall: College O&M
- Ten Lessons My TVI Taught Me
How to turn on VoiceOver (and how to turn off VoiceOver)
I don’t rely on VoiceOver to access my iPad with low vision, so I don’t have VoiceOver enabled at all times. I turn on VoiceOver when I want to perform a specific task using one of the following methods:
- Ask Siri. “Hey Siri, turn on VoiceOver” can be used to quickly turn on VoiceOver, and can also be used to turn off VoiceOver without having to open Settings.
- Accessibility Shortcut. VoiceOver can be added to the Accessibility Shortcut menu, which is opened by triple-clicking the top or side button; the Accessibility Shortcut can also be added to Control Center.
- Settings application. Go to Settings > Accessibility > Vision > VoiceOver to turn VoiceOver on or off.
I prefer to use the Accessibility Shortcut when I am in class since I often turn off Siri during lectures or in noisy environments.
VoiceOver gestures and using VoiceOver with a keyboard
VoiceOver fundamentally changes how users interact with their devices by converting on-screen content into spoken or braille output, while providing options for performing tasks like scrolling, selecting, or highlighting text using gestures or keyboard shortcuts (hotkeys).
Learn VoiceOver gestures with low vision
iOS offers a free built-in interactive tutorial for learning to use VoiceOver, including how to use gestures and basic features like navigation, scrolling, text entry, the VoiceOver rotor, and more. This can be found by going to Settings > Accessibility > Vision > VoiceOver > VoiceOver Tutorial. Note that some gestures may not work at first due to the interactive nature of the tutorial.
Another option for exploring VoiceOver gestures is the free built-in VoiceOver Practice feature, where users can practice gestures without opening another application. After turning on VoiceOver, select VoiceOver Practice to practice gestures. Descriptions of each gesture are displayed with the system font size.
Use VoiceOver with an external keyboard
Since I am used to keyboard access and using screen readers with a keyboard, I often find it easier to use keyboard shortcuts/hotkeys instead of gestures. To use VoiceOver with an external keyboard, go to Settings > Accessibility > Vision > VoiceOver > Typing > Modifier Keys and choose the VoiceOver modifier key. I personally use the Magic Keyboard with my iPad.
VoiceOver Practice is not available for learning or practicing keyboard shortcuts.
Turn on screen curtain
Screen Curtain is a privacy feature that hides the device display so that others cannot see it. When Screen Curtain is enabled, the screen appears blank/turned off, but VoiceOver continues to provide audio feedback, reading out text and describing on-screen elements. This feature is incredibly helpful when using devices in a classroom or other public places since it prevents others from being able to see what is on the screen, and can also help students practice using VoiceOver without looking at the screen.
Another way I have used VoiceOver with low vision is for listening to videos that might have strobe lights and/or flashing lights, and using VoiceOver to control the video playback.
Turn on Caption Panel
The Caption Panel displays VoiceOver’s output as on-screen text, not unlike captions used in videos. When enabled, users can read what VoiceOver is saying instead of or in addition to listening to it, which can be useful when learning to use VoiceOver or for confirming what is being read. Users can customize the font size, opacity, and position of the panel in Settings.
Configuring VoiceOver for low vision
When I first started learning VoiceOver with low vision, I wasn’t sure where to start with customizing settings. Many of my friends were VoiceOver power users reading at over 500 words per minute, using VoiceOver with a braille display, or sending text messages with the VoiceOver braille keyboard or dictation. I wasn’t sure what settings would be the best option for a beginner, so I asked the assistive technology specialist at my university along with a friend who considered themselves an intermediate user for their recommendations.
Now that I am an assistive technology specialist with low vision who has been using VoiceOver for years, here is a list of VoiceOver settings and customizations that I recommend exploring when learning to use VoiceOver with low vision:
Speaking rate
Speaking rate controls the speech rate of VoiceOver and how quickly information is read out loud. This is separate from the speaking rate for Speak Selection, and users often increase the speaking rate over time as they develop stronger listening skills. The speaking rate can also be adjusted from the VoiceOver Rotor, which can be used for temporarily increasing/decreasing VoiceOver speed without having to open settings.
Verbosity
Verbosity controls what VoiceOver announces and how, and is configured in Settings > Accessibility > VoiceOver > Verbosity. For each category, users can decide whether information is spoken, conveyed with a sound or pitch change, displayed on a braille display, or not announced at all. As someone who can see the screen, I don’t need VoiceOver to announce every single thing, so adjusting verbosity helped make VoiceOver more practical to use.
Here are the verbosity options I find most relevant for low vision users:
- Punctuation. Controls how much punctuation VoiceOver reads aloud. The options are All (reads every comma, period, etc.), Some (only certain punctuation like slashes and greater-than signs), None (no punctuation spoken, though VoiceOver still pauses where punctuation appears), or a custom group. I use Some for most things and switch to All when proofreading.
- Speak Hints. Toggles usage hints like “double tap to toggle setting” or “swipe up or down to adjust the value.” These are helpful when first learning VoiceOver.
- Controls. Choose how control types like buttons and links are announced: before the name (Button, Share), after the name (Share, Button), or not at all. I set this to Don’t Speak for most activities since I can usually tell visually whether something is a button or a link.
- Capital Letters. Choose whether VoiceOver says “cap” before capital letters, plays a sound, changes pitch, or does nothing. I use “Change Pitch” since it’s less disruptive than hearing “cap” before every capitalized word when proofreading or working with a case-sensitive programming language.
- Deleting Text. Controls how VoiceOver announces deleted text. Options are Speak, Play Sound, Change Pitch, or Do Nothing. I use “Play Sound” so I know something was deleted without hearing it read back to me every time.
- Links. Controls how VoiceOver announces links. VoiceOver can speak “link,” play a sound, change pitch, or do nothing. I use Change Pitch so links sound slightly different from regular text without adding extra words.
- Numbers. Choose between Words (reads “one hundred and twenty-three”) or Digits (reads “one, two, three”). I use Words for most things and switch to Digits when working with data.
- Emoji. Controls whether VoiceOver speaks emoji names. You can also toggle the Emoji Suffix separately, which controls whether the word “emoji” is appended after the name; VoiceOver would announce “red heart emoji” instead of just “red heart.” I keep the suffix on so I know when something is an emoji versus text that happens to describe a heart.
- Media Descriptions. Controls how closed captions and subtitles are handled. Options are Off, Speech, Braille, or Speech and Braille.
- Table Output. Two separate toggles: one for Table Headers (whether column/row headers are announced) and one for Row & Column Numbers. I turn off Row & Column Numbers for most things since I can see the table but turn them back on for complex spreadsheets.
- System Notifications. Controls how notifications are handled when the device is locked versus when they appear as banners. Options include Speak, Speak Count (just announces the number of notifications), Braille, or Do Nothing. I use Speak Count for lock screen notifications, so I know something came in without having it read aloud unexpectedly. That said, I typically turn off VoiceOver before locking my device.
Speech and Voices
The default VoiceOver voice on my device is Samantha, a female American voice, at 50% pitch. There are several voices available for different languages, and different voices can be set for different languages. Anecdotally, several of my American friends prefer British or Australian English voices because they emphasize syllables differently and are easier to identify.
Other options for customizing Speech and Voices include:
- Voice Rotor. Add multiple voices to a Voices rotor to quickly switch between voice options without going into settings. This is useful for setting up voices for specific tasks, such as a faster voice for skimming text content or a slower voice for reading instructions.
- Per-Voice Customization. For each voice, users can individually adjust the rate, pitch, speech volume, or apply an equalizer.
- Pitch Change. The global pitch change setting affects how VoiceOver uses pitch to convey information (like indicating capital letters or links, if those verbosity options are configured to Change Pitch).
- Detect Languages. Automatically switches the VoiceOver voice when it encounters text in a different language. Useful for accessing VoiceOver to read content in other languages, though it can lead to errors with proper nouns.
- Pronunciations. Add custom pronunciations for specific words, acronyms, symbols, and names. For example, I added the correct pronunciation for a friend’s name since VoiceOver didn’t recognize it. This is also useful for technical terms, abbreviations, or anything VoiceOver consistently mispronounces.
Sound
Audio Ducking controls whether background audio (music, podcasts, videos) is lowered when VoiceOver speaks. Options are Off, When Speaking, or Always. I use When Speaking so audio ducks only when VoiceOver is actively talking, rather than staying lowered the whole time VoiceOver is on.
Another helpful feature for students is Speech Channel, which configures which audio output channel VoiceOver speech comes from. This is particularly useful for students who use headphones, as VoiceOver can be configured to play in a single ear.
Enable Large cursor for VoiceOver
The Large Cursor setting makes the visual VoiceOver cursor (a black rectangle that appears around the focused element) larger and easier to see. This is helpful for using VoiceOver with low vision, since the default cursor can be hard to spot on the screen.
Rotor customization
The Rotor is a circular gesture control; place two fingers on the screen and rotate them like turning a dial to cycle through navigation options, then swipe up or down to use the selected option. By default, it includes items like Characters, Words, Headings, Links, and more.
The Rotor can be customized by adding and removing items and reordering them so frequently used options are easiest to reach. Go to Settings > Accessibility > VoiceOver > Rotor to add, remove, or rearrange items.
A few Rotor items I find particularly useful as a low vision user:
- Speech Rate. Adjust speaking rate on the fly without going into settings
- Speech Volume. Adjust VoiceOver volume independently from system volume
- Headings. Navigate a page by headings, which is much faster than swiping through every element
- Links. Jump between links on a page
- Zoom. For those who use Zoom alongside VoiceOver, this allows magnification to be adjusted from the Rotor
There’s also a Change Rotor with Item setting that automatically switches the Rotor to a relevant option based on what’s focused. For example, when an item has available actions, the Rotor can switch to Actions automatically. I find this feature helpful for reducing visual clutter and hiding non-relevant features.
Navigate Images
This setting controls whether VoiceOver focuses on images when navigating content. Options include:
- Always. VoiceOver focuses on all images
- With Descriptions. VoiceOver only focuses on images that have alt text or image descriptions
- Never. VoiceOver skips images entirely, whether they have alt text or not.
I use With Descriptions, since I can see images visually and don’t need VoiceOver to stop on every decorative photo, but appreciate having access to alt text when available. I do not recommend enabling Never, as this can make it more challenging to use the Photos app (gallery).
Navigation mode
Flat is the default navigation style for VoiceOver on iOS. When navigating in flat mode, VoiceOver moves through every item on the screen one at a time in sequence; swiping right moves to the next item, swiping left moves to the previous item. Every element is treated as being on the same level, regardless of whether it’s inside a container, a group, or a section.
Grouped navigation is similar to how VoiceOver works on macOS and other desktop screen readers by default. Rather than moving through every item one at a time, VoiceOver recognizes containers and groups of elements (like a list of messages in Mail) and treats them as units. In grouped mode, a two-finger swipe right moves into a group (called interact), and a two-finger swipe left moves out of it. Navigation happens in two stages: first moving to a group, then moving into it to access the individual items inside.
Related links
- How To Write Alt Text and Image Descriptions for the Visually Impaired
- How To Access Images Without Alt Text
- Make Online Learning Accessible For VI Students: Quick Start Guide
VoiceOver Live Recognition and low vision
VoiceOver Recognition groups four features: Screen Recognition, Live Recognition, Image Descriptions, and Text Recognition. These tools use on-device machine learning to improve accessibility and do not require internet access.
Screen Recognition
Screen Recognition can make it easier to navigate apps that were not designed with accessibility in mind by scanning the visible interface and exposing detected controls (for example, buttons, sliders, and labels) to VoiceOver. It runs on the device and does not require internet access, though a one-time model download may be required. Screen Recognition can also be added to the Rotor for quick toggling, and it can be limited to specific apps rather than enabled system-wide.
There are a few limitations to using Screen Recognition, including:
- Controls may be identified without meaningful labels (for example, an icon recognized as “button”).
- Separate elements may be grouped incorrectly.
- For best results, use text-labeled interfaces; icon-only screens are more inconsistent.
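As an aside for readers who also build apps: the unlabeled-controls limitation exists because developers didn’t provide accessibility labels in the first place. Here is a minimal UIKit sketch (the button name and icon are purely illustrative) of how a developer might label an icon-only button so VoiceOver announces it meaningfully, with no Screen Recognition step needed:

```swift
import UIKit

// Hypothetical icon-only button: without a label, VoiceOver (or Screen
// Recognition) may only announce a generic "button".
let shareButton = UIButton(type: .system)
shareButton.setImage(UIImage(systemName: "square.and.arrow.up"), for: .normal)

// Supplying an accessibility label and trait lets VoiceOver announce
// "Share, button" directly.
shareButton.accessibilityLabel = "Share"
shareButton.accessibilityTraits = .button
```

This is one reason text-labeled interfaces tend to work better: the label is already there for VoiceOver to read.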
Live Recognition
Live Recognition uses the camera to provide real-time information about the surroundings. In iOS 18 and later, these tools are consolidated into a single Rotor item.
How to activate Live Recognition:
- Live Recognition Rotor item (add at Settings > Accessibility > VoiceOver > Rotor)
- Four-finger triple-tap
- Custom gesture (VoiceOver settings)
Once active, Live Recognition cycles through these options:
- Text. Reads text in the camera view (signs, labels, printed text).
- Point and Speak. Reads the text under a finger while moving across a surface (for example, appliance panels or medication labels).
- Descriptions. Provides a general description of what the camera is pointed at.
- People. Detects people in view (requires LiDAR).
- Doors. Detects doors and provides details, including how they open (requires LiDAR).
- Furniture. Identifies furniture in view.
Out of these options, I use the Text and Point and Speak options the most often for reading text with my iPad camera and low vision. My iPad does not have LiDAR, so I have not used the door or people detection features on my personal devices.
Image Descriptions
Image Descriptions provides descriptions of images in apps and websites. It can run automatically in selected apps or be triggered manually with a custom gesture.
Text Recognition
Text Recognition reads text embedded in images (for example, screenshots, memes, and photos). It is separate from Image Descriptions, so it can remain on even when Image Descriptions is off.
Related links
- All About Visual Assistance Apps For Visually Impaired
- iOS Magnifier and Low Vision Accessibility
- OCR Scanner Apps For Low Vision Students
Typing with VoiceOver
There are three options for using an on-screen (touchscreen) keyboard when VoiceOver is turned on:
- Standard Typing. Tap a key once to hear it read out loud, then double-tap the key to activate/type it.
- Touch Typing. Move a finger across the keyboard and lift it when the desired key is heard.
- Direct Touch Typing. The keyboard behaves as if VoiceOver is off. Touching a key enters it immediately. This option can work well for low vision users who can see the keyboard well enough to type normally, since typing habits do not need to change when VoiceOver is on.
Typing Feedback can also be configured to control what VoiceOver says while typing: nothing, characters only, words only (spoken when space or punctuation is entered), or characters and words. I use Words, so I get confirmation of what I typed without hearing every individual letter.
For users that have trouble identifying letters, Phonetic Feedback adds phonetic words after letters (e.g., “H, hotel”) to help distinguish similar-sounding letters. I keep this off since I can see the keyboard, but it’s useful when practicing with the screen curtain on or when proofreading.
VoiceOver Activities and low vision
The Activities feature enables users to create custom VoiceOver settings for specific apps or categories of apps. Activities can be set to activate automatically when a specific app is opened, or when the user is in a specific context category: Word Processing, Narrative, Messaging, Social Media, Spreadsheet, Source Code, or Console.
When creating an activity, it’s possible to customize the following features:
- Voice
- Speaking Rate
- VoiceOver Volume
- Mute Speech
- Mute Sound
- Audio Ducking
- Various Verbosity settings
- Punctuation
- Emoji
- Container Descriptions
- Table Headers
- Row and Column Numbers
- Image Descriptions
- Speak Hints
- Typing Style
- Navigation Style
- Braille settings
- Modifier Keys
I set up VoiceOver activities for tasks like checking my email, which uses a faster speaking rate compared to my other reading applications.
Setting up custom VoiceOver commands and gestures
Interested in customizing VoiceOver even further? There are several options available for creating custom commands, gestures, and keyboard shortcuts when using VoiceOver with low vision:
- All Commands shows every available VoiceOver command organized by category. Tapping any command reveals what gesture or keyboard shortcut is currently assigned to it and allows a new one to be assigned. This is helpful for creating gestures for items on the Rotor.
- Touch Gestures shows the full list of possible gestures organized by type and number of fingers, and lets the assigned command for each one be changed. Many gestures are already assigned to something, so it’s worth checking what is already assigned before changing anything.
The same command can be assigned to multiple gestures, and commands can also be assigned to keyboard shortcuts for those who use a physical keyboard. If a custom gesture assignment doesn’t seem to work the first time, try assigning it again; I have noticed that my first attempt at assigning gestures tends to fail (which is a known bug).
More tips for learning to use VoiceOver with low vision
- iOS Shortcuts has a VoiceOver tutorial and a few other accessibility tutorials, which I wrote about in Ten iOS Shortcuts For Visual Impairment
- Need a list of keyboard shortcuts for using VoiceOver on an iPad? Read Use VoiceOver on iPad with an Apple external keyboard – Apple Support
- Want to learn more about the history of VoiceOver? Check out 36 Seconds That Changed Everything – How the iPhone Learned to Talk
- Looking for ideas on how to introduce both sighted and visually impaired students to assistive technology? Check out my ideas in Global Accessibility Awareness Day: Activity Ideas for Students.
Published September 18, 2018. Updated April 2026
