One of the most-used applications on my Android phone is Google Lens, an image recognition tool that lets me take pictures or upload pictures from my phone gallery to learn what is in an image or search for additional relevant information. In the twenty-four hours leading up to this post update, I used the Google Lens app twelve times to get information about schoolwork materials and items around the house; it even kept me from eating something I was allergic to. Here is how I use Google Lens with low vision, and how I introduce this application to other Android users with low vision.
Overview of Google Lens
Google Lens is not a single app, but rather an image recognition technology incorporated into several Google products, such as:
- Google Search application for iOS and Android devices (look for the camera icon in the search bar)
- Google Lens app for Android (while Lens features are technically built into Android devices, I prefer to use the separate Google Lens app for ease of use)
- Lens feature in Google Photos app for getting information about pictures in the Android gallery
- Built-in features in the Camera app on Android devices (such as the QR scanner and document scanner, which copies text to the clipboard)
- Google Lens shortcut pinned to the default Pixel launcher on supported devices
Over the years, Google Lens has simplified its feature channels into three options: translating text, visual search based on images, and homework help (powered by Socratic). I go into more detail about how I use each option throughout the post.
How to use Google Lens
Google Lens gives users the option to take a picture of something with their device's back camera and get information about it. In the Google Lens app, users can choose a feature channel at the bottom of the screen by swiping to the option they would like to use (Translate, Search, or Homework).
To search for content with Google Lens, users select the shutter or search button in the center of the screen to take a picture, or select the Gallery icon to choose a photo from their device. A pop-up at the bottom of the screen will provide information about the item or take the user to a Google search page with additional information. Pictures taken in Google Lens are not stored in the gallery or camera roll unless the user takes a screenshot or otherwise downloads the image.
Another option for getting information about something in the Google Lens app is to long-press the shutter button and speak a question about an image, such as “what color is this?”, “what are those shoes called?”, or “what dish is this?” This can also be used to record a video, where the user can either speak their question during the recording or type it after they are finished. Again, the video is not saved to the camera roll, and pre-recorded videos cannot be uploaded to Google Lens.
Translating text with Google Lens
When using Translate with Google Lens, users can hover their phone camera over text in one language and translate it to another language. The translated text will be displayed as an overlay on top of the original text. When uploading an existing image from the gallery, users can copy the text to their clipboard with “Select All” to read it in large print in another application, or use the Listen feature to listen to text read out loud in a synthesized voice.
Examples of how I have used the Translate feature with Google Lens include:
- Reading a handwritten Christmas card in French
- Translating a screenshot from a Spanish website
- Browsing a menu written in a language I couldn't identify; Google automatically detected the language and translated it to English
As someone with low vision, I personally find the Translate feature easier to use with existing screenshots or text from my camera roll, because it can be copied in large print or read out loud without me having to continuously hold my phone over something.
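For readers who are curious about what happens behind the scenes, Google publishes similar on-device translation as a standalone developer library called ML Kit, which is separate from Google Lens itself. The Kotlin sketch below is a minimal illustration of the technique, assuming an Android project with the com.google.mlkit:translate dependency; the function name translateFrenchToEnglish is my own for illustration, not part of any API.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Minimal sketch of on-device translation with ML Kit (not Google Lens's
// actual implementation): translate French text, like my handwritten
// Christmas card, into English.
fun translateFrenchToEnglish(text: String, onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.FRENCH)
        .setTargetLanguage(TranslateLanguage.ENGLISH)
        .build()
    val translator = Translation.getClient(options)

    // The language model is downloaded on first use and cached on-device.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated -> onResult(translated) }
                .addOnFailureListener { e -> onResult("Translation failed: ${e.message}") }
        }
        .addOnFailureListener { e -> onResult("Model download failed: ${e.message}") }
}
```

ML Kit also offers a separate language identification API, which is one way an app could reproduce the automatic language detection I saw with the restaurant menu.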
Related links
- Language Learning Tips And Resources For Low Vision
- Google Reader Mode and Low Vision
- Customize Accessibility Settings For Specific Apps
Recognizing text with Google Lens
There are a few options for recognizing text with Google Lens, including uploading an existing image and using the Search feature to take a picture of text. Both options allow users to copy text from an image to the clipboard for use with another application, as well as to have text read out loud with Select-to-speak or the Listen feature. Users can also search for text from an image on Google or with the visual search tool.
Examples of how I use text recognition features with Google Lens include:
- Reading text from a screenshot where the font is too small
- Extracting text to copy/paste for alt text or image descriptions
- Recognizing text from an image and pasting it into another application so I can read it
- Reading expiration dates and ingredients lists for food items
- Using OCR to recognize handwritten text so that I can read it more easily as digital print
- Enlarging the serial number on a guitar or other instruments (using the flashlight option in the top corner of the screen can make this easier)
- Capturing text from a flyer or board that I can save to Google Keep or another notes application
- Enlarging text on a sign in my hotel room
- Reading room numbers or copying down locations on a map/directory
It’s worth noting that Google Lens is not designed to magnify text; rather, it is a text recognition tool. For users who want to hear text read out loud as it is recognized by their camera, I recommend the Google Lookout app on Android, which is designed for blind and low vision users.
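For developers who want to experiment with this kind of on-device text recognition, Google offers it separately through its ML Kit library. Here is a minimal Kotlin sketch of the technique, assuming an Android project with the com.google.mlkit:text-recognition dependency; it extracts text from a bitmap the way I pull text out of a screenshot, though Google Lens's own implementation is not public.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Minimal sketch of on-device OCR with ML Kit: extract text from a photo
// or screenshot so it can be enlarged, copied, or read out loud elsewhere.
fun recognizeText(bitmap: Bitmap, onResult: (String) -> Unit) {
    // rotationDegrees = 0 assumes the bitmap is already upright.
    val image = InputImage.fromBitmap(bitmap, 0)
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    recognizer.process(image)
        .addOnSuccessListener { result ->
            // result.text is the full recognized string; individual blocks and
            // lines, with bounding boxes, are available via result.textBlocks.
            onResult(result.text)
        }
        .addOnFailureListener { e -> onResult("Recognition failed: ${e.message}") }
}
```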
Related links
- Reading Handwriting With Assistive Technology
- A to Z of Assistive Technology for Reading Digital Text
- How To Access Images Without Alt Text
- All About Visual Assistance Apps For Visually Impaired
- Hotel Familiarization and Low Vision
- Google Lookout App For Low Vision
Using Google Search with Google Lens
Google Lens has had several feature channels over the years, such as shopping, dining, and product identification. These are now all rolled into one visual search tool under the “Search” feature, where users can take a picture or upload a photo to get more information about what is in an image.
Examples of how I use visual search with Google Lens include:
- Identifying animals or breeds of dog, as well as species of plants and flowers
- Getting high resolution pictures of art or artifacts at a museum, which can be helpful for getting a closer look at items that are far away
- Scanning product barcodes or taking pictures of food packaging to search for the ingredients list; this feature has been tremendously helpful as someone with food allergies and low vision!
- Taking a picture of a clothing item to figure out where it is from or getting information about care instructions or styling ideas
- Reverse-searching an image to find another version in higher resolution, or a description of what it looks like
- Scanning QR codes for menus or other files
From my phone gallery, I use the Lens feature to get more information about what is in an image, such as what types of items are visible or the original source link, or to recognize text from the image. It’s also helpful for reading product labels or identifying items that might be hard to see otherwise.
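For anyone curious how the barcode part of visual search works under the hood, barcode decoding is also available to developers as a standalone ML Kit library, separate from Google Lens. Below is a minimal Kotlin sketch of the technique, assuming the com.google.mlkit:barcode-scanning dependency; the decoded values could then be passed to a product or ingredients search (the helper function scanBarcodes is my own name for illustration).

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

// Minimal sketch of barcode decoding with ML Kit: read the payloads of any
// barcodes (UPC, EAN, QR, etc.) visible in a photo.
fun scanBarcodes(bitmap: Bitmap, onResult: (List<String>) -> Unit) {
    // Default options scan for all supported barcode formats.
    val scanner = BarcodeScanning.getClient()

    scanner.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { barcodes ->
            // rawValue holds each barcode's encoded payload, such as a UPC
            // number or a QR code URL, ready for a product lookup.
            onResult(barcodes.mapNotNull { it.rawValue })
        }
        .addOnFailureListener { onResult(emptyList()) }
}
```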
Related links
- How To Write Alt Text and Image Descriptions For Flowers
- Tips For Visiting Art Museums With Low Vision
- How To Access Images Without Alt Text
- Making Clothing Stores Accessible For Low Vision
Getting homework help with Google Lens
Powered by the Socratic tool, Google Lens can provide homework help across various subjects by connecting learners with additional resources for learning a topic. For example, I can take a picture of a math problem and get a detailed explanation of how to solve that type of problem, a step-by-step guide to finding the solution, and videos and websites from trusted academic sources that provide more information about the topic.
Examples of how I use homework help with Google Lens include:
- Getting help with a challenging math problem, or recognizing text from a math problem that I can copy into another application
- Receiving an explanation for a chemistry, physics, or biology question
- Browsing other sources and information about history or literature, as well as grammar explanations for writing
It’s worth noting that I don’t actually use the Homework feature for doing my homework or any other graded assignments; that’s against the honor code! But it is helpful for tutoring or studying, especially if I couldn’t see the original lesson very clearly.
Related links
- Adapting Digital Equations: Math Problems and Low Vision
- The Best Study Tips For Visually Impaired Students
- How I Use My Phone As Assistive Technology In Class
Google Lens accessibility features for low vision
Google Lens is designed to help users “search what you see,” but for those of us who can’t see very much, there are several options for using Google Lens with existing accessibility settings. These include:
- Having highlighted text read out loud with Select-to-speak or TalkBack (to be honest, I prefer using Google Lookout with TalkBack)
- Opening results for visual search in Google Chrome to view information in a full screen view
- Using the flashlight (lightning bolt icon) for additional illumination
- Reviewing the history of content and images recognized with Google Lens by selecting the History icon (a clock with an arrow at the top of the screen in the Google Lens app)
- Uploading an image from the gallery for image recognition
- Asking questions about an image using voice recognition or by taking a video
- Support for large print and system font sizes on labels (search results may require users to open a separate window or swipe down)
- Adding to a search by typing or asking questions based on an image
For users who benefit from visual assistance but don’t need results read out loud or the extensive feature sets of specialty applications for visual impairments, the free Google Lens app is a great way to use image recognition to get information about objects within view of the camera in a fast and discreet way.
Related links
- Low Vision Accessibility Settings For Android Phones
- How To Use Select-to-speak on Android
- Customize Accessibility Settings For Specific Apps
- Google Lookout App For Low Vision
More resources for how I use Google Lens with low vision
- Google Lens is built into Google Assistant, and I use it often when traveling; I share an example in How I Use Google Assistant While Traveling
- When I’m teaching someone how to use Google Lens, I encourage them to identify five items that they have trouble seeing and try using Google Lens to recognize them. I try to incorporate personal interests as well, such as identifying instruments, plants, craft supplies, book titles/covers, and pictures of food.
- Another app I have talked about on my website is Microsoft Lens, a document scanner that is separate from Google Lens. Learn more in How I Use Microsoft Lens With Low Vision
- Want to learn more about how I use image recognition in different contexts? Read How To Access Images Without Alt Text

Published December 24, 2019. Updated December 2024
