WhatTheFont

A font identification app powered by machine learning.
Overview
WhatTheFont is an instant font identification tool from MyFonts, the world’s largest font store. As designers, we often see a great-looking font in use, in print or in an image, and have no idea what it is. And unless you happen to have a designer friend who’s great at knowing all the fonts, you’re out of luck… unless you use an app like WhatTheFont.

WhatTheFont exists as a web app and a mobile app. I designed the experiences for both and worked closely with the developers during implementation, including acting in an official Product Owner capacity for the mobile app.
My Contributions
UX Design
UI Design
Product Owner
Screenshots of my design for WhatTheFont for iOS, released in October 2017. We released an Android version at the same time.
The Past and the Future
An older version of WhatTheFont had existed on the web for a number of years and had significant value to the business as a major traffic driver to MyFonts: nearly a quarter of the site’s inbound traffic came in through WhatTheFont, and the service was used 1.5 million times a month. In 2017, WhatTheFont’s font identification engine was completely reworked using a deep learning neural network, and I had the opportunity to design a brand-new user experience to fit. The new AI-powered identification engine meant that I could radically simplify the user experience and remove several manual steps that had been required in the original version.

The original WhatTheFont experience had been built in the early 2000s and relied on the user manually identifying key letters in a font sample, after which the system would try to match the letters’ vector outlines against the fonts in its database.
The original WhatTheFont experience, 2009–2017. Screenshots captured in 2014.
The new WhatTheFont, powered by machine learning, did away with almost all of the manual user steps. On the back end, the neural network looks at an image, detects the text, and can identify the font used from even a very short sample.
My design for WhatTheFont on the web, 2017.
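For a sense of how the pieces fit together, here is a minimal sketch of the two-stage flow described above, written against a hypothetical HTTP API. The endpoint URLs, request parameters, and response shapes are illustrative assumptions, not the actual WhatTheFont backend.

```python
import requests

# Hypothetical endpoints, stand-ins for the real service.
DETECT_URL = "https://example.com/api/detect-text"
IDENTIFY_URL = "https://example.com/api/identify-font"


def detect_text_regions(image_bytes: bytes) -> list[dict]:
    """Stage 1: computer vision finds bounding boxes around pieces of text."""
    resp = requests.post(DETECT_URL, files={"image": image_bytes})
    resp.raise_for_status()
    return resp.json()["regions"]  # e.g. [{"x": 12, "y": 40, "w": 220, "h": 36}, ...]


def identify_font(image_bytes: bytes, box: dict) -> list[dict]:
    """Stage 2: the neural net classifies the font inside one cropped region."""
    resp = requests.post(
        IDENTIFY_URL,
        files={"image": image_bytes},
        data={"box": f"{box['x']},{box['y']},{box['w']},{box['h']}"},
    )
    resp.raise_for_status()
    return resp.json()["matches"]  # e.g. [{"font": "...", "score": 0.97}, ...]


if __name__ == "__main__":
    with open("sample.jpg", "rb") as f:
        image = f.read()
    regions = detect_text_regions(image)
    print(identify_font(image, regions[0]))
```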
Challenges
Over the last few years, the field of machine learning has made major gains, and computer vision has gotten very good. Built on deep learning, a cutting-edge form of machine learning, the font identification neural net can identify most fonts in its database with high accuracy, quickly, and without human intervention. It’s so good, in fact, that one of the main design challenges turned out to be intentionally slowing things down, due to limits on the number of API requests we could reasonably make at one time.

My original concept for the UX was a simple two-step process: the user would upload a photo, and the app would tell them what fonts were used in it, much like asking a smart designer friend to identify fonts for you. Sounds straightforward, right? Well, in practice, that intended user flow turned out to be a little too good to be true.

In testing, we discovered that although it was technically possible for the font identification neural net to identify every piece of text in a complex image at once, doing so generated too many API requests to the server and slowed the whole app down badly. This technical limitation ended up shaping the final design of the app. My challenge was to create a user experience with just the right amount of friction, balancing technical needs with user needs while still keeping the experience as smooth as possible.

Computers don’t see text the same way we do. While a human might look at this image and intuitively know that every word in it is set in the same font, a computer assumes that each word could be a different font and that each one needs to be identified separately.
So in the case of a complex image, like a page from a book, the server would get bogged down as the system tried to individually identify each piece of text spotted by the computer vision API.
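In code terms, the naive version of the flow fans out into one identification request per detected region, so a dense page of text can easily turn into dozens of simultaneous calls. A rough sketch, reusing the hypothetical helpers from the earlier example:

```python
def identify_everything(image_bytes: bytes) -> list[list[dict]]:
    # Naive approach: identify every piece of text the detector finds.
    regions = detect_text_regions(image_bytes)  # a book page may yield 50+ regions
    results = []
    for box in regions:
        # One identification request per region; this is what bogged the server down.
        results.append(identify_font(image_bytes, box))
    return results
```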

My solution was to have the user select a single piece of text to identify. I was originally concerned that users would want to select all the text at once, but in usability testing I found that they adjusted easily to this workflow. We helped them along by pre-selecting one of the pieces of text that had been detected automatically. This worked well in practice, and users were able to tap straight through to their font results.
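Sketched under the same assumptions as the earlier examples, the shipped flow detects regions once, pre-selects one for the user, and only identifies the region they confirm. The largest-box heuristic below is an illustrative stand-in for the actual pre-selection logic.

```python
def preselect_region(regions: list[dict]) -> dict:
    # Pre-select one detected region so most users can simply tap through.
    # (Illustrative heuristic: pick the largest bounding box.)
    return max(regions, key=lambda box: box["w"] * box["h"])


def identify_on_confirm(image_bytes: bytes) -> list[dict]:
    regions = detect_text_regions(image_bytes)
    selected = preselect_region(regions)         # the user can tap a different region
    return identify_font(image_bytes, selected)  # exactly one identification request
```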

At the end of the day, as user experience designers, it’s not enough to understand how users want to use the product; we also need to understand the benefits and limitations of the technologies we work with. I was lucky to work with an engaged, passionate team who were excited to help me understand how the new technology they had built actually worked, and it was through that collaboration that I was able to come up with design solutions to meet the technical challenges.