Devices | 20 Oct 2021

Pixel 6 and iPhone 13: How smartphones are now clever enough to coach you at tennis

Artificial Intelligence (AI) now lives inside the brain of the latest smartphones, opening up intriguing new possibilities in the apps we use, from tracking your tennis swings to helping you sing your heart out.

Smartphones may look a little samey these days, but the brains inside them are getting more sophisticated than ever.  

While smartphones have used a form of AI for some time, the new Apple iPhone 13 range and the upcoming Google Pixel 6 series take a different approach to AI that is already paying off.

Now see, hear 

Until now, most AI smartphone features have relied on your data being sent over the air from your phone to the ‘cloud’ – data centres full of powerful server computers. They apply their synthetic smarts to your data, then send the resulting insights back to you. 

A classic example of this approach is the Google Assistant feature on recent Android smartphones. With your permission, Google’s servers analyse data such as the emails in your Gmail inbox. Google Assistant will then automatically remind you about, for example, bills that need paying and upcoming train and plane journeys. 

On the latest, most powerful smartphones, AI can run on the phone itself rather than in the cloud.  

The power and flexibility of on-device AI can go far beyond analysing emails, as apps like SwingVision demonstrate. With your iPhone mounted on a tripod, SwingVision can record your tennis matches, tracking and analysing elements such as stroke type, ball speed and footwork. It can even make line calls in real time, detecting where the ball has landed on the court. By automatically tallying and organising such statistics, the app aims to help you improve your game.

According to Swupnil Sahai, a veteran of Tesla and now the CEO of Mangolytics, the company behind SwingVision, the app’s real-time feedback features wouldn’t be possible without on-device AI.

“[Apple’s] Neural Engine is the biggest game-changer and probably the only reason SwingVision can even exist today. There is still, to date, no chip on any Android phone that is as fast or efficient. This is the primary reason we are still on iOS – only the iPhone XR/XS and newer are able to run our models fast enough to keep up with the video frame rates in real-time. 

“We’ve been waiting two years for a similar chip to come out on the Android side. This year’s Google Pixel 6 and its Tensor chip seems like the first real candidate,” Mr Sahai told Vodafone UK News.

Another example of on-device AI is NaadSadhana. Designed for musicians, especially those versed in Indian genres and styles, this iOS app will listen to your singing and then – after some brief initial processing – automatically generate a real-time multi-instrument backing track. 

For NaadSadhana’s developer Sandeep Ranade, the instantaneous responsiveness that on-device AI brings is just as crucial as it is for Mr Sahai and SwingVision. NaadSadhana “needs to respond to musical cues in under a millisecond”, Mr Ranade said. 

Other factors also influenced Mr Ranade’s decision to use on-device AI.  

“The amount of data needed for analysis of music, and the AI models involved, is large, as is the variability. It’s better to evolve the models on-device based on each musician’s unique musicality and style,” he said.

Privacy and security were also important considerations for Mr Ranade, with data about people’s voices staying solely on their devices rather than in the cloud. 

Both developers agreed that running AI on processors with AI-specific circuits, like Apple’s A-series processors and its Neural Engine, can be more energy-efficient than running it on those without – an important consideration on mobile devices. 

The energy efficiency of on-device AI hardware is continuing to improve, says Mr Sahai.

“So for example, even if the A15 [the processor in the iPhone 13] is only moderately faster than the A14 [the processor in the iPhone 12], the power consumption is substantially less when running the same algorithms,” he explains.

AI, AI, oh! 

The latest phones can do this clever stuff because their processors have specific circuits devoted to running nothing but AI. 

This includes all iPhones with an A11 processor or newer (namely every iPhone since the iPhone 8 and iPhone X, released in 2017). These processors have a so-called Neural Engine for running AI.

On the Android side, Google has made waves with its announcement that it has designed AI-specific circuits for its Tensor processor, which will be in the upcoming Pixel 6 and Pixel 6 Pro phones. Other Android phones with certain Qualcomm Snapdragon processors have had similar circuits since 2016.

Even if you’re not interested in tennis or Indian music, there are plenty of other cool examples of onboard AI in action. 

If you’re using iOS 15 on an iPhone XS, XR or later, for example, then the built-in Photos app can automatically recognise text in photos of signs, billboards and shopfronts. You can then manipulate that text as if you’d typed it yourself, such as copying, pasting and searching.

Long-time computer users will recognise this as Optical Character Recognition (OCR), a software feature often used when scanning paper documents. What sets Apple’s version apart – it calls this Live Text – is that it works on more than just documents and it recognises text with next-to-no processing time. 
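
For developers, Apple exposes the same sort of on-device text recognition through its Vision framework. The snippet below is a minimal sketch of that approach rather than Live Text itself – the function name and the `photo` parameter are purely illustrative.

```swift
import UIKit
import Vision

// A minimal sketch of on-device text recognition with Apple's Vision framework.
// Live Text itself is a built-in system feature; this developer-facing API is
// the closest equivalent. The function name and `photo` parameter are illustrative.
func recognisedLines(in photo: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = photo.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, _ in
        // Each observation is one detected piece of text; take its best candidate.
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate  // favour accuracy over raw speed

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion([])
        }
    }
}
```

Crucially, all of this happens on the phone’s own processor – the photo never has to be uploaded anywhere.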

On the Android side, recent Google Pixel phones have an AI-based feature called Live Caption. This automatically creates captions for videos that otherwise don’t have them, a boon for the hearing impaired, all without needing to use the cloud.

The Pixel 6 phones will extend this feature even further with Live Translate, producing translated captions for foreign-language videos using its Tensor processor. Outside of videos, the Live Translate feature will even be able to translate the speech of someone standing next to you.

Machine learning 

The common thread running through all these apps and features is that AI is being used to recognise things in the real world, from human speech and singing, to the written word and tennis balls.  

Under the hood, to put it very simply, software ‘models’ are trained beforehand to recognise objects using a huge number of examples – whether they be recordings of speech and music, photos of the written word, or videos of tennis matches.  

When you use an app that uses on-device AI, it’s running these models on your device’s processor and then refining them even further based on what you use them for. 

It’s comparable to the way a person might learn a foreign language by listening to recordings of native speakers and then practising by repeating certain phrases and vocabulary until they get the pronunciation right. Hence the term Machine Learning or ML. 
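
To make that a little more concrete, here is a rough sketch of what ‘running a model on the device’ can look like for an iOS developer, using Apple’s Core ML and Vision frameworks. The `StrokeClassifier` model and the function are hypothetical, included only to show the shape of the code – the apps mentioned above use their own models and pipelines.

```swift
import CoreML
import Vision

// A rough sketch of the "run a pre-trained model on the phone itself" step,
// using Apple's Core ML and Vision frameworks. `StrokeClassifier` is a
// hypothetical image-classification model bundled with the app.
func classify(frame: CGImage) {
    guard
        let modelURL = Bundle.main.url(forResource: "StrokeClassifier", withExtension: "mlmodelc"),
        let coreMLModel = try? MLModel(contentsOf: modelURL),
        let visionModel = try? VNCoreMLModel(for: coreMLModel)
    else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // For a classifier, each result is a label with a confidence score.
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print("Prediction: \(best.identifier) (confidence \(best.confidence))")
        }
    }

    // Everything below runs on the device's own processor; no data leaves the phone.
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```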

Rise of the machines 

Although on-device AI/ML is highly advanced technology, it’s only just getting started. Smartphones powerful enough to use it have only been available for the past couple of years, while many app developers are still cutting their teeth on the technology. 

As a sign of things to come, a 2020 Google research paper described an ML technique called BlazePose for recognising body poses. While intended for improving fitness training apps, it could potentially be used for recognising and translating sign language.
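
BlazePose itself is distributed through Google’s MediaPipe framework, but for a flavour of what on-device body-pose detection looks like in code, here is a minimal sketch using Apple’s broadly analogous Vision API (the `frame` input is assumed to come from a camera feed).

```swift
import Vision

// BlazePose ships through Google's MediaPipe framework, so this sketch instead
// uses Apple's broadly analogous Vision API to show the same idea: spotting
// body joints entirely on-device. `frame` is a single image from a camera feed.
func detectPose(in frame: CGImage) {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])

    do {
        try handler.perform([request])
    } catch {
        print("Pose request failed: \(error)")
        return
    }

    guard let observation = request.results?.first,
          let joints = try? observation.recognizedPoints(.all) else { return }

    // Print each joint (wrist, elbow, knee and so on) the model is confident about.
    for (name, point) in joints where point.confidence > 0.3 {
        print("\(name): \(point.location)")
    }
}
```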

Smartphones may look a bit boring these days, but what they can do is becoming more exciting by the day.

 

To get the iPhone 13 you want at a price you choose, head to the Vodafone website to learn more about flexible EVO pay monthly plans.

Stay up-to-date with the very latest news from Vodafone by following us on Twitter and signing up for News Centre website notifications.