Android newbie HMD’s Nokia 8 flagship lets you livestream ‘frontbacks’

Rebooting the venerable Nokia smartphone brand has not been a rush job for HMD Global, the Foxconn-backed company set up to license the Nokia name and try to revive the brand’s fortunes in smartphones.

But after starting with basic and mid-tier smartphones, it’s finally outed a flagship Android handset, called the Nokia 8, which it will be hoping can put some dents in Samsung’s high end. And/or pull consumers away from Huawei’s flagship handsets — or indeed the swathe of Chinese OEMs surging up the smartphone market share ranks.

With the Nokia 8, HMD is putting its flagship focus on content creators wanting to livestream video for their social feeds.

Competition in the Android OEM space has been fierce for years and there are no signs of any slack appearing, so HMD faces a steep challenge to make any kind of dent here. But at least it now has an iron in the fire. As analyst CCS Insight notes, the handset will be “hugely important in getting Nokia-branded smartphones back on the mobile phone map”.

Specs wise, the Nokia 8 runs the latest version of Android (Nougat 7.1.1) — which HMD is touting as a “pure Android experience”, akin to Google’s Pixel handsets. (There’s a not-so-gentle irony there, given Nokia’s history in smartphones. But clearly HMD is going all in on Android.)

On the hardware front, there’s a top-end Qualcomm Snapdragon 835 processor, plus 4GB of RAM and 64GB of internal memory (expandable thanks to a MicroSD card slot). The 5.3 inch ultra HD resolution display puts it on the verge of phablet territory — and squarely within the current smartphone screen size sweet spot.

Also on board: dual rear cameras, both 13MP (one color, one B&W), and a 13MP front-facing lens — all with f/2.0 apertures, Zeiss optics and support for 4K video.

The flagship camera feature — and really phone feature too — is the ability to livestream video from both front and back cameras simultaneously.

HMD is trying to coin a hashtaggable word to describe this: “bothie” (as opposed to a selfie)…

Hello #Bothie! The world’s first smartphone to broadcast live with both cameras simultaneously. Meet the #Nokia8.

— Nokia Mobile (@nokiamobile) August 16, 2017

This split screen camera feature can also be used for photos — so they’ve basically reinvented Frontback. Well done.

“Content creators can natively broadcast their unique #Bothie stories to social media through the Dual-Sight functionality located within the camera app. Fans can also enjoy unlimited photo [<16MB in size] and video uploads to Google Photos,” HMD writes.

This could prove a sticky feature for social media lovers — perhaps especially the dual video option, which lets people share twin perspective video direct to Facebook and YouTube via the camera app.

Or it could prove a passing fad, like Frontback. Time will tell. CCS Insight describes it as an “interesting approach” but also cautions on whether consumers will take to it.

Commenting on the feature in a statement, HMD’s Juho Sarvikas, chief product officer, said: “We know that fans are creating and sharing live content more than ever before, with millions of photos and videos shared every minute on social media. People are inspired by the content they consume and are looking for new ways to create their own. It’s these people who have inspired us.”

Elsewhere on the device, there’s spatial surround sound recording tech that uses three microphones and apparently draws on Nokia’s Ozo 360 camera division, plus a USB Type-C charging port; a 3.5mm headphone jack; and a non-removable 3090 mAh battery.

The handset, which is clad in an aluminium unibody casing and has a fingerprint reader on the front for device unlocking and authentication, is described as splashproof rather than waterproof.

Global RRP for the Nokia 8 is €599, with a rollout due to start in September. The handset comes in a choice of four colors: Polished Blue, Polished Copper, Tempered Blue and Steel.

Google is testing a data-friendly version of its Search app

Google might soon release a data-friendly version of its search app for mobile.

That’s because the company is currently piloting such an app in Indonesia, as the eagle-eyed team at Android Police first spotted.

“Search Lite” — which TechCrunch understands is not the name of the app, though it is certainly an accurate description of it — is essentially a modified version of the Google search app, optimized for those using poor quality connections, with limited mobile data allocations, or in possession of a smartphone with little internal memory.

In that respect it’s similar to the YouTube Lite app that Google launched in India last year, and other ‘lite’ apps from Facebook, LinkedIn, Twitter and others. India has been a core market for these data-friendly apps and there are clues within the app that this Google app is headed to India soon.

Beyond offering an easier way to search the web, the app connects to other content including news, weather and Google’s Translate service. There’s an option to navigate to external websites inside the app’s dedicated browser, a move that would seemingly save on data, too.

Image via Android Police

Google declined to comment on the app specifically.

“We’re always experimenting with our products with the goal of providing the most useful and optimal experience for our users. This is a new experimental app to help improve the search experience for users in Indonesia,” a spokesperson told TechCrunch.

Beyond individual apps, Google is putting serious focus on developing services that are optimized for emerging markets, where it sees the next billion internet users coming online. It is developing a lightweight version of Android — Android Go — to power smartphones, and has made strategic acquisitions in Southeast Asia and most recently India to build out engineering teams that are dedicated to emerging markets.

Beijing’s public transport system gets an app for paying fares — but Apple isn’t invited

Apple continues to be locked out of China’s massive mobile payments space. The latest reminder came this week when Beijing’s transportation system opened up to smartphone payments… via an Android app.

Already Tencent’s WeChat Pay and Alibaba’s Alipay services dominate China’s mobile payment space, which is estimated to have processed $3 trillion last year, and now Apple has missed out on being part of what is sure to be a very convenient use case.

The Financial Times reports that Beijing’s public transport payments company Yikatong launched an app for ‘most’ Android devices that allows commuters to ditch their physical card and pay fares via their phone.

Apple isn’t included, most likely because its operating system doesn’t support third-party payment services like Yikatong, instead favoring its own Apple Pay. But it is also worth noting that iOS accounts for just 16 percent of all smartphones in China, according to data from Kantar as of March. Though the figure in urban areas is likely to skew in Apple’s favor, it doesn’t dominate, which may be another factor.

It’s unclear whether potential iPhone owners would go to the lengths of buying an Android device just to use the transportation app, but it’s another piece of anecdotal evidence that shows the difficulty Apple is up against in China, where revenue was down 10 percent year-on-year in its most recent quarter of business.

Apple recently removed the popular tip feature from chat app WeChat, a move that some believe might tempt its users to move over to Android, where the feature continues to exist. WeChat itself, far and away the most popular Chinese app, has ‘leveled the playing field’ in some ways by standardizing parts of the mobile experience for users whether they are on iOS or Android, the latter of which is often (far) cheaper.

That said, analysts are optimistic that the forthcoming next iPhone — which has been heavily linked with a range of new features — can sell well in China if Apple is able to differentiate it from previous models. Time will tell, but missing out on wide deployments like Chinese public transport remains a blow.

Featured Image: Fredrik Rubensson/Flickr UNDER A CC BY-SA 2.0 LICENSE (IMAGE HAS BEEN MODIFIED)

Escher Reality is building the backend for cross-platform mobile AR

The potential of mobile augmented reality is clear. Last summer Pokemon Go gave a glimpse of just how big this craze could be, as thousands of excited humans converged on parks, bus stops and other locations around the world to chase virtual monsters through the lens of their smartphones.

Apple was also watching. And this summer the company signaled its own conviction in the technology by announcing ARKit: a developer toolkit to help iOS developers build augmented reality apps. CEO Tim Cook said iOS will become the world’s biggest augmented reality platform once iOS 11 hits consumers’ devices in fall — underlining Cupertino’s expectation that big things are coming down the mobile AR pipe.

Y Combinator-backed, MIT spin-out Escher Reality’s belief in the social power of mobile AR predates both these trigger points. It’s building a cross-platform toolkit and custom backend for mobile AR developers, aiming to lower the barrier to entry to building “compelling experiences”, as the co-founders put it.

“Keep in mind this was before Pokemon Go,” says CEO Ross Finman, discussing how he and CTO Diana Hu founded the company about a year and a half ago, initially as a bit of a side project — before going all in full time last November. “Everyone thought we were crazy at that time, and now this summer it’s the summer for mobile augmented reality… ARKit has been the best thing ever for us.”

But if Apple has ARKit, and you can bet Google will be coming out with an Android equivalent in the not-too-distant future, where exactly does Escher Reality come in?

“Think of us more as the backend for augmented reality,” says Finman. “What we offer is the cross-platform, multiuser and persistent experiences — so those are three things that Apple and ARKit doesn’t do. So if you want to do any type of shared AR experience you need to connect the two different devices together — so then that’s what we offer… There’s a lot of computer vision problems associated with that.”

“Think about the problem of what ARKit doesn’t provide you,” adds Hu. “If you’ve seen a lot of the current demos outside, they’re okay-ish, you can see 3D models there, but when you start thinking longer term what does it take to create compelling AR experiences? And part of that is a lot of the tooling and a lot of the SDK are not there to provide that functionality. Because as game developers or app developers they don’t want to think about all that low level stuff and there’s a lot of really complex techs going on that we have built.

“If you think about in the future, as AR becomes a bigger movement, as the next computing platform, it will need a backend to support a lot of the networking, it will need a lot of the tools that we’re building — in order to build compelling AR experiences.”

“We will be offering Android support for now, but then we imagine Google will probably come out with something like that in the future,” adds Finman, couching that part of the business as the free bit in freemium — and one they’re therefore more than happy to hand off to Google when the time comes.

The team has put together a demo to illustrate the sorts of mobile AR gaming experiences they’re aiming to support — in which two people play the same mobile AR game, each using their own device as a paddle…

What you’re looking at here is “very low latency, custom computer vision network protocols” enabling two players to share augmented reality at the same time, as Hu explains it.

Sketching another scenario the tech could enable, Finman says it could support a version of Pokemon Go in which friends could battle each other at the same time and “see their Pokemons fight in real time”. Or allow players to locate a Gym at a “very specific location — that makes sense in the real-world”.

In essence, the team’s bet is that mobile AR — especially mobile AR gaming — gets a whole lot more interesting with support for richly interactive and multiplayer apps that work cross-platform and cross-device. So they’re building tools and a backend to support developers wanting to build apps that can connect Android users and iPhone owners in the same augmented play space.

After all, Apple especially isn’t incentivized to help support AR collaboration on Android. Which leaves room for a neutral third party to help bridge platform and hardware gaps — and smooth AR play for every mobile gamer.

The core tech is essentially knitting different SLAM maps and network connections together in an efficient way, says Finman, i.e. without the latency that would make a game unplayable, so that “it runs in real-time and is a consistent experience”. In other words, everything is tuned up for mobile processors.

“We go down to, not just even the network layer, but even to the assembly level so that we can run some of the execution instructions very efficiently and some of the image processing on the GPU for phones,” says Hu. “So on a high level it is a SLAM system, but the exact method and how we engineered it is novel for efficient mobile devices.”

“Consider ARKit as step one, we’re steps two and three,” adds Finman. “You can do multi-user experiences, but then you can also do persistent experiences — once you turn off the app, once you start it up again, all the objects that you left will be in the same location.”

“People can collaborate in AR experiences at the same time,” adds Hu. “That’s one main thing that we can really provide, that Google or Apple wouldn’t provide.”

Hardware wise, their system supports premium smartphones from the last three years. Although, looking ahead, they say they see no reason why they wouldn’t expand to support additional types of hardware — such as headsets — when/if those start gaining traction too.

“In mobile there’s a billion devices out there that can run augmented reality right now,” notes Finman. “Apple has one part of the market, Android has a larger part. That’s where you’re going to see the most adoption by developers in the short term.”

Escher Reality was founded about a year and a half ago, spun out of MIT and initially bootstrapped in Finman’s dorm room — first as a bit of a side project, before they went all in full time in November. The co-founders go back a decade or so as friends, and say they had often kicked around startup ideas and been interested in augmented reality.

Finman describes the business they’ve ended up co-founding as “really just a nice blend of both of our backgrounds”. “For me I was working on my PhD at MIT in 3D perception — it’s the same type of technology underneath,” he tells TechCrunch.

“I’ve been in industry running a lot of different teams in computer vision and data science,” adds Hu. “So a lot of experience bringing research into production and building large scale data systems with low latency.”

They now have five people working full time on the startup, and two part time. At this point the SDK is being used by a limited number of developers, with a wait-list for new sign ups. They’re aiming to open up to all comers in fall.

“We’re targeting games studios to begin with,” says Finman. “The technology can be used across many different industries but we’re going after gaming first because they are usually at the cutting edge of new technology and adoption, and then there’s a whole bunch of really smart developers that are going after interesting new projects.”

“One of the reasons why augmented reality is considered so much bigger, the shared experiences in the real world really opens up a whole lot of new capabilities and interactions and experiences that are going to improve the current thoughts of augmented reality. But really it opens up the door for so many different possibilities,” he adds.

Discussing some of the “compelling experiences” the team see coming down the mobile AR pipe, he points to three areas he reckons the technology can especially support — namely: instruction, visualization and entertainment.

“When you have to look at a piece of paper and imagine what’s in the real world — for building anything, getting direction, having distance professions, that’s all going to need shared augmented reality experiences,” he suggests.

Although, in the nearer term, consumer entertainment (and specifically gaming) is the team’s first bet for traction.

“In the entertainment space in the consumer side, you’re going to see short films — so beyond just Snapchat, it’s kind of real time special effects, that you can video and set up your own kind of movie scene,” he suggests.

Designing games in AR also presents developers with new conceptual and design challenges, of course, which in turn bring additional development work — and the toolkit is being designed to help with that.

“If you think about augmented reality there’s two new mechanics that you can work with; one is the position of the phone now matters,” notes Finman. “The second thing is… the real world become content. So like the map data, the real world, can be integrated into the game. So those are two mechanics that didn’t exist in any other medium before.

“From a developer standpoint, one added constraint with augmented reality is because it depends on the real world it’s difficult to debug… so we’ve developed tools so that you can play back logs. So then you can actually go through videos that were in the real world and interact with it in a simulated environment.”

Discussing some of the ideas and “clever mechanics” they’re seeing early developers playing with, he suggests color as one interesting area. “Thinking about the real world as content is really fascinating,” he says. “Think about color as a resource. So then you can mine color from the real world. So if you want more gold, put up more Post-It notes.”

The business model for Escher Reality’s SDK is usage based, meaning they will charge developers for usage on a sliding scale that reflects the success of their applications. It’s also offered as a Unity plug-in so the target developers can easily integrate into current dev environments.

“It’s a very similar model to Unity, which encourages a very healthy indie developer ecosystem where they’re not charging any money until you actually start making money,” says Hu. “So developers can start working on it and during development time they don’t get charged anything, even when they launch it, if they don’t have that many users they don’t get charged, it’s only when they start making money we also start making money — so in that sense a lot of the incentives align pretty well.”

The startup, which is graduating YC in the summer 2017 batch and now headed towards demo day, will be looking to raise funding so they can amp up their bandwidth to support more developers. Once they’ve got additional outside investment secured the plan is to “sign on and work with as many gaming studios as possible”, says Finman, as well as be “head down” on building the product.

“The AR space is just exploding at the moment so we need to make sure we can move fast enough to keep up with it,” he adds.

Facebook buys Ozlo to boost its conversational AI efforts

Facebook has gone ahead and purchased Charles Jolley’s conversational AI startup Ozlo. Jolley, formerly Head of Platform for Android at Facebook, will not be returning to the company. The Ozlo team is expected to join Facebook to work on natural language processing challenges.

Ozlo launched with a consumer-facing app back in October 2016. Jolley told me at the time that the conversational AI space was rapidly consolidating (Samsung had just bought Viv) and he was happy to run a service independent of the major tech giants. With today’s acquisition, Ozlo is no longer independent and the conversational AI space grows just a bit more consolidated.

In March, Ozlo launched a suite of APIs. One of the company’s key differentiators was its knowledge graph — its database of facts about the world necessary for demonstrating any sense of intelligence. Ozlo sold its knowledge layer to developers as a service.

That knowledge layer, in addition to an intent API and converse API, will be wound down in the wake of the acquisition, according to Facebook. The same will be true for the original, readily available, consumer bot.

“1.2 billion people around the world use Messenger to connect with the people and businesses they care about,” a Facebook spokesperson said in a statement. “We’re excited to welcome the Ozlo team as we build compelling experiences within Messenger that are powered by artificial intelligence and machine learning.”

It’s unclear exactly what the Ozlo team will work on at Facebook. The Ozlo knowledge graph could find a home as a backbone for Facebook M. A number of recent acquisitions by large tech companies have been aimed at increasing the scale of such information repositories. Apple recently purchased Lattice Data to help convert unstructured data into a knowledge graph that can be reasoned across to deliver relevant answers to user questions.

Facebook declined to disclose the size of its purchase of Ozlo. The startup was previously backed by AME Cloud Ventures and Greylock Partners.

Featured Image: Sean Gallup/Getty

Google opens its Nearby Connections tech to Android developers to enable smarter offline apps

Google announced today the public availability of a developer tool that will allow Android apps to better communicate with nearby devices, even while offline. The company touts a number of potential use cases for this technology – like hotel rooms that sense your entry then set the temperature accordingly and turn on your favorite music, or phones that can merge their address books while in proximity, among other things.

However, the initial implementations of the technology aren’t perhaps quite as magical. Instead, forthcoming apps will use the Nearby Connections API, as the technology is called, for things like offline media sharing or the distribution of urgent weather warnings in low-bandwidth areas, for example.

Google has been developing its Nearby Connections API for some time. The API was first announced in 2015 as a way for mobile devices to be used as second screen controllers for games that are running on your TV.

At this year’s Google I/O developer conference in May, the company announced the API was being refreshed.

(Nearby Connections API discussed above at 24:15 mark)

The technology itself leverages Wi-Fi, Bluetooth LE and Classic Bluetooth under the hood to establish connections with nearby devices, Google explains. Apps using the API can switch between these various radios when it makes sense, or even take advantage of new radios when they become available – without requiring developers to write new code to do so.

Apps can take advantage of this technology in a couple of ways.

In one scenario, a centralized device – like the host of an offline game or a teacher’s device in a classroom quiz app – could be connected to other nearby devices. Another implementation could create “mesh networks” for things like offline chat or ad-hoc project groups for real-time collaboration.
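The centralized “star” scenario above maps onto the API’s advertise/discover model. Below is a rough illustrative sketch (not from Google’s announcement) of how a host device, such as the teacher’s device in that classroom quiz example, might advertise itself to nearby students; the class name, service ID and endpoint name are hypothetical, and exact signatures may vary between Google Play services versions.

```java
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.nearby.Nearby;
import com.google.android.gms.nearby.connection.AdvertisingOptions;
import com.google.android.gms.nearby.connection.ConnectionInfo;
import com.google.android.gms.nearby.connection.ConnectionLifecycleCallback;
import com.google.android.gms.nearby.connection.ConnectionResolution;
import com.google.android.gms.nearby.connection.DiscoveryOptions;
import com.google.android.gms.nearby.connection.Strategy;

public class QuizHost {
    // Hypothetical service ID; both host and students must use the same one.
    private static final String SERVICE_ID = "com.example.quizapp";

    private final ConnectionLifecycleCallback lifecycle = new ConnectionLifecycleCallback() {
        @Override
        public void onConnectionInitiated(String endpointId, ConnectionInfo info) {
            // A student device asked to connect; the app would accept
            // (or authenticate first) before payloads can flow.
        }
        @Override
        public void onConnectionResult(String endpointId, ConnectionResolution result) { }
        @Override
        public void onDisconnected(String endpointId) { }
    };

    // The host advertises itself. Strategy.P2P_STAR gives one hub with
    // many spokes, matching the centralized-host scenario; a mesh-style
    // app would pick Strategy.P2P_CLUSTER instead.
    public void startHosting(GoogleApiClient client) {
        Nearby.Connections.startAdvertising(
                client,
                "Quiz Host",          // human-readable endpoint name
                SERVICE_ID,
                lifecycle,
                new AdvertisingOptions(Strategy.P2P_STAR));
    }
}
```

Student devices would call the mirror-image `Nearby.Connections.startDiscovery()` with the same service ID and strategy, then request a connection to any endpoint found. This snippet is framework-bound (it requires a connected GoogleApiClient on an Android device), so it is a sketch rather than a standalone runnable program.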

Google also today announced some of the apps that will be using the new API.

This includes The Weather Channel, which is using the technology to create on-demand mesh networks in data-deficient areas to spread urgent weather warnings; Hotstar is working on offline media sharing for those times connectivity isn’t available, like on airplanes or subways; and GameInsight will use the API to find nearby players and to run entire games offline.

In addition, Android TV will get a new remote control app that will use Nearby Connections to make the initial setup process easier on end users, as well to enable new second screen experiences.

The API was previously available to early partners, but is now open to all Android developers, and works across all Android devices (Jelly Bean and up) running Google Play services 11.0 and up.

Developers are now able to publish apps that use this API to Google Play, but many are holding off until Google Play services 11 rolls out to more users, Google tells us. There are several pilot apps that will be launching soon, but Google is not yet able to name them publicly, we’re also told.

Waze finally arrives on Android Auto

After a beta that kicked off earlier in 2017, crowdsourced navigation app Waze is coming to Android Auto. The Google-owned Waze seemed like a shoo-in for gaining app support for Android’s native in-car mode, yet it’s taken a while to arrive – the months-long beta suggests Waze wanted to get the experience right for drivers.

The Waze experience in Android Auto actually brings a lot of the app’s experience to your in-car display, provided you have a vehicle that supports Android Auto like the Chevrolet Cruze I tested it with earlier this week. The interface includes features like accident, delay, police and hazard reporting just like you’ll find in the mobile app, but translated to your car’s infotainment screen with native UI elements that are suited for the larger canvas.

It’s pretty easy to report delay-causing factors using the large, 7-inch center-mounted infotainment touchscreen with the Waze UI; the whole thing is built around making this possible in as few steps as possible, with UI elements like icons and menus that use large fonts, and a minimum of selection options to help minimize distraction. I used the car’s built-in 4G LTE Wi-Fi hotspot for data, but you could just as easily use the phone’s data for map updates and two-way communication with the Waze reporting service.

[Gallery: Waze on Android Auto screenshots]

The Cruze also has a steering wheel mounted voice control button, which you can hold down to ask Waze for directions. This works regardless of where you are in the Android Auto UI, provided you have Waze selected as your navigation app (the default is Maps, but a long press on the navigation icon in Android Auto easily allows you to switch it up). In my experience, voice searching worked well and returned relevant results.

Waze users can also call up their saved home and work addresses, as well as favorited destinations for easy one-touch navigation. You’ll see crowd-sourced reports of potential delays ahead on the live map as you navigate, and you’ll also receive updates about potential alternate routes that could save you time, which you can opt into on the fly.

The Waze Android Auto port is so complete that you also get its in-app location-based advertising platform, including map pins for local sponsored spots and promoted search results. That’s good news for Waze on the advertiser side, since it opens up the platform to a broader potential audience of Android Auto users. The one thing it doesn’t offer, however, is the ability to run on Android Auto in standalone mode on a smartphone – you’ll need to be plugged into a vehicle or head unit to use Waze in Android Auto itself.

Overall, in my brief usage Waze was a great navigation option on Android Auto, and one that carried over essentially everything about the mobile app that drives its high engagement with its dedicated fan base.

Google releases the final Android O developer preview

Google today launched the fourth and final developer preview of Android O, the latest version of its mobile operating system. As expected, there are no major changes in this release and, according to Google, the launch of Android O remains on track for later this summer. There’s still some time left before the official end of the summer (that’s September 22, in case you wondered), but given that Android Nougat was on a very similar schedule, I expect we’ll see a final release in August.

The final APIs for Android O arrived with the third preview release, so today’s update is all about incremental updates and stability. All of the major Android SDKs, tools and the Android Emulator will get minor version bumps in the next few days and the Android Support Library (version 26.0.0) is now considered stable, but, like before, the focus here is on making sure that developers can test their apps before the final version rolls out to users.

For users and developers, the new version of Android brings better notifications support across the OS, picture-in-picture support, autofill and more. There also are new features that are meant to optimize your phone’s battery. While none of the changes are revolutionary, Android developers should probably test their apps on Android O as soon as possible (even if they don’t plan to support the new features). To do so, they also should update to the latest version of Android Studio, Google’s IDE for writing Android apps.

The Google Play store is now also open for apps that are compiled against the latest API.

The Android O developer preview is available as an over-the-air update for regular users, too (assuming you are brave enough to run pre-release software on your phone). It’s available for Google’s Pixel, Pixel XL, Pixel C, Nexus 5X, Nexus 6P and the Nexus Player. To get it, you can enroll here.

Last year’s update, Android Nougat, now has around 11.5 percent market share in the Android ecosystem. It’s no secret that it takes the Android ecosystem quite a while to adopt new OS versions, but with a considerable number of Google’s own Pixel phones in the market now, it’s probably a good idea for developers to jump on the Android O bandwagon soon.

Alexa is coming to the Amazon app on Android, starting this week

This spring, Amazon introduced Alexa to a wider audience by making the virtual assistant a feature that could be accessed within the retailer’s main shopping app. However, that integration – which allows you to ask Alexa about news, weather, basic facts, or use the assistant’s add-on “skills,” among other things – was available only for iPhone users. This week, Alexa is arriving on Android, as well.

Amazon hasn’t made a formal announcement about the launch but, when asked, the company confirmed the integration is indeed rolling out this week.

Looks like Alexa was just added to the Amazon app for Android! Who wants to try playing Deal or No Deal from their Amazon app? 🙂

— Nick Schwab (@nickschwab) July 20, 2017

Hat tip to Nick Schwab, who noticed Alexa on Android today

It still seems a little odd for Alexa to be integrated with the Amazon shopping application, given that Amazon has a standalone Alexa app already available.

But Amazon is likely relying on its flagship app’s massive reach to market Alexa’s capabilities to a broader customer base – including those who may not quite understand yet what Alexa is or what she can do. It’s sort of like a way to try a demo of Alexa without actually having to buy an Echo or other Alexa-powered device.

Plus, given that Amazon’s app already had voice capabilities for things like checking on orders or finding products, it makes sense to simply augment those existing commands by integrating Alexa’s more powerful assistant capabilities.

As with the Alexa that ships on Echo speakers and other gadgets, the in-app version can perform a similar set of functions, including answering basic factual questions about people, places, dates, music, sports and more, or giving you an update on your daily news through the Flash Briefing feature.

Alexa can also dole out information on weather and traffic conditions, or even play music for you while you shop within the Amazon app.

The in-app Alexa also lets you control smart home devices (to an extent), or use other Alexa skills that let you do things like play a game, order an Uber, place your Starbucks pick-up order, and more.

Above: Alexa in the iOS Amazon app

It doesn’t seem like Amazon shopping app users would really need to perform these sorts of tasks from within the shopping app, but again, this feels more like an Alexa demo than an everyday use case. The idea is to get consumers familiar with Alexa, which could encourage them to purchase a hardware device, like the Echo or Echo Dot, to bring her into their home.

The Alexa feature isn’t live for all Amazon app users on Android at this time.

In fact, it seems that current Echo device owners got an early heads-up on the integration by way of the Alexa companion app. In the Alexa app, a notification appeared alerting them to a new Alexa device being automatically added to their accounts (see above tweet). The new addition, as it turned out, was the Amazon mobile app. Surprise!

The notification card also includes options to customize Alexa, or dismiss the card.

As far as we can tell, there isn’t any new Alexa functionality included with the Android launch – it’s just now becoming available to Android users in addition to iOS. Like most releases of this scale, the feature is rolling out gradually, rather than hitting all users at once.

Google brings its GIF-making Motion Stills app to Android

Google last year introduced an app called Motion Stills that aimed to help iOS users do more with their Live Photos – including being able to crop out blurry frames, stabilize images, and even turn Apple’s Live Photos format into more sharable GIFs. Today, Google says it’s bringing Motion Stills to Android, along with a few changes.

Obviously, Android users aren’t in need of a Live Photos image editing tool. Live Photos, after all, are a format Apple introduced back in 2015, allowing iPhone users to snap photos that animate with a touch.

And with the introduction of iOS 11 later this year, Apple is rolling out a number of built-in tools for editing Live Photos, further reducing the need for third-party applications for things like cropping, picking out a key photo, or applying effects – like the new loop effect that will make your Live Photos play more like a GIF.

It makes sense, then, that Google would now find a use case for some of its Motion Stills technology on its own Android platform.

The company says the Android app includes a new recording experience where everything you shoot is immediately transformed into short, sharable clips. To use this feature, you simply capture a Motion Still with a tap, like taking a photo. If that sounds a lot like Google is introducing its own take on Live Photos, well…you’d probably be right.

Another new feature called Fast Forward lets you condense a longer recording into a short clip. This works with recordings up to a minute long, and the video is processed right on your phone. You can adjust the playback speed from 1x to 8x after recording. Google details some of the technology it’s using to make this possible, including how it encodes videos with “a denser I-frame spacing to enable efficient seeking and playback;” and the use of “adaptive temporal downsampling in the linear solver and long-range stabilization.”

Or, in human speak, it’s making more stable, smoother clips you can easily share with friends, even if the original footage was super shaky.
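At its simplest, the temporal downsampling behind a feature like Fast Forward boils down to keeping every Nth frame of a recording. The sketch below is purely illustrative – Google’s actual pipeline also runs stabilization through a linear solver, which isn’t shown here – but it makes the 1x–8x speed-up idea concrete:

```python
def fast_forward(frames, speed):
    """Toy temporal downsampling: keep every `speed`-th frame.

    This only shows the frame-selection idea; a real pipeline
    would also stabilize the surviving frames.
    """
    if not 1 <= speed <= 8:
        raise ValueError("playback speed must be between 1x and 8x")
    return frames[::speed]

# A 60-second recording at 30 fps, sped up 8x, yields a 7.5-second clip.
original = list(range(60 * 30))   # 1,800 placeholder frames
clip = fast_forward(original, 8)
print(len(clip))                  # 225 frames, i.e. 7.5 seconds at 30 fps
```

Dropping frames alone would make shaky footage look even jerkier, which is why the stabilization step Google describes matters just as much as the downsampling itself.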

The company shows this off in a sped-up clip of a bike ride over a dirt path.

Meanwhile, in terms of turning regular recordings into GIFs, Google introduced new technology as well. It says it redesigned its existing iOS video processing pipeline to use a streaming approach that processes each video frame as it’s being recorded. It then stabilizes the image while performing the loop optimization over the full sequence. Again, translated, this means you can quickly make a recording and immediately get a smoothed-out GIF to share as a result.
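The streaming approach Google describes – handling each frame as it arrives instead of buffering the whole video for a second pass – can be sketched as a generator pipeline. This is a stand-in illustration, not Google’s implementation; the running-average “stabilizer” here is a deliberately crude placeholder for the real math:

```python
def stabilize(frame, state):
    """Placeholder per-frame stabilization: smooth a 1-D 'position'
    value with a running average carried between frames in `state`."""
    avg = state.get("avg", frame)
    avg = 0.8 * avg + 0.2 * frame
    state["avg"] = avg
    return avg

def streaming_pipeline(recording):
    """Process frames one at a time as they stream in, so the result
    is ready the moment recording stops -- no separate batch pass."""
    state = {}
    for frame in recording:
        yield stabilize(frame, state)

shaky = [0.0, 10.0, -10.0, 10.0]       # wildly jittering camera positions
smooth = list(streaming_pipeline(shaky))  # same length, much smaller swings
```

The payoff of streaming over batch processing is latency: by the time you lift your finger, almost every frame has already been stabilized, leaving only the final loop optimization to run over the full sequence.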

The company says the new app is meant to be a place where Google can continue to experiment with short-form video technology, and hints that some of the improvements may make their way to Google Photos in the future.

The Motion Stills app for Android is available as a free download on Google Play and works on Android 5.1 and higher.