Everything announced at Google I/O 2024 including Gemini AI, Project Astra, Android 15, and more

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That was, essentially, the gist of Google’s two-hour keynote: bringing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates Google announced at the event.

Gemini 1.5 Flash

Google

Google announced an all-new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs natively on-device. Google said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro for building AI-powered apps and services, while retaining features like the million-token context window that sets Gemini Pro apart from competing models. Later this year, Google will double Gemini’s context window to 2 million tokens, meaning it will be able to process 2 hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words at the same time.
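Those capacity figures imply rough per-medium token rates. As a back-of-envelope sketch (dividing the stated 2-million-token window by each stated capacity; these implied rates are inferences from the keynote numbers, not published tokenizer specifications):

```python
# Implied token rates from Google's stated 2M-token capacities.
# Rough inference from keynote figures, not official tokenizer specs.

CONTEXT_TOKENS = 2_000_000

capacities = {
    "video_hours": 2,         # "2 hours of video"
    "audio_hours": 22,        # "22 hours of audio"
    "code_lines": 60_000,     # "more than 60,000 lines of code"
    "words": 1_400_000,       # "more than 1.4 million words"
}

# Tokens consumed per unit of each medium, implied by the window size.
rates = {medium: CONTEXT_TOKENS / amount for medium, amount in capacities.items()}

for medium, rate in rates.items():
    print(f"{medium}: ~{rate:,.2f} tokens per unit")
```

By this arithmetic, an hour of video would consume roughly a million tokens, while text comes out to about 1.4 tokens per word, which is in the ballpark of common subword tokenizers.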

Project Astra

Google

Google showed off Project Astra, an early version of a universal AI-powered assistant that Demis Hassabis, CEO of Google DeepMind, described as Google’s version of an AI agent “that could be useful in everyday life.”

In a video that Google says was filmed in one take, an Astra user moves around Google’s London office holding a phone and pointing the camera at different things — a speaker, some code on a whiteboard, the view out a window — while having a natural conversation with the app about what it sees. In one of the most impressive moments in the video, Astra correctly tells the user where they left their glasses earlier, without the user ever having mentioned the glasses.

The video ends with a surprise: when the user finds the missing glasses and puts them on, we learn they have a built-in camera system and can use Project Astra to seamlessly carry on a conversation with the user — which suggests Google may be working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Photos

Google

Google Photos was already smart when it came to searching for specific photos or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you’ll be able to ask Google Photos a complex question like “Show me the best photo from every national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use your GPS information, as well as its own judgment of what is “best,” to present you with options. You can also ask Google Photos to generate captions for posting photos to social media.

Veo and Imagen 3

Google

Google’s new AI-powered media-creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora: Google said it can produce “high-quality” 1080p videos that can last “more than a minute” and can understand cinematic concepts like time-lapses.

Meanwhile, Imagen 3 is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company’s highest-quality text-to-image model, with an “amazing level of detail” for “realistic, lifelike images” and fewer artifacts — which essentially pits it against OpenAI’s DALL-E 3.

Google Search

Google

Google is making big changes to how Search fundamentally works. Most of the updates announced today — like the ability to ask really complex questions (“Find the best yoga or Pilates studios in Boston and view details on their offerings and walk times from Beacon Hill.”) and using Search to plan meals and vacations — won’t be available unless you sign up for Search Labs, the company’s platform that lets people try out beta features.

But the big new feature, which Google calls AI Overviews and which the company has been testing for a year now, is finally rolling out to millions of people in the United States. Google Search will now present AI-generated answers at the top of results by default, and the company says it will make the feature available to more than a billion users around the world by the end of the year.

Gemini on Android

Google

Google is integrating Gemini directly into Android. When Android 15 arrives later this year, Gemini will be aware of whatever app, photo, or video you have on screen, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

There were a bunch of other updates as well. Google said it will add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power an AI virtual teammate in Workspace, listen in on phone calls and detect in real time whether you’re being scammed, and much more.

Follow all the news from Google I/O 2024 live here!
