Google is announcing a slew of new Gemini features today, this time aimed squarely at its free users. Features that were previously only available with the $20 per month Advanced plan will now be accessible to the public.
Gems
Gems are Gemini's little AI helpers that you can create for any task. You can start with pre-made ones, like Google's Career guide, or build a Gem for any purpose: handling a repeated task, say, or researching a topic with very specific prompts. Previously, the Gems feature was exclusive to Gemini Advanced users, but now Google is making it available to all users.
You'll find Gems in the sidebar, where you can easily get started with premade Gems.
Deep Research
Google, too, has a Deep Research feature, but unlike Perplexity's, it was previously behind a paywall. Now, Google is making Deep Research available for free to all users.
Deep Research is a feature in which the AI first takes time to think through your question using a reasoning model, then goes out onto the open web, collating sources and working through issues in depth, and finally presents you with a detailed report instead of the simple bullet-point answers regular AI chatbots provide.
In addition to making Deep Research free, Google is also adding its Gemini 2.0 Flash Thinking Experimental model to Deep Research. The new thinking model will help with every step of research, including planning, reasoning, analyzing, and reporting. Deep Research will be available in more than 45 languages and can be accessed from the drop-down menu in the prompt box.
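The flow described above (plan, search, collate, report) is the general shape of agentic research tools. A minimal, entirely hypothetical Python sketch of that loop, with stub functions standing in for the reasoning model and the web search (this is not Google's implementation):

```python
# Hypothetical sketch of a "deep research" loop, NOT Google's implementation:
# plan sub-questions, gather sources for each, then synthesize a report.
# plan(), search_web(), and synthesize() are stand-in stubs.

def plan(question: str) -> list[str]:
    """Stub: a reasoning model would break the question into sub-questions."""
    return [f"{question}: background", f"{question}: current state"]

def search_web(sub_question: str) -> list[str]:
    """Stub: a real agent would query the open web and collect sources."""
    return [f"source discussing '{sub_question}'"]

def synthesize(question: str, findings: dict[str, list[str]]) -> str:
    """Stub: a model would turn the collated findings into a detailed report."""
    lines = [f"Report: {question}"]
    for sub_q, sources in findings.items():
        lines.append(f"- {sub_q}: {len(sources)} source(s)")
    return "\n".join(lines)

def deep_research(question: str) -> str:
    # Plan first, then research each sub-question, then write up the result.
    findings = {sub_q: search_web(sub_q) for sub_q in plan(question)}
    return synthesize(question, findings)

print(deep_research("How do solid-state batteries work?"))
```

The point of the structure is the ordering: the expensive reasoning happens up front, and the final report is assembled only after every sub-question has sources attached.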
In materials shared with the press, Google wasn't clear about how many Deep Research queries free users get per day, though the company did promise expanded access for Gemini Advanced users.
Updates to the Gemini 2.0 Flash model
The Gemini 2.0 Flash Thinking (Experimental) model is also getting an upgrade. You can now upload files for it to use while answering prompts, and Google says it has improved the model's performance and introduced advanced reasoning capabilities.
Gemini Advanced users will also now have access to a 1 million token context window, enabling users to solve more complex problems.
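To put that 1-million-token figure in rough perspective, here is a back-of-the-envelope Python sketch. It assumes the common heuristic of about 4 characters per token for English text; actual counts depend on the model's tokenizer, which Google hasn't detailed here.

```python
# Back-of-the-envelope: what fits in a 1-million-token context window?
# ASSUMPTION: ~4 characters per token (a common rough heuristic for
# English text; real tokenization varies by model and content).

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate from character count."""
    return int(len(text) / chars_per_token)

CONTEXT_WINDOW = 1_000_000  # tokens

# A ~300-page novel is roughly 90,000 words at ~6 characters per word
# (including spaces), so about 540,000 characters.
novel_chars = 90_000 * 6
novel_tokens = estimate_tokens("x" * novel_chars)

print(f"Estimated tokens for a ~300-page novel: {novel_tokens:,}")
print(f"Novels that fit in the window: ~{CONTEXT_WINDOW // novel_tokens}")
```

Under those assumptions, the window holds on the order of several full-length books of input at once, which is what makes longer, multi-document problems tractable.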
Your Google Search history comes to Gemini
Google is adding a new experimental feature called Gemini with personalization, powered by the Gemini 2.0 Flash Thinking model, which allows Gemini to connect with your Google apps and services. The company is starting with Google Search and will expand to Photos and YouTube in the coming months.
This means Gemini will have more context about you, based on your Google Search history, but only if you choose to enable the Personalization (experimental) model from the model picker drop-down menu. Google also says the feature will only use your search history when Gemini's advanced models determine it's actually needed.
More powerful connections with Google apps
With the 2.0 Flash Thinking model, Gemini is now better able to tackle complex requests that involve multiple Google services, including Calendar, Notes, Tasks, and now Photos.
According to Google, you can use a single prompt like "check my Calendar to find that gelato place Ezra and I went to back in May, save its address to my notes, and text it to Lauren and suggest we go there" instead of jumping between multiple apps or asking Gemini three separate questions.