What’s ahead for Bard: More global, more visual, more integrated
It’s been less than two months since we launched Bard, our experiment that lets you collaborate with generative AI, and I’m amazed to see the creative and imaginative ways people have interacted with it. (I, for one, have gotten some really fun ideas to help teach my 7-year-old fractions!)
Since we rolled out Bard — initially in the U.S. and the U.K. — we’ve gotten quite a bit of feedback and have adapted quickly to make your experience with it even better. We recently moved Bard to PaLM 2, a far more capable large language model, which has enabled many of our recent improvements — including advanced math and reasoning skills, as well as coding capabilities. In the past few weeks, coding has already become one of the most popular things people do with Bard.
But this early momentum is just the beginning. Today we’re introducing new ways for you to collaborate with Bard, and we’re sharing a bit more about our vision for what’s ahead.
Bringing Bard to more people
As we continue to make additional improvements and introduce new features, we want to get Bard into more people’s hands so they can try it out and share their feedback with us. So today we’re removing the waitlist and opening up Bard to over 180 countries and territories, including India — with more coming soon.
And that’s not all: Bard is now available in Japanese and Korean, and we’re on track to support 40 languages soon. As we’ve said from the beginning, large language models are still a nascent technology with known limitations. So as we further expand, we’ll continue to maintain our high standards for quality and local nuances while also ensuring we adhere to our AI Principles.
Making your interactions with Bard more visual
Coming soon, Bard will become more visual, both in its responses and in your prompts. You’ll be able to ask it things like, “What are some must-see sights in New Orleans?” — and in addition to text, you’ll get a helpful response along with rich visuals to give you a much better sense of what you’re exploring.
You’ll also be able to include images — alongside text — in your own prompts, allowing you to boost your imagination and creativity in completely new ways. To make this happen, we’re bringing the power of Google Lens right into Bard. Let’s say you want to have some fun using a photo of your dogs. You can upload it and prompt Bard to “write a funny caption about these two.” Using Google Lens, Bard will analyze the photo, detect the dogs’ breeds, and draft a few creative captions — all within seconds.
Introducing coding upgrades and export features
It’s imperative that we build Bard alongside people, because feedback is key to making it better. As part of that effort, we’re incorporating developers’ feedback into a few key coding upgrades, including the following:
- Source citations: Starting next week, we'll make citations even more precise. If Bard brings in a block of code or cites other content, just click the annotation and Bard will underline those parts of the response and link to the source.
- Dark theme: Today we’re launching Dark theme, which is another feature developers have asked for — and one we think will help make interacting with Bard a lot easier on your eyes.
- “Export” button: We've heard that developers love the export to Colab feature, so coming soon, we're adding the ability to export and run code with our partner Replit, starting with Python.
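To make that concrete, here's the kind of short, self-contained Python snippet a developer might draft with Bard (say, by prompting it to “write a function that checks whether a phrase is a palindrome”) and then export to Colab or Replit to run and refine. The prompt and code here are purely illustrative.

```python
# Illustrative example: a small script a developer might draft with Bard,
# then export to Colab or Replit to run and iterate on.

def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards,
    ignoring case, spaces and punctuation."""
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]


if __name__ == "__main__":
    for phrase in ["Never odd or even", "Hello, Bard"]:
        print(f"{phrase!r} -> {is_palindrome(phrase)}")
```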
And since people often ask Bard for a head start drafting emails and documents, today we’re launching two more Export Actions, making it easy to move Bard’s responses right into Gmail and Docs. For example, let’s say — like me — you’re a die-hard pickleball fan. You can ask Bard to write an email invitation for your new pickleball league, summarizing the rules of the game and highlighting its inclusivity of all ages and levels. Just click the “draft in Gmail” button so you can make those final tweaks before getting your pickleball league off the ground.
Connecting Bard with the services you love
Looking ahead, we’ll introduce new ways to fuel your imagination and curiosity by integrating the capabilities of Google apps and services you may already use — Docs, Drive, Gmail, Maps and others — right into the Bard experience. And of course, you’ll always be in control of your privacy settings when deciding how you want to use these tools and extensions.
Bard will also be able to tap into all kinds of services from across the web, with extensions from outside partners, to let you do things never before possible. In the coming months, we’ll integrate Adobe Firefly, Adobe’s family of creative generative AI models, into Bard so you can easily and quickly turn your own creative ideas into high-quality images, which you can then edit further or add to your designs in Adobe Express.
Let’s say I’m planning a birthday party for my 7-year-old who loves unicorns, and I want a fun image to send out with the invitations. All I have to do is ask Bard: “Make an image of a unicorn and a cake at a kids party” — and it’ll generate an image within seconds, all while adhering to Adobe’s high standards for quality and ethical responsibility.
We want Bard to be a home for your creativity, productivity and curiosity — so we’re working to connect Bard with helpful Google apps and many more partners, including Kayak, OpenTable, ZipRecruiter, Instacart, Wolfram and Khan Academy.
There’s a lot ahead for Bard — connecting tools from Google and amazing services across the web, to help you do and create anything you can imagine, through a fluid collaboration with our most capable large language models. When we combine human imagination with Bard’s generative AI capabilities, the possibilities are boundless. We can’t wait to see what you create with it.