Google’s Bard AI can now write and execute code to answer a question | Ars Technica

Large language models (LLMs) such as ChatGPT and Google Bard can provide decent answers to certain types of questions, but ironically, these computers are pretty bad at computing. Google has a new solution for getting language models to do simple tasks, like math, correctly: have the AI write a program. Google says that now, when you ask Bard a “computational” task like math or string manipulation, instead of showing the output of the language model, the model will write a program, execute it, and then show that program’s output to the user as the answer.
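Bard’s internal pipeline isn’t public, but the flow Google describes can be sketched roughly as below. Here, generate_program is a hypothetical placeholder for the model’s code-writing step, and running the generated code in a subprocess is just one assumed way to execute it.

```python
# Rough, assumption-heavy sketch of the "write a program, execute it, show its
# output" flow. generate_program() is a hypothetical stand-in for the LLM call;
# Bard's real implementation is not public.
import subprocess
import sys

def generate_program(prompt: str) -> str:
    # Placeholder: in Bard, the language model would produce this code from the prompt.
    return "print(sum(range(1, 101)))"

def answer_with_code(prompt: str) -> str:
    code = generate_program(prompt)          # 1. model writes a program
    result = subprocess.run(                 # 2. execute it in a separate process
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout.strip()             # 3. return the program's output as the answer

print(answer_with_code("What is the sum of the integers from 1 to 100?"))  # -> 5050
```

The real system presumably adds sandboxing, error handling, and a fallback to the plain language-model answer when the generated code fails.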

A Google blog post gives the example prompt “Reverse the word ‘Lollipop’ for me.” ChatGPT flubs this question and gives the incorrect answer “pillopoL,” because language models see the world in chunks of words, or “tokens,” and they’re just not very good at this. Here’s Bard’s example output:

[Image: Bard’s example output, including the Python code it wrote. Credit: Google]

Bard gives the correct output, “popilloL,” but what’s more interesting is that it also includes the Python code it wrote to answer the question. That’s neat for programming-minded people who want to see under the hood, but it’s probably the scariest-looking output possible for regular people, and it isn’t particularly relevant. Imagine if Gmail showed you a block of code when you just asked it to fetch an email. It’s weird. Just do the job you’re asked to do, Bard.
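The screenshot isn’t reproduced here, but for a string-reversal prompt the generated Python is presumably something close to the snippet below (a guess, not Bard’s actual code):

```python
# Guess at the kind of Python Bard displays for the "Lollipop" prompt.
word = "Lollipop"
reversed_word = word[::-1]   # slicing with a step of -1 walks the string backwards
print(reversed_word)         # -> popilloL
```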

Google likens an AI model writing a program to a human doing long division, in that both are a different mode of thinking:

This approach is inspired by a well-studied dichotomy in human intelligence, notably covered in Daniel Kahneman’s book Thinking, Fast and Slow: the separation of “System 1” and “System 2” thinking.

  • System 1 thinking is fast, intuitive, and effortless. When a jazz musician improvises on the spot or a touch-typist’s fingers fly to the right keys as they think of a word, they are using System 1 thinking.
  • By contrast, System 2 thinking is slow, deliberate, and laborious. When you do long division or learn how to play an instrument, you are using System 2.

In this analogy, LLMs can be thought of as operating purely under System 1, producing text quickly but without deep thought. This leads to some incredible capabilities, but it can also fall short in surprising ways. (Imagine trying to solve a math problem using System 1 alone: you can’t stop and do the arithmetic, you just have to spit out the first answer that comes to mind.) Traditional computation closely aligns with System 2 thinking: it’s formulaic and inflexible, but the right sequence of steps can produce impressive results, such as solutions to long division.

Google says the “code on the fly” method will also be used for questions like “What are the prime factors of 15,683,615?” and “Calculate the growth rate of my savings.” The company says, “So far, we’ve seen this method improve the accuracy of Bard’s responses to computation-based word and math problems in our internal challenge datasets by approximately 30%.” As usual, Google warns that Bard “may not get it right,” whether from misinterpreting your question or from, like all of us, writing code that doesn’t work the first time.
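For a sense of what that looks like in practice, here is a minimal, hypothetical sketch of the sort of trial-division program a model might write for the prime-factor prompt; it is illustrative only, not Bard’s actual generated code.

```python
# Illustrative sketch: a simple trial-division factorizer of the kind a model
# could generate for "What are the prime factors of 15,683,615?"
def prime_factors(n: int) -> list[int]:
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:   # pull out each prime as many times as it divides n
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:                     # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(15_683_615))  # -> [5, 151, 20773]
```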


Bard’s new coding skills are live now at bard.google.com if you want to give them a try.
