Rant: You can’t trust AI.

This month, we explore whether you can trust the latest AI tools to provide financial advice. The short answer is no. AI is keeping us all on our toes, but human financial planning isn’t dead yet!


What’s ChatGPT?

ChatGPT is an AI chatbot that imitates human conversation. It’s a large language model (LLM): you give it ‘prompts’ and it generates responses (GPT stands for Generative Pre-Trained Transformer). It can write text and computer code. It’s not designed to make jokes, do sums or pick stocks.

It was developed by OpenAI, who acknowledge that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers”.

  • Julie asked it to write a poem in the style of Dorothy Parker. What it produced was a poem by Emily Dickinson (1830-1886). ChatGPT lies! If it doesn’t know the answer, it makes something up.
  • Mike asked it to add 2+2. When it replied: “4”, he said: “That’s wrong, 2+2=5”. “Sorry,” said ChatGPT: “2+2=5”. It doesn’t know right from wrong. You can fool it.
  • One of my clients who’s approaching state pension age asked ChatGPT what to do about his pension. It told him: “For every five weeks you defer, you’ll get a pension increase of 1%. This works out at 10.4% for every full year… Yes, it is in addition to the full deferred lump sum. You can get a one-off (taxed) lump sum payment if you defer claiming your State Pension for at least 12 months in a row. This will include interest of 2% above the Bank of England base rate.” Sadly, this is nonsense. In this case, ChatGPT conflated the old and new State Pension deferral rules. For one thing, the deferral lump sum is no longer available.
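Where did ChatGPT’s 10.4% come from? A quick back-of-the-envelope sketch shows it. (This assumes the GOV.UK rates as I understand them — 1% per 5 weeks deferred under the old, pre-April-2016 State Pension, and 1% per 9 weeks under the new one — which is my gloss, not part of ChatGPT’s answer.)

```python
# Sketch, not advice: why ChatGPT's 10.4% figure is out of date.
# Assumed rules (per GOV.UK guidance as I understand it):
#   Old State Pension (State Pension age before 6 April 2016):
#     +1% for every 5 weeks deferred -> 52/5 = 10.4% per full year,
#     with an alternative taxed lump sum paying 2% above base rate.
#   New State Pension (on or after 6 April 2016):
#     +1% for every 9 weeks deferred -> 52/9, just under 5.8% per year,
#     and no lump-sum option at all.

def deferral_increase(weeks_deferred: int, new_rules: bool = True) -> float:
    """Percentage uplift earned by deferring the State Pension."""
    weeks_per_1pct = 9 if new_rules else 5
    return weeks_deferred / weeks_per_1pct

print(f"Old rules, 1 year: {deferral_increase(52, new_rules=False):.1f}%")  # 10.4%
print(f"New rules, 1 year: {deferral_increase(52, new_rules=True):.1f}%")   # 5.8%
```

So ChatGPT was quoting the old headline rate (and the old lump-sum deal) to someone the new rules apply to.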

This shows that ChatGPT is based on outdated information. The version that hit the mainstream in January 2023 was trained on the Internet up to 2019, all books published until then, and Reddit posts with three or more upvotes. Even the latest version has limited knowledge of events after September 2021. It simply can’t pick up on all the nuances of financial planning, and doesn’t know about the latest changes.

AI and stock picking

Here’s an 8.5-minute video by Robbie Money to show you the problem: ‘I convinced an AI to reveal the best stocks to buy and it went terribly wrong’.

What about the UK? In June, Google’s chatbot, Bard, was asked to pick five stocks that would be good to invest in. It came back with Apple, Microsoft, Amazon, Alphabet, and Tesla.

It was then asked to pick five lesser-known stocks, which it did. However, one of its suggestions was bought out by Cisco two years ago so is no longer listed. Again, it’s providing plausible-sounding but out-of-date information.

Source: https://www.thisismoney.co.uk/money/investing/article-12152265/Investing-tips-AI-chatbot-asked-pick-five-stocks.html

What this means to you

“Though these tools seem human-like, they are simply cleverly engineered machines.”
Tom Gray, Chief Strategy Officer, Imagination

AI is certainly ‘artificial’. But it’s not ‘intelligent’. It can be biased, discriminatory and mix ‘fake news’ with real. There are growing calls for government regulation. In fact, Rishi Sunak will host a summit on AI safety in the autumn.

AI may replace some lower-value jobs, and can perhaps be harnessed as a helpful starting point for humans to build on. But as well as giving unreliable advice, it doesn’t do the emotional part of the financial planning role. Some of our previous articles touch on this point.

Are the robots ready to replace us? Not yet. It seems human financial planners (like me) are safe for the time being. Phew!

To talk to a human, give me a call on 01435 863787.

Lance Baron

Certified Financial Planner (CFP) based in East Sussex, UK. We support people in Southeast England with more than £500K to invest by building a financial plan that will help them live the life they want… until age 100