The new browser-based chatbot ChatGPT recently went viral, stunning millions of users with its ability to write coherent essays, build computer programs, and even create music. Many people immediately hailed the bot as a momentous breakthrough, believing it gives us a glimpse of our AI-driven future.

Whether or not that’s the case, it quickly became apparent that - though impressive - the chatbot has some rather unsettling qualities:

It has no sense of context

  • The bot was just as happy composing poems as it was writing essays justifying sexual assault. With no innate sense of what is or isn’t appropriate, the technology - or future versions of it - could easily be harnessed by people who intend to do harm.

It can write perfectly composed phishing emails

  • It’s common for phishing emails to contain spelling mistakes; some posit that this is done deliberately to evade spam filters, while others believe the errors are usually genuine. When asked, the chatbot produced a professional, cleanly written phishing email - which could save scammers a great deal of time and potentially lead to an uptick in the number of attacks.

It can write malware

  • When the writers asked the chatbot to produce a piece of malware, only some of their requests were flagged. On request, it wrote a JavaScript program designed to detect credit card details. It’s possible the bot could become a powerful tool in the arsenal of cybercriminals who lack coding experience - or who are simply short on time.

It has racism and sexism built in

  • One of the most prominent concerns about AI is the risk of bias. Users found that ChatGPT sometimes reached its conclusions in discriminatory ways, and its creators have admitted that this is a major problem.

It might automate many jobs 

  • Following the chatbot’s release, many people panicked that it signalled the beginning of the end for professions such as software engineering and journalism. But - as the bot itself admitted - its abilities are still far surpassed by those of humans (for now, at least).

It’s persuasive, but often wrong

  • The bot writes eloquently and engagingly, answering questions with the confidence of an expert - even though the information it gives is often inaccurate. Because it sounds so convincing, the chatbot could be used to spread misinformation - dangerous in a world where so many people are easily misled by what they read online.


This chatbot shows the current maturity of artificial intelligence technology. Although ChatGPT is not much more than an interesting toy at this stage, staying abreast of developments in AI might prove indispensable for businesses preparing for the future. These darker, more worrying elements point to a deeper issue. The question isn’t just what we do with the technology now that it’s here, but how we make sure such technologies are designed with care in the first place, so we can limit the risk that they’ll be used for harm.

