What is ChatGPT? In this post, we discuss how this disruptive AI tool works, what you can do with it and its implications for how we find and create content online.
You may have started to hear the phrase ‘ChatGPT‘ creep into conversations recently. Since its launch in November 2022, it’s made huge waves in the media, picked up 100 million users, caused Google to issue a ‘code red’ warning and upset Nick Cave.
But what is it?
The short answer is that it’s a very clever chatbot that uses artificial intelligence (AI) to give you answers to questions — or generate content that you ask it for.
For a longer answer, let’s see what ChatGPT itself has to say:
“ChatGPT is an advanced AI language model developed by OpenAI that uses a neural network architecture to generate human-like text responses. It has been trained on a large corpus of text data and can be used for a wide range of language-related tasks, such as answering questions, generating text, and even carrying out conversations.”
Let’s unpack this a bit.
How ChatGPT works
ChatGPT basically uses two key things to generate its responses to questions: information previously generated by humans (a lot of this from the Internet) and machine learning.
The machine learning part is provided by the ‘GPT’ in ChatGPT, which stands for ‘Generative Pre-Trained Transformer.’ This can be thought of as a digital emulation of a human brain — a ‘neural network architecture‘ complete with artificial neurons and nodes.
This ‘brain’ contains a vast amount of information and the ability to provide it in a very human-like, conversational way.
GPT-3, the latest version of the GPT architecture, has been shown to perform a large variety of language processing tasks extraordinarily well — and via an easy-to-use, consumer-friendly interface. That’s why there is so much interest in ChatGPT right now.
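The transformer underneath is far more complex, but the core idea of a language model (assigning probabilities to candidate next words and then picking one) can be sketched in a few lines of Python. This is a purely illustrative toy with made-up scores, not anything resembling OpenAI's actual code:

```python
import math

def softmax(logits):
    """Convert raw model scores ('logits') into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a trained model might assign to candidate next words
# after a prompt like "The Beatles were a famous ..."
candidates = ["band", "bank", "banana"]
logits = [4.0, 1.0, 0.5]

probs = softmax(logits)
# Greedy decoding: simply pick the likeliest word. Real systems often
# sample from the distribution instead, which is why answers can vary.
next_word = candidates[probs.index(max(probs))]
```

That last point, that the model samples from a probability distribution rather than looking up a single stored fact, helps explain why ChatGPT can give different answers to the same question phrased in different ways.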
Elon Musk’s involvement in the project has arguably contributed to the publicity too; he’s one of the founders of OpenAI, a research organization based in San Francisco that describes its mission as ensuring that ‘artificial general intelligence benefits all of humanity.’
(Given how scary some of the applications of AI potentially are, let’s hope the company isn’t trolling us, and can make good on this mission statement).
Other founders of OpenAI include Sam Altman, Reid Hoffman, Peter Thiel and Jessica Livingston, who collectively funded the organization to the tune of $1 billion.
All in all, ChatGPT is a seriously advanced piece of software, with some serious people and serious money behind it.
But what can it actually do?
Key applications of ChatGPT
Because ChatGPT is effectively a powerful digital brain that has been fed a massive quantity of data, the range of applications for GPT is huge.
Some of the most useful are:
- Content generation — ChatGPT can be used to generate summaries, articles and even books.
- Knowledge retrieval — ChatGPT can be used as an alternative to Google (more on which in a moment), allowing you to ask questions on virtually anything and, depending on the topic involved, receive helpful answers to them.
- Language translation — you can use ChatGPT to translate text into multiple languages, including English, Spanish, French, German and Chinese.
- Coding — ChatGPT produces very usable code in response to a brief provided (and in lots of computer languages too).
- Chatbot service provision — because ChatGPT provides access to an API (application programming interface), online businesses can use it as the engine that drives their own chatbots. Customer support queries, sales enquiries, healthcare questions and more can all be fielded on third-party sites via ChatGPT; and companies like Shopify and Microsoft are already integrating it into their platforms in various ways.
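As an illustration of that last point, here is a minimal sketch of the payload a third-party support bot might assemble for the chat API. The model name and the commented-out client call reflect OpenAI's Python library at the time of writing, and should be treated as assumptions to verify against the current API documentation:

```python
def build_chat_request(user_question: str) -> dict:
    """Assemble the payload a third-party chatbot would send to the chat
    completions API. The 'system' message steers the bot's behaviour."""
    return {
        "model": "gpt-3.5-turbo",  # assumed model name; check OpenAI's docs
        "messages": [
            {"role": "system",
             "content": "You are a polite customer-support assistant for an online store."},
            {"role": "user", "content": user_question},
        ],
    }

request = build_chat_request("Where is my order?")
# With the official OpenAI Python client, this payload would be sent roughly as:
#   response = openai.ChatCompletion.create(**request)
#   reply = response["choices"][0]["message"]["content"]
```

The appeal for businesses is that the ‘system’ message lets them shape the bot’s tone and scope without training a model of their own.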
In essence, ChatGPT gives you a huge amount of knowledge on a plate, and ways to make it available to other digital services.
But how accurate is it?
How accurate is ChatGPT?
Despite being a computer program, ChatGPT relies on humans for its ‘training data’ — the stuff that people publish on the web and the conversations that they have with each other online (in forums, Twitter etc.).
So the question of how accurate ChatGPT is largely relies on how ‘accurate’ humans are. That, of course, becomes a very subjective question, especially when you get into the area of politics or the arts. How ‘right’ is a certain policy? How ‘good’ is a particular movie? Humans have diverse opinions and a sometimes loose relationship with facts (to put it mildly).
The other problem in determining how accurate ChatGPT is involves dataset transparency: we don’t know exactly what content was used to train it. If you ask ChatGPT to reveal the sources of its training data, you’ll get quite a vague answer — when I posed this question, I was told that “some of the sources include news articles, web pages, forums, and social media.”
However, we can be confident that the amount of data that ChatGPT was trained on is huge, and this should — in theory — boost the likelihood of getting an accurate answer to a query.
Based on my own tests of the tool, I would say that it provides broadly accurate answers to questions.
But I also found that on quite a few occasions ChatGPT disagreed with itself: for example, when I asked “What was the Beatles’ final number one single in the UK?”, I got different results, depending on how I phrased the question (depending on the wording and context, ChatGPT ventured that it was “Let It Be,” “Get Back” and “The Long and Winding Road.”).
The key takeaway here is that ultimately ChatGPT is relying on the wisdom of crowds to provide answers, and it is best to view any information it gives you through that prism. For now at least, factchecking any content that it provides before publishing it anywhere else is vital.
The one thing that we can definitively say regarding accuracy is that ChatGPT is currently relying on slightly out-of-date information to provide answers to queries. This is because most of the training data for the tool was collected up until 2021 — so events that happened after that year won’t be reflected in the information that ChatGPT provides to its users.
(If you ask it about events that happened after 2021, you may get nonsensical answers, or none at all).
The implications of ChatGPT for Google
Nobody really foresaw the MP3 replacing the CD (the music industry certainly didn’t); similarly, streaming killing off the MP3 quite so swiftly came as a surprise to many.
The arrival of ChatGPT feels like a similarly disruptive moment for search engines. To continue the musical analogy, right now Google — despite its status as the search engine king — could be viewed as the MP3, and ChatGPT as Spotify. Hence Google’s recent alarm at the arrival of ChatGPT on the scene.
Google is right to be worried: ChatGPT certainly makes doing a lot of things easier than a traditional search engine. You ask it a question and you don’t get a long list of links to evaluate — you get a simple and concrete answer.
The flipside of course is that with ChatGPT — for now at least — you miss out on two key things that Google provides: transparency and diversity of results.
When you’re presented with search results in Google, you can see who’s providing the information — the website, author etc. And you also get a large variety of sources to evaluate. You can make your own mind up regarding which answer to trust and why.
Let’s see how this works in practice. If you Google ‘What was the Beatles’ last UK number one?’ you get the following set of results.
As you can see, there’s a ‘featured snippet’ at the top which basically gives you the answer — but there’s also a lot of chart history lists to look at. If you don’t trust the answer you’re given in the snippet, you can look at all the other data provided and make your own mind up.
Google also provides links to some related questions that can help you expand your search on the topic, and perhaps gain additional insights into it.
By contrast, ChatGPT simply gives you an answer right away — with no additional context or sources, but no distractions either.
(And as we saw earlier with regard to that Beatles question, the answer can be wrong.)
Whether the diversity and transparency of results will be valued more than the convenience of a quick, concrete answer is debatable — but the human enthusiasm for a shortcut suggests that the latter approach may win out.
That said, when weighing up the threat posed by ChatGPT to Google, it’s important to remember that for now, ChatGPT is very much the ‘David’ in a David vs Goliath situation. While ChatGPT’s parent company, OpenAI, is valued in the tens of billions of dollars, Google is valued at $1.3 trillion.
And it’s not as though Google has been resting on its laurels when it comes to machine learning either: its search engine algorithm is built around it, and it has been developing its own AI conversation technology too — LaMDA.
But ChatGPT’s growth has been undeniably impressive: it gained a million users in just one week, and has acquired 100 million since its launch in November 2022.
These sorts of statistics have spooked Google, and effectively forced the search engine giant into launching its own AI chatbot offering — ‘Google Bard’ — sooner than it had planned (which possibly explains why the announcement and demo of this tool didn’t quite go to plan).
Regardless of the timings involved, if we see a transformation of Google from a ‘links provider’ into a supplier of ‘digested answers’ and AI-generated content, this could have huge implications for content creators.
Let’s look at why.
The implications of ChatGPT and other AI chatbots for content creators
There are two obvious ways that ChatGPT-style tools — ‘large language models’ — are likely to affect content creation.
First, they have the potential to make things a lot cheaper for publications, because they reduce the need for writers, translators and even coders. Ask ChatGPT to write something and it will usually do a pretty good job.
You’ll need to factcheck or edit the output a bit, but used judiciously, AI bots like ChatGPT can create very usable content. And, given that machine learning is involved in the process, the quality of the output is likely to improve over time.
So it’s not hard to foresee a situation when publications use AI tools like ChatGPT as the starting point for content creation, rather than journalists (in fact, some have already started doing this).
Second, this technology has the potential to significantly speed up the production of content. Ask ChatGPT for an article and it will deliver one instantaneously.
And even if content producers don’t rely on AI tools to write entire articles, there are still ways that they can be used to improve the content production process.
For example, writers can use ChatGPT for research purposes or to improve the clarity of their work (you can use it to perform checks on grammar, or rewrite badly worded paragraphs).
It can also be used to produce content outlines — ask it to give you a structure for a blog post, for example, and it will give you a comprehensive list of all the key topics to cover, plus guidelines on how to cover them.
But will ChatGPT eat itself?
Although ChatGPT has the potential to be incredibly helpful to content creators, it also has the potential to take away the reason anyone would want to produce content in the first place.
The reason for this involves content visibility: if ChatGPT becomes the de facto way that web users search for information, people won’t see a creator’s content, but a digest of it.
And when web content becomes hidden, the things that typically fund its production (adverts and the opportunity to sell products) become invisible too. The upshot being that a key incentive for writers to produce content — money — could be taken away by ChatGPT.
So here’s where things get pretty funky: if ChatGPT becomes a huge success, content creators could stand to lose out financially, and produce less content. ChatGPT then has less content to learn from — and becomes a much less useful tool.
Granted, it would still have a lot of data sources to learn from — but a tool that learns from the wisdom of crowds might soon be learning from a much smaller (or more out of date) crowd.
And, if ChatGPT is used to write everybody’s content in the first place, it may end up (to a degree) in some sort of doom loop, where it starts offering users digests of content that it itself has written. Plagiarising itself, in other words.
And speaking of which…
ChatGPT and the plagiarism question
A lot of attention has been focussed on the potential of ChatGPT to encourage plagiarism — so much so that universities are considering replacing essay-writing assignments with oral and written exams.
There’s no question that ChatGPT has the potential to encourage shortcuts and cheating, and as a former lazy student myself, I can say without hesitation that educational institutions are right to be wary of it.
However, you could argue that ChatGPT is itself guilty of plagiarism — and on an industrial scale, too.
As I discussed above, ChatGPT doesn’t show content creators’ work, but spits out a digest of it. In effect, it uses other people’s work (millions of people’s work!) to formulate its answers or produce new content. Many writers have raised ethical concerns about this.
For example, at Style Factory, a key part of what we do involves reviewing ecommerce platforms. This involves a huge quantity of research every month to ensure that the content we produce about this topic is accurate, fair and in-depth. Our reviews are then monetized via advertising.
While testing ChatGPT, I asked the bot to write me a review of Shopify (one of the key platforms we write about). The content that it produced looked extremely similar to ours; it was phrased differently, but essentially made the same key points, and I’m fairly confident that at least some of our articles (which tend to rank highly in search results) informed ChatGPT’s output.
(Other writers that I’ve spoken to have described experiencing a similar sense of déjà vu when asking ChatGPT to produce content about topics they write about.)
The limitations of ChatGPT
So far we’ve looked at ChatGPT through the prism of it being a powerful, all-conquering new AI tool.
However, it does have its limitations, and there are a few things to be aware of when using it, especially:
- Bias: ChatGPT can have biases based on the data it is trained on (and because the data in question was generated by humans, the chance of this is very high!).
- Reliability as a source: the quality of the answers provided by ChatGPT is dependent on the quality of the data it is trained on. And the Internet is a bit of a wild west as far as that goes — the tool might be using a peer reviewed study as the basis of its response, but it could also be using a Twitter spat (we just don’t know!).
- Lack of access to current data: as discussed above, ChatGPT has been trained on a dataset that only extends to 2021.
- Lack of contextual understanding: ChatGPT lacks a ‘general’ understanding of the world; this means that it can be difficult for it to answer questions that require ‘common sense’ or contextual knowledge. Sometimes the chatbot just doesn’t ‘get’ what you’re asking it to do.
- No rich media: currently, the content produced by ChatGPT is text-based only — it doesn’t output audio, video or images.
However, because of the machine learning aspect, over time you can expect these limitations to become less of an issue (with the possible exception of bias).
So what next for ChatGPT and the rest of us?
Although the machine learning technology behind ChatGPT has been deployed in various ways before, its release in a consumer-friendly, chatbot format has come as a shock. It’s laid bare just how far AI has progressed — and how powerful it is.
It may make 2023 the year that the tech giants start to roll out machine learning on a much bigger scale than ever before — turning a ‘background activity’ into a tool that gives consumers direct access to AI.
It’s not hard to imagine Google starting to provide ‘hybrid’ answers to questions soon — ChatGPT-style ones, alongside a list of search results (and, of course, ads). And it looks like the Bing search engine, with its investment in ChatGPT, is also heading down that route.
A paid-for version of ChatGPT is on the way, too — one that will provide faster answers to questions and priority access to the tool (when there’s high demand, access to the free version of ChatGPT isn’t always available).
Regardless of how the tech giants decide to roll out this sort of technology, and which company ends up dominant in providing ‘consumer-facing’ AI, a Pandora’s box has been opened, and the consequences for content creators and consumers are likely to be enormous.
This technology raises huge question marks for anyone currently working in the knowledge economy — will we need writers to create fact-based content any more? Or computer programmers? Or translators?
It also poses questions for those working in SEO, and software companies like Ahrefs and Semrush that have built their entire business models around the dominance of Google.
And if ChatGPT-style interfaces — ones that don’t reveal their data sources — become the norm, can we really trust the answers they provide or content they create? For all Google’s faults, at least it shows you where it’s getting its answers from (for now at least).
And what does this mean for education and academia? Will this sort of technology aid independent thinking — by providing easier access than ever to a huge knowledge bank — or will it stifle it, by taking away the need to refer to multiple sources of data?
We are entering strange new digital waters; ChatGPT represents just a tip of an AI iceberg, and only hints at what is to come.
What is ChatGPT?
ChatGPT is a chatbot (or in technical terms a ‘large language model’) that can hold a conversation with you, or provide content that you request — articles, computer code and other text-based content. Using data largely sourced from the World Wide Web, and a process called machine learning, it has been trained to understand and respond to questions and statements in a human-like way.
What does ChatGPT stand for?
The ‘GPT’ in ChatGPT stands for ‘Generative Pre-Trained Transformer’ — a ‘neural network architecture,’ which in simple terms can be thought of as a digital emulation of a human brain.
Who created ChatGPT?
ChatGPT was created by OpenAI, an artificial intelligence company based in San Francisco, USA. The founders of this organization include Sam Altman, Jessica Livingston, Elon Musk and Peter Thiel.
When was ChatGPT released?
ChatGPT was launched as a prototype on 30 November 2022 by its parent company, OpenAI.
Is ChatGPT free?
ChatGPT is currently available for free, although depending on demand, the free version may run slowly or be limited to a certain number of users. A paid version, ChatGPT Plus, is coming soon.
How do I use ChatGPT?
To use ChatGPT, you need to sign up for a free account on the ChatGPT website. Thereafter, it’s a case of using a text-based interface to ask it questions or request content from it.
What can I use ChatGPT for?
ChatGPT can be used in a similar way to a search engine — i.e., as a way to get responses to queries. However, it can also be used to create entirely new content — article outlines, blog posts, computer code and more — and you can use it to translate content between languages too.
Where does ChatGPT get its information?
ChatGPT gets its information from the web. The main sources — which are detailed in OpenAI research papers — include Common Crawl web data, Reddit posts, Internet-based book collections and Wikipedia.
Is ChatGPT accurate?
How accurate ChatGPT is depends on the data that it is trained on. Currently this is data provided by humans (via the web / text content supplied to ChatGPT by its human AI trainers). As such the accuracy of ChatGPT is currently a ‘wisdom of crowds’ affair, with some bias and factual errors inevitably creeping into the answers it provides and the content it generates. It’s important to note that ChatGPT’s training dataset only contains information up to 2021 — so if you ask it about events that took place after that year, it may not answer correctly.
What are ChatGPT’s limitations?
Although ChatGPT is an AI technology, it nonetheless makes use of data generated by humans — so it can exhibit biased behavior. Additionally, its data set only goes up as far as 2021, and it doesn’t yet output any rich media or web results to enhance or support its answers to queries.
What does ChatGPT mean for SEO and the search engine business?
ChatGPT is likely to have big consequences for the SEO industry, because it may displace Google as a key way of retrieving information, or encourage the company to create a ChatGPT-style tool of its own. Either way it means that the whole business of optimizing content so that it has maximum online visibility will change, along with the processes involved for doing so.
Will ChatGPT replace Google?
It’s unlikely that in the short term ChatGPT will replace Google. This is because Google is a much bigger company than ChatGPT’s parent company (OpenAI) and has the resources and the technology to invest in AI on a much larger scale. However, what ChatGPT is definitely doing is influencing Google — as the hurried announcement of its own chatbot, ‘Bard,’ suggests, Google is going to start incorporating ChatGPT-style features into its search engine, and quickly. ChatGPT is also likely to take market share away from Google, whether that’s due to its incorporation into the Bing search engine, or some consumers switching to ChatGPT as a means of retrieving answers to questions.
Has ChatGPT passed the Turing Test?
If you ask ChatGPT whether it’s passed the Turing Test (a practical test to determine if a computer can pass as a human in a conversation with a real person), it will reply that it hasn’t undergone it. But according to Buzzfeed data scientist Max Woolf, ChatGPT has passed the Turing Test by producing responses that were indistinguishable from human conversation to a panel of judges.
Such a helpful, clear article on this Chris. Really appreciate your insights on it.
Cheers Vanessa! Glad you found it useful.
As a freelance SEO guy I am glad that for now, at least Chat GPT can’t help with queries like “Find me a handyman in North West London”.
What it can do is come up with answers to queries like “Name 20 factors you should consider when booking a handyman”. Of the factors that came up my client had not adequately addressed at least half of them. So we added these in using long-form, well-researched keyword-rich text, images and videos. Hopefully, our efforts will be rewarded with higher rankings and more clicks.
Thanks Mike — yes, for now ChatGPT isn’t going to be that helpful when it comes to providing recommendations for particular services. But as you point out, it’s great for sparking ideas for improving the SEO of websites offering them.