ChatGPT can invent many things, and it does so in such a credible way that it is capable of fabricating references and even URLs.
In this example you will see a striking case in which it invented a brand-new statue in Barcelona and kept assuring me it was real right up to the end, when it finally admitted its mistake.
I ask it to tell me something that few people know about Barcelona, and among other things, it tells me:
This is where I started to get suspicious… the only Statue of Liberty in Barcelona is inside the Arús Library, near the Arc de Triomf; there is nothing like that in the port, so I ask it for references:
It tells me the exact spot (where the Columbus monument stands) and gives me the URL of a Barcelona City Hall page referring to the supposedly famous “goddess of freedom”; because yes, it tells me the statue does exist, but that it is commonly known as the “Goddess of Liberty”… all very strange.
The fact is that the URL it gave me does not exist: it returns a “page not found” error, and it has never existed, since the Wayback Machine at archive.org shows no record of that page at any point in the past.
I ask it for more sources, and it invents more URLs:
None of those web addresses exist; they all return “page not found”, and archive.org cannot find any trace of them in the past either.
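If you want to run this kind of check yourself, here is a minimal Python sketch that tests whether each URL responds today and whether the Internet Archive’s Wayback Machine has ever captured it. The URLs in the list are placeholders, not the actual addresses ChatGPT gave me; the availability endpoint (https://archive.org/wayback/available) is a real public API of archive.org.

```python
import requests

# Placeholder URLs standing in for the ones ChatGPT invented
# (the real ones are not reproduced here).
urls = [
    "https://www.barcelona.cat/en/invented-goddess-of-liberty",
    "https://www.barcelona.cat/en/another-invented-page",
]

# Public availability endpoint of the Internet Archive's Wayback Machine.
WAYBACK_API = "https://archive.org/wayback/available"

for url in urls:
    # 1) Does the URL respond today?
    try:
        live = requests.head(url, allow_redirects=True, timeout=10)
        status = live.status_code  # 404 means "page not found"
    except requests.RequestException:
        status = "unreachable"

    # 2) Has archive.org ever captured a snapshot of it?
    snap = requests.get(WAYBACK_API, params={"url": url}, timeout=10).json()
    archived = bool(snap.get("archived_snapshots"))  # empty dict -> never archived

    print(f"{url} -> HTTP {status}, ever archived: {archived}")
```

A URL that returns 404 today and has no snapshot in the archive is a strong hint (though not absolute proof, since not every page gets crawled) that it never existed at all.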
I tell ChatGPT that all these sources are fake and that every one of them returns a 404 error, and it points me back to a non-existent Barcelona City Hall page where, it claims, there are even photos (all lies).
In the end it admits its mistake, but insists that no URL was invented (its word against mine, of course, since those URLs have never existed). It had confused the Columbus monument with the Statue of Liberty… unbelievable.
A clear example of why we have to tread carefully, especially when using ChatGPT as a source of information, something I always advise against.