The difference between AI and human, between thinking and processing, is getting smaller and smaller...
In 1950, Alan Turing proposed the Turing test as a way to measure a machine’s intelligence. The test pits a human against a machine in a conversation. If the judge can’t reliably tell which is which, the machine passes.
I asked ChatGPT if it passed the test and it said this:
Input: "Does chatgpt pass the turing test?"
Output:
"The Turing test is a subjective measure of a machine's ability to exhibit intelligent behavior that is indistinguishable from that of a human. While ChatGPT is a highly advanced language model that can simulate human-like conversations, it is not capable of passing the Turing test in its current form.
The Turing test involves a human evaluator interacting with both a human and a machine through a text-based interface, without knowing which one is the machine. If the evaluator is unable to reliably distinguish between the human and the machine based on their responses, then the machine is said to have passed the Turing test.
However, ChatGPT's responses are generated based on statistical patterns learned from vast amounts of text data, and it does not have true consciousness or understanding of the world in the way that humans do. While it can provide useful information and engage in interesting conversations, it is still ultimately a machine and not a human."
Answer: it depends on the definition.
"Thinking" and "processing" are concepts based on a limited number of languages by one mammal species. A common reality underlying both concepts is that they are electrical activity, which doesn't follow human semantics and definitions. So at some level (somewhere at the molecular, crystalline, or atomic level), I'd say they cannot be distinguished.
My understanding is that the AI was trained with a vast amount of data that's already out on the web somewhere. Its programmers are presumably still feeding it data, so it nominally gathers data off the web. However, it doesn't appear to google (or bing or duckduckgo) the answer and turn that into text. It looks in its database to see how the words in the query usually associate with other words. Is that thinking? I dunno.
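That "words associating with other words" idea can be sketched with a toy bigram model. To be clear, this is nothing like the transformer actually underneath ChatGPT — it's just a minimal illustration of predicting the next word from counted word pairs:

```python
from collections import Counter, defaultdict

# Toy illustration only (NOT ChatGPT's real architecture): count which
# word most often follows each word in a tiny "training" text, then
# "predict" by picking the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Most frequent follower seen in training; None if the word is unseen.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" — it follows "the" twice in this corpus
```

Scale the counting up by many orders of magnitude, replace raw counts with learned neural-network weights, and you get something closer in spirit to what the model is doing — statistics over word associations, not lookups in a search engine.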
I asked:
Input: "Do you require an internet search to provide an output?"
Output:
"As an AI language model, I do not always require an internet search to provide an output. When you ask me a question or input, I analyze the text and use my knowledge and understanding of the language to generate a response.
However, there may be times when I need to access external sources of information to provide a more accurate or complete response. In those cases, I may use the internet to search for relevant information to include in my answer.
For example, if you ask me a factual question like "What is the capital of France?", I can provide the answer based on my pre-existing knowledge. However, if you ask me a more complex question like "What are the best restaurants in Paris?", I may need to search the internet to provide a more informed answer.
In summary, while I can provide an output without an internet search, there may be cases where I need to access external information to provide the most accurate and complete response possible."
GPT-4 is supposedly much better, but I don’t know when the public will be able to get their hands on it.
Google is also developing "Bard", which is also said to be better. Maybe OpenAI wants GPT-4 to be at least as good as Bard first. I think they give us access so they can gather data on how it's used, and there will always be some version we don't have access to (freely).
And now, we're in the age of "Big Data". Machines record more data about us every day than the entire human race could ever hope to read! My phone counts every step I take. Fitbits record every heartbeat... It's only gonna get worse. AI models will add huge amounts of data about you - yes YOU!
Privacy is pretty much long gone (cameras, Alexa, Tik...), and predictive modeling is getting crazy smart!
"Big Data" is exactly what was first required to make AI as we know it today possible. Our data is collected, but individual privacy is kept. Banks have been doing this for centuries (and email servers for a few decades).
I am completely blown away by this. I fundamentally didn't think I would ever see something like this during my lifetime... I thought it was total science fiction. I am actually scared - not kidding.
Many people, including CEOs, have expressed concerns. But I say uncharted territories have been around forever, so m'eh. New things aren't new.
(Things that are new, are new, but the advent of things that are new, is not new.)