We’re going to be hearing a lot more about this and similar tools in the near future. ChatGPT
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
No mention of the elimination of most jobs. The idle masses will rely on governments to provide them with food and shelter. What could possibly go wrong?
“May…could…it’s possible”. So “maybe” a future full of joy and happiness. Technology has failed to deliver on those promises. What makes you think they aren’t lying?
Thank god KC is here to act as intermediary for us.
And I for one welcome our new computer overlords. I would also like to remind them that, as a member of the gCaptain forum, I can be useful in their pursuit of total world domination.
Input and programmed algorithms?
Oh, you mean like the media, or Facebook, TikTok, WhatsApp, Google, et al., and the old guard at Twitter, who manipulate their users to push their agenda or cater to their biases for profit? If you haven’t seen this on Netflix, it’s worth watching:
The world hasn’t realized yet how powerful ChatGPT is, and so OpenAI can still live in a kind of relative peace. I am sorry to say that will not last for long.
Stack Overflow, a software forum, has already banned ChatGPT content because it has led to an unmanageable surfeit of material. The question is whether that ban can be enforced.
From Stack Overflow:
Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.
The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.
This is a “gift article” (subscribers can give ten articles a month) so the link should work for anyone.
We often say that a person “doesn’t care about the truth,” but what we mean is that they don’t care about telling the truth. Even the most shameless liar knows at some level what the truth is — they have to, if only to avoid accidentally stating it.
AI literally doesn’t care what is true. It can emulate the style of a news article, and even some of the substance. But it cannot (yet) emulate our interest in whether that article is a reasonably faithful reflection of the real world. With the right prompt, it will just as confidently write an article about an imaginary policy as a real one.
We should note that while hallucinating copiously is ChatGPT’s strong suit (by design), returning factual information reliably remains a work in progress.
And with easy access to AI, there came a time when, instead of imposing word counts on member posts to avoid excesses, some administrators themselves fell prey to its charms.
Your point’s taken; it’s long and maybe unnecessary. But the Chat part is an image (PNG file), so no word count.
As far as ChatGPT goes… that’s like saying someone has fallen prey to the charms of Google Search. ChatGPT does hallucinate at times, but once one figures out how to query it properly it is, in many instances, more powerful and useful than freely available search engines.