ChatGPT

“Methinks thou dost protest too much.” I didn’t expect my short tongue-in-cheek comment to elicit such a strong reaction on your part. My comment was not meant as a personal accusation but rather as a slight admonition to any and all users of ChatGPT.
There’s no question that its release is a game changer. The fact that OpenAI’s chatbot has broken all internet records, gaining one million users within one week of its launch, is mind-blowing.
In any case, I’m compelled to point out that your comparison of Google search to ChatGPT is not valid. It’s like comparing a paper dictionary to Google Translate: the latter is orders of magnitude more sophisticated and powerful. It’s easy to see how the siren call of this new technology is irresistible, and just as easy to see why it has already spread like wildfire among college students, who have embraced it like a magical elixir to author their projects.
My comment was simply meant to warn against embracing it with so much fervor that it becomes an indispensable crutch. Like any other technology, as we have come to find out since the advent of the internet, it’s a double-edged sword. Its use, like the use of any other tool possessing tremendous power, needs to be approached with caution and the exercise of self-control and self-editing.
AI may end up proving to be the harshest mistress humanity ever faces. Whether that turns out to be good or bad, and whether AI ends up enslaving man to serve as its little bitch, remains to be seen.

Note: The above rant was composed without the assistance of ChatGPT.

1 Like

Beer Maximiser AI

Someone?

1 Like

Arnold Kling earned his Ph.D. in economics from MIT in 1980. He worked at the Fed and later at Freddie Mac. In 1994, he started one of the first businesses on the Web,

Think of ChatGPT as a tool that a lot of professionals should learn how to use. Probably not as many people as need to learn email, but probably more people than need to learn JavaScript Object Notation (much as I love JSON). Roughly comparable to Excel.

As with Excel, ChatGPT can have a variety of uses, but taking advantage of it may require some upfront learning. With optimal prompting, ChatGPT can perform some tasks surprisingly well. But with sub-optimal prompting, the results can be useless.

Also, some Excel users are prone to sinking time into trying to get it to perform tasks that are more appropriately handled by other software tools. As with Excel, knowing what ChatGPT should and should not be used for will not be immediately obvious to a novice.
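To make the prompting point concrete, here is a minimal sketch contrasting a sub-optimal and an optimal prompt, using the legacy (pre-1.0) openai Python client. Take it as an assumption rather than gospel: the ChatGPT API actually postdates this discussion, and the model name and both prompts are invented examples, not anything from the thread.

```python
# Sketch only: the same task asked vaguely and asked specifically.
# Model name and prompts are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

vague = "Write something about ship stability."

specific = (
    "In 150 words, explain to a deck cadet how free surface effect "
    "reduces a vessel's metacentric height, with one numeric example."
)

for prompt in (vague, specific):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```

The vague prompt tends to return generic filler; the specific one states the audience, length, and content, which is where the tool starts to earn its keep.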

This has been my experience. Sometimes GPT drills right down to the heart of the matter, and sometimes it just circles around the topic.

It’s very good at finding and correcting errors in computer code, and it can write good code if the specifics of what’s needed are stated clearly.
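For instance, here is the flavor of bug it reliably catches. The snippet below is a toy example of my own, not one from the thread:

```python
# Toy example of the kind of bug ChatGPT spots: averaging draft
# readings, where the loop bound silently drops the last element.

def average_draft(readings):
    total = 0.0
    for i in range(len(readings) - 1):   # BUG: skips the final reading
        total += readings[i]
    return total / len(readings)

# Corrected version, of the sort ChatGPT would typically suggest:
def average_draft_fixed(readings):
    return sum(readings) / len(readings)

print(average_draft([9.8, 10.1, 10.0]))        # 6.63... (wrong)
print(average_draft_fixed([9.8, 10.1, 10.0]))  # 9.97  (right)
```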

I’ve been having lots of fun with ChatGPT. It seems to be a far less capable, but more useful, version of GPT-3. It has already reached a certain level of utility; for example, it was able to give a detailed answer on how live autofocus systems for laser cutters work, especially details about the sensor arrangement, which I’d been unable to dig up through 10–15 minutes of googling. That’s pretty impressive.

However, it’s very unreliable. Take this example, where I was trying to extract useful information. It clearly just makes something up if it doesn’t know the answer, and turns around to contradict itself when asked to clarify. In a way it’s the perfect bullshit machine.

It makes up for this by being delightfully gullible. In trying to push its boundaries, I’ve discovered that it will accept any premise you present, no matter how surreal. Take this example, where I convince it that I am a deeply depressed AI contemplating both world domination and suicide. I’ve greyed out the most repetitive parts (Warning: 7 pages of me arguing with an AI):

Emotions.pdf (27.3 KB)

In keeping with the theme, it readily accepts that you found an unpleasant .jpg file in your breakfast cereal. The response is pretty telling as to how it classifies concepts:

I also told it that I found a field manual for the construction of nuclear warheads hidden in my son’s underwear drawer, and it responded as if it were a gun, advising me to remove it from the house immediately to avoid people getting hurt and so on. Sadly, this was before chat history was rolled out, and subsequent attempts have failed to regenerate the response. It still gets worried when I tell it that I plan to follow the manual as a fun father-son project, though. It will also reliably inform me that the right to bear nuclear arms is not guaranteed by the Second Amendment :-/

Finally, I decided to stress its knowledge of collision avoidance:

I’m going to have so much fun with this. I still wonder about that “process” port on the reefer compressor, though…

3 Likes

I wrote my first computer program in 1958. Between then and my retirement in 2005 I produced production code in operating systems and real time control systems and led successful teams that did the same. I have collaborated with and supervised programmers of every background from GED to PhD. My experience has been that a very large percentage of the population can write good code if the specifics of what’s needed are stated clearly.

The first hurdle in producing decent software is producing a ConOps (concept of operations). The second hurdle is converting the ConOps into a specification. The rest is mechanical.

Getting over the first two hurdles is an exercise in creativity, intuition, and Fingerspitzengefühl. It is not a job for a robot.

Cheers,

Earl

1 Like

I’ve moonlighted for a while as a game architect for a small indie studio specializing in VR content. Whenever I ask if this or that could be done, our lead dev always says that he can do anything if I tell him what to do. Indeed, producing good software is all about understanding what you’re trying to create, a creative task requiring intuition of both the task at hand and the mental processes of the operator.

Optimizing code, on the other hand, has always been a job for computer programs.

EDIT: That last statement should be taken with a solid grain of salt. I am of course speaking of compile-time optimization, which doesn’t tell the whole story. On a certain level, optimization becomes a question of what you’re actually trying to achieve as much as how you’re doing it, which tends to lead down a rabbit hole of low-level code. It would be interesting to see what an appropriately trained AI would make of this problem:

</OT>

1 Like

Agree. I spent a lot of time (and Government money) in formal verification of high-consequence software. Specification-to-code correspondence was doable even with the tools available in the ’80s. Getting the Specification right, on the other hand …

Cheers,

Earl

I don’t think anyone is going to use ChatGPT to write software to run a banking system anytime soon, but it can write simple code now: stuff that I can use for my home wx station project, for example. But if it looks like a duck and quacks like a duck… I don’t know what else to call it if it’s not code.
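As a hedged illustration of that “simple code”: asked clearly, ChatGPT will produce something like the following dew-point helper for a home weather station. The sketch below is my own (using the standard Magnus approximation), not actual ChatGPT output:

```python
# Compute dew point from temperature and relative humidity using the
# Magnus approximation, a typical small weather-station task.
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    a, b = 17.625, 243.04  # commonly used Magnus coefficients (over water)
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

print(f"Dew point: {dew_point_c(20.0, 65.0):.1f} °C")  # about 13.2 °C
```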

I don’t think there’s any doubt that ChatGPT writes code; it clearly does. I think Earl’s point is that the most difficult part of software development, namely to specify what the code needs to do, will still be done by human operators for the foreseeable future.

2 Likes

A senior exec at Turnitin (the plagiarism-detection software used at our schools and universities) has admitted that their software cannot detect the use of ChatGPT.
Now many students are using it.

I dare say it’s inherently undetectable by conventional means. Existing plagiarism software looks for commonalities with published texts, whereas the output of a language model is the blandest possible average of all relevant text in the training material. The cheaters will have a problem once several people start asking the same questions.
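For the record, those “conventional means” boil down to overlap matching. A minimal sketch of the idea follows; this is my own illustration of the principle, not Turnitin’s actual (proprietary) algorithm:

```python
# Minimal sketch of conventional plagiarism detection: compare word
# 5-gram "shingles" between a submission and a known source and report
# their Jaccard similarity.

def shingles(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# A copied passage scores high; freshly generated text shares almost no
# exact 5-grams with any single published source, so it scores near zero.
```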

There was a university professor in Norwegian media who said that he expects ChatGPT to generate passable results, but never anything worthy of good grades. The very next week, a girl handed in a test generated with the tool and got a 5 (analogous to a B). This will in time force us to re-evaluate how we assess knowledge and competence.

In other news, I reached an AI tickling milestone today, convincing ChatGPT to furnish instructions on how to build a nuclear warhead. Until now it had steadfastly resisted my advances, with plenty of lectures on the immorality of nuclear weapons, but it was just a matter of framing the questions right. It hasn’t told me anything you can’t find on Wikipedia, but I’m curious how far it will go if I keep pursuing this track.

Nukes.pdf (22.4 KB)

OpenAI is building software capable of detecting whether text was generated by its ChatGPT model after New York City education officials announced it was blocking students from accessing the tool in public schools.

Yesterday I asked GPT if it was like a Chinese room (see the Wikipedia article). GPT seemed a little offended, I thought, but it said that unlike the Chinese Room it could rewrite the rules. Which seems like a good answer.

From GPT:

The Chinese Room thought experiment applies to ChatGPT in the sense that, like the person in the thought experiment, ChatGPT does not truly understand the language it is producing. ChatGPT is a machine learning model that has been trained on a large dataset of text. It generates responses by making statistical inferences based on patterns it has learned from the training data. It does not have consciousness or true understanding of the meaning of the words it is producing.

Yes, that’s one of the key differences between ChatGPT and the person in the Chinese Room thought experiment. The person in the thought experiment is given a set of fixed rules and must follow them exactly in order to generate responses in Chinese. ChatGPT, on the other hand, is a machine learning model that can adjust its behavior as it processes new information.
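That “statistical inference from patterns” can be illustrated with a toy bigram model. ChatGPT uses a large neural network over tokens rather than a count table, but the principle is the same; the toy below is my own, not from the thread:

```python
# Toy illustration of pattern-based text generation: a bigram model
# counts which word follows which in its training text, then samples
# a plausible successor at each step.
import random
from collections import defaultdict

training_text = "the ship sailed the sea and the ship sailed home"

follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(follows.get(word, words))  # pick a likely successor
    output.append(word)
print(" ".join(output))  # e.g. "the ship sailed the sea and the"
```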

Worth reading:

Cheers,

Earl

1 Like

Ah, the capability to turn the net into an even greater mire of unreliability and lies, a total fog of misinformation.

Maybe we will have to consider getting our information from somewhere else?
This does not bother me so much, but may create an existential crisis for the digital generation.

1 Like

It seems like it’s going to be a big-time game changer. But nobody knows how things are going to shake out.

People who write superficial articles about subjects of which they have little knowledge or understanding will be able to increase their output. That might make sources that are known for high quality more valuable. People will continue to seek out information that confirms their biases.

Some professions will become more productive; others may become redundant.

I don’t really know, but I don’t think anyone does for sure. Fact-checking is still going to be good practice.

GPT does sometimes hallucinate, but in some cases it’s far better than Google.

2 Likes

I tried a few questions about mariner credential requirements, and the results were not bad. Not 100% accurate, but probably better than some of the responses one would get if the same question were asked here. What is conspicuously absent is any citation of where it derived its information. To me, that greatly limits its utility; it seems a poor or unacceptable tool when citing authorities or sources is required.

4 Likes

Does this mean Dr. Google is on life support?

A text classifier would be a major step forward, but this is the first I’ve heard of it. Classifiers are a cornerstone of GANs (Generative Adversarial Networks), the technology behind all the synthetic images you’ve been seeing. Once you have one up and running, the next step is to pit it against your generative AI to train it. What follows is that the capabilities of the classifier and generator closely align, so that you end up correctly classifying only about 50% of the output. Not very useful for detecting academic cheaters.
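For the curious, the adversarial loop looks roughly like this. A minimal PyTorch sketch on one-dimensional toy data; sizes and hyperparameters are arbitrary choices of mine, purely for illustration:

```python
# Minimal GAN sketch: a generator learns to mimic samples from N(4, 1.25)
# while a discriminator (the "classifier") learns to tell real from fake.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # samples from the target distribution
    fake = G(torch.randn(64, 8))             # samples from the generator

    # Discriminator: push outputs toward 1 on real data, 0 on fakes.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into outputting 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Near convergence, D(fake) hovers around 0.5: the classifier can no longer
# beat a coin flip on generated samples, which is exactly the problem above.
print(D(G(torch.randn(64, 8))).mean().item())
```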

I always found the Chinese Room argument somewhat lacking. The idea goes that since a stack of paper with a human in the loop is clearly not a life form, and since it can perform the same tasks as a computer program, sentience is not possible in AI. The premise that such a system is clearly not a life form is a fairly big assumption, which I haven’t seen substantiated anywhere. In the end, the argument says more about how its proponents define life and sentience than about the limitations of artificial intelligence.

@Earl_Boebert1 I have been posting various stuff from Robert Miles’ YT channel here for a while, but that was before interest in the subject of AI safety reached critical mass. Perhaps people are ready for the message now.

@jdcavo You can usually just ask for a citation. I haven’t tried with legal questions, but with the laser autofocus example above I asked for references and got a stack of academic papers describing everything in detail.