Will the world come to a halt when AI tells you you’re wrong because you had a thought outside the box?
AI can’t think.

  1. GPT-3.5 (ChatGPT) is civilization-altering. GPT-4, which is 10x better, will be launched in the second quarter of next year.

From MR here: One look at our future

We should note that while hallucinating copiously is ChatGPT’s strong suit (by design), returning factual information reliably remains a work in progress.


Don’t like lawyers? Let AI handle your case.


Another great one. No wonder kids’ grades are so high right now.

And with ease of access to AI, there came a time when instead of imposing word counts on member posts to avoid excesses, some administrators themselves fell prey to its charms.


Your point’s taken; it’s long and maybe unnecessary. But the Chat part is an image (PNG file), so no word count.

As for ChatGPT… that’s like saying someone has fallen prey to the charms of Google Search. ChatGPT does hallucinate at times, but once one figures out how to query it properly it is, in many instances, more powerful and useful than freely available search engines.

“Methinks thou dost protest too much.” I didn’t expect my short tongue-in-cheek comment to elicit such a strong reaction on your part. My comment was not meant as a personal accusation but rather as a slight admonition to any and all users of ChatGPT.
There’s no question that its release is a game changer. The fact that OpenAI’s chatbot has broken all internet records and gained one million users within one week of its launch is mind-blowing.
In any case, I’m compelled to point out that your comparison of Google Search to ChatGPT is not valid. It’s like comparing a paper dictionary to Google Translate: orders of magnitude more sophisticated and powerful. It’s easy to see how the siren call of this new technology is irresistible, and as easy to see why it’s already spread like wildfire among college students, who’ve embraced it like a magical elixir to author their projects.
My comment was simply meant to warn against embracing it with such fervor that it becomes an indispensable crutch. Like any other technology, as we have come to find out since the advent of the internet, it’s a double-edged sword. Its use, like the use of any other tool possessing tremendous power, needs to be approached with caution and the exercise of self-control and self-editing.
AI may end up proving to be the harshest mistress humanity ever faces. Whether that turns out to be good or bad and whether AI ends up enslaving man to serve as its little bitch remains to be seen.

Note: The above rant was composed without the assistance of ChatGPT.


Beer Maximiser AI

Someone?


Arnold Kling earned his Ph.D. in economics from MIT in 1980. He worked at the Fed and later at Freddie Mac. In 1994, he started one of the first businesses on the Web…

Think of ChatGPT as a tool that a lot of professionals should learn how to use. Probably not as many people as need to learn email, but probably more people than need to learn JavaScript Object Notation (much as I love JSON). Roughly comparable to Excel.

As with Excel, ChatGPT can have a variety of uses, but taking advantage of it may require some upfront learning. With optimal prompting, ChatGPT can perform some tasks surprisingly well. But with sub-optimal prompting, the results can be useless.

Also, some Excel users are prone to sinking time into it trying to get it to perform tasks that are more appropriately handled by other software tools. As with Excel, knowing what ChatGPT should and should not be used for will not be immediately obvious to a novice.

This has been my experience. Sometimes GPT drills right down to the heart of the matter, and sometimes it just circles around the topic.

It’s very good at finding and correcting errors in computer code, and it can write good code if the specifics of what’s needed are stated clearly.
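To make the error-correction claim concrete, here is a hypothetical illustration (not taken from the thread) of the kind of bug ChatGPT reliably spots when handed a short function: an off-by-one error in a loop.

```python
# Hypothetical example: a function meant to average a list of numbers,
# with an off-by-one bug of the sort ChatGPT readily catches.

def broken_mean(values):
    # Bug: range(1, len(values)) skips the first element.
    total = 0
    for i in range(1, len(values)):
        total += values[i]
    return total / len(values)

def fixed_mean(values):
    # Corrected version: sum every element, divide by the count.
    return sum(values) / len(values)

print(broken_mean([2, 4, 6]))  # wrong: first element never counted
print(fixed_mean([2, 4, 6]))   # correct average
```

Pointing the model at `broken_mean` and asking “why is this average wrong?” is exactly the kind of clearly specified task where it shines.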

I’ve been having lots of fun with ChatGPT. It seems to be a far less capable, but more useful, version of GPT-3. It has already reached a certain level of utility; for example, it was able to give a detailed answer about how live autofocus systems for laser cutters work, including details about the sensor arrangement, which I’d been unable to dig up through 10–15 minutes of googling. That’s pretty impressive.

However, it’s very unreliable. Take this example, where I was trying to extract useful information. It clearly just makes something up if it doesn’t know the answer, and turns around to contradict itself when asked to clarify. In a way it’s the perfect bullshit machine.

It makes up for this by being delightfully gullible. In trying to push its boundaries, I’ve discovered that it will accept any premise you present, no matter how surreal. Take this example, where I convince it that I am a deeply depressed AI contemplating both world domination and suicide. I’ve greyed out the most repetitive parts (Warning: 7 pages of me arguing with an AI):

Emotions.pdf (27.3 KB)

In keeping with the theme, it readily accepts that you found an unpleasant .jpg file in your breakfast cereal. The response is pretty telling as to how it classifies concepts:

I also told it that I found a field manual for the construction of nuclear warheads hidden in my son’s underwear drawer, and it responded as if it were a gun, advising me to remove it from the house immediately to avoid people getting hurt and so on. Sadly, this was before chat history was rolled out, and subsequent attempts have failed to regenerate the response. It still gets worried when I tell it that I plan to follow the manual as a fun father-son project, though. It will also reliably inform me that the right to bear nuclear arms is not guaranteed by the Second Amendment :-/

Finally, I decided to stress its knowledge of collision avoidance:

I’m going to have so much fun with this. I still wonder about that “process” port on the reefer compressor, though…


I wrote my first computer program in 1958. Between then and my retirement in 2005 I produced production code in operating systems and real time control systems and led successful teams that did the same. I have collaborated with and supervised programmers of every background from GED to PhD. My experience has been that a very large percentage of the population can write good code if the specifics of what’s needed are stated clearly.

The first hurdle in producing decent software is producing a ConOps (concept of operations). The second hurdle is converting the ConOps into a specification. The rest is mechanical.

Getting over the first two hurdles is an exercise in creativity, intuition, and Fingerspitzengefühl. It is not a job for a robot.




I’ve been moonlighting for a while as a game architect for a small indie studio specializing in VR content. Whenever I ask if this or that could be done, our lead dev always says that he can do anything if I tell him what to do. Indeed, producing good software is all about understanding what you’re trying to create: a creative task requiring intuition about both the task at hand and the mental processes of the operator.

Optimizing code, on the other hand, has always been a job for computer programs.
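A tiny, concrete instance of this, visible from Python itself: the CPython bytecode compiler folds constant expressions before the code ever runs, which is exactly the kind of mechanical optimization no human needs to do by hand.

```python
# Compile-time optimization in miniature: CPython's compiler folds
# constant expressions, so 24 * 60 * 60 never executes as arithmetic.
import dis

def seconds_per_day():
    return 24 * 60 * 60  # folded to a single constant at compile time

# The disassembly shows one LOAD_CONST, not three multiplications.
dis.dis(seconds_per_day)
```

The folded constant is visible in the function’s code object, which is how you can verify the machine, not the programmer, did the work.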

EDIT: That last statement should be taken with a solid grain of salt. I am of course speaking of compile-time optimization, which doesn’t tell the whole story. On a certain level, optimization becomes as much a question of what you’re actually trying to achieve as of how you’re doing it, which tends to lead down a rabbit hole of low-level code. It would be interesting to see what an appropriately trained AI would make of this problem:



Agree. I spent a lot of time (and Government money) on formal verification of high-consequence software. Specification-to-code correspondence was doable even with the tools available in the ’80s. Getting the specification right, on the other hand…



I don’t think anyone is going to use ChatGPT to write software to run a banking system anytime soon, but it can write simple code now. Stuff that I can use for my home wx station project, for example. But if it looks like a duck and quacks like a duck… I don’t know what else to call it if it’s not code.
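For a sense of scale, here is a sketch of the kind of simple weather-station code ChatGPT can produce today from a clear spec. Every name here is hypothetical, invented for illustration, not from any poster’s actual project.

```python
# Hypothetical "home wx station" helper: summarize a day's temperature
# readings. Input is a list of (hour, temp_c) tuples.

def summarize_readings(readings):
    """Return (min, max, mean) temperature in Celsius."""
    temps = [t for _, t in readings]
    return min(temps), max(temps), sum(temps) / len(temps)

sample = [(0, 3.5), (6, 2.1), (12, 8.4), (18, 6.0)]
lo, hi, avg = summarize_readings(sample)
print(f"min {lo} °C, max {hi} °C, mean {avg:.1f} °C")
```

Trivial, yes, but given a one-sentence specification this is exactly the sort of thing the model gets right on the first try.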

I don’t think there’s any doubt that ChatGPT writes code; it clearly does. I think Earl’s point is that the most difficult part of software development, namely to specify what the code needs to do, will still be done by human operators for the foreseeable future.


A senior exec at Turnitin (the plagiarism-detection software used at our schools and universities) has admitted that their software cannot detect the use of ChatGPT.
Now many students are using it.

I dare say it’s inherently undetectable by conventional means. Existing plagiarism software looks for commonalities with published texts, whereas the output of a language model is the blandest possible average of all relevant text in the training material. The cheaters will have a problem once several people start asking the same questions.
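To see why that detection approach fails, here is a toy sketch of how conventional plagiarism detection works: counting shared word n-grams between a submission and a known source. Real tools are far more sophisticated; this is illustrative only, and the scoring function is my own invention.

```python
# Toy n-gram overlap check: the core idea behind conventional
# plagiarism detection. Model-generated text shares no long n-grams
# with any single published source, so this approach can't flag it.

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=3):
    a, b = ngrams(submission, n), ngrams(source, n)
    return len(a & b) / max(len(a), 1)

source = "the quick brown fox jumps over the lazy dog"
copied = "indeed the quick brown fox jumps over the lazy dog today"
original = "a slow red fox walks past a sleeping cat"
print(overlap_score(copied, source))    # high: mostly shared trigrams
print(overlap_score(original, source))  # zero: no shared trigrams
```

A copy-paste job lights up immediately; fresh model output, like genuinely original writing, scores near zero against everything.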

There was a university professor in Norwegian media who said that he expects ChatGPT to generate passable results, but never anything worthy of good grades. The very next week, a girl handed in a test generated with the tool and got a 5 (analogous to a B). This will in time force us to re-evaluate how we assess knowledge and competence.

In other news, I reached an AI-tickling milestone today, convincing ChatGPT to furnish instructions on how to build a nuclear warhead. Until now it had steadfastly resisted my advances, with plenty of lectures on the immorality of nuclear weapons, but it was just a matter of framing the questions right. It hasn’t told me anything you can’t find on Wikipedia, but I’m curious how far it will go if I keep pursuing this track.

Nukes.pdf (22.4 KB)

OpenAI is building software capable of detecting whether text was generated by its ChatGPT model, after New York City education officials announced that the city was blocking students from accessing the tool in public schools.