ChatGPT: The Future of AI Is Here

From what the experts are saying about ChatGPT, it is hard to understand the praise that this technology is receiving in the popular press. If, as I’ve read, it simply scours the web for matching patterns and doesn’t really know (or learn) anything, I don’t get why people are so impressed with it. As for creating “art”, that’s a joke. Riffing on the work of others may be amusing, but it’s not art.

So are many of my students.


Artificial Intelligence is a really bad term, one I suspect is used more for marketing than anything else. These programs do not have anything near intelligence. Their main characteristic is that they can learn: they gain information and, more importantly, they learn how to respond. A very large number of questions are directed at them, and they are optimised to return the responses a person would make by tuning their underlying algorithm. In the future we will have better data structures and algorithms, and they will come closer to appearing intelligent.

AI, especially when used by marketers, is often heavy on the A and light on the I.

But by the same token, I’d say you can criticize the use of the term Machine Learning. When we humans learn, that includes our ability to gradually interpolate and extrapolate so we can extend what we learned (or rather, what we remember) to new cases.

In the ML we have used in our research here (high-energy physics), one thing we have consistently been able to demonstrate is that while ML can, with a lot of effort, be made to interpolate efficiently, it is persistently bad at extrapolating. ML is a strong toolset with lots of benefits for us, but it is frankly just bad at extrapolation. There are smart tricks talented people can play to get around certain problems that arise with “vanilla” ML, but I would still consider it more “remembering” or “recognizing” than actual true “learning”.
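
To make the interpolation/extrapolation point concrete, here is a toy sketch in plain NumPy (nothing like our actual analysis code): fit a polynomial to noisy sine samples, then compare its error inside versus outside the range it was trained on.

```python
import numpy as np

# Toy illustration: fit a polynomial to noisy samples of sin(x) drawn only
# from [0, 2*pi], then test it inside and well outside that range.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

x_interp = np.linspace(0.5, 5.5, 50)              # inside the training range
x_extrap = np.linspace(3 * np.pi, 4 * np.pi, 50)  # outside the training range

print("mean error, interpolation:", np.mean(np.abs(model(x_interp) - np.sin(x_interp))))
print("mean error, extrapolation:", np.mean(np.abs(model(x_extrap) - np.sin(x_extrap))))
```

The interpolation error is tiny; the extrapolation error is enormous, because the fit only “remembers” the region it has seen.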

No doubt about that. “Intelligence” implies an understanding of the subject, not just being able to consistently produce the expected answers.

There are some research groups working on actual AI (Here’s a book describing one group’s work from 2008, which I’ve found particularly fascinating), but I have yet to see any such group develop something robust enough for commercial applications.

“Machine learning” may be a better term, in that current neural-net approaches involve “training” the software by repeatedly presenting it with data and the correct answers, and having it “learn” from that data so it can generate correct results when presented with data it wasn’t trained on.
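
That whole “present data and answers, then test on unseen data” loop is what the libraries package up for you. A minimal scikit-learn sketch (tiny digit images plus their labels, nothing remotely ChatGPT-scale):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load small handwritten-digit images (the data) and their labels (the answers).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training": repeatedly present data plus answers to a small neural net.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# "Learning" is judged on data the model never saw during training.
print("accuracy on unseen digits:", clf.score(X_test, y_test))
```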

But that’s not intelligence. That’s pattern recognition. Very useful, but an “AI” that (for example) can identify different objects in pictures still has no clue what those objects actually are or how they are used, so it fails miserably with things that don’t match its expectations (e.g., a chair shaped like a hand), whereas humans have no such problem.

I asked ChatGPT several times to write a poem in the style of certain poets, giving it a rather dark, depressing theme, and it ALWAYS ends the poem with an uplifting couplet along the lines of “there is always light at the end of the tunnel.” I like that, though it is probably a result of the way it was trained, which raises the question: is it biased by its trainers?

AI has been “the future” for close to 50 years now (I remember it being talked about in the mid-70s when I was first getting into the business). The future so far ain’t what it was cracked up to be.

I fear that we’ll know AI has arrived when we have Skynet and the Terminator, when it takes over the adult entertainment business the way that business took over the Internet, or when it’s being used by scammers.

Maybe I don’t think enough of my abilities (I do think more of them than the general populace), but this hack would be VERY simple, and it wouldn’t destroy the charger, its cable, or the phone: use a multimeter to measure the voltage or current (both, if you have two multimeters; I have three), provided you can make electrical connections to both the charger’s plug and the phone’s socket without shorting out the connectors at either end. For the phone, it would be best to use an old charge cable that’s no longer needed (or is broken at the end away from the phone); for the charger, most now have a USB-A or USB-C socket, so again you’d be sacrificing another cable. If you have a spare charger-to-phone cable, or buy a cheap one, you’ll have both connectors you need. The wires inside these cables are usually color coded, so that’s not a problem, but they are very fine, and that is a problem. The internal resistance of the ammeter function should not cause any trouble.

How little employee evaluators know. This was my problem in advancing at work throughout my career: I knew my limits and didn’t try to pretend I KNEW more than I really did. I think confidence was valued MUCH more highly than being correct, or than having ability (or knowing your ability and how to advance it). One boss I had said that if you haven’t made any mistakes, you haven’t done anything. How many people thought Galileo was wrong? He was imprisoned for being right but against the grain. I know many internationally respected ‘famous’ physicists, and they are among the most humble, versus those at conferences who try to show off or project how smart they are; I often found the latter were not. But I slept well at night knowing how I would approach my problems the next day. I hate to think how those who prove the Peter Principle correct sleep at night.

Within the article it says: “There’s still plenty of room to improve, however. Currently, ChatGPT doesn’t have much “state”—that is, it doesn’t really remember what you’re talking about from question to question.” I just opened an account, and the first capability it lists is:

Capabilities
Remembers what user said earlier in the conversation.

Give it a try and I think you’ll see what @das means—it remembers a little, sometimes. I couldn’t find where I read this, but I thought I saw that it has something like 3000-4000 characters of state, whatever that means.
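
My guess (nothing official, just how these chat interfaces are commonly described as working) is that the “state” is simply however much of the recent conversation gets fed back in along with each new question. A toy Python sketch of that idea, with ask_model() as a made-up stand-in rather than a real API:

```python
# Rough sketch of why the "memory" is limited: the model only ever sees
# whatever recent conversation fits in its input, so older turns fall off.
MAX_STATE_CHARS = 4000  # the rough figure mentioned above; real limits are in tokens

def build_prompt(history: list[str], new_question: str) -> str:
    prompt = ""
    # Keep the most recent turns that still fit in the character budget.
    for turn in reversed(history + [new_question]):
        if len(prompt) + len(turn) > MAX_STATE_CHARS:
            break
        prompt = turn + "\n" + prompt
    return prompt

def ask_model(prompt: str) -> str:
    # Made-up placeholder: a real model would generate text from the prompt.
    return "reply based only on: " + prompt.replace("\n", " / ")

history: list[str] = []
for question in ["Write a short poem about winter.", "Now make it rhyme."]:
    reply = ask_model(build_prompt(history, question))
    history += [question, reply]
    print(reply)
```

Once the conversation grows past that budget, the earliest turns simply stop being sent, which would explain the “remembers a little, sometimes” behavior.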

As you certainly know (but others here may not), you measure voltage across two points (e.g. between the charger’s power line and its ground). You measure current inline with a single line.

But in either case, all the wires you need access to are safely hidden inside the charger, the phone, or the cable. To take measurements the (relatively) easy way, you would need to strip the insulation from a charging cable and expose the conductors for the ground and power lines. You can then connect a meter across those two to measure voltage.

To measure current, you would need to cut one of the lines, then attach each of the cut ends to your meter.

This is what I mean by “hack up a cable”.

Now, you could also fashion your own break-out cable with a few USB connectors of the appropriate type, a short cable, and a small circuit board with test points where you connect your meter. But once you’re going through that much work, you’re probably better off just buying a USB power meter dongle - they’re very inexpensive.

I wouldn’t try that. Although the pins on a USB type-A connector are pretty big, the pins on a type-C or a Lightning connector are very small. It would not be easy to make a reliable connection just by sticking probes in a port. You would want a break-out cable/dongle to provide test points large enough to easily contact (or attach micro-clips to).

I would define that as “hacking up a cable”. :slight_smile:

I’ve been using multi-meters since the 60s. Back when they had a needle.

And I have NEVER used the ammeter function on any of them. Well except in some EE labs in the 70s.

Just a data point.

Actually, I have just the thing. I don’t remember where I got it, I think either Amazon or eBay. It’s a voltage supply that plugs into a USB-A port and has a USB output socket plus screw terminals. It can run in constant-voltage or constant-current mode, and the display can be switched to show current or voltage. The voltage can be adjusted from something like 1 V to 9 V or so. I think it will supply 3 A max, and that is also adjustable.

What I don’t understand about this potential use, is where will something like GPT get the source material to ‘write’ the celebrity/film/sports news? Until people have written the original articles, it won’t have appropriate training material, so surely people will always be ‘faster’?

I also wonder if we’re on the cusp of information being more restricted in some way. It’s easy to imagine news sites not wanting to share their articles with training models which are then used to take readers away, or lawyers not wanting to make new forms of contracts available.

Linus Tech Tips decided to test ChatGPT by asking it for instructions for how to build a gaming PC, and by doing only what it says. A good test, revealing the good, the bad and the ugly:

It will just be wrong. Which is just fine if you’re creating the useless click-bait articles we all see when web surfing.

You have a good point that a program like ChatGPT won’t be able to write the first article about a sports match, an election, or a celebrity divorce. But it will be able to write the tenth, or the hundredth. Often an article like that has a small amount of new information woven together with a lot of boilerplate background. There are already thousands of secondary websites where people are paid minimum wage (or less) to sling together barely accurate articles about some controversy at the World Cup. The main goal is to have some content to sell advertising against.
