Grumpus wrote: There's going to be no end to concerns over the application of AI for all sorts of reasons but . . .
. . . unless connected to the real world, not just a database, AI is always going to be like any other programming: junk in, junk out.
Concern should be over predictor-like applications, medical, social direction, etc.
Many of the Chats are also connected to the real world, i.e., the web, and give links (ChatGPT still doesn't, though, and its knowledge is restricted to 2021).
AI for medical diagnosis is clearly one of the more useful applications of the tech. But presumably the doctors don't just take it at its word. E.g., the AI spots something and then the doctor looks deeper for other confirmation. The AI just speeds up the initial search and saves time. The Chat AIs, when they are accurate, are similar in the time-saving aspect - or even when they are not accurate. For example, the other day I saw that Guido van Rossum, the inventor of Python, uses AI code generation because it saves him time. The fact that it doesn't necessarily get it right doesn't matter so much, as he has the knowledge to fix up the errors. But it's still quicker than working from scratch. I'm a programmer and I can vouch for the same on a lesser scale: when I'm coding in Visual Basic .NET, I'll find an example in C# and then run it through an online C#-to-VB translator. It gets bits wrong. But it saves time, and I know enough to fix it up, or I can work through the compiler errors if it's obscure.
Not so much the foibles of the programming, but more so the decisions made after application by insurance companies, educators, law enforcement and other businesses.
Recent and various reports of AI predicting that someone will develop this cancer or that, or possibly have a heart attack, may go beyond what the technology can actually do and cause inaccurate decision-making.
Loss of coverage, increased costs to the insured, or even loss of jobs, all due to some inaccurate, machine-made decision.
Big Tech uses AI in its censorship decisions. In theory there's supposed to be manual review, but their procedures are opaque and/or inconsistent, so who can tell. I'd say AI making real-world decisions, such as taking down a YouTube channel and/or demonetising it, is far more consequential than a Chat AI returning dodgy search results. But it depends on the significance of what you're searching for. The more important it is to your life, the more important it is for you to cross-check. That applies to non-AI searching as well. There is no "safe" magic bullet to any of this.
Btw, since I was last here I've played around with 10 Chat AIs (or really 8, because the 10th one is kind of broken and the 9th is just a ChatGPT clone).
What I've found is:
1. The Chat AIs are both more and less impressive than you expect. (By less impressive I mean they get some seemingly simple questions totally wrong.)
2. No one Chat is better than all others for all queries/prompts.
3. For some queries a standard search is better than all or most of the Chats.
4. For those Chats which give web references, they do not always clearly draw information from the particular link cited.
5. They can provide a link where the Chat tells you something that actually contradicts what is in the link. So the Chat kindly provided the user with the information to refute the Chat (if they were diligent enough to follow up). In the case I saw, it was obvious that the error was due to the Chat's anti-right-wing bias; you could tell from the way it was worded.
Anyway, I don't know whether that makes you feel more or less reassured!