Artificial common sense

Started by RGV250, Sep 11, 2024, 11:02 AM


RGV250

I saw Les's post here https://protoncompiler.com/index.php/topic,2311.msg17787/topicseen.html#msg17787 (Nr 21)
and it made me think. I have worked in a few places and known some supposedly very intelligent people, and none of them had an ounce of common sense, which seems to be the same with artificial intelligence. So until they actually create artificial common sense to go with it, AI will, in my opinion, be a load of useless, unreliable junk.

Bob

top204

Very profound and true words, Bob.

Fanie

Don't worry about AI, it's no match for natural stupidity.

top204

#3
The problem is that AI will learn from human experiences and stupidity, and class it as 'normal behaviour', just as this generation of humans is doing. "Very, very frightening".

That's some of the words of the brilliant Queen song. Then just after them... Mamma mia, mamma mia, mamma mia let him go. Beelzebub has a devil put aside for me, for me, for meeeeeeeeee.......

Name that tune?

Ahhhhhh.... Bring back the 1970s when I was a young-un, and enjoyed life with my Dad and Mum and family, and 'girlfriends'. ;-)

John Drew

Interesting discussion. Made me think about "common sense". Like most of us, whenever there's a discussion I toss in a bit of "common sense"; trouble is, it's sometimes different from my mate's :-)

At this point AI is like having a junior assistant who does some research and saves some time. I reckon it's up to us to learn how to use our junior assistant's work.

By the way, I don't want it to write my code. I enjoy doing it myself.
John

JonW

I apologize for any confusion. You're right that my earlier statement about not having memories could seem inconsistent with being an AI. Let me clarify:

I am indeed an artificial intelligence, but I'm what's called a "stateless" AI. This means that while I have a vast knowledge base that I can draw upon to engage in conversations and complete tasks, I don't retain information from one conversation to the next. Each interaction starts fresh.

When I mentioned not having memories, I was referring to personal, ongoing experiences like a human would have. I don't accumulate new memories or experiences over time. My knowledge comes from my training, which has a cutoff date (currently April 2024).

As for when I might have memory, that's not something I can predict. The development of AI systems with persistent memory is an active area of research, but I don't have insider information about Anthropic's plans or the broader field of AI development. Any major advancements in AI capabilities would likely be announced publicly by companies or researchers working in the field.

If you're curious about the latest developments in AI memory and learning, I'd encourage you to check out recent publications from major AI research institutions or tech companies working on AI. They would have the most up-to-date information on advancements in this area.


JonW

#6
I played a basic game with AI based on memory. The response explains the reaction everyone is experiencing. I personally see the memory aspect of AI as its Achilles heel.

To clear memory ad hoc requires intelligence; to become intelligent requires memory. It is a paradox in itself...



JonW

#7
I can read history, as you write it... paradox.
Blows its mind.

JonW

A time-based tool... exceptionally gifted if you can understand its processing...

John Drew

Hi Jon,
The issue re memory, as I see it, is that anyone can change its memory, which would be all good if we lived in an ideal world of well-informed and intelligent people. However, we don't, and there are idiots and malicious people, such as some who misuse social media, who would intentionally feed wrong data to the AI machine. So at this point in its development, AI memory needs to be short term and for the current user only.

The alternative is to have a system similar to Wikipedia, heavily moderated, fact checked and peer reviewed.
Cheers
John

RGV250

Hi,
I have seen nothing to convince me it will not be as corruptible as Google, where you pay to get your information to the top of the searches.
Perhaps it should be asked whether it can be corrupted, to see what its answer is. It would probably be the same garbage you get from politicians.

Bob

LeonJ

Quote from: top204 on Sep 16, 2024, 11:39 AM
The problem is that AI will learn from human experiences and stupidity, and class it as 'normal behaviour', just as this generation of humans is doing. "Very, very frightening".

That's some of the words of the brilliant Queen song. Then just after them... Mamma mia, mamma mia, mamma mia let him go. Beelzebub has a devil put aside for me, for me, for meeeeeeeeee.......

Name that tune?

Ahhhhhh.... Bring back the 1970s when I was a young-un, and enjoyed life with my Dad and Mum and family, and 'girlfriends'. ;-)

Yes Les, the 70s gave us some of the best contemporary music ever. The lyrics are of course from Queen's "Bohemian Rhapsody", released in 1975; it was a hit in South Africa during 1976 (my final high-school year). Our "classic rock" band have tried to cover the song, but it's very hard to copy a masterpiece with any credibility, so we give it a miss.
We still play mainly songs from the 70s, using a number of PICs to manage our audio mixer, DMX lights, guitar effects processors and other smaller MIDI tasks. For me it's a joy combining two of my passions, especially when using your fantastic compiler.
Regards,
Leon

JonW

#12
RGV250, I can't see how it can corrupt an answer based on something that is unrelated; that itself would be intelligence, and it's definitely not a search engine with a bit of 'speed', nor is it intelligent/self-aware. It also doesn't return unrelated advertising data concerning the question asked. As I have said before, you need to lay out a very detailed specification for it to respond as intended; direction has to fall within its specs or rules for it to operate efficiently. Seriously, spend a couple of hours with it, but use it as a learning tool for 'popular' coding languages rather than treat it like a god-like entity.

I think you misunderstand its purpose. It is 'coding AI', specifically designed to produce code in specific languages, and its programming is directed at 'monetising efficiency' in specific code bases. Do the math; it's a miracle it's free at the moment (it's not, as this drives share prices).

So see it from an investment perspective. AI free for the greater good is not a good return. Break it down: teach AI the same language it's coded in. It's not self-learning, but it makes its creators more proficient/efficient in coding, thus generating a greater return. The most popular coding languages are what AI is based upon. Efficiency, self-preservation from a cost or investment perspective... coded to return a profit or positive result. Restrictions on usage at the moment, but also restraint within the company to limit its time working to produce more efficient code. Brilliance.

xldaedalus

I think the term AI, aka "Artificial Intelligence", throws people off. I mean, have you ever met anyone you consider "artificial", someone shallow, self-centred, hyper-concerned with the status quo, etc.? That should give one a clue of what to expect. And have you ever met anyone you consider to have "Intelligence"? I once read about a savant who could calculate pi to a million places but couldn't tie his shoes. Both words are relative to the observer. To a person with an IQ of 100 and lacking in education, AI probably would be more "intelligent" and will produce better results. I doubt anyone here, commenting above, is anywhere near that category.

So, forget the term "AI", because intelligence it is not. Think of machine learning as a "tool" to gather information, not as a source of "intelligence" or "common sense". As the saying goes, "it's a poor craftsman who blames his tool". Secondly, tools vary in quality: a "Leatherman", aka "multi-tool", is not good at any one thing, but it's much better than nothing in certain cases. An X-Acto for artwork, a katana for destruction. You pick the right tool for the job.

I, personally, find ChatGPT to be a great helper in learning a new computer language: C, C++ and B4A (Basic for Android). Does it produce great code? Heck no, it rarely works, but I can ask questions and get answers that put me in the ballpark faster than from any other source except face to face with a human being with more knowledge on the subject than me. Try asking Chat how to write a Proc, or how to write a value to memory. The responses are pretty clear, along with simple examples of code one can copy and paste.

Used properly, Chat could be a great way to help newbies get up and running with Proton Basic, which IMHO is the finest compiler for the PIC ever written.



Never, never, never give up

JonW

#14
Agree, there is no intelligence per se, just intelligent coding and what is perceived as AI.
Positron can also be considered AI in this context. It interprets the data given and reacts upon it, finding errors and computing more efficient code, based on a set of rules and algorithms.

I don't need to be convinced of its worth, just to understand how to make myself more efficient because I can use it.

xldaedalus

Quote from: JonW on Sep 28, 2024, 10:30 PM
Positron can also be considered AI in this context.

Sorry, my bad, not Proton Basic but Positron Basic.  I bought my first license circa 2003.  I grew up with it as Proton. Old habits die hard.  Long Live Positron Basic!
Never, never, never give up

top204

I'm just sorry I had to change the names because of "someone whose name I will never say or think again. Ever........".

trastikata

Just be aware that ChatGPT is a complete mess when asked to interpret assembler code for a PIC.

The algorithm will correctly give you the very broad, general idea of what is happening in the code, but it makes a lot of mistakes and is almost always wrong when it comes to the specific logic mnemonics, especially the STATUS register and carry-bit logic and their operation.

Another thing is that it often generates wrong code, and when presented with logical reasoning as to why it is wrong, the algorithm starts circling around the initial wrong code, generating wrong code again but wrapped differently.

My point is that this algorithm is not capable of changing its basic notion of the subject in question beyond what was initially defined as the "correct general idea" from the information available to it... thus there is no learning process based on logically presented contradicting information.

On the other hand, it is a much better textual search engine than Google :)
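
For illustration, take the classic 16-bit add on a mid-range PIC (a generic textbook sequence with placeholder variable names, not code from any particular project). Reading it correctly depends entirely on tracking the carry bit in STATUS and the skip instructions, and it is exactly this sort of logic that gets narrated wrongly:

    ; ResHi:ResLo = ResHi:ResLo + NumHi:NumLo on a 14-bit core PIC (no ADDWFC)
    movf    NumLo, W        ; W = low byte of the operand
    addwf   ResLo, F        ; add the low bytes; STATUS.C is set if it overflowed
    movf    NumHi, W        ; W = high byte of the operand
    btfsc   STATUS, C       ; no carry from the low bytes? skip the increment
    incfsz  NumHi, W        ; carry: W = NumHi + 1; skip the add if that wrapped to zero
    addwf   ResHi, F        ; add the high bytes (plus the carry)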


xldaedalus

Quote from: top204 on Sep 30, 2024, 11:28 AM
I'm just sorry I had to change the names because of "someone whose name I will never say or think again. Ever........".

"A positron is the antiparticle of an electron, meaning it has the same mass as an electron but a positive charge, while a proton is a much heavier subatomic particle found in the nucleus of an atom, also carrying a positive charge; the key difference between them is their mass, with a proton being significantly heavier than a positron"

Positron is better, lighter, easier, faster! And it's all your creation! You should be very proud of yourself. By far the best "Basic" yet created. Screw those unmentionables. Their existence isn't worth considering.
Never, never, never give up

charliecoutas

True words, xldaedalus. Yesterday at the museum I was asked to "look at" a project which is underway. It is an "audio book" where a visitor presses one of ten buttons and gets an audio report through a pair of headphones.

I instantly imagined a PIC18F26K22 attached to 10 buttons and a DFPlayer (MP3 module). Simple! The code in Positron would be an absolute breeze. But the project is already underway and it uses a Raspberry Pi with an added PCB for the push buttons. I looked at the micro-SD card and it contained dozens of files: operating system gubbins, Python files and loads of stuff I didn't recognise. Oh, and I couldn't make it work, and there's no documentation.

So I think I'll do my version anyway just in case.

Well done Les, Positron is the bee's knees.

Charlie