Do we actually have "AI" right now?

Everyone is talking about how "AI" is going to kill X, Y, and Z careers; however, current "AI" offerings such as ChatGPT, Bard, and Bing still have zero emotional intelligence.

They're basically just trained language models with no ability to process emotions, which makes many of the decisions they could make fundamentally flawed.

None of them can truly understand and process things such as identity, gender, bias, complex situational circumstances, etc.

So, that presents the question:

Is what we have right now truly AI, or are these just large databases with trained language models advertised as AI?

/discuss


Comments

  • MumblyMumbly Member
    edited April 2023

    ChatGPT:

    As an AI language model, I am not sentient and do not possess consciousness, emotions or self-awareness. I operate on algorithms and mathematical models that have been trained on large amounts of data. However, I am capable of understanding natural language and generating responses based on the patterns and information that I have been trained on. So while I am not a true AI in the sense of being a conscious being, I am a powerful tool for language processing and can assist in a wide range of tasks.

  • It all depends on how you define AI. For me, I do not really consider data-driven models real “intelligence”. The same applies to recent LLMs: they “remember” a huge amount of text data and reassemble them to give a somewhat reasonable or unreasonable response, but they do not really “understand” the text itself.

    While I understand the current form of AI does boost productivity to some extent, and it's probably the best we can achieve, I'm afraid we are on the wrong track if our ultimate goal is to get real intelligence.

    TBH, I do not expect to see real AI in my lifetime. We are just too far from it. Hopefully I'm wrong.

    Thanked by 1titaniumboy
  • FranciscoFrancisco Top Host, Host Rep, Veteran


    if () { } else { }

    Francisco

    Thanked by 3stefeman kait lukehebb
  • @Kousaka said:
    It all depends on how you define AI. For me, I do not really consider data-driven models real “intelligence”. The same applies to recent LLMs: they “remember” a huge amount of text data and reassemble them to give a somewhat reasonable or unreasonable response, but they do not really “understand” the text itself.

    While I understand the current form of AI does boost productivity to some extent, and it's probably the best we can achieve, I'm afraid we are on the wrong track if our ultimate goal is to get real intelligence.

    TBH, I do not expect to see real AI in my lifetime. We are just too far from it. Hopefully I'm wrong.

    Agreed on the first part... but I'm in my mid-20s -- based on Moore's law (using it simply as a proxy for the evolution of technology), I find it hard to believe we won't have truly conscious AI before I die (but... I do have a tendency to make poor decisions).

  • When most people think of AI, they are thinking of AGI ("Artificial General Intelligence").

    We have not yet developed AGI, and it remains a goal likely many years out of reach from what we have today. OpenAI is working on getting ChatGPT to that point, but even the current 4.0 version leaves much to be desired.

    Most AI deployments today are various types of narrow AI that are built to focus on specific tasks or purposes.

    Thanked by 1emgh
  • dosaidosai Member

    Penus

  • emghemgh Member

    @organicallyblue said:
    When most people think of AI, they are thinking of AGI ("Artificial General Intelligence").

    We have not yet developed AGI, and it remains a goal likely many years out of reach from what we have today. OpenAI is working on getting ChatGPT to that point, but even the current 4.0 version leaves much to be desired.

    Most AI deployments today are various types of narrow AI that are built to focus on specific tasks or purposes.

    I agree, but GPT-4 is miles better than 3.5, and to be fair, it was released quickly after GPT-3.

    I highly recommend (not to you specifically but to anyone here that’s interested):

    Sam talks a lot about shipping improvements that are genuine, noticeable, and for the better, without being afraid of launching just because something isn't yet perfect.

    Very interesting guy, highly intelligent.

  • emghemgh Member

    By the way, in the AGI segment Sam says that it's ”definitely not an AGI” because it ”doesn't feel that close”.

  • jarjar Patron Provider, Top Host, Veteran

    I think some of us have too high of expectations for what we're willing to call artificial intelligence. I see a lot of people downplaying it like "it's just machine learning" and stuff. I get where they're coming from. But I think we're too quick to forget what little it is about us that makes us who we are: Our intelligence is nothing more than the sum of the data we've ingested since birth.

    Thanked by 1emgh
  • emghemgh Member
    edited April 2023

    @jar said:
    I think some of us have too high of expectations for what we're willing to call artificial intelligence. I see a lot of people downplaying it like "it's just machine learning" and stuff. I get where they're coming from. But I think we're too quick to forget what little it is about us that makes us who we are: Our intelligence is nothing more than the sum of the data we've ingested since birth.

    I agree about people dismissing ChatGPT too quickly, but not about there being little that makes us who we are.

    ”Our intelligence is nothing more than the sum of the data we've ingested since birth”:

    It kind of is this, though; that's what makes us human.

    Two people growing up in the exact same way, going to the exact same school, having the exact same friends (not that it's possible, but say it were possible to replicate two upbringings 100%) - they wouldn't think the same, or even know the same things.

    Thanked by 1jar
  • rm_rm_ IPv6 Advocate, Veteran
    edited April 2023

    Go ahead, send it some PHP or Python code (or any language you prefer) and ask it to explain what the code does, to rewrite it, optimize it, remodel it to perform a different task, etc. The results that I see so far are nothing less than stunning. It is unthinkable (sic!) how any of this is even possible. How a "language model", or as some people like to diminish it, an autocomplete on steroids, can do any of that.

    Thanked by 3jar eva2000 Shazan
  • jarjar Patron Provider, Top Host, Veteran

    @emgh said: Two people growing up in the exact same way, going to the exact same school, having the exact same friends (not that it's possible, but say it were possible to replicate two upbringings 100%) - they wouldn't think the same, or even know the same things.

    True. I mean, minor biological differences and a different home life can contribute a great deal to differences. But I don't think we're ever going to replicate something like "this AI considers this a lower priority than that other AI, because it has a genetic iron deficiency that causes a chain reaction in how it processes information over time", so I do think we are actually looking at what real AI is going to look like. I think over time it'll get better along the same lines, and more streamlined due to improvements in processing, and we'll eventually come to accept that GPT-3 was the catalyst.

    Thanked by 1emgh
  • @rm_ said:
    Go ahead, send it some PHP or Python code (or any language you prefer), ask what it does, ask to rewrite it, optimize it, etc. The results that I see so far are nothing less than stunning. It is unthinkable (sic!) how any of this is even possible.

    Sure, but that doesn't use any human thought. Emotions are human, and humans can program, but programs can't emotion. A computer knows best how to be a computer.

  • rm_rm_ IPv6 Advocate, Veteran

    @SirFoxy said: Emotions are human, and humans can program, but programs can't emotion.

    Emotions are not intelligence and emotional intelligence is not intelligence either.

  • emghemgh Member

    A friend of mine built a very simple yet useful WordPress plugin with GPT-3 in like 25-30 minutes, and it was released and approved in the Plugin Directory.

    Thanked by 2jar eva2000
  • jarjar Patron Provider, Top Host, Veteran
    edited April 2023

    @rm_ said:
    Go ahead, send it some PHP or Python code (or any language you prefer) and ask it to explain what the code does, to rewrite it, optimize it, perform a different task, etc. The results that I see so far are nothing less than stunning. It is unthinkable (sic!) how any of this is even possible. How a "language model", or as some people like to diminish it, an autocomplete on steroids, can do any of that.

    I've been using it to build code one function at a time. I tell it in excruciating detail what I want, and then I audit that code block and consider how it could be exploited, then I make any necessary changes and add it to my code. A few times I've even been so bold as to ask it to consider how that function could be exploited and then adjust it to compensate, and it catches exactly what I had in mind.

    I mean yeah, always audit third party code, don't just toss a few thousand lines in your code and pray it's not vulnerable. But I'm not being fed a bunch of shit, I'm being fed powerful, very finely detailed code that does exactly what I asked for. It's amazing.
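    For what it's worth, that loop is easy to script. Below is a rough sketch, assuming the official `openai` Python package; the prompts, model name, and helper names are my own illustration, not the exact setup described above:

```python
from typing import Optional

def build_messages(spec: str, prior_code: Optional[str] = None) -> list:
    """Assemble the chat history for one round of the function-at-a-time loop."""
    messages = [
        {"role": "system",
         "content": "You are a careful programmer. Return only code."},
        {"role": "user", "content": f"Write one function: {spec}"},
    ]
    if prior_code:
        # Second pass: feed the draft back and ask the model to audit it.
        messages.append({"role": "assistant", "content": prior_code})
        messages.append({"role": "user",
                         "content": "How could this function be exploited? "
                                    "Rewrite it to close those holes."})
    return messages

def request_function(spec: str, model: str = "gpt-4") -> str:
    """Draft a function, then ask the model to harden its own draft."""
    from openai import OpenAI  # requires `pip install openai` and an API key
    client = OpenAI()
    draft = client.chat.completions.create(
        model=model, messages=build_messages(spec)
    ).choices[0].message.content
    hardened = client.chat.completions.create(
        model=model, messages=build_messages(spec, prior_code=draft)
    ).choices[0].message.content
    return hardened  # still audit this yourself before merging it
```

    The point of the second round trip is exactly the "ask it how it could be exploited" step: you get a draft, then make the model its own first reviewer before you do the real audit.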

    Thanked by 2emgh eva2000
  • @rm_ said:

    @SirFoxy said: Emotions are human, and humans can program, but programs can't emotion.

    Emotions are not intelligence and emotional intelligence is not intelligence either.

    I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    You frequently see someone incredibly smart, the type who gets a 4.0 with ease, often the programmer type with zero emotional intelligence: can't talk to girls, doesn't know how to make friends, fails to process emotions, and becomes depressed because of it.

    I 100% personally believe there is a huge difference between traditional intelligence and modern intelligence with its various branches.

    To make something artificially, synthetically, intelligent, you need to encompass all the facets of human intelligence.

  • emghemgh Member

    @jar said:

    @rm_ said:
    Go ahead, send it some PHP or Python code (or any language you prefer) and ask it to explain what the code does, to rewrite it, optimize it, perform a different task, etc. The results that I see so far are nothing less than stunning. It is unthinkable (sic!) how any of this is even possible. How a "language model", or as some people like to diminish it, an autocomplete on steroids, can do any of that.

    I've been using it to build code one function at a time. I tell it in excruciating detail what I want, and then I audit that code block and consider how it could be exploited, then I make any necessary changes and add it to my code. A few times I've even been so bold as to ask it to consider how that function could be exploited and then adjust it to compensate, and it catches exactly what I had in mind.

    I mean yeah, always audit third party code, don't just toss a few thousand lines in your code and pray it's not vulnerable. But I'm not being fed a bunch of shit, I'm being fed powerful, very finely detailed code that does exactly what I asked for. It's amazing.

    Yes.

    I didn't notice a big difference with GPT-4, apart from when I forget to toggle it on and accidentally use GPT-3, and after a while realize that something must be wrong because it sends bad code and never fixes what I ask it to fix.

    I had like a 20-reply back-and-forth about a simple Python change; I just wanted it to do it.

    After taking a break, I realized it was set to GPT-3.

    After switching to GPT-4, it nailed it on the first go.

    (Also, sadly, it's a human characteristic to notice things getting worse far more than things getting better. Losing $100 feels a lot worse than making $100 feels good.)

    Thanked by 1jar
  • SirFoxySirFoxy Member
    edited April 2023

    @emgh said:

    @jar said:

    @rm_ said:
    Go ahead, send it some PHP or Python code (or any language you prefer) and ask it to explain what the code does, to rewrite it, optimize it, perform a different task, etc. The results that I see so far are nothing less than stunning. It is unthinkable (sic!) how any of this is even possible. How a "language model", or as some people like to diminish it, an autocomplete on steroids, can do any of that.

    I've been using it to build code one function at a time. I tell it in excruciating detail what I want, and then I audit that code block and consider how it could be exploited, then I make any necessary changes and add it to my code. A few times I've even been so bold as to ask it to consider how that function could be exploited and then adjust it to compensate, and it catches exactly what I had in mind.

    I mean yeah, always audit third party code, don't just toss a few thousand lines in your code and pray it's not vulnerable. But I'm not being fed a bunch of shit, I'm being fed powerful, very finely detailed code that does exactly what I asked for. It's amazing.

    Yes.

    I didn't notice a big difference with GPT-4, apart from when I forget to toggle it on and accidentally use GPT-3, and after a while realize that something must be wrong because it sends bad code and never fixes what I ask it to fix.

    I had like a 20-reply back-and-forth about a simple Python change; I just wanted it to do it.

    After taking a break, I realized it was set to GPT-3.

    After switching to GPT-4, it nailed it on the first go.

    (Also, sadly, it's a human characteristic to notice things getting worse far more than things getting better. Losing $100 feels a lot worse than making $100 feels good.)

    Tell that to Las Vegas; I heard @Francisco spends a good amount of time there.

  • rm_rm_ IPv6 Advocate, Veteran
    edited April 2023

    @SirFoxy said: I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    Sure, but it is a separate concept: the ability to understand and handle one's own emotions, and those of other people.

    It has nothing to do with the question of whether we have AI or not.

    An AI absolutely can have consciousness and a capability for independent intelligent thought without any concept of emotions whatsoever. We can discuss whether it already has any of that (maybe not yet), but emotions are not a requirement in this question.

    @SirFoxy said: To make something artificially, synthetically, intelligent, you need to encompass all the facets of human intelligence.

    Nope. That's like requiring that cars have legs because horses did.

  • emghemgh Member
    edited April 2023

    @SirFoxy said:

    @rm_ said:

    @SirFoxy said: Emotions are human, and humans can program, but programs can't emotion.

    Emotions are not intelligence and emotional intelligence is not intelligence either.

    I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    You frequently see someone incredibly smart, the type who gets a 4.0 with ease, often the programmer type with zero emotional intelligence: can't talk to girls, doesn't know how to make friends, fails to process emotions, and becomes depressed because of it.

    I 100% personally believe there is a huge difference between traditional intelligence and modern intelligence with its various branches.

    To make something artificially, synthetically, intelligent, you need to encompass all the facets of human intelligence.

    I agree with both of you, but if one is being strict about the wording, I agree with @rm_ - I don't like it when people say xyz followed by "intelligent" or "smart".

    To me those are characteristics, interests, and so forth.

    Intelligence, to me, can be measured in IQ.

    Once we start down this path of branching intelligence, it loses what it was meant to be.

    And what even is the word for real, actual intelligence anymore?

  • @rm_ said:

    @SirFoxy said: I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    Sure, but it is a separate concept: the ability to understand and handle one's own emotions, and those of other people.

    It has nothing to do with the question of whether we have AI or not.

    An AI absolutely can have consciousness and a capability for independent intelligent thought without any concept of emotions whatsoever. We can discuss whether it already has any of that (maybe not yet), but emotions are not a requirement in this question.

    Do you believe something/someone can be subjective without emotion?

  • emghemgh Member
    edited April 2023

    @SirFoxy said:

    @rm_ said:

    @SirFoxy said: I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    Sure, but it is a separate concept: the ability to understand and handle one's own emotions, and those of other people.

    It has nothing to do with the question of whether we have AI or not.

    An AI absolutely can have consciousness and a capability for independent intelligent thought without any concept of emotions whatsoever. We can discuss whether it already has any of that (maybe not yet), but emotions are not a requirement in this question.

    Do you believe something/someone can be subjective without emotion?

    An AI trained on subjective datasets surely can.

    It can even answer as if it had feelings.

    It could even, if trained to, answer that it is straight-up hurt or depressed.

  • @emgh said:

    @SirFoxy said:

    @rm_ said:

    @SirFoxy said: Emotions are human, and humans can program, but programs can't emotion.

    Emotions are not intelligence and emotional intelligence is not intelligence either.

    I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    You frequently see someone incredibly smart, the type who gets a 4.0 with ease, often the programmer type with zero emotional intelligence: can't talk to girls, doesn't know how to make friends, fails to process emotions, and becomes depressed because of it.

    I 100% personally believe there is a huge difference between traditional intelligence and modern intelligence with its various branches.

    To make something artificially, synthetically, intelligent, you need to encompass all the facets of human intelligence.

    I agree with both of you, but if one is being strict about the wording, I agree with @rm_ - I don't like it when people say xyz followed by "intelligent" or "smart".

    To me those are characteristics, interests, and so forth.

    Intelligence, to me, can be measured in IQ.

    Once we start down this path of branching intelligence, it loses what it was meant to be.

    And what even is the word for real, actual intelligence anymore?

    In terms of "And what even is the word for real actual intelligence anymore?" it's a rabbit hole that only DMT can answer.

    In this context, as close to human as possible without being human (imo).

    Thanked by 1emgh
  • @emgh said:

    @SirFoxy said:

    @rm_ said:

    @SirFoxy said: I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    Sure, but it is a separate concept: the ability to understand and handle one's own emotions, and those of other people.

    It has nothing to do with the question of whether we have AI or not.

    An AI absolutely can have consciousness and a capability for independent intelligent thought without any concept of emotions whatsoever. We can discuss whether it already has any of that (maybe not yet), but emotions are not a requirement in this question.

    Do you believe something/someone can be subjective without emotion?

    An AI trained on subjective datasets surely can.

    It can even answer as if it had feelings.

    It could even, if trained to, answer that it is straight-up hurt or depressed.

    See, and that's the thing: it's objectively trying to be subjective.

    Thanked by 1emgh
  • emghemgh Member
    edited April 2023

    @SirFoxy said:

    @emgh said:

    @SirFoxy said:

    @rm_ said:

    @SirFoxy said: I do believe in emotional intelligence, but it is a newer concept, introduced in the '60s when people started becoming more aware of mental health and related issues.

    Sure, but it is a separate concept: the ability to understand and handle one's own emotions, and those of other people.

    It has nothing to do with the question of whether we have AI or not.

    An AI absolutely can have consciousness and a capability for independent intelligent thought without any concept of emotions whatsoever. We can discuss whether it already has any of that (maybe not yet), but emotions are not a requirement in this question.

    Do you believe something/someone can be subjective without emotion?

    An AI trained on subjective datasets surely can.

    It can even answer as if it had feelings.

    It could even, if trained to, answer that it is straight-up hurt or depressed.

    See, and that's the thing: it's objectively trying to be subjective.

    Yes, I guess.

    Sam mentions in the interview that an AI could be seen as ”not faking” when its dataset doesn't contain any fact, reference, or logic even somewhat related to the concept of feeling something - so if you said ”Hey, I've had this thing where...” and explained the concept of suffering, for example, and it went ”Yeah, me too!” - then that's real.

    Lex Fridman seems to think it's more about its appearance, which I guess isn't irrelevant.

    Thanked by 1SirFoxy
  • emghemgh Member

    BREAKING: ChatGPT a hoax powered by an Indian call center south of Calcutta

    Thanked by 1SirFoxy