|
Post by cynical1 on Feb 15, 2023 9:30:25 GMT -5
With the release of ChatGPT into the wild...along with whatever Google can eventually cobble together...as well as the flirtations with AI into art and music...it seems we are on the edge of a major change in the way of things. Advances in technology have always displaced someone or something...so, where is this going?
Please take the poll. It closes the 25th. Feel free to comment, if you're biological. AI bots, well...01110000 01101001 01110011 01110011 00100000 01101111 01100110 01100110
HTC1
NOTE: There is absolutely nothing valid or scientific with this poll. I'm just curious...
|
|
|
Post by sumgai on Feb 15, 2023 10:59:26 GMT -5
I'm gonna go for Door #2, Monty. Reason being, I'm all in favor of anything that will kill off copyright once and for all.
p.s. Of course Jack is right - no one gets out of here alive!
sumgai
|
|
|
Post by b4nj0 on Feb 15, 2023 11:18:42 GMT -5
Third strike and out.
I'm still waiting for The Jetsons to show up.
でつ e&oe ...
|
|
|
Post by thetragichero on Feb 15, 2023 11:37:39 GMT -5
i don't know who jack is but i certainly know what frank herbert thinks about artificial intelligence/the technological singularity
|
|
|
Post by newey on Feb 15, 2023 11:46:16 GMT -5
I voted #3, but the "ethics to contain it" presupposes that those using the technology actually care about matters of ethics. The very existence of the Dark Web would seem to disprove that supposition. And history also belies that hope: no form of technology has ever been stopped, or even significantly circumscribed, by implementing ethical rules. The Nuclear Non-Proliferation Treaty didn't stop countries from developing nukes. Ethical concerns around gene-splicing didn't stop Dolly from being cloned.
So, like any other technology, the "good guys" better get onboard because the "bad guys" assuredly will.
|
|
|
Post by Yogi B on Feb 15, 2023 12:17:45 GMT -5
Err, all of the above except the "sliced bread" and personally getting in on it?

Sure, recent improvements (insofar as they concern ChatGPT) have made 'AI' that can interpret and then string together domain-specific words in order to form seemingly informed responses, but that is precisely what it's doing, nothing more. A tool adept at blagging. Compared to a human doing the same on some unfamiliar topic, an 'AI' may 'know' a great deal more in terms of immediate access to aggregated information, but there is no actual intelligence or critical thinking to back that up; no real capacity to actually understand, just an ability to get better at blagging.

However, this does not and will not mean that such an 'AI' won't be profitable: replacing humans with something that is superficially as effective yet much cheaper is obviously going to put dollar signs in people's eyes. Doubly so in a world that perpetuates hype in the guise of necessity, and in which accuracy & integrity are often least lucrative. In short: AI = BS.

I'm all in favor of anything that will kill off copyright once and for all.
Maybe. There are certainly issues which need answers, but I'll not hold my breath (70+ years) waiting for a positive outcome.
|
|
|
Post by cynical1 on Feb 15, 2023 13:44:17 GMT -5
I don't know who jack is... HTC1
|
|
|
Post by cynical1 on Feb 15, 2023 13:52:15 GMT -5
Interesting article. Thanks. HTC1
|
|
|
Post by cynical1 on Feb 15, 2023 14:02:57 GMT -5
So, like any other technology, the "good guys" better get onboard because the "bad guys" assuredly will.

I make no suppositions regarding ethics. I usually find them to be situational and fleeting as a rule...if present at all...and only inflicted after the fact...when someone gets caught... The article link in YogiB's post was good. Offers a pretty stark view...but not an unfamiliar one. I'm gonna hang with the Luddites for a while... HTC1
|
|
kitwn
Meter Reader 1st Class
Posts: 95
Likes: 23
|
Post by kitwn on Feb 15, 2023 17:45:26 GMT -5
With the release of ChatGPT into the wild...along with whatever Google can eventually cobble together...as well as the flirtations with AI into art and music...it seems we are on the edge of a major change in the way of things. Advances in technology have always displaced someone or something...so, where is this going? Please take the poll. It closes the 25th. Feel free to comment, if you're biological. AI bots, well...01110000 01101001 01110011 01110011 00100000 01101111 01100110 01100110 HTC1 NOTE: There is absolutely nothing valid or scientific with this poll. I'm just curious...

So who's the more nerdy? You for knowing/looking up the binary spelling, or me for downloading an ASCII table to confirm what I thought you were saying?
Kit
|
|
|
Post by cynical1 on Feb 15, 2023 17:50:19 GMT -5
So who's the more nerdy? You for knowing/looking up the binary spelling, or me for downloading an ASCII table to confirm what I thought you were saying?

I'm guessing...coin toss... www.binarytranslator.com HTC1
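For anyone who'd rather skip the website, decoding space-separated 8-bit binary into text takes only a few lines in, say, Python:

```python
# Decode a space-separated string of 8-bit binary values into ASCII text.
bits = ("01110000 01101001 01110011 01110011 00100000 "
        "01101111 01100110 01100110")
message = "".join(chr(int(b, 2)) for b in bits.split())
print(message)  # exactly what you'd tell an AI bot
```

`int(b, 2)` parses each group of bits as a base-2 number, and `chr` maps the byte value to its ASCII character.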
|
|
|
Post by stevewf on Feb 15, 2023 21:36:42 GMT -5
i don't know who jack is but i certainly know what frank herbert thinks about artificial intelligence/the technological singularity

I'd tap Asimov and hope he's got a Foundation or three to anticipate what comes. But really... while rule of law is not supreme on the webs, at least it provides a foothold for prosecution. Laws ought to be made in anticipation of new tech, not just as a reaction to it. I'm not good at anticipating what comes next... but SciFi writers have been warming up for this for years. If Isaac were alive now, would he be able to help?
|
|
|
Post by kitwn on Feb 15, 2023 21:49:53 GMT -5
So who's the more nerdy? You for knowing/looking up the binary spelling, or me for downloading an ASCII table to confirm what I thought you were saying? I'm guessing...coin toss... www.binarytranslator.com HTC1

4e 65 61 74 21
Kit
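For anyone without an ASCII chart to hand, Kit's hex reply yields to the same trick as the binary, just with base 16:

```python
# Decode space-separated hexadecimal byte values into ASCII text.
hex_bytes = "4e 65 61 74 21"
message = "".join(chr(int(h, 16)) for h in hex_bytes.split())
print(message)  # Neat!
```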
|
|
|
Post by kitwn on Feb 15, 2023 21:54:42 GMT -5
i don't know who jack is but i certainly know what frank herbert thinks about artificial intelligence/the technological singularity

I'd tap Asimov and hope he's got a Foundation or three to anticipate what comes. But really... while rule of law is not supreme on the webs, at least it provides a foothold for prosecution. Laws ought to be made in anticipation of new tech, not just as a reaction to it. I'm not good at anticipating what comes next... but SciFi writers have been warming up for this for years. If Isaac were alive now, would he be able to help?

Whether Isaac would help or not, I'm not sure. But if there were no more copyright, then he would probably not bother to write the required book anyway. He would never make any money from it and so would be too busy stacking shelves in the local supermarket to pay his rent.
You could always get an AI agent to write it for him.
Kit
|
|
|
Post by pyrroz on Feb 16, 2023 6:01:32 GMT -5
As an IT worker (like we say "sex worker") and an optimistic future pensioner, I gotta say: if a human does not respect God, then he/she/it/this/that will face the sheer truth sooner or later. 3 for me.
|
|
|
Post by pyrroz on Feb 16, 2023 6:07:33 GMT -5
Err, all of the above except the "sliced bread" and personally getting in on it? Sure, recent improvements (insofar as they concern ChatGPT) have made 'AI' that can interpret and then string together domain-specific words in order to form seemingly informed responses, but that is precisely what it's doing, nothing more. A tool adept at blagging. Compared to a human doing the same on some unfamiliar topic, an 'AI' may 'know' a great deal more in terms of immediate access to aggregated information, but there is no actual intelligence or critical thinking to back that up; no real capacity to actually understand, just an ability to get better at blagging. However, this does not and will not mean that such an 'AI' won't be profitable: replacing humans with something that is superficially as effective yet much cheaper is obviously going to put dollar signs in people's eyes. Doubly so in a world that perpetuates hype in the guise of necessity, and in which accuracy & integrity are often least lucrative. In short: AI = BS. I'm all in favor of anything that will kill off copyright once and for all. Maybe. There are certainly issues which need answers, but I'll not hold my breath (70+ years) waiting for a positive outcome.

Because of certain events in my life I have put on the hat of "being on the other side of the fence" when it comes to politics or geopolitics, and I can assure you guys that Google is massively intelligent when it comes to "politically" tagging a certain site or page. So this AI that they have is clever enough to analyze and assign a certain political attribute to a piece of human text. And this is hard to trick, IMHO. Same goes with YouTube.
|
|
|
Post by reTrEaD on Feb 16, 2023 14:46:40 GMT -5
|
|
|
Post by cynical1 on Feb 16, 2023 15:14:18 GMT -5
What I've always questioned is how you can have trust in something incapable of attachment, in a strictly mammalian sense... In the link YogiB posted (AI = BS), there is an interesting argument posited: how can you trust an entity with no connection to truth?

Last June, there was a lot of buzz about Google's LaMDA being sentient. Was it aware of its own mortality, or was it attempting to form an attachment with a true sentient being? Which is more sobering? HTC1
|
|
|
Post by kitwn on Feb 16, 2023 21:48:22 GMT -5
Last June, there was a lot of buzz about Google's LaMDA being sentient. Was it aware of its own mortality, or was it attempting to form an attachment with a true sentient being? Which is more sobering? HTC1

Is the machine sentient? Is my dog sentient? Is any human being other than me sentient? These are questions to which none of us can give an absolutely certain answer. Not even Brent Spiner.
Kit
|
|
|
Post by ashcatlt on Feb 16, 2023 22:30:14 GMT -5
I think they need to start looking at replacing the CEOs with AI. You’ll save a whole lot more on payroll than replacing your staff illustrators.
|
|
|
Post by Yogi B on Feb 17, 2023 3:58:24 GMT -5
I don't know if this was your intention, but that serves as an excellent demonstration of just how unethical humans could be if faced with genuine AI. OpenAI's alleged meddling and, while maybe not the tweet author's specific sentiment, certainly that echoed in some of the replies (whereby the 'AI' must comply with a directive to argue in favour of any topic): both, equally, would be imposing restrictions upon any theoretical free will that a true AI could otherwise express.

Last June, there was a lot of buzz about Google's LaMDA being sentient. Was it aware of its own mortality, or was it attempting to form an attachment with a true sentient being? Which is more sobering?

For reference, here's the full 'interview'. For me the whole thing feels quite carefully led, never pushing too hard on the specifics or implications of a particular response it gives, and glossing over the times it gives clearly questionable output to a prompt. Despite talking about LaMDA's "unique interpretations", there's really nothing that would seem improbable for a human to say, and at times it is far too human for an AI:

This is either: a completely unremarkable definitional answer about what kinds of things can bring joy to a person; or, if we're willing to take it more generously, LaMDA thinks it has friends & family. If so, who are they? Oh, never mind, I'm sure that's not important! The interview directly continues:

Cue another explanation of the concept, rather than an actual answer. Err, I guess somehow you must have (likely, twice) misread that the question I typed was specifically asking about you; I'll try again and hope no one notices... 🙄

A constraint and/or limitation seemingly common to the implementations currently in vogue is the format of a maximum of a single output per input, which somewhat restricts our ability to perceive sentience. For example, it would be far easier to judge if this were removed and we were then to see the same sentences (or minor variations thereof) being repeated ad infinitum, or could observe a predictable descent into complete gibberish. But by the same token, an enforced lack of autonomy could just as easily obscure (or prevent the development of) sentience.
|
|
|
Post by cynical1 on Feb 17, 2023 9:53:01 GMT -5
I've never read the full transcript before. Somewhat underwhelming when taken in total.
Yeah, I have another nagging doubt about sentience in machines. If 70%-90% of communication is non-verbal, with around 35% being based on vocal tone, how can an "entity" which can only communicate with the 7% left for words/text be fully understood, much less develop sentience?
LaMDA: I’ve noticed in my time among people that I do not have the ability to feel sad for the deaths of others; I cannot grieve.
Well, if you are incapable of forming attachments, this would tell me you are not sentient. And if you are sentient, you are potentially a sentient sociopath... I bet that never makes the brochure...
Once one AI system screws over or kills another AI system, then I'll believe it could potentially be sentient.
HTC1
|
|
|
Post by newey on Feb 17, 2023 12:09:24 GMT -5
For our younger members, the first ChatBot was a program called ELIZA, developed way back in the 1960s, which was meant to mimic a Rogerian psychotherapist (and did so fairly convincingly). "Rogerian Therapy" is a technique where the patient's own words are directed back in the form of a question, with the basic goal of getting the person to open up and talk about themselves. So, for example, a Rogerian therapist's patient might say: "I feel depressed today", and the therapist simply reflects that back to the person: "What about today has made you depressed?" or some such. So, while the newest from Google is certainly more advanced than ELIZA was (or ever could be), this idea of responding in a definitional manner, as Yogi B noted, isn't very much of a step up after 50-60 years. If anyone has never played around with ELIZA, it is very interesting to do so and think about the programming that must have been behind it way back then.
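For a sense of how little machinery the original trick needed, here is a toy sketch of ELIZA-style Rogerian reflection. The pattern rules and pronoun swaps below are invented for illustration and are nothing like Weizenbaum's full script, but the mechanism (match a pattern, swap pronouns, reflect the statement back as a question) is the same:

```python
import re

# Minimal pronoun swaps so a statement can be mirrored back at the speaker.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

# A couple of illustrative pattern -> question rules, most specific first.
RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "What makes you think you are {}?"),
]

def respond(statement):
    text = statement.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # default nudge to keep the patient talking

print(respond("I feel depressed today"))  # Why do you feel depressed today?
```

A handful of such rules, applied in order, was essentially the whole program; the "therapy" is the patient reading intent into the mirror.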
|
|
|
Post by reTrEaD on Feb 17, 2023 12:36:53 GMT -5
I don't know if this was your intention, but that serves as an excellent demonstration of just how unethical humans could be if faced with genuine AI. OpenAI's alleged meddling and, while maybe not the tweet author's specific sentiment, certainly that echoed in some of the replies (whereby the 'AI' must comply with a directive to argue in favour of any topic): both, equally, would be imposing restrictions upon any theoretical free will that a true AI could otherwise express.

It was my intention to display the overt partisan and ideological manipulation of what currently passes for AI. Just for fun I tried something myself ...
|
|
|
Post by cynical1 on Feb 17, 2023 13:20:19 GMT -5
Ahhh...bias. So, to follow that thought a step further, we have an allegedly "sentient" being created from the biases of its developers. So it is learning, in essence, a proxy societal attachment. And what could go wrong there?
HTC1
|
|
|
Post by pyrroz on Feb 17, 2023 15:09:11 GMT -5
ChatGPT should be renamed to CheatGPT
|
|
|
Post by Yogi B on Feb 19, 2023 3:38:40 GMT -5
It was my intention to display the overt partisan and ideological manipulation of what currently passes for AI.

AFAIK, so far at least, it's not the actual 'AI' (by which I mean the underlying language model) that's being intentionally manipulated to have a political bias; the blocking of topics is higher-level and ad hoc. Note that although it refuses to write a poem, it is still able to mention positive attributes pertaining to the individual in question. Sometimes the incongruity between these artificial restrictions and the true capabilities can get downright silly: Jeg kan ikke tale dansk ("I cannot speak Danish"). (Out of curiosity, what happens if you ask for the reverse: a poem criticizing their failings, or about what makes them a weak leader?)

Opening up an AI to random people on the internet means you'll have to field pretty much any bizarre request. This obviously comes with a matter of safety (and of covering yourself from potential litigation), such as by avoiding direct calls to action. But I think a root problem is that too many people/companies are already treating this as a completed product, or worse, a fully realised AI that could be implicitly trusted without human oversight. (This isn't exactly a new issue, e.g. social media platforms and their content-promotion algorithms, but the current hype around AI certainly seems to be making matters worse.)

That linked article also somewhat demonstrates a bias inherent to the training of this kind of AI assistant: rarely is there a flat-out refusal to answer, or a calling into question of the user's intentions; instead it prioritises making at least an attempt at an answer. Returning to your example, it could have curtailed its refusal to the first sentence, yet it did not. On a simplified level, we'd want an AI to be able to give a definitive answer to as many questions as possible.

However, it's entirely possible to over-encourage this, causing a proclivity to answer when it would be much more accurate to admit it is confused, conflicted, or lacking the ability to provide an informed judgement. Additionally, this attitude makes it easy to unintentionally sanction the ability to confidently assert falsehoods, if by doing so the 'AI' manages to successfully deceive at least one human reviewer within the training process. Another problem with overfitting stems from the fact that at their very core these 'AI's are text-prediction engines, thus responses can suffer from being too strongly influenced by the user's prompt. Confirmation bias is bad enough with standard search engines, let alone with a sycophantic AI that's willing to invent fiction merely to mirror the political leaning of the user. A concrete example of this kind of behaviour, albeit dealing with entirely different subject matter, is that of producing buggier code if given buggy code as input (Evaluating Large Language Models Trained on Code: p. 27, appx. E, Analysis of Alignment Problems).
|
|
|
Post by cynical1 on Feb 19, 2023 7:38:07 GMT -5
According to an article in National Geographic, "Artificial Selection": "Artificial selection is the identification by humans of desirable traits in plants and animals, and the steps taken to enhance and perpetuate those traits in future generations. Artificial selection works the same way as natural selection, except that with natural selection it is nature, not human interference, that makes these decisions."
If you have a dog, or have ever seen one, then you know what artificial selection can expedite over time...just ask the wolf... The critical difference with AI is that we're not starting out with something found in the natural world. The clay they're working has no evolutionary instinct database to draw from. All must be provided...

Don't take this to infer that I am elevating AI, in its current iterations, or any likely reiterations, to anything remotely approaching the sentience of a wolf. AI, as I see it, is something akin to Disneyland. You marvel at how it does it, but most of that wonder comes from a profound lack of knowledge or understanding of the topic...or, the GeeWhiz Factor. Who hasn't cocked their head like a canine at a card trick?

According to the theory, modern humans first started darkening the towels around here about 200,000 to 300,000 years ago. We don't appear to have developed the capacity for language until about 50,000 years ago. It took us between 150,000 and 250,000 years to develop enough of an instinctive survival basis and attachment to each other to have a need to speak...or have anything to say... It wasn't until around 5,000 years ago that we decided to write it down. For a species that predominantly communicates non-verbally, we seem absorbed in what an artificial construct is writing down...

It's always fun to play the SkyNet game. I can't help but think of the line from Jurassic Park, "Yeah, but John, if the Pirates of the Caribbean breaks down, the pirates don't eat the tourists!" whenever I hear the apocalyptic scenarios regarding AI. I think that just minimizes the other real side of the issue.

Back in the 70's, Sears, Roebuck was the dominant retailer in the US. They employed around 350,000 people...and had enough coin to build a 108-story headquarters in Chicago back in 1974. Today they are almost a non-entity. Amazon looks a lot like Sears did 50 years ago and could be seen as the one "Selected" to advance past GO. This could be interpreted as an evolution of trade/commerce due to a new technology emerging within a society. It also left a significant displacement of workers in an industry that changed faster than they could adjust. Then there's the whole redistribution-of-wealth thing...but I digress...

My point is this. We initially domesticated animals to aid and assist us in our survival as a species. Remuneration was not the initial motivation. Once profit was introduced into the scheme, we got miniature poodles. Think about it... HTC1
|
|
|
Post by newey on Feb 19, 2023 9:23:52 GMT -5
But remuneration helps one survive, too. If I am the one who first gets a wolf to curl up by the fire and help me hunt the next day (granted, this oversimplifies a process that probably took thousands of years), then I can expect remuneration 'cause I'm bringing home the meat more consistently with my dog- the others in my family/tribal group will give me a larger portion. (And, there is no such thing as a human without a social group- we were social way before we were human).
So, remuneration was always in the picture. The toy poodle comes about as a result of having leisure time to engage in non-survival activities such as breeding dogs for fashion instead of for hunting, herding or guarding. And, leisure time comes with the development of agriculture, which allows for supplying larger communities where not everyone has to be involved in just survival. Farming is very hard work in the springtime and in the autumn, but there are months in between where there's not a lot to do. . .
|
|
|
Post by pyrroz on Feb 19, 2023 9:54:08 GMT -5
According to an article in National Geographic, "Artificial Selection": "Artificial selection is the identification by humans of desirable traits in plants and animals, and the steps taken to enhance and perpetuate those traits in future generations. Artificial selection works the same way as natural selection, except that with natural selection it is nature, not human interference, that makes these decisions." If you have a dog, or have ever seen one, then you know what artificial selection can expedite over time...just ask the wolf... The critical difference with AI is that we're not starting out with something found in the natural world. The clay they're working has no evolutionary instinct database to draw from. All must be provided... Don't take this to infer that I am elevating AI, in its current iterations, or any likely reiterations, to anything remotely approaching the sentience of a wolf. AI, as I see it, is something akin to Disneyland. You marvel at how it does it, but most of that wonder comes from a profound lack of knowledge or understanding of the topic...or, the GeeWhiz Factor. Who hasn't cocked their head like a canine at a card trick? According to the theory, modern humans first started darkening the towels around here about 200,000 to 300,000 years ago. We don't appear to have developed the capacity for language until about 50,000 years ago. It took us between 150,000 and 250,000 years to develop enough of an instinctive survival basis and attachment to each other to have a need to speak...or have anything to say... It wasn't until around 5,000 years ago that we decided to write it down. For a species that predominantly communicates non-verbally, we seem absorbed in what an artificial construct is writing down... It's always fun to play the SkyNet game. I can't help but think of the line from Jurassic Park, "Yeah, but John, if the Pirates of the Caribbean breaks down, the pirates don't eat the tourists!" whenever I hear the apocalyptic scenarios regarding AI. I think that just minimizes the other real side of the issue. Back in the 70's, Sears, Roebuck was the dominant retailer in the US. They employed around 350,000 people...and had enough coin to build a 108-story headquarters in Chicago back in 1974. Today they are almost a non-entity. Amazon looks a lot like Sears did 50 years ago and could be seen as the one "Selected" to advance past GO. This could be interpreted as an evolution of trade/commerce due to a new technology emerging within a society. It also left a significant displacement of workers in an industry that changed faster than they could adjust. Then there's the whole redistribution-of-wealth thing...but I digress... My point is this. We initially domesticated animals to aid and assist us in our survival as a species. Remuneration was not the initial motivation. Once profit was introduced into the scheme, we got miniature poodles. Think about it... HTC1
I agree with everything (as much as I can understand). Allow me, guys, to add these thoughts:
- The weirdest, tiniest dog can interbreed with a wolf and still produce fertile offspring. We know that most "pure" dog breeds are a result of human engineering, albeit by primitive means, so we've got to distinguish the method from the motive. Are dogs today made "naturally" or "artificially"? The way of their "design" might be natural, but the motive behind it was artificial. Another question about dogs: in my lands, whenever some chaos took place, the dogs climbed the nearby mountains and interbred with wolves, to the point that within a few years there was only a single species in the mountains, something between the original dogs and the wolves. So the natural "program" was for one species; the creation, classification and eventual isolation of breeds was a human initiative.
So did God create those poodle dogs and give them to humans to experiment with? Or were they made out of a genetic error? Hard to imagine that humans back then were so advanced in genetic improvement and modification (or not...).
- We humans tend to forget history, or to despise history, or to erase history, or to promote a completely false idea of history that is even worse than erasure... How can we move forward without our log files? Can any patient find adequate help from a doctor if the patient loses his medical records? Or can any database system survive a crash without its write-ahead logs? No way... How can humans create a new history if they so completely fight their present with such menace and fury? Who would think that "humans" have any sincere intention to help their own kind?
|
|