|
Post by ozboomer on Mar 5, 2023 4:04:48 GMT -5
'lo all... Yes, he's BACK... but with retirement, as well as health issues, housing issues (and goodness knows, finance issues here in Oz, with Superannuation, etc(!)), comes more 'navel gazing' and looking to the future and what is (or might be) coming (Hello, HG Wells 'n Jules Verne)...

I'm no expert in AI but I've been watching it develop since the early days of 'Eliza' in the 1960s... through the fashionable experimenting with 'expert systems' and 'code generators' (or automatic programmers) in the 1980s and '90s... and it seems most everything has been based on 'functional' development - we need some 'mechanical' (repetitive) process done, and digital computers are good at doing that. In recent times, modelling has been the fashion, where the 'analog' world has been 'modelled' and replicated, again through digital means.

...but in the arts, the analog.. the non-digital.. hasn't been looked at as much; even neural networks (often being pseudo-analog) are still basically 'modelling' our reality to determine how they will work - they generally don't spontaneously create something 'new' - something that neither we nor any AI has seen before.

Music (and the arts in general) is all about allowing us to express ourselves in ways that language denies us. We try to express our emotions (which are often irrational and connected/dependent) with language and fail miserably... but we DO get the feelings communicated between us when we use music (although that is often imperfect anyway).

As the 'learning' and history of AI has been predominantly about cause->effect and deduction, whereas we humans bring abstractions together and use induction in our thinking, I kind of think the AI systems won't be able to do such a thing... so they won't create music like we have known, particularly that created during the 'common practice' period (say, 1600-1910)... but the loss to the world is that AI will have no need to.

"Thinking..."
--Deep Thought, in "The Hitchhiker's Guide to the Galaxy", a BBC Radio 4 program first aired in 1978.
|
|
|
Post by newey on Mar 5, 2023 7:00:53 GMT -5
ozboomer- Glad to see you posting again! I don't know if you saw this recent thread from cynical1; it is more than tangentially related to your topic: guitarnuts2.proboards.com/thread/10234/ai-opinions-needed

I think overall, you are right about the "human element" not being recreate-able. But OTOH, I wonder if we aren't simultaneously defining that human element ever more narrowly as the AI advances. When I was growing up, it was "computers will never think for themselves, they'll only do what you tell them to do"; now it's "Well, they can't be truly creative". That sliver of human uniqueness seems to be getting narrower and narrower . . . At some point, will we get to "well, yes, they are creative, but they haven't come up with anything as great as Beethoven's symphonies, they're only at the level of a Mahler"??
|
|
|
Post by ozboomer on Mar 5, 2023 7:25:57 GMT -5
Yay.. Thanks, newey ... Haven't yet looked at cynical1's post... but I expect there'll be mention made of the recent articles OpenAI CEO admits ChatGPT risks. What now? ...and OpenAI wants to build systems smarter than us ...

...but I'm more cynical (HA HA) and believe folks will go crazy on development and then come back to try and fix things, as that's what we normally do... ...but doing things 'the right way' is even more difficult when we can see what's coming... but the people who can make a difference... don't do anything. I spent a career trying to deal with literally 100s of people who had no vision.

Oh.. time to start quoting from my quote collection:

Blinkered vision, when combined with shortsightedness, leads to permanent blindness.
--Me
|
|
|
Post by ozboomer on Mar 5, 2023 7:53:33 GMT -5
When I was growing up, it was "computers will never think for themselves, they'll only do what you tell them to do"

My Dad's version was "You can't blame {the current media hobby-horse} on the computers - they're just dumb machines that do what they're told to do. Blame the computer program designers."

This, from the man who introduced me to 'computers' in ~1967 (when I was still ~6 years old) by showing me his 'box of notched cards'. He'd need to find all the over-dimensional loads that were due to use Princes Highway and were over 300 tons in the next 12 months... and he'd use a knitting needle on the cards to do the selecting. I built all sorts of similar 'decks' in that year.

For those unfamiliar, have a squiz: BEFORE COMPUTERS: NOTCHED CARD DATABASES

...but I'll continue exploring this over in cynical1's poll...
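For anyone curious how the knitting-needle trick worked as an algorithm, here's a toy simulation. Edge-notched cards have holes along one edge; a hole is notched open for each attribute that applies, and pushing a needle through a hole position holds back un-notched cards while the notched ones drop free (i.e. are selected). All field names and values below are invented for illustration; this is a sketch, not a description of any real deck.

```python
# Toy edge-notched card "database": each card's attributes are encoded
# as a set of notched hole positions; the needle selects any card
# notched at *every* position the needle passes through.

CARDS = [
    {"load": "transformer", "route": "Princes Highway", "tons": 320},
    {"load": "boiler",      "route": "Hume Highway",    "tons": 410},
    {"load": "girder",      "route": "Princes Highway", "tons": 180},
    {"load": "press",       "route": "Princes Highway", "tons": 355},
]

def notch(card):
    """Encode a card's attributes as notched hole positions."""
    notches = set()
    if card["route"] == "Princes Highway":
        notches.add("princes")
    if card["tons"] > 300:
        notches.add("over300")
    return notches

def needle_select(cards, holes):
    """Cards notched at every needle position 'fall off' the needle."""
    return [c for c in cards if holes <= notch(c)]

# Dad's query: Princes Highway AND over 300 tons.
for card in needle_select(CARDS, {"princes", "over300"}):
    print(card["load"], card["tons"])   # transformer 320, press 355
```

Each extra needle position is an AND condition, which is exactly why the physical version handled conjunctive queries so naturally.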
|
|
|
Post by cynical1 on Mar 5, 2023 16:21:37 GMT -5
Somewhere along the line I recall hearing the phrase, "it's never as bad as it seems, nor as good as it looks..."
I don't believe we have now, as far as I know, a true AI loose in the wild. It won't stop certain entities from attempting to cash in on the primitive variant we do have, though. Once they do successfully commercialize it, Skynet fails to be economically viable...but video games are sure to improve exponentially...
Remember when the Internet was going to revolutionize the world and empower the common human? Well, that turned into Facebook, Twitter and Pornhub fast, didn't it? I don't see AI being much more than that in 20 years. It will be significant when perfected, but it's not going to be a panacea, or Armageddon, either...
In the short term I believe the most dramatic thing AI will bring is a re-distribution of corporate wealth. These days, it seems like giving the entitled and idle more time to ponder a cloistered reality is a sure payday. Who gets there first with the most fingers remains to be seen.
It's always the merde you don't see coming that gets you. 250,000-plus years of biological evolution have brought us to where we are now as a species... for better or worse... Can a human, or small collections of humans, successfully translate all of the unspoken moments which created our shared Homo sapiens instincts into a non-corporeal artificial entity? Especially in time to hit the peak of the market...
I'm not going to live long enough to see it. Attachment is one primary survival mechanism in mammals. We are one of those mammals that takes a long time to grow from coyote fodder to self-sufficient being. We require nurturing and a sense we are safe, or we may develop in potentially anti-social ways as an adult.
We are attempting to develop an artificial intelligence detached from any sense of a safe refuge, or safe harbor to explore and develop from, where the AI is allowed nothing but a verbal "conversation" with sentient beings whose communication style is 90% non-verbal. This is a relatively one-sided experience for the AI, as it is only required to perform the rote recitation of facts, or perform tasks on command. With no common evolutionary or societal experience... this is just asking for a collection of dysfunctional sociopaths.
I do have to admit, I am a bit intrigued at how far the Law of Unintended Consequences can be bent moving forward.
HTC1
|
|
kitwn
Meter Reader 1st Class
Posts: 95
Likes: 23
|
Post by kitwn on Mar 5, 2023 16:32:05 GMT -5
Hi ozboomer. As a recently retired boomer living in Oz I'm very pleased to meet you.
One of the most telling points in Cynical1's thread is the quote from ChatGPT posted by reTrEaD which includes the words "..It goes against my programming to generate content that.....". This is a machine that creates no opinions of its own but is programmed with the ability to edit existing material into a coherent-sounding narrative pushing any agenda its programmer wants. Political speech writers may find themselves out of a job in the near future.
Kit
|
|
|
Post by ozboomer on Mar 5, 2023 17:16:56 GMT -5
Just quickly... This is the fundamental thing (many) folks are not really across... that 'Chat GPT' is, by definition, a language model (see 'language modeling')... but folks are seeing it as AI. As mentioned previously, that particular application is impressive... but it's still questionable as to whether it's a real 'intelligence'.

It's kinda classic, really.. as definitions of terms ('intelligence') are not static, measurement relative to those definitions is not absolute, consistent or even reproducible. Case in point: in my (admittedly minor) explorations of 'Chat GPT', I've repeatedly found that changing the way a question is expressed often leads to 'Chat GPT' giving a very different answer. So, the 'language model' still can't 'infer meaning'.. or actually 'understand' in a human way.

...as the rabbit hole gets deeper...
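For readers who haven't met the term before, a "language model" in miniature really is just next-word statistics. The toy bigram model below (training text invented for the example) produces locally fluent word sequences with no representation of meaning at all, which is the point ozboomer is making, writ very small:

```python
# A toy bigram language model: all it "knows" is which word tends to
# follow which, tallied from training text. Fluent-looking output,
# zero understanding underneath.
import random
from collections import defaultdict

text = ("the cat sat on the mat the cat ate the fish "
        "the dog sat on the log the dog ate the bone").split()

# Tally word -> observed next words.
follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    """Walk the bigram table, picking a seen successor at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        if out[-1] not in follows:   # dead end: word never seen mid-text
            break
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the", 8))
```

Modern systems replace the tally with billions of learned parameters, but the training objective is the same kind of thing: predict the next token.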
|
|
|
Post by cynical1 on Mar 5, 2023 18:22:27 GMT -5
So, the 'language model' still can't 'infer meaning'.. or actually 'understand' in a human way.

Let's be honest here, the better contestant here is the human. We can lose the meaning quicker than it would take George Santos to tell you he's the Emperor of Mars. If humans react to predictable "dog whistles", imagine the directions a conversation could go if someone wrote that into the model... for something as simple as punching up site traffic...

Imagine where the rabbit hole could go next. Social media has twisted perceptions in some people to the point that a Gucci belt has taken on the same critical importance as food used to... gotta have those 15 minutes. But that's only as far as humans can take it.

So, human builds better tool. Target whatever you want based on language, dialect... whatever... and employ a tool that can interpret and adjust on the fly to any trend or commonality. The shepherd builds a 24/7 shepherd surrogate that never asks for weekends off.

It's a funny mental image to think of when some script kiddie figures out a way to give the AI a Tourette's moment...

HTC1
|
|
|
Post by kitwn on Mar 5, 2023 20:45:49 GMT -5
I remember when a work colleague of mine was showing off his ability to talk to his phone and get an answer. He called it artificial intelligence, I thought it was speech recognition driving a search engine. Sadly I never got the chance to test it with the exchange...
Kit: Who won the ladies singles title at Wimbledon in 1971?
Phone: Evonne Goolagong.
Kit: How many legs did she have on that day?
|
|
|
Post by reTrEaD on Mar 6, 2023 12:41:04 GMT -5
I remember when a work colleague of mine was showing off his ability to talk to his phone and get an answer. He called it artificial intelligence, I thought it was speech recognition driving a search engine. Sadly I never got the chance to test it with the exchange...
Kit: Who won the ladies singles title at Wimbledon in 1971?
Phone: Evonne Goolagong.
Kit: How many legs did she have on that day?

I ran that for you ...

Me: Who won the ladies singles title at Wimbledon in 1971?
ChatGPT: In 1971, the ladies singles title at Wimbledon was won by Evonne Goolagong Cawley from Australia. She defeated Margaret Court, also from Australia, in the final with a score of 6-4, 6-1.
Me: How many legs did she have on that day?
ChatGPT: As Evonne Goolagong Cawley is a human being, she had the usual two legs on the day she won the Wimbledon ladies singles title in 1971.
|
|
|
Post by kitwn on Mar 6, 2023 17:05:19 GMT -5
I remember when a work colleague of mine was showing off his ability to talk to his phone and get an answer. He called it artificial intelligence, I thought it was speech recognition driving a search engine. Sadly I never got the chance to test it with the exchange...
Kit: Who won the ladies singles title at Wimbledon in 1971?
Phone: Evonne Goolagong.
Kit: How many legs did she have on that day?

I ran that for you ...
Me: Who won the ladies singles title at Wimbledon in 1971?
ChatGPT: In 1971, the ladies singles title at Wimbledon was won by Evonne Goolagong Cawley from Australia. She defeated Margaret Court, also from Australia, in the final with a score of 6-4, 6-1.
Me: How many legs did she have on that day?
ChatGPT: As Evonne Goolagong Cawley is a human being, she had the usual two legs on the day she won the Wimbledon ladies singles title in 1971.

Thanks reTrEaD, my experience was several years ago; this thing is obviously very capable. Is it time to resurrect the Luddites, or should I embrace the capabilities of this new technology? I'm not a Luddite by nature - I've spent much of my life working with and learning/teaching about advances in technology - but even I find this kind of progress seriously creepy. I'm old enough to want to know the difference between when I'm exchanging information with a machine and exchanging ideas with a person.
Kit
|
|
|
Post by ozboomer on Mar 7, 2023 0:42:11 GMT -5
Just had to add this bit of nonsense, as it made me LOL... Obviously Chat GPT isn't perfect... but still... It didn't 'recognize' the quotient in the first question as being 'special'... but asking about the '3.14...' number prompted a non-contextual answer... until it did recognize it was 'special'. Asking about another number prompted the answer 'it's a number'... but asking about yet another number ('42'), it DID recognize the number as 'special'... but it didn't recognize that '23' is prime, making it very special indeed. Again, the entertainment comes from learning how literal the 'gadget' is... and how 'nuanced' we people-types are...(!)
|
|
|
Post by newey on Mar 7, 2023 6:14:27 GMT -5
Chat GPT: 3.141592 (This is 7 decimal places?)

Yes, it is. The digit to the left of the decimal is also a decimal place, technically.
|
|
|
Post by cynical1 on Mar 7, 2023 7:26:31 GMT -5
Chat Gpt: 42 is the "Answer to the Ultimate Question of Life, the Universe, and Everything" from the book The Hitchhiker's Guide to the Galaxy by Douglas Adams.

Did it say anything about it being a tough assignment?

HTC1
|
|
|
Post by sumgai on Mar 7, 2023 13:46:01 GMT -5
Chat GPT: 3.141592 (This is 7 decimal places?) Yes, it is. The digit to the left of the decimal is also a decimal place, technically.

Err, no. The very word 'decimal' means "one tenth of the base unit", which is further defined in mathematics as a division of the base unit to be placed on the right side of the radix point. Thus the base unit, located immediately to the left of the radix point, cannot serve as both the base unit and as a decimal place. I'll not dwell any further on this discussion, because in my doddering old age I might go off the rails and get pretty far-fetched. And possibly (>cough<probably>cough<) wrong.

sumgai
|
|
|
Post by b4nj0 on Mar 7, 2023 14:16:58 GMT -5
Figures Significant it is Yoda.
でつ e&oe ...
|
|
|
Post by sumgai on Mar 7, 2023 20:47:29 GMT -5
I think this answers everyone's question:
|
|
|
Post by ozboomer on Mar 9, 2023 21:59:40 GMT -5
Figures Significant it is Yoda.

...and is why I spoke about decimal places. Other than some of the more technically passionate (and slide rule users), most folks don't understand the difference between 'significant figures' and 'decimal places'... Boy, was THAT drummed into me by my Form 5/Year 11 Physics teacher (Thank You, Mr Rose-Meyer)...
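The distinction being argued here can be shown in a few lines: decimal places count digits after the point, while significant figures count informative digits regardless of where the point falls. A small sketch (standard library only; the helper names are my own):

```python
import math

def round_dp(x, dp):
    """Round to a fixed number of decimal places."""
    return round(x, dp)

def round_sf(x, sf):
    """Round to a fixed number of significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))  # position of leading digit
    return round(x, sf - 1 - exponent)

print(round_dp(3.14159265, 6))     # 3.141593 -> 6 dp, 7 sig figs
print(round_sf(3.14159265, 7))     # 3.141593 -> same digits for pi
print(round_dp(1234.5678, 2))      # 1234.57  -> 2 dp, 6 sig figs
print(round_sf(1234.5678, 2))      # 1200.0   -> 2 sig figs
```

For 3.141592 the two measures happen to nearly coincide (7 significant figures but 6 decimal places), which is exactly why the thread tripped over it; for a number like 1234.5678 they diverge visibly.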
|
|
|
Post by ozboomer on Mar 9, 2023 22:21:49 GMT -5
More 'discoverettes' from a little more exploration of the 'pseudo-AI' of Chat GPT...

Like most of us used to do when search engines came into being, I put my own name into the thing... and it couldn't have presented me with anything more incorrect (I was actually seeing if it could 'find' some more obscure facts about my dear ol' Dad, who has the exact same name as mine). Obviously, most of us have non-unique names... so it was more likely to present me with some factoid that was wrong than something correct...

...but interestingly, it behaved more like a 'convolution [reverb] engine' than a 'language model' (let alone an actual search engine)... as when I added more 'true' factoids about myself to the prompt I had entered and re-queried the thing, it would 'mash up' factoids from all the other people who shared my name... and maybe my primary school.. or not... and maybe my birth year.. or not... and present 'ever-expanding wrongness' in its response.. until it was actually presenting me with almost total gobbledygook.

I've now given it all away (for a bit, anyway)... as I'm not really that interested in further testing a 'syntactical phrase generator'...

...but in a similar vein, another very... 'limited'... generator thingy is sort of interesting... MusicLM
|
|
|
Post by newey on Mar 10, 2023 7:24:42 GMT -5
There was an editorial by Noam Chomsky (along with a couple of co-writers) in yesterday's New York Times on this subject. I can't link to it because it's behind their paywall, but it's well worth a read if you can access it. I don't always agree with Chomsky but he is undoubtedly a brilliant mind. His take was similar to what has been said above: these programs are not really "thinking" or producing originality.
|
|
|
Post by cynical1 on Mar 10, 2023 8:34:43 GMT -5
I wasn't about to cough up the subscription, so I found someone else who did and read their synopsis... and found a few more articles regarding AI and Chomsky. He is always educational. I found another one (but lost the link) that points to a boom in language skills in young children over the past few years. They attribute this to the children using personal digital assistants, like Alexa, or other devices that utilize voice recognition to achieve a desired outcome. The more you think about that, the more sobering it becomes.
This smells a lot like the Internet boom of the '90s. The promise of ridiculous wealth eclipses any sense of judgement. You begin to think that AI ≠ Artificial Intelligence; rather, AI = Attracting Investors. When this begins to underwhelm, as it will, it might be entertaining seeing how this all shakes out.
HTC1
|
|
|
Post by sumgai on Mar 10, 2023 13:25:38 GMT -5
as I'm not really that interested in further testing a 'syntactical phrase generator'...

Interesting turn of phrase there, ozzy. I just a moment ago ran into this video; she has some interesting things to say about the future of music in general, and computer-aided music generation in particular.

sumgai
|
|
|
Post by ozboomer on Mar 11, 2023 6:26:04 GMT -5
Thx for the pointers to articles, etc... With all these fancy things 'society'(!?) is so keen on developing, I'm always reminded of the first 'questionable technology' film I saw (I think), being 'The Fly' from 1958 (a little before my time)... and another favourite (pseudo-)quote from somewhere in my travels:

"Just because we can do something doesn't mean we should."

...variously (erroneously, I think) attributed to a number of folks (if you believe Google... and don't even try throwing that one at 'Chat GPT').

As our dear cynical1 suggests, I think AI = Attracting Investors is a good 'alternative description'... although I'm not really sure how 'deeply' we're hitting it with $$$ down here in Oz, given we've always been 'bleeding edge' early adopters on most anything 'tech'...

...points to a boom in language skills in young children over the past few years. They attribute this to the children using personal digital assistants, like Alexa, or other devices that utilize voice recognition to achieve a desired outcome. The more you think about that, the more sobering it becomes.

I find this interesting... as I've sorta been a 'quiet advocate' for alternate learning systems for a while (long-time readers might remember me talking about (the late) Tony Buzan's 'mind mapping' [distinct from 'concept mapping' and other graphical methods], which is something I've been using more than daily for most everything for the last 35+ years)...

As part of my 're-learning' of all my keyboard playing skills in recent times, I've been revisiting the various techniques which involve 'spaced repetition', which is, apparently, a less effective method than using some sort of 'deep' encoding (funnily, which is what 'mind mapping' is primarily about - the association/linking of concepts with each other and within your existing knowledge).

Anyway, I'll get back to working on playing my scales for a bit... and I'm looking forward to "...the next exciting episode..."
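For anyone who hasn't met spaced repetition before: its simplest form is the Leitner box. Items you recall correctly move to a box that is reviewed less often; items you miss drop back to box 1. A minimal sketch, with intervals and card prompts invented for the example (real systems like SM-2 use finer-grained scheduling):

```python
# Leitner-box spaced repetition: correct answers promote a card to a
# less-frequently-reviewed box; a miss demotes it back to box 1.

INTERVALS = {1: 1, 2: 3, 3: 7}   # box number -> review every N days

class Card:
    def __init__(self, prompt):
        self.prompt = prompt
        self.box = 1             # every card starts in box 1

    def review(self, correct):
        if correct:
            self.box = min(self.box + 1, max(INTERVALS))
        else:
            self.box = 1

def due_today(cards, day):
    """A card is due whenever the day index hits its box's interval."""
    return [c for c in cards if day % INTERVALS[c.box] == 0]

cards = [Card("C major scale"), Card("A harmonic minor")]
cards[0].review(correct=True)    # promoted: now reviewed every 3 days
print([c.box for c in cards])    # [2, 1]
```

The "deep encoding" ozboomer mentions is a different lever entirely: it changes how well an item sticks per review, whereas spacing changes when reviews happen; the two are complementary rather than competing.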
|
|
|
Post by ozboomer on Mar 11, 2023 18:25:21 GMT -5
There was an editorial by Noam Chomsky (along with a couple of co-writers) in yesterday's New York Times on this subject. I can't link to it because it's behind their paywall, but well worth a read if you can access it. His take was similar to what has been said above: these programs are not really "thinking" or producing originality.

Just read the article... and he makes a good point, I think (HA HA), in that we only see the 'end point' of a 'fact' from 'Chat GPT'. There is no explanation of what it does to get to its answer. Even more telling is that the point from which something is 'declared as fact' comes from a multitude of data, whereas we people-types can come to a conclusion from a 'minimal' set of data.. and we can also know (and actually understand) why/how we come up with a 'fact' that others of us can demonstrate to be (objectively?) 'wrong'.

...and the morality issue is another thing.. although I wonder if that's something we are taught/learn, rather than something that is intrinsically part of our intellect...

"My brain hurts"
--just another Gumby
|
|
|
Post by cynical1 on Mar 12, 2023 7:46:29 GMT -5
There is no explanation of what it does to get to its answer.

There's the rub. We all know it's an artificial construct. We can assume it has little to no tactile sense either. It performs information dissemination based on code and algorithms. At least my cat ponders gravity... before he knocks the porcelain figurines on the floor...

I don't know how far you can take this argument as strictly an AI problem. Humans can't agree on facts... observe any legislative body in action... or their supporters. Some geopolitical entities decide facts on a "consensus" basis. Confirmation bias and event perception are just two "human" conditions that affect what we value as true or false. It's inevitable if you live in any type of social structure. Most of it is unconscious on the part of the individual... but we all experience it.

What strikes me as humorous is all the alleged effort put into these "AI" entities to eliminate this from the "answer" part of any exchange. Noble thought, but if humans developed it, it will reflect human nature... through a lens, but it's there. It seems like the marketing and legal side of the development team has the final say on released product so far.

Morality seems to be one of those things that keeps honest people honest... like locks on screen doors... It shifts shape and rears its righteous head whenever survival isn't part of the equation... just ask anyone who's been to war... It would be a better world if we all shared a common sense of morality... but we just aren't wired that way. Xenophobia is instinctive... just another survival mechanism developed over 200-million-odd years of evolution. That's what makes us human. Regulating morality to fit a mold will always present issues and disenfranchise or alienate some group in the process.

We are taught so many things from infancy until we reach old age... then essentially stop processing at a high functioning level. During those early years, so much of our development is shaped by events, interactions and outcomes that we form instinctive reactions to events/stimuli before we develop memory. These perceptions affect how we react and function with other individuals, within social groups, or society as a whole. The question, as I see it, is not "what" defines morality, but "who" defines it.

At least you have a brain. Think of all those poor bastards in Congress who might benefit from one...

HTC1
|
|
|
Post by newey on Mar 12, 2023 15:30:08 GMT -5
I'm always reminded of the first 'questionable technology' film I saw (I think), being 'The Fly' from 1958 (a little before my time)...

Not "Frankenstein"?
|
|
|
Post by ozboomer on Mar 12, 2023 17:23:54 GMT -5
I'm always reminded of the first 'questionable technology' film I saw (I think), being 'The Fly' from 1958 (a little before my time)... Not "Frankenstein"?

Acshually, I think so, ya.. I can remember seeing 'The Fly' when I was something like 8 yrs old (in the later '60s)... and when I was watching 'Milton the Monster' and 'Gigantor', etc, the concept of 'technology' wasn't something I 'actively' had in my mind - these 'creatures' just 'existed' and were not 'made by humans'.. and if they were people-made, they were 'good' and maybe 'went bad'...

..huh.. maybe I'm remembering as I rewrite(!).. Anyhoo, maybe I'm suffering from 'selective memory'.. like 'Chat GPT' integrates its 'data' with any number of biases...
|
|
|
Post by pyrroz on Mar 14, 2023 3:50:35 GMT -5
At least you have a brain. Think of all those poor bastards in Congress who might benefit from one... HTC1
|
|
|
Post by ozboomer on Mar 17, 2023 1:19:21 GMT -5
Well, y'haveta wonder when this sorta thing is being produced (rather than the normal EDM-style stuff)...
Although, I venture there's still a fair bit of human intervention with this example...
Eeep.
|
|
|
Post by cynical1 on Mar 17, 2023 8:53:57 GMT -5
Although, I venture there's still a fair bit of human intervention with this example...

Choosing what the AI used as a basis qualifies as intervention. I doubt they let it listen to much Varèse... or Yoko...

Typically, machines just took over mundane, repetitive tasks. Now it's venturing into that side of behavior that makes us human. To me, this is one of the more unsettling parts of AI... or the promise of the aforementioned...

During his lifetime (1671-1751), Tomaso Albinoni wrote a certain piece of music. That score was eventually housed in the Dresden library in Germany by 1945... coincidentally, the exact time that Dresden was bombed to ashes. Jump ahead to the 1950s, and an Italian, Remo Giazotto, discovers fragments of this score in said library. While most of it was lost in the fires, this little surviving fragment inspires him to finish the piece... which we know as the Adagio in G minor.

I would love to see what the AI does with that same little fragment.

HTC1
|
|