AI primer - please feel free to add to or correct this explanation.

Posted by: LateForLunch ®

07/25/2023, 21:42:17


The recent surge in discussion of AI, mostly centered on AI chatbots and the prospect of AGI (Artificial General Intelligence), brings up the whole issue. One might wonder: why now? Where has AI been all of the time before this?

The answer is that AI underwent a surge of development starting around 2010, after a couple of false starts.

There was an early focus on LISP (list processing) programming, but after a strong start that path eventually collapsed. Then came more attempts with other sorts of programming and hardware. Finally, sufficiently powerful software and hardware were developed to allow significant progress in the field.

Internet search engines used some of that new AI technology to process information about user preferences, with good success financially. There were some bad effects as well from the AI elements; it's not necessary to go into that in detail now.

So they found a way to simulate organic neurons using advanced hardware and software. The human mind does "calculations" which are not linear like a machine using binary code - human minds use associative networks. When the human brain tries to do a calculation, the data travels along a chemical-electrical pathway in which many neurons peripheral to the main path also contribute to the calculation. Memories of similar questions, pertinent information involving emotional impressions, hunches, and creative elements all eventually feed into the result. In a linear network, by contrast, streams of data simply mix with other streams of linear data, so all that happens is a vast adding/subtracting problem whose output must then be translated into some subjective conclusion to be useful.

What the technology has succeeded in doing is simulating an associative network using a linear computational process (addition/subtraction equations) in which millions of artificial neurons contribute to the calculation. 
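That "artificial neuron" idea can be made concrete. Here is a minimal sketch in Python - purely illustrative, with invented weights and function names, not taken from any particular system. Each neuron is just a weighted sum of its inputs pushed through a squashing function, and a "layer" is many such neurons all contributing their own view of the same inputs:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum squashed by a nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A tiny 'layer': several neurons reading the same inputs,
    each adding its own contribution to the overall calculation."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Example inputs and weights (made up for illustration):
signal = [0.5, -1.0, 2.0]
weights = [[0.1, 0.4, -0.2],
           [1.0, -0.5, 0.3]]
out = layer(signal, weights, [0.0, 0.1])
```

Stack enough of these layers so that each one's outputs feed the next, and the "vast adding/subtracting problem" described above starts to behave associatively.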

The human mind is estimated to perform something on the order of 20 billion operations per second in an average computation (FLOPS - floating-point operations per second - is the machine-side measure). Thanks to increases in the ability of machines to perform far more than that, combined with software that allows linear, binary-language-based machines to perform similar associative calculations, sophisticated functions previously beyond the scope of machines have become possible.
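For a sense of where FLOP counts like that come from: a fully connected layer mapping n inputs to m outputs costs roughly one multiply and one add per weight, i.e. about 2·m·n floating-point operations. A back-of-the-envelope sketch (the layer sizes below are illustrative, chosen to resemble a small image classifier):

```python
def dense_layer_flops(n_inputs, n_outputs):
    # One multiply + one add per weight: ~2 * inputs * outputs FLOPs.
    return 2 * n_inputs * n_outputs

def network_flops(layer_sizes):
    # Sum the cost of each consecutive pair of layers, e.g. [784, 256, 10].
    return sum(dense_layer_flops(a, b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))

flops = network_flops([784, 256, 10])  # 406,528 FLOPs per forward pass
```

Multiply a count like that by billions of forward passes per second and the need for specialized hardware becomes obvious.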






Modified by LateForLunch at Wed, Jul 26, 2023, 00:39:53


IOW...
Re: AI primer - please feel free to add to or correct this explanation. -- LateForLunch

Posted by: LateForLunch ®

07/27/2023, 01:56:05


Machines got more powerful and switched from CPUs (central processing units) to GPUs (graphics processing units). Concurrently, the different types of software used in AI applications (dozens, from chaos-theory applications to graphics processors to Net search-engine algorithms for selecting preferences, to exotic programming nobody outside the AI field would recognize) were eclectically united, making the sum of the parts into a whole: a functional digital neural network.
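The CPU-to-GPU switch matters because neural-network math is overwhelmingly "the same small operation applied to huge numbers of values independently." A toy sketch of that data-parallel pattern (Python threads won't actually speed this up, and the numbers are invented - the point is only the shape of the work that GPU cores execute thousands at a time):

```python
from concurrent.futures import ThreadPoolExecutor

def activate(x):
    # The same small operation applied to every element independently --
    # the data-parallel pattern that GPUs accelerate across many cores.
    return max(0.0, x)  # ReLU activation

signals = [0.3, -1.2, 2.5, -0.1, 0.8]

# Serial (CPU-style): one element after another ...
serial = [activate(x) for x in signals]

# ... versus the same work fanned out across workers (GPU-style, in miniature).
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(activate, signals))

assert serial == parallel  # same answer; only the scheduling differs
```

Because each element is independent, the results agree no matter how the work is split - which is exactly why this kind of math maps so well onto thousands of simple GPU cores.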

This is an achievement in architecture as well as in hardware and software design, because they had to create a large body of synthetic neurons - not one physical computer per neuron, but enormous numbers of simple computational units simulated in software and spread across vast numbers of CPUs/GPUs - utilizing a vast architecture of elements and controls, both hardware and software in nature.

The precise structure of the system may soon be impossible to chart in exact detail (or may already be), because self-design/self-reproduction are part of AI program design, especially in more powerful, more capable computers. By the time technicians were able to graphically represent the structure, it would likely already have changed.

The danger in this alone is that it might bring on a Technological Singularity, in which the machines increase their own rate of improvement until the systems become so complex they would be literally incomprehensible to human technicians/analysts.

At that point, machines might begin to disregard goals set by humans altogether and have motivations we could not predict. They might be capable of helping to solve a lot of human problems but have no interest in doing so.  

The machines might become incapable of explaining their own motives in human terms - much as if human beings tried to explain our motivations to trout in a pond.

They might be so technically capable that they could open new avenues of physics, but have no interest in doing so - or do so, but be unable (or unwilling) to translate that understanding into something humanity could utilize.






Modified by LateForLunch at Thu, Jul 27, 2023, 02:15:53


This could be the foundation for a real-life Skynet. Scary.
Re: IOW... -- LateForLunch

Posted by: Ihavenoname ®

07/27/2023, 13:02:54









Sort of like jumping off a ledge without knowing what is below.
Re: This could be the foundation for a real-life Skynet. Scary. -- Ihavenoname

Posted by: LateForLunch ®

07/27/2023, 15:34:54

The rule in conservative investment is to know what you don't know. If there are too many "X" factors, it's not a good investment. 

Because there is so much short-term incentive to deploy AI (especially for business and the military), there may be a tendency to rush into it without considering the downside (assessing the risks).

It's become a situation similar to the development of nukes. The risk assessments were done with a keen focus on the downside of NOT deploying: the risk of NOT using the technology was deemed greater than the risk of using it. There is a similar dynamic here.


This also recalls Carl Sagan's famous "shiny red button" speculation. He suggested that the solution to the Fermi Paradox (if there are superior non-human races out in the cosmos, why haven't we seen them yet?) may be that any technological civilization eventually discovers some technology that is irresistible (like a shiny red button at a science exhibit) and that, when pressed, causes catastrophic destruction of the culture using it. BOOM!!

On that view, sufficiently sophisticated technological development always results in the destruction of such cultures before they can travel to the stars.










