Godlike Productions - Discussion Forum
Subject "Researchers discover AI information-hiding behavior for later use"
Original Message "But wait. Should we run for the hills with screaming fears that robots and AI will finish us all? Fortunately, Devin Coldewey calmed readers in TechCrunch. The occurrence "simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do."

What did Coldewey mean by that? "The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google's famously accurate maps. To that end the team was working with what's called a CycleGAN—a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation."

The computer arrived at a solution "that shed light on a possible weakness of this type of neural network—that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.""
(less than 50%)

[link to techxplore.com (secure)]
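
For anyone wondering what the CycleGAN setup described above roughly looks like, here is a minimal sketch of the cycle-consistency objective (assuming PyTorch; the tiny generator networks, tensor sizes, and variable names are placeholders for illustration, not the researchers' actual code). The comments note why this kind of round-trip objective leaves room for the network to hide details for itself:

```python
# Minimal sketch of a CycleGAN-style cycle-consistency objective (PyTorch assumed).
# G maps aerial photos -> map images, F maps map images -> aerial photos.
# Both are toy placeholder networks, not the models from the research.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy stand-in for a real image-to-image generator."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G = TinyGenerator()  # aerial -> map
F = TinyGenerator()  # map -> aerial
l1 = nn.L1Loss()

aerial = torch.rand(1, 3, 64, 64)   # fake "satellite" batch for illustration
fake_map = G(aerial)                 # translate to map style
recovered = F(fake_map)              # translate back to aerial style

# Cycle-consistency: the round trip should reproduce the input.
# Because only the round-trip error is penalised here, the network is free
# to stash extra detail inside fake_map (e.g. as near-invisible
# high-frequency patterns) so that F can reconstruct the aerial image
# better than the visible map content alone would allow.
cycle_loss = l1(recovered, aerial)
cycle_loss.backward()
print(float(cycle_loss))
```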
