
I never thought I would see Time publish an article advocating that "allied nuclear countries [should be] willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs", written by a man who didn't go to high school.

Tom

On Sat, Apr 01, 2023 at 01:51:54PM +0200, MigMit wrote:
Or this: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
(to which somebody said "I work in a datacenter. So far no AI threatened violence on me; Yudkowsky just did")
On 1 Apr 2023, at 13:36, Aloïs Cochard wrote:

I don't know, personally I could not stop laughing at the answer from version 3.5... it's so freaking stupid!
But if like Dominik you are scared after reading that part you might be interested to look into: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
On Sat, 1 Apr 2023 at 11:44, Dominik Schrempf wrote:

Well, it seems like GPT4 gets most of the questions right. This was more of a scary read than a funny one to me!

Dominik
On April 1, 2023 8:18:08 AM GMT+02:00, "Aloïs Cochard" wrote:

How can this be useful when you have to review everything it is doing anyway, as it might just randomly insert a bug or a security flaw??? I prefer to read poems by my human friends. I highly recommend starting to read this paper at page 128 instead of wasting your time on that prompt: https://arxiv.org/pdf/2303.12712.pdf
Be ready for a good laugh