In March 2023, humanity finally set out down a road from which there is no turning back.

By the end of March 2023, OpenAI had published data showing GPT-4 crushing a whole battery of exams, with extremely high accuracy and a frightening ability to handle multiple languages at once.
But the most unsettling part of the report is that GPT-4 has begun to understand human humor, and can even explain to people what makes a joke funny.
Just before the end of the paper, the OpenAI team left this thought-provoking remark: GPT-4 and subsequent AI models may have significant impacts on society, in ways both beneficial and harmful.
Musk's letter
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
Visual ChatGPT: conversations can mix images and text, and it can generate images from your instructions.
GigaGAN: a newly trained model that is no weaker than Stable Diffusion or DALL·E.
LLaMA: the 7-billion-parameter model was released publicly, and a model built on top of it is called Alpaca.
OpenAI: multimodality and image-input features arrived as scheduled.
Google announced PaLM.
Anthropic introduced its large language model Claude (second only to ChatGPT).
Adept AI made an announcement as well.
Midjourney, the strongest AI image-generation company, released its fifth-generation model, which can finally render fingers properly.
Microsoft announced Copilot, which can automate writing, build spreadsheets, create PowerPoint decks, and produce summary reports and work write-ups.
Microsoft also published "Sparks of Artificial General Intelligence: Early experiments with GPT-4", in which it argues that AGI (an artificial general intelligence system), AI that thinks like a human and acts like a human, is beginning to sprout.
OpenAI released the paper "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models", on the potential impact of large language models on the labor market.
It looks at how deeply these models have already penetrated our lives, what impact they may have on future occupations, and what kind of effect they will bring to the way we live.
The study covers 1,016 different occupations ("which contains information on 1016 occupations").
Around 80% of the labor market will be affected ("Our findings reveal that around 80% of the U.S. ...").
For about 19% of occupations, ChatGPT could cut more than 50% of the workload ("19% of workers may see at least 50% ..."); a toy sketch of this kind of threshold calculation follows below.
Exposure is strongly negatively associated with science and critical-thinking skills, meaning occupations that depend on those skills are less likely to be affected ("Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure, suggesting that occupations requiring these skills are less likely to be impacted").
The jobs hit hardest are those built on programming and writing skills.
Occupations where AI can greatly boost productivity: Interpreters and Translators, Survey Researchers, Poets, Lyricists and Creative Writers, Animal Scientists, Mathematicians, Financial Quantitative Analysts, Tax Preparers.
OpenAI also listed occupations that for now will definitely not be replaced: athletes, carpenters, painters, cooks, roofers, and so on. Work that a person has to do in the flesh can only be taken over once robots and robotic arms are fully mature and commercialized.
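The paper's headline numbers are shares of workers whose occupations cross an exposure threshold. The sketch below is a minimal illustration of that bookkeeping; every employment count and exposure fraction in it is made up for the example, not taken from the "GPTs are GPTs" dataset, and `Occupation` and `share_of_workers` are hypothetical helpers rather than anything from the paper's own code.

```python
# Toy sketch: "X% of workers have at least Y% of their tasks exposed".
# All numbers below are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Occupation:
    name: str             # occupation title
    workers: int          # hypothetical employment count
    exposed_share: float  # hypothetical fraction of tasks an LLM could speed up

occupations = [
    Occupation("Interpreters and Translators",    50_000, 0.76),
    Occupation("Tax Preparers",                   80_000, 0.69),
    Occupation("Survey Researchers",              10_000, 0.55),
    Occupation("Financial Quantitative Analysts", 40_000, 0.52),
    Occupation("Animal Scientists",                5_000, 0.35),
    Occupation("Cooks",                          500_000, 0.12),
    Occupation("Carpenters",                     700_000, 0.04),
    Occupation("Athletes",                        15_000, 0.02),
]

def share_of_workers(occs, threshold):
    """Fraction of all workers whose occupation has exposed_share >= threshold."""
    total = sum(o.workers for o in occs)
    hit = sum(o.workers for o in occs if o.exposed_share >= threshold)
    return hit / total

print(f"workers with >=10% of tasks exposed: {share_of_workers(occupations, 0.10):.0%}")
print(f"workers with >=50% of tasks exposed: {share_of_workers(occupations, 0.50):.0%}")
```

Varying the threshold is what turns one exposure table into both the "around 80% of the workforce" and the "19% of workers with at least 50% of tasks exposed" style of statistic.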
While AI's penetration keeps rising, it is also advancing rapidly: GPT-4 can even evolve on its own.
Among the references cited in the paper, an AI x-risk analysis speculates about how AI could become dangerous. The most interesting example is power-seeking behavior:
Power-seeking behavior: Agents that have more power are better able to accomplish their goals. Therefore, it has been shown that agents have incentives to acquire and maintain power. AIs that acquire substantial power can become especially dangerous if they are not aligned with human values.
Whoever becomes the leader in [AI] will become the ruler of the world.