What is ChatGPT And How Can You Use It? (English-to-Chinese Translation)

2023-02-28 11:04 · Author: winnie朱小二

This is what ChatGPT is and why it may be the most important tool since modern search engines.

OpenAI introduced a long-form question-answering AI called ChatGPT that answers complex questions conversationally.

It’s a revolutionary technology because it’s trained to learn what humans mean when they ask a question.

Many users are awed at its ability to provide human-quality responses, inspiring the feeling that it may eventually have the power to disrupt how humans interact with computers and change how information is retrieved.

What Is ChatGPT?

ChatGPT is a large language model chatbot developed by OpenAI based on GPT-3.5.

It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human.

Large language models perform the task of predicting the next word in a series of words.

Reinforcement Learning with Human Feedback (RLHF) is an additional layer of training that uses human feedback to help ChatGPT learn the ability to follow directions and generate responses that are satisfactory to humans.

Who Built ChatGPT?

ChatGPT was created by San Francisco-based artificial intelligence company OpenAI.

OpenAI Inc. is the non-profit parent company of the for-profit OpenAI LP.

OpenAI is famous for its well-known DALL·E, a deep-learning model that generates images from text instructions called prompts.

The CEO is Sam Altman, who previously was president of Y Combinator. Microsoft is a partner and investor in the amount of $1 billion.

They jointly developed the Azure AI Platform.

Large Language Models

ChatGPT is a large language model (LLM).

Large Language Models (LLMs) are trained with massive amounts of data to accurately predict what word comes next in a sentence.

It was discovered that increasing the amount of data increased the ability of the language models to do more.

According to Stanford University:

“GPT-3 has 175 billion parameters and was trained on 570 gigabytes of text.

For comparison, its predecessor, GPT-2, was over 100 times smaller at 1.5 billion parameters.

This increase in scale drastically changes the behavior of the model — GPT-3 is able to perform tasks it was not explicitly trained on, like translating sentences from English to French, with few to no training examples.

This behavior was mostly absent in GPT-2.

Furthermore, for some tasks, GPT-3 outperforms models that were explicitly trained to solve those tasks, although in other tasks it falls short.”

LLMs predict the next word in a series of words in a sentence and the next sentences – kind of like autocomplete, but at a mind-bending scale.
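
As a toy illustration of that next-word idea, a small lookup table of invented continuation probabilities is enough to show the mechanics. A real LLM is not a table: it computes these probabilities with a neural network over a vocabulary of tens of thousands of tokens, so treat this purely as a sketch of the prediction loop.

```python
# Toy next-word predictor: a hand-made probability table stands in for an LLM.
# The phrases and probabilities below are invented for illustration only.
probs = {
    "the cat": {"sat": 0.6, "ran": 0.3, "is": 0.1},
    "cat sat": {"on": 0.8, "down": 0.2},
    "sat on": {"the": 0.9, "a": 0.1},
}

def next_word(context: str) -> str:
    """Greedily pick the most probable continuation of the last two words."""
    options = probs[context]
    return max(options, key=options.get)

def generate(start: str, steps: int) -> str:
    """Repeat the next-word prediction to grow a sentence, autocomplete-style."""
    words = start.split()
    for _ in range(steps):
        context = " ".join(words[-2:])
        if context not in probs:
            break
        words.append(next_word(context))
    return " ".join(words)

print(generate("the cat", 3))  # the cat sat on the
```

Chaining one-word predictions this way is exactly how a model ends up producing whole sentences and paragraphs, one token at a time.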

This ability allows them to write paragraphs and entire pages of content.

But LLMs are limited in that they don’t always understand exactly what a human wants.

And that’s where ChatGPT improves on state of the art, with the aforementioned Reinforcement Learning with Human Feedback (RLHF) training.

How Was ChatGPT Trained?

GPT-3.5 was trained on massive amounts of data about code and information from the internet, including sources like Reddit discussions, to help ChatGPT learn dialogue and attain a human style of responding.

ChatGPT was also trained using human feedback (a technique called Reinforcement Learning with Human Feedback) so that the AI learned what humans expected when they asked a question.

Training the LLM this way is revolutionary because it goes beyond simply training the LLM to predict the next word.

A March 2022 research paper titled Training Language Models to Follow Instructions with Human Feedback explains why this is a breakthrough approach:

“This work is motivated by our aim to increase the positive impact of large language models by training them to do what a given set of humans want them to do.

By default, language models optimize the next word prediction objective, which is only a proxy for what we want these models to do.

Our results indicate that our techniques hold promise for making language models more helpful, truthful, and harmless.

Making language models bigger does not inherently make them better at following a user’s intent.

For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user.

In other words, these models are not aligned with their users.”
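
The “next word prediction objective” the paper calls a proxy is, in standard training setups, a cross-entropy loss: the model assigns a probability to every candidate next token, and the loss is the negative log-probability of the token that actually came next. A minimal sketch with made-up logits and a three-word toy vocabulary:

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_loss(logits, target_index):
    """Cross-entropy for one next-token prediction: -log p(correct token)."""
    p = softmax(logits)[target_index]
    return -math.log(p)

vocab = ["cat", "sat", "on"]
logits = [2.0, 0.5, -1.0]               # toy scores: model strongly favors "cat"
loss_good = next_token_loss(logits, 0)  # true next word is "cat": small loss
loss_bad = next_token_loss(logits, 2)   # true next word is "on": large loss
print(loss_good < loss_bad)  # True
```

Minimizing this loss only makes the model good at imitating its training text, which is precisely why the paper treats it as a stand-in for, rather than a statement of, what users actually want.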

The engineers who built ChatGPT hired contractors (called labelers) to rate the outputs of the two systems, GPT-3 and the new InstructGPT (a “sibling model” of ChatGPT).

Based on the ratings, the researchers came to the following conclusions:

“Labelers significantly prefer InstructGPT outputs over outputs from GPT-3.

InstructGPT models show improvements in truthfulness over GPT-3.

InstructGPT shows small improvements in toxicity over GPT-3, but not bias.”

The research paper concludes that the results for InstructGPT were positive.

Still, it also noted that there was room for improvement.

“Overall, our results indicate that fine-tuning large language models using human preferences significantly improves their behavior on a wide range of tasks, though much work remains to be done to improve their safety and reliability.”

What sets ChatGPT apart from a simple chatbot is that it was specifically trained to understand the human intent in a question and provide helpful, truthful, and harmless answers.

Because of that training, ChatGPT may challenge certain questions and discard parts of the question that don’t make sense.

Another research paper related to ChatGPT shows how they trained the AI to predict what humans preferred.

The researchers noticed that the metrics used to rate the outputs of natural language processing AI resulted in machines that scored well on the metrics, but didn’t align with what humans expected.

The following is how the researchers explained the problem:

“Many machine learning applications optimize simple metrics which are only rough proxies for what the designer intends.

This can lead to problems, such as YouTube recommendations promoting click-bait.”

So the solution they designed was to create an AI that could output answers optimized to what humans preferred.

To do that, they trained the AI using datasets of human comparisons between different answers so that the machine became better at predicting what humans judged to be satisfactory answers.

The paper shares that training was done by summarizing Reddit posts and also tested on summarizing news.

The research paper from February 2022 is called Learning to Summarize from Human Feedback.

The researchers write:

“In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences.

We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning.”
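
The “model to predict the human-preferred summary” described above is commonly trained with a pairwise loss: given two candidate answers and a human’s verdict on which is better, push the model’s score for the preferred one above the score for the rejected one. A minimal sketch, with made-up scores standing in for a neural network’s outputs, assuming the standard Bradley-Terry-style formulation used in this line of RLHF work:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """-log P(chosen beats rejected) under a Bradley-Terry comparison model."""
    return -math.log(sigmoid(score_chosen - score_rejected))

# When the reward model already ranks the pair the way the human did,
# the loss is small; when it ranks them the wrong way round, it is large.
good = preference_loss(score_chosen=2.0, score_rejected=-1.0)
bad = preference_loss(score_chosen=-1.0, score_rejected=2.0)
print(good < bad)  # True
```

The reward model learned this way is then what the reinforcement-learning step maximizes when fine-tuning the summarization policy.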

What Are the Limitations of ChatGPT?

Limitations on Toxic Response

ChatGPT is specifically programmed not to provide toxic or harmful responses.

So it will avoid answering those kinds of questions.

Quality of Answers Depends on Quality of Directions

An important limitation of ChatGPT is that the quality of the output depends on the quality of the input.

In other words, expert directions (prompts) generate better answers.

Answers Are Not Always Correct

Another limitation is that because it is trained to provide answers that feel right to humans, the answers can trick humans that the output is correct.

Many users discovered that ChatGPT can provide incorrect answers, including some that are wildly incorrect.

The moderators at the coding Q&A website Stack Overflow may have discovered an unintended consequence of answers that feel right to humans.

Stack Overflow was flooded with user responses generated from ChatGPT that appeared to be correct, but a great many were wrong answers.

The thousands of answers overwhelmed the volunteer moderator team, prompting the administrators to enact a ban against any users who post answers generated from ChatGPT.

The flood of ChatGPT answers resulted in a post entitled: Temporary policy: ChatGPT is banned:

“This is a temporary policy intended to slow down the influx of answers and other content created with ChatGPT.

…The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically “look like” they “might” be good…”

The experience of Stack Overflow moderators with wrong ChatGPT answers that look right is something that OpenAI, the makers of ChatGPT, are aware of and warned about in their announcement of the new technology.

OpenAI Explains Limitations of ChatGPT

The OpenAI announcement offered this caveat:

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.

Fixing this issue is challenging, as:

  1. during RL training, there’s currently no source of truth;

  2. training the model to be more cautious causes it to decline questions that it can answer correctly; and

  3. supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

Is ChatGPT Free To Use?

The use of ChatGPT is currently free during the “research preview” time.

The chatbot is currently open for users to try out and provide feedback on the responses so that the AI can become better at answering questions and to learn from its mistakes.

The official announcement states that OpenAI is eager to receive feedback about the mistakes:

“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.

We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now.

We’re eager to collect user feedback to aid our ongoing work to improve this system.”

There is currently a contest with a prize of $500 in ChatGPT credits to encourage the public to rate the responses.

“Users are encouraged to provide feedback on problematic model outputs through the UI, as well as on false positives/negatives from the external content filter which is also part of the interface.

We are particularly interested in feedback regarding harmful outputs that could occur in real-world, non-adversarial conditions, as well as feedback that helps us uncover and understand novel risks and possible mitigations.

You can choose to enter the ChatGPT Feedback Contest for a chance to win up to $500 in API credits.

Entries can be submitted via the feedback form that is linked in the ChatGPT interface.”

The currently ongoing contest ends at 11:59 p.m. PST on December 31, 2022.

Related: OpenAI May Introduce A Paid Pro Version Of ChatGPT

Will Language Models Replace Google Search?

Google itself has already created an AI chatbot that is called LaMDA.

The performance of Google’s chatbot was so close to a human conversation that a Google engineer claimed that LaMDA was sentient.

Given how these large language models can answer so many questions, is it far-fetched that a company like OpenAI, Google, or Microsoft would one day replace traditional search with an AI chatbot?

Some on Twitter are already declaring that ChatGPT will be the next Google.

ChatGPT is the new Google.

— Angela Yu (@yu_angela) December 5, 2022


The scenario that a question-and-answer chatbot may one day replace Google is frightening to those who make a living as search marketing professionals.

It has sparked discussions in online search marketing communities, like the popular Facebook SEOSignals Lab, where someone asked if searches might move away from search engines and towards chatbots.

Having tested ChatGPT, I have to agree that the fear of search being replaced with a chatbot is not unfounded.

The technology still has a long way to go, but it’s possible to envision a hybrid search and chatbot future for search.

But the current implementation of ChatGPT seems to be a tool that, at some point, will require the purchase of credits to use.

How Can ChatGPT Be Used?

ChatGPT can write code, poems, songs, and even short stories in the style of a specific author.

The expertise in following directions elevates ChatGPT from an information source to a tool that can be asked to accomplish a task.

This makes it useful for writing an essay on virtually any topic.

ChatGPT can function as a tool for generating outlines for articles or even entire novels.

It will provide a response for virtually any task that can be answered with written text.

Conclusion

As previously mentioned, ChatGPT is envisioned as a tool that the public will eventually have to pay to use.

Over a million users have registered to use ChatGPT within the first five days since it was opened to the public.

More resources:

  • ChatGPT Examples: 5 Ways SEOs and Digital Marketers Can Use ChatGPT

  • Microsoft Bing With ChatGPT Reportedly Launching In March

  • ChatGPT For Content and SEO?

  • The Future Of Chatbots: Use Cases & Opportunities You Need To Know

  • How Machine Learning in Search Works

