OpenAI Not Available in Your Region (openapi)
Hello everyone! Today the editors at 創(chuàng)意嶺 will walk you through the question of OpenAI not being available in your region. Below is our summary of the topic; let's take a look.
ChatGPT can be used online in China for free to generate original articles, proposals, copy, work plans, work reports, papers, code, essays, exercise answers, conversational Q&A, and more with one click.
Just enter a keyword and it returns the content you want; the more precise the input, the more detailed the output. Available as a WeChat mini program, an online web version, and a PC client.
Official site: https://ai.de1919.com
Contents of this article:
1. Can OpenAI be used as a web crawler?
Hi, yes, it can. Spinning Up is OpenAI's open-source introductory deep reinforcement learning material, and it lists 105 classic papers in the field of deep reinforcement learning; see Spinning Up:
The author used a Python crawler to automatically download all of the papers, and the downloaded papers are automatically sorted into folders following the categories on the page.
See the download resource: Spinning Up Key Papers
The source code is as follows:
import os
import time
import urllib.request as url_re

import requests as rq
from bs4 import BeautifulSoup as bf

'''Automatically download all the key papers recommended by OpenAI Spinning Up.

See more info on: https://spinningup.openai.com/en/latest/spinningup/keypapers.html

Dependencies: bs4, lxml
'''

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36'
}

spinningup_url = 'https://spinningup.openai.com/en/latest/spinningup/keypapers.html'
paper_id = 1


def download_pdf(pdf_url, pdf_path):
    """Download a PDF file from the Internet.

    Args:
        pdf_url (str): url of the PDF file to be downloaded
        pdf_path (str): save path of the downloaded PDF file
    """
    if os.path.exists(pdf_path):
        return
    try:
        with url_re.urlopen(pdf_url) as url:
            pdf_data = url.read()
        with open(pdf_path, "wb") as f:
            f.write(pdf_data)
    except Exception:  # fix the broken link of paper [102]
        pdf_url = r"https://is.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Neural-Netw-2008-21-682_4867%5b0%5d.pdf"
        with url_re.urlopen(pdf_url) as url:
            pdf_data = url.read()
        with open(pdf_path, "wb") as f:
            f.write(pdf_data)
    time.sleep(10)  # sleep 10 seconds before downloading the next paper


def download_from_bs4(papers, category_path):
    """Download papers from Spinning Up.

    Args:
        papers (bs4.element.ResultSet): 'a' tags with paper links
        category_path (str): root dir of the papers to be downloaded
    """
    global paper_id
    print("Start to download papers from category {}...".format(category_path))
    for paper in papers:
        paper_link = paper['href']
        if not paper_link.endswith('.pdf'):
            if paper_link[8:13] == 'arxiv':
                # e.g. paper_link = "https://arxiv.org/abs/1811.02553"
                paper_link = paper_link[:18] + 'pdf' + paper_link[21:] + '.pdf'  # arxiv link
            elif paper_link[8:18] == 'openreview':  # openreview link
                # e.g. paper_link = "https://openreview.net/forum?id=ByG_3s09KX"
                paper_link = paper_link[:23] + 'pdf' + paper_link[28:]
            elif paper_link[14:18] == 'nips':  # neurips link
                paper_link = "https://proceedings.neurips.cc/paper/2017/file/a1d7311f2a312426d710e1c617fcbc8c-Paper.pdf"
            else:
                continue
        paper_name = '[{}] '.format(paper_id) + paper.string + '.pdf'
        if ':' in paper_name:
            paper_name = paper_name.replace(':', '_')
        if '?' in paper_name:
            paper_name = paper_name.replace('?', '')
        paper_path = os.path.join(category_path, paper_name)
        download_pdf(paper_link, paper_path)
        print("Successfully downloaded {}!".format(paper_name))
        paper_id += 1
    print("Successfully downloaded all the papers from category {}!".format(category_path))


def _save_html(html_url, html_path):
    """Save a requested HTML page to disk.

    Args:
        html_url (str): url of the HTML page to be saved
        html_path (str): save path of the HTML file
    """
    html_file = rq.get(html_url, headers=headers)
    with open(html_path, "w", encoding='utf-8') as h:
        h.write(html_file.text)


def download_key_papers(root_dir):
    """Download all the key papers, organized by the categories listed on the website.

    Args:
        root_dir (str): save path of all the downloaded papers
    """
    # 1. Get the html of Spinning Up
    spinningup_html = rq.get(spinningup_url, headers=headers)

    # 2. Parse the html and get the main category ids
    soup = bf(spinningup_html.content, 'lxml')
    # _save_html(spinningup_url, 'spinningup.html')
    # spinningup_file = open('spinningup.html', 'r', encoding="UTF-8")
    # spinningup_handle = spinningup_file.read()
    # soup = bf(spinningup_handle, features='lxml')
    category_ids = []
    categories = soup.find(name='div', attrs={'class': 'section', 'id': 'key-papers-in-deep-rl'}).\
        find_all(name='div', attrs={'class': 'section'}, recursive=False)
    for category in categories:
        category_ids.append(category['id'])

    # 3. Get all the categories and make corresponding dirs
    category_dirs = []
    if not os.path.exists(root_dir):
        os.makedirs(root_dir)
    for category in soup.find_all(name='h4'):
        category_name = list(category.children)[0].string
        if ':' in category_name:  # replace ':' with '_' to get a valid dir name
            category_name = category_name.replace(':', '_')
        category_path = os.path.join(root_dir, category_name)
        category_dirs.append(category_path)
        if not os.path.exists(category_path):
            os.makedirs(category_path)

    # 4. Start to download all the papers
    print("Start to download key papers...")
    for i in range(len(category_ids)):
        category_path = category_dirs[i]
        category_id = category_ids[i]
        content = soup.find(name='div', attrs={'class': 'section', 'id': category_id})
        inner_categories = content.find_all('div')
        if inner_categories != []:
            for category in inner_categories:
                category_id = category['id']
                inner_category = category.h4.text[:-1]
                inner_category_path = os.path.join(category_path, inner_category)
                if not os.path.exists(inner_category_path):
                    os.makedirs(inner_category_path)
                content = soup.find(name='div', attrs={'class': 'section', 'id': category_id})
                papers = content.find_all(name='a', attrs={'class': 'reference external'})
                download_from_bs4(papers, inner_category_path)
        else:
            papers = content.find_all(name='a', attrs={'class': 'reference external'})
            download_from_bs4(papers, category_path)
    print("Download Complete!")


if __name__ == "__main__":
    root_dir = "key-papers"
    download_key_papers(root_dir)
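As a small illustration (not part of the original script), the slice-based rewrite in download_from_bs4 turns an arXiv abstract URL into the corresponding direct PDF URL:

# Illustration only: the slicing used in download_from_bs4 for arXiv links.
link = "https://arxiv.org/abs/1811.02553"
pdf_link = link[:18] + 'pdf' + link[21:] + '.pdf'
print(pdf_link)  # https://arxiv.org/pdf/1811.02553.pdf

To run the full script, install the dependencies (requests, beautifulsoup4, lxml) and execute it directly; it creates a key-papers directory with one sub-directory per category, matching the root_dir set in the __main__ block.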
2. Does OpenAI Plus allow unlimited calls?
No.
According to Sina Finance, even after successfully binding a credit card to OpenAI, upgrading to the Plus subscription does not grant unlimited calls; the card has to be bound again.
OpenAI offers the Plus plan at 20 US dollars per month, roughly 130 RMB.
3. How is ChatGPT updated?
ChatGPT is updated as follows: ChatGPT is a large natural language processing model developed by the OpenAI team, and updates are normally carried out by OpenAI. If you access ChatGPT through the OpenAI API, you do not need to worry about model updates, because OpenAI updates and maintains the model regularly. If you are using a ChatGPT-style model that you trained yourself, you can improve its performance and accuracy by adding more training data or using more advanced training techniques. You can also use a pretrained language model such as GPT-3 for better results. Either way, continual updating and improvement is the key to better ChatGPT performance and accuracy.
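For the API route, a minimal sketch of calling a hosted model through the openai Python package might look like the following. The model name and the pre-1.0 openai.ChatCompletion interface are illustrative assumptions, not taken from the article:

# A minimal sketch, assuming the openai Python package (pre-1.0 interface)
# and an API key stored in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The hosted model is maintained and updated by OpenAI; callers only pass its name.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": "Summarize what Spinning Up is."}],
)
print(response["choices"][0]["message"]["content"])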
4. Is an open API the same as open source?
An open API is not the same as open source. An open API means that a piece of software or a platform allows third-party developers to use its interfaces and data to build new applications or services. Open source means the software's source code is public and anyone can view, modify, and distribute it. Although open APIs and open source both promote innovation and collaboration, they are different concepts.
The advantage of an open API is that it lets different applications interoperate, which increases the value of the whole ecosystem. For example, many social media platforms provide open APIs so that third-party developers can build applications such as social media management tools and analytics tools, which help users manage and analyze their accounts more efficiently and effectively.
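To make the distinction concrete: consuming an open API means calling published endpoints over the network rather than reading or modifying the provider's source code. A minimal sketch with the requests library, against a hypothetical endpoint and token (all names below are invented for illustration), might look like this:

# A minimal sketch of consuming a (hypothetical) open REST API with requests.
# The endpoint, token, and parameters are invented for illustration; real open
# APIs publish their own documentation and authentication rules.
import requests

BASE_URL = "https://api.example.com/v1/posts"   # hypothetical endpoint
TOKEN = "YOUR_ACCESS_TOKEN"                     # issued by the platform

resp = requests.get(
    BASE_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"user_id": "12345", "limit": 10},
    timeout=10,
)
resp.raise_for_status()
for post in resp.json().get("data", []):
    print(post)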
In short, open API and open source are two different concepts, but both can promote innovation and collaboration. An open API lets different applications interoperate, raising the value of the whole ecosystem, while open source lets developers view, modify, and distribute a program's source code more easily.
That concludes our answers to questions about OpenAI not being available in your region. We hope this helps; if you have more related questions, you can also contact our customer service, who will be happy to explain more.