Example: Calling the OpenAI ChatGPT API from PHP

Signing Up for an OpenAI Account

The sign-up process is omitted here; the end result is an api_key.
Each account comes with a $5 free allowance for API calls. Once it has been consumed at the relevant model's per-1K-token rate, you need to top up.
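As a rough illustration of how that allowance is consumed, you can estimate the cost of a call from the usage block the API returns. The per-1K-token rate below is an illustrative placeholder, not a current price; check OpenAI's pricing page for real figures.

```php
<?php
// Estimate the dollar cost of one API call from its token usage.
// NOTE: $ratePer1k is an illustrative assumption, not a current price.
function estimateCostUsd(int $promptTokens, int $completionTokens, float $ratePer1k): float {
    return ($promptTokens + $completionTokens) / 1000 * $ratePer1k;
}

// e.g. 120 prompt tokens + 380 completion tokens at $0.002 / 1K tokens
echo estimateCostUsd(120, 380, 0.002); // 0.001
```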

Chat Completion

Below is the calling code for the conversational scenario. The prompt-style completion endpoint works the same way; you only need to adjust the parameter and response formats per the documentation.
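For reference, the chat format the endpoint expects is just a model name plus an ordered array of role/content messages, oldest first, with the newest question last. A minimal request body looks like this (the content strings are placeholders):

```php
<?php
// Minimal chat-completions request body in the chat message format.
// Roles alternate between 'user' and 'assistant'; newest question goes last.
$body = [
    'model' => 'gpt-3.5-turbo-0301',
    'messages' => [
        ['role' => 'user', 'content' => 'What is PHP?'],
        ['role' => 'assistant', 'content' => 'PHP is a server-side scripting language.'],
        ['role' => 'user', 'content' => 'Who created it?'],
    ],
];
echo json_encode($body, JSON_PRETTY_PRINT);
```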

    public function answer(){
        // Fetch the user record; implement getUser() however you like
        $user = $this->getUser();
        $post = $this->request->post();
        // The question submitted by the user
        $question = $post['question'];
        // Q&A history is stored in the qalist table
        $qaModel = new QAListModel();
        // Load up to 50 recent exchanges per user, so the AI keeps the thread of
        // the conversation and each user's context stays isolated from the others
        $qaList = $qaModel->where('user_id', $user['id'])->where(['type' => 2])->order('created desc')->limit(50)->select()->toArray();
        // Assemble the history into the API's message format, oldest first
        $messages = [];
        for($i = count($qaList); $i > 0; $i--){
            $item = $qaList[$i-1];
            $questionData = [
                'role' => 'user',
                'content' => $item['problem']
            ];
            $messages[] = $questionData;
            if($item['answer']){
                $answerData = [
                    'role' => 'assistant',
                    'content' => $item['answer']
                ];
                $messages[] = $answerData;
            }
        }
        $messages[] = [
            'role' => 'user',
            'content' => $question
        ];
        // Use gpt-3.5-turbo-0301: cheap tokens and decent answer quality
        // (though not as good as the web version, let alone gpt-4)
        $data = [
            'model' => 'gpt-3.5-turbo-0301',
            'messages' => $messages,
        ];
        $url = 'https://api.openai.com/v1/chat/completions';
        // The post() method is listed in the next section
        $response = Request::post($url, $data);
        
        if($response['code'] === 0){
            return $this->error($response['msg']);
        }

        $res = $response['data'];
        // Extract the answer
        $answer = $res['choices'][0]['message']['content'];
        // Besides the answer itself, save this exchange's metadata, including token usage
        $answerData = [
            'answer' => $answer,
            'ai_id' => $res['id'],
            'index' => $res['choices'][0]['index'],
            //'logprobs' => $res['choices'][0]['logprobs'],
            'finish_reason' => $res['choices'][0]['finish_reason'],
            'prompt_tokens' => $res['usage']['prompt_tokens'],
            'completion_tokens' => $res['usage']['completion_tokens'],
            'total_tokens' => $res['usage']['total_tokens'],

            'user_id' => $user['id'],
            'problem' => $question,
            'type' => $post['q_type'],
            'created' => time()
        ];
        
        $qaModelA = new QAListModel();
        $qaModelA->save($answerData);

        return $this->success('success', [
            'qaId' => $qaModelA->id,
            'answer' => $answer
        ]);
    }
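For context, the fields read out of $res above come from a response shaped roughly like the following. This is abridged from the API docs; the id, content, and token counts are invented sample values.

```php
<?php
// A sketch of the chat-completions response structure the code above parses.
// The id, content, and token counts here are invented sample values.
$res = json_decode('{
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1679462400,
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop"
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12}
}', true);

echo $res['choices'][0]['message']['content']; // "Hello!"
echo $res['usage']['total_tokens'];            // 12
```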

The post() method

    public static function post($url, $data){
        $options = array(
            'http' => array(
                'header'  => "Content-type: application/json\r\nAuthorization: Bearer " . ConfigGPT::api_key,  // replace this with your own api_key
                'method'  => 'POST',
                'content' => json_encode($data)
            ),
        );

        $context  = stream_context_create($options);
        $response = file_get_contents($url, false, $context);

        if ($response === false) {
            return ['code' => 0, 'msg' => 'request failed'];
        }

        return ['code' => 1, 'msg' => 'request succeeded', 'data' => json_decode($response, true)];
    }

My environment: Nginx/1.22.1, PHP 7.4.33, SQLite

Note: the server calling the OpenAI API must be located outside mainland China (Hong Kong does not count), or you need to add a proxy to the method above; I won't go into detail here.
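If you take the proxy route, PHP's stream context supports it directly, so no extra library is needed. A minimal sketch (the proxy address is a placeholder you would replace with your own):

```php
<?php
// Sketch: routing the file_get_contents() request through an HTTP proxy.
// 'tcp://127.0.0.1:7890' is a placeholder for your own proxy address.
$options = [
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-type: application/json\r\nAuthorization: Bearer YOUR_API_KEY",
        'content' => json_encode(['model' => 'gpt-3.5-turbo-0301', 'messages' => []]),
        'proxy'   => 'tcp://127.0.0.1:7890',
        'request_fulluri' => true,  // most HTTP proxies require the full URI in the request line
        'timeout' => 60,
    ],
];
$context = stream_context_create($options);
// $response = file_get_contents('https://api.openai.com/v1/chat/completions', false, $context);
```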

Official request-parameter documentation

Finally, here is the official request-parameter documentation. The code above passes only the required parameters; everything else uses its default value. It's worth a look: with well-chosen parameter settings, ChatGPT's answers become more precise.

model
string
Required
ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

messages
array
Required
The messages to generate chat completions for, in the chat format.

temperature
number
Optional
Defaults to 1
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.

top_p
number
Optional
Defaults to 1
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
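As a concrete example, adding one of these sampling parameters to the request body from earlier is just an extra key. The 0.2 here is an arbitrary illustrative value chosen for more deterministic answers.

```php
<?php
// Request body with an explicit sampling temperature; lower = more deterministic.
// Per the recommendation above, set temperature OR top_p, not both.
$data = [
    'model' => 'gpt-3.5-turbo-0301',
    'messages' => [['role' => 'user', 'content' => 'Summarize PHP in one line.']],
    'temperature' => 0.2,  // arbitrary illustrative value
];
echo json_encode($data);
```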

n
integer
Optional
Defaults to 1
How many chat completion choices to generate for each input message.

stream
boolean
Optional
Defaults to false
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. See the OpenAI Cookbook for example code.
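When stream is enabled, the body arrives as server-sent events rather than one JSON document, so you assemble the answer from content deltas. A minimal sketch of parsing those data: lines (the sample chunk payloads are invented, in the documented delta format):

```php
<?php
// Sketch: extracting content deltas from streamed "data: ..." SSE lines.
// The chunks below are invented samples in the documented delta format.
function collectDeltas(array $lines): string {
    $text = '';
    foreach ($lines as $line) {
        if (strpos($line, 'data: ') !== 0) continue;  // skip non-data lines
        $payload = substr($line, 6);
        if ($payload === '[DONE]') break;             // stream terminator
        $chunk = json_decode($payload, true);
        $text .= $chunk['choices'][0]['delta']['content'] ?? '';
    }
    return $text;
}

$sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    'data: [DONE]',
];
echo collectDeltas($sample); // "Hello"
```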

stop
string or array
Optional
Defaults to null
Up to 4 sequences where the API will stop generating further tokens.

max_tokens
integer
Optional
Defaults to inf
The maximum number of tokens to generate in the chat completion.
The total length of input tokens and generated tokens is limited by the model's context length.

presence_penalty
number
Optional
Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
See more information about frequency and presence penalties.

frequency_penalty
number
Optional
Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
See more information about frequency and presence penalties.

logit_bias
map
Optional
Defaults to null
Modify the likelihood of specified tokens appearing in the completion.
Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
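For example, biasing against a particular token is just a map from token ID to bias value. The token ID below is made up for illustration; you would look up real IDs with the model's tokenizer.

```php
<?php
// Sketch: request body with a logit_bias map. 50256 is an illustrative
// token ID, not necessarily meaningful for this model's tokenizer.
$data = [
    'model' => 'gpt-3.5-turbo-0301',
    'messages' => [['role' => 'user', 'content' => 'Hi']],
    'logit_bias' => ['50256' => -100],  // -100 effectively bans the token
];
echo json_encode($data);
```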

user
string
Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Closing remarks

Once OpenAI's new models are released, gpt-4 may become free to try; something to look forward to.


posted @ 2023-03-22 16:34  红岸