Creating a custom adapter in Botframework for Actions on Google
I am currently writing a custom adapter in TypeScript to connect Google Assistant to Microsoft's Bot Framework. In this adapter I am trying to capture the Google Assistant conversation object through a web call and modify it using my bot.
Currently, the only thing my bot does is receive the request from Actions on Google and parse the request body into an ActionsOnGoogleConversation object. After this I call conv.ask() to try to set up a simple conversation between the two services.
API endpoint:
app.post("/api/google", (req, res) => {
    googleAdapter.processActivity(req, res, async (context) => {
        await bot.run(context);
    });
});
Adapter processActivity function:
public async processActivity(req: WebRequest, res: WebResponse, logic: (context: TurnContext) => Promise<void>): Promise<void> {
    const body = req.body;
    // Copy the incoming request body onto a fresh conversation object.
    let conv = new ActionsSdkConversation();
    Object.assign(conv, body);
    res.status(200);
    // conv.ask() returns the conv object itself, so this sends the whole
    // conversation (request fields included) back as the response body.
    res.send(conv.ask("Boo"));
}
When I try to start a conversation, I get the following error in the Actions on Google console.
UnparseableJsonResponse
API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: "availableSurfaces: Cannot find field." HTTP Status Code: 200.
I have already checked the response, and I can indeed find a field called availableSurfaces when I call my bot using Postman.
Response:
{
  "responses": [
    "Boo"
  ],
  "expectUserResponse": true,
  "digested": false,
  "noInputs": [],
  "speechBiasing": [],
  "_responded": true,
  "_ordersv3": false,
  "request": {},
  "headers": {},
  "_init": {},
  "sandbox": false,
  "input": {},
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.ACCOUNT_LINKING"
      },
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      }
    ]
  },
  "available": {
    "surfaces": {
      "list": [],
      "capabilities": {
        "surfaces": []
      }
    }
  },
  "user": {
    "locale": "en-US",
    "lastSeen": "2019-11-14T12:40:52Z",
    "userStorage": "{\"data\":{\"userId\":\"c1a4b8ab-06bb-4270-80f5-958cfdff57bd\"}}",
    "userVerificationStatus": "VERIFIED"
  },
  "arguments": {
    "parsed": {
      "input": {},
      "list": []
    },
    "status": {
      "input": {},
      "list": []
    },
    "raw": {
      "list": [],
      "input": {}
    }
  },
  "device": {},
  "screen": false,
  "body": {},
  "version": 2,
  "action": "",
  "intent": "",
  "parameters": {},
  "contexts": {
    "input": {},
    "output": {}
  },
  "incoming": {
    "parsed": []
  },
  "query": "",
  "data": {},
  "conversation": {
    "conversationId": "ABwppHEky66Iy1-qJ_4g08i3Z1HNHe2aDTrVTqY4otnNmdOgY2CC0VDbyt9lIM-_WkJA8emxbMPVxS5uutYHW2BzRQ",
    "type": "NEW"
  },
  "inputs": [
    {
      "intent": "actions.intent.MAIN",
      "rawInputs": [
        {
          "inputType": "VOICE",
          "query": "Talk to My test app"
        }
      ]
    }
  ],
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ]
}
Does anyone know what could cause this? I personally think that creating the ActionsSdkConversation could be the cause, but I haven't found any examples that use Google Assistant without getting the conv object from the standard intent handling setup.
So I managed to fix it by changing my approach: instead of having an API endpoint that fits the Bot Framework structure, I changed it to use the intent handlers of AoG.
Google controller:
export class GoogleController {
    public endpoint: GoogleEndpoint;
    private adapter: GoogleAssistantAdapter;
    private bot: SampleBot;

    constructor(bot: SampleBot) {
        this.bot = bot;
        this.adapter = new GoogleAssistantAdapter();
        this.endpoint = actionssdk();
        this.setupIntents(this.endpoint);
    }

    private setupIntents(endpoint: GoogleEndpoint) {
        endpoint.intent(GoogleIntentTypes.Start, (conv: ActionsSdkConversation) => {
            this.sendMessageToBotFramework(conv);
        });

        endpoint.intent(GoogleIntentTypes.Text, conv => {
            this.sendMessageToBotFramework(conv);
        });
    }

    private sendMessageToBotFramework(conv: ActionsSdkConversation) {
        this.adapter.processActivity(conv, async (context) => {
            await this.bot.run(context);
        });
    }
}
interface GoogleEndpoint extends OmniHandler, BaseApp, ActionsSdkApp<{}, {}, ActionsSdkConversation<{}, {}>> {}
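For completeness, here is a sketch of how this controller could be mounted on a web server. This wiring is my own assumption rather than part of the original post, but the app returned by actionssdk() is usable as a plain Express request handler, and the route and port below are placeholders:

import * as express from "express";

// Hypothetical wiring for the GoogleController above.
const server = express();
server.use(express.json()); // AoG sends JSON request bodies

const controller = new GoogleController(new SampleBot());
server.post("/api/google", controller.endpoint);

server.listen(3978, () => console.log("Google endpoint listening on 3978"));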
Once the conv object is in the adapter, I use it to create an activity that the bot uses to do its thing, and I save the conv object in the state using context.turnState().
Adapter processActivity:
public async processActivity(conv: ActionsSdkConversation, logic: (context: TurnContext) => Promise<void>): Promise<ActionsSdkConversation> {
    // Turn the AoG conversation into a Bot Framework activity and run the bot.
    const activity = this.createActivityFromGoogleConversation(conv);
    const context = this.createContext(activity);
    context.turnState.set("httpBody", conv);
    await this.runMiddleware(context, logic);
    // By now the bot's response handling has updated the stored conv object.
    const result = context.turnState.get("httpBody");
    return result;
}
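The createActivityFromGoogleConversation helper is not shown in the post. A minimal sketch of what it could look like, assuming the usual actions-on-google v2 fields (conv.input.raw for the raw utterance, conv.id for the conversation ID); the exact mapping here is my assumption, not the author's original code:

import { Activity, ActivityTypes, ConversationAccount } from "botbuilder";
import { ActionsSdkConversation } from "actions-on-google";

// Hypothetical mapping from an AoG conversation to a Bot Framework activity.
function createActivityFromGoogleConversation(conv: ActionsSdkConversation): Partial<Activity> {
    return {
        type: ActivityTypes.Message,
        text: conv.input.raw,                                  // raw user utterance
        channelId: "google",
        conversation: { id: conv.id } as ConversationAccount,
        from: { id: "google-user", name: "user" },             // placeholder identity
    };
}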
Bot:
export class SampleBot extends ActivityHandler {
    constructor() {
        super();
        this.onMessage(async (context, next) => {
            await context.sendActivity(`You said: ${context.activity.text}`);
            await next();
        });
    }
}
After the bot has sent its response, I use the result to modify the conv object, save it, and then return it in processActivity().
private createGoogleConversationFromActivity(activity: Partial<Activity>, context: TurnContext) {
    const conv = context.turnState.get("httpBody");
    if (activity.speak) {
        // Use a SimpleResponse when separate display text and speech are available.
        const response = new SimpleResponse({
            text: activity.text,
            speech: activity.speak
        });
        conv.ask(response);
    } else {
        if (!activity.text) {
            throw Error("Activity text cannot be undefined");
        }
        conv.ask(activity.text);
    }
    context.turnState.set("httpBody", conv);
    return;
}
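The post does not show where createGoogleConversationFromActivity is invoked. A custom Bot Framework adapter has to implement BotAdapter.sendActivities, so presumably it happens there; here is a sketch under that assumption (only the method signature comes from the base class, the body is hypothetical):

// Hypothetical sendActivities override for the adapter above.
public async sendActivities(context: TurnContext, activities: Partial<Activity>[]): Promise<ResourceResponse[]> {
    const responses: ResourceResponse[] = [];
    for (const activity of activities) {
        if (activity.type === ActivityTypes.Message) {
            // Write the outgoing message onto the conv object kept in turnState.
            this.createGoogleConversationFromActivity(activity, context);
        }
        responses.push({ id: "" }); // AoG has no per-activity IDs to return
    }
    return responses;
}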
This resulted in a simple conversation between Google Assistant and Bot Framework.
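As a closing aside: the original error most likely came from res.send(conv.ask("Boo")). Since conv.ask() returns the conversation object itself, the whole conversation was serialized back to Google, request-only fields such as availableSurfaces included, and those fields are not part of the response schema. A minimal sketch of a more direct fix for the original endpoint, assuming the actions-on-google v2 API where ActionsSdkConversation exposes serialize() to build just the response payload:

conv.ask("Boo");
res.status(200);
// serialize() produces only the AppResponse JSON that Google expects,
// rather than echoing the entire conversation object back.
res.send(conv.serialize());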