Added a hook `useBotSpeakingState` to get the state of the bot speaking event #5429

Open · wants to merge 2 commits into base: main
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -82,6 +82,9 @@ Notes: web developers are advised to use [`~` (tilde range)](https://github.com/
- Added dedicated loading animation for messages in preparing state for Fluent theme, in PR [#5423](https://github.com/microsoft/BotFramework-WebChat/pull/5423), by [@OEvgeny](https://github.com/OEvgeny)
- Resolved [#2661](https://github.com/microsoft/BotFramework-WebChat/issues/2661) and [#5352](https://github.com/microsoft/BotFramework-WebChat/issues/5352). Added speech recognition continuous mode with barge-in support, in PR [#5426](https://github.com/microsoft/BotFramework-WebChat/pull/5426), by [@RushikeshGavali](https://github.com/RushikeshGavali) and [@compulim](https://github.com/compulim)
- Set `styleOptions.speechRecognitionContinuous` to `true` with a Web Speech API provider with continuous mode support
- Added a `useBotSpeakingState` hook to get the state of the bot speaking event, in PR [#5429](https://github.com/microsoft/BotFramework-WebChat/pull/5429), by [@VanessaAntao](https://github.com/VanessaAntao)



### Changed

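The new hook surfaces the bot's speech synthesis state as a number. A minimal sketch of how a consumer might interpret that value — the 0/1 mapping mirrors the `constants/BotSpeakingState` module added in this PR, while `describeBotSpeakingState` is a hypothetical helper, not part of the API:

```javascript
// Hypothetical helper: map the numeric value returned by useBotSpeakingState()
// to a human-readable label. IDLE = 0 and SPEAKING = 1 mirror the
// constants/BotSpeakingState module added in this PR.
const BotSpeakingState = { IDLE: 0, SPEAKING: 1 };

function describeBotSpeakingState(state) {
  return state === BotSpeakingState.SPEAKING ? 'Bot is speaking' : 'Bot is idle';
}

// Inside a React component, the value would come from the hook itself, e.g.:
//   const botSpeakingState = useBotSpeakingState();
console.log(describeBotSpeakingState(BotSpeakingState.SPEAKING)); // Bot is speaking
```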
174 changes: 174 additions & 0 deletions __tests__/html2/hooks/useBotSpeakingState.html
@@ -0,0 +1,174 @@
<!doctype html>
<html lang="en-US">
<head>
<link href="/assets/index.css" rel="stylesheet" type="text/css" />
<script crossorigin="anonymous" src="https://unpkg.com/[email protected]/umd/react.development.js"></script>
<script crossorigin="anonymous" src="https://unpkg.com/[email protected]/umd/react-dom.development.js"></script>
<script crossorigin="anonymous" src="/test-harness.js"></script>
<script crossorigin="anonymous" src="/test-page-object.js"></script>
<script crossorigin="anonymous" src="/__dist__/webchat-es5.js"></script>
</head>
<body>
<main id="webchat"></main>
<script type="importmap">
{
"imports": {
"@testduet/wait-for": "https://unpkg.com/@testduet/wait-for@main/dist/wait-for.mjs",
"jest-mock": "https://esm.sh/jest-mock",
"react-dictate-button/internal": "https://unpkg.com/react-dictate-button@main/dist/react-dictate-button.internal.mjs"
}
}
</script>
<script type="module">
import { waitFor } from '@testduet/wait-for';
import { fn, spyOn } from 'jest-mock';
import {
SpeechGrammarList,
SpeechRecognition,
SpeechRecognitionAlternative,
SpeechRecognitionErrorEvent,
SpeechRecognitionEvent,
SpeechRecognitionResult,
SpeechRecognitionResultList
} from 'react-dictate-button/internal';
import { SpeechSynthesis, SpeechSynthesisEvent, SpeechSynthesisUtterance } from '../speech/js/index.js';
import renderHook from './private/renderHook.js';

const {
React: { createElement },
ReactDOM: { render },
testHelpers: { createDirectLineEmulator },
WebChat: {
Components: { BasicWebChat, Composer },
hooks: { useBotSpeakingState },
renderWebChat,
testIds
}
} = window;

run(async function () {

const speechSynthesis = new SpeechSynthesis();
const ponyfill = {
SpeechGrammarList,
SpeechRecognition: fn().mockImplementation(() => {
const speechRecognition = new SpeechRecognition();

spyOn(speechRecognition, 'abort');
spyOn(speechRecognition, 'start');

return speechRecognition;
}),
speechSynthesis,
SpeechSynthesisUtterance
};

spyOn(speechSynthesis, 'speak');

const { directLine, store } = createDirectLineEmulator();
const WebChatWrapper = ({ children }) =>
createElement(
Composer,
{ directLine, store, webSpeechPonyfillFactory: () => ponyfill },
createElement(BasicWebChat),
children
);

// WHEN: Render initially.
const renderResult = renderHook(() => useBotSpeakingState(), {
legacyRoot: true,
wrapper: WebChatWrapper
});

await pageConditions.uiConnected();

// THEN: `useBotSpeakingState` should return IDLE.
await waitFor(() => expect(renderResult).toHaveProperty('result.current', 0)); // IDLE


// WHEN: Microphone button is clicked and priming user gesture is done.
await pageObjects.clickMicrophoneButton();

await waitFor(() => expect(speechSynthesis.speak).toHaveBeenCalledTimes(1));
speechSynthesis.speak.mock.calls[0][0].dispatchEvent(
new SpeechSynthesisEvent('end', { utterance: speechSynthesis.speak.mock.calls[0][0] })
);


// THEN: Should construct SpeechRecognition().
expect(ponyfill.SpeechRecognition).toHaveBeenCalledTimes(1);

const { value: speechRecognition1 } = ponyfill.SpeechRecognition.mock.results[0];

// THEN: Should call SpeechRecognition.start().
expect(speechRecognition1.start).toHaveBeenCalledTimes(1);

// WHEN: Recognition started and interim results are dispatched.
speechRecognition1.dispatchEvent(new Event('start'));
speechRecognition1.dispatchEvent(new Event('audiostart'));
speechRecognition1.dispatchEvent(new Event('soundstart'));
speechRecognition1.dispatchEvent(new Event('speechstart'));

// WHEN: Recognized interim result of "Hello".
speechRecognition1.dispatchEvent(
new SpeechRecognitionEvent('result', {
results: new SpeechRecognitionResultList(
new SpeechRecognitionResult(new SpeechRecognitionAlternative(0, 'Hello'))
)
})
);

// WHEN: Recognized finalized result of "Hello, World!" and ended recognition.
await (
await directLine.actPostActivity(() =>
speechRecognition1.dispatchEvent(
new SpeechRecognitionEvent('result', {
results: new SpeechRecognitionResultList(
SpeechRecognitionResult.fromFinalized(new SpeechRecognitionAlternative(0.9, 'Hello, World!'))
)
})
)
)
).resolveAll();


// WHEN: Bot replied.
await directLine.emulateIncomingActivity({
inputHint: 'expectingInput', // "expectingInput" should turn the microphone back on after synthesis completed.
text: 'Aloha!',
type: 'message'
});
await pageConditions.numActivitiesShown(2);

// THEN: Should call SpeechSynthesis.speak() again.
await waitFor(() => expect(speechSynthesis.speak).toHaveBeenCalledTimes(2));

// THEN: Should start synthesizing "Aloha!".
expect(speechSynthesis.speak).toHaveBeenLastCalledWith(expect.any(SpeechSynthesisUtterance));
expect(speechSynthesis.speak).toHaveBeenLastCalledWith(expect.objectContaining({ text: 'Aloha!' }));

// THEN: `useBotSpeakingState` should return SPEAKING.
renderResult.rerender();
await waitFor(() => expect(renderResult).toHaveProperty('result.current', 1));

// WHEN: After synthesis completed.
speechSynthesis.speak.mock.calls[1][0].dispatchEvent(
new SpeechSynthesisEvent('end', { utterance: speechSynthesis.speak.mock.calls[1][0] })
);

// THEN: `useBotSpeakingState` should return IDLE.
renderResult.rerender();
await waitFor(() => expect(renderResult).toHaveProperty('result.current', 0));

// WHEN: Click on microphone button.
await pageObjects.clickMicrophoneButton();

// THEN: `useBotSpeakingState` should return IDLE.
renderResult.rerender();
await waitFor(() => expect(renderResult).toHaveProperty('result.current', 0));

});

</script>
</body>
</html>
2 changes: 2 additions & 0 deletions packages/api/src/hooks/Composer.tsx
@@ -22,6 +22,7 @@ import {
setSendTypingIndicator,
singleToArray,
startDictate,
setBotSpeakingState,
startSpeakingActivity,
stopDictate,
stopSpeakingActivity,
@@ -101,6 +102,7 @@ const DISPATCHERS = {
sendPostBack,
setDictateInterims,
setDictateState,
setBotSpeakingState,
setNotification,
setSendBox,
setSendBoxAttachments,
4 changes: 3 additions & 1 deletion packages/api/src/hooks/index.ts
@@ -70,6 +70,7 @@ import useUIState from './useUIState';
import useUserID from './useUserID';
import useUsername from './useUsername';
import useVoiceSelector from './useVoiceSelector';
import useBotSpeakingState from './useBotSpeakingState';

export {
useActiveTyping,
@@ -143,5 +144,6 @@ export {
useUIState,
useUserID,
useUsername,
useVoiceSelector
useVoiceSelector,
useBotSpeakingState
};
1 change: 1 addition & 0 deletions packages/api/src/hooks/internal/WebChatAPIContext.ts
@@ -67,6 +67,7 @@ export type WebChatAPIContextType = {
sendMessageBack?: (value: any, text?: string, displayText?: string) => void;
sendPostBack?: (value?: any) => void;
sendTypingIndicator?: boolean;
setBotSpeakingState?: (botSpeakingState: number) => void;
setDictateInterims?: (interims: string[]) => void;
setDictateState?: (dictateState: number) => void;
setNotification?: (notification: Notification) => void;
5 changes: 5 additions & 0 deletions packages/api/src/hooks/internal/useSetBotSpeakingState.ts
@@ -0,0 +1,5 @@
import useWebChatAPIContext from './useWebChatAPIContext';

export default function useSetBotSpeakingState(): (botSpeaking: number) => void {
return useWebChatAPIContext().setBotSpeakingState;
}
5 changes: 5 additions & 0 deletions packages/api/src/hooks/useBotSpeakingState.ts
@@ -0,0 +1,5 @@
import { useSelector } from './internal/WebChatReduxContext';

export default function useBotSpeakingState(): number {
return useSelector(({ botSpeakingState }) => botSpeakingState);
}
3 changes: 2 additions & 1 deletion packages/api/src/internal.ts
@@ -1,3 +1,4 @@
import useSetDictateState from './hooks/internal/useSetDictateState';
import useSetBotSpeakingState from './hooks/internal/useSetBotSpeakingState';

export { useSetDictateState };
export { useSetDictateState, useSetBotSpeakingState };
35 changes: 31 additions & 4 deletions packages/component/src/Activity/Speak.tsx
@@ -1,9 +1,10 @@
import { hooks } from 'botframework-webchat-api';
import type { WebChatActivity } from 'botframework-webchat-core';
import PropTypes from 'prop-types';
import React, { FC, memo, useCallback, useMemo } from 'react';
import React, { useEffect, FC, memo, useCallback, useMemo } from 'react';
import ReactSay, { SayUtterance } from 'react-say';

import { useSetBotSpeakingState } from 'botframework-webchat-api/internal';
import { Constants } from 'botframework-webchat-core';
import SayAlt from './SayAlt';

// TODO: [P1] Interop between Babel and esbuild.
@@ -12,6 +13,9 @@ const { useMarkActivityAsSpoken, useStyleOptions, useVoiceSelector } = hooks;

// TODO: [P4] Consider moving this feature into BasicActivity
// And it has better DOM position for showing visual spoken text
const {
BotSpeakingState: { IDLE, SPEAKING }
} = Constants;

type SpeakProps = {
activity: WebChatActivity;
@@ -21,6 +25,18 @@ const Speak: FC<SpeakProps> = ({ activity }) => {
const [{ showSpokenText }] = useStyleOptions();
const markActivityAsSpoken = useMarkActivityAsSpoken();
const selectVoice = useVoiceSelector(activity);
const setBotSpeaking = useSetBotSpeakingState();

useEffect(
() => () => {
setBotSpeaking(IDLE);
},
[setBotSpeaking]
);

const handleOnStartSpeaking = useCallback(() => {
setBotSpeaking(SPEAKING);
}, [setBotSpeaking]);

const markAsSpoken = useCallback(() => {
markActivityAsSpoken(activity);
@@ -50,9 +66,20 @@ const Speak: FC<SpeakProps> = ({ activity }) => {
!!activity && (
<React.Fragment>
{speechSynthesisUtterance ? (
<SayUtterance onEnd={markAsSpoken} onError={markAsSpoken} utterance={speechSynthesisUtterance} />
<SayUtterance
onEnd={markAsSpoken}
onError={markAsSpoken}
onStart={handleOnStartSpeaking}
utterance={speechSynthesisUtterance}
/>
) : (
<Say onEnd={markAsSpoken} onError={markAsSpoken} text={singleLine} voice={selectVoice} />
<Say
onEnd={markAsSpoken}
onError={markAsSpoken}
onStart={handleOnStartSpeaking}
text={singleLine}
voice={selectVoice}
/>
)}
{!!showSpokenText && <SayAlt speak={singleLine} />}
</React.Fragment>
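The `Speak.tsx` changes can be summarized outside React: `onStart` on `Say`/`SayUtterance` flips the state to SPEAKING, and the `useEffect` cleanup resets it to IDLE when the Speak component unmounts after synthesis completes. A standalone sketch of that lifecycle, with the `createSpeakWiring` helper name being hypothetical:

```javascript
const IDLE = 0;
const SPEAKING = 1;

// Hypothetical stand-in for the Speak component's lifecycle wiring:
// onStart -> SPEAKING; unmount (effect cleanup) -> IDLE.
function createSpeakWiring(setBotSpeakingState) {
  return {
    onStart: () => setBotSpeakingState(SPEAKING),
    unmount: () => setBotSpeakingState(IDLE)
  };
}

let current = IDLE;
const wiring = createSpeakWiring(next => {
  current = next;
});

wiring.onStart(); // current === SPEAKING (1)
wiring.unmount(); // current === IDLE (0)
```

Resetting in the cleanup (rather than in an `onEnd` handler) means the state also returns to IDLE if synthesis is interrupted and the component unmounts early.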
10 changes: 10 additions & 0 deletions packages/core/src/actions/setBotSpeakingState.ts
@@ -0,0 +1,10 @@
const SET_BOT_SPEAKING = 'SET_BOT_SPEAKING';

export default function setBotSpeakingState(botSpeaking: number) {
return {
type: SET_BOT_SPEAKING,
payload: { botSpeaking }
};
}

export { SET_BOT_SPEAKING };
4 changes: 4 additions & 0 deletions packages/core/src/constants/BotSpeakingState.js
@@ -0,0 +1,4 @@
const IDLE = 0;
const SPEAKING = 1;

export { IDLE, SPEAKING };
4 changes: 3 additions & 1 deletion packages/core/src/createReducer.ts
@@ -17,6 +17,7 @@ import sendTypingIndicator from './reducers/sendTypingIndicator';
import shouldSpeakIncomingActivity from './reducers/shouldSpeakIncomingActivity';
import suggestedActions from './reducers/suggestedActions';
import suggestedActionsOriginActivity from './reducers/suggestedActionsOriginActivity';
import botSpeakingState from './reducers/botSpeakingState';

import type { GlobalScopePonyfill } from './types/GlobalScopePonyfill';

@@ -38,6 +39,7 @@ export default function createReducer(ponyfill: GlobalScopePonyfill) {
shouldSpeakIncomingActivity,
suggestedActions,
suggestedActionsOriginActivity,
typing: createTypingReducer(ponyfill)
typing: createTypingReducer(ponyfill),
botSpeakingState
});
}
5 changes: 4 additions & 1 deletion packages/core/src/index.ts
@@ -27,6 +27,7 @@ import stopSpeakingActivity from './actions/stopSpeakingActivity';
import submitSendBox from './actions/submitSendBox';
import * as ActivityClientState from './constants/ActivityClientState';
import * as DictateState from './constants/DictateState';
import * as BotSpeakingState from './constants/BotSpeakingState';
import createStore, {
withDevTools as createStoreWithDevTools,
withOptions as createStoreWithOptions
@@ -69,8 +70,9 @@ import type { CreativeWork as OrgSchemaCreativeWork } from './types/external/Org
import type { DefinedTerm as OrgSchemaDefinedTerm } from './types/external/OrgSchema/DefinedTerm';
import type { Project as OrgSchemaProject } from './types/external/OrgSchema/Project';
import type { Thing as OrgSchemaThing } from './types/external/OrgSchema/Thing';
import setBotSpeakingState from './actions/setBotSpeakingState';

const Constants = { ActivityClientState, DictateState };
const Constants = { ActivityClientState, DictateState, BotSpeakingState };
const buildTool = process.env.build_tool;
const moduleFormat = process.env.module_format;
const version = process.env.npm_package_version;
@@ -107,6 +109,7 @@ export {
sendMessage,
sendMessageBack,
sendPostBack,
setBotSpeakingState,
setDictateInterims,
setDictateState,
setLanguage,
17 changes: 17 additions & 0 deletions packages/core/src/reducers/botSpeakingState.js
@@ -0,0 +1,17 @@
import { SET_BOT_SPEAKING } from '../actions/setBotSpeakingState';
import { IDLE } from '../constants/BotSpeakingState';

const DEFAULT_STATE = IDLE;

export default function botSpeakingState(state = DEFAULT_STATE, { payload, type }) {
switch (type) {
case SET_BOT_SPEAKING:
state = payload.botSpeaking;
break;

default:
break;
}

return state;
}
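Reimplemented inline for illustration, the action creator and the reducer above interact as follows (the switch is condensed to a ternary; behavior is the same):

```javascript
// Inline reimplementation of setBotSpeakingState and the botSpeakingState reducer.
const SET_BOT_SPEAKING = 'SET_BOT_SPEAKING';
const IDLE = 0;
const SPEAKING = 1;

function setBotSpeakingState(botSpeaking) {
  return { type: SET_BOT_SPEAKING, payload: { botSpeaking } };
}

function botSpeakingState(state = IDLE, { payload, type } = {}) {
  return type === SET_BOT_SPEAKING ? payload.botSpeaking : state;
}

let state = botSpeakingState(undefined, {});
console.log(state); // 0 — default state is IDLE
state = botSpeakingState(state, setBotSpeakingState(SPEAKING));
console.log(state); // 1 — SPEAKING after the action is dispatched
```

Unrelated actions fall through the `default` branch of the real reducer, so the state is held until the next `SET_BOT_SPEAKING` action arrives.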
3 changes: 3 additions & 0 deletions packages/core/src/selectors/botSpeaking.ts
@@ -0,0 +1,3 @@
import type { ReduxState } from '../types/internal/ReduxState';

export default ({ botSpeakingState }: ReduxState) => botSpeakingState;
1 change: 1 addition & 0 deletions packages/core/src/types/internal/ReduxState.ts
@@ -14,6 +14,7 @@ type ReduxState = {
sendTimeout: number;
sendTypingIndicator: boolean;
shouldSpeakIncomingActivity: boolean;
botSpeakingState: number;
};

export type { ReduxState };