The Search and Score window displays the list of all the interactions that are recorded and sent to QM. You can select an interaction from the list to access and evaluate the recordings.
The Search and Score window populates all recorded interactions received by QM, irrespective of their processing status.
All processed interactions are assigned a universally unique identifier (UUID).
Hover over an interaction and click the play button to play the recording.
To modify the interaction processing settings, contact your customer care team.
When you search for an interaction using Interaction UUID, the results are displayed irrespective of the date filter you have used in the search criteria.
Click the Export button to export the required details. You can export Evaluations, List (the interactions for the current search criteria), or Transcripts using this option.
If you frequently search for a recording by using the same search criteria, you can save your search criteria for future use. A 'Saved Search' refers to the saved search criteria.
To create a saved search:
Click the Filter button. The Filter window appears.
As required, specify values in the filter sections. You have the following options:
If you want to immediately view the list to which all your search criteria are applied, click Apply Filter.
When you search using Interaction UUID, the results are displayed irrespective of the date filter you have used in the search criteria.
You can revert your changes by clicking the corresponding icon.
In the Filter window, click Save Template. The Save Search window appears.
In the Name field, specify a name that identifies the search, and then click Save.
The saved search is created, and it appears as an option in the Saved Search drop-down list box.
You can select a random sample of interactions from the list of interactions for scoring purposes.
To randomize the search result:
Click the Randomize button. The Randomize window is displayed.
Enter the Number of Results to specify the number of random interactions you want to view.
Click Apply.
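Conceptually, randomizing the search result is equivalent to drawing a fixed-size random sample from the filtered interaction list. The following Python sketch only illustrates that idea; the function and field names are hypothetical and not part of QM.

```python
import random

def randomize_results(interactions, number_of_results):
    """Return a random sample of interactions for scoring.

    interactions: list of interaction records returned by the search filter.
    number_of_results: the value entered in the Randomize window.
    """
    # Never request more items than the filtered list contains.
    sample_size = min(number_of_results, len(interactions))
    return random.sample(interactions, sample_size)

# Example: pick 5 random interactions out of the current search result.
filtered = [{"uuid": f"uuid-{i}"} for i in range(100)]  # placeholder data
for interaction in randomize_results(filtered, 5):
    print(interaction["uuid"])
```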
You can view the filtered data in the form of a chart and layer the results from another filter for a comparative study. To view the Comparison Chart, click the Comparison Chart button. The chart view of the selected data is displayed. To remove the Comparison Chart view, click the button again.
Click the button in the top-right corner. The View Configuration window opens.
Enter a name in the Dataset Label field to identify the dataset you plan to chart. Click the Color field and select a color for the dataset. Click the add button to add the dataset to the list of datasets you want to map. You can create a maximum of three datasets simultaneously for comparative study. The datasets are created and listed under the Dataset table.
To edit a saved dataset, hover over the dataset and click the edit button. The Search window opens, where you can select your search criteria. Click Apply. Do this for all the datasets you have created.
Select the Report Type from the drop-down list. Choose between Interaction Count and Scorecard Score Average.
Select the Interval from the drop-down list.
Select either Date Range or Custom Dates to select the required date range for the data to display.
A comparative chart with the selected filters for the different datasets is displayed.
On the right side of the chart, the datasets are listed along with their chosen colors. You can click the respective color buttons to show or hide them in the display.
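To make the two report types concrete, the sketch below shows one hypothetical way the chart values could be derived for each dataset: Interaction Count counts the interactions that fall into each interval, while Scorecard Score Average averages the scorecard scores in that interval. The field names and grouping logic are illustrative assumptions, not the product's implementation.

```python
from collections import defaultdict
from statistics import mean

def chart_values(interactions, report_type, interval_key):
    """Aggregate a dataset's interactions into per-interval chart values.

    interactions: records already matching the dataset's filter criteria.
    report_type: "Interaction Count" or "Scorecard Score Average".
    interval_key: function mapping an interaction to its interval label
                  (for example, its day or week).
    """
    buckets = defaultdict(list)
    for interaction in interactions:
        buckets[interval_key(interaction)].append(interaction)

    if report_type == "Interaction Count":
        return {label: len(items) for label, items in buckets.items()}
    # Scorecard Score Average: average the scorecard scores per interval.
    return {
        label: mean(item["scorecard_score"] for item in items)
        for label, items in buckets.items()
    }

# Example: daily interaction counts for one dataset.
data = [
    {"date": "2024-05-01", "scorecard_score": 80},
    {"date": "2024-05-01", "scorecard_score": 90},
    {"date": "2024-05-02", "scorecard_score": 70},
]
print(chart_values(data, "Interaction Count", lambda i: i["date"]))
# {'2024-05-01': 2, '2024-05-02': 1}
```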
To save the view, click the save button in the Select or Create a View field. The View window opens.
Enter a name for the view in the Name field.
From the Available list, select the users who can access this view. If no users are selected, the view is available to all users by default.
Click Save.
Note
To select a view from the list of available views, use the Select or Create a View drop-down list.
The auto-scoring functionality depends on the criteria set for processing an interaction, the level of processing selected, and the status enabled as per your requirement. For example, if an auto scorecard is set at a global level and is in the Active status, the auto scorecard runs through all interactions.
To reprocess the auto-scoring of the displayed set of interactions:
Click Reprocess. The Reprocess popup window appears.
Select the actions to be performed using the checkboxes.
Click Accept.
The selected interactions are scheduled for reprocessing.
When an auto-scoring job runs, QM does not evaluate all the auto scorecards. Auto scorecards have filters and only the interactions that meet the criteria for the scorecard get evaluated.
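As an illustration of this behavior, the sketch below shows a hypothetical filter check: an auto scorecard is evaluated against an interaction only when the interaction matches every filter criterion attached to the scorecard and the scorecard is Active. The structure and field names are assumptions for illustration, not QM's internal logic.

```python
def matches_filters(interaction, filters):
    """Return True when the interaction satisfies every filter criterion."""
    return all(interaction.get(field) == expected for field, expected in filters.items())

def scorecards_to_run(interaction, auto_scorecards):
    """Select the Active auto scorecards whose filters the interaction meets."""
    return [
        scorecard
        for scorecard in auto_scorecards
        if scorecard["status"] == "Active"
        and matches_filters(interaction, scorecard["filters"])
    ]

# Example: only the billing scorecard applies to this interaction.
interaction = {"service": "Billing", "channel": "Voice"}
auto_scorecards = [
    {"name": "Billing QA", "status": "Active", "filters": {"service": "Billing"}},
    {"name": "Sales QA", "status": "Active", "filters": {"service": "Sales"}},
]
print([s["name"] for s in scorecards_to_run(interaction, auto_scorecards)])
# ['Billing QA']
```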
To review a recorded interaction or perform manual scoring, double-click the interaction that you want to review.
Interaction Review
The Interaction Review screen provides comprehensive details related to the interaction such as the transcript and evaluation information. It also enables you to leave comments for other assessors, arbitrators, managers, and agents and contribute the evaluation to the e-learning library, which serves as a resource for training and quality management purposes.
For voice calls, you can listen to the call recording and also view the visualization of the selected voice interaction in a waveform.
To review a recorded interaction or perform manual scoring, double-click the interaction that you want to review. The Interaction Review window is displayed with detailed information about the interaction between the agent and the customer.
You can use the forward/backward arrows in the upper-right corner of the window to move to the next or previous recording.
The banner displays the name of the agent who performed the interaction with the customer, the date and time of the interaction, the phone number of the customer, the agent sentiment, and the customer sentiment.
You can download the interaction in .mp3 format using the download button.
You can download the transcript of the interaction in .pdf format. Under the Transcript tab, click the download button to download the transcript.
To listen to an audio interaction:
On the Interaction Review screen, click the play button to listen to the recording. The length of the audio is displayed next to the button.
Click the replay control to play the recording from the beginning.
You can adjust the playback speed by clicking the drop-down arrow and then selecting the preferred speed. You can make the audio slower (down to 0.50x) or faster (up to 2.00x) as required.
To leave a comment on an audio interaction:
On the Interaction Review screen, click the comment button. The Comment pane is displayed.
In the Enter your comment box, enter your comment.
To mark a specific time range in the call to which the audio note is applicable, select the Range checkbox, and then select the Start time and End time.
To use the audio note for coaching agents, select the Coaching checkbox.
From the Category drop-down list, select an audio note category.
Click Register.
To leave a comment while listening to the recording without clicking the play/pause icon, use the Comment & Pause option.
You can use the Actions Panel to search for keywords, add tags, and view or provide audio notes. To access the Actions Panel, on the Search and Score window, click the Actions Panel button.
Keyword Search: Enables you to search for keywords while listening to the call recording by typing keywords, separated by commas, or by selecting an existing keyword list from the drop-down list. You can filter the search results by keywords used by the customer, the agent, or either. You can search for any word, including the configured keywords. If the keyword is part of the interaction, it is highlighted in the interaction along with details of the intent. A sketch illustrating this kind of matching follows this list.
Tags: Enables you to select tags from the drop-down list to group this interaction with related interactions.
Audio Notes: Displays the audio notes created by all the assessors.
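The sketch below illustrates the kind of matching the Keyword Search performs: the comma-separated keywords are split, optionally restricted to the customer or agent side of the transcript, and any hit is reported with its position so it can be highlighted. All names here are illustrative assumptions, not QM's implementation.

```python
def keyword_hits(transcript, keywords, speaker_filter=None):
    """Find keyword occurrences in a transcript.

    transcript: list of {"speaker": "agent"|"customer", "text": str} segments.
    keywords: comma-separated string, e.g. "refund, cancel".
    speaker_filter: "agent", "customer", or None for either speaker.
    """
    terms = [term.strip().lower() for term in keywords.split(",") if term.strip()]
    hits = []
    for index, segment in enumerate(transcript):
        if speaker_filter and segment["speaker"] != speaker_filter:
            continue
        text = segment["text"].lower()
        for term in terms:
            if term in text:
                hits.append({"segment": index, "speaker": segment["speaker"], "keyword": term})
    return hits

# Example: search for "refund, cancel" in customer speech only.
transcript = [
    {"speaker": "customer", "text": "I want a refund for last month."},
    {"speaker": "agent", "text": "I can help you cancel the charge."},
]
print(keyword_hits(transcript, "refund, cancel", speaker_filter="customer"))
# [{'segment': 0, 'speaker': 'customer', 'keyword': 'refund'}]
```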
You can view the detailed metrics and metadata related to the interaction as well as the recording using the different tabs on the Interaction Review window.
Following are the different tabs on the Interaction Review screen:
Evaluations: Displays the evaluations provided by all assessors. The Evaluations tab is visible only if you are an assessor or an arbitrator. You can use this tab to view details of the scoring of an interaction and also score an interaction manually. For more information, see Manual Scoring.
Transcript: Provides a detailed text transcription of the voice call.
Analysis: Provides automatic call analysis information such as sentiment, silence duration, talk over, and hold time.
Information: Displays general information about the interaction, such as the interaction ID, agent information, call session ID, date of the call, call center, and service information. The Call Summary section displays a summary of the customer interaction with information such as the reason for the interaction, actions taken, and proposed next steps. The call summary is available for both real-time and non-real-time interaction types, including Voice, SMS, Email, and Chat.
Metadata: Displays the metadata received with the recording and all other information related to the interaction.
Related Interactions: Displays the other interactions related to the recording (for example, a call recording from the same agent).
Interaction Intents: Displays the possible call drivers populated by the system based on the configured keywords or keyword lists.
Agent Assist: Displays details of the rules applied during the interaction.
History: Displays the history of views and evaluations by all users on the interaction.
Video: Displays the screen recording of the interaction. This tab is active only if you have enabled screen recording.
If you are assigned as an assessor or an arbitrator, you can manually evaluate an interaction in the Interaction Review window of SpeechIQ by using the configured scorecards.
For information about how to assign assessors and arbitrators, see User Maintenance.
To score or evaluate an interaction:
On the WFO tab, click SpeechIQ > Search and Score.
In the Search and Score window, double-click the interaction that you want to evaluate. The Interaction Review window appears, displaying a waveform visualization of the voice interaction between the customer and the agent.
Click the play/pause icon to listen to the recording.
On the Evaluations tab, click New. A new evaluation section appears. In the Scorecard field, select a scorecard.
If the New button is inactive, go to Conversation Configuration > User Maintenance to verify that you are assigned as an Assessor or an Arbitrator. For more information about your user roles, see User Maintenance.
Alternatively:
Click the Filter icon.
Select the scorecard category from the Category drop-down list.
Select the Show Services Assigned Only checkbox if you want to filter for scorecards that are assigned to a service.
Click Filter. The questions added to the selected scorecard appear.
For information about how to create a scorecard, add grades and questions to a scorecard, and assign scorecards, see Scorecards.
Select the required answers for the questions. The grade and score appear on the header row; a sketch illustrating one way a score and grade can be derived follows these steps.
To add comments, double-click the question, and then enter comments in the Comments section.
In the General Comment and Question Comment sections, you can enter general comments related to the interaction and comments specific to the scorecard questions, respectively.
Click Save. Your evaluation report is added to the interaction.
To delete an evaluation report, select the evaluation report by clicking the checkbox, and then click Delete.
To export an evaluation report to a Microsoft Excel workbook, click Export.
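As a rough illustration of how a scorecard score and grade can be derived from the selected answers, the sketch below sums the points earned across the answered questions, expresses them as a percentage of the maximum possible points, and maps that percentage to a grade band. This is only one plausible scheme, stated as an assumption; the actual calculation is defined by the scorecard configuration (see Scorecards).

```python
def evaluate_scorecard(answers, grade_bands):
    """Compute a percentage score and grade from scorecard answers.

    answers: list of (points_earned, points_possible) per question.
    grade_bands: list of (minimum_percentage, grade) ordered from highest to lowest.
    """
    earned = sum(points for points, _ in answers)
    possible = sum(maximum for _, maximum in answers)
    score = 100.0 * earned / possible if possible else 0.0
    grade = next(g for minimum, g in grade_bands if score >= minimum)
    return score, grade

# Example: three questions worth 10 points each, 25 points earned -> 83.3%, grade B.
answers = [(10, 10), (10, 10), (5, 10)]
grade_bands = [(90, "A"), (75, "B"), (60, "C"), (0, "D")]
print(evaluate_scorecard(answers, grade_bands))
```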
To send the evaluation report for coaching the agent:
On the Evaluations tab, click Send.
Select Coach Task.
Define the priority and the due date.
Click Create. The learning task is assigned to the agent.
To send the evaluation report to the e-learning library:
On the Evaluations tab, click Send.
Select e-Learning Library.
Enter a name for the training.
Select the category and the section.
Click Create. The learning module is added to the e-learning library.