Microsoft Azure Question Answering Client Library for Python
Question Answering is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users' questions automatically with the best answers from the QnAs in your knowledge base. Your knowledge base also gets smarter over time as it continually learns from users' behavior.
Source code | Package (PyPI) | API reference documentation | Product documentation | Samples
Azure SDK Python packages' support for Python 2.7 ended on 01 January 2022. For more information and questions, please refer to https://github.com/Azure/azure-sdk-for-python/issues/20691
Install the Azure Question Answering client library for Python with pip:
pip install azure-ai-language-questionanswering
Note: this version of the client library defaults to the service API version 2021-10-01.
In order to interact with the Question Answering service, you'll need to create an instance of the QuestionAnsweringClient class, or an instance of the AuthoringClient for managing projects within your resource. You will need an endpoint and an API key to instantiate a client object. For more information regarding authenticating with Cognitive Services, see Authenticate requests to Azure Cognitive Services.
You can get the endpoint and an API key from the Language resource in the Azure Portal.
Alternatively, use the Azure CLI command shown below to get the API key from the Language resource.
az cognitiveservices account keys list --resource-group <resource-group-name> --name <resource-name>
Once you've determined your endpoint and API key, you can instantiate a QuestionAnsweringClient:
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient
endpoint = "https://{myaccount}.api.cognitive.microsoft.com"
credential = AzureKeyCredential("{api-key}")
client = QuestionAnsweringClient(endpoint, credential)
With your endpoint and API key, you can instantiate an AuthoringClient:
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering.authoring import AuthoringClient
endpoint = "https://{myaccount}.api.cognitive.microsoft.com"
credential = AzureKeyCredential("{api-key}")
client = AuthoringClient(endpoint, credential)
To use an Azure Active Directory (AAD) token credential, provide an instance of the desired credential type obtained from the azure-identity library. Note that regional endpoints do not support AAD authentication. Create a custom subdomain name for your resource in order to use this type of authentication.
Authentication with AAD requires some initial setup:
After setup, you can choose which type of credential from azure.identity to use. As an example, DefaultAzureCredential can be used to authenticate the client:
Set the values of the client ID, tenant ID, and client secret of the AAD application as environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET
Use the returned token credential to authenticate the client:
from azure.ai.language.questionanswering import QuestionAnsweringClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = QuestionAnsweringClient(endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/", credential=credential)
The QuestionAnsweringClient is the primary interface for asking questions using a knowledge base with your own information, or text input using pre-trained models.
For asynchronous operations, an async QuestionAnsweringClient is available in the azure.ai.language.questionanswering.aio namespace.
The AuthoringClient provides an interface for managing Question Answering projects. Examples of the available operations include creating and deploying projects, updating your knowledge sources, and updating question and answer pairs. It provides both synchronous and asynchronous APIs.
The azure-ai-language-questionanswering client library provides both synchronous and asynchronous APIs.
The only input required to ask a question using a knowledge base is the question itself:
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
client = QuestionAnsweringClient(endpoint, AzureKeyCredential(key))
output = client.get_answers(
question="How long should my Surface battery last?",
project_name="FAQ",
deployment_name="test"
)
for candidate in output.answers:
print("({}) {}".format(candidate.confidence, candidate.answer))
print("Source: {}".format(candidate.source))
You can set additional keyword options to limit the number of answers, specify a minimum confidence score, and more.
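For example, the following sketch limits the response to the top three answers and sets a minimum confidence score (the top and confidence_threshold keyword arguments are taken from the package's keyword options; check the API reference for the names supported by your library version):
output = client.get_answers(
    question="How long should my Surface battery last?",
    project_name="FAQ",
    deployment_name="test",
    top=3,                      # return at most three answer candidates
    confidence_threshold=0.5    # only return answers scoring at or above 0.5
)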
If your knowledge base is configured for chit-chat, the answers from the knowledge base may include suggested prompts for follow-up questions to initiate a conversation. You can ask a follow-up question by providing the ID of your chosen answer as the context for the continued conversation:
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient
from azure.ai.language.questionanswering import models
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
client = QuestionAnsweringClient(endpoint, AzureKeyCredential(key))
# previous_answer is an answer returned by an earlier call to client.get_answers
output = client.get_answers(
question="How long should charging take?",
answer_context=models.KnowledgeBaseAnswerContext(
previous_qna_id=previous_answer.qna_id
),
project_name="FAQ",
deployment_name="live"
)
for candidate in output.answers:
print("({}) {}".format(candidate.confidence, candidate.answer))
print("Source: {}".format(candidate.source))
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering.authoring import AuthoringClient
# get service secrets
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
# create client
client = AuthoringClient(endpoint, AzureKeyCredential(key))
with client:
# create project
project_name = "IssacNewton"
project = client.create_project(
project_name=project_name,
options={
"description": "biography of Sir Issac Newton",
"language": "en",
"multilingualResource": True,
"settings": {
"defaultAnswer": "no answer"
}
})
print("view created project info:")
print("\tname: {}".format(project["projectName"]))
print("\tlanguage: {}".format(project["language"]))
print("\tdescription: {}".format(project["description"]))
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering.authoring import AuthoringClient
# get service secrets
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
# create client
client = AuthoringClient(endpoint, AzureKeyCredential(key))
project_name = "IssacNewton"
update_sources_poller = client.begin_update_sources(
project_name=project_name,
sources=[
{
"op": "add",
"value": {
"displayName": "Issac Newton Bio",
"sourceUri": "https://wikipedia.org/wiki/Isaac_Newton",
"sourceKind": "url"
}
}
]
)
update_sources_poller.result()
# list sources
print("list project sources")
sources = client.list_sources(
project_name=project_name
)
for source in sources:
print("project: {}".format(source["displayName"]))
print("\tsource: {}".format(source["source"]))
print("\tsource Uri: {}".format(source["sourceUri"]))
print("\tsource kind: {}".format(source["sourceKind"]))
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering.authoring import AuthoringClient
# get service secrets
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
# create client
client = AuthoringClient(endpoint, AzureKeyCredential(key))
project_name = "IssacNewton"
# deploy project
deployment_poller = client.begin_deploy_project(
project_name=project_name,
deployment_name="production"
)
deployment_poller.result()
# list all deployments
deployments = client.list_deployments(
project_name=project_name
)
print("view project deployments")
for d in deployments:
print(d)
The above examples can also be run asynchronously using the clients in the aio namespace:
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering.aio import QuestionAnsweringClient
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
client = QuestionAnsweringClient(endpoint, AzureKeyCredential(key))
# await must be used inside an async function (for example, one driven by asyncio.run)
output = await client.get_answers(
question="How long should my Surface battery last?",
project_name="FAQ",
deployment_name="production"
)
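The authoring operations can also be run asynchronously; a minimal sketch, assuming the async AuthoringClient lives in the azure.ai.language.questionanswering.authoring.aio namespace:
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering.authoring.aio import AuthoringClient
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
client = AuthoringClient(endpoint, AzureKeyCredential(key))
async with client:
    # long-running operations return async pollers; await both the call and the result
    deployment_poller = await client.begin_deploy_project(
        project_name="IsaacNewton",
        deployment_name="production"
    )
    await deployment_poller.result()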
Optional keyword arguments can be passed in at the client and per-operation level. The azure-core reference documentation describes available configurations for retries, logging, transport protocols, and more.
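For example, a sketch that sets a client-wide retry count and enables HTTP logging for a single call (retry_total and logging_enable are standard azure-core options; see the azure-core documentation for the full list):
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
# client-level settings apply to every operation issued by this client
client = QuestionAnsweringClient(
    endpoint,
    AzureKeyCredential(key),
    retry_total=5           # azure-core retry policy: total number of retries
)
# per-operation settings override the client defaults for a single call
output = client.get_answers(
    question="How long should my Surface battery last?",
    project_name="FAQ",
    deployment_name="test",
    logging_enable=True     # log HTTP request/response details for this call only
)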
Azure Question Answering clients raise exceptions defined in Azure Core. When you interact with the Cognitive Language Service Question Answering client library using the Python SDK, errors returned by the service correspond to the same HTTP status codes returned for REST API requests.
For example, if you submit a question to a non-existent knowledge base, a 400 error is returned indicating "Bad Request".
from azure.core.exceptions import HttpResponseError
try:
client.get_answers(
question="Why?",
project_name="invalid-knowledge-base",
deployment_name="test"
)
except HttpResponseError as error:
print("Query failed: {}".format(error.message))
This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.
Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable argument.
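For example, a minimal sketch that routes the Azure SDK loggers to stdout and turns on verbose output for a client:
import logging
import os
import sys
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient
# configure the shared "azure" logger used by the SDK
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream=sys.stdout))
endpoint = os.environ["AZURE_QUESTIONANSWERING_ENDPOINT"]
key = os.environ["AZURE_QUESTIONANSWERING_KEY"]
# logging_enable=True enables DEBUG-level request/response logging for this client
client = QuestionAnsweringClient(endpoint, AzureKeyCredential(key), logging_enable=True)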
See full SDK logging documentation with examples here.
See the CONTRIBUTING.md for details on building, testing, and contributing to this library.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.