# Microsoft Azure Event Hubs checkpointer implementation with Blob Storage Client Library for Python
Azure EventHubs Checkpoint Store is used for storing checkpoints while processing events from Azure Event Hubs. This Checkpoint Store package works as a plug-in package to `EventHubConsumerClient`. It uses Azure Storage Blob as the persistent store for maintaining checkpoints and partition ownership information.

Please note that this is an async library. For the sync version of the Azure EventHubs Checkpoint Store client library, please refer to azure-eventhub-checkpointstoreblob.
Source code | Package (PyPI) | API reference documentation | Azure Event Hubs documentation | Azure Storage documentation
## Prerequisites

- **Python 3.6 or later.**
- **Microsoft Azure subscription:** To use Azure services, including Azure Event Hubs, you'll need a subscription. If you do not have an existing Azure account, you may sign up for a free trial or use your MSDN subscriber benefits when you create an account.
- **Event Hubs namespace with an Event Hub:** To interact with Azure Event Hubs, you'll also need to have a namespace and Event Hub available. If you are not familiar with creating Azure resources, you may wish to follow the step-by-step guide for creating an Event Hub using the Azure portal. There, you can also find detailed instructions for using the Azure CLI, Azure PowerShell, or Azure Resource Manager (ARM) templates to create an Event Hub.
- **Azure Storage account:** You'll need an Azure Storage account and an Azure Blob Storage block blob container to store the checkpoint data. You may follow the guide for creating an Azure block blob storage account.
## Install the package

```
$ pip install azure-eventhub-checkpointstoreblob-aio
```
## Key concepts

### Checkpointing

Checkpointing is a process by which readers mark or commit their position within a partition event sequence. Checkpointing is the responsibility of the consumer and occurs on a per-partition basis within a consumer group. This means that for each consumer group, each partition reader must keep track of its current position in the event stream, and can inform the service when it considers the data stream complete. If a reader disconnects from a partition, when it reconnects it begins reading at the checkpoint that was previously submitted by the last reader of that partition in that consumer group. When the reader connects, it passes this offset to the Event Hub to specify the location at which to start reading.

In this way, checkpointing can both mark events as "complete" for downstream applications and provide resiliency if a failover occurs between readers running on different machines. It is also possible to return to older data by specifying a lower offset. Through this mechanism, checkpointing enables both failover resiliency and event stream replay.
### Offsets & sequence numbers

Both the offset and the sequence number refer to the position of an event within a partition. You can think of them as a client-side cursor. The offset is a byte numbering of the event. The offset or sequence number enables an event consumer (reader) to specify a point in the event stream from which to begin reading events. You can also specify a timestamp so that you receive only events enqueued after that timestamp. Consumers are responsible for storing their own offset values outside of the Event Hubs service. Within a partition, each event includes an offset, a sequence number, and the timestamp of when it was enqueued.
## Examples

### Create an `EventHubConsumerClient`

The easiest way to create an `EventHubConsumerClient` is to use a connection string.

```python
from azure.eventhub.aio import EventHubConsumerClient

eventhub_client = EventHubConsumerClient.from_connection_string(
    "my_eventhub_namespace_connection_string",
    "my_consumer_group",
    eventhub_name="my_eventhub",
)
```
For other ways of creating an `EventHubConsumerClient`, refer to the EventHubs library for more details.
### Consume events using a `BlobCheckpointStore` to do checkpointing

```python
import asyncio

from azure.eventhub.aio import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore

connection_str = '<< CONNECTION STRING FOR THE EVENT HUBS NAMESPACE >>'
consumer_group = '<< CONSUMER GROUP >>'
eventhub_name = '<< NAME OF THE EVENT HUB >>'
storage_connection_str = '<< CONNECTION STRING OF THE STORAGE >>'
container_name = '<< STORAGE CONTAINER NAME >>'


async def on_event(partition_context, event):
    # Put your code here.
    await partition_context.update_checkpoint(event)  # Or update_checkpoint every N events for better performance.


async def main():
    checkpoint_store = BlobCheckpointStore.from_connection_string(
        storage_connection_str,
        container_name,
    )
    client = EventHubConsumerClient.from_connection_string(
        connection_str,
        consumer_group,
        eventhub_name=eventhub_name,
        checkpoint_store=checkpoint_store,
    )
    async with client:
        await client.receive(on_event)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
```
### Use `BlobCheckpointStore` with a different version of the Azure Storage Service API

Some environments have different versions of the Azure Storage Service API. `BlobCheckpointStore` by default uses Storage Service API version 2019-07-07. To use it against a different version, specify `api_version` when you create the `BlobCheckpointStore` object.
## Troubleshooting

Enabling logging will be helpful for troubleshooting:

- Enable the `azure.eventhub.extensions.checkpointstoreblobaio` logger to collect traces from the library.
- Enable the `azure.eventhub` logger to collect traces from the main azure-eventhub library.
- Enable the `azure.eventhub.extensions.checkpointstoreblobaio._vendor.storage` logger to collect traces from the Azure Storage Blob library.
- Enable the `uamqp` logger to collect traces from the underlying uAMQP library.
- Enable AMQP frame-level trace by setting `logging_enable=True`
when creating the client.

## Next steps

Get started with our EventHubs Checkpoint Store async samples.
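As an illustration, the loggers listed under troubleshooting can be turned on with Python's standard `logging` module; the DEBUG level, handler, and format string below are arbitrary choices:

```python
import logging
import sys

# Send DEBUG-level traces from the relevant loggers to stderr.
handler = logging.StreamHandler(stream=sys.stderr)
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

for name in (
    "azure.eventhub",                                    # main client library
    "azure.eventhub.extensions.checkpointstoreblobaio",  # this checkpoint store package
    "uamqp",                                             # underlying AMQP transport
):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)
```

Remember that AMQP frame-level trace additionally requires `logging_enable=True` when the client is created.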
Reference documentation is available here.
If you encounter any bugs or have suggestions, please file an issue in the Issues section of the project.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
## Release History

This version and all future versions will require Python 2.7 or Python 3.6+; Python 3.5 is no longer supported.
**New features**

- `list_ownership`, `claim_ownership`, `update_checkpoint`, and `list_checkpoints` on `BlobCheckpointStore` now support taking `**kwargs`.

This version will be the last version to officially support Python 3.5; future versions will require Python 2.7 or Python 3.6+.
**Bug fixes**

**Bug fixes**

- `BlobCheckpointStore.list_ownership` and `BlobCheckpointStore.list_checkpoints` no longer trigger a `KeyError` caused by reading empty metadata of the parent node when working with Data Lake enabled Blob Storage.

**Bug fixes**
**New features**

- `api_version` of `BlobCheckpointStore` now supports older versions of the Azure Storage Service API.

Stable release. No new features or API changes.
**Breaking changes**

- `BlobPartitionManager` has been renamed to `BlobCheckpointStore`.
- `BlobCheckpointStore` has been updated to take the storage container details directly rather than an instance of `ContainerClient`.
- A `from_connection_string` constructor has been added for Blob Storage connection strings.
- Module `blobstoragepmaio` is now internal; all imports should be directly from `azure.eventhub.extensions.checkpointstoreblobaio`.
- `BlobCheckpointStore` now has a `close()` function for shutting down an HTTP connection pool; additionally, the object can be used in a context manager to manage the connection.

**New features**

- Added `list_checkpoints`, which lists all the checkpoints under the given Event Hubs namespace, Event Hub name, and consumer group.

This release has trivial internal changes only. No feature changes.
**New features**

- Added `BlobPartitionManager`, which uses Azure Blob Storage block blobs to store EventProcessor checkpoint data.