Streamstone
Streamstone is a tiny embeddable library targeted at building scalable event-sourced applications on top of Azure Table Storage. It has a simple, functional-style API, heavily inspired by Greg Young's Event Store.
- Fully ACID compliant
- Optimistic concurrency support
- Duplicate event detection (based on identity)
- Automatic continuation for both writes and reads (over Azure Table Storage batch and query limits)
- Custom stream and event properties you can query on
- Synchronous (inline) projections and snapshots
- Change tracking support for inline projections
- Friendly for multi-tenant designs
- Sharding support (jump consistent hashing)
- Compatible with .NET Standard 2.0 and .NET Framework 4.6
To install Streamstone via NuGet, run this command in the NuGet Package Manager Console:
PM> Install-Package Streamstone
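Equivalently, using the .NET Core CLI (assuming a project file in the current directory):

```shell
dotnet add package Streamstone
```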
To build Streamstone binaries on Windows you will need Visual Studio 2017 Update 3 or higher and .NET Core SDK 2.0 or higher. To build binaries on Linux, use the `dotnet` CLI tooling (i.e. `dotnet build`).
Running unit tests
Unit tests require Azure Storage Emulator 5.2 or higher, which is currently available only on Windows. Alternatively, you can run them against real Azure storage by setting the storage account connection string in the Streamstone-Test-Storage user-level environment variable.
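For example, on Windows a user-level environment variable can be set with `setx` (the connection string below is a placeholder; substitute your own account name and key):

```shell
setx Streamstone-Test-Storage "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
```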
Streamstone is just a thin layer (a library, not a server) on top of Azure Table Storage. It implements the low-level mechanics for dealing with event streams, while all the heavy lifting is done by the underlying provider.
The API is stateless and all exposed objects are immutable once fully constructed. Streamstone doesn't dictate a payload serialization protocol, so you are free to choose any protocol you want.
Optimistic concurrency is implemented by including the stream header entity with every write, making it impossible to append to a stream without first having the latest ETag. Duplicate event detection is done by automatically creating an additional entity for every event, with its RowKey set to the unique identifier of the source event (a consistent secondary index).
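Condensed from the example scenarios linked below, a provision-write-retry cycle looks roughly like this. This is a sketch assuming the pre-v3, WindowsAzure.Storage-based API; the table name, partition key, and event properties are illustrative, and exact signatures may differ between Streamstone versions:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using Streamstone;

class Demo
{
    static async Task Main()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var table = account.CreateCloudTableClient().GetTableReference("Example");
        await table.CreateIfNotExistsAsync();

        // A partition hosts one stream (or several, via virtual partitions)
        var partition = new Partition(table, "shopping-cart-42");

        // Provision a new stream (use Stream.TryOpenAsync / OpenAsync for existing ones)
        var stream = await Stream.ProvisionAsync(partition);

        var properties = new Dictionary<string, EntityProperty>
        {
            { "Type", new EntityProperty("ItemAdded") },
            { "Data", new EntityProperty("{\"sku\":\"ABC-1\"}") }
        };
        var @event = new EventData(EventId.From(Guid.NewGuid()), EventProperties.From(properties));

        try
        {
            await Stream.WriteAsync(stream, @event);
        }
        catch (ConcurrencyConflictException)
        {
            // Another writer bumped the stream header's ETag first:
            // re-open to get the latest version, then retry the append
            stream = await Stream.OpenAsync(partition);
        }
    }
}
```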
- Provisioning stream [see]
- Opening stream [[see](Source/Example/Scenarios/S02_Open_stream_for_writing.cs)]
- Writing to stream [[see](Source/Example/Scenarios/S04_Write_to_stream.cs)]
- Reading from stream [[see](Source/Example/Scenarios/S05_Read_from_stream.cs)]
- Additional entity includes [[see](Source/Example/Scenarios/S06_Include_additional_entities.cs)]
- Optimistic concurrency [[see](Source/Example/Scenarios/S08_Concurrency_conflicts.cs)]
- Handling duplicate events [[see](Source/Example/Scenarios/S09_Handling_duplicates.cs)]
- Custom stream metadata [[see](Source/Example/Scenarios/S07_Custom_stream_metadata.cs)]
- Virtual partitions [[see](Source/Streamstone.Tests/Scenarios/Virtual_partitions.cs)]
- Implementing stream directory [[see](Source/Example/Scenarios/S10_Stream_directory.cs)]
- Using snapshots [[see](Source/Example/Scenarios/S06_Include_additional_entities.cs)]
- Creating projections [see]
- Querying events [see]
- Classic Greg Young's CQRS demo using Streamstone [repo]
- Using Streamstone in stateful applications. Event-sourced actors for Project Orleans [see]
While Streamstone allows you to pass any number of events to `Stream.Write`, the maximum batch size imposed by Azure Table Storage is 100 entities, therefore:
- The batch is automatically flushed after every 99 events (100 minus 1 header entity)
- The batch is automatically flushed after every 49 events when event ids are set (100/2 minus 1 header entity)
- You will get back an `InvalidOperationException` when trying to write an event which, together with its includes, exceeds the maximum batch size limit
- The actual size in bytes of the event payload is not taken into account, so all limitations outlined below still apply
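The flush arithmetic above can be made explicit with a small helper. This is purely illustrative and not part of the Streamstone API:

```csharp
// Hypothetical helper illustrating the batch-flush arithmetic;
// not part of the Streamstone library.
const int MaxBatchEntities = 100; // Azure Table Storage batch limit
const int HeaderEntities = 1;     // stream header written with every batch

static int EventsPerBatch(bool withEventIds) =>
    withEventIds
        ? MaxBatchEntities / 2 - HeaderEntities // 49: each event also writes an id entity
        : MaxBatchEntities - HeaderEntities;    // 99: events plus the header
```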
Other limitations of the underlying Azure Table Storage API:
- Maximum size of a batch is 4 MB
- Maximum size of an entity is 1 MB
- Maximum size of a property is 64 KB
- Maximum length of a property name is 255 characters
- An entity can have up to 255 properties (including the 3 system properties)