Getting started
Get started by connecting your application to EventStoreDB.
Connecting to EventStoreDB
For your application to start communicating with EventStoreDB, you need to instantiate the client and configure it accordingly. Below, you will find instructions for the supported SDKs.
Insecure Clusters
All our gRPC clients are secure by default and must be explicitly configured to connect to an insecure server, either via the connection string or the client's configuration.
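For example, assuming a single node running locally on the default port, an insecure connection string looks like this (the format is explained in detail below):
esdb://localhost:2113?tls=false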
Required packages
Install the client SDK package into your project.
Python:
# From PyPI
$ pip install esdbclient
# With Poetry
$ poetry add esdbclient
JavaScript:
# Yarn
$ yarn add @eventstore/db-client
# NPM
$ npm install --save @eventstore/db-client
TypeScript:
# TypeScript declarations are included in the package.
# Yarn
$ yarn add @eventstore/db-client
# NPM
$ npm install --save @eventstore/db-client
Java:
# Maven
<dependency>
<groupId>com.eventstore</groupId>
<artifactId>db-client-java</artifactId>
<version>5.2.0</version>
</dependency>
# Gradle
implementation 'com.eventstore:db-client-java:5.2.0'
C#:
$ dotnet add package EventStore.Client.Grpc.Streams --version 23.1.0
Go:
$ go get github.com/EventStore/EventStore-Client-Go/v3@v3.2.0
Rust:
No additional configuration is needed beyond having Rust installed. See https://rustup.rs to get started.
Connection string
Each SDK has its own way to configure the client, but it's always possible to use the connection string. The EventStoreDB connection string supports two schemas: esdb:// for connecting to a single-node server, and esdb+discover:// for connecting to a multi-node cluster. The difference between the two schemas is that when using esdb://, the client will connect directly to the node; with the esdb+discover:// schema, the client will use the gossip protocol to retrieve the cluster information and choose the right node to connect to. Since version 22.10, EventStoreDB supports gossip on single-node deployments, so the esdb+discover:// schema can be used for connecting to any topology.
The connection string has the following format:
esdb+discover://admin:changeit@cluster.dns.name:2113
Here, cluster.dns.name is the name of a DNS A record that points to all the cluster nodes. Alternatively, you can list the cluster nodes separated by commas instead of using the cluster DNS name:
esdb+discover://admin:changeit@node1.dns.name:2113,node2.dns.name:2113,node3.dns.name:2113
There are a number of query parameters that can be used in the connection string to instruct the client how and where the connection should be established. All query parameters are optional; a combined example follows the table.
Parameter | Accepted values | Default | Description |
---|---|---|---|
tls | true, false | true | Use a secure connection; set to false when connecting to a non-secure server or cluster. |
connectionName | Any string | None | Connection name. |
maxDiscoverAttempts | Number | 10 | Number of attempts to discover the cluster. |
discoveryInterval | Number | 100 | Cluster discovery polling interval in milliseconds. |
gossipTimeout | Number | 5 | Gossip timeout in seconds; when the gossip call times out, it is retried. |
nodePreference | leader, follower, random, readOnlyReplica | leader | Preferred node role. When creating a client for write operations, always use leader. |
tlsVerifyCert | true, false | true | In secure mode, set to false when using an untrusted connection to the node if you don't have the CA file available. Don't use in production. |
tlsCaFile | String, file path | None | Path to the CA file when connecting to a secure cluster with a certificate that's not signed by a trusted CA. |
defaultDeadline | Number | None | Default timeout for client operations, in milliseconds. Most clients allow overriding the deadline per operation. |
keepAliveInterval | Number | 10 | Interval between keep-alive ping calls, in seconds. |
keepAliveTimeout | Number | 10 | Keep-alive ping call timeout, in seconds. |
userCertFile | String, file path | None | User certificate file for X.509 authentication. |
userKeyFile | String, file path | None | Key file for the user certificate used for X.509 authentication. |
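For example, the following connection string (a hypothetical combination shown only to illustrate the syntax) discovers a cluster via DNS, prefers follower nodes, names the connection, and sets a five-second default deadline:
esdb+discover://admin:changeit@cluster.dns.name:2113?nodePreference=follower&connectionName=getting-started&defaultDeadline=5000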
When connecting to an insecure instance, specify the tls=false parameter. For example, for a node running locally use esdb://localhost:2113?tls=false. Note that a username and password aren't provided here because insecure deployments don't support authentication and authorisation.
Creating a client
First, let's create a client and get it connected to the database.
Python:
client = EventStoreDBClient(
uri="{connectionString}"
)
JavaScript:
const client = EventStoreDBClient.connectionString`{connectionString}`;
TypeScript:
const client = EventStoreDBClient.connectionString`{connectionString}`;
Java:
EventStoreDBClientSettings settings = EventStoreDBConnectionString.parseOrThrow("{connectionString}");
EventStoreDBClient client = EventStoreDBClient.create(settings);
C#:
const string connectionString = "esdb://admin:changeit@localhost:2113?tls=false&tlsVerifyCert=false";
var settings = EventStoreClientSettings.Create(connectionString);
var client = new EventStoreClient(settings);
Go:
settings, err := esdb.ParseConnectionString("{connectionString}")
if err != nil {
panic(err)
}
db, err := esdb.NewClient(settings)
Rust:
let settings = "{connectionString}".parse()?;
let client = Client::new(settings)?;
The client instance can be used as a singleton across the whole application. It doesn't need to open or close the connection.
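As a minimal sketch of that pattern in Python, assuming the esdbclient package shown above and a hypothetical module name, the client can be constructed once at module level and imported wherever it is needed:
# eventstore.py (hypothetical module name)
from esdbclient import EventStoreDBClient

# Construct the client once; other modules import this instance
# instead of creating their own. The insecure local connection
# string is an assumption for illustration only.
client = EventStoreDBClient(uri="esdb://localhost:2113?tls=false")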
Creating an event
You can write anything to EventStoreDB as events. The client needs a byte array as the event payload. Normally, you'd use a serialized object, and it's up to you to choose the serialization method (a Python sketch showing explicit JSON serialization follows the snippets below).
Server-side projections
User-defined server-side projections require events to be serialized to JSON format.
We use JSON for serialization in the documentation examples.
The code snippet below creates an event object instance, serializes it, and passes it as the payload of the EventData structure, which the client is able to write to the database.
Python:
new_event = NewEvent(
id=uuid4(),
type="TestEvent",
data=b"I wrote my first event",
)
JavaScript:
const event = jsonEvent({
type: "TestEvent",
data: {
entityId: uuid(),
importantData: "I wrote my first event!",
},
});
TypeScript:
type TestEvent = JSONEventType<
"TestEvent",
{
entityId: string;
importantData: string;
}
>;
const event = jsonEvent<TestEvent>({
type: "TestEvent",
data: {
entityId: uuid(),
importantData: "I wrote my first event!",
},
});
Java:
TestEvent event = new TestEvent();
JsonMapper jsonMapper = new JsonMapper();
event.setId(UUID.randomUUID().toString());
event.setImportantData("I wrote my first event!");
EventData eventData = EventData
.builderAsJson("TestEvent", jsonMapper.writeValueAsBytes(event))
.build();
C#:
var evt = new TestEvent {
EntityId = Guid.NewGuid().ToString("N"),
ImportantData = "I wrote my first event!"
};
var eventData = new EventData(
Uuid.NewUuid(),
"TestEvent",
JsonSerializer.SerializeToUtf8Bytes(evt)
);
Go:
testEvent := TestEvent{
Id: uuid.NewString(),
ImportantData: "I wrote my first event!",
}
data, err := json.Marshal(testEvent)
if err != nil {
panic(err)
}
eventData := esdb.EventData{
ContentType: esdb.ContentTypeJson,
EventType: "TestEvent",
Data: data,
}
Rust:
let event = TestEvent {
id: Uuid::new_v4().to_string(),
important_data: "I wrote my first event!".to_string(),
};
let event_data = EventData::json("TestEvent", event)?.id(Uuid::new_v4());
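Because the documentation examples use JSON, the raw byte payload in the Python snippet above can also be produced by serializing a dictionary first. A minimal sketch, assuming the NewEvent type shown above accepts any byte string as data:
import json
from uuid import uuid4

from esdbclient import NewEvent

# Serialize a plain dictionary to UTF-8 JSON bytes for the event payload.
payload = json.dumps({
    "entityId": str(uuid4()),
    "importantData": "I wrote my first event!",
}).encode("utf-8")

new_event = NewEvent(
    id=uuid4(),
    type="TestEvent",
    data=payload,
)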
Appending events
Each event in the database has its own unique identifier (UUID). The database uses it to ensure idempotent writes, but it only works if you specify the stream revision when appending events to the stream.
In the snippet below, we append the event to the stream some-stream.
Python:
client.append_to_stream(
"some-stream",
events=[new_event],
current_version=StreamState.ANY,
)
JavaScript:
await client.appendToStream(STREAM_NAME, event);
TypeScript:
await client.appendToStream(STREAM_NAME, event);
client.appendToStream("some-stream", eventData)
.get();
C#:
await client.AppendToStreamAsync(
"some-stream",
StreamState.Any,
new[] { eventData },
cancellationToken: cancellationToken
);
Go:
_, err = db.AppendToStream(context.Background(), "some-stream", esdb.AppendToStreamOptions{}, eventData)
Rust:
client
.append_to_stream("some-stream", &Default::default(), event_data)
.await?;
Here we are appending events without checking if the stream exists or if the stream version matches the expected event version. See more advanced scenarios in the appending events documentation.
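For illustration, here is a hedged Python sketch of an optimistic-concurrency append. It assumes the current_version parameter shown above also accepts StreamState.NO_STREAM to require that the stream does not exist yet; the stream name is made up for the example:
from uuid import uuid4

from esdbclient import EventStoreDBClient, NewEvent, StreamState

client = EventStoreDBClient(uri="esdb://localhost:2113?tls=false")

event = NewEvent(
    id=uuid4(),
    type="TestEvent",
    data=b'{"importantData": "I wrote my first event!"}',
)

# The append is rejected with a wrong-expected-version error
# if "another-stream" already contains events.
client.append_to_stream(
    "another-stream",
    events=[event],
    current_version=StreamState.NO_STREAM,
)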
Reading events
Finally, we can read events back from the some-stream stream.
Python:
events = client.get_stream("some-stream")
for event in events:
# Doing something productive with the event
print(event)
JavaScript:
const events = client.readStream(STREAM_NAME, {
direction: FORWARDS,
fromRevision: START,
maxCount: 10,
});
TypeScript:
const events = client.readStream<TestEvent>(STREAM_NAME, {
direction: FORWARDS,
fromRevision: START,
maxCount: 10,
});
Java:
ReadStreamOptions options = ReadStreamOptions.get()
.forwards()
.fromStart()
.maxCount(10);
ReadResult result = client.readStream("some-stream", options)
.get();
C#:
var result = client.ReadStreamAsync(
Direction.Forwards,
"some-stream",
StreamPosition.Start,
cancellationToken: cancellationToken
);
var events = await result.ToListAsync(cancellationToken);
Go:
stream, err := db.ReadStream(context.Background(), "some-stream", esdb.ReadStreamOptions{}, 10)
if err != nil {
panic(err)
}
defer stream.Close()
for {
event, err := stream.Recv()
if errors.Is(err, io.EOF) {
break
}
if err != nil {
panic(err)
}
// Doing something productive with the event
fmt.Println(event)
}
Rust:
let options = ReadStreamOptions::default().max_count(10);
let mut stream = client.read_stream("some-stream", &options).await?;
while let Some(event) = stream.next().await? {
// Doing something productive with the events.
}
When you read events from the stream, you get a collection of ResolvedEvent structures. The event payload is returned as a byte array and needs to be deserialized. See more advanced scenarios in the reading events documentation.
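As a minimal Python sketch, assuming each returned event exposes its event type and recorded payload bytes as type and data attributes, the JSON payload can be deserialized like this:
import json

# Read the stream and turn each JSON payload back into a dictionary.
for event in client.get_stream("some-stream"):
    body = json.loads(event.data)
    print(event.type, body)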