Auto saved by Logseq

This commit is contained in:
Matthias Eckert 2023-03-06 15:28:29 +01:00
parent 8a3eeffa58
commit da4ce6ef7d

- https://www.chefkoch.de/rezepte/1481031253346883/Schwaebischer-Salzkuchen.html [[Rezepte]]
- API Questions:
- What guarantees are given on the data on the events endpoint? Is ID increasing all the time (in increments of 1)?
- Event IDs are strictly monotonically increasing. We don't guarantee increments of 1 though. The gaps in the offsets are due to the exactly-once semantics of the Kafka stream computing the events.
- https://stackoverflow.com/questions/54636524/kafka-streams-does-not-increment-offset-by-1-when-producing-to-topic/54637004#54637004
- The only guarantee you get is that each offset is unique within a partition.
- What about the missing event ID 119434986? Is that a bug or can events have missing IDs in the increasing series?
- Not a bug. As stated above, it is due to the Kafka stream.
- What is createdAt referring to? (I assumed it's the time the event happened, but then later IDs should have higher timestamps.) Why can it decrease across events with later IDs? How does it relate to the document timestamps? (We established in earlier conversations that a document's updatedAt can differ from the event's createdAt because they are written by different processes.)
- How am I supposed to know that I got all events if there are missing ones and the timestamps and/or IDs are not monotonically increasing?
- Also, when you replay events, do you also replay the data? For example, is document 7a233a97-1f18-3792-b287-c7b35f601f0b then also not yet updated on the documents endpoint, so that we need to wait 15 minutes from some timestamp for it to update? (In that case I have no idea how to determine that timestamp, other than retrying for some time.)
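- Given the answers above (IDs strictly increasing, gaps normal, timestamps unreliable for ordering), completeness can only be tracked with an ID cursor, never by counting. A minimal sketch of a gap-tolerant consumer; `fetch_events` is a hypothetical stand-in for the real events endpoint:

```python
def consume(fetch_events, after_id=0):
    """Yield events in ID order.

    Assumption (hypothetical API shape): fetch_events(after_id) returns the
    next batch of events with id > after_id in ascending order, or an empty
    list when caught up. The only invariant we rely on is the one the API
    guarantees: IDs are strictly monotonically increasing, gaps included.
    """
    last_id = after_id
    while True:
        batch = fetch_events(last_id)
        if not batch:
            return
        for event in batch:
            # Gaps (e.g. 119434985 -> 119434987) are expected; only a
            # non-increasing ID would indicate a real problem.
            if event["id"] <= last_id:
                raise ValueError("ordering guarantee violated")
            last_id = event["id"]
            yield event
```

  The cursor (`last_id`) is the only state worth persisting between runs; a missing ID like 119434986 simply never appears and needs no special handling.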