Proprietary data connectors that feed both data and transactional systems
Hey YC!
Edgar Cabrera and I are excited to launch MovingLake today.
TLDR: MovingLake does ETL for event-driven architectures. Power your data warehouse as well as your microservices with the same API connectors.
After talking to dozens of data engineers and technology stakeholders, and drawing on our own past experience, we uncovered a set of intertwined problems with current data connectors. Our answer:
Realtime, event-driven, deduplicated, replayable, ELT-friendly, and SDK-ready connectors that feed both data and transactional systems at the same time.
We are rolling out API connectors that let you fan events out to any number of destinations, including both data and backend systems. On top of that, we provide transformations on the data side so the data lands in nicely formatted tables, and we aim to ship easy-to-use SDKs so that consuming MovingLake's webhooks is as fast and painless as possible.
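To make the webhook-consumption side concrete, here is a minimal sketch of a consumer that stays idempotent under replays and retried deliveries. The endpoint path, the payload fields (`event_id`, `data`), and the in-memory dedup set are illustrative assumptions, not MovingLake's documented contract:

```python
# Minimal sketch of a deduplicating webhook consumer.
# Payload shape ({"event_id": ..., "data": ...}) and the endpoint path
# are assumptions for illustration only.
from flask import Flask, request, jsonify

app = Flask(__name__)

# In production this would be a durable store (e.g. Redis or a database
# table), not process memory.
seen_event_ids = set()


def process(record):
    # Your own transactional logic goes here.
    print("received record:", record)


@app.route("/webhooks/movinglake", methods=["POST"])
def handle_event():
    event = request.get_json(force=True)
    event_id = event.get("event_id")

    # Deduplicate: replayed or retried deliveries carry the same
    # event_id, so processing the stream stays idempotent.
    if event_id in seen_event_ids:
        return jsonify({"status": "duplicate"}), 200
    seen_event_ids.add(event_id)

    process(event.get("data"))
    return jsonify({"status": "accepted"}), 200


if __name__ == "__main__":
    app.run(port=8080)
```

Because every delivery carries a stable event ID, replaying a connector's history is safe: duplicates are acknowledged and dropped rather than processed twice.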
❓ What’s the ask?
Do you have recurring issues with data reliability? Do you end up writing batch scripts to recover data lost from webhooks? Do you struggle to run your machine learning models on data from batch pipelines? Reach out to founders@movinglake.com or contact us through our website.
Thanks!