DataTorrent RTS

The industry’s only open source, enterprise-grade unified stream and batch platform

Open Source Enterprise-Grade Platform
Enterprise Security Integration
Data Integration & Analytics
Graphical Application Assembly
Self-service Real-Time Data Visualization
Data Ingestion and Distribution for Hadoop

DataTorrent Customers

PubMatic

“DataTorrent RTS, which runs on Amazon EMR, is powering PubMatic’s real-time ad analytics platform, enabling publishers to drive the highest value for their digital media assets. It also enables advertisers to provide consumers with a more personalized advertising experience across display, mobile and video.”

– Sudhir Kulkarni | VP of Data & Analytics

Silver Spring Networks

“At Silver Spring we deploy and operate some of the largest, most data-intensive networks on earth, connecting more than 20 million Internet-of-Things devices on five continents. DataTorrent RTS is an integral component of our SilverLink™ Sensor Network solution, and together we look forward to inspiring a legion of new developers to create even more powerful big data applications.”

– Jeremy Johnson | Director of Product Management

Latest Blog Posts

Stay connected with what’s going on in the DataTorrent world through our most recent blog posts.

dtIngest – Arrival of Scalable, Fault-Tolerant Big Data Ingestion

Taken literally, data ingestion means absorbing data into a system. But while getting data in is a problem in itself, it is often not enough on its own. In its broader sense, ingestion refers to discovering data sources, importing data from those sources, and processing that data to produce intermediate data, often for later use by various…Read more »

dtIngest – Unified Streaming & Batch Data Ingestion For Hadoop

The Hadoop data lake is only as good as the data in it. Given the variety of data sources that need to pump data into Hadoop, customers often need to set up one-off data ingestion jobs. These one-off jobs copy files over FTP and NFS mounts, or rely on standalone tools like ‘distCP’, to move data in and out…Read more »
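
For illustration, the snippet below sketches the kind of one-off ingestion job the post describes, written against the standard Hadoop FileSystem API. The class name and all paths are hypothetical; this is a minimal sketch, not DataTorrent code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal one-off ingestion job: copy one batch of exported files
// from the local filesystem into HDFS. All paths are hypothetical.
public class OneOffIngestJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // reads core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);
    Path source = new Path("file:///exports/events/2015-07-01/");
    Path target = new Path("/datalake/raw/events/2015-07-01/");
    // copyFromLocalFile(delSrc=false, overwrite=true, src, dst)
    fs.copyFromLocalFile(false, true, source, target);
    fs.close();
  }
}

A job like this covers only the happy path; retries, partial copies, and scheduling are left to the operator, which is the kind of gap a dedicated, fault-tolerant ingestion tool such as dtIngest is meant to fill.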