ONE DATA Platform

ONE LOGIC's data-driven application builder enables you to take the fast and efficient route from prototype to production.

Request demo

Give big data an even bigger meaning

ONE DATA enables you to master big data for your business applications, create value from your data and improve process efficiency. Using the ONE DATA platform, business users, managers and data scientists can operationalize predictive models in large-scale enterprises, overcome traditional big data challenges, and boost business outcomes for high-value use cases.

True to our principle “from prototype to production”, ONE DATA paves the way for ideas to be rapidly implemented after only a short prototyping phase. Register today and experience a new level of scalability, flexibility, collaboration and production-readiness for your data science projects. All registered demo users additionally receive a complimentary whitepaper from Gartner on how to operationalize machine learning projects.

Unified Data Platform

Integrate, manage, and rework data of any kind, from any source, at massive scale

Application blueprints

Start with more than a blinking cursor: Blueprints improve the time-to-market of your apps


Produce results in days with easy-to-deploy A.I. that can dovetail into any IT infrastructure

Take your data projects from prototype to production

Discover how ONE DATA enables you to turn your data into automated actions, self-service apps and alert systems.

Master the entire data science life cycle

Establish clear business understanding with low entry barriers for all user types

  • Visualize information with a broad range of visualization options
  • Enable managers, analysts and citizen data scientists to run statistical models and small applications, and drill into data
  • Foster collaboration among numerous stakeholders, including at the application level

Enable easy access to various sources with pre-defined data connections

  • Easily connect relevant data sources with only a few clicks
  • Build a quick understanding of the data through instant feedback on data quality and data texture upon upload
  • Seamlessly integrate Hadoop-based data lakes, connect to common databases and more via ODBC and JDBC
  • Use pushdown computation and incremental data loads for large amounts of data from data lakes
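The pushdown and incremental-load pattern described above can be sketched in plain Python, with SQLite standing in for the data lake. The `events` table, its `updated_at` watermark column and all names here are illustrative assumptions for the sketch, not ONE DATA's API:

```python
import sqlite3

# Hypothetical "events" table; updated_at serves as the watermark column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-03")])

def incremental_load(conn, watermark):
    # The WHERE clause is evaluated inside the database (pushdown),
    # so only rows newer than the watermark cross the wire.
    rows = conn.execute(
        "SELECT id, updated_at FROM events WHERE updated_at > ? ORDER BY updated_at",
        (watermark,),
    ).fetchall()
    return rows, (rows[-1][1] if rows else watermark)

# First load saw everything up to 2024-01-01; this call fetches only the delta.
rows, watermark = incremental_load(conn, "2024-01-01")
```

Persisting the returned watermark between runs is what turns repeated full loads into cheap incremental ones.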

Provide an easy-to-handle user interface to aggregate data without coding

  • Perform transformations of the data sets with pre-built data processors via drag and drop: Column renaming, joins, grouping, SQL statements etc.
  • Integrate external authorization tools like LDAP or Active Directory for consistent enterprise security and governance
  • Authorize data on data set level
  • Strengthen GDPR compliance on data and model level by managing specific user groups and their respective rights
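The kinds of transformations those pre-built processors perform without coding (column renaming, joins, grouping, SQL statements) can be illustrated by the equivalent SQL. The tables and columns below are invented for the sketch and do not come from ONE DATA:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (cust TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("a", 10.0), ("a", 5.0), ("b", 7.5)])
conn.execute("CREATE TABLE customers (cust TEXT, region TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("a", "EU"), ("b", "US")])

# Join + grouping + column renaming in one statement: what a chain of
# drag-and-drop processors would express step by step.
result = conn.execute("""
    SELECT c.region AS region, SUM(o.amount) AS total_amount
    FROM orders o JOIN customers c ON o.cust = c.cust
    GROUP BY c.region ORDER BY region
""").fetchall()
```

A no-code UI generates this kind of plan from the processor graph, so business users get the same result without writing the statement themselves.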

Easily connect algorithms and methods via drag and drop

  • Train, maintain and serve models of different types
  • Select coding or non-coding tools for every step (Spark, R, SQL, Python)
  • Enable holistic collaboration by combining different environments, allowing each data scientist to stay in their preferred environment
  • Automatically evaluate and select models, or intervene manually
  • Save trained models in the model hub and manage them with transparent versioning to compare and select the best ones per use case
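A model hub with transparent versioning, as described above, can be sketched as a small registry that keeps every version of a model together with its metric and picks the best one per use case. This is a minimal illustration under assumed semantics, not ONE DATA's model hub:

```python
class ModelHub:
    """Minimal sketch of versioned model storage with metric-based selection."""

    def __init__(self):
        self._models = {}  # name -> list of (version, model, metric)

    def register(self, name, model, metric):
        versions = self._models.setdefault(name, [])
        versions.append((len(versions) + 1, model, metric))
        return len(versions)  # transparent, monotonically increasing version

    def best(self, name):
        # Highest metric wins; ties resolved in favor of the latest version.
        return max(self._models[name], key=lambda v: (v[2], v[0]))

hub = ModelHub()
hub.register("churn", "model-a", 0.81)
hub.register("churn", "model-b", 0.87)
version, model, metric = hub.best("churn")
```

Keeping every version instead of overwriting is what makes later comparison and rollback possible.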

Evaluate results through different methodologies, e.g. cross-validation

  • Evaluate models through pre-built workflows to apply cross-validation options
  • Store meta data on parameters and track evolution of evaluations and update model KPIs within dashboards or via the model hub
  • Version code workflows and configure job history
  • Define quality gates to ensure a safe production environment
  • Monitor model drift and outcome distribution by defining thresholds and notifying users once thresholds are reached
  • Use the alert system to notify responsible roles about completed processes, data quality or other metrics via email or Slack
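As a sketch of what a cross-validation evaluation computes, here is a generic k-fold loop in plain Python with a deliberately trivial "model" (predict the training mean) and negative mean absolute error as the score. None of this reflects ONE DATA's pre-built workflows; it only illustrates the methodology:

```python
def k_fold_cv(xs, ys, k, train, score):
    """Generic k-fold cross-validation returning one score per fold."""
    n = len(xs)
    fold = n // k
    scores = []
    for i in range(k):
        lo = i * fold
        hi = (i + 1) * fold if i < k - 1 else n  # last fold takes the remainder
        x_test, y_test = xs[lo:hi], ys[lo:hi]
        x_train = xs[:lo] + xs[hi:]
        y_train = ys[:lo] + ys[hi:]
        model = train(x_train, y_train)
        scores.append(score(model, x_test, y_test))
    return scores

# Trivial stand-in model: predict the training mean; score: negative MAE.
train = lambda xs, ys: sum(ys) / len(ys)
score = lambda m, xs, ys: -sum(abs(m - y) for y in ys) / len(ys)

scores = k_fold_cv(list(range(6)), [1.0, 1.0, 2.0, 2.0, 3.0, 3.0],
                   k=3, train=train, score=score)
```

The spread of the per-fold scores is exactly the signal a quality gate or drift monitor would inspect before letting a model into production.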

Automatically trigger production lines to enable process automation

  • Create pipelines to define sequences of workflows for production environments
  • Easily scale the number of users, the volume of data and computing power
  • Save time with fast initial set-up on-premise or in the cloud
  • Deploy in both development and production environments
  • Support for Docker technology
  • Operationalize retraining with reusable workflows that preserve existing work, and schedule training times per use case
  • Easily integrate into enterprise IT environments
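A production pipeline in the sense above is a defined sequence of workflows in which each step consumes the previous step's output and a failure stops the line. A minimal sketch with invented step names, not ONE DATA's pipeline engine:

```python
def run_pipeline(steps, data):
    """Run workflows in sequence, feeding each step's output to the next;
    stop at the first failure so a broken step never reaches production."""
    for name, step in steps:
        try:
            data = step(data)
        except Exception as exc:
            return {"status": "failed", "step": name, "error": str(exc)}
    return {"status": "ok", "result": data}

# Hypothetical three-step production line.
steps = [
    ("ingest",    lambda d: d + [4]),
    ("transform", lambda d: [x * 2 for x in d]),
    ("publish",   lambda d: sum(d)),
]
outcome = run_pipeline(steps, [1, 2, 3])
```

Hanging such a runner on a scheduler or an event trigger is what turns a one-off analysis into process automation.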

Blueprint your big data success



Many data science solutions no longer need to be invented from scratch. ONE LOGIC's app building platform offers business users, managers, data scientists, data engineers and citizen data scientists the possibility to export applications based on ready-to-go blueprints that can be easily customized.

Improve the time-to-market and time-to-value of your big data applications: Simply combine resources, modules and designs to customize, visualize and implement apps for your business needs in record time.

Discover our first application blueprint for cash management, and get notified when the next ones are released by subscribing.

Enable end-to-end data science

From cash management to demand sensing, process analytics and beyond, ONE DATA helps businesses make end-to-end decisions with confidence. The modular setup of ONE DATA enables collaboration from data connection to result consumption - on-premise or in the cloud - allowing users to easily manage data sources, models and use cases, automate processes and workflows, unify workstreams with your teams and prepare interactive dashboards for actionable insights.

Data Hub

Model Hub

Processing Library

App Builder

Use Cases

Proven. Not promised.

Discover use cases of the ONE DATA platform and how we optimize and create new business based on data.


Frequently asked questions

Transform how your business works with data from end to end: ONE DATA is all about taking your data project from prototype to production. As this can be a multi-step process with different stakeholders involved, questions will arise, for which we provide this FAQ. For additional information, feel free to use the contact form below and we'll get back to you as soon as possible.

Discover all FAQs.
Visit ONE LOGIC’s service portal.

Born out of the drive to stop wasting resources, ONE DATA as a data-driven application builder transforms data into added value for your business quickly and efficiently. True to the principle “from prototype to production”, ONE DATA paves the way for successful ideas to be rapidly implemented after only a short prototyping phase. The main components of ONE DATA are datasets, workflows, models and dashboards - the four main layers of a typical data science process. Our goal within ONE DATA is not to make data science easy - our goal is to simplify the process of getting data science production-ready.

ONE DATA is a data-driven application builder that can combine heterogeneous data sources, lets you build and visualize data products transparently and efficiently and, after a short prototyping phase, quickly establishes them in a productive environment according to our principle "from prototype to production". ONE DATA functions as an independent self-service platform, is a link between different tools to connect interfaces and can be used as a Data Hub. With interactive reporting charts, even business users without in-depth IT knowledge can track and understand data science projects and easily derive recommendations for action from them. The user interface of the platform can be completely customized to your requirements. It fits seamlessly into the company's own IT infrastructure and is easily scalable at enterprise level. Data scientists and other technically savvy users develop their own analysis approaches, uncovering further optimization potential in an uncomplicated manner.

Within the ONE DATA platform, every team member in your enterprise as well as external users or clients can work together on projects. Whether you are a decision-maker with less IT knowledge, a data scientist, a data engineer, an assistant to a department or an external user, each user has individual access to predefined areas within one or more projects.

We have observed that, for various reasons, many ideas and projects do not make it beyond a PoC (Proof of Concept) into production. Until then, they usually create high costs but few value-creating insights that can be used sustainably and profitably in the company.
"From prototype to production" means, from our perspective, that ideas from the prototype phase - i.e. from the think tank in which you tinker and build - are put into production, into daily, regular use, and made profitable in a meaningful and efficient way within the company. ONE DATA considers the production-readiness of projects right from the beginning, not just at the end of the process.

  • When you want to work efficiently in a team and have the opportunity to do so cross-functionally across different departments.
  • When transparency is important to you and the results of your work should be easy to understand with the help of interactive dashboards even for stakeholders of your project with less IT knowledge.
  • When you are a decision maker without in-depth IT knowledge and would like to be able to derive recommendations for actions independently from the analyses of your colleagues with the help of individualized interfaces.
  • When you’d like to go one step further and change parameters in workflows and analyses that are tailored to your needs.
  • When your project needs the integration and harmonization of heterogeneous data sources.
  • When you are aware that data science allows you to implement your successful ideas faster, easier and more efficiently, bringing projects "from prototype to production".

The ONE DATA platform is implemented on a client/server architecture. Our central Apache Spark component manages the parallelization and execution logic, including the available physical and virtual infrastructure components. The client is based on an HTML5/JavaScript frontend; the server component is sub-divided into modules and implemented in Java. The ONE DATA platform uses Spark, Python and R computation depending on the required context, which helps to achieve scalable and efficient workflow executions. HDFS and Apache Parquet are used to save intermediate results and datasets. User management and meta information are stored in a DBMS (database management system). The ONE DATA platform can scale up and scale out by design. Provided minimal hardware requirements are met, a nearly unlimited amount of data can be processed using ONE DATA.

  • Our software operates independently from platforms and databases (on-premise and in the cloud)
  • Open source-based architecture for cluster computing based on Apache Spark
  • Distributed file storage and data transfer architecture based on Apache Hadoop
  • Modularly expandable
  • User interface completely customizable as required (corporate design)
  • Flexible graphics engine
  • Extendable library (Java, R, Scala)

In ONE DATA, every analysis process is defined as a separate workflow. As a result, users can draw on a comprehensive library of predefined processes and methods which they can use to transform data, apply statistical methods or conveniently create entire analysis sequences. All existing processes can be individually adjusted, and internal algorithms (e.g. R or Python) additionally integrated.

Thanks to the precise roles and rights management system, only users who genuinely require access can gain access to analysis workflows. The tool handles data management and the archiving of previous analyses by itself. This ensures audit compliance and makes every change analyzable, transparent and traceable at all times.

Yes. For comprehensive analytics, ONE DATA can be connected to and ingest data from a wide variety of data sources. ONE DATA can also connect to NoSQL databases, as it transforms the data accordingly before it is processed. For Apache Cassandra, for example, ONE DATA provides a native connector that can access data from these systems.

ONE DATA offers unified rights and roles management. Users with appropriate rights can easily create subprojects and define roles within a project. A group of users can be assigned to each role, equipped with a set of access and execution rights. Analysis Authorization is used to restrict access to data at row level, enabling project owners to scale a growing user base with various responsibility levels. Only groups or users specified by explicit authorization dimensions can access the data specified. The keyring, like in real life, allows you to keep certain sets of keys together for external ETL data sources. Credentials store usernames and passwords to facilitate access to different data sources.
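The row-level restriction described above boils down to filtering each row against the groups the requesting user belongs to. A minimal sketch with an invented row schema (the `allowed` set plays the role of an authorization dimension); it is not ONE DATA's implementation:

```python
def authorize_rows(rows, user_groups, dimension):
    """Row-level authorization: a row is visible only if the user belongs
    to at least one group listed in the row's authorization dimension."""
    return [row for row in rows if row[dimension] & user_groups]

# Hypothetical dataset with per-row authorization metadata.
rows = [
    {"region": "EU",   "revenue": 10, "allowed": {"sales-eu"}},
    {"region": "US",   "revenue": 20, "allowed": {"sales-us"}},
    {"region": "APAC", "revenue": 30, "allowed": {"sales-eu", "sales-us"}},
]

visible = authorize_rows(rows, user_groups={"sales-eu"}, dimension="allowed")
```

Applying the filter inside the platform, before results reach the user, is what lets one dataset safely serve audiences with different responsibility levels.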

ONE DATA supports various ways to access data: file uploads, model uploads, relational databases, Web APIs, streaming data, NoSQL databases and specific connectors, and is extendable for additional data sources.


The ONE DATA platform offers the option to integrate and execute models with zero code in your analysis workflows, upload externally developed models and manage your trained models. The integration of Python and R code is supported. Machine learning algorithms are available as non-coding elements (based on Spark ML). ONE DATA provides a modular setup allowing you to select coding or non-coding tools for every step (Spark, R, Python and SQL are supported), and you can run Python (incl. scikit-learn) and R models using Docker containers for execution as well as for model serving. TensorFlow is currently supported within Python. The ONE DATA platform can train, maintain and serve models from different sources (Spark, MLeap for R and Python).
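The externally-developed-model workflow above amounts to persisting a trained model in some portable format and reloading it where it is served. A minimal sketch with a toy mean-predictor and JSON as the assumed serialization format (real deployments would use formats like MLeap bundles or pickled scikit-learn estimators):

```python
import json

class MeanModel:
    """Toy stand-in for a trained model: always predicts the training mean."""

    def __init__(self, mean):
        self.mean = mean

    def predict(self, x):
        return self.mean

    def to_json(self):
        # What a model hub might persist per registered version.
        return json.dumps({"type": "mean", "mean": self.mean})

    @staticmethod
    def from_json(blob):
        # What a serving process (e.g. inside a container) would load.
        params = json.loads(blob)
        return MeanModel(params["mean"])

trained = MeanModel(sum([1.0, 2.0, 3.0]) / 3)
blob = trained.to_json()
served = MeanModel.from_json(blob)
prediction = served.predict(42)
```

Because the serialized form is self-describing, training and serving can run in different environments, which is the point of container-based model serving.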

For the consumption of results in the form of visualizations, the ONE DATA platform provides comprehensive reporting functionalities to visualize your data. You can visualize the analysis results from your workflows in interactive apps. In addition to the visualizations, users can embed containers to change parameters of models or algorithms. This gives subject matter experts the opportunity to change variables or input parameters without modifying the entire underlying workflow and analysis. ONE DATA comes with a wide range of about 25 visualization options, such as bar charts, gauge charts, boxplots, heat maps, KPI visualizations and many more, optimizable for different devices with a descriptive language that gives data scientists all the options they need.


We make use of a variety of tools in order to be able to implement teamwork in a meaningful way. ONE DATA is divided into projects whose participants are assigned to different roles and rights. In addition, we provide you the option of Analysis Authorization, which can be used to make different areas visible and/or editable for different groups of people at global company level. Keyrings allow previously defined persons to view, edit or use analyses of critical data without passing on sensitive access data.

A unified rights and roles management is natively integrated into the ONE DATA platform. Our user management offers user-based authentication, resource restrictions, analysis authorization, comprehensive group & role assignments, access to key administration via an included keyring system, and an open registration and/or invite process. The ONE DATA platform offers state-of-the-art security and backup technologies to provide a reliable service in the form of transparent data encryption, secure data provisioning via token-based authentication between peers, secure transmission using HTTPS, and Kerberos support.

The ONE DATA platform offers full transparency for entitled user groups and reproducibility of all analysis workflows and results created within the platform. ONE DATA stores the complete history of analysis workflows on the platform in an efficient and encrypted way. Therefore, you can maintain the results and the quality of the implemented functionalities at any time. Within the ONE DATA platform, all resource types can be individually named, tagged with user-defined keywords and searched for. Resources can also be added to specific projects and can then be additionally documented on project level. Processors within an analysis workflow can be renamed, color coded, grouped and more. Additionally, you are able to share your resources within a project.

For a basic installation and running environment in a single-node-setup, the following minimum hardware requirements should be met:

  • 8 physical/dedicated CPU-Cores
  • 4 GB RAM per CPU core (32 GB RAM total)
  • 100 GB system volume for operating system and temporary data (SSD)
  • 2 TB Data volume (HDD/Network)

If the amount of data is likely to exceed 2 TB, a cluster setup with a minimum of three nodes is the best way to support a ONE DATA platform installation, in order to store data and execute distributed operations:

  • 32+ CPU-Cores
  • 8+ GB RAM per CPU-Core
  • 250 GB System-Volume for operating system and temporary data
  • 4 TB Data volume for HDFS

OS and environment for installation:

  • Preferably a Linux operating system (Red Hat or Debian), but other operating systems are also supported
  • PostgreSQL (version >= 9.6 and < 10) for saving meta data
  • Java 1.8
  • Tomcat 8.5
  • PostgreSQL JDBC driver
  • JavaMail

Request a ONE DATA platform demo and
receive a complimentary Gartner whitepaper


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.